Compare commits


826 commits

Author SHA1 Message Date
alexjg
cb409b6ffe
docs: timestamp -> time in automerge.change examples (#548) 2023-03-09 18:10:23 +00:00
Conrad Irwin
b34b46fa16
smaller automerge c (#545)
* Fix automerge-c tests on mac

* Generate significantly smaller automerge-c builds

This cuts the size of libautomerge_core.a from 25MB to 1.6MB on macOS
and from 53MB to 2.7MB on Linux.

As a side-effect of setting codegen-units = 1 for all release builds, the
optimized wasm files are also 100kB smaller.
2023-03-09 15:09:43 +00:00
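A sketch of the profile change this describes, as it would appear in a workspace Cargo.toml (the exact manifest layout in the repo may differ):

```toml
# Applies to all release builds in the workspace; a single codegen unit
# gives LLVM more inlining opportunities and a smaller archive.
[profile.release]
codegen-units = 1
```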
Conrad Irwin
7b747b8341
Error instead of corrupt large op counters (#543)
Since b78211ca6, OpIds have been silently truncated to 2**32. This
causes corruption when the op id overflows.

This change converts the silent error to a panic, and guards against the
panic on the codepath found by the fuzzer.
2023-03-07 16:49:04 +00:00
Conrad Irwin
2c1970f664
Fix panic on invalid action (#541)
We make the validation performed when parsing operations in encoded changes stricter, to avoid a possible panic when applying changes.
2023-03-04 12:09:08 +00:00
christine betts
63b761c0d1
Suppress clippy warning in parse.rs + bump toolchain (#542)
* Fix rust error in parse.rs
* Bump toolchain to 1.67.0
2023-03-03 22:42:40 +00:00
Conrad Irwin
44fa7ac416
Don't panic on missing deps of change chunks (#538)
* Fix doubly-reported ops in load of change chunks

Since c3c04128f5, observers have been
called twice when calling Automerge::load() with change chunks.

* Better handle change chunks with missing deps

Before this change Automerge::load would panic if you passed a change
chunk that was missing a dependency, or multiple change chunks not in
strict dependency order. After this change these cases will error
instead.
2023-02-27 20:12:09 +00:00
Jason Kankiewicz
8de2fa9bd4
C API 2 (#530)
The AMvalue union, AMlistItem struct, AMmapItem struct, and AMobjItem struct are gone, replaced by the AMitem struct.

The AMchangeHashes, AMchanges, AMlistItems, AMmapItems, AMobjItems, AMstrs, and AMsyncHaves iterators are gone, replaced by the AMitems iterator.

The AMitem struct is opaque; getting and setting values is now achieved exclusively through function calls.

The AMitemsNext(), AMitemsPrev(), and AMresultItem() functions return a pointer to an AMitem struct so you ultimately get the same thing whether you're iterating over a sequence or calling AMmapGet() or AMlistGet().

Calling AMitemResult() on an AMitem struct will produce a new AMresult struct referencing its storage so now the AMresult struct for an iterator can be subsequently freed without affecting the AMitem structs that were filtered out of it.

The storage for a set of AMitem structs can be recombined into a single AMresult struct by passing pointers to their corresponding AMresult structs to AMresultCat().

For C/C++ programmers, I've added AMstrCmp(), AMstrdup(), AM{idxType,objType,status,valType}ToString() and AM{idxType,objType,status,valType}FromString(). It's also now possible to pass arbitrary parameters through AMstack{Item,Items,Result}() to a callback function.
2023-02-25 18:47:00 +00:00
Philip Schatz
407faefa6e
A few setup fixes (#529)
* include deno in dependencies

* install javascript dependencies

* remove redundant operation
2023-02-15 09:23:02 +00:00
Alex Good
1425af43cd @automerge/automerge@2.0.2 2023-02-15 00:06:23 +00:00
Alex Good
c92d042c87 @automerge/automerge-wasm@0.1.24 and @automerge/automerge@2.0.2-alpha.2 2023-02-14 17:59:23 +00:00
Alex Good
9271b20cf5 Correct logic when skip = B and fix formatting
A few tests were failing which exposed the fact that if skip is `B` (the
fan-out factor of the OpTree) then we set `skip = None`, and this causes us
to attempt to return `Skip` in a non-root node. I ported the failing
test from JS to Rust and fixed the problem.

I also fixed the formatting issues.
2023-02-14 17:21:59 +00:00
Orion Henry
5e82dbc3c8 rework how skip works to push the logic into node 2023-02-14 17:21:59 +00:00
Conrad Irwin
2cd7427f35 Use our leb128 parser for values
This ensures that values in automerge documents are encoded correctly,
and that no extra data is smuggled in any LEB fields.
2023-02-09 15:46:22 +00:00
Alex Good
11f063cbfe
Remove nightly from CI 2023-02-09 11:06:24 +00:00
Alex Good
a24d536d16 Move automerge::SequenceTree to automerge_wasm::SequenceTree
The `SequenceTree` is only ever used in `automerge_wasm` so move it
there.
2023-02-05 11:08:33 +00:00
Alex Good
c5fde2802f @automerge/automerge-wasm@0.1.24 and @automerge/automerge@2.0.2-alpha.1 2023-02-03 16:31:46 +00:00
Alex Good
13a775ed9a Speed up loading by generating clocks on demand
Context: currently we store a mapping from ChangeHash -> Clock, where
`Clock` is the set of (ActorId, (Sequence number, max Op)) pairs derived
from the given change and its dependencies. This clock is used to
determine what operations are visible at a given set of heads.

Problem: populating this mapping for documents with large histories
containing many actors can be very slow as for each change we have to
allocate and merge a bunch of hashmaps.

Solution: instead of creating the clocks on load, create an adjacency
list based representation of the change graph and then derive the clock
from this graph when it is needed. Traversing even large graphs is still
almost as fast as looking up the clock in a hashmap.
2023-02-03 16:15:15 +00:00
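An illustrative sketch of the on-demand approach described here, deriving a clock by walking an adjacency-list change graph from a set of heads (the names and types are hypothetical, not automerge's internals):

```rust
use std::collections::{HashMap, HashSet};

type ActorIdx = usize;

struct ChangeNode {
    actor: ActorIdx,
    seq: u64,
    max_op: u64,
    deps: Vec<usize>, // indices of parent changes in the graph
}

// Walk the graph from `heads`, taking per-actor maxima of (seq, max_op).
// No intermediate hashmaps are allocated per change, only this one result.
fn clock_at(graph: &[ChangeNode], heads: &[usize]) -> HashMap<ActorIdx, (u64, u64)> {
    let mut clock = HashMap::new();
    let mut stack = heads.to_vec();
    let mut seen = HashSet::new();
    while let Some(idx) = stack.pop() {
        if !seen.insert(idx) {
            continue; // already visited via another path
        }
        let node = &graph[idx];
        let entry = clock.entry(node.actor).or_insert((0u64, 0u64));
        entry.0 = entry.0.max(node.seq);
        entry.1 = entry.1.max(node.max_op);
        stack.extend(&node.deps);
    }
    clock
}
```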
Alex Good
1e33c9d9e0 Use Automerge::load instead of load_incremental if empty
Problem: when running the sync protocol for a new document the API
requires that the user create an empty document and then call
`receive_sync_message` on that document. This results in the OpObserver
for the new document being called with every single op in the document
history. For documents with a large history this can be extremely time
consuming, but the OpObserver doesn't need to know about all the hidden
states.

Solution: Modify `Automerge::load_with` and
`Automerge::apply_changes_with` to check if the document is empty before
applying changes. If the document _is_ empty then we don't call the
observer for every change, but instead use
`automerge::observe_current_state` to notify the observer of the new
state once all the changes have been applied.
2023-02-03 10:01:12 +00:00
Alex Good
c3c04128f5 Only observe the current state on load
Problem: When loading a document whilst passing an `OpObserver` we call
the OpObserver for every change in the loaded document. This slows down
the loading process for two reasons: 1) we have to make a call to the
observer for every op 2) we cannot just stream the ops into the OpSet in
topological order but must instead buffer them to pass to the observer.

Solution: Construct the OpSet first, then only traverse the visible ops
in the OpSet, calling the observer. For documents with a deep history
this results in vastly fewer calls to the observer and also allows us to
construct the OpSet much more quickly. It is slightly different
semantically because the observer never gets notified of changes which
are not visible, but that shouldn't matter to most observers.
2023-02-03 10:01:12 +00:00
Alex Good
da55dfac7a refactor: make fields of Automerge private
The fields of `automerge::Automerge` were crate public, which made it
hard to change the structure of `Automerge` with confidence. Make all
fields private and put them behind accessors where necessary to allow
for easy internal changes.
2023-02-03 10:01:12 +00:00
alexjg
9195e9cb76
Fix deny errors (#518)
* Ignore deny errors on duplicate windows-sys

* Delete spurious lockfile in automerge-cli
2023-02-02 15:02:53 +00:00
dependabot[bot]
f8d5a8ea98
Bump json5 from 1.0.1 to 1.0.2 in /javascript/examples/create-react-app (#487)
Bumps [json5](https://github.com/json5/json5) from 1.0.1 to 1.0.2 in javascript/examples/create-react-app.
2023-02-01 09:15:54 +00:00
alexjg
2a9652e642
typescript: Hide API type and make SyncState opaque (#514) 2023-02-01 09:15:00 +00:00
Conrad Irwin
a6959e70e8
More robust leb128 parsing (#515)
Before this change i64 decoding did not work for negative numbers (not a
real problem because it is only used for the timestamp of a change),
and both u64 and i64 would allow overlong LEB encodings.
2023-01-31 17:54:54 +00:00
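A minimal sketch of signed LEB128 decoding that rejects the two problems named above, truncated input and overlong encodings. This is illustrative only, not the parser the crate actually uses:

```rust
fn decode_i64(bytes: &[u8]) -> Result<(i64, usize), &'static str> {
    let mut result = 0i64;
    let mut shift = 0u32;
    for (i, &byte) in bytes.iter().enumerate() {
        if i == 10 {
            // An i64 never needs more than ten 7-bit groups.
            return Err("encoding too long for i64");
        }
        result |= i64::from(byte & 0x7f) << shift;
        shift += 7;
        if byte & 0x80 == 0 {
            // A final group that merely repeats the previous group's
            // sign extension is an overlong (non-canonical) encoding.
            if i > 0 {
                let prev = bytes[i - 1];
                if (byte == 0x00 && prev & 0x40 == 0)
                    || (byte == 0x7f && prev & 0x40 != 0)
                {
                    return Err("overlong LEB128 encoding");
                }
            }
            // Sign-extend when the sign bit of the final group is set.
            // (A full implementation would also reject tenth groups
            // whose high bits don't match the sign.)
            if shift < 64 && byte & 0x40 != 0 {
                result |= -1i64 << shift;
            }
            return Ok((result, i + 1));
        }
    }
    Err("unexpected end of input")
}
```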
alexjg
de5af2fffa
automerge-rs 0.3.0 and automerge-test 0.2.0 (#512) 2023-01-30 19:58:35 +00:00
alexjg
08801ab580
automerge-rs: Introduce ReadDoc and SyncDoc traits and add documentation (#511)
The Rust API has so far grown somewhat organically driven by the needs of the
javascript implementation. This has led to an API which is quite awkward and
unfamiliar to Rust programmers. Additionally there is no documentation to speak
of. This commit is the first movement towards cleaning things up a bit. We touch
a lot of files but the changes are all very mechanical. We introduce a few
traits to abstract over the common operations between `Automerge` and
`AutoCommit`, and add a whole bunch of documentation.

* Add a `ReadDoc` trait to describe methods which read value from a document.
  make `Transactable` extend `ReadDoc`
* Add a `SyncDoc` trait to describe methods necessary for synchronizing
  documents.
* Put the `SyncDoc` implementation for `AutoCommit` behind `AutoCommit::sync` to
  ensure that any open transactions are closed before taking part in the sync
  protocol
* Split `OpObserver` into two traits: `OpObserver` + `BranchableObserver`.
  `BranchableObserver` captures the methods which are only needed for observing
  transactions.
* Add a whole bunch of documentation.

The main changes Rust users will need to make are (a short code sketch
follows this entry):

* Import the `ReadDoc` trait wherever you are using the methods which have been
  moved to it. Optionally change concrete parameters on functions to `ReadDoc`
  constraints.
* Likewise import the `SyncDoc` trait wherever you are doing synchronisation
  work
* If you are using the `AutoCommit::*_sync_message` methods you will need to add
  a call to `AutoCommit::sync()` first. E.g. `doc.generate_sync_message` becomes
  `doc.sync().generate_sync_message`
* If you have an implementation of `OpObserver` which you are using in an
  `AutoCommit` then split it into an implementation of `OpObserver` and
  `BranchableObserver`
2023-01-30 19:37:03 +00:00
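A small sketch following the migration notes above (import paths and signatures hedged):

```rust
use automerge::{
    sync::{self, SyncDoc},
    AutoCommit, ReadDoc, ROOT,
};

fn sync_step(doc: &mut AutoCommit, state: &mut sync::State) -> Option<sync::Message> {
    // Read methods such as `keys` now come from the `ReadDoc` trait.
    let _keys: Vec<String> = doc.keys(ROOT).collect();
    // Sync methods sit behind `sync()`, which closes any open transaction
    // first: `doc.generate_sync_message(state)` becomes the following.
    doc.sync().generate_sync_message(state)
}
```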
alexjg
89a0866272
@automerge/automerge@2.0.1 (#510) 2023-01-28 21:22:45 +00:00
Alex Good
9b6a3c8691
Update README 2023-01-28 09:32:21 +00:00
alexjg
58a7a06b75
@automerge/automerge-wasm@0.1.23 and @automerge/automerge@2.0.1-alpha.6 (#509) 2023-01-27 20:27:11 +00:00
alexjg
f428fe0169
Improve typescript types (#508) 2023-01-27 17:23:13 +00:00
Conrad Irwin
931ee7e77b
Add Fuzz Testing (#498)
* Add fuzz testing for document load

* Fix fuzz crashers and add to test suite
2023-01-25 16:03:05 +00:00
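A typical cargo-fuzz target for document load, along the lines of what this PR adds (the actual target names and paths in the repo may differ):

```rust
// fuzz/fuzz_targets/load.rs
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Loading must never panic on arbitrary input; returning Err is fine.
    let _ = automerge::Automerge::load(data);
});
```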
alexjg
819767cc33
fix: use saturating_sub when updating cached text width (#505)
Problem: In `automerge::query::Index::change_vis` we use `-=` to
subtract the width of an operation which is being hidden from the text
widths which we store on the index of each node in the optree. This
index represents the width of all the visible text operations in this
node and below. This was causing an integer underflow error when
encountering some list operations. More specifically, when a
`ScalarValue::Str` in a list was made invisible by a later operation
which contained a _shorter_ string, the width subtracted from the indexed
text widths could be larger than the width currently recorded in the index.

Solution: use `saturating_sub` instead. This is technically papering
over the problem because really the width should never go below zero,
but the text widths are only relevant for text objects where the
existing logic works as advertised because we don't have a `set`
operation for text indices. A more robust solution would be to track the
type of the Index (and consequently of the `OpTree`) at the type level,
but time is limited and problems are infinite.

Also, add a lengthy description of the reason we are using
`saturating_sub` so that when I read it in about a month I don't have
to redo the painful debugging process that got me to this commit.
2023-01-23 19:19:55 +00:00
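The failure mode and the fix in miniature (the values here are invented for illustration): the widths are unsigned, so plain subtraction can underflow when a longer string is hidden by a shorter one.

```rust
fn main() {
    let indexed_width: usize = 3; // width currently recorded in the index
    let hidden_width: usize = 5; // width of the (longer) op being hidden
    // `indexed_width - hidden_width` would underflow (and panic in debug
    // builds); `saturating_sub` clamps at zero instead.
    assert_eq!(indexed_width.saturating_sub(hidden_width), 0);
}
```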
Alex Currie-Clark
78adbc4ff9
Update patch types (#499)
* Update `Patch` types

* Clarify that the splice patch applies to text

* Add Splice patch type to exports

* Add new patches to javascript
2023-01-23 17:02:02 +00:00
Andrew Jeffery
1f7b109dcd
Add From<SmolStr> for ScalarValue::Str (#506) 2023-01-23 17:01:41 +00:00
Conrad Irwin
98e755106f
Fix and simplify lebsize calculations (#503)
Before this change numbits_i64() was incorrect for every value of the
form 0 - 2^x. This only manifested in a visible error if x%7 == 6 (so
for -64, -8192, etc.) at which point `lebsize` would return a value one
too large, causing a panic in commit().
2023-01-23 11:01:05 +00:00
alexjg
6b0ee6da2e
Bump js to 2.0.1-alpha.5 and automerge-wasm to 0.1.22 (#497) 2023-01-19 22:15:06 +00:00
alexjg
9b44a75f69
fix: don't panic when generating parents for hidden objects (#500)
Problem: the `OpSet::export_key` method uses `query::ElemIdPos` to
determine the index of sequence elements when exporting a key. This
query returned `None` for invisible elements. The `Parents` iterator
which is used to generate paths to objects in patches in
`automerge-wasm` used `export_key`. The end result is that applying a
remote change which deletes an object in a sequence would panic as it
tries to generate a path for an invisible object.

Solution: modify `query::ElemIdPos` to include invisible objects. This
does mean that the path generated will refer to the previous visible
object in the sequence as its index, but this is probably fine as for
an invisible object the path shouldn't be used anyway.

While we're here also change the return value of `OpSet::export_key` to
an `Option` and make `query::Index::ops` private as obeisance to the
Lady of the Golden Blade.
2023-01-19 21:11:36 +00:00
alexjg
d8baa116e7
automerge-rs: Add ExId::to_bytes (#491)
The `ExId` structure has some internal details which make lookups for
object IDs which were produced by the document doing the looking up
faster. These internal details are quite specific to the implementation
so we don't want to expose them as a public API. On the other hand, we
need to be able to serialize `ExId`s so that FFI clients can hold on to
them without referencing memory which is owned by the document (ahem,
looking at you Java).

Introduce `ExId::to_bytes` and `TryFrom<&[u8]> for ExId` implementing a
canonical serialization which includes a version tag, giving us
compatibility options if we decide to change the implementation.
2023-01-19 17:02:47 +00:00
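A hypothetical round trip using the API described above (the `ExId` type is re-exported as `ObjId`; exact signatures and error details are hedged here):

```rust
use automerge::ObjId;

// Serialize an object id so an FFI caller can hold on to it without
// referencing memory owned by the document.
fn stash(id: &ObjId) -> Vec<u8> {
    id.to_bytes()
}

// Restore it later; `try_from` fails on bytes with an unknown version tag.
fn restore(bytes: &[u8]) -> Option<ObjId> {
    ObjId::try_from(bytes).ok()
}
```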
alexjg
5629a7bec4
Various CI script fixes (#501)
Some of the scripts in scripts/ci were not reliably detecting the path
they were operating in. Additionally the deno_tests script was not
correctly picking up the ROOT_MODULE environment variable. Add more
robust path handling and fix the deno_tests script.
2023-01-19 15:38:27 +00:00
alexjg
964ae2bd81
Fix SeekOpWithPatch on optrees with visible ops only on internal nodes (#496)
In #480 we fixed an issue where `SeekOp` calculated an incorrect
insertion index on optrees where the only visible ops were on internal
nodes. We forgot to port this fix to `SeekOpWithPatch`, which has almost
the same logic just with additional work done in order to notify an
`OpObserver` of changes. Add a test and fix to `SeekOpWithPatch`
2023-01-14 11:27:48 +00:00
Alex Good
d8df1707d9
Update rust toolchain for "linux" step 2023-01-14 11:06:58 +00:00
Alex Currie-Clark
681a3f1f3f
Add github action to deploy deno package 2023-01-13 10:33:47 +00:00
Alex Good
22e9915fac automerge-wasm: publish release build in Github Action 2023-01-12 12:42:19 +00:00
Alex Good
2d8df12522
re-enable version check for WASM release 2023-01-12 11:35:48 +00:00
Alex Good
f073dbf701
use setup-node prior to attempting to publish in release action 2023-01-12 11:04:22 +00:00
Alex Good
5c02445bee
Bump automerge-wasm, again
In order to re-trigger the release action we are testing, we bump the
version which was de-bumped in the last commit.
2023-01-12 10:39:11 +00:00
Alex Good
3ef60747f4
Roll back automerge-wasm to test release action
The release action we are working on conditionally executes based on the
version of `automerge-wasm` in the previous commit. We need to trigger
it even though the version has not changed so we roll back the version
in this commit and the commit immediately following this will bump it
again.
2023-01-12 10:37:11 +00:00
Alex Good
d12bd3bb06
correctly call npm publish in release action 2023-01-12 10:27:03 +00:00
Alex Good
a0d698dc8e
Version bump js and wasm
js: 2.0.1-alpha.3
wasm: 0.1.20
2023-01-12 09:55:12 +00:00
Alex Currie-Clark
93a257896e Release action: Fix for check that WASM version has been updated before publishing 2023-01-12 09:44:48 +00:00
Alex Currie-Clark
9c3d0976c8 Add workflow to generate a deno.land and npm release when pushing a new automerge-wasm version to #main 2023-01-11 17:19:24 +00:00
Orion Henry
1ca1cc38ef
Merge pull request #484 from automerge/text2-compat
Text2 compat
2023-01-10 09:16:22 -08:00
Alex Good
0e7fb6cc10
javascript: Add @packageDocumentation TSDoc
Instead of using the `--readme` argument to `typedoc` use the
`@packageDocumentation` TSDoc tag to include the readme text in the
typedoc output.
2023-01-10 15:02:56 +00:00
Alex Good
d1220b9dd0
javascript: Use glob to list files in package.json
We have been listing all the files to be included in the distributed
package in package.json:files. This is tedious and error prone. We
change to using globs instead. To do this without also including the
test and src files when outputting declarations, we add a new typescript
config file for the declaration generation which excludes tests.
2023-01-10 12:52:21 +00:00
Alex Good
6c0d102032
automerge-js: Add backwards compatibility text layer
The new text features are faster and more ergonomic but not backwards
compatible. In order to make them backwards compatible re-expose the
original functionality and move the new API under a `future` export.
This allows users to interoperably use both implementations.
2023-01-10 12:52:21 +00:00
Alex Good
5763210b07
wasm: Allow a choice of text representations
The wasm codebase assumed that clients want to represent text as a
string of characters. This is faster, but in order to enable backwards
compatibility we add a `TextRepresentation` argument to
`automerge_wasm::Automerge::new` to allow clients to choose between a
`string` or `Array<any>` representation. The `automerge_wasm::Observer`
will consult this setting to determine what kind of diffs to generate.
2023-01-10 12:52:19 +00:00
Alex Good
18a3f61704 Update rust toolchain to 1.66 2023-01-10 12:51:56 +00:00
Alex Currie-Clark
0306ade939 Update action name on IncPatch type 2023-01-06 15:23:41 +00:00
Alex Good
1e7dcdedec automerge-js: Add prettier
It's christmas, everyone is on holiday, it's time to change every single
file in the repository!
2022-12-22 17:33:14 +00:00
Alex Good
8a645bb193 js: Enable typescript for the JS tests
The tsconfig.json was setup to not include the JS tests. Update the
config to include the tests when checking typescript and fix all the
consequent errors. None of this is semantically meaningful _except_ for
a few incorrect usages of the API which were leading to flaky tests.
Hooray for types!
2022-12-22 11:48:06 +00:00
Alex Good
4de0756bb4 Correctly handle ops on optree node boundaries
The `SeekOp` query can produce incorrect results when the optree it is
searching only has visible ops on the internal nodes. Add some tests to
demonstrate the issue as well as a fix.
2022-12-20 20:38:29 +00:00
Alex Good
d678280b57 automerge-cli: Add an examine-sync command
This is useful when receiving sync messages that behave in unexpected
ways.
2022-12-19 16:30:14 +00:00
Alex Good
f682db3039 automerge-cli: Add a flag to skip verifying heads 2022-12-19 16:30:14 +00:00
Alex Good
6da93b6adc Correctly implement colored json
My quickly thrown together implementation had some mistakes in it which
meant that the JSON produced was malformed.
2022-12-19 16:30:14 +00:00
Alex Good
0f90fe4d02 Add a method for loading a document without verifying heads
This is primarily useful when debugging documents which have been
corrupted somehow so you would like to see the ops even if you can't
trust them. Note that this is _not_ currently useful for performance
reasons as the hash graph is still constructed, just not verified.
2022-12-19 16:30:14 +00:00
alexjg
8aff1296b9
automerge-cli: remove a bunch of bad dependencies (#478)
Automerge CLI depends transitively (via an old version of `clap` and
via `colored_json`) on `atty` and `ansi_term`. These crates are both
marked as unmaintained and this generates irritating `cargo deny`
messages. To avoid this, implement colored JSON ourselves using the
`termcolor` crate - colored JSON is pretty mechanical. Also update
criterion and cbindgen dependencies and ignore the criterion tree in
deny.toml as we only ever use it in benchmarks.

All that's left now is a warning about atty in cbindgen, we'll just have
to wait for cbindgen to fix that, it's a build time dependency anyway so
it's not really an issue.
2022-12-14 18:06:19 +00:00
Conrad Irwin
6dad2b7df1
Don't panic on invalid gzip stream (#477)
* Don't panic on invalid gzip stream

Before this change automerge-rs would panic if the gzip data in
a raw column was invalid; after this change the error is propagated
to the caller correctly.
2022-12-14 17:34:22 +00:00
patryk
e75ca2a834
Update README.md (Update Slack invite link) (#475)
Slack invite link updated to the one used on the website, as the current one returns "This link is no longer active".
2022-12-14 11:41:21 +00:00
Orion Henry
3229548fc7
update js dependencies and some lint errors (#474) 2022-12-11 21:26:00 +00:00
Orion Henry
a96f77c96b
Merge pull request #458 from automerge/dependabot/npm_and_yarn/javascript/examples/create-react-app/loader-utils-2.0.4
Bump loader-utils from 2.0.2 to 2.0.4 in /javascript/examples/create-react-app
2022-12-11 11:36:38 -08:00
Orion Henry
b78211ca65
change opid to (u32,u32) - 10% performance uptick (#473) 2022-12-11 18:56:20 +00:00
Orion Henry
1222fc0df1
rewrite opnode to store usize instead of Op (#471) 2022-12-10 10:36:05 +00:00
Orion Henry
2db9e78f2a
Text v2. JS Api now uses text by default (#462) 2022-12-09 23:48:07 +00:00
Conrad Irwin
b05c9e83a4
Use AMbyteSpan for AM{list,map}PutBytes (#464)
* Use AMbyteSpan for byte values

Before this change there was an inconsistency between AMmapPutString
(which took an AMbyteSpan) and AMmapPutBytes (which took a pointer +
length).

Either is fine, but we should do the same in both places. I chose this
path to make it clear that the value passed in was an automerge value,
and to be symmetric with AMvalue.bytes when you do an AMmapGet().

I did not update other APIs (like load) that take a pointer + length, as
that is idiomatic usage for C, and these functions are not operating on
byte values stored in automerge.
2022-12-09 16:11:23 +00:00
Conrad Irwin
c3932e6267
Improve docs for building automerge-c on a mac (#465)
* More detailed instructions in README

I struggled to get the project to build for a while when first getting
started, so I have added some instructions, and also some usage
instructions for automerge-c that show more clearly what is happening
without `AMpush()`.
2022-12-09 13:46:23 +00:00
Alex Good
becc301877
automerge-wasm@0.1.19 & automerge-js@2.0.1-alpha.2 2022-12-02 15:10:24 +00:00
Alex Good
0ab6a770d8 wasm: improve error messages
The error messages produced by various conversions in `automerge-wasm`
were quite uninformative - often consisting of just returning the
offending value with no description of the problem. The logic of these
error messages was often hard to trace due to the use of `JsValue` to
represent both error conditions and valid values - evidenced by most of
the public functions of `automerge-wasm` having return types of
`Result<JsValue, JsValue>`. Change these return types to mention
specific errors, thus enlisting the compiler's help in ensuring that
specific error messages are emitted.
2022-12-02 14:42:55 +00:00
Alex Currie-Clark
2826f4f08c
automerge-wasm: Add deno as a target 2022-12-02 14:42:13 +00:00
Alex Good
de16adbcc5 Explicitly create empty changes
Transactions with no ops in them are generally undesirable. They take up
space in the change log but do nothing else. They are not useless
though; it may occasionally be necessary to create an empty change in
order to list all the current heads of the document as dependencies of the
empty change.

The current API makes no distinction between empty changes and non-empty
changes. If the user calls `Transaction::commit` a change is created
regardless of whether there are ops to commit. To provide a more useful
API modify `commit` so that if there is a no-op transaction then no
changes are created, but provide explicit methods to create an empty
change via `Transaction::empty_change`, `Automerge::empty_change` and
`AutoCommit::empty_change`. Also make these APIs available in Javascript
and C.
2022-12-02 12:12:54 +00:00
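A sketch of the resulting behaviour (signatures hedged; the options argument to the explicit API is elided):

```rust
use automerge::AutoCommit;

fn main() {
    let mut doc = AutoCommit::new();
    // No pending ops: commit no longer records a change.
    assert!(doc.commit().is_none());
    // To deliberately record an op-less change, e.g. to pin the current
    // heads as its dependencies, use the explicit API instead:
    // let hash = doc.empty_change(CommitOptions::default());
}
```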
Alex Good
ea5688e418 rust: Make fields of Transaction and TransactionInner private
It's tricky to modify these structs with the fields public as every
change requires scanning the codebase for references to make sure you're
not breaking any invariants. Make the fields private to ease
development.
2022-12-02 12:12:54 +00:00
Alex Good
149f870102 rust: Remove Default constraint from OpObserver 2022-12-02 12:12:54 +00:00
Andrew Jeffery
e0b2bc995a
Update nix flake and add formatter and dead code check (#466)
* Add formatter for flake

* Update flake inputs

* Remove unused vars in flake

* Add deadnix check and fixup devshells naming
2022-11-30 12:57:59 +00:00
Orion Henry
aaddb3c9ea fix error message 2022-11-28 15:43:27 -06:00
Orion Henry
2400d67755
Merge pull request #457 from jkankiewicz/return_NUL_string_as_bytes
Prevent panic when string contains a null character.
2022-11-28 12:34:45 -08:00
Jason Kankiewicz
d3885a3443 Hard-coded automerge-c's initial independent
version number to "0.0.1" for @alexjg.
2022-11-28 00:08:33 -08:00
Jason Kankiewicz
f8428896bd Added a test case for a map key containing NUL
('\0') based on #455.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
fb0c69cc52 Updated the quickstart example to work with
`AMbyteSpan` values instead of `*const libc::c_char` values.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
edbb33522d Replaced the C string (*const libc::c_char)
value of the `AMresult::Error` variant with a UTF-8 string view
(`AMbyteSpan`).
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
625f48f33a Fixed clippy violations. 2022-11-27 23:52:47 -08:00
Jason Kankiewicz
7c9f927136 Fixed code formatting violations. 2022-11-27 23:52:47 -08:00
Jason Kankiewicz
b60c310f5c Changed Default::default() calls to be through
the trait.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
3dd954d5b7 Moved the to_obj_id macro in with AMobjId. 2022-11-27 23:52:47 -08:00
Jason Kankiewicz
3e2e697504 Replaced C string (*const libc::c_char) values
with UTF-8 string view (`AMbyteSpan`) values except for the
`AMresult::Error` variant.
Added `AMstr()` for creating an `AMbyteSpan` from a C string.
2022-11-27 23:52:47 -08:00
Jason Kankiewicz
a324b02005 Added automerge::AutomergeError::InvalidActorId.
Added `automerge::AutomergeError::InvalidCharacter`.
Alphabetized the `automerge::AutomergeError` variants.
2022-11-27 23:52:47 -08:00
Alex Good
d26cb0c0cb
rust:automerge-test:0.1.0 2022-11-27 16:54:41 +00:00
Alex Good
ed108ba6fc
rust:automerge:0.2.0 2022-11-27 16:44:26 +00:00
Alex Good
484a5bac4f
rust: Add Transactable::base_heads
Sometimes it is necessary to query the heads of a document at the time a
transaction started without having a mutable reference to the
transactable. Add `Transactable::base_heads` to do this.
2022-11-27 16:39:02 +00:00
Alex Good
01350c2b3f
automerge-wasm@0.1.18 and automerge@2.0.1-alpha.1 2022-11-22 19:37:01 +00:00
alexjg
22d60987f6
Dont send duplicate sync messages (#460)
The API of Automerge::generate_sync_message requires that the user keep
track of in-flight messages themselves if they want to avoid sending
duplicate messages. To avoid this, add a flag to `automerge::sync::State`
to track if there are any in-flight messages and return `None` from
`generate_sync_message` if there are.
2022-11-22 18:29:06 +00:00
Alex Good
bbf729e1d6
@automerge/automerge 2.0.0 2022-11-22 12:13:42 +00:00
Orion Henry
ca25ed0ca0
automerge-wasm: Use a SequenceTree in the OpObserver
Generating patches to text objects (a la the edit-trace benchmark) was
very slow due to appending to the back of a Vec. Use the SequenceTree
(effectively a B-tree) instead so as to speed up sequence patch
generation.
2022-11-22 12:13:42 +00:00
Alex Good
03b3da203d
@automerge/automerge-wasm 0.1.16 2022-11-22 00:02:28 +00:00
Alex Good
e713c35d21
Fix some typescript errors 2022-11-21 18:26:28 +00:00
dependabot[bot]
92c044eadb
Bump loader-utils in /javascript/examples/create-react-app
Bumps [loader-utils](https://github.com/webpack/loader-utils) from 2.0.2 to 2.0.4.
- [Release notes](https://github.com/webpack/loader-utils/releases)
- [Changelog](https://github.com/webpack/loader-utils/blob/v2.0.4/CHANGELOG.md)
- [Commits](https://github.com/webpack/loader-utils/compare/v2.0.2...v2.0.4)

---
updated-dependencies:
- dependency-name: loader-utils
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-16 13:35:34 +00:00
Jason Kankiewicz
a7656b999b
Add AMobjObjType() (#454)
automerge-c: Add AMobjObjType()
2022-11-07 23:10:53 +00:00
Alex Good
05093071ce
rust/automerge-test: add From<f64> for RealizedObject 2022-11-07 12:08:12 +00:00
Alex Good
bcab3b6e47 Move automerge/tests::helpers to crate automerge-test
The assert_doc and assert_obj macros in automerge/tests::helpers are
useful for writing tests for any application working with automerge
documents. Typically, however, you only want these utilities in tests, so
rather than packaging them in the main `automerge` crate we move them to a
new crate (in the spirit of `tokio_test`).
2022-11-06 19:52:21 +00:00
Alex Good
b53584bec0
Ritual obeisance before the altar of clippy 2022-11-05 22:48:43 +00:00
Orion Henry
91f313bb83 revert compiler flags to max opt 2022-11-04 18:02:32 +00:00
tosti007
6bbed76f0f Update uuid dependency to v1.2.1 2022-11-01 11:39:24 +00:00
Alex Good
bba4fe2c36
@automerge/automerge@2.0.0-beta.4 2022-10-28 11:31:51 +01:00
Alex Good
61aaa52718 Allow changing a cloned document
The logic for `clone`, which was updated to support cloning a viewed
document, inadvertently left the heads of the cloned document state in
place, which meant that cloned documents could not be `change`d. Set
state.heads to undefined when cloning to allow changing them.
2022-10-27 19:20:41 +01:00
Alex Good
20d543d28d
@automerge/automerge@2.0.0-beta.3 2022-10-26 14:14:01 +01:00
Alex Good
5adb6952e9
@automerge/automerge@2.0.0-beta.2 and @automerge/automerge-wasm@0.1.15 2022-10-26 14:03:12 +01:00
Orion Henry
3705212747
js: Add Automerge.clone(_, heads) and Automerge.view
Sometimes you need a cheap copy of a document at a given set of heads
just so you can see what has changed. Cloning the document to do this is
quite expensive when you don't need a writable copy. Add automerge.view
to allow a cheap read only copy of a document at a given set of heads
and add an additional heads argument to clone for when you do want a
writable copy.
2022-10-26 14:01:11 +01:00
Orion Henry
d7d2916acb tiny change that might remove a bloom filter false positive error 2022-10-21 15:15:30 -05:00
Alex Good
3482e06b15
javascript 2.0.0-beta1 2022-10-18 19:43:46 +01:00
Orion Henry
59289f67b1 consolidate inserts and deletes more aggressively into a single splice 2022-10-18 13:29:56 +01:00
Alex Good
a4a3dd9ed3
Fix docs CI 2022-10-18 13:08:08 +01:00
Alex Good
ac6eeb8711
Another attempt at fixing cmake build CI 2022-10-18 12:46:22 +01:00
Alex Good
20adff0071
Fix cmake CI
The cmake CI seemed to reference a few nonexistent targets for docs and
tests. Remove the doc generation step and point the test CI script at
the generated test program.
2022-10-18 11:56:37 +01:00
Alex Good
6bb611e4b3
Update CI to rust 1.64.0 2022-10-18 11:49:46 +01:00
Alex Good
e8309495ce
Update cargo deny to point at rust subdirectory 2022-10-18 11:28:56 +01:00
Orion Henry
4755c5bf5e
Merge pull request #444 from automerge/freeze
make freeze work recursively
2022-10-17 16:31:52 -07:00
Orion Henry
38205fbcc2 enableFreeze() instead of implicit freeze 2022-10-17 17:35:34 -05:00
Orion Henry
a2704bac4b Merge remote-tracking branch 'origin/main' into f2 2022-10-17 16:32:23 -05:00
Orion Henry
ac90f8f028 Merge remote-tracking branch 'origin/freeze' into f2 2022-10-17 16:21:35 -05:00
Orion Henry
c602e9e7ed update build to match directory restructuring 2022-10-17 16:20:25 -05:00
Alex Good
1c6da6f9a3
Add JS worker config to Vite app example
Vite apps which use SharedWorker or WebWorker require additional
configuration to get WebAssembly imports to work effectively; add these
to the example.
2022-10-17 01:09:13 +01:00
Alex Good
24dcf8270a
Add typedoc comments to the entire public JS API 2022-10-17 00:41:06 +01:00
Alex Good
e189ec9ca8
Add some READMEs to the javascript directory 2022-10-16 20:01:49 +01:00
Alex Good
96f15c6e00
Update main README to reflect new repo layout 2022-10-16 20:01:45 +01:00
Alex Good
8e131922e7
Move wrappers/javascript -> javascript
Continuing our theme of treating all languages equally, move
wrappers/javascript to javascript. Automerge libraries for new languages
should be built at this top level if possible.
2022-10-16 19:55:54 +01:00
Alex Good
dd3c6d1303
Move rust workspace into ./rust
After some discussion with PVH I realise that the repo structure in the
last reorg was very rust-centric. In an attempt to put each language on
a level footing move the rust code and project files into ./rust
2022-10-16 19:55:51 +01:00
Orion Henry
5ce3a556a9 weak_refs 2022-10-16 19:55:25 +01:00
Orion Henry
dd5edafa9d make freeze work recursively 2022-10-15 21:21:18 -05:00
Alex Good
cd2997e63f
@automerge/automerge@2.0.0-alpha.5 and @automerge/automerge-wasm@0.1.10 2022-10-13 23:13:09 +01:00
Orion Henry
f0f036eb89
add loadIncremental to js 2022-10-13 23:03:01 +01:00
Alex Good
ee0c3ef3ac javascript: Make getObjectId tolerate non object arguments
Fixes #433. `getObjectId` was previously throwing an error if passed
something which was not an object. In the process of fixing this I
simplified the logic of `getObjectId` by modifying automerge-wasm to not
set the OBJECT_ID hidden property on objects which are not maps, lists,
or text - it was previously setting this property on anything which was
a JS object, including `Date` and `Uint8Array`.
2022-10-13 21:37:37 +01:00
Orion Henry
e6d1828c12
Merge pull request #440 from automerge/repo-reorg
Repo reorg
2022-10-12 10:07:02 -07:00
Alex Good
4c17fd9c00
Update README
We're making this project the primary implementation of automerge.
Update the README to provide more context and signpost other resources.
2022-10-12 16:25:43 +01:00
Alex Good
660678d038
remove unneeded files 2022-10-12 16:25:43 +01:00
Alex Good
a7a4bd42f1
Move automerge-js -> wrappers/javascript
Whilst we only have one wrapper library, we anticipate more.
Furthermore, the naming of the `wrappers` directory makes it clear what
the role of the JS codebase is.
2022-10-12 16:25:43 +01:00
Alex Good
352a0127c7
Move all rust code into crates/*
For larger rust projects it's common to put all rust code in a directory
called `crates`. This helps in general by reducing the number of
directories in the top level but it's particularly helpful for us
because some directories _do not_ contain Rust code. In particular
`automerge-js`. Move rust code into `/crates` to make the repo easier
to navigate.
2022-10-12 16:25:38 +01:00
Alex Good
ed0da24020
Track whether a transaction is observed in types
With the `OpObserver` moving to the transaction rather than being passed
in to the `Transaction::commit` method we have needed to add a way to
get the observer back out of the transaction (via
`Transaction::observer` and `AutoCommit::observer`). This `Observer`
type is then used to handle patch generation logic. However, there are
cases where we might not want an `OpObserver` and in these cases we can
execute various things fast - so we need to have something like an
`Option<OpObserver>`. In order to track the presence or otherwise of the
observer at the type level introduce
`automerge::transaction::observation`, which is a type level `Option`.
This allows us to efficiently choose the right code paths whilst
maintaining correct types for `Transaction::observer` and
`AutoCommit::observer`
2022-10-12 16:11:23 +01:00
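A generic sketch of the type-level `Option` pattern this describes (the names here are illustrative, not automerge's exact definitions):

```rust
// Whether a transaction carries an observer is tracked in the type, so
// the unobserved path can skip patch generation with no runtime checks.
trait Observation {
    type Obs;
    fn observer(&mut self) -> Option<&mut Self::Obs>;
}

struct Observed<O>(O);
struct UnObserved;

impl<O> Observation for Observed<O> {
    type Obs = O;
    fn observer(&mut self) -> Option<&mut O> {
        Some(&mut self.0)
    }
}

impl Observation for UnObserved {
    type Obs = (); // there is never an observer to hand out
    fn observer(&mut self) -> Option<&mut ()> {
        None
    }
}

// A transaction generic over its observation exposes `observer()` with
// a correct type in both the observed and unobserved cases.
struct Transaction<Obs: Observation> {
    observation: Obs,
}
```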
Orion Henry
3989cac405
Merge pull request #439 from automerge/type-patchcallback
Add TypeScript type for PatchCallback
2022-10-10 14:25:43 -07:00
Alex Good
2d072d81fb
Add TypeScript type for PatchCallback 2022-10-10 21:19:39 +01:00
Alex Good
430d842343
Update vite.config.js in Vite Example README 2022-10-10 14:14:38 +01:00
Alex Good
dff0fc2b21
Remove automerge-wasm devDependency
This dependency was added in a PR which is no longer relevant as we've
switched to depending directly on `@automerge/automerge-wasm` and
testing by running a local NPM registry.
2022-10-10 13:05:10 +01:00
Orion Henry
9e1fe65a64
Merge pull request #429 from automerge/actually-run-js-tests
Use the local automerge-wasm in automerge-js tests
2022-10-06 15:41:07 -07:00
Orion Henry
3d5fe83e2b
Merge branch 'main' into actually-run-js-tests 2022-10-06 15:41:01 -07:00
Alex Good
ba328992ff
bump @automerge/automerge-wasm and @automerge/automerge versions 2022-10-06 22:53:21 +01:00
Orion Henry
23a07699e2
typescript fixes 2022-10-06 22:42:33 +01:00
Orion Henry
238d05a0e3
move automerge-js onto the applyPatches model 2022-10-06 22:42:31 +01:00
Orion Henry
7a6dfcc289
The patch interface needs an accurate path per patch op
For the path to be accurate it needs to be calculated at the moment of op insert,
not at commit. This is because the path may contain list indexes in parent
objects that could change by inserts and deletes later in the transaction.

The primary change was adding op_observer to the transaction object and
removing it from commit options. The beginnings of a wasm-level
`applyPatch` system are laid out here.
2022-10-06 22:41:37 +01:00
Alex Good
92145e6131
@automerge/automerge-wasm 0.1.8 2022-10-05 00:55:10 +01:00
Alex Good
2012f5c6e4
Fix some typescript bugs, automerge-js 2.0.0-alpha.3 2022-10-05 00:52:36 +01:00
Alex Good
fb4d1f4361
Ship generated typescript types correctly
Generated typescript types were being shipped in the `dist/cjs` and `dist/mjs`
directories but are referenced at the top level in package.json. Add a
step to generate `*.d.ts` files in the top level `dist/*.d.ts`.
2022-10-04 22:54:19 +01:00
Alex Good
74af537800
Rename automerge and automerge-wasm packages
In an attempt to make our package naming more understandable we move all
our packages to a single NPM scope: `automerge` ->
`@automerge/automerge` and `automerge-wasm` ->
`@automerge/automerge-wasm`.
2022-10-04 22:05:56 +01:00
Alex Good
29f2c9945e query::Prop: don't scan past end of OpTree
The logic in `query::Prop` works by first doing a binary search in the
OpTree for the node where the key we are looking for starts, and then
proceeding from this point forwards skipping over nodes which contain
only invisible ops. This logic was incorrect if the start index returned
by the binary search was in the last child of the optree and the last
child only contains invisible ops. In this case the index returned by
the query would be greater than the length of the optree.

Clamp the index returned by the query to the total length of the opset.
2022-10-04 17:25:56 +01:00
Alex Good
d6a8d41e0a Update JS README 2022-10-04 17:23:37 +01:00
Alex Good
b6c375efb9 Fix a few small typescript complaints 2022-10-04 17:23:37 +01:00
Alex Good
16f2272b5b Generate index.d.ts from source
The JS package is now written in typescript so we don't need to manually
maintain an index.d.ts file. Generate the index.d.ts file from source
and ship it with the JS package.
2022-10-04 17:23:37 +01:00
Alex Good
da51492327 build both nodejs and bundler packages in yarn build 2022-10-04 17:23:37 +01:00
Alex Good
577bda3e7f update wasm-bindgen 2022-10-04 17:23:37 +01:00
Alex Good
20dc0fb54e Set optimization levels to 'Z' for release profile
This reduces the size of the WASM bundle which is generated to around
800kb. Unfortunately wasm-pack doesn't allow us to use arbitrary
profiles when building and the optimization level has to be set at the
workspace root - consequently this flag is set for all packages in the
workspace. This shouldn't be an issue really as all our dependents in
the Rust world will be setting their own optimization flags anyway.
2022-10-04 17:23:37 +01:00
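The workspace-level profile change this describes would look roughly like the following in the root Cargo.toml (note that Cargo spells the size-optimization level with a lowercase "z"):

```toml
[profile.release]
# Optimize for size rather than speed; Rust dependents will set their
# own optimization flags in their own workspaces anyway.
opt-level = "z"
```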
Alex Good
4f03cd2a37 Add an e2e testing tool for the JS packaging
JS packaging is complicated and testing it manually is irritating. Add a
tool in `automerge-js/e2e` which stands up a local NPM registry and
publishes the various packages to that registry for use in automated and
manual tests. Update the test script in `scripts/ci/js_tests` to run the
tests using this tool
2022-10-04 17:23:37 +01:00
Alex Good
7825da3ab9 Add examples of using automerge with bundlers 2022-10-04 17:23:37 +01:00
Alex Good
8557ce0b69 Rename automerge-js to automerge
Now that automerge-js is ready to go we rename it to `automerge` and
set the version to `2.0.0-alpha.1`.
2022-10-04 17:23:37 +01:00
Alex Good
a9e23308ce Remove async automerge-wasm wrapper
By moving to wasm-bindgen's `bundler` target rather than using the `web`
target we remove the need for an async initialization step on the
automerge-wasm package. This means that the automerge-js package can now
depend directly on automerge-wasm and perform initialization itself,
thus making automerge-js a drop in replacement for the `automerge` JS
package (hopefully).

We bump the versions of automerge-wasm
2022-10-04 17:23:37 +01:00
Alex Good
837c07b23a
Correctly encode compressed changes in sync messages
Sync messages encode changes as length prefixed byte arrays. We were
calculating the length using the uncompressed bytes of a change but
encoding the bytes of the change using the (possibly) compressed bytes.
This meant that if a change was large enough to compress then it would
fail to decode. Switch to using uncompressed bytes in sync messages.
2022-10-02 18:59:41 +01:00
Alex Good
3d59e61cd6
Allow empty changes when loading document format
The logic for loading compressed document chunks has a check that the
`max_op` of a change is valid. This check was overly strict in that it
checked that the max op was strictly larger than the max op of the
previous change - this rejects valid documents which contain changes
with no ops in them, in which case the max op can be equal to the max op
of the previous change. Loosen the logic to allow empty changes.
2022-09-30 19:00:48 +01:00
Alex Good
e57548f6e2
Fix broken encode/decode change
Previous ceremonies to appease clippy resulted in the
encodeChange/decodeChange wasm functions being slightly broken. Here we
fix them.
2022-09-29 15:49:31 -05:00
Alex Good
c7e370a1df
Appease clippy 2022-09-28 17:18:37 -05:00
Alex Good
427002caf3 Correctly load documents with deleted objects
The logic for reconstructing changes from the compressed document format
records operations which set a key in an object so that it can later
reconstruct delete operations from the successor list of the document
format operations. The logic to do this was only recording set
operations and not `make*` operations. This meant that delete operations
targeting `make*` operations could not be loaded correctly.

Correctly record `make*` operations for later use in constructing delete
operations.
2022-09-12 12:38:57 +01:00
Alex Good
fc9cb17b34
Use the local automerge-wasm in automerge-js tests
Somehow the `devDependencies` for `automerge-js` depended on the
released `automerge-wasm` package, rather than the local version, which
means that the JS tests are not actually testing the current
implementation. Depend on the local `automerge-wasm` package to fix
this.
2022-09-08 16:27:30 +01:00
Alex Good
f586c82557 OpSet::visualise: add argument to filter by obj ID
Occasionally one needs to debug problems in a document with a large
number of objects. In this case it is unhelpful to print a graphviz of
the whole opset because there are too many objects. Add a
`Option<Vec<ObjId>>` argument to `OpSet::visualise` to filter the
objects which are visualised.
2022-09-08 12:48:53 +01:00
+merlan #flirora
649b75deb1 Correct documentation for AutoSerde 2022-09-05 21:11:13 +01:00
Alex Good
eba7038bd2 Allow for empty head indices when decoding doc
The compressed document format includes, at the end of the document chunk,
the indices of the heads of the document. Older versions of the
javascript implementation do not include these indices so we allow them
to be omitted when decoding.

Whilst we're here add some tracing::trace logs to make it easier to
understand where parsing is failing.
2022-09-02 14:59:51 +01:00
Alex Good
dd69f6f7b4
Add readme field to automerge/Cargo.toml 2022-09-01 12:27:34 +01:00
Alex Good
e295a55b41 Add #[derive(Eq)] to satisfy clippy
The latest clippy (0.1.65 for me) added a lint which checks for types
that implement `PartialEq` and could implement `Eq`
(`derive_partial_eq_without_eq`). Add a `derive(Eq)` in a bunch of
places to satisfy this lint.
2022-09-01 12:24:00 +01:00
Orion Henry
c2ed212dbc
Merge pull request #422 from automerge/fix-transaction-put-doc
Update docs for Transaction::put
2022-08-29 13:35:42 -05:00
Orion Henry
1817e98ec9
Merge pull request #418 from jkankiewicz/normalize_C_API_header_include
Expose `Vec<automerge::Change>` initialization and `automerge::AutoCommit::with_actor()` to the C API
2022-08-29 13:35:01 -05:00
Alex Good
a0eb4218d8
Update docs for Transaction::put
Fixes #420
2022-08-27 11:59:14 +01:00
Orion Henry
9879fd9342 copy pasta typo fix 2022-08-26 14:19:28 -05:00
Orion Henry
59bde120ee automerge-js adding trace to out of date errors 2022-08-26 14:17:56 -05:00
Jason Kankiewicz
22f720c465 Emphasize that an AMbyteSpan is only a view onto
the memory that it references.
2022-08-25 13:51:15 -07:00
Orion Henry
e6cd366aa0 automerge-js 0.1.12 2022-08-24 19:12:47 -05:00
Orion Henry
6d05cbd9e3 fix indexOf 2022-08-23 12:13:32 -05:00
Peter van Hardenberg
43bdd60904 the fields in a doc are not docs themselves 2022-08-23 09:31:09 -07:00
Orion Henry
363ad7d59a automerge-js ts fixes 2022-08-23 11:12:22 -05:00
Jason Kankiewicz
7da1832b52 Fix documentation bug caused by missing /. 2022-08-23 06:04:22 -07:00
Jason Kankiewicz
5e37ebfed0 Add AMchangesInit() for @rkuhn in #411.
Expose `automerge::AutoCommit::with_actor()` through `AMcreate()`.
Add notes to clarify the purpose of `AMfreeStack()`, `AMpop()`,
`AMpush()`, `AMpushCallback()`, and `AMresultStack`.
2022-08-23 05:34:45 -07:00
Jason Kankiewicz
1ed67a7658 Add missing documentation for the AMvalue.unknown
variant, the `AMunknownValue.bytes` member and the
`AMunknownValue.type_code` member.
2022-08-22 23:31:55 -07:00
Jason Kankiewicz
3ddde2fff2 Normalize the header include statement for all C
source files.
Normalize the header include statement within the documentation.
Limit `AMpush()` usage within the quickstart example to variable
assignment.
2022-08-22 22:28:23 -07:00
Orion Henry
b4705691c2
Merge pull request #355 from automerge/storage-v2
Storage v2
2022-08-22 18:18:50 -05:00
Alex Good
9ac8827219
Remove storage-v2 feature flag
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:21:21 +01:00
Alex Good
9c86c09aaa
Rename Change::compressed_bytes -> Change::bytes 2022-08-22 21:18:11 +01:00
Jason Kankiewicz
632da04d60
Add the -DFEATURE_FLAG_STORAGE_V2 CMake option
for toggling the "storage-v2" feature flag in a Cargo invocation.
Correct the `AMunknownValue` struct misnomer.
Ease the rebasing of changes to the `AMvalue` struct declaration with
pending upstream changes to same.
2022-08-22 21:18:07 +01:00
Alex Good
8f2d4a494f
Test entire workspace for storage-v2 in CI
Now that all crates support the storage-v2 feature flag of the automerge
crate we update CI to run tests for '--workspace --all-features'

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:48 +01:00
Alex Good
db4cb52750
Add a storage-v2 feature flag to edit-trace
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:48 +01:00
Alex Good
fc94d43e53
Expose storage-v2 in automerge-wasm
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
d53d107076
Expose storage-v2 in automerge-c
Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
63dca26fe2
Additional tests for storage-v2
Various tests were required to cover edge cases in the new storage-v2
implementation.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
252a7eb8a5
Add automerge::Automerge::save_nocompress
For some usecases the overhead of compressed columns in the document
format is not worth it. Add `Automerge::save_nocompress` to save without
compressing columns.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
34e919a4c8
Plumb in storage-v2
This is achieved by liberal use of feature flags. Main additions are:

* Build the OpSet more efficiently when loading from compressed
  document storage using a DocObserver as implemented in
  `automerge::op_tree::load`
* Reimplement the parsing logic in the various types in
  `automerge::sync`

There are numerous other small changes required to get the types to line
up.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
fc7657bcc6
Add a wrapper to implement Deserialize for Automerge
It is useful to be able to generate a serde-serializable representation of
an automerge document. We can do this without an intermediate type by
iterating over the keys of the document recursively. Add
`autoserde::AutoSerde` to implement this.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
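A sketch of rendering a document as JSON through this wrapper (exact import paths hedged; serde_json is used purely for illustration):

```rust
use automerge::{AutoSerde, Automerge};

fn to_json(doc: &Automerge) -> serde_json::Result<String> {
    // AutoSerde borrows the document and implements serde::Serialize by
    // walking it recursively, so any serde format will do.
    serde_json::to_string(&AutoSerde::from(doc))
}
```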
Alex Good
771733deac
Implement storage-v2
Implement parsing the binary format using the new parser library and the
new encoding types. This is superior to the previous parsing
implementation in that invalid data should never cause panics and it
exposes and interface to construct an OpSet from a saved document much
more efficiently.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:47 +01:00
Alex Good
3a3df45b85
Access change fields through field accessors
The representation of changes in storage-v2 is different to the existing
representation so add accessor methods to the fields of `Change` and
make all accesses go through them. This allows the change representation
in storage-v2 to be a drop-in.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:16:42 +01:00
Orion Henry
d28767e689 automerge-js v0.1.10 2022-08-22 15:13:08 -05:00
Alex Good
de997e2c50
Reimplement columnar decoding types
The existing implementation of the columnar format elides a lot of error
handling (by converting `Err` to `None`) and doesn't allow writing to a
single chunk of memory when encoding. Implement a new set of encoding and
decoding primitives which handle errors more robustly and allow us to
use a single chunk of memory when reading and writing.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:06:35 +01:00
Alex Good
782f351322
Add types to convert between different Op types
Op IDs in the OpSet are represented using an index into a set of actor
IDs. This is efficient but requires conversion when reading and
writing from storage (where the set of actors might be different from
ths in the OpSet). Add a trait for converting between different
representations of an OpID.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:06:35 +01:00
Alex Good
e1295b9daa
Add a simple parser combinator library
We have parsing needs which are slightly more complex than just reading
stuff from a buffer, but not complex enough to justify a dependency on a
parsing library. Implement a simple parser combinator library for use in
parsing the binary storage format.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:06:35 +01:00
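A minimal shape of the parser-combinator style described here (illustrative, not the crate's actual `parse` module): a parser consumes input bytes and returns the remaining input plus a parsed value, and combinators compose parsers into bigger ones.

```rust
type ParseResult<'a, O, E> = Result<(&'a [u8], O), E>;

// The simplest parser: consume one byte.
fn take_1(input: &[u8]) -> ParseResult<'_, u8, &'static str> {
    match input.split_first() {
        Some((&byte, rest)) => Ok((rest, byte)),
        None => Err("unexpected end of input"),
    }
}

// A combinator: run two parsers in sequence and pair their outputs.
fn pair<'a, A, B, E>(
    p1: impl Fn(&'a [u8]) -> ParseResult<'a, A, E>,
    p2: impl Fn(&'a [u8]) -> ParseResult<'a, B, E>,
) -> impl Fn(&'a [u8]) -> ParseResult<'a, (A, B), E> {
    move |input| {
        let (rest, a) = p1(input)?;
        let (rest, b) = p2(rest)?;
        Ok((rest, (a, b)))
    }
}
```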
Alex Good
d785c319b8
Add ScalarValue::Unknown
The columnar storage format allows for values which we do not know the
type of. In order that we can handle these types in a forward compatible
way we add ScalarValue::Unknown.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-22 21:04:19 +01:00
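A simplified sketch of the variant's shape (the real `ScalarValue` has many more variants): the idea is to carry the raw type code and bytes through unchanged so they can be written back out.

```rust
pub enum ScalarValue {
    Int(i64),
    Str(String),
    // ... other known variants elided ...
    // A value whose column type we don't recognize: keep the raw type
    // code and bytes for forward-compatible round-tripping.
    Unknown { type_code: u8, bytes: Vec<u8> },
}
```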
Orion Henry
88f8976d0a automerge-js 0.1.9 2022-08-22 14:58:13 -05:00
Alex Good
56563a4a60
Add a storage-v2 feature flag
The new storage implementation is sufficiently large a change that it
warrants a period of testing. To facilitate testing the new and old
implementations side by side we slightly abuse cargo's feature flags and
add a storage-v2 feature which enables the new storage and disables the
old storage.

Note that this commit doesn't use `--all-features` when building the
workspace in scripts/ci/build-test. This will be rectified in a later
commit once the storage-v2 feature is integrated into the other crates
in the workspace.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-08-20 17:36:26 +01:00
Orion Henry
ff90327b52
Merge pull request #414 from jkankiewicz/port_WASM_basic_tests_to_C
Port the WASM API's basic tests to C
2022-08-11 19:00:11 -05:00
Orion Henry
ece66396e8
Merge pull request #417 from tombh/readme-update
Readme updates
2022-08-11 18:58:44 -05:00
Orion Henry
d1a926bcbe fix ownKeys bug in automerge-js 2022-08-11 18:49:42 -05:00
Orion Henry
1a955e1f0d fix some typescript errors - deprecate default export of the wasm package 2022-08-11 18:24:21 -05:00
Thomas Buckley-Houston
f89e9ad9cc
Readme updates 2022-08-10 08:46:05 -04:00
Jason Kankiewicz
bc28faee71 Replace NULL with std::ptr::null() within the
safety notes for @alexjg in #414.
2022-08-07 20:04:49 -07:00
Jason Kankiewicz
50981acc5a Replace to_del!() and to_pos!() with
`to_index!()` for @alexjg in #414.
2022-08-07 19:37:48 -07:00
Jason Kankiewicz
7ec17b26a9 Replace `From<&AMvalue<'_>> for Result<
am::ScalarValue, am::AutomergeError>` with `TryFrom<&AMvalue<'_>> for
am::ScalarValue` for @alexjg in #414.
2022-08-07 19:24:47 -07:00
Jason Kankiewicz
825342cbb1 Remove reflexive struct reference from a Doxygen
variable declaration.
2022-08-07 08:07:00 -07:00
Jason Kankiewicz
04d0175113 Add missing past-the-end checks to the unit tests
for `AMmapRange()`.
2022-08-06 16:20:35 -07:00
Jason Kankiewicz
14bd8fbe97 Port the WASM API's basic unit tests to C.
Weave the original TypeScript code into the C ports of the WASM API's
sync tests.
Fix misnomers in the WASM API's basic and sync unit tests.
Fix misspellings in the WASM API's basic and sync unit tests.
2022-08-06 16:18:59 -07:00
Jason Kankiewicz
d48e366272 Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:56:21 -07:00
Jason Kankiewicz
4217019cbc Expose automerge::AutoCommit::get_all() as AMlistGetAll() and AMmapGetAll().
Add symbolic last index specification to `AMlist{Delete,Get,Increment}()`.
Add symbolic last index specification to `AMlistPut{Bool,Bytes,Counter,
F64,Int,Null,Object,Str,Timestamp,Uint}()`.
Prevent `doc::utils::to_str(NULL)` from segfaulting.
Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:47:53 -07:00
Jason Kankiewicz
eeb75f74f4 Fix AMstrsCmp().
Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:07:48 -07:00
Jason Kankiewicz
a22afdd70d Expose automerge::AutoCommit::get_change_by_hash()
as `AMgetChangeByHash()`.
Add the `AM_CHANGE_HASH_SIZE` macro define constant for
`AMgetChangeByHash()`.
Replace the literal `32` with the `automerge::types::HASH_SIZE` constant.
Expose `automerge::AutoCommit::splice()` as `AMsplice()`.
Add the `automerge::error::AutomergeError::InvalidValueType` variant for
`AMsplice()`.
Add push functionality to `AMspliceText()`.
Fix some documentation content bugs.
Fix some documentation formatting bugs.
2022-08-06 15:04:46 -07:00
Orion Henry
5e8f4caed6
Merge pull request #392 from rf-/rf-fix-export-default-syntax
Fix TypeScript syntax error in `automerge-wasm` definitions
2022-08-03 11:01:11 -05:00
Orion Henry
5c6f375f99
Merge pull request #410 from jkankiewicz/add_range_functions_to_C_API
Add range functions to C API
2022-08-03 10:47:44 -05:00
Jason Kankiewicz
3a556c5991 Expose Autocommit::fork_at().
Rename `AMdup()` to `AMclone()` to match the WASM API.
Rename `AMgetActor()` to `AMgetActorId()` to match the WASM API.
Rename `AMsetActor()` to `AMsetActorId()` to match the WASM API.
2022-08-01 07:02:30 -07:00
Orion Henry
1bc5fbb81e
Merge pull request #413 from jkankiewicz/remove_original_C_API_files
Remove original C API files
2022-07-29 16:06:45 -05:00
Jason Kankiewicz
69de8187a5 Update the build system with the added and
renamed source files.
Defer `BTreeMap` creation until necessary for `AMresult::Changes`.
Add `AMvalueEqual()` to enable direct comparison of two `AMvalue` structs regardless of their respective variants.
2022-07-25 01:41:52 -07:00
Jason Kankiewicz
877744d40b Add equality comparison to the AM* types from
which it was missing.
Add equality comparison to `automerge::sync::message`.
Defer `std::ffi::CString` creation until necessary.
2022-07-25 01:33:50 -07:00
Jason Kankiewicz
14b55c4a73 Fix a bug with the iterators when they pass their
initial positions in reverse.
Rename `AMstrings` to `AMstrs` for consistency with the `AMvalue.str`
field.
2022-07-25 01:23:26 -07:00
Jason Kankiewicz
23fbb4917a Replace _INCLUDED with _H as the suffix for
include guards in C headers like the one generated by cbindgen.
2022-07-25 01:04:35 -07:00
Jason Kankiewicz
877dbbfce8 Simplify the unit tests with AMresultStack et al.
2022-07-25 01:00:50 -07:00
Jason Kankiewicz
a22bcb916b Promoted ResultStack/StackNode from the
quickstart example up to the library as `AMresultStack` so that it can
appear in the README.md and be used to simplify the unit tests.
Promoted `free_results()` to `AMfreeStack()` and `push()` to `AMpush()`.
Added `AMpop()` because no stack should be without one.
2022-07-25 00:50:40 -07:00
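
For readers new to the stack discipline described above, here is a hedged Rust stand-in for `AMresultStack` (all names and internals are illustrative, not the actual C API): every allocated result gets pushed, and one call frees the lot.

    struct StackNode<T> {
        result: T,
        next: Option<Box<StackNode<T>>>,
    }

    struct ResultStack<T>(Option<Box<StackNode<T>>>);

    impl<T> ResultStack<T> {
        // push a newly allocated result onto the stack (AMpush, roughly)
        fn push(&mut self, result: T) {
            let next = self.0.take();
            self.0 = Some(Box::new(StackNode { result, next }));
        }
        // pop the most recent result back off (AMpop)
        fn pop(&mut self) -> Option<T> {
            self.0.take().map(|node| {
                self.0 = node.next;
                node.result
            })
        }
        // drop everything at once (AMfreeStack's job, roughly)
        fn free_all(&mut self) {
            while self.pop().is_some() {}
        }
    }
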
Jason Kankiewicz
42ab1639db Add heads argument to AMmapGet() to expose
`automerge::AutoCommit::get_at()`.
Add `AMmapRange()` to expose `automerge::AutoCommit::map_range()` and
`automerge::AutoCommit::map_range_at()`.
Add `AMmapItems` for `AMlistRange()`.
Add `AMmapItem` for `AMmapItems`.
2022-07-25 00:11:00 -07:00
Jason Kankiewicz
eba18d1ad6 Add heads argument to AMlistGet() to expose
`automerge::AutoCommit::get_at()`.
Add `AMlistRange()` to expose `automerge::AutoCommit::list_range()` and
`automerge::AutoCommit::list_range_at()`.
Add `AMlistItems` for `AMlistRange()`.
Add `AMlistItem` for `AMlistItems`.
2022-07-24 22:41:32 -07:00
Jason Kankiewicz
ee68645f31 Add AMfork() to expose `automerge::AutoCommit::fork()`.
Add `AMobjValues()` to expose `automerge::AutoCommit::values()` and
`automerge::AutoCommit::values_at()`.
Add `AMobjIdActorId()`, `AMobjIdCounter()`, and `AMobjIdIndex()` to expose `automerge::ObjId::Id` fields.
Change `AMactorId` to reference an `automerge::ActorId` instead of
owning one for `AMobjIdActorId()`.
Add `AMactorIdCmp()` for `AMobjIdActorId()` comparison.
Add `AMobjItems` for `AMobjValues()`.
Add `AMobjItem` for `AMobjItems`.
Add `AMobjIdEqual()` for property comparison.
Rename `to_doc!()` to `to_doc_mut!()` and `to_doc_const!()` to `to_doc!()`
for consistency with the Rust standard library.
2022-07-24 22:23:54 -07:00
Jason Kankiewicz
cc19a37f01 Remove the makefile for the original C
API to prevent confusion.
2022-07-23 08:48:19 -07:00
Jason Kankiewicz
15c9adf965 Remove the obsolete test suite for the original
C API to prevent confusion.
2022-07-23 08:47:21 -07:00
Jason Kankiewicz
52a558ee4d Cease writing a pristine copy of the generated
header file into the root of the C API's source directory to prevent
confusion.
2022-07-23 08:44:41 -07:00
Alex Good
668b7b86ca Add license for unicode-idents
`unicode-idents` distributes some data tables from unicode.org which
require an additional license. This doesn't affect our licensing because
we don't distribute the data files - just the generated code. Explicitly
allow the Unicode-DFS-2016 license for unicode-idents.
2022-07-18 10:50:32 +01:00
Alex Good
d71a734e49 Add OpIds to enforce ordering of Op::succ and Op::pred
The ordering of opids in the successor and predecessors of an op is
relevant when encoding because inconsistent ordering changes the
hashgraph. This means we must maintain the invariant that opids are
encoded in ascending lamport order. We have been maintaining this
invariant in the encoding implementation - however, this is not ideal
because it requires allocating for every op in the change when we commit
a transaction.

Add `types::OpIds` and use it in place of `Vec<OpId>` for `Op::succ` and
`Op::pred`. `OpIds` maintains the invariant that the IDs it contains
must be ordered with respect to some comparator function - which is
always `OpSetMetadata::lamport_cmp`. Remove the sorting of opids in
SuccEncoder::append.
2022-07-17 20:58:47 +01:00
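
A minimal sketch of the invariant `OpIds` enforces, assuming a comparator such as `OpSetMetadata::lamport_cmp` (the internals here are illustrative, not the real implementation): ids are inserted at the position the comparator chooses, so the collection is sorted by construction and the encoder no longer has to sort.

    use std::cmp::Ordering;

    #[derive(Debug, Default)]
    struct OpIds(Vec<u64>); // u64 stands in for the real OpId type

    impl OpIds {
        fn add<F>(&mut self, id: u64, cmp: F)
        where
            F: Fn(&u64, &u64) -> Ordering, // e.g. OpSetMetadata::lamport_cmp
        {
            // binary-search for the slot that preserves the order
            let idx = self
                .0
                .binary_search_by(|probe| cmp(probe, &id))
                .unwrap_or_else(|i| i);
            self.0.insert(idx, id);
        }
    }
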
Andrew Jeffery
97575d3a90
Merge pull request #408 from jeffa5/automerge-description
publish: Add description to automerge crate
2022-07-14 19:01:09 +01:00
Andrew Jeffery
359376b3db publish: Add description to automerge crate
Came up as a warning in a dry-run publish.
2022-07-14 18:33:00 +01:00
Andrew Jeffery
5452aa4e4d
Merge pull request #406 from jeffa5/docs-ci
ci: Rename docs script to rust-docs and build cmake docs in CI
2022-07-13 19:47:42 +01:00
Andrew Jeffery
8c93d498b3 ci: Rename docs script to rust-docs and build cmake docs in CI 2022-07-13 18:25:25 +01:00
Adel Salakh
f14a61e581 Sort successors in SuccEncoder
Makes SuccEncoder sort successors in Lamport clock order.
Such an ordering is expected by automerge js when loading documents,
otherwise some documents fail to load with an "operation IDs are not in
ascending order" error.
2022-07-13 11:25:12 +01:00
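
Lamport clock order, assuming an op id is a (counter, actor) pair as in automerge, comes down to comparing counters first and breaking ties by actor, which makes the order total and deterministic. A tiny sketch:

    use std::cmp::Ordering;

    fn lamport_cmp(a: (u64, usize), b: (u64, usize)) -> Ordering {
        a.0.cmp(&b.0).then(a.1.cmp(&b.1))
    }

    fn main() {
        assert_eq!(lamport_cmp((2, 9), (3, 0)), Ordering::Less); // counter wins
        assert_eq!(lamport_cmp((3, 0), (3, 1)), Ordering::Less); // actor breaks ties
    }
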
Andrew Jeffery
65c478981c
Merge pull request #403 from jeffa5/parents-error
Change parents to return result if objid is not an object
2022-07-12 21:19:31 +01:00
Andrew Jeffery
75fb4f0f0c
Merge pull request #404 from jeffa5/trim-deps
Clean up automerge dependencies
2022-07-12 19:26:48 +01:00
Andrew Jeffery
be439892a4 Clean up automerge dependencies 2022-07-12 19:09:47 +01:00
Andrew Jeffery
6ea5982c16 Change parents to return result if objid is not an object
It is easy to call parents with the id of a scalar, expecting it to
find the enclosing object first, but that is not implemented. Finding
the parent object of a scalar id would mean searching every object for
the OpId, which could get expensive when lots of objects are around;
that may be reconsidered later. Even so, the Result is useful for
distinguishing an id that doesn't exist in the document from one that
has no parents.
2022-07-12 18:36:47 +01:00
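
A toy model of the new behaviour (the types are stand-ins, not automerge's real API): asking for the parents of an id that is not an object is now an error, which is distinguishable from an object that simply has no parents.

    #[derive(Debug, PartialEq)]
    enum AutomergeError {
        NotAnObject,
    }

    fn parents(object_ids: &[u64], id: u64) -> Result<Vec<u64>, AutomergeError> {
        if !object_ids.contains(&id) {
            return Err(AutomergeError::NotAnObject); // scalar or unknown id
        }
        Ok(Vec::new()) // the real code would walk up the object tree
    }
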
Andrew Jeffery
0cd515526d
Merge pull request #402 from jeffa5/fix-cmake-docs
Don't build tests for docs
2022-07-12 10:17:43 +01:00
Andrew Jeffery
246ed4afab Test building docs on PRs 2022-07-12 10:12:07 +01:00
Andrew Jeffery
0a86a4d92c Don't build tests for docs
The test `CMakeLists.txt` brings in cmocka, but we don't actually need to
build the tests to get the docs. This just makes the cmake docs script
tell cmake not to build the tests.
2022-07-12 09:59:03 +01:00
Andrew Jeffery
1d3263c002
Merge pull request #397 from jeffa5/edit-trace-improvements
Fixup js edit-trace script and documentation bits
2022-07-07 09:47:43 +01:00
Andrew Jeffery
7e8cbf510a Add links to projects 2022-07-07 09:40:18 +01:00
Andrew Jeffery
c49ba5ea98 Fixup js edit-trace script and documentation bits 2022-07-07 09:25:45 +01:00
Andrew Jeffery
fe4071316d Add docs workflow status badge to README 2022-07-07 09:24:57 +01:00
Orion Henry
1a6f56f7e6
Merge pull request #393 from jkankiewicz/add_AMkeys_to_C_API
Add AMkeys() to the C API
2022-06-21 20:24:44 +02:00
Orion Henry
d5ca0947c0 minor update on js wrapper 2022-06-21 13:40:15 -04:00
Jason Kankiewicz
e5a8b67b11 Added AMspliceText().
Added `AMtext()`.
Replaced `*mut` function arguments with `*const`
function arguments where possible.
Added "out" directions to the documentation for
out function parameters.
2022-06-20 23:15:25 -07:00
Jason Kankiewicz
aeb8db556c Added "out" directions to the documentation for
out function parameters.
2022-06-20 23:11:03 -07:00
Jason Kankiewicz
eb462cb228 Made free_results() reset the stack pointer. 2022-06-20 15:55:31 -07:00
Jason Kankiewicz
0cbacaebb6 Simplified the AMstrings struct to directly
reference `std::ffi::CString` values.
Switched the `AMresult` struct to store a `Vec<CString>` instead of a
`Vec<String>`.
2022-06-20 14:35:30 -07:00
Jason Kankiewicz
bf4988dcca Fixed AM{change_hashes,changes,haves,strings}Prev(). 2022-06-20 13:50:05 -07:00
Jason Kankiewicz
770c064978 Made cosmetic changes to the quickstart example. 2022-06-20 13:45:32 -07:00
Jason Kankiewicz
db0333fc5a Added AM_ROOT usage to the documentation.
Renamed the `value` argument of `AM{list,map}PutBytes()` to `src` for
consistency with standard `memcpy()`.
2022-06-20 02:16:33 -07:00
Jason Kankiewicz
7bdf726ce1 Sublimated memory management in the quickstart
example.
2022-06-20 02:07:33 -07:00
Jason Kankiewicz
47c5277406 Added AMkeys().
Removed `AMobjSizeAt()`.
Added an optional `AMchangeHashes` argument to `AMobjSize()`.
Replaced the term "length" with "size" in the
documentation.
2022-06-20 01:53:31 -07:00
Jason Kankiewicz
ea8bd32cc1 Added the AMstrings type. 2022-06-20 01:38:32 -07:00
Jason Kankiewicz
be130560f0 Added a check for a 0 increment in the iterator
types.
Improved the documentation for the `detail` field in the iterator types.
2022-06-20 01:34:36 -07:00
Jason Kankiewicz
103d729bd1 Replaced the term "length" with "size" in the
documentation.
2022-06-20 01:31:08 -07:00
Jason Kankiewicz
7b30c84a4c Added AMchangeHashesInit(). 2022-06-20 01:17:20 -07:00
Jason Kankiewicz
39db64e5d9 Publicized the AMbyteSpan fields. 2022-06-20 01:11:30 -07:00
Jason Kankiewicz
32baae1a31 Hoisted InvalidChangeHashSlice into the
`Automerge` namespace.
2022-06-20 01:09:50 -07:00
Ryan Fitzgerald
88073c0cf4 Fix TypeScript syntax error in automerge-wasm definitions
I'm not sure if there are some configurations under which this works,
but I get

    index.d.ts:2:21 - error TS1005: ';' expected.

    2 export default from "automerge-types"
                          ~~~~~~~~~~~~~~~~~

both in my project that depends on `automerge-wasm` and when I run `tsc`
in this repo.

It seems like `export default from` is still a Stage 1 proposal, so I
wouldn't expect it to be supported by TS, although I couldn't really
find hard evidence one way or the other. It does seem like this syntax
should be exactly equivalent based on the proposal doc though.
2022-06-17 20:11:26 -07:00
Orion Henry
f5e9e3537d v0.1.4 2022-06-16 17:50:46 -04:00
Orion Henry
44b6709a60 add getBackend to automerge-js 2022-06-16 17:49:32 -04:00
Orion Henry
1610f6d6a6
Merge pull request #391 from jkankiewicz/expose_ActorId_to_C_API
Add `AMactorId` to the C API
2022-06-16 21:57:56 +02:00
Orion Henry
40b32566f4
Merge pull request #390 from jkankiewicz/make_C_API_testing_explicit
Make C API testing explicit
2022-06-16 21:56:26 +02:00
Orion Henry
3a4af9a719
Merge pull request #371 from automerge/typescript
Convert automerge-js to typescript
2022-06-16 21:52:22 +02:00
Jason Kankiewicz
400b8acdff Switched the AMactorId unit test suite to group
setup/teardown.
Removed superfluous group state from the `AMactorIdInit()` test.
2022-06-14 23:16:45 -07:00
Jason Kankiewicz
2f37d194ba Asserted that the string forms of two random
`AMactorId` structs are unequal.
2022-06-14 23:04:18 -07:00
Orion Henry
ceecef3b87 update list of read methods in c readme 2022-06-14 21:28:10 -04:00
Jason Kankiewicz
6de9ff620d Moved hex_to_bytes() so that it could be shared
by the unit test suites for `AMactorId` and `AMdoc` functions.
2022-06-14 00:52:06 -07:00
Jason Kankiewicz
84fa83a3f0 Added AMactorId.
Updated `AMchangeActorId()`.
Updated `AMsetActor()`.
Removed `AMgetActorHex()`.
Removed `AMsetActorHex()`.
2022-06-14 00:49:20 -07:00
Jason Kankiewicz
ac3709e670 Hoisted InvalidActorId into the automerge
namespace.
2022-06-14 00:38:55 -07:00
Jason Kankiewicz
71d8a7e717 Removed the superfluous AutomergeError::HexDecode
variant.
2022-06-14 00:37:42 -07:00
Jason Kankiewicz
bdedafa021 Decouple the "test_automerge" build target from
the "ALL" target.
2022-06-13 12:01:54 -07:00
Jason Kankiewicz
efa0a5624a Removed renamed unit test suite source files. 2022-06-11 21:04:36 -07:00
Jason Kankiewicz
4efe9a4f68 Replaced "cmake -E make_directory" invocation with
"mkdir -p" invocation for consistency with the other CI scripts.
2022-06-11 21:03:26 -07:00
Jason Kankiewicz
4f7843e007 Removed CMocka from the "docs" CI workflow's list
of dependencies.
2022-06-11 20:57:28 -07:00
Jason Kankiewicz
30dd3da578 Updated the CMake build CI script to build the
"test_automerge" target explicitly.
2022-06-11 20:55:44 -07:00
Jason Kankiewicz
6668f79a6e Decouple the "test_automerge" build target from
the "ALL" target.
2022-06-11 20:53:17 -07:00
Orion Henry
0c9e77b644 added a test to ensure we don't break counter serialization 2022-06-09 12:45:20 +02:00
Orion Henry
d6bce697a5 normalize edit trace 2022-06-09 12:42:43 +02:00
Orion Henry
22117f4997
Merge pull request #387 from jeromegn/counter-ser-current
Serialize Counter with its current value instead of start value
2022-06-09 03:42:28 -07:00
Jerome Gravel-Niquet
b20d04b0f2
serialize Counter with its current value instead of start value 2022-06-08 14:00:03 -04:00
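
The distinction being fixed, as a hedged sketch (field names are assumptions): a counter carries its start value plus the increments applied since, and serialization now reports the sum rather than the start value alone.

    struct Counter {
        start: i64,
        increments: i64, // running total of increment operations
    }

    impl Counter {
        // what gets serialized after this change
        fn current(&self) -> i64 {
            self.start + self.increments
        }
    }
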
Orion Henry
d5c07f22af
Merge pull request #385 from jkankiewicz/add_some_functions_from_README
Add some functions from the README.md file
2022-06-07 06:33:44 -07:00
Jason Kankiewicz
bfa85050b8 Fix Rust code formatting violations. 2022-06-07 00:29:58 -07:00
Jason Kankiewicz
1c78aab5f0 Fixed the AMsyncStateDecode() documentation. 2022-06-07 00:23:41 -07:00
Jason Kankiewicz
ad7dd07cf7 Simplify the names of the unit test suites' source
files.
2022-06-07 00:21:22 -07:00
Jason Kankiewicz
2e84c6e9ef Added AMlistIncrement(). 2022-06-07 00:15:37 -07:00
Jason Kankiewicz
0ecb9e7dce Added AMmapIncrement(). 2022-06-07 00:14:42 -07:00
Jason Kankiewicz
99ab5b4ed7 Added AMgetChangesAdded().
Added `AMpendingOps()`.
Added `AMrollback()`.
Added `AMsaveIncremental()`.
Fixed the `AMmerge()` documentation.
2022-06-07 00:14:11 -07:00
Andrew Jeffery
7439a49e37 Fix automerge-c html nesting 2022-06-06 19:49:18 +01:00
Andrew Jeffery
7a9786a146 Fix index.html location 2022-06-06 19:35:50 +01:00
Andrew Jeffery
82fe420a10 Use cmocka dev instead of lib 2022-06-06 19:11:07 +01:00
Andrew Jeffery
7d2be219ac Update cmocka to be libcmocka0 for install 2022-06-06 19:05:02 +01:00
Andrew Jeffery
00ab853813 Add cmake docs deps 2022-06-06 18:40:25 +01:00
Andrew Jeffery
97ef4fe7cd
Merge pull request #384 from jeffa5/serve-c-docs
Build c docs in CI
2022-06-06 18:31:48 +01:00
Andrew Jeffery
5c1cbc8eeb Build c docs in CI 2022-06-06 18:21:14 +01:00
Orion Henry
cf264f3bf4
Merge pull request #382 from jkankiewicz/obfuscate_iterator_fields
Remove artificial iteration from the C API
2022-06-06 06:41:45 -07:00
Jason Kankiewicz
8222ec1705 Move the AMsyncHaves.ptr field into the
`sync::haves::Detail` struct.
Change `AMsyncHavesAdvance()`, `AMsyncHavesNext()` and `AMsyncHavesPrev()`
to interpret their `n` argument relatively instead of absolutely.
Renamed `AMsyncHavesReverse()` to `AMsyncHavesReversed()`.
Updated the C API's documentation for the `AMsyncHaves` struct.
2022-06-05 14:41:48 -07:00
Jason Kankiewicz
74632a0512 Move the AMchanges.ptr field into the
`changes::Detail` struct.
Change `AMchangesAdvance()`, `AMchangesNext()` and `AMchangesPrev()` to
interpret their `n` argument relatively instead of absolutely.
Renamed `AMchangesReverse()` to `AMchangesReversed()`.
Updated the C API's documentation for the `AMchanges` struct.
2022-06-05 14:37:32 -07:00
Jason Kankiewicz
7e1ae60bdc Move the AMchangeHashes.ptr field into the
`change_hashes::Detail` struct.
Change `AMchangeHashesAdvance()`, `AMchangeHashesNext()` and
`AMchangeHashesPrev()` to interpret their `n` argument relatively
instead of absolutely.
Renamed `AMchangeHashesReverse()` to `AMchangeHashesReversed()`.
Updated the C API's documentation for the `AMchangeHashes` struct.
2022-06-05 14:32:55 -07:00
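
The relative interpretation of `n` adopted in the three commits above, sketched in Rust (struct and field names are illustrative): the iterator moves `n` positions from wherever it currently is, with negative `n` stepping backwards, instead of jumping to `n` as an absolute index.

    struct Detail {
        pos: isize,
        len: isize,
    }

    impl Detail {
        fn advance(&mut self, n: isize) {
            self.pos = (self.pos + n).clamp(0, self.len);
        }
    }
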
Jason Kankiewicz
92d6fff22f Compensate for the removal of the AMchanges.ptr
member.
2022-06-05 14:28:33 -07:00
Jason Kankiewicz
92f3efd6e0 Removed the 0 argument from AMresultValue()
calls.
2022-06-04 22:31:15 -07:00
Jason Kankiewicz
31fe8dbb36 Renamed the AMresult::Scalars variant to
`AMresult::Value`.
Removed the `Vec` wrapping the 0th field of an `AMresult::Value`.
Removed the `index` argument from `AMresultValue()`.
2022-06-04 22:24:02 -07:00
Jason Kankiewicz
d4d1b64cf4 Compensate for cbindgen issue #252. 2022-06-04 19:18:47 -07:00
Jason Kankiewicz
92b1216101 Obfuscated most implementation details of the
`AMsyncHaves` struct.
Added `AMsyncHavesReverse()`.
2022-06-04 19:14:31 -07:00
Jason Kankiewicz
1990f29c60 Obfuscated most implementation details of the
`AMChanges` struct.
Added `AMchangesReverse()`.
2022-06-04 19:13:22 -07:00
Jason Kankiewicz
b38be0750b Obfuscated most implementation details of the
`AMChangeHashes` struct.
Added `AMchangeHashesReverse()`.
2022-06-04 18:51:57 -07:00
Orion Henry
3866e9066f
Merge pull request #381 from jkankiewicz/unify_C_API_results
Simplify management of memory allocated by C API calls
2022-06-02 10:14:55 -07:00
Orion Henry
51554e7793
Merge pull request #377 from jeffa5/more-sync-opt
Some more sync optimisations
2022-06-02 10:14:44 -07:00
Jason Kankiewicz
afddf7d508 Fix "fmt" script violations.
Fix "lint" script violations.
2022-06-01 23:34:28 -07:00
Jason Kankiewicz
ca383f03e4 Wrapped all newly-allocated values in an AMresult struct.
Removed `AMfree()`.
Renamed `AMresultFree()` to `AMfree()`.
Removed type names from brief descriptions.
2022-06-01 23:10:23 -07:00
Orion Henry
de25e8f7c8
Merge pull request #380 from jkankiewicz/add_syncing_to_C_API
Add syncing to C API
2022-06-01 13:46:55 -07:00
Orion Henry
27dfa4ca27 missed some bugs related to the wasm api change 2022-06-01 16:31:18 -04:00
Orion Henry
9a0dd24714 fmt / tests 2022-06-01 08:08:01 -04:00
Orion Henry
8ce10dab69 some api changes/tweaks - basic js package 2022-05-31 13:49:18 -04:00
Jason Kankiewicz
fbdb5da508 Ported 17 synchronization unit test cases from JS
to C.
2022-05-30 23:17:44 -07:00
Jason Kankiewicz
cdcd5156db Added the synchronization unit test suite to the
CTest suite.
2022-05-30 23:16:14 -07:00
Jason Kankiewicz
d08eeeed61 Renamed AMfreeDoc() to AMfree(). 2022-05-30 23:15:20 -07:00
Jason Kankiewicz
472b5dc348 Added the synchronization unit test suite to the
CTest suite.
2022-05-30 23:14:38 -07:00
Jason Kankiewicz
846b96bc9a Renamed AMfreeResult() to AMresultFree(). 2022-05-30 23:11:56 -07:00
Jason Kankiewicz
4cb7481a1b Moved the AMsyncState struct into its own
source file.
Added `AMsyncStateDecode()`.
Added `AMsyncStateEncode()`.
Added `AMsyncStateEqual()`.
Added `AMsyncStateSharedHeads()`.
Added `AMsyncStateLastSentHeads()`.
Added `AMsyncStateTheirHaves()`.
Added `AMsyncStateTheirHeads()`.
Added `AMsyncStateTheirNeeds()`.
2022-05-30 23:07:55 -07:00
Jason Kankiewicz
3c11946c16 Moved the AMsyncMessage struct into its own
source file.
Added `AMsyncMessageChanges()`.
Added `AMsyncMessageDecode()`.
Added `AMsyncMessageEncode()`.
Added `AMsyncMessageHaves()`.
Added `AMsyncMessageHeads()`.
Added `AMsyncMessageNeeds()`.
2022-05-30 22:58:45 -07:00
Jason Kankiewicz
c5d3d1b0a0 Added the AMsyncHaves struct.
Added `AMsyncHavesAdvance()`.
Added `AMsyncHavesNext()`.
Added `AMsyncHavesPrev()`.
Added `AMsyncHavesSize()`.
2022-05-30 22:55:34 -07:00
Jason Kankiewicz
be3c7d6233 Added the AMsyncHave struct.
Added `AMsyncHaveLastSync()`.
2022-05-30 22:54:02 -07:00
Jason Kankiewicz
9213d43850 Grouped some common macros and functions into
their own source file.
2022-05-30 22:53:09 -07:00
Jason Kankiewicz
18ee9b71e0 Grouped the AMmap*() functions into their own
source file.
2022-05-30 22:52:02 -07:00
Jason Kankiewicz
a9912d4b9f Grouped the AMlist*() functions into their own
source file.
2022-05-30 22:51:41 -07:00
Jason Kankiewicz
d9bf29e8fd Grouped AMsyncMessage and AMsyncState into
separate source files.
2022-05-30 22:50:26 -07:00
Jason Kankiewicz
546b6ccbbd Moved AMobjId into its own source file.
Added the `AMvalue::SyncState` variant.
Enabled `AMchange` structs to be lazily created.
Added the `AMresult::SyncState` variant.
Added an `Option<&automerge::Change>` conversion for `AMresult`.
Added a `Result<automerge::Change, automerge::DecodingError>` conversion
for `AMresult`.
Added a `Result<automerge::sync::Message, automerge::DecodingError>`
conversion for `AMresult`.
Added a `Result<automerge::sync::State, automerge::DecodingError>`
conversion for `AMresult`.
Moved `AMerrorMessage()` and `AMresult*()` into the source file for
`AMresult`.
2022-05-30 22:49:23 -07:00
Jason Kankiewicz
bb0b023c9a Moved AMobjId into its own source file. 2022-05-30 22:37:22 -07:00
Jason Kankiewicz
c3554199f3 Grouped related AM*() functions into separate source files. 2022-05-30 22:36:26 -07:00
Jason Kankiewicz
e56fe64a18 Added AMapplyChanges().
Fixed `AMdup()`.
Added `AMequal()`.
Renamed `AMfreeDoc()` to `AMfree()`.
Added `AMgetHeads()`.
Added `AMgetMissingDeps()`.
Added `AMgetLastLocalChange()`.
2022-05-30 22:34:01 -07:00
Jason Kankiewicz
007253d6ae Updated the file dependencies of the CMake custom
command for Cargo.
2022-05-30 22:27:14 -07:00
Jason Kankiewicz
e8f1f07f21 Changed AMchanges to lazily create AMchange structs.
Renamed `AMadvanceChanges()` to `AMchangesAdvance()`.
Added `AMchangesEqual()`.
Renamed `AMnextChange()` to `AMchangesNext()`.
Renamed `AMprevChange()` to `AMchangesPrev()`.
2022-05-30 22:24:53 -07:00
Jason Kankiewicz
3ad979a178 Added AMchangeActorId().
Added `AMchangeCompress()`.
Added `AMchangeDeps()`.
Added `AMchangeExtraBytes()`.
Added `AMchangeFromBytes()`.
Added `AMchangeHash()`.
Added `AMchangeIsEmpty()`.
Added `AMchangeMaxOp()`.
Added `AMchangeMessage()`.
Added `AMchangeSeq()`.
Added `AMchangeSize()`.
Added `AMchangeStartOp()`.
Added `AMchangeTime()`.
Added `AMchangeRawBytes()`.
Added `AMchangeLoadDocument()`.
2022-05-30 22:19:54 -07:00
Jason Kankiewicz
fb0ea2c7a4 Renamed AMadvanceChangeHashes() to AMchangeHashesAdvance().
Added `AMchangeHashesCmp()`.
Renamed `AMnextChangeHash()` to `AMchangeHashesNext()`.
2022-05-30 22:12:03 -07:00
Jason Kankiewicz
a31a65033f Renamed AMfreeResult() to AMresultFree().
Renamed `AMfreeDoc()` to `AMfree()`.
Renamed `AMnextChange()` to `AMchangesNext()`.
Renamed `AMgetMessage()` to `AMchangeMessage()`.
2022-05-30 22:08:27 -07:00
Jason Kankiewicz
5765fea771 Renamed AMfreeResult() to AMresultFree().
Remove the `&AMchange` conversion for `AMbyteSpan`.
Add a `&automerge::ActorId` conversion for `AMbyteSpan`.
Remove the `&Vec<u8>` conversion for `AMbyteSpan`.
Add a `&[u8]` conversion for `AMbyteSpan`.
2022-05-30 22:06:22 -07:00
Jason Kankiewicz
4bed03f008 Added the AMsyncMessage struct.
Added the `AMsyncState` struct.
Added the `AMfreeSyncState()` function.
Added the `AMgenerateSyncMessage()` function.
Added the `AMinitSyncState()` function.
Added the `AMreceiveSyncMessage()` function.
2022-05-30 08:22:17 -07:00
Orion Henry
210c6d2045 move types to their own package 2022-05-27 10:23:51 -07:00
Andrew Jeffery
a569611d83 Use clock_at for filter_changes 2022-05-26 19:03:09 +01:00
Andrew Jeffery
03a635a926 Extend last_sync_hashes 2022-05-26 19:03:09 +01:00
Andrew Jeffery
97a5144d59 Reduce the amount of shuffling data for changes_to_send 2022-05-26 19:03:09 +01:00
Andrew Jeffery
03289510d6 Remove cloning their_have in sync 2022-05-26 19:03:09 +01:00
Andrew Jeffery
b1712cb0c6
Merge pull request #379 from jeffa5/apply-changes-iter
Update autocommit's apply_changes to take an iterator
2022-05-26 13:11:51 +01:00
Andrew Jeffery
dae6509e13 Update autocommit's apply_changes to take an iterator 2022-05-26 09:02:59 +01:00
Andrew Jeffery
587adf7418 Add Eq to ObjType 2022-05-24 09:48:55 +01:00
Orion Henry
df8cae8a2b README 2022-05-23 19:25:23 +02:00
Orion Henry
3a44ccd52d clean up lint, simplify package, hand write an index.d.ts 2022-05-23 19:04:31 +02:00
Orion Henry
07f5678a2b linting in wasm 2022-05-22 13:54:59 -04:00
Orion Henry
d638a41a6c record type 2022-05-22 13:53:11 -04:00
Orion Henry
bd35361354 fixed typescript errors, pull wasm dep (mostly) out 2022-05-22 13:53:11 -04:00
Scott Trinh
d2fba6bf04 Use an UnknownObject type alias 2022-05-22 13:53:11 -04:00
Orion Henry
fd02585d2a removed a bunch of lint errors 2022-05-22 13:53:11 -04:00
Orion Henry
515a2eb94b removing some ts errors 2022-05-22 13:53:11 -04:00
Orion Henry
5e1bdb79ed eslint --fix 2022-05-22 13:53:11 -04:00
Orion Henry
1cf8f80ba4 pull wasm out of deps 2022-05-22 13:53:11 -04:00
Orion Henry
226bbeb023 tslint to eslint 2022-05-22 13:53:11 -04:00
Orion Henry
1eec70f116 example webpack for js 2022-05-22 13:53:11 -04:00
Orion Henry
4f898b67b3 able to build npm package 2022-05-22 13:53:11 -04:00
Orion Henry
551f6e1343 convert automerge-js to typescript 2022-05-22 13:53:11 -04:00
Orion Henry
c353abfe4e
Merge pull request #375 from jeffa5/get-changes-opt
Get changes opt
2022-05-22 10:30:24 -07:00
Orion Henry
f0abcf0605
Merge pull request #376 from jeffa5/api-interoperability
Use BTreeSet for sync::State to allow deriving Hash
2022-05-22 10:30:07 -07:00
Andrew Jeffery
2c1a71e143 Use expect for getting clock 2022-05-20 18:01:46 +01:00
Andrew Jeffery
8b1c3c73cd Use BTreeSet for sync::State to allow deriving Hash 2022-05-20 16:13:10 +01:00
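
The motivation is a standard-library fact: `BTreeSet` implements `Hash` when its elements do, so a struct containing one can derive `Hash`, whereas `HashSet` has no `Hash` impl because its iteration order is unspecified. For example (field name assumed for illustration):

    use std::collections::BTreeSet;

    #[derive(Hash)]
    struct State {
        shared_heads: BTreeSet<[u8; 32]>, // a HashSet field here would not compile
    }
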
Andrew Jeffery
3a8e833187 Document num_ops on change 2022-05-20 10:05:08 +01:00
Andrew Jeffery
1355a024a7 Use actor_index to get state in update_history 2022-05-20 10:05:08 +01:00
Andrew Jeffery
e5b527e17d Remove old functions 2022-05-20 10:05:08 +01:00
Andrew Jeffery
4b344ac308 Add sync benchmark 2022-05-20 10:05:08 +01:00
Andrew Jeffery
36857e0f6b Store seq in clock to remove binary_search_by_key 2022-05-20 10:05:08 +01:00
Andrew Jeffery
b7c50e47b9 Just use get_changes_clock 2022-05-20 10:05:08 +01:00
Andrew Jeffery
16f1304345 Fix wasm test calling getChanges with wrong heads 2022-05-20 10:05:08 +01:00
Andrew Jeffery
933bf5ee07 Return an error when getting clock for missing hash 2022-05-20 10:05:08 +01:00
Andrew Jeffery
c2765885fd Maintain incremental clocks 2022-05-20 10:05:08 +01:00
Andrew Jeffery
5e088ee9e0 Document clock module and add merge function 2022-05-20 10:05:08 +01:00
Andrew Jeffery
1b34892585 Add num_ops to change to quickly get the len 2022-05-20 10:05:08 +01:00
Andrew Jeffery
0de37d292d Sort change results from clock search 2022-05-20 10:05:08 +01:00
Andrew Jeffery
b9a6b3129f Add method to get changes by clock 2022-05-20 10:05:08 +01:00
Andrew Jeffery
11fbde47bb Use HASH_SIZE const in ChangeHash definition 2022-05-20 10:04:32 +01:00
Andrew Jeffery
70021556c0
Merge pull request #373 from jeffa5/sync-opt
Sync opt
2022-05-19 13:42:10 +01:00
Andrew Jeffery
e8e42b2d16 Remove need to collect hashes when building bloom filter 2022-05-19 10:41:23 +01:00
Andrew Jeffery
6bce8bf4fd Use vec with capacity when calculating bloom probes 2022-05-19 10:40:44 +01:00
Orion Henry
c7429abbf5
Merge pull request #369 from automerge/webpack
Webpack
2022-05-17 10:28:12 -07:00
Orion Henry
24fa61c11d
Merge pull request #370 from jeffa5/opt-seek-op
Optimise seek op and seek op with patch
2022-05-17 10:27:58 -07:00
Andrew Jeffery
d89669fcaa Add apply benchmarks 2022-05-16 23:13:35 +01:00
Andrew Jeffery
43c4ce76fb Optimise seek op with patch 2022-05-16 23:07:45 +01:00
Andrew Jeffery
531e434bf6 Optimise seek op 2022-05-16 22:45:41 +01:00
Orion Henry
e1f3ecfcf5 typescript implicit any 2022-05-16 15:09:55 -04:00
Orion Henry
409189e36a
Merge pull request #368 from jeromegn/rollback-no-actors
Don't remove last actor when there are none
2022-05-16 08:39:03 -07:00
Orion Henry
81dd1a56eb add start script - split up outputs 2022-05-16 11:33:08 -04:00
Jerome Gravel-Niquet
7acb9ed0e2
don't remove last actor when there are none 2022-05-16 10:56:10 -04:00
Orion Henry
d01e7ceb0e add webpack example and move into wasm folder 2022-05-15 11:53:55 -04:00
Orion Henry
aa5a03a0c4 webpack example config 2022-05-15 11:53:04 -04:00
Orion Henry
f6eca5eec6
Merge pull request #362 from jeffa5/range-rev
Add tests and fixes for double ended map range iterator
2022-05-12 09:02:05 -07:00
Orion Henry
b17c86e36e
Merge pull request #365 from automerge/opset-iter-nth
Implement OpTreeIter::nth correctly
2022-05-12 09:00:29 -07:00
Andrew Jeffery
f373deba6b Add length assertion 2022-05-11 21:15:50 +01:00
Andrew Jeffery
8f71ac30a4 Add index info to op_tree panic message 2022-05-11 20:26:39 +01:00
Alex Good
4e431c00a1
Implement OpTreeIter::nth correctly
The previous implementation of nth was incorrect, it returned the nth
element of the optree but it did not modify the internal state of the
iterator such that future calls to `next()` were after the nth element.
This commit fixes that.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-05-09 23:11:18 +01:00
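
The contract the fix restores is the standard `Iterator::nth` semantics, visible with a plain slice iterator: `nth(n)` consumes the first `n + 1` elements, so a subsequent `next()` must yield the element after the nth.

    fn main() {
        let mut it = [10, 20, 30, 40].iter();
        assert_eq!(it.nth(1), Some(&20)); // consumes elements 0 and 1
        assert_eq!(it.next(), Some(&30)); // continues after the nth element
    }
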
Alex Good
004d1a0cf2
Update CI toolchain to 1.60 2022-05-09 22:53:39 +01:00
Orion Henry
d6a6b34e99
Merge pull request #364 from jkankiewicz/improve_symmetry
C API symmetry improvements
2022-05-07 11:03:34 -04:00
Jason Kankiewicz
fdd3880bd3 Renamed AMalloc() to AMcreate().
Renamed `AMload()` to `AMloadIncremental()`.
Added the `AMload()` function.
2022-05-07 09:55:05 -05:00
Orion Henry
f0da2d2348
Merge pull request #361 from jkankiewicz/quickstart_error_reporting
Improve error reporting in C quickstart example.
2022-05-06 11:18:01 -04:00
Jason Kankiewicz
b56464c2e7 Switched to C comment delimiting. 2022-05-06 04:59:47 -05:00
Jason Kankiewicz
bb3d75604a Improved the documentation slightly. 2022-05-06 04:51:44 -05:00
Jason Kankiewicz
eb3155e49b Sorted main() to the top. Documented test(). 2022-05-06 04:50:02 -05:00
Andrew Jeffery
28a61f2dcd Add tests and fixes for double ended map range iterator 2022-05-06 09:49:00 +01:00
Jason Kankiewicz
944e5d8001 Trap and report all errors. 2022-05-05 21:21:46 -05:00
Andrew Jeffery
7d5eaa0b7f Move automerge unit tests to new file for clarity 2022-05-05 14:58:22 +01:00
Andrew Jeffery
5b15a04516 Some tidies 2022-05-05 14:52:01 +01:00
Orion Henry
dc441a1a61
Merge pull request #360 from jkankiewicz/add_quickstart
Add a port of Rust's quickstart to the C API
2022-05-04 10:28:14 -04:00
Orion Henry
3f746a0dc3
Merge pull request #358 from jeffa5/msrv
Use an MSRV in CI
2022-05-04 10:23:58 -04:00
Orion Henry
c43f672924
Merge pull request #356 from automerge/values_range_fix
fixed panic in doc.values() - fixed concurrency bugs in range
2022-05-04 10:22:46 -04:00
Orion Henry
fb8f3e5d4e fixme: performance 2022-05-04 10:09:50 -04:00
Orion Henry
54042bcf96 add an unimplemented double ended iterator 2022-05-04 09:50:27 -04:00
Jason Kankiewicz
729752dac2 De-emphasized the AMload() call's result. 2022-05-04 08:27:15 -05:00
Jason Kankiewicz
3cf990eabf Fixed some minor inconsistencies in quickstart.c. 2022-05-04 07:45:05 -05:00
Jason Kankiewicz
069c33a13e Moved the AMbyteSpan struct into its own source
file.
Added the `AMchangeHashes` struct.
Added the `AMchange` and `AMchanges` structs.
Tied the lifetime of an `AMobjId` struct to the `AMresult` struct that
it's returned through so that it can be used to reach equivalent objects
within multiple `AMdoc` structs.
Removed the `AMfreeObjId()` function.
Renamed `AMallocDoc()` to `AMalloc()`.
Added the `AMcommit()` function.
Added the `AMgetChangeHash()` function.
Added the `AMgetChanges()` function.
Added the `AMgetMessage()` function.
Added the `AMlistDelete()` function.
Added the `AMlistPutBool()` function.
Added the `AMmapDelete()` function.
Added the `AMmapPutBool()` function.
Added the `AMobjSizeAt()` function.
Added the `AMsave()` function.
Renamed the `AMvalue::Nothing` variant to `AMvalue::Void`.
Changed all `AMobjId` struct function arguments to be immutable.
2022-05-04 01:04:43 -05:00
Jason Kankiewicz
58e0ce5efb Renamed the AMvalue::Nothing variant to AMvalue::Void.
Tied the lifetime of an `AMobjId` struct to the `AMresult` struct that
it's returned through so that it can be used to reach equivalent objects
within multiple `AMdoc` structs.
Added test cases for the `AMlistPutBool()` function.
Added a test case for the `AMmapPutBool()` function.
2022-05-04 01:04:43 -05:00
Jason Kankiewicz
c6e7f993fd Moved the AMbyteSpan struct into its own source
file.
Added the `AMchangeHashes` struct.
Added the `AMchange` and `AMchanges` structs.
Added `ChangeHashes` and `Changes` variants to the `AMresult` struct.
Renamed the `AMvalue::Nothing` variant to `AMvalue::Void`.
Tied the lifetime of an `AMobjId` struct to the `AMresult` struct that
it's returned through so that it can be used to reach equivalent objects
within multiple `AMdoc` structs.
Consolidated the `AMresult` struct's related trait implementations.
2022-05-04 01:04:43 -05:00
Jason Kankiewicz
30b220d9b7 Added a port of the Rust quickstart example. 2022-05-04 01:04:43 -05:00
Jason Kankiewicz
bf6ee85c58 Added the time_t header. 2022-05-04 01:04:43 -05:00
Orion Henry
a728b8216b range -> map_range(), added list_range() values() works on both 2022-05-03 19:27:51 -04:00
Andrew Jeffery
0aab13a990 Set rust-version in cargo.tomls 2022-05-02 21:18:00 +01:00
Andrew Jeffery
3ec1127b50 Try 1.57.0 as msrv 2022-05-02 21:18:00 +01:00
Orion Henry
291557a019
Merge pull request #350 from jeffa5/opt-prop
Optimise prop query
2022-05-02 14:15:53 -04:00
Orion Henry
cc4b8399b1
Merge pull request #357 from automerge/faster-opset-iterator
Make the OpSet iterator faster
2022-05-02 14:15:06 -04:00
Orion Henry
bcdc8a2752 fmt 2022-05-02 13:32:59 -04:00
Orion Henry
0d3eb07f3f fix key/elemid bug and rename range to map_range 2022-05-02 13:30:59 -04:00
Alex Good
7f4460f200
Make the OpSet iterator faster
The opset iterator was using `OpTreeInternal::get(index)` to fetch each
successive element of the OpSet. This is pretty slow. We make this much
faster by implementing an iterator which is aware of the internal
structure of the OpTree.

This speeds up the save benchmark by about 10%.

Signed-off-by: Alex Good <alex@memoryandthought.me>
2022-05-01 00:07:39 +01:00
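
The shape of the speed-up, in an illustrative sketch (not the real `OpTree` code): indexed `get` re-scans the structure on every call, while an iterator that tracks its own position pays a constant amount per step.

    struct Tree {
        leaves: Vec<Vec<u32>>,
    }

    impl Tree {
        // what get(index)-style iteration pays on every single element
        fn get(&self, mut index: usize) -> Option<&u32> {
            for leaf in &self.leaves {
                if index < leaf.len() {
                    return Some(&leaf[index]);
                }
                index -= leaf.len();
            }
            None
        }
    }

    // the structural iterator remembers (leaf, offset) between calls
    struct TreeIter<'a> {
        tree: &'a Tree,
        leaf: usize,
        offset: usize,
    }

    impl<'a> Iterator for TreeIter<'a> {
        type Item = &'a u32;
        fn next(&mut self) -> Option<&'a u32> {
            let tree = self.tree;
            while self.leaf < tree.leaves.len() {
                let leaf = &tree.leaves[self.leaf];
                if self.offset < leaf.len() {
                    self.offset += 1;
                    return Some(&leaf[self.offset - 1]);
                }
                self.leaf += 1;
                self.offset = 0;
            }
            None
        }
    }
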
Orion Henry
9e6044c128 fixed panic in doc.values() - fixed concurrency bugs in range 2022-04-29 15:11:07 -04:00
Andrew Jeffery
6bf03e006c Add ability to skip in tree searches 2022-04-28 14:14:03 +01:00
Andrew Jeffery
8baacb281b Add save and load map benchmarks 2022-04-28 14:14:03 +01:00
Andrew Jeffery
7de0cff2c9 Rework benchmarks to be in a group 2022-04-28 14:14:03 +01:00
Andrew Jeffery
c38b49609f Remove clone from update
The cloning of the op was eating up a significant part of the increment
operation's time. This makes it zero-clone and just extracts the fields
needed.
2022-04-28 14:14:03 +01:00
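
The zero-clone pattern described above, in an illustrative sketch (the types are assumptions): borrow the op and copy out only the small fields the increment needs, rather than cloning the whole thing.

    #[derive(Clone)]
    struct Op {
        value: i64,
        raw_bytes: Vec<u8>, // the expensive part of a clone
    }

    // before: let op = op.clone(); use op.value ...
    fn increment_inputs(op: &Op) -> (i64, usize) {
        (op.value, op.raw_bytes.len()) // after: copy two words, clone nothing
    }
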
Andrew Jeffery
db280c3d1d prop: Skip over nodes 2022-04-28 14:14:03 +01:00
Andrew Jeffery
7dfe311aae Store keys as well as elemids in visible index 2022-04-28 14:14:03 +01:00
Andrew Jeffery
bb4727ac34 Skip empty nodes in prop query 2022-04-28 14:14:03 +01:00
Andrew Jeffery
bdacaa1703 Use treequery rather than repeated gets 2022-04-28 14:14:03 +01:00
Andrew Jeffery
a388ffbf19 Add some benches 2022-04-28 14:14:03 +01:00
Andrew Jeffery
ca8a2a0762 Add cmake deps to nix flake 2022-04-28 14:13:36 +01:00
Orion Henry
be33f91346 Merge branch 'experiment' 2022-04-27 11:58:53 -04:00
Orion Henry
1f86a92ca1 typo 2022-04-25 12:47:13 -04:00
Andrew Jeffery
8e6306b546 Re-add caching and just clean docs dir from cache 2022-04-23 11:44:12 +01:00
Andrew Jeffery
37e29e4473 Remove docs cache
The docs aren't built with deps, so they should be relatively quick to
build without a cache. The cache was also keeping stale artifacts from
previous versions (e.g. edit-trace).
2022-04-23 11:41:39 +01:00
Andrew Jeffery
adf8a5db12 Don't document edit-trace bin 2022-04-23 11:32:01 +01:00
Andrew Jeffery
ec446f4839 Add favicon 2022-04-23 11:31:58 +01:00
Andrew Jeffery
1e504de6ea
Merge pull request #351 from jeffa5/lints
Add generally useful lints and fixes
2022-04-23 11:21:32 +01:00
Andrew Jeffery
67da930a40 Add missing lints 2022-04-23 11:15:15 +01:00
Andrew Jeffery
9788cd881d Add debug impls 2022-04-23 11:14:07 +01:00
Andrew Jeffery
af951f324a Run cargo fix 2022-04-23 11:06:39 +01:00
Andrew Jeffery
48e397e82f Add lints 2022-04-23 11:05:43 +01:00
Andrew Jeffery
e7a8718434 Update badges 2022-04-23 10:47:21 +01:00
Andrew Jeffery
5b0ce54229 Add logo to docs 2022-04-23 10:46:03 +01:00
Andrew Jeffery
64c575fa85 Add image assets and sign to readme 2022-04-23 10:44:16 +01:00
Andrew Jeffery
a033ffa02b Update docs badge 2022-04-23 09:32:24 +01:00
Andrew Jeffery
afb1957d19 Add homepage badge 2022-04-23 09:31:28 +01:00
Andrew Jeffery
78ef6e3a2d Fix formatting 2022-04-23 09:27:01 +01:00
Andrew Jeffery
e3864e8fbd Add ci badge 2022-04-23 09:26:13 +01:00
Andrew Jeffery
23786bc746 Rename workflows 2022-04-23 09:26:05 +01:00
Andrew Jeffery
64363d7da2 Add docs badge 2022-04-23 09:22:25 +01:00
Andrew Jeffery
070608ddf2 Update CI to run on main branch 2022-04-22 17:51:01 +01:00
Andrew Jeffery
4f187859e7 Make web-sys optional and behind the wasm feature 2022-04-22 14:51:54 +01:00
Orion Henry
e41c5ae021 typescript bugfix 2022-04-20 22:05:05 -04:00
Orion Henry
e36f3c27c9
Merge pull request #347 from jeffa5/observer-counters
Add increment observation for observer
2022-04-20 20:35:18 -04:00
Orion Henry
1bee30c784
Merge branch 'main' into observer-counters 2022-04-20 20:35:07 -04:00
Orion Henry
9152c8366b
Merge pull request #343 from jkankiewicz/add_c_api
Add a C API to the "experiment" branch
2022-04-20 13:08:52 -04:00
Jason Kankiewicz
d099d553cc Apply patch from @orionz for the
"needless_lifetimes" clippy violation.
2022-04-20 10:21:56 -06:00
Orion Henry
1fc5e551bd
Merge pull request #346 from jeffa5/non-counter-increment
Prevent increment on non-counter
2022-04-20 11:48:52 -04:00
Andrew Jeffery
d667552a98 Add increment observation for observer 2022-04-20 14:44:04 +01:00
Andrew Jeffery
bfe7378968 Prevent increment on non-counter 2022-04-20 11:37:03 +01:00
Jason Kankiewicz
bc01267425 Fixed the clippy errors whose resolutions don't
cause compilation errors.
2022-04-20 01:54:57 -06:00
Jason Kankiewicz
dad2fd4928 Fixed a formatting violation. 2022-04-20 01:04:35 -06:00
Jason Kankiewicz
5128d1926d Replaced the verb "set" with the verb "put" within
the names of the source files for unit test suites.
2022-04-20 01:00:49 -06:00
Jason Kankiewicz
aaa2f7489b Fixed the compilation errors caused by merging PR
#310 into the "experiment" branch.
2022-04-20 00:57:52 -06:00
Jason Kankiewicz
8005f31a95 Squashed commit of the following:
commit 97a36e728e
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Fri Apr 8 03:24:46 2022 -0600

    Updated the unit test suites.

commit 56e2beb946
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Fri Apr 8 03:22:30 2022 -0600

    Tied the lifetime of an `AMobjId` struct to its
    owning `AMdoc` struct.

commit e16c980b2e
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Fri Apr 8 03:21:26 2022 -0600

    Reverted the `AMobjId` struct to being an opaque
    type.
    Added `AMobjId::new()` to fix a compilation error.
    Tied the lifetime of an `AMobjId` struct to its owning `AMdoc` struct.
    Added the `AMvalue::ChangeHash` variant.

commit 7c769b2cfe
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Fri Apr 8 03:12:15 2022 -0600

    Renamed the `AMobj` struct to `AMobjId` for
    clarity.
    Reverted the `AMobjId` struct to being an opaque type.
    Tied the lifetime of an `AMobjId` struct to its owning `AMdoc` struct.
    Renamed `AMcreate()` to `AMallocDoc()` for consistency with C's standard
    library functions.
    Renamed `AMdestroy()` to `AMfreeDoc()` for consistency with C's standard
    library functions.
    Renamed the `obj` function arguments to `obj_id` for clarity.
    Replaced the "set" verb in function names with the "put" verb for
    consistency iwth recent API changes.
    Renamed `AMclear()` to `AMfreeResult()` for consistency with C's
    standard library functions.
    Added `AMfreeObjId()` to enable dropping a persisted `AMobjId` struct.

commit 8d1b3bfcf2
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Fri Apr 8 02:52:52 2022 -0600

    Added a field for persisting `AMobjId` structs to the
    `AMdoc` struct.
    Renamed `AMdoc::create()` to `AMdoc::new()` to be more idiomatic.
    Added `AMdoc::insert_object()` and `AMdoc::set_object()` for persisting
    `AMobjId` structs.
    Added `AMdoc::drop_obj_id()` to enable dropping a persisted `AMobjId`
    struct.

commit b9b0f96357
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Thu Mar 17 15:17:08 2022 -0700

    Ensure CMake targets can be built after a clean.

commit d565db1ea8
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Thu Mar 17 15:10:09 2022 -0700

    Prevent unnecessary updating of the generated
    header file.

commit d3647e75d3
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 16 02:50:59 2022 -0700

    Documented the `AMObj.ID` struct member.

commit cc58cbf4bb
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 16 02:03:37 2022 -0700

    Normalize the formatting of the `AMobjType_tag()`
    function.

commit c2954dd2c7
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 16 02:02:03 2022 -0700

    Remove superfluous backslashes.

commit bcb6e759a4
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 16 02:01:33 2022 -0700

    Removed the `AMconfig()` function.
    Implemented the `AMgetActor()` function.
    Added the `AMgetActorHex()` function.
    Added the `AMsetActor()` function.
    Added the `AMsetActorHex()` function.

commit 9b2c566b9e
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 16 01:50:31 2022 -0700

    Added the "hex" and "smol_str" crate dependencies
    to the C API.

commit 99e06e1f73
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 16 01:06:15 2022 -0700

    Corrected a spelling error.

commit 629b19c71d
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 20:30:54 2022 -0700

    Align backslashes.

commit 09d25a32b7
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 20:30:23 2022 -0700

    Add EOF linefeed.

commit 4ed14ee748
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 20:05:30 2022 -0700

    Fix "fmt" CI job violations.

commit f53b40625d
Merge: 7d5538d8 e1f8d769
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 16:34:11 2022 -0700

    Merge branch 'c_api_exp' of https://github.com/automerge/automerge-rs into c_api_exp

commit 7d5538d8a4
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 16:31:22 2022 -0700

    Updated the C API's unit test suite.

commit 335cd1c85f
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 16:27:39 2022 -0700

    Removed superfluous `AMobj` traits.

commit 420f8cab64
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 16:25:48 2022 -0700

    Moved the `AMobj` struct into the `result` module.
    Changed the `AMobj` struct into an enum.
    Added the `AMbyteSpan` struct.
    Added the `AMvalue` enum.
    Added the `AMresult::Nothing` variant.

commit 4eca88ff01
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 15:56:06 2022 -0700

    Normalized all type name prefixes to "AM".
    Reduced the `AMstatus` enum's `<type>Ok` tags to a single `Ok` tag.
    Removed the `to_obj` macro.
    Added the `to_obj_id` macro.
    Moved the `AMobj` struct into the `result` module.
    Added the `AMresult::Nothing` variant.
    Added the `AMresultSize` function.
    Added the `AMresultValue` function.
    Added the `AMlistGet` function.
    Added the `AMmapGet` function.
    Removed the `AMgetObj` function.
    Added the `AMobjSize` function.

commit 2f94c6fd90
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 15:29:02 2022 -0700

    Compensate for unconfigurable cbindgen behavior.
    Prevent Doxygen documentation regeneration.

commit 5de00b7998
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 13 15:24:45 2022 -0700

    Alphabetized the cbindgen settings.

commit e1f8d769f4
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 10 08:53:07 2022 -0500

    update authors

commit 3e5525f1a6
Merge: f4ba1770 1c21abc5
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Wed Mar 9 14:36:29 2022 -0500

    Merge pull request #304 from jkankiewicz/c_api_exp

    Fix "fmt" workflow step violations

commit 1c21abc5a3
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 9 11:13:01 2022 -0800

    Fix CMake and Rust code formatting issues.

commit f4ba1770a9
Merge: bf1ae609 f41b30d1
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Wed Mar 9 12:05:58 2022 -0500

    Merge pull request #300 from jkankiewicz/c_api_exp

    Add unit test suites for the `AMlistSet*` and `AMmapSet*` functions

commit f41b30d118
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 22:08:36 2022 -0800

    Added a brief description of the `AmObjType` enum.
    Added the `AmStatus` enum to the enum docs page.

commit af7386a482
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 21:50:52 2022 -0800

    Added a unit test suite for the  `AMlistSet*`
    functions.

commit 1eb70c6eee
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 21:42:42 2022 -0800

    Added the rest of the `AMlistSet*` functions.
    Started the enum tags at `1` so they won't be
    inherently false.
    Alphabetized enum tags for the docs.
    Improved the docs.

commit 6489cba13b
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 18:01:46 2022 -0800

    Alphabetize functions in the docs.

commit 74c245b82d
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 07:54:25 2022 -0800

    Fix a typo in `AMmapSetObject()`'s documentation.

commit b2a879ba4e
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 06:24:22 2022 -0800

    Append missing EOF linefeed.

commit fbf0f29b66
Merge: c56d54b5 bf1ae609
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 01:08:12 2022 -0800

    Merge branch 'c_api_exp' of https://github.com/automerge/automerge-rs into c_api_exp

commit c56d54b565
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 01:07:11 2022 -0800

    Added unit test cases for the new `AMmapSet*`
    functions by @orionz.
    Moved the unit test cases for the `AMmapSet*` functions into their own
    unit test suite.

commit 7e59b55760
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 01:01:47 2022 -0800

    Edited the Doxygen documentation.

commit bf1ae60913
Author: Orion Henry <orion.henry@gmail.com>
Date:   Mon Mar 7 11:59:22 2022 -0500

    fmt

commit e82a7cc78e
Merge: a44e69d2 965c2d56
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Mon Mar 7 11:55:32 2022 -0500

    Merge pull request #299 from jkankiewicz/c_api_exp

    Enable unit testing of the C API

commit 965c2d56c3
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Mon Mar 7 06:37:36 2022 -0800

    Enable unit testing of the C API.

commit a44e69d2c7
Author: Orion Henry <orion.henry@gmail.com>
Date:   Sun Mar 6 14:00:46 2022 -0500

    remove datatype mapset

commit 88153c44e7
Merge: 41512e9c c6194e97
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Sun Mar 6 10:32:39 2022 -0500

    Merge pull request #298 from jkankiewicz/rebase_c_api_exp

    Rebase the "c_api_exp" branch on the "experiment" branch

commit c6194e9732
Merge: a2d745c8 41512e9c
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 6 01:09:56 2022 -0800

    Merge branch 'c_api_exp' into rebase_c_api_exp

commit a2d745c8d9
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 6 00:44:37 2022 -0800

    Replace the `utils::import_value` function with
    the `utils::import_scalar` function.
    Exclude `# Safety` comments from the documentation.

commit 0681e28b40
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 16:04:17 2022 -0500

    support new as_ref api

commit 916e23fcc2
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:56:27 2022 -0500

    fmt

commit 71cd6a1f18
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:54:38 2022 -0500

    lock data at 64 bit - no c_long

commit e00bd4c201
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:27:55 2022 -0500

    verbose

commit 39d157c554
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 14:56:23 2022 -0500

    clippy cleanup

commit 7f650fb8e0
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Feb 23 02:14:06 2022 -0800

    Added Doxygen documentation generation.
    Renamed `AMDatatype` to `AmDataType`.
    Reorganized the `AmDataType` tags.
    Renamed `AMfree()` to `AMdestroy()`.
    Renamed `AMclone()` to `AMdup()`.

commit b0b803eef8
Author: Orion Henry <orion.henry@gmail.com>
Date:   Tue Feb 22 11:30:42 2022 -0500

    get simple test passing

commit cab9017ffa
Author: Orion Henry <orion.henry@gmail.com>
Date:   Wed Feb 9 15:50:44 2022 -0500

    rework to return a queriable result

commit a557e848f3
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:38:00 2022 -0800

    Add a CI step to run the CMake build of the C bindings for @alexjg.

commit c8c0c72f3b
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:09:58 2022 -0800

    Add CMake instructions for @orionz.

commit fb62c4b02a
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 23:28:54 2022 -0800

    Add CMake support.

commit 7bc3bb6850
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 22:49:53 2022 -0800

    Replace *intptr_t in C function signatures.

commit 60395a2db0
Author: Orion Henry <orion.henry@gmail.com>
Date:   Sun Feb 6 18:59:19 2022 -0500

    am_pop and am_pop_value

commit b1e88047d2
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Feb 3 19:43:36 2022 -0500

    break the ground

commit 41512e9c78
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 16:04:17 2022 -0500

    support new as_ref api

commit bcee6a9623
Merge: cf98f78d 9a89db3f
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:58:19 2022 -0500

    Merge remote-tracking branch 'origin/experiment' into c_api_exp

commit cf98f78dd1
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:56:27 2022 -0500

    fmt

commit 3c1f449c5c
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:54:38 2022 -0500

    lock data at 64 bit - no c_long

commit 2c2ec0b0c5
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:27:55 2022 -0500

    verbose

commit b72b9c989a
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 14:56:23 2022 -0500

    clippy cleanup

commit 3ba28f91cc
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Feb 23 02:14:06 2022 -0800

    Added Doxygen documentation generation.
    Renamed `AMDatatype` to `AmDataType`.
    Reorganized the `AmDataType` tags.
    Renamed `AMfree()` to `AMdestroy()`.
    Renamed `AMclone()` to `AMdup()`.

commit 8564e5b753
Author: Orion Henry <orion.henry@gmail.com>
Date:   Tue Feb 22 11:30:42 2022 -0500

    get simple test passing

commit 60835e6ae7
Author: Orion Henry <orion.henry@gmail.com>
Date:   Wed Feb 9 15:50:44 2022 -0500

    rework to return a queriable result

commit 89466d9e8c
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:38:00 2022 -0800

    Add a CI step to run the CMake build of the C bindings for @alexjg.

commit e2485bd5fd
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:09:58 2022 -0800

    Add CMake instructions for @orionz.

commit b5cc7dd63d
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 23:28:54 2022 -0800

    Add CMake support.

commit 685536f0cf
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 22:49:53 2022 -0800

    Replace *intptr_t in C function signatures.

commit c1c6e7bb66
Author: Orion Henry <orion.henry@gmail.com>
Date:   Sun Feb 6 18:59:19 2022 -0500

    am_pop and am_pop_value

commit e68c8d347e
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Feb 3 19:43:36 2022 -0500

    break the ground
2022-04-20 00:48:59 -06:00
Orion Henry
439b9104d6 touch up readme and package files 2022-04-19 14:48:33 -04:00
Andrew Jeffery
e65200b150 Add docs workflow 2022-04-19 19:08:28 +01:00
Orion Henry
77eb094aea
Merge pull request #332 from jeffa5/experiment-observer
Add OpObserver to the document
2022-04-19 13:36:43 -04:00
Andrew Jeffery
aa3c32cea3 Add ApplyOptions 2022-04-19 18:15:22 +01:00
Andrew Jeffery
76a19185b7 Add separate functions for with op_observer 2022-04-19 17:48:11 +01:00
Andrew Jeffery
e1283e781d Add some more docs to the patches 2022-04-19 17:30:06 +01:00
Andrew Jeffery
702a0ec172 Add lifetimes to transact_with and fixup watch example 2022-04-19 17:30:06 +01:00
Andrew Jeffery
b6fd7ac26e Add op_observer to documents and transactions
This replaces the built-in patches with a more generic mechanism, and
includes a convenience observer which uses the old patches.
2022-04-19 17:30:05 +01:00
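
A hedged sketch of what such an observer interface might look like (the trait and its methods are assumptions for illustration; the real `OpObserver` differs): the document invokes callbacks as ops are applied, and the convenience observer mentioned above would implement them by reconstructing the old patch values.

    trait OpObserver {
        fn insert(&mut self, obj: u64, index: usize, value: String);
        fn put(&mut self, obj: u64, key: String, value: String);
        fn increment(&mut self, obj: u64, key: String, by: i64);
        fn delete(&mut self, obj: u64, key: String);
    }
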
Jason Kankiewicz
d4a904414d Squashed commit of the following:
commit e1f8d769f4
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 10 08:53:07 2022 -0500

    update authors

commit 3e5525f1a6
Merge: f4ba1770 1c21abc5
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Wed Mar 9 14:36:29 2022 -0500

    Merge pull request #304 from jkankiewicz/c_api_exp

    Fix "fmt" workflow step violations

commit 1c21abc5a3
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Mar 9 11:13:01 2022 -0800

    Fix CMake and Rust code formatting issues.

commit f4ba1770a9
Merge: bf1ae609 f41b30d1
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Wed Mar 9 12:05:58 2022 -0500

    Merge pull request #300 from jkankiewicz/c_api_exp

    Add unit test suites for the `AMlistSet*` and `AMmapSet*` functions

commit f41b30d118
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 22:08:36 2022 -0800

    Added a brief description of the `AmObjType` enum.
    Added the `AmStatus` enum to the enum docs page.

commit af7386a482
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 21:50:52 2022 -0800

    Added a unit test suite for the  `AMlistSet*`
    functions.

commit 1eb70c6eee
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 21:42:42 2022 -0800

    Added the rest of the `AMlistSet*` functions.
    Started the enum tags at `1` so they won't be
    inherently false.
    Alphabetized enum tags for the docs.
    Improved the docs.

commit 6489cba13b
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 18:01:46 2022 -0800

    Alphabetize functions in the docs.

commit 74c245b82d
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 07:54:25 2022 -0800

    Fix a typo in `AMmapSetObject()`'s documentation.

commit b2a879ba4e
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 06:24:22 2022 -0800

    Append missing EOF linefeed.

commit fbf0f29b66
Merge: c56d54b5 bf1ae609
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 01:08:12 2022 -0800

    Merge branch 'c_api_exp' of https://github.com/automerge/automerge-rs into c_api_exp

commit c56d54b565
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 01:07:11 2022 -0800

    Added unit test cases for the new `AMmapSet*`
    functions by @orionz.
    Moved the unit test cases for the `AMmapSet*` functions into their own
    unit test suite.

commit 7e59b55760
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Tue Mar 8 01:01:47 2022 -0800

    Edited the Doxygen documentation.

commit bf1ae60913
Author: Orion Henry <orion.henry@gmail.com>
Date:   Mon Mar 7 11:59:22 2022 -0500

    fmt

commit e82a7cc78e
Merge: a44e69d2 965c2d56
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Mon Mar 7 11:55:32 2022 -0500

    Merge pull request #299 from jkankiewicz/c_api_exp

    Enable unit testing of the C API

commit 965c2d56c3
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Mon Mar 7 06:37:36 2022 -0800

    Enable unit testing of the C API.

commit a44e69d2c7
Author: Orion Henry <orion.henry@gmail.com>
Date:   Sun Mar 6 14:00:46 2022 -0500

    remove datatype mapset

commit 88153c44e7
Merge: 41512e9c c6194e97
Author: Orion Henry <orionz@users.noreply.github.com>
Date:   Sun Mar 6 10:32:39 2022 -0500

    Merge pull request #298 from jkankiewicz/rebase_c_api_exp

    Rebase the "c_api_exp" branch on the "experiment" branch

commit c6194e9732
Merge: a2d745c8 41512e9c
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 6 01:09:56 2022 -0800

    Merge branch 'c_api_exp' into rebase_c_api_exp

commit a2d745c8d9
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Sun Mar 6 00:44:37 2022 -0800

    Replace the `utils::import_value` function with
    the `utils::import_scalar` function.
    Exclude `# Safety` comments from the documentation.

commit 0681e28b40
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 16:04:17 2022 -0500

    support new as_ref api

commit 916e23fcc2
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:56:27 2022 -0500

    fmt

commit 71cd6a1f18
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:54:38 2022 -0500

    lock data at 64 bit - no c_long

commit e00bd4c201
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:27:55 2022 -0500

    verbose

commit 39d157c554
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 14:56:23 2022 -0500

    clippy cleanup

commit 7f650fb8e0
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Feb 23 02:14:06 2022 -0800

    Added Doxygen documentation generation.
    Renamed `AMDatatype` to `AmDataType`.
    Reorganized the `AmDataType` tags.
    Renamed `AMfree()` to `AMdestroy()`.
    Renamed `AMclone()` to `AMdup()`.

commit b0b803eef8
Author: Orion Henry <orion.henry@gmail.com>
Date:   Tue Feb 22 11:30:42 2022 -0500

    get simple test passing

commit cab9017ffa
Author: Orion Henry <orion.henry@gmail.com>
Date:   Wed Feb 9 15:50:44 2022 -0500

    rework to return a queryable result

commit a557e848f3
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:38:00 2022 -0800

    Add a CI step to run the CMake build of the C bindings for @alexjg.

commit c8c0c72f3b
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:09:58 2022 -0800

    Add CMake instructions for @orionz.

commit fb62c4b02a
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 23:28:54 2022 -0800

    Add CMake support.

commit 7bc3bb6850
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 22:49:53 2022 -0800

    Replace *intptr_t in C function signatures.

commit 60395a2db0
Author: Orion Henry <orion.henry@gmail.com>
Date:   Sun Feb 6 18:59:19 2022 -0500

    am_pop and am_pop_value

commit b1e88047d2
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Feb 3 19:43:36 2022 -0500

    break the ground

commit 41512e9c78
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 16:04:17 2022 -0500

    support new as_ref api

commit bcee6a9623
Merge: cf98f78d 9a89db3f
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:58:19 2022 -0500

    Merge remote-tracking branch 'origin/experiment' into c_api_exp

commit cf98f78dd1
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:56:27 2022 -0500

    fmt

commit 3c1f449c5c
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:54:38 2022 -0500

    lock data at 64 bit - no c_long

commit 2c2ec0b0c5
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 15:27:55 2022 -0500

    verbose

commit b72b9c989a
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Mar 3 14:56:23 2022 -0500

    clippy cleanup

commit 3ba28f91cc
Author: Jason Kankiewicz <jason.kankiewicz@gmail.com>
Date:   Wed Feb 23 02:14:06 2022 -0800

    Added Doxygen documentation generation.
    Renamed `AMDatatype` to `AmDataType`.
    Reorganized the `AmDataType` tags.
    Renamed `AMfree()` to `AMdestroy()`.
    Renamed `AMclone()` to `AMdup()`.

commit 8564e5b753
Author: Orion Henry <orion.henry@gmail.com>
Date:   Tue Feb 22 11:30:42 2022 -0500

    get simple test passing

commit 60835e6ae7
Author: Orion Henry <orion.henry@gmail.com>
Date:   Wed Feb 9 15:50:44 2022 -0500

    rework to return a queryable result

commit 89466d9e8c
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:38:00 2022 -0800

    Add a CI step to run the CMake build of the C bindings for @alexjg.

commit e2485bd5fd
Author: Jason Kankiewicz <you@example.com>
Date:   Mon Feb 14 14:09:58 2022 -0800

    Add CMake instructions for @orionz.

commit b5cc7dd63d
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 23:28:54 2022 -0800

    Add CMake support.

commit 685536f0cf
Author: Jason Kankiewicz <you@example.com>
Date:   Thu Feb 10 22:49:53 2022 -0800

    Replace *intptr_t in C function signatures.

commit c1c6e7bb66
Author: Orion Henry <orion.henry@gmail.com>
Date:   Sun Feb 6 18:59:19 2022 -0500

    am_pop and am_pop_value

commit e68c8d347e
Author: Orion Henry <orion.henry@gmail.com>
Date:   Thu Feb 3 19:43:36 2022 -0500

    break the ground
2022-04-19 08:35:44 -06:00
Orion Henry
696adb5005
Merge pull request #342 from automerge/doublequeue
duplicate changes in the queue could corrupt internal state
2022-04-19 10:31:35 -04:00
Orion Henry
9d7798a8c4 readme updates 2022-04-18 18:41:44 -04:00
Orion Henry
6872e3fa9b
Merge pull request #338 from jeffa5/experiment-double-ended-range
Add double ended iterator for Range and Values
2022-04-18 17:28:03 -04:00
Orion Henry
96d5fc7e60
Merge pull request #340 from jeffa5/experiment-parents-iter
Add parents iterator
2022-04-18 17:26:33 -04:00
Orion Henry
757f1f058a simplify test more 2022-04-18 17:03:32 -04:00
Orion Henry
c66d8a5b54 fmt 2022-04-18 16:43:28 -04:00
Orion Henry
ab09a7aa5d make test simpler 2022-04-18 16:39:11 -04:00
Orion Henry
5923d67bea duplicate changes in the queue could corrupt internal state 2022-04-18 16:31:13 -04:00
Andrew Jeffery
a65838076d Add parents iterator
This allows users to have the convenience of getting all of the parents
of an object, whilst allowing them to terminate early when they have
found what they need.
2022-04-18 16:15:29 +01:00
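
A rough sketch of what this iterator enables, written against a recent automerge crate (names such as `AutoCommit`, `put_object`, and `parents` are taken from later releases, not from this commit, so treat them as assumptions):

```rust
use automerge::{transaction::Transactable, AutoCommit, ObjType, ReadDoc, ROOT};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut doc = AutoCommit::new();
    let todos = doc.put_object(ROOT, "todos", ObjType::List)?;
    let item = doc.insert_object(&todos, 0, ObjType::Map)?;

    // Walk the ancestors lazily, from the object towards the root.
    for parent in doc.parents(&item)? {
        println!("{:?}", parent);
        break; // terminate early once we've found what we need
    }
    Ok(())
}
```
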
Andrew Jeffery
122b227101 Borrow the key 2022-04-15 20:47:02 +01:00
Andrew Jeffery
fb3b740a57 Make range just be over maps 2022-04-15 15:01:28 +01:00
Andrew Jeffery
cdfc2d056f Add double ended iterator for Range and Values 2022-04-15 14:39:44 +01:00
Orion Henry
d1407480d2
Merge pull request #333 from automerge/wasm_readme
Wasm readme
2022-04-08 19:05:50 -04:00
Orion Henry
93870d4127 smol str issue 2022-04-08 18:58:52 -04:00
Orion Henry
99dc6e2314 fix smol_str dep 2022-04-08 18:55:53 -04:00
Orion Henry
a791714f74 extend documentation 2022-04-08 18:34:04 -04:00
Orion Henry
965240d8f6 Merge remote-tracking branch 'origin/experiment' into wasm_readme 2022-04-08 18:07:44 -04:00
Orion Henry
09259e5f68
Merge pull request #326 from jeffa5/experiment-range
Add range and values queries
2022-04-08 17:19:47 -04:00
Orion Henry
5555d50693 readme fixes 2022-04-08 17:10:53 -04:00
Andrew Jeffery
07553195fa Update wasm and js with new names 2022-04-08 18:23:56 +01:00
Andrew Jeffery
679b3d20ce Add range_at and values_at to transactable 2022-04-08 18:19:03 +01:00
Andrew Jeffery
bcf191bea3 Add values_at 2022-04-08 18:18:48 +01:00
Andrew Jeffery
89eb598858 Fix keys_at 2022-04-08 18:18:48 +01:00
Andrew Jeffery
baa56b0b57 Add range_at 2022-04-08 18:18:48 +01:00
Andrew Jeffery
decd03a5d7 Add values iterator 2022-04-08 18:18:47 +01:00
Andrew Jeffery
1ca49cfa9b Add range to transactable and rename value to get
Also changes values to get_conflicts for more clarity on what it does
and opens up the name for iterating over values.
2022-04-08 18:18:22 +01:00
Andrew Jeffery
4406a5b208 Add range query
This is a way of efficiently getting just the keys and values in a
range.
2022-04-08 18:17:54 +01:00
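
The kind of query this adds, sketched against a recent crate where the map version is called `map_range` (the rename relative to the `range` introduced here is an assumption about the current API):

```rust
use automerge::{transaction::Transactable, AutoCommit, ReadDoc, ROOT};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut doc = AutoCommit::new();
    for key in ["apple", "banana", "cherry", "fig"] {
        doc.put(ROOT, key, 1)?;
    }
    // Visits only the keys in ["b", "d"); the op tree is traversed
    // directly, with no Vec of the whole key set allocated first.
    for item in doc.map_range(ROOT, "b".to_string().."d".to_string()) {
        println!("{:?}", item);
    }
    Ok(())
}
```
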
Orion Henry
609234bb9d
Merge pull request #330 from jeffa5/experiment-graphemes
Remove grapheme splitting internally
2022-04-08 12:54:54 -04:00
Andrew Jeffery
69f51b77f4
Merge pull request #334 from jeffa5/experiment-add-object-text-at
Add object replacement character in text_at
2022-04-08 05:38:05 -05:00
Andrew Jeffery
94a122478d Add object replacement character in text_at 2022-04-08 10:13:52 +01:00
Andrew Jeffery
1bbcd4c151 Test that we can insert long strings into text 2022-04-08 09:36:48 +01:00
Andrew Jeffery
80ce447d72 Add conversion from &String for Value and ScalarValue 2022-04-08 09:36:48 +01:00
Andrew Jeffery
e4e9e9a691 Add tests for inserting into text
This ensures that we can still insert entire graphemes (small strings)
and break them into chars automatically.
2022-04-08 09:36:47 +01:00
Andrew Jeffery
842797f3aa Use Unicode Scalars instead of graphemes in text 2022-04-08 09:35:59 +01:00
Orion Henry
9ca4792424 fmt 2022-04-07 14:53:14 -04:00
Orion Henry
37d90c5b8e optimize fork_at 2022-04-07 14:43:56 -04:00
Orion Henry
9f3ae61b91 use doc.text() in js toString() 2022-04-07 14:24:12 -04:00
Orion Henry
f5d858df82 Merge remote-tracking branch 'origin/experiment' into wasm_readme 2022-04-07 14:23:52 -04:00
Orion Henry
6d9ed5cde4 start at 0.0.1 2022-04-07 14:17:16 -04:00
Orion Henry
88bd14c07e
Merge pull request #309 from jeffa5/experiment-parent-obj
Add `parent_object` and `path_to_object` functions
2022-04-07 14:03:35 -04:00
Andrew Jeffery
06d2306d54 Add path_to_object 2022-04-07 15:04:00 +01:00
Andrew Jeffery
cc8134047a Document parent_object 2022-04-07 14:52:25 +01:00
Andrew Jeffery
e9adc32486 Fixup OpIdSearch's key extraction 2022-04-07 14:51:31 +01:00
Andrew Jeffery
a88d49cf45 Fixup builds 2022-04-07 14:32:17 +01:00
Andrew Jeffery
ebb73738da Remove B 2022-04-07 14:21:52 +01:00
Andrew Jeffery
bd2f252e0b Try and fix parent object query 2022-04-07 14:21:17 +01:00
Andrew Jeffery
9e71736b88 Fixup after rebase 2022-04-07 14:21:16 +01:00
Andrew Jeffery
12a4987ce7 Use prop rather than exposing legacy::Key 2022-04-07 14:20:57 +01:00
Andrew Jeffery
aeadedd584 Add watch example 2022-04-07 14:20:57 +01:00
Andrew Jeffery
dcc6c68485 Add parent's id to the op tree 2022-04-07 14:20:56 +01:00
Andrew Jeffery
0f2bd3fb27 Make edit-trace vals be a string and use splice_text 2022-04-07 12:22:28 +01:00
Orion Henry
9fe8447d21 loadDoc -> load() and forkAt() 2022-04-07 01:19:27 -04:00
Andrew Jeffery
d65280518d
Merge pull request #329 from jeffa5/experiment-treequery-ref
Have queries be able to return references to scalars
2022-04-06 03:06:39 -05:00
Andrew Jeffery
53f6904ae5 Add to_owned method to get a static value 2022-04-04 21:13:09 +01:00
Andrew Jeffery
330aebb44a Make wasm ScalarValue take a cow 2022-04-04 21:04:23 +01:00
Orion Henry
17acab25b5 fix _obj notation 2022-04-04 12:51:54 -04:00
Orion Henry
0d83f5f595 decorate 2022-04-04 12:50:13 -04:00
Orion Henry
777a516051 spelling/grammar 2022-04-04 12:50:13 -04:00
Orion Henry
4edb034a64 adding readme tests 2022-04-04 12:50:08 -04:00
Orion Henry
3737ad316b spelling 2022-04-04 12:37:59 -04:00
Orion Henry
051a0bbb54 early draft of the readme 2022-04-04 12:37:59 -04:00
Orion Henry
83c08344e7 wip2 2022-04-04 12:37:57 -04:00
Orion Henry
d8c126d1bc wip 2022-04-04 12:35:28 -04:00
Andrew Jeffery
545807cf74 Have historic versions clone the value again
This is currently needed to avoid the issue with counters.
2022-04-04 13:06:36 +01:00
Andrew Jeffery
fa2971a29a Have value be a reference for scalars 2022-04-04 12:47:08 +01:00
Andrew Jeffery
a2d4b2a778 Use ref on seek_op 2022-04-04 11:58:37 +01:00
Andrew Jeffery
48ce85dbfb Add ref to treequery to allow borrowing ops 2022-04-04 11:55:22 +01:00
Andrew Jeffery
8f4562b2cb Have apply_changes take an iterator 2022-04-01 23:02:56 +01:00
Andrew Jeffery
b54075fe4d Add makefile to run edit-traces 2022-04-01 13:56:15 +01:00
Andrew Jeffery
6494945a42
Merge pull request #327 from jeffa5/experiment-del-inc-names
Rename `del` and `inc` to `delete` and `increment`
2022-04-01 07:47:13 -05:00
Andrew Jeffery
d331ceb6d4 Rename set to put and set_object to put_object 2022-04-01 13:40:58 +01:00
Andrew Jeffery
5cbc977076 More internal renames of del and inc 2022-04-01 13:36:27 +01:00
Andrew Jeffery
632857a4e6 Rename del and inc in wasm and js 2022-04-01 13:36:26 +01:00
Andrew Jeffery
1a66dc7ab1 Use full names for delete and increment 2022-04-01 13:36:00 +01:00
Andrew Jeffery
790423c7ae
Merge pull request #328 from jeffa5/experiment-js-names
Make Js names consistently camelCase
2022-04-01 07:34:20 -05:00
Andrew Jeffery
3631ddfd55 Fix js side 2022-04-01 11:48:04 +01:00
Andrew Jeffery
0c16dfe2aa Change js function names to camelCase 2022-04-01 11:46:43 +01:00
Andrew Jeffery
35ddda5e0f
Merge pull request #324 from jeffa5/experiment-remove-const-b
Remove const B: usize requirement everywhere
2022-03-31 08:14:25 -05:00
Andrew Jeffery
0e457d5891 Remove const B: usize requirement everywhere
This doesn't need to be generic on everything, just defined once as a
const and referenced.
2022-03-31 13:53:26 +01:00
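
The shape of this refactor on a toy type (not the actual automerge internals): the branching factor stops being a const generic parameter threaded through every signature and becomes a single crate-level constant.

```rust
// Before: the branching factor is a const generic on every type.
#[allow(dead_code)]
struct OpTreeNodeGeneric<const B: usize> {
    children: Vec<OpTreeNodeGeneric<B>>,
}

// After: defined once as a const and referenced where needed.
const B: usize = 16;

struct OpTreeNode {
    children: Vec<OpTreeNode>,
}

impl OpTreeNode {
    fn is_full(&self) -> bool {
        self.children.len() >= B
    }
}

fn main() {
    let node = OpTreeNode { children: Vec::new() };
    assert!(!node.is_full());
}
```
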
Andrew Jeffery
12f070ce45
Merge pull request #323 from jeffa5/experiment-update-tree
Change set to update to avoid cloning and make it more efficient
2022-03-31 07:05:37 -05:00
Andrew Jeffery
a69643c9cc Change set to update to avoid cloning and make it more efficient 2022-03-31 12:04:42 +01:00
Orion Henry
1c4dc88de3
Merge pull request #312 from automerge/generate-patches
Generate patches
2022-03-30 16:59:48 -04:00
Orion Henry
ab580df947 Merge remote-tracking branch 'origin/experiment' into getnerate-patches 2022-03-30 13:04:51 -06:00
Orion Henry
2dcbfbf27d clippy 2022-03-30 13:28:52 -04:00
Martin Kleppmann
f83fb5ec61 More tests 2022-03-30 13:12:07 -04:00
Martin Kleppmann
ab4dc331ac cargo fmt 2022-03-30 13:12:07 -04:00
Martin Kleppmann
a9eddd88cc Bugfix: resurrection of deleted list elements 2022-03-30 13:12:07 -04:00
Martin Kleppmann
975338900c Document another suspected bug
Testing this is harder because I need to construct a tree in which list
elements are split across multiple tree nodes, and the number of list
elements required to trigger this condition depends on the branching
factor of the tree, which I don't really want to hard-code into the
tests in case we change it...
2022-03-30 13:12:07 -04:00
Martin Kleppmann
361db06eb5 Delete unnecessary code
This check is not needed because the case `e == HEAD` can only happen if
`self.op` is a list insertion operation, and an insertion operation
always has empty `preds`, so it can never overwrite any existing list
element.
2022-03-30 13:12:07 -04:00
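
The invariant that justifies the deletion, restated as a toy assertion (the types here are simplified stand-ins, not the real automerge internals):

```rust
#[derive(PartialEq)]
#[allow(dead_code)]
enum Key {
    Head,      // the synthetic list head
    Elem(u64), // a real element id (simplified)
}

struct Op {
    insert: bool,
    pred: Vec<u64>, // ids of the ops this op overwrites
}

// If we are positioned at HEAD, the op must be a list insertion, and
// insertions always carry empty preds, so no existing element can be
// overwritten at that position; the deleted check was unreachable.
fn assert_head_invariant(op: &Op, e: &Key) {
    if *e == Key::Head {
        debug_assert!(op.insert && op.pred.is_empty());
    }
}

fn main() {
    assert_head_invariant(&Op { insert: true, pred: vec![] }, &Key::Head);
}
```
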
Martin Kleppmann
ba177c3d83 Fix broken handling of conflicts on list elements 2022-03-30 13:12:07 -04:00
Martin Kleppmann
fa0a8953dc More tests and comments 2022-03-30 13:12:07 -04:00
Martin Kleppmann
cf508a94a9 Slight simplification 2022-03-30 13:12:07 -04:00
Martin Kleppmann
289dd95196 Fix index calculation for insertions at the head 2022-03-30 13:12:07 -04:00
Martin Kleppmann
c908979372 Fix search for the correct insertion position 2022-03-30 13:12:07 -04:00
Martin Kleppmann
7025bb6541 Tests and fixes for list patches 2022-03-30 13:12:07 -04:00
Martin Kleppmann
145969152a Fix conversion of OpId to ExId when referring to root object 2022-03-30 13:12:07 -04:00
Martin Kleppmann
94ff10f690 Rename and reformat a bit 2022-03-30 13:12:07 -04:00
Martin Kleppmann
26efee509d First patch implementation from pairing session with Orion 2022-03-30 13:12:01 -04:00
Andrew Jeffery
3039efca9b Use pending_ops rather than direct len of operations 2022-03-30 12:18:44 +01:00
Andrew Jeffery
a989e294f8 Use entry api in index 2022-03-29 21:05:09 +01:00
Andrew Jeffery
3c294d8fca Document some fields on structs 2022-03-29 21:05:03 +01:00
Andrew Jeffery
0af471a1a1 Document object_type function 2022-03-29 20:45:20 +01:00
Andrew Jeffery
0da8ceddce Use iter() in IntoIterator 2022-03-29 20:34:20 +01:00
Orion Henry
be8f367d07 missing test tag 2022-03-29 11:39:25 -04:00
Orion Henry
93082ad6a9
Merge pull request #319 from jeffa5/experiment-broken-list
Add broken list tests
2022-03-29 11:14:46 -04:00
Orion Henry
fb586455dd
Merge branch 'experiment' into experiment-broken-list 2022-03-29 11:14:35 -04:00
Orion Henry
5d9880e1e1
Merge pull request #320 from jeffa5/experiment-last-elem
Fix nth query's last_elem
2022-03-29 11:08:31 -04:00
Andrew Jeffery
f002e7261b Update comments 2022-03-28 10:37:14 +01:00
Andrew Jeffery
636fe75647 Simplify query_node for insert and nth 2022-03-28 10:34:00 +01:00
Andrew Jeffery
1c6032bee0 Reset B to 16 2022-03-28 10:33:42 +01:00
Andrew Jeffery
fb6f2787b2 Remove last_elem in nth query 2022-03-28 10:18:15 +01:00
Andrew Jeffery
ece1e22283 Fix clippy 2022-03-28 10:18:15 +01:00
Andrew Jeffery
8f201562c3 Add better comments 2022-03-28 10:18:15 +01:00
Andrew Jeffery
a19aae484c Don't set last_seen unless the elemid was actually visible 2022-03-28 10:18:15 +01:00
Andrew Jeffery
b280138f84 Remove explicit len on index 2022-03-28 10:18:13 +01:00
Andrew Jeffery
1b5730c0ae Fix insert query to not skip past insert positions
When inserting, once we have seen enough elements, look for the first
index to insert at rather than skipping over it.
2022-03-28 10:17:46 +01:00
Andrew Jeffery
49c4bf4911 Rename has to has_visible 2022-03-28 10:17:46 +01:00
Andrew Jeffery
a30bdc3888 Add broken list tests 2022-03-28 10:17:46 +01:00
Andrew Jeffery
e945ebbe74 Remove last_elem from nth_at 2022-03-27 15:35:44 +01:00
Andrew Jeffery
20229ee2d0 Remove last_elem in nth query 2022-03-27 15:28:49 +01:00
Andrew Jeffery
83d298ce8d Add test for broken last_elem 2022-03-27 15:28:49 +01:00
Andrew Jeffery
192356c099
Merge pull request #318 from jeffa5/experiment-query-consts
Remove unnecessary consts in queries
2022-03-26 12:35:39 -05:00
Andrew Jeffery
666782896d Remove unnecessary consts in queries 2022-03-26 09:11:41 +00:00
Andrew Jeffery
edbfce056c
Merge pull request #317 from jeffa5/experiment-nonzero-start_op
Make start_op be nonzero to prevent bad loads
2022-03-24 12:17:28 -05:00
Andrew Jeffery
9cb52d127f
Merge pull request #316 from jeffa5/experiment-errors
Expose encoding and decoding errors
2022-03-24 12:17:12 -05:00
Andrew Jeffery
ed244d980a Make start_op be nonzero to prevent bad loads 2022-03-24 16:42:46 +00:00
Andrew Jeffery
ec3785ab2b Expose encoding and decoding errors 2022-03-24 16:20:23 +00:00
Orion Henry
f5e8b998ca expose getChangeByHash in wasm 2022-03-23 09:34:44 -04:00
Orion Henry
9e1a063bc0 v20 - object replacement char 2022-03-14 14:47:54 -04:00
Andrew Jeffery
a4e8d20266 Optimise getting number of ops when applying tx or changes 2022-03-11 12:25:34 +00:00
Andrew Jeffery
ac18f7116f And fixup IntoIterator 2022-03-11 12:25:18 +00:00
Andrew Jeffery
67251f4d53 Have splice take IntoIterator 2022-03-11 12:24:02 +00:00
Andrew Jeffery
2e49561ab2 Make splice take iterator instead of vec 2022-03-11 12:13:11 +00:00
Andrew Jeffery
927c867884 Replace no longer returns an op 2022-03-11 12:04:00 +00:00
Andrew Jeffery
288b4674a0
Merge pull request #308 from jeffa5/experiment-redundant-objid
Remove obj and change from Op
2022-03-11 11:40:52 +00:00
Andrew Jeffery
488df55385 Remove change field on Op as unused
This field was never read from.
2022-03-11 11:40:42 +00:00
Andrew Jeffery
a2cb15e936 Remove obj from the op as it can be gotten from the optree
This makes the Op struct smaller, helping memory usage and cache
coherence.
2022-03-11 11:40:28 +00:00
Andrew Jeffery
4b52c2053e
Merge pull request #307 from jeffa5/experiment-apply-change
Stop exposing apply_change
2022-03-11 11:39:03 +00:00
Andrew Jeffery
4fa1d056c6 Stop exposing apply_change
It doesn't do checks or raise errors so shouldn't really be exposed.
2022-03-10 18:22:06 +00:00
Orion Henry
ed232fae72
Merge pull request #305 from automerge/paths
add paths/materialize to api
2022-03-10 09:22:59 -05:00
Orion Henry
4ff6dca175 rename error message for foreign objid 2022-03-10 08:47:52 -05:00
Orion Henry
ee116bb5d7 object_type returns an option 2022-03-09 19:42:58 -05:00
Orion Henry
c51073c150 add paths/materialize to api 2022-03-09 17:53:30 -05:00
Andrew Jeffery
0fca6a48ee Add loading to edit-trace rust benchmark 2022-03-09 18:12:05 +00:00
Orion Henry
5b2582bc04
Merge pull request #301 from jeffa5/experiment-value-api
Cleanup value API
2022-03-09 12:06:54 -05:00
Andrew Jeffery
42233414b3 Add some documentation 2022-03-09 16:53:26 +00:00
Andrew Jeffery
0d7f52d21f
Merge pull request #303 from jeffa5/experiment-wasm-tests-ci
Add wasm tests to CI
2022-03-09 16:11:56 +00:00
Andrew Jeffery
d3b97a3cbb Add wasm tests to CI 2022-03-09 16:02:08 +00:00
Orion Henry
f230be8aec change the wasm commit back to an array 2022-03-09 10:41:14 -05:00
Orion Henry
e4d85f47a3
Merge pull request #302 from jeffa5/experiment-misc-api
Misc API changes
2022-03-09 09:48:49 -05:00
Andrew Jeffery
266f112e91 Document some sync api 2022-03-09 13:04:10 +00:00
Andrew Jeffery
e26837b09d Move sync structs to module 2022-03-09 12:43:52 +00:00
Andrew Jeffery
d00cee1637 Misc API updates
- Commit now returns just a single hash rather than a vec. Since the
  change we create from committing has all of the heads as deps, there
  can only be one hash/head after committing.
- Apply changes now takes a Vec rather than a slice. This avoids having
  to clone them inside.
- transact_with now passes the result of the closure to the commit
  options function
- Remove patch struct
- Change receive_sync_message to return a () instead of the
  `Option<Patch>`
- Change `Transaction*` structs to just `*` and use the transaction
  module
- Make CommitOptions fields public
2022-03-09 12:33:20 +00:00
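
A sketch of the committed shape of the transaction API, using the `put` name from the rename recorded earlier in this log (d331ceb6d4); at the time of this commit the method was still called `set`:

```rust
use automerge::{transaction::Transactable, Automerge, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc = Automerge::new();
    let mut tx = doc.transaction();
    tx.put(ROOT, "status", "done")?;
    // The new change has all current heads as its deps, so committing
    // can only ever produce one new head: a single hash, not a Vec.
    let _hash = tx.commit();
    Ok(())
}
```
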
Andrew Jeffery
3cff67002a
Merge pull request #297 from jeffa5/experiment-del-nothing
Add failing tests for deleting nothing
2022-03-09 11:15:22 +00:00
Andrew Jeffery
ebe7bae992 Fix typo on QueryResult 2022-03-09 11:14:21 +00:00
Andrew Jeffery
875bfdd7f2 Update save call 2022-03-09 10:33:57 +00:00
Andrew Jeffery
5f200e3bf5 Update delete nothing tests 2022-03-09 10:31:25 +00:00
Andrew Jeffery
a9737a6815 Fix del missing key in map 2022-03-09 10:31:25 +00:00
Andrew Jeffery
73ac96b7a2 Add failing tests for deleting nothing 2022-03-09 10:31:25 +00:00
Andrew Jeffery
4b32ee882a Cleanup value API
Adds conversions to contained types, is_* methods for checking the
variant and conversions from more types.
2022-03-09 10:28:25 +00:00
Orion Henry
beae33402a update wasm test for set_object 2022-03-07 11:46:25 -05:00
Orion Henry
95f27f362c
Merge pull request #283 from jeffa5/experiment-make
Separate scalars and objects in transaction API
2022-03-04 16:53:17 -05:00
Orion Henry
6b505419b6
Merge pull request #292 from jeffa5/experiment-actorid-api
Cleanup actor id api
2022-03-04 12:44:27 -05:00
Orion Henry
b9acf611fa
Merge pull request #293 from jeffa5/experiment-sync-api
Clean up sync api
2022-03-04 12:39:03 -05:00
Orion Henry
390ae49be0
Merge pull request #294 from jeffa5/experiment-infallible-save
Make save infallible
2022-03-04 12:37:51 -05:00
Andrew Jeffery
b79da38dea
Merge pull request #295 from jeffa5/experiment-decode-change
Make decode_change an associated function
2022-03-04 16:50:19 +00:00
Andrew Jeffery
cd5e734735 Make decode_change an associated function 2022-03-04 13:09:29 +00:00
Andrew Jeffery
a4432bdc3d Nothing really Into's ObjType so just take it directly 2022-03-04 13:03:19 +00:00
Andrew Jeffery
000576191e Clean up sync api 2022-03-04 12:32:07 +00:00
Andrew Jeffery
d71e87882e Make save infallible 2022-03-04 12:28:05 +00:00
Andrew Jeffery
c406742760
Merge pull request #291 from jeffa5/experiment-tx-closed
Unpub ensure_transaction_closed
2022-03-04 12:07:53 +00:00
Andrew Jeffery
2f3fe0e342 Cleanup actor id api
Default can be a footgun and confuse users; it was used internally but
that now uses the `from` impls. Also, opidat wasn't used and doesn't
seem to need to be public.
2022-03-04 12:06:43 +00:00
Andrew Jeffery
555f4c6b98 Unpub ensure_transaction_closed
This provides the same functionality as a commit but without messages or
timestamps, and doesn't return the heads. It shouldn't really be a
public API; users should use commit instead.
2022-03-04 11:59:51 +00:00
Andrew Jeffery
535d2eb92f Fix js proxy api 2022-03-04 11:46:03 +00:00
Andrew Jeffery
2ebb3fea6f Fixup cli 2022-03-04 11:37:44 +00:00
Andrew Jeffery
e1aeb4fd88 Fixup new test after rebase 2022-03-04 11:33:03 +00:00
Andrew Jeffery
4fe7df3d0e Fix clippy lint 2022-03-04 09:51:50 +00:00
Andrew Jeffery
93a20f302d Fixup wasm lib 2022-03-04 09:51:50 +00:00
Andrew Jeffery
f8cffa3deb Fix edit trace 2022-03-04 09:51:49 +00:00
Andrew Jeffery
b6c9d90d84 Rename value to object in insert_object 2022-03-04 09:51:17 +00:00
Andrew Jeffery
338dc1bece Change splice to accept scalars only 2022-03-04 09:51:16 +00:00
Andrew Jeffery
79d493ddd2 Rename make to set_object 2022-03-04 09:50:48 +00:00
Andrew Jeffery
e42adaf84b Fixup automerge tests 2022-03-04 09:47:37 +00:00
Andrew Jeffery
9406bf09ea Fix some tests 2022-03-03 22:53:55 +00:00
Andrew Jeffery
1a6abddb50 Example of make in the API 2022-03-03 22:53:21 +00:00
Andrew Jeffery
affb85b0b4 Add make to transaction API 2022-03-03 22:51:51 +00:00
Orion Henry
9a89db3f91
Merge pull request #287 from jeffa5/experiment-borrow-exid
AsRef exid to avoid &ROOT everywhere
2022-03-03 15:55:55 -05:00
Orion Henry
9d01406e13 missing gitignore 2022-03-03 14:36:10 -05:00
Andrew Jeffery
967b467aa6 Fix clippy 2022-03-03 18:22:42 +00:00
Andrew Jeffery
c0070e081d Reorder generics 2022-03-03 18:21:58 +00:00
Orion Henry
4fbecf86af
Merge pull request #286 from automerge/import_cli
import cli
2022-03-03 12:15:16 -05:00
Orion Henry
76ff910e06 update license deny.yaml 2022-03-03 11:09:26 -05:00
Andrew Jeffery
c46e6e6321
Merge pull request #288 from jeffa5/experiment-map-overwrite
Add test for overwriting a map and getting value from old one
2022-03-03 14:50:57 +00:00
Andrew Jeffery
7cf9faf7da Fix overwriting maps test 2022-03-03 14:40:35 +00:00
Andrew Jeffery
9ae988e754 Use as_ref instead of borrow 2022-03-03 14:37:24 +00:00
Andrew Jeffery
51f1c05545 Add mutation of old object 2022-03-03 10:36:10 +00:00
Andrew Jeffery
b323f988f9 Add test for overwriting a map and getting value from old one 2022-03-03 10:28:40 +00:00
Andrew Jeffery
682b8007b9 Borrow exid to avoid &ROOT everywhere 2022-03-03 09:05:08 +00:00
Orion Henry
0f71b48857
Merge pull request #282 from automerge/move_wasm_to_feature
move wasm to feature flag
2022-03-02 14:07:34 -05:00
Orion Henry
0141bcdc8f import cli 2022-03-02 14:05:10 -05:00
Andrew Jeffery
0b9d14edc4
Merge pull request #285 from jeffa5/experiment-rollback-transaction
Fix rolling back a transaction with a new actor
2022-03-02 18:15:00 +00:00
Andrew Jeffery
f6f6b5181d Fix rolling back of transaction infecting document 2022-03-02 18:08:00 +00:00
Andrew Jeffery
712697cff0 Add test for rolling back a transaction 2022-03-02 18:03:11 +00:00
Orion Henry
8f11825003
Merge pull request #284 from jeffa5/experiment-actors
Always have an actor on the document
2022-03-02 12:32:57 -05:00
Andrew Jeffery
8f2877a67c Fix wasm 2022-03-02 17:24:15 +00:00
Andrew Jeffery
06241336fe Add with_actor for functional style 2022-03-02 17:22:26 +00:00
Andrew Jeffery
52eb193950 Add custom actor enum to avoid caching an unused one 2022-03-02 17:20:44 +00:00
Andrew Jeffery
30e0748c15 Remove new_with_actor_id on documents 2022-03-02 17:02:26 +00:00
Andrew Jeffery
8eea9d7c0b Always have an actor 2022-03-02 16:59:45 +00:00
Orion Henry
2747d5bf2b move wasm to feature flag 2022-03-02 11:05:48 -05:00
Andrew Jeffery
93e0156c87
Merge pull request #281 from jeffa5/experiment-save-opt
Optimise saving documents
2022-03-02 14:57:09 +00:00
Andrew Jeffery
dfd3d27d44 Don't clone value in splice 2022-03-02 14:25:02 +00:00
Andrew Jeffery
d2e33867f6 Update style 2022-03-02 10:51:09 +00:00
Andrew Jeffery
57cf8200ac Remove unnecessary to_vec 2022-03-02 10:47:55 +00:00
Andrew Jeffery
7a930db44d Don't decode changes for save 2022-03-02 10:45:25 +00:00
Andrew Jeffery
cffadafbd0 Stop collecting to vecs in save 2022-03-02 10:27:29 +00:00
Orion Henry
96488a2774
Merge pull request #278 from jeffa5/iterable-query
Make keys and keys_at iterators
2022-03-01 22:13:33 -05:00
Andrew Jeffery
dfb21ea8d6
Add quickstart example using new transaction (#273)
* Add quickstart example

Also change ordering of transact_with arguments.
This makes it read more naturally: transact_with these commit options,
doing this.
2022-02-28 11:49:36 +00:00
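
A sketch of the reordered call against a recent crate (the `put` and `CommitOptions::with_message` names are assumptions about the current API, not taken from this entry):

```rust
use automerge::{
    transaction::{CommitOptions, Transactable},
    Automerge, ROOT,
};

fn main() {
    let mut doc = Automerge::new();
    // Options first, closure second: "transact_with these commit
    // options, doing this".
    doc.transact_with(
        |_| CommitOptions::default().with_message("quickstart".to_owned()),
        |tx| tx.put(ROOT, "greeting", "hello"),
    )
    .unwrap();
}
```
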
Andrew Jeffery
d80a9c6746 Rename IterKeys and IterKeysAt 2022-02-25 17:31:50 +00:00
Andrew Jeffery
f8af94b317 Move B to internal Keys 2022-02-25 17:31:48 +00:00
Andrew Jeffery
6f2536c232 Make keysat double ended 2022-02-25 17:31:34 +00:00
Andrew Jeffery
4ff456cdcc Update keys to use map 2022-02-25 17:31:34 +00:00
Andrew Jeffery
989310866f Add DoubleEndedIterator for Keys 2022-02-25 17:31:34 +00:00
Andrew Jeffery
f51e44c211 Update keys iterator to iterate at the tree level
No more big vec allocation now!
2022-02-25 17:31:33 +00:00
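
What the tree-level iterator looks like from the outside, sketched with current names (an assumption; the API has shifted since this entry):

```rust
use automerge::{transaction::Transactable, AutoCommit, ReadDoc, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc = AutoCommit::new();
    doc.put(ROOT, "a", 1)?;
    doc.put(ROOT, "b", 2)?;

    // Keys are yielded lazily from the op tree; no big Vec up front.
    for key in doc.keys(ROOT) {
        println!("{key}");
    }
    // The iterator is double-ended (see the DoubleEndedIterator entries
    // above), so walking backwards needs no extra allocation either.
    assert_eq!(doc.keys(ROOT).next_back(), Some("b".to_string()));
    Ok(())
}
```
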
Andrew Jeffery
a726cf33c7 Add keys struct for iteration
This at least helps to not convert all of the keys to their strings
automatically but still allocates a vec.
2022-02-25 17:31:14 +00:00
Andrew Jeffery
7439593caf Document keys functions 2022-02-25 17:30:34 +00:00
Orion Henry
337fabe5a9
Merge pull request #271 from jeffa5/experiment-txn
Transaction API
2022-02-24 18:07:05 -05:00
Orion Henry
06302e4a17 make() defaults to text 2022-02-24 00:22:56 -05:00
Orion Henry
2fc0705907 change MAP,LIST,TEXT to be {},[],'' - allow recursion 2022-02-23 19:43:13 -05:00
Orion Henry
b96aa168b4 choking on bad value function 2022-02-22 12:10:11 -05:00
Andrew Jeffery
8d24c9e4c3 Fix rollback of transaction using index into the tree 2022-02-21 14:00:41 +00:00
Andrew Jeffery
4a6b91adb2 Add test for broken rollback 2022-02-21 13:30:23 +00:00
Andrew Jeffery
6b4393c0b3 Rename transaction module 2022-02-21 11:40:49 +00:00
Andrew Jeffery
355cbdd251 Rename try_start_transaction to ensure_transaction_open 2022-02-21 10:49:58 +00:00
Andrew Jeffery
3493dbd74a Rename autotxn to autocommit 2022-02-21 10:49:14 +00:00
Andrew Jeffery
cbd3406f8d Document commit_with and CommitOptions 2022-02-21 10:47:23 +00:00
Andrew Jeffery
66f8c73dba Document drop on transaction 2022-02-21 10:36:42 +00:00
Andrew Jeffery
50a1b4f99c Add transactable trait 2022-02-21 10:32:57 +00:00
Andrew Jeffery
f8c9343a45 Add get_heads to transaction 2022-02-19 18:57:32 +00:00
Andrew Jeffery
59e36cebe4 Improve transactions with drop, transact and better commit
Also remove modification operations directly on Automerge and switch
tests to using AutoTxn.
2022-02-17 11:29:36 +00:00
Andrew Jeffery
62c71845cd Add some basic docs for Automerge mutations 2022-02-16 15:12:51 +00:00
Andrew Jeffery
e970854042 Fix benchmark ids 2022-02-16 14:56:17 +00:00
Andrew Jeffery
2f49a82eea Have generate_sync_message not take mut self 2022-02-16 14:20:49 +00:00
Andrew Jeffery
ea826b70f4 Move TransactionInner and add get methods to Transaction 2022-02-16 14:15:36 +00:00
Andrew Jeffery
7cbd6effb7 Add autotxn document for wasm and cross-language use
These don't have the ability to preserve the semantics of the reference-based
transaction model and so can make use of the nicer auto
transaction model.
2022-02-16 14:06:22 +00:00
Andrew Jeffery
d7da7267d9 Initial wasm fix 2022-02-16 11:39:14 +00:00
Andrew Jeffery
735a4ab84c Add explicit transaction API
This removes the requirement for `&mut self` on some of the immutable
methods on `Automerge`, which can be quite inconvenient.

I've reimplemented the main functions on `Automerge` that manipulate
state to create a transaction for their op for ease of use but not
performance. I've updated the edit trace to run in a single
transaction, like on a page load.

Wasm API still needs working on at the moment to expose this properly.
2022-02-16 11:38:43 +00:00
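
A sketch of the split this introduces, written with current names (`put` postdates this commit, which still used `set`): mutations go through an explicit transaction object, and reads on `Automerge` no longer need `&mut self`.

```rust
use automerge::{transaction::Transactable, Automerge, ReadDoc, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc = Automerge::new();

    // All mutation happens inside an explicit transaction...
    let mut tx = doc.transaction();
    tx.put(ROOT, "title", "hello")?;
    let _hash = tx.commit();

    // ...so read-only methods can take &self.
    let keys: Vec<String> = doc.keys(ROOT).collect();
    assert_eq!(keys, vec!["title".to_string()]);
    Ok(())
}
```
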
Orion Henry
ef938fdf0a manually handle js types - make sure we have good errors 2022-02-15 14:02:19 -05:00
Orion Henry
b6e0da28d8 fmt 2022-02-10 11:48:09 -05:00
Orion Henry
c8c695618b remove marks 2022-02-10 11:42:15 -05:00
Orion Henry
d1b0d41239 move marks into its own test 2022-02-10 11:17:15 -05:00
Orion Henry
9136f00e43 bugfix: duplicate seq not blocked on apply_changes, clone did not close a transaction, added fork and merge to wasm 2022-02-10 11:14:44 -05:00
Orion Henry
b53305cf7f Merge branch 'marks' into tmp 2022-02-10 09:42:38 -05:00
Karissa McKelvey
98a65f98f7 Add failing test for decoding a conflicted merge 2022-02-09 13:17:07 -08:00
rae
c655427f9a
Add support for web 2022-02-07 16:33:10 -08:00
Orion Henry
1aab66d160 fix version number 2022-02-06 19:57:25 -05:00
Orion Henry
a9ddb9398c cleanup typescript defs 2022-02-06 19:01:37 -05:00
Orion Henry
3f82850e44 fix bug in set scalar 2022-02-04 20:15:57 -05:00
Orion Henry
c54aab66c4 better error on invalid value 2022-02-04 14:43:22 -05:00
Andrew Jeffery
70c5fea968 Change rust flake to use default profile 2022-02-04 16:58:58 +00:00
Andrew Jeffery
df435b671f flake.lock: Update
Flake lock file changes:

• Updated input 'flake-utils':
    'github:numtide/flake-utils/2ebf2558e5bf978c7fb8ea927dfaed8fefab2e28' (2021-04-25)
  → 'github:numtide/flake-utils/846b2ae0fc4cc943637d3d1def4454213e203cba' (2022-01-20)
• Updated input 'nixpkgs':
    'github:nixos/nixpkgs/63586475587d7e0e078291ad4b49b6f6a6885100' (2021-05-06)
  → 'github:nixos/nixpkgs/554d2d8aa25b6e583575459c297ec23750adb6cb' (2022-02-02)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/d8efe70dc561c4bea0b7bf440d36ce98c497e054' (2021-05-07)
  → 'github:oxalica/rust-overlay/674156c4c2f46dd6a6846466cb8f9fee84c211ca' (2022-02-04)
• Updated input 'rust-overlay/flake-utils':
    'github:numtide/flake-utils/5466c5bbece17adaab2d82fae80b46e807611bf3' (2021-02-28)
  → 'github:numtide/flake-utils/bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4' (2021-11-15)
• Updated input 'rust-overlay/nixpkgs':
    'github:nixos/nixpkgs/54c1e44240d8a527a8f4892608c4bce5440c3ecb' (2021-04-02)
  → 'github:NixOS/nixpkgs/8afc4e543663ca0a6a4f496262cd05233737e732' (2021-11-21)
2022-02-04 16:56:38 +00:00
Andrew Jeffery
7607ebbfcc Add from () for Value 2022-02-04 11:37:33 +00:00
Orion Henry
bf184fe980 remove some unneeded imports 2022-02-03 14:43:02 -05:00
Orion Henry
2019943849 bump edition from 2018 to 2021 2022-02-03 14:38:21 -05:00
Orion Henry
0f49608dde spans have types not names 2022-02-02 16:29:23 -05:00
Orion Henry
1d0c54ca9a raw_spans with ids 2022-02-02 16:21:33 -05:00
Orion Henry
ee80837feb raw_spans experiment 2022-02-02 15:55:41 -05:00
Orion Henry
da73607c98 adding make 2022-01-31 17:45:07 -05:00
Orion Henry
e88f673d63 Revert "Remove make"
This reverts commit 5b9360155c.
2022-01-31 17:43:56 -05:00
Orion Henry
5b9360155c Remove make 2022-01-31 17:28:24 -05:00
Orion Henry
17e6a9a955 fixed fixed 2022-01-31 17:24:46 -05:00
Orion Henry
1269a8951e use types in pkg 2022-01-31 17:24:17 -05:00
Orion Henry
836e6ba510 fix return types 2022-01-31 17:21:16 -05:00
Orion Henry
a9dec7aa0b remove dead code 2022-01-31 17:11:22 -05:00
Orion Henry
7b32faa238 all ts tests passing 2022-01-31 17:07:20 -05:00
Orion Henry
c49bf55ea4 almost working ts 2022-01-31 16:48:03 -05:00
Karissa McKelvey
d3f4be0654 Fix typescript errors in test 2022-01-31 13:03:27 -08:00
Karissa McKelvey
831faa2589 uint datatypes & fix some more typescript errors 2022-01-31 12:48:49 -08:00
Orion Henry
4c84ccba06 half done - not working typescript 2022-01-31 15:23:46 -05:00
Orion Henry
bfc051f4fb cleanup / rename 2022-01-31 14:02:24 -05:00
Orion Henry
a2e433348a mark encode/decode/serde 2022-01-31 14:02:24 -05:00
Orion Henry
b794f4803d rework marks as inserts between values 2022-01-31 14:02:24 -05:00
Orion Henry
e679c4f6a0 v0 wip 2022-01-31 14:02:23 -05:00
karissa
a59ffebd64 Update app to include text editor, import Automerge correctly 2022-01-31 10:55:45 -07:00
Orion Henry
e85f47b1f4 remove from package.json 2022-01-28 18:58:47 -05:00
Orion Henry
2990f33803 remove tmp file 2022-01-28 18:07:08 -05:00
Orion Henry
9ff0c60ccb add cra example code 2022-01-28 18:05:33 -05:00
Orion Henry
cfa1067c19 rework wasm function to use js types more directly 2022-01-28 17:07:59 -05:00
Orion Henry
3393a60e59 clippy lint 2022-01-20 14:17:11 -08:00
Orion Henry
7b3db2f15a clippy lint 2022-01-20 14:17:11 -08:00
Orion Henry
54fec3e438 lamport compare was backward on actorids and so was value resolution 2022-01-20 14:17:11 -08:00
Andrew Jeffery
0388c46480 Remove unused is_empty function on optrees 2022-01-19 15:16:02 -08:00
Andrew Jeffery
429426a693 Fix removal and rollback
Credit to @orionz
2022-01-19 15:16:02 -08:00
Andrew Jeffery
2015428452 Detect object type before getting length 2022-01-19 15:16:02 -08:00
Andrew Jeffery
812c7df3a7 Add length tests to props tests 2022-01-19 15:16:02 -08:00
Andrew Jeffery
5867c8d131 Fixup CI 2022-01-19 15:11:04 -08:00
Andrew Jeffery
0ccf36fe49 Add test and doc update for setting scalarvalues 2022-01-19 15:11:04 -08:00
Orion Henry
a12af10ee1 optimize js 2022-01-19 18:08:15 -05:00
Orion Henry
faf3e2cae4 update todo 2022-01-18 12:45:10 -05:00
Orion Henry
8b2f0238f3 create sub op tree at a time when we know the type 2022-01-18 09:11:46 -08:00
Orion Henry
acbf394290 cleanup some dead code 2022-01-14 06:27:42 -08:00
Orion Henry
b30a2b9cc1 give Counter its own type 2022-01-14 06:27:42 -08:00
Orion Henry
d50062b769 move values into the counter type - remove need for vis_window 2022-01-14 06:27:42 -08:00
Orion Henry
e59d24f68b return values are sorted - add counter del test 2022-01-14 06:27:42 -08:00
Orion Henry
642a7ce316 update todo list 2022-01-11 17:54:23 -05:00
Orion Henry
067df1f894 break sync, interop, and value code into their own files 2022-01-09 08:05:00 -08:00
Orion Henry
fdab61e213 derive default 2022-01-09 08:05:00 -08:00
Orion Henry
557bfe1cc9 update todo 2022-01-09 08:05:00 -08:00
Orion Henry
a2e6778730 fmt 2022-01-09 08:05:00 -08:00
Orion Henry
04c7e9184d port over all the sync tests to the wasm api 2022-01-09 08:05:00 -08:00
Orion Henry
b67098d5e1 convert automerge-js to use import/export 2022-01-09 08:05:00 -08:00
Orion Henry
45ee5ddbd9 add import/export 2022-01-09 08:05:00 -08:00
Orion Henry
d2a7cc5f75 get sync tests working 2022-01-09 08:05:00 -08:00
Alex Good
1f0a1e4071 Correctly sort actor IDs when encoding changes
This is a port of a fix previously merged into `main`.

The javascript implementation of automerge sorts actor IDs
lexicographically when encoding changes. We were sorting actor IDs in
the order they appear in the change we're encoding. This meant that the
index that we assigned to operations in the encoded change was different
to that which the javascript implementation assigns, resulting in
mismatched head errors as the hashes we created did not match the
javascript implementation.

This change fixes the issue by sorting actor IDs lexicographically. We
make a pass over the operations in the change before encoding to collect
the actor IDs and sort them. This means we no longer need to pass a
mutable `Vec<ActorId>` to the various encode functions, which cleans
things up a little.
2022-01-04 15:28:03 -08:00
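
The core of the fix, sketched on plain byte strings rather than the real `ActorId` type: collect the actors, sort lexicographically, and index operations against the sorted order.

```rust
// Map actors (given in order of appearance in the change) to indices in
// lexicographic order, which is what the encoded change must use so that
// hashes match the javascript implementation.
fn actor_index_map(appearance_order: &[&[u8]]) -> Vec<usize> {
    let mut sorted: Vec<&[u8]> = appearance_order.to_vec();
    sorted.sort();
    sorted.dedup();
    appearance_order
        .iter()
        .map(|a| sorted.binary_search(a).expect("actor is present"))
        .collect()
}

fn main() {
    // "beef" appears first in the change but sorts after "aaaa".
    let actors: [&[u8]; 2] = [b"beef", b"aaaa"];
    assert_eq!(actor_index_map(&actors), vec![1, 0]);
}
```
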
Orion Henry
ef89520d7c more tests for wasm 2022-01-03 14:59:46 -05:00
Orion Henry
96a8357e36 add hasher for exid 2022-01-03 14:59:34 -05:00
Orion Henry
4c4484b897 fix bug in wasm 2022-01-03 12:58:08 -05:00
Alex Good
dc8140cb0b fmt 🙄 2022-01-01 20:17:38 +00:00
Orion Henry
3046cbab35
Replace the OpID API with an object ID
Rather than returning an OpID for every mutation, we now return an
`Option<ObjId>`. This is `Some` only when a `make*` operation was
applied. This `ObjId` is an opaque type which can be used with any
document.
2022-01-01 20:15:02 +00:00
482 changed files with 76162 additions and 14298 deletions


@@ -1,4 +1,4 @@
name: ci
name: Advisories
on:
schedule:
- cron: '0 18 * * *'


@@ -1,11 +1,11 @@
name: ci
on:
name: CI
on:
push:
branches:
- experiment
- main
pull_request:
branches:
- experiment
- main
jobs:
fmt:
runs-on: ubuntu-latest
@@ -14,7 +14,8 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
toolchain: 1.67.0
default: true
components: rustfmt
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/fmt
@@ -27,7 +28,8 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
toolchain: 1.67.0
default: true
components: clippy
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/lint
@@ -40,9 +42,14 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
toolchain: 1.67.0
default: true
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/docs
- name: Build rust docs
run: ./scripts/ci/rust-docs
shell: bash
- name: Install doxygen
run: sudo apt-get install -y doxygen
shell: bash
cargo-deny:
@@ -57,31 +64,88 @@ jobs:
- uses: actions/checkout@v2
- uses: EmbarkStudios/cargo-deny-action@v1
with:
arguments: '--manifest-path ./rust/Cargo.toml'
command: check ${{ matrix.checks }}
wasm_tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install wasm-bindgen-cli
run: cargo install wasm-bindgen-cli wasm-opt
- name: Install wasm32 target
run: rustup target add wasm32-unknown-unknown
- name: run tests
run: ./scripts/ci/wasm_tests
deno_tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: denoland/setup-deno@v1
with:
deno-version: v1.x
- name: Install wasm-bindgen-cli
run: cargo install wasm-bindgen-cli wasm-opt
- name: Install wasm32 target
run: rustup target add wasm32-unknown-unknown
- name: run tests
run: ./scripts/ci/deno_tests
js_fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: yarn global add prettier
- name: format
run: prettier -c javascript/.prettierrc javascript
js_tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install wasm-pack
run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
- name: Install wasm-bindgen-cli
run: cargo install wasm-bindgen-cli wasm-opt
- name: Install wasm32 target
run: rustup target add wasm32-unknown-unknown
- name: run tests
run: ./scripts/ci/js_tests
cmake_build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly-2023-01-26
default: true
- uses: Swatinem/rust-cache@v1
- name: Install CMocka
run: sudo apt-get install -y libcmocka-dev
- name: Install/update CMake
uses: jwlawson/actions-setup-cmake@v1.12
with:
cmake-version: latest
- name: Install rust-src
run: rustup component add rust-src
- name: Build and test C bindings
run: ./scripts/ci/cmake-build Release Static
shell: bash
linux:
runs-on: ubuntu-latest
strategy:
matrix:
toolchain:
- stable
- nightly
continue-on-error: ${{ matrix.toolchain == 'nightly' }}
- 1.67.0
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: ${{ matrix.toolchain }}
default: true
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/build-test
shell: bash
@@ -93,7 +157,8 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
toolchain: 1.67.0
default: true
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/build-test
shell: bash
@@ -105,8 +170,8 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
toolchain: 1.67.0
default: true
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/build-test
shell: bash

52
.github/workflows/docs.yaml (new file)

@@ -0,0 +1,52 @@
on:
push:
branches:
- main
name: Documentation
jobs:
deploy-docs:
concurrency: deploy-docs
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Cache
uses: Swatinem/rust-cache@v1
- name: Clean docs dir
run: rm -rf docs
shell: bash
- name: Clean Rust docs dir
uses: actions-rs/cargo@v1
with:
command: clean
args: --manifest-path ./rust/Cargo.toml --doc
- name: Build Rust docs
uses: actions-rs/cargo@v1
with:
command: doc
args: --manifest-path ./rust/Cargo.toml --workspace --all-features --no-deps
- name: Move Rust docs
run: mkdir -p docs && mv rust/target/doc/* docs/.
shell: bash
- name: Configure root page
run: echo '<meta http-equiv="refresh" content="0; url=automerge">' > docs/index.html
- name: Deploy docs
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./docs

214
.github/workflows/release.yaml (new file)

@@ -0,0 +1,214 @@
name: Release
on:
push:
branches:
- main
jobs:
check_if_wasm_version_upgraded:
name: Check if WASM version has been upgraded
runs-on: ubuntu-latest
outputs:
wasm_version: ${{ steps.version-updated.outputs.current-package-version }}
wasm_has_updated: ${{ steps.version-updated.outputs.has-updated }}
steps:
- uses: JiPaix/package-json-updated-action@v1.0.5
id: version-updated
with:
path: rust/automerge-wasm/package.json
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
publish-wasm:
name: Publish WASM package
runs-on: ubuntu-latest
needs:
- check_if_wasm_version_upgraded
# We create release only if the version in the package.json has been upgraded
if: needs.check_if_wasm_version_upgraded.outputs.wasm_has_updated == 'true'
steps:
- uses: actions/setup-node@v3
with:
node-version: '16.x'
registry-url: 'https://registry.npmjs.org'
- uses: denoland/setup-deno@v1
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Get rid of local github workflows
run: rm -r .github/workflows
- name: Remove tmp_branch if it exists
run: git push origin :tmp_branch || true
- run: git checkout -b tmp_branch
- name: Install wasm-bindgen-cli
run: cargo install wasm-bindgen-cli wasm-opt
- name: Install wasm32 target
run: rustup target add wasm32-unknown-unknown
- name: run wasm js tests
id: wasm_js_tests
run: ./scripts/ci/wasm_tests
- name: run wasm deno tests
id: wasm_deno_tests
run: ./scripts/ci/deno_tests
- name: build release
id: build_release
run: |
npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm run release
- name: Collate deno release files
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
run: |
mkdir $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/deno/* $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/index.d.ts $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/README.md $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/LICENSE $GITHUB_WORKSPACE/deno_wasm_dist
sed -i '1i /// <reference types="./index.d.ts" />' $GITHUB_WORKSPACE/deno_wasm_dist/automerge_wasm.js
- name: Create npm release
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
run: |
if [ "$(npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm show . version)" = "$VERSION" ]; then
echo "This version is already published"
exit 0
fi
EXTRA_ARGS="--access public"
if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
echo "Is pre-release version"
EXTRA_ARGS="$EXTRA_ARGS --tag next"
fi
if [ "$NODE_AUTH_TOKEN" = "" ]; then
echo "Can't publish on NPM, You need a NPM_TOKEN secret."
false
fi
npm publish $GITHUB_WORKSPACE/rust/automerge-wasm $EXTRA_ARGS
env:
NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
- name: Commit wasm deno release files
run: |
git config --global user.name "actions"
git config --global user.email actions@github.com
git add $GITHUB_WORKSPACE/deno_wasm_dist
git commit -am "Add deno release files"
git push origin tmp_branch
- name: Tag wasm release
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
uses: softprops/action-gh-release@v1
with:
name: Automerge Wasm v${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
tag_name: js/automerge-wasm-${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
target_commitish: tmp_branch
generate_release_notes: false
draft: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Remove tmp_branch
run: git push origin :tmp_branch
check_if_js_version_upgraded:
name: Check if JS version has been upgraded
runs-on: ubuntu-latest
outputs:
js_version: ${{ steps.version-updated.outputs.current-package-version }}
js_has_updated: ${{ steps.version-updated.outputs.has-updated }}
steps:
- uses: JiPaix/package-json-updated-action@v1.0.5
id: version-updated
with:
path: javascript/package.json
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
publish-js:
name: Publish JS package
runs-on: ubuntu-latest
needs:
- check_if_js_version_upgraded
- check_if_wasm_version_upgraded
- publish-wasm
# We create release only if the version in the package.json has been upgraded and after the WASM release
if: |
(always() && ! cancelled()) &&
(needs.publish-wasm.result == 'success' || needs.publish-wasm.result == 'skipped') &&
needs.check_if_js_version_upgraded.outputs.js_has_updated == 'true'
steps:
- uses: actions/setup-node@v3
with:
node-version: '16.x'
registry-url: 'https://registry.npmjs.org'
- uses: denoland/setup-deno@v1
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Get rid of local github workflows
run: rm -r .github/workflows
- name: Remove js_tmp_branch if it exists
run: git push origin :js_tmp_branch || true
- run: git checkout -b js_tmp_branch
- name: check js formatting
run: |
yarn global add prettier
prettier -c javascript/.prettierrc javascript
- name: run js tests
id: js_tests
run: |
cargo install wasm-bindgen-cli wasm-opt
rustup target add wasm32-unknown-unknown
./scripts/ci/js_tests
- name: build js release
id: build_release
run: |
npm --prefix $GITHUB_WORKSPACE/javascript run build
- name: build js deno release
id: build_deno_release
run: |
VERSION=$WASM_VERSION npm --prefix $GITHUB_WORKSPACE/javascript run deno:build
env:
WASM_VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
- name: run deno tests
id: deno_tests
run: |
npm --prefix $GITHUB_WORKSPACE/javascript run deno:test
- name: Collate deno release files
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
run: |
mkdir $GITHUB_WORKSPACE/deno_js_dist
cp $GITHUB_WORKSPACE/javascript/deno_dist/* $GITHUB_WORKSPACE/deno_js_dist
- name: Create npm release
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
run: |
if [ "$(npm --prefix $GITHUB_WORKSPACE/javascript show . version)" = "$VERSION" ]; then
echo "This version is already published"
exit 0
fi
EXTRA_ARGS="--access public"
if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
echo "Is pre-release version"
EXTRA_ARGS="$EXTRA_ARGS --tag next"
fi
if [ "$NODE_AUTH_TOKEN" = "" ]; then
echo "Can't publish on NPM, You need a NPM_TOKEN secret."
false
fi
npm publish $GITHUB_WORKSPACE/javascript $EXTRA_ARGS
env:
NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
VERSION: ${{ needs.check_if_js_version_upgraded.outputs.js_version }}
- name: Commit js deno release files
run: |
git config --global user.name "actions"
git config --global user.email actions@github.com
git add $GITHUB_WORKSPACE/deno_js_dist
git commit -am "Add deno js release files"
git push origin js_tmp_branch
- name: Tag JS release
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
uses: softprops/action-gh-release@v1
with:
name: Automerge v${{ needs.check_if_js_version_upgraded.outputs.js_version }}
tag_name: js/automerge-${{ needs.check_if_js_version_upgraded.outputs.js_version }}
target_commitish: js_tmp_branch
generate_release_notes: false
draft: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Remove js_tmp_branch
run: git push origin :js_tmp_branch

4
.gitignore

@@ -1,4 +1,6 @@
/target
/.direnv
perf.*
/Cargo.lock
build/
.vim/*
/target


@@ -1,13 +0,0 @@
rust:
cd automerge && cargo test
wasm:
cd automerge-wasm && yarn
cd automerge-wasm && yarn build
cd automerge-wasm && yarn test
cd automerge-wasm && yarn link
js: wasm
cd automerge-js && yarn
cd automerge-js && yarn link "automerge-wasm"
cd automerge-js && yarn test

188
README.md

@@ -1,81 +1,147 @@
# Automerge - NEXT
# Automerge
This is pretty much a ground up rewrite of automerge-rs. The objective of this
rewrite is to radically simplify the API. The end goal being to produce a library
which is easy to work with both in Rust and from FFI.
<img src='./img/sign.svg' width='500' alt='Automerge logo' />
## How?
[![homepage](https://img.shields.io/badge/homepage-published-informational)](https://automerge.org/)
[![main docs](https://img.shields.io/badge/docs-main-informational)](https://automerge.org/automerge-rs/automerge/)
[![ci](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml/badge.svg)](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml)
[![docs](https://github.com/automerge/automerge-rs/actions/workflows/docs.yaml/badge.svg)](https://github.com/automerge/automerge-rs/actions/workflows/docs.yaml)
The current iteration of automerge-rs is complicated to work with because it
adopts the frontend/backend split architecture of the JS implementation. This
architecture was necessary due to basic operations on the automerge opset being
too slow to perform on the UI thread. Recently @orionz has been able to improve
the performance to the point where the split is no longer necessary. This means
we can adopt a much simpler mutable API.
Automerge is a library which provides fast implementations of several different
CRDTs, a compact compression format for these CRDTs, and a sync protocol for
efficiently transmitting those changes over the network. The objective of the
project is to support [local-first](https://www.inkandswitch.com/local-first/) applications in the same way that relational
databases support server applications - by providing mechanisms for persistence
which allow application developers to avoid thinking about hard distributed
computing problems. Automerge aims to be PostgreSQL for your local-first app.
The architecture is now built around the `OpTree`. This is a data structure
which supports efficiently inserting new operations and realising values of
existing operations. Most interactions with the `OpTree` are in the form of
implementations of `TreeQuery` - a trait which can be used to traverse the
optree and producing state of some kind. User facing operations are exposed on
an `Automerge` object, under the covers these operations typically instantiate
some `TreeQuery` and run it over the `OpTree`.
If you're looking for documentation on the JavaScript implementation take a look
at https://automerge.org/docs/hello/. There are other implementations in both
Rust and C, but they are earlier and don't have documentation yet. You can find
them in `rust/automerge` and `rust/automerge-c` if you are comfortable
reading the code and tests to figure out how to use them.
If you're familiar with CRDTs and interested in the design of Automerge in
particular take a look at https://automerge.org/docs/how-it-works/backend/
Finally, if you want to talk to us about this project please [join the
Slack](https://join.slack.com/t/automerge/shared_invite/zt-e4p3760n-kKh7r3KRH1YwwNfiZM8ktw)
## Status
We have working code which passes all of the tests in the JS test suite. We're
now working on writing a bunch more tests and cleaning up the API.
This project is formed of a core Rust implementation which is exposed via FFI in
javascript+WASM, C, and soon other languages. Alex
([@alexjg](https://github.com/alexjg/)) is working full time on maintaining
automerge, other members of Ink and Switch are also contributing time and there
are several other maintainers. The focus is currently on shipping the new JS
package. We expect to be iterating the API and adding new features over the next
six months so there will likely be several major version bumps in all packages
in that time.
## Development
In general we try and respect semver.
### Running CI
### JavaScript
The steps CI will run are all defined in `./scripts/ci`. Obviously CI will run
everything when you submit a PR, but if you want to run everything locally
before you push you can run `./scripts/ci/run` to run everything.
A stable release of the javascript package is currently available as
`@automerge/automerge@2.0.0`; pre-release versions of `2.0.1` are
available as `2.0.1-alpha.n`. `2.0.1*` packages are also available for Deno at
https://deno.land/x/automerge
### Running the JS tests
### Rust
You will need to have [node](https://nodejs.org/en/), [yarn](https://yarnpkg.com/getting-started/install), [rust](https://rustup.rs/) and [wasm-pack](https://rustwasm.github.io/wasm-pack/installer/) installed.
The rust codebase is currently oriented around producing a performant backend
for the Javascript wrapper and as such the API for Rust code is low level and
not well documented. We will be returning to this over the next few months but
for now you will need to be comfortable reading the tests and asking questions
to figure out how to use it. If you are looking to build rust applications which
use automerge you may want to look into
[autosurgeon](https://github.com/alexjg/autosurgeon)
To build and test the rust library:
## Repository Organisation
```shell
$ cd automerge
$ cargo test
- `./rust` - the Rust implementation and also the Rust components of
platform specific wrappers (e.g. `automerge-wasm` for the WASM API or
`automerge-c` for the C FFI bindings)
- `./javascript` - The javascript library which uses `automerge-wasm`
internally but presents a more idiomatic javascript interface
- `./scripts` - scripts which are useful to maintenance of the repository.
This includes the scripts which are run in CI.
- `./img` - static assets for use in `.md` files
## Building
To build this codebase you will need:
- `rust`
- `node`
- `yarn`
- `cmake`
- `cmocka`
You will also need to install the following with `cargo install`
- `wasm-bindgen-cli`
- `wasm-opt`
- `cargo-deny`
And ensure you have added the `wasm32-unknown-unknown` target for rust cross-compilation.
The various subprojects (the rust code, the wrapper projects) have their own
build instructions, but to run the tests that will be run in CI you can run
`./scripts/ci/run`.
### For macOS
These instructions worked to build locally on macOS 13.1 (arm64) as of
Nov 29th 2022.
```bash
# clone the repo
git clone https://github.com/automerge/automerge-rs
cd automerge-rs
# install rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# install cmake, node, cmocka
brew install cmake node cmocka
# install yarn
npm install --global yarn
# install javascript dependencies
yarn --cwd ./javascript
# install rust dependencies
cargo install wasm-bindgen-cli wasm-opt cargo-deny
# get nightly rust to produce optimized automerge-c builds
rustup toolchain install nightly
rustup component add rust-src --toolchain nightly
# add wasm target in addition to current architecture
rustup target add wasm32-unknown-unknown
# Run ci script
./scripts/ci/run
```
If your build fails to find `cmocka.h` you may need to teach it about homebrew's
installation location:

```
export CPATH=/opt/homebrew/include
export LIBRARY_PATH=/opt/homebrew/lib
./scripts/ci/run
```

To build and test the wasm library:

```shell
## setup
$ cd automerge-wasm
$ yarn
## building or testing
$ yarn build
$ yarn test
## without this the js library won't automatically use changes
$ yarn link
## cutting a release or doing benchmarking
$ yarn release
$ yarn opt ## or set `wasm-opt = false` in Cargo.toml on supported platforms (not arm64 osx)
```
And finally, to test the js library (this is where most of the tests reside):

```shell
## setup
$ cd automerge-js
$ yarn
$ yarn link "automerge-wasm"
## testing
$ yarn test
```

## Benchmarking

The `edit-trace` folder has the main code for running the edit trace benchmarking.

## Contributing

Please try and split your changes up into relatively independent commits which
change one subsystem at a time and add good commit messages which describe what
the change is and why you're making it (err on the side of longer commit
messages). `git blame` should give future maintainers a good idea of why
something is the way it is.

TODO.md

@ -1,20 +0,0 @@
### next steps:
1. C API
### ergonomics:
1. value() -> () or something that into's a value
### automerge:
1. single pass (fast) load
2. micro-patches / bare bones observation API / fully hydrated documents
### sync
1. get all sync tests passing
### maybe:
1. tables
### no:
1. cursors


@ -1,2 +0,0 @@
/node_modules
/yarn.lock


@ -1,18 +0,0 @@
{
"name": "automerge-js",
"version": "0.1.0",
"main": "src/index.js",
"license": "MIT",
"scripts": {
"test": "mocha --bail --full-trace"
},
"devDependencies": {
"mocha": "^9.1.1"
},
"dependencies": {
"automerge-wasm": "file:../automerge-wasm/dev",
"fast-sha256": "^1.3.0",
"pako": "^2.0.4",
"uuid": "^8.3"
}
}


@ -1,18 +0,0 @@
// Properties of the document root object
//const OPTIONS = Symbol('_options') // object containing options passed to init()
//const CACHE = Symbol('_cache') // map from objectId to immutable object
const STATE = Symbol('_state') // object containing metadata about current state (e.g. sequence numbers)
const HEADS = Symbol('_heads') // the document's heads recorded when a change began (set on outdated docs)
const OBJECT_ID = Symbol('_objectId') // the object ID of the current object (string, "_root" for the document root)
const READ_ONLY = Symbol('_readOnly') // true when the document cannot be mutated directly (i.e. outside a change callback)
const FROZEN = Symbol('_frozen') // marks an outdated document that must no longer be used
// Properties of all Automerge objects
//const OBJECT_ID = Symbol('_objectId') // the object ID of the current object (string)
//const CONFLICTS = Symbol('_conflicts') // map or list (depending on object type) of conflicts
//const CHANGE = Symbol('_change') // the context object on proxy objects used in change callback
//const ELEM_IDS = Symbol('_elemIds') // list containing the element ID of each list element
module.exports = {
STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN
}
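These Symbol keys let the proxy layer hang bookkeeping off user-visible
documents without it leaking into enumeration or JSON. A small illustration
(plain JavaScript, no Automerge specifics):

```javascript
const STATE = Symbol('_state')

// Symbol-keyed properties are invisible to Object.keys() and JSON.stringify().
const doc = { visible: true, [STATE]: { internal: "metadata" } }
console.log(Object.keys(doc))    // ["visible"]
console.log(JSON.stringify(doc)) // {"visible":true}
console.log(doc[STATE].internal) // "metadata" is still reachable via the symbol
```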


@ -1,372 +0,0 @@
const AutomergeWASM = require("automerge-wasm")
const uuid = require('./uuid')
let { rootProxy, listProxy, textProxy, mapProxy } = require("./proxies")
let { Counter } = require("./counter")
let { Text } = require("./text")
let { Int, Uint, Float64 } = require("./numbers")
let { STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN } = require("./constants")
function init(actor) {
const state = AutomergeWASM.init(actor)
return rootProxy(state, true);
}
function clone(doc) {
const state = doc[STATE].clone()
return rootProxy(state, true);
}
function free(doc) {
return doc[STATE].free()
}
function from(data, actor) {
let doc1 = init(actor)
let doc2 = change(doc1, (d) => Object.assign(d, data))
return doc2
}
function change(doc, options, callback) {
if (callback === undefined) {
// FIXME implement options
callback = options
options = {}
}
if (typeof options === "string") {
options = { message: options }
}
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (!!doc[HEADS] === true) {
console.log("HEADS", doc[HEADS])
throw new RangeError("Attempting to change an out of date document");
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
const heads = state.getHeads()
try {
doc[HEADS] = heads
doc[FROZEN] = true
let root = rootProxy(state);
callback(root)
if (state.pending_ops() === 0) {
doc[FROZEN] = false
doc[HEADS] = undefined
return doc
} else {
state.commit(options.message, options.time)
return rootProxy(state, true);
}
} catch (e) {
//console.log("ERROR: ",e)
doc[FROZEN] = false
doc[HEADS] = undefined
state.rollback()
throw e
}
}
function emptyChange(doc, options) {
if (options === undefined) {
options = {}
}
if (typeof options === "string") {
options = { message: options }
}
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
state.commit(options.message, options.time)
return rootProxy(state, true);
}
function load(data, actor) {
const state = AutomergeWASM.load(data, actor)
return rootProxy(state, true);
}
function save(doc) {
const state = doc[STATE]
return state.save()
}
function merge(local, remote) {
if (local[HEADS] === true) {
throw new RangeError("Attempting to change an out of date document");
}
const localState = local[STATE]
const heads = localState.getHeads()
const remoteState = remote[STATE]
const changes = localState.getChangesAdded(remoteState)
localState.applyChanges(changes)
local[HEADS] = heads
return rootProxy(localState, true)
}
function getActorId(doc) {
const state = doc[STATE]
return state.getActorId()
}
function conflictAt(context, objectId, prop) {
let values = context.values(objectId, prop)
if (values.length <= 1) {
return
}
let result = {}
for (const conflict of values) {
const datatype = conflict[0]
const value = conflict[1]
switch (datatype) {
case "map":
result[value] = mapProxy(context, value, [ prop ], true, true)
break;
case "list":
result[value] = listProxy(context, value, [ prop ], true, true)
break;
case "text":
result[value] = textProxy(context, value, [ prop ], true, true)
break;
//case "table":
//case "cursor":
case "str":
case "uint":
case "int":
case "f64":
case "boolean":
case "bytes":
case "null":
result[conflict[2]] = value
break;
case "counter":
result[conflict[2]] = new Counter(value)
break;
case "timestamp":
result[conflict[2]] = new Date(value)
break;
default:
throw RangeError(`datatype ${datatype} unimplemented`)
}
}
return result
}
function getConflicts(doc, prop) {
const state = doc[STATE]
const objectId = doc[OBJECT_ID]
return conflictAt(state, objectId, prop)
}
function getLastLocalChange(doc) {
const state = doc[STATE]
return state.getLastLocalChange()
}
function getObjectId(doc) {
return doc[OBJECT_ID]
}
function getChanges(oldState, newState) {
const o = oldState[STATE]
const n = newState[STATE]
const heads = oldState[HEADS]
return n.getChanges(heads || o.getHeads())
}
function getAllChanges(doc) {
const state = doc[STATE]
return state.getChanges([])
}
function applyChanges(doc, changes) {
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
const heads = state.getHeads()
state.applyChanges(changes)
doc[HEADS] = heads
return [rootProxy(state, true)];
}
function getHistory(doc) {
const actor = getActorId(doc)
const history = getAllChanges(doc)
return history.map((change, index) => ({
get change () {
return decodeChange(change)
},
get snapshot () {
const [state] = applyChanges(init(), history.slice(0, index + 1))
return state
}
})
)
}
function isObject(obj) { return typeof obj === 'object' && obj !== null } // helper: equals() below depends on it
function equals(val1, val2) {
if (!isObject(val1) || !isObject(val2)) return val1 === val2
const keys1 = Object.keys(val1).sort(), keys2 = Object.keys(val2).sort()
if (keys1.length !== keys2.length) return false
for (let i = 0; i < keys1.length; i++) {
if (keys1[i] !== keys2[i]) return false
if (!equals(val1[keys1[i]], val2[keys2[i]])) return false
}
return true
}
function encodeSyncMessage(msg) {
return AutomergeWASM.encodeSyncMessage(msg)
}
function decodeSyncMessage(msg) {
return AutomergeWASM.decodeSyncMessage(msg)
}
function encodeSyncState(state) {
return AutomergeWASM.encodeSyncState(state)
}
function decodeSyncState(state) {
return AutomergeWASM.decodeSyncState(state)
}
function generateSyncMessage(doc, syncState) {
const state = doc[STATE]
return [ syncState, state.generateSyncMessage(syncState) ]
}
function receiveSyncMessage(doc, syncState, message) {
if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
throw new RangeError("must be the document root");
}
if (doc[FROZEN] === true) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (!!doc[HEADS] === true) {
throw new RangeError("Attempting to change an out of date document");
}
if (doc[READ_ONLY] === false) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const state = doc[STATE]
const heads = state.getHeads()
state.receiveSyncMessage(syncState, message)
doc[HEADS] = heads
return [rootProxy(state, true), syncState, null];
}
function initSyncState() {
return AutomergeWASM.initSyncState()
}
function encodeChange(change) {
return AutomergeWASM.encodeChange(change)
}
function decodeChange(data) {
return AutomergeWASM.decodeChange(data)
}
function encodeSyncMessage(change) {
return AutomergeWASM.encodeSyncMessage(change)
}
function decodeSyncMessage(data) {
return AutomergeWASM.decodeSyncMessage(data)
}
function encodeSyncState(change) {
return AutomergeWASM.encodeSyncState(change)
}
function decodeSyncState(data) {
return AutomergeWASM.decodeSyncState(data)
}
function getMissingDeps(doc, heads) {
const state = doc[STATE]
if (!heads) {
heads = []
}
return state.getMissingDeps(heads)
}
function getHeads(doc) {
const state = doc[STATE]
return doc[HEADS] || state.getHeads()
}
function dump(doc) {
const state = doc[STATE]
state.dump()
}
function toJS(doc) {
if (typeof doc === "object") {
if (doc instanceof Uint8Array) {
return doc
}
if (doc === null) {
return doc
}
if (doc instanceof Array) {
return doc.map((a) => toJS(a))
}
if (doc instanceof Text) {
return doc.map((a) => toJS(a))
}
let tmp = {}
for (const index in doc) {
tmp[index] = toJS(doc[index])
}
return tmp
} else {
return doc
}
}
module.exports = {
init, from, change, emptyChange, clone, free,
load, save, merge, getChanges, getAllChanges, applyChanges,
getLastLocalChange, getObjectId, getActorId, getConflicts,
encodeChange, decodeChange, equals, getHistory, getHeads, uuid,
generateSyncMessage, receiveSyncMessage, initSyncState,
decodeSyncMessage, encodeSyncMessage, decodeSyncState, encodeSyncState,
getMissingDeps,
dump, Text, Counter, Int, Uint, Float64, toJS,
}
// deprecated
// Frontend, setDefaultBackend, Backend
// more...
/*
for (let name of ['getObjectId', 'getObjectById',
'setActorId',
'Text', 'Table', 'Counter', 'Observable' ]) {
module.exports[name] = Frontend[name]
}
*/
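Taken together, a minimal sketch of how this (since removed) `automerge-js`
module was used. The document shape is hypothetical, but every function call
appears in the file above:

```javascript
const Automerge = require("automerge-js")

// Two documents diverge, then merge; both writes survive as conflicts.
let doc1 = Automerge.change(Automerge.init(), (d) => { d.title = "hello" })
let doc2 = Automerge.merge(Automerge.init(), doc1)

doc1 = Automerge.change(doc1, (d) => { d.title = "hello from 1" })
doc2 = Automerge.change(doc2, (d) => { d.title = "hello from 2" })

const merged = Automerge.merge(doc1, doc2)
console.log(merged.title) // one of the two writes wins deterministically
console.log(Automerge.getConflicts(merged, "title")) // both values, keyed by op id

// Documents round-trip through save/load.
const restored = Automerge.load(Automerge.save(merged))
console.log(Automerge.equals(restored, merged)) // true
```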


@ -1,33 +0,0 @@
// Convenience classes to allow users to strictly specify the number type they want
class Int {
constructor(value) {
if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= Number.MIN_SAFE_INTEGER)) {
throw new RangeError(`Value ${value} cannot be an int`)
}
this.value = value
Object.freeze(this)
}
}
class Uint {
constructor(value) {
if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= 0)) {
throw new RangeError(`Value ${value} cannot be a uint`)
}
this.value = value
Object.freeze(this)
}
}
class Float64 {
constructor(value) {
if (typeof value !== 'number') {
throw new RangeError(`Value ${value} cannot be a float64`)
}
this.value = value || 0.0
Object.freeze(this)
}
}
module.exports = { Int, Uint, Float64 }
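A short sketch of how these wrappers pin the scalar type a value is stored as
(mirroring the basic tests further down; the package name is assumed):

```javascript
const Automerge = require("automerge-js")

let doc = Automerge.change(Automerge.init(), (d) => {
  d.int = new Automerge.Int(-1)           // stored with datatype "int"
  d.uint = new Automerge.Uint(1)          // stored with datatype "uint"
  d.float64 = new Automerge.Float64(5.5)  // stored with datatype "f64"
  d.plain = 100                           // plain integers default to "int"
})
```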


@ -1,623 +0,0 @@
const AutomergeWASM = require("automerge-wasm")
const { Int, Uint, Float64 } = require("./numbers");
const { Counter, getWriteableCounter } = require("./counter");
const { Text } = require("./text");
const { STATE, HEADS, FROZEN, OBJECT_ID, READ_ONLY } = require("./constants")
const { MAP, LIST, TABLE, TEXT } = require("automerge-wasm")
function parseListIndex(key) {
if (typeof key === 'string' && /^[0-9]+$/.test(key)) key = parseInt(key, 10)
if (typeof key !== 'number') {
// throw new TypeError('A list index must be a number, but you passed ' + JSON.stringify(key))
return key
}
if (key < 0 || isNaN(key) || key === Infinity || key === -Infinity) {
throw new RangeError('A list index must be positive, but you passed ' + key)
}
return key
}
function valueAt(target, prop) {
const { context, objectId, path, readonly, heads} = target
let value = context.value(objectId, prop, heads)
if (value === undefined) {
return
}
const datatype = value[0]
const val = value[1]
switch (datatype) {
case undefined: return;
case "map": return mapProxy(context, val, [ ... path, prop ], readonly, heads);
case "list": return listProxy(context, val, [ ... path, prop ], readonly, heads);
case "text": return textProxy(context, val, [ ... path, prop ], readonly, heads);
//case "table":
//case "cursor":
case "str": return val;
case "uint": return val;
case "int": return val;
case "f64": return val;
case "boolean": return val;
case "null": return null;
case "bytes": return val;
case "counter": {
if (readonly) {
return new Counter(val);
} else {
return getWriteableCounter(val, context, path, objectId, prop)
}
}
case "timestamp": return new Date(val);
default:
throw RangeError(`datatype ${datatype} unimplemented`)
}
}
function import_value(value) {
switch (typeof value) {
case 'object':
if (value == null) {
return [ null, "null"]
} else if (value instanceof Uint) {
return [ value.value, "uint" ]
} else if (value instanceof Int) {
return [ value.value, "int" ]
} else if (value instanceof Float64) {
return [ value.value, "f64" ]
} else if (value instanceof Counter) {
return [ value.value, "counter" ]
} else if (value instanceof Date) {
return [ value.getTime(), "timestamp" ]
} else if (value instanceof Uint8Array) {
return [ value, "bytes" ]
} else if (value instanceof Array) {
return [ value, "list" ]
} else if (value instanceof Text) {
return [ value, "text" ]
} else if (value[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
} else {
return [ value, "map" ]
}
break;
case 'boolean':
return [ value, "boolean" ]
case 'number':
if (Number.isInteger(value)) {
return [ value, "int" ]
} else {
return [ value, "f64" ]
}
break;
case 'string':
return [ value ]
break;
default:
throw new RangeError(`Unsupported type of value: ${typeof value}`)
}
}
const MapHandler = {
get (target, key) {
const { context, objectId, path, readonly, frozen, heads } = target
if (key === Symbol.toStringTag) { return target[Symbol.toStringTag] }
if (key === OBJECT_ID) return objectId
if (key === READ_ONLY) return readonly
if (key === FROZEN) return frozen
if (key === HEADS) return heads
if (key === STATE) return context;
return valueAt(target, key)
},
set (target, key, val) {
let { context, objectId, path, readonly, frozen} = target
if (val && val[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
}
if (key === FROZEN) {
target.frozen = val
return
}
if (key === HEADS) {
target.heads = val
return
}
let [ value, datatype ] = import_value(val)
if (frozen) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (readonly) {
throw new RangeError(`Object property "${key}" cannot be modified`)
}
switch (datatype) {
case "list":
const list = context.set(objectId, key, LIST)
const proxyList = listProxy(context, list, [ ... path, key ], readonly );
for (let i = 0; i < value.length; i++) {
proxyList[i] = value[i]
}
break;
case "text":
const text = context.set(objectId, key, TEXT)
const proxyText = textProxy(context, text, [ ... path, key ], readonly );
for (let i = 0; i < value.length; i++) {
proxyText[i] = value.get(i)
}
break;
case "map":
const map = context.set(objectId, key, MAP)
const proxyMap = mapProxy(context, map, [ ... path, key ], readonly );
for (const key in value) {
proxyMap[key] = value[key]
}
break;
default:
context.set(objectId, key, value, datatype)
}
return true
},
deleteProperty (target, key) {
const { context, objectId, path, readonly, frozen } = target
if (readonly) {
throw new RangeError(`Object property "${key}" cannot be modified`)
}
context.del(objectId, key)
return true
},
has (target, key) {
const value = this.get(target, key)
return value !== undefined
},
getOwnPropertyDescriptor (target, key) {
const { context, objectId } = target
const value = this.get(target, key)
if (typeof value !== 'undefined') {
return {
configurable: true, enumerable: true, value
}
}
},
ownKeys (target) {
const { context, objectId, heads} = target
return context.keys(objectId, heads)
},
}
const ListHandler = {
get (target, index) {
const {context, objectId, path, readonly, frozen, heads } = target
index = parseListIndex(index)
if (index === Symbol.hasInstance) { return (instance) => { return Array.isArray(instance) } }
if (index === Symbol.toStringTag) { return target[Symbol.toStringTag] }
if (index === OBJECT_ID) return objectId
if (index === READ_ONLY) return readonly
if (index === FROZEN) return frozen
if (index === HEADS) return heads
if (index === STATE) return context;
if (index === 'length') return context.length(objectId, heads);
if (index === Symbol.iterator) {
let i = 0;
return function *() {
// FIXME - ugly
let value = valueAt(target, i)
while (value !== undefined) {
yield value
i += 1
value = valueAt(target, i)
}
}
}
if (typeof index === 'number') {
return valueAt(target, index)
} else {
return listMethods(target)[index]
}
},
set (target, index, val) {
let {context, objectId, path, readonly, frozen } = target
index = parseListIndex(index)
if (val && val[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
}
if (index === FROZEN) {
target.frozen = val
return
}
if (index === HEADS) {
target.heads = val
return
}
if (typeof index == "string") {
throw new RangeError('list index must be a number')
}
const [ value, datatype] = import_value(val)
if (frozen) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (readonly) {
throw new RangeError(`Object property "${index}" cannot be modified`)
}
switch (datatype) {
case "list":
let list
if (index >= context.length(objectId)) {
list = context.insert(objectId, index, LIST)
} else {
list = context.set(objectId, index, LIST)
}
const proxyList = listProxy(context, list, [ ... path, index ], readonly);
proxyList.splice(0,0,...value)
break;
case "text":
let text
if (index >= context.length(objectId)) {
text = context.insert(objectId, index, TEXT)
} else {
text = context.set(objectId, index, TEXT)
}
const proxyText = textProxy(context, text, [ ... path, index ], readonly);
proxyText.splice(0,0,...value)
break;
case "map":
let map
if (index >= context.length(objectId)) {
map = context.insert(objectId, index, MAP)
} else {
map = context.set(objectId, index, MAP)
}
const proxyMap = mapProxy(context, map, [ ... path, index ], readonly);
for (const key in value) {
proxyMap[key] = value[key]
}
break;
default:
if (index >= context.length(objectId)) {
context.insert(objectId, index, value, datatype)
} else {
context.set(objectId, index, value, datatype)
}
}
return true
},
deleteProperty (target, index) {
const {context, objectId} = target
index = parseListIndex(index)
if (context.value(objectId, index)[0] == "counter") {
throw new TypeError('Unsupported operation: deleting a counter from a list')
}
context.del(objectId, index)
return true
},
has (target, index) {
const {context, objectId, heads} = target
index = parseListIndex(index)
if (typeof index === 'number') {
return index < context.length(objectId, heads)
}
return index === 'length'
},
getOwnPropertyDescriptor (target, index) {
const {context, objectId, path, readonly, frozen, heads} = target
if (index === 'length') return {writable: true, value: context.length(objectId, heads) }
if (index === OBJECT_ID) return {configurable: false, enumerable: false, value: objectId}
index = parseListIndex(index)
let value = valueAt(target, index)
return { configurable: true, enumerable: true, value }
},
getPrototypeOf(target) { return Object.getPrototypeOf([]) },
ownKeys (target) {
const {context, objectId, heads } = target
let keys = []
// uncommenting this causes assert.deepEqual() to fail when comparing to a pojo array
// but not uncommenting it causes for (i in list) {} to not enumerate values properly
//for (let i = 0; i < target.context.length(objectId, heads); i++) { keys.push(i.toString()) }
keys.push("length");
return keys
}
}
const TextHandler = Object.assign({}, ListHandler, {
get (target, index) {
// FIXME this is a one line change from ListHandler.get()
const {context, objectId, path, readonly, frozen, heads } = target
index = parseListIndex(index)
if (index === Symbol.toStringTag) { return target[Symbol.toStringTag] }
if (index === Symbol.hasInstance) { return (instance) => { return Array.isArray(instance) } }
if (index === OBJECT_ID) return objectId
if (index === READ_ONLY) return readonly
if (index === FROZEN) return frozen
if (index === HEADS) return heads
if (index === STATE) return context;
if (index === 'length') return context.length(objectId, heads);
if (index === Symbol.iterator) {
let i = 0;
return function *() {
let value = valueAt(target, i)
while (value !== undefined) {
yield value
i += 1
value = valueAt(target, i)
}
}
}
if (typeof index === 'number') {
return valueAt(target, index)
} else {
return textMethods(target)[index] || listMethods(target)[index]
}
},
getPrototypeOf(target) {
return Object.getPrototypeOf(new Text())
},
})
function mapProxy(context, objectId, path, readonly, heads) {
return new Proxy({context, objectId, path, readonly: !!readonly, frozen: false, heads}, MapHandler)
}
function listProxy(context, objectId, path, readonly, heads) {
let target = []
Object.assign(target, {context, objectId, path, readonly: !!readonly, frozen: false, heads})
return new Proxy(target, ListHandler)
}
function textProxy(context, objectId, path, readonly, heads) {
let target = []
Object.assign(target, {context, objectId, path, readonly: !!readonly, frozen: false, heads})
return new Proxy(target, TextHandler)
}
function rootProxy(context, readonly) {
return mapProxy(context, "_root", [], readonly, false)
}
function listMethods(target) {
const {context, objectId, path, readonly, frozen, heads} = target
const methods = {
deleteAt(index, numDelete) {
// FIXME - what about many deletes?
if (context.value(objectId, index)[0] == "counter") {
throw new TypeError('Unsupported operation: deleting a counter from a list')
}
if (typeof numDelete === 'number') {
context.splice(objectId, index, numDelete)
} else {
context.del(objectId, index)
}
return this
},
fill(val, start, end) {
// import the value once, then set every index in the range to it
// (scalar values only; objects would need the proxy creation logic above)
const [value, datatype] = import_value(val)
const length = context.length(objectId)
for (let index = parseListIndex(start || 0); index < parseListIndex(end || length); index++) {
context.set(objectId, index, value, datatype)
}
return this
},
indexOf(o, start = 0) {
// FIXME
const id = o[OBJECT_ID]
if (id) {
const list = context.getObject(objectId)
for (let index = start; index < list.length; index++) {
if (list[index][OBJECT_ID] === id) {
return index
}
}
return -1
} else {
return context.indexOf(objectId, o, start)
}
},
insertAt(index, ...values) {
this.splice(index, 0, ...values)
return this
},
pop() {
let length = context.length(objectId)
if (length == 0) {
return undefined
}
let last = valueAt(target, length - 1)
context.del(objectId, length - 1)
return last
},
push(...values) {
let len = context.length(objectId)
this.splice(len, 0, ...values)
return context.length(objectId)
},
shift() {
if (context.length(objectId) == 0) return
const first = valueAt(target, 0)
context.del(objectId, 0)
return first
},
splice(index, del, ...vals) {
index = parseListIndex(index)
del = parseListIndex(del)
for (let val of vals) {
if (val && val[OBJECT_ID]) {
throw new RangeError('Cannot create a reference to an existing document object')
}
}
if (frozen) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (readonly) {
throw new RangeError("Sequence object cannot be modified outside of a change block")
}
let result = []
for (let i = 0; i < del; i++) {
let value = valueAt(target, index)
result.push(value)
context.del(objectId, index)
}
const values = vals.map((val) => import_value(val))
for (let [value,datatype] of values) {
switch (datatype) {
case "list":
const list = context.insert(objectId, index, LIST)
const proxyList = listProxy(context, list, [ ... path, index ], readonly);
proxyList.splice(0,0,...value)
break;
case "text":
const text = context.insert(objectId, index, TEXT)
const proxyText = textProxy(context, text, [ ... path, index ], readonly);
proxyText.splice(0,0,...value)
break;
case "map":
const map = context.insert(objectId, index, MAP)
const proxyMap = mapProxy(context, map, [ ... path, index ], readonly);
for (const key in value) {
proxyMap[key] = value[key]
}
break;
default:
context.insert(objectId, index, value, datatype)
}
index += 1
}
return result
},
unshift(...values) {
this.splice(0, 0, ...values)
return context.length(objectId)
},
entries() {
let i = 0;
const iterator = {
next: () => {
let value = valueAt(target, i)
if (value === undefined) {
return { value: undefined, done: true }
} else {
const entry = [ i, value ]
i += 1 // advance so the iterator makes progress
return { value: entry, done: false }
}
}
}
return iterator
},
keys() {
let i = 0;
let len = context.length(objectId, heads)
const iterator = {
next: () => {
let value = undefined
if (i < len) { value = i; i++ }
return { value, done: value === undefined }
}
}
return iterator
},
values() {
let i = 0;
const iterator = {
next: () => {
let value = valueAt(target, i)
if (value === undefined) {
return { value: undefined, done: true }
} else {
i += 1 // advance so the iterator makes progress
return { value, done: false }
}
}
}
return iterator
}
}
// Read-only methods that can delegate to the JavaScript built-in implementations
// FIXME - super slow
for (let method of ['concat', 'every', 'filter', 'find', 'findIndex', 'forEach', 'includes',
'join', 'lastIndexOf', 'map', 'reduce', 'reduceRight',
'slice', 'some', 'toLocaleString', 'toString']) {
methods[method] = (...args) => {
const list = []
while (true) {
let value = valueAt(target, list.length)
if (value == undefined) {
break
}
list.push(value)
}
return list[method](...args)
}
}
return methods
}
function textMethods(target) {
const {context, objectId, path, readonly, frozen} = target
const methods = {
set (index, value) {
return this[index] = value
},
get (index) {
return this[index]
},
toString () {
let str = ''
let length = this.length
for (let i = 0; i < length; i++) {
const value = this.get(i)
if (typeof value === 'string') str += value
}
return str
},
toSpans () {
let spans = []
let chars = ''
let length = this.length
for (let i = 0; i < length; i++) {
const value = this[i]
if (typeof value === 'string') {
chars += value
} else {
if (chars.length > 0) {
spans.push(chars)
chars = ''
}
spans.push(value)
}
}
if (chars.length > 0) {
spans.push(chars)
}
return spans
},
toJSON () {
return this.toString()
}
}
return methods
}
module.exports = { rootProxy, textProxy, listProxy, mapProxy, MapHandler, ListHandler, TextHandler }
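The practical effect of these handlers is that Automerge lists behave like
plain arrays inside a change callback. A small sketch (the package name is
assumed; the behaviour mirrors the basic tests below):

```javascript
const Automerge = require("automerge-js")

let doc = Automerge.from({ list: [1, 2, 3] })
doc = Automerge.change(doc, (d) => {
  d.list.splice(1, 1, 9, 10) // routed through ListHandler into the wasm context
  d.list.push(11)
})
console.log(doc.list.length) // 5, i.e. [1, 9, 10, 3, 11]
```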


@ -1,132 +0,0 @@
const { OBJECT_ID } = require('./constants')
const { isObject } = require('../src/common')
class Text {
constructor (text) {
const instance = Object.create(Text.prototype)
if (typeof text === 'string') {
instance.elems = [...text]
} else if (Array.isArray(text)) {
instance.elems = text
} else if (text === undefined) {
instance.elems = []
} else {
throw new TypeError(`Unsupported initial value for Text: ${text}`)
}
return instance
}
get length () {
return this.elems.length
}
get (index) {
return this.elems[index]
}
getElemId (index) {
return undefined
}
/**
* Iterates over the text elements character by character, including any
* inline objects.
*/
[Symbol.iterator] () {
let elems = this.elems, index = -1
return {
next () {
index += 1
if (index < elems.length) {
return {done: false, value: elems[index]}
} else {
return {done: true}
}
}
}
}
/**
* Returns the content of the Text object as a simple string, ignoring any
* non-character elements.
*/
toString() {
// Concatting to a string is faster than creating an array and then
// .join()ing for small (<100KB) arrays.
// https://jsperf.com/join-vs-loop-w-type-test
let str = ''
for (const elem of this.elems) {
if (typeof elem === 'string') str += elem
}
return str
}
/**
* Returns the content of the Text object as a sequence of strings,
* interleaved with non-character elements.
*
* For example, the value ['a', 'b', {x: 3}, 'c', 'd'] has spans:
* => ['ab', {x: 3}, 'cd']
*/
toSpans() {
let spans = []
let chars = ''
for (const elem of this.elems) {
if (typeof elem === 'string') {
chars += elem
} else {
if (chars.length > 0) {
spans.push(chars)
chars = ''
}
spans.push(elem)
}
}
if (chars.length > 0) {
spans.push(chars)
}
return spans
}
/**
* Returns the content of the Text object as a simple string, so that the
* JSON serialization of an Automerge document represents text nicely.
*/
toJSON() {
return this.toString()
}
/**
* Updates the list item at position `index` to a new value `value`.
*/
set (index, value) {
this.elems[index] = value
}
/**
* Inserts new list items `values` starting at position `index`.
*/
insertAt(index, ...values) {
this.elems.splice(index, 0, ... values)
}
/**
* Deletes `numDelete` list items starting at position `index`.
* if `numDelete` is not given, one item is deleted.
*/
deleteAt(index, numDelete = 1) {
this.elems.splice(index, numDelete)
}
}
// Read-only methods that can delegate to the JavaScript built-in array
for (let method of ['concat', 'every', 'filter', 'find', 'findIndex', 'forEach', 'includes',
'indexOf', 'join', 'lastIndexOf', 'map', 'reduce', 'reduceRight',
'slice', 'some', 'toLocaleString']) {
Text.prototype[method] = function (...args) {
const array = [...this]
return array[method](...args)
}
}
module.exports = { Text }
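A short sketch of the spans interface this class provides (the same behaviour
the text tests below exercise):

```javascript
const { Text } = require("./text") // assumed relative path within this package

const text = new Text("hello world")
text.insertAt(5, { attributes: { bold: true } }) // a non-character control element
console.log(text.toString()) // "hello world" (control elements are skipped)
console.log(text.toSpans())  // ["hello", { attributes: { bold: true } }, " world"]
```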


@ -1,16 +0,0 @@
const { v4: uuid } = require('uuid')
function defaultFactory() {
return uuid().replace(/-/g, '')
}
let factory = defaultFactory
function makeUuid() {
return factory()
}
makeUuid.setFactory = newFactory => { factory = newFactory }
makeUuid.reset = () => { factory = defaultFactory }
module.exports = makeUuid
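The factory indirection exists mainly so tests can make otherwise-random IDs
deterministic. A sketch:

```javascript
const uuid = require("./uuid") // assumed relative path within this package

uuid.setFactory(() => "deadbeef") // every call now returns a fixed id
console.log(uuid()) // "deadbeef"
uuid.reset() // back to random v4 uuids with the dashes stripped
```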


@ -1,164 +0,0 @@
const assert = require('assert')
const util = require('util')
const Automerge = require('..')
describe('Automerge', () => {
describe('basics', () => {
it('should init clone and free', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1);
})
it('handle basic set and read on root object', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world"
d.big = "little"
d.zip = "zop"
d.app = "dap"
assert.deepEqual(d, { hello: "world", big: "little", zip: "zop", app: "dap" })
})
assert.deepEqual(doc2, { hello: "world", big: "little", zip: "zop", app: "dap" })
})
it('handle basic sets over many changes', () => {
let doc1 = Automerge.init()
let timestamp = new Date();
let counter = new Automerge.Counter(100);
let bytes = new Uint8Array([10,11,12]);
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world"
})
let doc3 = Automerge.change(doc2, (d) => {
d.counter1 = counter
})
let doc4 = Automerge.change(doc3, (d) => {
d.timestamp1 = timestamp
})
let doc5 = Automerge.change(doc4, (d) => {
d.app = null
})
let doc6 = Automerge.change(doc5, (d) => {
d.bytes1 = bytes
})
let doc7 = Automerge.change(doc6, (d) => {
d.uint = new Automerge.Uint(1)
d.int = new Automerge.Int(-1)
d.float64 = new Automerge.Float64(5.5)
d.number1 = 100
d.number2 = -45.67
d.true = true
d.false = false
})
assert.deepEqual(doc7, { hello: "world", true: true, false: false, int: -1, uint: 1, float64: 5.5, number1: 100, number2: -45.67, counter1: counter, timestamp1: timestamp, bytes1: bytes, app: null })
let changes = Automerge.getAllChanges(doc7)
let t1 = Automerge.init()
;let [t2] = Automerge.applyChanges(t1, changes)
assert.deepEqual(doc7,t2)
})
it('handle overwrites to values', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world1"
})
let doc3 = Automerge.change(doc2, (d) => {
d.hello = "world2"
})
let doc4 = Automerge.change(doc3, (d) => {
d.hello = "world3"
})
let doc5 = Automerge.change(doc4, (d) => {
d.hello = "world4"
})
assert.deepEqual(doc5, { hello: "world4" } )
})
it('handle set with object value', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.subobj = { hello: "world", subsubobj: { zip: "zop" } }
})
assert.deepEqual(doc2, { subobj: { hello: "world", subsubobj: { zip: "zop" } } })
})
it('handle simple list creation', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => d.list = [])
assert.deepEqual(doc2, { list: []})
})
it('handle simple lists', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = [ 1, 2, 3 ]
})
assert.deepEqual(doc2.list.length, 3)
assert.deepEqual(doc2.list[0], 1)
assert.deepEqual(doc2.list[1], 2)
assert.deepEqual(doc2.list[2], 3)
assert.deepEqual(doc2, { list: [1,2,3] })
// assert.deepStrictEqual(Automerge.toJS(doc2), { list: [1,2,3] })
let doc3 = Automerge.change(doc2, (d) => {
d.list[1] = "a"
})
assert.deepEqual(doc3.list.length, 3)
assert.deepEqual(doc3.list[0], 1)
assert.deepEqual(doc3.list[1], "a")
assert.deepEqual(doc3.list[2], 3)
assert.deepEqual(doc3, { list: [1,"a",3] })
})
it('handle simple lists', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = [ 1, 2, 3 ]
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
;let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2);
})
it('handle text', () => {
let doc1 = Automerge.init()
let tmp = new Automerge.Text("hello")
let doc2 = Automerge.change(doc1, (d) => {
d.list = new Automerge.Text("hello")
d.list.insertAt(2,"Z")
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
;let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2);
})
it('have many list methods', () => {
let doc1 = Automerge.from({ list: [1,2,3] })
assert.deepEqual(doc1, { list: [1,2,3] });
let doc2 = Automerge.change(doc1, (d) => {
d.list.splice(1,1,9,10)
})
assert.deepEqual(doc2, { list: [1,9,10,3] });
let doc3 = Automerge.change(doc2, (d) => {
d.list.push(11,12)
})
assert.deepEqual(doc3, { list: [1,9,10,3,11,12] });
let doc4 = Automerge.change(doc3, (d) => {
d.list.unshift(2,2)
})
assert.deepEqual(doc4, { list: [2,2,1,9,10,3,11,12] });
let doc5 = Automerge.change(doc4, (d) => {
d.list.shift()
})
assert.deepEqual(doc5, { list: [2,1,9,10,3,11,12] });
let doc6 = Automerge.change(doc5, (d) => {
d.list.insertAt(3,100,101)
})
assert.deepEqual(doc6, { list: [2,1,9,100,101,10,3,11,12] });
})
})
})


@ -1,97 +0,0 @@
const assert = require('assert')
const { checkEncoded } = require('./helpers')
const Automerge = require('..')
const { encodeChange, decodeChange } = Automerge
describe('change encoding', () => {
it('should encode text edits', () => {
/*
const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: '', deps: [], ops: [
{action: 'makeText', obj: '_root', key: 'text', insert: false, pred: []},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
{action: 'del', obj: '1@aaaa', elemId: '2@aaaa', insert: false, pred: ['2@aaaa']},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
]}
*/
const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: null, deps: [], ops: [
{action: 'makeText', obj: '_root', key: 'text', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
{action: 'del', obj: '1@aaaa', elemId: '2@aaaa', pred: ['2@aaaa']},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
]}
checkEncoded(encodeChange(change1), [
0x85, 0x6f, 0x4a, 0x83, // magic bytes
0xe2, 0xbd, 0xfb, 0xf5, // checksum
1, 94, 0, 2, 0xaa, 0xaa, // chunkType: change, length, deps, actor 'aaaa'
1, 1, 9, 0, 0, // seq, startOp, time, message, actor list
12, 0x01, 4, 0x02, 4, // column count, objActor, objCtr
0x11, 8, 0x13, 7, 0x15, 8, // keyActor, keyCtr, keyStr
0x34, 4, 0x42, 6, // insert, action
0x56, 6, 0x57, 3, // valLen, valRaw
0x70, 6, 0x71, 2, 0x73, 2, // predNum, predActor, predCtr
0, 1, 4, 0, // objActor column: null, 0, 0, 0, 0
0, 1, 4, 1, // objCtr column: null, 1, 1, 1, 1
0, 2, 0x7f, 0, 0, 1, 0x7f, 0, // keyActor column: null, null, 0, null, 0
0, 1, 0x7c, 0, 2, 0x7e, 4, // keyCtr column: null, 0, 2, 0, 4
0x7f, 4, 0x74, 0x65, 0x78, 0x74, 0, 4, // keyStr column: 'text', null, null, null, null
1, 1, 1, 2, // insert column: false, true, false, true, true
0x7d, 4, 1, 3, 2, 1, // action column: makeText, set, del, set, set
0x7d, 0, 0x16, 0, 2, 0x16, // valLen column: 0, 0x16, 0, 0x16, 0x16
0x68, 0x48, 0x69, // valRaw column: 'h', 'H', 'i'
2, 0, 0x7f, 1, 2, 0, // predNum column: 0, 0, 1, 0, 0
0x7f, 0, // predActor column: 0
0x7f, 2 // predCtr column: 2
])
const decoded = decodeChange(encodeChange(change1))
assert.deepStrictEqual(decoded, Object.assign({hash: decoded.hash}, change1))
})
// FIXME - skipping this b/c it was never implemented in the rust impl and isn't trivial
/*
it.skip('should require strict ordering of preds', () => {
const change = new Uint8Array([
133, 111, 74, 131, 31, 229, 112, 44, 1, 105, 1, 58, 30, 190, 100, 253, 180, 180, 66, 49, 126,
81, 142, 10, 3, 35, 140, 189, 231, 34, 145, 57, 66, 23, 224, 149, 64, 97, 88, 140, 168, 194,
229, 4, 244, 209, 58, 138, 67, 140, 1, 152, 236, 250, 2, 0, 1, 4, 55, 234, 66, 242, 8, 21, 11,
52, 1, 66, 2, 86, 3, 87, 10, 112, 2, 113, 3, 115, 4, 127, 9, 99, 111, 109, 109, 111, 110, 86,
97, 114, 1, 127, 1, 127, 166, 1, 52, 48, 57, 49, 52, 57, 52, 53, 56, 50, 127, 2, 126, 0, 1,
126, 139, 1, 0
])
assert.throws(() => { decodeChange(change) }, /operation IDs are not in ascending order/)
})
*/
describe('with trailing bytes', () => {
let change = new Uint8Array([
0x85, 0x6f, 0x4a, 0x83, // magic bytes
0xb2, 0x98, 0x9e, 0xa9, // checksum
1, 61, 0, 2, 0x12, 0x34, // chunkType: change, length, deps, actor '1234'
1, 1, 252, 250, 220, 255, 5, // seq, startOp, time
14, 73, 110, 105, 116, 105, 97, 108, 105, 122, 97, 116, 105, 111, 110, // message: 'Initialization'
0, 6, // actor list, column count
0x15, 3, 0x34, 1, 0x42, 2, // keyStr, insert, action
0x56, 2, 0x57, 1, 0x70, 2, // valLen, valRaw, predNum
0x7f, 1, 0x78, // keyStr: 'x'
1, // insert: false
0x7f, 1, // action: set
0x7f, 19, // valLen: 1 byte of type uint
1, // valRaw: 1
0x7f, 0, // predNum: 0
0, 1, 2, 3, 4, 5, 6, 7, 8, 9 // 10 trailing bytes
])
it('should allow decoding and re-encoding', () => {
// NOTE: This calls the JavaScript encoding and decoding functions, even when the WebAssembly
// backend is loaded. Should the wasm backend export its own functions for testing?
checkEncoded(change, encodeChange(decodeChange(change)))
})
it('should be preserved in document encoding', () => {
const [doc] = Automerge.applyChanges(Automerge.init(), [change])
const [reconstructed] = Automerge.getAllChanges(Automerge.load(Automerge.save(doc)))
checkEncoded(change, reconstructed)
})
})
})

File diff suppressed because it is too large


@ -1,697 +0,0 @@
const assert = require('assert')
const Automerge = require('..')
const { assertEqualsOneOf } = require('./helpers')
function attributeStateToAttributes(accumulatedAttributes) {
const attributes = {}
Object.entries(accumulatedAttributes).forEach(([key, values]) => {
if (values.length && values[0] !== null) {
attributes[key] = values[0]
}
})
return attributes
}
function isEquivalent(a, b) {
const aProps = Object.getOwnPropertyNames(a)
const bProps = Object.getOwnPropertyNames(b)
if (aProps.length != bProps.length) {
return false
}
for (let i = 0; i < aProps.length; i++) {
const propName = aProps[i]
if (a[propName] !== b[propName]) {
return false
}
}
return true
}
function isControlMarker(pseudoCharacter) {
return typeof pseudoCharacter === 'object' && pseudoCharacter.attributes
}
function opFrom(text, attributes) {
let op = { insert: text }
if (Object.keys(attributes).length > 0) {
op.attributes = attributes
}
return op
}
function accumulateAttributes(span, accumulatedAttributes) {
Object.entries(span).forEach(([key, value]) => {
if (!accumulatedAttributes[key]) {
accumulatedAttributes[key] = []
}
if (value === null) {
if (accumulatedAttributes[key].length === 0 || accumulatedAttributes[key] === null) {
accumulatedAttributes[key].unshift(null)
} else {
accumulatedAttributes[key].shift()
}
} else {
if (accumulatedAttributes[key][0] === null) {
accumulatedAttributes[key].shift()
} else {
accumulatedAttributes[key].unshift(value)
}
}
})
return accumulatedAttributes
}
function automergeTextToDeltaDoc(text) {
let ops = []
let controlState = {}
let currentString = ""
let attributes = {}
text.toSpans().forEach((span) => {
if (isControlMarker(span)) {
controlState = accumulateAttributes(span.attributes, controlState)
} else {
let next = attributeStateToAttributes(controlState)
// if the next span has the same calculated attributes as the current span
// don't bother outputting it as a separate span, just let it ride
if (typeof span === 'string' && isEquivalent(next, attributes)) {
currentString = currentString + span
return
}
if (currentString) {
ops.push(opFrom(currentString, attributes))
}
// If we've got a string, we might be able to concatenate it to another
// same-attributed-string, so remember it and go to the next iteration.
if (typeof span === 'string') {
currentString = span
attributes = next
} else {
// otherwise we have an embed "character" and should output it immediately.
// embeds are always one-"character" in length.
ops.push(opFrom(span, next))
currentString = ''
attributes = {}
}
}
})
// at the end, flush any accumulated string out
if (currentString) {
ops.push(opFrom(currentString, attributes))
}
return ops
}
function inverseAttributes(attributes) {
let invertedAttributes = {}
Object.keys(attributes).forEach((key) => {
invertedAttributes[key] = null
})
return invertedAttributes
}
function applyDeleteOp(text, offset, op) {
let length = op.delete
while (length > 0) {
if (isControlMarker(text.get(offset))) {
offset += 1
} else {
// we need to not delete control characters, but we do delete embed characters
text.deleteAt(offset, 1)
length -= 1
}
}
return [text, offset]
}
function applyRetainOp(text, offset, op) {
let length = op.retain
if (op.attributes) {
text.insertAt(offset, { attributes: op.attributes })
offset += 1
}
while (length > 0) {
const char = text.get(offset)
offset += 1
if (!isControlMarker(char)) {
length -= 1
}
}
if (op.attributes) {
text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
offset += 1
}
return [text, offset]
}
function applyInsertOp(text, offset, op) {
let originalOffset = offset
if (typeof op.insert === 'string') {
text.insertAt(offset, ...op.insert.split(''))
offset += op.insert.length
} else {
// we have an embed or something similar
text.insertAt(offset, op.insert)
offset += 1
}
if (op.attributes) {
text.insertAt(originalOffset, { attributes: op.attributes })
offset += 1
}
if (op.attributes) {
text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
offset += 1
}
return [text, offset]
}
// XXX: uhhhhh, why can't I pass in text?
function applyDeltaDocToAutomergeText(delta, doc) {
let offset = 0
delta.forEach(op => {
if (op.retain) {
[, offset] = applyRetainOp(doc.text, offset, op)
} else if (op.delete) {
[, offset] = applyDeleteOp(doc.text, offset, op)
} else if (op.insert) {
[, offset] = applyInsertOp(doc.text, offset, op)
}
})
}
describe('Automerge.Text', () => {
let s1, s2
beforeEach(() => {
s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text())
s2 = Automerge.merge(Automerge.init(), s1)
})
it('should support insertion', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a'))
assert.strictEqual(s1.text.length, 1)
assert.strictEqual(s1.text.get(0), 'a')
assert.strictEqual(s1.text.toString(), 'a')
//assert.strictEqual(s1.text.getElemId(0), `2@${Automerge.getActorId(s1)}`)
})
it('should support deletion', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', 'b', 'c'))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 1))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), 'a')
assert.strictEqual(s1.text.get(1), 'c')
assert.strictEqual(s1.text.toString(), 'ac')
})
it("should support implicit and explicit deletion", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", "b", "c"))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 0))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), "a")
assert.strictEqual(s1.text.get(1), "c")
assert.strictEqual(s1.text.toString(), "ac")
})
it('should handle concurrent insertion', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', 'b', 'c'))
s2 = Automerge.change(s2, doc => doc.text.insertAt(0, 'x', 'y', 'z'))
s1 = Automerge.merge(s1, s2)
assert.strictEqual(s1.text.length, 6)
assertEqualsOneOf(s1.text.toString(), 'abcxyz', 'xyzabc')
assertEqualsOneOf(s1.text.join(''), 'abcxyz', 'xyzabc')
})
it('should handle text and other ops in the same change', () => {
s1 = Automerge.change(s1, doc => {
doc.foo = 'bar'
doc.text.insertAt(0, 'a')
})
assert.strictEqual(s1.foo, 'bar')
assert.strictEqual(s1.text.toString(), 'a')
assert.strictEqual(s1.text.join(''), 'a')
})
it('should serialize to JSON as a simple string', () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', '"', 'b'))
assert.strictEqual(JSON.stringify(s1), '{"text":"a\\"b"}')
})
it('should allow modification before an object is assigned to a document', () => {
s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text()
text.insertAt(0, 'a', 'b', 'c', 'd')
text.deleteAt(2)
doc.text = text
assert.strictEqual(doc.text.toString(), 'abd')
assert.strictEqual(doc.text.join(''), 'abd')
})
assert.strictEqual(s1.text.toString(), 'abd')
assert.strictEqual(s1.text.join(''), 'abd')
})
it('should allow modification after an object is assigned to a document', () => {
s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text()
doc.text = text
doc.text.insertAt(0, 'a', 'b', 'c', 'd')
doc.text.deleteAt(2)
assert.strictEqual(doc.text.toString(), 'abd')
assert.strictEqual(doc.text.join(''), 'abd')
})
assert.strictEqual(s1.text.join(''), 'abd')
})
it('should not allow modification outside of a change callback', () => {
assert.throws(() => s1.text.insertAt(0, 'a'), /object cannot be modified outside of a change block/)
})
describe('with initial value', () => {
it('should accept a string as initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text('init'))
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), 'i')
assert.strictEqual(s1.text.get(1), 'n')
assert.strictEqual(s1.text.get(2), 'i')
assert.strictEqual(s1.text.get(3), 't')
assert.strictEqual(s1.text.toString(), 'init')
})
it('should accept an array as initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text(['i', 'n', 'i', 't']))
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), 'i')
assert.strictEqual(s1.text.get(1), 'n')
assert.strictEqual(s1.text.get(2), 'i')
assert.strictEqual(s1.text.get(3), 't')
assert.strictEqual(s1.text.toString(), 'init')
})
it('should initialize text in Automerge.from()', () => {
let s1 = Automerge.from({text: new Automerge.Text('init')})
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), 'i')
assert.strictEqual(s1.text.get(1), 'n')
assert.strictEqual(s1.text.get(2), 'i')
assert.strictEqual(s1.text.get(3), 't')
assert.strictEqual(s1.text.toString(), 'init')
})
it('should encode the initial value as a change', () => {
const s1 = Automerge.from({text: new Automerge.Text('init')})
const changes = Automerge.getAllChanges(s1)
assert.strictEqual(changes.length, 1)
const [s2] = Automerge.applyChanges(Automerge.init(), changes)
assert.strictEqual(s2.text instanceof Automerge.Text, true)
assert.strictEqual(s2.text.toString(), 'init')
assert.strictEqual(s2.text.join(''), 'init')
})
it('should allow immediate access to the value', () => {
Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text('init')
assert.strictEqual(text.length, 4)
assert.strictEqual(text.get(0), 'i')
assert.strictEqual(text.toString(), 'init')
doc.text = text
assert.strictEqual(doc.text.length, 4)
assert.strictEqual(doc.text.get(0), 'i')
assert.strictEqual(doc.text.toString(), 'init')
})
})
it('should allow pre-assignment modification of the initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text('init')
text.deleteAt(3)
assert.strictEqual(text.join(''), 'ini')
doc.text = text
assert.strictEqual(doc.text.join(''), 'ini')
assert.strictEqual(doc.text.toString(), 'ini')
})
assert.strictEqual(s1.text.toString(), 'ini')
assert.strictEqual(s1.text.join(''), 'ini')
})
it('should allow post-assignment modification of the initial value', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text('init')
doc.text = text
doc.text.deleteAt(0)
doc.text.insertAt(0, 'I')
assert.strictEqual(doc.text.join(''), 'Init')
assert.strictEqual(doc.text.toString(), 'Init')
})
assert.strictEqual(s1.text.join(''), 'Init')
assert.strictEqual(s1.text.toString(), 'Init')
})
})
describe('non-textual control characters', () => {
let s1
beforeEach(() => {
s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text()
doc.text.insertAt(0, 'a')
doc.text.insertAt(1, { attribute: 'bold' })
})
})
it('should allow fetching non-textual characters', () => {
assert.deepEqual(s1.text.get(1), { attribute: 'bold' })
//assert.strictEqual(s1.text.getElemId(1), `3@${Automerge.getActorId(s1)}`)
})
it('should include control characters in string length', () => {
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), 'a')
})
it('should exclude control characters from toString()', () => {
assert.strictEqual(s1.text.toString(), 'a')
})
it('should allow control characters to be updated', () => {
const s2 = Automerge.change(s1, doc => doc.text.get(1).attribute = 'italic')
const s3 = Automerge.load(Automerge.save(s2))
assert.strictEqual(s1.text.get(1).attribute, 'bold')
assert.strictEqual(s2.text.get(1).attribute, 'italic')
assert.strictEqual(s3.text.get(1).attribute, 'italic')
})
describe('spans interface to Text', () => {
it('should return a simple string as a single span', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
})
assert.deepEqual(s1.text.toSpans(), ['hello world'])
})
it('should return an empty string as an empty array', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text()
})
assert.deepEqual(s1.text.toSpans(), [])
})
it('should split a span at a control character', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
doc.text.insertAt(5, { attributes: { bold: true } })
})
assert.deepEqual(s1.text.toSpans(),
['hello', { attributes: { bold: true } }, ' world'])
})
it('should allow consecutive control characters', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
doc.text.insertAt(5, { attributes: { bold: true } })
doc.text.insertAt(6, { attributes: { italic: true } })
})
assert.deepEqual(s1.text.toSpans(),
['hello',
{ attributes: { bold: true } },
{ attributes: { italic: true } },
' world'
])
})
it('should allow non-consecutive control characters', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('hello world')
doc.text.insertAt(5, { attributes: { bold: true } })
doc.text.insertAt(12, { attributes: { italic: true } })
})
assert.deepEqual(s1.text.toSpans(),
['hello',
{ attributes: { bold: true } },
' world',
{ attributes: { italic: true } }
])
})
it('should be convertable into a Quill delta', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(7 + 1, { attributes: { bold: null } })
doc.text.insertAt(12 + 2, { attributes: { color: '#cccccc' } })
})
let deltaDoc = automergeTextToDeltaDoc(s1.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gandalf', attributes: { bold: true } },
{ insert: ' the ' },
{ insert: 'Grey', attributes: { color: '#cccccc' } }
]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should support embeds', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('')
doc.text.insertAt(0, { attributes: { link: 'https://quilljs.com' } })
doc.text.insertAt(1, {
image: 'https://quilljs.com/assets/images/icon.png'
})
doc.text.insertAt(2, { attributes: { link: null } })
})
let deltaDoc = automergeTextToDeltaDoc(s1.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [{
// An image link
insert: {
image: 'https://quilljs.com/assets/images/icon.png'
},
attributes: {
link: 'https://quilljs.com'
}
}]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should handle concurrent overlapping spans', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
})
let s2 = Automerge.merge(Automerge.init(), s1)
let s3 = Automerge.change(s1, doc => {
doc.text.insertAt(8, { attributes: { bold: true } })
doc.text.insertAt(16 + 1, { attributes: { bold: null } })
})
let s4 = Automerge.change(s2, doc => {
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(11 + 1, { attributes: { bold: null } })
})
let merged = Automerge.merge(s3, s4)
let deltaDoc = automergeTextToDeltaDoc(merged.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gandalf the Grey', attributes: { bold: true } },
]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should handle debolding spans', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
})
let s2 = Automerge.merge(Automerge.init(), s1)
let s3 = Automerge.change(s1, doc => {
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(16 + 1, { attributes: { bold: null } })
})
let s4 = Automerge.change(s2, doc => {
doc.text.insertAt(8, { attributes: { bold: null } })
doc.text.insertAt(11 + 1, { attributes: { bold: true } })
})
let merged = Automerge.merge(s3, s4)
let deltaDoc = automergeTextToDeltaDoc(merged.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gandalf ', attributes: { bold: true } },
{ insert: 'the' },
{ insert: ' Grey', attributes: { bold: true } },
]
assert.deepEqual(deltaDoc, expectedDoc)
})
// xxx: how would this work for colors?
it('should handle destyling across destyled spans', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Gandalf the Grey')
})
let s2 = Automerge.merge(Automerge.init(), s1)
let s3 = Automerge.change(s1, doc => {
doc.text.insertAt(0, { attributes: { bold: true } })
doc.text.insertAt(16 + 1, { attributes: { bold: null } })
})
let s4 = Automerge.change(s2, doc => {
doc.text.insertAt(8, { attributes: { bold: null } })
doc.text.insertAt(11 + 1, { attributes: { bold: true } })
})
let merged = Automerge.merge(s3, s4)
let final = Automerge.change(merged, doc => {
doc.text.insertAt(3 + 1, { attributes: { bold: null } })
doc.text.insertAt(doc.text.length, { attributes: { bold: true } })
})
let deltaDoc = automergeTextToDeltaDoc(final.text)
// From https://quilljs.com/docs/delta/
let expectedDoc = [
{ insert: 'Gan', attributes: { bold: true } },
{ insert: 'dalf the Grey' },
]
assert.deepEqual(deltaDoc, expectedDoc)
})
it('should apply an insert', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Hello world')
})
const delta = [
{ retain: 6 },
{ insert: 'reader' },
{ delete: 5 }
]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(delta, doc)
})
assert.strictEqual(s2.text.join(''), 'Hello reader')
})
it('should apply an insert with control characters', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Hello world')
})
const delta = [
{ retain: 6 },
{ insert: 'reader', attributes: { bold: true } },
{ delete: 5 },
{ insert: '!' }
]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(delta, doc)
})
assert.strictEqual(s2.text.toString(), 'Hello reader!')
assert.deepEqual(s2.text.toSpans(), [
"Hello ",
{ attributes: { bold: true } },
"reader",
{ attributes: { bold: null } },
"!"
])
})
it('should account for control characters in retain/delete lengths', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('Hello world')
doc.text.insertAt(4, { attributes: { color: '#ccc' } })
doc.text.insertAt(10, { attributes: { color: '#f00' } })
})
const delta = [
{ retain: 6 },
{ insert: 'reader', attributes: { bold: true } },
{ delete: 5 },
{ insert: '!' }
]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(delta, doc)
})
assert.strictEqual(s2.text.toString(), 'Hello reader!')
assert.deepEqual(s2.text.toSpans(), [
"Hell",
{ attributes: { color: '#ccc'} },
"o ",
{ attributes: { bold: true } },
"reader",
{ attributes: { bold: null } },
{ attributes: { color: '#f00'} },
"!"
])
})
it('should apply embeds', () => {
let s1 = Automerge.change(Automerge.init(), doc => {
doc.text = new Automerge.Text('')
})
let deltaDoc = [{
// An image link
insert: {
image: 'https://quilljs.com/assets/images/icon.png'
},
attributes: {
link: 'https://quilljs.com'
}
}]
let s2 = Automerge.change(s1, doc => {
applyDeltaDocToAutomergeText(deltaDoc, doc)
})
assert.deepEqual(s2.text.toSpans(), [
{ attributes: { link: 'https://quilljs.com' } },
{ image: 'https://quilljs.com/assets/images/icon.png'},
{ attributes: { link: null } },
])
})
})
})
it('should support unicode when creating text', () => {
s1 = Automerge.from({
text: new Automerge.Text('🐦')
})
assert.strictEqual(s1.text.get(0), '🐦')
})
})

@@ -1,32 +0,0 @@
const assert = require('assert')
const Automerge = require('..')
const uuid = Automerge.uuid
describe('uuid', () => {
afterEach(() => {
uuid.reset()
})
describe('default implementation', () => {
it('generates unique values', () => {
assert.notEqual(uuid(), uuid())
})
})
describe('custom implementation', () => {
let counter
function customUuid() {
return `custom-uuid-${counter++}`
}
before(() => uuid.setFactory(customUuid))
beforeEach(() => counter = 0)
it('invokes the custom factory', () => {
assert.equal(uuid(), 'custom-uuid-0')
assert.equal(uuid(), 'custom-uuid-1')
})
})
})

@@ -1 +0,0 @@
todo

@@ -1,31 +0,0 @@
{
"collaborators": [
"Orion Henry <orion@inkandswitch.com>",
"Alex Good <alex@memoryandthought.me>",
"Martin Kleppmann"
],
"name": "automerge-wasm",
"description": "wasm-bindgen bindings to the automerge rust implementation",
"version": "0.1.0",
"license": "MIT",
"files": [
"README.md",
"LICENSE",
"package.json",
"automerge_wasm_bg.wasm",
"automerge_wasm.js"
],
"main": "./dev/index.js",
"scripts": {
"build": "rimraf ./dev && wasm-pack build --target nodejs --dev --out-name index -d dev",
"release": "rimraf ./dev && wasm-pack build --target nodejs --release --out-name index -d dev && yarn opt",
"prof": "rimraf ./dev && wasm-pack build --target nodejs --profiling --out-name index -d dev",
"opt": "wasm-opt -Oz dev/index_bg.wasm -o tmp.wasm && mv tmp.wasm dev/index_bg.wasm",
"test": "yarn build && mocha --bail --full-trace"
},
"dependencies": {},
"devDependencies": {
"mocha": "^9.1.3",
"rimraf": "^3.0.2"
}
}

@@ -1,822 +0,0 @@
extern crate web_sys;
use automerge as am;
use automerge::{Change, ChangeHash, Prop, Value, ExId};
use js_sys::{Array, Object, Reflect, Uint8Array};
use serde::de::DeserializeOwned;
use serde::Serialize;
use std::collections::{HashMap, HashSet};
use std::convert::TryFrom;
use std::convert::TryInto;
use std::fmt::Display;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
#[allow(unused_macros)]
macro_rules! log {
( $( $t:tt )* ) => {
web_sys::console::log_1(&format!( $( $t )* ).into());
};
}
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
fn datatype(s: &am::ScalarValue) -> String {
match s {
am::ScalarValue::Bytes(_) => "bytes".into(),
am::ScalarValue::Str(_) => "str".into(),
am::ScalarValue::Int(_) => "int".into(),
am::ScalarValue::Uint(_) => "uint".into(),
am::ScalarValue::F64(_) => "f64".into(),
am::ScalarValue::Counter(_) => "counter".into(),
am::ScalarValue::Timestamp(_) => "timestamp".into(),
am::ScalarValue::Boolean(_) => "boolean".into(),
am::ScalarValue::Null => "null".into(),
}
}
#[derive(Debug)]
pub struct ScalarValue(am::ScalarValue);
impl From<ScalarValue> for JsValue {
fn from(val: ScalarValue) -> Self {
match &val.0 {
am::ScalarValue::Bytes(v) => Uint8Array::from(v.as_slice()).into(),
am::ScalarValue::Str(v) => v.to_string().into(),
am::ScalarValue::Int(v) => (*v as f64).into(),
am::ScalarValue::Uint(v) => (*v as f64).into(),
am::ScalarValue::F64(v) => (*v).into(),
am::ScalarValue::Counter(v) => (*v as f64).into(),
am::ScalarValue::Timestamp(v) => (*v as f64).into(),
am::ScalarValue::Boolean(v) => (*v).into(),
am::ScalarValue::Null => JsValue::null(),
}
}
}
#[wasm_bindgen]
#[derive(Debug)]
pub struct Automerge(automerge::Automerge);
#[wasm_bindgen]
#[derive(Debug)]
pub struct SyncState(am::SyncState);
#[wasm_bindgen]
impl SyncState {
#[wasm_bindgen(getter, js_name = sharedHeads)]
pub fn shared_heads(&self) -> JsValue {
rust_to_js(&self.0.shared_heads).unwrap()
}
#[wasm_bindgen(getter, js_name = lastSentHeads)]
pub fn last_sent_heads(&self) -> JsValue {
rust_to_js(self.0.last_sent_heads.as_ref()).unwrap()
}
#[wasm_bindgen(setter, js_name = lastSentHeads)]
pub fn set_last_sent_heads(&mut self, heads: JsValue) {
let heads: Option<Vec<ChangeHash>> = js_to_rust(&heads).unwrap();
self.0.last_sent_heads = heads
}
#[wasm_bindgen(setter, js_name = sentHashes)]
pub fn set_sent_hashes(&mut self, hashes: JsValue) {
let hashes_map: HashMap<ChangeHash, bool> = js_to_rust(&hashes).unwrap();
let hashes_set: HashSet<ChangeHash> = hashes_map.keys().cloned().collect();
self.0.sent_hashes = hashes_set
}
fn decode(data: Uint8Array) -> Result<SyncState, JsValue> {
let data = data.to_vec();
let s = am::SyncState::decode(&data);
let s = s.map_err(to_js_err)?;
Ok(SyncState(s))
}
}
#[derive(Debug)]
pub struct JsErr(String);
impl From<JsErr> for JsValue {
fn from(err: JsErr) -> Self {
js_sys::Error::new(&std::format!("{}", err.0)).into()
}
}
impl<'a> From<&'a str> for JsErr {
fn from(s: &'a str) -> Self {
JsErr(s.to_owned())
}
}
#[wasm_bindgen]
impl Automerge {
pub fn new(actor: JsValue) -> Result<Automerge, JsValue> {
let mut automerge = automerge::Automerge::new();
if let Some(a) = actor.as_string() {
let a = automerge::ActorId::from(hex::decode(a).map_err(to_js_err)?.to_vec());
automerge.set_actor(a);
}
Ok(Automerge(automerge))
}
#[allow(clippy::should_implement_trait)]
pub fn clone(&self) -> Self {
Automerge(self.0.clone())
}
pub fn free(self) {}
pub fn pending_ops(&self) -> JsValue {
(self.0.pending_ops() as u32).into()
}
pub fn commit(&mut self, message: JsValue, time: JsValue) -> Array {
let message = message.as_string();
let time = time.as_f64().map(|v| v as i64);
let heads = self.0.commit(message, time);
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
heads
}
pub fn rollback(&mut self) -> JsValue {
self.0.rollback().into()
}
pub fn keys(&mut self, obj: JsValue, heads: JsValue) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = if let Some(heads) = get_heads(heads) {
self.0.keys_at(&obj, &heads)
} else {
self.0.keys(&obj)
}
.iter()
.map(|s| JsValue::from_str(s))
.collect();
Ok(result)
}
pub fn text(&mut self, obj: JsValue, heads: JsValue) -> Result<JsValue, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
self.0.text_at(&obj, &heads)
} else {
self.0.text(&obj)
}
.map_err(to_js_err)
.map(|t| t.into())
}
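// Note: when `text` is a string it is spliced in as text via splice_text;
// when it is an array, each element is either a plain string or an array
// whose second element is the value and whose third is an optional datatype.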
pub fn splice(
&mut self,
obj: JsValue,
start: JsValue,
delete_count: JsValue,
text: JsValue,
) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let start = to_usize(start, "start")?;
let delete_count = to_usize(delete_count, "deleteCount")?;
let mut vals = vec![];
if let Some(t) = text.as_string() {
self.0
.splice_text(&obj, start, delete_count, &t)
.map_err(to_js_err)?;
} else {
if let Ok(array) = text.dyn_into::<Array>() {
for i in array.iter() {
if let Some(t) = i.as_string() {
vals.push(t.into());
} else if let Ok(array) = i.dyn_into::<Array>() {
let value = array.get(1);
let datatype = array.get(2);
let value = self.import_value(value, datatype)?;
vals.push(value);
}
}
}
self.0
.splice(&obj, start, delete_count, vals)
.map_err(to_js_err)?;
}
Ok(())
}
pub fn insert(
&mut self,
obj: JsValue,
index: JsValue,
value: JsValue,
datatype: JsValue,
) -> Result<JsValue, JsValue> {
let obj = self.import(obj)?;
//let key = self.insert_pos_for_index(&obj, prop)?;
let index: Result<_, JsValue> = index
.as_f64()
.ok_or_else(|| "insert index must be a number".into());
let index = index?;
let value = self.import_value(value, datatype)?;
let opid = self
.0
.insert(&obj, index as usize, value)
.map_err(to_js_err)?;
match opid {
Some(opid) => Ok(self.export(opid)),
None => Ok(JsValue::null()),
}
}
pub fn set(
&mut self,
obj: JsValue,
prop: JsValue,
value: JsValue,
datatype: JsValue,
) -> Result<JsValue, JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value = self.import_value(value, datatype)?;
let opid = self.0.set(&obj, prop, value).map_err(to_js_err)?;
match opid {
Some(opid) => Ok(self.export(opid)),
None => Ok(JsValue::null()),
}
}
pub fn inc(&mut self, obj: JsValue, prop: JsValue, value: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value: f64 = value
.as_f64()
.ok_or("inc needs a numberic value")
.map_err(to_js_err)?;
self.0.inc(&obj, prop, value as i64).map_err(to_js_err)?;
Ok(())
}
pub fn value(&mut self, obj: JsValue, prop: JsValue, heads: JsValue) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(prop);
let heads = get_heads(heads);
if let Ok(prop) = prop {
let value = if let Some(h) = heads {
self.0.value_at(&obj, prop, &h)
} else {
self.0.value(&obj, prop)
}
.map_err(to_js_err)?;
match value {
Some((Value::Object(obj_type), obj_id)) => {
result.push(&obj_type.to_string().into());
result.push(&self.export(obj_id));
}
Some((Value::Scalar(value), _)) => {
result.push(&datatype(&value).into());
result.push(&ScalarValue(value).into());
}
None => {}
}
}
Ok(result)
}
pub fn values(&mut self, obj: JsValue, arg: JsValue, heads: JsValue) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(arg);
if let Ok(prop) = prop {
let values = if let Some(heads) = get_heads(heads) {
self.0.values_at(&obj, prop, &heads)
} else {
self.0.values(&obj, prop)
}
.map_err(to_js_err)?;
for value in values {
match value {
(Value::Object(obj_type), obj_id) => {
let sub = Array::new();
sub.push(&obj_type.to_string().into());
sub.push(&self.export(obj_id));
result.push(&sub.into());
}
(Value::Scalar(value), id) => {
let sub = Array::new();
sub.push(&datatype(&value).into());
sub.push(&ScalarValue(value).into());
sub.push(&self.export(id));
result.push(&sub.into());
}
}
}
}
Ok(result)
}
pub fn length(&mut self, obj: JsValue, heads: JsValue) -> Result<JsValue, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
Ok((self.0.length_at(&obj, &heads) as f64).into())
} else {
Ok((self.0.length(&obj) as f64).into())
}
}
pub fn del(&mut self, obj: JsValue, prop: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = to_prop(prop)?;
self.0.del(&obj, prop).map_err(to_js_err)?;
Ok(())
}
pub fn save(&mut self) -> Result<Uint8Array, JsValue> {
self.0
.save()
.map(|v| Uint8Array::from(v.as_slice()))
.map_err(to_js_err)
}
#[wasm_bindgen(js_name = saveIncremental)]
pub fn save_incremental(&mut self) -> JsValue {
let bytes = self.0.save_incremental();
Uint8Array::from(bytes.as_slice()).into()
}
#[wasm_bindgen(js_name = loadIncremental)]
pub fn load_incremental(&mut self, data: Uint8Array) -> Result<JsValue, JsValue> {
let data = data.to_vec();
let len = self.0.load_incremental(&data).map_err(to_js_err)?;
Ok(len.into())
}
#[wasm_bindgen(js_name = applyChanges)]
pub fn apply_changes(&mut self, changes: JsValue) -> Result<(), JsValue> {
let changes: Vec<_> = JS(changes).try_into()?;
self.0.apply_changes(&changes).map_err(to_js_err)?;
Ok(())
}
#[wasm_bindgen(js_name = getChanges)]
pub fn get_changes(&mut self, have_deps: JsValue) -> Result<Array, JsValue> {
let deps: Vec<_> = JS(have_deps).try_into()?;
let changes = self.0.get_changes(&deps);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
Ok(changes)
}
#[wasm_bindgen(js_name = getChangesAdded)]
pub fn get_changes_added(&mut self, other: &Automerge) -> Result<Array, JsValue> {
let changes = self.0.get_changes_added(&other.0);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
Ok(changes)
}
#[wasm_bindgen(js_name = getHeads)]
pub fn get_heads(&mut self) -> Result<Array, JsValue> {
let heads = self.0.get_heads();
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
Ok(heads)
}
#[wasm_bindgen(js_name = getActorId)]
pub fn get_actor_id(&mut self) -> Result<JsValue, JsValue> {
let actor = self.0.get_actor();
Ok(actor.to_string().into())
}
#[wasm_bindgen(js_name = getLastLocalChange)]
pub fn get_last_local_change(&mut self) -> Result<JsValue, JsValue> {
if let Some(change) = self.0.get_last_local_change() {
Ok(Uint8Array::from(change.raw_bytes()).into())
} else {
Ok(JsValue::null())
}
}
pub fn dump(&self) {
self.0.dump()
}
#[wasm_bindgen(js_name = getMissingDeps)]
pub fn get_missing_deps(&mut self, heads: JsValue) -> Result<Array, JsValue> {
let heads: Vec<_> = JS(heads).try_into()?;
let deps = self.0.get_missing_deps(&heads);
let deps: Array = deps
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
Ok(deps)
}
#[wasm_bindgen(js_name = receiveSyncMessage)]
pub fn receive_sync_message(
&mut self,
state: &mut SyncState,
message: Uint8Array,
) -> Result<(), JsValue> {
let message = message.to_vec();
let message = am::SyncMessage::decode(message.as_slice()).map_err(to_js_err)?;
self.0
.receive_sync_message(&mut state.0, message)
.map_err(to_js_err)?;
Ok(())
}
#[wasm_bindgen(js_name = generateSyncMessage)]
pub fn generate_sync_message(&mut self, state: &mut SyncState) -> Result<JsValue, JsValue> {
if let Some(message) = self.0.generate_sync_message(&mut state.0) {
Ok(Uint8Array::from(message.encode().map_err(to_js_err)?.as_slice()).into())
} else {
Ok(JsValue::null())
}
}
fn export(&self, val: ExId) -> JsValue {
val.to_string().into()
}
fn import(&self, id: JsValue) -> Result<ExId, JsValue> {
let id_str = id
.as_string()
.ok_or("invalid opid/objid/elemid")
.map_err(to_js_err)?;
self.0.import(&id_str).map_err(to_js_err)
}
fn import_prop(&mut self, prop: JsValue) -> Result<Prop, JsValue> {
if let Some(s) = prop.as_string() {
Ok(s.into())
} else if let Some(n) = prop.as_f64() {
Ok((n as usize).into())
} else {
Err(format!("invalid prop {:?}", prop).into())
}
}
fn import_value(&mut self, value: JsValue, datatype: JsValue) -> Result<Value, JsValue> {
let datatype = datatype.as_string();
match datatype.as_deref() {
Some("boolean") => value
.as_bool()
.ok_or_else(|| "value must be a bool".into())
.map(|v| am::ScalarValue::Boolean(v).into()),
Some("int") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Int(v as i64).into()),
Some("uint") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Uint(v as u64).into()),
Some("f64") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|n| am::ScalarValue::F64(n).into()),
Some("bytes") => {
Ok(am::ScalarValue::Bytes(value.dyn_into::<Uint8Array>().unwrap().to_vec()).into())
}
Some("counter") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Counter(v as i64).into()),
Some("timestamp") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Timestamp(v as i64).into()),
/*
Some("bytes") => unimplemented!(),
Some("cursor") => unimplemented!(),
*/
Some("null") => Ok(am::ScalarValue::Null.into()),
Some(_) => Err(format!("unknown datatype {:?}", datatype).into()),
None => {
if value.is_null() {
Ok(am::ScalarValue::Null.into())
} else if let Some(b) = value.as_bool() {
Ok(am::ScalarValue::Boolean(b).into())
} else if let Some(s) = value.as_string() {
// FIXME - we need to detect str vs int vs float vs bool here :/
Ok(am::ScalarValue::Str(s.into()).into())
} else if let Some(n) = value.as_f64() {
if (n.round() - n).abs() < f64::EPSILON {
Ok(am::ScalarValue::Int(n as i64).into())
} else {
Ok(am::ScalarValue::F64(n).into())
}
} else if let Some(o) = to_objtype(&value) {
Ok(o.into())
} else if let Ok(o) = &value.dyn_into::<Uint8Array>() {
Ok(am::ScalarValue::Bytes(o.to_vec()).into())
} else {
Err("value is invalid".into())
}
}
}
}
}
pub fn to_usize(val: JsValue, name: &str) -> Result<usize, JsValue> {
match val.as_f64() {
Some(n) => Ok(n as usize),
None => Err(format!("{} must be a number", name).into()),
}
}
pub fn to_prop(p: JsValue) -> Result<Prop, JsValue> {
if let Some(s) = p.as_string() {
Ok(Prop::Map(s))
} else if let Some(n) = p.as_f64() {
Ok(Prop::Seq(n as usize))
} else {
Err("prop must me a string or number".into())
}
}
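// Detects the sentinel MAP/LIST/TEXT/TABLE classes exported at the bottom of
// this file by inspecting the stringified constructor source.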
fn to_objtype(a: &JsValue) -> Option<am::ObjType> {
if !a.is_function() {
return None;
}
let f: js_sys::Function = a.clone().try_into().unwrap();
let f = f.to_string();
if f.starts_with("class MAP", 0) {
Some(am::ObjType::Map)
} else if f.starts_with("class LIST", 0) {
Some(am::ObjType::List)
} else if f.starts_with("class TEXT", 0) {
Some(am::ObjType::Text)
} else if f.starts_with("class TABLE", 0) {
Some(am::ObjType::Table)
} else {
None
}
}
struct ObjType(am::ObjType);
impl TryFrom<JsValue> for ObjType {
type Error = JsValue;
fn try_from(val: JsValue) -> Result<Self, Self::Error> {
match &val.as_string() {
Some(o) if o == "map" => Ok(ObjType(am::ObjType::Map)),
Some(o) if o == "list" => Ok(ObjType(am::ObjType::List)),
Some(o) => Err(format!("unknown obj type {}", o).into()),
_ => Err("obj type must be a string".into()),
}
}
}
#[wasm_bindgen]
pub fn init(actor: JsValue) -> Result<Automerge, JsValue> {
console_error_panic_hook::set_once();
Automerge::new(actor)
}
#[wasm_bindgen]
pub fn load(data: Uint8Array, actor: JsValue) -> Result<Automerge, JsValue> {
let data = data.to_vec();
let mut automerge = am::Automerge::load(&data).map_err(to_js_err)?;
if let Some(s) = actor.as_string() {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.set_actor(actor)
}
Ok(Automerge(automerge))
}
#[wasm_bindgen(js_name = encodeChange)]
pub fn encode_change(change: JsValue) -> Result<Uint8Array, JsValue> {
let change: am::ExpandedChange = change.into_serde().map_err(to_js_err)?;
let change: Change = change.into();
Ok(Uint8Array::from(change.raw_bytes()))
}
#[wasm_bindgen(js_name = decodeChange)]
pub fn decode_change(change: Uint8Array) -> Result<JsValue, JsValue> {
let change = Change::from_bytes(change.to_vec()).map_err(to_js_err)?;
let change: am::ExpandedChange = change.decode();
JsValue::from_serde(&change).map_err(to_js_err)
}
#[wasm_bindgen(js_name = initSyncState)]
pub fn init_sync_state() -> SyncState {
SyncState(Default::default())
}
#[wasm_bindgen(js_name = encodeSyncMessage)]
pub fn encode_sync_message(message: JsValue) -> Result<Uint8Array, JsValue> {
let heads = get(&message, "heads")?.try_into()?;
let need = get(&message, "need")?.try_into()?;
let changes = get(&message, "changes")?.try_into()?;
let have = get(&message, "have")?.try_into()?;
Ok(Uint8Array::from(
am::SyncMessage {
heads,
need,
have,
changes,
}
.encode()
.unwrap()
.as_slice(),
))
}
#[wasm_bindgen(js_name = decodeSyncMessage)]
pub fn decode_sync_message(msg: Uint8Array) -> Result<JsValue, JsValue> {
let data = msg.to_vec();
let msg = am::SyncMessage::decode(&data).map_err(to_js_err)?;
let heads: Array = VH(&msg.heads).into();
let need: Array = VH(&msg.need).into();
let changes: Array = VC(&msg.changes).into();
let have: Array = VSH(&msg.have).try_into()?;
let obj = Object::new().into();
set(&obj, "heads", heads)?;
set(&obj, "need", need)?;
set(&obj, "have", have)?;
set(&obj, "changes", changes)?;
Ok(obj)
}
#[wasm_bindgen(js_name = encodeSyncState)]
pub fn encode_sync_state(state: SyncState) -> Result<Uint8Array, JsValue> {
Ok(Uint8Array::from(
state.0.encode().map_err(to_js_err)?.as_slice(),
))
}
#[wasm_bindgen(js_name = decodeSyncState)]
pub fn decode_sync_state(state: Uint8Array) -> Result<SyncState, JsValue> {
SyncState::decode(state)
}
#[wasm_bindgen(js_name = MAP)]
pub struct Map {}
#[wasm_bindgen(js_name = LIST)]
pub struct List {}
#[wasm_bindgen(js_name = TEXT)]
pub struct Text {}
#[wasm_bindgen(js_name = TABLE)]
pub struct Table {}
fn to_js_err<T: Display>(err: T) -> JsValue {
js_sys::Error::new(&std::format!("{}", err)).into()
}
fn get(obj: &JsValue, prop: &str) -> Result<JS, JsValue> {
Ok(JS(Reflect::get(obj, &prop.into())?))
}
fn set<V: Into<JsValue>>(obj: &JsValue, prop: &str, val: V) -> Result<bool, JsValue> {
Reflect::set(obj, &prop.into(), &val.into())
}
struct JS(JsValue);
impl TryFrom<JS> for Vec<ChangeHash> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0.dyn_into::<Array>()?;
let value: Result<Vec<ChangeHash>, _> = value.iter().map(|j| j.into_serde()).collect();
let value = value.map_err(to_js_err)?;
Ok(value)
}
}
impl From<JS> for Option<Vec<ChangeHash>> {
fn from(value: JS) -> Self {
let value = value.0.dyn_into::<Array>().ok()?;
let value: Result<Vec<ChangeHash>, _> = value.iter().map(|j| j.into_serde()).collect();
let value = value.ok()?;
Some(value)
}
}
impl TryFrom<JS> for Vec<Change> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0.dyn_into::<Array>()?;
let changes: Result<Vec<Uint8Array>, _> = value.iter().map(|j| j.dyn_into()).collect();
let changes = changes?;
let changes: Result<Vec<Change>, _> = changes
.iter()
.map(|a| am::decode_change(a.to_vec()))
.collect();
let changes = changes.map_err(to_js_err)?;
Ok(changes)
}
}
impl TryFrom<JS> for Vec<am::SyncHave> {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value = value.0.dyn_into::<Array>()?;
let have: Result<Vec<am::SyncHave>, JsValue> = value
.iter()
.map(|s| {
let last_sync = get(&s, "lastSync")?.try_into()?;
let bloom = get(&s, "bloom")?.try_into()?;
Ok(am::SyncHave { last_sync, bloom })
})
.collect();
let have = have?;
Ok(have)
}
}
impl TryFrom<JS> for am::BloomFilter {
type Error = JsValue;
fn try_from(value: JS) -> Result<Self, Self::Error> {
let value: Uint8Array = value.0.dyn_into()?;
let value = value.to_vec();
let value = value.as_slice().try_into().map_err(to_js_err)?;
Ok(value)
}
}
struct VH<'a>(&'a [ChangeHash]);
impl<'a> From<VH<'a>> for Array {
fn from(value: VH<'a>) -> Self {
let heads: Array = value
.0
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
heads
}
}
struct VC<'a>(&'a [Change]);
impl<'a> From<VC<'a>> for Array {
fn from(value: VC<'a>) -> Self {
let changes: Array = value
.0
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
changes
}
}
#[allow(clippy::upper_case_acronyms)]
struct VSH<'a>(&'a [am::SyncHave]);
impl<'a> TryFrom<VSH<'a>> for Array {
type Error = JsValue;
fn try_from(value: VSH<'a>) -> Result<Self, Self::Error> {
let have: Result<Array, JsValue> = value
.0
.iter()
.map(|have| {
let last_sync: Array = have
.last_sync
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
// FIXME - the clone and the unwrap here shouldn't be needed - look at into_bytes()
let bloom = Uint8Array::from(have.bloom.clone().into_bytes().unwrap().as_slice());
let obj: JsValue = Object::new().into();
Reflect::set(&obj, &"lastSync".into(), &last_sync.into())?;
Reflect::set(&obj, &"bloom".into(), &bloom.into())?;
Ok(obj)
})
.collect();
let have = have?;
Ok(have)
}
}
fn rust_to_js<T: Serialize>(value: T) -> Result<JsValue, JsValue> {
JsValue::from_serde(&value).map_err(to_js_err)
}
fn js_to_rust<T: DeserializeOwned>(value: &JsValue) -> Result<T, JsValue> {
value.into_serde().map_err(to_js_err)
}
fn get_heads(heads: JsValue) -> Option<Vec<ChangeHash>> {
JS(heads).into()
}

@@ -1,284 +0,0 @@
const assert = require('assert')
const util = require('util')
const Automerge = require('..')
const { MAP, LIST, TEXT } = Automerge
// str to uint8array
function en(str) {
return new TextEncoder('utf8').encode(str)
}
// uint8array to str
function de(bytes) {
return new TextDecoder('utf8').decode(bytes);
}
describe('Automerge', () => {
describe('basics', () => {
it('should init clone and free', () => {
let doc1 = Automerge.init()
let doc2 = doc1.clone()
doc1.free()
doc2.free()
})
it('should be able to start and commit', () => {
let doc = Automerge.init()
doc.commit()
})
it('getting a nonexistent prop does not throw an error', () => {
let doc = Automerge.init()
let root = "_root"
let result = doc.value(root,"hello")
assert.deepEqual(result,[])
})
it('should be able to set and get a simple value', () => {
let doc = Automerge.init()
let root = "_root"
let result
doc.set(root, "hello", "world")
doc.set(root, "number1", 5, "uint")
doc.set(root, "number2", 5)
doc.set(root, "number3", 5.5)
doc.set(root, "number4", 5.5, "f64")
doc.set(root, "number5", 5.5, "int")
doc.set(root, "bool", true)
result = doc.value(root,"hello")
assert.deepEqual(result,["str","world"])
result = doc.value(root,"number1")
assert.deepEqual(result,["uint",5])
result = doc.value(root,"number2")
assert.deepEqual(result,["int",5])
result = doc.value(root,"number3")
assert.deepEqual(result,["f64",5.5])
result = doc.value(root,"number4")
assert.deepEqual(result,["f64",5.5])
result = doc.value(root,"number5")
assert.deepEqual(result,["int",5])
result = doc.value(root,"bool")
assert.deepEqual(result,["boolean",true])
doc.set(root, "bool", false, "boolean")
result = doc.value(root,"bool")
assert.deepEqual(result,["boolean",false])
})
it('should be able to use bytes', () => {
let doc = Automerge.init()
doc.set("_root","data1", new Uint8Array([10,11,12]));
doc.set("_root","data2", new Uint8Array([13,14,15]), "bytes");
let value1 = doc.value("_root", "data1")
assert.deepEqual(value1, ["bytes", new Uint8Array([10,11,12])]);
let value2 = doc.value("_root", "data2")
assert.deepEqual(value2, ["bytes", new Uint8Array([13,14,15])]);
})
it('should be able to make sub objects', () => {
let doc = Automerge.init()
let root = "_root"
let result
let submap = doc.set(root, "submap", MAP)
doc.set(submap, "number", 6, "uint")
assert.strictEqual(doc.pending_ops(),2)
result = doc.value(root,"submap")
assert.deepEqual(result,["map",submap])
result = doc.value(submap,"number")
assert.deepEqual(result,["uint",6])
})
it('should be able to make lists', () => {
let doc = Automerge.init()
let root = "_root"
let submap = doc.set(root, "numbers", LIST)
doc.insert(submap, 0, "a");
doc.insert(submap, 1, "b");
doc.insert(submap, 2, "c");
doc.insert(submap, 0, "z");
assert.deepEqual(doc.value(submap, 0),["str","z"])
assert.deepEqual(doc.value(submap, 1),["str","a"])
assert.deepEqual(doc.value(submap, 2),["str","b"])
assert.deepEqual(doc.value(submap, 3),["str","c"])
assert.deepEqual(doc.length(submap),4)
doc.set(submap, 2, "b v2");
assert.deepEqual(doc.value(submap, 2),["str","b v2"])
assert.deepEqual(doc.length(submap),4)
})
it('should be able to delete non-existent props', () => {
let doc = Automerge.init()
doc.set("_root", "foo","bar")
doc.set("_root", "bip","bap")
let heads1 = doc.commit()
assert.deepEqual(doc.keys("_root"),["bip","foo"])
doc.del("_root", "foo")
doc.del("_root", "baz")
let heads2 = doc.commit()
assert.deepEqual(doc.keys("_root"),["bip"])
assert.deepEqual(doc.keys("_root", heads1),["bip", "foo"])
assert.deepEqual(doc.keys("_root", heads2),["bip"])
})
it('should be able to del', () => {
let doc = Automerge.init()
let root = "_root"
doc.set(root, "xxx", "xxx");
assert.deepEqual(doc.value(root, "xxx"),["str","xxx"])
doc.del(root, "xxx");
assert.deepEqual(doc.value(root, "xxx"),[])
})
it('should be able to use counters', () => {
let doc = Automerge.init()
let root = "_root"
doc.set(root, "counter", 10, "counter");
assert.deepEqual(doc.value(root, "counter"),["counter",10])
doc.inc(root, "counter", 10);
assert.deepEqual(doc.value(root, "counter"),["counter",20])
doc.inc(root, "counter", -5);
assert.deepEqual(doc.value(root, "counter"),["counter",15])
})
it('should be able to splice text', () => {
let doc = Automerge.init()
let root = "_root";
let text = doc.set(root, "text", Automerge.TEXT);
doc.splice(text, 0, 0, "hello ")
doc.splice(text, 6, 0, ["w","o","r","l","d"])
doc.splice(text, 11, 0, [["str","!"],["str","?"]])
assert.deepEqual(doc.value(text, 0),["str","h"])
assert.deepEqual(doc.value(text, 1),["str","e"])
assert.deepEqual(doc.value(text, 9),["str","l"])
assert.deepEqual(doc.value(text, 10),["str","d"])
assert.deepEqual(doc.value(text, 11),["str","!"])
assert.deepEqual(doc.value(text, 12),["str","?"])
})
it('should be able to save all or incrementally', () => {
let doc = Automerge.init()
doc.set("_root", "foo", 1)
let save1 = doc.save()
doc.set("_root", "bar", 2)
let saveMidway = doc.clone().save();
let save2 = doc.saveIncremental();
doc.set("_root", "baz", 3);
let save3 = doc.saveIncremental();
let saveA = doc.save();
let saveB = new Uint8Array([... save1, ...save2, ...save3]);
assert.notDeepEqual(saveA, saveB);
let docA = Automerge.load(saveA);
let docB = Automerge.load(saveB);
let docC = Automerge.load(saveMidway)
docC.loadIncremental(save3)
assert.deepEqual(docA.keys("_root"), docB.keys("_root"));
assert.deepEqual(docA.save(), docB.save());
assert.deepEqual(docA.save(), docC.save());
})
it('should be able to splice text and read it at old heads', () => {
let doc = Automerge.init()
let text = doc.set("_root", "text", TEXT);
doc.splice(text, 0, 0, "hello world");
let heads1 = doc.commit();
doc.splice(text, 6, 0, "big bad ");
let heads2 = doc.commit();
assert.strictEqual(doc.text(text), "hello big bad world")
assert.strictEqual(doc.length(text), 19)
assert.strictEqual(doc.text(text, heads1), "hello world")
assert.strictEqual(doc.length(text, heads1), 11)
assert.strictEqual(doc.text(text, heads2), "hello big bad world")
assert.strictEqual(doc.length(text, heads2), 19)
})
it('local inc increments all visible counters in a map', () => {
let doc1 = Automerge.init("aaaa")
doc1.set("_root", "hello", "world")
let doc2 = Automerge.load(doc1.save(), "bbbb");
let doc3 = Automerge.load(doc1.save(), "cccc");
doc1.set("_root", "cnt", 20)
doc2.set("_root", "cnt", 0, "counter")
doc3.set("_root", "cnt", 10, "counter")
doc1.applyChanges(doc2.getChanges(doc1.getHeads()))
doc1.applyChanges(doc3.getChanges(doc1.getHeads()))
let result = doc1.values("_root", "cnt")
assert.deepEqual(result,[
['counter',10,'2@cccc'],
['counter',0,'2@bbbb'],
['int',20,'2@aaaa']
])
doc1.inc("_root", "cnt", 5)
result = doc1.values("_root", "cnt")
assert.deepEqual(result, [
[ 'counter', 15, '2@cccc' ], [ 'counter', 5, '2@bbbb' ]
])
let save1 = doc1.save()
let doc4 = Automerge.load(save1)
assert.deepEqual(doc4.save(), save1);
})
it('local inc increments all visible counters in a sequence', () => {
let doc1 = Automerge.init("aaaa")
let seq = doc1.set("_root", "seq", LIST)
doc1.insert(seq, 0, "hello")
let doc2 = Automerge.load(doc1.save(), "bbbb");
let doc3 = Automerge.load(doc1.save(), "cccc");
doc1.set(seq, 0, 20)
doc2.set(seq, 0, 0, "counter")
doc3.set(seq, 0, 10, "counter")
doc1.applyChanges(doc2.getChanges(doc1.getHeads()))
doc1.applyChanges(doc3.getChanges(doc1.getHeads()))
let result = doc1.values(seq, 0)
assert.deepEqual(result,[
['counter',10,'3@cccc'],
['counter',0,'3@bbbb'],
['int',20,'3@aaaa']
])
doc1.inc(seq, 0, 5)
result = doc1.values(seq, 0)
assert.deepEqual(result, [
[ 'counter', 15, '3@cccc' ], [ 'counter', 5, '3@bbbb' ]
])
let save = doc1.save()
let doc4 = Automerge.load(save)
assert.deepEqual(doc4.save(), save);
})
})
})

@@ -1,38 +0,0 @@
[package]
name = "automerge"
version = "0.1.0"
edition = "2018"
license = "MIT"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
optree-visualisation = ["dot"]
[dependencies]
hex = "^0.4.3"
leb128 = "^0.2.5"
sha2 = "^0.10.0"
rand = { version = "^0.8.4" }
thiserror = "^1.0.16"
itertools = "^0.10.3"
flate2 = "^1.0.22"
nonzero_ext = "^0.2.0"
uuid = { version = "^0.8.2", features=["v4", "wasm-bindgen", "serde"] }
smol_str = "^0.1.21"
tracing = { version = "^0.1.29", features = ["log"] }
fxhash = "^0.2.1"
tinyvec = { version = "^1.5.1", features = ["alloc"] }
unicode-segmentation = "1.7.1"
serde = { version = "^1.0", features=["derive"] }
dot = { version = "0.1.4", optional = true }
[dependencies.web-sys]
version = "^0.3.55"
features = ["console"]
[dev-dependencies]
pretty_assertions = "1.0.0"
proptest = { version = "^1.0.0", default-features = false, features = ["std"] }
serde_json = { version = "^1.0.73", features=["float_roundtrip"], default-features=true }
maplit = { version = "^1.0" }

@@ -1,18 +0,0 @@
counters -> Visibility
fast load
values at clock
length at clock
keys at clock
text at clock
extra tests
counters in lists -> inserts with tombstones
ergonomics
set(obj, prop, val) vs mapset(obj, str, val) and seqset(obj, usize, val)
value() -> (id, value)

@@ -1,916 +0,0 @@
use crate::columnar::{
ChangeEncoder, ChangeIterator, ColumnEncoder, DepsIterator, DocChange, DocOp, DocOpEncoder,
DocOpIterator, OperationIterator, COLUMN_TYPE_DEFLATE,
};
use crate::decoding;
use crate::decoding::{Decodable, InvalidChangeError};
use crate::encoding::{Encodable, DEFLATE_MIN_SIZE};
use crate::legacy as amp;
use crate::{
ActorId, AutomergeError, ElemId, IndexedCache, Key, ObjId, Op, OpId, OpType, Transaction, HEAD,
};
use core::ops::Range;
use flate2::{
bufread::{DeflateDecoder, DeflateEncoder},
Compression,
};
use itertools::Itertools;
use sha2::Digest;
use sha2::Sha256;
use std::collections::{HashMap, HashSet};
use std::convert::TryInto;
use std::fmt::Debug;
use std::io::{Read, Write};
use tracing::instrument;
const MAGIC_BYTES: [u8; 4] = [0x85, 0x6f, 0x4a, 0x83];
const PREAMBLE_BYTES: usize = 8;
const HEADER_BYTES: usize = PREAMBLE_BYTES + 1;
const HASH_BYTES: usize = 32;
const BLOCK_TYPE_DOC: u8 = 0;
const BLOCK_TYPE_CHANGE: u8 = 1;
const BLOCK_TYPE_DEFLATE: u8 = 2;
const CHUNK_START: usize = 8;
const HASH_RANGE: Range<usize> = 4..8;
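// Computes the heads of a slice of changes: the hashes that no other change
// in the slice lists among its deps.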
fn get_heads(changes: &[amp::Change]) -> HashSet<amp::ChangeHash> {
changes.iter().fold(HashSet::new(), |mut acc, c| {
if let Some(h) = c.hash {
acc.insert(h);
}
for dep in &c.deps {
acc.remove(dep);
}
acc
})
}
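// Encodes a whole document chunk: magic bytes and a placeholder checksum,
// then the actor table, heads, and the column metadata and data for changes
// and ops; the first four bytes of the chunk's SHA-256 digest are finally
// spliced in as the checksum.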
pub(crate) fn encode_document(
changes: &[amp::Change],
doc_ops: &[Op],
actors_index: &IndexedCache<ActorId>,
props: &[String],
) -> Result<Vec<u8>, AutomergeError> {
let mut bytes: Vec<u8> = Vec::new();
let heads = get_heads(changes);
let actors_map = actors_index.encode_index();
let actors = actors_index.sorted();
/*
// this assumes that all actor_ids referenced are seen in changes.actor_id which is true
// so long as we have a full history
let mut actors: Vec<_> = changes
.iter()
.map(|c| &c.actor)
.unique()
.sorted()
.cloned()
.collect();
*/
let (change_bytes, change_info) = ChangeEncoder::encode_changes(changes, &actors);
//let doc_ops = group_doc_ops(changes, &actors);
let (ops_bytes, ops_info) = DocOpEncoder::encode_doc_ops(doc_ops, &actors_map, props);
bytes.extend(&MAGIC_BYTES);
bytes.extend(vec![0, 0, 0, 0]); // we don't know the hash yet, so fill in a placeholder
bytes.push(BLOCK_TYPE_DOC);
let mut chunk = Vec::new();
actors.len().encode(&mut chunk)?;
for a in actors.into_iter() {
a.to_bytes().encode(&mut chunk)?;
}
heads.len().encode(&mut chunk)?;
for head in heads.iter().sorted() {
chunk.write_all(&head.0).unwrap();
}
chunk.extend(change_info);
chunk.extend(ops_info);
chunk.extend(change_bytes);
chunk.extend(ops_bytes);
leb128::write::unsigned(&mut bytes, chunk.len() as u64).unwrap();
bytes.extend(&chunk);
let hash_result = Sha256::digest(&bytes[CHUNK_START..bytes.len()]);
bytes.splice(HASH_RANGE, hash_result[0..4].iter().copied());
Ok(bytes)
}
impl From<amp::Change> for Change {
fn from(value: amp::Change) -> Self {
encode(&value)
}
}
impl From<&amp::Change> for Change {
fn from(value: &amp::Change) -> Self {
encode(value)
}
}
fn encode(change: &amp::Change) -> Change {
let mut deps = change.deps.clone();
deps.sort_unstable();
let mut chunk = encode_chunk(change, &deps);
let mut bytes = Vec::with_capacity(MAGIC_BYTES.len() + 4 + chunk.bytes.len());
bytes.extend(&MAGIC_BYTES);
bytes.extend(vec![0, 0, 0, 0]); // we don't know the hash yet, so fill in a placeholder
bytes.push(BLOCK_TYPE_CHANGE);
leb128::write::unsigned(&mut bytes, chunk.bytes.len() as u64).unwrap();
let body_start = bytes.len();
increment_range(&mut chunk.body, bytes.len());
increment_range(&mut chunk.message, bytes.len());
increment_range(&mut chunk.extra_bytes, bytes.len());
increment_range_map(&mut chunk.ops, bytes.len());
bytes.extend(&chunk.bytes);
let hash_result = Sha256::digest(&bytes[CHUNK_START..bytes.len()]);
let hash: amp::ChangeHash = hash_result[..].try_into().unwrap();
bytes.splice(HASH_RANGE, hash_result[0..4].iter().copied());
// any time I make changes to the encoder/decoder it's a good idea
// to run it through a round trip to detect errors the tests might not
// catch
// let c0 = Change::from_bytes(bytes.clone()).unwrap();
// std::assert_eq!(c1, c0);
// perhaps we should add something like this to the test suite
let bytes = ChangeBytes::Uncompressed(bytes);
Change {
bytes,
body_start,
hash,
seq: change.seq,
start_op: change.start_op,
time: change.time,
actors: chunk.actors,
message: chunk.message,
deps,
ops: chunk.ops,
extra_bytes: chunk.extra_bytes,
}
}
struct ChunkIntermediate {
bytes: Vec<u8>,
body: Range<usize>,
actors: Vec<ActorId>,
message: Range<usize>,
ops: HashMap<u32, Range<usize>>,
extra_bytes: Range<usize>,
}
fn encode_chunk(change: &amp::Change, deps: &[amp::ChangeHash]) -> ChunkIntermediate {
let mut bytes = Vec::new();
// All these unwraps are okay because we're writing to an in-memory buffer, so I/O errors
// should not happen
// encode deps
deps.len().encode(&mut bytes).unwrap();
for hash in deps.iter() {
bytes.write_all(&hash.0).unwrap();
}
// encode first actor
let mut actors = vec![change.actor_id.clone()];
change.actor_id.to_bytes().encode(&mut bytes).unwrap();
// encode seq, start_op, time, message
change.seq.encode(&mut bytes).unwrap();
change.start_op.encode(&mut bytes).unwrap();
change.time.encode(&mut bytes).unwrap();
let message = bytes.len() + 1;
change.message.encode(&mut bytes).unwrap();
let message = message..bytes.len();
// encode ops into a side buffer - collect all other actors
let (ops_buf, mut ops) = ColumnEncoder::encode_ops(&change.operations, &mut actors);
// encode all other actors
actors[1..].encode(&mut bytes).unwrap();
// now we know how many bytes ops are offset by so we can adjust the ranges
increment_range_map(&mut ops, bytes.len());
// write out the ops
bytes.write_all(&ops_buf).unwrap();
// write out the extra bytes
let extra_bytes = bytes.len()..(bytes.len() + change.extra_bytes.len());
bytes.write_all(&change.extra_bytes).unwrap();
let body = 0..bytes.len();
ChunkIntermediate {
bytes,
body,
actors,
message,
ops,
extra_bytes,
}
}
#[derive(PartialEq, Debug, Clone)]
enum ChangeBytes {
Compressed {
compressed: Vec<u8>,
uncompressed: Vec<u8>,
},
Uncompressed(Vec<u8>),
}
impl ChangeBytes {
fn uncompressed(&self) -> &[u8] {
match self {
ChangeBytes::Compressed { uncompressed, .. } => &uncompressed[..],
ChangeBytes::Uncompressed(b) => &b[..],
}
}
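// Deflates the chunk body once it exceeds DEFLATE_MIN_SIZE, keeping the
// 8-byte preamble, swapping the block type to BLOCK_TYPE_DEFLATE, and
// retaining the uncompressed bytes for later reads.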
fn compress(&mut self, body_start: usize) {
match self {
ChangeBytes::Compressed { .. } => {}
ChangeBytes::Uncompressed(uncompressed) => {
if uncompressed.len() > DEFLATE_MIN_SIZE {
let mut result = Vec::with_capacity(uncompressed.len());
result.extend(&uncompressed[0..8]);
result.push(BLOCK_TYPE_DEFLATE);
let mut deflater =
DeflateEncoder::new(&uncompressed[body_start..], Compression::default());
let mut deflated = Vec::new();
let deflated_len = deflater.read_to_end(&mut deflated).unwrap();
leb128::write::unsigned(&mut result, deflated_len as u64).unwrap();
result.extend(&deflated[..]);
*self = ChangeBytes::Compressed {
compressed: result,
uncompressed: std::mem::take(uncompressed),
}
}
}
}
}
fn raw(&self) -> &[u8] {
match self {
ChangeBytes::Compressed { compressed, .. } => &compressed[..],
ChangeBytes::Uncompressed(b) => &b[..],
}
}
}
#[derive(PartialEq, Debug, Clone)]
pub struct Change {
bytes: ChangeBytes,
body_start: usize,
pub hash: amp::ChangeHash,
pub seq: u64,
pub start_op: u64,
pub time: i64,
message: Range<usize>,
actors: Vec<ActorId>,
pub deps: Vec<amp::ChangeHash>,
ops: HashMap<u32, Range<usize>>,
extra_bytes: Range<usize>,
}
impl Change {
pub fn actor_id(&self) -> &ActorId {
&self.actors[0]
}
#[instrument(level = "debug", skip(bytes))]
pub fn load_document(bytes: &[u8]) -> Result<Vec<Change>, AutomergeError> {
load_blocks(bytes)
}
pub fn from_bytes(bytes: Vec<u8>) -> Result<Change, decoding::Error> {
decode_change(bytes)
}
pub fn is_empty(&self) -> bool {
self.len() == 0
}
pub fn len(&self) -> usize {
// TODO - this could be a lot more efficient
self.iter_ops().count()
}
pub fn max_op(&self) -> u64 {
self.start_op + (self.len() as u64) - 1
}
fn message(&self) -> Option<String> {
let m = &self.bytes.uncompressed()[self.message.clone()];
if m.is_empty() {
None
} else {
std::str::from_utf8(m).map(ToString::to_string).ok()
}
}
pub fn decode(&self) -> amp::Change {
amp::Change {
start_op: self.start_op,
seq: self.seq,
time: self.time,
hash: Some(self.hash),
message: self.message(),
actor_id: self.actors[0].clone(),
deps: self.deps.clone(),
operations: self
.iter_ops()
.map(|op| amp::Op {
action: op.action.clone(),
obj: op.obj.clone(),
key: op.key.clone(),
pred: op.pred.clone(),
insert: op.insert,
})
.collect(),
extra_bytes: self.extra_bytes().into(),
}
}
pub(crate) fn iter_ops(&self) -> OperationIterator {
OperationIterator::new(self.bytes.uncompressed(), self.actors.as_slice(), &self.ops)
}
pub fn extra_bytes(&self) -> &[u8] {
&self.bytes.uncompressed()[self.extra_bytes.clone()]
}
pub fn compress(&mut self) {
self.bytes.compress(self.body_start);
}
pub fn raw_bytes(&self) -> &[u8] {
self.bytes.raw()
}
}
fn read_leb128(bytes: &mut &[u8]) -> Result<(usize, usize), decoding::Error> {
let mut buf = &bytes[..];
let val = leb128::read::unsigned(&mut buf)? as usize;
let leb128_bytes = bytes.len() - buf.len();
Ok((val, leb128_bytes))
}
fn read_slice<T: Decodable + Debug>(
bytes: &[u8],
cursor: &mut Range<usize>,
) -> Result<T, decoding::Error> {
let mut view = &bytes[cursor.clone()];
let init_len = view.len();
let val = T::decode::<&[u8]>(&mut view).ok_or(decoding::Error::NoDecodedValue);
let bytes_read = init_len - view.len();
*cursor = (cursor.start + bytes_read)..cursor.end;
val
}
fn slice_bytes(bytes: &[u8], cursor: &mut Range<usize>) -> Result<Range<usize>, decoding::Error> {
let (val, len) = read_leb128(&mut &bytes[cursor.clone()])?;
let start = cursor.start + len;
let end = start + val;
*cursor = end..cursor.end;
Ok(start..end)
}
fn increment_range(range: &mut Range<usize>, len: usize) {
range.end += len;
range.start += len;
}
fn increment_range_map(ranges: &mut HashMap<u32, Range<usize>>, len: usize) {
for range in ranges.values_mut() {
increment_range(range, len);
}
}
fn export_objid(id: &ObjId, actors: &IndexedCache<ActorId>) -> amp::ObjectId {
if id == &ObjId::root() {
amp::ObjectId::Root
} else {
export_opid(&id.0, actors).into()
}
}
fn export_elemid(id: &ElemId, actors: &IndexedCache<ActorId>) -> amp::ElementId {
if id == &HEAD {
amp::ElementId::Head
} else {
export_opid(&id.0, actors).into()
}
}
fn export_opid(id: &OpId, actors: &IndexedCache<ActorId>) -> amp::OpId {
amp::OpId(id.0, actors.get(id.1).clone())
}
fn export_op(op: &Op, actors: &IndexedCache<ActorId>, props: &IndexedCache<String>) -> amp::Op {
let action = op.action.clone();
let key = match &op.key {
Key::Map(n) => amp::Key::Map(props.get(*n).clone().into()),
Key::Seq(id) => amp::Key::Seq(export_elemid(id, actors)),
};
let obj = export_objid(&op.obj, actors);
let pred = op.pred.iter().map(|id| export_opid(id, actors)).collect();
amp::Op {
action,
obj,
insert: op.insert,
pred,
key,
}
}
pub(crate) fn export_change(
change: &Transaction,
actors: &IndexedCache<ActorId>,
props: &IndexedCache<String>,
) -> Change {
amp::Change {
actor_id: actors.get(change.actor).clone(),
seq: change.seq,
start_op: change.start_op,
time: change.time,
deps: change.deps.clone(),
message: change.message.clone(),
hash: change.hash,
operations: change
.operations
.iter()
.map(|op| export_op(op, actors, props))
.collect(),
extra_bytes: change.extra_bytes.clone(),
}
.into()
}
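// Decodes a single change chunk: inflates it if the block type is
// BLOCK_TYPE_DEFLATE, verifies the header checksum, then reads the deps,
// actor, seq, start_op, time, message, remaining actors, and op columns.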
pub fn decode_change(bytes: Vec<u8>) -> Result<Change, decoding::Error> {
let (chunktype, body) = decode_header_without_hash(&bytes)?;
let bytes = if chunktype == BLOCK_TYPE_DEFLATE {
decompress_chunk(0..PREAMBLE_BYTES, body, bytes)?
} else {
ChangeBytes::Uncompressed(bytes)
};
let (chunktype, hash, body) = decode_header(bytes.uncompressed())?;
if chunktype != BLOCK_TYPE_CHANGE {
return Err(decoding::Error::WrongType {
expected_one_of: vec![BLOCK_TYPE_CHANGE],
found: chunktype,
});
}
let body_start = body.start;
let mut cursor = body;
let deps = decode_hashes(bytes.uncompressed(), &mut cursor)?;
let actor =
ActorId::from(&bytes.uncompressed()[slice_bytes(bytes.uncompressed(), &mut cursor)?]);
let seq = read_slice(bytes.uncompressed(), &mut cursor)?;
let start_op = read_slice(bytes.uncompressed(), &mut cursor)?;
let time = read_slice(bytes.uncompressed(), &mut cursor)?;
let message = slice_bytes(bytes.uncompressed(), &mut cursor)?;
let actors = decode_actors(bytes.uncompressed(), &mut cursor, Some(actor))?;
let ops_info = decode_column_info(bytes.uncompressed(), &mut cursor, false)?;
let ops = decode_columns(&mut cursor, &ops_info);
Ok(Change {
bytes,
body_start,
hash,
seq,
start_op,
time,
actors,
message,
deps,
ops,
extra_bytes: cursor,
})
}
fn decompress_chunk(
preamble: Range<usize>,
body: Range<usize>,
compressed: Vec<u8>,
) -> Result<ChangeBytes, decoding::Error> {
let mut decoder = DeflateDecoder::new(&compressed[body]);
let mut decompressed = Vec::new();
decoder.read_to_end(&mut decompressed)?;
let mut result = Vec::with_capacity(decompressed.len() + preamble.len());
result.extend(&compressed[preamble]);
result.push(BLOCK_TYPE_CHANGE);
leb128::write::unsigned::<Vec<u8>>(&mut result, decompressed.len() as u64).unwrap();
result.extend(decompressed);
Ok(ChangeBytes::Compressed {
uncompressed: result,
compressed,
})
}
fn decode_hashes(
bytes: &[u8],
cursor: &mut Range<usize>,
) -> Result<Vec<amp::ChangeHash>, decoding::Error> {
let num_hashes = read_slice(bytes, cursor)?;
let mut hashes = Vec::with_capacity(num_hashes);
for _ in 0..num_hashes {
let hash = cursor.start..(cursor.start + HASH_BYTES);
*cursor = hash.end..cursor.end;
hashes.push(
bytes
.get(hash)
.ok_or(decoding::Error::NotEnoughBytes)?
.try_into()
.map_err(InvalidChangeError::from)?,
);
}
Ok(hashes)
}
fn decode_actors(
bytes: &[u8],
cursor: &mut Range<usize>,
first: Option<ActorId>,
) -> Result<Vec<ActorId>, decoding::Error> {
let num_actors: usize = read_slice(bytes, cursor)?;
let mut actors = Vec::with_capacity(num_actors + 1);
if let Some(actor) = first {
actors.push(actor);
}
for _ in 0..num_actors {
actors.push(ActorId::from(
bytes
.get(slice_bytes(bytes, cursor)?)
.ok_or(decoding::Error::NotEnoughBytes)?,
));
}
Ok(actors)
}
fn decode_column_info(
bytes: &[u8],
cursor: &mut Range<usize>,
allow_compressed_column: bool,
) -> Result<Vec<(u32, usize)>, decoding::Error> {
let num_columns = read_slice(bytes, cursor)?;
let mut columns = Vec::with_capacity(num_columns);
let mut last_id = 0;
for _ in 0..num_columns {
let id: u32 = read_slice(bytes, cursor)?;
if (id & !COLUMN_TYPE_DEFLATE) <= (last_id & !COLUMN_TYPE_DEFLATE) {
return Err(decoding::Error::ColumnsNotInAscendingOrder {
last: last_id,
found: id,
});
}
if id & COLUMN_TYPE_DEFLATE != 0 && !allow_compressed_column {
return Err(decoding::Error::ChangeContainedCompressedColumns);
}
last_id = id;
let length = read_slice(bytes, cursor)?;
columns.push((id, length));
}
Ok(columns)
}
fn decode_columns(
cursor: &mut Range<usize>,
columns: &[(u32, usize)],
) -> HashMap<u32, Range<usize>> {
let mut ops = HashMap::new();
for (id, length) in columns {
let start = cursor.start;
let end = start + length;
*cursor = end..cursor.end;
ops.insert(*id, start..end);
}
ops
}
fn decode_header(bytes: &[u8]) -> Result<(u8, amp::ChangeHash, Range<usize>), decoding::Error> {
let (chunktype, body) = decode_header_without_hash(bytes)?;
let calculated_hash = Sha256::digest(&bytes[PREAMBLE_BYTES..]);
let checksum = &bytes[4..8];
if checksum != &calculated_hash[0..4] {
return Err(decoding::Error::InvalidChecksum {
found: checksum.try_into().unwrap(),
calculated: calculated_hash[0..4].try_into().unwrap(),
});
}
let hash = calculated_hash[..]
.try_into()
.map_err(InvalidChangeError::from)?;
Ok((chunktype, hash, body))
}
fn decode_header_without_hash(bytes: &[u8]) -> Result<(u8, Range<usize>), decoding::Error> {
if bytes.len() <= HEADER_BYTES {
return Err(decoding::Error::NotEnoughBytes);
}
if bytes[0..4] != MAGIC_BYTES {
return Err(decoding::Error::WrongMagicBytes);
}
let (val, len) = read_leb128(&mut &bytes[HEADER_BYTES..])?;
let body = (HEADER_BYTES + len)..(HEADER_BYTES + len + val);
if bytes.len() != body.end {
return Err(decoding::Error::WrongByteLength {
expected: body.end,
found: bytes.len(),
});
}
let chunktype = bytes[PREAMBLE_BYTES];
Ok((chunktype, body))
}
fn load_blocks(bytes: &[u8]) -> Result<Vec<Change>, AutomergeError> {
let mut changes = Vec::new();
for slice in split_blocks(bytes)? {
decode_block(slice, &mut changes)?;
}
Ok(changes)
}
fn split_blocks(bytes: &[u8]) -> Result<Vec<&[u8]>, decoding::Error> {
// split off all valid blocks - ignore the rest if it's corrupted or truncated
let mut blocks = Vec::new();
let mut cursor = bytes;
while let Some(block) = pop_block(cursor)? {
blocks.push(&cursor[block.clone()]);
if cursor.len() <= block.end {
break;
}
cursor = &cursor[block.end..];
}
Ok(blocks)
}
fn pop_block(bytes: &[u8]) -> Result<Option<Range<usize>>, decoding::Error> {
if bytes.len() < 4 || bytes[0..4] != MAGIC_BYTES {
// not reporting error here - file got corrupted?
return Ok(None);
}
let (val, len) = read_leb128(
&mut bytes
.get(HEADER_BYTES..)
.ok_or(decoding::Error::NotEnoughBytes)?,
)?;
// val is arbitrary so it could overflow
let end = (HEADER_BYTES + len)
.checked_add(val)
.ok_or(decoding::Error::Overflow)?;
if end > bytes.len() {
// not reporting error here - file got truncated?
return Ok(None);
}
Ok(Some(0..end))
}
fn decode_block(bytes: &[u8], changes: &mut Vec<Change>) -> Result<(), decoding::Error> {
match bytes[PREAMBLE_BYTES] {
BLOCK_TYPE_DOC => {
changes.extend(decode_document(bytes)?);
Ok(())
}
BLOCK_TYPE_CHANGE | BLOCK_TYPE_DEFLATE => {
changes.push(decode_change(bytes.to_vec())?);
Ok(())
}
found => Err(decoding::Error::WrongType {
expected_one_of: vec![BLOCK_TYPE_DOC, BLOCK_TYPE_CHANGE, BLOCK_TYPE_DEFLATE],
found,
}),
}
}
fn decode_document(bytes: &[u8]) -> Result<Vec<Change>, decoding::Error> {
let (chunktype, _hash, mut cursor) = decode_header(bytes)?;
// chunktype == 0 is a document, chunktype == 1 is a change
if chunktype > 0 {
return Err(decoding::Error::WrongType {
expected_one_of: vec![0],
found: chunktype,
});
}
let actors = decode_actors(bytes, &mut cursor, None)?;
let heads = decode_hashes(bytes, &mut cursor)?;
let changes_info = decode_column_info(bytes, &mut cursor, true)?;
let ops_info = decode_column_info(bytes, &mut cursor, true)?;
let changes_data = decode_columns(&mut cursor, &changes_info);
let mut doc_changes = ChangeIterator::new(bytes, &changes_data).collect::<Vec<_>>();
let doc_changes_deps = DepsIterator::new(bytes, &changes_data);
let doc_changes_len = doc_changes.len();
let ops_data = decode_columns(&mut cursor, &ops_info);
let doc_ops: Vec<_> = DocOpIterator::new(bytes, &actors, &ops_data).collect();
group_doc_change_and_doc_ops(&mut doc_changes, doc_ops, &actors)?;
let uncompressed_changes =
doc_changes_to_uncompressed_changes(doc_changes.into_iter(), &actors);
let changes = compress_doc_changes(uncompressed_changes, doc_changes_deps, doc_changes_len)
.ok_or(decoding::Error::NoDocChanges)?;
let mut calculated_heads = HashSet::new();
for change in &changes {
for dep in &change.deps {
calculated_heads.remove(dep);
}
calculated_heads.insert(change.hash);
}
if calculated_heads != heads.into_iter().collect::<HashSet<_>>() {
return Err(decoding::Error::MismatchedHeads);
}
Ok(changes)
}
fn compress_doc_changes(
uncompressed_changes: impl Iterator<Item = amp::Change>,
doc_changes_deps: impl Iterator<Item = Vec<usize>>,
num_changes: usize,
) -> Option<Vec<Change>> {
let mut changes: Vec<Change> = Vec::with_capacity(num_changes);
// fill out the hashes as we go
for (deps, mut uncompressed_change) in doc_changes_deps.zip_eq(uncompressed_changes) {
for idx in deps {
uncompressed_change.deps.push(changes.get(idx)?.hash);
}
changes.push(uncompressed_change.into());
}
Some(changes)
}
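// Reunites document-format ops with the changes that produced them: rebuilds
// pred links from succ, synthesizes the delete ops implied by successors that
// aren't present as ops themselves, then binary-searches each op into the
// change whose max_op range covers it.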
fn group_doc_change_and_doc_ops(
changes: &mut [DocChange],
mut ops: Vec<DocOp>,
actors: &[ActorId],
) -> Result<(), decoding::Error> {
let mut changes_by_actor: HashMap<usize, Vec<usize>> = HashMap::new();
for (i, change) in changes.iter().enumerate() {
let actor_change_index = changes_by_actor.entry(change.actor).or_default();
if change.seq != (actor_change_index.len() + 1) as u64 {
return Err(decoding::Error::ChangeDecompressFailed(
"Doc Seq Invalid".into(),
));
}
if change.actor >= actors.len() {
return Err(decoding::Error::ChangeDecompressFailed(
"Doc Actor Invalid".into(),
));
}
actor_change_index.push(i);
}
let mut op_by_id = HashMap::new();
ops.iter().enumerate().for_each(|(i, op)| {
op_by_id.insert((op.ctr, op.actor), i);
});
for i in 0..ops.len() {
let op = ops[i].clone(); // clone to sidestep borrow checker issues while we mutate ops below
//let id = (op.ctr, op.actor);
//op_by_id.insert(id, i);
for succ in &op.succ {
if let Some(index) = op_by_id.get(succ) {
ops[*index].pred.push((op.ctr, op.actor));
} else {
let key = if op.insert {
amp::OpId(op.ctr, actors[op.actor].clone()).into()
} else {
op.key.clone()
};
let del = DocOp {
actor: succ.1,
ctr: succ.0,
action: OpType::Del,
obj: op.obj.clone(),
key,
succ: Vec::new(),
pred: vec![(op.ctr, op.actor)],
insert: false,
};
op_by_id.insert(*succ, ops.len());
ops.push(del);
}
}
}
for op in ops {
// binary search for our change
let actor_change_index = changes_by_actor.entry(op.actor).or_default();
let mut left = 0;
let mut right = actor_change_index.len();
while left < right {
let seq = (left + right) / 2;
if changes[actor_change_index[seq]].max_op < op.ctr {
left = seq + 1;
} else {
right = seq;
}
}
if left >= actor_change_index.len() {
return Err(decoding::Error::ChangeDecompressFailed(
"Doc MaxOp Invalid".into(),
));
}
changes[actor_change_index[left]].ops.push(op);
}
changes
.iter_mut()
.for_each(|change| change.ops.sort_unstable());
Ok(())
}
fn doc_changes_to_uncompressed_changes<'a>(
changes: impl Iterator<Item = DocChange> + 'a,
actors: &'a [ActorId],
) -> impl Iterator<Item = amp::Change> + 'a {
changes.map(move |change| amp::Change {
// we've already confirmed that all change.actor's are valid
actor_id: actors[change.actor].clone(),
seq: change.seq,
time: change.time,
start_op: change.max_op - change.ops.len() as u64 + 1,
hash: None,
message: change.message,
operations: change
.ops
.into_iter()
.map(|op| amp::Op {
action: op.action.clone(),
insert: op.insert,
key: op.key,
obj: op.obj,
// we've already confirmed that all op.actor's are valid
pred: pred_into(op.pred.into_iter(), actors),
})
.collect(),
deps: Vec::new(),
extra_bytes: change.extra_bytes,
})
}
fn pred_into(
pred: impl Iterator<Item = (u64, usize)>,
actors: &[ActorId],
) -> amp::SortedVec<amp::OpId> {
pred.map(|(ctr, actor)| amp::OpId(ctr, actors[actor].clone()))
.collect()
}

@@ -1,52 +0,0 @@
use crate::OpId;
use fxhash::FxBuildHasher;
use std::cmp;
use std::collections::HashMap;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Clock(HashMap<usize, u64, FxBuildHasher>);
impl Clock {
pub fn new() -> Self {
Clock(Default::default())
}
pub fn include(&mut self, key: usize, n: u64) {
self.0
.entry(key)
.and_modify(|m| *m = cmp::max(n, *m))
.or_insert(n);
}
pub fn covers(&self, id: &OpId) -> bool {
if let Some(val) = self.0.get(&id.1) {
val >= &id.0
} else {
false
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn covers() {
let mut clock = Clock::new();
clock.include(1, 20);
clock.include(2, 10);
assert!(clock.covers(&OpId(10, 1)));
assert!(clock.covers(&OpId(20, 1)));
assert!(!clock.covers(&OpId(30, 1)));
assert!(clock.covers(&OpId(5, 2)));
assert!(clock.covers(&OpId(10, 2)));
assert!(!clock.covers(&OpId(15, 2)));
assert!(!clock.covers(&OpId(1, 3)));
assert!(!clock.covers(&OpId(100, 3)));
}
}

File diff suppressed because it is too large

@@ -1,376 +0,0 @@
use core::fmt::Debug;
use std::{
io,
io::{Read, Write},
mem,
};
use flate2::{bufread::DeflateEncoder, Compression};
use smol_str::SmolStr;
use crate::columnar::COLUMN_TYPE_DEFLATE;
use crate::ActorId;
pub(crate) const DEFLATE_MIN_SIZE: usize = 256;
/// The error type for encoding operations.
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error(transparent)]
Io(#[from] io::Error),
}
/// Encodes booleans by storing the count of the same value.
///
/// The sequence of numbers describes the count of false values on even indices
/// and the count of true values on odd indices (0-indexed).
///
/// Counts are encoded as usize.
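///
/// For example (illustrative): `[false, false, true, true, true]` is stored
/// as the counts `[2, 3]`, and a sequence that starts with `true` begins with
/// a zero count of falses, e.g. `[true, true]` -> `[0, 2]`.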
pub(crate) struct BooleanEncoder {
buf: Vec<u8>,
last: bool,
count: usize,
}
impl BooleanEncoder {
pub fn new() -> BooleanEncoder {
BooleanEncoder {
buf: Vec::new(),
last: false,
count: 0,
}
}
pub fn append(&mut self, value: bool) {
if value == self.last {
self.count += 1;
} else {
self.count.encode(&mut self.buf).ok();
self.last = value;
self.count = 1;
}
}
pub fn finish(mut self, col: u32) -> ColData {
if self.count > 0 {
self.count.encode(&mut self.buf).ok();
}
ColData::new(col, self.buf)
}
}
/// Encodes integers as the change since the previous value.
///
/// The initial value is 0 encoded as u64. Deltas are encoded as i64.
///
/// Run length encoding is then applied to the resulting sequence.
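///
/// For example (illustrative): the values `[10, 11, 11, 14]` become the
/// deltas `[10, 1, 0, 3]`, which are then run-length encoded.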
pub(crate) struct DeltaEncoder {
rle: RleEncoder<i64>,
absolute_value: u64,
}
impl DeltaEncoder {
pub fn new() -> DeltaEncoder {
DeltaEncoder {
rle: RleEncoder::new(),
absolute_value: 0,
}
}
pub fn append_value(&mut self, value: u64) {
self.rle
.append_value(value as i64 - self.absolute_value as i64);
self.absolute_value = value;
}
pub fn append_null(&mut self) {
self.rle.append_null();
}
pub fn finish(self, col: u32) -> ColData {
self.rle.finish(col)
}
}
enum RleState<T> {
Empty,
NullRun(usize),
LiteralRun(T, Vec<T>),
LoneVal(T),
Run(T, usize),
}
/// Encodes data in run length encoding format. This is very efficient for long runs of repeated data.
///
/// There are 3 types of 'run' in this encoder:
/// - a normal run (compresses repeated values)
/// - a null run (compresses repeated nulls)
/// - a literal run (no compression)
///
/// A normal run consists of the length of the run (encoded as an i64) followed by the encoded value that this run contains.
///
/// A null run consists of a zero value (encoded as an i64) followed by the length of the null run (encoded as a usize).
///
/// A literal run consists of the **negative** length of the run (encoded as an i64) followed by the values in the run.
///
/// Therefore all the types start with an encoded i64, the value of which determines the type of the following data.
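///
/// For example (illustrative): the run `[7, 7, 7]` is stored as `(3, 7)`,
/// two nulls as `(0, 2)`, and the literal run `[8, 9]` as `(-2, 8, 9)`.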
pub(crate) struct RleEncoder<T>
where
T: Encodable + PartialEq + Clone,
{
buf: Vec<u8>,
state: RleState<T>,
}
impl<T> RleEncoder<T>
where
T: Encodable + PartialEq + Clone,
{
pub fn new() -> RleEncoder<T> {
RleEncoder {
buf: Vec::new(),
state: RleState::Empty,
}
}
pub fn finish(mut self, col: u32) -> ColData {
match self.take_state() {
// this covers `only_nulls`
RleState::NullRun(size) => {
if !self.buf.is_empty() {
self.flush_null_run(size);
}
}
RleState::LoneVal(value) => self.flush_lit_run(vec![value]),
RleState::Run(value, len) => self.flush_run(&value, len),
RleState::LiteralRun(last, mut run) => {
run.push(last);
self.flush_lit_run(run);
}
RleState::Empty => {}
}
ColData::new(col, self.buf)
}
fn flush_run(&mut self, val: &T, len: usize) {
self.encode(&(len as i64));
self.encode(val);
}
fn flush_null_run(&mut self, len: usize) {
self.encode::<i64>(&0);
self.encode(&len);
}
fn flush_lit_run(&mut self, run: Vec<T>) {
self.encode(&-(run.len() as i64));
for val in run {
self.encode(&val);
}
}
fn take_state(&mut self) -> RleState<T> {
let mut state = RleState::Empty;
mem::swap(&mut self.state, &mut state);
state
}
pub fn append_null(&mut self) {
self.state = match self.take_state() {
RleState::Empty => RleState::NullRun(1),
RleState::NullRun(size) => RleState::NullRun(size + 1),
RleState::LoneVal(other) => {
self.flush_lit_run(vec![other]);
RleState::NullRun(1)
}
RleState::Run(other, len) => {
self.flush_run(&other, len);
RleState::NullRun(1)
}
RleState::LiteralRun(last, mut run) => {
run.push(last);
self.flush_lit_run(run);
RleState::NullRun(1)
}
}
}
pub fn append_value(&mut self, value: T) {
self.state = match self.take_state() {
RleState::Empty => RleState::LoneVal(value),
RleState::LoneVal(other) => {
if other == value {
RleState::Run(value, 2)
} else {
let mut v = Vec::with_capacity(2);
v.push(other);
RleState::LiteralRun(value, v)
}
}
RleState::Run(other, len) => {
if other == value {
RleState::Run(other, len + 1)
} else {
self.flush_run(&other, len);
RleState::LoneVal(value)
}
}
RleState::LiteralRun(last, mut run) => {
if last == value {
self.flush_lit_run(run);
RleState::Run(value, 2)
} else {
run.push(last);
RleState::LiteralRun(value, run)
}
}
RleState::NullRun(size) => {
self.flush_null_run(size);
RleState::LoneVal(value)
}
}
}
fn encode<V>(&mut self, val: &V)
where
V: Encodable,
{
val.encode(&mut self.buf).ok();
}
}
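/// Something that can be encoded into a byte buffer, returning the number of
/// bytes written. The implementations below use LEB128 for integers,
/// little-endian bytes for floats, and length-prefixed bytes for strings.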
pub(crate) trait Encodable {
fn encode_with_actors_to_vec(&self, actors: &mut Vec<ActorId>) -> io::Result<Vec<u8>> {
let mut buf = Vec::new();
self.encode_with_actors(&mut buf, actors)?;
Ok(buf)
}
fn encode_with_actors<R: Write>(
&self,
buf: &mut R,
_actors: &mut Vec<ActorId>,
) -> io::Result<usize> {
self.encode(buf)
}
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize>;
}
impl Encodable for SmolStr {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.as_bytes();
let head = bytes.len().encode(buf)?;
buf.write_all(bytes)?;
Ok(head + bytes.len())
}
}
impl Encodable for String {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.as_bytes();
let head = bytes.len().encode(buf)?;
buf.write_all(bytes)?;
Ok(head + bytes.len())
}
}
impl Encodable for Option<String> {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
if let Some(s) = self {
s.encode(buf)
} else {
0.encode(buf)
}
}
}
impl Encodable for u64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
leb128::write::unsigned(buf, *self)
}
}
impl Encodable for f64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.to_le_bytes();
buf.write_all(&bytes)?;
Ok(bytes.len())
}
}
impl Encodable for f32 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let bytes = self.to_le_bytes();
buf.write_all(&bytes)?;
Ok(bytes.len())
}
}
impl Encodable for i64 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
leb128::write::signed(buf, *self)
}
}
impl Encodable for usize {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
(*self as u64).encode(buf)
}
}
impl Encodable for u32 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
u64::from(*self).encode(buf)
}
}
impl Encodable for i32 {
fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
i64::from(*self).encode(buf)
}
}
#[derive(Debug)]
pub(crate) struct ColData {
pub col: u32,
pub data: Vec<u8>,
#[cfg(debug_assertions)]
has_been_deflated: bool,
}
impl ColData {
pub fn new(col_id: u32, data: Vec<u8>) -> ColData {
ColData {
col: col_id,
data,
#[cfg(debug_assertions)]
has_been_deflated: false,
}
}
pub fn encode_col_len<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
let mut len = 0;
if !self.data.is_empty() {
len += self.col.encode(buf)?;
len += self.data.len().encode(buf)?;
}
Ok(len)
}
pub fn deflate(&mut self) {
#[cfg(debug_assertions)]
{
debug_assert!(!self.has_been_deflated);
self.has_been_deflated = true;
}
if self.data.len() > DEFLATE_MIN_SIZE {
let mut deflated = Vec::new();
let mut deflater = DeflateEncoder::new(&self.data[..], Compression::default());
// This unwrap should be okay as we're reading and writing to in-memory buffers
deflater.read_to_end(&mut deflated).unwrap();
self.col |= COLUMN_TYPE_DEFLATE;
self.data = deflated;
}
}
}

@@ -1,63 +0,0 @@
use crate::decoding;
use crate::value::DataType;
use crate::ScalarValue;
use thiserror::Error;
#[derive(Error, Debug)]
pub enum AutomergeError {
#[error("invalid opid format `{0}`")]
InvalidOpId(String),
#[error("there was an ecoding problem")]
Encoding,
#[error("there was a decoding problem")]
Decoding,
#[error("key must not be an empty string")]
EmptyStringKey,
#[error("invalid seq {0}")]
InvalidSeq(u64),
#[error("index {0} is out of bounds")]
InvalidIndex(usize),
#[error("generic automerge error")]
Fail,
}
impl From<std::io::Error> for AutomergeError {
fn from(_: std::io::Error) -> Self {
AutomergeError::Encoding
}
}
impl From<decoding::Error> for AutomergeError {
fn from(_: decoding::Error) -> Self {
AutomergeError::Decoding
}
}
#[derive(Error, Debug)]
#[error("Invalid actor ID: {0}")]
pub struct InvalidActorId(pub String);
#[derive(Error, Debug, PartialEq)]
#[error("Invalid scalar value, expected {expected} but received {unexpected}")]
pub(crate) struct InvalidScalarValue {
pub raw_value: ScalarValue,
pub datatype: DataType,
pub unexpected: String,
pub expected: String,
}
#[derive(Error, Debug, PartialEq)]
#[error("Invalid change hash slice: {0:?}")]
pub struct InvalidChangeHashSlice(pub Vec<u8>);
#[derive(Error, Debug, PartialEq)]
#[error("Invalid object ID: {0}")]
pub struct InvalidObjectId(pub String);
#[derive(Error, Debug)]
#[error("Invalid element ID: {0}")]
pub struct InvalidElementId(pub String);
#[derive(Error, Debug)]
#[error("Invalid OpID: {0}")]
pub struct InvalidOpId(pub String);

@@ -1,80 +0,0 @@
use itertools::Itertools;
use std::collections::HashMap;
use std::hash::Hash;
use std::ops::Index;
use std::rc::Rc;
#[derive(Debug, Clone)]
pub(crate) struct IndexedCache<T> {
pub cache: Vec<Rc<T>>,
lookup: HashMap<T, usize>,
}
impl<T> IndexedCache<T>
where
T: Clone + Eq + Hash + Ord,
{
pub fn new() -> Self {
IndexedCache {
cache: Default::default(),
lookup: Default::default(),
}
}
pub fn cache(&mut self, item: T) -> usize {
if let Some(n) = self.lookup.get(&item) {
*n
} else {
let n = self.cache.len();
self.cache.push(Rc::new(item.clone()));
self.lookup.insert(item, n);
n
}
}
pub fn lookup(&self, item: &T) -> Option<usize> {
self.lookup.get(item).cloned()
}
pub fn len(&self) -> usize {
self.cache.len()
}
pub fn get(&self, index: usize) -> &T {
&self.cache[index]
}
pub fn sorted(&self) -> IndexedCache<T> {
let mut sorted = Self::new();
self.cache.iter().sorted().cloned().for_each(|item| {
let n = sorted.cache.len();
sorted.cache.push(item.clone());
sorted.lookup.insert(item.as_ref().clone(), n);
});
sorted
}
pub fn encode_index(&self) -> Vec<usize> {
let sorted: Vec<_> = self.cache.iter().sorted().cloned().collect();
self.cache
.iter()
.map(|a| sorted.iter().position(|r| r == a).unwrap())
.collect()
}
}
impl<T> IntoIterator for IndexedCache<T> {
type Item = Rc<T>;
type IntoIter = std::vec::IntoIter<Self::Item>;
fn into_iter(self) -> Self::IntoIter {
self.cache.into_iter()
}
}
impl<T> Index<usize> for IndexedCache<T> {
type Output = T;
fn index(&self, i: usize) -> &T {
&self.cache[i]
}
}

@@ -1,57 +0,0 @@
use std::fmt;
use smol_str::SmolStr;
use crate::legacy::ScalarValue;
impl From<&str> for ScalarValue {
fn from(s: &str) -> Self {
ScalarValue::Str(s.into())
}
}
impl From<i64> for ScalarValue {
fn from(n: i64) -> Self {
ScalarValue::Int(n)
}
}
impl From<u64> for ScalarValue {
fn from(n: u64) -> Self {
ScalarValue::Uint(n)
}
}
impl From<i32> for ScalarValue {
fn from(n: i32) -> Self {
ScalarValue::Int(n as i64)
}
}
impl From<bool> for ScalarValue {
fn from(b: bool) -> Self {
ScalarValue::Boolean(b)
}
}
impl From<char> for ScalarValue {
fn from(c: char) -> Self {
ScalarValue::Str(SmolStr::new(c.to_string()))
}
}
impl fmt::Display for ScalarValue {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ScalarValue::Bytes(b) => write!(f, "\"{:?}\"", b),
ScalarValue::Str(s) => write!(f, "\"{}\"", s),
ScalarValue::Int(i) => write!(f, "{}", i),
ScalarValue::Uint(i) => write!(f, "{}", i),
ScalarValue::F64(n) => write!(f, "{:.324}", n),
ScalarValue::Counter(c) => write!(f, "Counter: {}", c),
ScalarValue::Timestamp(i) => write!(f, "Timestamp: {}", i),
ScalarValue::Boolean(b) => write!(f, "{}", b),
ScalarValue::Null => write!(f, "null"),
}
}
}

File diff suppressed because it is too large

@@ -1,175 +0,0 @@
use crate::op_tree::OpTreeInternal;
use crate::query::TreeQuery;
use crate::{ActorId, IndexedCache, Key, ObjId, Op, OpId};
use fxhash::FxBuildHasher;
use std::cmp::Ordering;
use std::collections::HashMap;
pub(crate) type OpSet = OpSetInternal<16>;
#[derive(Debug, Clone)]
pub(crate) struct OpSetInternal<const B: usize> {
trees: HashMap<ObjId, OpTreeInternal<B>, FxBuildHasher>,
objs: Vec<ObjId>,
length: usize,
pub m: OpSetMetadata,
}
impl<const B: usize> OpSetInternal<B> {
pub fn new() -> Self {
OpSetInternal {
trees: Default::default(),
objs: Default::default(),
length: 0,
m: OpSetMetadata {
actors: IndexedCache::new(),
props: IndexedCache::new(),
},
}
}
pub fn iter(&self) -> Iter<'_, B> {
Iter {
inner: self,
index: 0,
sub_index: 0,
}
}
pub fn search<Q>(&self, obj: ObjId, query: Q) -> Q
where
Q: TreeQuery<B>,
{
if let Some(tree) = self.trees.get(&obj) {
tree.search(query, &self.m)
} else {
query
}
}
pub fn replace<F>(&mut self, obj: ObjId, index: usize, f: F) -> Option<Op>
where
F: FnMut(&mut Op),
{
if let Some(tree) = self.trees.get_mut(&obj) {
tree.replace(index, f)
} else {
None
}
}
pub fn remove(&mut self, obj: ObjId, index: usize) -> Op {
let tree = self.trees.get_mut(&obj).unwrap();
self.length -= 1;
let op = tree.remove(index);
if tree.is_empty() {
self.trees.remove(&obj);
}
op
}
pub fn len(&self) -> usize {
self.length
}
pub fn insert(&mut self, index: usize, element: Op) {
let Self {
ref mut trees,
ref mut objs,
ref mut m,
..
} = self;
trees
.entry(element.obj)
.or_insert_with(|| {
let pos = objs
.binary_search_by(|probe| m.lamport_cmp(probe.0, element.obj.0))
.unwrap_err();
objs.insert(pos, element.obj);
Default::default()
})
.insert(index, element);
self.length += 1;
}
#[cfg(feature = "optree-visualisation")]
pub fn visualise(&self) -> String {
let mut out = Vec::new();
let graph = super::visualisation::GraphVisualisation::construct(&self.trees, &self.m);
dot::render(&graph, &mut out).unwrap();
String::from_utf8_lossy(&out[..]).to_string()
}
}
impl<const B: usize> Default for OpSetInternal<B> {
fn default() -> Self {
Self::new()
}
}
impl<'a, const B: usize> IntoIterator for &'a OpSetInternal<B> {
type Item = &'a Op;
type IntoIter = Iter<'a, B>;
fn into_iter(self) -> Self::IntoIter {
Iter {
inner: self,
index: 0,
sub_index: 0,
}
}
}
pub(crate) struct Iter<'a, const B: usize> {
inner: &'a OpSetInternal<B>,
index: usize,
sub_index: usize,
}
impl<'a, const B: usize> Iterator for Iter<'a, B> {
type Item = &'a Op;
fn next(&mut self) -> Option<Self::Item> {
let obj = self.inner.objs.get(self.index)?;
let tree = self.inner.trees.get(obj)?;
self.sub_index += 1;
if let Some(op) = tree.get(self.sub_index - 1) {
Some(op)
} else {
self.index += 1;
self.sub_index = 1;
// FIXME is it possible that a rolled back transaction could break the iterator by
// having an empty tree?
let obj = self.inner.objs.get(self.index)?;
let tree = self.inner.trees.get(obj)?;
tree.get(self.sub_index - 1)
}
}
}
#[derive(Clone, Debug)]
pub(crate) struct OpSetMetadata {
pub actors: IndexedCache<ActorId>,
pub props: IndexedCache<String>,
}
impl OpSetMetadata {
pub fn key_cmp(&self, left: &Key, right: &Key) -> Ordering {
match (left, right) {
(Key::Map(a), Key::Map(b)) => self.props[*a].cmp(&self.props[*b]),
_ => panic!("can only compare map keys"),
}
}
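/// Compare two op ids in Lamport order (inferred from the code below): the
/// root id `OpId(0, _)` sorts before everything else; otherwise counters are
/// compared and ties are broken on the actor ids.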
pub fn lamport_cmp(&self, left: OpId, right: OpId) -> Ordering {
match (left, right) {
(OpId(0, _), OpId(0, _)) => Ordering::Equal,
(OpId(0, _), OpId(_, _)) => Ordering::Less,
(OpId(_, _), OpId(0, _)) => Ordering::Greater,
// FIXME - this one seems backwards to me; why is values() returning in the wrong order?
(OpId(a, x), OpId(b, y)) if a == b => self.actors[y].cmp(&self.actors[x]),
(OpId(a, _), OpId(b, _)) => a.cmp(&b),
}
}
}

@@ -1,361 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::{Clock, ElemId, Op, OpId, OpType, ScalarValue};
use fxhash::FxBuildHasher;
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};
use std::fmt::Debug;
mod insert;
mod keys;
mod keys_at;
mod len;
mod len_at;
mod list_vals;
mod list_vals_at;
mod nth;
mod nth_at;
mod prop;
mod prop_at;
mod seek_op;
pub(crate) use insert::InsertNth;
pub(crate) use keys::Keys;
pub(crate) use keys_at::KeysAt;
pub(crate) use len::Len;
pub(crate) use len_at::LenAt;
pub(crate) use list_vals::ListVals;
pub(crate) use list_vals_at::ListValsAt;
pub(crate) use nth::Nth;
pub(crate) use nth_at::NthAt;
pub(crate) use prop::Prop;
pub(crate) use prop_at::PropAt;
pub(crate) use seek_op::SeekOp;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct CounterData {
pos: usize,
val: i64,
succ: HashSet<OpId>,
op: Op,
}
pub(crate) trait TreeQuery<const B: usize> {
#[inline(always)]
fn query_node_with_metadata(
&mut self,
child: &OpTreeNode<B>,
_m: &OpSetMetadata,
) -> QueryResult {
self.query_node(child)
}
fn query_node(&mut self, _child: &OpTreeNode<B>) -> QueryResult {
QueryResult::Decend
}
#[inline(always)]
fn query_element_with_metadata(&mut self, element: &Op, _m: &OpSetMetadata) -> QueryResult {
self.query_element(element)
}
fn query_element(&mut self, _element: &Op) -> QueryResult {
panic!("invalid element query")
}
}
#[derive(Debug, Clone, PartialEq)]
pub(crate) enum QueryResult {
Next,
Decend,
Finish,
}
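/// Per-subtree bookkeeping for the op tree (inferred from the code below):
/// `len` counts visible elements, `visible` maps each element to the number
/// of ops that currently make it visible, and `ops` holds every op id in the
/// subtree.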
#[derive(Clone, Debug, PartialEq)]
pub(crate) struct Index {
pub len: usize,
pub visible: HashMap<ElemId, usize, FxBuildHasher>,
pub ops: HashSet<OpId, FxBuildHasher>,
}
impl Index {
pub fn new() -> Self {
Index {
len: 0,
visible: Default::default(),
ops: Default::default(),
}
}
pub fn has(&self, e: &Option<ElemId>) -> bool {
if let Some(seen) = e {
self.visible.contains_key(seen)
} else {
false
}
}
pub fn replace(&mut self, old: &Op, new: &Op) {
if old.id != new.id {
self.ops.remove(&old.id);
self.ops.insert(new.id);
}
assert!(new.key == old.key);
match (new.succ.is_empty(), old.succ.is_empty(), new.elemid()) {
(false, true, Some(elem)) => match self.visible.get(&elem).copied() {
Some(n) if n == 1 => {
self.len -= 1;
self.visible.remove(&elem);
}
Some(n) => {
self.visible.insert(elem, n - 1);
}
None => panic!("remove overun in index"),
},
(true, false, Some(elem)) => match self.visible.get(&elem).copied() {
Some(n) => {
self.visible.insert(elem, n + 1);
}
None => {
self.len += 1;
self.visible.insert(elem, 1);
}
},
_ => {}
}
}
pub fn insert(&mut self, op: &Op) {
self.ops.insert(op.id);
if op.succ.is_empty() {
if let Some(elem) = op.elemid() {
match self.visible.get(&elem).copied() {
Some(n) => {
self.visible.insert(elem, n + 1);
}
None => {
self.len += 1;
self.visible.insert(elem, 1);
}
}
}
}
}
pub fn remove(&mut self, op: &Op) {
self.ops.remove(&op.id);
if op.succ.is_empty() {
if let Some(elem) = op.elemid() {
match self.visible.get(&elem).copied() {
Some(n) if n == 1 => {
self.len -= 1;
self.visible.remove(&elem);
}
Some(n) => {
self.visible.insert(elem, n - 1);
}
None => panic!("remove overun in index"),
}
}
}
}
pub fn merge(&mut self, other: &Index) {
for id in &other.ops {
self.ops.insert(*id);
}
for (elem, n) in other.visible.iter() {
match self.visible.get(elem).cloned() {
None => {
self.visible.insert(*elem, 1);
self.len += 1;
}
Some(m) => {
self.visible.insert(*elem, m + n);
}
}
}
}
}
impl Default for Index {
fn default() -> Self {
Self::new()
}
}
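/// Tracks `Set(Counter)` ops seen while scanning so that later `Inc` ops can
/// be folded into them when deciding visibility (inferred from the code
/// below).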
#[derive(Debug, Clone, PartialEq, Default)]
pub(crate) struct VisWindow {
counters: HashMap<OpId, CounterData>,
}
impl VisWindow {
fn visible(&mut self, op: &Op, pos: usize) -> bool {
let mut visible = false;
match op.action {
OpType::Set(ScalarValue::Counter(val)) => {
self.counters.insert(
op.id,
CounterData {
pos,
val,
succ: op.succ.iter().cloned().collect(),
op: op.clone(),
},
);
if op.succ.is_empty() {
visible = true;
}
}
OpType::Inc(inc_val) => {
for id in &op.pred {
if let Some(mut entry) = self.counters.get_mut(id) {
entry.succ.remove(&op.id);
entry.val += inc_val;
entry.op.action = OpType::Set(ScalarValue::Counter(entry.val));
if entry.succ.is_empty() {
visible = true;
}
}
}
}
_ => {
if op.succ.is_empty() {
visible = true;
}
}
};
visible
}
fn visible_at(&mut self, op: &Op, pos: usize, clock: &Clock) -> bool {
if !clock.covers(&op.id) {
return false;
}
let mut visible = false;
match op.action {
OpType::Set(ScalarValue::Counter(val)) => {
self.counters.insert(
op.id,
CounterData {
pos,
val,
succ: op.succ.iter().cloned().collect(),
op: op.clone(),
},
);
if !op.succ.iter().any(|i| clock.covers(i)) {
visible = true;
}
}
OpType::Inc(inc_val) => {
for id in &op.pred {
// pred ops always precede op.id, so they are visible to us whenever op is
if let Some(mut entry) = self.counters.get_mut(id) {
entry.succ.remove(&op.id);
entry.val += inc_val;
entry.op.action = OpType::Set(ScalarValue::Counter(entry.val));
if !entry.succ.iter().any(|i| clock.covers(i)) {
visible = true;
}
}
}
}
_ => {
if !op.succ.iter().any(|i| clock.covers(i)) {
visible = true;
}
}
};
visible
}
pub fn seen_op(&self, op: &Op, pos: usize) -> Vec<(usize, Op)> {
let mut result = vec![];
for pred in &op.pred {
if let Some(entry) = self.counters.get(pred) {
result.push((entry.pos, entry.op.clone()));
}
}
if result.is_empty() {
vec![(pos, op.clone())]
} else {
result
}
}
}
pub(crate) fn is_visible(op: &Op, pos: usize, counters: &mut HashMap<OpId, CounterData>) -> bool {
let mut visible = false;
match op.action {
OpType::Set(ScalarValue::Counter(val)) => {
counters.insert(
op.id,
CounterData {
pos,
val,
succ: op.succ.iter().cloned().collect(),
op: op.clone(),
},
);
if op.succ.is_empty() {
visible = true;
}
}
OpType::Inc(inc_val) => {
for id in &op.pred {
if let Some(mut entry) = counters.get_mut(id) {
entry.succ.remove(&op.id);
entry.val += inc_val;
entry.op.action = OpType::Set(ScalarValue::Counter(entry.val));
if entry.succ.is_empty() {
visible = true;
}
}
}
}
_ => {
if op.succ.is_empty() {
visible = true;
}
}
};
visible
}
pub(crate) fn visible_op(
op: &Op,
pos: usize,
counters: &HashMap<OpId, CounterData>,
) -> Vec<(usize, Op)> {
let mut result = vec![];
for pred in &op.pred {
if let Some(entry) = counters.get(pred) {
result.push((entry.pos, entry.op.clone()));
}
}
if result.is_empty() {
vec![(pos, op.clone())]
} else {
result
}
}
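/// Binary search over the ops of `node`, returning the first position whose
/// op does not compare `Ordering::Less` under `f` - i.e. the lower bound for
/// `f`'s target.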
pub(crate) fn binary_search_by<F, const B: usize>(node: &OpTreeNode<B>, f: F) -> usize
where
F: Fn(&Op) -> Ordering,
{
let mut right = node.len();
let mut left = 0;
while left < right {
let seq = (left + right) / 2;
if f(node.get(seq).unwrap()) == Ordering::Less {
left = seq + 1;
} else {
right = seq;
}
}
left
}

@@ -1,80 +0,0 @@
use crate::op_tree::OpTreeNode;
use crate::query::{QueryResult, TreeQuery, VisWindow};
use crate::{AutomergeError, ElemId, Key, Op, HEAD};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct InsertNth<const B: usize> {
target: usize,
seen: usize,
pub pos: usize,
last_seen: Option<ElemId>,
last_insert: Option<ElemId>,
window: VisWindow,
}
impl<const B: usize> InsertNth<B> {
pub fn new(target: usize) -> Self {
InsertNth {
target,
seen: 0,
pos: 0,
last_seen: None,
last_insert: None,
window: Default::default(),
}
}
pub fn key(&self) -> Result<Key, AutomergeError> {
if self.target == 0 {
Ok(HEAD.into())
} else if self.seen == self.target && self.last_insert.is_some() {
Ok(Key::Seq(self.last_insert.unwrap()))
} else {
Err(AutomergeError::InvalidIndex(self.target))
}
}
}
impl<const B: usize> TreeQuery<B> for InsertNth<B> {
fn query_node(&mut self, child: &OpTreeNode<B>) -> QueryResult {
if self.target == 0 {
// inserting at the start of the object: all existing inserts are lesser because this op is local
self.pos = 0;
return QueryResult::Finish;
}
let mut num_vis = child.index.len;
if num_vis > 0 {
if child.index.has(&self.last_seen) {
num_vis -= 1;
}
if self.seen + num_vis >= self.target {
QueryResult::Decend
} else {
self.pos += child.len();
self.seen += num_vis;
self.last_seen = child.last().elemid();
QueryResult::Next
}
} else {
self.pos += child.len();
QueryResult::Next
}
}
fn query_element(&mut self, element: &Op) -> QueryResult {
if element.insert {
if self.seen >= self.target {
return QueryResult::Finish;
};
self.last_seen = None;
self.last_insert = element.elemid();
}
if self.last_seen.is_none() && self.window.visible(element, self.pos) {
self.seen += 1;
self.last_seen = element.elemid()
}
self.pos += 1;
QueryResult::Next
}
}

@@ -1,34 +0,0 @@
use crate::op_tree::OpTreeNode;
use crate::query::{QueryResult, TreeQuery, VisWindow};
use crate::Key;
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Keys<const B: usize> {
pub keys: Vec<Key>,
window: VisWindow,
}
impl<const B: usize> Keys<B> {
pub fn new() -> Self {
Keys {
keys: vec![],
window: Default::default(),
}
}
}
impl<const B: usize> TreeQuery<B> for Keys<B> {
fn query_node(&mut self, child: &OpTreeNode<B>) -> QueryResult {
let mut last = None;
for i in 0..child.len() {
let op = child.get(i).unwrap();
let visible = self.window.visible(op, i);
if Some(op.key) != last && visible {
self.keys.push(op.key);
last = Some(op.key);
}
}
QueryResult::Finish
}
}

@@ -1,36 +0,0 @@
use crate::query::{QueryResult, TreeQuery, VisWindow};
use crate::{Clock, Key, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct KeysAt<const B: usize> {
clock: Clock,
pub keys: Vec<Key>,
last: Option<Key>,
window: VisWindow,
pos: usize,
}
impl<const B: usize> KeysAt<B> {
pub fn new(clock: Clock) -> Self {
KeysAt {
clock,
pos: 0,
last: None,
keys: vec![],
window: Default::default(),
}
}
}
impl<const B: usize> TreeQuery<B> for KeysAt<B> {
fn query_element(&mut self, op: &Op) -> QueryResult {
let visible = self.window.visible_at(op, self.pos, &self.clock);
if Some(op.key) != self.last && visible {
self.keys.push(op.key);
self.last = Some(op.key);
}
self.pos += 1;
QueryResult::Next
}
}

@@ -1,23 +0,0 @@
use crate::op_tree::OpTreeNode;
use crate::query::{QueryResult, TreeQuery};
use crate::ObjId;
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Len<const B: usize> {
obj: ObjId,
pub len: usize,
}
impl<const B: usize> Len<B> {
pub fn new(obj: ObjId) -> Self {
Len { obj, len: 0 }
}
}
impl<const B: usize> TreeQuery<B> for Len<B> {
fn query_node(&mut self, child: &OpTreeNode<B>) -> QueryResult {
self.len = child.index.len;
QueryResult::Finish
}
}

@@ -1,48 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, is_visible, visible_op, QueryResult, TreeQuery};
use crate::{ElemId, ObjId, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct ListVals {
obj: ObjId,
last_elem: Option<ElemId>,
pub ops: Vec<Op>,
}
impl ListVals {
pub fn new(obj: ObjId) -> Self {
ListVals {
obj,
last_elem: None,
ops: vec![],
}
}
}
impl<const B: usize> TreeQuery<B> for ListVals {
fn query_node_with_metadata(
&mut self,
child: &OpTreeNode<B>,
m: &OpSetMetadata,
) -> QueryResult {
let start = binary_search_by(child, |op| m.lamport_cmp(op.obj.0, self.obj.0));
let mut counters = Default::default();
for pos in start..child.len() {
let op = child.get(pos).unwrap();
if op.obj != self.obj {
break;
}
if op.insert {
self.last_elem = None;
}
if self.last_elem.is_none() && is_visible(op, pos, &mut counters) {
for (_, vop) in visible_op(op, pos, &counters) {
self.last_elem = vop.elemid();
self.ops.push(vop);
}
}
}
QueryResult::Finish
}
}

@@ -1,40 +0,0 @@
use crate::query::{QueryResult, TreeQuery, VisWindow};
use crate::{Clock, ElemId, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct ListValsAt {
clock: Clock,
last_elem: Option<ElemId>,
pub ops: Vec<Op>,
window: VisWindow,
pos: usize,
}
impl ListValsAt {
pub fn new(clock: Clock) -> Self {
ListValsAt {
clock,
last_elem: None,
ops: vec![],
window: Default::default(),
pos: 0,
}
}
}
impl<const B: usize> TreeQuery<B> for ListValsAt {
fn query_element(&mut self, op: &Op) -> QueryResult {
if op.insert {
self.last_elem = None;
}
if self.last_elem.is_none() && self.window.visible_at(op, self.pos, &self.clock) {
for (_, vop) in self.window.seen_op(op, self.pos) {
self.last_elem = vop.elemid();
self.ops.push(vop);
}
}
self.pos += 1;
QueryResult::Next
}
}

@@ -1,87 +0,0 @@
use crate::op_tree::OpTreeNode;
use crate::query::{QueryResult, TreeQuery, VisWindow};
use crate::{AutomergeError, ElemId, Key, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Nth<const B: usize> {
target: usize,
seen: usize,
last_seen: Option<ElemId>,
last_elem: Option<ElemId>,
window: VisWindow,
pub ops: Vec<Op>,
pub ops_pos: Vec<usize>,
pub pos: usize,
}
impl<const B: usize> Nth<B> {
pub fn new(target: usize) -> Self {
Nth {
target,
seen: 0,
last_seen: None,
ops: vec![],
ops_pos: vec![],
pos: 0,
last_elem: None,
window: Default::default(),
}
}
pub fn key(&self) -> Result<Key, AutomergeError> {
if let Some(e) = self.last_elem {
Ok(Key::Seq(e))
} else {
Err(AutomergeError::InvalidIndex(self.target))
}
}
}
impl<const B: usize> TreeQuery<B> for Nth<B> {
fn query_node(&mut self, child: &OpTreeNode<B>) -> QueryResult {
let mut num_vis = child.index.len;
if num_vis > 0 {
// num_vis is the number of visible keys in the index,
// minus one if we're counting last_seen
if child.index.has(&self.last_seen) {
num_vis -= 1;
}
if self.seen + num_vis > self.target {
QueryResult::Decend
} else {
self.pos += child.len();
self.seen += num_vis;
self.last_seen = child.last().elemid();
QueryResult::Next
}
} else {
self.pos += child.len();
QueryResult::Next
}
}
fn query_element(&mut self, element: &Op) -> QueryResult {
if element.insert {
if self.seen > self.target {
return QueryResult::Finish;
};
self.last_elem = element.elemid();
self.last_seen = None
}
let visible = self.window.visible(element, self.pos);
if visible && self.last_seen.is_none() {
self.seen += 1;
self.last_seen = element.elemid()
}
if self.seen == self.target + 1 && visible {
for (vpos, vop) in self.window.seen_op(element, self.pos) {
self.ops.push(vop);
self.ops_pos.push(vpos);
}
}
self.pos += 1;
QueryResult::Next
}
}

@@ -1,57 +0,0 @@
use crate::query::{QueryResult, TreeQuery, VisWindow};
use crate::{Clock, ElemId, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct NthAt<const B: usize> {
clock: Clock,
target: usize,
seen: usize,
last_seen: Option<ElemId>,
last_elem: Option<ElemId>,
window: VisWindow,
pub ops: Vec<Op>,
pub ops_pos: Vec<usize>,
pub pos: usize,
}
impl<const B: usize> NthAt<B> {
pub fn new(target: usize, clock: Clock) -> Self {
NthAt {
clock,
target,
seen: 0,
last_seen: None,
ops: vec![],
ops_pos: vec![],
pos: 0,
last_elem: None,
window: Default::default(),
}
}
}
impl<const B: usize> TreeQuery<B> for NthAt<B> {
fn query_element(&mut self, element: &Op) -> QueryResult {
if element.insert {
if self.seen > self.target {
return QueryResult::Finish;
};
self.last_elem = element.elemid();
self.last_seen = None
}
let visible = self.window.visible_at(element, self.pos, &self.clock);
if visible && self.last_seen.is_none() {
self.seen += 1;
self.last_seen = element.elemid()
}
if self.seen == self.target + 1 && visible {
for (vpos, vop) in self.window.seen_op(element, self.pos) {
self.ops.push(vop);
self.ops_pos.push(vpos);
}
}
self.pos += 1;
QueryResult::Next
}
}

@@ -1,54 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, is_visible, visible_op, QueryResult, TreeQuery};
use crate::{Key, ObjId, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Prop {
obj: ObjId,
key: Key,
pub ops: Vec<Op>,
pub ops_pos: Vec<usize>,
pub pos: usize,
}
impl Prop {
pub fn new(obj: ObjId, prop: usize) -> Self {
Prop {
obj,
key: Key::Map(prop),
ops: vec![],
ops_pos: vec![],
pos: 0,
}
}
}
impl<const B: usize> TreeQuery<B> for Prop {
fn query_node_with_metadata(
&mut self,
child: &OpTreeNode<B>,
m: &OpSetMetadata,
) -> QueryResult {
let start = binary_search_by(child, |op| {
m.lamport_cmp(op.obj.0, self.obj.0)
.then_with(|| m.key_cmp(&op.key, &self.key))
});
let mut counters = Default::default();
self.pos = start;
for pos in start..child.len() {
let op = child.get(pos).unwrap();
if !(op.obj == self.obj && op.key == self.key) {
break;
}
if is_visible(op, pos, &mut counters) {
for (vpos, vop) in visible_op(op, pos, &counters) {
self.ops.push(vop);
self.ops_pos.push(vpos);
}
}
self.pos += 1;
}
QueryResult::Finish
}
}

@@ -1,51 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery, VisWindow};
use crate::{Clock, Key, Op};
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct PropAt {
clock: Clock,
key: Key,
pub ops: Vec<Op>,
pub ops_pos: Vec<usize>,
pub pos: usize,
}
impl PropAt {
pub fn new(prop: usize, clock: Clock) -> Self {
PropAt {
clock,
key: Key::Map(prop),
ops: vec![],
ops_pos: vec![],
pos: 0,
}
}
}
impl<const B: usize> TreeQuery<B> for PropAt {
fn query_node_with_metadata(
&mut self,
child: &OpTreeNode<B>,
m: &OpSetMetadata,
) -> QueryResult {
let start = binary_search_by(child, |op| m.key_cmp(&op.key, &self.key));
let mut window: VisWindow = Default::default();
self.pos = start;
for pos in start..child.len() {
let op = child.get(pos).unwrap();
if op.key != self.key {
break;
}
if window.visible_at(op, pos, &self.clock) {
for (vpos, vop) in window.seen_op(op, pos) {
self.ops.push(vop);
self.ops_pos.push(vpos);
}
}
self.pos += 1;
}
QueryResult::Finish
}
}

@@ -1,129 +0,0 @@
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::{Key, Op, HEAD};
use std::cmp::Ordering;
use std::fmt::Debug;
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct SeekOp<const B: usize> {
op: Op,
pub pos: usize,
pub succ: Vec<usize>,
found: bool,
}
impl<const B: usize> SeekOp<B> {
pub fn new(op: &Op) -> Self {
SeekOp {
op: op.clone(),
succ: vec![],
pos: 0,
found: false,
}
}
fn different_obj(&self, op: &Op) -> bool {
op.obj != self.op.obj
}
fn lesser_insert(&self, op: &Op, m: &OpSetMetadata) -> bool {
op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less
}
fn greater_opid(&self, op: &Op, m: &OpSetMetadata) -> bool {
m.lamport_cmp(op.id, self.op.id) == Ordering::Greater
}
fn is_target_insert(&self, op: &Op) -> bool {
if !op.insert {
return false;
}
if self.op.insert {
op.elemid() == self.op.key.elemid()
} else {
op.elemid() == self.op.elemid()
}
}
}
impl<const B: usize> TreeQuery<B> for SeekOp<B> {
fn query_node_with_metadata(
&mut self,
child: &OpTreeNode<B>,
m: &OpSetMetadata,
) -> QueryResult {
if self.found {
return QueryResult::Decend;
}
match self.op.key {
Key::Seq(e) if e == HEAD => {
while self.pos < child.len() {
let op = child.get(self.pos).unwrap();
if self.op.overwrites(op) {
self.succ.push(self.pos);
}
if op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less {
break;
}
self.pos += 1;
}
QueryResult::Finish
}
Key::Seq(e) => {
if self.found || child.index.ops.contains(&e.0) {
QueryResult::Decend
} else {
self.pos += child.len();
QueryResult::Next
}
}
Key::Map(_) => {
self.pos = binary_search_by(child, |op| m.key_cmp(&op.key, &self.op.key));
while self.pos < child.len() {
let op = child.get(self.pos).unwrap();
if op.key != self.op.key {
break;
}
if self.op.overwrites(op) {
self.succ.push(self.pos);
}
if m.lamport_cmp(op.id, self.op.id) == Ordering::Greater {
break;
}
self.pos += 1;
}
QueryResult::Finish
}
}
}
fn query_element_with_metadata(&mut self, e: &Op, m: &OpSetMetadata) -> QueryResult {
if !self.found {
if self.is_target_insert(e) {
self.found = true;
if self.op.overwrites(e) {
self.succ.push(self.pos);
}
}
self.pos += 1;
QueryResult::Next
} else {
if self.op.overwrites(e) {
self.succ.push(self.pos);
}
if self.op.insert {
if self.different_obj(e) || self.lesser_insert(e, m) {
QueryResult::Finish
} else {
self.pos += 1;
QueryResult::Next
}
} else if e.insert || self.different_obj(e) || self.greater_opid(e, m) {
QueryResult::Finish
} else {
self.pos += 1;
QueryResult::Next
}
}
}
}

@@ -1,381 +0,0 @@
use std::{
borrow::Cow,
collections::{HashMap, HashSet},
convert::TryFrom,
io,
io::Write,
};
use crate::{
decoding, decoding::Decoder, encoding, encoding::Encodable, Automerge, AutomergeError, Change,
ChangeHash, Patch,
};
mod bloom;
mod state;
pub use bloom::BloomFilter;
pub use state::{SyncHave, SyncState};
const HASH_SIZE: usize = 32; // 256 bits = 32 bytes
const MESSAGE_TYPE_SYNC: u8 = 0x42; // first byte of a sync message, for identification
impl Automerge {
pub fn generate_sync_message(&mut self, sync_state: &mut SyncState) -> Option<SyncMessage> {
self.ensure_transaction_closed();
self._generate_sync_message(sync_state)
}
fn _generate_sync_message(&self, sync_state: &mut SyncState) -> Option<SyncMessage> {
let our_heads = self._get_heads();
let our_need = self._get_missing_deps(sync_state.their_heads.as_ref().unwrap_or(&vec![]));
let their_heads_set = if let Some(ref heads) = sync_state.their_heads {
heads.iter().collect::<HashSet<_>>()
} else {
HashSet::new()
};
let our_have = if our_need.iter().all(|hash| their_heads_set.contains(hash)) {
vec![self.make_bloom_filter(sync_state.shared_heads.clone())]
} else {
Vec::new()
};
if let Some(ref their_have) = sync_state.their_have {
if let Some(first_have) = their_have.first().as_ref() {
if !first_have
.last_sync
.iter()
.all(|hash| self._get_change_by_hash(hash).is_some())
{
let reset_msg = SyncMessage {
heads: our_heads,
need: Vec::new(),
have: vec![SyncHave::default()],
changes: Vec::new(),
};
return Some(reset_msg);
}
}
}
let mut changes_to_send = if let (Some(their_have), Some(their_need)) = (
sync_state.their_have.as_ref(),
sync_state.their_need.as_ref(),
) {
self.get_changes_to_send(their_have.clone(), their_need)
} else {
Vec::new()
};
let heads_unchanged = if let Some(last_sent_heads) = sync_state.last_sent_heads.as_ref() {
last_sent_heads == &our_heads
} else {
false
};
let heads_equal = if let Some(their_heads) = sync_state.their_heads.as_ref() {
their_heads == &our_heads
} else {
false
};
if heads_unchanged && heads_equal && changes_to_send.is_empty() {
return None;
}
// deduplicate the changes to send with those we have already sent
changes_to_send.retain(|change| !sync_state.sent_hashes.contains(&change.hash));
sync_state.last_sent_heads = Some(our_heads.clone());
sync_state
.sent_hashes
.extend(changes_to_send.iter().map(|c| c.hash));
let sync_message = SyncMessage {
heads: our_heads,
have: our_have,
need: our_need,
changes: changes_to_send.into_iter().cloned().collect(),
};
Some(sync_message)
}
pub fn receive_sync_message(
&mut self,
sync_state: &mut SyncState,
message: SyncMessage,
) -> Result<Option<Patch>, AutomergeError> {
self.ensure_transaction_closed();
self._receive_sync_message(sync_state, message)
}
fn _receive_sync_message(
&mut self,
sync_state: &mut SyncState,
message: SyncMessage,
) -> Result<Option<Patch>, AutomergeError> {
let mut patch = None;
let before_heads = self.get_heads();
let SyncMessage {
heads: message_heads,
changes: message_changes,
need: message_need,
have: message_have,
} = message;
let changes_is_empty = message_changes.is_empty();
if !changes_is_empty {
patch = Some(self.apply_changes(&message_changes)?);
sync_state.shared_heads = advance_heads(
&before_heads.iter().collect(),
&self.get_heads().into_iter().collect(),
&sync_state.shared_heads,
);
}
// trim down the sent hashes to those that we know they haven't seen
self.filter_changes(&message_heads, &mut sync_state.sent_hashes);
if changes_is_empty && message_heads == before_heads {
sync_state.last_sent_heads = Some(message_heads.clone());
}
let known_heads = message_heads
.iter()
.filter(|head| self.get_change_by_hash(head).is_some())
.collect::<Vec<_>>();
if known_heads.len() == message_heads.len() {
sync_state.shared_heads = message_heads.clone();
} else {
sync_state.shared_heads = sync_state
.shared_heads
.iter()
.chain(known_heads)
.collect::<HashSet<_>>()
.into_iter()
.copied()
.collect::<Vec<_>>();
sync_state.shared_heads.sort();
}
sync_state.their_have = Some(message_have);
sync_state.their_heads = Some(message_heads);
sync_state.their_need = Some(message_need);
Ok(patch)
}
fn make_bloom_filter(&self, last_sync: Vec<ChangeHash>) -> SyncHave {
let new_changes = self._get_changes(&last_sync);
let hashes = new_changes
.into_iter()
.map(|change| change.hash)
.collect::<Vec<_>>();
SyncHave {
last_sync,
bloom: BloomFilter::from(&hashes[..]),
}
}
fn get_changes_to_send(&self, have: Vec<SyncHave>, need: &[ChangeHash]) -> Vec<&Change> {
if have.is_empty() {
need.iter()
.filter_map(|hash| self._get_change_by_hash(hash))
.collect()
} else {
let mut last_sync_hashes = HashSet::new();
let mut bloom_filters = Vec::with_capacity(have.len());
for h in have {
let SyncHave { last_sync, bloom } = h;
for hash in last_sync {
last_sync_hashes.insert(hash);
}
bloom_filters.push(bloom);
}
let last_sync_hashes = last_sync_hashes.into_iter().collect::<Vec<_>>();
let changes = self._get_changes(&last_sync_hashes);
let mut change_hashes = HashSet::with_capacity(changes.len());
let mut dependents: HashMap<ChangeHash, Vec<ChangeHash>> = HashMap::new();
let mut hashes_to_send = HashSet::new();
for change in &changes {
change_hashes.insert(change.hash);
for dep in &change.deps {
dependents.entry(*dep).or_default().push(change.hash);
}
if bloom_filters
.iter()
.all(|bloom| !bloom.contains_hash(&change.hash))
{
hashes_to_send.insert(change.hash);
}
}
let mut stack = hashes_to_send.iter().copied().collect::<Vec<_>>();
while let Some(hash) = stack.pop() {
if let Some(deps) = dependents.get(&hash) {
for dep in deps {
if hashes_to_send.insert(*dep) {
stack.push(*dep);
}
}
}
}
let mut changes_to_send = Vec::new();
for hash in need {
hashes_to_send.insert(*hash);
if !change_hashes.contains(hash) {
let change = self._get_change_by_hash(hash);
if let Some(change) = change {
changes_to_send.push(change);
}
}
}
for change in changes {
if hashes_to_send.contains(&change.hash) {
changes_to_send.push(change);
}
}
changes_to_send
}
}
}
#[derive(Debug, Clone)]
pub struct SyncMessage {
pub heads: Vec<ChangeHash>,
pub need: Vec<ChangeHash>,
pub have: Vec<SyncHave>,
pub changes: Vec<Change>,
}
impl SyncMessage {
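/// The wire layout written by `encode` below (informal, derived from the
/// code): the 0x42 type byte, the `heads` and `need` hash lists, a count of
/// `have` entries (each a `last_sync` hash list plus a bloom filter), then a
/// count of changes followed by each change's compressed bytes.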
pub fn encode(self) -> Result<Vec<u8>, encoding::Error> {
let mut buf = vec![MESSAGE_TYPE_SYNC];
encode_hashes(&mut buf, &self.heads)?;
encode_hashes(&mut buf, &self.need)?;
(self.have.len() as u32).encode(&mut buf)?;
for have in self.have {
encode_hashes(&mut buf, &have.last_sync)?;
have.bloom.into_bytes()?.encode(&mut buf)?;
}
(self.changes.len() as u32).encode(&mut buf)?;
for mut change in self.changes {
change.compress();
change.raw_bytes().encode(&mut buf)?;
}
Ok(buf)
}
pub fn decode(bytes: &[u8]) -> Result<SyncMessage, decoding::Error> {
let mut decoder = Decoder::new(Cow::Borrowed(bytes));
let message_type = decoder.read::<u8>()?;
if message_type != MESSAGE_TYPE_SYNC {
return Err(decoding::Error::WrongType {
expected_one_of: vec![MESSAGE_TYPE_SYNC],
found: message_type,
});
}
let heads = decode_hashes(&mut decoder)?;
let need = decode_hashes(&mut decoder)?;
let have_count = decoder.read::<u32>()?;
let mut have = Vec::with_capacity(have_count as usize);
for _ in 0..have_count {
let last_sync = decode_hashes(&mut decoder)?;
let bloom_bytes: Vec<u8> = decoder.read()?;
let bloom = BloomFilter::try_from(bloom_bytes.as_slice())?;
have.push(SyncHave { last_sync, bloom });
}
let change_count = decoder.read::<u32>()?;
let mut changes = Vec::with_capacity(change_count as usize);
for _ in 0..change_count {
let change = decoder.read()?;
changes.push(Change::from_bytes(change)?);
}
Ok(SyncMessage {
heads,
need,
have,
changes,
})
}
}
fn encode_hashes(buf: &mut Vec<u8>, hashes: &[ChangeHash]) -> Result<(), encoding::Error> {
debug_assert!(
hashes.windows(2).all(|h| h[0] <= h[1]),
"hashes were not sorted"
);
hashes.encode(buf)?;
Ok(())
}
impl Encodable for &[ChangeHash] {
fn encode<W: Write>(&self, buf: &mut W) -> io::Result<usize> {
let head = self.len().encode(buf)?;
let mut body = 0;
for hash in self.iter() {
buf.write_all(&hash.0)?;
body += hash.0.len();
}
Ok(head + body)
}
}
fn decode_hashes(decoder: &mut Decoder) -> Result<Vec<ChangeHash>, decoding::Error> {
let length = decoder.read::<u32>()?;
let mut hashes = Vec::with_capacity(length as usize);
for _ in 0..length {
let hash_bytes = decoder.read_bytes(HASH_SIZE)?;
let hash = ChangeHash::try_from(hash_bytes).map_err(decoding::Error::BadChangeFormat)?;
hashes.push(hash);
}
Ok(hashes)
}
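/// Work out the new shared heads after changes have been applied (inferred
/// from the code below): keep every head we gained since the last sync plus
/// every old shared head that is still a head, then sort. For example
/// (illustrative): old heads `{a}`, new heads `{b}`, old shared heads `[a]`
/// give advanced heads `[b]`.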
fn advance_heads(
my_old_heads: &HashSet<&ChangeHash>,
my_new_heads: &HashSet<ChangeHash>,
our_old_shared_heads: &[ChangeHash],
) -> Vec<ChangeHash> {
let new_heads = my_new_heads
.iter()
.filter(|head| !my_old_heads.contains(head))
.copied()
.collect::<Vec<_>>();
let common_heads = our_old_shared_heads
.iter()
.filter(|head| my_new_heads.contains(head))
.copied()
.collect::<Vec<_>>();
let mut advanced_heads = HashSet::with_capacity(new_heads.len() + common_heads.len());
for head in new_heads.into_iter().chain(common_heads) {
advanced_heads.insert(head);
}
let mut advanced_heads = advanced_heads.into_iter().collect::<Vec<_>>();
advanced_heads.sort();
advanced_heads
}

@@ -1,65 +0,0 @@
use std::{borrow::Cow, collections::HashSet};
use super::{decode_hashes, encode_hashes};
use crate::{decoding, decoding::Decoder, encoding, BloomFilter, ChangeHash};
const SYNC_STATE_TYPE: u8 = 0x43; // first byte of an encoded sync state, for identification
#[derive(Debug, Clone)]
pub struct SyncState {
pub shared_heads: Vec<ChangeHash>,
pub last_sent_heads: Option<Vec<ChangeHash>>,
pub their_heads: Option<Vec<ChangeHash>>,
pub their_need: Option<Vec<ChangeHash>>,
pub their_have: Option<Vec<SyncHave>>,
pub sent_hashes: HashSet<ChangeHash>,
}
#[derive(Debug, Clone, Default)]
pub struct SyncHave {
pub last_sync: Vec<ChangeHash>,
pub bloom: BloomFilter,
}
impl SyncState {
pub fn encode(&self) -> Result<Vec<u8>, encoding::Error> {
let mut buf = vec![SYNC_STATE_TYPE];
encode_hashes(&mut buf, &self.shared_heads)?;
Ok(buf)
}
pub fn decode(bytes: &[u8]) -> Result<Self, decoding::Error> {
let mut decoder = Decoder::new(Cow::Borrowed(bytes));
let record_type = decoder.read::<u8>()?;
if record_type != SYNC_STATE_TYPE {
return Err(decoding::Error::WrongType {
expected_one_of: vec![SYNC_STATE_TYPE],
found: record_type,
});
}
let shared_heads = decode_hashes(&mut decoder)?;
Ok(Self {
shared_heads,
last_sent_heads: Some(Vec::new()),
their_heads: None,
their_need: None,
their_have: Some(Vec::new()),
sent_hashes: HashSet::new(),
})
}
}
impl Default for SyncState {
fn default() -> Self {
Self {
shared_heads: Vec::new(),
last_sent_heads: Some(Vec::new()),
their_heads: None,
their_need: None,
their_have: None,
sent_hashes: HashSet::new(),
}
}
}

@@ -1,425 +0,0 @@
use crate::error;
use crate::legacy as amp;
use crate::{Value, ScalarValue};
use serde::{Deserialize, Serialize};
use std::cmp::Eq;
use std::convert::TryFrom;
use std::convert::TryInto;
use std::fmt;
use std::str::FromStr;
use tinyvec::{ArrayVec, TinyVec};
pub(crate) const HEAD: ElemId = ElemId(OpId(0, 0));
pub(crate) const ROOT: OpId = OpId(0, 0);
const ROOT_STR: &str = "_root";
const HEAD_STR: &str = "_head";
/// An actor id is a sequence of bytes. By default we use a uuid which can be nicely stack
/// allocated.
///
/// If users want to use their own kind of identifier that is longer than a uuid,
/// it will likely end up being pushed onto the heap, which is still fine.
///
// Note that change encoding relies on the Ord implementation for the ActorId being implemented in
// terms of the lexicographic ordering of the underlying bytes. Be aware of this if you are
// changing the ActorId implementation in ways which might affect the Ord implementation
#[derive(Eq, PartialEq, Hash, Clone, PartialOrd, Ord)]
#[cfg_attr(feature = "derive-arbitrary", derive(arbitrary::Arbitrary))]
pub struct ActorId(TinyVec<[u8; 16]>);
impl fmt::Debug for ActorId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_tuple("ActorID")
.field(&hex::encode(&self.0))
.finish()
}
}
impl ActorId {
pub fn random() -> ActorId {
ActorId(TinyVec::from(*uuid::Uuid::new_v4().as_bytes()))
}
pub fn to_bytes(&self) -> &[u8] {
&self.0
}
pub fn to_hex_string(&self) -> String {
hex::encode(&self.0)
}
pub fn op_id_at(&self, seq: u64) -> amp::OpId {
amp::OpId(seq, self.clone())
}
}
impl TryFrom<&str> for ActorId {
type Error = error::InvalidActorId;
fn try_from(s: &str) -> Result<Self, Self::Error> {
hex::decode(s)
.map(ActorId::from)
.map_err(|_| error::InvalidActorId(s.into()))
}
}
impl From<uuid::Uuid> for ActorId {
fn from(u: uuid::Uuid) -> Self {
ActorId(TinyVec::from(*u.as_bytes()))
}
}
impl From<&[u8]> for ActorId {
fn from(b: &[u8]) -> Self {
ActorId(TinyVec::from(b))
}
}
impl From<&Vec<u8>> for ActorId {
fn from(b: &Vec<u8>) -> Self {
ActorId::from(b.as_slice())
}
}
impl From<Vec<u8>> for ActorId {
fn from(b: Vec<u8>) -> Self {
let inner = if let Ok(arr) = ArrayVec::try_from(b.as_slice()) {
TinyVec::Inline(arr)
} else {
TinyVec::Heap(b)
};
ActorId(inner)
}
}
impl FromStr for ActorId {
type Err = error::InvalidActorId;
fn from_str(s: &str) -> Result<Self, Self::Err> {
ActorId::try_from(s)
}
}
impl fmt::Display for ActorId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.to_hex_string())
}
}
#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Copy, Hash)]
#[serde(rename_all = "camelCase", untagged)]
pub enum ObjType {
Map,
Table,
List,
Text,
}
impl ObjType {
pub fn is_sequence(&self) -> bool {
matches!(self, Self::List | Self::Text)
}
}
impl From<amp::MapType> for ObjType {
fn from(other: amp::MapType) -> Self {
match other {
amp::MapType::Map => Self::Map,
amp::MapType::Table => Self::Table,
}
}
}
impl From<amp::SequenceType> for ObjType {
fn from(other: amp::SequenceType) -> Self {
match other {
amp::SequenceType::List => Self::List,
amp::SequenceType::Text => Self::Text,
}
}
}
impl fmt::Display for ObjType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ObjType::Map => write!(f, "map"),
ObjType::Table => write!(f, "table"),
ObjType::List => write!(f, "list"),
ObjType::Text => write!(f, "text"),
}
}
}
#[derive(PartialEq, Debug, Clone)]
pub enum OpType {
Make(ObjType),
/// Perform a deletion, expanding the operation to cover `n` deletions (multiOp).
Del,
Inc(i64),
Set(ScalarValue),
}
#[derive(Debug)]
pub(crate) enum Export {
Id(OpId),
Special(String),
Prop(usize),
}
pub(crate) trait Exportable {
fn export(&self) -> Export;
}
impl OpId {
#[inline]
pub fn counter(&self) -> u64 {
self.0
}
#[inline]
pub fn actor(&self) -> usize {
self.1
}
}
impl Exportable for ObjId {
fn export(&self) -> Export {
if self.0 == ROOT {
Export::Special(ROOT_STR.to_owned())
} else {
Export::Id(self.0)
}
}
}
impl Exportable for &ObjId {
fn export(&self) -> Export {
if self.0 == ROOT {
Export::Special(ROOT_STR.to_owned())
} else {
Export::Id(self.0)
}
}
}
impl Exportable for ElemId {
fn export(&self) -> Export {
if self == &HEAD {
Export::Special(HEAD_STR.to_owned())
} else {
Export::Id(self.0)
}
}
}
impl Exportable for OpId {
fn export(&self) -> Export {
Export::Id(*self)
}
}
impl Exportable for Key {
fn export(&self) -> Export {
match self {
Key::Map(p) => Export::Prop(*p),
Key::Seq(e) => e.export(),
}
}
}
impl From<OpId> for ObjId {
fn from(o: OpId) -> Self {
ObjId(o)
}
}
impl From<OpId> for ElemId {
fn from(o: OpId) -> Self {
ElemId(o)
}
}
impl From<String> for Prop {
fn from(p: String) -> Self {
Prop::Map(p)
}
}
impl From<&String> for Prop {
fn from(p: &String) -> Self {
Prop::Map(p.clone())
}
}
impl From<&str> for Prop {
fn from(p: &str) -> Self {
Prop::Map(p.to_owned())
}
}
impl From<usize> for Prop {
fn from(index: usize) -> Self {
Prop::Seq(index)
}
}
impl From<f64> for Prop {
fn from(index: f64) -> Self {
Prop::Seq(index as usize)
}
}
impl From<OpId> for Key {
fn from(id: OpId) -> Self {
Key::Seq(ElemId(id))
}
}
impl From<ElemId> for Key {
fn from(e: ElemId) -> Self {
Key::Seq(e)
}
}
#[derive(Debug, PartialEq, PartialOrd, Eq, Ord, Clone, Copy, Hash)]
pub(crate) enum Key {
Map(usize),
Seq(ElemId),
}
#[derive(Debug, PartialEq, PartialOrd, Eq, Ord, Clone)]
pub enum Prop {
Map(String),
Seq(usize),
}
#[derive(Debug, PartialEq, PartialOrd, Eq, Ord, Clone)]
pub struct Patch {}
impl Key {
pub fn elemid(&self) -> Option<ElemId> {
match self {
Key::Map(_) => None,
Key::Seq(id) => Some(*id),
}
}
}
#[derive(Debug, Clone, PartialOrd, Ord, Eq, PartialEq, Copy, Hash, Default)]
pub(crate) struct OpId(pub u64, pub usize);
#[derive(Debug, Clone, Copy, PartialOrd, Eq, PartialEq, Ord, Hash, Default)]
pub(crate) struct ObjId(pub OpId);
impl ObjId {
pub fn root() -> Self {
ObjId(OpId(0, 0))
}
}
#[derive(Debug, Clone, Copy, PartialOrd, Eq, PartialEq, Ord, Hash, Default)]
pub(crate) struct ElemId(pub OpId);
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Op {
pub change: usize,
pub id: OpId,
pub action: OpType,
pub obj: ObjId,
pub key: Key,
pub succ: Vec<OpId>,
pub pred: Vec<OpId>,
pub insert: bool,
}
impl Op {
pub fn is_del(&self) -> bool {
matches!(&self.action, OpType::Del)
}
pub fn is_noop(&self, action: &OpType) -> bool {
matches!((&self.action, action), (OpType::Set(n), OpType::Set(m)) if n == m)
}
pub fn overwrites(&self, other: &Op) -> bool {
self.pred.iter().any(|i| i == &other.id)
}
pub fn elemid(&self) -> Option<ElemId> {
if self.insert {
Some(ElemId(self.id))
} else {
self.key.elemid()
}
}
pub fn value(&self) -> Value {
match &self.action {
OpType::Make(obj_type) => Value::Object(*obj_type),
OpType::Set(scalar) => Value::Scalar(scalar.clone()),
_ => panic!("cant convert op into a value - {:?}", self),
}
}
#[allow(dead_code)]
pub fn dump(&self) -> String {
match &self.action {
OpType::Set(value) if self.insert => format!("i:{}", value),
OpType::Set(value) => format!("s:{}", value),
OpType::Make(obj) => format!("make{}", obj),
OpType::Inc(val) => format!("inc:{}", val),
OpType::Del => "del".to_string(),
}
}
}
#[derive(Debug, Clone)]
pub struct Peer {}
#[derive(Eq, PartialEq, Hash, Clone, PartialOrd, Ord, Copy)]
pub struct ChangeHash(pub [u8; 32]);
impl fmt::Debug for ChangeHash {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_tuple("ChangeHash")
.field(&hex::encode(&self.0))
.finish()
}
}
#[derive(thiserror::Error, Debug)]
pub enum ParseChangeHashError {
#[error(transparent)]
HexDecode(#[from] hex::FromHexError),
#[error("incorrect length, change hash should be 32 bytes, got {actual}")]
IncorrectLength { actual: usize },
}
impl FromStr for ChangeHash {
type Err = ParseChangeHashError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let bytes = hex::decode(s)?;
if bytes.len() == 32 {
Ok(ChangeHash(bytes.try_into().unwrap()))
} else {
Err(ParseChangeHashError::IncorrectLength {
actual: bytes.len(),
})
}
}
}
impl TryFrom<&[u8]> for ChangeHash {
type Error = error::InvalidChangeHashSlice;
fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
if bytes.len() != 32 {
Err(error::InvalidChangeHashSlice(Vec::from(bytes)))
} else {
let mut array = [0; 32];
array.copy_from_slice(bytes);
Ok(ChangeHash(array))
}
}
}
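
Both conversions above reject anything other than exactly 32 bytes rather than truncating. A minimal round-trip sketch (a hypothetical in-crate test; `hex` is the crate already used by `from_str`):

```rust
#[test]
fn change_hash_roundtrip() {
    use std::convert::TryFrom; // in the prelude on edition 2021, explicit on older editions
    // 64 hex characters decode to exactly 32 bytes, so parsing succeeds.
    let hex_str = "ab".repeat(32);
    let parsed: ChangeHash = hex_str.parse().expect("valid 32-byte hex");
    // TryFrom<&[u8]> applies the same length check to raw bytes.
    let from_bytes = ChangeHash::try_from(&parsed.0[..]).expect("32 bytes");
    assert_eq!(parsed, from_bytes);
    // Too-short input errors instead of being silently truncated.
    assert!("abcd".parse::<ChangeHash>().is_err());
}
```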


@@ -1,295 +0,0 @@
use crate::{error, ObjType, Op, OpId, OpType};
use serde::{Deserialize, Serialize};
use smol_str::SmolStr;
use std::convert::TryFrom;
#[derive(Debug, Clone, PartialEq)]
pub enum Value {
Object(ObjType),
Scalar(ScalarValue),
}
impl Value {
pub fn to_string(&self) -> Option<String> {
match self {
Value::Scalar(val) => Some(val.to_string()),
_ => None,
}
}
pub fn map() -> Value {
Value::Object(ObjType::Map)
}
pub fn list() -> Value {
Value::Object(ObjType::List)
}
pub fn text() -> Value {
Value::Object(ObjType::Text)
}
pub fn table() -> Value {
Value::Object(ObjType::Table)
}
pub fn str(s: &str) -> Value {
Value::Scalar(ScalarValue::Str(s.into()))
}
pub fn int(n: i64) -> Value {
Value::Scalar(ScalarValue::Int(n))
}
pub fn uint(n: u64) -> Value {
Value::Scalar(ScalarValue::Uint(n))
}
pub fn counter(n: i64) -> Value {
Value::Scalar(ScalarValue::Counter(n))
}
pub fn timestamp(n: i64) -> Value {
Value::Scalar(ScalarValue::Timestamp(n))
}
pub fn f64(n: f64) -> Value {
Value::Scalar(ScalarValue::F64(n))
}
pub fn bytes(b: Vec<u8>) -> Value {
Value::Scalar(ScalarValue::Bytes(b))
}
}
impl From<&str> for Value {
fn from(s: &str) -> Self {
Value::Scalar(s.into())
}
}
impl From<String> for Value {
fn from(s: String) -> Self {
Value::Scalar(ScalarValue::Str(s.into()))
}
}
impl From<i64> for Value {
fn from(n: i64) -> Self {
Value::Scalar(ScalarValue::Int(n))
}
}
impl From<i32> for Value {
fn from(n: i32) -> Self {
Value::Scalar(ScalarValue::Int(n.into()))
}
}
impl From<u64> for Value {
fn from(n: u64) -> Self {
Value::Scalar(ScalarValue::Uint(n))
}
}
impl From<bool> for Value {
fn from(v: bool) -> Self {
Value::Scalar(ScalarValue::Boolean(v))
}
}
impl From<ObjType> for Value {
fn from(o: ObjType) -> Self {
Value::Object(o)
}
}
impl From<ScalarValue> for Value {
fn from(v: ScalarValue) -> Self {
Value::Scalar(v)
}
}
impl From<&Op> for (Value, OpId) {
fn from(op: &Op) -> Self {
match &op.action {
OpType::Make(obj_type) => (Value::Object(*obj_type), op.id),
OpType::Set(scalar) => (Value::Scalar(scalar.clone()), op.id),
_ => panic!("cant convert op into a value - {:?}", op),
}
}
}
impl From<Op> for (Value, OpId) {
fn from(op: Op) -> Self {
match &op.action {
OpType::Make(obj_type) => (Value::Object(*obj_type), op.id),
OpType::Set(scalar) => (Value::Scalar(scalar.clone()), op.id),
_ => panic!("cant convert op into a value - {:?}", op),
}
}
}
impl From<Value> for OpType {
fn from(v: Value) -> Self {
match v {
Value::Object(o) => OpType::Make(o),
Value::Scalar(s) => OpType::Set(s),
}
}
}
#[derive(Deserialize, Serialize, PartialEq, Debug, Clone, Copy)]
pub(crate) enum DataType {
#[serde(rename = "counter")]
Counter,
#[serde(rename = "timestamp")]
Timestamp,
#[serde(rename = "bytes")]
Bytes,
#[serde(rename = "uint")]
Uint,
#[serde(rename = "int")]
Int,
#[serde(rename = "float64")]
F64,
#[serde(rename = "undefined")]
Undefined,
}
#[derive(Serialize, PartialEq, Debug, Clone)]
#[serde(untagged)]
pub enum ScalarValue {
Bytes(Vec<u8>),
Str(SmolStr),
Int(i64),
Uint(u64),
F64(f64),
Counter(i64),
Timestamp(i64),
Boolean(bool),
Null,
}
impl ScalarValue {
pub(crate) fn as_datatype(
&self,
datatype: DataType,
) -> Result<ScalarValue, error::InvalidScalarValue> {
match (datatype, self) {
(DataType::Counter, ScalarValue::Int(i)) => Ok(ScalarValue::Counter(*i)),
(DataType::Counter, ScalarValue::Uint(u)) => match i64::try_from(*u) {
Ok(i) => Ok(ScalarValue::Counter(i)),
Err(_) => Err(error::InvalidScalarValue {
raw_value: self.clone(),
expected: "an integer".to_string(),
unexpected: "an integer larger than i64::max_value".to_string(),
datatype,
}),
},
(DataType::Bytes, ScalarValue::Bytes(bytes)) => Ok(ScalarValue::Bytes(bytes.clone())),
(DataType::Bytes, v) => Err(error::InvalidScalarValue {
raw_value: self.clone(),
expected: "a vector of bytes".to_string(),
unexpected: v.to_string(),
datatype,
}),
(DataType::Counter, v) => Err(error::InvalidScalarValue {
raw_value: self.clone(),
expected: "an integer".to_string(),
unexpected: v.to_string(),
datatype,
}),
(DataType::Timestamp, ScalarValue::Int(i)) => Ok(ScalarValue::Timestamp(*i)),
(DataType::Timestamp, ScalarValue::Uint(u)) => match i64::try_from(*u) {
Ok(i) => Ok(ScalarValue::Timestamp(i)),
Err(_) => Err(error::InvalidScalarValue {
raw_value: self.clone(),
expected: "an integer".to_string(),
unexpected: "an integer larger than i64::max_value".to_string(),
datatype,
}),
},
(DataType::Timestamp, v) => Err(error::InvalidScalarValue {
raw_value: self.clone(),
expected: "an integer".to_string(),
unexpected: v.to_string(),
datatype,
}),
(DataType::Int, v) => Ok(ScalarValue::Int(v.to_i64().ok_or(
error::InvalidScalarValue {
raw_value: self.clone(),
expected: "an int".to_string(),
unexpected: v.to_string(),
datatype,
},
)?)),
(DataType::Uint, v) => Ok(ScalarValue::Uint(v.to_u64().ok_or(
error::InvalidScalarValue {
raw_value: self.clone(),
expected: "a uint".to_string(),
unexpected: v.to_string(),
datatype,
},
)?)),
(DataType::F64, v) => Ok(ScalarValue::F64(v.to_f64().ok_or(
error::InvalidScalarValue {
raw_value: self.clone(),
expected: "an f64".to_string(),
unexpected: v.to_string(),
datatype,
},
)?)),
(DataType::Undefined, _) => Ok(self.clone()),
}
}
/// Returns an `Option` containing a `DataType` if
/// `self` represents a numerical scalar value.
/// This is necessary because numerical values are not self-describing
/// (unlike strings, bytes, etc.).
pub(crate) fn as_numerical_datatype(&self) -> Option<DataType> {
match self {
ScalarValue::Counter(..) => Some(DataType::Counter),
ScalarValue::Timestamp(..) => Some(DataType::Timestamp),
ScalarValue::Int(..) => Some(DataType::Int),
ScalarValue::Uint(..) => Some(DataType::Uint),
ScalarValue::F64(..) => Some(DataType::F64),
_ => None,
}
}
/// If this value can be coerced to an i64, return the i64 value
pub fn to_i64(&self) -> Option<i64> {
match self {
ScalarValue::Int(n) => Some(*n),
ScalarValue::Uint(n) => Some(*n as i64),
ScalarValue::F64(n) => Some(*n as i64),
ScalarValue::Counter(n) => Some(*n),
ScalarValue::Timestamp(n) => Some(*n),
_ => None,
}
}
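/// If this value can be coerced to a u64, return the u64 value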
pub fn to_u64(&self) -> Option<u64> {
match self {
ScalarValue::Int(n) => Some(*n as u64),
ScalarValue::Uint(n) => Some(*n),
ScalarValue::F64(n) => Some(*n as u64),
ScalarValue::Counter(n) => Some(*n as u64),
ScalarValue::Timestamp(n) => Some(*n as u64),
_ => None,
}
}
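/// If this value can be coerced to an f64, return the f64 value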
pub fn to_f64(&self) -> Option<f64> {
match self {
ScalarValue::Int(n) => Some(*n as f64),
ScalarValue::Uint(n) => Some(*n as f64),
ScalarValue::F64(n) => Some(*n),
ScalarValue::Counter(n) => Some(*n as f64),
ScalarValue::Timestamp(n) => Some(*n as f64),
_ => None,
}
}
}
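
A short sketch of how these coercions compose (a hypothetical in-crate test, since `as_datatype`, `as_numerical_datatype`, and `DataType` are `pub(crate)`):

```rust
#[test]
fn scalar_value_coercions() {
    // Numeric variants are not self-describing, so they report a numerical
    // datatype and coerce into the other numeric representations.
    let n = ScalarValue::Uint(42);
    assert_eq!(n.as_numerical_datatype(), Some(DataType::Uint));
    assert_eq!(n.to_i64(), Some(42));
    assert!(matches!(
        n.as_datatype(DataType::Counter),
        Ok(ScalarValue::Counter(42))
    ));
    // Non-numeric values carry no numerical datatype and refuse coercion.
    let s = ScalarValue::Str("hi".into());
    assert_eq!(s.as_numerical_datatype(), None);
    assert_eq!(s.to_f64(), None);
}
```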


@@ -1,504 +0,0 @@
use std::{collections::HashMap, convert::TryInto, hash::Hash};
use serde::ser::{SerializeMap, SerializeSeq};
pub fn new_doc() -> automerge::Automerge {
automerge::Automerge::new_with_actor_id(automerge::ActorId::random())
}
pub fn new_doc_with_actor(actor: automerge::ActorId) -> automerge::Automerge {
automerge::Automerge::new_with_actor_id(actor)
}
/// Returns two actor IDs, the first considered to be ordered before the second
pub fn sorted_actors() -> (automerge::ActorId, automerge::ActorId) {
let a = automerge::ActorId::random();
let b = automerge::ActorId::random();
if a > b {
(b, a)
} else {
(a, b)
}
}
/// This macro makes it easy to make assertions about a document. It is called with two arguments:
/// the first is a reference to an `automerge::Automerge` and the second is an instance of
/// `RealizedObject<ExportableOpId>`.
///
/// What - I hear you ask - is a `RealizedObject`? It's a fully hydrated version of the contents of
/// an automerge document. You don't need to think about this too much though because you can
/// easily construct one with the `map!` and `list!` macros. Here's an example:
///
/// ## Constructing documents
///
/// ```rust
/// let mut doc = automerge::Automerge::new();
/// let todos = doc.set(automerge::ROOT, "todos", automerge::Value::map()).unwrap().unwrap();
/// let todo = doc.insert(todos, 0, automerge::Value::map()).unwrap();
/// let title = doc.set(todo, "title", "water plants").unwrap().unwrap();
///
/// assert_doc!(
/// &doc,
/// map!{
/// "todos" => {
/// todos => list![
/// { todo => map!{ "title" => { title => "water plants" } } }
/// ]
/// }
/// }
/// );
///
/// ```
///
/// This might look more complicated than you were expecting. Why are there OpIds (`todos`, `todo`,
/// `title`) in there? Well, the `RealizedObject` contains all the changes in the document tagged by
/// OpId. This makes it easy to test for conflicts:
///
/// ```rust
/// let mut doc1 = automerge::Automerge::new();
/// let mut doc2 = automerge::Automerge::new();
/// let op1 = doc1.set(automerge::ROOT, "field", "one").unwrap().unwrap();
/// let op2 = doc2.set(automerge::ROOT, "field", "two").unwrap().unwrap();
/// doc1.merge(&mut doc2);
/// assert_doc!(
/// &doc1,
/// map!{
/// "field" => {
/// op1 => "one",
/// op2.translate(&doc2) => "two"
/// }
/// }
/// );
/// ```
///
/// ## Translating OpIds
///
/// One thing you may have noticed in the example above is the `op2.translate(&doc2)` call. What is
/// that doing there? Well, the problem is that automerge OpIDs (in the current API) are specific
/// to a document. Using an opid from one document in a different document will not work. Therefore
/// this module defines an `OpIdExt` trait with a `translate` method on it. This method takes a
/// document and converts the opid into something which knows how to be compared with opids from
/// another document by using the document you pass to `translate`. Again, all you really need to
/// know is that when constructing a document for comparison you should call `translate(fromdoc)`
/// on opids which come from a document other than the one you pass to `assert_doc`.
#[macro_export]
macro_rules! assert_doc {
($doc: expr, $expected: expr) => {{
use $crate::helpers::{realize, ExportableOpId};
let realized = realize($doc);
let to_export: RealizedObject<ExportableOpId<'_>> = $expected.into();
let exported = to_export.export($doc);
if realized != exported {
let serde_right = serde_json::to_string_pretty(&realized).unwrap();
let serde_left = serde_json::to_string_pretty(&exported).unwrap();
panic!(
"documents didn't match\n expected\n{}\n got\n{}",
&serde_left, &serde_right
);
}
pretty_assertions::assert_eq!(realized, exported);
}};
}
/// Like `assert_doc` except that you can specify an object ID and property to select subsections
/// of the document.
#[macro_export]
macro_rules! assert_obj {
($doc: expr, $obj_id: expr, $prop: expr, $expected: expr) => {{
use $crate::helpers::{realize_prop, ExportableOpId};
let realized = realize_prop($doc, $obj_id, $prop);
let to_export: RealizedObject<ExportableOpId<'_>> = $expected.into();
let exported = to_export.export($doc);
if realized != exported {
let serde_right = serde_json::to_string_pretty(&realized).unwrap();
let serde_left = serde_json::to_string_pretty(&exported).unwrap();
panic!(
"documents didn't match\n expected\n{}\n got\n{}",
&serde_left, &serde_right
);
}
pretty_assertions::assert_eq!(realized, exported);
}};
}
/// Construct `RealizedObject::Map`. This macro takes a nested set of curly braces. The outer set is
/// the keys of the map, the inner set is the opid-tagged values:
///
/// ```
/// map!{
/// "key" => {
/// opid1 => "value1",
/// opid2 => "value2",
/// }
/// }
/// ```
///
/// The map above would represent a map with a conflict on the "key" property. The values can be
/// anything which implements `Into<RealizedObject<ExportableOpId<'_>>>`, including nested calls to
/// `map!` or `list!`.
#[macro_export]
macro_rules! map {
(@single $($x:tt)*) => (());
(@count $($rest:expr),*) => (<[()]>::len(&[$(map!(@single $rest)),*]));
(@inner { $($opid:expr => $value:expr,)+ }) => { map!(@inner { $($opid => $value),+ }) };
(@inner { $($opid:expr => $value:expr),* }) => {
{
use std::collections::HashMap;
let mut inner: HashMap<ExportableOpId<'_>, RealizedObject<ExportableOpId<'_>>> = HashMap::new();
$(
let _ = inner.insert($opid.into(), $value.into());
)*
inner
}
};
//(&inner $map:expr, $opid:expr => $value:expr, $($tail:tt),*) => {
//$map.insert($opid.into(), $value.into());
//}
($($key:expr => $inner:tt,)+) => { map!($($key => $inner),+) };
($($key:expr => $inner:tt),*) => {
{
use std::collections::HashMap;
use crate::helpers::ExportableOpId;
let _cap = map!(@count $($key),*);
let mut _map: HashMap<String, HashMap<ExportableOpId<'_>, RealizedObject<ExportableOpId<'_>>>> = ::std::collections::HashMap::with_capacity(_cap);
$(
let inner = map!(@inner $inner);
let _ = _map.insert($key.to_string(), inner);
)*
RealizedObject::Map(_map)
}
}
}
/// Construct `RealizedObject::Sequence`. This macro represents a sequence of opid-tagged values
///
/// ```
/// list![
/// {
/// opid1 => "value1",
/// opid2 => "value2",
/// }
/// ]
/// ```
///
/// The list above would represent a list with a conflict on index 0. The values can be
/// anything which implements `Into<RealizedObject<ExportableOpId<'_>>>`, including nested calls to
/// `map!` or `list!`.
#[macro_export]
macro_rules! list {
(@single $($x:tt)*) => (());
(@count $($rest:tt),*) => (<[()]>::len(&[$(list!(@single $rest)),*]));
(@inner { $($opid:expr => $value:expr,)+ }) => { list!(@inner { $($opid => $value),+ }) };
(@inner { $($opid:expr => $value:expr),* }) => {
{
use std::collections::HashMap;
let mut inner: HashMap<ExportableOpId<'_>, RealizedObject<ExportableOpId<'_>>> = HashMap::new();
$(
let _ = inner.insert($opid.into(), $value.into());
)*
inner
}
};
($($inner:tt,)+) => { list!($($inner),+) };
($($inner:tt),*) => {
{
use crate::helpers::ExportableOpId;
let _cap = list!(@count $($inner),*);
let mut _list: Vec<HashMap<ExportableOpId<'_>, RealizedObject<ExportableOpId<'_>>>> = Vec::new();
$(
//println!("{}", stringify!($inner));
let inner = list!(@inner $inner);
let _ = _list.push(inner);
)*
RealizedObject::Sequence(_list)
}
}
}
/// Translate an op ID produced by one document to an op ID which can be understood by
/// another
///
/// The current API of automerge exposes OpIds of the form (u64, usize) where the first component
/// is the counter of an actor's Lamport timestamp and the second component is the index into an
/// array of actor IDs stored by the document where the opid was generated. Obviously this is not
/// portable between documents as the index of the actor array is unlikely to match between two
/// documents. This function translates between the two representations.
///
/// At some point we will probably change the API to not be document-specific, but this function
/// allows us to write tests first.
pub fn translate_obj_id(
from: &automerge::Automerge,
to: &automerge::Automerge,
id: automerge::OpId,
) -> automerge::OpId {
let exported = from.export(id);
to.import(&exported).unwrap()
}
pub fn mk_counter(value: i64) -> automerge::ScalarValue {
automerge::ScalarValue::Counter(value)
}
#[derive(Eq, Hash, PartialEq, Debug)]
pub struct ExportedOpId(String);
impl std::fmt::Display for ExportedOpId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.0)
}
}
/// A `RealizedObject` is a representation of all the current values in a document - including
/// conflicts.
#[derive(PartialEq, Debug)]
pub enum RealizedObject<Oid: PartialEq + Eq + Hash> {
Map(HashMap<String, HashMap<Oid, RealizedObject<Oid>>>),
Sequence(Vec<HashMap<Oid, RealizedObject<Oid>>>),
Value(automerge::ScalarValue),
}
impl serde::Serialize for RealizedObject<ExportedOpId> {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
match self {
Self::Map(kvs) => {
let mut map_ser = serializer.serialize_map(Some(kvs.len()))?;
for (k, kvs) in kvs {
let kvs_serded = kvs
.iter()
.map(|(opid, value)| (opid.to_string(), value))
.collect::<HashMap<String, &RealizedObject<ExportedOpId>>>();
map_ser.serialize_entry(k, &kvs_serded)?;
}
map_ser.end()
}
Self::Sequence(elems) => {
let mut list_ser = serializer.serialize_seq(Some(elems.len()))?;
for elem in elems {
let kvs_serded = elem
.iter()
.map(|(opid, value)| (opid.to_string(), value))
.collect::<HashMap<String, &RealizedObject<ExportedOpId>>>();
list_ser.serialize_element(&kvs_serded)?;
}
list_ser.end()
}
Self::Value(v) => v.serialize(serializer),
}
}
}
pub fn realize(doc: &automerge::Automerge) -> RealizedObject<ExportedOpId> {
realize_obj(doc, automerge::ROOT, automerge::ObjType::Map)
}
pub fn realize_prop<P: Into<automerge::Prop>>(
doc: &automerge::Automerge,
obj_id: automerge::OpId,
prop: P,
) -> RealizedObject<ExportedOpId> {
let (val, obj_id) = doc.value(obj_id, prop).unwrap().unwrap();
match val {
automerge::Value::Object(obj_type) => realize_obj(doc, obj_id, obj_type),
automerge::Value::Scalar(v) => RealizedObject::Value(v),
}
}
pub fn realize_obj(
doc: &automerge::Automerge,
obj_id: automerge::OpId,
objtype: automerge::ObjType,
) -> RealizedObject<ExportedOpId> {
match objtype {
automerge::ObjType::Map | automerge::ObjType::Table => {
let mut result = HashMap::new();
for key in doc.keys(obj_id) {
result.insert(key.clone(), realize_values(doc, obj_id, key));
}
RealizedObject::Map(result)
}
automerge::ObjType::List | automerge::ObjType::Text => {
let length = doc.length(obj_id);
let mut result = Vec::with_capacity(length);
for i in 0..length {
result.push(realize_values(doc, obj_id, i));
}
RealizedObject::Sequence(result)
}
}
}
fn realize_values<K: Into<automerge::Prop>>(
doc: &automerge::Automerge,
obj_id: automerge::OpId,
key: K,
) -> HashMap<ExportedOpId, RealizedObject<ExportedOpId>> {
let mut values_by_opid = HashMap::new();
for (value, opid) in doc.values(obj_id, key).unwrap() {
let realized = match value {
automerge::Value::Object(objtype) => realize_obj(doc, opid, objtype),
automerge::Value::Scalar(v) => RealizedObject::Value(v),
};
let exported_opid = ExportedOpId(doc.export(opid));
values_by_opid.insert(exported_opid, realized);
}
values_by_opid
}
impl<'a> RealizedObject<ExportableOpId<'a>> {
pub fn export(self, doc: &automerge::Automerge) -> RealizedObject<ExportedOpId> {
match self {
Self::Map(kvs) => RealizedObject::Map(
kvs.into_iter()
.map(|(k, v)| {
(
k,
v.into_iter()
.map(|(k, v)| (k.export(doc), v.export(doc)))
.collect(),
)
})
.collect(),
),
Self::Sequence(values) => RealizedObject::Sequence(
values
.into_iter()
.map(|v| {
v.into_iter()
.map(|(k, v)| (k.export(doc), v.export(doc)))
.collect()
})
.collect(),
),
Self::Value(v) => RealizedObject::Value(v),
}
}
}
impl<'a, O: Into<ExportableOpId<'a>>, I: Into<RealizedObject<ExportableOpId<'a>>>>
From<HashMap<&str, HashMap<O, I>>> for RealizedObject<ExportableOpId<'a>>
{
fn from(values: HashMap<&str, HashMap<O, I>>) -> Self {
let intoed = values
.into_iter()
.map(|(k, v)| {
(
k.to_string(),
v.into_iter().map(|(k, v)| (k.into(), v.into())).collect(),
)
})
.collect();
RealizedObject::Map(intoed)
}
}
impl<'a, O: Into<ExportableOpId<'a>>, I: Into<RealizedObject<ExportableOpId<'a>>>>
From<Vec<HashMap<O, I>>> for RealizedObject<ExportableOpId<'a>>
{
fn from(values: Vec<HashMap<O, I>>) -> Self {
RealizedObject::Sequence(
values
.into_iter()
.map(|v| v.into_iter().map(|(k, v)| (k.into(), v.into())).collect())
.collect(),
)
}
}
impl From<bool> for RealizedObject<ExportableOpId<'_>> {
fn from(b: bool) -> Self {
RealizedObject::Value(b.into())
}
}
impl From<usize> for RealizedObject<ExportableOpId<'_>> {
fn from(u: usize) -> Self {
let v = u.try_into().unwrap();
RealizedObject::Value(automerge::ScalarValue::Int(v))
}
}
impl From<automerge::ScalarValue> for RealizedObject<ExportableOpId<'_>> {
fn from(s: automerge::ScalarValue) -> Self {
RealizedObject::Value(s)
}
}
impl From<&str> for RealizedObject<ExportableOpId<'_>> {
fn from(s: &str) -> Self {
RealizedObject::Value(automerge::ScalarValue::Str(s.into()))
}
}
#[derive(Eq, PartialEq, Hash)]
pub enum ExportableOpId<'a> {
Native(automerge::OpId),
Translate(Translate<'a>),
}
impl<'a> ExportableOpId<'a> {
fn export(self, doc: &automerge::Automerge) -> ExportedOpId {
let oid = match self {
Self::Native(oid) => oid,
Self::Translate(Translate { from, opid }) => translate_obj_id(from, doc, opid),
};
ExportedOpId(doc.export(oid))
}
}
pub struct Translate<'a> {
from: &'a automerge::Automerge,
opid: automerge::OpId,
}
impl<'a> PartialEq for Translate<'a> {
fn eq(&self, other: &Self) -> bool {
self.from.maybe_get_actor().unwrap() == other.from.maybe_get_actor().unwrap()
&& self.opid == other.opid
}
}
impl<'a> Eq for Translate<'a> {}
impl<'a> Hash for Translate<'a> {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.from.maybe_get_actor().unwrap().hash(state);
self.opid.hash(state);
}
}
pub trait OpIdExt {
fn native(self) -> ExportableOpId<'static>;
fn translate(self, doc: &automerge::Automerge) -> ExportableOpId<'_>;
}
impl OpIdExt for automerge::OpId {
/// Use this opid directly when exporting
fn native(self) -> ExportableOpId<'static> {
ExportableOpId::Native(self)
}
/// Translate this OpID from `doc` when exporting
fn translate(self, doc: &automerge::Automerge) -> ExportableOpId<'_> {
ExportableOpId::Translate(Translate {
from: doc,
opid: self,
})
}
}
impl From<automerge::OpId> for ExportableOpId<'_> {
fn from(oid: automerge::OpId) -> Self {
ExportableOpId::Native(oid)
}
}
/// Pretty print the contents of a document
#[allow(dead_code)]
pub fn pretty_print(doc: &automerge::Automerge) {
println!("{}", serde_json::to_string_pretty(&realize(doc)).unwrap())
}


@@ -1,943 +0,0 @@
use automerge::Automerge;
mod helpers;
#[allow(unused_imports)]
use helpers::{
mk_counter, new_doc, new_doc_with_actor, pretty_print, realize, realize_obj, sorted_actors,
translate_obj_id, OpIdExt, RealizedObject,
};
#[test]
fn no_conflict_on_repeated_assignment() {
let mut doc = Automerge::new();
doc.set(automerge::ROOT, "foo", 1).unwrap();
let op = doc.set(automerge::ROOT, "foo", 2).unwrap().unwrap();
assert_doc!(
&doc,
map! {
"foo" => { op => 2},
}
);
}
#[test]
fn no_change_on_repeated_map_set() {
let mut doc = new_doc();
doc.set(automerge::ROOT, "foo", 1).unwrap();
assert!(doc.set(automerge::ROOT, "foo", 1).unwrap().is_none());
}
#[test]
fn no_change_on_repeated_list_set() {
let mut doc = new_doc();
let list_id = doc
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
doc.insert(list_id, 0, 1).unwrap();
doc.set(list_id, 0, 1).unwrap();
assert!(doc.set(list_id, 0, 1).unwrap().is_none());
}
#[test]
fn no_change_on_list_insert_followed_by_set_of_same_value() {
let mut doc = new_doc();
let list_id = doc
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
doc.insert(list_id, 0, 1).unwrap();
assert!(doc.set(list_id, 0, 1).unwrap().is_none());
}
#[test]
fn repeated_map_assignment_which_resolves_conflict_not_ignored() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
doc1.set(automerge::ROOT, "field", 123).unwrap();
doc2.merge(&mut doc1);
doc2.set(automerge::ROOT, "field", 456).unwrap();
doc1.set(automerge::ROOT, "field", 789).unwrap();
doc1.merge(&mut doc2);
assert_eq!(doc1.values(automerge::ROOT, "field").unwrap().len(), 2);
let op = doc1.set(automerge::ROOT, "field", 123).unwrap().unwrap();
assert_doc!(
&doc1,
map! {
"field" => {
op => 123
}
}
);
}
#[test]
fn repeated_list_assignment_which_resolves_conflict_not_ignored() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let list_id = doc1
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
doc1.insert(list_id, 0, 123).unwrap();
doc2.merge(&mut doc1);
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
doc2.set(list_id_in_doc2, 0, 456).unwrap().unwrap();
doc1.merge(&mut doc2);
let doc1_op = doc1.set(list_id, 0, 789).unwrap().unwrap();
assert_doc!(
&doc1,
map! {
"list" => {
list_id => list![
{ doc1_op => 789 },
]
}
}
);
}
#[test]
fn list_deletion() {
let mut doc = new_doc();
let list_id = doc
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
let op1 = doc.insert(list_id, 0, 123).unwrap();
doc.insert(list_id, 1, 456).unwrap();
let op3 = doc.insert(list_id, 2, 789).unwrap();
doc.del(list_id, 1).unwrap();
assert_doc!(
&doc,
map! {
"list" => {list_id => list![
{ op1 => 123 },
{ op3 => 789 },
]}
}
)
}
#[test]
fn merge_concurrent_map_prop_updates() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let op1 = doc1.set(automerge::ROOT, "foo", "bar").unwrap().unwrap();
let hello = doc2
.set(automerge::ROOT, "hello", "world")
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
assert_eq!(
doc1.value(automerge::ROOT, "foo").unwrap().unwrap().0,
"bar".into()
);
assert_doc!(
&doc1,
map! {
"foo" => { op1 => "bar" },
"hello" => { hello.translate(&doc2) => "world" },
}
);
doc2.merge(&mut doc1);
assert_doc!(
&doc2,
map! {
"foo" => { op1.translate(&doc1) => "bar" },
"hello" => { hello => "world" },
}
);
assert_eq!(realize(&doc1), realize(&doc2));
}
#[test]
fn add_concurrent_increments_of_same_property() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let counter_id = doc1
.set(automerge::ROOT, "counter", mk_counter(0))
.unwrap()
.unwrap();
doc2.merge(&mut doc1);
doc1.inc(automerge::ROOT, "counter", 1).unwrap();
doc2.inc(automerge::ROOT, "counter", 2).unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"counter" => {
counter_id => mk_counter(3)
}
}
);
}
#[test]
fn add_increments_only_to_preceeded_values() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
// create a counter in doc1
let doc1_counter_id = doc1
.set(automerge::ROOT, "counter", mk_counter(0))
.unwrap()
.unwrap();
doc1.inc(automerge::ROOT, "counter", 1).unwrap();
// create a counter in doc2
let doc2_counter_id = doc2
.set(automerge::ROOT, "counter", mk_counter(0))
.unwrap()
.unwrap();
doc2.inc(automerge::ROOT, "counter", 3).unwrap();
// The two values should be conflicting rather than added
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"counter" => {
doc1_counter_id.native() => mk_counter(1),
doc2_counter_id.translate(&doc2) => mk_counter(3),
}
}
);
}
#[test]
fn concurrent_updates_of_same_field() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let set_one_opid = doc1.set(automerge::ROOT, "field", "one").unwrap().unwrap();
let set_two_opid = doc2.set(automerge::ROOT, "field", "two").unwrap().unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"field" => {
set_one_opid.native() => "one",
set_two_opid.translate(&doc2) => "two",
}
}
);
}
#[test]
fn concurrent_updates_of_same_list_element() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let list_id = doc1
.set(automerge::ROOT, "birds", automerge::Value::list())
.unwrap()
.unwrap();
doc1.insert(list_id, 0, "finch").unwrap();
doc2.merge(&mut doc1);
let set_one_op = doc1.set(list_id, 0, "greenfinch").unwrap().unwrap();
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
let set_op_two = doc2.set(list_id_in_doc2, 0, "goldfinch").unwrap().unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"birds" => {
list_id => list![{
set_one_op.native() => "greenfinch",
set_op_two.translate(&doc2) => "goldfinch",
}]
}
}
);
}
#[test]
fn assignment_conflicts_of_different_types() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let mut doc3 = new_doc();
let op_one = doc1
.set(automerge::ROOT, "field", "string")
.unwrap()
.unwrap();
let op_two = doc2
.set(automerge::ROOT, "field", automerge::Value::list())
.unwrap()
.unwrap();
let op_three = doc3
.set(automerge::ROOT, "field", automerge::Value::map())
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc3);
assert_doc!(
&doc1,
map! {
"field" => {
op_one.native() => "string",
op_two.translate(&doc2) => list!{},
op_three.translate(&doc3) => map!{},
}
}
);
}
#[test]
fn changes_within_conflicting_map_field() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let op_one = doc1
.set(automerge::ROOT, "field", "string")
.unwrap()
.unwrap();
let map_id = doc2
.set(automerge::ROOT, "field", automerge::Value::map())
.unwrap()
.unwrap();
let set_in_doc2 = doc2.set(map_id, "innerKey", 42).unwrap().unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"field" => {
op_one.native() => "string",
map_id.translate(&doc2) => map!{
"innerKey" => {
set_in_doc2.translate(&doc2) => 42,
}
}
}
}
);
}
#[test]
fn changes_within_conflicting_list_element() {
let (actor1, actor2) = sorted_actors();
let mut doc1 = new_doc_with_actor(actor1);
let mut doc2 = new_doc_with_actor(actor2);
let list_id = doc1
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
doc1.insert(list_id, 0, "hello").unwrap();
doc2.merge(&mut doc1);
let map_in_doc1 = doc1
.set(list_id, 0, automerge::Value::map())
.unwrap()
.unwrap();
let set_map1 = doc1.set(map_in_doc1, "map1", true).unwrap().unwrap();
let set_key1 = doc1.set(map_in_doc1, "key", 1).unwrap().unwrap();
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
let map_in_doc2 = doc2
.set(list_id_in_doc2, 0, automerge::Value::map())
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
let set_map2 = doc2.set(map_in_doc2, "map2", true).unwrap().unwrap();
let set_key2 = doc2.set(map_in_doc2, "key", 2).unwrap().unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"list" => {
list_id => list![
{
map_in_doc2.translate(&doc2) => map!{
"map2" => { set_map2.translate(&doc2) => true },
"key" => { set_key2.translate(&doc2) => 2 },
},
map_in_doc1.native() => map!{
"key" => { set_key1.native() => 1 },
"map1" => { set_map1.native() => true },
}
}
]
}
}
);
}
#[test]
fn concurrently_assigned_nested_maps_should_not_merge() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let doc1_map_id = doc1
.set(automerge::ROOT, "config", automerge::Value::map())
.unwrap()
.unwrap();
let doc1_field = doc1
.set(doc1_map_id, "background", "blue")
.unwrap()
.unwrap();
let doc2_map_id = doc2
.set(automerge::ROOT, "config", automerge::Value::map())
.unwrap()
.unwrap();
let doc2_field = doc2
.set(doc2_map_id, "logo_url", "logo.png")
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"config" => {
doc1_map_id.native() => map!{
"background" => {doc1_field.native() => "blue"}
},
doc2_map_id.translate(&doc2) => map!{
"logo_url" => {doc2_field.translate(&doc2) => "logo.png"}
}
}
}
);
}
#[test]
fn concurrent_insertions_at_different_list_positions() {
let (actor1, actor2) = sorted_actors();
let mut doc1 = new_doc_with_actor(actor1);
let mut doc2 = new_doc_with_actor(actor2);
assert!(doc1.maybe_get_actor().unwrap() < doc2.maybe_get_actor().unwrap());
let list_id = doc1
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
let one = doc1.insert(list_id, 0, "one").unwrap();
let three = doc1.insert(list_id, 1, "three").unwrap();
doc2.merge(&mut doc1);
let two = doc1.splice(list_id, 1, 0, vec!["two".into()]).unwrap()[0];
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
let four = doc2.insert(list_id_in_doc2, 2, "four").unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"list" => {
list_id => list![
{one.native() => "one"},
{two.native() => "two"},
{three.native() => "three"},
{four.translate(&doc2) => "four"},
]
}
}
);
}
#[test]
fn concurrent_insertions_at_same_list_position() {
let (actor1, actor2) = sorted_actors();
let mut doc1 = new_doc_with_actor(actor1);
let mut doc2 = new_doc_with_actor(actor2);
assert!(doc1.maybe_get_actor().unwrap() < doc2.maybe_get_actor().unwrap());
let list_id = doc1
.set(automerge::ROOT, "birds", automerge::Value::list())
.unwrap()
.unwrap();
let parakeet = doc1.insert(list_id, 0, "parakeet").unwrap();
doc2.merge(&mut doc1);
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
let starling = doc1.insert(list_id, 1, "starling").unwrap();
let chaffinch = doc2.insert(list_id_in_doc2, 1, "chaffinch").unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"birds" => {
list_id => list![
{
parakeet.native() => "parakeet",
},
{
starling.native() => "starling",
},
{
chaffinch.translate(&doc2) => "chaffinch",
},
]
},
}
);
}
#[test]
fn concurrent_assignment_and_deletion_of_a_map_entry() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
doc1.set(automerge::ROOT, "bestBird", "robin").unwrap();
doc2.merge(&mut doc1);
doc1.del(automerge::ROOT, "bestBird").unwrap();
let set_two = doc2
.set(automerge::ROOT, "bestBird", "magpie")
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"bestBird" => {
set_two.translate(&doc2) => "magpie",
}
}
);
}
#[test]
fn concurrent_assignment_and_deletion_of_list_entry() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let list_id = doc1
.set(automerge::ROOT, "birds", automerge::Value::list())
.unwrap()
.unwrap();
let blackbird = doc1.insert(list_id, 0, "blackbird").unwrap();
doc1.insert(list_id, 1, "thrush").unwrap();
let goldfinch = doc1.insert(list_id, 2, "goldfinch").unwrap();
doc2.merge(&mut doc1);
let starling = doc1.set(list_id, 1, "starling").unwrap().unwrap();
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
doc2.del(list_id_in_doc2, 1).unwrap();
assert_doc!(
&doc2,
map! {
"birds" => {list_id.translate(&doc1) => list![
{ blackbird.translate(&doc1) => "blackbird"},
{ goldfinch.translate(&doc1) => "goldfinch"},
]}
}
);
assert_doc!(
&doc1,
map! {
"birds" => {list_id => list![
{ blackbird => "blackbird" },
{ starling => "starling" },
{ goldfinch => "goldfinch" },
]}
}
);
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"birds" => {list_id => list![
{ blackbird => "blackbird" },
{ starling => "starling" },
{ goldfinch => "goldfinch" },
]}
}
);
}
#[test]
fn insertion_after_a_deleted_list_element() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let list_id = doc1
.set(automerge::ROOT, "birds", automerge::Value::list())
.unwrap()
.unwrap();
let blackbird = doc1.insert(list_id, 0, "blackbird").unwrap();
doc1.insert(list_id, 1, "thrush").unwrap();
doc1.insert(list_id, 2, "goldfinch").unwrap();
doc2.merge(&mut doc1);
doc1.splice(list_id, 1, 2, Vec::new()).unwrap();
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
let starling = doc2
.splice(list_id_in_doc2, 2, 0, vec!["starling".into()])
.unwrap()[0];
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"birds" => {list_id => list![
{ blackbird.native() => "blackbird" },
{ starling.translate(&doc2) => "starling" }
]}
}
);
doc2.merge(&mut doc1);
assert_doc!(
&doc2,
map! {
"birds" => {list_id.translate(&doc1) => list![
{ blackbird.translate(&doc1) => "blackbird" },
{ starling.native() => "starling" }
]}
}
);
}
#[test]
fn concurrent_deletion_of_same_list_element() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let list_id = doc1
.set(automerge::ROOT, "birds", automerge::Value::list())
.unwrap()
.unwrap();
let albatross = doc1.insert(list_id, 0, "albatross").unwrap();
doc1.insert(list_id, 1, "buzzard").unwrap();
let cormorant = doc1.insert(list_id, 2, "cormorant").unwrap();
doc2.merge(&mut doc1);
doc1.del(list_id, 1).unwrap();
let list_id_in_doc2 = translate_obj_id(&doc1, &doc2, list_id);
doc2.del(list_id_in_doc2, 1).unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"birds" => {list_id => list![
{ albatross => "albatross" },
{ cormorant => "cormorant" }
]}
}
);
doc2.merge(&mut doc1);
assert_doc!(
&doc2,
map! {
"birds" => {list_id.translate(&doc1) => list![
{ albatross.translate(&doc1) => "albatross" },
{ cormorant.translate(&doc1) => "cormorant" }
]}
}
);
}
#[test]
fn concurrent_updates_at_different_levels() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let animals = doc1
.set(automerge::ROOT, "animals", automerge::Value::map())
.unwrap()
.unwrap();
let birds = doc1
.set(animals, "birds", automerge::Value::map())
.unwrap()
.unwrap();
doc1.set(birds, "pink", "flamingo").unwrap().unwrap();
doc1.set(birds, "black", "starling").unwrap().unwrap();
let mammals = doc1
.set(animals, "mammals", automerge::Value::list())
.unwrap()
.unwrap();
let badger = doc1.insert(mammals, 0, "badger").unwrap();
doc2.merge(&mut doc1);
doc1.set(birds, "brown", "sparrow").unwrap().unwrap();
let animals_in_doc2 = translate_obj_id(&doc1, &doc2, animals);
doc2.del(animals_in_doc2, "birds").unwrap();
doc1.merge(&mut doc2);
assert_obj!(
&doc1,
automerge::ROOT,
"animals",
map! {
"mammals" => {
mammals => list![{ badger => "badger" }],
}
}
);
assert_obj!(
&doc2,
automerge::ROOT,
"animals",
map! {
"mammals" => {
mammals.translate(&doc1) => list![{ badger.translate(&doc1) => "badger" }],
}
}
);
}
#[test]
fn concurrent_updates_of_concurrently_deleted_objects() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let birds = doc1
.set(automerge::ROOT, "birds", automerge::Value::map())
.unwrap()
.unwrap();
let blackbird = doc1
.set(birds, "blackbird", automerge::Value::map())
.unwrap()
.unwrap();
doc1.set(blackbird, "feathers", "black").unwrap().unwrap();
doc2.merge(&mut doc1);
doc1.del(birds, "blackbird").unwrap();
translate_obj_id(&doc1, &doc2, blackbird);
doc2.set(blackbird, "beak", "orange").unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"birds" => {
birds => map!{},
}
}
);
}
#[test]
fn does_not_interleave_sequence_insertions_at_same_position() {
let (actor1, actor2) = sorted_actors();
let mut doc1 = new_doc_with_actor(actor1);
let mut doc2 = new_doc_with_actor(actor2);
let wisdom = doc1
.set(automerge::ROOT, "wisdom", automerge::Value::list())
.unwrap()
.unwrap();
doc2.merge(&mut doc1);
let doc1elems = doc1
.splice(
wisdom,
0,
0,
vec![
"to".into(),
"be".into(),
"is".into(),
"to".into(),
"do".into(),
],
)
.unwrap();
let wisdom_in_doc2 = translate_obj_id(&doc1, &doc2, wisdom);
let doc2elems = doc2
.splice(
wisdom_in_doc2,
0,
0,
vec![
"to".into(),
"do".into(),
"is".into(),
"to".into(),
"be".into(),
],
)
.unwrap();
doc1.merge(&mut doc2);
assert_doc!(
&doc1,
map! {
"wisdom" => {wisdom => list![
{doc1elems[0].native() => "to"},
{doc1elems[1].native() => "be"},
{doc1elems[2].native() => "is"},
{doc1elems[3].native() => "to"},
{doc1elems[4].native() => "do"},
{doc2elems[0].translate(&doc2) => "to"},
{doc2elems[1].translate(&doc2) => "do"},
{doc2elems[2].translate(&doc2) => "is"},
{doc2elems[3].translate(&doc2) => "to"},
{doc2elems[4].translate(&doc2) => "be"},
]}
}
);
}
#[test]
fn mutliple_insertions_at_same_list_position_with_insertion_by_greater_actor_id() {
let (actor1, actor2) = sorted_actors();
assert!(actor2 > actor1);
let mut doc1 = new_doc_with_actor(actor1);
let mut doc2 = new_doc_with_actor(actor2);
let list = doc1
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
let two = doc1.insert(list, 0, "two").unwrap();
doc2.merge(&mut doc1);
let list_in_doc2 = translate_obj_id(&doc1, &doc2, list);
let one = doc2.insert(list_in_doc2, 0, "one").unwrap();
assert_doc!(
&doc2,
map! {
"list" => { list.translate(&doc1) => list![
{ one.native() => "one" },
{ two.translate(&doc1) => "two" },
]}
}
);
}
#[test]
fn mutliple_insertions_at_same_list_position_with_insertion_by_lesser_actor_id() {
let (actor2, actor1) = sorted_actors();
assert!(actor2 < actor1);
let mut doc1 = new_doc_with_actor(actor1);
let mut doc2 = new_doc_with_actor(actor2);
let list = doc1
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
let two = doc1.insert(list, 0, "two").unwrap();
doc2.merge(&mut doc1);
let list_in_doc2 = translate_obj_id(&doc1, &doc2, list);
let one = doc2.insert(list_in_doc2, 0, "one").unwrap();
assert_doc!(
&doc2,
map! {
"list" => { list.translate(&doc1) => list![
{ one.native() => "one" },
{ two.translate(&doc1) => "two" },
]}
}
);
}
#[test]
fn insertion_consistent_with_causality() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
let list = doc1
.set(automerge::ROOT, "list", automerge::Value::list())
.unwrap()
.unwrap();
let four = doc1.insert(list, 0, "four").unwrap();
doc2.merge(&mut doc1);
let list_in_doc2 = translate_obj_id(&doc1, &doc2, list);
let three = doc2.insert(list_in_doc2, 0, "three").unwrap();
doc1.merge(&mut doc2);
let two = doc1.insert(list, 0, "two").unwrap();
doc2.merge(&mut doc1);
let one = doc2.insert(list_in_doc2, 0, "one").unwrap();
assert_doc!(
&doc2,
map! {
"list" => {list.translate(&doc1) => list![
{one.native() => "one"},
{two.translate(&doc1) => "two"},
{three.native() => "three" },
{four.translate(&doc1) => "four"},
]}
}
);
}
#[test]
fn save_and_restore_empty() {
let mut doc = new_doc();
let loaded = Automerge::load(&doc.save().unwrap()).unwrap();
assert_doc!(&loaded, map! {});
}
#[test]
fn save_restore_complex() {
let mut doc1 = new_doc();
let todos = doc1
.set(automerge::ROOT, "todos", automerge::Value::list())
.unwrap()
.unwrap();
let first_todo = doc1.insert(todos, 0, automerge::Value::map()).unwrap();
doc1.set(first_todo, "title", "water plants")
.unwrap()
.unwrap();
let first_done = doc1.set(first_todo, "done", false).unwrap().unwrap();
let mut doc2 = new_doc();
doc2.merge(&mut doc1);
let first_todo_in_doc2 = translate_obj_id(&doc1, &doc2, first_todo);
let weed_title = doc2
.set(first_todo_in_doc2, "title", "weed plants")
.unwrap()
.unwrap();
let kill_title = doc1
.set(first_todo, "title", "kill plants")
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
let reloaded = Automerge::load(&doc1.save().unwrap()).unwrap();
assert_doc!(
&reloaded,
map! {
"todos" => {todos.translate(&doc1) => list![
{first_todo.translate(&doc1) => map!{
"title" => {
weed_title.translate(&doc2) => "weed plants",
kill_title.translate(&doc1) => "kill plants",
},
"done" => {first_done.translate(&doc1) => false},
}}
]}
}
);
}


@@ -1,52 +0,0 @@
Try the different editing traces on different automerge implementations
### Automerge Experiment - pure rust
```code
# cargo run --release
```
#### Benchmarks
There are some criterion benchmarks in the `benches` folder which can be run with `cargo bench` or `cargo criterion`.
For flamegraphing, `cargo flamegraph --bench main -- --bench "save" # or "load" or "replay" or nothing` can be useful.
### Automerge Experiment - wasm api
```code
# node automerge-wasm.js
```
### Automerge Experiment - JS wrapper
```code
# node automerge-js.js
```
### Automerge 1.0 pure javascript - new fast backend
This assumes automerge has been checked out in a directory alongside this repo
```code
# node automerge-1.0.js
```
### Automerge 1.0 with rust backend
This assumes automerge has been checked out in a directory alongside this repo
```code
# node automerge-rs.js
```
### Baseline Test. Javascript Array with no CRDT info
```code
# node baseline.js
```


@@ -1,23 +0,0 @@
// Apply the paper editing trace to an Automerge.Text object, one char at a time
const { edits, finalText } = require('./editing-trace')
const Automerge = require('../automerge-js')
const start = new Date()
let state = Automerge.from({text: new Automerge.Text()})
state = Automerge.change(state, doc => {
for (let i = 0; i < edits.length; i++) {
if (i % 1000 === 0) {
console.log(`Processed ${i} edits in ${new Date() - start} ms`)
}
if (edits[i][1] > 0) doc.text.deleteAt(edits[i][0], edits[i][1])
if (edits[i].length > 2) doc.text.insertAt(edits[i][0], ...edits[i].slice(2))
}
})
let _ = Automerge.save(state)
console.log(`Done in ${new Date() - start} ms`)
if (state.text.join('') !== finalText) {
throw new RangeError('ERROR: final text did not match expectation')
}


@@ -1,31 +0,0 @@
// this assumes that the automerge-rs folder is checked out alongside this repo
// and someone has run
// # cd automerge-rs/automerge-backend-wasm
// # yarn release
const { edits, finalText } = require('./editing-trace')
const Automerge = require('../../automerge')
const path = require('path')
const wasmBackend = require(path.resolve("../../automerge-rs/automerge-backend-wasm"))
Automerge.setDefaultBackend(wasmBackend)
const start = new Date()
let state = Automerge.from({text: new Automerge.Text()})
state = Automerge.change(state, doc => {
for (let i = 0; i < edits.length; i++) {
if (i % 1000 === 0) {
console.log(`Processed ${i} edits in ${new Date() - start} ms`)
}
if (edits[i][1] > 0) doc.text.deleteAt(edits[i][0], edits[i][1])
if (edits[i].length > 2) doc.text.insertAt(edits[i][0], ...edits[i].slice(2))
}
})
console.log(`Done in ${new Date() - start} ms`)
if (state.text.join('') !== finalText) {
throw new RangeError('ERROR: final text did not match expectation')
}


@@ -1,30 +0,0 @@
// make sure to
// # cd ../automerge-wasm
// # yarn release
// # yarn opt
const { edits, finalText } = require('./editing-trace')
const Automerge = require('../automerge-wasm')
const start = new Date()
let doc = Automerge.init();
let text = doc.set("_root", "text", Automerge.TEXT)
for (let i = 0; i < edits.length; i++) {
let edit = edits[i]
if (i % 1000 === 0) {
console.log(`Processed ${i} edits in ${new Date() - start} ms`)
}
doc.splice(text, ...edit)
}
let _ = doc.save()
console.log(`Done in ${new Date() - start} ms`)
if (doc.text(text) !== finalText) {
throw new RangeError('ERROR: final text did not match expectation')
}


@@ -1,71 +0,0 @@
use automerge::{Automerge, Value, ROOT};
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use std::fs;
fn replay_trace(commands: Vec<(usize, usize, Vec<Value>)>) -> Automerge {
let mut doc = Automerge::new();
let text = doc.set(&ROOT, "text", Value::text()).unwrap().unwrap();
for (pos, del, vals) in commands {
doc.splice(&text, pos, del, vals).unwrap();
}
doc.commit(None, None);
doc
}
fn save_trace(mut doc: Automerge) {
doc.save().unwrap();
}
fn load_trace(bytes: &[u8]) {
Automerge::load(bytes).unwrap();
}
fn bench(c: &mut Criterion) {
let contents = fs::read_to_string("edits.json").expect("cannot read edits file");
let edits = json::parse(&contents).expect("can't parse edits");
let mut commands = vec![];
for i in 0..edits.len() {
let pos: usize = edits[i][0].as_usize().unwrap();
let del: usize = edits[i][1].as_usize().unwrap();
let mut vals = vec![];
for j in 2..edits[i].len() {
let v = edits[i][j].as_str().unwrap();
vals.push(Value::str(v));
}
commands.push((pos, del, vals));
}
let mut group = c.benchmark_group("edit trace");
group.throughput(Throughput::Elements(commands.len() as u64));
group.bench_with_input(
BenchmarkId::new("replay", commands.len()),
&commands,
|b, commands| {
b.iter_batched(
|| commands.clone(),
replay_trace,
criterion::BatchSize::LargeInput,
)
},
);
let commands_len = commands.len();
let mut doc = replay_trace(commands);
group.bench_with_input(BenchmarkId::new("save", commands_len), &doc, |b, doc| {
b.iter_batched(|| doc.clone(), save_trace, criterion::BatchSize::LargeInput)
});
let bytes = doc.save().unwrap();
group.bench_with_input(
BenchmarkId::new("load", commands_len),
&bytes,
|b, bytes| b.iter(|| load_trace(bytes)),
);
group.finish();
}
criterion_group!(benches, bench);
criterion_main!(benches);


@@ -1,32 +0,0 @@
use automerge::{Automerge, AutomergeError, Value, ROOT};
use std::fs;
use std::time::Instant;
fn main() -> Result<(), AutomergeError> {
let contents = fs::read_to_string("edits.json").expect("cannot read edits file");
let edits = json::parse(&contents).expect("can't parse edits");
let mut commands = vec![];
for i in 0..edits.len() {
let pos: usize = edits[i][0].as_usize().unwrap();
let del: usize = edits[i][1].as_usize().unwrap();
let mut vals = vec![];
for j in 2..edits[i].len() {
let v = edits[i][j].as_str().unwrap();
vals.push(Value::str(v));
}
commands.push((pos, del, vals));
}
let mut doc = Automerge::new();
let now = Instant::now();
let text = doc.set( &ROOT, "text", Value::text()).unwrap().unwrap();
for (i, (pos, del, vals)) in commands.into_iter().enumerate() {
if i % 1000 == 0 {
println!("Processed {} edits in {} ms", i, now.elapsed().as_millis());
}
doc.splice(&text, pos, del, vals)?;
}
let _ = doc.save();
println!("Done in {} ms", now.elapsed().as_millis());
Ok(())
}

flake.lock (generated; 33 lines changed)

@@ -2,11 +2,11 @@
"nodes": {
"flake-utils": {
"locked": {
"lastModified": 1619345332,
"narHash": "sha256-qHnQkEp1uklKTpx3MvKtY6xzgcqXDsz5nLilbbuL+3A=",
"lastModified": 1667395993,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "2ebf2558e5bf978c7fb8ea927dfaed8fefab2e28",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"type": "github"
},
"original": {
@@ -17,11 +17,11 @@
},
"flake-utils_2": {
"locked": {
"lastModified": 1614513358,
"narHash": "sha256-LakhOx3S1dRjnh0b5Dg3mbZyH0ToC9I8Y2wKSkBaTzU=",
"lastModified": 1659877975,
"narHash": "sha256-zllb8aq3YO3h8B/U0/J1WBgAL8EX5yWf5pMj3G0NAmc=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5466c5bbece17adaab2d82fae80b46e807611bf3",
"rev": "c0e246b9b83f637f4681389ecabcb2681b4f3af0",
"type": "github"
},
"original": {
@@ -32,11 +32,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1620340338,
"narHash": "sha256-Op/4K0+Z9Sp5jtFH0s/zMM4H7VFZxrekcAmjQ6JpQ4w=",
"lastModified": 1669542132,
"narHash": "sha256-DRlg++NJAwPh8io3ExBJdNW7Djs3plVI5jgYQ+iXAZQ=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "63586475587d7e0e078291ad4b49b6f6a6885100",
"rev": "a115bb9bd56831941be3776c8a94005867f316a7",
"type": "github"
},
"original": {
@@ -48,15 +48,16 @@
},
"nixpkgs_2": {
"locked": {
"lastModified": 1617325113,
"narHash": "sha256-GksR0nvGxfZ79T91UUtWjjccxazv6Yh/MvEJ82v1Xmw=",
"owner": "nixos",
"lastModified": 1665296151,
"narHash": "sha256-uOB0oxqxN9K7XGF1hcnY+PQnlQJ+3bP2vCn/+Ru/bbc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "54c1e44240d8a527a8f4892608c4bce5440c3ecb",
"rev": "14ccaaedd95a488dd7ae142757884d8e125b3363",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
@@ -74,11 +75,11 @@
"nixpkgs": "nixpkgs_2"
},
"locked": {
"lastModified": 1620355527,
"narHash": "sha256-mUTnUODiAtxH83gbv7uuvCbqZ/BNkYYk/wa3MkwrskE=",
"lastModified": 1669775522,
"narHash": "sha256-6xxGArBqssX38DdHpDoPcPvB/e79uXyQBwpBcaO/BwY=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "d8efe70dc561c4bea0b7bf440d36ce98c497e054",
"rev": "3158e47f6b85a288d12948aeb9a048e0ed4434d6",
"type": "github"
},
"original": {

flake.nix (104 lines changed)

@@ -3,57 +3,67 @@
inputs = {
nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
flake-utils = {
url = "github:numtide/flake-utils";
inputs.nixpkgs.follows = "nixpkgs";
};
flake-utils.url = "github:numtide/flake-utils";
rust-overlay.url = "github:oxalica/rust-overlay";
};
outputs = { self, nixpkgs, flake-utils, rust-overlay }:
outputs = {
self,
nixpkgs,
flake-utils,
rust-overlay,
}:
flake-utils.lib.eachDefaultSystem
(system:
let
pkgs = import nixpkgs {
overlays = [ rust-overlay.overlay ];
inherit system;
};
lib = pkgs.lib;
rust = pkgs.rust-bin.stable.latest.rust;
cargoNix = pkgs.callPackage ./Cargo.nix {
inherit pkgs;
release = true;
};
debugCargoNix = pkgs.callPackage ./Cargo.nix {
inherit pkgs;
release = false;
};
in
{
devShell = pkgs.mkShell {
buildInputs = with pkgs;
[
(rust.override {
extensions = [ "rust-src" ];
targets = [ "wasm32-unknown-unknown" ];
})
cargo-edit
cargo-watch
cargo-criterion
cargo-fuzz
cargo-flamegraph
crate2nix
wasm-pack
pkgconfig
openssl
gnuplot
(system: let
pkgs = import nixpkgs {
overlays = [rust-overlay.overlays.default];
inherit system;
};
rust = pkgs.rust-bin.stable.latest.default;
in {
formatter = pkgs.alejandra;
nodejs
yarn
packages = {
deadnix = pkgs.runCommand "deadnix" {} ''
${pkgs.deadnix}/bin/deadnix --fail ${./.}
mkdir $out
'';
};
rnix-lsp
nixpkgs-fmt
];
};
});
checks = {
inherit (self.packages.${system}) deadnix;
};
devShells.default = pkgs.mkShell {
buildInputs = with pkgs; [
(rust.override {
extensions = ["rust-src"];
targets = ["wasm32-unknown-unknown"];
})
cargo-edit
cargo-watch
cargo-criterion
cargo-fuzz
cargo-flamegraph
cargo-deny
crate2nix
wasm-pack
pkgconfig
openssl
gnuplot
nodejs
yarn
deno
# c deps
cmake
cmocka
doxygen
rnix-lsp
nixpkgs-fmt
];
};
});
}

img/brandmark.png (new binary file, 1.4 KiB; not shown)

img/brandmark.svg (new file, 885 B)

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 80.46 80.46"><defs><style>.cls-1{fill:#fc3;}.cls-1,.cls-2{fill-rule:evenodd;}.cls-2{fill:#2a1e20;}</style></defs><g id="Layer_2" data-name="Layer 2"><g id="Layer_1-2" data-name="Layer 1"><path class="cls-1" d="M79.59,38.12a3,3,0,0,1,0,4.21L42.34,79.58a3,3,0,0,1-4.22,0L.88,42.33a3,3,0,0,1,0-4.2L38.12.87a3,3,0,0,1,4.22,0"/><path class="cls-2" d="M76.87,38.76,41.71,3.59a2.09,2.09,0,0,0-2.93,0L3.62,38.76a2.07,2.07,0,0,0,0,2.93L38.78,76.85a2.07,2.07,0,0,0,2.93,0L76.87,41.69a2.07,2.07,0,0,0,0-2.93m-2,.79a.93.93,0,0,1,0,1.34l-33.94,34a1,1,0,0,1-1.33,0l-34-33.95a.94.94,0,0,1,0-1.32l34-34a1,1,0,0,1,1.33,0Z"/><path class="cls-2" d="M36.25,32.85v1.71c0,6.35-5.05,11.38-9.51,16.45l4.08,4.07c2.48-2.6,4.72-5.24,5.43-6.19V60.14h7.94V32.88l4.25,1.3a1.68,1.68,0,0,0,2.25-2.24L40.27,16.7,29.75,31.94A1.68,1.68,0,0,0,32,34.18"/></g></g></svg>


img/favicon.ico (new binary file, 254 KiB; not shown)

img/lockup.png (new binary file, 5.7 KiB; not shown)

img/lockup.svg (new file, 3 KiB)

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400.72 80.46"><defs><style>.cls-1{fill:#fc3;}.cls-1,.cls-2{fill-rule:evenodd;}.cls-2{fill:#2a1e20;}</style></defs><g id="Layer_2" data-name="Layer 2"><g id="Layer_1-2" data-name="Layer 1"><path class="cls-1" d="M79.59,38.12a3,3,0,0,1,0,4.21L42.34,79.58a3,3,0,0,1-4.22,0L.88,42.33a3,3,0,0,1,0-4.2L38.12.87a3,3,0,0,1,4.22,0"/><path class="cls-2" d="M76.87,38.76,41.71,3.59a2.09,2.09,0,0,0-2.93,0L3.62,38.76a2.07,2.07,0,0,0,0,2.93L38.78,76.85a2.07,2.07,0,0,0,2.93,0L76.87,41.69a2.07,2.07,0,0,0,0-2.93m-2,.79a.93.93,0,0,1,0,1.34l-33.94,34a1,1,0,0,1-1.33,0l-34-33.95a.94.94,0,0,1,0-1.32l34-34a1,1,0,0,1,1.33,0Z"/><path class="cls-2" d="M36.25,32.85v1.71c0,6.35-5.05,11.38-9.51,16.45l4.08,4.07c2.48-2.6,4.72-5.24,5.43-6.19V60.14h7.94V32.88l4.25,1.3a1.68,1.68,0,0,0,2.25-2.24L40.27,16.7,29.75,31.94A1.68,1.68,0,0,0,32,34.18"/><path d="M124.14,60.08,120.55,50h-17L100,60.08H93.34l15.34-42.61h6.75L131,60.08Zm-9-25.63c-1-3-2.74-8-3.22-9.8-.49,1.83-2,6.7-3.11,9.86l-3.41,9.74H118.6Z"/><path d="M156.7,60.08V57c-1.58,2.32-4.74,3.72-8,3.72-7.43,0-11.38-4.87-11.38-14.31V28.12h6.27V46.2c0,6.45,2.43,8.76,6.57,8.76s6.57-3,6.57-8.15V28.12H163v32Z"/><path d="M187.5,59.29a12.74,12.74,0,0,1-6.15,1.46c-4.44,0-7.18-2.74-7.18-8.46V33.84h-4.56V28.12h4.56V19l6.15-3.29V28.12h7.91v5.72h-7.91V51.19c0,3,1,3.83,3.29,3.83a10,10,0,0,0,4.62-1.27Z"/><path d="M208.08,60.75c-8,0-14.06-6.64-14.06-16.62,0-10.47,6.2-16.68,14.24-16.68S222.5,34,222.5,44C222.5,54.54,216.29,60.75,208.08,60.75ZM208,33.42c-4.75,0-7.67,4.2-7.67,10.53,0,7,3.22,10.83,8,10.83s7.85-4.81,7.85-10.65C216.17,37.62,213.07,33.42,208,33.42Z"/><path d="M267.36,60.08V42c0-6.45-2-8.77-6.15-8.77s-6.14,3-6.14,8.16V60.08H248.8V42c0-6.45-2-8.77-6.15-8.77s-6.15,3-6.15,8.16V60.08h-6.27v-32h6.27v3a9,9,0,0,1,7.61-3.71c4.32,0,7.06,1.65,8.76,4.69,2.32-2.86,4.81-4.69,9.8-4.69,7.43,0,11,4.87,11,14.31V60.08Z"/><path d="M308.39,46.32H287.27c.66,6.15,4.13,8.77,8,8.77a11.22,11.22,0,0,0,6.94-2.56l3.71,4a14.9,14.9,0,0,1-11,4.2c-7.48,0-13.81-6-13.81-16.62,0-10.84,5.72-16.68,14-16.68,9.07,0,13.45,7.37,13.45,16C308.57,44.62,308.45,45.65,308.39,46.32Zm-13.7-13.21c-4.2,0-6.76,2.92-7.3,8h14.85C301.93,36.76,299.86,33.11,294.69,33.11Z"/><path d="M333.71,34.76a9.37,9.37,0,0,0-4.81-1.16c-4,0-6.27,2.8-6.27,8.22V60.08h-6.27v-32h6.27v3a8.86,8.86,0,0,1,7.3-3.71,9.22,9.22,0,0,1,5.42,1.34Z"/><path d="M350.45,71.82l-2.14-4.74c9-.43,11-2.86,11-9.5V57c-2.31,2.13-4.93,3.72-8.28,3.72-6.81,0-12.29-5-12.29-17.17,0-10.95,6-16.13,12.6-16.13a11.11,11.11,0,0,1,8,3.65v-3h6.27V57C365.54,66.77,362,71.46,350.45,71.82Zm8.94-34.39c-1.4-1.88-4.32-4.2-7.48-4.2-4.51,0-6.94,3.41-6.94,10.17,0,8,2.55,11.56,7.18,11.56,3,0,5.6-2,7.24-4.07Z"/><path d="M400.54,46.32H379.42c.67,6.15,4.14,8.77,8,8.77a11.22,11.22,0,0,0,6.94-2.56l3.71,4a14.87,14.87,0,0,1-11,4.2c-7.49,0-13.82-6-13.82-16.62,0-10.84,5.72-16.68,14-16.68,9.07,0,13.45,7.37,13.45,16C400.72,44.62,400.6,45.65,400.54,46.32Zm-13.7-13.21c-4.2,0-6.75,2.92-7.3,8h14.85C394.09,36.76,392,33.11,386.84,33.11Z"/></g></g></svg>


img/sign.png (new binary file, 7.7 KiB; not shown)

img/sign.svg (new file, 3.5 KiB)

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 485 108"><defs><style>.cls-1{fill:#fff;}.cls-2{fill:#fc3;}.cls-3{fill:#2a1e20;fill-rule:evenodd;}</style></defs><g id="Layer_2" data-name="Layer 2"><g id="Layer_1-2" data-name="Layer 1"><path class="cls-1" d="M465,5a15,15,0,0,1,15,15V88a15,15,0,0,1-15,15H20A15,15,0,0,1,5,88V20A15,15,0,0,1,20,5H465m0-5H20A20,20,0,0,0,0,20V88a20,20,0,0,0,20,20H465a20,20,0,0,0,20-20V20A20,20,0,0,0,465,0Z"/><rect class="cls-2" x="3.7" y="3.7" width="477.6" height="100.6" rx="16.3"/><path class="cls-2" d="M465,5a15,15,0,0,1,15,15V88a15,15,0,0,1-15,15H20A15,15,0,0,1,5,88V20A15,15,0,0,1,20,5H465m0-2.6H20A17.63,17.63,0,0,0,2.4,20V88A17.63,17.63,0,0,0,20,105.6H465A17.63,17.63,0,0,0,482.6,88V20A17.63,17.63,0,0,0,465,2.4Z"/><path d="M465,7.6A12.41,12.41,0,0,1,477.4,20V88A12.41,12.41,0,0,1,465,100.4H20A12.41,12.41,0,0,1,7.6,88V20A12.41,12.41,0,0,1,20,7.6H465M465,5H20A15,15,0,0,0,5,20V88a15,15,0,0,0,15,15H465a15,15,0,0,0,15-15V20A15,15,0,0,0,465,5Z"/><path class="cls-3" d="M106.1,51.48l-34-34a2,2,0,0,0-2.83,0l-34,34a2,2,0,0,0,0,2.82l34,34a2,2,0,0,0,2.83,0l34-34a2,2,0,0,0,0-2.82m-.76.74a.93.93,0,0,1,0,1.34L71.4,87.5a1,1,0,0,1-1.33,0l-34-33.94a.94.94,0,0,1,0-1.32l34-34a1,1,0,0,1,1.33,0Z"/><path class="cls-3" d="M67,45.62V47c0,6.2-5.1,11.11-9.59,16.06l4.11,4C64,64.52,66.28,61.94,67,61V72h8V45.37l4.29,1.27a1.67,1.67,0,0,0,2.27-2.19L71,29.56,60.45,44.45a1.67,1.67,0,0,0,2.27,2.19"/><path d="M162.62,72.74,159,62.64H142l-3.53,10.1h-6.63l15.34-42.61h6.75l15.53,42.61Zm-9-25.62c-1-3-2.74-8-3.22-9.8-.49,1.82-2,6.69-3.11,9.86l-3.41,9.73h13.15Z"/><path d="M195.18,72.74v-3c-1.58,2.31-4.74,3.71-8,3.71-7.43,0-11.38-4.87-11.38-14.3V40.78H182V58.86c0,6.45,2.43,8.77,6.57,8.77s6.57-3,6.57-8.16V40.78h6.27v32Z"/><path d="M226,72a12.74,12.74,0,0,1-6.15,1.46c-4.44,0-7.18-2.74-7.18-8.46V46.51h-4.56V40.78h4.56V31.65l6.15-3.28V40.78h7.91v5.73H218.8V63.85c0,3,1,3.84,3.29,3.84a10,10,0,0,0,4.62-1.28Z"/><path d="M246.56,73.41c-8,0-14.06-6.63-14.06-16.62,0-10.47,6.2-16.67,14.24-16.67S261,46.63,261,56.61C261,67.2,254.77,73.41,246.56,73.41Zm-.07-27.33c-4.74,0-7.66,4.2-7.66,10.53,0,7,3.22,10.83,8,10.83s7.85-4.8,7.85-10.65C254.65,50.28,251.55,46.08,246.49,46.08Z"/><path d="M305.84,72.74V54.66c0-6.45-2-8.76-6.15-8.76s-6.14,3-6.14,8.15V72.74h-6.27V54.66c0-6.45-2-8.76-6.15-8.76s-6.15,3-6.15,8.15V72.74h-6.27v-32H275v3a9,9,0,0,1,7.61-3.71c4.32,0,7.06,1.64,8.76,4.68,2.32-2.86,4.81-4.68,9.8-4.68,7.43,0,11,4.86,11,14.3V72.74Z"/><path d="M346.87,59H325.74c.67,6.15,4.14,8.77,8,8.77a11.16,11.16,0,0,0,6.94-2.56l3.71,4a14.86,14.86,0,0,1-11,4.2c-7.48,0-13.81-6-13.81-16.62,0-10.83,5.72-16.67,14-16.67,9.07,0,13.45,7.36,13.45,16C347.05,57.28,346.93,58.31,346.87,59Zm-13.7-13.2c-4.2,0-6.76,2.92-7.3,8h14.85C340.41,49.43,338.34,45.78,333.17,45.78Z"/><path d="M372.19,47.42a9.37,9.37,0,0,0-4.81-1.16c-4,0-6.27,2.8-6.27,8.22V72.74h-6.27v-32h6.27v3a8.86,8.86,0,0,1,7.3-3.71,9.22,9.22,0,0,1,5.42,1.33Z"/><path d="M388.92,84.49l-2.13-4.75c9-.43,11-2.86,11-9.5V69.7c-2.31,2.13-4.93,3.71-8.28,3.71-6.81,0-12.29-5-12.29-17.16,0-11,6-16.13,12.6-16.13a11.07,11.07,0,0,1,8,3.65v-3H404V69.7C404,79.44,400.49,84.12,388.92,84.49Zm8.95-34.39c-1.4-1.89-4.32-4.2-7.48-4.2-4.51,0-6.94,3.41-6.94,10.16,0,8,2.55,11.57,7.18,11.57,3,0,5.6-2,7.24-4.08Z"/><path 
d="M439,59H417.9c.67,6.15,4.14,8.77,8,8.77a11.16,11.16,0,0,0,6.94-2.56l3.71,4a14.84,14.84,0,0,1-11,4.2c-7.49,0-13.82-6-13.82-16.62,0-10.83,5.72-16.67,14-16.67,9.07,0,13.45,7.36,13.45,16C439.2,57.28,439.08,58.31,439,59Zm-13.7-13.2c-4.2,0-6.75,2.92-7.3,8h14.85C432.57,49.43,430.5,45.78,425.32,45.78Z"/></g></g></svg>

@@ -0,0 +1,3 @@
{
"replacer": "scripts/denoify-replacer.mjs"
}

2
javascript/.eslintignore Normal file
@@ -0,0 +1,2 @@
dist
examples

15
javascript/.eslintrc.cjs Normal file
@@ -0,0 +1,15 @@
module.exports = {
root: true,
parser: "@typescript-eslint/parser",
plugins: ["@typescript-eslint"],
extends: ["eslint:recommended", "plugin:@typescript-eslint/recommended"],
rules: {
"@typescript-eslint/no-unused-vars": [
"error",
{
argsIgnorePattern: "^_",
varsIgnorePattern: "^_",
},
],
},
}

6
javascript/.gitignore vendored Normal file
@@ -0,0 +1,6 @@
/node_modules
/yarn.lock
dist
docs/
.vim
deno_dist/

@@ -0,0 +1,4 @@
e2e/verdacciodb
dist
docs
deno_dist

4
javascript/.prettierrc Normal file
@@ -0,0 +1,4 @@
{
"semi": false,
"arrowParens": "avoid"
}

39
javascript/HACKING.md Normal file
@@ -0,0 +1,39 @@
## Architecture
The `@automerge/automerge` package is a set of
[`Proxy`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy)
objects which provide an idiomatic javascript interface built on top of the
lower level `@automerge/automerge-wasm` package (which is in turn built from the
Rust codebase and can be found in `~/automerge-wasm`). That is, the responsibility
of this codebase is:
- To map from the javascript data model to the underlying `set`, `make`,
`insert`, and `delete` operations of Automerge (see the sketch after this list).
- To expose a more convenient interface to functions in `automerge-wasm` which
generate messages to send over the network or compressed file formats to store
on disk.
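For example, each mutation made inside a `change` callback is translated by the proxy layer into one of those low-level operations. A minimal sketch (the comments describe the rough mapping; they are not the exact `automerge-wasm` calls):
```javascript
import * as automerge from "@automerge/automerge"

let doc = automerge.init()
doc = automerge.change(doc, d => {
  d.title = "shopping"     // roughly: a `set` op on the root map
  d.cards = []             // roughly: a `make` op creating a list object
  d.cards.push("buy milk") // roughly: an `insert` op into that list
})
```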
## Building and testing
Much of the functionality of this package depends on the
`@automerge/automerge-wasm` package and frequently you will be working on both
of them at the same time. It would be frustrating to have to push
`automerge-wasm` to NPM every time you want to test a change but I (Alex) also
don't trust `yarn link` to do the right thing here. Therefore, the `./e2e`
folder contains a little yarn package which spins up a local NPM registry. See
`./e2e/README` for details. In brief though:
To build `automerge-wasm` and install it in the local `node_modules`
```bash
cd e2e && yarn install && yarn run e2e buildjs
```
Now that you've done this you can run the tests
```bash
yarn test
```
If you make changes to the `automerge-wasm` package you will need to re-run
`yarn e2e buildjs`.
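For example, assuming you have already run the install step above:
```bash
cd e2e && yarn e2e buildjs
```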

10
javascript/LICENSE Normal file
@@ -0,0 +1,10 @@
MIT License
Copyright 2022, Ink & Switch LLC
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

109
javascript/README.md Normal file
@@ -0,0 +1,109 @@
## Automerge
Automerge is a library of data structures for building collaborative
applications; this package is the javascript implementation.
Detailed documentation is available at [automerge.org](http://automerge.org/)
but see the following for a short getting started guide.
## Quickstart
First, install the library.
```
yarn add @automerge/automerge
```
If you're writing a `node` application, you can skip straight to [Make some
data](#make-some-data). If you're in a browser you need a bundler
### Bundler setup
`@automerge/automerge` is a wrapper around a core library which is written in
rust, compiled to WebAssembly and distributed as a separate package called
`@automerge/automerge-wasm`. Browsers don't currently support WebAssembly
modules taking part in ESM module imports, so you must use a bundler to import
`@automerge/automerge` in the browser. There are a lot of bundlers out there; we
have examples for common bundlers in the `examples` folder. Here is a short
example using Webpack 5.
Assuming a standard setup of a new webpack project, you'll need to enable the
`asyncWebAssembly` experiment. In a typical webpack project that means adding
something like this to `webpack.config.js`
```javascript
module.exports = {
...
experiments: { asyncWebAssembly: true },
performance: { // we don't want the wasm blob to generate warnings
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
}
};
```
### Make some data
Automerge allows separate threads of execution to make changes to some data
and always be able to merge their changes later.
```javascript
import * as automerge from "@automerge/automerge"
import * as assert from "assert"
let doc1 = automerge.from({
tasks: [
{ description: "feed fish", done: false },
{ description: "water plants", done: false },
],
})
// Create a new thread of execution
let doc2 = automerge.clone(doc1)
// Now we concurrently make changes to doc1 and doc2
// Complete a task in doc2
doc2 = automerge.change(doc2, d => {
d.tasks[0].done = true
})
// Add a task in doc1
doc1 = automerge.change(doc1, d => {
d.tasks.push({
description: "water fish",
done: false,
})
})
// Merge changes from both docs
doc1 = automerge.merge(doc1, doc2)
doc2 = automerge.merge(doc2, doc1)
// Both docs are merged and identical
assert.deepEqual(doc1, {
tasks: [
{ description: "feed fish", done: true },
{ description: "water plants", done: false },
{ description: "water fish", done: false },
],
})
assert.deepEqual(doc2, {
tasks: [
{ description: "feed fish", done: true },
{ description: "water plants", done: false },
{ description: "water fish", done: false },
],
})
```
## Development
See [HACKING.md](./HACKING.md)
## Meta
Copyright 2017–present, the Automerge contributors. Released under the terms of the
MIT license (see `LICENSE`).

@@ -0,0 +1,12 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"compilerOptions": {
"outDir": "../dist/cjs"
}
}

@@ -0,0 +1,13 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"emitDeclarationOnly": true,
"compilerOptions": {
"outDir": "../dist"
}
}

@@ -0,0 +1,14 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"compilerOptions": {
"target": "es6",
"module": "es6",
"outDir": "../dist/mjs"
}
}

@@ -0,0 +1,10 @@
import * as Automerge from "../deno_dist/index.ts"
Deno.test("It should create, clone and free", () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1)
// this is only needed if weakrefs are not supported
Automerge.free(doc1)
Automerge.free(doc2)
})

3
javascript/e2e/.gitignore vendored Normal file
@@ -0,0 +1,3 @@
node_modules/
verdacciodb/
htpasswd

70
javascript/e2e/README.md Normal file
@@ -0,0 +1,70 @@
# End to end testing for javascript packaging
The network of packages and bundlers we rely on to get the `automerge` package
working is a little complex. We have the `automerge-wasm` package, which the
`automerge` package depends upon, which means that anyone who depends on
`automerge` needs to either a) be using node or b) use a bundler in order to
load the underlying WASM module which is packaged in `automerge-wasm`.
The various bundlers involved are complicated and capricious and so we need an
easy way of testing that everything is in fact working as expected. To do this
we run a custom NPM registry (namely [Verdaccio](https://verdaccio.org/)) and
build the `automerge-wasm` and `automerge` packages and publish them to this
registry. Once we have this registry running we are able to build the example
projects which depend on these packages and check that everything works as
expected.
## Usage
First, install everything:
```
yarn install
```
### Build `automerge-js`
This builds the `automerge-wasm` package and then runs `yarn build` in the
`automerge-js` project with the `--registry` set to the verdaccio registry. The
end result is that you can run `yarn test` in the resulting `automerge-js`
directory in order to run tests against the current `automerge-wasm`.
```
yarn e2e buildjs
```
### Build examples
This builds either all the examples in `automerge-js/examples` or just a subset
of them. Once this is complete you can run the relevant scripts (e.g. `vite dev`
for the Vite example) to check everything works.
```
yarn e2e buildexamples
```
Or, to just build the webpack example
```
yarn e2e buildexamples -e webpack
```
### Run Registry
If you're experimenting with a project which is not in the `examples` folder
you'll need a running registry. `run-registry` builds and publishes
`automerge-js` and `automerge-wasm` and then runs the registry at
`localhost:4873`.
```
yarn e2e run-registry
```
You can now run `yarn install --registry http://localhost:4873` to experiment
with the built packages.
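That is, in the project you are experimenting with:
```
yarn install --registry http://localhost:4873
```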
## Using the `dev` build of `automerge-wasm`
All the commands above take a `-p` flag which can be either `release` or
`dev` (the default). The `dev` profile builds with additional debug symbols,
which makes errors less cryptic.
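For example, to build and publish everything using the release profile:
```
yarn e2e buildjs -p release
```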

534
javascript/e2e/index.ts Normal file
@@ -0,0 +1,534 @@
import { once } from "events"
import { setTimeout } from "timers/promises"
import { spawn, ChildProcess } from "child_process"
import * as child_process from "child_process"
import {
command,
subcommands,
run,
array,
multioption,
option,
Type,
} from "cmd-ts"
import * as path from "path"
import * as fsPromises from "fs/promises"
import fetch from "node-fetch"
const VERDACCIO_DB_PATH = path.normalize(`${__dirname}/verdacciodb`)
const VERDACCIO_CONFIG_PATH = path.normalize(`${__dirname}/verdaccio.yaml`)
const AUTOMERGE_WASM_PATH = path.normalize(
`${__dirname}/../../rust/automerge-wasm`
)
const AUTOMERGE_JS_PATH = path.normalize(`${__dirname}/..`)
const EXAMPLES_DIR = path.normalize(path.join(__dirname, "../", "examples"))
// The different example projects in "../examples"
type Example = "webpack" | "vite" | "create-react-app"
// Type to parse strings to `Example` so the types line up for the `buildExamples` command
const ReadExample: Type<string, Example> = {
async from(str) {
if (str === "webpack") {
return "webpack"
} else if (str === "vite") {
return "vite"
} else if (str === "create-react-app") {
return "create-react-app"
} else {
throw new Error(`Unknown example type ${str}`)
}
},
}
type Profile = "dev" | "release"
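// Build profile for automerge-wasm: "dev" corresponds to the `debug` yarn
// script in `buildAutomergeWasm` below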
const ReadProfile: Type<string, Profile> = {
async from(str) {
if (str === "dev") {
return "dev"
} else if (str === "release") {
return "release"
} else {
throw new Error(`Unknown profile ${str}`)
}
},
}
const buildjs = command({
name: "buildjs",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ profile }) => {
console.log("building js")
withPublishedWasm(profile, async (registryUrl: string) => {
await buildAndPublishAutomergeJs(registryUrl)
})
},
})
const buildWasm = command({
name: "buildwasm",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ profile }) => {
console.log("building automerge-wasm")
withRegistry(buildAutomergeWasm(profile))
},
})
const buildexamples = command({
name: "buildexamples",
args: {
examples: multioption({
long: "example",
short: "e",
type: array(ReadExample),
}),
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ examples, profile }) => {
if (examples.length === 0) {
examples = ["webpack", "vite", "create-react-app"]
}
buildExamples(examples, profile)
},
})
const runRegistry = command({
name: "run-registry",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ profile }) => {
withPublishedWasm(profile, async (registryUrl: string) => {
await buildAndPublishAutomergeJs(registryUrl)
console.log("\n************************")
console.log(` Verdaccio NPM registry is running at ${registryUrl}`)
console.log(" press CTRL-C to exit ")
console.log("************************")
await once(process, "SIGINT")
}).catch(e => {
console.error(`Failed: ${e}`)
})
},
})
const app = subcommands({
name: "e2e",
cmds: {
buildjs,
buildexamples,
buildwasm: buildWasm,
"run-registry": runRegistry,
},
})
run(app, process.argv.slice(2))
async function buildExamples(examples: Array<Example>, profile: Profile) {
await withPublishedWasm(profile, async registryUrl => {
printHeader("building and publishing automerge")
await buildAndPublishAutomergeJs(registryUrl)
for (const example of examples) {
printHeader(`building ${example} example`)
if (example === "webpack") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
projectPath,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {
stdio: "inherit",
})
} else if (example === "vite") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
projectPath,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {
stdio: "inherit",
})
} else if (example === "create-react-app") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
projectPath,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {
stdio: "inherit",
})
}
}
})
}
type WithRegistryAction = (registryUrl: string) => Promise<void>
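/**
 * Start verdaccio, run each of the given actions against it in sequence, and
 * then shut the registry down. Every action is raced against the verdaccio
 * process exiting, so a crashed registry fails the run rather than hanging it.
 */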
async function withRegistry(
action: WithRegistryAction,
...actions: Array<WithRegistryAction>
) {
// First, start verdaccio
printHeader("Starting verdaccio NPM server")
const verd = await VerdaccioProcess.start()
actions.unshift(action)
for (const action of actions) {
try {
type Step = "verd-died" | "action-completed"
const verdDied: () => Promise<Step> = async () => {
await verd.died()
return "verd-died"
}
const actionComplete: () => Promise<Step> = async () => {
await action("http://localhost:4873")
return "action-completed"
}
const result = await Promise.race([verdDied(), actionComplete()])
if (result === "verd-died") {
throw new Error("verdaccio unexpectedly exited")
}
} catch (e) {
await verd.kill()
throw e
}
}
await verd.kill()
}
async function withPublishedWasm(profile: Profile, action: WithRegistryAction) {
await withRegistry(buildAutomergeWasm(profile), publishAutomergeWasm, action)
}
function buildAutomergeWasm(profile: Profile): WithRegistryAction {
return async (registryUrl: string) => {
printHeader("building automerge-wasm")
await spawnAndWait(
"yarn",
["--cwd", AUTOMERGE_WASM_PATH, "--registry", registryUrl, "install"],
{ stdio: "inherit" }
)
const cmd = profile === "release" ? "release" : "debug"
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_WASM_PATH, cmd], {
stdio: "inherit",
})
}
}
async function publishAutomergeWasm(registryUrl: string) {
printHeader("Publishing automerge-wasm to verdaccio")
await fsPromises.rm(
path.join(VERDACCIO_DB_PATH, "@automerge/automerge-wasm"),
{ recursive: true, force: true }
)
await yarnPublish(registryUrl, AUTOMERGE_WASM_PATH)
}
async function buildAndPublishAutomergeJs(registryUrl: string) {
// Build the js package
printHeader("Building automerge")
await removeExistingAutomerge(AUTOMERGE_JS_PATH)
await removeFromVerdaccio("@automerge/automerge")
await fsPromises.rm(path.join(AUTOMERGE_JS_PATH, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
AUTOMERGE_JS_PATH,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_JS_PATH, "build"], {
stdio: "inherit",
})
await yarnPublish(registryUrl, AUTOMERGE_JS_PATH)
}
/**
* A running verdaccio process
*
*/
class VerdaccioProcess {
child: ChildProcess
stdout: Array<Buffer>
stderr: Array<Buffer>
constructor(child: ChildProcess) {
this.child = child
// Collect stdout/stderr otherwise the subprocess gets blocked writing
this.stdout = []
this.stderr = []
this.child.stdout &&
this.child.stdout.on("data", data => this.stdout.push(data))
this.child.stderr &&
this.child.stderr.on("data", data => this.stderr.push(data))
const errCallback = (e: any) => {
console.error("!!!!!!!!!ERROR IN VERDACCIO PROCESS!!!!!!!!!")
console.error(" ", e)
if (this.stdout.length > 0) {
console.log("\n**Verdaccio stdout**")
const stdout = Buffer.concat(this.stdout)
process.stdout.write(stdout)
}
if (this.stderr.length > 0) {
console.log("\n**Verdaccio stderr**")
const stderr = Buffer.concat(this.stderr)
process.stdout.write(stderr)
}
process.exit(-1)
}
this.child.on("error", errCallback)
}
/**
* Spawn a verdaccio process and wait for it to respond successfully to http requests
*
* The returned `VerdaccioProcess` can be used to control the subprocess
*/
static async start() {
const child = spawn(
"yarn",
["verdaccio", "--config", VERDACCIO_CONFIG_PATH],
{ env: { ...process.env, FORCE_COLOR: "true" } }
)
// Forward stdout and stderr whilst waiting for startup to complete
const stdoutCallback = (data: Buffer) => process.stdout.write(data)
const stderrCallback = (data: Buffer) => process.stderr.write(data)
child.stdout && child.stdout.on("data", stdoutCallback)
child.stderr && child.stderr.on("data", stderrCallback)
const healthCheck = async () => {
while (true) {
try {
const resp = await fetch("http://localhost:4873")
if (resp.status === 200) {
return
} else {
console.log(`Healthcheck failed: bad status ${resp.status}`)
}
} catch (e) {
console.error(`Healthcheck failed: ${e}`)
}
await setTimeout(500)
}
}
await withTimeout(healthCheck(), 10000)
// Stop forwarding stdout/stderr
child.stdout && child.stdout.off("data", stdoutCallback)
child.stderr && child.stderr.off("data", stderrCallback)
return new VerdaccioProcess(child)
}
/**
* Kill the subprocess and wait for it to stop, escalating to SIGKILL if it doesn't exit promptly
*/
async kill() {
this.child.stdout && this.child.stdout.destroy()
this.child.stderr && this.child.stderr.destroy()
this.child.kill()
try {
await withTimeout(once(this.child, "close"), 500)
} catch (e) {
console.error("unable to kill verdaccio subprocess, trying -9")
this.child.kill(9)
await withTimeout(once(this.child, "close"), 500)
}
}
/**
* A promise which resolves if the subprocess exits for some reason
*/
async died(): Promise<number | null> {
const [exit, _signal] = await once(this.child, "exit")
return exit
}
}
function printHeader(header: string) {
console.log("\n===============================")
console.log(` ${header}`)
console.log("===============================")
}
/**
* Removes the automerge, @automerge/automerge-wasm, and @automerge/automerge packages from
* `$packageDir/node_modules`
*
* This is useful to force a package refresh when used in combination with
* `yarn install --check-files`, which checks if a package is present in
* `node_modules` and if it is not forces a reinstall.
*
* @param packageDir - The directory containing the package.json of the target project
*/
async function removeExistingAutomerge(packageDir: string) {
await fsPromises.rm(path.join(packageDir, "node_modules", "@automerge"), {
recursive: true,
force: true,
})
await fsPromises.rm(path.join(packageDir, "node_modules", "automerge"), {
recursive: true,
force: true,
})
}
type SpawnResult = {
stdout: Buffer | null
stderr: Buffer | null
}
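/**
 * Spawn `cmd` with `args`, wait for it to exit, and return any captured
 * stdout/stderr. Throws if the process exits with a nonzero code.
 */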
async function spawnAndWait(
cmd: string,
args: Array<string>,
options: child_process.SpawnOptions
): Promise<SpawnResult> {
const child = spawn(cmd, args, options)
let stdout: Array<Buffer> | null = null
let stderr: Array<Buffer> | null = null
if (child.stdout) {
stdout = []
child.stdout.on("data", data => stdout.push(data))
}
if (child.stderr) {
stderr = []
child.stderr.on("data", data => stderr.push(data))
}
const [exit, _signal] = await once(child, "exit")
if (exit && exit !== 0) {
throw new Error("nonzero exit code")
}
return {
stderr: stderr ? Buffer.concat(stderr) : null,
stdout: stdout ? Buffer.concat(stdout) : null,
}
}
/**
* Remove a package from the verdaccio registry. This is necessary because we
* often want to _replace_ a version rather than update the version number.
Obviously this is very bad and verboten in normal circumstances, but the
* whole point here is to be able to test the entire packaging story so it's
okay, I promise.
*/
async function removeFromVerdaccio(packageName: string) {
await fsPromises.rm(path.join(VERDACCIO_DB_PATH, packageName), {
force: true,
recursive: true,
})
}
async function yarnPublish(registryUrl: string, cwd: string) {
await spawnAndWait(
"yarn",
["--registry", registryUrl, "--cwd", cwd, "publish", "--non-interactive"],
{
stdio: "inherit",
env: {
...process.env,
FORCE_COLOR: "true",
// This is a fake token, it just has to be the right format
npm_config__auth:
"//localhost:4873/:_authToken=Gp2Mgxm4faa/7wp0dMSuRA==",
},
}
)
}
/**
* Wait for a promise to resolve, throwing an error if it doesn't resolve
* within the given timeout
*
* @param promise - the promise to wait for
* @param timeout - the delay in milliseconds to wait before throwing
*/
async function withTimeout<T>(
promise: Promise<T>,
timeout: number
): Promise<T> {
type Step = "timed-out" | { result: T }
const timedOut: () => Promise<Step> = async () => {
await setTimeout(timeout)
return "timed-out"
}
const succeeded: () => Promise<Step> = async () => {
const result = await promise
return { result }
}
const result = await Promise.race([timedOut(), succeeded()])
if (result === "timed-out") {
throw new Error("timed out")
} else {
return result.result
}
}

@@ -0,0 +1,23 @@
{
"name": "e2e",
"version": "0.0.1",
"description": "",
"main": "index.js",
"scripts": {
"e2e": "ts-node index.ts"
},
"author": "",
"license": "ISC",
"dependencies": {
"@types/node": "^18.7.18",
"cmd-ts": "^0.11.0",
"node-fetch": "^2",
"ts-node": "^10.9.1",
"typed-emitter": "^2.1.0",
"typescript": "^4.8.3",
"verdaccio": "5"
},
"devDependencies": {
"@types/node-fetch": "2.x"
}
}

@@ -0,0 +1,6 @@
{
"compilerOptions": {
"types": ["node"],
"module": "nodenext"
}
}

@@ -0,0 +1,25 @@
storage: "./verdacciodb"
auth:
htpasswd:
file: ./htpasswd
publish:
allow_offline: true
logs: { type: stdout, format: pretty, level: info }
packages:
"@automerge/automerge-wasm":
access: "$all"
publish: "$all"
"@automerge/automerge":
access: "$all"
publish: "$all"
"*":
access: "$all"
publish: "$all"
proxy: npmjs
"@*/*":
access: "$all"
publish: "$all"
proxy: npmjs
uplinks:
npmjs:
url: https://registry.npmjs.org/

2130
javascript/e2e/yarn.lock Normal file

File diff suppressed because it is too large

@@ -0,0 +1 @@
node_modules/

@@ -0,0 +1,59 @@
# Automerge + `create-react-app`
This is a little fiddly to get working. The problem is that `create-react-app`
hard codes a webpack configuration which does not support WASM modules, which we
require in order to bundle the WASM implementation of automerge. To get around
this we use [`craco`](https://github.com/dilanx/craco) which does some monkey
patching to allow us to modify the webpack config that `create-react-app`
bundles. Then we use a craco plugin called
[`craco-wasm`](https://www.npmjs.com/package/craco-wasm) to perform the
necessary modifications to the webpack config. It should be noted that this is
all quite fragile and ideally you probably don't want to use `create-react-app`
to do this in production.
## Setup
Assuming you have already run `create-react-app` and your working directory is
the root of the project.
### Install craco and craco-wasm
```bash
yarn add craco craco-wasm
```
### Modify `package.json` to use `craco` for scripts
In `package.json` the `scripts` section will look like this:
```json
"scripts": {
"start": "craco start",
"build": "craco build",
"test": "craco test",
"eject": "craco eject"
},
```
Replace that section with:
```json
"scripts": {
"start": "craco start",
"build": "craco build",
"test": "craco test",
"eject": "craco eject"
},
```
### Create `craco.config.js`
In the root of the project add the following contents to `craco.config.js`
```javascript
const cracoWasm = require("craco-wasm")
module.exports = {
plugins: [cracoWasm()],
}
```

@@ -0,0 +1,5 @@
const cracoWasm = require("craco-wasm")
module.exports = {
plugins: [cracoWasm()],
}

@@ -0,0 +1,41 @@
{
"name": "automerge-create-react-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"@craco/craco": "^7.0.0-alpha.8",
"craco-wasm": "0.0.1",
"@testing-library/jest-dom": "^5.16.5",
"@testing-library/react": "^13.4.0",
"@testing-library/user-event": "^13.5.0",
"@automerge/automerge": "2.0.0-alpha.7",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-scripts": "5.0.1",
"web-vitals": "^2.1.4"
},
"scripts": {
"start": "craco start",
"build": "craco build",
"test": "craco test",
"eject": "craco eject"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}

Binary file not shown.

@@ -0,0 +1,43 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="theme-color" content="#000000" />
<meta
name="description"
content="Web site created using create-react-app"
/>
<link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
<!--
manifest.json provides metadata used when your web app is installed on a
user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
-->
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
<!--
Notice the use of %PUBLIC_URL% in the tags above.
It will be replaced with the URL of the `public` folder during the build.
Only files inside the `public` folder can be referenced from the HTML.
Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
work correctly both with client-side routing and a non-root public URL.
Learn how to configure a non-root public URL by running `npm run build`.
-->
<title>React App</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
</body>
</html>

Binary file not shown.

Binary file not shown.

@@ -0,0 +1,25 @@
{
"short_name": "React App",
"name": "Create React App Sample",
"icons": [
{
"src": "favicon.ico",
"sizes": "64x64 32x32 24x24 16x16",
"type": "image/x-icon"
},
{
"src": "logo192.png",
"type": "image/png",
"sizes": "192x192"
},
{
"src": "logo512.png",
"type": "image/png",
"sizes": "512x512"
}
],
"start_url": ".",
"display": "standalone",
"theme_color": "#000000",
"background_color": "#ffffff"
}

@@ -0,0 +1,3 @@
# https://www.robotstxt.org/robotstxt.html
User-agent: *
Disallow:

@@ -0,0 +1,38 @@
.App {
text-align: center;
}
.App-logo {
height: 40vmin;
pointer-events: none;
}
@media (prefers-reduced-motion: no-preference) {
.App-logo {
animation: App-logo-spin infinite 20s linear;
}
}
.App-header {
background-color: #282c34;
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: calc(10px + 2vmin);
color: white;
}
.App-link {
color: #61dafb;
}
@keyframes App-logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}

Some files were not shown because too many files have changed in this diff.