Compare commits


68 commits

Author SHA1 Message Date
alexjg
cb409b6ffe
docs: timestamp -> time in automerge.change examples (#548) 2023-03-09 18:10:23 +00:00
Conrad Irwin
b34b46fa16
smaller automerge c (#545)
* Fix automerge-c tests on mac

* Generate significantly smaller automerge-c builds

This cuts the size of libautomerge_core.a from 25Mb to 1.6Mb on macOS
and 53Mb to 2.7Mb on Linux.

As a side-effect of setting codegen-units = 1 for all release builds the
optimized wasm files are also 100kb smaller.
2023-03-09 15:09:43 +00:00
Conrad Irwin
7b747b8341
Error instead of corrupt large op counters (#543)
Since b78211ca6, OpIds have been silently truncated to 2**32. This
causes corruption in the case the op id overflows.

This change converts the silent error to a panic, and guards against the
panic on the codepath found by the fuzzer.
2023-03-07 16:49:04 +00:00
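The overflow guard described above can be sketched as a checked conversion. This is an illustrative sketch only: `checked_counter` and `OpIdError` are invented names, not automerge's internal types.

```rust
// Hypothetical sketch of turning a silent truncation into an error.
// `OpIdError` and `checked_counter` are illustrative, not the real API.

#[derive(Debug, PartialEq)]
enum OpIdError {
    CounterOverflow(u64),
}

/// Before the fix: `counter as u32` silently truncated values >= 2^32,
/// corrupting op ids. After: refuse to build an id that does not fit.
fn checked_counter(counter: u64) -> Result<u32, OpIdError> {
    u32::try_from(counter).map_err(|_| OpIdError::CounterOverflow(counter))
}

fn main() {
    assert_eq!(checked_counter(7), Ok(7));
    assert!(checked_counter(1u64 << 32).is_err());
    println!("ok");
}
```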
Conrad Irwin
2c1970f664
Fix panic on invalid action (#541)
We make the validation on parsing operations in the encoded changes stricter to avoid a possible panic when applying changes.
2023-03-04 12:09:08 +00:00
christine betts
63b761c0d1
Suppress clippy warning in parse.rs + bump toolchain (#542)
* Fix rust error in parse.rs
* Bump toolchain to 1.67.0
2023-03-03 22:42:40 +00:00
Conrad Irwin
44fa7ac416
Don't panic on missing deps of change chunks (#538)
* Fix doubly-reported ops in load of change chunks

Since c3c04128f5, observers have been
called twice when calling Automerge::load() with change chunks.

* Better handle change chunks with missing deps

Before this change Automerge::load would panic if you passed a change
chunk that was missing a dependency, or multiple change chunks not in
strict dependency order. After this change these cases will error
instead.
2023-02-27 20:12:09 +00:00
Jason Kankiewicz
8de2fa9bd4
C API 2 (#530)
The AMvalue union, AMlistItem struct, AMmapItem struct, and AMobjItem struct are gone, replaced by the AMitem struct.

The AMchangeHashes, AMchanges, AMlistItems, AMmapItems, AMobjItems, AMstrs, and AMsyncHaves iterators are gone, replaced by the AMitems iterator.

The AMitem struct is opaque, getting and setting values is now achieved exclusively through function calls.

The AMitemsNext(), AMitemsPrev(), and AMresultItem() functions return a pointer to an AMitem struct so you ultimately get the same thing whether you're iterating over a sequence or calling AMmapGet() or AMlistGet().

Calling AMitemResult() on an AMitem struct will produce a new AMresult struct referencing its storage so now the AMresult struct for an iterator can be subsequently freed without affecting the AMitem structs that were filtered out of it.

The storage for a set of AMitem structs can be recombined into a single AMresult struct by passing pointers to their corresponding AMresult structs to AMresultCat().

For C/C++ programmers, I've added AMstrCmp(), AMstrdup(), AM{idxType,objType,status,valType}ToString() and AM{idxType,objType,status,valType}FromString(). It's also now possible to pass arbitrary parameters through AMstack{Item,Items,Result}() to a callback function.
2023-02-25 18:47:00 +00:00
Philip Schatz
407faefa6e
A few setup fixes (#529)
* include deno in dependencies

* install javascript dependencies

* remove redundant operation
2023-02-15 09:23:02 +00:00
Alex Good
1425af43cd @automerge/automerge@2.0.2 2023-02-15 00:06:23 +00:00
Alex Good
c92d042c87 @automerge/automerge-wasm@0.1.24 and @automerge/automerge@2.0.2-alpha.2 2023-02-14 17:59:23 +00:00
Alex Good
9271b20cf5 Correct logic when skip = B and fix formatting
A few tests were failing which exposed the fact that if skip is `B` (the
out factor of the OpTree) then we set `skip = None` and this causes us
to attempt to return `Skip` in a non root node. I ported the failing
test from JS to Rust and fixed the problem.

I also fixed the formatting issues.
2023-02-14 17:21:59 +00:00
Orion Henry
5e82dbc3c8 rework how skip works to push the logic into node 2023-02-14 17:21:59 +00:00
Conrad Irwin
2cd7427f35 Use our leb128 parser for values
This ensures that values in automerge documents are encoded correctly,
and that no extra data is smuggled in any LEB fields.
2023-02-09 15:46:22 +00:00
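A canonical LEB128 decoder of the kind this commit describes can be sketched as follows. This is a standalone illustration of the "no extra data smuggled in" property, not automerge's actual parser: a trailing zero byte in a multi-byte encoding (e.g. `[0x80, 0x00]` for zero) carries no information and is rejected as overlong.

```rust
/// Decode an unsigned LEB128 value, rejecting overlong (non-canonical)
/// encodings. Returns the value and the number of bytes consumed.
fn decode_u64(bytes: &[u8]) -> Result<(u64, usize), &'static str> {
    let mut result: u64 = 0;
    let mut shift = 0u32;
    for (i, &byte) in bytes.iter().enumerate() {
        // the tenth byte may only contribute the final bit of a u64
        if shift == 63 && byte > 1 {
            return Err("value overflows u64");
        }
        result |= u64::from(byte & 0x7f) << shift;
        if byte & 0x80 == 0 {
            // A trailing 0x00 in a multi-byte encoding adds nothing, so the
            // encoding was longer than necessary: reject it.
            if i > 0 && byte == 0 {
                return Err("overlong encoding");
            }
            return Ok((result, i + 1));
        }
        shift += 7;
        if shift >= 64 {
            return Err("value overflows u64");
        }
    }
    Err("unexpected end of input")
}

fn main() {
    assert_eq!(decode_u64(&[0x00]), Ok((0, 1)));
    assert_eq!(decode_u64(&[0xe5, 0x8e, 0x26]), Ok((624_485, 3)));
    assert!(decode_u64(&[0x80, 0x00]).is_err()); // overlong zero
    println!("ok");
}
```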
Alex Good
11f063cbfe
Remove nightly from CI 2023-02-09 11:06:24 +00:00
Alex Good
a24d536d16 Move automerge::SequenceTree to automerge_wasm::SequenceTree
The `SequenceTree` is only ever used in `automerge_wasm` so move it
there.
2023-02-05 11:08:33 +00:00
Alex Good
c5fde2802f @automerge/automerge-wasm@0.1.24 and @automerge/automerge@2.0.2-alpha.1 2023-02-03 16:31:46 +00:00
Alex Good
13a775ed9a Speed up loading by generating clocks on demand
Context: currently we store a mapping from ChangeHash -> Clock, where
`Clock` is the set of (ActorId, (Sequence number, max Op)) pairs derived
from the given change and its dependencies. This clock is used to
determine what operations are visible at a given set of heads.

Problem: populating this mapping for documents with large histories
containing many actors can be very slow as for each change we have to
allocate and merge a bunch of hashmaps.

Solution: instead of creating the clocks on load, create an adjacency
list based representation of the change graph and then derive the clock
from this graph when it is needed. Traversing even large graphs is still
almost as fast as looking up the clock in a hashmap.
2023-02-03 16:15:15 +00:00
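The adjacency-list approach described above can be sketched as below. All types here are simplified stand-ins (actors are plain indices, a clock is a `BTreeMap`); the point is only the shape of the solution: keep the change graph, and derive a clock by walking ancestors when asked.

```rust
use std::collections::BTreeMap;

// Illustrative sketch: derive a clock on demand from an adjacency-list
// change graph instead of materialising one clock per change at load time.
#[derive(Default)]
struct ChangeGraph {
    /// parents[i] holds the indices of change i's dependencies
    parents: Vec<Vec<usize>>,
    /// (actor, sequence number, max op) recorded for each change
    info: Vec<(usize, u64, u64)>,
}

impl ChangeGraph {
    fn add_change(&mut self, actor: usize, seq: u64, max_op: u64, deps: Vec<usize>) -> usize {
        self.info.push((actor, seq, max_op));
        self.parents.push(deps);
        self.info.len() - 1
    }

    /// Walk the ancestors of `heads`, keeping the highest (seq, max_op)
    /// seen per actor. Each change is visited at most once.
    fn clock(&self, heads: &[usize]) -> BTreeMap<usize, (u64, u64)> {
        let mut clock = BTreeMap::new();
        let mut stack: Vec<usize> = heads.to_vec();
        let mut seen = vec![false; self.info.len()];
        while let Some(idx) = stack.pop() {
            if seen[idx] {
                continue;
            }
            seen[idx] = true;
            let (actor, seq, max_op) = self.info[idx];
            let entry = clock.entry(actor).or_insert((0, 0));
            if seq > entry.0 {
                *entry = (seq, max_op);
            }
            stack.extend(&self.parents[idx]);
        }
        clock
    }
}

fn main() {
    let mut g = ChangeGraph::default();
    let a1 = g.add_change(0, 1, 3, vec![]);
    let b1 = g.add_change(1, 1, 5, vec![a1]);
    let a2 = g.add_change(0, 2, 8, vec![b1]);
    let clock = g.clock(&[a2]);
    assert_eq!(clock[&0], (2, 8));
    assert_eq!(clock[&1], (1, 5));
    println!("{:?}", clock);
}
```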
Alex Good
1e33c9d9e0 Use Automerge::load instead of load_incremental if empty
Problem: when running the sync protocol for a new document the API
requires that the user create an empty document and then call
`receive_sync_message` on that document. This results in the OpObserver
for the new document being called with every single op in the document
history. For documents with a large history this can be extremely time
consuming, but the OpObserver doesn't need to know about all the hidden
states.

Solution: Modify `Automerge::load_with` and
`Automerge::apply_changes_with` to check if the document is empty before
applying changes. If the document _is_ empty then we don't call the
observer for every change, but instead use
`automerge::observe_current_state` to notify the observer of the new
state once all the changes have been applied.
2023-02-03 10:01:12 +00:00
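The control flow described above can be modelled with a toy document: when the document starts empty, apply every change silently and notify the observer once with the final state (the role `automerge::observe_current_state` plays in the real commit); otherwise fall back to one notification per change. All types below are illustrative stand-ins, not the automerge API.

```rust
// Toy model of the empty-document fast path. `Doc`, `Observer` and the
// apply function are invented names for illustration only.
struct Doc {
    ops: Vec<i32>,
}

trait Observer {
    fn notify(&mut self, visible: &[i32]);
}

struct CountingObserver {
    calls: usize,
}

impl Observer for CountingObserver {
    fn notify(&mut self, _visible: &[i32]) {
        self.calls += 1;
    }
}

fn apply_changes_with(doc: &mut Doc, changes: &[i32], obs: &mut impl Observer) {
    if doc.ops.is_empty() {
        // fast path: apply everything, then describe the final state once
        doc.ops.extend_from_slice(changes);
        obs.notify(&doc.ops);
    } else {
        // incremental path: the observer hears about every change
        for &c in changes {
            doc.ops.push(c);
            obs.notify(&doc.ops);
        }
    }
}

fn main() {
    let mut doc = Doc { ops: vec![] };
    let mut obs = CountingObserver { calls: 0 };
    apply_changes_with(&mut doc, &[1, 2, 3], &mut obs);
    assert_eq!(obs.calls, 1); // empty doc: a single notification
    apply_changes_with(&mut doc, &[4, 5], &mut obs);
    assert_eq!(obs.calls, 3); // non-empty doc: one per change
    println!("ok");
}
```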
Alex Good
c3c04128f5 Only observe the current state on load
Problem: When loading a document whilst passing an `OpObserver` we call
the OpObserver for every change in the loaded document. This slows down
the loading process for two reasons: 1) we have to make a call to the
observer for every op 2) we cannot just stream the ops into the OpSet in
topological order but must instead buffer them to pass to the observer.

Solution: Construct the OpSet first, then only traverse the visible ops
in the OpSet, calling the observer. For documents with a deep history
this results in vastly fewer calls to the observer and also allows us to
construct the OpSet much more quickly. It is slightly different
semantically because the observer never gets notified of changes which
are not visible, but that shouldn't matter to most observers.
2023-02-03 10:01:12 +00:00
Alex Good
da55dfac7a refactor: make fields of Automerge private
The fields of `automerge::Automerge` were crate public, which made it
hard to change the structure of `Automerge` with confidence. Make all
fields private and put them behind accessors where necessary to allow
for easy internal changes.
2023-02-03 10:01:12 +00:00
alexjg
9195e9cb76
Fix deny errors (#518)
* Ignore deny errors on duplicate windows-sys

* Delete spurious lockfile in automerge-cli
2023-02-02 15:02:53 +00:00
dependabot[bot]
f8d5a8ea98
Bump json5 from 1.0.1 to 1.0.2 in /javascript/examples/create-react-app (#487)
Bumps [json5](https://github.com/json5/json5) from 1.0.1 to 1.0.2 in javascript/examples/create-react-app.
2023-02-01 09:15:54 +00:00
alexjg
2a9652e642
typescript: Hide API type and make SyncState opaque (#514) 2023-02-01 09:15:00 +00:00
Conrad Irwin
a6959e70e8
More robust leb128 parsing (#515)
Before this change i64 decoding did not work for negative numbers (not a
real problem because it is only used for the timestamp of a change),
and both u64 and i64 would allow overlong LEB encodings.
2023-01-31 17:54:54 +00:00
alexjg
de5af2fffa
automerge-rs 0.3.0 and automerge-test 0.2.0 (#512) 2023-01-30 19:58:35 +00:00
alexjg
08801ab580
automerge-rs: Introduce ReadDoc and SyncDoc traits and add documentation (#511)
The Rust API has so far grown somewhat organically driven by the needs of the
javascript implementation. This has led to an API which is quite awkward and
unfamiliar to Rust programmers. Additionally there is no documentation to speak
of. This commit is the first movement towards cleaning things up a bit. We touch
a lot of files but the changes are all very mechanical. We introduce a few
traits to abstract over the common operations between `Automerge` and
`AutoCommit`, and add a whole bunch of documentation.

* Add a `ReadDoc` trait to describe methods which read value from a document.
  make `Transactable` extend `ReadDoc`
* Add a `SyncDoc` trait to describe methods necessary for synchronizing
  documents.
* Put the `SyncDoc` implementation for `AutoCommit` behind `AutoCommit::sync` to
  ensure that any open transactions are closed before taking part in the sync
  protocol
* Split `OpObserver` into two traits: `OpObserver` + `BranchableObserver`.
  `BranchableObserver` captures the methods which are only needed for observing
  transactions.
* Add a whole bunch of documentation.

The main changes Rust users will need to make are:

* Import the `ReadDoc` trait wherever you are using the methods which have been
  moved to it. Optionally change concrete parameters on functions to `ReadDoc`
  constraints.
* Likewise import the `SyncDoc` trait wherever you are doing synchronisation
  work
* If you are using the `AutoCommit::*_sync_message` methods you will need to add
  a call to `AutoCommit::sync()` first. E.g. `doc.generate_sync_message` becomes
  `doc.sync().generate_sync_message`
* If you have an implementation of `OpObserver` which you are using in an
  `AutoCommit` then split it into an implementation of `OpObserver` and
  `BranchableObserver`
2023-01-30 19:37:03 +00:00
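The `AutoCommit::sync()` change above can be modelled with a toy: the sync methods live behind an accessor that commits any open transaction first, so a sync message can never observe half-finished state. All names and types here are illustrative stand-ins, not the real automerge API.

```rust
// Toy model of putting the sync API behind `sync()`.
struct AutoCommit {
    committed: Vec<&'static str>,
    open_tx: Vec<&'static str>,
}

struct Sync<'a>(&'a mut AutoCommit);

impl AutoCommit {
    fn put(&mut self, op: &'static str) {
        self.open_tx.push(op);
    }

    /// Close any open transaction, then expose the sync methods.
    fn sync(&mut self) -> Sync<'_> {
        self.committed.append(&mut self.open_tx);
        Sync(self)
    }
}

impl<'a> Sync<'a> {
    fn generate_sync_message(&self) -> usize {
        // stand-in: a real message would summarise the committed changes
        self.0.committed.len()
    }
}

fn main() {
    let mut doc = AutoCommit { committed: vec![], open_tx: vec![] };
    doc.put("set x");
    // `doc.generate_sync_message()` no longer exists; going through sync()
    // guarantees the open transaction is committed first.
    let msg = doc.sync().generate_sync_message();
    assert_eq!(msg, 1);
    assert!(doc.open_tx.is_empty());
    println!("ok");
}
```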
alexjg
89a0866272
@automerge/automerge@2.0.1 (#510) 2023-01-28 21:22:45 +00:00
Alex Good
9b6a3c8691
Update README 2023-01-28 09:32:21 +00:00
alexjg
58a7a06b75
@automerge/automerge-wasm@0.1.23 and @automerge/automerge@2.0.1-alpha.6 (#509) 2023-01-27 20:27:11 +00:00
alexjg
f428fe0169
Improve typescript types (#508) 2023-01-27 17:23:13 +00:00
Conrad Irwin
931ee7e77b
Add Fuzz Testing (#498)
* Add fuzz testing for document load

* Fix fuzz crashers and add to test suite
2023-01-25 16:03:05 +00:00
alexjg
819767cc33
fix: use saturating_sub when updating cached text width (#505)
Problem: In `automerge::query::Index::change_vis` we use `-=` to
subtract the width of an operation which is being hidden from the text
widths which we store on the index of each node in the optree. This
index represents the width of all the visible text operations in this
node and below. This was causing an integer underflow error when
encountering some list operations. More specifically, when a
`ScalarValue::Str` in a list was made invisible by a later operation
which contained a _shorter_ string, the width subtracted from the indexed
text widths could be longer than the current index.

Solution: use `saturating_sub` instead. This is technically papering
over the problem because really the width should never go below zero,
but the text widths are only relevant for text objects where the
existing logic works as advertised because we don't have a `set`
operation for text indices. A more robust solution would be to track the
type of the Index (and consequently of the `OpTree`) at the type level,
but time is limited and problems are infinite.

Also, add a lengthy description of the reason we are using
`saturating_sub` so that when I read it in about a month I don't have
to redo the painful debugging process that got me to this commit.
2023-01-23 19:19:55 +00:00
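The fix boils down to one standard-library call: `usize::saturating_sub` clamps at zero instead of underflowing (a panic in debug builds, a huge wrapped value in release). A minimal illustration, with made-up widths:

```rust
fn main() {
    let index_width: usize = 3; // cached width of visible text in this node
    let hidden_width: usize = 5; // width of the op being hidden

    // `index_width - hidden_width` would underflow here;
    // saturating_sub clamps the result at zero instead.
    let updated = index_width.saturating_sub(hidden_width);
    assert_eq!(updated, 0);

    // in the normal case it behaves like ordinary subtraction
    assert_eq!(10usize.saturating_sub(4), 6);
    println!("ok");
}
```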
Alex Currie-Clark
78adbc4ff9
Update patch types (#499)
* Update `Patch` types

* Clarify that the splice patch applies to text

* Add Splice patch type to exports

* Add new patches to javascript
2023-01-23 17:02:02 +00:00
Andrew Jeffery
1f7b109dcd
Add From<SmolStr> for ScalarValue::Str (#506) 2023-01-23 17:01:41 +00:00
Conrad Irwin
98e755106f
Fix and simplify lebsize calculations (#503)
Before this change numbits_i64() was incorrect for every value of the
form 0 - 2^x. This only manifested in a visible error if x%7 == 6 (so
for -64, -8192, etc.) at which point `lebsize` would return a value one
too large, causing a panic in commit().
2023-01-23 11:01:05 +00:00
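A correct signed-LEB128 size calculation can be sketched directly from bit counts: the encoding must carry every bit after the redundant leading sign bits, plus one sign bit, at seven payload bits per byte. This is a reimplementation for illustration (`lebsize_i64` is an invented name), not the code from the commit, but it reproduces the boundary cases the message describes.

```rust
/// Bytes needed for the canonical signed LEB128 encoding of `v`.
fn lebsize_i64(v: i64) -> u64 {
    // redundant leading bits: zeros for non-negative, ones for negative
    let redundant = if v < 0 { v.leading_ones() } else { v.leading_zeros() };
    let bits_needed = 64 - u64::from(redundant) + 1; // +1 for the sign bit
    // each LEB byte carries 7 payload bits; round up
    (bits_needed + 6) / 7
}

fn main() {
    // -2^x values where x % 7 == 6 were the ones the old code got wrong:
    // it reported one byte too many for -64, -8192, ...
    assert_eq!(lebsize_i64(-64), 1); // encodes as the single byte 0x40
    assert_eq!(lebsize_i64(-8192), 2);
    assert_eq!(lebsize_i64(0), 1);
    assert_eq!(lebsize_i64(63), 1);
    assert_eq!(lebsize_i64(64), 2);
    assert_eq!(lebsize_i64(i64::MIN), 10);
    println!("ok");
}
```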
alexjg
6b0ee6da2e
Bump js to 2.0.1-alpha.5 and automerge-wasm to 0.1.22 (#497) 2023-01-19 22:15:06 +00:00
alexjg
9b44a75f69
fix: don't panic when generating parents for hidden objects (#500)
Problem: the `OpSet::export_key` method uses `query::ElemIdPos` to
determine the index of sequence elements when exporting a key. This
query returned `None` for invisible elements. The `Parents` iterator
which is used to generate paths to objects in patches in
`automerge-wasm` used `export_key`. The end result is that applying a
remote change which deletes an object in a sequence would panic as it
tries to generate a path for an invisible object.

Solution: modify `query::ElemIdPos` to include invisible objects. This
does mean that the path generated will refer to the previous visible
object in the sequence as its index, but this is probably fine as for
an invisible object the path shouldn't be used anyway.

While we're here also change the return value of `OpSet::export_key` to
an `Option` and make `query::Index::ops` private as obeisance to the
Lady of the Golden Blade.
2023-01-19 21:11:36 +00:00
alexjg
d8baa116e7
automerge-rs: Add ExId::to_bytes (#491)
The `ExId` structure has some internal details which make lookups for
object IDs which were produced by the document doing the looking up
faster. These internal details are quite specific to the implementation
so we don't want to expose them as a public API. On the other hand, we
need to be able to serialize `ExId`s so that FFI clients can hold on to
them without referencing memory which is owned by the document (ahem,
looking at you Java).

Introduce `ExId::to_bytes` and `TryFrom<&[u8]> ExId` implementing a
canonical serialization which includes a version tag, giving us
compatibility options if we decide to change the implementation.
2023-01-19 17:02:47 +00:00
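The versioned-serialization pattern described above can be sketched as follows. The field layout here is invented purely for illustration; only the leading version byte, which buys compatibility if the representation changes, reflects the commit.

```rust
use std::convert::TryFrom;

// Illustrative sketch of a version-tagged binary form like `ExId::to_bytes`.
#[derive(Debug, PartialEq)]
struct Id {
    counter: u64,
}

impl Id {
    fn to_bytes(&self) -> Vec<u8> {
        let mut out = vec![1u8]; // version tag: lets us change the layout later
        out.extend_from_slice(&self.counter.to_be_bytes());
        out
    }
}

impl TryFrom<&[u8]> for Id {
    type Error = &'static str;

    fn try_from(bytes: &[u8]) -> Result<Self, Self::Error> {
        match bytes {
            [1, rest @ ..] => {
                let arr: [u8; 8] = rest.try_into().map_err(|_| "bad length")?;
                Ok(Id { counter: u64::from_be_bytes(arr) })
            }
            [_, ..] => Err("unknown version"),
            [] => Err("empty input"),
        }
    }
}

fn main() {
    let id = Id { counter: 42 };
    let bytes = id.to_bytes();
    assert_eq!(Id::try_from(bytes.as_slice()), Ok(Id { counter: 42 }));
    assert!(Id::try_from(&[9u8, 0, 0][..]).is_err()); // future/unknown version
    println!("ok");
}
```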
alexjg
5629a7bec4
Various CI script fixes (#501)
Some of the scripts in scripts/ci were not reliably detecting the path
they were operating in. Additionally the deno_tests script was not
correctly picking up the ROOT_MODULE environment variable. Add more
robust path handling and fix the deno_tests script.
2023-01-19 15:38:27 +00:00
alexjg
964ae2bd81
Fix SeekOpWithPatch on optrees where visible ops are only on internal nodes (#496)
In #480 we fixed an issue where `SeekOp` calculated an incorrect
insertion index on optrees where the only visible ops were on internal
nodes. We forgot to port this fix to `SeekOpWithPatch`, which has almost
the same logic just with additional work done in order to notify an
`OpObserver` of changes. Add a test and fix to `SeekOpWithPatch`
2023-01-14 11:27:48 +00:00
Alex Good
d8df1707d9
Update rust toolchain for "linux" step 2023-01-14 11:06:58 +00:00
Alex Currie-Clark
681a3f1f3f
Add github action to deploy deno package 2023-01-13 10:33:47 +00:00
Alex Good
22e9915fac automerge-wasm: publish release build in Github Action 2023-01-12 12:42:19 +00:00
Alex Good
2d8df12522
re-enable version check for WASM release 2023-01-12 11:35:48 +00:00
Alex Good
f073dbf701
use setup-node prior to attempting to publish in release action 2023-01-12 11:04:22 +00:00
Alex Good
5c02445bee
Bump automerge-wasm, again
In order to re-trigger the release action we are testing we bump the
version which was de-bumped in the last commit.
2023-01-12 10:39:11 +00:00
Alex Good
3ef60747f4
Roll back automerge-wasm to test release action
The release action we are working on conditionally executes based on the
version of `automerge-wasm` in the previous commit. We need to trigger
it even though the version has not changed so we roll back the version
in this commit and the commit immediately following this will bump it
again.
2023-01-12 10:37:11 +00:00
Alex Good
d12bd3bb06
correctly call npm publish in release action 2023-01-12 10:27:03 +00:00
Alex Good
a0d698dc8e
Version bump js and wasm
js: 2.0.1-alpha.3
wasm: 0.1.20
2023-01-12 09:55:12 +00:00
Alex Currie-Clark
93a257896e Release action: Fix for check that WASM version has been updated before publishing 2023-01-12 09:44:48 +00:00
Alex Currie-Clark
9c3d0976c8 Add workflow to generate a deno.land and npm release when pushing a new automerge-wasm version to #main 2023-01-11 17:19:24 +00:00
Orion Henry
1ca1cc38ef
Merge pull request #484 from automerge/text2-compat
Text2 compat
2023-01-10 09:16:22 -08:00
Alex Good
0e7fb6cc10
javascript: Add @packageDocumentation TSDoc
Instead of using the `--readme` argument to `typedoc` use the
`@packageDocumentation` TSDoc tag to include the readme text in the
typedoc output.
2023-01-10 15:02:56 +00:00
Alex Good
d1220b9dd0
javascript: Use glob to list files in package.json
We have been listing all the files to be included in the distributed
package in package.json:files. This is tedious and error prone. We
change to using globs instead, to do this without also including the
test and src files when outputting declarations we add a new typescript
config file for the declaration generation which excludes tests.
2023-01-10 12:52:21 +00:00
Alex Good
6c0d102032
automerge-js: Add backwards compatibility text layer
The new text features are faster and more ergonomic but not backwards
compatible. In order to make them backwards compatible re-expose the
original functionality and move the new API under a `future` export.
This allows users to interoperably use both implementations.
2023-01-10 12:52:21 +00:00
Alex Good
5763210b07
wasm: Allow a choice of text representations
The wasm codebase assumed that clients want to represent text as a
string of characters. This is faster, but in order to enable backwards
compatibility we add a `TextRepresentation` argument to
`automerge_wasm::Automerge::new` to allow clients to choose between a
`string` or `Array<any>` representation. The `automerge_wasm::Observer`
will consult this setting to determine what kind of diffs to generate.
2023-01-10 12:52:19 +00:00
Alex Good
18a3f61704 Update rust toolchain to 1.66 2023-01-10 12:51:56 +00:00
Alex Currie-Clark
0306ade939 Update action name on IncPatch type 2023-01-06 15:23:41 +00:00
Alex Good
1e7dcdedec automerge-js: Add prettier
It's Christmas, everyone is on holiday, it's time to change every single
file in the repository!
2022-12-22 17:33:14 +00:00
Alex Good
8a645bb193 js: Enable typescript for the JS tests
The tsconfig.json was setup to not include the JS tests. Update the
config to include the tests when checking typescript and fix all the
consequent errors. None of this is semantically meaningful _except_ for
a few incorrect usages of the API which were leading to flaky tests.
Hooray for types!
2022-12-22 11:48:06 +00:00
Alex Good
4de0756bb4 Correctly handle ops on optree node boundaries
The `SeekOp` query can produce incorrect results when the optree it is
searching only has visible ops on the internal nodes. Add some tests to
demonstrate the issue as well as a fix.
2022-12-20 20:38:29 +00:00
Alex Good
d678280b57 automerge-cli: Add an examine-sync command
This is useful when receiving sync messages that behave in unexpected
ways.
2022-12-19 16:30:14 +00:00
Alex Good
f682db3039 automerge-cli: Add a flag to skip verifiying heads 2022-12-19 16:30:14 +00:00
Alex Good
6da93b6adc Correctly implement colored json
My quickly thrown together implementation had some mistakes in it which
meant that the JSON produced was malformed.
2022-12-19 16:30:14 +00:00
Alex Good
0f90fe4d02 Add a method for loading a document without verifying heads
This is primarily useful when debugging documents which have been
corrupted somehow so you would like to see the ops even if you can't
trust them. Note that this is _not_ currently useful for performance
reasons as the hash graph is still constructed, just not verified.
2022-12-19 16:30:14 +00:00
alexjg
8aff1296b9
automerge-cli: remove a bunch of bad dependencies (#478)
Automerge CLI depends transitively (via an old version of `clap` and
via `colored_json`) on `atty` and `ansi_term`. These crates are both
marked as unmaintained and this generates irritating `cargo deny`
messages. To avoid this, implement colored JSON ourselves using the
`termcolor` crate - colored JSON is pretty mechanical. Also update
criterion and cbindgen dependencies and ignore the criterion tree in
deny.toml as we only ever use it in benchmarks.

All that's left now is a warning about atty in cbindgen, we'll just have
to wait for cbindgen to fix that, it's a build time dependency anyway so
it's not really an issue.
2022-12-14 18:06:19 +00:00
Conrad Irwin
6dad2b7df1
Don't panic on invalid gzip stream (#477)
* Don't panic on invalid gzip stream

Before this change automerge-rs would panic if the gzip data in
a raw column was invalid; after this change the error is propagated
to the caller correctly.
2022-12-14 17:34:22 +00:00
patryk
e75ca2a834
Update README.md (Update Slack invite link) (#475)
Slack invite link updated to the one used on the website, as the current one returns "This link is no longer active".
2022-12-14 11:41:21 +00:00
274 changed files with 21887 additions and 15394 deletions


@@ -2,10 +2,10 @@ name: CI
on:
push:
branches:
- main
- main
pull_request:
branches:
- main
- main
jobs:
fmt:
runs-on: ubuntu-latest
@@ -14,7 +14,7 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.64.0
toolchain: 1.67.0
default: true
components: rustfmt
- uses: Swatinem/rust-cache@v1
@@ -28,7 +28,7 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.64.0
toolchain: 1.67.0
default: true
components: clippy
- uses: Swatinem/rust-cache@v1
@@ -42,7 +42,7 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.64.0
toolchain: 1.67.0
default: true
- uses: Swatinem/rust-cache@v1
- name: Build rust docs
@@ -90,6 +90,16 @@ jobs:
run: rustup target add wasm32-unknown-unknown
- name: run tests
run: ./scripts/ci/deno_tests
js_fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: install
run: yarn global add prettier
- name: format
run: prettier -c javascript/.prettierrc javascript
js_tests:
runs-on: ubuntu-latest
steps:
@@ -108,7 +118,7 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.64.0
toolchain: nightly-2023-01-26
default: true
- uses: Swatinem/rust-cache@v1
- name: Install CMocka
@@ -117,6 +127,8 @@ jobs:
uses: jwlawson/actions-setup-cmake@v1.12
with:
cmake-version: latest
- name: Install rust-src
run: rustup component add rust-src
- name: Build and test C bindings
run: ./scripts/ci/cmake-build Release Static
shell: bash
@@ -126,9 +138,7 @@ jobs:
strategy:
matrix:
toolchain:
- 1.60.0
- nightly
continue-on-error: ${{ matrix.toolchain == 'nightly' }}
- 1.67.0
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
@@ -147,7 +157,7 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.64.0
toolchain: 1.67.0
default: true
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/build-test
@@ -160,7 +170,7 @@ jobs:
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: 1.64.0
toolchain: 1.67.0
default: true
- uses: Swatinem/rust-cache@v1
- run: ./scripts/ci/build-test

214
.github/workflows/release.yaml vendored Normal file

@@ -0,0 +1,214 @@
name: Release
on:
push:
branches:
- main
jobs:
check_if_wasm_version_upgraded:
name: Check if WASM version has been upgraded
runs-on: ubuntu-latest
outputs:
wasm_version: ${{ steps.version-updated.outputs.current-package-version }}
wasm_has_updated: ${{ steps.version-updated.outputs.has-updated }}
steps:
- uses: JiPaix/package-json-updated-action@v1.0.5
id: version-updated
with:
path: rust/automerge-wasm/package.json
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
publish-wasm:
name: Publish WASM package
runs-on: ubuntu-latest
needs:
- check_if_wasm_version_upgraded
# We create release only if the version in the package.json has been upgraded
if: needs.check_if_wasm_version_upgraded.outputs.wasm_has_updated == 'true'
steps:
- uses: actions/setup-node@v3
with:
node-version: '16.x'
registry-url: 'https://registry.npmjs.org'
- uses: denoland/setup-deno@v1
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Get rid of local github workflows
run: rm -r .github/workflows
- name: Remove tmp_branch if it exists
run: git push origin :tmp_branch || true
- run: git checkout -b tmp_branch
- name: Install wasm-bindgen-cli
run: cargo install wasm-bindgen-cli wasm-opt
- name: Install wasm32 target
run: rustup target add wasm32-unknown-unknown
- name: run wasm js tests
id: wasm_js_tests
run: ./scripts/ci/wasm_tests
- name: run wasm deno tests
id: wasm_deno_tests
run: ./scripts/ci/deno_tests
- name: build release
id: build_release
run: |
npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm run release
- name: Collate deno release files
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
run: |
mkdir $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/deno/* $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/index.d.ts $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/README.md $GITHUB_WORKSPACE/deno_wasm_dist
cp $GITHUB_WORKSPACE/rust/automerge-wasm/LICENSE $GITHUB_WORKSPACE/deno_wasm_dist
sed -i '1i /// <reference types="./index.d.ts" />' $GITHUB_WORKSPACE/deno_wasm_dist/automerge_wasm.js
- name: Create npm release
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
run: |
if [ "$(npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm show . version)" = "$VERSION" ]; then
echo "This version is already published"
exit 0
fi
EXTRA_ARGS="--access public"
if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
echo "Is pre-release version"
EXTRA_ARGS="$EXTRA_ARGS --tag next"
fi
if [ "$NODE_AUTH_TOKEN" = "" ]; then
echo "Can't publish on NPM, You need a NPM_TOKEN secret."
false
fi
npm publish $GITHUB_WORKSPACE/rust/automerge-wasm $EXTRA_ARGS
env:
NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
- name: Commit wasm deno release files
run: |
git config --global user.name "actions"
git config --global user.email actions@github.com
git add $GITHUB_WORKSPACE/deno_wasm_dist
git commit -am "Add deno release files"
git push origin tmp_branch
- name: Tag wasm release
if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
uses: softprops/action-gh-release@v1
with:
name: Automerge Wasm v${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
tag_name: js/automerge-wasm-${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
target_commitish: tmp_branch
generate_release_notes: false
draft: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Remove tmp_branch
run: git push origin :tmp_branch
check_if_js_version_upgraded:
name: Check if JS version has been upgraded
runs-on: ubuntu-latest
outputs:
js_version: ${{ steps.version-updated.outputs.current-package-version }}
js_has_updated: ${{ steps.version-updated.outputs.has-updated }}
steps:
- uses: JiPaix/package-json-updated-action@v1.0.5
id: version-updated
with:
path: javascript/package.json
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
publish-js:
name: Publish JS package
runs-on: ubuntu-latest
needs:
- check_if_js_version_upgraded
- check_if_wasm_version_upgraded
- publish-wasm
# We create release only if the version in the package.json has been upgraded and after the WASM release
if: |
(always() && ! cancelled()) &&
(needs.publish-wasm.result == 'success' || needs.publish-wasm.result == 'skipped') &&
needs.check_if_js_version_upgraded.outputs.js_has_updated == 'true'
steps:
- uses: actions/setup-node@v3
with:
node-version: '16.x'
registry-url: 'https://registry.npmjs.org'
- uses: denoland/setup-deno@v1
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.ref }}
- name: Get rid of local github workflows
run: rm -r .github/workflows
- name: Remove js_tmp_branch if it exists
run: git push origin :js_tmp_branch || true
- run: git checkout -b js_tmp_branch
- name: check js formatting
run: |
yarn global add prettier
prettier -c javascript/.prettierrc javascript
- name: run js tests
id: js_tests
run: |
cargo install wasm-bindgen-cli wasm-opt
rustup target add wasm32-unknown-unknown
./scripts/ci/js_tests
- name: build js release
id: build_release
run: |
npm --prefix $GITHUB_WORKSPACE/javascript run build
- name: build js deno release
id: build_deno_release
run: |
VERSION=$WASM_VERSION npm --prefix $GITHUB_WORKSPACE/javascript run deno:build
env:
WASM_VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
- name: run deno tests
id: deno_tests
run: |
npm --prefix $GITHUB_WORKSPACE/javascript run deno:test
- name: Collate deno release files
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
run: |
mkdir $GITHUB_WORKSPACE/deno_js_dist
cp $GITHUB_WORKSPACE/javascript/deno_dist/* $GITHUB_WORKSPACE/deno_js_dist
- name: Create npm release
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
run: |
if [ "$(npm --prefix $GITHUB_WORKSPACE/javascript show . version)" = "$VERSION" ]; then
echo "This version is already published"
exit 0
fi
EXTRA_ARGS="--access public"
if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
echo "Is pre-release version"
EXTRA_ARGS="$EXTRA_ARGS --tag next"
fi
if [ "$NODE_AUTH_TOKEN" = "" ]; then
echo "Can't publish on NPM, You need a NPM_TOKEN secret."
false
fi
npm publish $GITHUB_WORKSPACE/javascript $EXTRA_ARGS
env:
NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
VERSION: ${{ needs.check_if_js_version_upgraded.outputs.js_version }}
- name: Commit js deno release files
run: |
git config --global user.name "actions"
git config --global user.email actions@github.com
git add $GITHUB_WORKSPACE/deno_js_dist
git commit -am "Add deno js release files"
git push origin js_tmp_branch
- name: Tag JS release
if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
uses: softprops/action-gh-release@v1
with:
name: Automerge v${{ needs.check_if_js_version_upgraded.outputs.js_version }}
tag_name: js/automerge-${{ needs.check_if_js_version_upgraded.outputs.js_version }}
target_commitish: js_tmp_branch
generate_release_notes: false
draft: false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Remove js_tmp_branch
run: git push origin :js_tmp_branch


@@ -25,7 +25,7 @@ If you're familiar with CRDTs and interested in the design of Automerge in
particular take a look at https://automerge.org/docs/how-it-works/backend/
Finally, if you want to talk to us about this project please [join the
Slack](https://join.slack.com/t/automerge/shared_invite/zt-1ho1ieas2-DnWZcRR82BRu65vCD4t3Xw)
Slack](https://join.slack.com/t/automerge/shared_invite/zt-e4p3760n-kKh7r3KRH1YwwNfiZM8ktw)
## Status
@@ -42,9 +42,10 @@ In general we try and respect semver.
### JavaScript
An alpha release of the javascript package is currently available as
`@automerge/automerge@2.0.0-alpha.n` where `n` is an integer. We are gathering
feedback on the API and looking to release a `2.0.0` in the next few weeks.
A stable release of the javascript package is currently available as
`@automerge/automerge@2.0.0`. Pre-release versions of `2.0.1` are
available as `2.0.1-alpha.n`. `2.0.1*` packages are also available for Deno at
https://deno.land/x/automerge
### Rust
@ -52,7 +53,9 @@ The rust codebase is currently oriented around producing a performant backend
for the Javascript wrapper and as such the API for Rust code is low level and
not well documented. We will be returning to this over the next few months but
for now you will need to be comfortable reading the tests and asking questions
to figure out how to use it.
to figure out how to use it. If you are looking to build rust applications which
use automerge you may want to look into
[autosurgeon](https://github.com/alexjg/autosurgeon).
## Repository Organisation
@ -109,9 +112,16 @@ brew install cmake node cmocka
# install yarn
npm install --global yarn
# install javascript dependencies
yarn --cwd ./javascript
# install rust dependencies
cargo install wasm-bindgen-cli wasm-opt cargo-deny
# get nightly rust to produce optimized automerge-c builds
rustup toolchain install nightly
rustup component add rust-src --toolchain nightly
# add wasm target in addition to current architecture
rustup target add wasm32-unknown-unknown


@ -54,6 +54,7 @@
nodejs
yarn
deno
# c deps
cmake


@ -0,0 +1,3 @@
{
"replacer": "scripts/denoify-replacer.mjs"
}


@ -1,11 +1,15 @@
module.exports = {
root: true,
parser: '@typescript-eslint/parser',
plugins: [
'@typescript-eslint',
],
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
],
};
parser: "@typescript-eslint/parser",
plugins: ["@typescript-eslint"],
extends: ["eslint:recommended", "plugin:@typescript-eslint/recommended"],
rules: {
"@typescript-eslint/no-unused-vars": [
"error",
{
argsIgnorePattern: "^_",
varsIgnorePattern: "^_",
},
],
},
}


@ -2,3 +2,5 @@
/yarn.lock
dist
docs/
.vim
deno_dist/


@ -0,0 +1,4 @@
e2e/verdacciodb
dist
docs
deno_dist

javascript/.prettierrc Normal file

@ -0,0 +1,4 @@
{
"semi": false,
"arrowParens": "avoid"
}


@ -8,7 +8,7 @@ Rust codebase and can be found in `~/automerge-wasm`). I.e. the responsibility
of this codebase is
- To map from the javascript data model to the underlying `set`, `make`,
`insert`, and `delete` operations of Automerge.
`insert`, and `delete` operations of Automerge.
- To expose a more convenient interface to functions in `automerge-wasm` which
generate messages to send over the network or compressed file formats to store
on disk
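The first responsibility above — mapping mutations of the javascript data model onto low-level operations — can be sketched with a toy diff. All names here are illustrative; the real mapping (including `make` and `insert` for nested objects and lists) lives in `automerge-wasm`:

```javascript
// Toy translation of a map update into `set` and `delete` operations.
// This only illustrates the idea, not the real automerge-wasm API.
function opsForMapUpdate(oldMap, newMap) {
  const ops = []
  // Keys that are new or changed become `set` operations
  for (const [key, value] of Object.entries(newMap)) {
    if (!(key in oldMap) || oldMap[key] !== value) {
      ops.push({ action: "set", key, value })
    }
  }
  // Keys that disappeared become `delete` operations
  for (const key of Object.keys(oldMap)) {
    if (!(key in newMap)) {
      ops.push({ action: "delete", key })
    }
  }
  return ops
}
```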
@ -37,4 +37,3 @@ yarn test
If you make changes to the `automerge-wasm` package you will need to re-run
`yarn e2e buildjs`


@ -19,7 +19,6 @@ data](#make-some-data). If you're in a browser you need a bundler
### Bundler setup
`@automerge/automerge` is a wrapper around a core library which is written in
rust, compiled to WebAssembly and distributed as a separate package called
`@automerge/automerge-wasm`. Browsers don't currently support WebAssembly
@ -54,28 +53,28 @@ import * as automerge from "@automerge/automerge"
import * as assert from "assert"
let doc1 = automerge.from({
tasks: [
{description: "feed fish", done: false},
{description: "water plants", done: false},
]
tasks: [
{ description: "feed fish", done: false },
{ description: "water plants", done: false },
],
})
// Create a new thread of execution
// Create a new thread of execution
let doc2 = automerge.clone(doc1)
// Now we concurrently make changes to doc1 and doc2
// Complete a task in doc2
doc2 = automerge.change(doc2, d => {
d.tasks[0].done = true
d.tasks[0].done = true
})
// Add a task in doc1
doc1 = automerge.change(doc1, d => {
d.tasks.push({
description: "water fish",
done: false
})
d.tasks.push({
description: "water fish",
done: false,
})
})
// Merge changes from both docs
@ -84,19 +83,19 @@ doc2 = automerge.merge(doc2, doc1)
// Both docs are merged and identical
assert.deepEqual(doc1, {
tasks: [
{description: "feed fish", done: true},
{description: "water plants", done: false},
{description: "water fish", done: false},
]
tasks: [
{ description: "feed fish", done: true },
{ description: "water plants", done: false },
{ description: "water fish", done: false },
],
})
assert.deepEqual(doc2, {
tasks: [
{description: "feed fish", done: true},
{description: "water plants", done: false},
{description: "water fish", done: false},
]
tasks: [
{ description: "feed fish", done: true },
{ description: "water plants", done: false },
{ description: "water fish", done: false },
],
})
```


@ -1,6 +1,12 @@
{
"extends": "../tsconfig.json",
"compilerOptions": {
"outDir": "../dist/cjs"
}
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"compilerOptions": {
"outDir": "../dist/cjs"
}
}


@ -0,0 +1,13 @@
{
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"emitDeclarationOnly": true,
"compilerOptions": {
"outDir": "../dist"
}
}


@ -1,8 +1,14 @@
{
"extends": "../tsconfig.json",
"compilerOptions": {
"target": "es6",
"module": "es6",
"outDir": "../dist/mjs"
}
"extends": "../tsconfig.json",
"exclude": [
"../dist/**/*",
"../node_modules",
"../test/**/*",
"../src/**/*.deno.ts"
],
"compilerOptions": {
"target": "es6",
"module": "es6",
"outDir": "../dist/mjs"
}
}


@ -0,0 +1,10 @@
import * as Automerge from "../deno_dist/index.ts"
Deno.test("It should create, clone and free", () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1)
// this is only needed if weakrefs are not supported
Automerge.free(doc1)
Automerge.free(doc2)
})


@ -54,7 +54,7 @@ yarn e2e buildexamples -e webpack
If you're experimenting with a project which is not in the `examples` folder
you'll need a running registry. `run-registry` builds and publishes
`automerge-js` and `automerge-wasm` and then runs the registry at
`localhost:4873`.
`localhost:4873`.
```
yarn e2e run-registry
@ -63,7 +63,6 @@ yarn e2e run-registry
You can now run `yarn install --registry http://localhost:4873` to experiment
with the built packages.
## Using the `dev` build of `automerge-wasm`
All the commands above take a `-p` flag which can be either `release` or


@ -1,15 +1,25 @@
import {once} from "events"
import {setTimeout} from "timers/promises"
import {spawn, ChildProcess} from "child_process"
import { once } from "events"
import { setTimeout } from "timers/promises"
import { spawn, ChildProcess } from "child_process"
import * as child_process from "child_process"
import {command, subcommands, run, array, multioption, option, Type} from "cmd-ts"
import {
command,
subcommands,
run,
array,
multioption,
option,
Type,
} from "cmd-ts"
import * as path from "path"
import * as fsPromises from "fs/promises"
import fetch from "node-fetch"
const VERDACCIO_DB_PATH = path.normalize(`${__dirname}/verdacciodb`)
const VERDACCIO_CONFIG_PATH = path.normalize(`${__dirname}/verdaccio.yaml`)
const AUTOMERGE_WASM_PATH = path.normalize(`${__dirname}/../../rust/automerge-wasm`)
const AUTOMERGE_WASM_PATH = path.normalize(
`${__dirname}/../../rust/automerge-wasm`
)
const AUTOMERGE_JS_PATH = path.normalize(`${__dirname}/..`)
const EXAMPLES_DIR = path.normalize(path.join(__dirname, "../", "examples"))
@ -18,217 +28,286 @@ type Example = "webpack" | "vite" | "create-react-app"
// Type to parse strings to `Example` so the types line up for the `buildExamples` command
const ReadExample: Type<string, Example> = {
async from(str) {
if (str === "webpack") {
return "webpack"
} else if (str === "vite") {
return "vite"
} else if (str === "create-react-app") {
return "create-react-app"
} else {
throw new Error(`Unknown example type ${str}`)
}
async from(str) {
if (str === "webpack") {
return "webpack"
} else if (str === "vite") {
return "vite"
} else if (str === "create-react-app") {
return "create-react-app"
} else {
throw new Error(`Unknown example type ${str}`)
}
},
}
type Profile = "dev" | "release"
const ReadProfile: Type<string, Profile> = {
async from(str) {
if (str === "dev") {
return "dev"
} else if (str === "release") {
return "release"
} else {
throw new Error(`Unknown profile ${str}`)
}
async from(str) {
if (str === "dev") {
return "dev"
} else if (str === "release") {
return "release"
} else {
throw new Error(`Unknown profile ${str}`)
}
},
}
const buildjs = command({
name: "buildjs",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile
})
},
handler: ({profile}) => {
console.log("building js")
withPublishedWasm(profile, async (registryUrl: string) => {
await buildAndPublishAutomergeJs(registryUrl)
})
}
name: "buildjs",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ profile }) => {
console.log("building js")
withPublishedWasm(profile, async (registryUrl: string) => {
await buildAndPublishAutomergeJs(registryUrl)
})
},
})
const buildWasm = command({
name: "buildwasm",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile
})
},
handler: ({profile}) => {
console.log("building automerge-wasm")
withRegistry(
buildAutomergeWasm(profile),
)
}
name: "buildwasm",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ profile }) => {
console.log("building automerge-wasm")
withRegistry(buildAutomergeWasm(profile))
},
})
const buildexamples = command({
name: "buildexamples",
args: {
examples: multioption({
long: "example",
short: "e",
type: array(ReadExample),
}),
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile
})
},
handler: ({examples, profile}) => {
if (examples.length === 0) {
examples = ["webpack", "vite", "create-react-app"]
}
buildExamples(examples, profile)
name: "buildexamples",
args: {
examples: multioption({
long: "example",
short: "e",
type: array(ReadExample),
}),
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ examples, profile }) => {
if (examples.length === 0) {
examples = ["webpack", "vite", "create-react-app"]
}
buildExamples(examples, profile)
},
})
const runRegistry = command({
name: "run-registry",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile
})
},
handler: ({profile}) => {
withPublishedWasm(profile, async (registryUrl: string) => {
await buildAndPublishAutomergeJs(registryUrl)
console.log("\n************************")
console.log(` Verdaccio NPM registry is running at ${registryUrl}`)
console.log(" press CTRL-C to exit ")
console.log("************************")
await once(process, "SIGINT")
}).catch(e => {
console.error(`Failed: ${e}`)
})
}
name: "run-registry",
args: {
profile: option({
type: ReadProfile,
long: "profile",
short: "p",
defaultValue: () => "dev" as Profile,
}),
},
handler: ({ profile }) => {
withPublishedWasm(profile, async (registryUrl: string) => {
await buildAndPublishAutomergeJs(registryUrl)
console.log("\n************************")
console.log(` Verdaccio NPM registry is running at ${registryUrl}`)
console.log(" press CTRL-C to exit ")
console.log("************************")
await once(process, "SIGINT")
}).catch(e => {
console.error(`Failed: ${e}`)
})
},
})
const app = subcommands({
name: "e2e",
cmds: {buildjs, buildexamples, buildwasm: buildWasm, "run-registry": runRegistry}
name: "e2e",
cmds: {
buildjs,
buildexamples,
buildwasm: buildWasm,
"run-registry": runRegistry,
},
})
run(app, process.argv.slice(2))
async function buildExamples(examples: Array<Example>, profile: Profile) {
await withPublishedWasm(profile, async (registryUrl) => {
printHeader("building and publishing automerge")
await buildAndPublishAutomergeJs(registryUrl)
for (const example of examples) {
printHeader(`building ${example} example`)
if (example === "webpack") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {force: true})
await spawnAndWait("yarn", ["--cwd", projectPath, "install", "--registry", registryUrl, "--check-files"], {stdio: "inherit"})
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {stdio: "inherit"})
} else if (example === "vite") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {force: true})
await spawnAndWait("yarn", ["--cwd", projectPath, "install", "--registry", registryUrl, "--check-files"], {stdio: "inherit"})
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {stdio: "inherit"})
} else if (example === "create-react-app") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {force: true})
await spawnAndWait("yarn", ["--cwd", projectPath, "install", "--registry", registryUrl, "--check-files"], {stdio: "inherit"})
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {stdio: "inherit"})
}
}
})
await withPublishedWasm(profile, async registryUrl => {
printHeader("building and publishing automerge")
await buildAndPublishAutomergeJs(registryUrl)
for (const example of examples) {
printHeader(`building ${example} example`)
if (example === "webpack") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
projectPath,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {
stdio: "inherit",
})
} else if (example === "vite") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
projectPath,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {
stdio: "inherit",
})
} else if (example === "create-react-app") {
const projectPath = path.join(EXAMPLES_DIR, example)
await removeExistingAutomerge(projectPath)
await fsPromises.rm(path.join(projectPath, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
projectPath,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", projectPath, "build"], {
stdio: "inherit",
})
}
}
})
}
type WithRegistryAction = (registryUrl: string) => Promise<void>
async function withRegistry(action: WithRegistryAction, ...actions: Array<WithRegistryAction>) {
// First, start verdaccio
printHeader("Starting verdaccio NPM server")
const verd = await VerdaccioProcess.start()
actions.unshift(action)
async function withRegistry(
action: WithRegistryAction,
...actions: Array<WithRegistryAction>
) {
// First, start verdaccio
printHeader("Starting verdaccio NPM server")
const verd = await VerdaccioProcess.start()
actions.unshift(action)
for (const action of actions) {
try {
type Step = "verd-died" | "action-completed"
const verdDied: () => Promise<Step> = async () => {
await verd.died()
return "verd-died"
}
const actionComplete: () => Promise<Step> = async () => {
await action("http://localhost:4873")
return "action-completed"
}
const result = await Promise.race([verdDied(), actionComplete()])
if (result === "verd-died") {
throw new Error("verdaccio unexpectedly exited")
}
} catch(e) {
await verd.kill()
throw e
}
for (const action of actions) {
try {
type Step = "verd-died" | "action-completed"
const verdDied: () => Promise<Step> = async () => {
await verd.died()
return "verd-died"
}
const actionComplete: () => Promise<Step> = async () => {
await action("http://localhost:4873")
return "action-completed"
}
const result = await Promise.race([verdDied(), actionComplete()])
if (result === "verd-died") {
throw new Error("verdaccio unexpectedly exited")
}
} catch (e) {
await verd.kill()
throw e
}
await verd.kill()
}
await verd.kill()
}
async function withPublishedWasm(profile: Profile, action: WithRegistryAction) {
await withRegistry(
buildAutomergeWasm(profile),
publishAutomergeWasm,
action
)
await withRegistry(buildAutomergeWasm(profile), publishAutomergeWasm, action)
}
function buildAutomergeWasm(profile: Profile): WithRegistryAction {
return async (registryUrl: string) => {
printHeader("building automerge-wasm")
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_WASM_PATH, "--registry", registryUrl, "install"], {stdio: "inherit"})
const cmd = profile === "release" ? "release" : "debug"
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_WASM_PATH, cmd], {stdio: "inherit"})
}
return async (registryUrl: string) => {
printHeader("building automerge-wasm")
await spawnAndWait(
"yarn",
["--cwd", AUTOMERGE_WASM_PATH, "--registry", registryUrl, "install"],
{ stdio: "inherit" }
)
const cmd = profile === "release" ? "release" : "debug"
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_WASM_PATH, cmd], {
stdio: "inherit",
})
}
}
async function publishAutomergeWasm(registryUrl: string) {
printHeader("Publishing automerge-wasm to verdaccio")
await fsPromises.rm(path.join(VERDACCIO_DB_PATH, "@automerge/automerge-wasm"), { recursive: true, force: true} )
await yarnPublish(registryUrl, AUTOMERGE_WASM_PATH)
printHeader("Publishing automerge-wasm to verdaccio")
await fsPromises.rm(
path.join(VERDACCIO_DB_PATH, "@automerge/automerge-wasm"),
{ recursive: true, force: true }
)
await yarnPublish(registryUrl, AUTOMERGE_WASM_PATH)
}
async function buildAndPublishAutomergeJs(registryUrl: string) {
// Build the js package
printHeader("Building automerge")
await removeExistingAutomerge(AUTOMERGE_JS_PATH)
await removeFromVerdaccio("@automerge/automerge")
await fsPromises.rm(path.join(AUTOMERGE_JS_PATH, "yarn.lock"), {force: true})
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_JS_PATH, "install", "--registry", registryUrl, "--check-files"], {stdio: "inherit"})
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_JS_PATH, "build"], {stdio: "inherit"})
await yarnPublish(registryUrl, AUTOMERGE_JS_PATH)
// Build the js package
printHeader("Building automerge")
await removeExistingAutomerge(AUTOMERGE_JS_PATH)
await removeFromVerdaccio("@automerge/automerge")
await fsPromises.rm(path.join(AUTOMERGE_JS_PATH, "yarn.lock"), {
force: true,
})
await spawnAndWait(
"yarn",
[
"--cwd",
AUTOMERGE_JS_PATH,
"install",
"--registry",
registryUrl,
"--check-files",
],
{ stdio: "inherit" }
)
await spawnAndWait("yarn", ["--cwd", AUTOMERGE_JS_PATH, "build"], {
stdio: "inherit",
})
await yarnPublish(registryUrl, AUTOMERGE_JS_PATH)
}
/**
@ -236,104 +315,110 @@ async function buildAndPublishAutomergeJs(registryUrl: string) {
*
*/
class VerdaccioProcess {
child: ChildProcess
stdout: Array<Buffer>
stderr: Array<Buffer>
child: ChildProcess
stdout: Array<Buffer>
stderr: Array<Buffer>
constructor(child: ChildProcess) {
this.child = child
constructor(child: ChildProcess) {
this.child = child
// Collect stdout/stderr otherwise the subprocess gets blocked writing
this.stdout = []
this.stderr = []
this.child.stdout && this.child.stdout.on("data", (data) => this.stdout.push(data))
this.child.stderr && this.child.stderr.on("data", (data) => this.stderr.push(data))
// Collect stdout/stderr otherwise the subprocess gets blocked writing
this.stdout = []
this.stderr = []
this.child.stdout &&
this.child.stdout.on("data", data => this.stdout.push(data))
this.child.stderr &&
this.child.stderr.on("data", data => this.stderr.push(data))
const errCallback = (e: any) => {
console.error("!!!!!!!!!ERROR IN VERDACCIO PROCESS!!!!!!!!!")
console.error(" ", e)
if (this.stdout.length > 0) {
console.log("\n**Verdaccio stdout**")
const stdout = Buffer.concat(this.stdout)
process.stdout.write(stdout)
}
const errCallback = (e: any) => {
console.error("!!!!!!!!!ERROR IN VERDACCIO PROCESS!!!!!!!!!")
console.error(" ", e)
if (this.stdout.length > 0) {
console.log("\n**Verdaccio stdout**")
const stdout = Buffer.concat(this.stdout)
process.stdout.write(stdout)
}
if (this.stderr.length > 0) {
console.log("\n**Verdaccio stderr**")
const stdout = Buffer.concat(this.stderr)
process.stdout.write(stdout)
}
process.exit(-1)
}
this.child.on("error", errCallback)
if (this.stderr.length > 0) {
console.log("\n**Verdaccio stderr**")
const stdout = Buffer.concat(this.stderr)
process.stdout.write(stdout)
}
process.exit(-1)
}
this.child.on("error", errCallback)
}
/**
* Spawn a verdaccio process and wait for it to respond successfully to http requests
*
* The returned `VerdaccioProcess` can be used to control the subprocess
*/
static async start() {
const child = spawn("yarn", ["verdaccio", "--config", VERDACCIO_CONFIG_PATH], {env: { ...process.env, FORCE_COLOR: "true"}})
/**
* Spawn a verdaccio process and wait for it to respond successfully to http requests
*
* The returned `VerdaccioProcess` can be used to control the subprocess
*/
static async start() {
const child = spawn(
"yarn",
["verdaccio", "--config", VERDACCIO_CONFIG_PATH],
{ env: { ...process.env, FORCE_COLOR: "true" } }
)
// Forward stdout and stderr whilst waiting for startup to complete
const stdoutCallback = (data: Buffer) => process.stdout.write(data)
const stderrCallback = (data: Buffer) => process.stderr.write(data)
child.stdout && child.stdout.on("data", stdoutCallback)
child.stderr && child.stderr.on("data", stderrCallback)
// Forward stdout and stderr whilst waiting for startup to complete
const stdoutCallback = (data: Buffer) => process.stdout.write(data)
const stderrCallback = (data: Buffer) => process.stderr.write(data)
child.stdout && child.stdout.on("data", stdoutCallback)
child.stderr && child.stderr.on("data", stderrCallback)
const healthCheck = async () => {
while (true) {
try {
const resp = await fetch("http://localhost:4873")
if (resp.status === 200) {
return
} else {
console.log(`Healthcheck failed: bad status ${resp.status}`)
}
} catch (e) {
console.error(`Healthcheck failed: ${e}`)
}
await setTimeout(500)
}
}
await withTimeout(healthCheck(), 10000)
// Stop forwarding stdout/stderr
child.stdout && child.stdout.off("data", stdoutCallback)
child.stderr && child.stderr.off("data", stderrCallback)
return new VerdaccioProcess(child)
}
/**
* Send a SIGKILL to the process and wait for it to stop
*/
async kill() {
this.child.stdout && this.child.stdout.destroy()
this.child.stderr && this.child.stderr.destroy()
this.child.kill();
const healthCheck = async () => {
while (true) {
try {
await withTimeout(once(this.child, "close"), 500)
const resp = await fetch("http://localhost:4873")
if (resp.status === 200) {
return
} else {
console.log(`Healthcheck failed: bad status ${resp.status}`)
}
} catch (e) {
console.error("unable to kill verdaccio subprocess, trying -9")
this.child.kill(9)
await withTimeout(once(this.child, "close"), 500)
console.error(`Healthcheck failed: ${e}`)
}
await setTimeout(500)
}
}
await withTimeout(healthCheck(), 10000)
/**
* A promise which resolves if the subprocess exits for some reason
*/
async died(): Promise<number | null> {
const [exit, _signal] = await once(this.child, "exit")
return exit
// Stop forwarding stdout/stderr
child.stdout && child.stdout.off("data", stdoutCallback)
child.stderr && child.stderr.off("data", stderrCallback)
return new VerdaccioProcess(child)
}
/**
* Send a SIGKILL to the process and wait for it to stop
*/
async kill() {
this.child.stdout && this.child.stdout.destroy()
this.child.stderr && this.child.stderr.destroy()
this.child.kill()
try {
await withTimeout(once(this.child, "close"), 500)
} catch (e) {
console.error("unable to kill verdaccio subprocess, trying -9")
this.child.kill(9)
await withTimeout(once(this.child, "close"), 500)
}
}
/**
* A promise which resolves if the subprocess exits for some reason
*/
async died(): Promise<number | null> {
const [exit, _signal] = await once(this.child, "exit")
return exit
}
}
function printHeader(header: string) {
console.log("\n===============================")
console.log(` ${header}`)
console.log("===============================")
console.log("\n===============================")
console.log(` ${header}`)
console.log("===============================")
}
/**
@ -347,36 +432,46 @@ function printHeader(header: string) {
* @param packageDir - The directory containing the package.json of the target project
*/
async function removeExistingAutomerge(packageDir: string) {
await fsPromises.rm(path.join(packageDir, "node_modules", "@automerge"), {recursive: true, force: true})
await fsPromises.rm(path.join(packageDir, "node_modules", "automerge"), {recursive: true, force: true})
await fsPromises.rm(path.join(packageDir, "node_modules", "@automerge"), {
recursive: true,
force: true,
})
await fsPromises.rm(path.join(packageDir, "node_modules", "automerge"), {
recursive: true,
force: true,
})
}
type SpawnResult = {
stdout?: Buffer,
stderr?: Buffer,
stdout?: Buffer
stderr?: Buffer
}
async function spawnAndWait(cmd: string, args: Array<string>, options: child_process.SpawnOptions): Promise<SpawnResult> {
const child = spawn(cmd, args, options)
let stdout = null
let stderr = null
if (child.stdout) {
stdout = []
child.stdout.on("data", data => stdout.push(data))
}
if (child.stderr) {
stderr = []
child.stderr.on("data", data => stderr.push(data))
}
async function spawnAndWait(
cmd: string,
args: Array<string>,
options: child_process.SpawnOptions
): Promise<SpawnResult> {
const child = spawn(cmd, args, options)
let stdout = null
let stderr = null
if (child.stdout) {
stdout = []
child.stdout.on("data", data => stdout.push(data))
}
if (child.stderr) {
stderr = []
child.stderr.on("data", data => stderr.push(data))
}
const [exit, _signal] = await once(child, "exit")
if (exit && exit !== 0) {
throw new Error("nonzero exit code")
}
return {
stderr: stderr? Buffer.concat(stderr) : null,
stdout: stdout ? Buffer.concat(stdout) : null
}
const [exit, _signal] = await once(child, "exit")
if (exit && exit !== 0) {
throw new Error("nonzero exit code")
}
return {
stderr: stderr ? Buffer.concat(stderr) : null,
stdout: stdout ? Buffer.concat(stdout) : null,
}
}
/**
@ -387,29 +482,27 @@ async function spawnAndWait(cmd: string, args: Array<string>, options: child_pro
* okay I Promise.
*/
async function removeFromVerdaccio(packageName: string) {
await fsPromises.rm(path.join(VERDACCIO_DB_PATH, packageName), {force: true, recursive: true})
await fsPromises.rm(path.join(VERDACCIO_DB_PATH, packageName), {
force: true,
recursive: true,
})
}
async function yarnPublish(registryUrl: string, cwd: string) {
await spawnAndWait(
"yarn",
[
"--registry",
registryUrl,
"--cwd",
cwd,
"publish",
"--non-interactive",
],
{
stdio: "inherit",
env: {
...process.env,
FORCE_COLOR: "true",
// This is a fake token, it just has to be the right format
npm_config__auth: "//localhost:4873/:_authToken=Gp2Mgxm4faa/7wp0dMSuRA=="
}
})
await spawnAndWait(
"yarn",
["--registry", registryUrl, "--cwd", cwd, "publish", "--non-interactive"],
{
stdio: "inherit",
env: {
...process.env,
FORCE_COLOR: "true",
// This is a fake token, it just has to be the right format
npm_config__auth:
"//localhost:4873/:_authToken=Gp2Mgxm4faa/7wp0dMSuRA==",
},
}
)
}
/**
@ -419,20 +512,23 @@ async function yarnPublish(registryUrl: string, cwd: string) {
* @param promise - the promise to wait for @param timeout - the delay in
* milliseconds to wait before throwing
*/
async function withTimeout<T>(promise: Promise<T>, timeout: number): Promise<T> {
type Step = "timed-out" | {result: T}
const timedOut: () => Promise<Step> = async () => {
await setTimeout(timeout)
return "timed-out"
}
const succeeded: () => Promise<Step> = async () => {
const result = await promise
return {result}
}
const result = await Promise.race([timedOut(), succeeded()])
if (result === "timed-out") {
throw new Error("timed out")
} else {
return result.result
}
async function withTimeout<T>(
promise: Promise<T>,
timeout: number
): Promise<T> {
type Step = "timed-out" | { result: T }
const timedOut: () => Promise<Step> = async () => {
await setTimeout(timeout)
return "timed-out"
}
const succeeded: () => Promise<Step> = async () => {
const result = await promise
return { result }
}
const result = await Promise.race([timedOut(), succeeded()])
if (result === "timed-out") {
throw new Error("timed out")
} else {
return result.result
}
}
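The `withTimeout` helper above races the wrapped promise against a timer, using a sentinel value to tell the two apart. The same pattern in isolation (using the global `setTimeout` instead of `timers/promises`) can be sketched as:

```javascript
// Minimal sketch of the Promise.race timeout pattern used by withTimeout:
// whichever promise settles first wins, and the "timed-out" sentinel
// distinguishes the timer from the real result.
function withTimeout(promise, ms) {
  const timedOut = new Promise(resolve =>
    setTimeout(() => resolve("timed-out"), ms)
  )
  // Wrap the result so a resolved value can never collide with the sentinel
  const succeeded = promise.then(result => ({ result }))
  return Promise.race([timedOut, succeeded]).then(outcome => {
    if (outcome === "timed-out") throw new Error("timed out")
    return outcome.result
  })
}
```

Wrapping the real result in an object is what makes the sentinel comparison safe: even if the wrapped promise resolves to the string `"timed-out"`, it arrives as `{ result: "timed-out" }` and is not mistaken for the timer.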


@ -1,6 +1,6 @@
{
"compilerOptions": {
"types": ["node"]
},
"module": "nodenext"
"compilerOptions": {
"types": ["node"]
},
"module": "nodenext"
}


@ -4,22 +4,22 @@ auth:
file: ./htpasswd
publish:
allow_offline: true
logs: {type: stdout, format: pretty, level: info}
packages:
logs: { type: stdout, format: pretty, level: info }
packages:
"@automerge/automerge-wasm":
access: "$all"
publish: "$all"
access: "$all"
publish: "$all"
"@automerge/automerge":
access: "$all"
publish: "$all"
access: "$all"
publish: "$all"
"*":
access: "$all"
publish: "$all"
proxy: npmjs
access: "$all"
publish: "$all"
proxy: npmjs
"@*/*":
access: "$all"
publish: "$all"
proxy: npmjs
access: "$all"
publish: "$all"
proxy: npmjs
uplinks:
npmjs:
url: https://registry.npmjs.org/


@ -54,6 +54,6 @@ In the root of the project add the following contents to `craco.config.js`
const cracoWasm = require("craco-wasm")
module.exports = {
plugins: [cracoWasm()]
plugins: [cracoWasm()],
}
```


@ -1,5 +1,5 @@
const cracoWasm = require("craco-wasm")
module.exports = {
plugins: [cracoWasm()]
plugins: [cracoWasm()],
}


@ -1,12 +1,11 @@
import * as Automerge from "@automerge/automerge"
import logo from './logo.svg';
import './App.css';
import logo from "./logo.svg"
import "./App.css"
let doc = Automerge.init()
doc = Automerge.change(doc, (d) => d.hello = "from automerge")
doc = Automerge.change(doc, d => (d.hello = "from automerge"))
const result = JSON.stringify(doc)
function App() {
return (
<div className="App">
@ -15,7 +14,7 @@ function App() {
<p>{result}</p>
</header>
</div>
);
)
}
export default App;
export default App


@ -1,8 +1,8 @@
import { render, screen } from '@testing-library/react';
import App from './App';
import { render, screen } from "@testing-library/react"
import App from "./App"
test('renders learn react link', () => {
render(<App />);
const linkElement = screen.getByText(/learn react/i);
expect(linkElement).toBeInTheDocument();
});
test("renders learn react link", () => {
render(<App />)
const linkElement = screen.getByText(/learn react/i)
expect(linkElement).toBeInTheDocument()
})


@ -1,13 +1,13 @@
body {
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen",
"Ubuntu", "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue",
sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
code {
font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',
font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New",
monospace;
}


@ -1,17 +1,17 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';
import React from "react"
import ReactDOM from "react-dom/client"
import "./index.css"
import App from "./App"
import reportWebVitals from "./reportWebVitals"
const root = ReactDOM.createRoot(document.getElementById('root'));
const root = ReactDOM.createRoot(document.getElementById("root"))
root.render(
<React.StrictMode>
<App />
</React.StrictMode>
);
)
// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
reportWebVitals();
reportWebVitals()


@ -1,13 +1,13 @@
const reportWebVitals = onPerfEntry => {
if (onPerfEntry && onPerfEntry instanceof Function) {
import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {
getCLS(onPerfEntry);
getFID(onPerfEntry);
getFCP(onPerfEntry);
getLCP(onPerfEntry);
getTTFB(onPerfEntry);
});
import("web-vitals").then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {
getCLS(onPerfEntry)
getFID(onPerfEntry)
getFCP(onPerfEntry)
getLCP(onPerfEntry)
getTTFB(onPerfEntry)
})
}
};
}
export default reportWebVitals;
export default reportWebVitals

@ -2,4 +2,4 @@
// allows you to do things like:
// expect(element).toHaveTextContent(/react/i)
// learn more: https://github.com/testing-library/jest-dom
import '@testing-library/jest-dom';
import "@testing-library/jest-dom"

@ -5845,9 +5845,9 @@ json-stable-stringify-without-jsonify@^1.0.1:
integrity sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==
json5@^1.0.1:
version "1.0.1"
resolved "http://localhost:4873/json5/-/json5-1.0.1.tgz#779fb0018604fa854eacbf6252180d83543e3dbe"
integrity sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==
version "1.0.2"
resolved "https://registry.yarnpkg.com/json5/-/json5-1.0.2.tgz#63d98d60f21b313b77c4d6da18bfa69d80e1d593"
integrity sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==
dependencies:
minimist "^1.2.0"
@ -6165,9 +6165,9 @@ minimatch@^5.0.1:
brace-expansion "^2.0.1"
minimist@^1.2.0, minimist@^1.2.6:
version "1.2.6"
resolved "http://localhost:4873/minimist/-/minimist-1.2.6.tgz#8637a5b759ea0d6e98702cfb3a9283323c93af44"
integrity sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==
version "1.2.7"
resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.7.tgz#daa1c4d91f507390437c6a8bc01078e7000c4d18"
integrity sha512-bzfL1YUZsP41gmu/qjrEk0Q6i2ix/cVeAhbCbqH9u3zYutS1cLg00qhrD0M2MVdCcx4Sc0UpP2eBWo9rotpq6g==
mkdirp@~0.5.1:
version "0.5.6"

@ -7,6 +7,7 @@ There are three things you need to do to get WASM packaging working with vite:
3. Exclude `automerge-wasm` from the optimizer
First, install the packages we need:
```bash
yarn add vite-plugin-top-level-await
yarn add vite-plugin-wasm
@ -20,22 +21,22 @@ import wasm from "vite-plugin-wasm"
import topLevelAwait from "vite-plugin-top-level-await"
export default defineConfig({
plugins: [topLevelAwait(), wasm()],
// This is only necessary if you are using `SharedWorker` or `WebWorker`, as
// documented in https://vitejs.dev/guide/features.html#import-with-constructors
worker: {
format: "es",
plugins: [topLevelAwait(), wasm()]
},
plugins: [topLevelAwait(), wasm()],
optimizeDeps: {
// This is necessary because otherwise `vite dev` includes two separate
// versions of the JS wrapper. This causes problems because the JS
// wrapper has a module level variable to track JS side heap
// allocations, initializing this twice causes horrible breakage
exclude: ["@automerge/automerge-wasm"]
}
// This is only necessary if you are using `SharedWorker` or `WebWorker`, as
// documented in https://vitejs.dev/guide/features.html#import-with-constructors
worker: {
format: "es",
plugins: [topLevelAwait(), wasm()],
},
optimizeDeps: {
// This is necessary because otherwise `vite dev` includes two separate
// versions of the JS wrapper. This causes problems because the JS
// wrapper has a module level variable to track JS side heap
// allocations, initializing this twice causes horrible breakage
exclude: ["@automerge/automerge-wasm"],
},
})
```
@ -51,4 +52,3 @@ yarn vite
yarn install
yarn dev
```

@ -1,15 +1,15 @@
import * as Automerge from "/node_modules/.vite/deps/automerge-js.js?v=6e973f28";
console.log(Automerge);
let doc = Automerge.init();
doc = Automerge.change(doc, (d) => d.hello = "from automerge-js");
console.log(doc);
const result = JSON.stringify(doc);
import * as Automerge from "/node_modules/.vite/deps/automerge-js.js?v=6e973f28"
console.log(Automerge)
let doc = Automerge.init()
doc = Automerge.change(doc, d => (d.hello = "from automerge-js"))
console.log(doc)
const result = JSON.stringify(doc)
if (typeof document !== "undefined") {
const element = document.createElement("div");
element.innerHTML = JSON.stringify(result);
document.body.appendChild(element);
const element = document.createElement("div")
element.innerHTML = JSON.stringify(result)
document.body.appendChild(element)
} else {
console.log("node:", result);
console.log("node:", result)
}
//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbIi9ob21lL2FsZXgvUHJvamVjdHMvYXV0b21lcmdlL2F1dG9tZXJnZS1ycy9hdXRvbWVyZ2UtanMvZXhhbXBsZXMvdml0ZS9zcmMvbWFpbi50cyJdLCJzb3VyY2VzQ29udGVudCI6WyJpbXBvcnQgKiBhcyBBdXRvbWVyZ2UgZnJvbSBcImF1dG9tZXJnZS1qc1wiXG5cbi8vIGhlbGxvIHdvcmxkIGNvZGUgdGhhdCB3aWxsIHJ1biBjb3JyZWN0bHkgb24gd2ViIG9yIG5vZGVcblxuY29uc29sZS5sb2coQXV0b21lcmdlKVxubGV0IGRvYyA9IEF1dG9tZXJnZS5pbml0KClcbmRvYyA9IEF1dG9tZXJnZS5jaGFuZ2UoZG9jLCAoZDogYW55KSA9PiBkLmhlbGxvID0gXCJmcm9tIGF1dG9tZXJnZS1qc1wiKVxuY29uc29sZS5sb2coZG9jKVxuY29uc3QgcmVzdWx0ID0gSlNPTi5zdHJpbmdpZnkoZG9jKVxuXG5pZiAodHlwZW9mIGRvY3VtZW50ICE9PSAndW5kZWZpbmVkJykge1xuICAgIC8vIGJyb3dzZXJcbiAgICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnZGl2Jyk7XG4gICAgZWxlbWVudC5pbm5lckhUTUwgPSBKU09OLnN0cmluZ2lmeShyZXN1bHQpXG4gICAgZG9jdW1lbnQuYm9keS5hcHBlbmRDaGlsZChlbGVtZW50KTtcbn0gZWxzZSB7XG4gICAgLy8gc2VydmVyXG4gICAgY29uc29sZS5sb2coXCJub2RlOlwiLCByZXN1bHQpXG59XG5cbiJdLCJtYXBwaW5ncyI6IkFBQUEsWUFBWSxlQUFlO0FBSTNCLFFBQVEsSUFBSSxTQUFTO0FBQ3JCLElBQUksTUFBTSxVQUFVLEtBQUs7QUFDekIsTUFBTSxVQUFVLE9BQU8sS0FBSyxDQUFDLE1BQVcsRUFBRSxRQUFRLG1CQUFtQjtBQUNyRSxRQUFRLElBQUksR0FBRztBQUNmLE1BQU0sU0FBUyxLQUFLLFVBQVUsR0FBRztBQUVqQyxJQUFJLE9BQU8sYUFBYSxhQUFhO0FBRWpDLFFBQU0sVUFBVSxTQUFTLGNBQWMsS0FBSztBQUM1QyxVQUFRLFlBQVksS0FBSyxVQUFVLE1BQU07QUFDekMsV0FBUyxLQUFLLFlBQVksT0FBTztBQUNyQyxPQUFPO0FBRUgsVUFBUSxJQUFJLFNBQVMsTUFBTTtBQUMvQjsiLCJuYW1lcyI6W119
//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJzb3VyY2VzIjpbIi9ob21lL2FsZXgvUHJvamVjdHMvYXV0b21lcmdlL2F1dG9tZXJnZS1ycy9hdXRvbWVyZ2UtanMvZXhhbXBsZXMvdml0ZS9zcmMvbWFpbi50cyJdLCJzb3VyY2VzQ29udGVudCI6WyJpbXBvcnQgKiBhcyBBdXRvbWVyZ2UgZnJvbSBcImF1dG9tZXJnZS1qc1wiXG5cbi8vIGhlbGxvIHdvcmxkIGNvZGUgdGhhdCB3aWxsIHJ1biBjb3JyZWN0bHkgb24gd2ViIG9yIG5vZGVcblxuY29uc29sZS5sb2coQXV0b21lcmdlKVxubGV0IGRvYyA9IEF1dG9tZXJnZS5pbml0KClcbmRvYyA9IEF1dG9tZXJnZS5jaGFuZ2UoZG9jLCAoZDogYW55KSA9PiBkLmhlbGxvID0gXCJmcm9tIGF1dG9tZXJnZS1qc1wiKVxuY29uc29sZS5sb2coZG9jKVxuY29uc3QgcmVzdWx0ID0gSlNPTi5zdHJpbmdpZnkoZG9jKVxuXG5pZiAodHlwZW9mIGRvY3VtZW50ICE9PSAndW5kZWZpbmVkJykge1xuICAgIC8vIGJyb3dzZXJcbiAgICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnZGl2Jyk7XG4gICAgZWxlbWVudC5pbm5lckhUTUwgPSBKU09OLnN0cmluZ2lmeShyZXN1bHQpXG4gICAgZG9jdW1lbnQuYm9keS5hcHBlbmRDaGlsZChlbGVtZW50KTtcbn0gZWxzZSB7XG4gICAgLy8gc2VydmVyXG4gICAgY29uc29sZS5sb2coXCJub2RlOlwiLCByZXN1bHQpXG59XG5cbiJdLCJtYXBwaW5ncyI6IkFBQUEsWUFBWSxlQUFlO0FBSTNCLFFBQVEsSUFBSSxTQUFTO0FBQ3JCLElBQUksTUFBTSxVQUFVLEtBQUs7QUFDekIsTUFBTSxVQUFVLE9BQU8sS0FBSyxDQUFDLE1BQVcsRUFBRSxRQUFRLG1CQUFtQjtBQUNyRSxRQUFRLElBQUksR0FBRztBQUNmLE1BQU0sU0FBUyxLQUFLLFVBQVUsR0FBRztBQUVqQyxJQUFJLE9BQU8sYUFBYSxhQUFhO0FBRWpDLFFBQU0sVUFBVSxTQUFTLGNBQWMsS0FBSztBQUM1QyxVQUFRLFlBQVksS0FBSyxVQUFVLE1BQU07QUFDekMsV0FBUyxLQUFLLFlBQVksT0FBTztBQUNyQyxPQUFPO0FBRUgsVUFBUSxJQUFJLFNBQVMsTUFBTTtBQUMvQjsiLCJuYW1lcyI6W119

@ -4,6 +4,6 @@ export function setupCounter(element: HTMLButtonElement) {
counter = count
element.innerHTML = `count is ${counter}`
}
element.addEventListener('click', () => setCounter(++counter))
element.addEventListener("click", () => setCounter(++counter))
setCounter(0)
}

@ -3,16 +3,15 @@ import * as Automerge from "@automerge/automerge"
// hello world code that will run correctly on web or node
let doc = Automerge.init()
doc = Automerge.change(doc, (d: any) => d.hello = "from automerge")
doc = Automerge.change(doc, (d: any) => (d.hello = "from automerge"))
const result = JSON.stringify(doc)
if (typeof document !== 'undefined') {
// browser
const element = document.createElement('div');
element.innerHTML = JSON.stringify(result)
document.body.appendChild(element);
if (typeof document !== "undefined") {
// browser
const element = document.createElement("div")
element.innerHTML = JSON.stringify(result)
document.body.appendChild(element)
} else {
// server
console.log("node:", result)
// server
console.log("node:", result)
}

@ -3,20 +3,20 @@ import wasm from "vite-plugin-wasm"
import topLevelAwait from "vite-plugin-top-level-await"
export default defineConfig({
plugins: [topLevelAwait(), wasm()],
// This is only necessary if you are using `SharedWorker` or `WebWorker`, as
// documented in https://vitejs.dev/guide/features.html#import-with-constructors
worker: {
format: "es",
plugins: [topLevelAwait(), wasm()],
},
// This is only necessary if you are using `SharedWorker` or `WebWorker`, as
// documented in https://vitejs.dev/guide/features.html#import-with-constructors
worker: {
format: "es",
plugins: [topLevelAwait(), wasm()]
},
optimizeDeps: {
// This is necessary because otherwise `vite dev` includes two separate
// versions of the JS wrapper. This causes problems because the JS
// wrapper has a module level variable to track JS side heap
// allocations, initializing this twice causes horrible breakage
exclude: ["@automerge/automerge-wasm"]
}
optimizeDeps: {
// This is necessary because otherwise `vite dev` includes two separate
// versions of the JS wrapper. This causes problems because the JS
// wrapper has a module level variable to track JS side heap
// allocations, initializing this twice causes horrible breakage
exclude: ["@automerge/automerge-wasm"],
},
})

@ -1,36 +1,34 @@
# Webpack + Automerge
Getting WASM working in webpack 5 is very easy. You just need to enable the
`asyncWebAssembly`
[experiment](https://webpack.js.org/configuration/experiments/). For example:
```javascript
const path = require('path');
const path = require("path")
const clientConfig = {
experiments: { asyncWebAssembly: true },
target: 'web',
entry: './src/index.js',
target: "web",
entry: "./src/index.js",
output: {
filename: 'main.js',
path: path.resolve(__dirname, 'public'),
filename: "main.js",
path: path.resolve(__dirname, "public"),
},
mode: "development", // or production
performance: { // we dont want the wasm blob to generate warnings
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
}
};
performance: {
// we dont want the wasm blob to generate warnings
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000,
},
}
module.exports = clientConfig
```
## Running the example
```bash
yarn install
yarn start

@ -3,16 +3,15 @@ import * as Automerge from "@automerge/automerge"
// hello world code that will run correctly on web or node
let doc = Automerge.init()
doc = Automerge.change(doc, (d) => d.hello = "from automerge")
doc = Automerge.change(doc, d => (d.hello = "from automerge"))
const result = JSON.stringify(doc)
if (typeof document !== 'undefined') {
if (typeof document !== "undefined") {
// browser
const element = document.createElement('div');
const element = document.createElement("div")
element.innerHTML = JSON.stringify(result)
document.body.appendChild(element);
document.body.appendChild(element)
} else {
// server
console.log("node:", result)
}

@ -1,36 +1,37 @@
const path = require('path');
const nodeExternals = require('webpack-node-externals');
const path = require("path")
const nodeExternals = require("webpack-node-externals")
// the most basic webpack config for node or web targets for automerge-wasm
const serverConfig = {
// basic setup for bundling a node package
target: 'node',
target: "node",
externals: [nodeExternals()],
externalsPresets: { node: true },
entry: './src/index.js',
entry: "./src/index.js",
output: {
filename: 'node.js',
path: path.resolve(__dirname, 'dist'),
filename: "node.js",
path: path.resolve(__dirname, "dist"),
},
mode: "development", // or production
};
}
const clientConfig = {
experiments: { asyncWebAssembly: true },
target: 'web',
entry: './src/index.js',
target: "web",
entry: "./src/index.js",
output: {
filename: 'main.js',
path: path.resolve(__dirname, 'public'),
filename: "main.js",
path: path.resolve(__dirname, "public"),
},
mode: "development", // or production
performance: { // we dont want the wasm blob to generate warnings
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000
}
};
performance: {
// we dont want the wasm blob to generate warnings
hints: false,
maxEntrypointSize: 512000,
maxAssetSize: 512000,
},
}
module.exports = [serverConfig, clientConfig];
module.exports = [serverConfig, clientConfig]

@ -4,7 +4,7 @@
"Orion Henry <orion@inkandswitch.com>",
"Martin Kleppmann"
],
"version": "2.0.1-alpha.2",
"version": "2.0.2",
"description": "Javascript implementation of automerge, backed by @automerge/automerge-wasm",
"homepage": "https://github.com/automerge/automerge-rs/tree/main/wrappers/javascript",
"repository": "github:automerge/automerge-rs",
@ -12,26 +12,10 @@
"README.md",
"LICENSE",
"package.json",
"index.d.ts",
"dist/*.d.ts",
"dist/cjs/constants.js",
"dist/cjs/types.js",
"dist/cjs/numbers.js",
"dist/cjs/index.js",
"dist/cjs/uuid.js",
"dist/cjs/counter.js",
"dist/cjs/low_level.js",
"dist/cjs/text.js",
"dist/cjs/proxies.js",
"dist/mjs/constants.js",
"dist/mjs/types.js",
"dist/mjs/numbers.js",
"dist/mjs/index.js",
"dist/mjs/uuid.js",
"dist/mjs/counter.js",
"dist/mjs/low_level.js",
"dist/mjs/text.js",
"dist/mjs/proxies.js"
"dist/index.d.ts",
"dist/cjs/**/*.js",
"dist/mjs/**/*.js",
"dist/*.d.ts"
],
"types": "./dist/index.d.ts",
"module": "./dist/mjs/index.js",
@ -39,9 +23,11 @@
"license": "MIT",
"scripts": {
"lint": "eslint src",
"build": "tsc -p config/mjs.json && tsc -p config/cjs.json && tsc --emitDeclarationOnly",
"build": "tsc -p config/mjs.json && tsc -p config/cjs.json && tsc -p config/declonly.json --emitDeclarationOnly",
"test": "ts-mocha test/*.ts",
"watch-docs": "typedoc src/index.ts --watch --readme typedoc-readme.md"
"deno:build": "denoify && node ./scripts/deno-prefixer.mjs",
"deno:test": "deno test ./deno-tests/deno.ts --allow-read --allow-net",
"watch-docs": "typedoc src/index.ts --watch --readme none"
},
"devDependencies": {
"@types/expect": "^24.3.0",
@ -49,17 +35,19 @@
"@types/uuid": "^9.0.0",
"@typescript-eslint/eslint-plugin": "^5.46.0",
"@typescript-eslint/parser": "^5.46.0",
"denoify": "^1.4.5",
"eslint": "^8.29.0",
"fast-sha256": "^1.3.0",
"mocha": "^10.2.0",
"pako": "^2.1.0",
"prettier": "^2.8.1",
"ts-mocha": "^10.0.0",
"ts-node": "^10.9.1",
"typedoc": "^0.23.22",
"typescript": "^4.9.4"
},
"dependencies": {
"@automerge/automerge-wasm": "0.1.19",
"@automerge/automerge-wasm": "0.1.25",
"uuid": "^9.0.0"
}
}

@ -0,0 +1,9 @@
import * as fs from "fs"
const files = ["./deno_dist/proxies.ts"]
for (const filepath of files) {
const data = fs.readFileSync(filepath)
fs.writeFileSync(filepath, "// @ts-nocheck \n" + data)
console.log('Prepended "// @ts-nocheck" to ' + filepath)
}

@ -0,0 +1,42 @@
// @denoify-ignore
import { makeThisModuleAnExecutableReplacer } from "denoify"
// import { assert } from "tsafe";
// import * as path from "path";
makeThisModuleAnExecutableReplacer(
async ({ parsedImportExportStatement, destDirPath, version }) => {
version = process.env.VERSION || version
switch (parsedImportExportStatement.parsedArgument.nodeModuleName) {
case "@automerge/automerge-wasm":
{
const moduleRoot =
process.env.ROOT_MODULE ||
`https://deno.land/x/automerge_wasm@${version}`
/*
*We expect not to run against statements like
*import(..).then(...)
*or
*export * from "..."
*in our code.
*/
if (
!parsedImportExportStatement.isAsyncImport &&
(parsedImportExportStatement.statementType === "import" ||
parsedImportExportStatement.statementType === "export")
) {
if (parsedImportExportStatement.isTypeOnly) {
return `${parsedImportExportStatement.statementType} type ${parsedImportExportStatement.target} from "${moduleRoot}/index.d.ts";`
} else {
return `${parsedImportExportStatement.statementType} ${parsedImportExportStatement.target} from "${moduleRoot}/automerge_wasm.js";`
}
}
}
break
}
//The replacer should return undefined when we want to let denoify replace the statement
return undefined
}
)

javascript/src/conflicts.ts (new file)

@ -0,0 +1,100 @@
import { Counter, type AutomergeValue } from "./types"
import { Text } from "./text"
import { type AutomergeValue as UnstableAutomergeValue } from "./unstable_types"
import { type Target, Text1Target, Text2Target } from "./proxies"
import { mapProxy, listProxy, ValueType } from "./proxies"
import type { Prop, ObjID } from "@automerge/automerge-wasm"
import { Automerge } from "@automerge/automerge-wasm"
export type ConflictsF<T extends Target> = { [key: string]: ValueType<T> }
export type Conflicts = ConflictsF<Text1Target>
export type UnstableConflicts = ConflictsF<Text2Target>
export function stableConflictAt(
context: Automerge,
objectId: ObjID,
prop: Prop
): Conflicts | undefined {
return conflictAt<Text1Target>(
context,
objectId,
prop,
true,
(context: Automerge, conflictId: ObjID): AutomergeValue => {
return new Text(context.text(conflictId))
}
)
}
export function unstableConflictAt(
context: Automerge,
objectId: ObjID,
prop: Prop
): UnstableConflicts | undefined {
return conflictAt<Text2Target>(
context,
objectId,
prop,
true,
(context: Automerge, conflictId: ObjID): UnstableAutomergeValue => {
return context.text(conflictId)
}
)
}
function conflictAt<T extends Target>(
context: Automerge,
objectId: ObjID,
prop: Prop,
textV2: boolean,
handleText: (a: Automerge, conflictId: ObjID) => ValueType<T>
): ConflictsF<T> | undefined {
const values = context.getAll(objectId, prop)
if (values.length <= 1) {
return
}
const result: ConflictsF<T> = {}
for (const fullVal of values) {
switch (fullVal[0]) {
case "map":
result[fullVal[1]] = mapProxy<T>(
context,
fullVal[1],
textV2,
[prop],
true
)
break
case "list":
result[fullVal[1]] = listProxy<T>(
context,
fullVal[1],
textV2,
[prop],
true
)
break
case "text":
result[fullVal[1]] = handleText(context, fullVal[1] as ObjID)
break
case "str":
case "uint":
case "int":
case "f64":
case "boolean":
case "bytes":
case "null":
result[fullVal[2]] = fullVal[1] as ValueType<T>
break
case "counter":
result[fullVal[2]] = new Counter(fullVal[1]) as ValueType<T>
break
case "timestamp":
result[fullVal[2]] = new Date(fullVal[1]) as ValueType<T>
break
default:
throw RangeError(`datatype ${fullVal[0]} unimplemented`)
}
}
return result
}
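`conflictAt` above only builds a result when `getAll` reports more than one concurrent value, and it keys each surviving value by an identifier so no writer's value is silently dropped. That contract can be sketched in isolation (plain JavaScript with hypothetical op ids, not real automerge output):

```javascript
// Build a conflicts map from [value, opId] pairs, mirroring the
// "return undefined unless there is a real conflict" contract above.
function conflictsOf(values) {
  if (values.length <= 1) {
    return undefined
  }
  const result = {}
  for (const [value, opId] of values) {
    result[opId] = value
  }
  return result
}

// A single writer produces no conflict object at all.
console.log(conflictsOf([["alice", "1@actor1"]])) // undefined

// Two concurrent writers: both values survive, keyed by op id.
console.log(conflictsOf([
  ["alice", "1@actor1"],
  ["bob", "1@actor2"],
]))
```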

@ -1,13 +1,12 @@
// Properties of the document root object
export const STATE = Symbol.for('_am_meta') // symbol used to hide application metadata on automerge objects
export const TRACE = Symbol.for('_am_trace') // used for debugging
export const OBJECT_ID = Symbol.for('_am_objectId') // synbol used to hide the object id on automerge objects
export const IS_PROXY = Symbol.for('_am_isProxy') // symbol used to test if the document is a proxy object
export const UINT = Symbol.for('_am_uint')
export const INT = Symbol.for('_am_int')
export const F64 = Symbol.for('_am_f64')
export const COUNTER = Symbol.for('_am_counter')
export const TEXT = Symbol.for('_am_text')
export const STATE = Symbol.for("_am_meta") // symbol used to hide application metadata on automerge objects
export const TRACE = Symbol.for("_am_trace") // used for debugging
export const OBJECT_ID = Symbol.for("_am_objectId") // symbol used to hide the object id on automerge objects
export const IS_PROXY = Symbol.for("_am_isProxy") // symbol used to test if the document is a proxy object
export const UINT = Symbol.for("_am_uint")
export const INT = Symbol.for("_am_int")
export const F64 = Symbol.for("_am_f64")
export const COUNTER = Symbol.for("_am_counter")
export const TEXT = Symbol.for("_am_text")

@ -1,4 +1,4 @@
import { Automerge, ObjID, Prop } from "@automerge/automerge-wasm"
import { Automerge, type ObjID, type Prop } from "@automerge/automerge-wasm"
import { COUNTER } from "./constants"
/**
* The most basic CRDT: an integer value that can be changed only by
@ -6,7 +6,7 @@ import { COUNTER } from "./constants"
* the value trivially converges.
*/
export class Counter {
value : number;
value: number
constructor(value?: number) {
this.value = value || 0
@ -21,7 +21,7 @@ export class Counter {
* concatenating it with another string, as in `x + ''`.
* https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/valueOf
*/
valueOf() : number {
valueOf(): number {
return this.value
}
@ -30,7 +30,7 @@ export class Counter {
* this method is called e.g. when you do `['value: ', x].join('')` or when
* you use string interpolation: `value: ${x}`.
*/
toString() : string {
toString(): string {
return this.valueOf().toString()
}
@ -38,7 +38,7 @@ export class Counter {
* Returns the counter value, so that a JSON serialization of an Automerge
* document represents the counter simply as an integer.
*/
toJSON() : number {
toJSON(): number {
return this.value
}
}
@ -49,24 +49,30 @@ export class Counter {
*/
class WriteableCounter extends Counter {
context: Automerge
path: string[]
path: Prop[]
objectId: ObjID
key: Prop
constructor(value: number, context: Automerge, path: string[], objectId: ObjID, key: Prop) {
constructor(
value: number,
context: Automerge,
path: Prop[],
objectId: ObjID,
key: Prop
) {
super(value)
this.context = context
this.path = path
this.objectId = objectId
this.key = key
}
/**
* Increases the value of the counter by `delta`. If `delta` is not given,
* increases the value of the counter by 1.
*/
increment(delta: number) : number {
delta = typeof delta === 'number' ? delta : 1
increment(delta: number): number {
delta = typeof delta === "number" ? delta : 1
this.context.increment(this.objectId, this.key, delta)
this.value += delta
return this.value
@ -76,8 +82,8 @@ class WriteableCounter extends Counter {
* Decreases the value of the counter by `delta`. If `delta` is not given,
* decreases the value of the counter by 1.
*/
decrement(delta: number) : number {
return this.increment(typeof delta === 'number' ? -delta : -1)
decrement(delta: number): number {
return this.increment(typeof delta === "number" ? -delta : -1)
}
}
@ -87,8 +93,14 @@ class WriteableCounter extends Counter {
* `objectId` is the ID of the object containing the counter, and `key` is
* the property name (key in map, or index in list) where the counter is
* located.
*/
export function getWriteableCounter(value: number, context: Automerge, path: string[], objectId: ObjID, key: Prop) {
*/
export function getWriteableCounter(
value: number,
context: Automerge,
path: Prop[],
objectId: ObjID,
key: Prop
): WriteableCounter {
return new WriteableCounter(value, context, path, objectId, key)
}
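The `WriteableCounter` changes above rest on the property the doc comment states: because a counter can only be incremented and decremented, replicas that see the same set of deltas converge regardless of delivery order. A minimal sketch of why that holds (plain JavaScript, independent of the automerge API):

```javascript
// A counter CRDT converges because addition is commutative and
// associative: applying the same multiset of deltas in any order
// yields the same value.
function applyDeltas(initial, deltas) {
  return deltas.reduce((value, delta) => value + delta, initial)
}

// Two replicas receive the same increments in different orders.
const deltas = [1, -2, 5, 1]
const replicaA = applyDeltas(0, deltas)
const replicaB = applyDeltas(0, [...deltas].reverse())

console.log(replicaA, replicaB) // both 5
```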

(File diff suppressed because it is too large)

@ -0,0 +1,43 @@
import { type ObjID, type Heads, Automerge } from "@automerge/automerge-wasm"
import { STATE, OBJECT_ID, TRACE, IS_PROXY } from "./constants"
import type { Doc, PatchCallback } from "./types"
export interface InternalState<T> {
handle: Automerge
heads: Heads | undefined
freeze: boolean
patchCallback?: PatchCallback<T>
textV2: boolean
}
export function _state<T>(doc: Doc<T>, checkroot = true): InternalState<T> {
if (typeof doc !== "object") {
throw new RangeError("must be the document root")
}
const state = Reflect.get(doc, STATE) as InternalState<T>
if (
state === undefined ||
state == null ||
(checkroot && _obj(doc) !== "_root")
) {
throw new RangeError("must be the document root")
}
return state
}
export function _trace<T>(doc: Doc<T>): string | undefined {
return Reflect.get(doc, TRACE) as string
}
export function _obj<T>(doc: Doc<T>): ObjID | null {
if (!(typeof doc === "object") || doc === null) {
return null
}
return Reflect.get(doc, OBJECT_ID) as ObjID
}
export function _is_proxy<T>(doc: Doc<T>): boolean {
return !!Reflect.get(doc, IS_PROXY)
}
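The `internal_state.ts` helpers above read metadata through `Reflect.get` with symbol keys (`STATE`, `OBJECT_ID`, ...), which keeps the metadata off the document's ordinary surface. A minimal sketch of that technique, with a made-up symbol and payload:

```javascript
// Metadata stored under a Symbol key is invisible to normal
// enumeration (Object.keys, JSON.stringify, for..in) but still
// reachable via Reflect.get by code that holds the symbol.
const STATE = Symbol.for("_meta")

const doc = { title: "hello" }
doc[STATE] = { heads: undefined, freeze: false }

console.log(Object.keys(doc)) // ["title"] - metadata is hidden
console.log(Reflect.get(doc, STATE).freeze) // false
```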

@ -1,25 +1,58 @@
import { Automerge, Change, DecodedChange, Actor, SyncState, SyncMessage, JsSyncState, DecodedSyncMessage } from "@automerge/automerge-wasm"
import { API } from "@automerge/automerge-wasm"
import {
type API,
Automerge,
type Change,
type DecodedChange,
type Actor,
SyncState,
type SyncMessage,
type JsSyncState,
type DecodedSyncMessage,
type ChangeToEncode,
} from "@automerge/automerge-wasm"
export type { ChangeToEncode } from "@automerge/automerge-wasm"
export function UseApi(api: API) {
for (const k in api) {
ApiHandler[k] = api[k]
// eslint-disable-next-line @typescript-eslint/no-extra-semi,@typescript-eslint/no-explicit-any
;(ApiHandler as any)[k] = (api as any)[k]
}
}
/* eslint-disable */
export const ApiHandler : API = {
create(actor?: Actor): Automerge { throw new RangeError("Automerge.use() not called") },
load(data: Uint8Array, actor?: Actor): Automerge { throw new RangeError("Automerge.use() not called (load)") },
encodeChange(change: DecodedChange): Change { throw new RangeError("Automerge.use() not called (encodeChange)") },
decodeChange(change: Change): DecodedChange { throw new RangeError("Automerge.use() not called (decodeChange)") },
initSyncState(): SyncState { throw new RangeError("Automerge.use() not called (initSyncState)") },
encodeSyncMessage(message: DecodedSyncMessage): SyncMessage { throw new RangeError("Automerge.use() not called (encodeSyncMessage)") },
decodeSyncMessage(msg: SyncMessage): DecodedSyncMessage { throw new RangeError("Automerge.use() not called (decodeSyncMessage)") },
encodeSyncState(state: SyncState): Uint8Array { throw new RangeError("Automerge.use() not called (encodeSyncState)") },
decodeSyncState(data: Uint8Array): SyncState { throw new RangeError("Automerge.use() not called (decodeSyncState)") },
exportSyncState(state: SyncState): JsSyncState { throw new RangeError("Automerge.use() not called (exportSyncState)") },
importSyncState(state: JsSyncState): SyncState { throw new RangeError("Automerge.use() not called (importSyncState)") },
export const ApiHandler: API = {
create(textV2: boolean, actor?: Actor): Automerge {
throw new RangeError("Automerge.use() not called")
},
load(data: Uint8Array, textV2: boolean, actor?: Actor): Automerge {
throw new RangeError("Automerge.use() not called (load)")
},
encodeChange(change: ChangeToEncode): Change {
throw new RangeError("Automerge.use() not called (encodeChange)")
},
decodeChange(change: Change): DecodedChange {
throw new RangeError("Automerge.use() not called (decodeChange)")
},
initSyncState(): SyncState {
throw new RangeError("Automerge.use() not called (initSyncState)")
},
encodeSyncMessage(message: DecodedSyncMessage): SyncMessage {
throw new RangeError("Automerge.use() not called (encodeSyncMessage)")
},
decodeSyncMessage(msg: SyncMessage): DecodedSyncMessage {
throw new RangeError("Automerge.use() not called (decodeSyncMessage)")
},
encodeSyncState(state: SyncState): Uint8Array {
throw new RangeError("Automerge.use() not called (encodeSyncState)")
},
decodeSyncState(data: Uint8Array): SyncState {
throw new RangeError("Automerge.use() not called (decodeSyncState)")
},
exportSyncState(state: SyncState): JsSyncState {
throw new RangeError("Automerge.use() not called (exportSyncState)")
},
importSyncState(state: JsSyncState): SyncState {
throw new RangeError("Automerge.use() not called (importSyncState)")
},
}
/* eslint-enable */
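The `ApiHandler` rewrite above is a late-binding pattern: every entry starts as a stub that throws a `RangeError` until `UseApi` copies the real implementation over it, so a missing `use()` call fails loudly instead of misbehaving. A stripped-down sketch of the same pattern (hypothetical names, not the automerge API):

```javascript
// Late-binding handler: stubs throw until a real implementation
// is installed by use().
const handler = {
  create() {
    throw new RangeError("use() not called")
  },
}

function use(api) {
  for (const k in api) {
    handler[k] = api[k]
  }
}

// Before use(): calls fail loudly rather than return garbage.
let threw = false
try {
  handler.create()
} catch (e) {
  threw = true
}

// After use(): the installed implementation is dispatched.
use({ create: () => "real document" })
console.log(threw, handler.create()) // true "real document"
```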

@ -1,12 +1,18 @@
// Convience classes to allow users to stricly specify the number type they want
// Convenience classes to allow users to strictly specify the number type they want
import { INT, UINT, F64 } from "./constants"
export class Int {
value: number;
value: number
constructor(value: number) {
if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= Number.MIN_SAFE_INTEGER)) {
if (
!(
Number.isInteger(value) &&
value <= Number.MAX_SAFE_INTEGER &&
value >= Number.MIN_SAFE_INTEGER
)
) {
throw new RangeError(`Value ${value} cannot be a uint`)
}
this.value = value
@ -16,10 +22,16 @@ export class Int {
}
export class Uint {
value: number;
value: number
constructor(value: number) {
if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= 0)) {
if (
!(
Number.isInteger(value) &&
value <= Number.MAX_SAFE_INTEGER &&
value >= 0
)
) {
throw new RangeError(`Value ${value} cannot be a uint`)
}
this.value = value
@ -29,10 +41,10 @@ export class Uint {
}
export class Float64 {
value: number;
value: number
constructor(value: number) {
if (typeof value !== 'number') {
if (typeof value !== "number") {
throw new RangeError(`Value ${value} cannot be a float64`)
}
this.value = value || 0.0
@ -40,4 +52,3 @@ export class Float64 {
Object.freeze(this)
}
}

(File diff suppressed because it is too large)

@ -0,0 +1,6 @@
export class RawString {
val: string
constructor(val: string) {
this.val = val
}
}

javascript/src/stable.ts (new file)

@ -0,0 +1,944 @@
/** @hidden **/
export { /** @hidden */ uuid } from "./uuid"
import { rootProxy } from "./proxies"
import { STATE } from "./constants"
import {
type AutomergeValue,
Counter,
type Doc,
type PatchCallback,
} from "./types"
export {
type AutomergeValue,
Counter,
type Doc,
Int,
Uint,
Float64,
type Patch,
type PatchCallback,
type ScalarValue,
} from "./types"
import { Text } from "./text"
export { Text } from "./text"
import type {
API as WasmAPI,
Actor as ActorId,
Prop,
ObjID,
Change,
DecodedChange,
Heads,
MaterializeValue,
JsSyncState,
SyncMessage,
DecodedSyncMessage,
} from "@automerge/automerge-wasm"
export type {
PutPatch,
DelPatch,
SpliceTextPatch,
InsertPatch,
IncPatch,
SyncMessage,
} from "@automerge/automerge-wasm"
/** @hidden **/
type API = WasmAPI
const SyncStateSymbol = Symbol("_syncstate")
/**
* An opaque type tracking the state of sync with a remote peer
*/
type SyncState = JsSyncState & { _opaque: typeof SyncStateSymbol }
import { ApiHandler, type ChangeToEncode, UseApi } from "./low_level"
import { Automerge } from "@automerge/automerge-wasm"
import { RawString } from "./raw_string"
import { _state, _is_proxy, _trace, _obj } from "./internal_state"
import { stableConflictAt } from "./conflicts"
/** Options passed to {@link change}, and {@link emptyChange}
* @typeParam T - The type of value contained in the document
*/
export type ChangeOptions<T> = {
/** A message which describes the changes */
message?: string
/** The unix timestamp of the change (purely advisory, not used in conflict resolution) */
time?: number
/** A callback which will be called to notify the caller of any changes to the document */
patchCallback?: PatchCallback<T>
}
/** Options passed to {@link loadIncremental}, {@link applyChanges}, and {@link receiveSyncMessage}
* @typeParam T - The type of value contained in the document
*/
export type ApplyOptions<T> = { patchCallback?: PatchCallback<T> }
/**
* A List is an extended Array that adds the two helper methods `deleteAt` and `insertAt`.
*/
export interface List<T> extends Array<T> {
insertAt(index: number, ...args: T[]): List<T>
deleteAt(index: number, numDelete?: number): List<T>
}
/**
* To extend an arbitrary type, we have to turn any arrays that are part of the type's definition into Lists.
* So we recurse through the properties of T, turning any Arrays we find into Lists.
*/
export type Extend<T> =
// is it an array? make it a list (we recursively extend the type of the array's elements as well)
T extends Array<infer T>
? List<Extend<T>>
: // is it an object? recursively extend all of its properties
// eslint-disable-next-line @typescript-eslint/ban-types
T extends Object
? { [P in keyof T]: Extend<T[P]> }
: // otherwise leave the type alone
T
/**
* Function which is called by {@link change} when making changes to a `Doc<T>`
* @typeParam T - The type of value contained in the document
*
* This function may mutate `doc`
*/
export type ChangeFn<T> = (doc: Extend<T>) => void
/** @hidden **/
export interface State<T> {
change: DecodedChange
snapshot: T
}
/** @hidden **/
export function use(api: API) {
UseApi(api)
}
import * as wasm from "@automerge/automerge-wasm"
use(wasm)
/**
* Options to be passed to {@link init} or {@link load}
* @typeParam T - The type of the value the document contains
*/
export type InitOptions<T> = {
/** The actor ID to use for this document, a random one will be generated if `null` is passed */
actor?: ActorId
/** If true the document will be frozen (via `Object.freeze`) after each change */
freeze?: boolean
/** A callback which will be called with the initial patch once the document has finished loading */
patchCallback?: PatchCallback<T>
/** @hidden */
enableTextV2?: boolean
}
/** @hidden */
export function getBackend<T>(doc: Doc<T>): Automerge {
return _state(doc).handle
}
function importOpts<T>(_actor?: ActorId | InitOptions<T>): InitOptions<T> {
if (typeof _actor === "object") {
return _actor
} else {
return { actor: _actor }
}
}
/**
* Create a new automerge document
*
* @typeParam T - The type of value contained in the document. This will be the
* type that is passed to the change closure in {@link change}
* @param _opts - Either an actorId or an {@link InitOptions} (which may
* contain an actorId). If this is null the document will be initialised with a
* random actor ID
*/
export function init<T>(_opts?: ActorId | InitOptions<T>): Doc<T> {
const opts = importOpts(_opts)
const freeze = !!opts.freeze
const patchCallback = opts.patchCallback
const handle = ApiHandler.create(opts.enableTextV2 || false, opts.actor)
handle.enablePatches(true)
handle.enableFreeze(!!opts.freeze)
handle.registerDatatype("counter", (n: number) => new Counter(n))
const textV2 = opts.enableTextV2 || false
if (textV2) {
handle.registerDatatype("str", (n: string) => new RawString(n))
} else {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
handle.registerDatatype("text", (n: any) => new Text(n))
}
const doc = handle.materialize("/", undefined, {
handle,
heads: undefined,
freeze,
patchCallback,
textV2,
}) as Doc<T>
return doc
}
/**
* Make an immutable view of an automerge document as at `heads`
*
* @remarks
* The document returned from this function cannot be passed to {@link change}.
* This is because it shares the same underlying memory as `doc`; that shared
* memory is also what makes it a very cheap copy.
*
* Note that this function will throw an error if any of the hashes in `heads`
* are not in the document.
*
* @typeParam T - The type of the value contained in the document
* @param doc - The document to create a view of
* @param heads - The hashes of the heads to create a view at
*/
export function view<T>(doc: Doc<T>, heads: Heads): Doc<T> {
const state = _state(doc)
const handle = state.handle
return state.handle.materialize("/", heads, {
...state,
handle,
heads,
}) as Doc<T>
}
/**
* Make a full writable copy of an automerge document
*
* @remarks
* Unlike {@link view} this function makes a full copy of the memory backing
* the document and can thus be passed to {@link change}. It also generates a
* new actor ID so that changes made in the new document do not create duplicate
* sequence numbers with respect to the old document. If you need control over
* the actor ID which is generated you can pass the actor ID as the second
* argument
*
* @typeParam T - The type of the value contained in the document
* @param doc - The document to clone
* @param _opts - Either an actor ID to use for the new doc or an {@link InitOptions}
*/
export function clone<T>(
doc: Doc<T>,
_opts?: ActorId | InitOptions<T>
): Doc<T> {
const state = _state(doc)
const heads = state.heads
const opts = importOpts(_opts)
const handle = state.handle.fork(opts.actor, heads)
// `change` uses the presence of state.heads to determine if we are in a view
// set it to undefined to indicate that this is a full fat document
const { heads: _oldHeads, ...stateSansHeads } = state
return handle.applyPatches(doc, { ...stateSansHeads, handle })
}
/** Explicitly free the memory backing a document. Note that this is not
* necessary in environments which support
* [`FinalizationRegistry`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry)
*/
export function free<T>(doc: Doc<T>) {
return _state(doc).handle.free()
}
/**
* Create an automerge document from a POJO
*
* @param initialState - The initial state which will be copied into the document
* @typeParam T - The type of the value passed to `from` _and_ the type the resulting document will contain
* @param _opts - Either an actor ID for the resulting document or an {@link InitOptions}; if this is null a random actor ID will be used
*
* @example
* ```
* const doc = automerge.from({
* tasks: [
* {description: "feed dogs", done: false}
* ]
* })
* ```
*/
export function from<T extends Record<string, unknown>>(
initialState: T | Doc<T>,
_opts?: ActorId | InitOptions<T>
): Doc<T> {
return change(init(_opts), d => Object.assign(d, initialState))
}
/**
* Update the contents of an automerge document
* @typeParam T - The type of the value contained in the document
* @param doc - The document to update
* @param options - Either a message, an {@link ChangeOptions}, or a {@link ChangeFn}
* @param callback - A `ChangeFn` to be used if `options` was a `string`
*
* Note that if the second argument is a function it will be used as the `ChangeFn` regardless of what the third argument is.
*
* @example A simple change
* ```
* let doc1 = automerge.init()
* doc1 = automerge.change(doc1, d => {
* d.key = "value"
* })
* assert.equal(doc1.key, "value")
* ```
*
* @example A change with a message
*
* ```
* doc1 = automerge.change(doc1, "add another value", d => {
* d.key2 = "value2"
* })
* ```
*
* @example A change with a message and a timestamp
*
* ```
* doc1 = automerge.change(doc1, {message: "add another value", time: 1640995200}, d => {
* d.key2 = "value2"
* })
* ```
*
* @example responding to a patch callback
* ```
* let patchedPath
* let patchCallback = patches => {
* patchedPath = patches[0].path
* }
* doc1 = automerge.change(doc1, {message: "add another value", time: 1640995200, patchCallback}, d => {
* d.key2 = "value2"
* })
* assert.deepEqual(patchedPath, ["key2"])
* ```
*/
export function change<T>(
doc: Doc<T>,
options: string | ChangeOptions<T> | ChangeFn<T>,
callback?: ChangeFn<T>
): Doc<T> {
if (typeof options === "function") {
return _change(doc, {}, options)
} else if (typeof callback === "function") {
if (typeof options === "string") {
options = { message: options }
}
return _change(doc, options, callback)
} else {
throw RangeError("Invalid args for change")
}
}
function progressDocument<T>(
doc: Doc<T>,
heads: Heads | null,
callback?: PatchCallback<T>
): Doc<T> {
if (heads == null) {
return doc
}
const state = _state(doc)
const nextState = { ...state, heads: undefined }
const nextDoc = state.handle.applyPatches(doc, nextState, callback)
state.heads = heads
return nextDoc
}
function _change<T>(
doc: Doc<T>,
options: ChangeOptions<T>,
callback: ChangeFn<T>
): Doc<T> {
if (typeof callback !== "function") {
throw new RangeError("invalid change function")
}
const state = _state(doc)
if (doc === undefined || state === undefined) {
throw new RangeError("must be the document root")
}
if (state.heads) {
throw new RangeError(
"Attempting to change an outdated document. Use Automerge.clone() if you wish to make a writable copy."
)
}
if (_is_proxy(doc)) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const heads = state.handle.getHeads()
try {
state.heads = heads
const root: T = rootProxy(state.handle, state.textV2)
callback(root as Extend<T>)
if (state.handle.pendingOps() === 0) {
state.heads = undefined
return doc
} else {
state.handle.commit(options.message, options.time)
return progressDocument(
doc,
heads,
options.patchCallback || state.patchCallback
)
}
} catch (e) {
state.heads = undefined
state.handle.rollback()
throw e
}
}
/**
* Add an empty change to a document, i.e. a change which does not modify its contents
*
* @param doc - The doc to add the empty change to
* @param options - Either a message or a {@link ChangeOptions} for the new change
*
* Why would you want to do this? One reason might be that you have merged
* changes from some other peers and you want to generate a change which
* depends on those merged changes so that you can sign the new change with all
* of the merged changes as part of the new change.
*/
export function emptyChange<T>(
doc: Doc<T>,
options: string | ChangeOptions<T> | void
) {
if (options === undefined) {
options = {}
}
if (typeof options === "string") {
options = { message: options }
}
const state = _state(doc)
if (state.heads) {
throw new RangeError(
"Attempting to change an outdated document. Use Automerge.clone() if you wish to make a writable copy."
)
}
if (_is_proxy(doc)) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const heads = state.handle.getHeads()
state.handle.emptyChange(options.message, options.time)
return progressDocument(doc, heads)
}
/**
* Load an automerge document from a compressed document produced by {@link save}
*
* @typeParam T - The type of the value which is contained in the document.
* Note that no validation is done to make sure this type is in
* fact the type of the contained value so be a bit careful
* @param data - The compressed document
* @param _opts - Either an actor ID or some {@link InitOptions}, if the actor
* ID is null a random actor ID will be created
*
* Note that `load` will throw an error if passed incomplete content (for
* example if you are receiving content over the network and don't know if you
* have the complete document yet). If you need to handle incomplete content use
* {@link init} followed by {@link loadIncremental}.
*/
export function load<T>(
data: Uint8Array,
_opts?: ActorId | InitOptions<T>
): Doc<T> {
const opts = importOpts(_opts)
const actor = opts.actor
const patchCallback = opts.patchCallback
const handle = ApiHandler.load(data, opts.enableTextV2 || false, actor)
handle.enablePatches(true)
handle.enableFreeze(!!opts.freeze)
handle.registerDatatype("counter", (n: number) => new Counter(n))
const textV2 = opts.enableTextV2 || false
if (textV2) {
handle.registerDatatype("str", (n: string) => new RawString(n))
} else {
handle.registerDatatype("text", (n: string) => new Text(n))
}
const doc = handle.materialize("/", undefined, {
handle,
heads: undefined,
patchCallback,
textV2,
}) as Doc<T>
return doc
}
/**
* Load changes produced by {@link saveIncremental}, or partial changes
*
* @typeParam T - The type of the value which is contained in the document.
* Note that no validation is done to make sure this type is in
* fact the type of the contained value so be a bit careful
* @param data - The compressed changes
* @param opts - An {@link ApplyOptions}
*
* This function is useful when staying up to date with a connected peer.
* Perhaps the other end sent you a full compressed document which you loaded
* with {@link load} and they're sending you the result of
* {@link getLastLocalChange} every time they make a change.
*
* Note that this function will successfully load the results of {@link save} as
* well as {@link getLastLocalChange} or any other incremental change.
*/
export function loadIncremental<T>(
doc: Doc<T>,
data: Uint8Array,
opts?: ApplyOptions<T>
): Doc<T> {
if (!opts) {
opts = {}
}
const state = _state(doc)
if (state.heads) {
throw new RangeError(
"Attempting to change an out of date document - set at: " + _trace(doc)
)
}
if (_is_proxy(doc)) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const heads = state.handle.getHeads()
state.handle.loadIncremental(data)
return progressDocument(doc, heads, opts.patchCallback || state.patchCallback)
}
/**
* Export the contents of a document to a compressed format
*
* @param doc - The doc to save
*
* The returned bytes can be passed to {@link load} or {@link loadIncremental}
*/
export function save<T>(doc: Doc<T>): Uint8Array {
return _state(doc).handle.save()
}
/**
* Merge `local` into `remote`
* @typeParam T - The type of values contained in each document
* @param local - The document to merge changes into
* @param remote - The document to merge changes from
*
* @returns - The merged document
*
* Often when you are merging documents you will also need to clone them. Both
* arguments to `merge` are frozen after the call so you can no longer call
* mutating methods (such as {@link change}) on them. The symptom of this will be
* an error which says "Attempting to change an out of date document". To
* overcome this call {@link clone} on the argument before passing it to {@link
* merge}.
*/
export function merge<T>(local: Doc<T>, remote: Doc<T>): Doc<T> {
const localState = _state(local)
if (localState.heads) {
throw new RangeError(
"Attempting to change an out of date document - set at: " + _trace(local)
)
}
const heads = localState.handle.getHeads()
const remoteState = _state(remote)
const changes = localState.handle.getChangesAdded(remoteState.handle)
localState.handle.applyChanges(changes)
return progressDocument(local, heads, localState.patchCallback)
}
/**
* Get the actor ID associated with the document
*/
export function getActorId<T>(doc: Doc<T>): ActorId {
const state = _state(doc)
return state.handle.getActorId()
}
/**
* The type of conflicts for a particular key or index
*
* Maps and sequences in automerge can contain conflicting values for a
* particular key or index. In this case {@link getConflicts} can be used to
* obtain a `Conflicts` representing the multiple values present for the property
*
* A `Conflicts` is a map from a unique (per property or index) key to one of
* the possible conflicting values for the given property.
*/
type Conflicts = { [key: string]: AutomergeValue }
/**
* Get the conflicts associated with a property
*
* The values of properties in a map in automerge can be conflicted if there
* are concurrent "put" operations to the same key. Automerge chooses one value
* arbitrarily (but deterministically, any two nodes who have the same set of
* changes will choose the same value) from the set of conflicting values to
* present as the value of the key.
*
* Sometimes you may want to examine these conflicts, in this case you can use
* {@link getConflicts} to get the conflicts for the key.
*
* @example
* ```
* import * as automerge from "@automerge/automerge"
*
* type Profile = {
* pets: Array<{name: string, type: string}>
* }
*
* let doc1 = automerge.init<Profile>("aaaa")
* doc1 = automerge.change(doc1, d => {
* d.pets = [{name: "Lassie", type: "dog"}]
* })
* let doc2 = automerge.init<Profile>("bbbb")
* doc2 = automerge.merge(doc2, automerge.clone(doc1))
*
* doc2 = automerge.change(doc2, d => {
* d.pets[0].name = "Beethoven"
* })
*
* doc1 = automerge.change(doc1, d => {
* d.pets[0].name = "Babe"
* })
*
* const doc3 = automerge.merge(doc1, doc2)
*
* // Note that here we pass `doc3.pets`, not `doc3`
* let conflicts = automerge.getConflicts(doc3.pets[0], "name")
*
* // The two conflicting values are the values of the conflicts object
* assert.deepEqual(Object.values(conflicts), ["Babe", "Beethoven"])
* ```
*/
export function getConflicts<T>(
doc: Doc<T>,
prop: Prop
): Conflicts | undefined {
const state = _state(doc, false)
if (state.textV2) {
throw new Error("use unstable.getConflicts for an unstable document")
}
const objectId = _obj(doc)
if (objectId != null) {
return stableConflictAt(state.handle, objectId, prop)
} else {
return undefined
}
}
/**
* Get the binary representation of the last change which was made to this doc
*
* This is most useful when staying in sync with other peers, every time you
* make a change locally via {@link change} you immediately call {@link
* getLastLocalChange} and send the result over the network to other peers.
*/
export function getLastLocalChange<T>(doc: Doc<T>): Change | undefined {
const state = _state(doc)
return state.handle.getLastLocalChange() || undefined
}
/**
* Return the object ID of an arbitrary javascript value
*
* This is useful to determine if something is actually an automerge document,
* if `doc` is not an automerge document this will return null.
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export function getObjectId(doc: any, prop?: Prop): ObjID | null {
if (prop) {
const state = _state(doc, false)
const objectId = _obj(doc)
if (!state || !objectId) {
return null
}
return state.handle.get(objectId, prop) as ObjID
} else {
return _obj(doc)
}
}
/**
* Get the changes which are in `newState` but not in `oldState`. The returned
* changes can be loaded in `oldState` via {@link applyChanges}.
*
* Note that this will crash if there are changes in `oldState` which are not in `newState`.
*/
export function getChanges<T>(oldState: Doc<T>, newState: Doc<T>): Change[] {
const n = _state(newState)
return n.handle.getChanges(getHeads(oldState))
}
/**
* Get all the changes in a document
*
* This is different to {@link save} because the output is an array of changes
* which can be individually applied via {@link applyChanges}
*
*/
export function getAllChanges<T>(doc: Doc<T>): Change[] {
const state = _state(doc)
return state.handle.getChanges([])
}
/**
* Apply changes received from another document
*
* `doc` will be updated to reflect the `changes`. If there are changes which
* we do not have dependencies for yet those will be stored in the document and
* applied when the depended on changes arrive.
*
* You can use the {@link ApplyOptions} to pass a patchcallback which will be
* informed of any changes which occur as a result of applying the changes
*
*/
export function applyChanges<T>(
doc: Doc<T>,
changes: Change[],
opts?: ApplyOptions<T>
): [Doc<T>] {
const state = _state(doc)
if (!opts) {
opts = {}
}
if (state.heads) {
throw new RangeError(
"Attempting to change an outdated document. Use Automerge.clone() if you wish to make a writable copy."
)
}
if (_is_proxy(doc)) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const heads = state.handle.getHeads()
state.handle.applyChanges(changes)
state.heads = heads
return [
progressDocument(doc, heads, opts.patchCallback || state.patchCallback),
]
}
/** @hidden */
export function getHistory<T>(doc: Doc<T>): State<T>[] {
const textV2 = _state(doc).textV2
const history = getAllChanges(doc)
return history.map((change, index) => ({
get change() {
return decodeChange(change)
},
get snapshot() {
const [state] = applyChanges(
init({ enableTextV2: textV2 }),
history.slice(0, index + 1)
)
return <T>state
},
}))
}
/** @hidden */
// FIXME : no tests
// FIXME can we just use deep equals now?
export function equals(val1: unknown, val2: unknown): boolean {
if (!isObject(val1) || !isObject(val2)) return val1 === val2
const keys1 = Object.keys(val1).sort(),
keys2 = Object.keys(val2).sort()
if (keys1.length !== keys2.length) return false
for (let i = 0; i < keys1.length; i++) {
if (keys1[i] !== keys2[i]) return false
if (!equals(val1[keys1[i]], val2[keys2[i]])) return false
}
return true
}
/**
* encode a {@link SyncState} into binary to send over the network
*
* @group sync
* */
export function encodeSyncState(state: SyncState): Uint8Array {
const sync = ApiHandler.importSyncState(state)
const result = ApiHandler.encodeSyncState(sync)
sync.free()
return result
}
/**
* Decode some binary data into a {@link SyncState}
*
* @group sync
*/
export function decodeSyncState(state: Uint8Array): SyncState {
const sync = ApiHandler.decodeSyncState(state)
const result = ApiHandler.exportSyncState(sync)
sync.free()
return result as SyncState
}
/**
* Generate a sync message to send to the peer represented by `inState`
* @param doc - The doc to generate messages about
* @param inState - The {@link SyncState} representing the peer we are talking to
*
* @group sync
*
* @returns An array of `[newSyncState, syncMessage | null]` where
* `newSyncState` should replace `inState` and `syncMessage` should be sent to
* the peer if it is not null. If `syncMessage` is null then we are up to date.
*/
export function generateSyncMessage<T>(
doc: Doc<T>,
inState: SyncState
): [SyncState, SyncMessage | null] {
const state = _state(doc)
const syncState = ApiHandler.importSyncState(inState)
const message = state.handle.generateSyncMessage(syncState)
const outState = ApiHandler.exportSyncState(syncState) as SyncState
return [outState, message]
}
/**
* Update a document and our sync state on receiving a sync message
*
* @group sync
*
* @param doc - The doc the sync message is about
* @param inState - The {@link SyncState} for the peer we are communicating with
* @param message - The message which was received
* @param opts - Any {@link ApplyOptions}, used for passing a
* {@link PatchCallback} which will be informed of any changes
* in `doc` which occur because of the received sync message.
*
* @returns An array of `[newDoc, newSyncState, syncMessage | null]` where
* `newDoc` is the updated state of `doc`, `newSyncState` should replace
* `inState` and `syncMessage` should be sent to the peer if it is not null. If
* `syncMessage` is null then we are up to date.
*/
export function receiveSyncMessage<T>(
doc: Doc<T>,
inState: SyncState,
message: SyncMessage,
opts?: ApplyOptions<T>
): [Doc<T>, SyncState, null] {
const syncState = ApiHandler.importSyncState(inState)
if (!opts) {
opts = {}
}
const state = _state(doc)
if (state.heads) {
throw new RangeError(
"Attempting to change an outdated document. Use Automerge.clone() if you wish to make a writable copy."
)
}
if (_is_proxy(doc)) {
throw new RangeError("Calls to Automerge.change cannot be nested")
}
const heads = state.handle.getHeads()
state.handle.receiveSyncMessage(syncState, message)
const outSyncState = ApiHandler.exportSyncState(syncState) as SyncState
return [
progressDocument(doc, heads, opts.patchCallback || state.patchCallback),
outSyncState,
null,
]
}
/**
* Create a new, blank {@link SyncState}
*
* When communicating with a peer for the first time use this to generate a new
* {@link SyncState} for them
*
* @group sync
*/
export function initSyncState(): SyncState {
return ApiHandler.exportSyncState(ApiHandler.initSyncState()) as SyncState
}
/** @hidden */
export function encodeChange(change: ChangeToEncode): Change {
return ApiHandler.encodeChange(change)
}
/** @hidden */
export function decodeChange(data: Change): DecodedChange {
return ApiHandler.decodeChange(data)
}
/** @hidden */
export function encodeSyncMessage(message: DecodedSyncMessage): SyncMessage {
return ApiHandler.encodeSyncMessage(message)
}
/** @hidden */
export function decodeSyncMessage(message: SyncMessage): DecodedSyncMessage {
return ApiHandler.decodeSyncMessage(message)
}
/**
* Get any changes in `doc` which are not dependencies of `heads`
*/
export function getMissingDeps<T>(doc: Doc<T>, heads: Heads): Heads {
const state = _state(doc)
return state.handle.getMissingDeps(heads)
}
/**
* Get the hashes of the heads of this document
*/
export function getHeads<T>(doc: Doc<T>): Heads {
const state = _state(doc)
return state.heads || state.handle.getHeads()
}
/** @hidden */
export function dump<T>(doc: Doc<T>) {
const state = _state(doc)
state.handle.dump()
}
/** @hidden */
export function toJS<T>(doc: Doc<T>): T {
const state = _state(doc)
const enabled = state.handle.enableFreeze(false)
const result = state.handle.materialize()
state.handle.enableFreeze(enabled)
return result as T
}
export function isAutomerge(doc: unknown): boolean {
if (typeof doc == "object" && doc !== null) {
return getObjectId(doc) === "_root" && !!Reflect.get(doc, STATE)
} else {
return false
}
}
function isObject(obj: unknown): obj is Record<string, unknown> {
return typeof obj === "object" && obj !== null
}
export type {
API,
SyncState,
ActorId,
Conflicts,
Prop,
Change,
ObjID,
DecodedChange,
DecodedSyncMessage,
Heads,
MaterializeValue,
}

javascript/src/text.ts Normal file
@@ -0,0 +1,224 @@
import type { Value } from "@automerge/automerge-wasm"
import { TEXT, STATE } from "./constants"
import type { InternalState } from "./internal_state"
export class Text {
//eslint-disable-next-line @typescript-eslint/no-explicit-any
elems: Array<any>
str: string | undefined
//eslint-disable-next-line @typescript-eslint/no-explicit-any
spans: Array<any> | undefined;
//eslint-disable-next-line @typescript-eslint/no-explicit-any
[STATE]?: InternalState<any>
constructor(text?: string | string[] | Value[]) {
if (typeof text === "string") {
this.elems = [...text]
} else if (Array.isArray(text)) {
this.elems = text
} else if (text === undefined) {
this.elems = []
} else {
throw new TypeError(`Unsupported initial value for Text: ${text}`)
}
Reflect.defineProperty(this, TEXT, { value: true })
}
get length(): number {
return this.elems.length
}
//eslint-disable-next-line @typescript-eslint/no-explicit-any
get(index: number): any {
return this.elems[index]
}
/**
* Iterates over the text elements character by character, including any
* inline objects.
*/
[Symbol.iterator]() {
const elems = this.elems
let index = -1
return {
next() {
index += 1
if (index < elems.length) {
return { done: false, value: elems[index] }
} else {
return { done: true }
}
},
}
}
/**
* Returns the content of the Text object as a simple string, ignoring any
* non-character elements.
*/
toString(): string {
if (!this.str) {
// Concatting to a string is faster than creating an array and then
// .join()ing for small (<100KB) arrays.
// https://jsperf.com/join-vs-loop-w-type-test
this.str = ""
for (const elem of this.elems) {
if (typeof elem === "string") this.str += elem
else this.str += "\uFFFC"
}
}
return this.str
}
/**
* Returns the content of the Text object as a sequence of strings,
* interleaved with non-character elements.
*
* For example, the value `['a', 'b', {x: 3}, 'c', 'd']` has the spans
* `['ab', {x: 3}, 'cd']`
*/
toSpans(): Array<Value | object> {
if (!this.spans) {
this.spans = []
let chars = ""
for (const elem of this.elems) {
if (typeof elem === "string") {
chars += elem
} else {
if (chars.length > 0) {
this.spans.push(chars)
chars = ""
}
this.spans.push(elem)
}
}
if (chars.length > 0) {
this.spans.push(chars)
}
}
return this.spans
}
/**
* Returns the content of the Text object as a simple string, so that the
* JSON serialization of an Automerge document represents text nicely.
*/
toJSON(): string {
return this.toString()
}
/**
* Updates the list item at position `index` to a new value `value`.
*/
set(index: number, value: Value) {
if (this[STATE]) {
throw new RangeError(
"object cannot be modified outside of a change block"
)
}
this.elems[index] = value
}
/**
* Inserts new list items `values` starting at position `index`.
*/
insertAt(index: number, ...values: Array<Value | object>) {
if (this[STATE]) {
throw new RangeError(
"object cannot be modified outside of a change block"
)
}
this.elems.splice(index, 0, ...values)
}
/**
* Deletes `numDelete` list items starting at position `index`.
* If `numDelete` is not given, one item is deleted.
*/
deleteAt(index: number, numDelete = 1) {
if (this[STATE]) {
throw new RangeError(
"object cannot be modified outside of a change block"
)
}
this.elems.splice(index, numDelete)
}
map<T>(callback: (e: Value | object) => T) {
return this.elems.map(callback)
}
lastIndexOf(searchElement: Value, fromIndex?: number) {
return this.elems.lastIndexOf(searchElement, fromIndex)
}
concat(other: Text): Text {
return new Text(this.elems.concat(other.elems))
}
every(test: (v: Value) => boolean): boolean {
return this.elems.every(test)
}
filter(test: (v: Value) => boolean): Text {
return new Text(this.elems.filter(test))
}
find(test: (v: Value) => boolean): Value | undefined {
return this.elems.find(test)
}
findIndex(test: (v: Value) => boolean): number | undefined {
return this.elems.findIndex(test)
}
forEach(f: (v: Value) => undefined) {
this.elems.forEach(f)
}
includes(elem: Value): boolean {
return this.elems.includes(elem)
}
indexOf(elem: Value) {
return this.elems.indexOf(elem)
}
join(sep?: string): string {
return this.elems.join(sep)
}
reduce(
f: (
previousValue: Value,
currentValue: Value,
currentIndex: number,
array: Value[]
) => Value
) {
return this.elems.reduce(f)
}
reduceRight(
f: (
previousValue: Value,
currentValue: Value,
currentIndex: number,
array: Value[]
) => Value
) {
return this.elems.reduceRight(f)
}
slice(start?: number, end?: number) {
return new Text(this.elems.slice(start, end))
}
some(test: (arg: Value) => boolean): boolean {
return this.elems.some(test)
}
toLocaleString() {
return this.toString()
}
}

@@ -1,10 +1,46 @@
export { Text } from "./text"
import { Text } from "./text"
export { Counter } from "./counter"
export { Int, Uint, Float64 } from "./numbers"
import { Counter } from "./counter"
import type { Patch } from "@automerge/automerge-wasm"
export type { Patch } from "@automerge/automerge-wasm"
export type AutomergeValue =
| ScalarValue
| { [key: string]: AutomergeValue }
| Array<AutomergeValue>
| Text
export type MapValue = { [key: string]: AutomergeValue }
export type ListValue = Array<AutomergeValue>
export type ScalarValue =
| string
| number
| null
| boolean
| Date
| Counter
| Uint8Array
/**
* An automerge document.
* @typeParam T - The type of the value contained in this document
*
* Note that this provides read only access to the fields of the value. To
* modify the value use {@link change}
*/
export type Doc<T> = { readonly [P in keyof T]: T[P] }
/**
* Callback which is called by various methods in this library to notify the
* user of what changes have been made.
* @param patch - A description of the changes made
* @param before - The document before the change was made
* @param after - The document after the change was made
*/
export type PatchCallback<T> = (
patches: Array<Patch>,
before: Doc<T>,
after: Doc<T>
) => void

javascript/src/unstable.ts Normal file
@@ -0,0 +1,294 @@
/**
* # The unstable API
*
* This module contains new features we are working on which are either not yet
* ready for a stable release and/or which will result in backwards incompatible
* API changes. The API of this module may change in arbitrary ways between
* point releases - we will always document what these changes are in the
* [CHANGELOG](#changelog) below, but only depend on this module if you are prepared to deal
* with frequent changes.
*
* ## Differences from stable
*
* In the stable API text objects are represented using the {@link Text} class.
* This means you must decide up front whether your string data might need
* concurrent merges in the future and if you change your mind you have to
* figure out how to migrate your data. In the unstable API the `Text` class is
* gone and all `string`s are represented using the text CRDT, allowing for
* concurrent changes. Modifying a string is done using the {@link splice}
* function. You can still access the old behaviour of strings which do not
* support merging behaviour via the {@link RawString} class.
*
* This leads to the following differences from `stable`:
*
* * There is no `unstable.Text` class, all strings are text objects
* * Reading strings in an `unstable` document is the same as reading any other
* javascript string
* * To modify strings in an `unstable` document use {@link splice}
* * The {@link AutomergeValue} type does not include the {@link Text}
* class but the {@link RawString} class is included in the {@link ScalarValue}
* type
*
* ## CHANGELOG
* * Introduce this module to expose the new API which has no `Text` class
*
*
* @module
*/
export {
Counter,
type Doc,
Int,
Uint,
Float64,
type Patch,
type PatchCallback,
type AutomergeValue,
type ScalarValue,
} from "./unstable_types"
import type { PatchCallback } from "./stable"
import { type UnstableConflicts as Conflicts } from "./conflicts"
import { unstableConflictAt } from "./conflicts"
export type {
PutPatch,
DelPatch,
SpliceTextPatch,
InsertPatch,
IncPatch,
SyncMessage,
} from "@automerge/automerge-wasm"
export type { ChangeOptions, ApplyOptions, ChangeFn } from "./stable"
export {
view,
free,
getHeads,
change,
emptyChange,
loadIncremental,
save,
merge,
getActorId,
getLastLocalChange,
getChanges,
getAllChanges,
applyChanges,
getHistory,
equals,
encodeSyncState,
decodeSyncState,
generateSyncMessage,
receiveSyncMessage,
initSyncState,
encodeChange,
decodeChange,
encodeSyncMessage,
decodeSyncMessage,
getMissingDeps,
dump,
toJS,
isAutomerge,
getObjectId,
} from "./stable"
export type InitOptions<T> = {
/** The actor ID to use for this document, a random one will be generated if `null` is passed */
actor?: ActorId
  /** If true the document (and any sub objects) will be recursively frozen with `Object.freeze` */
  freeze?: boolean
/** A callback which will be called with the initial patch once the document has finished loading */
patchCallback?: PatchCallback<T>
}
import { ActorId, Doc } from "./stable"
import * as stable from "./stable"
export { RawString } from "./raw_string"
/** @hidden */
export const getBackend = stable.getBackend
import { _is_proxy, _state, _obj } from "./internal_state"
/**
* Create a new automerge document
*
* @typeParam T - The type of value contained in the document. This will be the
* type that is passed to the change closure in {@link change}
* @param _opts - Either an actorId or an {@link InitOptions} (which may
* contain an actorId). If this is null the document will be initialised with a
* random actor ID
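 *
 * @example
 * ```
 * // create a document with a random actor ID
 * let doc = automerge.init<{ count?: number }>()
 * // or pin the actor ID (the literal here is illustrative)
 * let doc2 = automerge.init<{ count?: number }>("aabbcc")
 * ```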
*/
export function init<T>(_opts?: ActorId | InitOptions<T>): Doc<T> {
const opts = importOpts(_opts)
opts.enableTextV2 = true
return stable.init(opts)
}
/**
* Make a full writable copy of an automerge document
*
* @remarks
* Unlike {@link view} this function makes a full copy of the memory backing
* the document and can thus be passed to {@link change}. It also generates a
* new actor ID so that changes made in the new document do not create duplicate
* sequence numbers with respect to the old document. If you need control over
* the actor ID which is generated you can pass the actor ID as the second
* argument
*
* @typeParam T - The type of the value contained in the document
* @param doc - The document to clone
* @param _opts - Either an actor ID to use for the new doc or an {@link InitOptions}
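 *
 * For example (the actor ID literal below is illustrative):
 *
 * @example
 * ```
 * let doc = automerge.from({ value: 1 })
 * // a full copy which can be passed to `change`, with a caller-chosen actor ID
 * let copy = automerge.clone(doc, "aabbcc")
 * copy = automerge.change(copy, d => (d.value = 2))
 * ```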
*/
export function clone<T>(
doc: Doc<T>,
_opts?: ActorId | InitOptions<T>
): Doc<T> {
const opts = importOpts(_opts)
opts.enableTextV2 = true
return stable.clone(doc, opts)
}
/**
* Create an automerge document from a POJO
*
* @param initialState - The initial state which will be copied into the document
* @typeParam T - The type of the value passed to `from` _and_ the type the resulting document will contain
 * @param _opts - Either an actor ID to use for the resulting document or an
 * {@link InitOptions}; if this is null a random actor ID will be used
*
* @example
* ```
* const doc = automerge.from({
* tasks: [
* {description: "feed dogs", done: false}
* ]
* })
* ```
*/
export function from<T extends Record<string, unknown>>(
initialState: T | Doc<T>,
_opts?: ActorId | InitOptions<T>
): Doc<T> {
const opts = importOpts(_opts)
opts.enableTextV2 = true
return stable.from(initialState, opts)
}
/**
 * Load an automerge document from a compressed document produced by {@link save}
*
* @typeParam T - The type of the value which is contained in the document.
* Note that no validation is done to make sure this type is in
* fact the type of the contained value so be a bit careful
* @param data - The compressed document
* @param _opts - Either an actor ID or some {@link InitOptions}, if the actor
* ID is null a random actor ID will be created
*
* Note that `load` will throw an error if passed incomplete content (for
* example if you are receiving content over the network and don't know if you
* have the complete document yet). If you need to handle incomplete content use
* {@link init} followed by {@link loadIncremental}.
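 *
 * A round trip through {@link save} looks like this (a sketch):
 *
 * @example
 * ```
 * const doc = automerge.from({ count: 0 })
 * const bytes = automerge.save(doc)
 * const reloaded = automerge.load<{ count: number }>(bytes)
 * ```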
*/
export function load<T>(
data: Uint8Array,
_opts?: ActorId | InitOptions<T>
): Doc<T> {
const opts = importOpts(_opts)
opts.enableTextV2 = true
return stable.load(data, opts)
}
function importOpts<T>(
_actor?: ActorId | InitOptions<T>
): stable.InitOptions<T> {
if (typeof _actor === "object") {
return _actor
} else {
return { actor: _actor }
}
}
/**
 * Modify a string in place
 *
 * @typeParam T - The type of the value contained in the document
 * @param doc - The document to modify. This must be the mutable document
 * passed to the callback of a {@link change} call
 * @param prop - The property of `doc` containing the string to modify
 * @param index - The index at which to begin the splice
 * @param del - The number of characters to delete
 * @param newText - The text to insert (if any)
 */
export function splice<T>(
doc: Doc<T>,
prop: stable.Prop,
index: number,
del: number,
newText?: string
) {
if (!_is_proxy(doc)) {
throw new RangeError("object cannot be modified outside of a change block")
}
const state = _state(doc, false)
const objectId = _obj(doc)
if (!objectId) {
throw new RangeError("invalid object for splice")
}
const value = `${objectId}/${prop}`
try {
return state.handle.splice(value, index, del, newText)
} catch (e) {
throw new RangeError(`Cannot splice: ${e}`)
}
}
/**
* Get the conflicts associated with a property
*
* The values of properties in a map in automerge can be conflicted if there
* are concurrent "put" operations to the same key. Automerge chooses one value
* arbitrarily (but deterministically, any two nodes who have the same set of
* changes will choose the same value) from the set of conflicting values to
* present as the value of the key.
*
* Sometimes you may want to examine these conflicts, in this case you can use
* {@link getConflicts} to get the conflicts for the key.
*
* @example
* ```
* import * as automerge from "@automerge/automerge"
*
* type Profile = {
* pets: Array<{name: string, type: string}>
* }
*
* let doc1 = automerge.init<Profile>("aaaa")
* doc1 = automerge.change(doc1, d => {
* d.pets = [{name: "Lassie", type: "dog"}]
* })
* let doc2 = automerge.init<Profile>("bbbb")
* doc2 = automerge.merge(doc2, automerge.clone(doc1))
*
* doc2 = automerge.change(doc2, d => {
* d.pets[0].name = "Beethoven"
* })
*
* doc1 = automerge.change(doc1, d => {
* d.pets[0].name = "Babe"
* })
*
* const doc3 = automerge.merge(doc1, doc2)
*
* // Note that here we pass `doc3.pets`, not `doc3`
* let conflicts = automerge.getConflicts(doc3.pets[0], "name")
*
 * // The two conflicting values are the values of the conflicts object
 * assert.deepEqual(Object.values(conflicts), ["Babe", "Beethoven"])
* ```
*/
export function getConflicts<T>(
doc: Doc<T>,
prop: stable.Prop
): Conflicts | undefined {
const state = _state(doc, false)
if (!state.textV2) {
throw new Error("use getConflicts for a stable document")
}
const objectId = _obj(doc)
if (objectId != null) {
return unstableConflictAt(state.handle, objectId, prop)
} else {
return undefined
}
}

View file

@@ -0,0 +1,30 @@
import { Counter } from "./types"
export {
Counter,
type Doc,
Int,
Uint,
Float64,
type Patch,
type PatchCallback,
} from "./types"
import { RawString } from "./raw_string"
export { RawString } from "./raw_string"
export type AutomergeValue =
| ScalarValue
| { [key: string]: AutomergeValue }
| Array<AutomergeValue>
export type MapValue = { [key: string]: AutomergeValue }
export type ListValue = Array<AutomergeValue>
export type ScalarValue =
| string
| number
| null
| boolean
| Date
| Counter
| Uint8Array
| RawString

View file

@@ -0,0 +1,26 @@
import * as v4 from "https://deno.land/x/uuid@v0.1.2/mod.ts"
// this file is a Deno-only port of the uuid module
function defaultFactory() {
return v4.uuid().replace(/-/g, "")
}
let factory = defaultFactory
interface UUIDFactory extends Function {
setFactory(f: typeof factory): void
reset(): void
}
export const uuid: UUIDFactory = () => {
return factory()
}
uuid.setFactory = newFactory => {
factory = newFactory
}
uuid.reset = () => {
factory = defaultFactory
}

View file

@@ -1,21 +1,24 @@
import { v4 } from 'uuid'
import { v4 } from "uuid"
function defaultFactory() {
return v4().replace(/-/g, '')
return v4().replace(/-/g, "")
}
let factory = defaultFactory
interface UUIDFactory extends Function {
setFactory(f: typeof factory): void;
reset(): void;
setFactory(f: typeof factory): void
reset(): void
}
export const uuid : UUIDFactory = () => {
export const uuid: UUIDFactory = () => {
return factory()
}
uuid.setFactory = newFactory => { factory = newFactory }
uuid.reset = () => { factory = defaultFactory }
uuid.setFactory = newFactory => {
factory = newFactory
}
uuid.reset = () => {
factory = defaultFactory
}

View file

@@ -1,366 +1,488 @@
import * as assert from 'assert'
import {Counter} from 'automerge'
import * as Automerge from '../src'
import * as assert from "assert"
import { unstable as Automerge } from "../src"
import * as WASM from "@automerge/automerge-wasm"
describe('Automerge', () => {
describe('basics', () => {
it('should init clone and free', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1);
describe("Automerge", () => {
describe("basics", () => {
it("should init clone and free", () => {
let doc1 = Automerge.init()
let doc2 = Automerge.clone(doc1)
// this is only needed if weakrefs are not supported
Automerge.free(doc1)
Automerge.free(doc2)
})
it('should be able to make a view with specific heads', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => d.value = 1)
let heads2 = Automerge.getHeads(doc2)
let doc3 = Automerge.change(doc2, (d) => d.value = 2)
let doc2_v2 = Automerge.view(doc3, heads2)
assert.deepEqual(doc2, doc2_v2)
let doc2_v2_clone = Automerge.clone(doc2, "aabbcc")
assert.deepEqual(doc2, doc2_v2_clone)
assert.equal(Automerge.getActorId(doc2_v2_clone), "aabbcc")
})
it("should allow you to change a clone of a view", () => {
let doc1 = Automerge.init<any>()
doc1 = Automerge.change(doc1, d => d.key = "value")
let heads = Automerge.getHeads(doc1)
doc1 = Automerge.change(doc1, d => d.key = "value2")
let fork = Automerge.clone(Automerge.view(doc1, heads))
assert.deepEqual(fork, {key: "value"})
fork = Automerge.change(fork, d => d.key = "value3")
assert.deepEqual(fork, {key: "value3"})
})
it('handle basic set and read on root object', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world"
d.big = "little"
d.zip = "zop"
d.app = "dap"
assert.deepEqual(d, { hello: "world", big: "little", zip: "zop", app: "dap" })
})
assert.deepEqual(doc2, { hello: "world", big: "little", zip: "zop", app: "dap" })
})
it('can detect an automerge doc with isAutomerge()', () => {
const doc1 = Automerge.from({ sub: { object: true } })
assert(Automerge.isAutomerge(doc1))
assert(!Automerge.isAutomerge(doc1.sub))
assert(!Automerge.isAutomerge("String"))
assert(!Automerge.isAutomerge({ sub: { object: true }}))
assert(!Automerge.isAutomerge(undefined))
const jsObj = Automerge.toJS(doc1)
assert(!Automerge.isAutomerge(jsObj))
assert.deepEqual(jsObj, doc1)
})
it('should recursively freeze the document if requested', () => {
let doc1 = Automerge.init({ freeze: true } )
let doc2 = Automerge.init()
assert(Object.isFrozen(doc1))
assert(!Object.isFrozen(doc2))
// will also freeze sub objects
doc1 = Automerge.change(doc1, (doc) => doc.book = { title: "how to win friends" })
doc2 = Automerge.merge(doc2,doc1)
assert(Object.isFrozen(doc1))
assert(Object.isFrozen(doc1.book))
assert(!Object.isFrozen(doc2))
assert(!Object.isFrozen(doc2.book))
// works on from
let doc3 = Automerge.from({ sub: { obj: "inner" } }, { freeze: true })
assert(Object.isFrozen(doc3))
assert(Object.isFrozen(doc3.sub))
// works on load
let doc4 = Automerge.load(Automerge.save(doc3), { freeze: true })
assert(Object.isFrozen(doc4))
assert(Object.isFrozen(doc4.sub))
// follows clone
let doc5 = Automerge.clone(doc4)
assert(Object.isFrozen(doc5))
assert(Object.isFrozen(doc5.sub))
// toJS does not freeze
let exported = Automerge.toJS(doc5)
assert(!Object.isFrozen(exported))
})
it('handle basic sets over many changes', () => {
let doc1 = Automerge.init()
let timestamp = new Date();
let counter = new Automerge.Counter(100);
let bytes = new Uint8Array([10,11,12]);
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world"
})
let doc3 = Automerge.change(doc2, (d) => {
d.counter1 = counter
})
let doc4 = Automerge.change(doc3, (d) => {
d.timestamp1 = timestamp
})
let doc5 = Automerge.change(doc4, (d) => {
d.app = null
})
let doc6 = Automerge.change(doc5, (d) => {
d.bytes1 = bytes
})
let doc7 = Automerge.change(doc6, (d) => {
d.uint = new Automerge.Uint(1)
d.int = new Automerge.Int(-1)
d.float64 = new Automerge.Float64(5.5)
d.number1 = 100
d.number2 = -45.67
d.true = true
d.false = false
})
assert.deepEqual(doc7, { hello: "world", true: true, false: false, int: -1, uint: 1, float64: 5.5, number1: 100, number2: -45.67, counter1: counter, timestamp1: timestamp, bytes1: bytes, app: null })
let changes = Automerge.getAllChanges(doc7)
let t1 = Automerge.init()
;let [t2] = Automerge.applyChanges(t1, changes)
assert.deepEqual(doc7,t2)
})
it('handle overwrites to values', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.hello = "world1"
})
let doc3 = Automerge.change(doc2, (d) => {
d.hello = "world2"
})
let doc4 = Automerge.change(doc3, (d) => {
d.hello = "world3"
})
let doc5 = Automerge.change(doc4, (d) => {
d.hello = "world4"
})
assert.deepEqual(doc5, { hello: "world4" } )
})
it('handle set with object value', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.subobj = { hello: "world", subsubobj: { zip: "zop" } }
})
assert.deepEqual(doc2, { subobj: { hello: "world", subsubobj: { zip: "zop" } } })
})
it('handle simple list creation', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => d.list = [])
assert.deepEqual(doc2, { list: []})
})
it('handle simple lists', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = [ 1, 2, 3 ]
})
assert.deepEqual(doc2.list.length, 3)
assert.deepEqual(doc2.list[0], 1)
assert.deepEqual(doc2.list[1], 2)
assert.deepEqual(doc2.list[2], 3)
assert.deepEqual(doc2, { list: [1,2,3] })
// assert.deepStrictEqual(Automerge.toJS(doc2), { list: [1,2,3] })
let doc3 = Automerge.change(doc2, (d) => {
d.list[1] = "a"
})
assert.deepEqual(doc3.list.length, 3)
assert.deepEqual(doc3.list[0], 1)
assert.deepEqual(doc3.list[1], "a")
assert.deepEqual(doc3.list[2], 3)
assert.deepEqual(doc3, { list: [1,"a",3] })
})
it('handle simple lists', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = [ 1, 2, 3 ]
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
;let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2);
})
it('handle text', () => {
let doc1 = Automerge.init()
let doc2 = Automerge.change(doc1, (d) => {
d.list = "hello"
Automerge.splice(d, "list", 2, 0, "Z")
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
;let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2);
})
it('handle non-text strings', () => {
let doc1 = WASM.create();
doc1.put("_root", "text", "hello world");
let doc2 = Automerge.load(doc1.save())
assert.throws(() => {
Automerge.change(doc2, (d) => { Automerge.splice(d, "text", 1, 0, "Z") })
}, /Cannot splice/)
})
it('have many list methods', () => {
let doc1 = Automerge.from({ list: [1,2,3] })
assert.deepEqual(doc1, { list: [1,2,3] });
let doc2 = Automerge.change(doc1, (d) => {
d.list.splice(1,1,9,10)
})
assert.deepEqual(doc2, { list: [1,9,10,3] });
let doc3 = Automerge.change(doc2, (d) => {
d.list.push(11,12)
})
assert.deepEqual(doc3, { list: [1,9,10,3,11,12] });
let doc4 = Automerge.change(doc3, (d) => {
d.list.unshift(2,2)
})
assert.deepEqual(doc4, { list: [2,2,1,9,10,3,11,12] });
let doc5 = Automerge.change(doc4, (d) => {
d.list.shift()
})
assert.deepEqual(doc5, { list: [2,1,9,10,3,11,12] });
let doc6 = Automerge.change(doc5, (d) => {
d.list.insertAt(3,100,101)
})
assert.deepEqual(doc6, { list: [2,1,9,100,101,10,3,11,12] });
})
it('allows access to the backend', () => {
let doc = Automerge.init()
assert.deepEqual(Object.keys(Automerge.getBackend(doc)), ["ptr"])
})
it('lists and text have indexof', () => {
let doc = Automerge.from({ list: [0,1,2,3,4,5,6], text: "hello world" })
assert.deepEqual(doc.list.indexOf(5), 5)
assert.deepEqual(doc.text.indexOf("world"), 6)
})
// this is only needed if weakrefs are not supported
Automerge.free(doc1)
Automerge.free(doc2)
})
describe('emptyChange', () => {
it('should generate a hash', () => {
let doc = Automerge.init()
doc = Automerge.change<any>(doc, d => {
d.key = "value"
})
let _ = Automerge.save(doc)
let headsBefore = Automerge.getHeads(doc)
headsBefore.sort()
doc = Automerge.emptyChange(doc, "empty change")
let headsAfter = Automerge.getHeads(doc)
headsAfter.sort()
assert.notDeepEqual(headsBefore, headsAfter)
})
it("should be able to make a view with specific heads", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => (d.value = 1))
let heads2 = Automerge.getHeads(doc2)
let doc3 = Automerge.change(doc2, d => (d.value = 2))
let doc2_v2 = Automerge.view(doc3, heads2)
assert.deepEqual(doc2, doc2_v2)
let doc2_v2_clone = Automerge.clone(doc2, "aabbcc")
assert.deepEqual(doc2, doc2_v2_clone)
assert.equal(Automerge.getActorId(doc2_v2_clone), "aabbcc")
})
describe('proxy lists', () => {
it('behave like arrays', () => {
let doc = Automerge.from({
chars: ["a","b","c"],
numbers: [20,3,100],
repeats: [20,20,3,3,3,3,100,100]
})
let r1 = []
doc = Automerge.change(doc, (d) => {
assert.deepEqual(d.chars.concat([1,2]), ["a","b","c",1,2])
assert.deepEqual(d.chars.map((n) => n + "!"), ["a!", "b!", "c!"])
assert.deepEqual(d.numbers.map((n) => n + 10), [30, 13, 110])
assert.deepEqual(d.numbers.toString(), "20,3,100")
assert.deepEqual(d.numbers.toLocaleString(), "20,3,100")
assert.deepEqual(d.numbers.forEach((n) => r1.push(n)), undefined)
assert.deepEqual(d.numbers.every((n) => n > 1), true)
assert.deepEqual(d.numbers.every((n) => n > 10), false)
assert.deepEqual(d.numbers.filter((n) => n > 10), [20,100])
assert.deepEqual(d.repeats.find((n) => n < 10), 3)
assert.deepEqual(d.repeats.toArray().find((n) => n < 10), 3)
assert.deepEqual(d.repeats.find((n) => n < 0), undefined)
assert.deepEqual(d.repeats.findIndex((n) => n < 10), 2)
assert.deepEqual(d.repeats.findIndex((n) => n < 0), -1)
assert.deepEqual(d.repeats.toArray().findIndex((n) => n < 10), 2)
assert.deepEqual(d.repeats.toArray().findIndex((n) => n < 0), -1)
assert.deepEqual(d.numbers.includes(3), true)
assert.deepEqual(d.numbers.includes(-3), false)
assert.deepEqual(d.numbers.join("|"), "20|3|100")
assert.deepEqual(d.numbers.join(), "20,3,100")
assert.deepEqual(d.numbers.some((f) => f === 3), true)
assert.deepEqual(d.numbers.some((f) => f < 0), false)
assert.deepEqual(d.numbers.reduce((sum,n) => sum + n, 100), 223)
assert.deepEqual(d.repeats.reduce((sum,n) => sum + n, 100), 352)
assert.deepEqual(d.chars.reduce((sum,n) => sum + n, "="), "=abc")
assert.deepEqual(d.chars.reduceRight((sum,n) => sum + n, "="), "=cba")
assert.deepEqual(d.numbers.reduceRight((sum,n) => sum + n, 100), 223)
assert.deepEqual(d.repeats.lastIndexOf(3), 5)
assert.deepEqual(d.repeats.lastIndexOf(3,3), 3)
})
doc = Automerge.change(doc, (d) => {
assert.deepEqual(d.numbers.fill(-1,1,2), [20,-1,100])
assert.deepEqual(d.chars.fill("z",1,100), ["a","z","z"])
})
assert.deepEqual(r1, [20,3,100])
assert.deepEqual(doc.numbers, [20,-1,100])
assert.deepEqual(doc.chars, ["a","z","z"])
})
})
it('should obtain the same conflicts, regardless of merge order', () => {
let s1 = Automerge.init()
let s2 = Automerge.init()
s1 = Automerge.change(s1, doc => { doc.x = 1; doc.y = 2 })
s2 = Automerge.change(s2, doc => { doc.x = 3; doc.y = 4 })
const m1 = Automerge.merge(Automerge.clone(s1), Automerge.clone(s2))
const m2 = Automerge.merge(Automerge.clone(s2), Automerge.clone(s1))
assert.deepStrictEqual(Automerge.getConflicts(m1, 'x'), Automerge.getConflicts(m2, 'x'))
it("should allow you to change a clone of a view", () => {
let doc1 = Automerge.init<any>()
doc1 = Automerge.change(doc1, d => (d.key = "value"))
let heads = Automerge.getHeads(doc1)
doc1 = Automerge.change(doc1, d => (d.key = "value2"))
let fork = Automerge.clone(Automerge.view(doc1, heads))
assert.deepEqual(fork, { key: "value" })
fork = Automerge.change(fork, d => (d.key = "value3"))
assert.deepEqual(fork, { key: "value3" })
})
describe("getObjectId", () => {
let s1 = Automerge.from({
"string": "string",
"number": 1,
"null": null,
"date": new Date(),
"counter": new Automerge.Counter(),
"bytes": new Uint8Array(10),
"text": "",
"list": [],
"map": {}
})
it("should return null for scalar values", () => {
assert.equal(Automerge.getObjectId(s1.string), null)
assert.equal(Automerge.getObjectId(s1.number), null)
assert.equal(Automerge.getObjectId(s1.null), null)
assert.equal(Automerge.getObjectId(s1.date), null)
assert.equal(Automerge.getObjectId(s1.counter), null)
assert.equal(Automerge.getObjectId(s1.bytes), null)
})
it("should return _root for the root object", () => {
assert.equal(Automerge.getObjectId(s1), "_root")
})
it("should return non-null for map, list, text, and objects", () => {
assert.equal(Automerge.getObjectId(s1.text), null)
assert.notEqual(Automerge.getObjectId(s1.list), null)
assert.notEqual(Automerge.getObjectId(s1.map), null)
it("handle basic set and read on root object", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => {
d.hello = "world"
d.big = "little"
d.zip = "zop"
d.app = "dap"
assert.deepEqual(d, {
hello: "world",
big: "little",
zip: "zop",
app: "dap",
})
})
assert.deepEqual(doc2, {
hello: "world",
big: "little",
zip: "zop",
app: "dap",
})
})
it("should be able to insert and delete a large number of properties", () => {
let doc = Automerge.init<any>()
doc = Automerge.change(doc, doc => {
doc["k1"] = true
})
for (let idx = 1; idx <= 200; idx++) {
doc = Automerge.change(doc, doc => {
delete doc["k" + idx]
doc["k" + (idx + 1)] = true
assert(Object.keys(doc).length == 1)
})
}
})
it("can detect an automerge doc with isAutomerge()", () => {
const doc1 = Automerge.from({ sub: { object: true } })
assert(Automerge.isAutomerge(doc1))
assert(!Automerge.isAutomerge(doc1.sub))
assert(!Automerge.isAutomerge("String"))
assert(!Automerge.isAutomerge({ sub: { object: true } }))
assert(!Automerge.isAutomerge(undefined))
const jsObj = Automerge.toJS(doc1)
assert(!Automerge.isAutomerge(jsObj))
assert.deepEqual(jsObj, doc1)
})
it("should recursively freeze the document if requested", () => {
let doc1 = Automerge.init<any>({ freeze: true })
let doc2 = Automerge.init<any>()
assert(Object.isFrozen(doc1))
assert(!Object.isFrozen(doc2))
// will also freeze sub objects
doc1 = Automerge.change(
doc1,
doc => (doc.book = { title: "how to win friends" })
)
doc2 = Automerge.merge(doc2, doc1)
assert(Object.isFrozen(doc1))
assert(Object.isFrozen(doc1.book))
assert(!Object.isFrozen(doc2))
assert(!Object.isFrozen(doc2.book))
// works on from
let doc3 = Automerge.from({ sub: { obj: "inner" } }, { freeze: true })
assert(Object.isFrozen(doc3))
assert(Object.isFrozen(doc3.sub))
// works on load
let doc4 = Automerge.load<any>(Automerge.save(doc3), { freeze: true })
assert(Object.isFrozen(doc4))
assert(Object.isFrozen(doc4.sub))
// follows clone
let doc5 = Automerge.clone(doc4)
assert(Object.isFrozen(doc5))
assert(Object.isFrozen(doc5.sub))
// toJS does not freeze
let exported = Automerge.toJS(doc5)
assert(!Object.isFrozen(exported))
})
it("handle basic sets over many changes", () => {
let doc1 = Automerge.init<any>()
let timestamp = new Date()
let counter = new Automerge.Counter(100)
let bytes = new Uint8Array([10, 11, 12])
let doc2 = Automerge.change(doc1, d => {
d.hello = "world"
})
let doc3 = Automerge.change(doc2, d => {
d.counter1 = counter
})
let doc4 = Automerge.change(doc3, d => {
d.timestamp1 = timestamp
})
let doc5 = Automerge.change(doc4, d => {
d.app = null
})
let doc6 = Automerge.change(doc5, d => {
d.bytes1 = bytes
})
let doc7 = Automerge.change(doc6, d => {
d.uint = new Automerge.Uint(1)
d.int = new Automerge.Int(-1)
d.float64 = new Automerge.Float64(5.5)
d.number1 = 100
d.number2 = -45.67
d.true = true
d.false = false
})
assert.deepEqual(doc7, {
hello: "world",
true: true,
false: false,
int: -1,
uint: 1,
float64: 5.5,
number1: 100,
number2: -45.67,
counter1: counter,
timestamp1: timestamp,
bytes1: bytes,
app: null,
})
let changes = Automerge.getAllChanges(doc7)
let t1 = Automerge.init()
let [t2] = Automerge.applyChanges(t1, changes)
assert.deepEqual(doc7, t2)
})
it("handle overwrites to values", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => {
d.hello = "world1"
})
let doc3 = Automerge.change(doc2, d => {
d.hello = "world2"
})
let doc4 = Automerge.change(doc3, d => {
d.hello = "world3"
})
let doc5 = Automerge.change(doc4, d => {
d.hello = "world4"
})
assert.deepEqual(doc5, { hello: "world4" })
})
it("handle set with object value", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => {
d.subobj = { hello: "world", subsubobj: { zip: "zop" } }
})
assert.deepEqual(doc2, {
subobj: { hello: "world", subsubobj: { zip: "zop" } },
})
})
it("handle simple list creation", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => (d.list = []))
assert.deepEqual(doc2, { list: [] })
})
it("handle simple lists", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => {
d.list = [1, 2, 3]
})
assert.deepEqual(doc2.list.length, 3)
assert.deepEqual(doc2.list[0], 1)
assert.deepEqual(doc2.list[1], 2)
assert.deepEqual(doc2.list[2], 3)
assert.deepEqual(doc2, { list: [1, 2, 3] })
// assert.deepStrictEqual(Automerge.toJS(doc2), { list: [1,2,3] })
let doc3 = Automerge.change(doc2, d => {
d.list[1] = "a"
})
assert.deepEqual(doc3.list.length, 3)
assert.deepEqual(doc3.list[0], 1)
assert.deepEqual(doc3.list[1], "a")
assert.deepEqual(doc3.list[2], 3)
assert.deepEqual(doc3, { list: [1, "a", 3] })
})
it("handle simple lists", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => {
d.list = [1, 2, 3]
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2)
})
it("handle text", () => {
let doc1 = Automerge.init<any>()
let doc2 = Automerge.change(doc1, d => {
d.list = "hello"
Automerge.splice(d, "list", 2, 0, "Z")
})
let changes = Automerge.getChanges(doc1, doc2)
let docB1 = Automerge.init()
let [docB2] = Automerge.applyChanges(docB1, changes)
assert.deepEqual(docB2, doc2)
})
it("handle non-text strings", () => {
let doc1 = WASM.create(true)
doc1.put("_root", "text", "hello world")
let doc2 = Automerge.load<any>(doc1.save())
assert.throws(() => {
Automerge.change(doc2, d => {
Automerge.splice(d, "text", 1, 0, "Z")
})
}, /Cannot splice/)
})
it("have many list methods", () => {
let doc1 = Automerge.from({ list: [1, 2, 3] })
assert.deepEqual(doc1, { list: [1, 2, 3] })
let doc2 = Automerge.change(doc1, d => {
d.list.splice(1, 1, 9, 10)
})
assert.deepEqual(doc2, { list: [1, 9, 10, 3] })
let doc3 = Automerge.change(doc2, d => {
d.list.push(11, 12)
})
assert.deepEqual(doc3, { list: [1, 9, 10, 3, 11, 12] })
let doc4 = Automerge.change(doc3, d => {
d.list.unshift(2, 2)
})
assert.deepEqual(doc4, { list: [2, 2, 1, 9, 10, 3, 11, 12] })
let doc5 = Automerge.change(doc4, d => {
d.list.shift()
})
assert.deepEqual(doc5, { list: [2, 1, 9, 10, 3, 11, 12] })
let doc6 = Automerge.change(doc5, d => {
d.list.insertAt(3, 100, 101)
})
assert.deepEqual(doc6, { list: [2, 1, 9, 100, 101, 10, 3, 11, 12] })
})
it("allows access to the backend", () => {
let doc = Automerge.init()
assert.deepEqual(Object.keys(Automerge.getBackend(doc)), ["ptr"])
})
it("lists and text have indexof", () => {
let doc = Automerge.from({
list: [0, 1, 2, 3, 4, 5, 6],
text: "hello world",
})
assert.deepEqual(doc.list.indexOf(5), 5)
assert.deepEqual(doc.text.indexOf("world"), 6)
})
})
describe("emptyChange", () => {
it("should generate a hash", () => {
let doc = Automerge.init()
doc = Automerge.change<any>(doc, d => {
d.key = "value"
})
Automerge.save(doc)
let headsBefore = Automerge.getHeads(doc)
headsBefore.sort()
doc = Automerge.emptyChange(doc, "empty change")
let headsAfter = Automerge.getHeads(doc)
headsAfter.sort()
assert.notDeepEqual(headsBefore, headsAfter)
})
})
describe("proxy lists", () => {
it("behave like arrays", () => {
let doc = Automerge.from({
chars: ["a", "b", "c"],
numbers: [20, 3, 100],
repeats: [20, 20, 3, 3, 3, 3, 100, 100],
})
let r1: Array<number> = []
doc = Automerge.change(doc, d => {
assert.deepEqual((d.chars as any[]).concat([1, 2]), [
"a",
"b",
"c",
1,
2,
])
assert.deepEqual(
d.chars.map(n => n + "!"),
["a!", "b!", "c!"]
)
assert.deepEqual(
d.numbers.map(n => n + 10),
[30, 13, 110]
)
assert.deepEqual(d.numbers.toString(), "20,3,100")
assert.deepEqual(d.numbers.toLocaleString(), "20,3,100")
assert.deepEqual(
d.numbers.forEach((n: number) => r1.push(n)),
undefined
)
assert.deepEqual(
d.numbers.every(n => n > 1),
true
)
assert.deepEqual(
d.numbers.every(n => n > 10),
false
)
assert.deepEqual(
d.numbers.filter(n => n > 10),
[20, 100]
)
assert.deepEqual(
d.repeats.find(n => n < 10),
3
)
assert.deepEqual(
d.repeats.find(n => n < 10),
3
)
assert.deepEqual(
d.repeats.find(n => n < 0),
undefined
)
assert.deepEqual(
d.repeats.findIndex(n => n < 10),
2
)
assert.deepEqual(
d.repeats.findIndex(n => n < 0),
-1
)
assert.deepEqual(
d.repeats.findIndex(n => n < 10),
2
)
assert.deepEqual(
d.repeats.findIndex(n => n < 0),
-1
)
assert.deepEqual(d.numbers.includes(3), true)
assert.deepEqual(d.numbers.includes(-3), false)
assert.deepEqual(d.numbers.join("|"), "20|3|100")
assert.deepEqual(d.numbers.join(), "20,3,100")
assert.deepEqual(
d.numbers.some(f => f === 3),
true
)
assert.deepEqual(
d.numbers.some(f => f < 0),
false
)
assert.deepEqual(
d.numbers.reduce((sum, n) => sum + n, 100),
223
)
assert.deepEqual(
d.repeats.reduce((sum, n) => sum + n, 100),
352
)
assert.deepEqual(
d.chars.reduce((sum, n) => sum + n, "="),
"=abc"
)
assert.deepEqual(
d.chars.reduceRight((sum, n) => sum + n, "="),
"=cba"
)
assert.deepEqual(
d.numbers.reduceRight((sum, n) => sum + n, 100),
223
)
assert.deepEqual(d.repeats.lastIndexOf(3), 5)
assert.deepEqual(d.repeats.lastIndexOf(3, 3), 3)
})
doc = Automerge.change(doc, d => {
assert.deepEqual(d.numbers.fill(-1, 1, 2), [20, -1, 100])
assert.deepEqual(d.chars.fill("z", 1, 100), ["a", "z", "z"])
})
assert.deepEqual(r1, [20, 3, 100])
assert.deepEqual(doc.numbers, [20, -1, 100])
assert.deepEqual(doc.chars, ["a", "z", "z"])
})
})
it("should obtain the same conflicts, regardless of merge order", () => {
let s1 = Automerge.init<any>()
let s2 = Automerge.init<any>()
s1 = Automerge.change(s1, doc => {
doc.x = 1
doc.y = 2
})
s2 = Automerge.change(s2, doc => {
doc.x = 3
doc.y = 4
})
const m1 = Automerge.merge(Automerge.clone(s1), Automerge.clone(s2))
const m2 = Automerge.merge(Automerge.clone(s2), Automerge.clone(s1))
assert.deepStrictEqual(
Automerge.getConflicts(m1, "x"),
Automerge.getConflicts(m2, "x")
)
})
describe("getObjectId", () => {
let s1 = Automerge.from({
string: "string",
number: 1,
null: null,
date: new Date(),
counter: new Automerge.Counter(),
bytes: new Uint8Array(10),
text: "",
list: [],
map: {},
})
it("should return null for scalar values", () => {
assert.equal(Automerge.getObjectId(s1.string), null)
assert.equal(Automerge.getObjectId(s1.number), null)
assert.equal(Automerge.getObjectId(s1.null!), null)
assert.equal(Automerge.getObjectId(s1.date), null)
assert.equal(Automerge.getObjectId(s1.counter), null)
assert.equal(Automerge.getObjectId(s1.bytes), null)
})
it("should return _root for the root object", () => {
assert.equal(Automerge.getObjectId(s1), "_root")
})
it("should return non-null for map, list, text, and objects", () => {
assert.equal(Automerge.getObjectId(s1.text), null)
assert.notEqual(Automerge.getObjectId(s1.list), null)
assert.notEqual(Automerge.getObjectId(s1.map), null)
})
})
})

View file

@@ -1,97 -0,0 @@
import * as assert from 'assert'
import { checkEncoded } from './helpers'
import * as Automerge from '../src'
import { encodeChange, decodeChange } from '../src'
describe('change encoding', () => {
it('should encode text edits', () => {
/*
const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: '', deps: [], ops: [
{action: 'makeText', obj: '_root', key: 'text', insert: false, pred: []},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
{action: 'del', obj: '1@aaaa', elemId: '2@aaaa', insert: false, pred: ['2@aaaa']},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
]}
*/
const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: null, deps: [], ops: [
{action: 'makeText', obj: '_root', key: 'text', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
{action: 'del', obj: '1@aaaa', elemId: '2@aaaa', pred: ['2@aaaa']},
{action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
{action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
]}
checkEncoded(encodeChange(change1), [
0x85, 0x6f, 0x4a, 0x83, // magic bytes
0xe2, 0xbd, 0xfb, 0xf5, // checksum
1, 94, 0, 2, 0xaa, 0xaa, // chunkType: change, length, deps, actor 'aaaa'
1, 1, 9, 0, 0, // seq, startOp, time, message, actor list
12, 0x01, 4, 0x02, 4, // column count, objActor, objCtr
0x11, 8, 0x13, 7, 0x15, 8, // keyActor, keyCtr, keyStr
0x34, 4, 0x42, 6, // insert, action
0x56, 6, 0x57, 3, // valLen, valRaw
0x70, 6, 0x71, 2, 0x73, 2, // predNum, predActor, predCtr
0, 1, 4, 0, // objActor column: null, 0, 0, 0, 0
0, 1, 4, 1, // objCtr column: null, 1, 1, 1, 1
0, 2, 0x7f, 0, 0, 1, 0x7f, 0, // keyActor column: null, null, 0, null, 0
0, 1, 0x7c, 0, 2, 0x7e, 4, // keyCtr column: null, 0, 2, 0, 4
0x7f, 4, 0x74, 0x65, 0x78, 0x74, 0, 4, // keyStr column: 'text', null, null, null, null
1, 1, 1, 2, // insert column: false, true, false, true, true
0x7d, 4, 1, 3, 2, 1, // action column: makeText, set, del, set, set
0x7d, 0, 0x16, 0, 2, 0x16, // valLen column: 0, 0x16, 0, 0x16, 0x16
0x68, 0x48, 0x69, // valRaw column: 'h', 'H', 'i'
2, 0, 0x7f, 1, 2, 0, // predNum column: 0, 0, 1, 0, 0
0x7f, 0, // predActor column: 0
0x7f, 2 // predCtr column: 2
])
const decoded = decodeChange(encodeChange(change1))
assert.deepStrictEqual(decoded, Object.assign({hash: decoded.hash}, change1))
})
// FIXME - skipping this b/c it was never implemented in the rust impl and isn't trivial
/*
it.skip('should require strict ordering of preds', () => {
const change = new Uint8Array([
133, 111, 74, 131, 31, 229, 112, 44, 1, 105, 1, 58, 30, 190, 100, 253, 180, 180, 66, 49, 126,
81, 142, 10, 3, 35, 140, 189, 231, 34, 145, 57, 66, 23, 224, 149, 64, 97, 88, 140, 168, 194,
229, 4, 244, 209, 58, 138, 67, 140, 1, 152, 236, 250, 2, 0, 1, 4, 55, 234, 66, 242, 8, 21, 11,
52, 1, 66, 2, 86, 3, 87, 10, 112, 2, 113, 3, 115, 4, 127, 9, 99, 111, 109, 109, 111, 110, 86,
97, 114, 1, 127, 1, 127, 166, 1, 52, 48, 57, 49, 52, 57, 52, 53, 56, 50, 127, 2, 126, 0, 1,
126, 139, 1, 0
])
assert.throws(() => { decodeChange(change) }, /operation IDs are not in ascending order/)
})
*/
describe('with trailing bytes', () => {
let change = new Uint8Array([
0x85, 0x6f, 0x4a, 0x83, // magic bytes
0xb2, 0x98, 0x9e, 0xa9, // checksum
1, 61, 0, 2, 0x12, 0x34, // chunkType: change, length, deps, actor '1234'
1, 1, 252, 250, 220, 255, 5, // seq, startOp, time
14, 73, 110, 105, 116, 105, 97, 108, 105, 122, 97, 116, 105, 111, 110, // message: 'Initialization'
0, 6, // actor list, column count
0x15, 3, 0x34, 1, 0x42, 2, // keyStr, insert, action
0x56, 2, 0x57, 1, 0x70, 2, // valLen, valRaw, predNum
0x7f, 1, 0x78, // keyStr: 'x'
1, // insert: false
0x7f, 1, // action: set
0x7f, 19, // valLen: 1 byte of type uint
1, // valRaw: 1
0x7f, 0, // predNum: 0
0, 1, 2, 3, 4, 5, 6, 7, 8, 9 // 10 trailing bytes
])
it('should allow decoding and re-encoding', () => {
// NOTE: This calls the JavaScript encoding and decoding functions, even when the WebAssembly
// backend is loaded. Should the wasm backend export its own functions for testing?
checkEncoded(change, encodeChange(decodeChange(change)))
})
it('should be preserved in document encoding', () => {
const [doc] = Automerge.applyChanges(Automerge.init(), [change])
const [reconstructed] = Automerge.getAllChanges(Automerge.load(Automerge.save(doc)))
checkEncoded(change, reconstructed)
})
})
})
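The byte-level expectations in the change-encoding tests above rely on LEB128 variable-length integers for the column counts and values. As a self-contained sketch (the helper names are illustrative, not part of the Automerge API), unsigned LEB128 encoding and decoding work like this:

```javascript
// Minimal sketch of unsigned LEB128 (varint) encoding/decoding, the
// integer format used for the column lengths and values checked above.
// Helper names are hypothetical, not Automerge exports.
function leb128Encode(value) {
  const bytes = []
  do {
    let byte = value & 0x7f
    value >>>= 7
    if (value !== 0) byte |= 0x80 // set continuation bit
    bytes.push(byte)
  } while (value !== 0)
  return Uint8Array.from(bytes)
}

function leb128Decode(bytes) {
  let result = 0,
    shift = 0
  for (const byte of bytes) {
    result = (result | ((byte & 0x7f) << shift)) >>> 0
    if ((byte & 0x80) === 0) break
    shift += 7
  }
  return result
}
```

For example, 300 encodes to the two bytes `0xac, 0x02`: the low seven bits with a continuation flag set, followed by the remaining bits.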


@ -1,20 +1,28 @@
import * as assert from "assert"
import { unstable as Automerge } from "../src"
import * as assert from 'assert'
import * as Automerge from '../src'
describe('Automerge', () => {
describe('basics', () => {
it('should allow you to load incrementally', () => {
let doc1 = Automerge.from({ foo: "bar" })
let doc2 = Automerge.init();
doc2 = Automerge.loadIncremental(doc2, Automerge.save(doc1))
doc1 = Automerge.change(doc1, (d) => d.foo2 = "bar2")
doc2 = Automerge.loadIncremental(doc2, Automerge.getBackend(doc1).saveIncremental() )
doc1 = Automerge.change(doc1, (d) => d.foo = "bar2")
doc2 = Automerge.loadIncremental(doc2, Automerge.getBackend(doc1).saveIncremental() )
doc1 = Automerge.change(doc1, (d) => d.x = "y")
doc2 = Automerge.loadIncremental(doc2, Automerge.getBackend(doc1).saveIncremental() )
assert.deepEqual(doc1,doc2)
})
describe("Automerge", () => {
describe("basics", () => {
it("should allow you to load incrementally", () => {
let doc1 = Automerge.from<any>({ foo: "bar" })
let doc2 = Automerge.init<any>()
doc2 = Automerge.loadIncremental(doc2, Automerge.save(doc1))
doc1 = Automerge.change(doc1, d => (d.foo2 = "bar2"))
doc2 = Automerge.loadIncremental(
doc2,
Automerge.getBackend(doc1).saveIncremental()
)
doc1 = Automerge.change(doc1, d => (d.foo = "bar2"))
doc2 = Automerge.loadIncremental(
doc2,
Automerge.getBackend(doc1).saveIncremental()
)
doc1 = Automerge.change(doc1, d => (d.x = "y"))
doc2 = Automerge.loadIncremental(
doc2,
Automerge.getBackend(doc1).saveIncremental()
)
assert.deepEqual(doc1, doc2)
})
})
})


@ -1,16 +1,21 @@
import * as assert from 'assert'
import { Encoder } from './legacy/encoding'
import * as assert from "assert"
import { Encoder } from "./legacy/encoding"
// Assertion that succeeds if the first argument deepStrictEquals at least one of the
// subsequent arguments (but we don't care which one)
function assertEqualsOneOf(actual, ...expected) {
export function assertEqualsOneOf(actual, ...expected) {
assert(expected.length > 0)
for (let i = 0; i < expected.length; i++) {
try {
assert.deepStrictEqual(actual, expected[i])
return // if we get here without an exception, that means success
} catch (e) {
if (!e.name.match(/^AssertionError/) || i === expected.length - 1) throw e
if (e instanceof assert.AssertionError) {
if (!e.name.match(/^AssertionError/) || i === expected.length - 1)
throw e
} else {
throw e
}
}
}
}
@ -19,14 +24,13 @@ function assertEqualsOneOf(actual, ...expected) {
* Asserts that the byte array maintained by `encoder` contains the same byte
* sequence as the array `bytes`.
*/
function checkEncoded(encoder, bytes, detail) {
const encoded = (encoder instanceof Encoder) ? encoder.buffer : encoder
export function checkEncoded(encoder, bytes, detail?) {
const encoded = encoder instanceof Encoder ? encoder.buffer : encoder
const expected = new Uint8Array(bytes)
const message = (detail ? `${detail}: ` : '') + `${encoded} expected to equal ${expected}`
const message =
(detail ? `${detail}: ` : "") + `${encoded} expected to equal ${expected}`
assert(encoded.byteLength === expected.byteLength, message)
for (let i = 0; i < encoded.byteLength; i++) {
assert(encoded[i] === expected[i], message)
}
}
module.exports = { assertEqualsOneOf, checkEncoded }

File diff suppressed because it is too large


@ -1,5 +1,5 @@
function isObject(obj) {
return typeof obj === 'object' && obj !== null
return typeof obj === "object" && obj !== null
}
/**
@ -20,11 +20,11 @@ function copyObject(obj) {
* with an actor ID, separated by an `@` sign) and returns an object `{counter, actorId}`.
*/
function parseOpId(opId) {
const match = /^(\d+)@(.*)$/.exec(opId || '')
const match = /^(\d+)@(.*)$/.exec(opId || "")
if (!match) {
throw new RangeError(`Not a valid opId: ${opId}`)
}
return {counter: parseInt(match[1], 10), actorId: match[2]}
return { counter: parseInt(match[1], 10), actorId: match[2] }
}
/**
@ -32,7 +32,7 @@ function parseOpId(opId) {
*/
function equalBytes(array1, array2) {
if (!(array1 instanceof Uint8Array) || !(array2 instanceof Uint8Array)) {
throw new TypeError('equalBytes can only compare Uint8Arrays')
throw new TypeError("equalBytes can only compare Uint8Arrays")
}
if (array1.byteLength !== array2.byteLength) return false
for (let i = 0; i < array1.byteLength; i++) {
@ -51,5 +51,9 @@ function createArrayOfNulls(length) {
}
module.exports = {
isObject, copyObject, parseOpId, equalBytes, createArrayOfNulls
isObject,
copyObject,
parseOpId,
equalBytes,
createArrayOfNulls,
}
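For reference, the `parseOpId` helper shown in the diff above splits an operation ID of the form `<counter>@<actorId>`. Reproduced here as a standalone snippet:

```javascript
// Standalone copy of the parseOpId helper from the diff above: an
// opId string is "<counter>@<actorId>", e.g. "2@aaaa".
function parseOpId(opId) {
  const match = /^(\d+)@(.*)$/.exec(opId || "")
  if (!match) {
    throw new RangeError(`Not a valid opId: ${opId}`)
  }
  return { counter: parseInt(match[1], 10), actorId: match[2] }
}

console.log(parseOpId("2@aaaa")) // { counter: 2, actorId: 'aaaa' }
```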


@ -6,7 +6,7 @@
* https://github.com/anonyco/FastestSmallestTextEncoderDecoder
*/
const utf8encoder = new TextEncoder()
const utf8decoder = new TextDecoder('utf-8')
const utf8decoder = new TextDecoder("utf-8")
function stringToUtf8(string) {
return utf8encoder.encode(string)
@ -20,30 +20,48 @@ function utf8ToString(buffer) {
* Converts a string consisting of hexadecimal digits into an Uint8Array.
*/
function hexStringToBytes(value) {
if (typeof value !== 'string') {
throw new TypeError('value is not a string')
if (typeof value !== "string") {
throw new TypeError("value is not a string")
}
if (!/^([0-9a-f][0-9a-f])*$/.test(value)) {
throw new RangeError('value is not hexadecimal')
throw new RangeError("value is not hexadecimal")
}
if (value === '') {
if (value === "") {
return new Uint8Array(0)
} else {
return new Uint8Array(value.match(/../g).map(b => parseInt(b, 16)))
}
}
const NIBBLE_TO_HEX = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f']
const NIBBLE_TO_HEX = [
"0",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
"a",
"b",
"c",
"d",
"e",
"f",
]
const BYTE_TO_HEX = new Array(256)
for (let i = 0; i < 256; i++) {
BYTE_TO_HEX[i] = `${NIBBLE_TO_HEX[(i >>> 4) & 0xf]}${NIBBLE_TO_HEX[i & 0xf]}`;
BYTE_TO_HEX[i] = `${NIBBLE_TO_HEX[(i >>> 4) & 0xf]}${NIBBLE_TO_HEX[i & 0xf]}`
}
/**
* Converts a Uint8Array into the equivalent hexadecimal string.
*/
function bytesToHexString(bytes) {
let hex = '', len = bytes.byteLength
let hex = "",
len = bytes.byteLength
for (let i = 0; i < len; i++) {
hex += BYTE_TO_HEX[bytes[i]]
}
@ -95,14 +113,17 @@ class Encoder {
* appends it to the buffer. Returns the number of bytes written.
*/
appendUint32(value) {
if (!Number.isInteger(value)) throw new RangeError('value is not an integer')
if (value < 0 || value > 0xffffffff) throw new RangeError('number out of range')
if (!Number.isInteger(value))
throw new RangeError("value is not an integer")
if (value < 0 || value > 0xffffffff)
throw new RangeError("number out of range")
const numBytes = Math.max(1, Math.ceil((32 - Math.clz32(value)) / 7))
if (this.offset + numBytes > this.buf.byteLength) this.grow()
for (let i = 0; i < numBytes; i++) {
this.buf[this.offset + i] = (value & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
this.buf[this.offset + i] =
(value & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
value >>>= 7 // zero-filling right shift
}
this.offset += numBytes
@ -115,14 +136,19 @@ class Encoder {
* it to the buffer. Returns the number of bytes written.
*/
appendInt32(value) {
if (!Number.isInteger(value)) throw new RangeError('value is not an integer')
if (value < -0x80000000 || value > 0x7fffffff) throw new RangeError('number out of range')
if (!Number.isInteger(value))
throw new RangeError("value is not an integer")
if (value < -0x80000000 || value > 0x7fffffff)
throw new RangeError("number out of range")
const numBytes = Math.ceil((33 - Math.clz32(value >= 0 ? value : -value - 1)) / 7)
const numBytes = Math.ceil(
(33 - Math.clz32(value >= 0 ? value : -value - 1)) / 7
)
if (this.offset + numBytes > this.buf.byteLength) this.grow()
for (let i = 0; i < numBytes; i++) {
this.buf[this.offset + i] = (value & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
this.buf[this.offset + i] =
(value & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
value >>= 7 // sign-propagating right shift
}
this.offset += numBytes
@ -135,9 +161,10 @@ class Encoder {
* (53 bits).
*/
appendUint53(value) {
if (!Number.isInteger(value)) throw new RangeError('value is not an integer')
if (!Number.isInteger(value))
throw new RangeError("value is not an integer")
if (value < 0 || value > Number.MAX_SAFE_INTEGER) {
throw new RangeError('number out of range')
throw new RangeError("number out of range")
}
const high32 = Math.floor(value / 0x100000000)
const low32 = (value & 0xffffffff) >>> 0 // right shift to interpret as unsigned
@ -150,9 +177,10 @@ class Encoder {
* (53 bits).
*/
appendInt53(value) {
if (!Number.isInteger(value)) throw new RangeError('value is not an integer')
if (!Number.isInteger(value))
throw new RangeError("value is not an integer")
if (value < Number.MIN_SAFE_INTEGER || value > Number.MAX_SAFE_INTEGER) {
throw new RangeError('number out of range')
throw new RangeError("number out of range")
}
const high32 = Math.floor(value / 0x100000000)
const low32 = (value & 0xffffffff) >>> 0 // right shift to interpret as unsigned
@ -167,10 +195,10 @@ class Encoder {
*/
appendUint64(high32, low32) {
if (!Number.isInteger(high32) || !Number.isInteger(low32)) {
throw new RangeError('value is not an integer')
throw new RangeError("value is not an integer")
}
if (high32 < 0 || high32 > 0xffffffff || low32 < 0 || low32 > 0xffffffff) {
throw new RangeError('number out of range')
throw new RangeError("number out of range")
}
if (high32 === 0) return this.appendUint32(low32)
@ -180,10 +208,12 @@ class Encoder {
this.buf[this.offset + i] = (low32 & 0x7f) | 0x80
low32 >>>= 7 // zero-filling right shift
}
this.buf[this.offset + 4] = (low32 & 0x0f) | ((high32 & 0x07) << 4) | (numBytes === 5 ? 0x00 : 0x80)
this.buf[this.offset + 4] =
(low32 & 0x0f) | ((high32 & 0x07) << 4) | (numBytes === 5 ? 0x00 : 0x80)
high32 >>>= 3
for (let i = 5; i < numBytes; i++) {
this.buf[this.offset + i] = (high32 & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
this.buf[this.offset + i] =
(high32 & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
high32 >>>= 7
}
this.offset += numBytes
@ -200,25 +230,35 @@ class Encoder {
*/
appendInt64(high32, low32) {
if (!Number.isInteger(high32) || !Number.isInteger(low32)) {
throw new RangeError('value is not an integer')
throw new RangeError("value is not an integer")
}
if (high32 < -0x80000000 || high32 > 0x7fffffff || low32 < -0x80000000 || low32 > 0xffffffff) {
throw new RangeError('number out of range')
if (
high32 < -0x80000000 ||
high32 > 0x7fffffff ||
low32 < -0x80000000 ||
low32 > 0xffffffff
) {
throw new RangeError("number out of range")
}
low32 >>>= 0 // interpret as unsigned
if (high32 === 0 && low32 <= 0x7fffffff) return this.appendInt32(low32)
if (high32 === -1 && low32 >= 0x80000000) return this.appendInt32(low32 - 0x100000000)
if (high32 === -1 && low32 >= 0x80000000)
return this.appendInt32(low32 - 0x100000000)
const numBytes = Math.ceil((65 - Math.clz32(high32 >= 0 ? high32 : -high32 - 1)) / 7)
const numBytes = Math.ceil(
(65 - Math.clz32(high32 >= 0 ? high32 : -high32 - 1)) / 7
)
if (this.offset + numBytes > this.buf.byteLength) this.grow()
for (let i = 0; i < 4; i++) {
this.buf[this.offset + i] = (low32 & 0x7f) | 0x80
low32 >>>= 7 // zero-filling right shift
}
this.buf[this.offset + 4] = (low32 & 0x0f) | ((high32 & 0x07) << 4) | (numBytes === 5 ? 0x00 : 0x80)
this.buf[this.offset + 4] =
(low32 & 0x0f) | ((high32 & 0x07) << 4) | (numBytes === 5 ? 0x00 : 0x80)
high32 >>= 3 // sign-propagating right shift
for (let i = 5; i < numBytes; i++) {
this.buf[this.offset + i] = (high32 & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
this.buf[this.offset + i] =
(high32 & 0x7f) | (i === numBytes - 1 ? 0x00 : 0x80)
high32 >>= 7
}
this.offset += numBytes
@ -243,7 +283,7 @@ class Encoder {
* number of bytes appended.
*/
appendRawString(value) {
if (typeof value !== 'string') throw new TypeError('value is not a string')
if (typeof value !== "string") throw new TypeError("value is not a string")
return this.appendRawBytes(stringToUtf8(value))
}
@ -262,7 +302,7 @@ class Encoder {
* (where the length is encoded as an unsigned LEB128 integer).
*/
appendPrefixedString(value) {
if (typeof value !== 'string') throw new TypeError('value is not a string')
if (typeof value !== "string") throw new TypeError("value is not a string")
this.appendPrefixedBytes(stringToUtf8(value))
return this
}
@ -281,8 +321,7 @@ class Encoder {
* Flushes any unwritten data to the buffer. Call this before reading from
* the buffer constructed by this Encoder.
*/
finish() {
}
finish() {}
}
/**
@ -321,7 +360,7 @@ class Decoder {
*/
skip(bytes) {
if (this.offset + bytes > this.buf.byteLength) {
throw new RangeError('cannot skip beyond end of buffer')
throw new RangeError("cannot skip beyond end of buffer")
}
this.offset += bytes
}
@ -339,18 +378,20 @@ class Decoder {
* Throws an exception if the value doesn't fit in a 32-bit unsigned int.
*/
readUint32() {
let result = 0, shift = 0
let result = 0,
shift = 0
while (this.offset < this.buf.byteLength) {
const nextByte = this.buf[this.offset]
if (shift === 28 && (nextByte & 0xf0) !== 0) { // more than 5 bytes, or value > 0xffffffff
throw new RangeError('number out of range')
if (shift === 28 && (nextByte & 0xf0) !== 0) {
// more than 5 bytes, or value > 0xffffffff
throw new RangeError("number out of range")
}
result = (result | (nextByte & 0x7f) << shift) >>> 0 // right shift to interpret value as unsigned
result = (result | ((nextByte & 0x7f) << shift)) >>> 0 // right shift to interpret value as unsigned
shift += 7
this.offset++
if ((nextByte & 0x80) === 0) return result
}
throw new RangeError('buffer ended with incomplete number')
throw new RangeError("buffer ended with incomplete number")
}
/**
@ -358,13 +399,17 @@ class Decoder {
* Throws an exception if the value doesn't fit in a 32-bit signed int.
*/
readInt32() {
let result = 0, shift = 0
let result = 0,
shift = 0
while (this.offset < this.buf.byteLength) {
const nextByte = this.buf[this.offset]
if ((shift === 28 && (nextByte & 0x80) !== 0) || // more than 5 bytes
(shift === 28 && (nextByte & 0x40) === 0 && (nextByte & 0x38) !== 0) || // positive int > 0x7fffffff
(shift === 28 && (nextByte & 0x40) !== 0 && (nextByte & 0x38) !== 0x38)) { // negative int < -0x80000000
throw new RangeError('number out of range')
if (
(shift === 28 && (nextByte & 0x80) !== 0) || // more than 5 bytes
(shift === 28 && (nextByte & 0x40) === 0 && (nextByte & 0x38) !== 0) || // positive int > 0x7fffffff
(shift === 28 && (nextByte & 0x40) !== 0 && (nextByte & 0x38) !== 0x38)
) {
// negative int < -0x80000000
throw new RangeError("number out of range")
}
result |= (nextByte & 0x7f) << shift
shift += 7
@ -378,7 +423,7 @@ class Decoder {
}
}
}
throw new RangeError('buffer ended with incomplete number')
throw new RangeError("buffer ended with incomplete number")
}
/**
@ -389,7 +434,7 @@ class Decoder {
readUint53() {
const { low32, high32 } = this.readUint64()
if (high32 < 0 || high32 > 0x1fffff) {
throw new RangeError('number out of range')
throw new RangeError("number out of range")
}
return high32 * 0x100000000 + low32
}
@ -401,8 +446,12 @@ class Decoder {
*/
readInt53() {
const { low32, high32 } = this.readInt64()
if (high32 < -0x200000 || (high32 === -0x200000 && low32 === 0) || high32 > 0x1fffff) {
throw new RangeError('number out of range')
if (
high32 < -0x200000 ||
(high32 === -0x200000 && low32 === 0) ||
high32 > 0x1fffff
) {
throw new RangeError("number out of range")
}
return high32 * 0x100000000 + low32
}
@ -414,10 +463,12 @@ class Decoder {
* `{high32, low32}`.
*/
readUint64() {
let low32 = 0, high32 = 0, shift = 0
let low32 = 0,
high32 = 0,
shift = 0
while (this.offset < this.buf.byteLength && shift <= 28) {
const nextByte = this.buf[this.offset]
low32 = (low32 | (nextByte & 0x7f) << shift) >>> 0 // right shift to interpret value as unsigned
low32 = (low32 | ((nextByte & 0x7f) << shift)) >>> 0 // right shift to interpret value as unsigned
if (shift === 28) {
high32 = (nextByte & 0x70) >>> 4
}
@ -429,15 +480,16 @@ class Decoder {
shift = 3
while (this.offset < this.buf.byteLength) {
const nextByte = this.buf[this.offset]
if (shift === 31 && (nextByte & 0xfe) !== 0) { // more than 10 bytes, or value > 2^64 - 1
throw new RangeError('number out of range')
if (shift === 31 && (nextByte & 0xfe) !== 0) {
// more than 10 bytes, or value > 2^64 - 1
throw new RangeError("number out of range")
}
high32 = (high32 | (nextByte & 0x7f) << shift) >>> 0
high32 = (high32 | ((nextByte & 0x7f) << shift)) >>> 0
shift += 7
this.offset++
if ((nextByte & 0x80) === 0) return { high32, low32 }
}
throw new RangeError('buffer ended with incomplete number')
throw new RangeError("buffer ended with incomplete number")
}
/**
@ -448,17 +500,20 @@ class Decoder {
* sign of the `high32` half indicates the sign of the 64-bit number.
*/
readInt64() {
let low32 = 0, high32 = 0, shift = 0
let low32 = 0,
high32 = 0,
shift = 0
while (this.offset < this.buf.byteLength && shift <= 28) {
const nextByte = this.buf[this.offset]
low32 = (low32 | (nextByte & 0x7f) << shift) >>> 0 // right shift to interpret value as unsigned
low32 = (low32 | ((nextByte & 0x7f) << shift)) >>> 0 // right shift to interpret value as unsigned
if (shift === 28) {
high32 = (nextByte & 0x70) >>> 4
}
shift += 7
this.offset++
if ((nextByte & 0x80) === 0) {
if ((nextByte & 0x40) !== 0) { // sign-extend negative integer
if ((nextByte & 0x40) !== 0) {
// sign-extend negative integer
if (shift < 32) low32 = (low32 | (-1 << shift)) >>> 0
high32 |= -1 << Math.max(shift - 32, 0)
}
@ -472,19 +527,20 @@ class Decoder {
// On the 10th byte there are only two valid values: all 7 value bits zero
// (if the value is positive) or all 7 bits one (if the value is negative)
if (shift === 31 && nextByte !== 0 && nextByte !== 0x7f) {
throw new RangeError('number out of range')
throw new RangeError("number out of range")
}
high32 |= (nextByte & 0x7f) << shift
shift += 7
this.offset++
if ((nextByte & 0x80) === 0) {
if ((nextByte & 0x40) !== 0 && shift < 32) { // sign-extend negative integer
if ((nextByte & 0x40) !== 0 && shift < 32) {
// sign-extend negative integer
high32 |= -1 << shift
}
return { high32, low32 }
}
}
throw new RangeError('buffer ended with incomplete number')
throw new RangeError("buffer ended with incomplete number")
}
/**
@ -494,7 +550,7 @@ class Decoder {
readRawBytes(length) {
const start = this.offset
if (start + length > this.buf.byteLength) {
throw new RangeError('subarray exceeds buffer size')
throw new RangeError("subarray exceeds buffer size")
}
this.offset += length
return this.buf.subarray(start, this.offset)
@ -559,7 +615,7 @@ class RLEEncoder extends Encoder {
constructor(type) {
super()
this.type = type
this.state = 'empty'
this.state = "empty"
this.lastValue = undefined
this.count = 0
this.literal = []
@ -578,76 +634,81 @@ class RLEEncoder extends Encoder {
*/
_appendValue(value, repetitions = 1) {
if (repetitions <= 0) return
if (this.state === 'empty') {
this.state = (value === null ? 'nulls' : (repetitions === 1 ? 'loneValue' : 'repetition'))
if (this.state === "empty") {
this.state =
value === null
? "nulls"
: repetitions === 1
? "loneValue"
: "repetition"
this.lastValue = value
this.count = repetitions
} else if (this.state === 'loneValue') {
} else if (this.state === "loneValue") {
if (value === null) {
this.flush()
this.state = 'nulls'
this.state = "nulls"
this.count = repetitions
} else if (value === this.lastValue) {
this.state = 'repetition'
this.state = "repetition"
this.count = 1 + repetitions
} else if (repetitions > 1) {
this.flush()
this.state = 'repetition'
this.state = "repetition"
this.count = repetitions
this.lastValue = value
} else {
this.state = 'literal'
this.state = "literal"
this.literal = [this.lastValue]
this.lastValue = value
}
} else if (this.state === 'repetition') {
} else if (this.state === "repetition") {
if (value === null) {
this.flush()
this.state = 'nulls'
this.state = "nulls"
this.count = repetitions
} else if (value === this.lastValue) {
this.count += repetitions
} else if (repetitions > 1) {
this.flush()
this.state = 'repetition'
this.state = "repetition"
this.count = repetitions
this.lastValue = value
} else {
this.flush()
this.state = 'loneValue'
this.state = "loneValue"
this.lastValue = value
}
} else if (this.state === 'literal') {
} else if (this.state === "literal") {
if (value === null) {
this.literal.push(this.lastValue)
this.flush()
this.state = 'nulls'
this.state = "nulls"
this.count = repetitions
} else if (value === this.lastValue) {
this.flush()
this.state = 'repetition'
this.state = "repetition"
this.count = 1 + repetitions
} else if (repetitions > 1) {
this.literal.push(this.lastValue)
this.flush()
this.state = 'repetition'
this.state = "repetition"
this.count = repetitions
this.lastValue = value
} else {
this.literal.push(this.lastValue)
this.lastValue = value
}
} else if (this.state === 'nulls') {
} else if (this.state === "nulls") {
if (value === null) {
this.count += repetitions
} else if (repetitions > 1) {
this.flush()
this.state = 'repetition'
this.state = "repetition"
this.count = repetitions
this.lastValue = value
} else {
this.flush()
this.state = 'loneValue'
this.state = "loneValue"
this.lastValue = value
}
}
@ -666,13 +727,16 @@ class RLEEncoder extends Encoder {
*/
copyFrom(decoder, options = {}) {
const { count, sumValues, sumShift } = options
if (!(decoder instanceof RLEDecoder) || (decoder.type !== this.type)) {
throw new TypeError('incompatible type of decoder')
if (!(decoder instanceof RLEDecoder) || decoder.type !== this.type) {
throw new TypeError("incompatible type of decoder")
}
let remaining = (typeof count === 'number' ? count : Number.MAX_SAFE_INTEGER)
let nonNullValues = 0, sum = 0
if (count && remaining > 0 && decoder.done) throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done) return sumValues ? {nonNullValues, sum} : {nonNullValues}
let remaining = typeof count === "number" ? count : Number.MAX_SAFE_INTEGER
let nonNullValues = 0,
sum = 0
if (count && remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done)
return sumValues ? { nonNullValues, sum } : { nonNullValues }
// Copy a value so that we have a well-defined starting state. NB: when super.copyFrom() is
// called by the DeltaEncoder subclass, the following calls to readValue() and appendValue()
@ -684,87 +748,101 @@ class RLEEncoder extends Encoder {
remaining -= numNulls
decoder.count -= numNulls - 1
this.appendValue(null, numNulls)
if (count && remaining > 0 && decoder.done) throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done) return sumValues ? {nonNullValues, sum} : {nonNullValues}
if (count && remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done)
return sumValues ? { nonNullValues, sum } : { nonNullValues }
firstValue = decoder.readValue()
if (firstValue === null) throw new RangeError('null run must be followed by non-null value')
if (firstValue === null)
throw new RangeError("null run must be followed by non-null value")
}
this.appendValue(firstValue)
remaining--
nonNullValues++
if (sumValues) sum += (sumShift ? (firstValue >>> sumShift) : firstValue)
if (count && remaining > 0 && decoder.done) throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done) return sumValues ? {nonNullValues, sum} : {nonNullValues}
if (sumValues) sum += sumShift ? firstValue >>> sumShift : firstValue
if (count && remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done)
return sumValues ? { nonNullValues, sum } : { nonNullValues }
// Copy data at the record level without expanding repetitions
let firstRun = (decoder.count > 0)
let firstRun = decoder.count > 0
while (remaining > 0 && !decoder.done) {
if (!firstRun) decoder.readRecord()
const numValues = Math.min(decoder.count, remaining)
decoder.count -= numValues
if (decoder.state === 'literal') {
if (decoder.state === "literal") {
nonNullValues += numValues
for (let i = 0; i < numValues; i++) {
if (decoder.done) throw new RangeError('incomplete literal')
if (decoder.done) throw new RangeError("incomplete literal")
const value = decoder.readRawValue()
if (value === decoder.lastValue) throw new RangeError('Repetition of values is not allowed in literal')
if (value === decoder.lastValue)
throw new RangeError(
"Repetition of values is not allowed in literal"
)
decoder.lastValue = value
this._appendValue(value)
if (sumValues) sum += (sumShift ? (value >>> sumShift) : value)
if (sumValues) sum += sumShift ? value >>> sumShift : value
}
} else if (decoder.state === 'repetition') {
} else if (decoder.state === "repetition") {
nonNullValues += numValues
if (sumValues) sum += numValues * (sumShift ? (decoder.lastValue >>> sumShift) : decoder.lastValue)
if (sumValues)
sum +=
numValues *
(sumShift ? decoder.lastValue >>> sumShift : decoder.lastValue)
const value = decoder.lastValue
this._appendValue(value)
if (numValues > 1) {
this._appendValue(value)
if (this.state !== 'repetition') throw new RangeError(`Unexpected state ${this.state}`)
if (this.state !== "repetition")
throw new RangeError(`Unexpected state ${this.state}`)
this.count += numValues - 2
}
} else if (decoder.state === 'nulls') {
} else if (decoder.state === "nulls") {
this._appendValue(null)
if (this.state !== 'nulls') throw new RangeError(`Unexpected state ${this.state}`)
if (this.state !== "nulls")
throw new RangeError(`Unexpected state ${this.state}`)
this.count += numValues - 1
}
firstRun = false
remaining -= numValues
}
if (count && remaining > 0 && decoder.done) throw new RangeError(`cannot copy ${count} values`)
return sumValues ? {nonNullValues, sum} : {nonNullValues}
if (count && remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${count} values`)
return sumValues ? { nonNullValues, sum } : { nonNullValues }
}
/**
* Private method, do not call from outside the class.
*/
flush() {
if (this.state === 'loneValue') {
if (this.state === "loneValue") {
this.appendInt32(-1)
this.appendRawValue(this.lastValue)
} else if (this.state === 'repetition') {
} else if (this.state === "repetition") {
this.appendInt53(this.count)
this.appendRawValue(this.lastValue)
} else if (this.state === 'literal') {
} else if (this.state === "literal") {
this.appendInt53(-this.literal.length)
for (let v of this.literal) this.appendRawValue(v)
} else if (this.state === 'nulls') {
} else if (this.state === "nulls") {
this.appendInt32(0)
this.appendUint53(this.count)
}
this.state = 'empty'
this.state = "empty"
}
/**
* Private method, do not call from outside the class.
*/
appendRawValue(value) {
if (this.type === 'int') {
if (this.type === "int") {
this.appendInt53(value)
} else if (this.type === 'uint') {
} else if (this.type === "uint") {
this.appendUint53(value)
} else if (this.type === 'utf8') {
} else if (this.type === "utf8") {
this.appendPrefixedString(value)
} else {
throw new RangeError(`Unknown RLEEncoder datatype: ${this.type}`)
@ -776,9 +854,9 @@ class RLEEncoder extends Encoder {
* the buffer constructed by this Encoder.
*/
finish() {
if (this.state === 'literal') this.literal.push(this.lastValue)
if (this.state === "literal") this.literal.push(this.lastValue)
// Don't write anything if the only values we have seen are nulls
if (this.state !== 'nulls' || this.offset > 0) this.flush()
if (this.state !== "nulls" || this.offset > 0) this.flush()
}
}
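The `flush()` method above writes one of four record kinds depending on the encoder state. As an illustrative sketch of the same record scheme (not the Automerge implementation), a signed count selects the kind: a positive count means one value repeated that many times, a negative count introduces `-count` literal values, and zero introduces a null run:

```javascript
// Illustrative (non-Automerge) sketch of the RLE record scheme used
// by RLEEncoder.flush(): positive count = repetition, negative count
// = literal run of -count values, zero count = run of nulls.
function rleEncode(values) {
  const records = []
  let i = 0
  while (i < values.length) {
    let j = i
    while (j < values.length && values[j] === values[i]) j++
    const runLength = j - i
    if (values[i] === null) {
      records.push({ count: 0, nulls: runLength })
    } else if (runLength > 1) {
      records.push({ count: runLength, value: values[i] })
    } else {
      // gather a literal run: consecutive non-null, non-repeating values
      let k = i + 1
      while (k < values.length && values[k] !== null && values[k] !== values[k - 1]) {
        if (k + 1 < values.length && values[k + 1] === values[k]) break
        k++
      }
      records.push({ count: -(k - i), values: values.slice(i, k) })
      j = k
    }
    i = j
  }
  return records
}

console.log(rleEncode([1, 1, 1, null, null, 2, 3]))
// three records: a repetition of 1 (×3), a run of two nulls, a literal [2, 3]
```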
@ -800,7 +878,7 @@ class RLEDecoder extends Decoder {
* position, and true if we are at the end of the buffer.
*/
get done() {
return (this.count === 0) && (this.offset === this.buf.byteLength)
return this.count === 0 && this.offset === this.buf.byteLength
}
/**
@ -821,9 +899,10 @@ class RLEDecoder extends Decoder {
if (this.done) return null
if (this.count === 0) this.readRecord()
this.count -= 1
if (this.state === 'literal') {
if (this.state === "literal") {
const value = this.readRawValue()
if (value === this.lastValue)
throw new RangeError("Repetition of values is not allowed in literal")
this.lastValue = value
return value
} else {
@ -839,20 +918,22 @@ class RLEDecoder extends Decoder {
if (this.count === 0) {
this.count = this.readInt53()
if (this.count > 0) {
this.lastValue =
this.count <= numSkip ? this.skipRawValues(1) : this.readRawValue()
this.state = "repetition"
} else if (this.count < 0) {
this.count = -this.count
this.state = "literal"
} else {
// this.count == 0
this.count = this.readUint53()
this.lastValue = null
this.state = "nulls"
}
}
const consume = Math.min(numSkip, this.count)
if (this.state === "literal") this.skipRawValues(consume)
numSkip -= consume
this.count -= consume
}
@ -866,23 +947,34 @@ class RLEDecoder extends Decoder {
this.count = this.readInt53()
if (this.count > 1) {
const value = this.readRawValue()
if (
(this.state === "repetition" || this.state === "literal") &&
this.lastValue === value
) {
throw new RangeError(
"Successive repetitions with the same value are not allowed"
)
}
this.state = "repetition"
this.lastValue = value
} else if (this.count === 1) {
throw new RangeError(
"Repetition count of 1 is not allowed, use a literal instead"
)
} else if (this.count < 0) {
this.count = -this.count
if (this.state === "literal")
throw new RangeError("Successive literals are not allowed")
this.state = "literal"
} else {
// this.count == 0
if (this.state === "nulls")
throw new RangeError("Successive null runs are not allowed")
this.count = this.readUint53()
if (this.count === 0)
throw new RangeError("Zero-length null runs are not allowed")
this.lastValue = null
this.state = "nulls"
}
}
@ -891,11 +983,11 @@ class RLEDecoder extends Decoder {
* Reads one value of the datatype configured on construction.
*/
readRawValue() {
if (this.type === "int") {
return this.readInt53()
} else if (this.type === "uint") {
return this.readUint53()
} else if (this.type === "utf8") {
return this.readPrefixedString()
} else {
throw new RangeError(`Unknown RLEDecoder datatype: ${this.type}`)
@ -907,14 +999,14 @@ class RLEDecoder extends Decoder {
* Skips over `num` values of the datatype configured on construction.
*/
skipRawValues(num) {
if (this.type === "utf8") {
for (let i = 0; i < num; i++) this.skip(this.readUint53())
} else {
while (num > 0 && this.offset < this.buf.byteLength) {
if ((this.buf[this.offset] & 0x80) === 0) num--
this.offset++
}
if (num > 0) throw new RangeError("cannot skip beyond end of buffer")
}
}
}
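The decoder above distinguishes three record kinds by the sign of the leading count: positive means a repetition run, negative means a literal run, and zero introduces a run of nulls. A minimal standalone sketch of that scheme, assuming records are plain JS numbers rather than the LEB128-encoded bytes the real Decoder reads:

```javascript
// Sketch of the three RLE record kinds handled by readRecord():
//   count > 0  -> repetition: `count` copies of the next value
//   count < 0  -> literal run of -count distinct values
//   count == 0 -> run of nulls; the run length follows
// Assumes `records` is a flat array of plain numbers (hypothetical input,
// not the real byte-level encoding).
function decodeRleRecords(records) {
  const out = []
  let i = 0
  while (i < records.length) {
    const count = records[i++]
    if (count > 0) {
      const value = records[i++]
      for (let j = 0; j < count; j++) out.push(value)
    } else if (count < 0) {
      for (let j = 0; j < -count; j++) out.push(records[i++])
    } else {
      const nulls = records[i++]
      for (let j = 0; j < nulls; j++) out.push(null)
    }
  }
  return out
}
```

For example, `[3, 7, -2, 1, 2, 0, 2]` decodes to three sevens, a literal `1, 2`, and two nulls.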
@ -931,7 +1023,7 @@ class RLEDecoder extends Decoder {
*/
class DeltaEncoder extends RLEEncoder {
constructor() {
super("int")
this.absoluteValue = 0
}
@ -941,7 +1033,7 @@ class DeltaEncoder extends RLEEncoder {
*/
appendValue(value, repetitions = 1) {
if (repetitions <= 0) return
if (typeof value === "number") {
super.appendValue(value - this.absoluteValue, 1)
this.absoluteValue = value
if (repetitions > 1) super.appendValue(0, repetitions - 1)
@ -957,26 +1049,29 @@ class DeltaEncoder extends RLEEncoder {
*/
copyFrom(decoder, options = {}) {
if (options.sumValues) {
throw new RangeError("unsupported options for DeltaEncoder.copyFrom()")
}
if (!(decoder instanceof DeltaDecoder)) {
throw new TypeError("incompatible type of decoder")
}
let remaining = options.count
if (remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${remaining} values`)
if (remaining === 0 || decoder.done) return
// Copy any null values, and the first non-null value, so that appendValue() computes the
// difference between the encoder's last value and the decoder's first (absolute) value.
let value = decoder.readValue(),
nulls = 0
this.appendValue(value)
if (value === null) {
nulls = decoder.count + 1
if (remaining !== undefined && remaining < nulls) nulls = remaining
decoder.count -= nulls - 1
this.count += nulls - 1
if (remaining > nulls && decoder.done)
throw new RangeError(`cannot copy ${remaining} values`)
if (remaining === nulls || decoder.done) return
// The next value read is certain to be non-null because we're not at the end of the decoder,
@ -989,7 +1084,10 @@ class DeltaEncoder extends RLEEncoder {
// value, while subsequent values are relative. Thus, the sum of all of the (non-null) copied
// values must equal the absolute value of the final element copied.
if (remaining !== undefined) remaining -= nulls + 1
const { nonNullValues, sum } = super.copyFrom(decoder, {
count: remaining,
sumValues: true,
})
if (nonNullValues > 0) {
this.absoluteValue = sum
decoder.absoluteValue = sum
@ -1003,7 +1101,7 @@ class DeltaEncoder extends RLEEncoder {
*/
class DeltaDecoder extends RLEDecoder {
constructor(buffer) {
super("int", buffer)
this.absoluteValue = 0
}
@ -1036,12 +1134,12 @@ class DeltaDecoder extends RLEDecoder {
while (numSkip > 0 && !this.done) {
if (this.count === 0) this.readRecord()
const consume = Math.min(numSkip, this.count)
if (this.state === "literal") {
for (let i = 0; i < consume; i++) {
this.lastValue = this.readRawValue()
this.absoluteValue += this.lastValue
}
} else if (this.state === "repetition") {
this.absoluteValue += consume * this.lastValue
}
numSkip -= consume
@ -1090,12 +1188,13 @@ class BooleanEncoder extends Encoder {
*/
copyFrom(decoder, options = {}) {
if (!(decoder instanceof BooleanDecoder)) {
throw new TypeError("incompatible type of decoder")
}
const { count } = options
let remaining = typeof count === "number" ? count : Number.MAX_SAFE_INTEGER
if (count && remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${count} values`)
if (remaining === 0 || decoder.done) return
// Copy one value to bring decoder and encoder state into sync, then finish that value's repetitions
@ -1108,7 +1207,8 @@ class BooleanEncoder extends Encoder {
while (remaining > 0 && !decoder.done) {
decoder.count = decoder.readUint53()
if (decoder.count === 0)
throw new RangeError("Zero-length runs are not allowed")
decoder.lastValue = !decoder.lastValue
this.appendUint53(this.count)
@ -1119,7 +1219,8 @@ class BooleanEncoder extends Encoder {
remaining -= numCopied
}
if (count && remaining > 0 && decoder.done)
throw new RangeError(`cannot copy ${count} values`)
}
/**
@ -1151,7 +1252,7 @@ class BooleanDecoder extends Decoder {
* position, and true if we are at the end of the buffer.
*/
get done() {
return this.count === 0 && this.offset === this.buf.byteLength
}
/**
@ -1174,7 +1275,7 @@ class BooleanDecoder extends Decoder {
this.count = this.readUint53()
this.lastValue = !this.lastValue
if (this.count === 0 && !this.firstRun) {
throw new RangeError("Zero-length runs are not allowed")
}
this.firstRun = false
}
@ -1190,7 +1291,8 @@ class BooleanDecoder extends Decoder {
if (this.count === 0) {
this.count = this.readUint53()
this.lastValue = !this.lastValue
if (this.count === 0)
throw new RangeError("Zero-length runs are not allowed")
}
if (this.count < numSkip) {
numSkip -= this.count
@ -1204,6 +1306,16 @@ class BooleanDecoder extends Decoder {
}
module.exports = {
stringToUtf8,
utf8ToString,
hexStringToBytes,
bytesToHexString,
Encoder,
Decoder,
RLEEncoder,
RLEDecoder,
DeltaEncoder,
DeltaDecoder,
BooleanEncoder,
BooleanDecoder,
}
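The DeltaEncoder/DeltaDecoder pair above stores each value as its difference from the previous absolute value, so that monotonically increasing sequences become runs of small, often identical deltas that the RLE layer compresses well. A plain-number sketch of just that transform (the real classes also handle nulls, repetitions, and the byte-level encoding):

```javascript
// Delta transform: store differences from the previous value.
// Increasing sequences like op counters become highly compressible runs.
function toDeltas(values) {
  let prev = 0
  return values.map(v => {
    const delta = v - prev
    prev = v
    return delta
  })
}

// Inverse transform: running sum of the deltas recovers the absolute values.
function fromDeltas(deltas) {
  let abs = 0
  return deltas.map(d => (abs += d))
}
```

For example, `[10, 11, 12, 20]` becomes `[10, 1, 1, 8]`, where the two consecutive `1`s can be run-length encoded.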


@ -17,9 +17,14 @@
*/
const Backend = null //require('./backend')
const {
hexStringToBytes,
bytesToHexString,
Encoder,
Decoder,
} = require("./encoding")
const { decodeChangeMeta } = require("./columnar")
const { copyObject } = require("./common")
const HASH_SIZE = 32 // 256 bits = 32 bytes
const MESSAGE_TYPE_SYNC = 0x42 // first byte of a sync message, for identification
@ -28,7 +33,8 @@ const PEER_STATE_TYPE = 0x43 // first byte of an encoded peer state, for identif
// These constants correspond to a 1% false positive rate. The values can be changed without
// breaking compatibility of the network protocol, since the parameters used for a particular
// Bloom filter are encoded in the wire format.
const BITS_PER_ENTRY = 10,
NUM_PROBES = 7
/**
* A Bloom filter implementation that can be serialised to a byte array for transmission
@ -36,13 +42,15 @@ const BITS_PER_ENTRY = 10, NUM_PROBES = 7
* so this implementation does not perform its own hashing.
*/
class BloomFilter {
constructor(arg) {
if (Array.isArray(arg)) {
// arg is an array of SHA256 hashes in hexadecimal encoding
this.numEntries = arg.length
this.numBitsPerEntry = BITS_PER_ENTRY
this.numProbes = NUM_PROBES
this.bits = new Uint8Array(
Math.ceil((this.numEntries * this.numBitsPerEntry) / 8)
)
for (let hash of arg) this.addHash(hash)
} else if (arg instanceof Uint8Array) {
if (arg.byteLength === 0) {
@ -55,10 +63,12 @@ class BloomFilter {
this.numEntries = decoder.readUint32()
this.numBitsPerEntry = decoder.readUint32()
this.numProbes = decoder.readUint32()
this.bits = decoder.readRawBytes(
Math.ceil((this.numEntries * this.numBitsPerEntry) / 8)
)
}
} else {
throw new TypeError("invalid argument")
}
}
@ -86,12 +96,32 @@ class BloomFilter {
* http://www.ccis.northeastern.edu/home/pete/pub/bloom-filters-verification.pdf
*/
getProbes(hash) {
const hashBytes = hexStringToBytes(hash),
modulo = 8 * this.bits.byteLength
if (hashBytes.byteLength !== 32)
throw new RangeError(`Not a 256-bit hash: ${hash}`)
// on the next three lines, the right shift means interpret value as unsigned
let x =
((hashBytes[0] |
(hashBytes[1] << 8) |
(hashBytes[2] << 16) |
(hashBytes[3] << 24)) >>>
0) %
modulo
let y =
((hashBytes[4] |
(hashBytes[5] << 8) |
(hashBytes[6] << 16) |
(hashBytes[7] << 24)) >>>
0) %
modulo
let z =
((hashBytes[8] |
(hashBytes[9] << 8) |
(hashBytes[10] << 16) |
(hashBytes[11] << 24)) >>>
0) %
modulo
const probes = [x]
for (let i = 1; i < this.numProbes; i++) {
x = (x + y) % modulo
@ -128,12 +158,14 @@ class BloomFilter {
* Encodes a sorted array of SHA-256 hashes (as hexadecimal strings) into a byte array.
*/
function encodeHashes(encoder, hashes) {
if (!Array.isArray(hashes)) throw new TypeError("hashes must be an array")
encoder.appendUint32(hashes.length)
for (let i = 0; i < hashes.length; i++) {
if (i > 0 && hashes[i - 1] >= hashes[i])
throw new RangeError("hashes must be sorted")
const bytes = hexStringToBytes(hashes[i])
if (bytes.byteLength !== HASH_SIZE)
throw new TypeError("heads hashes must be 256 bits")
encoder.appendRawBytes(bytes)
}
}
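The `getProbes()` method above uses the double-hashing construction from the cited paper: three 32-bit words x, y, z are taken from the SHA-256 hash, the first probe is x, and each subsequent probe advances x by y and y by z (all modulo the filter size in bits), so no extra hashing is needed per probe. A standalone sketch of that index derivation (the function name and signature are illustrative, not part of the real API):

```javascript
// Double-hashing probe scheme: derive numProbes bit indices from the
// first 12 bytes of a SHA-256 hash. Assumes `hashBytes` is a Uint8Array
// with at least 12 bytes.
function probeIndices(hashBytes, numProbes, numBits) {
  // Read a little-endian 32-bit word; `>>> 0` forces an unsigned result.
  const word = o =>
    ((hashBytes[o] |
      (hashBytes[o + 1] << 8) |
      (hashBytes[o + 2] << 16) |
      (hashBytes[o + 3] << 24)) >>>
      0) %
    numBits
  let x = word(0),
    y = word(4),
    z = word(8)
  const probes = [x]
  for (let i = 1; i < numProbes; i++) {
    x = (x + y) % numBits
    y = (y + z) % numBits
    probes.push(x)
  }
  return probes
}
```

With x=1, y=2, z=3 and a 100-bit filter, three probes come out as 1, 3, 8: the step between probes itself grows by z each round.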
@ -143,7 +175,8 @@ function encodeHashes(encoder, hashes) {
* array of hex strings.
*/
function decodeHashes(decoder) {
let length = decoder.readUint32(),
hashes = []
for (let i = 0; i < length; i++) {
hashes.push(bytesToHexString(decoder.readRawBytes(HASH_SIZE)))
}
@ -183,11 +216,11 @@ function decodeSyncMessage(bytes) {
const heads = decodeHashes(decoder)
const need = decodeHashes(decoder)
const haveCount = decoder.readUint32()
let message = { heads, need, have: [], changes: [] }
for (let i = 0; i < haveCount; i++) {
const lastSync = decodeHashes(decoder)
const bloom = decoder.readPrefixedBytes(decoder)
message.have.push({ lastSync, bloom })
}
const changeCount = decoder.readUint32()
for (let i = 0; i < changeCount; i++) {
@ -234,7 +267,7 @@ function decodeSyncState(bytes) {
function makeBloomFilter(backend, lastSync) {
const newChanges = Backend.getChanges(backend, lastSync)
const hashes = newChanges.map(change => decodeChangeMeta(change, true).hash)
return { lastSync, bloom: new BloomFilter(hashes).bytes }
}
/**
@ -245,20 +278,26 @@ function makeBloomFilter(backend, lastSync) {
*/
function getChangesToSend(backend, have, need) {
if (have.length === 0) {
return need
.map(hash => Backend.getChangeByHash(backend, hash))
.filter(change => change !== undefined)
}
let lastSyncHashes = {},
bloomFilters = []
for (let h of have) {
for (let hash of h.lastSync) lastSyncHashes[hash] = true
bloomFilters.push(new BloomFilter(h.bloom))
}
// Get all changes that were added since the last sync
const changes = Backend.getChanges(backend, Object.keys(lastSyncHashes)).map(
change => decodeChangeMeta(change, true)
)
let changeHashes = {},
dependents = {},
hashesToSend = {}
for (let change of changes) {
changeHashes[change.hash] = true
@ -292,7 +331,8 @@ function getChangesToSend(backend, have, need) {
let changesToSend = []
for (let hash of need) {
hashesToSend[hash] = true
if (!changeHashes[hash]) {
// Change is not among those returned by getMissingChanges()?
const change = Backend.getChangeByHash(backend, hash)
if (change) changesToSend.push(change)
}
@ -317,7 +357,7 @@ function initSyncState() {
}
function compareArrays(a, b) {
return a.length === b.length && a.every((v, i) => v === b[i])
}
/**
@ -329,10 +369,19 @@ function generateSyncMessage(backend, syncState) {
throw new Error("generateSyncMessage called with no Automerge document")
}
if (!syncState) {
throw new Error("generateSyncMessage requires a syncState, which can be created with initSyncState()")
throw new Error(
"generateSyncMessage requires a syncState, which can be created with initSyncState()"
)
}
let {
sharedHeads,
lastSentHeads,
theirHeads,
theirNeed,
theirHave,
sentHashes,
} = syncState
const ourHeads = Backend.getHeads(backend)
// Hashes to explicitly request from the remote peer: any missing dependencies of unapplied
@ -356,18 +405,28 @@ function generateSyncMessage(backend, syncState) {
const lastSync = theirHave[0].lastSync
if (!lastSync.every(hash => Backend.getChangeByHash(backend, hash))) {
// we need to prompt them to send us a fresh sync message; the one they sent is unintelligible, so we don't know what they need
const resetMsg = {
heads: ourHeads,
need: [],
have: [{ lastSync: [], bloom: new Uint8Array(0) }],
changes: [],
}
return [syncState, encodeSyncMessage(resetMsg)]
}
}
// XXX: we should limit ourselves to only sending a subset of all the messages, probably limited by a total message size
// these changes should ideally be RLE encoded but we haven't implemented that yet.
let changesToSend =
Array.isArray(theirHave) && Array.isArray(theirNeed)
? getChangesToSend(backend, theirHave, theirNeed)
: []
// If the heads are equal, we're in sync and don't need to do anything further
const headsUnchanged =
Array.isArray(lastSentHeads) && compareArrays(ourHeads, lastSentHeads)
const headsEqual =
Array.isArray(theirHeads) && compareArrays(ourHeads, theirHeads)
if (headsUnchanged && headsEqual && changesToSend.length === 0) {
// no need to send a sync message if we know we're synced!
return [syncState, null]
@ -375,12 +434,19 @@ function generateSyncMessage(backend, syncState) {
// TODO: this recomputes the SHA-256 hash of each change; we should restructure this to avoid the
// unnecessary recomputation
changesToSend = changesToSend.filter(
change => !sentHashes[decodeChangeMeta(change, true).hash]
)
// Regular response to a sync message: send any changes that the other node
// doesn't have. We leave the "have" field empty because the previous message
// generated by `syncStart` already indicated what changes we have.
const syncMessage = {
heads: ourHeads,
have: ourHave,
need: ourNeed,
changes: changesToSend,
}
if (changesToSend.length > 0) {
sentHashes = copyObject(sentHashes)
for (const change of changesToSend) {
@ -388,7 +454,10 @@ function generateSyncMessage(backend, syncState) {
}
}
syncState = Object.assign({}, syncState, {
lastSentHeads: ourHeads,
sentHashes,
})
return [syncState, encodeSyncMessage(syncMessage)]
}
@ -406,13 +475,14 @@ function generateSyncMessage(backend, syncState) {
* another peer, that means that peer had those changes, and therefore we now both know about them.
*/
function advanceHeads(myOldHeads, myNewHeads, ourOldSharedHeads) {
const newHeads = myNewHeads.filter(head => !myOldHeads.includes(head))
const commonHeads = ourOldSharedHeads.filter(head =>
myNewHeads.includes(head)
)
const advancedHeads = [...new Set([...newHeads, ...commonHeads])].sort()
return advancedHeads
}
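The rule above can be made concrete with a small worked example. The logic below copies the body of `advanceHeads()` verbatim; the single-letter "hashes" are abbreviations for illustration only:

```javascript
// Same logic as advanceHeads() above, reproduced for a standalone example:
// heads that appeared since the last message are newly shared knowledge,
// and previously shared heads survive only while they remain heads.
function advanceHeadsExample(myOldHeads, myNewHeads, ourOldSharedHeads) {
  const newHeads = myNewHeads.filter(head => !myOldHeads.includes(head))
  const commonHeads = ourOldSharedHeads.filter(head =>
    myNewHeads.includes(head)
  )
  return [...new Set([...newHeads, ...commonHeads])].sort()
}
```

If our heads go from `["a"]` to `["a", "b"]` while `["a"]` was shared, the shared heads become `["a", "b"]`; if a new change supersedes `"a"` so our heads become `["b"]`, the stale `"a"` drops out and only `["b"]` is shared.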
/**
* Given a backend, a message, and the state of our peer, apply any changes, update what
* we believe about the peer, and (if there were applied changes) produce a patch for the frontend
@ -422,10 +492,13 @@ function receiveSyncMessage(backend, oldSyncState, binaryMessage) {
throw new Error("generateSyncMessage called with no Automerge document")
}
if (!oldSyncState) {
throw new Error("generateSyncMessage requires a syncState, which can be created with initSyncState()")
throw new Error(
"generateSyncMessage requires a syncState, which can be created with initSyncState()"
)
}
let { sharedHeads, lastSentHeads, sentHashes } = oldSyncState,
patch = null
const message = decodeSyncMessage(binaryMessage)
const beforeHeads = Backend.getHeads(backend)
@ -434,18 +507,27 @@ function receiveSyncMessage(backend, oldSyncState, binaryMessage) {
// changes without applying them. The set of changes may also be incomplete if the sender decided
// to break a large set of changes into chunks.
if (message.changes.length > 0) {
;[backend, patch] = Backend.applyChanges(backend, message.changes)
sharedHeads = advanceHeads(
beforeHeads,
Backend.getHeads(backend),
sharedHeads
)
}
// If heads are equal, indicate we don't need to send a response message
if (
message.changes.length === 0 &&
compareArrays(message.heads, beforeHeads)
) {
lastSentHeads = message.heads
}
// If all of the remote heads are known to us, that means either our heads are equal, or we are
// ahead of the remote peer. In this case, take the remote heads to be our shared heads.
const knownHeads = message.heads.filter(head =>
Backend.getChangeByHash(backend, head)
)
if (knownHeads.length === message.heads.length) {
sharedHeads = message.heads
// If the remote peer has lost all its data, reset our state to perform a full resync
@ -467,14 +549,18 @@ function receiveSyncMessage(backend, oldSyncState, binaryMessage) {
theirHave: message.have, // the information we need to calculate the changes they need
theirHeads: message.heads,
theirNeed: message.need,
sentHashes,
}
return [backend, syncState, patch]
}
module.exports = {
receiveSyncMessage,
generateSyncMessage,
encodeSyncMessage,
decodeSyncMessage,
initSyncState,
encodeSyncState,
decodeSyncState,
BloomFilter, // BloomFilter is a private API, exported only for testing purposes
}


@ -0,0 +1,99 @@
import * as assert from "assert"
import * as stable from "../src"
import { unstable } from "../src"
describe("stable/unstable interop", () => {
it("should allow reading Text from stable as strings in unstable", () => {
let stableDoc = stable.from({
text: new stable.Text("abc"),
})
let unstableDoc = unstable.init<any>()
unstableDoc = unstable.merge(unstableDoc, stableDoc)
assert.deepStrictEqual(unstableDoc.text, "abc")
})
it("should allow string from stable as Text in unstable", () => {
let unstableDoc = unstable.from({
text: "abc",
})
let stableDoc = stable.init<any>()
stableDoc = unstable.merge(stableDoc, unstableDoc)
assert.deepStrictEqual(stableDoc.text, new stable.Text("abc"))
})
it("should allow reading strings from stable as RawString in unstable", () => {
let stableDoc = stable.from({
text: "abc",
})
let unstableDoc = unstable.init<any>()
unstableDoc = unstable.merge(unstableDoc, stableDoc)
assert.deepStrictEqual(unstableDoc.text, new unstable.RawString("abc"))
})
it("should allow reading RawString from unstable as string in stable", () => {
let unstableDoc = unstable.from({
text: new unstable.RawString("abc"),
})
let stableDoc = stable.init<any>()
stableDoc = unstable.merge(stableDoc, unstableDoc)
assert.deepStrictEqual(stableDoc.text, "abc")
})
it("should show conflicts on text objects", () => {
let doc1 = stable.from({ text: new stable.Text("abc") }, "bb")
let doc2 = stable.from({ text: new stable.Text("def") }, "aa")
doc1 = stable.merge(doc1, doc2)
let conflicts = stable.getConflicts(doc1, "text")!
assert.equal(conflicts["1@bb"]!.toString(), "abc")
assert.equal(conflicts["1@aa"]!.toString(), "def")
let unstableDoc = unstable.init<any>()
unstableDoc = unstable.merge(unstableDoc, doc1)
let conflicts2 = unstable.getConflicts(unstableDoc, "text")!
assert.equal(conflicts2["1@bb"]!.toString(), "abc")
assert.equal(conflicts2["1@aa"]!.toString(), "def")
})
it("should allow filling a list with text in stable", () => {
let doc = stable.from<{ list: Array<stable.Text | null> }>({
list: [null, null, null],
})
doc = stable.change(doc, doc => {
doc.list.fill(new stable.Text("abc"), 0, 3)
})
assert.deepStrictEqual(doc.list, [
new stable.Text("abc"),
new stable.Text("abc"),
new stable.Text("abc"),
])
})
it("should allow filling a list with text in unstable", () => {
let doc = unstable.from<{ list: Array<string | null> }>({
list: [null, null, null],
})
doc = stable.change(doc, doc => {
doc.list.fill("abc", 0, 3)
})
assert.deepStrictEqual(doc.list, ["abc", "abc", "abc"])
})
it("should allow splicing text into a list on stable", () => {
let doc = stable.from<{ list: Array<stable.Text> }>({ list: [] })
doc = stable.change(doc, doc => {
doc.list.splice(0, 0, new stable.Text("abc"), new stable.Text("def"))
})
assert.deepStrictEqual(doc.list, [
new stable.Text("abc"),
new stable.Text("def"),
])
})
it("should allow splicing text into a list on unstable", () => {
let doc = unstable.from<{ list: Array<string> }>({ list: [] })
doc = unstable.change(doc, doc => {
doc.list.splice(0, 0, "abc", "def")
})
assert.deepStrictEqual(doc.list, ["abc", "def"])
})
})


@ -1,221 +1,34 @@
import * as assert from "assert"
import { unstable as Automerge } from "../src"
import { assertEqualsOneOf } from "./helpers"
function attributeStateToAttributes(accumulatedAttributes) {
const attributes = {}
Object.entries(accumulatedAttributes).forEach(([key, values]) => {
if (values.length && values[0] !== null) {
attributes[key] = values[0]
}
})
return attributes
type DocType = {
text: string
[key: string]: any
}
function isEquivalent(a, b) {
const aProps = Object.getOwnPropertyNames(a)
const bProps = Object.getOwnPropertyNames(b)
if (aProps.length != bProps.length) {
return false
}
for (let i = 0; i < aProps.length; i++) {
const propName = aProps[i]
if (a[propName] !== b[propName]) {
return false
}
}
return true
}
function isControlMarker(pseudoCharacter) {
return typeof pseudoCharacter === 'object' && pseudoCharacter.attributes
}
function opFrom(text, attributes) {
let op = { insert: text }
if (Object.keys(attributes).length > 0) {
op.attributes = attributes
}
return op
}
function accumulateAttributes(span, accumulatedAttributes) {
Object.entries(span).forEach(([key, value]) => {
if (!accumulatedAttributes[key]) {
accumulatedAttributes[key] = []
}
if (value === null) {
if (accumulatedAttributes[key].length === 0 || accumulatedAttributes[key] === null) {
accumulatedAttributes[key].unshift(null)
} else {
accumulatedAttributes[key].shift()
}
} else {
if (accumulatedAttributes[key][0] === null) {
accumulatedAttributes[key].shift()
} else {
accumulatedAttributes[key].unshift(value)
}
}
})
return accumulatedAttributes
}
function automergeTextToDeltaDoc(text) {
let ops = []
let controlState = {}
let currentString = ""
let attributes = {}
text.toSpans().forEach((span) => {
if (isControlMarker(span)) {
controlState = accumulateAttributes(span.attributes, controlState)
} else {
let next = attributeStateToAttributes(controlState)
// if the next span has the same calculated attributes as the current span
// don't bother outputting it as a separate span, just let it ride
if (typeof span === 'string' && isEquivalent(next, attributes)) {
currentString = currentString + span
return
}
if (currentString) {
ops.push(opFrom(currentString, attributes))
}
// If we've got a string, we might be able to concatenate it to another
// same-attributed-string, so remember it and go to the next iteration.
if (typeof span === 'string') {
currentString = span
attributes = next
} else {
// otherwise we have an embed "character" and should output it immediately.
// embeds are always one-"character" in length.
ops.push(opFrom(span, next))
currentString = ''
attributes = {}
}
}
})
// at the end, flush any accumulated string out
if (currentString) {
ops.push(opFrom(currentString, attributes))
}
return ops
}
function inverseAttributes(attributes) {
let invertedAttributes = {}
Object.keys(attributes).forEach((key) => {
invertedAttributes[key] = null
})
return invertedAttributes
}
function applyDeleteOp(text, offset, op) {
let length = op.delete
while (length > 0) {
if (isControlMarker(text.get(offset))) {
offset += 1
} else {
// we need to not delete control characters, but we do delete embed characters
text.deleteAt(offset, 1)
length -= 1
}
}
return [text, offset]
}
function applyRetainOp(text, offset, op) {
let length = op.retain
if (op.attributes) {
text.insertAt(offset, { attributes: op.attributes })
offset += 1
}
while (length > 0) {
const char = text.get(offset)
offset += 1
if (!isControlMarker(char)) {
length -= 1
}
}
if (op.attributes) {
text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
offset += 1
}
return [text, offset]
}
function applyInsertOp(text, offset, op) {
let originalOffset = offset
if (typeof op.insert === 'string') {
text.insertAt(offset, ...op.insert.split(''))
offset += op.insert.length
} else {
// we have an embed or something similar
text.insertAt(offset, op.insert)
offset += 1
}
if (op.attributes) {
text.insertAt(originalOffset, { attributes: op.attributes })
offset += 1
}
if (op.attributes) {
text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
offset += 1
}
return [text, offset]
}
// XXX: uhhhhh, why can't I pass in text?
function applyDeltaDocToAutomergeText(delta, doc) {
let offset = 0
delta.forEach(op => {
if (op.retain) {
[, offset] = applyRetainOp(doc.text, offset, op)
} else if (op.delete) {
[, offset] = applyDeleteOp(doc.text, offset, op)
} else if (op.insert) {
[, offset] = applyInsertOp(doc.text, offset, op)
}
})
}
describe("Automerge.Text", () => {
let s1: Automerge.Doc<DocType>, s2: Automerge.Doc<DocType>
beforeEach(() => {
s1 = Automerge.change(Automerge.init<DocType>(), doc => (doc.text = ""))
s2 = Automerge.merge(Automerge.init<DocType>(), s1)
})
it("should support insertion", () => {
s1 = Automerge.change(s1, doc => Automerge.splice(doc, "text", 0, 0, "a"))
assert.strictEqual(s1.text.length, 1)
assert.strictEqual(s1.text[0], "a")
assert.strictEqual(s1.text, "a")
//assert.strictEqual(s1.text.getElemId(0), `2@${Automerge.getActorId(s1)}`)
})
it("should support deletion", () => {
s1 = Automerge.change(s1, doc => Automerge.splice(doc, "text", 0, 0, "abc"))
s1 = Automerge.change(s1, doc => Automerge.splice(doc, "text", 1, 1))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text[0], "a")
assert.strictEqual(s1.text[1], "c")
assert.strictEqual(s1.text, "ac")
})
it("should support implicit and explicit deletion", () => {
@ -228,70 +41,71 @@ describe('Automerge.Text', () => {
assert.strictEqual(s1.text, "ac")
})
it("should handle concurrent insertion", () => {
s1 = Automerge.change(s1, doc => Automerge.splice(doc, "text", 0, 0, "abc"))
s2 = Automerge.change(s2, doc => Automerge.splice(doc, "text", 0, 0, "xyz"))
s1 = Automerge.merge(s1, s2)
assert.strictEqual(s1.text.length, 6)
assertEqualsOneOf(s1.text, "abcxyz", "xyzabc")
})
it("should handle text and other ops in the same change", () => {
s1 = Automerge.change(s1, doc => {
doc.foo = "bar"
Automerge.splice(doc, "text", 0, 0, "a")
})
assert.strictEqual(s1.foo, "bar")
assert.strictEqual(s1.text, "a")
assert.strictEqual(s1.text, "a")
})
it("should serialize to JSON as a simple string", () => {
s1 = Automerge.change(s1, doc => Automerge.splice(doc, "text", 0, 0, 'a"b'))
assert.strictEqual(JSON.stringify(s1), '{"text":"a\\"b"}')
})
it("should allow modification after an object is assigned to a document", () => {
s1 = Automerge.change(Automerge.init(), doc => {
doc.text = ""
Automerge.splice(doc, "text", 0, 0, "abcd")
Automerge.splice(doc, "text", 2, 1)
assert.strictEqual(doc.text, "abd")
})
assert.strictEqual(s1.text, "abd")
})
it("should not allow modification outside of a change callback", () => {
assert.throws(
() => Automerge.splice(s1, "text", 0, 0, "a"),
/object cannot be modified outside of a change block/
)
})
describe("with initial value", () => {
it("should initialize text in Automerge.from()", () => {
let s1 = Automerge.from({ text: "init" })
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text[0], "i")
assert.strictEqual(s1.text[1], "n")
assert.strictEqual(s1.text[2], "i")
assert.strictEqual(s1.text[3], "t")
assert.strictEqual(s1.text, "init")
})
it("should encode the initial value as a change", () => {
const s1 = Automerge.from({ text: "init" })
const changes = Automerge.getAllChanges(s1)
assert.strictEqual(changes.length, 1)
const [s2] = Automerge.applyChanges(Automerge.init<DocType>(), changes)
assert.strictEqual(s2.text, "init")
assert.strictEqual(s2.text, "init")
})
})
it("should support unicode when creating text", () => {
s1 = Automerge.from({
text: "🐦",
})
assert.strictEqual(s1.text, "🐦")
})
})

javascript/test/text_v1.ts Normal file

@ -0,0 +1,281 @@
import * as assert from "assert"
import * as Automerge from "../src"
import { assertEqualsOneOf } from "./helpers"
type DocType = { text: Automerge.Text; [key: string]: any }
describe("Automerge.Text", () => {
let s1: Automerge.Doc<DocType>, s2: Automerge.Doc<DocType>
beforeEach(() => {
s1 = Automerge.change(
Automerge.init<DocType>(),
doc => (doc.text = new Automerge.Text())
)
s2 = Automerge.merge(Automerge.init(), s1)
})
it("should support insertion", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a"))
assert.strictEqual(s1.text.length, 1)
assert.strictEqual(s1.text.get(0), "a")
assert.strictEqual(s1.text.toString(), "a")
//assert.strictEqual(s1.text.getElemId(0), `2@${Automerge.getActorId(s1)}`)
})
it("should support deletion", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", "b", "c"))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 1))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), "a")
assert.strictEqual(s1.text.get(1), "c")
assert.strictEqual(s1.text.toString(), "ac")
})
it("should support implicit and explicit deletion", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", "b", "c"))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1))
s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 0))
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), "a")
assert.strictEqual(s1.text.get(1), "c")
assert.strictEqual(s1.text.toString(), "ac")
})
it("should handle concurrent insertion", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", "b", "c"))
s2 = Automerge.change(s2, doc => doc.text.insertAt(0, "x", "y", "z"))
s1 = Automerge.merge(s1, s2)
assert.strictEqual(s1.text.length, 6)
assertEqualsOneOf(s1.text.toString(), "abcxyz", "xyzabc")
assertEqualsOneOf(s1.text.join(""), "abcxyz", "xyzabc")
})
it("should handle text and other ops in the same change", () => {
s1 = Automerge.change(s1, doc => {
doc.foo = "bar"
doc.text.insertAt(0, "a")
})
assert.strictEqual(s1.foo, "bar")
assert.strictEqual(s1.text.toString(), "a")
assert.strictEqual(s1.text.join(""), "a")
})
it("should serialize to JSON as a simple string", () => {
s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", '"', "b"))
assert.strictEqual(JSON.stringify(s1), '{"text":"a\\"b"}')
})
it("should allow modification before an object is assigned to a document", () => {
s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text()
text.insertAt(0, "a", "b", "c", "d")
text.deleteAt(2)
doc.text = text
assert.strictEqual(doc.text.toString(), "abd")
assert.strictEqual(doc.text.join(""), "abd")
})
assert.strictEqual(s1.text.toString(), "abd")
assert.strictEqual(s1.text.join(""), "abd")
})
it("should allow modification after an object is assigned to a document", () => {
s1 = Automerge.change(Automerge.init(), doc => {
const text = new Automerge.Text()
doc.text = text
doc.text.insertAt(0, "a", "b", "c", "d")
doc.text.deleteAt(2)
assert.strictEqual(doc.text.toString(), "abd")
assert.strictEqual(doc.text.join(""), "abd")
})
assert.strictEqual(s1.text.join(""), "abd")
})
it("should not allow modification outside of a change callback", () => {
assert.throws(
() => s1.text.insertAt(0, "a"),
/object cannot be modified outside of a change block/
)
})
describe("with initial value", () => {
it("should accept a string as initial value", () => {
let s1 = Automerge.change(
Automerge.init<DocType>(),
doc => (doc.text = new Automerge.Text("init"))
)
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), "i")
assert.strictEqual(s1.text.get(1), "n")
assert.strictEqual(s1.text.get(2), "i")
assert.strictEqual(s1.text.get(3), "t")
assert.strictEqual(s1.text.toString(), "init")
})
it("should accept an array as initial value", () => {
let s1 = Automerge.change(
Automerge.init<DocType>(),
doc => (doc.text = new Automerge.Text(["i", "n", "i", "t"]))
)
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), "i")
assert.strictEqual(s1.text.get(1), "n")
assert.strictEqual(s1.text.get(2), "i")
assert.strictEqual(s1.text.get(3), "t")
assert.strictEqual(s1.text.toString(), "init")
})
it("should initialize text in Automerge.from()", () => {
let s1 = Automerge.from({ text: new Automerge.Text("init") })
assert.strictEqual(s1.text.length, 4)
assert.strictEqual(s1.text.get(0), "i")
assert.strictEqual(s1.text.get(1), "n")
assert.strictEqual(s1.text.get(2), "i")
assert.strictEqual(s1.text.get(3), "t")
assert.strictEqual(s1.text.toString(), "init")
})
it("should encode the initial value as a change", () => {
const s1 = Automerge.from({ text: new Automerge.Text("init") })
const changes = Automerge.getAllChanges(s1)
assert.strictEqual(changes.length, 1)
const [s2] = Automerge.applyChanges(Automerge.init<DocType>(), changes)
assert.strictEqual(s2.text instanceof Automerge.Text, true)
assert.strictEqual(s2.text.toString(), "init")
assert.strictEqual(s2.text.join(""), "init")
})
it("should allow immediate access to the value", () => {
Automerge.change(Automerge.init<DocType>(), doc => {
const text = new Automerge.Text("init")
assert.strictEqual(text.length, 4)
assert.strictEqual(text.get(0), "i")
assert.strictEqual(text.toString(), "init")
doc.text = text
assert.strictEqual(doc.text.length, 4)
assert.strictEqual(doc.text.get(0), "i")
assert.strictEqual(doc.text.toString(), "init")
})
})
it("should allow pre-assignment modification of the initial value", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
const text = new Automerge.Text("init")
text.deleteAt(3)
assert.strictEqual(text.join(""), "ini")
doc.text = text
assert.strictEqual(doc.text.join(""), "ini")
assert.strictEqual(doc.text.toString(), "ini")
})
assert.strictEqual(s1.text.toString(), "ini")
assert.strictEqual(s1.text.join(""), "ini")
})
it("should allow post-assignment modification of the initial value", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
const text = new Automerge.Text("init")
doc.text = text
doc.text.deleteAt(0)
doc.text.insertAt(0, "I")
assert.strictEqual(doc.text.join(""), "Init")
assert.strictEqual(doc.text.toString(), "Init")
})
assert.strictEqual(s1.text.join(""), "Init")
assert.strictEqual(s1.text.toString(), "Init")
})
})
describe("non-textual control characters", () => {
let s1: Automerge.Doc<DocType>
beforeEach(() => {
s1 = Automerge.change(Automerge.init<DocType>(), doc => {
doc.text = new Automerge.Text()
doc.text.insertAt(0, "a")
doc.text.insertAt(1, { attribute: "bold" })
})
})
it("should allow fetching non-textual characters", () => {
assert.deepEqual(s1.text.get(1), { attribute: "bold" })
//assert.strictEqual(s1.text.getElemId(1), `3@${Automerge.getActorId(s1)}`)
})
it("should include control characters in string length", () => {
assert.strictEqual(s1.text.length, 2)
assert.strictEqual(s1.text.get(0), "a")
})
it("should replace control characters from toString()", () => {
assert.strictEqual(s1.text.toString(), "a\uFFFC")
})
it("should allow control characters to be updated", () => {
const s2 = Automerge.change(
s1,
doc => (doc.text.get(1)!.attribute = "italic")
)
const s3 = Automerge.load<DocType>(Automerge.save(s2))
assert.strictEqual(s1.text.get(1).attribute, "bold")
assert.strictEqual(s2.text.get(1).attribute, "italic")
assert.strictEqual(s3.text.get(1).attribute, "italic")
})
describe("spans interface to Text", () => {
it("should return a simple string as a single span", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
doc.text = new Automerge.Text("hello world")
})
assert.deepEqual(s1.text.toSpans(), ["hello world"])
})
it("should return an empty string as an empty array", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
doc.text = new Automerge.Text()
})
assert.deepEqual(s1.text.toSpans(), [])
})
it("should split a span at a control character", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
doc.text = new Automerge.Text("hello world")
doc.text.insertAt(5, { attributes: { bold: true } })
})
assert.deepEqual(s1.text.toSpans(), [
"hello",
{ attributes: { bold: true } },
" world",
])
})
it("should allow consecutive control characters", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
doc.text = new Automerge.Text("hello world")
doc.text.insertAt(5, { attributes: { bold: true } })
doc.text.insertAt(6, { attributes: { italic: true } })
})
assert.deepEqual(s1.text.toSpans(), [
"hello",
{ attributes: { bold: true } },
{ attributes: { italic: true } },
" world",
])
})
it("should allow non-consecutive control characters", () => {
let s1 = Automerge.change(Automerge.init<DocType>(), doc => {
doc.text = new Automerge.Text("hello world")
doc.text.insertAt(5, { attributes: { bold: true } })
doc.text.insertAt(12, { attributes: { italic: true } })
})
assert.deepEqual(s1.text.toSpans(), [
"hello",
{ attributes: { bold: true } },
" world",
{ attributes: { italic: true } },
])
})
})
})
it("should support unicode when creating text", () => {
s1 = Automerge.from({
text: new Automerge.Text("🐦"),
})
assert.strictEqual(s1.text.get(0), "🐦")
})
})


@ -1,20 +1,20 @@
import * as assert from "assert"
import * as Automerge from "../src"
const uuid = Automerge.uuid
describe("uuid", () => {
afterEach(() => {
uuid.reset()
})
describe("default implementation", () => {
it("generates unique values", () => {
assert.notEqual(uuid(), uuid())
})
})
describe("custom implementation", () => {
let counter
function customUuid() {
@ -22,11 +22,11 @@ describe('uuid', () => {
}
before(() => uuid.setFactory(customUuid))
beforeEach(() => (counter = 0))
it("invokes the custom factory", () => {
assert.equal(uuid(), "custom-uuid-0")
assert.equal(uuid(), "custom-uuid-1")
})
})
})


@ -1,22 +1,19 @@
{
"compilerOptions": {
"target": "es2016",
"sourceMap": false,
"declaration": true,
"resolveJsonModule": true,
"module": "commonjs",
"moduleResolution": "node",
"noImplicitAny": false,
"allowSyntheticDefaultImports": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"noFallthroughCasesInSwitch": true,
"skipLibCheck": true,
"outDir": "./dist"
},
"include": [ "src/**/*" ],
"exclude": [
"./dist/**/*",
"./node_modules"
]
"compilerOptions": {
"target": "es2016",
"sourceMap": false,
"declaration": true,
"resolveJsonModule": true,
"module": "commonjs",
"moduleResolution": "node",
"noImplicitAny": false,
"allowSyntheticDefaultImports": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"noFallthroughCasesInSwitch": true,
"skipLibCheck": true,
"outDir": "./dist"
},
"include": ["src/**/*", "test/**/*"],
"exclude": ["./dist/**/*", "./node_modules", "./src/**/*.deno.ts"]
}


@ -1,220 +0,0 @@
# Automerge
This library provides the core automerge data structure and sync algorithms.
Other libraries can be built on top of this one which provide IO and
persistence.
An automerge document can be thought of as an immutable POJO (plain old javascript
object) which `automerge` tracks the history of, allowing it to be merged with
any other automerge document.
## Creating and modifying a document
You can create a document with {@link init} or {@link from} and then make
changes to it with {@link change}; you can merge two documents with {@link
merge}.
```javascript
import * as automerge from "@automerge/automerge"
type DocType = {ideas: Array<automerge.Text>}
let doc1 = automerge.init<DocType>()
doc1 = automerge.change(doc1, d => {
d.ideas = [new automerge.Text("an immutable document")]
})
let doc2 = automerge.init<DocType>()
doc2 = automerge.merge(doc2, automerge.clone(doc1))
doc2 = automerge.change<DocType>(doc2, d => {
d.ideas.push(new automerge.Text("which records its history"))
})
// Note the `automerge.clone` call, see the "cloning" section of this readme for
// more detail
doc1 = automerge.merge(doc1, automerge.clone(doc2))
doc1 = automerge.change(doc1, d => {
d.ideas[0].deleteAt(13, 8)
d.ideas[0].insertAt(13, "object")
})
let doc3 = automerge.merge(doc1, doc2)
// doc3 is now {ideas: ["an immutable object", "which records its history"]}
```
## Applying changes from another document
You can get a representation of the result of the last {@link change} you made
to a document with {@link getLastLocalChange} and you can apply that change to
another document using {@link applyChanges}.
If you need to get just the changes which are in one document but not in another
you can use {@link getHeads} to get the heads of the document without the
changes and then {@link getMissingDeps}, passing the result of {@link getHeads}
on the document with the changes.
## Saving and loading documents
You can {@link save} a document to generate a compressed binary representation of
the document which can be loaded with {@link load}. If you have a document to which
you have recently made changes, you can generate just those changes with {@link
saveIncremental}; this will generate all the changes since you last called
`saveIncremental`. The changes generated can be applied to another document with
{@link loadIncremental}.
## Viewing different versions of a document
Occasionally you may wish to explicitly step to a different point in a document
history. One common reason to do this is if you need to obtain a set of changes
which take the document from one state to another in order to send those changes
to another peer (or to save them somewhere). You can use {@link view} to do this.
```javascript
import * as automerge from "@automerge/automerge"
import * as assert from "assert"
let doc = automerge.from({
"key1": "value1"
})
// Make a clone of the document at this point, maybe this is actually on another
// peer.
let doc2 = automerge.clone<any>(doc)
let heads = automerge.getHeads(doc)
doc = automerge.change<any>(doc, d => {
d.key2 = "value2"
})
doc = automerge.change<any>(doc, d => {
d.key3 = "value3"
})
// At this point we've generated two separate changes, now we want to send
// just those changes to someone else
// view is a cheap reference based copy of a document at a given set of heads
let before = automerge.view(doc, heads)
// This view doesn't show the last two changes in the document state
assert.deepEqual(before, {
key1: "value1"
})
// Get the changes to send to doc2
let changes = automerge.getChanges(before, doc)
// Apply the changes at doc2
doc2 = automerge.applyChanges<any>(doc2, changes)[0]
assert.deepEqual(doc2, {
key1: "value1",
key2: "value2",
key3: "value3"
})
```
If you have a {@link view} of a document which you want to make changes to you
can {@link clone} the viewed document.
## Syncing
The sync protocol is stateful. This means that we start by creating a {@link
SyncState} for each peer we are communicating with using {@link initSyncState}.
Then we generate a message to send to the peer by calling {@link
generateSyncMessage}. When we receive a message from the peer we call {@link
receiveSyncMessage}. Here's a simple example of a loop which just keeps two
peers in sync.
```javascript
let sync1 = automerge.initSyncState()
let msg: Uint8Array | null
;[sync1, msg] = automerge.generateSyncMessage(doc1, sync1)
while (true) {
if (msg != null) {
network.send(msg)
}
let resp: Uint8Array = network.receive()
;[doc1, sync1, _ignore] = automerge.receiveSyncMessage(doc1, sync1, resp)
;[sync1, msg] = automerge.generateSyncMessage(doc1, sync1)
}
```
## Conflicts
The only time conflicts occur in automerge documents is in concurrent
assignments to the same key in an object. In this case automerge
deterministically chooses an arbitrary value to present to the application but
you can examine the conflicts using {@link getConflicts}.
```javascript
import * as automerge from "@automerge/automerge"
type Profile = {
pets: Array<{name: string, type: string}>
}
let doc1 = automerge.init<Profile>("aaaa")
doc1 = automerge.change(doc1, d => {
d.pets = [{name: "Lassie", type: "dog"}]
})
let doc2 = automerge.init<Profile>("bbbb")
doc2 = automerge.merge(doc2, automerge.clone(doc1))
doc2 = automerge.change(doc2, d => {
d.pets[0].name = "Beethoven"
})
doc1 = automerge.change(doc1, d => {
d.pets[0].name = "Babe"
})
const doc3 = automerge.merge(doc1, doc2)
// Note that here we pass `doc3.pets`, not `doc3`
let conflicts = automerge.getConflicts(doc3.pets[0], "name")
// The two conflicting values are the keys of the conflicts object
assert.deepEqual(Object.values(conflicts), ["Babe", "Beethoven"])
```
## Actor IDs
By default automerge will generate a random actor ID for you, but most methods
for creating a document allow you to set the actor ID. You can get the actor ID
associated with the document by calling {@link getActorId}. Actor IDs must not
be used in concurrent threads of execution - all changes by a given actor ID
are expected to be sequential.
## Listening to patches
Sometimes you want to respond to changes made to an automerge document. In this
case you can use the {@link PatchCallback} type to receive notifications when
changes have been made.
## Cloning
Currently you cannot make mutating changes (i.e. call {@link change}) to a
document which you have two pointers to. For example, in this code:
```javascript
let doc1 = automerge.init()
let doc2 = automerge.change(doc1, d => d.key = "value")
```
`doc1` and `doc2` are both pointers to the same state. Any attempt to call
mutating methods on `doc1` will now result in an error like
Attempting to change an out of date document
If you encounter this you need to clone the original document; the above sample
would work as:
```javascript
let doc1 = automerge.init()
let doc2 = automerge.change(automerge.clone(doc1), d => d.key = "value")
```


@ -10,13 +10,8 @@ members = [
resolver = "2"
[profile.release]
debug = true
lto = true
opt-level = 3
codegen-units = 1
[profile.bench]
debug = true
[profile.release.package.automerge-wasm]
debug = false
opt-level = 3
debug = true


@ -0,0 +1,250 @@
---
Language: Cpp
# BasedOnStyle: Chromium
AccessModifierOffset: -1
AlignAfterOpenBracket: Align
AlignArrayOfStructures: None
AlignConsecutiveAssignments:
Enabled: false
AcrossEmptyLines: false
AcrossComments: false
AlignCompound: false
PadOperators: true
AlignConsecutiveBitFields:
Enabled: false
AcrossEmptyLines: false
AcrossComments: false
AlignCompound: false
PadOperators: false
AlignConsecutiveDeclarations:
Enabled: false
AcrossEmptyLines: false
AcrossComments: false
AlignCompound: false
PadOperators: false
AlignConsecutiveMacros:
Enabled: false
AcrossEmptyLines: false
AcrossComments: false
AlignCompound: false
PadOperators: false
AlignEscapedNewlines: Left
AlignOperands: Align
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortEnumsOnASingleLine: true
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: Inline
AllowShortLambdasOnASingleLine: All
AllowShortIfStatementsOnASingleLine: Never
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: Yes
AttributeMacros:
- __capability
BinPackArguments: true
BinPackParameters: false
BraceWrapping:
AfterCaseLabel: false
AfterClass: false
AfterControlStatement: Never
AfterEnum: false
AfterFunction: false
AfterNamespace: false
AfterObjCDeclaration: false
AfterStruct: false
AfterUnion: false
AfterExternBlock: false
BeforeCatch: false
BeforeElse: false
BeforeLambdaBody: false
BeforeWhile: false
IndentBraces: false
SplitEmptyFunction: true
SplitEmptyRecord: true
SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeConceptDeclarations: Always
BreakBeforeBraces: Attach
BreakBeforeInheritanceComma: false
BreakInheritanceList: BeforeColon
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakConstructorInitializers: BeforeColon
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: true
ColumnLimit: 120
CommentPragmas: '^ IWYU pragma:'
QualifierAlignment: Leave
CompactNamespaces: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DeriveLineEnding: true
DerivePointerAlignment: false
DisableFormat: false
EmptyLineAfterAccessModifier: Never
EmptyLineBeforeAccessModifier: LogicalBlock
ExperimentalAutoDetectBinPacking: false
PackConstructorInitializers: NextLine
BasedOnStyle: ''
ConstructorInitializerAllOnOneLineOrOnePerLine: false
AllowAllConstructorInitializersOnNextLine: true
FixNamespaceComments: true
ForEachMacros:
- foreach
- Q_FOREACH
- BOOST_FOREACH
IfMacros:
- KJ_IF_MAYBE
IncludeBlocks: Preserve
IncludeCategories:
- Regex: '^<ext/.*\.h>'
Priority: 2
SortPriority: 0
CaseSensitive: false
- Regex: '^<.*\.h>'
Priority: 1
SortPriority: 0
CaseSensitive: false
- Regex: '^<.*'
Priority: 2
SortPriority: 0
CaseSensitive: false
- Regex: '.*'
Priority: 3
SortPriority: 0
CaseSensitive: false
IncludeIsMainRegex: '([-_](test|unittest))?$'
IncludeIsMainSourceRegex: ''
IndentAccessModifiers: false
IndentCaseLabels: true
IndentCaseBlocks: false
IndentGotoLabels: true
IndentPPDirectives: None
IndentExternBlock: AfterExternBlock
IndentRequiresClause: true
IndentWidth: 4
IndentWrappedFunctionNames: false
InsertBraces: false
InsertTrailingCommas: None
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
LambdaBodyIndentation: Signature
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBinPackProtocolList: Never
ObjCBlockIndentWidth: 2
ObjCBreakBeforeNestedBlockParam: true
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakOpenParenthesis: 0
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PenaltyIndentedWhitespace: 0
PointerAlignment: Left
PPIndentWidth: -1
RawStringFormats:
- Language: Cpp
Delimiters:
- cc
- CC
- cpp
- Cpp
- CPP
- 'c++'
- 'C++'
CanonicalDelimiter: ''
BasedOnStyle: google
- Language: TextProto
Delimiters:
- pb
- PB
- proto
- PROTO
EnclosingFunctions:
- EqualsProto
- EquivToProto
- PARSE_PARTIAL_TEXT_PROTO
- PARSE_TEST_PROTO
- PARSE_TEXT_PROTO
- ParseTextOrDie
- ParseTextProtoOrDie
- ParseTestProto
- ParsePartialTestProto
CanonicalDelimiter: pb
BasedOnStyle: google
ReferenceAlignment: Pointer
ReflowComments: true
RemoveBracesLLVM: false
RequiresClausePosition: OwnLine
SeparateDefinitionBlocks: Leave
ShortNamespaceLines: 1
SortIncludes: CaseSensitive
SortJavaStaticImport: Before
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceAfterLogicalNot: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeCaseColon: false
SpaceBeforeCpp11BracedList: false
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceBeforeParensOptions:
AfterControlStatements: true
AfterForeachMacros: true
AfterFunctionDefinitionName: false
AfterFunctionDeclarationName: false
AfterIfMacros: true
AfterOverloadedOperator: false
AfterRequiresInClause: false
AfterRequiresInExpression: false
BeforeNonEmptyParentheses: false
SpaceAroundPointerQualifiers: Default
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles: Never
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInLineCommentPrefix:
Minimum: 1
Maximum: -1
SpacesInParentheses: false
SpacesInSquareBrackets: false
SpaceBeforeSquareBrackets: false
BitFieldColonSpacing: Both
Standard: Auto
StatementAttributeLikeMacros:
- Q_EMIT
StatementMacros:
- Q_UNUSED
- QT_REQUIRE_VERSION
TabWidth: 8
UseCRLF: false
UseTab: Never
WhitespaceSensitiveMacros:
- STRINGIZE
- PP_STRINGIZE
- BOOST_PP_STRINGIZE
- NS_SWIFT_NAME
- CF_SWIFT_NAME
...


@ -1,10 +1,10 @@
automerge
automerge.h
automerge.o
*.cmake
build/
CMakeCache.txt
CMakeFiles
CMakePresets.json
Makefile
DartConfiguration.tcl
config.h
CMakeCache.txt
Cargo
out/


@ -1,97 +1,297 @@
cmake_minimum_required(VERSION 3.23 FATAL_ERROR)
# Parse the library name, project name and project version out of Cargo's TOML file.
set(CARGO_LIB_SECTION OFF)
set(LIBRARY_NAME "automerge")
set(LIBRARY_NAME "")
set(CARGO_PKG_SECTION OFF)
set(CARGO_PKG_NAME "")
set(CARGO_PKG_VERSION "")
file(READ Cargo.toml TOML_STRING)
string(REPLACE ";" "\\\\;" TOML_STRING "${TOML_STRING}")
string(REPLACE "\n" ";" TOML_LINES "${TOML_STRING}")
foreach(TOML_LINE IN ITEMS ${TOML_LINES})
string(REGEX MATCH "^\\[(lib|package)\\]$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 STREQUAL "lib")
set(CARGO_LIB_SECTION ON)
set(CARGO_PKG_SECTION OFF)
elseif(CMAKE_MATCH_1 STREQUAL "package")
set(CARGO_LIB_SECTION OFF)
set(CARGO_PKG_SECTION ON)
endif()
string(REGEX MATCH "^name += +\"([^\"]+)\"$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 AND (CARGO_LIB_SECTION AND NOT CARGO_PKG_SECTION))
set(LIBRARY_NAME "${CMAKE_MATCH_1}")
elseif(CMAKE_MATCH_1 AND (NOT CARGO_LIB_SECTION AND CARGO_PKG_SECTION))
set(CARGO_PKG_NAME "${CMAKE_MATCH_1}")
endif()
string(REGEX MATCH "^version += +\"([^\"]+)\"$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 AND CARGO_PKG_SECTION)
set(CARGO_PKG_VERSION "${CMAKE_MATCH_1}")
endif()
if(LIBRARY_NAME AND (CARGO_PKG_NAME AND CARGO_PKG_VERSION))
break()
endif()
endforeach()
project(${CARGO_PKG_NAME} VERSION 0.0.1 LANGUAGES C DESCRIPTION "C bindings for the Automerge Rust backend.")
set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)
option(BUILD_SHARED_LIBS "Enable the choice of a shared or static library.")
include(CTest)
include(CMakePackageConfigHelpers)
include(GNUInstallDirs)
set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
string(MAKE_C_IDENTIFIER ${PROJECT_NAME} SYMBOL_PREFIX)
string(TOUPPER ${SYMBOL_PREFIX} SYMBOL_PREFIX)
set(CARGO_TARGET_DIR "${CMAKE_CURRENT_BINARY_DIR}/Cargo/target")
set(CARGO_TARGET_DIR "${CMAKE_BINARY_DIR}/Cargo/target")
set(CBINDGEN_INCLUDEDIR "${CARGO_TARGET_DIR}/${CMAKE_INSTALL_INCLUDEDIR}")
set(CBINDGEN_INCLUDEDIR "${CMAKE_BINARY_DIR}/${CMAKE_INSTALL_INCLUDEDIR}")
set(CBINDGEN_TARGET_DIR "${CBINDGEN_INCLUDEDIR}/${PROJECT_NAME}")
add_subdirectory(src)
find_program (
CARGO_CMD
"cargo"
PATHS "$ENV{CARGO_HOME}/bin"
DOC "The Cargo command"
)
# Generate and install the configuration header.
if(NOT CARGO_CMD)
message(FATAL_ERROR "Cargo (Rust package manager) not found! "
"Please install it and/or set the CARGO_HOME "
"environment variable to its path.")
endif()
string(TOLOWER "${CMAKE_BUILD_TYPE}" BUILD_TYPE_LOWER)
# In order to build with -Z build-std, we need to pass target explicitly.
# https://doc.rust-lang.org/cargo/reference/unstable.html#build-std
execute_process (
COMMAND rustc -vV
OUTPUT_VARIABLE RUSTC_VERSION
OUTPUT_STRIP_TRAILING_WHITESPACE
)
string(REGEX REPLACE ".*host: ([^ \n]*).*" "\\1"
CARGO_TARGET
${RUSTC_VERSION}
)
if(BUILD_TYPE_LOWER STREQUAL debug)
set(CARGO_BUILD_TYPE "debug")
set(CARGO_FLAG --target=${CARGO_TARGET})
else()
set(CARGO_BUILD_TYPE "release")
if (NOT RUSTC_VERSION MATCHES "nightly")
set(RUSTUP_TOOLCHAIN nightly)
endif()
set(RUSTFLAGS -C\ panic=abort)
set(CARGO_FLAG -Z build-std=std,panic_abort --release --target=${CARGO_TARGET})
endif()
set(CARGO_FEATURES "")
set(CARGO_BINARY_DIR "${CARGO_TARGET_DIR}/${CARGO_TARGET}/${CARGO_BUILD_TYPE}")
set(BINDINGS_NAME "${LIBRARY_NAME}_core")
configure_file(
${CMAKE_MODULE_PATH}/Cargo.toml.in
${CMAKE_SOURCE_DIR}/Cargo.toml
@ONLY
NEWLINE_STYLE LF
)
set(INCLUDE_GUARD_PREFIX "${SYMBOL_PREFIX}")
configure_file(
${CMAKE_MODULE_PATH}/cbindgen.toml.in
${CMAKE_SOURCE_DIR}/cbindgen.toml
@ONLY
NEWLINE_STYLE LF
)
set(CARGO_OUTPUT
${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
${CARGO_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${BINDINGS_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}
)
# \note cbindgen's naming behavior isn't fully configurable and it ignores
# `const fn` calls (https://github.com/eqrion/cbindgen/issues/252).
add_custom_command(
OUTPUT
${CARGO_OUTPUT}
COMMAND
# \note cbindgen won't regenerate its output header file after it's been removed but it will after its
# configuration file has been updated.
${CMAKE_COMMAND} -DCONDITION=NOT_EXISTS -P ${CMAKE_SOURCE_DIR}/cmake/file-touch.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h ${CMAKE_SOURCE_DIR}/cbindgen.toml
COMMAND
${CMAKE_COMMAND} -E env CARGO_TARGET_DIR=${CARGO_TARGET_DIR} CBINDGEN_TARGET_DIR=${CBINDGEN_TARGET_DIR} RUSTUP_TOOLCHAIN=${RUSTUP_TOOLCHAIN} RUSTFLAGS=${RUSTFLAGS} ${CARGO_CMD} build ${CARGO_FLAG} ${CARGO_FEATURES}
COMMAND
# Compensate for cbindgen's translation of consecutive uppercase letters to "ScreamingSnakeCase".
${CMAKE_COMMAND} -DMATCH_REGEX=A_M\([^_]+\)_ -DREPLACE_EXPR=AM_\\1_ -P ${CMAKE_SOURCE_DIR}/cmake/file-regex-replace.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
COMMAND
# Compensate for cbindgen ignoring `std::mem::size_of::<usize>()` calls.
${CMAKE_COMMAND} -DMATCH_REGEX=USIZE_ -DREPLACE_EXPR=\+${CMAKE_SIZEOF_VOID_P} -P ${CMAKE_SOURCE_DIR}/cmake/file-regex-replace.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
MAIN_DEPENDENCY
src/lib.rs
DEPENDS
src/actor_id.rs
src/byte_span.rs
src/change.rs
src/doc.rs
src/doc/list.rs
src/doc/map.rs
src/doc/utils.rs
src/index.rs
src/item.rs
src/items.rs
src/obj.rs
src/result.rs
src/sync.rs
src/sync/have.rs
src/sync/message.rs
src/sync/state.rs
${CMAKE_SOURCE_DIR}/build.rs
${CMAKE_MODULE_PATH}/Cargo.toml.in
${CMAKE_MODULE_PATH}/cbindgen.toml.in
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Producing the bindings' artifacts with Cargo..."
VERBATIM
)
add_custom_target(${BINDINGS_NAME}_artifacts ALL
DEPENDS ${CARGO_OUTPUT}
)
add_library(${BINDINGS_NAME} STATIC IMPORTED GLOBAL)
target_include_directories(${BINDINGS_NAME} INTERFACE "${CBINDGEN_INCLUDEDIR}")
set_target_properties(
${BINDINGS_NAME}
PROPERTIES
# \note Cargo writes a debug build into a nested directory instead of
# decorating its name.
DEBUG_POSTFIX ""
DEFINE_SYMBOL ""
IMPORTED_IMPLIB ""
IMPORTED_LOCATION "${CARGO_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${BINDINGS_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}"
IMPORTED_NO_SONAME "TRUE"
IMPORTED_SONAME ""
LINKER_LANGUAGE C
PUBLIC_HEADER "${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h"
SOVERSION "${PROJECT_VERSION_MAJOR}"
VERSION "${PROJECT_VERSION}"
# \note Cargo exports all of the symbols automatically.
WINDOWS_EXPORT_ALL_SYMBOLS "TRUE"
)
target_compile_definitions(${BINDINGS_NAME} INTERFACE $<TARGET_PROPERTY:${BINDINGS_NAME},DEFINE_SYMBOL>)
set(UTILS_SUBDIR "utils")
add_custom_command(
OUTPUT
${CBINDGEN_TARGET_DIR}/${UTILS_SUBDIR}/enum_string.h
${CMAKE_BINARY_DIR}/src/${UTILS_SUBDIR}/enum_string.c
COMMAND
${CMAKE_COMMAND} -DPROJECT_NAME=${PROJECT_NAME} -DLIBRARY_NAME=${LIBRARY_NAME} -DSUBDIR=${UTILS_SUBDIR} -P ${CMAKE_SOURCE_DIR}/cmake/enum-string-functions-gen.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h ${CBINDGEN_TARGET_DIR}/${UTILS_SUBDIR}/enum_string.h ${CMAKE_BINARY_DIR}/src/${UTILS_SUBDIR}/enum_string.c
MAIN_DEPENDENCY
${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
DEPENDS
${CMAKE_SOURCE_DIR}/cmake/enum-string-functions-gen.cmake
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Generating the enum string functions with CMake..."
VERBATIM
)
add_custom_target(${LIBRARY_NAME}_utilities
DEPENDS ${CBINDGEN_TARGET_DIR}/${UTILS_SUBDIR}/enum_string.h
${CMAKE_BINARY_DIR}/src/${UTILS_SUBDIR}/enum_string.c
)
add_library(${LIBRARY_NAME})
target_compile_features(${LIBRARY_NAME} PRIVATE c_std_99)
set(CMAKE_THREAD_PREFER_PTHREAD TRUE)
set(THREADS_PREFER_PTHREAD_FLAG TRUE)
find_package(Threads REQUIRED)
set(LIBRARY_DEPENDENCIES Threads::Threads ${CMAKE_DL_LIBS})
if(WIN32)
list(APPEND LIBRARY_DEPENDENCIES Bcrypt userenv ws2_32)
else()
list(APPEND LIBRARY_DEPENDENCIES m)
endif()
target_link_libraries(${LIBRARY_NAME}
PUBLIC ${BINDINGS_NAME}
${LIBRARY_DEPENDENCIES}
)
# \note An imported library's INTERFACE_INCLUDE_DIRECTORIES property can't
# contain a non-existent path so its build-time include directory
# must be specified for all of its dependent targets instead.
target_include_directories(${LIBRARY_NAME}
PUBLIC "$<BUILD_INTERFACE:${CBINDGEN_INCLUDEDIR};${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}>"
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
)
add_dependencies(${LIBRARY_NAME} ${BINDINGS_NAME}_artifacts)
# Generate the configuration header.
math(EXPR INTEGER_PROJECT_VERSION_MAJOR "${PROJECT_VERSION_MAJOR} * 100000")
math(EXPR INTEGER_PROJECT_VERSION_MINOR "${PROJECT_VERSION_MINOR} * 100")
math(EXPR INTEGER_PROJECT_VERSION_PATCH "${PROJECT_VERSION_PATCH}")
math(EXPR INTEGER_PROJECT_VERSION "${INTEGER_PROJECT_VERSION_MAJOR} + ${INTEGER_PROJECT_VERSION_MINOR} + ${INTEGER_PROJECT_VERSION_PATCH}")
math(EXPR INTEGER_PROJECT_VERSION "${INTEGER_PROJECT_VERSION_MAJOR} + \
${INTEGER_PROJECT_VERSION_MINOR} + \
${INTEGER_PROJECT_VERSION_PATCH}")
configure_file(
${CMAKE_MODULE_PATH}/config.h.in
config.h
${CBINDGEN_TARGET_DIR}/config.h
@ONLY
NEWLINE_STYLE LF
)
target_sources(${LIBRARY_NAME}
PRIVATE
src/${UTILS_SUBDIR}/result.c
src/${UTILS_SUBDIR}/stack_callback_data.c
src/${UTILS_SUBDIR}/stack.c
src/${UTILS_SUBDIR}/string.c
${CMAKE_BINARY_DIR}/src/${UTILS_SUBDIR}/enum_string.c
PUBLIC
FILE_SET api TYPE HEADERS
BASE_DIRS
${CBINDGEN_INCLUDEDIR}
${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}
FILES
${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
${CBINDGEN_TARGET_DIR}/${UTILS_SUBDIR}/enum_string.h
${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/result.h
${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/stack_callback_data.h
${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/stack.h
${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/string.h
INTERFACE
FILE_SET config TYPE HEADERS
BASE_DIRS
${CBINDGEN_INCLUDEDIR}
FILES
${CBINDGEN_TARGET_DIR}/config.h
)
install(
FILES ${CMAKE_BINARY_DIR}/config.h
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
TARGETS ${LIBRARY_NAME}
EXPORT ${PROJECT_NAME}-config
FILE_SET api
FILE_SET config
)
# \note Install the Cargo-built core bindings to enable direct linkage.
install(
FILES $<TARGET_PROPERTY:${BINDINGS_NAME},IMPORTED_LOCATION>
DESTINATION ${CMAKE_INSTALL_LIBDIR}
)
install(EXPORT ${PROJECT_NAME}-config
FILE ${PROJECT_NAME}-config.cmake
NAMESPACE "${PROJECT_NAME}::"
DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/${LIB}
)
if(BUILD_TESTING)
@ -100,42 +300,6 @@ if(BUILD_TESTING)
enable_testing()
endif()
add_subdirectory(docs)
add_subdirectory(examples EXCLUDE_FROM_ALL)
# Generate and install .cmake files
set(PROJECT_CONFIG_NAME "${PROJECT_NAME}-config")
set(PROJECT_CONFIG_VERSION_NAME "${PROJECT_CONFIG_NAME}-version")
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
VERSION ${PROJECT_VERSION}
COMPATIBILITY ExactVersion
)
# The namespace label starts with the title-cased library name.
string(SUBSTRING ${LIBRARY_NAME} 0 1 NS_FIRST)
string(SUBSTRING ${LIBRARY_NAME} 1 -1 NS_REST)
string(TOUPPER ${NS_FIRST} NS_FIRST)
string(TOLOWER ${NS_REST} NS_REST)
string(CONCAT NAMESPACE ${NS_FIRST} ${NS_REST} "::")
# \note CMake doesn't automate the exporting of an imported library's targets
# so the package configuration script must do it.
configure_package_config_file(
${CMAKE_MODULE_PATH}/${PROJECT_CONFIG_NAME}.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)
install(
FILES
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
DESTINATION
${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)


@ -7,8 +7,8 @@ license = "MIT"
rust-version = "1.57.0"
[lib]
name = "automerge"
crate-type = ["cdylib", "staticlib"]
name = "automerge_core"
crate-type = ["staticlib"]
bench = false
doc = false
@ -19,4 +19,4 @@ libc = "^0.2"
smol_str = "^0.1.21"
[build-dependencies]
cbindgen = "^0.20"
cbindgen = "^0.24"


@ -1,22 +1,29 @@
automerge-c exposes an API to C that can either be used directly or as a basis
for other language bindings that have good support for calling into C functions.
# Overview
# Building
automerge-c exposes a C API that can either be used directly or as the basis
for other language bindings that have good support for calling C functions.
See the main README for instructions on getting your environment set up, then
you can use `./scripts/ci/cmake-build Release static` to build automerge-c.
# Installing
It will output two files:
See the main README for instructions on getting your environment set up and then
you can build the automerge-c library and install its constituent files within
a root directory of your choosing (e.g. "/usr/local") like so:
```shell
cmake -E make_directory automerge-c/build
cmake -S automerge-c -B automerge-c/build
cmake --build automerge-c/build
cmake --install automerge-c/build --prefix "/usr/local"
```
Installation is important because the name, location and structure of CMake's
out-of-source build subdirectory is subject to change based on the platform and
the release version; generated headers like `automerge-c/config.h` and
`automerge-c/utils/enum_string.h` are only sure to be found within their
installed locations.
- ./build/Cargo/target/include/automerge-c/automerge.h
- ./build/Cargo/target/release/libautomerge.a
To use these in your application you must arrange for your C compiler to find
these files, either by moving them to the right location on your computer, or
by configuring the compiler to reference these directories.
- `export LDFLAGS=-L./build/Cargo/target/release -lautomerge`
- `export CFLAGS=-I./build/Cargo/target/include`
Although they are checked into version control, the `Cargo.toml` and
`cbindgen.toml` configuration files are also generated in order to ensure that
the project name, project version and library name that they contain match those
specified within the top-level `CMakeLists.txt` file.
If you'd like to cross compile the library for different platforms you can do so
using [cross](https://github.com/cross-rs/cross). For example:
@ -25,134 +32,176 @@ using [cross](https://github.com/cross-rs/cross). For example:
This will output a shared library in the directory `rust/target/aarch64-unknown-linux-gnu/release/`.
You can replace `aarch64-unknown-linux-gnu` with any [cross supported targets](https://github.com/cross-rs/cross#supported-targets). The targets below are known to work, though other targets are expected to work too:
You can replace `aarch64-unknown-linux-gnu` with any
[cross supported targets](https://github.com/cross-rs/cross#supported-targets).
The targets below are known to work, though other targets are expected to work
too:
- `x86_64-apple-darwin`
- `aarch64-apple-darwin`
- `x86_64-unknown-linux-gnu`
- `aarch64-unknown-linux-gnu`
As a caveat, the header file is currently 32/64-bit dependent. You can re-use it
for all 64-bit architectures, but you must generate a specific header for 32-bit
targets.
As a caveat, CMake generates the `automerge.h` header file in terms of the
processor architecture of the computer on which it was built so, for example,
don't use a header generated for a 64-bit processor if your target is a 32-bit
processor.
# Usage
For full reference, read through `automerge.h`, or to get started quickly look
at the
You can build and view the C API's HTML reference documentation like so:
```shell
cmake -E make_directory automerge-c/build
cmake -S automerge-c -B automerge-c/build
cmake --build automerge-c/build --target automerge_docs
firefox automerge-c/build/src/html/index.html
```
To get started quickly, look at the
[examples](https://github.com/automerge/automerge-rs/tree/main/rust/automerge-c/examples).
Almost all operations in automerge-c act on an AMdoc struct which you can get
from `AMcreate()` or `AMload()`. Operations on a given doc are not thread safe
so you must use a mutex or similar to avoid calling more than one function with
the same AMdoc pointer concurrently.
Almost all operations in automerge-c act on an Automerge document
(`AMdoc` struct) which is structurally similar to a JSON document.
As with all functions that either allocate memory, or could fail if given
invalid input, `AMcreate()` returns an `AMresult`. The `AMresult` contains the
returned doc (or error message), and must be freed with `AMfree()` after you are
done to avoid leaking memory.
You can get a document by calling either `AMcreate()` or `AMload()`. Operations
on a given document are not thread-safe so you must use a mutex or similar to
avoid calling more than one function on the same one concurrently.
A C API function that could succeed or fail returns a result (`AMresult` struct)
containing a status code (`AMstatus` enum) and either a sequence of at least one
item (`AMitem` struct) or a read-only view onto a UTF-8 error message string
(`AMbyteSpan` struct).
An item contains up to three components: an index within its parent object
(`AMbyteSpan` struct or `size_t`), a unique identifier (`AMobjId` struct) and a
value.
The result of a successful function call that doesn't produce any values will
contain a single item that is void (`AM_VAL_TYPE_VOID`).
A returned result **must** be passed to `AMresultFree()` once the item(s) or
error message it contains is no longer needed in order to avoid a memory leak.
```
#include <automerge-c/automerge.h>
#include <stdio.h>
#include <stdlib.h>
#include <automerge-c/automerge.h>
#include <automerge-c/utils/string.h>
int main(int argc, char** argv) {
AMresult *docResult = AMcreate(NULL);
if (AMresultStatus(docResult) != AM_STATUS_OK) {
printf("failed to create doc: %s", AMerrorMessage(docResult).src);
char* const err_msg = AMstrdup(AMresultError(docResult), NULL);
printf("failed to create doc: %s", err_msg);
free(err_msg);
goto cleanup;
}
AMdoc *doc = AMresultValue(docResult).doc;
AMdoc *doc;
AMitemToDoc(AMresultItem(docResult), &doc);
// useful code goes here!
cleanup:
AMfree(docResult);
AMresultFree(docResult);
}
```
If you are writing code in C directly, you can use the `AMpush()` helper
function to reduce the boilerplate of error handling and freeing for you (see
examples/quickstart.c).
If you are writing an application in C, the `AMstackItem()`, `AMstackItems()`
and `AMstackResult()` functions enable the lifetimes of anonymous results to be
centrally managed and allow the same validation logic to be reused without
relying upon the `goto` statement (see examples/quickstart.c).
If you are wrapping automerge-c in another language, particularly one that has a
garbage collector, you can call `AMfree` within a finalizer to ensure that memory
is reclaimed when it is no longer needed.
garbage collector, you can call the `AMresultFree()` function within a finalizer
to ensure that memory is reclaimed when it is no longer needed.
An AMdoc wraps an automerge document which are very similar to JSON documents.
Automerge documents consist of a mutable root, which is always a map from string
keys to values. Values can have the following types:
Automerge documents consist of a mutable root which is always a map from string
keys to values. A value can be one of the following types:
- A number of type double / int64_t / uint64_t
- An explicit true / false / nul
- An immutable utf-8 string (AMbyteSpan)
- An immutable array of arbitrary bytes (AMbyteSpan)
- A mutable map from string keys to values (AMmap)
- A mutable list of values (AMlist)
- A mutable string (AMtext)
- An explicit true / false / null
- An immutable UTF-8 string (`AMbyteSpan`).
- An immutable array of arbitrary bytes (`AMbyteSpan`).
- A mutable map from string keys to values.
- A mutable list of values.
- A mutable UTF-8 string.
If you read from a location in the document with no value a value with
`.tag == AM_VALUE_VOID` will be returned, but you cannot write such a value explicitly.
If you read from a location in the document with no value, an item with type
`AM_VAL_TYPE_VOID` will be returned, but you cannot write such a value
explicitly.
Under the hood, automerge references mutable objects by the internal object id,
and `AM_ROOT` is always the object id of the root value.
Under the hood, automerge references a mutable object by its object identifier
where `AM_ROOT` signifies a document's root map object.
There is a function to put each type of value into either a map or a list, and a
function to read the current value from a list. As (in general) collaborators
There are functions to put each type of value into either a map or a list, and
functions to read the current or a historical value from a map or a list. As (in general) collaborators
may edit the document at any time, you cannot guarantee that the type of the
value at a given part of the document will stay the same. As a result reading
from the document will return an `AMvalue` union that you can inspect to
determine its type.
value at a given part of the document will stay the same. As a result, reading
from the document will return an `AMitem` struct that you can inspect to
determine the type of value that it contains.
Strings in automerge-c are represented using an `AMbyteSpan` which contains a
pointer and a length. Strings must be valid utf-8 and may contain null bytes.
As a convenience you can use `AMstr()` to get the representation of a
null-terminated C string as an `AMbyteSpan`.
pointer and a length. Strings must be valid UTF-8 and may contain NUL (`0`)
characters.
For your convenience, you can call `AMstr()` to get the `AMbyteSpan` struct
equivalent of a null-terminated byte string or `AMstrdup()` to get the
representation of an `AMbyteSpan` struct as a null-terminated byte string
wherein its NUL characters have been removed/replaced as you choose.
Putting all of that together, to read and write from the root of the document
you can do this:
```
#include <automerge-c/automerge.h>
#include <stdio.h>
#include <stdlib.h>
#include <automerge-c/automerge.h>
#include <automerge-c/utils/string.h>
int main(int argc, char** argv) {
// ...previous example...
AMdoc *doc = AMresultValue(docResult).doc;
AMdoc *doc;
AMitemToDoc(AMresultItem(docResult), &doc);
AMresult *putResult = AMmapPutStr(doc, AM_ROOT, AMstr("key"), AMstr("value"));
if (AMresultStatus(putResult) != AM_STATUS_OK) {
printf("failed to put: %s", AMerrorMessage(putResult).src);
char* const err_msg = AMstrdup(AMresultError(putResult), NULL);
printf("failed to put: %s", err_msg);
free(err_msg);
goto cleanup;
}
AMresult *getResult = AMmapGet(doc, AM_ROOT, AMstr("key"), NULL);
if (AMresultStatus(getResult) != AM_STATUS_OK) {
printf("failed to get: %s", AMerrorMessage(getResult).src);
char* const err_msg = AMstrdup(AMresultError(getResult), NULL);
printf("failed to get: %s", err_msg);
free(err_msg);
goto cleanup;
}
AMvalue got = AMresultValue(getResult);
if (got.tag != AM_VALUE_STR) {
AMbyteSpan got;
if (AMitemToStr(AMresultItem(getResult), &got)) {
char* const c_str = AMstrdup(got, NULL);
printf("Got %zu-character string \"%s\"", got.count, c_str);
free(c_str);
} else {
printf("expected to read a string!");
goto cleanup;
}
printf("Got %zu-character string `%s`", got.str.count, got.str.src);
cleanup:
AMfree(getResult);
AMfree(putResult);
AMfree(docResult);
AMresultFree(getResult);
AMresultFree(putResult);
AMresultFree(docResult);
}
```
Functions that do not return an `AMresult` (for example `AMmapItemValue()`) do
not allocate memory, but continue to reference memory that was previously
allocated. It's thus important to keep the original `AMresult` alive (in this
case the one returned by `AMmapRange()`) until after you are done with the return
values of these functions.
Functions that do not return an `AMresult` (for example `AMitemKey()`) do
not allocate memory but rather reference memory that was previously
allocated. It's therefore important to keep the original `AMresult` alive (in
this case the one returned by `AMmapRange()`) until after you are finished with
the items that it contains. However, the memory for an individual `AMitem` can
be shared with a new `AMresult` by calling `AMitemResult()` on it. In other
words, a select group of items can be filtered out of a collection and only each
one's corresponding `AMresult` must be kept alive from that point forward; the
originating collection's `AMresult` can be safely freed.
Beyond that, good luck!


@ -10,7 +10,7 @@ fn main() {
let config = cbindgen::Config::from_file("cbindgen.toml")
.expect("Unable to find cbindgen.toml configuration file");
if let Ok(writer) = cbindgen::generate_with_config(&crate_dir, config) {
if let Ok(writer) = cbindgen::generate_with_config(crate_dir, config) {
// \note CMake sets this environment variable before invoking Cargo so
// that it can direct the generated header file into its
// out-of-source build directory for post-processing.


@ -1,7 +1,7 @@
after_includes = """\n
/**
* \\defgroup enumerations Public Enumerations
Symbolic names for integer constants.
* Symbolic names for integer constants.
*/
/**
@ -12,21 +12,23 @@ after_includes = """\n
#define AM_ROOT NULL
/**
* \\memberof AMchangeHash
* \\memberof AMdoc
* \\def AM_CHANGE_HASH_SIZE
* \\brief The count of bytes in a change hash.
*/
#define AM_CHANGE_HASH_SIZE 32
"""
autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
autogen_warning = """
/**
* \\file
* \\brief All constants, functions and types in the core Automerge C API.
*
* \\warning This file is auto-generated by cbindgen.
*/
"""
documentation = true
documentation_style = "doxy"
header = """
/** \\file
* All constants, functions and types in the Automerge library's C API.
*/
"""
include_guard = "AUTOMERGE_H"
include_guard = "AUTOMERGE_C_H"
includes = []
language = "C"
line_length = 140


@ -0,0 +1,22 @@
[package]
name = "@PROJECT_NAME@"
version = "@PROJECT_VERSION@"
authors = ["Orion Henry <orion.henry@gmail.com>", "Jason Kankiewicz <jason.kankiewicz@gmail.com>"]
edition = "2021"
license = "MIT"
rust-version = "1.57.0"
[lib]
name = "@BINDINGS_NAME@"
crate-type = ["staticlib"]
bench = false
doc = false
[dependencies]
@LIBRARY_NAME@ = { path = "../@LIBRARY_NAME@" }
hex = "^0.4.3"
libc = "^0.2"
smol_str = "^0.1.21"
[build-dependencies]
cbindgen = "^0.24"


@ -0,0 +1,48 @@
after_includes = """\n
/**
* \\defgroup enumerations Public Enumerations
* Symbolic names for integer constants.
*/
/**
* \\memberof AMdoc
* \\def AM_ROOT
* \\brief The root object of a document.
*/
#define AM_ROOT NULL
/**
* \\memberof AMdoc
* \\def AM_CHANGE_HASH_SIZE
* \\brief The count of bytes in a change hash.
*/
#define AM_CHANGE_HASH_SIZE 32
"""
autogen_warning = """
/**
* \\file
* \\brief All constants, functions and types in the core Automerge C API.
*
* \\warning This file is auto-generated by cbindgen.
*/
"""
documentation = true
documentation_style = "doxy"
include_guard = "@INCLUDE_GUARD_PREFIX@_H"
includes = []
language = "C"
line_length = 140
no_includes = true
style = "both"
sys_includes = ["stdbool.h", "stddef.h", "stdint.h", "time.h"]
usize_is_size_t = true
[enum]
derive_const_casts = true
enum_class = true
must_use = "MUST_USE_ENUM"
prefix_with_name = true
rename_variants = "ScreamingSnakeCase"
[export]
item_types = ["constants", "enums", "functions", "opaque", "structs", "typedefs"]


@ -1,14 +1,35 @@
#ifndef @SYMBOL_PREFIX@_CONFIG_H
#define @SYMBOL_PREFIX@_CONFIG_H
/* This header is auto-generated by CMake. */
#ifndef @INCLUDE_GUARD_PREFIX@_CONFIG_H
#define @INCLUDE_GUARD_PREFIX@_CONFIG_H
/**
* \file
 * \brief Configuration parameters defined by the build system.
*
* \warning This file is auto-generated by CMake.
*/
/**
* \def @SYMBOL_PREFIX@_VERSION
* \brief Denotes a semantic version of the form {MAJOR}{MINOR}{PATCH} as three,
* two-digit decimal numbers without leading zeros (e.g. 100 is 0.1.0).
*/
#define @SYMBOL_PREFIX@_VERSION @INTEGER_PROJECT_VERSION@
/**
* \def @SYMBOL_PREFIX@_MAJOR_VERSION
* \brief Denotes a semantic major version as a decimal number.
*/
#define @SYMBOL_PREFIX@_MAJOR_VERSION (@SYMBOL_PREFIX@_VERSION / 100000)
/**
* \def @SYMBOL_PREFIX@_MINOR_VERSION
* \brief Denotes a semantic minor version as a decimal number.
*/
#define @SYMBOL_PREFIX@_MINOR_VERSION ((@SYMBOL_PREFIX@_VERSION / 100) % 1000)
/**
* \def @SYMBOL_PREFIX@_PATCH_VERSION
* \brief Denotes a semantic patch version as a decimal number.
*/
#define @SYMBOL_PREFIX@_PATCH_VERSION (@SYMBOL_PREFIX@_VERSION % 100)
#endif /* @SYMBOL_PREFIX@_CONFIG_H */
#endif /* @INCLUDE_GUARD_PREFIX@_CONFIG_H */


@ -0,0 +1,183 @@
# This CMake script is used to generate a header and a source file for utility
# functions that convert the tags of generated enum types into strings and
# strings into the tags of generated enum types.
cmake_minimum_required(VERSION 3.23 FATAL_ERROR)
# Seeks the starting line of the source enum's declaration.
macro(seek_enum_mode)
if (line MATCHES "^(typedef[ \t]+)?enum ")
string(REGEX REPLACE "^enum ([0-9a-zA-Z_]+).*$" "\\1" enum_name "${line}")
set(mode "read_tags")
endif()
endmacro()
# Scans the input for the current enum's tags.
macro(read_tags_mode)
if(line MATCHES "^}")
set(mode "generate")
elseif(line MATCHES "^[A-Z0-9_]+.*$")
string(REGEX REPLACE "^([A-Za-z0-9_]+).*$" "\\1" tmp "${line}")
list(APPEND enum_tags "${tmp}")
endif()
endmacro()
macro(write_header_file)
# Generate a to-string function declaration.
list(APPEND header_body
"/**\n"
" * \\ingroup enumerations\n"
" * \\brief Gets the string representation of an `${enum_name}` enum tag.\n"
" *\n"
" * \\param[in] tag An `${enum_name}` enum tag.\n"
" * \\return A null-terminated byte string.\n"
" */\n"
"char const* ${enum_name}ToString(${enum_name} const tag)\;\n"
"\n")
# Generate a from-string function declaration.
list(APPEND header_body
"/**\n"
" * \\ingroup enumerations\n"
" * \\brief Gets an `${enum_name}` enum tag from its string representation.\n"
" *\n"
" * \\param[out] dest An `${enum_name}` enum tag pointer.\n"
" * \\param[in] src A null-terminated byte string.\n"
" * \\return `true` if \\p src matches the string representation of an\n"
" * `${enum_name}` enum tag, `false` otherwise.\n"
" */\n"
"bool ${enum_name}FromString(${enum_name}* dest, char const* const src)\;\n"
"\n")
endmacro()
macro(write_source_file)
# Generate a to-string function implementation.
list(APPEND source_body
"char const* ${enum_name}ToString(${enum_name} const tag) {\n"
" switch (tag) {\n"
" default:\n"
" return \"???\"\;\n")
foreach(label IN LISTS enum_tags)
list(APPEND source_body
" case ${label}:\n"
" return \"${label}\"\;\n")
endforeach()
list(APPEND source_body
" }\n"
"}\n"
"\n")
# Generate a from-string function implementation.
list(APPEND source_body
"bool ${enum_name}FromString(${enum_name}* dest, char const* const src) {\n")
foreach(label IN LISTS enum_tags)
list(APPEND source_body
" if (!strcmp(src, \"${label}\")) {\n"
" *dest = ${label}\;\n"
" return true\;\n"
" }\n")
endforeach()
list(APPEND source_body
" return false\;\n"
"}\n"
"\n")
endmacro()
function(main)
set(header_body "")
# File header and includes.
list(APPEND header_body
"#ifndef ${include_guard}\n"
"#define ${include_guard}\n"
"/**\n"
" * \\file\n"
" * \\brief Utility functions for converting enum tags into null-terminated\n"
" * byte strings and vice versa.\n"
" *\n"
" * \\warning This file is auto-generated by CMake.\n"
" */\n"
"\n"
"#include <stdbool.h>\n"
"\n"
"#include <${library_include}>\n"
"\n")
set(source_body "")
# File includes.
list(APPEND source_body
"/** \\warning This file is auto-generated by CMake. */\n"
"\n"
"#include \"stdio.h\"\n"
"#include \"string.h\"\n"
"\n"
"#include <${header_include}>\n"
"\n")
set(enum_name "")
set(enum_tags "")
set(mode "seek_enum")
file(STRINGS "${input_path}" lines)
foreach(line IN LISTS lines)
string(REGEX REPLACE "^(.+)(//.*)?" "\\1" line "${line}")
string(STRIP "${line}" line)
if(mode STREQUAL "seek_enum")
seek_enum_mode()
elseif(mode STREQUAL "read_tags")
read_tags_mode()
else()
# The end of the enum declaration was reached.
if(NOT enum_name)
# The end of the file was reached.
return()
endif()
if(NOT enum_tags)
message(FATAL_ERROR "No tags found for `${enum_name}`.")
endif()
string(TOLOWER "${enum_name}" output_stem_prefix)
string(CONCAT output_stem "${output_stem_prefix}" "_string")
cmake_path(REPLACE_EXTENSION output_stem "h" OUTPUT_VARIABLE output_header_basename)
write_header_file()
write_source_file()
set(enum_name "")
set(enum_tags "")
set(mode "seek_enum")
endif()
endforeach()
# File footer.
list(APPEND header_body
"#endif /* ${include_guard} */\n")
message(STATUS "Generating header file \"${output_header_path}\"...")
file(WRITE "${output_header_path}" ${header_body})
message(STATUS "Generating source file \"${output_source_path}\"...")
file(WRITE "${output_source_path}" ${source_body})
endfunction()
if(NOT DEFINED PROJECT_NAME)
message(FATAL_ERROR "Variable PROJECT_NAME is not defined.")
elseif(NOT DEFINED LIBRARY_NAME)
message(FATAL_ERROR "Variable LIBRARY_NAME is not defined.")
elseif(NOT DEFINED SUBDIR)
message(FATAL_ERROR "Variable SUBDIR is not defined.")
elseif(${CMAKE_ARGC} LESS 9)
message(FATAL_ERROR "Too few arguments.")
elseif(${CMAKE_ARGC} GREATER 10)
message(FATAL_ERROR "Too many arguments.")
elseif(NOT EXISTS ${CMAKE_ARGV7})
message(FATAL_ERROR "Input header \"${CMAKE_ARGV7}\" not found.")
endif()
cmake_path(CONVERT "${CMAKE_ARGV7}" TO_CMAKE_PATH_LIST input_path NORMALIZE)
cmake_path(CONVERT "${CMAKE_ARGV8}" TO_CMAKE_PATH_LIST output_header_path NORMALIZE)
cmake_path(CONVERT "${CMAKE_ARGV9}" TO_CMAKE_PATH_LIST output_source_path NORMALIZE)
string(TOLOWER "${PROJECT_NAME}" project_root)
cmake_path(CONVERT "${SUBDIR}" TO_CMAKE_PATH_LIST project_subdir NORMALIZE)
string(TOLOWER "${project_subdir}" project_subdir)
string(TOLOWER "${LIBRARY_NAME}" library_stem)
cmake_path(REPLACE_EXTENSION library_stem "h" OUTPUT_VARIABLE library_basename)
string(JOIN "/" library_include "${project_root}" "${library_basename}")
string(TOUPPER "${PROJECT_NAME}" project_name_upper)
string(TOUPPER "${project_subdir}" include_guard_infix)
string(REGEX REPLACE "/" "_" include_guard_infix "${include_guard_infix}")
string(REGEX REPLACE "-" "_" include_guard_prefix "${project_name_upper}")
string(JOIN "_" include_guard_prefix "${include_guard_prefix}" "${include_guard_infix}")
string(JOIN "/" output_header_prefix "${project_root}" "${project_subdir}")
cmake_path(GET output_header_path STEM output_header_stem)
string(TOUPPER "${output_header_stem}" include_guard_stem)
string(JOIN "_" include_guard "${include_guard_prefix}" "${include_guard_stem}" "H")
cmake_path(GET output_header_path FILENAME output_header_basename)
string(JOIN "/" header_include "${output_header_prefix}" "${output_header_basename}")
main()


@ -1,4 +1,6 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
# This CMake script is used to perform string substitutions within a generated
# file.
cmake_minimum_required(VERSION 3.23 FATAL_ERROR)
if(NOT DEFINED MATCH_REGEX)
message(FATAL_ERROR "Variable \"MATCH_REGEX\" is not defined.")


@@ -1,4 +1,6 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
# This CMake script is used to force Cargo to regenerate the header file for the
# core bindings after the out-of-source build directory has been cleaned.
cmake_minimum_required(VERSION 3.23 FATAL_ERROR)
if(NOT DEFINED CONDITION)
message(FATAL_ERROR "Variable \"CONDITION\" is not defined.")


@@ -0,0 +1,35 @@
find_package(Doxygen OPTIONAL_COMPONENTS dot)
if(DOXYGEN_FOUND)
set(DOXYGEN_ALIASES "installed_headerfile=\\headerfile ${LIBRARY_NAME}.h <${PROJECT_NAME}/${LIBRARY_NAME}.h>")
set(DOXYGEN_GENERATE_LATEX YES)
set(DOXYGEN_PDF_HYPERLINKS YES)
set(DOXYGEN_PROJECT_LOGO "${CMAKE_CURRENT_SOURCE_DIR}/img/brandmark.png")
set(DOXYGEN_SORT_BRIEF_DOCS YES)
set(DOXYGEN_USE_MDFILE_AS_MAINPAGE "${CMAKE_SOURCE_DIR}/README.md")
doxygen_add_docs(
${LIBRARY_NAME}_docs
"${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h"
"${CBINDGEN_TARGET_DIR}/config.h"
"${CBINDGEN_TARGET_DIR}/${UTILS_SUBDIR}/enum_string.h"
"${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/result.h"
"${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/stack_callback_data.h"
"${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/stack.h"
"${CMAKE_SOURCE_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}/${UTILS_SUBDIR}/string.h"
"${CMAKE_SOURCE_DIR}/README.md"
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
COMMENT "Producing documentation with Doxygen..."
)
# \note A Doxygen input file isn't a file-level dependency, so the Doxygen
#       command must instead depend upon a target that either outputs the
#       file or itself depends upon it; otherwise Doxygen just emits an
#       error message when the file can't be found.
add_dependencies(${LIBRARY_NAME}_docs ${BINDINGS_NAME}_artifacts ${LIBRARY_NAME}_utilities)
endif()


@@ -1,41 +1,39 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
add_executable(
example_quickstart
${LIBRARY_NAME}_quickstart
quickstart.c
)
set_target_properties(example_quickstart PROPERTIES LINKER_LANGUAGE C)
set_target_properties(${LIBRARY_NAME}_quickstart PROPERTIES LINKER_LANGUAGE C)
# \note An imported library's INTERFACE_INCLUDE_DIRECTORIES property can't
# contain a non-existent path so its build-time include directory
# must be specified for all of its dependent targets instead.
target_include_directories(
example_quickstart
${LIBRARY_NAME}_quickstart
PRIVATE "$<BUILD_INTERFACE:${CBINDGEN_INCLUDEDIR}>"
)
target_link_libraries(example_quickstart PRIVATE ${LIBRARY_NAME})
target_link_libraries(${LIBRARY_NAME}_quickstart PRIVATE ${LIBRARY_NAME})
add_dependencies(example_quickstart ${LIBRARY_NAME}_artifacts)
add_dependencies(${LIBRARY_NAME}_quickstart ${BINDINGS_NAME}_artifacts)
if(BUILD_SHARED_LIBS AND WIN32)
add_custom_command(
TARGET example_quickstart
TARGET ${LIBRARY_NAME}_quickstart
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}
${CMAKE_CURRENT_BINARY_DIR}
${CMAKE_BINARY_DIR}
COMMENT "Copying the DLL built by Cargo into the examples directory..."
VERBATIM
)
endif()
add_custom_command(
TARGET example_quickstart
TARGET ${LIBRARY_NAME}_quickstart
POST_BUILD
COMMAND
example_quickstart
${LIBRARY_NAME}_quickstart
COMMENT
"Running the example quickstart..."
VERBATIM


@@ -5,5 +5,5 @@
```shell
cmake -E make_directory automerge-c/build
cmake -S automerge-c -B automerge-c/build
cmake --build automerge-c/build --target example_quickstart
cmake --build automerge-c/build --target automerge_quickstart
```


@@ -3,152 +3,127 @@
#include <string.h>
#include <automerge-c/automerge.h>
#include <automerge-c/utils/enum_string.h>
#include <automerge-c/utils/stack.h>
#include <automerge-c/utils/stack_callback_data.h>
#include <automerge-c/utils/string.h>
static void abort_cb(AMresultStack**, uint8_t);
static bool abort_cb(AMstack**, void*);
/**
* \brief Based on https://automerge.github.io/docs/quickstart
*/
int main(int argc, char** argv) {
AMresultStack* stack = NULL;
AMdoc* const doc1 = AMpush(&stack, AMcreate(NULL), AM_VALUE_DOC, abort_cb).doc;
AMobjId const* const cards = AMpush(&stack,
AMmapPutObject(doc1, AM_ROOT, AMstr("cards"), AM_OBJ_TYPE_LIST),
AM_VALUE_OBJ_ID,
abort_cb).obj_id;
AMobjId const* const card1 = AMpush(&stack,
AMlistPutObject(doc1, cards, SIZE_MAX, true, AM_OBJ_TYPE_MAP),
AM_VALUE_OBJ_ID,
abort_cb).obj_id;
AMfree(AMmapPutStr(doc1, card1, AMstr("title"), AMstr("Rewrite everything in Clojure")));
AMfree(AMmapPutBool(doc1, card1, AMstr("done"), false));
AMobjId const* const card2 = AMpush(&stack,
AMlistPutObject(doc1, cards, SIZE_MAX, true, AM_OBJ_TYPE_MAP),
AM_VALUE_OBJ_ID,
abort_cb).obj_id;
AMfree(AMmapPutStr(doc1, card2, AMstr("title"), AMstr("Rewrite everything in Haskell")));
AMfree(AMmapPutBool(doc1, card2, AMstr("done"), false));
AMfree(AMcommit(doc1, AMstr("Add card"), NULL));
AMstack* stack = NULL;
AMdoc* doc1;
AMitemToDoc(AMstackItem(&stack, AMcreate(NULL), abort_cb, AMexpect(AM_VAL_TYPE_DOC)), &doc1);
AMobjId const* const cards =
AMitemObjId(AMstackItem(&stack, AMmapPutObject(doc1, AM_ROOT, AMstr("cards"), AM_OBJ_TYPE_LIST), abort_cb,
AMexpect(AM_VAL_TYPE_OBJ_TYPE)));
AMobjId const* const card1 =
AMitemObjId(AMstackItem(&stack, AMlistPutObject(doc1, cards, SIZE_MAX, true, AM_OBJ_TYPE_MAP), abort_cb,
AMexpect(AM_VAL_TYPE_OBJ_TYPE)));
AMstackItem(NULL, AMmapPutStr(doc1, card1, AMstr("title"), AMstr("Rewrite everything in Clojure")), abort_cb,
AMexpect(AM_VAL_TYPE_VOID));
AMstackItem(NULL, AMmapPutBool(doc1, card1, AMstr("done"), false), abort_cb, AMexpect(AM_VAL_TYPE_VOID));
AMobjId const* const card2 =
AMitemObjId(AMstackItem(&stack, AMlistPutObject(doc1, cards, SIZE_MAX, true, AM_OBJ_TYPE_MAP), abort_cb,
AMexpect(AM_VAL_TYPE_OBJ_TYPE)));
AMstackItem(NULL, AMmapPutStr(doc1, card2, AMstr("title"), AMstr("Rewrite everything in Haskell")), abort_cb,
AMexpect(AM_VAL_TYPE_VOID));
AMstackItem(NULL, AMmapPutBool(doc1, card2, AMstr("done"), false), abort_cb, AMexpect(AM_VAL_TYPE_VOID));
AMstackItem(NULL, AMcommit(doc1, AMstr("Add card"), NULL), abort_cb, AMexpect(AM_VAL_TYPE_CHANGE_HASH));
AMdoc* doc2 = AMpush(&stack, AMcreate(NULL), AM_VALUE_DOC, abort_cb).doc;
AMfree(AMmerge(doc2, doc1));
AMdoc* doc2;
AMitemToDoc(AMstackItem(&stack, AMcreate(NULL), abort_cb, AMexpect(AM_VAL_TYPE_DOC)), &doc2);
AMstackItem(NULL, AMmerge(doc2, doc1), abort_cb, AMexpect(AM_VAL_TYPE_CHANGE_HASH));
AMbyteSpan const binary = AMpush(&stack, AMsave(doc1), AM_VALUE_BYTES, abort_cb).bytes;
doc2 = AMpush(&stack, AMload(binary.src, binary.count), AM_VALUE_DOC, abort_cb).doc;
AMbyteSpan binary;
AMitemToBytes(AMstackItem(&stack, AMsave(doc1), abort_cb, AMexpect(AM_VAL_TYPE_BYTES)), &binary);
AMitemToDoc(AMstackItem(&stack, AMload(binary.src, binary.count), abort_cb, AMexpect(AM_VAL_TYPE_DOC)), &doc2);
AMfree(AMmapPutBool(doc1, card1, AMstr("done"), true));
AMfree(AMcommit(doc1, AMstr("Mark card as done"), NULL));
AMstackItem(NULL, AMmapPutBool(doc1, card1, AMstr("done"), true), abort_cb, AMexpect(AM_VAL_TYPE_VOID));
AMstackItem(NULL, AMcommit(doc1, AMstr("Mark card as done"), NULL), abort_cb, AMexpect(AM_VAL_TYPE_CHANGE_HASH));
AMfree(AMlistDelete(doc2, cards, 0));
AMfree(AMcommit(doc2, AMstr("Delete card"), NULL));
AMstackItem(NULL, AMlistDelete(doc2, cards, 0), abort_cb, AMexpect(AM_VAL_TYPE_VOID));
AMstackItem(NULL, AMcommit(doc2, AMstr("Delete card"), NULL), abort_cb, AMexpect(AM_VAL_TYPE_CHANGE_HASH));
AMfree(AMmerge(doc1, doc2));
AMstackItem(NULL, AMmerge(doc1, doc2), abort_cb, AMexpect(AM_VAL_TYPE_CHANGE_HASH));
AMchanges changes = AMpush(&stack, AMgetChanges(doc1, NULL), AM_VALUE_CHANGES, abort_cb).changes;
AMchange const* change = NULL;
while ((change = AMchangesNext(&changes, 1)) != NULL) {
AMbyteSpan const change_hash = AMchangeHash(change);
AMchangeHashes const heads = AMpush(&stack,
AMchangeHashesInit(&change_hash, 1),
AM_VALUE_CHANGE_HASHES,
abort_cb).change_hashes;
AMbyteSpan const msg = AMchangeMessage(change);
char* const c_msg = calloc(1, msg.count + 1);
strncpy(c_msg, msg.src, msg.count);
printf("%s %ld\n", c_msg, AMobjSize(doc1, cards, &heads));
AMitems changes = AMstackItems(&stack, AMgetChanges(doc1, NULL), abort_cb, AMexpect(AM_VAL_TYPE_CHANGE));
AMitem* item = NULL;
while ((item = AMitemsNext(&changes, 1)) != NULL) {
AMchange const* change;
AMitemToChange(item, &change);
AMitems const heads = AMstackItems(&stack, AMitemFromChangeHash(AMchangeHash(change)), abort_cb,
AMexpect(AM_VAL_TYPE_CHANGE_HASH));
char* const c_msg = AMstrdup(AMchangeMessage(change), NULL);
printf("%s %zu\n", c_msg, AMobjSize(doc1, cards, &heads));
free(c_msg);
}
AMfreeStack(&stack);
AMstackFree(&stack);
}
static char const* discriminant_suffix(AMvalueVariant const);
/**
* \brief Prints an error message to `stderr`, deallocates all results in the
* given stack and exits.
* \brief Examines the result at the top of the given stack and, if it's
* invalid, prints an error message to `stderr`, deallocates all results
* in the stack and exits.
*
* \param[in,out] stack A pointer to a pointer to an `AMresultStack` struct.
* \param[in] discriminant An `AMvalueVariant` enum tag.
* \pre \p stack` != NULL`.
* \post `*stack == NULL`.
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \param[in] data A pointer to an owned `AMstackCallbackData` struct or `NULL`.
* \return `true` if the top `AMresult` in \p stack is valid, `false` otherwise.
* \pre \p stack `!= NULL`.
*/
static void abort_cb(AMresultStack** stack, uint8_t discriminant) {
static bool abort_cb(AMstack** stack, void* data) {
static char buffer[512] = {0};
char const* suffix = NULL;
if (!stack) {
suffix = "Stack*";
}
else if (!*stack) {
} else if (!*stack) {
suffix = "Stack";
}
else if (!(*stack)->result) {
} else if (!(*stack)->result) {
suffix = "";
}
if (suffix) {
fprintf(stderr, "Null `AMresult%s*`.", suffix);
AMfreeStack(stack);
fprintf(stderr, "Null `AMresult%s*`.\n", suffix);
AMstackFree(stack);
exit(EXIT_FAILURE);
return;
return false;
}
AMstatus const status = AMresultStatus((*stack)->result);
switch (status) {
case AM_STATUS_ERROR: strcpy(buffer, "Error"); break;
case AM_STATUS_INVALID_RESULT: strcpy(buffer, "Invalid result"); break;
case AM_STATUS_OK: break;
default: sprintf(buffer, "Unknown `AMstatus` tag %d", status);
case AM_STATUS_ERROR:
strcpy(buffer, "Error");
break;
case AM_STATUS_INVALID_RESULT:
strcpy(buffer, "Invalid result");
break;
case AM_STATUS_OK:
break;
default:
sprintf(buffer, "Unknown `AMstatus` tag %d", status);
}
if (buffer[0]) {
AMbyteSpan const msg = AMerrorMessage((*stack)->result);
char* const c_msg = calloc(1, msg.count + 1);
strncpy(c_msg, msg.src, msg.count);
fprintf(stderr, "%s; %s.", buffer, c_msg);
char* const c_msg = AMstrdup(AMresultError((*stack)->result), NULL);
fprintf(stderr, "%s; %s.\n", buffer, c_msg);
free(c_msg);
AMfreeStack(stack);
AMstackFree(stack);
exit(EXIT_FAILURE);
return;
return false;
}
AMvalue const value = AMresultValue((*stack)->result);
fprintf(stderr, "Unexpected tag `AM_VALUE_%s` (%d); expected `AM_VALUE_%s`.",
discriminant_suffix(value.tag),
value.tag,
discriminant_suffix(discriminant));
AMfreeStack(stack);
exit(EXIT_FAILURE);
}
/**
* \brief Gets the suffix for a discriminant's corresponding string
* representation.
*
* \param[in] discriminant An `AMvalueVariant` enum tag.
* \return A UTF-8 string.
*/
static char const* discriminant_suffix(AMvalueVariant const discriminant) {
char const* suffix = NULL;
switch (discriminant) {
case AM_VALUE_ACTOR_ID: suffix = "ACTOR_ID"; break;
case AM_VALUE_BOOLEAN: suffix = "BOOLEAN"; break;
case AM_VALUE_BYTES: suffix = "BYTES"; break;
case AM_VALUE_CHANGE_HASHES: suffix = "CHANGE_HASHES"; break;
case AM_VALUE_CHANGES: suffix = "CHANGES"; break;
case AM_VALUE_COUNTER: suffix = "COUNTER"; break;
case AM_VALUE_DOC: suffix = "DOC"; break;
case AM_VALUE_F64: suffix = "F64"; break;
case AM_VALUE_INT: suffix = "INT"; break;
case AM_VALUE_LIST_ITEMS: suffix = "LIST_ITEMS"; break;
case AM_VALUE_MAP_ITEMS: suffix = "MAP_ITEMS"; break;
case AM_VALUE_NULL: suffix = "NULL"; break;
case AM_VALUE_OBJ_ID: suffix = "OBJ_ID"; break;
case AM_VALUE_OBJ_ITEMS: suffix = "OBJ_ITEMS"; break;
case AM_VALUE_STR: suffix = "STR"; break;
case AM_VALUE_STRS: suffix = "STRINGS"; break;
case AM_VALUE_SYNC_MESSAGE: suffix = "SYNC_MESSAGE"; break;
case AM_VALUE_SYNC_STATE: suffix = "SYNC_STATE"; break;
case AM_VALUE_TIMESTAMP: suffix = "TIMESTAMP"; break;
case AM_VALUE_UINT: suffix = "UINT"; break;
case AM_VALUE_VOID: suffix = "VOID"; break;
default: suffix = "...";
if (data) {
AMstackCallbackData* sc_data = (AMstackCallbackData*)data;
AMvalType const tag = AMitemValType(AMresultItem((*stack)->result));
if (tag != sc_data->bitmask) {
fprintf(stderr, "Unexpected tag `%s` (%d) instead of `%s` at %s:%d.\n", AMvalTypeToString(tag), tag,
AMvalTypeToString(sc_data->bitmask), sc_data->file, sc_data->line);
free(sc_data);
AMstackFree(stack);
exit(EXIT_FAILURE);
return false;
}
}
return suffix;
free(data);
return true;
}


@@ -0,0 +1,30 @@
#ifndef AUTOMERGE_C_UTILS_RESULT_H
#define AUTOMERGE_C_UTILS_RESULT_H
/**
* \file
* \brief Utility functions for use with `AMresult` structs.
*/
#include <stdarg.h>
#include <automerge-c/automerge.h>
/**
* \brief Transfers the items within an arbitrary list of results into a
* new result in their order of specification.
* \param[in] count The count of subsequent arguments.
* \param[in] ... A \p count list of arguments, each of which is a pointer to
* an `AMresult` struct whose items will be transferred out of it
* and which is subsequently freed.
* \return A pointer to an `AMresult` struct or `NULL`.
\pre `∀ 𝑥 ∈` \p ... `, AMresultStatus(𝑥) == AM_STATUS_OK`
\post `(∃ 𝑥 ∈` \p ... `, AMresultStatus(𝑥) != AM_STATUS_OK) -> NULL`
* \attention All `AMresult` struct pointer arguments are passed to
* `AMresultFree()` regardless of success; use `AMresultCat()`
* instead if you wish to pass them to `AMresultFree()` yourself.
* \warning The returned `AMresult` struct pointer must be passed to
* `AMresultFree()` in order to avoid a memory leak.
*/
AMresult* AMresultFrom(int count, ...);
#endif /* AUTOMERGE_C_UTILS_RESULT_H */
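
The count-prefixed variadic convention documented above can be sketched in plain C (an illustration of the calling convention only; `count_valid` is a hypothetical stand-in, not the library function, and it merely counts non-NULL pointers instead of transferring items):

```c
#include <stdarg.h>
#include <stddef.h>

/* Walk a count-prefixed variadic list of pointers, in the style of
 * AMresultFrom(int count, ...), and report how many are non-NULL
 * (standing in for `AMresultStatus(𝑥) == AM_STATUS_OK`). */
static size_t count_valid(int count, ...) {
    va_list args;
    va_start(args, count);
    size_t valid = 0;
    for (int i = 0; i < count; ++i) {
        const void* result = va_arg(args, const void*);
        if (result != NULL) {
            ++valid; /* AMresultFrom() would transfer this result's items */
        }
    }
    va_end(args);
    return valid;
}
```

Note that the real `AMresultFrom()` additionally frees every `AMresult*` argument regardless of success, which this sketch does not model.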


@@ -0,0 +1,130 @@
#ifndef AUTOMERGE_C_UTILS_STACK_H
#define AUTOMERGE_C_UTILS_STACK_H
/**
* \file
* \brief Utility data structures and functions for hiding `AMresult` structs,
* managing their lifetimes, and automatically applying custom
* validation logic to the `AMitem` structs that they contain.
*
* \note The `AMstack` struct and its related functions drastically reduce the
* need for boilerplate code and/or `goto` statement usage within a C
* application but a higher-level programming language offers even better
* ways to do the same things.
*/
#include <automerge-c/automerge.h>
/**
* \struct AMstack
* \brief A node in a singly-linked list of result pointers.
*/
typedef struct AMstack {
/** A result to be deallocated. */
AMresult* result;
/** The previous node in the singly-linked list or `NULL`. */
struct AMstack* prev;
} AMstack;
/**
* \memberof AMstack
* \brief The prototype of a function that examines the result at the top of
* the given stack in terms of some arbitrary data.
*
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \param[in] data A pointer to arbitrary data or `NULL`.
* \return `true` if the top `AMresult` struct in \p stack is valid, `false`
* otherwise.
* \pre \p stack `!= NULL`.
*/
typedef bool (*AMstackCallback)(AMstack** stack, void* data);
/**
* \memberof AMstack
* \brief Deallocates the storage for a stack of results.
*
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \pre \p stack `!= NULL`
* \post `*stack == NULL`
*/
void AMstackFree(AMstack** stack);
/**
* \memberof AMstack
* \brief Gets a result from the stack after removing it.
*
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \param[in] result A pointer to the `AMresult` to be popped or `NULL` to
* select the top result in \p stack.
* \return A pointer to an `AMresult` struct or `NULL`.
* \pre \p stack `!= NULL`
* \warning The returned `AMresult` struct pointer must be passed to
* `AMresultFree()` in order to avoid a memory leak.
*/
AMresult* AMstackPop(AMstack** stack, AMresult const* result);
/**
* \memberof AMstack
* \brief Pushes the given result onto the given stack, calls the given
* callback with the given data to validate it and then either gets the
* result if it's valid or gets `NULL` instead.
*
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \param[in] result A pointer to an `AMresult` struct.
* \param[in] callback A pointer to a function with the same signature as
* `AMstackCallback()` or `NULL`.
* \param[in] data A pointer to arbitrary data or `NULL` which is passed to
* \p callback.
* \return \p result or `NULL`.
* \warning If \p stack `== NULL` then \p result is deallocated in order to
* avoid a memory leak.
*/
AMresult* AMstackResult(AMstack** stack, AMresult* result, AMstackCallback callback, void* data);
/**
* \memberof AMstack
* \brief Pushes the given result onto the given stack, calls the given
* callback with the given data to validate it and then either gets the
* first item in the sequence of items within that result if it's valid
* or gets `NULL` instead.
*
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \param[in] result A pointer to an `AMresult` struct.
* \param[in] callback A pointer to a function with the same signature as
* `AMstackCallback()` or `NULL`.
* \param[in] data A pointer to arbitrary data or `NULL` which is passed to
* \p callback.
* \return A pointer to an `AMitem` struct or `NULL`.
* \warning If \p stack `== NULL` then \p result is deallocated in order to
* avoid a memory leak.
*/
AMitem* AMstackItem(AMstack** stack, AMresult* result, AMstackCallback callback, void* data);
/**
* \memberof AMstack
* \brief Pushes the given result onto the given stack, calls the given
* callback with the given data to validate it and then either gets an
* `AMitems` struct over the sequence of items within that result if it's
* valid or gets an empty `AMitems` instead.
*
* \param[in,out] stack A pointer to a pointer to an `AMstack` struct.
* \param[in] result A pointer to an `AMresult` struct.
* \param[in] callback A pointer to a function with the same signature as
* `AMstackCallback()` or `NULL`.
* \param[in] data A pointer to arbitrary data or `NULL` which is passed to
* \p callback.
* \return An `AMitems` struct.
* \warning If \p stack `== NULL` then \p result is deallocated immediately
* in order to avoid a memory leak.
*/
AMitems AMstackItems(AMstack** stack, AMresult* result, AMstackCallback callback, void* data);
/**
* \memberof AMstack
* \brief Gets the count of results that have been pushed onto the stack.
*
* \param[in,out] stack A pointer to an `AMstack` struct.
* \return A 64-bit unsigned integer.
*/
size_t AMstackSize(AMstack const* const stack);
#endif /* AUTOMERGE_C_UTILS_STACK_H */
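
The singly-linked ownership design described above can be sketched standalone (an illustrative re-implementation, not the library's code; `Stack`, `stack_push`, `stack_pop`, and `stack_free` are hypothetical names):

```c
#include <stdlib.h>

/* A node in a singly-linked list of owned pointers, mirroring AMstack. */
typedef struct Stack {
    void* result;       /* an owned allocation to deallocate later */
    struct Stack* prev; /* the previous node or NULL */
} Stack;

/* Push an owned pointer; the new node becomes the top of the stack. */
static void* stack_push(Stack** stack, void* result) {
    Stack* node = malloc(sizeof(Stack));
    node->result = result;
    node->prev = *stack;
    *stack = node;
    return result;
}

/* Pop the top node, returning its owned pointer to the caller. */
static void* stack_pop(Stack** stack) {
    if (!*stack) return NULL;
    Stack* top = *stack;
    void* result = top->result;
    *stack = top->prev;
    free(top);
    return result;
}

/* Free every node and every owned pointer, then null out the stack. */
static void stack_free(Stack** stack) {
    while (*stack) {
        free(stack_pop(stack));
    }
}
```

This is the shape that lets the quickstart example replace per-call `AMfree()` bookkeeping with a single `AMstackFree(&stack)` before exit.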


@@ -0,0 +1,53 @@
#ifndef AUTOMERGE_C_UTILS_PUSH_CALLBACK_DATA_H
#define AUTOMERGE_C_UTILS_PUSH_CALLBACK_DATA_H
/**
* \file
* \brief Utility data structures, functions and macros for supplying
* parameters to the custom validation logic applied to `AMitem`
* structs.
*/
#include <automerge-c/automerge.h>
/**
* \struct AMstackCallbackData
* \brief A data structure for passing the parameters of an item value test
* to an implementation of the `AMstackCallback` function prototype.
*/
typedef struct {
/** A bitmask of `AMvalType` tags. */
AMvalType bitmask;
/** A null-terminated file path string. */
char const* file;
/** The ordinal number of a line within a file. */
int line;
} AMstackCallbackData;
/**
* \memberof AMstackCallbackData
* \brief Allocates a new `AMstackCallbackData` struct and initializes its
* members from their corresponding arguments.
*
* \param[in] bitmask A bitmask of `AMvalType` tags.
* \param[in] file A null-terminated file path string.
* \param[in] line The ordinal number of a line within a file.
* \return A pointer to a disowned `AMstackCallbackData` struct.
* \warning The returned pointer must be passed to `free()` to avoid a memory
* leak.
*/
AMstackCallbackData* AMstackCallbackDataInit(AMvalType const bitmask, char const* const file, int const line);
/**
* \memberof AMstackCallbackData
* \def AMexpect
* \brief Allocates a new `AMstackCallbackData` struct and initializes it from
an `AMvalType` bitmask.
*
* \param[in] bitmask A bitmask of `AMvalType` tags.
* \return A pointer to a disowned `AMstackCallbackData` struct.
* \warning The returned pointer must be passed to `free()` to avoid a memory
* leak.
*/
#define AMexpect(bitmask) AMstackCallbackDataInit(bitmask, __FILE__, __LINE__)
#endif /* AUTOMERGE_C_UTILS_PUSH_CALLBACK_DATA_H */
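
The `AMexpect()` pattern of capturing the call site hinges on `__FILE__` and `__LINE__` being evaluated where the macro expands, not inside a helper function. A standalone sketch (with hypothetical `CallSite`, `make_call_site`, and `EXPECT` names; the real `AMstackCallbackData` is heap-allocated rather than returned by value):

```c
/* Mirrors AMstackCallbackData: an expected value-type bitmask plus the
 * source location at which the expectation was made. */
typedef struct {
    unsigned bitmask;  /* bitmask of expected value-type tags */
    const char* file;  /* file path captured by the macro expansion */
    int line;          /* line number captured by the macro expansion */
} CallSite;

static CallSite make_call_site(unsigned bitmask, const char* file, int line) {
    CallSite site = {bitmask, file, line};
    return site;
}

/* The macro, not its caller, records where it was expanded, so a failed
 * validation can report the exact call site, as abort_cb() does above. */
#define EXPECT(bitmask) make_call_site((bitmask), __FILE__, __LINE__)
```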


@@ -0,0 +1,29 @@
#ifndef AUTOMERGE_C_UTILS_STRING_H
#define AUTOMERGE_C_UTILS_STRING_H
/**
* \file
* \brief Utility functions for use with `AMbyteSpan` structs that provide
* UTF-8 string views.
*/
#include <automerge-c/automerge.h>
/**
* \memberof AMbyteSpan
* \brief Returns a pointer to a null-terminated byte string which is a
* duplicate of the given UTF-8 string view except for the substitution
* of its NUL (0) characters with the specified null-terminated byte
* string.
*
* \param[in] str A UTF-8 string view as an `AMbyteSpan` struct.
* \param[in] nul A null-terminated byte string to substitute for NUL characters
* or `NULL` to substitute `"\\0"` for NUL characters.
* \return A disowned null-terminated byte string.
* \pre \p str.src `!= NULL`
* \pre \p str.count `<= sizeof(`\p str.src `)`
* \warning The returned pointer must be passed to `free()` to avoid a memory
* leak.
*/
char* AMstrdup(AMbyteSpan const str, char const* nul);
#endif /* AUTOMERGE_C_UTILS_STRING_H */
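
The documented NUL-substitution behavior can be approximated in portable C (an illustrative sketch under the stated contract; `span_dup` is a hypothetical name, not the library's implementation):

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate `count` bytes from `src` into a fresh NUL-terminated string,
 * replacing each embedded NUL (0) byte with the `nul` replacement string,
 * which defaults to "\0" spelled out as two characters when `nul` is NULL. */
static char* span_dup(const unsigned char* src, size_t count, const char* nul) {
    if (!nul) nul = "\\0";
    size_t nul_len = strlen(nul);
    /* Worst case: every byte is NUL and expands to `nul_len` bytes. */
    char* dup = malloc(count * (nul_len ? nul_len : 1) + 1);
    size_t pos = 0;
    for (size_t i = 0; i < count; ++i) {
        if (src[i] == '\0') {
            memcpy(dup + pos, nul, nul_len);
            pos += nul_len;
        } else {
            dup[pos++] = (char)src[i];
        }
    }
    dup[pos] = '\0';
    return dup;
}
```

As with `AMstrdup()`, the returned pointer must be passed to `free()` to avoid a memory leak.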


@@ -1,250 +0,0 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
find_program (
CARGO_CMD
"cargo"
PATHS "$ENV{CARGO_HOME}/bin"
DOC "The Cargo command"
)
if(NOT CARGO_CMD)
message(FATAL_ERROR "Cargo (Rust package manager) not found! Install it and/or set the CARGO_HOME environment variable.")
endif()
string(TOLOWER "${CMAKE_BUILD_TYPE}" BUILD_TYPE_LOWER)
if(BUILD_TYPE_LOWER STREQUAL debug)
set(CARGO_BUILD_TYPE "debug")
set(CARGO_FLAG "")
else()
set(CARGO_BUILD_TYPE "release")
set(CARGO_FLAG "--release")
endif()
set(CARGO_FEATURES "")
set(CARGO_CURRENT_BINARY_DIR "${CARGO_TARGET_DIR}/${CARGO_BUILD_TYPE}")
set(
CARGO_OUTPUT
${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}
)
if(WIN32)
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
list(APPEND CARGO_OUTPUT ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
add_custom_command(
OUTPUT
${CARGO_OUTPUT}
COMMAND
# \note cbindgen won't regenerate its output header file after it's
# been removed but it will after its configuration file has been
# updated.
${CMAKE_COMMAND} -DCONDITION=NOT_EXISTS -P ${CMAKE_SOURCE_DIR}/cmake/file_touch.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h ${CMAKE_SOURCE_DIR}/cbindgen.toml
COMMAND
${CMAKE_COMMAND} -E env CARGO_TARGET_DIR=${CARGO_TARGET_DIR} CBINDGEN_TARGET_DIR=${CBINDGEN_TARGET_DIR} ${CARGO_CMD} build ${CARGO_FLAG} ${CARGO_FEATURES}
MAIN_DEPENDENCY
lib.rs
DEPENDS
actor_id.rs
byte_span.rs
change_hashes.rs
change.rs
changes.rs
doc.rs
doc/list.rs
doc/list/item.rs
doc/list/items.rs
doc/map.rs
doc/map/item.rs
doc/map/items.rs
doc/utils.rs
obj.rs
obj/item.rs
obj/items.rs
result.rs
result_stack.rs
strs.rs
sync.rs
sync/have.rs
sync/haves.rs
sync/message.rs
sync/state.rs
${CMAKE_SOURCE_DIR}/build.rs
${CMAKE_SOURCE_DIR}/Cargo.toml
${CMAKE_SOURCE_DIR}/cbindgen.toml
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Producing the library artifacts with Cargo..."
VERBATIM
)
add_custom_target(
${LIBRARY_NAME}_artifacts ALL
DEPENDS ${CARGO_OUTPUT}
)
# \note cbindgen's naming behavior isn't fully configurable and it ignores
# `const fn` calls (https://github.com/eqrion/cbindgen/issues/252).
add_custom_command(
TARGET ${LIBRARY_NAME}_artifacts
POST_BUILD
COMMAND
# Compensate for cbindgen's variant struct naming.
${CMAKE_COMMAND} -DMATCH_REGEX=AM\([^_]+_[^_]+\)_Body -DREPLACE_EXPR=AM\\1 -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
COMMAND
# Compensate for cbindgen's union tag enum type naming.
${CMAKE_COMMAND} -DMATCH_REGEX=AM\([^_]+\)_Tag -DREPLACE_EXPR=AM\\1Variant -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
COMMAND
# Compensate for cbindgen's translation of consecutive uppercase letters to "ScreamingSnakeCase".
${CMAKE_COMMAND} -DMATCH_REGEX=A_M\([^_]+\)_ -DREPLACE_EXPR=AM_\\1_ -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
COMMAND
# Compensate for cbindgen ignoring `std::mem::size_of::<usize>()` calls.
${CMAKE_COMMAND} -DMATCH_REGEX=USIZE_ -DREPLACE_EXPR=\+${CMAKE_SIZEOF_VOID_P} -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Compensating for cbindgen deficits..."
VERBATIM
)
if(BUILD_SHARED_LIBS)
if(WIN32)
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_BINDIR}")
else()
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
endif()
set(LIBRARY_DEFINE_SYMBOL "${SYMBOL_PREFIX}_EXPORTS")
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
set(LIBRARY_IMPLIB "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(LIBRARY_NO_SONAME "${WIN32}")
set(LIBRARY_SONAME "${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(LIBRARY_TYPE "SHARED")
else()
set(LIBRARY_DEFINE_SYMBOL "")
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
set(LIBRARY_IMPLIB "")
set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(LIBRARY_NO_SONAME "TRUE")
set(LIBRARY_SONAME "")
set(LIBRARY_TYPE "STATIC")
endif()
add_library(${LIBRARY_NAME} ${LIBRARY_TYPE} IMPORTED GLOBAL)
set_target_properties(
${LIBRARY_NAME}
PROPERTIES
# \note Cargo writes a debug build into a nested directory instead of
# decorating its name.
DEBUG_POSTFIX ""
DEFINE_SYMBOL "${LIBRARY_DEFINE_SYMBOL}"
IMPORTED_IMPLIB "${LIBRARY_IMPLIB}"
IMPORTED_LOCATION "${LIBRARY_LOCATION}"
IMPORTED_NO_SONAME "${LIBRARY_NO_SONAME}"
IMPORTED_SONAME "${LIBRARY_SONAME}"
LINKER_LANGUAGE C
PUBLIC_HEADER "${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h"
SOVERSION "${PROJECT_VERSION_MAJOR}"
VERSION "${PROJECT_VERSION}"
# \note Cargo exports all of the symbols automatically.
WINDOWS_EXPORT_ALL_SYMBOLS "TRUE"
)
target_compile_definitions(${LIBRARY_NAME} INTERFACE $<TARGET_PROPERTY:${LIBRARY_NAME},DEFINE_SYMBOL>)
target_include_directories(
${LIBRARY_NAME}
INTERFACE
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}>"
)
set(CMAKE_THREAD_PREFER_PTHREAD TRUE)
set(THREADS_PREFER_PTHREAD_FLAG TRUE)
find_package(Threads REQUIRED)
set(LIBRARY_DEPENDENCIES Threads::Threads ${CMAKE_DL_LIBS})
if(WIN32)
list(APPEND LIBRARY_DEPENDENCIES Bcrypt userenv ws2_32)
else()
list(APPEND LIBRARY_DEPENDENCIES m)
endif()
target_link_libraries(${LIBRARY_NAME} INTERFACE ${LIBRARY_DEPENDENCIES})
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_IMPLIB>
TYPE LIB
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
RENAME "${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}"
OPTIONAL
)
set(LIBRARY_FILE_NAME "${CMAKE_${LIBRARY_TYPE}_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_${LIBRARY_TYPE}_LIBRARY_SUFFIX}")
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_LOCATION>
RENAME "${LIBRARY_FILE_NAME}"
DESTINATION ${LIBRARY_DESTINATION}
)
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},PUBLIC_HEADER>
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)
find_package(Doxygen OPTIONAL_COMPONENTS dot)
if(DOXYGEN_FOUND)
set(DOXYGEN_ALIASES "installed_headerfile=\\headerfile ${LIBRARY_NAME}.h <${PROJECT_NAME}/${LIBRARY_NAME}.h>")
set(DOXYGEN_GENERATE_LATEX YES)
set(DOXYGEN_PDF_HYPERLINKS YES)
set(DOXYGEN_PROJECT_LOGO "${CMAKE_SOURCE_DIR}/img/brandmark.png")
set(DOXYGEN_SORT_BRIEF_DOCS YES)
set(DOXYGEN_USE_MDFILE_AS_MAINPAGE "${CMAKE_SOURCE_DIR}/README.md")
doxygen_add_docs(
${LIBRARY_NAME}_docs
"${CBINDGEN_TARGET_DIR}/${LIBRARY_NAME}.h"
"${CMAKE_SOURCE_DIR}/README.md"
USE_STAMP_FILE
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
COMMENT "Producing documentation with Doxygen..."
)
# \note A Doxygen input file isn't a file-level dependency so the Doxygen
# command must instead depend upon a target that outputs the file or
# it will just output an error message when it can't be found.
add_dependencies(${LIBRARY_NAME}_docs ${LIBRARY_NAME}_artifacts)
endif()


@@ -1,4 +1,5 @@
use automerge as am;
use libc::c_int;
use std::cell::RefCell;
use std::cmp::Ordering;
use std::str::FromStr;
@@ -11,7 +12,7 @@ macro_rules! to_actor_id {
let handle = $handle.as_ref();
match handle {
Some(b) => b,
None => return AMresult::err("Invalid AMactorId pointer").into(),
None => return AMresult::error("Invalid `AMactorId*`").into(),
}
}};
}
@@ -57,11 +58,11 @@ impl AsRef<am::ActorId> for AMactorId {
}
/// \memberof AMactorId
/// \brief Gets the value of an actor identifier as a sequence of bytes.
/// \brief Gets the value of an actor identifier as an array of bytes.
///
/// \param[in] actor_id A pointer to an `AMactorId` struct.
/// \pre \p actor_id `!= NULL`.
/// \return An `AMbyteSpan` struct.
/// \return An `AMbyteSpan` struct for an array of bytes.
/// \pre \p actor_id `!= NULL`
/// \internal
///
/// # Safety
@@ -82,8 +83,8 @@ pub unsafe extern "C" fn AMactorIdBytes(actor_id: *const AMactorId) -> AMbyteSpan
/// \return `-1` if \p actor_id1 `<` \p actor_id2, `0` if
/// \p actor_id1 `==` \p actor_id2 and `1` if
/// \p actor_id1 `>` \p actor_id2.
/// \pre \p actor_id1 `!= NULL`.
/// \pre \p actor_id2 `!= NULL`.
/// \pre \p actor_id1 `!= NULL`
/// \pre \p actor_id2 `!= NULL`
/// \internal
///
/// #Safety
@@ -93,7 +94,7 @@ pub unsafe extern "C" fn AMactorIdBytes(actor_id: *const AMactorId) -> AMbyteSpan
pub unsafe extern "C" fn AMactorIdCmp(
actor_id1: *const AMactorId,
actor_id2: *const AMactorId,
) -> isize {
) -> c_int {
match (actor_id1.as_ref(), actor_id2.as_ref()) {
(Some(actor_id1), Some(actor_id2)) => match actor_id1.as_ref().cmp(actor_id2.as_ref()) {
Ordering::Less => -1,
@@ -101,65 +102,69 @@ pub unsafe extern "C" fn AMactorIdCmp(
Ordering::Greater => 1,
},
(None, Some(_)) => -1,
(Some(_), None) => 1,
(None, None) => 0,
(Some(_), None) => 1,
}
}
/// \memberof AMactorId
/// \brief Allocates a new actor identifier and initializes it with a random
/// UUID.
/// \brief Allocates a new actor identifier and initializes it from a random
/// UUID value.
///
/// \return A pointer to an `AMresult` struct containing a pointer to an
/// `AMactorId` struct.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \return A pointer to an `AMresult` struct with an `AM_VAL_TYPE_ACTOR_ID` item.
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
#[no_mangle]
pub unsafe extern "C" fn AMactorIdInit() -> *mut AMresult {
to_result(Ok::<am::ActorId, am::AutomergeError>(am::ActorId::random()))
}
/// \memberof AMactorId
/// \brief Allocates a new actor identifier and initializes it from a sequence
/// of bytes.
/// \brief Allocates a new actor identifier and initializes it from an array of
/// bytes value.
///
/// \param[in] src A pointer to a contiguous sequence of bytes.
/// \param[in] count The number of bytes to copy from \p src.
/// \pre `0 <` \p count `<= sizeof(`\p src`)`.
/// \return A pointer to an `AMresult` struct containing a pointer to an
/// `AMactorId` struct.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \param[in] src A pointer to an array of bytes.
/// \param[in] count The count of bytes to copy from the array pointed to by
/// \p src.
/// \return A pointer to an `AMresult` struct with an `AM_VAL_TYPE_ACTOR_ID` item.
/// \pre \p src `!= NULL`
/// \pre `sizeof(`\p src `) > 0`
/// \pre \p count `<= sizeof(`\p src `)`
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
/// \internal
///
/// # Safety
/// src must be a byte array of size `>= count`
/// src must be a byte array of length `>= count`
#[no_mangle]
pub unsafe extern "C" fn AMactorIdInitBytes(src: *const u8, count: usize) -> *mut AMresult {
let slice = std::slice::from_raw_parts(src, count);
to_result(Ok::<am::ActorId, am::InvalidActorId>(am::ActorId::from(
slice,
)))
pub unsafe extern "C" fn AMactorIdFromBytes(src: *const u8, count: usize) -> *mut AMresult {
if !src.is_null() {
let value = std::slice::from_raw_parts(src, count);
to_result(Ok::<am::ActorId, am::InvalidActorId>(am::ActorId::from(
value,
)))
} else {
AMresult::error("Invalid uint8_t*").into()
}
}
/// \memberof AMactorId
/// \brief Allocates a new actor identifier and initializes it from a
/// hexadecimal string.
/// hexadecimal UTF-8 string view value.
///
/// \param[in] hex_str A UTF-8 string view as an `AMbyteSpan` struct.
/// \return A pointer to an `AMresult` struct containing a pointer to an
/// `AMactorId` struct.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \param[in] value A UTF-8 string view as an `AMbyteSpan` struct.
/// \return A pointer to an `AMresult` struct with an `AM_VAL_TYPE_ACTOR_ID` item.
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
/// \internal
///
/// # Safety
/// hex_str must be a valid pointer to an AMbyteSpan
#[no_mangle]
pub unsafe extern "C" fn AMactorIdInitStr(hex_str: AMbyteSpan) -> *mut AMresult {
pub unsafe extern "C" fn AMactorIdFromStr(value: AMbyteSpan) -> *mut AMresult {
use am::AutomergeError::InvalidActorId;
to_result(match (&hex_str).try_into() {
to_result(match (&value).try_into() {
Ok(s) => match am::ActorId::from_str(s) {
Ok(actor_id) => Ok(actor_id),
Err(_) => Err(InvalidActorId(String::from(s))),
@@ -169,11 +174,12 @@ pub unsafe extern "C" fn AMactorIdInitStr(hex_str: AMbyteSpan) -> *mut AMresult
}
/// \memberof AMactorId
/// \brief Gets the value of an actor identifier as a hexadecimal string.
/// \brief Gets the value of an actor identifier as a UTF-8 hexadecimal string
/// view.
///
/// \param[in] actor_id A pointer to an `AMactorId` struct.
/// \pre \p actor_id `!= NULL`.
/// \return A UTF-8 string view as an `AMbyteSpan` struct.
/// \pre \p actor_id `!= NULL`
/// \internal
///
/// # Safety


@@ -1,14 +1,17 @@
use automerge as am;
use libc::strlen;
use std::cmp::Ordering;
use std::convert::TryFrom;
use std::os::raw::c_char;
use libc::{c_int, strlen};
use smol_str::SmolStr;
macro_rules! to_str {
($span:expr) => {{
let result: Result<&str, am::AutomergeError> = (&$span).try_into();
($byte_span:expr) => {{
let result: Result<&str, am::AutomergeError> = (&$byte_span).try_into();
match result {
Ok(s) => s,
Err(e) => return AMresult::err(&e.to_string()).into(),
Err(e) => return AMresult::error(&e.to_string()).into(),
}
}};
}
@@ -17,16 +20,17 @@ pub(crate) use to_str;
/// \struct AMbyteSpan
/// \installed_headerfile
/// \brief A view onto a contiguous sequence of bytes.
/// \brief A view onto an array of bytes.
#[repr(C)]
pub struct AMbyteSpan {
/// A pointer to an array of bytes.
/// \attention <b>NEVER CALL `free()` ON \p src!</b>
/// \warning \p src is only valid until the `AMfree()` function is called
/// on the `AMresult` struct that stores the array of bytes to
/// which it points.
/// A pointer to the first byte of an array of bytes.
/// \warning \p src is only valid until the array of bytes to which it
/// points is freed.
/// \note If the `AMbyteSpan` came from within an `AMitem` struct then
/// \p src will be freed when the pointer to the `AMresult` struct
/// containing the `AMitem` struct is passed to `AMresultFree()`.
pub src: *const u8,
/// The number of bytes in the array.
/// The count of bytes in the array.
pub count: usize,
}
@@ -52,9 +56,7 @@ impl PartialEq for AMbyteSpan {
} else if self.src == other.src {
return true;
}
let slice = unsafe { std::slice::from_raw_parts(self.src, self.count) };
let other_slice = unsafe { std::slice::from_raw_parts(other.src, other.count) };
slice == other_slice
<&[u8]>::from(self) == <&[u8]>::from(other)
}
}
@@ -72,10 +74,15 @@ impl From<&am::ActorId> for AMbyteSpan {
impl From<&mut am::ActorId> for AMbyteSpan {
fn from(actor: &mut am::ActorId) -> Self {
let slice = actor.to_bytes();
actor.as_ref().into()
}
}
impl From<&am::ChangeHash> for AMbyteSpan {
fn from(change_hash: &am::ChangeHash) -> Self {
Self {
src: slice.as_ptr(),
count: slice.len(),
src: change_hash.0.as_ptr(),
count: change_hash.0.len(),
}
}
}
@@ -93,12 +100,9 @@ impl From<*const c_char> for AMbyteSpan {
}
}
impl From<&am::ChangeHash> for AMbyteSpan {
fn from(change_hash: &am::ChangeHash) -> Self {
Self {
src: change_hash.0.as_ptr(),
count: change_hash.0.len(),
}
impl From<&SmolStr> for AMbyteSpan {
fn from(smol_str: &SmolStr) -> Self {
smol_str.as_bytes().into()
}
}
@@ -111,13 +115,39 @@ impl From<&[u8]> for AMbyteSpan {
}
}
impl From<&AMbyteSpan> for &[u8] {
fn from(byte_span: &AMbyteSpan) -> Self {
unsafe { std::slice::from_raw_parts(byte_span.src, byte_span.count) }
}
}
impl From<&AMbyteSpan> for Vec<u8> {
fn from(byte_span: &AMbyteSpan) -> Self {
<&[u8]>::from(byte_span).to_vec()
}
}
impl TryFrom<&AMbyteSpan> for am::ChangeHash {
type Error = am::AutomergeError;
fn try_from(byte_span: &AMbyteSpan) -> Result<Self, Self::Error> {
use am::AutomergeError::InvalidChangeHashBytes;
let slice: &[u8] = byte_span.into();
match slice.try_into() {
Ok(change_hash) => Ok(change_hash),
Err(e) => Err(InvalidChangeHashBytes(e)),
}
}
}
impl TryFrom<&AMbyteSpan> for &str {
type Error = am::AutomergeError;
fn try_from(span: &AMbyteSpan) -> Result<Self, Self::Error> {
fn try_from(byte_span: &AMbyteSpan) -> Result<Self, Self::Error> {
use am::AutomergeError::InvalidCharacter;
let slice = unsafe { std::slice::from_raw_parts(span.src, span.count) };
let slice = byte_span.into();
match std::str::from_utf8(slice) {
Ok(str_) => Ok(str_),
Err(e) => Err(InvalidCharacter(e.valid_up_to())),
@@ -125,17 +155,69 @@ impl TryFrom<&AMbyteSpan> for &str {
}
}
/// \brief Creates an AMbyteSpan from a pointer + length
/// \memberof AMbyteSpan
/// \brief Creates a view onto an array of bytes.
///
/// \param[in] src A pointer to a span of bytes
/// \param[in] count The number of bytes in the span
/// \return An `AMbyteSpan` struct
/// \param[in] src A pointer to an array of bytes or `NULL`.
/// \param[in] count The count of bytes to view from the array pointed to by
/// \p src.
/// \return An `AMbyteSpan` struct.
/// \pre \p count `<= sizeof(`\p src `)`
/// \post `(`\p src `== NULL) -> (AMbyteSpan){NULL, 0}`
/// \internal
///
/// # Safety
/// AMbytes does not retain the underlying storage, so you must discard the
/// return value before freeing the bytes.
/// src must be a byte array of length `>= count` or `std::ptr::null()`
#[no_mangle]
pub unsafe extern "C" fn AMbytes(src: *const u8, count: usize) -> AMbyteSpan {
AMbyteSpan { src, count }
AMbyteSpan {
src,
count: if src.is_null() { 0 } else { count },
}
}
/// \memberof AMbyteSpan
/// \brief Creates a view onto a C string.
///
/// \param[in] c_str A null-terminated byte string or `NULL`.
/// \return An `AMbyteSpan` struct.
/// \pre Each byte in \p c_str encodes one UTF-8 character.
/// \internal
///
/// # Safety
/// c_str must be a null-terminated array of `std::os::raw::c_char` or `std::ptr::null()`.
#[no_mangle]
pub unsafe extern "C" fn AMstr(c_str: *const c_char) -> AMbyteSpan {
c_str.into()
}
/// \memberof AMbyteSpan
/// \brief Compares two UTF-8 string views lexicographically.
///
/// \param[in] lhs A UTF-8 string view as an `AMbyteSpan` struct.
/// \param[in] rhs A UTF-8 string view as an `AMbyteSpan` struct.
/// \return Negative value if \p lhs appears before \p rhs in lexicographical order.
/// Zero if \p lhs and \p rhs compare equal.
/// Positive value if \p lhs appears after \p rhs in lexicographical order.
/// \pre \p lhs.src `!= NULL`
/// \pre \p lhs.count `<= sizeof(`\p lhs.src `)`
/// \pre \p rhs.src `!= NULL`
/// \pre \p rhs.count `<= sizeof(`\p rhs.src `)`
/// \internal
///
/// # Safety
/// lhs.src must be a byte array of length >= lhs.count
/// rhs.src must be a byte array of length >= rhs.count
#[no_mangle]
pub unsafe extern "C" fn AMstrCmp(lhs: AMbyteSpan, rhs: AMbyteSpan) -> c_int {
match (<&str>::try_from(&lhs), <&str>::try_from(&rhs)) {
(Ok(lhs), Ok(rhs)) => match lhs.cmp(rhs) {
Ordering::Less => -1,
Ordering::Equal => 0,
Ordering::Greater => 1,
},
(Err(_), Ok(_)) => -1,
(Err(_), Err(_)) => 0,
(Ok(_), Err(_)) => 1,
}
}


@@ -2,7 +2,6 @@ use automerge as am;
use std::cell::RefCell;
use crate::byte_span::AMbyteSpan;
use crate::change_hashes::AMchangeHashes;
use crate::result::{to_result, AMresult};
macro_rules! to_change {
@@ -10,7 +9,7 @@ macro_rules! to_change {
let handle = $handle.as_ref();
match handle {
Some(b) => b,
None => return AMresult::err("Invalid AMchange pointer").into(),
None => return AMresult::error("Invalid `AMchange*`").into(),
}
}};
}
@@ -21,14 +20,14 @@
#[derive(Eq, PartialEq)]
pub struct AMchange {
body: *mut am::Change,
changehash: RefCell<Option<am::ChangeHash>>,
change_hash: RefCell<Option<am::ChangeHash>>,
}
impl AMchange {
pub fn new(change: &mut am::Change) -> Self {
Self {
body: change,
changehash: Default::default(),
change_hash: Default::default(),
}
}
@@ -40,12 +39,12 @@
}
pub fn hash(&self) -> AMbyteSpan {
let mut changehash = self.changehash.borrow_mut();
if let Some(changehash) = changehash.as_ref() {
changehash.into()
let mut change_hash = self.change_hash.borrow_mut();
if let Some(change_hash) = change_hash.as_ref() {
change_hash.into()
} else {
let hash = unsafe { (*self.body).hash() };
let ptr = changehash.insert(hash);
let ptr = change_hash.insert(hash);
AMbyteSpan {
src: ptr.0.as_ptr(),
count: hash.as_ref().len(),
@@ -70,11 +69,10 @@ impl AsRef<am::Change> for AMchange {
/// \brief Gets the first referenced actor identifier in a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \pre \p change `!= NULL`.
/// \return A pointer to an `AMresult` struct containing a pointer to an
/// `AMactorId` struct.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \return A pointer to an `AMresult` struct with an `AM_VAL_TYPE_ACTOR_ID` item.
/// \pre \p change `!= NULL`
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
/// \internal
///
/// # Safety
@@ -90,8 +88,8 @@ pub unsafe extern "C" fn AMchangeActorId(change: *const AMchange) -> *mut AMresu
/// \memberof AMchange
/// \brief Compresses the raw bytes of a change.
///
/// \param[in,out] change A pointer to an `AMchange` struct.
/// \pre \p change `!= NULL`.
/// \param[in] change A pointer to an `AMchange` struct.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -107,18 +105,20 @@ pub unsafe extern "C" fn AMchangeCompress(change: *mut AMchange) {
/// \brief Gets the dependencies of a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A pointer to an `AMchangeHashes` struct or `NULL`.
/// \pre \p change `!= NULL`.
/// \return A pointer to an `AMresult` struct with `AM_VAL_TYPE_CHANGE_HASH` items.
/// \pre \p change `!= NULL`
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
/// \internal
///
/// # Safety
/// change must be a valid pointer to an AMchange
#[no_mangle]
pub unsafe extern "C" fn AMchangeDeps(change: *const AMchange) -> AMchangeHashes {
match change.as_ref() {
Some(change) => AMchangeHashes::new(change.as_ref().deps()),
pub unsafe extern "C" fn AMchangeDeps(change: *const AMchange) -> *mut AMresult {
to_result(match change.as_ref() {
Some(change) => change.as_ref().deps(),
None => Default::default(),
}
})
}
/// \memberof AMchange
@@ -126,7 +126,7 @@ pub unsafe extern "C" fn AMchangeDeps(change: *const AMchange) -> AMchangeHashes
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return An `AMbyteSpan` struct.
/// \pre \p change `!= NULL`.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -141,32 +141,33 @@ pub unsafe extern "C" fn AMchangeExtraBytes(change: *const AMchange) -> AMbyteSp
}
/// \memberof AMchange
/// \brief Loads a sequence of bytes into a change.
/// \brief Allocates a new change and initializes it from an array of bytes value.
///
/// \param[in] src A pointer to an array of bytes.
/// \param[in] count The number of bytes in \p src to load.
/// \return A pointer to an `AMresult` struct containing an `AMchange` struct.
/// \pre \p src `!= NULL`.
/// \pre `0 <` \p count `<= sizeof(`\p src`)`.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \param[in] count The count of bytes to load from the array pointed to by
/// \p src.
/// \return A pointer to an `AMresult` struct with an `AM_VAL_TYPE_CHANGE` item.
/// \pre \p src `!= NULL`
/// \pre `sizeof(`\p src `) > 0`
/// \pre \p count `<= sizeof(`\p src `)`
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
/// \internal
///
/// # Safety
/// src must be a byte array of size `>= count`
/// src must be a byte array of length `>= count`
#[no_mangle]
pub unsafe extern "C" fn AMchangeFromBytes(src: *const u8, count: usize) -> *mut AMresult {
let mut data = Vec::new();
data.extend_from_slice(std::slice::from_raw_parts(src, count));
to_result(am::Change::from_bytes(data))
let data = std::slice::from_raw_parts(src, count);
to_result(am::Change::from_bytes(data.to_vec()))
}
/// \memberof AMchange
/// \brief Gets the hash of a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A change hash as an `AMbyteSpan` struct.
/// \pre \p change `!= NULL`.
/// \return An `AMbyteSpan` struct for a change hash.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -183,8 +184,8 @@ pub unsafe extern "C" fn AMchangeHash(change: *const AMchange) -> AMbyteSpan {
/// \brief Tests the emptiness of a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A boolean.
/// \pre \p change `!= NULL`.
/// \return `true` if \p change is empty, `false` otherwise.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -198,12 +199,37 @@ pub unsafe extern "C" fn AMchangeIsEmpty(change: *const AMchange) -> bool {
}
}
/// \memberof AMchange
/// \brief Loads a document into a sequence of changes.
///
/// \param[in] src A pointer to an array of bytes.
/// \param[in] count The count of bytes to load from the array pointed to by
/// \p src.
/// \return A pointer to an `AMresult` struct with `AM_VAL_TYPE_CHANGE` items.
/// \pre \p src `!= NULL`
/// \pre `sizeof(`\p src `) > 0`
/// \pre \p count `<= sizeof(`\p src `)`
/// \warning The returned `AMresult` struct pointer must be passed to
/// `AMresultFree()` in order to avoid a memory leak.
/// \internal
///
/// # Safety
/// src must be a byte array of length `>= count`
#[no_mangle]
pub unsafe extern "C" fn AMchangeLoadDocument(src: *const u8, count: usize) -> *mut AMresult {
let data = std::slice::from_raw_parts(src, count);
to_result::<Result<Vec<am::Change>, _>>(
am::Automerge::load(data)
.and_then(|d| d.get_changes(&[]).map(|c| c.into_iter().cloned().collect())),
)
}
/// \memberof AMchange
/// \brief Gets the maximum operation index of a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A 64-bit unsigned integer.
/// \pre \p change `!= NULL`.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -221,8 +247,8 @@ pub unsafe extern "C" fn AMchangeMaxOp(change: *const AMchange) -> u64 {
/// \brief Gets the message of a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A UTF-8 string view as an `AMbyteSpan` struct.
/// \pre \p change `!= NULL`.
/// \return An `AMbyteSpan` struct for a UTF-8 string.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -240,7 +266,7 @@ pub unsafe extern "C" fn AMchangeMessage(change: *const AMchange) -> AMbyteSpan
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A 64-bit unsigned integer.
/// \pre \p change `!= NULL`.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -259,7 +285,7 @@ pub unsafe extern "C" fn AMchangeSeq(change: *const AMchange) -> u64 {
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A 64-bit unsigned integer.
/// \pre \p change `!= NULL`.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -267,10 +293,9 @@ pub unsafe extern "C" fn AMchangeSeq(change: *const AMchange) -> u64 {
#[no_mangle]
pub unsafe extern "C" fn AMchangeSize(change: *const AMchange) -> usize {
if let Some(change) = change.as_ref() {
change.as_ref().len()
} else {
0
return change.as_ref().len();
}
0
}
/// \memberof AMchange
@@ -278,7 +303,7 @@ pub unsafe extern "C" fn AMchangeSize(change: *const AMchange) -> usize {
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A 64-bit unsigned integer.
/// \pre \p change `!= NULL`.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -297,7 +322,7 @@ pub unsafe extern "C" fn AMchangeStartOp(change: *const AMchange) -> u64 {
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return A 64-bit signed integer.
/// \pre \p change `!= NULL`.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -315,8 +340,8 @@ pub unsafe extern "C" fn AMchangeTime(change: *const AMchange) -> i64 {
/// \brief Gets the raw bytes of a change.
///
/// \param[in] change A pointer to an `AMchange` struct.
/// \return An `AMbyteSpan` struct.
/// \pre \p change `!= NULL`.
/// \return An `AMbyteSpan` struct for an array of bytes.
/// \pre \p change `!= NULL`
/// \internal
///
/// # Safety
@@ -329,28 +354,3 @@ pub unsafe extern "C" fn AMchangeRawBytes(change: *const AMchange) -> AMbyteSpan
Default::default()
}
}
/// \memberof AMchange
/// \brief Loads a document into a sequence of changes.
///
/// \param[in] src A pointer to an array of bytes.
/// \param[in] count The number of bytes in \p src to load.
/// \return A pointer to an `AMresult` struct containing a sequence of
/// `AMchange` structs.
/// \pre \p src `!= NULL`.
/// \pre `0 <` \p count `<= sizeof(`\p src`)`.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \internal
///
/// # Safety
/// src must be a byte array of size `>= count`
#[no_mangle]
pub unsafe extern "C" fn AMchangeLoadDocument(src: *const u8, count: usize) -> *mut AMresult {
let mut data = Vec::new();
data.extend_from_slice(std::slice::from_raw_parts(src, count));
to_result::<Result<Vec<am::Change>, _>>(
am::Automerge::load(&data)
.and_then(|d| d.get_changes(&[]).map(|c| c.into_iter().cloned().collect())),
)
}


@@ -1,400 +0,0 @@
use automerge as am;
use std::cmp::Ordering;
use std::ffi::c_void;
use std::mem::size_of;
use crate::byte_span::AMbyteSpan;
use crate::result::{to_result, AMresult};
#[repr(C)]
struct Detail {
len: usize,
offset: isize,
ptr: *const c_void,
}
/// \note cbindgen won't propagate the value of a `std::mem::size_of<T>()` call
/// (https://github.com/eqrion/cbindgen/issues/252) but it will
/// propagate the name of a constant initialized from it so if the
/// constant's name is a symbolic representation of the value it can be
/// converted into a number by post-processing the header it generated.
pub const USIZE_USIZE_USIZE_: usize = size_of::<Detail>();
impl Detail {
fn new(change_hashes: &[am::ChangeHash], offset: isize) -> Self {
Self {
len: change_hashes.len(),
offset,
ptr: change_hashes.as_ptr() as *const c_void,
}
}
pub fn advance(&mut self, n: isize) {
if n == 0 {
return;
}
let len = self.len as isize;
self.offset = if self.offset < 0 {
// It's reversed.
let unclipped = self.offset.checked_sub(n).unwrap_or(isize::MIN);
if unclipped >= 0 {
// Clip it to the forward stop.
len
} else {
std::cmp::min(std::cmp::max(-(len + 1), unclipped), -1)
}
} else {
let unclipped = self.offset.checked_add(n).unwrap_or(isize::MAX);
if unclipped < 0 {
// Clip it to the reverse stop.
-(len + 1)
} else {
std::cmp::max(0, std::cmp::min(unclipped, len))
}
}
}
pub fn get_index(&self) -> usize {
(self.offset
+ if self.offset < 0 {
self.len as isize
} else {
0
}) as usize
}
pub fn next(&mut self, n: isize) -> Option<&am::ChangeHash> {
if self.is_stopped() {
return None;
}
let slice: &[am::ChangeHash] =
unsafe { std::slice::from_raw_parts(self.ptr as *const am::ChangeHash, self.len) };
let value = &slice[self.get_index()];
self.advance(n);
Some(value)
}
pub fn is_stopped(&self) -> bool {
let len = self.len as isize;
self.offset < -len || self.offset == len
}
pub fn prev(&mut self, n: isize) -> Option<&am::ChangeHash> {
self.advance(-n);
if self.is_stopped() {
return None;
}
let slice: &[am::ChangeHash] =
unsafe { std::slice::from_raw_parts(self.ptr as *const am::ChangeHash, self.len) };
Some(&slice[self.get_index()])
}
pub fn reversed(&self) -> Self {
Self {
len: self.len,
offset: -(self.offset + 1),
ptr: self.ptr,
}
}
pub fn rewound(&self) -> Self {
Self {
len: self.len,
offset: if self.offset < 0 { -1 } else { 0 },
ptr: self.ptr,
}
}
}
impl From<Detail> for [u8; USIZE_USIZE_USIZE_] {
fn from(detail: Detail) -> Self {
unsafe {
std::slice::from_raw_parts((&detail as *const Detail) as *const u8, USIZE_USIZE_USIZE_)
.try_into()
.unwrap()
}
}
}
/// \struct AMchangeHashes
/// \installed_headerfile
/// \brief A random-access iterator over a sequence of change hashes.
#[repr(C)]
#[derive(Eq, PartialEq)]
pub struct AMchangeHashes {
/// An implementation detail that is intentionally opaque.
/// \warning Modifying \p detail will cause undefined behavior.
/// \note The actual size of \p detail will vary by platform, this is just
/// the one for the platform this documentation was built on.
detail: [u8; USIZE_USIZE_USIZE_],
}
impl AMchangeHashes {
pub fn new(change_hashes: &[am::ChangeHash]) -> Self {
Self {
detail: Detail::new(change_hashes, 0).into(),
}
}
pub fn advance(&mut self, n: isize) {
let detail = unsafe { &mut *(self.detail.as_mut_ptr() as *mut Detail) };
detail.advance(n);
}
pub fn len(&self) -> usize {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
detail.len
}
pub fn next(&mut self, n: isize) -> Option<&am::ChangeHash> {
let detail = unsafe { &mut *(self.detail.as_mut_ptr() as *mut Detail) };
detail.next(n)
}
pub fn prev(&mut self, n: isize) -> Option<&am::ChangeHash> {
let detail = unsafe { &mut *(self.detail.as_mut_ptr() as *mut Detail) };
detail.prev(n)
}
pub fn reversed(&self) -> Self {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
Self {
detail: detail.reversed().into(),
}
}
pub fn rewound(&self) -> Self {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
Self {
detail: detail.rewound().into(),
}
}
}
impl AsRef<[am::ChangeHash]> for AMchangeHashes {
fn as_ref(&self) -> &[am::ChangeHash] {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
unsafe { std::slice::from_raw_parts(detail.ptr as *const am::ChangeHash, detail.len) }
}
}
impl Default for AMchangeHashes {
fn default() -> Self {
Self {
detail: [0; USIZE_USIZE_USIZE_],
}
}
}
/// \memberof AMchangeHashes
/// \brief Advances an iterator over a sequence of change hashes by at most
/// \p |n| positions where the sign of \p n is relative to the
/// iterator's direction.
///
/// \param[in,out] change_hashes A pointer to an `AMchangeHashes` struct.
/// \param[in] n The direction (\p -n -> opposite, \p n -> same) and maximum
/// number of positions to advance.
/// \pre \p change_hashes `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesAdvance(change_hashes: *mut AMchangeHashes, n: isize) {
if let Some(change_hashes) = change_hashes.as_mut() {
change_hashes.advance(n);
};
}
/// \memberof AMchangeHashes
/// \brief Compares the sequences of change hashes underlying a pair of
/// iterators.
///
/// \param[in] change_hashes1 A pointer to an `AMchangeHashes` struct.
/// \param[in] change_hashes2 A pointer to an `AMchangeHashes` struct.
/// \return `-1` if \p change_hashes1 `<` \p change_hashes2, `0` if
/// \p change_hashes1 `==` \p change_hashes2 and `1` if
/// \p change_hashes1 `>` \p change_hashes2.
/// \pre \p change_hashes1 `!= NULL`.
/// \pre \p change_hashes2 `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes1 must be a valid pointer to an AMchangeHashes
/// change_hashes2 must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesCmp(
change_hashes1: *const AMchangeHashes,
change_hashes2: *const AMchangeHashes,
) -> isize {
match (change_hashes1.as_ref(), change_hashes2.as_ref()) {
(Some(change_hashes1), Some(change_hashes2)) => {
match change_hashes1.as_ref().cmp(change_hashes2.as_ref()) {
Ordering::Less => -1,
Ordering::Equal => 0,
Ordering::Greater => 1,
}
}
(None, Some(_)) => -1,
(Some(_), None) => 1,
(None, None) => 0,
}
}
/// \memberof AMchangeHashes
/// \brief Allocates an iterator over a sequence of change hashes and
/// initializes it from a sequence of byte spans.
///
/// \param[in] src A pointer to an array of `AMbyteSpan` structs.
/// \param[in] count The number of `AMbyteSpan` structs to copy from \p src.
/// \return A pointer to an `AMresult` struct containing an `AMchangeHashes`
/// struct.
/// \pre \p src `!= NULL`.
/// \pre `0 <` \p count `<= sizeof(`\p src`) / sizeof(AMbyteSpan)`.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \internal
///
/// # Safety
/// src must be an AMbyteSpan array of size `>= count`
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesInit(src: *const AMbyteSpan, count: usize) -> *mut AMresult {
let mut change_hashes = Vec::<am::ChangeHash>::new();
for n in 0..count {
let byte_span = &*src.add(n);
let slice = std::slice::from_raw_parts(byte_span.src, byte_span.count);
match slice.try_into() {
Ok(change_hash) => {
change_hashes.push(change_hash);
}
Err(e) => {
return to_result(Err(e));
}
}
}
to_result(Ok::<Vec<am::ChangeHash>, am::InvalidChangeHashSlice>(
change_hashes,
))
}
/// \memberof AMchangeHashes
/// \brief Gets the change hash at the current position of an iterator over a
/// sequence of change hashes and then advances it by at most \p |n|
/// positions where the sign of \p n is relative to the iterator's
/// direction.
///
/// \param[in,out] change_hashes A pointer to an `AMchangeHashes` struct.
/// \param[in] n The direction (\p -n -> opposite, \p n -> same) and maximum
/// number of positions to advance.
/// \return An `AMbyteSpan` struct with `.src == NULL` when \p change_hashes
/// was previously advanced past its forward/reverse limit.
/// \pre \p change_hashes `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesNext(
change_hashes: *mut AMchangeHashes,
n: isize,
) -> AMbyteSpan {
if let Some(change_hashes) = change_hashes.as_mut() {
if let Some(change_hash) = change_hashes.next(n) {
return change_hash.into();
}
}
Default::default()
}
/// \memberof AMchangeHashes
/// \brief Advances an iterator over a sequence of change hashes by at most
/// \p |n| positions where the sign of \p n is relative to the
/// iterator's direction and then gets the change hash at its new
/// position.
///
/// \param[in,out] change_hashes A pointer to an `AMchangeHashes` struct.
/// \param[in] n The direction (\p -n -> opposite, \p n -> same) and maximum
/// number of positions to advance.
/// \return An `AMbyteSpan` struct with `.src == NULL` when \p change_hashes is
/// presently advanced past its forward/reverse limit.
/// \pre \p change_hashes `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesPrev(
change_hashes: *mut AMchangeHashes,
n: isize,
) -> AMbyteSpan {
if let Some(change_hashes) = change_hashes.as_mut() {
if let Some(change_hash) = change_hashes.prev(n) {
return change_hash.into();
}
}
Default::default()
}
/// \memberof AMchangeHashes
/// \brief Gets the size of the sequence of change hashes underlying an
/// iterator.
///
/// \param[in] change_hashes A pointer to an `AMchangeHashes` struct.
/// \return The count of values in \p change_hashes.
/// \pre \p change_hashes `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesSize(change_hashes: *const AMchangeHashes) -> usize {
if let Some(change_hashes) = change_hashes.as_ref() {
change_hashes.len()
} else {
0
}
}
/// \memberof AMchangeHashes
/// \brief Creates an iterator over the same sequence of change hashes as the
/// given one but with the opposite position and direction.
///
/// \param[in] change_hashes A pointer to an `AMchangeHashes` struct.
/// \return An `AMchangeHashes` struct
/// \pre \p change_hashes `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesReversed(
change_hashes: *const AMchangeHashes,
) -> AMchangeHashes {
if let Some(change_hashes) = change_hashes.as_ref() {
change_hashes.reversed()
} else {
Default::default()
}
}
/// \memberof AMchangeHashes
/// \brief Creates an iterator at the starting position over the same sequence
/// of change hashes as the given one.
///
/// \param[in] change_hashes A pointer to an `AMchangeHashes` struct.
/// \return An `AMchangeHashes` struct
/// \pre \p change_hashes `!= NULL`.
/// \internal
///
/// # Safety
/// change_hashes must be a valid pointer to an AMchangeHashes
#[no_mangle]
pub unsafe extern "C" fn AMchangeHashesRewound(
change_hashes: *const AMchangeHashes,
) -> AMchangeHashes {
if let Some(change_hashes) = change_hashes.as_ref() {
change_hashes.rewound()
} else {
Default::default()
}
}


@@ -1,399 +0,0 @@
use automerge as am;
use std::collections::BTreeMap;
use std::ffi::c_void;
use std::mem::size_of;
use crate::byte_span::AMbyteSpan;
use crate::change::AMchange;
use crate::result::{to_result, AMresult};
#[repr(C)]
struct Detail {
len: usize,
offset: isize,
ptr: *const c_void,
storage: *mut c_void,
}
/// \note cbindgen won't propagate the value of a `std::mem::size_of::<T>()`
///       call (https://github.com/eqrion/cbindgen/issues/252), but it will
///       propagate the name of a constant initialized from it, so if the
///       constant's name is a symbolic representation of the value, it can be
///       converted into a number by post-processing the generated header.
pub const USIZE_USIZE_USIZE_USIZE_: usize = size_of::<Detail>();
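As the note says, cbindgen cannot evaluate `size_of`, so the constant's name spells out its value: four pointer-sized fields. A minimal standalone sketch of the trick, with a hypothetical mirror of the crate's private `Detail` struct:

```rust
use std::ffi::c_void;
use std::mem::size_of;

// Hypothetical mirror of the private `Detail` struct: four
// pointer-sized fields (usize, isize, and two pointers).
#[repr(C)]
struct Detail {
    len: usize,
    offset: isize,
    ptr: *const c_void,
    storage: *mut c_void,
}

// The symbolic name `USIZE_USIZE_USIZE_USIZE_` can be rewritten to a
// concrete number by post-processing the header cbindgen generates.
pub const USIZE_USIZE_USIZE_USIZE_: usize = size_of::<Detail>();

fn main() {
    // Four pointer-sized fields, regardless of platform word size.
    assert_eq!(USIZE_USIZE_USIZE_USIZE_, 4 * size_of::<usize>());
}
```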
impl Detail {
fn new(changes: &[am::Change], offset: isize, storage: &mut BTreeMap<usize, AMchange>) -> Self {
let storage: *mut BTreeMap<usize, AMchange> = storage;
Self {
len: changes.len(),
offset,
ptr: changes.as_ptr() as *const c_void,
storage: storage as *mut c_void,
}
}
pub fn advance(&mut self, n: isize) {
if n == 0 {
return;
}
let len = self.len as isize;
self.offset = if self.offset < 0 {
// It's reversed.
let unclipped = self.offset.checked_sub(n).unwrap_or(isize::MIN);
if unclipped >= 0 {
// Clip it to the forward stop.
len
} else {
std::cmp::min(std::cmp::max(-(len + 1), unclipped), -1)
}
} else {
let unclipped = self.offset.checked_add(n).unwrap_or(isize::MAX);
if unclipped < 0 {
// Clip it to the reverse stop.
-(len + 1)
} else {
std::cmp::max(0, std::cmp::min(unclipped, len))
}
}
}
pub fn get_index(&self) -> usize {
(self.offset
+ if self.offset < 0 {
self.len as isize
} else {
0
}) as usize
}
pub fn next(&mut self, n: isize) -> Option<*const AMchange> {
if self.is_stopped() {
return None;
}
let slice: &mut [am::Change] =
unsafe { std::slice::from_raw_parts_mut(self.ptr as *mut am::Change, self.len) };
let storage = unsafe { &mut *(self.storage as *mut BTreeMap<usize, AMchange>) };
let index = self.get_index();
let value = match storage.get_mut(&index) {
Some(value) => value,
None => {
storage.insert(index, AMchange::new(&mut slice[index]));
storage.get_mut(&index).unwrap()
}
};
self.advance(n);
Some(value)
}
pub fn is_stopped(&self) -> bool {
let len = self.len as isize;
self.offset < -len || self.offset == len
}
pub fn prev(&mut self, n: isize) -> Option<*const AMchange> {
self.advance(-n);
if self.is_stopped() {
return None;
}
let slice: &mut [am::Change] =
unsafe { std::slice::from_raw_parts_mut(self.ptr as *mut am::Change, self.len) };
let storage = unsafe { &mut *(self.storage as *mut BTreeMap<usize, AMchange>) };
let index = self.get_index();
Some(match storage.get_mut(&index) {
Some(value) => value,
None => {
storage.insert(index, AMchange::new(&mut slice[index]));
storage.get_mut(&index).unwrap()
}
})
}
pub fn reversed(&self) -> Self {
Self {
len: self.len,
offset: -(self.offset + 1),
ptr: self.ptr,
storage: self.storage,
}
}
pub fn rewound(&self) -> Self {
Self {
len: self.len,
offset: if self.offset < 0 { -1 } else { 0 },
ptr: self.ptr,
storage: self.storage,
}
}
}
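The clipping in `advance` encodes the iterator's direction in the sign of `offset`: offsets in `[0, len]` walk forward (with `len` as the forward stop), while offsets in `[-(len + 1), -1]` walk the reversed view (with `-1` the reversed start and `-(len + 1)` the reversed stop). A minimal sketch of that signed-offset scheme as free functions mirroring `advance` and `is_stopped` above:

```rust
// Advance a signed offset by `n` positions, clipping at the stops.
// Negative offsets index the reversed view, so moving "forward" there
// subtracts from the offset.
fn advance(offset: isize, len: isize, n: isize) -> isize {
    if n == 0 {
        return offset;
    }
    if offset < 0 {
        let unclipped = offset.checked_sub(n).unwrap_or(isize::MIN);
        if unclipped >= 0 {
            len // crossed zero: clip to the forward stop
        } else {
            unclipped.clamp(-(len + 1), -1)
        }
    } else {
        let unclipped = offset.checked_add(n).unwrap_or(isize::MAX);
        if unclipped < 0 {
            -(len + 1) // crossed zero: clip to the reverse stop
        } else {
            unclipped.clamp(0, len)
        }
    }
}

// The iterator is exhausted at either stop.
fn is_stopped(offset: isize, len: isize) -> bool {
    offset < -len || offset == len
}

fn main() {
    let len = 3;
    // Forward iteration saturates at `len`.
    assert_eq!(advance(0, len, 5), len);
    assert!(is_stopped(len, len));
    // Reversed iteration starts at -1 and saturates at -(len + 1).
    assert_eq!(advance(-1, len, 1), -2);
    assert_eq!(advance(-1, len, 10), -(len + 1));
    assert!(is_stopped(-(len + 1), len));
    assert!(!is_stopped(-1, len));
}
```

This is why `reversed()` maps `offset` to `-(offset + 1)`: it reflects a forward position into the equivalent reversed one.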
impl From<Detail> for [u8; USIZE_USIZE_USIZE_USIZE_] {
fn from(detail: Detail) -> Self {
unsafe {
std::slice::from_raw_parts(
(&detail as *const Detail) as *const u8,
USIZE_USIZE_USIZE_USIZE_,
)
.try_into()
.unwrap()
}
}
}
/// \struct AMchanges
/// \installed_headerfile
/// \brief A random-access iterator over a sequence of changes.
#[repr(C)]
#[derive(Eq, PartialEq)]
pub struct AMchanges {
/// An implementation detail that is intentionally opaque.
/// \warning Modifying \p detail will cause undefined behavior.
    /// \note The actual size of \p detail will vary by platform; this is just
    ///       the one for the platform this documentation was built on.
detail: [u8; USIZE_USIZE_USIZE_USIZE_],
}
impl AMchanges {
pub fn new(changes: &[am::Change], storage: &mut BTreeMap<usize, AMchange>) -> Self {
Self {
detail: Detail::new(changes, 0, &mut *storage).into(),
}
}
pub fn advance(&mut self, n: isize) {
let detail = unsafe { &mut *(self.detail.as_mut_ptr() as *mut Detail) };
detail.advance(n);
}
pub fn len(&self) -> usize {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
detail.len
}
pub fn next(&mut self, n: isize) -> Option<*const AMchange> {
let detail = unsafe { &mut *(self.detail.as_mut_ptr() as *mut Detail) };
detail.next(n)
}
pub fn prev(&mut self, n: isize) -> Option<*const AMchange> {
let detail = unsafe { &mut *(self.detail.as_mut_ptr() as *mut Detail) };
detail.prev(n)
}
pub fn reversed(&self) -> Self {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
Self {
detail: detail.reversed().into(),
}
}
pub fn rewound(&self) -> Self {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
Self {
detail: detail.rewound().into(),
}
}
}
impl AsRef<[am::Change]> for AMchanges {
fn as_ref(&self) -> &[am::Change] {
let detail = unsafe { &*(self.detail.as_ptr() as *const Detail) };
unsafe { std::slice::from_raw_parts(detail.ptr as *const am::Change, detail.len) }
}
}
impl Default for AMchanges {
fn default() -> Self {
Self {
detail: [0; USIZE_USIZE_USIZE_USIZE_],
}
}
}
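`AMchanges` carries its `Detail` across the FFI boundary as a fixed-size byte array, so cbindgen sees only an opaque blob. A minimal sketch of that pack/reborrow round trip, assuming a hypothetical mirror of `Detail` and a `pack` helper (here `read_unaligned` is used because a local byte array, unlike the embedded struct field, has no alignment guarantee):

```rust
use std::ffi::c_void;
use std::mem::size_of;

// Hypothetical mirror of the private `Detail` struct.
#[repr(C)]
struct Detail {
    len: usize,
    offset: isize,
    ptr: *const c_void,
    storage: *mut c_void,
}

const DETAIL_SIZE: usize = size_of::<Detail>();

// Pack a `Detail` into the opaque byte array, as
// `From<Detail> for [u8; USIZE_USIZE_USIZE_USIZE_]` does.
fn pack(detail: Detail) -> [u8; DETAIL_SIZE] {
    unsafe {
        std::slice::from_raw_parts((&detail as *const Detail) as *const u8, DETAIL_SIZE)
            .try_into()
            .unwrap()
    }
}

fn main() {
    let bytes = pack(Detail {
        len: 7,
        offset: -1,
        ptr: std::ptr::null(),
        storage: std::ptr::null_mut(),
    });
    // Reinterpret the bytes as a `Detail` again, as the accessors do.
    let detail = unsafe { std::ptr::read_unaligned(bytes.as_ptr() as *const Detail) };
    assert_eq!(detail.len, 7);
    assert_eq!(detail.offset, -1);
}
```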
/// \memberof AMchanges
/// \brief Advances an iterator over a sequence of changes by at most \p |n|
/// positions where the sign of \p n is relative to the iterator's
/// direction.
///
/// \param[in,out] changes A pointer to an `AMchanges` struct.
/// \param[in] n The direction (\p -n -> opposite, \p n -> same) and maximum
/// number of positions to advance.
/// \pre \p changes `!= NULL`.
/// \internal
///
/// # Safety
/// changes must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesAdvance(changes: *mut AMchanges, n: isize) {
if let Some(changes) = changes.as_mut() {
changes.advance(n);
};
}
/// \memberof AMchanges
/// \brief Tests the equality of two sequences of changes underlying a pair of
/// iterators.
///
/// \param[in] changes1 A pointer to an `AMchanges` struct.
/// \param[in] changes2 A pointer to an `AMchanges` struct.
/// \return `true` if \p changes1 `==` \p changes2 and `false` otherwise.
/// \pre \p changes1 `!= NULL`.
/// \pre \p changes2 `!= NULL`.
/// \internal
///
/// # Safety
/// changes1 must be a valid pointer to an AMchanges
/// changes2 must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesEqual(
changes1: *const AMchanges,
changes2: *const AMchanges,
) -> bool {
match (changes1.as_ref(), changes2.as_ref()) {
(Some(changes1), Some(changes2)) => changes1.as_ref() == changes2.as_ref(),
(None, Some(_)) | (Some(_), None) | (None, None) => false,
}
}
/// \memberof AMchanges
/// \brief Allocates an iterator over a sequence of changes and initializes it
/// from a sequence of byte spans.
///
/// \param[in] src A pointer to an array of `AMbyteSpan` structs.
/// \param[in] count The number of `AMbyteSpan` structs to copy from \p src.
/// \return A pointer to an `AMresult` struct containing an `AMchanges` struct.
/// \pre \p src `!= NULL`.
/// \pre `0 <` \p count `<= sizeof(`\p src`) / sizeof(AMbyteSpan)`.
/// \warning The returned `AMresult` struct must be deallocated with `AMfree()`
/// in order to prevent a memory leak.
/// \internal
///
/// # Safety
/// src must be an AMbyteSpan array of size `>= count`
#[no_mangle]
pub unsafe extern "C" fn AMchangesInit(src: *const AMbyteSpan, count: usize) -> *mut AMresult {
let mut changes = Vec::<am::Change>::new();
for n in 0..count {
let byte_span = &*src.add(n);
let slice = std::slice::from_raw_parts(byte_span.src, byte_span.count);
match slice.try_into() {
Ok(change) => {
changes.push(change);
}
Err(e) => {
return to_result(Err::<Vec<am::Change>, am::LoadChangeError>(e));
}
}
}
to_result(Ok::<Vec<am::Change>, am::LoadChangeError>(changes))
}
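`AMchangesInit` is a fail-fast conversion: decode each span in order and return the first error, discarding any changes already decoded. A minimal sketch of the same pattern using `collect` over `Result`s, with plain byte slices standing in for `AMbyteSpan` and `u32` decoding standing in for change parsing (the name `init_all` is hypothetical):

```rust
use std::array::TryFromSliceError;

// Convert every span or stop at the first failure: collecting an
// iterator of `Result`s into `Result<Vec<_>, _>` short-circuits on
// the first `Err`, just like the explicit loop with an early return.
fn init_all(spans: &[&[u8]]) -> Result<Vec<u32>, TryFromSliceError> {
    spans
        .iter()
        .map(|s| <[u8; 4]>::try_from(*s).map(u32::from_le_bytes))
        .collect()
}

fn main() {
    let a = 1u32.to_le_bytes();
    let b = 2u32.to_le_bytes();
    // All spans valid: every decoded value is returned, in order.
    assert_eq!(init_all(&[&a, &b]).unwrap(), vec![1, 2]);
    // One malformed span poisons the whole batch.
    assert!(init_all(&[&a, &[0u8; 3]]).is_err());
}
```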
/// \memberof AMchanges
/// \brief Gets the change at the current position of an iterator over a
/// sequence of changes and then advances it by at most \p |n| positions
/// where the sign of \p n is relative to the iterator's direction.
///
/// \param[in,out] changes A pointer to an `AMchanges` struct.
/// \param[in] n The direction (\p -n -> opposite, \p n -> same) and maximum
/// number of positions to advance.
/// \return A pointer to an `AMchange` struct that's `NULL` when \p changes was
/// previously advanced past its forward/reverse limit.
/// \pre \p changes `!= NULL`.
/// \internal
///
/// # Safety
/// changes must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesNext(changes: *mut AMchanges, n: isize) -> *const AMchange {
if let Some(changes) = changes.as_mut() {
if let Some(change) = changes.next(n) {
return change;
}
}
std::ptr::null()
}
/// \memberof AMchanges
/// \brief Advances an iterator over a sequence of changes by at most \p |n|
/// positions where the sign of \p n is relative to the iterator's
/// direction and then gets the change at its new position.
///
/// \param[in,out] changes A pointer to an `AMchanges` struct.
/// \param[in] n The direction (\p -n -> opposite, \p n -> same) and maximum
/// number of positions to advance.
/// \return A pointer to an `AMchange` struct that's `NULL` when \p changes is
/// presently advanced past its forward/reverse limit.
/// \pre \p changes `!= NULL`.
/// \internal
///
/// # Safety
/// changes must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesPrev(changes: *mut AMchanges, n: isize) -> *const AMchange {
if let Some(changes) = changes.as_mut() {
if let Some(change) = changes.prev(n) {
return change;
}
}
std::ptr::null()
}
/// \memberof AMchanges
/// \brief Gets the size of the sequence of changes underlying an iterator.
///
/// \param[in] changes A pointer to an `AMchanges` struct.
/// \return The count of values in \p changes.
/// \pre \p changes `!= NULL`.
/// \internal
///
/// # Safety
/// changes must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesSize(changes: *const AMchanges) -> usize {
if let Some(changes) = changes.as_ref() {
changes.len()
} else {
0
}
}
/// \memberof AMchanges
/// \brief Creates an iterator over the same sequence of changes as the given
/// one but with the opposite position and direction.
///
/// \param[in] changes A pointer to an `AMchanges` struct.
/// \return An `AMchanges` struct.
/// \pre \p changes `!= NULL`.
/// \internal
///
/// # Safety
/// changes must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesReversed(changes: *const AMchanges) -> AMchanges {
if let Some(changes) = changes.as_ref() {
changes.reversed()
} else {
Default::default()
}
}
/// \memberof AMchanges
/// \brief Creates an iterator at the starting position over the same sequence
/// of changes as the given one.
///
/// \param[in] changes A pointer to an `AMchanges` struct.
/// \return An `AMchanges` struct.
/// \pre \p changes `!= NULL`.
/// \internal
///
/// # Safety
/// changes must be a valid pointer to an AMchanges
#[no_mangle]
pub unsafe extern "C" fn AMchangesRewound(changes: *const AMchanges) -> AMchanges {
if let Some(changes) = changes.as_ref() {
changes.rewound()
} else {
Default::default()
}
}
