Compare commits: experiment...main

462 commits
[Commit listing: 462 commit SHAs; the author, date, and message columns were not captured in this view.]
498 changed files with 67014 additions and 21476 deletions
**.github/workflows/ci.yaml** (71 changed lines)

```diff
@@ -2,10 +2,10 @@ name: CI
 on:
   push:
     branches:
-      - main
+      - main
   pull_request:
     branches:
-      - main
+      - main
 jobs:
   fmt:
     runs-on: ubuntu-latest
@@ -14,7 +14,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
           components: rustfmt
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/fmt
@@ -27,7 +28,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
           components: clippy
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/lint
@@ -40,9 +42,14 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
           default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/docs
+      - name: Build rust docs
+        run: ./scripts/ci/rust-docs
+        shell: bash
+      - name: Install doxygen
+        run: sudo apt-get install -y doxygen
+        shell: bash

   cargo-deny:
@@ -57,23 +64,50 @@ jobs:
       - uses: actions/checkout@v2
       - uses: EmbarkStudios/cargo-deny-action@v1
         with:
+          arguments: '--manifest-path ./rust/Cargo.toml'
           command: check ${{ matrix.checks }}

+  wasm_tests:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: Install wasm-pack
+        run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
+      - name: Install wasm-bindgen-cli
+        run: cargo install wasm-bindgen-cli wasm-opt
+      - name: Install wasm32 target
+        run: rustup target add wasm32-unknown-unknown
+      - name: run tests
+        run: ./scripts/ci/wasm_tests
+  deno_tests:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: denoland/setup-deno@v1
+        with:
+          deno-version: v1.x
+      - name: Install wasm-bindgen-cli
+        run: cargo install wasm-bindgen-cli wasm-opt
+      - name: Install wasm32 target
+        run: rustup target add wasm32-unknown-unknown
+      - name: run tests
+        run: ./scripts/ci/deno_tests
+
+  js_fmt:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: install
+        run: yarn global add prettier
+      - name: format
+        run: prettier -c javascript/.prettierrc javascript
+
+  js_tests:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: Install wasm-pack
+        run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
+      - name: Install wasm-bindgen-cli
+        run: cargo install wasm-bindgen-cli wasm-opt
+      - name: Install wasm32 target
+        run: rustup target add wasm32-unknown-unknown
+      - name: run tests
+        run: ./scripts/ci/js_tests

@@ -84,7 +118,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: nightly-2023-01-26
+          default: true
      - uses: Swatinem/rust-cache@v1
      - name: Install CMocka
        run: sudo apt-get install -y libcmocka-dev
@@ -92,6 +127,8 @@ jobs:
        uses: jwlawson/actions-setup-cmake@v1.12
        with:
          cmake-version: latest
+      - name: Install rust-src
+        run: rustup component add rust-src
      - name: Build and test C bindings
        run: ./scripts/ci/cmake-build Release Static
        shell: bash
@@ -101,15 +138,14 @@ jobs:
     strategy:
       matrix:
         toolchain:
-          - stable
-          - nightly
-    continue-on-error: ${{ matrix.toolchain == 'nightly' }}
+          - 1.67.0
     steps:
       - uses: actions/checkout@v2
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
           toolchain: ${{ matrix.toolchain }}
           default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/build-test
         shell: bash
@@ -121,7 +157,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
+          default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/build-test
         shell: bash
@@ -133,8 +170,8 @@ jobs:
       - uses: actions-rs/toolchain@v1
         with:
           profile: minimal
-          toolchain: stable
+          toolchain: 1.67.0
           default: true
       - uses: Swatinem/rust-cache@v1
       - run: ./scripts/ci/build-test
         shell: bash
```
**.github/workflows/docs.yaml** (18 changed lines)

```diff
@@ -23,22 +23,30 @@ jobs:
        uses: Swatinem/rust-cache@v1

+     - name: Clean docs dir
+       run: rm -rf docs
+       shell: bash
+
      - name: Clean Rust docs dir
        uses: actions-rs/cargo@v1
        with:
          command: clean
-         args: --doc
+         args: --manifest-path ./rust/Cargo.toml --doc

-     - name: Build docs
+     - name: Build Rust docs
        uses: actions-rs/cargo@v1
        with:
          command: doc
-         args: --workspace --all-features --no-deps
+         args: --manifest-path ./rust/Cargo.toml --workspace --all-features --no-deps

+     - name: Move Rust docs
+       run: mkdir -p docs && mv rust/target/doc/* docs/.
+       shell: bash
+
      - name: Configure root page
-       run: echo '<meta http-equiv="refresh" content="0; url=automerge">' > target/doc/index.html
+       run: echo '<meta http-equiv="refresh" content="0; url=automerge">' > docs/index.html

      - name: Deploy docs
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
-         publish_dir: ./target/doc
+         publish_dir: ./docs
```
**.github/workflows/release.yaml** (new file, 214 lines)

```yaml
name: Release
on:
  push:
    branches:
      - main

jobs:
  check_if_wasm_version_upgraded:
    name: Check if WASM version has been upgraded
    runs-on: ubuntu-latest
    outputs:
      wasm_version: ${{ steps.version-updated.outputs.current-package-version }}
      wasm_has_updated: ${{ steps.version-updated.outputs.has-updated }}
    steps:
      - uses: JiPaix/package-json-updated-action@v1.0.5
        id: version-updated
        with:
          path: rust/automerge-wasm/package.json
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  publish-wasm:
    name: Publish WASM package
    runs-on: ubuntu-latest
    needs:
      - check_if_wasm_version_upgraded
    # We create release only if the version in the package.json has been upgraded
    if: needs.check_if_wasm_version_upgraded.outputs.wasm_has_updated == 'true'
    steps:
      - uses: actions/setup-node@v3
        with:
          node-version: '16.x'
          registry-url: 'https://registry.npmjs.org'
      - uses: denoland/setup-deno@v1
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
          ref: ${{ github.ref }}
      - name: Get rid of local github workflows
        run: rm -r .github/workflows
      - name: Remove tmp_branch if it exists
        run: git push origin :tmp_branch || true
      - run: git checkout -b tmp_branch
      - name: Install wasm-bindgen-cli
        run: cargo install wasm-bindgen-cli wasm-opt
      - name: Install wasm32 target
        run: rustup target add wasm32-unknown-unknown
      - name: run wasm js tests
        id: wasm_js_tests
        run: ./scripts/ci/wasm_tests
      - name: run wasm deno tests
        id: wasm_deno_tests
        run: ./scripts/ci/deno_tests
      - name: build release
        id: build_release
        run: |
          npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm run release
      - name: Collate deno release files
        if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
        run: |
          mkdir $GITHUB_WORKSPACE/deno_wasm_dist
          cp $GITHUB_WORKSPACE/rust/automerge-wasm/deno/* $GITHUB_WORKSPACE/deno_wasm_dist
          cp $GITHUB_WORKSPACE/rust/automerge-wasm/index.d.ts $GITHUB_WORKSPACE/deno_wasm_dist
          cp $GITHUB_WORKSPACE/rust/automerge-wasm/README.md $GITHUB_WORKSPACE/deno_wasm_dist
          cp $GITHUB_WORKSPACE/rust/automerge-wasm/LICENSE $GITHUB_WORKSPACE/deno_wasm_dist
          sed -i '1i /// <reference types="./index.d.ts" />' $GITHUB_WORKSPACE/deno_wasm_dist/automerge_wasm.js
      - name: Create npm release
        if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
        run: |
          if [ "$(npm --prefix $GITHUB_WORKSPACE/rust/automerge-wasm show . version)" = "$VERSION" ]; then
            echo "This version is already published"
            exit 0
          fi
          EXTRA_ARGS="--access public"
          if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
            echo "Is pre-release version"
            EXTRA_ARGS="$EXTRA_ARGS --tag next"
          fi
          if [ "$NODE_AUTH_TOKEN" = "" ]; then
            echo "Can't publish on NPM, You need a NPM_TOKEN secret."
            false
          fi
          npm publish $GITHUB_WORKSPACE/rust/automerge-wasm $EXTRA_ARGS
        env:
          NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
          VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
      - name: Commit wasm deno release files
        run: |
          git config --global user.name "actions"
          git config --global user.email actions@github.com
          git add $GITHUB_WORKSPACE/deno_wasm_dist
          git commit -am "Add deno release files"
          git push origin tmp_branch
      - name: Tag wasm release
        if: steps.wasm_js_tests.outcome == 'success' && steps.wasm_deno_tests.outcome == 'success'
        uses: softprops/action-gh-release@v1
        with:
          name: Automerge Wasm v${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
          tag_name: js/automerge-wasm-${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
          target_commitish: tmp_branch
          generate_release_notes: false
          draft: false
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Remove tmp_branch
        run: git push origin :tmp_branch
  check_if_js_version_upgraded:
    name: Check if JS version has been upgraded
    runs-on: ubuntu-latest
    outputs:
      js_version: ${{ steps.version-updated.outputs.current-package-version }}
      js_has_updated: ${{ steps.version-updated.outputs.has-updated }}
    steps:
      - uses: JiPaix/package-json-updated-action@v1.0.5
        id: version-updated
        with:
          path: javascript/package.json
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  publish-js:
    name: Publish JS package
    runs-on: ubuntu-latest
    needs:
      - check_if_js_version_upgraded
      - check_if_wasm_version_upgraded
      - publish-wasm
    # We create release only if the version in the package.json has been upgraded and after the WASM release
    if: |
      (always() && ! cancelled()) &&
      (needs.publish-wasm.result == 'success' || needs.publish-wasm.result == 'skipped') &&
      needs.check_if_js_version_upgraded.outputs.js_has_updated == 'true'
    steps:
      - uses: actions/setup-node@v3
        with:
          node-version: '16.x'
          registry-url: 'https://registry.npmjs.org'
      - uses: denoland/setup-deno@v1
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
          ref: ${{ github.ref }}
      - name: Get rid of local github workflows
        run: rm -r .github/workflows
      - name: Remove js_tmp_branch if it exists
        run: git push origin :js_tmp_branch || true
      - run: git checkout -b js_tmp_branch
      - name: check js formatting
        run: |
          yarn global add prettier
          prettier -c javascript/.prettierrc javascript
      - name: run js tests
        id: js_tests
        run: |
          cargo install wasm-bindgen-cli wasm-opt
          rustup target add wasm32-unknown-unknown
          ./scripts/ci/js_tests
      - name: build js release
        id: build_release
        run: |
          npm --prefix $GITHUB_WORKSPACE/javascript run build
      - name: build js deno release
        id: build_deno_release
        run: |
          VERSION=$WASM_VERSION npm --prefix $GITHUB_WORKSPACE/javascript run deno:build
        env:
          WASM_VERSION: ${{ needs.check_if_wasm_version_upgraded.outputs.wasm_version }}
      - name: run deno tests
        id: deno_tests
        run: |
          npm --prefix $GITHUB_WORKSPACE/javascript run deno:test
      - name: Collate deno release files
        if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
        run: |
          mkdir $GITHUB_WORKSPACE/deno_js_dist
          cp $GITHUB_WORKSPACE/javascript/deno_dist/* $GITHUB_WORKSPACE/deno_js_dist
      - name: Create npm release
        if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
        run: |
          if [ "$(npm --prefix $GITHUB_WORKSPACE/javascript show . version)" = "$VERSION" ]; then
            echo "This version is already published"
            exit 0
          fi
          EXTRA_ARGS="--access public"
          if [[ $VERSION == *"alpha."* ]] || [[ $VERSION == *"beta."* ]] || [[ $VERSION == *"rc."* ]]; then
            echo "Is pre-release version"
            EXTRA_ARGS="$EXTRA_ARGS --tag next"
          fi
          if [ "$NODE_AUTH_TOKEN" = "" ]; then
            echo "Can't publish on NPM, You need a NPM_TOKEN secret."
            false
          fi
          npm publish $GITHUB_WORKSPACE/javascript $EXTRA_ARGS
        env:
          NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
          VERSION: ${{ needs.check_if_js_version_upgraded.outputs.js_version }}
      - name: Commit js deno release files
        run: |
          git config --global user.name "actions"
          git config --global user.email actions@github.com
          git add $GITHUB_WORKSPACE/deno_js_dist
          git commit -am "Add deno js release files"
          git push origin js_tmp_branch
      - name: Tag JS release
        if: steps.js_tests.outcome == 'success' && steps.deno_tests.outcome == 'success'
        uses: softprops/action-gh-release@v1
        with:
          name: Automerge v${{ needs.check_if_js_version_upgraded.outputs.js_version }}
          tag_name: js/automerge-${{ needs.check_if_js_version_upgraded.outputs.js_version }}
          target_commitish: js_tmp_branch
          generate_release_notes: false
          draft: false
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Remove js_tmp_branch
        run: git push origin :js_tmp_branch
```
**.gitignore** (3 changed lines)

```diff
@@ -1,5 +1,6 @@
-/target
+/.direnv
 perf.*
 /Cargo.lock
 build/
 .vim/*
+/target
```
**Makefile** (deleted, 13 lines)

```makefile
rust:
	cd automerge && cargo test

wasm:
	cd automerge-wasm && yarn
	cd automerge-wasm && yarn build
	cd automerge-wasm && yarn test
	cd automerge-wasm && yarn link

js: wasm
	cd automerge-js && yarn
	cd automerge-js && yarn link "automerge-wasm"
	cd automerge-js && yarn test
```
**README.md** (199 changed lines)

````diff
@@ -1,110 +1,147 @@
-# Automerge RS
+# Automerge

 <img src='./img/sign.svg' width='500' alt='Automerge logo' />

 [](https://automerge.org/)
 [](https://automerge.org/automerge-rs/automerge/)
 [](https://github.com/automerge/automerge-rs/actions/workflows/ci.yaml)
 [](https://github.com/automerge/automerge-rs/actions/workflows/docs.yaml)

-This is a rust implementation of the [Automerge](https://github.com/automerge/automerge) file format and network protocol.
+Automerge is a library which provides fast implementations of several different
+CRDTs, a compact compression format for these CRDTs, and a sync protocol for
+efficiently transmitting those changes over the network. The objective of the
+project is to support [local-first](https://www.inkandswitch.com/local-first/) applications in the same way that relational
+databases support server applications - by providing mechanisms for persistence
+which allow application developers to avoid thinking about hard distributed
+computing problems. Automerge aims to be PostgreSQL for your local-first app.

-If you are looking for the origional `automerge-rs` project that can be used as a wasm backend to the javascript implementation, it can be found [here](https://github.com/automerge/automerge-rs/tree/automerge-1.0).
+If you're looking for documentation on the JavaScript implementation take a look
+at https://automerge.org/docs/hello/. There are other implementations in both
+Rust and C, but they are earlier and don't have documentation yet. You can find
+them in `rust/automerge` and `rust/automerge-c` if you are comfortable
+reading the code and tests to figure out how to use them.
+
+If you're familiar with CRDTs and interested in the design of Automerge in
+particular take a look at https://automerge.org/docs/how-it-works/backend/
+
+Finally, if you want to talk to us about this project please [join the
+Slack](https://join.slack.com/t/automerge/shared_invite/zt-e4p3760n-kKh7r3KRH1YwwNfiZM8ktw)

 ## Status

-This project has 4 components:
+This project is formed of a core Rust implementation which is exposed via FFI in
+javascript+WASM, C, and soon other languages. Alex
+([@alexjg](https://github.com/alexjg/)]) is working full time on maintaining
+automerge, other members of Ink and Switch are also contributing time and there
+are several other maintainers. The focus is currently on shipping the new JS
+package. We expect to be iterating the API and adding new features over the next
+six months so there will likely be several major version bumps in all packages
+in that time.

-1. _automerge_ - a rust implementation of the library. This project is the most mature and being used in a handful of small applications.
-2. _automerge-wasm_ - a js/wasm interface to the underlying rust library. This api is generally mature and in use in a handful of projects as well.
-3. _automerge-js_ - this is a javascript library using the wasm interface to export the same public api of the primary automerge project. Currently this project passes all of automerge's tests but has not been used in any real project or packaged as an NPM. Alpha testers welcome.
-4. _automerge-c_ - this is a c library intended to be an ffi integration point for all other languages. It is currently a work in progress and not yet ready for any testing.
+In general we try and respect semver.

-## How?
+### JavaScript

-The current iteration of automerge-rs is complicated to work with because it
-adopts the frontend/backend split architecture of the JS implementation. This
-architecture was necessary due to basic operations on the automerge opset being
-too slow to perform on the UI thread. Recently @orionz has been able to improve
-the performance to the point where the split is no longer necessary. This means
-we can adopt a much simpler mutable API.
+A stable release of the javascript package is currently available as
+`@automerge/automerge@2.0.0` where. pre-release verisions of the `2.0.1` are
+available as `2.0.1-alpha.n`. `2.0.1*` packages are also available for Deno at
+https://deno.land/x/automerge

-The architecture is now built around the `OpTree`. This is a data structure
-which supports efficiently inserting new operations and realising values of
-existing operations. Most interactions with the `OpTree` are in the form of
-implementations of `TreeQuery` - a trait which can be used to traverse the
-optree and producing state of some kind. User facing operations are exposed on
-an `Automerge` object, under the covers these operations typically instantiate
-some `TreeQuery` and run it over the `OpTree`.
+### Rust

-## Development
+The rust codebase is currently oriented around producing a performant backend
+for the Javascript wrapper and as such the API for Rust code is low level and
+not well documented. We will be returning to this over the next few months but
+for now you will need to be comfortable reading the tests and asking questions
+to figure out how to use it. If you are looking to build rust applications which
+use automerge you may want to look into
+[autosurgeon](https://github.com/alexjg/autosurgeon)

-Please feel free to open issues and pull requests.
+## Repository Organisation

-### Running CI
+- `./rust` - the rust rust implementation and also the Rust components of
+  platform specific wrappers (e.g. `automerge-wasm` for the WASM API or
+  `automerge-c` for the C FFI bindings)
+- `./javascript` - The javascript library which uses `automerge-wasm`
+  internally but presents a more idiomatic javascript interface
+- `./scripts` - scripts which are useful to maintenance of the repository.
+  This includes the scripts which are run in CI.
+- `./img` - static assets for use in `.md` files

-The steps CI will run are all defined in `./scripts/ci`. Obviously CI will run
-everything when you submit a PR, but if you want to run everything locally
-before you push you can run `./scripts/ci/run` to run everything.
+## Building

-### Running the JS tests
+To build this codebase you will need:

-You will need to have [node](https://nodejs.org/en/), [yarn](https://yarnpkg.com/getting-started/install), [rust](https://rustup.rs/) and [wasm-pack](https://rustwasm.github.io/wasm-pack/installer/) installed.
+- `rust`
+- `node`
+- `yarn`
+- `cmake`
+- `cmocka`

-To build and test the rust library:
+You will also need to install the following with `cargo install`

-```shell
-$ cd automerge
-$ cargo test
-```
+- `wasm-bindgen-cli`
+- `wasm-opt`
+- `cargo-deny`
+
+And ensure you have added the `wasm32-unknown-unknown` target for rust cross-compilation.
+
+The various subprojects (the rust code, the wrapper projects) have their own
+build instructions, but to run the tests that will be run in CI you can run
+`./scripts/ci/run`.
+
+### For macOS
+
+These instructions worked to build locally on macOS 13.1 (arm64) as of
+Nov 29th 2022.
+
+```bash
+# clone the repo
+git clone https://github.com/automerge/automerge-rs
+cd automerge-rs
+
+# install rustup
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+
+# install homebrew
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+
+# install cmake, node, cmocka
+brew install cmake node cmocka
+
+# install yarn
+npm install --global yarn
+
+# install javascript dependencies
+yarn --cwd ./javascript
+
+# install rust dependencies
+cargo install wasm-bindgen-cli wasm-opt cargo-deny
+
+# get nightly rust to produce optimized automerge-c builds
+rustup toolchain install nightly
+rustup component add rust-src --toolchain nightly
+
+# add wasm target in addition to current architecture
+rustup target add wasm32-unknown-unknown
+
+# Run ci script
+./scripts/ci/run
+```

-To build and test the wasm library:
+If your build fails to find `cmocka.h` you may need to teach it about homebrew's
+installation location:

-```shell
-## setup
-$ cd automerge-wasm
-$ yarn
-
-## building or testing
-$ yarn build
-$ yarn test
-
-## without this the js library wont automatically use changes
-$ yarn link
-
-## cutting a release or doing benchmarking
-$ yarn release
-```
+```
+export CPATH=/opt/homebrew/include
+export LIBRARY_PATH=/opt/homebrew/lib
+./scripts/ci/run
+```

-To test the js library. This is where most of the tests reside.
+## Contributing

-```shell
-## setup
-$ cd automerge-js
-$ yarn
-$ yarn link "automerge-wasm"
-
-## testing
-$ yarn test
-```
-
-And finally, to build and test the C bindings with CMake:
-
-```shell
-## setup
-$ cd automerge-c
-$ mkdir -p build
-$ cd build
-$ cmake -S .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
-## building and testing
-$ cmake --build .
-```
-To add debugging symbols, replace `Release` with `Debug`.
-To build a shared library instead of a static one, replace `OFF` with `ON`.
-
-The C bindings can be built and tested on any platform for which CMake is
-available but the steps for doing so vary across platforms and are too numerous
-to list here.
-
-## Benchmarking
-
-The `edit-trace` folder has the main code for running the edit trace benchmarking.
+Please try and split your changes up into relatively independent commits which
+change one subsystem at a time and add good commit messages which describe what
+the change is and why you're making it (err on the side of longer commit
+messages). `git blame` should give future maintainers a good idea of why
+something is the way it is.
````
**TODO.md** (deleted, 32 lines)

```markdown
### next steps:
1. C API
2. port rust command line tool
3. fast load

### ergonomics:
1. value() -> () or something that into's a value

### automerge:
1. single pass (fast) load
2. micro-patches / bare bones observation API / fully hydrated documents

### future:
1. handle columns with unknown data in and out
2. branches with different indexes

### Peritext
1. add mark / remove mark -- type, start/end elemid (inclusive,exclusive)
2. track any formatting ops that start or end on a character
3. ops right before the character, ops right after that character
4. query a single character - character, plus marks that start or end on that character
   what is its current formatting,
   what are the ops that include that in their span,
   None = same as last time, Set( bold, italic ),
   keep these on index
5. op probably belongs with the start character - possible packed at the beginning or end of the list

### maybe:
1. tables

### no:
1. cursors
```
**automerge-c/.gitignore** (deleted, 3 lines)

```
automerge
automerge.h
automerge.o
```
**Deleted file** (135 lines; the path was not captured in this view, but the content is automerge-c's top-level CMake build script)

```cmake
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)

set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")

# Parse the library name, project name and project version out of Cargo's TOML file.
set(CARGO_LIB_SECTION OFF)

set(LIBRARY_NAME "")

set(CARGO_PKG_SECTION OFF)

set(CARGO_PKG_NAME "")

set(CARGO_PKG_VERSION "")

file(READ Cargo.toml TOML_STRING)

string(REPLACE ";" "\\\\;" TOML_STRING "${TOML_STRING}")

string(REPLACE "\n" ";" TOML_LINES "${TOML_STRING}")

foreach(TOML_LINE IN ITEMS ${TOML_LINES})
    string(REGEX MATCH "^\\[(lib|package)\\]$" _ ${TOML_LINE})

    if(CMAKE_MATCH_1 STREQUAL "lib")
        set(CARGO_LIB_SECTION ON)

        set(CARGO_PKG_SECTION OFF)
    elseif(CMAKE_MATCH_1 STREQUAL "package")
        set(CARGO_LIB_SECTION OFF)

        set(CARGO_PKG_SECTION ON)
    endif()

    string(REGEX MATCH "^name += +\"([^\"]+)\"$" _ ${TOML_LINE})

    if(CMAKE_MATCH_1 AND (CARGO_LIB_SECTION AND NOT CARGO_PKG_SECTION))
        set(LIBRARY_NAME "${CMAKE_MATCH_1}")
    elseif(CMAKE_MATCH_1 AND (NOT CARGO_LIB_SECTION AND CARGO_PKG_SECTION))
        set(CARGO_PKG_NAME "${CMAKE_MATCH_1}")
    endif()

    string(REGEX MATCH "^version += +\"([^\"]+)\"$" _ ${TOML_LINE})

    if(CMAKE_MATCH_1 AND CARGO_PKG_SECTION)
        set(CARGO_PKG_VERSION "${CMAKE_MATCH_1}")
    endif()

    if(LIBRARY_NAME AND (CARGO_PKG_NAME AND CARGO_PKG_VERSION))
        break()
    endif()
endforeach()

project(${CARGO_PKG_NAME} VERSION ${CARGO_PKG_VERSION} LANGUAGES C DESCRIPTION "C bindings for the Automerge Rust backend.")

include(CTest)

option(BUILD_SHARED_LIBS "Enable the choice of a shared or static library.")

include(CMakePackageConfigHelpers)

include(GNUInstallDirs)

string(MAKE_C_IDENTIFIER ${PROJECT_NAME} SYMBOL_PREFIX)

string(TOUPPER ${SYMBOL_PREFIX} SYMBOL_PREFIX)

set(CARGO_TARGET_DIR "${CMAKE_CURRENT_BINARY_DIR}/Cargo/target")

add_subdirectory(src)

# Generate and install the configuration header.
math(EXPR INTEGER_PROJECT_VERSION_MAJOR "${PROJECT_VERSION_MAJOR} * 100000")

math(EXPR INTEGER_PROJECT_VERSION_MINOR "${PROJECT_VERSION_MINOR} * 100")

math(EXPR INTEGER_PROJECT_VERSION_PATCH "${PROJECT_VERSION_PATCH}")

math(EXPR INTEGER_PROJECT_VERSION "${INTEGER_PROJECT_VERSION_MAJOR} + ${INTEGER_PROJECT_VERSION_MINOR} + ${INTEGER_PROJECT_VERSION_PATCH}")

configure_file(
    ${CMAKE_MODULE_PATH}/config.h.in
    config.h
    @ONLY
    NEWLINE_STYLE LF
)

install(
    FILES ${CMAKE_BINARY_DIR}/config.h
    DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)

if(BUILD_TESTING)
    add_subdirectory(test)

    enable_testing()
endif()

# Generate and install .cmake files
set(PROJECT_CONFIG_NAME "${PROJECT_NAME}-config")

set(PROJECT_CONFIG_VERSION_NAME "${PROJECT_CONFIG_NAME}-version")

write_basic_package_version_file(
    ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
    VERSION ${PROJECT_VERSION}
    COMPATIBILITY ExactVersion
)

# The namespace label starts with the title-cased library name.
string(SUBSTRING ${LIBRARY_NAME} 0 1 NS_FIRST)

string(SUBSTRING ${LIBRARY_NAME} 1 -1 NS_REST)

string(TOUPPER ${NS_FIRST} NS_FIRST)

string(TOLOWER ${NS_REST} NS_REST)

string(CONCAT NAMESPACE ${NS_FIRST} ${NS_REST} "::")

# \note CMake doesn't automate the exporting of an imported library's targets
#       so the package configuration script must do it.
configure_package_config_file(
    ${CMAKE_MODULE_PATH}/${PROJECT_CONFIG_NAME}.cmake.in
    ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
    INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)

install(
    FILES
        ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
        ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
    DESTINATION
        ${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)
```
**Deleted file** (30 lines; the path was not captured in this view, but the content is the C bindings' legacy Makefile)

```makefile
CC=gcc
CFLAGS=-I.
DEPS=automerge.h
LIBS=-lpthread -ldl -lm
LDIR=../target/release
LIB=../target/release/libautomerge.a
DEBUG_LIB=../target/debug/libautomerge.a

all: $(DEBUG_LIB) automerge

debug: LDIR=../target/debug
debug: automerge $(DEBUG_LIB)

automerge: automerge.o $(LDIR)/libautomerge.a
	$(CC) -o $@ automerge.o $(LDIR)/libautomerge.a $(LIBS) -L$(LDIR)

$(DEBUG_LIB): src/*.rs
	cargo build

$(LIB): src/*.rs
	cargo build --release

%.o: %.c $(DEPS)
	$(CC) -c -o $@ $< $(CFLAGS)

.PHONY: clean

clean:
	rm -f *.o automerge $(LIB) $(DEBUG_LIB)
```
**Deleted file** (95 lines; the path was not captured in this view, but the content is the design notes for the automerge-c API surface)

````markdown
## Methods we need to support

### Basic management

1. `AMcreate()`
1. `AMclone(doc)`
1. `AMfree(doc)`
1. `AMconfig(doc, key, val)` // set actor
1. `actor = get_actor(doc)`

### Transactions

1. `AMpendingOps(doc)`
1. `AMcommit(doc, message, time)`
1. `AMrollback(doc)`

### Write

1. `AMset{Map|List}(doc, obj, prop, value)`
1. `AMinsert(doc, obj, index, value)`
1. `AMpush(doc, obj, value)`
1. `AMdel{Map|List}(doc, obj, prop)`
1. `AMinc{Map|List}(doc, obj, prop, value)`
1. `AMspliceText(doc, obj, start, num_del, text)`

### Read

1. `AMkeys(doc, obj, heads)`
1. `AMlength(doc, obj, heads)`
1. `AMvalues(doc, obj, heads)`
1. `AMtext(doc, obj, heads)`

### Sync

1. `AMgenerateSyncMessage(doc, state)`
1. `AMreceiveSyncMessage(doc, state, message)`
1. `AMinitSyncState()`

### Save / Load

1. `AMload(data)`
1. `AMloadIncremental(doc, data)`
1. `AMsave(doc)`
1. `AMsaveIncremental(doc)`

### Low Level Access

1. `AMapplyChanges(doc, changes)`
1. `AMgetChanges(doc, deps)`
1. `AMgetChangesAdded(doc1, doc2)`
1. `AMgetHeads(doc)`
1. `AMgetLastLocalChange(doc)`
1. `AMgetMissingDeps(doc, heads)`

### Encode/Decode

1. `AMencodeChange(change)`
1. `AMdecodeChange(change)`
1. `AMencodeSyncMessage(change)`
1. `AMdecodeSyncMessage(change)`
1. `AMencodeSyncState(change)`
1. `AMdecodeSyncState(change)`

## Open Question - Memory management

Most of these calls return one or more items of arbitrary length. Doing memory management in C is tricky. This is my proposed solution...

```
// returns 1 or zero opids
n = automerge_set(doc, "_root", "hello", datatype, value);
if (n) {
  automerge_pop(doc, &obj, len);
}

// returns n values
n = automerge_values(doc, "_root", "hello");
for (i = 0; i < n; i++) {
  automerge_pop_value(doc, &value, &datatype, len);
}
```

There would be one pop method per object type. Users alloc and free the buffers. Multiple return values would result in multiple pops. Too-small buffers would error and allow retry.

### Formats

Actors - We could do (bytes,len) or a hex encoded string?
ObjIds - We could do flat bytes of the ExId struct but lets do human readable strings for now - the struct would be faster but opaque
Heads - Might as well make it a flat buffer `(n, hash, hash, ...)`
Changes - Put them all in a flat concatenated buffer
Encode/Decode - to json strings?
````
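To make the buffer contract in the proposal above concrete, here is a minimal self-contained sketch of the grow-and-retry loop it implies. The `automerge_set` and `automerge_pop` functions below are stubs standing in for the proposal's hypothetical API; they are not part of any shipped automerge-c interface, and the opid value is made up.

```c
/* Self-contained mock of the pop-based API proposed above. The stubs stand in
 * for the proposal's hypothetical functions; they are NOT a real automerge-c
 * interface. The point: the caller owns the buffer, and a too-small buffer is
 * a recoverable error that allows a grow-and-retry, not a silent truncation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define AM_ERR_BUFFER_TOO_SMALL (-1)

static const char *pending_opid = NULL; /* stub: one staged result */

/* Stub: stage a result and report that one opid is ready to pop. */
static int automerge_set(const char *obj, const char *key, const char *value) {
    (void)obj; (void)key; (void)value;
    pending_opid = "1@aabbcc"; /* made-up opid */
    return 1;
}

/* Stub: copy the staged result out if the caller's buffer is big enough. */
static int automerge_pop(char *buf, size_t len) {
    if (strlen(pending_opid) + 1 > len) {
        return AM_ERR_BUFFER_TOO_SMALL; /* caller should grow and retry */
    }
    strcpy(buf, pending_opid);
    return 0;
}

int main(void) {
    if (automerge_set("_root", "hello", "world") > 0) {
        size_t len = 4; /* deliberately too small, to exercise the retry */
        char *buf = malloc(len);
        while (buf && automerge_pop(buf, len) == AM_ERR_BUFFER_TOO_SMALL) {
            len *= 2;
            buf = realloc(buf, len);
        }
        if (buf) {
            printf("opid: %s\n", buf);
            free(buf);
        }
    }
    return 0;
}
```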
**Deleted file** (36 lines; the path was not captured in this view, but the content is a small C smoke test for the bindings)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include "automerge.h"

#define MAX_BUFF_SIZE 4096

int main() {
  int n = 0;
  int data_type = 0;
  char buff[MAX_BUFF_SIZE];
  char obj[MAX_BUFF_SIZE];
  AMresult* res = NULL;

  printf("begin\n");

  AMdoc* doc = AMcreate();

  printf("AMconfig()...");
  AMconfig(doc, "actor", "aabbcc");
  printf("pass!\n");

  printf("AMmapSetStr()...\n");
  res = AMmapSetStr(doc, NULL, "string", "hello world");
  if (AMresultStatus(res) != AM_STATUS_COMMAND_OK)
  {
    printf("AMmapSet() failed: %s\n", AMerrorMessage(res));
    return 1;
  }
  AMclear(res);
  printf("pass!\n");

  AMdestroy(doc);
  printf("end\n");
}
```
**Deleted file** (25 lines; the path was not captured in this view, but the content is the cbindgen build script for the C header)

```rust
extern crate cbindgen;

use std::{env, path::PathBuf};

fn main() {
    let crate_dir = PathBuf::from(
        env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR env var is not defined"),
    );

    let config = cbindgen::Config::from_file("cbindgen.toml")
        .expect("Unable to find cbindgen.toml configuration file");

    // let mut config: cbindgen::Config = Default::default();
    // config.language = cbindgen::Language::C;

    if let Ok(writer) = cbindgen::generate_with_config(&crate_dir, config) {
        writer.write_to_file(crate_dir.join("automerge.h"));

        // Also write the generated header into the target directory when
        // specified (necessary for an out-of-source build a la CMake).
        if let Ok(target_dir) = env::var("CARGO_TARGET_DIR") {
            writer.write_to_file(PathBuf::from(target_dir).join("automerge.h"));
        }
    }
}
```
**Deleted file** (39 lines; the path was not captured in this view, but the content is the cbindgen configuration)

```toml
after_includes = """\n
/**
 * \\defgroup enumerations Public Enumerations
          Symbolic names for integer constants.
 */

/**
 * \\memberof AMdoc
 * \\def AM_ROOT
 * \\brief The root object of an `AMdoc` struct.
 */
#define AM_ROOT NULL
"""
autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
documentation = true
documentation_style = "doxy"
header = """
/** \\file
 * All constants, functions and types in the Automerge library's C API.
 */
"""
include_guard = "automerge_h"
includes = []
language = "C"
line_length = 140
no_includes = true
style = "both"
sys_includes = ["stdbool.h", "stddef.h", "stdint.h"]
usize_is_size_t = true

[enum]
derive_const_casts = true
enum_class = true
must_use = "MUST_USE_ENUM"
prefix_with_name = true
rename_variants = "ScreamingSnakeCase"

[export]
item_types = ["enums", "structs", "opaque", "constants", "functions"]
```
**Deleted file** (14 lines; the path was not captured in this view, but the content is the CMake-generated version header template)

```c
#ifndef @SYMBOL_PREFIX@_CONFIG_INCLUDED
#define @SYMBOL_PREFIX@_CONFIG_INCLUDED

/* This header is auto-generated by CMake. */

#define @SYMBOL_PREFIX@_VERSION @INTEGER_PROJECT_VERSION@

#define @SYMBOL_PREFIX@_MAJOR_VERSION (@SYMBOL_PREFIX@_VERSION / 100000)

#define @SYMBOL_PREFIX@_MINOR_VERSION ((@SYMBOL_PREFIX@_VERSION / 100) % 1000)

#define @SYMBOL_PREFIX@_PATCH_VERSION (@SYMBOL_PREFIX@_VERSION % 100)

#endif /* @SYMBOL_PREFIX@_CONFIG_INCLUDED */
```
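This template packs the three-part version into a single integer using the `math()` expressions in the CMake build script earlier in this section (major × 100000 + minor × 100 + patch), and the three macros unpack it again. A small standalone check of that round trip; the packed constant below is a made-up example, not a real automerge-c version:

```c
/* Round-trip check for the version packing used by the deleted header:
 * packed = major * 100000 + minor * 100 + patch, so e.g. 1.2.3 -> 100203.
 * The constant is a made-up example value, not a real release number. */
#include <assert.h>

#define EXAMPLE_VERSION (1 * 100000 + 2 * 100 + 3) /* 100203 */

#define EXAMPLE_MAJOR_VERSION (EXAMPLE_VERSION / 100000)
#define EXAMPLE_MINOR_VERSION ((EXAMPLE_VERSION / 100) % 1000)
#define EXAMPLE_PATCH_VERSION (EXAMPLE_VERSION % 100)

int main(void) {
    assert(EXAMPLE_MAJOR_VERSION == 1); /* 100203 / 100000 == 1 */
    assert(EXAMPLE_MINOR_VERSION == 2); /* (100203 / 100) % 1000 == 2 */
    assert(EXAMPLE_PATCH_VERSION == 3); /* 100203 % 100 == 3 */
    return 0;
}
```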
**Deleted file** (220 lines; the path was not captured in this view, but the content is the CMake rules that drive Cargo and import the built C-bindings library)

```cmake
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)

find_program (
    CARGO_CMD
    "cargo"
    PATHS "$ENV{CARGO_HOME}/bin"
    DOC "The Cargo command"
)

if(NOT CARGO_CMD)
    message(FATAL_ERROR "Cargo (Rust package manager) not found! Install it and/or set the CARGO_HOME environment variable.")
endif()

string(TOLOWER "${CMAKE_BUILD_TYPE}" BUILD_TYPE_LOWER)

if(BUILD_TYPE_LOWER STREQUAL debug)
    set(CARGO_BUILD_TYPE "debug")

    set(CARGO_FLAG "")
else()
    set(CARGO_BUILD_TYPE "release")

    set(CARGO_FLAG "--release")
endif()

set(CARGO_CURRENT_BINARY_DIR "${CARGO_TARGET_DIR}/${CARGO_BUILD_TYPE}")

set(
    CARGO_OUTPUT
        ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
        ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}
        ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}
)

if(WIN32)
    # \note The basename of an import library output by Cargo is the filename
    #       of its corresponding shared library.
    list(APPEND CARGO_OUTPUT ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()

add_custom_command(
    OUTPUT ${CARGO_OUTPUT}
    COMMAND
        # \note cbindgen won't regenerate its output header file after it's
        #       been removed but it will after its configuration file has been
        #       updated.
        ${CMAKE_COMMAND} -DCONDITION=NOT_EXISTS -P ${CMAKE_SOURCE_DIR}/cmake/file_touch.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h ${CMAKE_SOURCE_DIR}/cbindgen.toml
    COMMAND
        ${CMAKE_COMMAND} -E env CARGO_TARGET_DIR=${CARGO_TARGET_DIR} ${CARGO_CMD} build ${CARGO_FLAG}
    MAIN_DEPENDENCY
        lib.rs
    DEPENDS
        doc.rs
        result.rs
        utils.rs
        ${CMAKE_SOURCE_DIR}/build.rs
        ${CMAKE_SOURCE_DIR}/Cargo.toml
        ${CMAKE_SOURCE_DIR}/cbindgen.toml
    WORKING_DIRECTORY
        ${CMAKE_SOURCE_DIR}
    COMMENT
        "Producing the library artifacts with Cargo..."
    VERBATIM
)

add_custom_target(
    ${LIBRARY_NAME}_artifacts
    DEPENDS ${CARGO_OUTPUT}
)

# \note cbindgen's naming behavior isn't fully configurable.
add_custom_command(
    TARGET ${LIBRARY_NAME}_artifacts
    POST_BUILD
    COMMAND
        # Compensate for cbindgen's variant struct naming.
        ${CMAKE_COMMAND} -DMATCH_REGEX=AM\([^_]+_[^_]+\)_Body -DREPLACE_EXPR=AM\\1 -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
    COMMAND
        # Compensate for cbindgen's union tag enum type naming.
        ${CMAKE_COMMAND} -DMATCH_REGEX=AM\([^_]+\)_Tag -DREPLACE_EXPR=AM\\1Variant -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
    COMMAND
        # Compensate for cbindgen's translation of consecutive uppercase letters to "ScreamingSnakeCase".
        ${CMAKE_COMMAND} -DMATCH_REGEX=A_M\([^_]+\)_ -DREPLACE_EXPR=AM_\\1_ -P ${CMAKE_SOURCE_DIR}/cmake/file_regex_replace.cmake -- ${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
    WORKING_DIRECTORY
        ${CMAKE_SOURCE_DIR}
    COMMENT
        "Compensating for hard-coded cbindgen naming behaviors..."
    VERBATIM
)

if(BUILD_SHARED_LIBS)
    if(WIN32)
        set(LIBRARY_DESTINATION "${CMAKE_INSTALL_BINDIR}")
    else()
        set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
    endif()

    set(LIBRARY_DEFINE_SYMBOL "${SYMBOL_PREFIX}_EXPORTS")

    # \note The basename of an import library output by Cargo is the filename
    #       of its corresponding shared library.
    set(LIBRARY_IMPLIB "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}")

    set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")

    set(LIBRARY_NO_SONAME "${WIN32}")

    set(LIBRARY_SONAME "${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}")

    set(LIBRARY_TYPE "SHARED")
else()
    set(LIBRARY_DEFINE_SYMBOL "")

    set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")

    set(LIBRARY_IMPLIB "")

    set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}")

    set(LIBRARY_NO_SONAME "TRUE")

    set(LIBRARY_SONAME "")

    set(LIBRARY_TYPE "STATIC")
endif()

add_library(${LIBRARY_NAME} ${LIBRARY_TYPE} IMPORTED GLOBAL)

set_target_properties(
    ${LIBRARY_NAME}
    PROPERTIES
        # \note Cargo writes a debug build into a nested directory instead of
        #       decorating its name.
        DEBUG_POSTFIX ""
        DEFINE_SYMBOL "${LIBRARY_DEFINE_SYMBOL}"
        IMPORTED_IMPLIB "${LIBRARY_IMPLIB}"
        IMPORTED_LOCATION "${LIBRARY_LOCATION}"
        IMPORTED_NO_SONAME "${LIBRARY_NO_SONAME}"
        IMPORTED_SONAME "${LIBRARY_SONAME}"
        LINKER_LANGUAGE C
        PUBLIC_HEADER "${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h"
        SOVERSION "${PROJECT_VERSION_MAJOR}"
        VERSION "${PROJECT_VERSION}"
        # \note Cargo exports all of the symbols automatically.
        WINDOWS_EXPORT_ALL_SYMBOLS "TRUE"
)

target_compile_definitions(${LIBRARY_NAME} INTERFACE $<TARGET_PROPERTY:${LIBRARY_NAME},DEFINE_SYMBOL>)

target_include_directories(
    ${LIBRARY_NAME}
    INTERFACE
        "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}>"
)

set(CMAKE_THREAD_PREFER_PTHREAD TRUE)

set(THREADS_PREFER_PTHREAD_FLAG TRUE)

find_package(Threads REQUIRED)

set(LIBRARY_DEPENDENCIES Threads::Threads ${CMAKE_DL_LIBS})

if(WIN32)
    list(APPEND LIBRARY_DEPENDENCIES Bcrypt userenv ws2_32)
else()
    list(APPEND LIBRARY_DEPENDENCIES m)
endif()

target_link_libraries(${LIBRARY_NAME} INTERFACE ${LIBRARY_DEPENDENCIES})

install(
    FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_IMPLIB>
    TYPE LIB
    # \note The basename of an import library output by Cargo is the filename
    #       of its corresponding shared library.
    RENAME "${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}"
    OPTIONAL
)

set(LIBRARY_FILE_NAME "${CMAKE_${LIBRARY_TYPE}_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_${LIBRARY_TYPE}_LIBRARY_SUFFIX}")

install(
    FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_LOCATION>
    RENAME "${LIBRARY_FILE_NAME}"
    DESTINATION ${LIBRARY_DESTINATION}
)

install(
    FILES $<TARGET_PROPERTY:${LIBRARY_NAME},PUBLIC_HEADER>
    DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)

find_package(Doxygen OPTIONAL_COMPONENTS dot)

if(DOXYGEN_FOUND)
    set(DOXYGEN_GENERATE_LATEX YES)

    set(DOXYGEN_PDF_HYPERLINKS YES)

    set(DOXYGEN_PROJECT_LOGO "${CMAKE_SOURCE_DIR}/img/brandmark.png")

    set(DOXYGEN_SORT_BRIEF_DOCS YES)

    set(DOXYGEN_USE_MDFILE_AS_MAINPAGE "${CMAKE_SOURCE_DIR}/README.md")

    doxygen_add_docs(
        ${LIBRARY_NAME}_docs
        "${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h"
        "${CMAKE_SOURCE_DIR}/README.md"
        USE_STAMP_FILE
        WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
        COMMENT "Producing documentation with Doxygen..."
    )

    # \note A Doxygen input file isn't a file-level dependency so the Doxygen
    #       command must instead depend upon a target that outputs the file or
    #       it will just output an error message when it can't be found.
    add_dependencies(${LIBRARY_NAME}_docs ${LIBRARY_NAME}_artifacts)
endif()
```
@@ -1,85 +0,0 @@
use automerge as am;
use std::collections::BTreeSet;
use std::ops::{Deref, DerefMut};

use crate::result::AMobjId;
use automerge::transaction::Transactable;

/// \struct AMdoc
/// \brief A JSON-like CRDT.
#[derive(Clone)]
pub struct AMdoc {
    body: am::AutoCommit,
    obj_ids: BTreeSet<AMobjId>,
}

impl AMdoc {
    pub fn new(body: am::AutoCommit) -> Self {
        Self {
            body,
            obj_ids: BTreeSet::new(),
        }
    }

    pub fn insert_object(
        &mut self,
        obj: &am::ObjId,
        index: usize,
        value: am::ObjType,
    ) -> Result<&AMobjId, am::AutomergeError> {
        match self.body.insert_object(obj, index, value) {
            Ok(ex_id) => {
                let obj_id = AMobjId::new(ex_id);
                self.obj_ids.insert(obj_id.clone());
                match self.obj_ids.get(&obj_id) {
                    Some(obj_id) => Ok(obj_id),
                    None => Err(am::AutomergeError::Fail),
                }
            }
            Err(e) => Err(e),
        }
    }

    pub fn put_object<O: AsRef<am::ObjId>, P: Into<am::Prop>>(
        &mut self,
        obj: O,
        prop: P,
        value: am::ObjType,
    ) -> Result<&AMobjId, am::AutomergeError> {
        match self.body.put_object(obj, prop, value) {
            Ok(ex_id) => {
                let obj_id = AMobjId::new(ex_id);
                self.obj_ids.insert(obj_id.clone());
                match self.obj_ids.get(&obj_id) {
                    Some(obj_id) => Ok(obj_id),
                    None => Err(am::AutomergeError::Fail),
                }
            }
            Err(e) => Err(e),
        }
    }

    pub fn drop_obj_id(&mut self, obj_id: &AMobjId) -> bool {
        self.obj_ids.remove(obj_id)
    }
}

impl Deref for AMdoc {
    type Target = am::AutoCommit;

    fn deref(&self) -> &Self::Target {
        &self.body
    }
}

impl DerefMut for AMdoc {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.body
    }
}

impl From<AMdoc> for *mut AMdoc {
    fn from(b: AMdoc) -> Self {
        Box::into_raw(Box::new(b))
    }
}
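The `obj_ids` set above is what ties each handed-out `AMobjId` to the document's lifetime rather than to any single result. A C-side sketch of the contract this implies (a fragment, using only calls that appear in the tests in this diff; the "ownership" reading is inferred from the doc comments below):

/* Sketch: because AMdoc owns each AMobjId, the hosting AMresult can be
 * freed immediately while the object ID stays valid until it is dropped
 * from the document with AMfreeObjId(). */
AMresult* res = AMmapPutObject(doc, AM_ROOT, "items", AM_OBJ_TYPE_LIST);
AMvalue value = AMresultValue(res, 0);
AMfreeResult(res);                /* value.obj_id remains usable here */
assert(AMobjSize(doc, value.obj_id) == 0);
AMfreeObjId(doc, value.obj_id);   /* removes it from AMdoc::obj_ids */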
(File diff suppressed because it is too large.)
@@ -1,212 +0,0 @@
use automerge as am;
use std::ffi::CString;
use std::ops::Deref;

/// \struct AMobjId
/// \brief An object's unique identifier.
#[derive(Clone, Eq, Ord, PartialEq, PartialOrd)]
pub struct AMobjId(am::ObjId);

impl AMobjId {
    pub fn new(obj_id: am::ObjId) -> Self {
        Self(obj_id)
    }
}

impl AsRef<am::ObjId> for AMobjId {
    fn as_ref(&self) -> &am::ObjId {
        &self.0
    }
}

impl Deref for AMobjId {
    type Target = am::ObjId;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

/// \memberof AMvalue
/// \struct AMbyteSpan
/// \brief A contiguous sequence of bytes.
///
#[repr(C)]
pub struct AMbyteSpan {
    /// A pointer to the byte at position zero.
    /// \warning \p src is only valid until the `AMfreeResult()` function is called
    ///          on the `AMresult` struct hosting the array of bytes to which
    ///          it points.
    src: *const u8,
    /// The number of bytes in the sequence.
    count: usize,
}

impl From<&Vec<u8>> for AMbyteSpan {
    fn from(v: &Vec<u8>) -> Self {
        AMbyteSpan {
            src: (*v).as_ptr(),
            count: (*v).len(),
        }
    }
}

impl From<&mut am::ActorId> for AMbyteSpan {
    fn from(actor: &mut am::ActorId) -> Self {
        let slice = actor.to_bytes();
        AMbyteSpan {
            src: slice.as_ptr(),
            count: slice.len(),
        }
    }
}

/// \struct AMvalue
/// \brief A discriminated union of value type variants for an `AMresult` struct.
///
/// \enum AMvalueVariant
/// \brief A value type discriminant.
///
/// \var AMvalue::tag
/// The variant discriminator of an `AMvalue` struct.
///
/// \var AMvalue::actor_id
/// An actor ID as an `AMbyteSpan` struct.
///
/// \var AMvalue::boolean
/// A boolean.
///
/// \var AMvalue::bytes
/// An array of bytes as an `AMbyteSpan` struct.
///
/// \var AMvalue::counter
/// A CRDT counter.
///
/// \var AMvalue::f64
/// A 64-bit float.
///
/// \var AMvalue::change_hash
/// A change hash as an `AMbyteSpan` struct.
///
/// \var AMvalue::int_
/// A 64-bit signed integer.
///
/// \var AMvalue::obj_id
/// An object identifier.
///
/// \var AMvalue::str
/// A UTF-8 string.
///
/// \var AMvalue::timestamp
/// A Lamport timestamp.
///
/// \var AMvalue::uint
/// A 64-bit unsigned integer.
#[repr(C)]
pub enum AMvalue<'a> {
    /// An actor ID variant.
    ActorId(AMbyteSpan),
    /// A boolean variant.
    Boolean(libc::c_char),
    /// An array of bytes variant.
    Bytes(AMbyteSpan),
    /*
    /// A changes variant.
    Changes(_),
    */
    /// A CRDT counter variant.
    Counter(i64),
    /// A 64-bit float variant.
    F64(f64),
    /// A change hash variant.
    ChangeHash(AMbyteSpan),
    /// A 64-bit signed integer variant.
    Int(i64),
    /*
    /// A keys variant.
    Keys(_),
    */
    /// A nothing variant.
    Nothing,
    /// A null variant.
    Null,
    /// An object identifier variant.
    ObjId(&'a AMobjId),
    /// A UTF-8 string variant.
    Str(*const libc::c_char),
    /// A Lamport timestamp variant.
    Timestamp(i64),
    /*
    /// A transaction variant.
    Transaction(_),
    */
    /// A 64-bit unsigned integer variant.
    Uint(u64),
}

/// \struct AMresult
/// \brief A discriminated union of result variants.
///
pub enum AMresult<'a> {
    ActorId(am::ActorId),
    Changes(Vec<am::Change>),
    Error(CString),
    ObjId(&'a AMobjId),
    Nothing,
    Scalars(Vec<am::Value<'static>>, Option<CString>),
}

impl<'a> AMresult<'a> {
    pub(crate) fn err(s: &str) -> Self {
        AMresult::Error(CString::new(s).unwrap())
    }
}

impl<'a> From<Result<am::ActorId, am::AutomergeError>> for AMresult<'a> {
    fn from(maybe: Result<am::ActorId, am::AutomergeError>) -> Self {
        match maybe {
            Ok(actor_id) => AMresult::ActorId(actor_id),
            Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
        }
    }
}

impl<'a> From<Result<&'a AMobjId, am::AutomergeError>> for AMresult<'a> {
    fn from(maybe: Result<&'a AMobjId, am::AutomergeError>) -> Self {
        match maybe {
            Ok(obj_id) => AMresult::ObjId(obj_id),
            Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
        }
    }
}

impl<'a> From<Result<(), am::AutomergeError>> for AMresult<'a> {
    fn from(maybe: Result<(), am::AutomergeError>) -> Self {
        match maybe {
            Ok(()) => AMresult::Nothing,
            Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
        }
    }
}

impl<'a> From<Result<Option<(am::Value<'static>, am::ObjId)>, am::AutomergeError>>
    for AMresult<'a>
{
    fn from(maybe: Result<Option<(am::Value<'static>, am::ObjId)>, am::AutomergeError>) -> Self {
        match maybe {
            // \todo Ensure that it's alright to ignore the `am::ObjId` value.
            Ok(Some((value, _))) => AMresult::Scalars(vec![value], None),
            Ok(None) => AMresult::Nothing,
            Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
        }
    }
}

impl<'a> From<Result<am::Value<'static>, am::AutomergeError>> for AMresult<'a> {
    fn from(maybe: Result<am::Value<'static>, am::AutomergeError>) -> Self {
        match maybe {
            Ok(value) => AMresult::Scalars(vec![value], None),
            Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
        }
    }
}
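Because `AMvalue` is `#[repr(C)]`, a C caller sees it as a tagged union and dispatches on its discriminant. A sketch (a fragment; the `AM_VALUE_*` tag names and member names match the ones used by the tests later in this diff):

/* Sketch: dispatching on an AMvalue's discriminant from C. */
AMvalue value = AMresultValue(res, 0);
switch (value.tag) {
    case AM_VALUE_STR:
        printf("str: %s\n", value.str);
        break;
    case AM_VALUE_BYTES:
        printf("%zu bytes\n", value.bytes.count);
        break;
    case AM_VALUE_INT:
        printf("int: %lld\n", (long long)value.int_);
        break;
    case AM_VALUE_NOTHING:  /* the operation produced no value */
    default:
        break;
}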
@@ -1,7 +0,0 @@
use crate::AMresult;

impl<'a> From<AMresult<'a>> for *mut AMresult<'a> {
    fn from(b: AMresult<'a>) -> Self {
        Box::into_raw(Box::new(b))
    }
}
@@ -1,51 +0,0 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)

find_package(cmocka REQUIRED)

add_executable(
    test_${LIBRARY_NAME}
        group_state.c
        amdoc_property_tests.c
        amlistput_tests.c
        ammapput_tests.c
        macro_utils.c
        main.c
)

set_target_properties(test_${LIBRARY_NAME} PROPERTIES LINKER_LANGUAGE C)

# \note An imported library's INTERFACE_INCLUDE_DIRECTORIES property can't
#       contain a non-existent path so its build-time include directory
#       must be specified for all of its dependent targets instead.
target_include_directories(
    test_${LIBRARY_NAME}
    PRIVATE "$<BUILD_INTERFACE:${CARGO_TARGET_DIR}>"
)

target_link_libraries(test_${LIBRARY_NAME} PRIVATE cmocka ${LIBRARY_NAME})

add_dependencies(test_${LIBRARY_NAME} ${LIBRARY_NAME}_artifacts)

if(BUILD_SHARED_LIBS AND WIN32)
    add_custom_command(
        TARGET test_${LIBRARY_NAME}
        POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy_if_different
            ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}
            ${CMAKE_CURRENT_BINARY_DIR}
        COMMENT "Copying the DLL built by Cargo into the test directory..."
        VERBATIM
    )
endif()

add_test(NAME test_${LIBRARY_NAME} COMMAND test_${LIBRARY_NAME})

add_custom_command(
    TARGET test_${LIBRARY_NAME}
    POST_BUILD
    COMMAND
        ${CMAKE_CTEST_COMMAND} --config $<CONFIG> --output-on-failure
    COMMENT
        "Running the test(s)..."
    VERBATIM
)
@@ -1,110 +0,0 @@
#include <setjmp.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* third-party */
#include <cmocka.h>

/* local */
#include "group_state.h"

typedef struct {
    GroupState* group_state;
    char const* actor_id_str;
    uint8_t* actor_id_bytes;
    size_t actor_id_size;
} TestState;

static void hex_to_bytes(char const* hex_str, uint8_t* bytes, size_t const count) {
    unsigned int byte;
    char const* next = hex_str;
    for (size_t index = 0; *next && index != count; next += 2, ++index) {
        if (sscanf(next, "%02x", &byte) == 1) {
            bytes[index] = (uint8_t)byte;
        }
    }
}

static int setup(void** state) {
    TestState* test_state = calloc(1, sizeof(TestState));
    group_setup((void**)&test_state->group_state);
    test_state->actor_id_str = "000102030405060708090a0b0c0d0e0f";
    test_state->actor_id_size = strlen(test_state->actor_id_str) / 2;
    test_state->actor_id_bytes = malloc(test_state->actor_id_size);
    hex_to_bytes(test_state->actor_id_str, test_state->actor_id_bytes, test_state->actor_id_size);
    *state = test_state;
    return 0;
}

static int teardown(void** state) {
    TestState* test_state = *state;
    group_teardown((void**)&test_state->group_state);
    free(test_state->actor_id_bytes);
    free(test_state);
    return 0;
}

static void test_AMputActor(void **state) {
    TestState* test_state = *state;
    GroupState* group_state = test_state->group_state;
    AMresult* res = AMsetActor(
        group_state->doc,
        test_state->actor_id_bytes,
        test_state->actor_id_size
    );
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 0);
    AMvalue value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_NOTHING);
    AMfreeResult(res);
    res = AMgetActor(group_state->doc);
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 1);
    value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_ACTOR_ID);
    assert_int_equal(value.actor_id.count, test_state->actor_id_size);
    assert_memory_equal(value.actor_id.src, test_state->actor_id_bytes, value.actor_id.count);
    AMfreeResult(res);
}

static void test_AMputActorHex(void **state) {
    TestState* test_state = *state;
    GroupState* group_state = test_state->group_state;
    AMresult* res = AMsetActorHex(
        group_state->doc,
        test_state->actor_id_str
    );
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 0);
    AMvalue value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_NOTHING);
    AMfreeResult(res);
    res = AMgetActorHex(group_state->doc);
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 1);
    value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_STR);
    assert_int_equal(strlen(value.str), test_state->actor_id_size * 2);
    assert_string_equal(value.str, test_state->actor_id_str);
    AMfreeResult(res);
}

int run_AMdoc_property_tests(void) {
    const struct CMUnitTest tests[] = {
        cmocka_unit_test_setup_teardown(test_AMputActor, setup, teardown),
        cmocka_unit_test_setup_teardown(test_AMputActorHex, setup, teardown),
    };

    return cmocka_run_group_tests(tests, NULL, NULL);
}
@@ -1,235 +0,0 @@
#include <float.h>
#include <limits.h>
#include <setjmp.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* third-party */
#include <cmocka.h>

/* local */
#include "group_state.h"
#include "macro_utils.h"

#define test_AMlistPut(suffix, mode) test_AMlistPut ## suffix ## _ ## mode

#define static_void_test_AMlistPut(suffix, mode, member, scalar_value) \
static void test_AMlistPut ## suffix ## _ ## mode(void **state) { \
    GroupState* group_state = *state; \
    AMresult* res = AMlistPut ## suffix( \
        group_state->doc, AM_ROOT, 0, !strcmp(#mode, "insert"), scalar_value \
    ); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 0); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_NOTHING); \
    AMfreeResult(res); \
    res = AMlistGet(group_state->doc, AM_ROOT, 0); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AMvalue_discriminant(#suffix)); \
    assert_true(value.member == scalar_value); \
    AMfreeResult(res); \
}

#define test_AMlistPutBytes(mode) test_AMlistPutBytes ## _ ## mode

#define static_void_test_AMlistPutBytes(mode, bytes_value) \
static void test_AMlistPutBytes_ ## mode(void **state) { \
    static size_t const BYTES_SIZE = sizeof(bytes_value) / sizeof(uint8_t); \
\
    GroupState* group_state = *state; \
    AMresult* res = AMlistPutBytes( \
        group_state->doc, \
        AM_ROOT, \
        0, \
        !strcmp(#mode, "insert"), \
        bytes_value, \
        BYTES_SIZE \
    ); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 0); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_NOTHING); \
    AMfreeResult(res); \
    res = AMlistGet(group_state->doc, AM_ROOT, 0); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_BYTES); \
    assert_int_equal(value.bytes.count, BYTES_SIZE); \
    assert_memory_equal(value.bytes.src, bytes_value, BYTES_SIZE); \
    AMfreeResult(res); \
}

#define test_AMlistPutNull(mode) test_AMlistPutNull_ ## mode

#define static_void_test_AMlistPutNull(mode) \
static void test_AMlistPutNull_ ## mode(void **state) { \
    GroupState* group_state = *state; \
    AMresult* res = AMlistPutNull( \
        group_state->doc, AM_ROOT, 0, !strcmp(#mode, "insert")); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 0); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_NOTHING); \
    AMfreeResult(res); \
    res = AMlistGet(group_state->doc, AM_ROOT, 0); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_NULL); \
    AMfreeResult(res); \
}

#define test_AMlistPutObject(label, mode) test_AMlistPutObject_ ## label ## _ ## mode

#define static_void_test_AMlistPutObject(label, mode) \
static void test_AMlistPutObject_ ## label ## _ ## mode(void **state) { \
    GroupState* group_state = *state; \
    AMresult* res = AMlistPutObject( \
        group_state->doc, \
        AM_ROOT, \
        0, \
        !strcmp(#mode, "insert"), \
        AMobjType_tag(#label) \
    ); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_OBJ_ID); \
    /** \
     * \note The `AMresult` struct can be deallocated immediately when its \
     *       value is a pointer to an opaque struct because its lifetime \
     *       is tied to the `AMdoc` struct instead. \
     */ \
    AMfreeResult(res); \
    assert_non_null(value.obj_id); \
    assert_int_equal(AMobjSize(group_state->doc, value.obj_id), 0); \
    AMfreeObjId(group_state->doc, value.obj_id); \
}

#define test_AMlistPutStr(mode) test_AMlistPutStr ## _ ## mode

#define static_void_test_AMlistPutStr(mode, str_value) \
static void test_AMlistPutStr_ ## mode(void **state) { \
    size_t const STR_LEN = strlen(str_value); \
\
    GroupState* group_state = *state; \
    AMresult* res = AMlistPutStr( \
        group_state->doc, \
        AM_ROOT, \
        0, \
        !strcmp(#mode, "insert"), \
        str_value \
    ); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 0); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_NOTHING); \
    AMfreeResult(res); \
    res = AMlistGet(group_state->doc, AM_ROOT, 0); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_STR); \
    assert_int_equal(strlen(value.str), STR_LEN); \
    assert_memory_equal(value.str, str_value, STR_LEN + 1); \
    AMfreeResult(res); \
}

static uint8_t const BYTES_VALUE[] = {INT8_MIN, INT8_MAX / 2, INT8_MAX};

static_void_test_AMlistPutBytes(insert, BYTES_VALUE)

static_void_test_AMlistPutBytes(update, BYTES_VALUE)

static_void_test_AMlistPut(Counter, insert, counter, INT64_MAX)

static_void_test_AMlistPut(Counter, update, counter, INT64_MAX)

static_void_test_AMlistPut(F64, insert, f64, DBL_MAX)

static_void_test_AMlistPut(F64, update, f64, DBL_MAX)

static_void_test_AMlistPut(Int, insert, int_, INT64_MAX)

static_void_test_AMlistPut(Int, update, int_, INT64_MAX)

static_void_test_AMlistPutNull(insert)

static_void_test_AMlistPutNull(update)

static_void_test_AMlistPutObject(List, insert)

static_void_test_AMlistPutObject(List, update)

static_void_test_AMlistPutObject(Map, insert)

static_void_test_AMlistPutObject(Map, update)

static_void_test_AMlistPutObject(Text, insert)

static_void_test_AMlistPutObject(Text, update)

static_void_test_AMlistPutStr(insert, "Hello, world!")

static_void_test_AMlistPutStr(update, "Hello, world!")

static_void_test_AMlistPut(Timestamp, insert, timestamp, INT64_MAX)

static_void_test_AMlistPut(Timestamp, update, timestamp, INT64_MAX)

static_void_test_AMlistPut(Uint, insert, uint, UINT64_MAX)

static_void_test_AMlistPut(Uint, update, uint, UINT64_MAX)

int run_AMlistPut_tests(void) {
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_AMlistPutBytes(insert)),
        cmocka_unit_test(test_AMlistPutBytes(update)),
        cmocka_unit_test(test_AMlistPut(Counter, insert)),
        cmocka_unit_test(test_AMlistPut(Counter, update)),
        cmocka_unit_test(test_AMlistPut(F64, insert)),
        cmocka_unit_test(test_AMlistPut(F64, update)),
        cmocka_unit_test(test_AMlistPut(Int, insert)),
        cmocka_unit_test(test_AMlistPut(Int, update)),
        cmocka_unit_test(test_AMlistPutNull(insert)),
        cmocka_unit_test(test_AMlistPutNull(update)),
        cmocka_unit_test(test_AMlistPutObject(List, insert)),
        cmocka_unit_test(test_AMlistPutObject(List, update)),
        cmocka_unit_test(test_AMlistPutObject(Map, insert)),
        cmocka_unit_test(test_AMlistPutObject(Map, update)),
        cmocka_unit_test(test_AMlistPutObject(Text, insert)),
        cmocka_unit_test(test_AMlistPutObject(Text, update)),
        cmocka_unit_test(test_AMlistPutStr(insert)),
        cmocka_unit_test(test_AMlistPutStr(update)),
        cmocka_unit_test(test_AMlistPut(Timestamp, insert)),
        cmocka_unit_test(test_AMlistPut(Timestamp, update)),
        cmocka_unit_test(test_AMlistPut(Uint, insert)),
        cmocka_unit_test(test_AMlistPut(Uint, update)),
    };

    return cmocka_run_group_tests(tests, group_setup, group_teardown);
}
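Each of the token-pasting macros above stamps out a full cmocka test per (suffix, mode) pair. For instance, static_void_test_AMlistPut(Int, insert, int_, INT64_MAX) expands roughly to the following (a sketch, with the rest of the macro body summarized in a comment):

/* Roughly the expansion of static_void_test_AMlistPut(Int, insert, int_,
 * INT64_MAX): suffix and mode are pasted into the function name, and the
 * run-time !strcmp("insert", "insert") makes the insert flag 1. */
static void test_AMlistPutInt_insert(void **state) {
    GroupState* group_state = *state;
    AMresult* res = AMlistPutInt(group_state->doc, AM_ROOT, 0, 1, INT64_MAX);
    /* ...status check, AM_VALUE_NOTHING assertion, then an AMlistGet()
     *    round trip asserting value.int_ == INT64_MAX, as in the macro. */
}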
@@ -1,190 +0,0 @@
#include <float.h>
#include <limits.h>
#include <setjmp.h>
#include <stdarg.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* third-party */
#include <cmocka.h>

/* local */
#include "group_state.h"
#include "macro_utils.h"

#define test_AMmapPut(suffix) test_AMmapPut ## suffix

#define static_void_test_AMmapPut(suffix, member, scalar_value) \
static void test_AMmapPut ## suffix(void **state) { \
    GroupState* group_state = *state; \
    AMresult* res = AMmapPut ## suffix( \
        group_state->doc, \
        AM_ROOT, \
        #suffix, \
        scalar_value \
    ); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 0); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_NOTHING); \
    AMfreeResult(res); \
    res = AMmapGet(group_state->doc, AM_ROOT, #suffix); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AMvalue_discriminant(#suffix)); \
    assert_true(value.member == scalar_value); \
    AMfreeResult(res); \
}

#define test_AMmapPutObject(label) test_AMmapPutObject_ ## label

#define static_void_test_AMmapPutObject(label) \
static void test_AMmapPutObject_ ## label(void **state) { \
    GroupState* group_state = *state; \
    AMresult* res = AMmapPutObject( \
        group_state->doc, \
        AM_ROOT, \
        #label, \
        AMobjType_tag(#label) \
    ); \
    if (AMresultStatus(res) != AM_STATUS_OK) { \
        fail_msg("%s", AMerrorMessage(res)); \
    } \
    assert_int_equal(AMresultSize(res), 1); \
    AMvalue value = AMresultValue(res, 0); \
    assert_int_equal(value.tag, AM_VALUE_OBJ_ID); \
    /** \
     * \note The `AMresult` struct can be deallocated immediately when its \
     *       value is a pointer to an opaque struct because its lifetime \
     *       is tied to the `AMdoc` struct instead. \
     */ \
    AMfreeResult(res); \
    assert_non_null(value.obj_id); \
    assert_int_equal(AMobjSize(group_state->doc, value.obj_id), 0); \
    AMfreeObjId(group_state->doc, value.obj_id); \
}

static void test_AMmapPutBytes(void **state) {
    static char const* const KEY = "Bytes";
    static uint8_t const BYTES_VALUE[] = {INT8_MIN, INT8_MAX / 2, INT8_MAX};
    static size_t const BYTES_SIZE = sizeof(BYTES_VALUE) / sizeof(uint8_t);

    GroupState* group_state = *state;
    AMresult* res = AMmapPutBytes(
        group_state->doc,
        AM_ROOT,
        KEY,
        BYTES_VALUE,
        BYTES_SIZE
    );
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 0);
    AMvalue value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_NOTHING);
    AMfreeResult(res);
    res = AMmapGet(group_state->doc, AM_ROOT, KEY);
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 1);
    value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_BYTES);
    assert_int_equal(value.bytes.count, BYTES_SIZE);
    assert_memory_equal(value.bytes.src, BYTES_VALUE, BYTES_SIZE);
    AMfreeResult(res);
}

static_void_test_AMmapPut(Counter, counter, INT64_MAX)

static_void_test_AMmapPut(F64, f64, DBL_MAX)

static_void_test_AMmapPut(Int, int_, INT64_MAX)

static void test_AMmapPutNull(void **state) {
    static char const* const KEY = "Null";

    GroupState* group_state = *state;
    AMresult* res = AMmapPutNull(group_state->doc, AM_ROOT, KEY);
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 0);
    AMvalue value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_NOTHING);
    AMfreeResult(res);
    res = AMmapGet(group_state->doc, AM_ROOT, KEY);
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 1);
    value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_NULL);
    AMfreeResult(res);
}

static_void_test_AMmapPutObject(List)

static_void_test_AMmapPutObject(Map)

static_void_test_AMmapPutObject(Text)

static void test_AMmapPutStr(void **state) {
    static char const* const KEY = "Str";
    static char const* const STR_VALUE = "Hello, world!";
    size_t const STR_LEN = strlen(STR_VALUE);

    GroupState* group_state = *state;
    AMresult* res = AMmapPutStr(
        group_state->doc,
        AM_ROOT,
        KEY,
        STR_VALUE
    );
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 0);
    AMvalue value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_NOTHING);
    AMfreeResult(res);
    res = AMmapGet(group_state->doc, AM_ROOT, KEY);
    if (AMresultStatus(res) != AM_STATUS_OK) {
        fail_msg("%s", AMerrorMessage(res));
    }
    assert_int_equal(AMresultSize(res), 1);
    value = AMresultValue(res, 0);
    assert_int_equal(value.tag, AM_VALUE_STR);
    assert_int_equal(strlen(value.str), STR_LEN);
    assert_memory_equal(value.str, STR_VALUE, STR_LEN + 1);
    AMfreeResult(res);
}

static_void_test_AMmapPut(Timestamp, timestamp, INT64_MAX)

static_void_test_AMmapPut(Uint, uint, UINT64_MAX)

int run_AMmapPut_tests(void) {
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(test_AMmapPutBytes),
        cmocka_unit_test(test_AMmapPut(Counter)),
        cmocka_unit_test(test_AMmapPut(F64)),
        cmocka_unit_test(test_AMmapPut(Int)),
        cmocka_unit_test(test_AMmapPutNull),
        cmocka_unit_test(test_AMmapPutObject(List)),
        cmocka_unit_test(test_AMmapPutObject(Map)),
        cmocka_unit_test(test_AMmapPutObject(Text)),
        cmocka_unit_test(test_AMmapPutStr),
        cmocka_unit_test(test_AMmapPut(Timestamp)),
        cmocka_unit_test(test_AMmapPut(Uint)),
    };

    return cmocka_run_group_tests(tests, group_setup, group_teardown);
}
@@ -1,18 +0,0 @@
#include <stdlib.h>

/* local */
#include "group_state.h"

int group_setup(void** state) {
    GroupState* group_state = calloc(1, sizeof(GroupState));
    group_state->doc = AMallocDoc();
    *state = group_state;
    return 0;
}

int group_teardown(void** state) {
    GroupState* group_state = *state;
    AMfreeDoc(group_state->doc);
    free(group_state);
    return 0;
}
@@ -1,15 +0,0 @@
#ifndef GROUP_STATE_INCLUDED
#define GROUP_STATE_INCLUDED

/* local */
#include "automerge.h"

typedef struct {
    AMdoc* doc;
} GroupState;

int group_setup(void** state);

int group_teardown(void** state);

#endif
@@ -1,23 +0,0 @@
#include <string.h>

/* local */
#include "macro_utils.h"

AMvalueVariant AMvalue_discriminant(char const* suffix) {
    if (!strcmp(suffix, "Bytes")) return AM_VALUE_BYTES;
    else if (!strcmp(suffix, "Counter")) return AM_VALUE_COUNTER;
    else if (!strcmp(suffix, "F64")) return AM_VALUE_F64;
    else if (!strcmp(suffix, "Int")) return AM_VALUE_INT;
    else if (!strcmp(suffix, "Null")) return AM_VALUE_NULL;
    else if (!strcmp(suffix, "Str")) return AM_VALUE_STR;
    else if (!strcmp(suffix, "Timestamp")) return AM_VALUE_TIMESTAMP;
    else if (!strcmp(suffix, "Uint")) return AM_VALUE_UINT;
    else return AM_VALUE_NOTHING;
}

AMobjType AMobjType_tag(char const* obj_type_label) {
    if (!strcmp(obj_type_label, "List")) return AM_OBJ_TYPE_LIST;
    else if (!strcmp(obj_type_label, "Map")) return AM_OBJ_TYPE_MAP;
    else if (!strcmp(obj_type_label, "Text")) return AM_OBJ_TYPE_TEXT;
    else return 0;
}
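These run-time lookups are what let the test macros stringize a function-name suffix (via #suffix or #label) and recover the matching enum tag. A usage sketch, with behavior as defined above:

/* Sketch: the suffix-to-tag mapping the test macros rely on. */
assert(AMvalue_discriminant("Str") == AM_VALUE_STR);
assert(AMvalue_discriminant("bogus") == AM_VALUE_NOTHING);  /* fallback */
assert(AMobjType_tag("Map") == AM_OBJ_TYPE_MAP);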
@@ -1,23 +0,0 @@
#ifndef MACRO_UTILS_INCLUDED
#define MACRO_UTILS_INCLUDED

/* local */
#include "automerge.h"

/**
 * \brief Gets the `AMvalue` discriminant corresponding to a function name suffix.
 *
 * \param[in] suffix A string.
 * \return An `AMvalue` variant discriminant enum tag.
 */
AMvalueVariant AMvalue_discriminant(char const* suffix);

/**
 * \brief Gets the `AMobjType` tag corresponding to an object type label.
 *
 * \param[in] obj_type_label A string.
 * \return An `AMobjType` enum tag.
 */
AMobjType AMobjType_tag(char const* obj_type_label);

#endif
@@ -1,21 +0,0 @@
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <stdint.h>

/* third-party */
#include <cmocka.h>

extern int run_AMdoc_property_tests(void);

extern int run_AMlistPut_tests(void);

extern int run_AMmapPut_tests(void);

int main(void) {
    return (
        run_AMdoc_property_tests() +
        run_AMlistPut_tests() +
        run_AMmapPut_tests()
    );
}
857  automerge-cli/Cargo.lock  generated
@@ -1,857 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3

[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"

[[package]]
name = "ansi_term"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d52a9bb7ec0cf484c551830a7ce27bd20d67eac647e1befb56b0be4ee39a55d2"
dependencies = [
 "winapi",
]

[[package]]
name = "anyhow"
version = "1.0.55"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "159bb86af3a200e19a068f4224eae4c8bb2d0fa054c7e5d1cacd5cef95e684cd"

[[package]]
name = "atty"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8"
dependencies = [
 "hermit-abi",
 "libc",
 "winapi",
]

[[package]]
name = "autocfg"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa"

[[package]]
name = "automerge"
version = "0.1.0"
dependencies = [
 "flate2",
 "fxhash",
 "hex",
 "itertools",
 "js-sys",
 "leb128",
 "nonzero_ext",
 "rand",
 "serde",
 "sha2",
 "smol_str",
 "thiserror",
 "tinyvec",
 "tracing",
 "unicode-segmentation",
 "uuid",
 "wasm-bindgen",
 "web-sys",
]

[[package]]
name = "automerge-cli"
version = "0.1.0"
dependencies = [
 "anyhow",
 "atty",
 "automerge",
 "clap",
 "colored_json",
 "combine",
 "duct",
 "maplit",
 "serde_json",
 "thiserror",
 "tracing-subscriber",
]

[[package]]
name = "bitflags"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"

[[package]]
name = "block-buffer"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bf7fe51849ea569fd452f37822f606a5cabb684dc918707a0193fd4664ff324"
dependencies = [
 "generic-array",
]

[[package]]
name = "bumpalo"
version = "3.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4a45a46ab1f2412e53d3a0ade76ffad2025804294569aae387231a0cd6e0899"

[[package]]
name = "byteorder"
version = "1.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14c189c53d098945499cdfa7ecc63567cf3886b3332b312a5b4585d8d3a6a610"

[[package]]
name = "bytes"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4872d67bab6358e59559027aa3b9157c53d9358c51423c17554809a8858e0f8"

[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"

[[package]]
name = "clap"
version = "3.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ced1892c55c910c1219e98d6fc8d71f6bddba7905866ce740066d8bfea859312"
dependencies = [
 "atty",
 "bitflags",
 "clap_derive",
 "indexmap",
 "lazy_static",
 "os_str_bytes",
 "strsim",
 "termcolor",
 "textwrap",
]

[[package]]
name = "clap_derive"
version = "3.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da95d038ede1a964ce99f49cbe27a7fb538d1da595e4b4f70b8c8f338d17bf16"
dependencies = [
 "heck",
 "proc-macro-error",
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "colored_json"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fd32eb54d016e203b7c2600e3a7802c75843a92e38ccc4869aefeca21771a64"
dependencies = [
 "ansi_term",
 "atty",
 "libc",
 "serde",
 "serde_json",
]

[[package]]
name = "combine"
version = "4.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "50b727aacc797f9fc28e355d21f34709ac4fc9adecfe470ad07b8f4464f53062"
dependencies = [
 "bytes",
 "memchr",
]

[[package]]
name = "cpufeatures"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95059428f66df56b63431fdb4e1947ed2190586af5c5a8a8b71122bdf5a7f469"
dependencies = [
 "libc",
]

[[package]]
name = "crc32fast"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b540bd8bc810d3885c6ea91e2018302f68baba2129ab3e88f32389ee9370880d"
dependencies = [
 "cfg-if",
]

[[package]]
name = "crypto-common"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "57952ca27b5e3606ff4dd79b0020231aaf9d6aa76dc05fd30137538c50bd3ce8"
dependencies = [
 "generic-array",
 "typenum",
]

[[package]]
name = "digest"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2fb860ca6fafa5552fb6d0e816a69c8e49f0908bf524e30a90d97c85892d506"
dependencies = [
 "block-buffer",
 "crypto-common",
]

[[package]]
name = "duct"
version = "0.13.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fc6a0a59ed0888e0041cf708e66357b7ae1a82f1c67247e1f93b5e0818f7d8d"
dependencies = [
 "libc",
 "once_cell",
 "os_pipe",
 "shared_child",
]

[[package]]
name = "either"
version = "1.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e78d4f1cc4ae33bbfc157ed5d5a5ef3bc29227303d595861deb238fcec4e9457"

[[package]]
name = "flate2"
version = "1.0.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e6988e897c1c9c485f43b47a529cef42fde0547f9d8d41a7062518f1d8fc53f"
dependencies = [
 "cfg-if",
 "crc32fast",
 "libc",
 "miniz_oxide",
]

[[package]]
name = "fxhash"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c31b6d751ae2c7f11320402d34e41349dd1016f8d5d45e48c4312bc8625af50c"
dependencies = [
 "byteorder",
]

[[package]]
name = "generic-array"
version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd48d33ec7f05fbfa152300fdad764757cbded343c1aa1cff2fbaf4134851803"
dependencies = [
 "typenum",
 "version_check",
]

[[package]]
name = "getrandom"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d39cd93900197114fa1fcb7ae84ca742095eed9442088988ae74fa744e930e77"
dependencies = [
 "cfg-if",
 "js-sys",
 "libc",
 "wasi",
 "wasm-bindgen",
]

[[package]]
name = "hashbrown"
version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab5ef0d4909ef3724cc8cce6ccc8572c5c817592e9285f5464f8e86f8bd3726e"

[[package]]
name = "heck"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2540771e65fc8cb83cd6e8a237f70c319bd5c29f78ed1084ba5d50eeac86f7f9"

[[package]]
name = "hermit-abi"
version = "0.1.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33"
dependencies = [
 "libc",
]

[[package]]
name = "hex"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"

[[package]]
name = "indexmap"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "282a6247722caba404c065016bbfa522806e51714c34f5dfc3e4a3a46fcb4223"
dependencies = [
 "autocfg",
 "hashbrown",
]

[[package]]
name = "itertools"
version = "0.10.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a9a9d19fa1e79b6215ff29b9d6880b706147f16e9b1dbb1e4e5947b5b02bc5e3"
dependencies = [
 "either",
]

[[package]]
name = "itoa"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1aab8fc367588b89dcee83ab0fd66b72b50b72fa1904d7095045ace2b0c81c35"

[[package]]
name = "js-sys"
version = "0.3.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a38fc24e30fd564ce974c02bf1d337caddff65be6cc4735a1f7eab22a7440f04"
dependencies = [
 "wasm-bindgen",
]

[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"

[[package]]
name = "leb128"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "884e2677b40cc8c339eaefcb701c32ef1fd2493d71118dc0ca4b6a736c93bd67"

[[package]]
name = "libc"
version = "0.2.119"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1bf2e165bb3457c8e098ea76f3e3bc9db55f87aa90d52d0e6be741470916aaa4"

[[package]]
name = "log"
version = "0.4.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710"
dependencies = [
 "cfg-if",
]

[[package]]
name = "maplit"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e2e65a1a2e43cfcb47a895c4c8b10d1f4a61097f9f254f183aee60cad9c651d"

[[package]]
name = "memchr"
version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a"

[[package]]
name = "miniz_oxide"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a92518e98c078586bc6c934028adcca4c92a53d6a958196de835170a01d84e4b"
dependencies = [
 "adler",
 "autocfg",
]

[[package]]
name = "nonzero_ext"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44a1290799eababa63ea60af0cbc3f03363e328e58f32fb0294798ed3e85f444"

[[package]]
name = "once_cell"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da32515d9f6e6e489d7bc9d84c71b060db7247dc035bbe44eac88cf87486d8d5"

[[package]]
name = "os_pipe"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fb233f06c2307e1f5ce2ecad9f8121cffbbee2c95428f44ea85222e460d0d213"
dependencies = [
 "libc",
 "winapi",
]

[[package]]
name = "os_str_bytes"
version = "6.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e22443d1643a904602595ba1cd8f7d896afe56d26712531c5ff73a15b2fbf64"
dependencies = [
 "memchr",
]

[[package]]
name = "pin-project-lite"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e280fbe77cc62c91527259e9442153f4688736748d24660126286329742b4c6c"

[[package]]
name = "ppv-lite86"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb9f9e6e233e5c4a35559a617bf40a4ec447db2e84c20b55a6f83167b7e57872"

[[package]]
name = "proc-macro-error"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c"
dependencies = [
 "proc-macro-error-attr",
 "proc-macro2",
 "quote",
 "syn",
 "version_check",
]

[[package]]
name = "proc-macro-error-attr"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869"
dependencies = [
 "proc-macro2",
 "quote",
 "version_check",
]

[[package]]
name = "proc-macro2"
version = "1.0.36"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7342d5883fbccae1cc37a2353b09c87c9b0f3afd73f5fb9bba687a1f733b029"
dependencies = [
 "unicode-xid",
]

[[package]]
name = "quote"
version = "1.0.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "864d3e96a899863136fc6e99f3d7cae289dafe43bf2c5ac19b70df7210c0a145"
dependencies = [
 "proc-macro2",
]

[[package]]
name = "rand"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
dependencies = [
 "libc",
 "rand_chacha",
 "rand_core",
]

[[package]]
name = "rand_chacha"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
dependencies = [
 "ppv-lite86",
 "rand_core",
]

[[package]]
name = "rand_core"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d34f1408f55294453790c48b2f1ebbb1c5b4b7563eb1f418bcfcfdbb06ebb4e7"
dependencies = [
 "getrandom",
]

[[package]]
name = "ryu"
version = "1.0.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73b4b750c782965c211b42f022f59af1fbceabdd026623714f104152f1ec149f"

[[package]]
name = "serde"
version = "1.0.136"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce31e24b01e1e524df96f1c2fdd054405f8d7376249a5110886fb4b658484789"
dependencies = [
 "serde_derive",
]

[[package]]
name = "serde_derive"
version = "1.0.136"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08597e7152fcd306f41838ed3e37be9eaeed2b61c42e2117266a554fab4662f9"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "serde_json"
version = "1.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e8d9fa5c3b304765ce1fd9c4c8a3de2c8db365a5b91be52f186efc675681d95"
dependencies = [
 "itoa",
 "ryu",
 "serde",
]

[[package]]
name = "sha2"
version = "0.10.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "55deaec60f81eefe3cce0dc50bda92d6d8e88f2a27df7c5033b42afeb1ed2676"
dependencies = [
 "cfg-if",
 "cpufeatures",
 "digest",
]

[[package]]
name = "sharded-slab"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "900fba806f70c630b0a382d0d825e17a0f19fcd059a2ade1ff237bcddf446b31"
dependencies = [
 "lazy_static",
]

[[package]]
name = "shared_child"
version = "0.3.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6be9f7d5565b1483af3e72975e2dee33879b3b86bd48c0929fccf6585d79e65a"
dependencies = [
 "libc",
 "winapi",
]

[[package]]
name = "smallvec"
version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2dd574626839106c320a323308629dcb1acfc96e32a8cba364ddc61ac23ee83"

[[package]]
name = "smol_str"
version = "0.1.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61d15c83e300cce35b7c8cd39ff567c1ef42dde6d4a1a38dbdbf9a59902261bd"
dependencies = [
 "serde",
]

[[package]]
name = "strsim"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73473c0e59e6d5812c5dfe2a064a6444949f089e20eec9a2e5506596494e4623"

[[package]]
name = "syn"
version = "1.0.86"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a65b3f4ffa0092e9887669db0eae07941f023991ab58ea44da8fe8e2d511c6b"
dependencies = [
 "proc-macro2",
 "quote",
 "unicode-xid",
]

[[package]]
name = "termcolor"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bab24d30b911b2376f3a13cc2cd443142f0c81dda04c118693e35b3835757755"
dependencies = [
 "winapi-util",
]

[[package]]
name = "textwrap"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1141d4d61095b28419e22cb0bbf02755f5e54e0526f97f1e3d1d160e60885fb"

[[package]]
name = "thiserror"
version = "1.0.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "854babe52e4df1653706b98fcfc05843010039b406875930a70e4d9644e5c417"
dependencies = [
 "thiserror-impl",
]

[[package]]
name = "thiserror-impl"
version = "1.0.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa32fd3f627f367fe16f893e2597ae3c05020f8bba2666a4e6ea73d377e5714b"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "thread_local"
version = "1.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5516c27b78311c50bf42c071425c560ac799b11c30b31f87e3081965fe5e0180"
dependencies = [
 "once_cell",
]

[[package]]
name = "tinyvec"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2c1c1d5a42b6245520c249549ec267180beaffcc0615401ac8e31853d4b6d8d2"
dependencies = [
 "tinyvec_macros",
]

[[package]]
name = "tinyvec_macros"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cda74da7e1a664f795bb1f8a87ec406fb89a02522cf6e50620d016add6dbbf5c"

[[package]]
name = "tracing"
version = "0.1.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6c650a8ef0cd2dd93736f033d21cbd1224c5a967aa0c258d00fcf7dafef9b9f"
dependencies = [
 "cfg-if",
 "log",
 "pin-project-lite",
 "tracing-attributes",
 "tracing-core",
]

[[package]]
name = "tracing-attributes"
version = "0.1.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8276d9a4a3a558d7b7ad5303ad50b53d58264641b82914b7ada36bd762e7a716"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "tracing-core"
version = "0.1.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03cfcb51380632a72d3111cb8d3447a8d908e577d31beeac006f836383d29a23"
dependencies = [
 "lazy_static",
 "valuable",
]

[[package]]
name = "tracing-log"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6923477a48e41c1951f1999ef8bb5a3023eb723ceadafe78ffb65dc366761e3"
dependencies = [
 "lazy_static",
 "log",
 "tracing-core",
]

[[package]]
name = "tracing-subscriber"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e0ab7bdc962035a87fba73f3acca9b8a8d0034c2e6f60b84aeaaddddc155dce"
dependencies = [
 "ansi_term",
 "sharded-slab",
 "smallvec",
 "thread_local",
 "tracing-core",
 "tracing-log",
]

[[package]]
name = "typenum"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dcf81ac59edc17cc8697ff311e8f5ef2d99fcbd9817b34cec66f90b6c3dfd987"

[[package]]
name = "unicode-segmentation"
version = "1.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e8820f5d777f6224dc4be3632222971ac30164d4a258d595640799554ebfd99"

[[package]]
name = "unicode-xid"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ccb82d61f80a663efe1f787a51b16b5a51e3314d6ac365b08639f52387b33f3"

[[package]]
name = "uuid"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc5cf98d8186244414c848017f0e2676b3fcb46807f6668a97dfe67359a3c4b7"
dependencies = [
 "getrandom",
 "serde",
]

[[package]]
name = "valuable"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "830b7e5d4d90034032940e4ace0d9a9a057e7a45cd94e6c007832e39edb82f6d"

[[package]]
name = "version_check"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"

[[package]]
name = "wasi"
version = "0.10.2+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd6fbd9a79829dd1ad0cc20627bf1ed606756a7f77edff7b66b7064f9cb327c6"

[[package]]
name = "wasm-bindgen"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25f1af7423d8588a3d840681122e72e6a24ddbcb3f0ec385cac0d12d24256c06"
dependencies = [
 "cfg-if",
 "wasm-bindgen-macro",
]

[[package]]
name = "wasm-bindgen-backend"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b21c0df030f5a177f3cba22e9bc4322695ec43e7257d865302900290bcdedca"
dependencies = [
 "bumpalo",
 "lazy_static",
 "log",
 "proc-macro2",
 "quote",
 "syn",
 "wasm-bindgen-shared",
]

[[package]]
name = "wasm-bindgen-macro"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f4203d69e40a52ee523b2529a773d5ffc1dc0071801c87b3d270b471b80ed01"
dependencies = [
 "quote",
 "wasm-bindgen-macro-support",
]

[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfa8a30d46208db204854cadbb5d4baf5fcf8071ba5bf48190c3e59937962ebc"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
 "wasm-bindgen-backend",
 "wasm-bindgen-shared",
]

[[package]]
name = "wasm-bindgen-shared"
version = "0.2.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d958d035c4438e28c70e4321a2911302f10135ce78a9c7834c0cab4123d06a2"

[[package]]
name = "web-sys"
version = "0.3.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c060b319f29dd25724f09a2ba1418f142f539b2be99fbf4d2d5a8f7330afb8eb"
dependencies = [
 "js-sys",
 "wasm-bindgen",
]

[[package]]
name = "winapi"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
dependencies = [
 "winapi-i686-pc-windows-gnu",
 "winapi-x86_64-pc-windows-gnu",
]

[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"

[[package]]
name = "winapi-util"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
dependencies = [
 "winapi",
]

[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
automerge-js/.gitignore (vendored, 2 deletions)
@@ -1,2 +0,0 @@
/node_modules
/yarn.lock
@@ -1,18 +0,0 @@
{
  "name": "automerge-js",
  "version": "0.1.0",
  "main": "src/index.js",
  "license": "MIT",
  "scripts": {
    "test": "mocha --bail --full-trace"
  },
  "devDependencies": {
    "mocha": "^9.1.1"
  },
  "dependencies": {
    "automerge-wasm": "file:../automerge-wasm",
    "fast-sha256": "^1.3.0",
    "pako": "^2.0.4",
    "uuid": "^8.3"
  }
}
@@ -1,18 +0,0 @@
// Properties of the document root object
//const OPTIONS = Symbol('_options')   // object containing options passed to init()
//const CACHE = Symbol('_cache')       // map from objectId to immutable object
const STATE = Symbol('_state')         // object containing metadata about current state (e.g. sequence numbers)
const HEADS = Symbol('_heads')         // cached heads of the state this document was rendered at
const OBJECT_ID = Symbol('_objectId')  // object ID of the object a proxy wraps (string)
const READ_ONLY = Symbol('_readOnly')  // whether the proxy rejects mutation
const FROZEN = Symbol('_frozen')       // whether this document has been superseded by a newer state

// Properties of all Automerge objects
//const OBJECT_ID = Symbol('_objectId')  // the object ID of the current object (string)
//const CONFLICTS = Symbol('_conflicts') // map or list (depending on object type) of conflicts
//const CHANGE = Symbol('_change')       // the context object on proxy objects used in change callback
//const ELEM_IDS = Symbol('_elemIds')    // list containing the element ID of each list element

module.exports = {
  STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN
}
@@ -1,372 +0,0 @@
const AutomergeWASM = require("automerge-wasm")
const uuid = require('./uuid')

let { rootProxy, listProxy, textProxy, mapProxy } = require("./proxies")
let { Counter } = require("./counter")
let { Text } = require("./text")
let { Int, Uint, Float64 } = require("./numbers")
let { STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN } = require("./constants")

function init(actor) {
  if (typeof actor != 'string') {
    actor = null
  }
  const state = AutomergeWASM.create(actor)
  return rootProxy(state, true);
}

function clone(doc) {
  const state = doc[STATE].clone()
  return rootProxy(state, true);
}

function free(doc) {
  return doc[STATE].free()
}

function from(data, actor) {
  let doc1 = init(actor)
  let doc2 = change(doc1, (d) => Object.assign(d, data))
  return doc2
}

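The constructors above wrap a wasm document handle in a root proxy. A minimal usage sketch (the `automerge-js` require name and the document fields are illustrative assumptions, not from this diff):

const Automerge = require('automerge-js')

// from() is init() plus a single change() that copies the initial data in
let doc = Automerge.from({ cards: [] })
let copy = Automerge.clone(doc)   // independent document sharing the same history
Automerge.free(copy)              // explicitly releases the wasm-side state
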
function change(doc, options, callback) {
  if (callback === undefined) {
    // FIXME implement options
    callback = options
    options = {}
  }
  if (typeof options === "string") {
    options = { message: options }
  }
  if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
    throw new RangeError("must be the document root");
  }
  if (doc[FROZEN] === true) {
    throw new RangeError("Attempting to use an outdated Automerge document")
  }
  if (!!doc[HEADS] === true) {
    throw new RangeError("Attempting to change an out of date document");
  }
  if (doc[READ_ONLY] === false) {
    throw new RangeError("Calls to Automerge.change cannot be nested")
  }
  const state = doc[STATE]
  const heads = state.getHeads()
  try {
    doc[HEADS] = heads
    doc[FROZEN] = true
    let root = rootProxy(state);
    callback(root)
    if (state.pendingOps() === 0) {
      doc[FROZEN] = false
      doc[HEADS] = undefined
      return doc
    } else {
      state.commit(options.message, options.time)
      return rootProxy(state, true);
    }
  } catch (e) {
    //console.log("ERROR: ",e)
    doc[FROZEN] = false
    doc[HEADS] = undefined
    state.rollback()
    throw e
  }
}

function emptyChange(doc, options) {
  if (options === undefined) {
    options = {}
  }
  if (typeof options === "string") {
    options = { message: options }
  }

  if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
    throw new RangeError("must be the document root");
  }
  if (doc[FROZEN] === true) {
    throw new RangeError("Attempting to use an outdated Automerge document")
  }
  if (doc[READ_ONLY] === false) {
    throw new RangeError("Calls to Automerge.change cannot be nested")
  }

  const state = doc[STATE]
  state.commit(options.message, options.time)
  return rootProxy(state, true);
}

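A usage sketch of the two commit paths above; both accept either an options object or a bare commit-message string (field names read directly from the code):

doc = Automerge.change(doc, 'add a card', (d) => {
  d.cards.push({ title: 'hello' })
})
// commits a change containing no ops; only message/time metadata is recorded
doc = Automerge.emptyChange(doc, { message: 'checkpoint' })
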
function load(data, actor) {
  const state = AutomergeWASM.load(data, actor)
  return rootProxy(state, true);
}

function save(doc) {
  const state = doc[STATE]
  return state.save()
}

function merge(local, remote) {
  if (local[HEADS] === true) {
    throw new RangeError("Attempting to change an out of date document");
  }
  const localState = local[STATE]
  const heads = localState.getHeads()
  const remoteState = remote[STATE]
  const changes = localState.getChangesAdded(remoteState)
  localState.applyChanges(changes)
  local[HEADS] = heads
  return rootProxy(localState, true)
}

function getActorId(doc) {
  const state = doc[STATE]
  return state.getActorId()
}

function conflictAt(context, objectId, prop) {
  let values = context.getAll(objectId, prop)
  if (values.length <= 1) {
    return
  }
  let result = {}
  for (const conflict of values) {
    const datatype = conflict[0]
    const value = conflict[1]
    switch (datatype) {
      case "map":
        result[value] = mapProxy(context, value, [ prop ], true)
        break;
      case "list":
        result[value] = listProxy(context, value, [ prop ], true)
        break;
      case "text":
        result[value] = textProxy(context, value, [ prop ], true)
        break;
      //case "table":
      //case "cursor":
      case "str":
      case "uint":
      case "int":
      case "f64":
      case "boolean":
      case "bytes":
      case "null":
        result[conflict[2]] = value
        break;
      case "counter":
        result[conflict[2]] = new Counter(value)
        break;
      case "timestamp":
        result[conflict[2]] = new Date(value)
        break;
      default:
        throw RangeError(`datatype ${datatype} unimplemented`)
    }
  }
  return result
}

function getConflicts(doc, prop) {
  const state = doc[STATE]
  const objectId = doc[OBJECT_ID]
  return conflictAt(state, objectId, prop)
}

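A sketch of how conflicts surface through getConflicts(); the exact key format in the result is whatever id the wasm layer reports for each conflicting value, shown here only schematically:

let a = Automerge.change(Automerge.init(), (d) => { d.x = 1 })
let b = Automerge.change(Automerge.init(), (d) => { d.x = 2 })
a = Automerge.merge(a, b)
// one write wins for a.x, but both candidate values stay readable:
const conflicts = Automerge.getConflicts(a, 'x')  // e.g. { '<id1>': 1, '<id2>': 2 }
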
function getLastLocalChange(doc) {
  const state = doc[STATE]
  try {
    return state.getLastLocalChange()
  } catch (e) {
    return
  }
}

function getObjectId(doc) {
  return doc[OBJECT_ID]
}

function getChanges(oldState, newState) {
  const o = oldState[STATE]
  const n = newState[STATE]
  const heads = oldState[HEADS]
  return n.getChanges(heads || o.getHeads())
}

function getAllChanges(doc) {
  const state = doc[STATE]
  return state.getChanges([])
}

function applyChanges(doc, changes) {
  if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
    throw new RangeError("must be the document root");
  }
  if (doc[FROZEN] === true) {
    throw new RangeError("Attempting to use an outdated Automerge document")
  }
  if (doc[READ_ONLY] === false) {
    throw new RangeError("Calls to Automerge.change cannot be nested")
  }
  const state = doc[STATE]
  const heads = state.getHeads()
  state.applyChanges(changes)
  doc[HEADS] = heads
  return [rootProxy(state, true)];
}

function getHistory(doc) {
  const actor = getActorId(doc)
  const history = getAllChanges(doc)
  return history.map((change, index) => ({
    get change () {
      return decodeChange(change)
    },
    get snapshot () {
      const [state] = applyChanges(init(), history.slice(0, index + 1))
      return state
    }
  }))
}

function equals(val1, val2) {
  // deep equality over plain objects and primitives
  // (isObject is expected to be in scope; see src/common)
  if (!isObject(val1) || !isObject(val2)) return val1 === val2
  const keys1 = Object.keys(val1).sort(), keys2 = Object.keys(val2).sort()
  if (keys1.length !== keys2.length) return false
  for (let i = 0; i < keys1.length; i++) {
    if (keys1[i] !== keys2[i]) return false
    if (!equals(val1[keys1[i]], val2[keys2[i]])) return false
  }
  return true
}

function encodeSyncMessage(msg) {
  return AutomergeWASM.encodeSyncMessage(msg)
}

function decodeSyncMessage(msg) {
  return AutomergeWASM.decodeSyncMessage(msg)
}

function encodeSyncState(state) {
  return AutomergeWASM.encodeSyncState(AutomergeWASM.importSyncState(state))
}

function decodeSyncState(state) {
  return AutomergeWASM.exportSyncState(AutomergeWASM.decodeSyncState(state))
}

function generateSyncMessage(doc, inState) {
  const state = doc[STATE]
  const syncState = AutomergeWASM.importSyncState(inState)
  const message = state.generateSyncMessage(syncState)
  const outState = AutomergeWASM.exportSyncState(syncState)
  return [ outState, message ]
}

function receiveSyncMessage(doc, inState, message) {
  const syncState = AutomergeWASM.importSyncState(inState)
  if (doc === undefined || doc[STATE] === undefined || doc[OBJECT_ID] !== "_root") {
    throw new RangeError("must be the document root");
  }
  if (doc[FROZEN] === true) {
    throw new RangeError("Attempting to use an outdated Automerge document")
  }
  if (!!doc[HEADS] === true) {
    throw new RangeError("Attempting to change an out of date document");
  }
  if (doc[READ_ONLY] === false) {
    throw new RangeError("Calls to Automerge.change cannot be nested")
  }
  const state = doc[STATE]
  const heads = state.getHeads()
  state.receiveSyncMessage(syncState, message)
  const outState = AutomergeWASM.exportSyncState(syncState)
  doc[HEADS] = heads
  return [rootProxy(state, true), outState, null];
}

function initSyncState() {
  // initSyncState() takes no arguments
  return AutomergeWASM.exportSyncState(AutomergeWASM.initSyncState())
}

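The functions above implement one round of the sync protocol over exported (plain-object) sync states. A sketch of an exchange between two in-memory peers, assuming generateSyncMessage signals completion with a null (or undefined) message:

let sA = Automerge.initSyncState(), sB = Automerge.initSyncState()
let msg
;[sA, msg] = Automerge.generateSyncMessage(docA, sA)
if (msg) {
  ;[docB, sB] = Automerge.receiveSyncMessage(docB, sB, msg)
}
// then generate from docB with sB and deliver to docA, repeating until
// neither direction has a message left to send
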
function encodeChange(change) {
  return AutomergeWASM.encodeChange(change)
}

function decodeChange(data) {
  return AutomergeWASM.decodeChange(data)
}

function getMissingDeps(doc, heads) {
  const state = doc[STATE]
  return state.getMissingDeps(heads)
}

function getHeads(doc) {
  const state = doc[STATE]
  return doc[HEADS] || state.getHeads()
}

function dump(doc) {
  const state = doc[STATE]
  state.dump()
}

function toJS(doc) {
  if (typeof doc === "object") {
    if (doc instanceof Uint8Array) {
      return doc
    }
    if (doc === null) {
      return doc
    }
    if (doc instanceof Array) {
      return doc.map((a) => toJS(a))
    }
    if (doc instanceof Text) {
      return doc.map((a) => toJS(a))
    }
    let tmp = {}
    for (const index in doc) {
      tmp[index] = toJS(doc[index])
    }
    return tmp
  } else {
    return doc
  }
}

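toJS() materializes a proxy tree into plain JavaScript data, which is useful before handing a document to code that expects ordinary mutable objects:

const plain = Automerge.toJS(doc)
// plain is a deep copy: arrays and Text become arrays, maps become plain
// objects, and Uint8Array values pass through unchanged
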
module.exports = {
  init, from, change, emptyChange, clone, free,
  load, save, merge, getChanges, getAllChanges, applyChanges,
  getLastLocalChange, getObjectId, getActorId, getConflicts,
  encodeChange, decodeChange, equals, getHistory, getHeads, uuid,
  generateSyncMessage, receiveSyncMessage, initSyncState,
  decodeSyncMessage, encodeSyncMessage, decodeSyncState, encodeSyncState,
  getMissingDeps,
  dump, Text, Counter, Int, Uint, Float64, toJS,
}

// deprecated
// Frontend, setDefaultBackend, Backend

// more...
/*
for (let name of ['getObjectId', 'getObjectById',
     'setActorId',
     'Text', 'Table', 'Counter', 'Observable' ]) {
    module.exports[name] = Frontend[name]
}
*/

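A round-trip sketch tying the exported API together (variable names illustrative):

const bytes = Automerge.save(doc)        // full document as a Uint8Array
const restored = Automerge.load(bytes)   // equivalent document state
// incremental alternative: send only what the receiver is missing
const changes = Automerge.getChanges(olderDoc, doc)
const [caughtUp] = Automerge.applyChanges(Automerge.clone(olderDoc), changes)
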
@@ -1,33 +0,0 @@
// Convenience classes to allow users to strictly specify the number type they want

class Int {
  constructor(value) {
    if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= Number.MIN_SAFE_INTEGER)) {
      throw new RangeError(`Value ${value} cannot be an integer`)
    }
    this.value = value
    Object.freeze(this)
  }
}

class Uint {
  constructor(value) {
    if (!(Number.isInteger(value) && value <= Number.MAX_SAFE_INTEGER && value >= 0)) {
      throw new RangeError(`Value ${value} cannot be a uint`)
    }
    this.value = value
    Object.freeze(this)
  }
}

class Float64 {
  constructor(value) {
    if (typeof value !== 'number') {
      throw new RangeError(`Value ${value} cannot be a float64`)
    }
    this.value = value || 0.0
    Object.freeze(this)
  }
}

module.exports = { Int, Uint, Float64 }

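These wrappers matter because a bare JS number is classified by value (integers become "int", everything else "f64", per import_value in the proxies file below). A sketch:

doc = Automerge.change(doc, (d) => {
  d.count = new Automerge.Int(-5)     // stored with datatype "int"
  d.size  = new Automerge.Uint(42)    // stored with datatype "uint"
  d.ratio = new Automerge.Float64(2)  // forces "f64" even for a whole number
})
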
@@ -1,617 +0,0 @@
const AutomergeWASM = require("automerge-wasm")
const { Int, Uint, Float64 } = require("./numbers");
const { Counter, getWriteableCounter } = require("./counter");
const { Text } = require("./text");
const { STATE, HEADS, FROZEN, OBJECT_ID, READ_ONLY } = require("./constants")

function parseListIndex(key) {
  if (typeof key === 'string' && /^[0-9]+$/.test(key)) key = parseInt(key, 10)
  if (typeof key !== 'number') {
    // throw new TypeError('A list index must be a number, but you passed ' + JSON.stringify(key))
    return key
  }
  if (key < 0 || isNaN(key) || key === Infinity || key === -Infinity) {
    throw new RangeError('A list index must be positive, but you passed ' + key)
  }
  return key
}

function valueAt(target, prop) {
  const { context, objectId, path, readonly, heads } = target
  let value = context.get(objectId, prop, heads)
  if (value === undefined) {
    return
  }
  const datatype = value[0]
  const val = value[1]
  switch (datatype) {
    case undefined: return;
    case "map": return mapProxy(context, val, [ ...path, prop ], readonly, heads);
    case "list": return listProxy(context, val, [ ...path, prop ], readonly, heads);
    case "text": return textProxy(context, val, [ ...path, prop ], readonly, heads);
    //case "table":
    //case "cursor":
    case "str": return val;
    case "uint": return val;
    case "int": return val;
    case "f64": return val;
    case "boolean": return val;
    case "null": return null;
    case "bytes": return val;
    case "timestamp": return val;
    case "counter": {
      if (readonly) {
        return new Counter(val);
      } else {
        return getWriteableCounter(val, context, path, objectId, prop)
      }
    }
    default:
      throw RangeError(`datatype ${datatype} unimplemented`)
  }
}

function import_value(value) {
  switch (typeof value) {
    case 'object':
      if (value == null) {
        return [ null, "null" ]
      } else if (value instanceof Uint) {
        return [ value.value, "uint" ]
      } else if (value instanceof Int) {
        return [ value.value, "int" ]
      } else if (value instanceof Float64) {
        return [ value.value, "f64" ]
      } else if (value instanceof Counter) {
        return [ value.value, "counter" ]
      } else if (value instanceof Date) {
        return [ value.getTime(), "timestamp" ]
      } else if (value instanceof Uint8Array) {
        return [ value, "bytes" ]
      } else if (value instanceof Array) {
        return [ value, "list" ]
      } else if (value instanceof Text) {
        return [ value, "text" ]
      } else if (value[OBJECT_ID]) {
        throw new RangeError('Cannot create a reference to an existing document object')
      } else {
        return [ value, "map" ]
      }
    case 'boolean':
      return [ value, "boolean" ]
    case 'number':
      if (Number.isInteger(value)) {
        return [ value, "int" ]
      } else {
        return [ value, "f64" ]
      }
    case 'string':
      return [ value ]
    default:
      throw new RangeError(`Unsupported type of value: ${typeof value}`)
  }
}

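For reference, how import_value classifies a few representative inputs (return values transcribed from the switch above):

import_value(null)                 // → [null, "null"]
import_value(new Date(1000))       // → [1000, "timestamp"]
import_value(new Uint8Array([1]))  // → [value, "bytes"]
import_value(7)                    // → [7, "int"]
import_value(7.5)                  // → [7.5, "f64"]
import_value("hi")                 // → ["hi"]  (no datatype tag; put() applies its default)
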
const MapHandler = {
  get (target, key) {
    const { context, objectId, path, readonly, frozen, heads, cache } = target
    if (key === Symbol.toStringTag) { return target[Symbol.toStringTag] }
    if (key === OBJECT_ID) return objectId
    if (key === READ_ONLY) return readonly
    if (key === FROZEN) return frozen
    if (key === HEADS) return heads
    if (key === STATE) return context;
    if (!cache[key]) {
      cache[key] = valueAt(target, key)
    }
    return cache[key]
  },

  set (target, key, val) {
    let { context, objectId, path, readonly, frozen } = target
    target.cache = {} // reset cache on set
    if (val && val[OBJECT_ID]) {
      throw new RangeError('Cannot create a reference to an existing document object')
    }
    if (key === FROZEN) {
      target.frozen = val
      return
    }
    if (key === HEADS) {
      target.heads = val
      return
    }
    let [ value, datatype ] = import_value(val)
    if (frozen) {
      throw new RangeError("Attempting to use an outdated Automerge document")
    }
    if (readonly) {
      throw new RangeError(`Object property "${key}" cannot be modified`)
    }
    switch (datatype) {
      case "list": {
        const list = context.putObject(objectId, key, [])
        const proxyList = listProxy(context, list, [ ...path, key ], readonly)
        for (let i = 0; i < value.length; i++) {
          proxyList[i] = value[i]
        }
        break
      }
      case "text": {
        const text = context.putObject(objectId, key, "", "text")
        const proxyText = textProxy(context, text, [ ...path, key ], readonly)
        for (let i = 0; i < value.length; i++) {
          proxyText[i] = value.get(i)
        }
        break
      }
      case "map": {
        const map = context.putObject(objectId, key, {})
        const proxyMap = mapProxy(context, map, [ ...path, key ], readonly)
        for (const key in value) {
          proxyMap[key] = value[key]
        }
        break
      }
      default:
        context.put(objectId, key, value, datatype)
    }
    return true
  },

  deleteProperty (target, key) {
    const { context, objectId, path, readonly, frozen } = target
    target.cache = {} // reset cache on delete
    if (readonly) {
      throw new RangeError(`Object property "${key}" cannot be modified`)
    }
    context.delete(objectId, key)
    return true
  },

  has (target, key) {
    const value = this.get(target, key)
    return value !== undefined
  },

  getOwnPropertyDescriptor (target, key) {
    const { context, objectId } = target
    const value = this.get(target, key)
    if (typeof value !== 'undefined') {
      return {
        configurable: true, enumerable: true, value
      }
    }
  },

  ownKeys (target) {
    const { context, objectId, heads } = target
    return context.keys(objectId, heads)
  },
}

const ListHandler = {
  get (target, index) {
    const { context, objectId, path, readonly, frozen, heads } = target
    index = parseListIndex(index)
    // Array.prototype has no has(); includes() is assumed to be the intent here
    if (index === Symbol.hasInstance) { return (instance) => { return [].includes(instance) } }
    if (index === Symbol.toStringTag) { return target[Symbol.toStringTag] }
    if (index === OBJECT_ID) return objectId
    if (index === READ_ONLY) return readonly
    if (index === FROZEN) return frozen
    if (index === HEADS) return heads
    if (index === STATE) return context;
    if (index === 'length') return context.length(objectId, heads);
    if (index === Symbol.iterator) {
      let i = 0;
      return function *() {
        // FIXME - ugly
        let value = valueAt(target, i)
        while (value !== undefined) {
          yield value
          i += 1
          value = valueAt(target, i)
        }
      }
    }
    if (typeof index === 'number') {
      return valueAt(target, index)
    } else {
      return listMethods(target)[index]
    }
  },

  set (target, index, val) {
    let { context, objectId, path, readonly, frozen } = target
    index = parseListIndex(index)
    if (val && val[OBJECT_ID]) {
      throw new RangeError('Cannot create a reference to an existing document object')
    }
    if (index === FROZEN) {
      target.frozen = val
      return
    }
    if (index === HEADS) {
      target.heads = val
      return
    }
    if (typeof index == "string") {
      throw new RangeError('list index must be a number')
    }
    const [ value, datatype ] = import_value(val)
    if (frozen) {
      throw new RangeError("Attempting to use an outdated Automerge document")
    }
    if (readonly) {
      throw new RangeError(`Object property "${index}" cannot be modified`)
    }
    switch (datatype) {
      case "list": {
        let list
        if (index >= context.length(objectId)) {
          list = context.insertObject(objectId, index, [])
        } else {
          list = context.putObject(objectId, index, [])
        }
        const proxyList = listProxy(context, list, [ ...path, index ], readonly)
        proxyList.splice(0, 0, ...value)
        break
      }
      case "text": {
        let text
        if (index >= context.length(objectId)) {
          text = context.insertObject(objectId, index, "", "text")
        } else {
          text = context.putObject(objectId, index, "", "text")
        }
        const proxyText = textProxy(context, text, [ ...path, index ], readonly)
        proxyText.splice(0, 0, ...value)
        break
      }
      case "map": {
        let map
        if (index >= context.length(objectId)) {
          map = context.insertObject(objectId, index, {})
        } else {
          map = context.putObject(objectId, index, {})
        }
        const proxyMap = mapProxy(context, map, [ ...path, index ], readonly)
        for (const key in value) {
          proxyMap[key] = value[key]
        }
        break
      }
      default:
        if (index >= context.length(objectId)) {
          context.insert(objectId, index, value, datatype)
        } else {
          context.put(objectId, index, value, datatype)
        }
    }
    return true
  },

  deleteProperty (target, index) {
    const { context, objectId } = target
    index = parseListIndex(index)
    if (context.get(objectId, index)[0] == "counter") {
      throw new TypeError('Unsupported operation: deleting a counter from a list')
    }
    context.delete(objectId, index)
    return true
  },

  has (target, index) {
    const { context, objectId, heads } = target
    index = parseListIndex(index)
    if (typeof index === 'number') {
      return index < context.length(objectId, heads)
    }
    return index === 'length'
  },

  getOwnPropertyDescriptor (target, index) {
    const { context, objectId, path, readonly, frozen, heads } = target

    if (index === 'length') return { writable: true, value: context.length(objectId, heads) }
    if (index === OBJECT_ID) return { configurable: false, enumerable: false, value: objectId }

    index = parseListIndex(index)

    let value = valueAt(target, index)
    return { configurable: true, enumerable: true, value }
  },

  getPrototypeOf (target) { return Object.getPrototypeOf([]) },
  ownKeys (target) {
    const { context, objectId, heads } = target
    let keys = []
    // uncommenting this causes assert.deepEqual() to fail when comparing to a pojo array
    // but not uncommenting it causes for (i in list) {} to not enumerate values properly
    //for (let i = 0; i < target.context.length(objectId, heads); i++) { keys.push(i.toString()) }
    keys.push("length");
    return keys
  }
}

const TextHandler = Object.assign({}, ListHandler, {
  get (target, index) {
    // FIXME this is a one line change from ListHandler.get()
    const { context, objectId, path, readonly, frozen, heads } = target
    index = parseListIndex(index)
    if (index === Symbol.toStringTag) { return target[Symbol.toStringTag] }
    if (index === Symbol.hasInstance) { return (instance) => { return [].includes(instance) } }
    if (index === OBJECT_ID) return objectId
    if (index === READ_ONLY) return readonly
    if (index === FROZEN) return frozen
    if (index === HEADS) return heads
    if (index === STATE) return context;
    if (index === 'length') return context.length(objectId, heads);
    if (index === Symbol.iterator) {
      let i = 0;
      return function *() {
        let value = valueAt(target, i)
        while (value !== undefined) {
          yield value
          i += 1
          value = valueAt(target, i)
        }
      }
    }
    if (typeof index === 'number') {
      return valueAt(target, index)
    } else {
      return textMethods(target)[index] || listMethods(target)[index]
    }
  },
  getPrototypeOf (target) {
    return Object.getPrototypeOf(new Text())
  },
})

function mapProxy(context, objectId, path, readonly, heads) {
  return new Proxy({ context, objectId, path, readonly: !!readonly, frozen: false, heads, cache: {} }, MapHandler)
}

function listProxy(context, objectId, path, readonly, heads) {
  let target = []
  Object.assign(target, { context, objectId, path, readonly: !!readonly, frozen: false, heads, cache: {} })
  return new Proxy(target, ListHandler)
}

function textProxy(context, objectId, path, readonly, heads) {
  let target = []
  Object.assign(target, { context, objectId, path, readonly: !!readonly, frozen: false, heads, cache: {} })
  return new Proxy(target, TextHandler)
}

function rootProxy(context, readonly) {
  return mapProxy(context, "_root", [], readonly)
}

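A sketch of how reads and writes dispatch through these proxies (assuming a document with a list under `items`):

const list = doc.items   // a listProxy wrapping the list's object id
list[0]                  // numeric key → valueAt(target, 0)
list.length              // answered directly by context.length(objectId)
list.push('x')           // non-numeric key → listMethods(target).push
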
function listMethods(target) {
  const { context, objectId, path, readonly, frozen, heads } = target
  const methods = {
    deleteAt(index, numDelete) {
      if (typeof numDelete === 'number') {
        context.splice(objectId, index, numDelete)
      } else {
        context.delete(objectId, index)
      }
      return this
    },

    fill(val, start, end) {
      // FIXME: needs tests; classify the fill value once, then overwrite each slot
      const [value, datatype] = import_value(val)
      const length = context.length(objectId)
      for (let index = parseListIndex(start || 0); index < parseListIndex(end || length); index++) {
        context.put(objectId, index, value, datatype)
      }
      return this
    },

    indexOf(o, start = 0) {
      // FIXME
      const id = o[OBJECT_ID]
      if (id) {
        const list = context.getObject(objectId)
        for (let index = start; index < list.length; index++) {
          if (list[index][OBJECT_ID] === id) {
            return index
          }
        }
        return -1
      } else {
        return context.indexOf(objectId, o, start)
      }
    },

    insertAt(index, ...values) {
      this.splice(index, 0, ...values)
      return this
    },

    pop() {
      let length = context.length(objectId)
      if (length == 0) {
        return undefined
      }
      let last = valueAt(target, length - 1)
      context.delete(objectId, length - 1)
      return last
    },

    push(...values) {
      let len = context.length(objectId)
      this.splice(len, 0, ...values)
      return context.length(objectId)
    },

    shift() {
      if (context.length(objectId) == 0) return
      const first = valueAt(target, 0)
      context.delete(objectId, 0)
      return first
    },

    splice(index, del, ...vals) {
      index = parseListIndex(index)
      del = parseListIndex(del)
      for (let val of vals) {
        if (val && val[OBJECT_ID]) {
          throw new RangeError('Cannot create a reference to an existing document object')
        }
      }
      if (frozen) {
        throw new RangeError("Attempting to use an outdated Automerge document")
      }
      if (readonly) {
        throw new RangeError("Sequence object cannot be modified outside of a change block")
      }
      let result = []
      for (let i = 0; i < del; i++) {
        let value = valueAt(target, index)
        result.push(value)
        context.delete(objectId, index)
      }
      const values = vals.map((val) => import_value(val))
      for (let [value, datatype] of values) {
        switch (datatype) {
          case "list": {
            const list = context.insertObject(objectId, index, [])
            const proxyList = listProxy(context, list, [ ...path, index ], readonly)
            proxyList.splice(0, 0, ...value)
            break
          }
          case "text": {
            const text = context.insertObject(objectId, index, "", "text")
            const proxyText = textProxy(context, text, [ ...path, index ], readonly)
            proxyText.splice(0, 0, ...value)
            break
          }
          case "map": {
            const map = context.insertObject(objectId, index, {})
            const proxyMap = mapProxy(context, map, [ ...path, index ], readonly)
            for (const key in value) {
              proxyMap[key] = value[key]
            }
            break
          }
          default:
            context.insert(objectId, index, value, datatype)
        }
        index += 1
      }
      return result
    },

    unshift(...values) {
      this.splice(0, 0, ...values)
      return context.length(objectId)
    },

    entries() {
      let i = 0;
      const iterator = {
        next: () => {
          let value = valueAt(target, i)
          if (value === undefined) {
            return { value: undefined, done: true }
          } else {
            return { value: [ i++, value ], done: false }
          }
        },
        [Symbol.iterator]() { return this },  // make the iterator itself iterable
      }
      return iterator
    },

    keys() {
      let i = 0;
      let len = context.length(objectId, heads)
      const iterator = {
        next: () => {
          let value = undefined
          if (i < len) { value = i; i++ }
          return { value, done: value === undefined }
        },
        [Symbol.iterator]() { return this },
      }
      return iterator
    },

    values() {
      let i = 0;
      const iterator = {
        next: () => {
          let value = valueAt(target, i++)
          if (value === undefined) {
            return { value: undefined, done: true }
          } else {
            return { value, done: false }
          }
        },
        [Symbol.iterator]() { return this },
      }
      return iterator
    }
  }

  // Read-only methods that can delegate to the JavaScript built-in implementations
  // FIXME - super slow
  for (let method of ['concat', 'every', 'filter', 'find', 'findIndex', 'forEach', 'includes',
                      'join', 'lastIndexOf', 'map', 'reduce', 'reduceRight',
                      'slice', 'some', 'toLocaleString', 'toString']) {
    methods[method] = (...args) => {
      // materialize the whole list, then defer to Array.prototype
      const list = []
      while (true) {
        let value = valueAt(target, list.length)
        if (value == undefined) {
          break
        }
        list.push(value)
      }

      return list[method](...args)
    }
  }

  return methods
}

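With the iterator objects above exposing Symbol.iterator, the usual consumption patterns work (sketch; `doc.items` is an assumed list field):

for (const [i, v] of doc.items.entries()) {
  console.log(i, v)
}
const snapshot = [...doc.items.values()]
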
function textMethods(target) {
  const { context, objectId, path, readonly, frozen, heads } = target
  const methods = {
    set (index, value) {
      return this[index] = value
    },
    get (index) {
      return this[index]
    },
    toString () {
      // strip the placeholder characters that stand in for embedded objects;
      // the character between the slashes was lost in the original listing,
      // U+FFFC (object replacement character) is assumed here
      return context.text(objectId, heads).replace(/\uFFFC/g, '')
    },
    toSpans () {
      let spans = []
      let chars = ''
      let length = this.length
      for (let i = 0; i < length; i++) {
        const value = this[i]
        if (typeof value === 'string') {
          chars += value
        } else {
          if (chars.length > 0) {
            spans.push(chars)
            chars = ''
          }
          spans.push(value)
        }
      }
      if (chars.length > 0) {
        spans.push(chars)
      }
      return spans
    },
    toJSON () {
      return this.toString()
    }
  }
  return methods
}

module.exports = { rootProxy, textProxy, listProxy, mapProxy, MapHandler, ListHandler, TextHandler }

@@ -1,132 +0,0 @@
const { OBJECT_ID } = require('./constants')
const { isObject } = require('../src/common')

class Text {
  constructor (text) {
    const instance = Object.create(Text.prototype)
    if (typeof text === 'string') {
      instance.elems = [...text]
    } else if (Array.isArray(text)) {
      instance.elems = text
    } else if (text === undefined) {
      instance.elems = []
    } else {
      throw new TypeError(`Unsupported initial value for Text: ${text}`)
    }
    return instance
  }

  get length () {
    return this.elems.length
  }

  get (index) {
    return this.elems[index]
  }

  getElemId (index) {
    return undefined
  }

  /**
   * Iterates over the text elements character by character, including any
   * inline objects.
   */
  [Symbol.iterator] () {
    let elems = this.elems, index = -1
    return {
      next () {
        index += 1
        if (index < elems.length) {
          return {done: false, value: elems[index]}
        } else {
          return {done: true}
        }
      }
    }
  }

  /**
   * Returns the content of the Text object as a simple string, ignoring any
   * non-character elements.
   */
  toString() {
    // Concatting to a string is faster than creating an array and then
    // .join()ing for small (<100KB) arrays.
    // https://jsperf.com/join-vs-loop-w-type-test
    let str = ''
    for (const elem of this.elems) {
      if (typeof elem === 'string') str += elem
    }
    return str
  }

  /**
   * Returns the content of the Text object as a sequence of strings,
   * interleaved with non-character elements.
   *
   * For example, the value ['a', 'b', {x: 3}, 'c', 'd'] has spans:
   * => ['ab', {x: 3}, 'cd']
   */
  toSpans() {
    let spans = []
    let chars = ''
    for (const elem of this.elems) {
      if (typeof elem === 'string') {
        chars += elem
      } else {
        if (chars.length > 0) {
          spans.push(chars)
          chars = ''
        }
        spans.push(elem)
      }
    }
    if (chars.length > 0) {
      spans.push(chars)
    }
    return spans
  }

  /**
   * Returns the content of the Text object as a simple string, so that the
   * JSON serialization of an Automerge document represents text nicely.
   */
  toJSON() {
    return this.toString()
  }

  /**
   * Updates the list item at position `index` to a new value `value`.
   */
  set (index, value) {
    this.elems[index] = value
  }

  /**
   * Inserts new list items `values` starting at position `index`.
   */
  insertAt(index, ...values) {
    this.elems.splice(index, 0, ...values)
  }

  /**
   * Deletes `numDelete` list items starting at position `index`.
   * If `numDelete` is not given, one item is deleted.
   */
  deleteAt(index, numDelete = 1) {
    this.elems.splice(index, numDelete)
  }
}

// Read-only methods that can delegate to the JavaScript built-in array
for (let method of ['concat', 'every', 'filter', 'find', 'findIndex', 'forEach', 'includes',
                    'indexOf', 'join', 'lastIndexOf', 'map', 'reduce', 'reduceRight',
                    'slice', 'some', 'toLocaleString']) {
  Text.prototype[method] = function (...args) {
    const array = [...this]
    return array[method](...args)
  }
}

module.exports = { Text }

@@ -1,16 +0,0 @@
const { v4: uuid } = require('uuid')

function defaultFactory() {
  return uuid().replace(/-/g, '')
}

let factory = defaultFactory

function makeUuid() {
  return factory()
}

makeUuid.setFactory = newFactory => { factory = newFactory }
makeUuid.reset = () => { factory = defaultFactory }

module.exports = makeUuid

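Tests can make generated ids deterministic by swapping the factory (sketch; `uuid` here is the required module, i.e. makeUuid):

const uuid = require('./uuid')
let n = 0
uuid.setFactory(() => `0000000${n++}`)  // any string the caller accepts
uuid()         // → '00000000'
uuid.reset()   // restore random v4 uuids with the dashes stripped
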
@@ -1,164 +0,0 @@
const assert = require('assert')
const util = require('util')
const Automerge = require('..')

describe('Automerge', () => {
  describe('basics', () => {
    it('should init clone and free', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.clone(doc1);
    })

    it('handle basic set and read on root object', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.change(doc1, (d) => {
        d.hello = "world"
        d.big = "little"
        d.zip = "zop"
        d.app = "dap"
        assert.deepEqual(d, { hello: "world", big: "little", zip: "zop", app: "dap" })
      })
      assert.deepEqual(doc2, { hello: "world", big: "little", zip: "zop", app: "dap" })
    })

    it('handle basic sets over many changes', () => {
      let doc1 = Automerge.init()
      let timestamp = new Date();
      let counter = new Automerge.Counter(100);
      let bytes = new Uint8Array([10,11,12]);
      let doc2 = Automerge.change(doc1, (d) => {
        d.hello = "world"
      })
      let doc3 = Automerge.change(doc2, (d) => {
        d.counter1 = counter
      })
      let doc4 = Automerge.change(doc3, (d) => {
        d.timestamp1 = timestamp
      })
      let doc5 = Automerge.change(doc4, (d) => {
        d.app = null
      })
      let doc6 = Automerge.change(doc5, (d) => {
        d.bytes1 = bytes
      })
      let doc7 = Automerge.change(doc6, (d) => {
        d.uint = new Automerge.Uint(1)
        d.int = new Automerge.Int(-1)
        d.float64 = new Automerge.Float64(5.5)
        d.number1 = 100
        d.number2 = -45.67
        d.true = true
        d.false = false
      })

      assert.deepEqual(doc7, { hello: "world", true: true, false: false, int: -1, uint: 1, float64: 5.5, number1: 100, number2: -45.67, counter1: counter, timestamp1: timestamp, bytes1: bytes, app: null })

      let changes = Automerge.getAllChanges(doc7)
      let t1 = Automerge.init()
      ;let [t2] = Automerge.applyChanges(t1, changes)
      assert.deepEqual(doc7, t2)
    })

    it('handle overwrites to values', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.change(doc1, (d) => {
        d.hello = "world1"
      })
      let doc3 = Automerge.change(doc2, (d) => {
        d.hello = "world2"
      })
      let doc4 = Automerge.change(doc3, (d) => {
        d.hello = "world3"
      })
      let doc5 = Automerge.change(doc4, (d) => {
        d.hello = "world4"
      })
      assert.deepEqual(doc5, { hello: "world4" })
    })

    it('handle set with object value', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.change(doc1, (d) => {
        d.subobj = { hello: "world", subsubobj: { zip: "zop" } }
      })
      assert.deepEqual(doc2, { subobj: { hello: "world", subsubobj: { zip: "zop" } } })
    })

    it('handle simple list creation', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.change(doc1, (d) => d.list = [])
      assert.deepEqual(doc2, { list: [] })
    })

    it('handle simple lists', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.change(doc1, (d) => {
        d.list = [ 1, 2, 3 ]
      })
      assert.deepEqual(doc2.list.length, 3)
      assert.deepEqual(doc2.list[0], 1)
      assert.deepEqual(doc2.list[1], 2)
      assert.deepEqual(doc2.list[2], 3)
      assert.deepEqual(doc2, { list: [1,2,3] })
      // assert.deepStrictEqual(Automerge.toJS(doc2), { list: [1,2,3] })

      let doc3 = Automerge.change(doc2, (d) => {
        d.list[1] = "a"
      })

      assert.deepEqual(doc3.list.length, 3)
      assert.deepEqual(doc3.list[0], 1)
      assert.deepEqual(doc3.list[1], "a")
      assert.deepEqual(doc3.list[2], 3)
      assert.deepEqual(doc3, { list: [1,"a",3] })
    })
    it('handle simple lists', () => {
      let doc1 = Automerge.init()
      let doc2 = Automerge.change(doc1, (d) => {
        d.list = [ 1, 2, 3 ]
      })
      let changes = Automerge.getChanges(doc1, doc2)
      let docB1 = Automerge.init()
      ;let [docB2] = Automerge.applyChanges(docB1, changes)
      assert.deepEqual(docB2, doc2);
    })
    it('handle text', () => {
      let doc1 = Automerge.init()
      let tmp = new Automerge.Text("hello")
      let doc2 = Automerge.change(doc1, (d) => {
        d.list = new Automerge.Text("hello")
        d.list.insertAt(2, "Z")
      })
      let changes = Automerge.getChanges(doc1, doc2)
      let docB1 = Automerge.init()
      ;let [docB2] = Automerge.applyChanges(docB1, changes)
      assert.deepEqual(docB2, doc2);
    })

    it('have many list methods', () => {
      let doc1 = Automerge.from({ list: [1,2,3] })
      assert.deepEqual(doc1, { list: [1,2,3] });
      let doc2 = Automerge.change(doc1, (d) => {
        d.list.splice(1,1,9,10)
      })
      assert.deepEqual(doc2, { list: [1,9,10,3] });
      let doc3 = Automerge.change(doc2, (d) => {
        d.list.push(11,12)
      })
      assert.deepEqual(doc3, { list: [1,9,10,3,11,12] });
      let doc4 = Automerge.change(doc3, (d) => {
        d.list.unshift(2,2)
      })
      assert.deepEqual(doc4, { list: [2,2,1,9,10,3,11,12] });
      let doc5 = Automerge.change(doc4, (d) => {
        d.list.shift()
      })
      assert.deepEqual(doc5, { list: [2,1,9,10,3,11,12] });
      let doc6 = Automerge.change(doc5, (d) => {
        d.list.insertAt(3,100,101)
      })
      assert.deepEqual(doc6, { list: [2,1,9,100,101,10,3,11,12] });
    })
  })
})

@@ -1,97 +0,0 @@
const assert = require('assert')
const { checkEncoded } = require('./helpers')
const Automerge = require('..')
const { encodeChange, decodeChange } = Automerge

describe('change encoding', () => {
  it('should encode text edits', () => {
    /*
    const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: '', deps: [], ops: [
      {action: 'makeText', obj: '_root', key: 'text', insert: false, pred: []},
      {action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
      {action: 'del', obj: '1@aaaa', elemId: '2@aaaa', insert: false, pred: ['2@aaaa']},
      {action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
      {action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
    ]}
    */
    const change1 = {actor: 'aaaa', seq: 1, startOp: 1, time: 9, message: null, deps: [], ops: [
      {action: 'makeText', obj: '_root', key: 'text', pred: []},
      {action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'h', pred: []},
      {action: 'del', obj: '1@aaaa', elemId: '2@aaaa', pred: ['2@aaaa']},
      {action: 'set', obj: '1@aaaa', elemId: '_head', insert: true, value: 'H', pred: []},
      {action: 'set', obj: '1@aaaa', elemId: '4@aaaa', insert: true, value: 'i', pred: []}
    ]}
    checkEncoded(encodeChange(change1), [
      0x85, 0x6f, 0x4a, 0x83, // magic bytes
      0xe2, 0xbd, 0xfb, 0xf5, // checksum
      1, 94, 0, 2, 0xaa, 0xaa, // chunkType: change, length, deps, actor 'aaaa'
      1, 1, 9, 0, 0, // seq, startOp, time, message, actor list
      12, 0x01, 4, 0x02, 4, // column count, objActor, objCtr
      0x11, 8, 0x13, 7, 0x15, 8, // keyActor, keyCtr, keyStr
      0x34, 4, 0x42, 6, // insert, action
      0x56, 6, 0x57, 3, // valLen, valRaw
      0x70, 6, 0x71, 2, 0x73, 2, // predNum, predActor, predCtr
      0, 1, 4, 0, // objActor column: null, 0, 0, 0, 0
      0, 1, 4, 1, // objCtr column: null, 1, 1, 1, 1
      0, 2, 0x7f, 0, 0, 1, 0x7f, 0, // keyActor column: null, null, 0, null, 0
      0, 1, 0x7c, 0, 2, 0x7e, 4, // keyCtr column: null, 0, 2, 0, 4
      0x7f, 4, 0x74, 0x65, 0x78, 0x74, 0, 4, // keyStr column: 'text', null, null, null, null
      1, 1, 1, 2, // insert column: false, true, false, true, true
      0x7d, 4, 1, 3, 2, 1, // action column: makeText, set, del, set, set
      0x7d, 0, 0x16, 0, 2, 0x16, // valLen column: 0, 0x16, 0, 0x16, 0x16
      0x68, 0x48, 0x69, // valRaw column: 'h', 'H', 'i'
      2, 0, 0x7f, 1, 2, 0, // predNum column: 0, 0, 1, 0, 0
      0x7f, 0, // predActor column: 0
      0x7f, 2 // predCtr column: 2
    ])
    const decoded = decodeChange(encodeChange(change1))
    assert.deepStrictEqual(decoded, Object.assign({hash: decoded.hash}, change1))
  })

  // FIXME - skipping this b/c it was never implemented in the rust impl and isn't trivial
  /*
  it.skip('should require strict ordering of preds', () => {
    const change = new Uint8Array([
      133, 111, 74, 131, 31, 229, 112, 44, 1, 105, 1, 58, 30, 190, 100, 253, 180, 180, 66, 49, 126,
      81, 142, 10, 3, 35, 140, 189, 231, 34, 145, 57, 66, 23, 224, 149, 64, 97, 88, 140, 168, 194,
      229, 4, 244, 209, 58, 138, 67, 140, 1, 152, 236, 250, 2, 0, 1, 4, 55, 234, 66, 242, 8, 21, 11,
      52, 1, 66, 2, 86, 3, 87, 10, 112, 2, 113, 3, 115, 4, 127, 9, 99, 111, 109, 109, 111, 110, 86,
      97, 114, 1, 127, 1, 127, 166, 1, 52, 48, 57, 49, 52, 57, 52, 53, 56, 50, 127, 2, 126, 0, 1,
      126, 139, 1, 0
    ])
    assert.throws(() => { decodeChange(change) }, /operation IDs are not in ascending order/)
  })
  */

  describe('with trailing bytes', () => {
    let change = new Uint8Array([
      0x85, 0x6f, 0x4a, 0x83, // magic bytes
      0xb2, 0x98, 0x9e, 0xa9, // checksum
      1, 61, 0, 2, 0x12, 0x34, // chunkType: change, length, deps, actor '1234'
      1, 1, 252, 250, 220, 255, 5, // seq, startOp, time
      14, 73, 110, 105, 116, 105, 97, 108, 105, 122, 97, 116, 105, 111, 110, // message: 'Initialization'
      0, 6, // actor list, column count
      0x15, 3, 0x34, 1, 0x42, 2, // keyStr, insert, action
      0x56, 2, 0x57, 1, 0x70, 2, // valLen, valRaw, predNum
      0x7f, 1, 0x78, // keyStr: 'x'
      1, // insert: false
      0x7f, 1, // action: set
      0x7f, 19, // valLen: 1 byte of type uint
      1, // valRaw: 1
      0x7f, 0, // predNum: 0
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9 // 10 trailing bytes
    ])

    it('should allow decoding and re-encoding', () => {
      // NOTE: This calls the JavaScript encoding and decoding functions, even when the WebAssembly
      // backend is loaded. Should the wasm backend export its own functions for testing?
      checkEncoded(change, encodeChange(decodeChange(change)))
    })

    it('should be preserved in document encoding', () => {
      const [doc] = Automerge.applyChanges(Automerge.init(), [change])
      const [reconstructed] = Automerge.getAllChanges(Automerge.load(Automerge.save(doc)))
      checkEncoded(change, reconstructed)
    })
  })
})

File diff suppressed because it is too large
@@ -1,697 +0,0 @@
const assert = require('assert')
const Automerge = require('..')
const { assertEqualsOneOf } = require('./helpers')

function attributeStateToAttributes(accumulatedAttributes) {
  const attributes = {}
  Object.entries(accumulatedAttributes).forEach(([key, values]) => {
    if (values.length && values[0] !== null) {
      attributes[key] = values[0]
    }
  })
  return attributes
}

function isEquivalent(a, b) {
  const aProps = Object.getOwnPropertyNames(a)
  const bProps = Object.getOwnPropertyNames(b)

  if (aProps.length != bProps.length) {
    return false
  }

  for (let i = 0; i < aProps.length; i++) {
    const propName = aProps[i]
    if (a[propName] !== b[propName]) {
      return false
    }
  }

  return true
}

function isControlMarker(pseudoCharacter) {
  return typeof pseudoCharacter === 'object' && pseudoCharacter.attributes
}

function opFrom(text, attributes) {
  let op = { insert: text }
  if (Object.keys(attributes).length > 0) {
    op.attributes = attributes
  }
  return op
}

function accumulateAttributes(span, accumulatedAttributes) {
  Object.entries(span).forEach(([key, value]) => {
    if (!accumulatedAttributes[key]) {
      accumulatedAttributes[key] = []
    }
    if (value === null) {
      if (accumulatedAttributes[key].length === 0 || accumulatedAttributes[key] === null) {
        accumulatedAttributes[key].unshift(null)
      } else {
        accumulatedAttributes[key].shift()
      }
    } else {
      if (accumulatedAttributes[key][0] === null) {
        accumulatedAttributes[key].shift()
      } else {
        accumulatedAttributes[key].unshift(value)
      }
    }
  })
  return accumulatedAttributes
}

function automergeTextToDeltaDoc(text) {
  let ops = []
  let controlState = {}
  let currentString = ""
  let attributes = {}
  text.toSpans().forEach((span) => {
    if (isControlMarker(span)) {
      controlState = accumulateAttributes(span.attributes, controlState)
    } else {
      let next = attributeStateToAttributes(controlState)

      // if the next span has the same calculated attributes as the current span
      // don't bother outputting it as a separate span, just let it ride
      if (typeof span === 'string' && isEquivalent(next, attributes)) {
        currentString = currentString + span
        return
      }

      if (currentString) {
        ops.push(opFrom(currentString, attributes))
      }

      // If we've got a string, we might be able to concatenate it to another
      // same-attributed-string, so remember it and go to the next iteration.
      if (typeof span === 'string') {
        currentString = span
        attributes = next
      } else {
        // otherwise we have an embed "character" and should output it immediately.
        // embeds are always one-"character" in length.
        ops.push(opFrom(span, next))
        currentString = ''
        attributes = {}
      }
    }
  })

  // at the end, flush any accumulated string out
  if (currentString) {
    ops.push(opFrom(currentString, attributes))
  }

  return ops
}

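Worked example of the conversion above: control markers toggle attributes on and off around the characters between them, and runs with identical attributes collapse into a single op (input shape assumed from isControlMarker):

// Text spans: ['Hello ', {attributes: {bold: true}}, 'world', {attributes: {bold: null}}]
// automergeTextToDeltaDoc(text) →
// [ { insert: 'Hello ' },
//   { insert: 'world', attributes: { bold: true } } ]
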
function inverseAttributes(attributes) {
  let invertedAttributes = {}
  Object.keys(attributes).forEach((key) => {
    invertedAttributes[key] = null
  })
  return invertedAttributes
}

function applyDeleteOp(text, offset, op) {
  let length = op.delete
  while (length > 0) {
    if (isControlMarker(text.get(offset))) {
      offset += 1
    } else {
      // we need to not delete control characters, but we do delete embed characters
      text.deleteAt(offset, 1)
      length -= 1
    }
  }
  return [text, offset]
}

function applyRetainOp(text, offset, op) {
  let length = op.retain

  if (op.attributes) {
    text.insertAt(offset, { attributes: op.attributes })
    offset += 1
  }

  while (length > 0) {
    const char = text.get(offset)
    offset += 1
    if (!isControlMarker(char)) {
      length -= 1
    }
  }

  if (op.attributes) {
    text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
    offset += 1
  }

  return [text, offset]
}

function applyInsertOp(text, offset, op) {
  let originalOffset = offset

  if (typeof op.insert === 'string') {
    text.insertAt(offset, ...op.insert.split(''))
    offset += op.insert.length
  } else {
    // we have an embed or something similar
    text.insertAt(offset, op.insert)
    offset += 1
  }

  if (op.attributes) {
    text.insertAt(originalOffset, { attributes: op.attributes })
    offset += 1
  }
  if (op.attributes) {
    text.insertAt(offset, { attributes: inverseAttributes(op.attributes) })
    offset += 1
  }
  return [text, offset]
}

// XXX: uhhhhh, why can't I pass in text?
|
||||
function applyDeltaDocToAutomergeText(delta, doc) {
|
||||
let offset = 0
|
||||
|
||||
delta.forEach(op => {
|
||||
if (op.retain) {
|
||||
[, offset] = applyRetainOp(doc.text, offset, op)
|
||||
} else if (op.delete) {
|
||||
[, offset] = applyDeleteOp(doc.text, offset, op)
|
||||
} else if (op.insert) {
|
||||
[, offset] = applyInsertOp(doc.text, offset, op)
|
||||
}
|
||||
})
|
||||
}
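
// Reviewer's sketch (not part of the original file): applying a delta that
// replaces the last word, mirroring the 'should apply an insert' test below.
{
  const before = Automerge.change(Automerge.init(), doc => {
    doc.text = new Automerge.Text('Hello world')
  })
  const after = Automerge.change(before, doc => {
    applyDeltaDocToAutomergeText([{ retain: 6 }, { insert: 'reader' }, { delete: 5 }], doc)
  })
  // after.text.join('') => 'Hello reader'
}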

describe('Automerge.Text', () => {
  let s1, s2
  beforeEach(() => {
    s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text())
    s2 = Automerge.merge(Automerge.init(), s1)
  })

  it('should support insertion', () => {
    s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a'))
    assert.strictEqual(s1.text.length, 1)
    assert.strictEqual(s1.text.get(0), 'a')
    assert.strictEqual(s1.text.toString(), 'a')
    //assert.strictEqual(s1.text.getElemId(0), `2@${Automerge.getActorId(s1)}`)
  })

  it('should support deletion', () => {
    s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', 'b', 'c'))
    s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 1))
    assert.strictEqual(s1.text.length, 2)
    assert.strictEqual(s1.text.get(0), 'a')
    assert.strictEqual(s1.text.get(1), 'c')
    assert.strictEqual(s1.text.toString(), 'ac')
  })

  it("should support implicit and explicit deletion", () => {
    s1 = Automerge.change(s1, doc => doc.text.insertAt(0, "a", "b", "c"))
    s1 = Automerge.change(s1, doc => doc.text.deleteAt(1))
    s1 = Automerge.change(s1, doc => doc.text.deleteAt(1, 0))
    assert.strictEqual(s1.text.length, 2)
    assert.strictEqual(s1.text.get(0), "a")
    assert.strictEqual(s1.text.get(1), "c")
    assert.strictEqual(s1.text.toString(), "ac")
  })

  it('should handle concurrent insertion', () => {
    s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', 'b', 'c'))
    s2 = Automerge.change(s2, doc => doc.text.insertAt(0, 'x', 'y', 'z'))
    s1 = Automerge.merge(s1, s2)
    assert.strictEqual(s1.text.length, 6)
    assertEqualsOneOf(s1.text.toString(), 'abcxyz', 'xyzabc')
    assertEqualsOneOf(s1.text.join(''), 'abcxyz', 'xyzabc')
  })

  it('should handle text and other ops in the same change', () => {
    s1 = Automerge.change(s1, doc => {
      doc.foo = 'bar'
      doc.text.insertAt(0, 'a')
    })
    assert.strictEqual(s1.foo, 'bar')
    assert.strictEqual(s1.text.toString(), 'a')
    assert.strictEqual(s1.text.join(''), 'a')
  })

  it('should serialize to JSON as a simple string', () => {
    s1 = Automerge.change(s1, doc => doc.text.insertAt(0, 'a', '"', 'b'))
    assert.strictEqual(JSON.stringify(s1), '{"text":"a\\"b"}')
  })

  it('should allow modification before an object is assigned to a document', () => {
    s1 = Automerge.change(Automerge.init(), doc => {
      const text = new Automerge.Text()
      text.insertAt(0, 'a', 'b', 'c', 'd')
      text.deleteAt(2)
      doc.text = text
      assert.strictEqual(doc.text.toString(), 'abd')
      assert.strictEqual(doc.text.join(''), 'abd')
    })
    assert.strictEqual(s1.text.toString(), 'abd')
    assert.strictEqual(s1.text.join(''), 'abd')
  })

  it('should allow modification after an object is assigned to a document', () => {
    s1 = Automerge.change(Automerge.init(), doc => {
      const text = new Automerge.Text()
      doc.text = text
      doc.text.insertAt(0, 'a', 'b', 'c', 'd')
      doc.text.deleteAt(2)
      assert.strictEqual(doc.text.toString(), 'abd')
      assert.strictEqual(doc.text.join(''), 'abd')
    })
    assert.strictEqual(s1.text.join(''), 'abd')
  })

  it('should not allow modification outside of a change callback', () => {
    assert.throws(() => s1.text.insertAt(0, 'a'), /object cannot be modified outside of a change block/)
  })

  describe('with initial value', () => {
    it('should accept a string as initial value', () => {
      let s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text('init'))
      assert.strictEqual(s1.text.length, 4)
      assert.strictEqual(s1.text.get(0), 'i')
      assert.strictEqual(s1.text.get(1), 'n')
      assert.strictEqual(s1.text.get(2), 'i')
      assert.strictEqual(s1.text.get(3), 't')
      assert.strictEqual(s1.text.toString(), 'init')
    })

    it('should accept an array as initial value', () => {
      let s1 = Automerge.change(Automerge.init(), doc => doc.text = new Automerge.Text(['i', 'n', 'i', 't']))
      assert.strictEqual(s1.text.length, 4)
      assert.strictEqual(s1.text.get(0), 'i')
      assert.strictEqual(s1.text.get(1), 'n')
      assert.strictEqual(s1.text.get(2), 'i')
      assert.strictEqual(s1.text.get(3), 't')
      assert.strictEqual(s1.text.toString(), 'init')
    })

    it('should initialize text in Automerge.from()', () => {
      let s1 = Automerge.from({text: new Automerge.Text('init')})
      assert.strictEqual(s1.text.length, 4)
      assert.strictEqual(s1.text.get(0), 'i')
      assert.strictEqual(s1.text.get(1), 'n')
      assert.strictEqual(s1.text.get(2), 'i')
      assert.strictEqual(s1.text.get(3), 't')
      assert.strictEqual(s1.text.toString(), 'init')
    })

    it('should encode the initial value as a change', () => {
      const s1 = Automerge.from({text: new Automerge.Text('init')})
      const changes = Automerge.getAllChanges(s1)
      assert.strictEqual(changes.length, 1)
      const [s2] = Automerge.applyChanges(Automerge.init(), changes)
      assert.strictEqual(s2.text instanceof Automerge.Text, true)
      assert.strictEqual(s2.text.toString(), 'init')
      assert.strictEqual(s2.text.join(''), 'init')
    })

    it('should allow immediate access to the value', () => {
      Automerge.change(Automerge.init(), doc => {
        const text = new Automerge.Text('init')
        assert.strictEqual(text.length, 4)
        assert.strictEqual(text.get(0), 'i')
        assert.strictEqual(text.toString(), 'init')
        doc.text = text
        assert.strictEqual(doc.text.length, 4)
        assert.strictEqual(doc.text.get(0), 'i')
        assert.strictEqual(doc.text.toString(), 'init')
      })
    })

    it('should allow pre-assignment modification of the initial value', () => {
      let s1 = Automerge.change(Automerge.init(), doc => {
        const text = new Automerge.Text('init')
        text.deleteAt(3)
        assert.strictEqual(text.join(''), 'ini')
        doc.text = text
        assert.strictEqual(doc.text.join(''), 'ini')
        assert.strictEqual(doc.text.toString(), 'ini')
      })
      assert.strictEqual(s1.text.toString(), 'ini')
      assert.strictEqual(s1.text.join(''), 'ini')
    })

    it('should allow post-assignment modification of the initial value', () => {
      let s1 = Automerge.change(Automerge.init(), doc => {
        const text = new Automerge.Text('init')
        doc.text = text
        doc.text.deleteAt(0)
        doc.text.insertAt(0, 'I')
        assert.strictEqual(doc.text.join(''), 'Init')
        assert.strictEqual(doc.text.toString(), 'Init')
      })
      assert.strictEqual(s1.text.join(''), 'Init')
      assert.strictEqual(s1.text.toString(), 'Init')
    })
  })

  describe('non-textual control characters', () => {
    let s1
    beforeEach(() => {
      s1 = Automerge.change(Automerge.init(), doc => {
        doc.text = new Automerge.Text()
        doc.text.insertAt(0, 'a')
        doc.text.insertAt(1, { attribute: 'bold' })
      })
    })

    it('should allow fetching non-textual characters', () => {
      assert.deepEqual(s1.text.get(1), { attribute: 'bold' })
      //assert.strictEqual(s1.text.getElemId(1), `3@${Automerge.getActorId(s1)}`)
    })

    it('should include control characters in string length', () => {
      assert.strictEqual(s1.text.length, 2)
      assert.strictEqual(s1.text.get(0), 'a')
    })

    it('should exclude control characters from toString()', () => {
      assert.strictEqual(s1.text.toString(), 'a')
    })

    it('should allow control characters to be updated', () => {
      const s2 = Automerge.change(s1, doc => doc.text.get(1).attribute = 'italic')
      const s3 = Automerge.load(Automerge.save(s2))
      assert.strictEqual(s1.text.get(1).attribute, 'bold')
      assert.strictEqual(s2.text.get(1).attribute, 'italic')
      assert.strictEqual(s3.text.get(1).attribute, 'italic')
    })

    describe('spans interface to Text', () => {
      it('should return a simple string as a single span', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('hello world')
        })
        assert.deepEqual(s1.text.toSpans(), ['hello world'])
      })

      it('should return an empty string as an empty array', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text()
        })
        assert.deepEqual(s1.text.toSpans(), [])
      })

      it('should split a span at a control character', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('hello world')
          doc.text.insertAt(5, { attributes: { bold: true } })
        })
        assert.deepEqual(s1.text.toSpans(),
          ['hello', { attributes: { bold: true } }, ' world'])
      })

      it('should allow consecutive control characters', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('hello world')
          doc.text.insertAt(5, { attributes: { bold: true } })
          doc.text.insertAt(6, { attributes: { italic: true } })
        })
        assert.deepEqual(s1.text.toSpans(),
          ['hello',
           { attributes: { bold: true } },
           { attributes: { italic: true } },
           ' world'
          ])
      })

      it('should allow non-consecutive control characters', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('hello world')
          doc.text.insertAt(5, { attributes: { bold: true } })
          doc.text.insertAt(12, { attributes: { italic: true } })
        })
        assert.deepEqual(s1.text.toSpans(),
          ['hello',
           { attributes: { bold: true } },
           ' world',
           { attributes: { italic: true } }
          ])
      })

      it('should be convertible into a Quill delta', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Gandalf the Grey')
          doc.text.insertAt(0, { attributes: { bold: true } })
          doc.text.insertAt(7 + 1, { attributes: { bold: null } })
          doc.text.insertAt(12 + 2, { attributes: { color: '#cccccc' } })
        })

        let deltaDoc = automergeTextToDeltaDoc(s1.text)

        // From https://quilljs.com/docs/delta/
        let expectedDoc = [
          { insert: 'Gandalf', attributes: { bold: true } },
          { insert: ' the ' },
          { insert: 'Grey', attributes: { color: '#cccccc' } }
        ]

        assert.deepEqual(deltaDoc, expectedDoc)
      })

      it('should support embeds', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('')
          doc.text.insertAt(0, { attributes: { link: 'https://quilljs.com' } })
          doc.text.insertAt(1, {
            image: 'https://quilljs.com/assets/images/icon.png'
          })
          doc.text.insertAt(2, { attributes: { link: null } })
        })

        let deltaDoc = automergeTextToDeltaDoc(s1.text)

        // From https://quilljs.com/docs/delta/
        let expectedDoc = [{
          // An image link
          insert: {
            image: 'https://quilljs.com/assets/images/icon.png'
          },
          attributes: {
            link: 'https://quilljs.com'
          }
        }]

        assert.deepEqual(deltaDoc, expectedDoc)
      })

      it('should handle concurrent overlapping spans', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Gandalf the Grey')
        })

        let s2 = Automerge.merge(Automerge.init(), s1)

        let s3 = Automerge.change(s1, doc => {
          doc.text.insertAt(8, { attributes: { bold: true } })
          doc.text.insertAt(16 + 1, { attributes: { bold: null } })
        })

        let s4 = Automerge.change(s2, doc => {
          doc.text.insertAt(0, { attributes: { bold: true } })
          doc.text.insertAt(11 + 1, { attributes: { bold: null } })
        })

        let merged = Automerge.merge(s3, s4)

        let deltaDoc = automergeTextToDeltaDoc(merged.text)

        // From https://quilljs.com/docs/delta/
        let expectedDoc = [
          { insert: 'Gandalf the Grey', attributes: { bold: true } },
        ]

        assert.deepEqual(deltaDoc, expectedDoc)
      })

      it('should handle debolding spans', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Gandalf the Grey')
        })

        let s2 = Automerge.merge(Automerge.init(), s1)

        let s3 = Automerge.change(s1, doc => {
          doc.text.insertAt(0, { attributes: { bold: true } })
          doc.text.insertAt(16 + 1, { attributes: { bold: null } })
        })

        let s4 = Automerge.change(s2, doc => {
          doc.text.insertAt(8, { attributes: { bold: null } })
          doc.text.insertAt(11 + 1, { attributes: { bold: true } })
        })

        let merged = Automerge.merge(s3, s4)

        let deltaDoc = automergeTextToDeltaDoc(merged.text)

        // From https://quilljs.com/docs/delta/
        let expectedDoc = [
          { insert: 'Gandalf ', attributes: { bold: true } },
          { insert: 'the' },
          { insert: ' Grey', attributes: { bold: true } },
        ]

        assert.deepEqual(deltaDoc, expectedDoc)
      })

      // xxx: how would this work for colors?
      it('should handle destyling across destyled spans', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Gandalf the Grey')
        })

        let s2 = Automerge.merge(Automerge.init(), s1)

        let s3 = Automerge.change(s1, doc => {
          doc.text.insertAt(0, { attributes: { bold: true } })
          doc.text.insertAt(16 + 1, { attributes: { bold: null } })
        })

        let s4 = Automerge.change(s2, doc => {
          doc.text.insertAt(8, { attributes: { bold: null } })
          doc.text.insertAt(11 + 1, { attributes: { bold: true } })
        })

        let merged = Automerge.merge(s3, s4)

        let final = Automerge.change(merged, doc => {
          doc.text.insertAt(3 + 1, { attributes: { bold: null } })
          doc.text.insertAt(doc.text.length, { attributes: { bold: true } })
        })

        let deltaDoc = automergeTextToDeltaDoc(final.text)

        // From https://quilljs.com/docs/delta/
        let expectedDoc = [
          { insert: 'Gan', attributes: { bold: true } },
          { insert: 'dalf the Grey' },
        ]

        assert.deepEqual(deltaDoc, expectedDoc)
      })

      it('should apply an insert', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Hello world')
        })

        const delta = [
          { retain: 6 },
          { insert: 'reader' },
          { delete: 5 }
        ]

        let s2 = Automerge.change(s1, doc => {
          applyDeltaDocToAutomergeText(delta, doc)
        })

        assert.strictEqual(s2.text.join(''), 'Hello reader')
      })

      it('should apply an insert with control characters', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Hello world')
        })

        const delta = [
          { retain: 6 },
          { insert: 'reader', attributes: { bold: true } },
          { delete: 5 },
          { insert: '!' }
        ]

        let s2 = Automerge.change(s1, doc => {
          applyDeltaDocToAutomergeText(delta, doc)
        })

        assert.strictEqual(s2.text.toString(), 'Hello reader!')
        assert.deepEqual(s2.text.toSpans(), [
          "Hello ",
          { attributes: { bold: true } },
          "reader",
          { attributes: { bold: null } },
          "!"
        ])
      })

      it('should account for control characters in retain/delete lengths', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('Hello world')
          doc.text.insertAt(4, { attributes: { color: '#ccc' } })
          doc.text.insertAt(10, { attributes: { color: '#f00' } })
        })

        const delta = [
          { retain: 6 },
          { insert: 'reader', attributes: { bold: true } },
          { delete: 5 },
          { insert: '!' }
        ]

        let s2 = Automerge.change(s1, doc => {
          applyDeltaDocToAutomergeText(delta, doc)
        })

        assert.strictEqual(s2.text.toString(), 'Hello reader!')
        assert.deepEqual(s2.text.toSpans(), [
          "Hell",
          { attributes: { color: '#ccc'} },
          "o ",
          { attributes: { bold: true } },
          "reader",
          { attributes: { bold: null } },
          { attributes: { color: '#f00'} },
          "!"
        ])
      })

      it('should support embeds', () => {
        let s1 = Automerge.change(Automerge.init(), doc => {
          doc.text = new Automerge.Text('')
        })

        let deltaDoc = [{
          // An image link
          insert: {
            image: 'https://quilljs.com/assets/images/icon.png'
          },
          attributes: {
            link: 'https://quilljs.com'
          }
        }]

        let s2 = Automerge.change(s1, doc => {
          applyDeltaDocToAutomergeText(deltaDoc, doc)
        })

        assert.deepEqual(s2.text.toSpans(), [
          { attributes: { link: 'https://quilljs.com' } },
          { image: 'https://quilljs.com/assets/images/icon.png'},
          { attributes: { link: null } },
        ])
      })
    })
  })

  it('should support unicode when creating text', () => {
    s1 = Automerge.from({
      text: new Automerge.Text('🐦')
    })
    assert.strictEqual(s1.text.get(0), '🐦')
  })
})

@@ -1,32 +0,0 @@
const assert = require('assert')
const Automerge = require('..')

const uuid = Automerge.uuid

describe('uuid', () => {
  afterEach(() => {
    uuid.reset()
  })

  describe('default implementation', () => {
    it('generates unique values', () => {
      assert.notEqual(uuid(), uuid())
    })
  })

  describe('custom implementation', () => {
    let counter

    function customUuid() {
      return `custom-uuid-${counter++}`
    }

    before(() => uuid.setFactory(customUuid))
    beforeEach(() => counter = 0)

    it('invokes the custom factory', () => {
      assert.equal(uuid(), 'custom-uuid-0')
      assert.equal(uuid(), 'custom-uuid-1')
    })
  })
})
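
// Reviewer's sketch (not part of the original file): the same factory hooks
// exercised above can pin uuids for reproducible fixtures outside of mocha.
{
  let n = 0
  uuid.setFactory(() => `id-${n++}`)
  assert.equal(uuid(), 'id-0')
  uuid.reset()
}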

automerge-wasm/index.d.ts (vendored)
@@ -1,164 +0,0 @@
export type Actor = string;
export type ObjID = string;
export type Change = Uint8Array;
export type SyncMessage = Uint8Array;
export type Prop = string | number;
export type Hash = string;
export type Heads = Hash[];
export type Value = string | number | boolean | null | Date | Uint8Array
export type ObjType = string | Array<any> | Object
export type FullValue =
  ["str", string] |
  ["int", number] |
  ["uint", number] |
  ["f64", number] |
  ["boolean", boolean] |
  ["timestamp", Date] |
  ["counter", number] |
  ["bytes", Uint8Array] |
  ["null", Uint8Array] |
  ["map", ObjID] |
  ["list", ObjID] |
  ["text", ObjID] |
  ["table", ObjID]

export enum ObjTypeName {
  list = "list",
  map = "map",
  table = "table",
  text = "text",
}

export type Datatype =
  "boolean" |
  "str" |
  "int" |
  "uint" |
  "f64" |
  "null" |
  "timestamp" |
  "counter" |
  "bytes" |
  "map" |
  "text" |
  "list";

export type DecodedSyncMessage = {
  heads: Heads,
  need: Heads,
  have: any[],
  changes: Change[]
}

export type DecodedChange = {
  actor: Actor,
  seq: number,
  startOp: number,
  time: number,
  message: string | null,
  deps: Heads,
  hash: Hash,
  ops: Op[]
}

export type Op = {
  action: string,
  obj: ObjID,
  key: string,
  value?: string | number | boolean,
  datatype?: string,
  pred: string[],
}

export type Patch = {
  obj: ObjID
  action: 'assign' | 'insert' | 'delete'
  key: Prop
  value: Value
  datatype: Datatype
  conflict: boolean
}

export function create(actor?: Actor): Automerge;
export function load(data: Uint8Array, actor?: Actor): Automerge;
export function encodeChange(change: DecodedChange): Change;
export function decodeChange(change: Change): DecodedChange;
export function initSyncState(): SyncState;
export function encodeSyncMessage(message: DecodedSyncMessage): SyncMessage;
export function decodeSyncMessage(msg: SyncMessage): DecodedSyncMessage;
export function encodeSyncState(state: SyncState): Uint8Array;
export function decodeSyncState(data: Uint8Array): SyncState;

export class Automerge {
  // change state
  put(obj: ObjID, prop: Prop, value: Value, datatype?: Datatype): undefined;
  putObject(obj: ObjID, prop: Prop, value: ObjType): ObjID;
  insert(obj: ObjID, index: number, value: Value, datatype?: Datatype): undefined;
  insertObject(obj: ObjID, index: number, value: ObjType): ObjID;
  push(obj: ObjID, value: Value, datatype?: Datatype): undefined;
  pushObject(obj: ObjID, value: ObjType): ObjID;
  splice(obj: ObjID, start: number, delete_count: number, text?: string | Array<Value>): ObjID[] | undefined;
  increment(obj: ObjID, prop: Prop, value: number): void;
  delete(obj: ObjID, prop: Prop): void;

  // returns a single value - if there is a conflict return the winner
  get(obj: ObjID, prop: any, heads?: Heads): FullValue | null;
  // return all values in case of a conflict
  getAll(obj: ObjID, arg: any, heads?: Heads): FullValue[];
  keys(obj: ObjID, heads?: Heads): string[];
  text(obj: ObjID, heads?: Heads): string;
  length(obj: ObjID, heads?: Heads): number;
  materialize(obj?: ObjID, heads?: Heads): any;

  // transactions
  commit(message?: string, time?: number): Hash;
  merge(other: Automerge): Heads;
  getActorId(): Actor;
  pendingOps(): number;
  rollback(): number;

  // patches
  enablePatches(enable: boolean): void;
  popPatches(): Patch[];

  // save and load to local store
  save(): Uint8Array;
  saveIncremental(): Uint8Array;
  loadIncremental(data: Uint8Array): number;

  // sync over network
  receiveSyncMessage(state: SyncState, message: SyncMessage): void;
  generateSyncMessage(state: SyncState): SyncMessage | null;

  // low level change functions
  applyChanges(changes: Change[]): void;
  getChanges(have_deps: Heads): Change[];
  getChangeByHash(hash: Hash): Change | null;
  getChangesAdded(other: Automerge): Change[];
  getHeads(): Heads;
  getLastLocalChange(): Change;
  getMissingDeps(heads?: Heads): Heads;

  // memory management
  free(): void;
  clone(actor?: string): Automerge;
  fork(actor?: string): Automerge;
  forkAt(heads: Heads, actor?: string): Automerge;

  // dump internal state to console.log
  dump(): void;

  // dump internal state to a JS object
  toJS(): any;
}

export class SyncState {
  free(): void;
  clone(): SyncState;
  lastSentHeads: any;
  sentHashes: any;
  readonly sharedHeads: any;
}

export default function init (): Promise<any>;
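
To make the declarations above concrete, here is a minimal usage sketch (added for this review; it assumes the nodejs build published as `automerge-wasm` and the root object ID `"_root"`, neither of which is spelled out in this diff):

// Sketch: create a document, write a scalar, round-trip through save/load.
const wasm = require('automerge-wasm')   // assumption: package name per package.json below
const doc = wasm.create()
doc.put('_root', 'hello', 'world')       // put(obj, prop, value, datatype?)
const bytes = doc.save()                 // Uint8Array snapshot
const copy = wasm.load(bytes)
console.log(copy.get('_root', 'hello'))  // => ['str', 'world'], a FullValue tuple
doc.free(); copy.free()                  // manual memory management per the class above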

@@ -1,6 +0,0 @@
let wasm = require("./bindgen")
module.exports = wasm
module.exports.load = module.exports.loadDoc
delete module.exports.loadDoc
Object.defineProperty(module.exports, "__esModule", { value: true });
module.exports.default = () => (new Promise((resolve, reject) => { resolve() }))

@@ -1,49 +0,0 @@
{
  "collaborators": [
    "Orion Henry <orion@inkandswitch.com>",
    "Alex Good <alex@memoryandthought.me>",
    "Martin Kleppmann"
  ],
  "name": "automerge-wasm",
  "description": "wasm-bindgen bindings to the automerge rust implementation",
  "homepage": "https://github.com/automerge/automerge-rs/tree/main/automerge-wasm",
  "repository": "github:automerge/automerge-rs",
  "version": "0.1.2",
  "license": "MIT",
  "files": [
    "README.md",
    "LICENSE",
    "package.json",
    "index.d.ts",
    "nodejs/index.js",
    "nodejs/bindgen.js",
    "nodejs/bindgen_bg.wasm",
    "web/index.js",
    "web/bindgen.js",
    "web/bindgen_bg.wasm"
  ],
  "types": "index.d.ts",
  "module": "./web/index.js",
  "main": "./nodejs/index.js",
  "scripts": {
    "build": "cross-env PROFILE=dev TARGET=nodejs yarn target",
    "release": "cross-env PROFILE=release yarn buildall",
    "buildall": "cross-env TARGET=nodejs yarn target && cross-env TARGET=web yarn target",
    "target": "rimraf ./$TARGET && wasm-pack build --target $TARGET --$PROFILE --out-name bindgen -d $TARGET && cp $TARGET-index.js $TARGET/index.js",
    "test": "ts-mocha -p tsconfig.json --type-check --bail --full-trace test/*.ts"
  },
  "dependencies": {},
  "devDependencies": {
    "@types/expect": "^24.3.0",
    "@types/jest": "^27.4.0",
    "@types/mocha": "^9.1.0",
    "@types/node": "^17.0.13",
    "cross-env": "^7.0.3",
    "fast-sha256": "^1.3.0",
    "mocha": "^9.1.3",
    "pako": "^2.0.4",
    "rimraf": "^3.0.2",
    "ts-mocha": "^9.0.2",
    "typescript": "^4.5.5"
  }
}

@@ -1,433 +0,0 @@
use automerge as am;
use automerge::transaction::Transactable;
use automerge::{Change, ChangeHash, Prop};
use js_sys::{Array, Object, Reflect, Uint8Array};
use std::collections::HashSet;
use std::fmt::Display;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;

use crate::{ObjId, ScalarValue, Value};

pub(crate) struct JS(pub(crate) JsValue);
pub(crate) struct AR(pub(crate) Array);

impl From<AR> for JsValue {
    fn from(ar: AR) -> Self {
        ar.0.into()
    }
}

impl From<JS> for JsValue {
    fn from(js: JS) -> Self {
        js.0
    }
}

impl From<am::sync::State> for JS {
    fn from(state: am::sync::State) -> Self {
        let shared_heads: JS = state.shared_heads.into();
        let last_sent_heads: JS = state.last_sent_heads.into();
        let their_heads: JS = state.their_heads.into();
        let their_need: JS = state.their_need.into();
        let sent_hashes: JS = state.sent_hashes.into();
        let their_have = if let Some(have) = &state.their_have {
            JsValue::from(AR::from(have.as_slice()).0)
        } else {
            JsValue::null()
        };
        let result: JsValue = Object::new().into();
        // we can unwrap here b/c we made the object and know it's not frozen
        Reflect::set(&result, &"sharedHeads".into(), &shared_heads.0).unwrap();
        Reflect::set(&result, &"lastSentHeads".into(), &last_sent_heads.0).unwrap();
        Reflect::set(&result, &"theirHeads".into(), &their_heads.0).unwrap();
        Reflect::set(&result, &"theirNeed".into(), &their_need.0).unwrap();
        Reflect::set(&result, &"theirHave".into(), &their_have).unwrap();
        Reflect::set(&result, &"sentHashes".into(), &sent_hashes.0).unwrap();
        JS(result)
    }
}

impl From<Vec<ChangeHash>> for JS {
    fn from(heads: Vec<ChangeHash>) -> Self {
        let heads: Array = heads
            .iter()
            .map(|h| JsValue::from_str(&h.to_string()))
            .collect();
        JS(heads.into())
    }
}

impl From<HashSet<ChangeHash>> for JS {
    fn from(heads: HashSet<ChangeHash>) -> Self {
        let result: JsValue = Object::new().into();
        for key in &heads {
            Reflect::set(&result, &key.to_string().into(), &true.into()).unwrap();
        }
        JS(result)
    }
}

impl From<Option<Vec<ChangeHash>>> for JS {
    fn from(heads: Option<Vec<ChangeHash>>) -> Self {
        if let Some(v) = heads {
            let v: Array = v
                .iter()
                .map(|h| JsValue::from_str(&h.to_string()))
                .collect();
            JS(v.into())
        } else {
            JS(JsValue::null())
        }
    }
}

impl TryFrom<JS> for HashSet<ChangeHash> {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        let mut result = HashSet::new();
        for key in Reflect::own_keys(&value.0)?.iter() {
            if let Some(true) = Reflect::get(&value.0, &key)?.as_bool() {
                result.insert(key.into_serde().map_err(to_js_err)?);
            }
        }
        Ok(result)
    }
}

impl TryFrom<JS> for Vec<ChangeHash> {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        let value = value.0.dyn_into::<Array>()?;
        let value: Result<Vec<ChangeHash>, _> = value.iter().map(|j| j.into_serde()).collect();
        let value = value.map_err(to_js_err)?;
        Ok(value)
    }
}

impl From<JS> for Option<Vec<ChangeHash>> {
    fn from(value: JS) -> Self {
        let value = value.0.dyn_into::<Array>().ok()?;
        let value: Result<Vec<ChangeHash>, _> = value.iter().map(|j| j.into_serde()).collect();
        let value = value.ok()?;
        Some(value)
    }
}

impl TryFrom<JS> for Vec<Change> {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        let value = value.0.dyn_into::<Array>()?;
        let changes: Result<Vec<Uint8Array>, _> = value.iter().map(|j| j.dyn_into()).collect();
        let changes = changes?;
        let changes: Result<Vec<Change>, _> = changes
            .iter()
            .map(|a| Change::try_from(a.to_vec()))
            .collect();
        let changes = changes.map_err(to_js_err)?;
        Ok(changes)
    }
}

impl TryFrom<JS> for am::sync::State {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        let value = value.0;
        let shared_heads = js_get(&value, "sharedHeads")?.try_into()?;
        let last_sent_heads = js_get(&value, "lastSentHeads")?.try_into()?;
        let their_heads = js_get(&value, "theirHeads")?.into();
        let their_need = js_get(&value, "theirNeed")?.into();
        let their_have = js_get(&value, "theirHave")?.try_into()?;
        let sent_hashes = js_get(&value, "sentHashes")?.try_into()?;
        Ok(am::sync::State {
            shared_heads,
            last_sent_heads,
            their_heads,
            their_need,
            their_have,
            sent_hashes,
        })
    }
}
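
// Reviewer's note (a sketch inferred from the field names in this file, not
// an excerpt from the repo): the JS object mirrored by the two sync::State
// conversions above has this shape:
//
//     { sharedHeads: [/* hex hash strings */],
//       lastSentHeads: [/* hex hash strings */],
//       theirHeads: null | [/* hashes */],
//       theirNeed: null | [/* hashes */],
//       theirHave: null | [{ lastSync: [/* hashes */], bloom: Uint8Array }],
//       sentHashes: { /* hash: true, a set encoded as an object */ } }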

impl TryFrom<JS> for Option<Vec<am::sync::Have>> {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        if value.0.is_null() {
            Ok(None)
        } else {
            Ok(Some(value.try_into()?))
        }
    }
}

impl TryFrom<JS> for Vec<am::sync::Have> {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        let value = value.0.dyn_into::<Array>()?;
        let have: Result<Vec<am::sync::Have>, JsValue> = value
            .iter()
            .map(|s| {
                let last_sync = js_get(&s, "lastSync")?.try_into()?;
                let bloom = js_get(&s, "bloom")?.try_into()?;
                Ok(am::sync::Have { last_sync, bloom })
            })
            .collect();
        let have = have?;
        Ok(have)
    }
}

impl TryFrom<JS> for am::sync::BloomFilter {
    type Error = JsValue;

    fn try_from(value: JS) -> Result<Self, Self::Error> {
        let value: Uint8Array = value.0.dyn_into()?;
        let value = value.to_vec();
        let value = value.as_slice().try_into().map_err(to_js_err)?;
        Ok(value)
    }
}

impl From<&[ChangeHash]> for AR {
    fn from(value: &[ChangeHash]) -> Self {
        AR(value
            .iter()
            .map(|h| JsValue::from_str(&hex::encode(&h.0)))
            .collect())
    }
}

impl From<&[Change]> for AR {
    fn from(value: &[Change]) -> Self {
        let changes: Array = value
            .iter()
            .map(|c| Uint8Array::from(c.raw_bytes()))
            .collect();
        AR(changes)
    }
}

impl From<&[am::sync::Have]> for AR {
    fn from(value: &[am::sync::Have]) -> Self {
        AR(value
            .iter()
            .map(|have| {
                let last_sync: Array = have
                    .last_sync
                    .iter()
                    .map(|h| JsValue::from_str(&hex::encode(&h.0)))
                    .collect();
                // FIXME - the clone and the unwrap here shouldn't be needed - look at into_bytes()
                let bloom = Uint8Array::from(have.bloom.to_bytes().as_slice());
                let obj: JsValue = Object::new().into();
                // we can unwrap here b/c we created the object and know it's not frozen
                Reflect::set(&obj, &"lastSync".into(), &last_sync.into()).unwrap();
                Reflect::set(&obj, &"bloom".into(), &bloom.into()).unwrap();
                obj
            })
            .collect())
    }
}

pub(crate) fn to_js_err<T: Display>(err: T) -> JsValue {
    js_sys::Error::new(&std::format!("{}", err)).into()
}

pub(crate) fn js_get<J: Into<JsValue>>(obj: J, prop: &str) -> Result<JS, JsValue> {
    Ok(JS(Reflect::get(&obj.into(), &prop.into())?))
}

pub(crate) fn js_set<V: Into<JsValue>>(obj: &JsValue, prop: &str, val: V) -> Result<bool, JsValue> {
    Reflect::set(obj, &prop.into(), &val.into())
}

pub(crate) fn to_prop(p: JsValue) -> Result<Prop, JsValue> {
    if let Some(s) = p.as_string() {
        Ok(Prop::Map(s))
    } else if let Some(n) = p.as_f64() {
        Ok(Prop::Seq(n as usize))
    } else {
        Err(to_js_err("prop must be a string or number"))
    }
}

pub(crate) fn to_objtype(
    value: &JsValue,
    datatype: &Option<String>,
) -> Option<(am::ObjType, Vec<(Prop, JsValue)>)> {
    match datatype.as_deref() {
        Some("map") => {
            let map = value.clone().dyn_into::<js_sys::Object>().ok()?;
            // FIXME unwrap
            let map = js_sys::Object::keys(&map)
                .iter()
                .zip(js_sys::Object::values(&map).iter())
                .map(|(key, val)| (key.as_string().unwrap().into(), val))
                .collect();
            Some((am::ObjType::Map, map))
        }
        Some("list") => {
            let list = value.clone().dyn_into::<js_sys::Array>().ok()?;
            let list = list
                .iter()
                .enumerate()
                .map(|(i, e)| (i.into(), e))
                .collect();
            Some((am::ObjType::List, list))
        }
        Some("text") => {
            let text = value.as_string()?;
            let text = text
                .chars()
                .enumerate()
                .map(|(i, ch)| (i.into(), ch.to_string().into()))
                .collect();
            Some((am::ObjType::Text, text))
        }
        Some(_) => None,
        None => {
            if let Ok(list) = value.clone().dyn_into::<js_sys::Array>() {
                let list = list
                    .iter()
                    .enumerate()
                    .map(|(i, e)| (i.into(), e))
                    .collect();
                Some((am::ObjType::List, list))
            } else if let Ok(map) = value.clone().dyn_into::<js_sys::Object>() {
                // FIXME unwrap
                let map = js_sys::Object::keys(&map)
                    .iter()
                    .zip(js_sys::Object::values(&map).iter())
                    .map(|(key, val)| (key.as_string().unwrap().into(), val))
                    .collect();
                Some((am::ObjType::Map, map))
            } else if let Some(text) = value.as_string() {
                let text = text
                    .chars()
                    .enumerate()
                    .map(|(i, ch)| (i.into(), ch.to_string().into()))
                    .collect();
                Some((am::ObjType::Text, text))
            } else {
                None
            }
        }
    }
}

pub(crate) fn get_heads(heads: Option<Array>) -> Option<Vec<ChangeHash>> {
    let heads = heads?;
    let heads: Result<Vec<ChangeHash>, _> = heads.iter().map(|j| j.into_serde()).collect();
    heads.ok()
}

pub(crate) fn map_to_js(doc: &am::AutoCommit, obj: &ObjId) -> JsValue {
    let keys = doc.keys(obj);
    let map = Object::new();
    for k in keys {
        let val = doc.get(obj, &k);
        match val {
            Ok(Some((Value::Object(o), exid)))
                if o == am::ObjType::Map || o == am::ObjType::Table =>
            {
                Reflect::set(&map, &k.into(), &map_to_js(doc, &exid)).unwrap();
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
                Reflect::set(&map, &k.into(), &list_to_js(doc, &exid)).unwrap();
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
                Reflect::set(&map, &k.into(), &doc.text(&exid).unwrap().into()).unwrap();
            }
            Ok(Some((Value::Scalar(v), _))) => {
                Reflect::set(&map, &k.into(), &ScalarValue(v).into()).unwrap();
            }
            _ => (),
        };
    }
    map.into()
}

pub(crate) fn map_to_js_at(doc: &am::AutoCommit, obj: &ObjId, heads: &[ChangeHash]) -> JsValue {
    let keys = doc.keys(obj);
    let map = Object::new();
    for k in keys {
        let val = doc.get_at(obj, &k, heads);
        match val {
            Ok(Some((Value::Object(o), exid)))
                if o == am::ObjType::Map || o == am::ObjType::Table =>
            {
                Reflect::set(&map, &k.into(), &map_to_js_at(doc, &exid, heads)).unwrap();
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
                Reflect::set(&map, &k.into(), &list_to_js_at(doc, &exid, heads)).unwrap();
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
                Reflect::set(&map, &k.into(), &doc.text_at(&exid, heads).unwrap().into()).unwrap();
            }
            Ok(Some((Value::Scalar(v), _))) => {
                Reflect::set(&map, &k.into(), &ScalarValue(v).into()).unwrap();
            }
            _ => (),
        };
    }
    map.into()
}

pub(crate) fn list_to_js(doc: &am::AutoCommit, obj: &ObjId) -> JsValue {
    let len = doc.length(obj);
    let array = Array::new();
    for i in 0..len {
        let val = doc.get(obj, i as usize);
        match val {
            Ok(Some((Value::Object(o), exid)))
                if o == am::ObjType::Map || o == am::ObjType::Table =>
            {
                array.push(&map_to_js(doc, &exid));
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
                array.push(&list_to_js(doc, &exid));
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
                array.push(&doc.text(&exid).unwrap().into());
            }
            Ok(Some((Value::Scalar(v), _))) => {
                array.push(&ScalarValue(v).into());
            }
            _ => (),
        };
    }
    array.into()
}

pub(crate) fn list_to_js_at(doc: &am::AutoCommit, obj: &ObjId, heads: &[ChangeHash]) -> JsValue {
    let len = doc.length(obj);
    let array = Array::new();
    for i in 0..len {
        let val = doc.get_at(obj, i as usize, heads);
        match val {
            Ok(Some((Value::Object(o), exid)))
                if o == am::ObjType::Map || o == am::ObjType::Table =>
            {
                array.push(&map_to_js_at(doc, &exid, heads));
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::List => {
                array.push(&list_to_js_at(doc, &exid, heads));
            }
            Ok(Some((Value::Object(o), exid))) if o == am::ObjType::Text => {
                array.push(&doc.text_at(exid, heads).unwrap().into());
            }
            Ok(Some((Value::Scalar(v), _))) => {
                array.push(&ScalarValue(v).into());
            }
            _ => (),
        };
    }
    array.into()
}

@@ -1,919 +0,0 @@
#![doc(
    html_logo_url = "https://raw.githubusercontent.com/automerge/automerge-rs/main/img/brandmark.svg",
    html_favicon_url = "https://raw.githubusercontent.com/automerge/automerge-rs/main/img/favicon.ico"
)]
#![warn(
    missing_debug_implementations,
    // missing_docs, // TODO: add documentation!
    rust_2021_compatibility,
    rust_2018_idioms,
    unreachable_pub,
    bad_style,
    const_err,
    dead_code,
    improper_ctypes,
    non_shorthand_field_patterns,
    no_mangle_generic_items,
    overflowing_literals,
    path_statements,
    patterns_in_fns_without_body,
    private_in_public,
    unconditional_recursion,
    unused,
    unused_allocation,
    unused_comparisons,
    unused_parens,
    while_true
)]
#![allow(clippy::unused_unit)]
use am::transaction::CommitOptions;
use am::transaction::Transactable;
use am::ApplyOptions;
use automerge as am;
use automerge::Patch;
use automerge::VecOpObserver;
use automerge::{Change, ObjId, Prop, Value, ROOT};
use js_sys::{Array, Object, Uint8Array};
use std::convert::TryInto;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;

mod interop;
mod sync;
mod value;

use interop::{
    get_heads, js_get, js_set, list_to_js, list_to_js_at, map_to_js, map_to_js_at, to_js_err,
    to_objtype, to_prop, AR, JS,
};
use sync::SyncState;
use value::{datatype, ScalarValue};

#[allow(unused_macros)]
macro_rules! log {
    ( $( $t:tt )* ) => {
        web_sys::console::log_1(&format!( $( $t )* ).into());
    };
}

#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

#[wasm_bindgen]
#[derive(Debug)]
pub struct Automerge {
    doc: automerge::AutoCommit,
    observer: Option<VecOpObserver>,
}

#[wasm_bindgen]
impl Automerge {
    pub fn new(actor: Option<String>) -> Result<Automerge, JsValue> {
        let mut automerge = automerge::AutoCommit::new();
        if let Some(a) = actor {
            let a = automerge::ActorId::from(hex::decode(a).map_err(to_js_err)?.to_vec());
            automerge.set_actor(a);
        }
        Ok(Automerge {
            doc: automerge,
            observer: None,
        })
    }

    fn ensure_transaction_closed(&mut self) {
        if self.doc.pending_ops() > 0 {
            let mut opts = CommitOptions::default();
            if let Some(observer) = self.observer.as_mut() {
                opts.set_op_observer(observer);
            }
            self.doc.commit_with(opts);
        }
    }

    #[allow(clippy::should_implement_trait)]
    pub fn clone(&mut self, actor: Option<String>) -> Result<Automerge, JsValue> {
        self.ensure_transaction_closed();
        let mut automerge = Automerge {
            doc: self.doc.clone(),
            observer: None,
        };
        if let Some(s) = actor {
            let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
            automerge.doc.set_actor(actor);
        }
        Ok(automerge)
    }

    pub fn fork(&mut self, actor: Option<String>) -> Result<Automerge, JsValue> {
        self.ensure_transaction_closed();
        let mut automerge = Automerge {
            doc: self.doc.fork(),
            observer: None,
        };
        if let Some(s) = actor {
            let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
            automerge.doc.set_actor(actor);
        }
        Ok(automerge)
    }

    #[wasm_bindgen(js_name = forkAt)]
    pub fn fork_at(&mut self, heads: JsValue, actor: Option<String>) -> Result<Automerge, JsValue> {
        let deps: Vec<_> = JS(heads).try_into()?;
        let mut automerge = Automerge {
            doc: self.doc.fork_at(&deps)?,
            observer: None,
        };
        if let Some(s) = actor {
            let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
            automerge.doc.set_actor(actor);
        }
        Ok(automerge)
    }

    pub fn free(self) {}

    #[wasm_bindgen(js_name = pendingOps)]
    pub fn pending_ops(&self) -> JsValue {
        (self.doc.pending_ops() as u32).into()
    }

    pub fn commit(&mut self, message: Option<String>, time: Option<f64>) -> JsValue {
        let mut commit_opts = CommitOptions::default();
        if let Some(message) = message {
            commit_opts.set_message(message);
        }
        if let Some(time) = time {
            commit_opts.set_time(time as i64);
        }
        if let Some(observer) = self.observer.as_mut() {
            commit_opts.set_op_observer(observer);
        }
        let hash = self.doc.commit_with(commit_opts);
        JsValue::from_str(&hex::encode(&hash.0))
    }

    pub fn merge(&mut self, other: &mut Automerge) -> Result<Array, JsValue> {
        self.ensure_transaction_closed();
        let options = if let Some(observer) = self.observer.as_mut() {
            ApplyOptions::default().with_op_observer(observer)
        } else {
            ApplyOptions::default()
        };
        let heads = self.doc.merge_with(&mut other.doc, options)?;
        let heads: Array = heads
            .iter()
            .map(|h| JsValue::from_str(&hex::encode(&h.0)))
            .collect();
        Ok(heads)
    }

    pub fn rollback(&mut self) -> f64 {
        self.doc.rollback() as f64
    }

    pub fn keys(&self, obj: JsValue, heads: Option<Array>) -> Result<Array, JsValue> {
        let obj = self.import(obj)?;
        let result = if let Some(heads) = get_heads(heads) {
            self.doc
                .keys_at(&obj, &heads)
                .map(|s| JsValue::from_str(&s))
                .collect()
        } else {
            self.doc.keys(&obj).map(|s| JsValue::from_str(&s)).collect()
        };
        Ok(result)
    }

    pub fn text(&self, obj: JsValue, heads: Option<Array>) -> Result<String, JsValue> {
        let obj = self.import(obj)?;
        if let Some(heads) = get_heads(heads) {
            Ok(self.doc.text_at(&obj, &heads)?)
        } else {
            Ok(self.doc.text(&obj)?)
        }
    }

    pub fn splice(
        &mut self,
        obj: JsValue,
        start: f64,
        delete_count: f64,
        text: JsValue,
    ) -> Result<(), JsValue> {
        let obj = self.import(obj)?;
        let start = start as usize;
        let delete_count = delete_count as usize;
        let mut vals = vec![];
        if let Some(t) = text.as_string() {
            self.doc.splice_text(&obj, start, delete_count, &t)?;
        } else {
            if let Ok(array) = text.dyn_into::<Array>() {
                for i in array.iter() {
                    let value = self
                        .import_scalar(&i, &None)
                        .ok_or_else(|| to_js_err("expected scalar"))?;
                    vals.push(value);
                }
            }
            self.doc
                .splice(&obj, start, delete_count, vals.into_iter())?;
        }
        Ok(())
    }

    pub fn push(&mut self, obj: JsValue, value: JsValue, datatype: JsValue) -> Result<(), JsValue> {
        let obj = self.import(obj)?;
        let value = self
            .import_scalar(&value, &datatype.as_string())
            .ok_or_else(|| to_js_err("invalid scalar value"))?;
        let index = self.doc.length(&obj);
        self.doc.insert(&obj, index, value)?;
        Ok(())
    }

    #[wasm_bindgen(js_name = pushObject)]
    pub fn push_object(&mut self, obj: JsValue, value: JsValue) -> Result<Option<String>, JsValue> {
        let obj = self.import(obj)?;
        let (value, subvals) =
            to_objtype(&value, &None).ok_or_else(|| to_js_err("expected object"))?;
        let index = self.doc.length(&obj);
        let opid = self.doc.insert_object(&obj, index, value)?;
        self.subset(&opid, subvals)?;
        Ok(opid.to_string().into())
    }

    pub fn insert(
        &mut self,
        obj: JsValue,
        index: f64,
        value: JsValue,
        datatype: JsValue,
    ) -> Result<(), JsValue> {
        let obj = self.import(obj)?;
        let value = self
            .import_scalar(&value, &datatype.as_string())
            .ok_or_else(|| to_js_err("expected scalar value"))?;
        self.doc.insert(&obj, index as usize, value)?;
        Ok(())
    }

    #[wasm_bindgen(js_name = insertObject)]
    pub fn insert_object(
        &mut self,
        obj: JsValue,
        index: f64,
        value: JsValue,
    ) -> Result<Option<String>, JsValue> {
        let obj = self.import(obj)?;
        let (value, subvals) =
            to_objtype(&value, &None).ok_or_else(|| to_js_err("expected object"))?;
        let opid = self.doc.insert_object(&obj, index as usize, value)?;
        self.subset(&opid, subvals)?;
        Ok(opid.to_string().into())
    }

    pub fn put(
        &mut self,
        obj: JsValue,
        prop: JsValue,
        value: JsValue,
        datatype: JsValue,
    ) -> Result<(), JsValue> {
        let obj = self.import(obj)?;
        let prop = self.import_prop(prop)?;
        let value = self
            .import_scalar(&value, &datatype.as_string())
            .ok_or_else(|| to_js_err("expected scalar value"))?;
        self.doc.put(&obj, prop, value)?;
        Ok(())
    }

    #[wasm_bindgen(js_name = putObject)]
    pub fn put_object(
        &mut self,
        obj: JsValue,
        prop: JsValue,
        value: JsValue,
    ) -> Result<JsValue, JsValue> {
        let obj = self.import(obj)?;
        let prop = self.import_prop(prop)?;
        let (value, subvals) =
            to_objtype(&value, &None).ok_or_else(|| to_js_err("expected object"))?;
        let opid = self.doc.put_object(&obj, prop, value)?;
        self.subset(&opid, subvals)?;
        Ok(opid.to_string().into())
    }

    fn subset(&mut self, obj: &am::ObjId, vals: Vec<(am::Prop, JsValue)>) -> Result<(), JsValue> {
        for (p, v) in vals {
            let (value, subvals) = self.import_value(&v, None)?;
            //let opid = self.0.set(id, p, value)?;
            let opid = match (p, value) {
                (Prop::Map(s), Value::Object(objtype)) => {
                    Some(self.doc.put_object(obj, s, objtype)?)
                }
                (Prop::Map(s), Value::Scalar(scalar)) => {
                    self.doc.put(obj, s, scalar.into_owned())?;
                    None
                }
                (Prop::Seq(i), Value::Object(objtype)) => {
                    Some(self.doc.insert_object(obj, i, objtype)?)
                }
                (Prop::Seq(i), Value::Scalar(scalar)) => {
                    self.doc.insert(obj, i, scalar.into_owned())?;
                    None
                }
            };
            if let Some(opid) = opid {
                self.subset(&opid, subvals)?;
            }
        }
        Ok(())
    }

    pub fn increment(
        &mut self,
        obj: JsValue,
        prop: JsValue,
        value: JsValue,
    ) -> Result<(), JsValue> {
        let obj = self.import(obj)?;
        let prop = self.import_prop(prop)?;
        let value: f64 = value
            .as_f64()
            .ok_or_else(|| to_js_err("increment needs a numeric value"))?;
        self.doc.increment(&obj, prop, value as i64)?;
        Ok(())
    }

    #[wasm_bindgen(js_name = get)]
    pub fn get(
        &self,
        obj: JsValue,
        prop: JsValue,
        heads: Option<Array>,
    ) -> Result<Option<Array>, JsValue> {
        let obj = self.import(obj)?;
        let result = Array::new();
        let prop = to_prop(prop);
        let heads = get_heads(heads);
        if let Ok(prop) = prop {
            let value = if let Some(h) = heads {
                self.doc.get_at(&obj, prop, &h)?
            } else {
                self.doc.get(&obj, prop)?
            };
            match value {
                Some((Value::Object(obj_type), obj_id)) => {
                    result.push(&obj_type.to_string().into());
                    result.push(&obj_id.to_string().into());
                    Ok(Some(result))
                }
                Some((Value::Scalar(value), _)) => {
                    result.push(&datatype(&value).into());
                    result.push(&ScalarValue(value).into());
                    Ok(Some(result))
                }
                None => Ok(None),
            }
        } else {
            Ok(None)
        }
    }

    #[wasm_bindgen(js_name = getAll)]
    pub fn get_all(
        &self,
        obj: JsValue,
        arg: JsValue,
        heads: Option<Array>,
    ) -> Result<Array, JsValue> {
        let obj = self.import(obj)?;
        let result = Array::new();
        let prop = to_prop(arg);
        if let Ok(prop) = prop {
            let values = if let Some(heads) = get_heads(heads) {
                self.doc.get_all_at(&obj, prop, &heads)
            } else {
                self.doc.get_all(&obj, prop)
            }
            .map_err(to_js_err)?;
            for value in values {
                match value {
                    (Value::Object(obj_type), obj_id) => {
                        let sub = Array::new();
                        sub.push(&obj_type.to_string().into());
                        sub.push(&obj_id.to_string().into());
                        result.push(&sub.into());
                    }
                    (Value::Scalar(value), id) => {
                        let sub = Array::new();
                        sub.push(&datatype(&value).into());
                        sub.push(&ScalarValue(value).into());
                        sub.push(&id.to_string().into());
                        result.push(&sub.into());
                    }
                }
            }
        }
        Ok(result)
    }

    #[wasm_bindgen(js_name = enablePatches)]
    pub fn enable_patches(&mut self, enable: JsValue) -> Result<(), JsValue> {
        let enable = enable
            .as_bool()
            .ok_or_else(|| to_js_err("expected boolean"))?;
        if enable {
            if self.observer.is_none() {
                self.observer = Some(VecOpObserver::default());
            }
        } else {
            self.observer = None;
        }
        Ok(())
    }

    #[wasm_bindgen(js_name = popPatches)]
    pub fn pop_patches(&mut self) -> Result<Array, JsValue> {
        // Transactions send out observer updates as they occur, not waiting for
        // them to be committed. If we pop the patches then we won't be able to
        // revert them.
        self.ensure_transaction_closed();

        let patches = self
            .observer
            .as_mut()
            .map_or_else(Vec::new, |o| o.take_patches());
        let result = Array::new();
        for p in patches {
            let patch = Object::new();
            match p {
                Patch::Put {
                    obj,
                    key,
                    value,
                    conflict,
                } => {
                    js_set(&patch, "action", "put")?;
                    js_set(&patch, "obj", obj.to_string())?;
                    js_set(&patch, "key", key)?;
                    match value {
                        (Value::Object(obj_type), obj_id) => {
                            js_set(&patch, "datatype", obj_type.to_string())?;
                            js_set(&patch, "value", obj_id.to_string())?;
                        }
                        (Value::Scalar(value), _) => {
                            js_set(&patch, "datatype", datatype(&value))?;
                            js_set(&patch, "value", ScalarValue(value))?;
                        }
                    };
                    js_set(&patch, "conflict", conflict)?;
                }

                Patch::Insert { obj, index, value } => {
                    js_set(&patch, "action", "insert")?;
                    js_set(&patch, "obj", obj.to_string())?;
                    js_set(&patch, "key", index as f64)?;
                    match value {
                        (Value::Object(obj_type), obj_id) => {
                            js_set(&patch, "datatype", obj_type.to_string())?;
                            js_set(&patch, "value", obj_id.to_string())?;
                        }
                        (Value::Scalar(value), _) => {
                            js_set(&patch, "datatype", datatype(&value))?;
                            js_set(&patch, "value", ScalarValue(value))?;
                        }
                    };
                }

                Patch::Increment { obj, key, value } => {
                    js_set(&patch, "action", "increment")?;
                    js_set(&patch, "obj", obj.to_string())?;
                    js_set(&patch, "key", key)?;
                    js_set(&patch, "value", value.0)?;
                }

                Patch::Delete { obj, key } => {
                    js_set(&patch, "action", "delete")?;
                    js_set(&patch, "obj", obj.to_string())?;
                    js_set(&patch, "key", key)?;
                }
            }
            result.push(&patch);
        }
        Ok(result)
    }
|
||||
|
||||
    pub fn length(&self, obj: JsValue, heads: Option<Array>) -> Result<f64, JsValue> {
        let obj = self.import(obj)?;
        if let Some(heads) = get_heads(heads) {
            Ok(self.doc.length_at(&obj, &heads) as f64)
        } else {
            Ok(self.doc.length(&obj) as f64)
        }
    }

    pub fn delete(&mut self, obj: JsValue, prop: JsValue) -> Result<(), JsValue> {
        let obj = self.import(obj)?;
        let prop = to_prop(prop)?;
        self.doc.delete(&obj, prop).map_err(to_js_err)?;
        Ok(())
    }

    pub fn save(&mut self) -> Uint8Array {
        self.ensure_transaction_closed();
        Uint8Array::from(self.doc.save().as_slice())
    }

    #[wasm_bindgen(js_name = saveIncremental)]
    pub fn save_incremental(&mut self) -> Uint8Array {
        self.ensure_transaction_closed();
        let bytes = self.doc.save_incremental();
        Uint8Array::from(bytes.as_slice())
    }

    #[wasm_bindgen(js_name = loadIncremental)]
    pub fn load_incremental(&mut self, data: Uint8Array) -> Result<f64, JsValue> {
        self.ensure_transaction_closed();
        let data = data.to_vec();
        let options = if let Some(observer) = self.observer.as_mut() {
            ApplyOptions::default().with_op_observer(observer)
        } else {
            ApplyOptions::default()
        };
        let len = self
            .doc
            .load_incremental_with(&data, options)
            .map_err(to_js_err)?;
        Ok(len as f64)
    }

    #[wasm_bindgen(js_name = applyChanges)]
    pub fn apply_changes(&mut self, changes: JsValue) -> Result<(), JsValue> {
        self.ensure_transaction_closed();
        let changes: Vec<_> = JS(changes).try_into()?;
        let options = if let Some(observer) = self.observer.as_mut() {
            ApplyOptions::default().with_op_observer(observer)
        } else {
            ApplyOptions::default()
        };
        self.doc
            .apply_changes_with(changes, options)
            .map_err(to_js_err)?;
        Ok(())
    }

    #[wasm_bindgen(js_name = getChanges)]
    pub fn get_changes(&mut self, have_deps: JsValue) -> Result<Array, JsValue> {
        self.ensure_transaction_closed();
        let deps: Vec<_> = JS(have_deps).try_into()?;
        let changes = self.doc.get_changes(&deps);
        let changes: Array = changes
            .iter()
            .map(|c| Uint8Array::from(c.raw_bytes()))
            .collect();
        Ok(changes)
    }

    #[wasm_bindgen(js_name = getChangeByHash)]
    pub fn get_change_by_hash(&mut self, hash: JsValue) -> Result<JsValue, JsValue> {
        self.ensure_transaction_closed();
        let hash = hash.into_serde().map_err(to_js_err)?;
        let change = self.doc.get_change_by_hash(&hash);
        if let Some(c) = change {
            Ok(Uint8Array::from(c.raw_bytes()).into())
        } else {
            Ok(JsValue::null())
        }
    }

    #[wasm_bindgen(js_name = getChangesAdded)]
    pub fn get_changes_added(&mut self, other: &mut Automerge) -> Result<Array, JsValue> {
        self.ensure_transaction_closed();
        let changes = self.doc.get_changes_added(&mut other.doc);
        let changes: Array = changes
            .iter()
            .map(|c| Uint8Array::from(c.raw_bytes()))
            .collect();
        Ok(changes)
    }

    #[wasm_bindgen(js_name = getHeads)]
    pub fn get_heads(&mut self) -> Array {
        self.ensure_transaction_closed();
        let heads = self.doc.get_heads();
        let heads: Array = heads
            .iter()
            .map(|h| JsValue::from_str(&hex::encode(&h.0)))
            .collect();
        heads
    }

    #[wasm_bindgen(js_name = getActorId)]
    pub fn get_actor_id(&self) -> String {
        let actor = self.doc.get_actor();
        actor.to_string()
    }

    #[wasm_bindgen(js_name = getLastLocalChange)]
    pub fn get_last_local_change(&mut self) -> Result<Uint8Array, JsValue> {
        self.ensure_transaction_closed();
        if let Some(change) = self.doc.get_last_local_change() {
            Ok(Uint8Array::from(change.raw_bytes()))
        } else {
            Err(to_js_err("no local changes"))
        }
    }

    pub fn dump(&mut self) {
        self.ensure_transaction_closed();
        self.doc.dump()
    }

    #[wasm_bindgen(js_name = getMissingDeps)]
    pub fn get_missing_deps(&mut self, heads: Option<Array>) -> Result<Array, JsValue> {
        self.ensure_transaction_closed();
        let heads = get_heads(heads).unwrap_or_default();
        let deps = self.doc.get_missing_deps(&heads);
        let deps: Array = deps
            .iter()
            .map(|h| JsValue::from_str(&hex::encode(&h.0)))
            .collect();
        Ok(deps)
    }

    #[wasm_bindgen(js_name = receiveSyncMessage)]
    pub fn receive_sync_message(
        &mut self,
        state: &mut SyncState,
        message: Uint8Array,
    ) -> Result<(), JsValue> {
        self.ensure_transaction_closed();
        let message = message.to_vec();
        let message = am::sync::Message::decode(message.as_slice()).map_err(to_js_err)?;
        let options = if let Some(observer) = self.observer.as_mut() {
            ApplyOptions::default().with_op_observer(observer)
        } else {
            ApplyOptions::default()
        };
        self.doc
            .receive_sync_message_with(&mut state.0, message, options)
            .map_err(to_js_err)?;
        Ok(())
    }

    #[wasm_bindgen(js_name = generateSyncMessage)]
    pub fn generate_sync_message(&mut self, state: &mut SyncState) -> Result<JsValue, JsValue> {
        self.ensure_transaction_closed();
        if let Some(message) = self.doc.generate_sync_message(&mut state.0) {
            Ok(Uint8Array::from(message.encode().as_slice()).into())
        } else {
            Ok(JsValue::null())
        }
    }

    #[wasm_bindgen(js_name = toJS)]
    pub fn to_js(&self) -> JsValue {
        map_to_js(&self.doc, &ROOT)
    }

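    // Converts the given object (map, table, list, or text) into a plain JS
    // value, optionally as of the given historical heads.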
    pub fn materialize(&self, obj: JsValue, heads: Option<Array>) -> Result<JsValue, JsValue> {
        let obj = self.import(obj).unwrap_or(ROOT);
        let heads = get_heads(heads);
        if let Some(heads) = heads {
            match self.doc.object_type(&obj) {
                Some(am::ObjType::Map) => Ok(map_to_js_at(&self.doc, &obj, heads.as_slice())),
                Some(am::ObjType::List) => Ok(list_to_js_at(&self.doc, &obj, heads.as_slice())),
                Some(am::ObjType::Text) => Ok(self.doc.text_at(&obj, heads.as_slice())?.into()),
                Some(am::ObjType::Table) => Ok(map_to_js_at(&self.doc, &obj, heads.as_slice())),
                None => Err(to_js_err(format!("invalid obj {}", obj))),
            }
        } else {
            match self.doc.object_type(&obj) {
                Some(am::ObjType::Map) => Ok(map_to_js(&self.doc, &obj)),
                Some(am::ObjType::List) => Ok(list_to_js(&self.doc, &obj)),
                Some(am::ObjType::Text) => Ok(self.doc.text(&obj)?.into()),
                Some(am::ObjType::Table) => Ok(map_to_js(&self.doc, &obj)),
                None => Err(to_js_err(format!("invalid obj {}", obj))),
            }
        }
    }

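    // Accepts either an object id string or a `/`-separated path from the
    // root (e.g. "/todos/0/title" - illustrative), resolving each segment
    // against the current document.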
    fn import(&self, id: JsValue) -> Result<ObjId, JsValue> {
        if let Some(s) = id.as_string() {
            if let Some(post) = s.strip_prefix('/') {
                let mut obj = ROOT;
                let mut is_map = true;
                let parts = post.split('/');
                for prop in parts {
                    if prop.is_empty() {
                        break;
                    }
                    let val = if is_map {
                        self.doc.get(obj, prop)?
                    } else {
                        self.doc.get(obj, am::Prop::Seq(prop.parse().unwrap()))?
                    };
                    match val {
                        Some((am::Value::Object(am::ObjType::Map), id)) => {
                            is_map = true;
                            obj = id;
                        }
                        Some((am::Value::Object(am::ObjType::Table), id)) => {
                            is_map = true;
                            obj = id;
                        }
                        Some((am::Value::Object(_), id)) => {
                            is_map = false;
                            obj = id;
                        }
                        None => return Err(to_js_err(format!("invalid path '{}'", s))),
                        _ => return Err(to_js_err(format!("path '{}' is not an object", s))),
                    };
                }
                Ok(obj)
            } else {
                Ok(self.doc.import(&s)?)
            }
        } else {
            Err(to_js_err("invalid objid"))
        }
    }

    fn import_prop(&self, prop: JsValue) -> Result<Prop, JsValue> {
        if let Some(s) = prop.as_string() {
            Ok(s.into())
        } else if let Some(n) = prop.as_f64() {
            Ok((n as usize).into())
        } else {
            Err(to_js_err(format!("invalid prop {:?}", prop)))
        }
    }

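    // With an explicit datatype hint the value is coerced to that type;
    // without one the type is inferred: whole `f64`s become `Int`, `Date`s
    // become `Timestamp`, and `Uint8Array`s become `Bytes`.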
    fn import_scalar(&self, value: &JsValue, datatype: &Option<String>) -> Option<am::ScalarValue> {
        match datatype.as_deref() {
            Some("boolean") => value.as_bool().map(am::ScalarValue::Boolean),
            Some("int") => value.as_f64().map(|v| am::ScalarValue::Int(v as i64)),
            Some("uint") => value.as_f64().map(|v| am::ScalarValue::Uint(v as u64)),
            Some("str") => value.as_string().map(|v| am::ScalarValue::Str(v.into())),
            Some("f64") => value.as_f64().map(am::ScalarValue::F64),
            Some("bytes") => Some(am::ScalarValue::Bytes(
                value.clone().dyn_into::<Uint8Array>().unwrap().to_vec(),
            )),
            Some("counter") => value.as_f64().map(|v| am::ScalarValue::counter(v as i64)),
            Some("timestamp") => {
                if let Some(v) = value.as_f64() {
                    Some(am::ScalarValue::Timestamp(v as i64))
                } else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
                    Some(am::ScalarValue::Timestamp(d.get_time() as i64))
                } else {
                    None
                }
            }
            Some("null") => Some(am::ScalarValue::Null),
            Some(_) => None,
            None => {
                if value.is_null() {
                    Some(am::ScalarValue::Null)
                } else if let Some(b) = value.as_bool() {
                    Some(am::ScalarValue::Boolean(b))
                } else if let Some(s) = value.as_string() {
                    Some(am::ScalarValue::Str(s.into()))
                } else if let Some(n) = value.as_f64() {
                    if (n.round() - n).abs() < f64::EPSILON {
                        Some(am::ScalarValue::Int(n as i64))
                    } else {
                        Some(am::ScalarValue::F64(n))
                    }
                } else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
                    Some(am::ScalarValue::Timestamp(d.get_time() as i64))
                } else if let Ok(o) = &value.clone().dyn_into::<Uint8Array>() {
                    Some(am::ScalarValue::Bytes(o.to_vec()))
                } else {
                    None
                }
            }
        }
    }

    fn import_value(
        &self,
        value: &JsValue,
        datatype: Option<String>,
    ) -> Result<(Value<'static>, Vec<(Prop, JsValue)>), JsValue> {
        match self.import_scalar(value, &datatype) {
            Some(val) => Ok((val.into(), vec![])),
            None => {
                if let Some((o, subvals)) = to_objtype(value, &datatype) {
                    Ok((o.into(), subvals))
                } else {
                    web_sys::console::log_2(&"Invalid value".into(), value);
                    Err(to_js_err("invalid value"))
                }
            }
        }
    }
}

#[wasm_bindgen(js_name = create)]
pub fn init(actor: Option<String>) -> Result<Automerge, JsValue> {
    console_error_panic_hook::set_once();
    Automerge::new(actor)
}

#[wasm_bindgen(js_name = loadDoc)]
pub fn load(data: Uint8Array, actor: Option<String>) -> Result<Automerge, JsValue> {
    let data = data.to_vec();
    let observer = None;
    let options = ApplyOptions::<()>::default();
    let mut automerge = am::AutoCommit::load_with(&data, options).map_err(to_js_err)?;
    if let Some(s) = actor {
        let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
        automerge.set_actor(actor);
    }
    Ok(Automerge {
        doc: automerge,
        observer,
    })
}

#[wasm_bindgen(js_name = encodeChange)]
pub fn encode_change(change: JsValue) -> Result<Uint8Array, JsValue> {
    let change: am::ExpandedChange = change.into_serde().map_err(to_js_err)?;
    let change: Change = change.into();
    Ok(Uint8Array::from(change.raw_bytes()))
}

#[wasm_bindgen(js_name = decodeChange)]
pub fn decode_change(change: Uint8Array) -> Result<JsValue, JsValue> {
    let change = Change::from_bytes(change.to_vec()).map_err(to_js_err)?;
    let change: am::ExpandedChange = change.decode();
    JsValue::from_serde(&change).map_err(to_js_err)
}

#[wasm_bindgen(js_name = initSyncState)]
pub fn init_sync_state() -> SyncState {
    SyncState(am::sync::State::new())
}

// this is needed for compatibility with the automerge-js api
#[wasm_bindgen(js_name = importSyncState)]
pub fn import_sync_state(state: JsValue) -> Result<SyncState, JsValue> {
    Ok(SyncState(JS(state).try_into()?))
}

// this is needed for compatibility with the automerge-js api
#[wasm_bindgen(js_name = exportSyncState)]
pub fn export_sync_state(state: SyncState) -> JsValue {
    JS::from(state.0).into()
}

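// A sync message is a plain JS object `{ heads, need, have, changes }`;
// these helpers convert between that shape and the compact binary encoding.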
#[wasm_bindgen(js_name = encodeSyncMessage)]
pub fn encode_sync_message(message: JsValue) -> Result<Uint8Array, JsValue> {
    let heads = js_get(&message, "heads")?.try_into()?;
    let need = js_get(&message, "need")?.try_into()?;
    let changes = js_get(&message, "changes")?.try_into()?;
    let have = js_get(&message, "have")?.try_into()?;
    Ok(Uint8Array::from(
        am::sync::Message {
            heads,
            need,
            have,
            changes,
        }
        .encode()
        .as_slice(),
    ))
}

#[wasm_bindgen(js_name = decodeSyncMessage)]
pub fn decode_sync_message(msg: Uint8Array) -> Result<JsValue, JsValue> {
    let data = msg.to_vec();
    let msg = am::sync::Message::decode(&data).map_err(to_js_err)?;
    let heads = AR::from(msg.heads.as_slice());
    let need = AR::from(msg.need.as_slice());
    let changes = AR::from(msg.changes.as_slice());
    let have = AR::from(msg.have.as_slice());
    let obj = Object::new().into();
    js_set(&obj, "heads", heads)?;
    js_set(&obj, "need", need)?;
    js_set(&obj, "have", have)?;
    js_set(&obj, "changes", changes)?;
    Ok(obj)
}

#[wasm_bindgen(js_name = encodeSyncState)]
pub fn encode_sync_state(state: SyncState) -> Result<Uint8Array, JsValue> {
    let state = state.0;
    Ok(Uint8Array::from(state.encode().as_slice()))
}

#[wasm_bindgen(js_name = decodeSyncState)]
pub fn decode_sync_state(data: Uint8Array) -> Result<SyncState, JsValue> {
    SyncState::decode(data)
}
@ -1,38 +0,0 @@
use std::borrow::Cow;

use automerge as am;
use js_sys::Uint8Array;
use wasm_bindgen::prelude::*;

#[derive(Debug)]
pub struct ScalarValue<'a>(pub(crate) Cow<'a, am::ScalarValue>);

impl<'a> From<ScalarValue<'a>> for JsValue {
    fn from(val: ScalarValue<'a>) -> Self {
        match &*val.0 {
            am::ScalarValue::Bytes(v) => Uint8Array::from(v.as_slice()).into(),
            am::ScalarValue::Str(v) => v.to_string().into(),
            am::ScalarValue::Int(v) => (*v as f64).into(),
            am::ScalarValue::Uint(v) => (*v as f64).into(),
            am::ScalarValue::F64(v) => (*v).into(),
            am::ScalarValue::Counter(v) => (f64::from(v)).into(),
            am::ScalarValue::Timestamp(v) => js_sys::Date::new(&(*v as f64).into()).into(),
            am::ScalarValue::Boolean(v) => (*v).into(),
            am::ScalarValue::Null => JsValue::null(),
        }
    }
}

pub(crate) fn datatype(s: &am::ScalarValue) -> String {
    match s {
        am::ScalarValue::Bytes(_) => "bytes".into(),
        am::ScalarValue::Str(_) => "str".into(),
        am::ScalarValue::Int(_) => "int".into(),
        am::ScalarValue::Uint(_) => "uint".into(),
        am::ScalarValue::F64(_) => "f64".into(),
        am::ScalarValue::Counter(_) => "counter".into(),
        am::ScalarValue::Timestamp(_) => "timestamp".into(),
        am::ScalarValue::Boolean(_) => "boolean".into(),
        am::ScalarValue::Null => "null".into(),
    }
}
File diff suppressed because it is too large
@ -1,13 +0,0 @@
export {
  loadDoc as load,
  create,
  encodeChange,
  decodeChange,
  initSyncState,
  encodeSyncMessage,
  decodeSyncMessage,
  encodeSyncState,
  decodeSyncState,
} from "./bindgen.js"
import init from "./bindgen.js"
export default init;
@ -1,48 +0,0 @@
use automerge::{transaction::Transactable, Automerge, ROOT};
use criterion::{criterion_group, criterion_main, Criterion};

fn query_single(doc: &Automerge, rounds: u32) {
    for _ in 0..rounds {
        // repeatedly get the last key
        doc.get(ROOT, (rounds - 1).to_string()).unwrap();
    }
}

fn query_range(doc: &Automerge, rounds: u32) {
    for i in 0..rounds {
        doc.get(ROOT, i.to_string()).unwrap();
    }
}

fn put_doc(doc: &mut Automerge, rounds: u32) {
    for i in 0..rounds {
        let mut tx = doc.transaction();
        tx.put(ROOT, i.to_string(), "value").unwrap();
        tx.commit();
    }
}

fn bench(c: &mut Criterion) {
    let mut group = c.benchmark_group("map");

    let rounds = 10_000;
    let mut doc = Automerge::new();
    put_doc(&mut doc, rounds);

    group.bench_function("query single", |b| b.iter(|| query_single(&doc, rounds)));

    group.bench_function("query range", |b| b.iter(|| query_range(&doc, rounds)));

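    // iter_batched constructs a fresh empty document for every batch, so the
    // cost of Automerge::new stays out of the measured "put" timings.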
group.bench_function("put", |b| {
|
||||
b.iter_batched(
|
||||
Automerge::new,
|
||||
|mut doc| put_doc(&mut doc, rounds),
|
||||
criterion::BatchSize::LargeInput,
|
||||
)
|
||||
});
|
||||
|
||||
group.finish();
|
||||
}
|
||||
|
||||
criterion_group!(benches, bench);
|
||||
criterion_main!(benches);
|
|
@ -1,482 +0,0 @@
use std::ops::RangeBounds;

use crate::exid::ExId;
use crate::op_observer::OpObserver;
use crate::transaction::{CommitOptions, Transactable};
use crate::{
    sync, ApplyOptions, Keys, KeysAt, ObjType, Parents, Range, RangeAt, ScalarValue, Values,
    ValuesAt,
};
use crate::{
    transaction::TransactionInner, ActorId, Automerge, AutomergeError, Change, ChangeHash, Prop,
    Value,
};

/// An automerge document that automatically manages transactions.
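///
/// A minimal usage sketch (illustrative only; see the documented
/// `commit_with` example further down for the full pattern):
///
/// ```
/// # use automerge::AutoCommit;
/// # use automerge::transaction::Transactable;
/// # use automerge::ROOT;
/// let mut doc = AutoCommit::new();
/// doc.put(&ROOT, "key", "value").unwrap();
/// let _hash = doc.commit();
/// ```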
#[derive(Debug, Clone)]
pub struct AutoCommit {
    doc: Automerge,
    transaction: Option<TransactionInner>,
}

impl Default for AutoCommit {
    fn default() -> Self {
        Self::new()
    }
}

impl AutoCommit {
    pub fn new() -> Self {
        Self {
            doc: Automerge::new(),
            transaction: None,
        }
    }

    /// Get the inner document.
    #[doc(hidden)]
    pub fn document(&mut self) -> &Automerge {
        self.ensure_transaction_closed();
        &self.doc
    }

    pub fn with_actor(mut self, actor: ActorId) -> Self {
        self.ensure_transaction_closed();
        self.doc.set_actor(actor);
        self
    }

    pub fn set_actor(&mut self, actor: ActorId) -> &mut Self {
        self.ensure_transaction_closed();
        self.doc.set_actor(actor);
        self
    }

    pub fn get_actor(&self) -> &ActorId {
        self.doc.get_actor()
    }

    fn ensure_transaction_open(&mut self) {
        if self.transaction.is_none() {
            self.transaction = Some(self.doc.transaction_inner());
        }
    }

    pub fn fork(&mut self) -> Self {
        self.ensure_transaction_closed();
        Self {
            doc: self.doc.fork(),
            transaction: self.transaction.clone(),
        }
    }

    pub fn fork_at(&mut self, heads: &[ChangeHash]) -> Result<Self, AutomergeError> {
        self.ensure_transaction_closed();
        Ok(Self {
            doc: self.doc.fork_at(heads)?,
            transaction: self.transaction.clone(),
        })
    }

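    // Commits the implicit transaction, if one is open. Every entry point
    // that needs a consistent committed state (save, sync, merge, reads of
    // heads and changes) calls this first.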
    fn ensure_transaction_closed(&mut self) {
        if let Some(tx) = self.transaction.take() {
            tx.commit::<()>(&mut self.doc, None, None, None);
        }
    }

    pub fn load(data: &[u8]) -> Result<Self, AutomergeError> {
        let doc = Automerge::load(data)?;
        Ok(Self {
            doc,
            transaction: None,
        })
    }

    pub fn load_with<Obs: OpObserver>(
        data: &[u8],
        options: ApplyOptions<'_, Obs>,
    ) -> Result<Self, AutomergeError> {
        let doc = Automerge::load_with(data, options)?;
        Ok(Self {
            doc,
            transaction: None,
        })
    }

    pub fn load_incremental(&mut self, data: &[u8]) -> Result<usize, AutomergeError> {
        self.ensure_transaction_closed();
        self.doc.load_incremental(data)
    }

    pub fn load_incremental_with<'a, Obs: OpObserver>(
        &mut self,
        data: &[u8],
        options: ApplyOptions<'a, Obs>,
    ) -> Result<usize, AutomergeError> {
        self.ensure_transaction_closed();
        self.doc.load_incremental_with(data, options)
    }

    pub fn apply_changes(&mut self, changes: Vec<Change>) -> Result<(), AutomergeError> {
        self.ensure_transaction_closed();
        self.doc.apply_changes(changes)
    }

    pub fn apply_changes_with<Obs: OpObserver>(
        &mut self,
        changes: Vec<Change>,
        options: ApplyOptions<'_, Obs>,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_closed();
        self.doc.apply_changes_with(changes, options)
    }

    /// Takes all the changes in `other` which are not in `self` and applies them
    pub fn merge(&mut self, other: &mut Self) -> Result<Vec<ChangeHash>, AutomergeError> {
        self.ensure_transaction_closed();
        other.ensure_transaction_closed();
        self.doc.merge(&mut other.doc)
    }

    /// Takes all the changes in `other` which are not in `self` and applies them
    pub fn merge_with<'a, Obs: OpObserver>(
        &mut self,
        other: &mut Self,
        options: ApplyOptions<'a, Obs>,
    ) -> Result<Vec<ChangeHash>, AutomergeError> {
        self.ensure_transaction_closed();
        other.ensure_transaction_closed();
        self.doc.merge_with(&mut other.doc, options)
    }

    pub fn save(&mut self) -> Vec<u8> {
        self.ensure_transaction_closed();
        self.doc.save()
    }

    // should this return an empty vec instead of None?
    pub fn save_incremental(&mut self) -> Vec<u8> {
        self.ensure_transaction_closed();
        self.doc.save_incremental()
    }

    pub fn get_missing_deps(&mut self, heads: &[ChangeHash]) -> Vec<ChangeHash> {
        self.ensure_transaction_closed();
        self.doc.get_missing_deps(heads)
    }

    pub fn get_last_local_change(&mut self) -> Option<&Change> {
        self.ensure_transaction_closed();
        self.doc.get_last_local_change()
    }

    pub fn get_changes(&mut self, have_deps: &[ChangeHash]) -> Vec<&Change> {
        self.ensure_transaction_closed();
        self.doc.get_changes(have_deps)
    }

    pub fn get_change_by_hash(&mut self, hash: &ChangeHash) -> Option<&Change> {
        self.ensure_transaction_closed();
        self.doc.get_change_by_hash(hash)
    }

    pub fn get_changes_added<'a>(&mut self, other: &'a mut Self) -> Vec<&'a Change> {
        self.ensure_transaction_closed();
        other.ensure_transaction_closed();
        self.doc.get_changes_added(&other.doc)
    }

    pub fn import(&self, s: &str) -> Result<ExId, AutomergeError> {
        self.doc.import(s)
    }

    pub fn dump(&mut self) {
        self.ensure_transaction_closed();
        self.doc.dump()
    }

    pub fn generate_sync_message(&mut self, sync_state: &mut sync::State) -> Option<sync::Message> {
        self.ensure_transaction_closed();
        self.doc.generate_sync_message(sync_state)
    }

    pub fn receive_sync_message(
        &mut self,
        sync_state: &mut sync::State,
        message: sync::Message,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_closed();
        self.doc.receive_sync_message(sync_state, message)
    }

    pub fn receive_sync_message_with<'a, Obs: OpObserver>(
        &mut self,
        sync_state: &mut sync::State,
        message: sync::Message,
        options: ApplyOptions<'a, Obs>,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_closed();
        self.doc
            .receive_sync_message_with(sync_state, message, options)
    }

    #[cfg(feature = "optree-visualisation")]
    pub fn visualise_optree(&self) -> String {
        self.doc.visualise_optree()
    }

    /// Get the current heads of the document.
    ///
    /// This closes the transaction first, if one is in progress.
    pub fn get_heads(&mut self) -> Vec<ChangeHash> {
        self.ensure_transaction_closed();
        self.doc.get_heads()
    }

    pub fn commit(&mut self) -> ChangeHash {
        self.commit_with::<()>(CommitOptions::default())
    }

    /// Commit the current operations with some options.
    ///
    /// ```
    /// # use automerge::transaction::CommitOptions;
    /// # use automerge::transaction::Transactable;
    /// # use automerge::ROOT;
    /// # use automerge::AutoCommit;
    /// # use automerge::ObjType;
    /// # use std::time::SystemTime;
    /// let mut doc = AutoCommit::new();
    /// doc.put_object(&ROOT, "todos", ObjType::List).unwrap();
    /// let now = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs() as
    /// i64;
    /// doc.commit_with::<()>(CommitOptions::default().with_message("Create todos list").with_time(now));
    /// ```
    pub fn commit_with<Obs: OpObserver>(&mut self, options: CommitOptions<'_, Obs>) -> ChangeHash {
        // ensure that a commit with no pending changes still produces a change
        self.ensure_transaction_open();
        let tx = self.transaction.take().unwrap();
        tx.commit(
            &mut self.doc,
            options.message,
            options.time,
            options.op_observer,
        )
    }

    pub fn rollback(&mut self) -> usize {
        self.transaction
            .take()
            .map(|tx| tx.rollback(&mut self.doc))
            .unwrap_or(0)
    }
}

impl Transactable for AutoCommit {
    fn pending_ops(&self) -> usize {
        self.transaction
            .as_ref()
            .map(|t| t.pending_ops())
            .unwrap_or(0)
    }

    // KeysAt::()
    // LenAt::()
    // PropAt::()
    // NthAt::()

    fn keys<O: AsRef<ExId>>(&self, obj: O) -> Keys<'_, '_> {
        self.doc.keys(obj)
    }

    fn keys_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> KeysAt<'_, '_> {
        self.doc.keys_at(obj, heads)
    }

    fn range<O: AsRef<ExId>, R: RangeBounds<String>>(&self, obj: O, range: R) -> Range<'_, R> {
        self.doc.range(obj, range)
    }

    fn range_at<O: AsRef<ExId>, R: RangeBounds<String>>(
        &self,
        obj: O,
        range: R,
        heads: &[ChangeHash],
    ) -> RangeAt<'_, R> {
        self.doc.range_at(obj, range, heads)
    }

    fn values<O: AsRef<ExId>>(&self, obj: O) -> Values<'_> {
        self.doc.values(obj)
    }

    fn values_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> ValuesAt<'_> {
        self.doc.values_at(obj, heads)
    }

    fn length<O: AsRef<ExId>>(&self, obj: O) -> usize {
        self.doc.length(obj)
    }

    fn length_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> usize {
        self.doc.length_at(obj, heads)
    }

    fn object_type<O: AsRef<ExId>>(&self, obj: O) -> Option<ObjType> {
        self.doc.object_type(obj)
    }

    // set(obj, prop, value) - value can be scalar or objtype
    // del(obj, prop)
    // inc(obj, prop, value)
    // insert(obj, index, value)

    /// Set the value of property `P` to value `V` in object `obj`.
    ///
    /// # Returns
    ///
    /// The opid of the operation which was created, or None if this operation doesn't change the
    /// document or create a new object.
    ///
    /// # Errors
    ///
    /// This will return an error if
    /// - The object does not exist
    /// - The key is the wrong type for the object
    /// - The key does not exist in the object
    fn put<O: AsRef<ExId>, P: Into<Prop>, V: Into<ScalarValue>>(
        &mut self,
        obj: O,
        prop: P,
        value: V,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.put(&mut self.doc, obj.as_ref(), prop, value)
    }

    fn put_object<O: AsRef<ExId>, P: Into<Prop>>(
        &mut self,
        obj: O,
        prop: P,
        value: ObjType,
    ) -> Result<ExId, AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.put_object(&mut self.doc, obj.as_ref(), prop, value)
    }

    fn insert<O: AsRef<ExId>, V: Into<ScalarValue>>(
        &mut self,
        obj: O,
        index: usize,
        value: V,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.insert(&mut self.doc, obj.as_ref(), index, value)
    }

    fn insert_object<O: AsRef<ExId>>(
        &mut self,
        obj: O,
        index: usize,
        value: ObjType,
    ) -> Result<ExId, AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.insert_object(&mut self.doc, obj.as_ref(), index, value)
    }

    fn increment<O: AsRef<ExId>, P: Into<Prop>>(
        &mut self,
        obj: O,
        prop: P,
        value: i64,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.increment(&mut self.doc, obj.as_ref(), prop, value)
    }

    fn delete<O: AsRef<ExId>, P: Into<Prop>>(
        &mut self,
        obj: O,
        prop: P,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.delete(&mut self.doc, obj.as_ref(), prop)
    }

    /// Splice new elements into the given sequence. Returns a vector of the OpIds used to insert
    /// the new elements
    fn splice<O: AsRef<ExId>, V: IntoIterator<Item = ScalarValue>>(
        &mut self,
        obj: O,
        pos: usize,
        del: usize,
        vals: V,
    ) -> Result<(), AutomergeError> {
        self.ensure_transaction_open();
        let tx = self.transaction.as_mut().unwrap();
        tx.splice(&mut self.doc, obj.as_ref(), pos, del, vals)
    }

    fn text<O: AsRef<ExId>>(&self, obj: O) -> Result<String, AutomergeError> {
        self.doc.text(obj)
    }

    fn text_at<O: AsRef<ExId>>(
        &self,
        obj: O,
        heads: &[ChangeHash],
    ) -> Result<String, AutomergeError> {
        self.doc.text_at(obj, heads)
    }

    // TODO - I need to return these OpIds here **only** to get
    // the legacy conflicts format of { [opid]: value }
    // Something better?
    fn get<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
    ) -> Result<Option<(Value<'_>, ExId)>, AutomergeError> {
        self.doc.get(obj, prop)
    }

    fn get_at<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
        heads: &[ChangeHash],
    ) -> Result<Option<(Value<'_>, ExId)>, AutomergeError> {
        self.doc.get_at(obj, prop, heads)
    }

    fn get_all<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
    ) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError> {
        self.doc.get_all(obj, prop)
    }

    fn get_all_at<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
        heads: &[ChangeHash],
    ) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError> {
        self.doc.get_all_at(obj, prop, heads)
    }

    fn parent_object<O: AsRef<ExId>>(&self, obj: O) -> Option<(ExId, Prop)> {
        self.doc.parent_object(obj)
    }

    fn parents(&self, obj: ExId) -> Parents<'_> {
        self.doc.parents(obj)
    }
}
File diff suppressed because it is too large
@ -1,997 +0,0 @@
use crate::columnar::{
    ChangeEncoder, ChangeIterator, ColumnEncoder, DepsIterator, DocChange, DocOp, DocOpEncoder,
    DocOpIterator, OperationIterator, COLUMN_TYPE_DEFLATE,
};
use crate::decoding;
use crate::decoding::{Decodable, InvalidChangeError};
use crate::encoding::{Encodable, DEFLATE_MIN_SIZE};
use crate::error::AutomergeError;
use crate::indexed_cache::IndexedCache;
use crate::legacy as amp;
use crate::transaction::TransactionInner;
use crate::types;
use crate::types::{ActorId, ElemId, Key, ObjId, Op, OpId, OpType};
use core::ops::Range;
use flate2::{
    bufread::{DeflateDecoder, DeflateEncoder},
    Compression,
};
use itertools::Itertools;
use sha2::Digest;
use sha2::Sha256;
use std::collections::{HashMap, HashSet};
use std::convert::TryInto;
use std::fmt::Debug;
use std::io::{Read, Write};
use std::num::NonZeroU64;
use tracing::instrument;

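// Every serialized block starts with the 4-byte magic number, a 4-byte
// truncated SHA-256 checksum, a block-type byte (document / change /
// deflate), and a LEB128-encoded chunk length; the constants below pin
// down that layout.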
const MAGIC_BYTES: [u8; 4] = [0x85, 0x6f, 0x4a, 0x83];
const PREAMBLE_BYTES: usize = 8;
const HEADER_BYTES: usize = PREAMBLE_BYTES + 1;

const HASH_BYTES: usize = 32;
const BLOCK_TYPE_DOC: u8 = 0;
const BLOCK_TYPE_CHANGE: u8 = 1;
const BLOCK_TYPE_DEFLATE: u8 = 2;
const CHUNK_START: usize = 8;
const HASH_RANGE: Range<usize> = 4..8;

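// Document encoding writes the actor table and heads first, then the change
// and op column metadata, then the column data itself; it finishes by
// computing a SHA-256 over the body and splicing the first four bytes into
// the checksum slot.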
pub(crate) fn encode_document<'a, 'b>(
    heads: Vec<amp::ChangeHash>,
    changes: impl Iterator<Item = &'a Change>,
    doc_ops: impl Iterator<Item = (&'b ObjId, &'b Op)>,
    actors_index: &IndexedCache<ActorId>,
    props: &'a [String],
) -> Vec<u8> {
    let mut bytes: Vec<u8> = Vec::new();

    let actors_map = actors_index.encode_index();
    let actors = actors_index.sorted();

    /*
    // this assumes that all actor_ids referenced are seen in changes.actor_id which is true
    // so long as we have a full history
    let mut actors: Vec<_> = changes
        .iter()
        .map(|c| &c.actor)
        .unique()
        .sorted()
        .cloned()
        .collect();
    */

    let (change_bytes, change_info) = ChangeEncoder::encode_changes(changes, &actors);

    //let doc_ops = group_doc_ops(changes, &actors);

    let (ops_bytes, ops_info) = DocOpEncoder::encode_doc_ops(doc_ops, &actors_map, props);

    bytes.extend(MAGIC_BYTES);
    bytes.extend([0, 0, 0, 0]); // we don't know the hash yet, so fill in a placeholder
    bytes.push(BLOCK_TYPE_DOC);

    let mut chunk = Vec::new();

    actors.len().encode_vec(&mut chunk);

    for a in actors.into_iter() {
        a.to_bytes().encode_vec(&mut chunk);
    }

    heads.len().encode_vec(&mut chunk);
    for head in heads.iter() {
        chunk.write_all(&head.0).unwrap();
    }

    chunk.extend(change_info);
    chunk.extend(ops_info);

    chunk.extend(change_bytes);
    chunk.extend(ops_bytes);

    leb128::write::unsigned(&mut bytes, chunk.len() as u64).unwrap();

    bytes.extend(&chunk);

    let hash_result = Sha256::digest(&bytes[CHUNK_START..bytes.len()]);

    bytes.splice(HASH_RANGE, hash_result[0..4].iter().copied());

    bytes
}

/// When encoding a change we take all the actor IDs referenced by a change and place them in an
/// array. The array has the actor who authored the change as the first element and all remaining
/// actors (i.e. those referenced in object IDs in the target of an operation or in the `pred` of
/// an operation) lexicographically ordered following the change author.
fn actor_ids_in_change(change: &amp::Change) -> Vec<amp::ActorId> {
    let mut other_ids: Vec<&amp::ActorId> = change
        .operations
        .iter()
        .flat_map(opids_in_operation)
        .filter(|a| *a != &change.actor_id)
        .unique()
        .collect();
    other_ids.sort();
    // Now prepend the change actor
    std::iter::once(&change.actor_id)
        .chain(other_ids.into_iter())
        .cloned()
        .collect()
}

fn opids_in_operation(op: &amp::Op) -> impl Iterator<Item = &amp::ActorId> {
    let obj_actor_id = match &op.obj {
        amp::ObjectId::Root => None,
        amp::ObjectId::Id(opid) => Some(opid.actor()),
    };
    let pred_ids = op.pred.iter().map(amp::OpId::actor);
    let key_actor = match &op.key {
        amp::Key::Seq(amp::ElementId::Id(i)) => Some(i.actor()),
        _ => None,
    };
    obj_actor_id
        .into_iter()
        .chain(key_actor.into_iter())
        .chain(pred_ids)
}

impl From<amp::Change> for Change {
    fn from(value: amp::Change) -> Self {
        encode(&value)
    }
}

impl From<&amp::Change> for Change {
    fn from(value: &amp::Change) -> Self {
        encode(value)
    }
}

fn encode(change: &amp::Change) -> Change {
    let mut deps = change.deps.clone();
    deps.sort_unstable();

    let mut chunk = encode_chunk(change, &deps);

    let mut bytes = Vec::with_capacity(MAGIC_BYTES.len() + 4 + chunk.bytes.len());

    bytes.extend(&MAGIC_BYTES);

    bytes.extend(vec![0, 0, 0, 0]); // we don't know the hash yet, so fill in a placeholder

    bytes.push(BLOCK_TYPE_CHANGE);

    leb128::write::unsigned(&mut bytes, chunk.bytes.len() as u64).unwrap();

    let body_start = bytes.len();

    increment_range(&mut chunk.body, bytes.len());
    increment_range(&mut chunk.message, bytes.len());
    increment_range(&mut chunk.extra_bytes, bytes.len());
    increment_range_map(&mut chunk.ops, bytes.len());

    bytes.extend(&chunk.bytes);

    let hash_result = Sha256::digest(&bytes[CHUNK_START..bytes.len()]);
    let hash: amp::ChangeHash = hash_result[..].try_into().unwrap();

    bytes.splice(HASH_RANGE, hash_result[0..4].iter().copied());

    // any time the encoder/decoder changes, it's a good idea to run the
    // result through a round trip to detect errors the tests might not catch
    // let c0 = Change::from_bytes(bytes.clone()).unwrap();
    // std::assert_eq!(c1, c0);
    // perhaps we should add something like this to the test suite

    let bytes = ChangeBytes::Uncompressed(bytes);

    Change {
        bytes,
        body_start,
        hash,
        seq: change.seq,
        start_op: change.start_op,
        time: change.time,
        actors: chunk.actors,
        message: chunk.message,
        deps,
        ops: chunk.ops,
        extra_bytes: chunk.extra_bytes,
    }
}

struct ChunkIntermediate {
    bytes: Vec<u8>,
    body: Range<usize>,
    actors: Vec<ActorId>,
    message: Range<usize>,
    ops: HashMap<u32, Range<usize>>,
    extra_bytes: Range<usize>,
}

fn encode_chunk(change: &amp::Change, deps: &[amp::ChangeHash]) -> ChunkIntermediate {
    let mut bytes = Vec::new();

    // All these unwraps are okay because we're writing to an in-memory
    // buffer, so io errors should not happen

    // encode deps
    deps.len().encode(&mut bytes).unwrap();
    for hash in deps.iter() {
        bytes.write_all(&hash.0).unwrap();
    }

    let actors = actor_ids_in_change(change);
    change.actor_id.to_bytes().encode(&mut bytes).unwrap();

    // encode seq, start_op, time, message
    change.seq.encode(&mut bytes).unwrap();
    change.start_op.encode(&mut bytes).unwrap();
    change.time.encode(&mut bytes).unwrap();
    let message = bytes.len() + 1;
    change.message.encode(&mut bytes).unwrap();
    let message = message..bytes.len();

    // encode ops into a side buffer - collect all other actors
    let (ops_buf, mut ops) = ColumnEncoder::encode_ops(&change.operations, &actors);

    // encode all other actors
    actors[1..].encode(&mut bytes).unwrap();

    // now we know how many bytes ops are offset by so we can adjust the ranges
    increment_range_map(&mut ops, bytes.len());

    // write out the ops
    bytes.write_all(&ops_buf).unwrap();

    // write out the extra bytes
    let extra_bytes = bytes.len()..(bytes.len() + change.extra_bytes.len());
    bytes.write_all(&change.extra_bytes).unwrap();
    let body = 0..bytes.len();

    ChunkIntermediate {
        bytes,
        body,
        actors,
        message,
        ops,
        extra_bytes,
    }
}

#[derive(PartialEq, Debug, Clone)]
enum ChangeBytes {
    Compressed {
        compressed: Vec<u8>,
        uncompressed: Vec<u8>,
    },
    Uncompressed(Vec<u8>),
}

impl ChangeBytes {
    fn uncompressed(&self) -> &[u8] {
        match self {
            ChangeBytes::Compressed { uncompressed, .. } => &uncompressed[..],
            ChangeBytes::Uncompressed(b) => &b[..],
        }
    }

    fn compress(&mut self, body_start: usize) {
        match self {
            ChangeBytes::Compressed { .. } => {}
            ChangeBytes::Uncompressed(uncompressed) => {
                if uncompressed.len() > DEFLATE_MIN_SIZE {
                    let mut result = Vec::with_capacity(uncompressed.len());
                    result.extend(&uncompressed[0..8]);
                    result.push(BLOCK_TYPE_DEFLATE);
                    let mut deflater =
                        DeflateEncoder::new(&uncompressed[body_start..], Compression::default());
                    let mut deflated = Vec::new();
                    let deflated_len = deflater.read_to_end(&mut deflated).unwrap();
                    leb128::write::unsigned(&mut result, deflated_len as u64).unwrap();
                    result.extend(&deflated[..]);
                    *self = ChangeBytes::Compressed {
                        compressed: result,
                        uncompressed: std::mem::take(uncompressed),
                    }
                }
            }
        }
    }

    fn raw(&self) -> &[u8] {
        match self {
            ChangeBytes::Compressed { compressed, .. } => &compressed[..],
            ChangeBytes::Uncompressed(b) => &b[..],
        }
    }
}

/// A change represents a group of operations performed by an actor.
#[derive(PartialEq, Debug, Clone)]
pub struct Change {
    bytes: ChangeBytes,
    body_start: usize,
    /// Hash of this change.
    pub hash: amp::ChangeHash,
    /// The index of this change in the changes from this actor.
    pub seq: u64,
    /// The start operation index. Starts at 1.
    pub start_op: NonZeroU64,
    /// The time that this change was committed.
    pub time: i64,
    /// The message of this change.
    message: Range<usize>,
    /// The actors referenced in this change.
    actors: Vec<ActorId>,
    /// The dependencies of this change.
    pub deps: Vec<amp::ChangeHash>,
    ops: HashMap<u32, Range<usize>>,
    extra_bytes: Range<usize>,
}

impl Change {
    pub fn actor_id(&self) -> &ActorId {
        &self.actors[0]
    }

    #[instrument(level = "debug", skip(bytes))]
    pub fn load_document(bytes: &[u8]) -> Result<Vec<Change>, AutomergeError> {
        load_blocks(bytes)
    }

    pub fn from_bytes(bytes: Vec<u8>) -> Result<Change, decoding::Error> {
        Change::try_from(bytes)
    }

    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }

    pub fn len(&self) -> usize {
        // TODO - this could be a lot more efficient
        self.iter_ops().count()
    }

    pub fn max_op(&self) -> u64 {
        self.start_op.get() + (self.len() as u64) - 1
    }

    pub fn message(&self) -> Option<String> {
        let m = &self.bytes.uncompressed()[self.message.clone()];
        if m.is_empty() {
            None
        } else {
            std::str::from_utf8(m).map(ToString::to_string).ok()
        }
    }

    pub fn decode(&self) -> amp::Change {
        amp::Change {
            start_op: self.start_op,
            seq: self.seq,
            time: self.time,
            hash: Some(self.hash),
            message: self.message(),
            actor_id: self.actors[0].clone(),
            deps: self.deps.clone(),
            operations: self
                .iter_ops()
                .map(|op| amp::Op {
                    action: op.action.clone(),
                    obj: op.obj.clone(),
                    key: op.key.clone(),
                    pred: op.pred.clone(),
                    insert: op.insert,
                })
                .collect(),
            extra_bytes: self.extra_bytes().into(),
        }
    }

    pub(crate) fn iter_ops(&self) -> OperationIterator<'_> {
        OperationIterator::new(self.bytes.uncompressed(), self.actors.as_slice(), &self.ops)
    }

    pub fn extra_bytes(&self) -> &[u8] {
        &self.bytes.uncompressed()[self.extra_bytes.clone()]
    }

    pub fn compress(&mut self) {
        self.bytes.compress(self.body_start);
    }

    pub fn raw_bytes(&self) -> &[u8] {
        self.bytes.raw()
    }
}

fn read_leb128(bytes: &mut &[u8]) -> Result<(usize, usize), decoding::Error> {
|
||||
let mut buf = &bytes[..];
|
||||
let val = leb128::read::unsigned(&mut buf)? as usize;
|
||||
let leb128_bytes = bytes.len() - buf.len();
|
||||
Ok((val, leb128_bytes))
|
||||
}
|
||||
|
||||
fn read_slice<T: Decodable + Debug>(
|
||||
bytes: &[u8],
|
||||
cursor: &mut Range<usize>,
|
||||
) -> Result<T, decoding::Error> {
|
||||
let mut view = &bytes[cursor.clone()];
|
||||
let init_len = view.len();
|
||||
let val = T::decode::<&[u8]>(&mut view).ok_or(decoding::Error::NoDecodedValue);
|
||||
let bytes_read = init_len - view.len();
|
||||
*cursor = (cursor.start + bytes_read)..cursor.end;
|
||||
val
|
||||
}
|
||||
|
||||
fn slice_bytes(bytes: &[u8], cursor: &mut Range<usize>) -> Result<Range<usize>, decoding::Error> {
|
||||
let (val, len) = read_leb128(&mut &bytes[cursor.clone()])?;
|
||||
let start = cursor.start + len;
|
||||
let end = start + val;
|
||||
*cursor = end..cursor.end;
|
||||
Ok(start..end)
|
||||
}
|
||||
|
||||
fn increment_range(range: &mut Range<usize>, len: usize) {
|
||||
range.end += len;
|
||||
range.start += len;
|
||||
}
|
||||
|
||||
fn increment_range_map(ranges: &mut HashMap<u32, Range<usize>>, len: usize) {
|
||||
for range in ranges.values_mut() {
|
||||
increment_range(range, len);
|
||||
}
|
||||
}
|
||||
|
||||
fn export_objid(id: &ObjId, actors: &IndexedCache<ActorId>) -> amp::ObjectId {
|
||||
if id == &ObjId::root() {
|
||||
amp::ObjectId::Root
|
||||
} else {
|
||||
export_opid(&id.0, actors).into()
|
||||
}
|
||||
}
|
||||
|
||||
fn export_elemid(id: &ElemId, actors: &IndexedCache<ActorId>) -> amp::ElementId {
|
||||
if id == &types::HEAD {
|
||||
amp::ElementId::Head
|
||||
} else {
|
||||
export_opid(&id.0, actors).into()
|
||||
}
|
||||
}
|
||||
|
||||
fn export_opid(id: &OpId, actors: &IndexedCache<ActorId>) -> amp::OpId {
|
||||
amp::OpId(id.0, actors.get(id.1).clone())
|
||||
}
|
||||
|
||||
fn export_op(
|
||||
op: &Op,
|
||||
obj: &ObjId,
|
||||
actors: &IndexedCache<ActorId>,
|
||||
props: &IndexedCache<String>,
|
||||
) -> amp::Op {
|
||||
let action = op.action.clone();
|
||||
let key = match &op.key {
|
||||
Key::Map(n) => amp::Key::Map(props.get(*n).clone().into()),
|
||||
Key::Seq(id) => amp::Key::Seq(export_elemid(id, actors)),
|
||||
};
|
||||
let obj = export_objid(obj, actors);
|
||||
let pred = op.pred.iter().map(|id| export_opid(id, actors)).collect();
|
||||
amp::Op {
|
||||
action,
|
||||
obj,
|
||||
insert: op.insert,
|
||||
pred,
|
||||
key,
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn export_change(
|
||||
change: TransactionInner,
|
||||
actors: &IndexedCache<ActorId>,
|
||||
props: &IndexedCache<String>,
|
||||
) -> Change {
|
||||
amp::Change {
|
||||
actor_id: actors.get(change.actor).clone(),
|
||||
seq: change.seq,
|
||||
start_op: change.start_op,
|
||||
time: change.time,
|
||||
deps: change.deps,
|
||||
message: change.message,
|
||||
hash: change.hash,
|
||||
operations: change
|
||||
.operations
|
||||
.iter()
|
||||
.map(|(obj, _, op)| export_op(op, obj, actors, props))
|
||||
.collect(),
|
||||
extra_bytes: change.extra_bytes,
|
||||
}
|
||||
.into()
|
||||
}
|
||||
|
||||
impl TryFrom<Vec<u8>> for Change {
|
||||
type Error = decoding::Error;
|
||||
|
||||
fn try_from(bytes: Vec<u8>) -> Result<Self, Self::Error> {
|
||||
let (chunktype, body) = decode_header_without_hash(&bytes)?;
|
||||
let bytes = if chunktype == BLOCK_TYPE_DEFLATE {
|
||||
decompress_chunk(0..PREAMBLE_BYTES, body, bytes)?
|
||||
} else {
|
||||
ChangeBytes::Uncompressed(bytes)
|
||||
};
|
||||
|
||||
let (chunktype, hash, body) = decode_header(bytes.uncompressed())?;
|
||||
|
||||
if chunktype != BLOCK_TYPE_CHANGE {
|
||||
return Err(decoding::Error::WrongType {
|
||||
expected_one_of: vec![BLOCK_TYPE_CHANGE],
|
||||
found: chunktype,
|
||||
});
|
||||
}
|
||||
|
||||
let body_start = body.start;
|
||||
let mut cursor = body;
|
||||
|
||||
let deps = decode_hashes(bytes.uncompressed(), &mut cursor)?;
|
||||
|
||||
let actor =
|
||||
ActorId::from(&bytes.uncompressed()[slice_bytes(bytes.uncompressed(), &mut cursor)?]);
|
||||
let seq = read_slice(bytes.uncompressed(), &mut cursor)?;
|
||||
let start_op = read_slice(bytes.uncompressed(), &mut cursor)?;
|
||||
let time = read_slice(bytes.uncompressed(), &mut cursor)?;
|
||||
let message = slice_bytes(bytes.uncompressed(), &mut cursor)?;
|
||||
|
||||
let actors = decode_actors(bytes.uncompressed(), &mut cursor, Some(actor))?;
|
||||
|
||||
let ops_info = decode_column_info(bytes.uncompressed(), &mut cursor, false)?;
|
||||
let ops = decode_columns(&mut cursor, &ops_info);
|
||||
|
||||
Ok(Change {
|
||||
bytes,
|
||||
body_start,
|
||||
hash,
|
||||
seq,
|
||||
start_op,
|
||||
time,
|
||||
actors,
|
||||
message,
|
||||
deps,
|
||||
ops,
|
||||
extra_bytes: cursor,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
fn decompress_chunk(
|
||||
preamble: Range<usize>,
|
||||
body: Range<usize>,
|
||||
compressed: Vec<u8>,
|
||||
) -> Result<ChangeBytes, decoding::Error> {
|
||||
let mut decoder = DeflateDecoder::new(&compressed[body]);
|
||||
let mut decompressed = Vec::new();
|
||||
decoder.read_to_end(&mut decompressed)?;
|
||||
let mut result = Vec::with_capacity(decompressed.len() + preamble.len());
|
||||
result.extend(&compressed[preamble]);
|
||||
result.push(BLOCK_TYPE_CHANGE);
|
||||
leb128::write::unsigned::<Vec<u8>>(&mut result, decompressed.len() as u64).unwrap();
|
||||
result.extend(decompressed);
|
||||
Ok(ChangeBytes::Compressed {
|
||||
uncompressed: result,
|
||||
compressed,
|
||||
})
|
||||
}
|
||||
|
||||
fn decode_hashes(
|
||||
bytes: &[u8],
|
||||
cursor: &mut Range<usize>,
|
||||
) -> Result<Vec<amp::ChangeHash>, decoding::Error> {
|
||||
let num_hashes = read_slice(bytes, cursor)?;
|
||||
let mut hashes = Vec::with_capacity(num_hashes);
|
||||
for _ in 0..num_hashes {
|
        let hash = cursor.start..(cursor.start + HASH_BYTES);
        *cursor = hash.end..cursor.end;
        hashes.push(
            bytes
                .get(hash)
                .ok_or(decoding::Error::NotEnoughBytes)?
                .try_into()
                .map_err(InvalidChangeError::from)?,
        );
    }
    Ok(hashes)
}

fn decode_actors(
    bytes: &[u8],
    cursor: &mut Range<usize>,
    first: Option<ActorId>,
) -> Result<Vec<ActorId>, decoding::Error> {
    let num_actors: usize = read_slice(bytes, cursor)?;
    let mut actors = Vec::with_capacity(num_actors + 1);
    if let Some(actor) = first {
        actors.push(actor);
    }
    for _ in 0..num_actors {
        actors.push(ActorId::from(
            bytes
                .get(slice_bytes(bytes, cursor)?)
                .ok_or(decoding::Error::NotEnoughBytes)?,
        ));
    }
    Ok(actors)
}

fn decode_column_info(
    bytes: &[u8],
    cursor: &mut Range<usize>,
    allow_compressed_column: bool,
) -> Result<Vec<(u32, usize)>, decoding::Error> {
    let num_columns = read_slice(bytes, cursor)?;
    let mut columns = Vec::with_capacity(num_columns);
    let mut last_id = 0;
    for _ in 0..num_columns {
        let id: u32 = read_slice(bytes, cursor)?;
        if (id & !COLUMN_TYPE_DEFLATE) <= (last_id & !COLUMN_TYPE_DEFLATE) {
            return Err(decoding::Error::ColumnsNotInAscendingOrder {
                last: last_id,
                found: id,
            });
        }
        if id & COLUMN_TYPE_DEFLATE != 0 && !allow_compressed_column {
            return Err(decoding::Error::ChangeContainedCompressedColumns);
        }
        last_id = id;
        let length = read_slice(bytes, cursor)?;
        columns.push((id, length));
    }
    Ok(columns)
}

fn decode_columns(
    cursor: &mut Range<usize>,
    columns: &[(u32, usize)],
) -> HashMap<u32, Range<usize>> {
    let mut ops = HashMap::new();
    for (id, length) in columns {
        let start = cursor.start;
        let end = start + length;
        *cursor = end..cursor.end;
        ops.insert(*id, start..end);
    }
    ops
}

fn decode_header(bytes: &[u8]) -> Result<(u8, amp::ChangeHash, Range<usize>), decoding::Error> {
    let (chunktype, body) = decode_header_without_hash(bytes)?;

    let calculated_hash = Sha256::digest(&bytes[PREAMBLE_BYTES..]);

    let checksum = &bytes[4..8];
    if checksum != &calculated_hash[0..4] {
        return Err(decoding::Error::InvalidChecksum {
            found: checksum.try_into().unwrap(),
            calculated: calculated_hash[0..4].try_into().unwrap(),
        });
    }

    let hash = calculated_hash[..]
        .try_into()
        .map_err(InvalidChangeError::from)?;

    Ok((chunktype, hash, body))
}

fn decode_header_without_hash(bytes: &[u8]) -> Result<(u8, Range<usize>), decoding::Error> {
    if bytes.len() <= HEADER_BYTES {
        return Err(decoding::Error::NotEnoughBytes);
    }

    if bytes[0..4] != MAGIC_BYTES {
        return Err(decoding::Error::WrongMagicBytes);
    }

    let (val, len) = read_leb128(&mut &bytes[HEADER_BYTES..])?;
    let body = (HEADER_BYTES + len)..(HEADER_BYTES + len + val);
    if bytes.len() != body.end {
        return Err(decoding::Error::WrongByteLength {
            expected: body.end,
            found: bytes.len(),
        });
    }

    let chunktype = bytes[PREAMBLE_BYTES];

    Ok((chunktype, body))
}
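
// A hedged sketch (not part of the diff) of the chunk layout the two header
// decoders above assume. PREAMBLE_BYTES and HEADER_BYTES are defined earlier
// in this file, so the concrete offsets shown here are illustrative only:
//
//   bytes[0..4]   MAGIC_BYTES
//   bytes[4..8]   checksum: the first 4 bytes of Sha256(bytes[PREAMBLE_BYTES..])
//   bytes[8]      chunk type (document, change, or deflated change)
//   bytes[9..]    LEB128-encoded body length `val` (over `len` bytes), then the body
//
// decode_header_without_hash validates the magic bytes and the total length;
// decode_header additionally verifies the checksum against the computed hash.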

fn load_blocks(bytes: &[u8]) -> Result<Vec<Change>, AutomergeError> {
    let mut changes = Vec::new();
    for slice in split_blocks(bytes)? {
        decode_block(slice, &mut changes)?;
    }
    Ok(changes)
}

fn split_blocks(bytes: &[u8]) -> Result<Vec<&[u8]>, decoding::Error> {
    // split off all valid blocks - ignore the rest if it's corrupted or truncated
    let mut blocks = Vec::new();
    let mut cursor = bytes;
    while let Some(block) = pop_block(cursor)? {
        blocks.push(&cursor[block.clone()]);
        if cursor.len() <= block.end {
            break;
        }
        cursor = &cursor[block.end..];
    }
    Ok(blocks)
}

fn pop_block(bytes: &[u8]) -> Result<Option<Range<usize>>, decoding::Error> {
    if bytes.len() < 4 || bytes[0..4] != MAGIC_BYTES {
        // not reporting an error here - file got corrupted?
        return Ok(None);
    }
    let (val, len) = read_leb128(
        &mut bytes
            .get(HEADER_BYTES..)
            .ok_or(decoding::Error::NotEnoughBytes)?,
    )?;
    // val is arbitrary so it could overflow
    let end = (HEADER_BYTES + len)
        .checked_add(val)
        .ok_or(decoding::Error::Overflow)?;
    if end > bytes.len() {
        // not reporting an error here - file got truncated?
        return Ok(None);
    }
    Ok(Some(0..end))
}

fn decode_block(bytes: &[u8], changes: &mut Vec<Change>) -> Result<(), decoding::Error> {
    match bytes[PREAMBLE_BYTES] {
        BLOCK_TYPE_DOC => {
            changes.extend(decode_document(bytes)?);
            Ok(())
        }
        BLOCK_TYPE_CHANGE | BLOCK_TYPE_DEFLATE => {
            changes.push(Change::try_from(bytes.to_vec())?);
            Ok(())
        }
        found => Err(decoding::Error::WrongType {
            expected_one_of: vec![BLOCK_TYPE_DOC, BLOCK_TYPE_CHANGE, BLOCK_TYPE_DEFLATE],
            found,
        }),
    }
}

fn decode_document(bytes: &[u8]) -> Result<Vec<Change>, decoding::Error> {
    let (chunktype, _hash, mut cursor) = decode_header(bytes)?;

    // chunktype == 0 is a document, chunktype == 1 is a change
    if chunktype > 0 {
        return Err(decoding::Error::WrongType {
            expected_one_of: vec![0],
            found: chunktype,
        });
    }

    let actors = decode_actors(bytes, &mut cursor, None)?;

    let heads = decode_hashes(bytes, &mut cursor)?;

    let changes_info = decode_column_info(bytes, &mut cursor, true)?;
    let ops_info = decode_column_info(bytes, &mut cursor, true)?;

    let changes_data = decode_columns(&mut cursor, &changes_info);
    let mut doc_changes = ChangeIterator::new(bytes, &changes_data).collect::<Vec<_>>();
    let doc_changes_deps = DepsIterator::new(bytes, &changes_data);

    let doc_changes_len = doc_changes.len();

    let ops_data = decode_columns(&mut cursor, &ops_info);
    let doc_ops: Vec<_> = DocOpIterator::new(bytes, &actors, &ops_data).collect();

    group_doc_change_and_doc_ops(&mut doc_changes, doc_ops, &actors)?;

    let uncompressed_changes =
        doc_changes_to_uncompressed_changes(doc_changes.into_iter(), &actors);

    let changes = compress_doc_changes(uncompressed_changes, doc_changes_deps, doc_changes_len)
        .ok_or(decoding::Error::NoDocChanges)?;

    let mut calculated_heads = HashSet::new();
    for change in &changes {
        for dep in &change.deps {
            calculated_heads.remove(dep);
        }
        calculated_heads.insert(change.hash);
    }

    if calculated_heads != heads.into_iter().collect::<HashSet<_>>() {
        return Err(decoding::Error::MismatchedHeads);
    }

    Ok(changes)
}

fn compress_doc_changes(
    uncompressed_changes: impl Iterator<Item = amp::Change>,
    doc_changes_deps: impl Iterator<Item = Vec<usize>>,
    num_changes: usize,
) -> Option<Vec<Change>> {
    let mut changes: Vec<Change> = Vec::with_capacity(num_changes);

    // fill out the hashes as we go
    for (deps, mut uncompressed_change) in doc_changes_deps.zip_eq(uncompressed_changes) {
        for idx in deps {
            uncompressed_change.deps.push(changes.get(idx)?.hash);
        }
        changes.push(uncompressed_change.into());
    }

    Some(changes)
}
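
// Illustrative note (not from the diff): each change's deps arrive as indices
// into the topologically ordered list of document changes, so hashes can be
// filled in as we go. E.g. with deps [[], [0], [0, 1]], change 1 picks up the
// hash of change 0, and change 2 the hashes of changes 0 and 1; an index at
// or past the current position makes `changes.get(idx)` return None, which
// aborts decoding.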

fn group_doc_change_and_doc_ops(
    changes: &mut [DocChange],
    mut ops: Vec<DocOp>,
    actors: &[ActorId],
) -> Result<(), decoding::Error> {
    let mut changes_by_actor: HashMap<usize, Vec<usize>> = HashMap::new();

    for (i, change) in changes.iter().enumerate() {
        let actor_change_index = changes_by_actor.entry(change.actor).or_default();
        if change.seq != (actor_change_index.len() + 1) as u64 {
            return Err(decoding::Error::ChangeDecompressFailed(
                "Doc Seq Invalid".into(),
            ));
        }
        if change.actor >= actors.len() {
            return Err(decoding::Error::ChangeDecompressFailed(
                "Doc Actor Invalid".into(),
            ));
        }
        actor_change_index.push(i);
    }

    let mut op_by_id = HashMap::new();
    ops.iter().enumerate().for_each(|(i, op)| {
        op_by_id.insert((op.ctr, op.actor), i);
    });

    for i in 0..ops.len() {
        let op = ops[i].clone(); // this is safe - avoid borrow checker issues
        //let id = (op.ctr, op.actor);
        //op_by_id.insert(id, i);
        for succ in &op.succ {
            if let Some(index) = op_by_id.get(succ) {
                ops[*index].pred.push((op.ctr, op.actor));
            } else {
                let key = if op.insert {
                    amp::OpId(op.ctr, actors[op.actor].clone()).into()
                } else {
                    op.key.clone()
                };
                let del = DocOp {
                    actor: succ.1,
                    ctr: succ.0,
                    action: OpType::Delete,
                    obj: op.obj.clone(),
                    key,
                    succ: Vec::new(),
                    pred: vec![(op.ctr, op.actor)],
                    insert: false,
                };
                op_by_id.insert(*succ, ops.len());
                ops.push(del);
            }
        }
    }

    for op in ops {
        // binary search for our change
        let actor_change_index = changes_by_actor.entry(op.actor).or_default();
        let mut left = 0;
        let mut right = actor_change_index.len();
        while left < right {
            let seq = (left + right) / 2;
            if changes[actor_change_index[seq]].max_op < op.ctr {
                left = seq + 1;
            } else {
                right = seq;
            }
        }
        if left >= actor_change_index.len() {
            return Err(decoding::Error::ChangeDecompressFailed(
                "Doc MaxOp Invalid".into(),
            ));
        }
        changes[actor_change_index[left]].ops.push(op);
    }

    changes
        .iter_mut()
        .for_each(|change| change.ops.sort_unstable());

    Ok(())
}

fn doc_changes_to_uncompressed_changes<'a>(
    changes: impl Iterator<Item = DocChange> + 'a,
    actors: &'a [ActorId],
) -> impl Iterator<Item = amp::Change> + 'a {
    changes.map(move |change| amp::Change {
        // we've already confirmed that all change.actor's are valid
        actor_id: actors[change.actor].clone(),
        seq: change.seq,
        time: change.time,
        // SAFETY: this unwrap is safe as we always add 1
        start_op: NonZeroU64::new(change.max_op - change.ops.len() as u64 + 1).unwrap(),
        hash: None,
        message: change.message,
        operations: change
            .ops
            .into_iter()
            .map(|op| amp::Op {
                action: op.action.clone(),
                insert: op.insert,
                key: op.key,
                obj: op.obj,
                // we've already confirmed that all op.actor's are valid
                pred: pred_into(op.pred.into_iter(), actors),
            })
            .collect(),
        deps: Vec::new(),
        extra_bytes: change.extra_bytes,
    })
}

fn pred_into(
    pred: impl Iterator<Item = (u64, usize)>,
    actors: &[ActorId],
) -> amp::SortedVec<amp::OpId> {
    pred.map(|(ctr, actor)| amp::OpId(ctr, actors[actor].clone()))
        .collect()
}

#[cfg(test)]
mod tests {
    use crate::legacy as amp;

    #[test]
    fn mismatched_head_repro_one() {
        let op_json = serde_json::json!({
            "ops": [
                {
                    "action": "del",
                    "obj": "1@1485eebc689d47efbf8b892e81653eb3",
                    "elemId": "3164@0dcdf83d9594477199f80ccd25e87053",
                    "pred": [
                        "3164@0dcdf83d9594477199f80ccd25e87053"
                    ],
                    "insert": false
                },
            ],
            "actor": "e63cf5ed1f0a4fb28b2c5bc6793b9272",
            "hash": "e7fd5c02c8fdd2cdc3071ce898a5839bf36229678af3b940f347da541d147ae2",
            "seq": 1,
            "startOp": 3179,
            "time": 1634146652,
            "message": null,
            "deps": [
                "2603cded00f91e525507fc9e030e77f9253b239d90264ee343753efa99e3fec1"
            ]
        });

        let change: amp::Change = serde_json::from_value(op_json).unwrap();
        let expected_hash: super::amp::ChangeHash =
            "4dff4665d658a28bb6dcace8764eb35fa8e48e0a255e70b6b8cbf8e8456e5c50"
                .parse()
                .unwrap();
        let encoded: super::Change = change.into();
        assert_eq!(encoded.hash, expected_hash);
    }
}

@@ -1,52 +0,0 @@
use crate::types::OpId;
use fxhash::FxBuildHasher;
use std::cmp;
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Clock(HashMap<usize, u64, FxBuildHasher>);

impl Clock {
    pub(crate) fn new() -> Self {
        Clock(Default::default())
    }

    pub(crate) fn include(&mut self, key: usize, n: u64) {
        self.0
            .entry(key)
            .and_modify(|m| *m = cmp::max(n, *m))
            .or_insert(n);
    }

    pub(crate) fn covers(&self, id: &OpId) -> bool {
        if let Some(val) = self.0.get(&id.1) {
            val >= &id.0
        } else {
            false
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn covers() {
        let mut clock = Clock::new();

        clock.include(1, 20);
        clock.include(2, 10);

        assert!(clock.covers(&OpId(10, 1)));
        assert!(clock.covers(&OpId(20, 1)));
        assert!(!clock.covers(&OpId(30, 1)));

        assert!(clock.covers(&OpId(5, 2)));
        assert!(clock.covers(&OpId(10, 2)));
        assert!(!clock.covers(&OpId(15, 2)));

        assert!(!clock.covers(&OpId(1, 3)));
        assert!(!clock.covers(&OpId(100, 3)));
    }
}

File diff suppressed because it is too large

@@ -1,391 +0,0 @@
use core::fmt::Debug;
use std::{
    io,
    io::{Read, Write},
    mem,
    num::NonZeroU64,
};

use flate2::{bufread::DeflateEncoder, Compression};
use smol_str::SmolStr;

use crate::columnar::COLUMN_TYPE_DEFLATE;
use crate::ActorId;

pub(crate) const DEFLATE_MIN_SIZE: usize = 256;

/// The error type for encoding operations.
#[derive(Debug, thiserror::Error)]
pub enum Error {
    #[error(transparent)]
    Io(#[from] io::Error),
}

impl PartialEq<Error> for Error {
    fn eq(&self, other: &Error) -> bool {
        match (self, other) {
            (Self::Io(error1), Self::Io(error2)) => error1.kind() == error2.kind(),
        }
    }
}

/// Encodes booleans by storing the count of the same value.
///
/// The sequence of numbers describes the count of false values on even indices (0-indexed) and the
/// count of true values on odd indices (0-indexed).
///
/// Counts are encoded as usize.
pub(crate) struct BooleanEncoder {
    buf: Vec<u8>,
    last: bool,
    count: usize,
}

impl BooleanEncoder {
    pub(crate) fn new() -> BooleanEncoder {
        BooleanEncoder {
            buf: Vec::new(),
            last: false,
            count: 0,
        }
    }

    pub(crate) fn append(&mut self, value: bool) {
        if value == self.last {
            self.count += 1;
        } else {
            self.count.encode(&mut self.buf).ok();
            self.last = value;
            self.count = 1;
        }
    }

    pub(crate) fn finish(mut self, col: u32) -> ColData {
        if self.count > 0 {
            self.count.encode(&mut self.buf).ok();
        }
        ColData::new(col, self.buf)
    }
}
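
// Worked example (illustrative, not from the diff): encoding [true, true, false, true]
// starts from the implicit `last = false, count = 0`, so the emitted counts are
// 0 (falses), 2 (trues), 1 (false), 1 (true) - matching the even/odd convention
// described in the doc comment above.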

/// Encodes integers as the change since the previous value.
///
/// The initial value is 0 encoded as u64. Deltas are encoded as i64.
///
/// Run length encoding is then applied to the resulting sequence.
pub(crate) struct DeltaEncoder {
    rle: RleEncoder<i64>,
    absolute_value: u64,
}

impl DeltaEncoder {
    pub(crate) fn new() -> DeltaEncoder {
        DeltaEncoder {
            rle: RleEncoder::new(),
            absolute_value: 0,
        }
    }

    pub(crate) fn append_value(&mut self, value: u64) {
        self.rle
            .append_value(value as i64 - self.absolute_value as i64);
        self.absolute_value = value;
    }

    pub(crate) fn append_null(&mut self) {
        self.rle.append_null();
    }

    pub(crate) fn finish(self, col: u32) -> ColData {
        self.rle.finish(col)
    }
}
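
// Worked example (illustrative, not from the diff): appending the values
// 10, 11, 12, 15 emits the deltas 10, 1, 1, 3 (the first delta is taken
// against the initial absolute_value of 0), and the run of equal deltas
// 1, 1 then compresses well under the RLE layer below.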

enum RleState<T> {
    Empty,
    NullRun(usize),
    LiteralRun(T, Vec<T>),
    LoneVal(T),
    Run(T, usize),
}

/// Encodes data in run length encoding format. This is very efficient for long repeats of data.
///
/// There are 3 types of 'run' in this encoder:
/// - a normal run (compresses repeated values)
/// - a null run (compresses repeated nulls)
/// - a literal run (no compression)
///
/// A normal run consists of the length of the run (encoded as an i64) followed by the encoded value that this run contains.
///
/// A null run consists of a zero value (encoded as an i64) followed by the length of the null run (encoded as a usize).
///
/// A literal run consists of the **negative** length of the run (encoded as an i64) followed by the values in the run.
///
/// Therefore all the types start with an encoded i64, the value of which determines the type of the following data.
pub(crate) struct RleEncoder<T>
where
    T: Encodable + PartialEq + Clone,
{
    buf: Vec<u8>,
    state: RleState<T>,
}

impl<T> RleEncoder<T>
where
    T: Encodable + PartialEq + Clone,
{
    pub(crate) fn new() -> RleEncoder<T> {
        RleEncoder {
            buf: Vec::new(),
            state: RleState::Empty,
        }
    }

    pub(crate) fn finish(mut self, col: u32) -> ColData {
        match self.take_state() {
            // this covers `only_nulls`
            RleState::NullRun(size) => {
                if !self.buf.is_empty() {
                    self.flush_null_run(size);
                }
            }
            RleState::LoneVal(value) => self.flush_lit_run(vec![value]),
            RleState::Run(value, len) => self.flush_run(&value, len),
            RleState::LiteralRun(last, mut run) => {
                run.push(last);
                self.flush_lit_run(run);
            }
            RleState::Empty => {}
        }
        ColData::new(col, self.buf)
    }

    fn flush_run(&mut self, val: &T, len: usize) {
        self.encode(&(len as i64));
        self.encode(val);
    }

    fn flush_null_run(&mut self, len: usize) {
        self.encode::<i64>(&0);
        self.encode(&len);
    }

    fn flush_lit_run(&mut self, run: Vec<T>) {
        self.encode(&-(run.len() as i64));
        for val in run {
            self.encode(&val);
        }
    }

    fn take_state(&mut self) -> RleState<T> {
        let mut state = RleState::Empty;
        mem::swap(&mut self.state, &mut state);
        state
    }

    pub(crate) fn append_null(&mut self) {
        self.state = match self.take_state() {
            RleState::Empty => RleState::NullRun(1),
            RleState::NullRun(size) => RleState::NullRun(size + 1),
            RleState::LoneVal(other) => {
                self.flush_lit_run(vec![other]);
                RleState::NullRun(1)
            }
            RleState::Run(other, len) => {
                self.flush_run(&other, len);
                RleState::NullRun(1)
            }
            RleState::LiteralRun(last, mut run) => {
                run.push(last);
                self.flush_lit_run(run);
                RleState::NullRun(1)
            }
        }
    }

    pub(crate) fn append_value(&mut self, value: T) {
        self.state = match self.take_state() {
            RleState::Empty => RleState::LoneVal(value),
            RleState::LoneVal(other) => {
                if other == value {
                    RleState::Run(value, 2)
                } else {
                    let mut v = Vec::with_capacity(2);
                    v.push(other);
                    RleState::LiteralRun(value, v)
                }
            }
            RleState::Run(other, len) => {
                if other == value {
                    RleState::Run(other, len + 1)
                } else {
                    self.flush_run(&other, len);
                    RleState::LoneVal(value)
                }
            }
            RleState::LiteralRun(last, mut run) => {
                if last == value {
                    self.flush_lit_run(run);
                    RleState::Run(value, 2)
                } else {
                    run.push(last);
                    RleState::LiteralRun(value, run)
                }
            }
            RleState::NullRun(size) => {
                self.flush_null_run(size);
                RleState::LoneVal(value)
            }
        }
    }

    fn encode<V>(&mut self, val: &V)
    where
        V: Encodable,
    {
        val.encode(&mut self.buf).ok();
    }
}
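
// Worked example (illustrative, not from the diff): appending
// 7, 7, 7, null, null, 1, 2 produces three runs in `buf`:
//   3, 7      - a normal run: length 3 (i64) then the value 7
//   0, 2      - a null run: the 0 marker (i64) then the count 2 (usize)
//   -2, 1, 2  - a literal run: negative length -2 (i64) then the raw values
// The sign of the leading i64 is what lets a decoder tell the three run
// types apart.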

pub(crate) trait Encodable {
    fn encode_with_actors_to_vec(&self, actors: &mut [ActorId]) -> io::Result<Vec<u8>> {
        let mut buf = Vec::new();
        self.encode_with_actors(&mut buf, actors)?;
        Ok(buf)
    }

    fn encode_with_actors<R: Write>(&self, buf: &mut R, _actors: &[ActorId]) -> io::Result<usize> {
        self.encode(buf)
    }

    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize>;

    fn encode_vec(&self, buf: &mut Vec<u8>) -> usize {
        self.encode(buf).unwrap()
    }
}

impl Encodable for SmolStr {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        let bytes = self.as_bytes();
        let head = bytes.len().encode(buf)?;
        buf.write_all(bytes)?;
        Ok(head + bytes.len())
    }
}

impl Encodable for String {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        let bytes = self.as_bytes();
        let head = bytes.len().encode(buf)?;
        buf.write_all(bytes)?;
        Ok(head + bytes.len())
    }
}

impl Encodable for Option<String> {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        if let Some(s) = self {
            s.encode(buf)
        } else {
            0.encode(buf)
        }
    }
}

impl Encodable for u64 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        leb128::write::unsigned(buf, *self)
    }
}
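
// Worked example (illustrative, not from the diff): LEB128 stores 7 bits per
// byte, with the high bit flagging a continuation. Encoding 300u64 yields
// [0xAC, 0x02]: 300 = 0b1_0010_1100, low 7 bits 0101100 -> 0xAC (continuation
// bit set), remaining bits 10 -> 0x02. Values below 128 fit in a single byte.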

impl Encodable for NonZeroU64 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        leb128::write::unsigned(buf, self.get())
    }
}

impl Encodable for f64 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        let bytes = self.to_le_bytes();
        buf.write_all(&bytes)?;
        Ok(bytes.len())
    }
}

impl Encodable for f32 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        let bytes = self.to_le_bytes();
        buf.write_all(&bytes)?;
        Ok(bytes.len())
    }
}

impl Encodable for i64 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        leb128::write::signed(buf, *self)
    }
}

impl Encodable for usize {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        (*self as u64).encode(buf)
    }
}

impl Encodable for u32 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        u64::from(*self).encode(buf)
    }
}

impl Encodable for i32 {
    fn encode<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        i64::from(*self).encode(buf)
    }
}

#[derive(Debug)]
pub(crate) struct ColData {
    pub(crate) col: u32,
    pub(crate) data: Vec<u8>,
    #[cfg(debug_assertions)]
    has_been_deflated: bool,
}

impl ColData {
    pub(crate) fn new(col_id: u32, data: Vec<u8>) -> ColData {
        ColData {
            col: col_id,
            data,
            #[cfg(debug_assertions)]
            has_been_deflated: false,
        }
    }

    pub(crate) fn encode_col_len<R: Write>(&self, buf: &mut R) -> io::Result<usize> {
        let mut len = 0;
        if !self.data.is_empty() {
            len += self.col.encode(buf)?;
            len += self.data.len().encode(buf)?;
        }
        Ok(len)
    }

    pub(crate) fn deflate(&mut self) {
        #[cfg(debug_assertions)]
        {
            debug_assert!(!self.has_been_deflated);
            self.has_been_deflated = true;
        }
        if self.data.len() > DEFLATE_MIN_SIZE {
            let mut deflated = Vec::new();
            let mut deflater = DeflateEncoder::new(&self.data[..], Compression::default());
            // this unwrap should be okay as we're reading and writing to in-memory buffers
            deflater.read_to_end(&mut deflated).unwrap();
            self.col |= COLUMN_TYPE_DEFLATE;
            self.data = deflated;
        }
    }
}

@@ -1,68 +0,0 @@
use crate::types::{ActorId, ScalarValue};
use crate::value::DataType;
use crate::{decoding, encoding, ChangeHash};
use thiserror::Error;

#[derive(Error, Debug, PartialEq)]
pub enum AutomergeError {
    #[error("invalid obj id format `{0}`")]
    InvalidObjIdFormat(String),
    #[error("invalid obj id `{0}`")]
    InvalidObjId(String),
    #[error("there was an encoding problem: {0}")]
    Encoding(#[from] encoding::Error),
    #[error("there was a decoding problem: {0}")]
    Decoding(#[from] decoding::Error),
    #[error("key must not be an empty string")]
    EmptyStringKey,
    #[error("invalid seq {0}")]
    InvalidSeq(u64),
    #[error("index {0} is out of bounds")]
    InvalidIndex(usize),
    #[error("duplicate seq {0} found for actor {1}")]
    DuplicateSeqNumber(u64, ActorId),
    #[error("invalid hash {0}")]
    InvalidHash(ChangeHash),
    #[error("increment operations must be against a counter value")]
    MissingCounter,
    #[error("general failure")]
    Fail,
    #[error(transparent)]
    HexDecode(#[from] hex::FromHexError),
}

#[cfg(feature = "wasm")]
impl From<AutomergeError> for wasm_bindgen::JsValue {
    fn from(err: AutomergeError) -> Self {
        js_sys::Error::new(&std::format!("{}", err)).into()
    }
}

#[derive(Error, Debug)]
#[error("Invalid actor ID: {0}")]
pub struct InvalidActorId(pub String);

#[derive(Error, Debug, PartialEq)]
#[error("Invalid scalar value, expected {expected} but received {unexpected}")]
pub(crate) struct InvalidScalarValue {
    pub(crate) raw_value: ScalarValue,
    pub(crate) datatype: DataType,
    pub(crate) unexpected: String,
    pub(crate) expected: String,
}

#[derive(Error, Debug, PartialEq)]
#[error("Invalid change hash slice: {0:?}")]
pub struct InvalidChangeHashSlice(pub Vec<u8>);

#[derive(Error, Debug, PartialEq)]
#[error("Invalid object ID: {0}")]
pub struct InvalidObjectId(pub String);

#[derive(Error, Debug)]
#[error("Invalid element ID: {0}")]
pub struct InvalidElementId(pub String);

#[derive(Error, Debug)]
#[error("Invalid OpID: {0}")]
pub struct InvalidOpId(pub String);

@@ -1,82 +0,0 @@
use crate::ActorId;
use serde::Serialize;
use serde::Serializer;
use std::cmp::{Ord, Ordering};
use std::fmt;
use std::hash::{Hash, Hasher};

#[derive(Debug, Clone)]
pub enum ExId {
    Root,
    Id(u64, ActorId, usize),
}

impl PartialEq for ExId {
    fn eq(&self, other: &Self) -> bool {
        match (self, other) {
            (ExId::Root, ExId::Root) => true,
            (ExId::Id(ctr1, actor1, _), ExId::Id(ctr2, actor2, _))
                if ctr1 == ctr2 && actor1 == actor2 =>
            {
                true
            }
            _ => false,
        }
    }
}

impl Eq for ExId {}

impl fmt::Display for ExId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ExId::Root => write!(f, "_root"),
            ExId::Id(ctr, actor, _) => write!(f, "{}@{}", ctr, actor),
        }
    }
}

impl Hash for ExId {
    fn hash<H: Hasher>(&self, state: &mut H) {
        match self {
            ExId::Root => 0.hash(state),
            ExId::Id(ctr, actor, _) => {
                ctr.hash(state);
                actor.hash(state);
            }
        }
    }
}

impl Ord for ExId {
    fn cmp(&self, other: &Self) -> Ordering {
        match (self, other) {
            (ExId::Root, ExId::Root) => Ordering::Equal,
            (ExId::Root, _) => Ordering::Less,
            (_, ExId::Root) => Ordering::Greater,
            (ExId::Id(c1, a1, _), ExId::Id(c2, a2, _)) if c1 == c2 => a2.cmp(a1),
            (ExId::Id(c1, _, _), ExId::Id(c2, _, _)) => c1.cmp(c2),
        }
    }
}
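
// Ordering sketch (illustrative, not from the diff):
//   ExId::Root sorts before every ExId::Id;
//   ids with different counters order by counter, so Id(1, a, _) < Id(2, a, _);
//   ids with equal counters fall back to the actor comparison - note the
//   reversed operand order `a2.cmp(a1)` in the arm above.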

impl PartialOrd for ExId {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Serialize for ExId {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        serializer.serialize_str(self.to_string().as_str())
    }
}

impl AsRef<ExId> for ExId {
    fn as_ref(&self) -> &ExId {
        self
    }
}

@@ -1,111 +0,0 @@
#![doc(
    html_logo_url = "https://raw.githubusercontent.com/automerge/automerge-rs/main/img/brandmark.svg",
    html_favicon_url = "https://raw.githubusercontent.com/automerge/automerge-rs/main/img/favicon.ico"
)]
#![warn(
    missing_debug_implementations,
    // missing_docs, // TODO: add documentation!
    rust_2018_idioms,
    unreachable_pub,
    bad_style,
    const_err,
    dead_code,
    improper_ctypes,
    non_shorthand_field_patterns,
    no_mangle_generic_items,
    overflowing_literals,
    path_statements,
    patterns_in_fns_without_body,
    private_in_public,
    unconditional_recursion,
    unused,
    unused_allocation,
    unused_comparisons,
    unused_parens,
    while_true
)]

#[doc(hidden)]
#[macro_export]
macro_rules! log {
    ( $( $t:tt )* ) => {
        {
            use $crate::__log;
            __log!( $( $t )* );
        }
    }
}

#[cfg(all(feature = "wasm", target_family = "wasm"))]
#[doc(hidden)]
#[macro_export]
macro_rules! __log {
    ( $( $t:tt )* ) => {
        web_sys::console::log_1(&format!( $( $t )* ).into());
    }
}

#[cfg(not(all(feature = "wasm", target_family = "wasm")))]
#[doc(hidden)]
#[macro_export]
macro_rules! __log {
    ( $( $t:tt )* ) => {
        println!( $( $t )* );
    }
}

mod autocommit;
mod automerge;
mod change;
mod clock;
mod columnar;
mod decoding;
mod encoding;
mod error;
mod exid;
mod indexed_cache;
mod keys;
mod keys_at;
mod legacy;
mod object_data;
mod op_observer;
mod op_set;
mod op_tree;
mod options;
mod parents;
mod query;
mod range;
mod range_at;
pub mod sync;
pub mod transaction;
mod types;
mod value;
mod values;
mod values_at;
#[cfg(feature = "optree-visualisation")]
mod visualisation;

pub use crate::automerge::Automerge;
pub use autocommit::AutoCommit;
pub use change::Change;
pub use decoding::Error as DecodingError;
pub use decoding::InvalidChangeError;
pub use encoding::Error as EncodingError;
pub use error::AutomergeError;
pub use exid::ExId as ObjId;
pub use keys::Keys;
pub use keys_at::KeysAt;
pub use legacy::Change as ExpandedChange;
pub use op_observer::OpObserver;
pub use op_observer::Patch;
pub use op_observer::VecOpObserver;
pub use options::ApplyOptions;
pub use parents::Parents;
pub use range::Range;
pub use range_at::RangeAt;
pub use types::{ActorId, ChangeHash, ObjType, OpType, Prop};
pub use value::{ScalarValue, Value};
pub use values::Values;
pub use values_at::ValuesAt;

pub const ROOT: ObjId = ObjId::Root;

@@ -1,176 +0,0 @@
use std::ops::RangeBounds;
use std::sync::{Arc, Mutex};

use crate::clock::Clock;
use crate::op_tree::{OpSetMetadata, OpTreeInternal};
use crate::query::{self, TreeQuery};
use crate::types::{Key, ObjId};
use crate::types::{Op, OpId};
use crate::{query::Keys, query::KeysAt, ObjType};

#[derive(Debug, Default, Clone, PartialEq)]
pub(crate) struct MapOpsCache {
    pub(crate) last: Option<(Key, usize)>,
}

impl MapOpsCache {
    fn lookup<'a, Q: TreeQuery<'a>>(&self, query: &mut Q) -> bool {
        query.cache_lookup_map(self)
    }

    fn update<'a, Q: TreeQuery<'a>>(&mut self, query: &Q) {
        query.cache_update_map(self);
        // TODO: fixup the cache (reordering etc.)
    }
}

#[derive(Debug, Default, Clone, PartialEq)]
pub(crate) struct SeqOpsCache {
    // last insertion (list index, tree index, whether the last op was an insert, opid to be inserted)
    // TODO: invalidation
    pub(crate) last: Option<(usize, usize, bool, OpId)>,
}

impl SeqOpsCache {
    fn lookup<'a, Q: TreeQuery<'a>>(&self, query: &mut Q) -> bool {
        query.cache_lookup_seq(self)
    }

    fn update<'a, Q: TreeQuery<'a>>(&mut self, query: &Q) {
        query.cache_update_seq(self);
        // TODO: fixup the cache (reordering etc.)
    }
}

/// Stores the data for an object.
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct ObjectData {
    cache: ObjectDataCache,
    /// The type of this object.
    typ: ObjType,
    /// The operations pertaining to this object.
    pub(crate) ops: OpTreeInternal,
    /// The id of the parent object, root has no parent.
    pub(crate) parent: Option<ObjId>,
}

#[derive(Debug, Clone)]
pub(crate) enum ObjectDataCache {
    Map(Arc<Mutex<MapOpsCache>>),
    Seq(Arc<Mutex<SeqOpsCache>>),
}

impl PartialEq for ObjectDataCache {
    fn eq(&self, other: &ObjectDataCache) -> bool {
        match (self, other) {
            (ObjectDataCache::Map(_), ObjectDataCache::Map(_)) => true,
            (ObjectDataCache::Map(_), ObjectDataCache::Seq(_)) => false,
            (ObjectDataCache::Seq(_), ObjectDataCache::Map(_)) => false,
            (ObjectDataCache::Seq(_), ObjectDataCache::Seq(_)) => true,
        }
    }
}

impl ObjectData {
    pub(crate) fn root() -> Self {
        ObjectData {
            cache: ObjectDataCache::Map(Default::default()),
            typ: ObjType::Map,
            ops: Default::default(),
            parent: None,
        }
    }

    pub(crate) fn new(typ: ObjType, parent: Option<ObjId>) -> Self {
        let internal = match typ {
            ObjType::Map | ObjType::Table => ObjectDataCache::Map(Default::default()),
            ObjType::List | ObjType::Text => ObjectDataCache::Seq(Default::default()),
        };
        ObjectData {
            cache: internal,
            typ,
            ops: Default::default(),
            parent,
        }
    }

    pub(crate) fn keys(&self) -> Option<Keys<'_>> {
        self.ops.keys()
    }

    pub(crate) fn keys_at(&self, clock: Clock) -> Option<KeysAt<'_>> {
        self.ops.keys_at(clock)
    }

    pub(crate) fn range<'a, R: RangeBounds<String>>(
        &'a self,
        range: R,
        meta: &'a OpSetMetadata,
    ) -> Option<query::Range<'a, R>> {
        self.ops.range(range, meta)
    }

    pub(crate) fn range_at<'a, R: RangeBounds<String>>(
        &'a self,
        range: R,
        meta: &'a OpSetMetadata,
        clock: Clock,
    ) -> Option<query::RangeAt<'a, R>> {
        self.ops.range_at(range, meta, clock)
    }

    pub(crate) fn search<'a, 'b: 'a, Q>(&'b self, mut query: Q, metadata: &OpSetMetadata) -> Q
    where
        Q: TreeQuery<'a>,
    {
        match self {
            ObjectData {
                ops,
                cache: ObjectDataCache::Map(cache),
                ..
            } => {
                let mut cache = cache.lock().unwrap();
                if !cache.lookup(&mut query) {
                    query = ops.search(query, metadata);
                }
                cache.update(&query);
                query
            }
            ObjectData {
                ops,
                cache: ObjectDataCache::Seq(cache),
                ..
            } => {
                let mut cache = cache.lock().unwrap();
                if !cache.lookup(&mut query) {
                    query = ops.search(query, metadata);
                }
                cache.update(&query);
                query
            }
        }
    }

    pub(crate) fn update<F>(&mut self, index: usize, f: F)
    where
        F: FnOnce(&mut Op),
    {
        self.ops.update(index, f)
    }

    pub(crate) fn remove(&mut self, index: usize) -> Op {
        self.ops.remove(index)
    }

    pub(crate) fn insert(&mut self, index: usize, op: Op) {
        self.ops.insert(index, op)
    }

    pub(crate) fn typ(&self) -> ObjType {
        self.typ
    }

    pub(crate) fn get(&self, index: usize) -> Option<&Op> {
        self.ops.get(index)
    }
}

@@ -1,135 +0,0 @@
use crate::exid::ExId;
use crate::Prop;
use crate::Value;

/// An observer of operations applied to the document.
pub trait OpObserver {
    /// A new value has been inserted into the given object.
    ///
    /// - `objid`: the object that has been inserted into.
    /// - `index`: the index the new value has been inserted at.
    /// - `tagged_value`: the value that has been inserted and the id of the operation that did the
    ///   insert.
    fn insert(&mut self, objid: ExId, index: usize, tagged_value: (Value<'_>, ExId));

    /// A new value has been put into the given object.
    ///
    /// - `objid`: the object that has been put into.
    /// - `key`: the key that the value has been put at.
    /// - `tagged_value`: the value that has been put into the object and the id of the operation
    ///   that did the put.
    /// - `conflict`: whether this put conflicts with other operations.
    fn put(&mut self, objid: ExId, key: Prop, tagged_value: (Value<'_>, ExId), conflict: bool);

    /// A counter has been incremented.
    ///
    /// - `objid`: the object that contains the counter.
    /// - `key`: the key that the counter is at.
    /// - `tagged_value`: the amount the counter has been incremented by, and the id of the
    ///   increment operation.
    fn increment(&mut self, objid: ExId, key: Prop, tagged_value: (i64, ExId));

    /// A value has been deleted.
    ///
    /// - `objid`: the object that has been deleted in.
    /// - `key`: the key of the value that has been deleted.
    fn delete(&mut self, objid: ExId, key: Prop);
}

impl OpObserver for () {
    fn insert(&mut self, _objid: ExId, _index: usize, _tagged_value: (Value<'_>, ExId)) {}

    fn put(&mut self, _objid: ExId, _key: Prop, _tagged_value: (Value<'_>, ExId), _conflict: bool) {
    }

    fn increment(&mut self, _objid: ExId, _key: Prop, _tagged_value: (i64, ExId)) {}

    fn delete(&mut self, _objid: ExId, _key: Prop) {}
}

/// Capture operations into a [`Vec`] and store them as patches.
#[derive(Default, Debug, Clone)]
pub struct VecOpObserver {
    patches: Vec<Patch>,
}

impl VecOpObserver {
    /// Take the current list of patches, leaving the internal list empty and ready for new
    /// patches.
    pub fn take_patches(&mut self) -> Vec<Patch> {
        std::mem::take(&mut self.patches)
    }
}

impl OpObserver for VecOpObserver {
    fn insert(&mut self, obj_id: ExId, index: usize, (value, id): (Value<'_>, ExId)) {
        self.patches.push(Patch::Insert {
            obj: obj_id,
            index,
            value: (value.into_owned(), id),
        });
    }

    fn put(&mut self, objid: ExId, key: Prop, (value, id): (Value<'_>, ExId), conflict: bool) {
        self.patches.push(Patch::Put {
            obj: objid,
            key,
            value: (value.into_owned(), id),
            conflict,
        });
    }

    fn increment(&mut self, objid: ExId, key: Prop, tagged_value: (i64, ExId)) {
        self.patches.push(Patch::Increment {
            obj: objid,
            key,
            value: tagged_value,
        });
    }

    fn delete(&mut self, objid: ExId, key: Prop) {
        self.patches.push(Patch::Delete { obj: objid, key })
    }
}

/// A notification to the application that something has changed in a document.
#[derive(Debug, Clone, PartialEq)]
pub enum Patch {
    /// Associating a new value with a key in a map, or an existing list element
    Put {
        /// The object that was put into.
        obj: ExId,
        /// The key that the new value was put at.
        key: Prop,
        /// The value that was put, and the id of the operation that put it there.
        value: (Value<'static>, ExId),
        /// Whether this put conflicts with another.
        conflict: bool,
    },
    /// Inserting a new element into a list/text
    Insert {
        /// The object that was inserted into.
        obj: ExId,
        /// The index that the new value was inserted at.
        index: usize,
        /// The value that was inserted, and the id of the operation that inserted it there.
        value: (Value<'static>, ExId),
    },
    /// Incrementing a counter.
    Increment {
        /// The object that was incremented in.
        obj: ExId,
        /// The key that was incremented.
        key: Prop,
        /// The amount that the counter was incremented by, and the id of the operation that
        /// did the increment.
        value: (i64, ExId),
    },
    /// Deleting an element from a list/text
    Delete {
        /// The object that was deleted from.
        obj: ExId,
        /// The key that was deleted.
        key: Prop,
    },
}
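
// A hedged usage sketch (not part of the diff): record operations on an
// observer while applying changes, then drain them as patches. `Prop::Map`
// is assumed from the `types` module, which is not shown in this hunk.
//
//   let mut observer = VecOpObserver::default();
//   observer.delete(ExId::Root, Prop::Map("title".into()));
//   let patches = observer.take_patches(); // the internal list is now empty
//   assert_eq!(patches.len(), 1);
//   assert!(observer.take_patches().is_empty());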

@@ -1,318 +0,0 @@
use crate::clock::Clock;
use crate::exid::ExId;
use crate::indexed_cache::IndexedCache;
use crate::object_data::ObjectData;
use crate::query::{self, OpIdSearch, TreeQuery};
use crate::types::{self, ActorId, Key, ObjId, Op, OpId, OpType};
use crate::{ObjType, OpObserver};
use fxhash::FxBuildHasher;
use std::cmp::Ordering;
use std::collections::HashMap;
use std::ops::RangeBounds;

pub(crate) type OpSet = OpSetInternal;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct OpSetInternal {
    /// The map of objects to their data.
    objects: HashMap<ObjId, ObjectData, FxBuildHasher>,
    /// The number of operations in the opset.
    length: usize,
    /// Metadata about the operations in this opset.
    pub(crate) m: OpSetMetadata,
}

impl OpSetInternal {
    pub(crate) fn new() -> Self {
        let mut objects: HashMap<_, _, _> = Default::default();
        objects.insert(ObjId::root(), ObjectData::root());
        OpSetInternal {
            objects,
            length: 0,
            m: OpSetMetadata {
                actors: IndexedCache::new(),
                props: IndexedCache::new(),
            },
        }
    }

    pub(crate) fn id_to_exid(&self, id: OpId) -> ExId {
        if id == types::ROOT {
            ExId::Root
        } else {
            ExId::Id(id.0, self.m.actors.cache[id.1].clone(), id.1)
        }
    }

    pub(crate) fn iter(&self) -> Iter<'_> {
        let mut objs: Vec<_> = self.objects.keys().collect();
        objs.sort_by(|a, b| self.m.lamport_cmp(a.0, b.0));
        Iter {
            inner: self,
            index: 0,
            sub_index: 0,
            objs,
        }
    }

    pub(crate) fn parent_object(&self, obj: &ObjId) -> Option<(ObjId, Key)> {
        let parent = self.objects.get(obj)?.parent?;
        let key = self.search(&parent, OpIdSearch::new(obj.0)).key().unwrap();
        Some((parent, key))
    }

    pub(crate) fn keys(&self, obj: ObjId) -> Option<query::Keys<'_>> {
        if let Some(object) = self.objects.get(&obj) {
            object.keys()
        } else {
            None
        }
    }

    pub(crate) fn keys_at(&self, obj: ObjId, clock: Clock) -> Option<query::KeysAt<'_>> {
        if let Some(object) = self.objects.get(&obj) {
            object.keys_at(clock)
        } else {
            None
        }
    }

    pub(crate) fn range<R: RangeBounds<String>>(
        &self,
        obj: ObjId,
        range: R,
    ) -> Option<query::Range<'_, R>> {
        if let Some(tree) = self.objects.get(&obj) {
            tree.range(range, &self.m)
        } else {
            None
        }
    }

    pub(crate) fn range_at<R: RangeBounds<String>>(
        &self,
        obj: ObjId,
        range: R,
        clock: Clock,
    ) -> Option<query::RangeAt<'_, R>> {
        if let Some(tree) = self.objects.get(&obj) {
            tree.range_at(range, &self.m, clock)
        } else {
            None
        }
    }

    pub(crate) fn search<'a, 'b: 'a, Q>(&'b self, obj: &ObjId, query: Q) -> Q
    where
        Q: TreeQuery<'a>,
    {
        if let Some(object) = self.objects.get(obj) {
            object.search(query, &self.m)
        } else {
            query
        }
    }

    pub(crate) fn replace<F>(&mut self, obj: &ObjId, index: usize, f: F)
    where
        F: FnOnce(&mut Op),
    {
        if let Some(object) = self.objects.get_mut(obj) {
            object.update(index, f)
        }
    }

    pub(crate) fn remove(&mut self, obj: &ObjId, index: usize) -> Op {
        // this happens on rollback - be sure to go back to the old state
        let object = self.objects.get_mut(obj).unwrap();
        self.length -= 1;
        let op = object.remove(index);
        if let OpType::Make(_) = &op.action {
            self.objects.remove(&op.id.into());
        }
        op
    }

    pub(crate) fn len(&self) -> usize {
        self.length
    }

    pub(crate) fn insert(&mut self, index: usize, obj: &ObjId, element: Op) {
        if let OpType::Make(typ) = element.action {
            self.objects
                .insert(element.id.into(), ObjectData::new(typ, Some(*obj)));
        }

        if let Some(object) = self.objects.get_mut(obj) {
            object.insert(index, element);
            self.length += 1;
        }
    }

    pub(crate) fn insert_op(&mut self, obj: &ObjId, op: Op) -> Op {
        let q = self.search(obj, query::SeekOp::new(&op));

        let succ = q.succ;
        let pos = q.pos;

        for i in succ {
            self.replace(obj, i, |old_op| old_op.add_succ(&op));
        }

        if !op.is_delete() {
            self.insert(pos, obj, op.clone());
        }
        op
    }

    pub(crate) fn insert_op_with_observer<Obs: OpObserver>(
        &mut self,
        obj: &ObjId,
        op: Op,
        observer: &mut Obs,
    ) -> Op {
        let q = self.search(obj, query::SeekOpWithPatch::new(&op));

        let query::SeekOpWithPatch {
            pos,
            succ,
            seen,
            values,
            had_value_before,
            ..
        } = q;

        let ex_obj = self.id_to_exid(obj.0);
        let key = match op.key {
            Key::Map(index) => self.m.props[index].clone().into(),
            Key::Seq(_) => seen.into(),
        };

        if op.insert {
            let value = (op.value(), self.id_to_exid(op.id));
            observer.insert(ex_obj, seen, value);
        } else if op.is_delete() {
            if let Some(winner) = &values.last() {
                let value = (winner.value(), self.id_to_exid(winner.id));
                let conflict = values.len() > 1;
                observer.put(ex_obj, key, value, conflict);
            } else {
                observer.delete(ex_obj, key);
            }
        } else if let Some(value) = op.get_increment_value() {
            // only observe this increment if the counter is visible, i.e. the counter's
            // create op is in the values
            if values.iter().any(|value| op.pred.contains(&value.id)) {
                // we have observed the value
                observer.increment(ex_obj, key, (value, self.id_to_exid(op.id)));
            }
        } else {
            let winner = if let Some(last_value) = values.last() {
                if self.m.lamport_cmp(op.id, last_value.id) == Ordering::Greater {
                    &op
                } else {
                    last_value
                }
            } else {
                &op
            };
            let value = (winner.value(), self.id_to_exid(winner.id));
            if op.is_list_op() && !had_value_before {
                observer.insert(ex_obj, seen, value);
            } else {
                let conflict = !values.is_empty();
                observer.put(ex_obj, key, value, conflict);
            }
        }

        for i in succ {
            self.replace(obj, i, |old_op| old_op.add_succ(&op));
        }

        if !op.is_delete() {
            self.insert(pos, obj, op.clone());
        }

        op
    }

    pub(crate) fn object_type(&self, id: &ObjId) -> Option<ObjType> {
        self.objects.get(id).map(|object| object.typ())
    }

    #[cfg(feature = "optree-visualisation")]
    pub(crate) fn visualise(&self) -> String {
        let mut out = Vec::new();
        let graph = super::visualisation::GraphVisualisation::construct(&self.objects, &self.m);
        dot::render(&graph, &mut out).unwrap();
        String::from_utf8_lossy(&out[..]).to_string()
    }
}

impl Default for OpSetInternal {
    fn default() -> Self {
        Self::new()
    }
}

impl<'a> IntoIterator for &'a OpSetInternal {
    type Item = (&'a ObjId, &'a Op);

    type IntoIter = Iter<'a>;

    fn into_iter(self) -> Self::IntoIter {
        self.iter()
    }
}

pub(crate) struct Iter<'a> {
    inner: &'a OpSetInternal,
    index: usize,
    objs: Vec<&'a ObjId>,
    sub_index: usize,
}

impl<'a> Iterator for Iter<'a> {
    type Item = (&'a ObjId, &'a Op);

    fn next(&mut self) -> Option<Self::Item> {
        let mut result = None;
        for obj in self.objs.iter().skip(self.index) {
            let object = self.inner.objects.get(obj)?;
            result = object.get(self.sub_index).map(|op| (*obj, op));
            if result.is_some() {
                self.sub_index += 1;
                break;
            } else {
                self.index += 1;
                self.sub_index = 0;
            }
        }
        result
    }
}

#[derive(Clone, Debug, PartialEq)]
pub(crate) struct OpSetMetadata {
    pub(crate) actors: IndexedCache<ActorId>,
    pub(crate) props: IndexedCache<String>,
}

impl OpSetMetadata {
    pub(crate) fn key_cmp(&self, left: &Key, right: &Key) -> Ordering {
        match (left, right) {
            (Key::Map(a), Key::Map(b)) => self.props[*a].cmp(&self.props[*b]),
            _ => panic!("can only compare map keys"),
        }
    }

    pub(crate) fn lamport_cmp(&self, left: OpId, right: OpId) -> Ordering {
        match (left, right) {
            (OpId(0, _), OpId(0, _)) => Ordering::Equal,
            (OpId(0, _), OpId(_, _)) => Ordering::Less,
            (OpId(_, _), OpId(0, _)) => Ordering::Greater,
            (OpId(a, x), OpId(b, y)) if a == b => self.actors[x].cmp(&self.actors[y]),
            (OpId(a, _), OpId(b, _)) => a.cmp(&b),
        }
    }
}
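
// Worked example (illustrative, not from the diff): Lamport comparison orders
// ops primarily by counter and breaks ties with the actor ids held in the
// metadata cache. A counter of 0 is special-cased (it identifies the root id),
// so it sorts below everything else:
//   lamport_cmp(OpId(0, _), OpId(3, _))  == Ordering::Less
//   lamport_cmp(OpId(4, a), OpId(3, b))  == Ordering::Greater
//   lamport_cmp(OpId(3, a), OpId(3, b))  == actors[a].cmp(&actors[b])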

@@ -1,16 +0,0 @@
#[derive(Debug, Default)]
pub struct ApplyOptions<'a, Obs> {
    pub op_observer: Option<&'a mut Obs>,
}

impl<'a, Obs> ApplyOptions<'a, Obs> {
    pub fn with_op_observer(mut self, op_observer: &'a mut Obs) -> Self {
        self.op_observer = Some(op_observer);
        self
    }

    pub fn set_op_observer(&mut self, op_observer: &'a mut Obs) -> &mut Self {
        self.op_observer = Some(op_observer);
        self
    }
}

@@ -1,20 +0,0 @@
use crate::{exid::ExId, Automerge, Prop};

#[derive(Debug)]
pub struct Parents<'a> {
    pub(crate) obj: ExId,
    pub(crate) doc: &'a Automerge,
}

impl<'a> Iterator for Parents<'a> {
    type Item = (ExId, Prop);

    fn next(&mut self) -> Option<Self::Item> {
        if let Some((obj, prop)) = self.doc.parent_object(&self.obj) {
            self.obj = obj.clone();
            Some((obj, prop))
        } else {
            None
        }
    }
}

@@ -1,276 +0,0 @@
use crate::object_data::{MapOpsCache, SeqOpsCache};
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::types::{Clock, Counter, ElemId, Op, OpId, OpType, ScalarValue};
use fxhash::FxBuildHasher;
use std::cmp::Ordering;
use std::collections::{HashMap, HashSet};
use std::fmt::Debug;

mod elem_id_pos;
mod insert;
mod insert_prop;
mod keys;
mod keys_at;
mod len;
mod len_at;
mod list_vals;
mod list_vals_at;
mod nth;
mod nth_at;
mod opid;
mod prop;
mod prop_at;
mod range;
mod range_at;
mod seek_op;
mod seek_op_with_patch;

pub(crate) use elem_id_pos::ElemIdPos;
pub(crate) use insert::InsertNth;
pub(crate) use insert_prop::InsertProp;
pub(crate) use keys::Keys;
pub(crate) use keys_at::KeysAt;
pub(crate) use len::Len;
pub(crate) use len_at::LenAt;
pub(crate) use list_vals::ListVals;
pub(crate) use list_vals_at::ListValsAt;
pub(crate) use nth::Nth;
pub(crate) use nth_at::NthAt;
pub(crate) use opid::OpIdSearch;
pub(crate) use prop::Prop;
pub(crate) use prop_at::PropAt;
pub(crate) use range::Range;
pub(crate) use range_at::RangeAt;
pub(crate) use seek_op::SeekOp;
pub(crate) use seek_op_with_patch::SeekOpWithPatch;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct CounterData {
    pos: usize,
    val: i64,
    succ: HashSet<OpId>,
    op: Op,
}

pub(crate) trait TreeQuery<'a> {
    fn cache_lookup_map(&mut self, _cache: &MapOpsCache) -> bool {
        // by default we haven't found something in the cache
        false
    }

    fn cache_update_map(&self, _cache: &mut MapOpsCache) {
        // by default we don't have anything to update in the cache
    }

    fn cache_lookup_seq(&mut self, _cache: &SeqOpsCache) -> bool {
        // by default we haven't found something in the cache
        false
    }

    fn cache_update_seq(&self, _cache: &mut SeqOpsCache) {
        // by default we don't have anything to update in the cache
    }

    #[inline(always)]
    fn query_node_with_metadata(
        &mut self,
        child: &'a OpTreeNode,
        _m: &OpSetMetadata,
    ) -> QueryResult {
        self.query_node(child)
    }

    fn query_node(&mut self, _child: &'a OpTreeNode) -> QueryResult {
        QueryResult::Descend
    }

    #[inline(always)]
    fn query_element_with_metadata(&mut self, element: &'a Op, _m: &OpSetMetadata) -> QueryResult {
        self.query_element(element)
    }

    fn query_element(&mut self, _element: &'a Op) -> QueryResult {
        panic!("invalid element query")
    }
}

#[derive(Debug, Clone, PartialEq)]
pub(crate) enum QueryResult {
    Next,
    Descend,
    Finish,
}

#[derive(Clone, Debug, PartialEq)]
pub(crate) struct Index {
    /// The map of visible elements to the number of operations targeting them.
    pub(crate) visible: HashMap<ElemId, usize, FxBuildHasher>,
    /// Set of opids found in this node and below.
    pub(crate) ops: HashSet<OpId, FxBuildHasher>,
}

impl Index {
    pub(crate) fn new() -> Self {
        Index {
            visible: Default::default(),
            ops: Default::default(),
        }
    }

    /// Get the number of visible elements in this index.
    pub(crate) fn visible_len(&self) -> usize {
        self.visible.len()
    }

    pub(crate) fn has_visible(&self, e: &Option<ElemId>) -> bool {
        if let Some(seen) = e {
            self.visible.contains_key(seen)
        } else {
            false
        }
    }

    pub(crate) fn replace(&mut self, old: &Op, new: &Op) {
        if old.id != new.id {
            self.ops.remove(&old.id);
            self.ops.insert(new.id);
        }

        assert!(new.key == old.key);

        match (new.visible(), old.visible(), new.elemid()) {
            (false, true, Some(elem)) => match self.visible.get(&elem).copied() {
                Some(n) if n == 1 => {
                    self.visible.remove(&elem);
                }
                Some(n) => {
                    self.visible.insert(elem, n - 1);
                }
                None => panic!("remove overrun in index"),
            },
            (true, false, Some(elem)) => *self.visible.entry(elem).or_default() += 1,
            _ => {}
        }
    }

    pub(crate) fn insert(&mut self, op: &Op) {
        self.ops.insert(op.id);
        if op.visible() {
            if let Some(elem) = op.elemid() {
                *self.visible.entry(elem).or_default() += 1;
            }
        }
    }

    pub(crate) fn remove(&mut self, op: &Op) {
        self.ops.remove(&op.id);
        if op.visible() {
            if let Some(elem) = op.elemid() {
                match self.visible.get(&elem).copied() {
                    Some(n) if n == 1 => {
                        self.visible.remove(&elem);
                    }
                    Some(n) => {
                        self.visible.insert(elem, n - 1);
                    }
                    None => panic!("remove overrun in index"),
                }
            }
        }
    }

    pub(crate) fn merge(&mut self, other: &Index) {
        for id in &other.ops {
            self.ops.insert(*id);
        }
        for (elem, n) in other.visible.iter() {
            *self.visible.entry(*elem).or_default() += n;
        }
    }
}

impl Default for Index {
    fn default() -> Self {
        Self::new()
    }
}

#[derive(Debug, Clone, PartialEq, Default)]
pub(crate) struct VisWindow {
    counters: HashMap<OpId, CounterData>,
}

impl VisWindow {
    fn visible_at(&mut self, op: &Op, pos: usize, clock: &Clock) -> bool {
        if !clock.covers(&op.id) {
            return false;
        }

        let mut visible = false;
        match op.action {
            OpType::Put(ScalarValue::Counter(Counter { start, .. })) => {
                self.counters.insert(
                    op.id,
                    CounterData {
                        pos,
                        val: start,
                        succ: op.succ.iter().cloned().collect(),
                        op: op.clone(),
                    },
                );
                if !op.succ.iter().any(|i| clock.covers(i)) {
                    visible = true;
                }
            }
            OpType::Increment(inc_val) => {
                for id in &op.pred {
                    // pred is always before op.id so we can see them
                    if let Some(entry) = self.counters.get_mut(id) {
                        entry.succ.remove(&op.id);
                        entry.val += inc_val;
                        entry.op.action = OpType::Put(ScalarValue::counter(entry.val));
                        if !entry.succ.iter().any(|i| clock.covers(i)) {
                            visible = true;
                        }
                    }
                }
            }
            _ => {
                if !op.succ.iter().any(|i| clock.covers(i)) {
                    visible = true;
                }
            }
        };
        visible
    }

    pub(crate) fn seen_op(&self, op: &Op, pos: usize) -> Vec<(usize, Op)> {
        let mut result = vec![];
        for pred in &op.pred {
            if let Some(entry) = self.counters.get(pred) {
                result.push((entry.pos, entry.op.clone()));
            }
        }
        if result.is_empty() {
            result.push((pos, op.clone()));
        }
        result
    }
}

pub(crate) fn binary_search_by<F>(node: &OpTreeNode, f: F) -> usize
where
    F: Fn(&Op) -> Ordering,
{
    let mut right = node.len();
    let mut left = 0;
    while left < right {
        let seq = (left + right) / 2;
        if f(node.get(seq).unwrap()) == Ordering::Less {
            left = seq + 1;
        } else {
            right = seq;
        }
    }
    left
}
|
|
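
`binary_search_by` is a lower-bound search: given a comparator that is `Less` for a sorted prefix of the node, it returns the first position where the comparator is not `Less`, which doubles as the insertion point when no op matches. An illustrative analogue on a plain slice (a sketch for exposition only; `lower_bound` is a hypothetical helper, not part of this crate):

```rust
use std::cmp::Ordering;

// Same loop as binary_search_by above, but over a slice instead of an
// OpTreeNode: returns the first index whose element is not Less.
fn lower_bound<T>(items: &[T], f: impl Fn(&T) -> Ordering) -> usize {
    let (mut left, mut right) = (0, items.len());
    while left < right {
        let mid = (left + right) / 2;
        if f(&items[mid]) == Ordering::Less {
            left = mid + 1;
        } else {
            right = mid;
        }
    }
    left
}

fn main() {
    let xs = [1, 3, 3, 5, 7];
    // Start of the run of 3s, and the point where a 4 would be inserted.
    assert_eq!(lower_bound(&xs, |x| x.cmp(&3)), 1);
    assert_eq!(lower_bound(&xs, |x| x.cmp(&4)), 3);
}
```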

@@ -1,53 +0,0 @@

use crate::{op_tree::OpTreeNode, types::ElemId};

use super::{QueryResult, TreeQuery};

/// Lookup the index in the list that this elemid occupies.
pub(crate) struct ElemIdPos {
    elemid: ElemId,
    pos: usize,
    found: bool,
}

impl ElemIdPos {
    pub(crate) fn new(elemid: ElemId) -> Self {
        Self {
            elemid,
            pos: 0,
            found: false,
        }
    }

    pub(crate) fn index(&self) -> Option<usize> {
        if self.found {
            Some(self.pos)
        } else {
            None
        }
    }
}

impl<'a> TreeQuery<'a> for ElemIdPos {
    fn query_node(&mut self, child: &OpTreeNode) -> QueryResult {
        // if the index has our element then we can continue
        if child.index.has_visible(&Some(self.elemid)) {
            // element is in this node somewhere
            QueryResult::Descend
        } else {
            // not in this node, try the next one
            self.pos += child.index.visible_len();
            QueryResult::Next
        }
    }

    fn query_element(&mut self, element: &crate::types::Op) -> QueryResult {
        if element.elemid() == Some(self.elemid) {
            // this is it
            self.found = true;
            return QueryResult::Finish;
        } else if element.visible() {
            self.pos += 1;
        }
        QueryResult::Next
    }
}

@@ -1,68 +0,0 @@

use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{Key, Op};
use std::fmt::Debug;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct InsertProp<'a> {
    key: Key,
    pub(crate) ops: Vec<&'a Op>,
    pub(crate) ops_pos: Vec<usize>,
    pub(crate) pos: usize,
    start: Option<usize>,
}

impl<'a> InsertProp<'a> {
    pub(crate) fn new(prop: usize) -> Self {
        InsertProp {
            key: Key::Map(prop),
            ops: vec![],
            ops_pos: vec![],
            pos: 0,
            start: None,
        }
    }
}

impl<'a> TreeQuery<'a> for InsertProp<'a> {
    fn cache_lookup_map(&mut self, cache: &crate::object_data::MapOpsCache) -> bool {
        if let Some((last_key, last_pos)) = cache.last {
            if last_key == self.key {
                self.start = Some(last_pos);
            }
        }
        // don't have all of the result yet
        false
    }

    fn cache_update_map(&self, cache: &mut crate::object_data::MapOpsCache) {
        // an insert changes op positions, so drop the cached entry
        cache.last = None;
    }

    fn query_node_with_metadata(
        &mut self,
        child: &'a OpTreeNode,
        m: &OpSetMetadata,
    ) -> QueryResult {
        let start = if let Some(start) = self.start {
            debug_assert!(binary_search_by(child, |op| m.key_cmp(&op.key, &self.key)) >= start);
            start
        } else {
            binary_search_by(child, |op| m.key_cmp(&op.key, &self.key))
        };
        self.start = Some(start);
        self.pos = start;
        for pos in start..child.len() {
            let op = child.get(pos).unwrap();
            if op.key != self.key {
                break;
            }
            if op.visible() {
                self.ops.push(op);
                self.ops_pos.push(pos);
            }
            self.pos += 1;
        }
        QueryResult::Finish
    }
}

@@ -1,21 +0,0 @@

use crate::op_tree::OpTreeNode;
use crate::query::{QueryResult, TreeQuery};
use std::fmt::Debug;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Len {
    pub(crate) len: usize,
}

impl Len {
    pub(crate) fn new() -> Self {
        Len { len: 0 }
    }
}

impl<'a> TreeQuery<'a> for Len {
    fn query_node(&mut self, child: &OpTreeNode) -> QueryResult {
        // the root node's index already knows the number of visible elements
        self.len = child.index.visible_len();
        QueryResult::Finish
    }
}

@@ -1,67 +0,0 @@

use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{Key, Op};

/// Find all ops for a given map key, recording the visible ones and their positions.
#[derive(Debug, Clone, PartialEq)]
pub(crate) struct Prop<'a> {
    key: Key,
    pub(crate) ops: Vec<&'a Op>,
    pub(crate) ops_pos: Vec<usize>,
    pub(crate) pos: usize,
    start: Option<usize>,
}

impl<'a> Prop<'a> {
    pub(crate) fn new(prop: usize) -> Self {
        Prop {
            key: Key::Map(prop),
            ops: vec![],
            ops_pos: vec![],
            pos: 0,
            start: None,
        }
    }
}

impl<'a> TreeQuery<'a> for Prop<'a> {
    fn cache_lookup_map(&mut self, cache: &crate::object_data::MapOpsCache) -> bool {
        if let Some((last_key, last_pos)) = cache.last {
            if last_key == self.key {
                self.start = Some(last_pos);
            }
        }
        // don't have all of the result yet
        false
    }

    fn cache_update_map(&self, cache: &mut crate::object_data::MapOpsCache) {
        cache.last = self.start.map(|start| (self.key, start));
    }

    fn query_node_with_metadata(
        &mut self,
        child: &'a OpTreeNode,
        m: &OpSetMetadata,
    ) -> QueryResult {
        let start = if let Some(start) = self.start {
            debug_assert!(binary_search_by(child, |op| m.key_cmp(&op.key, &self.key)) >= start);
            start
        } else {
            binary_search_by(child, |op| m.key_cmp(&op.key, &self.key))
        };
        self.start = Some(start);
        self.pos = start;
        for pos in start..child.len() {
            let op = child.get(pos).unwrap();
            if op.key != self.key {
                break;
            }
            if op.visible() {
                self.ops.push(op);
                self.ops_pos.push(pos);
            }
            self.pos += 1;
        }
        QueryResult::Finish
    }
}

@@ -1,72 +0,0 @@

use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::types::{Key, OpId};
use crate::Value;
use std::fmt::Debug;
use std::ops::RangeBounds;

#[derive(Debug)]
pub(crate) struct Range<'a, R: RangeBounds<String>> {
    range: R,
    index: usize,
    last_key: Option<Key>,
    index_back: usize,
    last_key_back: Option<Key>,
    root_child: &'a OpTreeNode,
    meta: &'a OpSetMetadata,
}

impl<'a, R: RangeBounds<String>> Range<'a, R> {
    pub(crate) fn new(range: R, root_child: &'a OpTreeNode, meta: &'a OpSetMetadata) -> Self {
        Self {
            range,
            index: 0,
            last_key: None,
            index_back: root_child.len(),
            last_key_back: None,
            root_child,
            meta,
        }
    }
}

impl<'a, R: RangeBounds<String>> Iterator for Range<'a, R> {
    type Item = (&'a str, Value<'a>, OpId);

    fn next(&mut self) -> Option<Self::Item> {
        for i in self.index..self.index_back {
            let op = self.root_child.get(i)?;
            self.index += 1;
            if Some(op.key) != self.last_key && op.visible() {
                self.last_key = Some(op.key);
                let prop = match op.key {
                    Key::Map(m) => self.meta.props.get(m),
                    Key::Seq(_) => panic!("found list op in range query"),
                };
                if self.range.contains(prop) {
                    return Some((prop, op.value(), op.id));
                }
            }
        }
        None
    }
}

impl<'a, R: RangeBounds<String>> DoubleEndedIterator for Range<'a, R> {
    fn next_back(&mut self) -> Option<Self::Item> {
        for i in (self.index..self.index_back).rev() {
            let op = self.root_child.get(i)?;
            self.index_back -= 1;
            if Some(op.key) != self.last_key_back && op.visible() {
                self.last_key_back = Some(op.key);
                let prop = match op.key {
                    Key::Map(m) => self.meta.props.get(m),
                    Key::Seq(_) => panic!("can't iterate through lists backwards"),
                };
                if self.range.contains(prop) {
                    return Some((prop, op.value(), op.id));
                }
            }
        }
        None
    }
}
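
This iterator backs the public key-range queries over maps. A minimal sketch of how it might surface through the `Transactable` API defined later in this diff; this assumes the public wrapper yields `(key, value, id)` triples as the internal iterator above does, and that an in-progress transaction sees its own pending puts:

```rust
use automerge::{transaction::Transactable, Automerge, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc = Automerge::new();
    let mut tx = doc.transaction();
    tx.put(ROOT, "apple", 1)?;
    tx.put(ROOT, "banana", 2)?;
    tx.put(ROOT, "cherry", 3)?;
    // Map keys come back in sorted order, so "b".."c" yields only "banana".
    for (key, value, _id) in tx.range(ROOT, "b".to_string().."c".to_string()) {
        println!("{} = {:?}", key, value);
    }
    tx.commit();
    Ok(())
}
```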

@@ -1,88 +0,0 @@

use crate::clock::Clock;
use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::types::{Key, OpId};
use crate::Value;
use std::fmt::Debug;
use std::ops::RangeBounds;

use super::VisWindow;

#[derive(Debug)]
pub(crate) struct RangeAt<'a, R: RangeBounds<String>> {
    clock: Clock,
    window: VisWindow,

    range: R,
    index: usize,
    last_key: Option<Key>,

    index_back: usize,
    last_key_back: Option<Key>,

    root_child: &'a OpTreeNode,
    meta: &'a OpSetMetadata,
}

impl<'a, R: RangeBounds<String>> RangeAt<'a, R> {
    pub(crate) fn new(
        range: R,
        root_child: &'a OpTreeNode,
        meta: &'a OpSetMetadata,
        clock: Clock,
    ) -> Self {
        Self {
            clock,
            window: VisWindow::default(),
            range,
            index: 0,
            last_key: None,
            index_back: root_child.len(),
            last_key_back: None,
            root_child,
            meta,
        }
    }
}

impl<'a, R: RangeBounds<String>> Iterator for RangeAt<'a, R> {
    type Item = (&'a str, Value<'a>, OpId);

    fn next(&mut self) -> Option<Self::Item> {
        for i in self.index..self.index_back {
            let op = self.root_child.get(i)?;
            let visible = self.window.visible_at(op, i, &self.clock);
            self.index += 1;
            if Some(op.key) != self.last_key && visible {
                self.last_key = Some(op.key);
                let prop = match op.key {
                    Key::Map(m) => self.meta.props.get(m),
                    Key::Seq(_) => panic!("found list op in range query"),
                };
                if self.range.contains(prop) {
                    return Some((prop, op.value(), op.id));
                }
            }
        }
        None
    }
}

impl<'a, R: RangeBounds<String>> DoubleEndedIterator for RangeAt<'a, R> {
    fn next_back(&mut self) -> Option<Self::Item> {
        for i in (self.index..self.index_back).rev() {
            let op = self.root_child.get(i)?;
            self.index_back -= 1;
            if Some(op.key) != self.last_key_back && op.visible() {
                self.last_key_back = Some(op.key);
                let prop = match op.key {
                    Key::Map(m) => self.meta.props.get(m),
                    Key::Seq(_) => panic!("can't iterate through lists backwards"),
                };
                if self.range.contains(prop) {
                    return Some((prop, op.value(), op.id));
                }
            }
        }
        None
    }
}

@@ -1,116 +0,0 @@

use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{Key, Op, HEAD};
use std::cmp::Ordering;
use std::fmt::Debug;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct SeekOp<'a> {
    /// the op we are looking for
    op: &'a Op,
    /// The position to insert at
    pub(crate) pos: usize,
    /// The indices of ops that this op overwrites
    pub(crate) succ: Vec<usize>,
    /// whether a position has been found
    found: bool,
}

impl<'a> SeekOp<'a> {
    pub(crate) fn new(op: &'a Op) -> Self {
        SeekOp {
            op,
            succ: vec![],
            pos: 0,
            found: false,
        }
    }

    fn lesser_insert(&self, op: &Op, m: &OpSetMetadata) -> bool {
        op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less
    }

    fn greater_opid(&self, op: &Op, m: &OpSetMetadata) -> bool {
        m.lamport_cmp(op.id, self.op.id) == Ordering::Greater
    }

    fn is_target_insert(&self, op: &Op) -> bool {
        op.insert && op.elemid() == self.op.key.elemid()
    }
}

impl<'a> TreeQuery<'a> for SeekOp<'a> {
    fn query_node_with_metadata(&mut self, child: &OpTreeNode, m: &OpSetMetadata) -> QueryResult {
        if self.found {
            return QueryResult::Descend;
        }
        match self.op.key {
            // Insertion at the head of the list: skip over any existing
            // insertions with a greater opId (see SeekOpWithPatch below for
            // a commented walkthrough of the same logic).
            Key::Seq(HEAD) => {
                while self.pos < child.len() {
                    let op = child.get(self.pos).unwrap();
                    if op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less {
                        break;
                    }
                    self.pos += 1;
                }
                QueryResult::Finish
            }
            // Updating or inserting after an existing list element: descend
            // only into subtrees that contain the reference element.
            Key::Seq(e) => {
                if child.index.ops.contains(&e.0) {
                    QueryResult::Descend
                } else {
                    self.pos += child.len();
                    QueryResult::Next
                }
            }
            // Map ops are sorted by key, so binary search for the key.
            Key::Map(_) => {
                self.pos = binary_search_by(child, |op| m.key_cmp(&op.key, &self.op.key));
                while self.pos < child.len() {
                    let op = child.get(self.pos).unwrap();
                    if op.key != self.op.key {
                        break;
                    }
                    if self.op.overwrites(op) {
                        self.succ.push(self.pos);
                    }
                    if m.lamport_cmp(op.id, self.op.id) == Ordering::Greater {
                        break;
                    }
                    self.pos += 1;
                }
                QueryResult::Finish
            }
        }
    }

    fn query_element_with_metadata(&mut self, e: &Op, m: &OpSetMetadata) -> QueryResult {
        if !self.found {
            if self.is_target_insert(e) {
                self.found = true;
                if self.op.overwrites(e) {
                    self.succ.push(self.pos);
                }
            }
            self.pos += 1;
            QueryResult::Next
        } else {
            // we have already found the target
            if self.op.overwrites(e) {
                self.succ.push(self.pos);
            }
            if self.op.insert {
                if self.lesser_insert(e, m) {
                    QueryResult::Finish
                } else {
                    self.pos += 1;
                    QueryResult::Next
                }
            } else if e.insert || self.greater_opid(e, m) {
                QueryResult::Finish
            } else {
                self.pos += 1;
                QueryResult::Next
            }
        }
    }
}

@@ -1,255 +0,0 @@

use crate::op_tree::{OpSetMetadata, OpTreeNode};
use crate::query::{binary_search_by, QueryResult, TreeQuery};
use crate::types::{ElemId, Key, Op, HEAD};
use std::cmp::Ordering;
use std::fmt::Debug;

#[derive(Debug, Clone, PartialEq)]
pub(crate) struct SeekOpWithPatch<'a> {
    op: Op,
    pub(crate) pos: usize,
    pub(crate) succ: Vec<usize>,
    found: bool,
    pub(crate) seen: usize,
    last_seen: Option<ElemId>,
    pub(crate) values: Vec<&'a Op>,
    pub(crate) had_value_before: bool,
}

impl<'a> SeekOpWithPatch<'a> {
    pub(crate) fn new(op: &Op) -> Self {
        SeekOpWithPatch {
            op: op.clone(),
            succ: vec![],
            pos: 0,
            found: false,
            seen: 0,
            last_seen: None,
            values: vec![],
            had_value_before: false,
        }
    }

    fn lesser_insert(&self, op: &Op, m: &OpSetMetadata) -> bool {
        op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less
    }

    fn greater_opid(&self, op: &Op, m: &OpSetMetadata) -> bool {
        m.lamport_cmp(op.id, self.op.id) == Ordering::Greater
    }

    fn is_target_insert(&self, op: &Op) -> bool {
        op.insert && op.elemid() == self.op.key.elemid()
    }

    /// Keeps track of the number of visible list elements we have seen. Increments `self.seen` if
    /// operation `e` associates a visible value with a list element, and if we have not already
    /// counted that list element (this ensures that if a list element has several values, i.e.
    /// a conflict, then it is still only counted once).
    fn count_visible(&mut self, e: &Op) {
        if e.elemid() == self.op.elemid() {
            return;
        }
        if e.insert {
            self.last_seen = None;
        }
        if e.visible() && self.last_seen.is_none() {
            self.seen += 1;
            self.last_seen = e.elemid();
        }
    }
}

impl<'a> TreeQuery<'a> for SeekOpWithPatch<'a> {
    fn query_node_with_metadata(
        &mut self,
        child: &'a OpTreeNode,
        m: &OpSetMetadata,
    ) -> QueryResult {
        if self.found {
            return QueryResult::Descend;
        }
        match self.op.key {
            // Special case for insertion at the head of the list (`e == HEAD` is only possible for
            // an insertion operation). Skip over any list elements whose elemId is greater than
            // the opId of the operation being inserted.
            Key::Seq(e) if e == HEAD => {
                while self.pos < child.len() {
                    let op = child.get(self.pos).unwrap();
                    if op.insert && m.lamport_cmp(op.id, self.op.id) == Ordering::Less {
                        break;
                    }
                    self.count_visible(op);
                    self.pos += 1;
                }
                QueryResult::Finish
            }

            // Updating a list: search for the tree node that contains the new operation's
            // reference element (i.e. the element we're updating or inserting after)
            Key::Seq(e) => {
                if self.found || child.index.ops.contains(&e.0) {
                    QueryResult::Descend
                } else {
                    self.pos += child.len();

                    // When we skip over a subtree, we need to count the number of visible list
                    // elements we're skipping over. Each node stores the number of visible
                    // elements it contains. However, it could happen that a visible element is
                    // split across two tree nodes. To avoid double-counting in this situation, we
                    // subtract one if the last visible element also appears in this tree node.
                    let mut num_vis = child.index.visible_len();
                    if num_vis > 0 {
                        // FIXME: I think this is wrong: we should subtract one only if this
                        // subtree contains a *visible* (i.e. empty succs) operation for the list
                        // element with elemId `last_seen`; this will subtract one even if all
                        // values for this list element have been deleted in this subtree.
                        if child.index.has_visible(&self.last_seen) {
                            num_vis -= 1;
                        }
                        self.seen += num_vis;

                        // FIXME: this is also wrong: `last_seen` needs to be the elemId of the
                        // last *visible* list element in this subtree, but I think this returns
                        // the last operation's elemId regardless of whether it's visible or not.
                        // This will lead to incorrect counting if `last_seen` is not visible: it's
                        // not counted towards `num_vis`, so we shouldn't be subtracting 1.
                        self.last_seen = child.last().elemid();
                    }
                    QueryResult::Next
                }
            }

            // Updating a map: operations appear in sorted order by key
            Key::Map(_) => {
                // Search for the place where we need to insert the new operation. First find the
                // first op with a key >= the key we're updating
                self.pos = binary_search_by(child, |op| m.key_cmp(&op.key, &self.op.key));
                while self.pos < child.len() {
                    // Iterate over any existing operations for the same key; stop when we reach an
                    // operation with a different key
                    let op = child.get(self.pos).unwrap();
                    if op.key != self.op.key {
                        break;
                    }

                    // Keep track of any ops we're overwriting and any conflicts on this key
                    if self.op.overwrites(op) {
                        // when we encounter an increment op we also want to find the counter for
                        // it.
                        if self.op.is_inc() && op.is_counter() && op.visible() {
                            self.values.push(op);
                        }
                        self.succ.push(self.pos);
                    } else if op.visible() {
                        self.values.push(op);
                    }

                    // Ops for the same key should be in ascending order of opId, so we break when
                    // we reach an op with an opId greater than that of the new operation
                    if m.lamport_cmp(op.id, self.op.id) == Ordering::Greater {
                        break;
                    }

                    self.pos += 1;
                }

                // For the purpose of reporting conflicts, we also need to take into account any
                // ops for the same key that appear after the new operation
                let mut later_pos = self.pos;
                while later_pos < child.len() {
                    let op = child.get(later_pos).unwrap();
                    if op.key != self.op.key {
                        break;
                    }
                    // No need to check if `self.op.overwrites(op)` because an operation's `preds`
                    // must always have lower Lamport timestamps than that op itself, and the ops
                    // here all have greater opIds than the new op
                    if op.visible() {
                        self.values.push(op);
                    }
                    later_pos += 1;
                }
                QueryResult::Finish
            }
        }
    }

    // Only called when operating on a sequence (list/text) object, since updates of a map are
    // handled in `query_node_with_metadata`.
    fn query_element_with_metadata(&mut self, e: &'a Op, m: &OpSetMetadata) -> QueryResult {
        let result = if !self.found {
            // First search for the referenced list element (i.e. the element we're updating, or
            // after which we're inserting)
            if self.is_target_insert(e) {
                self.found = true;
                if self.op.overwrites(e) {
                    // when we encounter an increment op we also want to find the counter for
                    // it.
                    if self.op.is_inc() && e.is_counter() && e.visible() {
                        self.values.push(e);
                    }
                    self.succ.push(self.pos);
                }
                if e.visible() {
                    self.had_value_before = true;
                }
            }
            self.pos += 1;
            QueryResult::Next
        } else {
            // Once we've found the reference element, keep track of any ops that we're overwriting
            let overwritten = self.op.overwrites(e);
            if overwritten {
                // when we encounter an increment op we also want to find the counter for
                // it.
                if self.op.is_inc() && e.is_counter() && e.visible() {
                    self.values.push(e);
                }
                self.succ.push(self.pos);
            }

            // If the new op is an insertion, skip over any existing list elements whose elemId is
            // greater than the ID of the new insertion
            if self.op.insert {
                if self.lesser_insert(e, m) {
                    // Insert before the first existing list element whose elemId is less than that
                    // of the new insertion
                    QueryResult::Finish
                } else {
                    self.pos += 1;
                    QueryResult::Next
                }
            } else if e.insert {
                // If the new op is an update of an existing list element, the first insertion op
                // we encounter after the reference element indicates the end of the reference elem
                QueryResult::Finish
            } else {
                // When updating an existing list element, keep track of any conflicts on this list
                // element. We also need to remember if the list element had any visible elements
                // prior to applying the new operation: if not, the new operation is resurrecting
                // a deleted list element, so it looks like an insertion in the patch.
                if e.visible() {
                    self.had_value_before = true;
                    if !overwritten {
                        self.values.push(e);
                    }
                }

                // We now need to put the ops for the same list element into ascending order, so we
                // skip over any ops whose ID is less than that of the new operation.
                if !self.greater_opid(e, m) {
                    self.pos += 1;
                }
                QueryResult::Next
            }
        };

        // The patch needs to know the list index of each operation, so we count the number of
        // visible list elements up to the insertion position of the new operation
        if result == QueryResult::Next {
            self.count_visible(e);
        }
        result
    }
}

@@ -1,378 +0,0 @@

use itertools::Itertools;
use std::{
    borrow::Cow,
    collections::{HashMap, HashSet},
    io,
    io::Write,
};

use crate::{
    decoding, decoding::Decoder, encoding::Encodable, ApplyOptions, Automerge, AutomergeError,
    Change, ChangeHash, OpObserver,
};

mod bloom;
mod state;

pub use bloom::BloomFilter;
pub use state::{Have, State};

const HASH_SIZE: usize = 32; // 256 bits = 32 bytes
const MESSAGE_TYPE_SYNC: u8 = 0x42; // first byte of a sync message, for identification

impl Automerge {
    pub fn generate_sync_message(&self, sync_state: &mut State) -> Option<Message> {
        let our_heads = self.get_heads();

        let our_need = self.get_missing_deps(sync_state.their_heads.as_ref().unwrap_or(&vec![]));

        let their_heads_set = if let Some(ref heads) = sync_state.their_heads {
            heads.iter().collect::<HashSet<_>>()
        } else {
            HashSet::new()
        };
        let our_have = if our_need.iter().all(|hash| their_heads_set.contains(hash)) {
            vec![self.make_bloom_filter(sync_state.shared_heads.clone())]
        } else {
            Vec::new()
        };

        if let Some(ref their_have) = sync_state.their_have {
            if let Some(first_have) = their_have.first().as_ref() {
                if !first_have
                    .last_sync
                    .iter()
                    .all(|hash| self.get_change_by_hash(hash).is_some())
                {
                    let reset_msg = Message {
                        heads: our_heads,
                        need: Vec::new(),
                        have: vec![Have::default()],
                        changes: Vec::new(),
                    };
                    return Some(reset_msg);
                }
            }
        }

        let mut changes_to_send = if let (Some(their_have), Some(their_need)) = (
            sync_state.their_have.as_ref(),
            sync_state.their_need.as_ref(),
        ) {
            self.get_changes_to_send(their_have.clone(), their_need)
        } else {
            Vec::new()
        };

        let heads_unchanged = sync_state.last_sent_heads == our_heads;

        let heads_equal = if let Some(their_heads) = sync_state.their_heads.as_ref() {
            their_heads == &our_heads
        } else {
            false
        };

        if heads_unchanged && heads_equal && changes_to_send.is_empty() {
            return None;
        }

        // deduplicate the changes to send with those we have already sent
        changes_to_send.retain(|change| !sync_state.sent_hashes.contains(&change.hash));

        sync_state.last_sent_heads = our_heads.clone();
        sync_state
            .sent_hashes
            .extend(changes_to_send.iter().map(|c| c.hash));

        let sync_message = Message {
            heads: our_heads,
            have: our_have,
            need: our_need,
            changes: changes_to_send.into_iter().cloned().collect(),
        };

        Some(sync_message)
    }

    pub fn receive_sync_message(
        &mut self,
        sync_state: &mut State,
        message: Message,
    ) -> Result<(), AutomergeError> {
        self.receive_sync_message_with::<()>(sync_state, message, ApplyOptions::default())
    }

    pub fn receive_sync_message_with<'a, Obs: OpObserver>(
        &mut self,
        sync_state: &mut State,
        message: Message,
        options: ApplyOptions<'a, Obs>,
    ) -> Result<(), AutomergeError> {
        let before_heads = self.get_heads();

        let Message {
            heads: message_heads,
            changes: message_changes,
            need: message_need,
            have: message_have,
        } = message;

        let changes_is_empty = message_changes.is_empty();
        if !changes_is_empty {
            self.apply_changes_with(message_changes, options)?;
            sync_state.shared_heads = advance_heads(
                &before_heads.iter().collect(),
                &self.get_heads().into_iter().collect(),
                &sync_state.shared_heads,
            );
        }

        // trim down the sent hashes to those that we know they haven't seen
        self.filter_changes(&message_heads, &mut sync_state.sent_hashes);

        if changes_is_empty && message_heads == before_heads {
            sync_state.last_sent_heads = message_heads.clone();
        }

        let known_heads = message_heads
            .iter()
            .filter(|head| self.get_change_by_hash(head).is_some())
            .collect::<Vec<_>>();
        if known_heads.len() == message_heads.len() {
            sync_state.shared_heads = message_heads.clone();
            // If the remote peer has lost all its data, reset our state to perform a full resync
            if message_heads.is_empty() {
                sync_state.last_sent_heads = Default::default();
                sync_state.sent_hashes = Default::default();
            }
        } else {
            sync_state.shared_heads = sync_state
                .shared_heads
                .iter()
                .chain(known_heads)
                .copied()
                .unique()
                .sorted()
                .collect::<Vec<_>>();
        }

        sync_state.their_have = Some(message_have);
        sync_state.their_heads = Some(message_heads);
        sync_state.their_need = Some(message_need);

        Ok(())
    }

    fn make_bloom_filter(&self, last_sync: Vec<ChangeHash>) -> Have {
        let new_changes = self.get_changes(&last_sync);
        let hashes = new_changes
            .into_iter()
            .map(|change| change.hash)
            .collect::<Vec<_>>();
        Have {
            last_sync,
            bloom: BloomFilter::from(&hashes[..]),
        }
    }

    fn get_changes_to_send(&self, have: Vec<Have>, need: &[ChangeHash]) -> Vec<&Change> {
        if have.is_empty() {
            need.iter()
                .filter_map(|hash| self.get_change_by_hash(hash))
                .collect()
        } else {
            let mut last_sync_hashes = HashSet::new();
            let mut bloom_filters = Vec::with_capacity(have.len());

            for h in have {
                let Have { last_sync, bloom } = h;
                for hash in last_sync {
                    last_sync_hashes.insert(hash);
                }
                bloom_filters.push(bloom);
            }
            let last_sync_hashes = last_sync_hashes.into_iter().collect::<Vec<_>>();

            let changes = self.get_changes(&last_sync_hashes);

            let mut change_hashes = HashSet::with_capacity(changes.len());
            let mut dependents: HashMap<ChangeHash, Vec<ChangeHash>> = HashMap::new();
            let mut hashes_to_send = HashSet::new();

            for change in &changes {
                change_hashes.insert(change.hash);

                for dep in &change.deps {
                    dependents.entry(*dep).or_default().push(change.hash);
                }

                if bloom_filters
                    .iter()
                    .all(|bloom| !bloom.contains_hash(&change.hash))
                {
                    hashes_to_send.insert(change.hash);
                }
            }

            let mut stack = hashes_to_send.iter().copied().collect::<Vec<_>>();
            while let Some(hash) = stack.pop() {
                if let Some(deps) = dependents.get(&hash) {
                    for dep in deps {
                        if hashes_to_send.insert(*dep) {
                            stack.push(*dep);
                        }
                    }
                }
            }

            let mut changes_to_send = Vec::new();
            for hash in need {
                hashes_to_send.insert(*hash);
                if !change_hashes.contains(hash) {
                    let change = self.get_change_by_hash(hash);
                    if let Some(change) = change {
                        changes_to_send.push(change);
                    }
                }
            }

            for change in changes {
                if hashes_to_send.contains(&change.hash) {
                    changes_to_send.push(change);
                }
            }
            changes_to_send
        }
    }
}

/// The sync message to be sent.
#[derive(Debug, Clone)]
pub struct Message {
    /// The heads of the sender.
    pub heads: Vec<ChangeHash>,
    /// The hashes of any changes that are being explicitly requested from the recipient.
    pub need: Vec<ChangeHash>,
    /// A summary of the changes that the sender already has.
    pub have: Vec<Have>,
    /// The changes for the recipient to apply.
    pub changes: Vec<Change>,
}

impl Message {
    pub fn encode(self) -> Vec<u8> {
        let mut buf = vec![MESSAGE_TYPE_SYNC];

        encode_hashes(&mut buf, &self.heads);
        encode_hashes(&mut buf, &self.need);
        (self.have.len() as u32).encode_vec(&mut buf);
        for have in self.have {
            encode_hashes(&mut buf, &have.last_sync);
            have.bloom.to_bytes().encode_vec(&mut buf);
        }

        (self.changes.len() as u32).encode_vec(&mut buf);
        for mut change in self.changes {
            change.compress();
            change.raw_bytes().encode_vec(&mut buf);
        }

        buf
    }

    pub fn decode(bytes: &[u8]) -> Result<Message, decoding::Error> {
        let mut decoder = Decoder::new(Cow::Borrowed(bytes));

        let message_type = decoder.read::<u8>()?;
        if message_type != MESSAGE_TYPE_SYNC {
            return Err(decoding::Error::WrongType {
                expected_one_of: vec![MESSAGE_TYPE_SYNC],
                found: message_type,
            });
        }

        let heads = decode_hashes(&mut decoder)?;
        let need = decode_hashes(&mut decoder)?;
        let have_count = decoder.read::<u32>()?;
        let mut have = Vec::with_capacity(have_count as usize);
        for _ in 0..have_count {
            let last_sync = decode_hashes(&mut decoder)?;
            let bloom_bytes: Vec<u8> = decoder.read()?;
            let bloom = BloomFilter::try_from(bloom_bytes.as_slice())?;
            have.push(Have { last_sync, bloom });
        }

        let change_count = decoder.read::<u32>()?;
        let mut changes = Vec::with_capacity(change_count as usize);
        for _ in 0..change_count {
            let change = decoder.read()?;
            changes.push(Change::from_bytes(change)?);
        }

        Ok(Message {
            heads,
            need,
            have,
            changes,
        })
    }
}

fn encode_hashes(buf: &mut Vec<u8>, hashes: &[ChangeHash]) {
    debug_assert!(
        hashes.windows(2).all(|h| h[0] <= h[1]),
        "hashes were not sorted"
    );
    hashes.encode_vec(buf);
}

impl Encodable for &[ChangeHash] {
    fn encode<W: Write>(&self, buf: &mut W) -> io::Result<usize> {
        let head = self.len().encode(buf)?;
        let mut body = 0;
        for hash in self.iter() {
            buf.write_all(&hash.0)?;
            body += hash.0.len();
        }
        Ok(head + body)
    }
}

fn decode_hashes(decoder: &mut Decoder<'_>) -> Result<Vec<ChangeHash>, decoding::Error> {
    let length = decoder.read::<u32>()?;
    let mut hashes = Vec::with_capacity(length as usize);

    for _ in 0..length {
        let hash_bytes = decoder.read_bytes(HASH_SIZE)?;
        let hash = ChangeHash::try_from(hash_bytes).map_err(decoding::Error::BadChangeFormat)?;
        hashes.push(hash);
    }

    Ok(hashes)
}

fn advance_heads(
    my_old_heads: &HashSet<&ChangeHash>,
    my_new_heads: &HashSet<ChangeHash>,
    our_old_shared_heads: &[ChangeHash],
) -> Vec<ChangeHash> {
    let new_heads = my_new_heads
        .iter()
        .filter(|head| !my_old_heads.contains(head))
        .copied()
        .collect::<Vec<_>>();

    let common_heads = our_old_shared_heads
        .iter()
        .filter(|head| my_new_heads.contains(head))
        .copied()
        .collect::<Vec<_>>();

    let mut advanced_heads = HashSet::with_capacity(new_heads.len() + common_heads.len());
    for head in new_heads.into_iter().chain(common_heads) {
        advanced_heads.insert(head);
    }
    let mut advanced_heads = advanced_heads.into_iter().collect::<Vec<_>>();
    advanced_heads.sort();
    advanced_heads
}
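
Taken together, `generate_sync_message` and `receive_sync_message` form a request/response loop that terminates once both peers report nothing to send. A minimal sketch, assuming this module is exported as `automerge::sync` (per the `pub use` above) and that the public `Transaction` exposes `commit` as the transaction code later in this diff suggests; over a network each `Message` would go through `encode`/`decode` instead of being handed over directly:

```rust
use automerge::{sync::State, transaction::Transactable, Automerge, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc_a = Automerge::new();
    let mut doc_b = Automerge::new();

    // Give doc_a something doc_b does not have yet.
    let mut tx = doc_a.transaction();
    tx.put(ROOT, "greeting", "hello")?;
    tx.commit();

    // Ping-pong until neither side has anything left to say.
    let (mut state_a, mut state_b) = (State::new(), State::new());
    loop {
        let msg_a = doc_a.generate_sync_message(&mut state_a);
        let msg_b = doc_b.generate_sync_message(&mut state_b);
        if msg_a.is_none() && msg_b.is_none() {
            break;
        }
        if let Some(m) = msg_a {
            doc_b.receive_sync_message(&mut state_b, m)?;
        }
        if let Some(m) = msg_b {
            doc_a.receive_sync_message(&mut state_a, m)?;
        }
    }
    // Once the loop ends, both documents agree on their heads.
    assert_eq!(doc_a.get_heads(), doc_b.get_heads());
    Ok(())
}
```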

@@ -1,63 +0,0 @@

use std::{borrow::Cow, collections::HashSet};

use super::{decode_hashes, encode_hashes, BloomFilter};
use crate::{decoding, decoding::Decoder, ChangeHash};

const SYNC_STATE_TYPE: u8 = 0x43; // first byte of an encoded sync state, for identification

/// The state of synchronisation with a peer.
#[derive(Debug, Clone, Default)]
pub struct State {
    pub shared_heads: Vec<ChangeHash>,
    pub last_sent_heads: Vec<ChangeHash>,
    pub their_heads: Option<Vec<ChangeHash>>,
    pub their_need: Option<Vec<ChangeHash>>,
    pub their_have: Option<Vec<Have>>,
    pub sent_hashes: HashSet<ChangeHash>,
}

/// A summary of the changes that the sender of the message already has.
/// This is implicitly a request to the recipient to send all changes that the
/// sender does not already have.
#[derive(Debug, Clone, Default)]
pub struct Have {
    /// The heads at the time of the last successful sync with this recipient.
    pub last_sync: Vec<ChangeHash>,
    /// A bloom filter summarising all of the changes that the sender of the message has added
    /// since the last sync.
    pub bloom: BloomFilter,
}

impl State {
    pub fn new() -> Self {
        Default::default()
    }

    pub fn encode(&self) -> Vec<u8> {
        let mut buf = vec![SYNC_STATE_TYPE];
        encode_hashes(&mut buf, &self.shared_heads);
        buf
    }

    pub fn decode(bytes: &[u8]) -> Result<Self, decoding::Error> {
        let mut decoder = Decoder::new(Cow::Borrowed(bytes));

        let record_type = decoder.read::<u8>()?;
        if record_type != SYNC_STATE_TYPE {
            return Err(decoding::Error::WrongType {
                expected_one_of: vec![SYNC_STATE_TYPE],
                found: record_type,
            });
        }

        let shared_heads = decode_hashes(&mut decoder)?;
        Ok(Self {
            shared_heads,
            last_sent_heads: Vec::new(),
            their_heads: None,
            their_need: None,
            their_have: Some(Vec::new()),
            sent_hashes: HashSet::new(),
        })
    }
}
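
Note that `encode`/`decode` deliberately persist only `shared_heads`: the other fields are per-connection and are reset on decode, so a restored state starts the next exchange from a clean slate. A round-trip sketch, again assuming the module is exported as `automerge::sync`:

```rust
use automerge::sync::State;

fn main() {
    let state = State::new();
    let bytes = state.encode(); // 0x43 type byte followed by the shared heads
    let restored = State::decode(&bytes).expect("valid sync state");
    // Only shared_heads survives the round trip.
    assert_eq!(restored.shared_heads, state.shared_heads);
}
```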

@@ -1,395 +0,0 @@

use std::num::NonZeroU64;

use crate::automerge::Actor;
use crate::exid::ExId;
use crate::query::{self, OpIdSearch};
use crate::types::{Key, ObjId, OpId};
use crate::{change::export_change, types::Op, Automerge, ChangeHash, Prop};
use crate::{AutomergeError, ObjType, OpObserver, OpType, ScalarValue};

#[derive(Debug, Clone)]
pub(crate) struct TransactionInner {
    pub(crate) actor: usize,
    pub(crate) seq: u64,
    pub(crate) start_op: NonZeroU64,
    pub(crate) time: i64,
    pub(crate) message: Option<String>,
    pub(crate) extra_bytes: Vec<u8>,
    pub(crate) hash: Option<ChangeHash>,
    pub(crate) deps: Vec<ChangeHash>,
    pub(crate) operations: Vec<(ObjId, Prop, Op)>,
}

impl TransactionInner {
    pub(crate) fn pending_ops(&self) -> usize {
        self.operations.len()
    }

    /// Commit the operations performed in this transaction, returning the hash corresponding to
    /// the new head.
    pub(crate) fn commit<Obs: OpObserver>(
        mut self,
        doc: &mut Automerge,
        message: Option<String>,
        time: Option<i64>,
        op_observer: Option<&mut Obs>,
    ) -> ChangeHash {
        if message.is_some() {
            self.message = message;
        }

        if let Some(t) = time {
            self.time = t;
        }

        if let Some(observer) = op_observer {
            for (obj, prop, op) in &self.operations {
                let ex_obj = doc.ops.id_to_exid(obj.0);
                if op.insert {
                    let value = (op.value(), doc.id_to_exid(op.id));
                    match prop {
                        Prop::Map(_) => panic!("insert into a map"),
                        Prop::Seq(index) => observer.insert(ex_obj, *index, value),
                    }
                } else if op.is_delete() {
                    observer.delete(ex_obj, prop.clone());
                } else if let Some(value) = op.get_increment_value() {
                    observer.increment(ex_obj, prop.clone(), (value, doc.id_to_exid(op.id)));
                } else {
                    let value = (op.value(), doc.ops.id_to_exid(op.id));
                    observer.put(ex_obj, prop.clone(), value, false);
                }
            }
        }

        let num_ops = self.pending_ops();
        let change = export_change(self, &doc.ops.m.actors, &doc.ops.m.props);
        let hash = change.hash;
        doc.update_history(change, num_ops);
        debug_assert_eq!(doc.get_heads(), vec![hash]);
        hash
    }

    /// Undo the operations added in this transaction, returning the number of cancelled
    /// operations.
    pub(crate) fn rollback(self, doc: &mut Automerge) -> usize {
        let num = self.pending_ops();
        // remove in reverse order so sets are removed before makes etc...
        for (obj, _prop, op) in self.operations.into_iter().rev() {
            for pred_id in &op.pred {
                if let Some(p) = doc.ops.search(&obj, OpIdSearch::new(*pred_id)).index() {
                    doc.ops.replace(&obj, p, |o| o.remove_succ(&op));
                }
            }
            if let Some(pos) = doc.ops.search(&obj, OpIdSearch::new(op.id)).index() {
                doc.ops.remove(&obj, pos);
            }
        }

        // remove the actor from the cache so that it doesn't end up in the saved document
        if doc.states.get(&self.actor).is_none() {
            let actor = doc.ops.m.actors.remove_last();
            doc.actor = Actor::Unused(actor);
        }

        num
    }

    /// Set the value of property `P` to value `V` in object `obj`.
    ///
    /// # Errors
    ///
    /// This will return an error if
    /// - The object does not exist
    /// - The key is the wrong type for the object
    /// - The key does not exist in the object
    pub(crate) fn put<P: Into<Prop>, V: Into<ScalarValue>>(
        &mut self,
        doc: &mut Automerge,
        ex_obj: &ExId,
        prop: P,
        value: V,
    ) -> Result<(), AutomergeError> {
        let obj = doc.exid_to_obj(ex_obj)?;
        let value = value.into();
        let prop = prop.into();
        self.local_op(doc, obj, prop, value.into())?;
        Ok(())
    }

    /// Set the value of property `P` to the new object `V` in object `obj`.
    ///
    /// # Returns
    ///
    /// The id of the object which was created.
    ///
    /// # Errors
    ///
    /// This will return an error if
    /// - The object does not exist
    /// - The key is the wrong type for the object
    /// - The key does not exist in the object
    pub(crate) fn put_object<P: Into<Prop>>(
        &mut self,
        doc: &mut Automerge,
        ex_obj: &ExId,
        prop: P,
        value: ObjType,
    ) -> Result<ExId, AutomergeError> {
        let obj = doc.exid_to_obj(ex_obj)?;
        let prop = prop.into();
        let id = self.local_op(doc, obj, prop, value.into())?.unwrap();
        let id = doc.id_to_exid(id);
        Ok(id)
    }

    fn next_id(&mut self) -> OpId {
        OpId(self.start_op.get() + self.pending_ops() as u64, self.actor)
    }

    fn insert_local_op(
        &mut self,
        doc: &mut Automerge,
        prop: Prop,
        op: Op,
        pos: usize,
        obj: ObjId,
        succ_pos: &[usize],
    ) {
        for succ in succ_pos {
            doc.ops.replace(&obj, *succ, |old_op| {
                old_op.add_succ(&op);
            });
        }

        if !op.is_delete() {
            doc.ops.insert(pos, &obj, op.clone());
        }

        self.operations.push((obj, prop, op));
    }

    pub(crate) fn insert<V: Into<ScalarValue>>(
        &mut self,
        doc: &mut Automerge,
        ex_obj: &ExId,
        index: usize,
        value: V,
    ) -> Result<(), AutomergeError> {
        let obj = doc.exid_to_obj(ex_obj)?;
        let value = value.into();
        self.do_insert(doc, obj, index, value.into())?;
        Ok(())
    }

    pub(crate) fn insert_object(
        &mut self,
        doc: &mut Automerge,
        ex_obj: &ExId,
        index: usize,
        value: ObjType,
    ) -> Result<ExId, AutomergeError> {
        let obj = doc.exid_to_obj(ex_obj)?;
        let id = self.do_insert(doc, obj, index, value.into())?;
        let id = doc.id_to_exid(id);
        Ok(id)
    }

    fn do_insert(
        &mut self,
        doc: &mut Automerge,
        obj: ObjId,
        index: usize,
        action: OpType,
    ) -> Result<OpId, AutomergeError> {
        let id = self.next_id();

        let query = doc.ops.search(&obj, query::InsertNth::new(index, id));

        let key = query.key()?;

        let op = Op {
            id,
            action,
            key,
            succ: Default::default(),
            pred: Default::default(),
            insert: true,
        };

        doc.ops.insert(query.pos(), &obj, op.clone());
        self.operations.push((obj, Prop::Seq(index), op));

        Ok(id)
    }

    pub(crate) fn local_op(
        &mut self,
        doc: &mut Automerge,
        obj: ObjId,
        prop: Prop,
        action: OpType,
    ) -> Result<Option<OpId>, AutomergeError> {
        match prop {
            Prop::Map(s) => self.local_map_op(doc, obj, s, action),
            Prop::Seq(n) => self.local_list_op(doc, obj, n, action),
        }
    }

    fn local_map_op(
        &mut self,
        doc: &mut Automerge,
        obj: ObjId,
        prop: String,
        action: OpType,
    ) -> Result<Option<OpId>, AutomergeError> {
        if prop.is_empty() {
            return Err(AutomergeError::EmptyStringKey);
        }

        let id = self.next_id();
        let prop_index = doc.ops.m.props.cache(prop.clone());
        let query = doc.ops.search(&obj, query::InsertProp::new(prop_index));

        // no key present to delete
        if query.ops.is_empty() && action == OpType::Delete {
            return Ok(None);
        }

        if query.ops.len() == 1 && query.ops[0].is_noop(&action) {
            return Ok(None);
        }

        // increment operations are only valid against counter values.
        // if there are multiple values (from conflicts) then we just need one of them to be a counter.
        if matches!(action, OpType::Increment(_)) && query.ops.iter().all(|op| !op.is_counter()) {
            return Err(AutomergeError::MissingCounter);
        }

        let pred = query.ops.iter().map(|op| op.id).collect();

        let op = Op {
            id,
            action,
            key: Key::Map(prop_index),
            succ: Default::default(),
            pred,
            insert: false,
        };

        let pos = query.pos;
        let ops_pos = query.ops_pos;
        self.insert_local_op(doc, Prop::Map(prop), op, pos, obj, &ops_pos);

        Ok(Some(id))
    }

    fn local_list_op(
        &mut self,
        doc: &mut Automerge,
        obj: ObjId,
        index: usize,
        action: OpType,
    ) -> Result<Option<OpId>, AutomergeError> {
        let query = doc.ops.search(&obj, query::Nth::new(index));

        let id = self.next_id();
        let pred = query.ops.iter().map(|op| op.id).collect();
        let key = query.key()?;

        if query.ops.len() == 1 && query.ops[0].is_noop(&action) {
            return Ok(None);
        }

        // increment operations are only valid against counter values.
        // if there are multiple values (from conflicts) then we just need one of them to be a counter.
        if matches!(action, OpType::Increment(_)) && query.ops.iter().all(|op| !op.is_counter()) {
            return Err(AutomergeError::MissingCounter);
        }

        let op = Op {
            id,
            action,
            key,
            succ: Default::default(),
            pred,
            insert: false,
        };

        let pos = query.pos;
        let ops_pos = query.ops_pos;
        self.insert_local_op(doc, Prop::Seq(index), op, pos, obj, &ops_pos);

        Ok(Some(id))
    }

    pub(crate) fn increment<P: Into<Prop>>(
        &mut self,
        doc: &mut Automerge,
        obj: &ExId,
        prop: P,
        value: i64,
    ) -> Result<(), AutomergeError> {
        let obj = doc.exid_to_obj(obj)?;
        self.local_op(doc, obj, prop.into(), OpType::Increment(value))?;
        Ok(())
    }

    pub(crate) fn delete<P: Into<Prop>>(
        &mut self,
        doc: &mut Automerge,
        ex_obj: &ExId,
        prop: P,
    ) -> Result<(), AutomergeError> {
        let obj = doc.exid_to_obj(ex_obj)?;
        let prop = prop.into();
        self.local_op(doc, obj, prop, OpType::Delete)?;
        Ok(())
    }

    /// Splice new elements into the given sequence, deleting `del` elements at `pos` first.
    pub(crate) fn splice(
        &mut self,
        doc: &mut Automerge,
        ex_obj: &ExId,
        mut pos: usize,
        del: usize,
        vals: impl IntoIterator<Item = ScalarValue>,
    ) -> Result<(), AutomergeError> {
        let obj = doc.exid_to_obj(ex_obj)?;
        for _ in 0..del {
            // del()
            self.local_op(doc, obj, pos.into(), OpType::Delete)?;
        }
        for v in vals {
            // insert()
            self.do_insert(doc, obj, pos, v.clone().into())?;
            pos += 1;
        }
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use crate::{transaction::Transactable, ROOT};

    use super::*;

    #[test]
    fn map_rollback_doesnt_panic() {
        let mut doc = Automerge::new();
        let mut tx = doc.transaction();

        let a = tx.put_object(ROOT, "a", ObjType::Map).unwrap();
        tx.put(&a, "b", 1).unwrap();
        assert!(tx.get(&a, "b").unwrap().is_some());
    }
}
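
The `debug_assert_eq!` in `commit` pins down the observable contract: after a commit, the returned hash is the document's sole head. A minimal sketch from the public API side, assuming the outer `Transaction` wrapper forwards to `TransactionInner::commit` and returns its `ChangeHash`:

```rust
use automerge::{transaction::Transactable, Automerge, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc = Automerge::new();
    let mut tx = doc.transaction();
    tx.put(ROOT, "key", "value")?;
    let hash = tx.commit();
    // Mirrors the debug_assert in TransactionInner::commit: a freshly
    // committed transaction becomes the single head of the document.
    assert_eq!(doc.get_heads(), vec![hash]);
    Ok(())
}
```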

@@ -1,176 +0,0 @@

use std::ops::RangeBounds;

use crate::exid::ExId;
use crate::{
    AutomergeError, ChangeHash, Keys, KeysAt, ObjType, Parents, Prop, Range, RangeAt, ScalarValue,
    Value, Values, ValuesAt,
};

/// A way of mutating a document within a single change.
pub trait Transactable {
    /// Get the number of pending operations in this transaction.
    fn pending_ops(&self) -> usize;

    /// Set the value of property `P` to value `V` in object `obj`.
    ///
    /// # Errors
    ///
    /// This will return an error if
    /// - The object does not exist
    /// - The key is the wrong type for the object
    /// - The key does not exist in the object
    fn put<O: AsRef<ExId>, P: Into<Prop>, V: Into<ScalarValue>>(
        &mut self,
        obj: O,
        prop: P,
        value: V,
    ) -> Result<(), AutomergeError>;

    /// Set the value of property `P` to the new object `V` in object `obj`.
    ///
    /// # Returns
    ///
    /// The id of the object which was created.
    ///
    /// # Errors
    ///
    /// This will return an error if
    /// - The object does not exist
    /// - The key is the wrong type for the object
    /// - The key does not exist in the object
    fn put_object<O: AsRef<ExId>, P: Into<Prop>>(
        &mut self,
        obj: O,
        prop: P,
        object: ObjType,
    ) -> Result<ExId, AutomergeError>;

    /// Insert a value into a list at the given index.
    fn insert<O: AsRef<ExId>, V: Into<ScalarValue>>(
        &mut self,
        obj: O,
        index: usize,
        value: V,
    ) -> Result<(), AutomergeError>;

    /// Insert an object into a list at the given index.
    fn insert_object<O: AsRef<ExId>>(
        &mut self,
        obj: O,
        index: usize,
        object: ObjType,
    ) -> Result<ExId, AutomergeError>;

    /// Increment the counter at the prop in the object by `value`.
    fn increment<O: AsRef<ExId>, P: Into<Prop>>(
        &mut self,
        obj: O,
        prop: P,
        value: i64,
    ) -> Result<(), AutomergeError>;

    /// Delete the value at prop in the object.
    fn delete<O: AsRef<ExId>, P: Into<Prop>>(
        &mut self,
        obj: O,
        prop: P,
    ) -> Result<(), AutomergeError>;

    /// Splice values into the sequence, deleting `del` items at `pos` first.
    fn splice<O: AsRef<ExId>, V: IntoIterator<Item = ScalarValue>>(
        &mut self,
        obj: O,
        pos: usize,
        del: usize,
        vals: V,
    ) -> Result<(), AutomergeError>;

    /// Like [`Self::splice`] but for text.
    fn splice_text<O: AsRef<ExId>>(
        &mut self,
        obj: O,
        pos: usize,
        del: usize,
        text: &str,
    ) -> Result<(), AutomergeError> {
        let vals = text.chars().map(|c| c.into());
        self.splice(obj, pos, del, vals)
    }

    /// Get the keys of the given object (it should be a map).
    fn keys<O: AsRef<ExId>>(&self, obj: O) -> Keys<'_, '_>;

    /// Get the keys of the given object at a point in history.
    fn keys_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> KeysAt<'_, '_>;

    fn range<O: AsRef<ExId>, R: RangeBounds<String>>(&self, obj: O, range: R) -> Range<'_, R>;

    fn range_at<O: AsRef<ExId>, R: RangeBounds<String>>(
        &self,
        obj: O,
        range: R,
        heads: &[ChangeHash],
    ) -> RangeAt<'_, R>;

    fn values<O: AsRef<ExId>>(&self, obj: O) -> Values<'_>;

    fn values_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> ValuesAt<'_>;

    /// Get the length of the given object.
    fn length<O: AsRef<ExId>>(&self, obj: O) -> usize;

    /// Get the length of the given object at a point in history.
    fn length_at<O: AsRef<ExId>>(&self, obj: O, heads: &[ChangeHash]) -> usize;

    /// Get the type of the object.
    fn object_type<O: AsRef<ExId>>(&self, obj: O) -> Option<ObjType>;

    /// Get the string that this text object represents.
    fn text<O: AsRef<ExId>>(&self, obj: O) -> Result<String, AutomergeError>;

    /// Get the string that this text object represents at a point in history.
    fn text_at<O: AsRef<ExId>>(
        &self,
        obj: O,
        heads: &[ChangeHash],
    ) -> Result<String, AutomergeError>;

    /// Get the value at this prop in the object.
    fn get<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
    ) -> Result<Option<(Value<'_>, ExId)>, AutomergeError>;

    /// Get the value at this prop in the object at a point in history.
    fn get_at<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
        heads: &[ChangeHash],
    ) -> Result<Option<(Value<'_>, ExId)>, AutomergeError>;

    fn get_all<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
    ) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError>;

    fn get_all_at<O: AsRef<ExId>, P: Into<Prop>>(
        &self,
        obj: O,
        prop: P,
        heads: &[ChangeHash],
    ) -> Result<Vec<(Value<'_>, ExId)>, AutomergeError>;

    /// Get the object id of the object that contains this object and the prop that this object is
    /// at in that object.
    fn parent_object<O: AsRef<ExId>>(&self, obj: O) -> Option<(ExId, Prop)>;

    fn parents(&self, obj: ExId) -> Parents<'_>;

    fn path_to_object<O: AsRef<ExId>>(&self, obj: O) -> Vec<(ExId, Prop)> {
        let mut path = self.parents(obj.as_ref().clone()).collect::<Vec<_>>();
        path.reverse();
        path
    }
}
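
A minimal sketch of a caller driving this trait, assuming `Transaction` implements it (as the test in the previous file suggests) and that `commit` finalizes the change; the default `splice_text` above splices one `ScalarValue` per `char`, so a text object built this way has one list element per character:

```rust
use automerge::{transaction::Transactable, Automerge, ObjType, ROOT};

fn main() -> Result<(), automerge::AutomergeError> {
    let mut doc = Automerge::new();
    let mut tx = doc.transaction();
    let note = tx.put_object(ROOT, "note", ObjType::Text)?;
    tx.splice_text(&note, 0, 0, "hello")?; // five chars -> five inserts
    assert_eq!(tx.text(&note)?, "hello");
    assert_eq!(tx.length(&note), 5);
    tx.commit();
    Ok(())
}
```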
@ -1,36 +0,0 @@
|
|||
use crate::{exid::ExId, Value};
|
||||
use std::ops::RangeFull;
|
||||
|
||||
use crate::{query, Automerge};
|
||||
|
||||
#[derive(Debug)]
|
||||
pub struct Values<'a> {
|
||||
range: Option<query::Range<'a, RangeFull>>,
|
||||
doc: &'a Automerge,
|
||||
}
|
||||
|
||||
impl<'a> Values<'a> {
|
||||
pub(crate) fn new(doc: &'a Automerge, range: Option<query::Range<'a, RangeFull>>) -> Self {
|
||||
Self { range, doc }
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Iterator for Values<'a> {
|
||||
type Item = (&'a str, Value<'a>, ExId);
|
||||
|
||||
fn next(&mut self) -> Option<Self::Item> {
|
||||
self.range
|
||||
.as_mut()?
|
||||
.next()
|
||||
.map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> DoubleEndedIterator for Values<'a> {
|
||||
fn next_back(&mut self) -> Option<Self::Item> {
|
||||
self.range
|
||||
.as_mut()?
|
||||
.next_back()
|
||||
.map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
|
||||
}
|
||||
}
@ -1,36 +0,0 @@
use crate::{exid::ExId, Value};
use std::ops::RangeFull;

use crate::{query, Automerge};

#[derive(Debug)]
pub struct ValuesAt<'a> {
    range: Option<query::RangeAt<'a, RangeFull>>,
    doc: &'a Automerge,
}

impl<'a> ValuesAt<'a> {
    pub(crate) fn new(doc: &'a Automerge, range: Option<query::RangeAt<'a, RangeFull>>) -> Self {
        Self { range, doc }
    }
}

impl<'a> Iterator for ValuesAt<'a> {
    type Item = (&'a str, Value<'a>, ExId);

    fn next(&mut self) -> Option<Self::Item> {
        self.range
            .as_mut()?
            .next()
            .map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
    }
}

impl<'a> DoubleEndedIterator for ValuesAt<'a> {
    fn next_back(&mut self) -> Option<Self::Item> {
        self.range
            .as_mut()?
            .next_back()
            .map(|(key, value, id)| (key, value, self.doc.id_to_exid(id)))
    }
}
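A brief consumption sketch for these iterators (again, not part of the diff): it assumes `doc` is a populated `Automerge` document and that an `Automerge::get_heads()` helper is available for obtaining the current heads.

```rust
use automerge::{Automerge, ROOT};

fn dump(doc: &Automerge) {
    // Forward iteration over (key, value, id) triples of the root map.
    for (key, value, _id) in doc.values(ROOT) {
        println!("{key} => {value:?}");
    }

    // Because `Values` is a `DoubleEndedIterator`, the same triples can be
    // read from the back, e.g. to peek at the last key without a full walk.
    if let Some((last_key, _value, _id)) = doc.values(ROOT).next_back() {
        println!("last key: {last_key}");
    }

    // `ValuesAt` has the same shape but is pinned to a set of heads, so it
    // sees the document as it was at that point in history.
    let heads = doc.get_heads(); // assumed helper for fetching current heads
    for (key, value, _id) in doc.values_at(ROOT, &heads) {
        println!("{key} @ {heads:?} => {value:?}");
    }
}
```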
@ -1,52 +0,0 @@
Try the different editing traces on different automerge implementations

### Automerge Experiment - pure rust

```code
# cargo run --release
```

#### Benchmarks

There are some criterion benchmarks in the `benches` folder which can be run with `cargo bench` or `cargo criterion`.
For flamegraphing, `cargo flamegraph --bench main -- --bench "save" # or "load" or "replay" or nothing` can be useful.

### Automerge Experiment - wasm api

```code
# node automerge-wasm.js
```

### Automerge Experiment - JS wrapper

```code
# node automerge-js.js
```

### Automerge 1.0 pure javascript - new fast backend

This assumes automerge has been checked out in a directory alongside this repo

```code
# node automerge-1.0.js
```

### Automerge 1.0 with rust backend

This assumes automerge has been checked out in a directory alongside this repo

```code
# node automerge-rs.js
```

### Baseline Test. Javascript Array with no CRDT info

```code
# node baseline.js
```
@ -1,31 +0,0 @@

// this assumes that the automerge-rs folder is checked out alongside this repo
// and someone has run

// # cd automerge-rs/automerge-backend-wasm
// # yarn release

const { edits, finalText } = require('./editing-trace')
const Automerge = require('../../automerge')
const path = require('path')
const wasmBackend = require(path.resolve("../../automerge-rs/automerge-backend-wasm"))
Automerge.setDefaultBackend(wasmBackend)

const start = new Date()
let state = Automerge.from({text: new Automerge.Text()})

state = Automerge.change(state, doc => {
  for (let i = 0; i < edits.length; i++) {
    if (i % 10000 === 0) {
      console.log(`Processed ${i} edits in ${new Date() - start} ms`)
    }
    // each edit is [position, deleteCount, ...insertedChars]
    if (edits[i][1] > 0) doc.text.deleteAt(edits[i][0], edits[i][1])
    if (edits[i].length > 2) doc.text.insertAt(edits[i][0], ...edits[i].slice(2))
  }
})

console.log(`Done in ${new Date() - start} ms`)

if (state.text.join('') !== finalText) {
  throw new RangeError('ERROR: final text did not match expectation')
}
30 flake.lock
@ -2,11 +2,11 @@
   "nodes": {
     "flake-utils": {
       "locked": {
-        "lastModified": 1642700792,
-        "narHash": "sha256-XqHrk7hFb+zBvRg6Ghl+AZDq03ov6OshJLiSWOoX5es=",
+        "lastModified": 1667395993,
+        "narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "846b2ae0fc4cc943637d3d1def4454213e203cba",
+        "rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
         "type": "github"
       },
       "original": {
@ -17,11 +17,11 @@
     },
     "flake-utils_2": {
       "locked": {
-        "lastModified": 1637014545,
-        "narHash": "sha256-26IZAc5yzlD9FlDT54io1oqG/bBoyka+FJk5guaX4x4=",
+        "lastModified": 1659877975,
+        "narHash": "sha256-zllb8aq3YO3h8B/U0/J1WBgAL8EX5yWf5pMj3G0NAmc=",
         "owner": "numtide",
         "repo": "flake-utils",
-        "rev": "bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4",
+        "rev": "c0e246b9b83f637f4681389ecabcb2681b4f3af0",
         "type": "github"
       },
       "original": {
@ -32,11 +32,11 @@
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1643805626,
-        "narHash": "sha256-AXLDVMG+UaAGsGSpOtQHPIKB+IZ0KSd9WS77aanGzgc=",
+        "lastModified": 1669542132,
+        "narHash": "sha256-DRlg++NJAwPh8io3ExBJdNW7Djs3plVI5jgYQ+iXAZQ=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "554d2d8aa25b6e583575459c297ec23750adb6cb",
+        "rev": "a115bb9bd56831941be3776c8a94005867f316a7",
         "type": "github"
       },
       "original": {
@ -48,11 +48,11 @@
     },
     "nixpkgs_2": {
       "locked": {
-        "lastModified": 1637453606,
-        "narHash": "sha256-Gy6cwUswft9xqsjWxFYEnx/63/qzaFUwatcbV5GF/GQ=",
+        "lastModified": 1665296151,
+        "narHash": "sha256-uOB0oxqxN9K7XGF1hcnY+PQnlQJ+3bP2vCn/+Ru/bbc=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "8afc4e543663ca0a6a4f496262cd05233737e732",
+        "rev": "14ccaaedd95a488dd7ae142757884d8e125b3363",
         "type": "github"
       },
       "original": {
@ -75,11 +75,11 @@
       "nixpkgs": "nixpkgs_2"
     },
     "locked": {
-      "lastModified": 1643941258,
-      "narHash": "sha256-uHyEuICSu8qQp6adPTqV33ajiwoF0sCh+Iazaz5r7fo=",
+      "lastModified": 1669775522,
+      "narHash": "sha256-6xxGArBqssX38DdHpDoPcPvB/e79uXyQBwpBcaO/BwY=",
       "owner": "oxalica",
       "repo": "rust-overlay",
-      "rev": "674156c4c2f46dd6a6846466cb8f9fee84c211ca",
+      "rev": "3158e47f6b85a288d12948aeb9a048e0ed4434d6",
       "type": "github"
     },
     "original": {
106 flake.nix
@ -3,61 +3,67 @@

   inputs = {
     nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
-    flake-utils = {
-      url = "github:numtide/flake-utils";
-      inputs.nixpkgs.follows = "nixpkgs";
-    };
+    flake-utils.url = "github:numtide/flake-utils";
     rust-overlay.url = "github:oxalica/rust-overlay";
   };

-  outputs = { self, nixpkgs, flake-utils, rust-overlay }:
+  outputs = {
+    self,
+    nixpkgs,
+    flake-utils,
+    rust-overlay,
+  }:
     flake-utils.lib.eachDefaultSystem
-      (system:
-        let
-          pkgs = import nixpkgs {
-            overlays = [ rust-overlay.overlay ];
-            inherit system;
-          };
-          lib = pkgs.lib;
-          rust = pkgs.rust-bin.stable.latest.default;
-          cargoNix = pkgs.callPackage ./Cargo.nix {
-            inherit pkgs;
-            release = true;
-          };
-          debugCargoNix = pkgs.callPackage ./Cargo.nix {
-            inherit pkgs;
-            release = false;
-          };
-        in
-        {
-          devShell = pkgs.mkShell {
-            buildInputs = with pkgs;
-              [
-                (rust.override {
-                  extensions = [ "rust-src" ];
-                  targets = [ "wasm32-unknown-unknown" ];
-                })
-                cargo-edit
-                cargo-watch
-                cargo-criterion
-                cargo-fuzz
-                cargo-flamegraph
-                cargo-deny
-                crate2nix
-                wasm-pack
-                pkgconfig
-                openssl
-                gnuplot
-
-                nodejs
-                yarn
-
-                cmake
-                cmocka
-
-                rnix-lsp
-                nixpkgs-fmt
-              ];
-          };
-        });
+    (system: let
+      pkgs = import nixpkgs {
+        overlays = [rust-overlay.overlays.default];
+        inherit system;
+      };
+      rust = pkgs.rust-bin.stable.latest.default;
+    in {
+      formatter = pkgs.alejandra;
+
+      packages = {
+        deadnix = pkgs.runCommand "deadnix" {} ''
+          ${pkgs.deadnix}/bin/deadnix --fail ${./.}
+          mkdir $out
+        '';
+      };
+
+      checks = {
+        inherit (self.packages.${system}) deadnix;
+      };
+
+      devShells.default = pkgs.mkShell {
+        buildInputs = with pkgs; [
+          (rust.override {
+            extensions = ["rust-src"];
+            targets = ["wasm32-unknown-unknown"];
+          })
+          cargo-edit
+          cargo-watch
+          cargo-criterion
+          cargo-fuzz
+          cargo-flamegraph
+          cargo-deny
+          crate2nix
+          wasm-pack
+          pkgconfig
+          openssl
+          gnuplot
+
+          nodejs
+          yarn
+          deno
+
+          # c deps
+          cmake
+          cmocka
+          doxygen
+
+          rnix-lsp
+          nixpkgs-fmt
+        ];
+      };
+    });
 }
3 javascript/.denoifyrc.json Normal file
@ -0,0 +1,3 @@
{
  "replacer": "scripts/denoify-replacer.mjs"
}
2 javascript/.eslintignore Normal file
@ -0,0 +1,2 @@
dist
examples
15 javascript/.eslintrc.cjs Normal file
@ -0,0 +1,15 @@
module.exports = {
  root: true,
  parser: "@typescript-eslint/parser",
  plugins: ["@typescript-eslint"],
  extends: ["eslint:recommended", "plugin:@typescript-eslint/recommended"],
  rules: {
    "@typescript-eslint/no-unused-vars": [
      "error",
      {
        argsIgnorePattern: "^_",
        varsIgnorePattern: "^_",
      },
    ],
  },
}
6 javascript/.gitignore vendored Normal file
@ -0,0 +1,6 @@
/node_modules
/yarn.lock
dist
docs/
.vim
deno_dist/
4 javascript/.prettierignore Normal file
@ -0,0 +1,4 @@
e2e/verdacciodb
dist
docs
deno_dist
4 javascript/.prettierrc Normal file
@ -0,0 +1,4 @@
{
  "semi": false,
  "arrowParens": "avoid"
}
39 javascript/HACKING.md Normal file
@ -0,0 +1,39 @@
## Architecture

The `@automerge/automerge` package is a set of
[`Proxy`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy)
objects which provide an idiomatic javascript interface built on top of the
lower level `@automerge/automerge-wasm` package (which is in turn built from the
Rust codebase and can be found in `~/automerge-wasm`). I.e. the responsibility
of this codebase is

- To map from the javascript data model to the underlying `set`, `make`,
  `insert`, and `delete` operations of Automerge.
- To expose a more convenient interface to functions in `automerge-wasm` which
  generate messages to send over the network or compressed file formats to
  store on disk.

## Building and testing

Much of the functionality of this package depends on the
`@automerge/automerge-wasm` package, and frequently you will be working on both
of them at the same time. It would be frustrating to have to push
`automerge-wasm` to NPM every time you want to test a change, but I (Alex) also
don't trust `yarn link` to do the right thing here. Therefore, the `./e2e`
folder contains a little yarn package which spins up a local NPM registry. See
`./e2e/README` for details. In brief though:

To build `automerge-wasm` and install it in the local `node_modules`

```bash
cd e2e && yarn install && yarn run e2e buildjs
```

Now that you've done this you can run the tests

```bash
yarn test
```

If you make changes to the `automerge-wasm` package you will need to re-run
`yarn e2e buildjs`
109 javascript/README.md Normal file
@ -0,0 +1,109 @@
## Automerge

Automerge is a library of data structures for building collaborative
applications; this package is the javascript implementation.

Detailed documentation is available at [automerge.org](http://automerge.org/),
but see the following for a short getting started guide.

## Quickstart

First, install the library.

```
yarn add @automerge/automerge
```

If you're writing a `node` application, you can skip straight to [Make some
data](#make-some-data). If you're in a browser you need a bundler.

### Bundler setup

`@automerge/automerge` is a wrapper around a core library which is written in
rust, compiled to WebAssembly and distributed as a separate package called
`@automerge/automerge-wasm`. Browsers don't currently support WebAssembly
modules taking part in ESM module imports, so you must use a bundler to import
`@automerge/automerge` in the browser. There are a lot of bundlers out there;
we have examples for common bundlers in the `examples` folder. Here is a short
example using Webpack 5.

Assuming a standard setup of a new webpack project, you'll need to enable the
`asyncWebAssembly` experiment. In a typical webpack project that means adding
something like this to `webpack.config.js`

```javascript
module.exports = {
  ...
  experiments: { asyncWebAssembly: true },
  performance: { // we don't want the wasm blob to generate warnings
    hints: false,
    maxEntrypointSize: 512000,
    maxAssetSize: 512000
  }
};
```

### Make some data

Automerge allows separate threads of execution to make changes to some data
and always be able to merge their changes later.

```javascript
import * as automerge from "@automerge/automerge"
import * as assert from "assert"

let doc1 = automerge.from({
  tasks: [
    { description: "feed fish", done: false },
    { description: "water plants", done: false },
  ],
})

// Create a new thread of execution
let doc2 = automerge.clone(doc1)

// Now we concurrently make changes to doc1 and doc2

// Complete a task in doc2
doc2 = automerge.change(doc2, d => {
  d.tasks[0].done = true
})

// Add a task in doc1
doc1 = automerge.change(doc1, d => {
  d.tasks.push({
    description: "water fish",
    done: false,
  })
})

// Merge changes from both docs
doc1 = automerge.merge(doc1, doc2)
doc2 = automerge.merge(doc2, doc1)

// Both docs are merged and identical
assert.deepEqual(doc1, {
  tasks: [
    { description: "feed fish", done: true },
    { description: "water plants", done: false },
    { description: "water fish", done: false },
  ],
})

assert.deepEqual(doc2, {
  tasks: [
    { description: "feed fish", done: true },
    { description: "water plants", done: false },
    { description: "water fish", done: false },
  ],
})
```

## Development

See [HACKING.md](./HACKING.md)

## Meta

Copyright 2017–present, the Automerge contributors. Released under the terms of the
MIT license (see `LICENSE`).
12 javascript/config/cjs.json Normal file
@ -0,0 +1,12 @@
{
  "extends": "../tsconfig.json",
  "exclude": [
    "../dist/**/*",
    "../node_modules",
    "../test/**/*",
    "../src/**/*.deno.ts"
  ],
  "compilerOptions": {
    "outDir": "../dist/cjs"
  }
}
13 javascript/config/declonly.json Normal file
@ -0,0 +1,13 @@
{
  "extends": "../tsconfig.json",
  "exclude": [
    "../dist/**/*",
    "../node_modules",
    "../test/**/*",
    "../src/**/*.deno.ts"
  ],
  "compilerOptions": {
    "emitDeclarationOnly": true,
    "outDir": "../dist"
  }
}
14 javascript/config/mjs.json Normal file
@ -0,0 +1,14 @@
{
  "extends": "../tsconfig.json",
  "exclude": [
    "../dist/**/*",
    "../node_modules",
    "../test/**/*",
    "../src/**/*.deno.ts"
  ],
  "compilerOptions": {
    "target": "es6",
    "module": "es6",
    "outDir": "../dist/mjs"
  }
}
10 javascript/deno-tests/deno.ts Normal file
@ -0,0 +1,10 @@
import * as Automerge from "../deno_dist/index.ts"

Deno.test("It should create, clone and free", () => {
  let doc1 = Automerge.init()
  let doc2 = Automerge.clone(doc1)

  // this is only needed if weakrefs are not supported
  Automerge.free(doc1)
  Automerge.free(doc2)
})
3 javascript/e2e/.gitignore vendored Normal file
@ -0,0 +1,3 @@
node_modules/
verdacciodb/
htpasswd
70 javascript/e2e/README.md Normal file
@ -0,0 +1,70 @@
# End to end testing for javascript packaging

The network of packages and bundlers we rely on to get the `automerge` package
working is a little complex. We have the `automerge-wasm` package, which the
`automerge` package depends upon, which means that anyone who depends on
`automerge` needs to either a) be using node or b) use a bundler in order to
load the underlying WASM module which is packaged in `automerge-wasm`.

The various bundlers involved are complicated and capricious and so we need an
easy way of testing that everything is in fact working as expected. To do this
we run a custom NPM registry (namely [Verdaccio](https://verdaccio.org/)) and
build the `automerge-wasm` and `automerge` packages and publish them to this
registry. Once we have this registry running we are able to build the example
projects which depend on these packages and check that everything works as
expected.

## Usage

First, install everything:

```
yarn install
```

### Build `automerge-js`

This builds the `automerge-wasm` package and then runs `yarn build` in the
`automerge-js` project with the `--registry` set to the verdaccio registry. The
end result is that you can run `yarn test` in the resulting `automerge-js`
directory in order to run tests against the current `automerge-wasm`.

```
yarn e2e buildjs
```

### Build examples

This builds either all of the examples in `automerge-js/examples` or just a
subset of them. Once this is complete you can run the relevant scripts (e.g.
`vite dev` for the Vite example) to check everything works.

```
yarn e2e buildexamples
```

Or, to just build the webpack example

```
yarn e2e buildexamples -e webpack
```

### Run Registry

If you're experimenting with a project which is not in the `examples` folder
you'll need a running registry. `run-registry` builds and publishes
`automerge-js` and `automerge-wasm` and then runs the registry at
`localhost:4873`.

```
yarn e2e run-registry
```

You can now run `yarn install --registry http://localhost:4873` to experiment
with the built packages.

## Using the `dev` build of `automerge-wasm`

All the commands above take a `-p` flag which can be either `release` or
`debug`. The `debug` profile builds with additional debug symbols, which makes
errors less cryptic.
Some files were not shown because too many files have changed in this diff.