Compare commits

...

46 commits
main ... c_api

Author SHA1 Message Date
Jason Kankiewicz
f52b9a13f7 Added Doxygen documentation generation.
Renamed `AMDatatype` to `AmDataType`.
Reorganized the `AmDataType` tags.
Renamed `AMfree()` to `AMdestroy()`.
Renamed `AMclone()` to `AMdup()`.
2022-02-24 14:39:38 -05:00
Orion Henry
8e24e5772d get simple test passing 2022-02-22 11:30:42 -05:00
Orion Henry
73a9b9bb24 rework to return a queriable result 2022-02-22 10:21:45 -05:00
Orion Henry
77fb2385f5 fmt 2022-02-22 10:21:45 -05:00
Orion Henry
6a7bc720e2 remove marks 2022-02-22 10:21:45 -05:00
Orion Henry
4c66e21d1b move marks into its own test 2022-02-22 10:21:45 -05:00
Orion Henry
80640ac799 bugfix: duplicate seq not blocked on apply_changes, clone did not close a transaction, added fork and merge to wasm 2022-02-22 10:21:45 -05:00
Karissa McKelvey
ec3f6cd69f Add failing test for decoding a conflicted merge 2022-02-22 10:21:45 -05:00
rae
4c5bece1f6 Add support for web 2022-02-22 10:21:45 -05:00
Orion Henry
312094575b fix version number 2022-02-22 10:21:45 -05:00
Orion Henry
1ee179b943 cleanup typescript defs 2022-02-22 10:21:45 -05:00
Orion Henry
a14d272902 fix bug in set scalar 2022-02-22 10:21:45 -05:00
Orion Henry
efbc397e02 better error on invalid value 2022-02-22 10:21:44 -05:00
Orion Henry
dabf941370 spans have types not names 2022-02-22 10:21:44 -05:00
Orion Henry
ec879d13dd raw_spans with ids 2022-02-22 10:21:44 -05:00
Orion Henry
0987514204 raw_spans experiment 2022-02-22 10:21:44 -05:00
Orion Henry
d46608bed0 adding make 2022-02-22 10:21:44 -05:00
Orion Henry
a54f6e30c8 Revert "Remove make"
This reverts commit 5b9360155c.
2022-02-22 10:21:44 -05:00
Orion Henry
2bc8d18f89 Remove make 2022-02-22 10:21:44 -05:00
Orion Henry
b0ba6fa733 fixed fixed 2022-02-22 10:21:44 -05:00
Orion Henry
c4d5533ed3 use types in pkg 2022-02-22 10:21:44 -05:00
Orion Henry
2e4ac5e0e1 fix return types 2022-02-22 10:21:44 -05:00
Orion Henry
9e628201ab remove dead code 2022-02-22 10:21:43 -05:00
Orion Henry
b41a758ff4 all ts tests passing 2022-02-22 10:21:43 -05:00
Orion Henry
5c2a8e25ac almost working ts 2022-02-22 10:21:43 -05:00
Karissa McKelvey
4a8c611e57 Fix typescript errors in test 2022-02-22 10:21:43 -05:00
Karissa McKelvey
456766c348 uint datatypes & fix some more typescript errors 2022-02-22 10:21:43 -05:00
Orion Henry
8e0563d4f1 half done - not working typescript 2022-02-22 10:21:43 -05:00
Orion Henry
b0998995f4 cleanup / rename 2022-02-22 10:21:43 -05:00
Orion Henry
188936a711 mark encode/decode/serde 2022-02-22 10:21:43 -05:00
Orion Henry
8541e2246a rework marks as inserts between values 2022-02-22 10:21:42 -05:00
Orion Henry
79ccf65a91 v0 wip 2022-02-22 10:21:42 -05:00
karissa
871fc9febe Update app to include text editor, import Automerge correctly 2022-02-22 10:21:42 -05:00
Orion Henry
f435d81156 remove from package.json 2022-02-22 10:21:42 -05:00
Orion Henry
183507d6a6 remove tmp file 2022-02-22 10:21:42 -05:00
Orion Henry
c0b26c307c add cra example code 2022-02-22 10:21:42 -05:00
Orion Henry
2c483c6c68 rework wasm function to use js types more directly 2022-02-22 10:21:41 -05:00
Andrew Jeffery
585b0a0cc7 Change rust flake to use default profile 2022-02-22 10:21:41 -05:00
Andrew Jeffery
93672e6c08 flake.lock: Update
Flake lock file changes:

• Updated input 'flake-utils':
    'github:numtide/flake-utils/2ebf2558e5bf978c7fb8ea927dfaed8fefab2e28' (2021-04-25)
  → 'github:numtide/flake-utils/846b2ae0fc4cc943637d3d1def4454213e203cba' (2022-01-20)
• Updated input 'nixpkgs':
    'github:nixos/nixpkgs/63586475587d7e0e078291ad4b49b6f6a6885100' (2021-05-06)
  → 'github:nixos/nixpkgs/554d2d8aa25b6e583575459c297ec23750adb6cb' (2022-02-02)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/d8efe70dc561c4bea0b7bf440d36ce98c497e054' (2021-05-07)
  → 'github:oxalica/rust-overlay/674156c4c2f46dd6a6846466cb8f9fee84c211ca' (2022-02-04)
• Updated input 'rust-overlay/flake-utils':
    'github:numtide/flake-utils/5466c5bbece17adaab2d82fae80b46e807611bf3' (2021-02-28)
  → 'github:numtide/flake-utils/bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4' (2021-11-15)
• Updated input 'rust-overlay/nixpkgs':
    'github:nixos/nixpkgs/54c1e44240d8a527a8f4892608c4bce5440c3ecb' (2021-04-02)
  → 'github:NixOS/nixpkgs/8afc4e543663ca0a6a4f496262cd05233737e732' (2021-11-21)
2022-02-22 10:21:41 -05:00
Andrew Jeffery
9d31754659 Add from () for Value 2022-02-22 10:21:41 -05:00
Jason Kankiewicz
930b9a850f Add a CI step to run the CMake build of the C bindings for @alexjg. 2022-02-16 13:24:08 -08:00
Jason Kankiewicz
3290492235 Add CMake instructions for @orionz. 2022-02-16 13:24:08 -08:00
Jason Kankiewicz
b62e2fcf96 Add CMake support. 2022-02-16 13:24:08 -08:00
Jason Kankiewicz
ec5fe444b9 Replace *intptr_t in C function signatures. 2022-02-16 13:24:08 -08:00
Orion Henry
851ec6c6d3 am_pop and am_pop_value 2022-02-06 18:59:19 -05:00
Orion Henry
17104686a2 break the ground 2022-02-03 19:43:36 -05:00
66 changed files with 3156 additions and 438 deletions


@ -1,5 +1,5 @@
name: ci
on:
on:
push:
branches:
- experiment
@ -68,6 +68,18 @@ jobs:
- name: run tests
run: ./scripts/ci/js_tests
cmake_build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Update CMake
uses: jwlawson/actions-setup-cmake@v1.12
with:
cmake-version: latest
- name: Build and test C bindings
run: ./scripts/ci/cmake-build Release Static
shell: bash
linux:
runs-on: ubuntu-latest
strategy:

1
.gitignore vendored

@ -2,3 +2,4 @@
/.direnv
perf.*
/Cargo.lock
build/


@ -2,6 +2,7 @@
members = [
"automerge",
"automerge-wasm",
"automerge-c",
"edit-trace",
]


@ -64,7 +64,7 @@ To build and test the wasm library:
$ yarn opt ## or set `wasm-opt = false` in Cargo.toml on supported platforms (not arm64 osx)
```
And finally to test the js library. This is where most of the tests reside.
To test the js library. This is where most of the tests reside.
```shell
## setup
@ -76,6 +76,24 @@ And finally to test the js library. This is where most of the tests reside.
$ yarn test
```
And finally, to build and test the C bindings with CMake:
```shell
## setup
$ cd automerge-c
$ mkdir -p build
$ cd build
$ cmake -S .. -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
## building and testing
$ cmake --build .
```
To add debugging symbols, replace `Release` with `Debug`.
To build a shared library instead of a static one, replace `OFF` with `ON`.
The C bindings can be built and tested on any platform for which CMake is available, but the steps for doing so vary across platforms and are too numerous to list here.
## Benchmarking
The `edit-trace` folder has the main code for running the edit trace benchmarking.

3
automerge-c/.gitignore vendored Normal file

@ -0,0 +1,3 @@
automerge
automerge.h
automerge.o

133
automerge-c/CMakeLists.txt Normal file

@ -0,0 +1,133 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
# Parse the library name, project name and project version out of Cargo's TOML file.
set(CARGO_LIB_SECTION OFF)
set(LIBRARY_NAME "")
set(CARGO_PKG_SECTION OFF)
set(CARGO_PKG_NAME "")
set(CARGO_PKG_VERSION "")
file(READ Cargo.toml TOML_STRING)
string(REPLACE ";" "\\\\;" TOML_STRING "${TOML_STRING}")
string(REPLACE "\n" ";" TOML_LINES "${TOML_STRING}")
foreach(TOML_LINE IN ITEMS ${TOML_LINES})
string(REGEX MATCH "^\\[(lib|package)\\]$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 STREQUAL "lib")
set(CARGO_LIB_SECTION ON)
set(CARGO_PKG_SECTION OFF)
elseif(CMAKE_MATCH_1 STREQUAL "package")
set(CARGO_LIB_SECTION OFF)
set(CARGO_PKG_SECTION ON)
endif()
string(REGEX MATCH "^name += +\"([^\"]+)\"$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 AND (CARGO_LIB_SECTION AND NOT CARGO_PKG_SECTION))
set(LIBRARY_NAME "${CMAKE_MATCH_1}")
elseif(CMAKE_MATCH_1 AND (NOT CARGO_LIB_SECTION AND CARGO_PKG_SECTION))
set(CARGO_PKG_NAME "${CMAKE_MATCH_1}")
endif()
string(REGEX MATCH "^version += +\"([^\"]+)\"$" _ ${TOML_LINE})
if(CMAKE_MATCH_1 AND CARGO_PKG_SECTION)
set(CARGO_PKG_VERSION "${CMAKE_MATCH_1}")
endif()
if(LIBRARY_NAME AND (CARGO_PKG_NAME AND CARGO_PKG_VERSION))
break()
endif()
endforeach()
project(${CARGO_PKG_NAME} VERSION ${CARGO_PKG_VERSION} LANGUAGES C DESCRIPTION "C bindings for the Automerge Rust backend.")
option(BUILD_SHARED_LIBS "Enable the choice of a shared or static library.")
option(BUILD_TESTING "Enable the choice of testing the build." ON)
include(CMakePackageConfigHelpers)
include(GNUInstallDirs)
if(BUILD_TESTING)
include(CTest)
enable_testing()
endif()
string(MAKE_C_IDENTIFIER ${PROJECT_NAME} SYMBOL_PREFIX)
string(TOUPPER ${SYMBOL_PREFIX} SYMBOL_PREFIX)
add_subdirectory(src)
# Generate and install the configuration header.
math(EXPR INTEGER_PROJECT_VERSION_MAJOR "${PROJECT_VERSION_MAJOR} * 100000")
math(EXPR INTEGER_PROJECT_VERSION_MINOR "${PROJECT_VERSION_MINOR} * 100")
math(EXPR INTEGER_PROJECT_VERSION_PATCH "${PROJECT_VERSION_PATCH}")
math(EXPR INTEGER_PROJECT_VERSION "${INTEGER_PROJECT_VERSION_MAJOR} + ${INTEGER_PROJECT_VERSION_MINOR} + ${INTEGER_PROJECT_VERSION_PATCH}")
configure_file(
${CMAKE_MODULE_PATH}/config.h.in
config.h
@ONLY
NEWLINE_STYLE LF
)
install(
FILES ${CMAKE_BINARY_DIR}/config.h
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)
# Generate and install .cmake files
set(PROJECT_CONFIG_NAME "${PROJECT_NAME}-config")
set(PROJECT_CONFIG_VERSION_NAME "${PROJECT_CONFIG_NAME}-version")
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
VERSION ${PROJECT_VERSION}
COMPATIBILITY ExactVersion
)
# The namespace label starts with the title-cased library name.
string(SUBSTRING ${LIBRARY_NAME} 0 1 NS_FIRST)
string(SUBSTRING ${LIBRARY_NAME} 1 -1 NS_REST)
string(TOUPPER ${NS_FIRST} NS_FIRST)
string(TOLOWER ${NS_REST} NS_REST)
string(CONCAT NAMESPACE ${NS_FIRST} ${NS_REST} "::")
# \note CMake doesn't automate the exporting of an imported library's targets
# so the package configuration script must do it.
configure_package_config_file(
${CMAKE_MODULE_PATH}/${PROJECT_CONFIG_NAME}.cmake.in
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
INSTALL_DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)
install(
FILES
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_NAME}.cmake
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_CONFIG_VERSION_NAME}.cmake
DESTINATION
${CMAKE_INSTALL_LIBDIR}/cmake/${PROJECT_NAME}
)

19
automerge-c/Cargo.toml Normal file

@ -0,0 +1,19 @@
[package]
name = "automerge-c"
version = "0.1.0"
authors = ["Orion Henry <orion.henry@gmail.com>"]
edition = "2021"
license = "MIT"
[lib]
name = "automerge"
crate-type = ["cdylib", "staticlib"]
bench = false
doc = false
[dependencies]
automerge = { path = "../automerge" }
libc = "^0.2"
[build-dependencies]
cbindgen = "^0.20"

30
automerge-c/Makefile Normal file

@ -0,0 +1,30 @@
CC=gcc
CFLAGS=-I.
DEPS=automerge.h
LIBS=-lpthread -ldl -lm
LDIR=../target/release
LIB=../target/release/libautomerge.a
DEBUG_LIB=../target/debug/libautomerge.a
all: $(DEBUG_LIB) automerge
debug: LDIR=../target/debug
debug: automerge $(DEBUG_LIB)
automerge: automerge.o $(LDIR)/libautomerge.a
$(CC) -o $@ automerge.o $(LDIR)/libautomerge.a $(LIBS) -L$(LDIR)
$(DEBUG_LIB): src/*.rs
cargo build
$(LIB): src/*.rs
cargo build --release
%.o: %.c $(DEPS)
$(CC) -c -o $@ $< $(CFLAGS)
.PHONY: clean
clean:
rm -f *.o automerge $(LIB) $(DEBUG_LIB)

95
automerge-c/README.md Normal file

@ -0,0 +1,95 @@
## Methods we need to support
### Basic management
1. `AMcreate()`
1. `AMclone(doc)`
1. `AMfree(doc)`
1. `AMconfig(doc, key, val)` // set actor
1. `actor = get_actor(doc)`
### Transactions
1. `AMpendingOps(doc)`
1. `AMcommit(doc, message, time)`
1. `AMrollback(doc)`
### Write
1. `AMset{Map|List}(doc, obj, prop, value)`
1. `AMinsert(doc, obj, index, value)`
1. `AMpush(doc, obj, value)`
1. `AMdel{Map|List}(doc, obj, prop)`
1. `AMinc{Map|List}(doc, obj, prop, value)`
1. `AMspliceText(doc, obj, start, num_del, text)`
### Read
1. `AMkeys(doc, obj, heads)`
1. `AMlength(doc, obj, heads)`
1. `AMvalues(doc, obj, heads)`
1. `AMtext(doc, obj, heads)`
### Sync
1. `AMgenerateSyncMessage(doc, state)`
1. `AMreceiveSyncMessage(doc, state, message)`
1. `AMinitSyncState()`
### Save / Load
1. `AMload(data)`
1. `AMloadIncremental(doc, data)`
1. `AMsave(doc)`
1. `AMsaveIncremental(doc)`
### Low Level Access
1. `AMapplyChanges(doc, changes)`
1. `AMgetChanges(doc, deps)`
1. `AMgetChangesAdded(doc1, doc2)`
1. `AMgetHeads(doc)`
1. `AMgetLastLocalChange(doc)`
1. `AMgetMissingDeps(doc, heads)`
### Encode/Decode
1. `AMencodeChange(change)`
1. `AMdecodeChange(change)`
1. `AMencodeSyncMessage(change)`
1. `AMdecodeSyncMessage(change)`
1. `AMencodeSyncState(change)`
1. `AMdecodeSyncState(change)`
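To make the shape of the proposed API concrete, here is a rough sketch of how a handful of these calls might compose. Nothing here is final: `AMsetMap()` is just the `AMset{Map|List}` entry spelled out (the current `src/lib.rs` spells it `AMmapSet()` and also takes a data type tag), and `AMcommit()`, `AMsave()` and `AMfree()` are still only proposals from the lists above.
```
#include "automerge.h"

int main(void) {
    /* Basic management */
    AMdoc* doc = AMcreate();
    AMconfig(doc, "actor", "aabbcc");            /* set the actor ID */

    /* Write + Transactions (proposed spellings) */
    AMsetMap(doc, AM_ROOT, "title", "hello world");
    AMcommit(doc, "initial change", 0);

    /* Save / Load: AMsave(doc) would return the serialized document */

    AMfree(doc);
    return 0;
}
```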
## Open Question - Memory management
Most of these calls return one or more items of arbitrary length. Doing memory management in C is tricky. This is my proposed solution...
###
```
// returns 1 or zero opids
n = automerge_set(doc, "_root", "hello", datatype, value);
if (n) {
automerge_pop(doc, &obj, len);
}
// returns n values
n = automerge_values(doc, "_root", "hello");
for (i = 0; i < n; i++) {
automerge_pop_value(doc, &value, &datatype, len);
}
```
There would be one pop method per object type. Users alloc and free the buffers. Multiple return values would result in multiple pops. Buffers that are too small would return an error and allow a retry.
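As a slightly fuller sketch of that pattern, including the too-small-buffer retry, and assuming for illustration that `automerge_pop_value()` takes the buffer plus an in/out length and reports the required size through `len` on failure (none of these signatures are settled):
```
#include <stddef.h>
#include <stdlib.h>
#include "automerge.h"

void read_values(AMdoc* doc) {
    /* Sketch only: the automerge_* calls are the proposed ones above. */
    int n = automerge_values(doc, "_root", "hello");
    for (int i = 0; i < n; i++) {
        size_t len = 64;                 /* initial guess */
        char* value = malloc(len);
        int datatype = 0;
        if (automerge_pop_value(doc, value, &datatype, &len) != 0) {
            /* too small: len now holds the required size, so retry */
            value = realloc(value, len);
            automerge_pop_value(doc, value, &datatype, &len);
        }
        /* ... use value and datatype ... */
        free(value);                     /* the caller allocates and frees */
    }
}
```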
### Formats
Actors - We could do (bytes, len) or a hex-encoded string?
ObjIds - We could do flat bytes of the ExId struct, but let's do human-readable strings for now - the struct would be faster but opaque.
Heads - Might as well make it a flat buffer `(n, hash, hash, ...)` (see the sketch after this list).
Changes - Put them all in a flat concatenated buffer
Encode/Decode - to json strings?
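And for the flat heads buffer, a minimal sketch of what walking `(n, hash, hash, ...)` could look like on the C side, assuming a leading 64-bit count and fixed 32-byte change hashes (neither of which is settled here):
```
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HASH_LEN 32  /* assumption: 32-byte change hashes */

void print_heads(const uint8_t* buf) {
    uint64_t n;
    memcpy(&n, buf, sizeof n);               /* leading count */
    const uint8_t* hash = buf + sizeof n;    /* hashes packed back to back */
    for (uint64_t i = 0; i < n; i++, hash += HASH_LEN) {
        for (int j = 0; j < HASH_LEN; j++) {
            printf("%02x", hash[j]);
        }
        printf("\n");
    }
}
```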

36
automerge-c/automerge.c Normal file

@ -0,0 +1,36 @@
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include "automerge.h"
#define MAX_BUFF_SIZE 4096
int main() {
int n = 0;
int data_type = 0;
char buff[MAX_BUFF_SIZE];
char obj[MAX_BUFF_SIZE];
AMresult* res = NULL;
printf("begin\n");
AMdoc* doc = AMcreate();
printf("AMconfig()...");
AMconfig(doc, "actor", "aabbcc");
printf("pass!\n");
printf("AMmapSet()...\n");
res = AMmapSet(doc, NULL, "string", AM_DATA_TYPE_STR, "hello world");
if (AMresultStatus(res) != AM_STATUS_COMMAND_OK)
{
printf("AMmapSet() failed: %s\n", AMerrorMessage(res));
return 1;
}
AMclear(res);
printf("pass!\n");
AMdestroy(doc);
printf("end\n");
}

25
automerge-c/build.rs Normal file

@ -0,0 +1,25 @@
extern crate cbindgen;
use std::{env, path::PathBuf};
fn main() {
let crate_dir = PathBuf::from(
env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR env var is not defined"),
);
let config = cbindgen::Config::from_file("cbindgen.toml")
.expect("Unable to find cbindgen.toml configuration file");
// let mut config: cbindgen::Config = Default::default();
// config.language = cbindgen::Language::C;
if let Ok(writer) = cbindgen::generate_with_config(&crate_dir, config) {
writer.write_to_file(crate_dir.join("automerge.h"));
// Also write the generated header into the target directory when
// specified (necessary for an out-of-source build a la CMake).
if let Ok(target_dir) = env::var("CARGO_TARGET_DIR") {
writer.write_to_file(PathBuf::from(target_dir).join("automerge.h"));
}
}
}

38
automerge-c/cbindgen.toml Normal file

@ -0,0 +1,38 @@
header = """
/** \\file
* All constants, functions and types in the Automerge library's C API.
*/
"""
include_guard = "automerge_h"
autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
language = "C"
includes = []
sys_includes = ["stddef.h", "stdint.h", "stdbool.h"]
no_includes = true
line_length = 140
documentation = true
documentation_style = "doxy"
after_includes = """\n
/**
* \\defgroup enumerations Public Enumerations
Symbolic names for integer constants.
*/
/**
* \\memberof AMdoc
* \\brief The root object of an `AMdoc` struct.
*/
#define AM_ROOT NULL
"""
usize_is_size_t = true
[enum]
rename_variants = "ScreamingSnakeCase"
enum_class = true
prefix_with_name = true
derive_const_casts = true
must_use = "MUST_USE_ENUM"
[export]
item_types = ["enums", "structs", "opaque", "constants", "functions"]


@ -0,0 +1,99 @@
@PACKAGE_INIT@
include(CMakeFindDependencyMacro)
set(CMAKE_THREAD_PREFER_PTHREAD TRUE)
set(THREADS_PREFER_PTHREAD_FLAG TRUE)
find_dependency(Threads)
find_library(@SYMBOL_PREFIX@_IMPLIB_DEBUG @LIBRARY_NAME@${CMAKE_DEBUG_POSTFIX} PATHS "${PACKAGE_PREFIX_DIR}/debug/${CMAKE_INSTALL_LIBDIR}" "${PACKAGE_PREFIX_DIR}/${CMAKE_INSTALL_LIBDIR}" NO_DEFAULT_PATH)
find_library(@SYMBOL_PREFIX@_IMPLIB_RELEASE @LIBRARY_NAME@${CMAKE_RELEASE_POSTFIX} PATHS "${PACKAGE_PREFIX_DIR}/${CMAKE_INSTALL_LIBDIR}" NO_DEFAULT_PATH)
find_file(@SYMBOL_PREFIX@_LOCATION_DEBUG "${CMAKE_SHARED_LIBRARY_PREFIX}@LIBRARY_NAME@${CMAKE_DEBUG_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}" PATHS "${PACKAGE_PREFIX_DIR}/debug/${CMAKE_INSTALL_BINDIR}" "${PACKAGE_PREFIX_DIR}/${CMAKE_INSTALL_LIBDIR}" NO_DEFAULT_PATH)
find_file(@SYMBOL_PREFIX@_LOCATION_RELEASE "${CMAKE_SHARED_LIBRARY_PREFIX}@LIBRARY_NAME@${CMAKE_RELEASE_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}" PATHS "${PACKAGE_PREFIX_DIR}/${CMAKE_INSTALL_BINDIR}" NO_DEFAULT_PATH)
if(@BUILD_SHARED_LIBS@)
set(@SYMBOL_PREFIX@_DEFINE_SYMBOL "@SYMBOL_PREFIX@_EXPORTS")
if(WIN32)
set(@SYMBOL_PREFIX@_NO_SONAME_DEBUG "TRUE")
set(@SYMBOL_PREFIX@_NO_SONAME_RELEASE "TRUE")
set(@SYMBOL_PREFIX@_SONAME_DEBUG "")
set(@SYMBOL_PREFIX@_SONAME_RELEASE "")
else()
set(@SYMBOL_PREFIX@_NO_SONAME_DEBUG "FALSE")
set(@SYMBOL_PREFIX@_NO_SONAME_RELEASE "FALSE")
get_filename_component(@SYMBOL_PREFIX@_SONAME_DEBUG "${@SYMBOL_PREFIX@_LOCATION_DEBUG}" NAME)
get_filename_component(@SYMBOL_PREFIX@_SONAME_RELEASE "${@SYMBOL_PREFIX@_LOCATION_RELEASE}" NAME)
endif()
set(@SYMBOL_PREFIX@_TYPE "SHARED")
else()
set(@SYMBOL_PREFIX@_DEFINE_SYMBOL "")
set(@SYMBOL_PREFIX@_LOCATION_DEBUG "${@SYMBOL_PREFIX@_IMPLIB_DEBUG}")
set(@SYMBOL_PREFIX@_IMPLIB_DEBUG "")
set(@SYMBOL_PREFIX@_LOCATION_RELEASE "${@SYMBOL_PREFIX@_IMPLIB_RELEASE}")
set(@SYMBOL_PREFIX@_IMPLIB_RELEASE "")
set(@SYMBOL_PREFIX@_NO_SONAME_DEBUG "TRUE")
set(@SYMBOL_PREFIX@_NO_SONAME_RELEASE "TRUE")
set(@SYMBOL_PREFIX@_SONAME_DEBUG "")
set(@SYMBOL_PREFIX@_SONAME_RELEASE "")
set(@SYMBOL_PREFIX@_TYPE "STATIC")
endif()
add_library(@NAMESPACE@@PROJECT_NAME@ ${@SYMBOL_PREFIX@_TYPE} IMPORTED)
set_target_properties(
@NAMESPACE@@PROJECT_NAME@
PROPERTIES
# \note Cargo writes a debug build into a nested directory instead of
# decorating its name.
DEBUG_POSTFIX ""
DEFINE_SYMBOL "${@SYMBOL_PREFIX@_DEFINE_SYMBOL}"
IMPORTED_CONFIGURATIONS "RELEASE;DEBUG"
IMPORTED_IMPLIB_DEBUG "${@SYMBOL_PREFIX@_IMPLIB_DEBUG}"
IMPORTED_IMPLIB_RELEASE "${@SYMBOL_PREFIX@_IMPLIB_RELEASE}"
IMPORTED_LOCATION_DEBUG "${@SYMBOL_PREFIX@_LOCATION_DEBUG}"
IMPORTED_LOCATION_RELEASE "${@SYMBOL_PREFIX@_LOCATION_RELEASE}"
IMPORTED_NO_SONAME_DEBUG "${@SYMBOL_PREFIX@_NO_SONAME_DEBUG}"
IMPORTED_NO_SONAME_RELEASE "${@SYMBOL_PREFIX@_NO_SONAME_RELEASE}"
IMPORTED_SONAME_DEBUG "${@SYMBOL_PREFIX@_SONAME_DEBUG}"
IMPORTED_SONAME_RELEASE "${@SYMBOL_PREFIX@_SONAME_RELEASE}"
INTERFACE_INCLUDE_DIRECTORIES "${PACKAGE_PREFIX_DIR}/${CMAKE_INSTALL_INCLUDEDIR}"
LINKER_LANGUAGE C
PUBLIC_HEADER "${PACKAGE_PREFIX_DIR}/${CMAKE_INSTALL_INCLUDEDIR}/@PROJECT_NAME@/@LIBRARY_NAME@.h"
SOVERSION "@PROJECT_VERSION_MAJOR@"
VERSION "@PROJECT_VERSION@"
# \note Cargo exports all of the symbols automatically.
WINDOWS_EXPORT_ALL_SYMBOLS "TRUE"
)
# Remove the variables that the find_* command calls cached.
unset(@SYMBOL_PREFIX@_IMPLIB_DEBUG CACHE)
unset(@SYMBOL_PREFIX@_IMPLIB_RELEASE CACHE)
unset(@SYMBOL_PREFIX@_LOCATION_DEBUG CACHE)
unset(@SYMBOL_PREFIX@_LOCATION_RELEASE CACHE)
check_required_components(@PROJECT_NAME@)


@ -0,0 +1,14 @@
#ifndef @SYMBOL_PREFIX@_CONFIG_INCLUDED
#define @SYMBOL_PREFIX@_CONFIG_INCLUDED
/* This header is auto-generated by CMake. */
#define @SYMBOL_PREFIX@_VERSION @INTEGER_PROJECT_VERSION@
#define @SYMBOL_PREFIX@_MAJOR_VERSION (@SYMBOL_PREFIX@_VERSION / 100000)
#define @SYMBOL_PREFIX@_MINOR_VERSION ((@SYMBOL_PREFIX@_VERSION / 100) % 1000)
#define @SYMBOL_PREFIX@_PATCH_VERSION (@SYMBOL_PREFIX@_VERSION % 100)
#endif /* @SYMBOL_PREFIX@_CONFIG_INCLUDED */

Binary file not shown (1.4 KiB).


@ -0,0 +1,232 @@
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
find_program (
CARGO_CMD
"cargo"
PATHS "$ENV{CARGO_HOME}/bin"
DOC "The Cargo command"
)
if(NOT CARGO_CMD)
message(FATAL_ERROR "Cargo (Rust package manager) not found! Install it and/or set the CARGO_HOME environment variable.")
endif()
string(TOLOWER "${CMAKE_BUILD_TYPE}" BUILD_TYPE_LOWER)
if(BUILD_TYPE_LOWER STREQUAL debug)
set(CARGO_BUILD_TYPE "debug")
set(CARGO_FLAG "")
else()
set(CARGO_BUILD_TYPE "release")
set(CARGO_FLAG "--release")
endif()
set(CARGO_TARGET_DIR "${CMAKE_CURRENT_BINARY_DIR}/Cargo/target")
set(CARGO_CURRENT_BINARY_DIR "${CARGO_TARGET_DIR}/${CARGO_BUILD_TYPE}")
set(
CARGO_OUTPUT
# \note cbindgen won't regenerate a missing header so it can't be cleaned.
#${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}
)
if(WIN32)
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
list(APPEND CARGO_OUTPUT ${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX})
endif()
add_custom_command(
OUTPUT ${CARGO_OUTPUT}
COMMAND
${CMAKE_COMMAND} -E env CARGO_TARGET_DIR=${CARGO_TARGET_DIR} ${CARGO_CMD} build ${CARGO_FLAG}
MAIN_DEPENDENCY
lib.rs
DEPENDS
${CMAKE_SOURCE_DIR}/build.rs
${CMAKE_SOURCE_DIR}/Cargo.toml
${CMAKE_SOURCE_DIR}/cbindgen.toml
WORKING_DIRECTORY
${CMAKE_SOURCE_DIR}
COMMENT
"Producing the library artifacts with Cargo..."
VERBATIM
)
if(BUILD_SHARED_LIBS)
if(WIN32)
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_BINDIR}")
else()
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
endif()
set(LIBRARY_DEFINE_SYMBOL "${SYMBOL_PREFIX}_EXPORTS")
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
set(LIBRARY_IMPLIB "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(LIBRARY_NO_SONAME "${WIN32}")
set(LIBRARY_SONAME "${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set(LIBRARY_TYPE "SHARED")
else()
set(LIBRARY_DEFINE_SYMBOL "")
set(LIBRARY_DESTINATION "${CMAKE_INSTALL_LIBDIR}")
set(LIBRARY_IMPLIB "")
set(LIBRARY_LOCATION "${CARGO_CURRENT_BINARY_DIR}/${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set(LIBRARY_NO_SONAME "TRUE")
set(LIBRARY_SONAME "")
set(LIBRARY_TYPE "STATIC")
endif()
add_library(${LIBRARY_NAME} ${LIBRARY_TYPE} IMPORTED GLOBAL)
target_sources(${LIBRARY_NAME} INTERFACE ${CARGO_OUTPUT})
set_target_properties(
${LIBRARY_NAME}
PROPERTIES
# \note Cargo writes a debug build into a nested directory instead of
# decorating its name.
DEBUG_POSTFIX ""
DEFINE_SYMBOL "${LIBRARY_DEFINE_SYMBOL}"
IMPORTED_IMPLIB "${LIBRARY_IMPLIB}"
IMPORTED_LOCATION "${LIBRARY_LOCATION}"
IMPORTED_NO_SONAME "${LIBRARY_NO_SONAME}"
IMPORTED_SONAME "${LIBRARY_SONAME}"
LINKER_LANGUAGE C
PUBLIC_HEADER "${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h"
SOVERSION "${PROJECT_VERSION_MAJOR}"
VERSION "${PROJECT_VERSION}"
# \note Cargo exports all of the symbols automatically.
WINDOWS_EXPORT_ALL_SYMBOLS "TRUE"
)
target_compile_definitions(${LIBRARY_NAME} INTERFACE $<TARGET_PROPERTY:${LIBRARY_NAME},DEFINE_SYMBOL>)
target_include_directories(
${LIBRARY_NAME}
INTERFACE
"$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}>"
)
set(CMAKE_THREAD_PREFER_PTHREAD TRUE)
set(THREADS_PREFER_PTHREAD_FLAG TRUE)
find_package(Threads REQUIRED)
set(LIBRARY_DEPENDENCIES Threads::Threads ${CMAKE_DL_LIBS})
if(WIN32)
list(APPEND LIBRARY_DEPENDENCIES Bcrypt userenv ws2_32)
else()
list(APPEND LIBRARY_DEPENDENCIES m)
endif()
target_link_libraries(${LIBRARY_NAME} INTERFACE ${LIBRARY_DEPENDENCIES})
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_IMPLIB>
TYPE LIB
# \note The basename of an import library output by Cargo is the filename
# of its corresponding shared library.
RENAME "${CMAKE_STATIC_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_STATIC_LIBRARY_SUFFIX}"
OPTIONAL
)
set(LIBRARY_FILE_NAME "${CMAKE_${LIBRARY_TYPE}_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_${LIBRARY_TYPE}_LIBRARY_SUFFIX}")
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},IMPORTED_LOCATION>
RENAME "${LIBRARY_FILE_NAME}"
DESTINATION ${LIBRARY_DESTINATION}
)
install(
FILES $<TARGET_PROPERTY:${LIBRARY_NAME},PUBLIC_HEADER>
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/${PROJECT_NAME}
)
if(BUILD_TESTING)
add_executable(test_${LIBRARY_NAME} ${CMAKE_SOURCE_DIR}/${LIBRARY_NAME}.c)
set_target_properties(test_${LIBRARY_NAME} PROPERTIES LINKER_LANGUAGE C)
# \note An imported library's INTERFACE_INCLUDE_DIRECTORIES property can't
# contain a non-existent path so its build-time include directory
# must be specified for all of its dependent targets instead.
target_include_directories(test_${LIBRARY_NAME} PRIVATE "$<BUILD_INTERFACE:${CARGO_TARGET_DIR}>")
target_link_libraries(test_${LIBRARY_NAME} PRIVATE ${LIBRARY_NAME})
add_test(NAME "test_${LIBRARY_NAME}" COMMAND test_${LIBRARY_NAME})
if(BUILD_SHARED_LIBS AND WIN32)
add_custom_command(
TARGET test_${LIBRARY_NAME}
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${CARGO_CURRENT_BINARY_DIR}/${CMAKE_SHARED_LIBRARY_PREFIX}${LIBRARY_NAME}${CMAKE_${CMAKE_BUILD_TYPE}_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}
${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Copying the DLL built by Cargo into the test directory..."
VERBATIM
)
endif()
add_custom_command(
TARGET test_${LIBRARY_NAME}
POST_BUILD
COMMAND
ctest -C $<CONFIGURATION> --output-on-failure
COMMENT
"Running the test(s)..."
VERBATIM
)
endif()
find_package(Doxygen OPTIONAL_COMPONENTS dot)
if(DOXYGEN_FOUND)
set(DOXYGEN_GENERATE_LATEX YES)
set(DOXYGEN_PDF_HYPERLINKS YES)
set(DOXYGEN_PROJECT_LOGO "${CMAKE_SOURCE_DIR}/img/brandmark.png")
set(DOXYGEN_USE_MDFILE_AS_MAINPAGE "${CMAKE_SOURCE_DIR}/README.md")
# \note This target is only necessary because cbindgen won't allow the
# generated header to be listed in the Cargo command's output, being
# Doxygen's input would've been enough otherwise.
add_custom_target(
${LIBRARY_NAME}_header
DEPENDS ${CARGO_OUTPUT}
)
doxygen_add_docs(
${LIBRARY_NAME}_docs
"${CARGO_TARGET_DIR}/${LIBRARY_NAME}.h"
"${CMAKE_SOURCE_DIR}/README.md"
WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
COMMENT "Producing documentation with Doxygen..."
)
add_dependencies(${LIBRARY_NAME}_docs ${LIBRARY_NAME}_header)
endif()

33
automerge-c/src/doc.rs Normal file

@ -0,0 +1,33 @@
use automerge as am;
use std::ops::{Deref, DerefMut};
/// \class AMdoc
/// \brief A JSON-like CRDT.
#[derive(Clone)]
pub struct AMdoc(am::Automerge);
impl AMdoc {
pub fn create(handle: am::Automerge) -> AMdoc {
AMdoc(handle)
}
}
impl Deref for AMdoc {
type Target = am::Automerge;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for AMdoc {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl From<AMdoc> for *mut AMdoc {
fn from(b: AMdoc) -> Self {
Box::into_raw(Box::new(b))
}
}

321
automerge-c/src/lib.rs Normal file

@ -0,0 +1,321 @@
use automerge as am;
use std::{
ffi::{c_void, CStr},
fmt,
os::raw::c_char,
};
mod doc;
mod result;
mod utils;
use doc::AMdoc;
use result::AMresult;
use utils::import_value;
/// \ingroup enumerations
/// \brief All data types that a value can be set to.
#[derive(Debug)]
#[repr(u8)]
pub enum AmDataType {
/// A null value.
Null,
/// A boolean value.
Boolean,
/// An ordered collection of byte values.
Bytes,
/// A CRDT counter value.
Counter,
/// A 64-bit floating-point value.
F64,
/// A signed integer value.
Int,
/// An ordered collection of (index, value) pairs.
List,
/// An unordered collection of (key, value) pairs.
Map,
/// A UTF-8 string value.
Str,
/// An unordered collection of records like in an RDBMS.
Table,
/// An ordered collection of (index, value) pairs optimized for characters.
Text,
/// A Lamport timestamp value.
Timestamp,
/// An unsigned integer value.
Uint,
}
/// \ingroup enumerations
/// \brief The status of an API call.
#[derive(Debug)]
#[repr(u8)]
pub enum AmStatus {
/// The command was successful.
CommandOk,
/// The result is an object ID.
ObjOk,
/// The result is one or more values.
ValuesOk,
/// The result is one or more changes.
ChangesOk,
/// The result is invalid.
InvalidResult,
/// The result was an error.
Error,
}
impl fmt::Display for AmDataType {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
}
}
macro_rules! to_str {
($s:expr) => {{
CStr::from_ptr($s).to_string_lossy().to_string()
}};
}
macro_rules! to_doc {
($handle:expr) => {{
let handle = $handle.as_mut();
match handle {
Some(b) => b,
None => return AMresult::err("Invalid AMdoc pointer").into(),
}
}};
}
macro_rules! to_value {
($a:expr,$b:expr) => {{
match import_value($a, $b) {
Ok(v) => v,
Err(r) => return r.into(),
}
}};
}
macro_rules! to_prop {
($key:expr) => {{
// TODO - check null pointer
am::Prop::Map(std::ffi::CStr::from_ptr($key).to_string_lossy().to_string())
}};
}
macro_rules! to_obj {
($handle:expr) => {{
let handle = $handle.as_ref();
match handle {
Some(b) => b,
None => &AMobj(am::ObjId::Root),
}
}};
}
macro_rules! to_result {
($val:expr) => {{
Box::into_raw(Box::new(($val).into()))
}};
}
/// \class AMobj
/// \brief An object's unique identifier.
#[derive(Clone)]
pub struct AMobj(am::ObjId);
/// \memberof AMdoc
/// \brief Allocates a new `AMdoc` struct and initializes it with defaults.
///
/// \return A pointer to an `AMdoc` struct.
/// \warning To avoid a memory leak, the returned pointer must be deallocated
/// with `AMdestroy()`.
#[no_mangle]
pub extern "C" fn AMcreate() -> *mut AMdoc {
AMdoc::create(am::Automerge::new()).into()
}
/// \memberof AMdoc
/// \brief Deallocates the storage for an `AMdoc` struct previously
/// allocated by `AMcreate()` or `AMdup()`.
///
/// \param[in] doc A pointer to an `AMdoc` struct.
/// \pre \p doc must be a valid address.
#[no_mangle]
pub unsafe extern "C" fn AMdestroy(doc: *mut AMdoc) {
if !doc.is_null() {
let doc: AMdoc = *Box::from_raw(doc);
drop(doc)
}
}
/// \memberof AMdoc
/// \brief Allocates storage for an `AMdoc` struct and initializes it by
/// duplicating the `AMdoc` struct pointed to by \p doc.
///
/// \param[in] doc A pointer to an `AMdoc` struct.
/// \return A pointer to an `AMdoc` struct.
/// \pre \p doc must be a valid address.
/// \warning To avoid a memory leak, the returned pointer must be deallocated
/// with `AMdestroy()`.
#[no_mangle]
pub unsafe extern "C" fn AMdup(doc: *mut AMdoc) -> *mut AMdoc {
let doc = *Box::from_raw(doc);
let copy = doc.clone();
std::mem::forget(doc);
copy.into()
}
/// \memberof AMdoc
/// \brief Sets a configuration property of an `AMdoc` struct.
///
/// \param[in] doc A pointer to an `AMdoc` struct.
/// \param[in] key A configuration property's key string.
/// \param[in] value A configuration property's string value or `NULL`.
/// \return A pointer to an `AMresult` struct containing no value.
/// \pre \p doc must be a valid address.
/// \pre \p key must be a valid address.
/// \warning To avoid a memory leak, the returned pointer must be deallocated
/// with `AMclear()`.
#[no_mangle]
pub unsafe extern "C" fn AMconfig(
doc: *mut AMdoc,
key: *const c_char,
value: *const c_char,
) -> *mut AMresult {
let doc = to_doc!(doc);
let key = to_str!(key);
match key.as_str() {
"actor" => {
let actor = to_str!(value);
if let Ok(actor) = actor.try_into() {
doc.set_actor(actor);
AMresult::Ok.into()
} else {
AMresult::err(&format!("Invalid actor '{}'", to_str!(value))).into()
}
}
k => AMresult::err(&format!("Invalid config key '{}'", k)).into(),
}
}
/// \memberof AMdoc
/// \brief Get an `AMdoc` struct's actor ID value as a hexadecimal string.
///
/// \param[in] doc A pointer to an `AMdoc` struct.
/// \return A pointer to an `AMresult` struct containing a string value.
/// \pre \p doc must be a valid address.
/// \warning To avoid a memory leak, the returned pointer must be deallocated
/// with `AMclear()`.
#[no_mangle]
pub unsafe extern "C" fn AMgetActor(_doc: *mut AMdoc) -> *mut AMresult {
unimplemented!()
}
/// \memberof AMresult
/// \brief Get the status code of an `AMresult` struct.
///
/// \param[in] result A pointer to an `AMresult` struct or `NULL`.
/// \return An `AmStatus` enum tag.
#[no_mangle]
pub unsafe extern "C" fn AMresultStatus(result: *mut AMresult) -> AmStatus {
match result.as_mut() {
Some(AMresult::Ok) => AmStatus::CommandOk,
Some(AMresult::Error(_)) => AmStatus::Error,
Some(AMresult::ObjId(_)) => AmStatus::ObjOk,
Some(AMresult::Values(_)) => AmStatus::ValuesOk,
Some(AMresult::Changes(_)) => AmStatus::ChangesOk,
None => AmStatus::InvalidResult,
}
}
/// \memberof AMdoc
/// \brief Set a map object's value.
///
/// \param[in] doc A pointer to an `AMdoc` struct.
/// \param[in] obj A pointer to an `AMobj` struct.
/// \param[in] key A map object's key string.
/// \param[in] data_type An `AmDataType` enum tag matching the actual type that
/// \p value points to.
/// \param[in] value A pointer to the value at \p key or `NULL`.
/// \return A pointer to an `AMresult` struct containing no value.
/// \pre \p doc must be a valid address.
/// \pre \p obj must be a valid address.
/// \pre \p key must be a valid address.
/// \warning To avoid a memory leak, the returned pointer must be deallocated
/// with `AMclear()`.
#[no_mangle]
pub unsafe extern "C" fn AMmapSet(
doc: *mut AMdoc,
obj: *mut AMobj,
key: *const c_char,
data_type: AmDataType,
value: *const c_void,
) -> *mut AMresult {
let doc = to_doc!(doc);
to_result!(doc.set(to_obj!(obj), to_prop!(key), to_value!(value, data_type)))
}
/// \memberof AMdoc
/// \brief Set a list object's value.
///
/// \param[in] doc A pointer to an `AMdoc` struct.
/// \param[in] obj A pointer to an `AMobj` struct.
/// \param[in] index A list object's index number.
/// \param[in] data_type An `AmDataType` enum tag matching the actual type that
/// \p value points to.
/// \param[in] value A pointer to the value at \p index or `NULL`.
/// \return A pointer to an `AMresult` struct containing no value.
/// \pre \p doc must be a valid address.
/// \pre \p obj must be a valid address.
/// \warning To avoid a memory leak, the returned pointer must be deallocated
/// with `AMclear()`.
#[no_mangle]
pub unsafe extern "C" fn AMlistSet(
doc: *mut AMdoc,
obj: *mut AMobj,
index: usize,
data_type: AmDataType,
value: *const c_void,
) -> *mut AMresult {
let doc = to_doc!(doc);
to_result!(doc.set(to_obj!(obj), index, to_value!(value,data_type)))
}
/// \memberof AMresult
/// \brief Get an `AMresult` struct's `AMobj` struct value.
///
/// \param[in] result A pointer to an `AMresult` struct.
/// \return A pointer to an `AMobj` struct.
/// \pre \p result must be a valid address.
#[no_mangle]
pub unsafe extern "C" fn AMgetObj(_result: *mut AMresult) -> *mut AMobj {
unimplemented!()
}
/// \memberof AMresult
/// \brief Deallocates the storage for an `AMresult` struct.
///
/// \param[in] result A pointer to an `AMresult` struct.
/// \pre \p result must be a valid address.
#[no_mangle]
pub unsafe extern "C" fn AMclear(result: *mut AMresult) {
if !result.is_null() {
let result: AMresult = *Box::from_raw(result);
drop(result)
}
}
/// \memberof AMresult
/// \brief Get an `AMresult` struct's error message string.
///
/// \param[in] result A pointer to an `AMresult` struct.
/// \return A string value or `NULL`.
/// \pre \p result must be a valid address.
#[no_mangle]
pub unsafe extern "C" fn AMerrorMessage(result: *mut AMresult) -> *const c_char {
match result.as_mut() {
Some(AMresult::Error(s)) => s.as_ptr(),
_ => 0 as *const c_char,
}
}

28
automerge-c/src/result.rs Normal file

@ -0,0 +1,28 @@
use std::ffi::CString;
use automerge as am;
/// \class AMresult
/// \brief A container of result codes, messages and values.
pub enum AMresult {
Ok,
ObjId(am::ObjId),
Values(Vec<am::Value>),
Changes(Vec<am::Change>),
Error(CString),
}
impl AMresult {
pub (crate) fn err(s: &str) -> Self {
AMresult::Error(CString::new(s).unwrap())
}
}
impl From<Result<Option<am::ObjId>, am::AutomergeError>> for AMresult {
fn from(maybe: Result<Option<am::ObjId>, am::AutomergeError>) -> Self {
match maybe {
Ok(None) => AMresult::Ok,
Ok(Some(obj)) => AMresult::ObjId(obj),
Err(e) => AMresult::Error(CString::new(e.to_string()).unwrap()),
}
}
}

98
automerge-c/src/utils.rs Normal file

@ -0,0 +1,98 @@
use crate::{AMobj, AMresult, AmDataType};
use automerge as am;
use libc::{c_double, c_long, c_ulong};
use std::{
ffi::{c_void, CStr},
ops::Deref,
os::raw::c_char,
};
impl Deref for AMobj {
type Target = am::ObjId;
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
impl From<*const AMobj> for AMobj {
fn from(obj: *const AMobj) -> Self {
unsafe { obj.as_ref().cloned().unwrap_or(AMobj(am::ROOT)) }
}
}
impl From<AMresult> for *mut AMresult {
fn from(b: AMresult) -> Self {
Box::into_raw(Box::new(b))
}
}
impl From<&am::Value> for AmDataType {
fn from(v: &am::Value) -> Self {
match v {
am::Value::Scalar(am::ScalarValue::Str(_)) => AmDataType::Str,
am::Value::Scalar(am::ScalarValue::Int(_)) => AmDataType::Int,
am::Value::Scalar(am::ScalarValue::Uint(_)) => AmDataType::Uint,
am::Value::Scalar(am::ScalarValue::F64(_)) => AmDataType::F64,
am::Value::Scalar(am::ScalarValue::Boolean(_)) => AmDataType::Boolean,
am::Value::Scalar(am::ScalarValue::Bytes(_)) => AmDataType::Bytes,
am::Value::Scalar(am::ScalarValue::Counter(_)) => AmDataType::Counter,
am::Value::Scalar(am::ScalarValue::Timestamp(_)) => AmDataType::Timestamp,
am::Value::Scalar(am::ScalarValue::Null) => AmDataType::Null,
am::Value::Object(am::ObjType::Map) => AmDataType::Map,
am::Value::Object(am::ObjType::List) => AmDataType::List,
am::Value::Object(am::ObjType::Table) => AmDataType::Table,
am::Value::Object(am::ObjType::Text) => AmDataType::Text,
}
}
}
pub(crate) fn import_value(
value: *const c_void,
data_type: AmDataType,
) -> Result<am::Value, AMresult> {
unsafe {
match data_type {
AmDataType::Str => {
let value: *const c_char = value.cast();
if !value.is_null() {
Some(CStr::from_ptr(value).to_string_lossy().to_string().into())
} else {
None
}
}
AmDataType::Boolean => value
.cast::<*const c_char>()
.as_ref()
.map(|v| am::Value::boolean(**v != 0)),
AmDataType::Int => value
.cast::<*const c_long>()
.as_ref()
.map(|v| am::Value::int(**v)),
AmDataType::Uint => value
.cast::<*const c_ulong>()
.as_ref()
.map(|v| am::Value::uint(**v)),
AmDataType::F64 => value
.cast::<*const c_double>()
.as_ref()
.map(|v| am::Value::f64(**v)),
AmDataType::Timestamp => value
.cast::<*const c_long>()
.as_ref()
.map(|v| am::Value::timestamp(**v)),
AmDataType::Counter => value
.cast::<*const c_long>()
.as_ref()
.map(|v| am::Value::counter(**v)),
AmDataType::Null => Some(am::Value::null()),
AmDataType::Map => Some(am::Value::map()),
AmDataType::List => Some(am::Value::list()),
AmDataType::Text => Some(am::Value::text()),
AmDataType::Table => Some(am::Value::table()),
_ => return Err(AMresult::err("Invalid data type")),
}
.ok_or_else(|| AMresult::err("Null value"))
}
}


@ -8,7 +8,10 @@ let { Int, Uint, Float64 } = require("./numbers")
let { STATE, HEADS, OBJECT_ID, READ_ONLY, FROZEN } = require("./constants")
function init(actor) {
const state = AutomergeWASM.init(actor)
if (typeof actor != 'string') {
actor = null
}
const state = AutomergeWASM.create(actor)
return rootProxy(state, true);
}
@ -43,7 +46,6 @@ function change(doc, options, callback) {
throw new RangeError("Attempting to use an outdated Automerge document")
}
if (!!doc[HEADS] === true) {
console.log("HEADS", doc[HEADS])
throw new RangeError("Attempting to change an out of date document");
}
if (doc[READ_ONLY] === false) {
@ -97,7 +99,7 @@ function emptyChange(doc, options) {
}
function load(data, actor) {
const state = AutomergeWASM.load(data, actor)
const state = AutomergeWASM.loadDoc(data, actor)
return rootProxy(state, true);
}
@ -135,13 +137,13 @@ function conflictAt(context, objectId, prop) {
const value = conflict[1]
switch (datatype) {
case "map":
result[value] = mapProxy(context, value, [ prop ], true, true)
result[value] = mapProxy(context, value, [ prop ], true)
break;
case "list":
result[value] = listProxy(context, value, [ prop ], true, true)
result[value] = listProxy(context, value, [ prop ], true)
break;
case "text":
result[value] = textProxy(context, value, [ prop ], true, true)
result[value] = textProxy(context, value, [ prop ], true)
break;
//case "table":
//case "cursor":
@ -175,7 +177,11 @@ function getConflicts(doc, prop) {
function getLastLocalChange(doc) {
const state = doc[STATE]
return state.getLastLocalChange()
try {
return state.getLastLocalChange()
} catch (e) {
return
}
}
function getObjectId(doc) {


@ -386,7 +386,7 @@ function textProxy(context, objectId, path, readonly, heads) {
}
function rootProxy(context, readonly) {
return mapProxy(context, "_root", [], readonly, false)
return mapProxy(context, "_root", [], readonly)
}
function listMethods(target) {


@ -90,7 +90,7 @@ describe('Automerge', () => {
const change1 = Automerge.getLastLocalChange(s1)
s2 = Automerge.change(s1, doc => doc.foo = 'bar')
const change2 = Automerge.getLastLocalChange(s2)
assert.strictEqual(change1, null)
assert.strictEqual(change1, undefined)
const change = decodeChange(change2)
assert.deepStrictEqual(change, {
actor: change.actor, deps: [], seq: 1, startOp: 1,


@ -32,6 +32,7 @@ serde-wasm-bindgen = "0.1.3"
serde_bytes = "0.11.5"
unicode-segmentation = "1.7.1"
hex = "^0.4.3"
regex = "^1.5"
[dependencies.wasm-bindgen]
version = "^0.2"
@ -39,7 +40,7 @@ version = "^0.2"
features = ["serde-serialize", "std"]
[package.metadata.wasm-pack.profile.release]
wasm-opt = false
# wasm-opt = false
[package.metadata.wasm-pack.profile.profiling]
wasm-opt = false


@ -1 +1,693 @@
todo
## Automerge WASM Low Level Interface
This is a low-level Automerge library, written in Rust, that exports a JavaScript API via WASM. This low-level API underpins the `automerge-js` library, which reimplements the Automerge API on top of these functions.
### Static Functions
### Methods
`doc.clone(actor?: string)` : Make a complete copy of the document.
`doc.free()` : deallocate WASM memory associated with a document
#[wasm_bindgen]
pub fn free(self) {}
#[wasm_bindgen(js_name = pendingOps)]
pub fn pending_ops(&self) -> JsValue {
(self.0.pending_ops() as u32).into()
}
pub fn commit(&mut self, message: Option<String>, time: Option<f64>) -> Array {
let heads = self.0.commit(message, time.map(|n| n as i64));
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
heads
}
pub fn rollback(&mut self) -> f64 {
self.0.rollback() as f64
}
pub fn keys(&mut self, obj: String, heads: Option<Array>) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = if let Some(heads) = get_heads(heads) {
self.0.keys_at(&obj, &heads)
} else {
self.0.keys(&obj)
}
.iter()
.map(|s| JsValue::from_str(s))
.collect();
Ok(result)
}
pub fn text(&mut self, obj: String, heads: Option<Array>) -> Result<String, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
self.0.text_at(&obj, &heads)
} else {
self.0.text(&obj)
}
.map_err(to_js_err)
}
pub fn splice(
&mut self,
obj: String,
start: f64,
delete_count: f64,
text: JsValue,
) -> Result<Option<Array>, JsValue> {
let obj = self.import(obj)?;
let start = start as usize;
let delete_count = delete_count as usize;
let mut vals = vec![];
if let Some(t) = text.as_string() {
self.0
.splice_text(&obj, start, delete_count, &t)
.map_err(to_js_err)?;
Ok(None)
} else {
if let Ok(array) = text.dyn_into::<Array>() {
for i in array.iter() {
if let Ok(array) = i.clone().dyn_into::<Array>() {
let value = array.get(1);
let datatype = array.get(2);
let value = self.import_value(value, datatype.as_string())?;
vals.push(value);
} else {
let value = self.import_value(i, None)?;
vals.push(value);
}
}
}
let result = self
.0
.splice(&obj, start, delete_count, vals)
.map_err(to_js_err)?;
if result.is_empty() {
Ok(None)
} else {
let result: Array = result
.iter()
.map(|r| JsValue::from(r.to_string()))
.collect();
Ok(result.into())
}
}
}
pub fn push(
&mut self,
obj: String,
value: JsValue,
datatype: Option<String>,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let value = self.import_value(value, datatype)?;
let index = self.0.length(&obj);
let opid = self.0.insert(&obj, index, value).map_err(to_js_err)?;
Ok(opid.map(|id| id.to_string()))
}
pub fn insert(
&mut self,
obj: String,
index: f64,
value: JsValue,
datatype: Option<String>,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let index = index as f64;
let value = self.import_value(value, datatype)?;
let opid = self
.0
.insert(&obj, index as usize, value)
.map_err(to_js_err)?;
Ok(opid.map(|id| id.to_string()))
}
pub fn set(
&mut self,
obj: String,
prop: JsValue,
value: JsValue,
datatype: Option<String>,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value = self.import_value(value, datatype)?;
let opid = self.0.set(&obj, prop, value).map_err(to_js_err)?;
Ok(opid.map(|id| id.to_string()))
}
pub fn make(
&mut self,
obj: String,
prop: JsValue,
value: JsValue,
) -> Result<String, JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value = self.import_value(value, None)?;
if value.is_object() {
let opid = self.0.set(&obj, prop, value).map_err(to_js_err)?;
Ok(opid.unwrap().to_string())
} else {
Err("invalid object type".into())
}
}
pub fn inc(&mut self, obj: String, prop: JsValue, value: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value: f64 = value
.as_f64()
.ok_or("inc needs a numberic value")
.map_err(to_js_err)?;
self.0.inc(&obj, prop, value as i64).map_err(to_js_err)?;
Ok(())
}
pub fn value(
&mut self,
obj: String,
prop: JsValue,
heads: Option<Array>,
) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(prop);
let heads = get_heads(heads);
if let Ok(prop) = prop {
let value = if let Some(h) = heads {
self.0.value_at(&obj, prop, &h)
} else {
self.0.value(&obj, prop)
}
.map_err(to_js_err)?;
match value {
Some((Value::Object(obj_type), obj_id)) => {
result.push(&obj_type.to_string().into());
result.push(&obj_id.to_string().into());
}
Some((Value::Scalar(value), _)) => {
result.push(&datatype(&value).into());
result.push(&ScalarValue(value).into());
}
None => {}
}
}
Ok(result)
}
pub fn values(
&mut self,
obj: String,
arg: JsValue,
heads: Option<Array>,
) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(arg);
if let Ok(prop) = prop {
let values = if let Some(heads) = get_heads(heads) {
self.0.values_at(&obj, prop, &heads)
} else {
self.0.values(&obj, prop)
}
.map_err(to_js_err)?;
for value in values {
match value {
(Value::Object(obj_type), obj_id) => {
let sub = Array::new();
sub.push(&obj_type.to_string().into());
sub.push(&obj_id.to_string().into());
result.push(&sub.into());
}
(Value::Scalar(value), id) => {
let sub = Array::new();
sub.push(&datatype(&value).into());
sub.push(&ScalarValue(value).into());
sub.push(&id.to_string().into());
result.push(&sub.into());
}
}
}
}
Ok(result)
}
pub fn length(&mut self, obj: String, heads: Option<Array>) -> Result<f64, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
Ok(self.0.length_at(&obj, &heads) as f64)
} else {
Ok(self.0.length(&obj) as f64)
}
}
pub fn del(&mut self, obj: String, prop: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = to_prop(prop)?;
self.0.del(&obj, prop).map_err(to_js_err)?;
Ok(())
}
pub fn mark(
&mut self,
obj: JsValue,
range: JsValue,
name: JsValue,
value: JsValue,
datatype: JsValue,
) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let re = Regex::new(r"([\[\(])(\d+)\.\.(\d+)([\)\]])").unwrap();
let range = range.as_string().ok_or("range must be a string")?;
let cap = re.captures_iter(&range).next().ok_or("range must be in the form of (start..end] or [start..end) etc... () for sticky, [] for normal")?;
let start: usize = cap[2].parse().map_err(|_| to_js_err("invalid start"))?;
let end: usize = cap[3].parse().map_err(|_| to_js_err("invalid end"))?;
let start_sticky = &cap[1] == "(";
let end_sticky = &cap[4] == ")";
let name = name
.as_string()
.ok_or("invalid mark name")
.map_err(to_js_err)?;
let value = self.import_scalar(&value, datatype.as_string())?;
self.0
.mark(&obj, start, start_sticky, end, end_sticky, &name, value)
.map_err(to_js_err)?;
Ok(())
}
pub fn spans(&mut self, obj: JsValue) -> Result<JsValue, JsValue> {
let obj = self.import(obj)?;
let text = self.0.text(&obj).map_err(to_js_err)?;
let spans = self.0.spans(&obj).map_err(to_js_err)?;
let mut last_pos = 0;
let result = Array::new();
for s in spans {
let marks = Array::new();
for m in s.marks {
let mark = Array::new();
mark.push(&m.0.into());
mark.push(&datatype(&m.1).into());
mark.push(&ScalarValue(m.1).into());
marks.push(&mark.into());
}
let text_span = &text[last_pos..s.pos]; //.slice(last_pos, s.pos);
if text_span.len() > 0 {
result.push(&text_span.into());
}
result.push(&marks);
last_pos = s.pos;
//let obj = Object::new().into();
//js_set(&obj, "pos", s.pos as i32)?;
//js_set(&obj, "marks", marks)?;
//result.push(&obj.into());
}
let text_span = &text[last_pos..];
if text_span.len() > 0 {
result.push(&text_span.into());
}
Ok(result.into())
}
pub fn save(&mut self) -> Result<Uint8Array, JsValue> {
self.0
.save()
.map(|v| Uint8Array::from(v.as_slice()))
.map_err(to_js_err)
}
#[wasm_bindgen(js_name = saveIncremental)]
pub fn save_incremental(&mut self) -> Uint8Array {
let bytes = self.0.save_incremental();
Uint8Array::from(bytes.as_slice())
}
#[wasm_bindgen(js_name = loadIncremental)]
pub fn load_incremental(&mut self, data: Uint8Array) -> Result<f64, JsValue> {
let data = data.to_vec();
let len = self.0.load_incremental(&data).map_err(to_js_err)?;
Ok(len as f64)
}
#[wasm_bindgen(js_name = applyChanges)]
pub fn apply_changes(&mut self, changes: JsValue) -> Result<(), JsValue> {
let changes: Vec<_> = JS(changes).try_into()?;
self.0.apply_changes(&changes).map_err(to_js_err)?;
Ok(())
}
#[wasm_bindgen(js_name = getChanges)]
pub fn get_changes(&mut self, have_deps: JsValue) -> Result<Array, JsValue> {
let deps: Vec<_> = JS(have_deps).try_into()?;
let changes = self.0.get_changes(&deps);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
Ok(changes)
}
#[wasm_bindgen(js_name = getChangesAdded)]
pub fn get_changes_added(&mut self, other: &Automerge) -> Result<Array, JsValue> {
let changes = self.0.get_changes_added(&other.0);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
.collect();
Ok(changes)
}
#[wasm_bindgen(js_name = getHeads)]
pub fn get_heads(&mut self) -> Array {
let heads = self.0.get_heads();
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
heads
}
#[wasm_bindgen(js_name = getActorId)]
pub fn get_actor_id(&mut self) -> String {
let actor = self.0.get_actor();
actor.to_string()
}
#[wasm_bindgen(js_name = getLastLocalChange)]
pub fn get_last_local_change(&mut self) -> Result<Option<Uint8Array>, JsValue> {
if let Some(change) = self.0.get_last_local_change() {
Ok(Some(Uint8Array::from(change.raw_bytes())))
} else {
Ok(None)
}
}
pub fn dump(&self) {
self.0.dump()
}
#[wasm_bindgen(js_name = getMissingDeps)]
pub fn get_missing_deps(&mut self, heads: Option<Array>) -> Result<Array, JsValue> {
let heads = get_heads(heads).unwrap_or_default();
let deps = self.0.get_missing_deps(&heads);
let deps: Array = deps
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
Ok(deps)
}
#[wasm_bindgen(js_name = receiveSyncMessage)]
pub fn receive_sync_message(
&mut self,
state: &mut SyncState,
message: Uint8Array,
) -> Result<(), JsValue> {
let message = message.to_vec();
let message = am::SyncMessage::decode(message.as_slice()).map_err(to_js_err)?;
self.0
.receive_sync_message(&mut state.0, message)
.map_err(to_js_err)?;
Ok(())
}
#[wasm_bindgen(js_name = generateSyncMessage)]
pub fn generate_sync_message(&mut self, state: &mut SyncState) -> Result<JsValue, JsValue> {
if let Some(message) = self.0.generate_sync_message(&mut state.0) {
Ok(Uint8Array::from(message.encode().map_err(to_js_err)?.as_slice()).into())
} else {
Ok(JsValue::null())
}
}
#[wasm_bindgen(js_name = toJS)]
pub fn to_js(&self) -> JsValue {
map_to_js(&self.0, &ROOT)
}
fn import(&self, id: String) -> Result<ObjId, JsValue> {
self.0.import(&id).map_err(to_js_err)
}
fn import_prop(&mut self, prop: JsValue) -> Result<Prop, JsValue> {
if let Some(s) = prop.as_string() {
Ok(s.into())
} else if let Some(n) = prop.as_f64() {
Ok((n as usize).into())
} else {
Err(format!("invalid prop {:?}", prop).into())
}
}
fn import_scalar(
&mut self,
value: &JsValue,
datatype: Option<String>,
) -> Result<am::ScalarValue, JsValue> {
match datatype.as_deref() {
Some("boolean") => value
.as_bool()
.ok_or_else(|| "value must be a bool".into())
.map(am::ScalarValue::Boolean),
Some("int") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Int(v as i64)),
Some("uint") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Uint(v as u64)),
Some("f64") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(am::ScalarValue::F64),
Some("bytes") => Ok(am::ScalarValue::Bytes(
value.clone().dyn_into::<Uint8Array>().unwrap().to_vec(),
)),
Some("counter") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::counter(v as i64)),
Some("timestamp") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Timestamp(v as i64)),
/*
Some("bytes") => unimplemented!(),
Some("cursor") => unimplemented!(),
*/
Some("null") => Ok(am::ScalarValue::Null),
Some(_) => Err(format!("unknown datatype {:?}", datatype).into()),
None => {
if value.is_null() {
Ok(am::ScalarValue::Null)
} else if let Some(b) = value.as_bool() {
Ok(am::ScalarValue::Boolean(b))
} else if let Some(s) = value.as_string() {
// FIXME - we need to detect str vs int vs float vs bool here :/
Ok(am::ScalarValue::Str(s.into()))
} else if let Some(n) = value.as_f64() {
if (n.round() - n).abs() < f64::EPSILON {
Ok(am::ScalarValue::Int(n as i64))
} else {
Ok(am::ScalarValue::F64(n))
}
// } else if let Some(o) = to_objtype(&value) {
// Ok(o.into())
} else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
Ok(am::ScalarValue::Timestamp(d.get_time() as i64))
} else if let Ok(o) = &value.clone().dyn_into::<Uint8Array>() {
Ok(am::ScalarValue::Bytes(o.to_vec()))
} else {
Err("value is invalid".into())
}
}
}
}
fn import_value(&mut self, value: JsValue, datatype: Option<String>) -> Result<Value, JsValue> {
match self.import_scalar(&value, datatype) {
Ok(val) => Ok(val.into()),
Err(err) => {
if let Some(o) = to_objtype(&value) {
Ok(o.into())
} else {
Err(err)
}
}
}
/*
match datatype.as_deref() {
Some("boolean") => value
.as_bool()
.ok_or_else(|| "value must be a bool".into())
.map(|v| am::ScalarValue::Boolean(v).into()),
Some("int") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Int(v as i64).into()),
Some("uint") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Uint(v as u64).into()),
Some("f64") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|n| am::ScalarValue::F64(n).into()),
Some("bytes") => {
Ok(am::ScalarValue::Bytes(value.dyn_into::<Uint8Array>().unwrap().to_vec()).into())
}
Some("counter") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::counter(v as i64).into()),
Some("timestamp") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Timestamp(v as i64).into()),
Some("null") => Ok(am::ScalarValue::Null.into()),
Some(_) => Err(format!("unknown datatype {:?}", datatype).into()),
None => {
if value.is_null() {
Ok(am::ScalarValue::Null.into())
} else if let Some(b) = value.as_bool() {
Ok(am::ScalarValue::Boolean(b).into())
} else if let Some(s) = value.as_string() {
// FIXME - we need to detect str vs int vs float vs bool here :/
Ok(am::ScalarValue::Str(s.into()).into())
} else if let Some(n) = value.as_f64() {
if (n.round() - n).abs() < f64::EPSILON {
Ok(am::ScalarValue::Int(n as i64).into())
} else {
Ok(am::ScalarValue::F64(n).into())
}
} else if let Some(o) = to_objtype(&value) {
Ok(o.into())
} else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
Ok(am::ScalarValue::Timestamp(d.get_time() as i64).into())
} else if let Ok(o) = &value.dyn_into::<Uint8Array>() {
Ok(am::ScalarValue::Bytes(o.to_vec()).into())
} else {
Err("value is invalid".into())
}
}
}
*/
}
}
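To show how the scalar-import rules above surface in JS, here is a minimal sketch mirroring the tests later in this compare view; it assumes the nodejs build in ./dev, and the return values in the comments follow the assertions in those tests. An explicit datatype hint wins; otherwise integral numbers become int, fractional numbers f64, Date becomes timestamp, and Uint8Array becomes bytes.
// illustrative only: exercises the datatype hints handled by import_scalar above
import { create } from '../dev/index'
const doc = create()
doc.set("_root", "count", 5)                         // no hint: integral number is stored as "int"
doc.set("_root", "ratio", 1.5)                       // fractional number is stored as "f64"
doc.set("_root", "views", 5, "uint")                 // hint forces an unsigned integer
doc.set("_root", "launched", 1000, "timestamp")      // plain number plus hint becomes a timestamp
doc.set("_root", "clicks", 10, "counter")            // counters must be requested explicitly
doc.set("_root", "blob", new Uint8Array([1, 2, 3]))  // Uint8Array maps to "bytes" without a hint
doc.value("_root", "views")                          // -> ["uint", 5]
doc.value("_root", "clicks")                         // -> ["counter", 10]
doc.free()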
#[wasm_bindgen(js_name = create)]
pub fn init(actor: Option<String>) -> Result<Automerge, JsValue> {
console_error_panic_hook::set_once();
Automerge::new(actor)
}
#[wasm_bindgen(js_name = loadDoc)]
pub fn load(data: Uint8Array, actor: Option<String>) -> Result<Automerge, JsValue> {
let data = data.to_vec();
let mut automerge = am::Automerge::load(&data).map_err(to_js_err)?;
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.set_actor(actor)
}
Ok(Automerge(automerge))
}
#[wasm_bindgen(js_name = encodeChange)]
pub fn encode_change(change: JsValue) -> Result<Uint8Array, JsValue> {
let change: am::ExpandedChange = change.into_serde().map_err(to_js_err)?;
let change: Change = change.into();
Ok(Uint8Array::from(change.raw_bytes()))
}
#[wasm_bindgen(js_name = decodeChange)]
pub fn decode_change(change: Uint8Array) -> Result<JsValue, JsValue> {
let change = Change::from_bytes(change.to_vec()).map_err(to_js_err)?;
let change: am::ExpandedChange = change.decode();
JsValue::from_serde(&change).map_err(to_js_err)
}
#[wasm_bindgen(js_name = initSyncState)]
pub fn init_sync_state() -> SyncState {
SyncState(am::SyncState::new())
}
// needed for compatibility with the automerge-js api
#[wasm_bindgen(js_name = importSyncState)]
pub fn import_sync_state(state: JsValue) -> Result<SyncState, JsValue> {
Ok(SyncState(JS(state).try_into()?))
}
// needed for compatibility with the automerge-js api
#[wasm_bindgen(js_name = exportSyncState)]
pub fn export_sync_state(state: SyncState) -> JsValue {
JS::from(state.0).into()
}
#[wasm_bindgen(js_name = encodeSyncMessage)]
pub fn encode_sync_message(message: JsValue) -> Result<Uint8Array, JsValue> {
let heads = js_get(&message, "heads")?.try_into()?;
let need = js_get(&message, "need")?.try_into()?;
let changes = js_get(&message, "changes")?.try_into()?;
let have = js_get(&message, "have")?.try_into()?;
Ok(Uint8Array::from(
am::SyncMessage {
heads,
need,
have,
changes,
}
.encode()
.unwrap()
.as_slice(),
))
}
#[wasm_bindgen(js_name = decodeSyncMessage)]
pub fn decode_sync_message(msg: Uint8Array) -> Result<JsValue, JsValue> {
let data = msg.to_vec();
let msg = am::SyncMessage::decode(&data).map_err(to_js_err)?;
let heads = AR::from(msg.heads.as_slice());
let need = AR::from(msg.need.as_slice());
let changes = AR::from(msg.changes.as_slice());
let have = AR::from(msg.have.as_slice());
let obj = Object::new().into();
js_set(&obj, "heads", heads)?;
js_set(&obj, "need", need)?;
js_set(&obj, "have", have)?;
js_set(&obj, "changes", changes)?;
Ok(obj)
}
#[wasm_bindgen(js_name = encodeSyncState)]
pub fn encode_sync_state(state: SyncState) -> Result<Uint8Array, JsValue> {
let state = state.0;
Ok(Uint8Array::from(
state.encode().map_err(to_js_err)?.as_slice(),
))
}
#[wasm_bindgen(js_name = decodeSyncState)]
pub fn decode_sync_state(data: Uint8Array) -> Result<SyncState, JsValue> {
SyncState::decode(data)
}
#[wasm_bindgen(js_name = MAP)]
pub struct Map {}
#[wasm_bindgen(js_name = LIST)]
pub struct List {}
#[wasm_bindgen(js_name = TEXT)]
pub struct Text {}
#[wasm_bindgen(js_name = TABLE)]
pub struct Table {}
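For context on the sync exports above, peers drive generateSyncMessage/receiveSyncMessage in a loop until neither side has anything left to send. The following is an illustrative sketch modelled on the sync() helper in the test file later in this compare view; the helper name and iteration cap are assumptions, not part of the bindings.
// illustrative sync loop between two documents, modelled on the tests' sync() helper
import { create, initSyncState, Automerge, SyncState } from '../dev/index'
function syncAll(a: Automerge, b: Automerge, sa: SyncState = initSyncState(), sb: SyncState = initSyncState()) {
  const MAX_ITER = 10
  let aToB = null, bToA = null, i = 0
  do {
    aToB = a.generateSyncMessage(sa)       // returns null once `a` has nothing to request or send
    bToA = b.generateSyncMessage(sb)
    if (aToB) b.receiveSyncMessage(sb, aToB)
    if (bToA) a.receiveSyncMessage(sa, bToA)
    if (++i > MAX_ITER) throw new Error(`sync did not converge after ${MAX_ITER} iterations`)
  } while (aToB || bToA)
}
const n1 = create(), n2 = create()
n1.set("_root", "hello", "world")
n1.commit()
syncAll(n1, n2)                            // afterwards n1.getHeads() equals n2.getHeads()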

automerge-wasm/index.d.ts

@ -0,0 +1,221 @@
export type Actor = string;
export type ObjID = string;
export type Change = Uint8Array;
export type SyncMessage = Uint8Array;
export type Prop = string | number;
export type Hash = string;
export type Heads = Hash[];
export type ObjectType = string; // opaque ??
export type Value = string | number | boolean | null | Date | Uint8Array | ObjectType;
export type FullValue =
["str", string] |
["int", number] |
["uint", number] |
["f64", number] |
["boolean", boolean] |
["timestamp", Date] |
["counter", number] |
["bytes", Uint8Array] |
["null", Uint8Array] |
["map", ObjID] |
["list", ObjID] |
["text", ObjID] |
["table", ObjID]
export const LIST : ObjectType;
export const MAP : ObjectType;
export const TABLE : ObjectType;
export const TEXT : ObjectType;
export enum ObjTypeName {
list = "list",
map = "map",
table = "table",
text = "text",
}
export type Datatype =
"boolean" |
"str" |
"int" |
"uint" |
"f64" |
"null" |
"timestamp" |
"counter" |
"bytes";
export type DecodedSyncMessage = {
heads: Heads,
need: Heads,
have: any[]
changes: Change[]
}
export type DecodedChange = {
actor: Actor,
seq: number
startOp: number,
time: number,
message: string | null,
deps: Heads,
hash: Hash,
ops: Op[]
}
export type Op = {
action: string,
obj: ObjID,
key: string,
value?: string | number | boolean,
datatype?: string,
pred: string[],
}
export function create(actor?: Actor): Automerge;
export function loadDoc(data: Uint8Array, actor?: Actor): Automerge;
export function encodeChange(change: DecodedChange): Change;
export function decodeChange(change: Change): DecodedChange;
export function initSyncState(): SyncState;
export function encodeSyncMessage(message: DecodedSyncMessage): SyncMessage;
export function decodeSyncMessage(msg: SyncMessage): DecodedSyncMessage;
export function encodeSyncState(state: SyncState): Uint8Array;
export function decodeSyncState(data: Uint8Array): SyncState;
export class Automerge {
// change state
set(obj: ObjID, prop: Prop, value: Value, datatype?: Datatype): ObjID | undefined;
make(obj: ObjID, prop: Prop, value: ObjectType): ObjID;
insert(obj: ObjID, index: number, value: Value, datatype?: Datatype): ObjID | undefined;
push(obj: ObjID, value: Value, datatype?: Datatype): ObjID | undefined;
splice(obj: ObjID, start: number, delete_count: number, text?: string | Array<Value | FullValue>): ObjID[] | undefined;
inc(obj: ObjID, prop: Prop, value: number): void;
del(obj: ObjID, prop: Prop): void;
// returns a single value - if there is a conflict return the winner
value(obj: ObjID, prop: any, heads?: Heads): FullValue | undefined;
// return all values in case of a conflict
values(obj: ObjID, arg: any, heads?: Heads): FullValue[];
keys(obj: ObjID, heads?: Heads): string[];
text(obj: ObjID, heads?: Heads): string;
length(obj: ObjID, heads?: Heads): number;
// transactions
commit(message?: string, time?: number): Heads;
merge(other: Automerge): Heads;
getActorId(): Actor;
pendingOps(): number;
rollback(): number;
// save and load to local store
save(): Uint8Array;
saveIncremental(): Uint8Array;
loadIncremental(data: Uint8Array): number;
// sync over network
receiveSyncMessage(state: SyncState, message: SyncMessage): void;
generateSyncMessage(state: SyncState): SyncMessage;
// low level change functions
applyChanges(changes: Change[]): void;
getChanges(have_deps: Heads): Change[];
getChangesAdded(other: Automerge): Change[];
getHeads(): Heads;
getLastLocalChange(): Change;
getMissingDeps(heads?: Heads): Heads;
// memory management
free(): void;
clone(actor?: string): Automerge;
fork(actor?: string): Automerge;
// dump internal state to console.log
dump(): void;
// dump internal state to a JS object
toJS(): any;
}
export class SyncState {
free(): void;
clone(): SyncState;
lastSentHeads: any;
sentHashes: any;
readonly sharedHeads: any;
}
export type InitInput = RequestInfo | URL | Response | BufferSource | WebAssembly.Module;
export interface InitOutput {
readonly memory: WebAssembly.Memory;
readonly __wbg_automerge_free: (a: number) => void;
readonly automerge_new: (a: number, b: number, c: number) => void;
readonly automerge_clone: (a: number, b: number, c: number, d: number) => void;
readonly automerge_free: (a: number) => void;
readonly automerge_pendingOps: (a: number) => number;
readonly automerge_commit: (a: number, b: number, c: number, d: number, e: number) => number;
readonly automerge_rollback: (a: number) => number;
readonly automerge_keys: (a: number, b: number, c: number, d: number, e: number) => void;
readonly automerge_text: (a: number, b: number, c: number, d: number, e: number) => void;
readonly automerge_splice: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
readonly automerge_push: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
readonly automerge_insert: (a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number) => void;
readonly automerge_set: (a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number) => void;
readonly automerge_inc: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
readonly automerge_value: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
readonly automerge_values: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
readonly automerge_length: (a: number, b: number, c: number, d: number, e: number) => void;
readonly automerge_del: (a: number, b: number, c: number, d: number, e: number) => void;
readonly automerge_save: (a: number, b: number) => void;
readonly automerge_saveIncremental: (a: number) => number;
readonly automerge_loadIncremental: (a: number, b: number, c: number) => void;
readonly automerge_applyChanges: (a: number, b: number, c: number) => void;
readonly automerge_getChanges: (a: number, b: number, c: number) => void;
readonly automerge_getChangesAdded: (a: number, b: number, c: number) => void;
readonly automerge_getHeads: (a: number) => number;
readonly automerge_getActorId: (a: number, b: number) => void;
readonly automerge_getLastLocalChange: (a: number, b: number) => void;
readonly automerge_dump: (a: number) => void;
readonly automerge_getMissingDeps: (a: number, b: number, c: number) => void;
readonly automerge_receiveSyncMessage: (a: number, b: number, c: number, d: number) => void;
readonly automerge_generateSyncMessage: (a: number, b: number, c: number) => void;
readonly automerge_toJS: (a: number) => number;
readonly create: (a: number, b: number, c: number) => void;
readonly loadDoc: (a: number, b: number, c: number, d: number) => void;
readonly encodeChange: (a: number, b: number) => void;
readonly decodeChange: (a: number, b: number) => void;
readonly initSyncState: () => number;
readonly importSyncState: (a: number, b: number) => void;
readonly exportSyncState: (a: number) => number;
readonly encodeSyncMessage: (a: number, b: number) => void;
readonly decodeSyncMessage: (a: number, b: number) => void;
readonly encodeSyncState: (a: number, b: number) => void;
readonly decodeSyncState: (a: number, b: number) => void;
readonly __wbg_list_free: (a: number) => void;
readonly __wbg_map_free: (a: number) => void;
readonly __wbg_text_free: (a: number) => void;
readonly __wbg_table_free: (a: number) => void;
readonly __wbg_syncstate_free: (a: number) => void;
readonly syncstate_sharedHeads: (a: number) => number;
readonly syncstate_lastSentHeads: (a: number) => number;
readonly syncstate_set_lastSentHeads: (a: number, b: number, c: number) => void;
readonly syncstate_set_sentHashes: (a: number, b: number, c: number) => void;
readonly syncstate_clone: (a: number) => number;
readonly __wbindgen_malloc: (a: number) => number;
readonly __wbindgen_realloc: (a: number, b: number, c: number) => number;
readonly __wbindgen_add_to_stack_pointer: (a: number) => number;
readonly __wbindgen_free: (a: number, b: number) => void;
readonly __wbindgen_exn_store: (a: number) => void;
}
/**
* If `module_or_path` is {RequestInfo} or {URL}, makes a request and
* for everything else, calls `WebAssembly.instantiate` directly.
*
* @param {InitInput | Promise<InitInput>} module_or_path
*
* @returns {Promise<InitOutput>}
*/
export default function init (module_or_path?: InitInput | Promise<InitInput>): Promise<InitOutput>;
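A hedged usage sketch of the declarations above, mirroring the tests later in this compare view and assuming the nodejs build in ./dev: passing one of the MAP/LIST/TEXT/TABLE markers to set() creates an object and returns its ObjID, text objects are edited with splice(), and object ids remain valid after save()/loadDoc().
// illustrative usage of the object and text APIs declared above
import { create, loadDoc, MAP, TEXT } from '../dev/index'
const doc = create("aabbcc")
const submap = doc.set("_root", "config", MAP)       // creating an object returns its ObjID
if (!submap) throw new Error('expected an ObjID')
doc.set(submap, "enabled", true)
const text = doc.set("_root", "text", TEXT)
if (!text) throw new Error('expected an ObjID')
doc.splice(text, 0, 0, "hello ")                     // splice accepts a plain string...
doc.splice(text, 6, 0, ["w", "o", "r", "l", "d"])    // ...or an array of values
doc.text(text)                                       // -> "hello world"
const saved = doc.save()                             // full binary snapshot
const copy = loadDoc(saved, "ddeeff")                // reload under a different (hex) actor id
copy.value("_root", "text")                          // -> ["text", text]  (object ids survive save/load)
doc.free(); copy.free()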


@ -6,7 +6,7 @@
],
"name": "automerge-wasm",
"description": "wasm-bindgen bindings to the automerge rust implementation",
"version": "0.1.0",
"version": "0.0.1",
"license": "MIT",
"files": [
"README.md",
@ -15,19 +15,27 @@
"automerge_wasm_bg.wasm",
"automerge_wasm.js"
],
"module": "./pkg/index.js",
"main": "./dev/index.js",
"scripts": {
"build": "rimraf ./dev && wasm-pack build --target nodejs --dev --out-name index -d dev",
"release": "rimraf ./dev && wasm-pack build --target nodejs --release --out-name index -d dev && yarn opt",
"build": "rimraf ./dev && wasm-pack build --target nodejs --dev --out-name index -d dev && cp index.d.ts dev",
"release": "rimraf ./dev && wasm-pack build --target nodejs --release --out-name index -d dev && yarn opt && cp index.d.ts dev",
"pkg": "rimraf ./pkg && wasm-pack build --target web --release --out-name index -d pkg && cp index.d.ts pkg && cd pkg && yarn pack && mv automerge-wasm*tgz ..",
"prof": "rimraf ./dev && wasm-pack build --target nodejs --profiling --out-name index -d dev",
"opt": "wasm-opt -Oz dev/index_bg.wasm -o tmp.wasm && mv tmp.wasm dev/index_bg.wasm",
"test": "yarn build && mocha --bail --full-trace"
"test": "yarn build && ts-mocha -p tsconfig.json --type-check --bail --full-trace test/*.ts"
},
"dependencies": {},
"devDependencies": {
"@types/expect": "^24.3.0",
"@types/jest": "^27.4.0",
"@types/mocha": "^9.1.0",
"@types/node": "^17.0.13",
"fast-sha256": "^1.3.0",
"mocha": "^9.1.3",
"pako": "^2.0.4",
"fast-sha256": "^1.3.0",
"rimraf": "^3.0.2"
"rimraf": "^3.0.2",
"ts-mocha": "^9.0.2",
"typescript": "^4.5.5"
}
}


@ -243,15 +243,14 @@ pub(crate) fn js_get<J: Into<JsValue>>(obj: J, prop: &str) -> Result<JS, JsValue
Ok(JS(Reflect::get(&obj.into(), &prop.into())?))
}
pub(crate) fn js_set<V: Into<JsValue>>(obj: &JsValue, prop: &str, val: V) -> Result<bool, JsValue> {
Reflect::set(obj, &prop.into(), &val.into())
pub(crate) fn stringify(val: &JsValue) -> String {
js_sys::JSON::stringify(val)
.map(|j| j.into())
.unwrap_or_else(|_| "JSON::stringify_error".into())
}
pub(crate) fn to_usize(val: JsValue, name: &str) -> Result<usize, JsValue> {
match val.as_f64() {
Some(n) => Ok(n as usize),
None => Err(format!("{} must be a number", name).into()),
}
pub(crate) fn js_set<V: Into<JsValue>>(obj: &JsValue, prop: &str, val: V) -> Result<bool, JsValue> {
Reflect::set(obj, &prop.into(), &val.into())
}
pub(crate) fn to_prop(p: JsValue) -> Result<Prop, JsValue> {
@ -260,7 +259,7 @@ pub(crate) fn to_prop(p: JsValue) -> Result<Prop, JsValue> {
} else if let Some(n) = p.as_f64() {
Ok(Prop::Seq(n as usize))
} else {
Err("prop must me a string or number".into())
Err(to_js_err("prop must me a string or number"))
}
}
@ -283,8 +282,10 @@ pub(crate) fn to_objtype(a: &JsValue) -> Option<am::ObjType> {
}
}
pub(crate) fn get_heads(heads: JsValue) -> Option<Vec<ChangeHash>> {
JS(heads).into()
pub(crate) fn get_heads(heads: Option<Array>) -> Option<Vec<ChangeHash>> {
let heads = heads?;
let heads: Result<Vec<ChangeHash>, _> = heads.iter().map(|j| j.into_serde()).collect();
heads.ok()
}
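get_heads now takes an optional JS array of hash strings rather than an arbitrary JsValue; this is what backs the optional heads argument on the read methods (keys, text, value, values, length). Below is a sketch of the resulting historical-read pattern from the JS side, assuming the same bindings exercised by the tests in this compare view.
// illustrative: reading an earlier version of a text object via heads returned by commit()
import { create, TEXT } from '../dev/index'
const doc = create()
const text = doc.set("_root", "text", TEXT)
if (!text) throw new Error('expected an ObjID')
doc.splice(text, 0, 0, "hello world")
const heads1 = doc.commit()            // hash strings naming this version
doc.splice(text, 6, 0, "big bad ")
doc.commit()
doc.text(text)                         // -> "hello big bad world"
doc.text(text, heads1)                 // -> "hello world"   (state as of heads1)
doc.length(text, heads1)               // -> 11
doc.free()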
pub(crate) fn map_to_js(doc: &am::Automerge, obj: &ObjId) -> JsValue {


@ -11,7 +11,7 @@ mod sync;
mod value;
use interop::{
get_heads, js_get, js_set, map_to_js, to_js_err, to_objtype, to_prop, to_usize, AR, JS,
get_heads, js_get, js_set, map_to_js, stringify, to_js_err, to_objtype, to_prop, AR, JS,
};
use sync::SyncState;
use value::{datatype, ScalarValue};
@ -33,9 +33,9 @@ pub struct Automerge(automerge::Automerge);
#[wasm_bindgen]
impl Automerge {
pub fn new(actor: JsValue) -> Result<Automerge, JsValue> {
pub fn new(actor: Option<String>) -> Result<Automerge, JsValue> {
let mut automerge = automerge::Automerge::new();
if let Some(a) = actor.as_string() {
if let Some(a) = actor {
let a = automerge::ActorId::from(hex::decode(a).map_err(to_js_err)?.to_vec());
automerge.set_actor(a);
}
@ -43,9 +43,22 @@ impl Automerge {
}
#[allow(clippy::should_implement_trait)]
pub fn clone(&self, actor: JsValue) -> Result<Automerge, JsValue> {
pub fn clone(&mut self, actor: Option<String>) -> Result<Automerge, JsValue> {
if self.0.pending_ops() > 0 {
self.0.commit(None, None);
}
let mut automerge = Automerge(self.0.clone());
if let Some(s) = actor.as_string() {
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.0.set_actor(actor)
}
Ok(automerge)
}
#[allow(clippy::should_implement_trait)]
pub fn fork(&mut self, actor: Option<String>) -> Result<Automerge, JsValue> {
let mut automerge = Automerge(self.0.fork());
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.0.set_actor(actor)
}
@ -59,10 +72,8 @@ impl Automerge {
(self.0.pending_ops() as u32).into()
}
pub fn commit(&mut self, message: JsValue, time: JsValue) -> Array {
let message = message.as_string();
let time = time.as_f64().map(|v| v as i64);
let heads = self.0.commit(message, time);
pub fn commit(&mut self, message: Option<String>, time: Option<f64>) -> Array {
let heads = self.0.commit(message, time.map(|n| n as i64));
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
@ -70,11 +81,20 @@ impl Automerge {
heads
}
pub fn rollback(&mut self) -> JsValue {
self.0.rollback().into()
pub fn merge(&mut self, other: &mut Automerge) -> Result<Array, JsError> {
let heads = self.0.merge(&mut other.0)?;
let heads: Array = heads
.iter()
.map(|h| JsValue::from_str(&hex::encode(&h.0)))
.collect();
Ok(heads)
}
pub fn keys(&mut self, obj: JsValue, heads: JsValue) -> Result<Array, JsValue> {
pub fn rollback(&mut self) -> f64 {
self.0.rollback() as f64
}
pub fn keys(&mut self, obj: String, heads: Option<Array>) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = if let Some(heads) = get_heads(heads) {
self.0.keys_at(&obj, &heads)
@ -87,33 +107,29 @@ impl Automerge {
Ok(result)
}
pub fn text(&mut self, obj: JsValue, heads: JsValue) -> Result<JsValue, JsValue> {
pub fn text(&mut self, obj: String, heads: Option<Array>) -> Result<String, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
self.0.text_at(&obj, &heads)
Ok(self.0.text_at(&obj, &heads)?)
} else {
self.0.text(&obj)
Ok(self.0.text(&obj)?)
}
.map_err(to_js_err)
.map(|t| t.into())
}
pub fn splice(
&mut self,
obj: JsValue,
start: JsValue,
delete_count: JsValue,
obj: String,
start: f64,
delete_count: f64,
text: JsValue,
) -> Result<JsValue, JsValue> {
) -> Result<Option<Array>, JsValue> {
let obj = self.import(obj)?;
let start = to_usize(start, "start")?;
let delete_count = to_usize(delete_count, "deleteCount")?;
let start = start as usize;
let delete_count = delete_count as usize;
let mut vals = vec![];
if let Some(t) = text.as_string() {
self.0
.splice_text(&obj, start, delete_count, &t)
.map_err(to_js_err)?;
Ok(JsValue::null())
self.0.splice_text(&obj, start, delete_count, &t)?;
Ok(None)
} else {
if let Ok(array) = text.dyn_into::<Array>() {
for i in array.iter() {
@ -128,12 +144,9 @@ impl Automerge {
}
}
}
let result = self
.0
.splice(&obj, start, delete_count, vals)
.map_err(to_js_err)?;
let result = self.0.splice(&obj, start, delete_count, vals)?;
if result.is_empty() {
Ok(JsValue::null())
Ok(None)
} else {
let result: Array = result
.iter()
@ -146,100 +159,107 @@ impl Automerge {
pub fn push(
&mut self,
obj: JsValue,
obj: String,
value: JsValue,
datatype: JsValue,
) -> Result<JsValue, JsValue> {
datatype: Option<String>,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let value = self.import_value(value, datatype.as_string())?;
let value = self.import_value(value, datatype)?;
let index = self.0.length(&obj);
let opid = self.0.insert(&obj, index, value).map_err(to_js_err)?;
match opid {
Some(opid) => Ok(self.export(opid)),
None => Ok(JsValue::null()),
}
let opid = self.0.insert(&obj, index, value)?;
Ok(opid.map(|id| id.to_string()))
}
pub fn insert(
&mut self,
obj: JsValue,
index: JsValue,
obj: String,
index: f64,
value: JsValue,
datatype: JsValue,
) -> Result<JsValue, JsValue> {
datatype: Option<String>,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
//let key = self.insert_pos_for_index(&obj, prop)?;
let index: Result<_, JsValue> = index
.as_f64()
.ok_or_else(|| "insert index must be a number".into());
let index = index?;
let value = self.import_value(value, datatype.as_string())?;
let opid = self
.0
.insert(&obj, index as usize, value)
.map_err(to_js_err)?;
match opid {
Some(opid) => Ok(self.export(opid)),
None => Ok(JsValue::null()),
}
let index = index as f64;
let value = self.import_value(value, datatype)?;
let opid = self.0.insert(&obj, index as usize, value)?;
Ok(opid.map(|id| id.to_string()))
}
pub fn set(
&mut self,
obj: JsValue,
obj: String,
prop: JsValue,
value: JsValue,
datatype: JsValue,
) -> Result<JsValue, JsValue> {
datatype: Option<String>,
) -> Result<Option<String>, JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value = self.import_value(value, datatype.as_string())?;
let opid = self.0.set(&obj, prop, value).map_err(to_js_err)?;
match opid {
Some(opid) => Ok(self.export(opid)),
None => Ok(JsValue::null()),
let value = self.import_value(value, datatype)?;
let opid = self.0.set(&obj, prop, value)?;
Ok(opid.map(|id| id.to_string()))
}
pub fn make(&mut self, obj: String, prop: JsValue, value: JsValue) -> Result<String, JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value = self.import_value(value, None)?;
if value.is_object() {
let opid = self.0.set(&obj, prop, value)?;
Ok(opid.unwrap().to_string())
} else {
Err(to_js_err("invalid object type"))
}
}
pub fn inc(&mut self, obj: JsValue, prop: JsValue, value: JsValue) -> Result<(), JsValue> {
pub fn inc(&mut self, obj: String, prop: JsValue, value: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = self.import_prop(prop)?;
let value: f64 = value
.as_f64()
.ok_or("inc needs a numberic value")
.map_err(to_js_err)?;
self.0.inc(&obj, prop, value as i64).map_err(to_js_err)?;
.ok_or_else(|| to_js_err("inc needs a numeric value"))?;
self.0.inc(&obj, prop, value as i64)?;
Ok(())
}
pub fn value(&mut self, obj: JsValue, prop: JsValue, heads: JsValue) -> Result<Array, JsValue> {
pub fn value(
&mut self,
obj: String,
prop: JsValue,
heads: Option<Array>,
) -> Result<Option<Array>, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(prop);
let heads = get_heads(heads);
if let Ok(prop) = prop {
let value = if let Some(h) = heads {
self.0.value_at(&obj, prop, &h)
self.0.value_at(&obj, prop, &h)?
} else {
self.0.value(&obj, prop)
}
.map_err(to_js_err)?;
self.0.value(&obj, prop)?
};
match value {
Some((Value::Object(obj_type), obj_id)) => {
result.push(&obj_type.to_string().into());
result.push(&self.export(obj_id));
result.push(&obj_id.to_string().into());
Ok(Some(result))
}
Some((Value::Scalar(value), _)) => {
result.push(&datatype(&value).into());
result.push(&ScalarValue(value).into());
Ok(Some(result))
}
None => {}
None => Ok(None),
}
} else {
Ok(None)
}
Ok(result)
}
pub fn values(&mut self, obj: JsValue, arg: JsValue, heads: JsValue) -> Result<Array, JsValue> {
pub fn values(
&mut self,
obj: String,
arg: JsValue,
heads: Option<Array>,
) -> Result<Array, JsValue> {
let obj = self.import(obj)?;
let result = Array::new();
let prop = to_prop(arg);
@ -255,14 +275,14 @@ impl Automerge {
(Value::Object(obj_type), obj_id) => {
let sub = Array::new();
sub.push(&obj_type.to_string().into());
sub.push(&self.export(obj_id));
sub.push(&obj_id.to_string().into());
result.push(&sub.into());
}
(Value::Scalar(value), id) => {
let sub = Array::new();
sub.push(&datatype(&value).into());
sub.push(&ScalarValue(value).into());
sub.push(&self.export(id));
sub.push(&id.to_string().into());
result.push(&sub.into());
}
}
@ -271,16 +291,16 @@ impl Automerge {
Ok(result)
}
pub fn length(&mut self, obj: JsValue, heads: JsValue) -> Result<JsValue, JsValue> {
pub fn length(&mut self, obj: String, heads: Option<Array>) -> Result<f64, JsValue> {
let obj = self.import(obj)?;
if let Some(heads) = get_heads(heads) {
Ok((self.0.length_at(&obj, &heads) as f64).into())
Ok(self.0.length_at(&obj, &heads) as f64)
} else {
Ok((self.0.length(&obj) as f64).into())
Ok(self.0.length(&obj) as f64)
}
}
pub fn del(&mut self, obj: JsValue, prop: JsValue) -> Result<(), JsValue> {
pub fn del(&mut self, obj: String, prop: JsValue) -> Result<(), JsValue> {
let obj = self.import(obj)?;
let prop = to_prop(prop)?;
self.0.del(&obj, prop).map_err(to_js_err)?;
@ -295,16 +315,16 @@ impl Automerge {
}
#[wasm_bindgen(js_name = saveIncremental)]
pub fn save_incremental(&mut self) -> JsValue {
pub fn save_incremental(&mut self) -> Uint8Array {
let bytes = self.0.save_incremental();
Uint8Array::from(bytes.as_slice()).into()
Uint8Array::from(bytes.as_slice())
}
#[wasm_bindgen(js_name = loadIncremental)]
pub fn load_incremental(&mut self, data: Uint8Array) -> Result<JsValue, JsValue> {
pub fn load_incremental(&mut self, data: Uint8Array) -> Result<f64, JsValue> {
let data = data.to_vec();
let len = self.0.load_incremental(&data).map_err(to_js_err)?;
Ok(len.into())
Ok(len as f64)
}
#[wasm_bindgen(js_name = applyChanges)]
@ -326,8 +346,8 @@ impl Automerge {
}
#[wasm_bindgen(js_name = getChangesAdded)]
pub fn get_changes_added(&mut self, other: &Automerge) -> Result<Array, JsValue> {
let changes = self.0.get_changes_added(&other.0);
pub fn get_changes_added(&mut self, other: &mut Automerge) -> Result<Array, JsValue> {
let changes = self.0.get_changes_added(&mut other.0);
let changes: Array = changes
.iter()
.map(|c| Uint8Array::from(c.raw_bytes()))
@ -346,17 +366,17 @@ impl Automerge {
}
#[wasm_bindgen(js_name = getActorId)]
pub fn get_actor_id(&mut self) -> JsValue {
pub fn get_actor_id(&mut self) -> String {
let actor = self.0.get_actor();
actor.to_string().into()
actor.to_string()
}
#[wasm_bindgen(js_name = getLastLocalChange)]
pub fn get_last_local_change(&mut self) -> Result<JsValue, JsValue> {
pub fn get_last_local_change(&mut self) -> Result<Uint8Array, JsValue> {
if let Some(change) = self.0.get_last_local_change() {
Ok(Uint8Array::from(change.raw_bytes()).into())
Ok(Uint8Array::from(change.raw_bytes()))
} else {
Ok(JsValue::null())
Err(to_js_err("no local changes"))
}
}
@ -365,8 +385,8 @@ impl Automerge {
}
#[wasm_bindgen(js_name = getMissingDeps)]
pub fn get_missing_deps(&mut self, heads: JsValue) -> Result<Array, JsValue> {
let heads: Vec<_> = JS(heads).try_into().unwrap_or_default();
pub fn get_missing_deps(&mut self, heads: Option<Array>) -> Result<Array, JsValue> {
let heads = get_heads(heads).unwrap_or_default();
let deps = self.0.get_missing_deps(&heads);
let deps: Array = deps
.iter()
@ -403,16 +423,8 @@ impl Automerge {
map_to_js(&self.0, &ROOT)
}
fn export(&self, val: ObjId) -> JsValue {
val.to_string().into()
}
fn import(&self, id: JsValue) -> Result<ObjId, JsValue> {
let id_str = id
.as_string()
.ok_or("invalid opid/objid/elemid")
.map_err(to_js_err)?;
self.0.import(&id_str).map_err(to_js_err)
fn import(&self, id: String) -> Result<ObjId, JsValue> {
self.0.import(&id).map_err(to_js_err)
}
fn import_prop(&mut self, prop: JsValue) -> Result<Prop, JsValue> {
@ -421,84 +433,80 @@ impl Automerge {
} else if let Some(n) = prop.as_f64() {
Ok((n as usize).into())
} else {
Err(format!("invalid prop {:?}", prop).into())
Err(to_js_err(format!("invalid prop {:?}", prop)))
}
}
fn import_scalar(
&mut self,
value: &JsValue,
datatype: &Option<String>,
) -> Option<am::ScalarValue> {
match datatype.as_deref() {
Some("boolean") => value.as_bool().map(am::ScalarValue::Boolean),
Some("int") => value.as_f64().map(|v| am::ScalarValue::Int(v as i64)),
Some("uint") => value.as_f64().map(|v| am::ScalarValue::Uint(v as u64)),
Some("f64") => value.as_f64().map(am::ScalarValue::F64),
Some("bytes") => Some(am::ScalarValue::Bytes(
value.clone().dyn_into::<Uint8Array>().unwrap().to_vec(),
)),
Some("counter") => value.as_f64().map(|v| am::ScalarValue::counter(v as i64)),
Some("timestamp") => value.as_f64().map(|v| am::ScalarValue::Timestamp(v as i64)),
Some("null") => Some(am::ScalarValue::Null),
Some(_) => None,
None => {
if value.is_null() {
Some(am::ScalarValue::Null)
} else if let Some(b) = value.as_bool() {
Some(am::ScalarValue::Boolean(b))
} else if let Some(s) = value.as_string() {
Some(am::ScalarValue::Str(s.into()))
} else if let Some(n) = value.as_f64() {
if (n.round() - n).abs() < f64::EPSILON {
Some(am::ScalarValue::Int(n as i64))
} else {
Some(am::ScalarValue::F64(n))
}
} else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
Some(am::ScalarValue::Timestamp(d.get_time() as i64))
} else if let Ok(o) = &value.clone().dyn_into::<Uint8Array>() {
Some(am::ScalarValue::Bytes(o.to_vec()))
} else {
None
}
}
}
}
fn import_value(&mut self, value: JsValue, datatype: Option<String>) -> Result<Value, JsValue> {
match datatype.as_deref() {
Some("boolean") => value
.as_bool()
.ok_or_else(|| "value must be a bool".into())
.map(|v| am::ScalarValue::Boolean(v).into()),
Some("int") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Int(v as i64).into()),
Some("uint") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Uint(v as u64).into()),
Some("f64") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|n| am::ScalarValue::F64(n).into()),
Some("bytes") => {
Ok(am::ScalarValue::Bytes(value.dyn_into::<Uint8Array>().unwrap().to_vec()).into())
}
Some("counter") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::counter(v as i64).into()),
Some("timestamp") => value
.as_f64()
.ok_or_else(|| "value must be a number".into())
.map(|v| am::ScalarValue::Timestamp(v as i64).into()),
/*
Some("bytes") => unimplemented!(),
Some("cursor") => unimplemented!(),
*/
Some("null") => Ok(am::ScalarValue::Null.into()),
Some(_) => Err(format!("unknown datatype {:?}", datatype).into()),
match self.import_scalar(&value, &datatype) {
Some(val) => Ok(val.into()),
None => {
if value.is_null() {
Ok(am::ScalarValue::Null.into())
} else if let Some(b) = value.as_bool() {
Ok(am::ScalarValue::Boolean(b).into())
} else if let Some(s) = value.as_string() {
// FIXME - we need to detect str vs int vs float vs bool here :/
Ok(am::ScalarValue::Str(s.into()).into())
} else if let Some(n) = value.as_f64() {
if (n.round() - n).abs() < f64::EPSILON {
Ok(am::ScalarValue::Int(n as i64).into())
} else {
Ok(am::ScalarValue::F64(n).into())
}
} else if let Some(o) = to_objtype(&value) {
if let Some(o) = to_objtype(&value) {
Ok(o.into())
} else if let Ok(d) = value.clone().dyn_into::<js_sys::Date>() {
Ok(am::ScalarValue::Timestamp(d.get_time() as i64).into())
} else if let Ok(o) = &value.dyn_into::<Uint8Array>() {
Ok(am::ScalarValue::Bytes(o.to_vec()).into())
} else {
Err("value is invalid".into())
Err(to_js_err(format!(
"invalid value ({},{:?})",
stringify(&value),
datatype
)))
}
}
}
}
}
#[wasm_bindgen]
pub fn init(actor: JsValue) -> Result<Automerge, JsValue> {
#[wasm_bindgen(js_name = create)]
pub fn init(actor: Option<String>) -> Result<Automerge, JsValue> {
console_error_panic_hook::set_once();
Automerge::new(actor)
}
#[wasm_bindgen]
pub fn load(data: Uint8Array, actor: JsValue) -> Result<Automerge, JsValue> {
#[wasm_bindgen(js_name = loadDoc)]
pub fn load(data: Uint8Array, actor: Option<String>) -> Result<Automerge, JsValue> {
let data = data.to_vec();
let mut automerge = am::Automerge::load(&data).map_err(to_js_err)?;
if let Some(s) = actor.as_string() {
if let Some(s) = actor {
let actor = automerge::ActorId::from(hex::decode(s).map_err(to_js_err)?.to_vec());
automerge.set_actor(actor)
}


@ -1,20 +1,13 @@
import { describe, it } from 'mocha';
//@ts-ignore
import assert from 'assert'
//@ts-ignore
import { BloomFilter } from './helpers/sync'
import { create, loadDoc, SyncState, Automerge, MAP, LIST, TEXT, encodeChange, decodeChange, initSyncState, decodeSyncMessage, decodeSyncState, encodeSyncState, encodeSyncMessage } from '../dev/index'
import { DecodedSyncMessage } from '../index';
import { Hash } from '../dev/index';
const assert = require('assert')
const util = require('util')
const { BloomFilter } = require('./helpers/sync')
const Automerge = require('..')
const { MAP, LIST, TEXT, initSyncState, decodeSyncMessage, decodeSyncState, encodeSyncState }= Automerge
// str to uint8array
function en(str) {
return new TextEncoder('utf8').encode(str)
}
// uint8array to str
function de(bytes) {
return new TextDecoder('utf8').decode(bytes);
}
function sync(a, b, aSyncState = initSyncState(), bSyncState = initSyncState()) {
function sync(a: Automerge, b: Automerge, aSyncState = initSyncState(), bSyncState = initSyncState()) {
const MAX_ITER = 10
let aToBmsg = null, bToAmsg = null, i = 0
do {
@ -37,28 +30,28 @@ function sync(a, b, aSyncState = initSyncState(), bSyncState = initSyncState())
describe('Automerge', () => {
describe('basics', () => {
it('should init clone and free', () => {
let doc1 = Automerge.init()
let doc1 = create()
let doc2 = doc1.clone()
doc1.free()
doc2.free()
})
it('should be able to start and commit', () => {
let doc = Automerge.init()
let doc = create()
doc.commit()
doc.free()
})
it('getting a nonexistent prop does not throw an error', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root"
let result = doc.value(root,"hello")
assert.deepEqual(result,[])
assert.deepEqual(result,undefined)
doc.free()
})
it('should be able to set and get a simple value', () => {
let doc = Automerge.init()
let doc : Automerge = create("aabbcc")
let root = "_root"
let result
@ -71,6 +64,8 @@ describe('Automerge', () => {
doc.set(root, "bool", true)
doc.set(root, "time1", 1000, "timestamp")
doc.set(root, "time2", new Date(1001))
doc.set(root, "list", LIST);
doc.set(root, "null", null)
result = doc.value(root,"hello")
assert.deepEqual(result,["str","world"])
@ -104,11 +99,17 @@ describe('Automerge', () => {
result = doc.value(root,"time2")
assert.deepEqual(result,["timestamp",new Date(1001)])
result = doc.value(root,"list")
assert.deepEqual(result,["list","10@aabbcc"]);
result = doc.value(root,"null")
assert.deepEqual(result,["null",null]);
doc.free()
})
it('should be able to use bytes', () => {
let doc = Automerge.init()
let doc = create()
doc.set("_root","data1", new Uint8Array([10,11,12]));
doc.set("_root","data2", new Uint8Array([13,14,15]), "bytes");
let value1 = doc.value("_root", "data1")
@ -119,11 +120,12 @@ describe('Automerge', () => {
})
it('should be able to make sub objects', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root"
let result
let submap = doc.set(root, "submap", MAP)
if (!submap) throw new Error('should be not null')
doc.set(submap, "number", 6, "uint")
assert.strictEqual(doc.pendingOps(),2)
@ -136,10 +138,11 @@ describe('Automerge', () => {
})
it('should be able to make lists', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root"
let submap = doc.set(root, "numbers", LIST)
if (!submap) throw new Error('should be not null')
doc.insert(submap, 0, "a");
doc.insert(submap, 1, "b");
doc.insert(submap, 2, "c");
@ -159,10 +162,11 @@ describe('Automerge', () => {
})
it('lists have insert, set, splice, and push ops', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root"
let submap = doc.set(root, "letters", LIST)
if (!submap) throw new Error('should be not null')
doc.insert(submap, 0, "a");
doc.insert(submap, 0, "b");
assert.deepEqual(doc.toJS(), { letters: ["b", "a" ] })
@ -180,7 +184,7 @@ describe('Automerge', () => {
})
it('should be able to delete non-existent props', () => {
let doc = Automerge.init()
let doc = create()
doc.set("_root", "foo","bar")
doc.set("_root", "bip","bap")
@ -199,18 +203,18 @@ describe('Automerge', () => {
})
it('should be able to del', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root"
doc.set(root, "xxx", "xxx");
assert.deepEqual(doc.value(root, "xxx"),["str","xxx"])
doc.del(root, "xxx");
assert.deepEqual(doc.value(root, "xxx"),[])
assert.deepEqual(doc.value(root, "xxx"),undefined)
doc.free()
})
it('should be able to use counters', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root"
doc.set(root, "counter", 10, "counter");
@ -223,10 +227,11 @@ describe('Automerge', () => {
})
it('should be able to splice text', () => {
let doc = Automerge.init()
let doc = create()
let root = "_root";
let text = doc.set(root, "text", Automerge.TEXT);
let text = doc.set(root, "text", TEXT);
if (!text) throw new Error('should not be undefined')
doc.splice(text, 0, 0, "hello ")
doc.splice(text, 6, 0, ["w","o","r","l","d"])
doc.splice(text, 11, 0, [["str","!"],["str","?"]])
@ -240,7 +245,7 @@ describe('Automerge', () => {
})
it('should be able to save all or incrementally', () => {
let doc = Automerge.init()
let doc = create()
doc.set("_root", "foo", 1)
@ -261,9 +266,9 @@ describe('Automerge', () => {
assert.notDeepEqual(saveA, saveB);
let docA = Automerge.load(saveA);
let docB = Automerge.load(saveB);
let docC = Automerge.load(saveMidway)
let docA = loadDoc(saveA);
let docB = loadDoc(saveB);
let docC = loadDoc(saveMidway)
docC.loadIncremental(save3)
assert.deepEqual(docA.keys("_root"), docB.keys("_root"));
@ -276,8 +281,9 @@ describe('Automerge', () => {
})
it('should be able to splice text', () => {
let doc = Automerge.init()
let doc = create()
let text = doc.set("_root", "text", TEXT);
if (!text) throw new Error('should not be undefined')
doc.splice(text, 0, 0, "hello world");
let heads1 = doc.commit();
doc.splice(text, 6, 0, "big bad ");
@ -292,10 +298,10 @@ describe('Automerge', () => {
})
it('local inc increments all visible counters in a map', () => {
let doc1 = Automerge.init("aaaa")
let doc1 = create("aaaa")
doc1.set("_root", "hello", "world")
let doc2 = Automerge.load(doc1.save(), "bbbb");
let doc3 = Automerge.load(doc1.save(), "cccc");
let doc2 = loadDoc(doc1.save(), "bbbb");
let doc3 = loadDoc(doc1.save(), "cccc");
doc1.set("_root", "cnt", 20)
doc2.set("_root", "cnt", 0, "counter")
doc3.set("_root", "cnt", 10, "counter")
@ -315,7 +321,7 @@ describe('Automerge', () => {
])
let save1 = doc1.save()
let doc4 = Automerge.load(save1)
let doc4 = loadDoc(save1)
assert.deepEqual(doc4.save(), save1);
doc1.free()
doc2.free()
@ -324,11 +330,12 @@ describe('Automerge', () => {
})
it('local inc increments all visible counters in a sequence', () => {
let doc1 = Automerge.init("aaaa")
let doc1 = create("aaaa")
let seq = doc1.set("_root", "seq", LIST)
if (!seq) throw new Error('Should not be undefined')
doc1.insert(seq, 0, "hello")
let doc2 = Automerge.load(doc1.save(), "bbbb");
let doc3 = Automerge.load(doc1.save(), "cccc");
let doc2 = loadDoc(doc1.save(), "bbbb");
let doc3 = loadDoc(doc1.save(), "cccc");
doc1.set(seq, 0, 20)
doc2.set(seq, 0, 0, "counter")
doc3.set(seq, 0, 10, "counter")
@ -348,7 +355,7 @@ describe('Automerge', () => {
])
let save = doc1.save()
let doc4 = Automerge.load(save)
let doc4 = loadDoc(save)
assert.deepEqual(doc4.save(), save);
doc1.free()
doc2.free()
@ -357,12 +364,13 @@ describe('Automerge', () => {
})
it('only returns an object id when objects are created', () => {
let doc = Automerge.init("aaaa")
let doc = create("aaaa")
let r1 = doc.set("_root","foo","bar")
let r2 = doc.set("_root","list",LIST)
let r3 = doc.set("_root","counter",10, "counter")
let r4 = doc.inc("_root","counter",1)
let r5 = doc.del("_root","counter")
if (!r2) throw new Error('should not be undefined')
let r6 = doc.insert(r2,0,10);
let r7 = doc.insert(r2,0,MAP);
let r8 = doc.splice(r2,1,0,["a","b","c"]);
@ -380,13 +388,16 @@ describe('Automerge', () => {
})
it('objects without properties are preserved', () => {
let doc1 = Automerge.init("aaaa")
let doc1 = create("aaaa")
let a = doc1.set("_root","a",MAP);
if (!a) throw new Error('should not be undefined')
let b = doc1.set("_root","b",MAP);
if (!b) throw new Error('should not be undefined')
let c = doc1.set("_root","c",MAP);
if (!c) throw new Error('should not be undefined')
let d = doc1.set(c,"d","dd");
let saved = doc1.save();
let doc2 = Automerge.load(saved);
let doc2 = loadDoc(saved);
assert.deepEqual(doc2.value("_root","a"),["map",a])
assert.deepEqual(doc2.keys(a),[])
assert.deepEqual(doc2.value("_root","b"),["map",b])
@ -397,13 +408,38 @@ describe('Automerge', () => {
doc1.free()
doc2.free()
})
it('should handle merging text conflicts then saving & loading', () => {
let A = create("aabbcc")
let At = A.make('_root', 'text', TEXT)
A.splice(At, 0, 0, 'hello')
let B = A.fork()
assert.deepEqual(B.value("_root","text"), [ "text", At])
B.splice(At, 4, 1)
B.splice(At, 4, 0, '!')
B.splice(At, 5, 0, ' ')
B.splice(At, 6, 0, 'world')
A.merge(B)
let binary = A.save()
let C = loadDoc(binary)
assert.deepEqual(C.value('_root', 'text'), ['text', '1@aabbcc'])
assert.deepEqual(C.text(At), 'hell! world')
})
})
describe('sync', () => {
it('should send a sync message implying no local data', () => {
let doc = Automerge.init()
let doc = create()
let s1 = initSyncState()
let m1 = doc.generateSyncMessage(s1)
const message = decodeSyncMessage(m1)
const message: DecodedSyncMessage = decodeSyncMessage(m1)
assert.deepStrictEqual(message.heads, [])
assert.deepStrictEqual(message.need, [])
assert.deepStrictEqual(message.have.length, 1)
@ -413,7 +449,7 @@ describe('Automerge', () => {
})
it('should not reply if we have no data as well', () => {
let n1 = Automerge.init(), n2 = Automerge.init()
let n1 = create(), n2 = create()
let s1 = initSyncState(), s2 = initSyncState()
let m1 = n1.generateSyncMessage(s1)
n2.receiveSyncMessage(s2, m1)
@ -422,11 +458,12 @@ describe('Automerge', () => {
})
it('repos with equal heads do not need a reply message', () => {
let n1 = Automerge.init(), n2 = Automerge.init()
let n1 = create(), n2 = create()
let s1 = initSyncState(), s2 = initSyncState()
// make two nodes with the same changes
let list = n1.set("_root","n", LIST)
if (!list) throw new Error('undefined')
n1.commit("",0)
for (let i = 0; i < 10; i++) {
n1.insert(list,i,i)
@ -446,10 +483,11 @@ describe('Automerge', () => {
})
it('n1 should offer all changes to n2 when starting from nothing', () => {
let n1 = Automerge.init(), n2 = Automerge.init()
let n1 = create(), n2 = create()
// make changes for n1 that n2 should request
let list = n1.set("_root","n",LIST)
if (!list) throw new Error('undefined')
n1.commit("",0)
for (let i = 0; i < 10; i++) {
n1.insert(list, i, i)
@ -462,10 +500,11 @@ describe('Automerge', () => {
})
it('should sync peers where one has commits the other does not', () => {
let n1 = Automerge.init(), n2 = Automerge.init()
let n1 = create(), n2 = create()
// make changes for n1 that n2 should request
let list = n1.set("_root","n",LIST)
if (!list) throw new Error('undefined')
n1.commit("",0)
for (let i = 0; i < 10; i++) {
n1.insert(list,i,i)
@ -479,7 +518,7 @@ describe('Automerge', () => {
it('should work with prior sync state', () => {
// create & synchronize two nodes
let n1 = Automerge.init(), n2 = Automerge.init()
let n1 = create(), n2 = create()
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 5; i++) {
@ -502,7 +541,7 @@ describe('Automerge', () => {
it('should not generate messages once synced', () => {
// create & synchronize two nodes
let n1 = Automerge.init('abc123'), n2 = Automerge.init('def456')
let n1 = create('abc123'), n2 = create('def456')
let s1 = initSyncState(), s2 = initSyncState()
let message, patch
@ -546,7 +585,7 @@ describe('Automerge', () => {
it('should allow simultaneous messages during synchronization', () => {
// create & synchronize two nodes
let n1 = Automerge.init('abc123'), n2 = Automerge.init('def456')
let n1 = create('abc123'), n2 = create('def456')
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 5; i++) {
@ -618,10 +657,11 @@ describe('Automerge', () => {
})
it('should assume sent changes were received until we hear otherwise', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState(), message = null
let items = n1.set("_root", "items", LIST)
if (!items) throw new Error('undefined')
n1.commit("",0)
sync(n1, n2, s1, s2)
@ -645,7 +685,7 @@ describe('Automerge', () => {
it('should work regardless of who initiates the exchange', () => {
// create & synchronize two nodes
let n1 = Automerge.init(), n2 = Automerge.init()
let n1 = create(), n2 = create()
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 5; i++) {
@ -673,7 +713,7 @@ describe('Automerge', () => {
// lastSync is undefined.
// create two peers both with divergent commits
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 10; i++) {
@ -706,7 +746,7 @@ describe('Automerge', () => {
// lastSync is c9.
// create two peers both with divergent commits
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 10; i++) {
@ -735,7 +775,7 @@ describe('Automerge', () => {
})
it('should ensure non-empty state after sync', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 3; i++) {
@ -754,7 +794,7 @@ describe('Automerge', () => {
// c0 <-- c1 <-- c2 <-- c3 <-- c4 <-- c5 <-- c6 <-- c7 <-- c8
// n2 has changes {c0, c1, c2}, n1's lastSync is c5, and n2's lastSync is c2.
// we want to successfully sync (n1) with (r), even though (n1) believes it's talking to (n2)
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
// n1 makes three changes, which we sync to n2
@ -800,7 +840,7 @@ describe('Automerge', () => {
})
it('should resync after one node experiences data loss without disconnecting', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
// n1 makes three changes, which we sync to n2
@ -814,7 +854,7 @@ describe('Automerge', () => {
assert.deepStrictEqual(n1.getHeads(), n2.getHeads())
assert.deepStrictEqual(n1.toJS(), n2.toJS())
let n2AfterDataLoss = Automerge.init('89abcdef')
let n2AfterDataLoss = create('89abcdef')
// "n2" now has no data, but n1 still thinks it does. Note we don't do
// decodeSyncState(encodeSyncState(s1)) in order to simulate data loss without disconnecting
@ -824,7 +864,7 @@ describe('Automerge', () => {
})
it('should handle changes concurrent to the last sync heads', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef'), n3 = Automerge.init('fedcba98')
let n1 = create('01234567'), n2 = create('89abcdef'), n3 = create('fedcba98')
let s12 = initSyncState(), s21 = initSyncState(), s23 = initSyncState(), s32 = initSyncState()
// Change 1 is known to all three nodes
@ -847,7 +887,9 @@ describe('Automerge', () => {
// Apply n3's latest change to n2. If running in Node, turn the Uint8Array into a Buffer, to
// simulate transmission over a network (see https://github.com/automerge/automerge/pull/362)
let change = n3.getLastLocalChange()
//@ts-ignore
if (typeof Buffer === 'function') change = Buffer.from(change)
if (change === undefined) { throw new RangeError("last local change failed") }
n2.applyChanges([change])
// Now sync n1 and n2. n3's change is concurrent to n1 and n2's last sync heads
@ -857,7 +899,7 @@ describe('Automerge', () => {
})
it('should handle histories with lots of branching and merging', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef'), n3 = Automerge.init('fedcba98')
let n1 = create('01234567'), n2 = create('89abcdef'), n3 = create('fedcba98')
n1.set("_root","x",0); n1.commit("",0)
n2.applyChanges([n1.getLastLocalChange()])
n3.applyChanges([n1.getLastLocalChange()])
@ -897,7 +939,7 @@ describe('Automerge', () => {
// `-- n2
// where n2 is a false positive in the Bloom filter containing {n1}.
// lastSync is c9.
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 10; i++) {
@ -925,7 +967,7 @@ describe('Automerge', () => {
describe('with a false-positive dependency', () => {
let n1, n2, s1, s2, n1hash2, n2hash2
let n1: Automerge, n2: Automerge, s1: SyncState, s2: SyncState, n1hash2: Hash, n2hash2: Hash
beforeEach(() => {
// Scenario: ,-- n1c1 <-- n1c2
@ -933,8 +975,8 @@ describe('Automerge', () => {
// `-- n2c1 <-- n2c2
// where n2c1 is a false positive in the Bloom filter containing {n1c1, n1c2}.
// lastSync is c9.
n1 = Automerge.init('01234567')
n2 = Automerge.init('89abcdef')
n1 = create('01234567')
n2 = create('89abcdef')
s1 = initSyncState()
s2 = initSyncState()
for (let i = 0; i < 10; i++) {
@ -1000,7 +1042,7 @@ describe('Automerge', () => {
assert.strictEqual(decodeSyncMessage(m2).changes.length, 1) // only n2c2; change n2c1 is not sent
// n3 is a node that doesn't have the missing change. Nevertheless n1 is going to ask n3 for it
let n3 = Automerge.init('fedcba98'), s13 = initSyncState(), s31 = initSyncState()
let n3 = create('fedcba98'), s13 = initSyncState(), s31 = initSyncState()
sync(n1, n3, s13, s31)
assert.deepStrictEqual(n1.getHeads(), [n1hash2])
assert.deepStrictEqual(n3.getHeads(), [n1hash2])
@ -1013,7 +1055,7 @@ describe('Automerge', () => {
// `-- n2c1 <-- n2c2 <-- n2c3
// where n2c2 is a false positive in the Bloom filter containing {n1c1, n1c2, n1c3}.
// lastSync is c4.
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
let n1hash3, n2hash3
@ -1067,7 +1109,7 @@ describe('Automerge', () => {
// `-- n2c1 <-- n2c2 <-- n2c3
// where n2c1 and n2c2 are both false positives in the Bloom filter containing {c5}.
// lastSync is c4.
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
for (let i = 0; i < 5; i++) {
@ -1107,7 +1149,7 @@ describe('Automerge', () => {
// c0 <-- c1 <-- c2 <-- c3 <-- c4 <-- c5 <-- c6 <-- c7 <-- c8 <-- c9 <-+
// `-- n2
// where n2 causes a false positive in the Bloom filter containing {n1}.
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
let message
@ -1163,7 +1205,7 @@ describe('Automerge', () => {
// n1 has {c0, c1, c2, n1c1, n1c2, n1c3, n2c1, n2c2};
// n2 has {c0, c1, c2, n1c1, n1c2, n2c1, n2c2, n2c3};
// n3 has {c0, c1, c2, n3c1, n3c2, n3c3}.
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef'), n3 = Automerge.init('76543210')
let n1 = create('01234567'), n2 = create('89abcdef'), n3 = create('76543210')
let s13 = initSyncState(), s12 = initSyncState(), s21 = initSyncState()
let s32 = initSyncState(), s31 = initSyncState(), s23 = initSyncState()
let message1, message2, message3
@ -1213,7 +1255,7 @@ describe('Automerge', () => {
const modifiedMessage = decodeSyncMessage(message3)
modifiedMessage.have.push(decodeSyncMessage(message1).have[0])
assert.strictEqual(modifiedMessage.changes.length, 0)
n2.receiveSyncMessage(s23, Automerge.encodeSyncMessage(modifiedMessage))
n2.receiveSyncMessage(s23, encodeSyncMessage(modifiedMessage))
// n2 replies to n3, sending only n2c3 (the one change that n2 has but n1 doesn't)
message2 = n2.generateSyncMessage(s23)
@ -1228,7 +1270,7 @@ describe('Automerge', () => {
})
it('should allow any change to be requested', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
let message = null
@ -1247,14 +1289,14 @@ describe('Automerge', () => {
message = n1.generateSyncMessage(s1)
const modMsg = decodeSyncMessage(message)
modMsg.need = lastSync // re-request change 2
n2.receiveSyncMessage(s2, Automerge.encodeSyncMessage(modMsg))
n2.receiveSyncMessage(s2, encodeSyncMessage(modMsg))
message = n2.generateSyncMessage(s2)
assert.strictEqual(decodeSyncMessage(message).changes.length, 1)
assert.strictEqual(Automerge.decodeChange(decodeSyncMessage(message).changes[0]).hash, lastSync[0])
assert.strictEqual(decodeChange(decodeSyncMessage(message).changes[0]).hash, lastSync[0])
})
it('should ignore requests for a nonexistent change', () => {
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef')
let n1 = create('01234567'), n2 = create('89abcdef')
let s1 = initSyncState(), s2 = initSyncState()
let message = null
@ -1264,7 +1306,9 @@ describe('Automerge', () => {
n2.applyChanges(n1.getChanges([]))
message = n1.generateSyncMessage(s1)
message = decodeSyncMessage(message)
message.need = ['0000000000000000000000000000000000000000000000000000000000000000']
message = encodeSyncMessage(message)
n2.receiveSyncMessage(s2, message)
message = n2.generateSyncMessage(s2)
assert.strictEqual(message, null)
@ -1274,7 +1318,7 @@ describe('Automerge', () => {
// ,-- c1 <-- c2
// c0 <-+
// `-- c3 <-- c4 <-- c5 <-- c6 <-- c7 <-- c8
let n1 = Automerge.init('01234567'), n2 = Automerge.init('89abcdef'), n3 = Automerge.init('76543210')
let n1 = create('01234567'), n2 = create('89abcdef'), n3 = create('76543210')
let s1 = initSyncState(), s2 = initSyncState()
let msg, decodedMsg
@ -1300,7 +1344,7 @@ describe('Automerge', () => {
n3.set("_root","x",5); n3.commit("",0)
const change5 = n3.getLastLocalChange()
n3.set("_root","x",6); n3.commit("",0)
const change6 = n3.getLastLocalChange(n3), c6 = n3.getHeads()[0]
const change6 = n3.getLastLocalChange(), c6 = n3.getHeads()[0]
for (let i = 7; i <= 8; i++) {
n3.set("_root","x",i); n3.commit("",0)
}
@ -1313,10 +1357,11 @@ describe('Automerge', () => {
msg = n2.generateSyncMessage(s2)
decodedMsg = decodeSyncMessage(msg)
decodedMsg.changes = [change5, change6]
msg = Automerge.encodeSyncMessage(decodedMsg)
const sentHashes = {}
sentHashes[Automerge.decodeChange(change5, true).hash] = true
sentHashes[Automerge.decodeChange(change6, true).hash] = true
msg = encodeSyncMessage(decodedMsg)
const sentHashes: any = {}
sentHashes[decodeChange(change5).hash] = true
sentHashes[decodeChange(change6).hash] = true
s2.sentHashes = sentHashes
n1.receiveSyncMessage(s1, msg)
assert.deepStrictEqual(s1.sharedHeads, [c2, c6].sort())


@ -0,0 +1,17 @@
{
"compilerOptions": {
"noImplicitAny": true,
"strict": true,
"allowJs": false,
"baseUrl": ".",
"esModuleInterop": true,
"lib": ["dom", "esnext.asynciterable", "es2017", "es2016", "es2015"],
"module": "commonjs",
"moduleResolution": "node",
"paths": { "dev": ["*"]},
"rootDir": "",
"target": "es2016",
"typeRoots": ["./dev/index.d.ts"]
},
"exclude": ["dist/**/*"]
}


@ -26,6 +26,8 @@ tinyvec = { version = "^1.5.1", features = ["alloc"] }
unicode-segmentation = "1.7.1"
serde = { version = "^1.0", features=["derive"] }
dot = { version = "0.1.4", optional = true }
js-sys = "^0.3"
wasm-bindgen = "^0.2"
[dependencies.web-sys]
version = "^0.3.55"


@ -1,6 +1,3 @@
use std::collections::{HashMap, HashSet, VecDeque};
use unicode_segmentation::UnicodeSegmentation;
use crate::change::{encode_document, export_change};
use crate::exid::ExId;
use crate::op_set::OpSet;
@ -10,6 +7,9 @@ use crate::types::{
};
use crate::{legacy, query, types, ObjType};
use crate::{AutomergeError, Change, Prop};
use serde::Serialize;
use std::collections::{HashMap, HashSet, VecDeque};
use unicode_segmentation::UnicodeSegmentation;
#[derive(Debug, Clone)]
pub struct Automerge {
@ -126,6 +126,13 @@ impl Automerge {
self.transaction.as_mut().unwrap()
}
pub fn fork(&mut self) -> Self {
self.ensure_transaction_closed();
let mut f = self.clone();
f.actor = None;
f
}
pub fn commit(&mut self, message: Option<String>, time: Option<i64>) -> Vec<ChangeHash> {
let tx = self.tx();
@ -322,26 +329,25 @@ impl Automerge {
value: V,
) -> Result<Option<ExId>, AutomergeError> {
let obj = self.exid_to_obj(obj)?;
if let Some(id) = self.do_insert(obj, index, value)? {
let value = value.into();
if let Some(id) = self.do_insert(obj, index, value.into())? {
Ok(Some(self.id_to_exid(id)))
} else {
Ok(None)
}
}
fn do_insert<V: Into<Value>>(
fn do_insert(
&mut self,
obj: ObjId,
index: usize,
value: V,
action: OpType,
) -> Result<Option<OpId>, AutomergeError> {
let id = self.next_id();
let query = self.ops.search(obj, query::InsertNth::new(index));
let key = query.key()?;
let value = value.into();
let action = value.into();
let is_make = matches!(&action, OpType::Make(_));
let op = Op {
@ -355,7 +361,7 @@ impl Automerge {
insert: true,
};
self.ops.insert(query.pos, op.clone());
self.ops.insert(query.pos(), op.clone());
self.tx().operations.push(op);
if is_make {
@ -399,7 +405,7 @@ impl Automerge {
let mut results = Vec::new();
for v in vals {
// insert()
let id = self.do_insert(obj, pos, v.clone())?;
let id = self.do_insert(obj, pos, v.into())?;
if let Some(id) = id {
results.push(self.id_to_exid(id));
}
@ -547,10 +553,26 @@ impl Automerge {
Ok(delta)
}
fn duplicate_seq(&self, change: &Change) -> bool {
let mut dup = false;
if let Some(actor_index) = self.ops.m.actors.lookup(change.actor_id()) {
if let Some(s) = self.states.get(&actor_index) {
dup = s.len() >= change.seq as usize;
}
}
dup
}
pub fn apply_changes(&mut self, changes: &[Change]) -> Result<Patch, AutomergeError> {
self.ensure_transaction_closed();
for c in changes {
if !self.history_index.contains_key(&c.hash) {
if self.duplicate_seq(c) {
return Err(AutomergeError::DuplicateSeqNumber(
c.seq,
c.actor_id().clone(),
));
}
if self.is_causally_ready(c) {
self.apply_change(c.clone());
} else {
@ -721,15 +743,15 @@ impl Automerge {
}
/// Takes all the changes in `other` which are not in `self` and applies them
pub fn merge(&mut self, other: &mut Self) {
pub fn merge(&mut self, other: &mut Self) -> Result<Vec<ChangeHash>, AutomergeError> {
// TODO: Make this fallible and figure out how to do this transactionally
other.ensure_transaction_closed();
let changes = self
.get_changes_added(other)
.into_iter()
.cloned()
.collect::<Vec<_>>();
self.apply_changes(&changes).unwrap();
self.apply_changes(&changes)?;
Ok(self._get_heads())
}
pub fn save(&mut self) -> Result<Vec<u8>, AutomergeError> {
@ -963,8 +985,9 @@ impl Automerge {
.and_then(|index| self.history.get(*index))
}
pub fn get_changes_added<'a>(&mut self, other: &'a Self) -> Vec<&'a Change> {
pub fn get_changes_added<'a>(&mut self, other: &'a mut Self) -> Vec<&'a Change> {
self.ensure_transaction_closed();
other.ensure_transaction_closed();
self._get_changes_added(other)
}
@ -1089,8 +1112,8 @@ impl Automerge {
};
let value: String = match &i.action {
OpType::Set(value) => format!("{}", value),
OpType::Make(obj) => format!("make{}", obj),
OpType::Inc(obj) => format!("inc{}", obj),
OpType::Make(obj) => format!("make({})", obj),
OpType::Inc(obj) => format!("inc({})", obj),
OpType::Del => format!("del{}", 0),
};
let pred: Vec<_> = i.pred.iter().map(|id| self.to_string(*id)).collect();
@ -1132,6 +1155,17 @@ impl Default for Automerge {
}
}
#[derive(Serialize, Debug, Clone, PartialEq)]
pub struct SpanInfo {
pub id: ExId,
pub time: i64,
pub start: usize,
pub end: usize,
#[serde(rename = "type")]
pub span_type: String,
pub value: ScalarValue,
}
#[cfg(test)]
mod tests {
use super::*;
@ -1350,6 +1384,8 @@ mod tests {
assert!(doc.value_at(&list, 0, &heads2)?.unwrap().0 == Value::int(10));
assert!(doc.length_at(&list, &heads3) == 2);
doc.dump();
//log!("{:?}", doc.value_at(&list, 0, &heads3)?.unwrap().0);
assert!(doc.value_at(&list, 0, &heads3)?.unwrap().0 == Value::int(30));
assert!(doc.value_at(&list, 1, &heads3)?.unwrap().0 == Value::int(20));

View file

@ -1,5 +1,5 @@
use crate::decoding;
use crate::types::ScalarValue;
use crate::types::{ActorId, ScalarValue};
use crate::value::DataType;
use thiserror::Error;
@ -17,6 +17,8 @@ pub enum AutomergeError {
InvalidSeq(u64),
#[error("index {0} is out of bounds")]
InvalidIndex(usize),
#[error("duplicate seq {0} found for actor {1}")]
DuplicateSeqNumber(u64, ActorId),
#[error("generic automerge error")]
Fail,
}
@ -33,6 +35,12 @@ impl From<decoding::Error> for AutomergeError {
}
}
impl From<AutomergeError> for wasm_bindgen::JsValue {
fn from(err: AutomergeError) -> Self {
js_sys::Error::new(&std::format!("{}", err)).into()
}
}
#[derive(Error, Debug)]
#[error("Invalid actor ID: {0}")]
pub struct InvalidActorId(pub String);

View file

@ -1,4 +1,6 @@
use crate::ActorId;
use serde::Serialize;
use serde::Serializer;
use std::cmp::{Ord, Ordering};
use std::fmt;
use std::hash::{Hash, Hasher};
@ -63,3 +65,12 @@ impl PartialOrd for ExId {
Some(self.cmp(other))
}
}
impl Serialize for ExId {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
serializer.serialize_str(self.to_string().as_str())
}
}

View file

@ -103,6 +103,8 @@ impl<'de> Deserialize<'de> for RawOpType {
"del",
"inc",
"set",
"mark",
"unmark",
];
// TODO: Probably more efficient to deserialize to a `&str`
let raw_type = String::deserialize(deserializer)?;
@ -144,6 +146,8 @@ impl<'de> Deserialize<'de> for Op {
let mut insert: Option<bool> = None;
let mut datatype: Option<DataType> = None;
let mut value: Option<Option<ScalarValue>> = None;
let mut name: Option<String> = None;
let mut expand: Option<bool> = None;
let mut ref_id: Option<OpId> = None;
while let Some(field) = map.next_key::<String>()? {
match field.as_ref() {
@ -167,6 +171,8 @@ impl<'de> Deserialize<'de> for Op {
"insert" => read_field("insert", &mut insert, &mut map)?,
"datatype" => read_field("datatype", &mut datatype, &mut map)?,
"value" => read_field("value", &mut value, &mut map)?,
"name" => read_field("name", &mut name, &mut map)?,
"expand" => read_field("expand", &mut expand, &mut map)?,
"ref" => read_field("ref", &mut ref_id, &mut map)?,
_ => return Err(Error::unknown_field(&field, FIELDS)),
}

View file

@ -2,4 +2,3 @@ mod element_id;
mod key;
mod object_id;
mod opid;
mod scalar_value;

View file

@ -1,57 +0,0 @@
use std::fmt;
use smol_str::SmolStr;
use crate::value::ScalarValue;
impl From<&str> for ScalarValue {
fn from(s: &str) -> Self {
ScalarValue::Str(s.into())
}
}
impl From<i64> for ScalarValue {
fn from(n: i64) -> Self {
ScalarValue::Int(n)
}
}
impl From<u64> for ScalarValue {
fn from(n: u64) -> Self {
ScalarValue::Uint(n)
}
}
impl From<i32> for ScalarValue {
fn from(n: i32) -> Self {
ScalarValue::Int(n as i64)
}
}
impl From<bool> for ScalarValue {
fn from(b: bool) -> Self {
ScalarValue::Boolean(b)
}
}
impl From<char> for ScalarValue {
fn from(c: char) -> Self {
ScalarValue::Str(SmolStr::new(c.to_string()))
}
}
impl fmt::Display for ScalarValue {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ScalarValue::Bytes(b) => write!(f, "\"{:?}\"", b),
ScalarValue::Str(s) => write!(f, "\"{}\"", s),
ScalarValue::Int(i) => write!(f, "{}", i),
ScalarValue::Uint(i) => write!(f, "{}", i),
ScalarValue::F64(n) => write!(f, "{:.324}", n),
ScalarValue::Counter(c) => write!(f, "Counter: {}", c),
ScalarValue::Timestamp(i) => write!(f, "Timestamp: {}", i),
ScalarValue::Boolean(b) => write!(f, "{}", b),
ScalarValue::Null => write!(f, "null"),
}
}
}

View file

@ -8,40 +8,56 @@ use std::fmt::Debug;
pub(crate) struct InsertNth<const B: usize> {
target: usize,
seen: usize,
pub pos: usize,
//pub pos: usize,
n: usize,
valid: Option<usize>,
last_seen: Option<ElemId>,
last_insert: Option<ElemId>,
last_valid_insert: Option<ElemId>,
}
impl<const B: usize> InsertNth<B> {
pub fn new(target: usize) -> Self {
let (valid, last_valid_insert) = if target == 0 {
(Some(0), Some(HEAD))
} else {
(None, None)
};
InsertNth {
target,
seen: 0,
pos: 0,
n: 0,
valid,
last_seen: None,
last_insert: None,
last_valid_insert,
}
}
pub fn pos(&self) -> usize {
self.valid.unwrap_or(self.n)
}
pub fn key(&self) -> Result<Key, AutomergeError> {
if self.target == 0 {
Ok(self
.last_valid_insert
.ok_or(AutomergeError::InvalidIndex(self.target))?
.into())
//if self.target == 0 {
/*
if self.last_insert.is_none() {
Ok(HEAD.into())
} else if self.seen == self.target && self.last_insert.is_some() {
Ok(Key::Seq(self.last_insert.unwrap()))
} else {
Err(AutomergeError::InvalidIndex(self.target))
}
*/
}
}
impl<const B: usize> TreeQuery<B> for InsertNth<B> {
fn query_node(&mut self, child: &OpTreeNode<B>) -> QueryResult {
if self.target == 0 {
// insert at the start of the obj all inserts are lesser b/c this is local
self.pos = 0;
return QueryResult::Finish;
}
let mut num_vis = child.index.len;
if num_vis > 0 {
if child.index.has(&self.last_seen) {
@ -50,30 +66,34 @@ impl<const B: usize> TreeQuery<B> for InsertNth<B> {
if self.seen + num_vis >= self.target {
QueryResult::Decend
} else {
self.pos += child.len();
self.n += child.len();
self.seen += num_vis;
self.last_seen = child.last().elemid();
QueryResult::Next
}
} else {
self.pos += child.len();
self.n += child.len();
QueryResult::Next
}
}
fn query_element(&mut self, element: &Op) -> QueryResult {
if element.insert {
if self.seen >= self.target {
return QueryResult::Finish;
};
if self.valid.is_none() && self.seen >= self.target {
self.valid = Some(self.n);
}
self.last_seen = None;
self.last_insert = element.elemid();
}
if self.last_seen.is_none() && element.visible() {
if self.seen >= self.target {
return QueryResult::Finish;
}
self.seen += 1;
self.last_seen = element.elemid()
self.last_seen = element.elemid();
self.last_valid_insert = self.last_seen
}
self.pos += 1;
self.n += 1;
QueryResult::Next
}
}

View file

@ -64,6 +64,16 @@ impl TryFrom<&str> for ActorId {
}
}
impl TryFrom<String> for ActorId {
type Error = error::InvalidActorId;
fn try_from(s: String) -> Result<Self, Self::Error> {
hex::decode(&s)
.map(ActorId::from)
.map_err(|_| error::InvalidActorId(s))
}
}
impl From<uuid::Uuid> for ActorId {
fn from(u: uuid::Uuid) -> Self {
ActorId(TinyVec::from(*u.as_bytes()))
@ -281,6 +291,18 @@ impl From<ElemId> for Key {
}
}
impl From<Option<ElemId>> for ElemId {
fn from(e: Option<ElemId>) -> Self {
e.unwrap_or(HEAD)
}
}
impl From<Option<ElemId>> for Key {
fn from(e: Option<ElemId>) -> Self {
Key::Seq(e.into())
}
}
#[derive(Debug, PartialEq, PartialOrd, Eq, Ord, Clone, Copy, Hash)]
pub(crate) enum Key {
Map(usize),

View file

@ -34,6 +34,10 @@ impl Value {
Value::Object(ObjType::Table)
}
pub fn null() -> Value {
Value::Scalar(ScalarValue::Null)
}
pub fn str(s: &str) -> Value {
Value::Scalar(ScalarValue::Str(s.into()))
}
@ -58,9 +62,22 @@ impl Value {
Value::Scalar(ScalarValue::F64(n))
}
pub fn boolean(n: bool) -> Value {
Value::Scalar(ScalarValue::Boolean(n))
}
pub fn bytes(b: Vec<u8>) -> Value {
Value::Scalar(ScalarValue::Bytes(b))
}
pub fn is_object(&self) -> bool {
matches!(&self, Value::Object(_))
}
pub fn is_scalar(&self) -> bool {
matches!(&self, Value::Scalar(_))
}
}
impl From<&str> for Value {
@ -93,12 +110,24 @@ impl From<u64> for Value {
}
}
impl From<f64> for Value {
fn from(n: f64) -> Self {
Value::Scalar(ScalarValue::F64(n))
}
}
impl From<bool> for Value {
fn from(v: bool) -> Self {
Value::Scalar(ScalarValue::Boolean(v))
}
}
impl From<()> for Value {
fn from(_: ()) -> Self {
Value::Scalar(ScalarValue::Null)
}
}
impl From<ObjType> for Value {
fn from(o: ObjType) -> Self {
Value::Object(o)
@ -366,7 +395,79 @@ impl ScalarValue {
}
}
pub fn to_bool(self) -> Option<bool> {
match self {
ScalarValue::Boolean(b) => Some(b),
_ => None,
}
}
pub fn to_string(self) -> Option<String> {
match self {
ScalarValue::Str(s) => Some(s.to_string()),
_ => None,
}
}
pub fn counter(n: i64) -> ScalarValue {
ScalarValue::Counter(n.into())
}
}
impl From<&str> for ScalarValue {
fn from(s: &str) -> Self {
ScalarValue::Str(s.into())
}
}
impl From<String> for ScalarValue {
fn from(s: String) -> Self {
ScalarValue::Str(s.into())
}
}
impl From<i64> for ScalarValue {
fn from(n: i64) -> Self {
ScalarValue::Int(n)
}
}
impl From<u64> for ScalarValue {
fn from(n: u64) -> Self {
ScalarValue::Uint(n)
}
}
impl From<i32> for ScalarValue {
fn from(n: i32) -> Self {
ScalarValue::Int(n as i64)
}
}
impl From<bool> for ScalarValue {
fn from(b: bool) -> Self {
ScalarValue::Boolean(b)
}
}
impl From<char> for ScalarValue {
fn from(c: char) -> Self {
ScalarValue::Str(SmolStr::new(c.to_string()))
}
}
impl fmt::Display for ScalarValue {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
ScalarValue::Bytes(b) => write!(f, "\"{:?}\"", b),
ScalarValue::Str(s) => write!(f, "\"{}\"", s),
ScalarValue::Int(i) => write!(f, "{}", i),
ScalarValue::Uint(i) => write!(f, "{}", i),
ScalarValue::F64(n) => write!(f, "{:.324}", n),
ScalarValue::Counter(c) => write!(f, "Counter: {}", c),
ScalarValue::Timestamp(i) => write!(f, "Timestamp: {}", i),
ScalarValue::Boolean(b) => write!(f, "{}", b),
ScalarValue::Null => write!(f, "null"),
}
}
}

View file

@ -54,10 +54,10 @@ fn repeated_map_assignment_which_resolves_conflict_not_ignored() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
doc1.set(&automerge::ROOT, "field", 123).unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.set(&automerge::ROOT, "field", 456).unwrap();
doc1.set(&automerge::ROOT, "field", 789).unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_eq!(doc1.values(&automerge::ROOT, "field").unwrap().len(), 2);
doc1.set(&automerge::ROOT, "field", 123).unwrap();
@ -78,9 +78,9 @@ fn repeated_list_assignment_which_resolves_conflict_not_ignored() {
.unwrap()
.unwrap();
doc1.insert(&list_id, 0, 123).unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.set(&list_id, 0, 456).unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
doc1.set(&list_id, 0, 789).unwrap();
assert_doc!(
@ -123,7 +123,7 @@ fn merge_concurrent_map_prop_updates() {
let mut doc2 = new_doc();
doc1.set(&automerge::ROOT, "foo", "bar").unwrap();
doc2.set(&automerge::ROOT, "hello", "world").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_eq!(
doc1.value(&automerge::ROOT, "foo").unwrap().unwrap().0,
"bar".into()
@ -135,7 +135,7 @@ fn merge_concurrent_map_prop_updates() {
"hello" => { "world" },
}
);
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
assert_doc!(
&doc2,
map! {
@ -152,10 +152,10 @@ fn add_concurrent_increments_of_same_property() {
let mut doc2 = new_doc();
doc1.set(&automerge::ROOT, "counter", mk_counter(0))
.unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.inc(&automerge::ROOT, "counter", 1).unwrap();
doc2.inc(&automerge::ROOT, "counter", 2).unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
map! {
@ -181,7 +181,7 @@ fn add_increments_only_to_preceeded_values() {
doc2.inc(&automerge::ROOT, "counter", 3).unwrap();
// The two values should be conflicting rather than added
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -201,7 +201,7 @@ fn concurrent_updates_of_same_field() {
doc1.set(&automerge::ROOT, "field", "one").unwrap();
doc2.set(&automerge::ROOT, "field", "two").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -223,11 +223,11 @@ fn concurrent_updates_of_same_list_element() {
.unwrap()
.unwrap();
doc1.insert(&list_id, 0, "finch").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.set(&list_id, 0, "greenfinch").unwrap();
doc2.set(&list_id, 0, "goldfinch").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -252,8 +252,8 @@ fn assignment_conflicts_of_different_types() {
.unwrap();
doc3.set(&automerge::ROOT, "field", automerge::Value::map())
.unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc3);
doc1.merge(&mut doc2).unwrap();
doc1.merge(&mut doc3).unwrap();
assert_doc!(
&doc1,
@ -277,7 +277,7 @@ fn changes_within_conflicting_map_field() {
.unwrap()
.unwrap();
doc2.set(&map_id, "innerKey", 42).unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -304,7 +304,7 @@ fn changes_within_conflicting_list_element() {
.unwrap()
.unwrap();
doc1.insert(&list_id, 0, "hello").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
let map_in_doc1 = doc1
.set(&list_id, 0, automerge::Value::map())
@ -317,11 +317,11 @@ fn changes_within_conflicting_list_element() {
.set(&list_id, 0, automerge::Value::map())
.unwrap()
.unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
doc2.set(&map_in_doc2, "map2", true).unwrap();
doc2.set(&map_in_doc2, "key", 2).unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -361,7 +361,7 @@ fn concurrently_assigned_nested_maps_should_not_merge() {
.unwrap();
doc2.set(&doc2_map_id, "logo_url", "logo.png").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -392,11 +392,11 @@ fn concurrent_insertions_at_different_list_positions() {
doc1.insert(&list_id, 0, "one").unwrap();
doc1.insert(&list_id, 1, "three").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.splice(&list_id, 1, 0, vec!["two".into()]).unwrap();
doc2.insert(&list_id, 2, "four").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -426,10 +426,10 @@ fn concurrent_insertions_at_same_list_position() {
.unwrap();
doc1.insert(&list_id, 0, "parakeet").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.insert(&list_id, 1, "starling").unwrap();
doc2.insert(&list_id, 1, "chaffinch").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -456,11 +456,11 @@ fn concurrent_assignment_and_deletion_of_a_map_entry() {
let mut doc1 = new_doc();
let mut doc2 = new_doc();
doc1.set(&automerge::ROOT, "bestBird", "robin").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.del(&automerge::ROOT, "bestBird").unwrap();
doc2.set(&automerge::ROOT, "bestBird", "magpie").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -483,7 +483,7 @@ fn concurrent_assignment_and_deletion_of_list_entry() {
doc1.insert(&list_id, 0, "blackbird").unwrap();
doc1.insert(&list_id, 1, "thrush").unwrap();
doc1.insert(&list_id, 2, "goldfinch").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.set(&list_id, 1, "starling").unwrap();
doc2.del(&list_id, 1).unwrap();
@ -508,7 +508,7 @@ fn concurrent_assignment_and_deletion_of_list_entry() {
}
);
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -535,14 +535,14 @@ fn insertion_after_a_deleted_list_element() {
doc1.insert(&list_id, 1, "thrush").unwrap();
doc1.insert(&list_id, 2, "goldfinch").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.splice(&list_id, 1, 2, Vec::new()).unwrap();
doc2.splice(&list_id, 2, 0, vec!["starling".into()])
.unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -554,7 +554,7 @@ fn insertion_after_a_deleted_list_element() {
}
);
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
assert_doc!(
&doc2,
map! {
@ -579,13 +579,13 @@ fn concurrent_deletion_of_same_list_element() {
doc1.insert(&list_id, 1, "buzzard").unwrap();
doc1.insert(&list_id, 2, "cormorant").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.del(&list_id, 1).unwrap();
doc2.del(&list_id, 1).unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -597,7 +597,7 @@ fn concurrent_deletion_of_same_list_element() {
}
);
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
assert_doc!(
&doc2,
map! {
@ -631,12 +631,12 @@ fn concurrent_updates_at_different_levels() {
.unwrap();
doc1.insert(&mammals, 0, "badger").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.set(&birds, "brown", "sparrow").unwrap();
doc2.del(&animals, "birds").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_obj!(
&doc1,
@ -676,13 +676,13 @@ fn concurrent_updates_of_concurrently_deleted_objects() {
.unwrap();
doc1.set(&blackbird, "feathers", "black").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.del(&birds, "blackbird").unwrap();
doc2.set(&blackbird, "beak", "orange").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -704,7 +704,7 @@ fn does_not_interleave_sequence_insertions_at_same_position() {
.set(&automerge::ROOT, "wisdom", automerge::Value::list())
.unwrap()
.unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc1.splice(
&wisdom,
@ -734,7 +734,7 @@ fn does_not_interleave_sequence_insertions_at_same_position() {
)
.unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
assert_doc!(
&doc1,
@ -767,7 +767,7 @@ fn mutliple_insertions_at_same_list_position_with_insertion_by_greater_actor_id(
.unwrap()
.unwrap();
doc1.insert(&list, 0, "two").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.insert(&list, 0, "one").unwrap();
assert_doc!(
@ -793,7 +793,7 @@ fn mutliple_insertions_at_same_list_position_with_insertion_by_lesser_actor_id()
.unwrap()
.unwrap();
doc1.insert(&list, 0, "two").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.insert(&list, 0, "one").unwrap();
assert_doc!(
@ -817,11 +817,11 @@ fn insertion_consistent_with_causality() {
.unwrap()
.unwrap();
doc1.insert(&list, 0, "four").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.insert(&list, 0, "three").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
doc1.insert(&list, 0, "two").unwrap();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.insert(&list, 0, "one").unwrap();
assert_doc!(
@ -861,11 +861,11 @@ fn save_restore_complex() {
doc1.set(&first_todo, "done", false).unwrap();
let mut doc2 = new_doc();
doc2.merge(&mut doc1);
doc2.merge(&mut doc1).unwrap();
doc2.set(&first_todo, "title", "weed plants").unwrap();
doc1.set(&first_todo, "title", "kill plants").unwrap();
doc1.merge(&mut doc2);
doc1.merge(&mut doc2).unwrap();
let reloaded = Automerge::load(&doc1.save().unwrap()).unwrap();
@ -918,8 +918,8 @@ fn list_counter_del() -> Result<(), automerge::AutomergeError> {
doc1.inc(&list, 1, 1)?;
doc1.inc(&list, 2, 1)?;
doc1.merge(&mut doc2);
doc1.merge(&mut doc3);
doc1.merge(&mut doc2).unwrap();
doc1.merge(&mut doc3).unwrap();
let values = doc1.values(&list, 1)?;
assert_eq!(values.len(), 3);

View file

@ -25,6 +25,10 @@ let _ = doc.save()
console.log(`Done in ${new Date() - start} ms`)
let t_time = new Date()
let t = doc.text(text);
console.log(`doc.text in ${new Date() - t_time} ms`)
if (doc.text(text) !== finalText) {
throw new RangeError('ERROR: final text did not match expectation')
}

examples/cra/.gitignore (new file)

@ -0,0 +1,2 @@
node_modules
package-lock.json

examples/cra/README.md (new file)

@ -0,0 +1,22 @@
## Example CRA App using AutomergeWASM
### Creating this example app
```bash
$ cd automerge-wasm && yarn pkg # this builds the npm package
$ cd ../examples
$ npx create-react-app cra --template typescript
$ cd cra
$ npm install ../../automerge-wasm/automerge-wasm-v0.1.0.tgz
```
Then I just needed to import "automerge-wasm" and add the `{ useEffect, useState }` hook code to `./src/App.tsx`, for example:
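A condensed sketch of that wiring (the `Demo` component name and the read-only `<textarea>` are just for illustration; the full component is in `src/App.tsx` below):

```tsx
import React, { useEffect, useState } from 'react'
import * as Automerge from "automerge-wasm"

function Demo() {
  // one Automerge document and one text object, created once per component instance
  const [ doc, ] = useState(Automerge.create())
  const [ edits, ] = useState(doc.set("_root", "edits", Automerge.TEXT) || "")
  const [ val, setVal ] = useState("")

  useEffect(() => {
    // seed the text object and mirror its contents into React state
    doc.splice(edits, 0, 0, "the quick fox jumps over the lazy dog")
    setVal(doc.text(edits))
  }, [])

  return <textarea value={val} readOnly />
}

export default Demo
```

(`index.tsx` below calls `init()` from "automerge-wasm" before rendering, so the wasm module is ready by the time this component runs.)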
```bash
$ npm start
```
### Open Issues
The example app currently doesn't do anything useful. Perhaps someone with some React experience can figure out the right way to wire everything up for an actual demo.

examples/cra/package.json (new file)

@ -0,0 +1,43 @@
{
"name": "cra",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^5.16.1",
"@testing-library/react": "^12.1.2",
"@testing-library/user-event": "^13.5.0",
"@types/jest": "^27.4.0",
"@types/node": "^16.11.21",
"@types/react": "^17.0.38",
"@types/react-dom": "^17.0.11",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-scripts": "5.0.0",
"typescript": "^4.5.5",
"web-vitals": "^2.1.4"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": [
"react-app",
"react-app/jest"
]
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}

(binary image file added, 3.8 KiB; not shown)

View file

@ -0,0 +1,43 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="theme-color" content="#000000" />
<meta
name="description"
content="Web site created using create-react-app"
/>
<link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
<!--
manifest.json provides metadata used when your web app is installed on a
user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/
-->
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
<!--
Notice the use of %PUBLIC_URL% in the tags above.
It will be replaced with the URL of the `public` folder during the build.
Only files inside the `public` folder can be referenced from the HTML.
Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will
work correctly both with client-side routing and a non-root public URL.
Learn how to configure a non-root public URL by running `npm run build`.
-->
<title>React App</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
<!--
This HTML file is a template.
If you open it directly in the browser, you will see an empty page.
You can add webfonts, meta tags, or analytics to this file.
The build step will place the bundled scripts into the <body> tag.
To begin the development, run `npm start` or `yarn start`.
To create a production bundle, use `npm run build` or `yarn build`.
-->
</body>
</html>

(binary image file added, 5.2 KiB; not shown)
(binary image file added, 9.4 KiB; not shown)

View file

@ -0,0 +1,25 @@
{
"short_name": "React App",
"name": "Create React App Sample",
"icons": [
{
"src": "favicon.ico",
"sizes": "64x64 32x32 24x24 16x16",
"type": "image/x-icon"
},
{
"src": "logo192.png",
"type": "image/png",
"sizes": "192x192"
},
{
"src": "logo512.png",
"type": "image/png",
"sizes": "512x512"
}
],
"start_url": ".",
"display": "standalone",
"theme_color": "#000000",
"background_color": "#ffffff"
}

View file

@ -0,0 +1,3 @@
# https://www.robotstxt.org/robotstxt.html
User-agent: *
Disallow:

examples/cra/src/App.css (new file)

@ -0,0 +1,38 @@
.App {
text-align: center;
}
.App-logo {
height: 40vmin;
pointer-events: none;
}
@media (prefers-reduced-motion: no-preference) {
.App-logo {
animation: App-logo-spin infinite 20s linear;
}
}
.App-header {
background-color: #282c34;
min-height: 100vh;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: calc(10px + 2vmin);
color: white;
}
.App-link {
color: #61dafb;
}
@keyframes App-logo-spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}

View file

@ -0,0 +1,9 @@
import React from 'react';
import { render, screen } from '@testing-library/react';
import App from './App';
test('renders learn react link', () => {
render(<App />);
const linkElement = screen.getByText(/learn react/i);
expect(linkElement).toBeInTheDocument();
});

examples/cra/src/App.tsx (new file)

@ -0,0 +1,45 @@
import React, { useEffect, useState } from 'react';
import './App.css';
import * as Automerge from "automerge-wasm"
function App() {
const [ doc, ] = useState(Automerge.create())
const [ edits, ] = useState(doc.set("_root", "edits", Automerge.TEXT) || "")
const [ val, setVal ] = useState("");
useEffect(() => {
doc.splice(edits, 0, 0, "the quick fox jumps over the lazy dog")
let result = doc.text(edits)
setVal(result)
}, [])
function updateTextarea(e: any) {
e.preventDefault()
let event: InputEvent = e.nativeEvent
console.log(edits, e.target.selectionEnd)
switch (event.inputType) {
case 'insertText':
//@ts-ignore
doc.splice(edits, e.target.selectionEnd - 1, 0, e.nativeEvent.data)
break;
case 'deleteContentBackward':
//@ts-ignore
doc.splice(edits, e.target.selectionEnd, 1)
break;
case 'insertLineBreak':
//@ts-ignore
doc.splice(edits, e.target.selectionEnd - 1, 0, '\n')
break;
}
setVal(doc.text(edits))
}
return (
<div className="App">
<header className="App-header">
<textarea value={val} onChange={updateTextarea}></textarea>
</header>
</div>
);
}
export default App;

View file

@ -0,0 +1,13 @@
body {
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
code {
font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',
monospace;
}

View file

@ -0,0 +1,20 @@
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import reportWebVitals from './reportWebVitals';
import init from "automerge-wasm"
init().then(_ => {
ReactDOM.render(
<React.StrictMode>
<App />
</React.StrictMode>,
document.getElementById('root')
);
})
// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
reportWebVitals();

View file

@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 841.9 595.3"><g fill="#61DAFB"><path d="M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 1.1-9.4 2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z"/><circle cx="420.9" cy="296.5" r="45.7"/><path d="M520.5 78.1z"/></g></svg>


examples/cra/src/react-app-env.d.ts (new file)

@ -0,0 +1 @@
/// <reference types="react-scripts" />

View file

@ -0,0 +1,15 @@
import { ReportHandler } from 'web-vitals';
const reportWebVitals = (onPerfEntry?: ReportHandler) => {
if (onPerfEntry && onPerfEntry instanceof Function) {
import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {
getCLS(onPerfEntry);
getFID(onPerfEntry);
getFCP(onPerfEntry);
getLCP(onPerfEntry);
getTTFB(onPerfEntry);
});
}
};
export default reportWebVitals;

View file

@ -0,0 +1,5 @@
// jest-dom adds custom jest matchers for asserting on DOM nodes.
// allows you to do things like:
// expect(element).toHaveTextContent(/react/i)
// learn more: https://github.com/testing-library/jest-dom
import '@testing-library/jest-dom';

View file

@ -0,0 +1,26 @@
{
"compilerOptions": {
"target": "es5",
"lib": [
"dom",
"dom.iterable",
"esnext"
],
"allowJs": true,
"skipLibCheck": true,
"esModuleInterop": true,
"allowSyntheticDefaultImports": true,
"strict": true,
"forceConsistentCasingInFileNames": true,
"noFallthroughCasesInSwitch": true,
"module": "esnext",
"moduleResolution": "node",
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react-jsx"
},
"include": [
"src"
]
}

View file

@ -2,11 +2,11 @@
"nodes": {
"flake-utils": {
"locked": {
"lastModified": 1619345332,
"narHash": "sha256-qHnQkEp1uklKTpx3MvKtY6xzgcqXDsz5nLilbbuL+3A=",
"lastModified": 1642700792,
"narHash": "sha256-XqHrk7hFb+zBvRg6Ghl+AZDq03ov6OshJLiSWOoX5es=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "2ebf2558e5bf978c7fb8ea927dfaed8fefab2e28",
"rev": "846b2ae0fc4cc943637d3d1def4454213e203cba",
"type": "github"
},
"original": {
@ -17,11 +17,11 @@
},
"flake-utils_2": {
"locked": {
"lastModified": 1614513358,
"narHash": "sha256-LakhOx3S1dRjnh0b5Dg3mbZyH0ToC9I8Y2wKSkBaTzU=",
"lastModified": 1637014545,
"narHash": "sha256-26IZAc5yzlD9FlDT54io1oqG/bBoyka+FJk5guaX4x4=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5466c5bbece17adaab2d82fae80b46e807611bf3",
"rev": "bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4",
"type": "github"
},
"original": {
@ -32,11 +32,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1620340338,
"narHash": "sha256-Op/4K0+Z9Sp5jtFH0s/zMM4H7VFZxrekcAmjQ6JpQ4w=",
"lastModified": 1643805626,
"narHash": "sha256-AXLDVMG+UaAGsGSpOtQHPIKB+IZ0KSd9WS77aanGzgc=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "63586475587d7e0e078291ad4b49b6f6a6885100",
"rev": "554d2d8aa25b6e583575459c297ec23750adb6cb",
"type": "github"
},
"original": {
@ -48,15 +48,16 @@
},
"nixpkgs_2": {
"locked": {
"lastModified": 1617325113,
"narHash": "sha256-GksR0nvGxfZ79T91UUtWjjccxazv6Yh/MvEJ82v1Xmw=",
"owner": "nixos",
"lastModified": 1637453606,
"narHash": "sha256-Gy6cwUswft9xqsjWxFYEnx/63/qzaFUwatcbV5GF/GQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "54c1e44240d8a527a8f4892608c4bce5440c3ecb",
"rev": "8afc4e543663ca0a6a4f496262cd05233737e732",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
@ -74,11 +75,11 @@
"nixpkgs": "nixpkgs_2"
},
"locked": {
"lastModified": 1620355527,
"narHash": "sha256-mUTnUODiAtxH83gbv7uuvCbqZ/BNkYYk/wa3MkwrskE=",
"lastModified": 1643941258,
"narHash": "sha256-uHyEuICSu8qQp6adPTqV33ajiwoF0sCh+Iazaz5r7fo=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "d8efe70dc561c4bea0b7bf440d36ce98c497e054",
"rev": "674156c4c2f46dd6a6846466cb8f9fee84c211ca",
"type": "github"
},
"original": {

View file

@ -19,7 +19,7 @@
inherit system;
};
lib = pkgs.lib;
rust = pkgs.rust-bin.stable.latest.rust;
rust = pkgs.rust-bin.stable.latest.default;
cargoNix = pkgs.callPackage ./Cargo.nix {
inherit pkgs;
release = true;

scripts/ci/cmake-build (new executable file)

@ -0,0 +1,18 @@
#!/usr/bin/env bash
set -eoux pipefail
THIS_SCRIPT=$(dirname "$0");
# \note CMake's default build types are "Debug", "MinSizeRel", "Release" and
# "RelWithDebInfo" but custom ones can also be defined so we pass it verbatim.
BUILD_TYPE=$1;
LIB_TYPE=$2;
if [ "${LIB_TYPE,,}" == "shared" ]; then
SHARED_TOGGLE="ON"
else
SHARED_TOGGLE="OFF"
fi
C_PROJECT=$THIS_SCRIPT/../../automerge-c;
mkdir -p $C_PROJECT/build;
cd $C_PROJECT/build;
cmake --log-level=ERROR --verbose -B . -S .. -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DBUILD_SHARED_LIBS=$SHARED_TOGGLE;
cmake --build .;

View file

@ -7,3 +7,4 @@ set -eou pipefail
./scripts/ci/docs
./scripts/ci/advisory
./scripts/ci/js_tests
./scripts/ci/cmake-build Release static