
xray's Introduction

Attention: GitHub has decided not to move forward with any aspect of this project. We'll archive the repository in case anybody finds value here, but we don't expect to actively work on this in the foreseeable future. Thanks to everyone for their interest and support.

Xray


Xray is an experimental Electron-based text editor informed by what we've learned in the four years since the launch of Atom. In the short term, this project is a testbed for rapidly iterating on several radical ideas without risking the stability of Atom. The longer term future of the code in this repository will become clearer after a few months of progress. For now, our primary goal is to iterate rapidly and learn as much as possible.

Q3 2018 Focus

We're currently focused on a sub-project of Xray called Memo, which will serve as the foundation of Xray but also be available as a standalone tool. Memo is an operation-based version control system that tracks changes at the level of individual keystrokes and synchronizes branches in real time.

Updates

Foundational priorities

Our goal is to build a cross-platform text editor that is designed from the beginning around the following foundational priorities:

Collaboration

Xray makes it as easy to code together as it is to code alone.

We design features for collaborative use from the beginning. Editors and other relevant UI elements are designed to be occupied by multiple users. Interactions with the file system and other resources such as subprocesses are abstracted to work over network connections.

High performance

Xray feels lightweight and responsive.

We design our features to be responsive from the beginning. We reliably provide visual feedback within the latency windows suggested by the RAIL performance model. For all interactions, we shoot for the following targets on the hardware of our median user:

  • 8ms: Scrolling, animations, and fine-grained interactions such as typing or cursor movement.
  • 50ms: Coarse-grained interactions such as opening a file or initiating a search. If we can't complete the action within this window, we should show a progress bar.
  • 150ms: Opening an application window.

We are careful to maximize throughput of batch operations such as project-wide search. Memory consumption is kept within a low constant factor of the size of the project and open buffer set, but we trade memory for speed and extensibility so long as memory requirements are reasonable.

Extensibility

Xray gives developers control over their own tools.

We expose convenient and powerful APIs to enable users to add non-trivial functionality to the application. We balance the power of our APIs with the need to ensure the responsiveness, stability, and security of the application as a whole. We avoid leaking implementation details and use versioning where possible to enable sustained rapid development without destabilizing the package ecosystem.

Web compatibility

Editing on GitHub feels like editing in Xray.

We want to provide a full-featured editor experience that can be used from within a browser. This will ultimately help us provide a more unified experience between GitHub.com and Xray and give us a stronger base of stakeholders in the core editing technology.

Architecture

Martin Fowler defines software architecture as those decisions which are both important and hard to change. Since these decisions are hard to change, we need to be sure that our foundational priorities are well-served by these decisions.

(Architecture diagram)

The UI is built with web technology

Web tech adds a lot of overhead, which detracts from our top priority of high performance. However, web standards are also the best approach we know of for delivering a cross-platform, extensible user interface. Atom proved that developers want to add non-trivial UI elements to their editor, and we still see web technologies as the most viable way to offer them that ability.

The fundamental question is whether we can gain the web's benefits for extensibility while still meeting our desired performance goals. Our hypothesis is that it's possible, with the right architecture.

Core application logic is written in Rust

While the UI will be web-based, the core of the application is implemented in a server process written in Rust. We place as much logic as possible in a library crate located in /xray_core, then expose this logic as a server when running Xray on the desktop (/xray_server) and as a WebAssembly library running on a worker thread when running Xray in the browser (/xray_wasm). We communicate between the UI and the backend process via JSON RPC.
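As a hedged sketch of what a couple of messages in such a JSON RPC protocol might look like (using serde/serde_json; the message names and fields here are hypothetical, not Xray's actual protocol):

// Illustrative request/response messages exchanged between the web UI and the
// Rust server process; serialized as JSON on both sides.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
enum Request {
    OpenBuffer { path: String },
    Edit { buffer_id: u64, start: usize, end: usize, text: String },
}

#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "type")]
enum Response {
    BufferOpened { buffer_id: u64 },
    Error { message: String },
}

fn main() -> serde_json::Result<()> {
    // The JavaScript UI would send a payload like this over the wire:
    let raw = r#"{ "type": "OpenBuffer", "path": "src/main.rs" }"#;
    let request: Request = serde_json::from_str(raw)?;
    println!("server received: {:?}", request);
    let reply = serde_json::to_string(&Response::BufferOpened { buffer_id: 1 })?;
    println!("server replies:  {}", reply);
    Ok(())
}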

All of the core application code other than the view logic should be written in Rust. This will ensure that it has a minimal footprint to load and execute, and Rust's robust type system will help us maintain it more efficiently than dynamically typed code. A language that is fundamentally designed for multi-threading will also make it easier to exploit parallelism whenever the need arises, whereas JavaScript's single-threaded nature makes parallelism awkward and challenging.

Fundamentally, we want to spend our time writing in a language that is fast by default. It's true that it's possible to write slow Rust, and also possible to write fast JavaScript. It's also true that it's much harder to write slow Rust than it is to write slow JavaScript. By spending fewer resources on the implementation of the platform itself, we'll make more resources available to run package code.

I/O will be centralized in the server

The server will serialize buffer loads and saves on a per-path basis and maintain a persistent database of CRDT operations for each file. As edits are performed in windows, they will be streamed to the server process to be stored and echoed out to any other windows with the same buffer open. This will ensure that unsaved changes are always incrementally preserved in case of a crash or power failure, and it will preserve the history associated with a file indefinitely.
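As a rough illustration of the kind of record such a per-file operation log might contain (the names are hypothetical, not Xray's actual types):

// Hypothetical sketch of a per-file operation log entry; illustrative only.
#[derive(Debug)]
enum BufferOperation {
    Insert {
        replica_id: u32,   // which window or collaborator produced the edit
        insertion_id: u64, // id of the newly created insertion
        text: String,
    },
    Delete {
        replica_id: u32,
        insertion_id: u64,                    // insertion the deleted text belongs to
        offset_range: std::ops::Range<usize>, // range within that insertion
    },
}

fn main() {
    // The server would append records like these as edits stream in from windows.
    let log = vec![
        BufferOperation::Insert { replica_id: 1, insertion_id: 0, text: "hello world".into() },
        BufferOperation::Delete { replica_id: 1, insertion_id: 0, offset_range: 9..11 },
    ];
    println!("{:?}", log);
}

Replaying such a log reconstructs the buffer's CRDT state, which is what makes crash recovery and indefinite history cheap to provide.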

Early on, we should design the application process to be capable of connecting to multiple workspace servers, to facilitate real-time collaboration or editing files on a remote server by running a headless host process. To support these use cases, all code paths that touch the file system or spawn subprocesses will live in the server process. The UI will not make use of the I/O facilities provided by Electron, and will instead interact with the server via RPC.

Packages will run in a JavaScript VM in the server process

A misbehaving package should not be able to impact the responsiveness of the application. The best way to guarantee this while preserving ease of development is to activate packages on their own threads. We can run a worker thread per package or run packages in their own contexts across a pool of threads.

Packages can run code on the render thread by specifying versioned components in their package.json.

"components": {
  "TodoList": "./components/todo-list.js"
}

If a package called my-todos had the above entry in its package.json, it could request that the workspace attach that component by referring to myTodos.TodoList when adding an item. During package installation on the desktop, we can automatically update the V8 snapshot of the UI to include the components of every installed package. Components will only be dynamically loaded from the provided paths in development mode.

Custom views will only have access to the DOM and an asynchronous channel to communicate with the package's back end running on the server. APIs for interacting with the core application state and the underlying operating system will only be available within the server process, discouraging package authors from putting too much logic into their views. We'll use a combination of asynchronous channels and CRDTs to present convenient APIs to package authors within worker threads.

Text is stored in a copy-on-write CRDT

To fully exploit Rust's unique advantage of parallelism, we need to store text in a concurrency-friendly way. We use a variant of RGA called RGASplit, which is described in this research paper.

(CRDT diagram)

In RGASplit, the document is stored as a sequence of insertion fragments. In the example above, the document starts as a single insertion containing "hello world". We then introduce ", there" and "!" as additional insertions, splitting the original insertion into two fragments. To delete the "ld" at the end of "world" in the third step, we create another fragment containing just the "ld" and mark it as deleted with a tombstone.
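A minimal sketch of this fragment structure, with illustrative names rather than xray_core's actual definitions:

// Illustrative sketch of insertions and fragments; names are hypothetical.
use std::ops::Range;

#[allow(dead_code)]
struct Insertion {
    id: u64,
    text: String, // immutable once created
}

#[allow(dead_code)]
struct Fragment {
    insertion_id: u64,
    range: Range<usize>, // the slice of the original insertion this fragment covers
    deleted: bool,       // a tombstone: the fragment stays in the tree so logical positions remain resolvable
}

fn main() {
    // The third step of the example above: "ld" (offsets 9..11 of insertion 0) is tombstoned.
    let deleted_ld = Fragment { insertion_id: 0, range: 9..11, deleted: true };
    assert!(deleted_ld.deleted);
}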

Structuring the document in this way has a number of advantages.

  • Concurrent edits: Real-time collaboration works out of the box. Any thread can read or write its own replica of the document without diverging in the presence of concurrent edits.
  • Integrated non-linear history: To undo any group of operations, we increment an undo counter associated with any insertions and deletions that controls their visibility. This means we only need to store operation ids in the history rather than operations themselves, and we can undo any operation at any time rather than adhering to historical order.
  • Stable logical positions: Instead of tracking the location of markers on every edit, we can refer to stable positions that are guaranteed to be valid for any future buffer state. For example, we can mark the positions of all search results in a background thread and continue to interpret them in a foreground thread if edits are performed in the meantime.

Our use of a CRDT is similar to the Xi editor, but the approach we're exploring is somewhat different. Our current understanding is that in Xi, the buffer is stored in a rope data structure, then a secondary layer is used to incorporate edits. In Xray, the fundamental storage structure of all text is itself a CRDT. It's similar to Xi's rope in that it uses a copy-on-write B-tree to index all inserted fragments, but it does not require any secondary system for incorporating edits.

Derived state will be computed asynchronously

We should avoid implementing synchronous APIs that depend on open-ended computations of derived state. For example, when soft wrapping is enabled in Atom, we synchronously update a display index that maps display coordinates to buffer coordinates, which can block the UI.

In Xray, we want to avoid making these kinds of promises in our API. For example, we will allow the display index to be accessed synchronously after a buffer edit, but only provide an interpolated version of its state that can be produced in logarithmic time. This means it will be spatially consistent with the underlying buffer, but may contain lines that have not yet been soft-wrapped.

We can expose an asynchronous API that allows a package author to wait until the display layer is up to date with a specific version of the buffer. In the user interface, we can display a progress bar for any derived state updates that exceed 50ms, which may occur when the user pastes multiple megabytes of text into the editor.
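A hedged sketch of what such an asynchronous API could look like; DisplayLayer, BufferVersion, and when_up_to_date are hypothetical names, not Xray's actual interface:

// Hypothetical sketch of an interpolated display layer with an async catch-up API.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
struct BufferVersion(u64);

struct DisplayLayer {
    wrapped_through: BufferVersion, // last buffer version that has been fully soft-wrapped
}

impl DisplayLayer {
    // Synchronously available after every edit: spatially consistent with the
    // buffer, but some lines may not have been soft-wrapped yet.
    fn interpolated_snapshot(&self) -> BufferVersion {
        self.wrapped_through
    }

    // Resolves once background soft-wrapping has caught up with `version`.
    async fn when_up_to_date(&self, version: BufferVersion) {
        // Placeholder: a real implementation would await a background wrapping task.
        let _ = version;
    }
}

fn main() {
    let layer = DisplayLayer { wrapped_through: BufferVersion(41) };
    println!("interpolated as of {:?}", layer.interpolated_snapshot());
    // A package would `layer.when_up_to_date(BufferVersion(42)).await` before
    // reading fully soft-wrapped coordinates.
}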

React will be used for presentation

By using React, we completely eliminate the view framework as a concern that we need to deal with and give package authors access to a tool they're likely to be familiar with. We also raise the level of abstraction above basic DOM APIs. The risk of using React is, of course, that it is not standardized and could have breaking API changes. To mitigate this risk, we will require packages to declare which version of React they depend on. We will attempt to use this version information to provide shims to older versions of React when we upgrade the bundled version. When it's not possible to shim breaking changes, we'll use the version information to present a warning.

Styling will be specified in JS

CSS is a widely-known and well-supported tool for styling user interfaces, which is why we embraced it in Atom. Unfortunately, the performance and maintainability of CSS degrade as the number of selectors increases. CSS also lacks good tools for exposing a versioned theming API and applying programmatic logic such as altering colors. Finally, the browser does not expose APIs for being notified when computed styles change, making it difficult to use CSS as a source of truth for complex components. For a theming system that performs well and scales, we need more direct control. We plan to use a CSS-in-JS approach that automatically generates atomic selectors so as to keep our total number of selectors minimal.

Text is rendered via WebGL

In Atom, the vast majority of computation of any given frame is spent manipulating the DOM, recalculating styles, and performing layout. To achieve good text rendering performance, it is critical that we bypass this overhead and take direct control over rendering. Like Alacritty and Xi, we plan to employ OpenGL to position quads that are mapped to glyph bitmaps in a texture atlas.

There isn't always a 1:1 relationship between code units inside a JavaScript string and glyphs on screen. Some characters (code points) are expressed as a surrogate pair of two 16-bit code units, but this situation is simple to detect by examining the numeric ranges of the code units. In other cases, the correspondence between code units and glyphs is less straightforward to determine. If the current font and/or locale depends on ligatures or contextual alternates to render correctly, determining the correspondence between code points and glyphs requires support for complex text shaping that references metadata embedded in the font. Bi-directional text complicates the situation further.

For now, our plan is to detect the presence of characters that may require such complex text shaping and fall back to rendering with HTML on the specific lines that require these features. This will enable us to support scripts such as Arabic and Devanagari. For fonts like FiraCode, which include ligatures for common character sequences used in programming, we'll need a different approach. One idea would be to perform a limited subset of text-shaping that just handles ligatures, so as to keep performance high. Another approach that would only work on the desktop would be to use the platform text-shaping and rasterization APIs in this environment.
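As a rough illustration of this detection (assuming a coarse per-line check is enough; the exact ranges would need refinement), a line could be scanned for characters from scripts that may require complex shaping:

// Hedged sketch: decide whether a line may need complex shaping and should
// fall back to HTML rendering. The character ranges here are approximate.
fn needs_complex_shaping(line: &str) -> bool {
    line.chars().any(|c| {
        let c = c as u32;
        (0x0590..=0x08FF).contains(&c)        // Hebrew, Arabic, and related right-to-left blocks
            || (0x0900..=0x097F).contains(&c) // Devanagari
            || (0xFB1D..=0xFDFF).contains(&c) // presentation forms (Hebrew/Arabic) and Arabic ligatures
            || (0xFE70..=0xFEFF).contains(&c) // Arabic presentation forms B
    })
}

fn main() {
    assert!(!needs_complex_shaping("let x = 42;"));
    assert!(needs_complex_shaping("// تعليق"));
}

Lines for which this returns false keep the fast WebGL path; lines for which it returns true fall back to the slower but fully correct HTML path.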

Bypassing the DOM means that we'll need to implement styling and text layout ourselves. That is a high price to pay, but we think it will be worth it to bypass the performance overhead imposed by the DOM.

Development process

Experiment

At this phase, the code is focused on learning. Whatever code we write should be production-quality, but we don't need to support everything yet. We can defer features that don't contribute substantially to learning.

Documentation-driven development

Before coding, we ask ourselves whether the code we're writing can be motivated by something that's written in the guide. The right approach here will always be a judgment call, but let's err on the side of transparency and see what happens.

Disciplined monorepo

All code related to Xray should live in this repository, but intra-repository dependencies should be expressed in a disciplined way to ensure that a one-line docs change doesn't require us to rebuild the world. Builds should be fingerprinted on a per-component basis, and we should aim to keep components granular.

Contributing

Interested in helping out? Welcome! Check out the CONTRIBUTING guide to get started.

xray's People

Contributors

aerijo, arcanemagus, arun-is, b-fuze, chgibb, deswurstes, dirk, eyelash, felixrabe, haacked, janczer, jasonrudolph, joshaber, jpunt, karlbecker, leroix, matthewwithanm, max-sixty, maxbrunsfeld, moritzkn, nitrino, rleungx, yajamon


xray's Issues

Tree history

I'm not sure whether this is a "can be addressed later" issue or a "should be considered in early designs" issue, hence this.

One of the distinctive features of advanced editors (e.g. Vim) is the history tree: recognising that undo/edit history is sometimes not linear and handling that natively. Atom doesn't do that, but perhaps Xray could? It can still be presented as linear by default.

This might also be a consideration with a multi-player editor, i.e. should each user have their own history, the global history, or some mixture of the two?

Anyway, is this something you've thought about or will be?

Suggestion: consider using React "Native"

I was reading through the docs and intended feature support, and on first appearances I think the web version of React Native could be a good fit. And if there are rough edges that you come across, I'd be interested in smoothing them out.

There's a lot of out-of-the-box functionality and stronger guarantees. Automatic RTL layout of the chrome GUI (with a runtime toggle, too). Styles are resolved deterministically; your current CSS-in-JS library was based on RNW but doesn't provide this guarantee. The style mechanism also accounts for dynamic styles in the render function, and for use outside of React via setNativeProps (roughly equivalent to using findDOMNode and mutating directly). The components also have built-in mechanisms for measuring or responding to layout. And plenty more.

If you're interested in looking into it, or have any questions/feedback, I'd be happy to help

Help wanted: Optimize fragment identifiers

This is an optimization that's in my head, but I haven't performed the measurements necessary to prove that it's really worth the complexity. If you want to work on this, helping with those measurements would be an important first step.

Background

Xray's buffer is a CRDT that stores every piece of text ever inserted in a giant copy-on-write B-tree. Insertions are immutable, and each insertion is initially added to the tree in its whole form. Insertions are subsequently split into fragments whenever new text is inserted within an existing piece of inserted text. As a result, the tree ends up containing a sequence of interleaved fragments, each originating from a different original insertion.

Retaining all of these fragments and their connection back to the original insertion allows us to describe positions in the buffer logically, in terms of an (insertion_id, offset) pair that uniquely describes a position inside the immutable insertions.

To determine where such a logical position lands in the document, we need to determine which fragment of the position's specified insertion contains the position's specified offset. To enable this, each insertion is associated with an auxiliary tree via the Buffer::insertions map. These trees contain only fragments from the original insertion, and map offset ranges to FragmentIds. Once you find a fragment id for a particular (insertion_id, offset) pair, you can use it to perform a lookup in the main fragments tree, which contains the full text of the document.
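To make the two-step lookup concrete, here is an illustrative sketch; the types and method names are placeholders, not xray_core's actual API, and a Vec stands in for the copy-on-write B-tree:

// Hedged sketch of resolving (insertion_id, offset) to a FragmentId.
use std::collections::HashMap;
use std::ops::Range;

type InsertionId = u64;
type FragmentId = Vec<u16>; // dense, ordered identifier (see "Dense ids" below)

struct Buffer {
    // For each insertion: offset ranges mapped to the fragment covering them.
    insertions: HashMap<InsertionId, Vec<(Range<usize>, FragmentId)>>,
}

impl Buffer {
    // Step 1: find the fragment id containing (insertion_id, offset).
    // Step 2 (not shown): look that id up in the main fragments tree to find
    // where the fragment currently sits in the document.
    fn fragment_id_for(&self, insertion_id: InsertionId, offset: usize) -> Option<&FragmentId> {
        self.insertions
            .get(&insertion_id)?
            .iter()
            .find(|(range, _)| range.contains(&offset))
            .map(|(_, id)| id)
    }
}

fn main() {
    let mut insertions = HashMap::new();
    insertions.insert(0, vec![(0..5, vec![1u16]), (5..11, vec![3u16])]);
    let buffer = Buffer { insertions };
    assert_eq!(buffer.fragment_id_for(0, 7), Some(&vec![3u16]));
}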

See the presentation I gave at InfoQ 2017 for a clearer explanation.

Dense ids

In the teletype-crdt JS library, we map from an (insertion_id, offset) pair to a fragment in the global document by making our fragments members of two splay trees simultaneously. So you can look up the tree containing all the fragments of a single insertion and seek to the fragment containing a particular offset, then use different tree pointers on that same fragment belonging to the main document tree to resolve its location. It's kind of weird, and the talk may clarify this somewhat, but it may not be mandatory to understand. In Xray, we're storing fragments in a persistent B-tree rather than a splay tree, which means we really can't associate fragments with parent pointers. This means we'll need to find a new strategy for understanding where a particular insertion fragment lives inside the main tree.

In Xray, we instead rely on representing fragment ids as dense, ordered identifiers. "Dense" essentially means that between any two of these identifiers, there should always be room to allocate another identifier. You could think of infinite precision floating point numbers as satisfying this property. Between 1 and 2 there is 1.5, and between 1.5 and 2 there is 1.75, and so on. The inspiration for this approach came from a different text-based CRDT called Logoot.

Currently, we represent a FragmentId as a vector of 16-bit integers. To compare ids, we simply compare the elements of the two vectors lexicographically. When we want to create a new id that is ordered in between two existing ids, we search for the first index at which the corresponding elements of the two vectors differ by at least one, and then allocate a new integer randomly so that the resulting id falls between the two existing ids.
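A rough, non-random sketch of that allocation scheme, purely for illustration (the real implementation chooses values randomly and is more careful; nothing here is Xray's actual code):

// Allocate a dense id strictly between `left` and `right` (left < right assumed,
// and ids are assumed to be well-formed, e.g. they don't end in the minimum element).
fn id_between(left: &[u16], right: &[u16]) -> Vec<u16> {
    let mut new_id = Vec::new();
    for i in 0.. {
        let l = left.get(i).copied().unwrap_or(0);
        let r = right.get(i).copied().unwrap_or(u16::MAX);
        if r.saturating_sub(l) > 1 {
            // Room at this index: pick something strictly between l and r.
            // (A real implementation would pick randomly, per the LSEQ ideas below.)
            new_id.push(l + (r - l) / 2);
            return new_id;
        }
        // Not enough room yet: copy the left element and descend one level deeper.
        new_id.push(l);
    }
    unreachable!()
}

fn main() {
    let a = vec![1u16];
    let b = vec![2u16];
    let mid = id_between(&a, &b);
    assert!(a < mid && mid < b); // Vec<u16> compares lexicographically
    println!("{:?} < {:?} < {:?}", a, mid, b);
}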

The LSEQ strategy, discussed in a followup paper, seems like it could be a good source of inspiration for optimizing these ids so as to minimize their length. The longer the ids, the more memory they consume and the longer they take to compare. The linked paper has a really good discussion of the performance profile of allocating these ids based on different insertion patterns in a sequence.

One key idea in the paper is to allocate ids closer to the left or right based on a random strategy at each index of the vector. Another idea is to increase the base of each number in the id, so that the key space expands exponentially as an id gets deeper. It's all in the paper.

The next valuable optimization would build on the LSEQ work and avoid allocation whenever ids are below a certain length. We can do this with unsafe code that implements a tagged pointer. Basically, if the id is short enough, we should be able to stuff it into the top 63 bits of a pointer. If it exceeds that quantity, we can interpret those bits as a pointer to a heap-allocated vector of u16s. The comparison code can then be polymorphic over the two representations, and hopefully most of the time we wouldn't need the allocation.

It will be an experiment. Tying it to some sort of benchmarks related to editing performance will be important to ensuring this idea warrants the complexity.

memo_core filesystem sync status?

First thanks for Atom, Xray, Memo, and whatever's next.

At the moment I'm particularly interested in memo_core and how it plans to sync with the filesystem. The "Update" posts over the summer described lots of work related to filesystem sync. But looking at the memo README, it seems like what is in memo_core right now is just the "light client" as described here:

Library: Memo provides a reference library implementation written in Rust that produces and consumes the Memo protocol messages to synchronize working trees. We plan to ship a "light client" version of the library that compiles to WebAssembly and exposes a virtual file system API

Is the next step to write the code that syncs this model with the filesystem? Any idea how long until such code appears? Also, will it depend on the underlying filesystem being a git repository? This seems to suggest that it might:

as well as a full version based on Libgit2 that synchronizes with a full replica on the local file system.

But I think it would be really useful to be able to sync Memo with any local filesystem (git or not). In the case where git isn't present, it wouldn't make sense to sync Memo state with multiple clients, because there would be no shared git commit... but it would still be useful to have a single client reading Memo state that is synced with filesystem changes made outside of the Memo API.

Incomplete instructions and dependencies for build

I tried to build xray using cd xray_electron && npm install on my Ubuntu 16.04 machine. After solving the errors mentioned in issue #23 by installing cargo and llvm/clang, I am stuck with this error:

error: failed to run custom build command for `napi v0.1.0 (file:///home/arpit/Projects/Personal/xray/napi)`
process didn't exit successfully: `/home/arpit/Projects/Personal/xray/xray_node/target/release/build/napi-53ced4d986143395/build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-env-changed=NODE_INCLUDE_PATH
cargo:rerun-if-changed=src/sys/bindings.cc
cargo:rerun-if-changed=src/sys/bindings.h
cargo:rerun-if-changed=src/sys/bindings.rs
cargo:rerun-if-changed=src/sys/mod.rs
cargo:rerun-if-changed=src/sys/node8.rs
cargo:rerun-if-changed=src/sys/node9.rs
cargo:rustc-cfg=node8
TARGET = Some("x86_64-unknown-linux-gnu")
OPT_LEVEL = Some("3")
TARGET = Some("x86_64-unknown-linux-gnu")
HOST = Some("x86_64-unknown-linux-gnu")
TARGET = Some("x86_64-unknown-linux-gnu")
TARGET = Some("x86_64-unknown-linux-gnu")
HOST = Some("x86_64-unknown-linux-gnu")
CXX_x86_64-unknown-linux-gnu = None
CXX_x86_64_unknown_linux_gnu = None
HOST_CXX = None
CXX = None
HOST = Some("x86_64-unknown-linux-gnu")
TARGET = Some("x86_64-unknown-linux-gnu")
HOST = Some("x86_64-unknown-linux-gnu")
CXXFLAGS_x86_64-unknown-linux-gnu = None
CXXFLAGS_x86_64_unknown_linux_gnu = None
HOST_CXXFLAGS = None
CXXFLAGS = None
DEBUG = Some("false")
running: "c++" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "/home/arpit/.nvm/versions/node/v8.9.3/include/node" "-Wall" "-Wextra" "-std=c++0x" "-Wno-unused-parameter" "-o" "/home/arpit/Projects/Personal/xray/xray_node/target/release/build/napi-a063ab0a46fb41ea/out/src/sys/bindings.o" "-c" "src/sys/bindings.cc"
cargo:warning=src/sys/bindings.cc: In function ‘v8::Local<v8::Value> V8LocalValueFromJsValue(napi_value)’:
cargo:warning=src/sys/bindings.cc:9:31: error: ‘memcpy’ was not declared in this scope
cargo:warning=   memcpy(&local, &v, sizeof(v));
cargo:warning=                               ^
exit code: 1

--- stderr
thread 'main' panicked at '

Internal error occurred: Command "c++" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "/home/arpit/.nvm/versions/node/v8.9.3/include/node" "-Wall" "-Wextra" "-std=c++0x" "-Wno-unused-parameter" "-o" "/home/arpit/Projects/Personal/xray/xray_node/target/release/build/napi-a063ab0a46fb41ea/out/src/sys/bindings.o" "-c" "src/sys/bindings.cc" with args "c++" did not execute successfully (status code exit code: 1).

', /home/arpit/.cargo/registry/src/github.com-1ecc6299db9ec823/cc-1.0.4/src/lib.rs:1984:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.

child_process.js:644
    throw err;
    ^

Error: Command failed: cargo rustc --release -- -Clink-args="-undefined dynamic_lookup -export_dynamic"
    at checkExecSyncError (child_process.js:601:13)
    at Object.execSync (child_process.js:641:13)
    at Object.<anonymous> (/home/arpit/Projects/Personal/xray/napi/scripts/napi.js:31:8)
    at Module._compile (module.js:635:30)
    at Object.Module._extensions..js (module.js:646:10)
    at Module.load (module.js:554:32)
    at tryModuleLoad (module.js:497:12)
    at Function.Module._load (module.js:489:3)
    at Function.Module.runMain (module.js:676:10)
    at startup (bootstrap_node.js:187:16)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build-release: `napi build --release`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] build-release script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/arpit/.npm/_logs/2018-03-07T06_10_53_773Z-debug.log
npm WARN [email protected] No repository field.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `npm run build-release`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/arpit/.npm/_logs/2018-03-07T06_10_53_993Z-debug.log
➜  xray_electron git:(master) ✗ 

Please update the requirements and instructions to build on different platforms

new logo/icon proposal

Good day sir, I am a graphic designer and I am interested in designing a logo for your good project. I will be doing it as a gift, for free. I just need your permission first before I begin my design. Hoping for your positive feedback. Thanks!

Electron version

4.0.0-nightly.20180821 works. What's up with the use of version 2?

Windows 10 build error xray_electron\node_modules\napi\Cargo.toml can't be found

Following the instructions in the contribution guide:

  1. Install Node 8.9.3
  2. Install Rust
  3. git clone this repository
  4. cd into xray_electron
  5. run npm install

npm install exits saying that I:\xray\xray\xray_electron\node_modules\napi\Cargo.toml does not exist.
I've checked inside that folder and it does not exist.

No rush for me to get anything up and running just mostly curious 🙂

Versions

I:\xray\xray\xray_electron>node --version
v8.9.3
I:\xray\xray\xray_electron>rustc --version
rustc 1.24.1 (d3ae9a9e0 2018-02-27)
Full output
I:\xray\xray\xray_electron>npm install

> [email protected] install I:\xray\xray\xray_electron\node_modules\xray
> npm run build-release


> [email protected] build-release I:\xray\xray\xray_electron\node_modules\xray
> napi build --release

error: failed to load source for a dependency on `napi`

Caused by:
  Unable to update file:///I:/xray/xray/xray_electron/node_modules/napi

Caused by:
  failed to read `I:\xray\xray\xray_electron\node_modules\napi\Cargo.toml`

Caused by:
  The system cannot find the path specified. (os error 3)
child_process.js:644
    throw err;
    ^

Error: Command failed: cargo rustc --release -- -Clink-args="-undefined dynamic_lookup -export_dynamic"
    at checkExecSyncError (child_process.js:601:13)
    at Object.execSync (child_process.js:641:13)
    at Object.<anonymous> (I:\xray\xray\napi\scripts\napi.js:31:8)
    at Module._compile (module.js:635:30)
    at Object.Module._extensions..js (module.js:646:10)
    at Module.load (module.js:554:32)
    at tryModuleLoad (module.js:497:12)
    at Function.Module._load (module.js:489:3)
    at Function.Module.runMain (module.js:676:10)
    at startup (bootstrap_node.js:187:16)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build-release: `napi build --release`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build-release script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\bene\AppData\Roaming\npm-cache\_logs\2018-03-06T22_08_16_150Z-debug.log
npm WARN [email protected] No repository field.

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] install: `npm run build-release`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\bene\AppData\Roaming\npm-cache\_logs\2018-03-06T22_08_16_320Z-debug.log

Question about Electron restrictions

microsoft/vscode#10121 is the most upvoted issue on VS Code at the moment, and it seems to be blocked by a shortcoming of Electron (multiple windows which share the same execution context; see microsoft/vscode#10121 (comment)).

I am asking whether this would be a problem for Xray as well. Since I believe this is mostly a UI issue, I suspect Electron could cause problems here too.

I am also asking this now so that this feature can be considered early in development, rather than becoming harder to implement later.

Consider using Svelte instead of React

Svelte has some advantages over React:

  • It's compiled, self-contained, and has no runtime library (smaller transpiled payloads). This also makes its components compatible with any other component library and means it won't cause conflicts.
  • It follows standards (HTML, CSS, JavaScript, and web components).
  • It's significantly faster and uses less memory. It also produces isomorphic JavaScript out of the box.
  • It's a better development experience, with superior and more cohesive idioms followed by the community.
  • Its API is stable, yet there's plenty of development. See the Sapper project (https://sapper.svelte.technology/) and compare it to Next.js.

Here's the repl: https://svelte.technology/repl?version=1.57.2&example=hello-world

If this seems interesting, I encourage you to have a conversation with Rich Harris on Gitter (https://gitter.im/sveltejs/svelte). He introduced tree-shaking with his Rollup package. He also wrote Buble (https://buble.surge.sh/), which yields faster, more memory-efficient JS compared to Babel.

TypeScript/Flow as options for plugin ecosystem

This is purely curiosity, and I know someone is bound to ask eventually, but are there plans to support the plugin ecosystem using a JavaScript type system such as TypeScript, JSDoc, or Flow?

The README mentions that one of the motivations for using Rust in the core editor is that, over time, dynamically typed languages such as JavaScript (without Flow/JSDoc) have been shown to take at least somewhat more effort to maintain than a statically typed language like Rust.

Quoted from Readme "Core Application Logic is Written in Rust"

All of the core application code other than the view logic should be written in Rust. This will ensure that it has a minimal footprint to load and execute, and Rust's robust type system will help us maintain it more efficiently than dynamically typed code.

So, are there plans to leverage existing typing tools for the JavaScript components of this project?

This is purely curiosity, as I understand there are many valid arguments on both sides which are each weighed differently from person to person. And it is also prompted by recent significant improvements to JSX typing support in TypeScript.

Thanks!

Starting xray results in an empty window

I wanted to try xray out and used the build command to create the release binaries. But running XRAY_SRC_PATH=. ./scripts/xray . results only in an empty Electron window on my machine.

The menu items do exist but are not visible; you can click on them, however, and the menu pops up showing all entries as usual. The debug tools notify me about a security issue in the renderer process. Nothing else is displayed in the console.

Other Electron programs work fine (mainly Atom and VSCode).

Help wanted: Benchmarks in xray_core

It would be great to get a PR setting up benchmarks on xray_core in the canonical Rust manner. You can use your imagination on the kinds of operations we want to test, but here are some ideas:

  • Making large numbers of edits
  • Moving large numbers of cursors in a document containing large numbers of edits

Just getting a basic framework in place with some non-trivial benchmark would be helpful.
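For example, a benchmark file along these lines could be a starting point. It uses the Criterion crate, and a plain String stands in for the real buffer type, since xray_core's editing API may look different:

// benches/buffer.rs — minimal Criterion benchmark sketch; the edit loop is a
// placeholder for calls into xray_core's buffer API.
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_many_edits(c: &mut Criterion) {
    c.bench_function("10k single-character insertions", |b| {
        b.iter(|| {
            let mut text = String::new();
            for i in 0..10_000 {
                // A real benchmark would call the buffer's edit method here.
                text.insert(i % (text.len() + 1), 'x');
            }
            text
        })
    });
}

criterion_group!(benches, bench_many_edits);
criterion_main!(benches);

A similar benchmark could create a buffer containing many edits, add a large number of cursors, and measure the cost of moving them all at once.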

Help wanted: Bi-directional text and Arabic shaping support

Our text rendering strategy cuts corners to achieve maximal performance in a web-based environment. We render glyphs to a texture atlas and then composite them on the GPU with a texture-mapped quad representing each character on screen. We currently composite characters with a naive strategy, just laying them out one after another from left to right.

Bi-directional text

Xray is a code editor, and most programming languages assume a left-to-right layout. We're not interested in supporting authoring an entire document with right-to-left lines. But to support programmers working in right-to-left locales, we do want the ability to embed strings of right-to-left text within left-to-right text, such as within strings and comments. This is called bi-directional text.

Interestingly, text in right-to-left languages is still stored in typing order ("logical order") in strings. There's an official algorithm defined by the Unicode standards body (the Unicode Bidirectional Algorithm) for transforming fragments of a string from this logical order to the order in which the string should be displayed on screen. Anyone who wants to take on this issue will probably need to wrap their head around the basics of how it works.

Arabic shaping

When we tested using HarfBuzz to perform a general text-shaping pass on every line, we ran into a couple problems:

  • HarfBuzz is written in C, and it's not currently possible to compile a single WebAssembly module that contains both Rust and C/C++. At least we couldn't figure it out. Let us know if we're wrong.
  • When we compiled HarfBuzz to its own module and tested running strings through it, it was taking between 4ms and 20ms to process 50 lines of 100 characters each, depending on the font.

Even if we solve these issues, there's still the issue of rendering glyphs. We currently render glyphs via an HTML5 canvas. In Electron, this simplifies a lot of cross-platform headaches related to loading fonts, dealing with fallbacks, etc. For the WebAssembly case, this is our only option unless we want to render everything via FreeType, which seems extremely complex.

However, canvas isn't actually capable of rendering glyphs. It can only render strings. So even if HarfBuzz or a platform-specific text shaping library returns a list of glyphs to render, we can't actually render an arbitrary glyph with canvas. Some glyphs in some languages are only accessible via context-specific combinations of letters.

Long story short, skipping generalized text shaping saves us a lot of complexity and performance overhead in the common case of editing code. What does it lose us?

  • Support for rendering Arabic scripts
  • Support for rendering Indic scripts

Both of these scripts rely on contextual alternates. For example, Arabic characters can correspond to one of four different glyphs depending on their position within a word. Indic scripts combine syllables into individual glyphs in ways I don't really understand.

Indic script support is out of scope for this issue, but it turns out we can support Arabic without doing full-blown text shaping and font rasterization. Unicode happens to define "presentational" code points, which each represent a single contextual variant of a character. This great article from the Mapbox GL JS team describes how a normal string containing Arabic can be transformed to use these presentational forms, which we can then rasterize with our naive character strategy without issues.

What a solution might look like

Assuming we can't figure out how to compile C and Rust in the same WebAssembly module, we'd like a pure Rust solution that takes care of the bi-directional text transformation and the substitution of Arabic presentational characters in the output string.
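Purely as a sketch of the required inputs and outputs (every name here is hypothetical, intended only to show the shape of the data, not a real design), such a solution might expose something like:

// Hypothetical shape of a pure-Rust line shaping step: reorder bidi runs,
// substitute Arabic presentation forms, and keep a two-way column mapping.
struct ShapedLine {
    display_text: String,           // text to rasterize, in visual order
    logical_to_display: Vec<usize>, // where each stored column is drawn on screen (for cursor placement)
    display_to_logical: Vec<usize>, // inverse mapping (for translating clicks back to buffer columns)
}

fn shape_line(logical_text: &str) -> ShapedLine {
    // Placeholder identity mapping. A real implementation would run the Unicode
    // bidi algorithm and substitute Arabic presentational code points here.
    let columns: Vec<usize> = (0..logical_text.chars().count()).collect();
    ShapedLine {
        display_text: logical_text.to_string(),
        logical_to_display: columns.clone(),
        display_to_logical: columns,
    }
}

fn main() {
    let line = shape_line("let greeting = \"hello\";");
    assert_eq!(line.logical_to_display.len(), line.display_to_logical.len());
    println!("{}", line.display_text);
}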

There seem to be several implementations of "arabic shaping" floating around on the internet that could serve as inspiration, though a lot of them are GPL-licensed, so be careful about derivative works since Xray is MIT-licensed.

An additional challenge is that it won't be sufficient to simply transform the text. We'll also need to include enough metadata to understand where the cursor should be rendered inside a line containing bi-directional text and Arabic substitutions. I haven't thought deeply about how this mapping would be structured, but we essentially need to efficiently translate back and forth between columns in the input text and the output text.

A great solution would also contribute some kind of documentation explaining what you learn researching this problem.

Error polling incoming connection: frame size too big

Environment:

  • xray version: bf5e5bc
  • Ubuntu 16.04.4 GNOME
  • Client Chrome v66

command:

XRAY_SRC_PATH=. script/xray_debug -H -l 4321 .

log

Error polling incoming connection: frame size too big
Error polling incoming connection: frame size too big
Error polling incoming connection: frame size too big
Error polling incoming connection: frame size too big
Error polling incoming connection: frame size too big
Error polling incoming connection: frame size too big
Error sending message to client on TCP socket: Connection reset by peer (os error 104)

Bi-directional text

I noticed in this week's update, this note on not supporting bi-directional text in the short term:

No support for bi-directional text. This isn't a deal-breaker in the short term, since all of the dominant programming languages are based on latin scripts.

While it is true that all of the dominant programming languages are based on Latin glyphs for their keywords, more and more languages support Unicode identifiers (including Haskell, Elixir, and JavaScript), and most languages support full Unicode string literals these days.

I don't think that bi-directional text needs to be addressed in this experiment. However, I do feel that supporting the features that programming languages themselves support is important enough to be given high priority if this technology progresses past "experimental".

DOM Fallback Implementation.

Hi,

I'm using relatively old and obscure hardware (an AMD A8-3850 APU), and have noticed that WebGL 2 is not available in Electron/Chrome. In the future, will Xray support some sort of DOM fallback for text_plane.js? If not, how difficult would it be to utilize Atom's current text-editor*.js implementation to create a fallback DOMRenderer class for TextPlane?

IME support

Have you found solutions for languages that require IME support?

My impression is that rendering the text yourself is incompatible with supporting IMEs with the current Electron API.

[Memo] Adding a file to a subdirectory works, but reading it fails

Example code showing the issue is at https://gist.github.com/probablycorey/3a673dd08aea886ec3785161d646e46e.

I can add a file to a subdirectory (and I can verify that it worked by calling tree.entries()) but when I call getText using the new fileId I get this error.

    invalid type: null, expected a string
      at request (webpack:/memo/src/index.ts?:61:15)
      at WorkTree.openTextFile (webpack:/memo/src/index.ts?:178:24)
      at VirtualTree.getContent (lib/VirtualTree.ts:162:36)

Strategy for rendering glyphs via WebGL

Hi everyone. I'm very glad to see a project like this, and Xi. I'm very interested in which strategy you will choose for high-quality rendering of glyphs.

There are many solutions for this. Which is the most preferable for your goals?

1. Rendering via precomputed multichannel signed distance fields (MSDF). I think this is the fastest approach.


With an LCD subpixel antialiasing shader.

2. Rendering via analytic arc-approximation SDFs, like GLyphy.

3. Vector textures. Paper

4. The Loop-Blinn curve-filling technique (patented), or a variation of this technique without AA via partial derivatives, as in Pathfinder's approach.

5. Something new?

File finder UI doesn't update when cache is updated

Repro:

  • Open the app: env XRAY_SRC_PATH=(pwd) script/xray_debug .
  • Quickly search for the phrase "Readme" (not all Readme files will show up)
  • After a while, delete a character to trigger re-searching the cache and notice all "README" references show up

Memo: include `path` in Entry

The current API leaves it up to clients to reconstruct the full path for entries, either using WorkTree.pathForFileId or by doing their own bookkeeping. This is OK, but in an ideal world we'd push that bookkeeping down into Memo instead of relying on clients to do it themselves.

Assuming there is some cost overhead such that Memo wouldn't want to always provide the path, one possibility would be to make it another option clients can pass to WorkTree.entries:

tree.entries({includeFullPath: true})

Suggestion: Don't use overloaded Point and Range terms in API

One thing we ran into problems with in the original Atom API was the Point and Range classes. Since the browser APIs contain classes with the same names, we ran into naming collisions that caused issues for those attempting to use the Atom API. I figured I would record this piece of tribal knowledge here so that we can avoid it in Xray.

Suggestions

Alternatives for Point:

  • Location
  • Coordinate
  • Position

Alternatives for Range:

  • Span
  • Section
  • Bounds
  • Dimension
  • Limits

Unable to build on macOS

When I execute npm install then npm start I get the following error:

Compiling xray_node v0.1.0 (file:///Users/jrothberg/Repositories/xray/xray_node)
error[E0463]: can't find crate for napi
--> src/lib.rs:1:1
|
1 | extern crate napi;
| ^^^^^^^^^^^^^^^^^^ can't find crate

error: aborting due to previous error

If you want more information on this error, try using "rustc --explain E0463"
error: Could not compile xray_node.

To learn more, run the command again with --verbose.
child_process.js:614
throw err;
^

Error: Command failed: cargo rustc --release -- -Clink-args="-undefined dynamic_lookup -export_dynamic"
at checkExecSyncError (child_process.js:574:11)
at Object.execSync (child_process.js:611:13)
at Object. (/Users/jrothberg/Repositories/xray/napi/scripts/napi.js:31:8)
at Module._compile (module.js:660:30)
at Object.Module._extensions..js (module.js:671:10)
at Module.load (module.js:573:32)
at tryModuleLoad (module.js:513:12)
at Function.Module._load (module.js:505:3)
at Function.Module.runMain (module.js:701:10)
at startup (bootstrap_node.js:190:16)

I am building with Rust 1.24.1 (I have also tried building with nightly and get the same result).

Proton-Native instead of Electron

Wouldn't having the UI built using Proton Native be better than using Electron? Proton Native claims that we can create native components using it.

Error polling incoming connection: frame size too big

I am getting when trying out Xray in the browser:

Error sending message to client on TCP socket: Protocol wrong type for socket (os error 41)
Error polling incoming connection: frame size too big
Error polling incoming connection: frame size too big
Error sending message to client on TCP socket: Broken pipe (os error 32)
Error polling incoming connection: frame size too big

Also, some keys don't seem to work; e.g. I cannot enter a new line.

Fails to build on Mac OS High Sierra

script/build log

Ozzies-iMac:xray ozziepeck$ script/build
yarn install v1.5.1
warning ../../../package.json: No license field
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
[3/4] 🔗  Linking dependencies...
[4/4] 📃  Building fresh packages...
✨  Done in 17.81s.
/Users/ozziepeck/desktop/xray
yarn install v1.5.1
warning ../../../package.json: No license field
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
[3/4] 🔗  Linking dependencies...
[4/4] 📃  Building fresh packages...
success Saved lockfile.
✨  Done in 21.47s.
/Users/ozziepeck/desktop/xray
   Compiling unicode-xid v0.1.0
   Compiling void v1.0.2
   Compiling libc v0.2.40
   Compiling lazy_static v1.0.0
   Compiling ucd-util v0.1.1
   Compiling regex v0.2.10
   Compiling utf8-ranges v1.0.0
   Compiling serde v1.0.39
   Compiling itoa v0.4.1
   Compiling dtoa v0.4.2
   Compiling num-traits v0.2.2
   Compiling strsim v0.6.0
   Compiling unreachable v1.0.0
   Compiling proc-macro2 v0.3.6
   Compiling regex-syntax v0.5.5
   Compiling memchr v2.0.1
   Compiling thread_local v0.3.5
   Compiling quote v0.5.1
   Compiling aho-corasick v0.6.4
   Compiling syn v0.13.1
   Compiling serde_json v1.0.15
   Compiling serde_derive_internals v0.23.1
   Compiling serde_derive v1.0.39
   Compiling docopt v0.8.3
   Compiling xray_cli v0.1.0 (file:///Users/ozziepeck/Desktop/xray/xray_cli)
    Finished dev [unoptimized + debuginfo] target(s) in 24.37 secs
/Users/ozziepeck/desktop/xray
   Compiling libc v0.2.40
   Compiling byteorder v1.2.2
   Compiling cfg-if v0.1.2
   Compiling nodrop v0.1.12
   Compiling memoffset v0.2.1
   Compiling lazycell v0.6.0
   Compiling scopeguard v0.3.3
   Compiling slab v0.4.0
   Compiling futures v0.1.21
   Compiling scoped-tls v0.1.1
   Compiling stable_deref_trait v1.0.0
   Compiling smallvec v0.6.0
   Compiling fnv v1.0.6
   Compiling same-file v1.0.2
   Compiling crossbeam v0.3.2
   Compiling crossbeam-utils v0.3.2
   Compiling log v0.4.1
   Compiling crossbeam-utils v0.2.2
   Compiling arrayvec v0.4.7
   Compiling proc-macro2 v0.3.6
   Compiling bincode v1.0.0
   Compiling iovec v0.1.2
   Compiling net2 v0.2.32
   Compiling rand v0.4.2
   Compiling num_cpus v1.8.0
   Compiling memchr v2.0.1
   Compiling owning_ref v0.3.3
   Compiling walkdir v2.1.4
   Compiling log v0.3.9
   Compiling crossbeam-epoch v0.4.1
   Compiling quote v0.5.1
   Compiling bytes v0.4.6
   Compiling mio v0.6.14
   Compiling aho-corasick v0.6.4
   Compiling tokio-executor v0.1.2
   Compiling futures-cpupool v0.1.8
   Compiling parking_lot_core v0.2.13
   Compiling syn v0.13.1
   Compiling crossbeam-deque v0.3.0
   Compiling regex v0.2.10
   Compiling tokio-io v0.1.6
   Compiling tokio-timer v0.2.1
   Compiling parking_lot v0.5.4
   Compiling tokio-threadpool v0.1.2
   Compiling mio-uds v0.6.4
   Compiling tokio-reactor v0.1.1
   Compiling globset v0.3.0 (https://github.com/atom/ripgrep?branch=include_ignored#e3c5a61b)
   Compiling tokio-tcp v0.1.0
   Compiling tokio-udp v0.1.0
   Compiling tokio v0.1.5
   Compiling ignore v0.4.1 (https://github.com/atom/ripgrep?branch=include_ignored#e3c5a61b)
   Compiling tokio-core v0.1.17
   Compiling serde_derive_internals v0.23.1
   Compiling tokio-signal v0.1.5
   Compiling tokio-uds v0.1.7
   Compiling serde_derive v1.0.39
   Compiling tokio-process v0.1.5
   Compiling xray_core v0.1.0 (file:///Users/ozziepeck/Desktop/xray/xray_core)
   Compiling xray_server v0.1.0 (file:///Users/ozziepeck/Desktop/xray/xray_server)
    Finished dev [unoptimized + debuginfo] target(s) in 43.86 secs
/Users/ozziepeck/desktop/xray
error: toolchain 'nightly-x86_64-apple-darwin' is not installed
script/build: line 5: wasm-bindgen: command not found
yarn install v1.5.1
warning ../../../package.json: No license field
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
[3/4] 🔗  Linking dependencies...
[4/4] 📃  Building fresh packages...
✨  Done in 47.69s.
Hash: 527c4507da391a9f4dc5
Version: webpack 4.6.0
Time: 40ms
Built at: 2018-04-30 13:46:40

ERROR in Entry module not found: Error: Can't resolve 'src/ui.js' in '/Users/ozziepeck/Desktop/xray/xray_wasm'

Contributor Communication Channels

Is there a place to talk and ask questions about architecture and code with contributors? Or are GitHub issues the standard for now?

I think it would be beneficial to have a channel on Slack, Gitter, or Riot so that we can discuss small questions that don't deserve their own issues, or questions/ideas that aren't fully baked yet.

It is more of a question than an issue: how exactly does this approach minimise the Electron footprint?

Doesn't simply bringing up a "hello world" Electron app start the whole engine, browser APIs, etc.?

Maybe I didn't understand correctly how this works, but what Rust brings to the table is the speed of native code instead of the overhead a dynamic language like JS faces (V8 optimisation mechanisms, etc.).

I am just curious whether, besides the reason above, there are other ways of minimising the Electron footprint.

Kind thanks, and also kudos for the repo!!! I would love to help out if I can find something to help with :D

Plugin Performance

Hey,

I am a heavy user of the vim plugin on both Atom and VSCode. I’ve noticed that Atom’s vim package is a bit snappier than VSCode’s. I have a hunch that a large portion of the speed difference is caused by the plugin architecture of both editors. From what I understand, plugins in VSCode are running in Web Workers which rely on some form of message passing to communicate between the plugin and the editor’s core. Furthermore, from what I remember, the amount of computing time each plugin is allotted can also be throttled by VSCode’s core if need be. This being said, will the current architecture of XRay impede the performance of plugins like VimMode?

Finally, there are some discrepancies in functionality between the plugins pertaining to selection. I'm unsure whether this is related to the architecture, though.

The gif below shows the discrepancies between the way selections work.
(gif: peek 2018-05-28 14-35)

Comparison to Xi

I'm curious whether you considered using the Xi editor as a base; it's very similar to this roadmap.

Xray's Intent?

I suppose I'm still a little unclear on the exact intent of Xray. You'll have to forgive me; much of the lingo tossed around in the updates goes directly over my head. Is it meant to be a desktop editor that is developed independently of Atom, a desktop editor meant to replace Atom, or a web-based editor only (maybe GitHub's own cloud editor for GitHub users to code with - a cool idea!)? Will Atom packages be compatible with Xray, or will it require an entirely new package ecosystem? If I understand correctly, all of this is mostly experimental, but if it succeeds in being a useful tool, where is this tool going to thrive? It is very exciting to read the updates and see the progress, as the code editor is the developer's favorite and most used tool, so I'm really curious to know where this is all heading. :)

Sorry for using a pull request, I closed it so that it wouldn't be in the way of real issues.

Contributing

Exciting project - thanks for sharing at such an early stage.

Are there areas you'd like PRs?
Are there good first issues?
Is there CI (and if not would you take a PR for one)?

Inserting a two byte UTF-8 character moves the cursor two characters

Inserting a two byte UTF-8 character like ä moves the cursor two characters.

Given a buffer like this with the pipe being the cursor:

|1234

Inserting ä will result in the cursor being placed like this:

ä1|234

I've tried to fix it myself but couldn't find the right place in the code. With a few hints I may get further... How is the cursor represented in the code? Where are insertions handled?

App launches without UI

After the app launched, I see a window with a menu and a blank area:
(screenshot from 2018-03-08 15-37-23)

From the console of the Xray Electron app:

Uncaught Error: Cannot find module 'xray'
    at Module._resolveFilename (module.js:543:15)
    at Function.Module._resolveFilename ($HOME/path/xray/xray_electron/node_modules/electron/dist/resources/electron.asar/common/reset-search-paths.js:35:12)
    at Function.Module._load (module.js:473:25)
    at Module.require (module.js:586:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> ($HOME/path/xray/xray_electron/lib/render_process/main.js:5:14)
    at Object.<anonymous> ($HOME/path/xray/xray_electron/lib/render_process/main.js:37:3)
    at Module._compile (module.js:642:30)
    at Object.Module._extensions..js (module.js:653:10)
    at Module.load (module.js:561:32)
$ npm install

> [email protected] postinstall $HOME/path/xray/xray_electron/node_modules/electron
> node install.js

npm WARN [email protected] No repository field.

added 1 package from 1 contributor in 9.842s

$ rustup -V
rustup 1.11.0 (e751ff9f8 2018-02-13)
$ node -v
v9.8.0
$ uname -a
Linux catsys 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Screenshot of the actual editor please?

Hi.

It would be nice if the README contained a screenshot of what is currently built here. At least one, just to see what it looks like. People are visual beings for the greater part. :)

Help wanted: Continuous integration

It would be great to get a PR helping us set up a CI build on Travis. This would also force us to iron out our build on Linux. It's not as simple as just slapping a .travis.yml file into the root folder, however...

Running only the tests that matter

We're planning on xray being a mono-repository to avoid the transaction costs and community fragmentation of hosting hundreds of separate, interconnected repos. However, it seems like this could make our builds horrible if we're not careful.

For this reason, even though we're versioning all these components together, we want their inter-dependencies to be clearly specified, and we want the global build script to be smart about knowing when to run which component's test suite.

So for example, if we make a change in xray_core, then we want to run tests on xray_node and xray_electron. A change to xray_electron should run tests only on xray_electron. A change to the documentation should run no tests.

How should we achieve this? We could cache some sort of fingerprint file on a per-module basis with a digest of the module's contents, plus make each module's build smart about checking the fingerprints of dependencies. Or maybe there's a better, standard way of dealing with this in mono-repos?
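For illustration only, a fingerprint could be as simple as a stable hash over a component's file paths and contents (a standard-library sketch, not a proposal for the actual build tooling):

// Hash every file under a component directory in a deterministic order; if the
// digest matches the cached one, that component's tests can be skipped.
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::io;
use std::path::Path;

fn fingerprint(dir: &Path, hasher: &mut DefaultHasher) -> io::Result<()> {
    let mut entries: Vec<_> = fs::read_dir(dir)?.collect::<Result<_, _>>()?;
    entries.sort_by_key(|entry| entry.path()); // deterministic traversal order
    for entry in entries {
        let path = entry.path();
        if path.is_dir() {
            fingerprint(&path, hasher)?;
        } else {
            path.to_string_lossy().hash(hasher);
            fs::read(&path)?.hash(hasher);
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut hasher = DefaultHasher::new();
    fingerprint(Path::new("xray_core"), &mut hasher)?;
    println!("xray_core fingerprint: {:016x}", hasher.finish());
    Ok(())
}

Each component's digest would also need to fold in the digests of the components it depends on, so that a change in xray_core invalidates xray_node and xray_electron as described above.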

Loading 15mb file

objc[56628]: Class FIFinderSyncExtensionHost is implemented in both
/System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff894d6c90) and 
/System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaboration
FileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride 
(0x10fd1ccd8). One of the two will be used. Which one is undefined.

  • It asks me to save the file somewhere but it doesn't display it. It is a mixed binary/ASCII file.
  • I am opening files by drag and drop.
  • I also can't edit any files.
