
helix's Introduction



Helix is the backend powering the game Hacker Experience 2.

Building a release

To build a release, run the following command:

MIX_ENV=prod mix do deps.get, compile, release --env=prod

This will generate a release at _build/prod/rel/helix

To run the resulting release, export the environment variables specified in this README along with the env REPLACE_OS_VARS=true, then execute _build/prod/rel/helix/bin/helix console*

Note that if this is the first time you are running the Helix server, you first have to create and populate the database using helix ecto_create, helix ecto_migrate and helix seeds

Example:

export HELIX_CLUSTER_COOKIE=randomcookie
export HELIX_NODE_NAME=mynode
export HELIX_ENDPOINT_SECRET_KEY=reallyreallylongstring
export HELIX_ENDPOINT_URL=127.0.0.1
export HELIX_DB_USER=postgres
export HELIX_DB_PASS=postgres
export HELIX_DB_HOST=localhost
export HELIX_DB_PREFIX=helix
export HELIX_DB_POOL_SIZE=3
export HELIX_SSL_KEYFILE=priv/dev/ssl.key
export HELIX_SSL_CERTFILE=priv/dev/ssl.crt
export HELIX_MIGRATION_TOKEN=foobar

REPLACE_OS_VARS=true _build/prod/rel/helix/bin/helix ecto_create
REPLACE_OS_VARS=true _build/prod/rel/helix/bin/helix ecto_migrate
REPLACE_OS_VARS=true _build/prod/rel/helix/bin/helix seeds
REPLACE_OS_VARS=true _build/prod/rel/helix/bin/helix console

Notes

* helix console will run the application in interactive mode, so you can execute Elixir code on the terminal. You can alternatively use helix foreground to run it in the foreground (but without the interactive IO) or helix start to run it in the background

Environment variables

Environment | Example value | Description
HELIX_CLUSTER_COOKIE | randomcookie | The secret cookie used to authenticate Erlang nodes on a cluster*
HELIX_NODE_NAME | mynode | Each Erlang node on a cluster must have a different name; this name is used solely to identify the node on the cluster
HELIX_ENDPOINT_SECRET_KEY | reallyreallylongstring | The secret key used to encrypt the session token
HELIX_ENDPOINT_URL | 127.0.0.1 | The hostname where the Helix server will run
HELIX_DB_USER | postgres | RDBMS username
HELIX_DB_PASS | postgres | RDBMS password
HELIX_DB_HOST | localhost | RDBMS hostname
HELIX_DB_PREFIX | helix | The prefix for the databases used by Helix. Eg: if the prefix is foobar, the database for accounts will be foobar_prod_account
HELIX_DB_POOL_SIZE | 3 | The number of connections kept open for each database
HELIX_SSL_KEYFILE | priv/dev/ssl.key | The path to the key file used for HTTPS connections
HELIX_SSL_CERTFILE | priv/dev/ssl.crt | The path to the certificate used for HTTPS connections
HELIX_MIGRATION_TOKEN | foobar | Token used to authenticate HEBornMigration application exports
APPSIGNAL_PUSH_API_KEY | abcdef | Key for AppSignal (optional). If this env is not provided, AppSignal won't log errors

Notes

* Note that the secret cookie is the only authentication (besides your firewall) that Erlang provides to stop another Erlang node from taking direct root access to your server. This cookie should therefore be strong enough, and your firewall should be properly configured; otherwise your server is exposed to real danger (again, Erlang provides root access to the server)

Support

You can get support on shipping your community release on our Online Chat.

If you have any question that could not be answered on the chat by our contributors, feel free to open an issue.

License

Hacker Experience 2, "Helix" and the Hacker Experience 2 logo are copyright (c) 2015-2017 Neoart Labs LLC.

Helix source code is released under AGPL 3.

Check LICENSE or GNU AGPL3 for more information.


helix's People

Contributors

chrisfls, mememori, renatomassaro, umamaistempo


helix's Issues

Rename `module_role` to `software_module`

The name module_role doesn't describe what it is. Considering that we defined
that "software" is the behaviour of an in-game program and "file" is the instance
of that entity in the game, it only makes sense to have software_module as the
definition of the possible modules a software may have and file_module as the
mapping of those modules to their versions on a file. A sketch of the resulting
split follows.
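
A possible shape for the two models after the rename (field names and types here are assumptions, not the actual Helix schemas):

defmodule Helix.Software.Model.SoftwareModule do
  use Ecto.Schema

  @primary_key false
  schema "software_modules" do
    # Which modules a given software type may have
    field :software_type, :string, primary_key: true
    field :software_module, :string, primary_key: true
  end
end

defmodule Helix.Software.Model.FileModule do
  use Ecto.Schema

  @primary_key false
  schema "file_modules" do
    # Maps each module of a file to its version
    field :file_id, HELL.PK, primary_key: true
    field :software_module, :string, primary_key: true
    field :module_version, :integer
  end
end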

Validate File name

Rules (a validation sketch follows the list):

  • Must be at least 1 grapheme long
  • Must not start with a space " "
  • Must not have line break characters or any non-printable character
  • Must not end with a dot "."
  • Cannot have the slash or backslash characters "\/"
  • Maybe prevent zalgo?
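
A minimal sketch of those rules as a plain validator (module and function names are illustrative; the zalgo question is left open):

defmodule Helix.Software.Model.File.NameValidator do
  @spec valid?(String.t) :: boolean
  def valid?(name) when is_binary(name) do
    String.length(name) >= 1 and
      not String.starts_with?(name, " ") and
      not String.ends_with?(name, ".") and
      not String.contains?(name, ["/", "\\"]) and
      printable_without_breaks?(name)
  end

  # String.printable?/1 accepts \n and \r, so line breaks are checked
  # separately
  defp printable_without_breaks?(name),
    do: String.printable?(name) and not String.contains?(name, ["\n", "\r"])
end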

Merge StorageDrive controller onto Storage controller

Although StorageDrive is a different record on the database, it should not be
considered a thing of its own, just an implementation detail of how a Storage
binds HDDs to storage spaces. See the drives as just a collection stored by
the storage.

Create ProcessAPI

gateway :: PK.t
process :: ProcessType.t
process_id :: PK.t

start_process(gateway, process) :: :ok | {:error, term}
shutdown_process(process_id, reason :: term) :: :ok | {:error, term}
get_processes(gateway) :: [term] # TODO: return format
process_priority(process_id, 0..5) :: :ok
pause_process(process_id) :: :ok
resume_process(process_id) :: :ok

TODO: should the process functions take a Process.t input or a process_id? This is unlike most other services because the model is not used for anything; all it includes is information about how close the process is to being finished. It's still useful to return part of the process on the get_processes function, though, since the client might want to know the process ETA

Implement point-to-point connection

Depends on #94

Technically this task is meant to implement network connections that might have
several nodes, but it'll be implemented considering that only 2 nodes can be
connected, so as to reduce the prototype complexity.

This task intends to implement the following (possible signatures are sketched after the list):

  • Controller function to connect two nodes through a network (creating a tunnel)
  • Controller function to start a "connection" (the multiple meanings of this
    term in this context make it hard to explain)
  • Controller function to close a connection
  • Controller function to check if a certain gateway is connected to a certain
    destination node
  • Controller function to list active connections that pass through a node
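
Possible signatures for those functions, mirroring the spec style used elsewhere in this document (all names here are assumptions):

connect(Network.t, gateway :: PK.t, destination :: PK.t, bounces :: [PK.t]) :: {:ok, Tunnel.t} | {:error, term}
start_connection(Tunnel.t, type :: term) :: {:ok, Connection.t} | {:error, term}
close_connection(Connection.t) :: :ok
connected?(gateway :: PK.t, destination :: PK.t) :: boolean
connections_through(node :: PK.t) :: [Connection.t]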

Implement domain-specific cache update/invalidation rules

Cache invalidation is a hard thing. It is even harder when its logic is distributed across several different mechanics.

My suggestion to ease cache maintenance is to create a centralized (per domain) cache handler that listens to all kinds of events and acts accordingly. That way we have a single place that holds all rules to update or invalidate a cache; a sketch of the shape follows.

Task #83 implements a cache. We could use it as an example of how to organize this centralized ruleset.
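
A minimal sketch of that shape, assuming a tuple-based event format (none of these names are actual Helix APIs):

defmodule Helix.Server.CacheHandler do
  # Single place holding every update/invalidation rule for the Server
  # domain; mechanics only emit events, they never touch the cache
  def handle_event({:motherboard_updated, server_id}),
    do: purge(server_id)

  def handle_event({:storage_created, server_id}),
    do: purge(server_id)

  def handle_event(_event),
    do: :ok

  # Placeholder for the cache purge implemented by task #83
  defp purge(_server_id),
    do: :ok
end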

Apply new style guidelines to function specs

Instead of inconsistently writing specs ad hoc, follow a simple guideline:

  • The left side (call definition) shall have a line of its own (with no line breaks in its params)
  • The right side (function return) shall take the following line
  • Group possible returns that are different versions of the same thing
  • Put returns that mean different things on separate lines
@spec some_action(Ecto.Changeset.t, map()) ::
  {:ok, struct}
  | {:error, :notfound | :invalid}
  | {:error, Ecto.Changeset.t}

Also remember that NOT ALL FUNCTIONS HAVE TO BE SPEC'ED: dialyzer will infer the success typing, so if you provide good enough typing for most functions, the rest are likely to be inferred correctly as well

Tag all tests to allow us to run only suites that make sense in a given context

Use the following tags (some tests can have more than one tag; an ExUnit example follows the list):

:unit # A test that requires only a single node and doesn't depend on any third-party software, only the Erlang environment
:integration # A test that depends on a third-party application and side-effects
:fullstack # A test that depends on a third-party application that is not the main rdbms (postgres)
:external # A test that depends on a third-party application that is accessible only through the internet
:cluster # A test that depends on more than a single erlang node
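
A minimal sketch of how the tags would be applied (standard ExUnit usage; the module and test are illustrative):

defmodule Helix.Account.ControllerTest do
  use ExUnit.Case

  # Every test in this module hits postgres
  @moduletag :integration

  # This one additionally requires more than a single erlang node
  @tag :cluster
  test "account session is replicated across the cluster" do
    assert true
  end
end

Suites are then selected on the command line with mix test --only unit or mix test --exclude external.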

Update "constant" models to use atoms whenever possible

Since with recent updates "constant" modules started to define their possible
values on their code (ie: compile-time) to not only provide database-level
referential integrity but also application-level referential integrity, it'd be
an interesting improvement if we used atoms instead of strings, that way we
could also provide some type checks because dialyzer has support for singleton
atoms (but not singleton binaries).

Eg:

@spec unban_account(%Account{account_status: :banned}) :: {:ok, %Account{account_status: :active}} | {:error, reason :: term}

instead of

@spec unban_account(Account.t) :: {:ok, Account.t} | {:error, reason :: term}

Before anyone asks about compile-time dependencies or the "correct" use of
types and the use of opaque types, there is the following alternative (using
type constructors):

@spec unban_account(Account.with_status(AccountStatus.banned)) :: {:ok, Account.with_status(AccountStatus.active)} | {:error, reason :: term}

With this more accurate typing we not only improve documentation but might also
avoid bugs thanks to type analysis


  • Implement an Ecto.Type to store atoms as strings on the database and retrieve them as atoms (alternatively make all constant schemas implement Ecto.Type); a sketch follows the list
  • Spec all the Types
  • Spec functions that depend on certain constants
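
A minimal sketch of such an Ecto.Type, assuming a fixed set of allowed atoms (the module name and the atom list are illustrative):

defmodule HELL.ConstantType do
  @behaviour Ecto.Type

  @constants ~w(active banned)a

  def type, do: :string

  # Accept either the atom or its string form on input
  def cast(value) when value in @constants, do: {:ok, value}
  def cast(value) when is_binary(value), do: load(value)
  def cast(_), do: :error

  # Store as string on the database
  def dump(value) when value in @constants, do: {:ok, to_string(value)}
  def dump(_), do: :error

  # Retrieve as atom, without dynamically creating atoms
  def load(value) when is_binary(value) do
    atom = String.to_existing_atom(value)
    if atom in @constants,
      do: {:ok, atom},
      else: :error
  rescue
    ArgumentError ->
      :error
  end
end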

Drop EntityService in favour of EntityAPI

APIs should wrap controller actions in meaningful game actions with mechanic
validation through HEnforcer. Since there's nothing to HEnforcer yet, this is not
much more than simply wrapping the controller.

Improve TOP

It's lacking quite a bit in readability

  • Separate TOP into "local" (state) and "controller"
  • Write some tests for the controller to integrate with the local
  • Typespec the controller


Create LogAPI

The Log application has no API for inter-app operation.
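
A possible starting shape, mirroring the find/fetch format used by the other services (all names here are assumptions):

# service/api/log.ex
fetch(HELL.PK.t) :: Log.t | nil
find([{:server_id, HELL.PK.t} | {:message, String.t}, ...], meta :: []) :: [Log.t]
create(server_id :: HELL.PK.t, message :: String.t) :: {:ok, Log.t} | {:error, term}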

Move (storage_id, drive_id) link from software to hardware

The table storage_drives was added to the Software domain because it made sense at the time (an optimization to avoid extra HeBroker calls), but it's no longer needed.

Now it makes sense to have it all under the Hardware domain, on a table called hdd_storage (or something like that), as storages are owned by HDDs.

In the future, to add referential integrity on the Software domain without having to ask Hardware whether that storage is linked or not, we can add a storage_status that marks the storage_id as "on" or "off", meaning linked or unlinked, respectively.

Merge MotherboardSlot controller onto Motherboard controller

Although MotherboardSlot is a different record on the database, it should not
be considered a thing of its own, just an implementation detail of how a
Motherboard binds components. See the slots as just a collection stored by the
motherboard.

Rename `FileType` to `SoftwareType`

At first that model received that name because it was meant simply to tag the File and point out what it was. But right now it means more than that: the SoftwareType represents each possible software, how it is implemented, etc. So it only makes sense to rename it

Centralize all PK generation

Implement on HELL.PK:

pk_for(module) :: t

Note that obviously HELL.PK should only delegate those functions; another
module can implement them (and do nothing but implement them), because
otherwise the PK module would be bloated

def pk_for(Helix.Hardware.Model.Component.CPU),
  do: generate([0x0003, 0x0001, 0x0002])

(I'd recommend actually making a map %{model => digits} and compiling it into
function clauses; that way the reverse, finding the model from a pk, is just a
few lines away. A sketch follows.)
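
A sketch of that map-to-clauses approach (the second entry and its digit values are illustrative):

defmodule HELL.PK.Map do
  @pks %{
    Helix.Hardware.Model.Component.CPU => [0x0003, 0x0001, 0x0002],
    Helix.Hardware.Model.Component.HDD => [0x0003, 0x0001, 0x0003]
  }

  # Compiles the map into function clauses in both directions, so the
  # reverse lookup (model from pk digits) comes for free
  for {model, digits} <- @pks do
    def digits_for(unquote(model)), do: unquote(digits)
    def model_for(unquote(digits)), do: unquote(model)
  end
end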

Then update all models to use Ecto's autogen with that generator.

Eg:

defmodule Helix.Process.Model.Process do
  @primary_key false
  @ecto_autogenerate {:process_id, {PK, :pk_for, [__MODULE__]}}
  schema "processes" do
    field :process_id, PK,
      primary_key: true
  ...
end

Update Log Controller to use the new find/fetch format

fetch(HELL.PK.t) :: Log.t | nil
find([{:server_id, HELL.PK.t} | {:crypto_version, non_neg_integer | nil} | {:message, String.t}, ...], meta :: []) :: [Log.t]

Note that message on find is a LIKE filter.
Also note that, to avoid potential bugs, Ecto does not accept where thing = ^var if var is nil; you'd have to write where is_nil(thing) instead.
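
A sketch of the nil-safe filter implied by that note (standard Ecto; the module and function names are illustrative):

defmodule Helix.Log.Controller.LogQuery do
  import Ecto.Query

  # Matching on nil and emitting is_nil/1 avoids the rejected
  # `where: l.crypto_version == ^nil` comparison
  def by_crypto_version(query, nil),
    do: from(l in query, where: is_nil(l.crypto_version))
  def by_crypto_version(query, version),
    do: from(l in query, where: l.crypto_version == ^version)
end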

Break the umbrella

Make it a single application again, no questions asked.

For this task to be completed, no PR or unmerged commit should be left behind; otherwise they are almost certainly completely broken after the change, as it will make any rebase nearly impossible

Implement network related models

This issue relates to implementing basic models that will be used to properly
implement basic network functionality

Information can be found on the wiki page about internet connections

A network connection is comprised of a Tunnel that contains Links and
Connections

Tunnel

Represents a certain medium to connect from the gateway server to the
destination server. A tunnel contains several Links. Connections use the
Tunnel.

The Tunnel model is composed of:

  • Unique ID
  • network_id
  • gateway_id
  • destination_id
  • hash*

* Hash is a precomputed value generated by applying a certain hash function to
all links of a Tunnel. This is done so a user can have multiple Tunnels to a
certain destination server, and so the backend can lazily receive the action as
a collection of nodes (and thus reuse a Tunnel)

Link

A link is the edge from one node used in the Tunnel to the next. A Tunnel
is composed of 1 or more links, and those links (and their order) affect the
side-effects of a certain network action.

On a Tunnel that emanates from Node A onto Node D through the path
A -> B -> C -> D, we have the links A -> B, B -> C and C -> D.

The link model is composed of:

  • tunnel_id
  • source_id
  • destination_id
  • sequence*

* Sequence is a precalculated integer to order the edges. This could be
avoided by using a linked-list approach on the database side, but it is faster
to prototype this way

Connection

A connection represents an action that is happening through a Tunnel and that
was started by the gateway and targets the destination.

That is, if a certain user is trying to crack into a certain destination
computer through certain bounce nodes, they would start a Tunnel with those
nodes and start a "crack" connection. After the crack is complete and the user
gets into the target server, the "crack" connection finishes and a "ssh"
connection starts. If the user decides to download a file, they simultaneously
have a "ssh" connection (because they are logged into the target system) and a
"ftp" connection (because they are transferring files to/from the target system)

The connection model is composed of:

  • Unique ID*
  • tunnel_id
  • Connection Type

* This unique id is used to allow a certain tunnel to have more than one
connection of a certain kind (ie: downloading a dozen files in parallel) and
also to provide a visual cue to players that two similar connections of the
same kind are different

ConnectionType

Naturally, since a connection has a type and the number of possible types is
limited, it is better to introduce a lookup table to provide referential
integrity

About tunnel implementation

On the Tunnel model, I see the following function:

create(Network.t, gateway :: Server.id, destination :: Server.id, bounces :: [node, ...]) :: %Tunnel{links: [Link.t, ...]}

In this case, the links builder inside the Tunnel model would make a set of
the input nodes (failing if any is repeated) and also ensure that the first
node is the gateway and the last node is the destination. A sketch of that
builder follows.
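
A minimal sketch of the builder, assuming nodes are plain ids (module and function names are illustrative):

defmodule Helix.Network.Model.Tunnel.LinksBuilder do
  # Builds the ordered links A -> B, B -> C, ... from gateway to
  # destination, failing if any node repeats
  def build_links(gateway, destination, bounces) do
    nodes = [gateway | bounces] ++ [destination]

    if Enum.uniq(nodes) == nodes do
      links =
        nodes
        |> Enum.chunk_every(2, 1, :discard)
        |> Enum.with_index()
        |> Enum.map(fn {[source, dest], seq} ->
          %{source_id: source, destination_id: dest, sequence: seq}
        end)

      {:ok, links}
    else
      {:error, :repeated_node}
    end
  end
end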

Drop HardwareService in favour of HardwareAPI

APIs should wrap controller actions in meaningful game actions with mechanic
validation through HEnforcer. Since there's nothing to HEnforcer yet, this is not
much more than simply wrapping the controller.

Implement mapping of storage_id to server

We'll add caches throughout our application when it makes sense to reduce the effort of fetching commonly used information. Right now, fetching all storages that belong to a server is quite expensive (get server, get motherboard, find HDDs attached to the motherboard, find storages attached to the HDDs).

A solution is to add a cache mapping all storage_ids that belong to a single server. The primary key is the tuple (server_id, storage_id). storage_id is unique; server_id isn't.

Since both storage_id and servers won't change too often, and will be read all the time, it makes sense to create a materialized view that is updated through triggers (when storage_ids are created/removed). Or we can simply create a new table; I don't think there is any difference in this case. A migration sketch for the table variant follows.
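
A sketch of the plain-table variant as an Ecto migration (table name and the uuid column type are assumptions):

defmodule Helix.Server.Repo.Migrations.AddStorageCache do
  use Ecto.Migration

  def change do
    # Composite primary key (server_id, storage_id), as described above
    create table(:server_storages, primary_key: false) do
      add :server_id, :uuid, primary_key: true
      add :storage_id, :uuid, primary_key: true
    end

    # storage_id is unique on its own; server_id is not
    create unique_index(:server_storages, [:storage_id])
  end
end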

Refactor Settings and AccountSettings

As it is now, AccountSettings is an EAV, and that (including the fact that each
setting value is a row/record) doesn't give us any advantage (actually, quite
the opposite).

To fix this, remove Setting and make the possible settings (and their default
values) a compiled map

Drop everything about AccountSetting and make it simply store a JSON document
linked to the account.

When returning the user settings, simply merge them over the compiled settings
map to get the final map with both defaults and custom values, as sketched below
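
A sketch of that merge, assuming the compiled defaults live in a module attribute (fetch_custom_settings/1 is hypothetical):

# inside AccountController
def get_settings(account) do
  custom = fetch_custom_settings(account)

  # Custom values win over the compiled defaults
  Map.merge(@default_settings, custom)
end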

Ideally that compiled settings map could be an Ecto embedded schema; that way
it would be very easy to validate the input values, and it could still easily
be used the way we expect it to

defmodule Settings do
  use Ecto.Schema

  @primary_key false
  embedded_schema do
    field :is_beta, :boolean,
      default: false
  end
end

Note that even though the default value appears in the embedded schema, it
should not be persisted; there's a reason for this: we might opt to create a
special type for it that strips the struct metadata and the nil fields (thus
avoiding saving the defaults on the database). The advantages of this approach
are saving database storage (well, it doesn't matter that much) and that a
change to a default value would affect every account that doesn't have a custom
value for that setting (while storing the record with the default value would
make it a custom value)

AccountSettings controller should implement:

Model.AccountSettings.__schema__ :: %AccountSettings{account_id: HELL.PK.t, settings: Settings.t}
Model.AccountSettings.settings(AccountSettings.t) :: Settings.t

# AccountController
get_settings(Account.t) :: Settings.t
put_settings(Account.t, settings :: map) :: {:ok, Settings.t} | {:error, reason :: term}

Implement Encrypt and Decrypt event reactions

Encrypt should:

  • Change the target file's encrypt_version
  • Invalidate any key that previously worked for the file (they should not be removed, just stop working)
  • Create a new key and store it at the gateway's storage

Decrypt should:

  • Create a new key and store it at the gateway's storage

So, in practice, this means:

  • Update file model to store a crypto version
  • Implement a method to retrieve the currently used storage of a server
  • Implement the event handlers

Additionally, those event handlers should emit an event telling that a key was created and is available to the user (this will be used by our notification system). A sketch of the handlers follows.
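
A minimal sketch of the two handlers, assuming map-shaped events and illustrative controller names (none of these are actual Helix APIs):

defmodule Helix.Software.Service.Event.Crypto do
  alias Helix.Software.Controller.CryptoKey, as: KeyController
  alias Helix.Software.Controller.File, as: FileController

  def encrypt_conclusion(%{file_id: file_id, version: version, storage_id: storage_id}) do
    {:ok, _} = FileController.set_crypto_version(file_id, version)

    # Old keys are kept around but stop working
    :ok = KeyController.invalidate_keys_for(file_id)

    {:ok, key} = KeyController.create(storage_id, file_id)

    # Consumed later by the notification system
    emit({:crypto_key_created, key})
  end

  def decrypt_conclusion(%{file_id: file_id, storage_id: storage_id}) do
    {:ok, key} = KeyController.create(storage_id, file_id)
    emit({:crypto_key_created, key})
  end

  # Placeholder for the actual event emission
  defp emit(_event), do: :ok
end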

Store File's full path

Drop the current UNIQUE index on the file table and create a new column full_path that is UNIQUE and automatically computed as file_path <> file_name <> file_extension. One way to compute it is sketched below.
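
A sketch of the changeset-side computation (a database trigger would work too); this assumes the usual Ecto.Changeset imports in the File model:

def changeset(file, params) do
  file
  |> cast(params, [:file_path, :file_name, :file_extension])
  |> put_full_path()
  |> unique_constraint(:full_path)
end

defp put_full_path(changeset) do
  # Concatenate the three parts into the new UNIQUE column
  full =
    get_field(changeset, :file_path) <>
    get_field(changeset, :file_name) <>
    get_field(changeset, :file_extension)

  put_change(changeset, :full_path, full)
end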

Include LICENCE file

Fun fact: in UK English, licence is the noun and license is the verb, while in US English license is both.

Anyway, include the LICENCE file; there's not much to say about it, right?

Drop AccountService in favour of AccountAPI

APIs should wrap controller actions in meaningful game actions with mechanic
validation through HEnforcer. Since there's nothing to HEnforcer yet, this is not
much more than simply wrapping the controller.

I'd say that Account.Controller.login should be moved to Account.API.login

Drop ServerService in favour of ServerAPI

APIs should wrap controller actions in meaningful game actions with mechanic
validation through HEnforcer. Since there's nothing to HEnforcer yet, this is not
much more than simply wrapping the controller.

Crypto key basic implementation

  • Crypto Key Software model
  • Crypto Key software_type
  • Controller to create and retrieve keys

A Crypto Key is a specialized file that enables the user who holds it to view/use a certain encrypted file

Their record should probably contain a column target_file_id that points to the file it allows to be seen. Note that it is not a FK (ie: no integrity); that way we can allow crypto keys to exist even if their target file is deleted. A nilified FK might be a better choice :)

Also, they should generate a (sufficiently) unique random hash to work as a signature. That way, if player A encrypted a certain file and later player B (decrypts and) encrypts it, only player B's key should work; the signature would be an easy way to implement that feature without removing player A's key (otherwise they would notice it, affecting the stealth gameplay).
A better idea is simply to nilify the FK (that way we keep referential integrity) and just emit a delayed event to tell the player that their key is useless (that delayed event shall be triggered when the player logs into the target server)

Additionally, the Key's record should include the target Server id. This is so we can, in the future, provide an (in-game) application that lists your keys grouped by server on your hacked db (and allows you to, with a click, remove all keys linked to "unknown servers", ie: those not on your hacked db). A schema sketch follows.
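
A sketch of the record described above (module name, table name and field types are assumptions):

defmodule Helix.Software.Model.CryptoKey do
  use Ecto.Schema

  @primary_key false
  schema "crypto_keys" do
    field :file_id, HELL.PK,
      primary_key: true
    # Nilified on file deletion so the key record survives its target
    field :target_file_id, HELL.PK
    field :target_server_id, HELL.PK
    # Random signature: only keys matching the file's current signature work
    field :signature, :string
  end
end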

Write README.md

Document

  • What this is
  • How to setup the application
  • How to test the application
  • How to configure the application

Update Process Controller to use the new find/fetch format

fetch(HELL.PK.t) :: Process.t
find([{:gateway, server :: HELL.PK.t} | {:target, server :: HELL.PK.t} | {:file, HELL.PK.t} | {:network, HELL.PK.t} | {:type, [term, ...] | term} | {:state, [term, ...] | term}, ...], meta :: []) :: [Process.t]

Create SoftwareAPI for File

# service/api/file.ex
create(map) :: {:ok, File.t} | {:error, term}
fetch(PK.t) :: File.t | nil

storage_contents(Storage.t) :: %{folder :: String.t => [File.t]}
files_on_storage(Storage.t) :: [File.t]

copy(File.t, Storage.t, path :: String.t) :: {:ok, File.t} | {:error, term}
move(File.t, path :: String.t) :: {:ok, File.t} | {:error, term}
rename(File.t, name :: String.t) :: {:ok, File.t} | {:error, term}
encrypt(File.t, version :: pos_integer) :: {:ok, File.t} | {:error, term}
decrypt(File.t) :: {:ok, File.t} | {:error, term}
delete(File.t) :: :ok
