
Mechanism Browser

The mechanism browser allows searching for mechanisms without knowing their name or a textual description. Instead, you specify mechanical features, like rotation and translation axes, to find what you seek. Along with the search functionality, you can also add new mechanisms and edit existing articles.



Requirements

Make sure that you have the following software installed: Python 3 (with pip), Node.js (with npm), and Git.

The Mechanism Browser works best with Mozilla Firefox.



Setup

Add server to hosts file

For any system that should have access to the Mechanism Browser, add the IP of the server and the hostname "mechanism-browser" to its hosts file.
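
For example, if the server's IP address were 192.168.1.10 (placeholder value), the entry in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) would look like this:

192.168.1.10 mechanism-browser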

Set up the Django server

Make sure that you are using Python 3. You might want to create a virtual environment to avoid conflicts with other installed libraries.
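
For example, a virtual environment can be created and activated with Python's built-in venv module (the directory name "venv" is an arbitrary choice):

python3 -m venv venv
source venv/bin/activate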

Install the requirements:

sudo pip install -r Mechanism-Browser/backend/requirements.txt

Navigate to Mechanism-Browser/backend/mechanismbackend/. Apply the database migrations:

python manage.py migrate

Create a superuser and choose a username and a password:

python manage.py createsuperuser
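
The command prompts for the credentials interactively. If the installed Django version is 3.0 or newer, the superuser can alternatively be created non-interactively by exporting the credentials first (placeholder values shown):

export DJANGO_SUPERUSER_USERNAME=admin
export DJANGO_SUPERUSER_EMAIL=admin@example.com
export DJANGO_SUPERUSER_PASSWORD=change-me
python manage.py createsuperuser --noinput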

Finally, start the Django server:

python manage.py runserver 0.0.0.0:8000

Set up the Node.js server

Navigate to Mechanism-Browser/frontend/. Install the dependencies:

sudo npm install

Set up OpenJSCAD:

git clone https://github.com/jscad/OpenJSCAD.org.git
cp OpenJSCAD.org/packages/web/imgs/busy.gif media/busy.gif
mkdir external
mv OpenJSCAD.org/ external/

Then, start the Node.js server:

sudo npm start



Usage

Browse mechanisms

In your browser, open http://mechanism-browser/.

List mechanisms in the backend

In your browser, open http://mechanism-browser:8000/api/mechanisms.
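
For a quick check without a browser, the same endpoint can be queried from the command line (the exact response format depends on the API configuration):

curl http://mechanism-browser:8000/api/mechanisms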

Edit mechanism in the backend

In your browser, open http://mechanism-browser:8000/admin. Under "Mechanisms", click "Add" to create a new mechanism, or select an existing one to edit it. Fill in the form and confirm.



Frontend documentation

The documentation of the frontend can be found at frontend/docs/index.html. To re-generate it, run:

cd frontend
npm run docs:build



Web crawler

To populate the database using the web crawler, perform the following steps. You might want to create a virtual environment to avoid conflicts with other installed libraries.

cd exploration/web_page_extractor
pip install -r requirements.txt
python web_page_extractor.py
