Comments (21)

miguelvr commented on August 17, 2024

After reading this thread and #973 it isn't clear to me yet how to deploy a container locally.

It's not possible to deploy Cortex locally anymore since version 0.26.
If you really need to run Cortex locally, you can use an earlier version.

from cortex.

timothywangdev commented on August 17, 2024

Is this still being worked on? It would be quite helpful for local testing.

deliahu commented on August 17, 2024

@timothywangdev this is not actively being worked on, although it is on our roadmap. Do you mind sharing your motivation for this? E.g. is it to save time and/or money during development, to serve a small-scale API from one machine in production, or something else?

nabeeltahir12 commented on August 17, 2024

@deliahu have you incorporated this feature into the current release of Cortex? I have a GPU server available locally and would like to use your platform for development to cut down on costs.

RobertLucian commented on August 17, 2024

@nabeeltahir12 as far as I know, a local version of Cortex intended for development is in the works and could be released in the next version (which means a few weeks, give or take). I'm not part of the development team, so I might be wrong.

deliahu commented on August 17, 2024

@RobertLucian yes, that is correct. @nabeeltahir12 we are planning to release the local environment in our next release, which will probably be in ~2 weeks. We haven't yet researched how GPUs work with Docker locally, but we will look into it, and we'll include GPU support in this release if it's relatively straightforward to implement.

codeghees commented on August 17, 2024

@deliahu with this new update, I will be able to run Cortex without AWS, right? I would love to contribute to your code, but I can't afford the cost of AWS servers as I am just a graduating student.

mrciolino commented on August 17, 2024

I would say my main reason for wanting to test the deployment is just to verify that everything is working before connecting the service up to AWS. It aligns with my current workflow of local development followed by deployment. Thanks to the devs for writing this amazing software!

deliahu commented on August 17, 2024

@codeghees Yes, the local feature will allow you to code and deploy APIs on your local machine, without AWS.

@mrciolino Thank you for the kind words, we really appreciate hearing feedback from our users! Yes, that is one of the main reasons we decided to implement local deployment. We also thought it would be a nice option for evaluating Cortex before committing to running on AWS. There is a third use case, which will probably be less common, but if you are using it in a dev or test environment and you don't need Cortex's "cluster" features (e.g. autoscaling, spot instance management, etc), then you could spin up a single VM, SSH in, run Cortex locally, and expose the port to the internet.

deliahu commented on August 17, 2024

Just to post an update: we've been heads down on this for the last couple of weeks, and we're hoping to release local support this week.

codeghees commented on August 17, 2024

Excited to see what comes out @deliahu! Good luck!

claverru commented on August 17, 2024

After reading this thread and #973 it isn't clear to me yet how to deploy a container locally.

jesperbruunhansen commented on August 17, 2024

What was the reason for removing this feature again?

miguelvr commented on August 17, 2024

What was the reason for removing this feature again?

In a nutshell:

It had costs for our limited development time, and it was not aligned with the product focus, which is running ML at scale.

ospillinger commented on August 17, 2024

Hi @codeghees, could you please share with us some of your use cases for local?

jesperbruunhansen commented on August 17, 2024

@ospillinger I can personally say that our team used this primarily for local development of the APIs before deployment.

We're still quite new to Cortex, so perhaps you could elaborate on how to do local dev/debugging/testing etc. with your framework? Perhaps we are simply going about it the wrong way.

vishalbollu commented on August 17, 2024

For local development, users can typically initialize their predictor in a Python runtime and invoke its predict() function directly with the expected payload:

class PythonPredictor:
    def __init__(self, config):
        # initialization logic, e.g. loading the model
        ...

    def predict(self, payload):
        # inference logic
        ...

# instantiate with the config from the API spec, then call predict() directly
predictor = PythonPredictor(config={<insert config from api spec here>})
predictor.predict(payload={})

This is a good place to start when testing the predictor implementation (although this approach may not be possible in a few scenarios). For testing the API as a whole, a few users maintain a dev cluster for more rigorous testing before deploying to a prod cluster. For the in-between testing, such as testing just the container, we have filed ticket #2077 to design ways to run the API containers locally. Ideally, we would like to get to the point where you can docker run [flags] the container to test it locally.
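As a concrete sketch of that pattern, the example below instantiates a predictor class and calls predict() directly in plain Python; the IrisPredictor class, its config keys, and the scoring logic are hypothetical stand-ins for illustration, not part of Cortex's API:

```python
# Hypothetical example of exercising a predictor class directly in Python,
# outside of any Cortex deployment.

class IrisPredictor:
    def __init__(self, config):
        # a real predictor would load a model here, e.g. from config["model_path"]
        self.threshold = config.get("threshold", 0.5)

    def predict(self, payload):
        # toy scoring logic standing in for real model inference
        score = sum(payload.get("features", [])) / 10.0
        label = "positive" if score > self.threshold else "negative"
        return {"label": label, "score": score}

# instantiate with the same config dict you would place in the API spec
predictor = IrisPredictor(config={"threshold": 0.5})

# call predict() directly with a sample payload
result = predictor.predict(payload={"features": [2.0, 2.0, 2.0]})
print(result)  # {'label': 'positive', 'score': 0.6}
```

Once predict() behaves as expected in isolation like this, the same class can be packaged into the API container for deployment.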

vishalbollu commented on August 17, 2024

Following up on this: in Cortex v0.34, a new command, cortex prepare-debug, has been added to help debug API containers locally. You can read more about it here.

mvrahden commented on August 17, 2024

@vishalbollu it looks like this has been removed again; at least I can't find it in the current release (0.40.0) or in the documentation. What is the current approach to running a local setup (e.g. for testing purposes, or for those with a local GPU cluster)? Thanks

miguelvr commented on August 17, 2024

Cortex now deploys Docker containers built by the users.

So you just need to run docker run <container_name> to run it locally.
