
prefect-demo's Issues

mc: <ERROR> Unable to make bucket `local/minio-flows`. Access Denied.

❯ make kubes deploy
k3d cluster create orion --registry-create orion-registry:0.0.0.0:5550 \
                -p 4200:80@loadbalancer -p 9000:9000@loadbalancer -p 9001:9001@loadbalancer \
                -p 10001:10001@loadbalancer -p 8265:8265@loadbalancer -p 6379:6379@loadbalancer \
                --k3s-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true@server:*' \
                --k3s-arg '--kube-scheduler-arg=feature-gates=EphemeralContainers=true@server:*' \
                --k3s-arg '--kubelet-arg=feature-gates=EphemeralContainers=true@agent:*' \
                --wait
INFO[0000] portmapping '4200:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '9000:9000' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '9001:9001' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '10001:10001' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '8265:8265' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] portmapping '6379:6379' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy] 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-orion'                  
INFO[0000] Created image volume k3d-orion-images        
INFO[0000] Creating node 'orion-registry'               
INFO[0005] Pulling image 'docker.io/library/registry:2' 
INFO[0010] Successfully created registry 'orion-registry' 
INFO[0010] Starting new tools node...                   
INFO[0010] Starting Node 'k3d-orion-tools'              
INFO[0011] Creating node 'k3d-orion-server-0'           
INFO[0016] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1' 
INFO[0031] Creating LoadBalancer 'k3d-orion-serverlb'   
INFO[0032] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6' 
INFO[0038] Using the k3d-tools node to gather environment information 
INFO[0038] Starting new tools node...                   
INFO[0038] Starting Node 'k3d-orion-tools'              
INFO[0040] Starting cluster 'orion'                     
INFO[0040] Starting servers...                          
INFO[0040] Starting Node 'k3d-orion-server-0'           
INFO[0043] All agents already running.                  
INFO[0043] Starting helpers...                          
INFO[0043] Starting Node 'orion-registry'               
INFO[0044] Starting Node 'k3d-orion-serverlb'           
INFO[0050] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap... 
INFO[0054] Cluster 'orion' created successfully!        
INFO[0054] You can now use it like this:                
kubectl cluster-info
Probing until traefik CRDs are created (~60 secs)...
10
20
30
40
50
NAME                                CREATED AT
ingressroutes.traefik.containo.us   2023-03-02T02:19:46Z

To use your cluster set:

export KUBECONFIG=/Users/oliver.mannion/.k3d/kubeconfig-orion.yaml
helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" already exists with the same configuration, skipping
helm upgrade --install minio bitnami/minio --set auth.rootUser=minioadmin --set auth.rootPassword=minioadmin \
                --wait --debug > /dev/null
history.go:56: [debug] getting history for release minio
install.go:192: [debug] Original chart version: ""
install.go:209: [debug] CHART PATH: /Users/oliver.mannion/Library/Caches/helm/repository/minio-11.10.2.tgz

client.go:128: [debug] creating 5 resource(s)
wait.go:66: [debug] beginning wait for 5 resources with timeout of 5m0s
ready.go:268: [debug] PersistentVolumeClaim is not bound: default/minio
ready.go:268: [debug] PersistentVolumeClaim is not bound: default/minio
ready.go:268: [debug] PersistentVolumeClaim is not bound: default/minio
ready.go:268: [debug] PersistentVolumeClaim is not bound: default/minio
ready.go:268: [debug] PersistentVolumeClaim is not bound: default/minio
ready.go:268: [debug] PersistentVolumeClaim is not bound: default/minio
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
ready.go:277: [debug] Deployment is not ready: default/minio. 0 out of 1 expected pods are ready
kubectl apply -f infra/lb-minio.yaml
service/minio-lb created
kubectl exec deploy/minio -- mc mb -p local/minio-flows
mc: <ERROR> Unable to make bucket `local/minio-flows`. Access Denied.
command terminated with exit code 1
make: *** [kubes-minio] Error 1
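The `Access Denied` error suggests that `mc` inside the MinIO pod has no authenticated alias configured for `local`. A hedged sketch of a possible fix, assuming the `minioadmin`/`minioadmin` root credentials set by the helm install above and the default in-pod endpoint `http://localhost:9000` (both are assumptions about this deployment):

```shell
# Sketch, not a verified fix: register an authenticated alias named "local"
# inside the pod before creating the bucket, then retry mc mb.
kubectl exec deploy/minio -- sh -c \
  'mc alias set local http://localhost:9000 minioadmin minioadmin && \
   mc mb -p local/minio-flows'
```

If the alias already exists but with stale credentials, `mc alias set` overwrites it, which is why it is safe to run before every `mc mb`.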

Pull an image from a private registry with KubernetesJob

Hello,
Is there a way to configure a Docker registry with credentials in a KubernetesJob? I want to pull an image from a private registry.

from prefect.deployments import Deployment
from prefect.infrastructure import KubernetesJob

# Build a deployment that runs main_flow as a Kubernetes Job,
# overriding the Job's image with one from a private registry.
reception_deployment: Deployment = Deployment.build_from_flow(
    name="private-flow",
    flow=main_flow,
    output="deployment-private-flow.yaml",
    description="private-flow",
    version="snapshot",
    work_queue_name="kubernetes",
    infrastructure=KubernetesJob(),
    infra_overrides=dict(
        image="myprivate-image:latest",
        env={},
        finished_job_ttl=300,
    ),
)

I cannot pull myprivate-image. Do I have to update the service_account?

Thank you
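The standard Kubernetes pattern for this is to create a docker-registry secret and attach it to the service account that the Job's pods run under, so every pod it creates gets an imagePullSecrets entry. A sketch, assuming the pods use the `default` service account in the current namespace; the secret name `regcred`, the registry URL, and the credentials below are all placeholders:

```shell
# Sketch: create a docker-registry secret with the private registry's
# credentials (all values here are placeholders).
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

# Attach the secret to the service account so pods created under it
# can pull from the private registry without per-pod configuration.
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
```

If the flow runs under a dedicated service account instead of `default`, patch that account, or pass the secret through the Job manifest itself.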
