Comments (8)
How would you like to specify this, i.e. what would fit the need?
Just FYI -- you can already do what you're trying to do here. As an alternative, you can stage whatever data you need on your PVC. For example, you can create a /data/imports directory and then configure Neo4j to use that as its import directory (dbms.directories.import, see https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/#config_dbms.directories.import)
from neo4j-helm.
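For reference, the official Neo4j Docker image maps environment variables of the form NEO4J_<setting, with dots replaced by underscores> onto neo4j.conf, so that setting can be passed through without editing the config file. A minimal sketch, assuming a hypothetical values key for extra environment variables (the key name is an assumption, not the chart's actual schema):

```yaml
# Sketch only: "core.extraVars" is a hypothetical values key; check your
# chart version for the real way to pass environment variables through.
# The env-var naming convention is the official Neo4j Docker image's:
# NEO4J_dbms_directories_import  ->  dbms.directories.import
core:
  extraVars:
    - name: NEO4J_dbms_directories_import
      value: /data/imports
```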
The ideal use case is to inject files into the deployed instance, for example to configure apoc.conf using a ConfigMap, or to share another plugin's config file across many instances.
Changing the import path is fine, but you cannot point it at another PVC because you cannot mount another PVC, and pre-populating a PVC is completely out of the question in our environment (AWS). It could be done with an initContainer that copies the file across to the existing path, but that seems a ridiculously long-winded approach to injecting a file into the image.
The second problem you then have to deal with is versioning the file and ensuring each node has the right version, which really goes against the immutable-infrastructure approach we're using. Being able to specify and maintain a single ConfigMap that contains the file(s), and be done, fits perfectly into our GitOps workflows and pipelines.
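For concreteness, the single versioned artifact meant here is just a plain ConfigMap; the name and the apoc setting below are illustrative:

```yaml
# Illustrative ConfigMap carrying an apoc.conf to be mounted into the pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: apoc-conf
data:
  apoc.conf: |
    # example content; any apoc settings would live here
    apoc.trigger.enabled=true
```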
I'm going to think on this one a bit. I see what you're after, but I'm still working out how to support this without massively overcomplicating the helm chart and creating a lot of untested paths and possibilities.
The default Neo4j docker container has a bunch of fixed mount points it can expect, for example /data, /logs, /import, and /plugins. I'm mulling how best to expose config for any of the built-in mount points. What happens right now is that a PVC gets mounted to /data and everything else is ephemeral container storage, which works fine. I don't want to pre-allocate PVCs for all of those different mount points because most people don't need anything other than /data most of the time. But just for argument's sake, we could do something like this:
neo4jMountPoints:
  plugins: ephemeral
  import: my-separate-PVC
  logs: ephemeral
A thing I'm not following about what you're saying, though: if pre-populating a PVC is out of the question for you, why would it help to be able to specify a mount point for import? You'd be mounting an empty drive, because you can't pre-populate a PVC, right?
Separately -- this kind of config could end up messy and confusing. In the example above, imagine I specified a PVC for logs. Oops, this could result in all 3 members mounting the same drive and stepping all over each other's logs, so mounting the same drive to all 3 cluster members would require something fancier
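The "something fancier" would typically be a StatefulSet volumeClaimTemplate, the standard Kubernetes mechanism that stamps out one claim per member instead of sharing a single PVC. A sketch using standard fields (the name and size are illustrative):

```yaml
# Each StatefulSet member (core-0, core-1, core-2) gets its own
# PersistentVolumeClaim named logs-<pod>, so logs never collide.
volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```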
Each of these mount points needs the option to be mounted as either a drive or as a ConfigMap. Which means that basically what's needed is the ability to specify all of what would normally go into volumes and volumeClaimTemplates.
At the point where you're doing that much extra configuration and parameter setting to make sure that the internal drives of the statefulset get set up properly -- I guess it isn't obvious to me that this is actually easier than using an initContainer to just curl whatever file (or access whatever ConfigMap) and put it on /data where you want it?
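The initContainer variant being compared would look roughly like this; the image, the URL, and the claim name "datadir" are all placeholders, not the chart's actual names:

```yaml
# Hypothetical sketch: fetch a file into the /data PVC before Neo4j starts.
initContainers:
  - name: fetch-import-file
    image: curlimages/curl:8.8.0   # any image with curl; version is illustrative
    command:
      - sh
      - -c
      - mkdir -p /data/imports && curl -fsSL https://example.com/seed.csv -o /data/imports/seed.csv
    volumeMounts:
      - name: datadir              # placeholder: must match the pod's actual PVC volume name
        mountPath: /data
```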
> Each of these mount points needs the option to be mounted as either a drive or as a ConfigMap. Which means that basically what's needed is the ability to specify all of what would normally go into volumes and volumeClaimTemplates.

Not quite. We just need to expose a map that expands the volumeMounts section in the core and readReplica pods, as well as a map that expands the volumes section in each of those, and you're done. Then it's up to the end user to add whatever else they need.

> At the point where you're doing that much extra configuration and parameter setting to make sure that the internal drives of the statefulset get set up properly -- I guess it isn't obvious to me that this is actually easier than using an initContainer to just curl whatever file (or access whatever configmap) and put it on /data where you want it?
You could use an initContainer to curl the file from some source and then copy it into the data mount point, if it's something you want mounted under /data. The problem there, of course, is that if the pod is restarted for some reason, you now need to ensure the file is the exact same version or you're going to get version drift, and you lose the option of having immutable infrastructure. You could mitigate that by injecting the ConfigMap and then copying it from the initContainer to the pod's /data mount point. All good, unless you're injecting not data files but configuration files, or credentials for backups or the like.
I'll put together a PR to show what I was after.
The initContainer approach also only works with the volume mounts already exposed in the container, which means there is no way of actually adding material to paths outside of those locations (such as /conf), which I think is the point you've brought up above.
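As a rough sketch of the map-expansion idea, mounting the ConfigMap from earlier into /conf (the key names "additionalVolumes" and "additionalVolumeMounts" are placeholders; the actual names merged in the PR may differ):

```yaml
# Hypothetical values fragment: expand the pods' volumes and volumeMounts.
additionalVolumes:
  - name: apoc-conf
    configMap:
      name: apoc-conf
additionalVolumeMounts:
  - name: apoc-conf
    mountPath: /conf/apoc.conf
    subPath: apoc.conf       # mount just the file so the rest of /conf isn't shadowed
```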
See #54
It'll need some more testing, but after linting it all looks good.
I'm going to close this because the PR is now merged, but I'll check in on some more testing, and I need to document this in the user guide, since otherwise this optionality will likely get missed.
One challenge I see with this feature is that, because it requires specifying volumes and volume mounts, there are going to be a lot of ways to break an install by making a small error in one of these configuration points. We may have to push back later and not support custom mounts like this, just because it's quite a deep area of possibility outside the scope of Neo4j the product.
I am unsure about the way it's been implemented, but maybe I'm just using it wrong.
I describe my issue with the current implementation on the pull request: #54