rwaltersma / mongo-spark-jupyter
Docker environment that spins up a MongoDB replica set, Spark, and Jupyter Lab. Example code uses PySpark and the MongoDB Spark Connector.
Hi,
Does anyone know how to make the run.sh script compatible with Windows?
Thanks!
Hi! Thanks for making this docker available.
I'm facing a problem: when executing the command 'df = spark.read.format("mongo").load()',
I get this error: 'Py4JJavaError: An error occurred while calling o38.load.
: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@5a79e. Client view of cluster state is {type=REPLICA_SET, servers=[{address=mongo1:27017, type=REPLICA_SET_GHOST, roundTripTime=1.3 ms, state=CONNECTED}, {address=mongo2:27018, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo2}, caused by {java.net.UnknownHostException: mongo2}}, {address=mongo3:27019, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketException: mongo3 }, caused by {java.net.UnknownHostException: mongo3}}]'
I'm using the image on Windows 10, running the command './run.ps1' in Windows PowerShell.
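For what it's worth, the UnknownHostException for mongo2/mongo3 in the error above suggests the Windows host cannot resolve the container hostnames the replica set advertises. A quick stdlib check (a sketch; the hostnames mongo1/mongo2/mongo3 are taken from the error message, not from the repo config) would be:

```python
import socket

def unresolvable(hosts):
    """Return the hostnames the local resolver cannot map to an IP.

    The replica set advertises its members as mongo1/mongo2/mongo3, so
    the Spark driver must be able to resolve those names. On Windows
    this usually means adding entries for them (pointing at 127.0.0.1)
    to C:\\Windows\\System32\\drivers\\etc\\hosts.
    """
    bad = []
    for host in hosts:
        try:
            socket.gethostbyname(host)
        except socket.gaierror:
            bad.append(host)
    return bad

print(unresolvable(["mongo1", "mongo2", "mongo3"]))
```

If this prints the mongo2/mongo3 names, mapping them in the hosts file (or running the PySpark code inside a container on the same Docker network) should clear the timeout.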
I ran this using run.sh
and trained a classification model using Spark ML. After training, I wanted to save the model.
I tried model.write().overwrite().save('spark-model').
This creates a spark-model directory, but only the "_SUCCESS" files are saved in it; no actual model files were saved.
Then I checked the workers' file systems and found the model files under /home/jovyan/work
on the workers.
When I collected the files into one place and tried to load the model using PipelineModel.load,
I got this error:
----> 3 pipeline_model = PipelineModel.load('spark-model')
File /usr/local/spark/python/pyspark/ml/util.py:332, in MLReadable.load(cls, path)
    329 @classmethod
    330 def load(cls, path):
    331     """Reads an ML instance from the input path, a shortcut of `read().load(path)`."""
--> 332     return cls.read().load(path)
File /usr/local/spark/python/pyspark/ml/pipeline.py:256, in PipelineModelReader.load(self, path)
    255 def load(self, path):
--> 256     metadata = DefaultParamsReader.loadMetadata(path, self.sc)
    257     if 'language' not in metadata['paramMap'] or metadata['paramMap']['language'] != 'Python':
    258         return JavaMLReader(self.cls).load(path)
File /usr/local/spark/python/pyspark/ml/util.py:525, in DefaultParamsReader.loadMetadata(path, sc, expectedClassName)
    514 """
    515 Load metadata saved using :py:meth:`DefaultParamsWriter.saveMetadata`
    516
(...)
    522 If non empty, this is checked against the loaded metadata.
    523 """
    524 metadataPath = os.path.join(path, "metadata")
--> 525 metadataStr = sc.textFile(metadataPath, 1).first()
    526 loadedVals = DefaultParamsReader._parseMetaData(metadataStr, expectedClassName)
    527 return loadedVals
File /usr/local/spark/python/pyspark/rdd.py:1591, in RDD.first(self)
   1589 if rs:
   1590     return rs[0]
-> 1591 raise ValueError("RDD is empty")
ValueError: RDD is empty
How can I save and load the models without issues? Thanks.
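The "RDD is empty" error in the traceback comes from loadMetadata reading <path>/metadata, which is empty on the driver because each executor wrote its part files to its own local file system; only _SUCCESS made it back. The usual fix is to save to storage all nodes share (a mounted shared volume, HDFS, S3, etc.), or to collect every worker's part files, not just _SUCCESS, before loading. A small stdlib helper (a sketch; the directory layout is inferred from the save call and traceback above) to verify a collected model directory actually contains metadata before calling PipelineModel.load:

```python
import os

def has_model_metadata(model_path):
    """Check that a saved Spark ML model directory looks loadable.

    PipelineModel.load first reads <model_path>/metadata via
    sc.textFile(...).first(), which raises ValueError("RDD is empty")
    when the metadata part files are missing -- e.g. when they were
    written on workers and only _SUCCESS reached the driver.
    """
    meta_dir = os.path.join(model_path, "metadata")
    if not os.path.isdir(meta_dir):
        return False
    # Spark writes the metadata JSON as part-* files alongside _SUCCESS.
    return any(name.startswith("part-") for name in os.listdir(meta_dir))
```

Saving with a path every node can reach in the first place avoids the scattered part files entirely.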