
sparkmagic's People

Contributors

aggftw, alex-the-man, alope107, amacaskill, apetresc, c2zwdjnlcg, dependabot[bot], devstein, edwardps, ellisonbg, ericdill, gaspardbt, gthomas-slack, hanyucui, hegary, itamarst, jeffersonezra, juhoautio, juliusvonkohout, linanzheng, ljubon, msftristew, pedrorossi, praveen-kanamarlapudi, rickystewart, strunevskiy, suhsteve, tomaszdudek7, utkarshgupta137, vallenki


sparkmagic's Issues

Installation is hard

See title; installation currently takes several manual steps, which should ideally all be subsumed by a single pip install.

Add %info to wrapper kernels

This should display to the user:

  • The Livy endpoint the kernel will hit
  • Sessions for the given endpoint (number, state, and type)

Incorrect visualizations on some sample data

Ran the following code:

hvac = sc.textFile('wasb:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv')
from pyspark.sql import Row
Doc = Row("TargetTemp", "ActualTemp", "System", "SystemAge", "BuildingID")
def parseDocument(line):
    values = [str(x) for x in line.split(',')]
    return Doc(values[2], values[3], values[4], values[5], values[6])
documents = hvac.filter(lambda s: "Date" not in s).map(parseDocument)
df = sqlContext.createDataFrame(documents)
df.registerTempTable('data')

and then

%select * from data limit 100

The visualizations, at least for the pie graphs, are wrong. Screenshot:

[Screenshot: screen shot 2016-01-06 at 6 18 47 pm]

Clearly there is no building where the desired target temperature is 1.

%hive show tables doesn't seem to work

This looks like a regression introduced with the improved output rendering change. %hive SHOW TABLES crashes, complaining that it doesn't know how to convert the output into a dataframe (the output is an empty list). It definitely fails when the list of tables is empty; it probably also fails when that list is nonempty.
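
A minimal sketch of the direction for a fix, assuming the conversion layer receives the parsed records as a Python list (names here are hypothetical, not the actual sparkmagic internals):

import pandas as pd

def records_to_dataframe(records):
    # An empty result (e.g. SHOW TABLES on a database with no tables)
    # should yield an empty DataFrame instead of raising.
    if not records:
        return pd.DataFrame()
    return pd.DataFrame(records)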

Return well formatted string/error from Livy to user

Right now, magics return the result from Livy without being aware of whether the string is a result or an error.

In the case of a result, magics should print the result back nicely.
In the case of an error, it should be clear to the user that an error happened in the cluster; a stack trace should be printed if available.
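
As a sketch of what the branching could look like, assuming the client has access to the statement's output object from the Livy REST API (which, as I understand it, reports a status of "ok" or "error", with evalue/traceback on failure):

def render_statement_output(output):
    # Turn a Livy statement output object into user-facing text.
    if output.get("status") == "ok":
        # Successful result: hand back the plain-text representation.
        return output["data"]["text/plain"]
    # Error: make it obvious that something failed in the cluster,
    # and include the stack trace when Livy provides one.
    message = "An error occurred in the Spark cluster:\n{}".format(
        output.get("evalue", "unknown error"))
    traceback = output.get("traceback")
    if traceback:
        message += "\n" + "".join(traceback)
    return message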

Create widget for session manager

This would prevent users from having to type their credentials in clear text.

Users should not be allowed to manage sessions through text subcommands when running the notebook in the browser; it should still be allowed for users working in a terminal, though.

Give pandas df back to user when user runs sql query

The pandas df being constructed is not being passed back to the user for the user to play with.

So, user does something like:

%spark -c sql SELECT * FROM table

and even though the result is being constructed into a pandas df, the user cannot visualize it.

Instead, a user could do something like:

%spark -c sql -v myDf SELECT * FROM table

and the result would be available in myDf.
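
If we go with a -v style flag, the magic could push the constructed pandas DataFrame into the notebook's namespace through IPython; a rough sketch (the class name, flag handling, and variable name are hypothetical):

from IPython.core.magic import Magics, magics_class, line_magic

@magics_class
class SparkSqlMagics(Magics):
    @line_magic
    def spark(self, line):
        # ... parse `line`, send the query to Livy, and build a pandas
        # DataFrame `df` from the response (omitted here) ...
        df = ...
        var_name = "myDf"  # would come from the hypothetical -v flag
        # Push the dataframe into the notebook's namespace so the user
        # can keep working with it locally after the magic returns.
        self.shell.user_ns[var_name] = df
        return df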

We should discuss the syntax before implementing this. Pinging @ellisonbg

Restructure repos

We should create separate repos for a Livy client, a configuration getter, a logger, and so on. Then we should organize our Python files into folders that actually make sense.

In doing this, we might want to add docstring comments for public methods in every repo.

Revise API

Consolidate magics and commands. Clean up UX.

SQL queries are not escaped properly

From livyclient.py:

def execute_sql(self, command):
    return self.execute('sqlContext.sql("{}").collect()'.format(command))

def execute_hive(self, command):
    return self.execute('hiveContext.sql("{}").collect()'.format(command))

If the SQL query the user passes in has double-quotes in it (double-quotes are a valid string delimiter in Spark SQL), then this is liable to cause an error. Moreover, if the query happens to have a stray " in it (maybe as a result of user error), then this will cause a syntax error in the rest of the running code.
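
One possible direction, not necessarily the fix the project will adopt, is to let json.dumps produce a correctly quoted and escaped string literal before interpolating it into the generated code:

import json

def execute_sql(self, command):
    # json.dumps wraps the command in double quotes and escapes any embedded
    # quotes/backslashes, so the generated statement stays syntactically valid
    # even if the user's SQL contains a stray ".
    return self.execute('sqlContext.sql({}).collect()'.format(json.dumps(command)))

def execute_hive(self, command):
    return self.execute('hiveContext.sql({}).collect()'.format(json.dumps(command)))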

Should the default Livy URL be localhost:8998?

See title. I wonder if this change would increase the odds that someone can do a clean install of the wrapper kernels and have everything "just work" without having to mess with configurations at all.

User is not notified if context creation fails

I just ran into this when a HiveContext failed to be created in Scala and I wasn't notified that an error had happened. When I then tried to run %hive SHOW TABLES, an error was thrown saying that hiveContext wasn't defined.

Manage livy endpoint from magics

This will be the API (a sketch of typical usage follows the list):

  • %spark add session_name language conn_string
    will create a session against the endpoint specified
  • %spark info
    will display the info for the sessions created in that notebook
  • %spark config <configuration_overrides>
    will add session configs for subsequent sessions
  • %spark info conn_string
    will list the sessions for a given livy endpoint by providing session_id, language, state
  • %spark delete session_name
    will delete a session by its name from the notebook that created it
  • %spark delete conn_string session_id
    will delete a session for a given endpoint by its id
  • %spark cleanup
    will delete all sessions created by the notebook
  • %spark cleanup conn_string
    will delete all sessions for the given Livy endpoint
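
As a sketch of what a typical notebook flow might look like with this proposed API (the session name, the conn_string placeholder, and the executorMemory setting are just illustrative):

# Create a session named my_session running Python against the endpoint
# described by conn_string (format still to be decided):
%spark add my_session python conn_string

# Show the sessions created by this notebook:
%spark info

# Settings applied to sessions created from now on:
%spark config {"executorMemory": "4g"}

# Tear down:
%spark delete my_session
%spark cleanup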

This covers #56, #75, and #76 for magics in Python kernel.
We are not designing the API for the wrapper kernels here and we'll tackle that as a separate improvement.

ping @msftristew @ellisonbg to take a look when they can

Allow user to return dataframes not constructed through SQL queries

A thought I just had: Currently we only return responses as dataframes if their query is a SQL query. Since we already have the code for parsing dataframes from JSON responses, we might provide an option that lets users say "I'm going to output a bunch of JSON here, please parse it and return it as a dataframe", even if their code isn't a SQL query. This could be useful and could allow users to get arbitrary semi-structured data back from the remote cluster. I'm not sure how feasible this is, or if this would be too error-prone, but I think it makes sense and could be really useful in some niche scenarios.
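
A rough sketch of the parsing side, under the assumption that the user emits one JSON object per output line (the function name is hypothetical):

import json
import pandas as pd

def json_lines_to_dataframe(output_text):
    # One JSON object per line of cell output -> one row in the DataFrame.
    rows = [json.loads(line) for line in output_text.splitlines() if line.strip()]
    return pd.DataFrame(rows)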

Rethink configuration

This is part of a greater architectural issue around configurations. There should be a configuration module which:

  1. Automatically loads data from a configuration file,
  2. loads the configuration with an appropriate set of defaults if things are missing from the configuration file, and
  3. allows a developer to substitute a different configuration module if necessary (e.g., for tests). A minimal sketch follows this list.
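
A minimal sketch of such a module, assuming a JSON config file; the path, key names, and defaults below are placeholders, not the settled configuration format:

import json
import os

_DEFAULTS = {
    "livy_url": "http://localhost:8998",
    "log_level": "INFO",
}
_overrides = {}

def load(path="~/.sparkmagic/config.json"):
    # 1. Load overrides from the configuration file if it exists.
    global _overrides
    expanded = os.path.expanduser(path)
    if os.path.exists(expanded):
        with open(expanded) as f:
            _overrides = json.load(f)

def get(key):
    # 2. Fall back to sensible defaults for anything missing from the file.
    return _overrides.get(key, _DEFAULTS[key])

def override(key, value):
    # 3. Let tests (or other developers) substitute values programmatically.
    _overrides[key] = value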

Improve error when credentials are not provided

Currently, when you fail to specify credentials in the config file, the kernel crashes. The exception (that you need to provide credentials for Livy) is visible in Jupyter's logs but it would be ideal if the kernel didn't crash and instead the error message was written to the screen.

Change Magic Contract

Change the way that the magic is used so that it is run once to specify the livy connection and all subsequent cells are run against the remote cluster. This clears up the confusion between the local and remote namespaces without being as heavyweight of a solution as a new kernel.

Improve parsing for dataframe generation and gracefully handle errors

  1. Improve dataframe parsing. In particular, the pyspark Livy client should not be calling `eval`; a possible replacement is sketched after this list. This may require further investigation into the structure of the strings that Livy may return to the client.
  2. Tighten up error handling. This requires enumerating all the possible errors that we may run into during parsing, and converting them smartly into DataFrameParseExceptions so that error messages can be displayed to the user nicely.
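
For point 1, a possible replacement for the eval call is a real JSON parse with explicit error conversion; a sketch, with DataFrameParseException standing in for whatever exception type we settle on:

import json
import pandas as pd

class DataFrameParseException(Exception):
    """Raised when the cluster's output cannot be turned into a dataframe."""

def parse_rows(records_text):
    try:
        rows = [json.loads(line) for line in records_text.splitlines() if line.strip()]
    except ValueError as e:
        # Convert low-level parse failures into a typed error that the
        # magics layer can display to the user nicely.
        raise DataFrameParseException("Could not parse output as JSON rows: {}".format(e))
    return pd.DataFrame(rows)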

`ipywidgets.FlexBox` deprecated in version 5.0

I think the current version of ipywidgets is 4.1.1, so this is not a pressing issue (it's not deprecated yet), but for future-proofing we should consider moving the autovizwidget away from that model.
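
When we do move, the replacement is probably a thin wrapper around the non-deprecated containers; a sketch assuming HBox/VBox are available (they are in ipywidgets 5, and may need a version check on 4.x):

import ipywidgets as widgets

# Instead of widgets.FlexBox(children=children, orientation="vertical"):
def vertical_container(children):
    return widgets.VBox(children=children)

def horizontal_container(children):
    return widgets.HBox(children=children)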

Missing fields produce Altair errors

I was doing the following hive query when I discovered an error:

%hive SELECT * FROM hivesampletable WHERE deviceplatform = 'Android'

With the following pandas df:

records_text = '{"clientid":"8","querytime":"18:54:20","market":"en-US","deviceplatform":"Android","devicemake":"Samsung","devicemodel":"SCH-i500","state":"California","country":"United States","querydwelltime":13.9204007,"sessionid":0,"sessionpagevieworder":0}\n{"clientid":"23","querytime":"19:19:44","market":"en-US","deviceplatform":"Android","devicemake":"HTC","devicemodel":"Incredible","state":"Pennsylvania","country":"United States","sessionid":0,"sessionpagevieworder":0}'
json_array = "[{}]".format(",".join(records_text.split("\n")))
import json
d = json.loads(json_array)
result = pd.DataFrame(d)
result

the NaN for querydwelltime produces the following error:

Javascript error adding output!
TypeError: Cannot read property 'prop' of undefined
See your browser Javascript console for more details.

The vegalite spec produced by Altair is:

{'config': {'width': 600, 'gridOpacity': 0.08, 'gridColor': u'black', 'height': 400}, 'marktype': 'point', 'data': {'formatType': 'json', 'values': [{u'deviceplatform': u'Android', u'devicemodel': u'SCH-i500', u'country': u'United States', u'sessionpagevieworder': 0, u'state': u'California', u'clientid': u'8', u'sessionid': 0, u'querytime': u'18:54:20', u'devicemake': u'Samsung', u'market': u'en-US', u'querydwelltime': 13.9204007}, {u'deviceplatform': u'Android', u'devicemodel': u'Incredible', u'country': u'United States', u'sessionpagevieworder': 0, u'state': u'Pennsylvania', u'clientid': u'23', u'sessionid': 0, u'querytime': u'19:19:44', u'devicemake': u'HTC', u'market': u'en-US', u'querydwelltime': nan}]}}

For commentary and possible fixes, this issue is tracked on the Lightning renderer side in lightning-viz/lightning-python#34.

We might need to address this in Altair too.
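
On our side, one possible workaround is to sanitize the dataframe before handing it to the renderer, replacing NaN with None so the emitted spec serializes to valid JSON; a sketch:

import pandas as pd

def sanitize_for_vegalite(df):
    # NaN serializes as the invalid token `nan`; None serializes as JSON null.
    return df.where(pd.notnull(df), None)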

cc @ellisonbg @mathisonian

Improve docstring shown for the %spark magic

When a user runs %spark? in the notebook, the docstring should include a short paragraph stating that the magic allows you to connect to a Livy endpoint by creating sessions tied to a particular language, and that every session can run code in that language plus Spark SQL.

Error when visualizing empty dataframe

Steps to reproduce:

  1. %sql SHOW TABLES or %hive SHOW TABLES when there are no tables (i.e. the result dataframe is empty).

  2. The data viz widget pops up. Switch from "table" to any of the other chart styles.

  3. You get an exception. The exception doesn't go away even if you switch back to the table graph type; a possible guard is sketched below the error.

    ValueError: cannot label index with a null key
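
A likely fix direction is to short-circuit chart rendering for empty dataframes; a sketch (the function names here are illustrative, not the actual autovizwidget internals):

def render_chart(df, render_fn):
    if df.empty:
        # Nothing to encode; show a message instead of letting the chart
        # code fail with "cannot label index with a null key".
        print("No data to visualize.")
        return
    render_fn(df)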
    

Explore alternate SQL contexts

Sparkmagic currently supports only vanilla SQLContexts as a first-class interface. If a user wants to use an alternate context (like a HiveContext), they can do so through the pyspark or scala interfaces, but they must manage the context themselves. It may be useful to allow the user to specify the type of SQLContext to use with the SQL interface.
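
Building on the execute_sql snippet quoted above, one shape this could take is letting the client target a named context; a sketch, not a settled design (and quoting of the command aside, per the escaping issue above):

def execute_sql(self, command, context_name="sqlContext"):
    # context_name could be "sqlContext", "hiveContext", or any other
    # SQL-like context the user has created on the cluster.
    return self.execute('{}.sql("{}").collect()'.format(context_name, command))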
