dispatch / reboot
Scala wrapper for the Java AsyncHttpClient.
Home Page: https://dispatchhttp.org
License: GNU Lesser General Public License v3.0
A quick example to illustrate the problem:
This works as intended:
val req1 = url("http://foo.com") <<? Map("q" -> "a")
req1.url
>> "http://foo.com/?q=a"
If segments are added after the query map, the query params get duplicated:
val req2 = req1 / "bar" / "baz"
>> "http://foo.com/bar/baz?q=a&q=a&q=a&q=a"
Hello guys. First of all, thanks for your product; it's really pleasant to work with. I need the ability to poll our internal API with POST requests, keeping the connection alive between requests. Is that possible with dispatch? What I tried:
val _http: Http = new Http().configure { builder =>
  builder.setConnectionTimeoutInMs(1000)
  builder.setAllowPoolingConnection(true)
  builder.setMaximumConnectionsTotal(1000)
  builder.addRequestFilter(new ThrottleRequestFilter(1))
  builder
}

def sendRequest(url: String, body: String): String = {
  val request = dispatch.url(url)
    .addHeader("Connection", "Keep-Alive")
    .setContentType("application/json", "UTF-8")
    .POST << body // note: << returns a new request, so its result must be used
  val result = _http(request OK as.String).either
  result() match {
    case Right(content)         => content
    case Left(StatusCode(204))  => ""
    case Left(StatusCode(code)) => ""
    case Left(_)                => ""
  }
}
but it doesn't help. I rechecked that the backend doesn't close connections on the server side, because other callers work fine with it. So, am I wrong somewhere? Thanks for your help.
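One thing worth checking, independent of the connection settings: in recent dispatch (reboot) versions the request value is immutable, so an expression like `request << body` returns a new request rather than mutating the receiver, and the result has to be captured or the body is never attached (in older versions the underlying RequestBuilder was mutable, so check which you're on). The same pitfall in miniature, with a hypothetical `Req` stand-in rather than dispatch's own type:

```scala
// Hypothetical stand-in for an immutable request builder like dispatch's Req.
case class Req(headers: Map[String, String] = Map.empty, body: String = "") {
  def <<(b: String): Req = copy(body = b) // returns a NEW Req; receiver unchanged
}

val request = Req()
request << "payload"                // result discarded: request is unchanged
val withBody = request << "payload" // result captured: body is set

println(request.body.isEmpty) // true: the discarded call attached nothing
println(withBody.body)        // payload
```

If the original code follows this discarded-result pattern, the POST body silently goes missing, which can look like a connection problem on the server side.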
Would be nice to have a release that is built against 2.11.
One thing I'd like to be able to do is operate on promise projections, like this:
def req = url("http://api.meetup.com/2/member/self")
for { JObject(fs) <- Http(req > as.lift.Json).either.right }
yield fs
If I try to do this with the current release, the compiler tells me that the projection doesn't implement a filter method.
I took some inspiration from Scala's Either projections and added what I thought would work in a new branch.
This doesn't quite do what I thought it might, so I wanted to get your thoughts before I go further.
Testing this in the liftjson module I get this error:
scala> (for { JObject(fs) <- Http(url("http://api.meetup.com/2/member/self") OK as.lift.Json).either.right } yield fs)()
<console>:17: error: constructor cannot be instantiated to expected type;
found : net.liftweb.json.JsonAST.JObject
required: Option[Either[Nothing,net.liftweb.json.package.JValue]]
(for { JObject(fs) <- Http(url("http://api.meetup.com/2/member/self") OK as.lift.Json).either.right } yield fs)()
It's very possible that I'm not unraveling something correctly. The filter method I added to the PromiseEither Left/Right projections seems in line with the same definitions in Scala's Left/Right projections.
We want to default to non-20x status codes being a fatal error, but this behavior should be obvious to the reader and also obvious how to override it.
I just started working on some streaming clients in 0.9.3 and am getting runtime NullPointerExceptions. As a sanity check I tested the same code under 0.9.2 and had no issue :/
I stripped my example down so you can copy and paste it in a console.
import dispatch._
Http(:/("stream.meetup.com") / "2" / "rsvps" > as.lift.stream.Json(println))()
The NPE thrown is listed below. It looks like it's thrown on this line, but I don't see how.
The content type header returned by the service is
Content-Type: application/json; charset=utf-8
java.lang.NullPointerException
at scala.collection.JavaConversions$JListWrapper.isEmpty(JavaConversions.scala:617)
at scala.collection.TraversableLike$class.headOption(TraversableLike.scala:422)
at scala.collection.JavaConversions$JListWrapper.headOption(JavaConversions.scala:615)
at dispatch.stream.Strings$class.onHeadersReceived(strings.scala:19)
at dispatch.as.lift.stream.Json$$anon$1.onHeadersReceived(json.scala:8)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.updateHeadersAndInterrupt(NettyAsyncHttpProvider.java:1467)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.access$2300(NettyAsyncHttpProvider.java:137)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider$HttpProtocol.handle(NettyAsyncHttpProvider.java:2230)
at com.ning.http.client.providers.netty.NettyAsyncHttpProvider.messageReceived(NettyAsyncHttpProvider.java:1128)
at org.jboss.netty.handler.stream.ChunkedWriteHandler.handleUpstream(ChunkedWriteHandler.java:141)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:600)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:584)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:445)
at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:372)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:246)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Hi,
I'm using the gnieh/sohva CouchDB client, which uses dispatch for HTTP. When you don't want to use a couch database object any more, you call .shutdown(); all that does is call dispatch's HttpExecutor.shutdown(). On calling .shutdown(), bad things happen: all cores spin up to 100% and the JVM is prevented from exiting normally.
After poking about a bit with JProfiler, I see that NettyAsyncHttpProvider.close runs just once and does not exit. However, looking at the code I can't see what could be looping forever.
It doesn't look at first sight like a sohva issue, as they just call dispatch.shutdown() without doing any other work. Is it you or Netty?
I recently upgraded from dispatch 0.11.0 to 0.11.1 and noticed all my POST requests were now broken. Using Wireshark to inspect the headers, I saw that my requests were now adding two Content-Type headers to all POST requests: one that I hadn't specified, Content-Type: text/plain; charset=UTF-8, and the one that I had, Content-Type: application/json. In 0.11.0 only the JSON Content-Type was added to my POST request.
After looking through the changelog for 0.11.1, I saw the PR that ended up changing the behavior of POSTs (#72). Now setting the body with << will add a text/plain content type if one isn't set, whereas before it wouldn't. In my code I had set the headers after I had set the body, and in this case Dispatch will happily add two Content-Types. I've got a simple project that duplicates the issue in my code and prints out the request here: https://github.com/efuquen/dispatch-multiple-content-type/tree/master . I see two things wrong with this.
First, a request should not carry multiple Content-Type headers. In this SO answer about multiple Cache-Control headers (http://stackoverflow.com/questions/4371328/are-duplicate-http-response-headers-acceptable/4371395#4371395) the author references the spec, which states, in short, that a header can be repeated only if it is already defined to validly have multiple values; the repeated headers should then be treated as if they had been a single header with the multiple values listed out. So, for example:
Cache-Control: no-store
Cache-Control: no-cache
is fine because it's equivalent to:
Cache-Control: no-store, no-cache
But Content-Type is not specified by the HTTP spec to validly have multiple values; see its definition at http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.17 . Therefore you should not be allowed to specify multiple Content-Type headers.
Second, IMO the correct behavior is that if a user specifies a header, in any order while building the request (my bug would not have manifested if I had specified the headers before setting the body with <<), it should override any default set by Dispatch; the user clearly has no knowledge that a default header is being set, and in the end that is what causes the conflict. If the user were to specify two Content-Type headers, which I haven't tested with the current Dispatch, I would expect the second to overwrite the first, or an error to be thrown.
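The override semantics proposed above can be sketched as a simple last-wins merge: library defaults are applied first, and anything the user sets replaces them regardless of the order in which the request was built. The names here are illustrative, not dispatch's API:

```scala
// Defaults the library would add, e.g. when << sets a body.
val libraryDefaults = Map("Content-Type" -> "text/plain; charset=UTF-8")

// User-specified headers always win over library defaults, and a Map
// guarantees a single value per header name.
def effectiveHeaders(userSet: Map[String, String]): Map[String, String] =
  libraryDefaults ++ userSet

val headers = effectiveHeaders(Map("Content-Type" -> "application/json"))
println(headers) // Map(Content-Type -> application/json): one header, user wins
```

Keying the header store by name, rather than appending to a list, makes the duplicate-header state unrepresentable in the first place.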
What license is reboot and the previous version being released under?
It is quite likely that I am using this the wrong way, but here is what I am doing. I need to make many HTTP requests, with the result of one request being used as the input for a handful of other requests, which are then used for more requests, etc. So, I figured the monadic approach of Promises in a for comprehension was the way to go (with a semaphore to be sure I didn't create too many requests at once).
I am also using scalaz and its EitherT monad transformer to deal with Promise[Either[A, B]] in a sane fashion. Perhaps I am misusing it here?
I expected (in my naiveté, perhaps) that the AsyncHttpClientCallback threads would be short-lived, until the promise was fulfilled and the value used for the next request. I have something like this:
val bigPromiseThatDoesEverything: EitherT[Promise, A, B] = for {
  widgetIds       <- getWidgetIds                                  // widgetIds: List[Int]
  widgets         <- widgetIds.map(getWidget).sequence.map(_.join) // widgets: List[Widget]
  componentPrices <- getPrices(widgets.flatMap(_.components))      // componentPrices: List[Int]
} yield componentPrices

bigPromiseThatDoesEverything.run.apply()
Obviously, my real code is a bit more complicated, but I hope that expresses what I am trying to do.
The actual result is that the AsyncHttpClientCallback threads from the earlier requests are used for the later requests, and not returned to the pool, or killed, or anything. Before long, I have 1000 threads running.
Am I approaching this the wrong way? Can you offer any advice?
Thanks!
I wrote some code of the form:
val baseUrl = url("http://foo.com")
List("bar", "baz", "hoge") map { path =>
  val url = baseUrl / path
  Http(url OK as.String)
  // ...do stuff...
}
Of course, I intended to generate the following URLs:
But, due to the mutable behaviour of RequestBuilder, the actual URLs generated were:
In my opinion this is quite unintuitive (un-Scala-like) behaviour. Suggestions:
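The accumulation can be reproduced with any shared mutable builder; here is the same pitfall in miniature, with a plain StringBuilder standing in for the mutable RequestBuilder:

```scala
// A shared mutable builder, standing in for the mutable RequestBuilder.
val base = new StringBuilder("http://foo.com")
val mutated = List("bar", "baz", "hoge").map { path =>
  base.append("/" + path).toString // every call mutates the shared base
}
println(mutated) // List(http://foo.com/bar, http://foo.com/bar/baz, http://foo.com/bar/baz/hoge)

// Rebuilding from an immutable value on each iteration avoids the accumulation.
val fresh = List("bar", "baz", "hoge").map(path => "http://foo.com/" + path)
println(fresh) // List(http://foo.com/bar, http://foo.com/baz, http://foo.com/hoge)
```

With a mutable builder, each `/` call appends to the same underlying state, so later iterations see the segments added by earlier ones; an immutable request value sidesteps this entirely.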
For example, "餌" is encoded to %990C, which doesn't decode to "餌" using java.net.URLDecoder. The default charset in my environment is UTF-8. The correct URL encoding for "餌" under UTF-8 is %E9%A4%8C.
scala> import dispatch._
import dispatch._
scala> val enc = UriEncode.path("餌")
enc: String = %990C
scala> java.net.URLDecoder.decode(enc, "UTF-8")
res11: String = �0C
The implication is that requests built like host("localhost", 8080) / "餌" won't be interpreted as expected (unless the server knows how to decode %990C to "餌").
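For comparison, percent-encoding the UTF-8 bytes directly gives the expected escape. A quick check with java.net.URLEncoder (which targets query strings rather than paths, but shows the correct byte-level encoding):

```scala
import java.net.{ URLDecoder, URLEncoder }

// URLEncoder percent-encodes the UTF-8 bytes of the character.
val enc = URLEncoder.encode("餌", "UTF-8")
println(enc)                             // %E9%A4%8C
println(URLDecoder.decode(enc, "UTF-8")) // 餌
```

The %990C seen above looks like the code point's hex value (U+990C) was emitted directly instead of its UTF-8 byte sequence.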
A more detailed bug description is forthcoming, but I'm jotting this here for reference. It looks like it may have to do with some expectation of parameter encoding.
I was using 0.11.0 and the callback sent to the server didn't seem to be encoded correctly, but when I switched back to 0.10.0 things seemed to work fine. Also (I haven't verified this yet), when the dispatch.oauth.SomeXXX paths ended with a trailing slash, the target authorization server (Meetup) didn't agree with dispatch's generated request signature. Again, I'll verify some of this later in a more detailed report, but I'm jotting these things down before I forget!
It seems that if we can it would be nicer to capture that as a usable structure rather than throwing an exception.
https://github.com/dispatch/reboot/blob/master/core/src/main/scala/handlers.scala#L37
When trying to build a dynamic URL to take advantage of URL encoding, a call to host("domain.com") returns a RequestBuilder; however, when provided as an arg to Http I get a Promise(-incomplete-).
If I prepend the domain with http://, i.e. host("http://domain.com"), I get a Promise(!http://http://pubsub.pubnub.com/!), which is obviously not a valid URL. I am stuck here.
It seems that Dispatch by default accepts all SSL certificates (that is, does not check the certificate at all).
(Tested with dispatch-core_2.10-0.11.0.jar, async-http-client-1.7.16.jar, netty-3.6.3.Final.jar)
It seems this default behavior comes from AsyncHttpClient's Netty provider. Personally, I don't think this is a good default (see e.g. https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-bugs.html), but at the very least the Dispatch documentation should mention it (and include an example of how to configure SSL properly, something that AsyncHttpClient's docs seem to omit).
I tried to download a website with UTF-8 encoding; there are Chinese characters in it.
val s = Http(url("http://test") OK as.String)
println(s())
The content is printed, but all the Chinese characters are displayed as ??.
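This is likely a charset issue: if the response's Content-Type carries no charset parameter, as.String typically falls back to HTTP's historical default of ISO-8859-1 rather than UTF-8, and somewhere along the way the characters get re-encoded into a charset that cannot represent them. The effect in miniature:

```scala
val s = "中文"

// Decoding UTF-8 bytes with the right charset round-trips cleanly.
println(new String(s.getBytes("UTF-8"), "UTF-8")) // 中文

// Encoding into a charset that cannot represent the characters
// (US-ASCII here) replaces each one with '?'.
println(new String(s.getBytes("US-ASCII"), "US-ASCII")) // ??
```

A common workaround is to read the body with an explicit charset, e.g. via a response handler that calls getResponseBody("UTF-8") on the underlying response, if your dispatch version exposes one.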
Just wondering, is there a way to limit the number of threads used by Dispatch in version 0.11.0?
When I bombard my Play service (which in turn does async HTTP requests with Dispatch) on Heroku, I get a lot of "unable to create new native thread" errors. A look at New Relic monitoring indicates that memory is not an issue, so I suspect that I may be running into the max-threads limit.
Thanks.
It's awkward to use << with a PUT, even though PUTting an application/x-www-form-urlencoded body is a legitimate use case.
Hi, could someone take a look at https://gist.github.com/satyagraha/6741946 and explain what I'm doing wrong? This a complete runnable program but it seems to block at line 42...
This is with v0.11.0 of the library and Scala 2.9.3 - thanks!
Hey, I want to use a SOCKS proxy for an HTTP request, but I can't figure out how to do it with dispatch; the ProxyServer object doesn't have a SOCKS option. Basically, I would like something similar to java.net.Proxy with Type.SOCKS.
When I try to set the JVM properties socksProxyHost and socksProxyPort, it doesn't seem to work either.
Is there any way to achieve this with dispatch?
Thanks!
Based on a mailing list discussion
The Scala promise coming in Dispatch 0.10 will apparently fix this. For now, this is a workaround for Dispatch 0.9.5:
http(request > handler).map(identity)
This forces dispatch to register the listener associated with the request handler.
Getting an encoded stream is rather common from big data APIs. Having a default handler similar to the basic as.stream.Lines would be really handy for this kind of use case.
Here's an example implementation to connect to a gzip encoded stream:
Http(
  host("stream.data.com")
    .secure
    .addHeader("Accept-Encoding", "gzip")
    / "streams" / "track" / "tweets.json"
    > as.stream.EncodedLines(println)
)()
I am using dispatch_core version 0.10.1.
An Http client created using the configure method doesn't shut down if we don't set executors explicitly. Is that expected?
val http = new Http().configure { builder =>
  builder.setConnectionTimeoutInMs(1000)
  // builder.setExecutorService(Executors.newCachedThreadPool())
  builder
}
http.shutdown
You can have a look at part of the jstack output here:
https://gist.github.com/pankajmi/5812305
EDIT: and CPU usage is very, very high.
When I try to create a lot of requests (for example, mapping quite a big list to Promise), I get java.net.SocketException: Too many open files. Code:
val result = http.promise.all(
  List.fill(1000)("http://en.wikipedia.org/wiki/Main_Page").map { loc =>
    http(url(loc) OK as.String).map(str => Right(str.size))
      .recover {
        case e: Throwable => Left(e.toString)
      }
      .onSuccess { case res =>
        println(res)
      }
  }
)()
I looked inside it a bit, and it seems that the problem is in the underlying AsyncHttpClient - AsyncHttpClient/async-http-client#220.
To sum up, each call to AsyncHttpClient.executeRequest results in a new socket being opened, which not only leads to the aforementioned error but also kills performance (lots of timeouts happen, for instance). And there seems to be no built-in way to throttle those connections inside their lib.
Such behavior sort of kills the whole "async" principle.
I can't think of a "proper" way to resolve the problem right now, but here's an idea: maybe we can wrap those executeRequest calls with some kind of proxy Future, which would execute the actual call on a dedicated thread, and that thread would block on a semaphore; that's how it is done in ThrottleRequestFilter, for example.
What do you think?
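A rough sketch of the semaphore idea, using only the standard library (no dispatch types; executeRealRequest is a placeholder for the actual blocking call):

```scala
import java.util.concurrent.Semaphore
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// At most four simulated requests may be in flight at once.
val permits = new Semaphore(4)

// Placeholder for the real, blocking executeRequest call.
def executeRealRequest(loc: String): String = { Thread.sleep(10); loc }

def throttled(loc: String): Future[String] = Future {
  permits.acquire()
  try executeRealRequest(loc)
  finally permits.release()
}

val all = Future.sequence(List.fill(20)("http://example.com").map(throttled))
println(Await.result(all, 30.seconds).size) // 20
```

The acquire/release pair caps the number of concurrent sockets regardless of how many Futures are created; the remainder simply queue on the semaphore.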
First identified on the mailing list:
https://groups.google.com/forum/?fromgroups#!topic/dispatch-scala/CEZg9H32kX8
As far as I can tell this would have no effect on a production environment, but it's very annoying in dev.
Any ideas why this would be happening? :
com.ning.http.client.providers.jdk.JDKFuture.get (JDKFuture.java:143)
com.ning.http.client.providers.jdk.JDKFuture.get (JDKFuture.java:118)
dispatch.HttpExecutor$$anonfun$apply$2$$anonfun$apply$3.apply(execution.scala:50)
scala.util.Try$.apply(Try.scala:161)
dispatch.HttpExecutor$$anonfun$apply$2.apply(execution.scala:50)
dispatch.HttpExecutor$$anonfun$apply$2.apply(execution.scala:50)
dispatch.package$$anon$1.run(package.scala:18)
scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
scala.concurrent.forkjoin.ForkJoinTask.doExec (ForkJoinTask.java:260)
…ala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask (ForkJoinPool.java:1339)
scala.concurrent.forkjoin.ForkJoinPool.runWorker (ForkJoinPool.java:1979)
scala.concurrent.forkjoin.ForkJoinWorkerThread.run (ForkJoinWorkerThread.java:107)
caused by java.net.SocketException: Too many open files
java.net.Socket.createImpl (Socket.java:447)
java.net.Socket.connect (Socket.java:577)
sun.security.ssl.SSLSocketImpl.connect (SSLSocketImpl.java:618)
sun.net.NetworkClient.doConnect (NetworkClient.java:175)
sun.net.www.http.HttpClient.openServer (HttpClient.java:378)
sun.net.www.http.HttpClient.openServer (HttpClient.java:473)
sun.net.www.protocol.https.HttpsClient. (HttpsClient.java:270)
sun.net.www.protocol.https.HttpsClient.New (HttpsClient.java:327)
…ps.AbstractDelegateHttpsURLConnection.getNewHttpClient (AbstractDelegateHttpsURLConnection.java:191)
…n.net.www.protocol.http.HttpURLConnection.plainConnect (HttpURLConnection.java:974)
…tocol.https.AbstractDelegateHttpsURLConnection.connect (AbstractDelegateHttpsURLConnection.java:177)
….net.www.protocol.https.HttpsURLConnectionImpl.connect (HttpsURLConnectionImpl.java:153)
…s.jdk.JDKAsyncHttpProvider$AsyncHttpUrlConnection.call (JDKAsyncHttpProvider.java:243)
java.util.concurrent.FutureTask$Sync.innerRun (FutureTask.java:334)
java.util.concurrent.FutureTask.run (FutureTask.java:166)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:615)
java.lang.Thread.run (Thread.java:724)
I am trying to use dispatch (reboot) 0.9.3 to access a long-running HTTPS endpoint, but I consistently get a connection timeout after 60s.
I am trying to set the timeout to 300s, and in tests with the same code against a non-SSL HTTP endpoint the code works as expected: it holds until the response is returned.
Am I configuring async-http-client or dispatch improperly for HTTPS endpoints?
It seems that for an HTTPS endpoint dispatch is not connecting fully to the HTTPS service.
The code I use is:
val request = dispatch.url(endpt).secure
val body = requestBodyFor(inquiry).toString
request << body.toString
// actually I hold http in a variable that is lazily created, but this is close enough
val response = http(request > as.String).either

def http: dispatch.Http = {
  val client = new AsyncHttpClient(
    new AsyncHttpClientConfig.Builder()
      .setAllowPoolingConnection(true)
      .setAllowSslConnectionPool(true)
      .setConnectionTimeoutInMs(outer.timeout.toInt)
      .setIdleConnectionTimeoutInMs(outer.timeout.toInt)
      .setMaxRequestRetry(3)
      .setRequestTimeoutInMs(outer.timeout.toInt)
      .setAsyncHttpClientProviderConfig(
        new NettyAsyncHttpProviderConfig().addProperty(
          NettyAsyncHttpProviderConfig.BOSS_EXECUTOR_SERVICE,
          juc.Executors.newCachedThreadPool(DaemonThreads.factory)
        )
      ).build()
  )
  dispatch.Http(client = client).waiting(dispatch.Duration.millis(outer.timeout))
}
endpt is a configured string that is the URL I'm trying to hit, e.g. "https://www.foo.com/bar". timeout is also configured in ms; it's currently set to 300000.
Even though I set the timeout to 300000 ms (5 min), I get the following error after 1 minute:
[debug] c.n.h.c.p.n.NettyConnectListener - Trying to recover a dead cached channel [id: 0x06808e21] with a retry value of true
[debug] c.n.h.c.p.n.NettyConnectListener - Failed to recover from exception: java.net.ConnectException: Connection timed out with channel [id: 0x06808e21]
[debug] c.n.h.c.AsyncCompletionHandlerBase - Connection timed out to https://direct.backgroundchecks.com/integration/bgcdirectpost.aspx
java.net.ConnectException: Connection timed out to https://direct.backgroundchecks.com/integration/bgcdirectpost.aspx
at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:100) ~[async-http-client.jar:na]
at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:428) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:419) [netty.jar:na]
at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:381) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:409) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366) [netty.jar:na]
Caused by: java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_07]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692) ~[na:1.7.0_07]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:404) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366) [netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:282) [netty.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102) [netty.jar:na]
[debug] c.n.h.c.p.n.NettyAsyncHttpProvider - Channel Closed: [id: 0x06808e21] with attachment null
[debug] c.n.h.c.p.n.NettyAsyncHttpProvider - Unexpected I/O exception on channel [id: 0x06808e21]
java.net.ConnectException: Connection timed out
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.7.0_07]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692) ~[na:1.7.0_07]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:404) ~[netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366) ~[netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:282) ~[netty.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [na:1.7.0_07]
[debug] c.n.h.c.p.n.NettyAsyncHttpProvider - Closing Channel [id: 0x06808e21]
Again, if I change only the endpoint to a non-SSL URL, the code works as expected and waits up to 5 minutes for the response.
Thanks in advance!
Once dispatch.Http.configure is called, it basically creates a new AsyncHttpClient instead of reusing the InternalDefaults.client one, which causes two problems.
It would be great if there were an override to respecify the configuration (well, I mainly care about connectionTimeout and requestTimeout), or a way to override at the client level (less preferred but acceptable).
Can you cut a release?
I see a tagsoup support branch in the works. Is lift-json as well? If not, is the tagsoup branch an appropriate guide to use for porting json support to 0.9.0? Thanks
Despite what is said here, currently on 0.11.0:
(host("example.com") / "a/b").url == "http://example.com/a%2Fb"
Is this a regression?
Hello,
I am new to Scala and I was trying out an example with ScalaXB.
I am using IntelliJ IDEA and the latest Scala version 2.10.1.
ScalaXB uses Dispatch so I added that as a dependency to the project. I also had to add as a dependency slf4j-nop:1.6.2. So, afterwards everything compiled but an exception keeps appearing:
Exception in thread "main" java.lang.NoSuchMethodError: scala.util.control.Exception$Catch.either(Lscala/Function0;)Lscala/Either;
at dispatch.Promise$class.result(promise.scala:64)
at dispatch.ListenableFuturePromise.result(promise.scala:223)
at dispatch.Promise$class.apply(promise.scala:75)
at dispatch.ListenableFuturePromise.apply(promise.scala:223)
at scalaxb.DispatchHttpClients$DispatchHttpClient$class.request(httpclients_dispatch.scala:12)
at scalaxb.DispatchHttpClients$$anon$1.request(httpclients_dispatch.scala:4)
at scalaxb.SoapClients$SoapClient$class.soapRequest(soap12.scala:32)
at scalaxb.SoapClients$$anon$1.soapRequest(soap12.scala:14)
at scalaxb.SoapClients$SoapClient$class.requestResponse(soap12.scala:51)
at scalaxb.SoapClients$$anon$1.requestResponse(soap12.scala:14)
at eu.getintheloop.sample.XMLProtocol$WeatherSoap12Bindings$WeatherSoap12Binding$class.getWeather(xmlprotocol.scala:53)
at eu.getintheloop.sample.XMLProtocol$WeatherSoap12Bindings$$anon$3.getWeather(xmlprotocol.scala:48)
at eu.getintheloop.sample.main$.main(main.scala:20)
at eu.getintheloop.sample.main.main(main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
The code line is:
protected lazy val result = allCatch.either { claim }
This looks quite strange because "claim" appears to be a perfectly good java.lang.String object:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<GetWeatherResponse xmlns="http://litwinconsulting.com/webservices/">
<GetWeatherResult>Sunny</GetWeatherResult>
</GetWeatherResponse>
</soap:Body>
</soap:Envelope>
Is this a bug or I am doing something wrong? How can I fix it?
I am sorry if this is not the right place for this question. Any help would be appreciated.
Thanks in advance.
I really liked the way your website works. Simple, nice to browse through.
"Often, can you can" on http://dispatch.databinder.net/Abstraction+over+promised+information.html
Someone could maybe fix that?
-asko
Currently we are using the I/O worker's executor for handler composition and our own executor for promise composition. This should probably be done differently, and also we need hooks for handling exceptions that happen in any executor.
On the Promising Either page, the method signature for extractTemp is given as extractTemp: (xml: scala.xml.Elem)Promise[Either[String,Int]], while it actually should be extractTemp: (xml: scala.xml.Elem)Either[String,Int].
find . -name "build.sbt" | xargs ack "name"
core/build.sbt
1:name := "dispatch-core"
json4sjackson/build.sbt
1:name := "json4s-jackson"
json4snative/build.sbt
1:name := "json4s-native"
jsoup/build.sbt
1:name := "dispatch-jsoup"
liftjson/build.sbt
1:name := "dispatch-lift-json"
tagsoup/build.sbt
1:name := "dispatch-tagsoup"
I created a branch with the fixes, if we want to rename them: https://github.com/dispatch/reboot/tree/module-prefixing
It appears there was a regression in recent versions, noted here.
The documentation here:
http://dispatch.databinder.net/HTTP+methods+and+parameters.html
doesn't have any examples of how to set multiple values for a URL parameter in Dispatch. It would be really helpful if there were an example.
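For what it's worth, dispatch's query-parameter operator accepts a sequence of pairs, so repeating a key along the lines of url("http://example.com/") <<? Seq("tag" -> "a", "tag" -> "b") should produce a repeated parameter (this is based on the 0.x API; verify against your version). The resulting query string can be sketched with the standard library alone:

```scala
import java.net.URLEncoder

// Build a query string from a sequence of pairs; duplicate keys are kept,
// unlike with a Map, which would collapse them to one entry.
def queryString(params: Seq[(String, String)]): String =
  params.map { case (k, v) =>
    URLEncoder.encode(k, "UTF-8") + "=" + URLEncoder.encode(v, "UTF-8")
  }.mkString("&")

println(queryString(Seq("tag" -> "a", "tag" -> "b"))) // tag=a&tag=b
```

The key point is to pass a Seq rather than a Map when a parameter must appear more than once.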
Dispatch needs to be able to POST and PUT multipart/form-data requests.
I'd like to be a contributor and work on this.
Maybe I'm missing something... but I can't find any documentation on the new dispatch library. The website has a tutorial on working with promises, but what about how to actually build the requests‽ There are a few short snippets about very generic actions, but what about adding GET/POST parameters, setting request headers, or even specifying what type of request it is (POST vs GET vs DELETE, etc.)?
The new library looks really cool, but I don't know how to use it, so I'll just switch back to 0.8.x for now. Sorry if it's in an obvious place, or if this isn't really for production yet and the docs are planned for later.
Currently if you create a streaming connection it defaults to timeout after 60000 ms. In order to override it you must pass a client which is configured to not timeout:
object StreamClient {
  import com.ning.http.client.{ AsyncHttpClient, AsyncHttpClientConfig }
  import com.ning.http.client.providers.netty.NettyAsyncHttpProviderConfig

  lazy val client = new AsyncHttpClient(config)

  lazy val config = new AsyncHttpClientConfig.Builder()
    .setRequestTimeoutInMs(-1)
    .setAsyncHttpClientProviderConfig(
      new NettyAsyncHttpProviderConfig().addProperty(
        NettyAsyncHttpProviderConfig.BOSS_EXECUTOR_SERVICE, bossExecutor
      )
    ).build()

  lazy val bossExecutor =
    java.util.concurrent.Executors.newCachedThreadPool(DaemonThreads.factory)
}

Http(client = StreamClient.client)
It would be nice to make this setting more streamlined, in a similar way to how the promise timeout is set.
Unless there is an easier way I am not aware of, that is...
Hi
Do you think it would be a good idea to change the package name slightly?
Currently they collide with the previous api so if some library or sbt plugin is using a previous version of dispatch then you're stuck with the latest version of the previous api.
I seem to be having trouble limiting the thread pool size for the Async workers. Perhaps I am just missing something basic?
I declared my Http object like so:
val h = Http.threads(16)
However, when I run my app, the thread count climbs up towards 300, which makes me think it is still using the default of 256.
In order to isolate the issue, I have created a dummy web service, and a simple client. Feel free to grab this dummy code from https://bitbucket.org/pkaeding/dispatch-example to try running it.
Any ideas?
So that we don't have to do:
req.toRequestBuilder.setBodyEncoding("encoding")
After upgrading to dispatch 0.10.0 from 0.9.5, I started noticing problems with streamed gzipped HTTPS connections. There were NUL symbols (as in \0 / \x00, ASCII zero) everywhere, mostly between the chunks. Needless to say, this breaks a lot of stuff, in particular JSON parsing.
Upgrading to async-http-client-1.7.14 has fixed the issue (1.7.13 didn't), I suspect that AsyncHttpClient/async-http-client#287 was the culprit.
It would be great if dispatch fixed this in the next version. Thanks!
When I do the following:
Http(url("http://ru.wikipedia.org/wiki/Список_городов_России") OK as.String)()
the http header contains the following (I used wireshark for checking):
GET /wiki/??????_???????_?????? HTTP/1.1
instead of (when I use curl on the same url):
GET /wiki/\320\241\320\277\320\270\321\201\320\276\320\272_\320\263\320\276\321\200\320\276\320\264\320\276\320\262_\320\240\320\276\321\201\321\201\320\270\320\270 HTTP/1.1\r\n
83a6635 URL-encodes the URL path using URLEncoder, which is incorrect. URLEncoder should only be used for query parameters. java.net.URI should be used to escape paths of the URL/URI.
See http://stackoverflow.com/questions/2678551/when-to-encode-space-to-plus-and-when-to-20.
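For paths, the multi-argument java.net.URI constructors do the right thing: they quote illegal path characters and, via toASCIIString, percent-encode non-ASCII characters as UTF-8 without touching the '/' separators. A quick check:

```scala
import java.net.URI

// The (scheme, host, path, fragment) constructor quotes illegal path
// characters; toASCIIString percent-encodes non-ASCII as UTF-8.
val uri = new URI("http", "ru.wikipedia.org", "/wiki/餌", null)
println(uri.toASCIIString) // http://ru.wikipedia.org/wiki/%E9%A4%8C
```

URLEncoder, by contrast, targets application/x-www-form-urlencoded query data (it turns spaces into '+', for example), which is why it is the wrong tool for path segments.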