mcohen01 / amazonica
A comprehensive Clojure client for the entire Amazon AWS api.
I'm attempting to do this, which seems to fit the Java SDK spec:
(let [bytes (.getBytes "foo")]
  (put-object
    :bucket-name "my-bucket"
    :key "foo"
    :input (ByteArrayInputStream. bytes)
    :metadata {:content-length (count bytes)
               :conten-type "text/plan"}))
However, I receive a cryptic error and cannot figure out how to debug it:
java.lang.NullPointerException: null
AmazonS3Client.java:1130 com.amazonaws.services.s3.AmazonS3Client.putObject
(Unknown Source) sun.reflect.GeneratedMethodAccessor25.invoke
DelegatingMethodAccessorImpl.java:43 sun.reflect.DelegatingMethodAccessorImpl.invoke
Method.java:601 java.lang.reflect.Method.invoke
(Unknown Source) sun.reflect.GeneratedMethodAccessor8.invoke
DelegatingMethodAccessorImpl.java:43 sun.reflect.DelegatingMethodAccessorImpl.invoke
Method.java:601 java.lang.reflect.Method.invoke
Reflector.java:93 clojure.lang.Reflector.invokeMatchingMethod
Reflector.java:28 clojure.lang.Reflector.invokeInstanceMethod
core.clj:589 amazonica.core/fn-call[fn]
core.clj:629 amazonica.core/intern-function[fn]
RestFn.java:619 clojure.lang.RestFn.invoke
NO_SOURCE_FILE:114 canary.sensor/eval5799
Compiler.java:6619 clojure.lang.Compiler.eval
Compiler.java:6582 clojure.lang.Compiler.eval
core.clj:2852 clojure.core/eval9 lighttable.hub.clj.eval/->result
AFn.java:163 clojure.lang.AFn.applyToHelper
AFn.java:151 clojure.lang.AFn.applyTo
core.clj:619 clojure.core/apply
core.clj:2396 clojure.core/partial[fn]
RestFn.java:408 clojure.lang.RestFn.invoke
core.clj:2485 clojure.core/map[fn]
LazySeq.java:42 clojure.lang.LazySeq.sval
LazySeq.java:60 clojure.lang.LazySeq.seq
RT.java:484 clojure.lang.RT.seq
core.clj:133 clojure.core/seq
core.clj:2523 clojure.core/filter[fn]
LazySeq.java:42 clojure.lang.LazySeq.sval
LazySeq.java:60 clojure.lang.LazySeq.seq
RT.java:484 clojure.lang.RT.seq
core.clj:133 clojure.core/seq
core.clj:2780 clojure.core/dorun
core.clj:2796 clojure.core/doall
eval.clj:150 lighttable.hub.clj.eval/eval-clj[fn]
core.clj:1836 clojure.core/binding-conveyor-fn[fn]
AFn.java:18 clojure.lang.AFn.call
FutureTask.java:334 java.util.concurrent.FutureTask$Sync.innerRun
FutureTask.java:166 java.util.concurrent.FutureTask.run
ThreadPoolExecutor.java:1145 java.util.concurrent.ThreadPoolExecutor.runWorker
ThreadPoolExecutor.java:615 java.util.concurrent.ThreadPoolExecutor$Worker.run
Thread.java:722 java.lang.Thread.run
I notice that the :file approach to uploading seems to work well, but I need to upload an input stream. I'm mostly trying to understand if this is on my end or something deeper.
Any pointers?
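For what it's worth, here is a sketch of the same call with the metadata keys spelled out in full; note :content-type rather than :conten-type and "text/plain" rather than "text/plan". Whether the misspelled metadata key is what triggers the NPE is an assumption on my part:

```clojure
(require '[amazonica.aws.s3 :refer [put-object]])
(import 'java.io.ByteArrayInputStream)

;; Sketch only: corrects the two apparent typos in the metadata map.
;; Whether put-object wants the stream under :input or :input-stream
;; is also an assumption worth double-checking.
(let [bytes (.getBytes "foo" "UTF-8")]
  (put-object
    :bucket-name "my-bucket"
    :key "foo"
    :input-stream (ByteArrayInputStream. bytes)
    :metadata {:content-length (count bytes)
               :content-type "text/plain"}))
```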
I'm trying to invoke the following AmazonS3Client method:
public void setObjectAcl(String bucketName, String key, CannedAccessControlList acl)
Here is how I'm invoking it in my code (where creds is a map with my aws credentials and client config):
(require '[amazonica.aws.s3 :as amazonica-s3])
(import com.amazonaws.services.s3.model.CannedAccessControlList)
(amazonica-s3/set-object-acl creds "com.test.bucket" "test/key" CannedAccessControlList/PublicRead)
IllegalArgumentException Don't know how to create ISeq from: com.amazonaws.services.s3.model.CannedAccessControlList clojure.lang.RT.seqFrom (RT.java:494)
I have a feeling that it's trying to invoke a similar method on the S3 client with the following signature instead:
public void setObjectAcl(String bucketName, String key, AccessControlList acl)
Does the amazonica API match the underlying client API by the types of the args, or just by their count?
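In the meantime, a possible workaround, assuming (and this is unverified) that amazonica coerces strings to the matching enum value, would be to pass the canned ACL by name instead of as a CannedAccessControlList instance:

```clojure
(require '[amazonica.aws.s3 :as amazonica-s3])

;; Unverified sketch: let amazonica's coercion pick the enum from a
;; string, avoiding the raw CannedAccessControlList instance that
;; appears to confuse the ISeq conversion.
(amazonica-s3/set-object-acl creds "com.test.bucket" "test/key" "PublicRead")
```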
DynamoDB now has support for maps:
http://aws.amazon.com/blogs/aws/dynamodb-update-json-and-more/
When I call (receive-message :queue-url "https://path.to.queue")
I get an exception:
IllegalArgumentException No value supplied for key: https://path.to.queue
clojure.lang.PersistentHashMap.create (PersistentHashMap.java:77)
clojure.core/hash-map (core.clj:365)
clojure.core/apply (core.clj:617)
amazonica.aws.sqs/delete-on-receive (sqs.clj:34)
clojure.lang.Var.invoke (Var.java:423)
clojure.lang.Var.applyTo (Var.java:532)
clojure.core/apply (core.clj:619)
robert.hooke/compose-hooks/fn--1482 (hooke.clj:40)
clojure.core/apply (core.clj:617)
robert.hooke/run-hooks (hooke.clj:46)
robert.hooke/prepare-for-hooks/fn--1487/fn--1488 (hooke.clj:54)
clojure.lang.AFunction$1.doInvoke (AFunction.java:29)
Line 34 of delete-on-receive is a hook that deletes messages when they've been received if the :delete option is present.
It's possible to work around the bug with the following:
(use 'robert.hooke)
(with-hooks-disabled receive-message
(receive-message :queue-url "https://path.to.queue"))
but of course you can't use the delete-on-receive functionality if you do this.
I'd like to be able to use amazonica in applications running on EC2 instances. Instead of storing the credentials in the code or a config file, I'd like to use an IAM role on the instance to authenticate to the API.
The Java SDK developer guide describes how the SDK will do this:
If your application software constructs a client object for an AWS service using an overload of the constructor that does not take any parameters, the constructor searches the "credentials provider chain." The credentials provider chain is the set of places where the constructor attempts to find credentials if they are not specified explicitly as parameters. For Java, the credentials provider chain is:
- Environment Variables: AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
- Java System Properties: aws.accessKeyId and aws.secretKey
- Instance Metadata Service, which provides the credentials associated with the IAM role for the EC2 instance
I did some experimenting with this yesterday and it wasn't obvious to me. Is it possible to use IAM roles with amazonica?
Thanks,
Dave
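For context, a sketch of what I'd hope would work, assuming amazonica falls through to the SDK's default provider chain when no credential map is supplied (this is the assumption I'm asking about, not confirmed behavior):

```clojure
(require '[amazonica.aws.s3 :as s3])

;; On an EC2 instance with an IAM role, a call made with no explicit
;; credentials should (if the chain is consulted) pick them up from
;; the instance metadata service.
(s3/list-buckets)
```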
I've tried to run the tests, but I get an exception: parse-args already refers to: #'amazonica.aws.glacier/parse-args in namespace: amazonica.test.core. I ran the tests using lein test.
The full stack trace is:
Exception in thread "main" java.lang.IllegalStateException: parse-args already refers to: #'amazonica.aws.glacier/parse-args in namespace: amazonica.test.core
at clojure.lang.Namespace.warnOrFailOnReplace(Namespace.java:88)
at clojure.lang.Namespace.reference(Namespace.java:110)
at clojure.lang.Namespace.refer(Namespace.java:168)
at clojure.core$refer.doInvoke(core.clj:3850)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_lib.doInvoke(core.clj:5394)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5417)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:621)
at clojure.core$use.doInvoke(core.clj:5507)
at clojure.lang.RestFn.invoke(RestFn.java:1789)
at amazonica.test.core$eval496$loading__4910__auto____497.invoke(core.clj:1)
at amazonica.test.core$eval496.invoke(core.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6608)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.RT.loadResourceScript(RT.java:370)
at clojure.lang.RT.loadResourceScript(RT.java:361)
at clojure.lang.RT.load(RT.java:440)
at clojure.lang.RT.load(RT.java:411)
at clojure.core$load$fn__5018.invoke(core.clj:5530)
at clojure.core$load.doInvoke(core.clj:5529)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5336)
at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
at clojure.core$load_lib.doInvoke(core.clj:5374)
at clojure.lang.RestFn.applyTo(RestFn.java:142)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$load_libs.doInvoke(core.clj:5413)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$require.doInvoke(core.clj:5496)
at clojure.lang.RestFn.applyTo(RestFn.java:137)
at clojure.core$apply.invoke(core.clj:619)
at user$eval85.invoke(form-init8712192415606825579.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6609)
at clojure.lang.Compiler.load(Compiler.java:7064)
at clojure.lang.Compiler.loadFile(Compiler.java:7020)
at clojure.main$load_script.invoke(main.clj:294)
at clojure.main$init_opt.invoke(main.clj:299)
at clojure.main$initialize.invoke(main.clj:327)
at clojure.main$null_opt.invoke(main.clj:362)
at clojure.main$main.doInvoke(main.clj:440)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:419)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)
Tests failed.
Hello!
I'm having some issues when deleting a DynamoDB item using amazonica.aws.dynamodbv2.
An example:
(delete-item :table-name "my-table" :key {:id {:s "12345"}})
This fails with a java.lang.IllegalArgumentException: null exception.
Any ideas here? I am wracking my brain trying to understand what I am doing wrong. Thanks!
I think this great library needs some more documentation and examples. For example, I've been trying (and failing) to write a correct amazonica.aws.dynamodbv2/batch-write-item request for an hour or so now. And the fact that the API is automatically generated from the AWS API makes it harder (at least for me) to understand how to do it.
I'd be happy to help with the stuff I'm using (mainly dynamodb and simpledb) but first I have to understand it myself :)
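As one concrete example of the kind of thing that could go in the docs, here is the shape I would expect a batch-write-item request to take, inferred from the BatchWriteItemRequest bean; treat it as an unverified sketch with made-up table and attribute names:

```clojure
(require '[amazonica.aws.dynamodbv2 :as ddb])

;; :request-items maps each table name to a vector of put/delete
;; requests, mirroring BatchWriteItemRequest.setRequestItems.
(ddb/batch-write-item
  :request-items
  {"my-table" [{:put-request {:item {:id {:s "1"}
                                     :name {:s "foo"}}}}
               {:delete-request {:key {:id {:s "2"}}}}]})
```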
I have Kinesis records that contain Snappy-encoded blocks, such that a process that decodes each block yields a sequence of strings like this:
"{:foo 42} "
"{:bar 3.14} "
...
"{:qaz [1 2 3 4]} "
So each string is an EDN value (a basic Clojure literal).
Amazonica seems to assume that data must be nippy-serialized. So when I try:
(get-records :shard-iterator (get-shard-iterator "my-stream"
                                                 "shardId-000000000003"
                                                 "TRIM_HORIZON"))
I get:
CompilerException java.lang.Exception: Thaw failed: Uncompressed data?, compiling:(form-init1883914083661306631.clj:1:9)
Perhaps I am missing it, but I don't see a simple way to tell Amazonica not to do that and instead give me raw bytes, which I could decompress and decode however I like.
So, is there a way to do that?
So far I see that you have an unwrap function that is used directly by get-records and indirectly by processor-factory. It would be wonderful if I could supply my own version of unwrap instead of the nippy-thawing one.
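To make that concrete, here is the kind of unwrap I'd want to supply myself: pull the raw bytes out of the record's ByteBuffer and read them as EDN, with no nippy involved (plain Clojure, so this part at least should be safe):

```clojure
(require '[clojure.edn :as edn])
(import 'java.nio.ByteBuffer)

;; Hypothetical replacement unwrap: copy the remaining bytes out of
;; the ByteBuffer and parse them as an EDN string.
(defn raw-unwrap [^ByteBuffer data]
  (let [bs (byte-array (.remaining data))]
    (.get data bs)
    (edn/read-string (String. bs "UTF-8"))))
```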
(Thanks for resolving the put-record issue.)
I am now having an issue consuming the records written to Kinesis. This command:
(kinesis/get-records
cred
:shard-iterator shard-iterator
:limit batch-limit)
Results in this error:
Exception in thread "main" java.lang.IllegalArgumentException: No value supplied for key: 2, compiling:(/private/var/folders/g8/1b2_h6yx7t7csbtr7g4x9qvh0000gn/T/form-init4194254269444512279.clj:1:142)
Where batch-limit is set to "2". When I removed :limit, the same error cited the shard-iterator value as the key, so it seems like an argument counting/position issue. Thanks.
When running the first SimpleDB select query, the library prints the credentials (secret and all):
user> (select cred :select-expression "select count(*) from `Users`")
#<BasicAWSCredentials com.amazonaws.auth.BasicAWSCredentials@72cb56cd>
{:class com.amazonaws.auth.BasicAWSCredentials, :AWSSecretKey XXXXXXXXXXXXXX :AWSAccessKeyId XXXXXXXXXXXXXX}
{:items [{:name "Domain", :attributes [{:name "Count", :value "257"}]}]}
The middle two lines are printed, while the last is the returned value.
Subsequent requests don't print them.
I'm getting this error when attempting to create a launch configuration. You can recreate the error by doing the following:
(use 'amazonica.aws.autoscaling)
(create-launch-configuration :security-groups (seq ["hello" "world"]))
... results in...
IllegalArgumentException No method in multimethod 'fmap' for dispatch value: class clojure.lang.PersistentVector$ChunkedSeq clojure.lang.MultiFn.getFn (MultiFn.java:160)
In my case the contents of my :security-groups param have been generated by map. This also happens when using create-auto-scaling-group, so I have a feeling it's likely to happen in a lot of places.
I'm using Clojure 1.6.0 but have tried with 1.5.1 as well. The stack trace points to the error occurring in org.clojure/algo.generic, so I tried updating that from 0.1.0 to 0.1.2, but that didn't help.
I can eliminate the problem by wrapping my sequence in vec, but that's not an ideal solution. Am I doing something wrong here or is this something Amazonica could help with?
I know it's probably not a surprise that SimpleDB doesn't work (given that you don't include a MVS in the readme) but I figured it might be useful for you to track the task in an issue anyway.
(require '[amazonica.core :refer [defcredential]])
(defcredential "AccessKeyID" "SecretKey")
(require '[amazonica.aws.simpledb :as sdb])
(sdb/put-attributes "domain"
                    "devapiTue Aug 05 23:24:36 UTC 2014"
                    [{:name "args", :value ["Test message"]}
                     {:name "instant", :value #inst "2014-08-05T23:24:36.701-00:00"}
                     {:name "ns", :value "some-api.main"}
                     {:name "file", :value "/tmp/form-init6934682238825582339.clj"}
                     {:name "hostname", :value "devapi"}
                     {:name "output", :value "2014-Aug-05 23:24:36 +0000 devapi WARN [some-api.main] - Test message"}
                     {:name "prefix", :value "2014-Aug-05 23:24:36 +0000 devapi WARN [some-api.main]"}
                     {:name "level", :value :warn}
                     {:name "line", :value nil}
                     {:name "ap-config", :value {}}
                     {:name "error?", :value false}
                     {:name "throwable", :value nil}
                     {:name "timestamp", :value "2014-Aug-05 23:24:36 +0000"}
                     {:name "message", :value "Test message"}])
And the error is:
IllegalArgumentException Could not determine best method to invoke for put-attributes using arguments ("spike_for_logging" "devapiTue Aug 05 23:25:28 UTC 2014" ({:name "args", :value "[\"Test message\"]"} {:name "instant", :value "Tue Aug 05 23:25:28 UTC 2014"} {:name "ns", :value "some-api.main"} {:name "file", :value "/tmp/form-init6934682238825582339.clj"} {:name "hostname", :value "devapi"} {:name "output", :value "2014-Aug-05 23:25:28 +0000 devapi WARN [some-api.main] - Test message"} {:name "prefix", :value "2014-Aug-05 23:25:28 +0000 devapi WARN [some-api.main]"} {:name "level", :value ":warn"} {:name "line", :value ""} {:name "ap-config", :value "{}"} {:name "error?", :value "false"} {:name "throwable", :value ""} {:name "timestamp", :value "2014-Aug-05 23:25:28 +0000"} {:name "message", :value "Test message"})) amazonica.core/intern-function/fn--11248 (core.clj:780)
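Judging from the error text, every :value is being rendered to a string before dispatch; here is a plain-Clojure helper to do that conversion explicitly up front, on the assumption that SimpleDB attribute values must be strings:

```clojure
;; Render every non-string attribute value with pr-str so the
;; attribute list is uniformly String->String before the API call.
(defn stringify-values [attrs]
  (mapv (fn [{:keys [value] :as attr}]
          (assoc attr :value (if (string? value) value (pr-str value))))
        attrs))
```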
I couldn't discern how to do this from the readme and found no examples online.
My use case is setting the public-read permission on newly uploaded files for the purposes of website hosting.
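For reference, the sketch I'm currently guessing at, assuming put-object maps a :canned-acl key onto PutObjectRequest.setCannedAcl and coerces the string to the enum; none of that is confirmed by the readme:

```clojure
(require '[amazonica.aws.s3 :as s3])

;; Hypothetical: apply the public-read canned ACL at upload time.
;; Both the :canned-acl key and the string-to-enum coercion are
;; assumptions.
(s3/put-object :bucket-name "my-bucket"
               :key "index.html"
               :file (java.io.File. "index.html")
               :canned-acl "PublicRead")
```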
It would be useful to have an AmazonS3EncryptionClient for client-side data encryption prior to upload.
http://aws.amazon.com/articles/2850096021478074/
http://docs.aws.amazon.com/redshift/latest/mgmt/uploading-aws-sdk-for-java-encrypted.html
I ran into something that I think might be a bug, and am interested to know if you have any ideas: https://gist.github.com/anonymous/d4dad5ff47ae7e92f7a5
The problem is that the pending atom always winds up as -1.
I thought this was odd, so I tested it a bit in the REPL, and it appears that after eval'ing the buffer containing this code in emacs and running a single upload, the :started event is never seen by the progress-listener. This only happens the first time the upload function is called.
Once a single upload has been processed, all future uploads seem to see the :started event and inc the pending atom accordingly.
Any thoughts?
Thanks
I'm calling:
(amazonica.aws.s3/put-object bucket-name key-name (io/input-stream a-file) {:some :metadata})
com.amazonaws.AmazonClientException: Unable to unmarshall error response (Premature end of file.). Response Code: 400, Response Text: Bad Request
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:792)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3566)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1434)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1275)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at amazonica.core$fn_call$fn__1614.invoke(core.clj:718)
at amazonica.core$intern_function$fn__1633.doInvoke(core.clj:769)
at clojure.lang.RestFn.invoke(RestFn.java:457)
I'm pretty sure this is caused by the S3Client being GC'd before the file is finished uploading, in amazonica.core/fn-call.
References:
I'm having trouble getting delete-objects to work. For delete-object (singular), the following seems to work:
(delete-object {:bucket-name "my-bucket" :key "key1"})
So by analogy between DeleteObjectRequest and DeleteObjectsRequest, I expected this to work:
(delete-objects {:bucket-name "my-bucket" :keys ["key1" "key2"]})
but instead I get:
UnsupportedOperationException: nth not supported on this type: Character
at clojure.lang.RT.nthFrom(RT.java:857)
Am I wrong to expect delete-objects to work this way?
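One guess, given that DeleteObjectsRequest wraps each key in a KeyVersion object rather than a plain string: express each key as a small map and see whether the marshalling picks it up (unverified):

```clojure
(require '[amazonica.aws.s3 :refer [delete-objects]])

;; Hypothetical: one {:key ...} map per object, mirroring
;; DeleteObjectsRequest.KeyVersion.
(delete-objects {:bucket-name "my-bucket"
                 :keys [{:key "key1"} {:key "key2"}]})
```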
With amazonica 0.1.3 running
(get-object db-creds
:bucket-name "db-bucket"
:key "db/foo.txt")
doesn't work (invalid arity exception) but
(get-object db-creds
"db-bucket"
"db/foo.txt")
works.
Looks like the two-arity (String, String) Java method is being called instead of the one-arity GetObjectRequest method.
It's a bit surprising, so either the keyword example should work out of the box, or the second example should be put in the README.
I'm cleaning up a lein project's deps, and after shuffling around some namespaces and updating the amazonica project.clj dependency to use 0.1.22, I'm getting a new error when calling s3/get-object:
java.lang.NoClassDefFoundError: org/apache/http/impl/conn/PoolingClientConnectionManager
at com.amazonaws.http.ConnectionManagerFactory.createPoolingClientConnManager (ConnectionManagerFactory.java:26)
com.amazonaws.http.HttpClientFactory.createHttpClient (HttpClientFactory.java:87)
com.amazonaws.http.AmazonHttpClient.<init> (AmazonHttpClient.java:121)
com.amazonaws.AmazonWebServiceClient.<init> (AmazonWebServiceClient.java:66)
com.amazonaws.services.s3.AmazonS3Client.<init> (AmazonS3Client.java:304)
com.amazonaws.services.s3.AmazonS3Client.<init> (AmazonS3Client.java:286)
sun.reflect.NativeConstructorAccessorImpl.newInstance0 (NativeConstructorAccessorImpl.java:-2)
sun.reflect.NativeConstructorAccessorImpl.newInstance (NativeConstructorAccessorImpl.java:57)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance (DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance (Constructor.java:526)
clojure.lang.Reflector.invokeConstructor (Reflector.java:180)
amazonica.core$create_client.invoke (core.clj:139)
amazonica.core$amazon_client_STAR_.invoke (core.clj:187)
clojure.lang.AFn.applyToHelper (AFn.java:167)
clojure.lang.AFn.applyTo (AFn.java:151)
clojure.core$apply.invoke (core.clj:617)
clojure.core$memoize$fn__5049.doInvoke (core.clj:5735)
clojure.lang.RestFn.invoke (RestFn.java:436)
amazonica.core$candidate_client.invoke (core.clj:661)
amazonica.core$fn_call$fn__1670.invoke (core.clj:671)
clojure.lang.Delay.deref (Delay.java:33)
clojure.core$deref.invoke (core.clj:2128)
amazonica.core$fn_call$fn__1672.invoke (core.clj:674)
amazonica.core$intern_function$fn__1687.doInvoke (core.clj:718)
.
.
.
I tried resetting my amazonica dependency back to the previous version, 0.1.15, but I'm still getting the error. It looks like the aws-sdk jar updated the ConnectionManagerFactory code when it bumped to version 1.5.0, which amazonica started including a bit before its 0.1.15 release, if I'm not mistaken.
Any idea why this is happening now? As part of my project refactoring, I removed some dependencies from the project. Is there any reason why amazonica would be dependent on any other libraries being present in the project?
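In case the missing class really is a casualty of the dependency cleanup, one workaround sketch is to pin Apache HttpClient explicitly in project.clj; PoolingClientConnectionManager first appeared in httpclient 4.2, though the exact version below is an assumption:

```clojure
;; Hypothetical project.clj fragment: make the httpclient that the
;; aws-sdk expects an explicit dependency instead of a transitive one.
:dependencies [[amazonica "0.1.22"]
               [org.apache.httpcomponents/httpclient "4.2.5"]]
```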
I think they want you to use describe-step or describe-cluster now along with the list methods for those.
I'm trying to list the files in an S3 bucket using [amazonica "0.2.24"] and the following code:
(s3/list-objects :bucket-name "mybucket")
I get the exception listed below.
Caused by org.xml.sax.SAXParseException
Premature end of file.
ErrorHandlerWrapper.java: 203 com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper/createSAXParseException
ErrorHandlerWrapper.java: 177 com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper/fatalError
XMLErrorReporter.java: 441 com.sun.org.apache.xerces.internal.impl.XMLErrorReporter/reportError
XMLErrorReporter.java: 368 com.sun.org.apache.xerces.internal.impl.XMLErrorReporter/reportError
XMLScanner.java: 1436 com.sun.org.apache.xerces.internal.impl.XMLScanner/reportFatalError
XMLDocumentScannerImpl.java: 1019 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver/next
XMLDocumentScannerImpl.java: 606 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl/next
XMLNSDocumentScannerImpl.java: 117 com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl/next
XMLDocumentFragmentScannerImpl.java: 510 com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl/scanDocument
XML11Configuration.java: 848 com.sun.org.apache.xerces.internal.parsers.XML11Configuration/parse
XML11Configuration.java: 777 com.sun.org.apache.xerces.internal.parsers.XML11Configuration/parse
XMLParser.java: 141 com.sun.org.apache.xerces.internal.parsers.XMLParser/parse
AbstractSAXParser.java: 1213 com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser/parse
XmlResponsesSaxParser.java: 145 com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser/parseXmlInputStream
XmlResponsesSaxParser.java: 293 com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser/parseListBucketObjectsResponse
Unmarshallers.java: 76 com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller/unmarshall
Unmarshallers.java: 73 com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller/unmarshall
S3XmlResponseHandler.java: 62 com.amazonaws.services.s3.internal.S3XmlResponseHandler/handle
S3XmlResponseHandler.java: 31 com.amazonaws.services.s3.internal.S3XmlResponseHandler/handle
AmazonHttpClient.java: 795 com.amazonaws.http.AmazonHttpClient/handleResponse
AmazonHttpClient.java: 463 com.amazonaws.http.AmazonHttpClient/executeHelper
AmazonHttpClient.java: 257 com.amazonaws.http.AmazonHttpClient/execute
AmazonS3Client.java: 3623 com.amazonaws.services.s3.AmazonS3Client/invoke
AmazonS3Client.java: 3575 com.amazonaws.services.s3.AmazonS3Client/invoke
AmazonS3Client.java: 620 com.amazonaws.services.s3.AmazonS3Client/listObjects
NativeMethodAccessorImpl.java: -2 sun.reflect.NativeMethodAccessorImpl/invoke0
NativeMethodAccessorImpl.java: 62 sun.reflect.NativeMethodAccessorImpl/invoke
DelegatingMethodAccessorImpl.java: 43 sun.reflect.DelegatingMethodAccessorImpl/invoke
Method.java: 483 java.lang.reflect.Method/invoke
nil: -1 sun.reflect.GeneratedMethodAccessor54/invoke
DelegatingMethodAccessorImpl.java: 43 sun.reflect.DelegatingMethodAccessorImpl/invoke
Method.java: 483 java.lang.reflect.Method/invoke
Reflector.java: 93 clojure.lang.Reflector/invokeMatchingMethod
Reflector.java: 28 clojure.lang.Reflector/invokeInstanceMethod
core.clj: 726 amazonica.core/fn-call/fn
core.clj: 777 amazonica.core/intern-function/fn
RestFn.java: 421 clojure.lang.RestFn/invoke
replutils.clj: 74 unpacker.examples.replutils/unparsable-keys
REPL: 1 unpacker.examples.replutils/eval15366
Is there any way to change the region endpoint for the DynamoDB client? The following code creates table "TestTable" in the US East region, even though I set :endpoint to "eu-west-1":
(def cred {:access-key "aws-access-key"
:secret-key "aws-secret-key"
:endpoint "eu-west-1"
:client-config {:proxy-host "my-proxy"
:proxy-port 8080}})
(create-table cred :table-name "TestTable"
; ....
)
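One thing I plan to try, on the assumption that :endpoint is handed straight to AmazonWebServiceClient.setEndpoint (which wants a hostname, not a bare region id): use the full DynamoDB endpoint.

```clojure
;; Hypothetical: full service hostname instead of "eu-west-1".
(def cred {:access-key "aws-access-key"
           :secret-key "aws-secret-key"
           :endpoint "dynamodb.eu-west-1.amazonaws.com"
           :client-config {:proxy-host "my-proxy"
                           :proxy-port 8080}})
```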
amazonica 0.1.22
It's not at all obvious to me from the docs or code how to accomplish this. Using ENV Vars I do this:
(def aws_access_key_id
  (.getAWSAccessKeyId
    (.getCredentials (amazonica.core/get-credentials :cred))))

(def aws_secret_key
  (.getAWSSecretKey
    (.getCredentials (amazonica.core/get-credentials :cred))))

(defcredential aws_access_key_id aws_secret_key (:region options))
However, if this happens while on an instance with IAM profile, an error is thrown about a missing security token. Is there some simple alternative that I'm missing?
I am attempting to test kinesis via:
(put-record cred "beatport-api-test" new-event event-key)
Both new-event and event-key are java.lang.String. I am getting the following error:
Caused by: java.lang.IllegalArgumentException: No coercion is available to turn {"response":{"status":200,"headers":{"link":"</search?q=hee&group-by=kind&page=1>"},"body":{"list":[],"track":[],"release":[],"mix":[],"genre":[],"account":[],"best-match":null}},"ip":"127.0.0.1","user-agent":"","method":"get","events":"","duration":"369ms","http-server":"org.eclipse.jetty.server.HttpInput@3b702644","function-times":{},"id":"2014-07-16-api-usw1a-001-00000000","action":"Request","time-unix":1405563461,"uri":"/search","user":"newport"} into an object of type class java.nio.ByteBuffer
(Where new-event is the {"response ... newport"} portion, a simulated log event encoded as JSON, which will need to be processed as text by the consumer.)
Hopefully there is something simple that I am missing, but I have reviewed the readme, and it seems the next troubleshooting step would be to coerce new-event to java.nio.ByteBuffer myself. Any help is appreciated. Thanks.
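For completeness, the coercion I'd otherwise do by hand: wrap the JSON string's UTF-8 bytes in a ByteBuffer, which is the type PutRecordRequest.setData expects (whether amazonica accepts this positionally as shown is an assumption):

```clojure
(import 'java.nio.ByteBuffer)

;; Manual coercion of the string payload to a ByteBuffer before
;; handing it to put-record.
(put-record cred "beatport-api-test"
            (ByteBuffer/wrap (.getBytes ^String new-event "UTF-8"))
            event-key)
```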
Hey guys, I'm getting an exception using with-credential:
clojure.lang.ArityException: Wrong number of args (2) passed to: core$amazon-client-STAR-
Here's the code:
(ns paddleguru.util.aws
  (:require [environ.core :refer [env]]
            [amazonica.core :refer [with-credential]]
            [amazonica.aws.s3 :as s3]
            [amazonica.aws.s3transfer :as s3t]))

(with-credential [(:aws-access-key-id env)
                  (:aws-access-key-secret env)
                  "us-west-1"]
  (s3/list-buckets))
The same code works great without the wrapping with-credential.
Almost all clients have an asynchronous counterpart as listed below. It'd be useful to have them supported by amazonica.
Here's an example of using one of them in clojure.
As far as I can see, as a general rule, the async client implements two methods of the form:
Future<Void> methodNameFromTheSyncClientAsync(RequestClass aRequest);
Future<Void> methodNameFromTheSyncClientAsync(RequestClass aRequest, \
AsyncHandler<RequestClass,ResultClass> asyncHandler);
for each method of the sync version of the client.
Does amazonica support IAM and STS services as well?
CompilerException java.lang.IllegalStateException: marshall already refers to: #'amazonica.core/marshall in namespace: amazonica.aws.dynamodb, compiling:(amazonica/aws/dynamodb.clj:60:12)
worker in kinesis.clj has the code
:or {checkpoint 60000
...
I'm not clear on exactly what's happening with opts, but is there a mismatched-units bug here, since processor-factory multiplies checkpoint by 1000 (presumably to convert from seconds to milliseconds)?
(reset! next-check (+' (System/currentTimeMillis) (*' 1000 checkpoint))))
Amazonica's section on auth says:
The default authentication scheme is to use the chained Provider class from the AWS SDK, whereby authentication is attempted in the following order:
It only lists 3 options, but the AWS default chain (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html) actually lists 4. The missing one is:
This isn't just a documentation discrepancy; it appears that Amazonica really doesn't look for this file in its chain.
So Amazonica is a really slick library, particularly the bean-to-Clojure map mapping. I'm writing an s3sync utility that I hope will be more flexible/functional than the commonly used s3sync.rb. Being able to express things as Clojure maps works really well, but user-metadata requires a workaround if you're updating metadata that you pulled from S3.
In particular, :user-metadata is a keyword -> String mapping when downloaded, but must be a String -> String mapping when uploaded.
Example below
user=> (amazonica.aws.s3/get-object-metadata my-credentials my-website "404.html")
{:content-length 6452, :last-modified #<DateTime 2013-04-04T13:21:30.000-05:00>, :content-type "text/html", :raw-metadata {:Content-Type "text/html", :Accept-Ranges "bytes", :Last-Modified #<DateTime 2013-04-04T13:21:30.000-05:00>, :Content-Length 6452, :ETag "2d477e36be6f149b4c559591a6201774"}, :etag "2d477e36be6f149b4c559591a6201774", :user-metadata {:foo "bar"}}
user=> (amazonica.aws.s3/copy-object my-credentials :source-bucket-name my-website :destination-bucket-name my-website :source-key "404.html" :destination-key "404.html" :new-object-metadata {:content-type "text/html" :user-metadata {:foo "bar"}})
ClassCastException clojure.lang.Keyword cannot be cast to java.lang.String com.amazonaws.services.s3.AmazonS3Client.populateRequestMetadata (AmazonS3Client.java:2634)
user=> (amazonica.aws.s3/copy-object my-credentials :source-bucket-name my-website :destination-bucket-name my-website :source-key "404.html" :destination-key "404.html" :new-object-metadata {:content-type "text/html" :user-metadata {"foo" "bar"}})
{:etag "2d477e36be6f149b4c559591a6201774", :last-modified-date #<DateTime 2013-04-04T13:21:58.000-05:00>}
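The workaround itself is a one-line helper in plain Clojure, shown here in case it's useful to anyone else:

```clojure
;; Convert the keyword->String :user-metadata returned by downloads
;; into the String->String map that uploads require.
(defn stringify-keys [m]
  (into {} (map (fn [[k v]] [(name k) v]) m)))
```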
I'd like to be able to use Amazonica through an HTTP proxy. The Amazon*Clients support using a proxy by passing a suitably configured ClientConfiguration object to their constructors.
I'm not sure what the best interface would be. Perhaps it'd be easiest to read system properties and environment variables to find http proxy configuration, but I'm not sure that's the best solution.
An alternative would be to create a with-configuration macro that sets a dynamic var with config options.
Is this something that you'd consider adding to Amazonica? I'd be happy to come up with a pull request if you are interested.
Are there plans for implementing support for the new cloudsearchv2 API?
Only the administrative functions for CloudSearchV2 are exposed. I would like to use the search functions in CloudSearchDomain to actually build and execute a query. Is it possible to add these?
This looks like a really awesome library. I like that you're using reflection to solve everything at once rather than trying to wrap one service at a time.
I'm just diving in but seem to have hit a snag.
Here's a minimal example:
(ns scratch
(:require amazonica.core)
(:use amazonica.aws.identitymanagement))
(def creds {:access-key "root-access-key"
:secret-key "root-secret-key"})
(create-user creds :user-name "db")
;;This works fine; user is created
(create-access-key creds :user-name "db")
;;This returns a new access key, but it's for the root account (i.e., the same account as creds), not for the new "db" account.
The problem is that the :user-name
isn't being taken into account, so access keys are created for the same user that owns the creds
used to make the request, not the specified new IAM user.
The docs for the underlying CreateAccessKeyRequest
object seem to match the get/set method model that you're reflecting against, so I have no idea why it doesn't work.
It seems like the no-argument CreateAccessKeyRequest
is matching first.
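In the meantime, a possible workaround is to drop down to the Java SDK directly. This sketch assumes the standard SDK classes and reuses the root credentials from the example above:

```clojure
;; workaround sketch: build the CreateAccessKeyRequest explicitly so
;; the :user-name cannot be lost in reflective method matching
(import '(com.amazonaws.auth BasicAWSCredentials)
        '(com.amazonaws.services.identitymanagement AmazonIdentityManagementClient)
        '(com.amazonaws.services.identitymanagement.model CreateAccessKeyRequest))

(let [client  (AmazonIdentityManagementClient.
                (BasicAWSCredentials. "root-access-key" "root-secret-key"))
      request (doto (CreateAccessKeyRequest.)
                (.setUserName "db"))]   ; key for the "db" IAM user
  (.createAccessKey client request))
```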
I want to use the range feature of the AWS SDK, but I'm not sure if this is supported by Amazonica as it is an option that requires two arguments instead of one.
I tried the following:
(s3/get-object :bucket-name "my-bucket" :key "my-key" :range 1000)
; IllegalArgumentException wrong number of arguments
; sun.reflect.NativeMethodAccessorImpl.invoke0 (NativeMethodAccessorImpl.java:-2)
; sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:39)
; sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25)
; java.lang.reflect.Method.invoke (Method.java:597)
; sun.reflect.GeneratedMethodAccessor15.invoke (:-1)
; sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25)
; java.lang.reflect.Method.invoke (Method.java:597)
; clojure.lang.Reflector.invokeMatchingMethod (Reflector.java:93)
; clojure.lang.Reflector.invokeInstanceMethod (Reflector.java:28)
; amazonica.core/invoke (core.clj:444)
(s3/get-object :bucket-name "my-bucket" :key "my-key" :range [0 1000])
;; Same error
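A possible workaround, sketched under the assumption that dropping down to the Java SDK request object is acceptable; setRange takes a start and an end byte offset:

```clojure
;; workaround sketch using the SDK's GetObjectRequest directly,
;; since setRange(long, long) takes two arguments
(import '(com.amazonaws.services.s3 AmazonS3Client)
        '(com.amazonaws.services.s3.model GetObjectRequest))

(let [request (doto (GetObjectRequest. "my-bucket" "my-key")
                (.setRange 0 1000))]  ; first 1001 bytes of the object
  (.getObject (AmazonS3Client.) request))
```

The no-argument AmazonS3Client constructor picks up credentials from the default provider chain.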
Hi,
I would like to run amazonica against Java 1.6.
The latest Clojars release has been compiled with Java 1.7.
I'm not sure whether this is because the Amazon Java bindings enforce it.
If it is possible, could we get a release that works with Java 1.6?
Thanks,
Hi,
I came across this problem with the with-credential macro:
(defmacro with-credential
  "Per invocation binding of credentials for ad-hoc
   service calls using alternate user/password combos
   (and endpoints)."
  [[a b c] & body]
  `(binding [*credentials* ~(keys->cred a b c)]
     (do ~@body)))
In this macro you destructure the credential argument [a b c] as a vector, and this works fine when used as
(with-credential ["a" "b" "c"]
  (comment "foo"))
however, if you try to use it in this context
(defn get-my-credentials []
  ["a" "b" "c"])

(with-credential (get-my-credentials)
  (comment "foo"))
it fails, because the macro tries to destructure the form (get-my-credentials) as a one-element list instead of evaluating the function call.
This is the macroexpand result:
(clojure.core/binding [amazonica.core/*credentials* {:access-key get-my-credentials,
                                                     :secret-key nil}]
  (do (comment "foo")))
To fix this, I suggest removing the destructuring of the vector in the macro and calling keys->cred on the unquoted (~) credential argument.
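A minimal sketch of the suggested fix, assuming keys->cred is an ordinary function that accepts the access key, secret key, and optional endpoint positionally:

```clojure
;; sketch of the fix: evaluate the credential form at runtime and
;; apply keys->cred to the resulting sequence
(defmacro with-credential
  "Per invocation binding of credentials for ad-hoc
   service calls using alternate user/password combos
   (and endpoints)."
  [cred & body]
  `(binding [*credentials* (apply keys->cred ~cred)]
     ~@body))

;; now both a literal vector and a function call work:
(with-credential (get-my-credentials)
  (comment "foo"))
```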
best regards
Bruno
There would be a number of benefits for some Clojure apps if a Kinesis shard could be presented as a Clojure core.async channel. Delivering shards as channels would create new options for Clojure stream consumers, beyond the limited Kinesis notion of worker and record processor and the bandwidth and other limits applied to shards.
Here is an idea for one method of doing this in Amazonica, in case it is useful.
(Disclaimer - This is a rough sketch based on an inexpert read of the AWS documentation and what I understand so far of core.async.)
- A variant of worker (in kinesis.clj) creates a core.async channel for the shard's records.
- A variant of processRecords (in processor-factory) performs blocking writes to the channel for each Kinesis record (rather than calling a processor function for each record).

This of course is insufficient on its own - by simply dumping shard records on a channel, we have lost the ability to know when each record is "done". We don't know when the records will be read from the channel, or when they will be processed. So a new mechanism is required to restore the ability to checkpoint sequence numbers in the shard.
Only the core.async app knows when a record is really done; one method to communicate "doneness" back to Amazonica is by using another channel. The app can write to this channel to send completed sequence numbers back to Amazonica.
This checkpoint channel could be (chan (sliding-buffer 1)). A sliding-buffer channel drops the oldest values; in this case only the latest put survives.
To combine the previous steps, a variant of processor-factory does the following:
1. performs a blocking write to the record channel for each record,
2. reads the latest completed sequence number from the checkpoint channel (if any), and
3. checkpoints that sequence number via the checkpoint(String sequenceNumber) variant of IRecordProcessorCheckpointer.
For step 3: note that checkpoint()
(no arguments) currently used in kinesis.clj checkpoints the progress at the last record that was delivered to the record processor; with channels, we want to checkpoint a specific sequence number. This capability is added in the Kinesis Client Library version 1.1.
One way (not sure if this is idiomatic) to get the latest value from the checkpoint channel, without waiting if nothing is available:
(alts!! [checkpoint-channel (timeout 0)] :priority true)
If not nil, the returned value is checkpointed.
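The two-channel idea could be sketched roughly as follows; the channel wiring and the process-records signature here are hypothetical, not Amazonica's current API:

```clojure
;; rough sketch of the two-channel Kinesis consumer idea
(require '[clojure.core.async :refer [chan sliding-buffer >!! alts!! timeout]])

(def record-channel (chan 100))                     ; shard records for the app
(def checkpoint-channel (chan (sliding-buffer 1)))  ; latest completed seq number

(defn process-records
  "Hypothetical replacement for the processor-factory callback."
  [records checkpointer]
  ;; step 1: blocking write of each record to the channel
  (doseq [r records]
    (>!! record-channel r))
  ;; step 2: non-blocking read of the latest completed sequence number
  (let [[seq-num _] (alts!! [checkpoint-channel (timeout 0)] :priority true)]
    ;; step 3: checkpoint that specific sequence number (KCL 1.1+)
    (when seq-num
      (.checkpoint checkpointer seq-num))))
```

The app reads records from record-channel and, when a record is truly done, puts its sequence number on checkpoint-channel.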
See also:
As an example, see #60.
I don't have experience with reflection in Java, but it seems plausible to me that unhelpful error messages would just come with the territory. If so, feel free to just close this issue as "Won't fix" or something.
P.S. I mean this more as an FYI than as a complaint.
Since I couldn't figure out where else to make this request, I'm using this medium. Do you have any examples of using CloudFormation from Amazonica? I'm especially having difficulty knowing how to map the example from the Amazon website to Amazonica. I can't find how to specify a template through Amazonica's CloudFormation API, and I can't figure out the same for the Amazon client either.
I'll appreciate any help with this.
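For what it's worth, a hedged sketch of what a create-stack call might look like, assuming the keywordized CreateStackRequest fields (:template-url, :template-body, :parameters) map onto the SDK setters; the bucket URL and parameter names below are placeholders:

```clojure
;; sketch: create a stack from a template stored in S3
(require '[amazonica.aws.cloudformation :as cf])

(cf/create-stack
  :stack-name   "my-stack"
  :template-url "https://s3.amazonaws.com/my-bucket/template.json"
  ;; parameter keys/values are placeholders for whatever the template declares
  :parameters   [{:parameter-key   "InstanceType"
                  :parameter-value "t2.micro"}])
```

An inline template should likewise be passable as a JSON string via :template-body instead of :template-url.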
Hi,
I wanted to create security groups and assign rules to them. I also wanted to create a VPC and subnets, but I do not see any API exposed for that. Can you help me with how I should proceed?
Thanks,
Murtaza
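If the EC2 namespace exposes the corresponding Create*Request operations, the calls might look roughly like this sketch; the response shape used by get-in is an assumption:

```clojure
;; sketch: create a VPC, a subnet inside it, and a security group
(require '[amazonica.aws.ec2 :as ec2])

(let [vpc    (ec2/create-vpc :cidr-block "10.0.0.0/16")
      vpc-id (get-in vpc [:vpc :vpc-id])]        ; assumed response shape
  (ec2/create-subnet :vpc-id vpc-id
                     :cidr-block "10.0.1.0/24")
  (ec2/create-security-group :group-name  "web"
                             :description "web servers"
                             :vpc-id      vpc-id))
```

Rules would then be attached with authorize-security-group-ingress, as in the example further down this page.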
Function ex-info does not exist.
Any idea why I'm getting this exception? :-)
Jul 29, 2013 3:02:06 PM com.amazonaws.http.AmazonHttpClient executeHelper
INFO: Unable to execute HTTP request: peer not authenticated
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:397)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:572)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:641)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:315)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2994)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:800)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:780)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:613)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:613)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at amazonica.core$fn_call$fn__7047.invoke(core.clj:589)
at amazonica.core$intern_function$fn__7059.doInvoke(core.clj:629)
at clojure.lang.RestFn.invoke(RestFn.java:436)
...
SSLPeerUnverifiedException peer not authenticated sun.security.ssl.SSLSessionImpl.getPeerCertificates
One more for you:
(s3t/upload (bucket-name)
(UUID/randomUUID)
(file "/Users/sritchie/Desktop/20131009-TWITTER-ENGINEERS-016edit-660x824.jpg"))
;; CompilerException java.lang.IllegalArgumentException: No matching method found: setRegion for class com.amazonaws.services.s3.transfer.TransferManager, compiling:(form-init6963743079950413614.clj:2:3)
In the docs it suggests that in order to do explicit checkpointing from your Kinesis record processor you should set :checkpoint to Long/MAX_VALUE and return true from process-records. However, later on in processor-factory, the checkpoint value and System/currentTimeMillis are added together, causing an integer overflow error.
https://github.com/mcohen01/amazonica/blob/master/src/amazonica/aws/kinesis.clj#L94
I suspect the quick answer is to update the docs (possibly describing the role that the checkpoint value plays). But I would suggest there are two follow-up issues. First, it's not particularly clear what role :checkpoint plays as a time variable. Second, if it's going to be overloaded like this, it might make sense to be able to pass an explicit value for "no automatic checkpointing", like {:checkpoint :no}, to make the distinction a little clearer.
Thanks!
One of the possible exceptions thrown by IRecordProcessorCheckpointer checkpoint()
is KinesisClientLibDependencyException
according to
(NB. This is version 1.1.0 of the Kinesis client library - so possibly the exception is new).
This exception is not currently handled in kinesis.clj mark-checkpoint
. The Amazon comments suggest "...the application can backoff and retry."
Should KinesisClientLibDependencyException
be handled similarly to ThrottlingException
?
project.clj says
[com.amazonaws/amazon-kinesis-client "1.0.0"]
So perhaps this doesn't apply just yet.
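If it does apply, handling it similarly to ThrottlingException might look roughly like this sketch; the retry count and backoff are arbitrary, and mark-checkpoint's real signature may differ:

```clojure
;; sketch: back off and retry the checkpoint on dependency exceptions,
;; as the KCL comments suggest
(import 'com.amazonaws.services.kinesis.clientlibrary.exceptions.KinesisClientLibDependencyException)

(defn mark-checkpoint [checkpointer]
  (loop [attempt 1]
    (let [ok? (try
                (.checkpoint checkpointer)
                true
                (catch KinesisClientLibDependencyException _ false))]
      (or ok?
          (when (<= attempt 5)                 ; give up after 5 tries
            (Thread/sleep (* 1000 attempt))    ; linear backoff
            (recur (inc attempt)))))))
```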
Hi,
I am trying to add a new rule to a security group I created.
(http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/authorize-ingress.html)
(ns aws-infra.security-groups
  (:require [amazonica.core :as aws-core :refer [defcredential]]
            [amazonica.aws.ec2 :as aws-ec2]))

(aws-ec2/authorize-security-group-ingress
  :group-name "test-group"
  :ip-permissions [{:cidr-ip "21.21.22.23/32"
                    :ip-protocol "tcp"
                    :from-port "22"
                    :to-port "22"}])
I get the exception below; can you please help?
java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Character
RT.java:1087 clojure.lang.RT.intCast
core.clj:846 clojure.core/int
core.clj:314 amazonica.core/coerce-value
core.clj:508 amazonica.core/invoke-method
AFn.java:160 clojure.lang.AFn.applyToHelper
Using Amazonica 0.1.21.
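A hedged guess at a fix: the ClassCastException comes from coercing the string port values, so passing the ports as integers may avoid it. Moving the CIDR into :ip-ranges is also an assumption, based on IpPermission exposing a setIpRanges collection setter rather than a cidr-ip field:

```clojure
;; sketch: integer ports instead of strings, CIDR in :ip-ranges
(require '[amazonica.aws.ec2 :as aws-ec2])

(aws-ec2/authorize-security-group-ingress
  :group-name "test-group"
  :ip-permissions [{:ip-protocol "tcp"
                    :from-port   22          ; integer, not "22"
                    :to-port     22
                    :ip-ranges   ["21.21.22.23/32"]}])
```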
I'm not sure if this is an issue, or I'm just using the library awkwardly.
If I put my Amazon credentials into environment variables, the following works fine:
(use 'amazonica.aws.s3)
(get-object-metadata {:bucket-name "ahjones-test" :key "foo"})
However, if I want to pass in credentials as the first parameter I get a message that says that the best method can't be found.
(def cred {:access-key "key" :access-secret "secret"})
(get-object-metadata cred {:bucket-name "ahjones-test" :key "foo"})
The exception:
IllegalArgumentException Could not determine best method to invoke for get-object-metadata using arguments ({:secret-key "secret", :access-key "key"} {:key "foo", :bucket-name "ahjones-test"})
amazonica.core/intern-function/fn--1458 (core.clj:705)
user/eval1649 (form-init5154478138353900686.clj:1)
clojure.lang.Compiler.eval (Compiler.java:6619)
clojure.lang.Compiler.eval (Compiler.java:6582)
clojure.core/eval (core.clj:2852)
clojure.main/repl/read-eval-print--6588/fn--6591 (main.clj:259)
clojure.main/repl/read-eval-print--6588 (main.clj:259)
clojure.main/repl/fn--6597 (main.clj:277)
clojure.main/repl (main.clj:277)
clojure.tools.nrepl.middleware.interruptible-eval/evaluate/fn--591 (interruptible_eval.clj:56)
clojure.core/apply (core.clj:617)
clojure.core/with-bindings* (core.clj:1788)
However this is OK
(get-object-metadata cred :bucket-name "ahjones-test" :key "foo")
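As another option, a sketch using defcredential to set credentials once for all subsequent calls; the endpoint argument here is an assumption:

```clojure
;; sketch: bind credentials globally instead of per call
(require '[amazonica.core :refer [defcredential]])

(defcredential "key" "secret" "us-east-1")  ; access key, secret, endpoint

;; subsequent calls need only the keyword arguments
(get-object-metadata :bucket-name "ahjones-test" :key "foo")
```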