
render's Introduction

Render Tools and Services

A collection of Java tools and HTTP services (APIs) for rendering transformed image tiles.

[Figure: render components diagram]

render's People

Contributors

axtimwalde, dependabot[bot], fcollman, martinschorb, perlman, russtorres, stephanpreibisch, trautmane

render's Issues

introduction figure needs improvement

I think the introduction figure for render should be improved to be more "client" focused: how people should think about how they will use render. The current figure, which is more of a "how does render work" diagram, could come later. There is a lot of confusion in the community about what render is and isn't, and I think a clearer intro figure showing how it is used would help resolve that.

RenderParameters and TileSpec should extend a common base interface

RenderParameters and TileSpec should both be possible sources for Rendering, since Rendering generates an image.
In particular, we see replicated logic for bounding box estimation and for meshCellSize. RenderParameters currently holds two bounding boxes: the minimal bounding box derived from the minimal bounding boxes of all included TileSpecs, and the bounding box of the rendering "canvas" (x, y, width, height). We currently believe there should be only one box. In both RenderParameters and TileSpec, this box can be estimated by deriveMinimalBoundingBox(...); RenderParameters additionally adds padding and takes meshCellSize from internal state, whereas TileSpec does not offer padding and receives meshCellSize as a parameter. Unifying this seems necessary.
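For illustration, a minimal sketch of such a shared contract (the interface name and exact signature are assumptions, not the actual render API):

    import java.awt.geom.Rectangle2D;

    // Hypothetical shared base interface for RenderParameters and TileSpec.
    public interface BoundedRenderSource {

        /**
         * @param meshCellSize mesh cell size used when approximating
         *                     non-linear transforms during box estimation.
         * @return the minimal bounding box of this render source.
         */
        Rectangle2D.Double deriveMinimalBoundingBox(double meshCellSize);
    }

Both classes would then implement the same method, with padding handled as an explicit parameter rather than internal state.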

bulk write error from clone API

At the Allen Institute, Gayathri gets a "Bulk write operation error" when using the clone API to copy a LOADING phase 2 stack to a new (COMPLETED) stack for viewing.

The Spark or Java CopyStackClient tools should be preferred over the clone API for copying data because they support parallelized client-side processing. Although Gayathri's immediate issue is resolved by use of these clients, the clone API error should be investigated when time allows.

JDK install link broken ... again

The present install script isn't working again because the JDK link is broken. I'm having trouble locating the new link... I'll keep trying. Meanwhile, there has to be a better way to distribute this!

add pair metadata variants of match retrieval APIs

Dan Kapner and Matt Nichols have asked for variants of the match APIs that return canvas pair information without the detailed point match data. At a minimum, this will reduce network traffic and client-side memory requirements. It will be interesting to see whether it also improves response times. If the count of match points is included in the summary, the point match explorer web client could use these APIs exclusively.
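For illustration, one possible shape for the metadata-only response (the pair identifier fields follow the existing match model; matchCount and the class name are assumptions):

    // Hypothetical metadata-only variant of a canvas match pair:
    // the pair identifiers without the full point match arrays.
    public class CanvasMatchesMetaData {
        private String pGroupId;
        private String pId;
        private String qGroupId;
        private String qId;
        private int matchCount; // would enable point match explorer use cases

        // getters/setters omitted for brevity
    }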

Convert from Render to neuroglancer precomputed format

I would like to convert data to the neuroglancer precomputed format from the following pipeline:

raw data -> trakem pipeline of stitching and alignment -> render

Do scripts already exist that convert a render project into the neuroglancer precomputed data format? If not, do you think the following would be the way to do it?

  1. Render to disk full resolution slices (or mipmaps if entire 2d slices are too big) using ARGBRenderer.java
  2. Convert the rendered 2d slices to precomputed format with scripts similar to the ones used to convert from BigBrain format to precomputed format

(I think that the conversion could also be done directly from Trakem and that the strategy would be the same.)
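Concretely, for step 1, something like this minimal sketch is what I have in mind, using the box png-image web endpoint rather than calling ARGBRenderer.java directly (host, owner/project/stack names, bounds, and z range are placeholders):

    import javax.imageio.ImageIO;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.net.URL;

    public class RenderSliceExport {
        public static void main(final String[] args) throws Exception {
            final String stackUrl =
                    "http://renderhost:8080/render-ws/v1/owner/demo/project/example_1/stack/v1_acquire";
            for (int z = 0; z < 10; z++) {
                // box format is x,y,width,height,scale
                final URL url = new URL(stackUrl + "/z/" + z + "/box/0,0,4096,4096,1.0/png-image");
                final BufferedImage slice = ImageIO.read(url);
                // real code should check for null and use scaled boxes
                // (mipmaps) when full slices are too big
                ImageIO.write(slice, "png", new File(String.format("slice_%05d.png", z)));
            }
        }
    }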

Thanks a lot for any advice

add ransac filter client

Create a Java client for running Saalfeld's RANSAC filter.
Input and output should be a tile pair JSON file.
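The core of the filter could be sketched with mpicbg as follows (the client wiring, tile pair JSON I/O, and the parameter values shown are illustrative assumptions):

    import mpicbg.models.AffineModel2D;
    import mpicbg.models.NotEnoughDataPointsException;
    import mpicbg.models.PointMatch;

    import java.util.ArrayList;
    import java.util.List;

    public class RansacFilterSketch {

        // Keep only the candidates consistent with a RANSAC-estimated model.
        public static List<PointMatch> filter(final List<PointMatch> candidates)
                throws NotEnoughDataPointsException {
            final AffineModel2D model = new AffineModel2D();
            final List<PointMatch> inliers = new ArrayList<>();
            // iterations, maxEpsilon (pixels), minInlierRatio: example values
            model.filterRansac(candidates, inliers, 1000, 20.0, 0.0);
            return inliers;
        }
    }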

mongo GroupQuery not doing AND correctly

I'm not seeing the correct behavior for

http://ibs-forrestc-ux1/render-ws/v1/owner/Forrest/project/M247514_Rorb_1/stack/BIGALIGN2_MARCH24c_MBP_deconvnew/resolvedTiles?minZ=20&maxZ=22

The first result being
{"transformIdToSpecMap":{},"tileIdToSpecMap":{"300000013013011":{"tileId":"300000013013011","layout":{"sectionId":"13","temca":"Leica","camera":"zyla","imageRow":0,"imageCol":0,"stageX":5098.46462998,"stageY":-44640.4308487,"rotation":0.0},"z":13.0,"minX":53991.0,"minY":-176145.0,"maxX":124299.0,"maxY":-107911.0,"width":2048.0,"height":2048.0,"minIntensity":0.0,"maxIntensity":40000.0,"mipmapLevels":{"0":

z=13 is clearly wrong for a minZ=20 query.

The log shows:

INFO [org.janelia.render.service.RenderDataService] getResolvedTiles: entry, owner=Forrest, project=M247514_Rorb_1, stack=BIGALIGN2_MARCH24c_MBP_deconvnew, minZ=20.0, maxZ=22.0, groupId=null, minX=null, maxX=null, minY=null, maxY=null

So I suspect the mongo call below... I'm not a mongo expert, but is it possible that this performs a logical OR as written rather than an AND?

    private Document getGroupQuery(final Double minZ,
                                   final Double maxZ,
                                   final String groupId,
                                   final Double minX,
                                   final Double maxX,
                                   final Double minY,
                                   final Double maxY)
            throws IllegalArgumentException {

        final Document groupQuery = new Document();

        if ((minZ != null) && minZ.equals(maxZ)) {
            groupQuery.append("z", minZ);
        } else {
            if (minZ != null) {
                groupQuery.append("z", new Document(QueryOperators.GTE, minZ));
            }
            if (maxZ != null) {
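                // NOTE: Document is a Map, so this append replaces the
                // GTE constraint added above under the same "z" key instead
                // of combining with it -- likely the actual bug here.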
                groupQuery.append("z", new Document(QueryOperators.LTE, maxZ));
            }
        }

        if (groupId != null) {
            groupQuery.append("groupId", groupId);
        }

        if (minX != null) {
            groupQuery.append("maxX", new Document(QueryOperators.GTE, minX));
        }
        if (maxX != null) {
            groupQuery.append("minX", new Document(QueryOperators.LTE, maxX));
        }
        if (minY != null) {
            groupQuery.append("maxY", new Document(QueryOperators.GTE, minY));
        }
        if (maxY != null) {
            groupQuery.append("minY", new Document(QueryOperators.LTE, maxY));
        }

        return groupQuery;
    }
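If the problem is the second append overwriting the first (a Document can hold only one value per key), the range operators would need to be combined under a single "z" key. A sketch of one possible fix for the else branch (not the actual patch):

        } else if ((minZ != null) || (maxZ != null)) {
            // combine both operators in one sub-document so the range is ANDed
            final Document zQuery = new Document();
            if (minZ != null) {
                zQuery.append(QueryOperators.GTE, minZ);
            }
            if (maxZ != null) {
                zQuery.append(QueryOperators.LTE, maxZ);
            }
            groupQuery.append("z", zQuery);
        }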

sporadic dockerfile build failure after chmod

This happens on one machine but not on another... not sure why, but other Docker users have reported the same sporadic problem as well: moby/moby#9547

/var/lib/jetty/webapps:
total 12
drwxrwxrwx  2 root  root  4096 May 22 21:34 .
drwxr-xr-x 12 jetty jetty 4096 May 23 00:04 ..
-rwxrwxr-x  1 root  root   431 May 22 21:34 root-context.xml
/bin/sh: ./configure_web_server.sh: Text file busy
The command '/bin/sh -c ls -al $JETTY_BASE/* &&     chmod 755 ./configure_web_server.sh &&     ./configure_web_server.sh' returned a non-zero code: 2

Either of these two changes to the Dockerfile fixes the problem, presumably by working around the race that leaves the freshly chmod-ed script still open for writing when it is executed (the "Text file busy" error):

RUN ls -al $JETTY_BASE/* && \
    chmod 755 ./configure_web_server.sh && \
    sync && \
    ./configure_web_server.sh

or

RUN ls -al $JETTY_BASE/* && \
    chmod 755 ./configure_web_server.sh && \
    sleep 2 && \
    ./configure_web_server.sh

Amazon s3 support

Allow image data to be read from, and mipmaps stored to, S3 (using the Amazon SDK).

Data can already be accessed from render via HTTP. S3 would additionally allow access to private data, write support, and (possibly) a minor speed improvement.
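For reference, reading an image from S3 with the v1 AWS SDK for Java could look like this sketch (bucket and key are placeholders):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.S3Object;

    import javax.imageio.ImageIO;
    import java.awt.image.BufferedImage;

    public class S3ImageRead {
        public static BufferedImage read(final String bucket, final String key) throws Exception {
            final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient(); // default credentials chain
            try (final S3Object object = s3.getObject(bucket, key)) {
                return ImageIO.read(object.getObjectContent());
            }
        }
    }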

Decoupling raw data from mipmap level 0

Would it make sense to decouple the raw data from mipmap level 0?

We are currently using filters (#5) to transform raw data into something we actually want to render. Some of these modifications can be expensive or involve accessing additional data (e.g. applying a flat field correction). Some of these operations work on full resolution and may not make sense as scale-aware filters.

By decoupling, the new mipmap level 0 data (full resolution) would effectively be a cached version of the corrected image, and would be the basis for the rest of the image hierarchy.

Render Web Services Example leads to HTTP status 500 Error

I've just started out with render and was trying to work through the Render Web Services Example, and ran into a 500 error when trying to run

# complete v1_acquire stack
${CLIENT_SCRIPTS}/manage_stacks.sh ${EXAMPLE_1_PARAMS} --stack v1_acquire --action SET_STATE --stackState COMPLETE

under the Import Acquisition Data step. The corresponding commands in all subsequent steps (those ending with --stackState COMPLETE) also fail with the same 500 error, while all other commands complete successfully.

If it helps, I'm running an Ubuntu 16.04 LTS virtual machine (named 'janelia', which is why 'janelia' shows up all over the traceback), and followed the basic installation instructions exactly besides using the newest version of MongoDB.

I would appreciate any feedback, thank you!

Full traceback:

rlane@janelia:~/development/render$ ${CLIENT_SCRIPTS}/manage_stacks.sh ${EXAMPLE_1_PARAMS} --stack v1_acquire --action SET_STATE --stackState COMPLETE

  Running: /home/rlane/development/render/deploy/jdk1.8.0_131/bin/java -cp /home/rlane/development/render/render-ws-java-client/target/render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar -Xms1G -Xmx1G -Djava.awt.headless=true -XX:+UseSerialGC org.janelia.render.client.StackClient --baseDataUrl http://localhost:8080/render-ws/v1 --owner demo --project example_1 --stack v1_acquire --action SET_STATE --stackState COMPLETE


17:12:43.529 [main] INFO  [org.janelia.render.client.ClientRunner] run: entry
17:12:43.681 [main] INFO  [org.janelia.render.client.StackClient] runClient: entry, parameters={
  "renderWeb" : {
    "baseDataUrl" : "http://localhost:8080/render-ws/v1",
    "owner" : "demo",
    "project" : "example_1"
  },
  "stack" : "v1_acquire",
  "action" : "SET_STATE",
  "stackState" : "COMPLETE"
}
17:12:43.936 [main] INFO  [org.janelia.render.client.RenderDataClient] getStackMetaData: submitting GET http://localhost:8080/render-ws/v1/owner/demo/project/example_1/stack/v1_acquire
17:12:44.009 [main] INFO  [org.janelia.render.client.StackClient] setStackState: before update, stackMetaData={
  "stackId" : {
    "owner" : "demo",
    "project" : "example_1",
    "stack" : "v1_acquire"
  },
  "state" : "LOADING",
  "lastModifiedTimestamp" : "2017-12-12T15:43:25.329Z",
  "currentVersionNumber" : 3,
  "currentVersion" : {
    "createTimestamp" : "2017-12-12T15:43:25.300Z",
    "cycleNumber" : 1,
    "cycleStepNumber" : 1
  }
}
17:12:44.023 [main] INFO  [org.janelia.render.client.RenderDataClient] setStackState: submitting PUT http://localhost:8080/render-ws/v1/owner/demo/project/example_1/stack/v1_acquire/state/COMPLETE
17:12:44.036 [main] ERROR [org.janelia.render.client.ClientRunner] run: caught exception
org.apache.http.client.ClientProtocolException: HTTP status 500 with body

  null

returned for

  PUT http://localhost:8080/render-ws/v1/owner/demo/project/example_1/stack/v1_acquire/state/COMPLETE

	at org.janelia.render.client.response.BaseResponseHandler.getValidatedResponseEntity(BaseResponseHandler.java:93) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.response.ResourceCreatedResponseHandler.handleResponse(ResourceCreatedResponseHandler.java:30) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.response.ResourceCreatedResponseHandler.handleResponse(ResourceCreatedResponseHandler.java:15) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:218) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:160) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:136) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.RenderDataClient.setStackState(RenderDataClient.java:545) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.StackClient.setStackState(StackClient.java:279) [render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.StackClient$1.runClient(StackClient.java:169) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.ClientRunner.run(ClientRunner.java:38) ~[render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
	at org.janelia.render.client.StackClient.main(StackClient.java:177) [render-ws-java-client-2.0.1-SNAPSHOT-standalone.jar:na]
17:12:44.036 [main] INFO  [org.janelia.render.client.ClientRunner] run: exit, processing failed after 0 hours, 0 minutes, 0 seconds
rlane@janelia:~/development/render$ 

SIFT point match logger hardcoding driver api port

The logging utilities used by the SIFT point match client (LogUtilities.getExecutorsApiJson) currently have port 4040 hardcoded. When multiple applications are submitted, the driver can be dynamically assigned a different port, causing the job to fail if it isn't the only thing submitted to that node when it starts. We need to figure out a way around this... an example error from the driver log is below.

17/01/13 17:53:33 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/01/13 17:53:33 INFO Utils: Successfully started service 'SparkUI' on port 4041.
17/01/13 17:53:33 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.128.24.33:4041
17/01/13 17:53:33 INFO SparkContext: Added JAR file:/pipeline/render/render-ws-spark-client/target/render-ws-spark-client-0.3.0-SNAPSHOT-standalone.jar at spark://10.128.24.33:45705/jars/render-ws-spark-client-0.3.0-SNAPSHOT-standalone.jar with timestamp 1484358813125
17/01/13 17:53:33 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://atbigdawg.corp.alleninstitute.org:7077...
17/01/13 17:53:33 INFO TransportClientFactory: Successfully created connection to atbigdawg.corp.alleninstitute.org/10.128.24.91:7077 after 1 ms (0 ms spent in bootstraps)
17/01/13 17:53:33 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170113175333-0022
17/01/13 17:53:33 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 36201.
17/01/13 17:53:33 INFO NettyBlockTransferService: Server created on 10.128.24.33:36201
17/01/13 17:53:33 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.128.24.33, 36201)
17/01/13 17:53:33 INFO BlockManagerMasterEndpoint: Registering block manager 10.128.24.33:36201 with 5.2 GB RAM, BlockManagerId(driver, 10.128.24.33, 36201)
17/01/13 17:53:33 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.128.24.33, 36201)
17/01/13 17:53:33 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/01/13 17:53:33 ERROR ClientRunner: run: caught exception
java.io.FileNotFoundException: http://localhost:4040/api/v1/applications/app-20170113175333-0022/executors
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1872)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.URL.openStream(URL.java:1045)
at org.janelia.render.client.spark.LogUtilities.getExecutorsApiJson(LogUtilities.java:61)
at org.janelia.render.client.spark.SIFTPointMatchClient.run(SIFTPointMatchClient.java:155)
at org.janelia.render.client.spark.SIFTPointMatchClient$1.runClient(SIFTPointMatchClient.java:135)
at org.janelia.render.client.ClientRunner.run(ClientRunner.java:38)
at org.janelia.render.client.spark.SIFTPointMatchClient.main(SIFTPointMatchClient.java:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
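One idea, sketched here assuming Spark 2.x (where SparkContext exposes uiWebUrl): ask the live context for its bound UI address instead of hardcoding 4040.

    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Option;

    public class ExecutorsApiUrl {
        public static String forContext(final JavaSparkContext sc) {
            // e.g. "http://10.128.24.33:4041" when 4040 was already taken
            final Option<String> uiWebUrl = sc.sc().uiWebUrl();
            final String base = uiWebUrl.isDefined() ? uiWebUrl.get() : "http://localhost:4040";
            return base + "/api/v1/applications/" + sc.sc().applicationId() + "/executors";
        }
    }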

default_chan option for multi-channel stacks

Right now, the multichannel support renders the first channel in the tilespec when no default image pyramid is defined. Could the tilespec instead have an optional 'default_channel' option, or the stack a 'default_channel' metadata option, that (if defined) controls which channel is used for rendering when none is given?

Right now I have to manage adding a default channel to all my stacks, and then my modules that do image-specific operations either have to remember which channel is the default and update the tilespec with the new image path (but redo the work on the default image pyramid), or check which imageUrl matches to figure out which channel is the default. It would be much cleaner to just leave the default image pyramid blank and only fill in channels; the only problem is that the ordering of the channel list is then unreliable, so which channel comes first and gets displayed by default is mixed up/confusing.

Thoughts?

render xz and yz views

Khaled mentioned that he frequently uses Matlab to produce xz and yz views of render data. Gleb uses Fiji to do this. It would be nice to have these views available directly from the render services.

resolvedTiles documentation incorrect

Minor problem... the Swagger docs say that resolvedTiles should return something of the form

    {
        'transforms':
        'tileSpecs':
        'tileCount':
        'transformCount':
    }

but instead it returns

    {
        'transformIdToSpecMap':
        'tileIdToSpecMap':
    }

trakem2 converter script produces tilespecs with no channel name

An example (note that the channel entry below has no name field):

{
  "tileId" : "8",
  "z" : 0.0,
  "minX" : -4470.0,
  "minY" : 3618.0,
  "maxX" : 535.0,
  "maxY" : 8663.0,
  "width" : 5000.0,
  "height" : 5000.0,
  "mipmapLevels" : { },
  "channels" : [
    {
      "minIntensity" : 255.0,
      "maxIntensity" : 0.0,
      "mipmapLevels" : {
        "0" : {
          "imageUrl" : "file:/nas3/data/M247514_Rorb_1/annotation/Site 3/S_000_2098862426/Tile_r1-c1_S_000_209886_flip.jpg",
          "maskUrl" : "file:/nas3/data/M247514_Rorb_1/annotation/trakem2.1474404253542.1123465259.478816230/trakem2.masks/1.8.zip"
        }
      }
    }
  ],
  "transforms" : {
    "type" : "list",
    "specList" : [
      {
        "type" : "leaf",
        "className" : "mpicbg.trakem2.transform.AffineModel2D",
        "dataString" : "0.9997652837809011 -0.003813429188347456 0.0010519827088758399 1.0050585714544518 -4470.676115967191 3637.2127719311575"
      }
    ]
  },
  "meshCellSize" : 64.0
},

cut a release 1.0.0

I am experimenting with SNAPSHOT dependencies and it keeps breaking my neck. Let's cut a release before going into the next major overhaul. The released artifact could become part of this repository, similar to SWT (https://github.com/maven-eclipse/dev-releases#how-to-use), so that including it would be as easy as:

[...]
<repositories>
	<repository>
		<id>render-repo-dev</id>
		<url>https://github.com/saalfeldlab/render/maven</url>
	</repository>
</repositories>
<dependencies>
	<dependency>
		<groupId>org.janelia</groupId>
		<artifactId>render-app</artifactId>
		<version>1.0.0</version>
	</dependency>
</dependencies>
[...]

add support for renaming stacks

Currently, renaming a stack requires a slow copy and delete process. Decoupling stack display names from their persisted collection names should simplify this operation.

Return png images as G or GA instead of RGBA?

Some endpoints, e.g. /png-image, return data as RGBA (specifically, "PNG image data, 1024 x 1024, 8-bit/color RGBA, non-interlaced").

I am considering adding flags to force grayscale and/or disable transparency for PNG output. (The PNG format supports both G and GA at both 8- and 16-bit depths.)
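For reference, the conversion itself could be a minimal sketch like this (not render's actual implementation): draw the RGBA image into an 8-bit gray BufferedImage before encoding, and ImageIO then writes a G PNG instead of RGBA.

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;

    public class GrayPngExample {
        public static void writeGray(final BufferedImage rgba, final File out) throws IOException {
            final BufferedImage gray = new BufferedImage(rgba.getWidth(), rgba.getHeight(),
                                                         BufferedImage.TYPE_BYTE_GRAY);
            final Graphics2D g = gray.createGraphics();
            g.drawImage(rgba, 0, 0, null); // color conversion happens in drawImage
            g.dispose();
            ImageIO.write(gray, "png", out); // encoded as 8-bit grayscale
        }
    }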

Is this a reasonable approach?

render-parameters off by 1 for 1024 png images

render seems to be confused about some 1024x1024 tiles we have loaded here

for example...

this call
http://em-131fs:8999/render-ws/v1/owner/gayathri/project/EM_Phase1/stack/Pinky40_20170313_aibsdata_flipped/tile/1,3484_aligned_0_1/render-parameters

returns

{
  "meshCellSize" : 64.0,
  "minMeshCellSize" : 0.0,
  "x" : 1896.0,
  "y" : 876.0,
  "width" : 1023,
  "height" : 1023,
  "scale" : 1.0,
  "areaOffset" : false,
  "convertToGray" : false,
  "quality" : 0.85,
  "numberOfThreads" : 1,
  "skipInterpolation" : false,
  "binaryMask" : false,
  "excludeMask" : false,
  "doFilter" : false,
  "tileSpecs" : [ {
    "tileId" : "1,3484_aligned_0_1",
    "layout" : {
      "sectionId" : "None",
      "temca" : "None",
      "camera" : "None"
    },
    "z" : 3484.0,
    "minX" : 1896.0,
    "minY" : 876.0,
    "maxX" : 2919.0,
    "maxY" : 1899.0,
    "width" : 1024.0,
    "height" : 1024.0,
    "minIntensity" : 0.0,
    "maxIntensity" : 255.0,
    "mipmapLevels" : {
      "0" : {
        "imageUrl" : "file:///data/nc-em/russelt/20170227_Princeton_Pinky40/4_aligned_tiled/1,3484_aligned_0_1_flip.png"
      }
    },
    "transforms" : {
      "type" : "list",
      "specList" : [ {
        "type" : "leaf",
        "className" : "mpicbg.trakem2.transform.AffineModel2D",
        "dataString" : "1.0000000000 0.0000000000 0.0000000000 1.0000000000 1896.0000000000 -876.0000000000"
      }, {
        "type" : "leaf",
        "className" : "mpicbg.trakem2.transform.AffineModel2D",
        "dataString" : "1.0000000000 0.0000000000 0.0000000000 1.0000000000 0.0000000000 1752.0000000000"
      } ]
    },
    "meshCellSize" : 64.0
  } ],
  "minBoundsMeshCellSize" : 64.0
}

As you can see, the tile itself is 1024x1024 (and the PNG file is that size), but the render-parameters report 1023x1023; presumably the box is being derived as maxX - minX = 2919 - 1896 = 1023 rather than 1024. Bounding-box style renderings consequently often have black pixel lines because the tile doesn't quite fill the space it should.

Ideas?

add materialized box path support to box image APIs

Materialized box paths are currently only supported by the largeTileDataSource APIs (used for CATMAID integration). Support needs to be added to the box image APIs for use by ndviz/neuroglancer clients.

match db not ignoring system collections

When trying to use the point match database, I get an error on my deployment from
/render-ws/v1/matchCollectionOwners
which seems to stem from the fact that mongodb lists all collections, including the system collections, which render can't parse into owner/collection.

See the error below... perhaps I need to reconfigure mongo somehow?

Error 500 Server Error

HTTP ERROR 500

Problem accessing /render-ws/v1/matchCollectionOwners. Reason:

    Server Error

Caused by:

org.jboss.resteasy.spi.UnhandledException: java.lang.IllegalArgumentException: invalid match collection name 'system.indexes'
	at org.jboss.resteasy.core.SynchronousDispatcher.handleApplicationException(SynchronousDispatcher.java:365)
	at org.jboss.resteasy.core.SynchronousDispatcher.handleException(SynchronousDispatcher.java:233)
	at org.jboss.resteasy.core.SynchronousDispatcher.handleInvokerException(SynchronousDispatcher.java:209)
	at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:557)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:524)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:126)
	at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:208)
	at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55)
	at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:50)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:835)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1685)
	at org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
	at org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1158)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1090)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
	at org.eclipse.jetty.server.Server.handle(Server.java:517)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: invalid match collection name 'system.indexes'
	at org.janelia.alignment.match.MatchCollectionId.fromDbCollectionName(MatchCollectionId.java:102)
	at org.janelia.render.service.dao.MatchDao.getMatchCollectionMetaData(MatchDao.java:57)
	at org.janelia.render.service.MatchService.getOwners(MatchService.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:167)
	at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:269)
	at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:227)
	at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:216)
	at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:542)
	... 34 more

Caused by:

java.lang.IllegalArgumentException: invalid match collection name 'system.indexes'
	at org.janelia.alignment.match.MatchCollectionId.fromDbCollectionName(MatchCollectionId.java:102)
	at org.janelia.render.service.dao.MatchDao.getMatchCollectionMetaData(MatchDao.java:57)
	at org.janelia.render.service.MatchService.getOwners(MatchService.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:167)
	at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:269)
	at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:227)
	at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:216)
	at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:542)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:524)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:126)
	at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:208)
	at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:55)
	at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:50)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:835)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1685)
	at org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
	at org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1158)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1090)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:109)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
	at org.eclipse.jetty.server.Server.handle(Server.java:517)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
	at java.lang.Thread.run(Thread.java:745)

Powered by Jetty:// 9.3.7.v20160115
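A guard along these lines (a hypothetical sketch, not the actual fix) would let the service skip MongoDB system collections instead of failing on them:

    import com.mongodb.client.MongoDatabase;

    import java.util.ArrayList;
    import java.util.List;

    public class MatchCollectionNames {
        public static List<String> list(final MongoDatabase matchDb) {
            final List<String> names = new ArrayList<>();
            for (final String name : matchDb.listCollectionNames()) {
                if (name.startsWith("system.")) {
                    continue; // system.indexes etc. are not match collections
                }
                names.add(name);
            }
            return names;
        }
    }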

MONGO_PORT and render-db.properties port not working

The docker packaging and the configuration file advertise that a port can be set, but that setting isn't actually read by the mongo configuration code, which falls back to the default port.

We should either remove the MONGO_PORT option from the docker packaging and the configuration file, or fix the code so it works. It looks like the code expects host:port entries in the servers line of the file:

https://github.com/saalfeldlab/render/blame/d65fb27ce59b7100293c360ba902e6a9ec443ee6/render-ws/src/main/java/org/janelia/render/service/dao/DbConfig.java#L142

Image filters

We would like to apply filters to raw images before they are transformed. An example of where this would be useful is to apply a flat field correction to all images coming off a microscope.

Support Mongo SCRAM-SHA-1 auth

Render currently tries to use the old MongoDB authentication mechanism (MONGODB-CR) instead of SCRAM-SHA-1. This prevents render from working out of the box with authentication on a MongoDB 3+ database.

The current workaround is to downgrade the MongoDB auth schema (in the admin database) and recreate the user:
db.system.version.insert({ "_id" : "authSchema", "currentVersion" : 3 })

version notes for point matches

It would be great if we could have a versionNotes field for point matches that automatically gets populated with the SIFT parameters and point-match parameters used in the script.

-Khaled

Point matches should contain the context that they were calculated in

The point match database should contain an explicit representation of the context in which matches were generated. This relates to the "local" transforms issue: depending on the problem (TEM, SEM, LM, etc.), users will want to calculate point matches, and therefore run solvers, from different initial conditions, so there needs to be a guaranteed way to map point matches to their locations. For example, you should be able to use the coordinate mapping service to map point matches from the "local" context they were generated in to the "global" coordinates of some stack. However, point matches are now sometimes generated where "local" doesn't actually mean zero transformations on the data but rather something a bit more specific, so there is no way to do this in a guaranteed way. Eric Trautman and I had a long talk about this in Berlin, and his thought coming out of that conversation was to add this contextual information to the point match database so that it can be tracked. You would then use the following mapping to go from the context the point matches were generated in to any particular next context: start_context > local > end_context. Alternatively, maybe we should extend the coordinate mapping service to map between contexts directly... but that's a separate issue.

normalizeForMatching not sufficiently general

I am having trouble running the spark SIFT client and getting matches because it assumes I want normalizeForMatching, and normalizeForMatching assumes that simply removing the last transform in the transform list will bring the tile back to a state where putting x,y at 0,0 brings the image into view. This is true if all you have is a lens correction and a single transformation into the world, but it is not true for longer transform lists. In particular, I happen to have 4 transformations after using trakem2 ...

Not sure what the fix is... a few options I can think of, to brainstorm:

  1. Make normalizeForMatching an option so those who don't want it can skip it. (This seems like a bad solution given that certain matchers need evenly sized tiles. Incidentally, I don't see how this solution would guarantee consistently sized tiles if the lens corrections are sufficiently different.)

  2. Provide options to hardcode the size of the tile to cut out, with the understanding that the user needs to make it large enough to cover the transformed tile at its current location in the world.

  3. Completely rewrite the point match service to calculate and express matches in the original data space, meaning the pre-transformed space of the original mipmap level 0 image, not the sort of partially transformed standard that currently exists.
    (I realize that this is a major change... but I don't see how you can maintain meaningful matches across transformation sets any other way, and if matches are tied to a particular transformation set, then that relationship should be made explicit and tracked rather than ad hoc.)

  4. If the goal of normalization is to bring the tile back to an "original" state with only the lens correction included, then the tilespec schema could be extended to explicitly mark that transformation (or set of transformations) as special/pre-alignment/raw-correction, while other transformations are not... and the normalizeForMatching code could then use this distinction to remove all of the non-special transformations. Could be as easy as adding an is-raw flag to the transform class.

  5. Do not assume that you can shift the origin to 0,0 after removing the transforms; instead, dynamically calculate the bounding box after removing them. This is, I think, the easiest fix.

I think I like 4 the best, but 5 would also be OK...
Thoughts?
