
Comments (3)

f0k commented on June 9, 2024

> I am just wondering, is there a better way to do it?

You're already obtaining all the output expressions at once (in a single get_output() call), that's a good start. But you compile a separate function for every layer. Try to compile a single function for all saliency maps.

> It seems to me that the gradient loop does not properly exploit the stacked structure of the VGG net and has to go through the graph every single time.

It should be able to share the computation for the forward pass, but the way you defined the expression to take the gradient of, it has to do a separate backward pass for each filter in every layer. You can try to formulate your expression such that it performs a single batched backward pass. You'll need to replicate your input image into a batch of multiple images and take the gradient with respect to that, so you'll get a batch of different saliency maps. Form a single cost that is the sum of maximizing the first filter of the first layer for the first input example, the second filter of the first layer for the second input example, and so on, and take the gradient of that wrt. the input batch. This way your compiled function can do everything in a single backward pass, and should be faster to compile and execute. Hope this makes sense?
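In code, the idea could look roughly like this (a sketch only; compile_saliency_batch, layer_names and filters_per_layer are made-up names, and it assumes the usual dict-style net with an 'input' layer as in the VGG recipe):

import theano
import theano.tensor as T
import lasagne

def compile_saliency_batch(net, layer_names, filters_per_layer):
    # Hypothetical sketch: one compiled function, one batched backward pass.
    # One (layer, filter) pair per example in the replicated batch.
    pairs = [(li, fi) for li in range(len(layer_names))
                      for fi in range(filters_per_layer[li])]
    inp = net['input'].input_var                 # a single image: (1, channels, h, w)
    inp_rep = T.repeat(inp, len(pairs), axis=0)  # one copy per (layer, filter) pair
    outp = lasagne.layers.get_output(
        [net[name] for name in layer_names], inp_rep, deterministic=True)
    # Single cost: example i maximizes filter fi of layer li.
    cost = sum(outp[li][i, fi].sum() for i, (li, fi) in enumerate(pairs))
    # One backward pass wrt. the replicated batch: one saliency map per pair.
    sal = theano.grad(cost, wrt=inp_rep)
    return theano.function([inp], sal)

Calling the returned function with a single image would then give you a stack of len(pairs) saliency maps in one go.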


ckchenkuan commented on June 9, 2024

Hi, sorry for the late reply. I "sort of" implemented what you said by compiling the function once per layer, but the speedup is not obvious. I am not sure whether I am doing what you meant correctly.


import time
import theano
import lasagne

def compile_saliency_function2(net, layernamelist, layershapelist, scalefactor):
    inp = net['input'].input_var
    outp = lasagne.layers.get_output([net[layername] for layername in layernamelist],
                                     deterministic=True)
    saliencyfnlist = []
    for layeri in range(len(layernamelist)):
        filtercount = int(layershapelist[layeri] / scalefactor)
        filterindices = [ii * scalefactor for ii in range(filtercount)]
        saliencylayerlist = []
        max_outplist = []
        inplist = []
        netdict = {}
        for filterindex in filterindices:
            netdict[filterindex] = net
            inp = netdict[filterindex]['input'].input_var
            outp = lasagne.layers.get_output(
                [netdict[filterindex][layername] for layername in layernamelist],
                deterministic=True)
            max_outpi = outp[layeri][0, filterindex].sum()
            max_outplist.append(max_outpi)
            inplist.append(inp)
        max_outpall = max_outplist[0]
        for ii in range(1, len(max_outplist)):
            max_outpall += max_outplist[ii]
        st = time.time()
        saliencylayer = theano.grad(max_outpall, wrt=inplist)
        print(time.time() - st)
        starttime = time.time()
        print(len(saliencylayerlist))
        layerfnlist = theano.function([inp], saliencylayer)
        print('compile time is ', time.time() - starttime)
        saliencyfnlist.append([layerfnlist])
    return saliencyfnlist

starttime = time.time()
saliencyfntuple = compile_saliency_function2(net, ['conv5_1', 'conv5_2', 'conv5_3'], [512, 512, 512], 8)
print('fn time', time.time() - starttime)


f0k commented on June 9, 2024

You have a second lasagne.layers.get_output call in your for loop now! Never do this!

What I said was to do a single theano.grad() call with respect to a single input batch. So make your input_var a tensor4 (it probably is already), replicate it so you have enough copies of the input image, pass it through the network once and then form your cost to maximize some particular unit for the first item in your minibatch, another unit for the second item in your minibatch, and so on. Something like:

import theano
import theano.tensor as T
import lasagne

layers = [net[name] for name in ['conv5_1', 'conv5_2', 'conv5_3']]  # e.g. the layers you care about

inp = net['input'].input_var
inp_rep = T.repeat(inp, len(layers), axis=0)  # one copy of the image per layer
outp = lasagne.layers.get_output(layers, inp_rep)
cost = 0
for idx, layer in enumerate(layers):
    cost += outp[idx][idx].sum()  # maximize output of idx'th layer for idx'th example
sal = theano.grad(cost, inp_rep)  # gradient wrt. the replicated batch: one map per example
fn = theano.function([inp], sal)
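For example, you could then call it like this (the image shape here is only a placeholder and has to match what your network expects):

import numpy as np

img = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder for a preprocessed input image
saliency_maps = fn(img)  # shape (len(layers), 3, 224, 224): one map per layer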

Now you can pass a single image (as a 4d tensor) and it will give you a batch of all saliency maps. The key is to make sure Theano can do a single batched forward pass and then a single batched backward pass (although I'm actually not sure how well this will work for the backward pass if the costs are computed at different layers -- maybe it only gets compiled into a single backward pass if you can express the cost as a vectorized operation, like maximizing all the different units or feature maps in a single layer).
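If you want to try that vectorized single-layer formulation, a rough sketch could be (again with made-up names; num_filters is the number of feature maps of that layer):

import theano
import theano.tensor as T
import lasagne

def compile_layer_saliency(net, layer_name, num_filters):
    # Hypothetical sketch: all feature maps of one layer in a single batched pass.
    inp = net['input'].input_var
    inp_rep = T.repeat(inp, num_filters, axis=0)  # one copy of the image per feature map
    outp = lasagne.layers.get_output(net[layer_name], inp_rep, deterministic=True)
    # outp has shape (num_filters, num_filters, h, w); pick map i for example i
    # with vectorized "diagonal" indexing instead of a Python loop.
    idx = T.arange(num_filters)
    cost = outp[idx, idx].sum()
    sal = theano.grad(cost, wrt=inp_rep)          # (num_filters, channels, h, w)
    return theano.function([inp], sal)

Since the cost is a single vectorized expression over one layer, this should stand the best chance of being compiled into one batched backward pass.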

