
Comments (9)

tonyngjichun commented on June 4, 2024

Hi, thanks for opening this issue. May I ask how your dataset is structured? i.e. what is the content of your '/home/lxk/ZHP/data/VeIDData/VERI/custom_train.csv'?


hewumars commented on June 4, 2024

[screenshot: contents of custom_train.csv]
landmark_id is the class ID, and the images column lists the names of all images in the same class.
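
For reference, a minimal sketch of reading that layout back, assuming the column names landmark_id and images from the screenshot above and the path mentioned earlier:

import pandas as pd

# one row per class: the class ID plus the names of all images of that class
df = pd.read_csv('/home/lxk/ZHP/data/VeIDData/VERI/custom_train.csv')
print(df.columns.tolist())               # expected to include ['landmark_id', 'images']
print(df['landmark_id'].nunique(), 'classes')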


tonyngjichun commented on June 4, 2024

Can you access the TensorBoard log files from your training? (they're located in ./specs by default) Your triplets should be visualised in the IMAGES tab; do you mind sharing an example?


hewumars commented on June 4, 2024

Google Drive download URL: https://drive.google.com/file/d/1cwD8iiSeYsouimQKKn22Y9ChoVXuGnWk/view?usp=sharing

I removed ‘--soa --sos’ for experimentation.
Training params: specs/gl18 --training-dataset gl18 --test-datasets veri_test --arch resnet101 --pool gem --p 3 --loss triplet --pretrained-type gl18 --loss-margin 1.25 --optimizer adam --lr 1e-6 -ld 1e-2 --neg-num 5 --query-size 2000 --pool-size 20000 --batch-size 32 --image-size 256 --update-every 1 --whitening --lambda 10 --no-val --flatten-desc --epochs 1000 --soa-layers ''


tonyngjichun commented on June 4, 2024

[screenshots: triplet visualisations from TensorBoard]

According to your TensorBoard files, many of your negatives are actually positives, which explains why the L2 distance is close to 0 during negative mining. Are you sure the landmark IDs are unique? i.e. that multiple landmark IDs do not correspond to the same vehicle class?
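
For illustration, a minimal sketch of such a sanity check, assuming the custom_train.csv layout described earlier (a landmark_id column plus a space-separated images column; both names come from the screenshot above):

import pandas as pd

df = pd.read_csv('/home/lxk/ZHP/data/VeIDData/VERI/custom_train.csv')

# each landmark_id should appear exactly once
dup_ids = int(df['landmark_id'].duplicated().sum())

# and no image should be listed under two different IDs
seen = {}
clashes = 0
for _, row in df.iterrows():
    for img in str(row['images']).split():
        if img in seen and seen[img] != row['landmark_id']:
            clashes += 1
        seen[img] = row['landmark_id']

print('duplicated landmark_ids:', dup_ids)
print('images assigned to more than one ID:', clashes)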


hewumars commented on June 4, 2024

The dataset is VeRi-776 (GitHub). I made sure the IDs are unique. The negatives are not positives, but the differences are very small.
I have also used the Market-1501 pedestrian dataset, and it has the same problem.

[screenshots attached]


tonyngjichun commented on June 4, 2024

I see that you get rank-1 close to 100% but a very low mAP; this makes sense given the triplets visualised above. The network is able to tell the subtle differences apart, as the negatives are extremely hard (in the image/landmark retrieval community we would usually take these as positives). However, since the network is not exposed as much during training to moderately difficult negatives, like in the example below, it is less capable of ranking vehicles that are more different from the query.
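
As a toy illustration of how rank-1 can be perfect while the AP of a single query stays low (the ranking below is invented for the example):

import numpy as np

# one query, 10 ranked gallery items, 1 = same ID as the query;
# the top hit is correct, but the other two positives are buried deep
relevant = np.array([1, 0, 0, 0, 0, 0, 0, 0, 1, 1])

hits = np.cumsum(relevant)
precision_at_hits = hits[relevant == 1] / (np.flatnonzero(relevant) + 1)
print('rank-1 correct:', bool(relevant[0]))            # True
print('AP = {:.2f}'.format(precision_at_hits.mean()))  # ~0.51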

I am not an expert in person/vehicle re-ID, but I suppose it's plausible that in these data domains the hardest-negative distances can be very close to 0, since the examples are quite a bit more confusing for the network than landmarks. Therefore, hardest-negative sampling might not be the best choice for your dataset; you might want to add some thresholding or include easier negatives. Moreover, judging by your triplet examples, you might want the positives and negatives to come from the same viewpoint: right now the negatives are much closer to the anchor than the positive is, which makes the triplet loss practically impossible to minimise. So, if your dataset has a viewpoint attribute, I suggest you constrain the negative viewpoints to be as different from the anchor as the positive's is, then mine from this constrained pool of negatives to find the hardest ones.
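
A minimal sketch of such thresholded mining, assuming L2-normalised descriptors in a torch tensor and integer class labels (every name below is hypothetical, not this repo's API):

import torch

def mine_negatives(anchor, descriptors, labels, anchor_label,
                   pos_dist, num_neg=5, margin=0.1):
    """Pick the hardest negatives that are still at least `margin`
    farther from the anchor than the positive (semi-hard style),
    instead of the globally hardest ones."""
    dists = torch.cdist(anchor.unsqueeze(0), descriptors).squeeze(0)
    neg_mask = labels != anchor_label
    # discard negatives that sit closer to the anchor than pos_dist + margin
    valid = neg_mask & (dists > pos_dist + margin)
    if valid.sum() < num_neg:  # fall back to all negatives if too few remain
        valid = neg_mask
    cand = torch.nonzero(valid).squeeze(1)
    order = torch.argsort(dists[cand])  # hardest (smallest distance) first
    return cand[order[:num_neg]]

The viewpoint constraint described above could be applied the same way: AND a viewpoint mask into valid before ranking the candidates.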

Also, would you be able to show me your test dataset? The mAP is dependent on the number of ground-truth positive labels, so any mislabelling there might impact the mAP a lot even when the rank-1 predictions are nearly perfect.

[screenshot: example of a triplet with moderately difficult negatives]


hewumars commented on June 4, 2024

I modified test.py.

import glob
import os
import time

import numpy as np
from torchvision import transforms

# load_network, extract_vectors, get_data_root, datasets_names and parser
# are the ones defined/imported in the repo's original test.py.

def main():
    args = parser.parse_args()

    # check if there are unknown datasets
    for dataset in args.datasets.split(','):
        if dataset not in datasets_names:
            raise ValueError('Unsupported or unknown dataset: {}!'.format(dataset))

    # check if test dataset are downloaded
    # and download if they are not
    # download_test(get_data_root())

    # setting up the visible GPU
    os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_id

    # loading network
    net = load_network(network_name=args.network)
    net.mode = 'test'
    # x = torch.randn(1, 3, 256, 256, requires_grad=False)
    # torch.onnx.export(net, x, "solar.onnx", opset_version=12, verbose=True)

    print(">>>> loaded network: ")
    print(net.meta_repr())

    # setting up the multi-scale parameters
    ms = list(eval(args.multiscale))

    print(">>>> Evaluating scales: {}".format(ms))

    # moving network to gpu and eval mode
    net.cuda()
    net.eval()

    # set up the transform
    normalize = transforms.Normalize(
        mean=net.meta['mean'],
        std=net.meta['std']
    )
    transform = transforms.Compose([
        transforms.ToTensor(),
        normalize
    ])

    # evaluate on test datasets
    datasets = args.datasets.split(',')
    for dataset in datasets:
        start = time.time()

        print('')
        print('>> {}: Extracting...'.format(dataset))

        # prepare config structure for the test dataset
        dataset_root_path = os.path.join(get_data_root(),'test',dataset)
        images = []
        qimages = []
        images_path = os.listdir(os.path.join(dataset_root_path,'query'))
        for dir_name in images_path:
            image_paths = glob.glob(os.path.join(dataset_root_path,'query', dir_name, '*.jpg'))
            for image_path in image_paths:
                qimages.append(image_path)
        images_path = os.listdir(os.path.join(dataset_root_path,'gallery'))
        for dir_name in images_path:
            image_paths = glob.glob(os.path.join(dataset_root_path,'gallery', dir_name, '*.jpg'))
            for image_path in image_paths:
                images.append(image_path)
        # no bounding boxes for this dataset (the landmark-style bbxs
        # are only used for the original landmark test sets)
        bbxs = None

        # extract database and query vectors
        print('>> {}: database images...'.format(dataset))
        vecs = extract_vectors(net, images, args.image_size, transform, ms=ms, mode='test')
        vecs = vecs.numpy()

        print('>> {}: query images...'.format(dataset))
        qvecs = extract_vectors(net, qimages, args.image_size, transform, bbxs=bbxs, ms=ms, mode='test')
        qvecs = qvecs.numpy()

        print('>> {}: Evaluating...'.format(dataset))

        # search, rank, and print
        scores = np.dot(vecs.T, qvecs)
        ranks = np.argsort(-scores, axis=0)
        scoresT = scores.T
        ranksT = ranks.T
        top1 = 0
        top_one = 0
        mAP = 0.0
        false_alarm_num = 0
        for i in range(ranksT.shape[0]):
            # the parent directory name is the vehicle ID (the original
            # slice path[path.rfind('/')-4:path.rfind('/')] assumes the
            # directory name is four characters long)
            query_id = os.path.basename(os.path.dirname(qimages[i]))
            gallery_id0 = os.path.basename(os.path.dirname(images[ranksT[i][0]]))
            if query_id == gallery_id0:
                top1 += 1
                if scoresT[i][ranksT[i][0]] > 0.6:
                    top_one += 1
            t = 0
            rank = 0.0
            for j in range(ranksT.shape[1]):
                gallery_id = os.path.basename(os.path.dirname(images[ranksT[i][j]]))
                if query_id == gallery_id:
                    t += 1
                    rank += t / (j + 1)
                elif scoresT[i][ranksT[i][j]] > 0.6:
                    false_alarm_num += 1
            if t == 0:
                continue
            mAP += rank / t
            print('{}.{} AP = {}%'.format(i, query_id, rank / t * 100))
        query_num = len(qimages)
        print('TOP1 num: {}'.format(top1))
        print('TOP1 recall: {}%'.format(top1 / query_num * 100))
        print('mAP = {}%'.format(mAP / query_num * 100))
        print('accuracy: {}%'.format(top_one / query_num * 100))
        print('false num: {}'.format(false_alarm_num))
        print('false rate: {}%'.format(false_alarm_num / query_num * 100))

The figures below show the similarity matrix; its shape is [1367, 11579]. As the loss converges, all similarity values get closer to 1.
[screenshots: similarity matrix visualisations]
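
Similarities drifting uniformly towards 1 usually indicate that the descriptors are collapsing towards a single direction. A quick check, assuming vecs is the [D, N] database matrix produced by the script above:

import numpy as np

# subsample so the pairwise similarity matrix stays small
idx = np.random.choice(vecs.shape[1], size=min(1000, vecs.shape[1]), replace=False)
sub = vecs[:, idx]
sims = sub.T @ sub  # pairwise cosine similarities of L2-normalised descriptors
off_diag = sims[~np.eye(sims.shape[0], dtype=bool)]
print('mean off-diagonal similarity: {:.3f}'.format(off_diag.mean()))
print('std of similarities: {:.3f}'.format(off_diag.std()))
# a mean near 1 with tiny std means the embedding has collapsed and the
# ranking carries almost no signal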


hewumars commented on June 4, 2024

VeRi test data (Google Drive): https://drive.google.com/file/d/1NsH8e4NbFQYxtPc6QLL0aJfsIm3A4OHf/view?usp=sharing
I initially suspected a problem with the dataset generation, but found no errors.
I am considering whether the triplet loss needs to be combined with a classification loss to ensure retrieval accuracy.
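
A minimal sketch of combining the two objectives in PyTorch (the classification head and the weight lambda_cls are hypothetical additions, not part of this repo):

import torch.nn as nn

class IDHead(nn.Module):
    """Hypothetical linear classification head over the global descriptor."""
    def __init__(self, dim, num_ids):
        super().__init__()
        self.fc = nn.Linear(dim, num_ids)

    def forward(self, desc):
        return self.fc(desc)

triplet = nn.TripletMarginLoss(margin=1.25)  # same margin as the training params above
cross_entropy = nn.CrossEntropyLoss()

def combined_loss(anchor, pos, neg, logits, ids, lambda_cls=1.0):
    # the triplet term preserves the metric structure, while the
    # classification term pulls each vehicle ID into its own region
    return triplet(anchor, pos, neg) + lambda_cls * cross_entropy(logits, ids)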

