
ammieqi's Projects

sr-gnn

Source code and datasets for the paper "Session-based Recommendation with Graph Neural Networks" (AAAI-19)

ssg

Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification, ICCV 2019 (Oral)

ssta-captioning

Repository for paper: Saliency-Based Spatio-Temporal Attention for Video Captioning

st-gcn

Spatial Temporal Graph Convolutional Networks (ST-GCN) for Skeleton-Based Action Recognition in PyTorch

st-gcn-pytorch

Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition

stanfordnlp

Official Stanford NLP Python Library for Many Human Languages

stargan

Official PyTorch Implementation of StarGAN - CVPR 2018

state-of-the-art-result-for-machine-learning-problems

This repository provides state-of-the-art (SoTA) results for all machine learning problems. We do our best to keep this repository up to date. If you find that a problem's SoTA result is out of date or missing, please raise an issue or submit the Google form (with the research paper name, dataset, metric, source code, and year). We will fix it immediately.

ste-nvan

Spatially and Temporally Efficient Non-local Attention Network for Video-based Person Re-Identification (BMVC 2019)

step

STEP: Spatio-Temporal Progressive Learning for Video Action Detection (CVPR 2019)

stgcn-pytorch

Implementation of a spatio-temporal graph convolutional network in PyTorch

stgraph

Codebase for CVPR 2020 paper "Spatio-Temporal Graph for Video Captioning with Knowledge Distillation"

structpool

Code for the ICLR 2020 paper "StructPool: Structured Graph Pooling via Conditional Random Fields"

structural-transformer

Code corresponding to the paper "Modeling Graph Structure in Transformer for Better AMR-to-Text Generation" (EMNLP-IJCNLP 2019)

stsgcn

Spatial-Temporal Synchronous Graph Convolutional Networks: A New Framework for Spatial-Temporal Network Data Forecasting (AAAI 2020)

stsgcn-1

Spatial-Temporal Synchronous Graph Convolutional Networks: A New Framework for Spatial-Temporal Network Data Forecasting (AAAI 2020)

sttran

Spatial-Temporal Transformer for Dynamic Scene Graph Generation (ICCV 2021)

sura

Video Description using Deep Learning
