zhxfl / cuda-cnn
CNN accelerated by CUDA. Tested on MNIST and finally got 99.76%.
After I select option 1, it finally shows:
total malloc cpu memory 0.00000MB
total malloc Gpu memory 0.00000MB
cuMatrix Vector operator[] error
Could the author give some guidance? I can't find the cause.
I have run it successfully on VS2015 + CUDA 8.0 + OpenCV 2.4, thanks! Now I have a very good program to study, thanks!
I would also like to know what kind of computer you use; I want to buy a new one and would appreciate a hardware configuration to refer to, thanks!
Hi guys,
Really appreciate your elegant code!
Right now I am playing with the batch size setting. For the MNIST dataset, when I set the batch size to 1, the weights become NaN. I guess that is because the learning rate is too large for a batch size of 1. Any ideas why this is happening?
Thanks,
Jimmy
Currently I am studying the CUDA-CNN source code and may have found an unnecessary use of __syncthreads() in the kernel function g_getCost_3 in common/cuBase.cu.
For kernel function g_getCost_3:

__global__ void g_getCost_3(float* cost,
    float** weight,
    float lambda, int wlen)
{
    extern __shared__ float _sum[];
    _sum[threadIdx.x] = 0;
    __syncthreads();
    float* w = weight[blockIdx.x];
    for(int i = 0; i < wlen; i += blockDim.x)
    {
        int id = i + threadIdx.x;
        if(id < wlen)
        {
            _sum[threadIdx.x] += w[id] * w[id];
        }
    }
    ......
}
Meanwhile, g_getCost_3 is launched in the project as follows:
g_getCost_3<<<dim3(w.size()), dim3(32), sizeof(float) * 32>>>(cost->getDev(),
So we know there are 32 threads per block, which means each block contains exactly one warp.
Before the kernel reaches line 149, there is no branch divergence, so up to that point all threads in the warp remain synchronized (ref: https://devtalk.nvidia.com/default/topic/632471/is-syncthreads-required-within-a-warp- ): if there is no divergence in the warp, all of its threads execute the same instruction at the same time, so no warp-level synchronization is needed.
As a result, we can safely conclude that by the time every thread in the block executes "float* w = weight[blockIdx.x];" at line 149, "_sum[threadIdx.x] = 0;" has already completed in every thread, so the __syncthreads() at line 148 should be unnecessary?
Thanks very much ! :)
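One caveat worth noting: the implicit-warp-synchrony argument above holds on pre-Volta GPUs, but since Volta, independent thread scheduling means threads of a warp are no longer guaranteed to execute in lockstep, so relying on it is unsafe on newer hardware. A hedged sketch (assumed rewrite, not the repository's code) of how CUDA 9+ code would express the same kernel with an explicit warp-level barrier where one is actually needed:

```cuda
// Sketch under the same launch configuration (one 32-thread warp per block).
__global__ void g_getCost_3_warp(float* cost, float** weight,
                                 float lambda, int wlen)
{
    extern __shared__ float _sum[];
    _sum[threadIdx.x] = 0;
    // No barrier needed here: each thread has only written its own slot.
    float* w = weight[blockIdx.x];
    for (int i = 0; i < wlen; i += blockDim.x)
    {
        int id = i + threadIdx.x;
        if (id < wlen)
            _sum[threadIdx.x] += w[id] * w[id];
    }
    // A barrier IS needed before threads read each other's partial sums in
    // the reduction; on Volta+ a warp-level __syncwarp() suffices for a
    // single-warp block.
    __syncwarp();
    // ... tree reduction over _sum[], then accumulate into *cost ...
}
```

So the original __syncthreads() after the zero-initialization is indeed removable, but a barrier before the reduction step cannot be omitted on hardware with independent thread scheduling.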
When I run ./CUDA-CNN 1, I get the error 'cuMatrix host memory allocation failed'. Can you fix this?
I am running this on Mac OS X 10.11.