microsoft / cntk
Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
Home Page: https://docs.microsoft.com/cognitive-toolkit/
License: Other
Hi all, while running class-based LM training with an LSTM, I have found that if you define the writeWordAndClass section as below:
writeWordAndClassInfo = [
action = "writeWordAndClass"
# input train data
inputFile = "$DataDir$/$trainFile$"
# four column vocabulary file
#
# FORMAT:
# - the first column is the word id
# - the second column is the count of the word
# - the third column is the word
# - the fourth column is the class id
outputVocabFile = "$ModelDir$/vocab.txt"
If outputVocabFile's parent directory (here $ModelDir$) does not exist, the following error occurs:
EXCEPTION occurred: cannot open word class file
SOLUTION:
One more line of code fixes this problem. In Source/ActionsLib/OtherActions.cpp, function DoWriteWordAndClassInfo(const ConfigParameters& config), around line 431, change:
std::ofstream ofvocab;
ofvocab.open(outputVocabFile.c_str());
to
std::ofstream ofvocab;
msra::files::make_intermediate_dirs(s2ws(outputVocabFile));
ofvocab.open(outputVocabFile.c_str());
Recompile and install CNTK, and everything will work.
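For reference, the same "create the parent directories before opening the file" pattern can be sketched with std::filesystem (C++17). This is an illustrative stand-in, not CNTK's actual msra::files::make_intermediate_dirs helper:

```cpp
#include <filesystem>
#include <fstream>
#include <string>

// Open an output file for writing, creating any missing parent
// directories first. Mirrors the idea of the CNTK fix above, but uses
// std::filesystem so the example is self-contained.
bool openWithDirs(const std::string& path, std::ofstream& ofs)
{
    std::filesystem::path p(path);
    if (p.has_parent_path())
        std::filesystem::create_directories(p.parent_path()); // no-op if it already exists
    ofs.open(path);
    return ofs.is_open();
}
```

With this pattern, writing to a vocab file inside a not-yet-existing model directory succeeds instead of failing at open().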
In the wiki of this repo on the right hand side and at https://github.com/Microsoft/CNTK/wiki/Enabling-1bit-SGD is a link to "CNTK 1bit-SGD License" https://github.com/Microsoft/CNTK-1bit-SGD/blob/master/LICENSE.md
which leads to a 404 because the repo doesn't exist yet :)
Hi, I am trying the CNTK binary, and when I run it on the GPU I encounter the following error message: "CNTK: Win32 exception caught (such as an access violation or a stack overflow)".
I am using nvidia k5000 and cuda7.0 on Windows 10.
Does anyone know what the problem is?
thanks a lot
Hello All,
I am attempting to run the Simple2d example from the Wiki, but I get the following exception:
EXCEPTION occurred: ConfigValue (uint64_t): invalid input string
CNTK is installed on a machine running CentOS 6.6 and is one of the last versions from when the source code was on CodePlex. I have attached the full output here: run_simple2d.txt.
Currently disabled because it has a known bug (Backprop). Also should allow transposing arbitrary tensor dimensions. Switching to true tensors will fix both.
Hello,
I cannot find the CPU only binary in the binary downloads link
https://github.com/Microsoft/CNTK/wiki/CNTK-Binary-Download-and-Configuration
While compiling the source from scratch, I see type-qualifier errors such as:
In file included from Source/Common/Include/latticearchive.h(25),
from Source/SequenceTrainingLib/latticeforwardbackward.cpp(11):
Source/Common/Include/simplesenonehmm.h(294): error #858: type qualifier on return type is meaningless
const size_t getnumsenone() const
.....
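For context, the Intel compiler's error #858 refers to a top-level const on a by-value return type, which has no effect; dropping it fixes the build without changing behavior. A minimal sketch, using a hypothetical stand-in class (the field and value are made up for illustration):

```cpp
#include <cstddef>

// Hypothetical stand-in for the class in simplesenonehmm.h.
class SenoneHmm
{
    size_t m_numSenones = 9304; // made-up value for the example
public:
    // Before: `const size_t getnumsenone() const` -- the leading `const`
    // on a value returned by copy is meaningless (hence error #858).
    // After: drop it. The trailing `const` (the method does not modify
    // the object) is meaningful and stays.
    size_t getnumsenone() const { return m_numSenones; }
};
```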
Has not been used for a while, and may need updates.
I wanted to try the library in CPU mode, so I downloaded the CNTK-20160126-Windows-64bit-ACML5.3.1-CUDA7.0 binary and set the appropriate environment variable, ACML_FMA. When I run CNTK with the 'Simple' example, which should have CPU support, it crashes with the errors "The program can't start because cublas_64_70.dll" and "...curand64_70.dll is missing from your computer". Those libraries seem to be CUDA-related... Should I try to compile the library myself, or is there some other problem here?
After executing the command "git submodule update --init --recursive", there is no 1bit-SGD code in the Source/1BitSGD directory of the CNTK repository. Could you please tell me how to solve this problem?
Hi, I am new to CNTK, so I followed the step-by-step setup on https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-Windows
I've downloaded all the third-party files and set the environment variables, but when I open the CNTKSolution and build the CNTK project, I get the following errors:
1>------ Rebuild All started: Project: MathCUDA, Configuration: Debug x64 ------
2>------ Rebuild All started: Project: SequenceTrainingLib, Configuration: Debug x64 ------
3>------ Rebuild All started: Project: EvalWrapper, Configuration: Debug x64 ------
3> wrapper.cpp
1>
1> D:\2015erdos\Install CNTK\CNTK-master\CNTK-master\Source\Math>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\bin\nvcc.exe" -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\x86_amd64" -I..\Common\include\ -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.0\include" -lineinfo --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -Xcudafe "--diag_suppress=field_without_dll_interface" -g -use_fast_math -D_DEBUG -DNO_SYNC -DWIN32 -D_WINDOWS -D_USRDLL -DMATH_EXPORTS -DUSE_CUDNN -D_UNICODE -DUNICODE -Xcompiler "/EHsc /W4 /nologo /Od /Zi /RTC1 /MDd " -o x64\Debug\MathCUDA\GPUTensor.cu.obj "D:\2015erdos\Install CNTK\CNTK-master\CNTK-master\Source\Math\GPUTensor.cu" -clean
2> parallelforwardbackward.cpp
2>C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\crtdefs.h(496): error C2371: 'size_t' : redefinition; different basic types
2> parallelforwardbackward.cpp : see declaration of 'size_t'
3>C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\crtdefs.h(496): error C2371: 'size_t' : redefinition; different basic types
3> wrapper.cpp : see declaration of 'size_t'
3>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(889): error C4235: nonstandard extension used : '__asm' keyword not supported on this architecture
3>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2065: 'mov' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(889): error C4235: nonstandard extension used : '__asm' keyword not supported on this architecture
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2065: 'mov' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2146: syntax error : missing ';' before identifier 'ecx'
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(890): error C2065: 'ecx' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2146: syntax error : missing ';' before identifier 'mov'
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2065: 'mov' : undeclared identifier
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2146: syntax error : missing ';' before identifier 'eax'
2>C:\Program Files (x86)\Windows Kits\8.1\Include\um\winnt.h(891): error C2065: 'eax' : undeclared identifier
......
GitHub won't let me paste the full build output, so I attached it as a txt file.
Compiling errors.txt
Any idea how to fix it? Thanks!!!
I would like to build cntk with cuda 7.5 and vs2013 on windows 10. However, the MathCUDA project cannot be loaded. The error message says, "The imported project "C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\BuildCustomizations\CUDA 7.0.props" was not found." I wonder if I have to install cuda 7.0 in order to build the solution.
Thanks
Include directories should point to $(WindowsSDK_IncludePath) for the proper architecture (10 in this case; 8/8.1?); setting them through the VC++ "Additional Include Directories" option is the best way to do so.
The project does not compile with Visual Studio 2015 and Windows 10 unless the directories are fixed (ctype.h include errors, etcetera).
Also, a nice README about CUDA, ACML and MPI would come in handy.
The license looks like it was written for CNTK, but I think that legally it's equivalent to either BSD or MIT. It would be convenient for developers if you were to adopt one of those, or really almost any license that is well known, instead of writing your own.
Just a thought.
In the setup description in the wiki https://github.com/Microsoft/CNTK/wiki/CNTK-Binary-Download-and-Configuration
it says to download the latest release (link provided to https://github.com/Microsoft/CNTK/releases)
However, there are currently no releases.
Error "EXCEPTION occurred: Undefined function or macro 'ConvReLUBNLayer' in ConvReLUBNLayer(featScaled, cMap1, 25, kW1, kH1, hStride1, vStride1, 10)"
In the network definition file "03_ConvBatchNorm.ndl", it says:
# ConvReLUBNLayer is defined in Macros.ndl
conv1 = ConvReLUBNLayer(featScaled, cMap1, 25, kW1, kH1, hStride1, vStride1, 10)
In fact, the function "ConvReLUBNLayer" is not defined in "Macros.ndl"!
Hi,
I am trying to run CNTK on a system with GPUs.
The compilation and creation of data proceeded fine without any issues.
After running, I see this error:
cat ../Output/01_OneHidden_out_train_test.log
<<<<<<<<<<<<<<<<<<<< PROCESSED CONFIG WITH ALL VARIABLES RESOLVED <<<<<<<<<<<<<<<<<<<<
command: train test
precision = float
CNTKModelPath: ../Output/Models/01_OneHidden
CNTKCommandTrainInfo: train : 30
CNTKCommandTrainInfo: CNTKNoMoreCommands_Total : 30
CNTKCommandTrainBegin: train
[CALL STACK]
/scratch-shared/mch/scratch/dipsank/CUDNN/CNTK/bin/../lib/libcntkmath.so ( Microsoft::MSR::CNTK::DebugUtil::PrintCallStack() + 0xb4 ) [0x7fe97d769d44]
cntk ( void Microsoft::MSR::CNTK::ThrowFormatted<std::runtime_error>(char const*, ...) + 0xc0 ) [0x530140]
cntk ( Microsoft::MSR::CNTK::BestGpu::GetDevices(int, Microsoft::MSR::CNTK::BestGpuFlags) + 0x98d ) [0x7ae02d]
cntk ( Microsoft::MSR::CNTK::BestGpu::GetDevice(Microsoft::MSR::CNTK::BestGpuFlags) + 0x1a ) [0x7ae29a]
cntk ( Microsoft::MSR::CNTK::DeviceFromConfig(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x5b3 ) [0x7b1873]
cntk ( void DoTrain<Microsoft::MSR::CNTK::ConfigParameters, float>(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x4c ) [0x76117c]
cntk ( void DoCommands(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x7a4 ) [0x5926e4]
cntk ( wmainOldCNTKConfig(int, wchar_t**) + 0xaa1 ) [0x52a941]
cntk ( wmain1(int, wchar_t**) + 0x62 ) [0x52b0f2]
cntk ( main + 0xcc ) [0x51e06c]
/lib64/libc.so.6 ( __libc_start_main + 0xfd ) [0x344e61ed5d]
cntk ( ) [0x521b09]
Please let me know if you need additional details.
Any pointers on what I might be doing wrong?
I would like to point out that identifiers like "_FILEUTIL_" and "__VALLUE_QUANTIZER_H__" do not conform to the naming rules of the C++ language standard: names containing a double underscore, or beginning with an underscore followed by an uppercase letter, are reserved for the implementation.
Would you like to adjust your choice of include-guard names?
A quick look at the repository history shows that most past commits don't comply with git's standard commit guidelines (see here). You might want to follow them to make reading logs a bit easier, and clean up the history if you are looking for external contributors.
Some examples:
Hi,
After compiling CNTK on Win8.1, I was able to run Simple2d to get a taste of this toolkit. The performance is amazing; it's super fast!
However, I have a simple question: besides printing the EvalErrorPrediction value on the screen, is there a command or action that can output the classification results to a .txt file like this:
0 0
1 1
0 1
1 0
1 1
...
where the first column is the label from the test file, and the second column is the label predicted by CNTK. The reason I'm asking is that we intend to run various fiber-identification tasks on CNTK, and hence need to know the identified fiber blend ratio; say, a 70/30 cotton/wool blend might be identified as a 65/35 blend.
Thanks!!
As per the GitHub guide, you can add a CONTRIBUTING.md to help the community write pull requests and to ease reviewing their commits. It should include the preferred code style per language, commit-message guidelines, and other related preferences.
The binary download from CodePlex is out of date. A new binary download is needed for the current CNTK build.
Some projects don't compile under Visual Studio 2013 without changes.
I got the following error.
1>TensorView.cpp(291): error C2440: '' : cannot convert from 'initializer-list' to 'std::array<Microsoft::MSR::CNTK::TensorShape,0x04>'
1> Constructor for class 'std::array<Microsoft::MSR::CNTK::TensorShape,0x04>' is declared 'explicit'
1> TensorView.cpp(280) : while compiling class template member function 'void Microsoft::MSR::CNTK::TensorView::DoTernaryOpOf(ElemType,const Microsoft::MSR::CNTK::TensorView &,const Microsoft::MSR::CNTK::TensorView &,const Microsoft::MSR::CNTK::TensorView &,ElemType,Microsoft::MSR::CNTK::ElementWiseOperator)'
1> with
1> [
1> ElemType=float
1> ]
1> c:\home\library\cntk\source\math\TensorView.h(117) : see reference to function template instantiation 'void Microsoft::MSR::CNTK::TensorView::DoTernaryOpOf(ElemType,const Microsoft::MSR::CNTK::TensorView &,const Microsoft::MSR::CNTK::TensorView &,const Microsoft::MSR::CNTK::TensorView &,ElemType,Microsoft::MSR::CNTK::ElementWiseOperator)' being compiled
1> with
1> [
1> ElemType=float
1> ]
1> TensorView.cpp(373) : see reference to class template instantiation 'Microsoft::MSR::CNTK::TensorView' being compiled
My workaround is to add an additional pair of '{' and '}' around the array initializer:
PrepareTensorOperands<ElemType, 2>(array<TensorShape, 2>{ { a.GetShape(), GetShape()} } , offsets, regularOpDims, regularStrides, reducingOpDims, reducingStrides);
Does anyone know the root cause of this compile error?
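To sketch why the extra braces help (my understanding, not an authoritative diagnosis of the VS2013 bug): std::array is an aggregate wrapping a built-in array, so full initialization has two brace levels. The single-braced form relies on brace elision, which VS2013 apparently mishandles in this context; the double-braced form is unambiguous on all compilers:

```cpp
#include <array>

// std::array<T, N> is an aggregate containing one member: a built-in
// array T[N]. Fully braced initialization therefore needs two levels:
// the outer braces for the aggregate, the inner for the array member.
std::array<int, 2> makeShapePair(int a, int b)
{
    // return std::array<int, 2>{a, b};   // single braces: relies on brace elision
    return std::array<int, 2>{{a, b}};    // double braces: portable everywhere
}
```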
See #32, CNTK doesn't seem to build in certain locales.
Hi,
I recently downloaded the CNTK source code and I'm trying to compile it on Ubuntu 14.04.3 LTS with the Kaldi plug-in. I followed the installation instructions in README.md and Source/Readers/KaldiReaderReadme, but unfortunately I hit an error when compiling "Source/Readers/Kaldi2Reader/HTKMLFReader.cpp" (see below).
Command:
mpic++ -c Source/Readers/Kaldi2Reader/HTKMLFReader.cpp -o /home/mirco/cntk_source/build/release/.build/Source/Readers/Kaldi2Reader/HTKMLFReader.o -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -DUSE_ACML -DKALDI_DOUBLEPRECISION=0 -DHAVE_POSIX_MEMALIGN -DHAVE_EXECINFO_H=1 -DHAVE_CXXABI_H -DHAVE_ATLAS -DHAVE_OPENFST_GE_10400 -DNDEBUG -msse3 -std=c++0x -std=c++11 -fopenmp -fpermissive -fPIC -Werror -fcheck-new -Wno-error=literal-suffix -O4 -ISource/Common/Include -ISource/Math -ISource/CNTK -ISource/ActionsLib -ISource/ComputationNetworkLib -ISource/SGDLib -ISource/SequenceTrainingLib -ISource/CNTK/BrainScript -I/usr/./include/nvidia/gdk -I/home/mirco/cub-1.5.1 -I/usr/local/cuda-7.0//include -I/opt/acml5.3.1/ifort64_mp/include -I/home/mirco/kaldi-trunk//src -I/home/mirco/kaldi-trunk//tools/ATLAS/include -I/home/mirco/kaldi-trunk//tools/openfst/include -MD -MP -MF /home/mirco/cntk_source/build/release/.build/Source/Readers/Kaldi2Reader/HTKMLFReader.d
Error:
Source/Readers/Kaldi2Reader/HTKMLFReader.cpp: In member function ‘void Microsoft::MSR::CNTK::HTKMLFReader<ElemType>::PrepareForTrainingOrTesting(const ConfigRecordType&)’:
Source/Readers/Kaldi2Reader/HTKMLFReader.cpp:477:79: error: no matching function for call to ‘msra::dbn::latticesource::latticesource(std::pair<std::vector<std::basic_string<wchar_t> >, std::vector<std::basic_string<wchar_t> > >&, std::unordered_map<std::basic_string<char>, long unsigned int>&)’
m_lattices = new msra::dbn::latticesource(latticetocs, modelsymmap);
^
Source/Readers/Kaldi2Reader/HTKMLFReader.cpp:477:79: note: candidate is:
In file included from Source/Readers/Kaldi2Reader/minibatchiterator.h:16:0,
from Source/Readers/Kaldi2Reader/rollingwindowsource.h:14,
from Source/Readers/Kaldi2Reader/HTKMLFReader.cpp:15:
Source/Common/Include/latticesource.h:29:5: note: msra::dbn::latticesource::latticesource(std::pair<std::vector<std::basic_string<wchar_t> >, std::vector<std::basic_string<wchar_t> > >, const std::unordered_map<std::basic_string<char>, long unsigned int>&, std::wstring)
latticesource (std::pair<std::vector<std::wstring>,std::vector<std::wstring>> latticetocs, const std::unordered_map<std::string,size_t> & modelsymmap, std::wstring RootPathInToc)
Source/Common/Include/latticesource.h:29:5: note: candidate expects 3 arguments, 2 provided
make[1]: *** [/home/mirco/cntk_source/build/release/.build/Source/Readers/Kaldi2Reader/HTKMLFReader.o] Error 1
make[1]: Leaving directory `/home/mirco/cntk_source'
make: *** [all] Error 2
How can I fix it?
Note that this is the only error I have when running make.
Thank you!
Mirco
Somehow the compiler gets confused by the template:
template <typename ElemType> bool CheckFunction(std::string& p_nodeType, bool* allowUndeterminedVariable = nullptr);
=-----------------------------------------------------------=
mpic++ -shared -L./lib -L/opt/acml5.3.1/ifort64_mp/lib -L/opt/opencv-3.0.0/release/lib -Wl,-rpath,'$ORIGIN' -Wl,-rpath,/opt/acml5.3.1/ifort64_mp/lib -Wl,-rpath,/opt/opencv-3.0.0/release/lib -o lib/ImageReader.so .build/Source/Readers/ImageReader/Exports.o .build/Source/Readers/ImageReader/ImageReader.o -lcntkmath -lopencv_core -lopencv_imgproc -lopencv_imgcodecs
=-----------------------------------------------------------=
building output for with build type release
mpic++ -L./lib -L/opt/acml5.3.1/ifort64_mp/lib -L/opt/opencv-3.0.0/release/lib -Wl,-rpath,'$ORIGIN/../lib' -Wl,-rpath,/opt/acml5.3.1/ifort64_mp/lib -Wl,-rpath,/opt/opencv-3.0.0/release/lib -o bin/cntk Source/CNTK/buildinfo.h .build/Source/CNTK/CNTK.o .build/Source/CNTK/ModelEditLanguage.o .build/Source/CNTK/NetworkDescriptionLanguage.o .build/Source/CNTK/SimpleNetworkBuilder.o .build/Source/CNTK/SynchronousExecutionEngine.o .build/Source/CNTK/tests.o .build/Source/ComputationNetworkLib/ComputationNode.o .build/Source/ComputationNetworkLib/ComputationNetwork.o .build/Source/ComputationNetworkLib/ComputationNetworkEvaluation.o .build/Source/ComputationNetworkLib/ComputationNetworkAnalysis.o .build/Source/ComputationNetworkLib/ComputationNetworkEditing.o .build/Source/ComputationNetworkLib/ComputationNetworkBuilder.o .build/Source/ComputationNetworkLib/ComputationNetworkScripting.o .build/Source/SGDLib/Profiler.o .build/Source/SGDLib/SGD.o .build/Source/ActionsLib/TrainActions.o .build/Source/ActionsLib/EvalActions.o .build/Source/ActionsLib/OtherActions.o .build/Source/ActionsLib/SpecialPurposeActions.o .build/Source/SequenceTrainingLib/latticeforwardbackward.o .build/Source/SequenceTrainingLib/parallelforwardbackward.o .build/Source/CNTK/BrainScript/BrainScriptEvaluator.o .build/Source/CNTK/BrainScript/BrainScriptParser.o .build/Source/CNTK/BrainScript/BrainScriptTest.o .build/Source/CNTK/BrainScript/ExperimentalNetworkBuilder.o .build/Source/Common/BestGpu.o .build/Source/Common/MPIWrapper.o .build/Source/SequenceTrainingLib/latticeNoGPU.o -lacml_mp -liomp5 -lm -lpthread -lcntkmath -fopenmp
.build/Source/CNTK/CNTK.o: In function `Microsoft::MSR::CNTK::NDLScript<float>::CheckName(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)':
/home/me/CNTK-r2016-01-26/Source/CNTK/NetworkDescriptionLanguage.h:800: undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, bool*)'
.build/Source/CNTK/CNTK.o: In function `Microsoft::MSR::CNTK::NDLScript<float>::NDLScript(Microsoft::MSR::CNTK::ConfigValue const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool)':
/home/me/CNTK-r2016-01-26/Source/CNTK/NetworkDescriptionLanguage.h:525: undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, bool*)'
.build/Source/CNTK/CNTK.o: In function `Microsoft::MSR::CNTK::NDLScript<float>::ParseValue(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, unsigned long, unsigned long)':
/home/me/CNTK-r2016-01-26/Source/CNTK/NetworkDescriptionLanguage.h:1022: undefined reference to `bool Microsoft::MSR::CNTK::CheckFunction<float>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, bool*)'
collect2: error: ld returned 1 exit status
Makefile:494: recipe for target 'bin/cntk' failed
make: *** [bin/cntk] Error 1
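These undefined references usually mean the template is declared in the header but defined only in a .cpp file that never instantiates it for float. One possible fix is an explicit instantiation in the defining translation unit; a self-contained sketch of the pattern (the function body here is a stand-in for illustration, not CNTK's actual implementation):

```cpp
#include <string>

// Declaration, as it would appear in the header:
template <typename ElemType>
bool CheckFunction(std::string& nodeType, bool* allowUndeterminedVariable = nullptr);

// Definition, as it would live in a .cpp file. Other translation units
// cannot see this body, so the compiler must be told to emit object
// code for the instantiations they use.
template <typename ElemType>
bool CheckFunction(std::string& nodeType, bool* allowUndeterminedVariable)
{
    return !nodeType.empty(); // stand-in body for illustration
}

// Explicit instantiations: force code for <float> and <double> into
// this translation unit so the linker can resolve calls from elsewhere.
template bool CheckFunction<float>(std::string&, bool*);
template bool CheckFunction<double>(std::string&, bool*);
```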
Edit CNTK book and online document to reflect the new changes, esp. new nodes we enabled, the config to run parallel training, and the new programmer's guide.
The main selling point of CNTK (compared to other deep-learning packages) is that it supports training a large model on a compute cluster. However, I couldn't find any information online or in the book on how to set up training across computers. Can anybody help?
Hello All,
If my testing dataset does not include labels, what kind of configuration should I apply? Is there any functionality like this enabled right now? When I remove the label configuration from my test action config, CNTK gives the error: "EXCEPTION occurred: features and label files must be the same file, use separate readers to define single use files".
Since it's on GitHub now, team members should use forks rather than official repo branches for saving their work.
Will CNTK support compiling and debugging on the OS X platform?
That would make it more portable.
Spotted by Frank, e.g., Source\ComputationNetworkLib\x64\Debug\ComputationNetworkLib*.obj
Hi, I am facing a problem where the AngularJS function executes before the view DOM loads. How can I fix this issue? As a workaround, I have used the code below:
app.run(["$rootScope", "$location","$window", function ($rootScope, $location,$window) {
$rootScope.$on("$routeChangeSuccess", function (userInfo) {
//console.log(userInfo);
angular.element(document).ready(function () {
setTimeout(function () {
var windowHeight = $(window).height() - 100;
$('#page-layout').css({ 'overflow-x': 'auto', 'max-height': windowHeight });
}, 1000);
$(window).resize(function () {
var windowHeight = $(window).height() - 100;
$('#page-layout').css({ 'overflow-x': 'auto', 'max-height': windowHeight });
});
});
});
$rootScope.$on("$routeChangeError", function (event, current, previous, eventObj) {
if (eventObj.authenticated === false) {
$location.path("/Login");
}
});
}])
Hi All,
The Kaldi decoding fails with the new version of CNTK; with an older version the decoding works fine. I find it difficult to infer the issue from the error message, which I have included below. Please advise. Thank you.
`Post-processing network complete.
HTKMLFWriter::Init: reading output script file data-lda/test_eval92/split8/1/cntk_test.counts ... 560 entries
Allocating matrices for forward and/or backward propagation.
evaluate: reading 571 frames of 440c02010
evaluate: reading 571 frames of 440c02010
[CALL STACK]
/home/lahiru/Devinstall/cntk_github/CNTK/build/release/lib/libcntkmath.so ( Microsoft::MSR::CNTK::DebugUtil::PrintCallStack() + 0xbf ) [0x7ff296ba6cdf]
cntk ( void Microsoft::MSR::CNTK::ThrowFormatted<std::logic_error>(char const*, ...) + 0xdd ) [0x53d5dd]
cntk ( Microsoft::MSR::CNTK::ComputationNode::NotifyFunctionValuesMBSizeModified() + 0x41c ) [0x53e57c]
cntk ( ) [0x758d37]
cntk ( Microsoft::MSR::CNTK::SimpleOutputWriter::WriteOutput(Microsoft::MSR::CNTK::IDataReader&, unsigned long, Microsoft::MSR::CNTK::IDataWriter&, std::vector<std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> >, std::allocator<std::basic_string<wchar_t, std::char_traits<wchar_t>, std::allocator<wchar_t> > > > const&, unsigned long, bool) + 0x363 ) [0x75bb63]
cntk ( void DoWriteOutput(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x669 ) [0x760849]
cntk ( void DoCommands(Microsoft::MSR::CNTK::ConfigParameters const&) + 0xc07 ) [0x593c47]
cntk ( wmainOldCNTKConfig(int, wchar_t**) + 0x909 ) [0x535519]
cntk ( wmain1(int, wchar_t**) + 0x68 ) [0x535be8]
cntk ( main + 0xd8 ) [0x529518]
/lib/x86_64-linux-gnu/libc.so.6 ( __libc_start_main + 0xf5 ) [0x7ff29582fec5]
cntk ( ) [0x52d4b7]
Closed Kaldi writer`
Hi,
This is my first build of CNTK, and I am stuck at the below error message:
Source/SGDLib/SimpleDistGradAggregator.h:282:186: error: ‘MPI_Iallreduce’ was not declared in this scope
System: Linux Mint Rosa (based on Ubuntu 14.04 LTS).
CUDA Toolkit v7.5 (GTX 970)
I downloaded and installed OpenMPI, ACML, CUB, and cuDNN per the instructions given on this site.
Appreciate any suggestions!
Ken
Currently CNTK directs all console output to stderr rather than stdout. Is there a reason for this?
mpic++ -c Source/CNTK/ModelEditLanguage.cpp -o .build/Source/CNTK/ModelEditLanguage.o -D_POSIX_SOURCE -D_XOPEN_SOURCE=600 -D__USE_XOPEN2K -DCPUONLY -DUSE_ACML -DNDEBUG -msse3 -std=c++0x -std=c++11 -fopenmp -fpermissive -fPIC -Werror -fcheck-new -Wno-error=literal-suffix -O4 -ISource/Common/Include -ISource/Math -ISource/CNTK -ISource/ActionsLib -ISource/ComputationNetworkLib -ISource/SGDLib -ISource/SequenceTrainingLib -ISource/CNTK/BrainScript -I/opt/acml5.3.1/open64_64//include -I/opt/opencv-3.0.0/include -MD -MP -MF .build/Source/CNTK/ModelEditLanguage.d
Source/Math/ConvolutionEngine.cpp: In instantiation of ‘static std::unique_ptr<Microsoft::MSR::CNTK::ConvolutionEngineFactory<ElemType> > Microsoft::MSR::CNTK::ConvolutionEngineFactory<ElemType>::Create(int, Microsoft::MSR::CNTK::ConvolutionEngineFactory<ElemType>::EngineType, Microsoft::MSR::CNTK::ImageLayoutKind) [with ElemType = float]’:
Source/Math/ConvolutionEngine.cpp:493:16: required from here
Source/Math/ConvolutionEngine.cpp:490:17: error: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘Microsoft::MSR::CNTK::ConvolutionEngineFactory<float>::EngineType’ [-Werror=format=]
RuntimeError("Not supported convolution engine type: %d.", engType);
cc1plus: all warnings being treated as errors
> dpkg --list | grep compiler
ii gcc-4.8 4.8.5-1ubuntu1 amd64 GNU C compiler
ii gcc-4.9 4.9.3-5ubuntu1 amd64 GNU C compiler
ii gcc-5 5.2.1-22ubuntu2 amd64 GNU C compiler
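With -Werror=format=, GCC rejects passing an enum where "%d" expects an int, since the enum's underlying type need not be int. The usual fix is an explicit cast at the call site; a sketch with a stand-in enum (the enumerator names are placeholders, not CNTK's actual ones):

```cpp
#include <cstdio>
#include <string>

enum class EngineType { Auto, CuDnn, Legacy }; // stand-in for the CNTK enum

// Passing `engType` straight to a printf-style "%d" triggers
// -Werror=format= on GCC; casting to int at the call site fixes it.
std::string formatEngineError(EngineType engType)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf),
                  "Not supported convolution engine type: %d.",
                  static_cast<int>(engType)); // cast silences -Wformat
    return buf;
}
```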
Hello there :)
I'm trying to create a sparse layer; for that I need sparse matrix multiplication and a way to define the sparse weight matrix. This is the only framework that has sparse matrix multiplication on GPU, but I could not find a way to use it from the NDL.
I guess the "wrappers" are missing. It would also be nice if you provided an example of using the library directly from C++; maybe from there I can access what I need.
Best regards,
Caio Mendes.
The projects reference many libraries that already have environment variables set upon installation.
Using those environment variables would make the projects and solution more tolerant to changes in library locations.
For example: C:\Program Files (x86)\Microsoft SDKs\MPI\Include should be $(MSMPI_INC).
These still need to be updated to support parallel sequences.
https://github.com/Microsoft/CNTK/tree/master/Documentation/Documents should be updated, and (fully?) integrated into the Wiki's content.
Would you like to add more error handling for return values from functions like the following?
CrossProcessMutex.h uses a file inside /var/lock for a global lock. That location cannot universally be assumed to be writable in all environments for all users. It should be changed to a location that is, or made configurable through a config variable.
CrossProcessMutex(const std::string& name)
: m_fd(-1),
m_fileName("/var/lock/" + name)
{
}
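One possible shape for the suggested change, sketched below: consult an environment variable before falling back to the hard-coded /var/lock default. The variable name CNTK_LOCK_DIR is hypothetical, chosen for illustration only:

```cpp
#include <cstdlib>
#include <string>

// Resolve the directory used for lock files: prefer an explicit
// override via an environment variable (CNTK_LOCK_DIR is a
// hypothetical name), then fall back to the historical default.
std::string lockDirectory()
{
    if (const char* dir = std::getenv("CNTK_LOCK_DIR"))
        return dir;
    return "/var/lock";
}

std::string lockFilePath(const std::string& name)
{
    return lockDirectory() + "/" + name;
}
```

The constructor would then initialize m_fileName from lockFilePath(name) instead of concatenating "/var/lock/" directly.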
The samples use very small data sets, but they will serve as the best-practice examples for real-life use. We should review and revise their configurations so that they are applicable to large, realistic data sets. E.g., while a tiny minibatch size is great for debugging, it is unnecessarily inefficient for parallelization.
I use the following command to download CIFAR-10 dataset
python CIFAR_convert.py
Then issue the following command to run it
cntk configFile=01_Conv.config configName=01_Conv
However, there is an error in the log below complaining that labelsmap.txt can't be found. How do you create labelsmap.txt?
<<<<<<<<<<<<<<<<<<<< PROCESSED CONFIG WITH ALL VARIABLES RESOLVED <<<<<<<<<<<<<<<<<<<<
command: Train Test
precision = float
CNTKModelPath: ./Output/Models/01_Convolution
CNTKCommandTrainInfo: Train : 30
CNTKCommandTrainInfo: CNTKNoMoreCommands_Total : 30
CNTKCommandTrainBegin: Train
NDLBuilder Using CPU
Reading UCI file ./Train.txt
[CALL STACK]
/home/fc/src/CNTK/bin/../lib/libcntkmath.so ( Microsoft::MSR::CNTK::DebugUtil::PrintCallStack() + 0xbf ) [0x7f178c7c90ff]
cntk ( void Microsoft::MSR::CNTK::ThrowFormattedstd::runtime_error(char const*, ...) + 0xdd ) [0x43332d]
/home/fc/src/CNTK/bin/../lib/UCIFastReader.so ( void Microsoft::MSR::CNTK::UCIFastReader::InitFromConfigMicrosoft::MSR::CNTK::ConfigParameters(Microsoft::MSR::CNTK::ConfigParameters const&) + 0xfed ) [0x7f1787b9813d]
/home/fc/src/CNTK/bin/../lib/libcntkmath.so ( Microsoft::MSR::CNTK::DataReader::DataReaderMicrosoft::MSR::CNTK::ConfigParameters(Microsoft::MSR::CNTK::ConfigParameters const&) + 0x48e ) [0x7f178c7c101e]
cntk ( ) [0x6436b1]
cntk ( ) [0x648470]
cntk ( ) [0x48a9ec]
cntk ( ) [0x42cd99]
cntk ( ) [0x42d3e8]
cntk ( ) [0x421288]
/lib/x86_64-linux-gnu/libc.so.6 ( __libc_start_main + 0xf5 ) [0x7f178b528ec5]
cntk ( ) [0x424e07]
EXCEPTION occurred: label mapping file ./labelsmap.txt not found, can be created with a 'createLabelMap' command/action