I've found a bug in the index function that corrupts data whenever a Tensor is indexed along any dimension other than the first.
-- Create random data and random slice indices
x = torch.rand(2000,20000)+3
ind = torch.randperm(x:size(2))[{{1,10}}]:long()
-- Slice x along dimension 2 using the supplied indices in ind
sx = x:index(2,ind)
print("Original Range: " .. x:min() .. "," .. x:max())
print("Slice Range: " .. sx:min() .. "," .. sx:max())
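To confirm the corruption isn't an artifact of min/max alone, each sliced column can be compared directly against its source column (a sketch using the same x, ind, and sx as above):

-- Compare each sliced column to the column it was copied from
for i = 1, ind:size(1) do
    local diff = (sx:select(2, i) - x:select(2, ind[i])):abs():max()
    print(string.format("column %d: max abs diff = %g", i, diff))
end

With a correct copy every printed difference would be 0.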
The original range reports 3,4 as expected, while the sliced range reports an invalid range of roughly 0,4 (the exact bounds vary with the random data, but the minimum should never fall below 3).
This indicates that the internal data is invalid after using the index function. It appears related to the size of the matrix itself, since Tensors of size [200,10] are copied perfectly:
x = torch.rand(200,10)+3
ind = torch.LongTensor{1,2,3,4,5,6,7,8,9,10} -- Select all indices in order
sx = x:index(2,ind)
print("Matching: ".. x:eq(sx):sum() .. "/" .. x:nElement())
This produces Matching: 2000/2000. The critical point seems to be when the first dimension exceeds 720: a matrix of size [720,10] with the above test produces Matching: 7200/7200, but a matrix of size [721,10] produces Matching: 129/7210, which is clearly wrong.
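Until this is fixed, the same slice can be built with an explicit per-column copy, which sidesteps index entirely (a sketch under the same setup, with x and ind as defined above):

-- Workaround: copy the selected columns one at a time instead of using index
sx = torch.Tensor(x:size(1), ind:size(1))
for i = 1, ind:size(1) do
    sx:select(2, i):copy(x:select(2, ind[i]))
end
print("Slice Range: " .. sx:min() .. "," .. sx:max())

This loop-based copy reports the expected 3,4 range in my testing, which further suggests the problem is specific to the index code path rather than the underlying storage.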
I'd really appreciate any insight into this error.