Comments (10)
Hi, sorry for not making it clear: yes, I am calling it via the Python client library.
When I call it via the Triton client I've always done it this way, but I recently added one extra output, and now the two outputs come back in each other's positions. Or am I just doing it wrong?
import numpy as np
import tritonclient.grpc as grpcclient

# images, input_name and input_dtype are defined elsewhere
triton_client = grpcclient.InferenceServerClient(url="localhost:8001")
sent_count = 0
for image_path in images:
    with open(image_path, 'rb') as f:
        image = np.frombuffer(f.read(), dtype=np.uint8)
    sent_count += 1
    image = np.expand_dims(image, axis=0)
    inputs = [grpcclient.InferInput(input_name, image.shape, input_dtype)]
    # isn't this how the client accesses outputs, via the tensor names?
    outputs = [grpcclient.InferRequestedOutput('processed_image'),
               grpcclient.InferRequestedOutput('image_shape')]
    inputs[0].set_data_from_numpy(image)
    result = triton_client.infer(
        'seg_preproc', inputs,
        request_id=str(sent_count),
        model_version='1', outputs=outputs
    )
    result.as_numpy('image_shape')      # (1, 256, 256, 3), FP32 (swapped!)
    result.as_numpy('processed_image')  # (1, 3), INT64 (swapped!)
When I launch Triton in verbose mode, I get the following, in this order:
requested outputs:
image_shape
processed_image
from dali_backend.
@edwin-19 ,
Unfortunately, I couldn't quickly reproduce your issue; for me the order of outputs is correct. Could you share your DALI pipeline? The problem may lie there.
Also, which version of dali_backend do you use? A few days ago we merged a major bug fix, which might also affect your case. Should you like to try out the master version, please build dali_backend with Docker:
git clone --recursive git@github.com:triton-inference-server/dali_backend.git
cd dali_backend
docker build -t tritonserver:dali-master .
Then you can treat tritonserver:dali-master as your regular Triton server image.
You don't say how you are communicating with Triton. I assume the tritonclient Python library, but the answer would be the same if you were using the C++ library or the HTTP/REST or GRPC protocol directly. In all these cases there is no enforced order of the output (or input) tensors, even between different requests to the same model. You must access the outputs by tensor name.
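A minimal plain-Python sketch of that point (this is not the tritonclient API; parse_response and the tensor values are hypothetical stand-ins): a response is effectively a name-to-tensor mapping, so lookups are independent of the order in which the server happened to send the outputs.

```python
# Hypothetical sketch -- NOT the real tritonclient API. It only illustrates
# that outputs are keyed by tensor name, so wire order never matters.

def parse_response(raw_outputs):
    """raw_outputs: (name, tensor) pairs in whatever order the server sent them."""
    return dict(raw_outputs)

# Two responses carrying the same tensors in opposite wire order:
resp_a = parse_response([("image_shape", (1, 3)),
                         ("processed_image", (1, 256, 256, 3))])
resp_b = parse_response([("processed_image", (1, 256, 256, 3)),
                         ("image_shape", (1, 3))])

# Name-based lookup returns the same tensor either way.
assert resp_a["image_shape"] == resp_b["image_shape"] == (1, 3)
assert resp_a["processed_image"] == resp_b["processed_image"]
```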
@szalpal
I didn't upgrade the DALI version; I just pulled the Triton server image (2.6.0), so it should be 0.28.
I have been using DALI for about two weeks. I see there was a bug fix, hmm. Is it likely the DALI version affecting this?
Anyway, my pipeline is shown below, but I don't think it should cause the outputs to be mixed up?
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.fn as fn
import nvidia.dali.types as types

pipeline = Pipeline(batch_size=32, num_threads=4, device_id=0)
with pipeline:
    source = fn.external_source(device='cpu', name='image_input')
    shapes = fn.peek_image_shape(source)
    images = fn.image_decoder(source, device="mixed", output_type=types.BGR)
    resized_images = fn.resize(images, resize_x=256, resize_y=256)
    resized_images = resized_images / 255.
    pipeline.set_outputs(resized_images, shapes)
Unfortunately, again I couldn't reproduce your problem, even using your pipeline. Here are the files I used; I hope you'll find them useful:
import nvidia.dali as dali
import nvidia.dali.fn as fn
import nvidia.dali.types as types

pipeline = dali.pipeline.Pipeline(batch_size=32, num_threads=4, device_id=0)
with pipeline:
    source = fn.external_source(device='cpu', name='image_input')
    shapes = fn.peek_image_shape(source)
    images = fn.image_decoder(source, device="mixed", output_type=types.BGR)
    resized_images = fn.resize(images, resize_x=256, resize_y=256)
    resized_images = resized_images / 255.
    pipeline.set_outputs(resized_images, shapes)
pipeline.serialize(filename=filename)  # filename: path of the serialized pipeline file
name: "dali"
backend: "dali"
max_batch_size: 256
input [
  {
    name: "image_input"
    data_type: TYPE_UINT8
    dims: [ -1 ]
  }
]
output [
  {
    name: "DALI_OUTPUT_0"
    data_type: TYPE_FP32
    dims: [ 256, 256, 3 ]
  },
  {
    name: "DALI_OUTPUT_1"
    data_type: TYPE_FP32
    dims: [ -1 ]
  }
]
And the client with the output:
# (assumes triton_client, model_name, batch, and the inputs list are set up as before)
outputs = []
outputs.append(tritonclient.grpc.InferRequestedOutput("DALI_OUTPUT_0"))
outputs.append(tritonclient.grpc.InferRequestedOutput("DALI_OUTPUT_1"))
inputs[0].set_data_from_numpy(batch)
# Test with outputs
results = triton_client.infer(model_name=model_name,
                              inputs=inputs,
                              outputs=outputs)
print("0:", results.as_numpy("DALI_OUTPUT_0").shape)  # 0: (1, 256, 256, 3)
print("1:", results.as_numpy("DALI_OUTPUT_1").shape)  # 1: (1, 3)
Should you like more help with your issue, please provide more details or a more specific reproduction.
Hi @szalpal, thanks for the feedback; with your suggestion above I've kind of solved it.
These are the original output tensor names for my Triton DALI backend:
# When I set the tensor outputs to the following names, the order goes weird
output [
  {
    name: "processed_image"
    data_type: TYPE_FP32
    dims: [ 256, 256, 3 ]
  },
  {
    name: "image_shape"
    data_type: TYPE_INT64
    dims: [ 3 ]
  }
]
When I change the tensor names to what you suggested above, the order matches:
# When I set the names like this, the mapping and order are correct
output [
  {
    name: "DALI_OUTPUT_0"
    data_type: TYPE_FP32
    dims: [ 256, 256, 3 ]
  },
  {
    name: "DALI_OUTPUT_1"
    data_type: TYPE_FP32
    dims: [ -1 ]
  }
]
So I was wondering: is this by design in dali_backend? I did not specify any special names when setting the outputs, so I thought I could assign any names as long as I followed the order of my pipeline outputs.
This is interesting! Thank you for providing the info. Certainly, there shouldn't be any problem with custom names for the outputs. If this is confirmed, I guess it's a bug. I'll look into the issue.
Sure, let me know if you need any extra info.
Indeed, there is a bug in mapping the outputs from DALI to the outputs from dali_backend.
As a workaround, please assign the output names in config.pbtxt in an alphanumeric order that matches the pipeline's output order. I'll post the fix as soon as possible.
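A hypothetical example of that workaround for the pipeline above (the 0_/1_ name prefixes are made up; any names whose alphanumeric order matches the pipeline's output order should work):

```
output [
  {
    # "0_processed_image" sorts before "1_image_shape",
    # matching the pipeline's output order (processed image first)
    name: "0_processed_image"
    data_type: TYPE_FP32
    dims: [ 256, 256, 3 ]
  },
  {
    name: "1_image_shape"
    data_type: TYPE_INT64
    dims: [ 3 ]
  }
]
```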
PR with fix prerequisite:
NVIDIA/DALI#2665
Waiting for DALI v1.0