Comments (2)
For code organization purposes, we likely want to implement this not as branching logic in the orchestrator itself, but as a separate chunker that can be invoked by name (still implemented within the orchestrator repo, but from a deployment/user-configuration point of view, this name can still be specified in the config).
Hmm, I don't think this approach will fit with the current design. We delegate to the `ChunkerClient`, which wraps the underlying `ChunkersServiceClient`s. As this "default" implementation would not be an actual gRPC `ChunkersServiceClient` (which is a struct, not a trait that we could implement a `DefaultChunkerServiceClient` for), this wouldn't work.
I can propose some alternative ways to implement this, if interested.
from fms-guardrails-orchestrator.
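Since the generated gRPC client is a struct rather than a trait, one possible workaround (a hedged sketch only; the type and variant names here are illustrative and not taken from the repo, and this is not necessarily the alternative declark1 had in mind) is to wrap the concrete gRPC client in an enum so a built-in, non-gRPC "default" variant can sit alongside it:

```rust
// Illustrative sketch: dispatch between a real gRPC-backed chunker
// client and a built-in default via an enum, avoiding the need to
// implement a trait on the generated client struct.

#[derive(Debug, PartialEq)]
struct ChunkResult {
    text: String,
}

// Stand-in for the generated gRPC client struct.
struct GrpcChunkerClient;

impl GrpcChunkerClient {
    fn chunk(&self, _text: &str) -> Vec<ChunkResult> {
        // The real implementation would make a gRPC call to the chunker service.
        unimplemented!("external gRPC call")
    }
}

// Enum wrapper: the config can map a chunker name either to a gRPC
// endpoint or to the built-in default variant.
enum ChunkerClient {
    Grpc(GrpcChunkerClient),
    Default, // built-in: no external call
}

impl ChunkerClient {
    fn chunk(&self, text: &str) -> Vec<ChunkResult> {
        match self {
            ChunkerClient::Grpc(client) => client.chunk(text),
            // Default: return the whole input as a single result,
            // in the same shape a real chunker response would have.
            ChunkerClient::Default => vec![ChunkResult { text: text.to_string() }],
        }
    }
}

fn main() {
    let client = ChunkerClient::Default;
    let chunks = client.chunk("some input text");
    assert_eq!(chunks.len(), 1);
    assert_eq!(chunks[0].text, "some input text");
}
```

The enum keeps all dispatch in one place, so callers of `chunk` never branch on whether a chunker was configured.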
@declark1 definitely let us know some alternatives. The idea here was to avoid yet another piece of branching logic, like `if chunker not configured` or `if chunker is default`, and instead have a stock implementation of a chunker that, rather than making an external call, returns a response based directly on the input text, but in the same format that the "chunker" itself would produce.
This could be added in the `chunk` function, or we could add the default chunking mechanism as special-case handling in the chunk client itself 🤔
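The "stock" chunker described above could be sketched roughly as follows. This is purely illustrative under the assumption that a chunker response carries span offsets plus text; the `Chunk` struct and `default_chunk` function are hypothetical names, not the orchestrator's actual types.

```rust
// Hypothetical default chunker: no external call, the entire input
// is returned as a single chunk in the shape a real chunker
// response would have.

#[derive(Debug, PartialEq)]
struct Chunk {
    start: usize, // offset (in codepoints) where the chunk starts
    end: usize,   // offset (in codepoints) where the chunk ends
    text: String, // the chunk's text content
}

/// Built-in chunking: treat the whole input text as one chunk.
fn default_chunk(text: &str) -> Vec<Chunk> {
    vec![Chunk {
        start: 0,
        end: text.chars().count(),
        text: text.to_string(),
    }]
}

fn main() {
    let chunks = default_chunk("Hello, world!");
    assert_eq!(chunks.len(), 1);
    assert_eq!(chunks[0].start, 0);
    assert_eq!(chunks[0].end, 13);
    assert_eq!(chunks[0].text, "Hello, world!");
}
```

Because the output matches the format of a real chunker response, downstream detector calls would not need to know whether an external chunker was involved.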
Related Issues (20)
- Add a check to verify if the len of input and output match for detectors
- Add ADR for orchestrator API
- Change `get_test_context()` to have a default GenerationClient
- Replace prompt in detector context analysis API with content
- Document streaming result aggregation policy in ADR
- Implement streaming result aggregation policy
- Add unit test to verify call to text generation is working as expected or not
- Additional validation on threshold parameter
- Update the detector API with text
- Update HTTP client creation for mTLS
- Add ADR documenting the detector API decisions
- API response field different from v1.0: generated_text (v1.0) --> text_generated (v2.0)
- Warnings and seed fields, not always present in json_response
- v2.0 finish_reason responds with EOS_TOKEN instead of MAX_TOKENS.
- v2.0 not returning tokens
- Add unit test to verify parameter massaging for the detectors
- NLP client not working on tokenize at least
- Add tests for orchestrator response with text generation edge cases
- Failed to deserialize the JSON body into the target type: missing field `models`
- Detected PII word's "start" and "end" are returning the wrong positions