dialogflow / dialogflow-fulfillment-nodejs
Dialogflow agent fulfillment library supporting v1 & v2, 8 platforms, and text, card, image, suggestion, and custom responses.
License: Apache License 2.0
I'm using the samples given here and here.
But I can't make it work. When I deploy my Cloud Function and then test it by typing something in the Actions on Google simulator or the Try it Now section in Dialogflow, I don't get any response.
When I check the Cloud Functions Logs, this is what I get:
Unhandled rejection
Error: No responses defined for platform: undefined
at WebhookClient.send_ (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:428:13)
at promise.then (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:246:38)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
Here's my Cloud Function Code: https://codepen.io/anon/pen/LdMNBJ?editors=0010
I suspect it's throwing an error because I'm using http inside the function, but I'm not sure about that.
I use fulfillment to set up some suggestions.
I'd like to still use the response text messages I defined from the DialogFlow UI, and just add some suggestions using this library.
My code looks like that:
module.exports = (agent) => {
agent.add(new Suggestion(`Order now`));
agent.add(new Suggestion(`Opening hours`));
agent.add(new Suggestion(`Who are you ?`));
}
As described in the documentation, the Dialogflow webhook request transmits what has been defined in the UI in the queryResult.fulfillmentText and queryResult.fulfillmentMessages attributes.
How can I easily tell this library to use those as the base value of the response, and just append what I pass to agent.add()?
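A minimal sketch of one way to do this, assuming you can reach the raw webhook body (for example via the client's underlying request object): pull the console-defined text out of the request yourself and re-add it before the suggestions. The v2 field is shown; v1 carries it in result.fulfillment.speech. `consoleResponseText` is a hypothetical helper, not part of the library.

```javascript
// Pull the text defined in the Dialogflow console out of the raw
// webhook request body (v2 shape), or null if none was defined.
function consoleResponseText(requestBody) {
  const queryResult = requestBody.queryResult || {};
  return queryResult.fulfillmentText || null;
}
```

In the handler this would run before the `Suggestion` calls, e.g. `agent.add(consoleResponseText(body))`, guarded against null.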
Cards don't support multiple buttons. Why is that? Is there a way to add more than one button to a card? Even if I send it as a payload, I hit the same issue.
It's documented as Object[], when it's actually just an Object.
agent.setFollowupEvent is not working for me. I've compared the string passed to the method against the event name in DF several times.
Where do I have to call this method? I tried calling it both inside and outside the handler.
Any solutions?
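For what it's worth, the pattern that generally works is calling it inside the intent handler registered in the intent map, with the event name (not the intent's display name) that is attached to the target intent in the Dialogflow console. A minimal sketch with a hypothetical event name:

```javascript
// Hypothetical handler: 'MY_CUSTOM_EVENT' must match an event
// attached to the target intent in the Dialogflow console.
function handoff(agent) {
  agent.add('One moment...');                // a response is still expected
  agent.setFollowupEvent('MY_CUSTOM_EVENT'); // triggers the target intent
}
```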
Currently the card response only supports one button. Is there a reason for that?
Thanks.
Currently actions-on-google is listed as a regular dependency, which causes problems if you want to use features that only exist in actions-on-google.
For example:
const { SimpleResponse } = require("actions-on-google");
const { WebhookClient } = require("dialogflow-fulfillment");
const client = new WebhookClient({request, response});
client.add(client.conv()
  // These two lines should be equivalent, but aren't
  .ask("Hello, world!")
  .ask(new SimpleResponse("Hello, world!")));
This outputs an invalid response:
{
"contextOut": [
{
"lifespan": 99,
"name": "_actions_on_google",
"parameters": {
"data": "{}"
}
}
],
"data": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Hello, world!"
}
},
{
"textToSpeech": "Hello, world!"
}
]
},
"userStorage": "{\"data\":{}}"
}
}
}
the reason is that AOG does this:
// Line 134 in rich.ts
if (item instanceof SimpleResponse) {
this.items!.push({ simpleResponse: item })
continue
}
...
// Line 154 in rich.ts
this.items!.push(item)
This looks fine, but there are two modules named actions-on-google (node_modules/actions-on-google and node_modules/dialogflow-fulfillment/node_modules/actions-on-google), and DFF uses the latter while the user's code uses the former, so the instanceof check fails.
The easiest solution would be to define actions-on-google as a peer dependency, which of course has a few downsides:
For reference this is the correct/expected output:
{
"contextOut": [
{
"lifespan": 99,
"name": "_actions_on_google",
"parameters": {
"data": "{}"
}
}
],
"data": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Hello, world!"
}
},
{
"simpleResponse": {
"textToSpeech": "Hello, world!"
}
}
]
},
"userStorage": "{\"data\":{}}"
}
}
}
I've attached a sample project: dialogflow-fulfillment-nodejs-bug.zip
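For illustration, the peer-dependency declaration would look something like this in the library's package.json (the version range here is an assumption, not the library's actual constraint):

```json
{
  "peerDependencies": {
    "actions-on-google": "^2.0.0"
  }
}
```

With that, the user's installed copy of actions-on-google is the only one on disk, so instanceof checks see the same class.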
When using a payload with the V2 agent, only the payload gets sent on platforms that support multiple message responses (eg, Facebook).
E.g.:
agent.add(new Text({'text': 'hello', 'platform': 'FACEBOOK'}));
agent.add(new Payload('FACEBOOK', {...}));
will only send the payload, as opposed to both 'hello' and the payload.
Thanks for your work on this; I've found it much easier to use than other AoG/Dialogflow libraries despite it being in early beta.
With the latest update (removing builders), there's a problem with callbacks/promises in handler functions, specifically when doing HTTP requests. I believe it's a combination of the removal of agent.send() and the way HTTP client libraries invoke callbacks (possibly not preserving the 'this' reference as needed?).
Something like this works as expected:
function myFunc (callback) {
// Do something
callback();
}
function anotherFunc() {
agent.add('My Response');
}
myFunc(anotherFunc);
However, if you try this (request library, callback)
request('http://www.google.com', (error, response, body) => {
agent.add('Hello');
});
Or with axios (Promise-based)
axios.get('https://google.com')
.then(response => {
agent.add('My response');
})
.catch (error => {
// do something
})
You get the following error
Error: No responses defined for null
at V1Agent.sendResponse_ (/user_code/node_modules/dialogflow-fulfillment/v1-agent.js:119:13)
at WebhookClient.send_ (/user_code/node_modules/dialogflow-fulfillment/dialogflow-fulfillment.js:225:19)
at promise.then (/user_code/node_modules/dialogflow-fulfillment/dialogflow-fulfillment.js:285:38)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
I've played around with this a bit and I'm not sure exactly what the issue is. It seems related to how these HTTP clients handle callbacks and lose the 'this' reference; both request and axios are very popular HTTP clients. An easy fix would be adding agent.send() back, but I'm not sure whether the plan is to remove it.
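For what it's worth, the workaround that usually fixes this class of error is returning the promise from the handler, so the library can wait for the async work to finish before sending the response (whether handleRequest awaits handler return values depends on the library version, so treat this as a sketch). `fetchGreeting` below is a stand-in for any promise-based HTTP call; axios would slot in the same way:

```javascript
// Stand-in for a promise-based HTTP client call, e.g.
// axios.get(url).then(res => res.data).
function fetchGreeting() {
  return Promise.resolve('Hello');
}

function greetHandler(agent) {
  // Returning the promise is the crucial part: without it, the
  // library may send an (empty) response before agent.add() runs.
  return fetchGreeting().then((greeting) => {
    agent.add(greeting);
  });
}
```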
I was trying to figure out why the originalRequest field wasn't showing up for the longest time, and it turns out that I was using the v0.3.0-beta.2 version, when in fact accessing the original request is in v0.3.0-beta.3. On the NPM website, it shows that beta.2 is the latest version, although #14 seems to suggest that beta.3 was released a while back, and the documentation shows that you can access the original request. It would be nice to be able to install this package without specifying the version number every time, since it's not obvious that just running npm install dialogflow-fulfillment will not install the latest version.
Hello,
Thanks for this great lib.
I wondered why the map given to handleRequest was changed from an action map to an intent map? Maybe, if intent.name doesn't exist in the map, you could fall back to searching for the action name?
Thanks
Hi guys,
agent.clearOutgoingContexts({exclude:['loggedincontext']})
I'm developing an app for Google Home and Google Assistant and when I try to display buttons within a card object they are not rendered for Google Assistant.
I think I found a few typos which prevent the rendering of buttons in the DialogFlow response object:
https://github.com/dialogflow/dialogflow-fulfillment-nodejs/blob/master/response-builder.js#L511
Current code (note that buttons is set on response instead of response.basicCard):
if (this.buttonsTitle && this.buttonUrl) {
  response.buttons = [{}];
  response.basicCard.buttons[0].title = this.buttonsTitle;
  response.basicCard.buttons[0].openUriAction.uri = this.buttonUrl;
}
Suggested fix:
if (this.buttonText && this.buttonUrl) {
  response.basicCard.buttons = [{}];
  response.basicCard.buttons[0].title = this.buttonText;
  response.basicCard.buttons[0].openUriAction = { uri: this.buttonUrl };
}
For some dialogs it is necessary to set phrase_hints for the audio input, for example to be able to understand certain (foreign) product names.
When using the Dialogflow API directly from code, it is possible to add these phrase hints when starting the StreamingDetectIntentRequest, but when Google Assistant is used I cannot find a way to provide the phrase hints.
Could a feature be made to add phrase hints to the listening request?
One way would be explicitly in fulfillment code; another could be by adding a parameter to the context.
Entities could be (mis)used to provide a translatable list of phrase hints.
So, for example, a context parameter phrase_hints is added which could directly contain a list of phrase hints, or a single entity name or a list of entity names containing the phrase hints, i.e. @product_name_phrase_hints.
Addendum:
It seems speechBiasingHints is used to add hints for the recognizer, but it doesn't seem to work (yet): the words provided in the entities are not recognized. The entity names are passed in the request like this:
"speechBiasingHints":["$box","$taste"]} (from the Action on Google log)
$box and $taste are entities in my agent.
How can I tell Dialogflow to not expect a response from the user (close the mic)?
In your samples I saw only the use of agent.add() with phrases that expect always a response.
Something like ask/tell (now ask/close) in actions-on-google.
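A sketch of one way to do this on Actions on Google, assuming agent.conv() returns the underlying conv object on that platform (and null elsewhere):

```javascript
// conv.close() is the tell-style call: it sends the response and
// closes the mic instead of waiting for user input.
function goodbye(agent) {
  const conv = agent.conv(); // null on non-AoG platforms
  if (conv) {
    conv.close('Goodbye!');
    agent.add(conv);
  } else {
    agent.add('Goodbye!');
  }
}
```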
Hi!
For me this library is great because it automatically "translates" rich responses to the format that individual platforms require.
I would like to extend this library to also support Carousels.
Am I correct in assuming that all I have to do is create a new type of RichResponse and change the getV1ResponseObject_ and getV2ResponseObject_ methods? Or do you not want to add features that only work for certain platforms and not all 14 supported ones?
Will it be possible in the future to change the title of Suggestions for the platforms that require it (from "choose one item" to something custom), and also to edit the callback of the replies?
hi team,
When I use fulfillment version "dialogflow-fulfillment": "^0.3.0-beta.2", I have trouble with the request. I get this log:
Error: No responses defined for platform: undefined
at WebhookClient.send_ (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:428:13)
at promise.then (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:246:38)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
when calling e.g.
function fallback(agent) {
agent.add(`I didn't understand YO`);
agent.add(`I'm sorry, can you try again?`);
}
It runs perfectly with only one entry, like:
function fallback(agent) {
agent.add(`I didn't understand YO`);
// agent.add(`I'm sorry, can you try again?`);
}
using node v9.10.1
When I try to instantiate a new ImageResponse, nodejs raises this error: ImageResponse is not a constructor.
What I simply did:
const {ImageResponse} = require('dialogflow-fulfillment');
...
function testHandler(agent){
let imageResponse = new ImageResponse('https://example.com/placeholder.png'); // error
imageResponse.setImage('https://assistant.google.com/static/images/molecule/Molecule-Formation-stop.png');
agent.add(imageResponse)
}
Am I doing something wrong?
I'm running dialogflow-fulfillment at 0.3.0-beta.2.
It happens in the sendResponse_ method: buildResponseMessages_() returns null because of the undefined platform. The platform name is obtained by parsing request.body.originalDetectIntentRequest, but when using the "Try it now" functionality, nothing about the platform is sent.
The platform should be called something like "playground" or "dialogflow".
I believe the WebhookClient class has a bug.
when I do this,
let agent = new WebhookClient({request:{body: requestBody}, response:{}});
let ctx = {'name': 'custom_name', 'lifespan': 2, 'parameters': {'city': 'Rome'}};
agent.setContext(ctx);
let res = agent.getContext('custom_name');
console.log(res); // prints null
When I looked into the source code, getContext() was using 'this.contexts', while setContext() was using 'this.outgoingContexts_'
I'd be glad to make the change and submit a pull request if you'd like.
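Until that's fixed, a hypothetical workaround is to search both context lists yourself; note that outgoingContexts_ is a private field, so this is fragile by design and the field names are assumptions from reading the source:

```javascript
// Look a context up by name in the incoming contexts first, then in
// the outgoing contexts set during this request.
function findContext(agent, name) {
  const incoming = agent.contexts || [];
  const outgoing = agent.outgoingContexts_ || [];
  return (
    incoming.find((ctx) => ctx.name === name) ||
    outgoing.find((ctx) => ctx.name === name) ||
    null
  );
}
```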
If you follow the steps here (Option 2) and run the application in simulator you get the error below
MalformedResponse
expected_inputs[0].input_prompt.rich_initial_prompt: 'item[1]' must not be empty.
Edit: Option 1 to option 2
Hi, in Actions on Google library this was possible:
app.ask(app.buildRichResponse()
  .addSimpleResponse('Example')
  .addSuggestionLink('Suggestion Link', 'https://assistant.google.com/') // After clicking, it would open the browser
);
I couldn't find any way to do this here. Did I miss something, or is it just not supported?
Thanks for reply.
According to the docs, the WebhookClient constructor needs Express HTTP request and response objects. However, in a Lambda function I receive only the event (the request). How do I create the Express request and response objects?
I have tried this so far:
const {WebhookClient} = require('dialogflow-fulfillment');
exports.dialogflowFulfillment = async (event) => {
let response = {};
const agent = new WebhookClient({ event, response });
function sayNiceThings(agent) {
agent.add(`Nice to meet you!`);
}
let intentMap = new Map();
intentMap.set('Say Nice Things', sayNiceThings);
agent.handleRequest(intentMap);
};
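One approach, sketched under the assumption that WebhookClient only touches request.body plus a couple of Express-style response methods, is to hand it small shims built from the Lambda event. `expressShims` is a hypothetical helper; the event field names depend on your trigger (API Gateway proxy events carry a JSON string in event.body):

```javascript
// Build minimal Express-like request/response objects from a Lambda
// proxy event; the captured payload is what the Lambda handler would
// ultimately return.
function expressShims(event) {
  const request = {
    body: typeof event.body === 'string' ? JSON.parse(event.body) : event.body,
    headers: event.headers || {},
  };
  let captured = null;
  const response = {
    statusCode: 200,
    status(code) { this.statusCode = code; return this; }, // chainable, like Express
    send(payload) { captured = payload; return this; },
    json(payload) { captured = payload; return this; },
    result() { return captured; },
  };
  return { request, response };
}
```

Then `const { request, response } = expressShims(event);` feeds `new WebhookClient({ request, response })`, and `response.result()` is returned once handleRequest resolves.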
There doesn't appear to be any way to get the platform-specific payload from the request. This would be the portion of the request from originalRequest.platformName in v1 and originalDetectIntentRequest.platformName in v2. This is necessary to get some platform-specific information, such as the user object from AoG.
If I use your dependency of actions-on-google (2.0.0-alpha4) when using SimpleResponse, I get the following error:
TypeError: dialogflow_fulfillment_1.SimpleResponse is not a constructor
If instead I change your dependency to version 2.0.1, everything works just fine.
My code is as follows:
import { WebhookClient, SimpleResponse } from 'dialogflow-fulfillment'
let result = //My code to get a result string
if (app.requestSource === app.ACTIONS_ON_GOOGLE) {
let conv = app.conv()
conv.close(new SimpleResponse({
speech: `<speak>
<prosody rate="100%" pitch="-2st">
My name is <prosody rate="slow">Wonder Woman</prosody>.
</prosody>
</speak>`, text: result
}))
app.add(conv)
}
else
app.add(result)
Would you like a PR, or is this issue enough?
I'm not sure if this is intended or not. In the current design and implementation, I can only set outgoing contexts; I'm not allowed to cancel or delete contexts in the webhook via this DF fulfillment SDK. Also, I noticed that the DF agent merges the outgoing contexts set via the webhook with all incoming contexts into the outgoing contexts of an intent. This means the only way to cancel an existing context is via the web interface, inside the intent configuration, by setting its lifespan to 0 in the outgoing contexts.
Again, I'm wondering whether this is intended. It would be nice to be able to manage contexts via the webhook too, especially those initiated by developers in the webhook via, for example, setContext().
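For reference, the same lifespan-0 trick should be usable from the webhook itself, assuming setContext() simply overwrites the outgoing entry for that name (a sketch, not a documented API guarantee):

```javascript
// Hypothetical helper: re-set the context with lifespan 0, the same
// way the console UI deletes a context.
function deleteContext(agent, name) {
  agent.setContext({ name, lifespan: 0, parameters: {} });
}
```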
Thanks.
I'm trying to send a response message back with a custom payload to the LINE API (reference for the text-message payload: https://developers.line.me/en/docs/messaging-api/reference/#text-message)
const payloadJson = {
"type": "text",
"text": "Hello LINE Messenger"
};
let payload = new Payload(agent.LINE, {});
payload.setPayload(payloadJson);
agent.add(payload);
but when I use agent.add('Hello LINE Messenger');
it works. Is there something I'm missing in the JSON payload? Thanks for your help.
I clicked the link on npm but got a 404. Please fix the link from
https://github.com/dialogflow/dialogflow-fulfillment
to
https://github.com/dialogflow/dialogflow-fulfillment-nodejs
😄
Most of the examples I've seen of returning rich content via webhooks involve just basic responses and Cards.
What structure is needed in a Dialogflow webhook v2 response to return either a List or a Carousel?
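As far as I can tell from the Actions on Google docs, a carousel doesn't go in richResponse items at all; it rides in systemIntent with an actions.intent.OPTION value spec. A sketch of the google custom payload (all values illustrative):

```javascript
// v2 custom-payload shape for an AoG carousel: the visible prompt
// lives in richResponse, the carousel itself in systemIntent.
const carouselPayload = {
  google: {
    expectUserResponse: true,
    richResponse: {
      items: [{ simpleResponse: { textToSpeech: 'Pick one:' } }],
    },
    systemIntent: {
      intent: 'actions.intent.OPTION',
      data: {
        '@type': 'type.googleapis.com/google.actions.v2.OptionValueSpec',
        carouselSelect: {
          items: [
            { optionInfo: { key: 'first' }, title: 'First item' },
            { optionInfo: { key: 'second' }, title: 'Second item' },
          ],
        },
      },
    },
  },
};
```

A List uses the same shape with listSelect in place of carouselSelect.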
I'm a chatbot developer and have been working with Dialogflow for the last year. Today I got some time to dig into Dialogflow v2 and found this library, so my question is: is this library official or third party?
I'm asking because there are several approaches to building a fulfillment webhook:
Should I learn to use all of them? Can anyone do a comparison of these libraries, and say which of them will get longer and more stable support?
According to the documentation, the action property should be null if the action is not set, but in reality it's set to default when using the v2 API.
Can someone help me make this sample work, or point out where I'm going wrong? I can't figure it out myself after lots of tries.
I think this final call:
agent.handleRequest(intentMap)
is failing somewhere.
let payload ={
"expectUserResponse": true,
"richResponse": {
"items": [
{
"intent": "actions.intent.PERMISSION",
"inputValueData": {
"@type": "type.googleapis.com/google.actions.v2.PermissionValueSpec",
"optContext": "To address you by name and know your location",
"permissions": [
"NAME",
"DEVICE_PRECISE_LOCATION"
]
}
}
]
},
"userStorage": "{\"data\":{}}"
};
agent.add(new Payload(PLATFORMS.ACTIONS_ON_GOOGLE, payload));
I have this code and I get this error:
TypeError: Cannot read property 'ask' of null
'use strict';
const express = require("express");
const bodyParser = require("body-parser");
const { WebhookClient } = require('dialogflow-fulfillment');
const { Card, Suggestion } = require('dialogflow-fulfillment');
const { Carousel } = require('actions-on-google');
process.env.DEBUG = 'dialogflow:debug'; // enables lib debugging statements
const restService = express();
const request = require('request-promise-native');
const app = actionssdk();
restService.use(
bodyParser.urlencoded({
extended: true
})
);
restService.use(bodyParser.json());
restService.post("/echo", function(req, res) {
const imageUrl = 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png';
const imageUrl2 = 'https://lh3.googleusercontent.com/Nu3a6F80WfixUqf_ec_vgXy_c0-0r4VLJRXjVFF_X_CIilEu8B9fT35qyTEj_PEsKw';
const linkUrl = 'https://assistant.google.com/';
const agent = new WebhookClient({ request: req, response: res });
console.log('Dialogflow Request headers: ' + JSON.stringify(request.headers));
console.log('Dialogflow Request body: ' + JSON.stringify(request.body));
let intentMap = new Map();
intentMap.set('Default Welcome Intent', welcome);
intentMap.set('Default Fallback Intent', fallback);
agent.handleRequest(intentMap);
});
function welcome(agent) {
let conv = agent.conv();
// Use Actions on Google library to add responses
conv.ask('Please choose an item:')
conv.ask(new Carousel({
title: 'Google Assistant',
items: {
'WorksWithGoogleAssistantItemKey': {
title: 'Works With the Google Assistant',
description: 'If you see this logo, you know it will work with the Google Assistant.',
image: {
url: imageUrl,
accessibilityText: 'Works With the Google Assistant logo',
},
},
'GoogleHomeItemKey': {
title: 'Google Home',
description: 'Google Home is a powerful speaker and voice Assistant.',
image: {
url: imageUrl2,
accessibilityText: 'Google Home',
},
},
},
}))
// Add Actions on Google library responses to your agent's response
agent.add(conv);
}
function fallback(agent) {
let conv = agent.conv();
// Use Actions on Google library to add responses
conv.ask('Please choose an item:')
// Add Actions on Google library responses to your agent's response
agent.add(conv);
}
restService.listen(process.env.PORT || 5001, function() {
console.log("Server up and listening :) !");
});
Is this ever going to be included, or has this been deprecated? (I know there is a lot of flux right now with the Account Activity API)
I'm trying out the custom payload responses for Facebook, but so far it's a dud. I pulled a response from the messenger examples.
let id = request.body.originalRequest.data.sender.id
var messageData = {
recipient: {id},
message: {
attachment: {
type: 'template',
payload: {
template_type: 'button',
text: 'This is test text',
buttons: [{
type: 'web_url',
url: 'https://www.facebook.com/fbcameraeffects/tryit/168878713921569/',
title: 'Open Camera Effect'
}, {
type: 'postback',
title: 'Trigger Postback',
payload: 'DEVELOPER_DEFINED_PAYLOAD'
}, {
type: 'phone_number',
title: 'Call Phone Number',
payload: '+16505551234'
}]
}
}
}
}
let payload = new Payload(agent.FACEBOOK, messageData)
agent.add(payload)
Is there something I'm missing? The webhook execution finishes, but nothing shows up in messenger.
Actions on Google's BasicCard class doesn't require title (e.g. when only adding formattedText), but the CardResponse constructor does:
In my opinion, title for the CardResponse class should either be optional, or required only for the respective integration platforms that require it.
Is the module designed to be run on Google Cloud Functions?
I wonder why the module requires the node engine to be ~8.0, while the Node version of (publicly available) Google Cloud Functions is v6.11.5, if I understand correctly?
https://cloud.google.com/functions/docs/writing/
MalformedResponse
'final_response' must be set.
const conv = agent.conv();
conv.ask('Test string');
agent.add(conv)
The solution to this is conv.close(TEXT), but that doesn't work with the following:
let conv = agent.conv();
conv.ask('Sample Title');
conv.ask('Sample Next Line');
conv.close(new Suggestions('Lets Go!!', 'Awesome!!'));
const conv = agent.conv();
conv.ask(new Suggestions([`Let's Go !!`, `Not Interested !!`, `Experimenting`]));
OR
conv.close(new Suggestions([`Let's Go !!`, `Not Interested !!`, `Experimenting`]));
agent.add(conv)
if (agent.requestSource === PLATFORMS.ACTIONS_ON_GOOGLE) {
let payload = {
"conversationToken": "",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Howdy! I can tell you fun facts about almost any number like 0, 42, or 100. What number do you have in mind?",
"displayText": "Howdy! I can tell you fun facts about almost any number. What number do you have in mind?"
}
}
],
"suggestions": [
{
"title": "0"
},
{
"title": "42"
},
{
"title": "100"
},
{
"title": "Never mind"
}
],
"linkOutSuggestion": {
"destinationName": "Suggestion Link",
"url": "https://assistant.google.com/"
}
}
},
"possibleIntents": [
{
"intent": "actions.intent.TEXT"
}
]
}
]
};
agent.add(new Payload(PLATFORMS.ACTIONS_ON_GOOGLE, payload));
Below is the response for the third case, which was the last option to make things work, but it's still not working:
{
"responseMetadata": {
"status": {
"code": 10,
"message": "Failed to parse Dialogflow response into AppResponse because of empty speech response",
"details": [
{
"@type": "type.googleapis.com/google.protobuf.Value",
"value": "{\"id\":\"a85c67b2-a6eb-4d99-9aa2-941978939278\",\"timestamp\":\"2018-05-09T08:05:43.624Z\",\"lang\":\"en-us\",\"result\":{},\"status\":{\"code\":200,\"errorType\":\"success\"},\"sessionId\":\"1525853143208\"}"
}
]
}
}
}
It would be nice if the actionIncomplete property was added, preferably to the WebhookClient. This is needed when "Use webhook for slot-filling" is checked on certain intents. Right now the only way to access it is directly from the request JSON: req.body.result.actionIncomplete.
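Until the library exposes it, a hypothetical one-liner around the raw v1 request body:

```javascript
// True while Dialogflow is still slot-filling (v1 request shape).
function isSlotFilling(requestBody) {
  return Boolean(requestBody.result && requestBody.result.actionIncomplete);
}
```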
The Use Case
I have designed a poem server which responds to requests from DialogFlow.
When the webhook is fired, it sends a simpleResponse containing the poem audio & text and then the pre-canned responses prepared in the intent by Dialogflow.
These pre-canned responses are sent in the request and are accessible in the field request.body.result.fulfillment.
My suggestion
add the field fulfillment to the agent_v1 object
How ?
In file v1-agent.js, add the following lines to the processRequest_() function:
processRequest_() {
...
/**
* Dialogflow fulfillment included in the request or null if no value
* https://dialogflow.com/docs/fulfillment
* @type {string}
*/
this.agent.fulfillment = this.agent.request_.body.result.fulfillment;
debug(`Input fulfillment: ${JSON.stringify(this.agent.fulfillment)}`);
...
Tests?
I haven't managed to figure out where to add the unit tests associated with this modification.
All I can say is that with this patch it works like a charm, and I can access the agent.fulfillment information in my code.
agent-v2 equivalent?
I haven't looked in detail yet at which modification is required, but I'll have a look later when I use version 2.
Next steps?
If you agree, I can send a pull request for this modification.
Looking forward to receiving feedback on this issue and congratulations for this great module!
error [email protected]: The engine "node" is incompatible with this module. Expected version "~8.0".
I can use the package directly from the git repository via:
yarn add https://github.com/dialogflow/dialogflow-fulfillment-nodejs
but
yarn add dialogflow-fulfillment
throws the above error.
Hello,
I can't use request-promise or request with actions-on-google like this, and I'm searching for a solution.
Best Regards
"use strict";
const express = require("express");
const bodyParser = require("body-parser");
const rp = require('request-promise');
const { ActionsSdkApp } = require('actions-on-google');
const {WebhookClient} = require('dialogflow-fulfillment');
const {Card, Suggestion} = require('dialogflow-fulfillment');
const restService = express();
restService.use(
bodyParser.urlencoded({
extended: true
})
);
restService.use(bodyParser.json());
restService.post("/echo", function(req, res) {
//console.log('nouvelle requete');
//console.log(req.body);
const app = new ActionsSdkApp({ request: req, response: res });
const agent = new WebhookClient({ request: req, response: res });
const WELCOME_INTENT = 'input.welcome';
let intentMap = new Map();
intentMap.set('Default Welcome Intent', welcome);
intentMap.set('Default Fallback Intent', fallback);
intentMap.set('Echo', echo);
// intentMap.set('<INTENT_NAME_HERE>', yourFunctionHandler);
// intentMap.set('<INTENT_NAME_HERE>', googleAssistantHandler);
agent.handleRequest(intentMap);
});
function echo (agent) {
console.log('je fais un log ici');
console.log(agent.query);
var speech = agent.query;
var options = {
uri: 'https://www.rjweb.xyz/restbot/public/expression/' + speech,
json: true
};
rp(options).then(function(result) {
//console.log(result);
console.log('au dessus');
if (result.length == 1) {
console.log('1 resultat');
} else if (result.length > 1) {
console.log(' multiple resultat');
}
else {
console.log('aucun resultat');
//message.addReply({ type: 'text', content: 'Désolé mais je n\'ai trouvé aucun résultat, tu peux réessayer avec un autre mot, phrase ou expression.' });
agent.add(`Désolé mais je n\'ai trouvé aucun résultat, tu peux réessayer avec un autre mot, phrase ou expression.`);
}
});
}
function welcome (agent) {
  agent.add(`Bonjour ou Welcome agent`);
  agent.add(new Card({
      title: `Title: this is a card title`,
      imageUrl: 'https://dialogflow.com/images/api_home_laptop.svg',
      text: `This is the body text of a card. You can even use line\n breaks and emoji! 💁`,
      buttonText: 'This is a button',
      buttonUrl: 'https://docs.dialogflow.com/'
    })
  );
  agent.add(new Suggestion(`Quick Reply`));
  agent.add(new Suggestion(`Suggestion`));
  agent.setContext({ name: 'weather', lifespan: 2, parameters: { city: 'Rome' }});
}
function fallback (agent) {
  agent.add(`I didn't understand`);
  agent.add(`I'm sorry, can you try again?`);
}
restService.listen(process.env.PORT || 5001, function() {
console.log("Server up and listening");
});
Hi,
My webhook is returning the response below:
{
"conversationToken": "[\"employee\"]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "Welcome to Smart Pension. You can ask a question like, what's my summary...",
"displayText": "Welcome to Smart Pension. You can ask a question like, what's my summary..."
}
},
{
"basicCard": {
"title": "Title: this is a card title",
"formattedText": "This is the body text of a card. You can even use line\n breaks and emoji! 💁",
"image": {
"url": "https://s3-eu-west-1.amazonaws.com/mediasp/google-home/images/google-app-blue-1280-720.png",
"accessibilityText": "accessibility text"
},
"buttons": [
{
"title": "This is a button",
"openUrlAction": {
"url": "https://my.autoenrolment.co.uk"
}
}
]
}
},
{
"simpleResponse": {
"textToSpeech": "Now, what can I help you with?",
"displayText": "Now, what can I help you with?"
}
}
],
"suggestions": [
{
"title": "My personal information"
},
{
"title": "My summary"
}
]
}
},
"possibleIntents": [
{
"intent": "assistant.intent.action.TEXT"
}
]
}
],
"responseMetadata": {
"status": {},
"queryMatchInfo": {
"queryMatched": true,
"intent": "2b8d59af-3daf-4f02-8b3e-848f67b7accc"
}
}
}
but I cannot see the card since I upgraded to the beta version.
this is the code I'm using to create the response:
agent.add(`Welcome to Smart Pension. You can ask a question like, what's my summary...`);
agent.add(new Card({
title: `Title: this is a card title`,
imageUrl: smartPensionLogo,
text: `This is the body text of a card. You can even use line\n breaks and emoji! 💁`,
buttonText: 'This is a button',
buttonUrl: smartPensionMyUrl
})
);
agent.add(`Now, what can I help you with?`);
agent.add(new Suggestion(`My personal information`));
agent.add(new Suggestion(`My summary`));
Could you help me to solve this issue?
The end of the line should be a semicolon (;), not a comma (,).
Update actions-on-google to stable release 2.0.0 and debug to version 3.1.0.
Issue: There's no way to set the accessibility text for images in ImageResponse and CardResponse objects. Accessibility text is required for images on AoG; currently the filler string 'accessibility text' is used.
This should be a pretty straightforward addition, just running it by you to get an OK - I do have one main question. Should accessibilityText be:
I guess it's more product-related than an implementation question, and from using DialogFlow/AoG I know accessibility text must be set any time there's an image... so I think this library should reflect that?
Anyways, brief outline of the changes that would be made:
ImageResponse Object:
Add a method setAccessibilityText(accessibilityText).
Modify constructor, if an object is passed in, look for an accessibilityText key.
Update getV1/V2Object methods appropriately for AoG platform
CardResponse Object:
This one is a bit trickier; I don't think it makes sense to add another method setImageAccessibilityText(), since accessibilityText is only used for AoG and depends on whether you have an image in your card in the first place.
I think the better approach is to allow the user to pass in an object in setImage(). They can specify the accessibility text there. getV1/V2Object methods would be updated appropriately for AoG platform.
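A sketch of the normalization setImage() could do under that proposal (the object shape is the proposal's, not the current API; the filler string matches the library's current behavior):

```javascript
// Accept either a bare URL string (current behaviour, filler a11y
// text) or an object carrying its own accessibilityText.
function normalizeImage(image) {
  if (typeof image === 'string') {
    return { url: image, accessibilityText: 'accessibility text' };
  }
  return {
    url: image.url,
    accessibilityText: image.accessibilityText || 'accessibility text',
  };
}
```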
TypeError: Cannot read property 'source' of undefined
at V2Agent.processRequest_ (/user_code/node_modules/dialogflow-fulfillment/src/v2-agent.js:108:86)
at new WebhookClient (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:193:17)
at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/user_code/index.js:38:17)
at cloudFunction (/user_code/node_modules/firebase-functions/lib/providers/https.js:26:47)
at /var/tmp/worker/worker.js:676:7
at /var/tmp/worker/worker.js:660:9
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)