
assistant-sdk-nodejs's People

Contributors

fleker, manavm1990


assistant-sdk-nodejs's Issues

Search questions returning 'undefined'

Hi

I followed all the steps and confirmed it works/communicates by querying "Hello". It also works for definitions/information responses: "What is ...?"

But any 'complex' search query which DOES return results on the Assistant app or a device returns 'undefined'.

For our client's privacy I have omitted their name, but here are a few examples:

"What is ...?" -- text: "... is a blah blah blah (from Wikipedia)"
"When does ... start?" -- undefined
"Can you take dogs on ..." -- undefined
"How long is the ...?" -- undefined

All of the queries above that return undefined here are answered correctly in the app's follow-up or when asked on a Home device.

Is this just not possible with this SDK?

Assistant SDK no longer responds with text response

As per the code example, the code below no longer responds with the answer from Google Assistant. Audio responses are still provided; however, the text response is always blank.

I have asked questions such as:

  • What time is it?
  • What's the weather in London?
  • Who is the president of the USA?
  • How tall is the Empire State Building?

// `stdio` and `assistant` come from the sample's earlier setup (omitted here).
const promptUser = () => {
    stdio.question('> ', (err, prompt) => {
        assistant.assist(prompt)
            .then(({ text }) => {
                console.log(text); // Should log the answer, but is now always blank
                promptUser();
            });
    });
};

From my own investigation, I found that I can replicate this issue by passing PLAYING as the screen mode in the SDK. If I set the screen mode to OFF, then it responds with the text.
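
For anyone wanting to check this themselves, here is a minimal sketch of where the screen mode sits in the AssistConfig (field names per the v1alpha2 embeddedassistant proto; the exact wiring in each SDK wrapper differs, so treat this as an assumption, not this repo's code):

// Hypothetical config fragment, not taken from this repo's sample.
const config = {
    audio_out_config: { encoding: 'LINEAR16', sample_rate_hertz: 16000, volume_percentage: 100 },
    screen_out_config: { screen_mode: 'OFF' }, // 'PLAYING' reportedly blanks the text response
    device_config: { device_id: 'default', device_model_id: 'default' },
};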

Whilst these are only my own results, I also found that this example repo, where the screen mode isn't passed at all, still does not respond with the text.

I have also read that some users pass OFF as the screen mode but still see this behaviour, as reported in:
endoplasmic/google-assistant#81

It seems that this issue actually began around the end of last year.

voice input example?

Hi, I'm trying to implement this with voice input requests, but the current sample only supports text input. Do I configure the config to include audio_in and remove the 'delete request.audio_in' and 'text_query' lines to enable audio request mode?

I have searched online but can't seem to find any documentation on how to do this. Any help would be much appreciated! Thanks!
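
For anyone else looking, a rough sketch of audio-in mode against the gRPC API follows. The config field names come from the v1alpha2 embeddedassistant proto, `assistant` is assumed to be the duplex-streaming gRPC client this sample already creates, and `micStream` is a placeholder, so treat this as a hedged sketch rather than tested code:

// The first message carries only the config; audio_in_config replaces text_query.
const conversation = assistant.assist();
conversation.write({
    config: {
        audio_in_config: { encoding: 'LINEAR16', sample_rate_hertz: 16000 },
        audio_out_config: { encoding: 'LINEAR16', sample_rate_hertz: 16000, volume_percentage: 100 },
        device_config: { device_id: 'default', device_model_id: 'default' },
    },
});

// Every later message carries a chunk of raw 16-bit PCM audio.
// `micStream` is a placeholder for any Readable producing LINEAR16 samples.
micStream.on('data', (chunk) => conversation.write({ audio_in: chunk }));
micStream.on('end', () => conversation.end());

conversation.on('data', (response) => {
    if (response.speech_results && response.speech_results.length) {
        console.log('Heard:', response.speech_results[0].transcript);
    }
});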

node googleassistant.js gives an error Auth error:Error: No access, refresh token or API key is set.

Hi,

While trying to run node googleassistant.js, I always get the following error.

soumya@Soumya-MacBook-Pro google-assistant-grpc % node googleassistant.js
> : (node:30841) DeprecationWarning: grpc.load: Use the @grpc/proto-loader module with grpc.loadPackageDefinition instead
(Use `node --trace-deprecation ...` to show where the warning was created)

Auth error:Error: No access, refresh token or API key is set.
(node:30841) UnhandledPromiseRejectionWarning: Error: 14 UNAVAILABLE: Getting metadata from plugin failed with error: No access, refresh token or API key is set.
    at Object.exports.createStatusError (/Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/common.js:91:15)
    at ClientDuplexStream._emitStatusIfDone (/Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/client.js:233:26)
    at ClientDuplexStream._receiveStatus (/Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/client.js:211:8)
    at Object.onReceiveStatus (/Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/client_interceptors.js:1311:15)
    at InterceptingListener._callNext (/Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/client_interceptors.js:568:42)
    at InterceptingListener.onReceiveStatus (/Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/client_interceptors.js:618:8)
    at /Users/soumya/Development/Node/Demo/google-assistant-grpc/node_modules/grpc/src/client_interceptors.js:1127:18
(node:30841) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:30841) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

I have been unable to find a solution. I downloaded the JSON file and put it in the same folder as googleassistant.js:

const deviceCredentials = require("/Users/soumya/Development/Node/Demo/google-assistant-grpc/credentials.json");

Any assistance in resolving this would be appreciated.
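
A common cause of this error is pointing the code at the OAuth client secret JSON downloaded from the Cloud Console rather than the device credentials JSON produced by google-oauthlib-tool (which contains client_id, client_secret, and refresh_token). A quick hedged sanity check using google-auth-library (the file path is the one from this report; adjust as needed):

// Hypothetical check: does the JSON actually hold refresh-token credentials?
const { UserRefreshClient } = require('google-auth-library');
const creds = require('/Users/soumya/Development/Node/Demo/google-assistant-grpc/credentials.json');

const client = new UserRefreshClient(creds.client_id, creds.client_secret, creds.refresh_token);
client.getAccessToken()
    .then(({ token }) => console.log('Refresh OK, got token:', Boolean(token)))
    .catch((err) => console.error('Refresh failed:', err.message));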

IONIC APP

Hey, how can I use the SDK in a hybrid app, like an Ionic application?

Erratic response to built-in traits

I have defined a device with all traits except temperature enabled. When testing, I receive erratic responses for built-in traits. I have defined two custom traits that always work. For example:

: start
{}
: start
{}
: start
{}
: go to dock
{}
: start
{
  "deviceAction": {
    "inputs": [
      {
        "context": {
          "locale_language": "en"
        },
        "intent": "action.devices.EXECUTE",
        "payload": {
          "commands": [
            {
              "devices": [
                {
                  "id": "xxxxxxxxxxx"
                }
              ],
              "execution": [
                {
                  "command": "action.devices.commands.StartStop",
                  "params": {
                    "start": true
                  }
                }
              ]
            }
          ]
        }
      }
    ],
    "requestId": "xxxxxxxxxxxx"
  }
}
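
When a response does arrive, the payload above can be walked like this (a sketch based only on the JSON shape shown here; `response` stands for one data event from the assist stream):

const action = JSON.parse(response.device_action.device_request_json);
for (const input of action.inputs || []) {
    if (input.intent !== 'action.devices.EXECUTE') continue;
    for (const command of input.payload.commands) {
        for (const exec of command.execution) {
            // e.g. 'action.devices.commands.StartStop' with params { start: true }
            console.log(exec.command, exec.params);
        }
    }
}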
Here's the device model definition:

"deviceModels": [
{
"deviceModelId": "xxxxxxx-yyyyyyy",
"projectId": "xxxxxxxxxxxxxxx",
"manifest": {
"manufacturer": "xxx",
"productName": "xxx"
},
"name": "projects/xxxxxxxxxxxxxxx",
"deviceType": "action.devices.types.LIGHT",
"traits": [
"action.devices.traits.Brightness",
"action.devices.traits.ColorSpectrum",
"action.devices.traits.ColorTemperature",
"action.devices.traits.Dock",
"action.devices.traits.OnOff",
"action.devices.traits.StartStop"
],
"executionModes": [
"DIRECT_RESPONSE"
],
"lastUpdatedTime": "2018-10-29T21:29:16.591690742Z"
}
]

Similar behavior happens with turn on/off and go to dock. Color setting (e.g. "set the light to red") never seems to work.

Cannot use google-oauthlib-tool

When I run this in a Python environment:

env/bin/google-oauthlib-tool --client-secrets credentials.json \
  --credentials devicecredentials.json \
  --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
  --save

I get this:
Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=324326529759-7v7i5qhjmbnniqtq5hdfflavaqe9626e.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fassistant-sdk-prototype&state=HmHLOq6phamDdQF7y2TaRf3swSHtJf&access_type=offline

When I go to this link and log in to my account, I am redirected to localhost and get the error:

ERR_CONNECTION_REFUSED
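
One likely cause: the tool spins up a local redirect server on port 8080, so if it runs on a remote or headless machine (or the port is blocked), the browser redirect has nothing to connect to. The tool's headless mode sidesteps the redirect server by having you paste an authorization code instead; this is a suggestion, not something the reporter tried:

env/bin/google-oauthlib-tool --client-secrets credentials.json \
  --credentials devicecredentials.json \
  --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
  --save --headless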

Using service account loses device trait actions

I was getting ready to allow other people to test my implementation when I realized that I was using my own account to grant permission to my device. So I did it all over again, this time using a service account. I had to upgrade to google-auth-library 2.0, but I eventually got it all to work EXCEPT for device actions. It ignores the built-in traits like dock, start/stop, and on/off, and for 'color red' it says it needs permission. Any ideas? Here's my Actions Console screen:
Linked device models: TPR Series
Product name: TPR Series
Manufacturer name: OhmniLabs
Model Id: ohmnilabstpr-tpr-series-9xw6v5
Device type: Light
Supported traits: Brightness, ColorSpectrum, Dock, OnOff, StartStop

Error: 14 UNAVAILABLE: Getting metadata from plugin failed with error: invalid_client

In the project, credentials are retrieved from the home directory as follows:

const homedir = require('homedir');
const deviceCredentials = require(`${homedir()}/.config/google-oauthlib-tool/credentials.json`);

What I am trying to do is populate the deviceCredentials object in the code itself:

const deviceCredentials = {
  client_secret: 'xxxxxxxxxxx',
  refresh_token: 'xxxxxxxxxxx',
  scopes: ['https://www.googleapis.com/auth/assistant-sdk-prototype'],
  client_id: 'xxxxxxxxxxxxxxx',
  token_uri: 'https://accounts.google.com/o/oauth2/token',
};

I am getting the following error:

Auth error:Error: invalid_client
{ Error: 14 UNAVAILABLE: Getting metadata from plugin failed with error: invalid_client
    at Object.exports.createStatusError (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/common.js:87:15)
    at ClientDuplexStream._emitStatusIfDone (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/client.js:235:26)
    at ClientDuplexStream._receiveStatus (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/client.js:213:8)
    at Object.onReceiveStatus (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/client_interceptors.js:1290:15)
    at InterceptingListener._callNext (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/client_interceptors.js:564:42)
    at InterceptingListener.onReceiveStatus (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/client_interceptors.js:614:8)
    at /home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/node_modules/grpc/src/client_interceptors.js:1110:18
  code: 14,
  metadata: Metadata { _internal_repr: {} },
  details:
   'Getting metadata from plugin failed with error: invalid_client' }
Error: Error: 14 UNAVAILABLE: Getting metadata from plugin failed with error: invalid_client
    at getAssistantResponse (/home/akshay/Desktop/Workspace/assistant-sdk/remoteAssistant/server.js:34:11)

Using real device loses all audio

This works fine if I use it (typing "time" as the query) without a real device, namely:

this.deviceModelId = 'default';
this.deviceInstanceId = 'default';

If I fill in a 'real' device that I registered in the Actions Console as a speaker, I get no output -- neither text nor audio -- just a vacuous device_action. This happens in every version of the Assistant SDK I have tried. Do I need to set something on my device definition? You'd think being a speaker would be enough. Attached are two logs showing the output.
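
For reference, these two IDs are what end up in the device_config of the AssistConfig; a minimal hedged sketch (field names per the v1alpha2 proto; the wiring in a given SDK version may differ):

// Hypothetical fragment: where the registered IDs are sent to the API.
config.device_config = {
    device_id: this.deviceInstanceId,      // the instance registered in the Actions Console
    device_model_id: this.deviceModelId,   // the registered device model id
};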

BAD.LOG:

time
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: null,
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: null,
  device_action: { device_request_json: '{"requestId":"5bb3e0d6-0000-2ec6-9f71-94eb2c141a3e"}\n' },
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: null,
  device_action: null,
  speech_results: [],
  dialog_state_out: 
   { supplemental_display_text: '',
     conversation_state: <Buffer 0a 26 43 23 35 62 63 64 61 62 66 63 2d 30 30 30 30 2d 32 36 38 64 2d 62 37 34 39 2d 38 38 33 64 32 34 66 33 61 61 38 63 12 e7 01 4b 6a 38 77 58 32 31 ... >,
     microphone_mode: 'CLOSE_MICROPHONE',
     volume_percentage: 0 } }
undefined

GOOD.LOG:

time
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: null,
  device_action: null,
  speech_results: [],
  dialog_state_out: 
   { supplemental_display_text: '5:45.',
     conversation_state: <Buffer 0a 26 43 23 35 62 63 65 33 34 36 31 2d 30 30 30 30 2d 32 65 31 31 2d 61 65 37 37 2d 38 38 33 64 32 34 66 37 66 65 39 30 12 8f 02 4b 6b 34 77 58 32 31 ... >,
     microphone_mode: 'CLOSE_MICROPHONE',
     volume_percentage: 0 } }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 00 00 00 00 ff ff 00 00 00 00 ff ff ff ff 00 00 ff ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff ff 00 00 00 00 00 00 00 00 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 17 00 e8 ff db ff dc ff cd ff 19 00 24 00 0c 00 e5 ff 81 ff 15 00 0f 00 23 00 3c 00 08 00 bd ff d0 ff 1e 00 dc ff fd ff 38 00 27 00 11 00 43 00 af 00 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 72 0b 38 10 33 14 a0 16 58 19 c8 1b 7b 1d 13 1f 7f 1f a9 1b 03 14 a6 0a af ff 25 f5 78 ee c3 eb ad eb 7e f0 8d f7 d2 ff a5 09 35 11 55 16 fb 19 90 1a ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer ec 28 5f 10 e5 f5 5d e0 51 d4 0f d5 31 de 37 e6 79 ea a0 ed b4 f1 e8 f9 45 07 e1 16 dc 24 0f 30 d3 36 8e 32 6b 22 e9 0d 14 f9 6d ea d1 e6 46 ea 10 f0 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer e5 08 f6 0a 8c 0d cf 11 76 17 6d 1a 82 1a a9 18 9b 14 1c 0f 6f 0a b6 07 67 06 38 05 03 04 6d 02 23 01 78 01 50 04 51 07 84 0a eb 0d c5 0e a7 0a 34 03 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 40 05 30 05 ac 04 30 05 f8 04 57 04 63 04 1b 05 3d 05 f6 05 a1 04 04 04 a1 03 58 03 20 03 5c 02 80 01 59 01 05 01 d4 ff aa ff 50 00 af fe c5 fd 02 fe ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 03 00 e6 ff 2a 00 5a 00 34 00 7d 00 be 00 5f 00 67 00 e6 ff 47 00 77 00 81 00 54 00 18 00 3b 00 4f 00 5f 00 c7 00 6b 00 40 00 8b 00 86 00 4f 00 35 00 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer af 01 f9 00 2c 00 1b ff e7 ff c9 00 1f 01 bf ff 33 00 6a fe 38 fe 5f ff 0b 00 9a 00 a9 01 12 01 47 00 90 ff c9 ff b2 00 0f ff 91 fe 0d 00 6b ff b2 ff ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 8d 37 28 35 2d 30 82 29 35 21 e7 18 95 11 ec 0b 2f 07 a5 03 8b 01 e7 00 9f 00 32 00 bc ff cf fe ea fc ac f9 43 f6 0e f3 68 f0 31 ee be ed e5 ee 93 f1 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer cd 0d c7 0a 20 06 c8 ff bd f8 96 f2 d5 ee 3b ee 2c ef 81 f0 af f1 c1 f2 ae f3 5a f5 45 f8 79 fc 90 01 8c 06 23 0b e3 0f ba 14 91 18 64 1b 59 1d 55 1e ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 49 f6 5d f4 d2 f4 45 f7 39 fa 58 fb 2f fb ea fb 48 fd 95 ff fb 01 0f 02 e1 00 95 00 44 00 1f ff 38 fd 6c fa 4b f5 38 ee 10 e9 fb e9 b5 ee 8a ed 23 e4 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 97 13 3f 17 22 1b 52 1e d2 1f e8 1f 0e 1f e7 1c 7a 1a 33 18 70 16 7a 15 c3 12 60 0d 09 09 8e 07 e9 05 15 04 f8 01 20 ff 25 fd 06 fd f0 fd 6f ff 96 ff ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 12 00 b9 ff d8 ff da ff 94 ff cd ff 34 00 1c 00 c6 ff b5 ff 2a 00 f1 ff f1 ff b1 ff 21 ff 04 00 35 00 ed ff ec ff 7d 00 be ff 08 00 f9 ff da ff c1 ff ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 6f 0a a8 0a 57 0b f1 0b a4 0b 1c 0b 3e 0a 3c 08 34 05 fe 00 78 fb ae f5 e6 f0 50 ed 16 ea 61 e7 92 e5 8d e3 36 e0 0a dc c3 d6 26 d0 a0 cb 56 ca 84 cd ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 1a 00 c3 04 2d 0b 88 12 cf 15 f0 14 05 12 33 0d 06 08 61 04 d5 02 a5 02 94 03 56 06 e9 08 59 09 5c 08 4a 05 54 00 f9 fb 27 f8 7a f4 79 f1 18 ee 8b e9 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 96 04 5b 08 fb 0d 86 13 94 15 e3 13 ba 0f 5b 08 62 01 60 fd 3f fc 97 fd 05 01 a9 05 4c 09 5e 0a 9a 0a 0a 0b 6b 0b 6f 0d 98 10 69 12 bd 13 e9 12 6e 0e ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 82 10 77 11 d7 0f 1e 0e 53 0c 24 0a bc 08 dc 07 a2 06 a4 03 4f 00 be fe cf fd 0d fe 88 00 98 03 0d 06 0a 09 cc 0b bc 0c 98 0b b1 0b 2f 0c ed 0b 42 0c ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer a4 fa eb f9 9f f8 1c f7 0e f5 7b f2 71 ef 34 ec 90 e9 52 e8 82 e6 0d e4 6f e1 4a e0 ab e1 18 e6 21 ec ff f1 e4 f5 98 f9 00 fe 70 02 8d 06 c1 0a 18 0e ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 0b fe db fd d7 fd ff fd 01 fe d4 fd 95 fd 94 fd bd fd d0 fd cf fd ac fd b5 fd 11 fe 5f fe 7a fe 79 fe 72 fe 8c fe db fe 36 ff 8b ff 94 ff 9f ff df ff ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer fe ff ff ff 00 00 00 00 00 00 00 00 00 00 ff ff fe ff 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff ff 00 00 01 00 00 00 ff ff ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
data: { event_type: 'EVENT_TYPE_UNSPECIFIED',
  audio_out: { audio_data: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... > },
  device_action: null,
  speech_results: [],
  dialog_state_out: null }
5:45.

How to set up the actions.capability.SCREEN_OUTPUT capability

Hi,

I was trying to use this sample (and also the original code at https://github.com/actions-on-google/actions-on-google-testing-nodejs) for a Google Action that checks whether the user's device has the actions.capability.SCREEN_OUTPUT capability and, if not, forwards to a screen-enabled device (using conv.ask(new NewSurface({ ...).

The problem is that if (conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT')) { ... always evaluates to false. I tried changing the registered device I use for testing from Light to Phone, but it didn't help (though I did not regenerate the credentials). Any idea what needs to be done so that this sample behaves as if it were running on a screen-enabled device?

Thanks,

Adam

All data.device_action are null in the response

Hello,

When I ask "Is the light on?", I do not get a textual answer, but I do hear, for example, "The light is off" (via data.audio_out.audio_data).
All data.device_action fields are null in the response.

Did I miss something?

Thanks
