real-time-voice-bot's Introduction

real-time-voice-bot

About

This is a simple example of how you could build a real-time voice bot using Deepgram's real-time Speech-to-Text and Text-to-Speech.

This example uses LangChain and OpenAI's ChatGPT LLM.
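
As a rough sketch of what such a call looks like with the openai 3.x Node SDK (the model and parameters mirror the request bodies visible in the issue logs further down; the actual server code may route this through LangChain wrappers), run as an ES module:

// Hedged sketch, not the repository's actual code: a standalone
// chat-completion request shaped like the ones seen in the issue logs.
import { Configuration, OpenAIApi } from 'openai';

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPEN_AI_API_KEY })
);

const completion = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  temperature: 0,
  messages: [
    { role: 'system', content: 'You are very caring and considerate. ...' },
    { role: 'user', content: 'Hello. What can you do for me?' },
  ],
});

// The v3 SDK wraps the HTTP response, so the reply text lives under .data
console.log(completion.data.choices[0].message.content);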

Setup

npm i

API Keys

export OPEN_AI_API_KEY=xxx
export DEEPGRAM_API_KEY=xxx
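
The variable names above must match exactly what server.js reads from process.env. A small fail-fast check near the top of server.js (a sketch, not the repository's actual code) makes a missing or misspelled key obvious at startup instead of surfacing later as a 401 from Deepgram or a rejected OpenAI request:

// Hedged sketch: abort at startup if either key is missing from the environment.
for (const name of ['OPEN_AI_API_KEY', 'DEEPGRAM_API_KEY']) {
  if (!process.env[name]) {
    console.error(`Missing environment variable: ${name}`);
    process.exit(1);
  }
}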

Running

npm run start

Webpage URL

http://localhost:3000/

real-time-voice-bot's People

Contributors

damiendeepgram, jkroll-deepgram, vchulski

Stargazers

Alexander Dischberg, Felix Fale, Jaime Tarazona, MARTIAL Willys, Pranay Manocha, Jabari E Holder, Jd Fiscus, John Wright, dbraganca, Dante Noguez, Michelle Chan, and others

Watchers

Jason Maldonis, Jackie Harris, and others

real-time-voice-bot's Issues

Getting an error after starting the server

Hello, I cloned the repo, followed all the steps, and verified that the API keys are correct. I'm getting the error below on the index page:

[nodemon] 2.0.7
[nodemon] to restart at any time, enter rs
[nodemon] watching path(s): .
[nodemon] watching extensions: js,mjs,json
[nodemon] starting node server.js
Starting Server on Port 3000
Running
Connected on server side with ID: JGOpkbxwU6WH4nk8AAAB
ERROR MESG n {
target: T {
_events: [Object: null prototype] {
open: [Function],
close: [Function],
error: [Function],
message: [Function]
},
_eventsCount: 4,
_maxListeners: undefined,
_binaryType: 'nodebuffer',
_closeCode: 1006,
_closeFrameReceived: false,
_closeFrameSent: false,
_closeMessage: '',
_closeTimer: null,
_extensions: {},
_protocol: '',
_readyState: 2,
_receiver: null,
_sender: null,
_socket: null,
_bufferedAmount: 0,
_isServer: false,
_redirects: 0,
_url: 'wss://api.deepgram.com/v1/listen?language=en-US&smart_format=true&model=nova&interim_results=true&endpointing=100&no_delay=true&utterance_end_ms=1000',
_req: ClientRequest {
_events: [Object: null prototype],
_eventsCount: 5,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: true,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: true,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: false,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 0,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: [TLSSocket],
_header: 'GET /v1/listen?language=en-US&smart_format=true&model=nova&interim_results=true&endpointing=100&no_delay=true&utterance_end_ms=1000 HTTP/1.1\r\n' +
'Sec-WebSocket-Version: 13\r\n' +
'Sec-WebSocket-Key: mS0RAqkFk7Eqh5QiQ0cBJg==\r\n' +
'Connection: Upgrade\r\n' +
'Upgrade: websocket\r\n' +
'Authorization: token 6b4337df6f394f47ac8916021fc206a5\r\n' +
'User-Agent: @deepgram/sdk/1.21.0 node/20.11.1\r\n' +
'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits\r\n' +
'Host: api.deepgram.com\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: undefined,
socketPath: undefined,
method: 'GET',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/listen?language=en-US&smart_format=true&model=nova&interim_results=true&endpointing=100&no_delay=true&utterance_end_ms=1000',
_ended: false,
res: [IncomingMessage],
aborted: true,
timeoutCb: null,
upgradeOrConnect: false,
parser: [HTTPParser],
maxHeadersCount: null,
reusedSocket: false,
host: 'api.deepgram.com',
protocol: 'https:',
[Symbol(shapeMode)]: false,
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kHighWaterMark)]: 16384,
[Symbol(kRejectNonStandardBodyWrites)]: false,
[Symbol(kUniqueHeaders)]: null,
[Symbol(kError)]: undefined
},
[Symbol(shapeMode)]: false,
[Symbol(kCapture)]: false
},
type: 'error',
message: 'Unexpected server response: 401',
error: Error: Unexpected server response: 401
at ClientRequest.<anonymous> (C:\Users\gaura\gihub_repo\real-time-voice-bot\node_modules\@deepgram\sdk\dist\index.js:1:54417)
at ClientRequest.emit (node:events:518:28)
at HTTPParser.parserOnIncomingClient [as onIncoming] (node:_http_client:693:27)
at HTTPParser.parserOnHeadersComplete (node:_http_common:119:17)
at TLSSocket.socketOnData (node:_http_client:535:22)
at TLSSocket.emit (node:events:518:28)
at addChunk (node:internal/streams/readable:559:12)
at readableAddChunkPushByteMode (node:internal/streams/readable:510:3)
at Readable.push (node:internal/streams/readable:390:5)
at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23)
}
dgLive socketId: JGOpkbxwU6WH4nk8AAAB ERROR::Type:error / Code:undefined
dgLive socketId: JGOpkbxwU6WH4nk8AAAB CONNECTION CLOSED!
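
The 401 above comes from wss://api.deepgram.com/v1/listen and means Deepgram rejected the key sent in the Authorization header, so the value of DEEPGRAM_API_KEY is the first thing to check. One quick way to probe the key directly (a sketch assuming Node 18+ global fetch and Deepgram's /v1/projects management endpoint, run as an ES module):

// Hedged sketch: ask the Deepgram management API whether the key is accepted.
const res = await fetch('https://api.deepgram.com/v1/projects', {
  headers: { Authorization: `Token ${process.env.DEEPGRAM_API_KEY}` },
});
console.log(res.ok ? 'Deepgram key accepted' : `Deepgram key rejected: HTTP ${res.status}`);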

Only working one way - no reply from Deepgram, then it crashes

Microsoft Windows [Version 10.0.19045.4170]
(c) Microsoft Corporation. All rights reserved.

Hi - I can't get this running both ways - I have the API keys in a .env file.

C:\Users\Sam\AI\real-time-voice-bot>npm i

up to date, audited 262 packages in 2s

35 packages are looking for funding
run npm fund for details

8 moderate severity vulnerabilities

To address all issues (including breaking changes), run:
npm audit fix --force

Run npm audit for details.

C:\Users\Sam\AI\real-time-voice-bot>npm run start

[email protected] start
nodemon server.js

[nodemon] 2.0.7
[nodemon] to restart at any time, enter rs
[nodemon] watching path(s): .
[nodemon] watching extensions: js,mjs,json
[nodemon] starting node server.js
Starting Server on Port 3000
Running
node:events:491
throw er; // Unhandled 'error' event
^

Error: listen EADDRINUSE: address already in use :::3000
at Server.setupListenHandle [as _listen2] (node:net:1740:16)
at listenInCluster (node:net:1788:12)
at Server.listen (node:net:1876:7)
at file:///C:/Users/Sam/AI/real-time-voice-bot/server.js:263:12
at ModuleJob.run (node:internal/modules/esm/module_job:194:25)
Emitted 'error' event on Server instance at:
at emitErrorNT (node:net:1767:8)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'EADDRINUSE',
errno: -4091,
syscall: 'listen',
address: '::',
port: 3000
}

Node.js v18.15.0
[nodemon] app crashed - waiting for file changes before starting...
Terminate batch job (Y/N)? y

C:\Users\Sam\AI\real-time-voice-bot>npm run start

[email protected] start
nodemon server.js

[nodemon] 2.0.7
[nodemon] to restart at any time, enter rs
[nodemon] watching path(s): .
[nodemon] watching extensions: js,mjs,json
[nodemon] starting node server.js
Starting Server on Port 3000
Running
node:events:491
throw er; // Unhandled 'error' event
^

Error: listen EADDRINUSE: address already in use :::3000
at Server.setupListenHandle [as _listen2] (node:net:1740:16)
at listenInCluster (node:net:1788:12)
at Server.listen (node:net:1876:7)
at file:///C:/Users/Sam/AI/real-time-voice-bot/server.js:263:12
at ModuleJob.run (node:internal/modules/esm/module_job:194:25)
Emitted 'error' event on Server instance at:
at emitErrorNT (node:net:1767:8)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: 'EADDRINUSE',
errno: -4091,
syscall: 'listen',
address: '::',
port: 3000
}

Node.js v18.15.0
[nodemon] app crashed - waiting for file changes before starting...
Terminate batch job (Y/N)? y
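
The two crashes above are not an API problem: EADDRINUSE means something is already listening on port 3000, usually a previous node/nodemon instance that never exited. Ending that process frees the port; another option is making the port configurable, roughly as in this sketch (not the repository's actual code):

// Hedged sketch: read the port from PORT so a clashing run can bind elsewhere.
import http from 'node:http';

const port = Number(process.env.PORT) || 3000;
const server = http.createServer((req, res) => res.end('ok'));
server.listen(port, () => console.log(`Starting Server on Port ${port}`));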

C:\Users\Sam\AI\real-time-voice-bot>npm run start

[email protected] start
nodemon server.js

[nodemon] 2.0.7
[nodemon] to restart at any time, enter rs
[nodemon] watching path(s): .
[nodemon] watching extensions: js,mjs,json
[nodemon] starting node server.js
Starting Server on Port 3000
Running
Connected on server side with ID: R3SI8AojR-jJwgYqAAAB
dgLive socketId: R3SI8AojR-jJwgYqAAAB WEBSOCKET CONNECTION OPEN!
INTERIM_RESULT: Hello.
INTERIM_RESULT: Hello?
INTERIM_RESULT: Hello. What can you do for me?
SPEECH_FINAL socketId: R3SI8AojR-jJwgYqAAAB: Hello. What can you do for me?
UTTERANCE_END_MS Not Triggered socketId: R3SI8AojR-jJwgYqAAAB:
INTERIM_RESULT: Well,
INTERIM_RESULT: Why are you not reply
SPEECH_FINAL socketId: R3SI8AojR-jJwgYqAAAB: Why are you not replying?
UTTERANCE_END_MS Not Triggered socketId: R3SI8AojR-jJwgYqAAAB:
INTERIM_RESULT: Why do you know
SPEECH_FINAL socketId: R3SI8AojR-jJwgYqAAAB: Why do you not work?
UTTERANCE_END_MS Not Triggered socketId: R3SI8AojR-jJwgYqAAAB:
INTERIM_RESULT: Hello?
INTERIM_RESULT: Hello?
SPEECH_FINAL socketId: R3SI8AojR-jJwgYqAAAB: Hello?
INTERIM_RESULT: I need some help.
SPEECH_FINAL socketId: R3SI8AojR-jJwgYqAAAB: I need some help.
UTTERANCE_END_MS Not Triggered socketId: R3SI8AojR-jJwgYqAAAB:
INTERIM_RESULT: No
INTERIM_RESULT: No reply so
SPEECH_FINAL socketId: R3SI8AojR-jJwgYqAAAB: No reply so far.
Error: Request failed with status code 429
at createError (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\core\createError.js:16:15)
at settle (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\core\settle.js:17:12)
at IncomingMessage.handleStreamEnd (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\adapters\http.js:322:11)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1359:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.3.0',
Authorization: 'Bearer sk-xxx',
'Content-Length': 460
},
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"system","content":"You are very caring and considerate. You are always positive and helpful. You provide short answers one or two sentence at a time. You ask probing questions to help the user share more. You provide reassurances and help the user feel better."},{"role":"user","content":"Hello. What can you do for me? "}]}',
url: 'https://api.openai.com/v1/chat/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 460,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: TLSSocket {
_tlsOptions: [Object],
_secureEstablished: true,
_securePending: false,
_newSessionPending: false,
_controlReleased: true,
secureConnecting: false,
_SNICallback: null,
servername: 'api.openai.com',
alpnProtocol: false,
authorized: true,
authorizationError: null,
encrypted: true,
_events: [Object: null prototype],
_eventsCount: 10,
connecting: false,
_hadError: false,
_parent: null,
_host: 'api.openai.com',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: undefined,
_server: null,
ssl: [TLSWrap],
_requestCert: true,
_rejectUnauthorized: true,
parser: null,
_httpMessage: [Circular *1],
[Symbol(res)]: [TLSWrap],
[Symbol(verified)]: true,
[Symbol(pendingSession)]: null,
[Symbol(async_id_symbol)]: 3652,
[Symbol(kHandle)]: [TLSWrap],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: false,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0,
[Symbol(connect-options)]: [Object]
},
_header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
'Authorization: Bearer sk-xx\r\n' +
'Content-Length: 460\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 443,
protocol: 'https:',
options: [Object: null prototype],
requests: [Object: null prototype] {},
sockets: [Object: null prototype],
freeSockets: [Object: null prototype] {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 2,
maxCachedSessions: 100,
_sessionCache: [Object],
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/chat/completions',
_ended: true,
res: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 4,
_maxListeners: undefined,
socket: [TLSSocket],
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [Array],
rawTrailers: [],
joinDuplicateHeaders: undefined,
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 429,
statusMessage: 'Too Many Requests',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'https://api.openai.com/v1/chat/completions',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: [Object],
[Symbol(kHeadersCount)]: 26,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 460,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'https://api.openai.com/v1/chat/completions',
[Symbol(kCapture)]: false
},
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype] {
accept: [Array],
'content-type': [Array],
'user-agent': [Array],
authorization: [Array],
'content-length': [Array],
host: [Array]
},
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
response: {
status: 429,
statusText: 'Too Many Requests',
headers: {
date: 'Mon, 18 Mar 2024 18:10:30 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '337',
connection: 'close',
vary: 'Origin',
'x-request-id': 'req_5110ce1e2c3d61cc09f370977700b951',
'strict-transport-security': 'max-age=15724800; includeSubDomains',
'cf-cache-status': 'DYNAMIC',
'set-cookie': [Array],
server: 'cloudflare',
'cf-ray': '86672e0b8c126317-LHR',
'alt-svc': 'h3=":443"; ma=86400'
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"system","content":"You are very caring and considerate. You are always positive and helpful. You provide short answers one or two sentence at a time. You ask probing questions to help the user share more. You provide reassurances and help the user feel better."},{"role":"user","content":"Hello. What can you do for me? "}]}',
url: 'https://api.openai.com/v1/chat/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 460,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: [TLSSocket],
_header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
'Authorization: Bearer sk-xx\r\n' +
'Content-Length: 460\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/chat/completions',
_ended: true,
res: [IncomingMessage],
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: [Writable],
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON],
attemptNumber: 7,
retriesLeft: 0
}
Error: Request failed with status code 429
at createError (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\core\createError.js:16:15)
at settle (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\core\settle.js:17:12)
at IncomingMessage.handleStreamEnd (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\adapters\http.js:322:11)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1359:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.3.0',
Authorization: 'Bearer sk-xx',
'Content-Length': 450
},
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"system","content":"You are very caring and considerate. You are always positive and helpful. You provide short answers one or two sentence at a time. You ask probing questions to help the user share more. You provide reassurances and help the user feel better."},{"role":"user","content":"Why do you not work? "}]}',
url: 'https://api.openai.com/v1/chat/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 450,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: TLSSocket {
_tlsOptions: [Object],
_secureEstablished: true,
_securePending: false,
_newSessionPending: false,
_controlReleased: true,
secureConnecting: false,
_SNICallback: null,
servername: 'api.openai.com',
alpnProtocol: false,
authorized: true,
authorizationError: null,
encrypted: true,
_events: [Object: null prototype],
_eventsCount: 10,
connecting: false,
_hadError: false,
_parent: null,
_host: 'api.openai.com',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: undefined,
_server: null,
ssl: [TLSWrap],
_requestCert: true,
_rejectUnauthorized: true,
parser: null,
_httpMessage: [Circular *1],
[Symbol(res)]: [TLSWrap],
[Symbol(verified)]: true,
[Symbol(pendingSession)]: null,
[Symbol(async_id_symbol)]: 3853,
[Symbol(kHandle)]: [TLSWrap],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: false,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0,
[Symbol(connect-options)]: [Object]
},
_header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
'Authorization: Bearer sk-xx\r\n' +
'Content-Length: 450\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 443,
protocol: 'https:',
options: [Object: null prototype],
requests: [Object: null prototype] {},
sockets: [Object: null prototype],
freeSockets: [Object: null prototype] {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 1,
maxCachedSessions: 100,
_sessionCache: [Object],
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/chat/completions',
_ended: true,
res: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 4,
_maxListeners: undefined,
socket: [TLSSocket],
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [Array],
rawTrailers: [],
joinDuplicateHeaders: undefined,
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 429,
statusMessage: 'Too Many Requests',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'https://api.openai.com/v1/chat/completions',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: [Object],
[Symbol(kHeadersCount)]: 26,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 450,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'https://api.openai.com/v1/chat/completions',
[Symbol(kCapture)]: false
},
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype] {
accept: [Array],
'content-type': [Array],
'user-agent': [Array],
authorization: [Array],
'content-length': [Array],
host: [Array]
},
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
response: {
status: 429,
statusText: 'Too Many Requests',
headers: {
date: 'Mon, 18 Mar 2024 18:10:48 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '337',
connection: 'close',
vary: 'Origin',
'x-request-id': 'req_f8b001b5032f5fac788c9d880ac79eba',
'strict-transport-security': 'max-age=15724800; includeSubDomains',
'cf-cache-status': 'DYNAMIC',
'set-cookie': [Array],
server: 'cloudflare',
'cf-ray': '86672e794f1c71aa-LHR',
'alt-svc': 'h3=":443"; ma=86400'
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"system","content":"You are very caring and considerate. You are always positive and helpful. You provide short answers one or two sentence at a time. You ask probing questions to help the user share more. You provide reassurances and help the user feel better."},{"role":"user","content":"Why do you not work? "}]}',
url: 'https://api.openai.com/v1/chat/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 450,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: [TLSSocket],
_header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, /\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
'Authorization: Bearer sk-xx\r\n' +
'Content-Length: 450\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/chat/completions',
_ended: true,
res: [IncomingMessage],
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: [Writable],
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON],
attemptNumber: 7,
retriesLeft: 0
}
Error: Request failed with status code 429
at createError (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\core\createError.js:16:15)
at settle (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\core\settle.js:17:12)
at IncomingMessage.handleStreamEnd (C:\Users\Sam\AI\real-time-voice-bot\node_modules\axios\lib\adapters\http.js:322:11)
at IncomingMessage.emit (node:events:525:35)
at endReadableNT (node:internal/streams/readable:1359:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.3.0',
Authorization: 'Bearer sk-xx',
'Content-Length': 455
},
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"system","content":"You are very caring and considerate. You are always positive and helpful. You provide short answers one or two sentence at a time. You ask probing questions to help the user share more. You provide reassurances and help the user feel better."},{"role":"user","content":"Why are you not replying? "}]}',
url: 'https://api.openai.com/v1/chat/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 455,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: TLSSocket {
_tlsOptions: [Object],
_secureEstablished: true,
_securePending: false,
_newSessionPending: false,
_controlReleased: true,
secureConnecting: false,
_SNICallback: null,
servername: 'api.openai.com',
alpnProtocol: false,
authorized: true,
authorizationError: null,
encrypted: true,
_events: [Object: null prototype],
_eventsCount: 10,
connecting: false,
_hadError: false,
_parent: null,
_host: 'api.openai.com',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: undefined,
_server: null,
ssl: [TLSWrap],
_requestCert: true,
_rejectUnauthorized: true,
parser: null,
_httpMessage: [Circular *1],
[Symbol(res)]: [TLSWrap],
[Symbol(verified)]: true,
[Symbol(pendingSession)]: null,
[Symbol(async_id_symbol)]: 3889,
[Symbol(kHandle)]: [TLSWrap],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: false,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0,
[Symbol(connect-options)]: [Object]
},
_header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
'Authorization: Bearer sk-xx\r\n' +
'Content-Length: 455\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 443,
protocol: 'https:',
options: [Object: null prototype],
requests: [Object: null prototype] {},
sockets: [Object: null prototype],
freeSockets: [Object: null prototype] {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 1,
maxCachedSessions: 100,
_sessionCache: [Object],
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/chat/completions',
_ended: true,
res: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 4,
_maxListeners: undefined,
socket: [TLSSocket],
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [Array],
rawTrailers: [],
joinDuplicateHeaders: undefined,
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 429,
statusMessage: 'Too Many Requests',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'https://api.openai.com/v1/chat/completions',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: [Object],
[Symbol(kHeadersCount)]: 26,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 455,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'https://api.openai.com/v1/chat/completions',
[Symbol(kCapture)]: false
},
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype] {
accept: [Array],
'content-type': [Array],
'user-agent': [Array],
authorization: [Array],
'content-length': [Array],
host: [Array]
},
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
response: {
status: 429,
statusText: 'Too Many Requests',
headers: {
date: 'Mon, 18 Mar 2024 18:10:48 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '337',
connection: 'close',
vary: 'Origin',
'x-request-id': 'req_69ea5edd008d181dc419883aa0ea631b',
'strict-transport-security': 'max-age=15724800; includeSubDomains',
'cf-cache-status': 'DYNAMIC',
'set-cookie': [Array],
server: 'cloudflare',
'cf-ray': '86672e7cca6c63c8-LHR',
'alt-svc': 'h3=":443"; ma=86400'
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
data: '{"model":"gpt-3.5-turbo","temperature":0,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"n":1,"stream":false,"messages":[{"role":"system","content":"You are very caring and considerate. You are always positive and helpful. You provide short answers one or two sentence at a time. You ask probing questions to help the user share more. You provide reassurances and help the user feel better."},{"role":"user","content":"Why are you not replying? "}]}',
url: 'https://api.openai.com/v1/chat/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 455,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: [TLSSocket],
_header: 'POST /v1/chat/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.3.0\r\n' +
'Authorization: Bearer sk-xx\r\n' +
'Content-Length: 455\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/chat/completions',
_ended: true,
res: [IncomingMessage],
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: [Writable],
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON],
attemptNumber: 7,
retriesLeft: 0
}
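
The repeated 429 responses above come from OpenAI, not Deepgram: with attemptNumber: 7 and retriesLeft: 0 the SDK has already retried and given up, which for a 429 usually means the key's rate limit or quota is exhausted, so the account's usage and billing are the first things to check. If the limit is only transient, an extra backoff layer around the chat-completion call is one way to ride it out; a minimal sketch, with callChatCompletion as a hypothetical stand-in for whatever function the repo actually uses:

// Hedged sketch: retry a function with exponential backoff on HTTP 429.
async function withBackoff(fn, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = err?.response?.status;
      if (status !== 429 || attempt === maxAttempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** attempt));
    }
  }
}

// Usage (hypothetical): const reply = await withBackoff(() => callChatCompletion(text));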

Do I need a config.json file somewhere?

I get a client-side popup with:

Error: You must configure your OpenAI API Key in the config.json to use the "Respond with AI" feature.

On the server side I see this:
ERROR MESG Could not send. Connection not open.
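
The popup text suggests the browser side reads the OpenAI key from a config.json served next to the page, separately from the environment variables the server uses. A minimal sketch of the kind of check that error implies (the file name comes from the error message, but the field name here is an assumption; match it to whatever the client code actually reads):

// Hedged sketch (field name assumed): warn if config.json has no OpenAI key.
const config = await fetch('/config.json').then((r) => r.json());
if (!config.OPEN_AI_API_KEY) {
  console.warn('Set the OpenAI API key in config.json to enable "Respond with AI".');
}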
