chatroom's People

Contributors

btotharye, daniel-wer, dependabot[bot], hotzenklotz, jstriebel, koaning, netcarver, nicholasbulka, nicolepilsworth, normanrz, originleon


chatroom's Issues

Speech recognition integration in the chatroom and bot icon beside the message

Hello,
I made a little program in Python that handles speech recognition, and I want to integrate it into the chatroom (via a speaker button). As I'm not very familiar with React and web development, I wonder how I can do this without breaking everything.
On the other hand, how can I add an icon beside both user and bot messages?

Thanks in advance for your help.

PS: I didn't want to fork this project, as I think enhancing its functionality will make it a great chatroom for the Rasa community.

import speech_recognition as sr

def recordAudio():
    r = sr.Recognizer()

    # obtain audio from the microphone
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)
        print("Say something!")
        audio = r.listen(source)

    # Speech recognition using Google Speech Recognition
    data = ""
    try:
        # Uses the default API key
        # To use another API key: `r.recognize_google(audio, key="GOOGLE_SPEECH_RECOGNITION_API_KEY")`
        data = r.recognize_google(audio, language="en-US")
        # data = r.recognize_sphinx(audio)
        # data = r.recognize_bing(audio)
        print("You said: " + data)
    except sr.UnknownValueError:
        print("Sorry, I could not understand what you said!")
    except sr.RequestError as e:
        print("Could not request results from Google Speech Recognition service; {0}".format(e))

    return data

if __name__ == '__main__':
    try:
        while True:
            recordAudio()
    except KeyboardInterrupt:
        pass

NameError: name 'cmdline_args' is not defined

I am trying to add the 'WEATHER BOT' from https://github.com/JustinaPetr/Weatherbot_Tutorial to a website (or a sample HTML page). I used the steps given in "Advanced Usage with a custom Rasa Core project as a Custom Channel from Python".

The following is the code I ran (run_application.py) after executing the previous steps suggested on the page (i.e. the steps under "Basic Usage: Running build and HTML page for your Chatroom"):

import os
from rasa_core.channels import HttpInputChannel
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter
from rasa_slack_connector import SlackInput
from bot_server_channel import BotServerInputChannel

# Creating the Interpreter and Agent
def load_agent(): 
    nlu_interpreter = RasaNLUInterpreter('./models/nlu/default/productnlu')
    agent = Agent.load('./models/dialogue', interpreter = nlu_interpreter)

# Creating the server
def main_server():
    agent = load_agent()

    channel = BotServerInputChannel(agent, port=cmdline_args.port)
    agent.handle_channels([channel], http_port=cmdline_args.port)

main_server()

Got the following error:

Traceback (most recent call last):
  File "run_application.py", line 26, in <module>
    main_server()
  File "run_application.py", line 23, in main_server
    channel = BotServerInputChannel(agent, port=cmdline_args.port)
NameError: name 'cmdline_args' is not defined
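The error comes from referencing `cmdline_args` without ever defining it (note also that `load_agent()` never returns the agent). A minimal sketch of one way to define it with `argparse`; the `--port` flag name and its default value here are assumptions, not part of the tutorial:

```python
import argparse

def parse_cmdline_args(argv=None):
    # Defines the `cmdline_args` that run_application.py references
    parser = argparse.ArgumentParser(description="Run the bot server")
    parser.add_argument("--port", type=int, default=5002,
                        help="port for the BotServerInputChannel")
    return parser.parse_args(argv)

# In run_application.py, before calling main_server():
#   cmdline_args = parse_cmdline_args()
# load_agent() must also end with `return agent`, otherwise
# BotServerInputChannel receives None instead of an Agent.
```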

chatroom connection to dockerized bot

Hey,

I put my bot, which starts the Klein server on port 5002, into a Docker container. Forwarding the port works, and I can reach the server at the Docker Toolbox IP http://192.168.99.100:5002/health successfully. But the GUI at http://127.0.0.1:8080/ does not seem to be connected to the bot, although I set the host inside index.html to 192.168.99.100:5002.

What do I have to do so the chatroom can communicate with the dockerized bot?

User messages are stacked below bot messages?

I access the GUI via VPN on a virtual machine. There I noticed that the user messages are stacked below the bot messages. Testing it locally or directly on the VM works normally.

Like this:

bot: bla
bot: bla2
bot: bla3
user: hi
user: need help
user: ok

After each user input, the user's messages are stacked at the bottom, and the bot messages all come before the first user message!

version: 0.7.3

All requests hitting same endpoint

Hi,
I have defined two endpoints in bot_server_channel.py:

  1. /conversations/sender_id/say
  2. /course/newcourse/sender_id/say

But all my requests are hitting only the endpoint "/course/newcourse//say".
Requests targeted at "/conversations//say" are also hitting the "course" endpoint.

When I delete /course/newcourse//say, requests meant for the conversations endpoint hit the correct endpoint.

I don't understand the reason for this.
Any help?

Chatroom not working from non-local environment with Rasa bot backend

Hello,
First of all, thank you for all the good work and all the nice stuff you offer!
I'm trying to plug a frontend onto my Rasa bot with your code.
I succeeded in getting the chatroom working with Rasa Core as the backend from:

  • 127.0.0.1:8080
  • 192.168.x.x:8080 (any local address given by my DHCP)
  • blabla.ngrok.io

and getting answers from my configured Rasa bot, but always locally, i.e. from my laptop (where everything is installed).

From any external device, I get the first page with the welcome message and can see user messages posted, but no answers come back from the bot.
Actually, I have the feeling that the backend does not receive any message when I use an external device.
I tried from Mozilla, IE, the iPhone browser, etc.
As I get the first page, I don't believe it could be a NAT or firewall issue, but I can't tell where it is blocking.
I'm unfortunately not familiar with React or any frontend code, so I'm struggling to modify any line of code to debug anything.
Many thanks for your help.

Customize CSS

Changes in Chatroom.scss do not change anything. I tried to change the text color, but nothing happened. I did run yarn build afterwards.

I am completely new to CSS and HTML.
I just want to remove the transparency from the text window; you can see the background of the webpage through it. And maybe change the position of the container.

Custom Action is not being called from chatroom webhook

I'm using rasa_core 0.11.12 and running:

  1. $ python -m rasa_core_sdk.endpoint --actions actions (to run custom actions)
  2. $ yarn serve
  3. $ python -m rasa_utils.bot -d models/dialogue -u models/nlu/default/bank_nlu

but I cannot get any responses at the frontend.
<html>
<head>
  <link rel="stylesheet" href="https://npm-scalableminds.s3.eu-central-1.amazonaws.com/@scalableminds/chatroom@master/dist/Chatroom.css" />
</head>
<body>
  <div class="chat-container"></div>

  <script src="https://npm-scalableminds.s3.eu-central-1.amazonaws.com/@scalableminds/chatroom@master/dist/Chatroom.js"></script>
  <script type="text/javascript">
    var chatroom = window.Chatroom({
      host: "http://localhost:5005",
      title: "Chat with Mike",
      container: document.querySelector(".chat-container"),
      welcomeMessage: "Hi, I am Mike. How may I help you?"
    });
    chatroom.openChat();
  </script>
</body>
</html>

Issue with Chatroom.js

Hey, I have a problem with the Chatroom.js I downloaded last week, and I now see that it has changed. Did I accidentally do something wrong? I have done nothing to the file, but it seems corrupted now.
At line 144 I get: Uncaught TypeError: require(...) is not a function

Here is the file:

// @flow
import "babel-polyfill";
import React, { Component, Fragment } from "react";
import ReactDOM from "react-dom";
import isEqual from "lodash.isequal";

// $FlowFixMe
import "./Chatroom.scss";

import { uuidv4 } from "./utils";
import Message, { MessageTime } from "./Message";

const REDRAW_INTERVAL = 10000;
const GROUP_INTERVAL = 60000;

export type ChatMessage = {
  message:
    | {
        type: "text",
        text: string
      }
    | { type: "image", image: string }
    | { type: "button", buttons: Array<{ payload: string, title: string }> },
  username: string,
  time: number,
  uuid: string
};

const WaitingBubble = () => (
  <li className="chat waiting">
    <span>●</span> <span>●</span> <span>●</span>
  </li>
);

const MessageGroup = ({ messages, onButtonClick }) => {
  const isBot = messages[0].username === "bot";
  const isButtonGroup =
    messages.length === 1 && messages[0].message.type === "button";
  return (
    <Fragment>
      {messages.map((message, i) => (
        <Message chat={message} key={i} onButtonClick={onButtonClick} />
      ))}
      {!isButtonGroup ? (
        <MessageTime time={messages[messages.length - 1].time} isBot={isBot} />
      ) : null}
    </Fragment>
  );
};

type ChatroomProps = {
  messages: Array<ChatMessage>,
  title: string,
  isOpen: boolean,
  showWaitingBubble: boolean,
  onButtonClick: (message: string, payload: string) => *,
  onSendMessage: (message: string) => *,
  onToggleChat: () => *
};

export default class Chatroom extends Component<ChatroomProps, {}> {
  lastRendered: number = 0;
  chatsRef: ?HTMLElement = null;
  inputRef: ?HTMLInputElement = null;

  componentDidMount() {
    this.scrollToBot();
  }

  componentDidUpdate(prevProps: ChatroomProps) {
    if (!isEqual(prevProps.messages, this.props.messages)) {
      this.scrollToBot();
    }
    if (!prevProps.isOpen && this.props.isOpen) {
      this.focusInput();
    }
    this.lastRendered = Date.now();
  }

  shouldComponentUpdate(nextProps: ChatroomProps) {
    return (
      !isEqual(nextProps, this.props) ||
      Date.now() > this.lastRendered + REDRAW_INTERVAL
    );
  }

  getInputRef(): HTMLInputElement {
    const { inputRef } = this;
    if (inputRef == null) throw new TypeError("inputRef is null.");
    return ((ReactDOM.findDOMNode(inputRef): any): HTMLInputElement);
  }

  getChatsRef(): HTMLElement {
    const { chatsRef } = this;
    if (chatsRef == null) throw new TypeError("chatsRef is null.");
    return ((ReactDOM.findDOMNode(chatsRef): any): HTMLElement);
  }

  scrollToBot() {
    this.getChatsRef().scrollTop = this.getChatsRef().scrollHeight;
  }

  focusInput() {
    this.getInputRef().focus();
  }

  handleSubmitMessage = async (e: SyntheticEvent<>) => {
    e.preventDefault();
    const message = this.getInputRef().value.trim();
    this.props.onSendMessage(message);
    this.getInputRef().value = "";
  };

  groupMessages(messages: Array<ChatMessage>) {
    if (messages.length === 0) return [];

    let currentGroup = [messages[0]];
    let lastTime = messages[0].time;
    let lastUsername = messages[0].username;
    let lastType = messages[0].message.type;
    const groups = [currentGroup];

    for (const message of messages.slice(1)) {
      if (
        // Buttons always have their own group
        lastType === "button" ||
        message.message.type === "button" ||
        // Messages are grouped by user/bot
        message.username !== lastUsername ||
        // Only time-continuous messages are grouped
        message.time > lastTime + GROUP_INTERVAL
      ) {
        // new group
        currentGroup = [message];
        groups.push(currentGroup);
      } else {
        // append to group
        currentGroup.push(message);
      }
      lastTime = message.time;
      lastUsername = message.username;
      lastType = message.message.type;
    }
    return groups;
  }

  render() {
    const { messages, isOpen, showWaitingBubble } = this.props;
    const chatroomClassName = `chatroom ${isOpen ? "open" : "closed"}`;

    const messageGroups = this.groupMessages(messages);

    return (
      <div className={chatroomClassName}>
        <h3 onClick={this.props.onToggleChat}>{this.props.title}</h3>
        <div
          className="chats"
          ref={el => {
            this.chatsRef = el;
          }}
        >
          {messageGroups.map((group, i) => (
            <MessageGroup
              messages={group}
              key={i}
              onButtonClick={this.props.onButtonClick}
            />
          ))}
          {showWaitingBubble ? <WaitingBubble /> : null}
        </div>
        <form className="input" onSubmit={this.handleSubmitMessage}>
          <input
            type="text"
            ref={el => {
              this.inputRef = el;
            }}
          />
          <input type="submit" value="Submit" />
        </form>
      </div>
    );
  }
}

'BotServerInputChannel' object has no attribute 'cors_origins'

I followed the integration-into-bot tutorial example and ended up with this issue:

2018-06-16 16:29:28+0200 [-] "127.0.0.1" - - [16/Jun/2018:14:29:28 +0000] "GET /conversations/82aa233e-91e9-4557-bff6-7b6dc3cdc624/log HTTP/1.1" 500 5554 "http://localhost:8080/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36"
2018-06-16 16:29:30+0200 [-] Unhandled Error
Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\twisted\web\server.py", line 257, in render
body = resrc.render(self)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\klein\resource.py", line 210, in render
d = defer.maybeDeferred(_execute)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\twisted\internet\defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\klein\resource.py", line 204, in _execute
**kwargs)
--- ---
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\twisted\internet\defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\klein\app.py", line 128, in execute_endpoint
return endpoint_f(self._instance, *args, **kwargs)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\klein\app.py", line 227, in _f
return _call(instance, f, request, *a, **kw)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\klein\app.py", line 50, in _call
result = f(*args, **kwargs)
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\rasa_nlu\server.py", line 105, in decorated
if '*' in self.cors_origins:
builtins.AttributeError: 'BotServerInputChannel' object has no attribute 'cors_origins'

Any insight into what is causing this?
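Judging from the last frame, rasa_nlu's CORS decorator expects the input channel to expose a `cors_origins` attribute. A minimal sketch of one way to satisfy it; the constructor signature here is an assumption for illustration, not the actual bot_server_channel API:

```python
class BotServerInputChannel:
    def __init__(self, agent, cors_origins=None):
        self.agent = agent
        # rasa_nlu's decorator checks `if '*' in self.cors_origins`,
        # so the attribute must always exist, even if empty
        self.cors_origins = cors_origins if cors_origins is not None else []
```

With such a parameter, passing cors_origins=["*"] would whitelist all origins.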

Make Delay Configurable

Right now, there is a delay of 1 second for every message the bot says. It would be nice if it were configurable, e.g. so that I can turn it off during debugging.

Error while accessing chatroom outside the installed server

When I try to launch the Chatroom from within the network using an IP or DNS name, it doesn't work; it only works on localhost. Could you please advise on any configuration change needed to launch the chatroom within a network?

var chatroom = window.Chatroom({
  host: "http://DNSORIP:5005",
  title: "Chat with Mike",
  container: document.querySelector(".chat-container"),
  welcomeMessage: "Hi, I am Mike. How may I help you?"
});

cannot deploy rasa bot on chatroom

Hi, I just ran into this issue when I wanted to deploy my bot using RestInput and Chatroom.
I'm using the Rasa Core master version.

I am not sure if I should use a standard Rasa project or a custom Rasa Core project on the CLI. I tried both.
I followed the steps ("Usage with a standard Rasa Core project") in Chatroom, but when I run python -m rasa_utils.bot -d models/dialogue -u models/current/nlu

The terminal returns:

Traceback (most recent call last):
  File "/Users/wisionlearning/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/Users/wisionlearning/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/wisionlearning/Documents/dd0926/rasa_utils/bot.py", line 21, in <module>
    from rasa_core.channels import (
ImportError: cannot import name 'RestInput'

I also followed the steps for the chat & voice platform RestChannel:

I made a credentials.yml exactly as it should be:

rest:
  # you don't need to provide anything here - this channel doesn't
  # require any credentials

Then I ran python -m rasa_core.run -d models/dialogue -u models/current/nlu/ --port 5002 --credentials credentials.yml

I got:

Using TensorFlow backend.
Traceback (most recent call last):
  File "/Users/wisionlearning/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/Users/wisionlearning/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/wisionlearning/rasa_core/rasa_core/run.py", line 234, in <module>
    nlu_endpoint)
  File "/Users/wisionlearning/rasa_core/rasa_core/run.py", line 203, in main
    generator=nlg_endpoint)
  File "/Users/wisionlearning/rasa_core/rasa_core/agent.py", line 81, in load
    ensemble = PolicyEnsemble.load(path)
  File "/Users/wisionlearning/rasa_core/rasa_core/policies/ensemble.py", line 198, in load
    policy = policy_cls.load(policy_path)
  File "/Users/wisionlearning/rasa_core/rasa_core/policies/keras_policy.py", line 264, in load
    model_arch = cls._load_model_arch(path, meta)
  File "/Users/wisionlearning/rasa_core/rasa_core/policies/keras_policy.py", line 240, in _load_model_arch
    arch_file = os.path.join(path, meta["arch"])
KeyError: 'arch'

When I tried "Simple Usage with a custom Rasa Core project on the CLI"

credentials.yml:

</Users/wisionlearning/Documents/dd0926/rasa_utils>/bot_server_channel.BotServerInputChannel:
# pass

Then I ran:

python -m rasa_core.run -vv \
  --core models/dialogue  \
  --nlu models/current/nlu  \
  --endpoints endpoints.yml \
  --credentials credentials.yml

I got the errors as:

Using TensorFlow backend.
Traceback (most recent call last):
  File "/Users/wisionlearning/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/Users/wisionlearning/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/wisionlearning/rasa_core/rasa_core/run.py", line 234, in <module>
    nlu_endpoint)
  File "/Users/wisionlearning/rasa_core/rasa_core/run.py", line 203, in main
    generator=nlg_endpoint)
  File "/Users/wisionlearning/rasa_core/rasa_core/agent.py", line 81, in load
    ensemble = PolicyEnsemble.load(path)
  File "/Users/wisionlearning/rasa_core/rasa_core/policies/ensemble.py", line 198, in load
    policy = policy_cls.load(policy_path)
  File "/Users/wisionlearning/rasa_core/rasa_core/policies/keras_policy.py", line 264, in load
    model_arch = cls._load_model_arch(path, meta)
  File "/Users/wisionlearning/rasa_core/rasa_core/policies/keras_policy.py", line 240, in _load_model_arch
    arch_file = os.path.join(path, meta["arch"])
KeyError: 'arch'

Should I do the basic usage first? When I run yarn install, it returns many errors.
Thanks for the help.

No Response to the Chatroom

I have installed and followed the instructions.

I have 3 terminals running: yarn serve, actions, and the bot.
When I type in the chatroom, I see the message arrive in the bot terminal, but I don't get any response displayed in the Chatroom.

Screenshots attached.
Please help.

ImportError: No module named bot_server_channel

python -m rasa_utils.bot -d models/current/dialogue -u models/nlu/Auto/Auto

/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/user/Documents/Ui Bot/rasa_utils/bot.py", line 10, in <module>
    from rasa_core.agent import Agent
  File "rasa_core/agent.py", line 40, in <module>
    from bot_server_channel import BotServerInputChanne
ImportError: No module named bot_server_channel
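The traceback shows rasa_core/agent.py trying to import bot_server_channel, which is not on Python's module search path. A rough sketch of one workaround, assuming bot_server_channel.py lives in the directory you start the bot from (that location is an assumption):

```python
import os
import sys

# Directory that contains bot_server_channel.py (assumed: current working dir)
CHANNEL_DIR = os.getcwd()

# Put it on the module search path before anything runs
# `from bot_server_channel import BotServerInputChannel`.
if CHANNEL_DIR not in sys.path:
    sys.path.insert(0, CHANNEL_DIR)
```

Equivalently, one could export PYTHONPATH pointing at that directory before running python -m rasa_utils.bot.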

Implement Debug View

It would be great to have a UI with the chatbot's state. Debugging chatbots on the command line is not fun, because debug information is printed in a rather unstructured way and gets mixed with messages.

A separate debug view would solve this.

Basic Implementation

I hacked together a basic variant like this:

Server

I added a new route to get the current tracker, including events:

    @app.route("/conversations/<cid>/tracker", methods=["GET"])
    @check_cors
    def tracker(self, request, cid):

        tracker = self.agent.tracker_store.get_or_create_tracker(cid)
        tracker_state = tracker.current_state(should_include_events=True,
                                              only_events_after_latest_restart=True)

        request.setHeader("Content-Type", "application/json")
        return json.dumps(tracker_state)

An alternative would be to add the tracker to the /log response.

JS Client

The JS frontend just polls the tracker and pretty-prints the important elements:

      async function updateTrackerView() {
        var sessionId = window.sessionStorage.getItem("simple-chatroom-cid");
        var response = await fetch("http://localhost:5002/conversations/" + sessionId + "/tracker");
        var tracker = await response.json();

        // Remove Intent Ranking
        var {intent_ranking, ...latest_message} = tracker.latest_message;

        var html = "<h1>Slots</h1>" +
                   "<pre>" + JSON.stringify(tracker.slots, null, 2) + "</pre>" +
                   "<h1>Latest Message</h1>" +
                   "<pre>" + JSON.stringify(latest_message, null, 2) + "</pre>" +
                   "<h1>Events</h1>" +
                   "<pre>" + JSON.stringify(tracker.events, null, 2) + "</pre>";

        document.getElementById("tracker-view").innerHTML = html;
      }

      setInterval(updateTrackerView, 1000)

Further Ideas

  • We could also prominently display the last executed action. It would have to be searched for in the events.
  • We should probably reverse the events to have the most recent on top.
  • We should probably also filter intent_ranking from the events, as it is very verbose. Or at least make it collapsible.
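The last two ideas could be sketched on the server side roughly like this; the event dict shape (a parse_data field holding intent_ranking) is an assumption about the tracker-state JSON:

```python
import copy

def prepare_tracker_for_debug_view(tracker_state):
    # Work on a copy so the original tracker state is untouched
    state = copy.deepcopy(tracker_state)
    # Most recent events on top
    state["events"] = list(reversed(state.get("events", [])))
    # Drop the verbose intent_ranking from user events
    for event in state["events"]:
        if isinstance(event.get("parse_data"), dict):
            event["parse_data"].pop("intent_ranking", None)
    return state
```

This could be applied to tracker_state just before json.dumps in the route above.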

Providing CORS

Hi, I am using your service, which is pretty good, but I am unable to configure CORS for the web service. I know there is a CORS option in the bot server channel, but I think it's for rasa_core, and it does not help me whitelist my URL on another server or PC where my bot is running.

Use credentials in fetch() call

The chatroom doesn't currently work if HTTP authentication is used.
Use fetch(url, { credentials: 'include' }) in all fetch() calls.

TypeError: 'BotServerInputChannel' object is not iterable

When I run the code below:

from bot_server_channel import BotServerInputChannel
##Creating the Interpreter and Agent
def load_agent():
    action_endpoint = EndpointConfig(url="http://localhost:5055/webhook")
    interpreter = RasaNLUInterpreter("./models/default/nlu")
    tracker_store = MongoTrackerStore(domain ="domain.yml", host="http://localhost:27017",db="test",username="username",password="password",collection="conversations")
    agent = Agent.load("./models/dialogue",interpreter=interpreter,action_endpoint=action_endpoint,tracker_store = tracker_store)
    return agent
##Creating the server
def main_server():
    agent = load_agent()
    channel = BotServerInputChannel(agent)
    agent.handle_channels(channel)

main_server()

I got the below error:

In most other cases you should consider using 'safe_load(stream)'
  data = yaml.load(stream)
Traceback (most recent call last):
  File "test.py", line 39, in <module>
    main_server()
  File "test.py", line 37, in main_server
    agent.handle_channels(channel)
  File "/usr/local/lib/python3.6/dist-packages/rasa_core/agent.py", line 528, in handle_channels
    route="/webhooks/")
  File "/usr/local/lib/python3.6/dist-packages/rasa_core/channels/channel.py", line 58, in register
    for channel in input_channels:
TypeError: 'BotServerInputChannel' object is not iterable

Can you help me resolve this issue?
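The register frame in the traceback iterates over its argument, so handle_channels expects a list of channels rather than a single channel object. A minimal illustration of the failure mode; FakeChannel and register are stand-ins, not the actual rasa_core classes:

```python
class FakeChannel:
    # stand-in for BotServerInputChannel
    name = "bot_server"

def register(input_channels):
    # mimics rasa_core.channels.channel.register: it iterates its argument
    return [channel.name for channel in input_channels]

channel = FakeChannel()
# register(channel)         # TypeError: 'FakeChannel' object is not iterable
names = register([channel])  # wrapping the channel in a list fixes it
```

By the same logic, agent.handle_channels([channel]) should work in the snippet above where agent.handle_channels(channel) fails.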

AvailableEndpoints ImportError when running bot.py

When I tried running

python -m rasa_utils.bot -d models/current/dialogue -u models/current/model_20181101-200133/

I get the error -

Traceback (most recent call last):
  File "/Users/abarthak/Downloads/installations/miniconda3/envs/rasapy36/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/Users/abarthak/Downloads/installations/miniconda3/envs/rasapy36/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/abarthak/chatbot_sandbox/starter-pack-rasa-stack/rasa_utils/bot.py", line 26, in <module>
    from rasa_core.utils import read_yaml_file, AvailableEndpoints
ImportError: cannot import name 'AvailableEndpoints'

My Rasa Core version is 0.11.1, rasa_core_sdk is 0.11.5.

Any clue why this is happening? Any help would be great thanks!

Testing

When trying to test whether bot_server_channel.py is working, I get the message below:

No Such Resource
Sorry. No luck finding that resource.

I followed the steps in the README, but it seems that I'm missing something here.

Would you please advise?

Add audio type in bot response

Hi,

I have a use case where a person can interact with the bot using audio.
When the user says something using audio, it first gets converted to text. This text is then sent to the bot endpoint and a text response is returned. I want to add a field to this response indicating that it should be treated as audio, so that on the UI end it can be resolved as required.
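One way to sketch this on the channel side is to tag each outgoing message dict with the requested type; the field name message_type and the "audio" value here are hypothetical, not part of the chatroom protocol:

```python
def make_bot_message(text, reply_as_audio=False):
    # Hypothetical extra field so the UI knows to render/speak this reply
    message = {"text": text}
    message["message_type"] = "audio" if reply_as_audio else "text"
    return message
```

The frontend could then branch on message_type when rendering each bot message.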

Until which version before Rasa 0.11?

Which version works without Rasa 0.11? I saw in the commits that a debug view is now implemented. Can I use this (which version?) without Rasa 0.11? And if not, can you offer the same for Rasa versions below 0.11?

Make First Message come from the Bot

I think it would be a lot cleaner if the first message were specified as a template in the bot's domain.yaml and were just another message. This would also automatically add support for buttons in the first message, which is currently not supported.

I would suggest that the Chatroom component gets an optional startMessage argument. If specified, this message (e.g. /start) is sent to the bot when the chat room is opened.

Connect to Rasa UI?

Hey,

do you know what I have to do to connect Rasa UI to the chatroom? I am a bit lost as to where I have to put in the right URLs. Do I just hand over port 5002 from starting Rasa to the NLU server endpoint of Rasa UI?

Image Display in Chat tool

Hi,

An image is currently not being displayed along with the text in the bot's reply. Am I missing something, or does this require an enhancement?

Chatbot access from external system

Hi
I launched the UI server using the command 'yarn serve'. It loads well, and I integrated this UI with Rasa Core.
When I launch the UI, it is served on 127.0.0.1 and 10.x.x.x. I have also launched my Rasa backend. I am able to chat with the bot on my system, as everything is set up there.
Now I tried accessing this bot from another system using the internal IP, 10.x.x.x. The UI is perfectly visible on the other system, but no response comes from the Rasa backend.

Any suggestion as to what the problem could be?

bot.py not loading NLU model

I have recently been using Chatroom in a Rasa project I have been working on.
The problem is, when I run rasa_utils/bot.py, the NLU model does not load. No matter what I pass as "-u" (even a non-existent path), the chatroom always runs but with no intent recognition. For example, I have an intent named "greet" with the phrase "hello there", but if I send "hello there" in the chat, the debug output says it recognizes an intent named "intent_hello there" (with spaces). The same happens for whatever I type as the message: it recognizes "intent_message". Can you help me? (Using rasa-core 0.10.4 and rasa-nlu 0.13.1.) Thanks in advance.

Error while adding InputChannel to Agent

I'm getting the following error while trying to add the channel to rasa_core.agent:

agent.handle_channel(channel)
AttributeError: 'Agent' object has no attribute 'handle_channel'

If I change it to agent.handle_channels(channel), I get:
TypeError: 'BotServerInputChannel' object is not iterable

Code:

from bot_server_channel import BotServerInputChannel
from rasa_core.agent import Agent
from rasa_core.interpreter import RasaNLUInterpreter

def main_server():
    agent = Agent.load("models/dialogue", RasaNLUInterpreter("models/current/nlu"))
    channel = BotServerInputChannel(agent)
    agent.handle_channels(channel)

main_server()

It looks like there is no function called handle_channel in rasa_core.agent. I'm not sure how this code works for everyone else.

[Question] Trouble connecting to a custom backend

Hello,

So I have a custom Rasa backend/server that I'm using. The data is being passed as JSON, and the query takes the form localhost:(PORT)/parse?q=. I've noticed that when I try to connect via the chatroom, the query takes the form localhost:(PORT)/conversations. Is there a way to change how the chatroom's queries are formed? The bot is pointing at the correct port that my data would be streaming in on. Secondly, is there also a way to turn off the once-per-second logging of the errors? I've attached a screenshot below.


Thank you once again

[Question] Bot Message Icons

Hello,

Is there an option to add icons to the message bubbles? For example, I'd like to put a custom image next to each of the bot responses. If I were to do this myself, would I need to make the changes within Message.js or within Chatroom.js?

Thank you

Issues connecting Rasa to chatroom bot

  1. Create a credentials file, e.g. channel_credentials.yml, with the fully qualified path to the Python class, e.g.:

bot.helpers.chatroom_channel.BotServerInputChannel:
  # pass

  2. Start the Rasa bot using the command line and pass the --credentials flag:

	python -m rasa_core.run  \
		--core models/current/dialogue  \
		--nlu models/current/nlu  \
		--endpoints endpoints.yml \
		--credentials channel_credentials.yml

Originally posted by @hotzenklotz in #43 (comment)

So I still keep getting 404 errors despite following the steps above.


I'm sure the problem is on my end so if anyone could help troubleshoot, I would be very grateful.

Rasa_core= 11.1

The path to the Python class BotServerInputChannel is from the bot_server_channel.py recently edited and pushed to this repo / from the approved pull request, correct? It's not the default version that comes with rasa_utils.

Also, are we supposed to specify anything in the endpoints.yml file besides the action and core server?

'Agent' object has no attribute 'handle_channel'

Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/user/Documents/Bot (copy)/rasa_utils/bot.py", line 61, in <module>
    agent.handle_channel(channel, message_preprocessor=preprocessor)
AttributeError: 'Agent' object has no attribute 'handle_channel'

log?nocache ?

Hey,

is it possible that the latest master no longer works with Rasa Core version 10.4?

I correctly get the message from the GUI, like: webhooks/chatroom/conversations/id/say?message=hi

but afterwards I just get messages like: webhooks/chatroom/conversations/id/log?nocache=10102203

I set it up on a virtual machine; both the GUI and the Python web server work separately.

Passing additional parameters or headers in logs

Hey,

I want to know how I can pass additional parameters in the URL or in headers when the say endpoint is hit. In the present scenario we get nocache as a URL parameter; I want to add some more parameters.

Cannot Get Rasa Working

I tried following your instructions for the Rasa channel, but I'm not sure how to actually get to the React chat UI. I have the channel loaded and running in my bot, but I'm not sure where the actual chat UI is supposed to be available.

Thanks

Issue while using yarn install

Hi there,

I am trying to install this yarn repository, and while executing the command yarn install I get an "Unexpected token" error in index.js (screenshot attached).

My system configuration is as follows:
Ubuntu 18.04
Node 10.6
yarn 1.7
