DeathStarBench

Open-source benchmark suite for cloud microservices. DeathStarBench includes five end-to-end services, four for cloud systems, and one for cloud-edge systems running on drone swarms.

End-to-end Services

  • Social Network (released)
  • Media Service (released)
  • Hotel Reservation (released)
  • E-commerce site (in progress)
  • Banking System (in progress)
  • Drone coordination system (in progress)

License & Copyright

DeathStarBench is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2.

DeathStarBench is being developed by the SAIL group at Cornell University.

Publications

More details on the applications and a characterization of their behavior can be found at "An Open-Source Benchmark Suite for Microservices and Their Hardware-Software Implications for Cloud and Edge Systems", Y. Gan et al., ASPLOS 2019.

If you use this benchmark suite in your work, we ask that you please cite the paper above.

Beta-testing

If you are interested in joining the beta-testing group for DeathStarBench, send us an email at: [email protected]

Issues

Load balancing between identical containers in Social Network

Hi, I am trying to load-balance across multiple containers of the same service using an nginx container. For example, if I want to run another container of the Home Timeline service, I start it with the following command:

sudo docker run -itd --restart always --network socialnetwork_default --entrypoint HomeTimelineService --hostname home-timeline-service yg397/social-network-microservices

This starts another home-timeline-service container. My issue is that, for load balancing between containers, I have to provide each container's IP address and port in nginx.conf, and I don't know them. How do I find them? Any pointers would be appreciated; thank you in advance.
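For reference, once the backend addresses are known (e.g. via docker inspect on each container), an upstream block can be generated mechanically. This is only a sketch: the upstream name, hostnames, and port 9090 are illustrative, not configuration shipped with DeathStarBench.

```python
def nginx_upstream(name, backends):
    """Render an nginx upstream block from (host, port) pairs."""
    servers = "\n".join(f"    server {h}:{p};" for h, p in backends)
    return f"upstream {name} {{\n{servers}\n}}\n"

# Hypothetical: two home-timeline replicas on the compose network.
block = nginx_upstream("home_timeline",
                       [("home-timeline-service", 9090),
                        ("home-timeline-service-2", 9090)])
print(block)
```

Because containers on the same user-defined Docker network can resolve each other by hostname, using the `--hostname` values instead of raw IPs keeps the config stable across restarts.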

Why does QPS not increase with more connections and threads for the "read home timelines" query?

CPU utilization and disk I/O have not reached a bottleneck. I got the results below.

./wrk -D exp -t 10 -c 1000 -d 120 -L -s ./scripts/social-network/read-home-timeline.lua http://IP:8080/wrk2-api/home-timeline/read -R 1 --timeout 5s
...
  Thread Stats   Avg      Stdev     99%   +/- Stdev
    Latency     0.99m    23.78s    1.65m    66.92%
    Req/Sec     2.18k   195.02     2.40k    66.67%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%    1.10m
 75.000%    1.33m
 90.000%    1.41m
 99.000%    1.65m
 99.900%    1.82m
 99.990%    1.92m
 99.999%    1.96m
100.000%    1.98m
...
----------------------------------------------------------
  2557431 requests in 2.00m, 485.35MB read
  Socket errors: connect 0, read 0, write 0, timeout 11725
Requests/sec:  21317.48
Transfer/sec:      4.05MB

So, is the average QPS 2.18k or 21317.48? And why does QPS not increase with more connections and threads when there is no CPU or disk I/O bottleneck?

Is it possible that the benchmark itself is the bottleneck?
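For what it's worth, the two numbers are consistent if wrk2's Thread Stats "Req/Sec" is read as a per-thread average (my reading of the output above, not something stated in the issue): 10 threads at roughly 2.18k req/s each lands close to the reported total of 21317.48 Requests/sec.

```python
threads = 10              # -t 10 in the wrk command above
per_thread_qps = 2.18e3   # "Req/Sec" from Thread Stats
reported_total = 21317.48 # "Requests/sec" from the summary

estimate = threads * per_thread_qps
assert abs(estimate - reported_total) / reported_total < 0.05  # agree within ~5%
```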

Are all features of social network from paper implemented?

Hi! In the paper, Fig. 4 and the corresponding text mention several features that I do not see running within the image. Can you clarify what is already implemented, what is planned, and what will not be implemented?

The image supports following 12 binaries:

  • ComposePostService
  • HomeTimelineService
  • MediaService
  • PostStorageService
  • SocialGraphService
  • TextService
  • UniqueIdService
  • UrlShortenService
  • UserMentionService
  • UserService
  • UserTimelineService
  • WriteHomeTimelineService

These do not map 1:1 to Fig. 4; at a minimum, I am missing:

  • Ads
  • User recommender engine
  • Search service
  • User statistics

The following features might be implemented within the existing binaries:

  • Sending messages between users
  • Follow, unfollow and block users
  • Favorite and repost posts

I would like to use your benchmark for something like process mining, so it would be ideal to understand exactly what is happening, and I was not able to work that out from the source code.

Thank you for clarifying this :).

Jaeger collects latencies different from wrk2

In the social network, the latency of read-user-timeline requests collected by wrk is inconsistent with the latency collected by Jaeger.

In the data collected by Jaeger, some traces reach the seconds level, while the latency information from wrk (0.txt) is at the millisecond level.

Have you encountered this situation?

ComposePostService fails to compile

I'm trying to build the microservices manually from the source but ComposePostService from the social network benchmark does not compile. I get the following error:

/usr/bin/ld: ComposePostService: hidden symbol `_ZN5boost6chrono12steady_clock3nowEv' in /usr/local/lib/libboost_chrono.a(chrono.o) is referenced by DSO
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
src/ComposePostService/CMakeFiles/ComposePostService.dir/build.make:214: recipe for target 'src/ComposePostService/ComposePostService' failed

To my understanding, this particular microservice doesn't even use libboost_chrono (it uses the standard library's chrono), yet somehow it is getting linked.

aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host 127.0.0.1:8080 while running init_social_graph.py

Hi,

The error below occurs when I run init_social_graph.py.

Traceback (most recent call last):
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/connector.py", line 924, in _wrap_create_connection
await self._loop.create_connection(*args, **kwargs))
File "/home/test/anaconda3/lib/python3.6/asyncio/base_events.py", line 778, in create_connection
raise exceptions[0]
File "/home/test/anaconda3/lib/python3.6/asyncio/base_events.py", line 765, in create_connection
yield from self.sock_connect(sock, address)
File "/home/test/anaconda3/lib/python3.6/asyncio/selector_events.py", line 450, in sock_connect
return (yield from fut)
File "/home/test/anaconda3/lib/python3.6/asyncio/selector_events.py", line 480, in _sock_connect_cb
raise OSError(err, 'Connect call failed %s' % (address,))
ConnectionRefusedError: [Errno 111] Connect call failed ('127.0.0.1', 8080)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "init_social_graph.py", line 74, in
loop.run_until_complete(future)
File "/home/test/anaconda3/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
return future.result()
File "init_social_graph.py", line 39, in register
resps = await asyncio.gather(*tasks)
File "init_social_graph.py", line 13, in upload_register
async with session.post(addr + "/wrk2-api/user/register", data=payload) as resp:
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/client.py", line 1005, in aenter
self._resp = await self._coro
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/client.py", line 476, in _request
timeout=real_timeout
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/connector.py", line 522, in connect
proto = await self._create_connection(req, traces, timeout)
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/connector.py", line 854, in _create_connection
req, traces, timeout)
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/connector.py", line 992, in _create_direct_connection
raise last_exc
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/connector.py", line 974, in _create_direct_connection
req=req, client_error=client_error)
File "/home/test/.local/lib/python3.6/site-packages/aiohttp/connector.py", line 931, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host 127.0.0.1:8080 ssl:None [Connect call failed ('127.0.0.1', 8080)]


Python version: 3.6.6

Could you please advise what I should do to correct this?
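ConnectionRefusedError on 127.0.0.1:8080 usually means nothing is listening there, so before rerunning the script it can help to confirm the nginx frontend container is up and the port mapping matches. A minimal check, independent of the benchmark itself:

```python
import socket

def endpoint_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not endpoint_reachable("127.0.0.1", 8080):
    print("Nothing listening on 127.0.0.1:8080 -- check `docker ps` "
          "and the port mapping in docker-compose.yml")
```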

Not able to establish connection between services - Hotel Reservation

I ran your Hotel Reservation microservices and followed the steps as described. When I generate load using wrk, all requests fail with HTTP 500 errors; everything is counted under "Non-2xx or 3xx responses". The error I caught is:

rpc error: code = Unavailable desc = there is no address available

Kindly help me solve this issue. Also, when I check the service traces in Jaeger, only "jaeger-query" is shown in the service list.

Request generator for hotelReservation is not working.

I deployed hotelReservation and drove it with the command ./wrk -D exp -t 4 -c 4 -d 10 -L -s ./wrk2_lua_scripts/mixed-workload_type_1.lua http://localhost:5000 -R 300, but the results show no successful requests. What could be the problem?

#[Mean    =        0.758, StdDeviation   =        2.194]
#[Max     =       39.872, Total count    =         1493]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  3000 requests in 10.00s, 785.16KB read
  Non-2xx or 3xx responses: 3000
Requests/sec:    299.99
Transfer/sec:     78.51KB

Social Network Init Social Graph Script

Hi,
I am still new to python3's asyncio library.

I am not able to understand why the tasks list in the functions initializing the nodes and edges is not being reset or cleared after every 200 iterations.
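For context, the batching pattern in question usually looks like the sketch below (upload() is a hypothetical stand-in for the script's HTTP POST coroutine, not its actual code). Awaiting gather every 200 tasks and then resetting the list is what bounds the number of in-flight requests; without the reset, the batch would grow without limit.

```python
import asyncio

async def upload(i):
    """Stand-in for the script's HTTP POST coroutine (hypothetical)."""
    await asyncio.sleep(0)
    return i

async def run_batched(n_items, batch=200):
    """Issue n_items uploads with at most `batch` in flight at a time."""
    results, tasks = [], []
    for i in range(n_items):
        tasks.append(upload(i))
        if len(tasks) == batch:
            results += await asyncio.gather(*tasks)
            tasks = []  # reset: starts the next batch empty
    if tasks:
        results += await asyncio.gather(*tasks)  # flush the final partial batch
    return results

print(len(asyncio.run(run_batched(450))))
```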

Container exits when running individually of Social Network microservice

I ran the Social Network microservices using the documented procedure. When I run docker ps, all containers show status UP, meaning they run continuously. My issue is that when I run a container separately, it does not keep running. Below is one of the services in the docker-compose file:
social-graph-service: image: yg397/social-network-microservices hostname: social-graph-service restart: always entrypoint: SocialGraphService

When I run it using the command sudo docker run -d --restart always --entrypoint SocialGraphService --hostname social-graph-service yg397/social-network-microservices, its status never stays UP; docker ps shows it restarting and then exiting again.

Can't use ./wrk on Ubuntu 20.04

The version of libssl in Ubuntu 20.04 is 1.1, but the prebuilt ./wrk still links against version 1.0.0, so I can't use ./wrk to generate load on Ubuntu 20.04. Please upgrade wrk to support libssl-dev 1.1.

./wrk -D exp -t 2 -c 2 -d 10s -L -s ./scripts/social-network/compose-post.lua http://localhost:8082/wrk2-api/post/compose -R 1
./wrk: error while loading shared libraries: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory

When I check the libssl-dev version on Ubuntu 20.04:

apt-cache policy libssl-dev
libssl-dev:
  Installed: (none)
  Candidate: 1.1.1f-1ubuntu2
  Version table:
     1.1.1f-1ubuntu2 500
        500 http://mirrors.163.com/ubuntu focal/main amd64 Packages

End-to-end latency seems inaccurate

I have checked the end-to-end latency and the latency reported by wrk2. Neither seems correct; both appear to be calculated from the latency between servers rather than from the request-response time. If you use curl to generate a single request instead, you will see a very different latency from what wrk2 reports. However, this is not a bug in the DeathStarBench applications but rather in wrk2.

Is there any chance to run DeathStarBench in a serverless mode?

Hi there,
I am trying to modify the services so that all services are down until a request arrives; the containers then spin up automatically to handle the request and are destroyed after sending the response.
Are there any useful suggestions, or other ideas to make the benchmark more flexible and useful?

nginx pod is not running

Hello,

I am trying to deploy the media benchmark with Kubernetes, but the nginx pod is not running.
Using "kubectl describe" I get the following message:
MountVolume.SetUp failed for volume "nginx-conf" : hostPath type check failed: /root/DeathStarBench/mediaMicroservices/nginx-web-server/conf/nginx-k8s.conf is not a file

I think this problem is related to setting the resolver to the FQDN of core-dns or kube-dns. I changed the resolver in the path mentioned in the README, but it is not working.

Could anyone help? Is there any other reason besides setting the FQDN?
Also, I am not too familiar with Kubernetes, so how can I be sure that I am setting the right value for the FQDN?

Need more detailed information about dependencies between containers and files

Over the past three weeks, our team decided to use socialNetwork as the benchmark for our research; therefore, I needed to deploy all 30 containers on a cluster. Because the dependencies between containers are not documented, it took considerable effort to resolve all the errors that occurred during deployment.

Now socialNetwork works well on the cluster, and next I will focus on the hotelReservation project.

I recommend describing the applications in more detail, such as the dependencies between containers and how configs like config.json in hotelReservation work, so that users like me can use your benchmark more easily.

Changing workload

I'd like to use the social network scenario to evaluate an autoscaling policy.

By reading the documentation, I've found that I can send requests to the service as follows:
./wrk -D exp -t <num-threads> -c <num-conns> -d <duration> -L -s ./scripts/social-network/compose-post.lua http://localhost:8080/wrk2-api/post/compose -R <reqs-per-sec>

Using this command, the client sends requests to the server at a fixed rate (-R). Is there a way to send requests following a varying requests-per-second pattern?
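wrk2's -R flag is fixed for the lifetime of one run, so one workaround (a sketch under the assumption that restarting wrk between steps is acceptable; the script path, thread/connection counts, and sine parameters are illustrative) is to drive wrk in a loop with a piecewise-constant rate schedule:

```python
import math
import shlex
import subprocess

def rate_schedule(base=500, amplitude=300, period_s=600, step_s=60):
    """Piecewise-constant sinusoidal QPS schedule as (duration_s, rate) steps."""
    steps = period_s // step_s
    return [(step_s, int(base + amplitude * math.sin(2 * math.pi * i / steps)))
            for i in range(steps)]

def drive(url, script, schedule):
    """Run one wrk invocation per step; each step uses a different -R rate."""
    for duration, rate in schedule:
        cmd = f"./wrk -D exp -t 4 -c 100 -d {duration} -L -s {script} {url} -R {rate}"
        subprocess.run(shlex.split(cmd), check=True)

# Example: a 10-minute sine wave between ~200 and ~800 req/s in 60 s steps.
# drive("http://localhost:8080/wrk2-api/post/compose",
#       "./scripts/social-network/compose-post.lua", rate_schedule())
```

Restarting wrk at each step does reset its open connections, so very short steps will blur the intended pattern with connection-setup overhead.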

Is this an active project?

Hi,
First of all, thanks for this amazing repository; there is lots of good material here that will definitely help beginners like me.
But I'm wondering: is this an active repository? Are you still supporting it, and will you support it in the future?

K8s deployment and attached volumes

Considering the social network benchmark, there is an issue with the deployment of components: nginx-thrift and media-frontend.

Both deployments attach volumes that are defined as local host folders. When the benchmark is deployed in a distributed environment, those folders and files cannot be found.

volumes:
      - name: lua-scripts
        hostPath: 
          path: /root/DeathStarBench/socialNetwork/nginx-web-server/lua-scripts-k8s
          type: Directory

This should be a remote folder.

Questions about the QPS sensitivity in the cpu frequency

Hello,
I have several questions after reading your ASPLOS paper and utilizing DeathStarBench.

In Fig. 12 of the DeathStarBench paper, it seems that the QPS limit without QoS violation (i.e., QPS at the same tail latency) increases faster than the frequency does.
To be specific, I would expect the system's performance to increase at most 2x when the CPU frequency doubles. But in the Social Network graph in Fig. 12, the QPS at 2400MHz is more than twice the QPS at 1200MHz.
I also observed the same tendency after building and running DeathStarBench.

I am curious about the reason.
Can you explain why the QPS limit increases faster than the frequency does?

Social Network Development

Hi, my name is Luiz Eduardo Magalhães Ferreira. I'm a software engineering student from Brazil. The reason for this contact is to clarify some premises: I'm currently developing a social network and would like to know if I can use your software architecture as a base and, if so, whether the specific version I develop can be associated with a private company that I intend to found in the future. That is my only concern. I read the GNU license, but some things were not clear to me. I apologize in advance for asking something that may already be expressly written somewhere; I just want to make sure not to infringe on your work, which I'm truly impressed and fascinated by. Thanks for your attention. If you need to talk to me, you can contact me here or through my personal email: [email protected]

hotelReservation workload generation file missing

The wrk executable used to generate workloads is missing from the latest commit; it was present before the latest set of commits. It would also be great if you could provide a way to build the wrk binary, like in mediaMicroservices and socialNetwork, where we run make in the wrk2 folder to build it.

[Question] Jaeger sampling policy

Hello, I have been using Jaeger for latency measurements of the various services.

What is the default sampling policy used by Jaeger? Can this be changed? Also for benchmarking the performance of the various microservices, what is the best recommended policy?

Data initialization in Hotel Reservation

According to issue #41, database initialization happens in cmd/*/db.go. Is it possible that database initialization also happens elsewhere?

I have disabled the default initialization and added some static data in each cmd/*/db.go file. I expected the code to pick up the static input I fed it, but with the rebuilt code I am still seeing much of the earlier initialization (which I had hoped was impossible after commenting out the default initialization).

It would be great if a rough program flow could be given on how the service comes up, where the DB is being initialized and how to change the input.

how to deploy this system?

Hello there,

As a beginner, I followed the steps listed at https://github.com/delimitrou/DeathStarBench/tree/master/mediaMicroservices

I followed the instructions, and everything worked well until docker-compose up -d. Building the services looked fine, but then I hit the following error:

====================================================
ERROR: for search Cannot create container for service search: Requested CPUs are not available - requested 2, available: 0-1

ERROR: for mongodb-profile Cannot create container for service mongodb-profile: Requested CPUs are not available - requested 12, available: 0-1

ERROR: for memcached-profile Cannot create container for service memcached-profile: Requested CPUs are not available - requested 9, available: 0-1

ERROR: for jaeger Cannot create container for service jaeger: Requested CPUs are not available - requested 17, available: 0-1

ERROR: for mongodb-user Cannot create container for service mongodb-user: Requested CPUs are not available - requested 16, available: 0-1

It seems this is not meant to be deployed on a single machine, so I'm wondering whether there are detailed instructions for deploying the services on a cluster? Thank you.

Request for the original Dockerfiles of several Docker images

Hi there,
I am trying to make the benchmark run on the ARM platform, so I need to rebuild the following Docker images:

  • yg397/social-network-microservices
  • yg397/openresty-thrift
  • yg397/media-frontend

However, the image history of these images is not enough to rebuild them: there are opaque sha256 strings in the history, so I cannot reconstruct the images from it.

For example, when I try to rebuild the image yg397/social-network-microservices using image history:

=> ERROR [2/10] ADD file:c02de920036d851cccaedd7f9ed93d48c960ada8e7e839bd2e79ab7b0d7a12d6 in /                                              0.0s
=> CACHED [3/10] RUN /bin/sh -c set -xe       && echo '#!/bin/sh' > /usr/sbin/policy-rc.d      && echo 'exit 101' >> /usr/sbin/policy-rc.d  0.0s
=> CACHED [4/10] RUN /bin/sh -c rm -rf /var/lib/apt/lists/*                                                                                 0.0s
=> CACHED [5/10] RUN /bin/sh -c mkdir -p /run/systemd     && echo 'docker' > /run/systemd/container                                         0.0s
=> CACHED [6/10] RUN /bin/sh -c apt-get update       && apt-get install -y ca-certificates g++ cmake wget git libmemcached-dev automake bi  0.0s
=> CACHED [7/10] RUN /bin/sh -c ldconfig                                                                                                    0.0s
=> ERROR [8/10] COPY dir:0a51c3cb96020cc5f03775907749b78d69d3326d673b3ab700a82110fe47d959 in /social-network-microservices                  0.0s
------
> [2/10] ADD file:c02de920036d851cccaedd7f9ed93d48c960ada8e7e839bd2e79ab7b0d7a12d6 in /:
------
------
> [8/10] COPY dir:0a51c3cb96020cc5f03775907749b78d69d3326d673b3ab700a82110fe47d959 in /social-network-microservices:
------
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/dir:0a51c3cb96020cc5f03775907749b78d69d3326d673b3ab700a82110fe47d959" not found: not found

Could you provide the original Dockerfiles, or tell me how to build these images from a Dockerfile?

Social network set up problem

Hi
I am working on a project and trying to use this benchmark. I noticed that almost all the servers run inside Docker containers. I use wrk2 against port 8080, which gives me numbers, but I want to make sure that is right: when I use elinks to connect to localhost:8080 it is unavailable, while port 8082 works; I believe that is the real server, since I can see posts, the login page, etc. Since running wrk2 against a plain web page also returns numbers, and port 8082 also has nginx workers running, please tell me which part of the setup I got wrong. Should I open the nginx server or not, and how do I set it up?

Across-server deployment

Hello, how to deploy this application to multiple servers?

Languages used in repo vs paper

I was reading through the paper, and Table 1 mentions that, e.g., the Movie Reviewing application has 20% of its lines written in Java. But browsing the repo, it seems that all services are implemented in C++. Has the implementation changed since the paper was published?

Thank you,

socialnetwork_nginx-thrift_1 keeps restarting (image: yg397/openresty-thrift:xenial)

Hi,

I am deploying socialnetwork on a single machine with docker-compose. The nginx-thrift container (from image yg397/openresty-thrift:xenial) keeps restarting after docker-compose up and fails to provide service. The container log shows the following error:

2019/10/28 10:44:29 [error] 1#1: init_by_lua error: /gen-lua/social_network_ttypes.lua:31: attempt to index local '__TObject' (a nil value)
stack traceback:
	/gen-lua/social_network_ttypes.lua:31: in main chunk
	[C]: in function 'require'
	/gen-lua/social_network_UserTimelineService.lua:18: in main chunk
	[C]: in function 'require'
	init_by_lua:7: in main chunk
nginx: [error] init_by_lua error: /gen-lua/social_network_ttypes.lua:31: attempt to index local '__TObject' (a nil value)
stack traceback:
	/gen-lua/social_network_ttypes.lua:31: in main chunk
	[C]: in function 'require'
	/gen-lua/social_network_UserTimelineService.lua:18: in main chunk
	[C]: in function 'require'
	init_by_lua:7: in main chunk

In addition, I found that the nginx port number is slightly inconsistent throughout the project. Have you decided to settle on 8082:8080 for nginx? The docker-compose.yml, init_social_graph.py, and the wrk usage examples in README.md all mention the port.

By the way, after issue #11, will DeathStarBench still support single-node experiments with Docker in the future, or is Kubernetes the suggested path for upcoming benchmarks?


  • Environment:
    • OS: Ubuntu 18.04
    • docker-compose version: 1.22.0, build f46880fe
  • Reproduce:
    • git clone https://github.com/delimitrou/DeathStarBench.git
    • cd DeathStarBench/socialNetwork
    • docker-compose up -d
    • docker container ls -a

Thank you.

Best regards,
Luke

NO x.x.x.x found in config.json

The hotelReservation README says I need to replace x.x.x.x in the config file with my own address so that the application works. But I only found many service addresses with ports in the config file, and the default values are all localhost.

Does the application use this file's information to find the other services, and how does that work?
I see that you use Consul to manage service addresses, but after docker-compose up -d I can only find Consul itself at http://127.0.0.1:8500/. Is there anything wrong with my setup?

Why does QPS tend to decrease when running the "compose posts" workload?

I ran the script below 100 times to check QPS ("Requests/sec") in the results, and it seemed to decrease from 852 to 695. I am wondering where the bottleneck is. When I cleared the volume MongoDB uses to store data, the QPS increased again.

./wrk -D exp -t 2 -c 2 -d 10 -L -s ./scripts/social-network/compose-post.lua http://localhost:8080/wrk2-api/post/compose -R 1

Hardcoded url in JavaScript of frontend

Hi,

Starting the images using docker-compose up leads to the frontend having an incorrect URL in the JavaScript code for its forms.

http://localhost:8080/compose.html

function clickEvent(){
    if (document.getElementById('media').value != "") {
        var formData = new FormData(document.getElementById('media-form'));
        const Http = new XMLHttpRequest();
        const url='http://ath-8.ece.cornell.edu:8081/upload-media';
        Http.onreadystatechange = function() {
            if (this.readyState == 4 && this.status == 200) {
                var resp = JSON.parse(Http.responseText);
                uploadPost(resp);
           }
        };

        Http.open("POST", url, true);
        Http.send(formData);
    } else {
        uploadPost()
    }

}

Exactly this line:

const url='http://ath-8.ece.cornell.edu:8081/upload-media';

The same issue exists with the uploadPost function in compose.html, with clickEvent in user-timeline.html (the url and img.src variables), and likewise in home-timeline.html. Changing the address to const url='http://localhost:8081/upload-media'; worked in compose: the file was correctly uploaded and stored in the DB.
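Until the images are fixed, one workaround (a hypothetical helper, not part of the repo; the regex assumes the hard-coded host is always ath-8.ece.cornell.edu) is to rewrite the host in the frontend HTML pages before mounting them:

```python
import pathlib
import re

def patch_frontend_host(root, new_host="localhost"):
    """Rewrite the hard-coded upload host in every HTML page under `root`,
    preserving whatever port number follows it."""
    pattern = re.compile(r"http://ath-8\.ece\.cornell\.edu:(\d+)")
    for page in pathlib.Path(root).glob("*.html"):
        text = page.read_text()
        page.write_text(pattern.sub(rf"http://{new_host}:\1", text))
```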

Error with running social network test cases

Take testComposePost.py, for example: how should I update the socket IP and port?

#socket = TSocket.TSocket("ath-8.ece.cornell.edu", 10001)
socket = TSocket.TSocket("10.151.154.11", 10001)
python3 testComposePost.py
Could not connect to any of [('10.151.154.11', 10001)]

If I update the port to 8080 (the exposed port of the socialnetwork_nginx-thrift_1 container), a "TSocket read 0 bytes" error occurs.

Can Zipkin be used for tracing?

I found that this project uses Jaeger for tracing; can I use Zipkin instead?
There seems to be a config setting, COLLECTOR_ZIPKIN_HTTP_PORT=9411, but Zipkin is not working. Do I need some additional configuration? Thanks!

lua cannot find the socket module

I'd like to use this benchmark, but I'm having some problems with the Lua socket module.

I've launched docker-compose and built the files in the wrk2 folder.
Then I tried to run the experiments, but I received the errors shown in the enclosed log file.

Apparently, when the ./wrk executable runs, it overrides lua_path, and the scripts cannot find the lua-socket component. However, if I directly run "lua -l socket", it correctly loads the 'socket' module.
How can I prevent this from happening?

Furthermore, when I run ./wrk, all the requests receive a "non-2xx or 3xx" response; I think this is related to the lua-socket problem above. When I open a browser, I can reach the localhost:8080, localhost:8081, and localhost:16686 services without problems. What REST APIs are invoked, so that I can try them manually using Postman?

Thanks
Matteo

log.txt

Failed to upload movie_id to compose-review-service

Hello,

I have been using DeathStarBench/mediaMicroservices for my research project. I have set up the application on Kubernetes using the k8s YAML files (thanks for providing those), with Istio sidecar injection enabled (because I am working on traffic-routing rules between service versions).

When I first boot all the pods, everything seems to work. However, after a while (continuously running the compose-review workload for a couple of hours), my pods (unique-id-service, movie-id-service, text-service, and user-service) all start getting the same error: "Failed to upload movie_id to compose-review-service".

Example log from "user-service":
TSocket::write_partial() send() <Host: compose-review-service Port: 9090>: Broken pipe
error: (UserHandler.h:502:UploadUserWithUsername) Failed to upload movie_id to compose-review-service

Can you help me with this?
Best!
