Dependencies for SlipStream server.
The missing package prevents running the libcloud-dependent cloud connectors after the migration to libcloud 0.18.0.
With the current 4096-bit setting it takes more than 18 minutes to generate, and it is done from the slipstream-server-nginx-conf RPM post-install script [1]. It takes too long and artificially increases deployment times in CI.
Proposed solution: have slipstream.sh run
openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096 && systemctl restart nginx
after the RPM installation, and probably only on the enterprise edition. [1]
982 ? Ssl 0:00 python /opt/slipstream/client/sbin/slipstream-node -v start
1016 ? S 0:00 \_ /bin/bash /var/lib/slipstream/module_slipstream_ss-ci-pipeline_ss-deployer_3535__deployment
1021 ? S 0:00 \_ /bin/bash ./ss-deployer.sh
22451 ? S 0:00 \_ /bin/bash /tmp/ss-install-ref-conf.sh -r http://nexus.sixsq.com/service/local/artifact/maven/redirect?r=snapshots-enterprise-rhel7&g=com.sixsq.slipstream&a=SlipStre
22471 ? S 0:00 \_ bash ./slipstream.sh -S -k snapshot -e community -d hsqldb
25012 ? S 0:01 \_ /usr/bin/python /usr/bin/yum install -y slipstream-server-nginx-conf-community
25137 ? S 0:00 \_ /bin/sh /var/tmp/rpm-tmp.Lzgroi 1
25153 ? S 0:00 \_ /bin/sh /opt/slipstream/server/sbin/generate-dh-params.sh
25154 ? R 18:03 \_ openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096
And indeed:
[root@deployer1979c2a75-94bd-4990-9cba-2ecc2394cda1 ~]# openssl dhparam -out /tmp/blah.pem 4096
Generating DH parameters, 4096 bit long safe prime, generator 2
This is going to take a long time
.................................................................+......................................................................................+......................................................................................................................................................................^C
[root@deployer1979c2a75-94bd-4990-9cba-2ecc2394cda1 ~]#
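A sketch of decoupling the generation from the RPM transaction. The helper below is an assumption (function name and the skip-if-present check are not from the project); production would use 4096 bits.

```shell
#!/bin/bash
# Generate DH parameters only if the file is missing or empty, so an
# already-generated or pre-seeded dhparam.pem skips the ~18 min step.
ensure_dhparam() {
    local out="$1" bits="${2:-4096}"
    if [ ! -s "$out" ]; then
        openssl dhparam -out "$out" "$bits" 2>/dev/null
    fi
}
```

The post-install script could then run `ensure_dhparam /etc/nginx/ssl/dhparam.pem 4096 && systemctl restart nginx &` detached, returning immediately instead of blocking yum for the full generation time.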
The jar file name is currently SlipStreamServiceOfferAPI.jar. Change this to something more relevant.
Related to slipstream/SlipStream#215
Related to https://github.com/SixSq/tasklist/issues/895
Update the nginx webui configuration so that all URLs below /webui are routed to the single-page application at /webui/index.html.
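A minimal nginx sketch of such routing; the try_files approach is an assumption, not the project's actual configuration:

```
location /webui/ {
    # Serve the requested file if it exists, otherwise fall back to the
    # SPA entry point so client-side routing can handle the URL.
    try_files $uri /webui/index.html;
}
```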
At the moment the cache is under /var by default. If the partition gets full, nginx sends truncated files. We need to be more flexible with this. Moving it to /var/run/slipstream/nginx/cache.
ss-backup-put <s3-bucket-url> <backup-file>
that would substitute s3curl.pl.
package.os is provided by the root POM SlipStream/pom.xml.
Move the riemann dependency from the enterprise repository to the community repository.
The script uses set -e, which forces it to return before the output from the S3 utility is actually printed.
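A minimal illustration of the failure mode and one possible fix; the actual S3 utility call is stood in by `false`, and the variable names exist only for this sketch:

```shell
#!/bin/bash
set -e
# Capturing the exit status with '|| rc=$?' keeps 'set -e' from aborting
# the script, so the S3 utility's output can still be printed afterwards.
rc=0
out=$(echo "S3 utility output"; false) || rc=$?
echo "$out"                      # printed even though the command failed
echo "upload exit status: $rc"
```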
The backup timestamp file should be stored as /var/run/slipstream/slipstream-backup-timestamp.
The backup launched by cron runs as the slipstream user. It should be ensured that /var/run/slipstream exists and is writable by the slipstream user.
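On CentOS 7, /var/run is a tmpfs that is recreated at every boot, so the directory has to be provisioned explicitly. One way to do that (an assumption, not necessarily how the package handles it) is a systemd-tmpfiles entry:

```
# /etc/tmpfiles.d/slipstream.conf
# Create /var/run/slipstream at boot, owned and writable by 'slipstream'.
d /var/run/slipstream 0755 slipstream slipstream -
```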
/var/log/slipstream-backup-timestamp
-> /var/log/slipstream/slipstream-backup-timestamp
Depends on slipstream/SlipStreamServer#543
The slipstream user should be used. The backup script should check the effective user name and exit with an error if it is not 'slipstream'.
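A sketch of such a guard; the function name is an assumption:

```shell
#!/bin/bash
# Refuse to proceed unless the effective user matches the expected one.
require_user() {
    local wanted="$1" actual
    actual=$(id -un)
    if [ "$actual" != "$wanted" ]; then
        echo "error: must run as '$wanted', not '$actual'" >&2
        return 1
    fi
}
```

The backup script would then start with `require_user slipstream || exit 1`.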
On Mac OS, the build raises an error because the RPM utility is not present, but installing RPM would also make the build fail. Therefore the RPM build should be run conditionally from the pom.xml file.
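One way to do this (a sketch; the profile id and detection method are assumptions) is a Maven profile that activates the RPM packaging only on Linux:

```xml
<!-- In pom.xml: run the RPM packaging only where RPM tooling exists. -->
<profiles>
  <profile>
    <id>rpm-build</id>
    <activation>
      <os>
        <family>linux</family>
      </os>
    </activation>
    <!-- rpm build plugin configuration goes here -->
  </profile>
</profiles>
```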
The installation of phantomjs is missing a link to /usr/bin/phantomjs. The pricing information providers will not run without this link.
Ensure that requests proxied through nginx always have an empty value for the "slipstream-authn-info" header.
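A one-line sketch of how that could look in the proxy configuration. Note that with an empty value nginx does not forward the header to the backend at all, which equally prevents clients from injecting it:

```
# Clear any client-supplied value; nginx drops a header set to "" from
# the upstream request entirely.
proxy_set_header slipstream-authn-info "";
```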
Create an RPM package to facilitate the installation of phantomjs for the pricing scrapers. This is also needed for unit testing in ClojureScript.
It would be useful to be able to associate a particular user session with a given virtual host. To do this, the SNI host information must be passed from nginx to the backend services. Add a request header to provide this information. The variable in nginx with this value is $ssl_server_name.
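A sketch of the header; the header name is hypothetical, only the nginx variable comes from the issue:

```
# Forward the SNI host name presented by the client to the backend.
proxy_set_header X-SSL-Server-Name $ssl_server_name;
```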
It looks like it has been failing since the end of September 2016:
Wed Sep 28 16:30:01 UTC 2016
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 179 100 179 0 0 5659 0 --:--:-- --:--:-- --:--:-- 5774
{"error":{"root_cause":[{"type":"repository_missing_exception","reason":"[es_backup] missing"}],"type":"repository_missing_exception","reason":"[es_backup] missing"},"status":404}
The maximum size of a report is limited by nginx to be around 1MB by default. This should be increased as logs can be rather large. Discuss what is a good limit (2MB, 5MB?) and change the default configuration.
The file to change is /etc/nginx/conf.d/slipstream-extra/limit-reports.block:
location ~ ^/reports/ {
include conf.d/slipstream-proxy.params;
limit_conn reportslimit 10;
client_max_body_size 5M;
}
Use 5MB as the maximum for now. Developer votes ranged from 2MB to 10MB.
[2017-02-27T08:11:48,434][INFO ][o.e.c.r.a.AllocationService] [e1TurZX] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[resources-index][1], [resources-index][2], [resources-index][0]] ...]).
[2017-02-27T08:27:25,472][INFO ][o.e.m.j.JvmGcMonitorService] [e1TurZX] [gc][941] overhead, spent [346ms] collecting in the last [1s]
[2017-02-27T08:27:30,415][INFO ][o.e.s.SnapshotShardsService] [e1TurZX] snapshot [es_backup:es.snapshot.185.19.28.68.2017-02-27t0827z/58ndiqI5Q7-nXLQyvVqjew] is done
[2017-02-27T08:27:30,811][WARN ][o.e.s.SnapshotsService ] [e1TurZX] [es_backup:es.snapshot.185.19.28.68.2017-02-27t0827z/58ndiqI5Q7-nXLQyvVqjew] failed to finalize snapshot
org.elasticsearch.repositories.RepositoryException: [es_backup] concurrent modification of the index-N file, expected current generation [-1], actual current generation [0] - possibly due to simultaneous snapshot deletion requests
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.writeIndexGen(BlobStoreRepository.java:820) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.finalizeSnapshot(BlobStoreRepository.java:567) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.snapshots.SnapshotsService$5.run(SnapshotsService.java:908) [elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) [elasticsearch-5.2.1.jar:5.2.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2017-02-27T08:27:30,822][WARN ][r.suppressed ] path: /_snapshot/es_backup/es.snapshot.185.19.28.68.2017-02-27t0827z, params: {repository=es_backup, wait_for_completion=true, snapshot=es.snapshot.185.19.28.68.2017-02-27t0827z}
org.elasticsearch.repositories.RepositoryException: [es_backup] concurrent modification of the index-N file, expected current generation [-1], actual current generation [0] - possibly due to simultaneous snapshot deletion requests
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.writeIndexGen(BlobStoreRepository.java:820) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.repositories.blobstore.BlobStoreRepository.finalizeSnapshot(BlobStoreRepository.java:567) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.snapshots.SnapshotsService$5.run(SnapshotsService.java:908) ~[elasticsearch-5.2.1.jar:5.2.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:527) ~[elasticsearch-5.2.1.jar:5.2.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
The maintenance page should be returned with the appropriate HTTP status code (503), so that search engines can handle that case correctly. The custom error pages allow returning the error in the format which has been requested. By default, nginx always serves its default error pages as HTML.
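A sketch of a maintenance server block; the page name and path are assumptions:

```
# During maintenance, answer every request with 503 and serve a custom
# page for it instead of nginx's built-in HTML error page.
error_page 503 /maintenance.html;
location / {
    return 503;
}
location = /maintenance.html {
    root /usr/share/nginx/html;
    internal;
}
```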
As part of the migration to CentOS 7.
Connected to https://github.com/SixSq/tasklist/issues/618
The riemann service as deployed on Nuvla (v3.22) currently fails. This is because the sysconfig file references the wrong jar file. The jar file has changed from SlipStreamServiceOfferAPI.jar to SlipStreamRiemann.jar.
The HSQLDB code seems to work correctly with Java 1.7. Move the dependency from 1.6 to 1.7.
The backup of the server should be split into two: one for the database and one for the reports. The database backup is critical. The report backup is less so.
Create an RPM that configures a machine to use the elasticsearch repository.
The StratusLab distribution has updated its package structure. We need to change the way we extract and repackage it here.
At the moment, the link gets created in the post-install script of the slipstream-phantomjs RPM:
https://github.com/slipstream/SlipStreamServerDeps/blob/master/phantomjs/pom.xml#L114
Instead, it should be part of the RPM itself.
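A sketch of owning the link in the package itself; the phantomjs install prefix is hypothetical:

```
# In the spec file: create the symlink under the buildroot and list it in
# %files, so rpm owns it and removes it cleanly on uninstall.
%install
mkdir -p %{buildroot}%{_bindir}
ln -sf /opt/phantomjs/bin/phantomjs %{buildroot}%{_bindir}/phantomjs

%files
%{_bindir}/phantomjs
```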
Currently, an outdated version of s3curl is being used.
The current nagios backup probe actually does the full backup. This causes a problem because the backup and probe take longer than 60 seconds to complete now. Although the timeout can be extended in the definition of the probe, it cannot exceed the maximum timeout given in the NRPE configuration file on the client. The default for this parameter is 60 seconds. In any case, it is not a good idea to do significant calculations in a probe.
The implementation should be refactored such that a cron job executes the backup and the nagios probe only verifies that the backup has been done recently.
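A sketch of the cheap probe side, assuming the cron job touches the timestamp file after each successful backup; the threshold and the message format follow nagios conventions but are assumptions:

```shell
#!/bin/bash
# Check only the age of the backup timestamp; the heavy work runs in cron.
check_backup_age() {
    local stamp="$1" max_age="${2:-86400}"   # default: one day, in seconds
    if [ ! -f "$stamp" ]; then
        echo "CRITICAL: no backup timestamp at $stamp"
        return 2
    fi
    local age=$(( $(date +%s) - $(stat -c %Y "$stamp") ))
    if [ "$age" -gt "$max_age" ]; then
        echo "CRITICAL: last backup ${age}s ago"
        return 2
    fi
    echo "OK: last backup ${age}s ago"
}
```

The probe would call `check_backup_age /var/run/slipstream/slipstream-backup-timestamp`, which returns well within the 60-second NRPE timeout regardless of backup size.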