
oVirt Engine - Open Virtualization Manager

Welcome to the oVirt Engine - Open Virtualization Manager source repository.

How to contribute

Submitting patches

Patches are welcome!

Please submit patches to github.com:ovirt-engine. If you are not familiar with the review process you can read about Working with oVirt on GitHub on the oVirt website.

Found a bug or documentation issue?

To submit a bug or suggest an enhancement for oVirt Engine please use oVirt GitHub issues.

If you find a documentation issue on the oVirt website please navigate and click "Report an issue on GitHub" in the page footer.

Still need help?

If you have any other questions, please join oVirt Users forum / mailing list and ask there.

Developer mode installation

Preparations

Prerequisites

Install the following system components:

  • java-11-openjdk-devel

  • mime-types or mailcap

  • unzip

  • openssl

  • bind-utils

  • postgresql-server >= 12.0

  • postgresql >= 12.0

  • postgresql-contrib >= 12.0

  • python3-dateutil / dateutil

  • python3-cryptography / cryptography

  • python3-m2crypto / m2crypto

  • python3-psycopg2 / psycopg

  • python3-jinja2 / Jinja2

  • python3-libxml2 / libxml2[python]

  • python3-daemon

  • python3-otopi >= 1.10.0

  • python3-ovirt-setup-lib

  • maven >= 3.5

  • ansible-core >= 2.12.0

  • ansible-runner >= 2.1.3

  • ovirt-ansible-roles >= 1.2.0

  • ovirt-imageio-daemon >= 2.0.6

  • ovirt-engine-metrics (optional)

  • ovirt-provider-ovn (optional)

  • python3-ovirt-engine-sdk4 (optional)

  • ansible-lint / python3-ansible-lint (optional)

  • python3-flake8 / pyflakes (optional)

  • python3-pycodestyle / pycodestyle (optional)

  • python3-isort (optional)

  • python3-distro

Note on Java versions

The project is built and run using Java 11; branches 4.3 and earlier are excluded.

Prepare your dev environment for java 11

  • Use alternatives command to configure java and javac to version 11:

$ sudo alternatives --config java

There are 4 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
   1           java-latest-openjdk.x86_64 (/usr/lib/jvm/java-12-openjdk-12.0.1.12-1.rolling.fc30.x86_64/bin/java)
   2           java (/opt/jdk-9/bin/java)
   3           java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.3.7-5.fc30.x86_64/bin/java)
*+ 4           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.fc30.x86_64/jre/bin/java)

Enter to keep the current selection[+], or type selection number: 3

$ sudo alternatives --config javac

There are 2 programs which provide 'javac'.

  Selection    Command
-----------------------------------------------
*+ 1           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.fc30.x86_64/bin/javac)
   2           java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.3.7-5.fc30.x86_64/bin/javac)

Enter to keep the current selection[+], or type selection number: 2

  • Export `JAVA_HOME` if `mvn` is not executing using Java 11:

# put this in your ~/.bashrc
$ export JAVA_HOME=/usr/lib/jvm/java-11

$ mvn -v | grep "Java version: "
Java version: 11.0.4, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-11-openjdk-11.0.4.11-0.fc30.x86_64
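
To confirm from a script that Maven picked up Java 11, the "Java version:" line can be parsed like this (a minimal sketch; mvn_java_major is a hypothetical helper name, not part of the project):

```shell
# mvn_java_major: read `mvn -v` output on stdin and print the major
# Java version from the "Java version: ..." line (hypothetical helper).
mvn_java_major() {
  sed -n 's/^Java version: \([0-9][0-9]*\)\..*/\1/p'
}

# Usage: mvn -v | mvn_java_major   # should print 11
```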

WildFly 15 is required along with ovirt-engine-wildfly-overlay. The preferred way is to install the following packages:

  • ovirt-engine-wildfly

  • ovirt-engine-wildfly-overlay

Both packages can be installed from the oVirt COPR CentOS repositories. The repository list can be updated using the following commands:

$ sudo dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-8
$ sudo dnf install -y ovirt-release-master

OVN/OVS is an optional dependency. If you want to use it, check the requirements in the ovirt-engine.spec.in file for a list of packages. Otherwise, you should reply 'No' when asked about it by engine-setup.

System settings

Building locales requires at least 10240 file descriptors. Create the following file, replacing <user> with the user used for building, then log out and log in again:

/etc/security/limits.d/10-nofile.conf
<user> hard nofile 10240
#<user> soft nofile 10240  # optional, to apply automatically

If the soft limit was not set, apply the new limit before building using:

$ ulimit -n 10240
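
A build script can verify the limit up front (a sketch; fd_limit_ok is a hypothetical name):

```shell
# fd_limit_ok [N]: succeed only if the current soft file-descriptor
# limit is at least N (default 10240), the minimum for locale builds.
fd_limit_ok() {
  [ "$(ulimit -Sn)" -ge "${1:-10240}" ]
}

# Usage: fd_limit_ok || ulimit -n 10240
```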

Development environment by default uses ports 8080 (HTTP), 8443 (HTTPS), 8787 (java debug), and 54323 (ovirt-imageio-proxy) so make sure they are accessible from the outside. For example:

firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --add-port=8443/tcp --permanent
firewall-cmd --add-port=8787/tcp --permanent
firewall-cmd --add-port=54323/tcp --permanent

If you also want to connect to the database from the outside:

firewall-cmd --add-port=5432/tcp --permanent

Finally, apply changes using:

firewall-cmd --reload

If compiling in a virtual machine, javac might experience difficulties on guests with dynamically growing RAM, so it is better to set the VM's starting memory allocation and maximum allocation to the same value.

PostgreSQL accessibility

Initialize PostgreSQL configuration files:

$ sudo postgresql-setup --initdb --unit postgresql

Configure PostgreSQL to accept user and password:

Locate pg_hba.conf within your distribution, common locations are:

  • /var/lib/pgsql/data/pg_hba.conf

  • /etc/postgresql-*/pg_hba.conf

  • /etc/postgresql/*/main/pg_hba.conf

Within pg_hba.conf, set the method to password for 127.0.0.1/32 (IPv4) and ::1/128 (IPv6) local connections.

If you want to make postgres accessible from the outside, change 127.0.0.1/32 to 0.0.0.0/0 and ::1/128 to ::/0.
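
The edit can be scripted with sed (a sketch; set_pg_password_auth is a hypothetical helper operating on the pg_hba.conf path you located above, and GNU sed is assumed):

```shell
# set_pg_password_auth FILE: switch the auth method of the IPv4 and
# IPv6 loopback "host" entries in FILE to password (GNU sed assumed).
set_pg_password_auth() {
  sed -i -E \
    -e 's|^(host[[:space:]]+all[[:space:]]+all[[:space:]]+127\.0\.0\.1/32[[:space:]]+)[^[:space:]]+|\1password|' \
    -e 's|^(host[[:space:]]+all[[:space:]]+all[[:space:]]+::1/128[[:space:]]+)[^[:space:]]+|\1password|' \
    "$1"
}

# Usage (as root): set_pg_password_auth /var/lib/pgsql/data/pg_hba.conf
```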

Tune PostgreSQL configuration: Locate postgresql.conf within your distribution, common locations are:

  • /var/lib/pgsql/data

  • /etc/postgresql*

Within postgresql.conf, make sure the following values are set:

max_connections = 150
work_mem = 8MB
autovacuum_max_workers = 6
autovacuum_vacuum_scale_factor = 0.01
autovacuum_analyze_scale_factor = 0.075
maintenance_work_mem = 64MB

If you want to connect from the outside, set also:

listen_addresses = '*'
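
Applying these settings can be scripted with a small helper (a sketch; pg_set is a hypothetical name, and GNU sed is assumed):

```shell
# pg_set FILE KEY VALUE: set KEY = VALUE in a postgresql.conf-style
# FILE, uncommenting an existing entry or appending a new one.
pg_set() {
  local file=$1 key=$2 value=$3
  if grep -qE "^#?${key}[[:space:]]*=" "$file"; then
    sed -i -E "s|^#?${key}[[:space:]]*=.*|${key} = ${value}|" "$file"
  else
    printf '%s = %s\n' "$key" "$value" >> "$file"
  fi
}

# Usage: pg_set /var/lib/pgsql/data/postgresql.conf max_connections 150
```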

Enable and start (systemctl enable postgresql --now).

Database creation

Create a database for ovirt-engine. Usually the following sequence works to create a user named engine who owns a database named engine:

# su - postgres -c "psql -d template1"
template1=# create user engine password 'engine';
template1=# drop database engine;
template1=# create database engine owner engine template template0
encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
template1=# \q

Enable uuid-ossp extension for the database:

# su - postgres -c "psql -d engine"
engine=# CREATE EXTENSION "uuid-ossp";
engine=# \q
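
For scripted setups, the same SQL can be generated and piped into psql (a sketch; engine_db_sql is a hypothetical helper, and the user/password/database names are the defaults used throughout this document):

```shell
# engine_db_sql: print the SQL that creates the engine role and database.
engine_db_sql() {
  cat <<'EOF'
CREATE USER engine PASSWORD 'engine';
CREATE DATABASE engine OWNER engine TEMPLATE template0
  ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
EOF
}

# Usage:
#   engine_db_sql | sudo -u postgres psql -d template1
#   echo 'CREATE EXTENSION "uuid-ossp";' | sudo -u postgres psql -d engine
```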

Ansible Runner configuration

Since oVirt 4.5 the engine is integrated with ansible-core and ansible-runner, so you need to install RPM packages for both, but no additional configuration is required.

All previously used configuration for ansible-runner-service is no longer relevant and 'ansible-runner-service*' packages and configuration can be removed.

Development

Environment

The development environment is supported only under a non-root account. Do not run this sequence as root.

Each instance of the application must be installed at a different PREFIX and use its own database. Throughout this document the application is installed using PREFIX="${PREFIX}" with the engine database and user; change these if a new instance is required. Do not mix different versions of the product with the same PREFIX/database.

From this point on, the "${PREFIX}" will be used to mark the prefix in which you selected to install the development environment.

Build

To build and install ovirt-engine in your home folder under the ovirt-engine directory, execute the following command:

$ make clean install-dev PREFIX="${PREFIX}"
Note
${PREFIX} should be replaced with the location in which you intend to install the environment.
Note
Add SKIP_CHECKS=1 to disable tests.
Build targets
all

Build project.

clean

Clean project.

all-dev

Build project for development.

install-dev

Install a development environment at PREFIX.

dist

Create source tarball out of git repository.

maven

Force execution of maven.

generated-files

Create files from templates (.in files).

When creating new templates, generated files will automatically appear in .gitignore; the updated .gitignore should be part of the commit that adds the new templates.
Build customization

The following Makefile environment variables are available for build customization:

PREFIX

Installation root directory. Default is /usr/local.

BUILD_GWT

Build GWT. Default is 1.

BUILD_ALL_USER_AGENTS

Build GWT applications for all supported browsers. Default is 0.

BUILD_LOCALES

Build GWT applications for all supported locales. Default is 0.

BUILD_DEV

Add extra development flags. Usually this should not be used directly, as the all-dev sets this. Default is 0.

BUILD_UT

Perform unit tests during build. Default is 0.

BUILD_JAVA_OPTS_MAVEN

Maven JVM options. Can be defined as environment variable. Default is empty.

BUILD_JAVA_OPTS_GWT

GWT compiler and dev mode JVM options. Can be defined as environment variable. Default is empty.

Note
Note that BUILD_JAVA_OPTS_GWT overrides BUILD_JAVA_OPTS_MAVEN when building GWT applications (BUILD_JAVA_OPTS_MAVEN settings still apply, unless overridden).
DEV_BUILD_GWT_DRAFT

Build "draft" version of GWT applications without optimizations. This is useful when profiling compiled applications in web browser. Default value is 0.

The following changes are applied for draft builds:

  • Prevent code and CSS obfuscation.

  • Reduce the level of code optimizations.

On a local development environment, using GWT Super Dev Mode (see below) is preferred, as it automatically disables all optimizations and allows you to recompile the GWT application on the fly.

DEV_BUILD_GWT_SUPER_DEV_MODE

Allows debugging GWT applications via Super Dev Mode, using web browser’s JavaScript development tooling. Default value is 1.

Do a local Engine development build as you normally would. Then, start the Super Dev Mode code server as follows:

$ make gwt-debug DEV_BUILD_GWT_SUPER_DEV_MODE=1

In your browser, open http://127.0.0.1:9876/ and save the "Dev Mode On" bookmark. Next, visit the GWT application URL (as served from Engine) and click "Dev Mode On". This allows you to recompile and reload the GWT application, reflecting any changes you’ve made in the UI code.

DEV_EXTRA_BUILD_FLAGS

Any maven build flags required for building.

For example, if your machine is low on memory, limit maximum simultaneous GWT permutation worker threads:

DEV_EXTRA_BUILD_FLAGS="-Dgwt.compiler.localWorkers=1"
DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS

Any maven build flags required for building GWT applications.

By default, GWT applications are built for Firefox only. To build for additional browsers, provide a comma-separated list of user agents; see frontend/webadmin/modules/pom.xml for the full list.

For example, to build for Firefox and Chrome:

DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.userAgent=gecko1_8,safari"

To build for all supported browsers, use BUILD_ALL_USER_AGENTS=1.

For example, to build only the English and Japanese locale:

DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.locale=en_US,ja_JP"

To build for all supported locales, use BUILD_LOCALES=1.

For example, to build the engine without obfuscated JavaScript code:

DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.style=pretty"

To build the engine without obfuscated CSS styles:

DEV_EXTRA_BUILD_FLAGS_GWT_DEFAULTS="-Dgwt.cssResourceStyle=pretty"
DEV_REBUILD

Disable if only packaging components were modified. Default is 1.

PY_VERSION

Python defaults to python3 if available; use PY_VERSION=2 to override.
This option affects various services and several features written in Python.

Note
engine-setup, which runs otopi, uses a different variable, OTOPI_PYTHON
WILDFLY_OVERLAY_MODULES

Change location of WildFly overlay modules. If you want to disable WildFly overlay configuration completely, please set to empty string. Default is /usr/share/ovirt-engine-wildfly-overlay/modules.

ISORT

Set the name/location of the isort utility, which is used during make validations (also called from make install-dev). Defaults to isort. If not found, that's ok; if found, it should be at least version 5.7. The version in CentOS Stream 8 is ok; the version provided by RHEL 8 (and rebuilds), 4.3, is too old. Some ways to get a newer version:

  • dnf copr enable -y sbonazzo/EL8_collection

  • Install from pypi in a python virtualenv/venv, e.g.:

sudo dnf install python3-virtualenv
mkdir -p $HOME/venv
cd $HOME/venv
virtualenv-3 python3-isort
. python3-isort/bin/activate
pip install isort

And, before running make,

export ISORT=$HOME/venv/python3-isort/bin/isort

If you do have an older version installed and want make to ignore it, you can point the variable at some non-existing name/location, e.g.:

export ISORT=nonexistent

Setup

To setup the product use the following command:

$ "${PREFIX}/bin/engine-setup"
Note
otopi, and therefore engine-setup, now defaults to python3 (except on el7). To override, use:
$ OTOPI_PYTHON=/usr/bin/python2 "${PREFIX}/bin/engine-setup"

During engine-setup, a certificate has to be issued and you will be asked for a hostname. If you want to upload and download images from the administration portal, it has to be the name by which your machine is accessible from the outside.

JBoss

If you want to use a different WildFly/EAP installation, specify it with the --jboss-home= parameter of the setup.

Environment

OVIRT_ENGINE_JAVA_HOME

Select a specific Java home.

OVIRT_ENGINE_JAVA_HOME_FORCE

Set to a non-zero value to bypass the Java compatibility check.

Refresh

If there are no significant changes, such as to the file structure or database schema, there is no need to run the setup again: make install-dev <args> will overwrite files as required; run engine-setup to refresh the database schema.

Do remember to restart the engine service.

If there is a significant change, the safest path is to stop the service, remove the ${PREFIX} directory, then build and set up again.

The ${PREFIX}/bin/engine-cleanup tool is also available to clean up the environment; it is useful for application changes, less so for packaging changes.

Service administration

Most utilities and services are operational, including PKI and host deploy.

To start/stop the engine service use:

$ "${PREFIX}/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py" start

While the service is running, this command will not exit. Press <Ctrl>-C to stop the service.

Access the application using HTTP (port 8080) or HTTPS (port 8443).

Remote debug

By default, debug address is 127.0.0.1:8787. If you want to make engine accessible to the remote debugger, after running engine-setup edit the following file: ${PREFIX}/etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf:

ENGINE_DEBUG_ADDRESS=0.0.0.0:8787

Running instance management (JMX)

The ovirt-engine service supports JMX as a management interface. This is the standard JBoss JMX interface; authentication can be done using any engine user with the SuperUser role. Access is permitted only from the local host.

Access the JMX shell using the following command; provide OPTIONAL_COMMA_SEPARATED_COMMANDS for non-interactive usage:

$ "${JBOSS_HOME}/bin/jboss-cli.sh" \
  --connect \
  --timeout=30000 \
  --controller=localhost:8706 \
  --user=admin@internal \
  --commands="OPTIONAL_COMMA_SEPARATED_COMMANDS"

Useful commands:

Modify log level
/subsystem=logging/logger=org.ovirt.engine.core.bll:write-attribute(name=level,value=DEBUG)
Create a new log category
/subsystem=logging/logger=org.ovirt.engine:add
Get the engine data-source statistics
ls /subsystem=datasources/data-source=ENGINEDataSource/statistics=jdbc/
Get threading info
ls /core-service=platform-mbean/type=threading/

By default JMX access is available only to localhost, to open JMX to world, add ${PREFIX}/etc/ovirt-engine/engine.conf.d/20-setup-jmx-debug.conf with:

ENGINE_JMX_INTERFACE=public

DAO tests

Create an empty database for the DAO tests; refer to Database creation.

In the following, the user is engine, the password is engine, and the database is engine_dao_tests.

$ PGPASSWORD=engine \
  ./packaging/dbscripts/schema.sh \
    -c apply -u engine -d engine_dao_tests

Run build as:

$ make maven BUILD_GWT=0 BUILD_UT=1 EXTRA_BUILD_FLAGS="-P enable-dao-tests \
  -D engine.db.username=engine \
  -D engine.db.password=engine \
  -D engine.db.url=jdbc:postgresql://localhost/engine_dao_tests"

VM console

After the environment is setup and installed, some adjustments are required.

Copy vmconsole-host configuration:

$ sudo cp -p "${PREFIX}/share/ovirt-engine/conf/ovirt-vmconsole-proxy.conf" \
  /etc/ovirt-vmconsole/ovirt-vmconsole-proxy/conf.d/50-ovirt-vmconsole-proxy.conf

If SELinux is enabled on your machine, set the type on the vmconsole helper using:

$ sudo chcon --type=bin_t "${PREFIX}/libexec/ovirt-vmconsole-proxy-helper/ovirt-vmconsole-list.py"

ovirt-imageio

After setup, you need to run ovirt-imageio manually if you want to upload and download images via the administration portal. Run it with:

$ ovirt-imageio --conf-dir $PREFIX/etc/ovirt-imageio

This assumes you have installed ovirt-imageio-daemon and you have run engine-setup.

In development mode, ovirt-imageio logs to stderr using DEBUG level. If you would like to log to a file create a log directory:

$ mkdir $PREFIX/var/log/ovirt-imageio

And install a drop-in configuration file to override the engine development setup:

$ cat $PREFIX/etc/ovirt-imageio/conf.d/99-local.conf
[handlers]
keys = logfile
[logger_root]
handlers = logfile
[handler_logfile]
args = ('/home/username/ovirt-engine/log/ovirt-imageio/daemon.log',)

RPM packaging

$ make dist
$ rpmbuild -ts @tarball@
# yum-builddep @srpm@
# rpmbuild -tb @tarball@

The following spec file variables are available for package customization:

ovirt_build_quick

Quick build, best for syntax checks. Default is 0.

ovirt_build_minimal

Build minimal Firefox only package. Default is 0.

ovirt_build_user_agent

When using quick or minimal build, build only for this user agent. Default is gecko1_8 (Firefox). To build for Chrome use safari.

ovirt_build_gwt

Build GWT components. Default is 1.

ovirt_build_all_user_agents

Build GWT components for all supported browsers. Default is 1.

ovirt_build_locales

Build GWT components for all supported locales. Default is 1.

Examples:

Build minimal rpm package for Firefox

$ rpmbuild -D"ovirt_build_minimal 1" -tb @tarball@

Build minimal rpm package for Chrome or Safari

$ rpmbuild -D"ovirt_build_minimal 1" -D"ovirt_build_user_agent safari" -tb @tarball@

Ansible Lint

To use ansible-lint locally you need to install it from PyPI in a python virtualenv/venv, e.g.:

sudo dnf install python3-virtualenv
mkdir -p $HOME/venv
python3 -m venv $HOME/venv/ansible-lint
. $HOME/venv/ansible-lint/bin/activate
pip3 install "ansible-lint>=6.0.0,<7.0.0"

Run the lint:

$ ansible-lint -c build/ansible-lint.conf packaging/ansible-runner-service-project/project/roles/*

Branch/release management

Git branch master should always have the latest version.

Releases should be done from stable branches. So-called "bump patches", which increase the version, should be created using the bump_release.sh script. This creates two git commits: one for doing the release, which should be tagged, and one for returning to development builds, which have a timestamp and git hash in their RPM names.

When branching stable branches, master branch should be bumped to the next Y or Z version. There is currently no script for doing that. It can be done using something like:

find . -name pom.xml -exec sed -i "s:4.5.1.3-SNAPSHOT:4.5.2-SNAPSHOT:" {} +

Replace 4.5.1.3 with the current version, and 4.5.2 with the version you want to bump to.
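
The same bump can be wrapped in a tiny helper (a sketch; bump_poms is a hypothetical name, not an existing repository script):

```shell
# bump_poms FROM TO [DIR]: rewrite FROM-SNAPSHOT to TO-SNAPSHOT in all
# pom.xml files under DIR (default: current directory).
bump_poms() {
  local from=$1 to=$2 dir=${3:-.}
  find "$dir" -name pom.xml -exec sed -i "s:${from}-SNAPSHOT:${to}-SNAPSHOT:" {} +
}

# Usage: bump_poms 4.5.1.3 4.5.2
```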


        at [email protected]//org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
        at [email protected]//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:70)
        at [email protected]//org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:252)
        at [email protected]//org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:313)
        at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:106)
        at [email protected]//org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:296)
        at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:157)
        at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:134)
        at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105)
        at org.ovirt.engine.core.dal//org.ovirt.engine.core.dao.TagDaoImpl.getAllForParent(TagDaoImpl.java:82)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.addChildren(TagsDirector.java:116)
        at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.init(TagsDirector.java:75)
        ... 64 more

Fix:

[root@e dbutils]# dnf downgrade postgresql-jdbc
...
Running transaction
  Preparing        :                                                                                                                                                                                                                     1/1
  Downgrading      : postgresql-jdbc-42.2.3-3.el8_2.noarch                                                                                                                                                                               1/2
  Cleanup          : postgresql-jdbc-42.2.14-1.el8.noarch                                                                                                                                                                                2/2
  Verifying        : postgresql-jdbc-42.2.3-3.el8_2.noarch                                                                                                                                                                               1/2
  Verifying        : postgresql-jdbc-42.2.14-1.el8.noarch                                                                                                                                                                                2/2

Downgraded:
  postgresql-jdbc-42.2.3-3.el8_2.noarch

Complete!
[root@e dbutils]# engine-setup
....

User allocation to pool VM remains even after VM detachment/deletion

Good Evening

When a user logs in to the VM Portal and requests a VM from a pool, this assignment persists even after the corresponding VM is shut down, detached, or deleted, resulting in unexpected (from a user's point of view, 'weird') behaviour. If the attached VM is deleted, oVirt tries to start the non-existing VM. After some time the attachment to the specific VM is lost and everything works normally again; however, even in a normal production environment this may result in unwanted behaviour.

See the additional info for other problems and my thoughts on solutions.

Engine Version: 4.4.10.7-1.el8
Cluster Compatibility Level: 4.6

Steps to Reproduce:

  1. Setup a Pool of VM (for us Windows using sysprep)
  2. Login as a user, request a VM from that pool, login to the VM
  3. Shutdown the VM as User (from inside or WebUI)
  4. Logout as a User
  5. Login as Admin, detach and delete the VM the user is bound to
  6. Login as User and request a VM from the pool

Actual results:
The system requests the non-existing VM from the pool, resulting in an error.

Expected results:
The System requests any functioning VM.

Additional info:

Another scenario that is unwanted for us: the user wants to get a fresh VM.
He shuts down his VM; after that he can request a new VM from the pool. But instead of oVirt handing out one of the already prepared VMs, the user gets the same VM he had before, and now he has to sit through all the sysprep work that is done when a pool VM is set up. This may be intended, but it can be painful.

Feature enhancement request:

Solutions I thought about; they are not mutually exclusive, and all of them would be nice:
1)
Let the API have a function "deallocateVm" similar to "allocateVm" ( http://ovirt.github.io/ovirt-engine-api-model/4.5/#services/vm_pool ). It removes the allocation between user and VM and guarantees a fresh start. The function could take a parameter for what to do with the VM it is attached to (shutdown, restart, ...).
This way any software can force a deallocation (and the WebUI should then have that option too).
2) I know the current behavior can be intended, so add a parameter to a pool, a "User Allocation Policy", which could be:

  • Same User (try to allocate the same VM to the same user as long as possible; if all VMs already have an allocation, allocate the VM belonging to the user who has not been seen for the longest time; this is roughly the situation we have now)
  • Bind to User (same as "Same User", but if all VMs are bound, throw an error)
  • Bind to User and Extend (same as "Bind to User", but extend the pool by one machine every time a new user requests a VM)
    [The "Bind to User" settings could have a timeout that deletes the binding when the user has been absent for a certain period of time.]
  • Any (ignore bindings, just hand out any VM that is ready, except VMs a user is logged in to)
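For illustration, the proposed policies could be sketched like this (a toy model under my own assumptions; the enum, function, and return convention are all made up, not oVirt code):

```python
from enum import Enum, auto

class UserAllocationPolicy(Enum):
    SAME_USER = auto()        # reuse the user's previous VM when possible
    BIND_TO_USER = auto()     # strict binding; error when the pool is exhausted
    BIND_AND_EXTEND = auto()  # strict binding; grow the pool on demand
    ANY = auto()              # hand out any ready VM, ignore bindings

def pick_vm(policy, user, bindings, free_vms):
    """Return (vm, grow_pool) for a pool request.

    bindings: dict user -> VM previously allocated to that user
    free_vms: list of prepared, unallocated VMs
    """
    if policy is UserAllocationPolicy.ANY:
        if not free_vms:
            raise RuntimeError("no VM ready")
        return free_vms[0], False
    bound = bindings.get(user)
    if bound is not None:
        return bound, False
    if free_vms:
        return free_vms[0], False
    if policy is UserAllocationPolicy.BIND_AND_EXTEND:
        return None, True  # caller extends the pool by one VM and retries
    raise RuntimeError("pool exhausted")
```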

Reduce size of VM disks that are created from a RAW based template on a block-based storage domain

When we clone a VM from a RAW template on a block-based storage domain (for example iSCSI), we create a logical volume of the maximum size (virtual size * 1.1).
This behavior wastes a lot of storage space for the user.

Version-Release number of selected component (if applicable):
ovirt-engine-4.4.10-0.17.el8ev.noarch

How reproducible:
Create a VM using a RAW disk format on a block-based storage domain.

Steps to Reproduce:

  1. Create a template disk in RAW format on a block-based storage domain.
  2. Create a VM via UI using the template (when creating the VM enter the resource allocation window and choose: format -> QCOW, target -> ISCSI, Disk profile -> ISCSI)

Actual results:
The logical volume created is in the maximum size (virtual size * 1.1)

Expected results:
A more efficient creation of the logical volume created, so less storage would be wasted (reduce the VM cloned from the raw template to optimal size - like when using "reduce_disk.py")

Additional info:

Template created using RAW format:

raw 6442450944 false ...

Clone from raw template:

Virtual Size: 6 GiB
Actual Size: 6 GiB
disk id: d3c98805-b3c2-4562-9d50-29e46314075d

lvs -o vg_name,lv_name,size,tags | grep d3c98805-b3c2-4562-9d50-29e46314075d


feab3738-c158-4d48-8a41-b5a95c057a50 d29f8f4a-4536-4b37-b89d-41739d561343 6.62g IU_d3c98805-b3c2-4562-9d50-29e46314075d,MD_13,PU_00000000-0000-0000-0000-000000000000

lvchange -ay feab3738-c158-4d48-8a41-b5a95c057a50/d29f8f4a-4536-4b37-b89d-41739d561343

qemu-img info /dev/feab3738-c158-4d48-8a41-b5a95c057a50/d29f8f4a-4536-4b37-b89d-41739d561343

image: /dev/feab3738-c158-4d48-8a41-b5a95c057a50/d29f8f4a-4536-4b37-b89d-41739d561343
file format: qcow2
virtual size: 6 GiB (6442450944 bytes)
disk size: 0 B
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false

lvchange -an feab3738-c158-4d48-8a41-b5a95c057a50/d29f8f4a-4536-4b37-b89d-41739d561343

Measuring the template disks:

disk id: da0ac715-d511-4468-9675-bf3871b2be5b

lvs -o vg_name,lv_name,size,tags | grep da0ac715-d511-4468-9675-bf3871b2be5b

feab3738-c158-4d48-8a41-b5a95c057a50 be385bd3-3777-4240-95f5-147d56f66c48 6.00g IU_da0ac715-d511-4468-9675-bf3871b2be5b,MD_10,PU_00000000-0000-0000-0000-000000000000

lvchange -ay feab3738-c158-4d48-8a41-b5a95c057a50/be385bd3-3777-4240-95f5-147d56f66c48

qemu-img measure -O qcow2 /dev/feab3738-c158-4d48-8a41-b5a95c057a50/be385bd3-3777-4240-95f5-147d56f66c48

required size: 6443696128
fully allocated size: 6443696128

Since the RAW template does not have any metadata, qemu-img measure cannot tell
which areas are allocated and which are not, so it must report that we need
the fully allocated size (6443696128 bytes, 6.001 GiB).
Engine sends this value to vdsm, which always allocates 1.1 * virtual size
for block-based volumes. Allocating 10% more is not needed in this case, but
removing this may break old engines that assume vdsm allocates more.
So we allocate 6.60 GiB, which LVM rounds up to the 6.62g shown above.
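The 6.62g figure can be reproduced with a little arithmetic, assuming the 128 MiB LVM extent size that oVirt block domains use (the 1.1 factor is from the report above; the function name is illustrative, not vdsm code):

```python
def block_volume_size(virtual_size, extent=128 * 1024**2):
    """Initial LV size allocated for a qcow2 volume on block storage:
    1.1 * virtual size, rounded up to a whole number of LVM extents."""
    requested = int(virtual_size * 1.1)
    extents = -(-requested // extent)  # ceiling division
    return extents * extent

size = block_volume_size(6 * 1024**3)  # 6 GiB virtual disk
print(size / 1024**3)                  # prints 6.625 -- shown as 6.62g by lvs
```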

Original bug: https://bugzilla.redhat.com/2034542

Logical Name not populated on MBS disks attached to VM

I've been using an ansible playbook to create disks in oVirt and format and mount them on the hosts. With image-based disks, the ansible API returns the logical disk name in the guest when the disk is created. However, with MBS-based disks, this field is not returned. I don't think this is purely an ansible API issue, as the web UI also does not show the Logical Device under the Disks tab on the VM if it's an MBS disk.

Version-Release number of selected component (if applicable):

All hosts and engine updated to latest oVirt 4.4.10.

How reproducible:

Attach MBS disk to a VM instead of an image-based disk.

Steps to Reproduce:

  1. Attach MBS disk to VM
  2. Go to VM -> Disks, look at Logical Name field

Actual results:

Logical Disks field is empty.

Expected results:

Logical Disks field is populated.

Additional info:

If it's useful, here's the relevant bit of the playbook. I haven't been able to find any alternate way to correlate a new unformatted disk attached in oVirt with the device inside the guest, short of a whole lot of extra work within the OS to try to detect new disks.

- name: create and mount disk on vm
  when: ovirt_auth.token is defined
  block:
    - name: create disk in ovirt
      ovirt.ovirt.ovirt_disk:
        auth: "{{ ovirt_auth }}"
        vm_name: "{{ ansible_fqdn }}"
        name: "{{ ansible_fqdn }}_{{ disk_name }}"
        size: "{{ disk_size }}"
        format: "{{ ovirt_disk_format }}"
        interface: "{{ ovirt_disk_interface }}"
        storage_domain: "{{ ovirt_storage_domain }}"
        fetch_nested: yes
      register: ovirt_disk_info
      delegate_to: localhost

    - name: create filesystem
      filesystem:
        dev: "{{ ovirt_disk_info.diskattachment.logical_name }}"
        fstype: "{{ mount_fstype }}"
        opts: "-L {{ disk_name }}"
        resizefs: yes
      when: ovirt_disk_info.changed

    - name: mount filesystem
      mount:
        path: "{{ mount_path }}"
        src: "LABEL={{ disk_name }}"
        fstype: "{{ mount_fstype }}"
        state: mounted
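For what it's worth, a common way to correlate a disk with the guest device without `logical_name` relies on QEMU exposing the disk id as the device serial under /dev/disk/by-id (this is an assumption about the VM's disk interface, not a documented oVirt API; virtio truncates the serial to 20 characters):

```python
def guest_device_candidates(disk_id):
    """Candidate /dev/disk/by-id paths for an oVirt disk inside the guest.

    virtio exposes the first 20 characters of the disk id as the serial;
    virtio-scsi exposes the whole id in the QEMU harddisk name.
    """
    return [
        "/dev/disk/by-id/virtio-%s" % disk_id[:20],
        "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_%s" % disk_id,
    ]

print(guest_device_candidates("d3c98805-b3c2-4562-9d50-29e46314075d"))
```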

Original bug: https://bugzilla.redhat.com/2064080

REST API - cloud-init

Hello everyone,

I'm looking for some examples of how and when to use cloud-init when creating a new VM.
The manual is pretty obscure about it. Can anybody here give some pointers?

What I'd like to discover are:

  • What does a full curl POST request using cloud-init look like?
  • Do I need to create a VM from template first, wait for it to be ready, and send cloudinit after?

I probably have other questions, but I can't ask them yet for lack of knowledge. If somebody could be my guide, I'd be very grateful.

Thank you.
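I'm not a maintainer, but per the API model docs cloud-init data goes into the VM's start action (a use_cloud_init flag plus a vm/initialization block), so you would create the VM from the template first, wait for it to be down/ready, and send the cloud-init data with the start request. A sketch of building such a request body (field names from the API model; the host name, user, and password are placeholders):

```python
import xml.etree.ElementTree as ET

def start_with_cloud_init(host_name, user, password):
    """Build the XML body for POST /ovirt-engine/api/vms/{id}/start."""
    action = ET.Element("action")
    ET.SubElement(action, "use_cloud_init").text = "true"
    vm = ET.SubElement(action, "vm")
    init = ET.SubElement(vm, "initialization")
    ET.SubElement(init, "host_name").text = host_name
    ET.SubElement(init, "user_name").text = user
    ET.SubElement(init, "root_password").text = password
    return ET.tostring(action, encoding="unicode")

body = start_with_cloud_init("vm1.example.com", "root", "secret")
# Then, roughly:
#   curl -k -u admin@internal:PASS -H 'Content-Type: application/xml' \
#        -X POST -d "$body" https://ENGINE/ovirt-engine/api/vms/{id}/start
```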

Turn off options in a private DC for performance, costs and stability

All our hosts are in a private network and all events are logged. The security layers are overhead here, since security is implemented at the network/admin level.

We need support for:

  1. A selinux=0 boot option. Updates can introduce denials and bugs for new logic. Many in-house installs don't need it; the profit is small, the cost of failures is big.

  2. migrations=off.

  3. All VMs must run at full performance. If a user gets root from a guest VM, I will offer it to him! ) Again: the gain from performance is big, the cost of the caveats is little.

  4. Turn off all power saving and other latency issues; C0 state forever. Energy costs much less than time.

  5. Support using the latest kernel/tools for new features. Time is cost; new features save time.

  6. Move all support services to containers/namespaces. Right now there is no chance to use the host for NAS or docker: your tools' logic takes over the whole host, but the server could be used for more tasks and custom workloads.

  7. Add a no-TLS/SSL install option; no need for the overhead and costs.

  8. Add support for changing/upgrading hosted engine configs (BIOS type / max CPU / storage bus).

  9. Add support for overriding the XML in VM configs for customization.

  10. Add backup API calls and scripts for VMs, a scheduler, and UI support.

  11. Add options for RAW/QCOW2 disk image conversion and shrinking.

  12. Add a fast path to full backup without cloning, moving and exporting OVAs. I want a gzipped snapshot download or export without cloning the VM (with leases, permissions and the other items). Cloning a snapshot into a new VM is the most wasteful approach: why clone everything and then delete all the objects? In most cases the result is used for backup/download/save; cloning for the sake of a clone is maybe 1% of cases.

  13. Add an exception collector that reports exceptions together with a KB article if a workaround exists.

[root@vm7 libvirt]# xzgrep -ci Exception /var/log/vdsm/vdsm.*xz | cut -d ":" -f 2 | awk '{s+=$1} END {print s}'
30297

Regals. from Russia with love )

ovirt-engine webadmin can't open, error select * from gettagsbyparent_id()

When I enter the ovirt-engine URL, the page cannot open, and I see this error:
root cause

javax.ejb.EJBException: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [select * from gettagsbyparent_id()]; nested exception is org.postgresql.util.PSQLException: ERROR: function gettagsbyparent_id() does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 16
org.jboss.as.ejb3.tx.CMTTxInterceptor.handleExceptionInOurTx(CMTTxInterceptor.java:166)
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:230)
org.jboss.as.ejb3.tx.CMTTxInterceptor.requiresNew(CMTTxInterceptor.java:333)
org.jboss.as.ejb3.tx.SingletonLifecycleCMTTxInterceptor.processInvocation(SingletonLifecycleCMTTxInterceptor.java:56)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161)
org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:85)
org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:116)
org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:48)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:211)
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:363)
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:194)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:165)
org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:173)
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288)
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:72)
org.ovirt.engine.core.common.interfaces.BackendLocal$$$view9.runPublicQuery(Unknown Source)
org.ovirt.engine.core.WelcomeServlet.doGet(WelcomeServlet.java:82)
javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:72)
org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:64)
root cause

And in the database this zero-argument function does not exist; there is only a one-parameter function:

ovirt=# \df gettagsbyparent_id
                               List of functions
 Schema |        Name        | Result data type | Argument data types |  Type
--------+--------------------+------------------+---------------------+--------
 ovirt  | gettagsbyparent_id | SETOF tags       | v_parent_id uuid    | normal
(1 row)

Is this a bug?

Support upload-image to MBS domain

ovirt-imageio can write to any storage supported by qemu-nbd.
The issue is in the engine, which is responsible for orchestrating image transfers.

The problem is that the engine does not support image transfer to managed
block storage domains. Adding support for MBS disks should not be hard;
it is just a missing feature.

With the current system, the only way to upload images is to create a data
domain using another type of storage (e.g. NFS), upload the images
to that data domain, and then move the disks to the MBS domain.

To implement this support for MBS disks, we need:

Engine UI:

  • Prevent upload of qcow2 disks to MBS domain, since we don't support converting
    the format during the upload from the UI.

Engine backend:

  • Instead of preparing the image before the upload, attach the disk to the host
  • Instead of tearing down the image after the upload, detach the disk from the host

Vdsm:

  • Starting the NBD server currently assumes an image addressed by PDIV; add support for MBS volumes
    (basically an attached volume at /run/vdsm/managedvolume/uuid)
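The vdsm side could be as small as choosing the right path when starting the NBD server (a sketch; the function name and arguments are illustrative, not actual vdsm code):

```python
def nbd_export_path(storage_type, sd_id=None, img_id=None, vol_id=None):
    """Path qemu-nbd should export for an image transfer.

    Regular volumes are addressed by PDIV (pool/domain/image/volume) and
    resolved under /rhev/data-center; MBS volumes are already attached
    under /run/vdsm/managedvolume/.
    """
    if storage_type == "managed":
        return "/run/vdsm/managedvolume/%s" % vol_id
    return "/rhev/data-center/mnt/blockSD/%s/images/%s/%s" % (sd_id, img_id, vol_id)
```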

Original bug: https://bugzilla.redhat.com/show_bug.cgi?id=2077649

i18n fr_FR correction

Hello,

I noticed some bad translations in the French version of oVirt, and I wanted to push some corrections to the following file:

ovirt-engine/frontend/webadmin/modules/webadmin/src/main/resources/org/ovirt/engine/ui/webadmin/ApplicationConstants_fr_FR.properties

But apparently the file was auto-translated, cf. this comment in the file:

# auto translated by TM merge

So my question is: can I push updates to the i18n files, or is this file auto-generated and will my changes be overwritten?

Thanks a lot for your great software.

SKIP_CHECKS=1 doesn't skip checkstyle

I changed the Makefile:

-SKIP_CHECKS=0
+SKIP_CHECKS=1

but using xmvn --local instead of mvn still leads to:

[ERROR] Failed to execute goal on project ovirt-checkstyle-extension:
Could not resolve dependencies for project org.ovirt.engine:ovirt-checkstyle-extension:jar:4.5.0-SNAPSHOT:
Cannot access central (https://repo1.maven.org/maven2) in offline mode and the artifact 
com.puppycrawl.tools:checkstyle:jar:9.3 has not been downloaded from it before. -> [Help 1]

As com.puppycrawl.tools:checkstyle is not available in CentOS Stream, and as it's used only during development, we should be able to skip it during the RPM build.

Add API that returns iSCSI Multipath status

We want to monitor the iSCSI multipath status on our hosts,
so that we notice when, for example, one path to an iSCSI LUN is down.

oVirt notices this perfectly, as you can see this in the alerts/log:
2020-09-18 09:53:08,880+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-86) [] EVENT_ID: FAULTY_MULTIPATHS_ON_HOST(11,500), Faulty multipath paths on host kvm001 on devices: [3600a0980383056645524502f414a726a]

The thing is, parsing the log and then alerting on it is not very practical.

Our use-case uses Nagios/Icinga(2) for example to monitor the oVirt setup.

It would be great if the API were extended with a call that returns the iSCSI multipath status of that host.

The following call gives iSCSI info now:
/ovirt-engine/api/hosts/xxxx/storage

It would be nice to extend this with, for example, an available_paths: int field.
Then if paths > available_paths, trigger an alert in our monitoring.
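On the monitoring side the check would then reduce to comparing the two numbers; a Nagios-style sketch against the proposed (not yet existing) available_paths field:

```python
OK, WARNING, CRITICAL = 0, 1, 2  # standard Nagios exit codes

def check_multipath(total_paths, available_paths):
    """Nagios-style (status, message) for an iSCSI LUN's path count."""
    if available_paths == 0:
        return CRITICAL, "all %d paths down" % total_paths
    if available_paths < total_paths:
        return WARNING, "%d of %d paths down" % (
            total_paths - available_paths, total_paths)
    return OK, "all %d paths up" % total_paths
```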

Original bug: https://bugzilla.redhat.com/show_bug.cgi?id=1880375

fence_xvm -o list fails with "Operation failed" after deploying the self-hosted engine

Dear,

I have a physical server on which I installed OEL 8.6 and configured fence_virtd.
fence_xvm -o list was working fine, but right after the ovirtmgmt network interface was created the fence_xvm command stopped working.

I did some troubleshooting and noticed that the multicast address that was assigned to virbr0 is now gone, and no matter what I do, or how often I restart the fence_virtd service, it is not assigned to the virbr0 interface again.

Error in the messages file:
May 23 08:48:54 rckvm06 fence_virtd[2738440]: No worthy mechs found
May 23 08:48:54 rckvm06 journal[109661]: End of file while reading data: Input/output error

Timed out waiting for response
Operation failed

libvirt: XML-RPC error : authentication failed: Failed to start SASL negotiation: -4 (SASL(-4): no mechanism available: No worthy mechs found)
Debugging threshold is now 99
[libvirt:INIT] Could not connect to any hypervisors
Backend plugin libvirt failed to initialize

I appreciate your help and support.

I need to change legacy to OVS switch mode on oVirt 4.4.10 nodes (problem)

Hello, Team! I need help.

I have a cluster of three oVirt 4.4.10 nodes with shared iSCSI storage, and a hosted engine installed on it.

I want to change the switch mode from legacy to OVS, using the instructions below:

  • Install first host and VM using "hosted-engine --deploy"
  • In the Engine UI, change Cluster switch type from Legacy to OVS
  • Shutdown the engine VM and stop vdsmd on the host.
  • Manually change the switch type to ovs in /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
  • Restart the host

After this change my hosted engine won't start; it says there is a problem with the vnet interface (something like 'unknown virtmanager' or 'unknown vnet user'). When I reverted the settings from OVS back to legacy mode, the hosted engine started working again without any problem.

When I changed the file /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt on the hosts without the hosted engine, they switched network mode and started working in OVS switch mode, but they report "network out of sync". I tried to migrate the hosted engine from the host with the legacy switch to a host with an OVS switch, but the hosted engine won't migrate.

How can I migrate my infrastructure to OVS switch mode?

HostedEngine lost the xml param in its conf file

Log:

2022-05-05 05:54:47,742+0300 INFO  (jsonrpc/1) [api.virt] START create(vmParams={'vmId': 'f75316cb-6e94-445e-ad01-09219be02031', 'memSize': '16384', 'display': 'vnc', 'vmName': 'HostedEngine', 'spiceSecureChannels': 'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 'smp': '4', 'maxVCpus': '32', 'cpuType': 'SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear', 'emulatedMachine': '', 'devices': [{'index': '2', 'iface': 'ide', 'address': {'controller': '0', 'target': '0', 'unit': '0', 'bus': '1', 'type': 'drive'}, 'specParams': {}, 'readonly': 'true', 'deviceId': '', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'poolID': '00000000-0000-0000-0000-000000000000', 'volumeID': 'ba019898-807e-4c64-ab58-e7d4e770602d', 'imageID': 'e9eb058b-acba-4e9c-97fa-73366c5f74d1', 'specParams': {}, 'readonly': 'false', 'domainID': '989e8f8e-3f3e-4757-a305-4c4f3fa3ceef', 'optional': 'false', 'deviceId': 'ba019898-807e-4c64-ab58-e7d4e770602d', 'address': {'bus': '0x00', 'slot': '0x06', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk', 'bootOrder': '1'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:14:e2:f9', 'linkActive': 'true', 'network': 'ovirtmgmt', 'specParams': {}, 'deviceId': '', 'address': {'bus': '0x00', 'slot': '0x03', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'type': 'console'}, {'device': 'vga', 'alias': 'video0', 'type': 'video'}, {'device': 'vnc', 'type': 'graphics'}, {'device': 'virtio', 'specParams': {'source': 'urandom'}, 'model': 'virtio', 'type': 'rng'}]}) from=127.0.0.1,35216, vmId=f75316cb-6e94-445e-ad01-09219be02031 (api:48)
2022-05-05 05:54:47,742+0300 ERROR (jsonrpc/1) [api] FINISH create error='xml' (api:134)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 124, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 194, in create
    xml = vmParams.get('_srcDomXML') or vmParams['xml']
KeyError: 'xml'
2022-05-05 05:54:47,742+0300 INFO  (jsonrpc/1) [api.virt] FINISH create return={'status': {'code': 100, 'message': 'General Exception: ("\'xml\'",)'}} from=127.0.0.1,35216, vmId=f75316cb-6e94-445e-ad01-09219be02031 (api:54)

Workaround ("/usr/lib/python3.6/site-packages/vdsm/API.py", line 194):

        # self._UUID is None in this call, it must be retrieved from XML
        try:
            xml = vmParams.get('_srcDomXML') or vmParams['xml']
        except KeyError:
            xml = '<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm"><name>HostedEngine</name><uuid>f75316cb-6e94-445e-ad01-09219be02031</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory slots="16">67108864</maxMemory><vcpu current="4">32</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">oVirt</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="family">oVirt</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">f75316cb-6e94-445e-ad01-09219be02031</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer name="hpet" present="no"/></clock><features><acpi/><vmcoreinfo/></features><cpu match="exact"><model>SandyBridge</model><topology cores="8" threads="2" sockets="2"/><numa><cell id="0" cpus="0-31" memory="16777216"/></numa></cpu><cputune/><devices><input type="tablet" bus="usb"/><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/f75316cb-6e94-445e-ad01-09219be02031.org.qemu.guest_agent.0"/></channel><graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"/><channel name="inputs" mode="secure"/><channel name="cursor" mode="secure"/><channel name="playback" mode="secure"/><channel name="record" mode="secure"/><channel name="display" mode="secure"/><channel name="smartcard" mode="secure"/><channel name="usbredir" mode="secure"/><listen type="network" network="vdsm-ovirtmgmt"/></graphics><controller type="pci" model="pcie-root-port" index="13"><address bus="0x00" domain="0x0000" function="0x4" slot="0x03" type="pci"/></controller><controller type="pci" model="pcie-root-port" 
index="4"><address bus="0x00" domain="0x0000" function="0x3" slot="0x02" type="pci"/></controller><controller type="sata" index="0"><address bus="0x00" domain="0x0000" function="0x2" slot="0x1f" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="16"><address bus="0x00" domain="0x0000" function="0x7" slot="0x03" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="10"><address bus="0x00" domain="0x0000" function="0x1" slot="0x03" type="pci"/></controller><controller type="usb" model="qemu-xhci" index="0" ports="8"><alias name="ua-42ee5e88-3ac9-4a80-b127-49967506cd54"/><address bus="0x04" domain="0x0000" function="0x0" slot="0x00" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="11"><address bus="0x00" domain="0x0000" function="0x2" slot="0x03" type="pci"/></controller><rng model="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-4f26a36f-8521-4086-8a5f-5efc37ffcea3"/></rng><memballoon model="virtio"><stats period="5"/><alias name="ua-5e362997-092a-4979-ac2d-a45c42df6287"/><address bus="0x07" domain="0x0000" function="0x0" slot="0x00" type="pci"/></memballoon><controller type="pci" model="pcie-to-pci-bridge" index="18"><address bus="0x01" domain="0x0000" function="0x0" slot="0x00" type="pci"/></controller><video><model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/><alias name="ua-6ff57534-3157-49a3-bf7d-0ce9b50cf37e"/><address bus="0x00" domain="0x0000" function="0x0" slot="0x01" type="pci"/></video><controller type="pci" model="pcie-root-port" index="2"><address bus="0x00" domain="0x0000" function="0x1" slot="0x02" type="pci"/></controller><controller type="virtio-serial" index="0" ports="16"><alias name="ua-8462b99d-4fa6-46b8-bb55-c932178c7552"/><address bus="0x05" domain="0x0000" function="0x0" slot="0x00" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="12"><address bus="0x00" domain="0x0000" function="0x3" slot="0x03" 
type="pci"/></controller><controller type="pci" model="pcie-root-port" index="6"><address bus="0x00" domain="0x0000" function="0x5" slot="0x02" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="8"><address bus="0x00" domain="0x0000" function="0x7" slot="0x02" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="5"><address bus="0x00" domain="0x0000" function="0x4" slot="0x02" type="pci"/></controller><console type="unix"><source path="/var/run/ovirt-vmconsole-console/f75316cb-6e94-445e-ad01-09219be02031.sock" mode="bind"/><target type="serial" port="0"/><alias name="ua-ad39220f-68fb-41e9-a7dc-99b5dfca5de5"/></console><controller type="pci" model="pcie-root-port" index="9"><address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" multifunction="on"/></controller><controller type="pci" model="pcie-root-port" index="17"><address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci"/></controller><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="1"/><alias name="ua-c40d974b-3b2d-4aa8-98c6-c54d9b680cf6"/><address bus="0x03" domain="0x0000" function="0x0" slot="0x00" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="1"><address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci" multifunction="on"/></controller><controller type="pci" model="pcie-root-port" index="3"><address bus="0x00" domain="0x0000" function="0x2" slot="0x02" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="14"><address bus="0x00" domain="0x0000" function="0x5" slot="0x03" type="pci"/></controller><controller type="pci" model="pcie-root-port" index="15"><address bus="0x00" domain="0x0000" function="0x6" slot="0x03" type="pci"/></controller><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"/></graphics><sound 
model="ich6"><alias name="ua-f3ee4079-98cd-46a3-a384-340e449316ab"/><address bus="0x12" domain="0x0000" function="0x0" slot="0x01" type="pci"/></sound><controller type="pci" model="pcie-root-port" index="7"><address bus="0x00" domain="0x0000" function="0x6" slot="0x02" type="pci"/></controller><serial type="unix"><source path="/var/run/ovirt-vmconsole-console/f75316cb-6e94-445e-ad01-09219be02031.sock" mode="bind"/><target port="0"/></serial><channel type="spicevmc"><target type="virtio" name="com.redhat.spice.0"/></channel><controller type="pci" model="pcie-root"/><interface type="bridge"><model type="virtio"/><link state="up"/><source bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><alias name="ua-18a81fa3-0a0f-4990-a967-5b712af6be9b"/><address bus="0x02" domain="0x0000" function="0x0" slot="0x00" type="pci"/><mac address="00:16:3e:14:e2:f9"/><mtu size="1500"/><filterref filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"/><source file="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"/></source><target dev="sdc" bus="sata"/><readonly/><alias name="ua-9044898d-7a17-446f-8245-96fcd02c85c6"/><address bus="0" controller="0" unit="2" type="drive" target="0"/></disk><disk snapshot="no" type="file" device="disk"><target dev="vda" bus="virtio"/><source file="/rhev/data-center/00000000-0000-0000-0000-000000000000/989e8f8e-3f3e-4757-a305-4c4f3fa3ceef/images/e9eb058b-acba-4e9c-97fa-73366c5f74d1/ba019898-807e-4c64-ab58-e7d4e770602d"><seclabel model="dac" type="none" relabel="no"/></source><driver name="qemu" iothread="1" io="threads" type="raw" error_policy="stop" cache="none"/><alias name="ua-e9eb058b-acba-4e9c-97fa-73366c5f74d1"/><address bus="0x06" domain="0x0000" function="0x0" slot="0x00" 
type="pci"/><serial>e9eb058b-acba-4e9c-97fa-73366c5f74d1</serial></disk><lease><key>ba019898-807e-4c64-ab58-e7d4e770602d</key><lockspace>989e8f8e-3f3e-4757-a305-4c4f3fa3ceef</lockspace><target offset="LEASE-OFFSET:ba019898-807e-4c64-ab58-e7d4e770602d:989e8f8e-3f3e-4757-a305-4c4f3fa3ceef" path="LEASE-PATH:ba019898-807e-4c64-ab58-e7d4e770602d:989e8f8e-3f3e-4757-a305-4c4f3fa3ceef"/></lease></devices><pm><suspend-to-disk enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64" machine="pc-q35-rhel8.4.0">hvm</type><smbios mode="sysinfo"/><bios useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device alias="ua-18a81fa3-0a0f-4990-a967-5b712af6be9b" mac_address="00:16:3e:14:e2:f9"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device devtype="disk" name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>ba019898-807e-4c64-ab58-e7d4e770602d</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>e9eb058b-acba-4e9c-97fa-73366c5f74d1</ovirt-vm:imageID><ovirt-vm:domainID>989e8f8e-3f3e-4757-a305-4c4f3fa3ceef</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior><ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled><ovirt-vm:cpuPolicy>none</ovirt-vm:cpuPolicy></ovirt-vm:vm></metadata></domain>'
            vmParams['xml'] = xml

        # Parse the VM UUID out of the engine-provided libvirt domain XML
        # so it can be sent to VDSM alongside the XML itself.
        self._UUID = DomainDescriptor(xml, xml_source=XmlSource.INITIAL).id
        vmParams['vmId'] = self._UUID
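`DomainDescriptor` wraps the domain XML and exposes its fields; conceptually, reading the VM id boils down to pulling the top-level `<uuid>` element out of the document. A minimal sketch of that lookup using only the standard library (the helper name `extract_vm_id` is ours for illustration, not VDSM's API):

```python
import xml.etree.ElementTree as ET


def extract_vm_id(domain_xml: str) -> str:
    """Return the text of the top-level <uuid> element of a libvirt domain XML."""
    root = ET.fromstring(domain_xml)
    uuid = root.findtext("uuid")
    if uuid is None:
        raise ValueError("domain XML has no <uuid> element")
    return uuid


# Example with a minimal domain document:
minimal = (
    "<domain type='kvm'>"
    "<name>vm1</name>"
    "<uuid>f75316cb-6e94-445e-ad01-09219be02031</uuid>"
    "</domain>"
)
print(extract_vm_id(minimal))  # f75316cb-6e94-445e-ad01-09219be02031
```

The real `DomainDescriptor` does considerably more (device enumeration, metadata access), but the id lookup above is the relevant piece for populating `vmParams['vmId']`.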
