
yum's Introduction

⛔ This project is deprecated. Please use DNF, the successor of YUM.

YUM

Yum is an automatic updater and installer for rpm-based systems.

Included programs:

/usr/bin/yum		Main program

Usage

Yum is run with one of the following options:

  • update [package list]

    If run without any packages, Yum will automatically upgrade every currently installed package. If one or more packages are specified, Yum will only update the packages listed.

  • install <package list>

    Yum will install the latest version of the specified package (don't specify version information).

  • remove <package list>

    Yum will remove the specified packages from the system.

  • list [package list]

    List available packages.

See the man page for more information (man yum). Also see:

3.2.X Branch - yum-3_2_X
      Starting commit is roughly: a3c91d7f6a15f31a42d020127b2da2877dfc137d
         E.g. git diff a3c91d7f6a15f31a42d020127b2da2877dfc137d

Building

You can build an RPM package by running:

$ make rpm

Note: Make sure you have mock and lynx installed.

Development

You can run Yum from the current checkout in a container as follows (make sure you have the podman package installed):

$ make shell

This will first build a CentOS 7 image (if not built already) and then run a container with a shell where you can directly execute Yum:

[root@bf03d3a43cbf /]# yum

When you edit the code on your host, the changes you make will be immediately reflected inside the container since the checkout is bind-mounted.

Warning: There's (probably) a bug in podman at the moment that prevents it from seeing symlinks in a freshly created container, which in turn makes Yum not see the /etc/yum.conf symlink when it runs for the first time. The workaround is to touch /etc/yum.conf or simply re-run Yum.

Note: When you exit the container, it is not deleted but just stopped. To re-attach to it, use (replace the ID appropriately):

$ podman start bf03d3a43cbf
$ podman attach bf03d3a43cbf

yum's People

Contributors

aalam, abadger, ausil, covex, dmnks, dnagl, elf, ffesti, goeranu, guidograzioli, iamamoose, james-antill, jkeating, katzj, kiilerix, lmacken, logan5, m-blaha, mattdm, megaumi, opoplawski, piotrdrag, pmatila, scop, skvidal, timlau, vnwildman, vpodzime, wenottingham, yurchor


yum's Issues

yum autoremove tries to remove systemd

When running yum autoremove, yum tries to remove systemd, even though systemd is marked as 'reason: user' in yumdb.

In my case, I'm trying to remove policycoreutils-python (in a container), which indirectly depends on shadow-utils, which is required by systemd.

I can work around the issue by running yumdb set reason user shadow-utils to make the autoremove succeed.

The reason seems to be that required_packages() and requiring_packages() do not run recursively:

yum/yum/rpmsack.py

Lines 115 to 128 in 1222f37

def requiring_packages(self):
    """return list of installed pkgs requiring this package"""
    pkgset = set()
    for (reqn, reqf, reqevr) in self.provides:
        for pkg in self.rpmdb.getRequires(reqn,reqf,reqevr):
            if pkg != self:
                pkgset.add(pkg)
    for fn in self.filelist + self.dirlist:
        for pkg in self.rpmdb.getRequires(fn, None, (None, None, None)):
            if pkg != self:
                pkgset.add(pkg)
    return list(pkgset)

yum/yum/rpmsack.py

Lines 131 to 138 in 1222f37

def required_packages(self):
    pkgset = set()
    for (reqn, reqf, reqevr) in self.strong_requires:
        for pkg in self.rpmdb.getProvides(reqn, reqf, reqevr):
            if pkg != self:
                pkgset.add(pkg)
    return list(pkgset)
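
A hedged sketch of the kind of transitive walk being described, built on top of the required_packages() method quoted above (the helper name and signature are made up for illustration; this is not the project's actual fix):

def required_closure(pkg, seen=None):
    """Return pkg's requirements, following dependencies transitively."""
    if seen is None:
        seen = set()
    for dep in pkg.required_packages():
        if dep not in seen:
            seen.add(dep)
            required_closure(dep, seen)
    return seen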

yum-cron ignores "update_messages = no" after downloading updates

I have configured yum-cron to download updates on some machines, but not apply them. Every time it does this it sends an email. I do not wish to receive such emails and have set "update_messages = no" in /etc/yum/yum-cron.conf, but yum-cron appears to be ignoring this option.

Is this intended behaviour?

The comment in /etc/yum/yum-cron.conf says "Whether a message should be emitted when updates are available, were downloaded, or applied."

yum gets deadlocked/hung up (indefinitely) waiting for urlgrabber-ext-down

While I can appreciate that YUM is now deprecated, it's still the main package manager for EL7, which is where I am running into an issue with it just hanging indefinitely, until it is killed.

The process tree looks like this:

 8702 ?        S      0:05  |       \_ /usr/bin/python /usr/bin/yum -y --disablerepo=* --enablerepo=repo.dc.hpdd.intel.com_repository_*,build.hpdd.intel.com_job_daos-stack* install --exclude openmpi daos-1.1.2.1-1.5456.g02ce0510.el7.x86_64 daos-client-1.1.2.1-1.5456.g02ce0510.el7.x86_64 daos-tests-1.1.2.1-1.5456.g02ce0510.el7.x86_64 daos-server-1.1.2.1-1.5456.g02ce0510.el7.x86_64 openmpi3 hwloc ndctl fio patchutils ior-hpc-daos-0 romio-tests-cart-4-daos-0 testmpio-cart-4-daos-0 mpi4py-tests-cart-4-daos-0 hdf5-mpich2-tests-daos-0 hdf5-openmpi3-tests-daos-0 hdf5-vol-daos-mpich2-tests-daos-0 hdf5-vol-daos-openmpi3-tests-daos-0 MACSio-mpich2-daos-0 MACSio-openmpi3-daos-0 mpifileutils-mpich-daos-0
 8705 ?        S      0:00  |           \_ /usr/bin/python /usr/libexec/urlgrabber-ext-down
 8711 ?        S      0:00  |           \_ /usr/bin/python /usr/libexec/urlgrabber-ext-down
 8712 ?        S      0:00  |           \_ /usr/bin/python /usr/libexec/urlgrabber-ext-down

The status of the processes are:

# /tmp/strace -f -p 8702
/tmp/strace: Process 8702 attached
wait4(8711, ^C/tmp/strace: Process 8702 detached
 <detached ...>
# /tmp/strace -f -p 8705
/tmp/strace: Process 8705 attached
read(0, ^C/tmp/strace: Process 8705 detached
 <detached ...>
# /tmp/strace -f -p 8711
/tmp/strace: Process 8711 attached
futex(0x1444c90, FUTEX_WAIT_PRIVATE, 2, NULL^C/tmp/strace: Process 8711 detached
 <detached ...>
# /tmp/strace -f -p 8712
/tmp/strace: Process 8712 attached
futex(0x2174c90, FUTEX_WAIT_PRIVATE, 2, NULL^C/tmp/strace: Process 8712 detached
 <detached ...>

which to me looks like 8702, 8711 and 8705 are deadlocked all waiting/blocked on each other.

Enhancement: Allow update notifications to be send even when no updates are needed

For updatesCheck where:

        pups = self.refreshUpdates()
        gups = self.refreshGroupUpdates()

        # If neither have updates, we can just exit.
        if not (pups or gups):
            sys.exit(0)

Go ahead and emit the message and release the lock even when no updates are available. Possibly give the emitter a flag to make this configurable.

        pups = self.refreshUpdates()
        gups = self.refreshGroupUpdates()

        # If neither have updates, we can just exit.
        if not (pups or gups):
            self.emitMessages()
            self.releaseLocks()
            sys.exit(0)
class UpdateEmitter(object):
    def sendMessages(self):
        """Send the messages that have been stored.  This should be
        overridden by inheriting classes to emit the messages
        according to their individual methods.
        """

        if self.opts.emit_when_already_updated and not self.output:
            self.output.append("Yum Cron Update ran successfully and 0 packages needing updated")

        # Convert any byte strings to unicode objects now (so that we avoid
        # implicit conversions with the "ascii" codec later when join()-ing the
        # strings, leading to tracebacks).
        self.output = [x.decode('utf-8') if isinstance(x, str) else x
                       for x in self.output]

This will allow configurability and notification even when 0 packages need updating.

Downloadonly does not save files to disk if repo url starts with file://

I traced the code path and it does not seem to be a bug, but rather a feature. I think it makes sense as the default behavior, since the files are presumably already on disk, so there is no point in saving them again.

But in my use case, which I will describe shortly, it makes more sense to actually save them to disk. Therefore, it would be great to be able to override this default behavior with either a plugin or an input flag. In my use case I am serving very large packages from an I/O device that is mounted to a directory. Since the rpms are large and the device is very slow, I want to issue a yum --downloadonly overnight to fetch the rpms and then install them in the morning. What ends up happening is that calling yum with --downloadonly reads the large rpms but does not save them locally; then, in the morning, yum downloads everything again before installing.

I have tracked the required code change to fix this to the following two lines:

  1. If the copy_local=1 flag is passed to
    ret = self._getFile(url=basepath,
    it will save the files to disk.
  2. The earliest place to set this flag would be in
    kwargs = {}
    as a key-value argument to be passed on to the getPackage function.

This change would require me to fork yum, which I'd rather not do. So my preference is to either:

  1. Submit a PR to make this change and have a flag to control whether to always download or not
  2. Write a yum plugin that takes care of this. I have looked at the yum plugins but they do not seem to be capable of handling such cases. If this can be done with a plugin, please let me know.

I would appreciate any feedback on this. Thank you!


EDIT:

btw, I have seen the local.py plugin: http://yum.baseurl.org/gitweb?p=yum-utils.git;a=tree;f=plugins/local;h=27633ae6037ea85aa83c9f939397632f9e74a6cb;hb=HEAD

It has two limitations:

  1. It does not work with the --downloadonly flag, since when downloadonly is passed the postdownload hook is not called. This could be fixed by modifying the local.py plugin to mimic the behaviour of downloadonly and then stop using downloadonly itself.
  2. It would still download the file twice: first when it downloads it normally, and second in the postdownload hook where it copies the file to the local dir. Therefore it does not solve the problem of downloading a big file over a slow device twice, but it at least pushes those downloads to overnight.

So basically I am looking for a better solution than the above.
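
For reference, a minimal sketch of what a postdownload-hook plugin along the lines of local.py might look like. The plugin name and destination directory are hypothetical, and, as noted above, this hook is not called with --downloadonly, so it does not fully solve the problem described here:

# copylocal.py -- illustrative sketch of a postdownload-hook plugin
import os
import shutil

from yum.plugins import TYPE_CORE

requires_api_version = '2.3'
plugin_type = (TYPE_CORE,)

DESTDIR = '/var/lib/yum-copylocal'  # hypothetical destination directory

def postdownload_hook(conduit):
    """Copy every package yum has just downloaded into DESTDIR."""
    if not os.path.isdir(DESTDIR):
        os.makedirs(DESTDIR)
    for po in conduit.getDownloadPackages():
        path = po.localPkg()
        if os.path.isfile(path):
            shutil.copy2(path, DESTDIR)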

get list of available updates without yum lock

I'm looking for a method to print a list of available updates without locking the yum database. I notice that after an initial yum check-update any subsequent check-update commands executed shortly thereafter resolve much more quickly. Is this data cached and if so is there a way to pull from that cache without locking the yum database?

A yum command to show just the version of package installed

Here is yum commands cheatsheet.

yum info package-name shows info something like

Name           : package-name
Arch           : arch_type
Version        : version_number
Release        : release_number
Size           :  size
Repo           : repo-name
Summary        : some_summary
License        : license_name
Description    : description

I understand that all this is important information to show for how yum packaging works. I have always wondered whether there has been any work/discussion around having a command which would output just the version and not all the other info.

Please close the ticket if this is noise for the yum issue queue; I'll go to SO, Quora, or the yum mailing list to see if I can get some information on this. Just curious, or maybe there is something new for me to learn.
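
For what it's worth, a minimal sketch of printing only the installed version via the rpm Python bindings (the package name 'bash' is just an example; this is not an existing yum subcommand):

import rpm

ts = rpm.TransactionSet()
for hdr in ts.dbMatch('name', 'bash'):
    print '%s-%s' % (hdr['version'], hdr['release'])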

Note for developers: Renew GPG keys for successful build

The GPG key in https://github.com/rpm-software-management/yum/blob/master/test/gpg/key.pub expires, and the build will then fail in the make check stage.

# gpg key.pub
pub  2048R/D5865417 2017-07-26 Joe Doe <[email protected]>
sub  2048R/2DB632D4 2017-07-26 [expires: 2019-07-26]

As we know, YUM is deprecated. Here is the fix:

  1. Go to the test/gpg directory
  2. Generate a new GPG key pair to replace key.pub, making sure it is ASCII-armored
  3. Detach-sign repomd.xml to generate a new signature repomd.xml.asc, making sure it is ASCII-armored
  4. Replace the generated fingerprint and keyid in

    yum/test/pubringtests.py

    Lines 15 to 16 in f8616a2

    KEYID = '38BB1B5ED5865417'
    FPR = '417A0E6E55566A755BE7D68C38BB1B5ED5865417'

Archive this repository

Since no further time is being spent working on the project, please archive the repository.

yum-builddep not abiding by BuildRequires: package range

A packaging fubar was made: a package was built with a pre-release tag in it without using the ~ indicator. So now my repo has foo-2.0.0a1 in it. That fubar was fixed and foo-2.0.0~a1 is now also in the repository.

But now I want to update another package to use the proper foo package. So I've added BuildRequires: foo < 2.0.0a1 to its spec, and that works and selects foo-2.0.0~a1 with yum-builddep.

$ sudo yum-builddep bar.spec 
...
Getting requirements for bar.spec
 --> foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Running transaction check
---> Package foo-devel.x86_64 0:2.0.0~a1-1.git.4871023.el7 will be installed
--> Processing Dependency: foo(x86-64) = 2.0.0~a1-1.git.4871023.el7 for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libfoo.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libfoo_hl.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libfoo_util.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libna.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Running transaction check
---> Package foo.x86_64 0:2.0.0~a1-1.git.4871023.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================================================
 Package                           Arch                       Version                                          Repository                   Size
=================================================================================================================================================
Installing:
 foo-devel                         x86_64                     2.0.0~a1-1.git.4871023.el7                       my_repo                      58 k
Installing for dependencies:
 foo                               x86_64                     2.0.0~a1-1.git.4871023.el7                       my_repo                     103 k

Transaction Summary
=================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 161 k
Installed size: 597 k
Is this ok [y/d/N]: 

However, there are also older versions of foo, and I want to ensure that at least my new foo-2.0.0~a1-1.git.4871023.el7 package is installed, so I add a BuildRequires: foo-devel >= 2.0.0~a1 to bar.spec. But yum-builddep seems unable to handle that:

$ sudo yum-builddep bar.spec 
...
Getting requirements for bar.spec
 --> foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
 --> foo-devel-2.0.0a1-0.8.git.4871023.el7.x86_64
--> Running transaction check
---> Package foo-devel.x86_64 0:2.0.0~a1-1.git.4871023.el7 will be installed
--> Processing Dependency: foo(x86-64) = 2.0.0~a1-1.git.4871023.el7 for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libfoo.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libfoo_hl.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libfoo_util.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
--> Processing Dependency: libna.so.2()(64bit) for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
---> Package foo-devel.x86_64 0:2.0.0a1-0.8.git.4871023.el7 will be installed
--> Processing Dependency: foo(x86-64) = 2.0.0a1-0.8.git.4871023.el7 for package: foo-devel-2.0.0a1-0.8.git.4871023.el7.x86_64
--> Running transaction check
---> Package foo.x86_64 0:2.0.0~a1-1.git.4871023.el7 will be installed
--> Processing Dependency: foo(x86-64) = 2.0.0~a1-1.git.4871023.el7 for package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64
---> Package foo.x86_64 0:2.0.0a1-0.8.git.4871023.el7 will be installed
--> Finished Dependency Resolution
Error: Package: foo-devel-2.0.0~a1-1.git.4871023.el7.x86_64 (foo)
           Requires: foo(x86-64) = 2.0.0~a1-1.git.4871023.el7
           Available: foo-1.0.1-13.el7.x86_64 (my_other_repo)
               foo(x86-64) = 1.0.1-13.el7
           Available: foo-1.0.1-17.el7.x86_64 (my_other_repo)
               foo(x86-64) = 1.0.1-17.el7
           Available: foo-1.0.1-21.el7.x86_64 (my_other_repo)
               foo(x86-64) = 1.0.1-21.el7
           Available: foo-2.0.0~a1-1.git.4871023.el7.x86_64 (my_repo)
               foo(x86-64) = 2.0.0~a1-1.git.4871023.el7
           Available: foo-2.0.0a1-0.2.git.c2c2628.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.2.git.c2c2628.el7
           Available: foo-2.0.0a1-0.3.git.c2c2628.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.3.git.c2c2628.el7
           Available: foo-2.0.0a1-0.4.git.5d0cd77.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.4.git.5d0cd77.el7
           Available: foo-2.0.0a1-0.5.git.ad5a3b3.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.5.git.ad5a3b3.el7
           Available: foo-2.0.0a1-0.6.git.299b06d.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.6.git.299b06d.el7
           Available: foo-2.0.0a1-0.7.git.41caa14.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.7.git.41caa14.el7
           Installing: foo-2.0.0a1-0.8.git.4871023.el7.x86_64 (my_other_repo)
               foo(x86-64) = 2.0.0a1-0.8.git.4871023.el7
           Available: foo-1.0.1-9.el7.src (my_other_repo)
               Not found
           Available: foo-1.0.1-12.el7.src (my_other_repo)
               Not found
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Is my understanding that I can use:

BuildRequires: foo-devel < 2.0.0a1
BuildRequires: foo-devel >= 2.0.0~a1

to enforce a version range incorrect? I am sure I have seen this done, and have done it myself, in the past.

I even tried:

BuildRequires: foo-devel < 2.0.0a1
BuildRequires: foo-devel > 1.9

to ensure it was not a problem with the pre-release version with the ~ in it.

install command will return error code 1 if package is already installed

A lot of installation scripts rely on the exit codes of commands, and they will abort if any subcommand exits with a non-zero code.
Yum may return exit code 1 when the package is already installed
(IMHO this is not an error).

Example:

Step 1 (first install):

yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm; echo $?
...
Installed:
  epel-release.noarch 0:7-11                                                                                                                                                                                                               

Complete!
0

Step 2 (install the same package):

yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm; echo $?
...
/var/tmp/yum-root-dE0FBq/epel-release-latest-7.noarch.rpm: does not update installed package.
Error: Nothing to do
1

Step 3 (wrong path):

yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch__BUG__.rpm; echo $?
Loaded plugins: fastestmirror
Cannot open: https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch__BUG__.rpm. Skipping.
Error: Nothing to do
1

Suggestion:

A) return exit code 0 (no error) in the "already installed" case
or
B) use a different error exit code

Alternative solutions I found:

1. use localinstall:

will return 0, but it will also return 0 in wrong cases,
like:
yum -y localinstall https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch_BUG.rpm || echo $?
Loaded plugins: fastestmirror, versionlock
Cannot open: https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch_BUG.rpm. Skipping.
Nothing to do
0

2. use reinstall:

works fine, but it will REINSTALL the package every time (the trade-off is, at minimum, extra time)

Reference:

https://github.com/rpm-software-management/yum/blob/master/yum/__init__.py#L5654

Avoiding compatibility errors with python 3

I know that yum will be deprecated, so no Python 3 support is expected.
The best thing to do would be to change all scripts from
#!/usr/bin/python
to
#!/usr/bin/python2.7
This would give people the freedom to choose the default version of Python without errors.

repo_gpgcheck=1 breaks Yum's progress output

I'm not sure if this is a yum bug or a urlgrabber bug.

When repo_gpgcheck=1 is set on a repository, Yum's progress indicators for the repomd.xml and repomd.xml.asc files are broken:

# yum clean metadata
Loaded plugins: auto-update-debuginfo, priorities, versionlock
Cleaning repos: base extras updates
13 metadata files removed
6 sqlite files removed
0 metadata files removed

# yum check-update
Loaded plugins: auto-update-debuginfo, priorities, versionlock
base/7/x86_64/signature                                  |  811 B     00:00     
base/7/x86_64/signature                                  | 3.6 kB     00:00 !!! 
extras/7/x86_64/signature                                |  811 B     00:00     
extras/7/x86_64/signature                                | 3.4 kB     00:00 !!! 
updates/7/x86_64/signature                               |  811 B     00:00     
updates/7/x86_64/signature                               | 3.4 kB     00:00 !!! 
(1/4): base/7/x86_64/group_gz                              | 166 kB   00:00     
(2/4): extras/7/x86_64/primary_db                          | 150 kB   00:00     
(3/4): updates/7/x86_64/primary_db                         | 3.6 MB   00:00     
(4/4): base/7/x86_64/primary_db                            | 5.9 MB   00:00     
...

We expect to see two lines per repository, the first (e.g. base/7/x86_64) being the repomd.xml file, and the second (e.g. base/7/x86_64/signature) being the repomd.xml.asc file.

Instead it looks like the repomd.xml.asc progress indication comes first, and the repomd.xml progress indication has the wrong name and the right size, but with !!! appended to indicate that Yum thought it got the wrong size.

These errors are purely cosmetic, but they are confusing. I expect more people are going to be confused by them since CentOS recently published a blog post on this feature, and they may start shipping repository files with repo_gpgcheck=1 enabled by default.

My hunch is that this problem is due to the checkfunc function for the repomd.xml itself recursing back into urlgrabber to get the repomd.xml.asc, but I don't understand the code well enough to be sure of this.

Output response body or log file location when receiving a 503 response code

I just spent a couple of hours debugging an issue behind a firewall where yum was getting a 503 response code but showing no response body, only a message along the lines of "Failure talking to yum: Cannot find a valid baseurl for repo: remi". It turns out the request was blocked by the firewall, and tshark actually showed the whole response body, which explained the reason for the 503 (blocked yum user agent), who to email, etc.

yum needs to output the response body or a log file location with the response body in it. A user shouldn't need tshark to read the response body of a 503.

How does this sound, or is there functionality already built into yum to show this information (e.g. a debug mode)?

Even if there is, most users won't know about it, and if there is a 503 then yum should probably at least say "re-run yum with --debug to see all response bodies".

Sender email address does not appear to be treated the same way for the envelope level as it is for email headers

Let's assume that the system is named cheetah.

It appears that the logic used to replace root@localhost with root@cheetah works as intended for the email headers, but does not apply to the address used as the sender address (the envelope) when sending the email message.

This code:

yum/yum-cron/yum-cron.py

Lines 244 to 247 in f8616a2

username, at, domain = self.opts.email_from.rpartition('@')
if domain == 'localhost':
    domain = self.opts.system_name
msg['From'] = '%s@%s' % (username, domain)

appears to replace the localhost domain with cheetah and store the result in domain. The domain variable is then used here:

msg['From'] = '%s@%s' % (username, domain)
to form msg['From'].

We can see here:

s.sendmail(self.opts.email_from, self.opts.email_to, msg.as_string())

that the msg type is converted to a string to form the complete email headers/body (I may be butchering this description) as the last argument to the s.sendmail() call.

The first part of the s.sendmail() call appears to use the email_from value as-is without any conversion, so if the configuration has root@localhost, then this is used as-is for the first argument in the s.sendmail() function call.

Is this intentional?

Looking here:

yum/etc/yum-cron.conf

Lines 33 to 36 in f8616a2

[emitters]
# Name to use for this system in messages that are emitted. If
# system_name is None, the hostname will be used.
system_name = None

and here:

yum/etc/yum-cron.conf

Lines 50 to 59 in f8616a2

[email]
# The address to send email messages from.
# NOTE: 'localhost' will be replaced with the value of system_name.
email_from = root@localhost
# List of addresses to send messages to.
email_to = root
# Name of the host to connect to to send email messages.
email_host = localhost

I don't see this behavior documented as intentional.

Is this a documentation problem or is the s.sendmail() function call intended to be called this way:

            s.sendmail(msg['From'], self.opts.email_to, msg.as_string())

Calling import_key_to_pubring without cachedir or gpgdir leads to interesting results

The bug isn't very noticeable since things will mostly work; it creates the gpgdir in the wrong place, and even if you change your current dir it will just create another bogus directory.

The other issue is that it sets the environment variable permanently, so when another app then tries to start GPG it will fail horribly, since None/gpgdir does not work well as a default GPG dir.

def import_key_to_pubring(rawkey, keyid, cachedir=None, gpgdir=None, make_ro_copy=True):
    # FIXME - cachedir can be removed from this method when we break api
    if gpgme is None:
        return False

    if not gpgdir:
        gpgdir = '%s/gpgdir' % cachedir     <- Now gpgdir ==  None/gpgdir

    if not os.path.exists(gpgdir):
        os.makedirs(gpgdir)                       <- And apparently we create that directory

    key_fo = StringIO(rawkey) 
    os.environ['GNUPGHOME'] = gpgdir      <- And then set env var with wrong value without ever unsetting
    # import the key
    ctx = gpgme.Context()
    fp = open(os.path.join(gpgdir, 'gpg.conf'), 'wb')
    fp.write('')
    fp.close()
    ctx.import_(key_fo)
    key_fo.close()
    # ultimately trust the key or pygpgme is definitionally stupid
    k = ctx.get_key(keyid)
    gpgme.editutil.edit_trust(ctx, k, gpgme.VALIDITY_ULTIMATE)

    if make_ro_copy:

        rodir = gpgdir + '-ro'
        if not os.path.exists(rodir):
            os.makedirs(rodir, mode=0755)
            for f in glob.glob(gpgdir + '/*'):
                basename = os.path.basename(f)
                ro_f = rodir + '/' + basename
                shutil.copy(f, ro_f)
                os.chmod(ro_f, 0755)
            fp = open(rodir + '/gpg.conf', 'w', 0755)
            # yes it is this stupid, why do you ask?
            opts="""lock-never    
no-auto-check-trustdb    
trust-model direct
no-expensive-trust-checks
no-permission-warning         
preserve-permissions
"""
            fp.write(opts)
            fp.close()


    return True
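
A hedged sketch of one way to avoid both problems: refuse a missing gpgdir instead of building a bogus None/gpgdir path, and restore GNUPGHOME when done. The helper name is made up and this is not the project's actual fix:

import os
from contextlib import contextmanager

@contextmanager
def temporary_gnupghome(gpgdir):
    """Point GNUPGHOME at gpgdir for the duration of the block, then restore it."""
    if not gpgdir:
        raise ValueError("an explicit gpgdir (or cachedir) is required")
    old = os.environ.get('GNUPGHOME')
    os.environ['GNUPGHOME'] = gpgdir
    try:
        yield gpgdir
    finally:
        if old is None:
            os.environ.pop('GNUPGHOME', None)
        else:
            os.environ['GNUPGHOME'] = old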

ImportError: cannot import name exception2msg

Hi, I encountered this error while installing yum. Could you help me?

Traceback (most recent call last):
  File "/usr/bin/yum", line 28, in <module>
    import yummain
  File "/usr/share/yum-cli/yummain.py", line 32, in <module>
    from yum.i18n import utf8_width, exception2msg
ImportError: cannot import name exception2msg

from_repo validators broke file install output

Local/http install rpms get a repo of "/rn-v-r.a", which doesn't validate via misc.validate_repoid.
This means they don't get output in e.g. yum list foo, and even make yumdb traceback. This hack fixes it:

% syum fs diff /usr/lib/python2.7/site-packages/yum
diff -ru /tmp/tmp6ZWVHx/usr/lib/python2.7/site-packages/yum/rpmsack.py /usr/lib/python2.7/site-packages/yum/rpmsack.py
--- /tmp/tmp6ZWVHx/usr/lib/python2.7/site-packages/yum/rpmsack.py	2017-08-31 15:56:13.712179010 -0400
+++ /usr/lib/python2.7/site-packages/yum/rpmsack.py	2017-08-31 15:55:08.638097415 -0400
@@ -1731,6 +1731,11 @@
 
         pass
 
+def _hack_valid_repoid(repoid):
+    if repoid and repoid[0] == '/':
+        return misc.validate_repoid(repoid[1:])
+    return misc.validate_repoid(repoid)
+
 class RPMDBAdditionalDataPackage(object):
 
     # We do auto hardlink on these attributes
@@ -1744,7 +1749,7 @@
     # Validate these attributes when they are read from a file
     _validators = {
         # Fixes BZ 1234967
-        'from_repo': lambda repoid: misc.validate_repoid(repoid) is None,
+        'from_repo': lambda repoid: _hack_valid_repoid(repoid) is None,
     }
 
     def __init__(self, conf, pkgdir, yumdb_cache=None):
diff -ru /tmp/tmp6ZWVHx/usr/lib/python2.7/site-packages/yum/rpmsack.pyc /usr/lib/python2.7/site-packages/yum/rpmsack.pyc
Binary files /tmp/tmp6ZWVHx/usr/lib/python2.7/site-packages/yum/rpmsack.pyc and /usr/lib/python2.7/site-packages/yum/rpmsack.pyc differ

yum-cron ignores "exclude" setting when update_cmd = default

I want yum-cron to install all updates (security or otherwise) but exclude only a couple of critical packages. When running with the following config file, the gh and neo4j packages were still updated, despite being listed in the "exclude" setting. I even tried exact matches and wildcards (*) to force the exclusion, but that didn't help.

/etc/yum/yum-cron.conf

[commands]
update_cmd = default

#Exclude updating these packages:
# add additional package filters with a space between each
exclude = neo4j* cypher-shell* gh gh*

I'm running on RHEL

NAME="Red Hat Enterprise Linux Server"
VERSION="7.9 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"

Makefile is not working correctly

When I run make rpm I get the following output:

The archive is in yum-3.4.3.tar.gz
cp: cannot create regular file 'build/SOURCES/': Not a directory
Makefile:137: recipe for target 'srpm' failed
make: *** [srpm] Error 1

yum.Errors.MiscError: Invalid version flag: if

I have just upgraded Fedora from version 28 to 30 through dnf system-upgrade.
When I try to use package-cleanup --problems I get the error

yum.Errors.MiscError: Invalid version flag: if

This seems to be due to the function misc.string_to_prco_tuple not being able to parse the string (mysql-selinux if selinux-policy-targeted).
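
A minimal reproduction sketch based on the string quoted above (illustrative only, run against the installed yum Python modules):

from yum import misc

# Rich (boolean) dependencies are not understood by the parser:
misc.string_to_prco_tuple('(mysql-selinux if selinux-policy-targeted)')
# raises yum.Errors.MiscError: Invalid version flag: if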

Why does the function `rpmUtils.arch.getArchList()` return i386, i486, etc. arches?

I was looking for a way to download all the dependencies of an RPM. I found this link:

http://unix.stackexchange.com/questions/50642/download-all-dependencies-with-yumdownloader-even-if-already-installed/50671#50671

telling to do this:

repotrack -a x86_64 -p /repos/Packages [packages]

The author claims that there is a bug: indeed, this command will download RPMs with arch x86_64, but also i*86 arches.

Here is the list of arch returned by the function rpmUtils.arch.getArchList() for the given option -a x86_64:

['x86_64', 'athlon', 'i686', 'i586', 'i486', 'i386', 'noarch']

Is it a bug, or is there a real reason to also download all the packages with those different arches?

Here is the source of the function rpmUtils.arch.getArchList():

https://github.com/rpm-software-management/yum/blob/master/rpmUtils/arch.py#L213-L231

Thanks,

"yum clean all" does not remove urlgrabber's download host timing file

This is a crosspost of RHBZ #1465172.

Description of problem

Yum, through the urlgrabber Python module, keeps track of which hosts are the quickest to download files from. This is generally useful, but can cause problems in some situations if it gets out of date (e.g. when cached in a Docker image). Unfortunately, while yum clean all cleans up most cached data, it does not remove the urlgrabber download host timing file.

Version-Release number of selected component (if applicable)

yum: 3.4.3-150.el7.centos
python-urlgrabber: 3.10-8.el7

How reproducible

Trivial.

Steps to Reproduce

As root (or with sudo in front of each command):

find /var/cache/yum -mindepth 1 -delete # clean up the cache
find /var/cache/yum -name timedhosts
yum -q makecache fast
find /var/cache/yum -name timedhosts
yum clean all
find /var/cache/yum -name timedhosts

Actual results

# find /var/cache/yum -mindepth 1 -delete # clean up the cache
# find /var/cache/yum -name timedhosts
# yum -q makecache fast
# find /var/cache/yum -name timedhosts
/var/cache/yum/x86_64/7/timedhosts
# yum clean all
Loaded plugins: fastestmirror, ovl
Cleaning repos: base extras updates
Cleaning up everything
Cleaning up list of fastest mirrors
# find /var/cache/yum -name timedhosts
/var/cache/yum/x86_64/7/timedhosts
# 

Expected results

# find /var/cache/yum -name timedhosts
# 

Additional info

This is not caused by the fastestmirror plugin, which by default uses a file named timedhosts.txt. It is caused by the timedhosts functionality of urlgrabber. This feature is disabled by default but was enabled in the Yum core in 9fdc18d, which has the following commit message:

enable timedhosts

Q: Make it configurable?

This was never followed up on. While it would be nice if it were possible to configure the use of timedhosts or the path to the file it uses, the primary issue is the fact that the timedhosts file is not removed during cleanup. It probably makes the most sense to handle this in the yum clean metadata command, which is implemented in cleanMetadata.
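
A minimal sketch of the cleanup shape suggested above (the cache path comes from the reproduction steps; the helper name is hypothetical and this is not an actual yum patch):

import os

def clean_timedhosts(cachedir):
    """Remove urlgrabber's host-timing file from the yum cache, if present."""
    timedhosts = os.path.join(cachedir, 'timedhosts')
    if os.path.exists(timedhosts):
        os.unlink(timedhosts)

# e.g. clean_timedhosts('/var/cache/yum/x86_64/7')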

yum-cron emits messages even with debuglevel=-4

I have enabled yum-cron to make some servers self-maintain as much as possible. But in case they don't, it's not the end of the world (regular manually initiated yum updates will take care of things, in case automation has failed).

Therefore, I don't want to get error mails from yum like this:

/etc/cron.hourly/0yum-hourly.cron:

Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=x86_64 error was
14: HTTPS Error 503 - Service Unavailable

I only ever want to hear from yum-cron, if it runs into rpm database corruption, or something of that severity.

Consequently, I've adjusted /etc/yum/yum-cron-hourly.conf and /etc/yum/yum-cron.conf, so that
emit_via =
debuglevel = -4

Still, I get error mails like the above quoted once in a while. I'm seeing this on some CentOS 7.2 servers. I have seen something similar on RHEL servers (but on the RHEL servers, I've simply stopped using yum-cron, in order to cut down on mail-noise).

I believe it's due to the _getMetalink method in yum/yumRepo.py. It prints an error message without considering debuglevel. I don't know how it is best fixed, though.

A way to reproduce the error:
Adjust /etc/yum/yum-cron-hourly.conf, asking yum-cron to download updates but keep silent (see above), add a temporary iptables rule which drops packets to port 443/tcp, then run /etc/cron.hourly/0yum-hourly.cron.
(For the sake of testing, it's also nice to have random_sleep=0, so that calling /etc/cron.hourly/0yum-hourly.cron doesn't start by sleeping for a while.)

Enhancement: create a yum-cron emitter calling all executables in a directory

Basically I am looking for the yum-cron notification system to be extendable. syslog and email are not the only use cases we have in our super-secure environment, where monitoring automation is easier through push vs pull. It would be nice to have a directory like /etc/yum/yum.cron.d or /etc/yum/cron.emitters.d where we can store executables that take in JSON/YAML.

Use cases:

#!/usr/bin/env ruby
require 'yaml'
require 'zabbix_sender'

message_from_yum_cron = YAML.load(ARGV[0])

sender = ZabbixSender.new(zabbix_host: "some-zabbix", port: 10051)
sender.post("host", "yum_cron_updates_failed", message_from_yum_cron['updates_failed'].size())

I would imagine the emitter would look something like:

import os
import subprocess
import yaml

class PluginEmitter(UpdateEmitter):
    """Emitter class to send messages via custom plugins."""

    default_emitters_directory = '/etc/yum/yum-cron.emitters.d'

    def __init__(self, opts, logger):
        super(PluginEmitter, self).__init__(opts)        
        self.logger = logger
        self.message = {}

    def updatesAvailable(self, summary):
        """Appends a message to the output list stating that there are
        updates available
        :param summary: A human-readable summary of the transaction.
        """
        super(PluginEmitter, self).updatesAvailable(summary)
        if 'updatesAvailable' not in self.message:
            self.message['updatesAvailable'] = []
        self.message['updatesAvailable'].append(summary)

    def updatesDownloaded(self):
        """Append a message to the output list stating that updates
        have been downloaded successfully
        """
        super(PluginEmitter, self).updatesDownloaded()
        self.message['updatesDownloaded'] = True

    def updatesInstalled(self):
        """Append a message to the output list stating that updates
        have been installed successfully
        """
        super(PluginEmitter, self).updatesInstalled()
        self.message['updatesInstalled'] = True

    def setupFailed(self, errmsg):
        """Append a message to the output list stating that setup
        failed, and then call sendMessages to emit the output
        :param errmsgs: a string that contains the error message
        """
        super(PluginEmitter, self).setupFailed(errmsg)
        if 'setupFailed' not in self.message:
            self.message['setupFailed'] = []
        self.message['setupFailed'].append(errmsg)

    def checkFailed(self, errmsg):
        """Append a message to the output stating that checking for
        updates failed, then call sendMessages to emit the output
        :param errmsgs: a string that contains the error message
        """
        super(PluginEmitter, self).checkFailed(errmsg)
        self.subject = "Yum: Failed to check for updates on %s" % self.opts.system_name
        if 'checkFailed' not in self.message:
            self.message['checkFailed'] = []
        self.message['checkFailed'].append(errmsg)

    def downloadFailed(self, errmsg):
        """Append a message to the output list stating that checking
        for group updates failed, then call sendMessages to emit the
        output
        :param errmsgs: a string that contains the error message
        """
        super(PluginEmitter, self).downloadFailed(errmsg)
        if 'downloadFailed' not in self.message:
            self.message['downloadFailed'] = []
        self.message['downloadFailed'].append(errmsg)

    def updatesFailed(self, errmsg):
        """Append a message to the output list stating that installing
        updates failed, then call sendMessages to emit the output
        :param errmsgs: a string that contains the error message
        """
        super(PluginEmitter, self).updatesFailed(errmsg)
        if 'updatesFailed' not in self.message:
            self.message['updatesFailed'] = []
        self.message['updatesFailed'].append(errmsg)

    def sendMessages(self):
        """Call each emitter and send the message in as a parameter"""

        super(PluginEmitter, self).sendMessages()
        # Don't send empty messages
        if not self.message:
            return

        # Get emitters directory
        emitters_directory = self.default_emitters_directory
        if self.opts.emitters_dir:
            emitters_directory = self.opts.emitters_dir

        try:
            for emitter_file in os.listdir(emitters_directory):
                emitter_file_path = os.path.join(emitters_directory, emitter_file)
                # We will ignore nested directories for now and only concern ourselves with executables.
                if not (os.path.isfile(emitter_file_path) and os.access(emitter_file_path, os.X_OK)):
                    self.logger.info("Emitter file is not executable '%s'" % (emitter_file_path))
                    continue
                try:
                    subprocess.check_call([emitter_file_path, yaml.dump(self.message)])
                except subprocess.CalledProcessError, e:
                    self.logger.error("Emitter '%s' failed: %s" % (emitter_file_path, e))

        except Exception, e:
            self.logger.error("Failed to list emitters directory '%s': %s" % (emitters_directory, e))

I am certain better messaging could be created by parsing the yum output before sending it to the emitters, but that would not be the initial use case.

yum list output formatting broken

The output of, for example, 'yum list installed' is arguably almost never examined without first piping it through something like grep or less. For some reason piped output is still formatted into fixed-width columns, so the third column begins at position 58 on the line, and newlines are added to align to column boundaries. There should be, at the very least, some knob to turn that behaviour off, akin to --color. See, for example, ls -1.

Here is a reference to others suffering this:
https://unix.stackexchange.com/questions/274938/how-to-get-yum-list-output-to-stay-on-one-line-when-getting-output-via-remote

My yum version is from rhel 7.4; --version reports: 3.4.3

Thanks!

Yum history always shows user as System <unset> in AIX

yum history

ID | Command line | Date and time | Action(s) | Altered

272 | erase -y bison | 2017-06-09 03:24 | Erase | 1 EE
271 | install -y bison | 2017-06-09 03:24 | Install | 1 EE
270 | erase -y git | 2017-06-09 02:22 | Erase | 1
269 | install -y git | 2017-06-09 02:22 | Install | 1
268 | erase -y git | 2017-06-08 21:55 | Erase | 1
267 | install make | 2017-06-07 09:54 | Install | 1 EE
266 | install subversion | 2017-06-07 09:49 | Install | 1
265 | install httpd | 2017-06-07 09:48 | Install | 2 EE
264 | install tla | 2017-06-07 09:44 | Install | 3 EE
263 | install bash curl expat | 2017-06-07 09:44 | Update | 2 EE
262 | install coreutils curl-d | 2017-06-07 09:43 | Install | 1
261 | erase grep-3.0-1.ppc | 2017-06-07 06:41 | Erase | 1
260 | erase -y pcre | 2017-06-07 03:00 | Erase | 1
259 | install -y pcre | 2017-06-07 03:00 | Install | 1
258 | install grep | 2017-06-07 02:09 | Install | 1
257 | erase -y pcre | 2017-06-07 01:38 | Erase | 3 EE
256 | erase -y vim | 2017-06-07 01:10 | Erase | 1
255 | install -y vim | 2017-06-07 01:09 | I, U | 2
254 | erase -y vim | 2017-06-07 00:37 | Erase | 1
253 | install -y vim | 2017-06-07 00:37 | Install | 1
history list

(0) root @ aixoss-automation-2: 6.1.0.0: /opt/freeware/etc/yum/repos.d

yum history info 263

Transaction ID : 263
Begin time : Wed Jun 7 09:44:04 2017
Begin rpmdb : 107:4862ad7227bc760f3d3f5f0c4cf3c4410c7ff593
End time : 09:44:15 2017 (11 seconds)
End rpmdb : 107:a3665a21e5b27567f29a034aa57fe2870fed5222
User : System
Return-Code : Success
Command Line : install bash curl expat gettext less perl python rsync zlib
Transaction performed with:
Installed yum-3.4.3-4.noarch installed
Packages Altered:
Updated bash-4.3.30-1.ppc @?AIX_Toolbox_local
Update 4.4-1.ppc @AIX_Toolbox_local
Updated less-481-1.ppc @?AIX_Toolbox_local
Update 487-1.ppc @AIX_Toolbox_local

"Repository listed more than once" when also using Spacewalk (RHN)

When the yum library is called on a server with both RHN (Spacewalk in my case) and local repository files configured, it reports that some repositories are listed more than once:

python -c 'import yum ; yb = yum.YumBase() ; print [(r.id + "=" + str(r.gpgcheck)) for r in yb.repos.listEnabled()]' | grep "^\[" | tr -d '[] ' | tr -d "'" | sed 's/,/ /g'
Repository dell-system-update_independent is listed more than once in the configuration
Repository dell-system-update_dependent is listed more than once in the configuration
Repository dell-omsa-indep is listed more than once in the configuration
Repository dell-omsa-specific is listed more than once in the configuration
centos7=False centos_7-repos=False dell-system-update_dependent=False dell-system-update_independent=False epel_7=False spacewalk_client-el7=False zabbix-el7=False

This isn't the case when the yum command itself is executed. Maybe the output is filtered out?

The Dell repositories are locally configured; the rest is coming from Spacewalk.

This happens on both CentOS 6 and 7.
Version of the yum-rhn-plugin: yum-rhn-plugin-2.5.5-1

add_enable_repo loses proxy configuration

When running repoquery on a host that requires a proxy to access the internet, I'm getting failures reaching a repository specified through the --repofrompath option. Given the option --repofrompath="Cloudera Manager,https://archive.cloudera.com/cm6/6.0.0/redhat7/yum", the following error appears repeatedly.

https://archive.cloudera.com/cm6/6.0.0/redhat7/yum/repodata/repomd.xml: [Errno 12] Timeout on https://archive.cloudera.com/cm6/6.0.0/redhat7/yum/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
Trying other mirror. 

It seems that when the repository definition in the option is added by repoquery using add_enable_repo, the proxy configuration in /etc/yum.conf is not preserved. If I add the following lines to the add_enable_repo function, before adding the newly defined repo to self.repos, the error goes away.

        newrepo.proxy = self.conf.proxy
        newrepo.proxy_username = self.conf.proxy_username
        newrepo.proxy_password = self.conf.proxy_password

However, this doesn't seem to be the right fix. There's lots of logic that should be able to surface the proxy settings from /etc/yum.conf (along with everything else that could be in there), but I can't figure out (yet) where they are getting lost.

yum-cron Unicode Decode Error on locale de_DE.utf8

yum-cron sends a stacktrace via email if the output contains special characters (e.g. a missing key for an rpm):

Warnung: /var/cache/yum/x86_64/7/ius/packages/php72u-mysqlnd-7.2.21-1.el7.ius.x86_64.rpm: Header V4 RSA/SHA256 Signature, Schlüssel-ID 4b274df2: NOKEY
GPG-Schlüssel 0x4B274DF2 importieren:
 Benutzerkennung     : "IUS (7) <[email protected]>"
 Fingerabdruck: c958 7a09 a11f d706 4f0c a0f4 e558 0725 4b27 4df2
 Paket    : ius-release-2-1.el7.ius.noarch (@ius)
 Von       : /etc/pki/rpm-gpg/RPM-GPG-KEY-IUS-7
Traceback (most recent call last):
  File "/usr/sbin/yum-cron", line 729, in <module>
    main()
  File "/usr/sbin/yum-cron", line 726, in main
    base.updatesCheck()
  File "/usr/sbin/yum-cron", line 649, in updatesCheck
    self.installUpdates(self.opts.update_messages)
  File "/usr/sbin/yum-cron", line 577, in installUpdates
    self.emitUpdateFailed(errmsg)
  File "/usr/sbin/yum-cron", line 708, in emitUpdateFailed
    map(lambda x: x.updatesFailed(errmsg), self.emitters)
  File "/usr/sbin/yum-cron", line 708, in <lambda>
    map(lambda x: x.updatesFailed(errmsg), self.emitters)
  File "/usr/sbin/yum-cron", line 136, in updatesFailed
    self.sendMessages()
  File "/usr/sbin/yum-cron", line 268, in sendMessages
    print "".join(self.output)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 81: ordinal not in range(128)

The relevant word is probably "Schlüssel", and it traces back to the following line:

print "".join(self.output)

A most likely similar bug is also a few lines up, documented with a patch in the Red Hat Bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=1425776
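
For reference, the usual fix shape here is to decode the byte strings before joining (as the sendMessages() change quoted earlier on this page does) and encode the result for stdout; a hedged sketch, assuming self.output mixes str and unicode:

# In yum-cron's sendMessages(), instead of: print "".join(self.output)
output = [x.decode('utf-8') if isinstance(x, str) else x for x in self.output]
print u"".join(output).encode('utf-8')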

Support on python3

Hi,
Python 2 is going to be deprecated in the coming year.
I have used the yum SDK widely for my own development.
I would like to ask for this yum SDK to be published for Python 3.
Thank you!

yum-config-manager does not sanitize `@` when adding repo from the URL

To reproduce

[root@c79bfb7185b1 yum.repos.d]# yum-config-manager --add-repo "https://user:[email protected]"
[root@c79bfb7185b1 yum.repos.d]# yum search bla
Bad id for repo: [email protected], byte = @ 4

From looking at the code,

yum/yum/misc.py

Line 1252 in 1222f37

allowed_chars = string.ascii_letters + string.digits + '-_.:'

This checks what a valid repo id is:

def validate_repoid(repoid):
    """Return the first invalid char found in the repoid, or None."""
    allowed_chars = string.ascii_letters + string.digits + '-_.:'
    for char in repoid:
        if char not in allowed_chars:
            return char
    else:
        return None

and sanitization is done in http://yum.baseurl.org/gitweb?p=yum-utils.git;a=blob;f=yum-config-manager.py;h=380a54fd89b8d2f1afc96020be20d231733b838b;hb=HEAD#l19

So IMHO, either validation should allow @, or sanitization should remove all chars outside the allowed set (ASCII letters, digits, and -_.:).
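
A hedged sketch of the second option, stripping every character that validate_repoid would reject (hypothetical helper, not what yum-config-manager actually ships):

import string

def sanitize_repoid(repoid):
    """Drop any character that misc.validate_repoid would reject."""
    allowed_chars = string.ascii_letters + string.digits + '-_.:'
    return ''.join(c for c in repoid if c in allowed_chars)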
