
Comments (7)

razorsedge commented on July 17, 2024

That is very strange. You say this is Ubuntu 12.04.2 LTS? What version of Puppet are you using? Have you tried just using this?

class { 'cloudera':
  cm_server_host => 'mainhead.es.fr.alten.com',
  use_parcels    => false,
}
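
(That drops any install_java override, so the module's default Java handling, presumably install_java => true, stays in effect and the JDK is managed by the module itself.)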


CSalamanca commented on July 17, 2024

No, it always skips the Java step.


CSalamanca commented on July 17, 2024

root@mainhead:/home/vagrant# facter |grep -i lsb
lsbdistcodename => precise
lsbdistdescription => Ubuntu 12.04.4 LTS
lsbdistid => Ubuntu
lsbdistrelease => 12.04
lsbmajdistrelease => 12

root@mainhead:/home/vagrant# dpkg -l |grep pupp
ii facter 1.7.5-1puppetlabs1 Ruby module for collecting simple facts about a host operating system
ii hiera 1.3.2-1puppetlabs1 A simple pluggable Hierarchical Database.
ii puppet 3.4.3-1puppetlabs1 Centralized configuration management - agent startup and compatibility scripts
ii puppet-common 3.4.3-1puppetlabs1 Centralized configuration management
ii puppetmaster 3.4.3-1puppetlabs1 Centralized configuration management - master startup and compatibility scripts
ii puppetmaster-common 3.4.3-1puppetlabs1 Puppet master common scripts
ii ruby-rgen 0.6.5-1puppetlabs1 A framework supporting Model Driven Software Development (MDSD)


razorsedge commented on July 17, 2024

Hmmm. You have Ubuntu 12.04.4 LTS and I have been testing on Ubuntu 12.04.2 LTS. I have no idea what the difference between those versions might be, or whether it is causing this issue.

Can you provide the Puppet log/errors? That will be useful to help in debugging.
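
In the meantime, if the CDH packages are being installed before any JDK is on the box, an explicit site-level ordering constraint might work around it. A minimal sketch, assuming the module's internal class is cloudera::java (that is how its resources show up in agent logs) and that hadoop-httpfs is the first package whose postinst needs JAVA_HOME:

class { 'cloudera':
  cm_server_host => 'mainhead.es.fr.alten.com',
  use_parcels    => false,
}

# Resource names below are assumptions about the module's internals,
# not verified: force the JDK onto the node before the package whose
# post-installation script calls Java.
Class['cloudera::java'] -> Package['hadoop-httpfs']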


CSalamanca commented on July 17, 2024

I am preinstalling Java, because I don't know why I cannot do it with your module.

My "puppet":
...
class { 'cloudera':
install_java => false,
ensure => 'present',
cm_server_host => 'mainhead.es.fr.alten.com',
use_parcels => false,
}
->
class { 'cloudera::cm::server':}
->
class { 'cloudera::cdh::hue': }
->
class { 'cloudera::cdh::mahout': }
->
class { 'cloudera::cdh::sqoop': }
->
class { 'cloudera::gplextras': }
...

Before this I was trying a MySQL configuration, but I ran into module problems again, so now I have tried your default configuration (with PostgreSQL, I guess). After the install I saw:
...
11474 ? Ss 0:10 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 115:123
17912 ? Ssl 6:40 /usr/lib/cmf/agent/build/env/bin/python /usr/lib/cmf/agent/src/cmf/agent.py --package_dir /usr/lib/cmf/service --agent_dir /var/run/cloudera-scm-agent --logfile /var/log/cloudera-scm-agent/cloudera-scm-agent.log
18016 ? Ss 0:13 /usr/lib/cmf/agent/src/cmf/../../build/env/bin/python /usr/lib/cmf/agent/src/cmf/../../build/env/bin/supervisord
18022 ? S 0:00 \_ /usr/lib/cmf/agent/build/env/bin/python /usr/lib/cmf/agent/src/cmf/supervisor_listener.py -l /var/log/cloudera-scm-agent/cmf_listener.log /run/cloudera-scm-agent/events
19442 ? S 0:00 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/postgresql/8.4/main -c config_file=/etc/postgresql/8.4/main/postgresql.conf
19444 ? Ss 0:18 \_ postgres: writer process
19445 ? Ss 0:16 \_ postgres: wal writer process
19446 ? Ss 0:05 \_ postgres: autovacuum launcher process
19447 ? Ss 0:02 \_ postgres: stats collector process
19648 ? S 0:01 /usr/lib/postgresql/8.4/bin/postgres -D /var/lib/cloudera-scm-server-db/data -k /var/run/cloudera-scm-server-db
19699 ? Ss 0:18 \_ postgres: writer process
19700 ? Ss 0:16 \_ postgres: wal writer process
19701 ? Ss 0:06 \_ postgres: autovacuum launcher process
19702 ? Ss 0:03 \_ postgres: stats collector process
19712 ? Ss 0:03 \_ postgres: scm scm 127.0.0.1(49274) idle
19713 ? Ss 0:00 \_ postgres: scm scm 127.0.0.1(49273) idle
19714 ? Ss 0:00 \_ postgres: scm scm 127.0.0.1(49276) idle
19715 ? Ss 0:00 \_ postgres: scm scm 127.0.0.1(49277) idle
19716 ? Ss 0:04 \_ postgres: scm scm 127.0.0.1(49278) idle
19689 ? S 0:00 su cloudera-scm -s /bin/bash -c nohup /usr/sbin/cmf-server
19691 ? Sl 3:44 \_ /usr/lib/jvm/j2sdk1.6-oracle/bin/java -cp .:lib/*:/usr/share/java/mysql-connector-java.jar -Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties -Dcmf.root.logger=INFO,LOGFILE -Dcmf.log.dir=/var/log/cloudera-scm-server -Dcmf.log.file=cloudera-scm-server.log -Dcmf.jetty.threshhold=WARN -Dcmf.schema.dir=/usr/share/cmf/schema -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dpython.home=/usr/share/cmf -Xmx2G -XX:MaxPermSize=256m com.cloudera.server.cmf.Main

Why mysql-connector?


Regards,
Carlos.


CSalamanca commented on July 17, 2024

Sorry, I opened another issue so as not to mix the problems. This one is only about the Java installation with your Puppet module.

My configuration:
...
class { 'cloudera':
  cm_server_host => 'mainhead.es.fr.alten.com',
  use_parcels    => false,
}
...

Here is the log:
time puppet agent --test --debug
Debug: Puppet::Type::User::ProviderUser_role_add: file rolemod does not exist
Debug: Puppet::Type::User::ProviderPw: file pw does not exist
...
Info: /Stage[main]/Cloudera::Cm::Repo/Apt::Source[cloudera-manager]/File[cloudera-manager.list]: Scheduling refresh of Exec[apt_update]
Debug: /Stage[main]/Cloudera::Cm::Repo/Apt::Source[cloudera-manager]/File[cloudera-manager.list]: The container Apt::Source[cloudera-manager] will propagate my refresh event
Notice: /Stage[main]/Cloudera::Cdh::Repo/Apt::Source[cloudera-cdh4]/File[cloudera-cdh4.list]/ensure: created
Info: /Stage[main]/Cloudera::Cdh::Repo/Apt::Source[cloudera-cdh4]/File[cloudera-cdh4.list]: Scheduling refresh of Exec[apt_update]
Debug: /Stage[main]/Cloudera::Cdh::Repo/Apt::Source[cloudera-cdh4]/File[cloudera-cdh4.list]: The container Apt::Source[cloudera-cdh4] will propagate my refresh event
Debug: Exec[apt_update]: Executing '/usr/bin/apt-get update'
Debug: Executing '/usr/bin/apt-get update'
Notice: /Stage[main]/Apt::Update/Exec[apt_update]: Triggered 'refresh' from 4 events
Debug: /Stage[main]/Apt::Update/Exec[apt_update]: The container Class[Apt::Update] will propagate my refresh event
Debug: Class[Apt::Update]: The container Stage[main] will propagate my refresh event
Debug: Apt::Source[cloudera-cdh4]: The container Class[Cloudera::Cdh::Repo] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop-client'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hadoop-client'
Notice: /Stage[main]/Cloudera::Cdh::Hadoop::Client/Package[hadoop-client]/ensure: ensure changed 'purged' to 'present'
Debug: /Stage[main]/Cloudera::Cdh::Hadoop::Client/Package[hadoop-client]: The container Class[Cloudera::Cdh::Hadoop::Client] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hue-common'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hue-common'
Notice: /Stage[main]/Cloudera::Cdh::Hue::Plugins/Package[hue-common]/ensure: ensure changed 'purged' to 'present'
Debug: /Stage[main]/Cloudera::Cdh::Hue::Plugins/Package[hue-common]: The container Class[Cloudera::Cdh::Hue::Plugins] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop-httpfs'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hadoop-httpfs'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hadoop-httpfs' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
bigtop-tomcat
The following NEW packages will be installed:
bigtop-tomcat hadoop-httpfs
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/26.0 MB of archives.
After this operation, 40.1 MB of additional disk space will be used.
Selecting previously unselected package bigtop-tomcat.
(Reading database ... 77512 files and directories currently installed.)
Unpacking bigtop-tomcat (from .../bigtop-tomcat_6.0.37-1.cdh4.6.0.p0.12~precise-cdh4.6.0_all.deb) ...
Selecting previously unselected package hadoop-httpfs.
Unpacking hadoop-httpfs (from .../hadoop-httpfs_2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0_all.deb) ...
Processing triggers for ureadahead ...
Setting up bigtop-tomcat (6.0.37-1.cdh4.6.0.p0.12~precise-cdh4.6.0) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
update-alternatives: using /etc/hadoop-httpfs/conf.empty to provide /etc/hadoop-httpfs/conf (hadoop-httpfs-conf) in auto mode.
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Cdh::Hadoop/Package[hadoop-httpfs]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hadoop-httpfs' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
bigtop-tomcat
The following NEW packages will be installed:
bigtop-tomcat hadoop-httpfs
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/26.0 MB of archives.
After this operation, 40.1 MB of additional disk space will be used.
Selecting previously unselected package bigtop-tomcat.
(Reading database ... 77512 files and directories currently installed.)
Unpacking bigtop-tomcat (from .../bigtop-tomcat_6.0.37-1.cdh4.6.0.p0.12~precise-cdh4.6.0_all.deb) ...
Selecting previously unselected package hadoop-httpfs.
Unpacking hadoop-httpfs (from .../hadoop-httpfs_2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0_all.deb) ...
Processing triggers for ureadahead ...
Setting up bigtop-tomcat (6.0.37-1.cdh4.6.0.p0.12~precise-cdh4.6.0) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
update-alternatives: using /etc/hadoop-httpfs/conf.empty to provide /etc/hadoop-httpfs/conf (hadoop-httpfs-conf) in auto mode.
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hue-plugins'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hue-plugins'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hue-plugins' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
hue-plugins
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/2255 kB of archives.
After this operation, 2596 kB of additional disk space will be used.
Selecting previously unselected package hue-plugins.
(Reading database ... 79349 files and directories currently installed.)
Unpacking hue-plugins (from .../hue-plugins_2.5.0+217-1.cdh4.6.0.p0.19~precise-cdh4.6.0_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up hue-plugins (2.5.0+217-1.cdh4.6.0.p0.19~precise-cdh4.6.0) ...
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Cdh::Hue::Plugins/Package[hue-plugins]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install hue-plugins' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
hue-plugins
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/2255 kB of archives.
After this operation, 2596 kB of additional disk space will be used.
Selecting previously unselected package hue-plugins.
(Reading database ... 79349 files and directories currently installed.)
Unpacking hue-plugins (from .../hue-plugins_2.5.0+217-1.cdh4.6.0.p0.19~precise-cdh4.6.0_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up hue-plugins (2.5.0+217-1.cdh4.6.0.p0.19~precise-cdh4.6.0) ...
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop-mapreduce'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop-yarn'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' bigtop-utils'
Debug: Class[Cloudera::Cdh::Hadoop::Client]: The container Stage[main] will propagate my refresh event
Debug: Class[Cloudera::Cdh::Hue::Plugins]: The container Stage[main] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' zookeeper'
Debug: Class[Cloudera::Cdh::Repo]: The container Stage[main] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' oozie-client'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oozie-client'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oozie-client' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
oozie-client
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/4681 kB of archives.
After this operation, 34.7 MB of additional disk space will be used.
Selecting previously unselected package oozie-client.
(Reading database ... 79354 files and directories currently installed.)
Unpacking oozie-client (from .../oozie-client_3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Processing triggers for man-db ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up oozie-client (3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Cdh::Oozie::Client/Package[oozie-client]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oozie-client' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
oozie-client
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/4681 kB of archives.
After this operation, 34.7 MB of additional disk space will be used.
Selecting previously unselected package oozie-client.
(Reading database ... 79354 files and directories currently installed.)
Unpacking oozie-client (from .../oozie-client_3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Processing triggers for man-db ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up oozie-client (3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hive'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop-hdfs'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' bigtop-jsvc'
Debug: Apt::Source[cloudera-impala]: The container Class[Cloudera::Impala::Repo] will propagate my refresh event
Debug: Class[Cloudera::Impala::Repo]: The container Stage[main] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' impala-shell'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install impala-shell'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install impala-shell' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
python-setuptools
The following NEW packages will be installed:
impala-shell python-setuptools
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/899 kB of archives.
After this operation, 3724 kB of additional disk space will be used.
Selecting previously unselected package python-setuptools.
(Reading database ... 81115 files and directories currently installed.)
Unpacking python-setuptools (from .../python-setuptools_0.6.24-1ubuntu1_all.deb) ...
Selecting previously unselected package impala-shell.
Unpacking impala-shell (from .../impala-shell_1.2.3-1.p0.352~precise-impala1.2.3_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up python-setuptools (0.6.24-1ubuntu1) ...
Setting up impala-shell (1.2.3-1.p0.352~precise-impala1.2.3) ...
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Impala/Package[impala-shell]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install impala-shell' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
python-setuptools
The following NEW packages will be installed:
impala-shell python-setuptools
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/899 kB of archives.
After this operation, 3724 kB of additional disk space will be used.
Selecting previously unselected package python-setuptools.
(Reading database ... 81115 files and directories currently installed.)
Unpacking python-setuptools (from .../python-setuptools_0.6.24-1ubuntu1_all.deb) ...
Selecting previously unselected package impala-shell.
Unpacking impala-shell (from .../impala-shell_1.2.3-1.p0.352~precise-impala1.2.3_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up python-setuptools (0.6.24-1ubuntu1) ...
Setting up impala-shell (1.2.3-1.p0.352~precise-impala1.2.3) ...
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Apt::Source[cloudera-search]: The container Class[Cloudera::Search::Repo] will propagate my refresh event
Debug: Class[Cloudera::Search::Repo]: The container Stage[main] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' solr-server'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install solr-server'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install solr-server' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
solr
The following NEW packages will be installed:
solr solr-server
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/66.1 MB of archives.
After this operation, 71.2 MB of additional disk space will be used.
Selecting previously unselected package solr.
(Reading database ... 81414 files and directories currently installed.)
Unpacking solr (from .../solr_4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0_all.deb) ...
Selecting previously unselected package solr-server.
Unpacking solr-server (from .../solr-server_4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0_all.deb) ...
Processing triggers for ureadahead ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up solr (4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0) ...
update-alternatives: using /etc/solr/conf.dist to provide /etc/solr/conf (solr-conf) in auto mode.
The following warning applies to any collections configured to
use Non-SolrCloud mode. Any such collection configuration will
need to be upgraded, see Upgrading Cloudera Search for details.
Setting up solr-server (4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0) ...
 * Starting Solr server daemon:
Error: SOLR_ZK_ENSEMBLE is not set in /etc/default/solr
invoke-rc.d: initscript solr-server, action "start" failed.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)
Error: /Stage[main]/Cloudera::Search/Package[solr-server]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install solr-server' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
solr
The following NEW packages will be installed:
solr solr-server
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/66.1 MB of archives.
After this operation, 71.2 MB of additional disk space will be used.
Selecting previously unselected package solr.
(Reading database ... 81414 files and directories currently installed.)
Unpacking solr (from .../solr_4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0_all.deb) ...
Selecting previously unselected package solr-server.
Unpacking solr-server (from .../solr-server_4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0_all.deb) ...
Processing triggers for ureadahead ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up solr (4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0) ...
update-alternatives: using /etc/solr/conf.dist to provide /etc/solr/conf (solr-conf) in auto mode.
The following warning applies to any collections configured to
use Non-SolrCloud mode. Any such collection configuration will
need to be upgraded, see Upgrading Cloudera Search for details.
Setting up solr-server (4.4.0+155-1.cdh4.5.0.p0.6~precise-search1.2.0) ...
 * Starting Solr server daemon:
Error: SOLR_ZK_ENSEMBLE is not set in /etc/default/solr
invoke-rc.d: initscript solr-server, action "start" failed.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)
Notice: /Stage[main]/Cloudera::Search/Service[solr-server]: Dependency Package[solr-server] has failures: true
Warning: /Stage[main]/Cloudera::Search/Service[solr-server]: Skipping because of failed dependencies
Notice: /Stage[main]/Cloudera::Cdh::Hadoop/Service[hadoop-httpfs]: Dependency Package[hadoop-httpfs] has failures: true
Warning: /Stage[main]/Cloudera::Cdh::Hadoop/Service[hadoop-httpfs]: Skipping because of failed dependencies
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' oozie'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oozie'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oozie' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
oozie
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/259 MB of archives.
After this operation, 268 MB of additional disk space will be used.
Selecting previously unselected package oozie.
(Reading database ... 81940 files and directories currently installed.)
Unpacking oozie (from .../oozie_3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Processing triggers for ureadahead ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up oozie (3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
update-alternatives: using /etc/oozie/conf.dist to provide /etc/oozie/conf (oozie-conf) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Cdh::Oozie/Package[oozie]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oozie' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
oozie
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/259 MB of archives.
After this operation, 268 MB of additional disk space will be used.
Selecting previously unselected package oozie.
(Reading database ... 81940 files and directories currently installed.)
Unpacking oozie (from .../oozie_3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Processing triggers for ureadahead ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up oozie (3.3.2+100-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
update-alternatives: using /etc/oozie/conf.dist to provide /etc/oozie/conf (oozie-conf) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Notice: /Stage[main]/Cloudera::Cdh::Oozie/Service[oozie]: Dependency Package[oozie] has failures: true
Warning: /Stage[main]/Cloudera::Cdh::Oozie/Service[oozie]: Skipping because of failed dependencies
Notice: /Stage[main]/Cloudera::Cdh::Oozie/Anchor[cloudera::cdh::oozie::end]: Dependency Package[oozie-client] has failures: true
Warning: /Stage[main]/Cloudera::Cdh::Oozie/Anchor[cloudera::cdh::oozie::end]: Skipping because of failed dependencies
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' bigtop-tomcat'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' pig'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install pig'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install pig' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
pig
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/57.2 MB of archives.
After this operation, 127 MB of additional disk space will be used.
Selecting previously unselected package pig.
(Reading database ... 82316 files and directories currently installed.)
Unpacking pig (from .../pig_0.11.0+42-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Processing triggers for man-db ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up pig (0.11.0+42-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
update-alternatives: using /etc/pig/conf.dist to provide /etc/pig/conf (pig-conf) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Cdh::Pig/Package[pig]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install pig' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
pig
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/57.2 MB of archives.
After this operation, 127 MB of additional disk space will be used.
Selecting previously unselected package pig.
(Reading database ... 82316 files and directories currently installed.)
Unpacking pig (from .../pig_0.11.0+42-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Processing triggers for man-db ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up pig (0.11.0+42-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
update-alternatives: using /etc/pig/conf.dist to provide /etc/pig/conf (pig-conf) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' impala'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install impala'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install impala' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
impala
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/741 MB of archives.
After this operation, 1966 MB of additional disk space will be used.
Selecting previously unselected package impala.
(Reading database ... 85547 files and directories currently installed.)
Unpacking impala (from .../impala_1.2.3-1.p0.352~precise-impala1.2.3_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up impala (1.2.3-1.p0.352~precise-impala1.2.3) ...
update-alternatives: using /etc/impala/conf.dist to provide /etc/impala/conf (impala-conf) in auto mode.
update-alternatives: using /usr/lib/impala/sbin-retail to provide /usr/lib/impala/sbin (impala) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Impala/Package[impala]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install impala' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
impala
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/741 MB of archives.
After this operation, 1966 MB of additional disk space will be used.
Selecting previously unselected package impala.
(Reading database ... 85547 files and directories currently installed.)
Unpacking impala (from .../impala_1.2.3-1.p0.352~precise-impala1.2.3_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up impala (1.2.3-1.p0.352~precise-impala1.2.3) ...
update-alternatives: using /etc/impala/conf.dist to provide /etc/impala/conf (impala-conf) in auto mode.
update-alternatives: using /usr/lib/impala/sbin-retail to provide /usr/lib/impala/sbin (impala) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' flume-ng'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install flume-ng'
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install flume-ng' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
flume-ng
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/23.7 MB of archives.
After this operation, 26.1 MB of additional disk space will be used.
Selecting previously unselected package flume-ng.
(Reading database ... 85738 files and directories currently installed.)
Unpacking flume-ng (from .../flume-ng_1.4.0+96-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up flume-ng (1.4.0+96-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
update-alternatives: using /etc/flume-ng/conf.empty to provide /etc/flume-ng/conf (flume-ng-conf) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Error: /Stage[main]/Cloudera::Cdh::Flume/Package[flume-ng]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install flume-ng' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
flume-ng
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 0 B/23.7 MB of archives.
After this operation, 26.1 MB of additional disk space will be used.
Selecting previously unselected package flume-ng.
(Reading database ... 85738 files and directories currently installed.)
Unpacking flume-ng (from .../flume-ng_1.4.0+96-1.cdh4.6.0.p0.13~precise-cdh4.6.0_all.deb) ...
Setting up hadoop-httpfs (2.0.0+1554-1.cdh4.6.0.p0.16~precise-cdh4.6.0) ...
 * Starting Hadoop httpfs:
Setting HTTPFS_HOME: /usr/lib/hadoop-httpfs
Using HTTPFS_CONFIG: /etc/hadoop-httpfs/conf
Sourcing: /etc/hadoop-httpfs/conf/httpfs-env.sh
Using HTTPFS_LOG: /var/log/hadoop-httpfs/
Using HTTPFS_TEMP: /var/run/hadoop-httpfs
Setting HTTPFS_HTTP_PORT: 14000
Setting HTTPFS_ADMIN_PORT: 14001
Setting HTTPFS_HTTP_HOSTNAME: mainhead.es.fr.alten.com
Using CATALINA_BASE: /usr/lib/hadoop-httpfs
Using HTTPFS_CATALINA_HOME: /usr/lib/bigtop-tomcat
Setting CATALINA_OUT: /var/log/hadoop-httpfs//httpfs-catalina.out
Using CATALINA_PID: /var/run/hadoop-httpfs/hadoop-httpfs-httpfs.pid

Using CATALINA_OPTS:
Adding to CATALINA_OPTS: -Dhttpfs.home.dir=/usr/lib/hadoop-httpfs -Dhttpfs.config.dir=/etc/hadoop-httpfs/conf -Dhttpfs.log.dir=/var/log/hadoop-httpfs/ -Dhttpfs.temp.dir=/var/run/hadoop-httpfs -Dhttpfs.admin.port=14001 -Dhttpfs.http.port=14000 -Dhttpfs.http.hostname=mainhead.es.fr.alten.com
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program
invoke-rc.d: initscript hadoop-httpfs, action "start" failed.
dpkg: error processing hadoop-httpfs (--configure):
subprocess installed post-installation script returned error exit status 3
Setting up flume-ng (1.4.0+96-1.cdh4.6.0.p0.13~precise-cdh4.6.0) ...
update-alternatives: using /etc/flume-ng/conf.empty to provide /etc/flume-ng/conf (flume-ng-conf) in auto mode.
Errors were encountered while processing:
hadoop-httpfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hbase'
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' hadoop-0.20-mapreduce'
Debug: Apt::Source[cloudera-manager]: The container Class[Cloudera::Cm::Repo] will propagate my refresh event
Debug: Class[Cloudera::Cm::Repo]: The container Stage[main] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' oracle-j2sdk1.6'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install oracle-j2sdk1.6'
Notice: /Stage[main]/Cloudera::Java/Package[jdk]/ensure: ensure changed 'purged' to 'present'
Debug: /Stage[main]/Cloudera::Java/Package[jdk]: The container Class[Cloudera::Java] will propagate my refresh event
Notice: /Stage[main]/Cloudera::Cdh::Hadoop/Exec[service hadoop-httpfs stop]: Dependency Package[hadoop-httpfs] has failures: true
Warning: /Stage[main]/Cloudera::Cdh::Hadoop/Exec[service hadoop-httpfs stop]: Skipping because of failed dependencies
Debug: Failed to load library 'msgpack' for feature 'msgpack'
Debug: file_metadata supports formats: pson yaml b64_zlib_yaml raw
Notice: /Stage[main]/Cloudera::Java/File[java-profile.d]/ensure: defined content as '{md5}afd3f41db0923d4bf92b4ff15d20af3b'
Debug: /Stage[main]/Cloudera::Java/File[java-profile.d]: The container Class[Cloudera::Java] will propagate my refresh event
Debug: Class[Cloudera::Java]: The container Stage[main] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' cloudera-manager-agent'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install cloudera-manager-agent'
Notice: /Stage[main]/Cloudera::Cm/Package[cloudera-manager-agent]/ensure: ensure changed 'purged' to 'present'
Debug: /Stage[main]/Cloudera::Cm/Package[cloudera-manager-agent]: The container Class[Cloudera::Cm] will propagate my refresh event
Debug: Executing '/usr/bin/dpkg-query -W --showformat '${Status} ${Package} ${Version}\n' cloudera-manager-daemons'
Debug: Executing '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install cloudera-manager-daemons'
Notice: /Stage[main]/Cloudera::Cm/Package[cloudera-manager-daemons]/ensure: ensure changed 'purged' to 'present'
Debug: /Stage[main]/Cloudera::Cm/Package[cloudera-manager-daemons]: The container Class[Cloudera::Cm] will propagate my refresh event
Debug: Executing 'diff -u /etc/cloudera-scm-agent/config.ini /tmp/puppet-file20140305-5640-efancy-0'
Notice: /Stage[main]/Cloudera::Cm/File[scm-config.ini]/content:
--- /etc/cloudera-scm-agent/config.ini 2014-02-27 03:05:18.000000000 +0000
+++ /tmp/puppet-file20140305-5640-efancy-0 2014-03-05 12:35:12.501231291 +0000
@@ -1,6 +1,9 @@
+###
+### File managed by Puppet
+###
 [General]
 # Hostname of Cloudera SCM Server
-server_host=localhost
+server_host=mainhead.es.fr.alten.com
 
 # Port that server is listening on
 server_port=7182
@@ -13,7 +16,7 @@
 listening_ip=
 
 # Hostname that Agent reports as its hostname
-# listening_hostname=
+listening_hostname=mainhead.es.fr.alten.com
 
 # Log file. The supervisord log file will be placed into
 # the same directory. Note that if the agent is being started
Info: /Stage[main]/Cloudera::Cm/File[scm-config.ini]: Filebucketed /etc/cloudera-scm-agent/config.ini to puppet with sum 42027ee976a430f1540dad7a3779fa27
Notice: /Stage[main]/Cloudera::Cm/File[scm-config.ini]/content: content changed '{md5}42027ee976a430f1540dad7a3779fa27' to '{md5}3477b68a092999087030a7a07652337f'
Info: /Stage[main]/Cloudera::Cm/File[scm-config.ini]: Scheduling refresh of Service[cloudera-scm-agent]
Debug: /Stage[main]/Cloudera::Cm/File[scm-config.ini]: The container Class[Cloudera::Cm] will propagate my refresh event
Debug: Service[cloudera-scm-agent]: Could not find cloudera-scm-agent.conf in /etc/init
Debug: Service[cloudera-scm-agent]: Could not find cloudera-scm-agent.conf in /etc/init.d
Debug: Service[cloudera-scm-agent]: Could not find cloudera-scm-agent in /etc/init
Debug: Executing '/etc/init.d/cloudera-scm-agent status'
Debug: Executing '/etc/init.d/cloudera-scm-agent start'
Notice: /Stage[main]/Cloudera::Cm/Service[cloudera-scm-agent]/ensure: ensure changed 'stopped' to 'running'
Debug: /Stage[main]/Cloudera::Cm/Service[cloudera-scm-agent]: The container Class[Cloudera::Cm] will propagate my refresh event
Info: /Stage[main]/Cloudera::Cm/Service[cloudera-scm-agent]: Unscheduling refresh on Service[cloudera-scm-agent]
Debug: Class[Cloudera::Cm]: The container Stage[main] will propagate my refresh event
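
The three "Could not find ... in /etc/init" probes above are Puppet's Upstart service provider looking for a job file before falling back to the SysV script in /etc/init.d, which is why the agent still starts. If that fallback ever misbehaves, one hedged workaround (an assumption, not something the module ships) is to pin the provider explicitly:

# Hypothetical override: manage the agent via /etc/init.d and update-rc.d
# directly, skipping the Upstart job-file probe on Ubuntu 12.04.
service { 'cloudera-scm-agent':
  ensure   => running,
  enable   => true,
  provider => 'debian',
}
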
Notice: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Dependency Package[pig] has failures: true
Notice: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Dependency Package[hadoop-httpfs] has failures: true
Notice: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Dependency Package[flume-ng] has failures: true
Notice: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Dependency Package[oozie] has failures: true
Notice: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Dependency Package[hue-plugins] has failures: true
Notice: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Dependency Package[oozie-client] has failures: true
Warning: /Stage[main]/Cloudera::Cdh/Anchor[cloudera::cdh::end]: Skipping because of failed dependencies
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[pig] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[hadoop-httpfs] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[flume-ng] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[oozie] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[solr-server] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[hue-plugins] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[impala] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[oozie-client] has failures: true
Notice: /Stage[main]/Cloudera/Anchor[cloudera::end]: Dependency Package[impala-shell] has failures: true
Warning: /Stage[main]/Cloudera/Anchor[cloudera::end]: Skipping because of failed dependencies
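
All of those "Dependency Package[...] has failures" notices trace back to the single hadoop-httpfs failure: the module fences its classes between anchor resources (cloudera::cdh::end and cloudera::end are named right in this log), so one broken package skips every anchor ordered after it and everything those anchors gate. A minimal sketch of that anchor pattern, with names from the log but bodies assumed:

# Illustrative only; anchor {} comes from puppetlabs-stdlib. Everything
# chained between the anchors must succeed before the end anchor is
# reached, so a single failed package cascades into all these warnings.
anchor { 'cloudera::cdh::begin': }
-> class { 'cloudera::cdh::hadoop': }
-> anchor { 'cloudera::cdh::end': }
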
Debug: Executing 'diff -u /etc/ganglia/gmetad.conf /tmp/puppet-file20140305-5640-11r8m1v-0'
Notice: /Stage[main]/Ganglia::Gmetad::Config/File[/etc/ganglia/gmetad.conf]/content:
--- /etc/ganglia/gmetad.conf 2012-11-14 22:42:29.000000000 +0000
+++ /tmp/puppet-file20140305-5640-11r8m1v-0 2014-03-05 12:35:17.882471607 +0000
@@ -36,7 +36,7 @@
 # data_source "my grid" 50 1.3.4.7:8655 grid.org:8651 grid-backup.org:8651
 # data_source "another source" 1.3.4.7:8655  1.3.4.8
-data_source "my cluster" localhost
+data_source "BigData" localhost
 
 # Round-Robin Archives
Info: /Stage[main]/Ganglia::Gmetad::Config/File[/etc/ganglia/gmetad.conf]: Filebucketed /etc/ganglia/gmetad.conf to puppet with sum 263a1e02d6484d3c0c2cd5319c2a1ffc
Notice: /Stage[main]/Ganglia::Gmetad::Config/File[/etc/ganglia/gmetad.conf]/content: content changed '{md5}263a1e02d6484d3c0c2cd5319c2a1ffc' to '{md5}d16a79869b7fc9e4a665400f54610e8f'
Debug: /Stage[main]/Ganglia::Gmetad::Config/File[/etc/ganglia/gmetad.conf]: The container Class[Ganglia::Gmetad::Config] will propagate my refresh event
Info: /Stage[main]/Ganglia::Gmetad::Config/File[/etc/ganglia/gmetad.conf]: Scheduling refresh of Class[Ganglia::Gmetad::Service]
Debug: Class[Ganglia::Gmetad::Config]: The container Stage[main] will propagate my refresh event
Info: Class[Ganglia::Gmetad::Service]: Scheduling refresh of Service[gmetad]
Debug: Service[gmetad]: Could not find gmetad.conf in /etc/init
Debug: Service[gmetad]: Could not find gmetad.conf in /etc/init.d
Debug: Service[gmetad]: Could not find gmetad in /etc/init
Debug: Executing 'pgrep -u nobody -f /usr/sbin/gmetad'
Debug: Executing 'pgrep -u nobody -f /usr/sbin/gmetad'
Debug: Executing '/etc/init.d/gmetad restart'
Notice: /Stage[main]/Ganglia::Gmetad::Service/Service[gmetad]: Triggered 'refresh' from 1 events
Debug: /Stage[main]/Ganglia::Gmetad::Service/Service[gmetad]: The container Class[Ganglia::Gmetad::Service] will propagate my refresh event
Debug: Class[Ganglia::Gmetad::Service]: The container Stage[main] will propagate my refresh event
Notice: /Stage[main]/Main/File[/etc/apache2/sites-enabled/010-ganglia]/ensure: created
Debug: /Stage[main]/Main/File[/etc/apache2/sites-enabled/010-ganglia]: The container Class[Main] will propagate my refresh event
Debug: Class[Main]: The container Stage[main] will propagate my refresh event
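
The final resource applied is a plain file resource from the node's own manifest (Class[Main]) that drops an Apache vhost for the Ganglia web UI. A guess at its shape; only the path is known from the log line above, everything else here is assumed:

# Hypothetical reconstruction of the resource behind the log line above.
file { '/etc/apache2/sites-enabled/010-ganglia':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/ganglia/010-ganglia',  # hypothetical source
}
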
Debug: Finishing transaction 70114661075240
Debug: Storing state
Debug: Stored state in 0.03 seconds
Notice: Finished catalog run in 273.13 seconds

real 4m45.060s
user 0m57.676s
sys 1m18.345s
root@mainhead:/etc/puppet/manifests#

from puppet-cloudera.

razorsedge avatar razorsedge commented on July 17, 2024

Is this still an issue with version 2.0.2 of this module?

from puppet-cloudera.
