...
- Register server
$ subscription-manager register --username <rhel license name> --password <password> --auto-attach
# enable epel for npm
$ yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# enable rhel-7-server-e4s-optional-rpms in /etc/yum.repos.d/redhat.repo
# to make the following rpm available (icewm dependency)
# fribidi x86_64 0.19.4-6.el7 rhel-7-server-e4s-optional-rpms
# install the following packages
$ yum install -y screen expect nodejs git wget createrepo python2-pip patch
- Install docker
$ curl https://releases.rancher.com/install-docker/17.03.sh | sh
- Install docker, alternative way - both rhel and centos (without selinux!):
$ sed -i 's/^\s*SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
$ setenforce 0
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install -y docker-ce
- Start docker
$ systemctl start docker
- Download the installer
$ git clone https://git.onap.org/integration/devtool
$ cd devtool
$ git checkout remotes/origin/beijing
$ cd onap-offline
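Before moving on, it may help to confirm that the Part 1 prerequisites are actually on PATH. This is a minimal sketch, not part of the installer: the helper name is ours, and the binary names assume the packages installed above.

```shell
# Sketch only: report any prerequisite binaries still missing from PATH.
check_tools() {
    local tool rc=0
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "missing: $tool"
            rc=1
        fi
    done
    return $rc
}

# Usage, e.g.:
#   check_tools screen expect node git wget createrepo pip patch docker
```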
Part 2. Download artifacts for offline installer

All artifacts should be downloaded by running the following script:

$ ./bash/tools/download_offline_data_by_lists.sh

The download is only as reliable as the network connectivity to the internet, so it is highly recommended to run it in screen and to save a log file of the script execution in order to check whether all artifacts were successfully collected. Each start and end of a script call should contain a timestamp in the console output. Downloading consists of 12 steps, which should be checked one-by-one at the end.

*Verify:* Please take a look at the following comments on the respective parts of the download script.

[Step 1/12 Download collected docker images]
[Step 2/12 Download manually collected docker images]
=> both image download steps are quite reliable and contain retry logic.

E.g.

== pkg #143 of 163 ==
rancher/etc-host-updater:v0.0.3 digest:sha256:bc156a5ae480d6d6d536aa454a9cc2a88385988617a388808b271e06dc309ce8
Error response from daemon: Get https://registry-1.docker.io/v2/rancher/etc-host-updater/manifests/v0.0.3: Get https://auth.docker.io/token?scope=repository%3Arancher%2Fetc-host-updater%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
WARNING [!]: warning Command docker -l error pull rancher/etc-host-updater:v0.0.3 failed. Attempt: 2/5
INFO: info waiting 10s for another try...
v0.0.3: Pulling from rancher/etc-host-updater
b3e1c725a85f: Already exists
6a710864a9fc: Already exists
d0ac3b234321: Already exists
87f567b5cf58: Already exists
16914729cfd3: Already exists
83c2da5790af: Pulling fs layer
83c2da5790af: Verifying Checksum
83c2da5790af: Download complete
83c2da5790af: Pull complete

[Step 3/12 Build own nginx image]
=> there is no hardening in this step; if it fails, it needs to be retriggered.
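The retry behaviour described for the image-download steps amounts to a wrapper along these lines. This is an illustrative sketch only: the function name, attempt count, and delay are ours, not the script's actual values.

```shell
# Sketch only: retry a command a fixed number of times with a delay between tries.
retry() {
    local attempts=$1 delay=$2
    shift 2
    local i
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        echo "WARNING: command '$*' failed. Attempt: $i/$attempts"
        if [ "$i" -lt "$attempts" ]; then
            echo "INFO: waiting ${delay}s for another try..."
            sleep "$delay"
        fi
    done
    return 1
}

# Usage, e.g.:
#   retry 5 10 docker pull rancher/etc-host-updater:v0.0.3
```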
It should end with "Successfully built <id>".

[Step 4/12 Save docker images from docker cache to tarfiles]
=> quite reliable, retry logic in place.

[Step 5/12 move infra related images to infra folder]
=> should be safe; the precondition is that step 3 did not fail.

[Step 6/12 Download git repos]
=> potentially unsafe, no hardening in place. If it does not download all git repos, it has to be executed again. The easiest way is probably to comment out the other steps in the load script and run it again.

E.g.

Cloning into bare repository 'github.com/rancher/community-catalog.git'...
error: RPC failed; result=28, HTTP code = 0
fatal: The remote end hung up unexpectedly
Cloning into bare repository 'git.rancher.io/rancher-catalog.git'...
Cloning into bare repository 'gerrit.onap.org/r/testsuite/properties.git'...
Cloning into bare repository 'gerrit.onap.org/r/portal.git'...
Cloning into bare repository 'gerrit.onap.org/r/aaf/authz.git'...
Cloning into bare repository 'gerrit.onap.org/r/demo.git'...
Cloning into bare repository 'gerrit.onap.org/r/dmaap/messagerouter/messageservice.git'...
Cloning into bare repository 'gerrit.onap.org/r/so/docker-config.git'...

[Step 7/12 Download http files]
[Step 8/12 Download npm pkgs]
[Step 9/12 Download bin tools]
=> these work quite reliably. If not all artifacts are downloaded, the easiest way is probably to comment out the other steps in the load script and run it again.

[Step 10/12 Download rhel pkgs]
=> this is the step that works on rhel only; for other platforms different packages have to be downloaded. We need just a couple of rpms, but those have a lot of dependencies (mostly because of vnc). The script also downloads all perl packages from all repos, although we need only around a dozen of them.
The following is considered a successful run of this part:

Available: 1:net-snmp-devel-5.7.2-32.el7.i686 (rhel-7-server-rpms)
    net-snmp-devel = 1:5.7.2-32.el7
Available: 1:net-snmp-devel-5.7.2-33.el7_5.2.i686 (rhel-7-server-rpms)
    net-snmp-devel = 1:5.7.2-33.el7_5.2
Dependency resolution failed, some packages will not be downloaded.
No Presto metadata available for rhel-7-server-rpms
https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: [Errno 12] Timeout on https://ftp.icm.edu.pl/pub/Linux/fedora/linux/epel/7/x86_64/Packages/p/perl-CDB_File-0.98-9.el7.x86_64.rpm: (28, 'Operation timed out after 30001 milliseconds with 0 out of 0 bytes received')
Trying other mirror.
Spawning worker 0 with 230 pkgs
Spawning worker 1 with 230 pkgs
Spawning worker 2 with 230 pkgs
Spawning worker 3 with 230 pkgs
Spawning worker 4 with 229 pkgs
Spawning worker 5 with 229 pkgs
Spawning worker 6 with 229 pkgs
Spawning worker 7 with 229 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

[Step 11/12 Download oom]
=> this step downloads the oom repo into ./resources/oom and patches it with our patch file. If this step is retried after having passed before, it will leave the oom repo inconsistent: the patch will fail and create .rej files, which will be flagged as broken later during the onap_deploy part.

E.g.
a successful run looks like this:

Checkout base commit which will be patched
Switched to a new branch 'patched_beijing'
patching file kubernetes/appc/values.yaml
patching file kubernetes/common/dgbuilder/templates/deployment.yaml
patching file kubernetes/dcaegen2/charts/dcae-cloudify-manager/templates/deployment.yaml
patching file kubernetes/dmaap/charts/message-router/templates/deployment.yaml
patching file kubernetes/onap/values.yaml
patching file kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/apps-install.sh
patching file kubernetes/policy/charts/drools/resources/scripts/update-vfw-op-policy.sh
patching file kubernetes/policy/resources/config/pe/push-policies.sh
patching file kubernetes/robot/values.yaml
patching file kubernetes/sdnc/charts/sdnc-ansible-server/templates/deployment.yaml
patching file kubernetes/sdnc/charts/sdnc-portal/templates/deployment.yaml
patching file kubernetes/uui/charts/uui-server/templates/deployment.yaml

[Step 12/12 Download sdnc-ansible-server packages]
=> there is again no retry logic in this part. It collects packages for sdnc-ansible-server in exactly the same way that container does it; however, there is a bug upstream and the image in place won't work with those packages, as the old ones are no longer available and the newer ones are not compatible with other components inside that image.

The following is the approximate size of all artifacts after a successful download:

[root@upstream-master onap-offline]# for i in `ls -1 resources/`; do du -h resources/$i | tail -1; done
126M resources/downloads
97M resources/git-repo
61M resources/http
91G resources/offline_data
36M resources/oom
638M resources/pkg
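Given the step 11 caveat about retries, it may help to check for leftover reject files before re-running it. This is a minimal sketch: the helper name is ours, and resources/oom is the path used by the download script.

```shell
# Sketch only: fail if a previous patch attempt left *.rej files in the tree.
check_rejects() {
    local tree=$1
    if find "$tree" -name '*.rej' | grep -q .; then
        echo "reject files found under $tree - remove it and re-run step 11"
        return 1
    fi
}

# Usage, e.g.:
#   check_rejects resources/oom
```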
Part 3. Populate local nexus

Prereq:
...