License Statement

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Disclaimer: Apache Trafodion is an effort undergoing incubation at the Apache Software Foundation (ASF), sponsored by the Apache Incubator PMC. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Revision History

Version    Date
2.1.0      TBD
2.0.1      July 7, 2016
2.0.0      June 6, 2016
1.3.0      January 2016

1. About This Document

This guide describes how to provision the end-user Trafodion binaries on top of an existing Hadoop environment. This install allows you to store and query data using Trafodion, either via Trafodion clients (see Trafodion Client Installation Guide) or via application code you write.

If you want to install a Trafodion developer-build environment, then please refer to the Trafodion Contributor Guide for instructions.

1.1. Intended Audience

This guide assumes that you are well-versed in Linux and Hadoop administration. If you don’t have such experience, then you should consider going through the steps required to install a Hadoop environment before attempting to install Trafodion.

The instructions contained herein apply to the following environments.

  • Single-Node Environments: Typically used when you want to evaluate Trafodion.

  • Cluster (Multi-Node) Environments: Typically used when you deploy Trafodion for application usage.

Trafodion can be provisioned on a single-node or multi-node environment. Unless specifically noted, the term cluster is used to mean both single- and multi-node environments.

The provisioning instructions apply to a diverse set of platforms:

  • Virtual Machines: Often used for evaluations and Trafodion development.

  • Cloud: Used for Production Environments as well as for Developer Environments.

  • Bare Metal: Used for Production Environments as well as for Developer Environments.

The term node is used to represent a computing platform on which operating system, Hadoop, and Trafodion software is running. Unless specifically qualified (bare-metal node, virtual-machine node, or cloud-node), node represents a computing platform in your cluster regardless of platform type.

1.2. New and Changed Information

This guide has been updated to include Ambari installation.

1.3. Notation Conventions

This list summarizes the notation conventions for syntax presentation in this manual.

  • UPPERCASE LETTERS

    Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required.

    SELECT
  • lowercase letters

    Lowercase letters, regardless of font, indicate variable items that you supply. Items not enclosed in brackets are required.

    file-name
  • [ ] Brackets

    Brackets enclose optional syntax items.

    DATETIME [start-field TO] end-field

    A group of items enclosed in brackets is a list from which you can choose one item or none.

    The items in the list can be arranged either vertically, with aligned brackets on each side of the list, or horizontally, enclosed in a pair of brackets and separated by vertical lines.

    For example:

    DROP SCHEMA schema [CASCADE]
    DROP SCHEMA schema [ CASCADE | RESTRICT ]
  • { } Braces

    Braces enclose required syntax items.

    FROM { grantee [, grantee ] ... }

    A group of items enclosed in braces is a list from which you are required to choose one item.

    The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines.

    For example:

    INTERVAL { start-field TO end-field }
    { single-field }
    INTERVAL { start-field TO end-field | single-field }
  • | Vertical Line

    A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces.

    {expression | NULL}
  • … Ellipsis

    An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times.

    ATTRIBUTE[S] attribute [, attribute] ...
    {, sql-expression } ...

    An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times.

    For example:

    expression-n ...
  • Punctuation

    Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown.

    DAY (datetime-expression)
    @script-file

    Quotation marks around a symbol such as a bracket or brace indicate the symbol is a required character that you must type as shown.

    For example:

    "{" module-name [, module-name] ... "}"
  • Item Spacing

    Spaces shown between items are required unless one of the items is a punctuation symbol such as a parenthesis or a comma.

    DAY (datetime-expression) DAY(datetime-expression)

    If there is no space between two items, spaces are not permitted. In this example, no spaces are permitted between the period and any other items:

    myfile.sh
  • Line Spacing

    If the syntax of a command is too long to fit on a single line, each continuation line is indented three spaces and is separated from the preceding line by a blank line.

    This spacing distinguishes items in a continuation line from items in a vertical list of selections.

    match-value [NOT] LIKE pattern
       [ESCAPE esc-char-expression]

1.4. Comments Encouraged

We encourage your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to user@trafodion.incubator.apache.org.

Include the document title and any comment, error found, or suggestion for improvement you have concerning this document.

2. Quick Start

This chapter provides a quick start for using the command-line Trafodion Installer to install Trafodion. If you prefer to install on an HDP distribution using Ambari, refer to the Ambari Install section.

You need the following before using the information herein:

  • A supported and running Hadoop environment with HDFS, HBase, and Hive. Refer to the Release Notes for information about supported versions.

  • A user ID with passwordless SSH among all the nodes in the cluster. This user ID must have sudo access.

The Trafodion Installer modifies and restarts your Hadoop environment.
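Before running the Installer, you may want to confirm that passwordless SSH and sudo actually work from the install node. A minimal sketch; the node names are placeholders for your own FQDNs:

```shell
# Hypothetical node list; replace with the FQDNs of your cluster nodes.
NODES="node01 node02 node03"
for node in $NODES; do
  # BatchMode=yes makes ssh fail rather than prompt for a password;
  # 'sudo -n true' fails if sudo would require a password.
  if ssh -o BatchMode=yes "$node" 'sudo -n true'; then
    echo "$node: passwordless ssh and sudo OK"
  else
    echo "$node: passwordless ssh or sudo NOT configured"
  fi
done
```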

2.1. Download Binaries

You download the Trafodion binaries from the Trafodion Download page. Download the following packages:

  • Trafodion Installer (if planning to use the Trafodion Installer)

  • Trafodion Server

You can download and install the Trafodion Clients once you’ve installed and activated Trafodion. Refer to the Trafodion Client Install Guide for instructions.

Example

$ mkdir $HOME/trafodion-download
$ cd $HOME/trafodion-download
$ # Download the Trafodion Installer binaries
$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-1.3.0.incubating/apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
Resolving http://apache.cs.utah.edu... 192.168.1.56
Connecting to http://apache.cs.utah.edu|192.168.1.56|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 68813 (67K) [application/x-gzip]
Saving to: "apache-trafodion-installer-1.3.0-incubating-bin.tar.gz"

100%[=====================================================================================================================>] 68,813       124K/s   in 0.5s

2016-02-14 04:19:42 (124 KB/s) - "apache-trafodion-installer-1.3.0-incubating-bin.tar.gz" saved [68813/68813]
$ # Download the Trafodion Server binaries
$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-1.3.0.incubating/apache-trafodion-1.3.0-incubating-bin.tar.gz
Resolving http://apache.cs.utah.edu... 192.168.1.56
Connecting to http://apache.cs.utah.edu|192.168.1.56|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 214508243 (205M) [application/x-gzip]
Saving to: "apache-trafodion-1.3.0-incubating-bin.tar.gz"

100%[=====================================================================================================================>] 214,508,243 3.90M/s   in 55s

2016-02-14 04:22:14 (3.72 MB/s) - "apache-trafodion-1.3.0-incubating-bin.tar.gz" saved [214508243/214508243]

$ ls -l
total 209552
-rw-rw-r-- 1 centos centos 214508243 Jan 12 20:10 apache-trafodion-1.3.0-incubating-bin.tar.gz
-rw-rw-r-- 1 centos centos     68813 Jan 12 20:10 apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
$

2.2. Unpack Installer

The first step in the installation process is to unpack the Trafodion Installer tar file.

Example

$ mkdir $HOME/trafodion-installer
$ cd $HOME/trafodion-download
$ tar -zxf apache-trafodion-installer-1.3.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
$ ls $HOME/trafodion-installer/installer
bashrc_default           tools                             traf_config_check           trafodion_apache_hadoop_install  traf_package_setup
build-version-1.3.0.txt  traf_add_user                     traf_config_setup           trafodion_config_default         traf_setup
dcs_installer            traf_apache_hadoop_config_setup   traf_create_systemdefaults  trafodion_install                traf_sqconfig
rest_installer           traf_authentication_conf_default  traf_getHadoopNodes         trafodion_license                traf_start
setup_known_hosts.exp    traf_cloudera_mods98              traf_hortonworks_mods98     trafodion_uninstaller
$

2.3. Collect Information

Collect/decide the following information:

2.3.1. Location of Trafodion Server-Side Binary

You need the fully-qualified name of the Trafodion server-side binary.

Example

/home/trafodion-downloads/apache-trafodion-1.3.0-incubating-bin.tar.gz

2.3.2. Java Location

You need to record the location of Java. For example, use ps -ef | grep java | grep hadoop | grep hbase to determine which Java installation HBase is running with.

Example

ps -ef | grep java | grep hadoop | grep hbase
hbase     17302  17288  1 20:35 ?        00:00:10 /usr/jdk64/jdk1.7.0_67/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -Dhdp.version=2.3.6.0-3796 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.io.tmpdir=/tmp -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201606302035 -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=128m -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-master-ip-172-31-56-238.log -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.. -Dhbase.id.str=hbase -Dhbase.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.6.0-3796/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.6.0-3796/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start

The Java location is: /usr/jdk64/jdk1.7.0_67
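The Java path can also be extracted from the HMaster command line programmatically. A sketch, assuming the process listing looks like the example above:

```shell
# Find the java executable used by the running HBase process and strip
# the trailing /bin/java to obtain the JDK location.
java_bin=$(ps -ef | grep java | grep -v grep | grep hbase \
           | awk '{for (i = 1; i <= NF; i++) if ($i ~ /\/bin\/java$/) print $i}' \
           | head -1)
echo "${java_bin%/bin/java}"
```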

2.3.3. Data Nodes

Trafodion is installed on all data nodes in your Hadoop cluster. You need to record the fully-qualified domain name of each node. For example, refer to /etc/hosts.

Example

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.31.56.238              ip-172-31-56-238.ec2.internal node01
172.31.61.110              ip-172-31-61-110.ec2.internal node02
172.31.57.143              ip-172-31-57-143.ec2.internal node03

Record the node names in a space-separated list.

Example

ip-172-31-56-238.ec2.internal ip-172-31-61-110.ec2.internal ip-172-31-57-143.ec2.internal
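With an /etc/hosts laid out as above, the space-separated list can be produced mechanically. A sketch that assumes the FQDN is the second field and skips loopback entries:

```shell
# Print the second field of every non-comment, non-localhost entry,
# joined by single spaces.
awk '!/^#/ && !/localhost/ && NF >= 2 { printf "%s%s", sep, $2; sep = " " } END { print "" }' /etc/hosts
```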

2.3.4. Trafodion Runtime User Home Directory

The Installer creates the trafodion user ID. You need to decide the home directory for this user.

The default is: /home

2.3.5. Distribution Manager URL

The Installer interacts with the Distribution Manager (for example, Apache Ambari or Cloudera Manager) to modify the Hadoop configuration.

Example

Apache Ambari URL

http://myhost.com:8080
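You can confirm the URL is reachable before starting the Installer. A sketch using Ambari's REST API; the admin/admin credentials are Ambari's defaults, not values this guide guarantees:

```shell
# An HTTP 200 here means the distribution manager's REST API is reachable
# with the given credentials. Adjust the URL, user, and password for your site.
curl -s -u admin:admin -o /dev/null -w '%{http_code}\n' \
     http://myhost.com:8080/api/v1/clusters
```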

2.4. Run Installer

You run the Installer once you’ve collected the base information as described in Collect Information above.

The following example shows a guided install of Trafodion on a three-node Hortonworks Hadoop cluster.

By default, the Trafodion Installer invokes sqlci so that you can enter the initialize trafodion; command. This is shown in the example below.
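If you need to run the metadata initialization yourself, the same command can be issued from sqlci. A minimal sketch, assuming you are the Trafodion runtime user and the Trafodion environment has been sourced:

```shell
# One-time metadata initialization, entered via a here-document.
sqlci <<'EOF'
initialize trafodion;
exit;
EOF
```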

Example

  1. Run the Trafodion Installer in guided mode.

    $ cd $HOME/trafodion-installer/installer
    $ ./trafodion_install 2>&1 | tee install.log
    ******************************
     TRAFODION INSTALLATION START
    ******************************
    
    ***INFO: testing sudo access
    ***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-06-30-21-02-38.log
    ***INFO: Config directory: /etc/trafodion
    ***INFO: Working directory: /usr/lib/trafodion
    
    ************************************
     Trafodion Configuration File Setup
    ************************************
    
    ***INFO: Please press [Enter] to select defaults.
    
    Is this a cloud environment (Y/N), default is [N]: N
    Enter trafodion password, default is [traf123]:
    Enter list of data nodes (blank separated), default []: ip-172-31-56-238.ec2.internal ip-172-31-61-110.ec2.internal ip-172-31-57-143.ec2.internal
    Do you have a set of management nodes (Y/N), default is N: N
    Enter Trafodion userid's home directory prefix, default is [/home]: /opt
    Specify location of Java 1.7.0_65 or higher (JDK), default is []: /usr/jdk64/jdk1.7.0_67
    Enter full path (including .tar or .tar.gz) of trafodion tar file []: /home/trafodion-downloads/apache-trafodion_server-2.0.1-incubating.tar.gz
    Enter Backup/Restore username (can be Trafodion), default is [trafodion]:
    Specify the Hadoop distribution installed (1: Cloudera, 2: Hortonworks, 3: Other): 2
    Enter Hadoop admin username, default is [admin]:
    Enter Hadoop admin password, default is [admin]:
    Enter full Hadoop external network URL:port (include 'http://' or 'https://'), default is []: http://ip-172-31-56-238.ec2.internal:8080
    Enter HDFS username or username running HDFS, default is [hdfs]:
    Enter HBase username or username running HBase, default is [hbase]:
    Enter HBase group, default is [hbase]:
    Enter Zookeeper username or username running Zookeeper, default is [zookeeper]:
    Enter directory to install trafodion to, default is [/opt/trafodion/apache-trafodion_server-2.0.1-incubating]:
    Start Trafodion after install (Y/N), default is Y:
    Total number of client connections per cluster, default [24]: 96
    Enter the node of primary DcsMaster, default [ip-172-31-56-238.ec2.internal]:
    Enable High Availability (Y/N), default is N:
    Enable simple LDAP security (Y/N), default is N:
    ***INFO: Trafodion configuration setup complete
    ***INFO: Trafodion Configuration File Check
    ***INFO: Testing sudo access on node ip-172-31-56-238
    ***INFO: Testing sudo access on node ip-172-31-61-110
    ***INFO: Testing sudo access on node ip-172-31-57-143
    ***INFO: Testing ssh on ip-172-31-56-238
    ***INFO: Testing ssh on ip-172-31-61-110
    ***INFO: Testing ssh on ip-172-31-57-143
    #!/bin/bash
    #
    # @@@ START COPYRIGHT @@@
    #
    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements.  See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership.  The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License.  You may obtain a copy of the License at
    #
    #   http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing,
    # software distributed under the License is distributed on an
    # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    # KIND, either express or implied.  See the License for the
    # specific language governing permissions and limitations
    # under the License.
    #
    .
    .
    .
    9. Accepting Warranty or Additional Liability. While redistributing
    the Work or Derivative Works thereof, You may choose to offer, and
    charge a fee for, acceptance of support, warranty, indemnity, or
    other liability obligations and/or rights consistent with this
    License. However, in accepting such obligations, You may act only
    on Your own behalf and on Your sole responsibility, not on behalf
    of any other Contributor, and only if You agree to indemnify, defend,
    and hold each Contributor harmless for any liability incurred by,
    or claims asserted against, such Contributor by reason of your
    accepting any such warranty or additional liability.
    
    END OF TERMS AND CONDITIONS
    
    BY TYPING "ACCEPT" YOU AGREE TO THE TERMS OF THIS AGREEMENT: ***INFO: testing sudo access
    ***INFO: Starting Trafodion Package Setup (2016-06-30-21-06-40)
    ***INFO: Installing required packages
    ***INFO: Log file located in /var/log/trafodion
    ***INFO: ... pdsh on node ip-172-31-56-238
    ***INFO: ... pdsh on node ip-172-31-61-110
    ***INFO: ... pdsh on node ip-172-31-57-143
    ***INFO: Checking if apr is installed ...
    ***INFO: Checking if apr-util is installed ...
    ***INFO: Checking if sqlite is installed ...
    ***INFO: Checking if expect is installed ...
    ***INFO: Checking if perl-DBD-SQLite* is installed ...
    ***INFO: Checking if protobuf is installed ...
    ***INFO: Checking if xerces-c is installed ...
    ***INFO: Checking if perl-Params-Validate is installed ...
    ***INFO: Checking if perl-Time-HiRes is installed ...
    ***INFO: Checking if gzip is installed ...
    ***INFO: Checking if lzo is installed ...
    ***INFO: Checking if lzop is installed ...
    ***INFO: Checking if unzip is installed ...
    ***INFO: modifying limits in /usr/lib/trafodion/trafodion.conf on all nodes
    ***INFO: create Trafodion userid "trafodion"
    ***INFO: Trafodion userid's (trafodion) home directory: /opt/trafodion
    ***INFO: testing sudo access
    Generating public/private rsa key pair.
    Created directory '/opt/trafodion/.ssh'.
    Your identification has been saved in /opt/trafodion/.ssh/id_rsa.
    Your public key has been saved in /opt/trafodion/.ssh/id_rsa.pub.
    The key fingerprint is:
    12:59:ab:d7:59:a2:0e:e8:38:1c:e9:e1:86:f6:18:23 trafodion@ip-172-31-56-238
    The key's randomart image is:
    +--[ RSA 2048]----+
    |        .        |
    |       o .       |
    |      o . . .    |
    |   . . o o +     |
    |  + . + S o      |
    | = =   =         |
    |E+B .   .        |
    |o.=.             |
    | . .             |
    +-----------------+
    ***INFO: creating .bashrc file
    ***INFO: Setting up userid trafodion on all other nodes in cluster
    ***INFO: Creating known_hosts file for all nodes
    ip-172-31-56-238
    ip-172-31-56-238 ip-172-31-61-110 ip-172-31-57-143
    ip-172-31-61-110
    ip-172-31-56-238 ip-172-31-61-110 ip-172-31-57-143
    ip-172-31-57-143
    ip-172-31-56-238 ip-172-31-61-110 ip-172-31-57-143
    ***INFO: trafodion user added successfully
    ***INFO: Trafodion environment setup completed
    ***INFO: creating sqconfig file
    ***INFO: Reserving DCS ports
    
    ***INFO: Creating trafodion sudo access file
    
    
    ******************************
     TRAFODION MODS
    ******************************
    
    ***INFO: Hortonworks installed will run traf_hortonworks_mods
    ***INFO: copying hbase-trx-hdp2_3-*.jar to all nodes
    ***INFO: hbase-trx-hdp2_3-*.jar copied correctly! Huzzah.
    USERID=admin
    PASSWORD=admin
    PORT=:8080
    {
      "resources" : [
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/configurations/service_config_versions?ser
    vice_name=HBASE&service_config_version=2",
    .
    .
    .
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/tasks/128",
          "Tasks" : {
            "cluster_name" : "trafodion",
            "id" : 128,
            "request_id" : 12,
            "stage_id" : 2
          }
        },
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/tasks/129",
          "Tasks" : {
            "cluster_name" : "trafodion",
            "id" : 129,
            "request_id" : 12,
            "stage_id" : 2
          }
        },
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/tasks/130",
          "Tasks" : {
            "cluster_name" : "trafodion",
            "id" : 130,
            "request_id" : 12,
            "stage_id" : 2
          }
        }
      ],
      "stages" : [
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/stages/0",
          "Stage" : {
            "cluster_name" : "trafodion",
            "request_id" : 12,
            "stage_id" : 0
          }
        },
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/stages/1",
          "Stage" : {
            "cluster_name" : "trafodion",
            "request_id" : 12,
            "stage_id" : 1
          }
        },
        {
          "href" : "http://ip-172-31-56-238.ec2.internal:8080/api/v1/clusters/trafodion/requests/12/stages/2",
          "Stage" : {
            "cluster_name" : "trafodion",
            "request_id" : 12,
            "stage_id" : 2
          }
        }
      ]
    }***INFO: ...polling every 30 seconds until HBase start is completed.
    ***INFO: HBase restart completed
    ***INFO: Setting HDFS ACLs for snapshot scan support
    cp: `trafodion_config' and `/home/trafinstall/trafodion-2.0.1/installer/trafodion_config' are the same file
    ***INFO: Trafodion Mods ran successfully.
    
    ******************************
     TRAFODION CONFIGURATION
    ******************************
    
    /usr/lib/trafodion/installer/..
    /opt/trafodion/apache-trafodion_server-2.0.1-incubating
    ***INFO: untarring file  to /opt/trafodion/apache-trafodion_server-2.0.1-incubating
    ***INFO: modifying .bashrc to set Trafodion environment variables
    ***INFO: copying .bashrc file to all nodes
    ***INFO: copying sqconfig file (/opt/trafodion/sqconfig) to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/sqconfig
    ***INFO: Creating /opt/trafodion/apache-trafodion_server-2.0.1-incubating directory on all nodes
    ***INFO: Start of DCS install
    ***INFO: DCS Install Directory: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1
    ***INFO: modifying /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-env.sh
    ***INFO: modifying /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-site.xml
    ***INFO: creating /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/servers file
    ***INFO: End of DCS install.
    ***INFO: Start of REST Server install
    ***INFO: Rest Install Directory: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1
    ***INFO: modifying /opt/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/conf/rest-site.xml
    ***INFO: End of REST Server install.
    ***INFO: starting sqgen
    ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110
    
    Creating directories on cluster nodes
    /usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/trafodion/apache-trafodion_server-2.0.1-incubating/etc
    /usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/trafodion/apache-trafodion_server-2.0.1-incubating/logs
    /usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/trafodion/apache-trafodion_server-2.0.1-incubating/tmp
    /usr/bin/pdsh -R exec -w ip-172-31-56-238,ip-172-31-57-143,ip-172-31-61-110 -x ip-172-31-56-238 ssh -q -n %h mkdir -p /opt/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts
    
    Generating SQ environment variable file: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env
    
    Note: Using cluster.conf format type 2.
    
    Generating SeaMonster environment variable file: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env
    
    
    Generated SQ startup script file: ./gomon.cold
    Generated SQ startup script file: ./gomon.warm
    Generated SQ cluster config file: /opt/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf
    Generated SQ Shell          file: sqshell
    Generated RMS Startup       file: rmsstart
    Generated RMS Stop          file: rmsstop
    Generated RMS Check         file: rmscheck.sql
    Generated SSMP Startup      file: ssmpstart
    Generated SSMP Stop         file: ssmpstop
    Generated SSCP Startup      file: sscpstart
    Generated SSCP Stop         file: sscpstop
    
    
    Copying the generated files to all the nodes in the cluster
    .
    .
    .
    SQ Startup script (/opt/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/gomon.cold) ran successfully. Performing further checks...
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process          Configured      Actual      Down
    -------          ----------      ------      ----
    DTM              3               3
    RMS              6               6
    DcsMaster        1               0           1
    DcsServer        3               0           3
    mxosrvr          96              0           96
    
    Thu Jun 30 21:15:29 UTC 2016
    Checking if processes are up.
    Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process          Configured      Actual      Down
    -------          ----------      ------      ----
    DTM              3               3
    RMS              6               6
    DcsMaster        1               0           1
    DcsServer        3               0           3
    mxosrvr          96              0           96
    
    Starting the DCS environment now
    starting master, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-master-ip-172-31-56-238.out
    ip-172-31-56-238: starting server, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-server-ip-172-31-56-238.out
    ip-172-31-57-143: starting server, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-3-server-ip-172-31-57-143.out
    ip-172-31-61-110: starting server, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-2-server-ip-172-31-61-110.out
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 1.
    
    The SQ environment is up!
    
    
    Process          Configured      Actual      Down
    -------          ----------      ------      ----
    DTM              3               3
    RMS              6               6
    DcsMaster        1               1
    DcsServer        3               3
    mxosrvr          96              7           89
    
    Starting the REST environment now
    starting rest, logging to /opt/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/bin/../logs/rest-trafodion-1-rest-ip-172-31-56-238.out
    
    
    
    Zookeeper listen port: 2181
    DcsMaster listen port: 23400
    
    Configured Primary DcsMaster: "ip-172-31-56-238.ec2.internal"
    Active DcsMaster            : "ip-172-31-56-238"
    
    Process          Configured      Actual      Down
    ---------        ----------      ------      ----
    DcsMaster        1               1
    DcsServer        3               3
    mxosrvr          96              94          2
    
    
    You can monitor the SQ shell log file : /opt/trafodion/apache-trafodion_server-2.0.1-incubating/logs/sqmon.log
    
    
    Startup time  0 hour(s) 2 minute(s) 19 second(s)
    Apache Trafodion Conversational Interface 2.0.1
    Copyright (c) 2015-2016 Apache Software Foundation
    >>
    --- SQL operation complete.
    >>
    
    End of MXCI Session
    
    ***INFO: Installation setup completed successfully.
    
    ******************************
     TRAFODION INSTALLATION END
    ******************************
  2. Switch to the Trafodion Runtime User and check the status of Trafodion.

    $ sudo su - trafodion
    $ sqcheck
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process          Configured      Actual      Down
    -------          ----------      ------      ----
    DTM              3               3
    RMS              6               6
    DcsMaster        1               1
    DcsServer        3               3
    mxosrvr          96              96
    $

Trafodion is now running on your Hadoop cluster. Please refer to the Activate chapter for basic instructions on how to verify Trafodion and how to perform basic management operations.

3. Introduction

Trafodion is a Hadoop add-on service that provides transactional SQL on top of HBase. Typically, you use Trafodion as the database for applications that require Online Transaction Processing (OLTP), Operational Data Store (ODS), and/or strong reporting capabilities. You access Trafodion using standard JDBC and ODBC APIs.

You may choose whether to add Trafodion to an existing Hadoop environment or to create a standalone Hadoop environment specifically for Trafodion.

This guide assumes that a Hadoop environment exists upon which you are provisioning Trafodion. Refer to Hadoop Software for information about the Hadoop software that Trafodion requires.

3.1. Security Considerations

The following users and principals need to be considered for Trafodion:

  • Provisioning User: A Linux-level user that performs the Trafodion provisioning tasks. This user ID requires sudo access and passwordless ssh among the nodes where Trafodion is installed. In addition, this user ID requires access to the Hadoop distribution, HDFS, and HBase administrative users in order to change each environment's configuration settings per Trafodion requirements. Refer to Trafodion Provisioning User for more information about the requirements and usage associated with this user ID.

  • Runtime User: A Linux-level user under which the Trafodion software runs. This user ID must be registered as a user in the Hadoop Distributed File System (HDFS) to store and access objects in HDFS, HBase, and Hive. In addition, this user ID requires passwordless access among the nodes where Trafodion is installed. Refer to Trafodion Runtime User for more information about this user ID.

  • Trafodion Database Users: Trafodion users are managed by Trafodion security features (grant, revoke, etc.), which can be integrated with LDAP if so desired. These users are referred to as database users and do not have direct access to the operating system. Refer to LDAP for details on enabling LDAP for authenticating database users. Refer to Register User, Grant, and other SQL statements in the Trafodion SQL Reference Manual for more information about managing Trafodion Database Users.

    If your environment has been provisioned with Kerberos, then the following additional information is required.

  • KDC admin principal: Trafodion requires administrator access to Kerberos to create principals and keytabs for the trafodion user, and to look-up principal names for HDFS and HBase keytabs. Refer to Kerberos for more information about the requirements and usage associated with this principal.

  • HBase keytab location: Trafodion requires administrator access to HBase to grant required privileges to the trafodion user. Refer to Kerberos for more information about the requirements and usage associated with this keytab.

  • HDFS keytab location: Trafodion requires administrator access to HDFS to create directories that store files needed to perform SQL requests such as data loads and backups. Refer to Kerberos for more information about the requirements and usage associated with this keytab.

    If your environment is using LDAP for authentication, then the following additional information is required.

  • LDAP username for database root access: When Trafodion is installed, it creates a predefined database user referred to as the DB__ROOT user. In order to connect to the database as database root, there must be a mapping between the database user DB__ROOT and an LDAP user. Refer to LDAP for more information about this option.

  • LDAP search user name: Trafodion optionally requests an LDAP username and password in order to perform LDAP operations such as LDAP search. Refer to LDAP for more information about this option.

3.2. Provisioning Options

Trafodion includes two options for installation: a plug-in integration with Apache Ambari and command-line installation scripts.

The Ambari integration provides support for Hortonworks Hadoop distributions, while the command-line Trafodion Installer supports Cloudera and Hortonworks Hadoop distributions, as well as select vanilla Hadoop installations.

The Trafodion Installer supports the SUSE and RedHat/CentOS Linux distributions. There are, however, some differences; for example, prerequisite software packages are not installed automatically on SUSE.

The Trafodion Installer automates many of the tasks required to install/upgrade Trafodion, from downloading and installing required software packages, making required configuration changes to your Hadoop environment, and creating the Trafodion runtime user ID, to installing and starting Trafodion. It is, therefore, highly recommended that you use the Trafodion Installer for initial installations and upgrades of Trafodion. These steps are referred to as "Script-Based Provisioning" in this guide. Refer to Trafodion Installer for usage information.

The command-line installer has been rewritten for the 2.1.0 release; the new installer is written in Python and replaces the legacy bash-script installer. The bash command-line installer is deprecated as of 2.1.0, but is still provided in case you experience any problems with the new installer. If so, please report those problems to the project team, since the legacy installer will soon be obsoleted.

3.3. Provisioning Activities

Trafodion provisioning is divided into the following main activities:

  • Requirements: Activities and documentation required to install the Trafodion software. These activities include tasks such as understanding hardware and operating system requirements, Hadoop requirements, which software packages need to be downloaded, which configuration settings need to be changed, and user ID requirements.

  • Prepare: Activities to prepare the operating system and the Hadoop ecosystem to run Trafodion. These activities include tasks such as installing required software packages, configuring the Trafodion Installation User, gathering information about the Hadoop environment, and modifying the configuration of different Hadoop services.

  • Install: Activities related to installing the Trafodion software. These activities include tasks such as unpacking the Trafodion tar files, creating the Trafodion Runtime User, creating Trafodion HDFS directories, installing the Trafodion software, and enabling security features.

  • Upgrade: Activities related to upgrading the Trafodion software. These activities include tasks such as shutting down Trafodion and installing a new version of the Trafodion software. The upgrade tasks vary depending on the differences between the current and new release of Trafodion. For example, an upgrade may or may not include an upgrade of the Trafodion metadata.

  • Activate: Activities related to starting the Trafodion software. These activities include basic management tasks such as starting and checking the status of the Trafodion components, and performing basic smoke tests.

  • Remove: Activities related to removing Trafodion from your Hadoop cluster.

  • Enable Security: Activities related to enabling security features on an already installed Trafodion installation. These activities include tasks such as adding Kerberos principals and keytabs, and setting up the LDAP configuration files.

3.4. Provisioning Master Node

All provisioning tasks are performed from a single node in the cluster, which must be part of the Hadoop environment you’re adding Trafodion to. This node is referred to as the "Provisioning Master Node" in this guide.

The Trafodion Provisioning User must have access to all other nodes from the Provisioning Master Node in order to perform provisioning tasks on the cluster.
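The passwordless-ssh requirement can be verified up front. The following is a minimal sketch to run from the Provisioning Master Node; the check_passwordless_ssh helper and the node names are illustrative, not part of the installer:

```shell
# Verify passwordless ssh from this node to every cluster node.
check_passwordless_ssh() {
  # BatchMode=yes makes ssh fail instead of prompting for a password.
  rc=0
  for node in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; then
      echo "OK   $node"
    else
      echo "FAIL $node"
      rc=1
    fi
  done
  return $rc
}

# Example (hypothetical node names):
# check_passwordless_ssh trafodion-1.apache.org trafodion-2.apache.org
```

Any node reported as FAIL needs its ssh keys set up before provisioning can proceed.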

3.5. Trafodion Installer

The Trafodion Installer is a set of scripts that automates most of the tasks required to install/upgrade Trafodion. You download the Trafodion Installer tar file from the Trafodion download page and then unpack the tar file.

Example

$ mkdir $HOME/trafodion-installer
$ cd $HOME/trafodion-downloads
$ tar -zxf apache-trafodion-installer-1.3.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
$

The Trafodion Installer supports two different modes:

  1. Guided Setup: Prompts for information as it works through the installation/upgrade process. This mode is recommended for new users.

  2. Automated Setup: Required information is provided in a pre-formatted bash-script configuration file, which is passed via a command-line argument when running the Trafodion Installer, thereby suppressing all prompts. There is one exception: if Kerberos is enabled on the cluster, then you will always be prompted for the KDC admin password; the installer does not store the KDC admin password anywhere.

    A template of the configuration file, trafodion_config_default, is available in the installer directory. Make a copy of the file in your directory and populate it with the needed information.

    Automated Setup is recommended since it allows you to record the required provisioning information ahead of time. Refer to Automated Setup for information about how to populate this file.

3.5.1. Usage

The following shows help for the Trafodion Installer.

./trafodion_install --help

This script will install Trafodion. It will create a configuration
file (if one has not been created), setup of the environment needed
for Trafodion, configure HBase with Hbase-trx and co-processors needed,
and install a specified Trafodion build.

Options:
    --help             Print this message and exit
    --accept_license   If provided, the user agrees to accept all the
                       provisions in the Trafodion license.  This allows
                       for automation by skipping the display and prompt of
                       the Trafodion license.
    --config_file      If provided, all install prompts will be
                       taken from this file and not prompted for.

3.5.2. Install vs. Upgrade

The Trafodion Installer automatically detects whether you’re performing an install or an upgrade by looking for the Trafodion Runtime User in the /etc/passwd file.

  • If the user ID doesn’t exist, then the Trafodion Installer runs in install mode.

  • If the user ID exists, then the Trafodion Installer runs in upgrade mode.
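The same presence check can be run by hand before invoking the installer, to see which mode it will choose on a given node. A minimal sketch; trafodion is the default runtime user name:

```shell
# Mirrors the installer's mode detection: the runtime user's presence
# in the passwd database selects upgrade mode, its absence install mode.
if getent passwd trafodion >/dev/null 2>&1; then
  echo "trafodion user exists: installer will run in upgrade mode"
else
  echo "trafodion user absent: installer will run in install mode"
fi
```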

3.5.3. Guided Setup

By default, the Trafodion Installer runs in Guided Setup mode, which means that it prompts you for information during the install/upgrade process.

Refer to the following sections for examples:

3.5.4. Automated Setup

The --config_file option runs the Trafodion Installer in Automated Setup mode.

Before running the Trafodion Installer with this option, you do the following:

  1. Copy the trafodion_config_default file.

    Example

    cp trafodion_config_default my_config
  2. Edit the new file using information you collect in the Gather Configuration Information section in the Prepare chapter.

  3. Run the Trafodion Installer in Automated Setup Mode

    Example

    ./trafodion_installer --config_file my_config
Your Trafodion Configuration File contains the password for the Trafodion Runtime User and for the Distribution Manager. Therefore, we recommend that you secure the file in a manner that matches the security policies of your organization.
If you are installing Trafodion on a version of Hadoop that has been instrumented with Kerberos, you will be asked for a password associated with a Kerberos administrator.
Example: Creating a Trafodion Configuration File

Using the instructions in Gather Configuration Information in the Prepare chapter, you record the following information.

ID Information Setting
ADMIN

Administrator user name for Apache Ambari or Cloudera Manager.

admin

ADMIN_PRINCIPAL

Kerberos principal for the KDC admin user including the realm.

BACKUP_DCS_NODES

List of nodes where to start the backup DCS Master components.

CLOUD_CONFIG

Whether you’re installing Trafodion on a cloud environment.

N

CLOUD_TYPE

What type of cloud environment you’re installing Trafodion on.

CLUSTER_NAME

The name of the Hadoop Cluster.

Cluster 1

DB_ROOT_NAME

LDAP name used to connect as database root user

trafodion

DCS_BUILD

Tar file containing the DCS component.

DCS_PRIMARY_MASTER_NODE

The node where the primary DCS should run.

DCS_SERVER_PARM

Number of concurrent client sessions per node.

8

ENABLE_HA

Whether to run DCS in high-availability (HA) mode.

N

EPEL_RPM

Location of EPEL RPM. Specify if you don’t have access to the Internet.

FLOATING_IP

IP address if running DCS in HA mode.

HADOOP_TYPE

The type of Hadoop distribution you’re installing Trafodion on.

cloudera

HBASE_GROUP

Linux group name for the HBASE administrative user.

hbase

HBASE_KEYTAB

Kerberos service keytab for HBase admin principal.

Default based on distribution

HBASE_USER

Linux user name for the HBASE administrative user.

hbase

HDFS_KEYTAB

Kerberos service keytab for HDFS admin principal.

Default based on distribution

HDFS_USER

Linux user name for the HDFS administrative user.

hdfs

HOME_DIR

Root directory under which the trafodion home directory should be created.

/home

INIT_TRAFODION

Whether to automatically initialize the Trafodion database.

Y

INTERFACE

Interface type used for $FLOATING_IP.

JAVA_HOME

Location of Java 1.7.0_65 or higher (JDK).

/usr/java/jdk1.7.0_67-cloudera

KDC_SERVER

Location of Kerberos server for admin access

LDAP_CERT

Full path to TLS certificate.

LDAP_HOSTS

List of nodes where LDAP Identity Store servers are running.

LDAP_ID

List of LDAP unique identifiers.

LDAP_LEVEL

LDAP Encryption Level.

LDAP_PASSWORD

Password for LDAP_USER.

LDAP_PORT

Port used to communicate with LDAP Identity Store.

LDAP_SECURITY

Whether to enable LDAP authentication.

N

LDAP_USER

LDAP Search user name.

LOCAL_WORKDIR

The directory where the Trafodion Installer is located.

/home/centos/trafodion-installer/installer

MANAGEMENT_ENABLED

Whether your installation uses separate management nodes.

N

MANAGEMENT_NODES

The FQDN names of management nodes, if any.

MAX_LIFETIME

Kerberos ticket lifetime for Trafodion principal

24hours

NODE_LIST

The FQDN names of the nodes where Trafodion will be installed.

trafodion-1 trafodion-2

PASSWORD

Administrator password for Apache Ambari or Cloudera Manager.

admin

RENEW_LIFETIME

Kerberos ticket renewal lifetime for Trafodion principal

7days

REST_BUILD

Tar file containing the REST component.

SECURE_HADOOP

Indicates whether Hadoop has Kerberos enabled

Based on whether Kerberos is enabled for your Hadoop installation

TRAF_HOME

Target directory for the Trafodion software.

/home/trafodion/apache-trafodion-1.3.0-incubating-bin

START

Whether to start Trafodion after install/upgrade.

Y

SUSE_LINUX

Whether you’re installing Trafodion on SUSE Linux.

false

TRAF_PACKAGE

The location of the Trafodion installation package tar file or core installation tar file.

/home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz

TRAF_KEYTAB

Kerberos keytab for trafodion principal.

Default keytab based on distribution

TRAF_KEYTAB_DIR

Location of Kerberos keytab for the trafodion principal.

Default location based on distribution

TRAF_USER

The Trafodion runtime user ID. Must be trafodion in this release.

trafodion

TRAF_USER_PASSWORD

The password used for the trafodion:trafodion user ID.

traf123

URL

FQDN and port for the Distribution Manager’s REST API.

trafodion-1.apache.org:7180

Next, you edit my_config to contain the following:

#!/bin/bash
# @@@ START COPYRIGHT @@@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
# @@@ END COPYRIGHT @@@

#====================================================
# Trafodion Configuration File
# This file contains default values for the installer.

# Users can also edit this file and provide values for all parameters
# and then specify this file on the run line of trafodion_install.
# Example:
# ./trafodion_install --config_file <Trafodion-config-file>
# WARNING: This mode is for advanced users!
#
#=====================================================


#=====================================================
#Must be set to 'true' if on a SUSE linux system. If on another type of system
#this must be set to false.

export SUSE_LINUX="false"

# The working directory where Trafodion installer untars files, etc.
# do not change this unless you really know what you are doing
export TRAF_WORKDIR="/usr/lib/trafodion"

# This is the directory where the installer scripts were untarred to
export LOCAL_WORKDIR="/home/centos/trafodion-installer/installer"

# The maximum number of dcs servers, i.e. client connections
export DCS_SERVERS_PARM="8"

# "true" if this is an upgrade
export UPGRADE_TRAF="false"

# Trafodion userid, This is the userid the Trafodion instance will run under
export TRAF_USER="trafodion"

# Trafodion userid's password
export TRAF_USER_PASSWORD="traf123"

# a blank separated list of nodes in your cluster
# node names should include full domain names
#This can not be left blank!
export NODE_LIST="trafodion-1 trafodion-2"

# count of nodes in node list
export node_count="2"

# another list of the same nodes in NODE_LIST but specified in a pdsh usable format
# i.e.  "-w centos-cdh[1-6]"  or "-w node1 -w node2 -w node3"
export MY_NODES="-w trafodion-[1-2]"

# the directory prefix for the trafodion userid's $HOME directory
# i.e. /opt/home, not /opt/home/trafodion
export HOME_DIR="/home"

#JAVA HOME must be a JDK. Must include FULL Path. Must be 1.7.0_65 or higher.

export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"

# If your machine doesn't have external internet access then you must
# specify the location of the EPEL rpm, otherwise leave blank and it
# will be installed from the internet
export EPEL_RPM=""

# full path of the Trafodion package tar file
export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz"

# if TRAF_PACKAGE wasn't specified then these two values must be specified
# TRAF_BUILD is the trafodion_server tar file
# DCS_BUILD is the DCS tar file
# REST_BUILD is the REST tar file
export TRAF_BUILD=""
export DCS_BUILD=""
export REST_BUILD=""
# Either "cloudera" or "hortonworks" (all lowercase)
export HADOOP_TYPE="cloudera"

# The URL for Cloudera/Hortonworks REST API (i.e. node1.host.com:8080)
export URL="trafodion-1.apache.org:7180"

# Cloudera/Hortonworks UI admin's userid and password
export ADMIN="admin"
export PASSWORD="admin"

# hadoop cluster name
export CLUSTER_NAME=""

# the Hadoop HDFS userid
export HDFS_USER="hdfs"

# the Hadoop HBase userid and group
export HBASE_USER="hbase"
export HBASE_GROUP="hbase"

# The hadoop HBase service name
export HBASE="hbase"

# full path of where to install Trafodion to
# Example is used below. If $HOME_DIR or $TRAF_USER have been changed
# then this will need to be changed.
# On an upgrade, it is recommend to choose a different directory.
# First time install : /home/trafodion/traf
# On Upgrade: /home/trafodion/traf_<date>
# By doing this the previous version will remain and allow for an easier rollback.
export TRAF_HOME="/home/trafodion/apache-trafodion-1.3.0-incubating-bin"

# Start Trafodion after install completes
export START="Y"

# initialize trafodion after starting
export INIT_TRAFODION="Y"

# full path to the sqconfig file
# Default is to leave as is and this file will be created.
export SQCONFIG=""

#-----------------  security configuration information -----------------
#Enter in Kerberos details if Kerberos is enabled on your cluster

#Indicate Kerberos is enabled
export SECURE_HADOOP="N"

#Location of Kerberos server for admin access
export KDC_SERVER=""

#Kerberos Admin principal used to create Trafodion principals and keytabs
#Please include realm, for example: trafadmin/admin@MYREALM.COM
export ADMIN_PRINCIPAL=""

#Keytab for HBase admin user, used to grant Trafodion user CRWE privilege
export HBASE_KEYTAB=""

#Keytab for HDFS admin user, used to create data directories for Trafodion
export HDFS_KEYTAB=""

#Kerberos ticket defaults for the Trafodion user
export MAX_LIFETIME="24hours"
export RENEW_LIFETIME="7days"

#Trafodion keytab information
export TRAF_KEYTAB=""
export TRAF_KEYTAB_DIR=""

#Enter in LDAP configuration information
#Turn on authentication - MUST have existing LDAP configured.
export LDAP_SECURITY="Y"

#Name of LDAP Config file
export LDAP_AUTH_FILE="traf_authentication_config_`hostname -s`"

#LDAP name to map to database user DB__ROOT
DB_ROOT_NAME="trafodion"
#-----------------      end security configuration     -----------------

export CONFIG_COMPLETE="true"

Once completed, run the Trafodion Installer with the --config_file option.

Refer to the following sections for examples:

3.6. Trafodion Provisioning Directories

Trafodion stores its provisioning information in the following directories on each node in the cluster:

  • /etc/trafodion: Configuration information.

  • /usr/lib/trafodion: Copies of the files required by the installer.

4. Requirements

Trafodion requires an x86 version of Linux.

The current release of Trafodion has been tested with:

  • 64-bit Red Hat Enterprise Linux (RHEL) or CentOS 6.5 - 6.8

  • Cloudera CDH 5.4 - 5.7

  • Hortonworks HDP 2.3 - 2.4

Other OS releases may work, too. The Trafodion project is currently working on better support for more distributions and for non-distribution versions of Hadoop.

4.1. General Cluster and OS Requirements and Recommendations

64-bit x86 instruction set running a Linux distribution is required. Further, Trafodion assumes an environment based on the requirements of the tested Hadoop distributions/services.

4.1.1. Hardware Requirements and Recommendations

Single-Node Cluster

It is possible to run Trafodion on a single-node sandbox environment. Typically, any sandbox running a Hadoop distribution can be used. A typical single-node configuration uses 4-8 cores with 16 GB of memory, and 20 GB free disk space.

Multi-Node Cluster

For multi-node end-user clusters, your typical HBase environment should suffice for Trafodion. Typically, memory configurations range between 64-128 GB per node, with a minimum requirement of 16 GB. The cluster size can span from 1 to n nodes; a minimum of two nodes is recommended. A minimum of two cores is required regardless of whether you’re deploying Trafodion on a bare-metal or virtual environment.

Recommended configurations:

Attribute Guidance

Processors per Node

• Small: 2 cores
• Medium: 4 cores
• Large: 8+ cores

Memory per Node

• Small: 16 GB
• Medium: 64 GB
• Large: 128 GB

Concurrency:Nodes

• Two Small Nodes: Four concurrent queries
• Two Medium Nodes: 64 concurrent queries
• Two Large Nodes: 256 concurrent queries

4.1.2. OS Requirements and Recommendations

Please verify these requirements on each node you will install Trafodion on:

Function Requirement Verification Guidance

Linux

64-bit version of Red Hat 6.5 or later, or SUSE SLES 11.3 or later.

sshd

The ssh daemon is running on each node in the cluster.

ps aux | grep sshd
sudo netstat -plant | grep :22

ntpd

The ntp daemon is running and synchronizing time on each node in the cluster.

ps aux | grep ntp
ntpq -p

FQDN

/etc/hosts is set up for fully-qualified node names (FQDN).
/etc/resolv.conf is configured to use a name server.

• hostname --fqdn shows the fully-qualified node name, if any.
• The fully-qualified node name is part of the /etc/hosts file.
• host -T <FQDN> responds if using a DNS server; it times out otherwise.
• ssh among nodes using ssh <FQDN>.

Port Availability

The Linux Kernel Firewall (iptables) has either been disabled or ports required by Trafodion have been opened.

lsmod | grep ip_tables checks whether iptables is loaded. If not, no further checking is needed.
sudo iptables -nL | grep <port> checks the configuration of a port. An empty response indicates no rule for the port, which often means the port is not open.

passwordless ssh

The user name used to provision Trafodion must have passwordless ssh access to all nodes.

ssh to the nodes, ensure that no password prompt appears.

sudo privileges

The user name used to provision Trafodion must have sudo access to a number of root functions.

sudo echo "test" on each node.

bash

Available for shell-script execution.

bash --version

java

Available to run the Trafodion software. Same version as HBase is using.

java -version

perl

Available for script execution.

perl --version

python

Available for script execution.

python --version

yum

Available for installs, updates, and removal of software packages.

yum --version

rpm

Available for installs, updates, and removal of software packages.

rpm --version

scp

Available to copy files among nodes in the cluster.

scp --help

curl

Available to transfer data with URL syntax.

curl --version

wget

Available to download files from the Web.

wget --version

pdsh

Available to run shell commands in parallel.

pdsh -V

pdcp

Available to copy files among nodes in parallel. Part of the pdsh package.

pdcp -V
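The tool checks above can be scripted into a single pass per node. A sketch; the tool list mirrors the table above, and any tool reported missing can be installed per the Software Packages section:

```shell
# Report which of the required command-line tools are missing on this node.
TOOLS="bash java perl python yum rpm scp curl wget pdsh pdcp"
missing=""
for t in $TOOLS; do
  command -v "$t" >/dev/null 2>&1 || missing="$missing $t"
done
if [ -n "$missing" ]; then
  echo "Missing tools:$missing"
else
  echo "All required tools found"
fi
```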

4.1.3. IP Ports

The following table lists the default ports used by the different Trafodion components plus the configuration file and configuration attribute associated with each port setting.

Default Port Configuration File Configuration Entry Required Range Protocol Comment

4200

rest-site.xml
trafodion.rest.port

Yes

1

REST

Trafodion REST Server.

4201

rest-site.xml
trafodion.rest.https.port

Yes

1

HTTPS

Trafodion REST Server (HTTPS).

23400

dcs-site.xml
dcs.master.port

Yes

n

binary

Start of Trafodion DCS port range. (37800 for Trafodion 1.1)

24400

dcs-site.xml
dcs.master.info.port

Yes

1

HTTP

DCS master web GUI. (40010 for Trafodion 1.1)

24410

dcs-site.xml
dcs.server.info.port

Yes

n

HTTP

Start of range for DCS server web GUIs. (40030 for Trafodion 1.1)

50030

mapred-site.xml
mapred.job.tracker.http.address

No

1

HTTP

MapReduce Job Tracker web GUI.

50070

hdfs-site.xml
dfs.http.address

No

1

HTTP

HDFS Name Node web GUI.

50075

hdfs-site.xml
dfs.datanode.http.address

No

1

HTTP

HDFS Data Node web GUI.

50090

hdfs-site.xml
dfs.secondary.http.address

No

1

HTTP

HDFS Secondary Name Node web GUI.

60010

hbase-site.xml
hbase.master.info.port

No

1

HTTP

HBase Master web GUI.

60030

hbase-site.xml
hbase.regionserver.info.port

No

1

HTTP

HBase Region Server web GUI.

There are two port ranges used by Trafodion.

  • 23400 is a range, to allow multiple mxosrvr processes on each node. Allow a range of a few ports, enough to cover all the servers per node that are listed in the "servers" file in the DCS configuration directory.

  • 24410 is a range as well, enough to cover the DCS servers per node, usually 1 or 2.
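The required width of the 23400 range can be derived from the DCS "servers" file. A sketch; the configuration path shown in the usage comment is an assumption — adjust it to where DCS is installed on your cluster:

```shell
# Each line of the DCS "servers" file is "<hostname> [<server-count>]";
# the largest per-node count sets how many ports past 23400 are needed.
max_servers_per_node() {
  awk '{ n = (NF >= 2 ? $2 : 1); if (n > max) max = n } END { print max + 0 }' "$1"
}

# Hypothetical location; adjust for your installation:
# n=$(max_servers_per_node /home/trafodion/dcs/conf/servers)
# echo "open ports 23400-$((23400 + n - 1)) on every node"
```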

On top of the ports identified above, you also need the ports required by your Hadoop distribution.

If you have Kerberos or LDAP enabled, then ports required by these products need to be opened as well.

Although not all the ports will be used on every node of the cluster, you need to open most of them for all the nodes in the cluster that have Trafodion, HBase, or HDFS servers on them.
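The firewall verification from the OS requirements table can be scripted as a helper that scans saved iptables output for a given port. A sketch; the port_has_rule name is illustrative, and the sudo iptables -nL step must be run by hand since it needs root:

```shell
# Succeeds if any rule in the saved iptables listing mentions the port.
port_has_rule() {
  # $1: saved output of "sudo iptables -nL"; $2: port number
  printf '%s\n' "$1" | grep -qw "dpt:$2"
}

# Usage on a node (repeat for each Trafodion port, e.g. 4200, 4201, 24400):
# rules=$(sudo iptables -nL)
# port_has_rule "$rules" 4200 && echo "a rule touches port 4200; inspect it"
```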

4.2. Prerequisite Software

4.2.1. Hadoop Software

Trafodion runs as an add-on service on Hadoop distributions. The following Hadoop services and their dependencies must be installed and running on the cluster where you intend to install Trafodion:

  • Hadoop Distributed File System (HDFS)

  • YARN with MapReduce version 2

  • ZooKeeper

  • HBase

  • Hive

  • Apache Ambari (Hortonworks) or Cloudera Manager (Cloudera) with associated embedded databases.

The following distributions have been tested with Trafodion.1

Distribution Version HBase Version Installation Documentation

Cloudera Distribution Including Apache Hadoop (CDH)

5.4

1.0

CHD 5.4 Installation

Hortonworks Data Platform (HDP)

2.3

1.1

HDP 2.3 Installation

  1. Future releases of Trafodion will move away from distribution-specific integration. Instead, Trafodion will be tested with specific versions of Hadoop, HDFS, HBase, and other services/products only.

  2. When possible, install using parcels to simplify the installation process.

Trafodion does not yet support installation on a non-distribution version of Hadoop; that is, Hadoop downloaded from the Apache web site. This restriction will be lifted in a later release of Trafodion.

4.2.2. Software Packages

In addition to the software packages required to run the different Hadoop services listed above (for example, Java), Trafodion requires supplementary software to be installed on the cluster before Trafodion itself is installed. These are Linux tools that are not typically packaged as part of the core Linux distribution.

For RedHat/CentOS, the Trafodion Installer automatically attempts to get a subset of these packages over the Internet. If the cluster’s access to the Internet is disabled, then you need to manually download the packages and make them available for installation.
Package Usage Installation

EPEL

Add-on packages that complete the Linux distribution.

Download
Fedora RPM

pdsh

Parallelize shell commands during install and Trafodion runtime utilities.

yum install pdsh

sqlite

Internal configuration information managed by the Trafodion Foundation component.

yum install sqlite

expect

Not used?

yum install expect

perl-DBD-SQLite

Allows Perl scripts to connect to SQLite.

yum install perl-DBD-SQLite

perl-Params-Validate

Validates method/function parameters in Perl scripts.

yum install perl-Params-Validate

perl-Time-HiRes

High resolution alarm, sleep, gettimeofday, interval timers in Perl scripts.

yum install perl-Time-HiRes

protobuf

Data serialization.

yum install protobuf

xerces-c

C++ XML parsing.

yum install xerces-c

gzip

Data compress/decompress.

yum install gzip

rpm-build2

Build binary and source software packages.

yum install rpm-build

apr-devel2

Support files used to build applications using the APR library.

yum install apr-devel

apr-util-devel2

Support files used to build applications using the APR utility library.

yum install apr-util-devel

doxygen2

Generate documentation from annotated C++ sources.

yum install doxygen

gcc2

GNU Compiler Collection

yum install gcc

gcc_c++2

GNU C++ compiler.

yum install gcc_c++
  1. log4c++ was recently withdrawn from public repositories. Therefore, you will need to build the log4c++ RPM on your system and then install the RPM using the procedure described in log4c++ Installation.

  2. Software package required to build log4c++. Not required otherwise. These packages are not installed by the Trafodion Installer in this release.
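On RedHat/CentOS, the yum-installable packages from the table above can be pulled in one pass. A sketch, to be run with sudo on each node, assuming Internet access or a local yum repository (log4c++ and the optional build packages are handled separately as noted):

```shell
# Prerequisite packages from the table above, in one list.
PKGS="pdsh sqlite expect perl-DBD-SQLite perl-Params-Validate \
perl-Time-HiRes protobuf xerces-c gzip"

echo "Packages to install: $PKGS"
# Run by hand on each node (needs root):
# sudo yum install -y $PKGS
```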

The Trafodion Installer requires Internet access to install the required software packages.

4.3. Trafodion User IDs and Their Privileges

4.3.1. Trafodion Runtime User

The trafodion:trafodion user ID is created as part of the installation process. The default password is: traf123.

Trafodion requires that either HDFS ACL support or Kerberos is enabled. The Trafodion Installer will enable HDFS ACL and Kerberos support. Refer to Kerberos for more information about the requirements and usage of Kerberos in Trafodion. Refer to Apache HBase™ Reference Guide for security in HBase.

Do not create the trafodion:trafodion user ID in advance. The Trafodion Installer uses the presence of this user ID to determine whether you’re doing an installation or upgrade.

4.3.2. Trafodion Provisioning User

Typically, the Trafodion Installer is used for Trafodion installations. It requires access to the user IDs documented below.

Linux Installation User

The user ID that performs the Trafodion installation steps. Typically, this user ID runs the Trafodion Installer.

Requirements:

  • User name or group cannot be trafodion.

  • Passwordless ssh access to all nodes in the cluster.

  • Internet access to download software packages.

  • requiretty must be disabled in /etc/sudoers.

  • sudo1 access to:

    • Download and install software packages.

    • Modify /etc/sudoers.d (allow the trafodion user to modify floating IP: ip and arping).

    • Create the trafodion user ID and group.

    • Install Trafodion software into the HBase environment.

    • Run Java version command on each node in the cluster.

    • Run Hadoop version command on each node in the cluster.

    • Run HBase version command on each node in the cluster.

    • Create directories and files in:

      • /etc

      • /usr/lib

      • /var/log

    • Invoke su to execute commands as other users; for example, trafodion.

    • Edit sysctl.conf and activate changes using sysctl -p:

      • Modify kernel limits.

      • Reserve IP ports.

(1) sudo is required in the current release of Trafodion. This restriction may be relaxed in later releases. Alternative mechanisms for privileged access (such as running as root or sudo alternative commands) are not supported.

Distribution Manager User

A user ID that can change the configuration using Apache Ambari or Cloudera Manager. The Trafodion Installer makes REST requests to the distribution manager using this user ID to perform configuration and control functions.

Requirements:

  • Administrator user name and password.

  • URL to Distribution Manager’s REST API.

HDFS Administrator User

The HDFS super user. Required to create directories and change security settings, as needed. The Trafodion Installer uses su to run commands under this user ID.

Requirements:

  • HDFS Administrator user name.

  • Write access to home directory on the node where the Distribution Manager is running.

  • For Kerberos enabled installations, location of the keytab for the HDFS service principal.

HBase Administrator User

The HBase super user. Required to change directory ownership in HDFS. For Kerberos enabled installations, the HBase super user is needed to grant the trafodion user create, read, write, and execute privileges.

Requirements:

  • HBase Administrator user name and group.

  • Read access to hbase-site.xml.

  • For Kerberos enabled installations, location of the keytab for the HBase service principal.

Kerberos Administrator User

The Kerberos administrator. Required to create Trafodion principals and keytabs on a cluster where Kerberos is enabled.

Requirements:

  • Kerberos Administrator principal name, including the realm.

  • Kerberos Administrator password.

The following configuration changes are recommended but not required.

The Trafodion Installer does not make these changes.

The trafodion user ID should not be given sudo privileges other than those specified in this manual. We also recommend that this user ID be locked (sudo passwd -l trafodion) once the installation/upgrade activity has been completed. Users that need to issue commands as the trafodion ID should do so using sudo; for example, sudo -u trafodion -i.

These settings are configured in the hadoop-env.sh file.

  • DataNode Java Heap Size: 2 GB. Use this setting for a large configuration.

  • NameNode Java Heap Size: 2 GB. Use this setting for a large configuration.

  • Secondary NameNode Java Heap Size: 2 GB. Use this setting for a large configuration.

These settings are configured in the hbase-site.xml file.

  • hbase.rpc.timeout: 10 minutes. Sixty (60) seconds is the default. This setting depends on table size: increase the value for big tables and make it the same value as hbase.client.scanner.timeout.period. We have found that increasing the setting to six-hundred (600) seconds prevents many of the timeout-related errors we encountered, such as OutOfOrderNextException errors.

  • hbase.client.scanner.timeout.period: 10 minutes. Similar to the hbase.rpc.timeout setting. Sixty (60) seconds is the default. Depending on the size of a user table, we have experienced timeout failures on count(*) and update statistics commands with the default; the underlying issue is the length of the coprocessor execution within HBase. NOTE: HBase uses the smaller of hbase.rpc.timeout and hbase.client.scanner.timeout.period to calculate the scanner timeout.

  • hbase.snapshot.master.timeoutMillis and hbase.snapshot.region.timeout: 10 minutes. HBase’s default setting is 60000 milliseconds. If you experience timeout issues with HBase snapshots when you use the Trafodion Bulk Loader or other statements, set these two HBase properties to 10 minutes (600,000 milliseconds).

  • hbase.hregion.max.filesize: 107374182400 bytes. HBase’s default setting is 10737418240 (10 GB). We have increased the setting to 107374182400 (100 GB), which reduces the number of HStoreFiles per table and appears to reduce disruptions to active transactions from region splitting.

  • hbase.hstore.blockingStoreFiles: 10.

  • hbase.regionserver.handler.count: <num>. This setting should match the number of concurrent sessions (mxosrvr). The default is 10.

5. Prepare

You need to prepare your Hadoop environment before installing Trafodion.

5.1. Install Optional Workstation Software

If you are using a Windows workstation, then the following optional software helps with the installation process. We recommend that you pre-install this software before continuing with the Trafodion installation:

  • putty and puttygen (download from PuTTY web site)

  • VNC client (download from RealVNC web site)

  • Firefox or Chrome browser

  • SFTP client to transfer files from your workstation to the Linux server: WinSCP or FileZilla

5.2. Configure Installation User ID

If using the command-line Installer, Trafodion installation requires a user ID with these attributes:

  • sudo access per the requirements documented in Linux Installation User.

  • passwordless ssh to all nodes on the cluster where Trafodion will be installed.

You may need to request permission from your cluster-management team to obtain this type of access.

The following example shows how to set up your user ID to have "passwordless ssh" abilities.

Do the following on the Provisioning Master Node:

echo -e 'y\n' | ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
echo localhost $(cat /etc/ssh/ssh_host_rsa_key.pub) >> $HOME/.ssh/known_hosts
echo "NoHostAuthenticationForLocalhost=yes" >> $HOME/.ssh/config
chmod 600 $HOME/.ssh/config
chmod 600 $HOME/.ssh/authorized_keys
chmod 700 $HOME/.ssh/

After running these commands, do the following:

  • If necessary, create the $HOME/.ssh directory on the other nodes in your cluster and secure it private to yourself (chmod 700).

  • If necessary, create the $HOME/.ssh/authorized_keys file on the other nodes in your cluster. Secure it with chmod 600 $HOME/.ssh/authorized_keys.

  • Copy the content of the $HOME/.ssh/id_rsa.pub file on the Provisioning Master Node and append it to the $HOME/.ssh/authorized_keys file on the other nodes in your cluster.

  • ssh to the other nodes in the cluster. Answer y to the prompt asking you whether to continue the connection. This adds the node to the $HOME/.ssh/known_hosts file, completing the passwordless ssh setup.
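
The per-node steps above can be compressed with ssh-copy-id, which creates the remote .ssh directory, appends the key to authorized_keys, and sets the permissions in one command. This is a hedged sketch: the node names are placeholders, and the commands are echoed for review rather than executed.

```shell
# Placeholder node list -- replace with the other nodes in your cluster.
NODES="trafodion-2 trafodion-3"

for node in $NODES; do
  # ssh-copy-id creates ~/.ssh and appends id_rsa.pub to authorized_keys
  # on the target node, with the chmod 700/600 permissions described above.
  echo "ssh-copy-id -i \$HOME/.ssh/id_rsa.pub $node"
  # BatchMode=yes fails instead of prompting for a password, so this
  # verifies that passwordless ssh is actually in place.
  echo "ssh -o BatchMode=yes $node hostname"
done
```

Each verification ssh should print the remote hostname without a password prompt.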

5.3. Disable requiretty

If using the command-line Installer, you need to disable requiretty in /etc/sudoers on all nodes in the cluster to ensure that sudo commands can be run from inside the installation scripts.

Comment out the Defaults requiretty setting in the /etc/sudoers file to ensure that the requiretty option is NOT being used.
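
The edit can be scripted. The sketch below works on a scratch file (sudoers.demo is just for illustration) so it can be reviewed safely; on a real node, apply the same sed to a copy of /etc/sudoers and install it through visudo, which validates the syntax.

```shell
# Demonstrate the edit on a scratch file with the same content shape.
cat > ./sudoers.demo <<'EOF'
Defaults    requiretty
Defaults    !visiblepw
EOF

# Comment out any "Defaults requiretty" line; "&" re-inserts the match.
sed -i 's/^[[:space:]]*Defaults[[:space:]]\{1,\}requiretty/# &/' ./sudoers.demo

grep requiretty ./sudoers.demo   # prints: # Defaults    requiretty
```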

5.4. Verify OS Requirements and Recommendations

Please ensure that the OS Requirements and Recommendations are met for each node in the cluster where you intend to install Trafodion.

5.5. Configure Kerberos

If your Hadoop installation has Kerberos enabled, then Trafodion must have Kerberos enabled as well; otherwise, Trafodion will not run. If you plan to enable Kerberos in Trafodion, then you need access to a KDC (Kerberos Key Distribution Center) and administration credentials so that you can create the necessary Trafodion principals and keytabs.

If you wish to manually set up and activate Kerberos principals and keytabs, then refer to the section on Kerberos.
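
As a sketch of what the Kerberos administrator will eventually run: the realm, node name, and keytab path below are all placeholder assumptions, so the kadmin commands are written to a review file rather than executed.

```shell
# Placeholder principal and keytab values -- substitute your site's.
REALM=EXAMPLE.COM
NODE=trafodion-1.example.com
KEYTAB=/etc/trafodion/keytabs/trafodion.keytab

# Write the kadmin commands out for review; run each later with e.g.
#   kadmin -p admin/admin@$REALM -q "<command>"
cat > ./traf_kerberos_prep.txt <<EOF
addprinc -randkey trafodion/$NODE@$REALM
ktadd -k $KEYTAB trafodion/$NODE@$REALM
EOF

cat ./traf_kerberos_prep.txt
```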

5.6. Configure LDAP Identity Store

If you plan to enable security features in Trafodion, then you need to have an LDAP identity store available to perform authentication. The Trafodion Installer prompts you to set up an authentication configuration file that points to an LDAP server (or servers), which enables security (that is, authentication and authorization) in the Trafodion database.

If you wish to manually set up the authentication configuration file and enable security, then refer to the section on LDAP.
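
Before the Installer prompts for LDAP details, it can help to confirm that the directory server is reachable. A hedged sketch: the host, port, and search base are placeholders, ldapsearch (from openldap-clients) is assumed to be available, and the command is echoed for review.

```shell
# Placeholder LDAP connection values -- substitute your identity store's.
LDAP_HOST=ldap.example.com
LDAP_PORT=389
SEARCH_BASE="dc=example,dc=com"

# -x requests a simple bind; add -D/-W if your server disallows
# anonymous search.
echo "ldapsearch -H ldap://$LDAP_HOST:$LDAP_PORT -x -b '$SEARCH_BASE' '(uid=trafodion)'"
```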

5.7. Gather Configuration Information

You need to gather or decide on the following information about your environment to aid the Trafodion Installer. (The items are listed in alphabetical order to make it easier to find information when referenced in the install and upgrade instructions.)

ID (1) Information Default Notes
ADMIN

Administrator user name for Apache Ambari or Cloudera Manager.

admin

A user that can change configuration and restart services via the distribution manager’s REST API.

ADMIN_PRINCIPAL (2)

Kerberos admin principal to manage principals and keytabs

None

Required if Kerberos is enabled.

BACKUP_DCS_NODES

List of nodes on which to start the backup DCS Master components.

None

Blank separated FQDN list. Not needed if $ENABLE_HA = N.

CLOUD_CONFIG

Whether you’re installing Trafodion on a cloud environment.

N

N = bare-metal or VM installation.

CLOUD_TYPE

What type of cloud environment you’re installing Trafodion on.

None

{ AWS | OpenStack | Other }. Not applicable for bare-metal or VM installation.

CLUSTER_NAME

The name of the Hadoop Cluster.

None

From Apache Ambari or Cloudera Manager.

DB_ROOT_NAME (2)

LDAP name used to connect as database root user

trafodion

Required when LDAP is enabled.

DCS_BUILD

Tar file containing the DCS component.

None

Not needed if using a Trafodion package installation tar file.

DCS_PRIMARY_MASTER_NODE

The node where the primary DCS Master should run.

None

The DCS Master handles JDBC and ODBC connection requests.

DCS_SERVER_PARM

Number of concurrent client sessions per node.

16

This number specifies the concurrent sessions per node to be supported. Each session could require up to 1GB of physical memory. The number can be changed post-installation. For more information, refer to the Trafodion Client Installation Guide.

ENABLE_HA

Whether to run DCS in high-availability (HA) mode.

N

You need the floating IP address, the interface, and the backup nodes for DCS Master if enabling this feature.

EPEL_RPM

Location of EPEL RPM.

None

Specify if you don’t have access to the Internet; otherwise, the Trafodion Installer downloads the RPM automatically.

FLOATING_IP

IP address if running DCS in HA mode.

None

Not needed if $ENABLE_HA = N. An FQDN name or IP address.

HADOOP_TYPE

The type of Hadoop distribution you’re installing Trafodion on.

None

Lowercase: cloudera or hortonworks.

HBASE_GROUP

Linux group name for the HBASE administrative user.

hbase

Required in order to provide access to select HDFS directories to this user ID.

HBASE_KEYTAB (2)

HBase credentials used to grant Trafodion CRWE privileges

based on distribution

Required if Kerberos is enabled.

HBASE_USER

Linux user name for the HBASE administrative user.

hbase

Required in order to provide access to select HDFS directories to this user ID.

HDFS_KEYTAB (2)

HDFS credentials used to set privileges on HDFS directories.

based on distribution

Required if Kerberos is enabled.

HDFS_USER

Linux user name for the HDFS administrative user.

hdfs

The Trafodion Installer uses sudo su to make HDFS configuration changes under this user.

HOME_DIR

Root directory under which the trafodion home directory should be created.

/home

Example

If the home directory of the trafodion user is /opt/home/trafodion, then specify the root directory as /opt/home.

INIT_TRAFODION

Whether to automatically initialize the Trafodion database.

N

Applies if $START=Y only.

INTERFACE

Interface type used for $FLOATING_IP.

None

Not needed if $ENABLE_HA = N.

JAVA_HOME

Location of Java 1.7.0_65 or higher (JDK).

$JAVA_HOME setting

Fully qualified path of the JDK. For example: /usr/java/jdk1.7.0_67-cloudera

KDC_SERVER (2)

Location of host where Kerberos server exists

None

Required if Kerberos enabled.

LDAP_CERT (2)

Full path to TLS certificate.

None

Required if $LDAP_LEVEL = 1 or 2.

LDAP_HOSTS (2)

List of nodes where LDAP Identity Store servers are running.

None

Blank separated. FQDN format.

LDAP_ID (2)

List of LDAP unique identifiers.

None

Blank separated.

LDAP_LEVEL (2)

LDAP Encryption Level.

0

0: Encryption not used, 1: SSL, 2: TLS

LDAP_PASSWORD (2)

Password for LDAP_USER.

None

Required only if LDAP_USER is specified.

LDAP_PORT (2)

Port used to communicate with LDAP Identity Store.

None

Examples: 389 for no encryption or TLS, 636 for SSL.

LDAP_SECURITY (2)

Whether to enable simple LDAP authentication.

N

If Y, then you need to provide LDAP_HOSTS.

LDAP_USER (2)

LDAP Search user name.

None

Required if you need additional LDAP functionality such as LDAPSearch. If so, you must provide LDAP_PASSWORD, too.

LOCAL_WORKDIR

The directory where the Trafodion Installer is located.

None

Full path, no environmental variables.

MANAGEMENT_ENABLED

Whether your installation uses separate management nodes.

N

Y if using separate management nodes for Apache Ambari or Cloudera Manager.

MANAGEMENT_NODES

The FQDN names of management nodes, if any.

None

Provide a blank-separated list of node names.

MAX_LIFETIME (2)

Kerberos ticket lifetime for the Trafodion principal.

24hours

Can be specified when Kerberos is enabled.

NODE_LIST

The FQDN names of the nodes where Trafodion will be installed.

None

Provide a blank-separated list of node names. The Trafodion Provisioning ID must have passwordless and sudo access to these nodes.

PASSWORD

Administrator password for Apache Ambari or Cloudera Manager.

admin

A user that can change configuration and restart services via the distribution manager’s REST API.

RENEW_LIFETIME (2)

Renewable lifetime of the Kerberos ticket for the Trafodion principal.

7days

Can be specified when Kerberos is enabled.

REST_BUILD

Tar file containing the REST component.

None

Not needed if using a Trafodion package installation tar file.

SECURE_HADOOP (2)

Indicates whether Hadoop has enabled Kerberos

Y only if Kerberos enabled

Based on whether Kerberos is enabled for your Hadoop installation

TRAF_HOME

Target directory for the Trafodion software.

$HOME_DIR/trafodion

Trafodion is installed in this directory on all nodes in $NODE_LIST.

START

Whether to start Trafodion after install/upgrade.

N
SUSE_LINUX

Whether you’re installing Trafodion on SUSE Linux.

false

Auto-detected by the Trafodion Installer.

TRAF_KEYTAB (2)

Name to use when specifying Trafodion keytab

based on distribution

Required if Kerberos is enabled.

TRAF_KEYTAB_DIR (2)

Location of the Trafodion keytab

based on distribution

Required if Kerberos is enabled.

TRAF_PACKAGE

The location of the Trafodion installation package tar file or core installation tar file.

None

The package file contains the Trafodion server, DCS, and REST software while the core installation file contains the Trafodion server software only. If you’re using a core installation file, then you need to record the location of the DCS and REST installation tar files, too. Normally, you perform Trafodion provisioning using a Trafodion package installation tar file.

TRAF_USER

The Trafodion runtime user ID.

trafodion

Must be trafodion in this release.

TRAF_USER_PASSWORD

The password used for the trafodion:trafodion user ID.

traf123

Must be 6-8 characters long.

URL

FQDN and port for the Distribution Manager’s REST API.

None

Include http:// or https:// as applicable. Specify in the form <IP-address>:<port> or <node name>:<port>. Example: https://susevm-1.yourcompany.local:8080

  (1) The ID matches the environment variables used in the Trafodion Installer configuration file. Refer to Trafodion Installer for more information.

  (2) Refer to Enable Security for more information about these security settings.

5.8. Install Required Software Packages

5.8.1. Download and Install Packages

This step is required if you’re:

  • Installing Trafodion on SUSE.

  • Unable to download the required software packages from the Internet.

If neither situation applies, then we highly recommend that you use the Trafodion Installer.

You perform this step as a user with root or sudo access.

Install the packages listed in Software Packages above on all nodes in the cluster.
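
The authoritative list is in Software Packages above. As a hedged sketch, the names below are the ones the installer’s own log output (later in this guide) reports checking for, plus the EPEL repository RPM; confirm them against the list for your distribution, then run the resulting command on every node.

```shell
# Packages inferred from the installer log in the Install chapter;
# verify against the Software Packages list before running.
PKGS="epel-release pdsh log4cxx sqlite expect"
INSTALL_CMD="sudo yum -y install $PKGS"

echo "$INSTALL_CMD"   # run on each node in the cluster
```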

5.9. Download Trafodion Binaries

You download the Trafodion binaries from the Trafodion Download page. Download the following packages:

Command-line Installation

  • Trafodion Installer

  • Trafodion Server tar file

Ambari Installation

  • Trafodion Ambari RPM

  • Trafodion Server RPM

You can download and install the Trafodion Clients once you’ve installed and activated Trafodion. Refer to the Trafodion Client Installation Guide for instructions.

6. Install with Ambari

This method of installation uses RPM packages rather than tar files. There are two packages:

  • traf_ambari - Ambari management pack (plug-in) that is installed on the Ambari Server node

  • apache-trafodion_server - Trafodion package that is installed on every data node

You can either set up a local yum repository (requires a web server) or install the RPMs manually on each node.

6.1. Local Repository

On your web server host, be sure the createrepo package is installed. Copy the two RPM files into a directory served to the web and run the createrepo command.

$ createrepo -d .

Rerun this command to update the repository metadata any time RPMs are added or replaced.

Note the Trafodion repository URL for later use.
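
Nodes point at the local repository through a yum .repo file. A minimal sketch, assuming the repository is served at http://repo.example.com/trafodion (a placeholder URL); copy the result into /etc/yum.repos.d/ on each host that installs from the repository.

```shell
# Write the repo definition locally first, then distribute it.
# gpgcheck=0 because the locally built repository is unsigned.
cat > ./trafodion.repo <<'EOF'
[trafodion]
name=Apache Trafodion (local repository)
baseurl=http://repo.example.com/trafodion
enabled=1
gpgcheck=0
EOF

cat ./trafodion.repo
```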

6.2. Install Ambari Management Pack for Trafodion

On your Ambari server host:

  1. If Ambari Server is not already installed, be sure to download a yum repo file for Ambari. For example: Ambari-2.4.2 repo.

  2. Add a yum repo file with the URL of your local repository, or copy the traf_ambari RPM locally.

  3. Install the Trafodion Ambari management pack RPM. Ambari server will be installed as a dependency, if not already installed.

    $ sudo yum install traf_ambari
  4. Set up Ambari

    1. If Ambari server was previously running, restart it.

      $ sudo ambari-server restart
    2. If Ambari server was not previously running, initialize and start it.

      $ sudo ambari-server setup
      ...
      $ sudo ambari-server start

6.3. Install Trafodion

Unlike with the command-line Installer, Trafodion can be provisioned at the time a new cluster is created.

6.3.1. Initial Cluster Creation

If you are creating a new cluster and you have the Trafodion server RPM hosted on a local yum repository, then create the cluster as normal, and select Trafodion on the service selection screen. When Ambari prompts for the repository URLs, be sure to update the Trafodion URL to the URL for your local repository.

If you plan to install the server RPM manually, do not select the Trafodion service. First, create a cluster without the Trafodion service, and then follow the instructions for an existing cluster.

6.3.2. Existing Cluster

If you are not using a local yum repository, manually copy the apache-trafodion_server RPM to each data node and install it using yum install.
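
The copy-and-install loop can be scripted. A hedged sketch: the node names and RPM file name are placeholders (use the actual apache-trafodion_server RPM you downloaded), and the commands are echoed for review rather than executed.

```shell
# Placeholder values -- substitute your RPM file and data-node names.
RPM=apache-trafodion_server.rpm
NODES="trafodion-1 trafodion-2"

for node in $NODES; do
  echo "scp $RPM $node:/tmp/"
  # localinstall resolves the RPM's dependencies from configured repos.
  echo "ssh $node sudo yum -y localinstall /tmp/$RPM"
done
```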

Using Ambari, select the cluster and then choose "Add a Service" and select Trafodion.

7. Install

This chapter describes how to use the Trafodion Installer to install Trafodion. You use the Trafodion Provisioning ID to run the Trafodion Installer.

7.1. Unpack Installer

You should already have downloaded the Trafodion binaries per the instructions in Download Trafodion Binaries in the Prepare chapter. If not, please do so now.

The first step in the installation process is to unpack the Trafodion Installer tar file.

Example

$ mkdir $HOME/trafodion-installer
$ cd $HOME/trafodion-downloads
$ tar -zxf apache-trafodion-installer-1.3.0-incubating-bin.tar.gz -C $HOME/trafodion-installer
$

7.2. Guided Install

The Trafodion Installer prompts you for the information you collected in the Gather Configuration Information step in the Prepare chapter.

The following example shows a guided install of Trafodion on a two-node Cloudera Hadoop cluster that does not have Kerberos or LDAP enabled.

Example

  1. Run the Trafodion Installer in guided mode.

    $ cd $HOME/trafodion-installer/installer
    $ ./trafodion_install
    
    ******************************
     TRAFODION INSTALLATION START
    ******************************
    
    ***INFO: testing sudo access
    ***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-02-15-04-45-30.log
    ***INFO: Config directory: /etc/trafodion
    ***INFO: Working directory: /usr/lib/trafodion
    
    *******************************
     Trafodion Configuration Setup
    *******************************
    
    ***INFO: Please press [Enter] to select defaults.
    
    Enter trafodion password, default is [traf123]: traf123
    Enter list of nodes (blank separated), default []: trafodion-1 trafodion-2
    Enter Trafodion userid's home directory prefix, default is [/home]: /home
    Specify full path to EPEL RPM (including .rpm), default is None:
    ***INFO: Will attempt to download RPM if EPEL is not installed on all nodes.
    Specify location of Java 1.7.0_65 or higher (JDK), default is []: /usr/java/jdk1.7.0_67-cloudera
    Enter full path (including .tar or .tar.gz) of trafodion tar file []: /home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz
    Enter Hadoop admin username, default is [admin]:
    Enter Hadoop admin password, default is [admin]:
    Enter Hadoop external network URL:port (no 'http://' needed), default is []: trafodion-1.apache.org:7180
    Enter HDFS username, default is [hdfs]:
    Enter HBase username, default is [hbase]:
    Enter HBase group, default is [hbase]:
    Enter directory to install trafodion to, default is [/home/trafodion/apache-trafodion-1.3.0-incubating-bin]:
    Total number of client connections per node, default [16]: 8
    Enable simple LDAP security (Y/N), default is N: N
    ***INFO: Configuration file: /etc/trafodion/trafodion_config
    ***INFO: Trafodion configuration setup complete
    
    ************************************
     Trafodion Configuration File Check
    ************************************
    
    
    The authenticity of host 'trafodion-1 (10.1.30.71)' can't be established.
    RSA key fingerprint is 83:96:d4:5e:c1:b8:b1:62:8d:c6:78:a7:7f:1f:6a:d7.
    Are you sure you want to continue connecting (yes/no)? yes
    ***INFO: Testing sudo access on node trafodion-1
    ***INFO: Testing sudo access on node trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Getting list of all cloudera nodes
    ***INFO: Getting list of all cloudera nodes
    ***INFO: cloudera list of nodes:  trafodion-1 trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Testing sudo access on trafodion-1
    ***INFO: Testing sudo access on trafodion-2
    ***DEBUG: trafodionFullName=trafodion_server-1.3.0.tgz
    ***INFO: Trafodion version = 1.3.0
    ***DEBUG: HBase's java_exec=/usr/java/jdk1.7.0_67-cloudera/bin/java
    
    ******************************
     TRAFODION SETUP
    ******************************
    
    ***INFO: Starting Trafodion environment setup (2016-02-15-07-09-58)
    === 2016-02-15-07-09-58 ===
    # @@@ START COPYRIGHT @@@
    #
    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements.  See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership.  The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License.  You may obtain a copy of the License at
    #
    .
    .
    .
    and hold each Contributor harmless for any liability incurred by,
    or claims asserted against, such Contributor by reason of your
    accepting any such warranty or additional liability.
    
    END OF TERMS AND CONDITIONS
    
    BY TYPING "ACCEPT" YOU AGREE TO THE TERMS OF THIS AGREEMENT:ACCEPT
    ***INFO: testing sudo access
    ***INFO: Checking all nodes in specified node list
    trafodion-1
    trafodion-2
    ***INFO: Total number of nodes = 2
    ***INFO: Starting Trafodion Package Setup (2016-02-15-07-11-09)
    ***INFO: Installing required packages
    ***INFO: Log file located in /var/log/trafodion
    ***INFO: ... pdsh on node trafodion-1
    ***INFO: ... pdsh on node trafodion-2
    ***INFO: Checking if log4cxx is installed ...
    ***INFO: Checking if sqlite is installed ...
    ***INFO: Checking if expect is installed ...
    ***INFO: Installing expect on all nodes
    .
    .
    .
    ***INFO: modifying limits in /usr/lib/trafodion/trafodion.conf on all nodes
    ***INFO: create Trafodion userid "trafodion"
    ***INFO: Trafodion userid's (trafodion) home directory: /home/trafodion
    ***INFO: testing sudo access
    Generating public/private rsa key pair.
    Created directory '/home/trafodion/.ssh'.
    Your identification has been saved in /home/trafodion/.ssh/id_rsa.
    Your public key has been saved in /home/trafodion/.ssh/id_rsa.pub.
    The key fingerprint is:
    4b:b3:60:38:c9:9d:19:f8:cd:b1:c8:cd:2a:6e:4e:d0 trafodion@trafodion-1
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |     .           |
    |    . . .        |
    |   o * X o       |
    |  . E X S        |
    |   . o + +       |
    |    o . o        |
    |   o..           |
    |   oo            |
    +-----------------+
    ***INFO: creating .bashrc file
    ***INFO: Setting up userid trafodion on all other nodes in cluster
    ***INFO: Creating known_hosts file for all nodes
    trafodion-1
    trafodion-2
    ***INFO: trafodion user added successfully
    ***INFO: Trafodion environment setup completed
    ***INFO: creating sqconfig file
    ***INFO: Reserving DCS ports
    
    ******************************
     TRAFODION MODS
    ******************************
    
    ***INFO: Cloudera installed will run traf_cloudera_mods98
    ***INFO: Detected JAVA version 1.7
    ***INFO: copying hbase-trx-cdh5_3-1.3.0.jar to all nodes
    ***INFO: Cloudera Manager is on trafodion-1
    ***INFO: Detected JAVA version 1.7
    ***INFO: copying hbase-trx-cdh5_3-1.3.0.jar to all nodes
    ***INFO: Cloudera Manager is on trafodion-1
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    .
    .
    .
    ***INFO: Hadoop restart completed successfully
    ***INFO: waiting for HDFS to exit safemode
    Safe mode is OFF
    ***INFO: Setting HDFS ACLs for snapshot scan support
    ***INFO: Trafodion Mods ran successfully.
    
    ******************************
     TRAFODION START
    ******************************
    
    /usr/lib/trafodion/installer/..
    ***INFO: Log file location /var/log/trafodion/trafodion_install_2016-02-15-07-08-07.log
    ***INFO: traf_start
    ******************************************
    ******************************************
    ******************************************
    ******************************************
    /home/trafodion/apache-trafodion-1.3.0-incubating-bin
    ***INFO: untarring build file /usr/lib/trafodion/apache-trafodion-1.3.0-incubating-bin/trafodion_server-1.3.0.tgz to /home/trafodion/apache-trafodion-1.3.0-incubating-bin
    .
    .
    .
    ******* Generate public/private certificates *******
    
     Cluster Name : Cluster%201
    Generating Self Signed Certificate....
    ***********************************************************
     Certificate file :server.crt
     Private key file :server.key
     Certificate/Private key created in directory :/home/trafodion/sqcert
    ***********************************************************
    
    ***********************************************************
     Updating Authentication Configuration
    ***********************************************************
    Creating folders for storing certificates
    
    ***INFO: copying /home/trafodion/sqcert directory to all nodes
    ***INFO: copying install to all nodes
    ***INFO: starting Trafodion instance
    Checking orphan processes.
    Removing old mpijob* files from /home/trafodion/apache-trafodion-1.3.0-incubating-bin/tmp
    
    Removing old monitor.port* files from /home/trafodion/apache-trafodion-1.3.0-incubating-bin/tmp
    
    Executing sqipcrm (output to sqipcrm.out)
    Starting the SQ Environment (Executing /home/trafodion/apache-trafodion-1.3.0-incubating-bin/sql/scripts/gomon.cold)
    Background SQ Startup job (pid: 7276)
    .
    .
    .
    Zookeeper is listening on port 2181
    DcsMaster is listening on port 23400
    
    Process         Configured      Actual          Down
    ---------       ----------      ------          ----
    DcsMaster       1               1
    DcsServer       2               2
    mxosrvr         8               8
    
    
    You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-1.3.0-incubating-bin/logs/sqmon.log
    
    
    Startup time  0 hour(s) 1 minute(s) 9 second(s)
    Apache Trafodion Conversational Interface 1.3.0
    Copyright (c) 2015 Apache Software Foundation
    >> initialize trafodion;
    --- SQL operation complete.
    >>
    
    End of MXCI Session
    
    ***INFO: Installation completed successfully.
    
    *********************************
     TRAFODION INSTALLATION COMPLETE
    *********************************
    
    $
  2. Switch to the Trafodion Runtime User and check the status of Trafodion.

    $ sudo su - trafodion
    $ sqcheck
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process         Configured      Actual      Down
    -------         ----------      ------      ----
    DTM             2               2
    RMS             4               4
    MXOSRVR         8               8
    
    $

Trafodion is now running on your Hadoop cluster. Please refer to the Activate chapter for basic instructions on how to verify the Trafodion installation and perform basic management operations.

7.3. Automated Install

The --config_file option runs the Trafodion Installer in Automated Setup mode. Refer to Trafodion Installer in the Introduction chapter for instructions on how to edit your configuration file.

Edit your config file using the information you collected in the Gather Configuration Information step in the Prepare chapter.

The following example shows an automated install of Trafodion on a two-node Hortonworks Hadoop cluster that does not have Kerberos or LDAP enabled.

Example

  1. Run the Trafodion Installer in Automated Setup mode.

    $ cd $HOME/trafodion-installer/installer
    $ ./trafodion_install --config_file my
    ******************************
     TRAFODION INSTALLATION START
    ******************************
    
    ***INFO: testing sudo access
    ***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-02-16-21-12-03.log
    ***INFO: Config directory: /etc/trafodion
    ***INFO: Working directory: /usr/lib/trafodion
    
    ************************************
     Trafodion Configuration File Check
    ************************************
    
    
    ***INFO: Testing sudo access on node trafodion-1
    ***INFO: Testing sudo access on node trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Getting list of all hortonworks nodes
    ***INFO: Getting list of all hortonworks nodes
    ***INFO: hortonworks list of nodes:  trafodion-1 trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Testing sudo access on trafodion-1
    ***INFO: Testing sudo access on trafodion-2
    ***DEBUG: trafodionFullName=trafodion_server-1.3.0.tgz
    ***INFO: Trafodion version = 1.3.0
    ***DEBUG: HBase's java_exec=/usr/jdk64/jdk1.7.0_67/bin/java
    
    ******************************
     TRAFODION SETUP
    ******************************
    
    ***INFO: Starting Trafodion environment setup (2016-02-16-21-12-31)
    === 2016-02-16-21-12-31 ===
    # @@@ START COPYRIGHT @@@
    #
    # Licensed to the Apache Software Foundation (ASF) under one
    # or more contributor license agreements.  See the NOTICE file
    # distributed with this work for additional information
    # regarding copyright ownership.  The ASF licenses this file
    # to you under the Apache License, Version 2.0 (the
    # "License"); you may not use this file except in compliance
    # with the License.  You may obtain a copy of the License at
    .
    .
    .
    9. Accepting Warranty or Additional Liability. While redistributing
    the Work or Derivative Works thereof, You may choose to offer, and
    charge a fee for, acceptance of support, warranty, indemnity, or
    other liability obligations and/or rights consistent with this
    License. However, in accepting such obligations, You may act only
    on Your own behalf and on Your sole responsibility, not on behalf
    of any other Contributor, and only if You agree to indemnify, defend,
    and hold each Contributor harmless for any liability incurred by,
    or claims asserted against, such Contributor by reason of your
    accepting any such warranty or additional liability.
    
    END OF TERMS AND CONDITIONS
    
    BY TYPING "ACCEPT" YOU AGREE TO THE TERMS OF THIS AGREEMENT: ***INFO: testing sudo access
    ***INFO: Checking all nodes in specified node list
    trafodion-1
    trafodion-2
    ***INFO: Total number of nodes = 2
    ***INFO: Starting Trafodion Package Setup (2016-02-16-21-12-35)
    ***INFO: Installing required packages
    ***INFO: Log file located in /var/log/trafodion
    ***INFO: ... EPEL rpm
    ***INFO: ... pdsh on node trafodion-1
    ***INFO: ... pdsh on node trafodion-2
    ***INFO: Checking if log4cxx is installed ...
    ***INFO: Checking if sqlite is installed ...
    ***INFO: Checking if expect is installed ...
    .
    .
    .
    ***INFO: trafodion user added successfully
    ***INFO: Trafodion environment setup completed
    ***INFO: creating sqconfig file
    ***INFO: Reserving DCS ports
    
    ******************************
     TRAFODION MODS
    ******************************
    
    ***INFO: Hortonworks installed will run traf_hortonworks_mods98
    ***INFO: Detected JAVA version 1.7
    ***INFO: copying hbase-trx-hdp2_2-1.3.0.jar to all nodes
    PORT=:8080
    .
    .
    .
    Starting the REST environment now
    starting rest, logging to /home/trafodion/apache-trafodion-1.3.0-incubating-bin/rest-1.3.0/bin/../logs/rest-trafodion-1-rest-trafodion-1.out
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/home/trafodion/apache-trafodion-1.3.0-incubating-bin/rest-1.3.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/hdp/2.2.9.0-3393/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    
    
    DcsMaster is not started. Please start DCS using 'dcsstart' command...
    
    Process         Configured      Actual          Down
    ---------       ----------      ------          ----
    DcsMaster       1               0               1
    DcsServer       2               0               2
    mxosrvr         8               8
    
    
    You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-1.3.0-incubating-bin/logs/sqmon.log
    
    
    Startup time  0 hour(s) 1 minute(s) 9 second(s)
    Apache Trafodion Conversational Interface 1.3.0
    Copyright (c) 2015 Apache Software Foundation
    >> initialize trafodion;
    --- SQL operation complete.
    >>
    
    End of MXCI Session
    
    ***INFO: Installation completed successfully.
    
    *********************************
     TRAFODION INSTALLATION COMPLETE
    *********************************
    
    $
  2. Switch to the Trafodion Runtime User and check the status of Trafodion.

    Example

    $ sudo su - trafodion
    $ sqcheck
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process         Configured      Actual      Down
    -------         ----------      ------      ----
    DTM             2               2
    RMS             4               4
    MXOSRVR         8               8
    
    $

Trafodion is now running on your Hadoop cluster. Please refer to the Activate chapter for basic instructions on how to verify the Trafodion installation and perform basic management operations.

8. Upgrade

This chapter describes how to use the Trafodion Installer to upgrade Trafodion. You run the Trafodion Installer as the Trafodion Provisioning User.

8.1. Download Updated Trafodion Binaries

You perform this step as the Trafodion Provisioning User.

You download the updated Trafodion binaries from the Trafodion Download page. Download the following packages:

  • Trafodion Installer (if planning to use the Trafodion Installer)

  • Trafodion Server

Refer to Download Trafodion Binaries in the Prepare chapter for examples.

8.2. Unpack Installer

You perform this step as the Trafodion Provisioning User.

You unpack the updated Trafodion Installer into a new directory.

Example

$ mkdir $HOME/trafodion-installer-2.0
$ cd $HOME/trafodion-downloads
$ tar -zxf apache-trafodion-installer-2.0.0-incubating-bin.tar.gz -C $HOME/trafodion-installer-2.0
$

8.3. Stop Trafodion

You perform this step as the Trafodion Runtime User.

Example

$ sudo su trafodion
$ sqstop
Shutting down the REST environment now
stopping rest.
Shutting down the DCS environment now
stopping master.
trafodion-1: stopping server.
trafodion-2: stopping server.
stopped $zlobsrv0
stopped $zlobsrv1
Shutting down (normal) the SQ environment!
Wed Feb 17 05:12:40 UTC 2016
Processing cluster.conf on local host trafodion-1
[$Z000KAE] Shell/shell Version 1.0.1 Apache_Trafodion Release 1.3.0 (Build release [1.3.0-0-g5af956f_Bld2], date 20160112_1927)
ps
[$Z000KAE] %ps
[$Z000KAE] NID,PID(os)  PRI TYPE STATES  NAME        PARENT      PROGRAM
[$Z000KAE] ------------ --- ---- ------- ----------- ----------- ---------------
[$Z000KAE] 000,00064198 000 WDG  ES--A-- $WDG000     NONE        sqwatchdog
[$Z000KAE] 000,00064199 000 PSD  ES--A-- $PSD000     NONE        pstartd
[$Z000KAE] 000,00064212 001 GEN  ES--A-- $TSID0      NONE        idtmsrv
[$Z000KAE] 000,00064242 001 DTM  ES--A-- $TM0        NONE        tm
[$Z000KAE] 000,00065278 001 GEN  ES--A-- $ZSC000     NONE        mxsscp
[$Z000KAE] 000,00065305 001 SSMP ES--A-- $ZSM000     NONE        mxssmp
[$Z000KAE] 000,00001219 001 GEN  ES--A-- $Z0000ZU    NONE        mxosrvr
[$Z000KAE] 000,00001235 001 GEN  ES--A-- $Z00010A    NONE        mxosrvr
[$Z000KAE] 000,00001279 001 GEN  ES--A-- $Z00011J    NONE        mxosrvr
[$Z000KAE] 000,00001446 001 GEN  ES--A-- $Z00016B    NONE        mxosrvr
[$Z000KAE] 000,00024864 001 GEN  ES--A-- $Z000KAE    NONE        shell
[$Z000KAE] 001,00025180 000 PSD  ES--A-- $PSD001     NONE        pstartd
[$Z000KAE] 001,00025179 000 WDG  ES--A-- $WDG001     NONE        sqwatchdog
[$Z000KAE] 001,00025234 001 DTM  ES--A-- $TM1        NONE        tm
[$Z000KAE] 001,00025793 001 GEN  ES--A-- $ZSC001     NONE        mxsscp
[$Z000KAE] 001,00025797 001 SSMP ES--A-- $ZSM001     NONE        mxssmp
[$Z000KAE] 001,00026587 001 GEN  ES--A-- $Z010LPM    NONE        mxosrvr
[$Z000KAE] 001,00026617 001 GEN  ES--A-- $Z010LQH    NONE        mxosrvr
[$Z000KAE] 001,00026643 001 GEN  ES--A-- $Z010LR8    NONE        mxosrvr
[$Z000KAE] 001,00026644 001 GEN  ES--A-- $Z010LR9    NONE        mxosrvr
shutdown
[$Z000KAE] %shutdown
exit
Issued a 'shutdown normal' request

Shutdown in progress

# of SQ processes: 0
SQ Shutdown (normal) from /home/trafodion Successful
Wed Feb 17 05:12:47 UTC 2016
$

8.4. Guided Upgrade

You perform this step as the Trafodion Provisioning User.

As with an installation, the Trafodion Installer prompts you for the information you collected in the Gather Configuration Information step in the Prepare chapter. Some prompts are pre-populated with the current values.

The following example shows a guided upgrade of Trafodion on a two-node Cloudera Hadoop cluster with neither Kerberos nor LDAP enabled.

Example

  1. Run the updated Trafodion Installer in Guided Setup mode to perform the upgrade. Change information at prompts as applicable.

    $ cd $HOME/trafodion-installer-2.0/installer
    $ ./trafodion_install
    ******************************
     TRAFODION INSTALLATION START
    ******************************
    
    ***INFO: testing sudo access
    ***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-02-17-08-15-33.log
    ***INFO: Config directory: /etc/trafodion
    ***INFO: Working directory: /usr/lib/trafodion
    
    *******************************
     Trafodion Configuration Setup
    *******************************
    
    ***INFO: Please press [Enter] to select defaults.
    
    Enter trafodion password, default is [traf123]:
    Enter list of nodes (blank separated), default []: trafodion-1.apache.org trafodion-2.apache.org
    Specify location of Java 1.7.0_65 or higher (JDK), default is [/usr/java/jdk1.7.0_67-cloudera]:
    Enter full path (including .tar or .tar.gz) of trafodion tar file []: /home/centos/trafodion-download/apache-trafodion-2.0.0-incubating-bin.tar.gz
    Enter Hadoop admin username, default is [admin]:
    Enter Hadoop admin password, default is [admin]:
    Enter Hadoop external network URL:port (no 'http://' needed), default is []: trafodion-1.apache.org:7180
    Enter HDFS username, default is [hdfs]:
    Enter HBase username, default is [hbase]:
    Enter HBase group, default is [hbase]:
    Enter directory to install trafodion to, default is [/home/trafodion/apache-trafodion-1.3.0-incubating-bin]: /home/centos/apache-trafodion-2.0.0-incubating-bin
    Start Trafodion after install (Y/N), default is Y:
    Total number of client connections per node, default [16]: 8
    Enable simple LDAP security (Y/N), default is N:
    ***INFO: Configuration file: /etc/trafodion/trafodion_config
    ***INFO: Trafodion configuration setup complete
    
    ************************************
     Trafodion Configuration File Check
    ************************************
    
    
    ***INFO: Testing sudo access on node trafodion-1
    ***INFO: Testing sudo access on node trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Getting list of all cloudera nodes
    ***INFO: Getting list of all cloudera nodes
    ***INFO: cloudera list of nodes:  trafodion-1 trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Testing sudo access on trafodion-1
    ***INFO: Testing sudo access on trafodion-2
    ***INFO: Checking cloudera Version
    ***INFO: nameOfVersion=cdh5.3.0
    ***INFO: HADOOP_PATH=/usr/lib/hbase/lib
    ***INFO: Trafodion scanner will not be run.
    ***DEBUG: trafodionFullName=trafodion_server-1.3.0.tgz
    ***INFO: Trafodion version = 1.3.0
    ***DEBUG: HBase's java_exec=/usr/java/jdk1.7.0_67-cloudera/bin/java
    
    ******************************
     TRAFODION SETUP
    ******************************
    
    ***INFO: Installing required RPM packages
    ***INFO: Starting Trafodion Package Setup (2016-02-17-08-16-11)
    ***INFO: Installing required packages
    ***INFO: Log file located in /var/log/trafodion
    ***INFO: ... pdsh on node trafodion-1
    ***INFO: ... pdsh on node trafodion-2
    ***INFO: Checking if log4cxx is installed ...
    ***INFO: Checking if sqlite is installed ...
    ***INFO: Checking if expect is installed ...
    ***INFO: Checking if perl-DBD-SQLite* is installed ...
    ***INFO: Checking if protobuf is installed ...
    ***INFO: Checking if xerces-c is installed ...
    ***INFO: Checking if perl-Params-Validate is installed ...
    ***INFO: Checking if perl-Time-HiRes is installed ...
    ***INFO: Checking if gzip is installed ...
    ***INFO: creating sqconfig file
    ***INFO: Reserving DCS ports
    
    ******************************
     TRAFODION MODS
    ******************************
    
    ***INFO: Cloudera installed will run traf_cloudera_mods98
    ***INFO: Detected JAVA version 1.7
    ***INFO: copying hbase-trx-cdh5_3-1.3.0.jar to all nodes
    ***INFO: Cloudera Manager is on trafodion-1
    .
    .
    .
    Zookeeper is listening on port 2181
    DcsMaster is listening on port 23400
    
    Process         Configured      Actual          Down
    ---------       ----------      ------          ----
    DcsMaster       1               1
    DcsServer       2               2
    mxosrvr         8               8
    
    
    You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-2.0.0-incubating-bin/logs/sqmon.log
    
    
    Startup time  0 hour(s) 1 minute(s) 9 second(s)
    Apache Trafodion Conversational Interface 1.3.0
    Copyright (c) 2015 Apache Software Foundation
    >>
    
    End of MXCI Session
    
    ***INFO: Installation completed successfully.
    
    *********************************
     TRAFODION INSTALLATION COMPLETE
    *********************************
    
    $
  2. Switch to the Trafodion Runtime User and check the status of Trafodion.

    $ sudo su - trafodion
    $ sqcheck
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process         Configured      Actual      Down
    -------         ----------      ------      ----
    DTM             2               2
    RMS             4               4
    MXOSRVR         8               8
    
    $

Trafodion is now running on your Hadoop cluster. Please refer to the Activate chapter for basic instructions on how to verify the Trafodion installation and perform basic management operations.

8.5. Automated Upgrade

You perform this step as the Trafodion Provisioning User.

The --config_file option runs the Trafodion Installer in Automated Setup mode. Refer to Trafodion Installer in the Introduction chapter for instructions on how to edit your configuration file.

At a minimum, you need to change the following settings:

  • LOCAL_WORKDIR

  • TRAF_PACKAGE

  • TRAF_HOME

Example

$ cd $HOME/trafodion-configuration
$ cp my_config my_config_2.0
$ # Pre edit content

export LOCAL_WORKDIR="/home/centos/trafodion-installer/installer"
export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz"
export TRAF_HOME="/home/trafodion/apache-trafodion-1.3.0-incubating-bin"

$ # Use your favorite editor to modify my_config_2.0
$ emacs my_config_2.0
$ # Post edit changes

export LOCAL_WORKDIR="/home/centos/trafodion-installer-2.0/installer"
export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-2.0.0-incubating-bin.tar.gz"
export TRAF_HOME="/home/trafodion/apache-trafodion-2.0.0-incubating-bin"
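
The same three edits can also be scripted with sed instead of an editor. The sketch below creates a sample pre-edit file under /tmp purely for illustration; on a real system you would start from your existing my_config.

```shell
# Sketch: script the three config edits shown above with sed.
# A sample pre-edit my_config is created here for illustration only.
mkdir -p /tmp/trafodion-configuration && cd /tmp/trafodion-configuration
cat > my_config <<'EOF'
export LOCAL_WORKDIR="/home/centos/trafodion-installer/installer"
export TRAF_PACKAGE="/home/centos/trafodion-download/apache-trafodion-1.3.0-incubating-bin.tar.gz"
export TRAF_HOME="/home/trafodion/apache-trafodion-1.3.0-incubating-bin"
EOF
cp my_config my_config_2.0
# Point the work directory at the new installer and bump the version
# in the package and install paths.
sed -i \
  -e 's|trafodion-installer/|trafodion-installer-2.0/|' \
  -e 's|1\.3\.0-incubating-bin|2.0.0-incubating-bin|g' \
  my_config_2.0
```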

The following example shows an upgrade of Trafodion on a two-node Hortonworks Hadoop cluster using Automated Setup mode, with neither Kerberos nor LDAP enabled.

The Trafodion Installer performs the same configuration changes as it does for an installation, including restarting Hadoop services.

Example

  1. Run the updated Trafodion Installer using the modified my_config_2.0 file.

    $ cd $HOME/trafodion-installer-2.0/installer
    $ ./trafodion_install --config_file $HOME/trafodion-configuration/my_config_2.0
    ******************************
     TRAFODION INSTALLATION START
    ******************************
    
    ***INFO: Testing sudo access on node trafodion-1
    ***INFO: Testing sudo access on node trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Getting list of all hortonworks nodes
    ***INFO: Getting list of all hortonworks nodes
    ***INFO: hortonworks list of nodes:  trafodion-1 trafodion-2
    ***INFO: Testing ssh on trafodion-1
    ***INFO: Testing ssh on trafodion-2
    ***INFO: Testing sudo access on trafodion-1
    ***INFO: Testing sudo access on trafodion-2
    ***INFO: Trafodion scanner will not be run.
    ***DEBUG: trafodionFullName=trafodion_server-2.0.0.tgz
    ***INFO: Trafodion version = 2.0.0
    ***DEBUG: HBase's java_exec=/usr/jdk64/jdk1.7.0_67/bin/java
    
    ******************************
     TRAFODION SETUP
    ******************************
    
    ***INFO: Installing required RPM packages
    ***INFO: Starting Trafodion Package Setup (2016-02-17-05-33-29)
    ***INFO: Installing required packages
    ***INFO: Log file located in /var/log/trafodion
    ***INFO: ... pdsh on node trafodion-1
    ***INFO: ... pdsh on node trafodion-2
    ***INFO: Checking if log4cxx is installed ...
    .
    .
    .
    DcsMaster is not started. Please start DCS using 'dcsstart' command...
    
    Process         Configured      Actual          Down
    ---------       ----------      ------          ----
    DcsMaster       1               0               1
    DcsServer       2               0               2
    mxosrvr         8               8
    
    
    You can monitor the SQ shell log file : /home/trafodion/apache-trafodion-2.0.0-incubating-bin/logs/sqmon.log
    
    
    Startup time  0 hour(s) 1 minute(s) 9 second(s)
    Apache Trafodion Conversational Interface 1.3.0
    Copyright (c) 2015 Apache Software Foundation
    >>Metadata Upgrade: started
    
    Version Check: started
      Metadata is already at Version 1.1.
    Version Check: done
    
    Metadata Upgrade: done
    
    
    --- SQL operation complete.
    >>
    
    End of MXCI Session
    
    ***INFO: Installation completed successfully.
    
    *********************************
     TRAFODION INSTALLATION COMPLETE
    *********************************
    
    $
  2. Switch to the Trafodion Runtime User and check the status of Trafodion.

    $ sudo su - trafodion
    $ sqcheck
    Checking if processes are up.
    Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.
    
    The SQ environment is up!
    
    
    Process         Configured      Actual      Down
    -------         ----------      ------      ----
    DTM             2               2
    RMS             4               4
    MXOSRVR         8               8
    
    $

Trafodion is now running on your Hadoop cluster. Please refer to the Activate chapter for basic instructions on how to verify the Trafodion installation and perform basic management operations.

9. Activate

9.1. Manage Trafodion

You use the Trafodion runtime user ID to perform Trafodion management operations.

The following table provides an overview of the different Trafodion management scripts.

Component                              Start      Stop      Status

All of Trafodion                       sqstart    sqstop    sqcheck
RMS Server                             rmsstart   rmsstop   rmscheck
REST Server                            reststart  reststop  -
LOB Server                             lobstart   lobstop   -
DCS (Database Connectivity Services)   dcsstart   dcsstop   dcscheck

Example: Start Trafodion

cd $TRAF_HOME/sql/scripts
sqstart
sqcheck

9.2. Validate Trafodion Installation

You can use sqlci (part of the base product) or trafci (requires separate install; see the Trafodion Client Installation Guide) to validate your installation.

9.2.1. Smoke Test

The following is a simple smoke test that validates that Trafodion is functioning.

get schemas;
create table table1 (a int);
invoke table1;
insert into table1 values (1), (2), (3), (4);
select * from table1;
drop table table1;
exit;
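
The statements above can also be saved to a file and replayed non-interactively; the file name and path here are illustrative.

```shell
# Save the smoke-test statements to a file for non-interactive replay.
cat > /tmp/smoke_test.sql <<'EOF'
get schemas;
create table table1 (a int);
invoke table1;
insert into table1 values (1), (2), (3), (4);
select * from table1;
drop table table1;
exit;
EOF
```

Then feed the file to sqlci on standard input: `sqlci < /tmp/smoke_test.sql`.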

Example

$ sqlci
Apache Trafodion Conversational Interface 1.3.0
Copyright (c) 2015 Apache Software Foundation
>>get schemas;

Schemas in Catalog TRAFODION
============================

SEABASE
_MD_
_LIBMGR_
_REPOS_

--- SQL operation complete.
>>create table table1 (a int);

--- SQL operation complete.
>>invoke table1;

-- Definition of Trafodion table TRAFODION.SEABASE.TABLE1
-- Definition current  Mon Feb 15 07:42:02 2016

  (
    SYSKEY                           LARGEINT NO DEFAULT NOT NULL NOT DROPPABLE
      NOT SERIALIZED
  , A                                INT DEFAULT NULL SERIALIZED
  )

--- SQL operation complete.
>>insert into table1 values (1), (2), (3), (4);

--- 4 row(s) inserted.
>>select * from table1;

A
-----------

          1
          2
          3
          4

--- 4 row(s) selected.
>>drop table table1;

--- SQL operation complete.
>>exit;
$

If no errors occurred, then your installation has been successful.

9.3. Troubleshooting Tips

If you are not able to start up the environment, or if there are problems running sqlci or trafci, then verify that all the processes are up and running.

  • sqcheck should indicate all processes are running.

If processes are not running as expected, then:

  • sqstop to shut down Trafodion. If some Trafodion processes do not terminate cleanly, then run ckillall.

  • sqstart to restart Trafodion.

If problems persist, then review the logs:

  • $TRAF_HOME/logs: Trafodion logs.

10. Remove

You use the Trafodion Provisioning User for these instructions.

You do not need to use the trafodion_uninstaller script if upgrading Trafodion. Instead, use the trafodion_install script, which automatically upgrades the version of Trafodion. Please refer to the Install chapter for further instructions.

Run the commands from the first node of the cluster. Do not run them from a machine that is not part of the Trafodion cluster.

10.1. Stop Trafodion

Do the following:

su trafodion
cd $TRAF_HOME/sql/scripts (or use the cds alias)
sqstop
exit

Example

[admin@trafodion-1 ~]$ su trafodion
[trafodion@trafodion-1 scripts]$ cds
[trafodion@trafodion-1 scripts]$ sqstop
Shutting down the REST environment now
stopping rest.
Shutting down the DCS environment now
stopping master.
trafodion-1: stopping server.
trafodion-2: stopping server.
stopped $zlobsrv0
stopped $zlobsrv1
Shutting down (normal) the SQ environment!
Mon Feb 15 07:49:18 UTC 2016
Processing cluster.conf on local host trafodion-1
.
.
.
[$Z000HDS] 001,00024772 001 GEN  ES--A-- $Z010K7S    NONE        mxosrvr
[$Z000HDS] 001,00024782 001 GEN  ES--U-- $ZLOBSRV1   NONE        mxlobsrvr
shutdown
[$Z000HDS] %shutdown
exit
Issued a 'shutdown normal' request

Shutdown in progress

# of SQ processes: 0
SQ Shutdown (normal) from /home/trafodion/apache-trafodion-1.3.0-incubating-bin/sql/scripts Successful
Mon Feb 15 07:49:26 UTC 2016
[trafodion@trafodion-1 scripts]$ exit
[admin@trafodion-1 ~]$

10.2. Run trafodion_uninstaller

The trafodion_uninstaller completely removes Trafodion.

Example

[admin@trafodion-1 ~]$ cd $HOME/trafodion-installer/installer
[admin@trafodion-1 installer]$ ./trafodion_uninstaller
Do you want to uninstall Trafodion (Everything will be removed)? (Y/N) y
***INFO: testing sudo access
***INFO: NOTE, rpms that were installed will not be removed.
***INFO: stopping Trafodion instance
SQ environment is not up.
Going to execute ckillall

Can't find file /home/trafodion/.vnc/trafodion-1:1.pid
You'll have to kill the Xvnc process manually

***INFO: restoring linux system files that were changed
***INFO: removing hbase-trx* from Hadoop directories
pdsh@trafodion-1: trafodion-1: ssh exited with exit code 1
pdsh@trafodion-1: trafodion-2: ssh exited with exit code 1
pdsh@trafodion-1: trafodion-1: ssh exited with exit code 1
pdsh@trafodion-1: trafodion-2: ssh exited with exit code 1
***INFO remove the Trafodion userid and group
***INFO: removing all files from /home/trafodion/apache-trafodion-1.3.0-incubating-bin
***INFO: removing all files from /usr/lib/trafodion and /var/log/trafodion
***INFO: removing all files from /etc/trafodion
***INFO: Trafodion uninstall complete.
[admin@trafodion-1 installer]$

11. Enable Security

Trafodion supports user authentication with LDAP, integrates with Hadoop’s Kerberos environment and supports authorization through database grant and revoke requests (privileges).

If this is an initial installation, both LDAP and Kerberos can be configured by running the Trafodion installer. If Trafodion is already installed, then both LDAP and Kerberos can be configured by running the Trafodion security installer.

  • If Hadoop has enabled Kerberos, then Trafodion must also enable Kerberos.

  • If Kerberos is enabled, then LDAP must be enabled.

  • If LDAP is enabled, then database authorization (privilege support) is automatically enabled.

  • If Kerberos is not enabled, then enabling LDAP is optional.

11.1. Configuring Trafodion for Kerberos

Kerberos is a protocol for authenticating requests for services or operations. It uses the notion of a ticket to verify access: the ticket is proof of identity, encrypted with a secret key for the particular service requested. Tickets exist for a short time and then expire, so you can use the service as long as your ticket is valid (that is, not expired). Hadoop uses Kerberos to provide security for its services; therefore, Trafodion must be able to function properly with a Kerberos-enabled Hadoop installation.

11.1.1. Kerberos configuration file

It is assumed that Kerberos has already been set up on all the nodes by the time Trafodion is installed. This section briefly discusses the Kerberos configuration file for reference.

The Kerberos configuration file defaults to /etc/krb5.conf and contains, among other attributes:

* log location: location where Kerberos errors and other information are logged
* KDC location: host location where the KDC (Key Distribution Center) is located
* admin server location: host location where the Kerberos admin server is located
* realm: the set of nodes that share a Kerberos database
* ticket defaults: contains defaults for ticket lifetimes, encoding, and other attributes

You need access to a Kerberos administrator account to enable Kerberos for Trafodion. The following example request, which lists the principals defined in the Kerberos database, can be used to test connectivity:

kadmin -p 'kdcadmin/admin' -w 'kdcadmin123' -s 'kdc.server' -q 'listprincs'
* -p (principal): replace 'kdcadmin/admin' with your admin principal
* -w (password): replace 'kdcadmin123' with the password for the admin principal
* -s (server location): replace 'kdc.server' with your KDC admin server location
* -q (command): the command to run; in this case, the principals are listed

11.1.2. Ticket Management

When Kerberos is enabled in Trafodion, the security installation process:

  • Adds a Trafodion principal in Kerberos, one per node with the name trafodion/hostname@realm.

  • Creates a keytab for each principal and distributes the keytab to each node. The keytab name is the same on all nodes and defaults to a value based on the distribution, for example: /etc/trafodion/keytabs/trafodion.service.keytab.

  • Performs a "kinit" on all nodes in the cluster for the trafodion user.

  • Adds commands to perform "kinit" and to start the ticket renewal procedure to the trafodion .bashrc scripts on each node.

The ticket renewal service renews tickets up to the maximum renewable lifetime allowed. For example, if your ticket lifetime is one day and the renewable lifetime is seven days, then the ticket renewal service automatically renews the ticket six times. Once the ticket expires, it must be initialized again before Trafodion can continue running. Logging on to each node as the trafodion user initializes a ticket if one does not exist.
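To check a node's ticket manually, something like the following can be used. The keytab path and principal follow the defaults described above, and YOUR.REALM is a placeholder; substitute the values used in your environment.

```shell
# Sketch: verify the trafodion user's Kerberos ticket on one node and
# re-initialize it from the keytab if it is missing or expired.
# The keytab path and YOUR.REALM are assumptions -- substitute the
# values used in your environment.
klist -s || kinit -kt /etc/trafodion/keytabs/trafodion.service.keytab \
    trafodion/$(hostname -f)@YOUR.REALM
klist   # shows ticket expiry and renewal limit
```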

TBD - A future update will include details on how tickets can be managed at the cluster level.

11.1.3. Kerberos installation

The Trafodion installation scripts automatically determine if Kerberos is enabled on the node. If it is enabled, then the environment variable SECURE_HADOOP is set to "Y".

The installer asks the following Kerberos-related questions:

  • Enter KDC server address, default is []: – no default

  • Enter admin principal (include realm), default is []: - no default

  • Enter fully qualified name for HBase keytab, default is []: - Installer searches for a valid keytab based on the distribution

  • Enter fully qualified name for HDFS keytab, default is []: - Installer searches for a valid keytab based on the distribution

  • Enter max lifetime for Trafodion principal (valid format required), default is [24hours]:

  • Enter renew lifetime for Trafodion principal (valid format required), default is [7days]:

  • Enter Trafodion keytab name, default is []: - Installer determines default name based on the distribution

  • Enter keytab location, default is []: - Installer determines default name based on the distribution

The Trafodion Installer always asks for the KDC admin password when enabling Kerberos, regardless of whether it is running in Automated or Guided mode. It does not save this password.

11.2. Configuring LDAP

Trafodion does not manage user names and passwords internally; instead, it supports authentication via directory servers that use the OpenLDAP protocol, also known as LDAP servers. You can configure the LDAP server(s) during installation by answering the Trafodion Installer’s prompts. To configure LDAP after installation, run the Trafodion security installer directly. Installing LDAP support also enables database authorization (privilege support).

Once authentication and authorization are enabled, Trafodion allows users to be registered in the database and allows privileges on objects to be granted to users and roles (which are granted to users). Trafodion also supports component-level (or system-level) privileges, such as MANAGE_USERS, which can be granted to users and roles. Refer to Manage Users below.

If you do not enable LDAP in Trafodion, then a client interface to Trafodion may still request a user name and password, but Trafodion ignores them, and the session runs as the database root user, DB__ROOT, without restrictions. If you want to restrict access to certain users, or to an object or operation, then you must enable security, which enforces authentication and authorization.

11.2.1. Configuring LDAP Servers

To specify the LDAP server(s) to be used for authentication, you need to configure the text file .traf_authentication_config, located (by default) in $TRAF_HOME/sql/scripts. This file is a flat file, organized as a series of attribute/value pairs. Details on all the attributes and values accepted in the authentication configuration file and how to configure alternate locations can be found in .traf_authentication_config below.

A sample template file is located in $TRAF_HOME/sql/scripts/traf_authentication_config.

Attributes and values in the authentication configuration file are separated by a colon that immediately follows the attribute name. In general, white space is ignored, but spaces may be relevant in some values. Attribute names are always case-insensitive. Multiple instances of an attribute are specified by repeating the attribute name with a new value. For attributes with only one instance, if the attribute is repeated, then the last value provided is used.

Attribute1: valueA
Attribute2: valueB
Attribute1: valueC

If Attribute1 has only one instance, valueC is used, otherwise, valueA and valueC are both added to the list of values for Attribute1.

Attributes are grouped into sections; this is for future enhancements. Attributes are declared in the LOCAL section, unless otherwise specified.

Section names, attribute names, and the general layout of the authentication configuration file are subject to change in future versions of Trafodion and backward compatibility is not guaranteed.

Specification of your directory server(s) requires at a minimum:

Setting Description Example

LDAP Host Name(s)

One or more names of hosts that support the OpenLDAP protocol must be specified. Trafodion attempts to connect to all provided host names during the authentication process. The set of user names and passwords should be identical on all hosts to avoid unpredictable results. The attribute name is LDAPHostName.

LDAPHostName: ldap.company.com

LDAP Port Number

Port number of the LDAP server. Typically this is 389 for servers using no encryption or TLS, and 636 for servers using SSL. The attribute name is LDAPPort.

LDAPPort: 389

LDAP Unique Identifier

Attribute(s) used by the directory server that uniquely identifies the user name. You may provide one or more unique identifier specifiers.

UniqueIdentifier: uid=,ou=users,dc=com

Encryption Level

A numeric value indicating the encryption scheme used by your LDAP server. Values are:

• 0: Encryption not used
• 1: SSL
• 2: TLS

LDAPSSL: 2

If your LDAP server uses TLS you must specify a file containing the certificate used to encrypt the password. By default the Trafodion software looks for this file in $TRAF_HOME/cacerts, but you may specify a fully qualified filename, or set the environment variable CACERTS_DIR to another directory. To specify the file containing the certificate, you set the value of the attribute TLS_CACERTFilename, located in the Defaults section.

Example

TLS_CACERTFilename: mycert.pem
TLS_CACERTFilename: /usr/etc/cert.pem

Search username and password

Some LDAP servers require a known user name and password to search the directory of user names. If your environment has that requirement, provide these "search" values.

LDAPSearchDN: lookup@company.com
LDAPSearchPwd: Lookup123
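Putting the required settings together, a minimal configuration might look like the fragment below (a sketch only; the host name, identifier suffix, and search credentials are placeholders for your environment). The shell commands write the fragment to a temporary file and sanity-check that the required attributes are present:

```shell
# Sketch: assemble a minimal authentication configuration fragment.
# All values (host name, DN suffixes, credentials) are placeholders.
cat > /tmp/traf_auth_example <<'EOF'
SECTION: Defaults
  DefaultSectionName: local
SECTION: local
  LDAPHostName: ldap.company.com
  LDAPPort: 389
  UniqueIdentifier: uid=,ou=users,dc=company,dc=com
  LDAPSSL: 0
  LDAPSearchDN: lookup@company.com
  LDAPSearchPwd: Lookup123
EOF

# Quick sanity check: the minimum required attributes are present.
for attr in LDAPHostName LDAPPort UniqueIdentifier; do
  grep -q "^  $attr:" /tmp/traf_auth_example || echo "missing $attr"
done
```

In a real installation the contents would go into .traf_authentication_config (see the template later in this chapter), not a temporary file.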

There are additional optional attributes that can be used to customize Trafodion authentication. As mentioned earlier, they are described in .traf_authentication_config below.

You can test the authentication configuration file for syntactic errors using the ldapconfigcheck tool. If you have loaded the Trafodion environment (sqenv.sh), then the tool automatically checks the file at $TRAF_HOME/sql/scripts/.traf_authentication_config. If not, you can specify the file to be checked.

Example

ldapconfigcheck -file myconfigfile
File myconfigfile is valid.

If an error is found, then the line number with the error is displayed along with the error. Please refer to ldapconfigcheck below for more information.

The authentication configuration file needs to be propagated to all nodes; a script, described later, does this for you. For now, you can test your changes on the local node.

You can test the LDAP connection using the utility ldapcheck. To use this utility the Trafodion environment must be loaded (sqenv.sh), but the Trafodion instance does not need to be running. To test the connection only, you can specify any user name, and a name lookup is performed using the attributes in .traf_authentication_config.

ldapcheck --username=fakename@company.com
User fakename@company.com not found

If ldapcheck reports either that the user was found or that the user was not found, the connection was successful. However, if an error is reported, either the configuration file is not set up correctly, or there is a problem with your LDAP server or with the connection to the server. You can get additional error detail by including the --verbose option. Please refer to ldapcheck for more information.

If you supply a password, ldapcheck attempts to authenticate the specified username and password. The example below shows the password for illustrative purposes, but to avoid typing the password on the command line, specify the password option without a value (--password) and the utility prompts for the password with no echo.

ldapcheck --username=realuser@company.com --password=StrongPassword
Authentication successful

11.2.2. Generate Trafodion Certificate

Trafodion clients such as trafci encrypt the password before sending it to Trafodion. A self-signed certificate is used to encrypt the password. The certificate and key are generated when the sqgen script is invoked. By default, the files server.key and server.crt are located in $HOME/sqcert. Because Trafodion clients do not send unencrypted passwords, you must generate those files manually if they are not present. To do so, run the script sqcertgen located in $TRAF_HOME/sql/scripts. The script runs openssl to generate the certificate and key.

To run openssl manually, follow the example:

openssl req -x509 -nodes -days 365 -subj '/C=US/ST=California/L=PaloAlto/CN=host.domain.com/O=Some Company/OU=Service Connection' \
  -newkey rsa:2048 -keyout server.key -out server.crt
Option Description
-x509

Generate a self-signed certificate.

-days <validity of certificate>

Make the certificate valid for the days specified.

-newkey rsa:<bits>

Generate a new private key of type RSA of length 1024 or 2048 bits.

-subj <certificateinfo>

Specify the information that is incorporated in the certificate. Each instance in a cluster should have a unique common name (CN).

-keyout <filename>

Write the newly generated RSA private key to the file specified.

-nodes

Optional parameter that specifies NOT to encrypt the private key. If you encrypt the private key, then you must enter the password every time the private key is used by an application.

-out <filename>

Write the self-signed certificate to the specified file.
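Combining the options above, certificate generation and verification can be sketched as follows (the subject fields are placeholders for your site; the SQCERT_DIR override shown in the comment is described in the list below):

```shell
# Sketch: generate the certificate and key with the options described above,
# then place them where Trafodion looks for them by default ($HOME/sqcert).
# SQCERT_DIR, if set, overrides the default directory.
CERTDIR="${SQCERT_DIR:-$HOME/sqcert}"
mkdir -p "$CERTDIR"
openssl req -x509 -nodes -days 365 \
  -subj '/C=US/ST=California/L=PaloAlto/CN=host.domain.com/O=Some Company/OU=Service Connection' \
  -newkey rsa:2048 -keyout "$CERTDIR/server.key" -out "$CERTDIR/server.crt" \
  2>/dev/null

# Verify the subject recorded in the new certificate.
openssl x509 -in "$CERTDIR/server.crt" -noout -subject
```

The final openssl x509 command is a quick way to confirm that the certificate was written and carries the expected common name.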

Both the public (server.crt) and private (server.key) files should be placed in the directory $HOME/sqcert. If you do not want to use the HOME directory, or if you want to use different names for the private and/or public key files, then Trafodion supports environment variables to specify the alternate locations or names.

  • Trafodion first checks the environment variables SQCERT_PRIVKEY and SQCERT_PUBKEY. If they are set, Trafodion uses the fully qualified filename value of the environment variable.

    You can specify either one filename environment variable or both.

  • If a filename environment variable is not set, Trafodion checks the value of the environment variable SQCERT_DIR. If it is set, then the default filename (server.key or server.crt) is appended to the value of SQCERT_DIR.

  • If the filename environment variable is not set and the directory environment variable is not set, then Trafodion uses the default location ($HOME/sqcert) and the default filename.
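The precedence rules above can be sketched as a small shell function (illustrative only; the function name resolve_private_key is not part of Trafodion, which performs the equivalent logic internally):

```shell
# Sketch of the private-key lookup order described above; the public-key
# lookup (server.crt / SQCERT_PUBKEY) follows the same pattern.
resolve_private_key() {
  if [ -n "$SQCERT_PRIVKEY" ]; then
    echo "$SQCERT_PRIVKEY"            # 1. explicit filename wins
  elif [ -n "$SQCERT_DIR" ]; then
    echo "$SQCERT_DIR/server.key"     # 2. alternate directory, default name
  else
    echo "$HOME/sqcert/server.key"    # 3. default location and name
  fi
}
```

For example, with only SQCERT_DIR=/opt/certs set, the resolved path would be /opt/certs/server.key.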

11.2.3. Creating the LDAP configuration file

The .traf_authentication_config file is used to enable the Trafodion security features.

File Location

By default, the Trafodion authentication configuration file is located in $TRAF_HOME/sql/scripts/.traf_authentication_config. If you want to store the configuration file in a different location and/or use a different filename, then Trafodion supports environment variables to specify the alternate location/name.

Trafodion first checks the environment variable TRAFAUTH_CONFIGFILE. If set, the value is used as the fully qualified name of the Trafodion authentication configuration file.

If the environment variable is not set, then Trafodion next checks the variable TRAFAUTH_CONFIGDIR. If set, the value is prepended to .traf_authentication_config and used as the Trafodion authentication file.

If neither is set, Trafodion defaults to $TRAF_HOME/sql/scripts/.traf_authentication_config.
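The lookup order can be sketched as a shell function (illustrative only; the name resolve_auth_config is not part of Trafodion):

```shell
# Sketch of the configuration-file lookup order described above.
resolve_auth_config() {
  if [ -n "$TRAFAUTH_CONFIGFILE" ]; then
    echo "$TRAFAUTH_CONFIGFILE"                               # 1. fully qualified file
  elif [ -n "$TRAFAUTH_CONFIGDIR" ]; then
    echo "$TRAFAUTH_CONFIGDIR/.traf_authentication_config"    # 2. alternate directory
  else
    echo "$TRAF_HOME/sql/scripts/.traf_authentication_config" # 3. default location
  fi
}
```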

Template
# To use authentication in Trafodion, this file must be configured
# as described below and placed in $TRAF_HOME/sql/scripts and be named
# .traf_authentication_config.
#
# NOTE: the format of this configuration file is expected to change in the
# next release of Trafodion.  Backward compatibility is not guaranteed.
#
SECTION: Defaults
  DefaultSectionName: local
  RefreshTime: 1800
  TLS_CACERTFilename:
SECTION: local

# If one or more of the LDAPHostName values is a load balancing host, list
# the name(s) here, one name: value pair for each host.
  LoadBalanceHostName:

# One or more identically configured hosts must be specified here,
# one name: value pair for each host.
  LDAPHostName:

# Default is port 389, change if using 636 or any other port
  LDAPPort:389

# Must specify one or more unique identifiers, one name: value pair for each
  UniqueIdentifier:

# If the configured LDAP server requires a username and password to
# perform name lookup, provide those here.
  LDAPSearchDN:
  LDAPSearchPwd:

# If the configured LDAP server requires SSL (1) or TLS (2), update this value
  LDAPSSL:0

# Default timeout values in seconds
  LDAPNetworkTimeout: 30
  LDAPTimeout: 30
  LDAPTimeLimit: 30

# Default values for retry logic algorithm
  RetryCount: 5
  RetryDelay: 2
  PreserveConnection: No
  ExcludeBadHosts: Yes
  MaxExcludeListSize: 3
Configuration Attributes
Attribute Name Purpose Example Value Notes

LDAPHostName

Host name of the local LDAP server.

ldap.master.com

If more than one LDAPHostName entry is provided, then Trafodion attempts to connect with each LDAP server before returning an authentication error. Also see the description related to RetryCount and RetryDelay entries.

LDAPPort

Port number of the local LDAP server.

345

Must be a numeric value. Related to the LDAPSSL entry. Standard port numbers for OpenLDAP are as follows:

• Non-secure: 389
• SSL: 636
• TLS: 389

LDAPSearchDN

If a search user is needed, the search user distinguished name is specified here.

cn=aaabbb, dc=demo, dc=net

If anonymous search is allowed on the local server, then this attribute does not need to be specified or can be specified with no value (blank). To date, anonymous search is the normal approach used.

LDAPSearchPWD

Password for the LDAPSearchDN value. See that entry for details.

welcome

None.

LDAPSSL

A numeric value specifying whether the local LDAP server interface is unencrypted or TLS or SSL. Legal values are 0 for unencrypted, 1 for SSL, and 2 for TLS. For SSL/TLS, see the section below on Encryption Support.

0

None.

UniqueIdentifier

The directory attribute that contains the user’s unique identifier.

uid=,ou=Users,dc=demo,dc=net

To account for the multiple forms of DN supported by a given LDAP server, specify the UniqueIdentifier parameter multiple times with different values. During a search, each UniqueIdentifier is tried in the order it is listed in the configuration file.

LDAPNetworkTimeout

Specifies the timeout (in seconds) after which the next LDAPHostName entry is tried, in case of no response for a connection request. This parameter is similar to NETWORK_TIMEOUT in ldap.conf(5). Default value is 30 seconds.

20

The value must be a positive number or -1. Setting this to -1 results in an infinite timeout.

LDAPTimeLimit

Specifies the time to wait when performing a search on the LDAP server for the user name. The number must be a positive integer. This parameter is similar to TIMELIMIT in ldap.conf(5). Default value is 30 seconds.

15

The server may still apply a lower server-side limit on the duration of a search operation.

LDAPTimeout

Specifies a timeout (in seconds) after which calls to synchronous LDAP APIs abort if no response is received. This parameter is similar to TIMEOUT in ldap.conf(5). Default value is 30 seconds.

15

The value must be a positive number or -1. Setting this to -1 results in an infinite timeout.

RetryCount

Number of attempts to establish a successful LDAP connection. Default is 5 retries before returning an error.

10

When a failed operation is retried, it is attempted with each configured LDAP server, until the operation is successful or the number of configured retries is exceeded.

RetryDelay

Specifies the number of seconds to delay between retries. Default value is 2 seconds. See description of RetryCount.

1

None.

PreserveConnection

Specifies whether the connection to LDAP server is maintained (YES) or closed (NO) once the operation finishes. Default value is NO.

YES

None.

RefreshTime

Specifies the number of seconds that must have elapsed before the configuration file is reread. Default is 1800 (30 minutes).

3600

If set to zero, the configuration file is never reread; the connectivity servers must be restarted for changes to take effect. This attribute is not specific to either configuration and must be defined in the DEFAULTS section.

TLS_CACERTFilename

Specifies the location of the certificate file for the LDAP server(s). Filename can either be fully qualified or relative to $CACERTS_DIR.

cert.pem

This attribute applies to both configurations. If a configuration does not require a certificate, then this attribute is ignored. This attribute must be defined in the DEFAULTS section.

DefaultSectionName

Specifies the configuration type that is assigned to a user by the REGISTER USER command if no authentication type is specified. In the initial Trafodion release, only one configuration is supported.

LOCAL

This attribute must be defined in the DEFAULTS section. If the DefaultSectionName attribute is specified, then a section by that name (or equivalent) must be defined in .traf_authentication_config. Legal values are LOCAL and ENTERPRISE. This syntax is likely to change.
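As noted for the UniqueIdentifier attribute above, you can list one UniqueIdentifier entry per DN form, and they are tried in the order listed. For example, to accept both regular users and service accounts (the organizational units here are hypothetical):

```
UniqueIdentifier: uid=,ou=Users,dc=demo,dc=net
UniqueIdentifier: uid=,ou=ServiceAccounts,dc=demo,dc=net
```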

11.2.4. Verifying configuration and users through ldapcheck

Usage
ldapcheck  [<option>]...
<option> ::= --help|-h            display usage information
             --username=<LDAP-username>
             --password[=<password>]
             --primary            Use first configuration
             --local              Use first configuration
             --enterprise         Use first configuration
             --secondary          Use second configuration
             --remote             Use second configuration
             --cluster            Use second configuration
             --verbose            Display non-zero retry counts
                                  and LDAP errors
Considerations
  • Aliases for primary include enterprise and local. Aliases for secondary include cluster and remote. If no configuration is specified, primary is assumed.

  • The equals sign is required when supplying a value to username or password.

  • To be prompted for a password value with no echo, specify the password argument but omit the equals sign and value.

  • Passwords that contain special characters may need to be escaped if the password is specified on the command line or within a script file.

  • If the password keyword is not specified, only the username is checked. The tool can therefore be used to test the LDAP configuration and connection to the configured LDAP server(s) without knowing a valid username or password.

11.2.5. Verifying contents of configuration file through ldapconfigcheck

This section describes the ldapconfigcheck tool, which validates the syntactic correctness of a Trafodion authentication configuration file. Trafodion does not need to be running to use the tool.

Considerations

If the configuration filename is not specified, then the tool looks for a file using environment variables. Those environment variables and the search order are:

  1. TRAFAUTH_CONFIGFILE

    A fully qualified name is expected.

  2. TRAFAUTH_CONFIGDIR

    The filename .traf_authentication_config is appended to the specified directory.

  3. TRAF_HOME

    /sql/scripts/.traf_authentication_config is appended to the value of TRAF_HOME.

Errors

One of the following is output when the tool is run. Only the first error encountered is reported.

Code Text
0

File filename is valid.

1

File filename not found.

2

File: filename

Invalid attribute name on line line-number.

3

File: filename

Missing required value on line line-number.

4

File: filename

Value out of range on line line-number.

5

File: filename

Open of traf_authentication_config file failed.

6

File: filename

Read of traf_authentication_config file failed.

7

No file provided. Either specify a file parameter or verify environment variables.

8

TLS was requested in at least one section, but TLS_CACERTFilename was not provided.

9

Missing host name in at least one section.

Each LDAP connection configuration section must provide at least one host name.

10

Missing unique identifier in at least one section.

Each LDAP connection configuration section must provide at least one unique identifier.

11

At least one LDAP connection configuration section must be specified.

12

Internal error parsing .traf_authentication_config.

11.3. Manage Users

Kerberos is enabled for installations that require a secure Hadoop environment. LDAP is enabled to enforce authentication for any user connecting to Trafodion. The Trafodion database enforces privileges on the database, database schemas, database objects (tables, views, etc.), and database operations. Privileges are enforced when authorization is enabled. When LDAP or Kerberos is enabled, authorization is automatically enabled.

To determine the status of authentication and authorization, start sqlci and run the env; command.

>>env;
----------------------------------
Current Environment
----------------------------------
AUTHENTICATION     enabled
AUTHORIZATION      enabled
CURRENT DIRECTORY  /.../incubator-trafodion/install/installer
LIST_COUNT         4294967295
LOG FILE
MESSAGEFILE        /.../incubator-trafodion/core/sqf/export/ ...
MESSAGEFILE LANG   US English
MESSAGEFILE VRSN   {2016-06-14 22:27 LINUX:host/user}
SQL CATALOG        TRAFODION
SQL SCHEMA         SCH
SQL USER CONNECTED user not connected
SQL USER DB NAME   SQLUSER1
SQL USER ID        33367
TERMINAL CHARSET   ISO88591
TRANSACTION ID
TRANSACTION STATE  not in progress
WARNINGS           on

Once authorization is enabled, there is one predefined database user, DB__ROOT, associated with your specified LDAP username. Connect to the database as this user and register the users who will perform database administration. Those administrators can then connect and set up the required users, roles, and privileges.

TBD - A future update should include a pointer to the security best practices guide.

To learn more about how to register users, grant object and component privileges, and manage users and roles, please see the Trafodion SQL Reference Manual.