Oracle GoldenGate With Microservices: Real-Time Scenarios with Oracle GoldenGate
Ebook · 841 pages

About this ebook

The book starts with a brief introduction to Oracle GoldenGate with Microservices and to configuring high availability using various methods. Oracle GoldenGate Microservices Architecture (MA) is a services-based architecture built on REST APIs that enables you to configure, monitor, and manage Oracle GoldenGate services through a web-based user interface. Each module supports a specific business goal and uses a simple, lightweight, well-defined interface to communicate with the other services. Oracle GoldenGate can also interact with custom conflict-resolution routines that customers write to satisfy their business rules.
Language: English
Release date: Feb 20, 2020
ISBN: 9789389328493

    Book preview

    Oracle GoldenGate With Microservices - Yenugula Venkata Ravi Kumar

    CHAPTER 1

    Introduction to Oracle GoldenGate HA-XAG Components

    Introduction

    Oracle GoldenGate (OGG) provides data capture and real-time replication mechanisms for heterogeneous databases. The OGG architecture can be implemented for almost all types of replication scenarios. Oracle Grid Infrastructure Bundled Agents (XAG), now part of Oracle Grid Infrastructure, provide an HA and management framework through the AGCTL command-line interface.

    OGG can be used with single-instance as well as cluster databases. In a cluster environment, GoldenGate can tolerate server failures by moving its processes to a surviving server. A Real Application Clusters (RAC) database replicated by OGG is considered a complete high availability (HA) architecture.

    Oracle introduced the Microservices Architecture with OGG 12.3; the older architecture is now called the Classic Architecture. The Microservices Architecture provides access through a secure web interface, a command-line interface, and REST APIs, which together simplify administration.

    The following diagrams illustrate the components of each architecture. The first is a view of the OGG Classic Architecture:

    Figure 1.1: Oracle GoldenGate Classic Architecture components

    The primary access into the OGG Classic Architecture is via the GoldenGate Software Command Interface (GGSCI). From GGSCI, you can control GG processes such as Manager, Extract (capture), Data Pump, and Replicat (apply).

    A view of an OGG Microservices Architecture is shown as follows:

    Figure 1.2: Oracle GoldenGate MA Architecture components

    As you will notice in the diagram, Extract (capture), trail files, and Replicat (apply) are still there, but the following components are modified or added in the MA architecture:

    Service manager: The service manager (SM) is the main interface into OGG. The SM HTML user interface shows the status of the administration server, the distribution server, the performance metrics server, and the receiver server. From the console, you can start, stop, and query other services and deployments. It acts as the watchdog process for the environment and is responsible for restarting other services that go down.

    Administration server: The administration server is the central management entity for GoldenGate. From a web-based interface, you can create and manage Extract and Replicat processes.

    Distribution server: This server distributes trail files to one or more destination database servers.

    Receiver server: A receiver server coordinates and handles all received trail files.

    Performance metrics server: The performance metrics server collects deployment performance results. All performance-related metrics are sent to this service by the OGG processes.
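    Because each of these services exposes REST endpoints, they can also be scripted against directly. As a small sketch, the function below composes `/services/v2/...` style URLs and prints (rather than executes) the corresponding curl commands; the host, port, and user are illustrative assumptions, not fixed values:

```shell
#!/bin/sh
# Compose Service Manager REST URLs; print curl commands without executing them.
# Host, port, and user below are illustrative assumptions.
ogg_rest_url() {
    host=$1; port=$2; resource=$3
    echo "https://${host}:${port}/services/v2/${resource}"
}

# Print candidate queries against an assumed Service Manager on localhost:9001.
for res in deployments services; do
    echo "curl -s -k -u oggadmin:*** $(ogg_rest_url localhost 9001 "$res")"
done
```

    The same endpoints back the HTML consoles, which is why anything visible in the web interface can also be automated.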

    Additional components in OGG Microservices Architecture are as follows:

    Admin client: Instead of using the GUI, you can use the admin client command-line utility to perform the same tasks. An adminclient usage example is shown as follows:

    $ export OGG_HOME=/GG_HOME/ma

    $ export JAVA_HOME=$OGG_HOME/jdk/jre

    $ cd $OGG_HOME/bin

    $ ./adminclient

    Oracle GoldenGate Administration Client for Oracle

    Version 19.1.0.0.2 OGGCORE_19.1.0.0.0_PLATFORMS_190823.0013

    Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.

    Linux, x64, 64bit (optimized) on Aug 23 2019 07:49:43

    Operating system character set identified as UTF-8.

    OGG (not connected) 5> connect http://localhost:9001 as oggadmin password oggadmin

    Using default deployment ‘MyDeployment’

    OGG (https://localhost:9001 MyDeployment) 6> help

    Admin Client Command Summary:

    !    - Executes the previous command without modifications.

    ADD CHECKPOINTTABLE      - Creates a checkpoint table in a database.

    ADD CREDENTIALS         - Create user credentials for use by the Administration Client.

    ADD CREDENTIALSTORE      - (Deprecated) Creates a credentials store (wallet) that stores encrypted database user credentials.

    ADD DISTPATH           - Creates a distribution path.

    Extract process: The capture mechanism of Oracle GoldenGate is called Extract, and it runs on the source database. Extract is responsible for capturing committed DML and DDL operations performed on objects in the Extract configuration and persisting them to trail files. Multiple Extract processes can operate on different objects at the same time.

    Replicat process: The Replicat process runs on the target system; it reads the trail files and applies their changes to the target database.

    Trails: A trail is a series of files on disk where GoldenGate stores the captured data. By default, trails are stored in the dirdat subdirectory of the OGG directory and are aged automatically to allow processing to continue without interruption.

    Checkpoints: Oracle GoldenGate processes record their read and write positions along the data flow in checkpoint files. Keeping the checkpoint files available cluster-wide is essential so that, after a failure occurs, the OGG processes can continue running from their last known position.

    AGCTL: In RAC, the main interface for contacting Oracle Clusterware is CRSCTL. This tool provides cluster-aware commands with which you can perform check, start, stop, and modify operations on the cluster. You can run these commands from any node, or on all nodes in the cluster, depending on the operation.

    One of the important components in Clusterware used in the GoldenGate HA architecture is the application VIP. An application VIP is a virtual IP address that can fail over to another node. Using this technique, the VIP remains online even when the node where it was running fails, as long as the cluster is up.

    Instead of CRSCTL, AGCTL should be used to manage application resources of type XAG. This framework provides a complete set of commands to bring bundled agents online or offline, relocate them, check their state, modify their configuration, remove them, or disable them.

    The syntax and usage of AGCTL is given as follows:

    # /u01/app/19.3.0/grid/bin/agctl

    Manages Apache Tomcat, Apache Webserver, E-Business Suite Concurrent Manager, Goldengate, JDE Enterprise Server, MySQL Server, Peoplesoft App Server, Peoplesoft Batch Server, Peoplesoft PIA Server, Siebel Gateway, Siebel Server, WebLogic Administration Server as Oracle Clusterware Resources

    Usage: agctl <verb> <object> [<options>]

    verbs: add|check|config|disable|enable|modify|query|relocate|remove|start|status|stop|upgrade

    objects: apache_tomcat|apache_webserver|ebs_concurrent_manager|goldengate|jde_enterprise_server|mysql_server|peoplesoft_app_server|peoplesoft_batch_server|peoplesoft_pia_server|siebel_gateway|siebel_server|weblogic_admin_server

    For detailed help on each verb and object and its options, use:

    agctl <verb> --help or

    agctl <verb> <object> --help

    Other Commands:

    agctl query releaseversion

    agctl query deployment

    agctl upgrade deployments

    For detailed help, you can run AGCTL with the -h option:

    # /u01/app/19.3.0/grid/bin/agctl -h
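    Putting the verbs above together, a typical resource lifecycle can be sketched as a dry run; the script below only prints the commands one might issue against a GoldenGate XAG resource, and the instance name gg_source is a hypothetical placeholder:

```shell
#!/bin/sh
# Print (without executing) a typical AGCTL lifecycle for a GoldenGate resource.
AGCTL=/u01/app/19.3.0/grid/bin/agctl   # grid home path used in this chapter
INSTANCE=gg_source                     # hypothetical XAG resource name

agctl_cmd() {
    # Compose the command line for a given verb.
    echo "$AGCTL $1 goldengate $INSTANCE"
}

for verb in start status relocate stop; do
    agctl_cmd "$verb"
done
```

    Printing the commands first makes it easy to review them before running AGCTL against a live cluster.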

    Setting up a filesystem for OGG MA

    For the OGG installation, you need to set up a filesystem. There are three filesystem types available for a GoldenGate installation: ACFS, DBFS, and NFS. The best practice is to store the OGG files in DBFS or ACFS. In this section, we describe the necessary steps for configuring all three filesystems, but only one of them should be configured.

    The filesystem configuration section assumes that you have already set up two RAC databases, one as a source and another as a target.

    Configure ACFS for OGG MA

    Oracle ACFS is bundled into the Oracle Grid Infrastructure, allowing for integrated optimized management of file systems, volumes, and databases. Oracle ACFS makes use of Oracle Automatic Storage Management (ASM) files and inherits ASM features, including striping, mirroring, rebalancing, preferred read, fast resync, flex ASM, and other features.

    Verify that the ACFS/ADVM modules are present in memory (on each node):

    # lsmod | grep oracle

    oracleacfs        5438460 0

    oracleadvm        1104207 0

    oracleoks         732987 2 oracleacfs, oracleadvm

    The above output shows that the modules are present. If your output differs, reinstall them as the root user:

    # cd /u01/app/19.3.0/grid/bin

    # ./acfsroot install

    # ./acfsload start

    ACFS-9391: Checking for existing ADVM/ACFS installation.

    ACFS-9392: Validating ADVM/ACFS installation files for operating system.

    ACFS-9393: Verifying ASM Administrator setup.

    ACFS-9308: Loading installed ADVM/ACFS drivers.

    ACFS-9325: Driver OS kernel version = 3.10.0-862.el7.x86_64.

    ACFS-9326:   Driver build number = 190703.

    ACFS-9212:   Driver build version = 19.0.0.0.0 (19.4.0.0.0).

    ACFS-9547:   Driver available build number = 190703.

    ACFS-9548:   Driver available build version = 19.0.0.0.0 (19.4.0.0.0).

    ACFS-9549:   Kernel and command versions.

    Kernel:

    Build version: 19.0.0.0.0

    Build full version: 19.4.0.0.0

    Build hash: 9256567290

    Bug numbers: NoTransactionInformation

    Commands:

    Build version: 19.0.0.0.0

    Build full version: 19.4.0.0.0

    Build hash: 9256567290

    Bug numbers: NoTransactionInformation

    ACFS-9327: Verifying ADVM/ACFS devices.

    ACFS-9156: Detecting control device ‘/dev/asm/.asm_ctl_spec’.

    ACFS-9156: Detecting control device ‘/dev/ofsctl’.

    ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf

    ACFS-9322: completed

    Create a separate DISKGROUP called GGDG for the GoldenGate software (from the first node of each cluster). To create a new DISKGROUP, use the following sample script:

    SQL> CREATE DISKGROUP GGDG NORMAL REDUNDANCY

    FAILGROUP PRIMRAC1 DISK '/dev/primrac1.lun4' NAME PRIMRAC1$LUN4 SIZE 20480M

    FAILGROUP PRIMRAC2 DISK '/dev/primrac2.lun4' NAME PRIMRAC2$LUN4 SIZE 20480M

    QUORUM FAILGROUP PRIMRACQ DISK '/dev/primracq.lun2' NAME PRIMRACQ$LUN2

         ATTRIBUTE 'au_size' = '4M'

            ,'compatible.asm' = '19.0.0.0'

            ,'compatible.rdbms' = '19.0.0.0';

    SQL> ALTER DISKGROUP GGDG SET ATTRIBUTE 'failgroup_repair_time' = '2400h';

    Make sure that DISKGROUP is mounted on all nodes.
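    One way to script this check is to parse `asmcmd lsdg` style output and confirm that the GGDG state is MOUNTED. The sketch below runs against a captured sample of such output (the sample line is illustrative, not from a live cluster):

```shell
#!/bin/sh
# Confirm a diskgroup is MOUNTED by parsing lsdg-style output.
# The sample below stands in for live output of: asmcmd lsdg
sample_lsdg='State    Type    Rebal  Sector  Name
MOUNTED  NORMAL  N      512    GGDG/'

dg_state() {
    # $1 = diskgroup name; reads lsdg output on stdin, prints the State column.
    awk -v dg="$1/" '$NF == dg { print $1 }'
}

echo "$sample_lsdg" | dg_state GGDG
```

    On a real cluster the same function would be fed from `asmcmd lsdg`, run on each node in turn.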

    From the first node of each cluster, create the ACFS volume:

    $ sudo su - grid

    $ asmcmd volcreate -G GGDG -s 19G ACFSGG

    Identify the device name of the volume (from the first node of each cluster):

    [grid@primrac1 ~]$ asmcmd volinfo -G GGDG -a

    Diskgroup Name: GGDG

    Volume Name: ACFSGG

    Volume Device: /dev/asm/acfsgg-91

    State: ENABLED

    Size (MB): 19456

    Resize Unit (MB): 512

    Redundancy: MIRROR

    Stripe Columns: 8

    Stripe Width (K): 1024

    Usage:

    Mountpath:

    [grid@stbyrac1 ~]$ asmcmd volinfo -G GGDG -a

    Diskgroup Name: GGDG

    Volume Name: ACFSGG

    Volume Device: /dev/asm/acfsgg-211

    State: ENABLED

    Size (MB): 19456

    Resize Unit (MB): 512

    Redundancy: MIRROR

    Stripe Columns: 8

    Stripe Width (K): 1024

    Usage:

    Mountpath:

    Create an ACFS filesystem on the volume:

    [grid@primrac1 ~]$ mkfs -t acfs /dev/asm/acfsgg-91

    mkfs.acfs: version             = 19.0.0.0.0

    mkfs.acfs: on-disk version       = 46.0

    mkfs.acfs: volume             = /dev/asm/acfsgg-91

    mkfs.acfs: volume size          = 20401094656 ( 19.00 GB)

    mkfs.acfs: Format complete.

    [grid@stbyrac1 ~]$ mkfs -t acfs /dev/asm/acfsgg-211

    mkfs.acfs: version             = 19.0.0.0.0

    mkfs.acfs: on-disk version       = 46.0

    mkfs.acfs: volume             = /dev/asm/acfsgg-211

    mkfs.acfs: volume size          = 20401094656 ( 19.00 GB)

    mkfs.acfs: Format complete.

    As the root user, create a directory named GG_HOME on all nodes of each cluster:

    # mkdir /GG_HOME

    # chmod 775 /GG_HOME

    # chown oracle:oinstall /GG_HOME

    As the root user, register the filesystem with Clusterware (from the first node of each cluster):

    [root@primrac1 ~]# srvctl add filesystem -device /dev/asm/acfsgg-91 -path /GG_HOME -volume acfsgg -diskgroup GGDG -user oracle -fstype ACFS -description "Primary ACFS for GoldenGate"

    [root@stbyrac1 ~]# srvctl add filesystem -device /dev/asm/acfsgg-211 -path /GG_HOME -volume acfsgg -diskgroup GGDG -user oracle -fstype ACFS -description "Standby ACFS for GoldenGate"

    Start the filesystem service:

    [root@primrac1 ~]# srvctl start filesystem -device /dev/asm/acfsgg-91

    [root@stbyrac1 ~]# srvctl start filesystem -device /dev/asm/acfsgg-211

    Check the filesystem service status:

    [root@primrac1 ~]# srvctl status filesystem -device /dev/asm/acfsgg-91

    ACFS file system /GG_HOME is mounted on nodes primrac1,primrac2

    [root@stbyrac1 ~]# srvctl status filesystem -device /dev/asm/acfsgg-211

    ACFS file system /GG_HOME is mounted on nodes stbyrac1,stbyrac2
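    These status checks can also be scripted. As a sketch, the function below parses srvctl-style status output (a captured sample, not a live query) and prints the nodes where the filesystem is mounted:

```shell
#!/bin/sh
# Extract the node list from srvctl-style filesystem status output.
# The sample stands in for: srvctl status filesystem -device /dev/asm/acfsgg-91
sample_status='ACFS file system /GG_HOME is mounted on nodes primrac1,primrac2'

mounted_nodes() {
    # Reads status output on stdin; prints the comma-separated node list.
    sed -n 's/.*is mounted on nodes //p'
}

echo "$sample_status" | mounted_nodes
```

    Comparing the printed list against the expected node names gives a quick automated health check after registration.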

    Configure DBFS for OGG MA

    Database File System (DBFS) was introduced in Oracle 11g Release 2. It creates a shared file system that has its files stored in the database.

    Create a new database for DBFS.

    Create a new tablespace in the DBFS database:

    $ sqlplus / as sysdba

    SQL> create tablespace dbfs_tbs datafile ‘+data’ size 10m autoextend on next 1m maxsize unlimited;

    Tablespace created.

    Create user and grant necessary permissions:

    SQL> create user dbfs_user identified by dbfs_user default tablespace dbfs_tbs quota unlimited on dbfs_tbs;

    User created.

    SQL> grant create session, resource, create table, create view, create procedure, dbfs_role to dbfs_user;

    Grant succeeded.

    Create the filesystem in the dbfs_tbs tablespace by running the dbfs_create_filesystem.sql script as the dbfs_user user:

    $ cd $ORACLE_HOME/rdbms/admin

    $ sqlplus dbfs_user/dbfs_user

    SQL> @dbfs_create_filesystem.sql dbfs_tbs fs_name

    No errors.

    Where dbfs_tbs is the tablespace name and fs_name is the filesystem name.

    Install the FUSE packages using yum. Perform the following on all source and target cluster nodes:

    # yum install kernel-devel fuse fuse-libs -y

    # yum install numactl* -y

    Mount the filesystem; perform the following on all nodes. Create a mount point:

    # mkdir /GG_HOME

    # chown oracle:oinstall /GG_HOME

    Add a new library path and create symbolic links on all cluster nodes:

    # echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf

    # export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1

    # ln -s $ORACLE_HOME/lib/libclntsh.so.19.1 /usr/local/lib/libclntsh.so.19.1

    # ln -s $ORACLE_HOME/lib/libclntshcore.so.19.1 /usr/local/lib/libclntshcore.so.19.1

    # ln -s $ORACLE_HOME/lib/libnnz19.so /usr/local/lib/libnnz19.so

    # ldconfig

    Uncomment user_allow_other in the /etc/fuse.conf file and make the fusermount binary executable (on all cluster nodes):

    # cat /etc/fuse.conf | grep -v '^#'

    user_allow_other

    # chmod +x /usr/bin/fusermount

    Set the ProcessUnpackaged parameter to yes in /etc/abrt/abrt-action-save-package-data.conf (on all cluster nodes):

    ProcessUnpackaged = yes

    Reboot the server:

    # reboot

    Add the connection string in $ORACLE_HOME/network/admin/tnsnames.ora file (on all nodes):

    ORCL_DBFS =

    (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = <scan_name>)(PORT = 1521))

    (CONNECT_DATA =

    (SERVER = DEDICATED)

    (SERVICE_NAME = orcl)))

    Note: instead of <scan_name>, indicate the exact SCAN name of the cluster.

    Download the mount-dbfs.zip file attached to My Oracle Support note 1054431.1. Place mount-dbfs.conf into /etc/oracle and mount-dbfs.sh into the /u01/app/19.3.0/grid/crs/script folder on each cluster node.

    Edit the variable settings in /etc/oracle/mount-dbfs.conf for your environment (on all nodes):

    DBNAME=orcl

    MOUNT_POINT=/GG_HOME

    DBFS_USER=dbfs_user

    ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1

    GRID_HOME=/u01/app/19.3.0/grid

    LOGGER_FACILITY=user

    MOUNT_OPTIONS=allow_other,direct_io

    DBFS_PASSWD=dbfs_user

    WALLET=false

    TNS_ADMIN=/u01/app/oracle/product/19.3.0/dbhome_1/network/admin/

    DBFS_LOCAL_TNSALIAS=ORCL_DBFS

    Set proper permissions (all nodes):

    # chown oracle:dba /u01/app/19.3.0/grid/crs/script/mount-dbfs.sh

    # chmod 750 /u01/app/19.3.0/grid/crs/script/mount-dbfs.sh

    # chown oracle:dba /etc/oracle/mount-dbfs.conf

    # chmod 640 /etc/oracle/mount-dbfs.conf

    Create an add_resource.sh script with the following content on the first node of each cluster:

    # sudo su - oracle

    $ cat /home/oracle/add_resource.sh

    #!/bin/bash

    ACTION_SCRIPT=/u01/app/19.3.0/grid/crs/script/mount-dbfs.sh

    RESNAME=dbfs_mount

    DBNAME=orcl

    ORACLE_HOME=/u01/app/19.3.0/grid

    PATH=$ORACLE_HOME/bin:$PATH

    export PATH ORACLE_HOME

    crsctl add resource $RESNAME \

    -type local_resource \

    -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \

    CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \

    START_DEPENDENCIES='hard(ora.$DBNAME.db)pullup(ora.$DBNAME.db)', \

    STOP_DEPENDENCIES='hard(ora.$DBNAME.db)', \

    SCRIPT_TIMEOUT=300"

    $ chmod 770 /home/oracle/add_resource.sh
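    The quoting in the -attr argument matters: attributes are comma-separated and the dependency values keep their own single quotes. As a sketch, the snippet below builds the attribute string for the database name used in this chapter and prints it so it can be inspected before being passed to crsctl:

```shell
#!/bin/sh
# Build the crsctl -attr string for the dbfs_mount resource.
DBNAME=orcl   # database name from this chapter's examples

ATTRS="ACTION_SCRIPT=/u01/app/19.3.0/grid/crs/script/mount-dbfs.sh,\
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10,\
START_DEPENDENCIES='hard(ora.$DBNAME.db)pullup(ora.$DBNAME.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAME.db)',\
SCRIPT_TIMEOUT=300"

# Inspect the string before using it with: crsctl add resource ... -attr "$ATTRS"
echo "$ATTRS"
```

    Echoing the assembled string first helps catch a missing comma or a stray smart quote before Clusterware rejects the resource definition.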

    Add the resource to Clusterware by executing the add_resource.sh script from only the first node of each cluster as the oracle user:

    # su - oracle

    $ /home/oracle/add_resource.sh

    Check resource permissions:

    # crsctl getperm resource dbfs_mount

    Name: dbfs_mount

    owner:oracle:rwx, pgrp:oinstall:rwx, other::r--

    If you don't see the same permissions, it means you did not run the add_resource.sh script as the oracle user, so you need to delete the resource and re-add it as the oracle user.

    Restart the database that is used for DBFS. Stop the database with the -f option:

    # srvctl stop database -db orcl -f

    Start the database:

    # srvctl start database -db orcl

    Start the dbfs resource:

    [root@primrac1 ~]# crsctl start resource dbfs_mount

    CRS-2672: Attempting to start ‘dbfs_mount’ on ‘primrac1’

    CRS-2672: Attempting to start ‘dbfs_mount’ on ‘primrac2’

    CRS-2676: Start of ‘dbfs_mount’ on ‘primrac1’ succeeded

    CRS-2676: Start of ‘dbfs_mount’ on ‘primrac2’ succeeded

    [root@stbyrac1 ~]# crsctl start resource dbfs_mount

    CRS-2672: Attempting to start ‘dbfs_mount’ on ‘stbyrac1’

    CRS-2672: Attempting to start ‘dbfs_mount’ on ‘stbyrac2’

    CRS-2676: Start of ‘dbfs_mount’ on ‘stbyrac2’ succeeded

    CRS-2676: Start of ‘dbfs_mount’ on ‘stbyrac1’ succeeded

    Check the status of the dbfs resource:

    [root@primrac1 ~]# crsctl status resource dbfs_mount

    NAME=dbfs_mount

    TYPE=local_resource

    TARGET=ONLINE      , ONLINE

    STATE=ONLINE on primrac1, ONLINE on primrac2

    [root@stbyrac1 ~]# crsctl status resource dbfs_mount

    NAME=dbfs_mount

    TYPE=local_resource

    TARGET=ONLINE      , ONLINE

    STATE=ONLINE on stbyrac1, ONLINE on stbyrac2
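    To automate this verification, the STATE line can be parsed to count how many nodes report ONLINE. The snippet below works on a captured sample of crsctl output rather than a live cluster:

```shell
#!/bin/sh
# Check that dbfs_mount reports ONLINE on every expected node.
# The sample stands in for: crsctl status resource dbfs_mount
sample_state='STATE=ONLINE on primrac1, ONLINE on primrac2'

online_count() {
    # Counts ONLINE occurrences in a crsctl STATE line read from stdin.
    grep -o 'ONLINE' | wc -l
}

count=$(echo "$sample_state" | online_count)
echo "$count nodes ONLINE"
```

    If the count is lower than the number of cluster nodes, the mount-dbfs.sh log (written via the configured LOGGER_FACILITY) is the first place to look.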

    Configure NFS for OGG MA

    Network File System (NFS), created by Sun Microsystems, allows shared access to files. We can use NFS to provide shared storage for an OGG installation.

    In a production environment, a dedicated NAS device must be used. For testing, the NFS server can be one of the RAC nodes itself. In our example, we will configure primrac1 and stbyrac1 as NFS servers.

    Create a mount point and assign the necessary permissions on all cluster nodes:

    # mkdir /GG_HOME

    # chmod 775 /GG_HOME

    # chown oracle:oinstall /GG_HOME

    Add the following line to the /etc/exports file on primrac1 and stbyrac1:

    /GG_HOME *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

    Enable and start NFS to export the NFS shares on primrac1 and stbyrac1:

    # systemctl enable nfs

    Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

    # systemctl start nfs

    Add the following in one line to the /etc/fstab file:

    On primrac2:

    primrac1:/GG_HOME /GG_HOME nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0

    On stbyrac2:

    stbyrac1:/GG_HOME /GG_HOME nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0

    Mount the NFS shares on primrac2 and stbyrac2:

    # mount /GG_HOME
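    GoldenGate is sensitive to NFS attribute caching, so it is worth verifying that actimeo=0 actually made it into the mount options. The check below parses an fstab-style line (a copy of the entry above) rather than a live mount table:

```shell
#!/bin/sh
# Verify required NFS mount options in an fstab-style entry.
fstab_line='primrac1:/GG_HOME /GG_HOME nfs rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0'

has_option() {
    # $1 = option, $2 = fstab line; succeeds if the option is present.
    echo "$2" | awk '{ print $4 }' | tr ',' '\n' | grep -qx "$1"
}

if has_option actimeo=0 "$fstab_line"; then
    echo "actimeo=0 present"
fi
```

    The same function can be pointed at the output of `mount` on a live node to confirm the options that were actually negotiated.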

    Configure and set up OGG MA

    Grid Infrastructure Agents provide Oracle Clusterware resources for OGG. Agents require an operational installation of the Oracle Grid Infrastructure.

    Before following the GG setup steps, you need to create two separate Real Application Clusters databases: the first RAC database as the source and the second as the destination.

    Install the Oracle GoldenGate Software

    Download the Oracle GoldenGate 19c Microservices software from Oracle Technology Network (OTN) at: http://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html

    Figure 1.3: Oracle Technology Network (OTN), Download Golden Gate

    There are two different binaries for Linux: Oracle GoldenGate 19.1.0.0.2 for Oracle on Linux x86-64 and Oracle GoldenGate 19.1.0.0.2 Microservices for Oracle on Linux x86-64. You need to download the binary which contains the keyword Microservices.

    Create the following directories from the first node of each cluster:

    Temporary staging directory:

    # mkdir -p /u01/stage/ggstg

    Microservices home:

    # mkdir /GG_HOME/ma

    Service Manager home:

    # mkdir /GG_HOME/sm

    Deployment home:

    # mkdir /GG_HOME/deploy

    Change the owner to oracle:

    # chown -R oracle:oinstall /GG_HOME

    Check the directory structure:

    # ll /GG_HOME

    total 220

    drwxr-xr-x 2 oracle oinstall 20480 Sep 25 11:14 deploy

    drwx------ 2 oracle oinstall 65536 Sep 25 11:04 lost+found

    drwxr-xr-x 2 oracle oinstall 20480 Sep 25 11:14 ma

    drwxr-xr-x 2 oracle oinstall 20480 Sep 25 11:14 sm

    Extract the installation ZIP file into the temporary staging directory:

    # unzip 191002_fbo_ggs_Linux_x64_services_shiphome.zip -d /u01/stage/ggstg

    # chown -R oracle:oinstall /u01/stage/ggstg

    Configure X forwarding or VNC for the oracle user.

    Connect as the oracle user and execute runInstaller:

    # su - oracle

    $ /u01/stage/ggstg/fbo_ggs_Linux_x64_services_shiphome/Disk1/runInstaller

    On the Select Installation Option page, select Oracle GoldenGate for Oracle Database 19c, and then click Next to continue:

    Figure 1.4: Installation option

    On the Specify Installation Details page, specify the Software Location as /GG_HOME/ma:

    Figure 1.5: Installation details, software location

    Click Install:

    Figure 1.6: Installation summary

    Click Close:

    Figure 1.7: Finish installation

    Check the content under GG MA home:

    Figure 1.8: Content under GG home

    Directory structure under /GG_HOME/ma is as follows:

    bin: This directory contains all MA programs and utilities, such as the Admin Client (adminclient), Administration Server (adminsrvr), Distribution Server (distsrvr), Extract data process (extract), MA Configuration Assistant (oggca.sh), wallet and certificate management tool (orapki), Performance Metrics Server (pmsrvr), Receiver Server (recvsrvr), Replicat data process (replicat), and Service Manager (ServiceManager).

    cfgtoollogs: This contains directories for opatch and Oracle Universal Installer log files (opatch, oui).

    deinstall: This contains the executable file to uninstall Golden Gate MA using Oracle Universal Installer (deinstall.sh).

    diagnostics: This directory contains OUI.xml. The file contains information about OUI log file locations.

    etc: After configuring MA, you will see actual configuration and security files under /GG_HOME/deploy/etc.

    include: This contains header files for compiling.

    install: A subdirectory of this directory contains globalcontext.xml, which holds installation session variable information.

    inventory: This directory contains XML files describing configuration information, OUI templates, scripts, and more.

    jdk: This is home for Java Development Kit.

    jlib: This is Java classes in Java archive format.

    lib: This contains MA libraries and the following subdirectories: MA HTML pages (htdocs); help files for MA HTML (info); directory for health check, legacy, and sharding utilities (sql); directory that contains install, logging, reverseproxy, and sharding utilities (utl).

    Opatch: This is the directory of the Oracle Patch utility.

    oraInst.loc: This file identifies the name of the Oracle Inventory group and the path to the Oracle Inventory directory.

    oui: This is the Oracle Universal Installer directory.

    srvm: Under this directory you can find a binary file called ractrans, which is a version of transferListedDirsToNodes API used to copy the directories to the remote nodes.

    var: After configuring MA, the directory that contains logs and reporting processing artifacts will be /GG_HOME/deploy/var.

    Key directories and environment variables in Oracle GoldenGate Microservices are the following:

    $ORACLE_HOME: This is the target or source Oracle database home (/u01/app/oracle/product/19.3.0/dbhome_1).

    $OGG_HOME: This is the Oracle GoldenGate home (/GG_HOME/ma).

    The following directories will be created after OGG MA configuration, later in this chapter:

    $OGG_ETC_HOME: It consists of conf and ssl subdirectories that contain configuration and security files (/GG_HOME/deploy/etc).

    $OGG_CONF_HOME: This directory contains information about the deployment, including configuration and parameter files ($OGG_ETC_HOME/conf).

    $OGG_SSL_HOME: This contains security files of the deployment, such as certificates and wallets ($OGG_ETC_HOME/ssl).

    $OGG_VAR_HOME: This directory contains logs and reporting processing artifacts (/GG_HOME/deploy/var).

    $OGG_DATA_HOME: This directory contains trail files ($OGG_VAR_HOME/lib/data).
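    Putting the variables above together, a profile fragment for the oracle user might look like the following; the paths mirror this chapter's layout and would change with a different deployment home:

```shell
# OGG MA environment, matching the directory layout used in this chapter.
export OGG_HOME=/GG_HOME/ma
export OGG_ETC_HOME=/GG_HOME/deploy/etc
export OGG_CONF_HOME=$OGG_ETC_HOME/conf
export OGG_SSL_HOME=$OGG_ETC_HOME/ssl
export OGG_VAR_HOME=/GG_HOME/deploy/var
export OGG_DATA_HOME=$OGG_VAR_HOME/lib/data
```

    Sourcing such a fragment before running the OGG tools keeps the deployment paths consistent across sessions.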

    Note that the Oracle GoldenGate MA installation should be done from the primrac1 and stbyrac1 servers.

    Install Oracle Grid Infrastructure Standalone Agents

    Oracle Grid Infrastructure Standalone Agent (XAG) automates the start and stop of the GG deployment when it relocates between nodes. During relocation, the automatic mounting of the ACFS and DBFS shared filesystems is managed by XAG. The Oracle GoldenGate Microservices Architecture is supported only from XAG version 9 onward.

    Install Oracle Grid Infrastructure Standalone Agents using the following procedure:

    Download the software from https://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/xag-agents-downloads-3636484.html.

    Place the downloaded software in /tmp/Install. From the first node of each cluster, run the following as the grid user:

    # chown -R grid:dba /tmp/Install

    # su - grid

    $ cd /tmp/Install

    $ unzip xagpack91.zip

    From primrac1:

    [grid@primrac1 Install]$ ./xag/xagsetup.sh --install --directory /u01/app/grid/xag --all_nodes

    Installing Oracle Grid Infrastructure Agents on: primrac1

    Installing Oracle Grid Infrastructure Agents on: primrac2

    Done.

    Updating XAG resources.

    Successfully updated XAG resources.

    From stbyrac1:

    [grid@stbyrac1 Install]$ ./xag/xagsetup.sh --install --directory /u01/app/grid/xag --all_nodes

    Installing Oracle Grid Infrastructure Agents on: stbyrac1

    Installing Oracle Grid Infrastructure Agents on: stbyrac2

    Done.

    Updating XAG resources.

    Successfully updated XAG resources.
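    After the agents are installed, the GoldenGate deployment is typically registered with XAG via agctl add goldengate. The exact option set varies by XAG release, so the snippet below only composes and prints a candidate command; the instance name, filesystem resource, and node list are placeholders, and the flag names shown should be verified against `agctl add goldengate --help` for the installed XAG version:

```shell
#!/bin/sh
# Compose (without executing) a candidate XAG registration command.
# All values are illustrative placeholders; verify the flag names against
# 'agctl add goldengate --help' for the installed XAG release.
XAG_HOME=/u01/app/grid/xag
INSTANCE=gg_source

CMD="$XAG_HOME/bin/agctl add goldengate $INSTANCE \
--gg_home /GG_HOME/ma \
--filesystems dbfs_mount \
--nodes primrac1,primrac2"

echo "$CMD"
```

    Reviewing the composed command before running it avoids registering a resource with the wrong home or filesystem dependency.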

    Deploying OGG Microservices

    Once Oracle GG is installed, the next step is to deploy it using the GG Configuration Assistant (oggca.sh).

    Create a new Service Manager from the first node of each cluster using the following steps (this step requires X forwarding or VNC):

    # su - oracle

    $ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1

    $ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

    $ export TNS_ADMIN=$ORACLE_HOME/network/admin

    $ export OGG_HOME=/GG_HOME/ma

    $ $OGG_HOME/bin/oggca.sh

    On the Select Service Manager Options page, choose Create New Service Manager. Enter the Service Manager Deployment Home and the localhost value in the Listening hostname/address field. Enter the Listening port and choose Integrate with XAG:

    Figure 1.9: Service Manager Options

    Choose Add new GoldenGate deployment:

    Figure 1.10: Configuration Options

    Specify Deployment Name and GG Software Home:

    Figure 1.11: Deployment Details

    Specify Deployment home:

    Figure 1.12: Deployment Directories

    Set Environment Variables if you have not set them before running the configuration assistant:

    Figure 1.13: Environment Variables

    Set Username and Password for the Service Manager administrator:
