About Using PostgreSQL as the database
You can use PostgreSQL as the working database for your migration project. The following guidelines apply:
All table, view, database, schema, and field names must be lowercase.
Each database contains a schema that uses the same name as the database, referenced in the format <database name>.<schema name>.<table name>.
For example, postgres.tgtecc.mara
Each PostgreSQL Datasource must have a corresponding Syniti Knowledge Platform (SKP) System Connection set up with the correct username and password for each schema in the working database.
When scripting views, the table name must be qualified with its schema, for example, wrkfin.c2_adrc_ecc_supplier_s.
Constants used with an alias are cast to text in the following format: 'T005'::text AS checktable.
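Taken together, a view script that follows these guidelines might look like the following. This is an illustrative sketch only; the view name and column list are assumptions, while the schema-qualified table and the aliased constant mirror the examples above.
CREATE OR REPLACE VIEW wrkfin.c2_adrc_ecc_supplier_v AS
SELECT
    s.supplierid,                       -- all identifiers in lowercase
    s.country,
    'T005'::text AS checktable          -- constant cast to text and aliased
FROM wrkfin.c2_adrc_ecc_supplier_s s;   -- table name qualified with its schema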
All work in the PostgreSQL database is done using the pgAdmin application, an open-source administration and development platform for PostgreSQL that can be downloaded from the pgAdmin website (pgadmin.org).
Preparation for using PostgreSQL Database
The Admin sets up the PostgreSQL database server as a Migration Working Database Server in Administer > Setup > Guided Setup within the initial weeks on site. Refer to Initial Setup - First Week for more information on the initial system setup.
Datasource Server settings for the PostgreSQL database
Note
If you use Data Services for ETL processing, the database version of the PostgreSQL server must be entered exactly as it is entered in the PostgreSQL datastore within Data Services. Data Services is case-sensitive, and an incorrect case structure causes issues when importing data for a datasource.
SRCCONSTRUCT Build in PostgreSQL
Create the CONSTRUCT Datasource as a Snapshot Source Datasource (SRCCONSTRUCT) in the PostgreSQL database. This new Snapshot datasource stores the CONSTRUCT table data for use during Mock runs, as the table data remains static. Each Construct table built in Syniti Construct and stored in the SQL Server CONSTRUCT database is added to the SRCCONSTRUCT datasource in Snapshot Management for the PostgreSQL database, and its Metadata is also imported.

Datasources: Source Snapshot of CONSTRUCT db: Add Tables
Note
Along with storing all Construct pages as tables in the SRCCONSTRUCT database in PostgreSQL, the XTVALUEMAP and LIST_TARGET_VALUE tables are stored in the SRCCONSTRUCT database as Snapshot Datasource tables to provide access for Value Mapping. The XREF ETL logic uses the values in the snapshot of the XTVALUEMAP table for translations: the ETL tasks INSERT into MIGRATE.dbo.XTVALUEMAP and update target values from SRCCONSTRUCT.dbo.XTVALUEMAP. The LIST_TARGET_VALUE table provides the Target Value, Load Value, and Description of each record by Datasource and Check Table.
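As a rough illustration of the translation step the note describes, the update might take the following shape. This is a hedged sketch written in PostgreSQL syntax for consistency with the other examples in this topic; the actual ETL tasks run against the SQL Server databases, and the join columns (check_table and source_value) are assumptions.
-- Sketch only: refresh target values in the migration copy of XTVALUEMAP
-- from the static snapshot copy; join columns are assumed.
UPDATE migrate.xtvaluemap AS m
SET    target_value = s.target_value
FROM   srcconstruct.xtvaluemap AS s
WHERE  m.check_table  = s.check_table
AND    m.source_value = s.source_value;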
This Source Snapshot is connected to the CONSTRUCT datasource that comes delivered and points to the Syniti Migrate system > CONSTRUCT database in SQL Server. As each table is added to the Source Snapshot datasource, the system attempts to import Metadata, for example, metadata for columns.

Datasources: Source Snapshot Datasource SRCCONSTRUCT: Import Table Metadata
An example of the scripting code for the PostgreSQL CREATE TABLE is shown below:
CREATE TABLE IF NOT EXISTS srcconstruct.fi_f0101_address_c
(
address character varying(50) COLLATE pg_catalog."default",
addressinfo character varying(50) COLLATE pg_catalog."default",
contactinfo character varying(50) COLLATE pg_catalog."default",
country character varying(50) COLLATE pg_catalog."default",
customerdetails character varying(50) COLLATE pg_catalog."default",
customerid character varying(50) COLLATE pg_catalog."default",
zdeploy character varying(50) COLLATE pg_catalog."default",
zsource character varying(50) COLLATE pg_catalog."default",
zstatus character varying(50) COLLATE pg_catalog."default",
targetimport character varying(50) COLLATE pg_catalog."default",
addedby character varying(256) COLLATE pg_catalog."default",
addedon timestamp without time zone,
addedvia character varying(50) COLLATE pg_catalog."default",
changedby character varying(256) COLLATE pg_catalog."default",
changedon timestamp without time zone,
changedvia character varying(50) COLLATE pg_catalog."default",
boastatus integer
)
Important!
Field names must be entered in lowercase only.
Report Schema in PostgreSQL
An Admin updates the delivered Datasource for REPORT (stored in the Syniti Migrate Application and Instance) to point to the PostgreSQL working database’s server and schema.

Datasources: Datasource REPORT: Details
The REPORT Datasource is stored in the PostgreSQL database. As each object runs through the ETL process, the reports are generated and the data is stored in tables of the report schema in the PostgreSQL database. The Environments page designates the appropriate datasource for reports.
Note
Access the PostgreSQL database using the pgAdmin application.

PostgreSQL DB: Schema report: List generated Reports
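You can also list the generated report objects directly in pgAdmin with a query such as the following, a minimal sketch that assumes the schema is named report as shown above.
-- List all tables and views generated in the report schema
SELECT table_name, table_type
FROM information_schema.tables
WHERE table_schema = 'report'
ORDER BY table_name;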
Parameters
Access Administer > Setup > Parameters from the Syniti Migrate home page. The Parameters are set for the PostgreSQL database using the Working Database Type of PostgreSQL, which is delivered with the system. Review the information in all fields on the Parameters page, and refer to Administer > Setup > Guided Setup for more information about these fields.
Environments
To set up the Report Cache Database for the PostgreSQL database:
Access Administer > Setup > Environments from the Syniti Migrate home page.
Environments
Click the Edit icon for the required environment. The Environment dialog box is displayed.
From the Report Cache Datasource list, select the PostgreSQL report datasource. This datasource was previously created in Syniti Migrate > Datasources. Refer to Catalog > Datasources for more information on the Report Cache Datasource setup.

Environments: Edit Details
Datasources in PostgreSQL
An Admin creates the Application and Instance for the PostgreSQL database in Catalog > Datasources to point to this database system. Once the Snapshot Datasource is built in Syniti Migrate and connected to the System Source or System Target, the user may Build Import XML. The Snapshot Datasource tables are automatically built in the PostgreSQL database as part of this process.

PostgreSQL database: Schema
Once the Snapshot Datasource is built within Syniti Migrate and connected to the System Datasource, the user may Build Import XML for the import process. This scenario applies when the ETL Tool is set to SAP Data Services on the Parameters page.
Note
If the user attempts to create the PostgreSQL Snapshot Datasource tables manually after they have already been created, the job queue indicates a failure because the tables already exist in the system.

Snapshot Management: Snapshot Datasource: Details

Snapshot Management: Snapshot Datasource: Details: View Import XML scripting
The tables are automatically built in the PostgreSQL Database as shown below. If the tables already exist in the snapshot database, the Debug Log is empty (the table already exists and is not rebuilt).

Job Queue: Build Import XML and Snapshot Tables
The following example script is for building tables in the PostgreSQL database:
CREATE TABLE IF NOT EXISTS srcfoo.supplier
(
supplierid character varying(15) COLLATE pg_catalog."default" NOT NULL,
supplier character varying(50) COLLATE pg_catalog."default" NOT NULL,
addresstype character varying(50) COLLATE pg_catalog."default" NOT NULL,
addressline1 character varying(60) COLLATE pg_catalog."default",
addressline2 character varying(60) COLLATE pg_catalog."default",
city character varying(30) COLLATE pg_catalog."default",
statecode text COLLATE pg_catalog."default",
state character varying(50) COLLATE pg_catalog."default",
postalcode character varying(15) COLLATE pg_catalog."default",
countrycode character varying(3) COLLATE pg_catalog."default",
country character varying(50) COLLATE pg_catalog."default",
phonenumber character varying(25) COLLATE pg_catalog."default",
activeflag boolean NOT NULL,
"test 123" text COLLATE pg_catalog."default",
createddate date,
createdby character varying(50) COLLATE pg_catalog."default",
CONSTRAINT pk_supplier PRIMARY KEY (supplierid)
)
In the example above, the supplier table is being created in the srcfoo schema for the target snapshot.
If successful, the newly created tables display in the PostgreSQL database for the srcfoo schema.
Note
Make sure to refresh the schema after running Code Build to make these new items visible.
Build the System Views
When the user attempts to Build System Views for the Project, Release, or Subject Area in System Views, a Job Queue is initiated to create these views in the PostgreSQL database from the target system.
The Snapshot Datasource for the target system uses the tgt prefix; in this case, tgtbar is selected. The schema in the PostgreSQL database is the target snapshot schema (tgtbar).

System Views: Build System Views
Click the Build button to initiate the job queue to build out these Pass Thru Views in the PostgreSQL database. Refer to section Migrate > System Views for details of this process.

Project Setup: Subject Area: Build System Views
The Job Queue first runs a DROP query for the view, and the second step runs a CREATE query for the view in the tgt Datasource, so there are twice as many steps as there are tables in the Target Snapshot Schema.

Job Queue: Build System Views for the Subject Area
Example scripts are shown below:
DROP View:
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM information_schema.views
               WHERE table_name = LOWER('"/1cn/cpasap00001_custom"')
                 AND table_schema = '"wrkfin"') THEN
        DROP VIEW IF EXISTS "wrkfin"."/1cn/cpasap00001_custom";
    END IF;
END $$;
Create View:
CREATE OR REPLACE VIEW "wrkfin"."/1cn/cpasap00001_custom" AS
SELECT * FROM "tgtbar"."/1cn/cpasap00001"
Scripting for Datasets within Mapping
No updates are needed for the ETL XML scripting. The Target Tables are automatically generated in the PostgreSQL database, provided the delivered Database Type Translations from SQL Server to the PostgreSQL database, and in the PostgreSQL database itself, are correct. These translations are delivered to work successfully and do not require any update.

Administer: Database Types: Translation of PostgreSQL field format
Note
PostgreSQL uses lowercase field names; using uppercase produces invalid data.
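This note reflects PostgreSQL's identifier folding: unquoted identifiers are folded to lowercase, so a quoted uppercase name does not match a column stored in lowercase. A minimal illustration, reusing the wrkfin.ad_adrc_t table created in the example below:
SELECT zsource FROM wrkfin.ad_adrc_t;       -- succeeds: unquoted identifier folds to lowercase
-- SELECT "ZSOURCE" FROM wrkfin.ad_adrc_t;  -- fails: column "ZSOURCE" does not exist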

Mappings: Build XML for Dataset Mappings: Create Table for PostgreSQL DB
The following is an example script for a Target Table:
CREATE TABLE IF NOT EXISTS wrkfin.ad_adrc_t
(
zdatasource character varying(4000) COLLATE pg_catalog."default",
zdeploy character varying(4000) COLLATE pg_catalog."default",
zlegacyaddrnumber character varying(4000) COLLATE pg_catalog."default",
zlegacydate_from character varying(4000) COLLATE pg_catalog."default",
zlegacynation character varying(4000) COLLATE pg_catalog."default",
client character varying(4000) COLLATE pg_catalog."default",
addrnumber character varying(4000) COLLATE pg_catalog."default",
date_from character varying(4000) COLLATE pg_catalog."default",
nation character varying(4000) COLLATE pg_catalog."default",
name1 character varying(4000) COLLATE pg_catalog."default",
country character varying(4000) COLLATE pg_catalog."default",
langu character varying(4000) COLLATE pg_catalog."default",
region character varying(4000) COLLATE pg_catalog."default",
zerror_log character varying(4000) COLLATE pg_catalog."default",
zloaded character varying(4000) COLLATE pg_catalog."default",
zloaddate character varying(4000) COLLATE pg_catalog."default",
zdata_ready character varying(4000) COLLATE pg_catalog."default",
zstage_ready character varying(4000) COLLATE pg_catalog."default",
ztarget_ready character varying(4000) COLLATE pg_catalog."default",
zchunk character varying(4000) COLLATE pg_catalog."default",
zintargetsys character varying(4000) COLLATE pg_catalog."default",
zremediated character varying(4000) COLLATE pg_catalog."default",
zcritical character varying(4000) COLLATE pg_catalog."default",
zpreload character varying(4000) COLLATE pg_catalog."default",
zaddmanual character varying(4000) COLLATE pg_catalog."default",
regiogroup character varying(4000) COLLATE pg_catalog."default",
city1 character varying(4000) COLLATE pg_catalog."default",
date_to character varying(4000) COLLATE pg_catalog."default"
)
TABLESPACE pg_default;
ALTER TABLE IF EXISTS wrkfin.ad_adrc_t
OWNER to postgres;
PostgreSQL DB: Table Definition
Scripting for Sources within Mapping
No updates are needed for the ETL XML scripting. The Source Tables are automatically generated in the PostgreSQL database, provided the delivered Database Type Translations from SQL Server to the PostgreSQL database, and in the PostgreSQL database itself, are correct.
Once the mapping effort has been completed, Build the XML for the source. The View XML page shows minor coding changes for the PostgreSQL database language.

Mappings: Source: Details: View XML
The following is an example script for the PostgreSQL database:
CREATE TABLE IF NOT EXISTS wrkfin.c2_adrc_foo_supplier_s
(
supplierid character varying(15) COLLATE pg_catalog."default" NOT NULL,
supplier character varying(50) COLLATE pg_catalog."default" NOT NULL,
addresstype character varying(50) COLLATE pg_catalog."default" NOT NULL,
addressline1 character varying(60) COLLATE pg_catalog."default",
addressline2 character varying(60) COLLATE pg_catalog."default",
city character varying(30) COLLATE pg_catalog."default",
statecode text COLLATE pg_catalog."default",
state character varying(50) COLLATE pg_catalog."default",
postalcode character varying(15) COLLATE pg_catalog."default",
countrycode character varying(3) COLLATE pg_catalog."default",
country character varying(50) COLLATE pg_catalog."default",
phonenumber character varying(25) COLLATE pg_catalog."default",
activeflag boolean NOT NULL,
"test 123" text COLLATE pg_catalog."default",
createddate date,
createdby character varying(50) COLLATE pg_catalog."default",
zsource character varying(50) COLLATE pg_catalog."default",
zdeploy character varying(10) COLLATE pg_catalog."default",
zlegacyaddrnumber character varying(50) COLLATE pg_catalog."default",
zlegacydate_from character varying(50) COLLATE pg_catalog."default",
zlegacynation character varying(50) COLLATE pg_catalog."default",
zclient character varying(3) COLLATE pg_catalog."default",
zaddrnumber character varying(10) COLLATE pg_catalog."default",
zdate_from date,
znation character varying(1) COLLATE pg_catalog."default",
zdate_to date,
zname1 character varying(40) COLLATE pg_catalog."default",
zstreet character varying(60) COLLATE pg_catalog."default",
zcountry character varying(3) COLLATE pg_catalog."default",
zlangu character varying(1) COLLATE pg_catalog."default",
zregion character varying(3) COLLATE pg_catalog."default",
zduplicate_source character varying(50) COLLATE pg_catalog."default",
zduplicate_of character varying(50) COLLATE pg_catalog."default",
zerror_log character varying(2000) COLLATE pg_catalog."default",
zinactive_reason character varying(2000) COLLATE pg_catalog."default",
zcritical boolean DEFAULT false,
zactive boolean DEFAULT true,
zxrefcountry character varying(100) COLLATE pg_catalog."default",
zxreflangu character varying(100) COLLATE pg_catalog."default",
zxrefregion character varying(100) COLLATE pg_catalog."default",
CONSTRAINT pk_c2_adrc_foo_supplier_s PRIMARY KEY (supplierid)
)
TABLESPACE pg_default;
ALTER TABLE IF EXISTS wrkfin.c2_adrc_foo_supplier_s
OWNER to postgres;
PostgreSQL DB: Table Definition for field with camel case
Creating Target Reports
The Reporting process in this scenario builds the views in the Working Database (PostgreSQL), and the reporting data is stored in the REPORT database in SQL Server, where Migration is run. Most of the standard reports auto-generate and build views as expected. Note that for the Not IN reports, the Target system (ECC, S4H) Data Dictionary tables (DD Tables) must be stored and refreshed within the Target Datasource so that these Check Table reports can build the appropriate joins. If any of the Not IN reports fail to build, the issue is with the DD Tables of the Target Datasource. The user can manually correct these views for use.
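For orientation, a Not IN report view typically takes a shape like the following sketch. The view name and field pairing are hypothetical; the example checks the country field of a target table against the T005 check table in the target snapshot schema, assuming the check table's key field is land1.
-- Hypothetical Not IN report: target values missing from the check table
CREATE OR REPLACE VIEW wrkfin.dk_vendor_t_rpt_country_notin AS
SELECT t.zsource, t.zdeploy, t.country
FROM wrkfin.dk_vendor_t t
WHERE t.country IS NOT NULL
  AND t.country NOT IN (SELECT c.land1 FROM tgtbar.t005 c);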
The Counts and Recon reports are built in the database.
On the All Errors Report:
The report is registered but not created in the database until the Report Refresh process runs.
The report regenerates each time an Error report is changed (added, deleted, or marked inactive).
Note
Marking a report inactive requires a report refresh to initiate regeneration of the All Errors Report.

Job Queue: Target Reports build
Note
For any step that shows a status of ERROR, click the Edit icon to view the error message. The SQL parameter provides the script of that report as its value. The user should attempt to manually build the same report in PostgreSQL to view the extended error message from PostgreSQL for troubleshooting.
An example of the scripting for Target Reports in PostgreSQL DB is shown below:
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM information_schema.views
                   WHERE table_name = LOWER('dk_vendor_t_rpt_vendor_isnull')) THEN
        CREATE VIEW "wrkfin"."dk_vendor_t_rpt_vendor_isnull" AS
            SELECT T.ZSOURCE, T.ZDEPLOY, T.ZLEGACYVENDORID, T.VENDOR
            FROM "wrkfin"."dk_vendor_t" T
            WHERE T.VENDOR IS NULL;
    END IF;
END $$;
Value Mapping Configuration for PostgreSQL
You can configure the list of Check Tables for use in Value Mapping in the PostgreSQL database. Check Tables are used to validate and control values entered in fields, primarily using SAP ETL. In the Mapping exercise, target fields are assigned a mapping type of XREF (cross reference), and the assigned Check Table and its values are added to Value Mapping using the XTVALUEMAP table in the Syniti Construct SQL Server database. When the conversion runs, the ETL inserts into the MIGRATE.dbo.XTVALUEMAP table and updates the SRCCONSTRUCT.dbo.XTVALUEMAP table.
Note
The XRef Update rules use the value in the LOAD_VALUE field in the XTVALUEMAP table for the target value. If no LOAD_VALUE is stored for the record, the Xref Update rules use the TARGET_VALUE.
In Snapshot Management, the MIG_XTVALUEMAP view has been added to the MIGRATE database. This view provides access for the Xref Update rules to use the LOAD_VALUE field. Update the Snapshot Table Name of the XTVALUEMAP snapshot to MIG_XTVALUEMAP so that you can use this view as a source for the XTVALUEMAP snapshot from the MIGRATE database to the SRCCONSTRUCT database. The XRef Update rules can then reference the LOAD_VALUE field from the new XTVALUEMAP table in the SRCCONSTRUCT database.
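As a rough sketch of the fallback the Xref Update rules apply, such a view could expose a COALESCE over the two fields. This is written in PostgreSQL syntax for consistency with this topic; the delivered MIG_XTVALUEMAP view lives in the MIGRATE database and its actual definition may differ, and the other column names are assumptions.
-- Sketch only: prefer LOAD_VALUE, fall back to TARGET_VALUE
CREATE OR REPLACE VIEW mig_xtvaluemap AS
SELECT x.check_table,
       x.source_value,
       COALESCE(x.load_value, x.target_value) AS target_value
FROM   xtvaluemap x;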
The team maps source values to the check table target values in a Value Mapping exercise to produce valid translations. In some cases, the target values do not exist in the Target ETL, or the Target Check Table does not yet exist in the Target ETL. This process provides the user with the ability to create Check Tables and add fields to Check Tables for use in the Value Mapping process.
Once the Target ETL is in sync with the values in the Check Table, these manually configured values change their status to system-configured values. Users may Insert, Update, Delete, or mark values Inactive in the Check Tables.
When SQL Server is not the working database (that is, the working database is PostgreSQL or HANA DB), a Snapshot Datasource must be built in the PostgreSQL Server database for this process to work. The local Snapshot must also contain the required DD Tables, including DD02L, DD02T, DD03L, DD03T, DD04T, DD05P, and DD08L, as well as any of the check tables used in Construct. Refer to Administer > Value Mapping Config for more information on this process and its use.
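A quick way to confirm that the required DD Tables are present is to query the snapshot schema directly, as in the following sketch, which assumes the target snapshot schema is tgtbar as in the earlier examples.
-- Verify the required DD tables exist in the target snapshot schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'tgtbar'
  AND table_name IN ('dd02l', 'dd02t', 'dd03l', 'dd03t', 'dd04t', 'dd05p', 'dd08l');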