Oracle to Azure Database for PostgreSQL Migration Playbook
Prepared by
Azure Data Engineering Team
Disclaimer
The High-Level Architecture, Migration Dispositions and guidelines in this document are developed in
consultation and collaboration with Microsoft Corporation technical architects. Because Microsoft must respond
to changing market conditions, this document should not be interpreted as an invitation to contract or a
commitment on the part of Microsoft.
Microsoft has provided generic high-level guidance in this document with the understanding that MICROSOFT
MAKES NO WARRANTIES, EXPRESS OR IMPLIED, WITH RESPECT TO THE INFORMATION CONTAINED HEREIN.
This document is provided “as-is”. Information and views expressed in this document, including URL and other
Internet Web site references, may change without notice.
Some examples depicted herein are provided for illustration only and are fictitious. No real association or
connection is intended or should be inferred.
This document does not provide you with any legal rights to any intellectual property in any Microsoft product.
You may copy and use this document for your internal reference purposes.
© 2024 Microsoft. All rights reserved.
Contents

Introduction
    Why Migrate?
    Why Choose Azure Database for PostgreSQL?
Discovery
Assessments
Database Schema Migration
Database Code Migration
    Refactoring Oracle-Proprietary Features
Data Migration
Application Code Migration
Migration Validation
Performance Tuning
    Query Store
    Index Tuning
    Intelligent Tuning
Cloud Optimization
Migration Best Practices
Migration Planning Checklist
Partner Solution Providers
    Software Solutions
    Services Solutions
    ATROPOSS – Open Source Solution Approach
    Cortex by Splendid Data
    CYBERTEC Migrator
    DMAP by Newt Global
    Hyper-Q by Datometry
    HVR/Fivetran
    Ispirer Solutions
    Liberatii Gateway
    Ora2Pg
    QMigrator by Quadrant Technologies
    Spectral Core
Case Studies
Contact Us
Congratulations!
If you are considering the migration of an Oracle database workload, there has never been a better
time to modernize your platform. Your interest signifies your intent to add more value and new
capabilities to your database environment, while simultaneously reducing licensing costs and
operating overhead. These ambitions are the very same core drivers that motivate the Azure
Postgres Engineering Group, so we say to you, "Welcome to the team!"
Every championship team leverages a playbook to ensure their rules and goals are well defined, to
provide guidance on avoiding penalties, coordinate and implement strategies, and ultimately, to
succeed and win. The Oracle to Azure Postgres Migration Playbook provides a comprehensive
overview of the key phases required to migrate successfully and introduces our solution provider
ecosystem, which offers tools, technologies, and services to ensure each stage of your migration
moves forward.
Why Migrate?
Sustainable organizations are increasingly assessing their infrastructural environments against the
ever-changing collection of business drivers to plan their strategic priorities and operations.
Decision makers are seeking efficiencies wherever possible to focus their investments in areas that
drive innovation and growth. Under these circumstances, it's easy to understand why so many
teams are finding success in leveraging cloud platforms to host their critical workloads. Whether
contending with overwhelming security threats, software and hardware refresh cycles, budget and
resource constraints, or end-of-life support agreements, the Azure cloud delivers on-demand
infrastructure, prioritizes security, and encourages innovation for every facet of your delivery
roadmap.
Why Choose Azure Database for PostgreSQL?
As businesses evolve, database platforms need to keep pace in adopting modern features and
capabilities. Modernizing onto Azure Database for PostgreSQL (Azure Postgres) enables
organizations to focus on innovation rather than database management, hardware operations,
disaster recovery mitigation, and licensing fees. Azure Postgres raises your ability to take
advantage of cloud-native infrastructure and encapsulates the key principles of the
well-architected pillars: cost optimization, operational excellence, performance efficiency,
reliability, and security. Azure Postgres additionally embodies extensibility, offering powerful
and innovative transactional capabilities that unlock a variety of critical workloads, such as
time-series data, geospatial workloads, and cutting-edge generative AI language models,
delivering increased performance and decreased cost while maintaining complete
observability and control over your environment.
Azure Postgres is built upon the official open-source Postgres community builds, ensuring your
workloads are compatible with every Postgres release without risk of proprietary dependencies or
lock-ins. Our team embraces the open-source community, and we proudly employ the most
Postgres contributors who are actively enhancing and maintaining the community Postgres
platform.
Getting Started
A critical step towards the successful completion of your initiative includes recognition that Oracle
migrations are complex operations which require the successful execution of multiple sequential
key phases and must be addressed in a specific and structured order. Carefully orchestrating and
following these established methodologies and battle-tested processes is essential to achieving
success. Our experience and expertise in supporting countless successful migrations can ensure
that your migration is able to leverage and apply our learned lessons within your specific migration
scenario. Additionally, there are key solution providers within the Microsoft Partner network
offering powerful tools to assist with your migration efforts. This reference is intended to help
identify key migration stages and recommend the ideal set of services and solutions for each stage
of your Oracle migration.
Discovery
Most customers are already well acquainted with the quantities and locations of their Oracle
database instances (especially their associated licensing costs); however, for the sake of
completeness, we are highlighting this phase as an important starting point in your migration. The
Discovery phase is an ideal stage to determine the appropriate scope of your migration efforts. Do
you have an Oracle database server "farm" environment requiring tens, hundreds, or even
thousands of databases to migrate? Are you considering an at-scale migration following a
“migration factory” approach? Or, is your environment more suitable for the end-to-end migration
of a single database together with modernizing all of your connected clients before moving onto the
next database on the migration list? In any case, an up-to-date and thorough inventory is a
critical prerequisite, and the Discovery phase ensures you are prepared for success.
Assessments
Assessments encapsulate many different types of estimate-based exploratory operations which
are individually defined by their unique characteristics. Some assessments are designed to
estimate and categorize the complexity of effort and resources involved in migrating database
objects, based upon factors such as the number of objects (potentially even exploring the
number of lines of code) requiring attention from a subject matter expert. Alternatively, other types
of assessments explore the structure and size of the underlying data and provide guidance
regarding the amount of time required to fully migrate data to the destination environment. Yet
another assessment type is structured to ensure your destination Azure Postgres resources are
appropriately scaled to accommodate the compute, memory, IOPS, and network configuration
required to service your data. One of the most important assessments which must be included to
ensure your migration success is a thorough review and consideration of all connected clients and
the scope comprising all dependent applications. To summarize, when preparing your migration
assessments, ensure you are assessing all aspects of your database migration, including:
•  Database schema / code conversion quantity and complexity
•  Database size and scale
•  Database resource operating requirements
•  Client application code migration
Your assessment accuracy will be closely tied to the specific tools and service platforms used to
execute and complete the subsequent migration steps. It's therefore important to recognize that
several factors can impact the accuracy of these assessment estimates, and that reported results
correlate directly with the tools used in your migration assessment. Take care to avoid
interpolating estimate outputs from different or combined tools when reviewing and incorporating
assessment outputs into your migration plans.
For further specific guidance on assessments, see our recommended best practices below.
Database Schema Migration
Structured data definitions are one of the hallmarks of transactional database engines and an
essential foundation to a well-designed data platform. Ensuring that your unique Oracle data
structures and data type definitions will be properly mapped to their respective tables within Azure
Postgres is a critical requirement to the overall success in your migration. While all transactional
databases share many similarities, data table and column data type differences do exist and care
must be taken to ensure your data is not inadvertently lost, truncated, or mangled due to
mismatched data definitions. Numeric data types, date/time data types, and text-based data types
are just some examples of areas that must be closely examined when developing corresponding
data mappings for your migration.
The following table contains examples of the differences between Oracle and Postgres data types.

Oracle Data Types                          Postgres Data Types

Number/Integer Type Mappings
NUMBER(p, s)                               NUMERIC(p, s)
NUMBER (without scale)                     BIGINT or DOUBLE PRECISION (depending on usage)
FLOAT                                      DOUBLE PRECISION

Character/String Types
CLOB, NCLOB                                TEXT

Date/Time Types
DATE                                       TIMESTAMP
TIMESTAMP                                  TIMESTAMP (WITH|OUT TIME ZONE)
TIMESTAMP WITH TIME ZONE                   TIMESTAMPTZ
TIMESTAMP WITH LOCAL TIME ZONE             TIMESTAMPTZ
INTERVAL YEAR TO MONTH, DAY TO SECOND      INTERVAL

Binary Types
BLOB, RAW                                  BYTEA

Spatial Types
SDO_GEOMETRY                               GEOMETRY (via PostGIS)

Other Types
LONG                                       TEXT
XMLTYPE                                    XML
BFILE                                      TEXT or custom
ANYDATA                                    Converted using custom mappings
In Oracle, tables can store very large and very small values and users have the option of
using a special data type: NUMBER. The NUMBER data type is capable of storing a
maximum precision value of 38 decimal digits, and the total range of values can be from
1.0E-129 to 9.99E125. Critically, these values can be defined without explicitly defining
precision or scale:
CREATE TABLE NUMBER_TEST (val NUMBER NOT NULL);
When subsequently storing data in a NUMBER column, any numeric values provided are stored
using the default precision and scale settings:
INSERT INTO NUMBER_TEST VALUES (1.0E-129);  --VERY SMALL NUMBER
INSERT INTO NUMBER_TEST VALUES (9.99E125);  --VERY LARGE NUMBER
This can complicate schema migrations when using tools designed to analyze table definition
metadata:
SELECT COLUMN_NAME, DATA_TYPE, NULLABLE, DATA_LENGTH, DATA_PRECISION, DATA_SCALE
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'NUMBER_TEST';
Returns:

COLUMN_NAME   DATA_TYPE   NULLABLE   DATA_LENGTH   DATA_PRECISION   DATA_SCALE
VAL           NUMBER      N          22            (null)           (null)

As shown above, the table definition metadata does not contain any precision or scale values,
and these values are therefore unavailable. It is critical to understand both the underlying purpose
of the column values and the minimum and maximum possible values encountered in order to
ensure your destination data types are properly mapped.
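For example, when the full range of an unconstrained Oracle NUMBER must be preserved, an
unconstrained Postgres NUMERIC is one defensible target. A minimal sketch, mirroring the table
above:

CREATE TABLE NUMBER_TEST (val NUMERIC NOT NULL);  -- unconstrained NUMERIC
INSERT INTO NUMBER_TEST VALUES (1.0E-129);        -- very small value accepted
INSERT INTO NUMBER_TEST VALUES (9.99E125);        -- very large value accepted

If a column is known to hold only bounded integers, a narrower type such as BIGINT will generally
store and compare more efficiently.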
Oracle and Postgres handle date/time data types differently. For example, Oracle uses
the DATE data type to store both date and time precise data. In Postgres, the DATE data
type is used for only storing a calendar date and the TIME data type is used to store time
of day data. The Postgres TIMESTAMP data type is equivalent to the Oracle DATE data type.
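A minimal sketch of this mapping (the events table is hypothetical):

-- Oracle: DATE stores both the calendar date and the time of day
--   CREATE TABLE events (created DATE);
-- Postgres: use TIMESTAMP to preserve the time component
CREATE TABLE events (created TIMESTAMP);  -- a Postgres DATE column would discard the time of day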
When reviewing your schema design, special care should be taken to assess and review your
data partitioning implementation. If your original data tables are configured using a specific
partitioning strategy, you must ensure that you are familiar with the differences between how
Oracle and Postgres query planning and execution engines utilize partitioned table structures.
While it is possible to replicate granular or complex partition schema designs to store data in
Postgres, you may also be surprised to find that the very same partitioned structures result in
performance that is unexpectedly and dramatically different.
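For reference, Postgres declarative range partitioning takes the following shape (a minimal sketch
with hypothetical names):

CREATE TABLE orders (
    order_id   BIGINT NOT NULL,
    order_date DATE   NOT NULL
) PARTITION BY RANGE (order_date);

CREATE TABLE orders_2024 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

After migrating, validate with EXPLAIN that your queries prune to the expected partitions; plans
that scan every partition are an early warning of the performance differences described above.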
Database Code Migration
Database code migration refers to the process of converting database code written for Oracle to be
compatible with the Postgres database engine, while maintaining both the original functionality
and existing performance characteristics. This process entails converting Oracle PL/SQL queries,
stored procedures, functions, triggers, and other database objects into compliant Postgres
PL/pgSQL. Fortunately, Oracle’s PL/SQL and Postgres’ PL/pgSQL procedural language dialects
share many similarities, and this is commonly the initial factor many organizations identify when
selecting Postgres as the best fit for Oracle database migrations. There are, however, some unique
differences and distinctions between the two database languages which must be considered.
Areas of attention include: database-specific keywords and syntax, exception handling, built-in
functions, data types, and sequence incrementation.
In many instances, the Postgres extension ecosystem can be a powerful ally to assist with
streamlining your code migration process. For example, the extension “Oracle Functions for
PostgreSQL” (orafce) provides a set of built in Oracle-compatibility functions and packages which
can reduce the need to rewrite parts of your codebase that rely on and reference these Oracle
functions. Using this compatibility-based approach during the migration of Oracle code to
PostgreSQL offers significant advantages: it reduces the complexity, time, and cost of the
migration by maintaining the original logic and functionality of your source database definitions,
ensures consistency in results, and enhances developer productivity. All of these benefits add up
to a simpler and more efficient code migration to PostgreSQL.
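For instance, once orafce is enabled, many Oracle-style calls can run with little or no rewriting. A
minimal sketch (orafce installs many of its functions into an oracle schema, though placement
varies by extension version):

CREATE EXTENSION orafce;
SELECT oracle.add_months(DATE '2024-01-31', 1);  -- Oracle-compatible ADD_MONTHS
SELECT oracle.last_day(DATE '2024-02-10');       -- Oracle-compatible LAST_DAY
SELECT nvl(NULL::text, 'fallback');              -- Oracle-compatible NVL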
The following table contains examples of the differences between Oracle and Postgres features.
Oracle Feature                             Postgres Equivalent

Implicit cursors                           Use explicit cursors or FOR loops in PL/pgSQL
Explicit cursors                           Use Postgres cursors using DECLARE and FETCH
Dynamic SQL (EXECUTE IMMEDIATE)            Use Postgres EXECUTE statements
Exception handling (EXCEPTION)             Use PL/pgSQL EXCEPTION blocks
%TYPE and %ROWTYPE attributes              Use variable declarations and composite types
FORALL and BULK COLLECT                    Often replaced by set-based operations or FOR loops
DBMS_SQL package calls                     May require manual handling for dynamic SQL

String Functions
CONCAT                                     || operator
INSTR(string, substring,                   POSITION(substring IN string)
  position, occurrence)
REGEXP_LIKE                                SIMILAR TO or ~ operator
REGEXP_SUBSTR                              REGEXP_MATCHES or manual extraction with regex

Date and Time Functions
SYSDATE                                    CURRENT_DATE or NOW()
SYSTIMESTAMP                               CURRENT_TIMESTAMP
DBTIMEZONE                                 SHOW TIME ZONE (manual conversion)
ADD_MONTHS(date, n)                        date + INTERVAL 'n months'
MONTHS_BETWEEN(date1, date2)               Custom calculation: EXTRACT(YEAR FROM age(date1, date2)) * 12 + EXTRACT(MONTH FROM age(date1, date2))
LAST_DAY(date)                             DATE_TRUNC('month', date) + INTERVAL '1 month - 1 day'
NEXT_DAY(date, weekday)                    Custom logic using CASE or EXTRACT(DOW)
ROUND(date, format)                        DATE_TRUNC(format, date)
TRUNC(date, format)                        DATE_TRUNC(format, date)

Conversion Functions
TO_NUMBER(string, format)                  CAST(string AS NUMERIC) or TO_NUMBER(string, format)
TO_CLOB                                    CAST(string AS TEXT)
TO_BLOB                                    CAST(string AS BYTEA)

Hierarchical Queries
CONNECT BY PRIOR                           Use recursive WITH queries

Null-Related Functions
NVL(expression1, expression2)              COALESCE(expression1, expression2)
NVL2(expression, if_not_null, if_null)     CASE WHEN expression IS NOT NULL THEN if_not_null ELSE if_null END

Miscellaneous Operations
DECODE                                     CASE
SYS_GUID()                                 UUID_GENERATE_V4() (requires uuid-ossp extension)
USER                                       CURRENT_USER
VSIZE                                      OCTET_LENGTH
DBMS_OUTPUT.PUT_LINE                       RAISE NOTICE
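To illustrate the hierarchical query mapping, an Oracle CONNECT BY traversal is typically rewritten
as a recursive CTE. A sketch over a hypothetical employees table:

-- Oracle:
--   SELECT employee_id, manager_id FROM employees
--   START WITH manager_id IS NULL
--   CONNECT BY PRIOR employee_id = manager_id;

-- Postgres equivalent:
WITH RECURSIVE org AS (
    SELECT employee_id, manager_id
    FROM employees
    WHERE manager_id IS NULL                      -- root rows (START WITH)
    UNION ALL
    SELECT e.employee_id, e.manager_id
    FROM employees e
    JOIN org o ON e.manager_id = o.employee_id    -- CONNECT BY PRIOR
)
SELECT * FROM org;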
Refactoring Oracle-Proprietary Features
There are several functional differences between the Oracle and Postgres dialects. Pay special
attention to the existence of these Oracle objects:
Packages
Oracle PL/SQL provides the PACKAGE object type for logically grouping custom data types, stored
procedures, functions, etc. Of particular note, however, is the ability to encapsulate package-level
scoped variables, constants, cursors, and exceptions. These additional properties can allow the
package to maintain state throughout operations. There is no directly analogous construct in
PL/pgSQL for package-level scoped properties, therefore it is essential to identify and understand
the specific dependencies on any Oracle-specific functionality used within your database.
Since Postgres does not contain support for a package object, a common solution is to use
schemas to organize your objects to maintain a logical structure and organization.
When refactoring any package session state, you can consider using Postgres temporary tables.
Postgres temporary tables are session-specific and allow you to store and retrieve state
information throughout your session.
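A minimal sketch of both techniques, with hypothetical names:

-- A schema stands in for the PACKAGE namespace:
CREATE SCHEMA billing;
CREATE FUNCTION billing.compute_tax(amount NUMERIC) RETURNS NUMERIC
    LANGUAGE sql AS $$ SELECT amount * 0.2 $$;

-- A temporary table stands in for package-level session state:
CREATE TEMPORARY TABLE billing_state (name TEXT PRIMARY KEY, value TEXT);
INSERT INTO billing_state VALUES ('run_mode', 'dry-run')
    ON CONFLICT (name) DO UPDATE SET value = EXCLUDED.value;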
Autonomous Transactions
These sections of code allow for independently managed sub-areas of a transaction, allowing
specific operations to be committed even when rolling back the parent transaction operation.
Postgres has no native autonomous transaction support, so equivalent behavior must be
implemented through a separate session or connection.
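One common emulation (a sketch, not the only approach) routes the autonomous work through a
separate connection via the dblink extension, since statements executed on that connection
commit independently of the calling transaction. The audit_log table is hypothetical:

CREATE EXTENSION dblink;

BEGIN;
-- This insert survives even if the outer transaction rolls back,
-- because dblink executes it on its own connection:
SELECT dblink_exec(
    'dbname=' || current_database(),
    'INSERT INTO audit_log(msg) VALUES (''payment attempt recorded'')'
);
ROLLBACK;  -- the audit_log row remains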
NULL Values
In PL/SQL, empty strings (‘’) are treated as NULL. In PL/pgSQL, an empty string is not considered
NULL. Similarly, in PL/SQL, concatenating a NULL value with a non-NULL value results in a
non-NULL string. In PL/pgSQL, this same operation returns NULL.
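These differences are easy to verify directly (a minimal sketch):

-- Oracle:  '' IS NULL evaluates to TRUE, and NULL || 'abc' yields 'abc'
-- Postgres:
SELECT '' IS NULL;           -- false: an empty string is a value, not NULL
SELECT NULL || 'abc';        -- NULL: concatenation with NULL yields NULL
SELECT CONCAT(NULL, 'abc');  -- 'abc': CONCAT() ignores NULL arguments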
For further specific guidance on database code migrations, see our recommended best practices
below.
Data Migration
In today’s data-driven environment, your data is arguably your most valuable asset. Your data
resources increasingly influence every aspect of informed business operations and strategic
decision making. It’s therefore especially vital that your data migration pipelines operate efficiently
and expediently, are fully consistent and verifiable, and ultimately complete successfully.
Your data migration strategy should be carefully considered to determine whether “offline” or “live”
approaches are applicable to your environment. Each data migration strategy has its own blend of
benefits and considerations, and the choice between “offline” and “live” operations depends on the
specific requirements and constraints of your environment. For example, “offline” migrations can
be more straightforward and less complex than “live” migrations, however, “offline” migrations
involve downtime for the period of time required to fully migrate your data to your destination
database. “Live” migrations offer minimal to no downtime, however they involve more complexity
and infrastructure to oversee the initial backfill data load and the subsequent data synchronization
of changes which may have occurred since the start of the data migration. Careful planning,
thorough assessment of business requirements, and consideration of your team’s specific critical
factors will ensure you are able to make an informed decision fully aligned with your data migration
needs.
Keep in mind that both the underlying data schema definitions and overall data sizes can
impact performance and lengthen the time required to migrate data. Extremely wide
tables and/or tables with character large object (CLOB) data, as well as tables with
extremely high row counts, should be considered when planning your migrations.
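During planning, a quick inventory of the largest segments on the Oracle side can flag these cases
early. A sketch to run against Oracle (requires access to dba_segments):

SELECT owner, segment_name,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS size_gb
FROM dba_segments
WHERE segment_type IN ('TABLE', 'LOBSEGMENT')  -- LOBSEGMENT surfaces CLOB/BLOB storage
GROUP BY owner, segment_name
ORDER BY size_gb DESC
FETCH FIRST 20 ROWS ONLY;  -- Oracle 12c+ syntax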
Application Code Migration
While external applications may be technically considered outside the domain of database team
migration responsibilities, updating and modernizing the database connectivity to your client
applications is an essential and closely interrelated stage in the overall success of your database
migration journey. As with the other phases of your migration, the associated effort and complexity
involved in remediating your client application platform compatibility depends on the unique
circumstances of your environment. Are your client applications developed by a third party? If so,
it’s important to ensure that their software product is certified to support the Postgres database
platform. Are your in-house applications using object relational mapping technologies, such as
(n)Hibernate or Entity Framework? In some cases, a small configuration or file change may be all
that is required. Conversely, if you have significant amounts of database queries and statements
embedded within your code, you may need to allocate more time to appropriately review, modify,
and validate your code changes.
Alternatively, there are partner solution providers offering novel approaches capable of translating
legacy client database operations in real-time. These proxy services provide an abstraction over
your database layers which effectively decouple your applications from any database-specific
language dependencies.
In many cases, your decision may incorporate a combination of multiple strategies: a hybrid
approach that collectively employs each for its respective strengths and combined capabilities.
Deploying a real-time database translation layer can enable your teams to rapidly re-deploy their
client applications while providing your software engineers and developers with appropriate time
and resource planning to refactor their database-specific dependencies to support Postgres native
operations.
Each of these choices is accompanied by its own particular set of considerations and benefits,
and it is essential that your teams carefully review each of these approaches to determine the
ideal strategic path forward.
Migration Validation
When migrating from Oracle to PostgreSQL, ensuring data integrity and logical consistency are both
paramount. Migration validation plays a critical role in this process, as it involves verifying that the
data transferred from the source Oracle database is accurate and complete in the target
PostgreSQL system. This step is essential not only for maintaining the trustworthiness of the data
but also for confirming that the migration process has not introduced any errors or discrepancies.
Validation checks can include comparing table counts, verifying data types and structures,
comparing row-level column values, and ensuring that complex queries yield consistent results
across both databases. Additionally, special attention must be paid to handling differences in how
the two database systems manage data, such as variations in date and time formats, character
encoding, and handling of null values.
This typically involves setting up automated validation scripts that can compare datasets in both
databases and highlight any anomalies. Tools and frameworks designed for data comparison can
be leveraged to streamline this process. Post-migration validation should be an iterative process,
with multiple checks conducted at various stages of the migration to catch issues early and
minimize the risk of data corruption. By prioritizing data validation, organizations can confidently
transition from Oracle to PostgreSQL, knowing that their data remains reliable and actionable.
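As a starting point, row counts and deterministic fingerprints can be compared across both
systems. A sketch (app.orders is hypothetical, and the fingerprint logic must be expressed in each
engine's dialect):

-- Run the equivalent on both sides and compare the results:
SELECT COUNT(*) FROM app.orders;

-- Postgres: a deterministic fingerprint over selected columns
SELECT MD5(STRING_AGG(order_id::text || '|' || COALESCE(status, ''), ','
                      ORDER BY order_id)) AS fingerprint
FROM app.orders;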
Performance Tuning
Performance is generally viewed as one of the most tangible, and therefore, most important
characteristics that determines the perception and usability of your platform. Ensuring that your
migration is both accurate and performant is paramount to achieving success and cannot be
overlooked. More specifically, query performance is often considered the most critical indicator of
optimal database configuration and is commonly used as a litmus test by your users to determine
the state of health of your environment. Fortunately, the Azure platform natively incorporates the
tooling and capabilities needed to monitor performance points across a variety of metrics,
including scale, efficiency, and perhaps most importantly, speed. These Intelligent Performance
features work hand-in-hand with the Postgres monitoring resources to simplify your tuning
processes and, in many cases, automate these steps to adapt and adjust as needed.
The following Azure tools can ensure your database systems are operating at their very best levels.
Query Store
Query Store for Azure Postgres serves as the foundation for your monitoring features. Query Store
tracks the statistics and operational metrics from your Postgres database, including queries,
associated explain plans, resource utilization, and workload timing. These data points can uncover
long running queries, queries consuming the most resources, the most frequently run queries,
excessive table bloat, and many more operational facets of your database. This information helps
you spend less time troubleshooting by quickly identifying any operations or areas requiring
attention. Query Store provides a comprehensive view of your overall workload performance by
identifying:
•  Long running queries, and how they change over time.
•  Wait types affecting those queries.
•  Details on top database queries by Calls (execution count), by data usage, by IOPS, and by
   temporary file usage (potential tuning candidates for performance improvements).
•  Drill-down details of a query, to view the Query ID and history of resource utilization.
•  Deeper insight into overall database resource consumption.
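On Flexible Server, Query Store data is exposed through views in the azure_sys database. A sketch
of a starting query; the view and column names below follow the Azure documentation at the time
of writing and should be verified against your server version:

-- Connect to the azure_sys database, then:
SELECT query_id,
       LEFT(query_sql_text, 60) AS query_text,
       calls,
       mean_exec_time
FROM query_store.qs_view
ORDER BY mean_exec_time DESC
LIMIT 10;  -- longest-running statements first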
Index Tuning
Index tuning is a feature of Azure Database for PostgreSQL flexible server that can automatically
improve the performance of your workload by analyzing tracked queries and providing index
recommendations. It's natively built into Azure Database for PostgreSQL Flexible Server, and
builds upon Query Store functionality. Index tuning analyzes workloads tracked by Query Store and
produces index recommendations to improve the performance of the analyzed workload or to drop
duplicate or unused indexes. This is accomplished in three unique ways:
•  Identify which indexes are beneficial to create because they could significantly improve the
   queries analyzed during an index tuning session.
•  Identify indexes that are exact duplicates and can be eliminated to reduce the performance
   impact their existence and maintenance have on the system's overall performance.
•  Identify indexes not used in a configurable period that could be candidates to eliminate.
Intelligent Tuning
Intelligent Tuning is an ongoing monitoring and analysis process that not only learns about the
characteristics of your workload but also tracks your current load and resource usage, such as CPU
or IOPS. It doesn't disturb the normal operations of your application workload. The process allows
the database to dynamically adjust to your workload by discerning the current bloat ratio, write
performance, and checkpoint efficiency on your instance. With these insights, Intelligent Tuning
deploys tuning actions that enhance your workload's performance and avoid potential pitfalls. This
feature comprises two automatic tuning functions:
•  Autovacuum tuning: This function tracks the bloat ratio and adjusts autovacuum settings
   accordingly. It factors in both current and predicted resource usage to prevent workload
   disruptions.
•  Writes tuning: This function monitors the volume and patterns of write operations, and it
   modifies parameters that affect write performance. These adjustments enhance both
   system performance and reliability, to proactively avert potential complications.
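You can observe the autovacuum-related settings these functions adjust at any time (a simple
sketch):

SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'autovacuum%'
ORDER BY name;  -- current autovacuum configuration, including any tuned values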
Cloud Optimization
Optimization of your new Azure Postgres database environment signifies the culmination of all the
incredible effort and hard work that has led your team to arrive at this key point. Cloud optimization
may be a new responsibility, especially when coming from an on-premises or legacy database
environment. The Azure cloud platform introduces a new and enhanced set of valuable and
cutting-edge scalability features allowing your team to “dial-in” the precise allocation of resources,
features, and cost-efficiency to match your organizational needs today, and well on into the future.
Cloud Optimization is an ongoing process of continuous refinement to your environment as viewed
through the lenses of the best practices associated with the Microsoft Well-Architected Framework:
cost optimization, operational excellence, performance efficiency, reliability, and security.
Cost Optimization is a combination of right-sizing your resources, applying strategies for cost
management, and efficient resource utilization.
Operational Excellence includes the adoption of automation for deployments, monitoring, and
scaling, reducing error while increasing efficiency.
Performance Efficiency ensures you choose the appropriate resources to meet requirements
without over-provisioning, while also applying best practices for scalability to handle varying loads
efficiently during peak operational periods.
Reliability guides you towards designing highly available and fault-tolerant systems with
redundancy and failover mechanisms to minimize downtime, and disaster recovery strategies for
implementing robust recovery plans, including backup and restore procedures.
Security emphasizes the importance of strong identity protocols and access management
practices, such as least privilege access, password-less authentication, and role-based access
control. Data protection and encryption ensures sensitive data is protected both at rest and in
transit. Security also includes tools and best practices for threat-detection, and automated
responses to address security incidents promptly. Compliance ensures your environment
complies with industry standards and regulations.
Migration Best Practices
The following scenarios outline some of the potential challenges which have been encountered
during an Oracle to Azure Postgres migration. The experiences gained from our migration efforts
can be helpful when planning and executing your own migration(s).
Scenario: Two separate low-latency client applications were discovered to be
connected to the same database. Each system was inadvertently bumping the other's
cached queries out of the shared buffers. The shared load and combined resource
contention created a situation wherein the database’s shared buffers were being flushed
too frequently, resulting in degraded performance across both systems.
Recommended Solution: Ensure that your initial assessments are capturing ALL aspects of your
database platform environment, including memory consumption by both the system global area
(SGA) and program global area (PGA) memory structures. Select the appropriate family of compute
that matches your resource requirements and ensure your Postgres planned capacity is adjusted
as required.
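On the Oracle side, these footprints can be captured with queries such as the following (a sketch;
requires access to the v$ views):

-- Total SGA size:
SELECT ROUND(SUM(value) / 1024 / 1024) AS sga_mb FROM v$sga;

-- Configured PGA target:
SELECT ROUND(TO_NUMBER(value) / 1024 / 1024) AS pga_target_mb
FROM v$parameter
WHERE name = 'pga_aggregate_target';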
How-To: The pg_buffercache extension provides a means for examining buffer utilization
and allows you to observe what’s happening in the shared buffer cache in real time.
Buffer Cache Hit Ratio
Examining hit ratios allows you to evaluate cache effectiveness and determine whether shared
buffer size is appropriate. A good cache hit ratio is a sign that most data requests are being
served from memory rather than disk, providing optimal performance:
SELECT COUNT(*) AS total
, SUM(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty -- # of buffers out of sync with disk
, SUM(CASE WHEN isdirty THEN 0 ELSE 1 END) AS clean -- # of buffers in sync with data on disk
FROM pg_buffercache;
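The counts above show buffer state; the cache hit ratio itself can be derived from the statistics
views (a sketch using pg_statio_user_tables):

SELECT ROUND(SUM(heap_blks_hit)::numeric
             / NULLIF(SUM(heap_blks_hit) + SUM(heap_blks_read), 0), 4) AS cache_hit_ratio
FROM pg_statio_user_tables;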
Most Frequently Accessed Tables & Indexes
Examining which tables and indexes are most frequently accessed and/or occupying the most
space in the buffer cache can help identify hotspots being cached in memory:
SELECT c.relname
     , c.relkind -- r = ordinary table, i = index, S = sequence, t = TOAST table, v = view,
                 -- m = materialized view, c = composite type, f = foreign table,
                 -- p = partitioned table, I = partitioned index
     , COUNT(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
GROUP BY c.relname, c.relkind
ORDER BY buffers DESC
LIMIT 10;
Buffer Cache Contention
Significant contention in your buffer cache indicates queries might be fighting for the same
buffers, leading to performance bottlenecks. Examining the location and frequency of buffer
access can help in diagnosing such issues:
SELECT c.relname, b.relblocknumber, COUNT(*) AS access_count
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
GROUP BY c.relname, b.relblocknumber
ORDER BY access_count DESC
LIMIT 10;
Scenario: A migration effort was initiated between Postgres platform releases, spanning
both versions. Despite new features and improvements being available in the latest
release, the version identified at the outset was kept to complete the migration.
Subsequent additional effort, time, and expense were required to upgrade the Postgres
database following the initial migration in order to achieve optimal performance and new
capabilities.
Recommended Solution: Whenever possible, prioritize the adoption of the latest release version
of Postgres when migrating. The Postgres community dev teams work incredibly hard to squeeze
every bit of performance and stability into each new release, and holding back potentially
translates to leaving performance on the sidelines. Additionally, take full advantage of new Azure
features, including SSDv2 storage, the latest server family of infrastructure, and automated index
tuning and autonomous server parameter tuning capabilities.
Scenario: Organizations migrating to Postgres for the first time may be
unfamiliar with best practices and approaches when identifying slow-running
queries, implementing new index types appropriately, and operating
within a database engine environment which does not require query hints.
Recommended Solution: Extensions are an integral part of what makes Postgres so powerful.
There are several extensions that can provide important features enabling you to ensure your
database is operating at peak performance. Some key extensions to consider include:
auto_explain: automatically logs execution plans for queries which run beyond a set threshold. Allows
database administrators to diagnose performance issues and optimize query performance without
manually running EXPLAIN on each query.
pg_trgm: provides functions and operators for determining the similarity of text-based data by way of trigram
matching. This extension is useful for tasks involving text search, fuzzy matching, and similarity-based
queries. Combined with GIN or GiST indexes on text columns, it offers improved performance on LIKE queries
and similarity searches.
pg_cron: allows for scheduling and management of periodic tasks directly within the database. Integrates
cron-like job scheduling into Postgres enabling the automation of routine maintenance tasks, data
processing, and similar repetitive operations.
Edge Case Example: if your database operations involve a significant amount of repeated creation and deletion of
database objects, dead tuples will accumulate in the pg_catalog system tables, leading to catalog “bloat”. As
pg_catalog is involved in many database operations, leaving this bloat unmitigated can result in degraded
performance across the database. Configuring a pg_cron schedule can ensure that pg_catalog is adequately
maintained and appropriately vacuumed.
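A sketch of such a schedule (pg_cron must first be enabled on the server; the job name and timing
are illustrative):

CREATE EXTENSION pg_cron;
-- Vacuum and analyze a frequently churned catalog table every night at 03:00:
SELECT cron.schedule('catalog-vacuum', '0 3 * * *',
                     'VACUUM ANALYZE pg_catalog.pg_attribute');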
pg_hint_plan: Postgres aims to provide consistent and reliable performance without the need for manual
intervention, resulting in the intentional design decision to not include query hints. For some scenarios
where specific and precise control over query plan designs are needed, pg_hint_plan provides a way to
influence the query planner’s decisions by using hints embedded into SQL comments. These hints allow
database administrators to guide the query planner to choose specific plans in order to optimize complex
queries or address performance issues that the planner may not be able to handle on its own.
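For example, a hint embedded in a comment can force a specific index (a sketch; the table and
index names are hypothetical):

/*+ IndexScan(orders orders_status_idx) */
SELECT order_id, status
FROM orders
WHERE status = 'OPEN';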
These examples are just scratching the surface of the incredibly vast set of extensions available to
your Postgres database. We encourage you to fully explore these extensions as an opportunity to
supercharge your Postgres database and to consider the possibility of authoring your own
extensions where you see the potential to expand Postgres beyond its current capabilities. The
powerfully flexible extension architecture ensures that Postgres will always be able to adapt and
evolve with your platform requirements.
Scenario: In some instances, table partition strategies resulting in the creation of
thousands of partitions can slow query performance when the query planner is unable
to determine the partition key when parsing the query. The resulting behavior of
extended planning time causes query planning to take longer than the actual query
execution.
Recommended Solution: Re-evaluate the need for excessive partitioning strategies. The Postgres
database engine may no longer require the same structuring of data, and reducing the number of
partitions will likely improve performance. If the partitioning scheme is assessed to be required,
consider restructuring your query into discrete operations: first identify and extract the dynamic
partition keys, then use those keys in your subsequent operations, as sketched below.
Scenario: At times, external dependencies and environmental circumstances may
require hybrid database scenarios where both Oracle and Azure Postgres databases
need to coexist. There may be occasions where phased migrations are necessary to
access and query Oracle data directly from Azure Postgres without the overhead of
importing data or modifying complex ETL processes. In other instances, performing parallel data
validation by comparing equivalent datasets in both Oracle and Azure Postgres environments
simultaneously can help ensure data consistency and integrity during/after your migration.
Recommended Solution: PostgreSQL Foreign Data Wrapper (FDW) extensions are a key Postgres
feature allowing you to access and manipulate data stored in external systems as if that data
resided natively within your Azure Postgres database. FDWs enable Azure Postgres to function as a federated
database, allowing integration with any number of external data sources, including Oracle
databases. FDWs create foreign table definitions within your Postgres database and these foreign
tables act as a proxy for your defined external data source allowing users to query these foreign
tables using regular SQL queries. Internally, the Postgres engine uses the external FDW definition
to communicate with and coordinate data on demand from the remote data source.
oracle_fdw: (Foreign Data Wrapper for Oracle) is a Postgres extension that allows you to
access Oracle databases from within Azure Postgres. When migrating from Oracle to
Azure Postgres, oracle_fdw can play a crucial role by providing data access, data
validation, incremental migration, and real-time data synchronization. It is important to
keep in mind the following key considerations when using FDWs:
•  Running queries through oracle_fdw will incur overhead in the form of network
   communications and authentication negotiation while the data is processed and fetched
   from the remote Oracle server.
•  Some data types may need special handling or conversion to ensure data types are correctly
   mapped between systems.
Effectively using oracle_fdw can potentially help in simplifying the database transition and ensuring
data accessibility by enabling your applications and data to remain accessible throughout the
overall migration process.
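A sketch of a minimal oracle_fdw setup (connection details, credentials, and table definitions are
hypothetical):

CREATE EXTENSION oracle_fdw;

CREATE SERVER ora_src FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//orahost:1521/ORCLPDB1');

CREATE USER MAPPING FOR CURRENT_USER SERVER ora_src
    OPTIONS (user 'app_reader', password '********');

CREATE FOREIGN TABLE ora_orders (
    order_id NUMERIC,
    status   TEXT
) SERVER ora_src OPTIONS (schema 'APP', table 'ORDERS');

-- Queried like any local table, e.g. for parallel row-count validation:
SELECT COUNT(*) FROM ora_orders;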
Migration Planning Checklist
The following questions are intended to help you identify and record the compute resources
required to maintain operational database performance. This information is critical when planning
your migration as well as discussing migration plans with your selected migration partner(s).
1. Workload Consistency

Question                                                                      Answer
Does my database environment utilization change throughout the day?
If Yes, which time(s) of day does activity peak?
How long is the sustained peak activity period?
Does my database environment utilization change throughout the week?
If Yes, which day(s) of the week does activity peak?
How long is the sustained peak activity period?
Does my database environment utilization change throughout the month?
If Yes, which time(s) of the month does activity peak?
How long is the sustained peak activity period?
Does my database environment utilization change throughout the year?
If Yes, which month(s) of the year does activity peak?
How long is the sustained peak activity period?

2. Workload Type

Question                                                                      Answer
Is my workload read-intensive, write-intensive, or a hybrid of both?
Are there any inbound or outbound linked database connection dependencies?
Does my database use any file system mounts for data operations?
How many dependent client applications are connected to my database?

3. Peak Operational Periods

During peak operational periods…

Question                                                                      Answer
How many cores are being utilized?
What is the percentage of core utilization?
What is the maximum amount of memory used by SGA?
What is the maximum amount of memory used by PGA?
What are the maximum IOPS required?
What are the maximum storage throughput speeds?
What is the maximum required network speed?
What is the maximum number of concurrent active database connections?
Partner Solution Providers (alphabetically ordered)
As we have reviewed above, a successful Oracle to Azure Postgres migration involves the effective
completion of a number of essential activities. Fortunately, there are several solution providers
within the Microsoft Partner network and the larger solutions industry who all offer powerful tools
to assist with achieving your goals. This reference is intended to help identify and recommend the
ideal set of service providers and solutions during your Azure Postgres adoption journey.
These platforms can be primarily categorized into two groups based upon their implementation
philosophies: software companies who offer implementation services, and services
implementation companies who have developed software to maximize their efficiency. This is a
subtle, yet important detail that should be evaluated depending upon what approach best fits your
organization’s environment.
Software Solutions
•  ATROPOSS
•  CYBERTEC
•  Datometry
•  Ora2Pg
•  Spectral Core
•  Splendid Data
•  Striim

Services Solutions
•  Newt Global
•  Ispirer
•  Liberatii
•  QMigrator
ATROPOSS – Open Source Solution Approach
ATROPOSS Documentation
atroposs is a browser-based discovery tool enabling customers to explore their solution stacks in
a secure, scalable, and non-disruptive way.
Faced with the question, “How can you change the target architecture without disrupting essential
business operations?”, we developed atroposs as an open-source, browser-based discovery tool
designed to help you map your solution stacks, ensuring a secure and non-disruptive transition to
a Microsoft Azure environment.
atroposs offers a suite of advanced modules:

•  AWR Module: This module parses AWR and Statspack reports at level 7, with RVTool support for
   databases deployed on an ESX cluster.
•  Oracle Schema Discovery Module: Currently under development, this module is replacing its
   JSON parser with a Rust parser to discover memory-optimized schemes. A free containerized
   version of the standard edition is available.
•  Applications Discovery Module: Based on CodeCharta, this module is designed for due diligence
   and cloud readiness assessments, employing SonarQube.
•  SQL Server Module: This module facilitates the evaluation of on-prem SQL servers for optimal
   transition to Azure SQL PaaS services.
•  Post Migration Module (DbOPS): This module aids in documenting and implementing online
   schema changes in databases, leveraging Skeema and GHOST.
ATROPOSS is built on Angular, supporting module development and integration in JavaScript or
WASM Python. We're also developing a WASM RUST module. To aid in integration, we've prepared
templates available via GitHub. Shunning the "black box" approach common to licensed solutions,
atroposs gives you transparency and flexibility.
In modernization engagements where customers are managing hundreds to thousands of
databases with nearly as many applications, a scalable process architecture framework is
required. For this, we have developed an atroposs-based process named AIMM.
Cortex by Splendid Data
Cortex - Migrate your Oracle® databases to PostgreSQL with ease - Splendid Data
The most advanced product for migrating Oracle databases to PostgreSQL in a highly
automated way. Cortex is the most sophisticated product available in the market to perform
automated migration of your Oracle databases to native PostgreSQL. The experts who built and
continuously improve Cortex have in-depth technical know-how of both Oracle databases and
PostgreSQL, as well as parser and compiler techniques.
Translation of the Oracle database Data Objects and Code Objects (PL/SQL to PL/pgSQL) is done
semantically in relation to their context (including dependencies throughout the PL/SQL code). All
other products in the market are mainly based on “find and replace” without any dependency
checks throughout the PL/SQL code, which makes their overall automatic translation unreliable
and inefficient, and essentially inappropriate for setting up a larger-scale Oracle to PostgreSQL
migration program/project.
Benefits:

•  Effectiveness – The more Oracle Data and Code Objects can be migrated automatically to
   native PostgreSQL, the better. It saves time and money, lowers risk, and accelerates Oracle
   to PostgreSQL migration workloads.
•  Efficiency – Higher productivity with far fewer resources.
•  Quality – The code produced by automated tools is by definition better than that produced
   manually.
•  Factory-based approach – A standardized and repeatable process.
CYBERTEC Migrator
CYBERTEC MIGRATOR (cybertec-postgresql.com)
CYBERTEC offers migration services for organizations seeking to move from Oracle to Azure
Database for PostgreSQL. CYBERTEC offers a simple and user-friendly migration tool called
CYBERTEC Migrator, which supports large enterprises in an efficient and structured migration from
Oracle databases to PostgreSQL. CYBERTEC also offers consulting and support services during the
migration process, including assessing data quickly and providing immediate feedback to avoid
communication failures, expensive delays, and wrong approaches. In terms of PostgreSQL
products, CYBERTEC offers PostgreSQL Enterprise Edition (PGEE), a cost-efficient and
superior enterprise database solution. PGEE includes 24/7 enterprise support, bug fixes and patch
installs within PostgreSQL and the CYBERTEC extended toolchain, performance tuning, extended
troubleshooting, and monthly consulting hours. It also includes assisted monitoring, discounted
training sessions, and a dedicated contact person. CYBERTEC PGEE is available for all major
platforms, including Red Hat Enterprise Linux (RHEL), Debian, Ubuntu, Docker, Kubernetes, and
OpenShift.
CYBERTEC's CYPEX is a perfect alternative to APEX and Oracle Forms, offering a way out of
exorbitantly high license costs. CYPEX offers a user-friendly interface to streamline processes,
visualize workflows, and provide easier access to data. Getting started with CYPEX is easy and can
be summarized in three simple steps: import the data structure, define metadata, and create apps.
The tool also offers a handbook and a collection of video tutorials covering every CYPEX feature.
Overall, CYBERTEC provides a range of services and products to assist with Oracle to Azure
Postgres migrations, including migration tools, consulting and support services, and PostgreSQL
products like PGEE and CYPEX.
DMAP by Newt Global
Database Migration Solution: Migrate Databases with DMAP (newtglobal.com)
DMAP is a comprehensive solution designed to accelerate Oracle to Azure PostgreSQL migrations.
It provides end-to-end support across every phase of the migration lifecycle, including database
and application assessment, schema conversion, data migration, and embedded SQL remediation.
DMAP is also integrated with GenAI and GitHub Copilot to further accelerate the Oracle migration
and reduce the overall effort and time required for the migration by up to 90%, which saves
resources while ensuring accuracy and consistency.
Key Features:
Schema Assessment: DMAP automatically identifies all database schemas, eliminating the need for
manual listing. It evaluates code objects (e.g., stored procedures), storage objects (tables, indexes),
and special objects (BLOB/CLOB), and estimates conversion effort. DMAP also detects
Oracle-specific constructs (global variables, GOTO statements, advanced queues) to highlight migration
complexity. It identifies internal and external schema dependencies, streamlining conversion
sequencing, and can assess thousands of schemas simultaneously, making it ideal for large
databases.
Database Assessment: DMAP evaluates Oracle server configurations and performs automated
AWR trend analysis, providing insights into infrastructure utilization (CPU, IOPS, throughput) and
identifying high-impact SQL queries. Its detailed analysis of data distribution, storage
characteristics, and database partitions informs optimal sharding and migration strategies,
producing tailored recommendations for target architecture on Azure PostgreSQL. DMAP also
assesses licensing features to estimate costs and generates a business case report comparing the
total cost of ownership (TCO) between on-prem Oracle and Azure PostgreSQL.
Schema Conversion & Validation: DMAP natively automates schema conversion with up to 80%
reduction in effort—far exceeding typical conversion rates (e.g., 50% with standard tools). Special
Oracle constructs, such as global variables and GOTO statements, are seamlessly handled, and
post-migration validation ensures that all objects function as expected. Conversion tips and
mapping reports help address the conversion of partially converted constructs. Integration with
GenAI & Copilot further enhances the automation capability by up to 90%.
Data Migration & Validation: An integrated dashboard provides end-to-end visibility into the data
migration process, supporting multi-schema migrations with a multi-threaded configuration for
optimal performance. Automated physical and logical validations verify successful migration,
confirming row counts and data integrity.
Application Assessment & Remediation: DMAP evaluates applications for embedded SQL that
require conversion to PostgreSQL compliant syntax and identifies necessary datatype changes.
Certified for large applications with millions of lines of code, it identifies and remediates both static
and dynamic SQLs including non-standard ORMs, nested SQL loops, and custom SQL methods. A
CLI utility enables quick ROM estimation for embedded SQLs without needing DMAP installation,
speeding up project scoping.
Additional Features: DMAP offers a containerized deployment option, a user-friendly GUI, and a
centralized dashboard for managing the migration lifecycle. Its analytics retrieve insights from
worker nodes to aid decision-making, and it supports backup and restore for converted schemas
and data.
DMAP's robust automation, scalability, and end-to-end support make it an indispensable tool for
enterprises aiming to modernize Oracle workloads on Azure PostgreSQL.
Hyper-Q by Datometry
Hyper-Q – Datometry
Datometry is a software platform that enables existing applications to work with modern database
technology, making it possible to migrate from Oracle to Azure Database for PostgreSQL without
changing a line of SQL. The platform eliminates the need for time-consuming, expensive, and risky
database migrations, enabling you to de-risk and accelerate your database migration. With
Datometry, you can break free from vendor lock-in quickly, creating substantial savings in total cost
of ownership (TCO) and exceptional return on investment (ROI).
Datometry Hyper-Q is the world's first software platform that makes existing applications work with
modern database technology, so you don't have to change a line of SQL. It transforms all
statements and data in real time for a hassle-free, fast, and cost-effective migration. With its detailed
Migration Cost and TCO Calculator, you can discover and account for all the hidden costs that
could otherwise sink your migration project. Datometry makes existing applications built and
configured for legacy systems work on the next generation of databases without changing a line of
code. Datometry Hyper-Q also offers pre-built connectors for popular applications like Tableau,
Qlik, and Power BI, making it easy to connect and migrate data. Additionally, it provides a
comprehensive set of tools for managing data quality, data profiling, and data integration. These
tools help ensure that your data is accurate, consistent, and complete before, during, and after the
migration process.
Overall, Datometry can be one of the fastest and most comprehensive ways to move off legacy
systems and modernize your data infrastructure. It reduces migration time and cost by 4x and the
cost by about 80% compared to a conventional migration. With Datometry Hyper-Q, you can
implement a multi-project architecture that would have been close to impossible to build
otherwise. Datometry is trusted by leading enterprises and partners and is the world's first platform
that eliminates the need for time-consuming, expensive, and risky database migrations.
HVR/Fivetran
Fivetran | Automated data movement platform
Fivetran is a platform that offers data movement services, which can be used to migrate data from
one database to another. Fivetran's connectors support over 500 different data sources. Once the
databases are connected, Fivetran will automatically extract the data from the Oracle database
and load it into the Postgres database. To ensure that the migration is successful, it is important to
configure the replication settings correctly. Fivetran offers different replication methods, including
log-based CDC, timestamp-based CDC, trigger-based CDC, and log-free CDC. These methods
vary in terms of their impact on the source database, the accuracy of the replication, and the
latency of the data. Fivetran will automatically handle any schema changes and data
transformations that are necessary to ensure that the data is loaded correctly into the Postgres
database.
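As context for those trade-offs, trigger-based CDC records each change into a change-log table on
the source database, which adds write overhead but needs no access to the transaction logs. The
sketch below is a generic illustration of the technique on an Oracle source (the table and trigger
names are hypothetical), not Fivetran's implementation:

-- Change-log table that records which rows changed and how
CREATE TABLE orders_changelog (
  change_id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  order_id   NUMBER,
  operation  VARCHAR2(1),                 -- 'I', 'U', or 'D'
  changed_at TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- Row-level trigger that populates the change log on every DML statement
CREATE OR REPLACE TRIGGER trg_orders_cdc
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    INSERT INTO orders_changelog (order_id, operation) VALUES (:NEW.order_id, 'I');
  ELSIF UPDATING THEN
    INSERT INTO orders_changelog (order_id, operation) VALUES (:NEW.order_id, 'U');
  ELSE
    INSERT INTO orders_changelog (order_id, operation) VALUES (:OLD.order_id, 'D');
  END IF;
END;
/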
Fivetran also offers a variety of tools for testing and validating the data, including data profiling,
data quality monitoring, and data lineage tracking, making it a valuable option for migrating data
from Oracle to Postgres. With the databases connected and the replication settings configured
correctly, Fivetran automates the extraction of data from Oracle and the loading of that data into
Postgres, and its testing and validation tools give users confidence that the migration has been
successful and that the data is accurate and complete.
Ispirer Solutions
Ispirer provides a comprehensive solution for database migration, designed to handle any type of
migration regardless of the location of the target database.
Ispirer offers an Assessment Wizard whose reports provide an accurate count of SQL objects and
their lines of code, along with an approximate code-complexity level that correlates closely with
the automation capabilities of Ispirer Toolkit. The Ispirer technical team can assist with report
analysis to identify the places in the code that will require manual effort and those that can be
resolved by tool customizations.
Ispirer Toolkit is a command-line tool (with an optional GUI) that supports migration of tables with
data and all types of SQL objects: procedures, packages, functions, triggers, and views. Conversion
of constraints, synonyms, indexes, sequences, primary and foreign keys, and more is supported as
well.
Because it uses an ODBC connection, the maximum data-transfer speed for an Oracle to
PostgreSQL migration is up to 60 GB per hour. The current version of Ispirer Toolkit has no
replication/CDC feature, but one is on the roadmap for 2025-2026.
The code migration can be performed either via an ODBC connection to the source database or by
selecting files/scripts with SQL code for the migration.
The main advantage of Ispirer Toolkit is its ability to be customized to a specific project to automate
the migration process as much as possible. A short explanatory video is available at
https://www.youtube.com/watch?v=0GY72r_LVa8.
Code conversion can also be performed by the Ispirer team as a professional services engagement
(fixed time, fixed price, based on an analysis of the source code), upon completion of which the
customer receives a pre-tested target database. For this type of engagement, the Ispirer team
requires access to the source database schema.
Ispirer Toolkit supports migration of embedded SQL statements within application code (Java, C#,
etc.), and the same can be delivered by the Ispirer team as professional services.
Tuning and optimization of converted queries or of overall database performance can likewise be
handled by the Ispirer team as professional services on a time-and-materials (T&M) basis.
Liberatii Gateway
Gateway configuration — Liberatii Gateway
Liberatii is a platform that can be used to migrate Oracle databases to Azure Database for
PostgreSQL. The migration process involves several steps, including technical assessment,
database migration, and post-migration tuning and troubleshooting. Before beginning, it is
important to assess the Oracle database for compatibility with Liberatii Gateway. The Liberatii
Assessment extension for Azure Data Studio does this by querying the database or SQL files: it
parses SQL statements using the Liberatii Gateway parser, maps statements to the database
objects they reference, and scores each statement based on its similarity to a set of pre-tested
statements. This information can be recorded in a series of JSON files that contain metadata about
the database schema and minimized SQL statements.
Once the database has been assessed and deemed compatible with Liberatii, the migration can
begin. During the technical assessment phase, the technical team performs a detailed analysis of
the existing Oracle database, reviewing the schema, database objects, data types, constraints, and
indexes. The migration team then configures the Azure environment and migrates the database
schema and data to Azure Database for PostgreSQL, working through steps such as configuration,
initialization, schema migration, data migration, verification, database synchronization, and
testing. Once the migration is complete, the team performs post-migration tuning and
troubleshooting: monitoring the migrated database to ensure it performs as expected, manually
translating tables where needed, tuning queries, and resolving any issues that arise.
Ora2Pg
Ora2Pg : Migrates Oracle to PostgreSQL (darold.net)
Ora2Pg is a free tool used to migrate an Oracle database to a PostgreSQL-compatible schema. It is
easy to use and doesn't require any Oracle database knowledge other than providing the
parameters needed to connect to the Oracle database. Ora2Pg consists of a Perl script (ora2pg)
and a Perl module (Ora2Pg.pm), and the only thing you have to modify is the configuration file
ora2pg.conf by setting the DSN to the Oracle database and optionally the name of a schema. Once
that's done, you just have to set the type of export you want: TABLE with constraints, VIEW, MVIEW,
TABLESPACE, SEQUENCE, INDEXES, TRIGGER, GRANT, FUNCTION, PROCEDURE, PACKAGE,
PARTITION, TYPE, INSERT or COPY, FDW, QUERY, KETTLE, SYNONYM.
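For example, a minimal ora2pg.conf for exporting one schema's tables might look like the following
(the connection string, credentials, and schema name are placeholders):

# Connection to the source Oracle database
ORACLE_DSN    dbi:Oracle:host=oracle.example.com;sid=ORCL;port=1521
ORACLE_USER   system
ORACLE_PWD    manager

# What to export and where to write the generated DDL
SCHEMA        SALES
TYPE          TABLE
OUTPUT        output.sql

Running ora2pg -c ora2pg.conf then produces the PostgreSQL DDL, and ora2pg -t SHOW_REPORT
--estimate_cost -c ora2pg.conf generates a migration assessment report with a cost estimate.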
QMigrator by Quadrant Technologies
Top data migration and cross-database migration platform - QMigrator | Quadrant Technologies
QMigrator, developed by Quadrant Technologies, is a state-of-the-art migration platform
specializing in Oracle to Azure PostgreSQL migrations. It manages all phases of cross-database
migrations, including client onboarding, database assessment, migration planning, schema, code
and data migration, validation, functional and performance testing, performance tuning, database
administration, production cutover planning and execution, application remediation and cloud
optimization.
Security: QMigrator is hosted within the client’s environment, ensuring the protection of both code
and data. It incorporates robust security measures, including data masking and obfuscation, to
prevent unauthorized access to sensitive information.
Autonomous Execution: Once set up for an Oracle to Azure PostgreSQL migration, QMigrator can
autonomously manage the entire migration process. It connects to source reference databases,
source production databases, and target databases across different environments. The platform
highlights tasks requiring manual intervention and resumes operations once dependencies are
resolved. For smaller, non-complex databases, QMigrator can perform end-to-end migrations in a
single run. However, it is recommended to conduct multiple iterations of migration and cutover in
lower environments to test and customize tasks until all migration objectives are met.
Performance and Efficiency: QMigrator excels in automated code translations for heterogeneous
database migrations and scalable data migration, achieving speeds of up to 10 TB per day for initial
loads and 1 TB per day for change data capture (CDC). Its automated functional and performance
testing features significantly reduce migration time from months to weeks. The delta process
module efficiently integrates production release changes into target databases during migration.
Expert Support: Despite its autonomous capabilities, QMigrator benefits from expert monitoring
and timely remediation due to the unique complexities of each migration. Quadrant Technologies’
engineers, with extensive experience in migrations, PostgreSQL performance tuning, and database
administration, provide invaluable support. Their expertise ensures that all nuances of the
migration and cutover processes are effectively addressed.
Utilizing QMigrator and professional services from Quadrant Technologies enables organizations
to achieve their migration and modernization goals quickly and cost-effectively. This combination
ensures a smooth transition with minimized risks and optimized performance.
Spectral Core
SQL Tran (https://sqltran.com)
Omni Loader (omniloader.com)
SQL Tran is a next-generation database code translation solution. Based on a revolutionary
Hummingbird engine capable of parsing over 5 million lines of code per second, the platform can
fully translate a million-plus-line SQL codebase in under 60 seconds. The whole object lifecycle is
handled in-app, from parsing to final testing.
Assessment is performed by running a full translation and then displaying statistics and listing the
outstanding issues encountered during the translation. Most codebases are completely translated
in under a second.
Translation from source to target SQL dialect transforms the code into equivalent structure
idiomatic for the target database engine. Code constructs with no equivalent target functionality
are clearly marked for manual rewrite (which is also done in-app).
A powerful testing framework ensures that business logic works in the same way. Static analysis is
used to intelligently track procedure and function parameter usage and generate thousands of
tests, including full data lineage for easy impact analysis. Rich customization functionality allows
for easy overriding of the default rules for schema, object name, column name, and data type
mapping.
It is important to mention that SQL Tran does not use AI for translation. Instead, its powerful and
fully deterministic engine does a reliable job of translating even the most complex codebases in
record time and with unmatched accuracy.
SQL Tran is deployed in a customer’s Azure tenant and requires no external access. Customer
resources, data, and metadata never leave your tenant.
SQL Tran is available on Azure Marketplace in both pay-as-you-go and bring-your-own-license
variants. Integration with Omni Loader enables an extremely powerful data + code migration
package.
Omni Loader is an advanced database migration tool offering unmatched performance out of the
box. It significantly reduces risk in large migration projects by abstracting the complexity away,
thanks to a fully self-tuning engine and the ability to handle even 100 TB+ databases effortlessly.
A key feature is its simplicity for the user, which goes a long way toward ensuring projects finish
with no surprises.
There are over 30 databases supported, everything from files (CSV/Parquet) to relational databases
and cloud data warehouses. Any source database can be copied to any supported target database.
The user workflow is simple and always the same, regardless of the completely different ways the
software approaches data migration under the hood (depending on database types, versions, and
table structure). Once you connect to your source and target databases, Omni Loader is ready to
copy your whole database, even if it contains more than 100,000 tables.
Performance is excellent out of the box, with typical throughput in the millions of records per
second, even going from on-premises to the cloud. The key ingredient in achieving this level of
performance is a highly optimized, massively parallel migration engine, built from the ground up.
It automatically scales vertically to use all CPU cores, optimally slicing individual tables and
processing the slices in parallel.
There is built-in recovery/resume, which is especially helpful on flaky networks. Rich
customization functionality allows for easy overriding of the default rules for schema, object name,
column name, and data type mapping. Data can be automatically verified after conversion to
ensure there are no differences between source and target tables.
Omni Loader can be run in three modes:
- Standalone executable with UI via browser
- Console, where the job is specified via a JSON file
- Distributed migration cluster, for the largest jobs (you probably don’t need this)
Case Studies
Technical Story
Every day, more than 1 million travelers rely on the timely services of ÖBB—Austrian Federal
Railways—Austria’s largest mobility services provider. As one of the most punctual train services
in Europe, ÖBB prides itself on efficiency and great customer service by offering a range of mobility
solutions that get people and goods where they need to go. The Ticketshop platform on Azure is an
example of innovation in efficiency for ÖBB customers, who use it like a ticket counter, travel guide,
and travel agency all rolled into one.
The idea of migrating 11 TB of Oracle data made the project daunting in its scope, and ÖBB could
not afford downtime for this critical public service. “This project required long-term planning,” Lipp
observes. “We knew that it wouldn't be a good idea to just move Ticketshop to the cloud. So we
decided to start with a smaller system and migrate it to Azure.”
The IT team at ÖBB saw an opportunity to take a new
approach to Ticketshop using the Ticketshop VVT–branded version to plan journeys aboard the
local train, bus, and tram routes. The VVT tenant stores hundreds of gigabytes (GB) of data, making
it substantially smaller than ÖBB Ticketshop (the tenant used by ÖBB) and a better target for a move
to Azure.
As Lipp recalls, “We used the VVT project to learn how to go into the cloud. It was fast, and it taught
us a lot about Azure.”
The first challenge faced by the team was to separate Ticketshop VVT services from the Oracle
servers and other operations that took place in ÖBB’s datacenter. ÖBB Ticketshop, the main tenant,
ran on-premises, where the software for all application services was built and delivered using the
RPM Package Manager (RPM). To make the app’s architecture more portable, the IT team
containerized the app and used Kubernetes, the popular open-source container orchestrator, to
manage the cluster.
From lift-and-shift to managed services for 11 TB of data
Like many projects, the Ticketshop migration and deployment evolved over time. Initially, the IT
team chose a simple lift-and-shift of the data tier. The team moved the on-premises databases
containing all credit card transactions and other critical data to Azure Virtual Machines.
However, the licensing fees for the huge Oracle servers began to add up. Since the license was up
for renewal, the team looked for another database solution. The first choice was PostgreSQL, an
open-source database known for its ability to store large, sophisticated datasets, such as the JSON
data used by ÖBB. The PostgreSQL PL/pgSQL language is also similar to the Oracle PL/SQL
language, which makes it easier for developers to port applications.
The team set up a test environment on-premises, and PostgreSQL was a success. The only
downside was that it required more operational overhead than desired. Azure Database for
PostgreSQL provided the solution—a managed database service that runs fully compatible
Postgres and supports the latest versions and popular PostgreSQL extensions.
First, the team moved Ticketshop VVT from an Oracle database with hundreds of gigabytes of data
to Azure Database for PostgreSQL. After extensive performance testing, the team was ready to
tackle the much larger ÖBB data tier with terabytes of data.
Flexible Server also delivers a high-availability configuration with automatic failover capability
using zone-redundant server deployment. As Pongratz notes, “With this architecture, even if a zone fails,
that's also something that Flexible Server fixes.”
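For reference, a zone-redundant Flexible Server like the one described here can be provisioned
with the Azure CLI; the resource names, region, and sizes below are illustrative only:

az postgres flexible-server create \
  --resource-group ticketshop-rg \
  --name ticketshop-db \
  --location westeurope \
  --tier GeneralPurpose \
  --sku-name Standard_D4ds_v4 \
  --storage-size 1024 \
  --high-availability ZoneRedundant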
“All our services were highly optimized for performance on
the Oracle servers,” Pongratz says. “Flexible Server saved
the project and enabled us to go live.”
To move the data from Oracle to Azure Database for
PostgreSQL, the Microsoft advisors suggested a live
migration. A live migration keeps all the environments in
sync, with no downtime, while gradually replacing the
Oracle-based data tier architecture with Azure Database for PostgreSQL. The team uses Striim, a
streaming data integration tool, to capture changes in the source database in real time while it
builds out the Azure environment.
“We can seamlessly switch between Postgres in the cloud and Oracle on-premises, and there is no
data loss,” Lipp reports. “This is an extremely powerful migration approach.” The migration is
scheduled to be completed at the end of the year.
ÖBB continues to work closely with Microsoft on database optimizations while running a staggered,
hybrid deployment and operating a round-the-clock service for customers. Some Ticketshop
services continue to run on-premises as the teams work through the data dependencies.
As Huss notes, “With Microsoft’s experience, we could overcome project difficulties. I see a strong
partnership that we can rely on, so we are in a good position for future challenges.”
Contact Us
Ready to get started and want to discuss next steps?
Are you interested in speaking with someone from your account team?
Do you have any questions regarding how to engage with a solution provider?
Reach out and contact us here: Migrations PM