Overview

Enterprises are moving from commercial databases such as Oracle and MS SQL Server to purpose-built, cloud-native NoSQL databases such as Amazon DynamoDB to achieve high performance, availability, and scale at a lower TCO. Migration from commercial databases requires capabilities for complex heterogeneous schema conversion, modification of incompatible SQL queries, data replication and transformation, and migration of PL/SQL source code.
AWS provides tools such as the AWS Schema Conversion Tool (SCT) and the AWS Database Migration Service (DMS) to accelerate database migration from commercial databases to DynamoDB. A few of the best practices and key learnings that help de-risk database migrations are listed below:

The Key Learnings and Best Practices

  • Leverage single-table design to model the data using composite keys and indexes to support different relational data access patterns such as one-to-one, one-to-many, and many-to-one.
  • Use global secondary indexes (GSIs) to support additional data access patterns required by business functionality.
  • Handle complex consistency and transactional requirements at the application level; DynamoDB's transactions and strongly consistent reads are more limited than those of a relational database.
  • Remove unused Amazon DynamoDB tables or unnecessary data using TTL to optimize AWS cost.
  • Leverage reserved capacity instead of on-demand capacity in provisioned capacity mode when transaction traffic is predictable.
  • Use shorter attribute names to reduce the amount of storage required for your data.
  • Grant a limited set of administrative permissions, such as CreateTable, DeleteTable, and CreateBackup, on DynamoDB using IAM policies.
  • Control database authentication and data-plane access, such as GetItem, PutItem, Scan, and Query, using IAM policies.
  • Ensure DynamoDB data is secured at rest using customer or AWS-managed KMS keys.
  • Enable monitoring of VPC Flow Logs and CloudTrail logs with custom logic to identify anomalies.
  • Configure CloudWatch rules and GuardDuty findings for automated notifications and remediations.
  • Configure AWS CloudFormation drift detections to monitor configuration changes against baseline settings and send alerts.
  • Ensure on-demand and continuous backup and restore are enabled on the DynamoDB table.
  • Define indexes on the table carefully to avoid table scans and filters in DynamoDB.
  • Identify peak data access patterns to organize data using partitions effectively. DynamoDB scales by increasing the number of partitions that are available to process queries.
  • Use eventual consistency instead of strong consistency for read operations where the application can tolerate slightly stale data.
  • Distribute reads/writes uniformly across partitions to avoid hot partitions.
  • Store hot and cold data in separate tables.
  • Identify sort keys as per the business data access patterns.
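As a concrete illustration of the IAM guidance above, the policy below grants an application role only the data-plane actions it needs on a single table. The account ID, region, and table name are placeholders; the action names are standard DynamoDB IAM actions.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AppDataPlaneAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable"
    }
  ]
}
```

Administrative actions such as `dynamodb:CreateTable` or `dynamodb:DeleteTable` would live in a separate policy attached only to operator roles.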
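The single-table and sort-key recommendations above can be sketched in a few lines. This is a minimal, hypothetical key layout (the `CUSTOMER#`/`ORDER#` prefixes, attribute names, and entity types are illustrative, not from the source): a one-to-many relationship is stored in one table by giving parent and child items the same partition key and distinguishing them with a composite sort key.

```python
# Hypothetical single-table layout for a customer -> orders
# one-to-many relationship. Both entity types live in one table;
# the shared partition key (PK) groups a customer's orders with
# the customer item, and the sort key (SK) prefix separates them.

def customer_key(customer_id: str) -> dict:
    """Item key for a customer profile (PK = SK = CUSTOMER#<id>)."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"CUSTOMER#{customer_id}"}

def order_key(customer_id: str, order_id: str) -> dict:
    """Item key for an order, stored in its customer's partition."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"ORDER#{order_id}"}
```

With this layout, a Query on `PK = "CUSTOMER#42"` with a sort-key condition of `begins_with(SK, "ORDER#")` returns all of that customer's orders in a single request, with no table scan or filter expression.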
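The TTL-based cleanup mentioned above relies on a Number attribute holding a Unix epoch timestamp in seconds; DynamoDB deletes expired items asynchronously at no write cost. A minimal sketch (the `expires_at` attribute name and the session item are illustrative assumptions):

```python
import time

def with_ttl(item: dict, days: int, attr: str = "expires_at") -> dict:
    """Return a copy of the item carrying an epoch-seconds TTL attribute.

    DynamoDB's TTL feature reads this attribute (it must be enabled on
    the table and pointed at `attr`) and removes items once the
    timestamp is in the past.
    """
    expiring = dict(item)
    expiring[attr] = int(time.time()) + days * 24 * 3600
    return expiring

# Example: a session item that DynamoDB may purge after ~7 days.
session = with_ttl({"PK": "SESSION#abc"}, days=7)
```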

Our Experiences

American electrical distribution company

LTIMindtree helped modernize the customer-facing application APIs to a serverless architecture, implementing more than 200 Lambda APIs that use DynamoDB as the datastore. LTIMindtree also helped migrate the on-premises Oracle relational database to NoSQL DynamoDB to reduce the cost of ownership, improve performance, and ensure near-zero downtime.
American multinational consumer goods corporation

LTIMindtree helped the customer migrate multiple HR applications to a modern serverless architecture using React, Lambda APIs, and DynamoDB. LTIMindtree also helped the customer architect data models for DynamoDB to migrate from the on-premises Oracle database. The migration from the on-premises Oracle database to DynamoDB reduced the cost of ownership and improved application performance.

LTIMindtree’s Service Offering for DynamoDB


1. Consulting
Our consulting service offering focuses on tool-based assessment to analyze existing database models, define database migration strategy with a roadmap, and define architecture design for the DynamoDB database.

2. Application modernization
LTIMindtree has deep expertise in transforming monolithic architectures to cloud-native architectures using NoSQL DynamoDB. We conduct joint application development workshops with customers to understand the different personas and data access patterns, and to define an architecture design that makes the DynamoDB data model cost-effective, secure, scalable, and highly available.

3. Modernizing data pipeline and data lakes
Our data engineering service line helps customers modernize data pipelines/ETL and data lakes using AWS data services and AWS Lambda. This service line also helps migrate commercial databases to serverless cloud databases such as Amazon Aurora and DynamoDB using Lambda-based ETL or the AWS DMS and SCT tools.
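A Lambda-based ETL pipeline of the kind described here typically includes a transform step that reshapes flat relational rows into DynamoDB items. The sketch below is a hypothetical example of such a step (the `ORDERS` column names and the key layout are assumptions, not from the source); note that numeric columns are converted to `Decimal`, since boto3 rejects Python floats when writing to DynamoDB.

```python
from decimal import Decimal

def transform_order_row(row: dict) -> dict:
    """Map one relational ORDERS row onto a DynamoDB-ready item.

    The PK/SK composite-key shape follows a single-table design;
    DynamoDB has no native float type, so the order total is
    converted to Decimal before being written with boto3.
    """
    return {
        "PK": f"CUSTOMER#{row['customer_id']}",
        "SK": f"ORDER#{row['order_id']}",
        "status": row["status"],
        "total": Decimal(str(row["total"])),
    }

# Example: one row as it might arrive from the source database.
item = transform_order_row(
    {"customer_id": 42, "order_id": 1001, "status": "SHIPPED", "total": 19.99}
)
```

In a full pipeline, a Lambda handler would apply this transform to each batch of extracted rows and write the results with a boto3 `batch_writer`; AWS DMS and SCT cover the same extract/convert steps in a tool-driven way.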

LTIMindtree’s Accelerators

  • LTI Infinity platform: Equipped with efficiency kits for application assessment, development, deployment, FinOps, operations, and DevOps tooling to accelerate AWS Lambda-based application development.
  • Architecture blueprints and best practices: Architecture blueprints using Amazon DynamoDB for web applications, IoT, and streaming use cases.
  • Infinity Ensure: A self-service SaaS platform that provides FinOps governance for AWS serverless services such as DynamoDB, AWS Lambda, API Gateway, and RDS.
  • Observability platform for serverless: LTIMindtree's observability solutions help quickly identify the root cause of a problem, reducing troubleshooting time for applications built on DynamoDB and Lambda functions.

Conclusion

DynamoDB is a valuable service for delivering real-world use cases with a NoSQL application data store. DynamoDB addresses the performance, TCO, and availability issues associated with commercial relational databases. Careful data model design and proper identification of critical use cases and data access patterns are required to migrate from a commercial relational database to NoSQL DynamoDB.
With the LTI Infinity platform, architecture blueprints, and best practices, LTIMindtree has helped customers migrate legacy commercial databases to NoSQL DynamoDB.