What does a typical DWH cloud migration project look like?
Data solution migration projects differ case by case, with different levels of complexity, different organization, and various project dependencies. However, the core of the work typically does not change much.
Let’s imagine a typical mid-sized Data Warehouse of standard complexity in an organization in the banking or telco sector. Such a DWH solution typically has dozens of source systems and hundreds of entities in the landing area and the core of the Data Warehouse. It also produces hundreds of reports on a daily basis for risk, controlling, marketing, sales, and other departments within the organization.
What does it mean to migrate this type of legacy solution to the cloud (Azure, GCP, Amazon Web Services)? In standard circumstances, this would typically be a multi-month project, resulting in the re-implementation of hundreds or even thousands of ETL jobs/data pipelines from the legacy ETL technology to the target technology (Azure Data Factory, BigQuery, etc.).
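To make the re-implementation concrete, here is a minimal sketch of what a single re-platformed job might look like on GCP: a legacy nightly transformation re-expressed as a BigQuery query submitted from Python. The table names, schema, and business rule are purely illustrative assumptions, not taken from any real project.

```python
# Hypothetical example: one legacy ETL step re-implemented on BigQuery.
# All table names and the aggregation rule are illustrative only.
from google.cloud import bigquery


def load_daily_risk_snapshot(client: bigquery.Client) -> None:
    """Re-implementation of a (fictional) legacy nightly job that
    aggregates account balances into a risk-reporting table."""
    sql = """
        CREATE OR REPLACE TABLE dwh_core.daily_risk_snapshot AS
        SELECT
            account_id,
            SUM(balance) AS total_balance,
            CURRENT_DATE() AS snapshot_date
        FROM landing.account_balances
        GROUP BY account_id
    """
    # client.query() submits the job; .result() blocks until it finishes.
    client.query(sql).result()


if __name__ == "__main__":
    load_daily_risk_snapshot(bigquery.Client())
```

Multiply this kind of rewrite by hundreds or thousands of jobs, each with its own quirks, and the size of the re-platforming effort becomes clear.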
Using ADELE as a cloud migration accelerator speeds up the overall re-platforming effort. Admittedly, given the complexity of ETL jobs, ‘personalized’ coding standards, and the peculiarities hidden in a legacy solution, not all jobs can be migrated/re-platformed entirely automatically.
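As a rough illustration of why full automation is hard, consider a rule-based translator that mechanically rewrites common legacy SQL constructs into the target dialect and flags anything it cannot safely handle for manual work. The rules below are simplified assumptions for the sake of the example, not ADELE’s actual logic.

```python
import re

# Simplified, hypothetical rewrite rules (legacy dialects -> BigQuery-style SQL).
# A real migration tool applies hundreds of such rules plus full SQL parsing.
REWRITE_RULES = [
    (re.compile(r"\bSEL\b", re.IGNORECASE), "SELECT"),       # Teradata shorthand
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "IFNULL("),   # Oracle NVL()
]

# Constructs that typically need human attention rather than a mechanical rewrite.
MANUAL_REVIEW_PATTERNS = [
    re.compile(r"\bCALL\b", re.IGNORECASE),    # stored-procedure calls
    re.compile(r"\bCURSOR\b", re.IGNORECASE),  # procedural row-by-row logic
]


def translate_job(legacy_sql: str) -> tuple[str, bool]:
    """Return (translated_sql, needs_manual_review)."""
    needs_review = any(p.search(legacy_sql) for p in MANUAL_REVIEW_PATTERNS)
    sql = legacy_sql
    for pattern, replacement in REWRITE_RULES:
        sql = pattern.sub(replacement, sql)
    return sql, needs_review
```

Jobs that fall through the rules, typically those built on procedural logic or undocumented house conventions, are exactly the ones that end up in the manual bucket.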
Here, ADELE can help greatly by automating as much as 80% of the workload. The rest still has to be tweaked and re-implemented manually, but eliminating 80% of the effort on any given workload dramatically shortens the overall project duration.
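A quick back-of-the-envelope calculation shows what that 80% means in practice. The figures below are illustrative assumptions, not data from a real project:

```python
# Back-of-the-envelope effort estimate; all numbers are illustrative assumptions.
pipelines = 1_500          # hypothetical count of ETL jobs/data pipelines
days_per_pipeline = 2.0    # hypothetical manual re-implementation effort each
automation_rate = 0.80     # share of the workload automated, per the text above

manual_effort = pipelines * days_per_pipeline
remaining_effort = manual_effort * (1 - automation_rate)
print(f"Fully manual: {manual_effort:,.0f} person-days")        # 3,000
print(f"With 80% automation: {remaining_effort:,.0f} person-days")  # 600
```

Under these assumed numbers, the manual workload drops from 3,000 to 600 person-days, which is the difference between a multi-year and a multi-month migration.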