SAP BW on HANA functional migration project experiences
Migrating objects in our client’s SAP Business Warehouse system as part of a HANA system transition

Purpose of the BW system migration project, framework
Foreword
In which we get to know the protagonist’s best friend, the Client…
The customer started developing its SAP BW system many, many years ago and completed the first important step of the HANA migration, the database migration, in 2018. The next step on this path is the migration of objects.
According to the customer’s needs and the project tender, the functionality of the existing models had to be rebuilt identically from HANA-compatible objects. The expectation was that after the go-live of the new objects and models, the old and new models would run in parallel for a certain time, so that, based on several months of data loading and operational experience in the live system, we could confirm that the new models work accurately. Loading of the old models could then be stopped and their data content deleted at a later stage.
Planning, preparation
First chapter
In which the protagonist gets to know and then understands the task, a chill runs down his spine, he turns pale, and then rolls up his sleeves…
As the first step of the BW system migration project, we determined the exact scope. This is one of the most important steps, because an unplanned object type or model part that turns up along the way can easily generate so many extra tasks that the available time and resource budgets can no longer be kept.
We excluded from the project scope the more complicated models implemented with custom development; assessing and reworking them would not have fit into the available time frame. Planning objects were also migrated within the project, but that work was done by the client’s IT expert colleagues, so in the end the models were migrated in their entirety.
The goal was not a one-shot complete transition, but a multi-phase, controlled migration that ensured stable operation.
In the initial phase of the project, we surveyed the models to be transformed and created AS-IS diagrams for each model, as well as a complete object list that included the relevant infoproviders, infosources, and datasources. In the next step, the resulting diagrams and lists were validated by the client’s IT experts, thus clarifying the scope of the project. In this phase it is especially important to involve client-side experts who know the given system well; their knowledge and experience are essential to form the whole picture.
Of course, the survey of the individual models also included assessing and determining the relationships between the models and with other parts not included in the scope.
As a separate task, the custom developments in the system had to be mapped for each object in the object list. Not only the custom programs, function modules, tables, and transactions had to be listed, but also the references hidden in transformations, user-exit code, and DTP filters.
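To give a feel for this mapping work, here is a minimal, hypothetical sketch of how custom programs could be scanned for references to old object names. The program and object names are placeholders; generated transformation programs and user exits would need their own name resolution, which is not shown.

```abap
REPORT z_scan_object_references.

DATA: lt_programs    TYPE TABLE OF progname,
      lt_old_objects TYPE TABLE OF string,
      lt_source      TYPE TABLE OF string.

" Placeholder inputs: in the project these came from the development
" inventory and the AS-IS object list.
lt_programs    = VALUE #( ( 'ZBW_LOAD_EXIT' ) ( 'ZBW_USEREXIT_VAR' ) ).
lt_old_objects = VALUE #( ( `ZSD_O01` ) ( `ZFI_C02` ) ).

LOOP AT lt_programs INTO DATA(lv_program).
  " Read the ABAP source of the program into an internal table.
  READ REPORT lv_program INTO lt_source.
  IF sy-subrc <> 0.
    WRITE: / |Program { lv_program } not found|.
    CONTINUE.
  ENDIF.
  LOOP AT lt_source INTO DATA(lv_line).
    DATA(lv_lineno) = sy-tabix.
    " Report every source line that mentions one of the old object names.
    LOOP AT lt_old_objects INTO DATA(lv_object).
      IF lv_line CS lv_object.
        WRITE: / |{ lv_program } line { lv_lineno }: reference to { lv_object }|.
      ENDIF.
    ENDLOOP.
  ENDLOOP.
ENDLOOP.
```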
Based on the completed AS-IS diagrams and lists, the TO-BE diagrams were created, containing the target model diagrams, relationships, and objects. In parallel with the TO-BE diagrams, an object list was created per area and per model, which included the code, name, and type of each original object and the name, code, and type of the new object to be created from it. By merging the object lists by model, an object list for the entire project was produced, which formed the basis not only for the statistics compiled at the end of the project but also for finding references in the developments.
Based on the completed object list, it was possible to modify the authorization roles and objects.
In addition to the TO-BE diagrams and the object list, the list of reports to be copied and of workbooks to be converted was also part of the design documentation. Users work with BEx workbooks, so it was important to be able to replace every copied query in the associated workbooks. The list of reports to be copied was based on the report runtime statistics of the live system.
Mostly we decided which reports would and would not be copied based on the number of runs, but the client-side experts who knew the area and the users’ habits were also a great help in selecting and validating the reports to be copied.
At this point, we tried to identify the models or model parts for which, due to their design, the old and new models could not be operated in parallel, or for which the new model could not be produced by copying but only by modifying the existing model. This can happen when the associated development uses too many other objects (tables, transactions), so copying the entire program package would be too large a task. Another problematic case is a hybrid model in which an old-type infocube or SPO has been inserted into a new composite provider.
In the latter case, instead of the infocube connected to the composite provider we created an aDSO and swapped the infocube for the new aDSO in the composite provider using the replace function. This way it was not necessary to modify the additional objects (queries, calculation views) built on the composite provider.
In cases where the old and new models could not run in parallel, we ran a query before the transport (before state) and the same query again after the transport (after state); testing consisted of comparing the two results.
During the survey of the areas, special attention had to be paid to the conversion of the existing and operating 3.x loads.
Implementation of the BW system migration project
Chapter Two
In which no stone or infocube is left standing, and infosources just fall…

The SAP-recommended migration program is designed to map a complete model with all of its relationships and to migrate all affected objects once a top-level object is specified. However, this program was not suitable for migrating complete model parts: if a 3.x load branch or an SPO was found, it stopped with an error message. Therefore, most back-end objects in the areas were copied manually; new DTPs were created, and the DTP selections were copied by hand.
We developed the new process chains based on the existing ones; because of the need for parallel operation, the new chains loading the new model branches were inserted into the existing chains.
Loading methodology for new models
An important question is how we can most easily double the size of our database.
With the chosen approach, the new model branches are not loaded entirely from the datasources but, in line with the LSA concept the models are built on, from the lowest-level DSO; the objects above it are then loaded from below within the new model. This solution can be used if the lower-level DSOs contain all the necessary data.
This approach was chosen primarily because of the long runtime of loads through the datasources, and also because of the complexity of the setup process of the LIS datasources. I would add as an advantage of the solution that the transformations and the loads above the lower-level DSOs were tested as well.
During testing, several shortcomings and errors of the existing models came to light. Where a transformation within the model contained logic that made it impossible to reload the top-level object, we also created a side-branch load at the top level.
What do we copy to what?
The question may seem redundant, since the answer appears clear from SAP training materials and brochures available on the Internet; still, let’s go through it briefly, based on concrete experience.
Classic DSO
There is really no question here: each classic DSO gets an advanced DSO counterpart with the appropriate parameter settings.
SPO
An SPO can be of infocube or DSO type. The question may arise whether, if the original object is partitioned, it is worth splitting the new object along some characteristic as well. SAP recommends setting up semantic partitioning on an aDSO above 2 billion records; below that, the system partitions the tables automatically at the database level, and no special setting is required.
For SPOs, side-branch loading also raises some questions. The side-branch loads are HANA-compatible loads in every case; only in the rarest cases do they contain any transformation logic (such as text-character cleansing). However, loading from an SPO has a huge memory requirement, in many cases exceeding a memory parameter (Global Memory Allocation Limit), and exceeding the memory limit can bring the system down.
Based on the SAP recommendation, the load should be set up partition by partition. We tried to run the loads in several installments broken down by year; in most problematic cases this solved the issue, but for some data stores a breakdown by period was also required. In a few places we instead chose a load without HANA push-down, so the load ran on the application server (and not in memory): memory usage remained insignificant, the runtime was very long, but the loads went through.
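As an illustration of the year-by-year split, the sketch below builds simple single-year selection values for the load installments. It uses a generic, locally defined range structure; the actual generated DTP filter routine has its own signature and range type, which is not reproduced here.

```abap
REPORT z_year_ranges_sketch.

" Sketch only: generic range structure, not the generated DTP filter type.
TYPES: BEGIN OF ty_range,
         sign   TYPE c LENGTH 1,
         option TYPE c LENGTH 2,
         low    TYPE c LENGTH 4,
         high   TYPE c LENGTH 4,
       END OF ty_range.

DATA lt_year_range TYPE STANDARD TABLE OF ty_range WITH EMPTY KEY.

" One single-year filter value per load installment, e.g. 2015..2021.
" Problematic years could be split further by posting period the same way.
DATA(lv_year) = 2015.
WHILE lv_year <= 2021.
  APPEND VALUE ty_range( sign = 'I' option = 'EQ' low = |{ lv_year }| )
         TO lt_year_range.
  lv_year = lv_year + 1.
ENDWHILE.
```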
Infocube
In most cases an infocube also becomes an aDSO, but the parameterization does matter. If the aDSO is created by copying, an infocube-type aDSO is obtained automatically, and in most cases this served its purpose.
However, the infocube-type aDSO has some peculiarities. Once load requests are activated, they can no longer be deleted; this practically corresponds to the former compression function. We therefore chose to put a “cleanup” element into the process chains, where you can define what happens to old requests; generally, we set requests older than 30 days to be activated. There was one object whose data had to be kept unchanged and could not be reloaded; there it can be a problem if a request is accidentally activated and can no longer be deleted. For that object we considered creating a normal aDSO instead of an infocube-type aDSO; fortunately, the maximum number of key fields allowed for the new aDSO did not prevent this setting. To decide, the specifics of the given model must be examined.
For some models, where no business logic was mapped in the transformation between the top-level DSO and the infocube, we omitted the infocube level from the new model and added the new aDSO directly to the composite provider.
Multiprovider
Instead of multiproviders, we created composite providers in every case.
Infoset
Unfortunately, copying infosets proved problematic in every case. Most of the time it was easier to rebuild the composite provider from scratch and copy the reports between the two objects. In some cases we decided to replace the given infoset with a virtual model built from calculation views.
The figure above compares the number of infoproviders in the old and new models. It is important to note that in the old models each SPO partition was counted as a separate object. From a maintenance and operations point of view, I think this is a justified step.
Developments, programs
A project of this size and complexity is not possible without the help of at least one hard-working and patient developer. The real problem is not finding and replacing the object references and reads hidden in transformation code, but mapping and updating individual enhancements that have grown complicated, been used for too long, and lack thorough documentation. The transaction code is just the tip of the iceberg. There is no general methodology for solving this; only three things are needed… 😊
A separate question was whether it is possible or worthwhile, within the project, to replace the ABAP code in the transformations with AMDP scripts. As a general rule, I think it is only worth considering where the current load times justify it. By rewriting the code we hide another potential source of error in the model, which is cumbersome to test and requires additional resources.
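To illustrate the difference, the hypothetical sketch below shows the set-based SQLScript style that replaces a row-by-row ABAP lookup. The class, table, and field names (including the zmat_attr attribute table) are placeholders; the real AMDP routine of a BW transformation is generated by the system with its own class name and method signature.

```abap
" Hypothetical, simplified AMDP class illustrating the set-based style that
" replaces a LOOP + READ TABLE lookup written in ABAP.
CLASS zcl_demo_amdp_lookup DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES if_amdp_marker_hdb.
    TYPES: BEGIN OF ty_data,
             material TYPE c LENGTH 18,
             plant    TYPE c LENGTH 4,
             matgroup TYPE c LENGTH 9,
           END OF ty_data,
           tt_data TYPE STANDARD TABLE OF ty_data WITH EMPTY KEY.
    METHODS enrich_matgroup
      IMPORTING VALUE(intab)  TYPE tt_data
      EXPORTING VALUE(outtab) TYPE tt_data.
ENDCLASS.

CLASS zcl_demo_amdp_lookup IMPLEMENTATION.
  METHOD enrich_matgroup BY DATABASE PROCEDURE FOR HDB
                         LANGUAGE SQLSCRIPT
                         OPTIONS READ-ONLY
                         USING zmat_attr.
    -- Set-based join instead of a row-by-row lookup: the material group is
    -- read from a placeholder attribute table (zmat_attr).
    outtab = SELECT i.material,
                    i.plant,
                    a.matgroup
               FROM :intab AS i
               LEFT OUTER JOIN zmat_attr AS a
                 ON a.material = i.material;
  ENDMETHOD.
ENDCLASS.
```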
Testing of the BW system migration project
Chapter Three
In which the protagonist thinks he can rest but is wrong…
Developer tests consisted mostly of checking the data content of the objects created in parallel. The data content of the lower-level objects was obtained with 1:1 loads, so there was not much discrepancy there, but loading the objects above them already made it possible to verify the operation of the new model branches.
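A minimal sketch of this kind of check is shown below: it compares the record counts of an old object’s active table and its new counterpart. The table names are placeholders following the usual /BIC/A* naming pattern; in the project the pairs came from the old/new object mapping list, and key-figure totals were compared in a similar way.

```abap
REPORT z_compare_record_counts.

TYPES: BEGIN OF ty_pair,
         old_tab TYPE tabname,
         new_tab TYPE tabname,
       END OF ty_pair.

DATA lt_pairs TYPE STANDARD TABLE OF ty_pair WITH EMPTY KEY.

" Placeholder active-table pair (old DSO vs. new aDSO).
lt_pairs = VALUE #( ( old_tab = '/BIC/AZSD_O0100' new_tab = '/BIC/AZSD_A012' ) ).

LOOP AT lt_pairs INTO DATA(ls_pair).
  DATA lv_cnt_old TYPE i.
  DATA lv_cnt_new TYPE i.
  " Dynamic table names keep the check reusable for every object pair.
  SELECT COUNT(*) FROM (ls_pair-old_tab) INTO @lv_cnt_old.
  SELECT COUNT(*) FROM (ls_pair-new_tab) INTO @lv_cnt_new.
  IF lv_cnt_old = lv_cnt_new.
    WRITE: / |{ ls_pair-old_tab } vs { ls_pair-new_tab }: OK, { lv_cnt_old } rows|.
  ELSE.
    WRITE: / |{ ls_pair-old_tab } vs { ls_pair-new_tab }: { lv_cnt_old } <> { lv_cnt_new }|.
  ENDIF.
ENDLOOP.
```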
Client-side integration testing was essentially done by comparing pre-selected typical reports. A colleague with an incredible tolerance for monotony and great precision runs the old and then the new report with the same selections, broken down by several dimensions, then compares the results and finds the needle in the haystack. That is where the Sisyphean investigation begins: how did the “needle” get there?
Due to this methodology, testing could only verify a static, point-in-time match; it was not possible to test the correctness of the new extractions from the source system under continuous data loading. Parallel operation in the live system was therefore given special importance. The disadvantage of this solution was that, for the transitional period of parallel operation, the memory used in the live system almost doubled.
BW system migration project: Live start, initial live loads
Chapter Four
In which the big day comes…
Preparing for go-live requires careful planning and the involvement of all parties concerned. It is not enough to prepare and check the transports and carry out the related administration; many other tasks have to be thought through and prepared.
Based on the experience of the test-system loads, we planned the sequence of the productive-system loads. In the productive system, care had to be taken not to overload the system with large data loads, so the full data load was spread over two weeks. This in turn meant that, although all objects were available and operational in the productive system after the transports were imported, the new reports returned no data until the storage objects were filled. We therefore scheduled the transport of the workbooks and the notification of users for after the productive data loads.
After the data loads, the new process chains had to be scheduled in the productive system so that the new model branches would also be loaded daily. We then checked the productive reports as well, comparing the old and new reports.
Proper communication with users is very important during a transformation of this scale. They should be notified that a technical migration step has taken place in the system that does not change the way the reports work functionally, but that it is important for them to use the new reports and to report immediately if they notice any discrepancy.
End of the BW system migration project, follow-up tasks
Afterword
In which everyone calms down and enjoys a well-deserved rest…
Of course, a project like this does not really end for a long time. Mistakes, changes, and small tasks keep turning up that, even a month or two later, remind us of the difficulties of an otherwise successful project.
Because the old and new model branches run in parallel in the live system, memory usage increases considerably. A follow-up task is therefore needed to assess and plan the shutdown of the old model branches and the deletion of the contents of the old storage objects.
Based on the opinion of the client’s experts, the decision was made not to delete the old objects themselves: only their content will be deleted over time, and the objects will be moved to an archive (info)area, so that the business logic defined in them remains available and retrievable later.
A priority order was set for deleting the contents of the objects. According to this, once the new model and its reports have worked without error for an adequate time, the contents of the infocubes of the old model can be deleted in a first step. For safety reasons, the contents of the underlying DSOs will only be deleted in a subsequent step.
For models based on logistics datasources (2LIS_*), the safety interval for retaining the old data was set somewhat longer: if a reload were ever required, the setup process for historical data from these datasources would be a much more complex and time-consuming task.
We received positive feedback from the client, so the project is definitely a success. We gained a lot of experience, encountered all kinds of BW objects, got to know most of the basic business models and their features, dealt with the history behind the models, and thought about what we could have done differently and better.
At least as important is that, during many months of intense joint work, we developed a very good relationship with the experts of the client’s IT team. If they recognize themselves, I would like to thank them again for all the help and trust they gave us.
BW system migration project experience
This article was written by: Zsolt Kaprinyák
SAP Business Intelligence Competence Centre
