HANA PAL SQL Architecture

Step by Step Guide for PAL Algorithms.

Method 1 – High-Level Pictorial Representation – Direct Method (Recommended)

 
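In SQL terms, the direct method boils down to preparing a data table and a PAL parameter table and calling the PAL procedure in the _SYS_AFL schema directly (HANA 2.0 style). Below is a minimal sketch for the ABC Analysis algorithm; the table names are made up, and the procedure and parameter names follow the PAL reference, so verify them against your HANA revision.

-- Input data: one item per row with a numeric value (e.g. revenue)
CREATE COLUMN TABLE PAL_ABC_DATA_TBL (
    ITEM  NVARCHAR(100),
    VALUE DOUBLE
);
INSERT INTO PAL_ABC_DATA_TBL VALUES ('ITEM1', 15.4);
INSERT INTO PAL_ABC_DATA_TBL VALUES ('ITEM2',  0.8);
INSERT INTO PAL_ABC_DATA_TBL VALUES ('ITEM3', 31.0);

-- Standard PAL parameter table (name / int / double / string columns)
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_PARAMETER_TBL (
    PARAM_NAME   NVARCHAR(256),
    INT_VALUE    INTEGER,
    DOUBLE_VALUE DOUBLE,
    STRING_VALUE NVARCHAR(1000)
);
INSERT INTO #PAL_PARAMETER_TBL VALUES ('PERCENT_A', NULL, 0.7, NULL);
INSERT INTO #PAL_PARAMETER_TBL VALUES ('PERCENT_B', NULL, 0.2, NULL);
INSERT INTO #PAL_PARAMETER_TBL VALUES ('PERCENT_C', NULL, 0.1, NULL);

-- Direct call: the ABC classification is returned as a result set
CALL _SYS_AFL.PAL_ABC (PAL_ABC_DATA_TBL, "#PAL_PARAMETER_TBL", ?);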

Method 2 – High-Level Pictorial representation – Wrapper Method

 

Step 1

  

Step 2 –

 
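The wrapper method is the older, HANA 1.0-style approach: define table types for the data, control and result structures, register them in a signature table, and let SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE generate a callable procedure. A rough sketch is below; MY_SCHEMA and the type/table names are placeholders, and the AFL function name ('ABC') and the result column layout should be double-checked against the PAL guide for your release.

-- 1. Table types describing the PAL signature (data, control parameters, result)
CREATE TYPE PAL_ABC_DATA_T   AS TABLE (ITEM NVARCHAR(100), VALUE DOUBLE);
CREATE TYPE PAL_CONTROL_T    AS TABLE (NAME VARCHAR(100), INTARGS INTEGER, DOUBLEARGS DOUBLE, STRINGARGS VARCHAR(100));
CREATE TYPE PAL_ABC_RESULT_T AS TABLE (ITEM NVARCHAR(100), ABC NVARCHAR(10));  -- column names/order per the PAL reference

-- 2. Signature table telling PAL which type sits at which parameter position
CREATE COLUMN TABLE PAL_ABC_SIGNATURE (
    POSITION       INTEGER,
    SCHEMA_NAME    NVARCHAR(256),
    TYPE_NAME      NVARCHAR(256),
    PARAMETER_TYPE VARCHAR(7)
);
INSERT INTO PAL_ABC_SIGNATURE VALUES (1, 'MY_SCHEMA', 'PAL_ABC_DATA_T',   'IN');
INSERT INTO PAL_ABC_SIGNATURE VALUES (2, 'MY_SCHEMA', 'PAL_CONTROL_T',    'IN');
INSERT INTO PAL_ABC_SIGNATURE VALUES (3, 'MY_SCHEMA', 'PAL_ABC_RESULT_T', 'OUT');

-- 3. Generate the wrapper procedure, then call it like any other procedure.
--    (PAL_ABC_DATA_TBL, PAL_CONTROL_TBL and PAL_ABC_RESULT_TBL are physical tables
--     created with the same structures as the types above.)
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'ABC', 'MY_SCHEMA', 'PAL_ABC_PROC', PAL_ABC_SIGNATURE);
CALL MY_SCHEMA.PAL_ABC_PROC(PAL_ABC_DATA_TBL, PAL_CONTROL_TBL, PAL_ABC_RESULT_TBL) WITH OVERVIEW;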

Method 3 – Flowgraph

The first two methods are pure SQL approaches, where we create an input table and a result table and then call the SQL wrapper or the PAL procedure directly. SAP provides many documents with SQL samples for these. Below I am showing the flowgraph method for running the ABC Analysis algorithm.

Input and Result tables are already created as per Algorithm requirements.
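For orientation, the input and result tables behind the flowgraph can be as simple as the following; the names are made up, and the result column layout follows the PAL ABC Analysis reference, so verify it for your revision.

-- Input: one row per item with the measure used for classification
CREATE COLUMN TABLE ABC_INPUT_TBL (
    ITEM  NVARCHAR(100),
    VALUE DOUBLE
);

-- Result: the ABC class assigned to each item
CREATE COLUMN TABLE ABC_RESULT_TBL (
    ITEM NVARCHAR(100),
    ABC  NVARCHAR(10)
);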

                          
 – Karthik Addula

 

How to Create a CDS view from scratch and consume in Analytics cloud/Fiori

This blog is intended as a guide to creating a custom CDS view. CDS views are open (i.e., a CDS view definition can be deployed on any database supported by the SAP system).

What is a CDS view?

Core Data Services (CDS) is SAP's next-generation modeling framework. In short, it is the one-stop place for all modeling, embedding DDL (definition), QL (query), and DCL (access control). It follows a code push-down approach, with the database doing the heavy lifting.

You can find more about CDS in the SAP links below. CDS comes in two flavors: 1) ABAP CDS, which behaves as a full ABAP-native object defined in ADT, and 2) HANA CDS, which is more native to HANA database modeling.

More about CDS concepts – 

ABAP CDS – https://help.sap.com/viewer/f2e545608079437ab165c105649b89db/7.51.1/en-US/4ed2245c6e391014adc9fffe4e204223.html

HANA CDS – https://help.sap.com/doc/29ff91966a9f46ba85b61af337724d31/2.0.00/en-US/SAP_HANA_Core_Data_Services_CDS_Reference_en.pdf

*The blog below is intended for ABAP CDS modeling.

 

CDS view types

There are three main types of CDS views (plus a few others for extensions):

Basic view – The first level of the CDS virtual data model, mostly a one-to-one representation of a database table. Basic views can be of dimension type or fact type. Basic views are usually kept private and are not intended for direct BI consumption.

Composite view – Composite views have joins/associations between fact and dimension basic views. Composite views are like cubes; we can define aggregation types and calculations in a composite view.

Consumption view – Query views on top of composite views, carrying all the end-user semantics, that can be consumed in any BI tool, including Fiori apps.

 

         

Setting up Eclipse and ADT

Step 1 – Log in to Eclipse and open the ABAP perspective

 

Step 2 – Create a new ABAP project for the desired backend system

 

Step 3 – Add the package where the objects will be stored as a favorite so it is easily accessible
 

 

Creating a Basic view on ACDOCA table

Navigate to Core Data Services under the package added above.

Step 1 – Select New from the context menu on Core Data Services and give the view a relevant name as per naming standards
 

If change management is turned on, you will be prompted to select a transport. After selecting the transport request, a pop-up with predefined templates appears to kick-start development; select “Define View”.

Click Finish. The template comes up with standard annotations; update the code with relevant names and add the table field names.

Key annotations:
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT
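To make the step concrete, a minimal basic-view sketch on ACDOCA could look like the following; the view names, field selection and aliases are purely illustrative, so adapt them to your own naming standards and field list.

@AbapCatalog.sqlViewName: 'ZIACDOCAB'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Basic view on ACDOCA'
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT
define view Z_I_AcdocaBasic
  as select from acdoca
{
  key rldnr  as Ledger,
  key rbukrs as CompanyCode,
  key gjahr  as FiscalYear,
  key belnr  as AccountingDocument,
  key docln  as LedgerGLLineItem,
      racct  as GLAccount,
      @Semantics.currencyCode: true
      rhcur  as CompanyCodeCurrency,
      @Semantics.amount.currencyCode: 'CompanyCodeCurrency'
      @DefaultAggregation: #SUM
      hsl    as AmountInCompanyCodeCurrency
}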

 

Validate the code and activate.

Check data preview.

  

 

 Creating a Composite view

Composite views are intermediate views where typically all the semantics and calculations are done.

Step 1 – Create a new CDS view the same way as the basic view (the annotations define the view type)

(Optional – select the annotation type template for a predefined code skeleton)

Below is the composite view definition with associations and data category Cube. The data extraction annotation additionally exposes the CDS view as an extractor in ODP (see the sketch after the annotations below).
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
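A hedged sketch of such a composite view is shown below, built on the basic view from the previous step and associating the SAP-delivered I_CompanyCode dimension; names and the field list are illustrative, and @Analytics.dataExtraction.enabled is the extraction annotation mentioned above.

@AbapCatalog.sqlViewName: 'ZIACDOCAC'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Composite (cube) view on ACDOCA'
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
@Analytics.dataExtraction.enabled: true
define view Z_I_AcdocaCube
  as select from Z_I_AcdocaBasic as Fact
  association [0..1] to I_CompanyCode as _CompanyCode
    on Fact.CompanyCode = _CompanyCode.CompanyCode
{
  Fact.Ledger,
  Fact.CompanyCode,
  Fact.FiscalYear,
  Fact.GLAccount,
  Fact.CompanyCodeCurrency,
  Fact.AmountInCompanyCodeCurrency,
  // expose the association for navigation in the consumption layer
  _CompanyCode
}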

  

Finally, create a consumption view/query. The annotations below create an OData service and a transient BEx query for BI consumption.
Key annotations:
@VDM.viewType: #CONSUMPTION
@OData.publish: true – creates an OData service for UI/Fiori consumption
@Analytics.query: true – creates a transient BEx query for BI consumption
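Putting it together, a consumption-view sketch could look like the following; again, the names and the row/column layout are illustrative.

@AbapCatalog.sqlViewName: 'ZCACDOCAQ'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Consumption query on ACDOCA cube'
@VDM.viewType: #CONSUMPTION
@Analytics.query: true
@OData.publish: true
define view Z_C_AcdocaQuery
  as select from Z_I_AcdocaCube
{
  @AnalyticsDetails.query.axis: #ROWS
  CompanyCode,
  @AnalyticsDetails.query.axis: #ROWS
  GLAccount,
  @AnalyticsDetails.query.axis: #COLUMNS
  FiscalYear,
  AmountInCompanyCodeCurrency
}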

  

Once the consumption view with the analytics query annotation is activated, we can check the transient query in transaction RSRT.

BEx query naming: the transient query is named with the prefix 2C followed by the SQL view name of the consumption view.

 


 

Consuming in Fiori Query Browser App

Navigate to the SAP Gateway system and activate the OData service for the above CDS view (transaction /IWFND/MAINT_SERVICE)

 

Login to Fiori Launchpad

Prerequisites – the Query Browser app and the security setup are already in place

 

  

Consuming from Analytics – Cloud

Create a Live/Import connection (you can use an existing one if you want)

 

Create a Model and consume the Transient Query (2C)

 

Planning models can be created on transient BEx planning queries with the required annotations.


Please reach out to us and let Tek Analytics take care of S/4 analytics strategic planning and development for you 🙂

– Karthik Addula 

Contact us today!

Consuming HANA Data sources from Power BI Desktop

 

Step 1 – Launch Power BI Desktop

Step 2 – Click Get Data

Step 3 – Select databases > SAP HANA database

Step 4 – Enter the server details and select the connection type

Import = imports the data (you can also schedule refreshes)

DirectQuery = executes queries on demand whenever a user refreshes the data (real time)

Step 5 – Select the type of credentials and click Connect

To use Kerberos Windows AD SSO, select “Use my current credentials.”

To use a HANA database user ID and password, select “Database” and enter the user ID and password.

Step 6 – Upon successful connection, all the calculation views you are authorized to see will be displayed

Step 7 – Select the calculation view and click Load

All the data will be now available for visualizations.

Filtering Data by Time in SAP Analytics Cloud



 

In SAP Analytics Cloud, you can add a time calculation to your chart without using the calculation editor. This allows you to create quick comparisons with your data such as Previous Year, Period over Period, Year to Date, and so on.

There are two ways to filter your visualizations using time filters:

  • Adding a time calculation
  • Creating a single value filter

There’s a time and place to use each, and in this post, we’ll show you some examples of how each of these time filters work.

1) Adding a time calculation

If your data includes different levels of granularity (i.e. year, quarter, month) then you can add a Time Calculation to compare your data over these periods.

First, we need to create a chart and add our measure. For this example, we’ll select Gross Margin. We also need to add Date as our time dimension.


 

Next, we’ll select ‘Add Time Calculation’ and choose ‘Previous Year’ from the list.

When we do this, nothing happens to our chart. It still shows our Gross Margin for All. We receive the following notification both in the chart and in the builder panel: “The Date dimension must be drilled to Year or lower level for Gross Margin – PY to appear.”

We need to drill down to a lower level of granularity so that our chart can compare this year’s data with the previous year’s.

Once we drill down using the drill down icon within the chart, we can see our data compared with previous years. If we choose, we can continue to drill down to lower levels of granularity.

We can select any two measures in our chart, and a variance breakdown appears showing us the difference between the two data points.

Adding a variance

Another way to compare data from previous periods is to add a variance to your chart.

Following the previous example, we can remove the Time Calculation we added, then select our chart, and choose the ‘Add Variance’ option.

We can leave our Measures to ‘All Measures in Use (Dynamic)’, and for the dimension, we can select ‘Time’.

When we do that, we’re given all the different time variations such as Current Period, Previous Period, Previous Year, and so on. Select “Previous Year” for the Date dimension and click on Ok.

Now, our chart displays the variance based on the time period we selected.

 

Depending on your preference, there are different display options you can choose for your variance chart.

We can further drill down into any of our ranges to see a lower level of granularity and the variance will be calculated accordingly.

2) Creating a single value filter

As mentioned previously, there’s another way you can filter your visualization using time filters. First, we looked at adding a time calculation and a variance. The other way is to create a single value filter.

This option is useful for when you’re using a numeric chart, which doesn’t allow you to add a time dimension. Let’s look at an example.

We’ll start by creating a numeric point chart and selecting Gross Margin as our measure. However, this chart is displaying our total Gross Margin for all time periods, which is not what we want. Rather we want to filter the chart by time, specifically 2015.

We can also add a variance so that we can see how 2015’s Gross Margin compares to 2014’s.

To do this, we start by selecting our visualization and then adding a Time (Member) filter.

This allows us to select the exact date for which we want to filter. In this example, we want to filter our visualization by 2015.

 

When we select OK, the time filter is enabled and our visualization restricts the measure by the time we selected.


 

Next, we can follow the same workflow described in the previous example and add a variance at the year granularity, which will show us how this year’s Gross Margin compares with the previous year.


 

We’ve shown you two ways to filter your data by time, as well as how to add a variance to compare two values.

Need additional assistance? Contact us today!


Leveraging Python in Tableau

After Tableau announced the beta release of TabPy (https://github.com/tableau/TabPy), a new API that enables evaluation of Python code from within a Tableau workbook, I integrated Python in Tableau to build two predictive models: one that predicts whether a patient diagnosed with hepatitis is dead or alive, and one on HR data that predicts employee attrition (whether an employee is likely to leave or not). My colleague Puxin and I created the dashboards starting from this blog (https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916) and tweaked the notebook to show the results.

For the hepatitis dashboard we used a publicly available dataset with 155 samples; each sample has 1 target and 20 features. After correlation analysis, we shrank the number of features to 7 to avoid overfitting: ‘ALKPHOSPHATE’, ‘SGOT’, ‘ALBUMIN’, ‘PROTIME’, ‘BILIRUBIN’, ‘LIVERBIG’, ‘LIVERFIRM’. Scaler transformation and cross-validation are used to make the model more stable and robust. We started by trying linear SVM and RBF-kernel SVM, measuring the models with statistics such as F1, accuracy, precision, recall, and ROC AUC. With a grid search over hyperparameters, we reached 0.858 accuracy and 0.706 ROC AUC, which is not bad. We then worked on logistic regression and naive Bayes with the same process, but they did not give a better result. At the end of the experiments, the gradient boosting model gave the best result, 0.994 accuracy and 0.984 ROC AUC, so we decided to use it as the final model in the dashboard. When users input the features on the dashboard, the data is sent to the model backend and a prediction is returned, shown under Result, indicating whether the patient is dead or alive.

For the HR dashboard we used similar models. The dataset has 1,470 samples, and each sample has 1 target, ‘Attrition’, and 34 features. After correlation analysis, we shrank the number of features to 10 to avoid overfitting: ‘EducationField’, ‘OverTime’, ‘EnvironmentSatisfaction’, ‘JobSatisfaction’, ‘StockOptionLevel’, ‘WorkLifeBalance’, ‘Age’, ‘TotalWorkingYears’, ‘MonthlyIncome’, ‘YearsWithCurrManager’. Scaler transformation and cross-validation are used to make the model more stable and robust. We started from linear SVM and logistic regression, measuring the models with statistics such as F1, precision, recall and ROC AUC. Based on a grid search over hyperparameters, we chose the best SVM and logistic regression models. We also used LightGBM, currently one of the best gradient boosting machines. In the end, we ensembled these three models to predict whether an employee is going to stay or not.
 
 

Below is the calculated field; the attached Jupyter notebook contains all the Python code used for model training and evaluation.

Calculated field :

SCRIPT_STR("return tabpy.query('Predict', _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9, _arg10)['response']",
[age], [totalWorkingYears], [monthlyIncome], [yearsWithCurrManager], [environmentSatisfaction],
[jobSatisfaction], [stockOptionLevel], [workLifeBalance], [educationField], [overTime])
 

SAP Business Planning and Consolidation Overview of BPC 11.0, version for SAP BW/4HANA

BPC 11 is not just an upgrade, it’s a different product

The release of BW/4HANA in 2016 introduced great capabilities for S/4HANA users, but until now it was missing a crucial piece: Planning and Consolidation capabilities designed specifically for BW/4HANA.

 
 

SAP BPC 11.0, version for SAP BW/4HANA, is a simplified planning offering.
 

With the release of BPC 11, BW/4HANA now has fully developed Planning and Consolidation capabilities that are above and beyond prior versions of BPC. Highlights include:

  • Web worksheets are now available, allowing browser-based planning and reporting, with multiple planning objects in one screen
  • Integration with SAP Analytics Cloud means syncing plan data from end users who are purely cloud-based
  • New customizable HTML5 interface which includes visualization capabilities and task navigation similar to Business Process Flows
  • Advanced performance achieved through HANA processing and Advanced Data Store Objects (aDSO, no longer InfoCubes or classic DSO)
  • Standard and Embedded model choices are still available
  • Microsoft Excel integration still available with Analysis for Office and EPM interfaces


 

Advanced Analytics:

SAP BPC 11.0 brings built-in, real-time analytics into the application’s Web Client.  Users can easily create graphs and charts to visualize variances.  Additionally, these can be shared amongst colleagues and attached to reports or pinned to the home screen as a favorite.

One key benefit to emphasize is that by integrating with SAP Analytics Cloud, BPC can directly utilize the Predictive Analytics functionality to generate new insights from Plan and Actuals data. Predictive Analytics is also helpful for uncovering the true drivers of business variance.

 

The predictive analytics has the potential to be a game changer for finance.  Business users can take their actuals and plans and run them through predictive processes and gain meaningful insights using machine learning.  Before running a budget or forecast, finance departments can leverage the predictive functions to see where their current data set aims to take them and then uncover influencers and business challenges.
 

The Cloud

With the world moving rapidly towards the cloud, SAP BPC 11.0 offers a tempered pace at which organizations can move their planning and consolidation platform.  This latest release is available not just on the SAP HEC, but also on AWS, Azure, Virtustream, and other cloud providers.  Finance organizations can have rapid deployments due to the ease of cloud enablement as well as utilize hybrid scenarios, which are quite popular.  The platform can start on premise and then move to the cloud.  Additionally, the integration with SAP Analytics Cloud allows organizations to have part of their platform (SAP BPC) on premise/cloud and then integrate with a public cloud platform for seamless integration.

Integration

Integration has been discussed a few times and it’s more than just bringing data into SAP BPC.  Finance users will clearly benefit from being able to bring data in from SAP S/4HANA and other SAP data sources and non-SAP data sources via the integration that is native in SAP BW/4HANA.  The key is what to do with the integration and why this matters to finance.

Accountants, analysts, controllers, and others will have the ability to bring all actuals (from ERP, logistics, marketing systems, human resources, and others) into SAP BPC to get a true reflection of actuals and compare to plans.  What this leads to ultimately is one powerful financial planning system that allows organizations to perform true Integrated Financial Planning.

We all know that the CFO resides not just in finance, but in every department (like sales and marketing), every geography, every region—everyone has a finance team providing true fiscal guidance.  Those silos now have the ability to come together in this integrated platform and provide a universal platform for planning.

SAP BW/4HANA also has the ability to capture things like sentiment analysis, which can be groundbreaking for finance individuals. Such analysis enables them to use market sentiments from Social Media and other sources to judge the direction of the budgets based on real-time market feedback.

Bottom Line

SAP BPC 11.0 with SAP BW/4HANA is poised to be quite the game changer for the office of the CFO and their ability to be a business partner across the enterprise.  With all the tools at their disposal, gone are the days when finance responds with “I’ll get back to you.” Now, they can lead and help drive business decisions.

-Bharat Sunkari 

How to import Large CSV files into SAP Analytics Cloud


In SAP Analytics Cloud, the default maximum upload file size is 100 MB for Excel and 250 MB for CSV. The following procedure shows how to load larger files and create a model.

Step 1 – Split the large file into multiple smaller CSV files based on a partition object; in this example I used Year and Month

Step 2 – Create a Model based on the first file and save the model in Analytics Cloud

Step 3 – Select the model and click import data as shown in the screenshot below. 

Step 4 – Select the option, “import the data from file.”

Step 5 – Make sure the target model is the same one you saved in Step 2, select the second source file, and click Import.

Step 6 – The model will auto-map; click Finish Mapping and save the model. (Make sure the import mode is “Appended.”)

Repeat the same steps, for all the files.

Sample Story showing all the months and years data.

Need additional assistance? Contact us today!

Consuming Analytics Cloud Models in Analysis Office


With the Analysis 2.5 release, it is now possible to connect to and consume SAP Analytics Cloud models in Analysis Office.

Note that the following functions are not supported in Analysis:

  • You cannot plan data.
  • You cannot create exception aggregation on the fly (but you can use exception aggregation defined in the model).
  • You cannot create the dynamic calculations Rank and Olympic Rank.

The following restrictions apply for the SAP Analytics Cloud models:

  • Analysis only supports the usage of the default currency (similar to cloud chart behavior). You cannot access non-converted currencies (similar to a cloud table).
  • You cannot consume defined thresholds from the model definition. In Analysis, you can use conditional formatting and Table Design formatting.
  • Live Data Models are not supported

 

Step by Step Process

Creating a new connection

  • Launch Analysis and click Select Data Source

 

  • Right-click and, from the context menu, select New SAC connection

 
 

  • Provide the SAC logon URL (friendly URL format – *..sapbusinessobjects.cloud)

 

  • Select the required Model

 

Tek Analytics to Provide Partner Managed Cloud for SAP® Software

To offer software subscription delivery of SAP solutions in a private cloud hosted by Tek Analytics

Chicago – November 21st, 2017 – Tek Analytics announced today an agreement with SAP (NYSE: SAP) to provide a partner managed cloud for SAP® software solutions. Tek Analytics will offer an affordable, ready-to-use, subscription-based offering of SAP solutions in its private hosted cloud, enabling customers to access SAP software quickly and easily without the up-front infrastructure costs associated with on-premise software deployments.

“We can now offer our customers an end-to-end hosted service for SAP solutions, from software acquisition to ongoing support, all with one cloud delivery model,” said Bharat Sunkari, Managing Partner/CEO, Tek Analytics. “Through our partner managed cloud, we will enable customers to rapidly benefit from the latest innovative SAP analytics, predictive analytics and IoT solutions without the need for capital expenditures. This will help customers accelerate their digital core strategies and leverage SaaS, with custom rapid-deployment solutions from Tek Analytics.”

As a provider of a partner managed cloud, Tek Analytics will provide customers with a cost-effective, on-demand, cloud-based delivery model for their SAP software solutions. Tek Analytics will supplement these services with application migration, implementation, management and support services.

About Tek Analytics

Tek Analytics is a technology services company with core competencies in BI analytics, big data, data strategy & architecture, visualization, enterprise cloud and predictive analytics, helping clients find key insights and trends within their data to solve business problems and make better business decisions. We are an SAP-certified partner, providing comprehensive services and solutions to clients. Headquartered in Chicago, Tek Analytics’ proven competency in SAP solutions and commitment to excellence enables us to build trusted and lasting relationships with our clients and partners.

For more information on this press release, contact [email protected]

SAP HANA DYNAMIC TIERING PART-3

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 3

Now, in technical terms, how do we create tables in extended storage?

Creating Hot & warm tables in Extended Storage

 
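As an illustration (schema and table names are made up), a hot in-memory table and its warm counterpart in extended storage can be created as follows; the USING EXTENDED STORAGE clause is what places the table on the dynamic tiering host.

-- Hot table: regular HANA in-memory column store
CREATE COLUMN TABLE "SALES"."ORDERS_HOT" (
    ORDER_ID   INTEGER NOT NULL,
    ORDER_DATE DATE    NOT NULL,
    AMOUNT     DECIMAL(15,2),
    PRIMARY KEY (ORDER_ID)
);

-- Warm table: same structure, stored in dynamic tiering extended storage
CREATE TABLE "SALES"."ORDERS_WARM" (
    ORDER_ID   INTEGER NOT NULL,
    ORDER_DATE DATE    NOT NULL,
    AMOUNT     DECIMAL(15,2),
    PRIMARY KEY (ORDER_ID)
) USING EXTENDED STORAGE;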

Tables created in extended storage show up in the schema catalog alongside the in-memory tables, with the extension “EXTENDED”.

Once you have the extended table, let’s create a SQL procedure to move data from the in-memory (hot) table to the extended storage (warm) table.
 
  
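A simple SQLScript sketch of such a movement procedure is shown below, using the illustrative ORDERS_HOT/ORDERS_WARM tables from above; in practice the cutoff would follow your aging policy.

-- Moves rows older than a cutoff date from the hot table to the warm (extended) table
CREATE PROCEDURE "SALES"."MOVE_ORDERS_TO_WARM" (IN iv_cutoff DATE)
  LANGUAGE SQLSCRIPT AS
BEGIN
  INSERT INTO "SALES"."ORDERS_WARM"
    SELECT * FROM "SALES"."ORDERS_HOT" WHERE ORDER_DATE < :iv_cutoff;
  DELETE FROM "SALES"."ORDERS_HOT" WHERE ORDER_DATE < :iv_cutoff;
END;

-- Example: age out everything before 2016
CALL "SALES"."MOVE_ORDERS_TO_WARM"('2016-01-01');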

Let’s create a view to retrieve data from both the hot and warm stores.
 
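A plain UNION ALL view over the two tables gives consumers a single logical entry point regardless of where the data physically sits (again using the illustrative table names from above).

-- One view over both the hot and the warm store
CREATE VIEW "SALES"."V_ORDERS_ALL" AS
  SELECT ORDER_ID, ORDER_DATE, AMOUNT FROM "SALES"."ORDERS_HOT"
  UNION ALL
  SELECT ORDER_ID, ORDER_DATE, AMOUNT FROM "SALES"."ORDERS_WARM";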

Now let’s talk about SAP HANA 2.0 – Dynamic Tiering

How is a Multistore table in HANA 2.0 different from extended table in HANA 1.0 ?

MULTISTORE TABLES FOR AGING DATA MANAGEMENT FROM HANA 2.0 SPS00+

By using a multistore table in your HANA system, you can “age out” older data using the SAP HANA Dynamic Tiering functionality while still being able to access and update that older historical data, without needing to maintain two separate systems for current and historical data. Historical data is still needed sporadically for business analytics and decision-making, and organizations may also need to keep older data for auditing and compliance requirements.

 

SIMPLIFIED STORAGE USING PARTITIONING FOR BETTER PERFORMANCE OPTIMIZATION

Each multistore table contains at least two partitions: one in the SAP HANA default storage and one in extended storage. To fully utilize multistore table capabilities, the table needs to be partitioned: you choose which partitions experience the most frequent use and store those partitions in default storage (in-memory), and you can then move data that is accessed less often to less expensive storage, such as extended storage, at whatever frequency you need (daily, weekly, monthly, annually, and so on). This lets you move large amounts of data within a database and free up in-memory storage for performance. For tasks that require modifications to old historical data, you can reverse the process (an “anti-aging” of data) by bringing this data back out of extended storage into default storage.

When using a multistore table, the data is presented as a single table, and you can perform DML operations (such as inserting, deleting, and updating) regardless of whether the data is hot or cold. Tables without the time selection property do not allow you to restrict an UPDATE statement to just the column store or extended store partition of a multistore table, which can result in slower performance. For tables with time selection, however, you can use the WITH RANGE RESTRICTION (CURRENT |) clause to restrict the operation to partitions that are either current or more recent than a specified date.

The faster and recommended method to move aged data is by partitioning, where you periodically move entire aged partitions to extended storage as they become old and historical in terms of their value. Such partitions can be very large, and this method can move them at close to 1 terabyte per hour. You can also create a multistore table from an existing non-partitioned column store HANA table.
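A hedged sketch of what such a multistore definition can look like is below: the recent range and the OTHERS partition stay in default (in-memory) storage, while the aged range lives in extended storage. The table name and ranges are made up, and the exact partition clause varies by revision, so check the CREATE TABLE documentation in the Dynamic Tiering guides before using it.

-- Multistore table: hot partitions in default storage, aged partition in extended storage
CREATE TABLE "SALES"."ORDERS_MS" (
    ORDER_ID   INTEGER NOT NULL,
    ORDER_DATE DATE    NOT NULL,
    AMOUNT     DECIMAL(15,2)
)
PARTITION BY RANGE (ORDER_DATE) (
    USING DEFAULT STORAGE (
        PARTITION '2017-01-01' <= VALUES < '2018-01-01',
        PARTITION OTHERS
    )
    USING EXTENDED STORAGE (
        PARTITION '2015-01-01' <= VALUES < '2017-01-01'
    )
);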

 

How many Multistore Table Partition Types?

Multistore tables support single and multilevel partitioning.

  1. Range Partitioning (Single)
  2. Hash-Range and Range-Range Partitioning (multilevel)
  3. Time-Selection Partitioning (multilevel)

WHAT IS A DELTA STORE IN HANA 2.0 & HOW DOES IT PLAY A ROLE IN DYNAMIC TIERING?

The delta store uses row-level versioning (RLV) to allow more than one user to modify the same table concurrently, eliminating waits for transaction locks. RLV utilizes multi-version concurrency control (MVCC), for version management at the row level. The delta store can contain extended tables or multistore tables. You may create the delta store when you create extended storage, or you may alter extended storage to add a delta store later.

DYNAMIC TIERING EXTENDED STORAGE VIEW AT HIGH LEVEL IN HANA 1.0 VS 2.0
 

DYNAMIC TIERING SIZING 

Dynamic Tiering system sizes are based on raw data size.

These sizes are a starting point; adjust cores (and memory) based on workload requirements.

    

Sizing “Rules of Thumb” for Dynamic Tiering

Storage
  • 2.5x data compression factor (raw data compressed to <40% of its original size)
  • Metadata and versioning space: 5% of compressed data
  • Temp store: 20% of compressed data
  • For any DT system size, ensure that the storage system can provide 50 MB/sec/core of throughput
  • Size the RLV transaction log at 8 GB * 2 * (number of cores)

Cores
  • 1.5 cores / concurrent query
  • 1 core can load 10 MB/sec of raw data
  • In the sizing boxes above, round up the number of cores to match a hardware vendor’s most compatible server model

RAM
  • 16 GB / core (assumes use of delta-enabled extended tables for concurrent writes)

Network
  • 10 Gbit/sec dedicated network between HANA and the DT server
  • Add networks as needed, so that the HANA-to-DT network is completely isolated from the storage network

BACKUP & DISASTER RECOVERY (HANA 1.0 & 2.0)

  • HANA backup manages the backup of both the hot and warm stores
  • Point-in-time recovery is supported*
  • Data backups together with log backups allow a restore to a point in time or to the most recent point before a crash.
  • Data backups alone only allow a restore to the (n-1)th backup if the most recent backup is incomplete due to a crash.
  • Recovery from storage failures depends on the storage vendor’s disk mirroring and fault tolerance capabilities.
  • SAP HANA systems with extended storage support system replication from HANA 2.0 SPS00, a high availability (HA) feature that maintains a secondary system for fault and disaster recovery support.
  • There are a few limitations for Dynamic Tiering from a system replication perspective in the initial versions.

*In the case of HANA 1.0, HANA Dynamic Tiering does not support HANA system replication.

  • Dynamic Tiering only supports System PKI SSL for system replication. To set up System PKI for internal system replication for Dynamic Tiering, follow the steps in “Secure Internal Communication between Sites in System Replication Scenarios” in the SAP HANA Security Guide.
  • SAP HANA Dynamic Tiering is not recommended to be used in production within MDC systems up to SPS12

*For a list of system replication features that Dynamic Tiering supports, see SAP Note 2356851 

Limitations on SAP HANA Landscape when using Dynamic Tiering:

https://help.sap.com/viewer/88f82e0d010e4da1bc8963f18346f46e/2.0.02/en-US/ddc2f2a4f47c4253b302d349293bd422.html

 

SAP HANA Dynamic Tiering 2.0 SPS 02 (released July 26, 2017) is the latest available version, with performance and high availability improvements.

SPS 02 brings some exciting innovations to SAP HANA 2.0 Dynamic Tiering:

  • Two-tier asynchronous system replication
  • Three-tier system replication
  • New option for hypervisor vendors to self-certify their virtualization environments for SAP HANA dynamic tiering.
  • Improved performance of data transfer from the dynamic tiering server back to HANA (anti-aging)
  • Smarter caching for cross-store join operations (caching of HANA data in dynamic tiering, to improve performance of repetitive queries that join data in HANA with data in dynamic tiering)

For more information on SAP HANA 2.0 SPS 02, check out the Live Expert Series: What’s New in SAP HANA 2.0 SPS 02

Conclusion

Dynamic Tiering is continuously evolving to be a completely integrated data aging solution for SAP HANA to manage large volumes of aged data.

*Please note that at this point in time, Dynamic Tiering is not supported for SAP Business Suite on HANA and S/4HANA. As Dynamic Tiering evolves to become more native to the HANA database, these applications will likely pick it up for large-data-volume use cases in the future. There are, however, various other approaches to handling cold and hot data for SAP Business Suite on HANA and S/4HANA.

References:

SAP HANA Dynamic Tiering Guides

SAP HANA Administration Guide

SAP HANA Dynamic Tiering Installation Guide

SAP HANA Dynamic Tiering Administration Guide

SAP HANA Dynamic Tiering Option Master Guide

SAP HANA Dynamic Tiering Option release notes

Let Tek Analytics Implement the solution for you. Please contact us @ [email protected] 

* indicates functionality limitations/restrictions applicable.
DT is an abbreviation for Dynamic Tiering.
HA is an abbreviation for High Availability.

 

 

Need additional assistance? Contact us today!