Consuming HANA Data Sources from Power BI Desktop

Step 1 – Launch Power BI Desktop

Step 2 – Click Get Data

Step 3 – Select Database > SAP HANA database

Step 4 – Enter the server details and select the connection type

Import = imports the data into Power BI (refreshes can also be scheduled)

DirectQuery = executes queries against HANA on demand whenever a user refreshes the data (real-time)

Step 5 – Select the type of credentials and click Connect

To use Kerberos Windows AD SSO – select "Use my current credentials"

To use a HANA database user ID and password – select "Database" and enter the user ID and password

Step 6 – Upon successful connection, all the calculation views you are authorized for will be displayed

Step 7 – Select the calculation view and click Load

All the data will now be available for visualizations.

Filtering Data by Time in SAP Analytics Cloud


In SAP Analytics Cloud, you can add a time calculation to your chart without using the calculation editor. This allows you to create quick comparisons with your data such as Previous Year, Period over Period, Year to Date, and so on.

There are two ways to filter your visualizations using time filters:

  • Adding a time calculation
  • Creating a single value filter

There’s a time and place to use each, and in this post, we’ll show you some examples of how each of these time filters works.

1) Adding a time calculation

If your data includes different levels of granularity (e.g., year, quarter, month), you can add a Time Calculation to compare your data over these periods.

First, we need to create a chart and add our measure. For this example, we’ll select Gross Margin. We also need to add Date as our time dimension.


 

Next, we’ll select ‘Add Time Calculation’ and choose ‘Previous Year’ from the list.

When we do this, nothing happens to our chart. It still shows our Gross Margin for All. We receive the following notification both in the chart and in the builder panel: “The Date dimension must be drilled to Year or lower level for Gross Margin – PY to appear.”

We need to drill down to a lower level of granularity so that our chart can compare this year's data with the previous year's.

Once we drill down using the drill down icon within the chart, we can see our data compared with previous years. If we choose, we can continue to drill down to lower levels of granularity.

We can select any two measures in our chart, and a variance breakdown appears showing us the difference between the two data points.

Adding a variance

Another way to compare data from previous periods is to add a variance to your chart.

Following the previous example, we can remove the Time Calculation we added, then select our chart, and choose the ‘Add Variance’ option.

We can leave our Measures set to ‘All Measures in Use (Dynamic)’, and for the dimension, we can select ‘Time’.

When we do that, we’re given all the different time variations such as Current Period, Previous Period, Previous Year, and so on. Select “Previous Year” for the Date dimension and click OK.

Now, our chart displays the variance based on the time period we selected.

 

Depending on your preference, there are different display options you can choose for your variance chart.

We can further drill down into any of our ranges to see a lower level of granularity and the variance will be calculated accordingly.

2) Creating a single value filter

As mentioned previously, there’s another way you can filter your visualization using time filters. First, we looked at adding a time calculation and a variance. The other way is to create a single value filter.

This option is useful for when you’re using a numeric chart, which doesn’t allow you to add a time dimension. Let’s look at an example.

We’ll start by creating a numeric point chart and selecting Gross Margin as our measure. However, this chart displays our total Gross Margin for all time periods, which is not what we want. Rather, we want to filter the chart by time, specifically 2015.

We can also add a variance so that we can see how 2015’s Gross Margin compares to 2014’s.

To do this, we start by selecting our visualization and then adding a Time (Member) filter.

This allows us to select the exact date for which we want to filter. In this example, we want to filter our visualization by 2015.

 

When we select OK, the time filter is enabled and our visualization restricts the measure by the time we selected.


 

Next, we can follow the same workflow described in the previous example and add a variance at the year granularity, which will show us how this year’s Gross Margin compares with the previous year.


 

We’ve shown you two ways to filter your data by time, as well as how to add a variance to compare two values.

Need additional assistance? Contact us today!

Leveraging Python in Tableau

Following the announcement of the beta release of TabPy (https://github.com/tableau/TabPy), a new API that enables evaluation of Python code from within a Tableau workbook, I integrated Python in Tableau to build two predictive models: one that predicts whether a patient diagnosed with hepatitis will survive, and one on HR data that predicts employee attrition, i.e., whether an employee is likely to leave. My colleague Puxin and I created the dashboards starting from this blog post (https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916) and tweaked the notebook to show the results.

For the hepatitis dashboard we used a publicly available dataset with 155 samples, each having 1 target and 20 features. After correlation analysis, we shrank the feature set to 7 to avoid overfitting: ‘ALKPHOSPHATE’, ‘SGOT’, ‘ALBUMIN’, ‘PROTIME’, ‘BILIRUBIN’, ‘LIVERBIG’, ‘LIVERFIRM’. Scaler transformation and cross-validation are used to make the model more stable and robust. We started by trying a linear SVM and an RBF-kernel SVM, measuring the models with statistics like F1, accuracy, precision, recall, and ROC AUC. With a grid search over hyperparameters, we ended up at 0.858 accuracy and 0.706 ROC AUC, which is not bad. We then worked on logistic regression and naive Bayes through the same process, but they didn’t give us a better result. At the end of the experiments, the gradient boosting model gave the best result, 0.994 accuracy and 0.984 ROC AUC, so we decided to use it as the final model in the dashboard. When users input the features on the dashboard, the data is sent to the model back end, which returns a prediction, shown under Result, of whether the patient is dead or alive.

For the HR dashboard we used similar models. The dataset has 1,470 samples, each with 1 target, ‘Attrition’, and 34 associated features. After correlation analysis, we shrank the feature set to 10 to avoid overfitting: ‘EducationField’, ‘OverTime’, ‘EnvironmentSatisfaction’, ‘JobSatisfaction’, ‘StockOptionLevel’, ‘WorkLifeBalance’, ‘Age’, ‘TotalWorkingYears’, ‘MonthlyIncome’, ‘YearsWithCurrManager’. Scaler transformation and cross-validation are again used to make the models more stable and robust. We started from a linear SVM and logistic regression, measuring the models with statistics like F1, precision, recall, and ROC AUC, and chose the best SVM and logistic regression via grid search over hyperparameters. We also used LightGBM, a state-of-the-art gradient boosting library. At the end, we ensembled these 3 models to predict whether an employee is going to stay or leave.
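For illustration, here is a condensed Python sketch of the HR training-and-deployment flow (not the exact notebook). The CSV path, grid values, and helper names are assumptions; the endpoint name 'Predict' and the ten-argument order match the calculated field shown below.

# A minimal sketch, assuming scikit-learn and the beta-era tabpy_client package.
import pandas as pd
import tabpy_client
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["Age", "TotalWorkingYears", "MonthlyIncome", "YearsWithCurrManager",
            "EnvironmentSatisfaction", "JobSatisfaction", "StockOptionLevel",
            "WorkLifeBalance", "EducationField", "OverTime"]

df = pd.read_csv("hr_attrition.csv")          # hypothetical file name
X = pd.get_dummies(df[FEATURES])              # one-hot encode the categorical features
y = (df["Attrition"] == "Yes").astype(int)

# Scale, then grid-search a gradient boosting model with 5-fold cross-validation,
# scoring on ROC AUC as described above (grid values are illustrative).
pipe = Pipeline([("scale", StandardScaler()),
                 ("gbm", GradientBoostingClassifier())])
grid = GridSearchCV(pipe,
                    {"gbm__n_estimators": [100, 300], "gbm__max_depth": [2, 3]},
                    scoring="roc_auc", cv=5).fit(X, y)

def predict(age, working_years, income, years_with_mgr, env_sat,
            job_sat, stock_level, work_life, edu_field, overtime):
    # TabPy passes each argument as a list with one element per mark.
    rows = pd.DataFrame(dict(zip(FEATURES, [age, working_years, income,
                                            years_with_mgr, env_sat, job_sat,
                                            stock_level, work_life,
                                            edu_field, overtime])))
    # Re-encode and align to the training columns.
    rows = pd.get_dummies(rows).reindex(columns=X.columns, fill_value=0)
    return ["Leaving" if p == 1 else "Staying" for p in grid.predict(rows)]

# Publish the model so Tableau can call tabpy.query('Predict', ...);
# override=True lets you republish after retraining.
client = tabpy_client.Client("http://localhost:9004/")
client.deploy("Predict", predict, "HR attrition prediction", override=True)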

Below is the calculated field; the attached Jupyter notebook contains all the Python code used for model training and evaluation.

Calculated field:

SCRIPT_STR("return tabpy.query('Predict', _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9, _arg10)['response']",
[age], [totalWorkingYears], [monthlyIncome], [yearsWithCurrManager], [environmentSatisfaction],
[jobSatisfaction], [stockOptionLevel], [workLifeBalance], [educationField], [overTime])
 

SAP Business Planning and Consolidation: Overview of BPC 11.0, version for SAP BW/4HANA

BPC 11 is not just an upgrade, it’s a different product

The release of BW/4HANA in 2016 introduced great capabilities for S/4HANA users, but until now it was missing a crucial component: Planning and Consolidation capabilities designed specifically for BW/4HANA.

 
 

SAP BPC 11.0, version for SAP BW/4HANA, is a simplified planning offering.

With the release of BPC 11, BW/4HANA now has fully developed Planning and Consolidation capabilities that are above and beyond prior versions of BPC. Highlights include:

  • Web worksheets are now available, allowing browser-based planning and reporting with multiple planning objects on one screen
  • Integration with SAP Analytics Cloud means syncing plan data from end users who are purely cloud-based
  • A new customizable HTML5 interface includes visualization capabilities and task navigation similar to Business Process Flows
  • Advanced performance achieved through HANA processing and Advanced Data Store Objects (ADSOs; no longer InfoCubes or classic DSOs)
  • Standard and Embedded model choices are still available
  • Microsoft Excel integration is still available with the Analysis for Office and EPM interfaces


 

Advanced Analytics

SAP BPC 11.0 brings built-in, real-time analytics into the application’s Web Client.  Users can easily create graphs and charts to visualize variances.  Additionally, these can be shared amongst colleagues and attached to reports or pinned to the home screen as a favorite.

One key benefit to emphasize is that by integrating with SAP Analytics Cloud, BPC can directly utilize the Predictive Analytics functionality to generate new insights from Plan and Actuals data. Predictive Analytics is also helpful for uncovering the true drivers of business variance.

 

Predictive analytics has the potential to be a game changer for finance.  Business users can take their actuals and plans, run them through predictive processes, and gain meaningful insights using machine learning.  Before running a budget or forecast, finance departments can leverage the predictive functions to see where their current data set is headed, and then uncover influencers and business challenges.
 

The Cloud

With the world moving rapidly towards the cloud, SAP BPC 11.0 offers a tempered pace at which organizations can move their planning and consolidation platform.  This latest release is available not just on the SAP HEC, but also on AWS, Azure, Virtustream, and other cloud providers.  Finance organizations can deploy rapidly thanks to the ease of cloud enablement, and can also use the popular hybrid scenarios: the platform can start on premise and then move to the cloud.  Additionally, organizations can keep part of their platform (SAP BPC) on premise or in a private cloud and integrate seamlessly with SAP Analytics Cloud as the public cloud component.

Integration

Integration has been discussed a few times and it’s more than just bringing data into SAP BPC.  Finance users will clearly benefit from being able to bring data in from SAP S/4HANA and other SAP data sources and non-SAP data sources via the integration that is native in SAP BW/4HANA.  The key is what to do with the integration and why this matters to finance.

Accountants, analysts, controllers, and others will have the ability to bring all actuals (from ERP, logistics, marketing systems, human resources, and others) into SAP BPC to get a true reflection of actuals and compare to plans.  What this leads to ultimately is one powerful financial planning system that allows organizations to perform true Integrated Financial Planning.

We all know that the CFO resides not just in finance, but in every department (like sales and marketing), every geography, every region—everyone has a finance team providing true fiscal guidance.  Those silos now have the ability to come together in this integrated platform and provide a universal platform for planning.

SAP BW/4HANA also has the ability to capture things like sentiment analysis, which can be groundbreaking for finance individuals. Such analysis enables them to use market sentiments from Social Media and other sources to judge the direction of the budgets based on real-time market feedback.

Bottom Line

SAP BPC 11.0 with SAP BW/4HANA is poised to be quite the game changer for the office of the CFO and their ability to be a business partner across the enterprise.  With all the tools at their disposal, gone are the days when finance responds with “I’ll get back to you.” Now, they can lead and help drive business decisions.

-Bharat Sunkari 

How to Import Large CSV Files into SAP Analytics Cloud

In SAP Analytics Cloud, the default maximum upload file size is 100 MB for Excel and 250 MB for CSV. The following procedure loads larger files and creates a model.

Step 1 – Split the large file into multiple smaller CSV files based on a partition column; in this example I used Year and Month (see the sketch after these steps)

Step 2 – Create a Model based on the first file and save the model in Analytics Cloud

Step 3 – Select the model and click Import Data, as shown in the screenshot below.

Step 4 – Select the option "Import the data from file."

Step 5 – Make sure the target model is the one you saved in Step 2, select the second source file, and click Import.

Step 6 – The model will auto-map; click Finish Mapping and save the model. (Make sure the import method is set to Append.)

Repeat the same steps for all the remaining files.
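As a sketch of the splitting in Step 1, assuming pandas is available; the source file name and the Year/Month column names are hypothetical:

# Split one large CSV into one smaller CSV per Year/Month partition.
import pandas as pd

df = pd.read_csv("large_source.csv")
for (year, month), part in df.groupby(["Year", "Month"]):
    part.to_csv(f"sales_{year}_{int(month):02d}.csv", index=False)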

A sample story showing the data for all months and years:

Need additional assistance? Contact us today!

Consuming Analytics Cloud Models in Analysis Office

With the Analysis for Office 2.5 release, it is now possible to connect to and consume SAP Analytics Cloud models in Analysis for Office.

Note that the following functions are not supported in Analysis:

  • You cannot plan data.
  • You cannot create exception aggregation on the fly (but you can use exception aggregation defined in the model).
  • You cannot create the dynamic calculations Rank and Olympic Rank.

The following restrictions apply for the SAP Analytics Cloud models:

  • Analysis only supports use of the default currency (similar to cloud chart behavior). You cannot access non-converted currencies (similar to a cloud table).
  • You cannot consume thresholds defined in the model definition. In Analysis, you can use conditional formatting and Table Design formatting instead.
  • Live data models are not supported.

 

Step by Step Process

Creating a new connection

  • Launch Analysis and click Select Data Source

 

  • Right-click and, in the context menu, select New SAC connection

 
 

  • Provide the SAC logon URL (friendly URL format – *..sapbusinessobjects.cloud)

 

  • Select the required Model

 

SAP HANA DYNAMIC TIERING PART-3

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 3

Now, in technical terms: how do you create tables in extended storage?

Creating Hot & warm tables in Extended Storage
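The original screenshot isn't reproduced here; the following is a minimal SQL sketch of the idea, with hypothetical table and column names – a regular in-memory hot table, plus a warm table created in extended storage.

-- Hot table in the regular HANA in-memory column store (hypothetical names).
CREATE COLUMN TABLE "SALES_HOT" (
  "ORDER_ID"   INTEGER PRIMARY KEY,
  "ORDER_DATE" DATE,
  "AMOUNT"     DECIMAL(15,2)
);

-- Warm table placed in dynamic tiering extended storage.
CREATE TABLE "SALES_WARM" (
  "ORDER_ID"   INTEGER PRIMARY KEY,
  "ORDER_DATE" DATE,
  "AMOUNT"     DECIMAL(15,2)
) USING EXTENDED STORAGE;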

 

Tables created in extended storage show up in the schema catalog alongside in-memory tables, marked with the "EXTENDED" flag.

Once you have the extended table, let's create a SQL procedure to move data from the in-memory table into the warm extended-storage table; a sketch follows.
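A hedged sketch of such a procedure, reusing the hypothetical tables above: copy rows older than a cutoff date into the warm table, then delete them from the hot one.

CREATE PROCEDURE "MOVE_AGED_SALES" (IN cutoff DATE)
LANGUAGE SQLSCRIPT AS
BEGIN
  -- Copy aged rows into the warm extended-storage table ...
  INSERT INTO "SALES_WARM"
    SELECT * FROM "SALES_HOT" WHERE "ORDER_DATE" < :cutoff;
  -- ... then remove them from the hot in-memory table.
  DELETE FROM "SALES_HOT" WHERE "ORDER_DATE" < :cutoff;
END;

-- Example: age out everything before 2015.
CALL "MOVE_AGED_SALES"('2015-01-01');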
 
  

Next, let's create a view to retrieve data from both the hot & warm stores; a sketch follows.
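A minimal sketch with the same hypothetical names – a UNION ALL view, so consumers query one object spanning both stores:

CREATE VIEW "SALES_ALL" AS
  SELECT * FROM "SALES_HOT"
  UNION ALL
  SELECT * FROM "SALES_WARM";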
 

Now let’s talk about SAP HANA 2.0 – Dynamic Tiering

How is a multistore table in HANA 2.0 different from an extended table in HANA 1.0?

MULTISTORE TABLES FOR AGING DATA MANAGEMENT FROM HANA 2.0 SPS00+

By using a multistore table in your HANA system, you can "age out" older data with SAP HANA Dynamic Tiering while still being able to access and update it, without maintaining two separate systems for current and historical data. Historical data is still needed sporadically for business analytics and decision-making, and organizations may need to keep older data primarily for auditing and compliance requirements.

 

SIMPLIFIED STORAGE USING PARTITIONING FOR BETTER PERFORMANCE OPTIMIZATION

Each multistore table contains at least two partitions: one in SAP HANA default storage and one in extended storage. To fully utilize multistore capabilities, the table needs to be partitioned: you choose which partitions see the most frequent use and store those in default (in-memory) storage. Data accessed less often can then be moved to less expensive storage, like extended storage, at whatever frequency you choose (daily, weekly, monthly, annually, and so on), letting you move large amounts of data within the database and free up in-memory storage for performance. For tasks that require modifying old historical data, you can reverse the process ("anti-aging") by bringing that data out of extended storage and back into default storage.

When using a multistore table, the data is presented as a single table, and you can perform DML operations (such as inserting, deleting, and updating) regardless of whether the data is hot or cold. Tables without the time-selection property do not let you restrict an UPDATE statement to just the column-store or extended-store partition of a multistore table, which can result in slower performance. For tables with time selection, however, you can use the WITH RANGE RESTRICTION ('CURRENT' | '<date>') clause to restrict the operation to partitions that are either current or more recent than the specified date.

The faster and recommended method to move aged data is by partition: you periodically move entire aged partitions to extended storage as they become old and historical in terms of their value. Such partitions can be very large, and this method can move them at close to 1 terabyte per hour. You can also create a multistore table from an existing non-partitioned column-store HANA table.
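For illustration, here is a minimal sketch of a multistore table and a partition move, with hypothetical names and date ranges; exact clause spellings vary by dynamic tiering version, so treat this as an assumption to verify against the SAP HANA Dynamic Tiering administration guide.

-- Multistore table: the recent range lives in default (in-memory) storage,
-- older ranges live in dynamic tiering extended storage.
CREATE COLUMN TABLE "SALES_MS" (
  "ORDER_ID"   INTEGER,
  "ORDER_DATE" DATE,
  "AMOUNT"     DECIMAL(15,2)
)
PARTITION BY RANGE ("ORDER_DATE") (
  USING DEFAULT STORAGE (
    PARTITION '2017-01-01' <= VALUES < '2018-01-01',
    PARTITION OTHERS
  )
  USING EXTENDED STORAGE (
    PARTITION '2016-01-01' <= VALUES < '2017-01-01'
  )
);

-- Aging: once the 2017 range has cooled, move the whole partition to extended storage.
ALTER TABLE "SALES_MS"
  ALTER PARTITION '2017-01-01' <= VALUES < '2018-01-01'
  MOVE TO EXTENDED STORAGE;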

 

What partition types do multistore tables support?

Multistore tables support single and multilevel partitioning.

  1. Range Partitioning (Single)
  2. Hash-Range and Range-Range Partitioning (multilevel)
  3. Time-Selection Partitioning (multilevel)

WHAT IS A DELTA STORE IN HANA 2.0 & HOW DOES IT PLAY A ROLE IN DYNAMIC TIERING?

The delta store uses row-level versioning (RLV) to allow more than one user to modify the same table concurrently, eliminating waits for transaction locks. RLV utilizes multi-version concurrency control (MVCC), for version management at the row level. The delta store can contain extended tables or multistore tables. You may create the delta store when you create extended storage, or you may alter extended storage to add a delta store later.

DYNAMIC TIERING EXTENDED STORAGE VIEW AT HIGH LEVEL IN HANA 1.0 VS 2.0
 

DYNAMIC TIERING SIZING 

Dynamic Tiering system sizes are based on raw data size. Use the figures below as a starting point, then adjust cores (and memory) based on workload requirements.

Sizing "Rules of Thumb" for Dynamic Tiering

Storage

  • 2.5x data compression factor (raw data compresses to under 40% of its original size)
  • Metadata and versioning space: 5% of compressed data
  • Temp store: 20% of compressed data
  • For any DT system size, ensure that the storage system can provide 50 MB/sec/core of throughput
  • Size the RLV transaction log at 8 GB * 2 * (number of cores)

Cores

  • 1.5 cores per concurrent query
  • 1 core can load 10 MB/sec of raw data
  • Round up the number of cores to match a hardware vendor's most compatible server model

RAM

  • 16 GB per core (assumes use of delta-enabled extended tables for concurrent writes)

Network

  • 10 Gbit/sec dedicated network between the HANA and DT servers
  • Add networks as needed, so that the HANA-DT network is completely isolated from the storage network

As a worked example of the storage rules: 10 TB of raw data compresses to roughly 4 TB, which needs about another 0.2 TB (5%) for metadata and versioning and 0.8 TB (20%) for temp store – roughly 5 TB of DT storage in total.

BACKUP & DISASTER RECOVERY (HANA 1.0 & 2.0)

  • HANA backup manages the backup of both hot & warm stores
  • Point-in-time recovery is supported*
  • Data backups with log backups allow restore to a point in time or to the most recent point before a crash
  • Data backups alone only allow restore to the n-1 backup if the most recent backup is incomplete due to a crash
  • Recovery from storage failures depends on the storage vendor's disk mirroring & fault tolerance capabilities
  • SAP HANA systems with extended storage support system replication from HANA 2.0 SPS00, a high availability (HA) feature that maintains a secondary system for fault and disaster recovery support
  • There are a few limitations for Dynamic Tiering from a system replication perspective in the initial versions

*In the case of HANA 1.0, HANA Dynamic Tiering does not support HANA system replication.

  • Dynamic Tiering only supports System PKI SSL for system replication. To set up System PKI for internal system replication for Dynamic Tiering, follow the steps in "Secure Internal Communication between Sites in System Replication Scenarios" in the SAP HANA Security Guide.
  • SAP HANA Dynamic Tiering is not recommended for production use within MDC systems up to SPS 12

*For a list of system replication features that Dynamic Tiering supports, see SAP Note 2356851

Limitations on SAP HANA Landscape when using Dynamic Tiering:

https://help.sap.com/viewer/88f82e0d010e4da1bc8963f18346f46e/2.0.02/en-US/ddc2f2a4f47c4253b302d349293bd422.html

 

SAP HANA Dynamic Tiering 2.0 SPS 02 (released July 26, 2017) is the latest available version, with performance and high-availability improvements.

SPS 02 brings some exciting innovations to SAP HANA 2.0 Dynamic Tiering:

  • Two-tier asynchronous system replication
  • Three-tier system replication
  • New option for hypervisor vendors to self-certify their virtualization environments for SAP HANA dynamic tiering.
  • Improved performance of data transfer from the dynamic tiering server back to HANA (anti-aging)
  • Smarter caching for cross-store join operations (caching of HANA data in dynamic tiering, to improve performance of repetitive queries that join data in HANA with data in dynamic tiering)

For More information on SAP HANA 2.0 SPS 02 check out Live Expert Series: What’s New in SAP HANA 2.0 SPS 02

Conclusion

Dynamic Tiering is continuously evolving to be a completely integrated data aging solution for SAP HANA to manage large volumes of aged data.

*Please note that at this point in time, Dynamic Tiering is not supported for SAP Business Suite on HANA & S/4HANA. As Dynamic Tiering evolves to be more native to the HANA database, these applications will likely pick it up for large-data-volume use cases in the future. But there are various other approaches to handling cold & hot data for SAP Business Suite on HANA & S/4HANA.

References:

SAP HANA Dynamic Tiering Guides

SAP HANA Administration Guide

SAP HANA Dynamic Tiering Installation Guide

SAP HANA Dynamic Tiering Administration Guide

SAP HANA Dynamic Tiering Option Master Guide

SAP HANA Dynamic Tiering Option Release Notes

Let Tek-Analytics implement the solution for you. Please contact us at [email protected]

* indicates functionality limitations/restrictions applicable.
DT is an abbreviation for Dynamic Tiering.
HA is an abbreviation for High Availability.

Need additional assistance? Contact us today!

SAP HANA DYNAMIC TIERING PART-2

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 2

IN-MEMORY & EXTENDED STORAGE WITH ONE LOGICAL DB BUT DIFFERENT TIERS

 
 

SAP HANA WITH DYNAMIC TIERING VS EXTERNAL STORAGE SYSTEMS

Comparing SAP HANA Dynamic Tiering with external storage systems (like Hadoop, Spark, BigQuery):

  • Capacity: Dynamic Tiering depends on HANA disk capacity (additional storage can be purchased); external systems offer unlimited disk capacity.
  • TCO: Dynamic Tiering has a lower TCO (70% cost reduction in licenses); external systems carry a higher TCO.
  • Landscape: the dynamic tiers live in the same HANA system; external systems require additional systems.
  • Integration: Dynamic Tiering needs no additional 3rd-party integration tools; external systems require additional integration tools & licenses.
  • Writes: Dynamic Tiering supports concurrent inserts from multiple connections on extended tables, allowing multiple writers; external systems do not support concurrent inserts.
  • Data manipulation: with Dynamic Tiering, all types of data manipulation are possible even on cold/warm data (create, read, update, and delete); external systems are write-once, read-many.
  • Data movement: high-speed data movement between in-memory HANA tables & extended tables.

 

System Replication for Dynamic Tiering from HANA 2.0*

 

A QUICK LOOK AT HOW DYNAMIC TIERING WORKS WITH SAP BW ON HANA

 

  • BW requirements were the driving force behind extended storage (dynamic tiering).
  • BW uses dynamic tiering with object-based temperature assignment.
  • In-memory storage can be utilized more efficiently with proper staging in DT.

 

HOW DOES DYNAMIC TIERING PLAY A PART IN BIG DATA ANALYTICS?

 SAP BIG DATA ANALYTICS WITH DYNAMIC TIERING VS EXTERNAL STORAGE 

FEW LIMITATIONS FOR REPORTING ON EXTENDED TABLES (HANA 1.0) OR MULTISTORE TABLES (HANA 2.0)

  • SAP HANA advanced capabilities like text search and analytics, series data, and graph are not currently supported with extended tables, or on the dynamic tiering partitions of multistore tables.
  • Extended tables are only allowed in calculation views; you cannot add extended or multistore tables to analytic or attribute views.
  • Converting a table used in such a view to an extended table invalidates the view.
  • You cannot use extended tables or multistore tables in Core Data Services (CDS) views. If you use an Application Function Library (AFL) function on data residing in dynamic tiering, the data is pulled into SAP HANA for in-memory processing.


To be continued...

Let Tek-Analytics implement the solution for you. Please contact us at [email protected]


NAVEEN MANNE @ Tek-Analytics
SAP BIG DATA & ADVANCED ANALYTICS | SAP HANA

 

Tek-Analytics Lumira Pre-built Templates

Introducing Tek-Analytics' pre-built, plug-and-play SAP Lumira templates. Kick-start your SAP dashboard design with our templates.

  • Use our templates to save up to 80% of development time
  • Pre-built, plug-and-play templates by functional area
  • Flexible, adaptive, and responsive designs

 

SAP HANA DYNAMIC TIERING PART-1

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 1

This article is a quick guide to SAP HANA Dynamic Tiering, sharing insights on leveraging SAP HANA to store and access cold/warm data without an external nearline storage solution. Customers who have adopted HANA are seeing best-in-class data management & processing speeds with the in-memory solution.
 

What if your HANA data keeps growing rapidly?

What if your HANA in-memory footprint is growing towards its allocated limit?

What if your HANA complains about out-of-memory issues?

What if the business needs more years of data for reporting & analytics?
 

What's next?

These questions are answered by SAP HANA Dynamic Tiering – a native SAP Big Data solution that extends & enhances HANA's in-memory capabilities by reducing the HANA memory footprint, both for BW on HANA and for native HANA scenarios.

 

What is SAP HANA Dynamic Tiering?

SAP HANA Dynamic Tiering is a native, SAP in-house Big Data solution for handling large volumes of enterprise data without external storage systems. Dynamic Tiering uses HANA database columnar disk space instead of memory, storing data in tables based on SAP IQ's smart column-store technology. It is a deeply integrated, high-performance option for single-node and scale-out systems that can handle datasets from terabytes to petabytes.
 
 

Data Aging & Data Temperatures in HANA

What is your data temperature value?

  
 

FACTORS FUELING DYNAMIC TIERING
 

  • Cost: approx. $15K per TB of disk
  • Savings: huge savings in licensing & storage over time
  • Performance: up to 13x faster* (compared to other non-SAP tools)
  • Risk & complexity: no migration necessary – as simple as in-memory operations
  • Reliability: a native Big Data solution, and a reliable long-term solution for SAP HANA

 

AREAS RELEVANT FOR DYNAMIC TIERING: 

 

GOALS & CONSIDERATIONS FOR DYNAMIC TIERING

  • A future SAP in-house solution
  • Manage the HANA memory footprint for BW on HANA & native HANA scenarios
  • Cross-enterprise transparency with the EDW
  • Manage data cost-effectively without affecting SLAs
  • Reduced TCO in terms of storage & external integration tools
  • Performance-optimized & centralized control – all in one single database
  • No need for additional 3rd-party integration tools
  • The cost of read/write operations on the database is relatively low/normal
  • No need for warm-data tables to be in memory; main memory is used for caching & processing when accessing warm data

A FEW TECHNICAL INSIGHTS

  • Extended table schemas are part of the catalog.
  • Creating indexes on extended tables can enhance performance (see the sketch after this list).
  • Warm tables can be updated & queried seamlessly, just like in-memory tables.
  • Extended tables can be used in calculation views for reporting purposes.
  • Performance-optimized calculation & processing engines.
  • Parallel query processing when data has to be processed from both hot & warm stores.
  • Hot data serves daily reporting; warm data serves periodic use.
  • Customers can retain more years of data for compliance purposes, and appropriate historical data can be retrieved for reporting whenever necessary.
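As a quick, hypothetical illustration of the index point above, reusing the SALES_WARM sketch from Part 3 (check the dynamic tiering guides for version-specific index support):

-- Index on a warm extended table to speed up date-range lookups.
CREATE INDEX "IDX_SALES_WARM_DATE" ON "SALES_WARM" ("ORDER_DATE");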

To be continued...

Let Tek-Analytics implement the solution for you. Please contact us at [email protected]

NAVEEN MANNE @ Tek-Analytics
SAP BIG DATA & ADVANCED ANALYTICS | SAP HANA