How to Extract Data from S/4HANA to BW

ODP-Based Extraction from S/4HANA to BW

Create a CDS view with the relevant data category (cube, fact, or dimension) and declare the data extraction annotation.
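
For reference, here is a minimal, hedged sketch of such a view; the view name, the field list, and the source table (the sales header table VBAK) are purely illustrative:

@AbapCatalog.sqlViewName: 'ZSALESEXTR'    // SQL view name – also the ODP name you will see from BW
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Sales extraction view'
@Analytics.dataCategory: #FACT            // or #CUBE / #DIMENSION
@Analytics.dataExtraction.enabled: true   // exposes the view as an ODP extractor
define view Z_Sales_Extraction
  as select from vbak
{
  key vbeln as SalesDocument,
      erdat as CreationDate,
      @Semantics.currencyCode: true
      waerk as DocumentCurrency,
      @Semantics.amount.currencyCode: 'DocumentCurrency'
      netwr as NetValue
}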

Create a source system of type ODP – ABAP CDS Views in the target BW system.

Create a DataSource on the CDS view created in S/4HANA.

Select “Proposal from ODP.”

Select the ODP in the S/4HANA system; its name is the same as the SQL view name from SE11.

Create the DataSource and activate it.

Create an aDSO and load the data just like any other BW flow.

For real-time streaming, you can use Open ODS views or a streaming process chain into the aDSO.

Need additional assistance? Contact us today!

How to create Custom CDS view and Custom Analytical Query using Fiori Apps

Creating a custom CDS view and consuming it in SAP Analytics Cloud

Previously, I discussed CDS views, their types and architecture, and how to create custom CDS views using ABAP Development Tools. In this blog, I take more of an end-user approach, where business users can create new CDS views and enhance existing ones using simple Fiori apps.

 

Apps we are going to use

Custom CDS Views, Custom Analytical Queries, and View Browser

Please install the above apps and make sure the ATO (Adaptation Transport Organizer) setup is done in the backend SAP system.

Creating Custom CDS view

Launch Fiori Launchpad

Launch the Custom CDS Views app

Click Create

Give a name and a label, and add the primary data source, plus associations if needed.

Check the OData option if this CDS view needs to be consumed elsewhere as an API.

The analytical option can be either cube or dimension type

Select the fields for the cube

Select the field properties: aggregation, semantics, currency, and associations. This is equivalent to the element-level annotations we define in the Eclipse approach, as sketched below.
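
For comparison, here is a hedged sketch of how those field properties look as annotations in an ADT-defined view; the view and field names are illustrative:

@AbapCatalog.sqlViewName: 'ZCUBEDEMO'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Eclipse equivalent of the field properties'
@Analytics.dataCategory: #CUBE
define view Z_Cube_Demo
  as select from vbak
{
  key vbeln as SalesDocument,
      // the "Semantics"/"Currency" properties in the app
      @Semantics.currencyCode: true
      waerk as DocumentCurrency,
      // the "Aggregation" property in the app
      @DefaultAggregation: #SUM
      @Semantics.amount.currencyCode: 'DocumentCurrency'
      netwr as NetValue
}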

The Parameters tab is not supported in the on-premise version.

Assign filters as needed in the Filters tab.

Save the draft and publish the CDS view. Once it is successfully published, you can view the data preview.

You can also view the data and properties of the CDS view using the View Browser app.

Creating an analytical query

Launch the Custom Analytical Queries app.

Click New, give a name, and select the custom CDS view created in the step above.

Under General, give a label for end-user display (the description of the query).

Under Field Selection, select the fields for the query. By default, measures go into columns and all characteristics into free characteristics.

Under Display, select the properties for each field.

  • For measures, we can define scaling factors, decimals, and display options
  • For characteristics, we can select the display type, result suppression, and row selection

Under Filters, we can define variables/prompts for the query.

Save and Publish.

 

Consuming the Query

By default, all queries created using the Custom Analytical Queries app are available in the Query Browser. With a live/import connection defined between SAC and the on-premise system, you can consume the query in SAP Analytics Cloud.

As of now, I haven’t seen any option to define planning-enabled queries.

Need additional assistance? Contact us today!

Hacking Custom CDS view

Out of a passion for CDS modelling, here is one interesting thing I did for a customer: creating a planning-enabled CDS view.

 

First, create the custom CDS view / custom analytical query using the Fiori apps.

Launch ABAP in Eclipse and add the ATO local package to Eclipse. In this case, the package is TEST_YY_KEY_USER_LOCAL_APP.

Copy the entire code from the CDS view generated by the app.

Create a new CDS view and paste the code; change the names in the code as required.

Declare the planning annotations.
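
As a hedged sketch, the additions look roughly like this; the write-back class name is a placeholder, and the exact planning annotation set depends on your release, so check the Analytics annotation documentation for your system:

@AbapCatalog.sqlViewName: 'ZPLANCUBE'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Planning-enabled cube (sketch)'
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
@Analytics.writeBack.className: 'ZCL_PLAN_WRITEBACK'   // placeholder ABAP write-back class
define view Z_Plan_Cube
  as select from acdoca                                 // illustrative source
{
  key rbukrs as CompanyCode,
  key gjahr  as FiscalYear,
      @DefaultAggregation: #SUM
      hsl    as Amount
}

An input-ready transient query is then defined on top of this cube with @Analytics.query: true, as in a normal consumption view.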

This will save a lot of time in defining the query/CDS view from scratch.

The CDS view will be available in BI tools like any other CDS view. But since the new CDS view originates from ADT, you cannot develop new analytical queries on it using the Fiori app.

HANA PAL SQL Architecture

A step-by-step guide for PAL algorithms.

Method 1 – High-level pictorial representation – Direct method (recommended)

 

Method 2 – High-level pictorial representation – Wrapper method

 

Step 1

  

Step 2

 

Method 3 – Flowgraph

The first two methods are the pure SQL approach, where we create an input table and a result table and call the SQL wrapper or the procedure directly (see the sketch below). SAP provides many documents with SQL samples for this. After that, I show the flowgraph method for running the ABC Analysis algorithm.
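
To make the direct method concrete, here is a hedged SQLScript sketch of a Method 1 call to ABC Analysis on HANA 2.0, where PAL functions are exposed in the _SYS_AFL schema; table names and parameter values are illustrative, and the exact signature should be verified against the PAL reference for your revision:

-- Input data: one item per row with the value to classify
CREATE COLUMN TABLE MY_ABC_DATA ("ITEM" NVARCHAR(100), "VALUE" DOUBLE);
INSERT INTO MY_ABC_DATA VALUES ('ITEM_A', 15.4);
INSERT INTO MY_ABC_DATA VALUES ('ITEM_B', 200.4);
INSERT INTO MY_ABC_DATA VALUES ('ITEM_C', 78.6);

-- Standard PAL parameter table: A/B/C percentage cut-offs
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_PARAMETER_TBL (
  "PARAM_NAME" NVARCHAR(256), "INT_VALUE" INTEGER,
  "DOUBLE_VALUE" DOUBLE, "STRING_VALUE" NVARCHAR(1000));
INSERT INTO #PAL_PARAMETER_TBL VALUES ('PERCENT_A', NULL, 0.7, NULL);
INSERT INTO #PAL_PARAMETER_TBL VALUES ('PERCENT_B', NULL, 0.2, NULL);
INSERT INTO #PAL_PARAMETER_TBL VALUES ('PERCENT_C', NULL, 0.1, NULL);

-- Direct call: returns the item-to-ABC-class assignment as a result set
CALL _SYS_AFL.PAL_ABC (MY_ABC_DATA, #PAL_PARAMETER_TBL, ?);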

The input and result tables are already created as per the algorithm’s requirements.

                          
 – Karthik Addula

 

How to Create a CDS view from scratch and consume in Analytics cloud/Fiori

This blog is intended for creating a custom CDS view. CDS views are open (i.e., a CDS view definition can be deployed on any database supported by the SAP system).

What is a CDS view

Core Data Services (CDS) is SAP’s next-generation modeling framework. In short, it is the one-stop place for all modeling, embedding DDL (definition), QL (query), and DCL (control). It works with a code push-down approach, with the database doing the heavy lifting.

You can find more about CDS in the SAP links below. CDS has two flavors: 1) ABAP CDS, which behaves as a fully native ABAP object defined in ADT, and 2) HANA CDS, which is more native to HANA database modeling.

More about CDS concepts – 

ABAP CDS – https://help.sap.com/viewer/f2e545608079437ab165c105649b89db/7.51.1/en-US/4ed2245c6e391014adc9fffe4e204223.html

HANA CDS – https://help.sap.com/doc/29ff91966a9f46ba85b61af337724d31/2.0.00/en-US/SAP_HANA_Core_Data_Services_CDS_Reference_en.pdf

*The blog below is intended for ABAP CDS modelling.

 

CDS view types

We have three main types of CDS views (plus a few others for extensions):

Basic view – The first level of the CDS virtual model, mostly a one-to-one replication of a database table. Basic views can be dimension type or fact type. Basic views are mostly private and should not be consumed directly for BI.

Composite view – Composite views have joins/associations between fact and dimension basic views. Composite views are like cubes; we can define aggregation types and calculations in them.

Consumption view – Query views on top of composite views, carrying all the end-user semantics; they can be consumed in any BI tool, including Fiori apps.

 

         

Setting up Eclipse and ADT

Step 1 – Log in to Eclipse and open the ABAP perspective

 

Step 2 – Create a new ABAP project for the desired backend system

 

Step 3 – Add the package where the objects will be stored as a favorite so it is easily accessible
 

 

Creating a Basic view on ACDOCA table

Navigate to Core Data Services under the package added above.

Step 1 – Select New from the context menu on Core Data Services and give a relevant name as per naming standards.
 

If change management is turned on, you will be prompted to select a transport. After selecting the transport request, a pop-up with predefined templates appears to kick-start development; select “Define View.”

Click Finish. The template comes up with standard annotations; update the code with the relevant names and add the table field names.

Key annotations:
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT
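
For illustration, a minimal basic view on ACDOCA might look like this; the names and the field selection are illustrative:

@AbapCatalog.sqlViewName: 'ZACDOCAB'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Basic view on ACDOCA'
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT
define view Z_Acdoca_Basic
  as select from acdoca
{
  key rbukrs as CompanyCode,
  key gjahr  as FiscalYear,
  key belnr  as AccountingDocument,
  key docln  as LineItem,
      racct  as GLAccount,
      @Semantics.currencyCode: true
      rhcur  as CompanyCodeCurrency,
      @Semantics.amount.currencyCode: 'CompanyCodeCurrency'
      hsl    as AmountInCompanyCodeCurrency
}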

 

Validate the code and activate.

Check data preview.

  

 

 Creating a Composite view

Composite views are intermediate views where typically all the semantics and calculations are done.

Step 1 – Create a new CDS view the same way as the basic view (the annotations define the view type)

(Optional: select the annotation type template for a predefined code skeleton.)

Below is the composite view definition with associations; the data category is cube. The data extraction annotation additionally exposes the CDS view as an extractor in ODP.
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
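
A hedged sketch of such a composite view, building on the basic view above; I_CompanyCode is a standard SAP dimension view, and the remaining names are illustrative:

@AbapCatalog.sqlViewName: 'ZACDOCAC'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Composite (cube) view on ACDOCA'
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
@Analytics.dataExtraction.enabled: true   // also exposes the cube as an ODP extractor
define view Z_Acdoca_Cube
  as select from Z_Acdoca_Basic
  association [0..1] to I_CompanyCode as _CompanyCode
    on $projection.CompanyCode = _CompanyCode.CompanyCode
{
  key CompanyCode,
  key FiscalYear,
  key AccountingDocument,
  key LineItem,
      GLAccount,
      CompanyCodeCurrency,
      @DefaultAggregation: #SUM
      AmountInCompanyCodeCurrency,
      _CompanyCode   // expose the association for attributes and texts
}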

  

Finally, create a consumption view/query. The annotations below create an OData service and a transient BEx query for BI consumption.
Key annotations:
@VDM.viewType: #CONSUMPTION
@OData.publish: true – creates an OData service for UI/Fiori consumption
@Analytics.query: true – creates a transient BEx query for BI consumption
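
A matching consumption view sketch, again with illustrative names:

@AbapCatalog.sqlViewName: 'ZACDOCAQ'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Query on the ACDOCA cube'
@VDM.viewType: #CONSUMPTION
@Analytics.query: true   // transient BEx query, visible in RSRT
@OData.publish: true     // generates an OData service upon activation
define view Z_Acdoca_Query
  as select from Z_Acdoca_Cube
{
  @AnalyticsDetails.query.axis: #ROWS
  CompanyCode,
  @AnalyticsDetails.query.axis: #ROWS
  GLAccount,
  @AnalyticsDetails.query.axis: #COLUMNS
  AmountInCompanyCodeCurrency
}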

  

Once the consumption view with the analytics query annotation is activated, we can check the transient query in transaction RSRT.

BEx query naming: 2C<SQL view name> – for the sketch above, that would be 2CZACDOCAQ.

 


 

Consuming in Fiori Query Browser App

Navigate to the SAP Gateway system and activate the OData service for the above CDS view.

 

Log in to the Fiori Launchpad.

Prerequisites – the Query Browser app and security setup are in place.

 

  

Consuming from Analytics Cloud

Create a live/import connection (you can use an existing one if you want).

 

Create a model and consume the transient query (the 2C query).

 

Planning models can be created on transient BEx planning queries with the required annotations.


Please reach out to us and let Tek-Analytics take care of S/4 analytics strategic planning and development for you 🙂

– Karthik Addula 

Contact us today!

Consuming HANA Data sources from Power BI Desktop

Step 1 – Launch Power BI Desktop

Step 2 – Click Get Data

Step 3 – Select databases > SAP HANA database

Step 4 –  Enter the server details and select connection type

Import = imports the data into Power BI (you can also schedule refreshes)

DirectQuery = runs queries against HANA on demand whenever a user refreshes or interacts with the report (real time)

Step 5 – Select the type of credentials and click Connect

To use Kerberos Windows AD SSO, select “Use my current credentials.”

To use a HANA database user ID and password, select “Database” and enter the user ID and password.

Step 6 – Upon successful connection, all the calculation views you are authorized for will be displayed

Step 7 – Select the calculation view and click Load

All the data will now be available for visualizations.

Filtering Data by Time in SAP Analytics Cloud


In SAP Analytics Cloud, you can add a time calculation to your chart without using the calculation editor. This allows you to create quick comparisons with your data such as Previous Year, Period over Period, Year to Date, and so on.

There are two ways to filter your visualizations using time filters:

  • Adding a time calculation
  • Creating a single value filter

There’s a time and place to use each, and in this post, we’ll show you some examples of how each of these time filters works.

1) Adding a time calculation

If your data includes different levels of granularity (i.e. year, quarter, month) then you can add a Time Calculation to compare your data over these periods.

First, we need to create a chart and add our measure. For this example, we’ll select Gross Margin. We also need to add Date as our time dimension.


 

Next, we’ll select ‘Add Time Calculation’ and choose ‘Previous Year’ from the list.

When we do this, nothing happens to our chart. It still shows our Gross Margin for All. We receive the following notification both in the chart and in the builder panel: “The Date dimension must be drilled to Year or lower level for Gross Margin – PY to appear.”

We need to drill down to a lower level of granularity so that our chart can compare this year’s data with the previous year’s.

Once we drill down using the drill down icon within the chart, we can see our data compared with previous years. If we choose, we can continue to drill down to lower levels of granularity.

We can select any two measures in our chart, and a variance breakdown appears showing us the difference between the two data points.

Adding a variance

Another way to compare data from previous periods is to add a variance to your chart.

Following the previous example, we can remove the Time Calculation we added, then select our chart, and choose the ‘Add Variance’ option.

We can leave our Measures to ‘All Measures in Use (Dynamic)’, and for the dimension, we can select ‘Time’.

When we do that, we’re given all the different time variations such as Current Period, Previous Period, Previous Year, and so on. Select “Previous Year” for the Date dimension and click on Ok.

Now, our chart displays the variance based on the time period we selected.

 

Depending on your preference, there are different display options you can choose for your variance chart.

We can further drill down into any of our ranges to see a lower level of granularity and the variance will be calculated accordingly.

2) Creating a single value filter

As mentioned previously, there’s another way you can filter your visualization using time filters. First, we looked at adding a time calculation and a variance. The other way is to create a single value filter.

This option is useful for when you’re using a numeric chart, which doesn’t allow you to add a time dimension. Let’s look at an example.

We’ll start by creating a numeric point chart and selecting Gross Margin as our measure. However, this chart is displaying our total Gross Margin for all time periods, which is not what we want. Rather we want to filter the chart by time, specifically 2015.

We can also add a variance so that we can see how 2015’s Gross Margin compares to 2014’s.

To do this, we start by selecting our visualization and then adding a Time (Member) filter.

This allows us to select the exact date for which we want to filter. In this example, we want to filter our visualization by 2015.

 

When we select OK, the time filter is enabled and our visualization restricts the measure by the time we selected.


 

Next, we can follow the same workflow described in the previous example and add a variance at the year granularity, which will show us how this year’s Gross Margin compares with the previous year.


 

We’ve shown you two ways to filter your data by time, as well as how to add a variance to compare two values.

Need additional assistance? Contact us today!

Leveraging Python in Tableau

Following the announcement of the beta release of TabPy (https://github.com/tableau/TabPy), a new API that enables evaluation of Python code from within a Tableau workbook, I integrated Python with Tableau to build two predictive models: one that predicts whether a patient diagnosed with hepatitis will end up dead or alive, and one on HR data that predicts employee attrition, i.e., whether an employee is going to leave. My colleague Puxin and I created the dashboards starting from this blog (https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916) and tweaked the notebook to show the results.

For the hepatitis dashboard, we used a publicly available dataset with 155 samples, each having 1 target and 20 features. After correlation analysis, we shrank the number of features to 7 to avoid overfitting: ‘ALKPHOSPHATE’, ‘SGOT’, ‘ALBUMIN’, ‘PROTIME’, ‘BILIRUBIN’, ‘LIVERBIG’, ‘LIVERFIRM’. Scaler transformation and cross-validation were used to make the model more stable and robust. We started with linear SVM and RBF-kernel SVM, measuring the models with statistics such as F1, accuracy, precision, recall, and ROC AUC. With a grid search over hyperparameters, we ended up at 0.858 accuracy and 0.706 ROC AUC, which is not bad. We then worked on logistic regression and naive Bayes through the same process, but they didn’t give us a better result. At the end of the experiments, the gradient boosting model gave us the best result, 0.994 accuracy and 0.984 ROC AUC, so we decided to use it as the final model in the dashboard. When users input the features on the dashboard, the data is sent to the model backend, which returns a prediction, shown under Result, of whether the patient is Dead or Alive.

For the HR dashboard, we used similar models. The dataset has 1,470 samples, and each sample has 1 target, ‘Attrition’, and 34 features. After correlation analysis, we shrank the number of features to 10 to avoid overfitting: ‘EducationField’, ‘OverTime’, ‘EnvironmentSatisfaction’, ‘JobSatisfaction’, ‘StockOptionLevel’, ‘WorkLifeBalance’, ‘Age’, ‘TotalWorkingYears’, ‘MonthlyIncome’, ‘YearsWithCurrManager’. Scaler transformation and cross-validation were used to make the models more stable and robust. We started with linear SVM and logistic regression, measuring the models with statistics such as F1, precision, recall, and ROC AUC. Based on a grid search over hyperparameters, we chose the best SVM and logistic regression models. We also used LightGBM, a state-of-the-art gradient boosting machine. In the end, we ensembled these 3 models to predict whether an employee is going to stay or leave.
 
 

Below is the calculated field, and attached is the Jupyter notebook containing all the Python code used for model training and evaluation.

Calculated field :

SCRIPT_STR("return tabpy.query('Predict', _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9, _arg10)['response']",
[age], [totalWorkingYears], [monthlyIncome], [yearsWithCurrManager], [environmentSatisfaction],
[jobSatisfaction], [stockOptionLevel], [workLifeBalance], [educationField], [overTime])
 

SAP Business Planning and Consolidation: Overview of BPC 11.0, version for SAP BW/4HANA

BPC 11 is not just an upgrade, it’s a different product

The release of BW/4HANA in 2016 introduced great capabilities for S/4HANA users, but until now it was missing some crucial components: Planning and Consolidation capabilities designed specifically for BW/4HANA.

 
 

SAP BPC 11.0, version for SAP BW/4HANA, is a simplified planning offering.

With the release of BPC 11, BW/4HANA now has fully developed Planning and Consolidation capabilities that are above and beyond prior versions of BPC. Highlights include:

  • Web worksheets are now available, allowing browser-based planning and reporting, with multiple planning objects in one screen
  • Integration with SAP Analytics Cloud means syncing plan data from end users who are purely cloud-based
  • New customizable HTML5 interface which includes visualization capabilities and task navigation similar to Business Process Flows
  • Advanced performance achieved through HANA processing and Advanced DataStore Objects (aDSOs – no longer InfoCubes or classic DSOs)
  • Standard and Embedded model choices are still available
  • Microsoft Excel integration still available with Analysis for Office and EPM interfaces


 

Advanced Analytics:

SAP BPC 11.0 brings built-in, real-time analytics into the application’s Web Client.  Users can easily create graphs and charts to visualize variances.  Additionally, these can be shared amongst colleagues and attached to reports or pinned to the home screen as a favorite.

One key benefit to emphasize is that by integrating with SAP Analytics Cloud, BPC can directly utilize the Predictive Analytics functionality to generate new insights from Plan and Actuals data. Predictive Analytics is also helpful for uncovering the true drivers of business variance.

 

Predictive analytics has the potential to be a game changer for finance.  Business users can take their actuals and plans, run them through predictive processes, and gain meaningful insights using machine learning.  Before running a budget or forecast, finance departments can leverage the predictive functions to see where their current data set is heading and then uncover influencers and business challenges.
 

The Cloud

With the world moving rapidly towards the cloud, SAP BPC 11.0 offers a tempered pace at which organizations can move their planning and consolidation platform.  This latest release is available not just on the SAP HEC, but also on AWS, Azure, Virtustream, and other cloud providers.  Finance organizations can have rapid deployments due to the ease of cloud enablement as well as utilize hybrid scenarios, which are quite popular.  The platform can start on premise and then move to the cloud.  Additionally, the integration with SAP Analytics Cloud allows organizations to have part of their platform (SAP BPC) on premise/cloud and then integrate with a public cloud platform for seamless integration.

Integration

Integration has been discussed a few times and it’s more than just bringing data into SAP BPC.  Finance users will clearly benefit from being able to bring data in from SAP S/4HANA and other SAP data sources and non-SAP data sources via the integration that is native in SAP BW/4HANA.  The key is what to do with the integration and why this matters to finance.

Accountants, analysts, controllers, and others will have the ability to bring all actuals (from ERP, logistics, marketing systems, human resources, and others) into SAP BPC to get a true reflection of actuals and compare to plans.  What this leads to ultimately is one powerful financial planning system that allows organizations to perform true Integrated Financial Planning.

We all know that the CFO resides not just in finance, but in every department (like sales and marketing), every geography, every region—everyone has a finance team providing true fiscal guidance.  Those silos now have the ability to come together in this integrated platform and provide a universal platform for planning.

SAP BW/4HANA also has the ability to capture things like sentiment analysis, which can be groundbreaking for finance individuals. Such analysis enables them to use market sentiments from Social Media and other sources to judge the direction of the budgets based on real-time market feedback.

Bottom Line

SAP BPC 11.0 with SAP BW/4HANA is poised to be quite the game changer for the office of the CFO and their ability to be a business partner across the enterprise.  With all the tools at their disposal, gone are the days when finance responds with “I’ll get back to you.” Now, they can lead and help drive business decisions.

-Bharat Sunkari 

How to Import Large CSV Files into SAP Analytics Cloud

By default, SAP Analytics Cloud’s maximum upload file size is 100 MB for Excel and 250 MB for CSV. The following procedure loads larger files and creates a model.

Step 1 – Split the large file into multiple small CSV files based on a partition object; in this example, I used year and month

Step 2 – Create a Model based on the first file and save the model in Analytics Cloud

Step 3 – Select the model and click Import Data as shown in the screenshot below

Step 4 – Select the option, “import the data from file.”

Step 5 – Make sure the target model is the one you saved in Step 2, then select the second source file and click Import

Step 6 – The model will auto-map; click Finish Mapping and save the model (make sure the import mode is “Append”)

Repeat the same steps for all the remaining files.

A sample story showing data for all months and years.

Need additional assistance? Contact us today!