DataFlow Object

Right-click on Data Flow Object

When you create the DataFlow object, a transport request prompt opens.

Afterwards, add a new or existing DataSource to the Details section via drag and drop:

Double-clicking the added DataSource opens the following properties:

Select a source system and choose whether you want a copy of an existing DataSource or a proposal from an ODP connection.

Clicking the "Next" button displays the dataflow; now you can add any other object to it.

In this example, an ADSO was added and filled with three InfoObjects.

You have now created a persistent ADSO.

Activate the newly created ADSO, then create a connection between it and the previously created DataSource.

Based on this connection, transformation and DTP creation are now possible.

Right-click on the transformation symbol and select “Create Transformation … “

Now you are able to create a transformation – Target and Source are filled automatically:


Here you can define all settings for the transformation and connect the fields from the previous ADSO.


In addition, if InfoObjects are available in the source, they are connected automatically. Don't forget to activate!

The same applies to the creation of the DTP: all DTP properties already known from BW can be viewed, selected, and changed.

The same process applies to the CompositeProvider: drag and drop the CompositeProvider, assign a name, and create a connection to the ADSO.


It is also possible to add a comment on specific objects within the dataflow. Click the one you want to comment on, go to the Properties section, and add documentation.


As you have seen, the DataFlow object is an easy way to model your intended dataflow. You can see all the different stages in one place and do not have to click through your modelled dataflow across two different tools (the Eclipse modelling tools and the SAP GUI), which can be difficult and time-consuming.

Although it is a simple way to create a complete dataflow, it does not replace process chains for loading data; it is rather a graphical view of your model. To load data into your model, you still need to create a process chain.

Finally, I have put together some hints that will make your work easier:

  • Create persistent objects (ADSOs, InfoSources, and CompositeProviders (HCPR)) directly from the DataFlow maintenance screen.
  • You can use existing objects, e.g. DataSources or InfoSources, as templates, and more.
  • When you have a source object such as an InfoSource and a target object such as an ADSO, create the connection between them before persisting the ADSO; all fields/InfoObjects of the source are then automatically taken into account for the data model of the target.
  • Execution of DTPs is possible.
  • If you want to use the DTP monitor, execute the DTP from the DTP maintenance screen; from there, the in-place GUI is called.

Tek-Analytics SAC Scheduling Engine

The Tek-Analytics scheduling engine brings scheduling capabilities to SAC stories and models. You can use the scheduling engine to burst reports in PDF and Excel formats; destinations can be an email client, the file system, or SFTP.

It is a lightweight on-premise installation with an easy-to-maintain GUI. Administrators can choose the models and stories to be bursted on a recurrence schedule. Stories can be published to third-party vendors by email or SFTP.

Formats supported – PDF, Excel, CSV

Destinations – Email Client, File System, SFTP

How to Extract Data From S4 to BW

ODP based Extraction from S4 Hana to BW

Create a CDS view with the relevant data category (cube, fact, or dimension) and declare the data extraction annotation.
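As a sketch (all names here are illustrative, not from the original post), a minimal extraction-enabled cube view might look like this; the generated SQL view name is what later shows up as the ODP name:

```abap
@AbapCatalog.sqlViewName: 'ZSQLSALESCUBE'  // this SQL view name (SE11) is also the ODP name
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Sales cube for ODP extraction'
@Analytics.dataCategory: #CUBE
@Analytics.dataExtraction.enabled: true
define view Z_Sales_Cube as select from vbak {
  key vbeln,                 // sales document
      erdat,                 // created on
      @DefaultAggregation: #SUM
      netwr                  // net value
}
```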

Create a source system of type ODP – ABAP CDS Views in the target BW system.

Create a DataSource on the CDS view created in S4.

Select "Proposal from ODP".

Select the ODP in the S4 system; the name is the same as the SQL view name from SE11.

Create the DataSource and activate it.

Create an ADSO and load the data just like in any other BW flow.

For real-time streaming, you can use Open ODS views or a streaming process chain into the ADSO.

How to create Custom CDS view and Custom Analytical Query using Fiori Apps

Creating a Custom CDS view and consuming in Analytics cloud

Previously, I discussed CDS views, their types and architecture, and how to create custom CDS views using the ABAP Development Tools. In this blog, I am going to discuss more of an end-user approach, where business users can create and enhance CDS views using simple Fiori apps.

Apps we are going to use

Custom CDS Views, Custom Analytical Queries, View Browser

Please install the above apps and make sure the ATO setup is done in the backend SAP system.

Creating Custom CDS view

Launch Fiori Launchpad

Launch Custom CDS view App

Click create

Give a name and a label, and add the primary data source, along with associations if needed.

Check the OData option if this CDS view needs to be consumed elsewhere as an API.

The analytical option can be either cube or dimension type

Select the fields for the cube

Select the field properties: here you choose aggregation, semantics, currency, and associations. This is equivalent to the object-level annotations we define in the Eclipse method.

The Parameters tab is not supported in the on-premise version.

Assign filters as needed in the Filters tab.

Save the draft and publish the CDS view. Once it is successfully published, you can see the data preview.

You can also view the data and properties of the CDS view using the View Browser app.

Creating analytical query

Launch Custom Analytical Query App.

Click New, give a name, and select the custom CDS view created in the step above.

Under General, give a label for end-user display (the description of the query).

Under Field Selection, select the fields for the query. By default, measures go into columns and all characteristics go under free characteristics.

Under Display select properties for the field.

  • For measures, you can define scaling factors, decimals, and display options.
  • For characteristics, you can select display type, result suppression, and row selection.

Under Filters, we can define variables/Prompts for the query.

Save and Publish.

Consuming the Query

By default, all queries created using the Custom Analytical Queries app are available in the Query Browser. With a live or import connection defined between SAC and the on-premise system, you can consume the query in Analytics Cloud.

As of now, I haven’t seen any option to define Planning enabled queries.

Hacking Custom CDS view

Hacking CDS

Out of passion for CDS modelling, here is one interesting thing I did for a customer: creating a planning-enabled CDS view.

First, create Custom CDS / Custom Analytical query using Fiori App.

Launch ABAP in Eclipse and add the ATO local package to Eclipse. In this case, the package is TEST_YY_KEY_USER_LOCAL_APP.

Copy the entire code from the CDS view generated by the app.

Create a new CDS view, paste the code, and change the names in the code as required.

Declare the planning annotations.

This will save a lot of time in defining the query/CDS view from scratch.

The CDS view will be available in BI tools like any other CDS view. However, since the origin of the new CDS view is ADT, you cannot develop new analytical queries on it using the app.

HANA PAL SQL Architecture

Step by Step Guide for PAL Algorithms.

Method 1 – High-level pictorial representation – direct method (recommended)


Method 2 – High-Level Pictorial representation – Wrapper Method

Step 1


Step 2

Method 3 – Flowgraph

The first two methods are pure SQL approaches, where we create an input table and a result table and call the SQL wrapper or procedure directly. Many SAP documents provide SQL samples for this. Below, I am showing the flowgraph method for running the ABC Analysis algorithm.

The input and result tables were already created as per the algorithm's requirements.

 – Karthik Addula

How to Create a CDS view from scratch and consume in Analytics cloud/Fiori

This blog is intended for creating custom CDS views. CDS views are open (i.e., a CDS view definition can be deployed on any database supported by the SAP system).

What is a CDS view

Core Data Services (CDS) is SAP's next-generation modeling framework. In short, it is a one-stop place for all modeling, embedding DDL (definition), QL (query), and DCL (control). It follows a code-pushdown approach, with the database doing the heavy lifting.

You can find more about CDS in the SAP link below. CDS comes in two flavors: 1) ABAP CDS, which behaves as a full ABAP-native object defined in ADT, and 2) HANA CDS, which is more native to HANA database modeling.

More about CDS concepts –



*The blog below is intended for ABAP CDS modelling.

CDS view types

We have three main types of CDS views (plus a few others for extensions):

Basic view – the first level of the CDS virtual model, mostly a one-to-one representation of a database table. Basic views can be of dimension or fact type. Basic views are mostly private and should be avoided for direct BI consumption.

Composite view – composite views contain joins/associations between fact and dimension basic views. Composite views are like cubes; we can define aggregation types and calculations in them.

Consumption view – query views on top of composite views with all the end-user semantics; they can be consumed in any BI tool, including Fiori apps.


Setting up Eclipse and ADT

Step 1 – Log in to Eclipse and open the ABAP perspective


Step 2 – Create a new ABAP project for the desired backend system

Step 3 – Add the package where the objects will be stored as a favorite so it is easily accessible

Creating a Basic view on ACDOCA table

Navigate to Core Data Services under the package added above.

Step 1 – Select New from the context menu on Core Data Services and give a relevant name as per naming standards

If change management is turned on, you will be prompted to select a transport. After selecting the transport request, a pop-up with predefined templates appears to kick-start development; select "Define View".

Click Finish. The template comes with standard annotations; update the code with the relevant names and add the table field names.

Key annotations:
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT
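Put together, a minimal basic view on ACDOCA might look like the following sketch (names are illustrative and the field list is shortened):

```abap
@AbapCatalog.sqlViewName: 'ZSQLACDOCAB'
@AbapCatalog.compiler.compareFilter: true
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Basic view on ACDOCA'
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT
define view Z_Acdoca_Basic as select from acdoca {
  key rldnr,    // ledger
  key rbukrs,   // company code
  key gjahr,    // fiscal year
  key belnr,    // document number
  key docln,    // document line
      racct,    // account number
      hsl       // amount in company code currency
}
```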

Validate the code and activate.

Check data preview.


 Creating a Composite view

Composite views are intermediate views where typically all the semantics and calculations are done.

Step 1 – Create a new CDS view in the same way as the basic view (the annotations define the view type)

(Optional – select an annotation-type template for a predefined code skeleton)

Below is the composite view definition with associations; the data category is Cube. The data extraction annotation additionally exposes the CDS view as an extractor in ODP.
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
@Analytics.dataExtraction.enabled: true
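For illustration (hypothetical names; Z_Acdoca_Basic stands in for your own basic view), a composite view with an association and these annotations could look like:

```abap
@AbapCatalog.sqlViewName: 'ZSQLACDOCAC'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Composite view on ACDOCA'
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
@Analytics.dataExtraction.enabled: true
define view Z_Acdoca_Cube as select from Z_Acdoca_Basic
  association [0..1] to I_CompanyCode as _CompanyCode
    on $projection.rbukrs = _CompanyCode.CompanyCode
{
  key rldnr,
  key rbukrs,
  key gjahr,
  key belnr,
  key docln,
      racct,
      @DefaultAggregation: #SUM
      hsl,
      _CompanyCode   // association exposed for consumers
}
```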

Finally, create a consumption view/query. The annotations below create an OData service and a transient BEx query for BI consumption.
Key annotations:
@VDM.viewType: #CONSUMPTION
@OData.publish: true – creates an OData service for UI/Fiori consumption
@Analytics.query: true – creates a transient BEx query for BI consumption
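A matching consumption view (again, a sketch with illustrative names) might be:

```abap
@AbapCatalog.sqlViewName: 'ZSQLACDOCAQ'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Query on ACDOCA cube'
@VDM.viewType: #CONSUMPTION
@Analytics.query: true
@OData.publish: true
define view Z_Acdoca_Query as select from Z_Acdoca_Cube {
  @AnalyticsDetails.query.axis: #ROWS
  racct,
  @AnalyticsDetails.query.axis: #COLUMNS
  hsl
}
```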


Once the consumption view with the analytics query annotation is activated, we can check the transient query in transaction RSRT.

BEx query naming: 2C<SQL view name>


Consuming in Fiori Query Browser App

Navigate to the SAP Gateway system and activate the OData service for the above CDS view.

Login to Fiori Launchpad

Prerequisites – the Query Browser app is installed and the security setup is done.

Consuming from Analytics Cloud

Create a live/import connection (you can use an existing one if you want).

Create a model and consume the transient query (2C<SQL view name>).

Planning models can be created on transient BEx planning queries with the required annotations.

Please reach out to us and let Tek-Analytics take care of S4 analytics strategic planning and development for you 🙂

– Karthik Addula

Consuming HANA Data sources from Power BI Desktop

Step 1 – Launch Power BI Desktop

Step 2 – Click Get Data

Step 3 – Select databases > SAP HANA database

Step 4 – Enter the server details and select the connection type

Import = imports the data (you can also schedule refreshes)

DirectQuery = runs queries on demand whenever a user refreshes the data (real-time)

Step 5 – Select the type of credentials and click Connect

To use Kerberos Windows AD SSO, select "Use my current credentials".

To use a HANA database user ID and password, select "Database" and type the user ID and password.

Step 6 – Upon successful connection, all calculation views you are authorized for are displayed

Step 7 – Select the calculation view and click Load

All the data will now be available for visualizations.

Filtering Data by Time in SAP Analytics Cloud

In SAP Analytics Cloud, you can add a time calculation to your chart without using the calculation editor. This allows you to create quick comparisons with your data such as Previous Year, Period over Period, Year to Date, and so on.

There are two ways to filter your visualizations using time filters:

  • Adding a time calculation
  • Creating a single value filter

There’s a time and place to use each, and in this post, we’ll show you some examples of how each of these time filters work.

1) Adding a time calculation

If your data includes different levels of granularity (i.e. year, quarter, month) then you can add a Time Calculation to compare your data over these periods.

First, we need to create a chart and add our measure. For this example, we’ll select Gross Margin. We also need to add Date as our time dimension.

Next, we’ll select ‘Add Time Calculation’ and choose ‘Previous Year’ from the list.

When we do this, nothing happens to our chart. It still shows our Gross Margin for All. We receive the following notification both in the chart and in the builder panel: “The Date dimension must be drilled to Year or lower level for Gross Margin – PY to appear.”

We need to drill down to a lower level of granularity so that our chart can compare this year's data with the previous year's.

Once we drill down using the drill down icon within the chart, we can see our data compared with previous years. If we choose, we can continue to drill down to lower levels of granularity.

We can select any two measures in our chart, and a variance breakdown appears showing us the difference between the two data points.

Adding a variance

Another way to compare data from previous periods is to add a variance to your chart.

Following the previous example, we can remove the Time Calculation we added, then select our chart, and choose the ‘Add Variance’ option.

We can leave our Measures to ‘All Measures in Use (Dynamic)’, and for the dimension, we can select ‘Time’.

When we do that, we’re given all the different time variations such as Current Period, Previous Period, Previous Year, and so on. Select “Previous Year” for the Date dimension and click on Ok.

Now, our chart displays the variance based on the time period we selected.

Depending on your preference, there are different display options you can choose for your variance chart.

We can further drill down into any of our ranges to see a lower level of granularity and the variance will be calculated accordingly.

2) Creating a single value filter

As mentioned previously, there’s another way you can filter your visualization using time filters. First, we looked at adding a time calculation and a variance. The other way is to create a single value filter.

This option is useful for when you’re using a numeric chart, which doesn’t allow you to add a time dimension. Let’s look at an example.

We’ll start by creating a numeric point chart and selecting Gross Margin as our measure. However, this chart is displaying our total Gross Margin for all time periods, which is not what we want. Rather we want to filter the chart by time, specifically 2015.

We can also add a variance so that we can see how 2015's Gross Margin compares to 2014's.

To do this, we start by selecting our visualization and then adding a Time (Member) filter.

This allows us to select the exact date for which we want to filter. In this example, we want to filter our visualization by 2015.

When we select OK, the time filter is enabled and our visualization restricts the measure by the time we selected.

Next, we can follow the same workflow described in the previous example and add a variance at the year granularity, which will show us how this year’s Gross Margin compares with the previous year.

We’ve shown you two ways to filter your data by time, as well as how to add a variance to compare two values.

How to Merge Datasets in SAP Analytics Cloud

Leveraging Python in Tableau

After the beta release of TabPy, a new API that enables evaluation of Python code from within a Tableau workbook, I integrated Python in Tableau to build a predictive model that determines whether a patient diagnosed with hepatitis is Dead or Alive, as well as a prediction on HR-related data that tells us whether an employee is going to leave or not. My colleague Puxin and I created the dashboards, following this blog and tweaking its notebook to show the results.

For the hepatitis dashboard, we used a publicly available dataset with 155 samples; each sample has 1 target and 20 features. After correlation analysis, we shrank the number of features to 7 to avoid overfitting: 'ALKPHOSPHATE', 'SGOT', 'ALBUMIN', 'PROTIME', 'BILIRUBIN', 'LIVERBIG', 'LIVERFIRM'. Scaler transformation and cross-validation were used to make the model more stable and robust. We started by trying a linear SVM and an RBF-kernel SVM, measuring the models by statistics such as F1, accuracy, precision, recall, and ROC AUC. With a grid search over hyperparameters, we ended up with 0.858 accuracy and 0.706 ROC AUC, which is not bad. Then we worked on logistic regression and naive Bayes and went through the same process, but they did not give us a better result. At the end of the experiments, the gradient boosting model gave us the best result: 0.994 accuracy and 0.984 ROC AUC. Thus, we decided to use it as the final model in the dashboard. When users input the features on the dashboard, the data is sent to the model backend, and a prediction of whether the patient is Dead or Alive is returned under Result.

For the HR dashboard, we used similar models. The dataset has 1470 samples; each sample has 1 target, 'Attrition', and 34 associated features. After correlation analysis, we shrank the number of features to 10 to avoid overfitting: 'EducationField', 'OverTime', 'EnvironmentSatisfaction', 'JobSatisfaction', 'StockOptionLevel', 'WorkLifeBalance', 'Age', 'TotalWorkingYears', 'MonthlyIncome', 'YearsWithCurrManager'. Scaler transformation and cross-validation were used to make the models more stable and robust. We started with a linear SVM and logistic regression, measuring the models by statistics such as F1, precision, recall, and ROC AUC. Based on a grid search over hyperparameters, we chose the best SVM and logistic regression models. We also used LightGBM, the current best gradient boosting machine. In the end, we ensembled these 3 models to predict whether an employee is going to stay or not.
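The model-selection workflow described above can be sketched roughly as follows, using scikit-learn on a synthetic stand-in dataset (the dataset, model list, and parameters here are illustrative, not the exact notebook code):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the hepatitis data: 155 samples, 7 selected features.
X, y = make_classification(n_samples=155, n_features=7, random_state=0)

# Candidate models, each with scaler transformation in a pipeline.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "gradient_boosting": make_pipeline(StandardScaler(), GradientBoostingClassifier(random_state=0)),
}

# 5-fold cross-validation, comparing models by ROC AUC.
scores = {
    name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    for name, model in candidates.items()
}

# Fit the best-scoring model on the full data for use behind the dashboard.
best_name = max(scores, key=scores.get)
final_model = candidates[best_name].fit(X, y)
print(best_name, {name: round(s, 3) for name, s in scores.items()})
```

In the real notebook, a hyperparameter grid search would wrap each candidate before the comparison; the structure stays the same.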

Below is the calculated field; the attached Jupyter notebook contains all the Python code used for model training and evaluation.

Calculated field:

SCRIPT_STR("return tabpy.query('Predict', _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9, _arg10)['response']",
[age], [totalWorkingYears], [monthlyIncome], [yearsWithCurrManager], [environmentSatisfaction],
[jobSatisfaction], [stockOptionLevel], [workLifeBalance], [educationField], [overTime])
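For completeness, deploying the 'Predict' endpoint that this calculated field queries might look roughly like the sketch below. It assumes the beta-era tabpy_client package and a running TabPy server, and the scoring logic is a trivial placeholder rather than the trained ensemble:

```python
def predict(age, total_working_years, monthly_income, years_with_curr_manager,
            environment_satisfaction, job_satisfaction, stock_option_level,
            work_life_balance, education_field, over_time):
    """Placeholder scorer: Tableau passes each _argN as a list of values.

    The real endpoint would run the trained ensemble; here we just flag
    likely attrition when an employee works overtime and is young.
    """
    return ["Leaving" if ot == "Yes" and a < 30 else "Staying"
            for a, ot in zip(age, over_time)]


def deploy(endpoint="http://localhost:9004/"):
    # Assumes the beta-era tabpy_client package and a running TabPy server.
    # The endpoint name 'Predict' must match the first argument of
    # tabpy.query('Predict', ...) in the Tableau calculated field.
    import tabpy_client
    client = tabpy_client.Client(endpoint)
    client.deploy("Predict", predict, "Employee attrition prediction", override=True)
```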