
Leveraging Python in Tableau

After the announcement of the beta release of TabPy (https://github.com/tableau/TabPy), a new API that enables evaluation of Python code from within a Tableau workbook, I integrated Python with Tableau to build two predictive models: one that predicts whether a patient diagnosed with hepatitis will die or survive, and one on HR data that predicts employee attrition, i.e. whether an employee is likely to leave. My colleague Puxin and I created the dashboards based on this blog (https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916) and tweaked the notebook to show the results.

For the hepatitis dashboard we used a publicly available dataset with 155 samples, each with 1 target and 20 features. After correlation analysis, we shrank the number of features to 7 to avoid overfitting: ‘ALKPHOSPHATE’, ‘SGOT’, ‘ALBUMIN’, ‘PROTIME’, ‘BILIRUBIN’, ‘LIVERBIG’, and ‘LIVERFIRM’. Scaler transformation and cross-validation are used to make the model more stable and robust. We started by trying a linear SVM and an RBF-kernel SVM, measuring the models with statistics such as f1, accuracy, precision, recall, and roc_auc. With a grid search over hyperparameters, we ended up at 0.858 accuracy and 0.706 roc_auc, which is not bad. We then worked on logistic regression and Naive Bayes and went through the same process, but they did not give us a better result. At the end of the experiments, the gradient boosting model gave us the best result, 0.994 accuracy and 0.984 roc_auc, so we decided to use it as the final model in the dashboard. When users input the features on the dashboard, the data is sent to the model backend and a prediction is returned; under Result you can see whether the patient is predicted to be Dead or Alive.
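For reference, below is a minimal sketch of that final modeling step, assuming the cleaned hepatitis data sits in a CSV with the seven features above and a binary target column (file and column names here are illustrative; the full code is in the attached notebook).

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative column names -- adjust to the actual dataset.
FEATURES = ['ALKPHOSPHATE', 'SGOT', 'ALBUMIN', 'PROTIME',
            'BILIRUBIN', 'LIVERBIG', 'LIVERFIRM']
df = pd.read_csv('hepatitis.csv')            # assumed pre-cleaned file
X, y = df[FEATURES], df['Class']             # target column (Die / Live)

# Scaler + gradient boosting in one pipeline, so cross-validation
# re-fits the scaler on every fold.
pipe = Pipeline([('scale', StandardScaler()),
                 ('gbm', GradientBoostingClassifier(random_state=42))])
param_grid = {'gbm__n_estimators': [100, 300],
              'gbm__learning_rate': [0.05, 0.1],
              'gbm__max_depth': [2, 3]}

search = GridSearchCV(pipe, param_grid, scoring='roc_auc', cv=5)
search.fit(X, y)
print('best params:', search.best_params_)
print('cv roc_auc :', search.best_score_)
print('cv accuracy:', cross_val_score(search.best_estimator_, X, y,
                                      scoring='accuracy', cv=5).mean())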

For the HR dashboard we used similar models. The dataset has 1,470 samples, and each sample has 1 target, ‘Attrition’, and 34 features. After correlation analysis, we shrank the number of features to 10 to avoid overfitting: ‘EducationField’, ‘OverTime’, ‘EnvironmentSatisfaction’, ‘JobSatisfaction’, ‘StockOptionLevel’, ‘WorkLifeBalance’, ‘Age’, ‘TotalWorkingYears’, ‘MonthlyIncome’, and ‘YearsWithCurrManager’. Scaler transformation and cross-validation are used to make the model more stable and robust. We started with a linear SVM and logistic regression, measuring the models with statistics such as f1, precision, recall, and roc_auc. Based on a grid search over hyperparameters, we chose the best SVM and logistic regression models. We also used LightGBM, currently the best gradient boosting machine. In the end, we ensembled these 3 models to predict whether an employee is going to stay or leave.
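A minimal sketch of that three-model ensemble (soft voting over SVM, logistic regression, and LightGBM) is shown below; it assumes X holds the 10 encoded features and y the ‘Attrition’ target, and the hyperparameters are only placeholders, not the tuned values from the notebook.

from lightgbm import LGBMClassifier          # pip install lightgbm
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X, y are assumed to already contain the 10 encoded features and the target.
svm = make_pipeline(StandardScaler(), SVC(kernel='linear', probability=True))
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
lgbm = LGBMClassifier(n_estimators=300, learning_rate=0.05)

# Soft voting averages the predicted probabilities of the three models.
ensemble = VotingClassifier(estimators=[('svm', svm),
                                        ('logreg', logreg),
                                        ('lgbm', lgbm)],
                            voting='soft')
ensemble.fit(X, y)
prediction = ensemble.predict(X_new)   # X_new: new employee records to score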
 
 

Below is the calculated field; the attached Jupyter notebook contains all the Python code used for model training and evaluation.

Calculated field :

SCRIPT_STR("return tabpy.query('Predict', _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9, _arg10)['response']",

[age], [totalWorkingYears], [monthlyIncome], [yearsWithCurrManager], [environmentSatisfaction],

[jobSatisfaction], [stockOptionLevel], [workLifeBalance], [educationField], [overTime])
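For the SCRIPT_STR call above to work, a ‘Predict’ endpoint has to be deployed to the TabPy server first. A rough sketch of such a deployment is shown below; the endpoint name and argument order must match the calculated field, model stands for the trained classifier from the notebook, and the client import path depends on the TabPy version installed.

# Assumes a TabPy server is running locally on the default port 9004.
# Newer TabPy versions expose the client as tabpy.tabpy_tools.client;
# the 2017-era beta used `from tabpy_client import Client` instead.
from tabpy.tabpy_tools.client import Client

client = Client('http://localhost:9004/')

def attrition_predict(age, total_working_years, monthly_income,
                      years_with_curr_manager, environment_satisfaction,
                      job_satisfaction, stock_option_level, work_life_balance,
                      education_field, over_time):
    """Score the rows sent from Tableau and return string labels."""
    import pandas as pd
    rows = pd.DataFrame({
        'Age': age, 'TotalWorkingYears': total_working_years,
        'MonthlyIncome': monthly_income,
        'YearsWithCurrManager': years_with_curr_manager,
        'EnvironmentSatisfaction': environment_satisfaction,
        'JobSatisfaction': job_satisfaction,
        'StockOptionLevel': stock_option_level,
        'WorkLifeBalance': work_life_balance,
        'EducationField': education_field, 'OverTime': over_time})
    preds = model.predict(rows)        # `model` is the trained classifier
    return ['Leaving' if p == 1 else 'Staying' for p in preds]

# The endpoint name must match the first argument of tabpy.query(...)
# in the calculated field above.
client.deploy('Predict', attrition_predict,
              'Predict employee attrition', override=True)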
 

SAP Business Planning and Consolidation Overview of BPC 11.0, version for SAP BW/4HANA

BPC 11 is not just an upgrade, it’s a different product

The release of BW/4HANA in 2016 introduced great capabilities for S/4HANA users, but until now it was missing some crucial components: Planning and Consolidation capabilities designed specifically for BW/4HANA.

 
 

SAP BPC 11.0, version for SAP BW/4HANA, is a simplified planning offering.
 

With the release of BPC 11, BW/4HANA now has fully developed Planning and Consolidation capabilities that are above and beyond prior versions of BPC. Highlights include:

  • Web worksheets are now available, allowing browser-based planning and reporting, with multiple planning objects in one screen
  • Integration with SAP Analytics Cloud means syncing plan data from end users who are purely cloud-based
  • New customizable HTML5 interface which includes visualization capabilities and task navigation similar to Business Process Flows
  • Advanced performance achieved through HANA processing and Advanced Data Store Objects (aDSO, no longer InfoCubes or classic DSO)
  • Standard and Embedded model choices are still available
  • Microsoft Excel integration still available with Analysis for Office and EPM interfaces


 

Advanced Analytics:

SAP BPC 11.0 brings built-in, real-time analytics into the application’s Web Client.  Users can easily create graphs and charts to visualize variances.  Additionally, these can be shared amongst colleagues and attached to reports or pinned to the home screen as a favorite.

One key benefit to emphasize is that by integrating with SAP Analytics Cloud, BPC can directly utilize the Predictive Analytics functionality to generate new insights from Plan and Actuals data. Predictive Analytics is also helpful for uncovering the true drivers of business variance.

 

Predictive analytics has the potential to be a game changer for finance.  Business users can take their actuals and plans, run them through predictive processes, and gain meaningful insights using machine learning.  Before running a budget or forecast, finance departments can leverage the predictive functions to see where their current data set is heading, and then uncover influencers and business challenges.
 

The Cloud

With the world moving rapidly towards the cloud, SAP BPC 11.0 offers a tempered pace at which organizations can move their planning and consolidation platform.  This latest release is available not just on the SAP HEC, but also on AWS, Azure, Virtustream, and other cloud providers.  Finance organizations can deploy rapidly thanks to the ease of cloud enablement and can also use hybrid scenarios, which are quite popular.  The platform can start on premise and then move to the cloud.  Additionally, organizations can keep part of their platform (SAP BPC) on premise or in a private cloud and integrate it seamlessly with SAP Analytics Cloud as the public cloud component.

Integration

Integration has been discussed a few times and it’s more than just bringing data into SAP BPC.  Finance users will clearly benefit from being able to bring data in from SAP S/4HANA and other SAP data sources and non-SAP data sources via the integration that is native in SAP BW/4HANA.  The key is what to do with the integration and why this matters to finance.

Accountants, analysts, controllers, and others will have the ability to bring all actuals (from ERP, logistics, marketing systems, human resources, and others) into SAP BPC to get a true reflection of actuals and compare to plans.  What this leads to ultimately is one powerful financial planning system that allows organizations to perform true Integrated Financial Planning.

We all know that the CFO resides not just in finance, but in every department (like sales and marketing), every geography, every region—everyone has a finance team providing true fiscal guidance.  Those silos now have the ability to come together in this integrated platform and provide a universal platform for planning.

SAP BW/4HANA also has the ability to capture things like sentiment analysis, which can be groundbreaking for finance individuals. Such analysis enables them to use market sentiments from Social Media and other sources to judge the direction of the budgets based on real-time market feedback.

Bottom Line

SAP BPC 11.0 with SAP BW/4HANA is poised to be quite the game changer for the office of the CFO and their ability to be a business partner across the enterprise.  With all the tools at their disposal, gone are the days when finance responds with “I’ll get back to you.” Now, they can lead and help drive business decisions.

-Bharat Sunkari 

How to import Large CSV files into SAP Analytics Cloud


In SAP Analytics Cloud, the default maximum upload file size is 100 MB for Excel and 250 MB for CSV. The following procedure shows how to load larger files and create a model.

Step 1 – Split the large Excel file into multiple smaller CSV files based on a partition object; in this example I used Year and Month (see the sketch below).
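If you prefer to script Step 1 instead of splitting the file by hand, a minimal sketch with pandas is given below; file and column names are illustrative, and it assumes the source data has Year and Month columns.

import pandas as pd

# Read the large source file once (use pd.read_excel for .xlsx sources).
df = pd.read_csv('large_source.csv')

# Write one CSV per Year/Month partition so each upload stays under the
# 250 MB SAP Analytics Cloud limit for CSV files.
for (year, month), part in df.groupby(['Year', 'Month']):
    part.to_csv(f'sales_{year}_{month}.csv', index=False)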

Step 2 – Create a Model based on the first file and save the model in Analytics Cloud

Step 3 – Select the model and click import data as shown in the screenshot below. 

Step 4 – Select the option, “import the data from file.”

Step 5 – Make sure the target model is the same one you saved in Step 2, select the second source file, and click Import.

Step 6 – The Model will auto-map, click finish mapping and save the model. (Make sure the import mode is “Appended”)

Repeat the same steps for all the remaining files.

Sample story showing data for all months and years.

Need additional assistance? Contact us today!

Consuming Analytics Cloud Models in Analysis Office


With the Analysis 2.5 release, it is now possible to connect to and consume SAP Analytics Cloud models in Analysis Office.

Note that the following functions are not supported in Analysis:

  • You cannot plan data.
  • You cannot create exception aggregation on the fly (but you can use exception aggregation defined in the model).
  • You cannot create the dynamic calculations Rank and Olympic Rank.

The following restrictions apply for the SAP Analytics Cloud models:

  • Analysis only supports the usage of the default currency (similar to cloud chart behavior). You cannot access non-converted currencies (similar to a cloud table).
  • You cannot consume defined thresholds from the model definition. In Analysis, you can use conditional formatting and Table Design formatting.
  • Live Data Models are not supported

 

Step by Step Process

Creating a new connection

  • Launch Analysis and click Select Data Source

 

  • Right-click and, in the context menu, select New SAC connection

 
 

  • Provide the SAC logon URL (friendly URL format – *..sapbusinessobjects.cloud)

 

  • Select the required Model

 

SAP HANA DYNAMIC TIERING PART-3

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 3

Now, in technical terms, how do you create tables in extended storage?

Creating Hot & warm tables in Extended Storage

 

Tables created in extended storage show up in the schema catalog along with In-Memory tables with extension “EXTENDED”.
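As a reference, here is a rough sketch of the DDL behind the screenshots, executed through SAP’s Python driver hdbcli; table, column, and connection details are illustrative, and the exact clauses supported depend on your HANA revision (check the Dynamic Tiering guides).

from hdbcli import dbapi      # SAP HANA Python client: pip install hdbcli

conn = dbapi.connect(address='hana-host', port=30015,
                     user='DT_ADMIN', password='********')
cur = conn.cursor()

# Hot table: a regular in-memory column store table.
cur.execute("""
    CREATE COLUMN TABLE SALES_HOT (
        ORDER_ID   INTEGER PRIMARY KEY,
        ORDER_DATE DATE,
        AMOUNT     DECIMAL(15,2))
""")

# Warm table: same structure, created in extended storage (dynamic tiering).
cur.execute("""
    CREATE TABLE SALES_WARM (
        ORDER_ID   INTEGER PRIMARY KEY,
        ORDER_DATE DATE,
        AMOUNT     DECIMAL(15,2)) USING EXTENDED STORAGE
""")
conn.commit()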

Once you have the extended table, let’s create a SQL procedure to move data from the in-memory tables to the extended storage (warm) tables.
 
  

Let’s create a view to retrieve data from both Hot & Warm stores.
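Continuing the same hdbcli sketch from above (again with illustrative names), an aging procedure copies rows older than a cutoff date into the warm table and removes them from the hot table, and a union view presents both stores as one dataset.

# Aging procedure: copy old rows into the warm (extended) table, then
# delete them from the hot (in-memory) table.
cur.execute("""
    CREATE PROCEDURE MOVE_TO_WARM (IN cutoff DATE) AS
    BEGIN
        INSERT INTO SALES_WARM
            SELECT * FROM SALES_HOT WHERE ORDER_DATE < :cutoff;
        DELETE FROM SALES_HOT WHERE ORDER_DATE < :cutoff;
    END
""")

# Union view so reports can read hot and warm data as a single set.
cur.execute("""
    CREATE VIEW SALES_ALL AS
        SELECT * FROM SALES_HOT
        UNION ALL
        SELECT * FROM SALES_WARM
""")

# Example: move everything older than the start of 2016 to warm storage.
cur.execute("CALL MOVE_TO_WARM('2016-01-01')")
conn.commit()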
 

Now let’s talk about SAP HANA 2.0 – Dynamic Tiering

How is a multistore table in HANA 2.0 different from an extended table in HANA 1.0?

MULTISTORE TABLES FOR AGING DATA MANAGEMENT FROM HANA 2.0 SPS00+

By using a multistore table in your HANA system, you can "age out" older data with the SAP HANA Dynamic Tiering functionality while still being able to access and update that historical data, without the need to maintain two separate systems for current and archived data. Historical data is still needed sporadically for business analytics and decision-making, and organizations may need to keep older data primarily for auditing and compliance requirements.

 

SIMPLIFIED STORAGE USING PARTITIONING FOR BETTER PERFORMANCE OPTIMIZATION

Each multistore table contains at least two partitions: one in the SAP HANA default storage and one in extended storage. To fully utilize multistore table capabilities, the table needs to be partitioned: you choose which partitions see the most frequent use and keep those in default storage (in-memory), then move data that is accessed less often to less expensive storage, such as extended storage, at your desired frequency (daily, weekly, monthly, annually, and so on). This lets you move large amounts of data within a database and free up in-memory storage for performance. For tasks that require modifications to old historical data, you can reverse the process ("anti-aging") by bringing this data out of extended storage and back into default storage.

When using a multistore table, the data is presented as a single table, and you can perform DML operations (inserting, deleting, and updating) regardless of whether the data is hot or cold. Tables without the time selection property do not allow you to restrict an UPDATE statement to just the column store or extended store partition of a multistore table, which can result in slower performance. For tables with time selection, however, you can use the WITH RANGE RESTRICTION (CURRENT |) clause to restrict the operation to partitions that are either current or more recent than a specified date (see the sketch below).

The faster, recommended method for moving aged data is partitioning: you periodically move entire aged partitions to extended storage as they become old and historical in terms of their value. Such partitions can be very large, and this method can move them at close to 1 terabyte per hour. You can also create a multistore table from an existing non-partitioned column store HANA table.
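To make the time-selection case concrete, here is a rough sketch of a restricted DML statement, continuing the hdbcli example from earlier in this series; table and column names are illustrative, the clause follows the WITH RANGE RESTRICTION form described above, and the exact syntax for your revision is documented in the Dynamic Tiering Administration Guide.

# Touch only the current (hot, in-memory) partitions of a multistore table
# with time selection, so the extended-store partitions are not scanned.
cur.execute("""
    UPDATE SALES_MULTISTORE
       SET STATUS = 'REVIEWED'
     WHERE REGION = 'EMEA'
      WITH RANGE RESTRICTION ('CURRENT')
""")

# ...or restrict the statement to partitions newer than a given date.
cur.execute("""
    DELETE FROM SALES_MULTISTORE
     WHERE STATUS = 'CANCELLED'
      WITH RANGE RESTRICTION ('2017-01-01')
""")
conn.commit()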

 

How many Multistore Table Partition Types?

Multistore tables support single and multilevel partitioning.

  1. Range Partitioning (Single)
  2. Hash-Range and Range-Range Partitioning (multilevel)
  3. Time-Selection Partitioning (multilevel)

WHAT IS A DELTA STORE IN HANA 2.0 & HOW DOES IT PLAY A ROLE IN DYNAMIC TIERING?

The delta store uses row-level versioning (RLV) to allow more than one user to modify the same table concurrently, eliminating waits for transaction locks. RLV utilizes multi-version concurrency control (MVCC), for version management at the row level. The delta store can contain extended tables or multistore tables. You may create the delta store when you create extended storage, or you may alter extended storage to add a delta store later.

DYNAMIC TIERING EXTENDED STORAGE VIEW AT HIGH LEVEL IN HANA 1.0 VS 2.0
 

DYNAMIC TIERING SIZING 

Dynamic Tiering system sizes are based on raw data size.

As a starting point, adjust cores (and memory) based on workload requirements.

    

Sizing "Rules of Thumb" for Dynamic Tiering

Storage
  • 2.5x data compression factor (raw data compressed to <40% of its original size)
  • Metadata and versioning space: 5% of compressed data
  • Temp store: 20% of compressed data
  • For any DT system size, ensure that the storage system can provide 50 MB/sec/core of throughput
  • Size the RLV transaction log at 8 GB * 2 * (number of cores)

Cores
  • 1.5 cores per concurrent query
  • 1 core can load 10 MB/sec of raw data
  • In the sizing boxes above, round up the number of cores to match a hardware vendor's most compatible server model

RAM
  • 16 GB per core (assumes use of delta-enabled extended tables for concurrent writes)

Network
  • 10 Gbit/sec dedicated network between HANA and the DT server
  • Add networks as needed, so that the HANA-DT network is completely isolated from the storage network
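These rules of thumb translate directly into a quick back-of-the-envelope calculation; a small sketch follows, where the input figures are examples only and the official sizing tools remain authoritative.

def dt_sizing(raw_tb, cores):
    """Rough Dynamic Tiering sizing from the rules of thumb above."""
    compressed_tb = raw_tb / 2.5                 # 2.5x compression factor
    return {
        'compressed data (TB)':      round(compressed_tb, 2),
        'metadata/versioning (TB)':  round(0.05 * compressed_tb, 2),
        'temp store (TB)':           round(0.20 * compressed_tb, 2),
        'RLV transaction log (GB)':  8 * 2 * cores,
        'RAM (GB)':                  16 * cores,
        'storage throughput (MB/s)': 50 * cores,
        'raw load rate (MB/s)':      10 * cores,
        'concurrent queries':        int(cores / 1.5),
    }

# Example: 10 TB of raw data on a 32-core dynamic tiering host.
print(dt_sizing(raw_tb=10, cores=32))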

BACKUP & DISASTER RECOVERY (HANA 1.0 & 2.0)

  • HANA backup manages backup of both hot & warm stores
  • Point in time recovery is supported*
  • Data backups combined with log backups allow restore to a point in time or to the most recent point before a crash.
  • Data backups alone only allow restore to the n-1 backup if the most recent backup is incomplete due to a crash.
  • Recovery from storage failures depends on the storage vendor’s disk mirroring & fault tolerance capabilities.
  • SAP HANA systems with extended storage support system replication from HANA 2.0 SPS00, a high availability (HA) feature that maintains a secondary system for fault and disaster recovery support.
  • There are a few limitations for Dynamic Tiering from a system replication perspective in the initial versions.

*In the case of HANA 1.0, HANA Dynamic Tiering does not support HANA system replication.

  • Dynamic Tiering only supports System PKI SSL for system replication. To set up System PKI for internal system replication for Dynamic Tiering, follow the steps in “Secure Internal Communication between Sites in System Replication Scenarios” in the SAP HANA Security Guide.
  • SAP HANA Dynamic Tiering is not recommended to be used in production within MDC systems up to SPS12

*For a list of system replication features that Dynamic Tiering supports, see SAP Note 2356851 

Limitations on SAP HANA Landscape when using Dynamic Tiering:

https://help.sap.com/viewer/88f82e0d010e4da1bc8963f18346f46e/2.0.02/en-US/ddc2f2a4f47c4253b302d349293bd422.html

 

SAP HANA Dynamic Tiering 2.0 SPS 02 (released July 26, 2017) is the latest available version and brings performance & high availability improvements.

SPS 02 brings some exciting innovations to SAP HANA 2.0 Dynamic Tiering:

  • Two-tier asynchronous system replication
  • Three-tier system replication
  • New option for hypervisor vendors to self-certify their virtualization environments for SAP HANA dynamic tiering.
  • Improved performance of data transfer from the dynamic tiering server back to HANA (anti-aging)
  • Smarter caching for cross-store join operations (caching of HANA data in dynamic tiering, to improve performance of repetitive queries that join data in HANA with data in dynamic tiering)

For more information on SAP HANA 2.0 SPS 02, check out the Live Expert Series: What’s New in SAP HANA 2.0 SPS 02.

Conclusion

Dynamic Tiering is continuously evolving to be a completely integrated data aging solution for SAP HANA to manage large volumes of aged data.

*Please note that, at this point in time, Dynamic Tiering is not supported for SAP Business Suite on HANA & S/4HANA. As Dynamic Tiering evolves to be more native to the HANA database, these applications will likely pick it up for large data volume use cases in the future. There are, however, various other approaches to handle cold & hot data for SAP Business Suite on HANA & S/4HANA.

References:

SAP HANA Dynamic Tiering Guides

SAP HANA Administration Guide

SAP HANA Dynamic Tiering Installation Guide

SAP HANA Dynamic Tiering Administration Guide

SAP HANA Dynamic Tiering Option Master Guide

SAP HANA Dynamic Tiering Option release notes

Let Tek Analytics Implement the solution for you. Please contact us @ [email protected] 

* indicates functionality limitations/restrictions applicable.
DT is an abbreviation for Dynamic Tiering.
HA is an abbreviation for High Availability.

 

 

Need additional assistance? Contact us today!

SAP HANA DYNAMIC TIERING PART-2

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 2

IN-MEMORY & EXTENDED STORAGE WITH ONE LOGICAL DB BUT DIFFERENT TIERS

 
 

SAP HANA WITH – DYNAMIC TIERING VS EXTERNAL STORAGE SYSTEMS 

SAP HANA with Dynamic Tiering | External storage systems (like Hadoop, Spark, BigQuery)
Disk capacity depends on HANA storage (additional storage can be purchased) | Unlimited disk capacity
Lower TCO (70% cost reduction in licenses) | Higher TCO
Dynamic tiers within the same HANA system | Additional external systems required
No additional 3rd-party integration tools | Additional integration tools & licenses required
Concurrent inserts from multiple connections on extended tables, allowing multiple writes | No concurrent inserts
All types of data manipulation are possible even on cold/warm data (Create, Read, Update, Delete) | Write once, read many
High-speed data movement between in-memory HANA tables & extended tables | (no equivalent)

 

System Replication for Dynamic Tiering from HANA 2.0*

 

A QUICK LOOK AT HOW DYNAMIC TIERING WORKS WITH SAP BW ON HANA

 

  • BW requirements were the driving force behind extended storage / dynamic tiering.
  • BW uses dynamic tiering with object-based temperature assignment.
  • In-memory resources can be utilized more efficiently with proper staging in DT.

 

HOW DOES DYNAMIC TIERING PLAY A PART IN BIG DATA ANALYTICS?

 SAP BIG DATA ANALYTICS WITH DYNAMIC TIERING VS EXTERNAL STORAGE 

FEW LIMITATIONS FOR REPORTING ON EXTENDED TABLES (HANA 1.0) OR MULTISTORE TABLES (HANA 2.0)

  • SAP HANA advanced capabilities like text search and analytics, series data, and graph are not currently supported with extended tables, or on the dynamic tiering partitions of multistore tables.
  • Extended tables are only allowed in Calculation Views. You cannot add extended or multistore tables to Analytic or Attribute Views.
  • Changing a table in such a view to an extended table invalidates the view.
  • You cannot use extended tables or multistore tables in Core Data Services (CDS) views. If you use an Application Function Library (AFL) function on data residing in dynamic tiering, the data is pulled into SAP HANA for in-memory processing.

 

To be contd ..


Let Tek Analytics Implement the solution for you. Please contact us @ [email protected]


           NAVEEN MANNE @ Tek-Analytics
SAP BIG DATA & ADVANCED ANALYTICS|SAP HANA

 

Tek-Analytics Lumira Pre-built Templates


Introducing Tek-Analytics pre-built, plug-and-play SAP Lumira templates. Kick-start your SAP dashboard design with our templates.

  • Use our templates to save up to 80% of development time
  • Pre-built, plug-and-play templates by functional area
  • Flexible, adaptive, and responsive designs

 

SAP HANA DYNAMIC TIERING PART-1

SAP HANA DYNAMIC TIERING – FUTURE DE FACTO STANDARD SAP IN-HOUSE DATA AGING SOLUTION – Part 1

This article acts as a quick guide to SAP HANA Dynamic Tiering, sharing insights on leveraging SAP HANA to store and access cold/warm data without an external nearline storage solution. Customers who have adopted HANA are now seeing the best data management & processing speeds with an in-memory solution.
 

What if: your HANA data keeps growing rapidly?

What if: your HANA in-memory footprint is growing toward its allocated limit?

What if: your HANA system complains of out-of-memory issues?

What if: the business needs more years of data for reporting & analytics?


What’s next?

These questions are addressed by SAP HANA Dynamic Tiering – a native SAP Big Data solution that extends & enhances HANA in-memory capabilities by reducing the HANA memory footprint, both for BW on HANA and for native HANA scenarios.

 

What is SAP HANA Dynamic Tiering?

SAP HANA Dynamic Tiering is a native (SAP in-house) Big Data solution for handling large enterprise data volumes without external storage systems. Dynamic Tiering uses HANA columnar disk space instead of in-memory storage, keeping data in tables based on SAP IQ smart column store technology. Dynamic Tiering is a deeply integrated, high-performance solution in single or scale-out systems that can handle datasets from terabytes to petabytes.
 
 

Data Aging & Data Temperatures in HANA

What is your data temperature Value?

  
 

FACTORS FUELING DYNAMIC TIERING
 

Cost – Approx. $15K per TB of disk size
Savings – Huge savings in terms of licensing & storage over time
Performance – Up to 13x faster* (compared to other non-SAP tools)
Risk & Complexities – No migration necessary; as simple as in-memory operations
Reliability – Native Big Data solution; a reliable future solution for SAP HANA

 

AREAS RELEVANT FOR DYNAMIC TIERING: 

 

GOALS & CONSIDERATIONS FOR DYNAMIC TIERING

  • Future SAP in-house solution
  • Manage the HANA memory footprint for BW on HANA & native HANA scenarios
  • Cross-enterprise transparency with the EDW
  • Manage data cost-effectively without affecting SLAs
  • Reduced TCO in terms of storage & external integration tools
  • Performance-optimized & centralized control – all in one single database
  • No need for additional 3rd-party integration tools
  • Cost of read/write operations on the database is relatively low/normal
  • No need for warm data tables to be in memory; main memory is used for caching & processing when accessing warm data

FEW TECHNICAL INSIGHTS

  • The extended tables' schema will be part of the catalog.
  • Creating indexes on extended tables can enhance performance.
  • Warm tables can be updated & queried seamlessly, just like in-memory tables (a sketch follows below).
  • Extended tables can be used in Calculation Views for reporting purposes.
  • Performance-optimized calculation & processing engines.
  • Parallel query processing when data has to be processed from both hot & warm stores.
  • Hot data for daily reporting & warm data for periodic use.
  • Customers can retain more years of data for compliance purposes, and the appropriate historical data can be retrieved for reporting whenever necessary.
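To illustrate the "just like in-memory tables" point above, the sketch below indexes and queries a warm (extended) table with plain SQL through SAP's Python driver hdbcli, exactly as you would an in-memory table; host, credentials, and table names are illustrative.

from hdbcli import dbapi      # SAP HANA Python client

conn = dbapi.connect(address='hana-host', port=30015,
                     user='REPORT_USER', password='********')
cur = conn.cursor()

# An index on an extended (warm) table can speed up frequent filters.
cur.execute("CREATE INDEX IDX_SALES_WARM_DATE ON SALES_WARM (ORDER_DATE)")

# Query the warm table just like any other table.
cur.execute("""
    SELECT ORDER_DATE, SUM(AMOUNT)
      FROM SALES_WARM
     WHERE ORDER_DATE >= '2014-01-01'
     GROUP BY ORDER_DATE
""")
for order_date, total in cur.fetchall():
    print(order_date, total)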

To be contd ..

Let Tek Analytics Implement the solution for you. Please contact us @ [email protected]

         NAVEEN MANNE @ Tek-Analytics
SAP BIG DATA & ADVANCED ANALYTICS|SAP HANA

SAP Analytics Cloud Integration with on premise SAP Systems

SAP Analytics Cloud (formerly BOBJ Cloud / Cloud for Analytics) is a SaaS packaged Analytics solution. The tool is capable of analytics, planning and predictive with R integration.


Last week we successfully configured SAP Analytics Cloud with an on-premise SAP application for one of our customers. Below is a detailed architecture overview of the integration.

Components

S/4HANA 1610 FP02 on-premise, BOBJ 4.2, BW/4HANA, and HANA 2.0. All the applications are running on an Amazon Virtual Private Cloud.

SAP Analytics Cloud connection types

  • Import
  • Direct
  • Reverse Proxy

Import Connection – Requires the SAP Cloud Connector; the data connection is not live, so it is not suitable for live/streaming data.

Direct – Using the standard INA services in an SAP application, Analytics Cloud connects directly to the source systems. Requires opening ports to the cloud.

Reverse Proxy – By making use of web server paths in a DMZ layer, live data connections can be created. We used Apache as the web server for this setup.

 

Live Data connection using the reverse proxy

 

High-level configuration steps

  1. Install Apache on the server
  2. Configure SSL for the web server (default port 443)
  3. Configure proxy paths with redirection to the backend SAP systems (BW, S/4, HANA, and BOBJ* are supported)
  4. Configure the default path route to your cloud tenant

*Additional installation and configuration is required for Tomcat

**SAML SSO is supported

Thanks
Karthik Addula

SAP BW Workspace

SAP BW WORKSPACE:

  1. Introduction:

A BW Workspace is a kind of ‘small sandbox’ which IT defines. IT sets the boundaries, IT sets the amount of resources that a BW Workspace can consume and exposes some of the central data models to the BW Workspace (data of the models and their related semantics). The BW Workspace exposes the central data in a logical fashion only. The data is not copied over into the BW Workspace, which means that no data replication is needed.

The goal is to enable the Key Business Users to use this functionality in a dedicated and separated environment, which is deeply embedded and integrated in the existing BW landscape. The BW Workspaces bridge the gap between the architected and the departmental data marts. Therefore, Workspaces are integrated, independent and, as the usage of the SAP Business Warehouse Accelerator is mandatory, they are completely in-memory.

Purpose:

This article will guide you through the implementation of BW workspaces, provide an overview of the different steps, people to involve in your organization, as well as step-by-step instructions on the back end configuration.

OVERVIEW:

BW Workspaces are dedicated areas in a BW system where new models can be created based on central BW data and local data files (i.e. flat files).  The BW Workspace environment itself needs to be created, maintained, and controlled by IT.  Once IT has set up the BW Workspace, the business users can use it to combine data from BW with data in flat files in order to react quickly to new and changing requirements.

The table summarizes the responsibilities of IT and the business. BW Workspaces do not need to be transported; therefore, they bridge the gap between the need for central governance and flexibility.
 
Creating SAP BW Workspace:
The following should be considered when setting up BW Workspaces, Composite Providers, and Queries:
Enter the transaction code /NRSWSP in the SAP BW system; it takes you to the Workspace maintenance screen.
The technical names of BW Workspaces should start with the same prefix as used for the associated InfoArea that owns the BW Workspace.  This will make it easier to identify which BW Workspace belongs to which InfoArea. Then click on Create.
  
Figure 1: Naming the BW Workspace
Leave the Workspace Prefix field blank when creating the BW Workspace (see Figure 2).  When it is set, all objects in the Workspace will start with that prefix, which will reduce the number of characters available for naming the objects.  This property cannot be changed after the BW Workspace has been saved.  It can only be changed at the time that the BW Workspace is being created.

  1. As BW Workspaces always involve the Business perspective and the IT perspective, use the Contact 1 and Contact 2 fields respectively. This will allow transparency on who to contact with questions/issues about the BW Workspace or its contents.
  2. CSV files uploaded to the BW Workspace should also start with the same prefix in order for the authorizations to be easily managed.
  3. The Expiry Date for a BW Workspace should be set to expire at a fixed date after it has been created.

 

                                                        Figure 2: Prefix for Workspace Provider Field
Clicking on Central Providers takes you to the screen below. Here, click on the tree symbol to display all the fields of the MultiProvider; you can make changes, if required, to the fields that are displayed.
          

Here we go through detailed steps for the configuration of BW Workspaces, the security setup and the creation of the BW Workspace itself.
To include the ABAP Web Dynpro application, the following procedure needs to be followed to make sure that the Web Dynpro application for BW Workspaces works properly:
Additionally, the following ICF services for the BW Workspace Designer application (which is a Web Dynpro application) need to be enabled via transaction SICF:

  • RSL_UI_MY_WORKSPACE
  • RSL_UI_CREATE_COPR
  • RSL_UI_CREATE_PROVIDER
  • RSL_UI_CHANGE_WORKSPACE
  • RSL_UI_CHANGE_QUERIES

The above ICF services can be found at the location /default_host/sap/bc/webdynpro/sap      

  • To activate the services, right-click on the service and choose Activate Service.

   

  • Create the authorization roles (see the next section)
  • In the SICF handler list, the handler CL_WDR_MAIN_TASK is defined for the Web Dynpro node.
  • Assign the users the appropriate roles
  • Confirm the parameters in Internet Communication Manager (ICM) are set to
  • Activate the service myssocntl via transaction code SICF.



  

Log on to the ABAP Web Dynpro application via URL.

The URL location is /default_host/sap/bc/webdynpro/sap; provide system, client, user, and password to log in.
 
Under ‘My Workspace’, on the ‘Central Provider’ tab strip, you can see which central data providers have been assigned to the BW Workspace by the IT department.
Create Local Provider: We describe the straightforward way a flat file (MS Excel or CSV format) can be uploaded in the BW Workspace Designer. Enter the proposed type as Template Query and click Continue.
 
In this step, click on ‘Choose File’, browse to the file, and click Transfer File. It automatically sets the path, Local Provider name, and description; then click Next.
 
Change Columns: The two columns of the file that contain character values have been analyzed by the system, and the type ‘Character String’ has been determined. Here you can decide on further data type options, and you can deselect columns.

Confirmation: Click on the Upload and Activate Provider.
 
A Local Provider has been created. You can display the data directly or execute a query on the Provider.  Finally we create another Local Provider by uploading the inventory file data. For this we can also use the Link under ‘Next Steps’. 

Create Composite Provider: Now we will merge the just-uploaded Local Provider data with the central sales data located in the SAP NetWeaver Business Warehouse system. This is done by modeling a Composite Provider. Furthermore, during the modeling process, you will generate two queries on top of the Composite Provider.
Start the application ‘Create Composite Provider’ and skip the optional step of ‘using a query as template’.
 
Select Provider: Select the MultiProvider as well as the Local Provider and choose a technical name and description. Click Next.
 
Model Composite Provider:  
The MultiProvider which has been assigned under ‘settings’ in the BW Workspace maintenance represents the UNION part of the Composite Provider. This means that all its fields have been brought over to the Composite Provider definition and make up the basis. During assignment of a MultiProvider under ‘Central Provider’ in the BW Workspace maintenance it is possible to exclude InfoObjects. This is reflected in the BW Workspace Designer meaning that only these fields selected in the maintenance are available in the Composite Provider model. For the Local Provider to be joined, binding type ‘Inner Join’ is proposed.
You can view the details of the Main Provider by clicking on Details of the Main Provider; it shows the fields of the Main Provider and the links.
 

If you click on Generate Provider, it automatically generates the data. Choose the Generate Provider settings and click OK.
 
Clicking on Details of Links shows the links for the Provider and a join field analysis.
If you use the mapping proposal by pressing ‘Details → Complete Proposal’, the system comes to the same conclusion automatically.
The mapping proposer can analyze the metadata, data content, and existing or historical data models, and it shows the analysis results.
 
Click Next. Here you can edit the names of the fields if needed, then click Next.
 
This is the last screen, where you create a Standard and a Reference Query. Choose Standard as the query type. Queries created here cannot be edited; they are generated so that they are automatically adjusted whenever the Composite Provider changes.
With a Reference Query, restricted and calculated key figures defined in a global or central provider are also available; an administrator can reuse it to create a new query.
 
It creates the query as follows. These queries can immediately be used for reporting once the Composite Provider has been activated.
 
Check and Save:
The Composite Provider is ready for use now.