Data Management and Governance Subject Matter Expert (SME)

Job Title: Data Management and Governance Subject Matter Expert (SME)

Company: TekAnalytics

Location: Remote

About Us: TekAnalytics is a leading technology solutions provider, dedicated to serving our clients with innovative and mission-focused expertise. We are a team of passionate professionals committed to making a difference through digital transformation and cutting-edge technology solutions.

Job Description: TekAnalytics is seeking a Data Management and Governance Subject Matter Expert (SME) to help develop and implement a comprehensive data management and governance framework. This role will address a variety of data-related challenges and work towards improving overall data quality, accessibility, and governance.

Key Responsibilities:

  1. Design and implement data governance strategies using Informatica tools like Axon, DPM, and EDC.
  2. Develop and enforce data standards and guidelines to ensure consistency and high-quality data across the organization, with a focus on the Enterprise Data Platform (EDP) and associated systems.
  3. Lead Master Data Management (MDM) initiatives, driving data quality, consistency, and standardization across all datasets.
  4. Develop dashboards using Informatica tools to monitor data quality improvement efforts and ensure compliance with specific data quality requirements.
  5. Automate data quality assessment and profiling processes using Informatica tools to proactively identify and address data quality issues.
  6. Identify data silos and develop integration strategies using Informatica tools to enhance data accessibility and collaboration.
  7. Implement a people, process, and technology framework for effective data management and governance.
  8. Promote awareness and adoption of data management practices throughout the organization.
  9. Collaborate with the Enterprise Data Platforms team to integrate automated data governance mechanisms and tools.
  10. Engage closely with IT and business stakeholders to align MDM and data governance strategies with specific business objectives.
  11. Regularly review data governance policies, standards, guidelines, and procedures, recommending changes as necessary using Informatica tools.
  12. Assist in the architecture and design of a Universal Data Hub and a centralized entity master repository, implementing best-of-breed and in-line analytics data standards.
  13. Cultivate a data-centric culture within the organization.
  14. Support the design of comprehensive solutions and frameworks for data management and governance.
  15. Administer the data governance program, facilitating data governance council meetings.
  16. Develop, document, and publish data standards for EDP-related applications.
  17. Document standards for data storage, access, and analytics.
  18. Train users on data management tools, such as Informatica’s EDC and Axon.
  19. Extract metadata from existing datasets and catalogs to create a unified metadata catalog using Informatica tools.
  20. Implement a semantic model on Informatica Axon for a common view and discovery portal for business users.
  21. Implement data standards from source to consumption.
  22. Apply automated data quality functions using Informatica tools.

Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
  • Several years of experience in data governance, data management, and data quality roles, specifically with Informatica’s suite of data governance tools like Axon, DPM, and EDC.
  • Proven track record in developing and implementing data governance strategies using Informatica tools.
  • Knowledge and understanding of Federal data, policies, and procedures.
  • Excellent communication, collaboration, and stakeholder management skills.
  • Strong problem-solving and analytical skills.
  • Experience in promoting a data-centric culture in large organizations.
  • Familiarity with data storage, access, and analytics standards.
  • Ability to train and enable users on data management tools.
  • Experience in metadata management and semantic modeling using Informatica tools.

TekAnalytics is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

 
 
 

Informatica Data Governance Engineer

Job Title: Informatica Data Governance Engineer

Location: Remote

About Us: TekAnalytics is a leading technology solutions provider, dedicated to serving our clients with innovative and mission-focused expertise. We are a team of passionate professionals committed to making a difference through digital transformation and cutting-edge technology solutions.

Job Description: TekAnalytics is seeking an Informatica Data Governance Engineer to develop and implement a comprehensive data management and governance framework using the Informatica suite of tools, including Axon, DPM, and EDC. The role will focus on addressing data-related challenges and improving data quality, accessibility, and governance across the organization.

Key Responsibilities:

1. Design and implement data governance strategies using Informatica tools like Axon, DPM, and EDC.
2. Develop and enforce data standards and guidelines to ensure consistency and high-quality data across the organization, with a focus on the Enterprise Data Platform (EDP) and associated systems.
3. Lead Master Data Management (MDM) initiatives, driving data quality, consistency, and standardization across all datasets.
4. Develop dashboards using Informatica tools to monitor data quality improvement efforts and ensure compliance with specific data quality requirements.
5. Automate data quality assessment and profiling processes using Informatica tools to proactively identify and address data quality issues.
6. Identify data silos and develop integration strategies using Informatica tools to enhance data accessibility and collaboration.
7. Implement a people, process, and technology framework for effective data management and governance.
8. Promote awareness and adoption of data management practices throughout the organization.
9. Collaborate with the Enterprise Data Platforms team to integrate automated data governance mechanisms and tools.
10. Engage closely with IT and business stakeholders to align MDM and data governance strategies with specific business objectives.
11. Regularly review data governance policies, standards, guidelines, and procedures, recommending changes as necessary using Informatica tools.
12. Assist in the architecture and design of a Universal Data Hub and a centralized entity master repository, implementing best-of-breed and in-line analytics data standards.
13. Cultivate a data-centric culture within the organization.
14. Support the design of comprehensive solutions and frameworks for data management and governance.
15. Administer the data governance program, facilitating data governance council meetings.
16. Develop, document, and publish data standards for EDP-related applications.
17. Document standards for data storage, access, and analytics.
18. Train users on data management tools, such as Informatica’s EDC and Axon.
19. Extract metadata from existing datasets and catalogs to create a unified metadata catalog using Informatica tools.
20. Implement a semantic model on Informatica Axon for a common view and discovery portal for business users.
21. Implement data standards from source to consumption.
22. Apply automated data quality functions using Informatica tools.

Requirements:

– Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
– Several years of experience in data governance, data management, and data quality roles, specifically with Informatica’s suite of data governance tools like Axon, DPM, and EDC.
– Proven track record in developing and implementing data governance strategies using Informatica tools.
– Knowledge and understanding of Federal data, policies, and procedures.
– Excellent communication, collaboration, and stakeholder management skills.
– Strong problem-solving and analytical skills.
– Experience in promoting a data-centric culture in large organizations.
– Familiarity with data storage, access, and analytics standards.
– Ability to train and enable users on data management tools.
– Experience in metadata management and semantic modeling using Informatica tools.

TekAnalytics is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Software Engineer / Data Modeler (XML & JSON)

Job Title: Software Engineer / Data Modeler (XML & JSON)

Location: Remote

About Us: TekAnalytics is a leading technology solutions provider, dedicated to serving our clients with innovative and mission-focused expertise. We are a team of passionate professionals committed to making a difference through digital transformation and cutting-edge technology solutions.

Job Description:

TekAnalytics is seeking a skilled Data Modeler with expertise in XML & JSON to join our dynamic team and ensure the accuracy, efficiency, and utility of our data structures.

Responsibilities:

  • Design and Develop Data Models: Create, maintain, and optimize XML and JSON data models to ensure they meet business needs and maintain the integrity and consistency of the dataset.
  • Collaborate with Stakeholders: Work closely with business analysts, data architects, developers, and other stakeholders to understand data requirements and provide optimal modeling solutions.
  • Data Mapping: Assist in defining and mapping data transformations from source systems to target structures in XML and JSON.
  • Documentation: Maintain comprehensive documentation for all data models, including entity-relationship diagrams, data dictionaries, and other related artifacts.
  • Validation and Testing: Perform validation checks on the models to ensure data integrity, quality, and adherence to standards.
  • Performance Optimization: Monitor the performance of data models and make necessary optimizations to enhance their efficiency.
  • Stay Updated: Continuously update knowledge on emerging tools, technologies, and best practices in the realm of data modeling, especially concerning XML and JSON structures.
  • Training and Mentoring: Provide guidance and training to junior team members and stakeholders on best practices related to XML and JSON data modeling.

Qualifications:

  • Bachelor’s degree in Computer Science, Information Systems, or a related field. Master’s degree preferred.
  • 5 years of experience in data modeling, with a specific focus on XML and JSON structures.
  • Proficiency in data modeling tools such as ERwin, IBM Data Architect, or similar.
  • Strong understanding of data architecture, database design, and normalization principles.
  • Ability to understand and translate business needs into data models.
  • Strong problem-solving skills, attention to detail, and analytical thinking.
  • Excellent communication skills, both written and verbal.

TekAnalytics is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Plan, Visualize, Manage Promotion and Incentive Program Budgets

In the modern digital world, customers with information at their fingertips are making more informed decisions when buying or leasing products.

Manufacturers can now connect with customers directly and transform traditional selling practices by offering customized incentive programs. Using advanced analytics, manufacturers can gain deeper insight into customer sentiment and the factors that influence closing a deal.

With dynamically changing ownership models and a competitive incentives landscape, manufacturers and retailers need better tools to manage incentive programs and budgets, and to swiftly modify programs based on insights into customer preferences in order to win deals.

TekAnalytics offers a flexible and easy-to-customize solution for Trade Promotion and Incentive Programs Management. The end-to-end Incentive Planning and Trade Promotion Management solution covers the process steps for a series of activities aimed at successful product sales. Sales promotions and program activities, retailer incentives, promotional campaigns, and many other tasks can be accurately planned, budgeted, executed, and audited using TEK-TPIM®. Get your complimentary whitepaper copy today!

Please complete the form to access the whitepaper:

Optimized inventory with Improved Product Flow and Turn

About the Client: Major US Manufacturer and Distributor

The Client is a manufacturer and distributor of food packaging and foodservice products, supplying packers, processors, supermarkets, restaurants, institutions, and foodservice outlets across North America.

 

Business Challenge:

  • The current setup has no integrated, consolidated supply chain reports.
  • For manufacturers, it is difficult not only to predict the maintenance needs of equipment but also to determine the necessary inventory of spare parts for potential repairs.
  • The client needed to increase the availability of spare parts for servicing and repairing machinery and manufacturing assets while reducing the cost of holding inventory.

 

The Solution:

  • Gather supply chain history: drilling, rigging, and pipeline materials and other inputs; part failure and part usage history; lead time history (for reorders); history of supply chain disruptions; and planned versus actual history of capacity ramp-up.
  • Establish an optimal freight cost model that can ensure on-time delivery.
  • Characterize the statistical properties (distributions and correlations) of parts demand and lead times.
  • Understand and account for historical disruptions by combining data from SAP BW and flat files in SAP Analytics Cloud (Analytics, Planning, and Predictive).
  • Link the data sources with an analytics engine.
  • Using the current state of inventory and a chosen economic scenario, automatically set optimal inventory levels for all parts (see the sketch after this list).
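As a rough illustration of the last two bullets, here is a minimal sketch of how characterizing demand and lead-time distributions can feed an optimal inventory level for a single part. It is written in Python with hypothetical figures and a textbook safety-stock formula, not the client's actual model:

    # Minimal safety-stock sketch for one spare part (hypothetical demand and lead-time data).
    # Textbook formula: safety stock = z * sqrt(L_mean * sd_d^2 + d_mean^2 * sd_L^2)
    import numpy as np
    from scipy.stats import norm

    daily_demand = np.array([12, 9, 15, 11, 14, 10, 13, 12, 16, 8])   # units per day (hypothetical)
    lead_times = np.array([7, 9, 8, 10, 7, 8])                        # days per reorder (hypothetical)

    d_mean, d_sd = daily_demand.mean(), daily_demand.std(ddof=1)
    l_mean, l_sd = lead_times.mean(), lead_times.std(ddof=1)

    service_level = 0.95                 # chosen economic scenario: a 95% in-stock target
    z = norm.ppf(service_level)          # safety factor for that service level

    safety_stock = z * np.sqrt(l_mean * d_sd**2 + d_mean**2 * l_sd**2)
    reorder_point = d_mean * l_mean + safety_stock

    print(f"Safety stock: {safety_stock:.0f} units, reorder point: {reorder_point:.0f} units")

In the actual engagement, the same idea is driven by the historical data gathered above and by SAC predictive tools rather than a hand-coded formula.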

 

Results and Benefits:

  • A flexible framework that uses historical data, economic and manufacturing capacity scenarios, and predictive models to set optimal inventory levels with SAC predictive tools
  • Optimized inventory level-setting with fully quantified operating characteristics: expected cost and probability of stocking out

Need additional Assistance?  Contact us today!

Tek Analytics® Becomes SAP® PartnerEdge® Gold Partner

Chicago – October 2019 – Tek Analytics (www.tek-analytics.com) announced today that it joined the SAP® PartnerEdge® program as a gold partner. The program gives partners access to resources, services, and benefits that will help Tek Analytics build and maintain a successful partnership with SAP and optimize business results for clients.

Tek Analytics will help businesses that use SAP solutions to design, implement and integrate SAP solutions; optimize business processes; and provide strategic business consultation. Tek Analytics will take an active part in the support, customization, and deployment of SAP solution-based systems.

“This demonstrates our strong commitment to the SAP PartnerEdge program and reinforces our long-term relationship with both our customers and SAP account teams,” said Bharat Sunkari, CEO and Managing Partner, Tek Analytics.

“We now offer our customers an end-to-end hosted service for SAP solutions, from software acquisition to ongoing support. We will enable our customers to rapidly benefit from the latest innovative solutions for analytics, predictive analytics, and IoT without the need for high-cost capital expenditures. This will help our customers to accelerate their digital core strategy.”



For more information on this press release, contact [email protected]

Maximize Auto Parts fill rate with SAC Predictive reporting

About the Client: Fortune 100 Automotive Customer

The Fortune 100 Automotive Customer is one of the world’s largest and most important car manufacturers, with a market capture of 6 million and 50 production locations across five continents, employing 200,000 people worldwide. Its long-term goal is to be a leading, profitable volume manufacturer and to play a leading role in the new world of the automobility industry.

 

Business Challenge:

Maximizing the dealer order fill percentage, or fill rate, is always a key focus in the automotive after-sales stream. Increasing the fill rate without piling up excessive inventory at the plant requires not only a deep dive into fill rate and supply chain metrics across multiple dimensions, but also deeper insight into data patterns to identify the influencers impacting fill rates.

 

The Solution:

A complete cloud-based reporting solution for tracking fill rate from plant down to part level. Our consultants, in partnership with the business, developed a regression-based predictive model that takes the last three years of dealer order history and fill rates into account. The model predicts future fill rates for the dealer order demand data in the planning system.

  • Unified historical and planning data, along with master data, within BW/4HANA
  • A reporting model that enables multiple drill-down levels and is secured at the region level
  • An easy-to-use yet effective SAC-based visualization to track fill rates historically and drill down to material level, with multiple end-user capabilities including ranking and ad-hoc exploration
  • A regression model, built on three years of dealer order history, that predicts fill rate and quantities at part level for demand planning data (see the sketch following this list)
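As a minimal sketch of what such a regression model can look like, the following Python/scikit-learn example uses hypothetical file and column names rather than the customer's actual BW/4HANA fields:

    # Hypothetical fill-rate regression sketch (illustrative, not the production model).
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Assumed columns: ordered_qty, lead_time_days, plant, part_group, fill_rate
    history = pd.read_csv("dealer_order_history_3yr.csv")              # hypothetical file
    X = pd.get_dummies(history[["ordered_qty", "lead_time_days", "plant", "part_group"]])
    y = history["fill_rate"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model = LinearRegression().fit(X_train, y_train)
    print("R^2 on held-out orders:", round(r2_score(y_test, model.predict(X_test)), 3))

    # Predict fill rates for the demand-planning data, which has the same columns.
    planning = pd.read_csv("demand_planning.csv")                      # hypothetical file
    planning_X = pd.get_dummies(planning[["ordered_qty", "lead_time_days", "plant", "part_group"]])
    planning_X = planning_X.reindex(columns=X.columns, fill_value=0)   # align dummy columns
    planning["predicted_fill_rate"] = model.predict(planning_X)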
 

 

Results and Benefits:

  • The SAC story completely replaced Excel-based fill rate tracking. Users no longer need to perform manual Excel calculations to derive KPIs, which saves considerable analysis time.
  • A user-friendly yet robust UI design
  • Deeper insights into the fill rate data across multiple dimensions
  • A regression model with 95% confidence, trained on three years of historical data, that predicts future fill rates and helps users maintain optimal inventory

Get our case study

TEK Offers Microsoft ECIF to Build the Next-Gen Cloud Data Warehouse

Over the past few years, we have seen numerous organizations look for ways to benefit from a cloud transformation. Common approaches are moving on-prem SAP BW data warehouses to a cloud platform (for example, BW/4HANA on the Azure cloud) or building a cloud data warehouse/data lake.

However, for organizations that have both Microsoft tools and an SAP footprint, leveraging them together to maximize benefit requires a strategic approach. This is especially true if you are using those tools to supplement or consolidate your overall data strategy. Building and executing that strategy requires business buy-in and the ability to showcase a long-term vision.

Fortunately, TEK has a way to help bring best practices and cut down initial costs: Microsoft End Customer Investment Funds (ECIF).

TEK is an SAP and Microsoft Gold partner. We can bring expertise in both Microsoft and SAP to help you put a winning strategy together that will make the most out of your investments.

TEK Analytics can help you understand how you can provide business value by seamlessly integrating your Microsoft and SAP investments to reduce the total cost of ownership.

For more information, contact us today!

Predictive Scenario on Regression


Smart Predict is an Augmented Analytics feature in SAP Analytics Cloud that helps you generate predictions about future events, values, and trends.

The predictive experience in SAP Analytics Cloud is simple. Smart Predict guides you step by step to create a predictive model based on historical data. The resulting model can be used to make trusted future predictions, providing you with advanced insights to guide decision-making.

Smart Predict accelerates the prediction and recommendation creation process by focusing on business outcomes.

Before using Smart Predict for the first time, it really helps to understand a few basic concepts of predictive modeling. So, here they are!

The different types of predictive scenarios

There are currently 3 types of predictive scenarios available in Smart Predict:

  • Classification
  • Regression
  • Time Series

Defining the business problem or business question you want to address will help you choose the right type of predictive scenario.

Classification Scenario: If you’re trying to determine the likelihood of whether something will happen, you’re dealing with a classification scenario.

Ex: You want to predict membership of a category, such as Yes/No (will a customer churn or not?), whether a replacement interval in a manufacturing process will be short or long, or any binary outcome (0 or 1).

Regression Scenario: If you’re trying to predict a numerical value and explore the key drivers behind it, you’re dealing with a regression scenario. 

Ex: Predict the price of an imported product based on projected transport charges and tax duties.
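As an illustration, a regression of price on transport charges and tax duties might look like this minimal Python/scikit-learn sketch (the numbers are hypothetical):

    # Toy regression: predict the price of an imported product from transport charges and tax duties.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[120, 45], [150, 60], [90, 30], [200, 80], [130, 50]])   # [transport, tax] (hypothetical)
    y = np.array([1450, 1720, 1180, 2150, 1530])                           # observed prices (hypothetical)

    model = LinearRegression().fit(X, y)
    print("Key drivers (coefficients):", dict(zip(["transport", "tax"], model.coef_.round(2))))
    print("Predicted price for charges 160 / duties 65:", round(float(model.predict([[160, 65]])[0])))

The coefficients here are a rough analogue of Smart Predict's influencer contributions: they show which inputs drive the predicted value.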

 Time Series Scenario: If you’re trying to forecast a future numerical value based on fluctuations over time, seasons, and other internal and external variables, you’re dealing with a time series scenario.

 

Predictive Scenario based on Regression:

Step 1. Loading the data

• Before we load the data into SAC, we cut out a few records from the original dataset; we will use these to apply our model later. I set aside 20 records each for the red and white wines, which leaves four datasets, as follows (a pandas sketch of this split follows the list):

A. 1,580 red wines for training

B. 20 red wines for prediction

C. 4,879 white wines for training

D. 20 white wines for prediction
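A minimal pandas sketch of that split, assuming the standard UCI wine-quality CSV files (which are semicolon-delimited; the file names here are hypothetical):

    # Carve 20 records out of each wine-quality file to apply the model to later.
    import pandas as pd

    for color in ["red", "white"]:
        df = pd.read_csv(f"winequality-{color}.csv", sep=";")   # UCI files use ';' as delimiter
        predict_df = df.sample(n=20, random_state=1)            # 20 rows held back for prediction
        train_df = df.drop(predict_df.index)                    # the rest is used for training
        train_df.to_csv(f"{color}_wines_training.csv", index=False)
        predict_df.to_csv(f"{color}_wines_prediction.csv", index=False)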

First, create a folder in your Files area to keep all the files in one place.

Inside your newly created folder, create a new dataset by clicking the ‘+’ icon → Dataset.

You will now be asked how you would like to begin: load data from a local file or from a data source. Since we have the data in CSV files, click on the local data source option and select your source file. Load all four datasets; your folder should look like the last screenshot.

        

Step 2. Training the model

  1. Let us now build the predictive scenario. This is where our models for predicting wine quality will be built and trained. On the main menu, click on Create >> Predictive Scenario.
  2. For this problem, the entity we want to predict is an integer between 0 and 10, so we will build a regression model. Select Regression, and give the scenario a suitable name and description.

In the Predictive Scenarios page click on ‘+’ to add a scenario.

Select the scenario type that best suits your analytic dataset.

Fill the details and click on OK.

Select the Input Dataset as shown below.

  • If you want to edit the metadata, click on Edit Variable Metadata (under the input dataset field) to see how SAC has interpreted the dataset: the storage and type of each variable, what SAC should do with missing values, whether a variable is the key of the dataset, and so on.

  • We now need to define the variable we wish to predict. In our case this is the quality of the wine, so click on Target and select quality.

  • If there are variables in the dataset you would like to exclude from modelling, declare them here as shown below. This can help improve the results.

  • Now that SAC knows what it needs to do, we can get started with training. Click on Train at the bottom.

SAC will take a while to train on the dataset and identify the best model for this problem. You will then have the results of your model, as shown below.
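Smart Predict selects and tunes the model automatically; conceptually, the training step is similar to the following scikit-learn sketch (assuming the white-wine training file created in Step 1 and the standard wine-quality columns; this is not SAC's internal algorithm):

    # Rough, hand-rolled equivalent of the Smart Predict training step.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    train = pd.read_csv("white_wines_training.csv")     # hypothetical file from Step 1
    X = train.drop(columns=["quality"])                 # the physico-chemical measurements (influencers)
    y = train["quality"]                                # the target we declared in SAC

    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
    model = GradientBoostingRegressor(random_state=42).fit(X_tr, y_tr)
    print("Validation R^2:", round(model.score(X_val, y_val), 3))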

 

Step 3. Understanding the results

  • After you click on Train and SAC completes the training process, it shows you two tabs of information.
  • Overview tells you about the quality of the results.
  • In our case it is 99% confident about its results.
  • It also says that the error is 0.8, meaning the true value is within ±0.8 of our prediction (a small sketch of how to read these numbers follows this list).
  • The influencer contributions explain the results.
  • Density of the wine and sugar understandably have the highest correlation with wine quality, followed by the other variables.
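A small Python sketch of how to read those numbers, using hypothetical values. Here the error is treated like a root-mean-square deviation and the influencer contributions like a correlation ranking; these are illustrations, not SAC's exact metric definitions:

    # Reading the results: error as a +/- band, influencers as a correlation ranking.
    import numpy as np
    import pandas as pd

    actual = np.array([6, 5, 7, 6, 5, 8, 6, 7])                     # hypothetical true quality scores
    predicted = np.array([6.3, 5.4, 6.5, 6.1, 5.9, 7.2, 5.8, 6.6])  # hypothetical predictions

    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    print(f"Error ~ {rmse:.2f}: a predicted quality of 6 suggests a true value of roughly 6 +/- {rmse:.2f}")

    # Influencer-style view: how strongly each measurement correlates with quality.
    train = pd.read_csv("white_wines_training.csv")                 # hypothetical file from Step 1
    print(train.corr()["quality"].drop("quality").sort_values(key=abs, ascending=False))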

Step 4. Applying the model

  • Now that our model is ready, we can apply it on the dataset we had carved out earlier.
  • Click on the apply model option (icon at far right).

   

  • Under Input Dataset Variables, select the variables that you would like to see in the output.

  • Fill in all the details as shown below and click on OK.

  • Then go to the folder, open the predictions output, and look at the far-right column called Predicted Value (a sketch of the equivalent apply step follows).
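For reference, the equivalent of this apply step outside SAC, continuing the sketch from Step 2 (hypothetical file names; in SAC this all happens through the UI):

    # Apply a trained model to the 20 held-back records and add a Predicted Value column.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    train = pd.read_csv("white_wines_training.csv")       # hypothetical files from Step 1
    holdout = pd.read_csv("white_wines_prediction.csv")

    model = GradientBoostingRegressor(random_state=42).fit(
        train.drop(columns=["quality"]), train["quality"]
    )

    output = holdout.copy()
    output["Predicted Value"] = model.predict(holdout.drop(columns=["quality"]))
    output.to_csv("white_wines_predictions_output.csv", index=False)
    print(output[["quality", "Predicted Value"]].head())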

Repeat the same steps to predict the quality of the red wines.

  • I found the model for red wines had an error of 0.69 with a confidence of 95%. Alcohol seems to be the dominant predictor.
  • I can see both my models in the Predictive Models section at the bottom, along with the status of each (trained / applied).

Create a model based on the Output Dataset of the Predictive Scenario.

Click on Menu → Create → Model, then click on Get data from a datasource and, under Acquire Data, select Dataset.

  • Select the Dataset.

  • Click on the Create Model on the bottom.

Or you can create a story directly on the dataset.

Go to Create → Click on Story

  • Click on Access & Explore Data as shown below.

  • Click on Datasource and click on the Dataset as shown below.

           

  • Select the Dataset.

  • Click on the story and start creating your story.

 

Need additional assistance? Contact us today!