
DP-203 Microsoft Azure Data Engineer Associate

Learn data engineering and work with real-time analytical solutions using Azure data platform technologies.

Free

4 days
All levels

Course Info

In this course, students will learn about data engineering as it pertains to working with batch and real-time analytical solutions using Azure data platform technologies. Students will begin by understanding the core compute and storage technologies that are used to build an analytical solution.

Students will learn how to interactively explore data stored in files in a data lake. They will learn the various ingestion techniques that can be used to load data, whether through the Apache Spark capability found in Azure Synapse Analytics and Azure Databricks, or through Azure Data Factory and Azure Synapse pipelines. Students will also learn the various ways they can transform the data using the same technologies that are used to ingest it.

What Will I Learn From This Course?

Explore compute and storage options for data engineering workloads in Azure

Run interactive queries using serverless SQL pools

Perform data exploration and transformation in Azure Databricks

Explore, transform, and load data into the data warehouse using Apache Spark

Ingest and load data into the data warehouse

Transform data with Azure Data Factory or Azure Synapse Pipelines

Integrate data from notebooks with Azure Data Factory or Azure Synapse Pipelines

Support Hybrid Transactional Analytical Processing (HTAP) with Azure Synapse Link

Perform end-to-end security with Azure Synapse Analytics

Perform real-time stream processing with Stream Analytics

Create a stream processing solution with Event Hubs and Azure Databricks

Prerequisites

Successful students start this course with knowledge of cloud computing and core data concepts, and with professional experience working with data solutions. Specifically, students should have completed AZ-900 – Microsoft Azure Fundamentals and DP-900 – Microsoft Azure Data Fundamentals.

Methodology

Lectures, visual presentations, hands-on demo files and lab exercises, Q&A.

Target Audience

The primary audience for this course is data professionals, data architects, and business intelligence professionals who want to learn about data engineering and building analytical solutions using data platform technologies that exist on Microsoft Azure. The secondary audience is data analysts and data scientists who work with analytical solutions built on Microsoft Azure.

Course Outline for This Programme

• Introduction to Azure Synapse Analytics
• Describe Azure Databricks
• Introduction to Azure Data Lake storage
• Describe Delta Lake architecture
• Work with data streams by using Azure Stream Analytics
• Combine streaming and batch processing with a single pipeline
• Organize the data lake into levels of file transformation
• Index data lake storage for query and workload acceleration

• Explore Azure Synapse serverless SQL pools capabilities
• Query data in the lake using Azure Synapse serverless SQL pools
• Query Parquet data with serverless SQL pools (see the sketch after this list)
• Create external tables for Parquet and CSV files
• Create views with serverless SQL pools
• Secure access to data in a data lake when using serverless SQL pools
• Configure data lake security using Role-Based Access Control (RBAC) and Access Control Lists (ACLs)
• Create metadata objects in Azure Synapse serverless SQL pools
• Secure data and manage users in Azure Synapse serverless SQL pools
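
For a flavor of what this module covers, below is a minimal sketch of querying Parquet files in the data lake through a serverless SQL pool's OPENROWSET function, submitted from Python via pyodbc. The workspace endpoint, storage account, and file path are hypothetical placeholders; the same T-SQL can be run directly in Synapse Studio.

```python
# Hedged sketch: read Parquet straight from the data lake with a
# serverless SQL pool. All names below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace-ondemand.sql.azuresynapse.net;"  # serverless endpoint
    "Database=master;"
    "Authentication=ActiveDirectoryInteractive;"
)

query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/files/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS result;
"""

for row in conn.execute(query):
    print(row)
```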

• Describe Azure Databricks
• Read and write data in Azure Databricks
• Work with DataFrames in Azure Databricks
• Work with advanced DataFrame methods in Azure Databricks
• Use DataFrames in Azure Databricks to explore and filter data
• Cache a DataFrame for faster subsequent queries
• Remove duplicate data
• Manipulate date/time values
• Remove and rename DataFrame columns
• Aggregate data stored in a DataFrame (see the sketch after this list)
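
As a taste of the DataFrame work in this module, here is a minimal PySpark sketch, assuming it runs in an Azure Databricks notebook where `spark` is predefined; the file path and column names are hypothetical.

```python
from pyspark.sql import functions as F

df = spark.read.option("header", True).csv("/mnt/datalake/raw/orders.csv")

cleaned = (
    df.dropDuplicates(["order_id"])                        # remove duplicate data
      .withColumn("amount", F.col("amount").cast("double"))
      .withColumn("order_date", F.to_date("order_ts"))     # manipulate date/time values
      .withColumnRenamed("cust", "customer")               # rename a column
      .drop("legacy_flag")                                 # remove a column
      .cache()                                             # cache for faster subsequent queries
)

# Aggregate data stored in the DataFrame
daily_totals = cleaned.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))
daily_totals.show()
```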

• Understand big data engineering with Apache Spark in Azure Synapse Analytics
• Ingest data with Apache Spark notebooks in Azure Synapse Analytics (see the sketch after this list)
• Transform data with DataFrames in Apache Spark Pools in Azure Synapse Analytics
• Integrate SQL and Apache Spark pools in Azure Synapse Analytics
• Perform Data Exploration in Synapse Studio
• Ingest data with Spark notebooks in Azure Synapse Analytics
• Transform data with DataFrames in Spark pools in Azure Synapse Analytics
• Integrate SQL and Spark pools in Azure Synapse Analytics
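
Here is a minimal sketch of the ingest-and-transform pattern this module teaches, assuming a Synapse notebook attached to a Spark pool (where `spark` is predefined); the storage account, container, and columns are hypothetical.

```python
from pyspark.sql import functions as F

# Ingest: read raw CSV files from the workspace's data lake account
raw = (spark.read
            .option("header", True)
            .csv("abfss://files@mydatalake.dfs.core.windows.net/raw/sales/"))

# Transform: type the columns and derive a partition key
sales = (raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
            .withColumn("year", F.year(F.to_date("order_date"))))

# Land the result as Parquet so serverless SQL pools can query it too
(sales.write.mode("overwrite")
      .partitionBy("year")
      .parquet("abfss://files@mydatalake.dfs.core.windows.net/curated/sales/"))
```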

• Use data loading best practices in Azure Synapse Analytics
• Petabyte-scale ingestion with Azure Data Factory
• Perform petabyte-scale ingestion with Azure Synapse Pipelines
• Import data with PolyBase and COPY using T-SQL (see the sketch after this list)
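
As an illustration of the COPY path covered here, the sketch below submits a COPY statement to a dedicated SQL pool from Python. The server, database, table, and storage URL are placeholder assumptions.

```python
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace.sql.azuresynapse.net;"  # dedicated SQL pool endpoint
    "Database=salesdw;"
    "Authentication=ActiveDirectoryInteractive;"
)

# COPY is the simpler, fully parallel alternative to PolyBase external tables
copy_sql = """
COPY INTO dbo.FactSales
FROM 'https://mydatalake.blob.core.windows.net/files/curated/sales/*.parquet'
WITH (FILE_TYPE = 'PARQUET');
"""

conn.execute(copy_sql)
conn.commit()
```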

• Data integration with Azure Data Factory or Azure Synapse Pipelines
• Code-free transformation at scale with Azure Data Factory or Azure Synapse Pipelines
• Execute code-free transformations at scale with Azure Synapse Pipelines
• Create a data pipeline to import poorly formatted CSV files
• Create Mapping Data Flows

• Orchestrate data movement and transformation in Azure Data Factory
• Integrate Data from Notebooks with Azure Data Factory or Azure Synapse Pipelines

• Secure a data warehouse in Azure Synapse Analytics
• Configure and manage secrets in Azure Key Vault (see the sketch after this list)
• Implement compliance controls for sensitive data
• Secure Azure Synapse Analytics supporting infrastructure
• Secure the Azure Synapse Analytics workspace and managed services
• Secure Azure Synapse Analytics workspace data
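
For the Key Vault lesson, here is a minimal sketch using the azure-keyvault-secrets SDK; the vault URL and secret name are hypothetical, and DefaultAzureCredential is assumed to resolve to a signed-in identity or managed identity.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://myvault.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

client.set_secret("datalake-storage-key", "<secret-value>")  # store a secret
print(client.get_secret("datalake-storage-key").name)        # read it back
```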

• Design hybrid transactional and analytical processing using Azure Synapse Analytics
• Configure Azure Synapse Link with Azure Cosmos DB
• Query Azure Cosmos DB with Apache Spark pools (see the sketch after this list)
• Query Azure Cosmos DB with serverless SQL pools
• Query Azure Cosmos DB with Apache Spark for Synapse Analytics
• Query Azure Cosmos DB with serverless SQL pool for Azure Synapse Analytics
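
Below is a minimal sketch of an HTAP query over Azure Synapse Link, reading the Cosmos DB analytical store from a Synapse Spark pool; the linked service and container names are hypothetical.

```python
# Runs in a Synapse notebook where `spark` is predefined.
df = (spark.read
           .format("cosmos.olap")
           .option("spark.synapse.linkedService", "CosmosDbRetail")  # placeholder
           .option("spark.cosmos.container", "Orders")               # placeholder
           .load())

# Analytical query with no request-unit cost on the transactional store
df.groupBy("status").count().show()
```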

• Enable reliable messaging for Big Data applications using Azure Event Hubs
• Work with data streams by using Azure Stream Analytics
• Ingest data streams with Azure Stream Analytics
• Use Stream Analytics to process real-time data from Event Hubs (see the sketch after this list)
• Use Stream Analytics windowing functions to build aggregates and output to Azure Synapse Analytics
• Scale the Azure Stream Analytics job to increase throughput through partitioning
• Repartition the stream input to optimize parallelization
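
To sketch the Event Hubs side of this module, here is a minimal Python producer using the azure-eventhub SDK; the connection string is elided and the hub name is a placeholder.

```python
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="Endpoint=sb://...",   # elided; supply your namespace connection string
    eventhub_name="telemetry",      # placeholder hub name
)

# Send one small telemetry event for a Stream Analytics job to pick up
batch = producer.create_batch()
batch.add(EventData(json.dumps({"deviceId": "sensor-01", "temperature": 21.7})))
producer.send_batch(batch)
producer.close()
```

On the Stream Analytics side, the job this module builds would typically aggregate such events with a windowing function, for example GROUP BY deviceId, TumblingWindow(second, 30), before outputting to Azure Synapse Analytics.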

• Process streaming data with Azure Databricks structured streaming
• Explore key features and uses of Structured Streaming
• Stream data from a file and write it out to a distributed file system
• Use sliding windows to aggregate over chunks of data rather than all data (see the sketch after this list)
• Apply watermarking to remove stale data
• Connect to Event Hubs to read and write streams
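
Finally, here is a minimal Structured Streaming sketch with a sliding window and a watermark, assuming an Azure Databricks notebook where `spark` is predefined; the paths and event schema are hypothetical.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

schema = (StructType()
          .add("deviceId", StringType())
          .add("temperature", DoubleType())
          .add("eventTime", TimestampType()))

stream = spark.readStream.schema(schema).json("/mnt/datalake/stream/")

# 10-minute windows sliding every 5 minutes; events more than 15 minutes
# late are treated as stale and dropped by the watermark
windowed = (stream
    .withWatermark("eventTime", "15 minutes")
    .groupBy(F.window("eventTime", "10 minutes", "5 minutes"), "deviceId")
    .agg(F.avg("temperature").alias("avg_temp")))

query = (windowed.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "/mnt/datalake/stream-out/")
         .option("checkpointLocation", "/mnt/datalake/checkpoints/stream-out/")
         .start())
```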

Trainer: Renganathan Palanisamy

Renganathan Palanisamy has extensive experience in both the academic and corporate training arenas, which enables him to incorporate the best practices of both approaches and ensure that training delivery is effective and relevant. This is further strengthened by his involvement in various technology-related collaborations with renowned industry players such as Microsoft, IBM and Oracle. During his service as a Programme Leader at KDU College Sdn Bhd, he was responsible for the coordination and delivery of courses offered by Sun Microsystems (in partnership with Guidance View and as an Authorized Sun Education Centre), by Oracle (under its Workforce Development Programme) and by Microsoft (through its MSDN Academic Alliance Programme).

He began his career in the IT industry with a strong inclination towards Java technology and object-oriented design and development practices. He shared his in-depth knowledge of the technology through several academic courses and workshops he conducted for students and academic staff alike. He later ventured into database design and development with Oracle, eventually expanding to Microsoft SQL Server as well as IBM DB2 and IBM Informix.

His current focus is on delivering Data Management and Business Intelligence tracks, strongly complemented by his exposure to non-Microsoft technologies. He has a strong understanding of .NET technologies and tools. His knowledge is sought after, as evidenced by his presence at premier events such as Microsoft TechEd and his involvement in the local SQL PASS group (SPAN). His technical skills, coupled with his know-how of training delivery techniques, have earned him recognition among his peers and the attendees of his training sessions. His specialties include the ability to relate concepts across different technologies and to ensure a smooth transition for trainees migrating to a new technology. Resourcefulness is his trademark, and it enhances the training experience of attendees.

He is a dynamic and versatile individual, willing to take up new challenges and able to apply new skills in a short span of time. His other strengths include good time management, strong analytical skills, the ability to present ideas in innovative ways and, most importantly, a sense of responsibility.
