
Azure Synapse Proof-of-Concept

Synapse Analytics

Table Of Contents

  1. Introduction
  2. Purpose
  3. Prerequisites
  4. Deployment
  5. Post Deployment

Introduction

This template deploys the resources necessary to run an Azure Synapse Proof-of-Concept. The following resources are deployed with this template, along with some RBAC role assignments:

  • An Azure Synapse Workspace with batch data pipeline and other required resources
  • An Azure Synapse SQL Pool
  • An optional Apache Spark Pool
  • Azure Data Lake Storage Gen2 account
  • A new File System inside the Storage Account to be used by Azure Synapse
  • A Logic App to pause the SQL Pool on a defined schedule
  • A Logic App to resume the SQL Pool on a defined schedule
  • A key vault to store the secrets

The data pipeline inside the Synapse Workspace retrieves New York taxi trip and fare data, joins the two datasets, and performs aggregations on them to produce the final aggregated results. Other resources include datasets, linked services and dataflows. All resources are fully parameterized, and all secrets are stored in the key vault; the linked services fetch these secrets through the key vault linked service. Before pausing the SQL Pool, the Logic App checks for active queries; if there are any, it waits 5 minutes and checks again, pausing only once there are none.
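For reference, the pause/resume behaviour that the Logic Apps automate can also be exercised manually with the Azure CLI. This is a minimal sketch; the resource-group, workspace and pool names are placeholders for whatever your deployment produced:

```bash
# Placeholder names -- substitute the values from your own deployment.
RESOURCE_GROUP="synapse-poc-rg"
WORKSPACE_NAME="synapsepocws"
SQL_POOL_NAME="SQLPool1"

# Pause the dedicated SQL pool (what the pause Logic App does on schedule)
az synapse sql pool pause \
  --name "$SQL_POOL_NAME" \
  --workspace-name "$WORKSPACE_NAME" \
  --resource-group "$RESOURCE_GROUP"

# Resume it again (what the resume Logic App does on schedule)
az synapse sql pool resume \
  --name "$SQL_POOL_NAME" \
  --workspace-name "$WORKSPACE_NAME" \
  --resource-group "$RESOURCE_GROUP"
```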

Purpose

This template lets an administrator deploy a Proof-of-Concept environment of Azure Synapse Analytics with a number of pre-set parameters, leaving more time to focus on the Proof-of-Concept at hand and on testing the service.

Prerequisites

You need the Owner role (or, failing that, both the Contributor and User Access Administrator roles) on the Azure subscription the template is being deployed into. This is required to create a separate Proof-of-Concept resource group and to assign the roles necessary for this Proof-of-Concept. Refer to the official documentation on Azure RBAC role assignments.
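If you are not an Owner, a subscription administrator can grant the two alternative roles ahead of time. A minimal Azure CLI sketch, where the sign-in name and subscription ID are placeholders:

```bash
# Placeholder values -- substitute the deploying user's sign-in name and your subscription ID.
ASSIGNEE="user@example.com"
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# Grant Contributor and User Access Administrator at subscription scope
az role assignment create --assignee "$ASSIGNEE" --role "Contributor" \
  --scope "/subscriptions/$SUBSCRIPTION_ID"
az role assignment create --assignee "$ASSIGNEE" --role "User Access Administrator" \
  --scope "/subscriptions/$SUBSCRIPTION_ID"
```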

Deployment

  1. Fork this GitHub repository into your GitHub account.

    Fork

  2. Click the 'Deploy To Azure' button below to deploy all the resources.

    Deploy To Azure

    • Provide the values for the following parameters (a CLI alternative is sketched after these steps):

      • Resource group (create new)
      • Region
      • Company Tla (a three-letter acronym for your company)
      • Option (true or false) for Allow All Connections
      • Option (true or false) for Spark Deployment
      • Spark Node Size (Small, Medium, Large) if Spark Deployment is set to true
      • Sql Administrator Login
      • Sql Administrator Login Password
      • Sku
      • Option (true or false) for Metadata Sync
      • Frequency
      • Time Zone
      • Resume Time
      • Pause Time
      • Option (Enabled or Disabled) for Transparent Data Encryption
      • Github Username (the GitHub account into which this repository was forked)
    • Click 'Review + Create'.

    • On successful validation, click 'Create'.
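If you prefer the command line to the portal button, the same template can be deployed with `az deployment group create`. The template URI and parameter names below are assumptions inferred from the portal parameter list above, not verified against the repository, so check them against the `azuredeploy.json` in your fork; the CLI will prompt for any required parameters you omit, such as the SQL administrator password:

```bash
# Assumed names -- verify against the azuredeploy.json in your fork.
RESOURCE_GROUP="synapse-poc-rg"
REGION="eastus"
TEMPLATE_URI="https://raw.githubusercontent.com/<your-github-username>/101-synapse-with-purview-connection-poc/main/azuredeploy.json"

az group create --name "$RESOURCE_GROUP" --location "$REGION"

az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
  --template-uri "$TEMPLATE_URI" \
  --parameters companyTla=con \
               allowAllConnections=true \
               sparkDeployment=true \
               sparkNodeSize=Small \
               sqlAdministratorLogin=sqladminuser \
               githubUsername="<your-github-username>"
```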

Post Deployment

  • The deploying Azure user needs the "Storage Blob Data Contributor" role on the newly created Azure Data Lake Storage Gen2 account to avoid 403-type permission errors (a role-assignment sketch follows this list).
  • After the deployment is complete, click 'Go to resource group'.
  • You'll see all the resources deployed in the resource group.
  • Click on the newly deployed Synapse workspace.
  • Click on link 'Open' inside the box labelled as 'Open Synapse Studio'.
  • After the workspace opens, click 'Log into Github' and provide the credentials for the GitHub account holding the forked repository.
  • After logging in to your GitHub account, click the 'Integrate' icon in the left panel. A blade will appear from the right side of the screen.
  • Make sure the 'main' branch is selected as the 'Working branch' and click 'Save'.
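As noted above, the deploying user needs 'Storage Blob Data Contributor' on the new storage account. A minimal Azure CLI sketch, where the user, account and resource-group names are placeholders:

```bash
# Placeholder values -- substitute your user and the deployed storage account's details.
ASSIGNEE="user@example.com"
STORAGE_ACCOUNT_ID=$(az storage account show \
  --name "<datalake-account-name>" \
  --resource-group "synapse-poc-rg" \
  --query id --output tsv)

# Grant the role scoped to the Data Lake Storage Gen2 account only
az role assignment create \
  --assignee "$ASSIGNEE" \
  --role "Storage Blob Data Contributor" \
  --scope "$STORAGE_ACCOUNT_ID"
```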

PostDeployment-1

  • Now open the pipeline named 'TripFaresDataPipeline'.
  • Click the 'Parameters' tab at the bottom of the window.
  • Update the parameter values. You can copy the resource names from the recently deployed resource group.
  • Make sure the SQL login username is correct and that the workspace name is a fully qualified domain name, i.e. workspaceName.database.windows.net

PostDeployment-2

  • After the parameters are updated, click 'Commit all'.
  • After a successful commit, click 'Publish'. A blade will appear from the right side of the window.
  • Click 'OK'.

PostDeployment-3

  • Once published, all the resources will be available in live mode.
  • To switch from Git mode to live mode, click the drop-down at the top left corner and select 'Switch to live mode'.

PostDeployment-4

  • Now, to trigger the pipeline, click 'Add trigger' in the top panel, then click 'Trigger now'.
  • Confirm the pipeline parameter values and click 'OK'.
  • You can check the pipeline status under 'Pipeline runs' in the 'Monitor' tab on the left panel.
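The same one-off run can be triggered and monitored from the Azure CLI (this requires the `synapse` CLI extension). A sketch with a placeholder workspace name; the `runId` query path is an assumption about the command's JSON output:

```bash
# Placeholder workspace name -- substitute your deployed workspace.
WORKSPACE_NAME="synapsepocws"

# Trigger a one-off run of the pipeline and capture its run ID
RUN_ID=$(az synapse pipeline create-run \
  --workspace-name "$WORKSPACE_NAME" \
  --name "TripFaresDataPipeline" \
  --query runId --output tsv)

# Inspect the run's status -- the same information as the 'Monitor' tab
az synapse pipeline-run show \
  --workspace-name "$WORKSPACE_NAME" \
  --run-id "$RUN_ID"
```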

PostDeployment-5

  • To run the notebook (if a Spark pool is deployed), click the 'Develop' tab on the left panel.
  • Under the 'Notebooks' drop-down on the left side of the screen, click the notebook named 'Data Exploration and ML Modeling - NYC taxi predict using Spark MLlib'.
  • Click 'Run all' to run the notebook. (It might take a few minutes to start the session.)
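There is no single CLI command equivalent to 'Run all', but you can confirm that the notebook was published to the workspace with the `synapse` CLI extension; the workspace name is a placeholder:

```bash
# List published notebooks -- the NYC taxi notebook should appear after publishing
az synapse notebook list \
  --workspace-name "synapsepocws" \
  --query "[].name" --output tsv
```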

PostDeployment-6
