
azure-orbital-integration's Introduction

Azure Orbital Integration

Azure Orbital Integration is a solution that enables end users to easily deploy the downstream components necessary to receive and process space-borne Earth Observation (EO) data using Azure Orbital Ground Station (AOGS). The solution provides deployment scripts to create an endpoint that receives data from the ground station (the TCP to BLOB component), deploy a virtual machine to process the data (the Processor component), and optionally bring logs from all components into a single place (the Central Logging component).

Overview

The diagram below shows an example of capturing and processing direct broadcast data from NASA's Aqua Earth-observing satellite using the components provided in this repo. The general architecture shown here can be adapted to process space-borne data from other EO satellites as well, by installing processing software on the virtual machine that is specific to the instruments on board the spacecraft.

Azure Orbital Integration Diagram

Deployment Steps

  1. Register and authorize a spacecraft - Creates a spacecraft resource containing the information required to identify the spacecraft and verifies that you are authorized to communicate with it (a hedged SDK sketch follows this list).
  2. Deploy tcp-to-blob - Deploys a TCP endpoint for receiving data from the ground station. To reduce the cost and latency of moving data, deploy TCP to BLOB in the same Azure region as the spacecraft resource unless quota constraints prevent it.
  3. Deploy aqua-processor - Creates an Azure VM for processing the downlinked satellite data.
  4. Deploy central-logging (optional) - Creates a single central store for logs, backed by Azure Data Explorer.
  5. Schedule a contact - Once the components above have been deployed, schedule a contact to downlink data from the spacecraft.
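For step 1, a minimal sketch of creating the spacecraft resource programmatically, assuming the @azure/arm-orbital management SDK. The exact parameter shape varies by SDK version and should be verified; the TLE lines and link values below are illustrative placeholders for NASA Aqua.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AzureOrbital } from "@azure/arm-orbital";

// Assumption: parameter shape follows ARM JS SDK conventions; verify against
// the installed @azure/arm-orbital version before use.
const client = new AzureOrbital(new DefaultAzureCredential(), "<subscription-id>");

await client.spacecrafts.beginCreateOrUpdateAndWait("<resource-group>", "aqua", {
  location: "westus2",
  noradId: "27424", // NASA Aqua
  titleLine: "AQUA",
  tleLine1: "1 27424U 02022A   ...", // current TLE line 1 (elided)
  tleLine2: "2 27424  98.2071 ...",  // current TLE line 2 (elided)
  links: [
    {
      name: "downlink",
      direction: "Downlink",
      centerFrequencyMHz: 8160, // Aqua X-band direct broadcast
      bandwidthMHz: 15,
      polarization: "RHCP",
    },
  ],
});
```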

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. See CODE_OF_CONDUCT.md in the project root for more information and resources.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Security

See SECURITY.md for instructions on reporting security vulnerabilities. Please do not report security vulnerabilities through public GitHub issues.

License

Copyright © 2022 Microsoft. This Software is licensed under the MIT License. See LICENSE in the project root for more information.

azure-orbital-integration's People

Contributors

cheyyeary, dependabot[bot], hankai-jing, jfrazee, karthick-rn, micholas, microsoft-github-operations[bot], platypus87, scschneider, srajtiwari


azure-orbital-integration's Issues

tcp-to-blob logging correlationId

Currently we use the filename for correlation, but not all events have a filename reference. It would be good to generate a correlationId and tag every event in a given session with that id so events can be easily correlated.
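A minimal sketch of what this could look like in the Node service (names here are illustrative, not the existing logger API):

```typescript
import { randomUUID } from "node:crypto";

// One correlationId per ground-station session; every event emitted during
// the session carries it, whether or not a filename is known yet.
export function makeSessionLogger() {
  const correlationId = randomUUID();
  return (event: string, props: Record<string, unknown> = {}) =>
    console.log(
      JSON.stringify({ event, correlationId, ts: new Date().toISOString(), ...props })
    );
}

// usage: const log = makeSessionLogger(); log("socketConnected", { remoteAddress });
```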

TCP to BLOB: Create/update contact profile during deployment

Currently TCP to BLOB users are responsible for creating a contact profile that targets the TCP to BLOB endpoint. This is additional work that increases the risk of user error during initial setup and the risk of the contact profile becoming non-functional if/when the TCP to BLOB configuration changes.

DoD: All TCP to BLOB deployment mechanisms create/update an appropriately configured Orbital contact profile including:

  • AZ CLI deployer
  • Bicep deployer
  • ADO pipeline deployer
  • GitHub Actions deployer (when available)

Logging improvements: optional centralized logging

Currently tcp-to-blob and processor log to their respective Log Analytics Workspaces. This improvement would bring the 2 workspaces together into a single ADX instance. This would be optional, for users who want to bring their logging together in a single location.

To bring the 2 workspaces together, we would need to follow these steps:

  1. Completion of: #19
  2. Create an Event Hubs namespace with two hubs (AppTraces and AppExceptions) to receive log exports from the Log Analytics Workspaces
  3. Create an ADX instance
  4. Create a new database, tables, mappings, update policies, and data connections in ADX (see the sketch after this list)
  5. Configure the 2 workspaces to export AppTraces and AppExceptions to the Event Hubs namespace from step 2
  6. Validate data flowing into the 2 new tables in ADX (log ingestion can be up to 10 min behind)
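As a sketch of steps 3-4, the ADX tables could be created with control commands via the azure-kusto-data client. The database name and table schemas here are illustrative assumptions, not the project's actual schema.

```typescript
import { Client as KustoClient, KustoConnectionStringBuilder } from "azure-kusto-data";

// Assumption: AAD application auth; cluster URL and credentials are placeholders.
const kcsb = KustoConnectionStringBuilder.withAadApplicationKeyAuthentication(
  "https://<cluster>.<region>.kusto.windows.net",
  "<app-id>",
  "<app-key>",
  "<tenant-id>"
);
const kusto = new KustoClient(kcsb);

// Control commands mirroring steps 3-4; schemas abbreviated for illustration.
await kusto.execute("central-logs", ".create table AppTraces (TimeGenerated: datetime, Message: string, Properties: dynamic)");
await kusto.execute("central-logs", ".create table AppExceptions (TimeGenerated: datetime, Message: string, Properties: dynamic)");
```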

BlobDownloadService: implement separate checkpoint options

Currently the Event Hub receiver uses the contact data storage account for storing checkpoints. This can cause unwanted behavior if the Event Grid subscription is not set up appropriately. We should give users the option to use a separate storage account if desired.
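A sketch of the documented checkpoint-store pattern with a dedicated storage account (the environment variable names are hypothetical):

```typescript
import { EventHubConsumerClient } from "@azure/event-hubs";
import { ContainerClient } from "@azure/storage-blob";
import { BlobCheckpointStore } from "@azure/eventhubs-checkpointstore-blob";

// Checkpoints live in a storage account separate from the contact-data account.
const checkpointContainer = new ContainerClient(
  process.env.CHECKPOINT_STORAGE_CONNECTION_STRING!, // hypothetical var name
  "eventhub-checkpoints"
);
const checkpointStore = new BlobCheckpointStore(checkpointContainer);

const consumer = new EventHubConsumerClient(
  "$Default",
  process.env.EVENT_HUB_CONNECTION_STRING!, // hypothetical var name
  process.env.EVENT_HUB_NAME!,
  checkpointStore
);
```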

Fix formatting of pull_request_template.md

Current formatting of the pull request template does not auto-populate a link to the issue and does not correctly display checkboxes in the checklist. The following format works well.

# Description
Fixes #nn

Add description here. 

## Dependencies affected:
- 


## Checklist before merging
- [ ] Properly labeled PR
- [ ] Licensing statement added to new files
- [ ] Downstream dependencies have been addressed
- [ ] Corresponding changes to the documentation have been made
- [ ] Issue is linked under the development section

Fully integrate BlobDownloadService

Once we have BlobDownloadService ready to use, we will need to integrate it with the solution; specifically, it will run as a service on the Aqua processing VM.

  1. Update the deployment of the Aqua processing VM to use BlobDownloadService
  2. Enable log export in the central-logging component

Feature request: Enable Shared Access Signature (SAS)

Summary

Enable Shared Access Signature (SAS) support for securing access to storage accounts.

Use case

Given

  1. Two teams/organizations:
    1. Data provider: Owns the subscription/resource group that is on-boarded to Azure Orbital and contains the spacecraft (spacecraft-p) and contact profile (cp-p).
    2. Data recipient: Owns the subscription/resource group containing the storage account (storage-account-r) to which Orbital contact data is to be delivered.
  2. Data Provider owned TCP to BLOB instance (tcp-to-blob-p) configured with the data recipient's SAS token for storage-account-r.
  3. Data Provider owned contact profile (cp-p) pointing to tcp-to-blob-p endpoint.

When

Data provider owned contact for spacecraft-p hits cp-p.

Then

tcp-to-blob-p will deliver contact data to a BLOB in storage-account-r.
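A minimal sketch of the delivery step, assuming tcp-to-blob-p is configured with a full SAS URL for a blob in storage-account-r (the URL and variable names are illustrative):

```typescript
import { BlockBlobClient } from "@azure/storage-blob";

// The data recipient issues a SAS token scoped to storage-account-r; the data
// provider's tcp-to-blob instance never needs account keys or RBAC access.
const sasUrl =
  "https://storageaccountr.blob.core.windows.net/contacts/aqua-pass-001.bin?<sas-token>";
const blob = new BlockBlobClient(sasUrl);

// contactData would be the bytes received during the ground station contact.
const contactData: Buffer = Buffer.from("...");
await blob.uploadData(contactData);
```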

Spacecraft TLE update

Problem

The TLE for an Orbital spacecraft must be specified during initial creation, resulting in a static TLE setting. Over time, the actual orbit the TLE is meant to represent drifts, eventually resulting in contact failures if the spacecraft's TLE setting is not updated.

  1. Orbital users are likely not to be aware of TLE drift until their contacts fail.
  2. Orbital users are responsible for establishing a process or system for periodically updating the TLE for each of their spacecraft. Any failure of this process may result in missing data for all contacts until the TLE is properly updated.

DoD

Create an AKS service and/or Azure Function that periodically updates the TLE for one or more configurable spacecraft.
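A hedged sketch of such an updater, assuming the Celestrak GP endpoint for fetching current TLEs and the @azure/arm-orbital SDK for the update; both the URL format and the SDK parameter shape should be verified before use.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { AzureOrbital } from "@azure/arm-orbital";

// Assumption: Celestrak's GP endpoint returns a 3-line TLE for a NORAD catalog number.
async function refreshTle(noradId: string, resourceGroup: string, name: string) {
  const res = await fetch(
    `https://celestrak.org/NORAD/elements/gp.php?CATNR=${noradId}&FORMAT=TLE`
  );
  const [titleLine, tleLine1, tleLine2] = (await res.text())
    .trim()
    .split("\n")
    .map((line) => line.trim());

  const client = new AzureOrbital(new DefaultAzureCredential(), "<subscription-id>");
  // Assumption: a real implementation should read the existing spacecraft
  // resource and resubmit it in full with only the TLE fields changed.
  await client.spacecrafts.beginCreateOrUpdateAndWait(resourceGroup, name, {
    location: "westus2",
    titleLine,
    tleLine1,
    tleLine2,
  });
}
```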

Feature request (TCP to BLOB): Key Vault support

Currently the storage account connection string is stored as a Kubernetes secret.

DoD: The storage account connection string or SAS token is stored in Azure Key Vault, from which it is retrieved by TCP to BLOB.
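A minimal sketch of the retrieval side, assuming @azure/keyvault-secrets with managed identity (the vault URL and secret name are hypothetical):

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

// With managed identity / workload identity, no credentials live in the cluster.
const vault = new SecretClient(
  "https://<vault-name>.vault.azure.net",
  new DefaultAzureCredential()
);

// Hypothetical secret name for the contact-data storage connection string.
const secret = await vault.getSecret("contact-data-storage-connection-string");
const connectionString = secret.value;
```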

Logging improvements for tcp-to-blob

Currently we are leveraging the built-in logging that AKS provides, but we should deploy a separate Log Analytics Workspace and a separate Application Insights instance, and point AKS to the new workspace for logging.

Application Insights might not be required if AKS allows linking directly to a workspace.

Add platform tags to docker build

MacBooks with ARM chips do not, by default, build Docker images that can run on AKS. Fix this by explicitly targeting Linux with the --platform flag (e.g., docker build --platform linux/amd64).

TCP to BLOB: Use Dotenv for env configuration

Currently TCP to BLOB requires a bash-like shell for environment configuration, which may be problematic in some environments. Additionally, lingering environment variables set in the shell have caused confusion and unexpected behavior for some users.

DoD: TCP to BLOB leverages Dotenv for environment configuration to enable cross-platform configuration in a standard way.
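A minimal sketch of the Dotenv pattern (the variable names and defaults are hypothetical):

```typescript
import * as dotenv from "dotenv";

// Loads key=value pairs from a .env file in the working directory into
// process.env, without requiring a bash-like shell or leaving lingering
// variables in the parent environment.
dotenv.config();

const port = Number(process.env.TCP_TO_BLOB_PORT ?? 50111); // hypothetical names
const containerName = process.env.CONTACT_DATA_CONTAINER ?? "contact-data";
```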

Create initial actions workflow.

To start with a simple foundation, create a workflow that does linting. Depending on when this issue is picked up, add a code coverage threshold as well as a job to run unit tests.

Update processor to follow similar patterns

We need to update the processor component (aqua-processor) to reflect how we deploy tcp-to-blob using Bicep: simplify variable declarations (not everything needs to be adjustable) and adopt the .env pattern.

Prettier

Add a prettier NPM script to format the code (e.g., "format": "prettier --write .").

Orbital Helper

Create a NodeJS package that makes it easier to interact with the Orbital API.

tcp-to-blob logging improvement

Currently we log the total end-to-end time, from when a connection starts sending data to tcp-to-blob to when the upload is finished. It would be good to break this down into separate tcp-to-blob receive time vs. tcp-to-blob upload time.
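A sketch of the instrumentation, splitting the session into receive and upload phases (the event shape is illustrative):

```typescript
// Timestamps bracket the two phases of a tcp-to-blob session.
const tStart = Date.now();
// ... receive all TCP data for the session from the ground station ...
const tReceived = Date.now();
// ... upload the received data to BLOB storage ...
const tUploaded = Date.now();

console.log(
  JSON.stringify({
    event: "sessionTimings",
    receiveMs: tReceived - tStart,
    uploadMs: tUploaded - tReceived,
    totalMs: tUploaded - tStart,
  })
);
```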

Trigger ADO build from github

Set up an event to trigger an ADO pipeline deployment based upon an agreed state in GitHub, most likely a named GitHub release.

Deployments via GitHub Actions

Create a pipeline in GitHub to deploy TCP-to-Blob. If possible (and if it makes sense) we should leverage existing scripts that are used with other deployment methods to avoid having to maintain duplicate scripts.

BlobDownloadService does not clean up failed downloads

If a message in the queue points to a deleted or non-existent blob, BlobDownloadService still creates the tmp file before attempting the download, which results in incomplete, lingering files. We should clean these up if a blob download fails for any reason.
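A sketch of the cleanup, wrapping the download so a failed attempt removes its partial tmp file (the helper name is hypothetical):

```typescript
import { unlink } from "node:fs/promises";

// Ensures a failed download does not leave an incomplete tmp file behind.
async function downloadWithCleanup(
  tmpPath: string,
  download: (dest: string) => Promise<void>
): Promise<void> {
  try {
    await download(tmpPath);
  } catch (err) {
    await unlink(tmpPath).catch(() => undefined); // best-effort cleanup
    throw err;
  }
}
```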

Build BlobDownloadService for more event driven processing

Currently we rely on cron jobs to check a storage account for new blobs on a regular cadence. It would be ideal to have a service that listens for Event Grid events via Service Bus and then downloads the blob for processing. With this strategy we could also trigger resources to start up and begin processing, allowing them to be deallocated when not in use.
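A sketch of the event-driven flow, assuming Event Grid BlobCreated events routed to a Service Bus queue (the queue name, payload shape, and auth are assumptions):

```typescript
import { ServiceBusClient } from "@azure/service-bus";
import { BlockBlobClient } from "@azure/storage-blob";

const sbClient = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING!);
const receiver = sbClient.createReceiver("contact-data-events"); // hypothetical queue

receiver.subscribe({
  async processMessage(msg) {
    // Assumption: Event Grid BlobCreated payloads carry the blob URL in data.url.
    const blobUrl: string = msg.body?.data?.url;
    const blob = new BlockBlobClient(blobUrl); // needs a SAS or credential in practice
    const bytes = await blob.downloadToBuffer();
    // ... hand bytes off to the processor ...
  },
  async processError(args) {
    console.error("Service Bus error:", args.error);
  },
});
```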

Bring az cli up to parity with Bicep for tcp-to-blob

With the recent Bicep changes adding Event Grid and Service Bus for contact data notifications, we need to bring the az cli deployment up to parity so that it provides the same deployment experience.
