
helix-home's Introduction


Project Helix


Welcome to Project Helix!

Background

Read all of this.

Developing Helix

Hackathons

There's always a next hackathon. Check the list and sign up!

Communication

Anything relevant should be persisted in a GitHub issue (and a pull request) in the respective repository, or a GitHub discussion.

We hang out in the #franklin-chat Slack channel (Enterprise Grid).

There is also the distribution list grp-project-helix-friends (Enterprise Grid) where we send around invites for our bi-weekly Helix Show & Tells.

helix-home's People

Contributors

3vil3mpir3, alexkli, anfibiacreativa, bdelacretaz, bstopp, cgoudie, davidnuescheler, dkuntze, dominique-pfister, dylandepass, fkakatie, gilliankrause, karlpauls, kgera, koraa, kptdobe, langswei, marquiserosier, maxakuru, meryllblanchet, mhaack, mokimo, rofe, royfielding, ruxandraburtica, sdmcraft, simonwex, stefan-guggisberg, trieloff, tripodsan


helix-home's Issues

Theme: Cache Management

This is a "thematic ticket" meant to collect general discussions about how we manage the various caches involved in content publishing.

The idea is that "if it's about caching it should be discussed or referenced here". We'll see how that works as we progress!

I'm assigning this ticket to myself with the aim of coordinating this experiment (using such tickets) but everybody's welcome to contribute, of course.

Create a new repository with the name `helix-pingdom-status`

Create a new repository with the name helix-pingdom-status – see #30

  • Use helix-library or helix-service template
  • Add topics to the repository, at least helix
  • Add the group "Project Helix Admins" with Admin permissions to the list of collaborators
  • Add the group "Project Helix Developers" with Write permissions to the list of collaborators (Project Helix Guests will be taken care of automatically)
  • Upload a social media image (use this Spark template)
  • Set the repository description
  • Update the repository README.md (search for adobe/helix-service or adobe/helix-library)
  • Set up Project Bot
  • Set up CircleCI
  • Set up Greenkeeper

Note: Greenkeeper will find your new repository automatically, but it might file an issue that it cannot get the build status if you haven't set up CircleCI yet. In this case, go to Greenkeeper and click the "fix repo" button

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

New service repo: helix-resolve-git-ref

Create a new repository with the name helix-…

  • Use helix-library or helix-service template
  • Add topics to the repository, at least helix
  • Add the group "Project Helix Admins" with Admin permissions to the list of collaborators
  • Add the group "Project Helix Developers" with Write permissions to the list of collaborators (Project Helix Guests will be taken care of automatically)
  • Upload a social media image (use this Spark template)
  • Set the repository description
  • Update the repository README.md (search for adobe/helix-service or adobe/helix-library)
  • Set up Project Bot
  • Set up CircleCI
  • Set up Greenkeeper

Note: Greenkeeper will find your new repository automatically, but it might file an issue that it cannot get the build status if you haven't set up CircleCI yet. In this case, go to Greenkeeper and click the "fix repo" button

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Add possibility to deploy multiple code repos to the same second-level domain

Is your feature request related to a problem? Please describe.

  • As owner of a second-level domain, e.g. mybusiness.com, I want to be able to run several projects, e.g. www.mybusiness.com, info.mybusiness.com.
  • As a Helix developer, I want to reuse my existing domain for several projects, because I don't want to set up multiple domains, e.g. dev.mysite.com, project4.mysite.com, test.mysite.com, tripod-test.mysite.com.

Describe the solution you'd like
Ideally, I want to be able to specify the domain in the helix-config.yaml and use the same Fastly service. Each project would deploy to its own action package.

Describe alternatives you've considered
The only possibility today is to create multiple branches in the same repository and then use deploy flags, like --only, to deploy and publish the respective branches.
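For illustration, the desired setup could look roughly like this in helix-config.yaml (a sketch only; apart from strains, the key names and repo URLs here are assumptions, not an existing schema):

```yaml
# hypothetical sketch: two projects sharing one Fastly service
strains:
  - name: www
    url: https://www.mybusiness.com
    code: https://github.com/mybusiness/www-site.git
    package: www-site        # each project deploys to its own action package
  - name: info
    url: https://info.mybusiness.com
    code: https://github.com/mybusiness/info-site.git
    package: info-site
```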

Theme: Developer Experience

Theme ticket for things related to the Helix developer experience, annoyances which are not really bugs, etc.

Please mention this ticket in related ones in the various modules, so we get an overview of such issues here.

helix-pingdom-status service and library

We have the same code that generates a Pingdom status report in almost every action. I'd refactor it into a small library that we can
a) depend on in various projects
b) use for monitoring the availability of Runtime itself (right now, we only monitor the API Gateway)
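A minimal sketch of what the shared library could export, assuming the Pingdom "HTTP Custom Check" XML format (the function name and response shape are illustrative, not the actual library API):

```javascript
// Sketch: build a Pingdom-compatible XML status response for a web action.
// Pingdom's custom HTTP check expects <status> and <response_time> (seconds).
function pingdomStatus(responseTimeMillis, ok = true) {
  const status = ok ? 'OK' : 'FAILED';
  const body = `<pingdom_http_custom_check>
  <status>${status}</status>
  <response_time>${(responseTimeMillis / 1000).toFixed(3)}</response_time>
</pingdom_http_custom_check>`;
  return {
    statusCode: ok ? 200 : 502,
    headers: { 'Content-Type': 'application/xml' },
    body,
  };
}
```

Each action could then return something like pingdomStatus(elapsed) from its status route instead of carrying its own copy of this code.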

Document Helix Cookies

For compliance reasons, we need public documentation listing all cookies Helix issues, what they contain, and what they are used for.

Move trieloff/helix-index-tree to adobe

Create a new repository with the name helix-index-tree

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Helix Pages Themes as Git Repositories

The demo by @kptdobe this morning prompted this idea: at some point we'll need themes for Helix Pages.

I quite like how they are handled in GitHub Pages: each theme is provided by a GitHub repository, like https://github.com/pages-themes/hacker . A config file in your content repository selects which theme to use, and if you want to override part of the theme you just copy the corresponding file(s) into your content repository and modify them.

The problem might be backwards compatibility: when a theme is updated, your changes might break. Pointing to a specific version/tag of the theme would help with that.
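If themes were configured the way GitHub Pages does it, the content repository could pin a theme version; the key names below are purely illustrative, as no such option exists yet:

```yaml
# hypothetical theme selection in the content repository's config
theme:
  repository: pages-themes/hacker
  ref: v0.2.0   # pin a tag so theme updates can't silently break overrides
```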

(I suppose creating this ticket here is correct as it's a general product design discussion)

Theme: Documentation and Examples

This is a "thematic ticket" meant to collect general discussions about documentation and examples along with links to more specific tickets.

The idea is that "if it's about documentation it should be discussed or referenced here". We'll see how that works as we progress!

Create a new repository with the name helix-example-advanced

Create a new repository with the name helix-example-advanced

  • Use helix-library or helix-service template
  • Add topics to the repository, at least helix
  • Add the group "Project Helix Admins" with Admin permissions to the list of collaborators
  • Add the group "Project Helix Developers" with Write permissions to the list of collaborators (Project Helix Guests will be taken care of automatically)
  • Upload a social media image
  • Set the repository description
  • Update the repository README.md (search for adobe/helix-service or adobe/helix-library)
  • Set up Project Bot
  • Set up CircleCI
  • Set up Greenkeeper

Note: Greenkeeper will find your new repository automatically, but it might file an issue that it cannot get the build status if you haven't set up CircleCI yet. In this case, go to Greenkeeper and click the "fix repo" button

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Create helix-ops repo

Our monitoring code is currently spread across multiple Helix repos. We should consolidate it in its own dedicated library, so it can be evolved independent of helix-status, and reused without having helix-status as a dependency.

WIP: https://github.com/adobe/helix-ops

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Move trieloff/helix-index-files to adobe

Create a new repository with the name helix-index-files

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Clarify Getting Started with Helix

A couple of things deserve clarification, a.k.a. stating what may sound obvious:

  • nvm can be used to install and set up a supported version of Node
  • apihost must be adobeioruntime.net instead of runtime.adobe.io
  • HLX_FASTLY_NAMESPACE is the Service ID
  • HLX_FASTLY_AUTH is a personal token that one needs to create
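Putting the last two items together, a minimal .env for the CLI might look like this (a sketch; the values are placeholders):

```shell
# .env read by hlx; values are placeholders
HLX_FASTLY_NAMESPACE=your-fastly-service-id
HLX_FASTLY_AUTH=your-personal-fastly-token
```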

Default Selector for Strains

Instead of pushing the Selector Dispatch/Routing Logic down into the Pipeline as adobe/helix-pipeline#169 suggests, we could simplify this by allowing each strain to have a default selector. This way, all request-based conditions can be used for routing to the correct selector/template without having to build an additional router script.
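To make the idea concrete, a strain with a default selector might be configured like this (default_selector is a hypothetical key illustrating the proposal, not an existing option; the condition syntax follows the existing VCL-style strain conditions):

```yaml
strains:
  - name: mobile
    condition: req.http.User-Agent ~ "Mobile"
    default_selector: touch   # would render the touch.* templates by default
```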

I've discussed this with @tripodsan, but would also like to hear @kptdobe's opinion.

Move trieloff/helix-index-big-tree to adobe

Create a new repository with the name helix-big-tree

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Create Repository for Dispatch Action Experiment

Create a new repository with the name helix-experimental-dispatch as an additional way to explore adobe/helix-pipeline#365. After conclusion of the experiment, we either archive the repo or remove the -experimental from the name.

  • Use helix-library or helix-service template
  • Add topics to the repository, at least helix
  • Add the group "Project Helix Admins" with Admin permissions to the list of collaborators
  • Add the group "Project Helix Developers" with Write permissions to the list of collaborators (Project Helix Guests will be taken care of automatically)
  • Upload a social media image
  • Set the repository description
  • Update the repository README.md (search for adobe/helix-service or adobe/helix-library)
  • Set up Project Bot
  • Set up CircleCI
  • Set up Greenkeeper

Note: Greenkeeper will find your new repository automatically, but it might file an issue that it cannot get the build status if you haven't set up CircleCI yet. In this case, go to Greenkeeper and click the "fix repo" button

Open the #helix-chat Slack channel, then type /github subscribe adobe/helix-…

Create a new repository with the name helix-example-basic

Create a new repository with the name helix-example-basic

  • Use helix-library or helix-service template
  • Add topics to the repository, at least helix
  • Add the group "Project Helix Admins" with Admin permissions to the list of collaborators
  • Add the group "Project Helix Developers" with Write permissions to the list of collaborators (Project Helix Guests will be taken care of automatically)
  • Upload a social media image
  • Set the repository description
  • Update the repository README.md (search for adobe/helix-service or adobe/helix-library)
  • Set up Project Bot
  • Set up CircleCI
  • Set up Greenkeeper

Note: Greenkeeper will find your new repository automatically, but it might file an issue that it cannot get the build status if you haven't set up CircleCI yet. In this case, go to Greenkeeper and click the "fix repo" button

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Introduce a way to track initial request across all action layers

As a follow-up of last week's hackathon research, I continued working on one main issue: it is really hard today to track the journey of a request, especially with the dispatch action triggering multiple requests with intent to fail. This leads to many actions being triggered, and it is almost impossible via the logs to trace which ones belong together.

I created various branches (still WIP) in various repos to address the issue like this: Fastly generates an x-referrer header computed from the current incoming request (the value would be something like https://alex.helix-demo.xyz/apage.html?w=a) and sends that header to the dispatch action, which propagates it to all actions it triggers. This allows our openwhisk-action-utils logger to automatically log the header, similar to what we do for ow properties. This would become default behavior: x-referrer would always be output in the logs automatically if present.

This would allow us to query Coralogix logs and with one url find all the corresponding actions triggered.
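The propagation step could be as small as copying the header into every downstream invocation; a sketch under the assumptions above (the __ow_headers field follows the OpenWhisk web action convention; the function name is illustrative):

```javascript
// Sketch: forward the x-referrer header (set by Fastly) from the incoming
// request params to the params of every action the dispatcher invokes.
function withReferrer(incomingParams, downstreamParams) {
  const referrer = incomingParams.__ow_headers
    && incomingParams.__ow_headers['x-referrer'];
  if (!referrer) {
    return downstreamParams; // nothing to propagate
  }
  return {
    ...downstreamParams,
    __ow_headers: {
      ...(downstreamParams.__ow_headers || {}),
      'x-referrer': referrer,
    },
  };
}
```

The dispatch action would wrap the params of each static/html invocation with withReferrer, so the logger picks the header up everywhere.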

While this looks pretty obvious, I still have three open items / questions:

  1. I need to make sure the reporting can be done out of Coralogix (I am not worried, their query language is really strong, but I could not test it yet; plugging all the pieces together in development is really hard)
  2. For each action triggered in the chain, we would need to think about providing / forwarding the x-referrer header (for now, it is only dispatch -> static / html... but maybe there will be more layers in the future)
  3. I am not sure if this is "serverless" state of the art. But on the other hand, I do not see how we could do it differently than by propagating a "unique id"

WDYT?

Here are the POC branches (I would still need to create appropriate issues in each repo if we agree):

Quo Vadis Vulcain?

Quick thoughts from discussions with @stefan-guggisberg

  • Vulcain as an alternative to GraphQL
  • use case: site admin needs the list of pages in the site, their title, description, and visitors30days (the list of pages is provided by one service, title and description by another and visitors30days by yet another service)
  • the client does not want to deal with the latency of making 1+(2*n) requests (one for the list, n for the items times two services)
  • the client also does not want to overfetch all the fields provided by the page details and page stats resource

Vulcain is a spec to solve that:

  • Overfetching can be avoided with the Fields request header, which specifies a whitelist of allowed JSON field names (with wildcards) to retrieve
  • Underfetching can be avoided with the Preload request header, which asks the server to HTTP2-server-push (short push) resources matching a particular field pattern

Server-side implementation takes:

  • a helix-fields-filter that uses JSON Pointer-like wildcards to strip out superfluous properties from JSON responses
  • a helix-preload-filter that uses JSON Pointer-like wildcards to request HTTP2 server pushes for linked resources using the Link response header
  • Fastly provides us with a server push by treating Link response headers as invitations to perform server pushes.
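The field filtering could work roughly like this sketch (a deliberately simplified matcher; the real helix-vulcain-filters semantics may differ, and "*" here matches exactly one path segment):

```javascript
// Sketch: keep only properties whose path matches some "/"-separated,
// JSON-Pointer-like pattern. A node survives when a pattern agrees with
// its path on every shared segment, so ancestors of allowed leaves survive.
function allowed(patterns, path) {
  return patterns.some((pattern) => {
    const segs = pattern.split('/');
    const shared = Math.min(segs.length, path.length);
    for (let i = 0; i < shared; i += 1) {
      if (segs[i] !== '*' && segs[i] !== path[i]) {
        return false;
      }
    }
    return true;
  });
}

function filterFields(value, patterns, path = []) {
  if (typeof value !== 'object' || value === null) {
    return value; // primitives pass through once their path is allowed
  }
  const out = Array.isArray(value) ? [] : {};
  for (const [key, child] of Object.entries(value)) {
    const childPath = path.concat(key);
    if (allowed(patterns, childPath)) {
      out[key] = filterFields(child, patterns, childPath);
    }
  }
  return out;
}
```

One known limitation of this prefix-matching scheme: an ancestor whose subtree contains no matching leaf survives as an empty object.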

Client-side implementation takes:

  • a caching HTTP2 client that supports server pushes in the browser, i.e. fetch
  • the same for iOS/iPadOS/macOS (Swift)
  • (maybe) the same for Node

Watchers for content repositories

Helix should allow users to register repository watchers, so that actions can be performed in response to changes in the content repository. Actions can be project specific (code resides in the code repository) or global (any URL that accepts the right kind of POST requests).

Examples for actions:

  • invalidate cache
  • check for broken links
  • trigger Adobe I/O events

I see the following steps to implementation:

In strains.yaml

  • add a new key watchers (array)
  • each entry is a string (URL or file name of a js file in the current repo)
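In strains.yaml terms, that could look like this (the watchers key follows the proposal above; the entries are illustrative):

```yaml
watchers:
  - src/watchers/invalidate-cache.js       # project-specific: js file in the code repo
  - src/watchers/check-links.js
  - https://hooks.example.com/helix-events # global: URL accepting the POST payload
```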

In hlx deploy

  • each js file referenced by watchers gets packaged and deployed as an action to OpenWhisk
  • for each js file, the exported filters method is called and the response is kept
  • for each URL, a GET request (Accept application/json) is made and the response (if it is a JSON object) is kept
  • the file src/openwhisk/webhook.js gets deployed at hlx--webhook. All filter responses collected in the above steps are passed as a filters default parameter
  • the action hlx-webhook is registered as a webhook for all push events in all content repositories mentioned in strains.yaml

In webhook.js

The webhook handler will only take care of converting push events into lists of changed files and dispatch new events to each watcher when the configured filters match. It is only concerned with GitHub semantics and won't try to parse markdown or interpret the changed files in any way.

  • receive the PushEvent (the payload looks like this)
  • for each commit, in commits, get the list of changed files from the commit REST API
  • for each filter and file, compare whether the allow and deny expressions for repo, owner, ref, path, dirname, extension, author, committer match
  • if yes, and the file is not a binary, fetch the raw file
  • then POST a combined payload (push event, commit JSON, raw text) to the configured URL
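The per-file filter check could be sketched like this (field names follow the list above; treating allow/deny as maps from field name to a regular-expression string is an assumption):

```javascript
// Sketch: decide whether one changed file matches one watcher filter.
// `allow` and `deny` are assumed to be maps from field name (repo, owner,
// ref, path, dirname, extension, author, committer) to a regex string.
function fileMatches(filter, fields) {
  const matchAll = (rules) => Object.entries(rules).every(
    ([field, re]) => new RegExp(re).test(fields[field] || ''),
  );
  const matchAny = (rules) => Object.entries(rules).some(
    ([field, re]) => new RegExp(re).test(fields[field] || ''),
  );
  if (filter.allow && !matchAll(filter.allow)) {
    return false; // an allow expression did not match
  }
  if (filter.deny && matchAny(filter.deny)) {
    return false; // a deny expression matched
  }
  return true;
}
```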

Authentication considerations

  • hlx deploy needs a token with write access to the repo
  • hlx--webhook needs read access

This way, each action can configure which events it wants to receive in advance.

How to (or why not to) post-process HTML in Helix

From @ryanbetts:

Just hopped in here to field a general question about Helix’s mechanics that I’m still struggling to wrap my head around. I’ve read through the Anatomy of the Pipeline, which was helpful, and just saw @bdelacre’s “How Helix Works” initial draft. Both of these provide some useful context. Where it falls apart for me still is:

It’s clear how an individual set of files is woven together to build a piece of the final page to be delivered to a person’s device (e.g. global nav, the content of an individual page, a blog component on that page, the global footer). I know how to pre- and post-process those individual elements/templates/partials using the Helix Pipeline.

That said, I still have an instinct to want to post-process the assembled final result for certain things, before it is cached for the client. For example, if the same SVG icon has been inlined in all 4 of the above-mentioned templates, I’d want to analyze the final HTML and strip it down to a single resource. This seems to be a Helix anti-pattern – or just not possible to do other than on the client side.

Am I correct in that understanding? If so, would love to know what the better approach is. I’m still new to serverless, Fastly and VCL. My assumption is that’s where the gap in understanding lies for me.

Move trieloff/helix-index-pipelines to adobe

Create a new repository with the name helix-index-pipelines

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Homepage response time 5x slower

(Screenshot: response time graph, 2019-07-08 at 17:44)

Since the last increase at the end of May, page response times measured on www.project-helix.io have massively degraded again around July 5th, this time shooting from ~0.6s up to ~6.4s, then settling at ~3.3s. The following issues fall into this timeframe:

  • helix-pingdom-status: Error XML is not well formatted
  • helix-pipeline: default pipeline should include s
  • helix-experimental-dispatch: Strain mount path got lost during refactoring; strain url path is not respected anymore; requests to selector resources fail; if static and content repo are identical, don't fetch twice; only serve 404.html for *.html requests
  • helix-cli: Remove local helix-static; allow custom dispatch version, changing log level does not work anymore

Also, I'm wondering why there is such a variance in the response time given that the site should always be cached...

cc @davidnuescheler

Improve Status Monitoring for Adobe I/O Runtime

When you check https://status.project-helix.io, it appears that Adobe I/O Runtime is 100% available, while the services running on top of it have lower availability.

This is an artifact of our monitoring setup. For all Helix Services, we execute an action on Adobe I/O Runtime, whereas for Adobe I/O Runtime itself, we only access the API root resource, which is cached and served through Adobe I/O API Gateway.

I'd suggest we create an additional helix-runtime-monitoring service that is deployed with all other Helix Services in the helix namespace, but serves as a monitoring probe for Adobe I/O Runtime.

Soft Purges and Stale Content

To the tune of the Fugees: Purging me softly 🎶 … with his API

Right now, all cache invalidations we do are hard purges of the entire cache: everything is invalidated, and every subsequent request to Fastly will block until the backend (Runtime) delivers a response.

Fastly offers a Soft Purge API that, instead of removing the cache key entirely, just marks it as outdated. With the right configuration, on the next request the old cached version is delivered while a backend request is made in parallel to update the cache. This means the request immediately following a purge is fast but serves stale content; up-to-date content is only delivered on the following request.

Generally, soft purges and stale content deliver a faster and therefore better visitor experience, and should:

  • be the only option for the bot
  • be the default option for the CLI (debatable, I'd love to hear your counter-arguments @tripodsan @davidnuescheler)
  • for Helix Pages, I'm not sure

Fastly does not have a "soft purge all" API, so we will have to use a Surrogate-Key value (e.g. all) that is used for every response and soft-purge that Surrogate-Key value.
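A sketch of what the purge call could look like, based on Fastly's purge-by-surrogate-key API (the function name and the way parameters are assembled here are illustrative; the "all" key must first be attached to every response via Surrogate-Key):

```javascript
// Sketch: build the request for a soft purge by surrogate key. Fastly's
// Fastly-Soft-Purge header marks objects stale instead of evicting them.
function softPurgeRequest(serviceId, surrogateKey, apiToken) {
  return {
    url: `https://api.fastly.com/service/${serviceId}/purge/${surrogateKey}`,
    method: 'POST',
    headers: {
      'Fastly-Key': apiToken,
      'Fastly-Soft-Purge': '1',
    },
  };
}
```

Calling softPurgeRequest(serviceId, 'all', token) and sending the resulting request would then soft-purge everything in one call.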

What needs to be done:

I'll file individual issues once we have general agreement

helix-run-query

Create a new repository with the name helix-run-query

Note: Greenkeeper will find your new repository automatically, but it might file an issue that it cannot get the build status if you haven't set up CircleCI yet. In this case, go to Greenkeeper and click the "fix repo" button

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Example Zero

I have created https://github.com/bdelacretaz/helix-example-zero which (once moved to the adobe org) can be Example Zero, the starting point to Helix.

The idea is that someone just needs to be pointed to this Example Zero to get started, in a pragmatic way, by playing with it.

It will demonstrate a basic website, with header and footer and CSS to have some "meat" to it (just an index.md for now) and it points to other examples (once we have them) using a GitHub query for the helix-example Topic.

The repo includes a CircleCI setup to validate the actual https://helix-example-zero-bdelacretaz.project-helix.page/ output. The tests are still rough but demonstrate how we can use the examples as integration tests for the whole publishing chain, as well as validate the overall Helix "APIs" over time. Changes to these tests will track the product's evolution, assuming the test coverage is good.

Next steps, if we agree on the idea:

  • Flesh out the example to make it beautiful
  • Fix the TODOs in the tests and improve test coverage.
  • Find a way to wait for the published website to be updated - maybe include the Git revision hash in an HTTP header (in debug mode?) and wait for it before running the tests
  • Verify that caching doesn't get in the way of testing
  • Maybe hide the test code in a separate branch that's merged right before testing, so that users don't see them by default.

[monitoring] move all config from package.json to .circleci/config.yml

Is your feature request related to a problem? Please describe.
The configuration for the monitoring automation is currently spread across multiple files:

  • package.json: limited number of config properties like the name and group of the Statuspage component, New Relic monitor name and group policy
  • .circleci/config.yml: invocation of the monitoring command from the adobe/helix-post-deploy orb with additional parameters such as action name (if different from package name), package, namespace etc.

Describe the solution you'd like
The monitoring command in the adobe/helix-post-deploy orb supports all config options currently definable in package.json (and more) as parameters. Having multiple places to maintain this configuration is tedious and error-prone. It would therefore be best to consolidate the monitoring automation config in a single place: the CircleCI config.

Describe alternatives you've considered
We could duplicate all parameters currently supported by the monitoring command in the package.json, and extend the tooling to pick them up. But this would just mean more complexity in the tooling and not solve the root problem of having multiple places to configure monitoring automation.

Additional context
Example:

  • package.json:

        "statuspage": {
          "name": "Google Docs Adapter",
          "group": "Delivery"
        },
        "newrelic": {
          "group_policy": "Delivery Repeated Failure"
        }

  • .circleci/config.yml:

        - helix-post-deploy/monitoring:
            action_name: gdocs2md

This would be consolidated to:

  • .circleci/config.yml:

        - helix-post-deploy/monitoring:
            statuspage_name: Google Docs Adapter
            statuspage_group: Delivery
            newrelic_group_policy: Delivery Repeated Failure
            action_name: gdocs2md

Document the differences and switch between Helix Pages and non-Pages

Intended Audience

Helix users

Related Issues

We need to explain the differences between Helix Pages and non-Pages and what causes the switch from one to the other mode.

And maybe also find a more descriptive name for the non-Pages mode unless we have one already.

As per https://github.com/adobe/helix-bot/issues/109 a GitHub repository is considered to be a Helix Pages repo unless:

  • it has a .github/helix-bot-config.yaml(.gpg)
  • it has a src directory
  • it has a helix-config.yaml

And BTW these rules should probably be part of some shared code to be reusable and strictly defined.
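Such shared code could be as small as this sketch (a hypothetical helper taking the list of paths present in the repo; the rules follow the list above):

```javascript
// Sketch: a repo is treated as a Helix Pages repo unless any of these
// opt-out markers exist at the expected location.
function isHelixPagesRepo(paths) {
  const optOutMarkers = [
    '.github/helix-bot-config.yaml',
    '.github/helix-bot-config.yaml.gpg',
    'src',
    'helix-config.yaml',
  ];
  return !paths.some((p) => optOutMarkers.includes(p));
}
```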

Linking this to #31, the "doc" theme ticket.

GraphQL Repository API

Problem

With the existing GitHub API (+ raw) we have an easy, RESTful way of accessing repository contents, which is largely sufficient for delivery in Helix. When it comes to content management (think AEM site admin), we need something more advanced to support queries like this:

  • get all pages that have Helix in the title
  • get all pages that have been authored by @davidnuescheler
  • get all images in all pages that point to the DAM

This requires not just a way to express structured queries, but also a way to inspect the contents of a markdown "file" and treat it as a Helix "document", i.e. expose sections, metadata, embeds, links, external objects, etc.

When it comes to the query language, it seems like GraphQL is the way to go (and something we should explore). If we use GraphQL, we need a way to run a GraphQL gateway as a serverless application.

Note: GitHub does have a GraphQL API that allows us to run queries like "get all Markdown files authored by @davidnuescheler in the branch 2020-launch". It does not allow us to look into the structure of the Markdown files.


Approach 1

Until recently, the architecture I'd have used to approach this would look something like this:

  1. install a web hook into the content repository that calls a function
  2. which parses the markdown and dumps the parsed structure in some schemaless database like CosmosDB or Firebase
  3. run a GraphQL gateway as a serverless function close to the data

The problems with this approach:

  • step (3) is very much "???" because we don't have a ready-to-use service
  • we end up creating a mirror of every git repository we have access to
  • we need to install the webhook, so we cannot access arbitrary repositories

Approach 2

I learned about AWS AppSync a while ago but dismissed it because it seemed very mobile-focused. After a few conversations with people who have been using it, however, I think it can be a viable alternative. AWS AppSync is a GraphQL gateway as a service (with a fully serverless pricing model, so no paying for idle) and full extensibility (you can define custom resolvers as AWS Lambda functions).

As a result, the final architecture would look something like this:

  1. the user (author) sends a GraphQL query as a POST request to AWS AppSync
  2. AppSync has one custom resolver that proxies the GitHub GraphQL API, allowing queries against the repository structure
  3. AppSync has a second custom resolver that fetches the Markdown file from raw, parses it and returns the key metadata structure (this could use a variation of the JSON pipeline)
  4. AppSync creates a query plan and executes the query against the GitHub GraphQL API (retrieve all Markdown files) and then against the custom resolver (spin up one lambda function for each Markdown file found, parse it, etc) and then performs the join

Key aspects/challenges of this approach:

  • works with any repository (we don't need the bot)
  • does not mirror anything (we don't need a database, we don't pay for storage for inactive websites)
  • runs only when queries are run (and is free otherwise)
  • needs good caching (we don't want to re-fetch and re-parse the same version of the same file over and over again)
  • potentially runs with massive parallelism (tries to fetch and parse every markdown file in the repository)
  • potentially slower on initial queries (cold start for resolver lambda functions)

I suggest we try both approaches and compare which one works best for our use cases.

Move trieloff/helix-vulcain-filters to adobe

Create a new repository with the name helix-vulcain-filters

Open the #helix-noisy Slack channel, then type /github subscribe adobe/helix-…

Experimental fetch from google docs

Using @davidnuescheler's experimental googledocs->markdown generator [0], it should be possible to fetch the markdown directly in the pipeline.

The ID of the document is either passed through the resource path:

[0]: https://helix-test-davidnuescheler.project-helix.page/googledocs.html

Review CircleCI config in all Helix repos

@kptdobe commented on Nov 23, 2018, 9:31 AM UTC:

While working on adobe/helix-cli#289, I found several inconsistencies in our configs: some migrated to 2.1, some having old config, improvements not propagated everywhere... Here is a list of tasks that could easily be performed:

Repository to consider: https://github.com/search?p=1&q=topic%3Ahelix+org%3Aadobe&type=Repositories

This issue was moved by kptdobe from adobe/project-helix#347.

Review Helix experience for developers at step 3

Following the "Start Developing Your First Helix Project in 60 Seconds" guide gives a good starting point with Helix. This is what I would call "step 1", and it is being improved with adobe/helix-cli#711.
Step 2 would be the first publishing of the site, which requires some insights from the Helix team (like how to set up Fastly / I/O Runtime). It does not make things easy, but this is still fine for now: it lets us control who is using Helix, and we lack certain automation tools (like creating a Fastly service, an I/O Runtime account...).
Step 3 is everything that comes right after the first publishing: testing a new version of my site, deploying / publishing again and again, and iterating on the code (branches) or on its content...
Right from the beginning of step 3, things really start to get complicated and non-intuitive, even for someone like me who has done the exercise multiple times.

Once my first version of the site is live, the first thing a dev does is to create a v2 branch. This was validated during the hackathon: you do not want to make the changes directly in master.

Issue number 1: running hlx up in a local checkout with a different branch does not render my local code but the code from the GitHub master branch, because that is how it is set up in helix-config.yaml.
Possible solution: get rid of the code property in the strains by default. You may add one if you know what you are doing.

To solve that problem, I do not really know what is the good solution:

  • temporarily hack the default strain to refer to my branch (and use --local-url=. - might be the default now)
  • add a new strain with code / static references to my branch (and use --local-url=. - might be the default now) but then I need to use the url property to make sure that locally it gets to my new strain and not to the default one.
    This is way too complicated.
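For illustration, the proposed default could be a strain that keeps only a content reference, so hlx up always serves the local checkout. This is a sketch based on the description above; the exact property names and repo URL are assumptions:

```yaml
# Hypothetical helix-config.yaml: the default strain has no code property,
# so local development is not pinned to the GitHub master branch.
strains:
  - name: default
    content: https://github.com/adobe/project-helix.io.git#master
```

Developers who really need to pin code to a specific branch could then add an explicit code reference themselves.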

Issue number 2: in my temporary v2 code branch, my helix-config.yaml starts to diverge, and when v2 is merged, I do not want a reference to v2 in my helix-config.yaml.
Possible solution: do not use strains at that stage; there should be a better way.

Now I want to test my branch live. Knowing that there are fundamental differences between the simulator and Fastly, as an advanced developer I must test my code in a production-like environment.
In my local checkout of my branch, I need to swap my .env file to get the "test" env and not the "prod" one.

Issue number 3: there is a huge risk here that I forget to do so and overwrite the prod setup.
Possible solution: hlx deploy and hlx publish should ask me to confirm the namespace / services I am going to update.

Then I need to deploy the code, which in the end I do not really care about. As a developer here, I want to "publish" my site, whatever needs to happen should happen.

Issue number 4: hlx deploy and hlx publish belong together for a simple user. They should be one operation.
Possible solution: create a single cmd that executes both.

Issue number 5: hlx deploy will complain and force you to add a strain for the v2 branch, which I can add automatically with hlx deploy --add v2. The problem is that I have no clue what add will do; it is a separate operation whose result I want to check before deploying.
Possible solution: extract the --add option into either a standalone command, or at least make it a single, explicit operation of the deploy.

I am not going to use the production Fastly service, too risky. I have my own Fastly service, "acapt - Playground", where I want to publish. Again, the risk that I publish to the wrong service is really high (I managed to break 3 services in 2 days because of a wrong service ID).

Now I can go to my test domain and validate that everything works as expected.
I can merge my code change (+/- the extra unnecessary strain) and do the same operations from my updated master branch.

One might say that this is ok because it will be run by a CI anyway. That is what I try to do too. Actually, you can replace "I" with "CI" in all the sentences above and you end up with the same result. I did not manage to automate the CI for a dev branch (only deploy / publish on commit to master).

The main issue I see here is the number of conflicting dimensions: branch vs strain vs domain. The fact that hlx deploy needs a strain for the current branch and finally updates the helix-config.yaml with the package reference illustrates that there is duplication on the code side (url + package reference).

One proposal would be to remove the code and static url from the strains and just keep the content url, the staticRoot and the url / condition (+extra configs).
When you run hlx superpublish, you do it from a code checkout anyway, thus you know the code url, the static url and the branch; it can then deploy and publish with everything you need. If hlx superpublish asks you which service ID to use (showing the nice names we have in Fastly), you are sure of where it goes (with shortcut flags for the CI).
After hlx superpublish, you simply need to commit the patched helix-config.yaml to keep history of the deployment.
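The proposed flow could be sketched roughly as below. This is a hypothetical illustration: hlx superpublish does not exist, and the real CLI is stubbed with a shell function so the sketch is self-contained:

```shell
# Hypothetical sketch of the proposed `hlx superpublish` flow: name the
# target service explicitly, then run deploy and publish as one operation.
hlx() { echo "hlx $*"; }   # stub standing in for the actual Helix CLI

superpublish() {
  service="$1"
  echo "Target Fastly service: $service"
  hlx deploy
  hlx publish
}

superpublish "acapt-playground"
```

Naming the target service up front addresses the "wrong service ID" risk from Issue 3, and bundling deploy + publish addresses Issue 4.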
