

Pagoda: Rapid, easy full-stack web development starter kit in Go



Introduction

Overview

Pagoda is not a framework but rather a base starter-kit for rapid, easy full-stack web development in Go, aiming to provide much of the functionality you would expect from a complete web framework while also establishing patterns, procedures and structure for your web application.

Built on a solid foundation of well-established frameworks and modules, Pagoda aims to be a starting point for any web application, with the benefit over a mega-framework that you retain full control over all of the code, can easily swap any frameworks or modules in or out, have no strict patterns or interfaces to follow, and have no fear of lock-in.

While separate JavaScript frontends have surged in popularity, many prefer the reliability, simplicity and speed of a full-stack approach with server-side rendered HTML. Even the popular JS frameworks all have SSR options. This project aims to highlight that Go templates can be powerful and easy to work with, and interesting frontend libraries can provide the same modern functionality and behavior without having to write any JS at all.

Foundation

While many great projects were used to build this, all of which are listed in the credits section, the following provide the foundation of the backend and frontend. It's important to note that you are not required to use any of these. Swapping any of them out will be relatively easy.

Backend

  • Echo: High performance, extensible, minimalist Go web framework.
  • Ent: Simple, yet powerful ORM for modeling and querying data.

Frontend

Go server-side rendered HTML combined with the projects below enable you to create slick, modern UIs without writing any JavaScript or CSS.

  • HTMX: Access AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext.
  • Alpine.js: Rugged, minimal tool for composing behavior directly in your markup. Think of it like jQuery for the modern web. Plop in a script tag and get going.
  • Bulma: Provides ready-to-use frontend components that you can easily combine to build responsive web interfaces. No JavaScript dependencies.

Storage

  • PostgreSQL: The world's most advanced open source relational database.
  • Redis: In-memory data structure store, used as a database, cache, and message broker.

Screenshots

Inline form validation

Inline validation

Switch layout templates, user registration

Registration

Alpine.js modal, HTMX AJAX request

Alpine and HTMX

Getting started

Dependencies

Ensure the following are installed on your system:

  • Go
  • Docker
  • Docker Compose

Start the application

After checking out the repository, from within the root, start the Docker containers for the database and cache by executing make up:

git clone git@github.com:mikestefanello/pagoda.git
cd pagoda
make up

Since this repository is a template and not a Go library, you do not use go get.

Once that completes, you can start the application by executing make run. By default, you should be able to access the application in your browser at localhost:8000.

If you ever want to quickly drop the Docker containers and restart them in order to wipe all data, execute make reset.

Running tests

To run all tests in the application, execute make test. This ensures that the tests from each package are not run in parallel. This is required since many packages contain tests that connect to the test database which is dropped and recreated automatically for each package.

Clients

The following make commands are available to make it easy to connect to the database and cache.

  • make db: Connects to the primary database
  • make db-test: Connects to the test database
  • make cache: Connects to the primary cache
  • make cache-test: Connects to the test cache

Service container

The container is located at pkg/services/container.go and is meant to house all of your application's services and/or dependencies. It is easily extensible and can be created and initialized in a single call. The services currently included in the container are:

  • Configuration
  • Cache
  • Database
  • ORM
  • Web
  • Validator
  • Authentication
  • Mail
  • Template renderer
  • Tasks

A new container can be created and initialized via services.NewContainer(). It can be later shutdown via Shutdown().
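
As a minimal sketch of that lifecycle (error handling simplified; the actual entry point lives in cmd/web/main.go):

c := services.NewContainer()
defer func() {
    if err := c.Shutdown(); err != nil {
        log.Fatal(err)
    }
}()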

Dependency injection

The container exists to facilitate easy dependency injection both for services within the container as well as areas of your application that require any of these dependencies. For example, the container is passed to and stored within the Controller so that the controller and the route using it have full, easy access to all services.

Test dependencies

It is common that your tests will require access to dependencies, like the database, or any of the other services available within the container. Keeping all services in a container makes it especially easy to initialize everything within your tests. You can see an example pattern for doing this here.

Configuration

The config package provides a flexible, extensible way to store all configuration for the application. Configuration is added to the Container as a Service, making it accessible across most of the application.

Be sure to review and adjust all of the default configuration values provided in config/config.yaml.

Environment overrides

Leveraging the functionality of viper to manage configuration, all configuration values can be overridden by environment variables. The name of the variable is determined by the set prefix and the name of the configuration field in config/config.yaml.

In config/config.go, the prefix is set as pagoda via viper.SetEnvPrefix("pagoda"). Nested fields require an underscore between levels. For example:

cache:
  port: 1234

can be overridden by setting an environment variable with the name PAGODA_CACHE_PORT.

Environments

The configuration value for the current environment (Config.App.Environment) is an important one as it can influence some behavior significantly (will be explained in later sections).

A helper function (config.SwitchEnvironment) is available to make switching the environment easy, but this must be executed prior to loading the configuration. The common use-case for this is to switch the environment to Test before tests are executed:

func TestMain(m *testing.M) {
    // Set the environment to test
    config.SwitchEnvironment(config.EnvTest)

    // Start a new container
    c = services.NewContainer()

    // Run tests
    exitVal := m.Run()

    // Shutdown the container
    if err := c.Shutdown(); err != nil {
        panic(err)
    }

    os.Exit(exitVal)
}

Database

The database currently used is PostgreSQL but you are free to use whatever you prefer. If you plan to continue using Ent, the incredible ORM, you can check their supported databases here. The database driver and client are provided by pgx and included in the Container.

Database configuration can be found and managed within the config package.

Auto-migrations

Ent provides automatic migrations which are executed on the database whenever the Container is created, which means they will run when the application starts.

Separate test database

Since many tests can require a database, this application supports a separate database specifically for tests. Within the config, the test database name can be specified at Config.Database.TestDatabase.

When a Container is created, if the environment is set to config.EnvTest, the database client will connect to the test database instead, drop the database, recreate it, and run migrations so your tests start with a clean, ready-to-go database. Another benefit is that after the tests execute in a given package, you can connect to the test database to audit the data which can be useful for debugging.

ORM

As previously mentioned, Ent is the supplied ORM. It can be swapped out, but I highly recommend it. I don't think there is anything comparable for Go at the current time. If you're not familiar with Ent, take a look through their top-notch documentation.

An Ent client is included in the Container to provide easy access to the ORM throughout the application.

Ent relies on code-generation for the entities you create to provide robust, type-safe data operations. Everything within the ent package in this repository is generated code for the two entity types listed below with the exception of the schema declaration.

Entity types

The two included entity types are:

  • User
  • PasswordToken

New entity type

While you should refer to their documentation for detailed usage, it's helpful to understand how to create an entity type and generate code. To make this easier, the Makefile contains some helpers.

  1. Ensure all Ent code is downloaded by executing make ent-install.
  2. Create the new entity type by executing make ent-new name=User, where User is the name of the entity type. This will generate a file like the one you can see in ent/schema/user.go, though the Fields() and Edges() will be left empty.
  3. Populate the Fields() and optionally the Edges() (which are the relationships to other entity types).
  4. When done, generate all code by executing make ent-gen.

The generated code is extremely flexible and impressive. An example to highlight this is one used within this application:

entity, err := c.ORM.PasswordToken.
    Query().
    Where(passwordtoken.ID(tokenID)).
    Where(passwordtoken.HasUserWith(user.ID(userID))).
    Where(passwordtoken.CreatedAtGTE(expiration)).
    Only(ctx.Request().Context())

This executes a database query to return the password token entity with a given ID that belongs to a user with a given ID and has a created_at timestamp greater than or equal to a given time.

Sessions

Sessions are provided and handled via Gorilla sessions and configured as middleware in the router located at pkg/routes/router.go. Session data is currently stored in cookies but there are many options available if you wish to use something else.

Here's a simple example of loading data from a session and saving new values:

func SomeFunction(ctx echo.Context) error {
    sess, err := session.Get("some-session-key", ctx)
    if err != nil {
        return err
    }
    sess.Values["hello"] = "world"
    sess.Values["isSomething"] = true
    return sess.Save(ctx.Request(), ctx.Response())
}

Encryption

Session data is encrypted for security purposes. The encryption key is stored in configuration at Config.App.EncryptionKey. While the default is fine for local development, it is imperative that you change this value for any live environment otherwise session data can be compromised.

Authentication

Included are standard authentication features you expect in any web application. Authentication functionality is bundled as a Service within services/AuthClient and added to the Container. If you wish to handle authentication in a different manner, you could swap this client out or modify it as needed.

Authentication currently requires sessions and the session middleware.

Login / Logout

The AuthClient has methods Login() and Logout() to log a user in or out. To track a user's authentication state, data is stored in the session including the user ID and authentication status.

Prior to logging a user in, the method CheckPassword() can be used to determine if a user's password matches the hash stored in the database and on the User entity.

Routes are provided for the user to login and logout at user/login and user/logout.
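
A rough sketch of how a login handler might tie these methods together is below. The exact AuthClient signatures are not documented here, so treat the argument lists as approximations; c refers to the service Container, the Auth field name is assumed, and usr.Password assumes the User entity stores the password hash in a Password field:

// Look up the user by email (illustrative Ent query)
usr, err := c.ORM.User.
    Query().
    Where(user.Email(email)).
    Only(ctx.Request().Context())
if err != nil {
    return err
}

// Verify the submitted password against the stored hash (approximate signature)
if err := c.Auth.CheckPassword(password, usr.Password); err != nil {
    msg.Danger(ctx, "Invalid credentials. Please try again.")
    // Re-render the login page here
    return nil
}

// Store the user ID and authentication status in the session (approximate signature)
if err := c.Auth.Login(ctx, usr.ID); err != nil {
    return err
}

msg.Success(ctx, "You are now logged in.")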

Forgot password

Users can reset their password in a secure manner by issuing a new password token via the method GeneratePasswordResetToken(). This creates a new PasswordToken entity in the database belonging to the user. The actual token itself, however, is not stored in the database for security purposes; it is only returned by the method so it can be used to build the reset URL for the email. Instead, a hash of the token is stored, using bcrypt, the same package used to hash user passwords. The reasoning is the same as for passwords: you do not want to store a plain-text value in the database that can be used to access an account.

Tokens have a configurable expiration. By default, they expire within 1 hour. This can be controlled in the config package. The expiration of the token is not stored in the database, but rather is used only when tokens are loaded for potential usage. This allows you to change the expiration duration and affect existing tokens.

Since the actual tokens are not stored in the database, the reset URL must contain the user and password token ID. Using that, GetValidPasswordToken() will load a matching, non-expired password token entity belonging to the user, and use bcrypt to determine if the token in the URL matches the stored hash of the password token entity.

Once a user claims a valid password token, all tokens for that user should be deleted using DeletePasswordTokens().

Routes are provided to request a password reset email at user/password and to reset your password at user/password/reset/token/:user/:password_token/:token.
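
Pieced together, the reset flow might look roughly like the following sketch. The method signatures, parameter order, and the "user_password_reset" route name are approximations, so consult the AuthClient and router for the real ones:

// Issue a token (e.g. in the forgot-password handler). The returned plain-text token
// is only used to build the email link; only its bcrypt hash is persisted.
token, pt, err := c.Auth.GeneratePasswordResetToken(ctx, usr.ID)
if err != nil {
    return err
}

// Build the reset URL from the user ID, the token entity ID and the plain-text token
resetURL := ctx.Echo().Reverse("user_password_reset", usr.ID, pt.ID, token)
_ = resetURL // email this link to the user

// When the link is visited: load a matching, non-expired token for the user
if _, err := c.Auth.GetValidPasswordToken(ctx, tokenID, usr.ID, token); err != nil {
    return echo.NewHTTPError(http.StatusNotFound)
}

// After the password has been changed, invalidate all outstanding tokens
if err := c.Auth.DeletePasswordTokens(ctx, usr.ID); err != nil {
    return err
}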

Registration

The actual registration of a user is not handled within the AuthClient but rather just by creating a User entity. When creating a user, use HashPassword() to create a hash of the user's password, which is what will be stored in the database.

A route is provided for the user to register at user/register.

Authenticated user

The AuthClient has two methods available to get either the User entity or the ID of the user currently logged in for a given request. Those methods are GetAuthenticatedUser() and GetAuthenticatedUserID().

Middleware

Registered for all routes is middleware that will load the currently logged in user entity and store it within the request context. The middleware is located at middleware.LoadAuthenticatedUser() and, if authenticated, the User entity is stored within the context using the key context.AuthenticatedUserKey.

If you wish to require either authentication or non-authentication for a given route, you can use either middleware.RequireAuthentication() or middleware.RequireNoAuthentication().
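
For example, when building the router you might group routes by requirement. This is only a sketch; the group and handler names are illustrative:

// Routes that require a logged-in user
auth := g.Group("", middleware.RequireAuthentication())
auth.GET("/user/logout", logout.Get).Name = "logout"

// Routes that require the user to NOT be logged in (e.g. login, registration)
guest := g.Group("", middleware.RequireNoAuthentication())
guest.GET("/user/login", login.Get).Name = "login"
guest.POST("/user/login", login.Post).Name = "login.post"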

Email verification

Most web applications require the user to verify their email address (or other form of contact information). The User entity has a field Verified to indicate whether they have verified themselves. When a user successfully registers, an email is sent to them containing a link with a token that will verify their account when visited. This route is currently accessible at /email/verify/:token and handled by routes/VerifyEmail.

There is currently no enforcement that a User must be verified in order to access the application. If that is something you desire, you will have to add it yourself. It was not included because you may want to allow partial access to certain features until the user verifies, or no access at all.

Verification tokens are JSON Web Tokens generated and processed by the jwt module. The tokens are signed using the encryption key stored in configuration (Config.App.EncryptionKey). It is imperative that you override this value from the default in any live environment, otherwise the data can be compromised. JWTs were chosen because they are secure tokens that do not have to be stored in the database, since they contain all of the data required, including built-in expirations. They were not chosen for password reset tokens because a JWT cannot be revoked once issued, which poses a security risk there. Since verification tokens do not grant access to an account, the ability to revoke them is not needed.

By default, verification tokens expire 12 hours after they are issued. This can be changed in configuration at Config.App.EmailVerificationTokenExpiration. There is currently not a route or form provided to request a new link.

Be sure to review the email section since actual email sending is not fully implemented.

To generate a new verification token, the AuthClient has a method GenerateEmailVerificationToken() which creates a token for a given email address. To verify the token, pass it in to ValidateEmailVerificationToken() which will return the email address associated with the token and an error if the token is invalid.
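
A rough sketch of both sides of that exchange is below; c is the service Container, the signatures are approximations, and the "verify_email" route name is hypothetical:

// After successful registration: create a token and build the verification link
token, err := c.Auth.GenerateEmailVerificationToken(usr.Email)
if err != nil {
    return err
}
verifyURL := ctx.Echo().Reverse("verify_email", token)
_ = verifyURL // include this link in the verification email

// In the /email/verify/:token route: validate the token and mark the user verified
email, err := c.Auth.ValidateEmailVerificationToken(ctx.Param("token"))
if err != nil {
    return echo.NewHTTPError(http.StatusBadRequest, "invalid or expired token")
}
_ = email // look up the user by this email and set Verified to true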

Routes

The router functionality is provided by Echo and constructed via the BuildRouter() function inside pkg/routes/router.go. Since the Echo instance is a Service on the Container which is passed in to BuildRouter(), middleware and routes can be added directly to it.

Custom middleware

By default, a middleware stack is included in the router that makes sense for most web applications. Be sure to review what has been included and what else is available within Echo and the other projects mentioned.

A middleware package is included which you can easily add to along with the custom middleware provided.

Controller / Dependencies

The Controller, which is described in a section below, serves two purposes for routes:

  1. It provides base functionality which can be embedded in each route, most importantly Page rendering (described in the Controller section below)
  2. It stores a pointer to the Container, making all Services available within your route

While using the Controller is not required for your routes, it will certainly make development easier.

See the following section for the proposed pattern.

Patterns

These patterns are not required, but were designed to make development as easy as possible.

To declare a new route that will have methods to handle a GET and POST request, for example, start with a new struct type, that embeds the Controller:

type home struct {
    controller.Controller
}

func (c *home) Get(ctx echo.Context) error {}

func (c *home) Post(ctx echo.Context) error {}

Then create the route and add to the router:

home := home{Controller: controller.NewController(c)}
g.GET("/", home.Get).Name = "home"
g.POST("/", home.Post).Name = "home.post"

Your route will now have all methods available on the Controller as well as access to the Container. It's not required to name the route methods to match the HTTP method.

It is highly recommended that you provide a Name for your routes. Most methods on the back and frontend leverage the route name and parameters in order to generate URLs.

Errors

Routes can return errors to indicate that something went wrong. Ideally, the error is of type *echo.HTTPError to indicate the intended HTTP response code. You can use return echo.NewHTTPError(http.StatusInternalServerError), for example. If an error of a different type is returned, an Internal Server Error is assumed.

The error handler is set to a provided route, pkg/routes/error.go, in the BuildRouter() function. That means that if any middleware or route returns an error, the request gets routed there. This route conveniently constructs and renders a Page which uses the template templates/pages/error.gohtml. The status code is passed to the template so you can easily alter the markup depending on the error type.

Testing

Since most of your web application logic will live in your routes, being able to easily test them is important. The following aims to help facilitate that.

The test setup and helpers reside in pkg/routes/router_test.go.

Only a brief example of route tests was provided in order to highlight what is available. Adding full tests did not seem logical since these routes will most likely be changed or removed in your project.

HTTP server

When the route tests initialize, a new Container is created which provides full access to all of the Services that will be available during normal application execution. Also provided is a test HTTP server with the router added. This means your tests can make requests and expect responses exactly as the application would behave outside of tests. You do not need to mock the requests and responses.

Request / Response helpers

With the test HTTP server setup, test helpers for making HTTP requests and evaluating responses are made available to reduce the amount of code you need to write. See httpRequest and httpResponse within pkg/routes/router_test.go.

Here is an example how to easily make a request and evaluate the response:

func TestAbout_Get(t *testing.T) {
    doc := request(t).
        setRoute("about").
        get().
        assertStatusCode(http.StatusOK).
        toDoc()
}

Goquery

A helpful, included package to test HTML markup from HTTP responses is goquery. This allows you to use jQuery-style selectors to parse and extract HTML values, attributes, and so on.

In the example above, toDoc() will return a *goquery.Document created from the HTML response of the test HTTP server.

Here is a simple example of how to use it, along with testify for making assertions:

h1 := doc.Find("h1.title")
assert.Len(t, h1.Nodes, 1)
assert.Equal(t, "About", h1.Text())

Controller

As previously mentioned, the Controller acts as a base for your routes, though it is optional. It stores the Container which houses all Services (dependencies) but also a wide array of functionality aimed at allowing you to build complex responses with ease and consistency.

Page

The Page is the major building block of your Controller responses. It is a struct type located at pkg/controller/page.go. The concept of the Page is that it provides a consistent structure for building responses and transmitting data and functionality to the templates.

All example routes provided construct and render a Page. It's recommended that you review both the Page and the example routes as they try to illustrate all included functionality.

As you develop your application, the Page can be easily extended to include whatever data or functions you want to provide to your templates.

Initializing a new page is simple:

func (c *home) Get(ctx echo.Context) error {
    page := controller.NewPage(ctx)
}

Using the echo.Context, the Page will be initialized with the following fields populated:

  • Context: The passed in context
  • ToURL: A function the templates can use to generate a URL with a given route name and parameters
  • Path: The requested URL path
  • URL: The requested URL
  • StatusCode: Defaults to 200
  • Pager: Initialized Pager (see below)
  • RequestID: The request ID, if the middleware is being used
  • IsHome: If the request was for the homepage
  • IsAuth: If the user is authenticated
  • AuthUser: The logged in user entity, if one
  • CSRF: The CSRF token, if the middleware is being used
  • HTMX.Request: Data from the HTMX headers, if HTMX made the request (see below)

Flash messaging

While flash messaging functionality is provided outside of the Controller and Page, within the msg package, it's really only used within this context.

Flash messaging requires that sessions and the session middleware are in place since that is where the messages are stored.

Creating messages

There are four types of messages, and each can be created as follows:

  • Success: msg.Success(ctx echo.Context, message string)
  • Info: msg.Info(ctx echo.Context, message string)
  • Warning: msg.Warning(ctx echo.Context, message string)
  • Danger: msg.Danger(ctx echo.Context, message string)

The message string can contain HTML.
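
For example, a handler might record messages and then redirect; the redirect itself is plain Echo:

msg.Success(ctx, "Your changes have been saved.")
msg.Warning(ctx, "Your email address has <strong>not</strong> been verified yet.")
return ctx.Redirect(http.StatusFound, ctx.Echo().Reverse("home"))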

Rendering messages

When a flash message is retrieved from storage in order to be rendered, it is deleted from storage so that it cannot be rendered again.

The Page has a method that can be used to fetch messages for a given type from within the template: Page.GetMessages(typ msg.Type). This is used rather than the funcmap because the Page contains the request context which is required in order to access the session data. Since the Page is the data destined for the templates, you can use: {{.GetMessages "success"}} for example.

To make things easier, a template component is already provided, located at templates/components/messages.gohtml. This will render all messages of all types simply by using {{template "messages" .}} either within your page or layout template.

Pager

A very basic mechanism is provided to handle and facilitate paging located in pkg/controller/pager.go. When a Page is initialized, so is a Pager at Page.Pager. If the requested URL contains a page query parameter with a numeric value, that will be set as the page number in the pager.

During initialization, the items per page amount will be set to the default, which is controlled via a constant and has a value of 20. It can be overridden by changing Pager.ItemsPerPage, but this should be done before other values are set in order to avoid incorrect calculations.

Methods include:

  • SetItems(items int): Set the total amount of items in the entire result-set
  • IsBeginning(): Determine if the pager is at the beginning of the pages
  • IsEnd(): Determine if the pager is at the end of the pages
  • GetOffset(): Get the offset, which can be useful when constructing a paged database query

There is currently no template (yet) to easily render a pager.
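
In the meantime, here is a hedged sketch of typical usage inside a route, paging an Ent query; the count and offset wiring is illustrative and c is the service Container:

page := controller.NewPage(ctx)
page.Pager.ItemsPerPage = 10 // override the default before setting other values

// Tell the pager how many results exist in total
total, err := c.ORM.User.Query().Count(ctx.Request().Context())
if err != nil {
    return err
}
page.Pager.SetItems(total)

// Use the offset and page size to fetch only the current page of results
users, err := c.ORM.User.
    Query().
    Offset(page.Pager.GetOffset()).
    Limit(page.Pager.ItemsPerPage).
    All(ctx.Request().Context())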

CSRF

By default, all non-GET requests will require a CSRF token to be provided as a form value. This is provided by middleware and can be adjusted or removed in the router.

The Page will contain the CSRF token for the given request. There is a CSRF helper component template which can be used to easily render a hidden form element in your form which will contain the CSRF token and the proper element name. Simply include {{template "csrf" .}} within your form.

Automatic template parsing

Dealing with templates can be quite tedious and annoying so the Page aims to make it as simple as possible with the help of the template renderer. To start, templates for pages are grouped in the following directories within the templates directory:

  • layouts: Base templates that provide the entire HTML wrapper/layout. This template should include a call to {{template "content" .}} to render the content of the Page.
  • pages: Templates that are specific for a given route/page. These must contain {{define "content"}}{{end}} which will be injected in to the layout template.
  • components: A shared library of common components that the layout and page templates can leverage.

Specifying which templates to render for a given Page is as easy as:

page.Name = "home"
page.Layout = "main"

That alone will result in the following templates being parsed and executed when the Page is rendered:

  1. layouts/main.gohtml as the base template
  2. pages/home.gohtml to provide the content template for the layout
  3. All template files located within the components directory
  4. The entire funcmap

The template renderer also provides caching and local hot-reloading.

Cached responses

A Page can have caching enabled just by setting Page.Cache.Enabled to true. The Controller will automatically handle caching the HTML output, headers and status code. Cached pages are stored using a key that matches the full request URL and middleware is used to serve it on matching requests.

By default, the cache expiration time will be set according to the configuration value located at Config.Cache.Expiration.Page but it can be set per-page at Page.Cache.Expiration.

Cache tags

You can optionally specify cache tags for the Page by setting a slice of strings on Page.Cache.Tags. This provides the ability to build in cache invalidation logic in your application driven by events such as entity operations, for example.

You can use the cache client on the Container to easily flush cache tags, if needed.

Cache middleware

Cached pages are served via the middleware ServeCachedPage() in the middleware package.

The cache is bypassed if the request meets any of the following criteria:

  1. Is not a GET request
  2. Is made by an authenticated user

Cached pages are looked up for a key that matches the exact, full URL of the given request.

Data

The Data field on the Page is of type any and is what allows your route to pass whatever it requires to the templates, alongside the Page itself.

Forms

The Form field on the Page is similar to the Data field in that it's an any type but it's meant to store a struct that represents a form being rendered on the page.

An example of this pattern is:

type ContactForm struct {
    Email      string `form:"email" validate:"required,email"`
    Message    string `form:"message" validate:"required"`
    Submission controller.FormSubmission
}

Then in your page:

page := controller.NewPage(ctx)
page.Form = ContactForm{}

How the form gets populated with values so that your template can render them is covered in the next section.

Submission processing

Form submission processing is made extremely simple by leveraging functionality provided by Echo binding, validator and the FormSubmission struct located in pkg/controller/form.go.

Using the example form above, these are the steps you would take within the POST callback for your route:

Start by storing a pointer to the form in the context so that your GET callback can access the form values, which will be shown at the end:

var form ContactForm
ctx.Set(context.FormKey, &form)

Parse the input in the POST data to map to the struct so it becomes populated. This uses the form struct tags to map form values to the struct fields.

if err := ctx.Bind(&form); err != nil {
    // Something went wrong...
}

Process the submission which uses validator to check for validation errors:

if err := form.Submission.Process(ctx, form); err != nil {
    // Something went wrong...
}

Check if the form submission has any validation errors:

if !form.Submission.HasErrors() {
    // All good, now execute something!
}

In the event of a validation error, you most likely want to re-render the form with the values provided and any error messages. Since you stored a pointer to the form in the context in the first step, you can first have the POST handler call the GET:

if form.Submission.HasErrors() {
    return c.Get(ctx)
}

Then, in your GET handler, extract the form from the context so it can be passed to the templates:

page := controller.NewPage(ctx)
page.Form = ContactForm{}

if form := ctx.Get(context.FormKey); form != nil {
    page.Form = form.(*ContactForm)
}

And finally, your template:

<input id="email" name="email" type="email" class="input" value="{{.Form.Email}}">

Inline validation

The FormSubmission makes inline validation easier because it will store all validation errors in a map, keyed by the form struct field name. It also contains helper methods that your templates can use to provide classes and extract the error messages.

While validator is a great package that is used to validate based on struct tags, the downside is that the messaging, by default, is not very human-readable or easy to override. Within FormSubmission.setErrorMessages() the validation errors are converted to more readable messages based on the tag that failed validation. Only a few tags are provided as an example, so be sure to expand on that as needed.

To provide the inline validation in your template, there are two things that need to be done.

First, include a status class on the element so it will highlight green or red based on the validation:

<input id="email" name="email" type="email" class="input {{.Form.Submission.GetFieldStatusClass "Email"}}" value="{{.Form.Email}}">

Second, render the error messages, if there are any for a given field:

{{template "field-errors" (.Form.Submission.GetFieldErrors "Email")}}

Headers

HTTP headers can be set either via the Page or the context:

page := controller.NewPage(ctx)
page.Headers["HeaderName"] = "header-value"
ctx.Response().Header().Set("HeaderName", "header-value")

Status code

The HTTP response status code can be set either via the Page or the context:

page := controller.NewPage(ctx)
page.StatusCode = http.StatusTooManyRequests
ctx.Response().Status = http.StatusTooManyRequests

Metatags

The Page provides the ability to set basic HTML metatags which can be especially useful if your web application is publicly accessible. Only fields for the description and keywords are provided but adding additional fields is very easy.

page := controller.NewPage(ctx)
page.Metatags.Description = "The page description."
page.Metatags.Keywords = []string{"Go", "Software"}

A component template is included to render metatags in core.gohtml which can be used by adding {{template "metatags" .}} to your layout.

URL and link generation

Generating URLs in the templates is made easy if you follow the routing patterns and provide names for your routes. Echo provides a Reverse function to generate a route URL with a given route name and optional parameters. This function is made accessible to the templates via the Page field ToURL.

As an example, if you have route such as:

profile := Profile{Controller: ctr}
e.GET("/user/profile/:user", profile.Get).Name = "user_profile"

And you want to generate a URL in the template, you can:

{{call .ToURL "user_profile" 1}}

Which will generate: /user/profile/1

There is also a helper function provided in the funcmap to generate links which has the benefit of adding an active class if the link URL matches the current path. This is especially useful for navigation menus.

{{link (call .ToURL "user_profile" .AuthUser.ID) "Profile" .Path "extra-class"}}

Will generate:

<a href="/user/profile/1" class="is-active extra-class">Profile</a>

Assuming the current path is /user/profile/1; otherwise the is-active class will be excluded.

HTMX support

HTMX is an awesome JavaScript library that allows you to access AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext.

Many examples of its usage are available in the included examples:

  • All navigation links use boost which dynamically replaces the page content with an AJAX request, providing a SPA-like experience.
  • All forms use either boost or hx-post to submit via AJAX.
  • The mock search autocomplete modal uses hx-get to fetch search results from the server via AJAX and update the UI.
  • The mock posts on the homepage/dashboard use hx-get to fetch and page posts via AJAX.

All of this can be easily accomplished without writing any JavaScript at all.

Another benefit of HTMX is that it's completely backend-agnostic and does not require any special tools or integrations on the backend. But to make things easier, included is a small package to read and write HTTP headers that HTMX uses to communicate additional information and commands.

The htmx package contains the headers for the request and response. When a Page is initialized, Page.HTMX.Request will also be initialized and populated with the headers that HTMX provides, if HTMX made the request. This allows you to determine if HTMX is making the given request and what exactly it is doing, which could be useful both in your route as well as your templates.

If you need to set any HTMX headers in your Page response, this can be done by altering Page.HTMX.Response.
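
For instance, to ask HTMX to trigger a client-side event when it processes the response, something along these lines may work. The exact shape of the response struct (field names, value vs. pointer) is an assumption here, so check the htmx package before relying on it:

// Assumption: the response struct mirrors HTMX response headers,
// e.g. a Trigger field that maps to the HX-Trigger header.
page.HTMX.Response.Trigger = "itemUpdated"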

Layout template override

To facilitate easy partial rendering for HTMX requests, the Page will automatically change your Layout template to use htmx.gohtml, which currently only renders {{template "content" .}}. This allows you to use an HTMX request to only update the content portion of the page, rather than the entire HTML.

This override only happens if the HTMX request being made is not a boost request because boost requests replace the entire body element so there is no need to do a partial render.

Conditional processing / rendering

Since HTMX communicates what it is doing with the server, you can use the request headers to conditionally process in your route or render in your template, if needed. If your routes aren't doing multiple things, you may not need this, but it's worth knowing how flexible you can be.

A simple example of this:

if page.HTMX.Request.Target == "search" {
    // You know HTMX is fetching content just for the #search element
}

{{if eq .HTMX.Request.Target "search"}}
    {{/* Render content for the #search element */}}
{{end}}

CSRF token

If CSRF protection is enabled, the token value will automatically be passed to HTMX to be included in all non-GET requests. This is done in the footer template by leveraging HTMX events.

Rendering the page

Once your Page is fully built, rendering it via the embedded Controller in your route can be done simply by calling RenderPage():

func (c *home) Get(ctx echo.Context) error {
    page := controller.NewPage(ctx)
    page.Layout = templates.LayoutMain
    page.Name = templates.PageHome
    return c.RenderPage(ctx, page)
}

Template renderer

The template renderer is a Service on the Container that aims to make template parsing and rendering easy and flexible. It is the mechanism that allows the Page to do automatic template parsing. The standard html/template is still the engine used behind the scenes. The code can be found in pkg/services/template_renderer.go.

Here is an example of a complex rendering that uses multiple template files as well as an entire directory of template files:

buf, err = c.TemplateRenderer.
    Parse().
    Group("page").
    Key("home").
    Base("main").
    Files("layouts/main", "pages/home").
    Directories("components").
    Execute(data)

This will do the following:

  • Cache the parsed template with a group of page and key of home so this parse only happens once
  • Set the base template file as main
  • Include the templates templates/layouts/main.gohtml and templates/pages/home.gohtml
  • Include all templates located within the directory templates/components
  • Include the funcmap
  • Execute the parsed template with data being passed in to the templates

Using the example from the page rendering, this is what the Controller will execute:

buf, err = c.Container.TemplateRenderer.
    Parse().
    Group("page").
    Key(page.Name).
    Base(page.Layout).
    Files(
        fmt.Sprintf("layouts/%s", page.Layout),
        fmt.Sprintf("pages/%s", page.Name),
    ).
    Directories("components").
    Execute(page)

If you have a need to separately parse and cache the templates then later execute, you can separate the operations:

_, err := c.TemplateRenderer.
    Parse().
    Group("my-group").
    Key("my-key").
    Base("auth").
    Files("layouts/auth", "pages/login").
    Directories("components").
    Store()
tpl, err := c.TemplateRenderer.Load("my-group", "my-key")
buf, err := tpl.Execute(data)

Custom functions

All templates will be parsed with the funcmap so all of your custom functions as well as the functions provided by sprig will be available.

Caching

Parsed templates will be cached within a sync.Map so the parse operation will only happen once per cache group and key. Be careful with your group and key parameters to avoid collisions.

Hot-reload for development

If the current environment is set to config.EnvLocal, which is the default, the cache will be bypassed and templates will be parsed every time they are requested. This allows you to have hot-reloading without having to restart the application so you can see your HTML changes in the browser immediately.

File configuration

To make things easier and less repetitive, parameters given to the template renderer must not include the templates directory or the template file extensions. The file extension is stored as a constant (TemplateExt) within the config package.

Funcmap

The funcmap package provides a function map (template.FuncMap) which will be included for all templates rendered with the template renderer. Aside from a few custom functions, sprig is included which provides over 100 commonly used template functions. The full list is available here.

To include additional custom functions, add to the map returned by GetFuncMap() and define the function in the package. It will then automatically become available in all templates.
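
As an illustration only (the exact structure of the funcmap package may differ), a custom function added to a template.FuncMap could look like:

// Hypothetical addition alongside the existing custom functions and sprig
funcMap := template.FuncMap{
    "uppercase": func(s string) string {
        return strings.ToUpper(s)
    },
}

Such a function would then be callable in any template as {{uppercase "hello"}}.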

Cache

As previously mentioned, Redis was chosen as the cache but it can be easily swapped out for something else. go-redis is used as the underlying client but the Container contains a custom client wrapper (CacheClient) that makes typical cache operations extremely simple. This wrapper does expose the go-redis client however, at CacheClient.Client, in case you have a need for it.

The cache functionality within the CacheClient is powered by gocache which was chosen because it makes interfacing with the cache service much easier, and it provides a consistent interface if you were to use a cache backend other than Redis.

The built-in usage of the cache is currently only for optional page caching but it can be used for practically anything. See examples below:

Similar to how there is a separate test database to avoid writing to your primary database when running tests, the cache supports a separate database as well for tests. Within the config, the test database number can be specified at Config.Cache.TestDatabase. By default, the primary database is 0 and the test database is 1.

Set data

Set data with just a key:

err := c.Cache.
    Set().
    Key("my-key").
    Data(myData).
    Save(ctx)

Set data within a group:

err := c.Cache.
    Set().
    Group("my-group").
    Key("my-key").
    Data(myData).
    Save(ctx)

Include cache tags:

err := c.Cache.
    Set().
    Key("my-key").
    Tags("tag1", "tag2").
    Data(myData).
    Save(ctx)

Include an expiration:

err := c.Cache.
    Set().
    Key("my-key").
    Expiration(time.Hour * 2).
    Data(myData).
    Save(ctx)

Get data

data, err := c.Cache.
    Get().
    Group("my-group").
    Key("my-key").
    Type(myType).
    Fetch(ctx)

The Type method tells the cache what type of data you stored so it can be cast afterwards with: result, ok := data.(myType)

Flush data

err := c.Cache.
    Flush().
    Group("my-group").
    Key("my-key").
    Execute(ctx)

Flush tags

This will flush all cache entries that were tagged with the given tags.

err := c.Cache.
    Flush().
    Tags("tag1", "tag2").
    Execute(ctx)

Tasks

Tasks are operations to be executed in the background, either in a queue, at a specific time, after a given amount of time, or according to a periodic interval (like cron). Some examples of tasks could be long-running operations, bulk processing, cleanup, notifications, and so on.

Since we're already using Redis as a cache, it's available to act as a message broker as well and handle the processing of queued tasks. Asynq is the library chosen to interface with Redis and handle queueing tasks and processing them asynchronously with workers.

To make things even easier, a custom client (TaskClient) is provided as a Service on the Container which exposes a simple interface with asynq.

For more detailed information about asynq and its usage, review the wiki.

Queues

All tasks must be placed into queues in order to be executed by the worker. You are not required to specify a queue when creating a task, as it will be placed in the default queue if one is not provided. Asynq supports multiple queues, which allows for functionality such as prioritization.

Creating a queued task is easy and at the minimum only requires the name of the task:

err := c.Tasks.
    New("my_task").
    Save()

This will add a task to the default queue with a task type of my_task. The type is used to route the task to the correct worker.

Options

Tasks can be created and queued with various chained options:

err := c.Tasks.
    New("my_task").
    Payload(taskData).
    Queue("critical").
    MaxRetries(5).
    Timeout(30 * time.Second).
    Wait(5 * time.Second).
    Retain(2 * time.Hour).
    Save()

In this example, this task will be:

  • Assigned a task type of my_task
  • The task worker will be sent taskData as the payload
  • Put into the critical queue
  • Be retried up to 5 times in the event of a failure
  • Timeout after 30 seconds of execution
  • Wait 5 seconds before execution starts
  • Retain the task data in Redis for 2 hours after execution completes

Scheduled tasks

Tasks can be scheduled to execute at a single point in the future or at a periodic interval. These tasks can also use the options highlighted in the previous section.

To execute a task once at a specific time:

err := c.Tasks.
    New("my_task").
    At(time.Date(2022, time.November, 10, 23, 0, 0, 0, time.UTC)).
    Save()

To execute a periodic task using a cron schedule:

err := c.Tasks.
    New("my_task").
    Periodic("*/10 * * * *").
    Save()

To execute a periodic task using a simple syntax:

err := c.Tasks.
    New("my_task").
    Periodic("@every 10m").
    Save()

Scheduler

A service needs to run in order to add periodic tasks to the queue at the specified intervals. When the application is started, this scheduler service will also be started. In cmd/web/main.go, this is done with the following code:

go func() {
    if err := c.Tasks.StartScheduler(); err != nil {
        c.Web.Logger.Fatalf("scheduler shutdown: %v", err)
    }
}()

In the event of an application restart, periodic tasks must be re-registered with the scheduler in order to continue being queued for execution.

Worker

The worker is a service that executes the queued tasks using task processors. Included is a basic implementation of a separate worker service that will listen for and execute tasks being added to the queues. If you prefer to move the worker so it runs alongside the web server, you can do that, though it's recommended to keep these processes separate for performance and scalability reasons.

The underlying functionality of the worker service is provided by asynq, so it's highly recommended that you review the documentation for that project first.

Starting the worker

A make target was added to allow you to start the worker service easily. From the root of the repository, execute make worker.

Understanding the service

The worker service is located in cmd/worker/main.go and starts with the creation of a new *asynq.Server provided by asynq.NewServer(). There are various configuration options available, so be sure to review them all.

Prior to starting the service, we need to route tasks according to their type to the handlers which will process them. This is done by using asynq.ServeMux much like you would use an HTTP router:

mux := asynq.NewServeMux()
mux.Handle(tasks.TypeExample, new(tasks.ExampleProcessor))

In this example, all tasks of type tasks.TypeExample will be routed to ExampleProcessor which is a struct that implements ProcessTask(). See the included basic example.
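
A minimal processor satisfying asynq's Handler interface might look like the sketch below; how the payload is encoded depends on how the task was created, so the JSON decoding and the payload type are assumptions:

// Hypothetical payload type matching what was queued
type exampleTaskData struct {
    Message string
}

type ExampleProcessor struct{}

func (p *ExampleProcessor) ProcessTask(ctx context.Context, t *asynq.Task) error {
    // Decode the payload that was attached when the task was queued (assumed JSON)
    var data exampleTaskData
    if err := json.Unmarshal(t.Payload(), &data); err != nil {
        return err
    }

    // Do the actual work for this task type here
    log.Printf("processing task %s: %s", t.Type(), data.Message)
    return nil
}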

Finally, the service is started with asynq.Server.Run(mux).

Monitoring

Asynq comes with two options to monitor your queues: 1) Command-line tool and 2) Web UI

Static files

Static files are currently configured in the router (pkg/routes/router.go) to be served from the static directory. If you wish to change the directory, alter the constant config.StaticDir. The URL prefix for static files is /files which is controlled via the config.StaticPrefix constant.

Cache control headers

Static files are grouped separately so you can apply middleware only to them. Included is a custom middleware to set cache control headers (middleware.CacheControl) which has been added to the static files router group.

The cache max-life is controlled by the configuration at Config.Cache.Expiration.StaticFile and defaults to 6 months.

Cache-buster

While it's ideal to use cache control headers on your static files so browsers cache the files, you need a way to bust the cache in case the files are changed. In order to do this, a function is provided in the funcmap to generate a static file URL for a given file that appends a cache-buster query. This query string is randomly generated and persisted until the application restarts.

For example, to render a file located in static/picture.png, you would use:

<img src="{{File "picture.png"}}"/>

Which would result in:

<img src="/files/picture.png?v=9fhe73kaf3"/>

Where 9fhe73kaf3 is the randomly-generated cache-buster.

Email

An email client was added as a Service to the Container but it is just a skeleton without any actual email-sending functionality. The reason is that there are many ways to send email and most prefer using a SaaS solution for that, which makes it difficult to provide a generic solution that will work for most applications.

The structure in the client (MailClient) makes composing emails very easy and you have the option to construct the body using either a simple string or with a template by leveraging the template renderer. The standard library can be used if you wish to send email via SMTP and most SaaS providers have a Go package that can be used if you choose to go that direction. You must finish the implementation of MailClient.send.

The from address will default to the configuration value at Config.Mail.FromAddress. This can be overridden per-email by calling From() on the email and passing in the desired address.

See below for examples on how to use the client to compose emails.

Sending with a string body:

err = c.Mail.
    Compose().
    To("[email protected]").
    Subject("Welcome!").
    Body("Thank you for registering.").
    Send(ctx)

Sending with a template body:

err = c.Mail.
    Compose().
    To("[email protected]").
    Subject("Welcome!").
    Template("welcome").
    TemplateData(templateData).
    Send(ctx)

This will use the template located at templates/emails/welcome.gohtml and pass templateData to it.

HTTPS

By default, the application will not use HTTPS but it can be enabled easily. Just alter the following configuration:

  • Config.HTTP.TLS.Enabled: true
  • Config.HTTP.TLS.Certificate: Full path to the certificate file
  • Config.HTTP.TLS.Key: Full path to the key file

To use Let's Encrypt follow this guide.

Logging

Logging is provided by Echo and is accessible within the Echo instance, which is located in the Web field of the Container, or within any of the context parameters, for example:

func (c *home) Get(ctx echo.Context) error {
    ctx.Logger().Info("something happened")

    if err := someOperation(); err != nil {
        ctx.Logger().Errorf("the operation failed: %v", err)
    }

    return nil
}

The logger can be swapped out for another, as long as it implements Echo's logging interface. There are projects that provide this bridge for popular logging packages such as zerolog.

Request ID

By default, Echo's request ID middleware is enabled on the router but it only adds a request ID to the log entry for the HTTP request itself. Log entries that are created during the course of that request do not contain the request ID. LogRequestID() is custom middleware included which adds that request ID to all logs created throughout the request.

Roadmap

Future work includes but is not limited to:

  • Flexible pager templates
  • Expanded HTMX examples and integration
  • Admin section

Credits

Thank you to all of the following amazing projects for making this possible.


pagoda's Issues

example repo ?

The structure and patterns are pretty nice, and really help with not reinventing the wheel again.

I was wondering if you have or are planning or wanting example repos that use this template.

Also, I assume that this template approach allows Go developers to work in their real repo, and when the architecture of the template repo changes, they can do a git merge into their repo?

Anyways nice code

API support

Thank you for putting Pagoda together and for sharing it!

I could not find any examples for implementing API endpoints. What's the best way to do that in Pagoda? And can that be added to the docs?

Also, would it be possible to use the same controller or function to implement a json response and an HTML (template) response, and respond with either based on the request (content-type request header, for example)?

Getting started improvements for newbies

It would be nice to have an explanation of the command-line steps to get the project working quickly.

  1. go get didn't work on my Ubuntu 20.04; I was asked to use go install:
    "
    go: downloading github.com/google/go-cmp v0.5.6
    go get: installing executables with 'go get' in module mode is deprecated.
    Use 'go install pkg@version' instead.
    For more information, see https://golang.org/doc/go-get-install-deprecation
    or run 'go help get' or 'go help install'.
    "
  2. A little explanation on how to install Docker.
    In the readme it is assumed that we know Docker... Some people like me don't.
    On Ubuntu do:
    snap install docker
    cd ~/go/pkg/mod/github.com/mikestefanello/pagoda@v0.4.0
    $ make up
    "
    Status: Downloaded newer image for postgres:alpine
    Creating pagodav040_cache_1 ...
    Creating pagodav040_db_1 ...
    Creating pagodav040_db_1 ... error

Creating pagodav040_cache_1 ... done
5e2b0267327e1f48384ad): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use

ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint pagodav040_db_1 (b1f6e840f3529cf0eb014fb1428118e3b55481e8f5d5e2b0267327e1f48384ad): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use
ERROR: Encountered errors while bringing up the project.
make: *** [Makefile:39 : up] Erreur 1

"

Best approach for mocking container services

Thank you @mikestefanello for this joyful template. It got me excited about building web applications in Go.

I have a custom service that I added to the container, that calls an external service (REST API). Any suggestions or recommendations for ways to test the application without making those REST API calls?

It would be great to say something about that in the README.

Image storage and handling

A really common pattern is where you need to allow users to upload images and save their existence in the DB for later use.

For example you might have a web page with a form, and as part of that they need to upload a screenshot or a list of images.

Then you want to save the image in a local FS or S3, and record the URL ( or RID - resource ID ) in the DB, so that you have a record of the form data and also the image that they also uploaded with the Form.

So you also need a web template for image uploads that can be easily included in other Go templates. This is because you need an image upload GUI that can be used with different Go form templates.

is there a reusable pattern here ?

The above example is a very simple one. A generic pattern of having a "blobs" table in the DB that is just for holding a mapping of files to the db can also be useful because it can be reused for lots of use cases. You can then reference that "blobs" table from other tables in your DB.

In terms of transactional integrity, you save the file first and if no errors then save the RID into the "blobs" table.

There is also a side effect with this pattern that can occur if you allow mutation access to S3. An image is changed, and you need to tell the DB that it has changed. File systems and S3 have a Notification API for this. Here are the minio docs on this to illustrate: https://docs.min.io/docs/minio-bucket-notification-guide.html
In this case, you would want a webhook to call back into the Go API.

Hope that I explained myself OK and that you think this is generic enough and crosses enough use cases that it's worth including.

For anyone reading this, please let me know what you think. Is this something you need ? Is it better to do this in a different way ?

feel free to ask me if i have not explained this very well.
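
A minimal sketch of the "blobs" idea as an Ent schema (hypothetical; the field names are only an illustration, and the save-the-file-first ordering described above still applies):

package schema

import (
	"time"

	"entgo.io/ent"
	"entgo.io/ent/schema/field"
)

// Blob maps an uploaded file (local FS or S3) to a database record,
// so other tables can reference uploads by ID.
type Blob struct {
	ent.Schema
}

func (Blob) Fields() []ent.Field {
	return []ent.Field{
		field.String("storage"),      // e.g. "s3" or "local"
		field.String("location"),     // URL, object key or path (the "RID")
		field.String("content_type"),
		field.Int64("size"),
		field.Time("created_at").Default(time.Now),
	}
}

The upload handler would write the file to storage first and only create the Blob row if that succeeds, as described above.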

Error initializing ORM

Thanks for making the Repo.

I'm having issues getting the ORM set up correctly. On a fresh clone of the repo, following the README and running make up and then make run, I'm getting the following error:

go run cmd/web/main.go
panic: failed to create database schema: postgres: unexpected number of rows: 1

goroutine 1 [running]:
github.com/mikestefanello/pagoda/pkg/services.(*Container).initORM(0xc000192370)
	/~/~/~/pagoda-template/pkg/services/container.go:175 +0x1a5
github.com/mikestefanello/pagoda/pkg/services.NewContainer()
	/~/~/~/pagoda-template/pkg/services/container.go:66 +0xdb
main.main()
	/~/~/~/pagoda-template/cmd/web/main.go:18 +0x25
exit status 2
make: *** [run] Error 1

Adding debug to Schema.Create() shows

2023/09/17 15:33:23 driver.Query: query=SHOW server_version_num args=[]
2023/09/17 15:33:23 driver.Tx(ce7868ac-32fc-483e-805f-df638139aab3): started
2023/09/17 15:33:23 Tx(ce7868ac-32fc-483e-805f-df638139aab3).Query: query=SELECT setting FROM pg_settings WHERE name IN ('lc_collate', 'lc_ctype', 'server_version_num', 'crdb_version') ORDER BY name DESC args=[]
panic: failed to create database schema: postgres: unexpected number of rows: 1
...

I've checked this query against the postgres container and it returns

make db
docker exec -it pagoda_db psql postgresql://admin:admin@localhost:5432/app
psql (16.0)
Type "help" for help.

app=# SELECT setting FROM pg_settings WHERE name IN ('lc_collate', 'lc_ctype', 'server_version_num', 'crdb_version') ORDER BY name DESC;
 setting
---------
 160000
(1 row)

Unsure where to go from here, specifically regarding the unexpected number of rows: 1 error; how many rows is it expecting?

Alias "docker compose" in the Makefile so it works on Macs as well

On a Mac it's "docker-compose", not "docker compose".

I changed the makefile like this:

# mac name
DCO_BIN_NAME=docker-compose
# linux name
#DCO_BIN_NAME=docker compose

....

# Start the Docker containers
.PHONY: up
up:
	$(DCO_BIN_NAME) up -d
	sleep 3

htmx problem

I am trying to implement the functionality to bulk update as set out in https://htmx.org/examples/bulk-update/
It seems that the swap of the tbody element isn't being done properly: the checkboxes are not visible (although they do appear in the HTML), and the button elements are repeated, which seems to indicate that it is not just the tbody being swapped but other elements on the page as well (or maybe the whole page).
In your experience with htmx, do you have any tips about where to start troubleshooting this?

Can't update Go and deps cleanly

Setting the project to Go 1.21 and then running go get -u all, we encounter breakage with pegasus and mergo.

Updating mergo was fixed with these commands:

go mod edit -replace "github.com/imdario/mergo=github.com/imdario/[email protected]"
go get -u all
go mod tidy

But when running make run there's a compile error in XiaoMi/pegasus:

go run cmd/web/main.go
# github.com/XiaoMi/pegasus-go-client/idl/cmd
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:42:15: not enough arguments in call to iprot.ReadStructBegin
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:47:35: not enough arguments in call to iprot.ReadFieldBegin
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:61:26: not enough arguments in call to iprot.Skip
        have (thrift.TType)
        want (context.Context, thrift.TType)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:71:26: not enough arguments in call to iprot.Skip
        have (thrift.TType)
        want (context.Context, thrift.TType)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:76:25: not enough arguments in call to iprot.Skip
        have (thrift.TType)
        want (context.Context, thrift.TType)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:80:13: not enough arguments in call to iprot.ReadFieldEnd
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:84:12: not enough arguments in call to iprot.ReadStructEnd
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:91:15: not enough arguments in call to iprot.ReadString
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:100:18: not enough arguments in call to iprot.ReadListBegin
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:108:16: not enough arguments in call to iprot.ReadString
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/cmd/cmd.go:108:16: too many errors
# github.com/XiaoMi/pegasus-go-client/idl/base
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/blob.go:18:15: not enough arguments in call to iprot.ReadBinary
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/blob.go:27:27: not enough arguments in call to oprot.WriteBinary
        have ([]byte)
        want (context.Context, []byte)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/error_code.go:105:18: not enough arguments in call to iprot.ReadString
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/error_code.go:110:27: not enough arguments in call to oprot.WriteString
        have (string)
        want (context.Context, string)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/gpid.go:18:12: not enough arguments in call to iprot.ReadI64
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/gpid.go:30:24: not enough arguments in call to oprot.WriteI64
        have (int64)
        want (context.Context, int64)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/rpc_address.go:26:18: not enough arguments in call to iprot.ReadI64
        have ()
        want (context.Context)
../../go/pkg/mod/github.com/!xiao!mi/[email protected]/idl/base/rpc_address.go:35:24: not enough arguments in call to oprot.WriteI64
        have (int64)
        want (context.Context, int64)
make: *** [run] Error 1

I tried pointing XiaoMi/pegasus at the Apache-hosted version, but that did not work. Maybe I did something wrong.

file: go.mod

replace github.com/imdario/mergo => github.com/imdario/mergo v0.3.16
replace github.com/apache/incubator-pegasus/tree/master/go-client => github.com/XiaoMi/pegasus-go-client v0.0.0-20220519103347-ba0e68465cd5

go mod graph:

github.com/mikestefanello/pagoda github.com/XiaoMi/[email protected]
github.com/mikestefanello/pagoda github.com/pegasus-kv/[email protected]
github.com/XiaoMi/[email protected] github.com/BurntSushi/[email protected]
github.com/XiaoMi/[email protected] github.com/agiledragon/[email protected]+incompatible
github.com/XiaoMi/[email protected] github.com/cenkalti/backoff/[email protected]
github.com/XiaoMi/[email protected] github.com/fortytw2/[email protected]
github.com/XiaoMi/[email protected] github.com/pegasus-kv/[email protected]
github.com/XiaoMi/[email protected] github.com/sirupsen/[email protected]
github.com/XiaoMi/[email protected] github.com/stretchr/[email protected]
github.com/XiaoMi/[email protected] golang.org/x/[email protected]
github.com/XiaoMi/[email protected] golang.org/x/[email protected]
github.com/XiaoMi/[email protected] gopkg.in/natefinch/[email protected]
github.com/XiaoMi/[email protected] gopkg.in/[email protected]
github.com/XiaoMi/[email protected] k8s.io/[email protected]
github.com/eko/gocache/[email protected] github.com/XiaoMi/[email protected]
github.com/eko/gocache/[email protected] github.com/pegasus-kv/[email protected]

htmx example

I am a Go dev. Thanks for making this nice and clean Go base.

HTMX really looks like an answer to so many GUI issues.

It would be great if you added an HTMX example, so I can see how to join up the front and back.
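
For what it's worth, joining the front and back is mostly just a handler that returns an HTML fragment. A bare-bones Echo sketch, independent of the kit's page/controller helpers (URLs and element IDs are made up):

package main

import (
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Full page: a button that asks the server for a fragment.
	e.GET("/", func(c echo.Context) error {
		return c.HTML(http.StatusOK, `<html><body>
			<script src="https://unpkg.com/htmx.org"></script>
			<button hx-get="/time" hx-target="#out">What time is it?</button>
			<div id="out"></div>
		</body></html>`)
	})

	// Fragment: htmx swaps this into #out without a full page reload.
	e.GET("/time", func(c echo.Context) error {
		return c.HTML(http.StatusOK, "<p>"+time.Now().Format(time.RFC1123)+"</p>")
	})

	e.Logger.Fatal(e.Start(":8080"))
}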

How can a task save its data into the DB with Ent?

Hi Mikestefanello,

Your kit is awesome for a starter like me.
I'm researching how to utilize your kit to build a web app.
My goal is that:

  • A task runs in the background to fetch data from a website, then saves the obtained data into the Postgres DB using Ent
  • When a client requests a route (GET), the data is loaded from the Postgres DB and shown in the HTML

My problem is:

  • When saving data, as in /routes/*.go, two variables are needed, c (*Container) and ctx (echo.Context), to work with c.Container.ORM. But the ctx of a task handler (in /worker/tasks/*.go) is a context.Context (using reflect I see that it is *context.timerCtx). Therefore I couldn't save data to the DB.
    Could you tell me how to save data to the DB in a worker? (If there is any sample code, I'd really appreciate your help; see the sketch below for one possible approach.)

Thanks
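
In case it helps, a sketch of the gist: the Ent client only needs a plain context.Context, so the task's ctx can be used directly. This assumes the *ent.Client is handed to the task when it is registered (not necessarily how the kit wires things up), that a hypothetical Post entity has been added to the Ent schema and regenerated, and that the handler signature is simplified:

package tasks

import (
	"context"

	"github.com/mikestefanello/pagoda/ent"
)

// FetchTask fetches data from a website and stores it with Ent.
type FetchTask struct {
	orm *ent.Client // taken from the service container at registration time
}

func NewFetchTask(orm *ent.Client) *FetchTask {
	return &FetchTask{orm: orm}
}

// Process shows that Ent only needs a context.Context; no echo.Context
// is required outside of HTTP handlers.
func (t *FetchTask) Process(ctx context.Context) error {
	_, err := t.orm.Post.
		Create().
		SetTitle("fetched from the website").
		Save(ctx)
	return err
}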

Files

How about supporting both S3 and local storage?

The directory info can live in the DB.

This is a very standard approach.

So after a file is transacted into the file store, an event updates the directory info in the DB.

Websocket example

Any chance of including a websocket example using HTMX? I haven't been able to get the official docs example to work, but I'm probably overlooking something. Anyway, would be nice to establish a pattern for it here.

Feature idea: Deployment examples

The architecture is fast thanks to the smart layering of caches.

I was thinking that deployment examples might be useful for devs.

Fly.io can scale out PostgreSQL and Redis automatically, for example. It requires slight changes to the Docker setup, as Fly only allows a Dockerfile and not Docker Compose. This might sound like a problem but it is not; it just requires a Dockerfile that can boot Redis and PostgreSQL.

Anyway, there is no Dockerfile in the repo for the main Go code yet, so we could make one that is designed for Fly and for scaling out Redis and PostgreSQL if we want.

It can also be used outside of Fly, by shaping the Dockerfile appropriately.

https://fly.io/docs/reference/redis/ covers Redis, but you can also just boot your own.

https://fly.io/docs/postgres/ covers Postgres, but you can also boot your own and scale it out yourself.

Fly syncs databases into all regions automatically.

The Go code scales to zero, so it is serverless and costs you close to nothing when no users are hitting your system.

The fly CLI is written in Go and is insanely easy.

Fly's anycast routing automatically load-balances to the nearest server, so it is seamless scale-out at that level too. It's pretty no-brainer stuff.

Most people use Cloudflare, but any DNS works. SSL is done for you.

CueLang

Your code is a bit of a low-code type of architecture.

I was thinking about also using CUE with it. This would mean that you don't have to have types everywhere.
Types can be used at runtime. But you can also generate JSON, YAML, OpenAPI, and a ton more things with CUE.

https://cuelang.org/

https://cuelang.org/docs/usecases/ is a good place to start.

https://github.com/cue-lang/cue is written in Go... It even has the equivalent of go.mod and go.sum. It's really cool.

There are a ton of Go coders using it around GitHub etc.

https://github.com/cueblox/blox

https://github.com/eonpatapon/tree-sitter-cue

Create a space for discussions, support and future roadmap planning

Hi Mike, I find this project extremely helpful, and with the trend that HTMX is going through right now I can only assume that a modern stack like this would serve a lot of community members. I was wondering whether it would be worthwhile creating a Slack/Discord for like-minded people working on this project to share their experience, help with any issues, recommendations and future roadmap. Happy to set something up and create a PR for the README if that helps.

Flash messages when using HTMX

When using HTMX, the full template is not rendered on the backend, which means that flash messages are not processed. How should we use flash messages with HTMX-powered forms?

It is possible to include the "messages" block, but I was wondering about the case where it lives in the base template and you want to show flash messages not in the form but in the base template. I'm not sure this can even work, given the semantics of what is actually rendered (just the HTML that should be changed, not the whole template).

So any tips on patterns would be greatly appreciated!
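
Not an official answer, but one HTMX-friendly pattern is to signal the flash via the HX-Trigger response header and let a small listener in the base template display it (or swap the messages block out-of-band). A hedged, Echo-level sketch; the event name and helper are made up:

package routes

import (
	"encoding/json"
	"net/http"

	"github.com/labstack/echo/v4"
)

// flashViaHTMX sets an HX-Trigger header so the frontend (e.g. an
// Alpine/htmx listener in the base template) can show the message,
// even though only a fragment is rendered for this request.
func flashViaHTMX(c echo.Context, message string) error {
	payload, err := json.Marshal(map[string]any{
		"flash": map[string]string{"text": message, "type": "success"},
	})
	if err != nil {
		return err
	}
	c.Response().Header().Set("HX-Trigger", string(payload))
	return nil
}

// Example usage in a handler for an htmx-powered form post.
func saveHandler(c echo.Context) error {
	// ... persist the form ...
	if err := flashViaHTMX(c, "Saved!"); err != nil {
		return err
	}
	return c.HTML(http.StatusOK, "<p>Saved</p>")
}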

How to query data with the ORM using conditions (ORDER BY, LIMIT, ...)

Hi Mikestefanello,
In routes, I am attempting to query the n latest records saved in the DB.
Assume I want to query the 14 newest posts (latest created_at).
I wrote this code:

// ./routes/posts.go
...
	smt, err := c.Container.ORM.Posts.
		Query().
		Order(ent.Desc("created_at")).
		Limit(14).
		Only(ctx.Request().Context())
...

but it returned error:

ent: posts not singular

Could you tell me the correct way to use .Order() here?

Thanks
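
For reference, this is standard Ent behaviour rather than anything kit-specific: .Only() returns an error unless exactly one row matches, so with a Limit(14) you likely want .All(), which returns a slice. A sketch mirroring the snippet above:

// Fetch the 14 most recently created posts.
posts, err := c.Container.ORM.Posts.
	Query().
	Order(ent.Desc("created_at")).
	Limit(14).
	All(ctx.Request().Context())
if err != nil {
	return err
}
// posts is a slice; range over it or hand it to the page for rendering.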

How to use pongo2 as default template engine.

I love pagoda. It's an excellent piece of software.

However, I do feel that the default template engine from the Go standard library is quite limited,
especially since HTMX could benefit from a lot of features not found in Go's official templating library.

Hence, I wanted to use pongo2 (similar to Django's template engine). How do I go about that?
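
One possible route (a sketch, not something the kit supports out of the box): implement Echo's Renderer interface on top of pongo2 and register it in place of the kit's template renderer. The directory layout and names below are assumptions:

package templates

import (
	"io"
	"path/filepath"

	"github.com/flosch/pongo2/v6"
	"github.com/labstack/echo/v4"
)

// Pongo2Renderer satisfies echo.Renderer by compiling templates
// from a directory with pongo2 (Django-like syntax).
type Pongo2Renderer struct {
	dir string
}

func NewPongo2Renderer(dir string) *Pongo2Renderer {
	return &Pongo2Renderer{dir: dir}
}

func (r *Pongo2Renderer) Render(w io.Writer, name string, data interface{}, _ echo.Context) error {
	tpl, err := pongo2.FromFile(filepath.Join(r.dir, name))
	if err != nil {
		return err
	}
	ctx, _ := data.(map[string]interface{}) // nil is fine if data is something else
	return tpl.ExecuteWriter(pongo2.Context(ctx), w)
}

Register it with e.Renderer = NewPongo2Renderer("templates") and render via c.Render(http.StatusOK, "home.html", map[string]interface{}{"title": "Hi"}); note that this bypasses the kit's Page/layout machinery entirely.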

sqlc instead of ent

I would like to know your opinion on using sqlc instead of Ent as the database layer. Do you think there is a good way to add it as an optional or interchangeable choice alongside Ent, or should we consider creating a separate fork for it?

Rendering data returned by ent query

Hi mikestefanello,
Do you have some pointers on how to render the data returned by an Ent ORM query? For example, do you create a struct for the returned data and range through the query results to populate it, or what is your suggested approach / how have you used it in the past? I am just not so clear on how to feed the query data into Page.Data.
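
For what it's worth, a hedged sketch of one approach: the ent-generated structs can usually be passed straight into Page.Data, with no intermediate view struct. Method and field names below follow the kit's existing route handlers (compare with the other files in /routes) but may differ between versions, and the Post entity and "posts" template name are made up:

// Inside a route handler:
posts, err := c.Container.ORM.Post.
	Query().
	All(ctx.Request().Context())
if err != nil {
	return err
}

page := controller.NewPage(ctx)
page.Name = "posts"
page.Data = posts // generated structs go straight into the page

return c.RenderPage(ctx, page)

The template can then range over .Data directly, e.g. {{range .Data}}<li>{{.Title}}</li>{{end}}.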

A few suggestions / questions from a first time user

Firstly, I'd like to say thanks for putting together this "framework" - it's a great alternative to something like Buffalo or Bud. I plan on experimenting with it, adapting it to my own base default template, trying a different template engine (templ), and adding an assets pipeline for JS/CSS.

Here are a few observations / questions I noticed from using the template for the first time (some of which could become a PR based on feedback):

  1. Pages are cached, but is it intended for routes to be cached as well? For example, if you start the application and visit /about, change the /about route to /about-us (and rerun), then visit /about, the page still renders. I would expect that a change in the router would mean it no longer routes to any page, cached or not.
  2. config.GetConfig() appears to be guessing where the config directory is based on which sub-package file is loading the config. I think a better approach would be something like the following (see also the fuller sketch after this list):
	// Get the basepath / project root
	_, b, _, _ := runtime.Caller(0)
	basepath := filepath.Dir(b)
	...
	viper.AddConfigPath(basepath)
  3. A make down command would be useful, e.g.: $(DCO_BIN) down
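
Expanding point 2 into a self-contained sketch (Config is the kit's existing config struct; the viper calls mirror typical usage and the relative path is an assumption that depends on where the config file actually lives):

package config

import (
	"path/filepath"
	"runtime"

	"github.com/spf13/viper"
)

// GetConfig loads the config file relative to this source file's
// directory, so it works no matter which package triggers the load.
func GetConfig() (Config, error) {
	var c Config

	// Directory containing this source file; adjust with
	// filepath.Join(basepath, "..", "..") if the config file lives elsewhere.
	_, b, _, _ := runtime.Caller(0)
	basepath := filepath.Dir(b)

	viper.AddConfigPath(basepath)
	viper.SetConfigName("config")
	viper.SetConfigType("yaml")
	viper.AutomaticEnv()

	if err := viper.ReadInConfig(); err != nil {
		return c, err
	}
	if err := viper.Unmarshal(&c); err != nil {
		return c, err
	}
	return c, nil
}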

Wasm service worker and htmx idea

There is a rust example here: https://www.reddit.com/r/htmx/comments/y64ian/wasmservice_proof_of_concept_for_htmx_webassembly/

The concept I like is to render from the server to ensure a fast initial load and then bootstrap the service worker.

The service worker is just a version of the Go backend that acts as a proxy.

Use cases?

Quasi-DB and template rendering: in the same way that the cache uses tags that are invalidated by the DAL, we could invalidate data, and hence rendered templates, in the service worker.

Materialised views are its data. The backend DB produces data and we cache it in Redis. We could extend that to the service worker.

Both of the above examples require the server to fire events at the service worker to tell it that data is invalidated.

Posts from the browser DOM to the service worker could do validation and, if they pass, get fed further to the backend, where further validation is sometimes required.

Building a service worker just requires the service.js, and the Wasm (originally Go) will load.

The HTMX AJAX would route through to the service worker, just like it does now to the backend.

Request for Tailwind CSS Support

Hi, thank you for building/maintaining this project. Quick question: have you considered replacing the way the project is currently styled with the popular Tailwind utility library? We could perhaps include this in the Makefile as a build step for production, or run a script for development.

Problem implementing redirect with querystring parameters

I am struggling to implement a redirect with query string parameters.
It seems that controller.Redirect can receive optional parameters, which are empty interfaces.
I've tried a few variations on this theme but the parameters never find their way into the URL.
In this example the route defaults to "itemselect" when I would expect "itemselect?id=13&name=aaa".
Does anybody know how to implement this correctly?

var queryParam1 interface{} = "id=13"
var queryParam2 interface{} = "name=aaa"
return c.Redirect(ctx, "itemselect", queryParam1, queryParam2)
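
A workaround sketch using only Echo and the standard library: reverse the named route yourself, append the query string, and redirect. This sidesteps controller.Redirect, whose variadic values appear to be route/path parameters rather than query parameters; the helper name is made up and route naming is assumed to be set up as in the kit:

package routes

import (
	"net/http"
	"net/url"

	"github.com/labstack/echo/v4"
)

// redirectWithQuery reverses a named route, appends query string
// parameters and issues the redirect.
func redirectWithQuery(ctx echo.Context, routeName string, params url.Values) error {
	dest := ctx.Echo().Reverse(routeName)
	if len(params) > 0 {
		dest += "?" + params.Encode()
	}
	return ctx.Redirect(http.StatusFound, dest)
}

// Usage:
//   return redirectWithQuery(ctx, "itemselect", url.Values{
//       "id":   {"13"},
//       "name": {"aaa"},
//   })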

Cannot connect to database on a non-standard port

The database port from the configuration is ignored when creating the Postgres connection string, which defaults to the standard port 5432; in my case that is not where the Postgres server is.

I updated container.go as follows to be able to spin this up:

	getAddr := func(dbName string) string {
		return fmt.Sprintf("postgresql://%s:%s@%s:%s/%s",
			c.Config.Database.User,
			c.Config.Database.Password,
			c.Config.Database.Hostname,
			strconv.Itoa(int(c.Config.Database.Port)),
			dbName,
		)
	}
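
A slightly more defensive variant of the same fix, using net/url so credentials are escaped properly (field names are taken from the snippet above; requires the "fmt" and "net/url" imports):

	getAddr := func(dbName string) string {
		u := url.URL{
			Scheme: "postgresql",
			User:   url.UserPassword(c.Config.Database.User, c.Config.Database.Password),
			Host:   fmt.Sprintf("%s:%d", c.Config.Database.Hostname, c.Config.Database.Port),
			Path:   dbName, // url.URL adds the leading "/" when printing
		}
		return u.String()
	}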

Syntax Highlighting for gohtml files

Apologies if this does not belong in issues, but when I clone the project I don't see any syntax highlighting on the gohtml files. Is there a step I am missing that would give me proper syntax highlighting? I checked the VS Code extensions and didn't see anything. Should I just use the default HTML highlighting?

Thanks!

Workflow on making migrations with Ent and Atlas?

Thank you for a great project! This project is perfect for me as a new go developer.

I am having a hard time working with migrations, especially creating some seed data. May I know the steps you follow to create migrations? Thank you!

So far what I have tried:

  1. atlas migrate new add-default-roles --dir "file://ent/migrate/migrations"
  2. atlas migrate hash --dir "file://ent/migrate/migrations"
  3. atlas migrate status --dir "file://ent/migrate/migrations" -u "sqlite://mydb.db"

On the third step I'm getting an error:
Error: sql/migrate: connected database is not clean: found multiple tables: 4. baseline version or allow-dirty is required

This template is really awesome, thanks!

Sorry for the unnecessary issue, but I had to say this starter project is really, really cool. I love the decisions you have made; the initial project functionality is a really great demo/example, with really nice touches like the docker-compose setup, the Redis cache wiring, the make scripts and everything. The documentation in the README is great too. Very well done, thanks everyone!

Rebuilding after file edits

Thanks for this great project! I hit Ctrl-C and run make run every time I edit a file, to rebuild and restart the server. Is this the correct approach, or is there a filesystem watcher that I am not using properly?
