
cli's Introduction

Code PushUp CLI


Comprehensive tech quality monitoring

Quantify tech debt · Track incremental improvements · Monitor regressions




🔌 Code quality tools are like phone chargers. Everyone has a different plug.

Common problems with keeping track of technical quality:

  • When tech debt is invisible, it's difficult to plan much-needed maintenance efforts 🔧
  • Individual tools measure different metrics, and the inability to combine them leaves you without a comprehensive overview 🧑‍🦯
  • Open-source tools are typically used to fail checks in CI, where arbitrary pass/fail thresholds can't measure incremental improvements 🤖
  • Off-the-shelf solutions tend to be opinionated and hard to customize, so they may not fit your specific needs 🧱

We want to change that!


🔎🔬 Code quality integrations for any tool 📉🔍

🚀 Get started · 🤖 CI automation · 📈 Portal · 🔌 Custom plugins

  • The Portal 🌐 visualizes reports in a slick UI.
  • Track historical data from uploads. ⬆️

🔌 Officially supported plugins

  • ESLint – Static analysis using ESLint rules.
  • Code Coverage – Collects code coverage from your tests.
  • JS Packages – Checks 3rd party packages for known vulnerabilities and outdated versions.
  • Lighthouse – Measures web performance and best practices with Lighthouse.

๐Ÿ“ How it works

  1. Configure
    Pick from a set of supported packages or include your own ideas. 🧩

  2. Integrate
    Use our integration guide and packages to set up CI integration in minutes. ⏱️

  3. Observe
    Guard against regressions and track improvements with every code change. 🔍

  4. Relax!
    Watch improvements, share reports 📈


💖 Want to support us?

cli's People

Contributors

beaussan, biophoton, dianjuar, getlarge, hanna-skryl, hoebbelsb, ikatsuba, layzeedk, markusnissl, matejchalk, mishaseredenkopushbased, nachovazquez, tlacenka, vmasek, wuglyakbolgoink


cli's Issues

Should we enforce the Dependency Inversion Principle (DIP)?

Motivation

This pattern promotes decoupling, making the system easier to maintain, scale, and modify. It also facilitates unit testing since you can mock the abstractions.

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

Implementation Overview

We can achieve this by making sure the Core Module and the CLI both depend on the abstractions in the Models Module. This means we would move the abstractions there, and that module would then dictate the behavior of the system.

(diagram: module dependency graph)
(In the diagram, the dependency between the CLI and Core is a dotted line. This is because the Core Module implements the abstractions in Models, so the CLI actually depends on Models, and Core could be replaced with anything that satisfies those abstractions.)

By making sure the abstractions live in the Models Module and that both the Core Module and the CLI depend on them, we essentially invert the dependency: higher-level and lower-level modules both depend on an abstraction, rather than the higher-level module depending on the details of a lower-level module.
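
A minimal TypeScript sketch of that inversion (package paths and type names here are illustrative, not the actual modules):

// models: the abstraction both sides depend on (hypothetical shape)
export interface Report {
  audits: { slug: string; score: number }[];
}
export interface Collector {
  collect(): Promise<Report>;
}

// core: a lower-level detail that implements the abstraction
import type { Collector, Report } from '@code-pushup/models';
export class CoreCollector implements Collector {
  async collect(): Promise<Report> {
    // run plugins, gather audit outputs, ...
    return { audits: [] };
  }
}

// cli: the higher-level module depends only on the abstraction
import type { Collector } from '@code-pushup/models';
export async function runCollectCommand(collector: Collector): Promise<void> {
  const report = await collector.collect();
  console.log(`Collected ${report.audits.length} audits`);
}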

Because the Core Module implements the abstractions from the Models Module, the CLI does not need to depend on the Utils Module. Therefore I suggest we enforce module boundaries on the CLI so that it can only depend on Core and Models.


This would also mean that the Plugins Modules do not require the Core Module, as all abstractions needed to create a plugin are located in the Models Module and all the functions that help plugin authors create these plugins are located in the Utils Module.


As part of this I would also suggest we enforce module boundaries so that the Plugins Modules can only depend on Utils and Models. Potentially, in the future we could also have testing utils, and the boundaries could be extended to include that module.

Because Utils depends on Models, and Core implements the logic related to CLI execution, we should be very mindful of what can be imported in the Utils Module. For example, it should not import yargs, as that belongs in the Core Module, nor should it import anything specific to a plugin, like Lighthouse or any ESLint internals.

TODOS

  • Enforce Module Boundaries with NX

    • CLI can only depend on Models and Core
    • Plugins can only depend on Models and Utils
  • Restrict imports in Utils and Plugins

    • Utils and Plugins cannot import Yargs
  • Move and create abstractions in Models

    • As we move Core out of Utils we should move the abstractions to Models

    Related to: #73
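
As a sketch, these boundaries could be expressed with Nx's @nx/enforce-module-boundaries ESLint rule roughly as follows. The scope tags are hypothetical and would have to be assigned in each project.json; the bannedExternalImports entry covers the yargs restriction.

{
  "overrides": [
    {
      "files": ["*.ts"],
      "rules": {
        "@nx/enforce-module-boundaries": [
          "error",
          {
            "depConstraints": [
              {
                "sourceTag": "scope:cli",
                "onlyDependOnLibsWithTags": ["scope:core", "scope:models"]
              },
              {
                "sourceTag": "scope:plugin",
                "onlyDependOnLibsWithTags": ["scope:utils", "scope:models"],
                "bannedExternalImports": ["yargs"]
              },
              {
                "sourceTag": "scope:utils",
                "onlyDependOnLibsWithTags": ["scope:models"],
                "bannedExternalImports": ["yargs"]
              }
            ]
          }
        ]
      }
    }
  ]
}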

Implement collect command

  • runs runner for each plugin
  • reads outputs for each plugin and validates them
  • prints summary to terminal
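
A rough sketch of those steps (the types are simplified placeholders, not the real models):

import { execFile } from 'node:child_process';
import { readFile } from 'node:fs/promises';
import { promisify } from 'node:util';

type AuditOutput = { slug: string; value: number; score: number };
type PluginConfig = {
  slug: string;
  runner: { command: string; args?: string[]; outputPath: string };
};

const exec = promisify(execFile);

export async function collect(plugins: PluginConfig[]): Promise<Record<string, AuditOutput[]>> {
  const results: Record<string, AuditOutput[]> = {};
  for (const plugin of plugins) {
    // 1. run the plugin's runner as a child process
    await exec(plugin.runner.command, plugin.runner.args ?? []);
    // 2. read its output file and validate the shape (zod would do this in the real CLI)
    const outputs = JSON.parse(await readFile(plugin.runner.outputPath, 'utf8')) as AuditOutput[];
    if (!Array.isArray(outputs)) {
      throw new Error(`Invalid runner output for plugin "${plugin.slug}"`);
    }
    results[plugin.slug] = outputs;
    // 3. print a summary line to the terminal
    console.log(`${plugin.slug}: ${outputs.length} audits collected`);
  }
  return results;
}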

model refinements

I see a couple of potential flaws in the current data structure that we should address.

  • The type RunnerOutput seems odd; I feel we should restructure the data to just an array of audit outputs and name it AuditOutput
  • At the moment not all meta information is included in the report.
    The missing information is:
    • plugin.meta
    • categories => should contain metadata
  • The naming could get more aligned
    • use xConfig for the different top level props in CoreConfig
    • use xMeta for all metaInformation and store it under a property called meta
      • use it for CoreConfig
      • use it for PluginConfig
        • is missing description to align with AuditMetadata
      • use it for AuditMetadata
        • we should rename title to name to align with PluginConfig.meta.name
      • AuditGroup does not contain meta information => it should
    • use xOutput for the different results from a plugin execution
    • use xReport for data structures living in Report
    • remove label
    • always use title instead of name
  • To generate reports, the configuration is needed. This tightly couples the report with the content of the code-pushup.config.js file at the time of execution. The report should be interpretable independently of the core config
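
To illustrate the proposed xConfig / xMeta / xOutput / xReport naming, a purely illustrative sketch (not the final models):

// illustrative only – follows the naming conventions proposed above
type PluginMeta = { slug: string; title: string; description?: string };
type AuditMeta = { slug: string; title: string; description?: string };

type PluginConfig = { meta: PluginMeta; audits: AuditMeta[] };
type CoreConfig = {
  plugins: PluginConfig[];
  categories: { meta: { title: string }; refs: string[] }[];
};

type AuditOutput = { slug: string; value: number; score: number };
type PluginOutput = { slug: string; audits: AuditOutput[] };

// reports merge outputs with the metadata, so they can be interpreted without the core config
type AuditReport = AuditMeta & AuditOutput;
type PluginReport = { meta: PluginMeta; audits: AuditReport[] };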

Use Verdaccio local registry for E2E tests

  • currently we use NPM workspaces to e2e test configs with plugin imports
    • requires running npm install after first package builds
  • Nx supports Verdaccio for publishing projects to a local registry
  • the @nx/plugin:e2e-project generator sets up local registry start and teardown for E2E testing
  • replace imports in e2e/cli-e2e/mocks/code-pushup.config.(mjs|js|cjs|ts)
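
A rough sketch of what a Vitest global setup for this could look like (the port, paths, and the naive wait are illustrative; the @nx/plugin generator produces a more robust version):

import { execSync, spawn, type ChildProcess } from 'node:child_process';

const registryUrl = 'http://localhost:4873';
let registryProcess: ChildProcess | undefined;

export async function setup() {
  // start a throwaway Verdaccio instance for the test run
  registryProcess = spawn('npx', ['verdaccio', '--listen', '4873'], { stdio: 'inherit' });
  // naive wait for the registry to come up; a real setup would poll the port
  await new Promise(resolve => setTimeout(resolve, 5000));
  // publish the freshly built packages against the local registry
  execSync(`npm publish dist/packages/cli --registry ${registryUrl}`, { stdio: 'inherit' });
}

export async function teardown() {
  registryProcess?.kill();
}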

nx plugin - `configuration` generator for standalone workspace layout

The nx plugin should have a generator that is responsible for adding the code-pushup.config.json to a project in the workspace.

Todo:

  • The project should be provided via a CLI argument
  • If the project is not given in the CLI arguments, the available projects should show up as select options in a CLI prompt
  • In a standalone workspace layout a script should get added to the root package.json
  • In a common workspace layout (e.g. libs/apps) an error should get thrown that mentions that the feature will be implemented in the future.
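
A rough sketch of such a generator using @nx/devkit (option names, file contents, and the script name are assumptions):

import {
  Tree,
  formatFiles,
  readProjectConfiguration,
  updateJson,
} from '@nx/devkit';

interface ConfigurationGeneratorSchema {
  project: string;
}

export default async function configurationGenerator(
  tree: Tree,
  options: ConfigurationGeneratorSchema,
) {
  const project = readProjectConfiguration(tree, options.project);

  // add the config file next to the project
  tree.write(
    `${project.root}/code-pushup.config.json`,
    JSON.stringify({ plugins: [] }, null, 2),
  );

  // standalone workspace layout: add a script to the root package.json
  updateJson(tree, 'package.json', json => ({
    ...json,
    scripts: { ...json.scripts, 'code-pushup': 'code-pushup autorun' },
  }));

  await formatFiles(tree);
}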

Handle test artefacts with cleanup and VFS where applicable

To improve the test situation with artefacts we can do the following things:

  • use a VFS implementation to mock all direct file access. An implementation including tests can be found here
  • for files created in a spawned process (where mocking is not possible) we can:
    • work in a folder that is registered in .gitignore
    • use import.meta.url to align the different cwd when executing tests in the IDE or CLI
    • use beforeAll/beforeEach and afterAll/afterEach
    • write helper functions to setup and clean folders and files
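
For the VFS bullet, a minimal sketch of the memfs + Vitest pattern (paths and fixture contents are illustrative):

import { vol } from 'memfs';
import { readFileSync } from 'node:fs';
import { beforeEach, describe, expect, it, vi } from 'vitest';

// route all `node:fs` access to the in-memory file system
vi.mock('node:fs', async () => {
  const memfs: typeof import('memfs') = await vi.importActual('memfs');
  return { default: memfs.fs, ...memfs.fs };
});

describe('persist', () => {
  beforeEach(() => {
    vol.reset();
    vol.fromJSON({ '/tmp/report.json': '{"score":1}' });
  });

  it('reads the report from the virtual file system', () => {
    expect(JSON.parse(readFileSync('/tmp/report.json', 'utf8'))).toEqual({ score: 1 });
  });
});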

Update:
A lot of this has already changed. This ticket would most probably just be a check or mini cleanup in that area, and could include documenting conclusions about where and when in a README.

Submit issue to @nx/rollup re: hard-coded file extensions

The @nx/rollup:rollup executor hard-codes the file extensions for the resulting bundles:

  • file names set here
  • package.json references set here and here

This means that when "format": ["cjs", "esm"] is specified, index.cjs.js and index.esm.js files are created. This prevents the resulting package from supporting both CommonJS (require) and ES Module (import) imports, because Node will interpret all .js files in the same way (based on "type" in package.json).

We need to generate explicit file extensions, i.e. index.cjs and index.mjs. In order to achieve this with the @nx/rollup plugin, we've had to resort to the following workarounds:

  • adding a custom "rollupConfig": "rollup.config.js" to each project.json to change entryFileNames from '[name].[format].js' to '[name].cjs' or '[name].mjs' - see rollup.config.js
  • modifying the underlying @nx/rollup sources in node_modules using patch-package - see patches/@nx+rollup+16.7.0.patch
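
The custom rollup.config.js workaround looks roughly like this (the exact shape of the config object passed in by @nx/rollup can differ between versions):

// rollup.config.js
module.exports = config => {
  const ext = config.output.format === 'cjs' ? 'cjs' : 'mjs';
  return {
    ...config,
    output: {
      ...config.output,
      // replace the hard-coded `[name].[format].js` with explicit extensions
      entryFileNames: `[name].${ext}`,
      chunkFileNames: `[name].${ext}`,
    },
  };
};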

Submit issue to Lighthouse re: import side-effects

To support dynamically importing TS configs which import Lighthouse (using Jiti), we have to work around the problem of Lighthouse having import side-effects. These use import.meta (ESM-only) to then read asset files relative to the given module.

Examples of side-effects:

Lighthouse often passes import.meta to getModuleDirectory or getModulePath, which only access the url property.

The current workaround is a custom Babel plugin which replaces getModuleDirectory(import.meta) with a string literal (either empty string or the actual path when required). This is very hacky because it relies on knowledge of Lighthouse internals, and will make updating lighthouse versions error-prone. (Also, the custom Babel plugin is needed because Lighthouse doesn't access import.meta.url directly, so existing Babel transformations don't work.)
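
A sketch of such a Babel plugin. It only covers direct getModuleDirectory(import.meta) / getModulePath(import.meta) calls and relies on those Lighthouse internals staying as they are:

// babel-plugin-replace-get-module-directory.js
module.exports = function replaceGetModuleDirectory({ types: t }) {
  return {
    visitor: {
      CallExpression(path) {
        const callee = path.get('callee');
        const [firstArg] = path.get('arguments');
        if (
          callee.isIdentifier() &&
          ['getModuleDirectory', 'getModulePath'].includes(callee.node.name) &&
          firstArg?.isMetaProperty()
        ) {
          // replace the whole call with an empty string literal (or a real path if needed)
          path.replaceWith(t.stringLiteral(''));
        }
      },
    },
  };
};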

E2E tests setup for CLI

The cli project's tests currently rely on builds of other projects, because they test whether config files with plugin imports can be loaded. This also relies on running npm install after the build to create symlinks for NPM workspaces.

Testing this part is valuable, but it would be better to move it to an E2E test suite, so that unit tests can be build-independent.

  • move config loading tests under e2e target
  • make CLI unit tests under test target independent of build
  • use separate cli-e2e project

Nx plugin setup

Motivation

As all the good stuff will come from community plugins, it pays off to give their authors the best DX possible.
Next to good docs, examples, etc., I see an Nx plugin as very useful, as it can support us in the following areas:

  • scaffolding
    • setup
    • unit testing
    • e2e testing
  • migration
  • publishing
  • integration
  • documentation
  • we could leverage the power of the Nx CLI

This should make it as easy as possible to start a custom plugin project.

The plugin can also help us with promotion, as:

  • we can generate branded README files etc. with the generator commands
  • we can publish it under Nx community plugins

Todos

  • Basic setup for Nx plugin package
  • Executor command collect to
    • execute the CLI
    • maintain configurations partially in project.json
  • Generator command init to
    • set up Code PushUp in a specific project
  • Generator command plugin to create a custom plugin, including:
    • a project with esbuild & vitest setup for the plugin logic
    • a project with vitest setup for the plugin E2E tests

nx plugin - `configuration` generator for project target

The nx plugin should be able to add a project target to the given project.

It should:

  • set up code-pushup.config.js in a specific project
  • add a target to the project.json with the Code PushUp executor
  • keep configurations partially in project.json - moved to the executor
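
A sketch of how the target could be added with @nx/devkit (the executor name is hypothetical):

import {
  Tree,
  readProjectConfiguration,
  updateProjectConfiguration,
} from '@nx/devkit';

export function addCodePushupTarget(tree: Tree, projectName: string) {
  const project = readProjectConfiguration(tree, projectName);
  updateProjectConfiguration(tree, projectName, {
    ...project,
    targets: {
      ...project.targets,
      'code-pushup': {
        // hypothetical executor name
        executor: '@code-pushup/nx-plugin:autorun',
        options: { config: `${project.root}/code-pushup.config.js` },
      },
    },
  });
}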

Move core logic into new "core" package

Proposal

  • all our core logic that's not Yargs-related lives in utils - too much?
  • utils could be only for more generic helper functions, e.g. executeProcess or Markdown formatters
  • core logic like collect and executePlugin could live in core package

Motivation

  • core package should be for programmatic usage of our core logic (e.g. used by our GitHub Action)
  • cleaner separation of core and plugin layers - plugin-* packages can import from utils or models, but not from core
  • the cli project might not then need multiple entry points (only bin)
  • having separate cli and core is quite common in other CLI-first projects (e.g. jest or graphql-code-generator)

Nx graph


Implement helper in utils near `execute-process` to parse an object to command line arguments

Motivation

Plugin authors can already use our executeProcess function in their code to write plugins faster.
The helper objectToCliArgs creates an array of command line arguments from an object, following the common conventions, e.g. --param, --no-param, etc.

This helps library authors execute their code and avoids errors. The function should be usable separately from executeProcess for more flexibility.
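
A sketch of what the helper itself could look like (the real implementation may handle more cases):

type ArgValue = string | number | boolean | string[];

export function objectToCliArgs(params: Record<string, ArgValue>): string[] {
  return Object.entries(params).flatMap(([key, value]) => {
    // positional arguments, e.g. { _: 'bin.js' } -> ['bin.js']
    if (key === '_') {
      return Array.isArray(value) ? value.map(String) : [String(value)];
    }
    // booleans follow the --flag / --no-flag convention
    if (typeof value === 'boolean') {
      return [value ? `--${key}` : `--no-${key}`];
    }
    // repeatable options, e.g. { format: ['json', 'md'] } -> ['--format=json', '--format=md']
    if (Array.isArray(value)) {
      return value.map(v => `--${key}=${v}`);
    }
    return [`--${key}=${value}`];
  });
}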

Usage in a custom plugin

In custom plugins the function is most useful for runner arguments:

export async function myPlugin(config?: NxValidatorsPluginConfig) {
  const { outputPath } = config || {};
  return {
    // ...
    runner: {
      command: 'node',
      args: objectToCliArgs({ _: 'bin.js', interactive: false }),
      outputPath: 'out.json'
    },
    // ...
  };
}

Todos

  • implement the logic
  • implement tests
  • document it with JSDoc
  • export it as public API

Automate commit hooks

Commit message conventions could be important for releases.

Possible solutions:

What to include:

  • commit-msg hook
    • Commitlint
      • scopes are derived from Nx projects
      • conventional commits
  • pre-commit hook
    • formatting with Prettier
    • lint stays in CI
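
A sketch of the commit-message part (assuming Husky; @commitlint/config-nx-scopes derives the allowed scopes from Nx project names):

// commitlint.config.js
module.exports = {
  extends: ['@commitlint/config-conventional', '@commitlint/config-nx-scopes'],
};

// .husky/commit-msg would then run:
//   npx --no-install commitlint --edit "$1"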

Implement ESLint runner

Implement runner

  • runs ESLint using provided config
  • transforms output to conform to plugin runner output models
  • (nice to have) if affected files are provided via an environment variable (by core), restrict ESLint to those files only
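
A rough sketch of such a runner using the ESLint Node.js API (the output shape and the AFFECTED_FILES variable are illustrative):

import { ESLint } from 'eslint';

type AuditOutput = { slug: string; value: number; score: number };

export async function runEslint(
  configFile: string,
  patterns: string[] = ['**/*.ts'],
): Promise<AuditOutput[]> {
  const eslint = new ESLint({ overrideConfigFile: configFile });
  // nice to have: restrict linting to affected files passed in by core
  const files = process.env['AFFECTED_FILES']?.split(',') ?? patterns;
  const results = await eslint.lintFiles(files);

  // count findings per rule and map each rule to one audit output
  const counts = new Map<string, number>();
  for (const result of results) {
    for (const { ruleId } of result.messages) {
      if (ruleId) {
        counts.set(ruleId, (counts.get(ruleId) ?? 0) + 1);
      }
    }
  }
  return [...counts.entries()].map(([ruleId, value]) => ({
    slug: ruleId.replace(/\//g, '-'),
    value,
    score: value === 0 ? 1 : 0,
  }));
}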

nx plugin - `autorun` executor for the code pushup CLI

User story

As a user of Nx I want to have convenient configuration of my code-pushup related target.
A custom executor for the code-pushup CLI would reduce configuration and setup cost.

Acceptance criteria

Dual Build

Problems regarding the dual build:

  • @poppinss/cliui has not supported CJS for some time.
    A possible workaround for @poppinss/cliui would be a dynamic import (see the sketch below).
  • other non-CJS-compatible runtime dependencies are lighthouse and lighthouse-logger
  • it would be a big refactoring, as every function that touches one of the incompatible tools would have to become async
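
A sketch of that dynamic-import workaround (the export shape of @poppinss/cliui differs between versions, so this is illustrative only):

// load the ESM-only dependency lazily from CJS code
export async function loadCliUi() {
  // note: when compiling to CJS, the bundler/tsconfig must keep this as a real
  // dynamic import instead of rewriting it to require()
  return import('@poppinss/cliui');
}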

Conclusion: use the CLI in a separate process instead and duplicate some of the logic.

Implementation details

https://github.com/code-pushup/cli/tree/add-executor-to-nx-plugin

Format duration to an integer

At the moment duration is a floating-point number of microseconds, e.g. 0.1373330056667328.
Formatting it as an integer of milliseconds will make the information more readable and meaningful.
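
For example (trivial sketch):

// round the measured duration before it is written to the report
export const formatDuration = (duration: number): number => Math.round(duration);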

Reduce unit test flakiness - part 2

  • remove NPM workspaces to make tests independent of builds
  • use await expect(() => ...).rejects.toThrowError() instead of error spies with .catch (improves test failure messages)
  • use node -e "require('fs').writeFileSync(..., ...)" instead of bash -c echo ... > ... (platform independence)
  • use memfs for testing persist logic (see persist.spec.ts) with different formats (instead of real file-system) - see vitest-memfs-poc branch
  • mock process.exit in Yargs tests
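
Sketches of the last two points (loadConfig is a placeholder for whatever async function is under test):

import { describe, expect, it, vi } from 'vitest';

// placeholder for the function under test
declare function loadConfig(path: string): Promise<unknown>;

describe('flakiness fixes', () => {
  it('asserts rejections without error spies', async () => {
    await expect(() => loadConfig('missing.config.ts')).rejects.toThrowError(/not found/);
  });

  it('mocks process.exit in Yargs tests', () => {
    const exitSpy = vi.spyOn(process, 'exit').mockImplementation(() => undefined as never);
    // ... run the CLI with invalid arguments ...
    expect(exitSpy).toHaveBeenCalled();
  });
});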

Refactor: Platform-independent plugin runner


Refactor: Testing errors


Investigate more readable error messages for Zod

Current messages are hard to read:

Example of the issue:

Error message:

ZodError: [
  {
    "code": "invalid_type",
    "expected": "number",
    "received": "nan",
    "path": [
      "parallel"
    ],
    "message": "Expected number, received nan"
  }
]

We will try to find a better solution for this.

Solution

import { generateErrorMessage, ErrorMessageOptions } from 'zod-error';
import { z } from 'zod';

enum Color {
  Red = 'Red',
  Blue = 'Blue',
}

const options: ErrorMessageOptions = {
  delimiter: {
    error: ' 🔥 ',
  },
  transform: ({ errorMessage, index }) => `Error #${index + 1}: ${errorMessage}`,
};

const schema = z.object({
  color: z.nativeEnum(Color),
  shape: z.string(),
  size: z.number().gt(0),
});

const data = {
  color: 'Green',
  size: -1,
};

const result = schema.safeParse(data);
if (!result.success) {
  const errorMessage = generateErrorMessage(result.error.issues, options);
  throw new Error(errorMessage);
}
