
k6-learn's Issues

Suggestion to add k6 challenges

Apart from the great content and quizzes, I would love to see some k6 challenges along the way, with concrete tasks to achieve. E.g. Propose concrete real-life scenarios and requirements to load test those (prepare your e-commerce site for Black Friday using http://ecommerce.k6.io/, etc.), where we can put the knowledge into practice in a bit less guided way. We could publish a proposed solution, while allowing folks to apply the knowledge on their own.
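
To make the idea a bit more concrete, here's a rough sketch of what a challenge's starter script might look like. The requirement, stages, and threshold are all invented for illustration; only the ecommerce.k6.io URL comes from the suggestion above:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// Hypothetical "Black Friday" challenge: e.g. "survive a ramp to 200
// concurrent shoppers while keeping p95 response time under 1 second".
export const options = {
  scenarios: {
    black_friday: {
      executor: 'ramping-vus',
      startVUs: 0,
      stages: [
        { duration: '2m', target: 200 }, // doors open
        { duration: '5m', target: 200 }, // peak shopping
        { duration: '1m', target: 0 },   // wind down
      ],
    },
  },
  thresholds: {
    http_req_duration: ['p(95)<1000'],
  },
};

export default function () {
  const res = http.get('http://ecommerce.k6.io/');
  check(res, { 'home page loaded': (r) => r.status === 200 });
  sleep(1);
}
```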

Move workshop to its own directory

There are a lot of files to navigate in the modules dir. Separating them should make the content easier to navigate, especially since there's no clear distinction between "workshop" and "non-workshop" content.

This should not be hard; we just need to make sure that we properly change the file paths in all the in-text anchors and images.

Clarify one comment about not using think time

The "Think time" doc says that users might not use want think time when:

Your load generator can run your test script without crossing the 80% CPU utilization mark.

Why is this the case? It's not so obvious to me how this pertains to the decision to use sleep.
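
For reference, this is roughly where think time sits in a script; a minimal sketch with a placeholder target, just to anchor the discussion of what removing sleep changes:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '1m',
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder target

  // Think time: each VU pauses to simulate a user reading the page.
  // Without this, every VU sends requests back-to-back, which raises the
  // iteration rate and, with it, the CPU the load generator has to spend.
  sleep(1);
}
```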

Why is the license Apache 2?

I don't think the Apache 2 is suitable for our use:

  • It's a code license and this is a media repo.
  • A copyleft license would be more in line with the other open source repos of this organization.

With these two things in mind, I propose CC BY-SA 4.0 (SA is "ShareAlike," similar in concept to copyleft). If I understand the license correctly, anyone could copy and remix the material here, as long as they gave credit and released the remix under the same license.

Right now, under Apache 2, I think derivative works would just need to attribute and indicate changes. If we'd rather have a CC license without this copyleft clause, we could use one of the non-ShareAlike variants.

Quiz about best way to define SLAs is potentially ambiguous

"Setting load profiles" has a quiz question that asks about "The best way to define baseline metrics". Is this question too vague? Won't different SLAs have different key metrics? For example, if I wanted define an SLA on latency, wouldn't it also make sense to use ramping-vus with thresholds?

But even for one metric, how confidently can we say that one executor is "best" for it?

We're looking to establish some baseline metrics to define an SLA for an existing service, what is the quickest way to achieve this?
A: Use the shared-iterations executor to run through 1,000 requests.

B: Use the constant-arrival-rate to see how many virtual users it takes to maintain a constant 50 requests per second (RPS).

C: Use the externally-controlled executor to start k6 in server mode to have your sweet Bash script ramp up virtual users.
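
To make the comparison concrete, here's a hedged sketch of what option B could look like, with the latency threshold from the ramping-vus alternative added; the target URL, rate, and numbers are all invented:

```javascript
import http from 'k6/http';

export const options = {
  scenarios: {
    baseline: {
      executor: 'constant-arrival-rate',
      rate: 50,            // aim for 50 iterations (≈ requests) per second
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 20, // k6 adds VUs as needed to sustain the rate
      maxVUs: 100,
    },
  },
  // A latency SLA candidate could just as well be expressed as a threshold.
  thresholds: {
    http_req_duration: ['p(95)<500'],
  },
};

export default function () {
  http.get('https://example.com/'); // placeholder endpoint
}
```

Both this and a ramping-vus run with the same threshold would produce baseline numbers, which is part of why it's hard to call any single executor "best".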

Preface each executor with an example use case

Right now, the executor exercises only describe what each executor does, but they don't describe the context in which you'd use it.

For example, I'm still unclear about when a user would prefer to configure VUs vs arrival rate. To me, arrival rate doesn't seem to have any real value other than that people are used to configuring RPS. I am probably wrong about this, because I don't know day-to-day load testing operations and I lack imagination.

It'd be great if the exercises were framed in a more concrete context.

For what it's worth, the docs aren't any better on this point. For example:

Use when
Use this executor if you need a specific amount of VUs to complete the same amount of iterations. This can be useful when you have fixed sets of test data that you want to partition between VUs.

What I'm thinking when I read that is: "OK. And when should I do that?"

This issue ties in well with grafana/k6-docs#808.
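
As one stab at the kind of contextualized example this issue asks for, here's a hedged sketch of the "fixed test data partitioned between VUs" case that the quoted doc mentions; the data file, its shape, and the endpoint are all invented:

```javascript
import http from 'k6/http';
import { SharedArray } from 'k6/data';

// Hypothetical fixed data set: one record per piece of work to do.
const users = new SharedArray('users', () => JSON.parse(open('./users.json')));

export const options = {
  scenarios: {
    partitioned: {
      executor: 'per-vu-iterations',
      vus: 10,         // each VU works through its own slice of the data
      iterations: 20,  // each VU runs exactly 20 iterations
      maxDuration: '10m',
    },
  },
};

export default function () {
  // __VU and __ITER let each VU/iteration pick a distinct record, so the
  // fixed data set is partitioned between VUs rather than reused at random.
  const user = users[((__VU - 1) * 20 + __ITER) % users.length];
  http.post('https://example.com/login', {
    username: user.username,
    password: user.password,
  });
}
```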

Create a contributor's guide

We should document how to contribute to this repo.

  • Create an issue if one doesn't exist, fork the repo, and open a PR with the contribution. Include any best practices we want to recommend, etc. We can probably keep it short initially.
  • Users must sign a CLA (not yet in place; it will be part of the automations).

Separate Front and backend testing from the intro

The intro of the workshop has great overview content about the what and why of performance testing. It also has a short overview of the differences between frontend and backend testing, with their respective advantages and disadvantages. This content is good too, but I think it should be separated into its own topic. It doesn't fit as well with the other high-level sections, which are more aimed at the question "why are we doing this?"

Is there any difference between a shakeout test and a smoke test?

The "Load testing" article uses the phrase "Shakeout test". This is fine, but the k6 docs use the phrase "Smoke test" for what seems like a similar concept. I also have seen pipe-cleaner test to mean something similar.

To maintain semantic coherence, we should probably align the workshop's terms with the docs' terms. Maybe we could retitle that section "Smoke test", or, if "shakeout test" really is the better term, think about changing the docs instead.

Of course, we can put as many alternative terms as we want in a non-heading sentence, e.g. "A shakeout test, also called a smoke test or pipe-cleaner test, is a ..."
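
Whatever we end up calling it, the script itself is the same either way: a minimal sketch with a placeholder URL, run at trivial load just to verify that both the script and the system under test work at all:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 1,         // a single user is enough to shake out script errors
  iterations: 5,  // just a handful of iterations
};

export default function () {
  const res = http.get('https://example.com/'); // placeholder endpoint
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
}
```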

Add introductory section "Why k6"

At the broadest level, all load-testing tools do more or less the same thing: they generate concurrent load and present metrics. Of course, the implementations differ substantially in their details.

Perhaps the workshops could make a factual, non-market-y case for why k6 is a good choice for load testing. It may help people see the "so what" behind the workshop. What's more, this exercise could perhaps carry over to other parts of the docs.
