grafana / k6-learn
License: GNU Affero General Public License v3.0
Apart from the great content and quizzes, I would love to see some k6 challenges along the way, with concrete tasks to achieve. For example: propose concrete real-life scenarios and requirements to load test them (prepare your e-commerce site for Black Friday using http://ecommerce.k6.io/, etc.), where we can put the knowledge into practice in a less guided way. We could publish a proposed solution, while allowing folks to apply the knowledge on their own.
There are a lot of files to navigate in the `modules` dir. Separating them should make navigation easier, especially since there's no clear distinction between "workshop" and "non-workshop" content.
This should not be hard; we just need to make sure that we update the file paths in all the in-text anchors and images.
The "Think time" doc says that users might not want to use think time when:
Your load generator can run your test script without crossing the 80% CPU utilization mark.
Why is this the case? It's not so obvious to me how this pertains to the decision to use `sleep`.
I don't think the Apache 2 license is suitable for our use:
With these two things in mind, I propose CC BY-SA 4.0 (SA is "sharealike," similar in concept to copyleft). If I understand the license well, anyone could copy and remix the material here, as long as they gave credit and released the remix under the same license.
Right now, under Apache 2, I think derivative works would just need to attribute and indicate changes. If we don't want the copyleft clause, we could use one of the non-ShareAlike CC licenses instead.
The "Checks" topic links to https://github.com/grafana/k6-learn/blob/main/Modules/Clarifying%20testing%20criteria.md
But this page isn't mentioned in the main ToC of the workshop/course. I think this would be good to highlight. The big-picture topics are probably what k6 docs most lack.
"Setting load profiles" has a quiz question that asks about "The best way to define baseline metrics". Is this question too vague? Won't different SLAs have different key metrics? For example, if I wanted to define an SLA on latency, wouldn't it also make sense to use ramping-vus with thresholds?
But even for one metric, how confidently can we say that one executor is "best" for it?
We're looking to establish some baseline metrics to define an SLA for an existing service. What is the quickest way to achieve this?

A: Use the `shared-iterations` executor to run through 1,000 requests.
B: Use the `constant-arrival-rate` executor to see how many virtual users it takes to maintain a constant 50 requests per second (RPS).
C: Use the `externally-controlled` executor to start k6 in server mode to have your sweet Bash script ramp up virtual users.
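For concreteness, option A would map to a scenario config roughly like this (the VU count is my assumption; the question doesn't give one):

```javascript
// Hypothetical k6 options for option A: spread 1,000 iterations across
// a pool of VUs, then read the resulting http_req_duration percentiles
// from the end-of-test summary as the baseline.
export const options = {
  scenarios: {
    baseline: {
      executor: 'shared-iterations',
      vus: 10,          // assumed; the quiz doesn't specify a VU count
      iterations: 1000, // "run through 1,000 requests"
    },
  },
};
```

One nit: an iteration isn't necessarily one request, so "1,000 requests" only holds if the default function makes exactly one.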
Right now, the executor exercises only describe what the function does, but they don't describe context.
For example, I'm still unclear about when a user would prefer to configure VUs vs arrival rate. To me, arrival rate doesn't seem to have any real value other than that people are used to configuring RPS. I am probably wrong about this, because I don't know day-to-day load testing operations and I lack imagination.
It'd be great if the exercises were framed in a more concrete context.
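To make my own question concrete, here's the same nominal load expressed both ways. This is a sketch with made-up numbers, and the two scenarios are staggered in one options object purely for comparison:

```javascript
export const options = {
  scenarios: {
    // Closed model: 50 VUs loop as fast as responses come back, so
    // throughput drops when the system under test slows down.
    closed_model: {
      executor: 'constant-vus',
      vus: 50,
      duration: '5m',
    },
    // Open model: k6 keeps starting 50 iterations per second no matter
    // how slow responses get, drawing VUs from a pre-allocated pool,
    // which is closer to how real users arrive at a public service.
    open_model: {
      executor: 'constant-arrival-rate',
      rate: 50,
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 100, // pool k6 can draw from to hold the rate
      startTime: '5m',      // run after the closed-model scenario
    },
  },
};
```

If that's right, the arrival-rate executors matter precisely when you want load that doesn't back off as the system degrades, and that seems like exactly the kind of context the exercises could spell out.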
For what it's worth, the docs aren't any better on this point. For example:
> **Use when**
>
> Use this executor if you need a specific amount of VUs to complete the same amount of iterations. This can be useful when you have fixed sets of test data that you want to partition between VUs.
My reaction to that is: "OK. And when should I do that?"
This issue ties in well with grafana/k6-docs#808.
We should document how to contribute to this repo.
The intro of the workshop has great overview content about the whats and whys of performance testing. It also has a little overview of the differences between frontend and backend testing, with their advantages and disadvantages. This content is good too, but I think it should be separated into its own topic. It doesn't fit as well with the other high-level sections, which are more aimed at the question "why are we doing this?"
The "Load testing" article uses the phrase "shakeout test". This is fine, but the k6 docs use the phrase "smoke test" for what seems like a similar concept. I've also seen "pipe-cleaner test" used to mean something similar.
To maintain semantic coherence, we should probably align workshop terms with docs terms. Maybe we could retitle that section "Smoke test", or, if "shakeout test" is so much better, think about changing the docs.
Of course, we can put as many alternative terms as we want in a non-heading sentence, e.g. "A shakeout test, also called a smoke test or pipe-cleaner test, is a ..."
Related to grafana/k6-docs#1424
At the broadest level, all load-testing tools do more or less the same thing: they generate concurrent load and present metrics. Of course, the implementation details differ substantially.
Perhaps the workshops could make a factual, non-market-y case for why k6 is a good choice for load testing. It may help people see the "so what" behind the workshop. What's more, this exercise could perhaps carry over to other parts of the docs.