Comments (4)
Interesting: if the input data differs from run to run, the neoverse-v1 code is about 30% faster on average than the armv8-a+sve code. With some data sets the armv8-a+sve code takes 4x longer than the neoverse-v1 code.
from bench2.
Thank you for pointing it out. This is my mistake. The following patch makes the compilation succeed, but I don't have access to an Arm machine at the moment, so I can't update the benchmark results.
diff --git a/lib.c b/lib.c
index 19aa7ec..f7aff18 100644
--- a/lib.c
+++ b/lib.c
@@ -29,10 +29,10 @@ float v_sparse_dot_omp(uint32_t *lhs_idx, uint32_t *rhs_idx, float *lhs_val,
   size_t lhs_pos = 0, rhs_pos = 0, lhs_loop_len = lhs_len / chunk * chunk,
          rhs_loop_len = rhs_len / chunk * chunk;
   float buff_l[chunk], buff_r[chunk], xy = 0;
-#pragma omp parallel for reduction(+ : xy)
+#pragma omp parallel reduction(+ : xy)
   while (lhs_pos < lhs_loop_len && rhs_pos < rhs_loop_len) {
     uint8_t m1 = 0, m2 = 0;
-#pragma omp reduction(| : m1) reduction(| : m2)
+#pragma omp simd reduction(| : m1) reduction(| : m2)
     for (size_t i = 0; i < chunk; i++) {
       for (size_t j = 0; j < chunk; j++) {
         uint8_t res = lhs_idx[lhs_pos + i] == rhs_idx[rhs_pos + j];
from bench2.
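For anyone who wants to try the patched directives without the full repository, below is a minimal, self-contained sketch of the chunked comparison loop the diff above touches. The diff only shows a fragment of v_sparse_dot_omp, so everything except the "#pragma omp simd reduction" line is an assumption made for illustration: the mask usage, the chunk-advance logic and the scalar tail are simplified stand-ins for the real buff_l/buff_r handling, and the outer "#pragma omp parallel" from the patch is left out.

/* sketch.c - illustrative only, NOT the project's lib.c.
 * Build with the pragma honoured: clang -Wall -O3 -fopenmp-simd sketch.c
 * Build with the pragma ignored:  clang -Wall -O3 sketch.c             */
#include <stdint.h>
#include <stdio.h>

enum { chunk = 4 }; /* assumption: the real code defines chunk elsewhere */

/* Dot product of two sparse vectors given as sorted, unique index arrays
 * plus parallel value arrays. */
static float sparse_dot_sketch(const uint32_t *lhs_idx, const float *lhs_val,
                               size_t lhs_len, const uint32_t *rhs_idx,
                               const float *rhs_val, size_t rhs_len) {
  size_t lhs_pos = 0, rhs_pos = 0;
  size_t lhs_loop_len = lhs_len / chunk * chunk;
  size_t rhs_loop_len = rhs_len / chunk * chunk;
  float xy = 0.0f;

  while (lhs_pos < lhs_loop_len && rhs_pos < rhs_loop_len) {
    uint8_t m1 = 0, m2 = 0;
    /* The patched directive: vectorise the all-pairs index comparison and
     * OR-reduce the per-lane match masks. */
#pragma omp simd reduction(| : m1) reduction(| : m2)
    for (size_t i = 0; i < chunk; i++) {
      for (size_t j = 0; j < chunk; j++) {
        uint8_t res = lhs_idx[lhs_pos + i] == rhs_idx[rhs_pos + j];
        m1 |= (uint8_t)(res << i); /* which lhs lanes matched */
        m2 |= (uint8_t)(res << j); /* which rhs lanes matched */
      }
    }
    /* Assumption: the original gathers matches into buff_l/buff_r; here the
     * masks simply gate a scalar pass over chunks that matched at all. */
    if (m1 && m2) {
      for (size_t i = 0; i < chunk; i++)
        for (size_t j = 0; j < chunk; j++)
          if (lhs_idx[lhs_pos + i] == rhs_idx[rhs_pos + j])
            xy += lhs_val[lhs_pos + i] * rhs_val[rhs_pos + j];
    }
    /* Assumed advance logic: drop the chunk whose largest index is smaller;
     * when the maxima are equal, both chunks can be dropped. */
    uint32_t lhs_max = lhs_idx[lhs_pos + chunk - 1];
    uint32_t rhs_max = rhs_idx[rhs_pos + chunk - 1];
    if (lhs_max <= rhs_max)
      lhs_pos += chunk;
    if (rhs_max <= lhs_max)
      rhs_pos += chunk;
  }

  /* Scalar tail for whatever the chunked loop did not cover. */
  while (lhs_pos < lhs_len && rhs_pos < rhs_len) {
    if (lhs_idx[lhs_pos] == rhs_idx[rhs_pos])
      xy += lhs_val[lhs_pos++] * rhs_val[rhs_pos++];
    else if (lhs_idx[lhs_pos] < rhs_idx[rhs_pos])
      lhs_pos++;
    else
      rhs_pos++;
  }
  return xy;
}

int main(void) {
  uint32_t li[] = {1, 4, 7, 9}, ri[] = {2, 4, 6, 9};
  float lv[] = {1, 2, 3, 4}, rv[] = {10, 20, 30, 40};
  /* Matches at indices 4 and 9: 2*20 + 4*40 = 200 */
  printf("%f\n", sparse_dot_sketch(li, lv, 4, ri, rv, 4));
  return 0;
}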
I focused on the sparse_dot function. GCC outputs the same code, save for one instruction scheduled in a different slot.
Clang, on the other hand, gets creative. If I take the lib.s created by clang with -mcpu=neoverse-v1
and feed it to GCC, it produces excellent results: 500ns for v_sparse_dot. If I feed the same lib.s to clang-15, I get 1700ns. Something on the IR-to-binary path is off. I'll get the latest clang to make sure the issue is still there and try again. If so, I'll create an issue on the LLVM project page.
from bench2.
Updated results with appropriate input test data:
clang-18 -Wall -O3 -mcpu=neoverse-v1 neo-test.c neo-lib.c
./a.out
changing matrix data for all iterations
1662ns runtime
./a.out 1
keeping matrix data for all iterations
1581ns runtime
versus
clang-18 -Wall -O3 -march=armv8.4-a+sve neo-test.c neo-lib.c
./a.out 1
keeping matrix data for all iterations
897ns runtime
./a.out
changing matrix data for all iterations
2550ns runtime
Conclusion: The observed behavior is due to branch prediction. In a real use case, where the matrix data changes from call to call, clang produces better code with -mcpu=neoverse-v1 than with -march=armv8.4-a+sve, as one would expect.
from bench2.
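To make the branch-prediction explanation concrete, here is a hypothetical harness in the spirit of the test described above. It is NOT the project's neo-test.c; the sizes, the scalar dot-product stand-in and the timing code are all assumptions. When the indices are regenerated every iteration, the data-dependent branches in the comparison follow a different pattern each time and the predictor cannot learn them; when the data is kept fixed, the same branch history repeats. Note that in the "changing" mode the cost of regenerating the data is also inside the timed loop.

/* harness.c - hypothetical benchmark driver, illustrative only.
 * "./a.out"   regenerates the sparse indices every iteration (changing data)
 * "./a.out 1" keeps the same indices for all iterations (keeping data)     */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LEN 128
#define ITERS 100000

/* Scalar stand-in for the benchmarked sparse dot-product routine. */
static float sparse_dot(const uint32_t *li, const float *lv, size_t ln,
                        const uint32_t *ri, const float *rv, size_t rn) {
  size_t i = 0, j = 0;
  float xy = 0.0f;
  while (i < ln && j < rn) {
    if (li[i] == ri[j])        /* data-dependent branch: with fresh random   */
      xy += lv[i++] * rv[j++]; /* indices each iteration the predictor       */
    else if (li[i] < ri[j])    /* cannot learn this pattern                  */
      i++;
    else
      j++;
  }
  return xy;
}

/* Fill idx with LEN strictly increasing pseudo-random indices. */
static void make_indices(uint32_t *idx) {
  uint32_t v = 0;
  for (size_t k = 0; k < LEN; k++) {
    v += 1 + (uint32_t)(rand() % 8);
    idx[k] = v;
  }
}

int main(int argc, char **argv) {
  int keep_data = argc > 1; /* "./a.out 1" -> keep matrix data */
  static uint32_t li[LEN], ri[LEN];
  static float lv[LEN], rv[LEN];
  for (size_t k = 0; k < LEN; k++) { lv[k] = 1.0f; rv[k] = 2.0f; }
  make_indices(li);
  make_indices(ri);

  float sink = 0.0f;
  struct timespec t0, t1;
  clock_gettime(CLOCK_MONOTONIC, &t0);
  for (int it = 0; it < ITERS; it++) {
    if (!keep_data) { /* changing matrix data for all iterations */
      make_indices(li);
      make_indices(ri);
    }
    sink += sparse_dot(li, lv, LEN, ri, rv, LEN);
  }
  clock_gettime(CLOCK_MONOTONIC, &t1);

  double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
  printf("%s matrix data for all iterations\n",
         keep_data ? "keeping" : "changing");
  printf("%.0fns runtime per call (sink=%f)\n", ns / ITERS, sink);
  return 0;
}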