Comments (14)
I don't mind changing the existing format for markdown; it was always intended to be aimed at GitHub really.
from package-benchmark.
That sounds good, I don't think the original TextTable is maintained - no updates for a long time, but I don't think you need to touch TextTable - it has a customisation hook:
Custom Styles

You can create a custom table style by conforming to the `TextTableStyle` protocol.

```swift
enum MyCustomStyle: TextTableStyle {
    // implement methods from TextTableStyle...
}

extension Style {
    static let custom = MyCustomStyle.self
}

table.print(data, style: Style.custom)
```
So if you do a custom style that is GitHub markdown and make a PR that utilises that new style in the markdown pretty-printing output mode, I think things should be good? The ASCII/markdown printout paths are the same with some conditional printing for markdown really, so it should just be a couple of conditionals in there to use the new style.
from package-benchmark.
No nice testing strategy for print output formats I'm afraid, just manual testing really. But in this case, I think the change should be as simple as changing `Style.fancy` to be conditionalized and use `.PipeFormat` if we are in markdown output mode. It just needs testing...
from package-benchmark.
Yeah, that should do it (but I believe there are a few places that need to be fixed in the same file, for delta vs. normal output etc.).
The easiest way to test manually is to run from the command line, copy-paste the output into a markdown file in a dummy GitHub repo, and preview it there.
from package-benchmark.
In general I do like unit testing too, but maybe put that on pause for this specific code for the moment - no changes are really planned. During the remainder of this I'd expect primarily bug fixes and possibly new metrics, then likely a move to Swift 6 and a migration of tests to the new testing infrastructure that is in the pipeline. That might be a good time to revisit testing of the text output, and I'd be happy to have some help with that when we get there.
from package-benchmark.
@hassila Sounds good. I think I could have the small diff with manual testing in place early next week. Thanks!
from package-benchmark.
Done in #247, thanks!
from package-benchmark.
https://github.com/ordo-one/TextTable/tree/master?tab=readme-ov-file#fancygrid
It looks like we are currently using `FancyGrid`? But I don't see a GitHub-markdown-style table option.
https://github.com/ordo-one/TextTable/tree/master?tab=readme-ov-file#simple
It looks like the `simple` option does not render directly in GitHub as a table.
https://github.com/ordo-one/TextTable/tree/master?tab=readme-ov-file#pipe
I believe `pipe` might give us what we are looking for. GitHub seems to be formatting this correctly:

```
Name | Age | Birthday |
---|---|---|
Alice | 42 | 8/13/16 |
Bob | 22 | 8/13/16 |
Eve | 142 | 8/13/16 |
```
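For reference, the pipe layout above can be reproduced in a few lines of plain Swift. This is a hedged sketch independent of TextTable; `renderPipeTable` is a hypothetical helper, not part of either library:

```swift
// Minimal sketch of the pipe table layout that GitHub renders natively.
// `renderPipeTable` is a hypothetical helper, not TextTable API.
func renderPipeTable(header: [String], rows: [[String]]) -> String {
    var lines: [String] = []
    // Header row: cells joined by " | ", trailing pipe as in the sample above.
    lines.append(header.joined(separator: " | ") + " |")
    // Delimiter row: one "---" per column, which GitHub requires for tables.
    lines.append(Array(repeating: "---", count: header.count).joined(separator: "|") + "|")
    for row in rows {
        lines.append(row.joined(separator: " | ") + " |")
    }
    return lines.joined(separator: "\n")
}

let table = renderPipeTable(
    header: ["Name", "Age", "Birthday"],
    rows: [["Alice", "42", "8/13/16"],
           ["Bob", "22", "8/13/16"],
           ["Eve", "142", "8/13/16"]])
print(table)
```

The delimiter row under the header is what makes GitHub treat the block as a table; the `fancy` and `simple` styles lack it, which is why they render as plain text.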
from package-benchmark.
https://github.com/astanin/python-tabulate/blob/v0.9.0/tabulate/__init__.py#L472-L491
It looks like the `github` style from `tabulate` is almost identical to the `pipe` style.
https://github.com/ordo-one/TextTable/blob/master/Sources/TextTable/PipeFormat.swift#L33-L38
I'm guessing we can hack this code in the `TextTable` fork to remove those extra colons?
from package-benchmark.
Maybe adding a new `--format github` option would be more appropriate than making too many changes to the existing `--format markdown` option. Any thoughts about that?
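If a separate option were added, it could be as small as a new case on the output-format enum. This is a sketch with hypothetical names; the actual enum in package-benchmark is likely named differently and has more cases:

```swift
// Hypothetical output-format enum for illustration only; not the real
// type from package-benchmark.
enum OutputFormat: String, CaseIterable {
    case text
    case markdown
    case github   // new: emit GitHub-flavored pipe tables

    // Pick a table style name per format (illustrative mapping only).
    var tableStyle: String {
        switch self {
        case .text:               return "fancy"
        case .markdown, .github:  return "pipe"
        }
    }
}

let parsed = OutputFormat(rawValue: "github")
```

The open question is whether `github` should diverge from `markdown` at all, which the later comments in this thread address.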
from package-benchmark.
@hassila Do you know if anyone is already working on that? I can try! Do you know anything about the upstream status of the `TextTable` repo? Is it still being maintained?
I could try a diff on the `TextTable` fork and then a diff on the `Benchmark` repo. How does that sound?
from package-benchmark.
@hassila After taking a closer look… I think the existing `PipeFormat` is just the same as modern GitHub markdown. It's possible the `github` style in the original Python project is just an artifact of the legacy GitHub format from before alignment was added.
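The "extra colons" in question are GitHub's column-alignment markers in the delimiter row, which modern GitHub-flavored markdown understands; that is why `pipe` output with colons still renders fine. A small sketch of the convention (hypothetical helper, not TextTable API):

```swift
// GitHub-flavored markdown alignment markers for a delimiter-row cell.
// `:---` left-aligns, `---:` right-aligns, `:---:` centers, `---` is default.
enum ColumnAlignment { case left, right, center, unspecified }

func separatorCell(for alignment: ColumnAlignment, width: Int = 3) -> String {
    let dashes = String(repeating: "-", count: width)
    switch alignment {
    case .left:        return ":" + dashes
    case .right:       return dashes + ":"
    case .center:      return ":" + dashes + ":"
    case .unspecified: return dashes
    }
}
```

So stripping the colons would only change column alignment, not whether GitHub renders the table.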
What is your strategy for unit testing the whole print flow? How would I test a diff (other than manually launching benchmarks from the command line)?
from package-benchmark.
```diff
- table.print(scaledResults, style: Style.fancy)
+ table.print(scaledResults, style: format == .markdown ? Style.pipe : Style.fancy)
```

@hassila This should probably be good enough. Is there a specific benchmark already defined in the package that would work well for testing?
from package-benchmark.
> I believe there are a few places that need to be fixed in the same file for delta vs normal etc

@hassila Hmm… I think I could have a diff ready to migrate those `print` statements to the `pipe` style; that might just be a few LOC. For unit testing… I have some ideas about a refactoring that could make unit testing easier and practical. That one might be 50 to 100 LOC, but it could make it much easier to write automated tests against all this logic. What would your thoughts be on that? Has manual testing slowed you down on this logic before? Do you expect to iterate on and extend this logic this year? Would automated unit tests increase engineering velocity in a big way?
I'm a big fan of unit testing… but if we feel this code is stable overall and we don't expect many more changes this year, then maybe it's best to keep things as they are. But if you had been planning to make things testable, or wanted improved testability to move faster, I have some ideas for a refactoring. What are your thoughts?
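One low-cost refactoring along those lines is to thread a `TextOutputStream` through the printing code instead of calling `print` directly, so a test can capture the output in a `String` (which conforms to `TextOutputStream` in the standard library). A sketch under that assumption; `printResults` is a hypothetical stand-in for the real printing entry points in package-benchmark:

```swift
// Sketch: route table output through an injected TextOutputStream so
// tests can capture it instead of it going straight to stdout.
// `printResults` is a hypothetical stand-in, not the package's real API.
func printResults<Target: TextOutputStream>(
    _ rows: [[String]], to output: inout Target
) {
    for row in rows {
        // print(_:to:) writes to the injected stream, newline-terminated.
        print(row.joined(separator: " | ") + " |", to: &output)
    }
}

// In production this would wrap standard output; in a test, a String
// captures everything that would have been printed.
var captured = ""
printResults([["Alice", "42"], ["Bob", "22"]], to: &captured)
// `captured` now holds the rendered rows for assertions.
```

The production call sites keep printing as before, while tests assert on `captured` without spawning a benchmark run.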
from package-benchmark.