
pace's People

Contributors

github-actions[bot], renovate[bot], simonsan


pace's Issues

Introduce Real-Time Configuration Validation in `pace setup config`

Overview

While the pace craft setup command has significantly improved the onboarding experience, there's an opportunity to enhance it further by introducing real-time validation for user inputs. This feature aims to ensure that all configuration values provided by users during the setup process are validated on-the-fly, minimizing common configuration errors and streamlining the setup experience.

Objectives

  • Enhance User Experience: Provide immediate feedback on configuration inputs to help users correct errors in real-time.
  • Reduce Setup Time: Decrease the time users spend on setup by preventing configuration mistakes that require reconfiguration.
  • Improve Reliability: Increase the reliability of the pace setup process by ensuring only valid configurations are saved.

Proposed Enhancements

  1. Real-Time Input Validation

    • Implement input validation logic that checks user inputs as they are entered during the setup process.
    • Provide immediate feedback to users if the input is invalid, along with helpful error messages or suggestions for correction.
    • Use the `validator` crate.
  2. Extended Configuration Options

    • Expand the pace setup config command to include prompts for additional configuration values, enhancing customization options for users.
    • Ensure each new configuration option includes specific validation rules based on expected input formats, value ranges, or other criteria.
  3. Validation Framework Integration

    • Integrate or develop a validation framework capable of handling various types of inputs and validation rules.
    • Allow for easy extension of the validation framework to support future configuration options.
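
To make the validation idea concrete, here is a minimal sketch (the setting name and format are illustrative, not pace's actual configuration schema): a pure validation function keeps the rule unit-testable and can be plugged into an interactive prompt, e.g. via dialoguer's `Input::validate_with`, so invalid input is rejected before the setup wizard moves on.

```rust
/// Hypothetical validator for a time-of-day setting in "HH:MM" form.
/// Keeping the rule in a pure function makes it reusable by both the
/// interactive prompt and unit tests.
fn validate_time_of_day(input: &str) -> Result<(), String> {
    let parts: Vec<&str> = input.split(':').collect();
    if parts.len() != 2 {
        return Err(format!("expected HH:MM, got {input:?}"));
    }
    let hour: u8 = parts[0]
        .parse()
        .map_err(|_| format!("{:?} is not a number", parts[0]))?;
    let minute: u8 = parts[1]
        .parse()
        .map_err(|_| format!("{:?} is not a number", parts[1]))?;
    if hour > 23 || minute > 59 {
        return Err(format!("{input:?} is out of range"));
    }
    Ok(())
}
```

Wired into a prompt, this would look roughly like `Input::<String>::new().with_prompt("Reflection time").validate_with(|s: &String| validate_time_of_day(s)).interact_text()`, with dialoguer re-prompting on error.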

Request for Comments (RFC)

I invite all contributors, especially those with experience in CLI tools and user input validation, to provide their feedback and suggestions on:

  • Best practices for implementing real-time validation in command-line interfaces.
  • Ideas for additional configuration options that could benefit from real-time validation.
  • Recommendations for existing Rust crates or libraries that facilitate input validation and error handling.

Conclusion

By introducing real-time validation for configuration inputs in the pace setup config command, we aim to make the onboarding process even smoother and more user-friendly. This enhancement will not only save users time by preventing common mistakes but also strengthen the overall reliability of the pace configuration process.

Improve Testability and Increase Code Coverage for Pace

Overview

As Pace continues to grow and evolve, ensuring high-quality code becomes increasingly critical. This issue aims to address the current gaps in test coverage and testability of the Pace project and its libraries. By focusing on enhancing our testing infrastructure and practices, we can significantly reduce bugs, improve stability, and ensure that new features meet our standards for quality and reliability.

Objectives

  • Assess Current Testing Framework: Evaluate the existing testing setup and identify areas for improvement.
  • Increase Test Coverage: Identify critical areas of the codebase that are currently under-tested or not tested at all.
  • Improve Testability of the Code: Refactor code where necessary to make it more amenable to unit and integration testing.
  • Integrate Additional Testing Tools: Explore and integrate tools for coverage analysis, fuzz testing, and other advanced testing methodologies.
  • Establish Testing Best Practices: Develop guidelines for writing effective tests and ensuring high-quality code.

Action Items

  1. Codebase Audit for Test Coverage

    • Conduct a thorough audit of the current test suite to identify coverage gaps.
    • Create a list of components with insufficient tests.
  2. Refactor for Testability

    • Identify and refactor tightly coupled components that hinder testability.
  3. Expand Test Suite

    • Write additional unit tests to cover critical logic and functionality.
  4. Tooling Integration

    • Set up a code coverage tool (we use cargo-tarpaulin)
    • Evaluate and integrate additional testing tools as needed.
  5. Documentation and Guidelines

    • Document testing best practices and guidelines for contributors.
    • Include examples of effective tests and common patterns to avoid.
  6. Continuous Improvement

    • Establish a process for regularly reviewing and improving test coverage.
    • Encourage contributions focused on testing via GitHub issues and pull requests.
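
As one example of the style of tests the audit might call for (the function under test, `parse_category`, is hypothetical and only stands in for pace's real parsing logic): table-driven tests keep all cases visible in one place, which makes coverage gaps easy to spot during review.

```rust
/// Hypothetical helper: split "Category::Subcategory" into its parts.
fn parse_category(raw: &str) -> (String, Option<String>) {
    match raw.split_once("::") {
        Some((cat, sub)) => (cat.trim().to_string(), Some(sub.trim().to_string())),
        None => (raw.trim().to_string(), None),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    /// Table-driven test: each row is (input, expected category, expected sub).
    #[test]
    fn parses_categories() {
        let cases = [
            ("Work::Coding", "Work", Some("Coding")),
            ("Leisure", "Leisure", None),
            (" Work :: Review ", "Work", Some("Review")),
        ];
        for (input, cat, sub) in cases {
            let (got_cat, got_sub) = parse_category(input);
            assert_eq!(got_cat, cat, "category for {input:?}");
            assert_eq!(got_sub.as_deref(), sub, "subcategory for {input:?}");
        }
    }
}
```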

Request for Contributions

We welcome contributions from the community to help achieve these objectives. Whether it's writing tests, refactoring for better testability, or improving our testing infrastructure, every contribution counts. If you're interested in helping, please comment on this issue with the areas you'd like to work on, or propose new ideas for enhancing our testing practices.

Conclusion

Improving testability and increasing code coverage are essential steps toward maintaining the Pace project's quality and reliability. With a solid testing foundation, we can confidently continue to develop and expand Pace, knowing that we have the practices and infrastructure in place to ensure its stability and performance.

Long-Term Considerations Regarding TOML File Storage

Current situation

I initially implemented the storage model for the activity log on top of the TOML file format.
When we begin a new activity, we parse the log with all its existing entries, append the new activity to the in-memory vector, and then write the whole activity log back to the file.

When we end or update an activity, we do the same thing.

I think that has several disadvantages. For example, if an error occurs while writing the file back, it could be damaged and the activity log destroyed. Also, parsing will take longer and longer as the number of activities grows. I need to benchmark that; it could be negligible with a few thousand activities, and even reaching that many is unlikely, since users might archive their activity log monthly once the archival feature is implemented.

I could refactor the entire model to an event-based one, so the log file is truly append-only and we only ever write to the end of the file. But I'm not sure this makes sense at this point, because I want to implement storage in an SQLite database soonish, which would make the refactoring obsolete. I don't think we want an event-based model in the database, as it's much easier there to query a record and update it, or even batch-update records.

The reason I initially used TOML was so users can edit the log in their favourite text editor; I found that genuinely useful, as I edited activities that way a lot in bartib. I think this would become less useful if I reimplement storage so that only events are stored, because then it's no longer easy to determine what the actual status, duration, etc. of an activity really is. To determine that, we would need to parse all activities in a certain time frame and then merge the events, which is much more complicated.

Pros And Cons

Current TOML-Based Storage Model:

Pros:

  • Human-Readable: TOML files can be easily read and edited with a text editor, providing transparency and direct access to the data for users.
  • Ease of Implementation: Implementing storage using TOML is relatively straightforward and doesn't require additional dependencies or infrastructure.
  • Low Complexity: The current model is simple and easy to understand, making it suitable for small to medium-sized datasets.

Cons:

  • Risk of Data Corruption: Writing the entire activity log file each time an update occurs increases the risk of data corruption if there's an error during the write operation.
  • Performance Degradation: Parsing and writing the entire file can become slow and inefficient as the log grows larger, impacting overall application performance.
  • Limited Scalability: The current model may struggle to handle large datasets efficiently, especially as the number of activities increases over time.

Event-Based Append-Only Model:

Pros:

  • Improved Data Integrity: Moving to an append-only model reduces the risk of data corruption, since updates are only appended to the end of the file.
  • Better Performance: With no need to reparse the entire file, performance is improved, especially for large activity logs.
  • Scalability: The event-based model scales more effectively with growing datasets, as it doesn't suffer as much from the performance degradation associated with reparsing the entire file.

Cons:

  • Complexity: Implementing an event-based model introduces additional complexity compared to the current read-all-write-all TOML-based approach, requiring careful design and implementation.
  • Loss of Human-Readability: While the append-only model is more efficient, it sacrifices the human-readable nature of TOML files, making direct editing by users more challenging.
  • Data Retrieval Complexity: Retrieving and interpreting data from an append-only log may require more sophisticated parsing and processing logic, potentially complicating certain operations.
  • Difficulty in Database Migration: If the event log is implemented using a file-based format like TOML, it may not be directly compatible with a database-backed storage solution like SQLite. This can result in the event-based model becoming obsolete when transitioning to a database-driven architecture, requiring a rewrite or significant refactoring of the storage layer.
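
To make the data-integrity argument concrete, here is a minimal sketch of the append-only idea (the one-event-per-line format is illustrative, not pace's actual format): each state change is serialized as one line and appended, so an interrupted write can at worst truncate the last line, never corrupt entries already on disk.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

/// Append a single serialized event to the log file.
/// The format here is illustrative, not pace's.
fn append_event(log: &Path, event: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(log)?;
    // A single appended line: an interrupted write can only damage this
    // line, never the entries that were already on disk.
    writeln!(file, "{event}")?;
    // Flush to disk before reporting success.
    file.sync_all()
}
```

Usage would look like `append_event(Path::new("activities.log"), "2024-03-01T09:00:00Z begin coding")`, with begin/end/update each becoming their own event line.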

Direct Transition to SQLite:

Pros:

  • Data Integrity and Reliability: SQLite provides robust data storage capabilities, ensuring data integrity and reliability, even in the face of unexpected errors or interruptions.
  • Efficient Queries: SQLite's query capabilities enable efficient retrieval and manipulation of data, supporting complex queries and analysis.
  • Scalability: SQLite can handle large datasets efficiently, making it suitable for applications with growing storage requirements.

Cons:

  • Dependency and Infrastructure: SQLite introduces a dependency on an external library and requires managing database connections and transactions, adding complexity to the application.
  • Deployment Considerations: Deploying and managing SQLite databases may require additional configuration and maintenance compared to simple file-based storage solutions.
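
For orientation, a schema along these lines (purely illustrative; the real schema would be designed as part of the SQLite migration) shows why updates become cheap: ending an activity is a single-row UPDATE instead of a full-file rewrite.

```rust
/// Illustrative schema only; the actual pace schema may differ.
const SCHEMA: &str = "
CREATE TABLE activities (
    id          TEXT PRIMARY KEY,      -- e.g. a ULID
    category    TEXT NOT NULL,
    subcategory TEXT,
    description TEXT,
    started_at  TEXT NOT NULL,         -- RFC 3339 timestamp
    ended_at    TEXT                   -- NULL while the activity is running
);
-- Ending an activity is one indexed row update, not a full-file rewrite:
-- UPDATE activities SET ended_at = ?1 WHERE id = ?2;
";
```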

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Pending Approval

These branches will be created by Renovate only once you click their checkbox below.

  • chore(deps): update actions/upload-artifact action to v4
  • chore(deps): update rust crate human-panic to v2
  • 🔐 Create all pending approval PRs at once 🔐

Rate-Limited

These updates are currently rate-limited. Click on a checkbox below to force their creation now.

  • chore(deps): update rust crate clap to v4.5.4
  • chore(deps): update rust crate clap_complete to v4.5.2
  • chore(deps): update rust crate diesel to v2.1.6
  • chore(deps): update rust crate enum_dispatch to v0.3.13
  • chore(deps): update rust crate parking_lot to v0.12.2
  • chore(deps): update rust crate serde_json to v1.0.117
  • chore(deps): update rust crate thiserror to v1.0.60
  • chore(deps): update rust crate typed-builder to v0.18.2
  • chore(deps): update serde monorepo to v1.0.201 (serde, serde_derive)
  • chore(deps): update rust crate chrono-tz to 0.9.0
  • chore(deps): update rust crate insta to v1.38.0
  • chore(deps): update rust crate insta-cmd to 0.6.0
  • chore(deps): update rust crate rstest to 0.19.0
  • chore(deps): lock file maintenance
  • 🔐 Create all rate-limited PRs at once 🔐

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

cargo
Cargo.toml
  • abscissa_core 0.7.0
  • assert_cmd 2.0.14
  • chrono 0.4.35
  • chrono-tz 0.8.6
  • clap 4
  • clap_complete 4.5.1
  • clap_complete_nushell 4.5.1
  • derive_more 0.99.17
  • dialoguer 0.11.0
  • diesel 2.1.5
  • directories 5.0.1
  • displaydoc 0.2.4
  • enum_dispatch 0.3.12
  • eyre 0.6.12
  • getset 0.1.2
  • human-panic 1.2.3
  • humantime 2.1.0
  • insta 1.36.1
  • insta-cmd 0.5.0
  • itertools 0.12.1
  • libsqlite3-sys 0.27
  • merge 0.1.0
  • miette 7.2.0
  • once_cell 1.19.0
  • open 5.1.2
  • parking_lot 0.12.1
  • predicates 3.1.0
  • rayon 1.10.0
  • rstest 0.18.2
  • serde 1.0.197
  • serde_derive 1.0.197
  • serde_json 1.0.114
  • similar-asserts 1.5.0
  • simplelog 0.12.2
  • strum 0.26.2
  • strum_macros 0.26.2
  • tabled 0.15.0
  • tempfile 3.10.1
  • tera 1.19.1
  • thiserror 1.0.58
  • toml 0.8.12
  • tracing 0.1.40
  • typed-builder 0.18.1
  • ulid 1.1.2
  • wildmatch 2.3.3
crates/cli/Cargo.toml
crates/core/Cargo.toml
crates/time/Cargo.toml
github-actions
.github/workflows/audit.yaml
  • actions/checkout v4
  • dtolnay/rust-toolchain v1
  • Swatinem/rust-cache v2
  • rustsec/audit-check v1
  • actions/checkout v4
  • EmbarkStudios/cargo-deny-action v1
  • actions/checkout v4
  • actions/checkout v4
.github/workflows/ci.yaml
  • actions/checkout v4
  • dtolnay/rust-toolchain v1
  • actions/checkout v4
  • dtolnay/rust-toolchain v1
  • Swatinem/rust-cache v2
  • actions/checkout v4
  • dtolnay/rust-toolchain v1
  • Swatinem/rust-cache v2
  • actions/checkout v4
  • actions/checkout v4
  • taiki-e/install-action v2
  • dtolnay/rust-toolchain v1
  • Swatinem/rust-cache v2
  • actions/upload-artifact v3
  • actions/checkout v4
  • actions/checkout v4
  • dtolnay/rust-toolchain v1
  • Swatinem/rust-cache v2
  • actions/checkout v4
  • taiki-e/install-action v2
  • actions/checkout v4
.github/workflows/coverage.yaml
  • actions/checkout v4
  • taiki-e/install-action v2
  • codecov/codecov-action v4
.github/workflows/lint-docs.yaml
  • actions/checkout v4
  • dprint/check v2.2
.github/workflows/release-plz.yml
  • actions/checkout v4
  • MarcoIeni/release-plz-action v0.5
.github/workflows/release.yml
  • actions/checkout v4
  • actions/upload-artifact v4
  • actions/checkout v4
  • swatinem/rust-cache v2
  • actions/download-artifact v4
  • actions/upload-artifact v4
  • actions/checkout v4
  • actions/download-artifact v4
  • actions/upload-artifact v4
  • actions/checkout v4
  • actions/download-artifact v4
  • actions/upload-artifact v4
  • actions/checkout v4
  • actions/download-artifact v4
  • actions/checkout v4
  • actions/download-artifact v4
  • ncipollo/release-action v1
  • ubuntu 20.04
  • ubuntu 20.04
  • ubuntu 20.04
  • ubuntu 20.04
.github/workflows/valgrind.yml
  • actions/checkout v4

  • Check this box to trigger a request for Renovate to run again on this repository

Implement `review` command for activity insights

Overview

The pace CLI currently lacks a comprehensive way to review and summarize time spent on various activities. Users need an intuitive command to generate a detailed report of their activities, grouped by categories and subcategories, with total time spent on each. The proposed review command aims to fill this gap by aggregating activity data and presenting it in a structured and readable format.

Objectives

  • Implement a review command that aggregates activity data.
  • Group activities by categories and subcategories.
  • Display total time spent on each activity and category.
  • Ensure the output is formatted for easy readability.

Proposed Enhancements

  1. Data Aggregation Logic (pace_core)

    • Develop logic to parse activities_<date>.pace.toml and any associated activity logs.
    • Aggregate activities by categories and subcategories.
    • Calculate total time spent on each activity and category.
  2. Command Implementation (pace)

    • Implement the review command in the CLI interface using clap.
    • Integrate the data aggregation logic with the review command.
  3. Output Formatting (pace_core)

    • Design a format for the output with enhanced readability.
    • Implement formatting logic that aligns with the designed format.
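
The aggregation step could be sketched roughly like this (types and field names are illustrative, not pace_core's actual API): fold activities into a map keyed by category, summing the time spent per key.

```rust
use std::collections::BTreeMap;
use std::time::Duration;

/// Illustrative activity record; pace_core's real types differ.
struct Activity {
    category: String,
    duration: Duration,
}

/// Group by category and sum time spent; a BTreeMap keeps the
/// report output in a stable, sorted order.
fn summarize(activities: &[Activity]) -> BTreeMap<String, Duration> {
    let mut totals = BTreeMap::new();
    for activity in activities {
        *totals
            .entry(activity.category.clone())
            .or_insert(Duration::ZERO) += activity.duration;
    }
    totals
}
```

Subcategories could reuse the same shape with a composite key such as `(category, subcategory)`.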

Further Improvements

  • Provide options to filter the review by date range, e.g., --from and --to flags.
  • Allow users to export the review report to different formats, such as Markdown, PDF, JSON, and CSV.
  • Implement caching mechanisms to improve performance for generating reviews.

Request for Comments (RFC)

Feedback is requested from contributors and users on the following:

  • Suggestions for the output format and additional formatting features.
  • Ideas for optimizing data aggregation and report generation.
  • Interest in additional filtering and export options.

Add access to comprehensive documentation directly from the CLI

Something like a docs command could be helpful:

//! `docs` subcommand

use abscissa_core::{status_err, Application, Command, Runnable, Shutdown};
use clap::Args;

use crate::application::PACE_APP;

/// Opens the documentation.
#[derive(Command, Debug, Args, Clone)]
pub struct DocsCmd {
    /// Open the development documentation instead of the user docs
    #[clap(long)]
    dev: bool,
}

impl Runnable for DocsCmd {
    fn run(&self) {
        // Placeholder URLs; the dev-docs location is still to be decided.
        let url = if self.dev {
            "https://pace.cli.rs/docs/dev"
        } else {
            "https://pace.cli.rs/docs"
        };

        match open::that(url) {
            Ok(()) => {}
            Err(err) => {
                status_err!("{}", err);
                PACE_APP.shutdown(Shutdown::Crash);
            }
        };
    }
}

Implement First Activity Wizard for Enhanced User Onboarding

Overview

To further enhance the onboarding experience for new pace users, we propose the addition of a "First Activity Wizard." This interactive guide will walk users through the process of logging their first activity, demonstrating the core functionality of pace and ensuring users are comfortable with the basics of the tool from the outset.

Objectives

  • Simplify Initial Learning Curve: Make it easier for new users to understand how to log activities with pace.
  • Demonstrate Key Features: Highlight essential features and commands as part of the user's first interaction with the tool.
  • Engage Users Immediately: Engage users by having them actively use pace as part of the setup process, improving retention and satisfaction.

Proposed Enhancements

  1. Wizard Design and Flow

    • Design a step-by-step flow for the First Activity Wizard, outlining each step from invoking the wizard to successfully logging the first activity.
    • Include brief explanations or tips about key features and best practices.
  2. Interactive CLI Implementation

    • Utilize clap, dialoguer, or similar crates to create interactive prompts guiding the user through the activity logging process.
    • Ensure the wizard is accessible to users with different levels of CLI experience.
  3. Validation and Feedback Mechanisms

    • Implement input validation for each step, providing real-time feedback to guide users in correcting errors.
    • On successful completion of the wizard, provide positive feedback and suggestions for next steps or additional resources.
  4. Documentation and Help Integration

    • Offer options during the wizard to access more detailed documentation or help for each step.
    • Include a way to invoke the wizard again in the future, for users who want a refresher.
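
The step-by-step flow above could be modeled as a small state machine (step names are illustrative), which keeps the wizard's ordering explicit and easy to unit-test independently of the interactive prompts:

```rust
/// Illustrative wizard states; the real flow would be defined
/// alongside the dialoguer prompts.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum WizardStep {
    Welcome,
    ChooseCategory,
    DescribeActivity,
    Confirm,
    Done,
}

impl WizardStep {
    /// Advance to the next step; `Done` is terminal.
    fn next(self) -> WizardStep {
        use WizardStep::*;
        match self {
            Welcome => ChooseCategory,
            ChooseCategory => DescribeActivity,
            DescribeActivity => Confirm,
            Confirm => Done,
            Done => Done,
        }
    }
}
```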

Request for Comments (RFC)

Feedback and ideas are welcome on several fronts:

  • Suggestions for the essential steps and information to include in the First Activity Wizard.
  • Ideas for making the wizard engaging and informative without being overwhelming.
  • Experiences with similar onboarding tools or wizards in CLI applications that could inform the design of this feature.

Conclusion

The First Activity Wizard is envisioned as a key part of making pace accessible and inviting to new users. By guiding them through logging their first activity, we aim to demystify the process and showcase the simplicity and value of using pace for productivity tracking. This enhancement is about building confidence and competence in new users, setting them up for success with pace.

Improve onboarding experience for new `pace` users

Overview

The current onboarding experience for new users of the pace CLI tool is not as intuitive or welcoming as it could be. New users are immediately faced with a ParentDirNotFound error when attempting to record a new activity without having a pace.toml configuration file in place. This issue aims to outline and propose enhancements to streamline the initial setup process, making pace more accessible and user-friendly for first-time users.

Objectives

  • Simplify Initial Setup: Ensure users can easily set up pace without encountering errors.
  • Enhance User Guidance: Provide clear instructions and guidance throughout the setup process.

Proposed Enhancements

  1. Support PACE_HOME Environment Variable and OS-Dependent Directories

    • Check for a PACE_HOME environment variable to determine a custom configuration location.
    • Use the directories crate to find suitable OS-dependent directories for storing pace.toml when PACE_HOME is not set.
    • Ensure the chosen configuration directory exists or is created upon the first run.
  2. Interactive pace setup Command

    • Develop an interactive pace setup command to guide users through creating a global pace.toml.
    • Use the dialoguer crate for creating terminal prompts to gather user preferences.
    • POSTPONED #9
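
The lookup order in step 1 could be sketched like this (a simplified sketch: the hard-coded fallback stands in for the directories crate, which would supply the proper per-platform path):

```rust
use std::env;
use std::path::PathBuf;

/// Resolve where pace.toml should live: PACE_HOME wins if set and
/// non-empty; otherwise fall back to an OS-dependent config dir.
fn config_dir() -> PathBuf {
    match env::var("PACE_HOME") {
        Ok(home) if !home.is_empty() => PathBuf::from(home),
        _ => {
            // Simplified stand-in for directories::ProjectDirs.
            let base = env::var("HOME").unwrap_or_else(|_| ".".into());
            PathBuf::from(base).join(".config").join("pace")
        }
    }
}
```

The caller would then create the directory on first run (e.g. with `std::fs::create_dir_all`) before writing pace.toml into it.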

Further Onboarding Experience Improvements

  • #11
  • #12
  • Enhance error messages to include suggestions for corrective actions.
  • #13
  • Introduce update notifications to inform users about new features and improvements.

Request for Comments (RFC)

I invite all contributors and users to provide feedback on these proposed enhancements. Your insights are valuable to ensure we make pace as user-friendly and robust as possible. Specifically, I'm looking for feedback on:

  • Additional setup configurations that might be beneficial during the pace setup process.
  • Suggestions for improving the quick start guide and first activity wizard.
  • Ideas for making error messages more helpful and actionable.

Conclusion

Improving the onboarding experience is crucial for growing pace's user base and ensuring new users can start tracking their activities with minimal friction. By implementing these enhancements, we can make pace not only more accessible to newcomers, but also a delight to use right from the start.

Feedback from Byron (Umbrella)

Summary:

  • Data Storage and Safety:

    • Clarify data storage mechanism on project page; prefer real database like SQLite for data safety.
  • Data Import:

    • Support for importing existing data; consider providing importers for different sources (pace import -t clockify times.csv).
  • Data Export:

    • Ensure easy export of all data, preferably with SQLite backend, to avoid user lock-in.
  • HTML Templating for Reports:

    • Include HTML templating engine for generating reports or summaries for invoices.
  • Testing:

    • Implement comprehensive tests, including journey tests and unit tests, ensuring accuracy in timesheets and invoices. Verify behaviour with timezone switching.
  • Cross-Platform and Syncing:

    • Initial support on single machine is acceptable, but future consideration for multi-device sync and web interaction would be valuable. Self-hosting option preferred, but potential for productization exists if done well.

Originally posted by @Byron in Byron/byron#4 (comment)
