
competitest.nvim's Introduction

CompetiTest.nvim

Neovim Lua License

Competitive Programming with Neovim made Easy

CompetiTest's popup UI

CompetiTest's split UI

competitest.nvim is a testcase manager and checker. It saves you time in competitive programming contests by automating common tasks related to testcase management. It can compile, run and test your solutions across all the available testcases, displaying results in a nice interactive user interface.

Features

  • Multiple languages supported: it works out of the box with C, C++, Rust, Java and Python, but other languages can be configured
  • Flexible. No strict file-naming rules, optional fixed folder structure. You can choose where to put the source code file, the testcases, the received problems and contests, where to execute your programs and much more
  • Configurable (see Configuration). You can even configure every folder individually
  • Testcases can be stored in a single file or in multiple text files, see usage notes
  • Easily add, edit and delete testcases
  • Run your program across all the testcases, showing results and execution data in a nice interactive UI
  • Download testcases, problems and contests automatically from competitive programming platforms
  • Templates for received problems and contests
  • View diff between actual and expected output
  • Customizable interface that resizes automatically when the Neovim window is resized
  • Integration with statusline and winbar
  • Customizable highlight groups

Installation

NOTE: this plugin requires Neovim ≥ 0.5

Install with vim-plug:

Plug 'MunifTanjim/nui.nvim'        " it's a dependency
Plug 'xeluxee/competitest.nvim'

Install with packer.nvim:

use {
	'xeluxee/competitest.nvim',
	requires = 'MunifTanjim/nui.nvim',
	config = function() require('competitest').setup() end
}

Install with lazy.nvim:

{
	'xeluxee/competitest.nvim',
	dependencies = 'MunifTanjim/nui.nvim',
	config = function() require('competitest').setup() end,
}

If you are using another package manager note that this plugin depends on nui.nvim, hence it should be installed as a dependency.

Usage

To load this plugin call setup():

require('competitest').setup() -- to use default configuration
require('competitest').setup { -- to customize settings
	-- put your configuration here
}

For all the available settings see configuration.

Usage notes

  • Your programs must read from stdin and print to stdout. If stderr is used, its content will be displayed
  • A testcase consists of an input and an output (containing the correct answer)
  • An input is required for a testcase to be considered, while providing an output is optional
  • Testcases can be stored in multiple text files or in a single msgpack encoded file
    • You can choose how to store them with the testcases_use_single_file boolean option in configuration. By default it's false, so multiple files are used
    • The storage method can be detected automatically when the option testcases_auto_detect_storage is true
    • If you want to change the way already existing testcases are stored see conversion

Storing testcases in multiple text files

  • To store testcases in multiple text files set testcases_use_single_file to false
  • File naming must follow a rule to be recognized. Let's say your file is called task-A.cpp. With the default configuration the testcases associated with that file will be named task-A_input0.txt, task-A_output0.txt, task-A_input1.txt, task-A_output1.txt and so on. Counting starts from 0
  • Of course file naming can be configured: see testcases_input_file_format and testcases_output_file_format in configuration
  • Testcases files can be put in the same folder as the source code file, but you can customize their path (see testcases_directory in configuration); a configuration sketch follows this list
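Here is a minimal sketch of such a configuration. The option names match the default configuration shown below; the "./testcases" folder and the .in/.ans naming patterns are illustrative choices, not defaults.

-- a sketch: multiple text files stored in a "testcases" subfolder,
-- named like task-A.in0 / task-A.ans0
require('competitest').setup {
	testcases_use_single_file = false,
	testcases_directory = "./testcases", -- relative to the source file's folder
	testcases_input_file_format = "$(FNOEXT).in$(TCNUM)",
	testcases_output_file_format = "$(FNOEXT).ans$(TCNUM)",
}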

Storing testcases in a single file

  • To store testcases in a single file set testcases_use_single_file to true
  • The testcases file naming must follow a rule to be recognized. Let's say your file is called task-A.cpp. With the default configuration the testcases file will be named task-A.testcases
  • Of course single file naming can be configured: see testcases_single_file_format in configuration
  • The testcases file can be put in the same folder as the source code file, but you can customize its path (see testcases_directory in configuration); a configuration sketch follows this list
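Similarly, a minimal sketch for single-file storage; the .tc pattern is illustrative (the default is $(FNOEXT).testcases).

-- a sketch: keep every testcase of task-A.cpp inside a single
-- msgpack-encoded file named task-A.tc
require('competitest').setup {
	testcases_use_single_file = true,
	testcases_single_file_format = "$(FNOEXT).tc",
}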

In any case you can forget about these rules if you use :CompetiTest add_testcase and :CompetiTest edit_testcase, which handle these details for you.

When launching the following commands make sure the focused buffer is the one containing the source code file.

Add or Edit a testcase

Launch :CompetiTest add_testcase to add a new testcase.
Launch :CompetiTest edit_testcase to edit an existing testcase. If you want to specify testcase number directly in the command line you can use :CompetiTest edit_testcase x, where x is a number representing the testcase you want to edit.

To jump between the input and output windows press <C-h>, <C-l> or <C-i>. To save and close the testcase editor press <C-s> or use :wq.

Of course these keybindings can be customized: see editor_ui.normal_mode_mappings and editor_ui.insert_mode_mappings in configuration, as in the sketch below.
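For example, a sketch that remaps the editor keys (the option names match the default configuration; the keys chosen here are only an example):

-- a sketch of remapping the testcase editor keys
require('competitest').setup {
	editor_ui = {
		normal_mode_mappings = {
			switch_window = { "<Tab>" },
			save_and_close = "<C-s>",
			cancel = { "q", "Q" },
		},
		insert_mode_mappings = {
			switch_window = { "<C-h>", "<C-l>" },
			save_and_close = "<C-s>",
			cancel = "<C-q>",
		},
	},
}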

Remove a testcase

Launch :CompetiTest delete_testcase. If you want to specify testcase number directly in the command line you can use :CompetiTest delete_testcase x, where x is a number representing the testcase you want to remove.

Convert testcases

Testcases can be stored in multiple text files or in a single msgpack encoded file.
Launch :CompetiTest convert to change testcases storage method: you can convert a single file into multiple files or vice versa. One of the following arguments is needed:

  • singlefile_to_files: convert a single file into multiple text files
  • files_to_singlefile: convert multiple text files into a single file
  • auto: if there's a single file convert it into multiple files, otherwise convert multiple files into a single file

NOTE: this command only converts already existing testcases files, without changing CompetiTest configuration. To choose which storage method to use you have to configure the testcases_use_single_file option, which is false by default. In any case the storage method can be detected automatically when the option testcases_auto_detect_storage is true.

Run testcases

Launch :CompetiTest run. CompetiTest's interface will appear and you'll be able to view details about a testcase by moving the cursor over its entry. You can close the UI by pressing q or Q, or with :q.
If you're using a compiled language and you don't want to recompile your program, launch :CompetiTest run_no_compile.
If you have previously closed the UI and want to re-open it without re-executing testcases or recompiling, launch :CompetiTest show_ui.

Control processes

  • Run a testcase again by pressing R
  • Run all testcases again by pressing <C-r>
  • Kill the process associated with a testcase by pressing K
  • Kill all the processes associated with testcases by pressing <C-k>

View details

  • View input in a bigger window by pressing i or I
  • View expected output in a bigger window by pressing a or A
  • View stdout in a bigger window by pressing o or O
  • View stderr in a bigger window by pressing e or E
  • Toggle diff view between actual and expected output by pressing d or D

Of course all these keybindings can be customized: see runner_ui.mappings in configuration, as in the sketch below.
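For example, a sketch that remaps some runner keys (option names as in the default configuration below; the chosen keys are only an example):

-- a sketch of customizing the runner keybindings
require('competitest').setup {
	runner_ui = {
		mappings = {
			run_again = "r",
			run_all_again = "<C-r>",
			kill = "x",
			kill_all = "<C-x>",
			view_input = { "i" },
			view_output = { "a" },
			view_stdout = { "o" },
			view_stderr = { "e" },
			toggle_diff = "d",
			close = { "q", "<esc>" },
		},
	},
}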

Receive testcases, problems and contests

NOTE: to get this feature working you need to install the competitive-companion extension in your browser.

Thanks to its integration with competitive-companion, CompetiTest can download contents from competitive programming platforms:

  • Download only testcases with :CompetiTest receive testcases
  • Download a problem with :CompetiTest receive problem (source file is automatically created along with testcases)
  • Download an entire contest with :CompetiTest receive contest (make sure to be on the homepage of the contest, not of a single problem)

After launching one of these commands click on the green plus button in your browser to start downloading.
For further customization see receive options in configuration.
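A minimal sketch of the receive-related options (the values shown are the defaults from the configuration section below):

-- receive-related options, shown with their default values
require('competitest').setup {
	companion_port = 27121,           -- port competitive-companion sends data to
	receive_print_message = true,     -- notify when ready to receive and when data arrives
	received_files_extension = "cpp", -- default extension for created source files
}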

Customize folder structure

By default CompetiTest stores received problems and contests in the current working directory. You can change this behavior through the options received_problems_path, received_contests_directory and received_contests_problems_path. See receive modifiers for further details.
Here are some tips (a combined setup sketch follows the list):

  • Fixed directory for received problems (not contests):
     received_problems_path = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)/$(PROBLEM).$(FEXT)"
  • Fixed directory for received contests:
     received_contests_directory = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)"
  • Put every problem of a contest in a different directory:
     received_contests_problems_path = "$(PROBLEM)/main.$(FEXT)"
  • Example of file naming for Java contests:
     received_contests_problems_path = "$(PROBLEM)/$(JAVA_MAIN_CLASS).$(FEXT)"
  • Simplified file names: this works with Java and any other language, because the modifier $(JAVA_TASK_CLASS) is generated from the problem name by removing all non-alphabetic and non-numeric characters, including spaces and punctuation:
     received_contests_problems_path = "$(JAVA_TASK_CLASS).$(FEXT)"
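Putting the tips above together, a sketch of a fixed folder structure for received problems and contests, with one directory per contest problem (the paths are illustrative):

-- a sketch combining the tips above
require('competitest').setup {
	received_problems_path = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)/$(PROBLEM).$(FEXT)",
	received_contests_directory = "$(HOME)/Competitive Programming/$(JUDGE)/$(CONTEST)",
	received_contests_problems_path = "$(PROBLEM)/main.$(FEXT)",
}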

Templates for received problems and contests

When downloading a problem or a contest, source code templates can be configured for different file types. See the template_file option in configuration.
Receive modifiers can be used inside template files to insert details about received problems. To enable this feature set evaluate_template_modifiers to true. Here is a template example for C++:

// Problem: $(PROBLEM)
// Contest: $(CONTEST)
// Judge: $(JUDGE)
// URL: $(URL)
// Memory Limit: $(MEMLIM)
// Time Limit: $(TIMELIM)
// Start: $(DATE)

#include <iostream>
using namespace std;
int main() {
	cout << "This is a template file" << endl;
	cerr << "Problem name is $(PROBLEM)" << endl;
	return 0;
}
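A sketch of enabling the template above: point template_file at your template (the path is illustrative; a table keyed by file extension works too, as described in configuration) and let receive modifiers be evaluated.

-- a sketch: use one template per file extension and evaluate $(...) modifiers
require('competitest').setup {
	template_file = "~/path/to/template.$(FEXT)", -- or { cpp = "~/path/to/template.cpp" }
	evaluate_template_modifiers = true,
}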

Configuration

Full configuration

Here you can find CompetiTest's default configuration:

require('competitest').setup {
	local_config_file_name = ".competitest.lua",

	floating_border = "rounded",
	floating_border_highlight = "FloatBorder",
	picker_ui = {
		width = 0.2,
		height = 0.3,
		mappings = {
			focus_next = { "j", "<down>", "<Tab>" },
			focus_prev = { "k", "<up>", "<S-Tab>" },
			close = { "<esc>", "<C-c>", "q", "Q" },
			submit = { "<cr>" },
		},
	},
	editor_ui = {
		popup_width = 0.4,
		popup_height = 0.6,
		show_nu = true,
		show_rnu = false,
		normal_mode_mappings = {
			switch_window = { "<C-h>", "<C-l>", "<C-i>" },
			save_and_close = "<C-s>",
			cancel = { "q", "Q" },
		},
		insert_mode_mappings = {
			switch_window = { "<C-h>", "<C-l>", "<C-i>" },
			save_and_close = "<C-s>",
			cancel = "<C-q>",
		},
	},
	runner_ui = {
		interface = "popup",
		selector_show_nu = false,
		selector_show_rnu = false,
		show_nu = true,
		show_rnu = false,
		mappings = {
			run_again = "R",
			run_all_again = "<C-r>",
			kill = "K",
			kill_all = "<C-k>",
			view_input = { "i", "I" },
			view_output = { "a", "A" },
			view_stdout = { "o", "O" },
			view_stderr = { "e", "E" },
			toggle_diff = { "d", "D" },
			close = { "q", "Q" },
		},
		viewer = {
			width = 0.5,
			height = 0.5,
			show_nu = true,
			show_rnu = false,
			close_mappings = { "q", "Q" },
		},
	},
	popup_ui = {
		total_width = 0.8,
		total_height = 0.8,
		layout = {
			{ 4, "tc" },
			{ 5, { { 1, "so" }, { 1, "si" } } },
			{ 5, { { 1, "eo" }, { 1, "se" } } },
		},
	},
	split_ui = {
		position = "right",
		relative_to_editor = true,
		total_width = 0.3,
		vertical_layout = {
			{ 1, "tc" },
			{ 1, { { 1, "so" }, { 1, "eo" } } },
			{ 1, { { 1, "si" }, { 1, "se" } } },
		},
		total_height = 0.4,
		horizontal_layout = {
			{ 2, "tc" },
			{ 3, { { 1, "so" }, { 1, "si" } } },
			{ 3, { { 1, "eo" }, { 1, "se" } } },
		},
	},

	save_current_file = true,
	save_all_files = false,
	compile_directory = ".",
	compile_command = {
		c = { exec = "gcc", args = { "-Wall", "$(FNAME)", "-o", "$(FNOEXT)" } },
		cpp = { exec = "g++", args = { "-Wall", "$(FNAME)", "-o", "$(FNOEXT)" } },
		rust = { exec = "rustc", args = { "$(FNAME)" } },
		java = { exec = "javac", args = { "$(FNAME)" } },
	},
	running_directory = ".",
	run_command = {
		c = { exec = "./$(FNOEXT)" },
		cpp = { exec = "./$(FNOEXT)" },
		rust = { exec = "./$(FNOEXT)" },
		python = { exec = "python", args = { "$(FNAME)" } },
		java = { exec = "java", args = { "$(FNOEXT)" } },
	},
	multiple_testing = -1,
	maximum_time = 5000,
	output_compare_method = "squish",
	view_output_diff = false,

	testcases_directory = ".",
	testcases_use_single_file = false,
	testcases_auto_detect_storage = true,
	testcases_single_file_format = "$(FNOEXT).testcases",
	testcases_input_file_format = "$(FNOEXT)_input$(TCNUM).txt",
	testcases_output_file_format = "$(FNOEXT)_output$(TCNUM).txt",

	companion_port = 27121,
	receive_print_message = true,
	template_file = false,
	evaluate_template_modifiers = false,
	date_format = "%c",
	received_files_extension = "cpp",
	received_problems_path = "$(CWD)/$(PROBLEM).$(FEXT)",
	received_problems_prompt_path = true,
	received_contests_directory = "$(CWD)",
	received_contests_problems_path = "$(PROBLEM).$(FEXT)",
	received_contests_prompt_directory = true,
	received_contests_prompt_extension = true,
	open_received_problems = true,
	open_received_contests = true,
	replace_received_testcases = false,
}

Explanation

  • local_config_file_name: you can use a different configuration for every different folder. See local configuration
  • floating_border: for details see here
  • floating_border_highlight: the highlight group used for popups border
  • picker_ui: settings related to the testcase picker
    • width: a value from 0 to 1, representing the ratio between picker width and Neovim width
    • height: a value from 0 to 1, representing the ratio between picker height and Neovim height
    • mappings: keyboard mappings to interact with picker
  • editor_ui: settings related to the testcase editor
    • popup_width: a value from 0 to 1, representing the ratio between editor width and Neovim width
    • popup_height: a value from 0 to 1, representing the ratio between editor height and Neovim height
    • show_nu: whether to show line numbers or not
    • show_rnu: whether to show relative line numbers or not
    • switch_window: keyboard mappings to switch between input window and output window
    • save_and_close: keyboard mappings to save testcase content
    • cancel: keyboard mappings to quit testcase editor without saving
  • runner_ui: settings related to testcase runner user interface
    • interface: interface used to display testcases data. Can be popup (floating windows) or split (normal windows). Associated settings can be found in popup_ui and split_ui
    • selector_show_nu: whether to show line numbers or not in testcase selector
    • selector_show_rnu: whether to show relative line numbers or not in testcase selector
    • show_nu: whether to show line numbers or not in details windows
    • show_rnu: whether to show relative line numbers or not in details windows
    • mappings: keyboard mappings used in testcase selector window
      • run_again: keymaps to run again a testcase
      • run_all_again: keymaps to run again all testcases
      • kill: keymaps to kill a testcase
      • kill_all: keymaps to kill all testcases
      • view_input: keymaps to view input (stdin) in a bigger window
      • view_output: keymaps to view expected output in a bigger window
      • view_stdout: keymaps to view the program's output (stdout) in a bigger window
      • view_stderr: keymaps to view the program's errors (stderr) in a bigger window
      • toggle_diff: keymaps to toggle diff view between actual and expected output
      • close: keymaps to close runner user interface
    • viewer: settings related to the viewer window, the bigger window used to display details
      • width: a value from 0 to 1, representing the ratio between viewer window width and Neovim width
      • height: a value from 0 to 1, representing the ratio between viewer window height and Neovim height
      • show_nu: whether to show line numbers or not in viewer window
      • show_rnu: whether to show relative line numbers or not in viewer window
      • close_mappings: keymaps to close viewer window
  • popup_ui: settings related to testcase runner popup interface
    • total_width: a value from 0 to 1, representing the ratio between total interface width and Neovim width
    • total_height: a value from 0 to 1, representing the ratio between total interface height and Neovim height
    • layout: a table describing popup UI layout. For further details see here
  • split_ui: settings related to testcase runner split interface
    • position: can be top, bottom, left or right
    • relative_to_editor: whether to open split UI relatively to entire editor or to local window
    • total_width: a value from 0 to 1, representing the ratio between total vertical split width and relative window width
    • vertical_layout: a table describing vertical split UI layout. For further details see here
    • total_height: a value from 0 to 1, representing the ratio between total horizontal split height and relative window height
    • horizontal_layout: a table describing horizontal split UI layout. For further details see here
  • save_current_file: if true save current file before running testcases
  • save_all_files: if true save all the opened files before running testcases
  • compile_directory: execution directory of the compiler, relative to the current file's path
  • compile_command: configure the command used to compile code for each language, see here
  • running_directory: execution directory of your solutions, relative to the current file's path
  • run_command: configure the command used to run your solutions for each language, see here
  • multiple_testing: how many testcases to run at the same time
    • set it to -1 to make the most of the available parallelism; often the number of testcases run at the same time coincides with the number of CPUs
    • set it to 0 if you want to run all the testcases together
    • set it to any positive integer to run that number of testcases simultaneously
  • maximum_time: maximum time, in milliseconds, given to processes. If it is exceeded the process will be killed
  • output_compare_method: how the given output (stdout) and the expected output should be compared. It can be a string, representing the method to use, or a custom function. The available options follow:
    • "exact": character by character comparison
    • "squish": compare stripping extra white spaces and newlines
    • custom function: you can use a function accepting two arguments, two strings representing output and expected output. It should return true if the given output is acceptable, false otherwise. Example:
       require('competitest').setup {
       	output_compare_method = function(output, expected_output)
       		if output == expected_output then
       			return true
       		else
       			return false
       		end
       	end
       }
  • view_output_diff: view diff between actual output and expected output in their respective windows
  • testcases_directory: where testcases files are located, relative to the current file's path
  • testcases_use_single_file: if true testcases will be stored in a single file instead of using multiple text files. If you want to change the way already existing testcases are stored see conversion
  • testcases_auto_detect_storage: if true testcases storage method will be detected automatically. When both text files and single file are available, testcases will be loaded according to the preference specified in testcases_use_single_file
  • testcases_single_file_format: string representing how single testcases files should be named (see file-format modifiers)
  • testcases_input_file_format: string representing how testcases input files should be named (see file-format modifiers)
  • testcases_output_file_format: string representing how testcases output files should be named (see file-format modifiers)
  • companion_port: competitive companion port number
  • receive_print_message: if true, notify the user that the plugin is ready to receive testcases, problems and contests, or that they have just been received
  • template_file: templates to use when creating source files for received problems or contests. Can be one of the following:
    • false: do not use templates
    • string with file-format modifiers: useful when templates for different file types have a regular file naming
       template_file = "~/path/to/template.$(FEXT)"
    • table with paths: table associating file extension to template file
       template_file = {
       	c = "~/path/to/file.c",
       	cpp = "~/path/to/file.cpp",
       	py = "~/path/to/file.py",
       }
  • evaluate_template_modifiers: whether to evaluate receive modifiers inside a template file or not
  • date_format: string used to format $(DATE) modifier (see receive modifiers). The string should follow the formatting rules as per Lua's os.date function. For example, to get 06-07-2023 15:24:32 set it to %d-%m-%Y %H:%M:%S
  • received_files_extension: default file extension for received problems
  • received_problems_path: path where received problems (not contests) are stored. Can be one of the following:
    • string with receive modifiers
    • function: a function accepting two arguments, a table with task details and a string with the preferred file extension. It should return the absolute path where the received problem is stored. Example:
       received_problems_path = function(task, file_extension)
       	local hyphen = string.find(task.group, " - ")
       	local judge, contest
       	if not hyphen then
       		judge = task.group
       		contest = "unknown_contest"
       	else
       		judge = string.sub(task.group, 1, hyphen - 1)
       		contest = string.sub(task.group, hyphen + 3)
       	end
       	return string.format("%s/Competitive Programming/%s/%s/%s.%s", vim.loop.os_homedir(), judge, contest, task.name, file_extension)
       end
  • received_problems_prompt_path: whether to ask for user confirmation about the path where a received problem is stored
  • received_contests_directory: directory where received contests are stored. It can be a string or a function, exactly as received_problems_path
  • received_contests_problems_path: relative path from the contest root directory; each problem of a received contest is stored following this option. It can be a string or a function, exactly as received_problems_path
  • received_contests_prompt_directory: whether to ask for user confirmation about the directory where received contests are stored
  • received_contests_prompt_extension: whether to ask for user confirmation about the file extension to use when receiving a contest
  • open_received_problems: automatically open source files when receiving a single problem
  • open_received_contests: automatically open source files when receiving a contest
  • replace_received_testcases: this option applies when receiving only testcases. If true replace existing testcases with received ones, otherwise ask user what to do

Local configuration

You can use a different configuration for every folder by creating a file called .competitest.lua (this name can be changed through the option local_config_file_name). It will affect every file contained in that folder and in its subfolders. The file must return a table containing valid options, as in the following example.

-- .competitest.lua content
return {
	multiple_testing = 3,
	maximum_time = 2500,
	testcases_input_file_format = "in_$(TCNUM).txt",
	testcases_output_file_format = "ans_$(TCNUM).txt",
	testcases_single_file_format = "$(FNOEXT).tc",
}

Available modifiers

Modifiers are substrings that will be replaced by another string, depending on the modifier and the context. They're used to tweak some options.

File-format modifiers

You can use them to define commands or to customize testcases files naming through options testcases_single_file_format, testcases_input_file_format and testcases_output_file_format.

Modifier Meaning
$() insert a dollar
$(HOME) user home directory
$(FNAME) file name
$(FNOEXT) file name without extension
$(FEXT) file extension
$(FABSPATH) absolute path of current file
$(ABSDIR) absolute path of folder that contains file
$(TCNUM) testcase number

Receive modifiers

You can use them to customize the options received_problems_path, received_contests_directory, received_contests_problems_path and to insert problem details inside template files. See also tips for customizing folder structure for received problems and contests.

Modifier Meaning
$() insert a dollar
$(HOME) user home directory
$(CWD) current working directory
$(FEXT) preferred file extension
$(PROBLEM) problem name, name field
$(GROUP) judge and contest name, group field
$(JUDGE) judge name (first part of group, before hyphen)
$(CONTEST) contest name (second part of group, after hyphen)
$(URL) problem url, url field
$(MEMLIM) available memory, memoryLimit field
$(TIMELIM) time limit, timeLimit field
$(JAVA_MAIN_CLASS) almost always "Main", mainClass field
$(JAVA_TASK_CLASS) classname-friendly version of problem name, taskClass field
$(DATE) current date and time (based on date_format), it can be used only inside template files

The fields mentioned above belong to received tasks.

Customize compile and run commands

Languages such as C, C++, Rust, Java and Python are supported by default.
Of course you can customize the commands used to compile and run your programs, and you can also add languages that aren't supported out of the box.

require('competitest').setup {
	compile_command = {
		cpp       = { exec = 'g++',           args = {'$(FNAME)', '-o', '$(FNOEXT)'} },
		some_lang = { exec = 'some_compiler', args = {'$(FNAME)'} },
	},
	run_command = {
		cpp       = { exec = './$(FNOEXT)' },
		some_lang = { exec = 'some_interpreter', args = {'$(FNAME)'} },
	},
}

See file-format modifiers to better understand how dollar notation works.

NOTE: if your language isn't compiled you can ignore the compile_command section.
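As a concrete (unofficial) example, something like the following could add Go support. The go build flags and the ./$(FNOEXT) run command are assumptions about a typical local toolchain, not options shipped by CompetiTest.

-- a sketch: adding Go, assuming the go toolchain is installed
require('competitest').setup {
	compile_command = {
		go = { exec = "go", args = { "build", "-o", "$(FNOEXT)", "$(FNAME)" } },
	},
	run_command = {
		go = { exec = "./$(FNOEXT)" },
	},
}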

Feel free to open a PR or an issue if you think it's worth adding a new language among default ones.

Customize UI layout

You can customize the testcase runner user interface by defining window positions and sizes through a table describing a layout. This is possible both for popup and split UI.

Every window is identified by a string representing its name and a number representing the proportion between its size and the sizes of the other windows. To define a window use a Lua table made of a number and a string, for example { 1.5, "tc" }.
Windows can be named as follows:

  • tc for testcases selector
  • si for standard input
  • so for standard output
  • se for standard error
  • eo for expected output

A layout is a list made of windows or layouts (recursively defined). To define a layout use a Lua table containing a list of windows or layouts.

Sample layouts:
layout = {
  { 2, "tc" },
  { 3, {
       { 1, "so" },
       { 1, "si" },
     } },
  { 3, {
       { 1, "eo" },
       { 1, "se" },
     } },
}


layout = {
  { 1, {
       { 1, "so" },
       { 1, {
            { 1, "tc" },
            { 1, "se" },
          } },
     } },
  { 1, {
       { 1, "eo" },
       { 1, "si" },
     } },
}

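A layout table goes inside popup_ui or split_ui, as in this sketch (it reuses the first sample layout above):

-- a sketch: use the first sample layout for the popup interface
require('competitest').setup {
	popup_ui = {
		layout = {
			{ 2, "tc" },
			{ 3, { { 1, "so" }, { 1, "si" } } },
			{ 3, { { 1, "eo" }, { 1, "se" } } },
		},
	},
}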

Statusline and winbar integration

When using the split UI, window names can be displayed in the statusline or in the winbar. Each CompetiTest buffer has a buffer-local variable called competitest_title, a string representing the window name. You can get its value using nvim_buf_get_var(buffer_number, 'competitest_title').
See the split UI screenshot at the top of this README for an example of a statusline used with the split UI.
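For instance, a sketch of a winbar component that reads competitest_title; lualine is used here only as an example, and any statusline plugin that accepts a function component works the same way.

-- a sketch: show the CompetiTest window name in the winbar via lualine
local function competitest_title()
	local ok, title = pcall(vim.api.nvim_buf_get_var, 0, "competitest_title")
	return ok and title or ""
end

require('lualine').setup {
	winbar = {
		lualine_a = { competitest_title },
	},
}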

Highlights

You can customize CompetiTest highlight groups. Their default values are:

hi CompetiTestRunning cterm=bold     gui=bold
hi CompetiTestDone    cterm=none     gui=none
hi CompetiTestCorrect ctermfg=green  guifg=#00ff00
hi CompetiTestWarning ctermfg=yellow guifg=orange
hi CompetiTestWrong   ctermfg=red    guifg=#ff0000
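If you prefer defining them from Lua, here is a sketch equivalent to the :hi lines above, using nvim_set_hl:

-- the same default colors, defined via the Lua API
vim.api.nvim_set_hl(0, "CompetiTestRunning", { bold = true })
vim.api.nvim_set_hl(0, "CompetiTestDone", {})
vim.api.nvim_set_hl(0, "CompetiTestCorrect", { ctermfg = "green", fg = "#00ff00" })
vim.api.nvim_set_hl(0, "CompetiTestWarning", { ctermfg = "yellow", fg = "orange" })
vim.api.nvim_set_hl(0, "CompetiTestWrong", { ctermfg = "red", fg = "#ff0000" })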

Roadmap

  • Manage testcases
    • Add testcases
    • Edit testcases
    • Delete testcases
    • Store testcases in a single file
    • Store testcases in multiple text files
    • Convert single file into multiple text files and vice versa
  • Run testcases
    • Support many programming languages
    • Handle compilation if needed
    • Run multiple testcases at the same time
      • Run again processes
      • Kill processes
    • Display results and execution data in a popup UI
    • Display results and execution data in a split window UI
  • Handle interactive tasks
  • Configure every folder individually
  • Integration with competitive-companion
    • Download testcases
    • Download problems
    • Download contests
    • Customizable folder structure for downloaded problems and contests
  • Templates for files created when receiving problems or contests
  • Integration with tools to submit solutions (api-client or cpbooster)
  • Write Vim docs
  • Customizable highlights
  • Resizable UI

Contributing

If you have any suggestions or if you encounter any trouble, don't hesitate to open a new issue.
Pull Requests are welcome! 🎉

License

GNU Lesser General Public License version 3 (LGPL v3) or, at your option, any later version

Copyright © 2021-2023 xeluxee

CompetiTest.nvim is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

CompetiTest.nvim is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with CompetiTest.nvim. If not, see https://www.gnu.org/licenses/.


competitest.nvim's Issues

Some ideas concerning new features

First of all, I really appreciate this awesome plugin, which does me a big favour. After using it for a while, I have the following ideas to make it better:

  • merge the three testcase commands into one (something like testcases), so we can add, edit and remove testcases with our own keybindings.
  • change the color of the popup windows' borders (except the Testcases window) based on the test result, so the result can be recognised at a glance. For example, red for a wrong test and green for a correct one.
  • add something like hooks or user events. For instance, I could create a build directory before running the code and then put the binary file into it.

Unable to get compile directory working

Hi all,
I discovered this plugin yesterday and I'm amazed at the level of customization and documentation.

So I tried setting up the plugin in my Neovim config. Using the default configuration, it worked as expected. But I wanted a cleaner directory structure, with the executable files in a separate folder as well as the testcases.

So the basic folder structure that I desired was:

-> File.cpp
-> testcases 
		| -> Testcase files (input & output)
-> bin
		| -> Executables

For this I specified the following:

 testcases_directory = './testcases',
 compile_directory = '.',
 running_directory = './bin',
 compile_command = {
		cpp = { exec = "g++", "$(FNAME)", "-o", "./bin/$(FNOEXT)"}
 },

But :CompetiTest run gave an error.

Error executing luv callback:
vim/_editor.lua:0: E5560: nvim_err_writeln must not be called in a lua loop callback
stack traceback:
        [C]: in function 'nvim_err_writeln'
        vim/_editor.lua: in function 'notify'
        ...are/nvim/lazy/competitest.nvim/lua/competitest/utils.lua:8: in function 'notify'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:205: in function 'execute_testcase'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:128: in function 'run_first_testcases'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:138: in function 'callback'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:201: in function <...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:167>

I also tried another way.

 testcases_directory = './testcases',
 compile_directory = './bin',
 running_directory = './bin',
 compile_command = {
		cpp = { exec = "g++", "../$(FNAME)", "-o", "./$(FNOEXT)"}
 },

which resulted in compilation error:

cc1plus: fatal error: B. Friendly Arrays.cpp: No such file or directory
compilation terminated.

Could anyone give any details on how to use the mentioned options or any of the corresponding commands (like compile_command)?

Thanks

Running :CompetiTest commands throws `trailing data in msgpack string` error

I have been trying to integrate CompetiTest with my Neovim setup for 2 days, but every time I try to install and run the plugin, it throws the following error. I have searched for this error but haven't found any answers. My directory structure is shown in the following image.

(screenshot of the directory structure)

I am attaching one sample command's output for reference, but every CompetiTest command throws the same error. I am running Ubuntu 22.04 on Windows 11 WSL2. I have tried both the default config and a custom config, but the error stays the same.

:CompetiTest run
E5108: Error executing lua ...ker/start/competitest.nvim/lua/competitest/testcases.lua:256: trailing data in msgpack string
stack traceback:
        [C]: in function 'loader2'
        ...ker/start/competitest.nvim/lua/competitest/testcases.lua:256: in function 'buf_get_testcases'
        ...cker/start/competitest.nvim/lua/competitest/commands.lua:261: in function 'run_testcases'
        ...cker/start/competitest.nvim/lua/competitest/commands.lua:78: in function 'sub'
        ...cker/start/competitest.nvim/lua/competitest/commands.lua:103: in function 'command'
        [string ":lua"]:1: in main chunk

Kindly help me with this error and if any further clarification is required, I will be happy to share that.

Reading until the end of stdin is not working

I have tried this snippet of code to read until the last line of input:

std::string a;
while(std::cin >> a) {
  // do something
}

And when I run it with CompetiTest it results in a TIMEOUT. But it works in the terminal when redirecting a file to the program's stdin.

./main < input.txt

Tests are not run when a dot `.` is in the file name

I guess the problem, that the code is compiled but no test cases are detected, is due to the presence of a dot in the file name, A.Vanya-and-Fence.cpp. When I changed the name to A.Vanya-and-Fence/main.cpp, tests ran successfully ❤️.

Note that :CompetiConvert auto also says no test cases were detected, even though they are already there and were generated using :CompetiRecieve

https://codeforces.com/contest/677/problem/A

Feature request: Split a test into smaller cases

For a test with multiple test cases. e.g let there be 4 test cases, and each test takes a and b as inputs:

4
1 2
2 3
4 5
5 5

Conventionally, competitest would just store this test in a single file (input0.txt), but instead I'd like it to be broken down into smaller cases (e.g. storing the tests in files input0, input1, input2 and input3), with each file in the following format:

1
a b

This makes it easier to debug while in the contest

CompetiTestReceive enhancements

I am so happy to see new features in this amazing NeoVim plugin. I found that CompetiTestReceive requires an argument and we can receive a problem or even a contest. But we need more work on this command.

I know that you are planning to add a custom template when a problem file is created and here are some features I really need.

1. Cancel fetching a contest or a problem

I was fetching a problem and wanted to exit the directory prompt by pressing Esc but it didn't work.

2. Default directory

We need more configuration for the custom directory of each online judge. I usually put problems in their own directory inside their online judge's directory, and I've seen other people do this too, not only me. This is what my directory structure looks like:

├── <online judge>
│   ├── <contest id>
│   │   └── <problem name>.cpp
│   └── <problem name>
│       └── main.cpp

[#feature] diff view between the actual and the expected output

Sometimes there is a difference between the actual output and the expected one, so it would be a good idea to enable a diff view with a keybinding, or to make it the default when a wrong answer is encountered.

We can do it with our terminals, but speed is the goal of this plugin: we need to do it with just a key press.


When input is null, the plugin throws an error


I've just started using Neovim, so I'm not sure why I'm getting this error.
If I add input it works; if I don't add input, running throws an error.

Error executing luv callback:
...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:214: bad argument #2 to 'write' (data must be string or table of strings, got nil)
stack traceback:
        [C]: in function 'write'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:214: in function 'execute_testcase'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:129: in function 'run_first_testcases'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:139: in function 'callback'
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:202: in function <...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:168>
Press ENTER or type command to continue
Error executing luv callback:
...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:170: attempt to index field 'process' (a nil value)
stack traceback:
        ...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:170: in function <...re/nvim/lazy/competitest.nvim/lua/competitest/runner.lua:168>
Press ENTER or type command to continue

Rerun makes no sense if the program is not recompiled

For re-running a test, the only scenario I can think of is when someone modifies their program, opens CompetiTest (using :CompetitestRunNE) and runs that specific test case. So I don't think it really makes sense to re-run a test without recompiling the program

Bug: reopening viewer window with split ui throws error

Reproduction steps:

  • set runner ui to split
  • open competitest ui
  • open a viewer window (i/o/e...)
  • close viewer window (q)
  • close competitest UI (q)
  • open competitest ui again
  • open the viewer window again (Instead of opening like usual it throws errors)
E5108: Error executing lua: ...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:143: Vim(append):Error executing lua callback: ...m/site/pack/packer/start/nui.nvim/lua/nui/split/init.lua:196: invalid augroup: nui_3_hide
stack traceback:
        [C]: in function 'create'
        ...m/site/pack/packer/start/nui.nvim/lua/nui/split/init.lua:196: in function <...m/site/pack/packer/start/nui.nvim/lua/nui/split/init.lua:195>
        [C]: in function 'nvim_exec_autocmds'
        ...ite/pack/packer/start/nui.nvim/lua/nui/utils/autocmd.lua:376: in function 'exec'
        ...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:144: in function <...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:143>
        [C]: in function 'nvim_win_call'
        ...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:143: in function '_open_window'
        ...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:255: in function 'show'
        ...tart/competitest.nvim/lua/competitest/runner_ui/init.lua:303: in function 'show_viewer_popup'
        ...tart/competitest.nvim/lua/competitest/runner_ui/init.lua:156: in function <...tart/competitest.nvim/lua/competitest/runner_ui/init.lua:155>
stack traceback:
        [C]: in function 'nvim_win_call'
        ...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:143: in function '_open_window'
        ...m/site/pack/packer/start/nui.nvim/lua/nui/popup/init.lua:255: in function 'show'
        ...tart/competitest.nvim/lua/competitest/runner_ui/init.lua:303: in function 'show_viewer_popup'
        ...tart/competitest.nvim/lua/competitest/runner_ui/init.lua:156: in function <...tart/competitest.nvim/lua/competitest/runner_ui/init.lua:155>

It looks like the viewer window gets uninitialized when the CompetiTest UI is closed; I'm not really sure why, or whether that is desirable or not.

An easy fix is deleting the UI instead of hiding it when closing the CompetiTest UI.

diff --git a/lua/competitest/runner_ui/init.lua b/lua/competitest/runner_ui/init.lua
index 202422d..e6bf2a2 100644
--- a/lua/competitest/runner_ui/init.lua
+++ b/lua/competitest/runner_ui/init.lua
@@ -97,7 +97,7 @@ function RunnerUI:show_ui()
 				api.nvim_set_current_win(self.windows.tc.winid)
 				self.viewer_visible = false
 			else
-				self:hide_ui()
+				self:delete()
 			end
 		end

Fails to get problem or contest when there is a question mark.

I was doing a contest on Codeforces. The first problem had a question mark in its name. I ran CompetiTestReceive contest and then pressed the green button in my browser, but as soon as the data reached Neovim an error occurred because the first problem had a question mark in it. I would highly request you to just remove the characters which cause this problem.

Steps to reproduce:

  1. Go to https://codeforces.com/contest/1800 on browser.
  2. Parse the contest.
  3. Look back at neovim and receive an error.

Receive problems confusion

Does this plugin only receive the URL of the problem, or does it give the actual problem statement?

For example.
(screenshot of a problem page with the statement highlighted)

The highlighted square is the text I would like to receive. Is it possible? I have gotten the template file to work, but I don't see where I received the problem (if that makes sense).

[Feature Request] read all testcases in the testcase directory

Sometimes the problem provides attachments such as sample_01.in and eg_02.in. It is quite a hassle to manually rename these files to the format configured in testcases_single_file_format. Is it possible to configure competitest.nvim so that it reads all files ending with a certain extension, such as .in, in the testcase directory?

Auto receive the whole contest

Receive the whole contest, including the other problems. It should automatically open a buffer for each file, named after the task.
This could be implemented by keeping on receiving data and only closing when no data is transmitted for some amount of time.
This would be really useful and would bring huge convenience.

feat: Create testcases_directory if it doesn't exist

Currently, if testcases_directory doesn't already exist, competitest fails to save tests. This directory should be created if it doesn't exist. I think compile_directory and run_directory face a similar issue, though I haven't tried.

The problem of local configurations

Currently, any receive command asks for the directory and/or the name of the file.

This is not really useful when you are solving problems, and can be really annoying after some time.

The issue is, a configuration option can't be used, as the configuration needs to be local and extended (this is how it currently works with config.load_local_config_and_extend).

Also, if a new problem is downloaded from a problem set, it won't have a specific local configuration, as the problem does not yet exist on the system for it to have a local configuration.

And if a per judge configuration is required, this should be done in the main configuration, as per judge configuration is not local to the problem but rather specific to the judge (by definition).

And specific compiler commands should have another solution than having local config files per problem, maybe a keybinding.

I think dropping the local configuration would allow the project to have configuration options, like "contest_directory", which would enable functionality similar to how cpbooster.nvim works.

If this is fine with the maintainer @xeluxee, I would like to work on such functionality.

feat: filtering output before comparing.

Are you interested in exposing a filter function in the config that is applied to the output before comparing it with the desired one?

I have a debugging function that always prints lines starting with "DBG )>". If competitest compares the output with these lines removed, I can observe the debug output while also checking whether the test passes.

I can currently achieve this by injecting a custom compare method like this:

local compare_methods = require('competitest.compare').methods
function compare_methods.filter_squish(output, expout)
  local strs = {}
  for str in vim.gsplit(output, '\n') do
    if not vim.startswith(str, 'DBG )>') then
      table.insert(strs, str)
    end
  end
  return compare_methods.squish(table.concat(strs, '\n'), expout)
end

require('competitest').setup {
  output_compare_method = 'filter_squish',
}

But it's kind of hacky, so I think it'd be nice if competitest allowed me to provide a filter function in the config.

Some feature recommendations

Hi! First of all, thanks for the great plugin. I started using your plugin a few days back and I am loving it.

There are some features that I would personally love to see. They may or may not fit the goals of the project.

  • Split window for adding test cases
    I am actually really comfortable with split windows, because moving from one window to another just works seamlessly. I am the kind of person who uses default vim bindings like <C-w>h and <C-w>l for moving between windows. Some custom keybindings annoy me, and these bindings don't work properly in a popup window.
  • Closing with autocmds
    If I close a window (with :q) for the testcase editor or runner, it should close all the related windows. Maybe autocmds can be used here; then a special keybinding for closing the window would not be needed.
  • Saving with autocmds
    Saving a testcase should be as simple as saving a file, I feel, because in the end editing a testcase is just editing a file. Maybe we can change the buffer name for intuitiveness. This would remove the need for any keybinding for saving the testcase. If something extra is needed, maybe BufWritePre or BufWritePost autocmds can be used.

These are basically just suggestions to enable me to use competitest buffers like normal buffers and nothing special.

If these are aligned with project goal and you need some sort of help in implementing these, I can definitely help you in these.

Thanks,
Rishabh Dwivedi

Status Line Integration

Hello,
Can you please explain how to integrate the window name into the statusline using lualine, i.e. what is your configuration to produce the second screenshot in the README?
For me the statusline shows the name of the file when it is not active, which I would like to keep, but I'd like to change the statusline only when your buffer variable is set.
Thanks

How to write test cases

I was trying to use this with Rust, but I don't really know how to start. I mean, I wrote a simple lib to do tests, but when I added a test case it doesn't take the inputs, so I don't really know what to do. Sorry if this is a dumb question, I've never tried something like this before.

Error in runner_ui

When I run :CompetiTestRun I get the following error.

E5108: Error executing lua ...ker/start/competitest.nvim/lua/competitest/runner_ui.lua:437: Expected Lua number
stack traceback:
        [C]: in function 'nvim_set_current_win'
        ...ker/start/competitest.nvim/lua/competitest/runner_ui.lua:437: in function 'delete_ui'
        ...ker/start/competitest.nvim/lua/competitest/runner_ui.lua:83: in function 'init_ui'
        ...packer/start/competitest.nvim/lua/competitest/runner.lua:305: in function 'show_ui'
        ...cker/start/competitest.nvim/lua/competitest/commands.lua:81: in function 'run_testcases'
        [string ":lua"]:1: in main chunk

I'm using nvim 0.7. I think you're calling nvim_set_current_win with nil.

Integrations with tools for submitting solutions.

It seems, from reading the readme file, that a tool for submitting solutions has not been decided on yet.

I would say that using online-judge-tools/oj is better, just because cpbooster uses nodejs, which is very annoying and is the reason why I am working on this project :D.

I would like to implement this feature, so I would love some recommendations/tips!

Is there a way to fetch test cases?

Some problems have a lot of test cases, so entering them manually takes a lot of time. So if there could be some sort of integration between this plugin and extensions like Competitive Companion, it would be the best plugin for cp.

Also, thanks for this cool plugin.

CompetiTest receive only gets 1 testcase.

When trying to receive a problem using CompetiTest receive problem and using Competitive Companion on the problem page, I only ever receive a single test case.

I've tried this on Codeforces and CodeChef. I've also tried the plugin cphelper and the same issue happens there.

I think this is an issue with Competitive Companion. I think it's parsing the problem page and only getting the example testcase, which is usually singular. For example, when trying it with this problem: https://codeforces.com/problemset/problem/4/A
the only test case received is the example testcase.

How would I get all the testcases?

Issue with nui

Versions

  1. NeoVim: v0.7.2
  2. competitest: c86a94c - latest
  3. nui.nvim: 62facd3 - latest

Error message

(screenshot of the error message)

Reproducible steps

  1. :CompiTestRun
  2. e
  3. q
  4. :CompiTestRun

[#feature] Importing the name of the problem along with testcase

I am trying to import the name of the problem along with the test cases before solving, once and for all, as it would reduce the cumbersome process of naming the problem each time. If we could also create the file using a pre-selected template, that would be good.

I have seen another competitive programming plugin doing this, and it could be a valuable addition for people who want to solve problems on the go. #feature

Space in file name when receiving problem

USACO problems have spaces in their problem names, hence when receiving a problem the created file has spaces in its name.
This is an issue because USACO doesn't allow files which have spaces in their names.
I manually replace every space with an underscore. Is there a way to automate this?

Why doesn't my `template_file` work when receiving a contest?

My config is as below:

return {
	"xeluxee/competitest.nvim",
	dependencies = "MunifTanjim/nui.nvim",
	keys = {
		{ ";rr", "<cmd>CompetiTest run<cr>", desc = "Competitest run" },
		{ ";ra", "<cmd>CompetiTest receive testcases<cr>", desc = "Receive testcases" },
		{ ";rc", "<cmd>CompetiTest receive contest<cr>", desc = "Receive contest" },
		{ ";rh", "<cmd>CompetiTest show_ui<cr>", desc = "Show UI" },
		{ ";rd", "<cmd>CompetiTest delete_testcase<cr>", desc = "Delete testcases" },
	},
	config = function()
		require("competitest").setup({
			replace_received_testcases = true,
			testcases_use_single_file = true,
			view_output_diff = true,
			template_file = { "/home/fhawk/.config/nvim/template/cf.cpp" },
		})
	end,
}

Whitespaces not compiled correctly.

When we use competitest to receive problems and contests from Codeforces, the names shouldn't have whitespace, because when I try to run the file with my own configuration (input.in, output.out) it leads to an error.
It's not a good idea to name contests and problems with whitespace.

Finally, thank you for this awesome plugin.

Adding many features

Thought I would let you know, in case you are working on something similar, and hear your thoughts on this.

Split mode

(mockup screenshot of the proposed split mode)

Will consist of 6 windows: compilation error, stdin, stderr, stdout, answer. It will be fully customizable. Personally I don't need the compilation error tab because there is already lsp

E.g. {"set nosplitright | vs | setl wfw | wincmd w | bel sp | vs | vs | 1wincmd w", {1, 2, 3, 4, 5}}

See: #8

Integrate the tabline with bufferline's custom section. Or maybe use winbar instead. Or perhaps something similar to incline.nvim? For now I will stick with a custom tabpage :)

Get the most out of companion

Use regex to parse the problem name, and let the user determine the path for it.

E.g.

"https://codeforces.com/contest/(%d+)/problem/(%d)" -> vim.loop.os_homedir() .. "/code/contest/codeforces/$1/$2"
"https://atcoder.jp/contests/(%s+)/tasks/(%s+)" -> vim.loop.os_homedir() .. "/code/contest/atcoder/$1/$2"

Keybinds

Add keybind tips for quick test case management, for example:

  • 2t switch to test 2
  • hide current test, hide all tests, hide AC tests, hide WA tests

Auto parsing

Always listen to the companion port, since we are parsing new problems instead of just the current file

Adding user custom output and local problem

g++ sol.cpp < 1.in > 2.out 2> 2.err
This would also support problems that require reading from a file.
Adding local problem support would also help

Change C++ flags on the fly

g++ -Wall -O3
g++ -g -O2

Maybe compile_commands will help?

Saving

Option to remember state of the current file, like submission history, local flags, test cases status

neovim style commands

:Competitest run 1
:Competitest submit

It makes auto complete much easier

See #7

Debugging

Debugging is very essential; we need to integrate with e.g. nvim-dap

Stress Testing

Also very essential, I will go with codeforces's library
See #9
