KEYNAME_launchers_keynames=(KEYNAME1 KEYNAME2...) # As many as launchers required
#KEYNAME_KEYNAME1_inherit_info=0/1 # Obtain inheritable properties for the launcher from installation info MAYBE NOT NECESSARY
Special new properties for info
KEYNAME_name="NAME OF PROGRAM" # Used for the desktop name and the StartupWMClass
KEYNAME_version="VERSION OF THE PROGRAM" # Used for the version
KEYNAME_description= # already present # Used for the commentary
KEYNAME_tags=("programming_language" "web" ...) # The first one is used for the generic name and all of them for the categories; all of them are also used for the keywords
KEYNAME_arguments=("arg1" "arg2" "arg3") # Also used for the keywords
KEYNAME_associatedfiletypes= # Used for the mimetype
KEYNAME_binariesinstalledpaths= # Used for the exec and the tryexec
KEYNAME_icon=ICON_NAME # Determines a unique icon hardcoded as a static icon in the repository. It will be moved to the installation directory, so the icon can be guessed at runtime
KEYNAME_LAUNCHERKEYNAME_actionkeynames=("openaction" "trashaction")
KEYNAME_LAUNCHERKEYNAME_openaction_exec=("nemo")
KEYNAME_LAUNCHERKEYNAME_openaction_name=("open with nemo")
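As an illustrative sketch of how these properties could map onto the generated launcher (the paths, the MimeType value, and the exact key mapping are assumptions, not the implemented behaviour):

```ini
[Desktop Entry]
Type=Application
Name=NAME OF PROGRAM
Comment=DESCRIPTION OF THE PROGRAM
GenericName=programming_language
Categories=programming_language;web;
Keywords=programming_language;web;arg1;arg2;arg3;
MimeType=text/x-python;
Exec=/path/to/binary
TryExec=/path/to/binary
Icon=/path/to/installation/ICON_NAME
StartupWMClass=NAME OF PROGRAM
Actions=openaction;trashaction;

[Desktop Action openaction]
Name=open with nemo
Exec=nemo
```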
These variables are always used to derive the corresponding properties of the feature's desktop launcher in the manner specified.
Properties that do not need to be specified (hardcoded for all launchers)
From here on, the variables that override the inherited info are: KEYNAME_DESKTOPKEYNAME_name, KEYNAME_DESKTOPKEYNAME_version, KEYNAME_DESKTOPKEYNAME_commentary, etc.
These override the property of the desktop file that would otherwise be guessed from the inherited installation info
Properties that can have a default but can be modified
Basically, we can divide this into two different algorithms. The first one is about code health and static checking of the code, similar to Codacy but for the customizer rules:
Examples of health checks this algorithm could perform:
All features have the mandatory properties
All features use properties from the pool of allowed properties and no outdated ones
Look for discrepancies, such as a feature defining an icon, bash function, or file while not having a folder.
The second one should be about dynamic execution: when a feature is executed, the customizer itself can check the current execution conditions in order to log them into our test file; this could include the installed property, the OS_NAME property, the loaded package manager, XDG_DESKTOP_DIR... and other information relevant to the current instance.
For each installation there should be a test function that checks that each property has been executed with the expected results
It should generate documentation of each feature
We will use the flag state of the customizer core to track the flags that we want to pass to each call.
Move wrappers to the endpoint.
The endpoint recognizes and uses the same args and flags as the installers, so a -v at customizer endpoint level will be passed to each call of the backend.
customizer.sh modes
install
uninstall
parallel
status
update: git fetch
upgrade: git pull
Argument specification.
Main
Import functions common ()
Locate install uninstall scripts()
Process initial args and set flags()
Choose algorithm based on initial args()
Init call strings depending on algorithm()
Process rest of args()
String callString = Process call()
Exec $callString
Args:
They choose the general behaviour of how the arguments will be translated to calls to the backend scripts of the customizer.sh endpoint.
--sudo, --user, --deduce-privileges
By default uses --deduce-privileges, which makes the call to the backend customizer with sudo only if required.
With --user, all calls are unprivileged; with --sudo, all calls are privileged.
--maximize-calls, --optimize-calls, --minimize-calls
By default uses --optimize-calls, which appends to the current call string if the mode has not changed; when the mode changes, the call of the current mode is finished and its string is appended to the final call string. This mode strictly respects the order of the installations. --maximize-calls appends the current call string to the final call string every time a feature is processed in the arguments. This mode also strictly respects the order of the installations. --minimize-calls uses at most four calls, one for each mode if needed. The order starts with sudo uninstall, followed by sudo install, then uninstall and finally install.
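The grouping performed by --optimize-calls can be sketched in isolation. This is a minimal sketch: build_optimized_calls and the backend script names (install.sh, uninstall.sh) are illustrative assumptions, not the real endpoint code.

```shell
# Group consecutive (mode, feature) pairs that share a mode into one backend
# call, preserving the order of the installations (--optimize-calls).
build_optimized_calls() {
  local final="" current_mode="" current_args="" mode feature
  while [ "$#" -ge 2 ]; do
    mode="$1" feature="$2"
    shift 2
    if [ "${mode}" = "${current_mode}" ]; then
      current_args="${current_args} ${feature}"   # same mode: extend the current call
    else
      if [ -n "${current_mode}" ]; then
        final="${final}${current_mode}.sh${current_args}; "  # mode changed: close the call
      fi
      current_mode="${mode}"
      current_args=" ${feature}"
    fi
  done
  if [ -n "${current_mode}" ]; then
    final="${final}${current_mode}.sh${current_args}; "
  fi
  printf '%s\n' "${final%; }"
}

build_optimized_calls install a install b uninstall c install d
# → install.sh a b; uninstall.sh c; install.sh d
```

--maximize-calls would instead emit one call per feature, and --minimize-calls would bucket all features into at most four calls regardless of their relative order across modes.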
--foreground, --background
By default --foreground, which executes the final call while holding your prompt. --background adds a & to the final call in order to run it in the background and return the prompt as soon as the final call is issued.
--sequential, --safe-sequential, --parallel=bash, --parallel=gnome-terminal, --parallel=gnu
By default --safe-sequential, which chains the calls using &&. --sequential uses ; to chain the calls. --parallel=bash appends the calls using &, so each call runs in parallel in the same terminal. --parallel=gnome-terminal passes each call as an argument to a gnome-terminal launched in the background. --parallel=gnu uses GNU parallel to run each call.
--handle-dependencies, --ignore-dependencies
By default --handle-dependencies, which applies only when installing obligated user features that have root dependencies. In those cases, if any of the dependencies is not present, the call is converted into a sudo call with the --dependencies (to be implemented) argument to install the dependencies only, followed by the user call that installs the feature. --ignore-dependencies treats each feature as is.
EDITOR and GIT_EDITOR are special environment variables: EDITOR determines which editor to use when a command does not specify one, and GIT_EDITOR determines which editor git opens, for example when writing the reasons behind a git merge.
In the nano feature declare these variables and check that it works.
Due to using indirect expansion, we do not wrap the variable expansion in double quotes, so it is possible that a special character is interpreted by the shell, causing the main for loop of this function to skip one or more elements and thus not download the icon.
This issue can be seen in the matlab feature in version 0.12.0, where the URL of an icon contained &.
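The underlying hazard can be reproduced in isolation. This is a minimal sketch with an illustrative URL: the original matlab failure involved '&', and the exact breakage depends on how the unquoted value is later consumed, but the quoting fix is the same.

```shell
# Unquoted indirect expansion undergoes word splitting (and globbing);
# quoting the expansion preserves the value as a single intact element.
icon_url='https://example.com/icon with space.png'
var_name="icon_url"

unquoted=( ${!var_name} )      # unquoted indirect expansion: word splitting applies
quoted=( "${!var_name}" )      # quoted indirect expansion: one intact element

echo "${#unquoted[@]} vs ${#quoted[@]}"   # → 3 vs 1
```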
Currently, the capability to add repositories to the package manager is handled using gpgSignatures and sources. This is managed in the background using two variables: APT_SOURCES_LIST_FOLDER and GPG_TRUSTED_FOLDER.
This has to be managed as a single property with key names, in the same way that downloads has downloadKeys.
Also, the logic has to work for any supported package manager.
The changes introduced with the silentFunction capability, a hardcoded bashFunction, cause the features that use this capability to ignore the folder in the current directory, which is not the desired specification for the aliases of the IDEs: IDEs should always open the working directory, even if no arguments are provided.
Current silentFunction functions look like this:
ideau ()
{
    nohup ideau $@ &> /dev/null &
}
But they should look like this (pycharm bashfunction from previous implementations):
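The referenced pycharm function is not reproduced in the issue. A minimal sketch of the intended behaviour, assuming the real IDE binary shares the wrapper's name on the PATH (the function body and helper logic are assumptions, not the historical code):

```shell
# Detach the IDE from the terminal, and open the current working directory
# when no arguments are given, per the desired specification for IDE aliases.
ideau() {
  local bin
  bin="$(type -P ideau)" || return 1   # resolve the real binary, not this function
  if [ "$#" -eq 0 ]; then
    set -- "$(pwd)"                    # no arguments: open the working directory
  fi
  nohup "${bin}" "$@" &> /dev/null &
}
```

Resolving the binary with `type -P` also avoids any ambiguity between the wrapper function and the program it wraps.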
Basically, the F function also searches inside some binary files (not all of them) and also searches inside the .git folder, because there is no explicit rule to ignore this folder.
Reinforce the logic for binary files if possible and also extend the function to ignore .git folders.
Here are the contents of the PROMPT_COMMAND variable, which contains the commands executed every time the prompt is written to the terminal (every time we hit Enter).
The display of the git prompt inside a git repo fails because the condition of finding a .git directory in the current directory is too strict. Relax the condition so that the prompt is reset if and only if we are not inside a git repo.
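One way to relax the condition, as a sketch (the function name is an assumption), is to ask git itself whether we are anywhere inside a work tree instead of testing for a .git folder in the current directory only:

```shell
# True only when the current directory is inside a git work tree,
# at any depth, not just at the repository root.
inside_git_repo() {
  [ "$(git rev-parse --is-inside-work-tree 2> /dev/null)" = "true" ]
}

# Possible use when building PROMPT_COMMAND:
# if inside_git_repo; then build_git_prompt; else reset_prompt; fi
```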
Activating the autostart flag makes the customizer believe that a desktop launcher is available in /usr/share/applications or $HOME/.local/share/applications, following the legacy behaviour of copying launchers. As such, it copies the corresponding launcher of the application currently being installed to the autostart folder.
This behaviour needs to be integrated as an additional flag or behaviour of the dynamic launcher function. It simply needs to also copy (or link) the created desktop launcher into the autostart folder if the flag is active.
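A hedged sketch of that behaviour follows. The function name, and receiving the created launcher's path as an argument, are assumptions about how the dynamic launcher function would invoke it.

```shell
# Place a copy of the generated launcher into the user's autostart folder.
autostart_dir="${XDG_CONFIG_HOME:-${HOME}/.config}/autostart"

copy_launcher_to_autostart() {
  local launcher_path="$1"            # .desktop file produced by the dynamic launcher function
  mkdir -p "${autostart_dir}"
  cp -- "${launcher_path}" "${autostart_dir}/"
  # Linking instead of copying would keep both launchers in sync:
  # ln -sf "${launcher_path}" "${autostart_dir}/"
}
```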
The pgadmin feature was not working mainly because the venv was not being created due to the absence of ensurepip, which is not included in the default Python package since Python 3.8. This seems to be a problem specific to Ubuntu and Debian systems.
This is what the installation was saying:
I have added python3.8 and python3.8-venv as dependencies of both pgadmin and customizer. PR incoming
Currently, if an argument does not match any feature, we try to expand the argument as a wrapper, and that causes problems when the argument contains special characters, for example ç.
We need to skip the problematic argument from the input and issue a warning when such a character is detected in an argument.
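A possible guard, as a sketch: reject an argument before wrapper expansion if it contains characters outside an accepted set. The accepted set and the function name are assumptions.

```shell
# Returns success only for arguments made of portable characters.
argument_is_safe() {
  local LC_ALL=C   # byte-wise ranges, so multibyte characters such as ç are rejected
  case "$1" in
    *[!a-zA-Z0-9_-]*) return 1 ;;   # unaccepted character present: warn and skip the argument
    *) return 0 ;;
  esac
}
```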
Trim data from data_features.sh into single files named ${keyname}_properties.sh
Currently the features are loaded but never dynamically unloaded. We would need to unset the properties of a feature after its execution is complete.
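A sketch of how such unloading could look, assuming every property of a feature shares the ${keyname}_ prefix:

```shell
# List the declared variable names with compgen -v and unset each one
# once the feature's execution finishes.
unload_feature_properties() {
  local var
  for var in $(compgen -v "${1}_"); do
    unset "${var}"
  done
}
```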
The main change in this minor v0.16 is going to be the transformation of data from bash variable declaration and indirect expansion to data in JSON format, maintaining the structure already present with indirect expansion.
Consequences
This will ease the manipulation of data, but it will add an extra dependency to the customizer (a JSON parser). Also, it will remove almost every indirect expansion in our code, which increases the readability and maintainability of the code. Luckily, we do not actually need to install the jq package (which could cause problems in minimal systems such as termux); instead, we can ensure the download of the binary of the shell program jq from this link, or upload it statically into the repository.
Also, this allows the decoupling of data and business logic, which makes it easier to change the customizer core while maintaining all the precoded features.
There are other binaries that could be downloaded like this or could be uploaded to the repository such as wget from this link, but this is for another issue.
Design
Specification
The first thing that we need to do is design the JSON structure of a feature. The JSON structure of a feature will consist of metadata and a list of tasks. Each task will have different fields in order to provide the data for that certain task. Tasks could have implicit or optional fields. This is equivalent to the current specification.
But, differently from indirect expansion, tasks will not be grouped by type in the same list. Instead, each task will be represented as a single dictionary at a certain position of the tasks list in the JSON. The tasks will be executed in the order in which they appear in the list.
Naturally, manual content will also be reimplemented, since with this new specification there is no need to have three types of manual content execution depending on when the manual code is executed (pre, mid or post); instead, we will be able to declare as many manual contents as we need, and we will be able to execute a manual content at any point of the installation. This will increase the flexibility of a feature installation.
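As a sketch of what such a feature file could look like (the field names are illustrative, not a finalized schema):

```json
{
  "metadata": {
    "name": "PyCharm",
    "description": "Integrated development environment"
  },
  "tasks": [
    { "taskType": "download", "url": "https://example.com/pycharm.tar.gz" },
    { "taskType": "manualContent", "script": "echo 'runs between two tasks'" },
    { "taskType": "launcher", "name": "PyCharm" }
  ]
}
```

The manualContent task can appear at any position of the list, which is what removes the need for the fixed pre/mid/post slots.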
Implementation
Add the dependencies: upload the JSON parser to the repo and ensure its existence in each run. Create a variable JSON_PARSER_BINARY that always points to the jq binary, so it is available at any point of the code.
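A minimal sketch of that bootstrap check. CUSTOMIZER_PROJECT_FOLDER and the bin/ location are assumptions about the repository layout; the actual download URL referenced above is not reproduced here.

```shell
# Single variable pointing at the bundled jq, checked before any JSON work.
JSON_PARSER_BINARY="${CUSTOMIZER_PROJECT_FOLDER:-.}/bin/jq"

ensure_json_parser() {
  if [ ! -x "${JSON_PARSER_BINARY}" ]; then
    echo "ERROR: jq binary not found at ${JSON_PARSER_BINARY}" >&2
    return 1   # here the real code could fetch the static jq binary instead
  fi
}
```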
Once, at compile time, create a tool script that reads the properties of each feature and constructs its JSON: basically, we need to look into the code of execute_installation and create a task for each call to a generic_${FLAG_MODE} function, following the order; instead of executing the task, our mission is to parse the old data from bash variables into a list of tasks in JSON. We can add these changes in a tool or in a temporary branch, since this code will be used just once.
Ensure format: from the previous task, ensure that the contents of the JSON are correctly related to the files that they refer to. Also ensure that tasks are correctly parsed and that their data is equivalent in bash and JSON. For that testing purpose we should use representative, large, and stable features such as pycharm, pgadmin, caffeine... Of course, this test cannot be fully performed until we implement the full JSON parser in our code, so we can test the features after they are encoded in JSON.
Convert the execute_installation function into a loop that reads all tasks from the tasks list of a feature in order. For each task, it detects the type of that task (using a common field, taskType, that all tasks will have) and calls a function that parses that concrete data and returns it. We will have a dictionary that tells us which task types are available and which fields each task type has. This simplifies the code by deleting many per-task if blocks.
execute_installation()
{
  local num_tasks
  num_tasks="$(jq '.tasks | length' "${CURRENT_FEATURE_JSON_PATH}")"
  for (( CURRENT_INSTALLATION_TASK_NUMBER = 0; CURRENT_INSTALLATION_TASK_NUMBER < num_tasks; CURRENT_INSTALLATION_TASK_NUMBER++ )); do
    # Obtain the task type of task $CURRENT_INSTALLATION_TASK_NUMBER
    taskType="$(jq -r ".tasks[${CURRENT_INSTALLATION_TASK_NUMBER}].taskType" "${CURRENT_FEATURE_JSON_PATH}")"
    # fields contains the field names available in this task type
    fields="${TASK_FIELDS[${taskType}]}"
    # Implicitly declare ${CURRENT_INSTALLATION_FEATURE_KEYNAME}_${CURRENT_INSTALLATION_TASK_NUMBER}_${fields[0]}, ... with the
    # content of each field and return the number of parsed fields. Process substitution (not a pipe) keeps
    # parseJsonFields out of a subshell, so the declared variables persist.
    parseJsonFields ${fields} < <(jq ".tasks[${CURRENT_INSTALLATION_TASK_NUMBER}]" "${CURRENT_FEATURE_JSON_PATH}")
    # References the variables for the fields of this task type and calls the worker function, such as download
    generic_${FLAG_MODE}_${taskType}
  done
}
The way of returning data will be by supplying the JSON that we want to parse to the function via stdin, and the names of the fields that we want to parse as arguments. These fields will be parsed and declared as ${CURRENT_FEATURE_KEYNAME}_${CURRENT_INSTALLATION_TASK_NUMBER}_${FIELD_TO_PARSE}, so the caller or other functions can access them after calling the parsing function.
Add functions that encapsulate the logic of parsing the JSON of a certain task. These are the generic_${FLAG_MODE}_${taskType} functions, which will be modified in order to perform only parsing and error processing of the data.
Summing up, we will have a new execute_installation function that parses and executes the tasks sequentially, and a set of generic_${FLAG_MODE}_${taskType} functions that reference the variables ${CURRENT_INSTALLATION_FEATURE_KEYNAME}_${CURRENT_INSTALLATION_TASK_NUMBER}_${fields[1]}... declared by parseJsonFields ${fields}, which parses from the JSON supplied on stdin all the fields whose names it receives as arguments and declares them into the corresponding variables.
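A hedged sketch of such a parseJsonFields (requires jq and bash >= 4.2 for declare -g; the exact naming follows the text above but is not final):

```shell
# Read the task JSON from stdin, take field names as arguments, and declare
# each field globally under the agreed naming convention.
parseJsonFields() {
  local json field value parsed=0
  json="$(cat)"   # read the whole task object from stdin once
  for field in "$@"; do
    value="$(jq -r --arg f "${field}" '.[$f] // empty' <<< "${json}")"
    declare -g "${CURRENT_INSTALLATION_FEATURE_KEYNAME}_${CURRENT_INSTALLATION_TASK_NUMBER}_${field}=${value}"
    parsed=$(( parsed + 1 ))
  done
  return "${parsed}"   # number of parsed fields, as the specification asks
}
```

Note that the function must not sit at the end of a pipeline (cat file | jq ... | parseJsonFields would run it in a subshell and lose the declared variables); feed stdin with a herestring or process substitution instead.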
Wrappers must be created using the tags property, which is present in all features. Each element in the tags array will be used to add that feature to a wrapper with the name of the tag.
This computation has to be done when a wrapper is requested for the first time in a run (in our logic, this means that it is not a recognized feature).
Update the version and also update the Exec property of the desktop launcher, since it is using the -e argument of gnome-terminal, which is deprecated.
After trying this for a bit I wanted to remove it again. However, my terminal complains about
bash: ~/.customizer/data/functions.sh: No such file or directory
whenever I open a new terminal window. I already looked through the usual .profile .bashrc etc entries, but couldn't find a place where this would still be called.
Any idea how to get rid of this?
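One way to track down the leftover reference is to grep the usual bash startup files for the source line (the list below is not exhaustive; /etc/profile.d/*.sh may also apply on some systems):

```shell
# Print file and line number of any remaining reference to functions.sh.
grep -ns "customizer/data/functions.sh" \
  ~/.bashrc ~/.profile ~/.bash_profile ~/.bash_login \
  /etc/bash.bashrc /etc/profile 2> /dev/null \
  || echo "no reference found in the checked files"
```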