A comprehensive toolkit built with PyTorch, designed to facilitate the training, evaluation, and visualization of autoencoders. From simple linear autoencoders to convolutional and variational architectures, this project offers an intuitive and expandable framework for anyone delving into the realm of unsupervised learning.
To ensure the integrity, reliability, and quality of our code, it's essential to implement automated testing. This task involves setting up a testing framework and creating initial tests for the existing functionalities.
Tasks:
1. Framework Selection:
Evaluate and select an appropriate testing framework for the project (e.g., pytest or unittest).
Install necessary dependencies and set up the environment for test execution.
2. Initial Setup:
Set up a dedicated directory for tests (e.g., /tests).
Create templates or base files for future tests.
3. Test Writing:
Identify the main modules and functionalities that require test coverage.
Start by writing tests for critical or high-risk functionalities.
As tests are written, run them frequently to ensure they work as expected.
4. Continuous Integration:
Integrate test execution into the CI/CD pipeline, ensuring that tests run on all new commits or pull requests.
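For illustration, a minimal CI workflow could look like the sketch below (shown as a GitHub Actions file; the filename `.github/workflows/tests.yml`, the Python version, and the `requirements.txt` path are assumptions to adapt to the project):

```yaml
# Illustrative workflow: run the test suite on every push and pull request.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt pytest
      - run: pytest tests/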
Notes:
Remember to follow best practices for testing, such as keeping tests atomic, independent, and clear.
Keep the team informed about testing practices; consider hosting a brief training or review session to align everyone involved.
Consider using code coverage tools, like coverage.py, to monitor the extent of test coverage.
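As a concrete starting point, a first pytest test could check that an autoencoder preserves input shape and produces finite values. The class below is a hypothetical stand-in; replace it with an import of the project's actual model (e.g. a linear autoencoder module):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the project's linear autoencoder; swap in the
# real import once the tests/ directory is wired up.
class LinearAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def test_reconstruction_shape():
    # The reconstruction must match the input shape exactly.
    model = LinearAutoencoder(input_dim=784, latent_dim=32)
    x = torch.randn(8, 784)
    assert model(x).shape == x.shape

def test_output_is_finite():
    # A freshly initialized model should never emit NaN or inf.
    model = LinearAutoencoder()
    assert torch.isfinite(model(torch.randn(4, 784))).all()
```

Tests like these are atomic and independent, in line with the best practices noted above.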
To enhance the code quality and maintainability of our project, we intend to integrate pylint as a linting tool and ensure that our code adheres to its guidelines and standards.
Tasks:
1. Installation:
Install pylint as a development dependency in the project.
Update any necessary documentation or README to inform contributors about the use of pylint.
2. Configuration:
Set up a .pylintrc configuration file if there are specific rules we want to enable/disable or modify based on our project's requirements.
Ensure that default pylint rules align with our coding standards and practices.
3. Code Adaptation:
Run pylint against the current codebase to identify areas of non-compliance.
Address the warnings and errors reported by pylint.
It's advisable to handle refactoring in chunks, possibly through multiple pull requests, to make reviewing easier.
4. Continuous Integration:
Integrate pylint checks into our CI/CD pipeline if applicable. This ensures that future code submissions are checked against pylint standards before merging.
Notes:
While adapting to pylint rules, ensure that code functionality isn't altered in the process.
Consider creating separate issues or pull requests for large modules or files to keep the review process focused and manageable.
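As an example, a starting `.pylintrc` might look like the sketch below. The specific rule choices are suggestions, not agreed project policy; in particular, `generated-members=torch.*` is a common workaround for false `no-member` warnings on PyTorch code:

```ini
[MASTER]
ignore=tests

[MESSAGES CONTROL]
# Small nn.Module subclasses often trip this check.
disable=too-few-public-methods

[TYPECHECK]
# torch attributes are partly generated at runtime and confuse pylint.
generated-members=torch.*

[FORMAT]
max-line-length=100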
Currently, the VAE design is hard-coded to accept images of size 64x64. This restricts the model's flexibility to be trained and tested on datasets with varying image sizes.
Steps to reproduce:
Run the project with "vae" type.
Try to initialize the VAE with a different image size, e.g., 128x128.
Observe the error or improper behavior of the model when training or testing on differently sized images.
Expected outcome:
The VAE should be able to accept any image size, much like the vanilla autoencoder model.
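One way to remove the hard-coded 64x64 assumption is to take the image size as a constructor argument and derive all layer dimensions from it. The class below is an illustrative sketch, not the project's actual VAE (names and layer widths are assumptions):

```python
import torch
import torch.nn as nn

class FlexibleVAE(nn.Module):
    """Sketch of a linear VAE whose layer sizes are derived from image_size."""

    def __init__(self, image_size=64, channels=1, latent_dim=16):
        super().__init__()
        # All sizes follow from the constructor arguments, not constants.
        self.input_dim = channels * image_size * image_size
        self.encoder = nn.Linear(self.input_dim, 256)
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, self.input_dim),
        )

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = torch.relu(self.encoder(x.flatten(start_dim=1)))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z).view_as(x), mu, logvar
```

With this pattern, `FlexibleVAE(image_size=128)` and `FlexibleVAE(image_size=64)` both work without code changes, matching the flexibility of the vanilla autoencoder.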
Currently, the ConvolutionalVAE design is tailored to specifically handle images of size 64x64. Due to hardcoded dimensions in the dense (fully connected) layers, the model cannot easily adapt to different image sizes without manual modifications.
Steps to reproduce:
Run the project with "conv_vae" type.
Try to forward an image of a different size, e.g., 128x128 or 32x32, through the model.
Observe the mismatch error or unexpected behavior due to fixed input-output sizes of certain layers, especially the dense layers related to the latent space.
Expected outcome:
The ConvolutionalVAE should ideally be as flexible as the ConvolutionalAutoencoder in handling any square image size.
Current outcome:
The ConvolutionalVAE can only handle images of size 64x64; any other size raises dimension-mismatch errors or produces unexpected results.
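A common fix for hardcoded dense-layer dimensions is to run a dummy tensor through the convolutional stack at construction time and infer the flattened size from its output. The model below is a simplified sketch (layer counts and widths are assumptions, and it assumes square inputs divisible by the total stride):

```python
import math
import torch
import torch.nn as nn

class FlexibleConvVAE(nn.Module):
    """Sketch: infer the dense-layer size from a dummy forward pass."""

    def __init__(self, image_size=64, channels=1, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Push a zero tensor through the conv stack to discover the feature
        # shape, instead of hard-coding the 64x64 result.
        with torch.no_grad():
            dummy = torch.zeros(1, channels, image_size, image_size)
            self.feat_shape = self.encoder(dummy).shape[1:]
        flat = math.prod(self.feat_shape)
        self.fc_mu = nn.Linear(flat, latent_dim)
        self.fc_logvar = nn.Linear(flat, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, flat)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, channels, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):
        h = self.encoder(x).flatten(start_dim=1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        h = self.fc_dec(z).view(-1, *self.feat_shape)
        return self.decoder(h), mu, logvar
```

Because each stride-2 convolution is mirrored by a stride-2 transposed convolution, the same model handles 32x32, 64x64, or 128x128 inputs.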
We've identified an issue with our current Docker image: it fails to utilize GPU resources even when they are available. This hurts performance-intensive tasks, especially workloads optimized for the GPU.
Expected Behavior:
When deploying the Docker container on a machine with GPU resources, the application inside the Docker should be able to leverage the GPU for its processes.
Current Behavior:
The application running inside the Docker container only uses CPU resources and doesn't seem to recognize or utilize the available GPU.
Steps to Reproduce:
Pull the current Docker image from our repository.
Deploy a container using the pulled image on a machine with GPU resources.
Monitor resource usage and observe that only the CPU is utilized while the GPU remains idle.
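Two things are typically needed for a PyTorch container to see the GPU: a CUDA-enabled base image and GPU access at run time via the NVIDIA Container Toolkit. The fragment below is a sketch; the base-image tag is illustrative and should match the project's torch/CUDA versions:

```dockerfile
# Illustrative: base the image on a CUDA-enabled PyTorch tag rather than a
# CPU-only one (pick a tag matching the project's torch version).
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

WORKDIR /app
COPY . /app
CMD ["python", "main.py"]

# The container must also be started with GPU access, e.g.:
#   docker run --gpus all <image> \
#       python -c "import torch; print(torch.cuda.is_available())"
# This should print True on a machine with a working NVIDIA driver.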