ashnkumar / sketch-code
Keras model to generate HTML code from hand-drawn website mockups. Applies an image captioning architecture to hand-drawn source images.
Hello, I am referring to your sketch-code project. Could you tell me the name of the algorithm it uses?
Thanks in advance!
Hi!
There's a bug in the README:
Copyright (c) 2018 Ashwin Kumar<[email protected]**@gmail.com**>
Hi, we are working on a similar project and we are trying to generate more data by converting additional screenshots to sketches.
If possible, would you mind elaborating on the following part of your blog post?
My final pipeline added one further step, which augmented these images by adding skews, shifts, and rotations to mimic the variability in actual drawn sketches.
We managed to get the contours of a screenshot, but we are having a hard time adding skews, shifts, and rotations to the contours.
Thanks so much! My email is [email protected], just in case.
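Not the author, but one way to apply skews, shifts, and rotations to contours is to compose them as 3×3 homogeneous affine matrices and multiply the contour points through. A minimal NumPy sketch (function names and parameter ranges are my own, not from the repo); contours are assumed to be (N, 2) arrays of (x, y) points, as you'd get from reshaping cv2.findContours output:

```python
import numpy as np

def random_affine(max_shift=10, max_rot_deg=5, max_skew=0.1, rng=None):
    """Compose a random shift, rotation, and skew into one 3x3 matrix."""
    rng = rng or np.random.default_rng()
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)
    theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    sx, sy = rng.uniform(-max_skew, max_skew, size=2)

    shift = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0, 0, 1]], dtype=float)
    skew = np.array([[1, sx, 0], [sy, 1, 0], [0, 0, 1]], dtype=float)
    return shift @ rot @ skew

def transform_contour(points, matrix):
    """Apply a 3x3 affine matrix to an (N, 2) array of contour points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ matrix.T)[:, :2]
```

The same matrix (its top two rows) can be passed to cv2.warpAffine to transform the rendered image consistently with its contours.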
This line in convert_single_image.py is causing problems:
from classes.inference.Sampler import *
Traceback (most recent call last):
File "convert_single_image.py", line 7, in <module>
from classes.inference.Sampler import *
ImportError: No module named inference.Sampler
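This import error usually means the script was launched from the repo root rather than from inside src/, where the classes package lives. Either `cd src` before running, or prepend the source directory to sys.path; a small sketch (the path below is hypothetical, adjust it to your checkout):

```python
import os
import sys

# Hypothetical path: point this at the src/ folder of your checkout.
SRC_DIR = os.path.abspath(os.path.join("sketch-code", "src"))
if SRC_DIR not in sys.path:
    sys.path.insert(0, SRC_DIR)

# After this, `from classes.inference.Sampler import *` should resolve,
# assuming classes/ really lives under SRC_DIR.
```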
I have installed numpy and my Python is 3.6, but it still says ModuleNotFoundError: No module named 'numpy'. Is there some issue with my Python path?
When I trained on my own data, it generated multiple .h5 files. How do I use them?
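If the training script checkpoints weights each epoch (as Keras's ModelCheckpoint callback typically does), each .h5 is a snapshot, and you'd normally load only the most recent (or best-scoring) one. A small helper for picking the newest file; the function name and pattern are my own:

```python
import glob
import os

def latest_checkpoint(folder, pattern="*.h5"):
    """Return the most recently modified checkpoint file in `folder`."""
    candidates = glob.glob(os.path.join(folder, pattern))
    if not candidates:
        raise FileNotFoundError("no files matching %s in %s" % (pattern, folder))
    # Most recent modification time wins.
    return max(candidates, key=os.path.getmtime)
```

Usage would then be along the lines of `model.load_weights(latest_checkpoint("checkpoints/"))`.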
Environment: CentOS 7
Python version: 3.5.1
All packages in requirements.txt were installed without error.
And files in data/ & bin/ were downloaded.
Run script:
python3 convert_single_image.py --png_path ../examples/drawn_example1.png --output_folder ./generated_html --model_json_file ../bin/model_json.json --model_weights_file ../bin/weights.h5
It raises the error below:
Traceback (most recent call last):
File "convert_single_image.py", line 53, in <module>
main()
File "convert_single_image.py", line 49, in main
model_weights_path = model_weights_file)
File "/home/apps/git/sketch-code/src/classes/inference/Sampler.py", line 23, in __init__
self.model = self.load_model(model_json_path, model_weights_path)
File "/home/apps/git/sketch-code/src/classes/inference/Sampler.py", line 74, in load_model
loaded_model.load_weights(model_weights_path)
File "/root/.local/share/virtualenvs/sketch-code-_Pp_OLh1/lib/python3.5/site-packages/keras/engine/topology.py", line 2616, in load_weights
f = h5py.File(filepath, mode='r')
File "/root/.local/share/virtualenvs/sketch-code-_Pp_OLh1/lib/python3.5/site-packages/h5py/_hl/files.py", line 269, in __init__
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/root/.local/share/virtualenvs/sketch-code-_Pp_OLh1/lib/python3.5/site-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (truncated file: eof = 138280960, sblock->base_addr = 0, stored_eoa = 558939120)
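This OSError (eof smaller than stored_eoa) means the weights.h5 download was cut off: the file claims roughly 559 MB (stored_eoa = 558939120 bytes) but only about 138 MB arrived. Re-downloading usually fixes it. A quick sanity check you can run before loading, assuming you know roughly how large the file should be (the function name is my own); note HDF5 files always begin with an 8-byte signature, which also catches HTML error pages saved as .h5:

```python
import os

HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"

def check_weights_file(path, expected_bytes=None):
    """Return True if `path` starts with the HDF5 signature and, when
    `expected_bytes` is given, is at least that large on disk."""
    with open(path, "rb") as f:
        if f.read(8) != HDF5_SIGNATURE:
            return False  # e.g. an HTML error page saved as .h5
    if expected_bytes is not None and os.path.getsize(path) < expected_bytes:
        return False  # truncated download
    return True
```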
Hi,
I am trying to add a new component 'Dropdown' to the dataset. Following are the steps performed:
But, the model is still not able to generate dropdown tags for the example images. The validation error also has increased while training the model.
Has anyone else tried to add new elements to the dataset?
Example Image:
Thanks and Regards,
Karan
Hi all.
I'd like to help, but first I want to know: is this project still active?
Hi, many thanks for sharing the data and code. How can we take this forward and generate more data beyond the synthesized set? Can we create the same kind of dataset for real-world HTML pages? If so, how can we generate the .gui files for them? If you have any resources or thoughts, please share.
Could not find a version that satisfies the requirement Keras==2.1.2 (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for Keras==2.1.2 (from -r requirements.txt (line 1))
Augmentation is used to generate more data, isn't it? But augment_and_save_images doesn't generate more training data; it only modifies the existing images, and the output has the same length as the input. Is it just meant to improve robustness?
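If you want augmentation to actually grow the dataset rather than transform it in place, one option is to write several augmented variants per input image. A minimal sketch (my own code, not from the repo) using random circular shifts via np.roll as a stand-in for the full skew/shift/rotate pipeline:

```python
import numpy as np

def augment_k_copies(image, k=3, max_shift=5, rng=None):
    """Return k randomly shifted copies of `image`, so the dataset grows
    k-fold instead of being overwritten in place."""
    rng = rng or np.random.default_rng()
    copies = []
    for _ in range(k):
        dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
        # np.roll wraps pixels around; for real data you may prefer
        # padding with the background color instead.
        copies.append(np.roll(image, shift=(dy, dx), axis=(0, 1)))
    return copies
```

Saving each returned copy under a new filename (alongside a copy of the original .gui file) multiplies the training set size by k.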
Hi,
I am getting the errors below while installing. Can you please help me fix this issue?
C:\sketch-code\sketch-code>pip install -r requirements.txt
Collecting Keras==2.1.2
Using cached Keras-2.1.2-py2.py3-none-any.whl (304 kB)
Collecting tensorflow==2.2.0rc1
Using cached tensorflow-2.2.0rc1-cp38-cp38-win_amd64.whl (459.1 MB)
Collecting nltk==3.2.5
Using cached nltk-3.2.5.tar.gz (1.2 MB)
Collecting opencv-python==3.4.8.29
Using cached opencv_python-3.4.8.29-cp38-cp38-win_amd64.whl (31.1 MB)
Collecting h5py==2.7.1
Using cached h5py-2.7.1.tar.gz (264 kB)
Collecting matplotlib==2.0.2
Using cached matplotlib-2.0.2.tar.gz (53.9 MB)
ERROR: Command errored out with exit status 1:
command: 'c:\users\ratikant\appdata\local\programs\python\python38\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\ratikant\AppData\Local\Temp\pip-install-5g3axj_i\matplotlib\setup.py'"'"'; __file__='"'"'C:\Users\ratikant\AppData\Local\Temp\pip-install-5g3axj_i\matplotlib\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\ratikant\AppData\Local\Temp\pip-pip-egg-info-vvl3p_rm'
cwd: C:\Users\ratikant.sahoo\AppData\Local\Temp\pip-install-5g3axj_i\matplotlib
Complete output (67 lines):
============================================================================
Edit setup.cfg to change the build options
BUILDING MATPLOTLIB
matplotlib: yes [2.0.2]
python: yes [3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020,
23:03:10) [MSC v.1916 64 bit (AMD64)]]
platform: yes [win32]
REQUIRED DEPENDENCIES AND EXTENSIONS
numpy: yes [version 1.18.3]
six: yes [using six version 1.14.0]
dateutil: yes [using dateutil version 2.8.1]
functools32: yes [Not required]
subprocess32: yes [Not required]
pytz: yes [pytz was not found. pip will attempt to install
it after matplotlib.]
cycler: yes [cycler was not found. pip will attempt to
install it after matplotlib.]
tornado: yes [using tornado version 6.0.4]
pyparsing: yes [pyparsing was not found. It is required for
mathtext support. pip/easy_install may attempt to
install it after matplotlib.]
libagg: yes [pkg-config information for 'libagg' could not
be found. Using local copy.]
freetype: no [The C/C++ header for freetype (ft2build.h)
could not be found. You may need to install the
development package.]
png: no [The C/C++ header for png (png.h) could not be
found. You may need to install the development
package.]
qhull: yes [pkg-config information for 'qhull' could not be
found. Using local copy.]
OPTIONAL SUBPACKAGES
sample_data: yes [installing]
toolkits: yes [installing]
tests: no [skipping due to configuration]
toolkits_tests: no [skipping due to configuration]
OPTIONAL BACKEND EXTENSIONS
macosx: no [Mac OS-X only]
qt5agg: no [PyQt5 not found]
qt4agg: no [PySide not found; PyQt4 not found]
gtk3agg: no [Requires pygobject to be installed.]
gtk3cairo: no [Requires cairocffi or pycairo to be installed.]
gtkagg: no [Requires pygtk]
tkagg: yes [installing; run-time loading from Python Tcl /
Tk]
wxagg: no [requires wxPython]
gtk: no [Requires pygtk]
agg: yes [installing]
cairo: no [cairocffi or pycairo not found]
windowing: yes [installing]
OPTIONAL LATEX DEPENDENCIES
dvipng: no
ghostscript: no
latex: no
pdftops: no
OPTIONAL PACKAGE DATA
dlls: no [skipping due to configuration]
============================================================================
* The following required packages can not be built:
* freetype, png
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Dataset and the pre-trained model files are not accessible. Getting 403 Forbidden from the server
$ sh get_data.sh
mkdir: cannot create directory ‘../data’: File exists
--2022-03-18 12:14:57--  http://sketch-code.s3.amazonaws.com/data/all_data.zip
Resolving sketch-code.s3.amazonaws.com (sketch-code.s3.amazonaws.com)... 52.217.16.188
Connecting to sketch-code.s3.amazonaws.com (sketch-code.s3.amazonaws.com)|52.217.16.188|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2022-03-18 12:14:57 ERROR 403: Forbidden.
Hi,
I am not able to find the code for making the images look hand-drawn. Is that code not released? Will it be possible for you to release that pipeline?
Thank you!
When I try to train on my own dataset, I don't know how to get the .gui files for my images.
This error occurred when trying to execute even on an example PNG file included in the examples folder:
Traceback (most recent call last):
File "convert_single_image.py", line 53, in <module>
main()
File "convert_single_image.py", line 49, in main
model_weights_path = model_weights_file)
File "/content/drive/My Drive/testp2c/sketch-code/src/classes/inference/Sampler.py", line 23, in __init__
self.model = self.load_model(model_json_path, model_weights_path)
File "/content/drive/My Drive/testp2c/sketch-code/src/classes/inference/Sampler.py", line 74, in load_model
loaded_model.load_weights(model_weights_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 2234, in load_weights
hdf5_format.load_weights_from_hdf5_group(f, self.layers)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 700, in load_weights_from_hdf5_group
layer, weight_values, original_keras_version, original_backend)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 410, in preprocess_weights_for_loading
return _convert_rnn_weights(layer, weights)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 571, in _convert_rnn_weights
raise ValueError('%s is not compatible with %s' % types)
ValueError: GRU(reset_after=False) is not compatible with GRU(reset_after=True)
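For reference, the released weights appear to have been saved by a Keras-2.1-era GRU, which defaulted to reset_after=False; TF2's tf.keras GRU defaults to reset_after=True, hence the incompatibility. One workaround, assuming you can recreate the old environment, is to pin a contemporaneous stack in requirements.txt instead of loading the weights under TF2. A hypothetical pin set (the tensorflow version here is my suggestion of a release from the same era, not taken from the repo):

```
# Load Keras-2.1-era GRU weights (reset_after=False) with a matching
# stack rather than TF2's tf.keras (GRU defaults to reset_after=True).
Keras==2.1.2
tensorflow==1.4.0
h5py==2.7.1
```

Alternatively, rebuild the model defining its recurrent layers with GRU(..., reset_after=False) before calling load_weights; the exact layer arguments depend on the model definition in the repo.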
Hi, when I execute sh scripts/get_data.sh, it says: cannot find or open ../data/all_data.zip, ../data/all_data.zip.zip or ../data/all_data.zip.ZIP, and I really can't find any file in the data folder. What can I do about this? Thank you.
Hi guys,
I found a dataset of 100+ hand-drawn wireframes, and some of the wireframes were drawn by myself.
How do I train the model on this new wireframe dataset and use it in this project from scratch, from installing the packages through to generating HTML code?
Thank you :)
Ubuntu 16.04 comes with Python 3.6, and downgrading is very troublesome.
Using TensorFlow backend.
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6