chriscummins / programl
A Graph-based Program Representation for Data Flow Analysis and Compiler Optimizations
License: Other
//deeplearning/ml4pl/testing:random_programl_generator
//deeplearning/ml4pl/testing:random_networkx_generator
//deeplearning/ml4pl/testing:random_graph_tuple_generator
//deeplearning/ml4pl/testing:random_graph_tuple_database_generator
//deeplearning/ml4pl/testing:random_log_database_generator
A minor refactor to remove the LLVM-specific stuff. Add a compiler column set to llvm-6.0 when migrating the existing data.
Many of the opt pointer lists can't be parsed, e.g.
(float addrspace(1)** %6, 8), (float addrspace(1)** %7, 8), (float addrspace(1)* %14, 4), (float addrspace(1)* %19, 4), (float addrspace(1)* %25, 4), (float addrspace(1)* %30, 4), (float addrspace(1)* %37, 4), (float addrspace(1)* %42, 4), (float addrspace(1)* %49, 4), (float addrspace(1)* %54, 4), (float addrspace(1)* %61, 4), (float addrspace(1)* %66, 4), (float addrspace(1)* %73, 4), (float addrspace(1)* %78, 4), (float addrspace(1)* %85, 4), (float addrspace(1)* %90, 4), (float addrspace(1)* %97, 4), (float addrspace(1)* %102, 4), (float addrspace(1)* %109, 4), (float addrspace(1)* %114, 4), (float addrspace(1)* %121, 4), (float addrspace(1)* %126, 4), (float addrspace(1)* %133, 4), (float addrspace(1)* %138, 4), (float addrspace(1)* %145, 4), (float addrspace(1)* %150, 4), (float addrspace(1)* %157, 4), (float addrspace(1)* %162, 4), (float addrspace(1)* %169, 4), (float addrspace(1)* %174, 4), (float addrspace(1)* %181, 4), (float addrspace(1)* %186, 4), (float addrspace(1)* %193, 4), (float addrspace(1)* %198, 4), (float addrspace(1)* %205, 4), (float addrspace(1)* %210, 4), (float addrspace(1)* %217, 4), (float addrspace(1)* %222, 4), (float addrspace(1)* %229, 4), (float addrspace(1)* %234, 4), (float addrspace(1)* %241, 4), (float addrspace(1)* %246, 4), (float addrspace(1)* %253, 4), (float addrspace(1)* %258, 4), (float addrspace(1)* %265, 4), (float addrspace(1)* %270, 4), (float addrspace(1)* %277, 4), (float addrspace(1)* %282, 4), (float addrspace(1)* %289, 4), (float addrspace(1)* %294, 4), (float addrspace(1)* %301, 4), (float addrspace(1)* %306, 4), (float addrspace(1)* %313, 4), (float addrspace(1)* %318, 4), (float addrspace(1)* %325, 4), (float addrspace(1)* %330, 4), (float addrspace(1)* %337, 4), (float addrspace(1)* %342, 4), (float addrspace(1)* %349, 4), (float addrspace(1)* %354, 4), (float addrspace(1)* %361, 4), (float addrspace(1)* %366, 4), (float addrspace(1)* %373, 4), (float addrspace(1)* %378, 4), (float addrspace(1)* %385, 4), 
(float addrspace(1)* %390, 4), (float addrspace(1)* %397, 4), (float addrspace(1)* %402, 4), (float addrspace(1)* %409, 4), (float addrspace(1)* %414, 4), (float addrspace(1)* %421, 4), (float addrspace(1)* %426, 4), (float addrspace(1)* %433, 4), (float addrspace(1)* %438, 4), (float addrspace(1)* %445, 4), (float addrspace(1)* %450, 4), (float addrspace(1)* %457, 4), (float addrspace(1)* %462, 4), (float addrspace(1)* %469, 4), (float addrspace(1)* %474, 4), (float addrspace(1)* %481, 4), (float addrspace(1)* %486, 4), (float addrspace(1)* %493, 4), (float addrspace(1)* %498, 4), (float addrspace(1)* %505, 4), (float addrspace(1)* %510, 4), (float addrspace(1)* %517, 4), (float addrspace(1)* %522, 4), (float addrspace(1)* %529, 4), (float addrspace(1)* %534, 4), (float addrspace(1)* %541, 4), (float addrspace(1)* %546, 4), (float addrspace(1)* %553, 4), (float addrspace(1)* %558, 4), (float addrspace(1)* %565, 4), (float addrspace(1)* %570, 4), (float addrspace(1)* %577, 4), (float addrspace(1)* %582, 4), (float addrspace(1)* %589, 4), (float addrspace(1)* %594, 4), (float addrspace(1)* %601, 4), (float addrspace(1)* %606, 4), (float addrspace(1)* %613, 4), (float addrspace(1)* %618, 4), (float addrspace(1)* %625, 4), (float addrspace(1)* %630, 4), (float addrspace(1)* %637, 4), (float addrspace(1)* %642, 4), (float addrspace(1)* %649, 4), (float addrspace(1)* %655, 4), (float addrspace(1)* %659, 4), (float addrspace(1)* %664, 4), (float addrspace(1)* %670, 4), (float addrspace(1)* %675, 4), (float addrspace(1)* %682, 4), (float addrspace(1)* %687, 4), (float addrspace(1)* %694, 4), (float addrspace(1)* %699, 4), (float addrspace(1)* %706, 4), (float addrspace(1)* %711, 4), (float addrspace(1)* %718, 4), (float addrspace(1)* %723, 4), (float addrspace(1)* %730, 4), (float addrspace(1)* %735, 4), (float addrspace(1)* %742, 4), (float addrspace(1)* %747, 4), (float addrspace(1)* %754, 4), (float addrspace(1)* %759, 4), (float addrspace(1)* %766, 4), (float 
addrspace(1)* %771, 4), (float addrspace(1)* %778, 4), (float addrspace(1)* %783, 4), (float addrspace(1)* %790, 4), (float addrspace(1)* %795, 4), (float addrspace(1)* %802, 4), (float addrspace(1)* %807, 4), (float addrspace(1)* %814, 4), (float addrspace(1)* %819, 4), (float addrspace(1)* %826, 4), (float addrspace(1)* %831, 4), (float addrspace(1)* %838, 4), (float addrspace(1)* %843, 4), (float addrspace(1)* %850, 4), (float addrspace(1)* %855, 4), (float addrspace(1)* %862, 4), (float addrspace(1)* %867, 4), (float addrspace(1)* %874, 4), (float addrspace(1)* %880, 4)' (alias_set.py:95:MakeAliasSetGraphs() -> ValueError)
Note: This replaces github.com/ChrisCummins/ml4pl/issues/14
Which aggregates epoch stats across tags.
//deeplearning/ml4pl/graphs:graph_database_viz
Tracking issue for re-implementing the GGNN using pytorch.
This issue will be closed once the model achieves feature parity with the previous Tensorflow implementation:
See also #24.
A test suite is only useful when the results can be trusted, and presently, mid-way through a large refactor, many of the tests are broken.
There are legitimate reasons for a batch generator to produce an empty batch before reaching the end of the input graph iterator. However, we use the batch.graph_count to determine when we have reached the end of the batches:
def Run(self) -> None:
  """Run the epoch worker thread."""
  rolling_results = batches.RollingResults()
  for i, batch in enumerate(self.batch_iterator.batches):
    self.batch_count += 1
    self.ctx.i += batch.graph_count
    # Record the graph IDs.
    for graph_id in batch.graph_ids:
      self.graph_ids.add(graph_id)
    # Check that at least one batch is produced.
    if not i and not batch.graph_count:
      raise OSError("No batches generated!")
    # We have run out of graphs.
    if not batch.graph_count:
      break
This causes the epoch to exit early, before having seen all batches.
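A minimal sketch of one possible fix (names hypothetical, not the real classifier_base API): rely on iterator exhaustion to detect the end of the input, and treat empty batches as skippable rather than terminal:

```python
class Batch:
    """Stand-in for the real batch type; only graph_count matters here."""

    def __init__(self, graph_count):
        self.graph_count = graph_count


def count_graphs(batch_iterator):
    """Consume all batches, tolerating legitimately-empty ones.

    The end of input is signalled by the iterator being exhausted, not
    by an empty batch, so an empty batch mid-stream no longer causes an
    early exit from the epoch.
    """
    graph_count = 0
    for batch in batch_iterator:
        if not batch.graph_count:
            continue  # empty batch: skip it, do not terminate
        graph_count += batch.graph_count
    if not graph_count:
        raise ValueError("No batches generated!")
    return graph_count
```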
Run a subset of the test suite on Travis CI using the phd_build docker image. Some thought must go into deciding which tests to run, as many of them (e.g. GGNN integration tests) are too heavy to run in the 30-minute window provided to a Travis test job.
Reproduce:
bazel test //deeplearning/ml4pl/models/ggnn/...
Error:
# Test that model saw every graph in the database.
> assert results.graph_count == graph_db.split_counts[epoch_type.value]
E KeyError: 0
/home/zacharias/ml4pl/deeplearning/ml4pl/models/ggnn/ggnn_test.py:174: KeyError
This is a valid error, but the warning is annoying. Filter them.
In GGNN model:
/home/cec/phd/tools/venv/phd/lib/python3.6/site-packages/tensorflow/python/ops/gradients_util.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
This appears to be related to the number of timesteps we unroll for.
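A sketch of the filter, matching on the warning's message prefix (the message string is copied from the log above; where to install the filter is a judgment call, e.g. once at model import time):

```python
import warnings

# Suppress TensorFlow's IndexedSlices-to-dense UserWarning by matching
# on its message prefix. filterwarnings treats the message argument as
# a regex matched against the start of the warning text.
warnings.filterwarnings(
    "ignore",
    message="Converting sparse IndexedSlices to a dense Tensor of "
            "unknown shape",
    category=UserWarning,
)
```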
At a minimum, run tests with coverage, combine the reports, then write to a file.
Just as we finished the call :) the CPU-only devmap run failed:
$ bazel run //deeplearning/ml4pl/models/ggnn -- --graph_db='file:///var/phd/db/cc1.mysql?programl_devmap_amd' --log_db='file:///var/phd/db/cc1.mysql?programl_scratch_logs' --graph_batch_node_count=10000 --vmodule='*'=3
...
I1211 18:18:53 progress.py:92] Batch 48 with 16 graphs: accuracy=62.50%, precision=0.625, recall=0.625, f1=0.625 in 2s 465ms
I1211 18:18:54 progress.py:92] Batch 49 with 10 graphs: accuracy=30.00%, precision=0.090, recall=0.300, f1=0.138 in 1s 187ms
I1211 18:18:56 progress.py:92] Batch 50 with 8 graphs: accuracy=25.00%, precision=0.375, recall=0.250, f1=0.300 in 2s 536ms
Train epoch 2: 83%|███████████████████████████████████████████████████████████████████████████████████ | 452/544 [02:00<00:24, 3.76 graph/s, acc=0.589, loss=1.21, prec=0.434, rec=0.589]Exception in thread Thread-12: | 1/300 [04:57<24:43:24, 297.67s/ epoch]
Traceback (most recent call last):
File "/home/linuxbrew/.linuxbrew/Cellar/python/3.6.5/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/phd/deeplearning/ml4pl/models/classifier_base.py", line 407, in Run
batch_results = self.model.RunBatch(self.epoch_type, batch)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/phd/deeplearning/ml4pl/models/ggnn/ggnn.py", line 329, in RunBatch
outputs = self.model(*model_inputs)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/phd/deeplearning/ml4pl/models/ggnn/ggnn_modules.py", line 69, in forward
prediction, num_graphs, graph_nodes_list, aux_in
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/phd/deeplearning/ml4pl/models/ggnn/ggnn_modules.py", line 471, in forward
return self.feed_forward(aggregate_features), graph_features
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/modules/batchnorm.py", line 81, in forward
exponential_average_factor, self.eps)
File "/home/cec/.cache/bazel/_bazel_cec/d1665aef25bbeeb91c01df7ddc90dba7/execroot/phd/bazel-out/k8-fastbuild/bin/deeplearning/ml4pl/models/ggnn/ggnn.runfiles/pypi__torch_1_3_1/torch/nn/functional.py", line 1666, in batch_norm
raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 4])
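The failure is batch norm in training mode receiving a batch containing a single graph (input size [1, 4]): per-channel batch statistics are undefined for one sample. A dependency-free sketch of the guard (the real fix would wrap the torch BatchNorm1d call; all names here are hypothetical):

```python
def batch_norm(rows, running_mean, running_var, eps=1e-5, training=True):
    """Normalize a batch of feature rows per channel.

    When a training batch has fewer than two samples, fall back to the
    running statistics (i.e. behave like eval mode) instead of raising,
    since batch variance is undefined for a single sample.
    """
    n = len(rows)
    channels = len(rows[0])
    if training and n < 2:
        training = False  # single-graph batch: use running statistics
    if training:
        mean = [sum(r[c] for r in rows) / n for c in range(channels)]
        var = [sum((r[c] - mean[c]) ** 2 for r in rows) / n
               for c in range(channels)]
    else:
        mean, var = running_mean, running_var
    return [[(r[c] - mean[c]) / (var[c] + eps) ** 0.5
             for c in range(channels)] for r in rows]
```

An alternative is to drop size-1 batches in the batch builder, which also avoids training on a statistically useless batch.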
See also #27.
For each of the labelled graph datasets, remove the back edges from graph tuples, and instead create the back edges when required by models.
This reduces the size of the datasets, trading increased work in the models against reduced storage and network bandwidth.
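A sketch of the on-demand construction, assuming edges are stored as (src, dst, flow) triples (the representation here is illustrative, not the actual graph tuple layout):

```python
def add_backward_edges(forward_edges):
    """Derive backward edges at batch-construction time.

    Only forward edges are stored in the database; each is flipped here
    to produce its backward counterpart, so back edges never consume
    storage or bandwidth.
    """
    return forward_edges + [(dst, src, flow)
                            for (src, dst, flow) in forward_edges]
```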
Currently identifiers and immediates both have node type identifier, and use embedding dictionary !IDENTIFIER. Instead, we should add an immediate type and use dictionary entry !IMMEDIATE.
Currently, only node-level or graph-level classification is supported. If a realistic use case for having both at the same time appears, we may need to refactor. This is a tracking issue for discussing such cases.
(Background on this error at: http://sqlalche.me/e/e3q8)
[1] 59716 terminated env PYTHONIOENCODING=UTF-8 PYTHONUNBUFFERED=1 /miniconda3/bin/python --hos
Is this expected behaviour?
If so, would you mind updating the devmap dump?
Make sure that split tables (such as graph metadata / data) have cascaded delete working.
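A regression test along these lines could guard the schema (sqlite shown for illustration since it is self-contained; the real databases are MySQL, where the same constraint is expressed via InnoDB foreign keys; table names are hypothetical):

```python
import sqlite3


def cascade_delete_works() -> bool:
    """Check ON DELETE CASCADE between a metadata table and its data
    table: deleting the parent row must delete the dependent row."""
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # off by default in sqlite
    conn.execute("CREATE TABLE graph_meta (id INTEGER PRIMARY KEY)")
    conn.execute(
        "CREATE TABLE graph_data ("
        "  id INTEGER PRIMARY KEY,"
        "  graph_id INTEGER REFERENCES graph_meta(id) ON DELETE CASCADE)"
    )
    conn.execute("INSERT INTO graph_meta VALUES (1)")
    conn.execute("INSERT INTO graph_data VALUES (1, 1)")
    conn.execute("DELETE FROM graph_meta WHERE id = 1")
    return conn.execute(
        "SELECT COUNT(*) FROM graph_data").fetchone()[0] == 0
```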
The gap between starting an epoch and receiving the first batch can be quite long for large datasets. I suspect that this is due to the BufferedGraphReader first reading the IDs and sizes of all graphs in the table. For cases where limit is a lot smaller than the size of the table, we can reduce this latency by inserting an offset into the SQL query, rather than reading the entire result set and discarding it:
# When we are limiting the number of rows and not reading the table in
# order, pick a random starting point in the list of IDs.
if limit and order != BufferedGraphReaderOrder.IN_ORDER:
  batch_start = random.randint(
    0, max(len(self.ids_and_sizes) - limit - 1, 0)
  )
  self.ids_and_sizes = self.ids_and_sizes[
    batch_start : batch_start + limit
  ]
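A sketch of the proposed alternative, pushing the random starting point into the query itself as an OFFSET (the query shape and names are illustrative, not the actual BufferedGraphReader API):

```python
import random


def make_query(table: str, limit: int, total_rows: int,
               in_order: bool = False) -> str:
    """Build a SELECT whose random starting point is an OFFSET.

    The full ID list is never materialized client-side; the database
    skips the first `offset` rows itself.
    """
    offset = 0
    if limit and not in_order:
        offset = random.randint(0, max(total_rows - limit - 1, 0))
    return (f"SELECT id, node_count FROM {table} "
            f"ORDER BY id LIMIT {limit} OFFSET {offset}")
```

Note that large OFFSET values can themselves be slow on some engines, so this mainly helps when limit (and hence the offset range that matters) is small relative to the table.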
Move //deeplearning/ml4pl/graphs/unlabelled/cdfg:control_and_data_flow_graph to //deeplearning/ml4pl/graphs/unlabelled/llvm2graph:graph_builder and refactor to return a ProGraML proto rather than a networkx graph.
Add a //deeplearning/ml4pl/graphs/unlabelled/llvm2graph binary to generate protos from the command line.
This is not true of all (valid) programs, and is an unnecessary restriction. See the Linux kernel sources for valid examples, e.g. kernel panic (infinite loop with no exit).
The current test suite is far from comprehensive, yet still requires about an hour to run when there aren't any cached results to re-use. Much of this time is spent in long-running integration tests which use parametrised test fixtures to run a small-ish test case with dozens of permutations of parameters.
This has the downside of slowing down the iterative develop/debug cycle. To mitigate this, we could use test sharding to run parts of the larger tests concurrently.
Bazel has support for test sharding built in, the hard part would be determining how to integrate that into pytest.
Once done, we could use a Travis build matrix to use the sharding, enabling a greater subset of the test suite to be run, see #45.
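Bazel communicates sharding to test binaries through the TEST_TOTAL_SHARDS and TEST_SHARD_INDEX environment variables, and expects shard-aware runners to touch TEST_SHARD_STATUS_FILE. A conftest.py hook along these lines could bridge that to pytest (a sketch, not tested against the phd build setup):

```python
import os


def pytest_collection_modifyitems(config, items):
    """Keep only this Bazel shard's slice of the collected tests.

    Bazel sets TEST_TOTAL_SHARDS / TEST_SHARD_INDEX for sharded test
    targets; tests are assigned round-robin by collection index.
    """
    total = int(os.environ.get("TEST_TOTAL_SHARDS", "1"))
    index = int(os.environ.get("TEST_SHARD_INDEX", "0"))
    if total <= 1:
        return
    # Touch the file Bazel uses to detect shard-aware test runners.
    status_file = os.environ.get("TEST_SHARD_STATUS_FILE")
    if status_file:
        open(status_file, "w").close()
    items[:] = [t for i, t in enumerate(items) if i % total == index]
```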
Command to reproduce:
$ bazel run //deeplearning/ml4pl/experiments/devmap:run_models -- --model ggnn --dataset amd --tag_suffix=test
Consider a human-friendly name, and fix the race condition when multiple jobs start at the same time.
Replace the old GraphMeta.group column with a separate database for storing ID->split mappings. This will remove the need for separate corpus/devmap graph databases, and can be re-used for any database with a numeric ID field.
The generated nxgraph doesn't have a type attribute on its nodes, so construction fails in NodeTypeIterator (nx_utils.py, line 38) at if data["type"] == node_type with KeyError: 'type'.
Full trace below:
Traceback (most recent call last):
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.py", line 274, in <module>
app.Run(main)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/labm8/py/app.py", line 168, in Run
RunWithArgs(RunWithoutArgs)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/labm8/py/app.py", line 144, in RunWithArgs
absl_app.run(DoMain, argv=argv)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/pypi__absl_py_0_7_0/absl/app.py", line 300, in run
_run_main(main, args)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/pypi__absl_py_0_7_0/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/labm8/py/app.py", line 141, in DoMain
main(argv)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/labm8/py/app.py", line 166, in RunWithoutArgs
main()
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.py", line 269, in main
graph_proto = builder.Build(bytecode, opt)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/graph_builder.py", line 92, in Build
graphs = [self.CreateControlAndDataFlowUnion(cfg) for cfg in cfgs]
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/graph_builder.py", line 92, in <listcomp>
graphs = [self.CreateControlAndDataFlowUnion(cfg) for cfg in cfgs]
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/graph_builder.py", line 132, in CreateControlAndDataFlowUnion
self.MaybeAddDataFlowElements(g, ffg.tag_hook)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/graph_builder.py", line 179, in MaybeAddDataFlowElements
for statement, data in nx_utils.StatementNodeIterator(g):
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/nx_utils.py", line 44, in StatementNodeIterator
yield from NodeTypeIterator(g, programl_pb2.Node.STATEMENT)
File "/private/var/tmp/_bazel_Zacharias/56319df27ced911066fd99c97e9dce78/execroot/phd/bazel-out/darwin-fastbuild/bin/deeplearning/ml4pl/graphs/unlabelled/llvm2graph/llvm2graph.runfiles/phd/deeplearning/ml4pl/graphs/nx_utils.py", line 38, in NodeTypeIterator
if data["type"] == node_type:
KeyError: 'type'
Rather than a single embedding table for {node selector/statement representation}, add support for arbitrary embedding dimensionalities.
This isn't a priority at the moment as we don't have a use-case for it.
As of #2, we must update the IR importers in //deeplearning/ml4pl/ir/create/... to work with the new database schema.
Always use --hidden_size=node_embeddings_concatenated_width
Use a named tuple instead, then (optionally) have the log database collect those and convert them to mapped objects.
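A sketch of the shape this could take (field names hypothetical):

```python
from typing import NamedTuple


class BatchResults(NamedTuple):
    """Lightweight, immutable record of a single batch's results.

    Plain tuples are cheap to construct in the hot training loop; the
    log database collects them and converts them to SQLAlchemy-mapped
    objects only when (and if) they are actually written out.
    """
    graph_count: int
    loss: float
    accuracy: float
```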
Create a programl_graph_protos database and migrate the contents of ml4pl_unlabelled_graphs to it. This will speed up graph_tuple_database.GraphTuple.CreateFromProgramGraph() by enabling proto -> graph_tuple rather than proto -> nx -> graph_tuple.
The critical section of run ID assignment contains race conditions when multiple processes are running concurrently on a single machine. This manifests itself as failing tests for parameterized model tests.
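One way to close the race on a single machine is an exclusive file lock around the ID-assignment critical section (a POSIX-only sketch; the lock path and ID format are hypothetical, not the actual scheme):

```python
import fcntl
import os
import time


def assign_run_id(prefix: str,
                  lock_path: str = "/tmp/run_id.lock") -> str:
    """Assign a run ID under an inter-process exclusive lock.

    flock() serializes concurrent processes on one machine, so two
    jobs started in the same second cannot race through this section
    simultaneously; the PID suffix additionally keeps IDs unique.
    """
    with open(lock_path, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until acquired
        try:
            return "%s:%s:%d" % (
                prefix, time.strftime("%y%m%d%H%M%S"), os.getpid())
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```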
During runtime of the LSTM, we lex the entire input, then truncate much of it. We can speed things up by only lexing up to --padded_sequence_length tokens.
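A sketch of the idea, assuming the lexer can be driven lazily as a generator (function names hypothetical):

```python
from itertools import islice


def lex_truncated(text, tokenize, padded_sequence_length):
    """Stop lexing after padded_sequence_length tokens.

    `tokenize` is assumed to yield tokens lazily; islice stops pulling
    from it once the limit is reached, so the tail of a long input is
    never lexed at all.
    """
    return list(islice(tokenize(text), padded_sequence_length))
```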
Related to a part of #27.
This is the problem: our position embeddings are useless in their current form, I think. Imagine you have 2 incoming edges with position embeddings p_1 and p_2, and states h_1, h_2 coming across these edges, all of the same edge type. An example could be c = a / b. Then the incoming message m has this lousy property (with A being a parameter matrix):
m = A (p_1 + h_1) + A (p_2 + h_2) = A (p_1 + h_2) + A (p_2 + h_1)
by associativity and distributivity, meaning we cannot distinguish between a/b and b/a like this.
There are five distinct representations we have considered so far:
Problem:
/tmp/ml4pl belongs to Tal and I can't delete it.
Possible resolutions:
/tmp/zacharias/...
I'm not sure if this is specific to the GGNN or applies to all classifiers, but memory consumption of long running jobs grows steadily before being killed by the OS when it reaches system capacity.
EDIT: This is not specific to the GGNN, see my comments below.
How can I avoid that?