nbro / andz
Algorithms and data structures for educational, demonstrational and experimental purposes.
License: MIT License
By following the example of R. Sedgewick and K. Wayne's implementation philosophy (from their work on implementing and explaining data structures and algorithms, whose code you can find here: https://github.com/kevin-wayne/algs4), I could remove the usage of inheritance to model relationships between data structures that, in theory, are connected by a parent-child relationship.
Advantages of using inheritance:
- Parent-child relationships between data structures would be more obvious in certain cases
- Less redundant code and duplication
Disadvantages:
- Dependency on another implementation
- Highly coupled modules
Theoretical parent-child relationships are not always best represented and implemented as such.
Example 1: a tree is a special type of graph, but it's usually implemented quite differently from a graph, hence it may not be a good idea to derive a tree from a graph.
This is one example where in theory data structures are connected by a parent-child relationship, but in practice it doesn't make much sense to implement it.
Solution 1: implement inheritance whenever it really makes sense, e.g.:
- BST <- RBT
- Heap <- MinHeap
A drawback of this solution is shared code, such as that in Heap, which needs to be tested both for min- and max-heaps, as well as the risk of complex hierarchies in ands.
It should be possible to customise/choose the (current) hash function of the HashTable data structure without affecting its general behaviour, but at most its performance.
I should do some research regarding other implementations!
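As a sketch of how the hash function could be made pluggable (the class layout and parameter names here are hypothetical, not the current ands API), the constructor could accept any callable, and the table would only use it through one private helper:

```python
class HashTable:
    """Minimal separate-chaining hash table with a pluggable hash function.

    This is a sketch: `hash_function` is any callable mapping a key to an
    int; swapping it changes the key distribution (and thus performance),
    not the observable behaviour of put/get.
    """

    def __init__(self, capacity=31, hash_function=hash):
        self._buckets = [[] for _ in range(capacity)]
        self._hash = hash_function

    def _index(self, key):
        return self._hash(key) % len(self._buckets)

    def put(self, key, value):
        bucket = self._buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite the existing key
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._buckets[self._index(key)]:
            if k == key:
                return v
        return None


# Behaviour is the same even with a deliberately terrible hash function;
# only the bucket distribution (hence performance) changes.
t = HashTable(hash_function=lambda key: 0)
t.put("a", 1)
t.put("b", 2)
```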
When I started off this project, I wasn't really concerned with the efficiency of the implementations in every case, so, in a few cases, the purpose of the implementation was just to expose the concepts.
Deques apparently are better suited than lists to implement queues, but currently my implementation of Queue uses a list instead of a deque. I should change its implementation to use a deque.
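A minimal sketch of the change, using collections.deque (whose appends and pops at either end are O(1), whereas list.pop(0) is O(n)); the method names are illustrative, not necessarily the current Queue API:

```python
from collections import deque


class Queue:
    """FIFO queue backed by a deque instead of a list."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)  # O(1)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from an empty queue")
        return self._items.popleft()  # O(1), unlike list.pop(0)

    def size(self):
        return len(self._items)


q = Queue()
q.enqueue(1)
q.enqueue(2)
```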
To be honest, I am not sure if my SetDisjoint is even well implemented (I think it is, but its design seems rather bad). I need to find a good implementation of a disjoint set and maybe compare it with mine.
Should I also test the private fields and methods of data structures, or just their public interfaces?
I could implement matrices (which could also be useful to implement graphs). Note that these types of matrices may derive from a common and more generic matrix data structure.
Which properties of a numerical algorithm should be tested? Does the answer to this question depend on the specific numerical algorithm?
This was probably a typo.
For consistency, I should assert, in all methods that accept "indices", that they are valid, e.g. by using something of the form assert self.is_good_index(my_index). Examples of functions that do not do that: _index_of_max, _index_of_min, etc.
Make sure that this assertion is required in certain methods: it may happen that it's "ok" that, in a call or recursive sub-call to a function, the indices are no longer valid!!!
Check also the cases for BinaryHeap, MinHeap and MaxHeap.
Careful: These explanations are incomplete!!!
A flow network is a directed graph G = (V, E) such that (1) each edge (u, v) ∈ E has a nonnegative capacity c(u, v) ≥ 0, and (2) if (u, v) ∈ E, then (v, u) ∉ E.
Point 2 means that there aren't any anti-parallel edges. See below how to convert a graph with antiparallel edges to a flow network.
A flow in a flow network G = (V, E) is a real-valued function f : V × V → ℝ that satisfies two properties:
Capacity constraint: ∀ u, v ∈ V : 0 ≤ f(u, v) ≤ c(u, v)
That is, for every two vertices u and v in V, we have that the flow between them is greater than or equal to zero and smaller than or equal to the capacity of the edge that connects u to v.
Flow conservation: ∀ u ∈ V - {s, t} : ∑_{v ∈ V} f(u, v) = ∑_{v ∈ V} f(v, u)
The value of a flow f is denoted by |f| and is defined by: |f| = ∑_{v ∈ V} f(s, v) - ∑_{v ∈ V} f(v, s)
That is, the value of a flow is the total flow out of the source s minus the total flow into the source s.
Each edge in the following flow network is labeled using a label of the form f/c, where f denotes the flow and c denotes the capacity.
Given a flow network G = (V, E), a source node s and a sink node t, find a flow of maximum value.
In the case of the presence of antiparallel edges in the flow network, we remove them in the following way:
In the case we have multiple sources or sinks, we add respectively a supersource or supersink as follows:
Implement a few important algorithms to find the maximum flow, such as:
An augmenting path is a path from the source s to the sink t in the residual network, along which every edge has positive residual capacity.
Examples???
function Ford–Fulkerson(G, s, t):
initialize flow f to 0
while there's an augmenting path p in the residual network Gf:
augment flow f along path p
return f
where Gf denotes the residual network of G induced by the current flow f.
Here's the Wikipedia article about the topic:
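The Ford–Fulkerson pseudocode above can be sketched in Python as follows. This is the Edmonds–Karp variant (augmenting paths found with BFS); the dict-of-dicts capacity representation is an assumption for illustration, not the ands graph API:

```python
from collections import deque


def max_flow(capacity, s, t):
    """Edmonds-Karp: Ford-Fulkerson with BFS-chosen augmenting paths.

    `capacity[u][v]` is the capacity of edge (u, v). Returns |f|.
    """
    # residual[u][v] = residual capacity of (u, v); add 0-capacity
    # reverse edges so augmenting can "undo" flow.
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for an augmenting path of positive residual capacity.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:  # no augmenting path left: flow is maximum
            return flow
        # Bottleneck residual capacity along the found path.
        path_flow = float("inf")
        v = t
        while parent[v] is not None:
            u = parent[v]
            path_flow = min(path_flow, residual[u][v])
            v = u
        # Augment flow f along the path.
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= path_flow
            residual[v][u] += path_flow
            v = u
        flow += path_flow
```

For example, with capacities s→a=3, s→b=2, a→t=2, b→t=3, the maximum flow is 4.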
This project wasn't started as a TDD project because, at that time, I didn't know much about these techniques.
Why would TDD be (or not be) the most appropriate process for developing software for this project?
Advantages?
Disadvantages?
Are there better alternatives for this project? Why? Examples?
The following resources apparently contain useful information to answer the previous questions.
Chapter 14 of "Introduction to algorithms" by CLRS to augment the RBT ds.
Slides by prof. Papadopoulou
The BST has a rank method, which works in linear time instead of the possibly optimal logarithmic time...
Add support for:
Order statistics tree (OS-RANK, OS-SELECT) in log(n) time, by augmenting the RBT with a field called size in each node x, which represents the number of nodes in the subtree rooted at x. This field must be maintained, though.
Add support for interval trees
The OS-SELECT(T, i) method (which returns the ith smallest element in T) could look like this:
OS-SELECT(T, i): // we're looking for the ith smallest element in T
OS-SELECT-AUX(T.root, i) // we start from the root
OS-SELECT-AUX(x, i):
r = x.left.size + 1 // x.left.size because the elements on the left precede x
if r == i: // x is at the ith position
return x
else if i < r: // search in the left subtree of x
return OS-SELECT-AUX(x.left, i)
else: // i > r
// since the r smallest elements of the subtree rooted at x
// (x itself and its left subtree) come before x's right subtree
// in an inorder tree walk, the ith smallest element in the subtree
// rooted at x is the (i - r)th smallest element in x's right subtree
return OS-SELECT-AUX(x.right, i - r)
The OS-RANK(T, x) method (which returns the number of elements that precede x in an inorder tree walk of T, plus 1 to account for x itself) could instead look like this:
OS-RANK(T, x):
r = x.left.size + 1
y = x
while y != T.root:
// if y is a right child
// then both its parent and all the nodes in its parent's left subtree
// come before y in an inorder tree walk of T
if y == y.parent.right:
r = r + y.parent.left.size + 1
y = y.parent
return r
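The two procedures above can be sketched in runnable Python on a plain (unbalanced) size-augmented BST — balancing is omitted here, since OS-SELECT and OS-RANK only depend on the size field, not on the RBT colouring; the class and function names are illustrative:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.parent = None
        self.size = 1  # number of nodes in the subtree rooted here


def bst_insert(root, key):
    """Plain BST insertion that maintains the size field on the way down."""
    if root is None:
        return Node(key)
    root.size += 1
    if key < root.key:
        root.left = bst_insert(root.left, key)
        root.left.parent = root
    else:
        root.right = bst_insert(root.right, key)
        root.right.parent = root
    return root


def _size(node):
    return node.size if node is not None else 0


def os_select(node, i):
    """Return the node holding the ith smallest key (1-based)."""
    r = _size(node.left) + 1  # rank of `node` within its own subtree
    if i == r:
        return node
    if i < r:
        return os_select(node.left, i)
    return os_select(node.right, i - r)  # skip the r keys <= node.key


def os_rank(root, node):
    """Return the 1-based position of `node` in an inorder walk."""
    r = _size(node.left) + 1
    y = node
    while y is not root:
        # if y is a right child, its parent and the parent's left
        # subtree all precede y in an inorder walk
        if y is y.parent.right:
            r += _size(y.parent.left) + 1
        y = y.parent
    return r


root = None
for k in [5, 2, 8, 1, 3]:
    root = bst_insert(root, k)
```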
We need to maintain the field size when the RBT is modified during insertions and deletions. During an insertion, when we go down the path to the position of the new node, we increase the size of all nodes on this path by 1. Going up, to fix the height of the tree and keep it balanced, we do at most 2 rotations. During these rotations we also need to change the sizes of the nodes involved in the rotation, since the rotation is a local operation. For example, in a left-rotate, we could do something like this (omitting the handling of nils and parent pointers):
VERIFY THAT THIS PSEUDOCODE IS CORRECT!!!
OS-LEFT-ROTATE(T, x):
y = x.right
x.right = y.left
y.left.parent = x
...
// the size of y becomes the (old) size of its parent x
y.size = x.size
// the new size of x is the sum of the sizes of its two children + 1
x.size = x.left.size + x.right.size + 1
...
During a deletion, we do at most 3 rotations. When going up doing these rotations, we also need to decrease the sizes of the nodes all the way up to the root.
We can also easily implement other operations, such as OS-ITH-SUCCESSOR and OS-ITH-PREDECESSOR, by simply combining OS-RANK and OS-SELECT. For example, OS-ITH-SUCCESSOR(T, x, i) can be implemented as follows:
OS-ITH-SUCCESSOR(T, x, i):
r = OS-RANK(T, x)
return OS-SELECT-AUX(T.root, r + i) // select the (r + i)th element from the tree
OS-ITH-PREDECESSOR can be implemented similarly, by passing r - i as the second parameter to OS-SELECT-AUX.
All the operations we have mentioned so far run in O(log n) time.
Idea: store intervals in the nodes of an RBT organized according to the low values of the intervals. In addition, augment the RBT's nodes with a field max, which is the maximum value in the subtree rooted at the corresponding node, i.e.
x.max = max(x.interval.high, x.left.max, x.right.max)
We need to maintain this new field max during insertions and deletions. But this can easily be achieved, since it is local information.
The INTERVAL-SEARCH procedure looks like this:
INTERVAL-SEARCH(T, i):
x = T.root
while x != T.nil and i does not overlap x.interval:
if x.left != T.nil and (x.left.max >= i.low):
x = x.left // move left if the maximum key in the left subtree is greater than i.low
else:
x = x.right
return x
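The same idea can be sketched in runnable Python, on a plain BST ordered by low endpoints (balancing omitted; the node layout and the closed-interval convention are assumptions for illustration):

```python
class IntervalNode:
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.max = high  # max high endpoint in this subtree
        self.left = None
        self.right = None


def interval_insert(root, low, high):
    """Insert ordered by low endpoint, maintaining the max field."""
    if root is None:
        return IntervalNode(low, high)
    root.max = max(root.max, high)
    if low < root.low:
        root.left = interval_insert(root.left, low, high)
    else:
        root.right = interval_insert(root.right, low, high)
    return root


def overlaps(node, low, high):
    # closed intervals [a, b] and [c, d] overlap iff a <= d and c <= b
    return node.low <= high and low <= node.high


def interval_search(root, low, high):
    """Return some node whose interval overlaps [low, high], or None."""
    x = root
    while x is not None and not overlaps(x, low, high):
        if x.left is not None and x.left.max >= low:
            # the left subtree may contain an overlapping interval
            x = x.left
        else:
            x = x.right
    return x


root = None
for lo, hi in [(16, 21), (8, 9), (25, 30), (5, 8), (15, 23)]:
    root = interval_insert(root, lo, hi)
```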
Other operations that could be useful are:
INTERVAL-SEARCH-EXACTLY (check the solutions to the 2nd homework of ands2).
INTERVAL-SEARCH-ALL(T, i), which searches for all intervals in T that overlap the interval i. This operation can be implemented by imitating the deletion of a found overlapping interval. (Check the solutions to the 2nd homework of ands2.)
For example, in the case of heaps, I'm currently providing an additional non-standard method, i.e. delete, for all binary heaps (MinHeap, MaxHeap and MinMaxHeap), but delete is usually not an operation provided by the specification of a binary heap because, in general, it requires linear time (for a heap).
Another example is the TST. Here, too, I'm providing a deletion method, but ternary search tree specifications usually do not state "deletion" as a standard operation.
One solution I'm currently seeing would be to not provide additional non-standard methods in the standard implementation of a data structure, but to eventually provide a subclass of the standard implementation whose purpose is to enhance it with the additional operations, or maybe to simply enhance the standard implementations (for example, in the case of BST, I already implemented a BSTImproved, whose purpose was to re-implement the "insertion" operation).
Problem: is it worth creating an additional data structure or implementation, maybe just for one method?
This project also aims at implementing algorithms which are asymptotically faster than their eventual alternatives. Nevertheless, asymptotically slower alternatives are not excluded: on the contrary, they may serve as a form of comparison (for the purpose of better understanding the complexity analysis, given that this project is mainly educational) with the theoretically asymptotically faster ones. Moreover, in practice, algorithms with worse asymptotic complexity may have better runtime performance, at least for certain inputs and/or for certain architectures.
The goals of this issue (which is going to be permanent) are:
Ensure asymptotically fastest algorithms for a particular problem are implemented before the slower ones.
Implement interesting alternatives to the asymptotically fastest algorithms. Interesting alternatives are e.g. the ones that in practice (at least for certain inputs) perform better than the asymptotically fastest algorithms.
List and point to resources that may help in analyzing the implemented algorithms.
Take also into consideration the space complexity analysis of algorithms, which, so far, has not been performed much.
Describe (usual) techniques to compare asymptotic complexities of two algorithms. This may be useful e.g. if one wants to order (increasingly or decreasingly) algorithms according to their asymptotic growth rate [OPTIONAL].
List facts about the asymptotic behavior of functions. For example, polynomials grow slower than exponentials [OPTIONAL].
http://www.orcca.on.ca/~yxie/courses/cs2210b-2011/htmls/asns/asn1/sol-asn1-CS2210b-2011.pdf
http://www.orcca.on.ca/~yxie/courses/cs2210b-2011/htmls/asns/sol-mid-cs2210b-2011.pdf
https://courses.cs.washington.edu/courses/cse373/05sp/homework/hw2/373-HW2soln.pdf
http://www.math.purdue.edu/~sbasu/teaching/fall06/cs3510/sol1.pdf
http://www.techtud.com/sites/default/files/public/share/AsymptoticNotation.pdf
http://cse.unl.edu/~sscott/teach/Classes/cse156S06/Notes/AsymptoticNotation.pdf
http://cgm.cs.mcgill.ca/~godfried/teaching/dm-reading-assignments/Growth-of-Functions.pdf
https://www.cs.duke.edu/brd/Teaching/230/handouts/order-of-growth2.pdf
https://www.math.psu.edu/sites/default/files/public/migration/141rates1.pdf
https://www.cs.umd.edu/~mount/251/Handouts/cmsc251-nosol.pdf
Chapter 2 of "Algorithm Design" (by Kleinberg and Tardos).
The TST class raises a ValueError in the insert method if not key, but raises a TypeError for the same reason in the methods delete, search_recursively and search.
ValueError should be raised when the input's value is outside the valid domain, whereas TypeError should be raised when the type of an input is not the expected one. So, I should raise ValueError when, for example, the input has length 0 but should have length > 0, and I should raise TypeError if, for example, a check isinstance(my_input, my_required_type) is false.
http://stackoverflow.com/questions/8297526/proper-exception-to-raise-if-none-encountered-as-argument
For example, in the class LinearProbingHashTable, I have defined a method called __invariants, whose purpose is to check that the preconditions, postconditions and invariants of the data structure (or of its methods) are maintained. This method could be defined as an external function, as I'm already doing for heaps and binary search trees.
Another example where there's this problem is the TST class. I may also want to add a similar function to DisjointSetsForest.
I had already implemented BFS and DFS, even though they have currently been removed temporarily from this project, since the graph data structure was also removed (because I had to fix a few things with it), but I would also like to add other algorithms, such as:
Uniform-cost search (also called cheapest-first search)
This is a variant of the famous Dijkstra's algorithm: instead of starting with a queue full of nodes, it starts with only one item and inserts new items as they are discovered. This can avoid some "decrease-key" operations.
Dijkstra's algorithm finds the shortest path between a source node and all nodes in a graph, whereas UCS only finds the shortest path between a source and a destination node.
Depth-limited search
Bidirectional search
These algorithms are usually used in path finding in combinatorial search. There's also a particular version of best-first search, i.e. a greedy version, where paths that are predicted to be closer are chosen first.
Beam search
A* (not a greedy version of a best-first search)
IDA* (or iterative-deepening A*)
Alpha-Beta pruning
2-opt, 2.5-opt, 3-opt, Lin–Kernighan, etc.
In general, because of the nature of problems (i.e. overlapping subproblems and optimal substructure) where dynamic programming can be applied, matrices are needed to keep track of results to subproblems, etc.
Floyd-Warshall: a simple O(V³) algorithm to find all-pairs shortest paths, even with negative edge weights (as long as there are no negative cycles)
DNA/RNA related algorithms
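Uniform-cost search, from the list above, could be sketched with heapq as follows. This is a lazy-deletion variant that sidesteps decrease-key entirely; the dict-of-dicts graph representation and the function signature are assumptions for illustration:

```python
import heapq


def uniform_cost_search(graph, source, goal):
    """Return the cost of the cheapest path from source to goal, or None.

    `graph[u]` maps each neighbour v to the (nonnegative) cost of edge
    (u, v). The frontier starts with the source only; nodes are inserted
    as they are discovered, and stale entries are skipped on pop instead
    of being decreased in place.
    """
    frontier = [(0, source)]  # (path cost so far, node)
    explored = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in explored:
            continue  # stale entry: a cheaper path was already expanded
        if node == goal:
            return cost  # first expansion of the goal is optimal
        explored.add(node)
        for neighbour, edge_cost in graph.get(node, {}).items():
            if neighbour not in explored:
                heapq.heappush(frontier, (cost + edge_cost, neighbour))
    return None  # goal not reachable


g = {"a": {"b": 1, "c": 5}, "b": {"c": 1, "d": 6}, "c": {"d": 2}}
```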
I've just noticed that the method search_by_value in the class Heap raises a ValueError if the input val is None.
The problem is that, theoretically, it's possible to add a HeapNode with value None to the heap (either max, min or min-max) using the add method defined in the Heap class. In that same class, if the input is already a HeapNode, no further checks are performed.
Two possible solutions:
1. Allow HeapNode objects with None values to be added to the Heap.
2. Ensure HeapNode objects with None values are not added to the heap, by modifying the add method.
Note: the tests have not caught this bug!
Tests under ds:
test_DSForests
test_Heap
test_MinMaxHeap
test_RBT
test_HastTable
test_MaxHeap
test_MinHeap
In some data structures, size is a property (e.g. DisjointSetsForest), and in others size is a method (e.g. BinaryHeap). I need to fix these inconsistencies.
So far, what I've actually implemented are binary heaps, but I don't think I emphasized the fact that those are binary heaps, and not, for example, Fibonacci or binomial heaps.
For example, the delete method has a return type hint of TSTNode, but it actually returns the value of the node being deleted, not the node itself. The node should never be returned to the user, since it's only useful for the implementation of the data structure.
There is at least one private method which is missing the return value's type hint...
Currently, it seems that the implementation of DSForests assumes that the user knows about DSNode and thus can access its public fields. Is this a good thing in this case? My first intuition is that the user doesn't care about these implementation details, and therefore they should be hidden. Note: if that's the case, we may also need to change the interface of certain methods, e.g. find, which, instead of returning a DSNode, would return the original value stored in the corresponding representative.
For example, Kruskal's MST algorithm uses DSForests, but it doesn't really care about implementation details; it only cares about the main functionalities and advantages that DSForests provides:
The current design of the graph data structure is very poor. I should do a little bit of research on the web regarding the best ways of implementing a graph data structure, not by just considering a few particular cases: it should be as flexible and general as possible.
In some algorithms that run on the graph data structures, sometimes I need to do workarounds to provide a certain functionality (which functionalities?) using the current implementation (which was actually removed from this repository temporarily because of this issue).
I should provide the adjacency-list, adjacency-matrix and maybe the edge-list implementations, so that, depending on the problem, we can choose whichever representation suits the situation better.
Check this other issue related to the implementation of a matrix data structure.
How should I deal with directed and undirected graphs?
How should I deal with sparse and dense graphs?
Even though efficiency is not one of the main goals of this project, this data structure should be implemented as efficiently as possible, i.e. known algorithms with an optimal or good performance should clearly be preferred over a "random" and "custom" implementation.
Since I've changed something in the interface of the HeapNode class, I realized that the code under huffman.py won't work anymore.
Removed huffman.py temporarily.
Maybe I should have started this project using test-driven development, but I was not aware of all of these development processes at the time, and I didn't have much time to do it that way, anyway.
Since RBT derives from BST, it has inherited all of its implementation problems, so I'm just going to talk about the problems in the context of the BST.
The BST allows duplicate keys. It also supports associating a value with a key, but this probably does not make much sense: if the user associates a value with a key, when searching he may obtain a different value than the one he's expecting. In other words, the current BST implementation is not really a map, yet it allows you to associate a value with a key, and that may not make much sense.
The BST allows you to insert, search and delete elements either by key or by BSTNode, and this can cause a few problems.
When searching or deleting using a BSTNode, the inputted BSTNode ...
The return value of the search and delete operations is a BSTNode, which contains a value. Is it a good idea to expose the internal details to the client?
If I decide to not make this BST implementation a map, then probably the most sensible thing to do when searching and deleting is respectively to return a boolean value (to indicate if the key is in the tree or not) and to not return anything (since there's no value associated with the key, there's no need to return anything!)
If I decide to make this BST a map, then, when searching and deleting, I can return the corresponding value (if the key is in the map), or I can return, e.g., None if it's not.
Currently, BSTNode objects are equal only if they are the same object. This could make sense if we allow the user to know about BSTNode objects, as the user currently does; but if I decide to hide the implementation details of the BST, i.e. hide BSTNode as much as possible (and I probably should do it!), then it does not make sense to consider BSTNode objects equal only if they are the same object: they should rather be equal if they have the same keys (in the case of a non-map BST).
Do not allow duplicates and thus create a BSTMap!
Or maybe I should have a BST base class which allows duplicates, but whose nodes only contain keys (i.e. without values). In this case, I should probably hide all implementation details (like the existence of a BSTNode) and adapt the implementation of BSTNode accordingly. In this context, two BSTNodes would be equal not if they are the same object, but if they contain the same key.
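A sketch of that last idea: equality (and, for consistency, hashing) defined by key instead of by identity. The class name matches the BSTNode mentioned above, but the fields are illustrative:

```python
class BSTNode:
    """Key-only BST node whose equality is defined by key, not identity."""

    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

    def __eq__(self, other):
        if not isinstance(other, BSTNode):
            return NotImplemented
        return self.key == other.key

    def __hash__(self):
        # defining __eq__ disables the default __hash__, so it must be
        # redefined consistently with the new equality
        return hash(self.key)


a, b, c = BSTNode(3), BSTNode(3), BSTNode(4)
```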
The tests currently provided are unit tests, i.e. they test only functions, classes or objects defined in one module: they do not test the interactions and dependencies between modules or packages (which would be integration testing).
When I was running all tests locally using the script ./run_tests.sh, at a certain point an assertion (i.e. assert is_min_max_heap(self)) in the deletion operation was not satisfied. Apparently, this only occurred locally and only when I ran all tests.
In general, there was some strange behavior going on.
Verify that the algorithm for deleting an element from a MinMaxHeap (and also from a MaxHeap and a MinHeap) really works, by thinking about all possible cases!!!
Essentially for convenience. This variable is used, as the name suggests, in the DFS algorithm to keep track of the global time, i.e. it's a counter variable used to mark when a node is first and last visited. This variable has nothing to do with the Graph class, except that the DFS algorithm runs on a graph. In my opinion, it should be separate from the Graph class, but I haven't yet found a convincing solution to this problem. The only one that actually comes to mind right now (and that came to mind when I first created the Graph class) is to make this variable a global variable, present only in the file where the DFS algorithm runs, but I am still not sure if this is a good design either...
The following methods do not check if the key is an instance of Hashable:
- get
- put
whereas the following do:
- delete
Note: update the tests to reflect the changes.
Here's a possible algorithm:
PRINT-CUT-ROD-SOLUTION(prices, n):
(r, s) = EXTENDED-BOTTOM-UP-CUT-ROD(prices, n)
while n > 0:
PRINT(s[n])
n = n - s[n]
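The pseudocode above can be made runnable by also implementing EXTENDED-BOTTOM-UP-CUT-ROD, which returns both the table of optimal revenues r and the table of optimal first-cut sizes s; printing is replaced here by returning the list of pieces:

```python
def extended_bottom_up_cut_rod(prices, n):
    """Bottom-up cut-rod dynamic programming.

    `prices[i]` is the price of a rod piece of length i (prices[0] is 0).
    Returns (r, s): r[j] is the best revenue for a rod of length j, and
    s[j] is the size of the first piece to cut off such a rod.
    """
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        best = float("-inf")
        for i in range(1, j + 1):
            if prices[i] + r[j - i] > best:
                best = prices[i] + r[j - i]
                s[j] = i  # a first piece of length i is the best so far
        r[j] = best
    return r, s


def cut_rod_solution(prices, n):
    """Return the list of piece sizes of an optimal cut."""
    r, s = extended_bottom_up_cut_rod(prices, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return pieces


# The classic CLRS example prices for lengths 0..10.
PRICES = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```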
This means that I have to override that method in the MinMaxHeap class. So, currently, the MinMaxHeap class is buggy.
My idea was that a graph is directed by default, and that, to create an undirected graph, you just need to add each endpoint of a (virtual) edge to the other endpoint's adjacency list.
For example, if A and B are part of an undirected edge, then you add A to B's adjacency list and you add B to A's adjacency list too. Of course, you need to do this for all edges. For this reason, I decided to create two functions, add_directed_edge and add_undirected_edge, that do this automatically under the hood for you.
This seems to solve all the problems, but that might not be the case. For example, if you want to know whether a Graph is directed or undirected, you need to check all (virtual) edges, i.e. all adjacency lists, etc.
There are also some other solutions to this problem, but they could introduce other problems, and so on, making the graph data structure ever more complex, and maybe with a worse design.
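A sketch of the two functions on a minimal adjacency-list graph (the function names match the ones mentioned above; everything else, including the is_undirected check that illustrates the drawback, is hypothetical):

```python
class Graph:
    """Minimal adjacency-list graph, directed by default."""

    def __init__(self):
        self.adj = {}  # node -> list of adjacent nodes

    def add_node(self, u):
        self.adj.setdefault(u, [])

    def add_directed_edge(self, u, v):
        self.add_node(u)
        self.add_node(v)
        self.adj[u].append(v)

    def add_undirected_edge(self, u, v):
        # an undirected edge is just two directed (virtual) edges
        self.add_directed_edge(u, v)
        self.add_directed_edge(v, u)

    def is_undirected(self):
        # the drawback mentioned above: answering this question requires
        # scanning all adjacency lists
        return all(
            u in self.adj.get(v, [])
            for u, neighbours in self.adj.items()
            for v in neighbours
        )


g = Graph()
g.add_undirected_edge("A", "B")
h = Graph()
h.add_directed_edge("A", "B")
```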
For example, push_down should be a private method, but its name does not start with _.
Be careful: decide well which methods are part of the public API of these heap classes. To do that, you may need to think why would you use a heap.
python-coveralls (which uses coverage.py) does not consider modules which are not tested or, in general, not discovered. Of course, this may be a little bit misleading for those looking just superficially at the status of the code coverage.
See here for possible solutions: TheKevJames/coveralls-python#142
After visualizing the behaviour of a TST data structure using this tool, I realized that my implementation of deletion is buggy.
_delete_fix keeps deleting nodes up to the root while u has no children, but it doesn't consider the fact that the current node u may contain a value and thus should not be deleted.
This may not be the only problem!! I need to think about all possible cases very well. Have a look at the implementation by Galles.
Here's an enumeration of the data structures and algorithms that still need to be tested:
TST
Queue
All algorithms under:
In many cases, there's already a good starting point in the respective algorithms' own scripts to write their tests...
The concept of a tail-recursive function is especially important for functional programming languages. A tail-recursive function can be implemented in an optimized way by reusing the existing stack frame, since the recursive call is the last operation performed, i.e. it is in tail position.
In the folder ands/algorithms/recursion
, there are a few examples of recursive functions. The idea of this issue is to promote the:
provision of a tail-recursive alternative implementation of the existing recursive functions (which are not already tail-recursive), and
documentation of the functions as tail-recursive or not.
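For example, here is a plain recursive factorial next to a tail-recursive one. Note that CPython does not actually perform tail-call optimization, so within ands this form is useful mainly for documentation and as an intermediate step toward an iterative version:

```python
def factorial(n):
    """Plain recursion: the multiplication happens AFTER the recursive
    call returns, so the stack frame cannot be reused."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)


def factorial_tail(n, accumulator=1):
    """Tail recursion: the recursive call is the last thing evaluated,
    so a language with tail-call optimization could reuse the frame.

    (CPython does not optimize this; it is equivalent to a loop that
    updates `n` and `accumulator` in place.)"""
    if n <= 1:
        return accumulator
    return factorial_tail(n - 1, n * accumulator)
```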
Possibly useful resources to implement this
aij = p * (gij / cj) + δ, if cj != 0
aij = 1 / n, if cj == 0
All elements of A are strictly between 0 and 1.
All column sums of A are equal to 1.
Good for matrices which are
The method consists in repeating x = Ax (for some initial x0, e.g. such that ||x0||=1), until the absolute difference between the next x and the current x is "small enough".
Input: A, x0, termination condition
Output: v_k (the iterate when the termination condition is met)
v_0 = x0
for k = 1, 2, ... until the termination condition is met do:
    v_temp = A * v_{k-1}
    v_k = v_temp / ||v_temp||
end
The reason for the repeated normalization is that there is nothing in the definition of an eigenvector to prevent our iterates vk from growing in magnitude, which would indeed happen for large eigenvalues, accelerating roundoff error growth.
The termination condition depends on the purpose of the computation...!?
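The iteration above can be sketched in pure Python (no NumPy). The infinity norm for normalization and the absolute-difference termination condition are assumptions, since the pseudocode leaves both open:

```python
def power_method(A, x0, tolerance=1e-10, max_iterations=1000):
    """Power iteration: repeatedly apply A and normalize.

    `A` is a list of rows, `x0` the initial vector. Returns an
    approximation of the dominant eigenvector. The normalization at
    every step prevents the iterates from growing (or shrinking) in
    magnitude, which would accelerate roundoff error growth.
    """

    def mat_vec(A, v):
        return [sum(a * x for a, x in zip(row, v)) for row in A]

    def norm_inf(v):
        return max(abs(x) for x in v)

    v = x0
    for _ in range(max_iterations):
        w = mat_vec(A, v)
        w = [x / norm_inf(w) for x in w]
        # terminate when the change between iterates is small enough
        if max(abs(a - b) for a, b in zip(w, v)) < tolerance:
            return w
        v = w
    return v


# 2x2 example: dominant eigenvalue 3 with eigenvector proportional to (1, 1).
A = [[2.0, 1.0], [1.0, 2.0]]
v = power_method(A, [1.0, 0.0])
```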
How do we know that matrix A has an eigenvalue equal to 1? (Answer: A is column stochastic, i.e. all column sums are 1, the Perron-Frobenius theorem tells us that this matrix has an eigenvalue 1)
If it does, how do we know that it's the dominant eigenvalue and thus the solution that we find corresponds to the dominant eigenvector?
Does this problem have a unique solution x?
With a few adjustments to the basic model, A has indeed an eigenvalue 1 and it's the dominant eigenvalue, with algebraic multiplicity (?), and the problem has a unique positive real solution x. Proof?
In the case of the PageRank, A has the dominant eigenvalue equal to 1, but in most cases we don't know its value. In those cases we can use the Rayleigh quotient, defined for any given vector v by: rq = (vT * A * v) / (vT * v).
An initial approximation to x could be a vector whose entries are 1/n.
The Perron-Frobenius theorem applies to matrices like A:
only possible approach for large n, where n is the number of pages
Matrices G (the connectivity matrix) and A (the transition probability matrix of a Markov chain, formed from G) can actually never be formed.
To preserve the sparsity of G, we can do A = pGD + ezT, where
D is the diagonal matrix formed as follows
djj = 1/cj, if cj != 0;
djj = 0, if cj == 0.
e is a vector of all 1s
z is the vector with components:
zj = δ, if cj != 0;
zj = 1/n, if cj == 0.
Thus the matrix ezT accounts for random choices of web pages that do not follow links.
x=Ax can thus be written as
x - Ax = 0
(I - A)x = 0
(I - (pGD + ezT))x = 0
(I - pGD - ezT)x = 0
Ix - pGDx - ezTx = 0
(I - pGD)x - ezTx = 0
(I - pGD)x = ezTx
Now let zTx = γ, so we have
(I - pGD)x = eγ
We don't know γ because it depends on the unknown x.
Let temporarily γ = 1.
Let I - pGD = Q.
So we end up with:
Qx = e
As long as p is strictly less than one, the coefficient matrix Q is nonsingular. Why???
Now we can solve the linear system Qx = e for x.
The power method can also be implemented without actually forming the transition probability (or Markov) matrix, A = p * G * D + e * zT, and thus preserving sparsity.
Check assignment 1 of the numerical computing course.
PageRank can also be implemented using the inverse iteration (or power) method.
Check also:
The MinPriorityQueue is a data structure implemented using a MinHeap, and it could potentially be used in different algorithms.
So the idea of opening this issue is to try to understand if this implementation can be used in a flexible way: ideally, it should be possible to use it whenever any algorithm needs a "min-priority queue", so it should be as simple as possible (i.e., without many secondary enhancements) and it should also be as decoupled as possible from other data structures.