
hgtphylodetect's Introduction

HGTphyloDetect

Horizontal gene transfer (HGT) refers to the exchange of genetic material between distantly related organisms rather than from parent to offspring, and it has been confirmed as a significant factor in adaptive evolution, disease emergence, and metabolic shifts across many species. However, current methods for HGT detection are often not automated, are narrowly applicable, or are no longer available. In this work, we developed a versatile computational toolbox named HGTphyloDetect, which combines a high-throughput pipeline with phylogenetic analysis to facilitate comprehensive investigation of potential HGT events. Tests on two case studies suggest that this approach can identify horizontally acquired genes with high accuracy. In-depth phylogenetic analysis further helps trace potential donors and the detailed gene transmission process. The HGTphyloDetect toolbox is designed for ease of use and detects HGT events with a very low false discovery rate in a high-throughput manner.

HGT identification pipeline

(Figure: overview of the HGT identification pipeline.)

Citation

Please cite this paper: Yuan, Le, et al. HGTphyloDetect: facilitating the identification and phylogenetic analysis of horizontal gene transfer. Briefings in Bioinformatics (2023). https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbad035/7031155.

Installation

Install the latest version with:

$ git clone https://github.com/SysBioChalmers/HGTphyloDetect.git
$ cd HGTphyloDetect
$ pip install -r requirements.txt

Please note: this is sufficient to run the HGT detection functionality of HGTphyloDetect. Additional software dependencies and installation instructions are specified in the User tutorial.

Example

We provide a user-friendly small test example. Users just need to prepare a FASTA file containing a protein id and protein sequence; note that the protein id should come from the GenBank protein database.
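
For instance, a minimal input.fasta has a single GenBank protein accession as the header, followed by the amino-acid sequence (the sequence below is truncated as a placeholder):

```
>AAT92670
MSTE...
```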

(1) If you are in the HGTphyloDetect directory, enter the example folder via the command line:

cd example

(2.1) Then users can run the script for the input file (default AI value = 45, out_pct = 0.90):

python HGT_workflow.py input.fasta

(2.2) If users want to change the default parameter values used in the pipeline, e.g., AI value = 40, out_pct = 0.80, just pass the new values on the command line:

python HGT_workflow.py input.fasta AI=40 out_pct=0.80
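
KEY=VALUE overrides like these can be handled with a small helper; the sketch below is illustrative only (the name `parse_overrides` and the exact defaults handling are assumptions, not the script's actual argument parsing):

```python
def parse_overrides(args, defaults=None):
    """Parse KEY=VALUE tokens such as 'AI=40' or 'out_pct=0.80'.

    Tokens without '=' are ignored; values are interpreted as floats.
    """
    params = dict(defaults or {"AI": 45.0, "out_pct": 0.90})
    for token in args:
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = float(value)
    return params
```

In practice this would be called with `sys.argv[2:]`, after the FASTA filename.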

(3) Finally, the software generates an output file under the folder example for this gene/protein. The output file includes key information, i.e., the Alien index, E value, and donor information. For example:

Gene/Protein  Alien index  E value   Donor id      Donor taxonomy
AAT92670      199.18       3.15e-87  WP_208929673  Bacteria/Firmicutes
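
For downstream use, the results can be read back with the standard csv module. This sketch assumes the output is tab-separated with the header shown above (the sample row reuses the README example values):

```python
import csv
import io

# Hypothetical sample mirroring the column layout documented above.
sample = (
    "Gene/Protein\tAlien index\tE value\tDonor id\tDonor taxonomy\n"
    "AAT92670\t199.18\t3.15e-87\tWP_208929673\tBacteria/Firmicutes\n"
)

def load_hgt_table(handle):
    """Parse the tab-separated HGT output into a list of dicts keyed by header."""
    return list(csv.DictReader(handle, delimiter="\t"))

rows = load_hgt_table(io.StringIO(sample))
```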

User Manual

Please check the User tutorial documentation for the user manual. You can download this file for your own case studies.

Contact

hgtphylodetect's People

Contributors

le-yuan


hgtphylodetect's Issues

I am getting an error while running HGT_workflow

When I run the command for my gene, I get the error "please check gene accession id".
python HGT_workflow.py input.fasta
This is gene 1------------------
NG_048001
Yes, blast file already exists, nice!
Attention: please check the gene accession id!

New protein used for HGT

In the readme, it notes that the protein id should be from the GenBank protein database. But what if I de novo assembled a new genome and then want to detect HGT genes against the NCBI database? How should I deal with this problem? In a word, is it possible to just calculate the AI value for a new protein?
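
For reference, the Alien Index is computed from best-hit E-values, so in principle it does not depend on the query having a GenBank accession. Below is a minimal sketch of one common definition of AI (following Gladyshev et al.); whether HGT_workflow.py uses exactly this form is an assumption:

```python
import math

def alien_index(best_evalue_ingroup, best_evalue_outgroup, floor=1e-200):
    """Alien Index from the best BLAST E-value inside vs. outside the recipient lineage.

    AI = ln(best ingroup E-value + floor) - ln(best outgroup E-value + floor).
    Positive AI means the closest hit outside the lineage is stronger than
    any hit inside it, which is suggestive of horizontal acquisition.
    """
    return math.log(best_evalue_ingroup + floor) - math.log(best_evalue_outgroup + floor)
```

Under this definition, the pipeline's default threshold of 45 would flag genes whose outgroup hit is dramatically stronger than the best ingroup hit.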

blastp file in phylogenetics

Hi,
It's a nice tool for HGT detection!!!

But I got this error when I used the example file "HGTphyloDetect/example/blastp_files/AAT92670.txt" to run the phylogenetics step.

And I found that the format of the two blastp files in your example (YOL164W.txt, AAT92670.txt) is different, so I guess that's why I get this error.
I'm wondering how I can get a blastp file like "YOL164W.txt", or can you give me some advice?

This is gene AAT92670
Begin to run!!!
This is gene AAT92670 ---------------
Traceback (most recent call last):
File "/home/xinghe/data/Alvinellidae/HGT/3/phylogenetics/scripts/HGT_homologs_sequence.py", line 151, in
main()
File "/home/xinghe/data/Alvinellidae/HGT/3/phylogenetics/scripts/HGT_homologs_sequence.py", line 87, in main
accession_number, accession_similarity = parse_NCBI("./input/%s.txt" % gene)
File "/home/xinghe/data/Alvinellidae/HGT/3/phylogenetics/scripts/HGT_homologs_sequence.py", line 22, in parse_NCBI
accession = line.strip("\n").split("\t")[2]
IndexError: list index out of range
/usr/local/bin/mafft: Cannot open ./input/AAT92670_homologs.fasta.

/usr/local/bin/mafft: line 1270: /tmp/mafft.PsSNzDZxo9/infile: No such file or directory
awk: fatal: cannot open file `/tmp/mafft.PsSNzDZxo9/size' for reading (No such file or directory)
awk: fatal: cannot open file `/tmp/mafft.PsSNzDZxo9/size' for reading (No such file or directory)
/usr/local/bin/mafft: line 1274: [: too many arguments
/usr/local/bin/mafft: line 1279: [: too many arguments
/usr/local/bin/mafft: line 1284: [: too many arguments
/usr/local/bin/mafft: line 1289: [: -lt: unary operator expected
/usr/local/bin/mafft: line 1294: [: -lt: unary operator expected
/usr/local/bin/mafft: line 1301: [: -lt: unary operator expected
/usr/local/bin/mafft: line 1308: [: -lt: unary operator expected

MAFFT v7.505 (2022/Apr/10)
https://mafft.cbrc.jp/alignment/software/
MBE 30:772-780 (2013), NAR 30:3059-3066 (2002)

High speed:
% mafft in > out
% mafft --retree 1 in > out (fast)

High accuracy (for <~200 sequences x <~2,000 aa/nt):
% mafft --maxiterate 1000 --localpair in > out (% linsi in > out is also ok)
% mafft --maxiterate 1000 --genafpair in > out (% einsi in > out)
% mafft --maxiterate 1000 --globalpair in > out (% ginsi in > out)

If unsure which option to use:
% mafft --auto in > out

--op # : Gap opening penalty, default: 1.53
--ep # : Offset (works like gap extension penalty), default: 0.0
--maxiterate # : Maximum number of iterative refinement, default: 0
--clustalout : Output: clustal format, default: fasta
--reorder : Outorder: aligned, default: input order
--quiet : Do not report progress
--thread # : Number of threads (if unsure, --thread -1)
--dash : Add structural information (Rozewicki et al, submitted)

ERROR: Alignment not loaded: "./intermediate/AAT92670_aln.fasta" Check the file's content.

IQ-TREE multicore version 2.1.2 COVID-edition for Linux 64-bit built Oct 22 2020
Developed by Bui Quang Minh, James Barbetti, Nguyen Lam Tung,
Olga Chernomor, Heiko Schmidt, Dominik Schrempf, Michael Woodhams.

Host: 7525B (AVX2, FMA3, 502 GB RAM)
Command: iqtree2 -T 40 -nt 6 -st AA -s ./intermediate/AAT92670_aln_trimmed.fasta -m TEST -mrate G4 -keep-ident -bb 1000 -pre ./intermediate/AAT92670
Seed: 425967 (Using SPRNG - Scalable Parallel Random Number Generator)
Time: Thu Feb 16 21:08:22 2023
Kernel: AVX+FMA - 6 threads (256 CPU cores detected)

Reading alignment file ./intermediate/AAT92670_aln_trimmed.fasta ... ERROR: File not found ./intermediate/AAT92670_aln_trimmed.fasta
code for methods in class “Rcpp_Fitch” was not checked for suspicious field assignments (recommended package ‘codetools’ not available?)
code for methods in class “Rcpp_Fitch” was not checked for suspicious field assignments (recommended package ‘codetools’ not available?)
Warning message:
package ‘phangorn’ was built under R version 4.2.2
[1] "AAT92670"
Error in file(file, "r") : cannot open the connection
Calls: read.tree -> scan -> file
In addition: Warning message:
In file(file, "r") :
cannot open file './intermediate/AAT92670.treefile': No such file or directory
Execution halted
perl: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory
Yep, finish!!!

Thanks!
XingHE

Gene accession id not recognized.

Thank you for publishing this tool. In principle it could be very useful for my research. However,
I'm not able to get it to work. Every protein I put in gives the 'Attention: please check the gene accession id!' error.

When I try to run the programme with the example fasta file without the pregenerated blast file I get the following error:
This is gene 1------------------
AAT92670
Traceback (most recent call last):
File "/Users/marijn/HGTphyloDetect/example/HGT_workflow.py", line 212, in
main()
File "/Users/marijn/HGTphyloDetect/example/HGT_workflow.py", line 114, in main
for accession in accession_number[:200] :
UnboundLocalError: local variable 'accession_number' referenced before assignment.

Other proteins I tried are for example Q1DFT5 and Q50900, these give the 'Attention: please check the gene accession id!' error.

Do you know how to solve this issue?
Thank you in advance!

"NCBI database not present yet (first time used?)" followed by an error

After running the example with
python HGT_workflow.py input.fasta

the following output and error were received:
This is gene 1------------------
AAT92670
Yes, blast file already exists, nice!
NCBI database not present yet (first time used?)
Traceback (most recent call last):
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 1346, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/http/client.py", line 1257, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/http/client.py", line 1303, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/http/client.py", line 1252, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/http/client.py", line 1012, in _send_output
self.send(msg)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/http/client.py", line 952, in send
self.connect()
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/http/client.py", line 1426, in connect
self.sock = self._context.wrap_socket(self.sock,
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/ssl.py", line 1040, in _create
self.do_handshake()
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/HGTphyloDetect/example/HGT_workflow.py", line 212, in
main()
File "/HGTphyloDetect/example/HGT_workflow.py", line 90, in main
ncbi = NCBITaxa()
File "/.local/lib/python3.9/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 112, in init
self.update_taxonomy_database(taxdump_file)
File "/.local/lib/python3.9/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 131, in update_taxonomy_database
update_db(self.dbfile)
File "/.local/lib/python3.9/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 758, in update_db
(md5_filename, _) = urlretrieve("https://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz.md5")
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 239, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 214, in urlopen
return opener.open(url, data, timeout)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 517, in open
response = self._open(req, data)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 534, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 1389, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/apps/easybd/easybuild/software/Python/3.9.6-GCCcore-11.2.0/lib/python3.9/urllib/request.py", line 1349, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)>

I don't see any reference to required databases and such in the readme or user guide.
Any help would be appreciated :)

Problem with running HGTphyloDetect

I retrieved the protein sequence for the gene with GenBank acc. id AAK99679.1 (it's a prokaryotic gene) and ran blastp in the 'main' directory of the tool using the command **_python blastp.py input.fasta_**; the blastp file was successfully generated in the designated folder.
In the next step, for horizontally transferred gene detection, I ran the command
python HGT_workflow_close.py input.fasta (for prokaryote to prokaryote).
Unfortunately, the following error message popped up:
This is gene 1------------------
AAK99679.1
Yes, blast file already exists, nice!
**_Attention: please check the gene accession id!_**
I have tried the same multiple times but the same error continued to occur.
Please help!
I need my analysis to be done at the earliest!
Waiting for your positive response!

Don’t waste your time, software not usable in current form

This pipeline was not ready for deployment to the general public and saying it is easy to use is false advertising. I encountered several issues, many of which I fixed on my own, but ultimately it is taking too much time to get this pipeline to work so I’m moving on. Below is a list of issues that I encountered. If they were fixed in the next version, I would try this pipeline again and maybe recommend it to others. But until then, to those thinking about trying it out, do not attempt to use this software unless you want to take the general ideas and basically write your own pipeline.

EDIT: AvP worked beautifully first try.

Issue 1: Input protein accessions need to match a GenBank accession number

This issue has been brought up by others. If you are using unpublished protein sequences you will need to modify the “HGT_workflow” Python scripts to manually specify your taxonomic group for alien index calculations.

Issue 2: BLASTP set to “remote”

Not a problem in itself, but after a few BLASTs NCBI reports a message that the IP address will be penalized for overuse. You get this same message if you BLAST too many sequences on their web server GUI. An option to specify remote or local database search would be nice. You can go into the BLASTP Python script and remove the remote flag.

Issue 3: NCBI taxonomy parsing errors spoil the run

Not every taxon has an NCBI tax ID at every taxonomic level. In the HGT output Python code, if one of the requested tax IDs is missing, the run fails. I added some "try, except" blocks here so that the run wouldn't crash if "taxid2name" failed.
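
The defensive pattern the reporter describes can be sketched as follows. ToyTaxa stands in for ete3's NCBITaxa so the example runs without a taxonomy database, and the assumption that a missing id surfaces as a KeyError is illustrative:

```python
class ToyTaxa:
    """Stand-in for ete3.NCBITaxa with a tiny hard-coded lookup table."""
    _names = {2: "Bacteria", 2759: "Eukaryota"}

    def get_taxid_translator(self, taxids):
        # Raises KeyError for ids absent from the table.
        return {t: self._names[t] for t in taxids}

def safe_taxid2name(taxa, taxids, fallback="unknown"):
    """Translate tax ids to names, substituting a fallback instead of crashing."""
    names = {}
    for tid in taxids:
        try:
            names.update(taxa.get_taxid_translator([tid]))
        except (KeyError, ValueError):
            names[tid] = fallback
    return names
```

Wrapping each lookup individually means one unmapped taxon costs a single "unknown" entry rather than the whole run.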

Issue 4: After that, still not every gene is reported in the output

I don’t know why, but the log files to which I directed the code show that multiple HGT events were found and alien indexes calculated but they are ultimately not being written to the output TSV. This is a problem for non-HGT genes too. Basically a large number of the genes are not being reported to the output TSV and I don’t know why. Perhaps because of “taxid2name” and “ranks2lineage” function errors that the script can’t handle.

Issue 5: No connection between two modules of pipeline

This issue is another that has already been brought up. The BLASTP output format required for the alien index calculator is different from the format required for the phylogenetic steps, and the pipeline does not have a way to link these parts. One could match the BLASTP outputs for the two steps, but would need to change one of the Python scripts so that the ncbi_parse function indexing (accession number and percent similarity or evalue) is correct. In general, the output of the alien index calculator part and the phylogenetic part do not link up, so you need to write your own code anyway to go through the TSV output and select the genes that have been identified as HGT. As a result, it was easier for me to run BLASTP twice: first for all the genes of interest with the output format compatible with the alien index calculator, and a second time just for the HGT genes with the output format compatible with the phylogenetic steps.

Issue 6: Failure to generate homologs FASTA file for phylogenetics

But even then the homologs FASTA file fails to generate for some genes even though the log file looks OK (the accessions are being reported with the correct taxonomy designation). I don't know why some genes work and some don't. The one that did work was clearly not HGT in the end.

Issue 7: Script requires very particular directory structure

The pipeline is not written such that the path to your input and path to your output can be specified. Instead, you have to recreate a directory tree that matches the particular way the pipeline was run for the paper in which it was published.

Issue 8: It's slow

The pipeline is slow to run compared to AvP, which completed the calculations very quickly.

In summary

All in all, this pipeline was generated for a particular use case and then published as a general tool but it has not been coded or tested to work on datasets that are not the original one. Even then there are steps in the pipeline that are missing (connecting the alien index calculation and phylogenetic portion) that you need to figure out on your own.

An issue regarding duplication

When I ran more than a thousand samples, there could be several identical accession numbers with different e-values and identities. This results in an inconsistency between the number of accession similarities and the number of accessions in your script, HGT_homologs_sequence.py.
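
A workaround along these lines is to deduplicate the BLAST hits before counting, keeping only the best-scoring record per accession. This is a hypothetical sketch; the (accession, evalue, identity) tuple layout is an assumption about the parsed tabular BLAST output:

```python
def dedupe_hits(hits):
    """Keep only the lowest-E-value record per accession.

    hits: iterable of (accession, evalue, identity) tuples, e.g. parsed
    from tabular BLAST output. Returns one tuple per unique accession.
    """
    best = {}
    for acc, evalue, identity in hits:
        if acc not in best or evalue < best[acc][1]:
            best[acc] = (acc, evalue, identity)
    return list(best.values())
```

With duplicates collapsed this way, the accession and similarity lists stay the same length.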

Using "from ete3 import NCBITaxa" cannot access NCBI

Hi, thank you for developing this tool. I encountered a problem when trying to extract NCBI taxa information. After troubleshooting, I found that the following commands caused the issue. I'm wondering if the NCBI taxa functionality in ete3 has become unavailable, especially for networks in mainland China.

from ete3 import NCBITaxa
ncbi = NCBITaxa()
ncbi.update_taxonomy_database()

The error is shown below:
NCBI database not present yet (first time used?)
Traceback (most recent call last):
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 1344, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 1331, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 1377, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 1326, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 1085, in _send_output
self.send(msg)
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 1029, in send
self.connect()
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 1465, in connect
super().connect()
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/http/client.py", line 995, in connect
self.sock = self._create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/socket.py", line 828, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/socket.py", line 963, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
socket.gaierror: [Errno -2] Name or service not known

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/bioData/run_data2/xiazq/HGTphyloDetect-master/example/xzaq.py", line 2, in
ncbi = NCBITaxa()
^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 112, in init
self.update_taxonomy_database(taxdump_file)
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 131, in update_taxonomy_database
update_db(self.dbfile)
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/site-packages/ete3/ncbi_taxonomy/ncbiquery.py", line 758, in update_db
(md5_filename, _) = urlretrieve("https://ftp.ncbi.nih.gov/pub/taxonomy/taxdump.tar.gz.md5")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 240, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 215, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 515, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 532, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 1392, in https_open
return self.do_open(http.client.HTTPSConnection, req,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/xiazq/miniconda3/envs/HGT_analyses/lib/python3.12/urllib/request.py", line 1347, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>

HGTPhyloDetect : Not working with GenBank Accession ID

Error : This is gene 1------------------
AAF38644
Yes, blast file already exists, nice!
Attention: please check the gene accession id!

Here AAF38644 is a GenBank id of Q9Z6R0 (UniProt). I have also tried an alternative GenBank accession, but the results are the same.
