bazelbuild / bazel-central-registry
The central registry of Bazel modules for the Bzlmod external dependency system.
Home Page: https://registry.bazel.build
License: Apache License 2.0
Hi,
Also a Googler. I have a pretty extensive set of dependency support packages in bazel_rules_hdl that might be useful to upstream here.
The packages in there are all built hermetically from source, as such, they represent the full DAG of all the packages.
Is there any interest in migrating them here?
It'd be useful to have tooling to validate the presubmit.yml file is correctly formatted / follows the correct schema as it is not really documented anywhere what the correct format is and the current workflow relies on copy and paste from other modules.
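A lightweight structural check could be sketched in Python. Note that the required keys below (matrix, tasks, build_targets, test_targets) are assumptions drawn from presubmit.yml files copied between existing modules, not from any official schema:

```python
# Minimal sketch of a presubmit.yml structure check. The key names here are
# assumptions inferred from existing modules, since the schema is undocumented.
REQUIRED_TOP_LEVEL = {"matrix", "tasks"}  # hypothetical required keys

def validate_presubmit(config):
    """Return a list of human-readable problems; an empty list means OK.

    `config` is the presubmit.yml file already parsed into a dict
    (e.g. by yaml.safe_load).
    """
    problems = []
    missing = REQUIRED_TOP_LEVEL - config.keys()
    if missing:
        problems.append("missing top-level keys: %s" % sorted(missing))
    tasks = config.get("tasks", {})
    if not isinstance(tasks, dict) or not tasks:
        problems.append("'tasks' must be a non-empty mapping")
        tasks = {}
    for name, task in tasks.items():
        if not any(k in task for k in ("build_targets", "test_targets")):
            problems.append("task '%s' declares no build or test targets" % name)
    return problems
```

A check like this could run in presubmit itself, so a malformed file fails fast instead of surfacing as a confusing CI error later.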
For better security. See @meteorcloudy for more detailed information.
Particularly at this stage when testing bzlmod, it's common in my experience to make liberal use of the various bazel_dep overrides, like archive_override. It would be great to allow the build to just ignore, or warn on, the overrides present in bazel_deps, particularly when an override pulls in other overrides. This won't be a big deal once it's easier to provide alternate bzlmod registries, but for now it makes testing cumbersome. Using patches in the overrides is an option, but then patches must exist for each step in the chain.
Hi all, I am trying to use bzlmod and make a private central registry. I found these scripts in the tools directory are used in the process. Is it possible to wrap these scripts as a separate module so that other private repositories can easily update them?
Another problem is that my archive URL is not accessible to everyone and requires a token parameter when downloading. This situation doesn't seem to be supported now?
Right now the data contained in the metadata.json fields is very focused on pure technical package management functionality.
Especially for discoverability, it would also be nice to have some additional metadata:
This was generated with the add_module.py tooling; it should be updated to add the module file patch automatically.
Originally posted by @jsharpe in #282 (comment)
We could do it like npm does in package.json: https://docs.npmjs.com/cli/v7/configuring-npm/package-json#repository
The helper script has a dependency on colorama but this is not documented in the instructions. Is it worth providing a requirements.txt file for this?
Could this be automated by making the helper script a Bazel target in this repo, so consumers can just do bazel run //tools:add_module?
No PRs should alter the contents of existing module versions. This includes anything under the /modules/$NAME/$VERSION directories.
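This rule could be enforced mechanically. Here is a minimal sketch in Python, assuming CI supplies the list of files a PR changes and the set of module-version directories that existed before the PR (the path layout matches this repo; the function names are mine):

```python
# Flag PR changes that touch an already-published module version.
# `preexisting_versions` holds "modules/<name>/<version>" prefixes that
# existed before the PR; anything changed under one of them is a violation.
import posixpath

def violations(changed_files, preexisting_versions):
    """Return the changed paths that modify an existing module version."""
    bad = []
    for path in changed_files:
        parts = path.split("/")
        if len(parts) >= 3 and parts[0] == "modules":
            prefix = posixpath.join(*parts[:3])
            if prefix in preexisting_versions:
                bad.append(path)
    return bad
```

Files like modules/$NAME/metadata.json stay editable, since the check only matches the three-level version prefix.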
The BUILD file:
cc_binary(
    name = "async_dns",
    srcs = ["async_dns.cc"],
    deps = ["@c-ares//:ares"],
)
The C++ source file: async_dns.cc:
#include <ares.h>
#include <arpa/inet.h>
#include <ctype.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void state_cb(void *data, int s, int read, int write) {
  printf("Change state fd %d read:%d write:%d\n", s, read, write);
}

static void callback(void *arg, int status, int timeouts,
                     struct hostent *host) {
  if (!host || status != ARES_SUCCESS) {
    printf("Failed to lookup %s\n", ares_strerror(status));
    return;
  }
  printf("Found address name %s\n", host->h_name);
  char ip[INET6_ADDRSTRLEN];
  for (int i = 0; host->h_addr_list[i]; ++i) {
    inet_ntop(host->h_addrtype, host->h_addr_list[i], ip, sizeof(ip));
    printf("%s\n", ip);
  }
}

static void wait_ares(ares_channel channel) {
  for (;;) {
    struct timeval *tvp, tv;
    fd_set read_fds, write_fds;
    int nfds;

    FD_ZERO(&read_fds);
    FD_ZERO(&write_fds);
    nfds = ares_fds(channel, &read_fds, &write_fds);
    if (nfds == 0) {
      break;
    }
    tvp = ares_timeout(channel, NULL, &tv);
    select(nfds, &read_fds, &write_fds, NULL, tvp);
    ares_process(channel, &read_fds, &write_fds);
  }
}

int main(void) {
  ares_channel channel;
  int status;
  struct ares_options options;
  int optmask = 0;

  status = ares_library_init(ARES_LIB_INIT_ALL);
  if (status != ARES_SUCCESS) {
    printf("ares_library_init: %s\n", ares_strerror(status));
    return 1;
  }

  /* Zero the options before use; only fields flagged in optmask are read,
     but starting from a known state avoids reading uninitialized memory. */
  memset(&options, 0, sizeof(options));
  options.sock_state_cb = state_cb;
  optmask |= ARES_OPT_SOCK_STATE_CB;

  status = ares_init_options(&channel, &options, optmask);
  if (status != ARES_SUCCESS) {
    printf("ares_init_options: %s\n", ares_strerror(status));
    return 1;
  }

  status = ares_set_servers_csv(channel, "114.114.114.114");
  if (status != ARES_SUCCESS) {
    printf("ares_set_servers_csv: %s\n", ares_strerror(status));
    return 1;
  }

  ares_gethostbyname(channel, "baidu.com", AF_INET, callback, NULL);
  // ares_gethostbyname(channel, "google.com", AF_INET6, callback, NULL);
  wait_ares(channel);

  ares_destroy(channel);
  ares_library_cleanup();
  printf("fin\n");
  return 0;
}
Use Bazel to build the C++ source file:
$ bazelisk build -s :async_dns
INFO: Analyzed target //:async_dns (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
SUBCOMMAND: # //:async_dns [action 'Linking async_dns', configuration: 386057a8d24831c33aaf953f87aec3b0ebdbf5a779460907e5f2832f79a9e67a, execution platform: @local_config_platform//:host]
(cd /private/var/tmp/_bazel_jing/352d1d854f060ba5704eda8af920a5e3/execroot/__main__ && \
exec env - \
APPLE_SDK_PLATFORM=MacOSX \
APPLE_SDK_VERSION_OVERRIDE=11.1 \
PATH='/Users/jing/Library/Caches/bazelisk/downloads/bazelbuild/bazel-5.1.0-darwin-x86_64/bin:/usr/local/opt/node@16/bin:/Users/jing/.rbenv/shims:/Users/jing/.tiup/bin:/Users/jing/opt/anaconda3/condabin:/usr/local/sbin:/Users/jing/code/github/zk-code/scripts:/Users/jing/tools/apache-zookeeper-3.5.5-bin/bin:/Users/jing/code/github/trace/sky/skywalking-cli/bin:/Users/jing/tools/apache-skywalking-apm-bin-es7/bin:/usr/local/opt/ruby/bin:/Users/jing/code/github/devops/fluent-bit/build/bin:/Users/jing/tools/apache-maven-3.6.3/bin:/Users/jing/.gem/bin:/Users/jing/.cargo/bin:/Users/jing/code/github/network/caddy/cmd/caddy:/usr/local/opt/coreutils/libexec/gnubin:/Users/jing/tools/apache-skywalking-apm-bin-es7/bin:/usr/local/opt/bison/bin:/usr/local/opt/findutils/libexec/gnubin:/usr/local/opt/gnu-sed/libexec/gnubin:/Users/jing/tools/mongodb-osx-x86_64-4.0.2/bin:/usr/local/opt/[email protected]/bin:/Users/jing/tools/spark-2.2.0-bin-hadoop2.7/bin:/Users/jing/bin:/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Library/Apple/usr/bin:/Applications/Wireshark.app/Contents/MacOS:/Users/jing/go/bin:/Users/jing/.pub-cache/bin:/Applications/Visual Studio Code.app/Contents/Resources/app/bin:/usr/local/opt/fzf/bin' \
XCODE_VERSION_OVERRIDE=12.4.0.12D4e \
ZERO_AR_DATE=1 \
external/local_config_cc/cc_wrapper.sh @bazel-out/darwin-fastbuild/bin/async_dns-2.params)
# Configuration: 386057a8d24831c33aaf953f87aec3b0ebdbf5a779460907e5f2832f79a9e67a
# Execution platform: @local_config_platform//:host
ERROR: /Users/jing/code/xdf/sac/dns-prober/BUILD.bazel:47:11: Linking async_dns failed: (Aborted): cc_wrapper.sh failed: error executing command external/local_config_cc/cc_wrapper.sh @bazel-out/darwin-fastbuild/bin/async_dns-2.params
Use --sandbox_debug to see verbose messages from the sandbox
Undefined symbols for architecture x86_64:
"_ares__freeaddrinfo_cnames", referenced from:
_ares_parse_a_reply in libares.lo(ares_parse_a_reply.o)
_ares_parse_aaaa_reply in libares.lo(ares_parse_aaaa_reply.o)
"_ares__freeaddrinfo_nodes", referenced from:
_ares_parse_a_reply in libares.lo(ares_parse_a_reply.o)
_ares_parse_aaaa_reply in libares.lo(ares_parse_aaaa_reply.o)
"_ares__parse_into_addrinfo2", referenced from:
_ares_parse_a_reply in libares.lo(ares_parse_a_reply.o)
_ares_parse_aaaa_reply in libares.lo(ares_parse_aaaa_reply.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Error in child process '/usr/bin/xcrun'. 1
external/local_config_cc/cc_wrapper.sh: line 69: 72006 Abort trap: 6 "$(/usr/bin/dirname "$0")"/wrapped_clang "$@"
Target //:async_dns failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 0.350s, Critical Path: 0.11s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
The build shows that some symbols such as _ares__freeaddrinfo_nodes are missing, and a check of libares.lo confirms that _ares__freeaddrinfo_nodes is undefined:
$ cat bazel-out/darwin-fastbuild/bin/async_dns-2.params
-lc++
-fobjc-link-runtime
-Wl,-S
-o
bazel-out/darwin-fastbuild/bin/async_dns
bazel-out/darwin-fastbuild/bin/_objs/async_dns/async_dns.o
-Wl,-force_load,bazel-out/darwin-fastbuild/bin/external/c-ares.1.16.1/libares.lo
-headerpad_max_install_names
-no-canonical-prefixes
-target
x86_64-apple-macosx
-mmacosx-version-min=11.1
-lc++
-target
x86_64-apple-macosx
$ nm -g bazel-out/darwin-fastbuild/bin/external/c-ares.1.16.1/libares.lo | ag ares__freeaddrinfo_nodes
no symbols
no symbols
no symbols
no symbols
no symbols
no symbols
U _ares__freeaddrinfo_nodes
U _ares__freeaddrinfo_nodes
ares__freeaddrinfo_nodes is defined in https://github.com/c-ares/c-ares/blob/cares-1_16_1/ares_freeaddrinfo.c, but libares.lo does not include it.
$ ar -t bazel-out/darwin-fastbuild/bin/external/c-ares.1.16.1/libares.lo
__.SYMDEF SORTED
ares__close_sockets.o
ares__get_hostent.o
ares__read_line.o
ares__timeval.o
ares_android.o
ares_cancel.o
ares_create_query.o
ares_data.o
ares_destroy.o
ares_expand_name.o
ares_expand_string.o
ares_fds.o
ares_free_hostent.o
ares_free_string.o
ares_getenv.o
ares_gethostbyaddr.o
ares_gethostbyname.o
ares_getnameinfo.o
ares_getopt.o
ares_getsock.o
ares_init.o
ares_library_init.o
ares_llist.o
ares_mkquery.o
ares_nowarn.o
ares_options.o
ares_parse_a_reply.o
ares_parse_aaaa_reply.o
ares_parse_mx_reply.o
ares_parse_naptr_reply.o
ares_parse_ns_reply.o
ares_parse_ptr_reply.o
ares_parse_soa_reply.o
ares_parse_srv_reply.o
ares_parse_txt_reply.o
ares_platform.o
ares_process.o
ares_query.o
ares_search.o
ares_send.o
ares_strcasecmp.o
ares_strdup.o
ares_strsplit.o
ares_strerror.o
ares_timeout.o
ares_version.o
ares_writev.o
bitncmp.o
inet_net_pton.o
inet_ntop.o
windows_port.o
The BUILD file for the c-ares module does not include ares_freeaddrinfo.c:
bazel-central-registry/modules/c-ares/1.16.1/patches/add_build_file.patch
Lines 67 to 119 in 53e8c3a
Ideally the maintainers of each module should be added as Code Owners of the module directory, so that pull requests for that module can be assigned to them for review automatically.
@Wyverald @alexeagle WDYT?
bazelbuild/rules_k8s
https://github.com/bazelbuild/buildtools
When developing modules, it's useful to be able to make iterative changes without doing a full release of a ruleset. In order to do this, the integrity field must be recalculated. How to do this needs to be automated or documented.
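For reference, the integrity value follows the Subresource Integrity format that Bazel uses elsewhere: an algorithm name, a dash, and the base64-encoded raw digest. A small Python sketch (the helper name is mine):

```python
# Recompute a bzlmod-style 'integrity' value for a local archive.
# The format is Subresource Integrity: "sha256-" + base64(raw sha256 digest).
import base64
import hashlib

def integrity(path):
    """Return the SRI sha256 integrity string for the file at `path`."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return "sha256-" + base64.b64encode(digest.digest()).decode()
```

During iteration you would re-run this over the locally built archive and paste the result into the module's source.json.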
bzlmod always uses semver versions, e.g. 1.2.3. However, git tags are mixed: some projects, like rules_python, tag 1.2.3, while others, like aspect_bazel_lib, prefix with a v, as in v1.2.3 (see https://semver.org/#is-v123-a-semantic-version).
Because of this, the code in the BCR UI to navigate to the release notes is going to be broken in one of the two cases: bazel-contrib/bcr-ui#37
Note, the release notes pointer is important because many rules require more lines in MODULE.bazel than the copy-paste bazel_dep line that the BCR UI gives you.
I think we simply need to collect more metadata from module maintainers when they register a module. Hopefully the tagging scheme is consistent across all releases, so this can be unversioned in the module/metadata.json file.
A few options to consider:
1. {"tags_use_v_prefix": true}
2. {"tagging_scheme": "v{version}"}
3. {"release_notes": "https://github.com/bazelbuild/rules_nodejs/releases/tag/{version}"}
A nice thing about option 3 is that it generalizes in case you don't use GitHub or choose somewhere else for release notes.
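Option 3 could be as simple as a format() call over the stored template. A sketch, assuming a hypothetical release_notes field in metadata.json as proposed in this thread:

```python
# Expand a proposed per-module release-notes template with a version string.
# "release_notes" is a field suggested in this discussion, not an existing
# BCR metadata.json field.
def release_notes_url(metadata, version):
    """Return the release-notes URL for `version`, or None if no template."""
    template = metadata.get("release_notes")
    return template.format(version=version) if template else None
```

Because the template is just a URL with a {version} placeholder, it works equally well for GitHub releases, GitLab tags, or a self-hosted changelog page.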
I am working on porting a ruleset to Bazel modules that uses dev dependencies, e.g. to support the tests for the presubmit checks. I am not sure whether the MODULE.bazel checked into the BCR should include them. Is there a functional difference between the two approaches, and if not, is one of them recommended?
Right now, the only thing I can think of is that the list of versions in /modules/$NAME/metadata.json and the directories under /modules/$NAME should match.
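That consistency rule is easy to check mechanically. A sketch in Python operating on plain lists (the field and directory layout are as described above; the function name is mine):

```python
# Compare the "versions" list from modules/<name>/metadata.json against the
# version directories actually present under modules/<name>/.
def version_mismatch(metadata_versions, version_dirs):
    """Return versions listed but absent on disk, and vice versa."""
    listed = set(metadata_versions)
    present = set(version_dirs)
    return {
        "in_metadata_only": sorted(listed - present),
        "on_disk_only": sorted(present - listed),
    }
```

Run over every module in the registry, any non-empty result would fail presubmit.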
Most of our modules only change weekly or monthly, but sometimes we cut several releases in a day. Are there any guidelines on how frequently things can change in the central registry?
Also, what is the policy on using non-tag/non-release versions of repos? We aggressively use upstream compiler toolchains, so downstream repos tend to frequently raise deprecation warnings or simply fail to build. We currently work around these issues by vendoring our own modules cut from the patches that fix our issues.
We now want to try to be more in sync with the central registry, but the versions of some packages do not yet contain the fixes we need.
Do we need to maintain our non-tag/non-release modules separately from the central registry (i.e. in our own fork), or is it possible for us to upstream these modules here? For instance, abseil-cpp
will raise a million warnings when built with clang 16 (probably 15 as well). The fixes for these warnings have already been merged in abseil main, but we do not want to wait until the next abseil release is cut.
Enabling more than standard releases would likely require versioning like 0.0.0-yyyymmdd.n-<commit-hash> (where n is 0, 1, 2, etc., if e.g. a hotfix is pushed the same day) or similar. I've seen that some modules are already tagged like this (e.g. the upb module). What would be the compatibility level increment for releasing like that? I could imagine something like compatibility_level=yyyymmdd?
Users working with (or on) LLVM may require very recent versions of the repo. We have module patches for the llvm-bazel-overlay but found that it would be better to add a module here first before upstreaming patches to LLVM main.
Would it be OK for us to add a module with patches here so that we can upstream the patches later to the main LLVM repo? We would want to update such a module roughly bi-weekly, tagged like 16.0.0-20221005.0-asdfasdf.
A big question is how to not hammer CI with such a module. We would only implement the testing infrastructure that is part of the llvm-project-overlay, not the entire LLVM test infrastructure. We should also keep in mind that the llvm-project-overlay is ~5 projects in one. However, running the small subset of the LLVM tests would still likely be the largest CI job out of all the modules currently in the bazel-central-registry; my initial naive estimate for running that small test subset is ~500 single-core build minutes per architecture/OS combination.
Maybe only building a single target (probably the @llvm-project//clang:clang target) and only building for x86_64 Linux would initially be an option?
https://github.com/bazelbuild/rules_rust
https://github.com/google/benchmark
This:
module(
name = "foo",
version = "2.0",
toolchains_to_register = ["@toolchain_repo//:toolchain"],
)
ext = use_extension("//:extensions.bzl", "ext")
ext.some_tag()
use_repo(ext, "toolchain_repo")
should become this:
module(
name = "foo",
version = "2.0",
)
ext = use_extension("//:extensions.bzl", "ext")
ext.some_tag()
use_repo(ext, "toolchain_repo")
register_toolchains("@toolchain_repo//:toolchain")
There is no documentation for how to develop modules linked to from this repo. Could there please be a link to something to help people adopt bzlmod?
For reference, the process seems to be:
1. Run the ./tools/add_module.py file.
2. Set the URI of the registry in a project that uses the ruleset.
3. Recalculate the integrity SHA and update that in the registry clone.
4. Run bazel shutdown first, then rebuild the project.
Is this actually the process to follow, or is that a local workflow that can be followed to allow one to use the ruleset "in place" (as --override_repository allows when using rulesets via the workspace)?
When adding rules_python in the latest version 0.14.0, Bazel fails to apply the patch:
external/rules_python.0.14.0/.tmp_remote_patches/module_dot_bazel.patch: Expecting more chunk line at line 10
However, when I apply the patch manually with the patch command, it works.
This seems to be the relevant code in bazel: https://github.com/bazelbuild/bazel/blob/bc087f49584a6a60a5acb3612f6d714e315ab8b5/src/main/java/com/google/devtools/build/lib/bazel/repository/PatchUtil.java#L404
It seems to have random OSPO people, jhfield, and some people who are no longer on the team.
Context: #334 (comment)
Currently the modules/ directory has a flat namespace, somewhat like npm or PyPI. However, I would recommend allowing a GitHub-like repo syntax: instead of foo, adopt a structure like github.com/bazelbuild/foo. This would allow a more conflict-free approach to names and naturally reflects the location where the dependency is hosted/managed.
Can we add https://github.com/bazelbuild/rules_docker to the registry too?
In the common case, ruleset releases will require a trivial PR to this repo to add the new release version. The trivial PR just copies the existing registration, with a bumped version number and content integrity hash.
Currently, maintainers like me have multiple ruleset releases per week, and it will be a lot of added burden to manually operate the BCR process to keep these up-to-date.
Proposal:
https://github.com/AcademySoftwareFoundation/openexr
OpenEXR can be used with Bazel 4.x and 5.x but does not have bzlmod support. I would be happy to see support for this.
This means that if you're trying to set up an on-prem registry with entries for resources that need authentication to access, the incorrect SHA is calculated for the archive, preventing the module from being used at all.
Not only does add_module.py depend on colorama (#46), but it also has an undeclared dependency on yaml.
If we really believe that bzlmod is sufficient for developers, this should be handled by a MODULE.bazel file that uses rules_python to import the required third-party deps, and then a simple bazel run //tools:add_module should be enough to bootstrap a new dependency. But configuring everything in the WORKSPACE.bazel file would be fine too :)
I did a quick grep of the BCR and noticed that a number of modules reference GitHub archives that are not guaranteed to be stable (see bazel-contrib/SIG-rules-authors#11 (comment) for context):
All these modules will have to be modified to be fetched from the /archive/refs/tags/<TAG>.tar.gz endpoint.
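A registry-wide audit could flag the unstable URLs with a simple pattern match. A sketch, assuming we only care about GitHub tag archives:

```python
# Check whether a source URL uses the /archive/refs/tags/<TAG> endpoint
# rather than a bare /archive/<ref> URL.
import re

TAG_ARCHIVE = re.compile(
    r"^https://github\.com/[^/]+/[^/]+/archive/refs/tags/[^/]+\.(tar\.gz|zip)$"
)

def uses_stable_tag_url(url):
    """Return True if `url` points at a GitHub tag archive endpoint."""
    return bool(TAG_ARCHIVE.match(url))
```

Running this over every source.json url field in modules/ would list exactly the entries that still need to be migrated.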
I bazelized fmt -> https://vertexwahn.de/2022/09/27/bazelizingfmtlib/
fmt is now also part of the bazel-central-registry. Maybe it makes sense to use https://github.com/fmtlib/fmt/blob/master/support/bazel/BUILD.bazel instead of creating our own build file via a patch.
gRPC uses com_envoyproxy_protoc_gen_validate through module extensions:
grpc_repo_deps_ext = use_extension("//bazel:grpc_deps.bzl", "grpc_repo_deps_ext")
use_repo(
grpc_repo_deps_ext,
"com_envoyproxy_protoc_gen_validate",
"com_google_googleapis",
"com_github_cncf_udpa",
"envoy_api",
)
However, these cannot be picked up transitively, presenting problems if we want to use com_envoyproxy_protoc_gen_validate in our own project.
Can we have grpc depend on com_envoyproxy_protoc_gen_validate as a module? That would, of course, require supporting com_envoyproxy_protoc_gen_validate as a module.
As part of #7, many Bazel modules were checked into the BCR with custom MODULE.bazel files and patches while the original projects don't natively support Bzlmod. To drive the Bzlmod migration in the community, we need help upstreaming the MODULE.bazel files and patches back to the original repositories.
The modules in the BCR that already natively support Bazel but are not yet migrated to Bzlmod are:
Note that the MODULE.bazel files and patches in the BCR for them only ensure the minimal use case works; fully migrating those projects to Bzlmod probably requires extra work.
https://github.com/protocolbuffers/protobuf
@Yannic says this is the next step to unblocking rules_proto usage in bzlmod.
https://github.com/apps/publish-to-bcr has been developed by the Bazel Rules SIG to make it easy to publish a project to the Bazel Central Registry. We should document and advocate its usage.
/cc @alexeagle
Dependencies of the Bazel project itself should be added into the Bazel Central Registry:
Blockers:
Notes:
I am trying to port a personal project to use bzlmod and the bazel-central-registry, and one annoyance I've found is that the version of re2 in this repo is from the main branch. The re2 project also maintains an abseil branch which adds a dependency on abseil and has additional features not present in main (for example, one that I rely on a lot is that the abseil branch has ubiquitous support for std::string_view, but main does not).
It seems reasonable to me that some people may want to use re2 without a dependency on abseil, but I'm wondering if there's a way to support the abseil branch since it adds a lot of useful functionality.
https://github.com/bazelbuild/bazel-skylib/tree/main/gazelle
bazelbuild/bazel-skylib#424 (comment)
Many rulesets depend on this to keep bzl_library targets up-to-date, including the rules-template: https://github.com/bazel-contrib/rules-template/blob/0bcaef52fda3e7bcfd8d294fd1a82e5a23a9ad02/BUILD.bazel#L6
When importing abseil-cpp, I get the following error:
[...]/external/abseil-cpp.20210324.2/absl/hash/BUILD.bazel:54:11: no such package '@com_google_googletest//': The repository '@com_google_googletest' could not be resolved: Repository '@com_google_googletest' is not visible from repository '@abseil-cpp.20210324.2' and referenced by '@abseil-cpp.20210324.2//absl/hash:hash_testing'
Is the dependency on googletest missing from https://github.com/bazelbuild/bazel-central-registry/blob/53e8c3ad84c9407df5d644fef680fe92f27e7a39/modules/abseil-cpp/20210324.2/MODULE.bazel?
As shown by #306, it's currently possible for the repository metadata to end up in a corrupted format.
I think the previous workflow we had, removed in #237, would have caught this (though more by accident than by design), so apparently the BuildKite presubmit checks aren't sufficient to catch this.
How do we want to deal with this? Set up a dedicated metadata.json validation workflow?
We are currently transitioning our (now deprecated) bazel-eomii-registry to become a fork of the bazel-central-registry. We want to upstream the modules that are not reliant on our rules_ll to this repo, since I believe they would be useful to more users.
The repos rules_m4, rules_flex, and rules_bison are essentially always used together, so I'm tracking progress for all three here. I intend to first add some proper testing that our original module files lacked, then submit the patched variants to this repo, and then upstream the patches to the original repos.
The repos in question are for instance used by Carbon and will in general likely be used by Bazel projects implementing programming languages. We also have a module for Carbon itself (very experimental, here), but testing it will likely not be straightforward, and Carbon is IMO too unstable to become a module for now.
The registry can be searchable and present metadata using permalink URLs.
https://bcr-web-ui-hobofan.netlify.app/ is a start.
To provide a better user experience, we should align the UI design of registry.bazel.build and bazel.build. However, the Bazel DevSite is not yet in its final form. We'll provide a style guide and illustration/icon assets when it's done.
https://github.com/ash2k/bazel-tools
Can somebody add Boost support to the BCR? Or would you accept a PR to add Boost?