bazel-central-registry's People

Contributors

aiuto, alexeagle, balestrapatrick, dzbarsky, fmeum, jmillikin, jondo2010, jpsim, jsharpe, junyer, keith, kormide, lalten, macandy13, mattyclarkson, mering, meteorcloudy, mgred, mmorel-35, oxidase, phaedon, pjk25, publish-to-bcr-bot[bot], shs96c, simplydanny, tetromino, vertexwahn, wep21, wyverald, yanndegat

bazel-central-registry's Issues

wanted: hdl/bazel_rules_hdl

Hi,

I'm also a Googler. I have a fairly extensive set of dependency support packages in bazel_rules_hdl that might be useful to upstream here.

The packages in there are all built hermetically from source; as such, they represent the full DAG of all the packages.

Is there any interest in migrating them here?

Tooling to validate presubmit.yml

It would be useful to have tooling that validates that a presubmit.yml file is correctly formatted and follows the expected schema. The correct format is not really documented anywhere, and the current workflow relies on copying and pasting from other modules.
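
For reference, here is a minimal sketch of what a presubmit.yml commonly looks like; the platform names and the build target below are illustrative assumptions rather than an authoritative schema:

matrix:
  platform: ["debian10", "macos", "ubuntu2004", "windows"]
tasks:
  verify_targets:
    name: "Verify build targets"
    platform: ${{ platform }}
    build_targets:
      - "@my_module//:some_target"

A validation tool could check exactly this kind of structure (known top-level keys, a non-empty task list, valid platform names) before a PR ever reaches CI.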

Option to ignore module overrides in deps

Particularly at this stage of testing bzlmod, it's common in my experience to make liberal use of the various bazel_dep overrides, like archive_override. It would be great to allow the build to simply ignore, or warn about, the overrides present in dependencies' MODULE.bazel files, particularly when an override pulls in other overrides. This won't be a big deal once it's easier to provide alternate bzlmod registries, but for now it makes testing cumbersome. Using patches in the overrides is an option, but then patches must exist for each step in the chain.
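
For context, a typical override looks like the following (the module name, URL, and hash are hypothetical); the request is for a flag that ignores or merely warns about such statements when they appear in a dependency rather than the root module:

bazel_dep(name = "rules_foo", version = "1.2.3")
archive_override(
    module_name = "rules_foo",
    urls = ["https://example.com/rules_foo-patched.tar.gz"],
    integrity = "sha256-<hash of the patched archive>",
    strip_prefix = "rules_foo-patched",
)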

Use scripts from tools directory within private central registry

Hi all, I am trying to use bzlmod and set up a private central registry. I found that the scripts in the tools directory are used in the process. Would it be possible to wrap these scripts as a separate module so that other private registries can easily pull updates to them?

Another problem is that my archive URLs are not publicly accessible and have to carry a token parameter when downloading. This doesn't seem to be supported at the moment?

Add additional module metadata.json fields

Right now the data contained in metadata.json is focused purely on technical package-management functionality.

Especially for discoverability, it would be nice to also have some additional metadata, for example:
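
A sketch of what an extended metadata.json could look like; the first four fields follow the current layout, while description, docs, and source_repository are purely illustrative suggestions, not part of any agreed schema:

{
  "homepage": "https://github.com/example/rules_foo",
  "maintainers": [{"name": "Jane Doe", "email": "jane@example.com", "github": "janedoe"}],
  "versions": ["1.0.0", "1.1.0"],
  "yanked_versions": {},
  "description": "Bazel rules for building Foo.",
  "docs": "https://example.github.io/rules_foo",
  "source_repository": "https://github.com/example/rules_foo"
}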

c-ares module: missing symbols

The BUILD file:

cc_binary(
    name = "async_dns",
    srcs = ["async_dns.cc"],
    deps = ["@c-ares//:ares"],
)

The C++ source file, async_dns.cc:

#include <ares.h>
#include <arpa/inet.h>
#include <ctype.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void state_cb(void *data, int s, int read, int write) {
  printf("Change state fd %d read:%d write:%d\n", s, read, write);
}

static void callback(void *arg, int status, int timeouts,
                     struct hostent *host) {
  if (!host || status != ARES_SUCCESS) {
    printf("Failed to lookup %s\n", ares_strerror(status));
    return;
  }

  printf("Found address name %s\n", host->h_name);
  char ip[INET6_ADDRSTRLEN];
  int i = 0;

  for (i = 0; host->h_addr_list[i]; ++i) {
    inet_ntop(host->h_addrtype, host->h_addr_list[i], ip, sizeof(ip));
    printf("%s\n", ip);
  }
}

static void wait_ares(ares_channel channel) {
  for (;;) {
    struct timeval *tvp, tv;
    fd_set read_fds, write_fds;
    int nfds;

    FD_ZERO(&read_fds);
    FD_ZERO(&write_fds);
    nfds = ares_fds(channel, &read_fds, &write_fds);
    if (nfds == 0) {
      break;
    }
    tvp = ares_timeout(channel, NULL, &tv);
    select(nfds, &read_fds, &write_fds, NULL, tvp);
    ares_process(channel, &read_fds, &write_fds);
  }
}

int main(void) {
  ares_channel channel;
  int status;
  struct ares_options options;
  int optmask = 0;

  status = ares_library_init(ARES_LIB_INIT_ALL);
  if (status != ARES_SUCCESS) {
    printf("ares_library_init: %s\n", ares_strerror(status));
    return 1;
  }
  // options.sock_state_cb_data;
  options.sock_state_cb = state_cb;
  optmask |= ARES_OPT_SOCK_STATE_CB;

  status = ares_init_options(&channel, &options, optmask);
  if (status != ARES_SUCCESS) {
    printf("ares_init_options: %s\n", ares_strerror(status));
    return 1;
  }

  status = ares_set_servers_csv(channel, "114.114.114.114");
  if (status != ARES_SUCCESS) {
    printf("ares_set_servers_csv: %s\n", ares_strerror(status));
    return 1;
  }

  ares_gethostbyname(channel, "baidu.com", AF_INET, callback, NULL);
  // ares_gethostbyname(channel, "google.com", AF_INET6, callback, NULL);
  wait_ares(channel);
  ares_destroy(channel);
  ares_library_cleanup();
  printf("fin\n");
  return 0;
}

Use Bazel to build the C++ source file:

$ bazelisk build -s :async_dns
INFO: Analyzed target //:async_dns (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
SUBCOMMAND: # //:async_dns [action 'Linking async_dns', configuration: 386057a8d24831c33aaf953f87aec3b0ebdbf5a779460907e5f2832f79a9e67a, execution platform: @local_config_platform//:host]
(cd /private/var/tmp/_bazel_jing/352d1d854f060ba5704eda8af920a5e3/execroot/__main__ && \
  exec env - \
    APPLE_SDK_PLATFORM=MacOSX \
    APPLE_SDK_VERSION_OVERRIDE=11.1 \
    PATH='/Users/jing/Library/Caches/bazelisk/downloads/bazelbuild/bazel-5.1.0-darwin-x86_64/bin:/usr/local/opt/node@16/bin:/Users/jing/.rbenv/shims:/Users/jing/.tiup/bin:/Users/jing/opt/anaconda3/condabin:/usr/local/sbin:/Users/jing/code/github/zk-code/scripts:/Users/jing/tools/apache-zookeeper-3.5.5-bin/bin:/Users/jing/code/github/trace/sky/skywalking-cli/bin:/Users/jing/tools/apache-skywalking-apm-bin-es7/bin:/usr/local/opt/ruby/bin:/Users/jing/code/github/devops/fluent-bit/build/bin:/Users/jing/tools/apache-maven-3.6.3/bin:/Users/jing/.gem/bin:/Users/jing/.cargo/bin:/Users/jing/code/github/network/caddy/cmd/caddy:/usr/local/opt/coreutils/libexec/gnubin:/Users/jing/tools/apache-skywalking-apm-bin-es7/bin:/usr/local/opt/bison/bin:/usr/local/opt/findutils/libexec/gnubin:/usr/local/opt/gnu-sed/libexec/gnubin:/Users/jing/tools/mongodb-osx-x86_64-4.0.2/bin:/usr/local/opt/[email protected]/bin:/Users/jing/tools/spark-2.2.0-bin-hadoop2.7/bin:/Users/jing/bin:/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/Library/Apple/usr/bin:/Applications/Wireshark.app/Contents/MacOS:/Users/jing/go/bin:/Users/jing/.pub-cache/bin:/Applications/Visual Studio Code.app/Contents/Resources/app/bin:/usr/local/opt/fzf/bin' \
    XCODE_VERSION_OVERRIDE=12.4.0.12D4e \
    ZERO_AR_DATE=1 \
  external/local_config_cc/cc_wrapper.sh @bazel-out/darwin-fastbuild/bin/async_dns-2.params)
# Configuration: 386057a8d24831c33aaf953f87aec3b0ebdbf5a779460907e5f2832f79a9e67a
# Execution platform: @local_config_platform//:host
ERROR: /Users/jing/code/xdf/sac/dns-prober/BUILD.bazel:47:11: Linking async_dns failed: (Aborted): cc_wrapper.sh failed: error executing command external/local_config_cc/cc_wrapper.sh @bazel-out/darwin-fastbuild/bin/async_dns-2.params

Use --sandbox_debug to see verbose messages from the sandbox
Undefined symbols for architecture x86_64:
  "_ares__freeaddrinfo_cnames", referenced from:
      _ares_parse_a_reply in libares.lo(ares_parse_a_reply.o)
      _ares_parse_aaaa_reply in libares.lo(ares_parse_aaaa_reply.o)
  "_ares__freeaddrinfo_nodes", referenced from:
      _ares_parse_a_reply in libares.lo(ares_parse_a_reply.o)
      _ares_parse_aaaa_reply in libares.lo(ares_parse_aaaa_reply.o)
  "_ares__parse_into_addrinfo2", referenced from:
      _ares_parse_a_reply in libares.lo(ares_parse_a_reply.o)
      _ares_parse_aaaa_reply in libares.lo(ares_parse_aaaa_reply.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Error in child process '/usr/bin/xcrun'. 1
external/local_config_cc/cc_wrapper.sh: line 69: 72006 Abort trap: 6           "$(/usr/bin/dirname "$0")"/wrapped_clang "$@"
Target //:async_dns failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 0.350s, Critical Path: 0.11s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully

The build shows that some symbols, such as _ares__freeaddrinfo_nodes, are missing, and a check of libares.lo confirms that _ares__freeaddrinfo_nodes is undefined:

$ cat bazel-out/darwin-fastbuild/bin/async_dns-2.params
-lc++
-fobjc-link-runtime
-Wl,-S
-o
bazel-out/darwin-fastbuild/bin/async_dns
bazel-out/darwin-fastbuild/bin/_objs/async_dns/async_dns.o
-Wl,-force_load,bazel-out/darwin-fastbuild/bin/external/c-ares.1.16.1/libares.lo
-headerpad_max_install_names
-no-canonical-prefixes
-target
x86_64-apple-macosx
-mmacosx-version-min=11.1
-lc++
-target
x86_64-apple-macosx

$ nm -g bazel-out/darwin-fastbuild/bin/external/c-ares.1.16.1/libares.lo | ag ares__freeaddrinfo_nodes
no symbols
no symbols
no symbols
no symbols
no symbols
no symbols
                 U _ares__freeaddrinfo_nodes
                 U _ares__freeaddrinfo_nodes

ares__freeaddrinfo_nodes is defined in https://github.com/c-ares/c-ares/blob/cares-1_16_1/ares_freeaddrinfo.c, but libares.lo does not include it.

$ ar -t bazel-out/darwin-fastbuild/bin/external/c-ares.1.16.1/libares.lo
__.SYMDEF SORTED
ares__close_sockets.o
ares__get_hostent.o
ares__read_line.o
ares__timeval.o
ares_android.o
ares_cancel.o
ares_create_query.o
ares_data.o
ares_destroy.o
ares_expand_name.o
ares_expand_string.o
ares_fds.o
ares_free_hostent.o
ares_free_string.o
ares_getenv.o
ares_gethostbyaddr.o
ares_gethostbyname.o
ares_getnameinfo.o
ares_getopt.o
ares_getsock.o
ares_init.o
ares_library_init.o
ares_llist.o
ares_mkquery.o
ares_nowarn.o
ares_options.o
ares_parse_a_reply.o
ares_parse_aaaa_reply.o
ares_parse_mx_reply.o
ares_parse_naptr_reply.o
ares_parse_ns_reply.o
ares_parse_ptr_reply.o
ares_parse_soa_reply.o
ares_parse_srv_reply.o
ares_parse_txt_reply.o
ares_platform.o
ares_process.o
ares_query.o
ares_search.o
ares_send.o
ares_strcasecmp.o
ares_strdup.o
ares_strsplit.o
ares_strerror.o
ares_timeout.o
ares_version.o
ares_writev.o
bitncmp.o
inet_net_pton.o
inet_ntop.o
windows_port.o

The BUILD file for the c-ares module does not include ares_freeaddrinfo.c in its srcs:

+ srcs = [
+ "ares__close_sockets.c",
+ "ares__get_hostent.c",
+ "ares__read_line.c",
+ "ares__timeval.c",
+ "ares_android.c",
+ "ares_cancel.c",
+ "ares_create_query.c",
+ "ares_data.c",
+ "ares_destroy.c",
+ "ares_expand_name.c",
+ "ares_expand_string.c",
+ "ares_fds.c",
+ "ares_free_hostent.c",
+ "ares_free_string.c",
+ "ares_getenv.c",
+ "ares_gethostbyaddr.c",
+ "ares_gethostbyname.c",
+ "ares_getnameinfo.c",
+ "ares_getopt.c",
+ "ares_getsock.c",
+ "ares_init.c",
+ "ares_library_init.c",
+ "ares_llist.c",
+ "ares_mkquery.c",
+ "ares_nowarn.c",
+ "ares_options.c",
+ "ares_parse_a_reply.c",
+ "ares_parse_aaaa_reply.c",
+ "ares_parse_mx_reply.c",
+ "ares_parse_naptr_reply.c",
+ "ares_parse_ns_reply.c",
+ "ares_parse_ptr_reply.c",
+ "ares_parse_soa_reply.c",
+ "ares_parse_srv_reply.c",
+ "ares_parse_txt_reply.c",
+ "ares_platform.c",
+ "ares_process.c",
+ "ares_query.c",
+ "ares_search.c",
+ "ares_send.c",
+ "ares_strcasecmp.c",
+ "ares_strdup.c",
+ "ares_strsplit.c",
+ "ares_strerror.c",
+ "ares_timeout.c",
+ "ares_version.c",
+ "ares_writev.c",
+ "bitncmp.c",
+ "inet_net_pton.c",
+ "inet_ntop.c",
+ "windows_port.c",
+ ],
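
A likely fix, sketched below, is to add the getaddrinfo-related sources to srcs. ares_freeaddrinfo.c defines ares__freeaddrinfo_cnames and ares__freeaddrinfo_nodes; ares__parse_into_addrinfo.c is assumed (based on the c-ares 1.16.1 source tree) to define ares__parse_into_addrinfo2, and further files such as ares_getaddrinfo.c may be needed if they pull in additional symbols:

srcs = [
    # ...the existing sources listed above, plus:
    "ares__parse_into_addrinfo.c",  # assumed home of ares__parse_into_addrinfo2
    "ares_freeaddrinfo.c",  # defines ares__freeaddrinfo_cnames / ares__freeaddrinfo_nodes
],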

Add CODEOWNER for each module

Ideally the maintainers of each module should be added as Code Owners of the module directory, so that pull requests for that module can be assigned to them for review automatically.

@Wyverald @alexeagle WDYT?

metadata should include a way to navigate to the tag

What happened?

bzlmod always uses semver versions, e.g. 1.2.3. However, git tags are mixed: some projects, like rules_python, tag releases as 1.2.3, while others, like aspect_bazel_lib, prefix them with a v, as in v1.2.3 (see https://semver.org/#is-v123-a-semantic-version).

Because of this, the code in the BCR UI that navigates to the release notes is going to be broken in one of the two cases: bazel-contrib/bcr-ui#37

Note: the release-notes pointer is important because many rulesets require more lines in MODULE.bazel than the copy-paste bazel_dep line that the BCR UI gives you.

Version

N/A

How to reproduce

No response

Any other information?

I think we simply need to collect more metadata from module maintainers when they register a module. Hopefully the tagging scheme is consistent across all releases, so this can be stored unversioned in the module's metadata.json file.

A few options to consider:

  1. {"tags_use_v_prefix": true}
  2. {"tagging_scheme": "v{version}"}
  3. {"release_notes": "https://github.com/bazelbuild/rules_nodejs/releases/tag/{version}"}

A nice thing about option 3 is that it generalizes to projects that don't use GitHub or that publish release notes somewhere else.

Content of `MODULE.bazel` in the BCR

I am working on porting a ruleset to Bazel modules; it uses dev dependencies, e.g. to support the tests for the presubmit checks. I am not sure whether the MODULE.bazel checked into the BCR should

  1. be identical to the one in the rulesets repo, including dev dependencies or
  2. contain only the non-dev dependencies, as there are the only ones relevant for dependency resolution when loaded as a module.

Is there a functional difference between the two, and if not, is one of the approaches recommended?
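
For what it's worth, Bzlmod already distinguishes dev dependencies in MODULE.bazel itself: anything marked dev_dependency = True is ignored whenever the module is not the root module. A minimal sketch (module names and versions here are hypothetical):

module(name = "rules_foo", version = "1.0.0")

bazel_dep(name = "bazel_skylib", version = "1.3.0")

# Only used for the ruleset's own tests; ignored when rules_foo is consumed as a dependency.
bazel_dep(name = "stardoc", version = "0.5.3", dev_dependency = True)

Given that, checking in the full MODULE.bazel (option 1) should be functionally equivalent to stripping the dev dependencies (option 2) as far as downstream dependency resolution is concerned.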

What are the policies for rapidly changing modules and non-standard release versions?

Most of our modules only change weekly or monthly, but sometimes we cut several releases in a day. Are there any guidelines on how frequently things can change in the central registry?

Also, what is the policy on using non-tag/non-release versions of repos? We aggressively track upstream compiler toolchains, so downstream repos tend to raise deprecation warnings or simply fail to build. We currently work around these issues by vendoring our own modules, cut from the patches that fix our issues.

We now want to try to be more in sync with the central registry, but the versions of some packages do not yet contain the fixes we need.

Do we need to maintain our non-tag/non-release modules separately from the central registry (i.e. in our own fork), or is it possible for us to upstream these modules here? For instance, abseil-cpp will raise a million warnings when built with clang 16 (probably 15 as well). The fixes for these warnings have already been merged in abseil main, but we do not want to wait until the next abseil release is cut.

Supporting more than standard releases would likely require versioning like 0.0.0-yyyymmdd.n-<commit-hash> (where n is 0, 1, 2, etc., if e.g. a hotfix is pushed on the same day) or something similar. I've seen that some modules are already versioned like this (e.g. the upb module). What would the compatibility-level increment be for releases like that? I could imagine something like compatibility_level=yyyymmdd?
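
As a sketch of that idea (the module name, date, and commit hash below are made up), the module() declaration for a snapshot release might look like:

module(
    name = "some_module",
    version = "0.0.0-20221005.0-abc1234",
    compatibility_level = 20221005,
)

One caveat: modules at different compatibility levels cannot be unified during version resolution, so bumping the level with every snapshot would make any two snapshots mutually incompatible; a coarser scheme might be needed in practice.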

wanted: llvm/llvm-project

Users working with (or on) LLVM may require very recent versions of the repo. We have module patches for the llvm-bazel-overlay but found that it would be better to add a module here first before upstreaming patches to LLVM main.

Would it be OK for us to add a module with patches here so that we can upstream the patches later to the main LLVM repo? We would want to update such a module probably around ~bi-weekly, tagged like 16.0.0-20221005.0-asdfasdf.

A big question is how to avoid hammering CI with such a module. We would only run the testing infrastructure that is part of the llvm-project-overlay, not the entire LLVM test infrastructure. We should also keep in mind that the llvm-project-overlay is ~5 projects in one. However, running even the small subset of the LLVM tests would still likely be the largest CI job of all the modules currently in the bazel-central-registry. My initial naive estimate for running that small test subset is ~500 single-core build minutes per architecture/OS combination 😅 Maybe only building a single target (probably @llvm-project//clang:clang) and only building for x86_64 Linux initially would be an option?

Migrate existing modules to the new toolchain registration API

See bazelbuild/bazel#15829

This:

module(
  name = "foo",
  version = "2.0",
  toolchains_to_register = ["@toolchain_repo//:toolchain"],
)

ext = use_extension("//:extensions.bzl", "ext")
ext.some_tag()
use_repo(ext, "toolchain_repo")

should become this:

module(
  name = "foo",
  version = "2.0",
)

ext = use_extension("//:extensions.bzl", "ext")
ext.some_tag()
use_repo(ext, "toolchain_repo")
register_toolchains("@toolchain_repo//:toolchain")

No documentation for how to iteratively develop modules

There is no documentation linked from this repo on how to iteratively develop modules. Could there please be a link to something to help people adopt bzlmod?

For reference, the process seems to be:

  1. Create a clone of the registry
  2. Add your ruleset to that using ./tools/add_module.py
  3. Point to the file URI of the registry in a project that uses the ruleset
  4. Realise that you need to make a change to the ruleset
  5. Make the change, push to a remote fork of the ruleset
  6. "Somehow" recalculate the integrity SHA and update that in the registry clone
  7. Rebuild the project
  8. Realise that you need to run bazel shutdown first, then rebuild the project

Is this actually the process to follow, or is there a local workflow that allows one to use the ruleset "in place" (as --override_repository allows when using rulesets via the WORKSPACE)?
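
For step 6, one way to recalculate the integrity value by hand is to hash the new archive and base64-encode the digest (the URL and registry path below are placeholders):

$ curl -L https://github.com/me/rules_foo/archive/refs/tags/v0.0.1.tar.gz \
    | openssl dgst -sha256 -binary | base64
# prefix the output with "sha256-" and put it in the module version's source.json

$ bazel shutdown   # step 8: the running server caches registry lookups
$ bazel build --registry=file:///path/to/registry-clone //...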

Patch fails in rules_python @0.14.0

When adding rules_python at the latest version, 0.14.0, Bazel fails to apply the patch:

external/rules_python.0.14.0/.tmp_remote_patches/module_dot_bazel.patch: Expecting more chunk line at line 10.

However, when I apply the patch manually with the patch command, it works.

This seems to be the relevant code in bazel: https://github.com/bazelbuild/bazel/blob/bc087f49584a6a60a5acb3612f6d714e315ab8b5/src/main/java/com/google/devtools/build/lib/bazel/repository/PatchUtil.java#L404
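
That error comes from Bazel's built-in patch implementation and generally indicates that a hunk ended before the parser had read as many lines as the @@ header declared. A well-formed hunk must have a body that matches the header's counts exactly, e.g. (hypothetical):

@@ -1,3 +1,4 @@
 context line
-removed line
+replacement line
+added line
 context line

GNU patch is more forgiving about such mismatches (and about trailing-whitespace and newline quirks), which could explain why the patch applies manually but fails inside Bazel.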

Namespace

Currently the modules/ directory has a flat namespace, somewhat like npm or PyPI. However, I would recommend allowing a GitHub-like repo syntax: instead of foo, adopt a structure like github.com/bazelbuild/foo. This would allow a more conflict-free approach to naming and naturally reflects the location where the dependency is hosted/managed.

Automate release mirroring

In the common case, ruleset releases will require a trivial PR to this repo to add the new release version. The trivial PR just copies the existing registration, with a bumped version number and content integrity hash.

Currently, maintainers like me have multiple ruleset releases per week, and it will be a lot of added burden to manually operate the BCR process to keep these up-to-date.

Proposal:

  • add a GH Actions automation on a cron
  • for some subset of modules (maybe something in the modules/foo/metadata.json indicating "auto_mirror")
  • check if there's a newer release of the module than we currently have in the registry
  • if so: create a PR that contains a commit which adds the new module version
  • if the PR is green, and authored by the bot, then maintainers here can trivially approve and merge, or we can decide to let them auto-merge
  • [stretch] and also: do the mirroring to the mirror.bazel.build repo so that the ruleset can have more reliable delivery

`./tools/add_module.py` depends on `yaml` but this is not documented

Not only does add_module.py depend on colorama (#46), but it also has an undeclared dependency on yaml (PyYAML).

If we really believe that bzlmod is sufficient for developers, this should be handled by a MODULE.bazel file that uses rules_python to import the required third party deps, and then a simple bazel run //tools:add_module should be enough to bootstrap a new dependency. But configuring everything in the WORKSPACE.bazel file would be fine too :)
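
A rough sketch of what that could look like, assuming a pip hub repo named @pip set up via rules_python with pyyaml and colorama pinned in a requirements file (all names here are hypothetical):

load("@pip//:requirements.bzl", "requirement")
load("@rules_python//python:defs.bzl", "py_binary")

py_binary(
    name = "add_module",
    srcs = ["add_module.py"],
    deps = [
        requirement("colorama"),
        requirement("pyyaml"),
    ],
)

With that in place, bazel run //tools:add_module would pull in the third-party deps without any manual pip install.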

wanted: com_envoyproxy_protoc_gen_validate

gRPC uses com_envoyproxy_protoc_gen_validate through a module extension:

grpc_repo_deps_ext = use_extension("//bazel:grpc_deps.bzl", "grpc_repo_deps_ext")

use_repo(
    grpc_repo_deps_ext,
    "com_envoyproxy_protoc_gen_validate",
    "com_google_googleapis",
    "com_github_cncf_udpa",
    "envoy_api",
)

However, these repos cannot be picked up transitively, which presents problems if we want to use com_envoyproxy_protoc_gen_validate in our own project.

Can we have grpc depend on com_envoyproxy_protoc_gen_validate as a module? That would, of course, require first supporting com_envoyproxy_protoc_gen_validate as a module.

Upstream Bzlmod changes to original repositories

As part of #7, many Bazel modules were checked into the BCR with custom MODULE.bazel files and patches even though the original projects don't natively support Bzlmod. To drive the Bzlmod migration in the community, we need help upstreaming the MODULE.bazel files and patches back to the original repositories.

The modules in the BCR that already natively support Bazel but have not yet migrated to Bzlmod are:

  • bazel_skylib
  • protobuf
  • platforms
  • rules_proto
  • rules_pkg
  • rules_python
  • rules_nodejs
  • rules_license
  • grpc
  • glog
  • googletest
  • gflags
  • upb
  • c-ares
  • abseil-cpp
  • re2
  • boringssl
  • stardoc

Note that the MODULE.bazel files and patches in the BCR for these projects only ensure that the minimal use case works; fully migrating those projects to Bzlmod probably requires extra work.

Modularize core Bazel dependencies

Dependencies of the Bazel project itself should be added into the Bazel Central Registry:

  • bazel_skylib
  • bazel_skydoc
  • com_google_protobuf
  • platforms
  • rules_cc
  • rules_java
  • rules_proto
  • rules_pkg
  • rules_python
  • rules_nodejs
  • rules_sass
  • grpc
  • upb
  • c-ares
  • zlib
  • abseil-cpp
  • re2
  • boringssl

Blockers:

Notes:

  • Currently we are writing MODULE.bazel files for those projects and checking them into the BCR; that is fine because Bzlmod doesn't read MODULE.bazel from the source archive. But ideally they should be upstreamed.

wanted: re2-abseil

I am trying to port a personal project to use bzlmod and the bazel-central-registry, and one annoyance I've found is that the version of re2 in this repo is from the main branch. The re2 project also maintains an abseil branch, which adds a dependency on abseil and has additional features not present in main (for example, one feature I rely on a lot is the abseil branch's ubiquitous support for std::string_view, which main lacks).

It seems reasonable to me that some people may want to use re2 without a dependency on abseil, but I'm wondering if there's a way to support the abseil branch since it adds a lot of useful functionality.

wanted: bazel_skylib_gazelle_plugin

Module location

https://github.com/bazelbuild/bazel-skylib/tree/main/gazelle

Link to bzlmod issue in the module's repository

bazelbuild/bazel-skylib#424 (comment)

Any other context to provide?

Many rulesets depend on this to keep bzl_library targets up-to-date, including the rules-template https://github.com/bazel-contrib/rules-template/blob/0bcaef52fda3e7bcfd8d294fd1a82e5a23a9ad02/BUILD.bazel#L6

abseil-cpp is missing a dependency on googletest?

When importing abseil-cpp, I get the following error:

[...]/external/abseil-cpp.20210324.2/absl/hash/BUILD.bazel:54:11: no such package '@com_google_googletest//': The repository '@com_google_googletest' could not be resolved: Repository '@com_google_googletest' is not visible from repository '@abseil-cpp.20210324.2' and referenced by '@abseil-cpp.20210324.2//absl/hash:hash_testing'

Is the dependency on googletest missing from https://github.com/bazelbuild/bazel-central-registry/blob/53e8c3ad84c9407df5d644fef680fe92f27e7a39/modules/abseil-cpp/20210324.2/MODULE.bazel?
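
If it is indeed missing, the fix would be roughly the following bazel_dep in abseil-cpp's MODULE.bazel (the googletest version here is a guess); repo_name preserves the @com_google_googletest label that abseil's BUILD files reference:

bazel_dep(
    name = "googletest",
    version = "1.11.0",
    repo_name = "com_google_googletest",
)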

Verify metadata.json integrity in CI?

As shown by #306, it's currently possible for the repository metadata to end up in a corrupted format.

I think the previous workflow, which was removed in #237, would have caught this (though more by accident than by design), so apparently the BuildKite presubmit checks aren't sufficient to catch this.

How do we want to deal with this? Set up a dedicated metadata.json validation workflow?

wanted: jmillikin/rules_{bison, m4, flex}

We are currently transitioning our (now deprecated) bazel-eomii-registry to become a fork of the bazel-central-registry. We want to upstream the modules that do not rely on our rules_ll to this repo, since I believe they would be useful to more users.

The repos rules_m4, rules_flex and rules_bison are essentially always used together, so I'm tracking progress for all three here. I intend to first add some proper testing that our original module files lacked. Then I'll submit the patched variants to this repo and then upstream the patches to the original repos.

The repos in question are used by Carbon, for instance, and will likely be used by Bazel projects implementing programming languages in general. We also have a module for Carbon itself (very experimental, here), but testing it will likely not be straightforward, and Carbon is IMO too unstable to become a module for now.

Align registry.bazel.build style with Bazel DevSite

To provide a better user experience, we should align the UI design of registry.bazel.build with bazel.build. However, the Bazel DevSite is not yet in its final form. We'll provide a style guide and illustration/icon assets when it's done.
