Comments (28)
All right... the above two commits seem to get me most of the way there. Things build, etc. Was a bit of a slog.
from community.
I'm game to try it. Like I said, I'm pretty sure that whatever we pick, we'll come back in a few months with some code written and apply a bit of sed, but I do think we want globally unique names: there are many use cases where these will be linked together, for both recommended and unrecommended reasons. Let's not set ourselves up for accidental name collisions.
Also, with C++17, the cost of namespaces (in terms of keystrokes) is a lot lower.
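For illustration, here is what the C++17 nested-namespace syntax buys (the namespace names below are hypothetical, not a decided choice for the project):

```cpp
#include <string>

// With C++17 nested namespace definitions, opening a deeply nested
// namespace costs one line instead of three nested `namespace` blocks.
// The names here are illustrative only.
namespace openxla::compiler::nvgpu {

// A trivial function just to show how little boilerplate the nesting costs.
std::string componentName() { return "nvgpu"; }

}  // namespace openxla::compiler::nvgpu
```

Pre-C++17, the same nesting required `namespace openxla { namespace compiler { namespace nvgpu { ... }}}`.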
> Will we also have some kind of `openxla-base` (`util`, ...) for libraries shared between different "backends", or is the plan to keep shared code in the iree repo?

That is being discussed, but we are biasing towards getting moving at the moment over busting out dependencies. We have some work to do to get the dev/versioning workflow going for the level of things we have, and I'd rather get some more mileage on that before we go too crazy with a lot of repositories.
> Will we depend on absl/tsl? E.g. logging, Status, StatusOr inside the nvgpu compiler/runtime? Or in the compiler use LLVM logging (almost non-existent), and in the runtime use IREE logging (no idea what its status is)?

The further "core-ward" we go, we have no plan to depend on absl/tsl, and I would be somewhat resistant to doing so because they have both proven to be problematic (so much so that we excised them after thinking "how bad could it be?").
Concrete thoughts...
I don't think we should be mixing universes in the compiler code; we need to "build up" from LLVM rather than grafting on other base libraries.
The runtime code for nvgpu has some more give to it from a dependency standpoint, but for the level of things expected to be in there, I would like to avoid the complexity that comes from taking complicated deps if possible.
Some of this stuff is preference and some has proven to be more trouble than it is worth in the past... The hard line that we can't cross in this repo is that dependencies must be thin and must have a well supported CMake build.
> Will this sit directly under https://github.com/openxla as in `openxla/openxla-nvgpu`?

Yes, that is what I'm proposing.
> Does that translate to "write access"?

Yes (which we already do on these repos for other contributors), but I tried to phrase it as more general. Being a "component maintainer" in OpenXLA parlance grants some other privileges in terms of overall openxla direction/management.
> What will be the C++ namespace for all the new code in this project? `::xla::...`?

Not sure I would speak to "all of the new code", but historically, dialects and components of IREE take a namespace like `::mlir::iree_compiler::IREE::<COMPONENT>`, which nests well by letting everything resolve components by `IREE::`. For the record, I don't love that it is all rooted under `mlir`, but it has been that way for a long time and could be cleaned up.
If not extending that namespacing scheme, I'd encourage at least something similar: `::openxla_compiler::XLA`. I've deviated from this a couple of times in the past and always regretted it and come back to normalize.
I would be concerned about plugins and target-specific components redefining symbols in a shared namespace: I'm not sure what eliding the target from the namespace buys us in practice?
(Nesting compilers under `::mlir` (as in `::mlir::mhlo::`, for example) is nice in that you get an implicit `using namespace mlir;` (maybe just a workaround for the google style banning `using namespace`?), but we're still heavily using a specific sub-namespace to compartmentalize.)
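To illustrate the redefinition concern with a minimal sketch (all project and function names here are invented): two targets can define the same entry point only if each lives in its own namespace; had both definitions landed in one shared namespace, the duplicate definition would be an ODR violation at link time.

```cpp
#include <string>

// Hypothetical per-target namespaces. If both backends defined
// createPipeline() in a single shared namespace, the two definitions
// would collide; scoping by target keeps the symbols distinct.
namespace openxla::nvgpu {
std::string createPipeline() { return "nvgpu-pipeline"; }
}  // namespace openxla::nvgpu

namespace openxla::amdgpu {
std::string createPipeline() { return "amdgpu-pipeline"; }
}  // namespace openxla::amdgpu
```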
> maybe just a workaround for the google style banning `using namespace`

(ftr - we're not exporting such legacy rules to our new OSS projects)
Never waste a good opportunity to bike-shed 😄
Logging, as brought up by @ezhulenev, caught my eye. For the compiler, I would suggest we try to use the diagnostic handler infrastructure as much as possible and log warnings/errors there. That will force us to provide messages with good context.
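As a rough illustration of the handler pattern being suggested (this is a toy sketch, not MLIR's actual diagnostic API): warnings and errors flow through a registered callback that carries location context, instead of being printed ad hoc.

```cpp
#include <functional>
#include <string>

// Toy diagnostic-handler sketch (names invented for illustration).
enum class Severity { Warning, Error };

struct Diagnostic {
  Severity severity;
  std::string location;  // e.g. "model.mlir:12:3"
  std::string message;
};

class DiagnosticEngine {
 public:
  using Handler = std::function<void(const Diagnostic&)>;

  // Each integration installs its own handler (print, collect, abort...).
  void setHandler(Handler h) { handler_ = std::move(h); }

  // All warnings/errors funnel through here, so context is mandatory.
  void emit(Severity s, std::string loc, std::string msg) {
    if (handler_) handler_({s, std::move(loc), std::move(msg)});
  }

 private:
  Handler handler_;
};
```

The point of the pattern is that every message arrives with a location attached, which is what forces "messages with good context".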
For "trace what this does" debugging use cases, I agree that `LLVM_DEBUG` is not great but will work for now. We will need to replace this later with something that has severity and other bells and whistles, but that can be done. I am used to the `VLOG` interface, and we could have a shim that maps it to `LLVM_DEBUG` if we decide we already care. Maybe IREE wants to provide this akin to what `tsl` does, so that different integrations can swap in what fits their needs. Depending on `tsl` just for that seems a bit heavy.
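A shim like the one described might look roughly like this (entirely hypothetical names; a real version would forward to `llvm::dbgs()` under `LLVM_DEBUG` rather than to `std::cout`):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical VLOG-style shim: a severity-gated stream logger.
static int g_vlog_level = 1;  // messages with level <= this are emitted

struct LogLine {
  std::ostringstream stream;
  explicit LogLine(const std::string& prefix) { stream << prefix; }
  ~LogLine() { std::cout << stream.str() << "\n"; }  // flushed at end of statement
};

// Usage: OPENXLA_VLOG(2) << "some message";
// When the level is too high, the LogLine is never constructed at all.
#define OPENXLA_VLOG(level) \
  if ((level) <= g_vlog_level) LogLine("[v" #level "] ").stream
```

Swapping the destructor body is the only change needed to re-route output, which is the "different integrations can swap in what fits" idea.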
Regarding namespaces, I agree with @stellaraccident: let's bias toward progress rather than the perfect choice for now. Having said that, I personally would use `openxla::compiler::nvgpu` for the CUDA compiler. We will have more compilers sooner rather than later, and having them in different namespaces helps my mental model of where code belongs. Might be xla bias, and I have no strong opinion. Just some 🚲 🛖 🖌️
Excited to see this project spin up!
Both of the initial commits have landed.
@theadactyl Can we get someone with admin rights on the org to create the openxla-nvgpu repo? It should be populated with https://github.com/iree-org/iree-samples/tree/main/openxla-nvgpu
Coming in late, and I think just agreeing with what's already been decided, but strong positions on: not taking an absl or tsl dep in the compiler, and scoping the [sub]namespace to the project (so not just "iree" or "openxla" or "compiler").
I do think that the Google (Titus) advice on not having deeply nested namespaces is pretty good and not just a weird Google thing: https://abseil.io/tips/130. Every level of nesting gives us a fun new place for collisions. So I would vote for something like `openxla_nvgpu` as a top-level namespace. Probably nesting it under `mlir` would be fine, since we shouldn't define things that conflict with MLIR anyway. I think there was at some point a limitation of ODS that meant you had to define things in the `mlir` namespace, but I also suspect that this is a relic of the Google C++ style guide ban on `using namespace mlir`, which, while not completely misguided IMO, seems excessively strict. `using namespace mlir` seems better than sneakily getting the same effect by defining everything nested under that namespace. I guess the difference is that you can only do the latter for one namespace. I think we can probably limit ourselves to `using namespace mlir` and `using namespace llvm` and then not have to nest our namespaces, unless there's some other reason to do so.
Fantastic! Looking forward to the collaboration.
Will this sit directly under https://github.com/openxla as in `openxla/openxla-nvgpu`?
cc @nluehr
Does that translate to “write access”?
What will be the C++ namespace for all the new code in this project? `::xla::...`?
Proposed Directory Structure
`openxla-nvgpu` will have a similar directory layout to IREE, upon which it depends:

    compiler/
      src/
        openxla_nvgpu/
          Dialects/
    runtime/
      src/
        openxla_nvgpu/
    build_tools/
I'm curious why the sub-directory names are repeating "openxla". That is, why `compiler/src/openxla_nvgpu` instead of `compiler/src/nvgpu`?
Even further, if the repository itself is all about `nvgpu`, why repeat it at all? What else will be inside `compiler/src` in this repo?
It is just following the IREE convention of the include directory being rooted at `src/` and wanting to arrive at fully qualified include paths (i.e. `#include "openxla_nvgpu/..."`).
I've more or less resigned myself to the fact that the least bad thing is to repeat yourself exactly once in service of having globally unique include paths.
I'm not a big fan of the `mlir` top-level namespace, nor of `iree_compiler`, as nvgpu is really just a project on top of the IREE compiler; I don't like the `openxla_compiler` namespace either, and I don't have any suggestions :)
What if I want to write a custom VM module under `runtime/src/openxla_nvgpu`? In IREE it's mostly under the `iree` and `iree::vm` namespaces; should it be `::openxla` or regular `::xla` here?
Oh, I missed the point about the include directory convention; that makes sense!
(I believe this convention is in place in mlir-hlo as well; I remember a rationale doc for the path structure; I think @burmako originated it, but I don't know if it is public.)
> I'm not a big fan of the `mlir` top-level namespace, nor of `iree_compiler`, as nvgpu is really just a project on top of the IREE compiler; I don't like the `openxla_compiler` namespace either, and I don't have any suggestions :)
>
> What if I want to write a custom VM module under `runtime/src/openxla_nvgpu`? In IREE it's mostly under the `iree` and `iree::vm` namespaces; should it be `::openxla` or regular `::xla` here?

Just throwing things out there... `::openxla::nvgpu`? We're going to regret whatever we choose. Might as well at least not start with a namespace that is already used.
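As a sketch of what the `::openxla::nvgpu` option would look like for runtime code (the class and its method are made up for illustration; this is not an existing IREE VM module API):

```cpp
#include <string>

// Hypothetical runtime component scoped under the proposed namespace.
// Fully qualified, it reads ::openxla::nvgpu::CudnnModule, which stays
// unambiguous next to iree::vm symbols.
namespace openxla::nvgpu {

class CudnnModule {
 public:
  // Module names in this sketch mirror the namespace path.
  std::string name() const { return "openxla.nvgpu.cudnn"; }
};

}  // namespace openxla::nvgpu
```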
I'd go with `::openxla`, `::openxla::compiler`, `::openxla::runtime`, etc.? And not mention nvgpu at all. Do we see a single project depending on multiple "openxla compilers", e.g. `openxla-nvgpu` and `openxla-amdgpu`? Or linking them into a single binary?
Another directory naming question: why `compiler/src/openxla_nvgpu` and not `compiler/src/openxla/nvgpu`?
Will we also have some kind of `openxla-base` (`util`, ...) for libraries shared between different "backends", or is the plan to keep shared code in the iree repo?
Will we depend on absl/tsl? E.g. logging, Status, StatusOr inside the nvgpu compiler/runtime? Or in the compiler use LLVM logging (almost non-existent), and in the runtime use IREE logging (no idea what its status is)?
What will be the license? `Copyright 2023 The OpenXLA Authors ... Licensed under the Apache License v2.0 with LLVM Exceptions.`
> Another directory naming question, why `compiler/src/openxla_nvgpu` and not `compiler/src/openxla/nvgpu`?

I don't have a preference.
This has been a heavy overhead week for me but I should finally get some coding time this afternoon, and since I can probably bootstrap the project somewhat efficiently, I'll take a stab at that. As noted, I'll stage it in an iree-samples directory first and will then hand off to someone Google-side to create the repo (which requires a bit of red tape).
And an initial CuDNN custom module that does nothing related to CuDNN yet: https://github.com/iree-org/iree-samples/pull/123/files
I don't like the `openxla_nvgpu` namespace, because presumably we'll have some kind of `openxla-base` shared by all openxla compilers, and it's nice to share a namespace to skip qualifying all imports.
+HUGE to `using namespace`, but only for a small number of things (`mlir` and `llvm`).