
u-dales's People

Contributors

bss116, chiil, dmey, samoliverowens, sjboeing, thijsheus, tomgrylls

u-dales's Issues

dT in wall function for momentum when iwalltemp/=2

The wall function for momentum uses the temperature difference between the wall and the adjacent cell, dT. However, under neutral conditions, or under non-neutral conditions when iwalltemp/=2 (zero-flux or constant-flux BCs), this term is still calculated even though the wall temperature is not being modelled. This means that the wall function will have a stability term based on a temperature difference that is not being modelled.

This will occur:

  1. In neutral conditions, if there is a difference between the arbitrarily defined air temperature (typically thl=288) and the values assigned in Tfacinit.f90. This can be avoided by ensuring that these are equal and therefore dT=0 by definition (currently done in preprocessing, but maybe this should be automated as it should not need to be considered when modelling neutral conditions). Perhaps add an if (ltempeq) switch around where Tfacinit.f90 is read.
  2. When iwalltemp=1, the air temperature is non-uniform and dT will be non-zero. A potential solution would be to define a switch within each case in wfuno.f90 so that dT=0 if iwalltemp=1. It may be easier to just edit function unom so that Ribl0=0 under these conditions (a sketch is given at the end of this issue). @ivosuter - thoughts?

The consequence of this bug is that under these specific conditions we can expect changes to the applied wall shear stress due to this stability term.
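
A minimal sketch of the unom-based suggestion above, assuming illustrative names (Tair, Tfac, delta and utan are placeholders rather than the actual wfuno.f90 arguments): the bulk Richardson number is only computed when the wall temperature is actually modelled, and is set to zero otherwise.

    ! Hedged sketch, not the actual wfuno.f90/unom code.
    real function bulk_richardson(ltempeq, iwalltemp, Tair, Tfac, delta, utan)
      logical, intent(in) :: ltempeq
      integer, intent(in) :: iwalltemp
      real,    intent(in) :: Tair, Tfac, delta, utan
      real, parameter :: grav = 9.81
      real :: dT
      if (ltempeq .and. iwalltemp == 2) then
        dT = Tair - Tfac              ! wall temperature is modelled
      else
        dT = 0.                       ! neutral or fixed-flux BCs: no stability term
      end if
      bulk_richardson = grav*dT*delta / (Tair*max(utan**2, 1.e-9))
    end function bulk_richardson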

Restructure and retest modinlet and moddriver; improve modboundary.

Summary: Necessity to test modinlet. Potential to improve implementation of modinlet and moddriver within modstartup and modboundary. Potential to have consistent notation for all BC input parameters.

Initial thoughts:

  1. modinlet.f90 has not been used for a long time and requires testing to make sure it is still compatible with the latest version of the code.

  2. The implementation of both of these modules is outdated and not consistent with the desired structure of modboundary (moddriver was developed to follow the same implementation as modinlet). Examples include:

  • If BCxm is not equal to 1 then other boundary condition input parameters will be overwritten in modstartup.f90. Can these bits of code be moved to the subroutine initboundary for clarity? Can the set of switches and cases be redefined so that input parameters are not overwritten?
  • The use of switches such as linoutflow and lper2inout requires revision. linoutflow is confirmed to work for the use of moddriver and is essential to changing the boundary conditions for pressure but its use elsewhere in the code requires revision.
  • The use of ubulk and uouttot (see #54 ). See ?.
  • Consistency of subroutines iosi, iohi, ioqi with inflow-outflow simulations and periodic simulations. Currently these are used when lateral momentum BCs are periodic but subroutine iolet is used instead otherwise.
  3. Other related improvements to modboundary:
  • Consistent notation for all the cases, e.g. BCxm, BCyq etc. For example: 1 - periodic, 2 - inflow-outflow, 3 - driver (see the sketch after this list).
  • Same implementation for BCs in the y-direction as in the x-direction. Currently inflow-outflow conditions for scalars cannot be set in the y-direction except via the specific subroutine scalSIRANE.
  • scalSIRANE: change the name or remove it, as it will not be needed once the above is implemented correctly.
  • NOTE: currently overwriting by setting sv_top to svprof(ke) in modstartup.
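
A minimal sketch of what the consistent case notation could look like (the named constants, the subroutine wrapper and cyclicm are illustrative assumptions; iolet and drivergen are the existing routines mentioned above):

    subroutine applyBCxm(BCxm)
      integer, intent(in) :: BCxm
      integer, parameter :: BC_PERIODIC = 1, BC_INOUTFLOW = 2, BC_DRIVER = 3
      select case (BCxm)
      case (BC_PERIODIC)
        call cyclicm       ! periodic lateral momentum BCs (placeholder name)
      case (BC_INOUTFLOW)
        call iolet         ! inflow-outflow boundaries
      case (BC_DRIVER)
        call drivergen     ! driven by a precursor simulation (moddriver)
      end select
    end subroutine applyBCxm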

Change `write_inputs.m` to a MATLAB function instead of a script

write_inputs.m currently has the experiment number for the preprocessing hardcoded, which is updated by the shell script write_inputs.sh and causes git to detect a file change every time the preprocessing is used. It also relies on the environment variables "DA_EXPDIR" and "DA_TOOLSDIR". If it is rewritten as a MATLAB function, it can take these parameters as input arguments instead; see https://uk.mathworks.com/matlabcentral/answers/388543-how-do-i-pass-a-shell-script-variable-as-an-argument-to-a-matlab-function.

Runtime performance degradation with Intel compilers

First noticed by an MSc student who is currently using the code. The speed of computation is significantly reduced when any passive scalar fields are modelled on the Imperial HPC. The same reduction is not found on my Mac.

I have done some tests to isolate the issue to these specifications. Using example simulation 002 as a base case and running it with and without two scalar fields, we get these times to complete the simulation.

Mac OS X, Release:
No scalar: 6 s
With scalars: 8 s

Mac OS X, Debug:
No scalar: 53 s
With scalars: 1 min 12 s

HPC, Release:
No scalar: 3 s
With scalars: 1 min 51 s

HPC, Debug:
No scalar: 1 min 54 s
With scalars: 4 min 38 s

We expect the simulation to be slightly slower due to the additional prognostic fields and the demands of the Kappa advection scheme (@bss116 looked into this and showed ~20-30% increase is possible if I remember correctly?). However, the increase in computation time on HPC, particularly in release mode, is far greater than this. It is noticeable how much greater the effect is in release mode.

I have never noticed this problem prior to this (before September 2019, when I was last running simulations). The problem has been reproduced with a range of scalar source types and scalar boundary conditions, so is unlikely to be caused by these. I have also done some preliminary checks by commenting out certain parts of the code, such as the statistics, to check that these aren't the root of the issue.

As the problem seems to be dependent on the system that we compile on, I wondered if this could have arisen due to changes to the flags used to compile the code on the HPC? If that is the case, how can I easily revert these to what we had previously to test this hypothesis? And what would be a good way to identify where in the code the time is suddenly being lost? @dmey do you have any advice on these two questions?

Automate modstatsdump

Currently the statistics to be calculated in modstatsdump are hardcoded. To switch between neutral and non-neutral simulations, or simulations with and without scalars, the modstatsdump file needs to be changed (see the different files modstatsdump.f90_SCALARS and modstatsdump.f90_EXTENSIONS).
The calculation in modstatsdump should be automated according to the simulation setup. This can be done by adding switches in the calculation routine, e.g.
if ltempeq then calculate thl statistics
if lmoist then calculate qt statistics
if nsv>0 then calculate sv statistics.
nstat needs to be assigned depending on the switches and number of scalar variables.
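
A minimal sketch of how nstat could be assembled from the switches (the per-variable counts below are placeholders, not the real numbers in modstatsdump.f90):

    integer function count_stats(ltempeq, lmoist, nsv)
      logical, intent(in) :: ltempeq, lmoist
      integer, intent(in) :: nsv
      integer, parameter :: nstat_base = 10, nstat_thl = 4, nstat_qt = 4, nstat_sv = 4
      count_stats = nstat_base
      if (ltempeq) count_stats = count_stats + nstat_thl      ! thl statistics
      if (lmoist)  count_stats = count_stats + nstat_qt       ! qt statistics
      if (nsv > 0) count_stats = count_stats + nsv*nstat_sv   ! sv statistics
    end function count_stats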

Reduce input files for neutral simulations

Currently initfac.f90 always reads in the files walltypes.inp and Tfacinit.inp. We aim to minimise the required input files.

  • walltypes.inp will be required even for neutral simulations because it sets the variable facz0, which is used by wfneutral. From what I can see this is the only variable that is used for neutral simulations.

  • Tfacinit is currently always read and it sets the variable facT. However, facT is only used by wfuno, not wfneutral. Can we put an if (ltempeq) check around reading in Tfacinit to remove it from the required input files for neutral simulations? (See the sketch after this list.)

  • For neutral simulations without floor facets: we need a switch that only reads in facets.inp and walltypes if nfcts > 0.
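
A minimal sketch of the guarded reads (the subroutine names readwalltypes and readtfacinit are placeholders for the corresponding read blocks in initfac.f90):

    if (nfcts > 0) then
      call readwalltypes         ! needed even for neutral runs: sets facz0
      if (ltempeq) then
        call readtfacinit        ! facT is only used by wfuno
      end if
    end if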

Add ifort support in CI

Currently we only run CI tests with gfortran on macOS and Ubuntu. Ideally, since we also support ifort, we should also add the relevant tests using ifort to the CI build matrix. This is not a strong requirement for now and will require additional effort given that we will have to install ifort on the CI.

Large static memory allocation on hpc system

Describe the bug
The code will crash on one of the first lines of a subroutine where a static memory declaration over some threshold value is made. This only occurs when compiled with Intel, run on the Imperial HPC and with a sufficiently large domain size (I have not managed to reproduce this bug locally). The bug arose following a change to the architecture of the HPC system.

To Reproduce
Compile the code on the HPC and run a sufficiently large simulation (e.g. 100x200x200 on the debug queue with 5 nodes).

Expected behavior
Segmentation fault at start of said subroutine:

forrtl: severe (174): SIGSEGV, segmentation fault occurred

However, we have found with similar memory-related issues in the past that the bug can occur intermittently and does not always cause a runtime error on a line that is relevant to the problem itself.
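
A hedged illustration of the suspected failure mode (not the actual subroutine): a large automatic array is placed on the stack and can overflow the stack limit on the HPC system, whereas an allocatable array is placed on the heap. Intel's -heap-arrays flag or a larger stack limit (ulimit -s) are common workarounds worth trying.

    subroutine big_work(nx, ny, nz)
      integer, intent(in) :: nx, ny, nz
      ! real :: tmp(nx, ny, nz)           ! automatic array (stack): can SIGSEGV
      real, allocatable :: tmp(:, :, :)   ! heap allocation avoids the stack limit
      allocate(tmp(nx, ny, nz))
      tmp = 0.
      deallocate(tmp)
    end subroutine big_work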

Check the use of ubulk and uouttot

The use of the variables ubulk and uouttot is currently a bit confusing.
The variable uouttot is calculated in the flow rate forcing and then (always? only with some switches?) overwritten and used by modboundary. It needs to be checked that this is done correctly and as intended.
The calculation in modstartup and use of the variable ubulk in modboundary should also be checked. ubulk is the volume averaged velocity (e.g. 2 m/s), and uouttot as calculated in the flow rate forcing masscorr is the outlet plane averaged velocity times the outlet area.
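
A minimal sketch of the two quantities as defined above (array, mask and index names are illustrative): ubulk is the fluid-volume average of u, while uouttot as used in masscorr is the outlet-plane average times the outlet area, i.e. the plane integral of u at the outlet.

    ubulk = sum(u*cellvol, mask=lfluid) / sum(cellvol, mask=lfluid)
    uouttot = 0.
    do k = 1, kmax
      uouttot = uouttot + sum(u(ie, jb:je, k))*dy*dzf(k)   ! outlet-plane integral
    end do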

Add scientific documentation

Add docs describing what has been added to uDALES since it was forked -- this could be a simple dump of some paragraphs from some papers.

Add regression tests

At the moment we do not carry out any sort of regression tests in uDALES. All tests are simply build and runtime tests. I would suggest using this issue as a working template for any tests we need to add. Please keep editing this section and add comments when you want a test to be implemented.

  • Divergence test -- random velocity field and check for div u = 0 (see the sketch after this list).
    TODO: add description

  • Dry ABL run which is well documented (consistency with DALES)
    TODO: add description

  • Test for IBM
    TODO: add description

  • Test for driver simulation
    TODO: add description

  • Test using EB
    TODO: add description

  • Scalar dispersion test
    TODO: add description

  • Chemistry test
    TODO: add description
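
A minimal sketch of the divergence check on a uniform grid (array and spacing names are illustrative): after the pressure correction, the maximum absolute divergence of the velocity field should be close to machine precision.

    subroutine check_divergence(u, v, w, dx, dy, dz, maxdiv)
      real, intent(in)  :: u(:,:,:), v(:,:,:), w(:,:,:), dx, dy, dz
      real, intent(out) :: maxdiv
      integer :: i, j, k
      real :: div
      maxdiv = 0.
      do k = 1, size(w, 3) - 1
        do j = 1, size(v, 2) - 1
          do i = 1, size(u, 1) - 1
            div = (u(i+1,j,k) - u(i,j,k))/dx + (v(i,j+1,k) - v(i,j,k))/dy &
                  + (w(i,j,k+1) - w(i,j,k))/dz
            maxdiv = max(maxdiv, abs(div))
          end do
        end do
      end do
    end subroutine check_divergence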

Unify post-processing scripts

Currently, several scripts are used in post-processing data required for uDALES. This is currently a placeholder to discuss how we are going to do this.

FFT not working with energy balance

I get the following error message when ipoiss = 0 and running with the energy balance on.

Program received signal SIGFPE: Floating-point exception - erroneous arithmetic operation.
Backtrace for this error:
#0  0x7f838ed062da in ???
#1  0x7f838ed05503 in ???
#2  0x7f838e382f1f in ???
#3  0x562f4d975ddc in wfuno_
	at /home/so4718/diurnal-cycle/u-dales/src/wf_uno.f90:979
#4  0x562f4d87e7dd in __modibm_MOD_bottom
	at /home/so4718/diurnal-cycle/u-dales/src/modibm.f90:1058
#5  0x562f4d9699d7 in dalesurban
	at /home/so4718/diurnal-cycle/u-dales/src/program.f90:108
#6  0x562f4d81fa08 in main

Open source uDALES

We should discuss if and when the project is going to be open-sourced and what needs to be done before then.

Remove deprecated lwallfunc from modinlet

The switch lwallfunc describes whether wall functions are used. It is deprecated since the use of wall functions is now determined by the choice of boundary conditions. The switch only appears in the inlet generation modinlet.f90 and should be replaced with the switches for boundary conditions in &BC. The current default value is lwallfunc = .true., in line with the default BCs.

Versioning

At the moment we are not versioning this project. I would suggest using semantic versioning (https://semver.org/) and agree on a version number to start with as soon as possible.

Define requirements for running uDALES with/without experiments

This is a placeholder to understand the requirements of each team-member individually. For example, does using the current https://github.com/uDALES/cookiecutter-u-dales make things easier? @bss116, @tomgrylls, @ivosuter, @samoliverowens, @mvreeuwijk please add your use-case below and how you would like to run and develop uDALES in an ideal world. This should be at a very high level -- i.e. no need to specify whether you want a bash script or a python one -- that will be an implementation issue that we may not (/yet) need to worry about...

Add user documentation

Add the namelist and the possible configuration options available in uDALES -- most of them have not changed and are available directly from the DALES repo.

Clean up and restructure namoptions

The current namoptions sections can be made clearer and some parameters may be outdated. In this issue we can discuss how to clean up and restructure the namoptions.

Some initial suggestions:

  • remove nblocks from &DOMAIN and replace it with a switch lblocks. I understand that all surfaces in blocks.inp are read in as soon as we have any blocks; only if there are no blocks at all is this input file not needed. Correct?

  • add extra section &FORCING where all forcing related parameters are specified, analogous to &BC.

  • add an extra section &STATISTICS with a specification of which statistics files to output (e.g. lxytdump) and after how many time steps (tstatsdump). This could also be a more general &OUTPUT section that also contains parameters for instantaneous field dumps (see the sketch after this list).
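
A hedged sketch of what the statistics section could look like (lxytdump and tstatsdump are the parameters named above; the section grouping itself is the proposal and the values are illustrative):

    &STATISTICS
      lxytdump   = .true.     ! which statistics files to output
      tstatsdump = 60.        ! output interval for statistics
    /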

Update or delete branches

@bss116 @samoliverowens @tomgrylls can you please start to bring your branches up-to-date with master when you have a moment to avoid merge conflicts later on when we update headers etc.

This can be done with the following command:

# From your branch
git fetch --all
git merge origin/master

Incorrect bounding wall k-values.

The current algorithm for generating the bounding walls appears to be wrong for the set up shown below from @bss116.

[image: facets]

Clearly the large gap shouldn't be there. For example, the cell coordinates of the first two bounding walls are:

il iu jl ju kl ku
1 1 1 10 0 1
1 1 1 10 10 13

The ku for the first wall should be 9.

Example simulation 201 fails using Intel compiler

The energy balance example simulation 201, which runs without issues locally, fails when running on the ICL cluster. Strangely, running in debug mode does not produce an error stack to show where the code terminates, but running it in release does, pointing to the Poisson solver (modpois.f90 line 1045). This makes me think that the error may be at a different spot than indicated in the error stack. A similar issue arises when running the example simulation 501 with an extended vertical domain size, where debug does not produce an error stack. However, the error (in release) comes from a different line, this time from the subgrid model (modsubgrid.f90 line 391). Again, this simulation works well on my local machine. I am not sure where to start looking for the error. Log files are attached.
output.201-debug.txt
output.201-release.txt
output.501-debug.txt
output.501-release.txt

Improved/ simplified scalar source terms

  1. Edit the existing switch lscasrcr to be the default option for scalar sources.
  2. Develop pre-processing to define point, line and other scalar sources at the lowest level using only the input file scals.inp.xxx.
  3. Create a case iscasrc instead of the current set of switches, with lscasrc and lscasrcl as options 2 and 3. These have more complex attributes such as normalisation, variable heights, and initial spreads, but their implementation is more challenging and specific to the case that is being run (e.g. infinite canyons). Point 1 therefore acts as the basic user implementation of scalar sources, iscasrc = 1.
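
A hedged sketch of the proposed iscasrc case structure (the subroutine names are placeholders for the existing source-term code):

    select case (iscasrc)
    case (1)
      call scasrc_basic     ! sources at the lowest level from scals.inp.xxx (current lscasrcr)
    case (2)
      call scasrc_a         ! current lscasrc behaviour
    case (3)
      call scasrc_b         ! current lscasrcl behaviour (e.g. infinite canyons)
    end select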

Volume averaged flow rate forcing

Create an additional subroutine in modmpi that calculates volume averaged variables (masking out the buildings).
Create a new forcing subroutine in modforces that enforces a volume averaged flow rate.
Change masscorr forcing to outflow forcing for explicit reference.

Originally posted by @tomgrylls in #16 (comment)
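
A minimal sketch of the proposed forcing step (variable and mask names are illustrative): compute the building-masked volume average of u and apply a uniform tendency that restores the target value over one time step.

    uvol = sum(u*cellvol, mask=.not. lbuilding) / sum(cellvol, mask=.not. lbuilding)
    up   = up + (uflowrate - uvol)/dt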

Create first release

I think it would make sense to create a release as soon as possible after all critical bugs have been fixed and perhaps a minimal documentation is available.

Bug in pre-processing createfloors with lstaggered

The pre-processing createfloors creates floor facets that are too large at the y-domain end when used with staggered blocks (see attached image). This does not seem to create issues for running the simulation, but since this is one of our example cases (102) it needs to be fixed before release 0.1.0.

[image: 102]

Unify pre-processing scripts

Currently, several scripts are used in pre-processing data required for uDALES. This is currently a placeholder to discuss how we are going to do this.

Kappa advection scheme only working for scalars

Setting iadv_thl = 7 (and not using scalars) firstly means that ih is set to 2, which means the x grid arrays (dxf etc.) are not populated correctly, resulting in a division by zero when calculating e.g. dxfi.

We also have the problem that advecc_kappa uses ihc and arrays like dxfci etc rather than dxfi, which aren't populated unless iadv_sv == 7.

I guess we should also populate these arrays when any variable is running the kappa scheme, rather than just scalars. If so, we'd need to think about what we set ih to, given that it is used in this section - it'd be nice to be able to set it to 1 as this would solve the first problem.
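
A hedged sketch of the proposed condition (iadv_thl, iadv_qt, iadv_sv and the value 7 for the Kappa scheme are from the text above; the grouping and the populate call are assumptions):

    lkappa = (iadv_thl == 7) .or. (iadv_qt == 7) .or. (iadv_sv == 7)
    if (lkappa) then
      call populate_kappa_grids    ! fill dxfc, dxfci, ... used by advecc_kappa (placeholder name)
    end if
    ! what ih/ihc should be set to still needs to be decided (see above)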

Add statistics variables to restart files

Currently only field variables are saved in the restart initd and inits files. If the time-averaged statistics variables are added, then the time-average of the statistics output will no longer be limited by the runtime. The variables that need to be added are a subset of the ones loaded from modfields by the subroutine statsdump in modstatsdump.f90:

use modfields, only : um,up,vm,wm,svm,qtm,thlm,pres0,ncstaty,ncstatxy,ncstatyt,ncstattke,&
Writing the additional variables to the restart files is done in modsave.f90, and reading them in at a warmstart is done in modstartup.f90.

Neutral flat simulation requires bldT > 0.

A neutral simulation that has the floor covered with one large facet (see 001 in branch bss/example-simulations) crashes when the parameter bldT in the namoptions section ENERGYBALANCE is not set (the default value is 0.). The simulation runs for values > 0.

Error stack:
forrtl: error (73): floating divide by zero
Image PC Routine Line Source
u-dales 000000000078E7CF Unknown Unknown Unknown
libpthread-2.17.s 00002B2CCE195370 Unknown Unknown Unknown
u-dales 00000000007055BE modthermodynamics 419 modthermodynamics.f90
u-dales 0000000000701E61 modthermodynamics 313 modthermodynamics.f90
u-dales 00000000006F6DDA modthermodynamics 69 modthermodynamics.f90
u-dales 000000000070A6A2 MAIN__ 145 program.f90

Division by zero seems to come from this line:
presh(k) = presh(k-1)**rdocp - grav*(pref0**rdocp)*dzf(k-1) / (cp*thvf(k-1))

@ivosuter any ideas how to remove bldT requirement for neutral simulations?

In moddriver, <var>mdriver is not set to <var>0driver at simulation start

When running driven simulations, I noticed that the temperature would go to zero on the first timestep due to the fact that thlmdriver is initially set to zero. This occurs because of the condition rk3step == 1 in the following sequence in drivergen, which is called by boundary before time-stepping begins (when the condition is false).

      if (rk3step == 1) then
        umdriver = u0driver
        vmdriver = v0driver
        wmdriver = w0driver
        !e12mdriver = e120driver
        if (ltempeq) then
          thlmdriver = thl0driver
        end if
        if (lmoist) then
          qtmdriver = qt0driver
        end if
        if (nsv>0) then
          svmdriver = sv0driver
        end if
      end if

I have simply changed the condition to be rk3step .le. 1, but maybe this isn't optimal.

Update pre-processing for simulations with no buildings

If the pre-processing is set up to have no blocks (lflat = .true.), it puts a single floor facet covering the domain. To be consistent with the example shown in 001, we would like to have no floor facet at all (i.e., the "original DALES" setup using the subroutine bottom). This was discussed in #84 (comment).

Change massflowrate to outflow rate forcing and consistent assignment

Massflowrate forcing is just forcing an outflow rate -- see discussion in #16, and it needs several improvements.

  1. change name in namoptions and code to outflow rate
  2. consistency in assignment: for coldstarts the rate is calculated from the initial velocity profile, for warmstarts it is set via namoptions uflowrate and vflowrate in &PHYSICS. I would suggest always setting it in namoptions.
  3. The flow rate is set as the average velocity * the outflow plane area. I suggest changing it to the average velocity only and leaving the calculation of the outflow plane area to the forcing routine.
  4. Make the output of the flow rate in the log file consistent with the above changes.

Remove libm switch

The switch libm determines whether an immersed boundary method (IBM) is used for obstacles read in from the blocks.inp file. Since we always want to resolve given obstacles with the IBM, this switch is not needed. It may make sense to replace it with a check whether obstacles are present (e.g. whether nblocks > 0) for computational reasons. The default value is libm = .false., which will be changed to libm = .true. with the namoptions-cleanup.

Add example simulations

Add example simulation setups in the examples folder with the namoptions reduced to a minimal version needed for the specific case. These cases should include:

  • neutral stability case with blocks -- which forcing? one case for each forcing?

  • non-neutral case with temperature -- different forcings?

  • scalar release case

  • full energy balance

please expand the list!

Use compiler options consistently across vendors

@tomgrylls before the final 0.1.0 release I would suggest discussing and reviewing compiler flags so that we can make sure that these are applied consistently and for the correct reasons across different vendors. I will leave this issue open as a placeholder for now and we can pick it up again before the 0.1.0 release.

Further namoptions cleanup

This issue is about cleaning up the list of parameters that can be set in the namoptions input file. There are a few points up for discussion (we might also want to break them into separate issues):

&RUN

  • lles = .true. Discuss whether to remove non-LES functionality
  • randu = 0. Default changed from 0.5 in DALES. Should we change back?
  • randthl = 0. Default changed from 0.1 in DALES. Should we change back?
  • randqt = 0. Default changed from 1e-5 in DALES. Should we change back?
  • courant = -1 Code sets it to 1.5 or 1.1 (if Kappa or upwind scheme is used). These are different values than in DALES!
  • diffnr = 0.25 Diffusion number? Used to determine adaptive time step. Can we get rid of it?
  • lreadmean = .false. Potentially deprecated. Should we remove it?
  • lper2inout = .false. Potentially deprecated. Should we remove it?
  • lwalldist = .false. Switch to calculate wall effects for the subgrid models (Smagorinsky or one equation). Potentially deprecated. Should we remove it?
  • lstratstart = .false. Description missing.

&BC

  • wtsurf = -1. What to do with subroutine bottom?
  • wqsurf = -1.
  • thls = -1.
  • z0 = -1.
  • z0h = -1.
  • !wsvsurfdum = Should be reviewed when checking scalars implementation
  • !wsvtopdum =

&WALLS

  • iwallmom = 2 Should we set default to 3? (neutral wall function) Will need to test this first! Edit: We decided to remove option 3 from the user documentation as the code chooses this automatically for neutral simulations

&PHYSICS

  • igrw_damp = 2 Gravity wave damping. We use = 0 (no damping).

&DYNAMICS

  • iadv_tke = -1 Only applicable if loneeqn = True. Consider removing this option?
  • iadv_qt = -1 Should we extend and test use of option 7 (Kappa scheme)?
  • iadv_sv = -1 Consider removing option 1 (upwind scheme)?
  • ipoiss = 1 Should be set to ipoiss = 0 when bug in EB is fixed (Issue #61).

&NAMSUBGRID

  • loneeqn = .false. Should we remove this?

&INLET

  • linletRA = .false. These parameters are associated with the turbulent inlet generation. They are untested with the current version of the code. Remove or update?
  • lfixinlet = .false.
  • lfixutauin = .false.

&SCALARS

  • lreadscal = .false. Deprecated, should we remove?
  • lscasrc = .false. Description missing
  • lscasrcl = .false. Description missing

general:

  • ensure consistent spacing before and after = when namoptions are changed by da_inp.sh and da_prep.sh.

Assign flow forcing in namoptions

This issue is to discuss how the flow forcing should be handled. There are several ways to force the flow. It would be good to have a separate section for their assignment in the namoptions file, similar to the boundary conditions (BC) section. Further it should be checked at startup that only compatible types of forcings are applied.

Possible forcings for momentum:

  1. using a fixed pressure gradient
  2. prescribe a free stream velocity (either u or v)
  3. prescribe an outflow rate (u or v, or both)
  4. prescribe a volume average flow rate (u or v, or both)
    please complete the list

The outflow and volume flow rate are now prescribed in the namoptions using the plain velocity (e.g. 2 m/s). The same holds for the freestream forcing. These switches could be put together and simplified, for instance one parameter for the value: ufixed, vfixed, and switches for the type: lfreestreamu, lfreestreamv, loutflowu, loutflowv, lvolflowu, lvolflowv.

What types of forcing are missing in the list?
Which can be combined (e.g. lvolflowu, lvolflowv)?
Would it make sense to summarise them in a single function and use cases to define which one to use? At the moment the freestream forcing is applied at a different place in the code than the flow rate forcing, is there a reason for it?
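
A hedged sketch of what such a section could look like, using the parameter names suggested above (the section does not exist yet and the values are illustrative):

    &FORCING
      ufixed       = 2.          ! target velocity in m/s
      vfixed       = 0.
      lfreestreamu = .false.
      lfreestreamv = .false.
      loutflowu    = .true.      ! force the outflow rate of u
      loutflowv    = .false.
      lvolflowu    = .false.     ! force the volume-averaged flow rate of u
      lvolflowv    = .false.
    /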

Improve statistics output modstatsdump.f90

This issue addresses code improvement and further functionalities for the statistics routine modstatsdump.f90.
Code improvement: The code currently has some statistics calculations hardcoded for each variable. This should be replaced by generic functions and function calls for the variables. Care needs to be taken about static or dynamic allocation of output arrays, when the number of output variables is only determined at runtime.
Additional features: Adding the block masks as statistics output can be useful for post-processing.
Currently the lowest (floor) level is masked in the statistics output. I would suggest instead not outputting it at all and moving the z-axes accordingly.
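
A minimal sketch of a generic helper that could replace the hardcoded per-variable calculations (names and the masking argument are illustrative; time averaging would accumulate such slab averages over calls):

    subroutine slab_average(field, lfluid, prof)
      real,    intent(in)  :: field(:,:,:)
      logical, intent(in)  :: lfluid(:,:,:)    ! .true. in fluid cells (block mask)
      real,    intent(out) :: prof(:)          ! horizontal (xy) average per level
      integer :: k
      do k = 1, size(field, 3)
        prof(k) = sum(field(:,:,k), mask=lfluid(:,:,k)) / max(count(lfluid(:,:,k)), 1)
      end do
    end subroutine slab_average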

Summary of tasks:

  • change to generic statistics functions and calls for output variables

  • automate routine for non-neutral and/or scalar statistics output (#42)

  • add forcing to output (#48)

  • add block masks to output

  • remove masked entries

  • only output variables that are actually calculated

Add inputdir and outputdir arguments

When using u-dales, all inputs and outputs are processed from/to the current working directory. This should change so that the path to inputs and outputs can be specified directly when the program is invoked.

E.g.

path/to/u-dales --input-dir=path/to/inputs --output-dir=path/to/outputs
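
A minimal sketch of how the program could read these flags with the standard get_command_argument intrinsic (the flag parsing details are illustrative):

    character(len=256) :: arg, inputdir, outputdir
    integer :: i
    inputdir  = '.'
    outputdir = '.'
    do i = 1, command_argument_count()
      call get_command_argument(i, arg)
      if (index(arg, '--input-dir=') == 1)  inputdir  = arg(13:)
      if (index(arg, '--output-dir=') == 1) outputdir = arg(14:)
    end do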
