
Comments (65)

vvinuv avatar vvinuv commented on June 2, 2024 1

@yymao We are using only galaxies below redshift 0.3, with a magnitude limit applied. Therefore, I think the catalog will have a depth similar to that of the data.


aphearin avatar aphearin commented on June 2, 2024 1

By now I have tried many methods @yymao, fixing some relationships, breaking others. Probably unsurprisingly, the methods that seem to work best involve rank-ordering by M_infall in some way (either at the galsampler level, or by remapping stellar mass using an empirical stellar-to-halo-mass relation, a conditional stellar mass function, or deconvolution abundance matching). Note that any such method is inequivalent to standard empirical modeling for at least two reasons: (1) Galacticus is not M_infall-complete; (2) at fixed M_infall, I preserve the stellar-mass rank-order percentile predicted by Galacticus, so the strong levels of assembly bias predicted by Galacticus are preserved when doing things this way.
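
(For concreteness, here is a minimal sketch, not @aphearin's actual code, of the kind of conditional rank-order remapping described above: within bins of M_infall, the stellar-mass rank percentile predicted by the semi-analytic model is preserved while the values are remapped onto an assumed target distribution. The `target_median_logmstar` callable and the log-normal scatter are hypothetical placeholders.)

```python
import numpy as np
from scipy.stats import rankdata, norm

def remap_mstar_at_fixed_minfall(m_infall, mstar_model, target_median_logmstar,
                                 scatter_dex=0.2, n_bins=25):
    """Remap stellar masses onto a target conditional distribution while
    preserving, within each M_infall bin, the rank-order percentile of the
    original (e.g. semi-analytic) stellar masses.

    target_median_logmstar : callable returning an assumed median log10(M*)
        at a given M_infall (a hypothetical empirical SHMR).
    """
    log_minf = np.log10(m_infall)
    edges = np.linspace(log_minf.min(), log_minf.max(), n_bins + 1)
    idx = np.clip(np.digitize(log_minf, edges) - 1, 0, n_bins - 1)

    new_log_mstar = np.empty_like(log_minf)
    for i in range(n_bins):
        mask = idx == i
        if not mask.any():
            continue
        # Rank-order percentile of the original stellar masses in this bin.
        pct = (rankdata(mstar_model[mask]) - 0.5) / mask.sum()
        # Map percentiles onto an assumed log-normal P(M* | M_infall).
        med = target_median_logmstar(10.0 ** (0.5 * (edges[i] + edges[i + 1])))
        new_log_mstar[mask] = norm.ppf(pct, loc=med, scale=scatter_dex)
    return 10.0 ** new_log_mstar
```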


rmandelb avatar rmandelb commented on June 2, 2024 1

@aphearin - Thanks for the update. Let's have a discussion whenever you feel you're ready for feedback from the group. (No need to give an update while you are still actively trying stuff out, unless you feel you need the feedback to choose a way to proceed!)


j-dr avatar j-dr commented on June 2, 2024

If we're okay with including another dependency, I would highly recommend that we use Corrfunc. It is very fast, has a nice python API and has many different pair counting functions already implemented.
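
(For reference, a minimal sketch of the kind of call Corrfunc provides; the theory.xi signature below follows the Corrfunc documentation but argument details may differ between versions, and for lightcone catalogs with RA/Dec one would use the Corrfunc.mocks routines instead.)

```python
import numpy as np
import Corrfunc.theory

# Toy data: random points in a periodic box, purely for illustration.
boxsize, npts, nthreads = 250.0, 100_000, 4
rng = np.random.default_rng(42)
x, y, z = rng.uniform(0.0, boxsize, size=(3, npts))

# Logarithmic separation bins in Mpc/h.
rbins = np.logspace(-1, 1.3, 15)

# Real-space 3D correlation function xi(r) with periodic boundaries.
results = Corrfunc.theory.xi(boxsize, nthreads, rbins, x, y, z)
for row in results:
    print(row['rmin'], row['rmax'], row['xi'])
```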


rmandelb avatar rmandelb commented on June 2, 2024

That code is quite convenient for scalar correlation functions. If there is any chance we're going to also want validation tests with spin-2 quantities like shear, then I think we should instead use a code like treecorr, which is very fast and does include correlations of spin-2 quantities (also with a nice python API).
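
(Likewise, a minimal sketch of a w(theta) measurement with treecorr, based on its documented Catalog/NNCorrelation API; the toy catalogs here are placeholders, and a real test needs randoms that trace the actual survey footprint. Spin-2 quantities such as shear would use GGCorrelation in the same pattern.)

```python
import numpy as np
import treecorr

# Toy data and random catalogs (RA/Dec in degrees), purely for illustration;
# a real test needs randoms that follow the survey footprint and mask.
rng = np.random.default_rng(0)
ra, dec = rng.uniform(0, 10, 50_000), rng.uniform(-5, 5, 50_000)
rra, rdec = rng.uniform(0, 10, 500_000), rng.uniform(-5, 5, 500_000)

cat = treecorr.Catalog(ra=ra, dec=dec, ra_units='deg', dec_units='deg')
rand = treecorr.Catalog(ra=rra, dec=rdec, ra_units='deg', dec_units='deg')

kwargs = dict(min_sep=0.01, max_sep=5.0, nbins=20, sep_units='deg')
dd, dr, rr = (treecorr.NNCorrelation(**kwargs) for _ in range(3))
dd.process(cat)
dr.process(cat, rand)
rr.process(rand)

# Landy-Szalay estimator of w(theta); shear-shear would use GGCorrelation.
w_theta, var_w = dd.calculateXi(rr=rr, dr=dr)
```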


j-dr avatar j-dr commented on June 2, 2024

Good point. My only problem with treecorr is that it doesn't support periodic boundary conditions as far as I know, but that may not matter if we're just focusing on lightcone based statistics for DESCQA2. Corrfunc only has GSL and numpy as dependencies, so it might be worth considering using both.


evevkovacs avatar evevkovacs commented on June 2, 2024

Another possibility is HaloTools


yymao avatar yymao commented on June 2, 2024

An additional benefit of using treecorr or halotools is that their lead developers are very active DESC members.

That being said, I'll be happy to let whoever volunteers to work on this test choose the most suitable package.


yymao avatar yymao commented on June 2, 2024

Just a quick update --- we've decided on using treecorr but had some issues installing it on the NERSC DESC Python environment. Will continue to work on resolving this issue...


vvinuv avatar vvinuv commented on June 2, 2024


yymao avatar yymao commented on June 2, 2024

We (mostly @vvinuv) are still working on fixing some potential bugs (see progress in #38), but otherwise this test is close to done. We still need validation criteria though!

Here's an example plot for the Buzzard catalog, taken from this descqa run: [image]


slosar avatar slosar commented on June 2, 2024

Do we still have just lightcones and correlation functions? Am I correct to see that your galaxies are a factor of 2-3 overclustered? Having the power spectrum of a periodic box would be my ideal test, but if all you have is lightcones then I guess we'll have to live with projected correlation functions.


yymao avatar yymao commented on June 2, 2024

Updated test: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2017-12-18&test=tpcf_Wang2013_rSDSS (results are similar)

@slosar The catalogs are over-clustered probably because the catalog is not deep enough?

@evevkovacs We do have snapshots, don't we? This question seems to appear many times...


evevkovacs avatar evevkovacs commented on June 2, 2024

@vvinuv @yymao @slosar I think it is important to debug the test first and make sure it is correct.


slosar avatar slosar commented on June 2, 2024

@yymao Which catalog is not deep enough? If you select galaxies in a magnitude range then they are what they are; by definition you are deep enough (in the limit of bins narrow enough that the bias is ~constant across the magnitude bin).


yymao avatar yymao commented on June 2, 2024

@slosar the mock catalogs have a cutoff in redshift: the protoDC2 light cone goes only to z=1, and the Buzzard light cone goes to z=2.1.


rmandelb avatar rmandelb commented on June 2, 2024

Can you clarify what is being plotted? It says xi(theta) which confuses me a bit; that is, usually xi is used for a 3D correlation function as a function of r. Is this w(theta), the projected 2D angular cross-correlation? The issue with that one is that it includes information both about the 3D clustering and the N(z). I mean, if you have a sample with the very same 3D clustering but a different N(z), then w(theta) will look different. If we have a separate N(z) test already, then I think we want this to be a test of 3D clustering rather than w(theta), so as to ask a single specific question.

So if this is w(theta), then in fact my first question would simply be whether the mock galaxies have the same dN/dz as the real ones that were used as the validation data?

Also, apologies, but which Wang et al. (2013) paper is it? If I look that up on ADS there are lots of possibilities, but none of the ones I glanced at seem to have the data shown here.


slosar avatar slosar commented on June 2, 2024

I would also prefer to see a 3D clustering test, as it is a more accurate representation of the exact thing you are testing. The validation should be over the magnitude limit and redshift limit of the test sample -- if it is fine at low z, I'm OK believing it must be reasonable at higher z.


yymao avatar yymao commented on June 2, 2024

@rmandelb I think what is plotted is projected correlation (and the label should be changed) but @vvinuv can correct me if I'm wrong.

@slosar when you say 3D clustering, is the galaxy sample you have in mind within a thin redshift range with an absolute magnitude cut?


vvinuv avatar vvinuv commented on June 2, 2024

@rmandelb the label on the y-axis is wrong. I meant w(theta), which is the angular correlation function, NOT the projected correlation function. I use the Wang et al. paper (http://adsabs.harvard.edu/abs/2013MNRAS.432.1961W) as it is one of the latest measurements from SDSS galaxies. However, Wang et al. do not use redshift information; they use only an r-band magnitude limit of 21 (@slosar I was also wrong to say that Wang et al. use redshift information). The recent results mostly agree with the validation data for both catalogs: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2017-12-20_1&test=tpcf_Wang2013_rSDSS .

I haven't tested the code for the 3D correlation function. I am not quite sure whether people use the 3D correlation function from observations.

If you are interested in the status of the projected correlation function, keep reading.

We did validation work on the projected correlation function, which is a function of projected comoving separation as defined in Zehavi et al. 2011. They use SDSS galaxies in different brightness bins and in narrow redshift bins to measure the projected correlation. In our earlier validation work the projected correlation function mostly agreed with Zehavi et al. 2011. However, the results of tests during and after the LSST sprint meeting do not agree with Zehavi et al. 2011. I haven't checked yet whether there are any bugs in the test.


rmandelb avatar rmandelb commented on June 2, 2024

Hi @vvinuv - thanks for clarifying. Indeed in my message I accidentally wrote "projected 2D angular cross-correlation" when I meant "angular correlation function"... i.e., my statements about this quantity including information both about the intrinsic clustering and the N(z) apply to the angular correlation functions that you are showing. So my point still stands: comparing this quantity with the data conflates two things, the intrinsic clustering of the sample and its N(z). I thought the point of the clustering-related tests was to check whether the bias as a function of luminosity and scale makes sense (since the LSS, WL, and PZ groups all care about this), so you should be doing a calculation that gets at the 3D clustering. @slosar seems to agree.

> I am not quite sure whether people use the 3D correlation function from observations.

Depends what you mean. To calculate it, you require spectroscopy, so it can only be done with certain samples (and even then, typically the projected version is used, i.e., wp(rp), which involves integrating xi over the line-of-sight separation out to some pi_max like 60 Mpc/h). Of course we won't have spectra for LSST and won't measure the 3D correlation function, but that is not the issue here. The purpose of the test was to check that the galaxy bias is reasonable, and to do that we should measure something related to the intrinsic clustering, like xi(r) or wp(rp), and not w(theta), which includes the N(z) information as well.

If the plan was to include validation tests on the intrinsic clustering, the N(z), and w(theta), that seems redundant; we only need the first two.
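
(For the record, the quantity under discussion is wp(rp) = 2 * integral of xi(rp, pi) over the line-of-sight separation pi from 0 to pi_max. A minimal numerical sketch, assuming xi has already been tabulated on an (rp, pi) grid:)

```python
import numpy as np

def wp_from_xi_grid(xi_rp_pi, pi_edges, pi_max=60.0):
    """Project xi(rp, pi) along the line of sight:
    wp(rp) = 2 * sum_i xi(rp, pi_i) * dpi_i  for pi_i < pi_max.

    xi_rp_pi : array of shape (n_rp_bins, n_pi_bins)
    pi_edges : line-of-sight bin edges in Mpc/h
    """
    dpi = np.diff(pi_edges)
    pi_mid = 0.5 * (pi_edges[1:] + pi_edges[:-1])
    keep = pi_mid < pi_max
    return 2.0 * np.sum(xi_rp_pi[:, keep] * dpi[keep], axis=1)
```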


slosar avatar slosar commented on June 2, 2024

@vvinuv Rachel said everything: it is OK to "cheat" in validation to make sure your catalog is correct and to use the actual distances (even with a spectroscopic survey you have RSD). In particular, all projected correlation functions (whether in theta or in rp) are integrals over the 3D one, which is the fundamental quantity; so even though it is not observable, if you get that right you'll get everything else right. Also, if you get it wrong, knowing where it is wrong in 3D will make it easier to debug the problem.


vvinuv avatar vvinuv commented on June 2, 2024

Thanks @rmandelb and @slosar. As @rmandelb pointed out, the angular correlation function combines the N(z) and the intrinsic clustering. I think @evevkovacs showed that the N(z) of the catalogs mostly agrees with the DEEP2 data, and I assume this is also true for SDSS galaxies. In that case the recent angular correlation test mostly agrees with the observations (if you would like to see it, please go to this link: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2017-12-21_2&test=tpcf_Wang2013_rSDSS). On the other hand, there are several assumptions going into the validation test for the angular correlation function. Therefore, I totally agree with @rmandelb and @slosar that the validation test should separate the intrinsic clustering and the N(z).

Since we are on the same page about intrinsic clustering: I was testing the projected correlation function (wp(rp)), not RSD. The validation test of wp(rp) has some bugs. I am trying to identify whether the bugs are in the code or in the catalogs.


rmandelb avatar rmandelb commented on June 2, 2024

I'm confused; what changed between the plot you showed above (which looks like a bad match) and the plots that are linked in your latest comment (which look significantly better)?


vvinuv avatar vvinuv commented on June 2, 2024

There was a bug in the code that selects the sample for the angular correlation function. Earlier I was only using galaxies within redshift 0.3; that sample was also used for testing the SDSS size-luminosity relation. I forgot to remove the redshift constraint, and the resulting angular correlation functions didn't match the observations. After I removed the redshift constraint, the correlation functions are better behaved.


vvinuv avatar vvinuv commented on June 2, 2024

@rmandelb @slosar, @yymao refactored the validation script and he was able to compute the projected correlation function, as shown here: https://portal.nersc.gov/project/lsst/descqa/v2/?run=2017-12-21_32&test=tpcf_Zehavi2011_rSDSS. I just glanced at his code and it seems there are no issues.


yymao avatar yymao commented on June 2, 2024

@vvinuv @rmandelb I can confirm that w(theta) is now fixed (the fix involves a combination of not limiting the maximal redshift and using a better footprint to generate the randoms). See plots here.

As @vvinuv mentioned, I was able to generate reasonable wp(rp) results once pi_max is correctly accounted for when calculating wp. See plots here.

New code can be found in vvinuv#1, and will later be merged here.


rmandelb avatar rmandelb commented on June 2, 2024

@yymao - I am concerned about the wp(rp); am I correctly interpreting the proto-DC2 plot as showing almost no evolution of the bias with luminosity? Buzzard doesn't look very good either.

Is there any reason not to show past 10 Mpc/h? It would be good to test the clustering out to a few tens of Mpc/h if possible (e.g., if the validation data allows and there isn't some other obstacle that I'm missing).


slosar avatar slosar commented on June 2, 2024

I'm totally confused -- don't those lines imply that fainter objects are more clustered than bright objects? (in https://portal.nersc.gov/project/lsst/descqa/v2/?run=2017-12-21_32&test=tpcf_Zehavi2011_rSDSS) Also, didn't we agree we want to do 3D clustering? Finally, I really think we should test out to 200 Mpc/h to see the BAO bump (to make sure it is correctly broadened, etc.; it is just a nice sanity check). Another reason to go with xi rather than w_p.


rmandelb avatar rmandelb commented on June 2, 2024

@slosar , I don't see what you mean. The data from Zehavi show a monotonic increase in the clustering with luminosity as far as I can see in the place you linked.


yymao avatar yymao commented on June 2, 2024

Note that I'm using a new color scheme where a brighter sample has a brighter, warmer color.

I'm working on extending the test to larger scales (but not to 200 Mpc/h; I don't think the protoDC2 catalog is big enough). Will report back once I have that ready.

Do we still want to test 2pt correlation function in a snapshot box?


yymao avatar yymao commented on June 2, 2024

@rmandelb @slosar OK --- I've tried to extend the scale to about 40 Mpc/h and here are the results. You can see that there are very few pairs beyond 10 Mpc/h and the points are jumping around.

I've also used fewer bins to reduce noise and you can see the lack of magnitude trend in protoDC2 is quite clear. I've done a self-review of my code so I don't think this is due to a bug, but nevertheless @vvinuv should review it to make sure.


rmandelb avatar rmandelb commented on June 2, 2024

Whew. Is it possible to define the maximum scale for clustering-related tests based on the box size? i.e., always have a minimum scale of whatever you want the minimum to be, always have a fixed logarithmic bin size, and choose the max scale to be some fixed fraction of box size -> that determines the number of bins? That way we can get something reasonable for protoDC2, and some other reasonable thing for Buzzard and cosmoDC2, without having to change code around later on.
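
(A minimal sketch of this binning scheme, with the fraction of the box size and the logarithmic bin width left as free, placeholder parameters:)

```python
import numpy as np

def clustering_bins(box_size, r_min=0.1, dlog_r=0.2, box_fraction=0.125):
    """Logarithmic separation bin edges whose maximum scale is a fixed
    fraction of the box size, with a fixed bin width in log10(r).
    box_fraction=0.125 (L_box/8) and dlog_r=0.2 are placeholder choices."""
    r_max = box_fraction * box_size
    n_bins = int(np.floor(np.log10(r_max / r_min) / dlog_r))
    return r_min * 10.0 ** (dlog_r * np.arange(n_bins + 1))

# Hypothetical box sizes, for illustration only.
print(clustering_bins(250.0))
print(clustering_bins(1050.0))
```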

If @vvinuv does not find a problem with your code, then we should consider a problem with the catalog. The first thing I would ask is how does this catalog look on the luminosity vs. halo mass or stellar vs. halo mass relation? If the clustering is flat with luminosity, that implies some weirdness in those relationships. Possibly this is something that @aphearin is looking to tune?


yymao avatar yymao commented on June 2, 2024

@rmandelb I guess we can ask catalog providers to provide a scale beyond which they don't trust their catalogs.

I also wonder if the clustering signals would look better if we used number densities, instead of luminosities, to select the galaxy samples, given that the luminosity function of protoDC2 does not match the observed ones very well. In fact, @aphearin might already have some results on this front.


vvinuv avatar vvinuv commented on June 2, 2024

@slosar, as @rmandelb pointed out, the clustering from Zehavi increases with luminosity. @slosar, are you suggesting that the 3D correlation function be presented in terms of projected and line-of-sight separation in a contour figure, as in Zehavi et al. 2010? Or do you want to use the true 3D positions of the galaxies in the simulation to measure the 3D correlation function (which is further from what is done with observations)?

I didn't find any bugs in the code written by @yymao and I approved the changes. Also, protoDC2 is only 25 sq. degrees, so there won't be much statistical power beyond 10 Mpc/h, but Buzzard may be better for that. @evevkovacs and @aphearin are working on some tests of protoDC2 which may have an effect on the 2pt function.


rmandelb avatar rmandelb commented on June 2, 2024

> I guess we can ask catalog providers to provide a scale beyond which they don't trust their catalogs.

Sure, though I think that if we used some fixed fraction of the box size it would be hard to go too badly wrong.

> I also wonder if the clustering signals would look better if we used number densities, instead of luminosities, to select the galaxy samples

If we had roughly the right trend in clustering as a function of luminosity, but were having trouble matching the data in amplitude, I would say that's worth trying. But right now the clustering appears to be essentially flat with luminosity, so even if we chop the bins differently (within this range of luminosities), that will presumably still be roughly true. To me these plots are already enough of a diagnostic of a problem with the clustering that it's worth digging into some of the other tests I suggested, like luminosity vs. halo mass.


slosar avatar slosar commented on June 2, 2024

rmax for xi should be a fixed fraction of the box size, and note that the correlation function will start to break down well below L_box, because the box imposes its own integral constraint (in principle you can predict that, but it's not worth the hassle; just stop at L_box/8 or something).
These clustering amplitudes are not too good, true.


yymao avatar yymao commented on June 2, 2024

@rmandelb @slosar For light cones, things would be a bit trickier than just using a fixed fraction of the box size. Buzzard light cone catalog, for example, uses three different boxes (of different volumes), and the plot you've seen (saying buzzard_test on it) was made with a small patch of the full Buzzard catalog. For protoDC2 things are less complicated: it repeats a single box several times, so using a fixed fraction of the box size is probably OK for protoDC2.


slosar avatar slosar commented on June 2, 2024

@yymao OK, no need to overcomplicate things. Just use the smallest fundamental length scale, even if you have to put it in by hand...


aphearin avatar aphearin commented on June 2, 2024

@yymao @rmandelb - yes, I am actively working on improving the trends of clustering with luminosity and stellar mass, by rescaling variables using Monte Carlo techniques. I'm happy to share results thus far, I've made a lot of headway but it's a work in progress.


yymao avatar yymao commented on June 2, 2024

@aphearin There's something I don't fully understand --- I remember that your method just rescales the quantities but preserves the rank ordering, is that true? If so, how does that induce magnitude-dependent clustering if there's no such signal to begin with, as @rmandelb pointed out earlier?


morriscb avatar morriscb commented on June 2, 2024

Sorry for the late arrival. @sschmidt23 pointed me at this issue, as the Photo-z WG has an interest in it due to the use of cross-correlation redshift determination in our PZCalibrate deliverable. Assuming the issues with the absolute magnitudes can be resolved, this test should meet our needs.

One question I have, though: is this test only being performed on low-redshift objects, and not at the higher redshifts available in protoDC2 or Buzzard? Considering you are mostly comparing to SDSS results, it's not entirely clear whether a cut on redshift is being made or not. I ask because we would like to know that some bias evolution exists as a function of redshift, to test methods to mitigate this evolution in the cross-correlation redshift calibration.


yymao avatar yymao commented on June 2, 2024

@morriscb protoDC2 has only z < 1 galaxies (a cut is made at z=1). Buzzard has galaxies up to z=2.1 (not sure if a hard cut is made but I don't think so).


morriscb avatar morriscb commented on June 2, 2024

Okay, so I assume that means galaxies up to those redshifts are being used in this test which is what I wanted to know. I wasn't sure if a cut was being made in redshift below the limits of the simulations or not. Thanks.


yymao avatar yymao commented on June 2, 2024

@morriscb for the correlation functions with apparent magnitude cuts, yes. For correlation functions with absolute magnitude cuts there are additional redshift cuts applied before the correlation functions are calculated.


vvinuv avatar vvinuv commented on June 2, 2024


rmandelb avatar rmandelb commented on June 2, 2024

@vvinuv - what about https://arxiv.org/abs/1210.6694 ? (see e.g. figures 10-12)

I agree in general with Chris that we do want to have some sanity check of the galaxy biases at higher redshifts. This will be important for LSS, LSS+WL combined analysis, and PZ.


vvinuv avatar vvinuv commented on June 2, 2024


rmandelb avatar rmandelb commented on June 2, 2024

I would add that Chris can probably suggest a specific validation criterion. One more thing to think about:

We don't necessarily need a validation test on the clustering signal. We can do a validation test on the galaxy bias as a function of redshift / luminosity, using the fact that we know the matter clustering (so bias = sqrt(xi_gg / xi_mm)).
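
(A minimal sketch of such a bias test, assuming a measured galaxy xi_gg and a matter xi_mm prediction evaluated at the same separations; the fit range is a placeholder:)

```python
import numpy as np

def galaxy_bias(r, xi_gg, xi_mm, r_fit_range=(5.0, 20.0)):
    """Scale-dependent bias b(r) = sqrt(xi_gg / xi_mm), plus its average
    over quasi-linear scales where the ratio is expected to be roughly flat."""
    b_of_r = np.sqrt(np.asarray(xi_gg) / np.asarray(xi_mm))
    mask = (r >= r_fit_range[0]) & (r <= r_fit_range[1])
    return b_of_r, float(np.mean(b_of_r[mask]))
```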


evevkovacs avatar evevkovacs commented on June 2, 2024

@yymao @vvinuv @rmandelb In future, to avoid confusion, we should make the cuts on the catalog data as close as possible to the cuts on the data to which they are being compared, and these cuts should be stated clearly on the plots and in the summary text files. If additional redshift etc. cuts are being made, they should also be noted on the plots and in the files. I think this should be our standard practice for all validation tests.


yymao avatar yymao commented on June 2, 2024

@evevkovacs hmm I think we are already doing that right now, no?


evevkovacs avatar evevkovacs commented on June 2, 2024

Well, there were some questions above about the redshift ranges. I checked the plots and couldn't see any labels pertaining to redshift, so those should be included. However, I did notice that a file called config.yaml is being printed out in the summary, which has some information. Not all our tests have this file in the summary section. Is it a copy of the yaml file for the test? This would be a useful thing to have printed in the summary section by default.


morriscb avatar morriscb commented on June 2, 2024

@vvinuv For our use case we don't need to be assured that the clustering amplitude is entirely physical, just that there is a trend in correlation amplitude with absolute magnitude and that it isn't flat like you observed in the plot created previously.

A specific test for our use case could be to select one of the absolute magnitude bins, measure, as @rmandelb suggested, bias = sqrt(xi_gg / xi_mm) at several different redshifts, and then assert that d(bias)/dz != 0, i.e. that the derivative of the bias with respect to redshift is non-zero. Having this will at least allow us to test bias mitigation techniques in the context of clustering redshifts.
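
(A minimal sketch of that criterion, with placeholder names and thresholds; the per-redshift bias values would come from the sqrt(xi_gg / xi_mm) measurement discussed above:)

```python
import numpy as np

def bias_evolves_with_z(z_centers, bias, bias_err, min_slope=0.1):
    """Weighted straight-line fit b(z) = b0 + (db/dz) * z for one
    absolute-magnitude bin; pass only if the slope exceeds `min_slope`
    and is detected beyond its statistical error.
    min_slope=0.1 is a placeholder, not an agreed criterion."""
    z = np.asarray(z_centers, dtype=float)
    b = np.asarray(bias, dtype=float)
    w = 1.0 / np.asarray(bias_err, dtype=float) ** 2
    A = np.vstack([np.ones_like(z), z]).T          # design matrix [1, z]
    cov = np.linalg.inv(A.T @ (A * w[:, None]))    # parameter covariance
    b0, slope = cov @ (A.T @ (w * b))              # weighted least squares
    slope_err = np.sqrt(cov[1, 1])
    return bool(slope > min_slope and slope > 3.0 * slope_err)
```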


sschmidt23 avatar sschmidt23 commented on June 2, 2024

As Chris said, one of the main things we want to check in DC2 is our ability to correct for galaxy bias evolution, so I think we need d(bias)/dz >~ 0.1-0.2 at least (though it may be larger) for several absolute magnitude ranges out to z=1 as a good quantitative criterion for now. That is, we want to be sure that the bias evolution is significantly non-zero and measurable beyond statistical errors. I don't have an intuition for what the bias will do at higher redshifts; we'll have to think about that a bit more for when the 1<z<3 DC2 catalog is available.


aphearin avatar aphearin commented on June 2, 2024

@sschmidt23 @morriscb - In the ideal case, an actual test function is written that is incorporated into DESCQA - this is the preferred workflow for working group members to make a specific request of mock catalogs.

At minimum, could you be more precise in specifying how d(bias)/dz is defined? Galaxy bias is only defined for a specific galaxy sample. For which specific galaxy sample selection function(s) would you like to see bias evolution?


morriscb avatar morriscb commented on June 2, 2024

Hi @aphearin, @rmandelb asked for a suggestion in the thread, so I spitballed an idea. Also, I did mention a sample in my post: one of the absolute magnitude bins already being used in the clustering amplitude vs. absolute magnitude plots shown in this issue. The exact sample isn't really that important for us; the test only needs to show that the simulations can produce bias evolution with redshift.


j-dr avatar j-dr commented on June 2, 2024

Something that isn't being covered in this issue, but will be important for clusters, is small-scale, color-dependent clustering (particularly for red galaxies...). Not sure if I should open a new issue related to this. Happy to do so if we don't want to cram too many things into one test.

This will definitely be important for cluster miscentering though, so thought I would bring it to people's attention. @erykoff


rmandelb avatar rmandelb commented on June 2, 2024

@j-dr - does it make sense to do this as a test of small-scale clustering split by color? Or could we do a more specific test that focuses on the populations of cluster-mass halos?

My gut feeling is that this deserves a separate issue, because the goal of the validation test (what science it will enable) is quite different from the goal of the large-scale clustering validation test.


aphearin avatar aphearin commented on June 2, 2024

I agree entirely with @rmandelb - this warrants a separate issue. The science targets driving these two validations are pretty distinct, as is the labor required of catalog producers.


j-dr avatar j-dr commented on June 2, 2024

I'm happy to separate this into a separate test. I'll open a new issue and we can discuss there.


yymao avatar yymao commented on June 2, 2024

@rmandelb @vvinuv @j-dr @morriscb @sschmidt23 @slosar @aphearin @patricialarsen

We had lots of discussion on this thread about galaxy clustering and galaxy bias but haven't reached a concrete plan. So let me see if I can capture the essentials and draft a plan here.

➡️ Current implementation can be found here, for reference.

  1. The galaxy-galaxy clustering signal should have absolute magnitude dependence. I think @vvinuv's test already covers this (comparing with Zehavi et al. 2011), but we need some criteria. @rmandelb @slosar, any suggestions?

  2. The galaxy bias should have redshift dependence. I think this sounds like a different test; it is related to this one, but the representation is pretty different. @morriscb @sschmidt23, would you agree? Should we open a new issue for the redshift-dependent bias test?

  3. I think we should open a new issue for color-dependent clustering. @j-dr @aphearin, would you agree? I think @aphearin has already implemented the color-dependent clustering test outside DESCQA. With some effort we should be able to port it in.

  4. @patricialarsen, are there other requests from TJP for this test?


aphearin avatar aphearin commented on June 2, 2024
> I think we should open a new issue for color-dependent clustering. @j-dr @aphearin, would you agree? I think @aphearin has already implemented the color-dependent clustering test outside DESCQA. With some effort we should be able to port it in.

Yes, I agree this should be a separate test. I am happy to share my (Halotools-based) code for this purpose, although it is based on snapshots with xyz coordinates.


j-dr avatar j-dr commented on June 2, 2024

I also agree that color-dependent clustering should be a separate test. Can't we just repurpose what @vvinuv has done for magnitude-dependent clustering, since that is already implemented on lightcones, which is what we really want to test at the end of the day?


vvinuv avatar vvinuv commented on June 2, 2024


yymao avatar yymao commented on June 2, 2024

This has been implemented in #91.

