
pc-pricing-tutorial's Introduction

Travis build status

Slack (#pricing-tutorial)

P&C Pricing Tutorial

The goal of this project is to build an end-to-end reproducible example of a ratemaking project in R, in the form of a book. The target audience includes students, actuaries, and data scientists who are interested in learning about insurance pricing or porting their existing workflows. As much as possible, we'll provide reproducible code for the technical bits, including data manipulation, exploratory data analysis, modeling, validation, implementation, and report writing. Significant simplifications from real life (due to lack of details in the dataset, for example) will be noted. We'll follow modeling best practices, but also point out incorrect/suboptimal workflows that are prevalent.

Package Dependencies

To install the necessary packages to run the code in this repo, you can restore the library using renv as follows:

# Install remotes if it isn't already available
if (!requireNamespace("remotes", quietly = TRUE))
  install.packages("remotes")

# Install renv from GitHub, then restore the project library from renv.lock
remotes::install_github("rstudio/renv")
renv::restore()

Contributing

Interested in joining in on the fun? Look at the issues page to see what tasks need help and check out the contributing guidelines. Not familiar with R but want to lend your actuarial expertise? Please feel free to comment on issues to share your thoughts or open new issues to let us know how we can do things better!


Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.


pc-pricing-tutorial's Issues

Reg Checklist - The Filed Rating Plan - Supporting Data

  • Provide state-specific, book-of-business-specific univariate historical experience data consisting of, at minimum, earned premiums, incurred losses, loss ratios and loss ratio relativities for each category of model output(s) proposed to be used within the rating plan.
  • Provide an explanation of any material (especially directional) differences between model indications and state-specific univariate indications. (Multivariate indications may be reasonable as refinements to univariate indications, but likely not for bringing about reversals of those indications. For instance, if the univariate indicated relativity for an attribute is 1.5 and the multivariate indicated relativity is 1.25, this is potentially a plausible application of the multivariate techniques. If, however, the univariate indicated relativity is 0.7 and the multivariate indicated relativity is 1.25, a regulator may question whether the attribute in question is negatively correlated with other determinants of risk. Credibility of state data should be considered when state indications differ from modeled results based on a broader data set. However, the relevance of the broader data set to the risks being priced should also be considered.)

Reg Checklist - The Filed Rating Plan - Responses to Data, Credibility and Granularity Issues

  • What consideration was given to the credibility of the output data? (At what level of granularity is credibility applied? If modeling was by-coverage, by-form or by-peril, explain how these were handled when there was not enough credible data by coverage, form or peril to model.)
  • If applicable, discuss the rationale for using a model that is more granular than the rating plan. (This is applicable if the insurer had to combine modeled output in order to reduce the granularity of the rating plan.)
  • If applicable, discuss the rationale for using a rating plan that is more granular than modeled output. (A more granular rating plan implies that the insurer had to extrapolate certain rating treatments, especially at the tails of a distribution of attributes, in a manner not specified by the model indications.)

Reg Checklist - The Filed Rating Plan - Relevance of Variables / Relationship to Risk of Loss

  • Provide an explanation of how the characteristics/rating variables, included in the filed rating plan, logically and intuitively relate to the risk of insurance loss (or expense) for the type of insurance product being priced. Include a discussion of the relevance each characteristic/rating variable has on consumer behavior that would lead to a difference in risk of loss (or expense). (This explanation would not be needed if the connection between variables and risk of loss (or expense) has already been illustrated.)

Reg Checklist - Building the Model - “Old Model” Versus “New Model”

  • An explanation of why this model is better than the one it is replacing. How was that conclusion formed? What metrics were relied on for measurement? (Regulators should expect to see improvement in the new class plan’s predictive ability or other sufficient reason for the change.)
  • Were 2 Gini coefficients compared? What was the conclusion drawn from this comparison? (One example of a comparison might be sufficient.)
  • Were double lift charts analyzed? What was the conclusion drawn from this analysis?
  • Provide a list of all new predictor variables in the model that were not in the prior model. (Useful to differentiate between old and new variables so the regulator can prioritize more time on factors not yet reviewed.)
  • Provide a list of predictor variables used in the old model that are not used in the new model. Why were they dropped from the new model?
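For the tutorial's own old-vs-new comparisons, here is a minimal sketch of a double lift chart in R. It assumes a holdout data frame holdout with columns pred_old, pred_new, and actual (all hypothetical names):

library(dplyr)
library(tidyr)
library(ggplot2)

# Sort records by the ratio of new-model to old-model predictions,
# bucket into deciles, and compare average actuals against each model's average prediction
double_lift <- holdout %>%
  mutate(decile = ntile(pred_new / pred_old, 10)) %>%
  group_by(decile) %>%
  summarise(actual    = mean(actual),
            old_model = mean(pred_old),
            new_model = mean(pred_new)) %>%
  pivot_longer(-decile, names_to = "series", values_to = "avg_loss")

# The model whose line tracks the actuals more closely wins
ggplot(double_lift, aes(decile, avg_loss, colour = series)) +
  geom_line() +
  labs(x = "Decile of new/old prediction ratio", y = "Average loss")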

Decide on model type

Let's keep it simple with GLM.

  • Regularization?
  • Frequency/severity vs pure premium
  • By-peril or not
    • Whether to account for dependency if by-peril
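To make the options above concrete, here is a minimal sketch of the frequency/severity and pure premium formulations, assuming a policies data frame with hypothetical columns claim_count, claim_amount, exposure, veh_age, and region:

library(statmod)  # provides the tweedie() family for glm()

# Frequency: Poisson GLM on claim counts with a log(exposure) offset
freq_fit <- glm(claim_count ~ veh_age + region + offset(log(exposure)),
                family = poisson(link = "log"), data = policies)

# Severity: Gamma GLM on average claim size, fit to claim records only
sev_fit <- glm(claim_amount / claim_count ~ veh_age + region,
               family = Gamma(link = "log"), weights = claim_count,
               data = subset(policies, claim_count > 0))

# Pure premium: Tweedie GLM on loss per unit of exposure
pp_fit <- glm(claim_amount / exposure ~ veh_age + region,
              family = tweedie(var.power = 1.5, link.power = 0),  # log link
              weights = exposure, data = policies)

If we decide we want regularization, packages such as glmnet or HDtweedie could be candidates, but that's a separate discussion.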

Reg Checklist - Building the Model - Modeler/Software

  • Provide the names, contact emails, phone numbers and qualifications of the key persons who:
    a. Led the project
    b. Compiled the data
    c. Built the model
    d. Performed peer review
  • What software was used? Provide the name of the software vendor/developer, software product and a software version reference.
  • When did work to build the model begin and when was the model build finalized?

Investigate leaflet widget size

The widget is pretty big, which causes the page to take a while to load. We should look into what's causing it and see if we can trim it down.
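One way to investigate, as a rough sketch: serialize the widget to a standalone HTML file to see how big it is, and try simplifying the polygon geometries before handing them to leaflet (regions_sf is a hypothetical sf object holding the region boundaries):

library(leaflet)
library(htmlwidgets)

widget <- leaflet(regions_sf) %>% addPolygons()

# How big is the serialized widget?
saveWidget(widget, "map.html", selfcontained = TRUE)
file.size("map.html") / 1e6  # size in MB

# Candidate fix: drop most of the polygon vertices before plotting
regions_light <- rmapshaper::ms_simplify(regions_sf, keep = 0.05)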

Reg Checklist - Building the Model - High-Level Narrative for Building the Model

  • Identify the type of model (e.g. Generalized Linear Model – GLM, decision tree, Bayesian Generalized Linear Model, Gradient-Boosting Machine, neural network, etc.), describe its role in the rating system and provide the reasons why that type of model is an appropriate choice for that role. (If by-peril or by-coverage modeling is used, the explanation should be by-peril/coverage.)
  • A description of why the model (using the variables included in it) is appropriate for the line of business. (If by-peril, by-form or by-coverage modeling is used, the explanation should be by-peril/coverage/form.)
  • Describe the model review process, from initial concept to final model. Keep this in overview narrative mode, less than 3 pages.
  • Describe whether loss ratio, pure premium or frequency/severity analyses was performed and, if separate frequency/severity modeling was performed, how pure premiums were determined.
  • What is the model’s target variable? (A clear description of the target variable is key to understanding the purpose of the model.)
  • Provide a detailed description of the variable selection process.
  • Was input data segmented in any way, e.g., was modeling performed on a by-coverage, by-peril or by-form basis? Explain the form of data segmentation and the reasons for data segmentation. (The regulator would use this to follow the logic of the modeling process.)
  • Describe any limitations or concerns in the analysis resulting from data issues and discuss the resulting impact on the modeling results.
  • How was data credibility (or lack thereof) accounted for in the model building? (Adjustments may be needed given models do not explicitly consider the credibility of the input data or the model’s resulting output; models take input data at face value and assume 100% credibility when producing modeled output.)

Reg Checklist - Building the Model - Medium-Level Narrative for Building the Model

  • Describe any judgment used throughout the modeling process. Disclose assumptions used in constructing the model and provide support for these assumptions.
  • If post-model adjustments were made to the data and the model was rerun, explain the details and the rationale. It is not necessary to discuss each iteration of adding and subtracting variables, but the regulator should be provided with a general description of how that was done, including any measures relied upon. (Evaluate the addition or removal of variables and the model fitting.)
  • Describe the univariate testing and balancing that was performed during the model-building process, including a verbal summary of the thought processes involved.
  • Describe the 2-way testing and balancing that was performed during the model-building process, including a verbal summary of the thought processes of including (or not including) interaction terms.
  • For the GLM, what was the link function used? What distribution was used for the model (e.g., Poisson, Gaussian, log-normal, Tweedie)? Explain why the link function and distribution were chosen. Provide the formulas for the distribution and link functions, including specific numerical parameters of the distribution.
  • Were there data situations in which GLM weights were used? Describe these. (Investigate whether identical records were combined to build the model.)

Creation of combined dataset

This is the policy-level dataset that has been joined with the mapping tables, i.e., factor levels should be human-readable descriptions instead of codes.
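As a sketch of what the join might look like (the table and column names, e.g. policies, vehicle_codes, region_codes, are hypothetical placeholders for the actual Brazilian tables):

library(dplyr)

# Replace coded levels with human-readable descriptions by joining each
# mapping table, then dropping the raw code columns
policies_combined <- policies %>%
  left_join(vehicle_codes, by = "vehicle_code") %>%
  left_join(region_codes, by = "region_code") %>%
  select(-vehicle_code, -region_code)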

Reg Checklist - Building the Model - Predictor Variables

  • Provide the names, descriptions and uses of each predictor variable, offset variable, control variable, proxy variable, geographic variable, geodemographic variable and all other variables in the model; explanations should not use programming language or code.
  • For each predictor variable, state whether the variable is continuous, discrete or Boolean.
  • Provide an intuitive argument for why an increase in each predictor variable should increase or decrease frequency, severity, loss costs, expenses, or whatever is being predicted.
  • If the modeler used a Principal Component Analysis (PCA) approach, provide a narrative about that process, explain why PCA was used, and describe the step-by-step process used to transform observations (usually correlated) into a set of linearly uncorrelated variables. Include a listing of the PCA variable and its principal components.

Data prep

As mentioned in #1 (comment) we'll start with the publicly available Brazilian personal auto data. Since we're focused on building the general pipeline we should ensure it'll be easy to plug in another dataset should we choose to later.

I took a quick look and started documenting findings here: https://github.com/kasaai/pc-pricing-tutorial/blob/master/analysis/data-prep.md

Tasks

  • Dictionary for translating column names to English (#6).
  • Translating code descriptions (in the mapping tables) to English, as appropriate (#6).
  • Join the policy table with the mapping tables, so we have meaningful values for all levels (#7).
  • Join the GADM geopackage features to the regions table for visualizations on a map (#8).

We should be OK working with Portuguese in the factor levels as long as we know what they represent.
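For the first two tasks, a minimal sketch of renaming columns via a translation dictionary (the Portuguese names shown are illustrative, not the actual column names):

library(dplyr)

# Named vector: English name = Portuguese name
col_dictionary <- c(
  premium  = "premio",
  exposure = "exposicao",
  region   = "regiao"
)

policies <- rename(policies, !!!col_dictionary)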

Reg Checklist - Selecting Model Input - Data Organization

  • Document the method of organization for compiling data, including procedures to merge data from different sources and a description of any preliminary analyses, data checks, and logical tests performed on the data and the results of those tests. (This should explain how data from separate sources was merged.)
  • Document the process for reviewing the appropriateness, reasonableness, consistency and comprehensiveness of the data, including a justification of why the data makes sense. (For example, if by-peril modeling is performed, the documentation should be for each peril and make intuitive sense. If “murder” or “theft” rates are used to predict the wind peril, provide support and a logical explanation.)
  • Disclose material findings from the data review and identify any potential material limitations, defects, bias or unresolved concerns found or believed to exist in the data.
  • For any errors or material limitations in the data, explain how they were corrected.

Reg Checklist - Selecting Model Input - Available Data Sources

  • Provide details of all data sources including the experience period for insurance data and when the data was last recorded or updated. (This information can be used to evaluate the completeness of the data source, integrity of the data source, relevance of the data to the predictive timeframe, the potential for historical bias, transparency to insured of the data source, and the ability of the insured to make corrections to the data source.)
  • Specify the companies whose data is included in the datasets. (If the filer is part of a group, do the datasets include data from affiliated companies? If so, which companies? If the filer is an advisory organization, what companies are used? Are the companies included in the data relevant and compatible to the company that filed the rating plan?)
  • Provide the geographical scope and geographic exposure distribution of the data. (Evaluate whether the data is relevant to the loss potential for which it is being used. For example, verify that hurricane data is only used where hurricanes can occur.)
  • List each data source. For each source, list all data elements used as input to the model that came from that source.
  • Specify the type of data (e.g., accident year or policy year, text, numeric).
  • Explain if internal or external data was used and if external data was used, disclose reliance on data supplied by others.
  • Provide details of any non-insurance data used (customer-provided or other), including who owns this data, how consumers can verify their data and correct errors, whether the data was collected by use of a questionnaire/checklist, whether it was voluntarily reported by the applicant, and whether any of the variables are subject to the Fair Credit Reporting Act. If the data is from an outside source, what steps were taken to verify the data was accurate? (If the data is from a third-party source, the company should provide information on how the source addresses the questions in this consideration.)

Train/test split

We're currently using the first half of 2013. To capture seasonal effects, we may want to use a whole year. We can plug this in later, though, since it should have little effect on the modeling process.
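As a sketch, assuming the policy table has a policy_start_date column (hypothetical name), the date-based split is just a filter, so widening it to a full year later is a one-line change:

library(dplyr)

cutoff <- as.Date("2013-07-01")
first_half_2013 <- filter(policies, policy_start_date >= as.Date("2013-01-01"),
                                    policy_start_date <  cutoff)
remainder       <- filter(policies, policy_start_date >= cutoff)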

automate analysis plan

We want to specify the execution plan for the analysis so that it can be automated. One concern is that GNU Make might be too complicated for new users; we want to investigate using drake.
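As a sketch of what a drake-based plan might look like (the target names and the helper functions read_raw_data(), clean_data(), and fit_glm() are hypothetical):

library(drake)

plan <- drake_plan(
  raw_policies   = read_raw_data(file_in("data/policies.csv")),
  clean_policies = clean_data(raw_policies),
  model          = fit_glm(clean_policies),
  report         = rmarkdown::render(
    knitr_in("analysis/report.Rmd"),
    output_file = file_out("analysis/report.html")
  )
)

make(plan)  # rebuilds only the targets that are out of date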

Reg Checklist - The Filed Rating Plan - General Impact of Model on Rating Algorithm

  • In the Actuarial Memorandum section on the SERFF Supporting Documentation tab, for each model relied upon, include a document that explains the model and its role in the rating system. (This item becomes “Essential” if the role of the model cannot be immediately discerned by the reviewer from a quick review of the rate and/or rule pages. Importance is dependent on state requirements and ease of identification by the first layer of review and escalation to the appropriate review staff.)
  • Provide an explanation of how the model was used to adjust the rating algorithm.
  • Provide a complete list of all characteristics/variables used in the proposed rating plan, including those used as input to the model (including sub-models and composite variables) and all other characteristics/variables used to calculate a premium. For each characteristic/variable, indicate if it is only input to the model, whether it is only a separate univariate rating characteristic, or whether it is both input to the model and a separate univariate rating characteristic. The list should provide transparent descriptions of each listed characteristic/variable. (Examples of variables used as inputs to the model and used as separate univariate rating characteristics might be criteria used to determine a rating tier or household composite characteristic.)
  • For each characteristic/variable used as both input to the model (including submodels and composite variables) and as a separate univariate rating characteristic, explain how these are tempered or adjusted to account for possible overlap or redundancy in what the characteristic/variable measures. (Modeling loss ratio with these characteristics/variables as control variables would account for possible overlap. The insurer should address this possibility or other considerations, e.g., tier placement models often use risk characteristics/variables that are also used elsewhere in the rating plan.)
  • If the filing support includes an update or replacement of an existing model, identify and explain the changes in calculations, assumptions, parameters and data used to build the models. Provide an explanation of why the updated/replacement model is better than the one it is replacing, including, how that conclusion was reached, and the metrics relied upon to reach that conclusion.

Reg Checklist - Selecting Model Input - Sub-Models

  • Disclose reliance on sub-model output used as input to this model. If a sub-model was relied upon, provide the vendor name, and the name and version of the sub-model. If the submodel was built/created in-house, provide contact information for the person responsible for the sub-model. (Examples of such sub-models include credit/financial scoring algorithms and household composite score models. Sub-models can be evaluated separately and in the same manner as the primary model under evaluation.)
  • If using catastrophe model output, identify the vendor and the model settings/assumptions used when the model was run. (For example, it is important to know hurricane model settings for storm surge, demand surge, long/short-term views.)
  • If using catastrophe model output (a sub-model) as input to the GLM under review, disclose whether loss associated with the modeled output was removed from the loss experience datasets. (If a weather-based sub-model is input to the GLM under review, loss data used to develop the model should not include loss experience associated with the weather-based sub-model. Doing so could cause distortions in the modeled results by double counting such losses when determining relativities or loss loads in the filed rating plan. For example, redundant losses in the data may occur when non-hurricane wind losses are included in the data while also using a severe convective storm model in the actuarial indication. Such redundancy may also occur with the inclusion of fluvial or pluvial flood losses when using a flood model, inclusion of freeze losses when using a winter storm model or including demand surge caused by any catastrophic event.)
  • If using output of any scoring algorithms, provide a list of the variables used to determine the score and provide the source of the data used to calculate the score. (Any sub-model should be reviewed in the same manner as the primary model that uses the submodel’s output as input.)
  • Was the sub-model previously approved (or accepted) by the regulatory agency? (If the sub-model was previously approved, that may change the extent of the sub-model’s review.)

Outline of tasks in pricing

While we're deciding on the data, we can put together an outline of typical things that need to be done, i.e. specific tasks in data prep, modeling, preparing reports. I expect this to evolve over time but we need something to get going.

Who would like to give a shot at this?

Datasets

First order of business is figuring out what data we'll be using. There is a decent-sized collection of actuarial datasets in CASdatasets (unaffiliated with the Casualty Actuarial Society; the name seems to be a coincidence...). E.g.,

library(tidyverse)
library(CASdatasets)
data("freMPL1")
glimpse(freMPL1)
# Observations: 30,595
# Variables: 22
# $ Exposure    <dbl> 0.583, 0.200, 0.083, 0.375, 0.500, 0.499, 0.2...
# $ LicAge      <int> 366, 187, 169, 170, 224, 230, 169, 232, 241, ...
# $ RecordBeg   <date> 2004-06-01, 2004-10-19, 2004-07-16, 2004-08-...
# $ RecordEnd   <date> NA, NA, 2004-08-16, NA, 2004-07-01, NA, 2004...
# $ VehAge      <fct> 2, 0, 1, 1, 3, 3, 6-7, 4, 5, 2, 2, 6-7, 2, 3,...
# $ Gender      <fct> Female, Male, Female, Female, Male, Male, Mal...
# $ MariStat    <fct> Other, Alone, Other, Other, Other, Other, Oth...
# $ SocioCateg  <fct> CSP1, CSP55, CSP1, CSP1, CSP47, CSP47, CSP50,...
# $ VehUsage    <fct> Professional, Private+trip to office, Profess...
# $ DrivAge     <int> 55, 34, 33, 34, 53, 53, 32, 38, 39, 43, 44, 5...
# $ HasKmLimit  <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
# $ BonusMalus  <int> 72, 80, 63, 63, 72, 68, 50, 57, 54, 76, 72, 5...
# $ VehBody     <fct> sedan, microvan, other microvan, other microv...
# $ VehPrice    <fct> D , K , L , L , L , L , G , B , B , M , M , L...
# $ VehEngine   <fct> injection, direct injection overpowered, dire...
# $ VehEnergy   <fct> regular, diesel, diesel, diesel, diesel, dies...
# $ VehMaxSpeed <fct> 160-170 km/h, 170-180 km/h, 170-180 km/h, 170...
# $ VehClass    <fct> B, M1, M1, M1, 0, 0, B, A, A, M2, M2, M1, M1,...
# $ ClaimAmount <dbl> 0.0000, 0.0000, 0.0000, 0.0000, 1418.6103, 0....
# $ RiskVar     <int> 15, 20, 17, 17, 19, 19, 19, 19, 19, 10, 10, 1...
# $ Garage      <fct> None, None, None, Private garage, None, None,...
# $ ClaimInd    <int> 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, ...

@bigalxyz Could you take a look at the data package and see if any datasets would suffice? After installing the package per the instructions at http://dutangc.free.fr/pub/RRepos/web/CASdatasets-index.html you can run ?CASdatasets to see the documentation.

Data granularity

Some of the exposures are large, but they might actually be individual policies with many vehicles. Will have to investigate/ask.
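A quick way to eyeball this, as a sketch (assuming the policy table is policies with an exposure column):

library(dplyr)

policies %>%
  summarise(n        = n(),
            mean_exp = mean(exposure),
            p99      = quantile(exposure, 0.99),
            max_exp  = max(exposure))

# How many records exceed one policy-year of exposure?
policies %>% filter(exposure > 1) %>% tally()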

Reg Checklist - The Filed Rating Plan - Definitions of Rating Variables

  • Provide a transparent presentation and explanation of binning decisions that assign ranges of model outputs to particular rating categories.
  • Provide complete definitions of any rating tiers or other intermediate rating categories that translate the model outputs into some other structure that is then presented within the rate and/or rule pages.

ASB ASOP on Modeling

@kevinykuo, I wanted to make sure you saw this, and filing an issue is the easiest way for me to do this at work. This is definitely something we want to consider across all of Kasa AI's projects. Do we want to consider submitting any comments? I haven't had a chance to review yet, but wanted to get it logged in GitHub so I don't forget.

ASOP on Modeling

Reg Checklist - Building the Model - Massaging Data, Model Validation and Goodness-of-Fit Measures

  • Provide a description of how the available raw data was divided between model development, test and validation datasets. Describe all circumstances under which the testing and validation datasets were accessed.
  • Describe the methods used to assess the statistical significance/goodness of the fit of the model, such as lift charts and statistical tests. Disclose whether the results are based on testing data, validation data and holdout samples. Ensure that the assessment includes model projection results compared to historical actual results to verify that modeled results bear a reasonable relationship to actual results. Discuss the results. (Some states require state-only data to test the plan, especially for analysis where using the state-only data contradicts the countrywide results. State-only data might be more applicable but could also be impacted by low credibility for some segments of risk.)
  • Describe any adjustments that were made in the data with respect to scaling for discrete variables or binning the data.
  • Describe any transformations made for continuous variables.
  • For each discrete variable level, provide the parameter value, confidence intervals, chi-square tests, p-values and any other relevant and material tests. Were model development data, validation data, test data or other data used for these tests? (Typical p-values greater than 5% are large and should be questioned. Reasonable business judgment can sometimes provide legitimate support for high p-values. Reasonableness of the p-value threshold could also vary depending on the context of the model, e.g., the threshold might be lower when many candidate variables were evaluated for inclusion in the model.)
  • Identify the threshold for statistical significance and explain why it was selected. Provide a verbal defense for keeping the variable for each discrete variable level where the p-values were not less than the chosen threshold.
  • For overall discrete variables, provide type 3 chi-square tests, p-values, F tests and any other relevant and material test. Were model development data, validation data, test data or other data used for these tests?
  • For continuous variables, provide confidence intervals, chi-square tests, p-values and any other relevant and material test. Were model development data, validation data, test data or other data used for these tests?
  • Describe how the model was tested for stability over time. (Evaluate the build/test/validation datasets for potential model distortions (e.g., a winter storm in year 3 of 5 can distort the model in both the testing and validation datasets).)
  • Describe how the model was tested for geographic stability, e.g., across states or territories within state. (Evaluate the geographic splits for potential model distortions.)
  • Describe how overfitting was addressed and the results of correlation tests.
  • Provide support demonstrating that the GLM assumptions are appropriate (for example, the choice of error distribution). (Visual review of plots of actual errors is usually sufficient.)
  • Provide the formula relationship between the data and the model outputs, with a definition of each model input and output. Provide all necessary coefficients to evaluate the predicted value for any real or hypothetical set of inputs. (B.4.l and B.4.m will show the mathematical functions involved and could be used to reproduce some model predictions.)
  • Provide 5-10 sample records and the output of the model for those records.
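For the GLM route, several of these items map onto standard R output. A rough sketch, using a hypothetical fitted model object freq_fit and the policies data frame:

# Parameter estimates, standard errors and p-values by level
summary(freq_fit)

# Confidence intervals for the coefficients
confint(freq_fit)

# Type 3 tests for each variable overall (car package)
car::Anova(freq_fit, type = 3)

# Residual diagnostics to sanity-check the error distribution assumption
plot(freq_fit)

# A few sample records with their predicted values
sample_records <- head(policies, 10)
cbind(sample_records,
      prediction = predict(freq_fit, newdata = sample_records, type = "response"))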

Reg Checklist - The Filed Rating Plan - Consumer Impacts

  • Identify model changes and rating variables that will cause large premium disruptions.
  • Did the insurer perform sensitivity testing to identify significant changes in premium due to small or incremental change in a single risk characteristic? If so, discuss and provide the results of that testing. (One way to see sensitivity is to analyze a graph of each risk characteristic’s/variable’s possible relativities. Look for significant variation between adjacent relativities and evaluate if such variation is reasonable.)
  • Measure and describe the impacts on expiring policies and describe the process used by management to mitigate or get comfortable with those impacts.
  • Provide a rate disruption analysis, demonstrating the distribution of percentage impacts on renewal business (create by rerating the current book of business). Include the largest dollar and percentage impacts arising from the filing, including (desirably) the impacts arising specifically from the adoption of the model or changes to the model as they translate into the proposed rating plan. (While the default request would typically be for the distribution of impacts at the overall filing level, the regulator may need to delve into the more granular variable-specific effects of rate changes if there is concern about particular variables having extreme or disproportionate impacts, or significant impacts that have otherwise yet to be substantiated. See Appendix C for an example of a disruption analysis.)
  • Provide exposure distributions for output variables and show the effects of rate changes at granular and summary levels. (See Appendix C for an example of an exposure distribution.)
  • Explain how the insurer will help educate consumers to mitigate their risk.
  • Identify sources to be used at "point of sale" to place individual risks within the matrix of rating system classifications. How can a consumer verify their own "point-of-sale" data and correct any errors? (Could be "Essential" if the variables/characteristics used could 1) have public-policy implications, 2) result in erroneous information being used, or 3) result in many large, disruptive premium changes at renewal. Another consideration to judge “importance” is whether consumers are proactively involved (e.g., use of consumer credit information and credit-report accuracy issues))
  • Identify rating variables that remain static over a consumer’s lifetime versus those that will be updated periodically. Document guidelines for variables that are listed as static yet for which the underlying consumer attributes may change over time.
  • Provide the regulator with a description of how the company will respond to consumers’ inquiries about how their premium was calculated.
  • Provide the regulator with a means to calculate the rate charged a consumer. (Especially for a complex model or rating plan, a score or premium calculator via Excel or similar means would be ideal, but this could be elicited on a case-by-case basis. Ability to calculate the rate charged can allow the regulator to perform sensitivity testing when there are small changes to a risk characteristic/variable. )

Reg Checklist - Selecting Model Input - Adjustments and Scrubbing

  • Provide pre-scrubbed data distributions for each input. (Compare with post-scrubbed.)
  • How was missing data handled?
  • If duplicate records exist, how were they handled?
  • Were any data outliers identified and subsequently adjusted? Name the outliers and explain the adjustments made to these outliers.
  • Were premium, exposure, loss or expense data adjusted (e.g., developed, trended, adjusted for catastrophe experience or capped) and, if so, how? Do the adjustments vary for different segments of the data and, if so, what are the segments and how was the data adjusted? (Look for anomalies in the data that should be addressed. For example, is there an extreme loss event in the data? If other processes were used to load rates for specific loss events, those losses should be removed from the input data, e.g., large losses, flood, hurricane or severe convective storm models for PPA comprehensive or homeowners’ loss.)
  • What adjustments were made to raw data, e.g., transformations, binning and/or categorizations? If so, name the characteristic/variable and describe the adjustment.
  • Provide post-scrubbed data distributions for each input.
