
Systemic Risk

This framework calculates, analyses and compares a wide range of systemic risk measures.

Some of these models have been improved or extended according to the methodologies described in the V-Lab Documentation, a valuable reference on systemic risk measurement.

The project has been published in "MATLAB Digest | Financial Services | May 2019".

If you find it useful, please consider making a donation to support its maintenance and development:

PayPal

Requirements

The minimum required MATLAB version is R2014b. In addition, the following products and toolboxes must be installed in order to properly execute the script:

  • Computer Vision System Toolbox
  • Curve Fitting Toolbox
  • Econometrics Toolbox
  • Financial Toolbox
  • Image Processing Toolbox
  • Optimization Toolbox
  • Parallel Computing Toolbox
  • Statistics and Machine Learning Toolbox
  • System Identification Toolbox

Usage

  1. Create a properly structured database (see the section below).
  2. Execute one of the following scripts (they can be edited following your needs and criteria):
    • run.m to perform the computation of systemic risk measures;
    • analyze.m to analyze previously computed systemic risk measures.

Dataset

Datasets must be built following the structure of the default ones included in every release of the framework (see the Datasets folder). Below is a list of the supported Excel sheets and their respective content:

  • Shares: prices, or returns expressed on a logarithmic scale, of the benchmark index and the firms, with daily frequency (the benchmark column can be labeled with any desired name and must be placed just after the observation dates).

  • Volumes: trading volume of the firms expressed in currency amount, with daily frequency.

  • Capitalizations: market capitalization of the firms, with daily frequency.

  • CDS: the risk-free rate expressed in decimals (the column must be called RF and must be placed just after observation dates) and the credit default swap spreads of the firms expressed in basis points, with daily frequency.

  • Balance Sheet Components: the balance sheet components of the firms, expressed with homogeneous observation frequency, currency and scale, structured as below:

    • Assets: the book value of assets.
    • Equity: the book value of equity.
    • Separate Accounts: the separate accounts of insurance firms.
  • State Variables: systemic state variables, with daily frequency.

  • Groups: group definitions are based on three-value tuples in which the Name field represents the group name, the Short Name field the group acronym and the Count field the number of firms to include in the group. The sum of the Count fields must be equal to the number of firms. For example, the following group definition:

    Firms in the Shares Sheet: A, B, C, D, E, F, G, H
    Insurance Companies: 2
    Investment Banks: 2
    Commercial Banks: 3
    Government-sponsored Enterprises: 1

    produces the following outcome:

    "Insurance Companies" contains A and B
    "Investment Banks" contains C and D
    "Commercial Banks" contains E, F and G
    "Government-sponsored Enterprises" contains H

  • Crises: crises can be defined using two different approaches:

    • By Events: based on two-value tuples where the Date field represents the event dates and the Name field represents the event names; every dataset observation matching an event date is considered to be associated to a distress occurrence.
    • By Ranges: based on three-value tuples where the Name field represents the crisis names, the Start Date field represents the crisis start dates and the End Date field represents the crisis end dates; every dataset observation falling inside a crisis range is considered to be part of a distress period.
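The sequential, Count-based group mapping described above can be sketched as follows (a minimal MATLAB illustration using the firm names from the Groups example; this is not code from the framework):

```matlab
% Hypothetical illustration of the sequential Count-based group mapping.
firms = {'A' 'B' 'C' 'D' 'E' 'F' 'G' 'H'};
names = {'Insurance Companies' 'Investment Banks' 'Commercial Banks' 'Government-sponsored Enterprises'};
counts = [2 2 3 1];  % must sum to numel(firms)

offsets = [0 cumsum(counts)];

for i = 1:numel(names)
    members = firms((offsets(i) + 1):offsets(i + 1));
    fprintf('"%s" contains %s\n', names{i}, strjoin(members, ', '));
end
```

Running the loop reproduces the outcome shown in the example: each group takes the next Count firms in the order they appear in the Shares sheet.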

Notes

  • The minimum allowed dataset must include the Shares sheet with a benchmark index and at least 3 firms. Observations must have a daily frequency and, in order to run consistent calculations, their minimum required amount is 253 for prices (which translates into a full business year plus an additional observation at the beginning of the time series, lost during the computation of returns) or 252 for logarithmic returns. They must have been previously validated and preprocessed by:

    • discarding illiquid series (unless necessary);
    • detecting and removing outliers;
    • removing rows with NaNs or filling the gaps through interpolation.
  • It is not mandatory to include financial time series that are used only by unwanted measures. Optional financial time series used by included measures can be omitted, as long as their contribution isn't necessary. Below is a list of required and optional time series for each category of measures:

    • Bubbles Detection Measures:
      • Required: shares (prices).
      • Optional: none.
    • Component Measures:
      • Required: shares (any).
      • Optional: none.
    • Connectedness Measures:
      • Required: shares (any).
      • Optional: groups.
    • Cross-Entropy Measures:
      • Required: shares (any), cds.
      • Optional: capitalizations, balance sheet, groups.
    • Cross-Quantilogram Measures:
      • Required: shares (any).
      • Optional: state variables.
    • Cross-Sectional Measures:
      • Required: shares (any), capitalizations, balance sheet.
      • Optional: separate accounts, state variables.
    • Default Measures:
      • Required: shares (any), capitalizations, cds, balance sheet.
      • Optional: none.
    • Liquidity Measures:
      • Required: shares (prices), volumes, capitalizations.
      • Optional: state variables.
    • Regime-Switching Measures:
      • Required: shares (any).
      • Optional: none.
    • Spillover Measures:
      • Required: shares (any).
      • Optional: none.
    • Tail Dependence Measures:
      • Required: shares (any).
      • Optional: state variables.
  • Firms whose time series values are constantly equal to 0 in the tail, for a span covering a customizable percentage of total observations (by default 5%), are considered to be defaulted. Firms whose Equity value is constantly negative in the tail, over the same kind of span, are considered to be insolvent. This allows the scripts to exclude them from computations from a certain point in time onward; defaulted firms are excluded from all the measures, while insolvent firms are excluded only from SCCA default measures.

  • Once a dataset has been parsed, the script stores its output in a .mat file; therefore, the parsing process is executed only during the first run. The script takes the file's last modification date into account, and the dataset is parsed again if the Excel spreadsheet is modified.

  • The dataset parsing process might present issues related to the version, bitness and regional settings of the OS, Excel and/or MATLAB. Due to the high number of users asking for help, support is no longer guaranteed; the guidelines below can help solve the majority of problems:

    • A bitness mismatch between the OS and Excel may cause errors that are difficult to track. Using the same bitness for both is recommended.
    • An Excel locale other than English may produce wrong outputs related to date formats, text values and numerical values with decimals and/or thousands separators. A locale switch is recommended.
    • Both Excel 2019 and Excel 365 may present compatibility issues with MATLAB versions prior to R2019b. In later versions, the built-in function readtable may still not handle some Excel spreadsheets properly. A downgrade to Excel 2016 is recommended.
    • Some Excel spreadsheets might contain empty but defined cells in columns or rows located far away from the area in which data is stored. Those cells extend the range being read by the parser, producing false positives when checking for missing values.
    • The dataset parsing process takes place inside the ScriptsDataset\parse_dataset.m function. It is important to check the correctness of the arguments being passed to the function call. Error messages thrown by the aforementioned function are pretty straightforward and a debugging session should be enough to find the underlying causes and fix datasets and/or internal functions accordingly.
    • If the dataset parsing process is too slow, the best way to speed it up is to provide a standard Excel spreadsheet (.xlsx) with no filters and styles, or a binary Excel spreadsheet (.xlsb).
  • Some scripts may take a very long time to finish in the presence of huge datasets and/or extreme parametrizations. The performance of calculations may vary depending on the CPU processing speed and the number of CPU cores available for parallel computing.
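The tail-based default rule described in the notes can be sketched roughly as follows (an assumption-based MATLAB illustration of the 5% tail check; this is not the framework's actual implementation):

```matlab
% Hypothetical sketch: flag a firm as defaulted when its time series is
% constantly equal to 0 in the tail, for a span covering at least the
% distress threshold percentage of total observations (default 5%).
threshold = 0.05;
x = [randn(950,1); zeros(50,1)];  % hypothetical series, flat at 0 in the tail

t = numel(x);
last_nonzero = find(x ~= 0,1,'last');

if isempty(last_nonzero)
    tail_span = t;                 % the series is 0 everywhere
else
    tail_span = t - last_nonzero;  % length of the trailing run of zeros
end

defaulted = (tail_span / t) >= threshold;
```

The insolvency check would work analogously, testing for a trailing run of negative Equity values instead of zeros.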

Example Datasets

The Datasets folder includes many premade datasets. The main one (Example_Large.xlsx), based on the US financial sector, defines the following entities and data over a period of time ranging from 2002 to 2019 (both included):

Benchmark Index: S&P 500

Financial Institutions (20):

  • Group 1: Insurance Companies (5)
    • American International Group Inc. (AIG)
    • The Allstate Corp. (ALL)
    • Berkshire Hathaway Inc. (BRK)
    • MetLife Inc. (MET)
    • Prudential Financial Inc. (PRU)
  • Group 2: Investment Banks (6)
    • Bank of America Corp. (BAC)
    • Citigroup Inc. (C)
    • The Goldman Sachs Group Inc. (GS)
    • J.P. Morgan Chase & Co. (JPM)
    • Lehman Brothers Holdings Inc. (LEH)
    • Morgan Stanley (MS)
  • Group 3: Commercial Banks (7)
    • American Express Co. (AXP)
    • Bank of New York Mellon Corp. (BK)
    • Capital One Financial Corp. (COF)
    • PNC Financial Services Inc. (PNC)
    • State Street Corp. (STT)
    • US Bancorp (USB)
    • Wells Fargo & Co. (WFC)
  • Group 4: Government-sponsored Enterprises (2)
    • Federal Home Loan Mortgage Corp / Freddie Mac (FMCC)
    • Federal National Mortgage Association / Fannie Mae (FNMA)

Risk-Free Rate: 3M Treasury Bill Rate

State Variables (8):

  • FFR: the effective federal funds rate.
  • TBILL_DELTA: the percent change in the 3M treasury bill rate.
  • CREDIT_SPREAD: the difference between the BAA corporate bond rate and the 10Y treasury bond rate.
  • LIQUIDITY_SPREAD: the difference between the 3M GC repo rate and the 3M treasury bill rate.
  • TED_SPREAD: the difference between the 3M USD LIBOR rate and the 3M treasury bill rate.
  • YIELD_SPREAD: the difference between the 10Y treasury bond rate and the 3M treasury bill rate.
  • DJ_CA_EXC: the excess returns of the DJ US Composite Average with respect to the S&P 500.
  • DJ_RESI_EXC: the excess returns of the DJ US Select Real Estate Securities Index with respect to the S&P 500.
  • VIX: the implied volatility index.

Screenshots


systemicrisk's People

Contributors

tommasobelluzzo


systemicrisk's Issues

Safeplot Issue

I've installed MATLAB Version 9.1.0.441655 (R2016b) and tried run.m again. Now the "parse_dataset" function works fine for "Example_Large.xlsx" after "Example_Large.mat" has been deleted.

When I set 'CrossSectional' ENABLED true, ANALYZE true, and COMPARE true, the following errors are reported:

Error using cellfun
Input #2 expected to be a cell array, was string instead.

Error in safe_plot>safe_plot_internal (line 50)
r = cellfun(@(x)[' ' x],r,'UniformOutput',false);

Error in safe_plot (line 18)
safe_plot_internal(ipr.handle);

Error in run_cross_sectional>analyze_result (line 322)
safe_plot(@(id)plot_correlations(ds,id));

Error in run_cross_sectional>run_cross_sectional_internal (line 164)
analyze_result(ds);

Error in run_cross_sectional (line 51)
[result,stopped] =
run_cross_sectional_internal(ds,sn,temp,out,k,d,car,sf,fr,analyze);

Error in
run>@(temp,out,analyze)run_cross_sectional(ds,sn,temp,out,0.95,0.40,0.08,0.40,3,analyze)

Error in run (line 121)
[result,stopped] = run_function(temp,out,analyze);

In safe_plot.m, the relevant code around line 50 is as follows:

function safe_plot_internal(handle)

persistent ids;

name = func2str(handle);
name = regexprep(name,'^@\([^)]*\)','');
name = regexprep(name,'\([^)]*\)$','');

try
    id = [upper(name) '-' upper(char(java.util.UUID.randomUUID()))];
catch
    id = randi([0 10000000]);
    
    while (ismember(id,ids))
        id = randi([0 100000]);
    end
    
    ids = [ids; id];
    id = [upper(name) '-' sprintf('%08s',num2str(id))];
end

try
    handle(id);
catch e
    delete(findobj('Type','Figure','Tag',id));
    
    r = getReport(e,'Extended','Hyperlinks','off');
    r = split(r,newline());
    r = cellfun(@(x)['  ' x],r,'UniformOutput',false);  %%% line 50 is here
    r = strrep(strjoin(r,newline()),filesep(),[filesep() filesep()]);

    warning('MATLAB:SystemicRisk',['The following exception occurred in the plotting function ''' name ''':' newline() r]);
end

end

I'm new to MATLAB and it's hard for me to understand this code completely in a short time. Does anyone have an idea about this issue? Thanks a lot.

Originally posted by @DF-18 in #12 (comment)

Question About Separate Accounts

Hello! In the Example_Large.xlsx of the Datasets folder, there is a "Separate Accounts" sheet. Can you tell me what "Separate Accounts" is? Thank you very much!

Question about CoVaR computation

Hi Tommaso,

Thanks for the amazing work, the toolbox is quite useful!
I'm really confused about the output of the CROSS-SECTIONAL MEASURES. The CoVaR computation requires coefficient estimates from quantile regression, b = quantile_regression(y,x,a); how can I get the estimation results for the coefficient "b"?

Best regards,
Guo

Typo in calculate_mes bandwidth

Within the calculate_mes function in the run_cross_sectional.m file, the current bandwidth for the kernel density estimator appears to be based on the Silverman "rule-of-thumb bandwidth estimator":

    r0_n = 4 / (3 * length(r0_m));
    r0_s = min([std(r0_m) (iqr(r0_m) / 1.349)]);
    h = r0_s * (r0_n ^ (-0.2));

However, I believe that the exponent is incorrect. I believe the formula for h should actually be:

    h = r0_s * (r0_n ^ (0.2));

See https://link.springer.com/chapter/10.1007/978-3-030-16272-6_3#Sec11 Section 5.5.

In addition, I believe that you may want to scale r0_s by the conditional market volatility, as everything else within the kernel density estimate is scaled. That is, I believe this line:

    r0_s = min([std(r0_m) (iqr(r0_m) / 1.349)]);

Should actually be:

    r0_s = min([std(r0_m ./ s_m) (iqr(r0_m ./ s_m) / 1.349)]);

When using actual P&L, if you don't scale r0_s by s_m, the value of h will be proportional to the standard deviation of the P&L. Practically, this means that the value of f will be approximately 0.5, as ((c ./ s_m) - u) divided by some very large number converges to approximately 0, since ((c ./ s_m) - u) should always be on a scale of approximately -10 to 10.
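The disagreement over the sign of the exponent can be checked numerically. In the quick sketch below (hypothetical data, not code from the framework), only the positive exponent reproduces the usual n^(-1/5) decay of the Silverman rule of thumb:

```matlab
% Compare the two bandwidth formulas discussed above on hypothetical data.
r0_m = randn(1000,1);

r0_n = 4 / (3 * length(r0_m));                % ~0.00133 for n = 1000
r0_s = min([std(r0_m) (iqr(r0_m) / 1.349)]);

h_current = r0_s * (r0_n ^ -0.2);  % current code: grows as n grows
h_rule = r0_s * (r0_n ^ 0.2);      % Silverman form: shrinks as n grows

% r0_n^0.2 = (4/3)^(1/5) * n^(-1/5) ~= 0.266 for n = 1000, matching the
% rule-of-thumb h ~= 1.06 * sigma * n^(-1/5); r0_n^-0.2 ~= 3.76 instead.
```

As the comments note, (4/(3n))^(1/5) is exactly the Silverman constant times n^(-1/5), so the positive exponent is the one consistent with the rule-of-thumb bandwidth.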

Question About LRMES Calculation

Hi, Tommaso,
Thank you for sharing your code.
In SystemicRisk/ScriptsProbabilistic/calculate_mes.m,
I noticed that you changed
lrmes = 1 - exp(-18 .* mes);
to
lrmes = 1 - exp(log(1 - d) .* beta_x);
Why? Is there any literature supporting this?
I am quite confused.
Thanks a lot.

Best,
Andy
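For context, my reading (to be confirmed against the V-Lab documentation referenced in the README) is that the first expression is a fixed approximation of the long-run MES from the daily MES alone, while the second ties LRMES to a market decline of d through the firm's beta, since 1 - exp(log(1 - d) .* beta_x) equals 1 - (1 - d)^beta_x. A hypothetical comparison of the two:

```matlab
% Hypothetical inputs; not code from the framework.
mes = 0.02;    % daily marginal expected shortfall
beta_x = 1.5;  % beta of the firm with respect to the market
d = 0.40;      % six-month market decline threshold (40%)

lrmes_fixed = 1 - exp(-18 .* mes);          % ~0.302: scale-based approximation
lrmes_beta = 1 - exp(log(1 - d) .* beta_x); % ~0.535: decline/beta-based form
```

Under the second form, LRMES responds to both the chosen crisis threshold d and the firm's systematic risk, rather than to the daily MES level alone.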

Dataset Parsing Compatibility Issue

Hi, Tommaso,
Thank you for sharing your code. Unfortunately, when I run the code (file run.m), I get some errors. MATLAB is not my usual software, I just want to implement something. I've run the code with different versions of MATLAB, including R2016b, R2014b and R2012a, and the error is the same.
At the beginning, I set the path to the folder where the code files are (the SR folder): cd 'C:\Matlab\SR'; after that, I run the 'run' file and get the following errors:

Undefined function or variable 'detectImportOptions'.
Error in parse_dataset>parse_dataset_internal (line 73)
opts = detectImportOptions(file,'Sheet',1);
Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);
Error in run (line 12)
data = parse_dataset(fullfile(path,'\Datasets\Example.xlsx'));

I have read that these errors are usually due to different versions of MATLAB, but unfortunately I couldn't solve the problem this way. Could you please take a look at these errors? Also, what MATLAB version do you run your code with?
Thank you very much.

Best,
Nicu

Does the cross-section part work for unbalanced panel data?

I have a dataset that is unbalanced panel data, which means each firm may not have the same time span.

For example, in the Shares sheet, based on the default dataset format, the time span is fixed. So in my unbalanced panel dataset, some cells for the closing price are null, because those firms have not yet been listed or have been delisted. Those cells are blank in the Excel file.

When I input this dataset, the following error occurred:

Error using parse_dataset>ensure_field_consistency (line 491)
The 'Shares' sheet contains invalid column types.

Error in parse_dataset>read_table (line 775)
tab = ensure_field_consistency(name,tab,i,output_vars{i},data_types{i},date_format_dt);

Error in parse_dataset>parse_table_standard (line 694)
tab = read_table(file,file_name,index,name,date_format,data_types);

Error in parse_dataset>parse_dataset_internal (line 65)
tab_shares = parse_table_standard(file,file_name,1,'Shares',date_format_base,[],[],true);

Error in parse_dataset (line 47)
ds = parse_dataset_internal(file,file_sheets,version,date_format_base,date_format_balance,shares_type,crises_type,distress_threshold);

Error in run (line 71)
ds = parse_dataset(file,ds_version,'dd/mm/yyyy','QQ yyyy','P','R',0.05);

It says some columns in the 'Shares' sheet are invalid.

I checked the data format, and all cells are numeric, except the header (1st row) and the dates (1st column), which is identical to the default dataset.

I can't figure out why this error occurred. Would you mind checking the dataset in the attachment?

BTW, I've tried replacing the blank cells with "0", and the error reported is the same as before.

Thank you very much.
inputdata.zip

Question about MES computation

Hi Tommaso,
In SystemicRisk/ScriptsModels/connectedness_metrics.m, in order to compute the MES, you wrote the following 2 lines of code; I didn't fully understand the reasoning behind them.

r0_n = 4 / (3 * length(rm_0));
...
h = r0_s * r0_n ^0.2;

Is this explained in any research paper or book?

Moreover, are you somehow replicating the following formula to obtain the ES and then compute the MES?

(formula image not shown)

I'm really confused.
Thank you,
Suzy

Run Error on Spillover Measures

When I run run.m in MATLAB R2018b on Windows 10, there is an error:

Error using parallel.FevalFuture/fetchNext (line 217)
The function evaluation completed with an error.

Error in run_spillover>run_spillover_internal (line 70)
[future_index,value] = fetchNext(futures);

Error in run_spillover (line 37)
result = run_spillover_internal(data,out_temp,out_file,ipr.bandwidth,ipr.lags,ipr.h,ipr.generalized,ipr.analyze);

Error in run (line 90)
result_spillover = run_spillover(data,out_temp_spillover,out_file_spillover,252,2,4,true,true);

Caused by:
Error using statmvnrobj (line 23)
Covariance matrix SIGMA is not positive-definite.

How to solve it?

Is there something wrong with the code, or am I running it in a wrong way?

Dear Tommaso,

I am trying to run the CrossSectional part of this project. Is it correct to edit run.m like this (see below)? BTW, nothing else changed.

    measures_setup = { % NAME ENABLED ANALYZE COMPARE FUNCTION
        'Component' false true true @(ds,temp,file,analyze)run_component(ds,temp,file,bw,0.99,0.98,0.05,0.2,0.75,analyze);
        'Connectedness' false true true @(ds,temp,file,analyze)run_connectedness(ds,temp,file,bw,0.05,false,0.06,analyze);
        'CrossEntropy' false true true @(ds,temp,file,analyze)run_cross_entropy(ds,temp,file,bw,'G',0.4,'W','N',analyze);
        'CrossQuantilogram' false true true @(ds,temp,file,analyze)run_cross_quantilogram(ds,temp,file,bw,0.05,60,'SB',0.05,100,analyze);
        'CrossSectional' true true true @(ds,temp,file,analyze)run_cross_sectional(ds,temp,file,0.95,0.40,0.08,0.40,3,analyze);
        'Default' false true true @(ds,temp,file,analyze)run_default(ds,temp,file,bw,'BSM',3,0.08,0.45,2,0.10,100,5,0.95,analyze);
        'Liquidity' false true true @(ds,temp,file,analyze)run_liquidity(ds,temp,file,bw,21,5,'B',500,0.01,0.0004,analyze);
        'RegimeSwitching' false true true @(ds,temp,file,analyze)run_regime_switching(ds,temp,file,true,true,true,analyze);
        'Spillover' false true true @(ds,temp,file,analyze)run_spillover(ds,temp,file,bw,10,'G',2,4,analyze);
    };

If the way I set it is correct, I'm confused about the error that MATLAB reported (see below):

Error using bivariate_caviar>bivariate_caviar_internal (line 78)
Too many outputs.

Error in bivariate_caviar (line 49)
[caviar,beta,ir_fm,ir_mf,se,stats] = bivariate_caviar_internal(r,a,cir,cse);

Error in run_cross_sectional>run_cross_sectional_internal (line 111)
[caviar,~,ir_fm,ir_mf] = bivariate_caviar(r_i,ds.A);

Error in run_cross_sectional (line 48)
[result,stopped] = run_cross_sectional_internal(ds,temp,out,k,d,car,sf,fr,analyze);

Error in run>@(ds,temp,file,analyze)run_cross_sectional(ds,temp,file,0.95,0.40,0.08,0.40,3,analyze)

Error in run (line 159)
[result,stopped] = run_function(ds,temp,out,analyze);

I'm new to MATLAB, so maybe I'm doing a stupid thing here.
How to fix the issue to get the results as expected?

CATFIN for every firm

Dear Tommaso,

Is it possible to adjust the code to display the results in an Excel file for CATFIN, Absorption Ratio and Turbulence Index for every firm individually, rather than the average for the whole sample?

Thanks!

The set of parameters

Dear Tommaso,

May I ask some questions about the parameters setting?

The description of the input parameters in run_cross_sectional.m is as follows.

% [INPUT]
% ds = A structure representing the dataset.
% sn = A string representing the serial number of the result file.
% temp = A string representing the full path to the Excel spreadsheet used as template for the result file.
% out = A string representing the full path to the Excel spreadsheet to which the results are written, eventually replacing the previous ones.
% k = A float [0.90,0.99] representing the confidence level (optional, default=0.95).
% d = A float [0.1,0.6] representing the six-month crisis threshold for the market index decline used to calculate the LRMES (optional, default=0.4).
% car = A float [0.03,0.20] representing the capital adequacy ratio used to calculate SES and SRISK (optional, default=0.08).
% sf = A float [0,1] representing the fraction of separate accounts, if available, to include in liabilities and used to calculate SES and SRISK (optional, default=0.40).
% fr = An integer [0,6] representing the number of months of forward-rolling used to calculate the SRISK, simulating the difficulty of renegotiating debt in case of financial distress (optional, default=3).
% analyze = A boolean that indicates whether to analyse the results and display plots (optional, default=false).
%

The two questions related are:

  1. Brownlees and Engle (2016) assumed the liabilities of the firm are non-negotiable. Under this assumption, does it mean the 'fr' parameter should be set to zero?

  2. Since 'd' represents the six-month crisis threshold, how could I change the horizon from six months to, say, one month or three months?

Thanks a lot.

Question about the function 'read_table'.

Dear Tommaso,

May I ask a question about the function 'read_table'?

I found that the function does not exist in my MATLAB R2020b. I checked the guidelines on the main page, and the following products and toolboxes are all installed:

  • Computer Vision Toolbox
  • Curve Fitting Toolbox
  • Econometrics Toolbox
  • Financial Toolbox
  • Image Processing Toolbox
  • Optimization Toolbox
  • Parallel Computing Toolbox
  • Statistics and Machine Learning Toolbox
  • System Identification Toolbox

Could you please tell me where the source of this function is? I would appreciate it a lot if you could give me a hand. Thank you very much.
