Comments (6)

chenwilliam77 commented on May 14, 2024
  1. The setting :n_mh_blocks actually refers to the number of HDF5 blocks to be used, for memory reasons. The setting :n_mh_param_blocks is probably the correct equivalent, since I presume you're comparing parameter blocking. Furthermore, the setting :n_mh_simulations is the number of draws per block. Just making sure this is clear. You can check lines 116-136 in DSGE/src/defaults.jl for descriptions of the Metropolis-Hastings settings.

  2. I'm not sure why you're getting that error when using n_mh_simulations = 1250000, but I suspect it's due to memory problems.

  3. What data are you using? The data vintage you're using does not seem to produce a working dataset for me.

  4. I tried estimating your model. First, the rejection rate seems quite high given your choice of mh_cc (I get about 80-90%). Lowering mh_cc to .05 produces a rejection rate of 31% on average (see the proposal-step sketch below for why mh_cc has this effect). Second, I get a time of roughly 1-1.2 min per 10,000 draws (1,000 draws per block for 10 blocks, where "block" here is a block of the HDF5 file being created), which would imply 2-2.5 hours for a Metropolis-Hastings estimation yielding 1.25 million draws (though it is possible that my machine is faster than yours).
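
For intuition: in standard random-walk Metropolis-Hastings (and, as I understand it, in DSGE.jl as well), mh_cc scales the proposal covariance, so a smaller mh_cc means smaller steps and a higher acceptance rate. A minimal sketch of that proposal step, with illustrative names rather than DSGE.jl internals:

using LinearAlgebra

# Propose a new parameter vector; mh_cc scales the step size, so
# a larger mh_cc means bigger jumps and hence more rejections.
function rw_propose(θ::Vector{Float64}, Σ::Matrix{Float64}, mh_cc::Real)
    θ .+ mh_cc .* (cholesky(Symmetric(Σ)).L * randn(length(θ)))
end

rw_propose(zeros(3), Matrix(1.0I, 3, 3), 0.05)  # one small-step proposal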

I then also started an estimation of 10,000 draws per block, for 125 blocks, to get 1.25 million total draws (inclusive of draws that would be burned). My estimated runtime is 2 hours and 1 minute (takes about 1 minute per block).
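
For reference, the runtime bookkeeping behind that estimate is just:

n_mh_blocks = 125
minutes_per_block = 1.0                          # roughly, from the timing above
total_minutes = n_mh_blocks * minutes_per_block  # ≈ 125 minutes, about 2 hours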

My estimation script (after adding your Gabaix model to my local version of DSGE.jl):

using DSGE, ModelConstructors
m = Gabaix()
m <= Setting(:data_vintage, "181115")
m <= Setting(:date_forecast_start, quartertodate("2007-Q4"))
m <= Setting(:n_mh_simulations, 1000)  # saved draws per HDF5 block
m <= Setting(:n_mh_blocks, 10)         # number of HDF5 blocks
m <= Setting(:mh_thin, 4)              # keep every fourth draw
m <= Setting(:mh_cc, .05)              # proposal scale; lower => higher acceptance
df = load_data(m; try_disk = false, summary_statistics = :none)
data = df_to_matrix(m, df)
estimate(m, data)
  1. It may be helpful if you could run:
using DSGE, BenchmarkTools
m = Gabaix()
m <= Setting(:data_vintage, "181115")
m <= Setting(:date_forecast_start, quartertodate("2007-Q4"))
df = load_data(m; try_disk = false, summary_statistics = :none)
data = df_to_matrix(m, df)

@btime begin
    posterior(m, data)
end

I get 3.33 ms for the time it takes to calculate the log posterior once. This will help us get a sense of the difference in machine speed.

chenwilliam77 commented on May 14, 2024
  1. So you want to estimate the model on data from 1959:Q1 to 1981:Q4, with a forecast starting in 1982:Q1, correct? Note that after transformations, the data matrix seen by the estimation will only range from 1959:Q3 to 1981:Q4, since the first two quarters are used to calculate growth rates (e.g. the GDP level is transformed to GDP growth). Just making sure you're setting up the data correctly.

  2. Can you post your entire estimation script? I just want to be certain that any differences in timing between my results and yours are due to machine differences. I am mildly surprised that your machine is twice as slow when the calculation time of the posterior is almost identical.

  3. There is "naive" automatic parameter blocking available. By default, we have

m <= Setting(:n_mh_param_blocks, 1)

meaning there is only one parameter block, as is standard for random-walk MH. However, it is known that blocking parameters can improve sampling efficiency. Since the FRBNY DSGE team has mostly moved toward using our SMC algorithm, we only got around to implementing a very naive parameter blocking scheme, which randomly allocates parameters into :n_mh_param_blocks blocks (see generate_param_blocks) and updates each block on its own. So if you set

m <= Setting(:n_mh_param_blocks, 5)

our MH algorithm will randomly split the parameters of your model into 5 blocks during the setup of the algorithm and will block-update during sampling.
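
To make the scheme concrete, here is a minimal sketch of naive random blocking, assuming parameters are identified by their indices (the actual generate_param_blocks in DSGE.jl may differ in its details):

using Random

# Randomly permute the parameter indices, then deal them out
# into n_blocks roughly equal-sized groups.
function naive_param_blocks(n_params::Int, n_blocks::Int)
    perm = randperm(n_params)
    [perm[i:n_blocks:end] for i in 1:n_blocks]
end

naive_param_blocks(20, 5)  # e.g. 20 free parameters split into 5 blocks of 4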

caioodantas commented on May 14, 2024

Thanks again, William.

The start date really didn't make sense. While the tests below were done using :date_forecast_start, quartertodate("2007-Q4"), the period I'm interested in actually requires :date_forecast_start, quartertodate("1982-Q1").

In the speed test you suggested, I got 3.38 ms, so the remaining runtime difference might be due to our machines.

Using
m <= Setting(:n_mh_simulations, 1000)
m <= Setting(:n_mh_blocks, 10)
I got 117 seconds, so it would take about 4 hours to complete the 1.25 million draws. Using 1982 as the starting date I get 137 seconds.

I didn't really get your point about :n_mh_param_blocks. Is there a function for automatic parameter blocking, or should I just try multiple values out?

caioodantas commented on May 14, 2024

I did get a bit confused with the date specification, but the period I'm interested in is actually 1982:Q1 to 2007:Q4, so a bit longer than the one I was using before. Nevertheless, using 125 blocks of 10,000 draws I got it to run in 118 minutes.

I tried using m <= Setting(:n_mh_param_blocks, 5), but it ended up slowing the process down. I might try other values later.

The script I'm using now is:

using DSGE, ModelConstructors
m = XGabaix()
m <= Setting(:date_presample_start, quartertodate("1982-Q1"))
m <= Setting(:date_forecast_start, quartertodate("2008-Q1"))
m <= Setting(:n_mh_simulations, 10000)
m <= Setting(:n_mh_blocks, 125)
m <= Setting(:mh_thin, 4)
m <= Setting(:mh_cc, .06)
m <= Setting(:use_population_forecast, false)
@time begin
df = load_data(m, try_disk = false, check_empty_columns = false, summary_statistics = :none)
data = df_to_matrix(m, df)
estimate(m, data)
end

Strangely, I'm not getting rejection rates as high as you reported with mh_cc = .06.

chenwilliam77 commented on May 14, 2024
  1. The mh_cc you originally reported was 0.6 (see your first post). An mh_cc of .06 is close to the one I ended up using (.05).

  2. Your script looks fine. Glad to hear the estimation finished in 2 hours (and is thus faster than Dynare).

  3. I would assume that parameter blocking will generally slow down the estimation, because for each block draw you need to evaluate the likelihood. So one sample from the posterior with 5 blocks involves 5 likelihood evaluations. The gains from parameter blocking are in terms of efficiency: for the same number of draws, you're more likely to get a higher effective sample size per draw with blocking than without it. The gain in runtime from parameter blocking comes from the fact that you can generally run MH for shorter chains due to the higher effective sample size per draw. I believe MCMCDiagnostics.jl should have some useful tools for assessing ESS (a quick sketch follows below). A relevant paper about blocking is this one.
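
For example, a quick way to compare ESS per draw across blocking schemes, assuming MCMCDiagnostics.jl's effective_sample_size works on a vector of draws (check the package docs for the exact API):

using MCMCDiagnostics

chain = randn(10_000)                # stand-in for one parameter's MH draws
ess = effective_sample_size(chain)   # effective number of independent draws
ess_per_draw = ess / length(chain)   # compare this across blocking schemes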

chenwilliam77 commented on May 14, 2024

Hi caioodantas,

I just remembered another feature of the DSGE.jl code which suggests our code may be even faster than Dynare. I don't know how Dynare's estimation code works since I haven't looked into it, but the number of simulations (i.e. n_mh_simulations) in DSGE.jl is the number of draws post-thinning. In other words, if you specify mh_thin = 4 as well as n_mh_simulations = 10000, then under the hood we are actually taking 40,000 draws per block and saving only every fourth draw. So if the Dynare run that took 3 hours does not thin during the algorithm itself (i.e. it did not take a total of 4 * 1,250,000 = 5,000,000 draws to produce 1,250,000 saved ones), then the DSGE.jl code is even faster compared to Dynare.
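
In code, that bookkeeping works out to:

mh_thin = 4
n_mh_simulations = 10_000                     # saved draws per block
n_mh_blocks = 125
saved_draws = n_mh_simulations * n_mh_blocks  # 1,250,000 draws kept
proposed_draws = mh_thin * saved_draws        # 5,000,000 draws actually taken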
