↧
Fan Chart Actual and FC Data
↧
Excel Add-in
1/ Ideally, I would expect a description to appear above the name of the series in Excel (attached pic).
But, unfortunately, I have no clue about compatibility issues with Excel data types.
Appending the description to the name of the series?! I think users would prefer to keep series names separate from descriptions. I believe the description of a series is a very useful feature for end-users, who usually don't care about the names EViews users rely on in programming...
2/ And one more minor issue: the Excel Add-in imports the ID series in "General" format. It would be nice to set @DATE to the Excel number category "Date" (which formats dates according to the user's default regional settings).
↧
↧
Excel Add-in
↧
FAVAR add-in
Hi Dakila,
thanks again for your feedback. I really appreciate the opportunity to get your input.
Let me briefly quote BBE2005 (p. 404):
In particular, we define two categories of information variables: "slow-moving" and "fast-moving." Slow moving variables (think of wages or spending) are assumed not to respond contemporaneously to unanticipated changes in monetary policy. In contrast, fast-moving variables (think of asset prices) are allowed to respond contemporaneously to policy shocks.
From my perspective, the quoted definition of slow-moving variables is exactly a description of the impulse response behaviour of those variables: they are not supposed to react to an unanticipated shock in the policy rate within the same period. Therefore, I believe that the IRFs of those variables should, by definition, start at zero. That is how I interpret BBE's statement.
To give an example: Blake, Mumtaz and Rummel ensure this in their FAVAR EViews tutorial by estimating the observation equation (2) twice, without the policy rate for slow-moving variables and with the policy rate for fast-moving variables; see step 9 here:
https://cmi.comesa.int/wp-content/uploads/2016/03/Ole-Rummel-13-Feb-Exercise-on-factor-augmented-VARs-EMF-EAC-9-13-February-2015.pdf
The alternative would be to estimate the observation equation in a single step, without taking the distinction between slow- and fast-moving variables into account. Then we obtain non-zero loadings on the policy rate and, consequently, IRFs of slow-moving variables that start above or below zero. Technically, the two options are quite similar; however, I strongly believe that only the first one produces the shock behaviour BBE describe in the statement quoted above.
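To make the two-step idea concrete in EViews terms, here is a minimal sketch; the series names (IP as a slow-moving variable, SP500 as a fast-moving one, FFR as the policy rate) and the estimated factors F1-F3 are just placeholders:
' slow-moving variable: the contemporaneous loading on the policy rate is restricted to zero
equation eq_slow.ls ip c f1 f2 f3
' fast-moving variable: the contemporaneous loading on the policy rate is left unrestricted
equation eq_fast.ls sp500 c f1 f2 f3 ffr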
Why do you think that the assumption regarding the loading matrix is debatable?
Again, thank you very much in advance for your feedback.
Best regards
Markus
↧
Fan Chart Actual and FC Data
Say the simulations run from 2018-2020, and you have historical data from 2000-2018.
The panel page should run from 2000-2020 with K cross-sections. Copy the historical series into the panel page, and then copy the simulations into it as well. The simulation series will then have constant values for 2000-2018 and different values for each cross-section from 2018 onwards.
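In commands, a rough sketch of the copying step; the page names (HIST, SIMS, PANEL) and the series name GDP are assumptions about your workfile:
pageselect panel
copy hist\gdp gdp          ' a time series match-merges by date, so history repeats across the K cross-sections
copy sims\gdp_sim gdp_sim  ' the simulated series keeps its cross-section-specific values
The exact conversion behaviour depends on your page structures, so check the copied series afterwards.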
↧
↧
Maximum number of iterations exceeded error
I tried an SVAR identification using five variables (government spending, tax, GDP, CPI and the real interest rate) with 1990-2017 time series data. When I ran the SVAR, it prompted a "Maximum number of iterations exceeded" error. Then I increased the maximum iterations and ran it again. This time, it prompted an "Optimization may be unreliable" error. Can anyone help me sort this out? I have attached the workfile and data set here.
Error codes:
1. Maximum number of iterations exceeded in "FREEZE (TABLE1) SGVAR1.SVAR(RTYPE=PATSR,NAMEA=PATA,NAMEB=PATB, F0=U)"
2. Optimization may be unreliable (first or second order conditions not met) in "FREEZE(TABLE1) SGVAR1.SVAR (RTYPE=PATSR,NAMEA=PATA,NAMEB=PATB, F0=U)".
↧
FAVAR add-in
Hi Dakila,
I just came across a handbook by Andrew Blake and Haroon Mumtaz (https://www.bankofengland.co.uk/ccbs/applied-bayesian-econometrics-for-central-bankers-updated-2017) that supports my argument regarding the zeros in the loadings matrix for slow-moving variables:
This procedure ensures zeros in the fourth column of the loadings matrix.
I hope this enriches our further discussion.
Best regards
Markus
↧
Maximum number of iterations exceeded error
↧
DCC GARCH with exogenous variables
↧
↧
spreadsheet commands in tables
Is there a way to use Excel-style commands to edit the data in spreadsheets?
The specific task I am trying to do is manipulate a time series add factor. Let's say that I have a target that I want my model to definitely hit in the next couple of periods, so I am manually adjusting the add factor. However, for whatever reason, this add factor is not constant over the rest of the periods, so I can't just copy and paste a constant series. Is there a quick way in EViews to move the add factor down by 0.5 over all time periods? Or is there a quick way to make the add factor go down by 0.5 in a step over each time period? Thanks.
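I assume something like the following would work in commands (the add factor name AF and the dates are made up), but I was hoping for a spreadsheet-style shortcut:
smpl @all
af = af - 0.5                  ' shift the whole add factor down by 0.5
' or, a cumulative step of -0.5 per period starting in 2020Q1:
smpl 2020q1 @last
af = af - 0.5*@trend("2019q4")
smpl @all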
↧
spreadsheet commands in tables
↧
Wavelet Transform
EViews Mirza wrote: We've made an add-in as a series proc that can do just this! :D You can download the add-in here:
http://www.eviews.com/Addins/addins.shtml
Respected sir, could you please elaborate on how to use it after installation?
↧
Minimize workfile window
↧
↧
Imposing parameter restrictions on state space models
↧
Minimize workfile window
Yes, the following changes will be in the next patch for EViews 10 & 11...
New Commands:
- WINMAXIMIZE
- WINMINIMIZE
- WINNORMAL
- WINRESTORE
- WINCLOSE
Note: If multiple windows have the same name, all of them will be affected.
Also, the WINCLOSE command was created just for completeness. It just calls CLOSE.
Updated SHOW command:
Since it would be a bit wordy to call SHOW, and then WINMAXIMIZE to display a series in maximized view, I added some new options to the SHOW command to do it all in one step:
- SHOW(max|min|normal|restore) ...argument(s)...
- max: makes the window maximized
- min: makes the window minimized
- normal: makes the window normally sized (neither maximized nor minimized) from any current state
- restore: reverts a minimized window to its previous state, returns a maximized window to normal, or does nothing if the window is already normal
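For example, a quick sketch (assuming a series named GDP exists in the workfile):
show(max) gdp      ' display GDP in a maximized window
winminimize gdp    ' minimize that window
winrestore gdp     ' bring it back to its previous size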
Note: the SHOW command still doesn't work on workfile windows.
Steve
↧
Minimize workfile window
↧
FAVAR add-in
↧
↧
Unit specific time averages
↧
bandwidth and the number of lags
I would like to know how I can infer the number of lags used from the bandwidth reported for the (long-run) variance estimator.
In running a system with GMM, I get the following output:
Included observations: 247
Total system (balanced) observations 3458
Kernel: Bartlett, Bandwidth: Variable Newey-West (2), No prewhitening
Simultaneous weighting matrix & coefficient iteration
Convergence achieved after: 1 weight matrix, 2 total coef iterations
Can you tell me whether I can infer the number of lags from the bandwidth (Variable Newey-West (2))?
Is 2 the number of lags?
For additional information: if I use the NW fixed option with 247 observations, I get the following, where the bandwidth is 5.
Extracting the number of lags from the formula available here is rather involved: http://www.eviews.com/help/helpintro.ht ... 23ww155429
Included observations: 247
Total system (balanced) observations 3458
Kernel: Bartlett, Bandwidth: Fixed (5), No prewhitening
Simultaneous weighting matrix & coefficient iteration
Convergence achieved after: 1 weight matrix, 2 total coef iterations
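For reference, my own attempt at the arithmetic, based on my reading of the help page above (so please correct me if this is wrong): with the Bartlett kernel the autocovariance at lag j gets weight 1 - j/b, so a bandwidth of b puts nonzero weight on lags 1 through b-1. If the fixed Newey-West rule is b = floor(4*(T/100)^(2/9)) + 1, then T = 247 gives floor(4*(2.47)^(2/9)) + 1 = floor(4.89) + 1 = 5, which matches the reported Fixed (5) and would mean 4 lags actually enter. By the same logic, the variable Newey-West bandwidth of 2 would put nonzero weight on lag 1 only.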
↧
bandwidth and the number of lags
↧