
Fan Chart Actual and FC Data

Hi Gareth - thanks for your reply. I am a bit confused by this, however. How is this possible, given that the graph is being taken from the MC tab, which will just have the 10,000 (or however many were specified) simulated values? How would it be possible to draw in another historical series from a different tab?

Thanks again.



Excel Add-in

1/ Ideally, I would expect the description to appear above the name of the series in Excel (attached pic).
But, unfortunately, I have no clue about compatibility issues with Excel data types.
Appending the description to the name of the series?! I think users would prefer to keep series names separate from descriptions. I believe the description of a series is a very useful feature for end users, who usually don't care about the names EViews users exploit in programming...

2/ And one more minor issue: the Excel Add-in imports the ID series in "General" format. It would be nice to have @DATE set, e.g., to the Excel number category "Date" (which applies the date type from the user's default regional settings).


Excel Add-in

startz wrote:Steve,

Also consider a second sheet that serves as a data dictionary with first column names and second column description. That's a pretty common format.


This is an interesting solution. I suppose the question is: is it possible to automate this using the Excel Add-in?!
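For illustration, something close to that dictionary sheet can be built from the EViews side with a short program rather than the Add-in itself. A rough sketch, assuming the workfile series carry descriptions and saving the dictionary to CSV (object and file names here are made up):

    ' build a name/description dictionary table from all series in the workfile
    %serlist = @wlookup("*", "series")
    !n = @wcount(%serlist)
    table(!n + 1, 2) dict
    dict(1, 1) = "Name"
    dict(1, 2) = "Description"
    !row = 2
    for %s {%serlist}
        dict(!row, 1) = %s
        dict(!row, 2) = {%s}.@description
        !row = !row + 1
    next
    dict.save(t=csv) data_dictionary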

Thanks


FAVAR add-in

Hi Dakila,

thanks again for your feedback. I really appreciate being able to get input from you on this.

Let me briefly cite BBE (2005, p. 404):
In particular, we define two categories of information variables: "slow-moving" and "fast-moving." Slow moving variables (think of wages or spending) are assumed not to respond contemporaneously to unanticipated changes in monetary policy. In contrast, fast-moving variables (think of asset prices) are allowed to respond contemporaneously to policy shocks.


From my perspective, the definition of slow-moving variables quoted above is exactly a description of the impulse response behaviour of those variables, namely that they are not supposed to react to an unanticipated shock in the policy rate within the same period. Therefore, I believe that the IRFs of those variables should - by definition - start at zero. That is how I interpret BBE's statement.
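To fix notation, the observation equation we are discussing (equation (2) in BBE, as I read it) relates the informational series to the factors and the observed policy variables,

$$X_t = \Lambda^f F_t + \Lambda^y Y_t + e_t,$$

and the restriction in question is that the row of $\Lambda^y$ belonging to a slow-moving series is zero, so that this series cannot load contemporaneously on the policy rate contained in $Y_t$.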

To give an example: Blake, Mumtaz and Rummel ensure this in their FAVAR EViews tutorial by estimating the observation equation (2) both without the policy rate (for slow-moving variables) and with the policy rate (for fast-moving variables) --> see step 9 right here:
https://cmi.comesa.int/wp-content/uploads/2016/03/Ole-Rummel-13-Feb-Exercise-on-factor-augmented-VARs-EMF-EAC-9-13-February-2015.pdf

The alternative option would be to estimate the observation equation in a single step, without taking the distinction between slow- and fast-moving variables into account. Then we obtain non-zero loadings on the policy rate and, consequently, IRFs of slow-moving variables that start above or below zero. Technically, those two options are quite similar; however, I strongly believe that only the one I mentioned first leads to the shock behaviour BBE describe in the statement above.
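In EViews terms, the difference between the two options comes down to whether the policy rate appears as a regressor in the observation equation of a slow-moving series. A rough sketch, assuming estimated factors F1-F3 and a policy rate series FFR already exist in the workfile (all names here are hypothetical):

    ' option 1: slow-moving series loads on the factors only (loading on FFR restricted to zero)
    equation eq_slow.ls x_slow c f1 f2 f3
    ' option 1: fast-moving series also loads on the policy rate
    equation eq_fast.ls x_fast c f1 f2 f3 ffr

    ' option 2 (the alternative): every series loads on the factors and the policy rate alike
    equation eq_alt.ls x_slow c f1 f2 f3 ffr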

Why do you think that the assumption regarding the loading matrix is debatable?

Again, thank you very much in advance for your feedback.

Best regards
Markus


Fan Chart Actual and FC Data

Say the simulations run from 2018-2020, and you have historical data from 2000-2018.

The simulations page should run from 2000-2020 with K cross-sections. Copy the historical series into the panel page, and then copy the simulations into the panel page too. The simulation series will then have constant values for 2000-2018 and different values for each cross-section from 2018 onwards.
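A rough sketch of that workflow, assuming annual data, 1,000 simulated paths, and that the MC page stores the simulations as a series with one cross-section per repetition (all page and series names below are made up):

    ' annual panel page from 2000-2020 with 1000 cross-sections
    pagecreate(page=fanpanel) a 2000 2020 1000
    ' historical series: its values repeat across cross-sections
    copy history\hist fanpanel\hist
    ' simulated series: varies across cross-sections from 2018 onwards
    copy mc\sim fanpanel\sim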


Maximum number of iterations exceeded error

I tried an SVAR identification using five variables (government spending, tax, GDP, CPI and the real interest rate) with 1990-2017 time series data. When I ran the VAR it prompted a "Maximum number of iterations exceeded" error. I then increased the maximum iterations and ran it again. This time it prompted an "Optimization may be unreliable" error. Can anyone help me sort this out? I have attached the workfile and data set here.

Error codes:
1. Maximum number of iterations exceeded in "FREEZE (TABLE1) SGVAR1.SVAR(RTYPE=PATSR,NAMEA=PATA,NAMEB=PATB, F0=U)"
2. Optimization may be unreliable (first or second order conditions not met) in "FREEZE(TABLE1) SGVAR1.SVAR (RTYPE=PATSR,NAMEA=PATA,NAMEB=PATB, F0=U)".



Maximum number of iterations exceeded error

Hello,

It's always good to try different types of initial values to see if one more effectively leads to convergence. The default normal distribution appears to work with your data (remove the "F0=U" option from the svar command).
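In other words, using the command from the error message, the call without the F0=U option would be:

    freeze(table1) sgvar1.svar(rtype=patsr, namea=pata, nameb=patb)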



DCC GARCH with exogenous variables

Hi all,

Is there any way to include exogenous variables in a DCC model? I'm using the dccgarch11 program, but apparently it only allows exogenous variables in the mean and variance equations.

Thank you in advance.

Best,

Luis


spreadsheet commands in tables

Is there a way to use Excel-style commands to edit the data in spreadsheets?

The specific task I am trying to do is manipulate a time series add factor. Let's say that I have a target that I want my model to definitely hit in the next couple of periods, so I am manually adjusting the add factor. However, for whatever reason, this add factor is not constant over the rest of the periods, so I can't just copy and paste a constant series. Is there a quick way in EViews to move the add factor down by 0.5 over all time periods? Or is there a quick way to make the add factor go down by 0.5 in a step over each time period? Thanks.
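If a small program (rather than the spreadsheet view) is acceptable, both adjustments can be done with series expressions. A rough sketch, assuming the add factor is a series called GDP_A and the adjustment starts in 2020Q1 (both names are invented for illustration):

    ' shift the whole add factor down by 0.5
    smpl @all
    series gdp_a = gdp_a - 0.5

    ' or, instead, step it down by a further 0.5 each period from 2020Q1 onwards
    smpl 2020q1 @last
    series gdp_a = gdp_a - 0.5*(@trend("2020q1") + 1)
    smpl @all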



Minimize workfile window

Hi Steve

Any progress on how to programmatically minimize and maximize all EViews objects?
Thomas


Imposing parameter restrictions on state space models

Thanks a lot Glenn. Always very helpful.
S.


Minimize workfile window

Yes, the following changes will be in the next patch for EViews 10 & 11...

New Commands:
    WINMAXIMIZE
    WINMINIMIZE
    WINNORMAL
    WINRESTORE
    WINCLOSE
These new commands basically behave like the current CLOSE command, taking all the same arguments such as @all, @objects, @prg, @db, and @wf. They can also take a named window.

Note: If multiple windows have the same name, all of them will be affected.

Also, the WINCLOSE command was created just for completeness. It just calls CLOSE.
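For example (the object name here is made up):

    winminimize @objects     ' minimize all open object windows
    winmaximize gdp          ' maximize the window named GDP
    winrestore @all          ' restore all windows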

Updated SHOW command:
Since it would be a bit wordy to call SHOW and then WINMAXIMIZE to display a series in maximized view, I added some new options to the SHOW command to do it all in one step:

    SHOW(max|min|normal|restore) ..argument(s)...
When no option is specified, and the specified object is already open and minimized, SHOW will now restore the window to its previous state.
The max option maximizes the window.
The min option minimizes the window.
The normal option makes the window normally sized (neither maximized nor minimized) from any current state.
The restore option reverts a minimized window to its previous state, or a maximized window back to normal, and does nothing if the window is already normal.
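So, for instance (the series name is hypothetical):

    show(max) gdp     ' open the series GDP in a maximized window
    show(min) gdp     ' minimize it
    show gdp          ' bring it back to its previous state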

Note: the SHOW command still doesn't work on workfile windows.

Steve



Minimize workfile window

great Steve, thanks, it is highly appreciated!
Thomas


FAVAR add-in

If lambda_y is equal to zero for the slow-moving variables, then you will miss not just the impact period but the effects of the FFR (federal funds rate) in all periods. That does not make sense.


Unit specific time averages

Hi Glenn,

I have the function below, which I used some time ago, and now I am not sure what it calculates.

The function is:
Series A = (@meansby(Series X, Series Y, Series Z ,"2003 2018"))

It will be very helpful if someone can reply.

Regards
Masum


bandwidth and the number of lags

I would like to know how I can infer the number of lags used for the bandwidth associated with the (long-run) variance estimator.

In running a system with GMM, I get the following output:

Included observations: 247                
Total system (balanced) observations 3458                
Kernel: Bartlett, Bandwidth: Variable Newey-West (2), No prewhitening                
Simultaneous weighting matrix & coefficient iteration                
Convergence achieved after: 1 weight matrix, 2 total coef iterations                

Can you tell me if I can infer the number of lags from the bandwidth (Variable Newey-West (2))?
Is 2 the number of lags?

For additional information: if I use the NW fixed option with 247 observations, I get the following, where the bandwidth is 5.
Extracting the number of lags from the formula available here is rather complex: http://www.eviews.com/help/helpintro.ht ... 23ww155429

Included observations: 247                
Total system (balanced) observations 3458                
Kernel: Bartlett, Bandwidth: Fixed (5), No prewhitening                
Simultaneous weighting matrix & coefficient iteration                
Convergence achieved after: 1 weight matrix, 2 total coef iterations


bandwidth and the number of lags

Yes, 2 is the number of selected lags.



