Channel: EViews
Viewing all 24088 articles

Fetching Forecast Data From Datastream

Let us look into it. If, however, you use the Zappy servers, you do get the forecast data:
Code:
dbopen(t=datastreamxml)
fetch WDWOGDPDA


Alternatively, you can manually fetch the data
Code:
fetch WDWOGDPDA~1980-01-01~:2020-01-01




Incorporating new forecasted values into estimations

You might want to think about the econometrics behind what you're doing. Using forecasted values of Y to re-estimate the equation and generate new forecasts accomplishes nothing: every fitted point lies exactly on the original regression line, so re-estimating on the augmented data returns the same coefficients, and hence the same forecasts as if you had just used the original data.

Proof:
Code:

create m 1990 2020
series x1=nrnd
series x2=nrnd
smpl 1990 2015
series y=nrnd

equation eq1.ls y c x1 x2
smpl 2016 @last
eq1.fit yf

' fill yf with the actual values of y pre-2016, so eq2 can be estimated over 1990-2016
smpl 1990 2015
yf = y
smpl 1990 2016
equation eq2.ls yf c x1 x2
smpl 2017 @last
eq2.fit yf2

show yf yf2



Does such a tool exist

Hello, I would like to see if there is a programme for this:

I am looking for an econometric forecasting tool that measures the reaction to impulses as a forecast. Simply stated, in my work I use a function called the Orthogonalised Impulse Response Function (OIRF), where a one-standard-deviation economic shock, for example, is induced into a system, and the reaction to this shock is measured in terms of its magnitude and duration. For example, if I wish to trace the path of an economic demand shock (independent variable) to the exchange rate (dependent variable), I induce a one-standard-deviation positive demand shock and then trace the magnitude and duration of the country's exchange rate response (to test, for example, its flexibility).

My question is: is there a well-developed econometric software package where I could perform the same procedure, but from a forecasting perspective? Using a very simple example: I wish to measure the future movement in the UK's exchange rate when the interest rate increases. In this case, I would induce a positive interest rate shock (independent variable) and measure the future values of the exchange rate (dependent variable), in terms of the percentage deviation from zero (the start of the induced shock) and the length of time it takes for the effect of the shock to die out. Can this be done? Naturally, I do this with retrospective data; I am looking to perform this with real-time data. Is there such a forecasting procedure? Your comments will be very gratefully received.


Fetching Forecast Data From Datastream

We finally moved off Zappy (which is still giving the forecast part without any problems).

It's dataworks which is being a pain...


VAR Model

Johansen procedure

Post this question on stack.exchange.com or other forums where econometricians can easily help you, and read the relevant chapters of any time-series book for a good answer.


KPSS Test Output Interpretation

Hasna wrote:So in summary we can say: if the Kwiatkowski-Phillips-Schmidt-Shin test statistic value is less than (<) any of the critical values below, then we would accept the null, i.e. the variable is stationary.

Asymptotic critical values*:
     1% level      0.216000
     5% level      0.146000
     10% level     0.119000

Please answer.


If the estimated test statistic is less than the critical values tabulated above, then yes, you fail to reject the null of stationarity.
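As a quick illustration (a sketch with a hypothetical series named x; replace it with your own variable), the KPSS test can be run from the command line and the reported statistic compared against these tabulated critical values:

Code:
' hypothetical example: white noise is stationary by construction
wfcreate u 200
series x = nrnd
' KPSS test with an intercept in the test equation
x.uroot(kpss, const)

Since x here is white noise, the statistic should typically fall below the critical values, so the null of stationarity is not rejected.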


Fetching Forecast Data From Datastream

It doesn't appear to be on the EViews side. I've sent an email to Thomson to see what needs to be done.



Probit Visualization

I was just using randomly generated data. However, if it's of any help, here is the link to the full lecture.

http://www.columbia.edu/~so33/SusDev/Lecture_9.pdf

They first talk about this graph on slide 28.

As to your other question about whether the other variables are evaluated at their means or not, I'm not entirely sure. If I had to guess, I'd say the professor is plotting y and an x against each other in actuals form, seeing as marginal effects are not discussed until slide 68 (40 slides after the graph). However, it's clearly not your typical x-y scatter plot of actuals; the probit CND curve is in the mix too.

PS: it's a lengthy PDF; I'm not expecting people to read it cover to cover, and I especially don't want people to feel like they're doing my work for me. Basically, if someone from the community recognizes what's going on in the graph and is able to put it in EViews terms, that's what I'm really after. Because, like I said, I tried my way, and I'm starting to think I'm approaching this from the wrong angle.


VECM with flat impulse responses

Hi Billaudes, it's 2015 and I have the same problem running a VECM (a VAR in first differences with an error-correction term). I'd like to know if you solved your problem and how.
Thank you so much for your response!
@Econodelics


save wmf file, unexpected result

EViews 9, Windows XP x86, ver 20151116

Code:

wfcreate u 8
genr x = @obsid
x(7) = 1
freeze(gf) x.line
gf.axis range(2,5)
gf.save(t=wmf) gf.wmf
gf.save(t=emf) gf.emf

For the saved WMF file, the graph was not cropped; the lines go outside the frame.
Attachment: gf.wmf.7z

JPG version of the WMF for comparison:
Attachment: gf.jpg



collecting fitted values/stats for rolling panel regression

Thanks for the advice!

I have another question though.
If my initial equation was y = c(1) + c(2)*x1 + c(3)*x2, and I recursively estimated my coefficients:

for t:     c(1,t)    c(2,t)    c(3,t)
for t+1:   c(1,t+1)  c(2,t+1)  c(3,t+1)
for t+2:   c(1,t+2)  c(2,t+2)  c(3,t+2)

and so on until t = 432.

My total sample has 504 observations (1973m01 to 2014m12), and my coefficient matrix is of size [3,432] (I used the first 72 = 504 - 432 observations to make my first estimation) and then recursively estimated my coefficients.

So c(1), c(2), c(3) are rows of a [3,432] matrix, and x1, x2 are variables with 504 observations each. All I want is to simply compute:

y(t) = c(1,t) + c(2,t)*x1 + c(3,t)*x2, where x1 and x2 take the 73rd observation (1979m01) of each variable respectively
y(t+1) = c(1,t+1) + c(2,t+1)*x1 + c(3,t+1)*x2, where x1 and x2 take the 74th observation (1979m02) of each variable respectively
and so on until the end of my sampling period (2014m12).
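One way to sketch this in an EViews program, assuming the recursive coefficients are stored in a 3x432 matrix named cmat (the name and layout are assumptions; adjust to your own objects):

Code:
' workfile runs 1973m01-2014m12, so observation 73 = 1979m01
series yhat
for !i = 1 to 432
  ' restrict the sample to the single observation 72 + !i
  %d = @otod(72 + !i)
  smpl %d %d
  yhat = cmat(1,!i) + cmat(2,!i)*x1 + cmat(3,!i)*x2
next
smpl @all
show yhat

Each pass through the loop evaluates the t-th coefficient set at the t-th out-of-sample observation, which is exactly the y(t), y(t+1), ... sequence described above.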

Thanks Gareth for all the help until now, I appreciate it!


Dummy Variables

Hello,

Being new to EViews, I have some issues creating dummy variables.
I want to create a dummy variable (1 = cash, 0 = all other). I generated and defined them on my own.

In the regression I used @expand(cash), and at first I had the collinearity problem. I looked at this http://www.eviews.com/Learning/dummies.html and removed the constant.
If I want to include the constant in the regression, I read here that I have to use @expand(cash, @dropfirst). Would this regress Y on the constant, the Xs, and my dummy?
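If I understand that page correctly, a minimal sketch would be (the variable names x1, x2 and the equation name are placeholders):

Code:
' keep the intercept; drop one category of cash to avoid the dummy trap
equation eq_cash.ls y c x1 x2 @expand(cash, @dropfirst)

With @dropfirst the first category (cash = 0) is absorbed into the constant, so the remaining dummy coefficient measures the cash-versus-other difference.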


Dummy Variables

Yes. But why not just try it to see what happens?


x12 specification file

$
0
0
Hi, HollEviews.

I've attached a subroutine that handles both the force option as well as automatic reading of the .saa file.

The key to this is really just to use @temppath to locate the .saa file. Once you have the file's location, an import statement can load the data into the workfile for you.

See the attached program to see what this looks like. Both Quarterly and Monthly variations of the import statement are handled.
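For reference, the core of the idea looks roughly like this (the .saa file name is hypothetical, and the import options are omitted; the attached program shows the exact Quarterly and Monthly variants):

Code:
' locate the X-12 output file in EViews' temporary directory
%saafile = @temppath + "\myseries.saa"
' load the seasonally adjusted data into the workfile
import %saafile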

Cheers,
Graeme



text box to keep track of program files

The commands below might be useful. Consult the EViews documentation for the specifics.

    logmode
    logmsg
    logsave

As an example, if I'm using the exec command to run a number of programs, I usually put something like the code below into my programs. In this way, I can keep track of what's going on. The setmaxerrs statement also comes in handy.

Code:
logmode all
logmsg Starting to execute: ProgramName.prg
' - - - commands - - -
logmsg Starting to execute first code chunk...
' - - - commands - - -
logsave(type=rtf) LogFileName.text



GMM error code

Hi, I am estimating a model with a dynamic component. The model contains a one-period lag of the dependent variable and four other contemporaneous variables. I can use a 2SLS method, but it is suggested that I explore the Arellano Bond method. I follow the dynamic panel wizard for the GMM/DPD method. When I run the regression I receive the message "number of instruments greater than number of observations". This is an unbalanced panel covering over 40 cross sections and 60 time periods and I am using the variables as above. Why would this error message appear? Is the AB method not suitable for such large time periods?


Breusch-Pagan LM Test for Random Effects

Glenn, I am using EViews 8 and installed the add-in for the Breusch-Pagan LM test.

However, when I run the test, I get the message:

Procedure can only be run from equations estimated by list

Why is this happening?

Kindly advise me in this regard.


GMM error code

You probably don't have enough observations. With the AB estimator the instrument count grows very quickly: each additional time period adds all earlier lags of the dependent variable as instruments, so the total grows roughly with the square of the number of time periods.


GMM error code

I've 1600 observations. When I reduce this to just 3-4 time periods the error code changes to near singular matrix, which is what one would expect. How does the full period model generate over 1600 instruments?



