Channel: EViews

ardl command - error message

Please ignore my post. I found the mistake. Many thanks!




Fama-MacBeth regression

Hi,

I have to run several Fama-MacBeth regressions for my factors.
I'm not that familiar with EViews, but I think I understand how your code works.

I wanted to test whether the hml factor yields the same risk premium and t-statistic as it did in your example. I ran it on a 30-industry and a 49-industry portfolio, and my results don't make any sense (slightly negative and insignificant, while hml in your example is positive with a t-statistic above 2).

I think the issue is my portfolios. Did I set them up the right way? series* is my 49-industry portfolio and pr* is my 30-industry portfolio.

Thanks for your help.


ardl command - error message

Hello Gareth,

I've just updated to EViews 9.5 and noticed that it no longer readily provides the "adjustment speed" coefficient (CointEq(-1), EViews 9 manual II, p. 293). It was quite useful to have. How can I get that coefficient in EViews 9.5?

Thanks again,

Mara


ardl command - error message

You'll have to wait until the next patch which will re-enable it.


Restricting parameters: "AR is not defined".

I estimated a model in EViews, but I want to tweak the parameters a bit to my liking before exporting the residuals.

The model is the following simple AR-model:

Y ar(25)

After estimation, I have judged that the estimate is biased, and I want to modify it so that the parameter is restricted in the following way:

Y = 0.46*ar(25)

This yields the "AR is not defined" error. There is also a secondary issue: EViews won't let me estimate this sort of equation since there is no parameter to estimate, yet I still want to obtain the residuals. Is there a way to accomplish what I attempted here in EViews?

EDIT: I think I got the right form:

[AR(25) = 0.46] in place of AR. I'm still having the issue that no parameters are specified.


Restricting parameters: "AR is not defined".

Your model appears to be
y = .46*y(-25)

in which case the residuals are going to be
y - .46*y(-25)
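
If you only need that residual series, you can generate it directly rather than estimating anything. A minimal sketch (resid_fixed is just an illustrative name, and y stands for your dependent variable):

Code:

' residuals under the fixed coefficient of 0.46 on y(-25)
series resid_fixed = y - 0.46*y(-25)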


Fama-MacBeth regression

I'm not sure what you mean. The example uses rmkt, not hml.



Fama-MacBeth regression

Okay, my question is not about the factors but rather about the portfolio creation. In your file you name them 11 to 15 and then 21 to 25, probably because it is a 5x5 portfolio. My question is whether the way you name the portfolios has an impact on the Fama-MacBeth regression.


long and short term equations

Hello,
running the cointegration test in EViews gives the following output:


Date: 04/13/17   Time: 09:25
Sample (adjusted): 2002 2015
Included observations: 14 after adjusting endpoints
Trend assumption: Linear deterministic trend (restricted)
Series: CE E G
Lags interval (in first differences): 1 to 1

Unrestricted Cointegration Rank Test

Hypothesized                   Trace        5 Percent         1 Percent
No. of CE(s)    Eigenvalue     Statistic    Critical Value    Critical Value
None **         0.943976       63.45509     42.44             48.45
At most 1       0.682073       23.10735     25.32             30.45
At most 2       0.396247       7.064266     12.25             16.26

*(**) denotes rejection of the hypothesis at the 5% (1%) level
Trace test indicates 1 cointegrating equation(s) at both 5% and 1% levels

Hypothesized                   Max-Eigen    5 Percent         1 Percent
No. of CE(s)    Eigenvalue     Statistic    Critical Value    Critical Value
None **         0.943976       40.34774     25.54             30.34
At most 1       0.682073       16.04309     18.96             23.65
At most 2       0.396247       7.064266     12.25             16.26

*(**) denotes rejection of the hypothesis at the 5% (1%) level
Max-eigenvalue test indicates 1 cointegrating equation(s) at both 5% and 1% levels

Unrestricted Cointegrating Coefficients (normalized by b'*S11*b=I):

CE              E              G              @TREND(01)
 0.309809      -0.026339      -0.003360       6.357138
 5.323801      -0.019033      -0.002098       1.986750
 3.174429      -0.002514      -0.002042       0.606928

Unrestricted Adjustment Coefficients (alpha):

D(CE)       0.122194      -0.025537       0.060332
D(E)       18.17960       26.80285      -15.71174
D(G)     -327.9016      1305.121       1094.855

1 Cointegrating Equation(s):    Log likelihood   -175.5371

Normalized cointegrating coefficients (std. err. in parentheses)
CE              E              G              @TREND(01)
 1.000000      -0.085017      -0.010844       20.51954
               (0.00752)      (0.00097)       (1.75813)

Adjustment coefficients (std. err. in parentheses)
D(CE)       0.037857
           (0.01084)
D(E)        5.632203
           (4.25272)
D(G)     -101.5869
           (242.823)

2 Cointegrating Equation(s):    Log likelihood   -167.5155

Normalized cointegrating coefficients (std. err. in parentheses)
CE              E              G              @TREND(01)
 1.000000       0.000000       6.46E-05       -0.511195
                               (7.6E-05)       (0.06071)
 0.000000       1.000000       0.128314      -247.3704
                               (0.00536)       (4.29587)

Adjustment coefficients (std. err. in parentheses)
D(CE)      -0.098097      -0.002732
           (0.18107)       (0.00110)
D(E)      148.3253        -0.988978
          (55.5757)        (0.33866)
D(G)     6846.617        -16.20399
          (3476.79)        (21.1864)

How can I write the short-run and long-run equations from these results?

Regards.



Restricting parameters: "AR is not defined".

Well, yes, but I would like to do the same thing with an MA model and with different combinations of AR and MA terms, in which case the solution involves solving difference equations, which is a bit of a headache. It would be much easier to just constrain the parameters and estimate the model...


Nowcasting with a State-Space Model

Hi all. I'm having trouble with my state-space model.

My aim is to create a 'nowcast' for GDP growth. Apart from my GDP series, I am using seven other economic time series, which are named "bp", "emp", "ip", "ism", "nhs", "rs", "ts".

From reading the manual and various papers, I believe I have specified my model correctly.

Code:

@signal gdp = c(101) + sv1*ip + sv2*bp + sv3*emp + sv4*ism + sv5*rs + sv6*nhs + sv7*ts + [var = exp(c(102))]
@state sv1 = sv1(-1) + [var = exp(c(1))]
@state sv2 = sv2(-1) + [var = exp(c(2))]
@state sv3 = sv3(-1) + [var = exp(c(3))]
@state sv4 = sv4(-1) + [var = exp(c(4))]
@state sv5 = sv5(-1) + [var = exp(c(5))]
@state sv6 = sv6(-1) + [var = exp(c(6))]
@state sv7 = sv7(-1) + [var = exp(c(7))]


I've attached a screenshot from the specification.

Next, I estimate the model, and then produce a signal series.

Here's my problem:
Half of the time series have data up to and including 2017M01. The other half have data up to and including 2017M02.

My understanding was that a state-space model could deal with this "jagged" edge and produce a value for GDP for 2017M02 even though not all of the data were available for that month.

Am I missing a step? Is my model specified correctly?

I'd appreciate any help.

Thanks!

Ben


Restricting parameters: "AR is not defined".

I don't think that EViews has any way to constrain ARMA parameters.


rename dummy and diagonal dummy

Hello,

I have tried two ways of naming my dummy series when spanning by three weeks (or by some other span), but neither modification works:

1/ Name each dummy by its group's start date: d_2000_01_03_001, d_2000_01_24_002, d_2000_02_14_003, etc. (d_{start date}_{id}). Instead, the series come out named d_2000_01_03_001, d_2000_01_10_002, d_2000_01_17_003, etc. with this code:

Code:

wfcreate w 03/01/2000 31/12/2001
!size = 3   'how many weeks per dummy
!first_date_number = @dateval(@otod(1))
for !i = 1 to @obsrange
    !week = @datediff(@dateval(@otod(!i)), !first_date_number, "ww")
    !id = @floor(!week / !size) + 1
    %series_name = "d_" + @datestr(@dateval(@otod(!id)), "YYYY_MM_DD_") + @str(!id, "i03")
    if not @isobject(%series_name) then
        series {%series_name} = @floor(@datediff(@date, !first_date_number, "ww") / !size) + 1 = !id
    endif
next

2/ Name each dummy by its group's start and end dates: d000103_000117_001, d000124_000207_002, etc. (d{start date}_{end date}_{id}). Instead, the series come out named d000103_000106_001, d000110_000113_002, etc. with this name line:

%series_name = "d" + @datestr(@dateval(@otod(!id)), "YYMMDD_") + @datestr(@dateval(@otod(!id)) + 3, "YYMMDD_") + @str(!id, "i03")

How can I name the dummies by start date, and by start and end date, when spanning by different frequencies?


Removing outliers

I have a dated monthly series and I would like to remove the largest 10% in absolute terms. In other words, I want to create a new series that excludes the largest absolute observations from the original series. Any idea how to do that?

Thanks



rename dummy and diagonal dummy

Hello,

The solutions to both of your questions are the same. At the moment, you're creating your series names by treating !id as an observation number and using it in @otod(!id). But !id isn't an observation number; it's a function of the observation numbers that produces a unique value for each group of weeks (every three weeks in your example). More specifically, for the 105 observations in your example, !id ranges from 1 to 35, so @otod(!id) will produce the first 35 dates in your sample.

However, you can calculate the observation number of the first week in each group from !id: it's just (!id - 1) * !size + 1. Use that as the argument to @otod. Similarly, the observation number of the last week in each group is !id * !size, provided that produces a valid observation number (the last group may hold fewer than !size weeks).
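
For instance, the name construction inside your loop might look something like this (just a sketch, reusing your !size and !id and the same date functions from your program; the check on !last_obs handles the shorter final group):

Code:

' observation numbers of the first and last week in the current group
!first_obs = (!id - 1) * !size + 1
!last_obs = !id * !size
if !last_obs > @obsrange then
    !last_obs = @obsrange
endif
' scheme 1: d_{start date}_{id}
%series_name = "d_" + @datestr(@dateval(@otod(!first_obs)), "YYYY_MM_DD_") + @str(!id, "i03")
' scheme 2: d{start date}_{end date}_{id}
'%series_name = "d" + @datestr(@dateval(@otod(!first_obs)), "YYMMDD_") + @datestr(@dateval(@otod(!last_obs)), "YYMMDD_") + @str(!id, "i03")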


Removing outliers

Hello,

Assuming that by remove you mean translate to NAs, I believe the following demonstrates what you want:

Code:

series y = @recode(@abs(x) < @quantile(@abs(x), .9), x, na)
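
If you'd also like to see which observations were trimmed, a 0/1 indicator can be built the same way (x and the .9 cutoff as above; trimmed_flag is just an illustrative name):

Code:

' 1 where |x| is at or above the 90th percentile of |x|, 0 otherwise
series trimmed_flag = @abs(x) >= @quantile(@abs(x), .9)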



Error wfsave command & delete function

I came across some curious performance behavior in one of my programs, so I looked into the phenomenon and it turned out to have the same root cause. Using a modified version of Gareth's script, I observed the following runtimes (in seconds) for deleting various numbers of scalars:

Code:

                  100k     200k      300k      400k
(1) x*            2.297    8.625     20.117    34.947
(2) x_!i_*        1.578    5.547     12.238    21.298
(3) x_!i_!j_*    29.405   68.358    108.006   150.844
(4) x_!i_!j_!k    2.656    7.833     15.285    25.445

I'm guessing that the results for (2) and (4) are the most surprising. I suspect that most people would have the intuition that deleting objects in bigger "chunks" would always be faster. Obviously, that's not the case, so let me illuminate some of EViews' inner workings...

Deletion techniques (1-3) must translate the wildcard expressions into a list of matching object names. The observed differences in runtime primarily reflect a tradeoff between two aspects of that process, 1) going through EViews' master list of objects to find objects with matching names, and 2) building the secondary list containing those matching names. For example, in the 100k scenario, (1) will go through the master list once and build a large list of 100k names. Contrast this with (2), which will go through the master list 100 times and build 100 lists of 1k names each (noting that the master list shrinks after each intermediate deletion of 1k objects). (3) is even more extreme, going through the (ever shrinking) master list 10,000 times, yet building small lists of only 10 names.

Going through the master list of objects so many times, even as it's slowly shrinking, is slow. Going through fewer times, with the list shrinking more quickly, as (2) does, is better. Why then is (1) not the best? It turns out that EViews' algorithm for building the secondary list of matching names slows down as that list becomes large. In computer science, we'd categorize the temporal behavior of the algorithm as O(n^2), which means that the work the algorithm must perform, and thus the time it takes to execute, grows quadratically with the size of the list. In other words, if the size of the list doubles, the time it takes to build quadruples. In our scenario, (1) builds a list 100 times larger than (2)'s, which takes 100^2 = 10,000 times as long to build. Even considering that (2) is going to build 100 lists, (1) still spends 100 times the effort (and time) on list-building. This significant increase in work causes (1)'s runtime to exceed (2)'s. There is a similar effort differential between (2) and (3), but it's completely outweighed by the time (3) spends repeatedly traversing the master list. In a way, (2) represents a performance "sweet spot" among these three techniques: (2) doesn't build lists large enough to cause slowdowns, as (1) does, nor does it traverse the master list enough times to cause slowdowns, as (3) does.

(4) doesn't go through the wildcard resolution process, so it skips all the work I've outlined above. That (4) is still a little bit slower than (2) mostly stems from the fact that (4) goes through 100k iterations while (2) only goes through 100. There is overhead in executing an EViews program and all those extra iterations cost time.
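
For reference, here's roughly the kind of script involved, shown for the 100k scenario (a sketch only, not Gareth's actual script; the 100 x 100 x 10 nesting is an assumption chosen to match the naming patterns in the table above):

Code:

' create 100,000 scalars named x_{i}_{j}_{k}
for !i = 1 to 100
  for !j = 1 to 100
    for !k = 1 to 10
      scalar x_!i_!j_!k = 0
    next
  next
next

' (1) one wildcard covering all 100k names
' delete x*

' (2) 100 wildcard deletions of 1,000 names each -- the "sweet spot"
for !i = 1 to 100
  delete x_!i_*
next

' (3) 10,000 wildcard deletions of 10 names each
' for !i = 1 to 100
'   for !j = 1 to 100
'     delete x_!i_!j_*
'   next
' next

' (4) 100,000 deletions by exact name, no wildcard resolution
' for !i = 1 to 100
'   for !j = 1 to 100
'     for !k = 1 to 10
'       delete x_!i_!j_!k
'     next
'   next
' next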



Fama-MacBeth regression

It's not clear why you think the portfolio names would impact the regression. The portfolio names in the example are arbitrary (I downloaded the series from Ken French's data library and simply renamed them; the frenchdata add-in is an easy way to download them). Furthermore, the same set of factors is used for each portfolio regression in the initial Fama-MacBeth step.

