Mean actual profit by reporting status (report2) and period:

| report2 | time 1 | time 2 |
|---------|--------|--------|
| FALSE   | 7.9    | 4.2    |
| TRUE    | 5.0    | 6.4    |
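The same cell means can be computed directly from the panel. This is a minimal sketch, not necessarily the slides' exact code: it assumes rd_did_panel (with firm, time, report, and actual_profit as used in the regressions below) and rebuilds the firm-level reporter indicator, shown as report2 above, from the time-varying report dummy.

library(dplyr)
library(tidyr)

rd_did_panel %>%
  group_by(firm) %>%
  mutate(report2 = any(report == 1)) %>%   # TRUE for firms that report in time 2 (reconstructed here)
  group_by(report2, time) %>%
  summarise(actual_profit = mean(actual_profit), .groups = "drop") %>%
  pivot_wider(names_from = time, values_from = actual_profit)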
Regressions
did_lm    <- feols(actual_profit ~ report, data = rd_did_panel)
did_sub   <- feols(actual_profit ~ report, data = filter(rd_did_panel, time == 2))
did_fixed <- feols(actual_profit ~ report | firm, data = rd_did_panel)
did_did   <- feols(actual_profit ~ report | firm + time, data = rd_did_panel)
msummary(
  list(simple = did_lm, "time 2" = did_sub, "firm FE" = did_fixed, "two-way FE" = did_did),
  gof_omit = gof_omit, stars = stars
)
|             | simple   | time 2   | firm FE  | two-way FE |
|-------------|----------|----------|----------|------------|
| (Intercept) | 5.338*** | 4.193*** |          |            |
|             | (0.153)  | (0.363)  |          |            |
| report      | 1.029*** | 2.174*** | 1.405*** | 5.142***   |
|             | (0.243)  | (0.407)  | (0.237)  | (0.488)    |
| Num.Obs.    | 1000     | 500      | 1000     | 1000       |
| R2          | 0.018    | 0.054    | 0.612    | 0.662      |
| R2 Within   |          |          | 0.066    | 0.183      |
| RMSE        | 3.75     | 3.68     | 2.36     | 2.20       |
| Std.Errors  | IID      | IID      | IID      | IID        |
| FE: firm    |          |          | X        | X          |
| FE: time    |          |          |          | X          |

* p < 0.1, ** p < 0.05, *** p < 0.01
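Reading the table: because the treatment switches on only for reporting firms in time 2, the two-way FE coefficient is, up to rounding of the displayed means, the hand-computed difference-in-differences of the four cell means shown above. A quick check, assuming the objects from the code above:

(6.4 - 5.0) - (4.2 - 7.9)   # DiD from the cell means: 1.4 + 3.7 = 5.1
coef(did_did)["report"]     # two-way FE estimate: 5.142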
What if we have three periods?
Note
We assume that over time investors and regulators get better at detecting when firms exaggerate in their reports.
Time 1: Reports are not believable; nobody reports.
Time 2: The biggest exaggerations will be caught; only well-performing firms report, and they communicate that they are doing excellently.
Time 3: More subtle exaggerations will also be caught. The worst performers will not report at all, the moderate performers will report and say that they will do well, and the good performers will report that they are doing excellently.
Setup of three period simulation
N <- 1000
T <- 3
cutoff2 <- 3                   # performance cutoff to report in time 2
cutoff3 <- c(4/3, 4 + 2/3)     # performance cutoffs for reporting in time 3
profit1 <- 5
profit2 <- c(1.5, 6.5)         # profits in time 2, depending on report
profit3 <- c(2/3, 3, 7 + 1/3)  # profits in time 3, depending on report
rd_did3_firm <- tibble(
  firm = 1:N,
  performance = runif(N, 0, 10),
  firm_effect = rnorm(N, 0, 2) + ifelse(performance < cutoff2, 3, 0)
)
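As a quick sanity check on the firm-level draws (not part of the slides): with performance drawn uniformly on [0, 10] and cutoff2 = 3, roughly 30% of firms fall below the cutoff and carry the extra firm effect of about +3.

library(dplyr)

rd_did3_firm %>%
  group_by(low_performer = performance < cutoff2) %>%
  summarise(n = n(), mean_firm_effect = mean(firm_effect), .groups = "drop")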
Three period simulation
rd_did3_panel <- tibble(
  firm = rep(1:N, each = T),
  time = rep(1:T, times = N)
) %>%
  left_join(rd_did3_firm, by = "firm") %>%
  mutate(
    # When will firms report?
    report = case_when(
      time == 1 ~ 0,
      time == 2 & performance < cutoff2 ~ 0,
      time == 3 & performance < cutoff3[1] ~ 0,
      TRUE ~ 1
    ),
    noise = rnorm(T * N, 0, 5),
    profit_no_report = firm_effect + noise + case_when(
      time == 1 ~ profit1,
      time == 2 ~ profit2[1],
      time == 3 ~ profit3[1]
    ),
    profit_report = firm_effect + noise + case_when(
      time == 1 ~ profit1,
      time == 2 ~ profit2[2],
      time == 3 & performance < cutoff3[2] ~ profit3[2],
      TRUE ~ profit3[3]
    ),
    actual_profit = ifelse(report == 1, profit_report, profit_no_report)
  )
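A natural first pass on this three-period panel is the same two-way fixed effects regression as before. A sketch (assuming fixest is loaded and rd_did3_panel as constructed above), together with the group-period means of actual profit:

library(dplyr)
library(fixest)

did3_twfe <- feols(actual_profit ~ report | firm + time, data = rd_did3_panel)
summary(did3_twfe)

rd_did3_panel %>%
  group_by(time, report) %>%
  summarise(mean_profit = mean(actual_profit), .groups = "drop")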
Finally, when research settings combine staggered treatment timing with treatment effect heterogeneity across firms or over time, staggered DiD estimates are likely to be biased. In fact, these estimates can even have the opposite sign of the true average treatment effect.
Solution
While the literature has not settled on a standard, the proposed solutions all address the biases arising from the “bad comparisons” problem inherent in two-way fixed effects DiD regressions by modifying the set of effective comparison units used to estimate the treatment effect. In particular, each alternative estimator ensures that newly treated firms are not compared to firms that were treated earlier.
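As an illustration of this idea (and not the sample-restriction implementation shown on the following slides), the Sun and Abraham (2021) interaction-weighted estimator is available in fixest via sunab(). This sketch applies it to the three-period panel from above; the cohort variable first_report, the first period in which a firm reports, with never-reporters assigned a placeholder value far outside the sample, is introduced here purely for illustration.

library(dplyr)
library(fixest)

# Cohort = first period in which a firm reports; never-reporters get a value
# far outside the sample so they serve as the never-treated comparison group.
rd_did3_panel <- rd_did3_panel %>%
  group_by(firm) %>%
  mutate(first_report = if (any(report == 1)) min(time[report == 1]) else 10000) %>%
  ungroup()

did3_sunab <- feols(
  actual_profit ~ sunab(first_report, time) | firm + time,
  data = rd_did3_panel
)
summary(did3_sunab)               # cohort x relative-period effects
summary(did3_sunab, agg = "att")  # aggregated to an overall ATT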
Simulation Setup - The True Average Treatment Effect of Three Groups
The Effect Estimated by Two-way Fixed Effects Across 500 Simulations
The Sun and Abraham (2021) Solution - Restrict The Sample
The Estimated Effect with the Sun and Abraham Solution
What is the level of the treatment variable? What is the comparison?
Mixed-sex or same-sex race
State legislation
Country legislation
Firm corporate governance changes
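One reason the level of treatment matters for estimation, following the logic of Abadie et al. (2017) cited below, is the level at which standard errors are clustered. A hedged sketch, reusing the two-period panel from above, with clustering at the firm level chosen purely for illustration:

library(fixest)

# Same two-way FE DiD as before, but with standard errors clustered by firm
# instead of the IID standard errors reported in the table above.
did_did_cl <- feols(actual_profit ~ report | firm + time,
                    data = rd_did_panel, cluster = ~firm)
summary(did_did_cl)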
References
Abadie, Alberto, Susan Athey, Guido W. Imbens, and Jeffrey Wooldridge. 2017. “When Should You Adjust Standard Errors for Clustering?” Working Paper. Working Paper Series. National Bureau of Economic Research. https://doi.org/10.3386/w24003.
Baker, Andrew C., David F. Larcker, and Charles C. Y. Wang. 2022. “How Much Should We Trust Staggered Difference-in-Differences Estimates?” Journal of Financial Economics 144 (2): 370–95. https://doi.org/10.1016/j.jfineco.2022.01.004.
Huntington-Klein, Nick. 2021. The Effect: An Introduction to Research Design and Causality. First edition. Boca Raton: Chapman and Hall/CRC. https://doi.org/10.1201/9781003226055.
Sun, Liyang, and Sarah Abraham. 2021. “Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects.” Journal of Econometrics, Themed Issue: Treatment Effect 1, 225 (2): 175–99. https://doi.org/10.1016/j.jeconom.2020.09.006.