A series is said to be stationary when its statistical properties (most importantly, from a forecasting perspective, the mean, variance and autocorrelation) are time invariant, i.e. they do not vary with time. In simpler terms, if you observe the series over any regular time interval, those properties will look the same.

However, this is an idealized definition that rarely holds exactly in practice; only a flat line or a pure oscillation would be perfectly stationary. Real-life series contain periodic fluctuations (seasonality), trend and changing variance. As can be seen in the figure below, even a stationary series, observed over regular time intervals, will have a slightly varying mean. So when a series is referred to as stationary in time series analysis, it is meant in an approximate sense, allowing an acceptable amount of variation.

**With which series can you estimate the future better?**

You are given two sets of images: the first contains non-stationary series, and the second contains the stationarized versions of the same series.

Do you think having a stationary series gives you a better chance of estimating the future values more accurately than a non-stationary series?

**Non-Stationary Series:**

**Stationarized Series:**

Well, the stationary series certainly looks more predictable, with smaller variations across time, while the non-stationary series looks more volatile, making it harder to approximate or estimate a future value and raising the chance of error. The same can be understood in more detail from the example below.

**Understanding the problem with Non-Stationarity**

Consider a retail store with a customer who purchases milk every day. For the past 4 weeks he has been buying 10 packs a day on average: 15 packs each on Monday and Tuesday, 12 on Wednesday, 18 on Thursday, 5 each on Friday and Saturday, and none on Sunday. Even with a bit of variation in his daily purchases, there is a clear pattern across the 4 weeks, and it is easy for the shopkeeper to estimate his needs and keep his share packed and ready for each day of the coming weeks.

Lately the customer has started buying 25 packs a day on average, with Monday and Wednesday at 25 and Tuesday at 40. The daily amounts also vary now, some Tuesdays being 15 and some Mondays 35, unlike before, when the same day would see the same purchase across weeks. The shopkeeper would get his estimates wrong for the first few weeks, basing his plan on the past purchase pattern, but would readjust his estimates based on the recent observations.

However, considering the customer's entire milk purchase history, the shopkeeper would feel less confident in the wake of these recent changes. And since he does not understand the reason for the change (the customer might have opened a coffee shop, or his family might have grown), he can only approximate the coming weeks' demand, which, if not entirely accurate, would be close.

More recently, the customer has shifted again, to around 18 packs a day on average. The shopkeeper will now have a hard time estimating weekly needs that keep changing and fluctuating over time, and he would be very uncertain about the customer's future demand.

Data of this kind is known as a non-stationary series, and it is extremely hard to estimate accurately. Non-stationarity is introduced by external events of one kind or another: market fluctuations, manufacturing plant closures, promotions and campaigns, increasing demand for the product, expansion to new markets, and so on. These need to be accounted for separately in the model, indicating the occurrence of such events in the past and in the planned future, to get an accurate forecast. Otherwise the model will treat the fluctuations as part of the usual pattern rather than as something caused by external events, carry their impact into the future, and produce spurious forecasts. Stationarizing smooths out these fluctuations, resulting in more accurate, less erroneous future estimates.

As a solution, the data is made stationary using statistical techniques and then forecast, so that the model is less susceptible to unusual changes and fluctuating demand. These techniques can be broadly classified as:

**1. Detrending the series:** to handle a deterministic trend in the data

**2. Deseasonalizing the series:** to handle deterministic seasonality in the data

**3. Differencing the series:** to smooth random fluctuations and handle the presence of a unit root in the data

**4. Log transforming the series:** to smooth the variation in the data

Each of these techniques smooths out of the data the extreme changes caused by external factors (in our example, the larger family or the new coffee shop) which increase or decrease the trend over time. Forecasting on such volatile information yields a less accurate forecast, built on a spurious understanding of the buying patterns with no indication of what caused the change in buying requirements. But before selecting the proper stationarizing technique, we need to understand the various forms of non-stationarity in the data.

**Types of Non-Stationarity process**

Non-stationarity can take many different forms: presence of a unit root, changing variance, level shifts, seasonality, etc. However, the forms most commonly seen are:

**1. Presence of unit root**

In a time series, any point can be represented as a function of the previous point plus some randomness that cannot be predicted; that is, each point in the series depends on the previous point in its history and a random value. Consider a first-order AR model:

y(t) = a * y(t-1) + e(t), where a is the coefficient being modeled, also known as the root.

y(t-1) = a * y(t-2) + e(t-1)

y(t-2) = a * y(t-3) + e(t-2), and so on.

When a = 1, the data is said to possess a unit root, and substituting for y(t-1) and y(t-2) in y(t), the equation becomes

y(t) = y(t-3) + ( e(t) + e(t-1) + e(t-2) )

What this essentially means is that the model will be very susceptible to shocks in the data and will carry their effect into the future without converging back to the long-term mean or average trendline of the series (i.e. the series will not be mean reverting). This results in spurious, unreliable forecasts. Because the random terms ( e(t) + e(t-1) + e(t-2) ) are summed, the series accumulates an error component (randomness) that keeps adding up as the series grows longer.
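The persistence of shocks under a unit root can be sketched with a small simulation (a hypothetical Python/numpy illustration, not code from the article): feed a single large shock into the AR(1) recurrence above, once with a = 0.5 and once with a = 1, and compare what remains of it much later.

```python
import numpy as np

n = 300
shocks = np.zeros(n)
shocks[50] = 10.0  # one large external shock at t = 50, no other noise

def ar1(a, e):
    """Simulate y(t) = a * y(t-1) + e(t) starting from y(0) = 0."""
    y = np.zeros(len(e))
    for t in range(1, len(e)):
        y[t] = a * y[t - 1] + e[t]
    return y

mean_reverting = ar1(0.5, shocks)  # |a| < 1: the shock decays geometrically
unit_root = ar1(1.0, shocks)       # a = 1: the shock is carried forward forever

# 250 steps after the shock, the mean-reverting series has forgotten it
# entirely, while the unit-root series still sits at the full shock level.
print(mean_reverting[-1], unit_root[-1])
```

With a = 1 the shock never dies out, which is exactly why a unit-root series drifts away from any long-term mean.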

**Solution for stationarizing:** differencing the series
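As a sketch of why differencing works (a hypothetical Python/numpy example): a random walk is exactly the cumulative sum of its shocks, so taking the first difference recovers the stationary shock series.

```python
import numpy as np

rng = np.random.default_rng(42)
e = rng.normal(0, 1, 500)  # stationary shocks, e(t)
walk = np.cumsum(e)        # y(t) = y(t-1) + e(t): a unit-root series

# First difference: y(t) - y(t-1) = e(t), the stationary shocks again.
diffed = np.diff(walk)

# The differenced series has roughly constant variance (about 1 here),
# instead of variance growing with t as in the raw walk.
print(round(diffed.var(), 2))
```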

**2. Deterministic Trend (Trend-Stationary)**

A series may have no unit root and yet be non-stationary, when the non-stationarity is caused only by variation of the trend over time. Since such a trend is determinable, removing it from the data makes the data stationary.

**Solution for stationarizing:** detrending the series
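A minimal detrending sketch (a hypothetical Python/numpy example on synthetic data): fit a linear trend by least squares and subtract it, leaving the stationary residual.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200.0)
series = 5.0 + 0.5 * t + rng.normal(0, 2, 200)  # deterministic trend + noise

# Fit the deterministic trend and subtract it (detrending).
slope, intercept = np.polyfit(t, series, 1)
detrended = series - (slope * t + intercept)

# The residual now fluctuates around zero with no drift; the fitted slope
# should be close to the true value of 0.5.
print(round(slope, 2))
```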

**3. Varying variance**

When the series is observed to be non-stationary because its variance varies across time.

**Solution for stationarizing:** log transformation
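A quick sketch of how a log transform stabilizes variance (a hypothetical Python/numpy example): build a series whose noise is multiplicative, so fluctuations grow with the level, then take logs.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1.0, 201.0)
# Multiplicative structure: the noise scales with the level of the series.
series = np.exp(0.02 * t) * rng.lognormal(0, 0.1, 200)

logged = np.log(series)  # log(series) = 0.02 * t + Normal(0, 0.1)

# On the log scale the noise is additive with constant variance, so the
# residual around the linear trend has a stable spread (about 0.1 here).
residual = logged - 0.02 * t
print(round(residual.std(), 2))
```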

**4. Seasonality**

When the series is observed to be highly seasonal, with the pattern constant across cycles (across years, for monthly data). Seasonal time series models (seasonal ARIMA, Holt-Winters, etc.) are able to account for seasonality in the forecast, but highly varying seasonality over time leads to wrong assumptions about seasonal volumes. So when determinable seasonality is present in the data, it is generally better to deseasonalize, forecast, and then reseasonalize the series.

**Solution for stationarizing:** deseasonalizing the series
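The deseasonalize-then-reseasonalize idea can be sketched as follows (a hypothetical Python/numpy example, assuming monthly data with a fixed yearly cycle): estimate a seasonal index as each month's average deviation from the overall mean, subtract it before forecasting, and add it back afterwards.

```python
import numpy as np

rng = np.random.default_rng(3)
period = 12  # monthly data, yearly seasonality
seasonal = 5.0 * np.sin(2 * np.pi * np.arange(period) / period)
series = 20.0 + np.tile(seasonal, 10) + rng.normal(0, 0.5, 120)

# Seasonal index: average deviation of each month from the overall mean.
monthly_means = series.reshape(-1, period).mean(axis=0)
seasonal_index = monthly_means - series.mean()

deseasonalized = series - np.tile(seasonal_index, len(series) // period)
# To reseasonalize a forecast, add the matching seasonal_index values back.

# The seasonal swings are gone: the variance drops to roughly the noise level.
print(round(series.var(), 1), round(deseasonalized.var(), 2))
```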

**How to check if my data is Non-Stationary?**

**1. Graphical Observation:**

Graphs convey a lot of information in a quick, simple manner and are easy to interpret and understand.

**– Run Sequence plot:**

Plot a run sequence plot (the forecast variable against the time sequence) to look for any indication of trend, seasonality or varying variance.

**Run Sequence Plots for Non-Stationary data:**

**Run Sequence Plots for Stationarized data:**

**– ACF Plot:**

An ACF plot shows the autocorrelation at different lags. For a non-stationary series, the ACF decays quite slowly and gradually, staying well above the significance levels, while for a stationary series it decays sharply.
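The slow-versus-sharp decay is easy to see numerically (a hypothetical Python/numpy sketch; the sample autocorrelation is computed directly rather than with a plotting library):

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation of x at lags 0..nlags."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / denom
                     for k in range(nlags + 1)])

rng = np.random.default_rng(4)
noise = rng.normal(0, 1, 500)  # stationary white noise
walk = np.cumsum(noise)        # non-stationary random walk

acf_noise = sample_acf(noise, 20)
acf_walk = sample_acf(walk, 20)

# The walk's ACF stays high out to lag 20; the noise ACF collapses at lag 1.
print(round(acf_walk[10], 2), round(acf_noise[10], 2))
```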

**ACF Plots for Non-Stationary data:**

**ACF Plots for Stationarized data:**

**2. Simple Summary Statistics:** Split the time-sequenced data into regular intervals and calculate summary statistics (mean and variance) for each interval, to observe how the measures differ across intervals.
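This interval check can be sketched in a few lines (a hypothetical Python/numpy example on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(5)
walk = np.cumsum(rng.normal(0, 1, 600))  # non-stationary random walk
noise = rng.normal(0, 1, 600)            # stationary white noise

# Split each series into 6 equal intervals and compare the interval means.
walk_means = walk.reshape(6, 100).mean(axis=1)
noise_means = noise.reshape(6, 100).mean(axis=1)

# The walk's interval means wander far apart; the stationary series'
# interval means all sit close to the common mean of zero.
print(np.ptp(walk_means), np.ptp(noise_means))
```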

**3. Statistical Tests:** Some statistical tests indicate the presence of a unit root causing non-stationarity in the series, which cannot otherwise be confirmed from graphs or summary statistics.

**– ADF (Augmented Dickey-Fuller) test for unit root:**

The ADF test has the null hypothesis that a unit root is present in the series, which makes the alternative hypothesis that the series has no unit root and is stationary. The point to be careful about is that a unit root is only one form of non-stationarity: a series may have no unit root and still be non-stationary. If the p-value is greater than 0.05, we fail to reject the null hypothesis, indicating the series is non-stationary.

*The ADF tests below are run on the non-stationary and stationarized series shown above.*

*ADF test on Non-Stationary data:*

```
> adf.test(ts_ls[[1]])

	Augmented Dickey-Fuller Test

data:  ts_ls[[1]]
Dickey-Fuller = -1.9987, Lag order = 6, p-value = 0.5764
alternative hypothesis: stationary

> adf.test(ts_ls[[2]])

	Augmented Dickey-Fuller Test

data:  ts_ls[[2]]
Dickey-Fuller = -1.4321, Lag order = 6, p-value = 0.8153
alternative hypothesis: stationary

> adf.test(ts_ls[[3]])

	Augmented Dickey-Fuller Test

data:  ts_ls[[3]]
Dickey-Fuller = -1.8925, Lag order = 6, p-value = 0.6213
alternative hypothesis: stationary

> adf.test(ts_ls[[4]])

	Augmented Dickey-Fuller Test

data:  ts_ls[[4]]
Dickey-Fuller = -3.5367, Lag order = 6, p-value = 0.03986
alternative hypothesis: stationary
```

**ADF test on Stationarized data:**

```
> adf.test(diff_ls[[1]])

	Augmented Dickey-Fuller Test

data:  diff_ls[[1]]
Dickey-Fuller = -5.9724, Lag order = 6, p-value = 0.01
alternative hypothesis: stationary

> adf.test(diff_ls[[2]])

	Augmented Dickey-Fuller Test

data:  diff_ls[[2]]
Dickey-Fuller = -6.5913, Lag order = 6, p-value = 0.01
alternative hypothesis: stationary

> adf.test(diff_ls[[3]])

	Augmented Dickey-Fuller Test

data:  diff_ls[[3]]
Dickey-Fuller = -5.5312, Lag order = 6, p-value = 0.01
alternative hypothesis: stationary

> adf.test(diff_ls[[4]])

	Augmented Dickey-Fuller Test

data:  diff_ls[[4]]
Dickey-Fuller = -7.205, Lag order = 6, p-value = 0.01
alternative hypothesis: stationary
```

**– KPSS (Kwiatkowski-Phillips-Schmidt-Shin) test for stationarity:**

The null hypothesis of the KPSS test is that the data is stationary, which makes the alternative hypothesis that the data is not stationary (note that this is the reverse of the ADF test). If the p-value is less than 0.05, we reject the null hypothesis, indicating the series is non-stationary.

*The KPSS tests below are run on the non-stationary and stationarized series shown above.*

*KPSS test on Non-Stationary data:*

```
> kpss.test(ts_ls[[1]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  ts_ls[[1]]
KPSS Level = 1.3168, Truncation lag parameter = 3, p-value = 0.01

> kpss.test(ts_ls[[2]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  ts_ls[[2]]
KPSS Level = 1.5826, Truncation lag parameter = 3, p-value = 0.01

> kpss.test(ts_ls[[3]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  ts_ls[[3]]
KPSS Level = 1.5097, Truncation lag parameter = 3, p-value = 0.01

> kpss.test(ts_ls[[4]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  ts_ls[[4]]
KPSS Level = 0.6002, Truncation lag parameter = 3, p-value = 0.02262
```

*KPSS test on Stationarized data:*

```
> kpss.test(diff_ls[[1]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  diff_ls[[1]]
KPSS Level = 0.11611, Truncation lag parameter = 3, p-value = 0.1

> kpss.test(diff_ls[[2]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  diff_ls[[2]]
KPSS Level = 0.18094, Truncation lag parameter = 3, p-value = 0.1

> kpss.test(diff_ls[[3]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  diff_ls[[3]]
KPSS Level = 0.10726, Truncation lag parameter = 3, p-value = 0.1

> kpss.test(diff_ls[[4]], null = c("Level", "Trend"), lshort = TRUE)

	KPSS Test for Level Stationarity

data:  diff_ls[[4]]
KPSS Level = 0.019579, Truncation lag parameter = 3, p-value = 0.1
```

**How does forecast vary on Non-Stationary and Stationarized series?**

The graphs below show a very simple forecast, which could certainly be improved, but the goal of this article is to focus on non-stationarity, not on the forecast itself. What matters here is the level at which the non-stationary and stationarized data generate the forecast. With non-stationary data, the level at which the forecast is generated carries some influence of past shocks and fluctuations, while the stationarized series is more resistant to them.

**Forecast on Non-Stationary series:**

**Forecast on Stationarized series:**

**Conclusion**

Handling non-stationarity is an important step in time series forecasting, and it has a huge impact on the accuracy and stability of the model. Many people do not understand the concept of non-stationarity, at least not completely, which leads to sub-optimal model development. Once the idea behind non-stationarity and its impact on modelling is well understood, however, it becomes much easier to work with non-stationary data and to develop a better model.
