layout: true

---
class: title-slide
background-image: url("figs/titlebg.png")
background-position: 100% 50%
background-size: 100% 100%

.content-box-green-trans[
.pull-left-1[

]
.pull-right-2[
# Forecast combinations:
## Modern perspectives and approaches
### Yanfei Kang
]]

---
class: center, hide-slide-number
background-image: url(figs/retail.jpeg)
background-size: cover

# Retail

---
class: center, hide-slide-number
background-image: url("figs/smart-meter.jpeg")
background-size: cover

# Smart meter

---
class: inverse, left, middle

*"If we know that learning algorithm `\(A\)` is superior to `\(B\)` averaged over some set of targets `\(F\)`, then the No Free Lunch theorems tell us that `\(B\)` must be superior to `\(A\)` if one averages over all targets not in `\(F\)`. This is true even if algorithm `\(B\)` is the algorithm of purely random guessing."*

.left[-- Wolpert (1996)]

---
class: inverse, left, middle

*"The No Free Lunch Theorem argues that, without having substantive information about the modeling problem, there is no single model that will always do better than any other model."*

.left[-- Kuhn and Johnson (2013)]

---
# Algorithm selection problem

- Using measurable features of the problem instances to **predict which algorithm is likely to perform best**.
- Applied to, e.g., classification, regression, constraint satisfaction, forecasting and optimization (Smith-Miles, 2009).

---
# Forecasting

- One automatic approach is to resort to statistical model selection, such as information criteria or cross-validation.
- Combining the forecasts across multiple models is often a better approach than identifying a single "best forecast".

---
# Perspectives of combination

1. Combining **multiple forecasts** derived from different methods for a given time series.
2. Combining the base forecasts of each series in a **hierarchy**.
3. Combining forecasts computed on different perspectives of the **same data**.

---
# Perspective 1

.content-box-green[Combining multiple methods (point forecasts)]

<img src="figure/unnamed-chunk-2-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

.content-box-green[Combining multiple methods (point forecasts)]

Suppose we have `\(M\)` forecasting models. `\(\mathbf{f}_h\)` denotes the `\(M\)`-vector of `\(h\)`-step-ahead forecasts, and `\(\mathbf{\Sigma}_h\)` is the `\(M \times M\)` covariance matrix of the `\(h\)`-step forecast errors.

- Simple combination: `\(f_{ch} = M^{-1}\mathbf{1}^\prime\mathbf{f}_h\)`.
- Linear combination: `\(f_{ch} = \mathbf{w}_h^\prime\mathbf{f}_h\)`.
- Optimal weights: `\(\mathbf{w}_h = \frac{\mathbf{\Sigma}_h^{-1}\mathbf{1}}{\mathbf{1}^\prime\mathbf{\Sigma}_h^{-1}\mathbf{1}}\)` (Bates and Granger, 1969). A toy R sketch follows shortly.

---
# Proportion of papers on forecast combination in WOS

<center>
<img src="figs/prop.png" height="450px"/>
</center>

---
# Perspective 1

.content-box-green[Combining multiple methods (point forecasts)]

- Regression-based weights: regress past observations on past individual forecasts.
- Performance-based weights: more weight on methods with better historical performance.
- Criteria-based weights: more weight on methods with lower AIC values.
- More variations, including Bayesian weights, nonlinear combinations, combining by learning, etc.
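---
# Perspective 1

.content-box-green[Combining multiple methods (point forecasts)]

A toy R sketch of the simple and optimal combinations above, assuming we already hold `\(h\)`-step forecast errors from `\(M = 3\)` models (all numbers are made up for illustration):

``` r
set.seed(42)
M <- 3
# Toy h-step forecast errors from three models over 50 past forecast origins
errors <- matrix(rnorm(50 * M, sd = c(1, 1.2, 1.5)), ncol = M, byrow = TRUE)
Sigma <- cov(errors)                     # estimate of the error covariance
ones <- rep(1, M)

w_simple <- ones / M                     # simple average
w_opt <- solve(Sigma, ones) / sum(solve(Sigma, ones))   # Bates-Granger weights

f_h <- c(102, 98, 105)                   # toy h-step forecasts from the 3 models
c(simple = sum(w_simple * f_h), optimal = sum(w_opt * f_h))
```

In practice `\(\mathbf{\Sigma}_h\)` is hard to estimate well, which is one reason the simple average is so hard to beat.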
---
# Perspective 1

.content-box-green[Combining multiple methods (probabilistic forecasts)]

<img src="figure/ensembles-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

.content-box-green[Combining multiple methods (probabilistic forecasts)]

<img src="figure/unnamed-chunk-3-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

.content-box-green[Combining multiple methods (probabilistic forecasts)]

<img src="figure/unnamed-chunk-4-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

.content-box-green[Combining multiple methods (probabilistic forecasts)]

<img src="figure/unnamed-chunk-5-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

Combining probabilistic forecasts: mix the forecast distributions from multiple models (**linear pooling**)

<img src="figure/unnamed-chunk-6-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

### Linear pooling: a finite mixture

- Forecasts obtained from a weighted mixture distribution of the component forecasts.
- Equivalent to a weighted average of the distribution functions
`$$F_{ch}(y) = \sum_{i = 1}^Mw_iF_{ih}(y),$$`
where `\(F_{ih}(y)\)` is the `\(h\)`-step forecast distribution for the `\(i\)`th method.
- Works best when individual models are over-confident and use different data sources.
- Allows us to accommodate skewness, kurtosis (fat tails) and multi-modality. A small R sketch follows shortly.

---
# Perspective 1

### Linear pooling: a finite mixture

<img src="figure/unnamed-chunk-7-1.svg" width="864" style="display: block; margin: auto;" />

---
# Perspective 1

.content-box-green[Combining multiple methods (probabilistic forecasts)]

- Weights from historical performance, represented by logarithmic scores, etc.
- Optimizing logarithmic scores of the combined probabilistic forecasts.
- Similarly, optimizing other scoring rules (such as the Brier score or CRPS) also works.
- Bayesian model averaging.
- Quantile forecast combinations.
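---
# Perspective 1

### Linear pooling: a finite mixture

A minimal R sketch of a two-component linear pool, assuming two hypothetical Gaussian forecast densities (weights and parameters are made up for illustration):

``` r
# Component h-step forecast distributions: N(100, 5^2) and N(115, 8^2)
w <- c(0.6, 0.4)                    # pool weights, summing to one
mu <- c(100, 115); sigma <- c(5, 8)

# Pooled CDF and density: weighted averages of the component ones
F_pool <- function(y) sum(w * pnorm(y, mu, sigma))
f_pool <- function(y) sum(w * dnorm(y, mu, sigma))

# The pool can be bimodal even though each component is unimodal
ys <- seq(80, 140, by = 0.5)
plot(ys, sapply(ys, f_pool), type = "l", xlab = "y", ylab = "density")
```

Note the pooled density is a mixture, not the density of an average of the component variables: linear pooling widens the forecast distribution rather than narrowing it.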
---
# Perspective 2

.content-box-green[Hierarchical forecasting]

.pull-left[

]
.pull-right[
- Multivariate forecasting problem.
- Variables satisfy linear constraints.
- Forecast reconciliation: combining forecasts.
]

---
# Perspective 3

.content-box-green[Wisdom of the data]

.pull-left[
<img src="figure/unnamed-chunk-8-1.svg" width="576" style="display: block; margin: auto;" />
]

--

.pull-right[
<img src="figure/unnamed-chunk-9-1.svg" width="576" style="display: block; margin: auto;" />
]

---
# Modern perspectives

- Combining multiple methods: feature-based forecasting.
- Hierarchical forecasting: optimal reconciliation with immutable forecasts.
- Wisdom of the data: improving forecasting by subsampling seasonal time series.

---
background-image: url(figs/generation.jpg)
background-size: cover

.pull-middle[
.content-box-green-trans[
.large.white[Feature-based forecasting]
]]

---
# Feature-based forecasting

- One way to forecast: manually inspect the time series `\(\rightarrow\)` understand its characteristics `\(\rightarrow\)` manually select an appropriate method according to the forecaster's experience.
- Not scalable for **large collections of series**.
- **An automatic framework** is needed!
- Pioneering studies: rule-based methods (e.g., Collopy and Armstrong, 1992; Wang, Smith-Miles, and Hyndman, 2009).
- More recently: **feature-based forecasting** via meta-learning.

---
.pull-left[
### Raw data

<img src="figure/unnamed-chunk-10-1.svg" width="504" style="display: block; margin: auto;" />
]
.pull-right[
### Feature representation

<img src="figure/unnamed-chunk-11-1.svg" width="504" style="display: block; margin: auto;" />
]

---
.pull-left[
### Raw data

<img src="figure/unnamed-chunk-12-1.svg" width="504" style="display: block; margin: auto;" />
]
.pull-right[
### Feature representation

<img src="figure/unnamed-chunk-13-1.svg" width="504" style="display: block; margin: auto;" />
]

---
# More features

.content-box-gray[
### STL decomposition based features

By STL, `\(x_t = S_t + T_t + R_t\)`.

1. Strength of trend: `\(F_1 = 1- \frac{\text{var}(R_t)}{\text{var}(x_t - S_t)}.\)`
2. Strength of seasonality: `\(F_2 = 1- \frac{\text{var}(R_t)}{\text{var}(x_t - T_t)}.\)`

A small R sketch of these two features follows shortly.
]

.content-box-green[
### More available at

- [`tsfeatures`](https://cran.r-project.org/web/packages/tsfeatures/index.html) (Hyndman, Kang, Montero-Manso, Talagala, Wang, Yang, and O'Hara-Wild, 2020)
- [`feasts`](https://feasts.tidyverts.org/) (O'Hara-Wild, Hyndman, and Wang, 2021)
]

---
.pull-left[
### Raw data

<img src="figure/unnamed-chunk-14-1.svg" width="504" style="display: block; margin: auto;" />
]
.pull-right[
### Feature extraction

``` r
library(tsfeatures)
M3.selected %>%
  tsfeatures(c("frequency", "stl_features", "entropy", "acf_features",
               "pacf_features", "heterogeneity", "hw_parameters",
               "lumpiness", "stability", "max_level_shift", "max_var_shift",
               "unitroot_pp", "unitroot_kpss", "hurst", "crossing_points"))
#> # A tibble: 100 × 41
#>    frequency nperiods seasonal_period trend    spike linearity curvature  e_acf1 e_acf10 seasonal_strength  peak trough
#>        <dbl>    <dbl>           <dbl> <dbl>    <dbl>     <dbl>     <dbl>   <dbl>   <dbl>             <dbl> <dbl>  <dbl>
#>  1         4        1               4 0.999  1.68e-9     5.41      3.53  -0.113   0.130             0.258      2      3
#>  2         4        1               4 0.989  2.32e-7    -4.88     -1.23   0.201   0.756             0.134      3      1
#>  3         4        1               4 0.950  3.63e-6     3.08     -3.72  -0.135   0.366             0.569      1      4
#>  4         4        1               4 0.951  1.27e-6    -6.05      3.21  -0.418   0.557             0.712      1      4
#>  5         4        1               4 0.994  5.14e-8     6.14      1.86  -0.260   0.242            0.0800      4      2
#>  6         4        1               4 0.989  5.43e-7     3.92     -3.83  0.0405   0.298             0.466      1      3
#>  7         4        1               4 0.999  1.08e-9     6.39    -0.515   0.103  0.0955             0.229      4      2
#>  8         4        1               4 0.610  1.23e-6    0.815   -0.0690  -0.488   0.350             0.979      4      1
#>  9         4        1               4 0.966  7.81e-7     5.79      1.11  -0.117   0.357             0.671      1      4
#> 10         4        1               4 0.958  2.71e-6     6.13    -0.488  -0.131   0.293             0.297      2      4
#> # ℹ 90 more rows
#> # ℹ 29 more variables: entropy <dbl>, x_acf1 <dbl>, x_acf10 <dbl>, diff1_acf1 <dbl>, diff1_acf10 <dbl>,
#> #   diff2_acf1 <dbl>, diff2_acf10 <dbl>, seas_acf1 <dbl>, x_pacf5 <dbl>, diff1x_pacf5 <dbl>, diff2x_pacf5 <dbl>,
#> #   seas_pacf <dbl>, arch_acf <dbl>, garch_acf <dbl>, arch_r2 <dbl>, garch_r2 <dbl>, alpha <dbl>, beta <dbl>,
#> #   gamma <dbl>, lumpiness <dbl>, stability <dbl>, max_level_shift <dbl>, time_level_shift <dbl>, max_var_shift <dbl>,
#> #   time_var_shift <dbl>, unitroot_pp <dbl>, unitroot_kpss <dbl>, hurst <dbl>, crossing_points <dbl>
```
]
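---
# More features

A minimal R sketch of the two STL-based strength features, computed directly from their definitions (using the built-in `AirPassengers` series purely as an example):

``` r
fit <- stl(log(AirPassengers), s.window = "periodic")
St <- fit$time.series[, "seasonal"]
Tt <- fit$time.series[, "trend"]
Rt <- fit$time.series[, "remainder"]
x <- St + Tt + Rt

F1 <- 1 - var(Rt) / var(x - St)   # strength of trend
F2 <- 1 - var(Rt) / var(x - Tt)   # strength of seasonality
c(trend = F1, seasonality = F2)
```

In practice `tsfeatures::stl_features()` returns these as `trend` and `seasonal_strength` (as in the tibble on the previous slide), along with many related features.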
---
# PCA `\(\Rightarrow\)` 2d

<center>
<img src="figs/instancespace.png" height="550px"/>
</center>

---
# The framework for feature-based forecasting

<center>
<img src="figs/forecastdiag.png" height="550px"/>
</center>

---
# Challenge 1: Training data

.pull-left[
We need diverse training data.

<center>
<img src="figs/forbes.png" height="350px"/>
</center>
]

--

.pull-right[
Our work:

.tiny.content-box-gray[
- Yanfei Kang, Rob J. Hyndman, Kate Smith-Miles (2017). Visualising forecasting algorithm performance using time series instance space, *International Journal of Forecasting* 33(2): 345-358.
- Yanfei Kang, Rob J. Hyndman, Feng Li (2020). GRATIS: GeneRAting TIme Series with diverse and controllable characteristics, *Statistical Analysis and Data Mining* 13(4): 354-376.
]
]

---
# Challenge 2: Features

- Manual choice and estimation of features, whose number varies from tens to thousands (Fulcher and Jones, 2014).
- Extracted from historical data, which are bound to *change*.
- Not robust when historical data are limited.

--

.content-box-gray[
- Yanfei Kang, Wei Cao, Fotios Petropoulos, Feng Li (2021). Forecast with forecasts: Diversity matters, *European Journal of Operational Research* 301(1): 180-190.
- Xixi Li, Yanfei Kang, Feng Li (2020). Forecasting with time series imaging, *Expert Systems with Applications* 160: 113680.
]

---
# Other challenges

.content-box-green[Intermittent data]

.content-box-red[Uncertainty estimation]

.content-box-gray[Meta-learners]

--

.content-box-gray.tiny[
- Li Li, Yanfei Kang, Fotios Petropoulos, Feng Li (2022). Feature-based intermittent demand forecast combinations: bias, accuracy and inventory implications, *International Journal of Production Research* 61(22): 7557-7572.
- Evangelos Theodorou, Shengjie Wang, Yanfei Kang, et al. (2021). Exploring the representativeness of the M5 competition data, *International Journal of Forecasting* 38(4): 1500-1506.
- Xiaoqian Wang, Yanfei Kang, Fotios Petropoulos, Feng Li (2021). The uncertainty estimation of feature-based forecast combinations, *Journal of the Operational Research Society* 73(5): 979-993.
- Li Li, Yanfei Kang, Feng Li (2022). Bayesian forecast combination using time-varying features, *International Journal of Forecasting* 39(3): 1187-1302.
- Thiyanga S. Talagala, Feng Li, Yanfei Kang (2021). FFORMPP: Feature-based forecast model performance prediction, *International Journal of Forecasting* 38(3): 920-943.
- Li Li, Feng Li, Yanfei Kang (2023). Forecasting large collections of time series: feature-based methods. In *Forecasting with Artificial Intelligence: Theory and Applications*, pp. 251-276. Cham: Springer Nature Switzerland.
]

---
background-image: url(figs/generation.jpg)
background-size: cover

.pull-middle[
.content-box-green-trans[
.large.white[Feature-based forecasting]

.white.font110[*Training data generation*]
]]

---
# Gaussian Mixture Autoregressive (MAR) models

.content-box-gray[
### `\(\text{MAR}(K;p_1, \cdots, p_K)\)` model

`\(x_t=\phi_{k0}+\phi_{k1}x_{t-1}+\cdots+\phi_{kp_k}x_{t-p_k}+\epsilon_t, \epsilon_t \sim N(0, \sigma_k^2)\)` <br> with probability `\(\alpha_k\)`, where `\(\sum_{k=1}^K \alpha_k= 1\)`.
]

.content-box-green[
### Merits of MAR models ✨

- Consist of multiple stationary or non-stationary autoregressive components.
- Possible to capture many (or any) time series features, e.g., non-stationarity, nonlinearity, non-Gaussianity, cycles and heteroskedasticity.
]

---
class: split-two

# What do they look like?

.pull-left[
### Yearly

``` r
*library(gratis)
library(feasts)
set.seed(1)
*mar_model(seasonal_periods = 1) %>%
  generate(length = 30, nseries = 3) %>%
  autoplot(value) +
  theme(legend.position = "none")
```
]
.pull-right[
<img src="figure/unnamed-chunk-16-1.svg" width="504" style="display: block; margin: auto;" />
]
---
class: split-two

# What do they look like?

.pull-left[
### Quarterly

``` r
library(gratis)
library(feasts)
set.seed(2)
*mar_model(seasonal_periods = 4) %>%
  generate(length = 40, nseries = 3) %>%
  autoplot(value) +
  theme(legend.position = "none")
```
]
.pull-right[
<img src="figure/unnamed-chunk-17-1.svg" width="504" style="display: block; margin: auto;" />
]

---
class: split-two

# What do they look like?

.pull-left[
### Monthly

``` r
library(gratis)
library(feasts)
set.seed(123)
*mar_model(seasonal_periods = 12) %>%
  generate(length = 120, nseries = 3) %>%
  autoplot(value) +
  theme(legend.position = "none")
```
]
.pull-right[
<img src="figure/unnamed-chunk-18-1.svg" width="504" style="display: block; margin: auto;" />
]

---
# Visualisation in 2D space

<center>
<img src="figs/coverage.png" height="550px"/>
</center>

---
background-image: url(figs/generation.jpg)
background-size: cover

.pull-middle[
.content-box-green-trans[
.large.white[Feature-based forecasting]

.white.font110[*Automatic feature extraction*]
]]

---
# How to use diversity to forecast?

- Input: the forecasts from a pool of models (a small R sketch of such a pool follows).
- Measure their diversity, a **feature** that has been identified as a decisive factor in forecast combination (Thomson, Pollock, Onkal, and Gonul, 2019; Lichtendahl and Winkler, 2020).
- Through meta-learning, we link the diversity of the forecasts with their out-of-sample performance to fit combination models based on diversity.

---
# Diversity matters?

.pull-left[
<img src="figure/unnamed-chunk-20-1.svg" width="576" style="display: block; margin: auto;" />
]
.pull-right[
```
#> # A tibble: 9 × 2
#>   Method            MASE
#>   <chr>            <dbl>
#> 1 nnetar_forec     0.351
#> 2 auto_arima_forec 0.363
#> 3 tbats_forec      0.408
#> 4 snaive_forec     0.412
#> 5 ets_forec        0.435
#> 6 naive_forec      0.53
#> 7 rw_drift_forec   0.603
#> 8 thetaf_forec     0.744
#> 9 stlm_ar_forec    0.822
```
]
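---
# Diversity matters?

A minimal sketch of how such a pool of forecasts can be produced with the `forecast` package (a deliberately small pool on a built-in example series; the experiments reported here use the nine-method pool in the table):

``` r
library(forecast)
train <- window(USAccDeaths, end = c(1977, 12))
test <- window(USAccDeaths, start = c(1978, 1))
h <- length(test)

pool <- list(
  ets_forec    = forecast(ets(train), h = h),
  arima_forec  = forecast(auto.arima(train), h = h),
  snaive_forec = snaive(train, h = h),
  thetaf_forec = thetaf(train, h = h)
)
fc <- sapply(pool, function(f) f$mean)   # h x M matrix of point forecasts
sapply(pool, function(f) accuracy(f, test)["Test set", "MASE"])
```

Each column of `fc` is one method's forecast trace; diversity is then computed from pairwise differences between columns, as formalised later.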
---
# Combining two methods

.pull-left[

]
.pull-right[
```
#> # A tibble: 9 × 2
#>   Method            MASE
#>   <chr>            <dbl>
#> 1 nnetar_forec     0.351
#> 2 auto_arima_forec 0.363
#> 3 tbats_forec      0.408
#> 4 snaive_forec     0.412
#> 5 ets_forec        0.435
#> 6 naive_forec      0.53
#> 7 rw_drift_forec   0.603
#> 8 thetaf_forec     0.744
#> 9 stlm_ar_forec    0.822
```
]
---
# Combining two methods

.pull-left[

]
.pull-right[
```
#> # A tibble: 9 × 2
#>   Method            MASE
#>   <chr>            <dbl>
#> 1 nnetar_forec     0.351
#> 2 auto_arima_forec 0.363
#> 3 tbats_forec      0.408
#> 4 snaive_forec     0.412
#> 5 ets_forec        0.435
#> 6 naive_forec      0.53
#> 7 rw_drift_forec   0.603
#> 8 thetaf_forec     0.744
#> 9 stlm_ar_forec    0.822
```
]
---
# Combining two methods

.pull-left[

]
.pull-right[
```
#> # A tibble: 9 × 2
#>   Method            MASE
#>   <chr>            <dbl>
#> 1 nnetar_forec     0.351
#> 2 auto_arima_forec 0.363
#> 3 tbats_forec      0.408
#> 4 snaive_forec     0.412
#> 5 ets_forec        0.435
#> 6 naive_forec      0.53
#> 7 rw_drift_forec   0.603
#> 8 thetaf_forec     0.744
#> 9 stlm_ar_forec    0.822
```
]

---
# Yes, diversity matters!

<img src="figure/unnamed-chunk-28-1.svg" width="864" style="display: block; margin: auto;" />

---
# Yes, diversity matters!

<img src="figure/unnamed-chunk-29-1.svg" width="864" style="display: block; margin: auto;" />

---
# Measuring diversity

.small[
$$
`\begin{aligned}
MSE_{comb} & = \frac{1}{H} \sum_{h=1}^{H}\left( \sum_{i=1}^{M}w_if_{ih} - y_{T+h}\right)^2 \\
& = \frac{1}{H}\sum_{h=1}^{H}\left[ \sum_{i=1}^{M}w_i(f_{ih} - y_{T+h})^2 - \sum_{i=1}^{M}w_i(f_{ih} - f_{ch})^2\right] \\
& = \frac{1}{H}\sum_{h=1}^{H}\left[\sum_{i=1}^{M}w_i(f_{ih} - y_{T+h})^2 - \sum_{i=1}^{M-1} \sum_{j=i+1}^{M}w_iw_j(f_{ih}-f_{jh})^2\right] \\
& = \sum_{i=1}^{M}w_i MSE_i - \sum_{i=1}^{M-1} \sum_{j=i+1}^{M}w_iw_jDiv_{i,j},
\end{aligned}`
$$

`\(H\)` is the forecasting horizon, `\(M\)` is the number of forecasting methods, and `\(T\)` is the historical length.
]

---
# Measuring diversity

$$
`\begin{aligned}
Div_{i,j} & = \frac{1}{H} \sum_{h=1}^{H} (f_{ih}-f_{jh})^2, \\
sDiv_{i,j} & = \frac{\sum\limits_{h=1}^H(f_{ih}-f_{jh})^2}{\sum\limits_{i=1}^{M-1}\sum\limits_{j=i+1}^M\left[\sum\limits_{h=1}^H(f_{ih}-f_{jh})^2\right]}.
\end{aligned}`
$$

A small R sketch of these quantities follows.
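---
# Measuring diversity

A minimal R sketch of `\(Div_{i,j}\)` and `\(sDiv_{i,j}\)`, computed directly from their definitions for an `\(H \times M\)` matrix of point forecasts (such as `fc` from the earlier pool sketch):

``` r
pairwise_diversity <- function(fc) {
  M <- ncol(fc)
  Div <- matrix(0, M, M, dimnames = list(colnames(fc), colnames(fc)))
  for (i in 1:(M - 1)) {
    for (j in (i + 1):M) {
      # mean squared difference between the two forecast traces
      Div[i, j] <- Div[j, i] <- mean((fc[, i] - fc[, j])^2)
    }
  }
  sDiv <- Div / sum(Div[upper.tri(Div)])   # scale by the sum over all pairs
  list(Div = Div, sDiv = sDiv)
}
```

The scaled version `sDiv` makes diversity comparable across series with different magnitudes, which is what feeds the meta-learner.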
---
# Diversity for forecast combination

<img src="figs/diversity-extraction.jpg"/>

---
# Method pool

<center>
<img src="figs/methods.png" height="500px"/>
</center>

---
# Meta-learner

- XGBoost algorithm (a simplified sketch follows).
- The following optimization problem is solved to obtain the combination weights
`$$\text{argmin}_{w} \sum\limits_{n = 1}^N\sum_{i=1}^{M} w({Div_n})_i \times \text{Err}_{ni},$$`
where `\(Div_n\)` indicates the forecast diversity of the `\(n\)`-th time series, `\(w({Div_n})_i\)` is the combination weight assigned to method `\(i\)` for the `\(n\)`-th time series based on the diversity, and `\(\text{Err}_{ni}\)` is the error produced by method `\(i\)` for the `\(n\)`-th time series.
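---
# Meta-learner

A simplified, hedged sketch of the idea (not the custom XGBoost objective used in the paper): predict each method's error from the diversity features, then turn the predictions into weights via a softmax. The function names and the softmax shortcut are illustrative assumptions.

``` r
library(xgboost)
# div: N x P matrix of (s)Div features; err: N x M matrix of out-of-sample errors
fit_metalearner <- function(div, err, nrounds = 100) {
  lapply(seq_len(ncol(err)), function(i)
    xgboost(data = div, label = err[, i], nrounds = nrounds,
            objective = "reg:squarederror", verbose = 0))
}
predict_weights <- function(models, div_new) {
  pred <- sapply(models, predict, newdata = div_new)   # predicted errors
  w <- exp(-pred)
  w / sum(w)                                           # smaller error, larger weight
}
```

The published method instead trains a single XGBoost learner whose objective directly minimizes the weighted combination error above.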
---
# Forecasting M4

100,000 series with various frequencies.

<center>
<img src="figs/XGBoost.png" width="800px"/>
</center>

---
# Forecasting FMCG

Sales of fast moving consumer goods (FMCG) from a major North American food manufacturer.

- Monthly data.
- April 2013 to June 2017.

<center>
<img src="figs/FMCG.jpg" width="800px"/>
</center>

---
background-image: url(figs/generation.jpg)
background-size: cover

.pull-middle[
.content-box-green-trans[
.large.white[Feature-based forecasting]

.white.font110[*Intermittent demand*]
]]

---
# Intermittent demand

<img src="figure/unnamed-chunk-30-1.svg" width="864" style="display: block; margin: auto;" />

- Several periods of zero demand.
- Ubiquitous in practice: retailing, aerospace, etc.
- Two sources of uncertainty:
  - Sporadic demand occurrence.
  - Demand arrival timing.

---
# Intermittent demand: literature

- Parametric methods: Croston, SBA, TSB, etc. (a minimal Croston sketch follows).
- Non-parametric methods: bootstrapping methods, overlapping and non-overlapping aggregation methods, etc.
- Temporal aggregation: ADIDA, IMAPA, etc.
- Machine learning methods.

A more detailed review can be found in Petropoulos, Apiletti, Assimakopoulos, Babai, Barrow, Taieb, Bergmeir, Bessa, Bijak, Boylan, and others (2022).
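---
# Intermittent demand: Croston's method

A minimal sketch of Croston's idea, written from its textbook definition rather than any particular package: smooth the non-zero demand sizes and the inter-demand intervals separately with SES, then forecast their ratio.

``` r
croston_sketch <- function(y, alpha = 0.1) {
  idx <- which(y > 0)
  z <- y[idx]                      # non-zero demand sizes
  p <- diff(c(0, idx))             # inter-demand intervals
  z_hat <- z[1]; p_hat <- p[1]
  for (t in seq_along(z)[-1]) {    # SES updates at demand periods only
    z_hat <- alpha * z[t] + (1 - alpha) * z_hat
    p_hat <- alpha * p[t] + (1 - alpha) * p_hat
  }
  z_hat / p_hat                    # flat demand-rate forecast
}
croston_sketch(c(0, 3, 0, 0, 2, 0, 4, 0, 0, 0, 1))
```

SBA multiplies this forecast by `\(1 - \alpha/2\)` to correct Croston's bias, while TSB instead smooths the demand probability at every period.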
---
# Forecast combination for intermittent demand

1. FIDE: Feature-based Intermittent DEmand forecasting
2. DIVIDE: DIVersity-based Intermittent DEmand forecasting

---
# Intermittent demand features

<center>
<img src="figs/features.png" width="900px"/>
</center>

---
class: split-50
# Forecasting method pool

.column[.content.vmiddle[
- Naive
- Seasonal Naive (sNaive)
- Simple Exponential Smoothing (SES)
- Moving Averages (MA)
- AutoRegressive Integrated Moving Average (ARIMA)
- ExponenTial Smoothing (ETS)
]]
.column[.content.vmiddle[
- Croston's method (CRO)
- Optimized Croston's method (optCro)
- SBA
- TSB
- ADIDA
- IMAPA
]
]

---
# FIDE and DIVIDE framework

<center>
<img src="figs/framework.jpg" width="750px"/>
</center>

---
# Application to Royal Air Force (RAF) data

5000 monthly time series with high intermittence.

<center>
<img src="figs/RAFresults.png" width="600px"/>
</center>

---
background-image: url(figs/generation.jpg)
background-size: cover

.pull-middle[
.content-box-green-trans[
.large.white[Hierarchical forecasting]
]]

---
# Hierarchical Time Series

.pull-left[

]
.pull-right[
- Have base forecasts at all nodes. How can we make these coherent?
- Bottom-up ignores the top nodes.
- Top-down ignores the bottom nodes.
]

---
background-image: url(figs/generation.jpg)
background-size: cover

.pull-middle[
.content-box-green-trans[
.large.white[Wisdom of the data]

.white.font110[*Subsampling seasonal time series*]
]]

---
# Framework

.pull-left[
<center>
<img src="figs/foss.png" height="500px"/>
</center>
]
.pull-right[
.content-box-gray.tiny[
- Xixi Li, Fotios Petropoulos, Yanfei Kang (2022). Improving forecasting by subsampling seasonal time series, *International Journal of Production Research* 61(3): 976-992.
- Yanfei Kang, Evangelos Spiliotis, Fotios Petropoulos, Nikolaos Athiniotis, Feng Li, Vassilios Assimakopoulos (2021). Déjà vu: A data-centric forecasting approach through time series cross-similarity, *Journal of Business Research* 132: 719-731.
]
]

---
# Conclusions

- Different perspectives of combining forecasts.
  - Combining forecasts from multiple models.
  - Forecast reconciliation.
  - Wisdom of the data.
- Future directions.
  - Selecting forecasts to be combined.
  - Advancing the theory of nonlinear combinations.
  - Focusing more on probabilistic forecast combinations.

---
class: center
background-image: url(figs/review.jpg)
background-size: cover

---
class: center
background-image: url(figs/ftp.jpg)
background-size: cover

---
class: center
background-image: url(figs/ftp2.png)
background-size: cover

---
class: center
background-image: url(figs/fppcn.png)
background-size: cover

---
background-image: url("assets/titlepage_buaa.png")
background-position: 100% 50%
background-size: 100% 100%

.font300.bold[
**Thanks!**
]

<br>
<br>
<br>

.content-box-gray.font160[
[github.com/kl-lab](https://github.com/kl-lab)

[yanfei.site](https://yanfei.site)
[kllab.org](https://kllab.org) ]