Commit 567181b
Solved Meta main #524 (#530)
* Solved Meta main #524

* Restore the 'probability-interface' line as per feedback in PR review #530 (comment)

* Fixes #524: Add Quarto Meta Variables

  - Added `<meta doc-base-url>` to replace `..` or `../..` usage across tutorials.
  - As per feedback from @penelopeysm, added an anchor to the specific part of the docs for the tutorial.
  - Included the `probability-interface` tutorial in the context.
  - Ensured no unnecessary whitespace changes to keep the pull request clean and focused.

* Fixes #524: Added Necessary Quarto Meta Variables

  - Implemented all required Quarto meta variables, as suggested by @yebai.
  - These changes include the addition of all necessary meta variables identified up to this point.
  - Future adjustments can be made as needed to accommodate any further requirements.

* Fix docs base link

* Remove trailing slashes, add prob interface variable

* Re-add site-url variable

* Use doc-base-url throughout

---------

Co-authored-by: Penelope Yong <[email protected]>
1 parent b53df24 commit 567181b

14 files changed: +106 -70 lines changed

_quarto.yml

Lines changed: 44 additions & 8 deletions
````diff
@@ -12,7 +12,7 @@ website:
 site-url: https://turinglang.org/
 site-path: "/"
 favicon: "assets/favicon.ico"
-search:
+search:
 location: navbar
 type: overlay
 navbar:
@@ -50,7 +50,7 @@ website:
 sidebar:
 - text: documentation
 collapse-level: 1
-contents:
+contents:
 - section: "Users"
 # href: tutorials/index.qmd, This page will be added later so keep this line commented
 contents:
@@ -59,7 +59,7 @@ website:
 
 - section: "Usage Tips"
 collapse-level: 1
-contents:
+contents:
 - tutorials/docs-10-using-turing-autodiff/index.qmd
 - tutorials/usage-custom-distribution/index.qmd
 - tutorials/usage-probability-interface/index.qmd
@@ -72,7 +72,7 @@ website:
 - tutorials/docs-16-using-turing-external-samplers/index.qmd
 
 - section: "Tutorials"
-contents:
+contents:
 - tutorials/00-introduction/index.qmd
 - text: Gaussian Mixture Models
 href: tutorials/01-gaussian-mixture-model/index.qmd
@@ -129,7 +129,7 @@ website:
 background: "#073c44"
 left: |
 Turing is created by <a href="http://mlg.eng.cam.ac.uk/hong/" target="_blank">Hong Ge</a>, and lovingly maintained by the <a href="https://github.com/TuringLang/Turing.jl/graphs/contributors" target="_blank">core team</a> of volunteers. <br>
-The contents of this website are © 2024 under the terms of the <a href="https://github.com/TuringLang/Turing.jl/blob/master/LICENCE" target="_blank">MIT License</a>.
+The contents of this website are © 2024 under the terms of the <a href="https://github.com/TuringLang/Turing.jl/blob/master/LICENCE" target="_blank">MIT License</a>.
 
 right:
 - icon: twitter
@@ -162,6 +162,42 @@ execute:
 
 # Global Variables to use in any qmd files using:
 # {{< meta site-url >}}
-site-url: https://turinglang.org/
-get-started: docs/tutorials/docs-00-getting-started/
-tutorials-intro: docs/tutorials/00-introduction/
+
+site-url: https://turinglang.org
+doc-base-url: https://turinglang.org/docs
+
+get-started: tutorials/docs-00-getting-started
+tutorials-intro: tutorials/00-introduction
+gaussian-mixture-model: tutorials/01-gaussian-mixture-model
+logistic-regression: tutorials/02-logistic-regression
+bayesian-neural-network: tutorials/03-bayesian-neural-network
+hidden-markov-model: tutorials/04-hidden-markov-model
+linear-regression: tutorials/05-linear-regression
+infinite-mixture-model: tutorials/06-infinite-mixture-model
+poisson-regression: tutorials/07-poisson-regression
+multinomial-logistic-regression: tutorials/08-multinomial-logistic-regression
+variational-inference: tutorials/09-variational-inference
+bayesian-differential-equations: tutorials/10-bayesian-differential-equations
+probabilistic-pca: tutorials/11-probabilistic-pca
+gplvm: tutorials/12-gplvm
+seasonal-time-series: tutorials/13-seasonal-time-series
+contexts: tutorials/16-contexts
+miniature: tutorial/14-minituring
+contributing-guide: tutorials/docs-01-contributing-guide
+using-turing-abstractmcmc: tutorials/docs-04-for-developers-abstractmc-turing
+using-turing-compiler: tutorials/docs-05-for-developers-compiler
+using-turing-interface: tutorials/docs-06-for-developers-interface
+using-turing-variational-inference: tutorials/docs-07-for-developers-variational-inference
+using-turing-advanced: tutorials/tutorials/docs-09-using-turing-advanced
+using-turing-autodiff: tutorials/docs-10-using-turing-autodiff
+using-turing-dynamichmc: tutorials/docs-11-using-turing-dynamichmc
+using-turing: tutorials/docs-12-using-turing-guide
+using-turing-performance-tips: tutorials/docs-13-using-turing-performance-tips
+using-turing-sampler-viz: tutorials/docs-15-using-turing-sampler-viz
+using-turing-external-samplers: tutorials/docs-16-using-turing-external-samplers
+using-turing-implementing-samplers: tutorials/docs-17-implementing-samplers
+using-turing-mode-estimation: tutorials/docs-17-mode-estimation
+usage-probability-interface: tutorials/usage-probability-interface
+usage-custom-distribution: tutorials/tutorials/usage-custom-distribution
+usage-generated-quantities: tutorials/tutorials/usage-generated-quantities
+usage-modifying-logprob: tutorials/tutorials/usage-modifying-logprob
````
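The comment kept at the top of this block describes the mechanism: any of these keys can be referenced from a `.qmd` file with Quarto's `{{< meta ... >}}` shortcode, which is exactly what the tutorial diffs below do. A minimal sketch of the pattern, with illustrative link text (the variable names come from the block above):

```markdown
See the [autodiff guide]( {{< meta doc-base-url >}}/{{< meta using-turing-autodiff >}} ) for details.
<!-- with the values above this renders as
     https://turinglang.org/docs/tutorials/docs-10-using-turing-autodiff -->
```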
tutorials/01-gaussian-mixture-model/index.qmd

Lines changed: 12 additions & 12 deletions
````diff
@@ -130,7 +130,7 @@ chains = sample(model, sampler, MCMCThreads(), nsamples, nchains, discard_initia
 
 ::: {.callout-warning collapse="true"}
 ## Sampling With Multiple Threads
-The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
+The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
 will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.](https://turinglang.org/dev/docs/using-turing/guide/#sampling-multiple-chains)
 :::
 
@@ -161,7 +161,7 @@ It can happen that the modes of $\mu_1$ and $\mu_2$ switch between chains.
 For more information see the [Stan documentation](https://mc-stan.org/users/documentation/case-studies/identifying_mixture_models.html). This is because it's possible for either model parameter $\mu_k$ to be assigned to either of the corresponding true means, and this assignment need not be consistent between chains.
 
 That is, the posterior is fundamentally multimodal, and different chains can end up in different modes, complicating inference.
-One solution here is to enforce an ordering on our $\mu$ vector, requiring $\mu_k > \mu_{k-1}$ for all $k$.
+One solution here is to enforce an ordering on our $\mu$ vector, requiring $\mu_k > \mu_{k-1}$ for all $k$.
 `Bijectors.jl` [provides](https://turinglang.org/Bijectors.jl/dev/transforms/#Bijectors.OrderedBijector) an easy transformation (`ordered()`) for this purpose:
 
 ```{julia}
@@ -255,7 +255,7 @@ scatter(
 
 
 ## Marginalizing Out The Assignments
-We can write out the marginal posterior of (continuous) $w, \mu$ by summing out the influence of our (discrete) assignments $z_i$ from
+We can write out the marginal posterior of (continuous) $w, \mu$ by summing out the influence of our (discrete) assignments $z_i$ from
 our likelihood:
 $$
 p(y \mid w, \mu ) = \sum_{k=1}^K w_k p_k(y \mid \mu_k)
@@ -299,11 +299,11 @@ end
 ::: {.callout-warning collapse="false"}
 ## Manually Incrementing Probablity
 
-When possible, use of `Turing.@addlogprob!` should be avoided, as it exists outside the
+When possible, use of `Turing.@addlogprob!` should be avoided, as it exists outside the
 usual structure of a Turing model. In most cases, a custom distribution should be used instead.
 
 Here, the next section demonstrates the perfered method --- using the `MixtureModel` distribution we have seen already to
-perform the marginalization automatically.
+perform the marginalization automatically.
 :::
 
 
@@ -312,8 +312,8 @@ perform the marginalization automatically.
 We can use Turing's `~` syntax with anything that `Distributions.jl` provides `logpdf` and `rand` methods for. It turns out that the
 `MixtureModel` distribution it provides has, as its `logpdf` method, `logpdf(MixtureModel([Component_Distributions], weight_vector), Y)`, where `Y` can be either a single observation or vector of observations.
 
-In fact, `Distributions.jl` provides [many convenient constructors](https://juliastats.org/Distributions.jl/stable/mixture/) for mixture models, allowing further simplification in common special cases.
-
+In fact, `Distributions.jl` provides [many convenient constructors](https://juliastats.org/Distributions.jl/stable/mixture/) for mixture models, allowing further simplification in common special cases.
+
 For example, when mixtures distributions are of the same type, one can write: `~ MixtureModel(Normal, [(μ1, σ1), (μ2, σ2)], w)`, or when the weight vector is known to allocate probability equally, it can be ommited.
 
 The `logpdf` implementation for a `MixtureModel` distribution is exactly the marginalization defined above, and so our model becomes simply:
@@ -330,7 +330,7 @@ end
 model = gmm_marginalized(x);
 ```
 
-As we've summed out the discrete components, we can perform inference using `NUTS()` alone.
+As we've summed out the discrete components, we can perform inference using `NUTS()` alone.
 
 ```{julia}
 #| output: false
@@ -352,21 +352,21 @@ let
 end
 ```
 
-`NUTS()` significantly outperforms our compositional Gibbs sampler, in large part because our model is now Rao-Blackwellized thanks to
+`NUTS()` significantly outperforms our compositional Gibbs sampler, in large part because our model is now Rao-Blackwellized thanks to
 the marginalization of our assignment parameter.
 
 ```{julia}
 plot(chains[["μ[1]", "μ[2]"]], legend=true)
 ```
 
 ## Inferred Assignments - Marginalized Model
-As we've summed over possible assignments, the associated parameter is no longer available in our chain.
+As we've summed over possible assignments, the associated parameter is no longer available in our chain.
 This is not a problem, however, as given any fixed sample $(\mu, w)$, the assignment probability — $p(z_i \mid y_i)$ — can be recovered using Bayes rule:
 $$
 p(z_i \mid y_i) = \frac{p(y_i \mid z_i) p(z_i)}{\sum_{k = 1}^K \left(p(y_i \mid z_i) p(z_i) \right)}
 $$
 
-This quantity can be computed for every $p(z = z_i \mid y_i)$, resulting in a probability vector, which is then used to sample
+This quantity can be computed for every $p(z = z_i \mid y_i)$, resulting in a probability vector, which is then used to sample
 posterior predictive assignments from a categorial distribution.
 For details on the mathematics here, see [the Stan documentation on latent discrete parameters](https://mc-stan.org/docs/stan-users-guide/latent-discrete.html).
 ```{julia}
@@ -399,7 +399,7 @@ chains = sample(model, sampler, MCMCThreads(), nsamples, nchains, discard_initia
 Given a sample from the marginalized posterior, these assignments can be recovered with:
 
 ```{julia}
-assignments = mean(generated_quantities(gmm_recover(x), chains));
+assignments = mean(generated_quantities(gmm_recover(x), chains));
 ```
 
 ```{julia}
````
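The quoted tutorial text explains that `~` works with `Distributions.jl`'s `MixtureModel`, whose `logpdf` performs exactly this marginalization. A minimal sketch of that pattern, with made-up priors and fixed unit variances rather than the tutorial's actual model:

```julia
using Turing, Distributions

# Two-component GMM with the discrete assignments z_i marginalized out:
# MixtureModel's logpdf sums over components, so NUTS alone can be used.
@model function gmm_marginalized_sketch(x)
    μ1 ~ Normal(-3, 1)
    μ2 ~ Normal(3, 1)
    w ~ Dirichlet(2, 1.0)      # mixture weights
    for i in eachindex(x)
        # Convenience constructor mentioned in the tutorial text:
        # MixtureModel(Normal, [(mean, std), ...], weights)
        x[i] ~ MixtureModel(Normal, [(μ1, 1.0), (μ2, 1.0)], w)
    end
end

# chain = sample(gmm_marginalized_sketch(randn(100)), NUTS(), 1_000)
```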
tutorials/04-hidden-markov-model/index.qmd

Lines changed: 2 additions & 2 deletions
````diff
@@ -14,7 +14,7 @@ This tutorial illustrates training Bayesian [Hidden Markov Models](https://en.wi
 
 In this tutorial, we assume there are $k$ discrete hidden states; the observations are continuous and normally distributed - centered around the hidden states. This assumption reduces the number of parameters to be estimated in the emission matrix.
 
-Let's load the libraries we'll need. We also set a random seed (for reproducibility) and the automatic differentiation backend to forward mode (more [here](https://turinglang.org/dev/docs/using-turing/autodiff) on why this is useful).
+Let's load the libraries we'll need. We also set a random seed (for reproducibility) and the automatic differentiation backend to forward mode (more [here]( {{<meta doc-base-url>}}/{{<meta using-turing-autodiff>}} ) on why this is useful).
 
 ```{julia}
 # Load libraries.
@@ -125,7 +125,7 @@ We will use a combination of two samplers ([HMC](https://turinglang.org/dev/docs
 
 In this case, we use HMC for `m` and `T`, representing the emission and transition matrices respectively. We use the Particle Gibbs sampler for `s`, the state sequence. You may wonder why it is that we are not assigning `s` to the HMC sampler, and why it is that we need compositional Gibbs sampling at all.
 
-The parameter `s` is not a continuous variable. It is a vector of **integers**, and thus Hamiltonian methods like HMC and [NUTS](https://turinglang.org/dev/docs/library/#Turing.Inference.NUTS) won't work correctly. Gibbs allows us to apply the right tools to the best effect. If you are a particularly advanced user interested in higher performance, you may benefit from setting up your Gibbs sampler to use [different automatic differentiation](https://turinglang.org/dev/docs/using-turing/autodiff#compositional-sampling-with-differing-ad-modes) backends for each parameter space.
+The parameter `s` is not a continuous variable. It is a vector of **integers**, and thus Hamiltonian methods like HMC and [NUTS](https://turinglang.org/dev/docs/library/#Turing.Inference.NUTS) won't work correctly. Gibbs allows us to apply the right tools to the best effect. If you are a particularly advanced user interested in higher performance, you may benefit from setting up your Gibbs sampler to use [different automatic differentiation]( {{<meta doc-base-url>}}/{{<meta using-turing-autodiff>}}#compositional-sampling-with-differing-ad-modes) backends for each parameter space.
 
 Time to run our sampler.
 
````
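The changed paragraph motivates compositional Gibbs sampling: HMC for the continuous emission and transition parameters, Particle Gibbs for the integer-valued state sequence `s`. A rough sketch of such a sampler call, using the positional `Gibbs`/`HMC`/`PG` constructors from the Turing versions these docs target; the step size, leapfrog count, particle count, and the `hmm_model` name are placeholders:

```julia
using Turing

# HMC updates the continuous parameters m and T; Particle Gibbs updates the
# discrete state sequence s, which gradient-based samplers cannot handle.
sampler = Gibbs(HMC(0.01, 50, :m, :T), PG(20, :s))

# chain = sample(hmm_model, sampler, 500)   # hmm_model defined as in the tutorial
```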
tutorials/05-linear-regression/index.qmd

Lines changed: 1 addition & 1 deletion
````diff
@@ -164,7 +164,7 @@ end
 
 ## Comparing to OLS
 
-A satisfactory test of our model is to evaluate how well it predicts. Importantly, we want to compare our model to existing tools like OLS. The code below uses the [GLM.jl]() package to generate a traditional OLS multiple regression model on the same data as our probabilistic model.
+A satisfactory test of our model is to evaluate how well it predicts. Importantly, we want to compare our model to existing tools like OLS. The code below uses the [GLM.jl](https://juliastats.org/GLM.jl/stable/) package to generate a traditional OLS multiple regression model on the same data as our probabilistic model.
 
 ```{julia}
 # Import the GLM package.
````
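The repaired link points to GLM.jl, which supplies the OLS baseline. A small, self-contained sketch of an OLS fit with GLM.jl on toy data (the column names and data here are made up, not the tutorial's dataset):

```julia
using DataFrames, GLM

# Toy data purely for illustration.
df = DataFrame(y = randn(100), x1 = randn(100), x2 = randn(100))

# Ordinary least squares multiple regression, analogous to the baseline
# the tutorial compares its Bayesian model against.
ols = lm(@formula(y ~ x1 + x2), df)
coef(ols)    # fitted intercept and slopes
```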
tutorials/06-infinite-mixture-model/index.qmd

Lines changed: 2 additions & 2 deletions
````diff
@@ -81,7 +81,7 @@ x &\sim \mathrm{Normal}(\mu_z, \Sigma)
 \end{align}
 $$
 
-which resembles the model in the [Gaussian mixture model tutorial](https://turinglang.org/stable/tutorials/01-gaussian-mixture-model/) with a slightly different notation.
+which resembles the model in the [Gaussian mixture model tutorial]( {{<meta doc-base-url>}}/{{<meta gaussian-mixture-model>}}) with a slightly different notation.
 
 ## Infinite Mixture Model
 
@@ -149,7 +149,7 @@ end
 ```{julia}
 using Plots
 
-# Plot the cluster assignments over time
+# Plot the cluster assignments over time
 @gif for i in 1:Nmax
 scatter(
 collect(1:i),
````
tutorials/08-multinomial-logistic-regression/index.qmd

Lines changed: 2 additions & 2 deletions
````diff
@@ -144,8 +144,8 @@ chain
 
 ::: {.callout-warning collapse="true"}
 ## Sampling With Multiple Threads
-The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
-will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.](https://turinglang.org/dev/docs/using-turing/guide/#sampling-multiple-chains)
+The `sample()` call above assumes that you have at least `nchains` threads available in your Julia instance. If you do not, the multiple chains
+will run sequentially, and you may notice a warning. For more information, see [the Turing documentation on sampling multiple chains.]( {{<meta doc-base-url>}}/{{<meta using-turing>}}#sampling-multiple-chains )
 :::
 
 Since we ran multiple chains, we may as well do a spot check to make sure each chain converges around similar points.
````
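The reworded callout concerns `MCMCThreads()`: parallel chains only help if Julia was started with at least `nchains` threads, otherwise the chains run one after another. A hedged sketch of checking this before sampling (the model and sampler here are placeholders):

```julia
using Turing

nchains, nsamples = 4, 1_000

# MCMCThreads() parallelizes across Julia threads; start Julia with
# `julia --threads 4` (or similar) to actually get parallel chains.
if Threads.nthreads() < nchains
    @warn "Only $(Threads.nthreads()) thread(s); the $nchains chains will run sequentially."
end

# chain = sample(model, NUTS(), MCMCThreads(), nsamples, nchains)
```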
tutorials/09-variational-inference/index.qmd

Lines changed: 7 additions & 7 deletions
````diff
@@ -13,7 +13,7 @@ Pkg.instantiate();
 In this post we'll have a look at what's know as **variational inference (VI)**, a family of _approximate_ Bayesian inference methods, and how to use it in Turing.jl as an alternative to other approaches such as MCMC. In particular, we will focus on one of the more standard VI methods called **Automatic Differentation Variational Inference (ADVI)**.
 
 Here we will focus on how to use VI in Turing and not much on the theory underlying VI.
-If you are interested in understanding the mathematics you can checkout [our write-up](../../tutorials/docs-07-for-developers-variational-inference/) or any other resource online (there a lot of great ones).
+If you are interested in understanding the mathematics you can checkout [our write-up]( {{<meta doc-base-url>}}/{{<meta using-turing-variational-inference>}} ) or any other resource online (there a lot of great ones).
 
 Using VI in Turing.jl is very straight forward.
 If `model` denotes a definition of a `Turing.Model`, performing VI is as simple as
@@ -26,7 +26,7 @@ q = vi(m, vi_alg) # perform VI on `m` using the VI method `vi_alg`, which retur
 
 Thus it's no more work than standard MCMC sampling in Turing.
 
-To get a bit more into what we can do with `vi`, we'll first have a look at a simple example and then we'll reproduce the [tutorial on Bayesian linear regression](../../tutorials/05-linear-regression/) using VI instead of MCMC. Finally we'll look at some of the different parameters of `vi` and how you for example can use your own custom variational family.
+To get a bit more into what we can do with `vi`, we'll first have a look at a simple example and then we'll reproduce the [tutorial on Bayesian linear regression]( {{<meta doc-base-url>}}/{{<meta linear-regression>}}) using VI instead of MCMC. Finally we'll look at some of the different parameters of `vi` and how you for example can use your own custom variational family.
 
 We first import the packages to be used:
 
@@ -248,7 +248,7 @@ plot(p1, p2; layout=(2, 1), size=(900, 500))
 
 ## Bayesian linear regression example using ADVI
 
-This is simply a duplication of the tutorial on [Bayesian linear regression](../../tutorials/05-linear-regression/) (much of the code is directly lifted), but now with the addition of an approximate posterior obtained using `ADVI`.
+This is simply a duplication of the tutorial on [Bayesian linear regression]({{< meta doc-base-url >}}/{{<meta linear-regression>}}) (much of the code is directly lifted), but now with the addition of an approximate posterior obtained using `ADVI`.
 
 As we'll see, there is really no additional work required to apply variational inference to a more complex `Model`.
 
@@ -599,7 +599,7 @@ println("Training set:
 VI loss: $vi_loss1
 Bayes loss: $bayes_loss1
 OLS loss: $ols_loss1
-Test set:
+Test set:
 VI loss: $vi_loss2
 Bayes loss: $bayes_loss2
 OLS loss: $ols_loss2")
@@ -765,8 +765,8 @@ plot(p1, p2; layout=(1, 2), size=(800, 2000))
 So it seems like the "full" ADVI approach, i.e. no mean-field assumption, obtain the same modes as the mean-field approach but with greater uncertainty for some of the `coefficients`. This
 
 ```{julia}
-# Unfortunately, it seems like this has quite a high variance which is likely to be due to numerical instability,
-# so we consider a larger number of samples. If we get a couple of outliers due to numerical issues,
+# Unfortunately, it seems like this has quite a high variance which is likely to be due to numerical instability,
+# so we consider a larger number of samples. If we get a couple of outliers due to numerical issues,
 # these kind affect the mean prediction greatly.
 z = rand(q_full_normal, 10_000);
 ```
@@ -795,7 +795,7 @@ println("Training set:
 VI loss: $vi_loss1
 Bayes loss: $bayes_loss1
 OLS loss: $ols_loss1
-Test set:
+Test set:
 VI loss: $vi_loss2
 Bayes loss: $bayes_loss2
 OLS loss: $ols_loss2")
````
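The hunk context above quotes the basic pattern `q = vi(m, vi_alg)`. A minimal, self-contained sketch of ADVI on a toy model; the model, the `ADVI(10, 1_000)` settings, and the sample counts are illustrative, and the exact exports can differ between Turing versions:

```julia
using Turing

@model function toy_normal(x)
    μ ~ Normal(0, 1)
    σ ~ truncated(Normal(0, 1); lower=0)
    x .~ Normal(μ, σ)
end

m = toy_normal(randn(100))

# ADVI(n_samples_per_gradient, max_iters); returns a variational posterior q.
q = vi(m, ADVI(10, 1_000))

# Each column of z is one approximate posterior draw of (μ, σ).
z = rand(q, 1_000)
```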