*with different caveats
$$P(A \given B) = \frac{P(A, B)}{P(B)}$$
$$\theta \sim \rr{Beta}(\alpha, \beta), \rr{ } Y_i \given \theta \sim \rr{Bernoulli}(\theta)$$
$$ \rr{versus } Y_i \sim \rr{Bernoulli}(\theta), \rr{ } \theta \rr{ unknown} $$
$$ \pi(\theta) = \frac{1}{\rr{B}(\alpha, \beta)} \theta^{\alpha - 1} (1 - \theta)^{\beta - 1} $$
$$p(y_i \given \theta) = \theta^{y_i} (1 - \theta)^{1-y_i}$$
$$p(\theta \given y_i) = \rr{Beta}(\alpha + y_i, \beta + 1 - y_i)$$
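A quick numeric sketch of the conjugate update above: each Bernoulli observation bumps the Beta parameters by \\( (y_i, 1 - y_i) \\). The true \\(\theta\\), prior hyperparameters, and sample size below are illustrative choices.

```python
import random

# Beta-Bernoulli conjugate updating: after observing y_i, the Beta(alpha, beta)
# prior becomes Beta(alpha + y_i, beta + 1 - y_i).
random.seed(0)
alpha, beta = 2.0, 2.0          # prior hyperparameters (illustrative)
theta_true = 0.7                # illustrative "true" success probability
data = [1 if random.random() < theta_true else 0 for _ in range(500)]

for y in data:
    alpha += y                  # alpha + y_i
    beta += 1 - y               # beta + 1 - y_i

post_mean = alpha / (alpha + beta)   # mean of a Beta(alpha, beta)
print(round(post_mean, 3))
```

With 500 observations the posterior mean concentrates near the true \\(\theta\\), as expected.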
$$\begin{align}X_i \given \mu & \mathop{\sim}^{\rr{iid}} \rr{Normal}\left( \mu, \sigma^2 \right), \rr{ } i = 1, \ldots, n; \\ \mu & \sim \rr{Normal}\left( \mu_0, \sigma_0^2\right)\end{align}$$
$$\begin{align} p(\mu|X) & \propto \pi(\mu)\prod_{i=1}^n \tfrac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x_i-\mu)^2/(2\sigma^2)} \\ & = \pi(\mu)(2\pi\sigma^2)^{-n/2}\, e^{ -\sum_{i=1}^n(x_i-\mu)^2/(2\sigma^2)} \\ & = \pi(\mu) (2\pi\sigma^2)^{-n/2}\, e^{ -\sum_{i=1}^n{( (x_i-\bar{x}) - (\mu-\bar{x}) )^2 \over 2\sigma^2}} \\ & \propto e^{ {-1\over2\sigma_0^2} (\mu-\mu_0)^2} e^{ {-n\over2\sigma^2}(\mu-\bar{x})^2 } \end{align}$$
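Completing the square in \\(\mu\\) in the last line gives the familiar precision-weighted conjugate posterior:

$$\mu \given X \sim \rr{Normal}\left( \frac{\frac{1}{\sigma_0^2}\mu_0 + \frac{n}{\sigma^2}\bar{x}}{\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}},\; \left(\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}\right)^{-1} \right)$$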
$$\begin{align} & l(B \given \{\vec{Z}_i, C_{ijk}\} ) = \rr{const} + \\ & \sum_{i=1}^I \left( \sum_{j \in J_i} \sum_{k \in K_{ij}} \left[ \vec{x}_{jk}' B - \rr{log} \sum_{l = 1}^{L_j} e^{ \vec{x}_{jl}' B} \right] \right) \cdot \vec{Z}_i \end{align}$$
Let
$$\rr{log}\vec{a}_i = \sum_{j \in J_i} \sum_{k \in K_{ij}} \left[ \vec{x}_{jk}' B - \rr{log} \sum_{l = 1}^{L_j} e^{ \vec{x}_{jl}' B} \right]$$
Let
$$Z_i \sim \rr{Multinom}_N \left(\vec{p}_i\right)$$
$$\begin{align} \rr{log}p(B, Z \given C_{ijk} ) = & \rr{const} + \\ & \sum_{i=1}^I \left( \rr{log} \vec{p}_i + \rr{log}\vec{a}_i\right) \cdot \vec{Z}_i \end{align}$$
$$\begin{align}\vec{Z}_i \given B, \rr{ data} \sim \rr{Multinom}_N \left( \vec{p}_i \vec{a}_i\right)\end{align}$$
*probabilities proportional to; element-wise vector product
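A sketch of one Gibbs draw of the latent indicator \\( \vec{Z}_i \\): class probabilities are proportional to the element-wise product of \\( \vec{p}_i \\) and \\( \vec{a}_i \\). The values of `p` and `log_a` below are illustrative stand-ins.

```python
import math
import random

# One Gibbs draw of Z_i with probabilities proportional to the element-wise
# product p_i * a_i; p and log_a are illustrative stand-ins for the
# quantities defined above.
random.seed(1)
p = [0.5, 0.3, 0.2]               # mixture weights, \vec{p}_i
log_a = [-2.0, -1.0, -3.0]        # per-class log-likelihood terms, log \vec{a}_i

m = max(log_a)                    # subtract the max before exp, for stability
w = [pi * math.exp(la - m) for pi, la in zip(p, log_a)]
probs = [wi / sum(w) for wi in w]  # normalized Multinom_N(p_i a_i) probabilities

u, cum, z = random.random(), 0.0, len(probs) - 1
for k, pk in enumerate(probs):    # inverse-CDF draw of the class label
    cum += pk
    if u <= cum:
        z = k
        break
print(z, [round(q, 3) for q in probs])
```

Working on the log scale and subtracting the max before exponentiating avoids underflow when the per-class log-likelihoods are large and negative.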
BDA pp. 289-292:
Reference: BDA pp. 290-291
Detailed balance:
$$p(\theta_a) J(\theta_a \given \theta_b) = p(\theta_b) J (\theta_b \given \theta_a)$$
A basic setup:
A simple yet powerful concept: divide and conquer.
Posterior distribution: $$\left.\begin{pmatrix} \theta_1 \\ \theta_2\end{pmatrix}\right| \rr{ }y \sim \rr{N} \left( \begin{pmatrix} y_1 \\ y_2\end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right)$$
Gibbs updates: $$\begin{align} \theta_1 \given \theta_2, y & \sim \rr{N} (y_1 + \rho (\theta_2 - y_2), 1 - \rho^2) \\ \theta_2 \given \theta_1, y & \sim \rr{N} (y_2 + \rho (\theta_1 - y_1), 1 - \rho^2)\end{align}$$
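The two conditional updates above translate directly into a sampler; `y1`, `y2`, `rho`, the starting point, and the chain length below are illustrative choices.

```python
import math
import random

# Gibbs sampler for the bivariate normal posterior with known rho:
# alternate draws from the two conditional normals above.
random.seed(0)
y1, y2, rho = 0.0, 0.0, 0.8
sd = math.sqrt(1 - rho ** 2)           # conditional sd, same for both updates

theta1, theta2 = 2.5, 2.5              # deliberately bad starting point
draws = []
for t in range(5000):
    theta1 = random.gauss(y1 + rho * (theta2 - y2), sd)
    theta2 = random.gauss(y2 + rho * (theta1 - y1), sd)
    draws.append((theta1, theta2))

burned = draws[1000:]                  # discard burn-in
m1 = sum(t1 for t1, _ in burned) / len(burned)
m2 = sum(t2 for _, t2 in burned) / len(burned)
print(round(m1, 2), round(m2, 2))
```

With \\(\rho\\) close to 1 the chain moves in small diagonal steps, so the autocorrelation is high; the posterior means still converge to \\((y_1, y_2)\\).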
Define \\(x \sim_j y \\) if \\( x_i = y_i \\) for all \\(i \neq j\\) and let \\( p_{xy} \\) denote the probability of a jump from \\( x \in \Theta \\) to \\( y \in \Theta\\). Then the transition probabilities are
$$p_{xy} = \begin{cases} \frac{1}{d}\frac{g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } & x \sim_j y \\ 0 & \text{otherwise} \end{cases}$$
Reference: Wikipedia.
So $$\begin{align}g(x) p_{xy} & = \frac{1}{d}\frac{ g(x) g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } \\ & = \frac{1}{d}\frac{ g(y) g(x)}{\sum_{z \in \Theta: z \sim_j y} g(z) } = g(y) p_{yx}\end{align}$$
since \\(x \sim_j y \\) is an equivalence relation.
So detailed balance is satisfied, and the Gibbs sampler leaves the target distribution invariant.
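The detailed-balance identity can be checked numerically on a toy state space; \\(\Theta = \\{0,1\\}^2\\) and the values of \\(g\\) below are arbitrary illustrative choices.

```python
# Numeric check of detailed balance for the Gibbs kernel above, on the
# toy state space Theta = {0,1}^2 with an arbitrary positive g.
g = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
d = 2

def p_move(x, y, j):
    """Probability of the jump x -> y when coordinate j is resampled."""
    denom = sum(gz for z, gz in g.items()
                if all(z[i] == x[i] for i in range(d) if i != j))
    return g[y] / denom / d

ok = True
for x in g:
    for y in g:
        diff = [j for j in range(d) if x[j] != y[j]]
        if len(diff) != 1:
            continue                  # check only pairs x ~_j y with x != y
        j = diff[0]
        ok = ok and abs(g[x] * p_move(x, y, j) - g[y] * p_move(y, x, j)) < 1e-12
print(ok)
```

The check passes because the normalizing sum in `p_move` runs over the same slice for `x` and `y` whenever \\(x \sim_j y\\).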
$$\left.\begin{pmatrix} \theta_1 \\ \theta_2\end{pmatrix}\right| \rr{ }y, \rho \sim \rr{N} \left( \begin{pmatrix} y_1 \\ y_2\end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right)$$
Gibbs updates: $$\begin{align} \theta_1 \given \theta_2, y & \sim \rr{N} (y_1 + \rho (\theta_2 - y_2), 1 - \rho^2) \\ \theta_2 \given \theta_1, y & \sim \rr{N} (y_2 + \rho (\theta_1 - y_1), 1 - \rho^2)\end{align}$$
$$\rho \given \theta_1, \theta_2, y \sim \rr{ draw via Metropolis}$$
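A sketch of the Metropolis step for \\(\rho\\), assuming a flat prior on \\((-1, 1)\\) and a random-walk proposal; the proposal scale, \\(\theta\\), and \\(y\\) values below are illustrative.

```python
import math
import random

# Metropolis-within-Gibbs updates of rho given (theta1, theta2, y),
# assuming a flat prior on rho over (-1, 1) (an assumption; the notes
# do not fix a prior). Proposal scale 0.1 is an illustrative choice.
random.seed(0)

def log_post_rho(rho, d1, d2):
    """Log bivariate-normal density of (theta - y) with correlation rho."""
    if not -1.0 < rho < 1.0:
        return -math.inf              # proposal outside the support: reject
    v = 1.0 - rho ** 2
    return -0.5 * math.log(v) - (d1 ** 2 - 2 * rho * d1 * d2 + d2 ** 2) / (2 * v)

theta1, theta2, y1, y2 = 1.2, 0.9, 0.0, 0.0
d1, d2 = theta1 - y1, theta2 - y2
rho = 0.0
for t in range(200):
    prop = rho + random.gauss(0.0, 0.1)           # random-walk proposal
    log_r = log_post_rho(prop, d1, d2) - log_post_rho(rho, d1, d2)
    if math.log(random.random()) < log_r:          # accept with prob min(1, r)
        rho = prop
print(round(rho, 3))
```

Because the proposal is symmetric, the acceptance ratio reduces to the ratio of target densities, as in plain Metropolis.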
$$\begin{align} r & = \frac{p(f^{-1}(\psi^{\rr{proposed}})) \cdot |J(\psi^{\rr{proposed}})|}{p(f^{-1}(\psi^{\rr{initial}})) \cdot |J(\psi^{\rr{initial}})|} \\ &= \frac{p(\theta^{\rr{proposed}})}{p(\theta^{\rr{initial}})} \cdot \frac{|J(\psi^{\rr{proposed}})|}{|J(\psi^{\rr{initial}})|} \end{align}$$
Accept with probability \\( \rr{min} (1, r) \\) as usual.
logit transformation
$$\begin{align}\rr{invlogit}'(\psi) & = \left( {\rr{exp}(\psi) \over 1 + \rr{exp}(\psi)}\right)' \\ & = {\rr{exp}(\psi) \over \left(1 + \rr{exp}(\psi) \right)^2}\end{align}$$
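The derivative above is exactly the Jacobian factor \\( |J(\psi)| \\) needed in the transformed Metropolis ratio; a central-difference check confirms the formula.

```python
import math

# Check invlogit'(psi) = exp(psi) / (1 + exp(psi))^2 numerically;
# this derivative is the Jacobian |J(psi)| in the transformed ratio r.
def invlogit(psi):
    return math.exp(psi) / (1.0 + math.exp(psi))

def invlogit_deriv(psi):
    return math.exp(psi) / (1.0 + math.exp(psi)) ** 2

psi = 0.7                                         # illustrative point
h = 1e-6
numeric = (invlogit(psi + h) - invlogit(psi - h)) / (2 * h)  # central difference
print(abs(numeric - invlogit_deriv(psi)) < 1e-8)
```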