Demonstration
System Setup
We demonstrate the use of the package.
First, let's set up the array geometry. We will consider a uniform linear array (ULA) with M = 20 sensors and an inter-sensor spacing of 0.5 m, a sampling frequency of fs = 3000 Hz, and a medium with a propagation speed of c = 1500 m/s:
M = 20
Δx = range(0, (M - 1)*0.5; length=M) # sensor positions with 0.5 m spacing
c = 1500.
fs = 3000.
nothing
For the target sources, we will generate k = 4 targets with varying bandwidths and varying SNRs:
k = 4
ϕ = [ -60, -15, 30, 45]/180*π # True direction-of-arrivals
f_begin = [ 10, 100, 500, 800] # Source signal starting frequencies
f_end = [1000, 1500,1000, 900] # Source signal ending frequencies
snr = [ -6, -4, 0, 4] # Source signal SNRs in dB
nothing
For simulating the signals, let's use a utility function used for the experiments. The length of the simulated signal will be N = 256, which corresponds to using S = 8 snapshots in the paper.
using Random, Random123
seed = (0x97dcb950eaebcfba, 0x741d36b68bef6415)
rng = Random123.Philox4x(UInt64, seed, 8) # counter-based RNG for reproducible simulation
Random123.set_counter!(rng, 1)
N = 256
include("../../scripts/common.jl")
y, x = simulate_signal(
rng, N, N*8, ϕ, snr, f_begin, f_end, fs, 1.0, Δx, c; visualize=false
)
y
256×20 Matrix{Float64}:
  2.41268    3.69594    1.10652   …   1.84746    1.77406    0.418829
  2.6832    -1.05787   -7.08544       0.531795   3.40024    0.861415
 -3.68347   -3.31873    1.44214       0.250248  -0.650553  -1.6076
  ⋮                                ⋱
  1.55561   -1.37883   -2.7723        0.017852   2.3198     1.71712
 -2.82849    0.283568   3.0907    …   4.12195    1.28745   -1.24495
We can visualize the simulated signal through beamforming. Consider the Capon spectral estimator, more commonly known as the minimum-variance distortionless response (MVDR) estimator.
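In the standard formulation, the MVDR spatial spectrum evaluated at angle $\phi$ and frequency $f$ is

$$P_{\mathrm{MVDR}}(\phi, f) = \frac{1}{a(\phi, f)^{\mathsf{H}} \, R(f)^{-1} \, a(\phi, f)},$$

where $a(\phi, f)$ is the array steering vector and $R(f)$ is the estimated covariance matrix of the subband centered at $f$. We evaluate this spectrum over a grid of angles and subband frequencies: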
using Tullio
using Plots
include("../../scripts/baselines/common.jl")
include("../../scripts/baselines/subbandbeamform.jl")
N_fft = 32
N_snap = N ÷ N_fft # = 8 snapshots (S in the paper)
R, _, f_range = snapshot_covariance(y, N_fft, fs, N_snap) # Short-time Fourier transform
config = ArrayConfig(c, Δx)
n_grid = 2^10
ϕ_range = range(-π/2, π/2; length=n_grid)
P = subbandmvdr(R, ϕ_range, f_range, config)
Plots.heatmap(ϕ_range, f_range, 10*log10.(P'), xlabel="DoA (ϕ)", ylabel="Frequency")
Plots.vline!(ϕ, label="True DoAs", linecolor=:red)
savefig("angle_frequency_spectrum_plot.svg")
nothing
This yields the angle-frequency spectrum:
We can see that the signals were correctly simulated as we specified.
Creating the Bayesian Model
Now, let's construct the Bayesian model. As stated in the paper, we will use a non-informative inverse-gamma prior on the source SNRs ($\gamma$), a truncated negative-binomial prior on the model order ($k$), and a Jeffreys scale prior on the signal power ($\sigma^2$).
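In the notation above, the priors constructed below correspond to

$$\gamma_j \sim \mathrm{InverseGamma}(0.01,\, 0.01), \qquad k \sim \mathrm{NegativeBinomial}\!\left(\tfrac{1}{2} + 0.1,\ \tfrac{0.1}{0.1 + 1}\right) \text{ truncated to } \{0, \ldots, M - 1\},$$

while setting alpha and beta to zero yields the improper Jeffreys scale prior $p(\sigma^2) \propto 1/\sigma^2$ (assuming, as we read the constructor, that alpha and beta are the inverse-gamma hyperparameters of $\sigma^2$):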
alpha, beta = 0, 0
source_prior = InverseGamma(0.01, 0.01)
order_prior = truncated(NegativeBinomial(1/2 + 0.1, 0.1/(0.1 + 1)), 0, M-1)
model = WidebandDoA.WidebandIsoIsoModel(
N, Δx, c, fs, source_prior, alpha, beta; order_prior
)
nothing
The model can be conditioned on the data as follows:
cond = WidebandConditioned(model, y)
nothing
We are now ready to infer the posterior for this model.
Inference with RJMCMC
For inference, we will use the ReversibleJump package, which is the inference counterpart of this package.
We use independent jump proposals with uniform-log-normal auxiliary proposal distributions ($q(\phi)$ and $q(\gamma)$ in the paper):
using ReversibleJump
prop = UniformNormalLocalProposal(0.0, 2.0)
jump = IndepJumpProposal(prop)
nothing
For the update move, we will use slice sampling [N2003] with the stepping-out procedure:
mcmc = SliceSteppingOut([2.0, 2.0])
nothing
The RJMCMC algorithm we use is the non-reversible jump algorithm by Gagnon and Doucet [GD2020]:
rjmcmc = ReversibleJump.NonReversibleJumpMCMC(jump, mcmc; jump_rate=0.9)
nothing
Now let's simulate some Markov chains!
n_samples = 4_000
initial_params = WidebandDoA.WidebandIsoIsoParam{Float64}[]
initial_order = 0
samples, stats = ReversibleJump.sample(
rng,
rjmcmc,
cond,
n_samples,
initial_order,
initial_params;
show_progress=false,
)
samples
4000-element Vector{Vector{WidebandIsoIsoParam{Float64}}}:
 [WidebandIsoIsoParam{Float64}(0.8087467725851902, 1.511385091418317)]
 [WidebandIsoIsoParam{Float64}(0.8087467725851902, 1.511385091418317)]
 [WidebandIsoIsoParam{Float64}(0.8030181861034825, -1.2341152366120869)]
 ⋮
 [WidebandIsoIsoParam{Float64}(-1.0402035664145473, -2.801772829275667), WidebandIsoIsoParam{Float64}(0.7844411983738, -0.25795880142627625), WidebandIsoIsoParam{Float64}(-0.25809764575908156, -1.8794606254578767), WidebandIsoIsoParam{Float64}(0.5228386279097865, -1.3842304528188003)]
Signal Detection
Detection can be performed by analyzing the posterior of the model order ($k$).
Here is the trace of the model order:
Plots.plot([stat.order for stat in stats], xlabel="RJMCMC Iteration", ylabel="Model order")
savefig("model_order_trace.svg")
nothing
And the histogram:
Plots.histogram([stat.order for stat in stats], xlabel="order", normed=true)
savefig("model_order_hist.svg")
nothing
From the posterior, we can obtain point estimates of the model order $k$. In the paper, we use the median.
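As a minimal sketch (the name k_est is ours, not part of the package API), the median point estimate can be computed with the Statistics standard library:
using Statistics
k_est = round(Int, median([stat.order for stat in stats]))
nothing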
DoA Estimation
Now, let's estimate the DoAs.
For this, we will discard the first 10% of the samples as burn-in and use only the remaining samples.
burned = samples[n_samples ÷ 10:end]
3601-element Vector{Vector{WidebandIsoIsoParam{Float64}}}:
 [WidebandIsoIsoParam{Float64}(0.7831985288652852, -0.3414974415227825), WidebandIsoIsoParam{Float64}(-0.2583382810851599, -1.9297386910840033), WidebandIsoIsoParam{Float64}(0.5271535608503864, -1.4359605575254837)]
 ⋮
 [WidebandIsoIsoParam{Float64}(-1.0402035664145473, -2.801772829275667), WidebandIsoIsoParam{Float64}(0.7844411983738, -0.25795880142627625), WidebandIsoIsoParam{Float64}(-0.25809764575908156, -1.8794606254578767), WidebandIsoIsoParam{Float64}(0.5228386279097865, -1.3842304528188003)]
Bayesian model averaging (BMA) corresponds to flattening all the local variables.
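In equation form, BMA represents the marginal posterior of each local variable by averaging over the model order:

$$p(\phi \mid y) = \sum_{k} p(k \mid y) \, p(\phi \mid k, y).$$

Concatenating all of the post-burn-in parameter vectors approximates draws from this marginal: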
flat = vcat(burned...)
14164-element Vector{WidebandIsoIsoParam{Float64}}:
 WidebandIsoIsoParam{Float64}(0.7831985288652852, -0.3414974415227825)
 WidebandIsoIsoParam{Float64}(-0.2583382810851599, -1.9297386910840033)
 WidebandIsoIsoParam{Float64}(0.5271535608503864, -1.4359605575254837)
 ⋮
 WidebandIsoIsoParam{Float64}(-0.25809764575908156, -1.8794606254578767)
 WidebandIsoIsoParam{Float64}(0.5228386279097865, -1.3842304528188003)
On the other hand, Bayesian model selection corresponds to selecting only the samples that have a specific model order, as sketched below.
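As a minimal sketch of model selection (the names k_est and selected are ours, not part of the package API), one could keep only the samples whose model order matches the median point estimate:
using Statistics
k_est = round(Int, median([stat.order for stat in stats])) # point estimate of the model order
selected = filter(sample -> length(sample) == k_est, burned) # samples with exactly k_est sources
nothing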
Here is the marginal posterior of the DoAs:
Plots.histogram([θ.phi for θ in flat], normed=true, bins=256, xlims=[-π/2, π/2], xlabel="DoA (ϕ)", label="Posterior", ylabel="Density")
Plots.vline!(ϕ, label="True", color=:red, linestyle=:dash)
savefig("doa_hist.svg")
nothing
Unfortunately, this does not yield the DoA posterior corresponding to each source. For this, we turn to the relabeling algorithm by Roodaki et al. [RBF2014]. This algorithm fits a Gaussian mixture model to the histogram above. It also generates labels for each local variable, so that we can label variables other than just the DoAs. For this though, we have to choose the number of mixture components. Roodaki et al. recommend the 80% or 90% upper quantile of the model-order posterior:
k_mixture = quantile([stat.order for stat in stats], 0.9) |> Base.Fix1(round, Int)
ϕ_post = [[target.phi for target in sample] for sample in burned]
mixture, labels = WidebandDoA.relabel(
rng, ϕ_post, k_mixture; show_progress=false
)
mixture
MixtureModel{Distributions.Normal{Float64}}(K = 4)
components[1] (prior = 0.2358): Distributions.Normal{Float64}(μ=-1.0348530609477118, σ=0.012243993061575375)
components[2] (prior = 0.2547): Distributions.Normal{Float64}(μ=-0.2592017530212177, σ=0.0030004293668731263)
components[3] (prior = 0.2547): Distributions.Normal{Float64}(μ=0.5256595348615929, σ=0.0030024841179694833)
components[4] (prior = 0.2547): Distributions.Normal{Float64}(μ=0.7829513754465836, σ=0.0015906165067898713)
Let's check that the Gaussian mixture approximation is accurate. This can be done by comparing the marginal density of the mixture against the histogram:
using StatsPlots
Plots.stephist([θ.phi for θ in flat], normed=true, bins=256, xlims=[-π/2, π/2], xlabel="DoA (ϕ)", ylabel="Density", label="Posterior", fill=true)
Plots.plot!(range(-π/2,π/2; length=1024), Base.Fix1(pdf, mixture), label="Mixture Approximation", linewidth=2)
savefig("doa_relabel_density.svg")
nothing
The DoA posterior of each source is then represented by each component of the mixture:
Plots.plot(mixture, xlims=[-π/2, π/2], label="Component", fill=true)
Plots.vline!(ϕ, label="True", color=:red, linestyle=:dash)
savefig("doa_relabel_comp.svg")
nothing
Reconstruction
Finally, we will demonstrate signal reconstruction. The API for reconstruction is a little less ironed out than the rest, but it is functional.
For this, we have to use the labels generated by the relabeling procedure. For each RJMCMC sample, we sample from the conditional posterior of the source signals and then relabel the signal segments to their corresponding sources. We also thin the samples by a factor of n_thin to speed things up and reduce memory consumption.
x_samples = [Vector{Float64}[] for j in 1:k_mixture]
x_means = [Vector{Float64}[] for j in 1:k_mixture]
n_thin = n_samples ÷ 10
samples_thinned = burned[1:n_thin:end]
labels_thinned = labels[1:n_thin:end]
for (sample, labs) in zip(samples_thinned, labels_thinned)
dist_x = WidebandDoA.reconstruct(cond, sample)
x_sample = rand(rng, dist_x) # Conditional posterior sample
x_mean = mean(dist_x) # Conditional posterior mean
kj = length(sample)
total_len = length(x_sample)
blocksize = total_len ÷ kj
# Labeling the conditional posterior mean and sample
for (idx, label) in enumerate(labs)
if label > k_mixture
# A label of k_mixture + 1 corresponds to clutter
continue
end
# The source signals are flattened so we have to slice the block corresponding
# to the source the label is pointing to.
blockrange = (idx-1)*blocksize+1:idx*blocksize
xj_sample = x_sample[blockrange[1:N]]
xj_mean = x_mean[ blockrange[1:N]]
push!(x_samples[label], xj_sample)
push!(x_means[label], xj_mean)
end
end
nothing
Now that sampling and relabeling are done, we can visualize the posterior samples against the minimum mean-squared error (MMSE) estimates (the posterior means). The variability of the posterior samples represents the posterior uncertainty:
x_mmse = mean.(x_samples)
plts = map(1:k_mixture) do j
p = Plots.plot(x_mmse[j], linecolor=:blue, label="MMSE", xlabel="Sample index")
for x_sample in x_samples[j]
Plots.plot!(p, x_sample, linecolor=:blue, alpha=0.5, linewidth=0.2, label=nothing)
end
p
end
Plots.plot(plts..., layout = (4, 1))
savefig("recon_samples.svg")
nothing
Finally, let's compare the MMSE estimates against the ground truth x.
Plots.plot( x_mmse, layout=(4,1), label="MMSE", xlabel="Sample Index")
Plots.plot!(x, layout=(4,1), label="True", xlabel="Sample Index")
savefig("recon_mmse_comparison.svg")
nothing
- [N2003]: Neal, Radford M. "Slice Sampling." The Annals of Statistics 31.3 (2003): 705-767.
- [GD2020]: Gagnon, Philippe, and Arnaud Doucet. "Nonreversible Jump Algorithms for Bayesian Nested Model Selection." Journal of Computational and Graphical Statistics 30.2 (2020): 312-323.
- [RBF2014]: Roodaki, Alireza, Julien Bect, and Gilles Fleury. "Relabeling and Summarizing Posterior Distributions in Signal Decomposition Problems When the Number of Components Is Unknown." IEEE Transactions on Signal Processing (2014).