Improving structured post hoc inference via a hidden Markov model.
In a recent paradigm of selective inference, the user is free to select any subset of variables after "having seen" the data, possibly repeatedly, and the aim is to provide valid confidence bounds, called post hoc bounds, on the proportion of falsely selected variables. In this paper, we show that hidden Markov modeling is particularly suitable for this type of inference. By exploiting this specific structure, we propose new post hoc bounds that improve on the state of the art. This improvement is illustrated both through numerical experiments and on real data examples.