Running A/B tests allows you to proactively find ways to increase your revenue and to make sure your continuous optimization process is heading in the right direction. The choice of KPIs is up to you, as your test could focus on bid timeouts, SSP incremental value, layout changes, and so on.
In fact, we strongly encourage you to A/B test each and every change to your ad stack. This process will help you identify optimizations you may have overlooked that could turn into quick wins, or, on the other hand, reveal that some of your changes are not as productive as you intended.
However, we also understand that, in an increasingly complex programmatic industry, setting up and running an A/B test, and then interpreting the data, can be tricky without the right tools or processes. Without a full understanding of the A/B testing process, you can quickly slide into a biased analysis, which in turn can lead you to make decisions that end up being wrong, or misguided at the very least. This guide on A/B testing is intended for AdOps and yield managers in publisher monetization teams, who will undoubtedly pick up a few tricks to take their programmatic revenue to the next level!
We at Pubstack have been advocating the extensive use of A/B testing in ad tech for quite some time. In fact, this methodology was the only way to get reliable, truthful metrics in many of our studies. You can find our whitepaper on monetization that respects GDPR and user choice right here: "The guide to maximizing Monetization while respecting user choice".
A/B testing is the only way to verify changes to your stack if you want to get your numbers (and your decisions!) right every time. One of our customers has been using this method to measure the impact of each change to their ad stack; find out more by reading their use case: Le Monde.
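The measurement idea above can be sketched in a few lines of code. This is a minimal, illustrative example (not Pubstack's or Le Monde's actual implementation): it assumes users are split deterministically into a control and a treatment group by hashing a user id, and that a KPI such as eCPM or revenue per session is then averaged per variant. All function and parameter names here are hypothetical.

```python
import hashlib


def assign_variant(user_id: str, test_name: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing the user id together with the test name gives a stable,
    roughly uniform split without storing per-user state, so the same
    user always sees the same variant for a given test.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"


def mean_kpi(records):
    """Average a KPI (e.g. eCPM) per variant from (variant, value) pairs."""
    totals, counts = {}, {}
    for variant, value in records:
        totals[variant] = totals.get(variant, 0.0) + value
        counts[variant] = counts.get(variant, 0) + 1
    return {variant: totals[variant] / counts[variant] for variant in totals}
```

In practice you would tag every ad request or session with its variant, log the KPI alongside that tag, and only then compare the per-variant averages, ideally with a significance test before acting on the difference.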