When I'm assessing an uncertain question, I'm often tempted to default to the uniform ignorance prior - assigning equal probability to all potential outcomes. This is almost always a mistake, and it's better to put in the effort to select even a slightly more informative prior. The benefits of moving from the ignorance prior to a slightly informed prior go beyond the predictive gains, because the move acts as a forcing function for me to actually try to understand the question.
For instance, suppose I'm trying to assess whether, if Trump tweets that we're going to attack another country, we actually would go to war. I would want to first establish a base rate for the likelihood of the US going to war with another country in 2020. I could start by assigning equal weight to both outcomes, 50% each. But it would be better to put in slightly more effort to construct a reference class, such as looking at previous years and whether we went to war, or applying Laplace's Rule of Succession to the tweets to inform my estimates.
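Laplace's Rule of Succession gives a simple formula for this kind of base rate: after observing s successes in n trials, estimate the probability of success on the next trial as (s + 1) / (n + 2). A minimal sketch, where the tweet counts are made-up numbers purely for illustration:

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior probability of success on the next trial, given
    `successes` out of `trials` so far. This is the posterior mean
    under a uniform Beta(1, 1) prior over the success rate."""
    return (successes + 1) / (trials + 2)

# Hypothetical: say 2 of the last 10 similar tweets were followed by
# military action. The rule estimates (2 + 1) / (10 + 2) = 0.25.
estimate = rule_of_succession(2, 10)

# With no observations at all, the rule recovers the 50/50 ignorance
# prior: (0 + 1) / (0 + 2) = 0.5.
no_data = rule_of_succession(0, 0)
```

Note that the rule never assigns probability 0 or 1, even after a long unbroken streak, which is exactly the kind of mild regularization a slightly informed prior should provide.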
As a matter of philosophical inquiry, I don't have a principled way to tell when I've gone too far - when my "informative prior" is actually incorporating evidence that should be explicitly modeled. This is a real risk when leaning away from the ignorance prior, because it might let me smuggle in more of my point of view and tip the scales. My current operating stance is to aim for broad reference classes, and to question myself if I start going three-plus levels deep.