It's perfectly sensible. Change in software leads to bugs; that's an inescapable fact. Further, it can take time to find bugs, because some of them live in rarely exercised code paths. General software changes tend to produce a net increase in the number of bugs. Changes strictly limited to fixing bugs can, if done with care, average out to a net decrease.
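To put rough numbers on that (every rate below is invented purely for illustration, not measured from anything), the arithmetic looks like this:

    # Expected net change in bug count per change, with made-up rates:
    # say a feature change introduces ~0.3 bugs and fixes ~0.1, while a
    # careful fix-only change introduces ~0.05 and removes ~0.2.
    feature_drift = 0.3 - 0.1     # +0.2 bugs per change, on average
    fix_only_drift = 0.05 - 0.2   # -0.15 bugs per change, on average
    print(round(feature_drift * 1000), round(fix_only_drift * 1000))
    # after 1000 changes: roughly +200 bugs vs -150 bugs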
So take a piece of software whose features have just been developed to an acceptable state, and compare it with itself after a long period of use: it should be obvious that far more bugs will be /known/ about after the period of use. That information alone is worth something if you want to understand the behaviour and stability of a system in the face of a given set of inputs; it means that, if need be, you can choose to limit the inputs to steer around the known bugs. Further, if during that period of use those bugs are fixed (and the fixes are strictly limited to that), then the software at the end stands a good chance of being less buggy than the software at the beginning, for the given feature set.
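If you want to watch that play out, here's a toy simulation (Python, and every rate in it is an assumption picked to show the shape of the argument, nothing more): bugs start latent, use gradually discovers them, and careful fix-only maintenance removes known bugs faster than it introduces fresh ones.

    import random

    random.seed(1)

    latent, known = 200, 0   # assumed starting state: 200 undiscovered bugs
    for _ in range(2000):    # a long period of use under fix-only maintenance
        # Use occasionally exercises a rare code path, turning a latent bug
        # into a /known/ one.
        if latent and random.random() < 0.10:
            latent -= 1
            known += 1
        # A careful fix removes a known bug...
        if known and random.random() < 0.12:
            known -= 1
        # ...though even careful fixes occasionally introduce a fresh latent bug.
        if random.random() < 0.02:
            latent += 1

    print(latent, known)  # the total ends well below the starting 200

The exact numbers move around with the seed and the rates, but the total trends down whenever careful fixing outpaces accidental introduction, and the bugs that remain are increasingly ones you know about and can steer around.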
This all sounds obvious to me. In case it isn't to you, or you think it's hand-wavingly subjective, let me point out that the likes of Red Hat objectively earn billions of dollars per annum exploiting exactly this: selling long, stable support lifetimes for software whose feature set was frozen years earlier, with changes largely limited to fixes.