I've heard variations of that phrasing many times, and I don't think it's wrong, but I'd argue there are better ways to conceptualize the condition number. It's more about the stability of the estimated coefficients for a given set of explanatory variable values. The coefficients are estimated by inverting a matrix built from the data values, and the condition number measures how sensitive that inversion is to small changes in the data. For low condition numbers, you can alter or remove some of the data, and the coefficients should not change drastically (in other words, the estimated coefficients are stable). But for matrices with very large condition numbers, even small changes to the data can wildly change the estimated coefficients (meaning the estimates are not stable/reliable).
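To make this concrete, here's a small NumPy sketch (my own toy example, not anything from your data): a 2x2 system whose columns are nearly collinear, so the matrix has a large condition number. Changing one data value by just 0.0001 completely flips the solved coefficients.

```python
import numpy as np

# A tiny "design matrix" whose columns are nearly collinear,
# so it has a very large condition number.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
print(np.linalg.cond(A))  # roughly 40,000: badly conditioned

# Solve A @ beta = b for the coefficients.
b = np.array([2.0, 2.0001])
print(np.linalg.solve(A, b))  # coefficients (1, 1)

# Now nudge one data value by only 0.0001...
b_perturbed = np.array([2.0, 2.0002])
print(np.linalg.solve(A, b_perturbed))  # coefficients (0, 2): completely different
```

A tiny perturbation in the data produced a wholesale change in the coefficients; with a well-conditioned matrix the same nudge would barely move them.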
This is a bit easier to understand using simple numbers rather than matrices. Inverting a matrix with a large condition number is analogous to taking the reciprocal of a number that is very close to 0. For example, the inverse of 0.001 is 1,000, and the inverse of 0.0001 is 10,000. Even though 0.001 and 0.0001 are very close in absolute terms (both are near 0), their inverses are very different (1,000 vs 10,000). To put it another way, for values very close to 0, the inverse is very sensitive to small changes in the number. This sensitivity of the inverse is what the condition number measures, for matrices rather than single numbers.
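The matrix version of that scalar intuition is that the condition number equals the ratio of the largest to the smallest singular value, so a matrix with a singular value near 0 is "close to uninvertible" the same way 0.0001 is. A quick sketch (again my own illustration):

```python
import numpy as np

# Scalar analogy: the reciprocal near zero is very sensitive.
print(1 / 0.001)   # about 1,000
print(1 / 0.0001)  # about 10,000 -- a tiny input change, a huge output change

# For matrices, the condition number is the ratio of the
# largest singular value to the smallest one.
M = np.array([[1.0, 0.0],
              [0.0, 0.001]])
s = np.linalg.svd(M, compute_uv=False)
print(s.max() / s.min())  # about 1,000
print(np.linalg.cond(M))  # the same value
```

The small singular value (0.001) plays exactly the role of the small scalar: inverting the matrix divides by it, which amplifies any noise in the data.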
I hope that helps, and let me know if anything was unclear. There are also many resources available for learning about condition numbers; they are usually taught in linear algebra courses rather than in geography or statistics.