On the World

Observation / Information and Experiments

NEVATHIR
November 16, 2018

The sciences depend on information collected through observations and experiments, so the informational content, or Kolmogorov complexity, of the collected data is of interest for classifying it. A computable enumeration on a Turing machine produces data whose long-run informational content is bounded by the logarithm of the number of enumeration steps, plus a constant. A particular question is whether Turing machines provide a good model for collected data.
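A minimal sketch of that log bound, assuming description length can be approximated by a fixed generating program plus the bits needed to encode the step count n; the squares-of-integers enumeration below is only a stand-in for any computable enumeration.

    from math import ceil, log2

    def enumerate_squares(n):
        """Computably enumerate the first n terms of a fixed sequence."""
        return [k * k for k in range(n)]

    PROGRAM_BITS = 8 * len("def f(n): return [k*k for k in range(n)]")  # the constant c

    for n in (10, 1_000, 100_000):
        data = enumerate_squares(n)
        raw_bits = sum(max(1, x.bit_length()) for x in data)   # size of the data itself
        description_bits = PROGRAM_BITS + ceil(log2(n))        # fixed program + index n
        print(f"n={n:>6}  raw ~{raw_bits} bits   description ~{description_bits} bits")

The raw data grow roughly linearly with n, while the description needed to regenerate them grows only by the bits required to write down n.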

Notice that knots in 3D space cannot be recorded on a 1D Turing machine tape without losing their connectedness. Thus Turing machines do not completely capture the complexity of data, but here we restrict ourselves to 1D data. Do Turing machines provide a good model for 1D data?

Stochastic processes and deterministic chaos produce time series whose generic informational content grows in proportion to the number of enumeration steps, exceeding the logarithmic bound for computable enumerations. If stochastic processes or deterministic chaos exist over arbitrarily many enumeration steps, the resulting time series are not computably enumerable. Conversely, if time series over arbitrarily many enumeration steps are all computably enumerable, stochastic processes and deterministic chaos cannot exist.
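A rough illustration of the contrast, using zlib compression as a crude stand-in for Kolmogorov complexity; the coin flips and the r = 3.99 logistic map are merely convenient examples of a stochastic process and of deterministic chaos.

    import random
    import zlib

    def compressed_size(bits):
        """Compressed size (bytes) of a 0/1 sequence, a crude complexity proxy."""
        return len(zlib.compress(bytes(bits), 9))

    n = 100_000

    periodic = [k % 2 for k in range(n)]                   # computable enumeration
    coin = [random.getrandbits(1) for _ in range(n)]       # stochastic process

    x, chaotic = 0.123456, []
    for _ in range(n):                                     # deterministic chaos
        x = 3.99 * x * (1.0 - x)
        chaotic.append(1 if x > 0.5 else 0)

    for name, series in (("periodic", periodic), ("coin", coin), ("chaotic", chaotic)):
        print(f"{name:>8}: {compressed_size(series):>6} compressed bytes out of {n}")

The periodic series should shrink to a tiny fraction of its length, while the other two stay roughly proportional to it.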

The existence of stochastic processes or deterministic chaos might seem obvious, but the possibility remains that they are merely ad hoc theoretical constructs. Perhaps quantum mechanics, and the randomness arising from the collapse of wave functions, are all ad hoc.

However, if quantum mechanical randomness is not ad hoc, then quantum mechanics over arbitrarily many enumeration steps is not computably enumerable, and Turing machines provide a poor model for 1D time series data.

If generic 1D time series from stochastic processes or deterministic chaos do exist, we may compute properties of statistical aggregates, but not features that require observation or advanced evidentiary methods. Ensembles are predictable, but individual time series are not. Delicate features such as long-run biological evolution or economics lie beyond the limited computational capacity humans possess.
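A small sketch of "ensembles are predictable, individual series are not", again using the logistic map as a hypothetical example: two orbits started 10^-12 apart separate within a few dozen steps, yet their long-run averages agree closely.

    def orbit(x0, n, r=3.99):
        """Iterate the logistic map n times from x0 and return the orbit."""
        xs, x = [], x0
        for _ in range(n):
            x = r * x * (1.0 - x)
            xs.append(x)
        return xs

    a = orbit(0.3, 10_000)
    b = orbit(0.3 + 1e-12, 10_000)

    step = next(i for i, (u, v) in enumerate(zip(a, b)) if abs(u - v) > 0.1)
    print("orbits differ by more than 0.1 after step", step)     # the path is unpredictable
    print("long-run means:", sum(a) / len(a), sum(b) / len(b))   # the aggregate is stable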

Statistical methods such as regression, or artificial intelligence methods such as neural networks, which combine evidence and data with computation, may predict short-run features, even to an astonishing degree of accuracy, if regularity exists.
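A hedged sketch of this point, assuming a simple regularity (a linear trend plus noise): ordinary least-squares regression already predicts the next few steps well.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(100)
    series = 0.5 * t + 3.0 + rng.normal(0.0, 1.0, size=t.size)   # trend + noise

    slope, intercept = np.polyfit(t[:90], series[:90], deg=1)    # fit on the observed past
    forecast = slope * t[90:] + intercept                        # short-run prediction
    print("mean absolute error over the next 10 steps:",
          np.mean(np.abs(forecast - series[90:])))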

Predictable regularities establish correlation, not causation, between data sets. Understanding the underlying mechanism at work requires understanding the structure of the data, not merely its regularities. Predicting short-run features is easier than understanding.
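A minimal illustration of correlation without causation, with made-up data: x and y are each driven by a hidden common factor z, so they correlate strongly even though neither influences the other.

    import numpy as np

    rng = np.random.default_rng(1)
    z = rng.normal(size=5_000)                  # hidden common cause
    x = 2.0 * z + rng.normal(size=z.size)       # x responds only to z
    y = -1.5 * z + rng.normal(size=z.size)      # y responds only to z

    print("corr(x, y) =", np.corrcoef(x, y)[0, 1])   # strong correlation, no causal link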

Beyond these short-run features, Keynesian radical uncertainty and historical accidents pose considerable difficulty even for short-run prediction, because they admit no numerical probability relations. Where radical uncertainty or historical accidents are at work, computation may fail for lack of numerical data. There is no general method that provides reliable prediction.
