Self-Improving Automata, Langford-Moore Paradox
How do self-improving automata, in the sense of self-reflecting intelligence, escape the Langford-Moore paradox (the paradox of analysis: a correct analysis can contain nothing that was not already in what it analyzes)? Any design for a new machine must already be specified in the parent machine. The answer seems to be: by gaining experience, by assimilating external data, by learning. But what is added when one learns? A bias. Nothing radically new is added, only the form according to which opinions and the like are arranged.
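A minimal sketch of that reading of learning, assuming a Bayesian toy model (the hypothesis names and numbers here are purely illustrative): the machine's hypothesis space is fixed at design time, and experience only re-weights it. Data rearranges; it never adds.

```python
# Minimal sketch (hypothetical names/values): learning as re-weighting a
# fixed hypothesis space. Experience changes the arrangement (the "bias"),
# never the space itself.
from fractions import Fraction

# P(heads) under each hypothesis; the space is fixed at "design time".
HYPOTHESES = {"fair": Fraction(1, 2), "loaded": Fraction(9, 10)}

# Uniform prior: before experience, no arrangement is preferred.
posterior = {h: Fraction(1, len(HYPOTHESES)) for h in HYPOTHESES}

def learn(observation: str) -> None:
    """Assimilate one coin flip ('H' or 'T') by Bayesian updating."""
    global posterior
    def likelihood(h):
        return HYPOTHESES[h] if observation == "H" else 1 - HYPOTHESES[h]
    unnormalized = {h: posterior[h] * likelihood(h) for h in posterior}
    total = sum(unnormalized.values())
    posterior = {h: w / total for h, w in unnormalized.items()}

for flip in "HHTHHHHHTH":        # the external data
    learn(flip)

print(posterior)                 # same hypotheses, new arrangement
```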
How does complexity ever increase? (It is obviously not "entropy" but something more complicated and subtle.) The creative element.
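One rough way to make the entropy/complexity contrast concrete (compressed length standing in for algorithmic complexity is an assumption of this sketch, not the note's claim): two strings with the same per-symbol Shannon entropy can differ enormously in structure.

```python
# Rough sketch: per-symbol Shannon entropy vs. compressed length (a crude,
# computable stand-in for Kolmogorov complexity). Same entropy, very
# different structure.
import math
import random
import zlib

def entropy_per_symbol(s: bytes) -> float:
    """Shannon entropy of the empirical symbol distribution, in bits."""
    n = len(s)
    return -sum((s.count(b) / n) * math.log2(s.count(b) / n) for b in set(s))

regular = b"ab" * 5000                                      # pure repetition
random.seed(0)
noisy = bytes(random.choice(b"ab") for _ in range(10000))   # coin flips

for name, s in (("regular", regular), ("random", noisy)):
    print(f"{name}: {entropy_per_symbol(s):.3f} bits/symbol, "
          f"{len(zlib.compress(s))} bytes compressed")
# Both sit near 1 bit/symbol, but the repetitive string compresses to a
# few dozen bytes while the random one cannot go much below its entropy
# bound (over a kilobyte).
```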
Further, setting aside knowledge that needs sensory data: is the data for an axiomatic mathematical system all encoded inside the axioms and rules of inference?
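As a toy way of putting the question to a machine, the sketch below uses Hofstadter's MIU system (axiom MI plus four rewrite rules) as the stand-in axiomatic system; the choice of system is an assumption for illustration. In one clear sense the answer is yes: a fixed finite program holding only the axiom and the rules enumerates every theorem.

```python
# Toy illustration with Hofstadter's MIU system: everything the enumerator
# ever prints is "encoded" in the axiom and the rules, in the sense that
# this finite program generates all of it.
from collections import deque

AXIOM = "MI"

def successors(s: str):
    """Apply every inference rule at every applicable position."""
    if s.endswith("I"):                    # Rule 1: xI -> xIU
        yield s + "U"
    yield s + s[1:]                        # Rule 2: Mx -> Mxx
    for i in range(len(s) - 2):            # Rule 3: III -> U
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]
    for i in range(len(s) - 1):            # Rule 4: UU -> (dropped)
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]

def theorems(limit: int):
    """Breadth-first enumeration of the first `limit` theorems."""
    seen, queue = {AXIOM}, deque([AXIOM])
    count = 0
    while queue and count < limit:
        s = queue.popleft()
        yield s
        count += 1
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)

print(list(theorems(10)))   # MI, MIU, MII, ... all from axiom + rules alone
```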