Logo notas.itmens

Cybernetics, Artificial Intelligence

I'm not very interested in artificial intelligence since there's no reason to be interested in a confused and largely sham field that is destined to fail, but its history, along with that of its predecessor, cybernetics, is worth studying.

To Read list:

Artificial Intelligence#

Linguistics, Rule-based approaches#

It was believed that natural language understanding was AI-complete (this has to do with the so-called linguistic turn in philosophy), so the effort was first spent on linguistics and rule-based approaches to natural language understanding. Yehoshua Bar-Hillel's 1960 survey of machine translation (with its now-famous appendix spelling out the depth of the problems the field faced) concluded that the notion that computers could be programmed with the world knowledge of humans, which was needed for natural language comprehension, was “utterly chimerical, and hardly deserves any further discussion.” Funding was largely shut down after the 1966 ALPAC report for the National Research Council.

Data driven approaches to AI#

The world wide web made massive datasets available. Supervised learning algorithms such as artificial neural networks, decision trees, and Bayesian classifiers had existed in university labs for decades; suddenly what used to be shallow became adequate and worked. More:

  • Markov models, maximum entropy models, conditional random fields, and large-margin classifiers (support vector machines): largely optimization-based learning methods operating on Big Data.

Machine translation was later cracked by a statistical method based on the noisy-channel approach: a sentence in the source language is treated as a garbled (noisy) rendition of some sentence in the target language, and the system's task is to recover the target sentence that most probably produced it, trading off fluency in the target language against fidelity to the source.
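The noisy-channel idea reduces to a single argmax: pick the target sentence t maximizing P(t) · P(s | t), a language-model score times a translation score. A minimal sketch, with entirely made-up toy probabilities and a hypothetical source sentence, just to show the shape of the decision rule:

```python
# Noisy-channel decoding sketch: choose t maximizing P(t) * P(s | t).
# All sentences and probabilities below are invented for illustration.

language_model = {      # P(t): fluency of each candidate target sentence
    "the house": 0.6,
    "house the": 0.1,
    "the home": 0.3,
}
translation_model = {   # P(s | t): how well t explains the observed source
    "the house": 0.5,
    "house the": 0.5,
    "the home": 0.2,
}

def noisy_channel_decode(candidates):
    """Return the candidate t with the highest P(t) * P(s | t)."""
    return max(candidates, key=lambda t: language_model[t] * translation_model[t])

best = noisy_channel_decode(language_model.keys())
print(best)  # "the house": 0.6 * 0.5 = 0.30 beats 0.05 and 0.06
```

Note how the fluent "the house" wins even though the translation model alone cannot distinguish it from the garbled "house the": the language-model factor is what penalizes bad word order, which is exactly the division of labor the noisy-channel framing buys.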