In this section I briefly present a number of experiments aimed at determining the relevance of various factors for the CLASPnet simulations. First, the architecture of the network is addressed by comparing the performance of modular and non-modular networks (3.4.1). Second, the training corpus is varied in a number of ways: how important is its size? (3.4.2); what role do the semantic and orthographic representations play? (3.4.3); which part of the semantic representation is most important? (3.4.4); does the double justification of the orthographic representations carry any significance? (3.4.5); and how important are the punctuation marks? (3.4.6). Third, I consider which of the network's tasks is learnt first (3.4.7). Fourth, the analyze-current-input task of CLASPnet is compared with the predict-next-output task (3.4.8). (The precise numbers on which the charts in this section are based can again be found in Appendix 5.)
The focus of these experiments is not on explaining why slightly different simulations lead to different results -- each of the many simulations could receive as extensive a discussion as the one I am giving CLASPnet. The aim is rather to gain more insight into how CLASPnet itself works and into which of its properties have been especially important for obtaining the results described in the previous section.