By Mike Page (auth.), John A. Bullinaria BSc, MSc, PhD, David W. Glasspool BSc, MSc, George Houghton BA, MSc, PhD (eds.)

This volume contains refereed versions of twenty-five papers presented at the Fourth Neural Computation and Psychology Workshop, held at University College London in April 1997. The "NCPW" workshop series is now well established as a lively forum which brings together researchers from such diverse disciplines as artificial intelligence, mathematics, cognitive science, computer science, neurobiology, philosophy and psychology to discuss their work on connectionist modelling in psychology. The general theme of this fourth workshop in the series was "Connectionist Representations", a topic which not only attracted participants from all these fields, but from all over the world as well. From the point of view of the conference organisers, concentrating on representational issues had the advantage that it immediately involved researchers from all branches of neural computation. Being so central both to psychology and to connectionist modelling, it is one area about which everyone in the field has their own strong views, and the variety and quality of the presentations and, just as importantly, the discussion which followed them, certainly attested to this.


Read Online or Download 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations PDF

Similar psychology books

Molecular Revolution: Psychiatry and Politics

No additional details have been supplied for this title.

Extra resources for 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997: Connectionist Representations

Sample text

MLPs with one or more hidden layers are used to classify non-separable classes, and do so with hyperplanar decision regions; RBF units instead form hyperspherical (e.g. circular) decision regions. Unlike hyperplanar (open) decision regions, which cover an infinite portion of the input space, the receptive field of each RBF unit is local and restricted to only a small region of input space. Fig. 1(b) illustrates hyperspherical (closed) decision regions.

J. A. Bullinaria et al. (eds.), 4th Neural Computation and Psychology Workshop, London, 9–11 April 1997 © Springer-Verlag London Limited 1998
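The open/closed distinction above can be sketched numerically. The following is a minimal illustration, not from the book: the function names, weights, and centre values are our own. A sigmoid (MLP-style) unit keeps responding arbitrarily far from the origin along its preferred direction, while a Gaussian RBF unit responds only near its centre.

```python
import numpy as np

def sigmoid_unit(x, w, b):
    """MLP-style unit: responds over an open half-space of the input."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def rbf_unit(x, centre, width):
    """Gaussian RBF unit: responds only near its centre (closed region)."""
    return np.exp(-np.sum((x - centre) ** 2) / (2.0 * width ** 2))

w, b = np.array([1.0, 1.0]), 0.0          # illustrative weights
centre, width = np.array([0.0, 0.0]), 1.0  # illustrative RBF parameters

near = np.array([0.5, 0.5])
far = np.array([5.0, 5.0])

# The sigmoid unit responds strongly to BOTH points (its region is open);
# the RBF unit responds to the near point but is effectively silent far away.
print(sigmoid_unit(near, w, b), sigmoid_unit(far, w, b))
print(rbf_unit(near, centre, width), rbf_unit(far, centre, width))
```

Moving `far` even further from the centre only strengthens the sigmoid response while driving the RBF response towards zero, which is the locality property the excerpt describes.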

    When an input pattern matches a training pattern, a single hidden unit has maximum output. If the norm weight along each dimension is decreased at some RBFs, the receptive fields will overlap, and the network will form a distributed representation by employing an extended, and possibly superposed representation on its hidden layer. In this case the network retains its localist status by representing training patterns at the RBF centre vectors, but now represents the mapping in a distributed fashion.
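The shift from a localist to a distributed hidden-layer code as receptive fields widen can be seen in a toy example. This is an illustrative sketch under our own assumptions (one-dimensional inputs, centres placed at the training patterns); it is not code from the book.

```python
import numpy as np

def hidden_activations(x, centres, width):
    """Gaussian RBF hidden layer: one activation per centre."""
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

centres = np.array([[0.0], [1.0], [2.0], [3.0]])  # centres at training patterns
x = np.array([1.0])                                # input matching one pattern

narrow = hidden_activations(x, centres, width=0.2)
wide = hidden_activations(x, centres, width=1.5)

# Narrow fields: essentially one unit active (localist code).
# Wide, overlapping fields: several units share activation (distributed code).
print(np.sum(narrow > 0.01), np.sum(wide > 0.01))
```

With `width=0.2` only the matching unit exceeds the 0.01 threshold; with `width=1.5` all four units do, even though the centres (and hence the localist anchoring to training patterns) are unchanged.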

It has been shown how RBF networks' hidden layer representations can be made more distributed by increasing the overlap of RBF receptive fields. This can improve generalization, as the network is better able to interpolate between training patterns. The closed nature of RBF receptive fields restricts the resources used to represent the patterns, and therefore prevents excessive interference.

6 Conclusion

It has been shown how RBF networks form the distributed representations that are highly regarded in MLPs [12].
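The interpolation claim can be checked on a tiny regression problem. The sketch below is our own illustration, not from the book: with overlapping Gaussian fields and output weights fitted exactly on three training patterns, the network's prediction at an unseen midpoint lies between the neighbouring training targets.

```python
import numpy as np

def rbf_design(xs, centres, width):
    """Design matrix of Gaussian RBF activations: one row per input."""
    d2 = (xs[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

centres = np.array([0.0, 1.0, 2.0])   # illustrative training inputs = centres
targets = np.array([0.0, 1.0, 0.0])   # illustrative training targets
width = 0.8                            # wide enough that fields overlap

# Fit linear output weights so the network reproduces the training targets.
Phi = rbf_design(centres, centres, width)
w = np.linalg.solve(Phi, targets)

# Prediction at an unseen point midway between two training patterns.
y_mid = rbf_design(np.array([0.5]), centres, width) @ w
print(y_mid[0])  # lies strictly between the targets 0.0 and 1.0
```

The network still fits the three training patterns exactly, yet its response at 0.5 interpolates smoothly between the targets at 0.0 and 1.0, which is the generalization behaviour the excerpt attributes to overlapping receptive fields.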

