Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks
S.M. Bohte, J.A. La Poutré, J.N. Kok
Software Engineering (SEN)
SEN-R0036 December 31, 2000
ISSN 1386-369X
CWI
P.O. Box 94079
1090 GB  Amsterdam
The Netherlands
CWI is the National Research Institute for Mathematics and Computer Science. CWI is part of the Stichting Mathematisch Centrum (SMC), the Dutch foundation for the promotion of mathematics and computer science and their applications.
SMC is sponsored by the Netherlands Organization for Scientific Research (NWO). CWI is a member of ERCIM, the European Research Consortium for Informatics and Mathematics.
Copyright © Stichting Mathematisch Centrum
P.O. Box 94079, 1090 GB Amsterdam (NL)
Kruislaan 413, 1098 SJ Amsterdam (NL)
Telephone +31 20 592 9333
Telefax +31 20 592 4199
Unsupervised Clustering with Spiking Neurons by Sparse Temporal Coding
and Multi-Layer RBF Networks
Sander M. Bohte (1), Han A. La Poutré (1), Joost N. Kok (1,2)
S.M.Bohte@cwi.nl  Han.La.Poutre@cwi.nl  joost@liacs.nl
(1) CWI
P.O. Box 94079, 1090 GB Amsterdam, The Netherlands
(2) LIACS, Leiden University
P.O. Box 9512, 2300 RA Leiden, The Netherlands
ABSTRACT
We demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on real-world data, and we demonstrate how temporal synchrony in a multi-layer network induces hierarchical clustering. We develop a temporal encoding of continuously valued data to obtain adjustable clustering capacity and precision with an efficient use of neurons: input variables are encoded in a population code by neurons with graded and overlapping sensitivity profiles. We also discuss methods for enhancing the scale-sensitivity of the network and show how induced synchronization of neurons within early RBF layers allows for the subsequent detection of complex clusters.
2000 Mathematics Subject Classification: 82C32, 68T05, 68T10, 68T30, 92B20.
1998 ACM Computing Classification System: C.1.3, F.1.1, I.2.6, I.5.1.
Keywords and Phrases: spiking neurons, unsupervised learning, high-dimensional clustering, complex clusters, Hebbian learning, synchronous firing, sparse coding, temporal coding, coarse coding.
Note: Work carried out under theme SEN4 "Evolutionary Systems and Applied Algorithmics". This paper has been submitted for publication; a short version has been presented at the International Joint Conference on Neural Networks 2000 (IJCNN'2000) in Como, Italy.
1. Introduction
It is well known that cortical neurons produce all-or-none action potentials or spikes, but the significance of the timing of these pulses has only recently been recognized as a means of neuronal information coding. Alongside the accumulating biological evidence [1], it has been shown theoretically that temporal coding with single spike times allows for powerful neuronal information processing [2]. Furthermore, it has been argued that coordinated spike-timing could be instrumental in solving dynamic combinatorial problems [3]. Since time-coding utilizes only a single spike to transfer information, as opposed to hundreds in firing-rate coding, the paradigm could also potentially be beneficial for efficient pulse-stream VLSI implementations.
These considerations have generated considerable interest in time-based artificial neural networks, e.g. [4; 5; 6]. In particular, Hopfield [7] presents a model of spiking neurons for discovering clusters in an input space, akin to Radial Basis Functions. Extending on Hopfield's idea, Natschläger & Ruf [5] propose a learning algorithm that performs unsupervised clustering in spiking neural networks using spike-times as input. This model encodes the input patterns in the delays across its synapses and is shown to reliably find the centers of high-dimensional clusters, but, as we argue in detail in section 2, it is limited in both cluster capacity and precision.
We present methods to enhance the precision, capacity and clustering capability of a network of spiking neurons akin to [5] in a flexible and scalable manner, thus overcoming limitations associated
with the network architecture. Inspired by the local receptive fields of biological neurons, we encode continuous input variables by a population code obtained from neurons with graded and overlapping sensitivity profiles. In addition, each input dimension of a high-dimensional dataset is encoded separately, avoiding an exponential increase in the number of input neurons with increasing dimensionality of the input data. With such an encoding, we show that the spiking neural network is able to correctly cluster a number of datasets at low expense in terms of neurons while enhancing cluster capacity and precision. The proposed encoding allows for the reliable detection of clusters over a considerable and flexible range of spatial scales, a feature that is especially desirable for unsupervised classification tasks as scale information is a priori unknown.
By extending the network to multiple layers, we show how the temporal aspect of spiking neurons can be further exploited to enable the correct classification of non-globular or interlocking clusters. In a multi-layer RBF network, it is demonstrated that the neurons in the first layer center on components of extended clusters. When all neurons in the first RBF layer are allowed to fire, the (near) synchrony of neurons coding for nearby components of the same cluster is then distinguishable by a subsequent RBF layer, resulting in a form of hierarchical clustering with decreasing granularity. Building on this idea, we show how the addition of lateral excitatory connections with a SOM-like learning rule enables the network to correctly separate complex clusters by synchronizing the neurons coding for parts of the same cluster. Adding lateral connections thus maintains the low neuron count achieved by coarse coding, while increasing the complexity of classifiable clusters.
Summarizing, we show that temporal spike-time coding is a viable means for unsupervised neural computation in a network of spiking neurons, as the network is capable of clustering realistic and high-dimensional data. Adjustable precision and cluster capacity are achieved by employing a 1-dimensional array of graded, overlapping receptive fields for the encoding of each input variable. By introducing a multi-layer extension of the architecture we also show that a spiking neural network can cluster complex, non-Gaussian clusters. Combined with our work on supervised learning in spiking neural networks [8], these results show that single spike-time coding is a viable means for neural information processing on real-world data.
The paper is organized as follows: we describe the spiking neural network and its limitations in section 2. In section 3 we introduce a means of encoding input data so as to overcome these limitations, and clustering examples using this encoding are given in section 4. In section 5 we show how the architecture can be extended to a multi-layer RBF network capable of hierarchical clustering, and in section 6 we show how the addition of lateral connections enables the network to classify more complex clusters via synchronization of neurons within an RBF layer. The conclusions are given in section 7.
2. Networks of delayed spiking neurons
In this section, we describe the spiking neural network as introduced in [5] and the results and open questions associated with this type of network.
The network architecture consists of a fully connected feedforward network of spiking neurons with connections implemented as multiple delayed synaptic terminals (figure 1a). A neuron j in the network generates a spike when the internal neuron state variable x_j, identified with the "membrane potential", crosses a threshold ϑ. This neuron j, connected to a set of immediate predecessors ("pre-synaptic neurons") Γ_j, receives a set of spikes with firing times t_i, i ∈ Γ_j. The internal state variable x_j(t) is determined by the time-dynamics of the impact of impinging spikes on neuron j. As a practical model, we use the Spike Response Model (SRM) introduced by Gerstner [9], where the time-varying impact of a spike is described by a spike-response function. Depending on the choice of suitable spike-response functions, one can adapt the SRM to reflect the dynamics of a large variety of different spiking neurons. In the SRM description, the internal state variable x_j(t) is simply the sum of spike-response functions
Figure 1: (a) Network connectivity and a single connection composed of multiple delayed synapses. (b) Graph of the learning function L(∆t). ∆t denotes the time difference between the onset of a PSP at a synapse and the time of the spike generated in the target neuron.
ε(t − t_i), weighted by the synaptic efficacy w_ij:

    x_j(t) = Σ_{i ∈ Γj} w_ij ε(t − t_i).    (2.1)

In the network as introduced in [5], an individual connection consists of a fixed number of m synaptic terminals, where each terminal serves as a sub-connection that is associated with a different delay and weight (figure 1a). The delay d^k of a synaptic terminal k is defined by the difference between the firing time of the pre-synaptic neuron and the time the post-synaptic potential starts rising. Effectively, the input to a neuron j then becomes:
    x_j(t) = Σ_{i ∈ Γj} Σ_{k=1..m} w_ij^k ε(t − t_i − d^k).    (2.2)

Input patterns can be encoded in the synaptic weights by local Hebbian delay-learning where, after learning, the firing time of an output neuron reflects the distance of the evaluated pattern to its learned input pattern, thus realizing a kind of RBF neuron [5]. For unsupervised learning, a Winner-Take-All learning rule modifies the weights between the source neurons and the neuron first to fire in the target layer, using a time-variant of Hebbian learning: if the start of the PSP at a synapse slightly precedes a spike in the target neuron, the weight of this synapse is increased, as it exerted significant influence on the spike-time via a relatively large contribution to the membrane potential. Earlier and later synapses are decreased in weight, reflecting their lesser impact on the target neuron's spike time. For a weight with delay d^k from neuron i to neuron j we use
    ∆w_ij^k = η L(∆t) = η ( (1 − b) e^{−(∆t − c)² / β²} + b ),    (2.3)
after [5] (depicted in figure 1b), where the parameter b determines the effective integral over the entire learning window, β sets the width of the positive learning window, and c determines the position of its peak. The value of ∆t denotes the time difference between the onset of a PSP at a synapse and the time of the spike generated in the target neuron. The weight of a single terminal is limited by a minimum and maximum value, 0 and w_max respectively.
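As an illustration, the learning window of eq. (2.3) and the clamped weight update can be sketched in a few lines of Python. This is a sketch using the parameter values reported below in the text; the helper name `update_weight` and the clamping formulation are our own:

```python
import math

def learning_window(dt, b=-0.2, c=-2.85, beta=1.67):
    """L(dt) of eq. (2.3): a Gaussian bump peaking at dt = c with height 1,
    sitting on a negative baseline b so that ill-timed synapses are depressed."""
    return (1.0 - b) * math.exp(-((dt - c) ** 2) / beta ** 2) + b

def update_weight(w, dt, eta=0.0025, w_max=2.75):
    """Weight change dw = eta * L(dt), with the terminal weight
    clamped to [0, w_max] as described in the text."""
    return min(max(w + eta * learning_window(dt), 0.0), w_max)
```

Note that L(c) = (1 − b) + b = 1, so synapses whose PSP onset precedes the target spike by |c| receive the maximal potentiation, while very early or very late synapses converge toward the baseline depression b.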
An input (data-point) to the network is coded by a pattern of firing times within a coding interval ∆T, and each input neuron is required to fire at most once during this coding interval. In our experiments, we set ∆T to [0–9] ms and the delays d^k to 1–15 ms in 1 ms intervals (m = 16). For the reported simulations, the parameter values for the learning function L(∆t) are set to: b = −0.2,
c = −2.85, β = 1.67, η = 0.0025 and w_max = 2.75. To model the (strictly excitatory) post-synaptic potentials, we used an α-function:

    ε(t) = (t/τ) e^{1 − t/τ},    (2.4)

with τ = 3.0 ms, effectively implementing leaky integrate-and-fire spiking neurons.
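To make eqs. (2.2) and (2.4) concrete, here is a minimal Python sketch of the membrane potential of a neuron with multiple delayed synaptic terminals. The `first_spike_time` threshold scan is a hypothetical helper of our own; the paper does not prescribe a numerical integration scheme:

```python
import math

def alpha_psp(t, tau=3.0):
    """eps(t) = (t/tau) * exp(1 - t/tau) for t > 0, zero before onset; eq. (2.4).
    Peaks at exactly 1 when t = tau."""
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def membrane_potential(t, firing_times, weights, delays):
    """x_j(t) of eq. (2.2): for each pre-synaptic neuron i with spike time t_i,
    sum the PSPs of its m delayed terminals, each weighted by w_ij^k."""
    return sum(weights[i][k] * alpha_psp(t - t_i - delays[k])
               for i, t_i in enumerate(firing_times)
               for k in range(len(delays)))

def first_spike_time(firing_times, weights, delays, theta=1.0, dt=0.01, t_end=30.0):
    """Hypothetical helper: scan forward in time until x_j crosses threshold theta."""
    steps = int(t_end / dt)
    for n in range(steps):
        if membrane_potential(n * dt, firing_times, weights, delays) >= theta:
            return n * dt
    return None  # neuron did not fire within the window
```

With a single input spike at t = 0 and one terminal of weight 2 and delay 1 ms, the potential peaks at t = 1 + τ = 4 ms, and the neuron crosses a threshold of 1 somewhat earlier, illustrating how the firing time reflects how well the input matches the learned weights.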
Results obtained for a formal model of spike-based Hebbian learning by Kempter et al. [10] show that the class of learning rules as in (2.3) can pick up temporal correlations on the time scale of the learning window and does not need the overall weight normalization required in most traditional Hebbian learning algorithms.
Previous Results and Open Questions. In [5], Natschläger & Ruf showed that artificially constructed clusters of inputs firing within the encoding interval are correctly clustered in an unsupervised manner, but the type of clusters they consider severely limits applicability. For N input neurons, a cluster C in [5] is defined to be of dimensionality M ≤ N with M-dimensional location {T_1, ..., T_M}, T_i being the spike-time of input neuron i. For such a setup it was found that the RBF neurons converged reliably to the centers of the clusters, also in the presence of noise and randomly spiking neurons.
In practice, problems arise when applying this scheme to more realistic data. A first issue concerns the coding of the input: following the aforementioned method, we were not able to successfully cluster data containing significantly more clusters than input dimensions, especially in the case of low dimensionality. This problem is associated with the minimum width β of the learning function L(∆t), leading to a fixed minimal spatial extent of a learned cluster, potentially (much) larger than the actual cluster size. In fact, for 2-dimensional input containing more than two clusters, the above algorithm failed in our experiments for a wide range of parameters. Furthermore, the finite width of the learning rule effectively inhibits the detection of multiple nearby clusters of smaller size relative to the width of the learning function, requiring advance knowledge of the effective cluster scale. Hence, to achieve practical applicability, it is necessary to develop an encoding that is scalable in terms of cluster capacity and precision and that is also efficient in terms of the number of input neurons required. In the following section, we present improvements to the architecture that address these issues.
3. Encoding continuous input variables in spike-times
To extend the encoding precision and clustering capacity, we introduce a method for encoding input data into temporal spike-time patterns by population coding. Although our encoding is simple and elegant, we are not aware of any previous encoding methods for transforming continuous data into spike-time patterns, and we therefore describe the method in detail.
As a means of population coding, we use multiple local receptive fields to distribute an input variable over multiple input neurons. Such a population code, where input variables are encoded with graded and overlapping activation functions, is a well-studied method for representing real-valued parameters (e.g. [11; 12; 13; 14; 15; 16]). In these studies, the activation function of an input neuron is modeled as a local receptive field that determines the firing rate. A translation of this paradigm into relative firing times is straightforward: an optimally stimulated neuron fires at t = 0, whereas a value up to, say, t = 9 is assigned to less optimally stimulated neurons (depicted in figure 2).
For actually encoding high-dimensional data in the manner described above, a choice has to be made with respect to the dimensionality of the receptive fields of the neurons. We observe that the least expensive encoding in terms of neurons is to independently encode the respective input variables: each input dimension is encoded by an array of 1-dimensional receptive fields. Improved representation accuracy for a particular variable can then be obtained by sharpening the receptive fields and increasing the number of neurons [15]. Such coarse coding has been shown to be statistically bias-free [12], and in the context of spike-time patterns we have applied it successfully to supervised pattern classification in spiking neural networks [8].
In our experiments, we determined the input ranges of the data, and encoded each input variable with neurons covering the whole data range. For a range [I_min^n, I_max^n] of a variable n, m neurons
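The text breaks off here, but the encoding described so far can be sketched as follows. This is a hedged illustration: the even spacing of the Gaussian centers and the width factor `gamma` are our own assumptions for the sketch, not the paper's exact placement rule:

```python
import math

def encode_variable(x, x_min, x_max, m=8, t_max=9.0, gamma=1.5):
    """Sketch: encode value x into m firing times via 1-D Gaussian receptive
    fields spanning [x_min, x_max]. A maximally stimulated neuron fires at
    t = 0; weakly stimulated neurons fire late, up to t_max (the 9 ms coding
    interval). Center spacing and gamma are illustrative assumptions."""
    spacing = (x_max - x_min) / (m - 1)
    sigma = gamma * spacing  # overlapping fields: width > spacing
    times = []
    for i in range(m):
        center = x_min + i * spacing
        activation = math.exp(-((x - center) ** 2) / (2 * sigma ** 2))  # in (0, 1]
        times.append((1.0 - activation) * t_max)  # high activation -> early spike
    return times
```

Each input dimension gets its own such 1-dimensional array of fields, so the number of input neurons grows linearly, not exponentially, with the dimensionality of the data.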
