Special Session on Neural Network Architectures

Novel Computation and Communication Methods and Architectures for Neural Networks

Organizers:

  • Kun-Chih (Jimmy) Chen, National Sun Yat-sen University, Taiwan
  • Masoumeh (Azin) Ebrahimi, KTH Royal Institute of Technology, Sweden

Rationale and scope for the special session:

Deep Neural Networks (DNNs) have shown significant advantages in many domains, such as image processing, speech recognition, and machine translation. Current DNNs comprise many layers and a very large number of parameters, which leads to high design complexity and power consumption when developing large-scale deep neural network accelerators. In addition, contemporary DNNs are usually trained on vast amounts of labeled data, so generating an optimal DNN for a new dataset is time-consuming.

To reduce the challenge of designing cost-efficient neural network models, efficient computation units for supervised learning have become an emerging research topic in recent years. Unsupervised learning is another branch of machine learning, one that works with unlabeled data. A common unsupervised approach, the Spiking Neural Network (SNN), is trained based on spike generation between SNN neurons. Although it offers low-power data processing, low computing accuracy is the main problem of current unsupervised learning methods. To address the design problems of both supervised and unsupervised learning methods, novel computation methods and architectures, such as stochastic computing and near-memory processing, are seen as viable solutions to meet performance and design-productivity requirements. In addition, communication issues between neuron processing elements need to be considered. Motivated by these challenges and opportunities, this special session aims to attract contributions on efficient design solutions for both the computation and communication aspects of supervised as well as unsupervised learning approaches. The topics of interest include, but are not limited to:
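As a brief illustration of one of the computation methods mentioned above (this sketch is not part of the session description itself): stochastic computing encodes a value in [0, 1] as the probability of a 1 in a random bitstream, so that multiplication reduces to a single AND gate per bit. A minimal Python sketch, with all function names chosen for illustration:

```python
import random

random.seed(0)

def to_bitstream(p, n):
    """Encode a probability p in [0, 1] as a random bitstream of length n."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def sc_multiply(a_bits, b_bits):
    """Stochastic multiplication: bitwise AND of two independent bitstreams.
    P(a_i AND b_i) = P(a_i) * P(b_i) for independent streams."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def decode(bits):
    """Decode a bitstream back to a probability estimate (fraction of 1s)."""
    return sum(bits) / len(bits)

n = 100_000
a, b = 0.6, 0.5
product = decode(sc_multiply(to_bitstream(a, n), to_bitstream(b, n)))
# product approximates a * b = 0.3, up to sampling noise
```

The accuracy of the result depends on the bitstream length n, which is the classic area/power-versus-precision trade-off that makes stochastic computing attractive for low-cost neural network accelerators.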

  • Next-generation supervised and unsupervised learning methods and architectures
  • Novel computation methods and architectures for neural network design
  • Novel interconnection methods for efficient computing in deep neural networks

Submission:

See the submission page for instructions on how to submit a paper to this session. Special session papers will undergo the same review process as regular papers submitted to the conference.