
Physical reservoir computing: Possibility to resolve the inconsistency between neuro-AI principles and its hardware

Akira Hirose⋆, Seiji Takeda†, Toshiyuki Yamane†, Hidetoshi Numata†, Naoki Kanazawa†, Jean Benoit Heroux†, Daiju Nakano†, Ryosho Nakane, and Gouhei Tanaka

Department of Electrical Engineering and Information Systems / Institute for Innovation in International Engineering Education, The University of Tokyo, Tokyo 153-8656, Japan

† IBM Research – Tokyo, IBM Japan, Shin-Kawasaki, Kanagawa 212-0032, Japan

Abstract. This paper discusses desirable architectures and structures of truly neural hardware. First, we review the principles of neural processing, namely, "pattern information representation" and "pattern information processing." Then, from this viewpoint of principles, we discuss physical reservoir computing, in particular wave-based hardware such as lightwave and spin-wave reservoir computing. We also discuss complex-valued neural networks, which provide the primary framework for such wave-based neural networks.

Keywords: Pattern information representation, pattern information processing, complex-valued neural networks, physical reservoir computing

1 Introduction

Artificial intelligence based on neural networks, such as deep learning networks (neuro-AI), realizes high performance in many tasks in various situations. However, it also requires a long learning time and huge power consumption. Time-sequential data processing is also realizable by extending the networks to recurrent architectures, but the power and time consumption then becomes much larger. In a modern society that utilizes sensor networks and cloud computing, it is urgently necessary to reduce the power consumption totally and globally.

2 The essence of neural networks: Pattern information representation and pattern information processing

Modern neuro-AI is based on two principles, namely, "pattern information representation" and "pattern information processing." That is, many neurons represent information in cooperation, and the inner product of the weights and the multiple-neuron information yields the outputs [9]. In contrast, "symbol information representation" and "symbol information processing" are the bases of conventional logic-AI, where we assign meanings to individual bits and flags to realize logic processing and deduction. They are also the basis of the so-called von-Neumann-type hardware, which can additionally conduct numerical processing with roles assigned to a most-significant bit and the other bits.

First, we describe the difference and similarity between these two principles, going back to the most basic point. Fig. 1 shows (a) neurons in neural networks for pattern information processing and (b) logic gates in logic circuits for symbol information processing. The simplest McCulloch-Pitts model relates the output signals $y_j$ to the input signals $x_i$ with an activation function $f$ and a threshold $\theta_j$ as

$$ y_j = f\Big( \sum_i w_{ji} x_i - \theta_j \Big) \qquad (1) $$

Similarly, the input signals at the logic gates $x_i$ (usually 0 or 1) and the output signals $y_j$ are related to each other with a step function $1(\cdot)$ as

$$ y_j = 1\Big( \sum_i 1 \cdot x_i - \theta_j \Big) \qquad (2) $$

where we can obtain an AND gate by adjusting the threshold so that $2 < \theta_j < 3$ (for example, 2.5), or an OR gate by setting $\theta_j = 0.5$, for example. In this sense, logic gates are very similar to neurons.
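To make the correspondence concrete, the following minimal Python sketch (our illustration, not part of the original paper) implements Eq. (1) and specializes it to Eq. (2) with unit weights and a step activation; the variable names are our own.

```python
import numpy as np

def mp_neuron(x, w, theta):
    """McCulloch-Pitts neuron, Eq. (1), with a step activation f."""
    return 1.0 if np.dot(w, x) - theta >= 0 else 0.0

# Eq. (2): a logic gate is the special case with all weights equal to 1.
x = np.array([1.0, 1.0, 0.0])      # three binary inputs
w = np.ones(3)
print(mp_neuron(x, w, theta=2.5))  # AND gate (2 < theta < 3) -> 0.0
print(mp_neuron(x, w, theta=0.5))  # OR gate (theta = 0.5)    -> 1.0
```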

⋆ A part of this work was presented in Oyo Butsuri of the Japan Society of Applied Physics in 2019 [10]. This work was supported in part by JSPS KAKENHI Grant No. 18H04105, in part by the Tohoku University RIEC Cooperative Research Project, and also in part by the New Energy and Industrial Technology Development Organization (NEDO) 18102285-0.


Fig. 1. Fundamental difference and similarity in their principles between (a) neurons in neural networks for pattern information representation/processing and (b) logic gates in logic circuits for symbol information representation/processing.

However, neurons usually receive many input signals transmitted from many other neurons through many synapses, and their weighting factors $w_{ji}$ take continuous values. Emphasizing these facts, we can say that the neural internal state $u_j$ is the inner product of the weights $\mathbf{w} = [w_{ji}]$ and the input signals $\mathbf{x} = [x_i]$. That is,

$$ u_j = \sum_i w_{ji} x_i - \theta_j = \sum_{i=0}^{J} w_{ji} x_i \qquad (3) $$

where we formally assume $\theta_j = w_{j0}$ and $x_0 = -1$. We call this manner the pattern information representation, where the information is represented by many neurons. We also call the corresponding processing manner the pattern information processing, where the inner product of many weights and input signals yields the output directly. Since the information is supported by many neurons, a fault in a single neuron or a few neurons is not very influential on the output value.
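The threshold absorption in Eq. (3) is easy to verify numerically. The following sketch (our illustration, with arbitrary random values) checks that prepending $w_{j0} = \theta_j$ and $x_0 = -1$ reproduces the explicit-threshold inner product, and also that zeroing one of many inputs shifts the result only slightly, illustrating the fault tolerance just mentioned.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)               # weights w_j1 ... w_j100
x = rng.normal(size=100)               # input pattern over 100 neurons
theta = 0.3

u_explicit = w @ x - theta             # left-hand form of Eq. (3)
w_ext = np.concatenate(([theta], w))   # absorb theta as w_j0
x_ext = np.concatenate(([-1.0], x))    # with x_0 = -1
assert np.isclose(u_explicit, w_ext @ x_ext)   # right-hand form of Eq. (3)

# Pattern robustness: a fault in one of 100 neurons barely moves u_j.
x_faulty = x.copy()
x_faulty[17] = 0.0
print(abs((w @ x_faulty - theta) - u_explicit))   # small perturbation
```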

It is also significant that a piece of information "A" that is only a little different from information "B" is still treated as very similar to "B." This is the origin of the generalization ability that neural networks acquire. Statistically, averaging the inner product yields the correlation between the weights and the input signals. The importance of correlation is found not only in such microscopic single-neuron activities but also in macroscopic big-data processing in the neuro-AI world.

In contrast, a logic gate usually has only a small number of input signals, so that a fault in a single input results in an output error. Generalization ability is also unobtainable. However, the logic circuit can fully utilize its small number of inputs to represent information densely by attaching importance to a single bit or a single flag. This usage was meaningful in particular when memory capacity was quite limited, as it was decades ago. This is the basis of the CPUs (central processing units) used in von-Neumann-type computers, which is very different from that of the brain.

3 Serious issues found in the present hardware: Inconsistency between the processing principle and the physical reality

It is possible to conduct neural network processing on symbol information representation/processing hardware by a software-based strategy. This is the present neuro-AI. However, there exists an inconsistency between the processing principles and the hardware bases [10]. This point is the most essential issue, and it results in serious problems such as the huge power consumption. This inconsistency is illustrated in Fig. 2.

Table 1 lists the two most serious problems in large-scale integrated circuits such as the present CPUs in von-Neumann-type computers. The variability in element characteristics is the biggest barrier to Moore's law.


Fig. 2. Conceptual illustration showing the physical reservoir hardware consistent with the neuro-AI principles, as well as the von-Neumann-type hardware inconsistent with them.

Table 1. The two most serious problems in large-scale integrated circuits, their symptoms, and their causes.

Problem: Element variability
Symptom: Increase of errors in logic circuits
Cause: Extreme reduction of the number of atoms in a transistor, and hence an increase of the standard deviation of the electric characteristics relative to their mean

Problem: Wiring explosion
Symptom: Increase of the energy consumed to charge and discharge the physical wires
Cause: Relative increase of the wiring area as the transistor size shrinks

We have to go beyond this situation. We can solve the variability problem by introducing neural learning dynamics into the devices. One way is the reservoir computing described below. This leads to "non-elaborate" devices, which do not require too much elaboration in the fabrication stage. Such a device acquires normal operation through exposure to its environment, which also means that the device works flexibly even if the environment changes.

The wiring explosion is very serious; it is almost equivalent to an energy consumption explosion. An ideal device should be free of hard wiring. Such a device is possible by using waves. For example, an optical reservoir computing system uses lightwave to map time-sequential signals to the spatiotemporal domain. A spin-wave reservoir computing device uses charged particles, i.e., electrons, but they do not flow, so it is free from hard wiring. There exists a framework with which such wave-based neural networks are suitably constructed: the complex-valued neural networks [7, 9].

4 Reservoir computing hardware

Presently, many researchers pay attention to reservoir computing [1, 16, 20, 32, 39–44, 46–50]. One of the most attractive points of reservoir computing is the fact that it is realizable by using physical phenomena directly, with flexible utilization of the spatiotemporal domain. Physical reservoirs are consistent with pattern information representation/processing, and are thus free from the inconsistency between the processing principles and the physical realization. Those using waves are also free from hard wiring, resulting in no charging or discharging and consequently low power consumption. There are various ways to realize reservoirs [1]. In this paper, we review two reservoirs we proposed, namely, a lightwave reservoir system and a spin-wave reservoir chip.
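For readers unfamiliar with the computational skeleton that a physical reservoir replaces, the following is a minimal echo-state-network sketch in Python (our own illustration with assumed sizes and an assumed toy task, not the authors' system): a fixed random recurrent network expands the input into a spatiotemporal state, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 200, 1000                       # reservoir size, sequence length
u = rng.uniform(-1, 1, T)              # scalar input sequence
W_in = rng.uniform(-0.5, 0.5, N)       # fixed (untrained) input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 < 1

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):                     # drive the fixed reservoir
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Toy task: reconstruct a nonlinearly transformed, delayed input.
y_target = np.roll(np.sin(np.pi * u), 5)
S, y = states[100:], y_target[100:]    # discard a washout period
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)  # ridge readout
print("training MSE:", np.mean((S @ W_out - y) ** 2))
```

In a physical reservoir, the loop over $t$ is replaced by the device dynamics themselves; only the final ridge-regression readout remains as explicit computation.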

Fig. 3 shows the basic structure of a lightwave reservoir computing system [38]. A semiconductor laser is fed with feedback from outside of the laser cavity. The oscillation then departs from the usual oscillation and becomes unstable. We set the operating point at the so-called "edge of chaos," a near-chaotic point. When we feed a time-sequential input signal at the distributed Bragg reflector (DBR) having electrodes, the laser generates diverse signals dependent on the input. We weight and gather these signals to synthesize a desirable output. This system is fabricated as a small optics suitable for integrated optical circuits. Besides this, various kinds of lightwave reservoirs have been proposed, mainly by using optical fibers [5, 30].
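Laser-with-feedback schemes like this are often modeled in simplified form as a single nonlinear node with delayed feedback, where samples taken within one delay period act as "virtual" reservoir nodes [5, 30]. The following discrete-time caricature (our own, heavily simplified sketch that drops the inter-node coupling a physical low-pass response would add) conveys the time-multiplexing idea.

```python
import numpy as np

rng = np.random.default_rng(7)
Nv, T = 50, 300                     # virtual nodes per delay loop, time steps
mask = rng.choice([-1.0, 1.0], Nv)  # fixed input mask over one delay period
beta, eta = 0.5, 0.4                # feedback strength, input scaling

u = rng.uniform(-1, 1, T)           # time-sequential input signal
loop = np.zeros(Nv)                 # node states around the delay loop
states = np.empty((T, Nv))
for t in range(T):
    # Each virtual node sees its own value one delay period ago plus
    # the masked input; tanh stands in for the laser nonlinearity.
    loop = np.tanh(beta * loop + eta * mask * u[t])
    states[t] = loop
# 'states' can be fed to the same trained linear readout as above.
```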


Fig. 3. Micro-optics lightwave reservoir computing system [38].


Fig. 4. (a) Structure and (b) wave propagation example of a spin-wave reservoir computing chip device [34].

Fig. 4(a) shows the basic structure of a spin-wave reservoir chip composed of a magneto-electric coupling layer, a garnet film and a conductive substrate [33, 34]. Spin waves excited in the garnet film at the red electrodes propagate with scattering and reflection, and interfere nonlinearly. They also exhibit hysteresis, anisotropy and dispersion. Such complex dynamics of the spin wave transforms the input signals into various time-sequential signals dependent on the inputs, which realizes the synthesis of desired output signals at simple output neurons connected to this chip. If we implemented an equivalent transformation with a von-Neumann-type computer by using software, we would need a large calculation time and energy consumption. Instead, we rely on the physics of the spin wave. We have an equivalently infinite number of neurons in the garnet film. There is no explicit wiring; the spin wave itself realizes the communication among the neurons.
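To convey how wave propagation can serve as the fixed transformation, the sketch below (our own toy, not the authors' device model) drives a damped one-dimensional discrete wave equation at one point and samples it at several others; the linear wave physics here merely stands in for the far richer nonlinear, hysteretic spin-wave dynamics of the garnet film.

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 100, 400                  # grid points, time steps
c2, damp = 0.25, 0.995           # wave speed^2 (stable), damping factor
u = rng.uniform(-1, 1, T)        # time-sequential input signal

a = np.zeros(L)
a_prev = np.zeros(L)
taps = np.arange(10, L, 10)      # readout points act as "neurons"
states = np.empty((T, taps.size))
for t in range(T):
    # Leapfrog update of the 1-D wave equation (periodic boundary
    # via np.roll, kept only for brevity).
    lap = np.roll(a, 1) + np.roll(a, -1) - 2 * a
    a_next = damp * (2 * a - a_prev + c2 * lap)
    a_next[5] += u[t]            # excite the "film" at one electrode
    a_prev, a = a, a_next
    states[t] = np.tanh(a[taps]) # nonlinear sampling of the wave field
# A linear readout trained on 'states' completes the reservoir computer.
```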

5 Complex-valued neural networks: The coherent neural networks

The framework of complex-valued neural networks (CVNNs) provides the principles of the dynamics in such wave-based neural networks using lightwave, electromagnetic waves, sonic or ultrasonic waves, spin waves and various quantum waves [9]. It is also the framework for neural networks that deal with information acquired through wave phenomena, such as radar imaging and sensing systems.
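As one concrete instance, a widely used complex-valued neuron applies an amplitude-phase activation that saturates the amplitude while passing the phase through. The following is a minimal sketch under our own parameter choices; this activation is one standard option in the CVNN literature [9].

```python
import numpy as np

def cvnn_neuron(x, w):
    """Complex-valued neuron: squash the amplitude, keep the phase."""
    u = np.sum(w * x)                           # complex weighted sum
    return np.tanh(np.abs(u)) * np.exp(1j * np.angle(u))

x = np.exp(1j * np.array([0.1, 0.7, -0.4]))     # unit-amplitude wave inputs
w = np.array([0.5 + 0.2j, -0.3j, 0.8 + 0.0j])   # complex synaptic weights
print(cvnn_neuron(x, w))                        # output amplitude and phase
```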

The merit of CVNNs lies in the excellent generalization ability arising from the combination of complex numbers and the learning process. CVNNs differ from double-dimensional real-valued neural networks as follows. Complex numbers have multiple manners of representation. One is the real 2×2 matrix representation. Here, it is notable that this matrix has only two parameters, corresponding to phase and amplitude, instead of the usual four components of a 2×2 matrix [3]. This fact means a low degree of freedom in the learning process, i.e., a sparsity built into the complex number inherently and intrinsically. This feature works effectively in dealing with phase and amplitude adaptively [8, 17–19]. In other words, the network has a high ability to estimate and/or predict the phase and amplitude values included in waves (and the polarization in the case of quaternion networks [27–29, 35–37]). This advantage does not become obvious in the complex numbers alone, but makes its appearance only when they are combined with the learning process. Most electronic systems deal with waves, sometimes with the help of the Fourier transform. The CVNN framework is therefore useful in various electronic systems.
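The constrained 2×2 matrix representation mentioned above is easy to state explicitly. The check below (our illustration) confirms that a complex weight $w = a + ib$ acts on a signal exactly as a two-parameter rotation-scaling matrix, rather than as a general four-parameter 2×2 matrix.

```python
import numpy as np

def as_matrix(w: complex) -> np.ndarray:
    """Real 2x2 representation of a complex weight: two free parameters
    (amplitude |w| and phase arg w), not the four of a general matrix."""
    a, b = w.real, w.imag
    return np.array([[a, -b],
                     [b,  a]])          # = |w| * rotation(arg w)

w = 0.8 * np.exp(1j * 0.3)              # weight set by amplitude and phase
x = 1.0 - 0.5j                          # complex input signal
y = w * x                               # complex-valued synapse
y_vec = as_matrix(w) @ np.array([x.real, x.imag])
assert np.allclose([y.real, y.imag], y_vec)
```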

Fig. 5. A chart mapping complex-valued neural networks and other information processing frameworks onto coordinates whose horizontal axis indicates pattern information representation/processing versus symbol information representation/processing, and whose vertical axis indicates the information connection amount/density [15].

In general, if the number of parameters (connection weights) in a neural network is too small, the errors at the teacher signal points (learning examples) do not converge sufficiently close to zero. Conversely, when it is too large, the network has too high a degree of freedom, resulting in worse generalization characteristics. A CVNN restricts the parameters in such a manner that every neuron processes the phase and amplitude of the waves it treats appropriately. In this sense, a CVNN is a coherent neural network. Its first application fields include adaptive processing in radar imaging and wireless communications [2, 45].

Wave phenomena, including quantum effects, are significantly important also in devices for information processing. This fact leads to the expectation that new information devices utilizing wave phenomena and the CVNN principle will be developed and fabricated in the near future. Previous works proposed, for example, coherent lightwave neural networks [4, 6, 11–13]. They utilize the phase and amplitude of lightwave for information processing [21–24, 31]. They also presented preliminary experiments on frequency-domain multiplexing in the learning and processing [25, 26].

Fig. 5 is a chart mapping CVNNs and other information processing frameworks onto coordinates whose two axes show pattern information representation/processing versus symbol information representation/processing (horizontal) and the information connection amount/density (vertical) [15]. Reservoir computing based on CVNNs occupies the large upper-right area, which suggests its substantial development in the near future [14].

6 Conclusion

We discussed the principles of neuro-AI systems in their dynamics, namely, pattern information representation and pattern information processing. We then noted that physical reservoir computing realizes hardware consistent with these neural principles. We showed two examples, a micro-optics lightwave reservoir computing system and a spin-wave reservoir chip device, both of which utilize waves. Finally, we discussed the merits of CVNNs in such wave-based reservoir computing hardware.

References

1. Special issue: Reservoir Computing. The Journal of IEICE 102, 107–149 (February 2019) (in Japanese)
2. Ding, T., Hirose, A.: Fading channel prediction based on combination of complex-valued neural networks and chirp Z-transform. IEEE Transactions on Neural Networks and Learning Systems 25(9), 1686–1695 (September 2014)
3. Ebbinghaus, H.D., Hermes, H., Hirzebruch, F., Koecher, M., Mainzer, K., Neukirch, J., Prestel, A., Remmert, R.: Numbers. Springer-Verlag (Chapter 3, Section 2) (1983)
4. Handayani, A., Suksmono, A.B., Mengko, T.L.R., Hirose, A.: Blood vessel segmentation in complex-valued magnetic resonance images with snake active contour model. International Journal of E-Health and Medical Communications 1, 41–52 (2010)
5. Heroux, J.B., Numata, H., Kanazawa, N., Nakano, D.: Delayed feedback reservoir computing with VCSEL. In: International Joint Conference on Neural Networks (IJCNN) 2018 Rio de Janeiro. pp. 594–602. IEEE/INNS (2018)
6. Hirose, A.: Applications of complex-valued neural networks to coherent optical computing using phase-sensitive detection scheme. Information Sciences – Applications – 2, 103–117 (1994)
7. Hirose, A.: Brain-type information processing systems discussed from device level viewpoints. The Brain and Neural Networks 5(2), 82–83 (1998) (The Journal of Japanese Neural Network Society, in Japanese)
8. Hirose, A.: Complex-valued neural networks: The merits and their origins. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN) 2009 Atlanta. pp. 1237–1244. IEEE – INNS (June 2009)
9. Hirose, A.: Complex-Valued Neural Networks, 2nd Edition. Springer, Heidelberg, Berlin, New York (2012)
10. Hirose, A.: Reservoir computing and complex-valued neural networks. Oyo Butsuri (The Journal of the Japan Society of Applied Physics) 88(8), 559–563 (2019) (in Japanese)
11. Hirose, A., Eckmiller, R.: Behavior control of coherent-type neural networks by carrier-frequency modulation. IEEE Transactions on Neural Networks 7, 1032–1034 (1996)
12. Hirose, A., Eckmiller, R.: Coherent optical neural networks that have optical-frequency-controlled behavior and generalization ability in the frequency domain. Applied Optics 35(5), 836–843 (1996)
13. Hirose, A., Kiuchi, M.: Coherent optical associative memory system that processes complex-amplitude information. IEEE Photonics Technology Letters 12(5), 564–566 (2000)
14. Hirose, A., Nakano, D.: Prospects for reservoir computing. The Journal of IEICE 102(2), 134–139 (February 2019) (in Japanese)
15. Hirose, A., Tanaka, G., Takeda, S., Yamane, T., Numata, H., Kanazawa, N., Heroux, J.B., Nakano, D., Nakane, R.: Complex-valued neural networks for wave-based realization of reservoir computing. In: International Conference on Neural Information Processing (ICONIP) 2018 Siem Reap. pp. 449–456 (2018)
16. Hirose, A., Tanaka, G., Takeda, S., Yamane, T., Numata, H., Kanazawa, N., Heroux, J.B., Nakano, D., Nakane, R.: Proposal of carrier-wave reservoir computing. In: International Conference on Neural Information Processing (ICONIP) 2018 Siem Reap. pp. 616–624. APNNS (2018)
17. Hirose, A., Yoshida, S.: Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence. IEEE Transactions on Neural Networks and Learning Systems 23, 541–551 (2012)
18. Hu, S., Hirose, A.: Proposal of millimeter-wave adaptive glucose-concentration estimation system using complex-valued neural networks. In: International Geoscience and Remote Sensing Symposium (IGARSS) 2018 Valencia. IEEE (2018)
19. Hu, S., Hirose, A.: Millimeter-wave adaptive glucose concentration estimation with complex-valued neural networks. IEEE Transactions on Biomedical Engineering 66, 2065–2071 (July 2019)
20. Jaeger, H.: The "echo state" approach to analysing and training recurrent neural networks – With an erratum note. Technical Report 148, German National Research Center for Information Technology (GMD), Bonn, Germany (2001)
21. Kawata, S., Hirose, A.: Coherent lightwave associative memory system that possesses a carrier-frequency-controlled behavior. Optical Engineering 42(9), 2670–2675 (2003)
22. Kawata, S., Hirose, A.: Coherent lightwave neural network systems: Use of frequency domain. World Scientific Publishing Co., Singapore (2003)
23. Kawata, S., Hirose, A.: Coherent optical adaptive filter that has an arbitrary generalization characteristic in frequency-domain by using plural path difference. In: Int'l Conf. on Artificial Neural Networks (ICANN) 2003 / Int'l Conf. on Neural Information Processing (ICONIP) 2003 Istanbul. vol. Suppl. Proc., pp. 426–429 (2003)
24. Kawata, S., Hirose, A.: Coherent optical neural network that learns desirable phase values in frequency domain by using multiple optical-path differences. Optics Letters 28(24), 2524–2526 (2003)
25. Kawata, S., Hirose, A.: Frequency-multiplexed logic circuit based on a coherent optical neural network. Applied Optics 44(19), 4053–4059 (2005)
26. Kawata, S., Hirose, A.: Frequency-multiplexing ability of complex-valued Hebbian learning in logic gates. International Journal of Neural Systems 12(1), 43–51 (2008)
27. Kim, H., Hirose, A.: Codebook-based hierarchical polarization feature for unsupervised fine land classification using high-resolution PolSAR data. In: International Geoscience and Remote Sensing Symposium (IGARSS) 2018 Valencia. No. WEP1.PD.9 (July 2018)
28. Kim, H., Hirose, A.: Unsupervised fine land classification using quaternion auto-encoder-based polarization feature extraction and self-organizing mapping. IEEE Transactions on Geoscience and Remote Sensing 56(3), 1839–1851 (March 2018)
29. Kinugawa, K., Fang, S., Usami, N., Hirose, A.: Isotropization of quaternion-neural-network-based PolSAR adaptive land classification in Poincare-sphere parameter space. IEEE Geoscience and Remote Sensing Letters 15(8), 1234–1238 (2018)
30. Larger, L., Soriano, M.C., Brunner, D., Appeltant, L., Gutierrez, J.M., Pesquera, L., Mirasso, C.R., Fischer, I.: Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing. Optics Express 20(3), 3241–3249 (2012)
31. Limmanee, A., Kawata, S., Hirose, A.: Phase signal embedment in densely frequency-multiplexed coherent neural networks. In: OSA Topical Meeting on Information Photonics (OSA–IP) 2005 Charlotte. No. ITuA2 (June 2005)
32. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation 14(11), 2531–2560 (2002)
33. Nakane, R., Tanaka, G., Hirose, A.: Demonstration of spin-wave-based reservoir computing for next-generation machine-learning devices. In: International Conference on Magnetism (ICM) 2018 San Francisco. pp. 26–27 (July 2018)
34. Nakane, R., Tanaka, G., Hirose, A.: Reservoir computing with spin waves excited in a garnet film. IEEE Access 6, 4462–4469 (2018)
35. Shang, F., Hirose, A.: PolSAR land classification by using quaternion-valued neural networks. In: Asia-Pacific Conference on Synthetic Aperture Radar (APSAR) 2013 Tsukuba. pp. 593–596. IEEE Geoscience and Remote Sensing Society Japan Chapter / IEICE Electronics Society (September 2013)
36. Shang, F., Hirose, A.: Quaternion neural-network-based PolSAR land classification in Poincare-sphere-parameter space. IEEE Transactions on Geoscience and Remote Sensing 52(9), 5693–5703 (September 2014)
37. Shang, F., Hirose, A.: Averaged-Stokes-vector-based polarimetric SAR data interpretation. IEEE Transactions on Geoscience and Remote Sensing 53(8), 4536–4547 (August 2015)
38. Takeda, S., Nakano, D., Yamane, T., Tanaka, G., Nakane, R., Hirose, A., Nakagawa, S.: Photonic reservoir computing based on laser dynamics with external feedback. In: International Conference on Neural Information Processing (ICONIP) 2016 Kyoto. pp. 222–230 (2016)
39. Tanaka, G.: Concept of reservoir computing and its recent trends. The Journal of IEICE 102(2), 108–113 (February 2019) (in Japanese)
40. Tanaka, G., Nakane, R., Takeuchi, T., Yamane, T., Nakano, D., Katayama, Y., Hirose, A.: Spatially arranged sparse recurrent neural networks for energy efficient associative memory. IEEE Transactions on Neural Networks and Learning Systems (2019)
41. Tanaka, G., Nakane, R., Yamane, T., Nakano, D., Takeda, S., Nakagawa, S., Hirose, A.: Exploiting heterogeneous units for reservoir computing with simple architecture. In: International Conference on Neural Information Processing (ICONIP) 2016 Kyoto. pp. 187–194 (2016)
42. Tanaka, G., Nakane, R., Yamane, T., Takeda, S., Nakano, D., Nakagawa, S., Hirose, A.: Nonlinear dynamics of memristive networks and its application to reservoir computing. In: International Symposium on Nonlinear Theory and Its Applications (NOLTA) 2017 Cancun. A2L-E-2 (2017)
43. Tanaka, G., Nakane, R., Yamane, T., Takeda, S., Nakano, D., Nakagawa, S., Hirose, A.: Waveform classification by memristive reservoir computing. In: International Conference on Neural Information Processing (ICONIP) 2017 Guangzhou. pp. 457–465 (2017)
44. Tanaka, G., Yamane, T., Heroux, J.B., Nakane, R., Kanazawa, N., Takeda, S., Numata, H., Nakano, D., Hirose, A.: Recent advances in physical reservoir computing: A review. Neural Networks 115, 100–123 (2019)
45. Terabayashi, K., Hirose, A.: Ultra-wideband direction-of-arrival estimation using complex-valued spatiotemporal neural networks. IEEE Transactions on Neural Networks and Learning Systems 25(9), 1727–1732 (September 2014)
46. Verstraeten, D., Schrauwen, B., D'Haene, M., Stroobandt, D.: An experimental unification of reservoir computing methods. Neural Networks 20(3), 391–403 (April 2007)
47. Yamane, T., Katayama, Y., Nakane, R., Tanaka, G., Nakano, D.: Wave-based reservoir computing by synchronization of coupled oscillators. In: International Conference on Neural Information Processing (ICONIP) 2015 Istanbul. vol. 3, pp. 198–205 (2015)
48. Yamane, T., Numata, H., Heroux, J.B., Kanazawa, N., Takeda, S., Tanaka, G., Nakane, R., Hirose, A., Nakano, D.: Dimensionality reduction by reservoir computing and its application to IoT edge computing. In: International Conference on Neural Information Processing (ICONIP) 2018 Siem Reap. pp. 635–644 (December 2018)
49. Yamane, T., Takeda, S., Nakano, D., Tanaka, G., Nakane, R., Hirose, A., Nakagawa, S.: Simulation study of physical reservoir computing by nonlinear deterministic time series analysis. In: International Conference on Neural Information Processing (ICONIP) 2017 Guangzhou. pp. 639–647 (2017)
50. Yamane, T., Takeda, S., Nakano, D., Tanaka, G., Nakane, R., Nakagawa, S., Hirose, A.: Dynamics of reservoir computing at the edge of stability. In: International Conference on Neural Information Processing (ICONIP) 2016 Kyoto. pp. 205–212 (2016)