Workshop on Neuromorphic High-Speed Communications (NeuCoS)

Thursday, December 9th, 2021

We are delighted to announce the first Workshop on Neuromorphic High-Speed Communications (NeuCoS), which will take place virtually on the 9th of December 2021, jointly organized by the Karlsruhe Institute of Technology (KIT) and the Eindhoven University of Technology (TU/e).

In recent years, a lot of progress has been made on harnessing powerful machine learning methods to revolutionise the way we communicate data, allowing for greater throughput, extended reach, and unparalleled flexibility. However, as conventional semiconductor technology reaches its scaling and miniaturization limits, attention has shifted towards implementing learning algorithms using alternative designs based on neuromorphic computing to power the next generation of intelligent communication systems.

Featuring leading international experts, the NeuCoS workshop aims to explore the latest advances in applying machine learning and neuromorphic computing to the design of high-speed, energy-efficient communication systems.

Workshop Program

We will have a one-day workshop including two keynote presentations and eight invited talks. The tentative program is listed below:

Time | Speaker | Affiliation | Title
9:30 | Osvaldo Simeone | King's College London | Probabilistic Neuromorphic Computing and Applications to Communications (Keynote) [Slides]
10:30 | David Saad | Aston University | Space of Functions Computed by Deep-layered Machines [Slides]
Coffee break (15 minutes)
11:15 | Werner Teich | University of Ulm | From Recurrent Neural Network based Algorithm to Low-Power High-Speed Analog Circuits for Communications
11:45 | Alexios Balatsoukas-Stimming | Eindhoven University of Technology | Machine Learning for Non-linear Signal Processing in Communications [Slides]
Lunch break and get-together on gather.town (1 hour)
13:30 | Sebastian Cammerer | NVIDIA Germany | Trainable Communication Systems: Should we Learn Everything Again? (Keynote)
14:30 | Darko Zibar | Technical University of Denmark (DTU) | Optimum Phase Measurement in the Presence of Amplifier Noise [Slides]
15:00 | Christian Häger | Chalmers University of Technology | Physics-Based Machine Learning for Fiber-Optic Communication Systems [Slides]
Coffee break (15 minutes)
15:45 | Vahid Aref | Nokia | Applications of Deep Learning for Coherent Optical Communications
16:15 | Francesco da Ros | Technical University of Denmark (DTU) | Reservoir Computing for Short-reach Optical Communication [Slides]
16:45 | Tim Uhlemann | University of Stuttgart | Learning Spectrally Efficient and Nonlinear Pulse Shaping for Coherent Optical Communications
Virtual get-together on gather.town for discussions (open-ended)


Registration

If you are interested in participating in the workshop, registration is free of charge. Please contact Holger Jäkel (holger.jaekel∂kit.edu) to register.

Workshop Format

The workshop will be held virtually via Zoom. The Zoom link will be distributed to all registered participants shortly before the workshop.

Due to the ongoing COVID-19 pandemic and the dynamic situation in Germany, we have decided to move the workshop to a fully virtual format.

Social Programme

We will have virtual get-togethers on the platform gather.town to allow for discussions.

Funding

The workshop is funded by the Excellence Initiative of the German Research Foundation (DFG) and supported by the Celtic-Next project AI-NET-ANTILLAS (funded by the German Federal Ministry of Education and Research) and the European Research Council.


Details about the Talks and Speakers

Osvaldo Simeone (King's College London, UK), "Probabilistic Neuromorphic Computing and Applications to Communications"

Probabilistic neuromorphic computing leverages randomness at the level of neuronal or synaptic activities to enable quantification of aleatoric and epistemic uncertainty. In this talk, I will provide a brief review of this subject by highlighting information-theoretic and Bayesian principles along with some applications, including to communication systems.
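To make the two notions of uncertainty concrete, here is a minimal self-contained sketch (our own toy, not the talk's model): a single stochastic neuron whose Bernoulli spiking captures aleatoric uncertainty, while Monte Carlo sampling from a hypothetical Gaussian posterior over its synaptic weight captures epistemic uncertainty. All parameter values are invented for illustration.

```python
# Toy probabilistic neuron: Bernoulli spiking (aleatoric) under a sampled
# Gaussian weight posterior (epistemic). All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

mu, sigma = 1.2, 0.4          # hypothetical weight posterior q(w) = N(mu, sigma^2)
x = 0.9                       # aggregate input activity (made up)

n_w, n_s = 200, 200           # Monte Carlo sample counts
rates = []
for w in rng.normal(mu, sigma, n_w):   # epistemic: sample the weight posterior
    p = sigmoid(w * x)                 # spiking probability for this weight
    spikes = rng.binomial(1, p, n_s)   # aleatoric: Bernoulli spike draws
    rates.append(spikes.mean())

rates = np.array(rates)
print("mean firing rate:", rates.mean())
print("epistemic spread (std across weight samples):", rates.std())
```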

Osvaldo Simeone is a Professor of Information Engineering with the Centre for Telecommunications Research at the Department of Engineering of King's College London, where he directs the King's Communications, Learning and Information Processing lab. He received an M.Sc. degree (with honors) and a Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2005, respectively. From 2006 to 2017, he was a faculty member of the Electrical and Computer Engineering (ECE) Department at the New Jersey Institute of Technology (NJIT), where he was affiliated with the Center for Wireless Information Processing (CWiP). His research interests include information theory, machine learning, wireless communications, and neuromorphic computing. Dr Simeone is a co-recipient of the 2021 IEEE Vehicular Technology Society Jack Neubauer Memorial Award, the 2019 IEEE Communication Society Best Tutorial Paper Award, the 2018 IEEE Signal Processing Best Paper Award, the 2017 JCN Best Paper Award, the 2015 IEEE Communication Society Best Tutorial Paper Award, and the Best Paper Awards of IEEE SPAWC 2007 and IEEE WRECOM 2007. He was awarded a Consolidator Grant by the European Research Council (ERC) in 2016. His research has been supported by the U.S. NSF, the ERC, the Vienna Science and Technology Fund, as well as by a number of industrial collaborations. He currently serves on the editorial board of the IEEE Signal Processing Magazine and is the chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society. He was a Distinguished Lecturer of the IEEE Information Theory Society in 2017 and 2018, and he is currently a Distinguished Lecturer of the IEEE Communications Society. Dr Simeone is a co-author of two monographs, two edited books published by Cambridge University Press, and more than 150 research journal papers. He is a Fellow of the IET and of the IEEE.


David Saad (Aston University, Birmingham, UK), "Space of Functions Computed by Deep-layered Machines"

Recent engineering achievements of deep-learning machines have both impressed and intrigued the scientific community, owing to our limited theoretical understanding of the underlying reasons for their success. This work provides a general, principled framework for investigating the function space of different types of deep-learning machines, based on generating functional analysis. It facilitates studying the number of solution networks at a given error around a reference multi-layer network. Exploring the function landscape of densely connected networks, we uncover a general layer-by-layer learning behaviour, while the study of sparsely connected networks indicates the advantage of having more layers for increasing the generalization ability of such models. The framework accommodates other network architectures and computing elements, including networks with correlated weights, convolutional networks, and discretised variables. Additionally, large deviation theory allows one to study the sensitivity of networks to noise, weight binarisation, and sparsification. A similar approach also facilitates studying the distribution of Boolean functions computed by recurrent and layer-dependent architectures, which we find to be the same. Depending on the initial conditions and computing elements used, we characterize the space of functions computed in the large-depth limit and show that the macroscopic entropy of Boolean functions is either monotonically increasing or decreasing with growing depth.
B. Li and D. Saad, Physical Review Letters 120, 248301 (2018).
B. Li and D. Saad, Journal of Physics A 53, 104002 (2020).
A. Mozeika, B. Li, and D. Saad, Physical Review Letters 125, 168301 (2020).
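As a toy numerical companion to the abstract (our own construction, not the generating functional analysis of the papers above), one can sample random sign-activation layered networks over a small, enumerable input space and estimate the entropy of the distribution of Boolean functions they compute as the depth grows:

```python
# Toy estimate of the diversity (entropy) of Boolean functions computed by
# random sign-activation layered networks of increasing depth. With n = 3
# input bits, the full truth table (8 entries) is enumerable.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
n = 3
X = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)])
X = 2 * X - 1                             # map {0,1} -> {-1,+1}

def random_truth_table(depth, width=8):
    h = np.sign(X.astype(float) @ rng.normal(size=(n, width)))
    for _ in range(depth - 1):            # hidden sign-activation layers
        h = np.sign(h @ rng.normal(size=(width, width)))
    out = np.sign(h @ rng.normal(size=width))
    return tuple(out.astype(int))         # truth table over all 2^n inputs

for depth in (1, 2, 4, 8):
    counts = Counter(random_truth_table(depth) for _ in range(2000))
    p = np.array(list(counts.values())) / 2000
    print("depth", depth, "empirical entropy ~", -(p * np.log2(p)).sum(), "bits")
```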

David Saad holds the 50th Anniversary Chair of Complexity Physics at Aston University, UK. He received a BA in physics and a BSc in electrical engineering from Technion, an MSc in physics and a Ph.D. in electrical engineering from Tel-Aviv University. He joined Edinburgh University in 1992 and Aston in 1995. His research focuses on the application of statistical physics methods to several fields, including neural networks, error-correcting codes, multi-node communication, network optimization, routing, noisy computation, epidemic spreading and advanced inference methods.


Werner Teich (University of Ulm, Germany), "From Recurrent Neural Network based Algorithm to Low-Power High-Speed Analog Circuits for Communications"

Despite the tremendous progress made in digital signal processing during the last decades, the constraints imposed by high data rate wireless communications are becoming ever more stringent. The development of the wireless internet of things, with massive machine-to-machine communication, has raised the importance of power consumption for sophisticated algorithms such as channel equalization or decoding. The strong link between computational speed and power consumption suggests investigating signal processing with energy efficiency as a prominent design choice. We therefore revisit signal processing with analog circuits and its potential to increase energy efficiency. Channel equalization is chosen as one application of nonlinear signal processing, and a vector equalizer based on a recurrent neural network (RNN) structure is taken as an example to demonstrate the potential of the state of the art in very large scale integration (VLSI) design. We show that it is possible to improve the energy requirement by three to four orders of magnitude compared with digital circuits. As a second example, we consider iterative decoding algorithms based on message passing. They can be represented by a generalized RNN structure, which again allows an equivalent analog circuit to be derived. Compared to digital circuits, analog circuits can perform equalization or iterative decoding with increased computational speed, reduced chip area, and lower power consumption.
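For readers unfamiliar with RNN-based vector equalization, the following is a minimal discrete-time sketch of the underlying recursion (a toy with an invented correlation matrix; the talk's contribution lies in realizing such dynamics as analog circuits):

```python
# Toy RNN (Hopfield-like) vector equalizer for BPSK symbols observed through
# a correlated channel: iterate e <- tanh(beta * (y - (R - I) e)), which
# successively cancels the interference term.
import numpy as np

rng = np.random.default_rng(2)
K = 8
a = rng.choice([-1.0, 1.0], size=K)        # transmitted BPSK symbols

C = 0.1 * rng.standard_normal((K, K))      # invented mild crosstalk
R = np.eye(K) + (C + C.T) / 2              # symmetric correlation matrix
np.fill_diagonal(R, 1.0)

y = R @ a + 0.05 * rng.standard_normal(K)  # matched-filter output plus noise

e = np.zeros(K)
for _ in range(100):                       # relax to a fixed point
    e = np.tanh(5.0 * (y - (R - np.eye(K)) @ e))

print("decisions:  ", np.sign(e).astype(int))
print("transmitted:", a.astype(int))
```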

Werner G. Teich graduated with a M.Sc. in Physics from Oregon State University, Corvallis, Oregon, in 1984. He received the Dipl.-Phys. and the Dr. rer. nat. degree in Physics from the University of Stuttgart in 1985 and 1989, respectively. In 1991 he joined the Department of Information Technology, Ulm University, Germany. Currently, he is Senior Lecturer in Digital Communications at the Institute of Communications Engineering, Ulm University. His research interests are in the general field of digital communications. Specific areas of interest include design and analysis of iterative methods in general, and the application of artificial neural networks for low power nonlinear signal processing with analog electronic circuits in particular.


Alexios Balatsoukas-Stimming (TU Eindhoven, The Netherlands), "Machine Learning for Non-linear Signal Processing in Communications"

The field of machine learning has seen tremendous advances in the past few years, largely due to the abundant processing power and the availability of vast amounts of data that enable effective training of deep neural networks. The main motivation for using machine learning comes from the fact that in some areas, such as image recognition, constructing models that are elegant, tractable, and practically useful is nearly impossible. The field of communications, however, is traditionally built on precise mathematical models that are well understood and have been shown to work exceptionally well for many practical applications. Unfortunately, the ever-increasing throughput and efficiency demands have forced communications systems designers to push the boundaries to such an extent that in many applications conventional mathematical models and signal processing techniques are no longer sufficient to accurately describe the encountered scenarios. This is where machine learning methods can come to the rescue, as they do not require rigid pre-defined models and can extract meaningful structure from data in order to provide useful practical results. In this talk, I will describe several applications of machine learning techniques for signal processing in communications. In particular, I will first talk about the suitability of neural networks for non-linear signal processing tasks in the context of self-interference cancellation for full-duplex communications as well as digital predistortion of power amplifier non-linearities, including hardware implementation results. I will then explain the concept of deep unfolding and I will present its application to self-interference cancellation for full-duplex communications and to 1-bit precoding in massive MIMO systems.
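As a minimal illustration of the neural-network self-interference cancellation idea (a toy memoryless cubic nonlinearity and a small MLP of our choosing, not the architectures from the talk), the network learns to reconstruct the self-interference from the known transmit signal so that it can be subtracted at the receiver:

```python
# Toy NN self-interference canceller: predict the (nonlinear) SI from the
# known transmit samples and subtract it; the loss is the residual SI power.
import torch

torch.manual_seed(0)
N = 4096
x = torch.randn(N, 1)                        # known transmitted samples
si = 0.8 * x + 0.1 * x ** 3                  # invented nonlinear SI channel
rx = si + 0.01 * torch.randn(N, 1)           # received signal (SI + noise)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = torch.mean((rx - net(x)) ** 2)    # power after cancellation
    loss.backward()
    opt.step()

print("residual SI power (dB):", 10 * torch.log10(loss.detach()).item())
```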

Dr. Alexios Balatsoukas-Stimming is an Assistant Professor in the Department of Electrical Engineering of the Eindhoven University of Technology in The Netherlands, and he also holds an Adjunct Assistant Professor position at Rice University, USA. He received an MSc degree in Electronics and Computer Engineering from the Technical University of Crete, Chania, Greece, in 2012 and a PhD in Computer and Communications Sciences from the École polytechnique fédérale de Lausanne, Switzerland, in 2016. During 2018-2019 he was a postdoctoral researcher in the Telecommunications Circuits Laboratory of EPFL, Switzerland, and a visiting researcher at Cornell University, USA, and the University of California at Irvine, USA. Previously, he was a Marie Skłodowska-Curie postdoctoral fellow at the European Laboratory for Particle Physics (CERN) in 2017-2018. During his PhD studies he spent three months in 2015 as an intern at Intel Labs, Hillsboro, USA. Dr. Balatsoukas-Stimming has co-authored more than 60 peer-reviewed publications, two of which have received best paper awards (IEEE ICECS 2013 & 2015) and one of which was a best paper award finalist (IEEE ISCAS 2015). He has served as a program committee member for several conferences on VLSI systems and communications and as the lead editor for the IEEE Communications Society Best Readings in Polar Coding in 2019, and he is currently an Associate Editor for the IEEE Communications Letters. He has also served as a reviewer for more than 20 top-tier journals and conferences, and his reviewing service has been recognized with three IEEE exemplary reviewer awards. His research interests include VLSI circuits for communications, error-correction coding theory and practice, as well as applications of approximate computing and machine learning to signal processing for communications.


Sebastian Cammerer (NVIDIA, Germany), "Trainable Communication Systems: Should we Learn Everything Again?"

In this talk, we summarize recent ideas on end-to-end learning of the entire PHY layer and revisit the state of the art in research as well as its current limitations. In our vision, this path leads towards a universal framework that allows end-to-end optimization of the whole data link based solely on data, without the need for prior mathematical modelling and analysis. Such a trainable communication system inherently tolerates, and even exploits, effects that are difficult to model, such as hardware imperfections and channel uncertainties. However, the practical success of this end-to-end learning vision for PHY layer communications strongly depends on its performance in terms of energy efficiency, scalability, and computational complexity. To overcome the high complexity induced by (too) universal neural network architectures, one needs to limit the degrees of freedom by imposing a carefully selected structure on the transceiver neural networks. We demonstrate that training on the bit-wise mutual information (BMI) allows seamless integration with practical bit-metric decoding (BMD) receivers and lets us benefit from the vast experience gained in the classical design of “codes on graphs”. Going one step further, we apply the well-known concept of iterative (Turbo) receivers to trainable communication systems, leading to so-called Turbo-autoencoders, which can be seen as another step towards neural network structures tailored to communications. Such systems not only achieve competitive, and in some cases even superior, performance, but also facilitate a simplified design flow due to their conceptual elegance and may hence trigger a paradigm shift in how we design future communication systems.
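A compact sketch of what training on the BMI can look like in practice (our simplification: an AWGN channel, a learnable 2D constellation, and an invented per-bit demapper). For uniform bits, the summed per-bit binary cross-entropy equals, up to a constant, the negative of the BMI, so minimizing it maximizes the rate seen by a bit-metric decoding receiver:

```python
# Toy autoencoder trained on per-bit binary cross-entropy (i.e., on the BMI).
import math
import torch

torch.manual_seed(0)
m = 4                                               # bits per symbol (16 points)
const = torch.nn.Parameter(torch.randn(2 ** m, 2))  # learnable 2D constellation
demap = torch.nn.Sequential(                        # per-bit soft demapper (logits)
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, m))
opt = torch.optim.Adam([const] + list(demap.parameters()), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()
bits = ((torch.arange(2 ** m)[:, None] >> torch.arange(m)) & 1).float()

for step in range(2000):
    idx = torch.randint(0, 2 ** m, (512,))
    tx = const[idx] / const.pow(2).sum(dim=1).mean().sqrt()  # unit average power
    rx = tx + 0.15 * torch.randn_like(tx)                    # AWGN channel
    loss = bce(demap(rx), bits[idx])                         # mean per-bit BCE (nats)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Rough BMI estimate from the last batch: m * (1 - mean BCE in bits).
print("estimated BMI (bits/symbol):", m * (1 - loss.item() / math.log(2)))
```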

Sebastian Cammerer is a research scientist at NVIDIA. Before joining NVIDIA, he received his PhD in electrical engineering from the University of Stuttgart, Germany, in 2021. His main research topics are machine learning for (wireless) communications and channel coding. Further research interests are in the areas of modulation, parallel computing for signal processing, and information theory. He is a recipient of the IEEE SPS Young Author Best Paper Award 2019, the Best Paper Award of the University of Stuttgart 2018, the Anton- und Klara Röser Preis 2016, the Rohde&Schwarz Best Bachelor Award 2015, and the VDE-Preis 2016 for his master thesis, and he was a third-prize winner of the Nokia Bell Labs Prize 2019.


Darko Zibar (Technical University of Denmark), "Optimum Phase Measurement in the Presence of Amplifier Noise"

In fundamental papers from 1962, Heffner and Haus showed that it is not possible to construct a linear noiseless amplifier. This implies that the amplifier's intrinsic noise sources induce random perturbations on the phase of the incoming optical signal, which translate into spectral broadening. Achieving the minimum induced phase fluctuation requires a phase measurement method that introduces minimum uncertainty, i.e., optimum phase measurement. We demonstrate that a measurement method based on heterodyne detection and extended Kalman filtering approaches optimum phase measurement in the presence of amplifier noise. A penalty of 5 dB (numerical) and 15 dB (experimental) compared to quantum-limited spectral broadening is achieved. A spectral broadening reduction of 44 dB is achieved compared with the widely employed phase measurement method based purely on the argument of the signal field. Our results reveal new scientific insights by demonstrating a phase measurement method that approaches the minimum phase fluctuation induced by amplifier noise. An impact is envisioned for phase-based optical sensing systems, as optical amplification could increase the sensing distance with minimum impact on the phase.
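To convey the flavor of Kalman-filter-based phase tracking (a generic scalar sketch with a random-walk phase model and invented noise levels, not the talk's heterodyne receiver), the measurement update below linearizes around the instantaneous phase of the noisy field:

```python
# Toy scalar Kalman phase tracker: random-walk phase observed in additive
# noise, with the measurement linearized to the wrapped phase innovation.
import numpy as np

rng = np.random.default_rng(3)
N, A = 2000, 1.0
q, r = 1e-4, 1e-2                         # process / additive noise variances

theta = np.cumsum(rng.normal(0, np.sqrt(q), N))          # true phase walk
noise = rng.normal(0, np.sqrt(r / 2), (N, 2)) @ np.array([1, 1j])
z = A * np.exp(1j * theta) + noise                       # noisy field samples

est, P = 0.0, 1.0
Rm = r / (2 * A ** 2)                     # linearized phase-measurement variance
track = np.empty(N)
for k in range(N):
    P += q                                # predict (random-walk phase model)
    innov = np.angle(z[k] * np.exp(-1j * est))  # wrapped phase innovation
    K = P / (P + Rm)                      # Kalman gain
    est += K * innov                      # update
    P *= 1 - K
    track[k] = est

print("RMS phase error (rad):", np.sqrt(np.mean((track - theta) ** 2)))
```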

Darko Zibar is a Professor at the Department of Photonics Engineering, Technical University of Denmark, and the leader of the Machine Learning in Photonics Systems (M-LiPS) group. He received the M.Sc. degree in telecommunications and the Ph.D. degree in optical communications from the Technical University of Denmark, in 2004 and 2007, respectively. He has been, on several occasions (2006, 2008, and 2019), a visiting researcher with the Optoelectronic Research Group led by Prof. John E. Bowers at the University of California, Santa Barbara (UCSB), working on topics ranging from analog and digital demodulation techniques for microwave photonics links to machine-learning-enabled ultra-sensitive laser phase noise measurement techniques. In 2009, he was a visiting researcher with Nokia Siemens Networks, working on clock recovery techniques for 112 Gb/s polarization-multiplexed optical communication systems. In 2018, he was a visiting professor with the Optical Communication group (Prof. Andrea Carena, OptCom), Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, working on machine-learning-based Raman amplifier design. His research currently focuses on applying machine learning techniques to advance classical and quantum optical communication and measurement systems. Some of his major scientific contributions include a record-capacity hybrid optical-wireless link (2011), a record-sensitive optical phase noise measurement technique that approaches the quantum limit (2021), and the design of an ultra-wideband arbitrary-gain Raman amplifier (2019). He is a recipient of the Best Student Paper Award at the Microwave Photonics Conference (2006), the Villum Young Investigator Programme (2012), the Young Researcher Award of the University of Erlangen-Nuremberg (2016), and a European Research Council (ERC) Consolidator Grant (2017). Finally, he was part of the team that won the HORIZON 2020 prize for breaking the optical transmission barriers (2016).


Christian Häger (Chalmers University of Technology, Sweden), "Physics-Based Machine Learning for Fiber-Optic Communication Systems"

Rapid improvements in machine learning over the past decade are beginning to have far-reaching effects. In this work, we propose a new machine-learning approach for fiber-optic systems in which signal propagation is governed by the nonlinear Schrödinger equation (NLSE). Our main idea is to exploit the fact that the popular split-step method for numerically solving the NLSE has essentially the same functional form as a “deep” multi-layer neural network; in both cases, one alternates linear steps and pointwise nonlinearities. We demonstrate that this connection allows for a principled machine-learning approach by appropriately parameterizing the split-step method and viewing the linear steps as general linear functions, similar to the weight matrices in a neural network. The resulting physics-based machine-learning model has several key advantages compared to conventional “black-box” function approximators. For example, it allows us to easily examine and interpret the learned solutions in order to understand why they perform well. 
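A minimal sketch of this parameterization, under invented fiber and grid parameters: the split-step method is written as alternating linear and pointwise nonlinear steps, with the per-step frequency responses exposed as trainable parameters initialized to the exact dispersion operator.

```python
# Split-step propagation with the linear steps exposed as trainable filters.
# Fiber and sampling parameters below are illustrative only.
import math
import torch

n_steps, n_samp = 4, 256
dt = 1e-11                                      # sample spacing (100 GS/s grid)
beta2, gamma, dz = -21.7e-27, 1.3e-3, 25e3      # dispersion, Kerr, step length
f = torch.fft.fftfreq(n_samp, d=dt)             # frequency grid in Hz
H0 = torch.exp(1j * (beta2 / 2) * (2 * math.pi * f) ** 2 * dz)  # dispersion step

class LearnedSplitStep(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # One trainable frequency response per step, stored as (real, imag)
        # and initialized to the exact dispersion operator H0.
        init = torch.stack([H0.real, H0.imag], dim=-1).repeat(n_steps, 1, 1)
        self.H = torch.nn.Parameter(init.float())

    def forward(self, u):
        for k in range(n_steps):
            Hk = torch.complex(self.H[k, :, 0], self.H[k, :, 1])
            u = torch.fft.ifft(Hk * torch.fft.fft(u))           # linear step
            u = u * torch.exp(1j * gamma * dz * u.abs() ** 2)   # Kerr step
        return u

u0 = 0.03 * torch.randn(n_samp, dtype=torch.complex64)  # ~1 mW average power
print(LearnedSplitStep()(u0).shape)                     # torch.Size([256])
```

In practice the learned linear steps are often constrained to short time-domain filters; the frequency-domain form above simply keeps the sketch compact.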

Christian Häger received the Dipl.-Ing. degree (M.Sc. equivalent) from Ulm University, Germany, in 2011 and his Ph.D. degree from Chalmers University of Technology, Sweden, in 2016. He is currently an Assistant Professor in the Department of Electrical Engineering at Chalmers University of Technology, Sweden. Before that, he was a postdoctoral researcher at the Department of Electrical and Computer Engineering at Duke University, USA and at the Department of Electrical Engineering at Chalmers University of Technology. His research interests lie at the intersection of communication systems, machine learning, and signal processing. He received the Marie Sklodowska-Curie Global Fellowship from the European Commission in 2017 and a Starting Grant from the Swedish Research Council in 2020.


Vahid Aref (Nokia, Germany), "Applications of Deep Learning for Coherent Optical Communications"

The next generation of coherent optical transceivers supports symbol rates beyond 100 GBaud with modulation formats such as 64-QAM or larger constellation sizes. In such demanding conditions, the nonlinear impairments of coherent transceivers as well as the nonlinearity of the optical fiber may penalize system performance significantly. Mitigation of nonlinear effects therefore plays a crucial role in minimizing the total distortion. In this talk, we review some deep learning applications for compensating these nonlinear effects. First, we show the potential gain of neural-network-based pre-distortion for pre-compensating transmitter impairments. Then, we switch to the receiver and present a post-equalization strategy to mitigate fiber nonlinearity and transmitter nonlinearity. Finally, we present some results on end-to-end signal-space optimization for coherent optical communication.
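As a toy illustration of neural-network-based pre-distortion (an invented memoryless transmitter nonlinearity and a tiny MLP, not Nokia's implementation), the pre-distorter is trained through a differentiable model of the nonlinearity so that the overall cascade becomes approximately linear:

```python
# Toy NN pre-distorter trained through a differentiable nonlinearity model
# so that pa(pd(x)) ~ x over the operating amplitude range.
import torch

torch.manual_seed(0)

def pa(u):                                   # invented transmitter nonlinearity
    return u - 0.15 * u ** 3

pd = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                         torch.nn.Linear(32, 1))
opt = torch.optim.Adam(pd.parameters(), lr=1e-3)

for step in range(2000):
    x = 1.2 * torch.rand(256, 1) - 0.6       # target amplitudes in [-0.6, 0.6]
    loss = torch.mean((pa(pd(x)) - x) ** 2)  # linearize the cascade
    opt.zero_grad()
    loss.backward()
    opt.step()

print("residual distortion (MSE):", loss.item())
```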

Vahid Aref is an R&D engineer for optical networks at Nokia, working on the development of high-speed coherent optical solutions. From 2015 to 2020, he was a research engineer in the optical networks research lab of Nokia Bell Labs. Dr. Aref has also served as a guest lecturer at the University of Stuttgart since 2016. He received his PhD degree in computer and communication sciences from École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in March 2014. Prior to joining Nokia, he was a research assistant in the Communication Theory Laboratory (LTHC) at EPFL from 2010 until 2014, and he then conducted post-doctoral research at the Institute of Telecommunications (INÜ) at the University of Stuttgart in 2014. He has published more than 90 peer-reviewed conference proceedings and journal articles in venues such as Nature Photonics, IEEE Transactions on Information Theory, IEEE Transactions on Communications, and IEEE/OSA Journal of Lightwave Technology. Dr. Aref has received several awards for his work, including co-receiving the 2018 best journal paper award (ITG-Preis 2018) from the German Information Technology Society (ITG).


Francesco da Ros (Technical University of Denmark), "Reservoir Computing for Short-reach Optical Communication"

Short-reach interconnects relying on intensity modulation and direct detection (IM/DD) play a key role in today's telecom infrastructure, as they support the majority of the machine-to-machine communication within and between data centers. These low-cost, low-complexity links must deal with the nonlinear distortion introduced by the interplay of fiber dispersion and square-law detection. Machine-learning-based nonlinear equalizers can compensate for such impairments. Here we discuss the performance improvements achieved with digital-only equalizers and compare them with a hybrid optoelectronic receiver based on optical pre-processing and digital post-processing.
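A minimal reservoir computing (echo state network) equalizer sketch, with a toy causal FIR filter followed by a square-law detector standing in for the IM/DD link (our stand-in, not the experimental setup of the talk). Only the linear readout is trained, by ridge regression, which is what makes reservoir computing attractive for high-speed hardware:

```python
# Toy echo state network equalizer for a nonlinear IM/DD-like channel.
import numpy as np

rng = np.random.default_rng(4)
N = 5000
s = rng.integers(0, 2, N).astype(float)              # OOK symbols
x = np.convolve(s, [0.9, 0.5, 0.2])[:N]              # toy "dispersion" (causal FIR)
y = x ** 2 + 0.01 * rng.standard_normal(N)           # square-law detection + noise

# Fixed random reservoir, spectral radius < 1 (echo state property).
n_res = 100
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n_res)

states = np.zeros((N, n_res))
h = np.zeros(n_res)
for t in range(N):
    h = np.tanh(W @ h + w_in * y[t])                 # drive reservoir with y
    states[t] = h

# Linear readout trained by ridge regression to recover the symbols.
lam, n_train = 1e-2, 4000
A = states[:n_train]
w_out = np.linalg.solve(A.T @ A + lam * np.eye(n_res), A.T @ s[:n_train])
s_hat = (states[n_train:] @ w_out > 0.5).astype(float)
print("test BER:", np.mean(s_hat != s[n_train:]))
```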

Francesco Da Ros is a senior researcher in the Machine Learning in Photonic Systems (MLiPS) group at DTU Fotonik. He received his Ph.D. in 2015 from the Technical University of Denmark (DTU), including a research stay at the Fraunhofer Heinrich Hertz Institute. Between 2015 and 2018, he worked within the Center for Silicon Photonics for Optical Communications at DTU, and he joined the MLiPS group in 2019. Dr. Da Ros has co-authored 150+ papers in the fields of optical communication and nonlinear optics, and he is currently leading a Villum Young Investigator project on optical implementations of machine learning techniques (OPTIC-AI). He is an OPTICA Ambassador and Senior Member, an IEEE Senior Member, and has served on the TPC of CLEO since 2018 and of OECC/PSC (2021-2022).


Tim Uhlemann (University of Stuttgart, Germany), "Learning Spectrally Efficient and Nonlinear Pulse Shaping for Coherent Optical Communications"

In a previous publication, we showed that an autoencoder can learn a comprehensive communication scheme, including geometric constellation shaping and appropriate linear pulse shaping, over the nonlinear optical fiber channel. In this work, we extend the autoencoder-based framework at the transmitter to a nonlinear pulse former implemented as a convolutional neural network. Applying a tailored training strategy, we achieve significant gains over the linear structure.
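A minimal sketch of a convolutional pulse former at the transmitter (illustrative layer sizes, not the network from the paper): symbols are zero-stuffed to the sample rate and passed through a small Conv1d stack, the trainable nonlinear generalization of a fixed linear pulse-shaping filter. In the end-to-end setting, this block would be trained jointly with a receiver through a differentiable channel model.

```python
# Toy nonlinear pulse former: zero-stuffed symbols -> Conv1d network -> waveform.
import torch

torch.manual_seed(0)
sps = 8                                           # samples per symbol
shaper = torch.nn.Sequential(                     # trainable pulse former
    torch.nn.Conv1d(1, 16, kernel_size=4 * sps + 1, padding=2 * sps),
    torch.nn.Tanh(),
    torch.nn.Conv1d(16, 1, kernel_size=4 * sps + 1, padding=2 * sps),
)

symbols = torch.randint(0, 2, (1, 1, 64)).float() * 2 - 1   # BPSK symbols
up = torch.zeros(1, 1, 64 * sps)
up[..., ::sps] = symbols                          # zero-stuffing upsampler
waveform = shaper(up)                             # nonlinear pulse shaping
print(waveform.shape)                             # torch.Size([1, 1, 512])
```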

Tim Uhlemann is a Ph.D. student at the Institute of Telecommunications at the University of Stuttgart. He received his Bachelor's degree in electrical engineering from the DHBW Ravensburg as a cooperative student of Daimler AG in 2014. After three years as an IT project manager for shopfloor systems at Daimler AG, in particular its 3D location system, he became a system architect for the Internet of Things in the Mercedes-Benz Technologiefabrik. In parallel, he completed his Master's degree in electrical engineering at the University of Stuttgart from 2016 to 2019. His research interests include physical layer aspects of optical fiber communications, location systems, and corresponding machine learning techniques. He is an IEEE Student Member.