Applying Neural Networks in Optical Communication Systems: Possible Pitfalls

Abstract
We investigate the risk of overestimating the performance gain when neural network-based receivers are applied to systems driven by pseudorandom bit sequences (PRBS), or by sequences whose limited memory depth causes short patterns to repeat. We show that with such sequences a large artificial gain can be obtained, which stems from pattern prediction rather than from predicting or compensating the channel or phenomenon under study.
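The pitfall can be illustrated without any neural network at all. A PRBS-7 sequence (a standard choice, generated by a linear-feedback shift register with polynomial x^7 + x^6 + 1) is fully determined by its previous 7 bits, so any receiver with at least that much memory can appear to "equalize" the link perfectly by memorizing the pattern, with no channel knowledge whatsoever. The sketch below, with a trivial lookup-table predictor standing in for a trained network, is an illustrative assumption, not the paper's experimental setup:

```python
def prbs7(length, seed=0x7F):
    """Generate a PRBS-7 bit sequence with a linear-feedback shift register
    (polynomial x^7 + x^6 + 1, period 127)."""
    state = seed
    bits = []
    for _ in range(length):
        feedback = ((state >> 6) ^ (state >> 5)) & 1  # taps at bits 7 and 6
        bits.append(state & 1)
        state = ((state << 1) | feedback) & 0x7F
    return bits

bits = prbs7(1000)
split = 500

# "Train" a memorizing predictor: map each 7-bit history to the next bit.
table = {}
for i in range(split - 7):
    table[tuple(bits[i:i + 7])] = bits[i + 7]

# "Test" on held-out data: because the PRBS period (127) is shorter than
# the training set, every test window was already seen during training,
# and the predictor is perfect -- an artificial gain from pattern
# prediction, not from any model of the channel.
n_test = len(bits) - 7 - split
correct = sum(
    table[tuple(bits[i:i + 7])] == bits[i + 7]
    for i in range(split, len(bits) - 7)
)
accuracy = correct / n_test
print(accuracy)  # 1.0
```

The same effect arises whenever a receiver's input memory spans the shift-register length of the test sequence, which is why the choice of training and test sequences matters when benchmarking learned receivers.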