Browsing by Author "Zhao, Chenglin"
Now showing 1–4 of 4
Item Open Access
Hamming–Luby rateless codes for molecular erasure channels (Elsevier, 2019-11-27)
Wei, Zhuangkun; Li, Bin; Hu, Wenxiu; Guo, Weisi; Zhao, Chenglin
Nano-scale molecular communications encode digital information into discrete macro-molecules. In many nano-scale systems, limited molecular energy means that each information symbol is encoded into a small number of molecules. As such, information may be lost during diffusion–advection propagation through complex topologies and membranes. Existing Hamming-distance codes for additive counting noise are not well suited to combating these erasure errors. Rateless Luby Transform (LT) codes and cascaded Hamming–LT (Raptor) codes can handle information loss, but may consume substantial computational energy due to repeated use of the random number generator and exclusive-OR (XOR) operations. In this paper, we design a novel low-complexity erasure-combating encoding scheme: the rateless Hamming–Luby Transform code. The proposed rateless code combines the superior efficiency of Hamming codes with the performance-guarantee advantage of LT codes, and therefore reduces the number of random number generator invocations. We design an iterative soft decoding scheme via successive cancellation to further improve performance. Numerical simulations show that this new rateless code provides performance comparable to both standard LT and Raptor codes while incurring lower decoder computational complexity, which is useful for the envisaged resource-constrained nano-machines.

Item Open Access
High-dimensional metric combining for non-coherent molecular signal detection (IEEE, 2019-12-13)
Wei, Zhuangkun; Guo, Weisi; Li, Bin; Charmet, Jérôme; Zhao, Chenglin
In the emerging Internet of Nano Things (IoNT), information will be embedded and conveyed in the form of molecules through complex and diffusive media.
One main challenge lies in the long-tail nature of the channel response, which causes inter-symbol interference (ISI) and deteriorates detection performance. If the channel is unknown, existing coherent schemes (e.g., the state-of-the-art maximum a posteriori, MAP, detector) must pursue complex channel estimation and ISI mitigation techniques, resulting in either high computational complexity or poor estimation accuracy that hinders detection performance. In this paper, we develop a novel high-dimensional non-coherent detection scheme for molecular signals. We achieve this in a higher-dimensional metric space by combining different non-coherent metrics that exploit the transient features of the signals. By deriving the theoretical bit error rate (BER) for any constructed high-dimensional non-coherent metric, we prove that higher dimensionality always achieves a lower BER in the same sample space, at the expense of higher complexity in computing the multivariate posterior densities. The high-dimensional non-coherent scheme is realized via a Parzen-window-based probabilistic neural network (Parzen-PNN), given its ability to approximate the multivariate posterior densities by feeding previous detection results into a channel-independent Gaussian Parzen window, thereby avoiding complex channel estimation. The complexity of the posterior computation is shared by the parallel implementation of the Parzen-PNN.
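The abstract does not give the Parzen-PNN's exact formulation; as an illustrative sketch of the underlying idea only, a Gaussian Parzen window can approximate per-class posterior densities directly from previously detected metric vectors, with no channel model (function names, the kernel width `sigma`, and the toy data below are all assumptions, not the paper's implementation):

```python
import numpy as np

def parzen_posteriors(x, samples_by_class, sigma=0.5):
    """Approximate p(class | x) with a Gaussian Parzen window:
    each class likelihood is the mean of Gaussian kernels centred
    on that class's previously observed metric vectors."""
    likelihoods = []
    for samples in samples_by_class:            # samples: (n_i, dim)
        diff = np.asarray(samples) - np.asarray(x)
        kernels = np.exp(-np.sum(diff**2, axis=1) / (2.0 * sigma**2))
        likelihoods.append(kernels.mean())
    likelihoods = np.array(likelihoods)
    return likelihoods / likelihoods.sum()      # equal priors assumed

# toy 2-D metric space: bit 0 clusters near (0, 0), bit 1 near (3, 3)
rng = np.random.default_rng(1)
class0 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
class1 = rng.normal([3.0, 3.0], 0.3, size=(50, 2))
post = parzen_posteriors([2.8, 3.1], [class0, class1])
bit = int(np.argmax(post))                      # → 1
```

A received metric vector is then decided as the bit whose estimated posterior is larger, which is channel-independent as long as the stored detection results are representative.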
Numerical simulations demonstrate that our proposed scheme gains 10 dB in SNR at a fixed BER of 10^-4, in comparison with other state-of-the-art methods.

Item Open Access
Machine learning-enabled globally guaranteed evolutionary computation (Nature Publishing Group, 2023-04-10)
Li, Bin; Wei, Ziping; Wu, Jingjing; Yu, Shuai; Zhang, Tian; Zhu, Chunli; Zheng, Dezhi; Guo, Weisi; Zhao, Chenglin; Zhang, Jun
Evolutionary computation, for example particle swarm optimization, has achieved impressive results on complex problems in science and industry. However, an important open problem in evolutionary computation is that there is no theoretical guarantee of reaching the global optimum, and no general reliability, owing to the lack of a unified representation of diverse problem structures and of a generic mechanism by which to avoid local optima. This unresolved challenge impairs trust in the applicability of evolutionary computation to a variety of problems. Here we report an evolutionary computation framework aided by machine learning, named EVOLER, which enables the theoretically guaranteed global optimization of a range of complex non-convex problems. This is achieved by: (1) learning a low-rank representation of a problem from limited samples, which helps to identify an attention subspace; and (2) exploring this small attention subspace via the evolutionary computation method, which helps to reliably avoid local optima. As validated on 20 challenging benchmarks, this method finds the global optimum with a probability approaching 1. We use EVOLER to tackle two important problems: power-grid dispatch and the inverse design of nanophotonic devices. The method consistently reached optimal results that were challenging to achieve with previous state-of-the-art methods.
EVOLER represents a leap forward in globally guaranteed evolutionary computation, overcoming the uncertainty of data-driven black-box methods and offering broad prospects for tackling complex real-world problems.

Item Open Access
Random sketch learning for deep neural networks in edge computing (Springer Nature, 2021-03-25)
Li, Bin; Chen, Peijun; Liu, Hongfu; Guo, Weisi; Cao, Xianbin; Du, Junzhao; Zhao, Chenglin; Zhang, Jun
Despite the great potential of deep neural networks (DNNs), they require massive weights and huge computational resources, creating a vast gap when deploying artificial intelligence on low-cost edge devices. Current lightweight DNNs, obtained by high-dimensional-space pre-training followed by post-compression, fall short of covering this resource deficit, making tiny artificial intelligence hard to implement. Here we report an architecture named random sketch learning, or Rosler, for computationally efficient tiny artificial intelligence. We build a universal compressing-while-training framework that directly learns a compact model and, most importantly, enables computationally efficient on-device learning. As validated on different models and datasets, it attains a substantial memory reduction of ~50–90× (with 16-bit quantization) compared with fully connected DNNs. We demonstrate it on low-cost hardware, whereby computation is accelerated by >180× and energy consumption is reduced by ~10×. Our method paves the way for deploying tiny artificial intelligence in many scientific and industrial applications.
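The abstract does not spell out Rosler's compressing-while-training procedure; as a minimal illustration of the general idea of random sketching, the following replaces a (nearly) low-rank weight matrix with two small factors obtained from a Gaussian random range sketch, a randomized-SVD-style construction, not the paper's exact algorithm (all names and sizes here are assumptions for the toy example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch_factorize(W, r):
    """Compress dense W (m x n) into factors A (m x r) and B (r x n)
    with W ≈ A @ B, using a Gaussian random range sketch."""
    n = W.shape[1]
    Omega = rng.standard_normal((n, r))     # random test matrix
    Q, _ = np.linalg.qr(W @ Omega)          # orthonormal basis for range(W)
    return Q, Q.T @ W                       # A = Q, B = Q^T @ W

# toy "layer" whose weights are exactly rank 8
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 128))
A, B = sketch_factorize(W, 8)

stored = A.size + B.size                    # 3072 vs 32768 parameters
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

Here the factored form stores roughly 10× fewer parameters than the dense matrix while reconstructing it almost exactly, which conveys why sketch-based compact models can fit within edge-device memory budgets.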