In this paper, we investigate the computational power of threshold circuits and other theoretical models of neural networks in terms of the following four complexity measures: size (the number of gates), depth, weight, and energy. Here, the energy of a circuit measures the sparsity of its computation, and is defined as the maximum number of gates outputting non-zero values, taken over all input assignments. As our main result, we prove that any threshold circuit C of size s, depth d, energy e and weight w satisfies log(rk(M_C)) ≤ ed (log s + log w + log n), where rk(M_C) is the rank of the communication matrix M_C of the 2n-variable Boolean function that C computes. Thus, such a threshold circuit C can compute only a Boolean function whose communication matrix has rank bounded by a product of logarithmic factors of s and w and linear factors of d and e. This implies an exponential lower bound on the size of even sublinear-depth and sublinear-energy threshold circuits. For example, we can obtain an exponential lower bound s = 2^Ω(n^{1/3}) for threshold circuits of depth n^{1/3}, energy n^{1/3} and weight 2^o(n^{1/3}). We also show that the inequality is tight up to a constant factor when the depth d and energy e satisfy ed = o(n/log n). For other models of neural networks, such as discretized ReLU circuits and discretized sigmoid circuits, we define energy as the maximum number of gates outputting non-zero values. We then prove that a similar inequality also holds for a discretized circuit C: log(rk(M_C)) = O(ed(log s + log w + log n)³). Thus, if we regard the number of gates outputting non-zero values as a measure of the sparse activity of a neural network, our results suggest that larger depth linearly helps neural networks to acquire sparse activity.
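To make the definitions concrete, here is a minimal Python sketch (not code from the paper) that evaluates a toy threshold circuit, computes its energy as defined above (the maximum number of gates outputting non-zero values over all input assignments), and numerically checks the inequality log rk(M_C) ≤ ed (log s + log w + log n) on a small depth-2 circuit computing equality of two 2-bit strings. The conventions used here (w as the maximum absolute weight, d as the number of gate layers) are my own reading and may not match the paper's exact definitions.

```python
import itertools

import numpy as np

# Minimal sketch (not code from the paper): toy threshold circuits over Boolean
# inputs.  Each gate is a pair (weights, threshold) and outputs 1 iff the
# weighted sum of [inputs..., earlier gate outputs...] reaches the threshold.


def evaluate(circuit, assignment):
    """Return the 0/1 output of every gate, in topological order."""
    values = list(assignment)          # circuit inputs, then gate outputs
    outputs = []
    for weights, threshold in circuit:
        total = sum(w * v for w, v in zip(weights, values))
        fired = 1 if total >= threshold else 0
        outputs.append(fired)
        values.append(fired)
    return outputs


def energy(circuit, num_inputs):
    """Energy as in the abstract: the maximum number of gates outputting
    non-zero values, taken over all input assignments."""
    return max(sum(evaluate(circuit, a))
               for a in itertools.product([0, 1], repeat=num_inputs))


# Depth-2 circuit on 2n = 4 inputs (x1, x2, y1, y2) computing EQ(x, y).
# Gates 1-4 detect a disagreement in one coordinate; gate 5 fires iff none do.
eq_circuit = [
    ([1, 0, -1, 0], 1), ([-1, 0, 1, 0], 1),    # x1 != y1
    ([0, 1, 0, -1], 1), ([0, -1, 0, 1], 1),    # x2 != y2
    ([0, 0, 0, 0, -1, -1, -1, -1], 0),         # output: 1 iff x == y
]

n = 2                                          # inputs per communication side
M = np.array([[evaluate(eq_circuit, x + y)[-1]
               for y in itertools.product([0, 1], repeat=n)]
              for x in itertools.product([0, 1], repeat=n)])

s = len(eq_circuit)                            # size: number of gates
d = 2                                          # depth of this particular circuit
e = energy(eq_circuit, 2 * n)                  # = 2 for this circuit
w = 1                                          # max absolute weight (assumed convention)

lhs = np.log2(np.linalg.matrix_rank(M))        # M is the 4x4 identity, rank 4
rhs = e * d * (np.log2(s) + np.log2(max(w, 1)) + np.log2(n))
print(f"log2 rk(M_C) = {lhs:.2f} <= ed(log s + log w + log n) = {rhs:.2f}")
```

On this toy instance the bound is far from tight (roughly 2 versus 13.3); the tightness claim in the abstract concerns the regime ed = o(n/log n).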