1 Introduction

Given the significant power and cost savings of the 1-bit analog-to-digital converter (ADC), considerable effort has been dedicated to signal designs and processing techniques for this ultra-low-resolution ADC in high-bandwidth and/or multi-antenna systems Liu et al. (2019); Jeon et al. (2019); Choi et al. (2020); Xu et al. (2018); Zhang et al. (2016); Mo et al. (2017); Mollen et al. (2017); Xiong et al. (2017); Jacobsson et al. (2017); Studer and Durisi (2016). Over the years, several interesting information-theoretic results have been obtained for both point-to-point and multi-user channels with 1-bit ADC under additive white Gaussian noise (AWGN). For example, it has been shown in Singh et al. (2009); Mo and Heath (2015) that quadrature phase-shift keying (QPSK) is capacity-achieving in point-to-point single- and multiple-antenna static channels. Optimal signaling schemes and fundamental limits of 1-bit ADC have also been established for point-to-point fading channels in Krone and Fettweis (2010); Mezghani and Nossek (2008); Vu et al. (2018); Vu et al. (2019). Recently, under the assumption of AWGN, signal designs and fundamental limits of 1-bit ADC have also been extended to multi-user static channels Rassouli et al. (2018) and multi-user fading channels Ranjbar et al. (2019); Ranjbar et al. (2020). Specifically, it was shown in Rassouli et al. (2018) that any point in the capacity region of a 2-user static Gaussian multiple access channel (MAC) can be achieved by input signals with bounded supports, and an upper bound on the sum-capacity was also developed there. However, to our knowledge, the detailed characteristics of the optimal signals for such static Gaussian MACs remain unknown. In Ranjbar et al. (2020), by exploiting the effect of fading, the detailed characteristics of the optimal input signals on the boundary of the capacity region of a 2-user Gaussian fading MAC with 1-bit ADC were established.

Current and future active wireless systems (AWSs), such as 5G-and-beyond cellular networks with their multi-tier heterogeneous architectures, are being designed to operate in the same or adjacent spectrum as other existing wireless systems. For example, the proliferation of wireless users and devices, fueled by emerging applications in, e.g., the Internet of Things (IoT), unmanned systems, wearable technology, and remote sensing, is leading to active-active coexistence designs such as LTE-U and WiFi in unlicensed bands, and incumbent, priority, and general authorized access in Citizens Broadband Radio Service (CBRS) bands FCC (2020). Such coexistence is intensifying concerns about co-channel and adjacent-channel interference and its management for wireless systems. Specifically, AWSs themselves need to cope with increased active co-channel interference, which is generated in different ways. For example, due to their heterogeneous structures and high frequency reuse factors, future AWSs require sharing of time-frequency resources with existing users, which makes intercell interference no longer negligible Osseiran et al. (2014); Chen and Zhao (2014); Feng et al. (2014); Lin et al. (2014); Fodor et al. (2012). In addition, radio frequency interference (RFI) mitigation might not be perfect, which leads to residual interference. The intermittence and asynchronism of such interference make the statistical properties of RFI at an AWS complicated. In particular, the traditional approach of treating co-channel interference plus noise as Gaussian no longer holds Irio et al. (2020); Irio et al. (2019); ElSawy et al. (2013); Lin et al. (2014). For example, the aggregate interference generated by small cells to macro cells is non-Gaussian due to the effect of dominant interferers, under which the central limit theorem no longer holds Quek et al. (2013).
In many wireless networks, especially heterogeneous AWSs, co-channel interference plus noise can be accurately modeled as a Gaussian mixture (GM) Irio et al. (2020); Quek et al. (2013); Gulati et al. (2010); Stein (1995); Middleton (1999); Wang and Poor (1999); MIT Lincoln Laboratory (Reynolds, 2009); Erseghe et al. (2008); Moghimi et al. (2011); Bayram and Gezici (2010); Nasri and Schober (2009); Kenarsari-Anhari and Lampe (2010); Bhatia and Mulgrew (2007).

During the last few years, there have been several contributions on fundamental limits and optimal signal designs for non-Gaussian AWSs Das (2000); Fahs et al. (2012); Tchamkerten (2004); Oettli (1974); Cao et al. (2014); Vu et al. (2015); Ranjbar et al. (2018); Dytso et al. (2017). However, the results are rather limited, because for non-Gaussian channels, the assumption of Gaussian input signals is no longer valid. Due to the difficulty of studying the detailed properties of the optimal inputs and of establishing the capacity in closed form for a non-Gaussian channel, numerical methods are usually needed to find the capacity-achieving signal, even for a point-to-point channel Vu et al. (2015); Le et al. (2016). In our recent work in Rahman et al. (2020a); Rahman et al. (2020b), the detailed characteristics of a capacity-achieving scheme were studied for a point-to-point Gaussian-mixture channel using 1-bit output quantization. In particular, it was shown there that for a general GM channel, the maximum number of mass points in the optimal signal is four, and that under the special case of zero-mean GM components, QPSK is optimal. Unfortunately, at the network level, signal design and network information-theoretic results for non-Gaussian noise and interference are completely lacking. Therefore, considering non-Gaussian interference plus noise in multi-user AWSs presents new challenges.

Motivated by the above discussions, we investigate the network information-theoretic limits of a 2-user multiple access Rayleigh fading channel with 1-bit output quantization in the presence of Gaussian-mixture co-channel interference. Specifically, we establish the sum-capacity-achieving signaling schemes and the sum-capacity of the considered GM MAC, which is an accurate model to capture non-Gaussian co-channel interference plus noise in practical wireless networks under coexistence regimes, especially those having heterogeneous structures and high frequency reuse factors. In general, the problem of maximizing the sum-rate over input signals to determine the sum-capacity is both analytically and computationally challenging, especially over the space of multi-dimensional probability distributions with a non-linear mapping from the inputs to the output and non-Gaussian noise plus interference. Therefore, the main contribution of this work lies in the explicit establishment of optimal signaling schemes for such non-linear and non-Gaussian multi-user channels. Our approach is to separate the phases and amplitudes of the input signals to study their effects on the main sum-rate optimization problem. The specific contributions of our work can be summarized as follows:

• In the first part of our work, we demonstrate that the phases of the optimal inputs must be π/2 circularly symmetric. While this property has been shown before for various single-user 1-bit ADC channels, it is not trivial to extend it to multi-user channels in the presence of GM noise. Given the π/2 circularly symmetric property, it is then demonstrated that the problem of optimizing the sum-rate is equivalent to minimizing the conditional output entropy.

• More importantly, in the second part of the paper, by establishing and examining the Kuhn-Tucker condition (KTC) on the optimal amplitude input distributions, we show that the optimal input amplitudes are bounded. Towards this end, we exploit the convexity of the log of a sum of Q functions to deal with the presence of a mixture of Gaussian components. Furthermore, since the main objective function is linear over the feasible set of bounded amplitude distributions, it achieves its minimum value at an extreme point. As a result, we can conclude that the optimal input signals must have constant amplitudes. Therefore, any π/2 circularly symmetric signaling scheme with constant amplitude and full power is sum-capacity-achieving. Using these optimal input signals, the sum-capacity can finally be established.

2 A 2-User MAC in Rayleigh Fading With 1-Bit ADC and Achievable Sum-Rate

2.1 Channel Model

We consider a 2-user multiple access channel (MAC) under GM noise plus interference N, as depicted in Figure 1. The two users transmit their own signals X1 and X2, respectively, to a base station equipped with a 1-bit ADC. These two transmitted signals X1 and X2 are subject to the power constraints E[|X1|²] ≤ P1 and E[|X2|²] ≤ P2. The complex signal Z received at the base station is given as

Z = H1X1 + H2X2 + N. (1)


FIGURE 1. A 2-user fading MAC under GM noise plus interference with 1-bit output quantization.
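For intuition, the channel in Eq. 1 is straightforward to simulate. The sketch below assumes illustrative mixture parameters (the particular values of ɛi and σi² are not from the paper, only chosen so the overall noise variance is one), unit-variance Rayleigh fading, and unit-magnitude user symbols:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) mixture parameters: eps sums to 1 and the
# overall noise variance is eps @ sigma2 = 0.2*0.25 + 0.8*1.1875 = 1.
eps = np.array([0.2, 0.8])
sigma2 = np.array([0.25, 1.1875])

def sample_gm_noise(n):
    """Draw n GM noise samples: pick component i with probability eps_i,
    then draw a zero-mean complex Gaussian with variance sigma_i^2."""
    idx = rng.choice(len(eps), size=n, p=eps)
    s = np.sqrt(sigma2[idx] / 2)
    return s * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Received signal of Eq. 1 for fixed unit-magnitude user symbols and
# unit-variance Rayleigh fading gains.
n = 100_000
x1 = np.exp(1j * np.pi / 4) * np.ones(n)      # e.g., a QPSK point
x2 = np.exp(-1j * np.pi / 4) * np.ones(n)
h1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
z = h1 * x1 + h2 * x2 + sample_gm_noise(n)
```

With these parameters, the empirical noise variance is close to one and the received power is close to three (two unit-power users plus unit-variance noise).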

Here, the total noise plus interference N follows a GM distribution, i.e., a mixture of M Gaussian components, and its probability density function (PDF) is given as

pN(n) = Σ_{i=1}^{M} (ɛi/(πσi²)) exp(−|n|²/σi²). (2)

In Eq. 2, the ith term, 1 ≤ i ≤ M, corresponds to a complex Gaussian component with mean zero and variance σi², and {ɛi} are the mixing probabilities satisfying Σ_{i=1}^{M} ɛi = 1. Note that for a given complex realization n of N, pN(n) in Eq. 2 gives us the value of the PDF at that complex point n. As an illustrative example, Figure 2 shows the traditional Gaussian PDF and the 2-term GM PDF, both having zero mean and unit variance. For the 2-term GM PDF, we use ɛ1 = 0.2 and ɛ2 = 0.8, with component variances σ1² and σ2² satisfying ɛ1σ1² + ɛ2σ2² = 1.

FIGURE 2. The Gaussian and 2-component GM PDFs.
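The densities of the kind plotted in Figure 2 can be evaluated directly from Eq. 2. The sketch below uses assumed component variances (chosen only to give unit overall variance) and checks that the density integrates to one over the complex plane:

```python
import numpy as np

def gm_pdf(n, eps, sigma2):
    """GM PDF of Eq. 2: p_N(n) = sum_i eps_i/(pi*sigma_i^2) * exp(-|n|^2/sigma_i^2)."""
    n = np.asarray(n, dtype=complex)
    p = np.zeros(n.shape, dtype=float)
    for e, s2 in zip(eps, sigma2):
        p += e / (np.pi * s2) * np.exp(-np.abs(n) ** 2 / s2)
    return p

# eps1 = 0.2 and eps2 = 0.8 as in the text; the component variances are
# assumed values satisfying 0.2*0.25 + 0.8*1.1875 = 1 (unit variance).
eps, sigma2 = [0.2, 0.8], [0.25, 1.1875]

# Riemann-sum check that the PDF integrates to one over the complex plane.
x = np.linspace(-6, 6, 601)
re, im = np.meshgrid(x, x)
mass = gm_pdf(re + 1j * im, eps, sigma2).sum() * (x[1] - x[0]) ** 2
```

Because the small-variance component concentrates probability near the origin, the GM density has a sharper peak at zero than a unit-variance Gaussian, matching the qualitative shape in Figure 2.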

Furthermore, in Eq. 1, H1 and H2 are the complex fading gains from user 1 and user 2 to the base station, respectively. In this paper, we consider Rayleigh fading channels, where H1 and H2 are circularly symmetric Gaussian random variables with mean zero and variances σh1² and σh2², respectively. Their PDFs are given as

pHk(h) = (1/(πσhk²)) exp(−|h|²/σhk²), k = 1, 2. (3)
In addition, the fading channel gains are assumed to be known at the base station, but not the users, and they change independently over time.

With 1-bit output quantization, the real and imaginary parts of the received signal Z will each be fed through a 1-bit quantizer, which results in the following complex binary output:

Y = Quant(Re(Z)) + jQuant(Im(Z)), (4)

where Quant(⋅) is the 1-bit quantization operation defined as:

Quant(x) = 1 if x ≥ 0, and Quant(x) = −1 if x < 0. (5)

It is then easy to see that the output Y can only take on one of the following values in 𝒴 = {1 + j, 1 − j, −1 + j, −1 − j}.
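The quantization operation above can be sketched in a few lines; `quant_1bit` is a hypothetical helper name:

```python
import numpy as np

def quant_1bit(z):
    """1-bit ADC applied separately to the I and Q branches (Eqs. 4-5):
    each branch outputs +1 for a non-negative sample and -1 otherwise."""
    z = np.asarray(z, dtype=complex)
    return np.where(z.real >= 0, 1.0, -1.0) + 1j * np.where(z.imag >= 0, 1.0, -1.0)
```

For example, `quant_1bit(0.3 - 0.7j)` gives `1 - 1j`, one of the four points of the output alphabet.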
2.2 Ergodic Sum-Rate and Sum-Capacity

For a given set of input distributions




, the ergodic sum-rate of the considered MAC is the joint mutual information (MI) between the inputs X1 and X2 and the output Y, which is given as Gamal and Kim (2011):


In Eq. 6, the expectation E [⋅] is performed over fading gains H1 and H2, and


is the conditional joint MI for given H1 = h1 and H2 = h2. The ergodic sum-rate can be expressed in terms of joint and output entropies as follows:


In Eq. 7, the joint entropy


is calculated as


Note that




are the cumulative distribution functions (CDFs) of H1 and H2, respectively, and








, where the PDFs




are given in Eq. 3. In addition,


is the joint density function for given fading realizations H1 = h1 and H2 = h2, which can be calculated as




It should be mentioned that


. Furthermore, in Eq. 10,

Q(x) = (1/√(2π)) ∫_x^∞ exp(−t²/2) dt

is the well-known Q function, and Re(⋅) and Im(⋅) represent the real and imaginary parts of a complex number, respectively. In addition, the conditional output entropy can be written as:


For simplicity, hereafter, we shall use the notations




to refer to the density functions




, respectively.
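Although Eq. 10 is not reproduced above, the conditional density admits a standard sketch: conditioned on mixture component i, the I and Q noise branches are independent zero-mean Gaussians with variance σi²/2 each, so each output probability is a mixture of products of Q-function terms. The helper below is an assumed implementation of that decomposition, not the paper's exact expression:

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

def p_y_given_v(y, v, eps, sigma2):
    """Assumed transition law of the 1-bit output y given the noiseless
    received point v = h1*x1 + h2*x2 under GM noise: a mixture, over the
    Gaussian components, of products of per-branch Q-function terms."""
    total = 0.0
    for e, s2 in zip(eps, sigma2):
        s = sqrt(s2 / 2)                       # per-branch std of component i
        pr = Q(-v.real / s) if y.real > 0 else Q(v.real / s)
        pi = Q(-v.imag / s) if y.imag > 0 else Q(v.imag / s)
        total += e * pr * pi
    return total

eps, sigma2 = [0.2, 0.8], [0.25, 1.1875]       # illustrative mixture
v = 0.6 - 0.4j                                 # h1*x1 + h2*x2 for some inputs
probs = [p_y_given_v(y, v, eps, sigma2)
         for y in (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)]
```

A quick sanity check is that the four output probabilities sum to one for any v, since Q(a) + Q(−a) = 1 on each branch of each component.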

The ergodic sum-capacity Cs of the considered MAC is the maximum ergodic sum-rate over all feasible input distributions




under the power constraints, which is given as:


3 Sum-Capacity Achieving Input Signals

In general, the problem of maximizing MI over input distributions under certain input constraints as in Eq. 12 has been extensively studied, but tractable solutions can be obtained only for a few specific cases in which the mapping from the inputs to the output is linear and the noise is additive Gaussian. Unfortunately, for the considered non-Gaussian channel, we have a non-linear mapping from the inputs to the output in the presence of GM noise. Therefore, this optimization problem is not trivial. In the following, our approach to solving Eq. 12 is to first address the optimal phases. The optimal amplitude distributions are then investigated to determine the complete input distributions.

3.1 Optimal Phase Distributions

To examine the effect of the input phase distributions, we first re-write the conditional density function in Eq. 10 using the amplitudes and phases as:


The joint and output entropies in Eqs. 8, 11, respectively, can then be expressed as




In Eq. 15, ξ(⋅) is an entropy function of the distribution in Eq. 10, which is calculated as:


To determine the optimal phases of the input signals, for a given set of the inputs




, construct the following two other input distributions:


It is not difficult to verify that the two new distributions are π/2 circularly symmetric. Note that a distribution FX is π/2 circularly symmetric if FX(x) = FX(xe^{jkπ/2}) for any integer k. As we demonstrate in Appendix A, the use of




results in a uniform output Y, so the corresponding output entropy in Eqs. 8, 14 is maximized and equals 2 bits. Let us now compare the conditional entropy in Eq. 15 for the two pairs of inputs,




. We first write this conditional entropy when the pair


is used as:


Because we consider Rayleigh fading, and the channel is ergodic, the expectation over H1 and H2 in Eq. 18 can be written in terms of their amplitudes and phases as


. Furthermore, we know that the phases of fading gains


are uniform. As such, the inner expectations over


do not depend on the phases of the inputs


. Following the same argument as in Ranjbar et al. (2020), we can simply let


without changing the conditional entropy. Therefore, we have:


Since the two conditional entropies are the same, it is then clear that the use of


leads to a better sum-rate. As a result, it can be concluded that the optimal input distributions are π/2 circularly symmetric. With such input signals, the output entropy in Eq. 8 is 2. Therefore, from Eq. 7, the sum-rate maximization problem to find the sum-capacity Cs in Eq. 12 becomes a minimization problem of the conditional output entropy as:






are both π/2 circularly symmetric. Since the objective function H(Y|X1, X2, H1, H2) is a function of




only, for the sake of convenience, we will use


to refer to H(Y|X1, X2, H1, H2). The optimal solutions, denoted as




, can therefore be expressed as:

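The claim that the objective depends only on the input amplitudes once the uniform fading phases are averaged out can be illustrated numerically. The sketch below evaluates H(Y|X1, X2, H1, H2) on a uniform grid of fading phases with fixed fading amplitudes, using the assumed Q-function decomposition of the channel law (not the paper's exact Eq. 10), and checks that rotating one user's constellation leaves the conditional entropy unchanged:

```python
import numpy as np
from math import erfc, sqrt, log2

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

OUTPUTS = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def p_y_given_v(y, v, eps, sigma2):
    # Assumed GM transition law: mixture of per-component I/Q products,
    # with per-branch variance sigma_i^2 / 2 for component i.
    total = 0.0
    for e, s2 in zip(eps, sigma2):
        s = sqrt(s2 / 2)
        pr = Q(-v.real / s) if y.real > 0 else Q(v.real / s)
        pi = Q(-v.imag / s) if y.imag > 0 else Q(v.imag / s)
        total += e * pr * pi
    return total

def cond_entropy(x1s, x2s, eps, sigma2, K=32, r1=1.0, r2=1.0):
    """H(Y|X1,X2,H1,H2) in bits, with the fading phases averaged over a
    uniform K-point grid and fixed fading amplitudes r1, r2 (a sketch)."""
    acc = 0.0
    phases = 2 * np.pi * np.arange(K) / K
    for t1 in phases:
        for t2 in phases:
            h1, h2 = r1 * np.exp(1j * t1), r2 * np.exp(1j * t2)
            for x1 in x1s:
                for x2 in x2s:
                    v = h1 * x1 + h2 * x2
                    for y in OUTPUTS:
                        p = p_y_given_v(y, v, eps, sigma2)
                        if p > 0:
                            acc -= p * log2(p)
    return acc / (K * K * len(x1s) * len(x2s))

eps, sigma2 = [0.2, 0.8], [0.25, 1.1875]       # illustrative mixture
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
h_a = cond_entropy(qpsk, qpsk, eps, sigma2)
h_b = cond_entropy(qpsk * np.exp(1j * 0.3), qpsk, eps, sigma2)  # rotate user 1
```

Rotating user 1's constellation simply shifts the (uniformly averaged) phase of H1, so `h_a` and `h_b` agree up to the discretization of the phase grid.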

3.2 Optimal Amplitude Distributions

Given the characteristics of the optimal phases established in the previous section, we now turn our attention to the optimality of the amplitude distributions.

To provide more insights on the solutions of Eq. 21, we first examine a simplified optimization problem by fixing the distribution


. In particular, we know that the set of input probability distributions with a second-moment constraint is convex and compact Abou-Faycal et al. (2001). Furthermore,


is a continuous function of


Ranjbar et al. (2020). Therefore, if we select a fixed distribution


, there always exists an optimal solution


to minimize


. That is:


We know that the entropy function


is weakly continuous and weakly differentiable with respect to


Borwein and Lewis (2010). Therefore, we can establish the Kuhn-Tucker condition (KTC) for which a distribution


is the solution of Eq. 22 as follows:


where μ1 is the Lagrangian multiplier and D(⋅) denotes the directional derivative. Before examining the above KTC further, we state the following result regarding μ1.

Proposition 1. The Lagrangian multiplier μ1 in Eq. 23 is positive. Equivalently, for the optimization problem in Eq. 22, full power P1 is used.

Proof. Let Ω1 be the feasible set of


that satisfies E [|X1|2] ≤ P1. The first consequence of having a π/2 circularly symmetric input is that for a fixed




is a linear function of


and power constraint


is a linear function of


Cover and Thomas (2006). It is then clear that the objective function


achieves its minimum at an extreme point of Ω1 Winkler (1988). In the following, we show that any distribution whose second moment is smaller than P1 is not an extreme point of this set. Towards this end, let us consider a distribution


on Ω1 such that E [|Xt|2] = Pt < P1. Assume that there exists a positive δ such that 0 < Pt − δ and Pt + δ < P1. In addition, we define




. It is obvious that




, which means that both




are in Ω1. Now, consider the following linear combination:


It can then be verified that when choosing t such that:


we have


. Thus,


is a convex combination of two other distributions in the feasible set. Therefore,


cannot be an extreme point of the set Ω1 Winkler (1988). It can then be concluded that the power constraint must be active, and μ1 is positive.
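The convex-combination step in this proof can be checked numerically. The sketch below uses an arbitrary (hypothetical) amplitude distribution with second moment Pt < P1, scales it to second moments Pt ± δ, and verifies that the t = 1/2 mixture recovers the original second moment, so the original distribution is not an extreme point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical amplitude distribution with second moment Pt < P1.
P1, Pt, delta = 2.0, 1.5, 0.3
x = rng.rayleigh(scale=np.sqrt(Pt / 2), size=200_000)   # E[x^2] ~= Pt

# Scaled copies of the distribution with second moments Pt - delta and
# Pt + delta; both remain feasible since Pt + delta < P1.
x_lo = x * np.sqrt((Pt - delta) / Pt)
x_hi = x * np.sqrt((Pt + delta) / Pt)

# With t = 1/2, the mixture of the two scaled distributions has second
# moment t*(Pt - delta) + (1 - t)*(Pt + delta) = Pt, exactly reproducing
# the original distribution's power.
t = 0.5
m2_mix = t * np.mean(x_lo**2) + (1 - t) * np.mean(x_hi**2)
```

Only a distribution using full power P1 cannot be split this way, which is why the power constraint must be active at the optimum.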

With a positive μ1, we shall analyze the properties of the amplitude of


. To do that, we re-write the entropy






Then, the KTC in Eq. 23 can be re-written as:


with the equality being achieved for any mass point


, where


is the set of points of increase of the optimal


. Before further examining this KTC, we have the following proposition regarding the log-convexity of the sum of Q functions.

Proposition 2.


is a convex function for non-negative ai, bi and for x ≥ 0.

Proof. The proof is straightforward. Specifically, it can be verified that


is convex. Equivalently,


is log-convex. Furthermore, a sum of log-convex functions is log-convex. Therefore,


is log-convex.
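A quick numerical check of this convexity is given below. Since the exact Q-function arguments from the KTC are not reproduced in this text, the form a_i√x + b_i (with x playing the role of a squared amplitude and a_i, b_i ≥ 0) is an assumption made for illustration; discrete second differences of the log of the sum should then be non-negative:

```python
from math import erfc, sqrt, log

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def g(x, a, b):
    # log of a sum of Q functions; the arguments a_i*sqrt(x) + b_i are an
    # assumed form, not the paper's exact expression.
    return log(sum(Q(ai * sqrt(x) + bi) for ai, bi in zip(a, b)))

a, b = [1.0, 0.5], [0.0, 0.5]                 # arbitrary non-negative constants
xs = [0.05 * k for k in range(1, 201)]        # grid on (0, 10]
vals = [g(x, a, b) for x in xs]

# Discrete second differences are non-negative for a convex function.
second_diffs = [vals[i - 1] - 2 * vals[i] + vals[i + 1]
                for i in range(1, len(vals) - 1)]
```

The function is also strictly decreasing in x, which is what forces the left-hand side of the KTC to blow up for unbounded amplitudes in the argument that follows.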

The result in Proposition 2 helps establish the finiteness of


in Eq. 27, which is stated as follows:

Lemma 1. For any


in Ω1,


is finite.

Proof. Because 0 ≤ p(y|x1, x2, h1, h2) ≤ 1, it is apparent that


As shown in Appendix B, we have:


Then, by applying the convexity property of


, it follows that:


The finiteness of


in Eq. 27 leads to the following important result:

Theorem 1. The optimal input distribution


in Eq. 22 for a given


has a bounded amplitude.

Proof. The proof is done by contradiction. Specifically, assume that the amplitude of


is not bounded. This means that there exists a mass point of


that goes to infinity. When this happens, it is clear that the LHS of the KTC in Eq. 27 goes to infinity for a positive μ1. On the other hand, the RHS of Eq. 27 is the conditional entropy, which is always less than or equal to 2. This results in a contradiction. Hence, the amplitude of


must be bounded.

Given Theorem 1, we can now focus on the set of bounded


, denoted as


, and consider the following conditional entropy minimization problem for a fixed




A result similar to that in Theorem 1, but for


is then given in the next theorem.

Theorem 2. The optimal input distribution


in Eq. 31 for a given


has a bounded amplitude.

Proof. The proof follows a procedure similar to the one above and can be summarized as follows. We first establish the KTC for Eq. 31 as


where μ2 is the non-negative Lagrangian multiplier. It can then be verified that full power P2 is used, and μ2 > 0. Furthermore, we have:


In a similar manner as in the proof of Lemma 1, we can then show that


is finite. As a result, if the amplitude of


is not bounded, the LHS of Eq. 32 goes to infinity as |x2| approaches infinity, while the RHS of Eq. 32 is finite, which is not possible.

Now, by combining the results from Theorems 1 and 2, we can conclude that the capacity-achieving input distributions




in Eq. 21 are π/2 circularly symmetric, and they both have bounded amplitudes. In the following, using an analysis similar to those in Winkler (1988); Vu et al. (2019); Ranjbar et al. (2020), we shall demonstrate that both




in fact have constant amplitudes. First, let


be the set of amplitudes of all the distributions that are π/2 circularly symmetric with bounded amplitudes on the feasible set Ω. It then follows that


Since all input distributions are π/2 circularly symmetric, similar to the analysis made earlier, the objective function


is independent of the phase


. Equivalently,


depends only on the amplitude of the input distributions. More importantly, for a fixed input distribution of one user, the objective function


is linear and continuous over the input distribution of the other user. Therefore, for a fixed input distribution of one user,


is minimized at an extreme point of the feasible set of the distributions of the other user. As a result, this optimal input must have only a single mass point. The proof of the uniqueness of this extreme point follows the same convex-combination argument used in the proof of Proposition 1, and the details are therefore omitted here for brevity. By applying the same argument to both user 1 and user 2, it can then be concluded that the optimal




contain only a single mass point in their amplitudes.

Given the above results, we can conclude that the optimal distributions




are π/2 circularly symmetric, and they have constant amplitudes




, respectively. Then by setting


without changing the value of the conditional entropy as in Eq. 19, the sum-capacity of the considered GM MAC is calculated as:


Note that ξ(⋅) is an entropy function of the corresponding distribution.

It is not hard to verify that in the case of a single-user channel under power constraint P and fading H, the single-user capacity C can be obtained from Cs in Eq. 35 by setting P1 = P and P2 = 0, and it is given as:


Furthermore, C can be achieved by a π/2 circularly symmetric input having a constant amplitude, e.g., QPSK.

As a final note, we would like to mention that the results developed above apply directly to the traditional Gaussian channel, since the GM model includes Gaussian noise as a special case with M = 1.
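As a numerical companion to these results, the sketch below estimates the ergodic sum-rate achieved by QPSK + QPSK via Monte Carlo over Rayleigh fading. It uses the assumed Q-function decomposition of the channel law described earlier (not the paper's exact Eq. 10) together with the fact, established above, that H(Y) = 2 bits for π/2 circularly symmetric inputs; the mixture parameters are illustrative:

```python
import numpy as np
from math import erfc, sqrt, log2

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

OUTPUTS = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]

def p_y_given_v(y, v, eps, sigma2):
    # Assumed GM transition law: mixture of per-component I/Q products.
    total = 0.0
    for e, s2 in zip(eps, sigma2):
        s = sqrt(s2 / 2)
        pr = Q(-v.real / s) if y.real > 0 else Q(v.real / s)
        pi = Q(-v.imag / s) if y.imag > 0 else Q(v.imag / s)
        total += e * pr * pi
    return total

def sum_rate_qpsk(P1, P2, eps, sigma2, n_fading=400, seed=1):
    """Monte Carlo ergodic sum-rate (bits/channel use) of QPSK + QPSK:
    E_h[2 - H(Y | X1, X2, h1, h2)], using H(Y) = 2 for pi/2 circularly
    symmetric inputs."""
    rng = np.random.default_rng(seed)
    q1 = sqrt(P1) * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
    q2 = sqrt(P2) * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
    h1 = (rng.standard_normal(n_fading) + 1j * rng.standard_normal(n_fading)) / sqrt(2)
    h2 = (rng.standard_normal(n_fading) + 1j * rng.standard_normal(n_fading)) / sqrt(2)
    rate = 0.0
    for a, b in zip(h1, h2):
        ce = 0.0
        for x1 in q1:
            for x2 in q2:
                v = a * x1 + b * x2
                for y in OUTPUTS:
                    p = p_y_given_v(y, v, eps, sigma2)
                    if p > 0:
                        ce -= p * log2(p)
        rate += 2.0 - ce / 16.0
    return rate / n_fading

eps, sigma2 = [0.45, 0.55], [0.5, 1.409]   # illustrative 2-term GM, ~unit variance
r_low = sum_rate_qpsk(1.0, 1.0, eps, sigma2)
r_high = sum_rate_qpsk(100.0, 100.0, eps, sigma2)
```

As expected, the sum-rate grows with transmit power and stays below the 2 bits/sec/Hz ceiling imposed by the 1-bit ADC.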

4 Sum-Rate and Sum-Capacity: Numerical Examples

In the following, we will provide several examples to verify the optimality of




in terms of the sum-rate. Unless otherwise stated, we assume that the fading gains have unit variance.

First, let us consider a 1-bit ADC MAC under a 2-term GM noise with ɛ1 = 0.45, ɛ2 = 0.55 and




. For this channel, it is assumed that the two users have equal transmit power P = P1 = P2. We consider several signaling schemes, including phase-shift keying (PSK) and quadrature amplitude modulation (QAM), for user 1 and user 2, respectively: 1) QPSK + QPSK; 2) QPSK+8-PSK; 3) 16-QAM+16-QAM; and 4) Gaussian + Gaussian. Figure 3 shows the sum-rates achieved by these modulation schemes over a wide range of SNR, which is defined as SNR =


. These sum-rates are numerically calculated from Eqs. 7, 8, 11 using the corresponding input signals. The sum-capacity calculated from Eq. 35 is also provided. In addition, the single-user capacity C in Eq. 36 is plotted as a reference.


FIGURE 3. Sum-rates achieved by different pairs of input signals and the single-user capacity over the 2-term GM MAC. The two users are assumed to have equal transmit power P = P1 = P2, and the single user has transmit power P.

The superiority of QPSK + QPSK and QPSK+8-PSK can clearly be seen from Figure 3. The reason is that such input signals are π/2 circularly symmetric with constant amplitudes, which is sum-capacity-achieving. It is also clear from Figure 3 that over the considered SNR range, the single-user capacity C is always smaller than the sum-capacity Cs. While the single-user case corresponds to a corner point of the 2-user capacity region, operating at this corner point is clearly sub-optimal. The results shown in Figure 3 also indicate that both Cs and C asymptotically approach 2 bits/sec/Hz at sufficiently high SNR, a fact that can also be verified from Eqs. 35, 36.

Our results on the optimal signaling schemes also hold for a general GM channel having any number of Gaussian components. To demonstrate this, Figure 4 presents the sum-rates achieved by the same signaling schemes over a MAC under GM noise having three Gaussian components with ɛ1 = 0.9, ɛ2 = 0.05, ɛ3 = 0.05 and


. Note that both users are assumed to use the same transmit power. As mentioned earlier, the sum-capacity is calculated using Eq. 35, while the other sum-rates are obtained from Eqs. 7, 8, 11. For comparison, the single-user capacity in Eq. 36 is also provided. It can be seen from Figure 4 that QPSK + QPSK and QPSK+8-PSK outperform the other signaling schemes in terms of the sum-rate. As with the previous results for the 2-term GM channel, the sum-capacity Cs is significantly larger than the single-user capacity C over the SNR range of interest.


FIGURE 4. Sum-rates achieved by different pairs of input signals and the single-user capacity over the 3-term GM MAC. The two users are assumed to have equal transmit power P = P1 = P2, and the single user has transmit power P.

In Figures 5, 6, the sum-rates are plotted for the considered 2-term and 3-term GM channels, respectively, but using unequal transmit powers P1 = P and P2 = 3P. Note that in this case, we still define SNR as SNR =


. Clearly, the optimality of QPSK + QPSK and QPSK+8-PSK persists, consistent with the results obtained in the case of equal transmit power. We also observed the same sub-optimality of the single-user capacity relative to the sum-capacity; those results are omitted for brevity.


FIGURE 5. Sum-rates achieved by different pairs of input signals over the 2-term GM MAC. The two users are assumed to have unequal transmit power P1 = P and P2 = 3P.


FIGURE 6. Sum-rates achieved by different pairs of input signals over the 3-term GM MAC. The two users are assumed to have unequal transmit power P1 = P and P2 = 3P.

Finally, Figure 7 compares the sum-capacities of three different channels: the Gaussian, 2-term GM, and 3-term GM channels. Note that we use the same 2-term and 3-term GM parameters as before. For simplicity, we again assume equal transmit power with P1 = P2 = P, and use SNR =


as before. With the chosen parameters, the highest sum-capacity is achieved over the 3-term GM channel, and the Gaussian noise is the worst-case noise. However, it is clear from Eq. 35 that the sum-capacity Cs is sensitive to the choice of M and the sets {ɛi} and {σi}, 1 ≤ i ≤ M. Due to the complexity of the function Cs in Eq. 35, it is not straightforward to analytically compare the sum-capacities of different Gaussian and GM channels. We believe that such an interesting investigation requires additional study.


FIGURE 7. Sum-capacities of MACs under different types of noise: Gaussian, 2-term GM, and 3-term GM.

5 Conclusion

In this paper, we have addressed the optimal input distributions and the sum-capacity of a 2-user Rayleigh fading MAC under general Gaussian-mixture noise plus interference with 1-bit ADC. The phases of the optimal inputs were first shown to be π/2 circularly symmetric. By exploiting this result, it was proved that the amplitudes of the optimal input distributions must have only a single mass point in order to minimize the conditional entropy. As a result, the sum-capacity-achieving signaling schemes are π/2 circularly symmetric with a single-mass-point amplitude using full power. The advantages of the proposed signaling schemes in terms of the sum-rate were also clearly demonstrated.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.


Funding

This work was partially supported by Air Force Research Lab/Intelligent Fusion Technology under SBIR Grant No. IFT079-02.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


Appendix A

Proof that


From Eq. 13, it can be verified that


. Then we have the following:


The third equality is based on a variable transformation, and the fourth equality is due to the fact that


. Thus, the output is uniform, and


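The symmetry argument of Appendix A can also be verified numerically: averaging the channel transition probabilities over a π/2 circularly symmetric constellation yields a uniform output for any fading realization, because simultaneous π/2 rotations of the inputs simply permute the four output points. The sketch below uses the assumed Q-function decomposition of the channel law described in the main text (not the paper's exact Eq. 10), with both users employing QPSK:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def p_y_given_v(y, v, eps, sigma2):
    # Assumed GM transition law: mixture of per-component I/Q products.
    total = 0.0
    for e, s2 in zip(eps, sigma2):
        s = sqrt(s2 / 2)
        pr = Q(-v.real / s) if y.real > 0 else Q(v.real / s)
        pi = Q(-v.imag / s) if y.imag > 0 else Q(v.imag / s)
        total += e * pr * pi
    return total

eps, sigma2 = [0.2, 0.8], [0.25, 1.1875]                  # illustrative mixture
h1, h2 = 0.9 * np.exp(1j * 1.1), 1.3 * np.exp(-1j * 0.4)  # arbitrary fixed gains
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

# Average P(y | x1, x2, h1, h2) over all 16 equiprobable QPSK pairs: the
# orbit under simultaneous pi/2 rotations permutes the four outputs, so
# each output probability is exactly 1/4 and H(Y) = 2 bits.
probs = [np.mean([p_y_given_v(y, h1 * x1 + h2 * x2, eps, sigma2)
                  for x1 in qpsk for x2 in qpsk])
         for y in (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)]
```

The same check passes for any fading gains and any π/2 circularly symmetric input pair, consistent with the appendix.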

Appendix B

The Inequality


We Have


Note that the last inequality comes from the fact that Q(x) is a decreasing function for x > 0.

