
Comments (6)

jhoydis commented on August 15, 2024

Hi,

I have done some 8PSK simulations with your constellation in this notebook:
https://colab.research.google.com/drive/1QoE7I3pAymRqWgggz832B-TP72uO-dZw?usp=sharing

It seems to work as expected. Maybe there is an error in your Mapper/Demapper?
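
In case it is useful, a quick way to sanity-check a Mapper/Demapper pair on a custom constellation is a noiseless loop-back test: map random bits, demap with hard decisions, and confirm the BER is zero. A minimal sketch, using the same Sionna components as the code later in this thread (the 8PSK points below are just an illustrative placeholder, not necessarily the ones from the notebook):

import numpy as np
from sionna.mapping import Constellation, Mapper, Demapper
from sionna.utils import BinarySource, compute_ber

# Illustrative 8PSK: 8 unit-circle points, 3 bits per symbol
points = np.exp(1j * 2 * np.pi * np.arange(8) / 8)
constellation = Constellation("custom", 3, points)

mapper = Mapper(constellation=constellation)
demapper = Demapper("app", constellation=constellation, hard_out=True)
binary_source = BinarySource()

bits = binary_source([100, 3000])   # 100 frames of 3000 bits (1000 symbols each)
x = mapper(bits)                    # bits -> 8PSK symbols
no = 1e-4                           # (almost) noiseless, so decisions should be error-free
bits_hat = demapper([x, no])        # hard decisions directly on the mapper output
print("Loop-back BER:", compute_ber(bits, bits_hat).numpy())  # expected: 0.0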

Hope this helps.


Koltice commented on August 15, 2024

Thanks for the help!
I had a mistake in my code; it's fixed now and the plots look like this:

[BER plots: output1, output2]

I hope you can still help me with one question:
Why does 16QAM have a worse BER than 8PSK? Shouldn't the curves be the other way around?
Thanks!


jhoydis commented on August 15, 2024

I am not sure I understand your observation correctly. 16QAM is clearly better with a channel code, which shows that you get higher mutual information at the output of the demapper. For the uncoded results, it might be because, in my example code, I use the ebnodb2no function incorrectly for uncoded transmissions. It should be:

no = ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=1)
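
For reference, ebnodb2no assumes unit-energy symbols and converts Eb/No to the noise power density via Es/No = Eb/No * num_bits_per_symbol * coderate, so using a coderate other than 1 for an uncoded transmission yields a mismatched noise variance. A small sketch of that relation (the numbers are only illustrative, and I assume no resource_grid argument is passed):

import numpy as np
from sionna.utils import ebnodb2no

ebno_db = 6.0
ebno_lin = 10 ** (ebno_db / 10)      # Eb/No on a linear scale
num_bits_per_symbol = 4              # 16QAM
coderate = 800 / 1200                # the LDPC rate used elsewhere in this thread

# With unit symbol energy: No = 1 / (Eb/No * num_bits_per_symbol * coderate)
no_manual = 1.0 / (ebno_lin * num_bits_per_symbol * coderate)
no_coded = ebnodb2no(ebno_db, num_bits_per_symbol=num_bits_per_symbol, coderate=coderate)
no_uncoded = ebnodb2no(ebno_db, num_bits_per_symbol=num_bits_per_symbol, coderate=1.0)
print(no_manual, float(no_coded), float(no_uncoded))  # the first two should agree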


Koltice commented on August 15, 2024

Thanks again for your help!
Sorry, I got it mixed up there. What I don't understand is:
Why does 8PSK have a worse BER than 16QAM? Shouldn't the plot show this order (from best to worst): 8PSK - 16QAM?
For the uncoded case it's pretty close, but I would have thought that in the coded case the coded 8PSK performs a bit better than the coded 16QAM... or is there something I'm overlooking?

[BER plots: output, output1]

This is the code I used:

import numpy as np
import tensorflow as tf
import time # for throughput measurements

# Configure the notebook to use only a single GPU and allocate only as much memory as needed
# For more details, see https://www.tensorflow.org/guide/gpu
gpus = tf.config.list_physical_devices('GPU')
print('Number of GPUs available :', len(gpus))
if gpus:
    gpu_num = 0 # Number of the GPU to be used
    try:
        #tf.config.set_visible_devices([], 'GPU')
        tf.config.set_visible_devices(gpus[gpu_num], 'GPU')
        print('Only GPU number', gpu_num, 'used.')
        tf.config.experimental.set_memory_growth(gpus[gpu_num], True)
    except RuntimeError as e:
        print(e)
        
# Import Sionna
try:
    import sionna
except ImportError as e:
    # Install Sionna if package is not already installed
    import os
    os.system("pip install sionna")
    import sionna
    

# For the implementation of the Keras models
from tensorflow.keras import Model

# Import required Sionna components
from sionna.mapping import Constellation, Mapper, Demapper
from sionna.utils import BinarySource, compute_ber, BinaryCrossentropy, ebnodb2no, PlotBER
from sionna.channel import AWGN
from sionna.fec.ldpc import LDPC5GEncoder, LDPC5GDecoder

class UncodedSystemAWGN(Model):
    def __init__(self, n, constellation, NUM_BITS_PER_SYMBOL_psk):
        super().__init__() 
        self.num_bits_per_symbol = NUM_BITS_PER_SYMBOL_psk
        self.n = n
        self.k = int(n*1)
        self.coderate = 1
        self.constellation = constellation
        self.mapper = Mapper(constellation=self.constellation)
        self.demapper = Demapper("app", constellation=self.constellation, hard_out=True)
        self.binary_source = BinarySource()
        self.awgn_channel = AWGN()
    
    @tf.function(jit_compile=True) 
    def __call__(self, batch_size, ebno_db):
        no = ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=1)
        bits = self.binary_source([batch_size, self.n])
        codewords = bits
        x = self.mapper(codewords)
        y = self.awgn_channel([x, no])
        bits_hat = self.demapper([y,no])
        return bits, bits_hat

class CodedSystemAWGN(Model):
    def __init__(self, n, coderate, constellation, NUM_BITS_PER_SYMBOL):
        super().__init__() 
        self.num_bits_per_symbol = NUM_BITS_PER_SYMBOL
        self.n = n
        self.k = int(n*coderate)
        self.coderate = coderate
        self.constellation = constellation
        self.mapper = Mapper(constellation=self.constellation)
        self.demapper = Demapper("app", constellation=self.constellation)
        self.binary_source = BinarySource()
        self.awgn_channel = AWGN()
        self.encoder = LDPC5GEncoder(self.k, self.n)
        self.decoder = LDPC5GDecoder(self.encoder, hard_out=True)
    
    @tf.function(jit_compile=True) # activate graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=self.coderate)
        bits = self.binary_source([batch_size, self.k])
        codewords = self.encoder(bits)
        x = self.mapper(codewords)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y,no])
        bits_hat = self.decoder(llr)
        return bits, bits_hat

NUM_BITS_PER_SYMBOL_qam = 4

constellation2 = Constellation("qam", NUM_BITS_PER_SYMBOL_qam)
constellation2.show();

NUM_BITS_PER_SYMBOL_psk = 3

real = 1 * np.cos(np.pi/4)
imag = 1 * np.sin(np.pi/4)
CONST_SHAPE = np.array([-1, 1, 1j, -1j, complex(real, imag), complex(real, -imag), complex(-real, imag), complex(-real, -imag)]) # custom 8PSK constellation points with their own bit labelling
constellation1 = Constellation("custom", NUM_BITS_PER_SYMBOL_psk, CONST_SHAPE)
constellation1.show();

n = 1200
k = 800
coderate = k/n
model_coded_awgn = CodedSystemAWGN(n, coderate, constellation1, NUM_BITS_PER_SYMBOL_psk)
model_uncoded_awgn = UncodedSystemAWGN(n, constellation1, NUM_BITS_PER_SYMBOL_psk)
model_coded_awgn_2 = CodedSystemAWGN(n, coderate, constellation2, NUM_BITS_PER_SYMBOL_qam)
model_uncoded_awgn_2 = UncodedSystemAWGN(n, constellation2, NUM_BITS_PER_SYMBOL_qam)

ber_plots = PlotBER("AWGN")

ber_plots.simulate(model_uncoded_awgn,
                  ebno_dbs=tf.range(0,15),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Uncoded 8PSK",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=False);

ber_plots.simulate(model_uncoded_awgn_2,
                  ebno_dbs=tf.range(0,15),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Uncoded 16QAM",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=True);

ber_plots = PlotBER("AWGN")

ber_plots.simulate(model_coded_awgn,
                  ebno_dbs=tf.range(0,9),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Coded 8PSK",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=False);

ber_plots.simulate(model_coded_awgn_2,
                  ebno_dbs=tf.range(0,9),
                  batch_size=256,
                  num_target_block_errors=100, # simulate until 100 block errors occurred
                  legend="Coded 16QAM",
                  soft_estimates=True,
                  max_mc_iter=100, # run 100 Monte-Carlo simulations (each with batch_size samples)
                  show_fig=True);


jhoydis commented on August 15, 2024

I think you might be overlooking that you plot BER vs Eb/No and not vs SNR. It is not surprising that QAM (with Gray labelling) does better than PSK.
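
To make that concrete: at the same Eb/No, the two schemes operate at different symbol SNRs (Es/No = Eb/No * num_bits_per_symbol * coderate), and Sionna's built-in "qam" constellations are Gray-labelled, while the custom 8PSK labelling above appears not to be (several angularly adjacent points differ in two or three bits), which hurts its uncoded BER. A small sketch to check both points, assuming Sionna's convention that the i-th point of a custom constellation carries the bit label given by the binary representation of i (illustrative only):

import numpy as np

# Custom 8PSK points from the code above; point i is assumed to carry the 3-bit label i
real, imag = np.cos(np.pi/4), np.sin(np.pi/4)
points = np.array([-1, 1, 1j, -1j,
                   complex(real, imag), complex(real, -imag),
                   complex(-real, imag), complex(-real, -imag)])

# Walk around the circle and count differing bits between angular neighbours
order = np.argsort(np.angle(points))
for a, b in zip(order, np.roll(order, -1)):
    flips = bin(int(a) ^ int(b)).count("1")
    print(f"{a:03b} -> {b:03b}: {flips} bit(s) differ")   # a Gray labelling would always print 1

# Relation between Eb/No and symbol SNR for the coded systems (coderate 800/1200)
ebno_db, coderate = 6.0, 800 / 1200
for k, name in [(3, "8PSK"), (4, "16QAM")]:
    esno_db = ebno_db + 10 * np.log10(k * coderate)
    print(f"{name}: Es/No = {esno_db:.2f} dB at Eb/No = {ebno_db} dB")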


Koltice commented on August 15, 2024

Alright, I think I got it! Thanks again for your help!

