Channel coding adds redundant symbols to a digital sequence M according to definite rules, so that the unstructured information sequence M becomes a structured digital sequence Y (the code sequence). In other words, in the code sequence every information symbol is related to the added check symbols. At the receiving end, the channel decoder, knowing these coding rules in advance, verifies whether the received sequence R conforms to them, thereby detecting errors in R and correcting them. This is the basic idea of channel coding: detecting and correcting transmission errors by exploiting the correlation introduced among the symbols.
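As a minimal illustration of this idea (a single even-parity check bit, not a scheme used in any particular standard), the Python sketch below shows how a receiver that knows the coding rule can detect a transmission error:

```python
def encode_even_parity(bits):
    """Append one check bit so that the codeword contains an even number of 1s."""
    return bits + [sum(bits) % 2]

def satisfies_rule(received):
    """Receiver-side check: does the block still obey the agreed rule?"""
    return sum(received) % 2 == 0

codeword = encode_even_parity([1, 0, 1, 1])    # information block plus one check bit
corrupted = codeword.copy()
corrupted[2] ^= 1                              # a single error during transmission
print(satisfies_rule(codeword), satisfies_rule(corrupted))   # True False
```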
Generally, the digital sequence M is transmitted in groups of k symbols; each group of k symbols is called an information block. The channel encoder adds extra symbols to each information block according to definite rules, forming a code block of n symbols. These n symbols are mutually related, and the added n - k symbols are called the check (supervisory) symbols of the code block. From the viewpoint of information transmission, the check symbols are redundant, since they carry no new information. However, this redundancy gives the code a certain error-detection and error-correction capability, so the reliability of transmission is increased and the error rate is reduced. On the other hand, if the information transmission rate is to remain constant after the check symbols are added, the duration of each symbol in the code block must be reduced. For a binary code, the pulse width must also be reduced: if the normalized width of each symbol pulse is 1 before coding, it becomes k/n after coding, so the channel bandwidth must be expanded by a factor of n/k. In this case, bandwidth redundancy is traded for transmission reliability. If a lower information transmission rate is acceptable, the duration of each symbol after coding can remain unchanged; in this case, transmission speed is traded for reliability instead.
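As a quick numerical illustration of this trade-off, the sketch below computes the code rate k/n, the normalized pulse width after coding, and the required bandwidth expansion n/k; the (7, 4) block code used here is only an assumed example:

```python
k, n = 4, 7                       # assumed (7, 4) block code: 4 information bits, 3 check bits
code_rate = k / n                 # fraction of the code block that carries information
pulse_width_after = k / n         # normalized pulse width after coding (1 before coding)
bandwidth_expansion = n / k       # factor by which the channel bandwidth must grow
print(f"code rate = {code_rate:.3f}, pulse width = {pulse_width_after:.3f}, "
      f"bandwidth expansion = x{bandwidth_expansion:.2f}")
```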
As shown in Table 4-1, there are still considerable gaps between the coding gains achieved by different coding schemes and the ideal coding gain (the Shannon limit).
Table 4-1 Coding Gains for BPSK or QPSK
Coding Adopted | Coding Gain (dB @ BER = 10^-3) | Coding Gain (dB @ BER = 10^-5) | Data Rate
Ideal coding | 11.2 | 13.6 | -
Concatenated code (RS + convolutional code, Viterbi decoding) | 6.5 ~ 7.5 | 8.5 ~ 9.5 | Moderate
Convolutional code, sequential decoding (soft decision) | 6.0 ~ 7.0 | 8.0 ~ 9.0 | Moderate
Concatenated code (RS + block code) | 4.5 ~ 5.5 | 6.5 ~ 7.5 | Very high
Convolutional code, Viterbi decoding | 4.0 ~ 5.5 | 5.0 ~ 6.5 | High
Convolutional code, sequential decoding (hard decision) | 4.0 ~ 5.0 | 6.0 ~ 7.0 | High
Block code (hard decision) | 3.0 ~ 4.0 | 4.5 ~ 5.5 | High
Convolutional code, threshold decoding | 1.5 ~ 3.0 | 2.5 ~ 4.0 | Very high
It can be seen that, for the same modulation, the coding gain varies considerably with the coding scheme. The coding schemes commonly adopted include convolutional codes, Reed-Solomon codes, BCH codes and Turbo codes. In WCDMA, convolutional codes are used for voice and low-rate signaling, while Turbo codes are used for data.
4.4.1 Convolutional Code
The n symbols generated by a convolutional encoder in any given time interval depend not only on the k information bits input during that interval, but also on the information bits of the preceding N - 1 intervals. The check symbols therefore supervise the information over N consecutive intervals, and the total number of symbols nN over these intervals is called the constraint length.
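For concreteness, here is a minimal sketch of a rate-1/2 convolutional encoder (k = 1, n = 2, N = 3, so the constraint length nN = 6 in the sense defined above). The generator polynomials (7, 5) in octal are a common textbook choice and are an assumption, not taken from the text:

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 convolutional encoder, generators (7, 5) octal, two memory bits."""
    state = 0                              # the two previous information bits
    out = []
    for b in bits:
        reg = (b << 2) | state             # current bit plus the two stored bits
        out += [bin(reg & g1).count("1") % 2,   # parity over the taps of g1
                bin(reg & g2).count("1") % 2]   # parity over the taps of g2
        state = reg >> 1                   # shift: keep the two most recent bits
    return out

print(conv_encode([1, 0, 1, 1]))           # 8 coded bits for 4 information bits
```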
The main decoding schemes for convolutional codes are threshold decoding, hard-decision Viterbi decoding and soft-decision Viterbi decoding. Of these, soft-decision Viterbi decoding performs best and is the one usually adopted: compared with hard-decision Viterbi decoding it is not much more complex, yet its performance is better by about 1.5 ~ 2 dB.
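The sketch below is a minimal hard-decision Viterbi decoder matched to the (7, 5) encoder sketched above; it selects the trellis path with the smallest Hamming distance to the received sequence. The received vector and the injected single error are illustrative assumptions:

```python
def viterbi_decode(coded, g1=0b111, g2=0b101):
    """Hard-decision Viterbi decoder matched to the (7, 5) encoder above."""
    n_states = 4                                    # 2 memory bits -> 4 trellis states
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)           # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(coded), 2):
        r = coded[i:i + 2]                          # one received code pair
        new_metric = [INF] * n_states
        new_paths = [[] for _ in range(n_states)]
        for state in range(n_states):
            if metric[state] == INF:
                continue
            for b in (0, 1):                        # hypothesise the information bit
                reg = (b << 2) | state
                expected = [bin(reg & g1).count("1") % 2,
                            bin(reg & g2).count("1") % 2]
                branch = sum(e != x for e, x in zip(expected, r))   # Hamming distance
                nxt = reg >> 1
                if metric[state] + branch < new_metric[nxt]:
                    new_metric[nxt] = metric[state] + branch
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

received = [1, 1, 1, 0, 0, 0, 0, 1]                 # output of conv_encode([1, 0, 1, 1])
received[3] ^= 1                                    # inject one channel error
print(viterbi_decode(received))                     # recovers [1, 0, 1, 1]
```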
4.4.2 Turbo Code
Coding theory has long striven to approach the Shannon limit, and the Turbo code is a milestone innovation on this path. Trellis codes come close to the Shannon limit on bandwidth-limited channels, whereas Turbo codes are especially suited to channels where bandwidth is not the constraint, such as deep-space and satellite communication. Theoretical simulation shows that on an AWGN channel at Eb/N0 = 0.7 dB, a rate-1/2 Turbo code achieves a bit error rate of 10^-5.
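For context, the Shannon limit referred to here can be evaluated from the standard bound Eb/N0 >= (2^eta - 1)/eta for an unconstrained AWGN channel with spectral efficiency eta in bits/s/Hz; the eta values below are illustrative assumptions, and letting eta tend to zero gives the ultimate limit of about -1.59 dB:

```python
import math

def shannon_limit_ebn0_db(eta):
    """Minimum Eb/N0 in dB on an unconstrained AWGN channel
    for spectral efficiency eta (bits/s/Hz): (2**eta - 1) / eta."""
    return 10 * math.log10((2 ** eta - 1) / eta)

for eta in (0.001, 0.5, 1.0, 2.0):
    print(f"eta = {eta:5}:  Eb/N0 >= {shannon_limit_ebn0_db(eta):6.2f} dB")
# eta -> 0 approaches the ultimate limit of about -1.59 dB
```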
A Turbo code is formed by concatenating two or more component encoders in parallel through one or more interleavers. It can be viewed as a modification of the structure and decoding algorithm of the traditional concatenated code: the internal interleaver removes the harmful positive feedback that would otherwise arise in iterative decoding. Turbo iterative decoding algorithms include SOVA (soft-output Viterbi algorithm) and MAP (maximum a posteriori probability algorithm). Because each iteration of the MAP algorithm outperforms the Viterbi algorithm, iterative decoding based on MAP obtains a larger coding gain.
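A minimal structural sketch of such a parallel concatenation is given below: two identical memory-2 RSC component encoders, the second fed through an interleaver, produce the systematic stream plus two parity streams (rate about 1/3 before any puncturing). The particular generator polynomials and the random interleaver are assumptions for illustration only, and no iterative decoder is shown:

```python
import random

def rsc_parity(bits):
    """Parity stream of a memory-2 recursive systematic convolutional (RSC)
    component encoder, feedback 1 + D + D^2 (octal 7), forward 1 + D^2 (octal 5)."""
    s1 = s2 = 0
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2            # recursive feedback bit
        parity.append(a ^ s2)      # feed-forward output (the systematic output is u itself)
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, interleaver):
    """Parallel concatenation: systematic bits, parity from encoder 1,
    and parity from encoder 2 fed with the interleaved bits (rate ~ 1/3)."""
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in interleaver])
    return bits, p1, p2

info = [1, 0, 1, 1, 0, 0, 1, 0]
interleaver = list(range(len(info)))
random.seed(0)
random.shuffle(interleaver)        # a random interleaver, for illustration only
print(turbo_encode(info, interleaver))
```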