In the digitization of an analog speech signal, the correct order of the processing steps is ( )
A. Sampling, quantization, coding
B. Quantization, sampling, coding
C. Sampling, coding, quantization
D. Coding, quantization, sampling
Answer: A. Converting an analog signal into a digital signal follows the order sampling → quantization → encoding.
The process of converting an analog signal into a digital signal is known as analog-to-digital (A/D) conversion, and it consists of three steps:
sampling: digitizing the signal on the time axis;
quantization: digitizing the signal on the amplitude axis;
encoding: recording the sampled and quantized digital data in a certain format.
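The three steps above can be sketched in Python; the sample rate, bit depth, and test tone below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Assumed parameters for illustration: 8 kHz sample rate, 8-bit quantization.
FS = 8000          # sampling rate (Hz): digitizes the time axis
BITS = 8           # quantization depth: digitizes the amplitude axis

# 1. Sampling: evaluate a 440 Hz "analog" tone at discrete time points.
t = np.arange(80) / FS                     # 80 samples = 10 ms at 8 kHz
analog = np.sin(2 * np.pi * 440 * t)       # stand-in for the analog signal

# 2. Quantization: map each amplitude to one of 2**BITS discrete levels.
levels = 2 ** BITS
quantized = np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)

# 3. Encoding: record the quantized values as raw binary PCM bytes.
pcm_bytes = quantized.tobytes()

print(len(t), len(pcm_bytes))              # one stored byte per sample
```

Playback reverses the encoding and quantization steps and feeds the reconstructed levels to a digital-to-analog converter.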
Computers store digital audio as 0s and 1s: the audio is first converted into level signals, those levels are encoded as binary data for storage, and on playback the data are converted back into analog level signals and sent to the speakers. Digital audio therefore differs fundamentally from tape, radio, and television sound in how it is stored and played back. Its advantages include convenient storage, low storage cost, no distortion during storage and transmission, and very convenient editing and processing.
After a car stereo is retrofitted with a DSP, does the signal for the original car speakers go from the head unit to the DSP and then back out through the head unit, or directly out of the DSP?
The DSP is installed on the output side of the head unit: the audio signal output by the head unit is fed into the DSP, where it is amplified and conditioned, and is then sent directly from the DSP to the car's speakers.
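As a rough sketch of that chain (head unit → DSP → amplifier → speakers), a car DSP typically applies per-channel gain and a time-alignment delay; the gain, delay, and sample rate below are made-up values for illustration.

```python
import numpy as np

FS = 48000  # assumed DSP sample rate (Hz)

def dsp_process(host_out, gain_db=-3.0, delay_ms=1.5):
    """Apply a per-channel gain and a time-alignment delay, as a car DSP might."""
    gain = 10 ** (gain_db / 20.0)                       # dB -> linear gain
    delay_samples = int(round(delay_ms * FS / 1000.0))  # delay in samples
    return np.concatenate([np.zeros(delay_samples), host_out * gain])

host_out = np.ones(10)              # stand-in for the head unit's audio output
speaker_in = dsp_process(host_out)  # what the DSP hands to the amplifier
print(speaker_in[:3])               # leading zeros from the alignment delay
```

The head unit never sees the processed signal again; the DSP sits strictly downstream of it.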
Difficulties in developing voice wake-up headsets
The main difficulties in developing a voice wake-up headset include the following:
1. Voice wake-up technology: voice wake-up requires high-precision speech recognition and processing, but a small device such as a headset has limited processing power, so the algorithms must be optimized and streamlined to improve wake-up accuracy and response speed.
2. Headset hardware design: realizing voice wake-up requires adding hardware such as a microphone and a speech-processing chip to the headset, which raises the bar for both its design and its manufacture.
3. Energy management: voice wake-up requires the headset to draw some power in standby so that it can be woken at any time, but excessive power consumption shortens battery life, so energy management must be carefully optimized.
4. Environmental interference: because usage scenarios vary widely, the headset must accurately recognize the user's voice commands in noisy environments, which places higher demands on the recognition algorithms and noise-suppression technology.
In short, the difficulties in developing a voice wake-up headset span technology, hardware, energy consumption, and the environment; developers must refine and optimize every aspect to achieve accurate, stable, and convenient voice wake-up.
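One common way to reconcile the energy and accuracy constraints above is a two-stage pipeline: a cheap always-on gate triggers a more expensive recognizer only when there is signal worth analyzing. The sketch below is a toy illustration; the threshold, frame size, and recognizer are hypothetical.

```python
import numpy as np

# Toy two-stage wake pipeline: a low-power energy gate decides whether to
# run the (expensive, hypothetical) keyword recognizer at all.
FRAME = 160                      # 20 ms frames at an assumed 8 kHz rate
ENERGY_THRESHOLD = 0.01          # assumed tuning value, not from the text

def frame_energy(frame):
    return float(np.mean(frame ** 2))

def should_wake(frames, recognizer):
    for frame in frames:
        if frame_energy(frame) > ENERGY_THRESHOLD:  # cheap always-on gate
            return recognizer(frame)                # costly stage, run rarely
    return False

silence = [np.zeros(FRAME) for _ in range(5)]
speech = silence + [0.5 * np.ones(FRAME)]
print(should_wake(silence, lambda f: True))   # False: gate never opens
print(should_wake(speech, lambda f: True))    # True: gate opens, recognizer runs
```

Real products replace the energy gate with a small neural keyword spotter, but the standby-power trade-off is the same.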
What is the role of framing and windowing a speech signal?
Framing and windowing are both preprocessing steps for extracting features from a speech signal: first split the signal into frames, then apply a window to each frame, then take the Fast Fourier Transform.
Simply put, a speech signal is not stationary as a whole, but it can be regarded as stationary locally. Later stages of speech processing require a stationary input, so the whole signal is divided into frames, i.e., cut into many short segments.
A segment of 10-30 ms can be treated as stationary; a frame is generally no shorter than 20 ms, and the frame shift is about half the frame length. Because the shift is smaller than the frame, adjacent frames overlap, which avoids abrupt changes from one frame to the next.
After framing as described above, each frame has discontinuities at its beginning and end, so the more frames the signal is cut into, the larger the error relative to the original signal. Windowing solves this problem: it makes the framed signal continuous, so that each frame exhibits the characteristics of a periodic function. In speech signal processing, the Hamming window is generally used.
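The framing-and-windowing pipeline can be sketched as follows; the 25 ms frame, half-frame shift, and 16 kHz sample rate are common choices assumed for illustration.

```python
import numpy as np

FS = 16000
FRAME_LEN = int(0.025 * FS)      # 400 samples per 25 ms frame
FRAME_SHIFT = FRAME_LEN // 2     # half-frame shift -> adjacent frames overlap

signal = np.random.randn(FS)     # one second of stand-in speech

# 1. Split into overlapping frames.
n_frames = 1 + (len(signal) - FRAME_LEN) // FRAME_SHIFT
frames = np.stack([signal[i * FRAME_SHIFT : i * FRAME_SHIFT + FRAME_LEN]
                   for i in range(n_frames)])

# 2. Apply a Hamming window, tapering each frame's edges toward zero to
#    suppress the discontinuity at the frame boundaries.
window = np.hamming(FRAME_LEN)
windowed = frames * window

# 3. FFT of each (now quasi-stationary) windowed frame.
spectra = np.fft.rfft(windowed, axis=1)
print(frames.shape, spectra.shape)
```

The tapered edges are why the overlap matters: samples attenuated at one frame's edge sit near the center of a neighboring frame.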
1. Research directions in speech processing
Speech processing (speech signal processing) studies the speech production process, the statistical properties of speech signals, automatic speech recognition, machine synthesis of speech, speech perception, and various other processing techniques.
2. Speech information parameters
Linguistic information is carried mainly in the parameters of the speech signal, so extracting those parameters accurately and quickly is the key to speech signal processing.
Commonly used speech signal parameters include formant amplitude, frequency, and bandwidth; pitch; and voiced/unvoiced discrimination. Later, parameters such as linear prediction coefficients, vocal tract reflection coefficients, and cepstral coefficients were proposed.
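As an illustration of one of these parameters, pitch can be estimated from a frame of voiced speech with the autocorrelation method; the sample rate and the synthetic test signal below are assumptions made for the sketch.

```python
import numpy as np

FS = 16000                                 # assumed sample rate (Hz)
F0 = 200                                   # true pitch of the test tone (Hz)

t = np.arange(640) / FS                    # one 40 ms analysis frame
# Harmonic-rich stand-in for a voiced speech frame (fundamental + harmonics).
x = sum(np.sin(2 * np.pi * F0 * k * t) / k for k in range(1, 5))

# Autocorrelation: a periodic signal correlates strongly with itself
# when shifted by exactly one pitch period.
r = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. len(x)-1

# Search lags corresponding to 50-400 Hz for the strongest peak.
lag_lo, lag_hi = FS // 400, FS // 50
period = lag_lo + np.argmax(r[lag_lo:lag_hi])
print(FS / period)                         # estimated pitch, close to 200 Hz
```

The same frame-level analysis, swapped for an LPC or cepstral computation, yields the other parameter families mentioned above.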
These parameters reflect only some average characteristics of the articulation process, whereas actual speech production changes quite rapidly and must be described as a non-stationary random process. After the 1980s, therefore, non-stationary parameter analysis methods for speech signals developed rapidly: a whole set of fast algorithms was proposed, along with new algorithms that derive the parameters by statistical analysis of the signal under optimization criteria, and these achieved very good results.