Miscellaneous new CMSIS-DSP functions
The functions listed in this section of the guide are useful if you are building more complex classical ML algorithms.
CMSIS-DSP introduces two new functions that are useful if you are computing weighted sums of points or scalars.
- arm_barycenter_f32
- Is a utility function that computes the barycenter of some weighted points.
- arm_weighted_sum_f32
- Works with scalars and computes the weighted sum of those scalars.
The following code describes the API of those functions:
void arm_barycenter_f32(const float32_t *in, const float32_t *weights, float32_t *out, uint32_t nbVectors, uint32_t vecDim);
float32_t arm_weighted_sum_f32(const float32_t *in, const float32_t *weigths, uint32_t blockSize);
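As an illustration, the following minimal sketch computes the barycenter of three 2-D points and a weighted sum of three scalars. The sample points, scalars, and weights are arbitrary values chosen only to show the calling convention, and the sketch assumes that the CMSIS-DSP headers are available:
#include "arm_math.h"
#include <stdio.h>

int main(void)
{
    /* Three 2-D points stored one after the other: (0,0), (1,0) and (0,1) */
    const float32_t points[3 * 2] = { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f };
    const float32_t weights[3]    = { 1.0f, 2.0f, 1.0f };
    const float32_t scalars[3]    = { 10.0f, 20.0f, 30.0f };
    float32_t barycenter[2];
    float32_t sum;

    /* Barycenter of the weighted points: (0.5, 0.25) with these values */
    arm_barycenter_f32(points, weights, barycenter, 3, 2);

    /* Weighted sum of the scalars, using the same weights */
    sum = arm_weighted_sum_f32(scalars, weights, 3);

    printf("barycenter = (%f, %f), weighted sum = %f\n",
           barycenter[0], barycenter[1], sum);
    return 0;
}
Note that the points are stored contiguously, vector after vector, which is why the number of vectors and the vector dimension are passed as separate arguments.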
CMSIS-DSP introduces two new functions that are related to entropy.
- arm_entropy_f32
- Computes the entropy of a probability distribution pSrcA.
- arm_kullback_leibler_f32
- Computes the Kullback-Leibler divergence between two probability distributions.
The following code describes the API of those functions:
float32_t arm_entropy_f32(const float32_t * pSrcA, uint32_t blockSize);
float32_t arm_kullback_leibler_f32(const float32_t * pSrcA, const float32_t * pSrcB, uint32_t blockSize);
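As a minimal sketch, assuming two small example distributions that each sum to one, you could call these functions like this:
#include "arm_math.h"
#include <stdio.h>

int main(void)
{
    /* Two example probability distributions, each summing to 1 */
    const float32_t pA[4] = { 0.25f, 0.25f, 0.25f, 0.25f };
    const float32_t pB[4] = { 0.40f, 0.30f, 0.20f, 0.10f };

    /* Entropy of pA: -sum(p * log(p)) */
    float32_t entropy = arm_entropy_f32(pA, 4);

    /* Kullback-Leibler divergence between pA and pB */
    float32_t divergence = arm_kullback_leibler_f32(pA, pB, 4);

    printf("entropy = %f, KL divergence = %f\n", entropy, divergence);
    return 0;
}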
When working with Gaussian probabilities, rounding issues can become a problem because the dynamic range of the values can be large. To avoid this, you can work with the log of the values instead.
- arm_logsumexp_f32
- Computes the sum of probabilities that are represented by their logs. The sum is computed in a way that limits accuracy issues.
- arm_logsumexp_dot_prod_f32
- Computes the dot product when the values are represented by their log.
When working with conditional probabilities that are represented as tables, you often need to compute dot products between the rows and columns of those matrices. If the probabilities are represented by their log values, you must use a function like arm_logsumexp_dot_prod_f32.
The following code describes the API of those functions:
float32_t arm_logsumexp_f32(const float32_t *in, uint32_t blockSize);
float32_t arm_logsumexp_dot_prod_f32(const float32_t * pSrcA, const float32_t * pSrcB, uint32_t blockSize, float32_t *pTmpBuffer);
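As an illustration, the following sketch converts two small distributions to the log domain and then uses both functions. The sample distributions are arbitrary, and the temporary buffer passed to arm_logsumexp_dot_prod_f32 is assumed here to need blockSize elements:
#include "arm_math.h"
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Two small probability distributions */
    const float32_t p[3] = { 0.5f, 0.3f, 0.2f };
    const float32_t q[3] = { 0.2f, 0.3f, 0.5f };
    float32_t logP[3], logQ[3], tmp[3];

    /* Convert the values to the log domain */
    for (int i = 0; i < 3; i++)
    {
        logP[i] = logf(p[i]);
        logQ[i] = logf(q[i]);
    }

    /* log(p[0] + p[1] + p[2]), computed from the log values.
       Because p sums to 1, the result should be close to 0. */
    float32_t logSum = arm_logsumexp_f32(logP, 3);

    /* Log of the dot product of p and q, computed from the log values */
    float32_t logDot = arm_logsumexp_dot_prod_f32(logP, logQ, 3, tmp);

    printf("logsumexp = %f, log dot product = %f\n", logSum, logDot);
    return 0;
}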