Hello,
I present an example of using Racket in science: computing and plotting the sine function with a neural network.
Here is the plot of the neural sine on [-pi/2, pi/2]:
This is a good approximation, but not completely exact. Here, for comparison, are both the mathematical sine function and the neural sine:
Damien
An interesting feature of DNNs (Deep Neural Networks) is that if we truncate the coefficients of the matrices, they can still keep good accuracy in the result; see this article about that idea.
I can check this feature with Racket by truncating the coefficients of the learned flomat matrices of the sine DNN in this little piece of (Scheme+) code:
{M <- (get-field M r3)} ; get the vector of matrices in the retro-propagation class
(display "Matrix vector M=") (newline)
(display M)
(newline)
{precision <- 1000.0}
(display "precision=") (display precision) (newline)
(define (trunc3 x) ; truncate a number x to log10(precision) decimals
  {round{precision * x} / precision})
(define-pointwise-unary trunc3) ; flomat library feature that creates a unary in-place function .trunc3!
(define (trunc3-matrix mt) ; truncate the coefficients of a matrix in place
  (.trunc3! mt))
; truncate all the transitional matrices of the deep neural network
(for-racket ([mt M])
  (trunc3-matrix mt))
(display "Matrix vector modified M=") (newline)
(display M)
(newline)
(send r3 test Ltest)
(newline)
{Lplot-DL-trunc <- (send r3 DL-data-2D)}
(plot (list (points Lplot-DL-trunc #:sym 'circle1
#:color "green"
#:label "neural sine - matrices with truncated numbers")
(points Lplot-DL-main #:sym 'circle1
#:color "red"
#:label "neural sine")))
So I can check graphically that the accuracy is still good by comparing the positions of the points computed with and without truncated matrix coefficients:
The zooming feature of DrRacket can help with that, showing that the red and green points almost match:
Just a few more words about this. In my opinion, the accuracy is relatively well preserved from matrix to matrix with truncated coefficients because matrix multiplication is composed of row-column dot products, themselves composed of sums and multiplications.
Sums preserve accuracy: the result gets the accuracy of the inputs. Multiplication is less conservative (due to shifts in bits or digits).
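To illustrate this point with a small, hypothetical sketch (the values and the `trunc3` helper here are illustrative, not taken from the trained network): when two numbers are truncated to three decimals, the absolute error of their sum stays on the order of the truncation step, while the absolute error of their product is scaled by the magnitudes of the operands.

```racket
#lang racket
;; Hypothetical sketch: how truncation error propagates through
;; addition versus multiplication.
(define precision 1000.0)
(define (trunc3 x) ; keep 3 decimals
  (/ (round (* precision x)) precision))

(define a 1.23456) ; arbitrary example values
(define b 7.89123)

;; Addition: the error of the sum is the (possibly cancelling) sum of
;; the input errors, so it stays on the order of the truncation step 1e-3.
(define sum-error (abs (- (+ a b) (+ (trunc3 a) (trunc3 b)))))

;; Multiplication: to first order the error is a*err_b + b*err_a, so it
;; is amplified by the magnitude of the operands.
(define mul-error (abs (- (* a b) (* (trunc3 a) (trunc3 b)))))

(printf "sum error: ~a\n" sum-error)
(printf "mul error: ~a\n" mul-error)
```

With these example values the product's error is roughly an order of magnitude larger than the sum's, which is why dot products (sums of many small products) still degrade gracefully rather than catastrophically.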
A final word about the use of this: truncated coefficients use less memory. The context of this work is network communication between the ground and a spacecraft sent to study the solar wind, where the uplink data rate is only 2 kbps and the communication windows last between 15 and 80 minutes. The embedded AI that controls the scientific system is trained on the ground, and the DNN (the matrices) is sent over the uplink to the spacecraft. Low-precision coefficients make the transferred data smaller.
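As a rough back-of-the-envelope sketch (the coefficient count and the 16-bit encoding are assumptions for illustration; only the 2 kbps uplink rate comes from the description above): if each truncated coefficient is serialized as a scaled 16-bit integer instead of a raw 64-bit double, the uplink time shrinks by a factor of four.

```racket
#lang racket
;; Hypothetical estimate of uplink time for a DNN's matrices at 2 kbps.
;; A coefficient truncated to 3 decimals and bounded in magnitude can be
;; sent as a 16-bit scaled integer (value * 1000) instead of an 8-byte double.
(define uplink-bps 2000)       ; 2 kbps uplink, from the mission description
(define n-coefficients 100000) ; assumed total network size, for illustration

(define (uplink-seconds bytes-per-coeff)
  (/ (* n-coefficients bytes-per-coeff 8) uplink-bps))

(printf "as 64-bit doubles : ~a s\n" (exact->inexact (uplink-seconds 8)))
(printf "as 16-bit integers: ~a s\n" (exact->inexact (uplink-seconds 2)))
```

With these assumed numbers, 100,000 doubles need about 53 minutes of uplink, while the 16-bit encoding needs about 13, a meaningful difference when a communication window lasts between 15 and 80 minutes.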
Source of image.