A neural network reverts to its original state as we turn L1 regularization on and off.
Here is an animation of the three weight matrices of a neural network during training. We train the network to memorize random data, but as we do so, we toggle L1 regularization on the weights: the network trains with L1 regularization for a number of generations, then trains without regularization, and we repeat this on/off cycle several times.
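
Here is a minimal sketch in Julia with Flux of this kind of on/off training loop. The data sizes, loss, optimizer, regularization coefficient λ, and phase lengths below are my own assumptions for illustration, not the settings used for the animation.

  using Flux

  # Random data for the network to memorize (sizes are assumptions).
  mn = 50
  X = randn(Float32, mn, 256)
  Y = randn(Float32, mn, 256)

  model = Chain((SkipConnection(Dense(mn => mn, atan), +) for _ in 1:3)...)
  opt = Flux.setup(Adam(1f-3), model)

  # Sum of the absolute values of the entries of the three weight matrices.
  l1_penalty(m) = sum(sc -> sum(abs, sc.layers.weight), m.layers)

  for phase in 1:6                       # several on/off cycles
      λ = isodd(phase) ? 1f-4 : 0f0      # toggle L1 each phase (λ is an assumption)
      for step in 1:2000                 # generations per phase (assumption)
          loss, grads = Flux.withgradient(model) do m
              Flux.mse(m(X), Y) + λ * l1_penalty(m)
          end
          Flux.update!(opt, model, grads[1])
      end
  end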

The L1 regularization hampers the network's ability to memorize the data, so we see an increase in the network's loss. The L1 regularization also increases the sparsity of the neural network, and this sparsity causes the entries in the square to appear more red, green, blue, or black.
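
One simple way to quantify this sparsity is the fraction of near-zero entries in a weight matrix; here is a small hypothetical helper (the tolerance is an assumption):

  using Statistics

  # Fraction of entries of W that are numerically zero (tol is an assumed threshold).
  sparsity(W; tol = 1f-3) = mean(abs.(W) .< tol)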

The neural network is initialized to zero. The structure of the neural network is Chain(SkipConnection(Dense(mn, mn, atan), +), SkipConnection(Dense(mn, mn, atan), +), SkipConnection(Dense(mn, mn, atan), +)) where mn = 50. This means that it has an atan activation and a skip connection around each weight matrix. The skip connections allow us to initialize the weight matrices to near zero, and they let us associate the inputs and outputs of each weight matrix so that it makes sense to overlay these weight matrices in the animation.
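
Since Flux's Dense layers are Glorot-initialized by default, starting the weights at zero takes an explicit init function; here is one way to construct such a network (a sketch, written with Flux's Pair constructor syntax):

  using Flux

  mn = 50
  zero_init(dims...) = zeros(Float32, dims...)   # weights start at exactly zero

  model = Chain(
      SkipConnection(Dense(mn => mn, atan; init = zero_init), +),
      SkipConnection(Dense(mn => mn, atan; init = zero_init), +),
      SkipConnection(Dense(mn => mn, atan; init = zero_init), +),
  )

With zero weights, each layer computes atan.(0 .* x) .+ x == x, so the whole network begins as the identity map, and training moves it away from the identity only as far as it needs to.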

The three weight matrices are colored red, green, and blue in the animation. We brighten the animation by taking the square root of the absolute values of all of the entries in these weight matrices.
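
Concretely, the overlay can be produced by placing the three brightened matrices into the red, green, and blue channels of a single image; a sketch with the Images.jl ecosystem (the clamp to [0, 1] is my addition to keep values in the displayable range):

  using Images   # provides RGB and clamp01

  # Stand-ins for the three mn×mn weight matrices.
  W1, W2, W3 = (randn(Float32, 50, 50) for _ in 1:3)

  brighten(W) = clamp01.(sqrt.(abs.(W)))   # square-root brightening
  frame = RGB.(brighten(W1), brighten(W2), brighten(W3))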

We observe that after each cycle of regularized and then unregularized training, the neural network reverts nearly to its initial state. This observation shows that L1 regularization (at least for the kind of network that I am training) cannot easily be used to change the structure of a neural network.
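
One way to check this reversion numerically is to snapshot the weight matrices at a reference point and track how far the weights drift from that snapshot during training; a hypothetical helper (the eps guard handles an all-zero snapshot):

  using Flux, LinearAlgebra

  model = Chain((SkipConnection(Dense(50 => 50, atan), +) for _ in 1:3)...)

  # Snapshot of the weight matrices at a reference point in training.
  W0 = [copy(sc.layers.weight) for sc in model.layers]

  # ... train for a while ...

  # Relative distance of each weight matrix from its snapshot;
  # values near zero mean the network has reverted to the snapshot state.
  dists = [norm(sc.layers.weight .- w0) / max(norm(w0), eps(Float32))
           for (sc, w0) in zip(model.layers, W0)]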

The notion of a neural network is not my own; I am simply making this animation to highlight how neural networks behave with respect to AI safety. It is good for AI safety that the neural network is able to recover its original state. The network recovers its original state only under specialized conditions, so the state of a trained neural network is only somewhat special and still contains a lot of randomness. On the other hand, since the network is able to recover its original state, we have a correspondence between networks trained with L1 regularization and networks trained without it. Since L1-regularized networks are sparse, they are considered more interpretable than unregularized networks, and since L1-regularized networks correspond to unregularized networks, the unregularized networks inherit the interpretability of the L1-regularized ones.

This animation shows that neural networks are able to recover their former states in some scenarios. I have made other visualizations that highlight instances in which neural networks are not able to recover their former states.

Unless otherwise stated, all algorithms featured on this channel are my own. You can go to github.com/sponsors/jvanname to support my research on machine learning algorithms. I am also available to consult on the use of safe and interpretable AI for your business. I am designing machine learning algorithms for AI safety such as LSRDRs. In particular, my algorithms are designed to be more predictable and understandable to humans than other machine learning algorithms, and my algorithms can be used to interpret more complex AI systems such as neural networks. With more understandable AI, we can ensure that AI systems will be used responsibly and that we will avoid catastrophic AI scenarios. There is currently nobody else who is working on LSRDRs, so your support will ensure a unique approach to AI safety.