Joseph Van Name @UCIwKbqWH48vHf-QvRqmyx0Q@youtube.com

1.9K subscribers - no pronouns :c

Ph.D. in Mathematics. So far, I am posting animations of the

04:00 · This AI produces an octonion-like algebra by preserving all orthogonal projections.
01:10 · Producing an octonion-like algebra through repeated polar decompositions
04:29 · This AI produces the para-octonions. Here is the spectrum during training.
05:07 · This AI produces the octonions. Here is the spectrum during training.
03:42 · This AI produces an octonion-like algebra. Here is the spectrum during training.
01:20 · Spectra of real and complex LSRDRs of octonion-like algebra during training
01:04 · Fitness levels of real LSRDRs of the octonion-like algebra during training
01:15 · Fitness levels of complex LSRDRs of the octonion-like algebra during training
03:04 · I made a neural network converge near its initialization.
06:00 · Catastrophic and complete forgetting in machine learning visualized
02:00 · The eigenvectors of complex Hermitian matrices vary smoothly.
02:00 · Visualization of the singular value decomposition
00:34 · Iterating a quantum channel related function and producing a unique fixed point.
10:00 · A reversible cellular automaton plays 2 dimensional Bennett's pebble game on a rectangular grid.
10:00 · A reversible cellular automaton plays 2 dimensional Bennett's pebble game on a triangular grid.
23:53 · A 1D reversible cellular automaton plays and finishes Charles Bennett's pebble game.
03:00 · Iteration of a random bilinear mapping from 3 dimensional space to itself.
01:03 · I trained an AI to rediscover an octonion-like algebra
01:19 · An AI model that completely forgets its initialization: Training an approximate quantum measurement
01:55 · My highly regularized AI model memorizes data but has zero memory of its initialization/training.
06:42 · A fitness function for reducing tuples of symmetric matrices to vectors that has few local maxima
02:58 · AI finds a 297 element subset of {1,...,34}x{1,...,34} with no row/column 3 arithmetic progression
03:29 · Spectra of Jacobians of a neural network with orthogonal weight matrices
03:29 · Jacobian matrices of neural network with orthogonal weight matrices
02:47 · Singular values of Jacobians of neural network with orthogonal weight matrices
00:47 · We find a zero of a neural network vector field on a sphere
00:59 · This neural network grew triangular weight matrices while training.
00:48 · This quantum channel became the identity mapping after training.
02:01 · A Markov chain of unimodular -1,0,1 matrices with -1,0,1 inverses
03:50 · Training 5 neural networks to imitate each other
00:42 · I trained a neural network to zero out blocks in its weight matrices
01:57 · Training a neural network to entangle pairs of neurons: Weight matrices during training
05:22 · Evolutionary computation producing new -1,0,1-unimodular matrices from old ones.
00:30 · Evolutionary computation producing a -1,0,1-unimodular matrix
01:50 · Training a quantum channel to approximate a vector-valued function 5 times
00:43 · Training a real completely positive operator to approximate a randomly generated function 10 times.
20:27 · Critically simple Laver-like algebras with 3 generators and cardinality at most 80
17:41 · Superreduced multigenic Laver tables with 3 generators of size at most 80.
26:08 · Critically maximal congruences on superreduced multigenic Laver tables.
01:00 · Gradient descent produces a Steiner triple system with 31 elements
01:28 · Evolutionary computation produces a de Bruijn sequence of length 64.
00:42 · Finding a fixed point of a neural network using gradient descent
52:17 · Spectra of completely positive superoperators produced from multigenic Laver tables
01:15 · A neural network initialized with orthogonal weight matrices apparently has just one local maximum.
01:50 · A randomly initialized neural network apparently has just one local maximum.
02:15 · Gradient descent partitions random graphs into few cliques
02:01 · Gradient descent finds large cliques in random graphs
01:10 · Gradient descent finds a 77 element subset of {1,...,240} with no length 4 arithmetic progression
01:07 · Gradient descent finds a 37 element subset of {1,...,240} with no length 3 arithmetic progression
01:01 · Using gradient ascent to satisfy a Boolean formula while minimizing the number of true variables
05:34 · This neural network quickly forgets what it was previously trained on.
03:13 · I made another array where each row and column has the same number of squares of each color.
02:00 · I made an array where each row and column has the same number of squares of each color.
10:25 · I tried to disprove the existence of a higher infinity and produced these tables instead. Part 3
13:59 · I tried to disprove the existence of a higher infinity and produced these tables instead. Part 2.
20:49 · I tried to disprove the existence of a higher infinity and produced these tables instead.
21:17 · Ways to append an ordinal to a classical Laver table of size at most 128: 2-adic ordering
15:29 · Critically simple Laver-like algebras with 2 generators
09:31 · All 6854 ways to append an ordinal to a classical Laver table of size at most 64: 2-adic ordering
09:31 · All 6854 ways to append an ordinal to a classical Laver table of size at most 64
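Several of the videos above apply gradient methods to algebraic objectives; for instance, one title is "Finding a fixed point of a neural network using gradient descent." A minimal sketch of that idea (this is not the channel's actual code; the architecture, sizes, and hyperparameters here are my own assumptions for illustration) is to minimize the residual g(x) = ||f(x) - x||^2 for a small random network f:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # assumed toy dimension, not from the videos
W1 = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, d))
W2 = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, d))

def f(x):
    # One-hidden-layer tanh network (an illustrative choice).
    return W2 @ np.tanh(W1 @ x)

def loss(x):
    # Squared fixed-point residual ||f(x) - x||^2.
    r = f(x) - x
    return r @ r

x = rng.normal(size=d)
loss0 = loss(x)

for _ in range(2000):
    h = np.tanh(W1 @ x)
    # Jacobian of f at x: W2 diag(1 - tanh^2) W1.
    J = W2 @ np.diag(1.0 - h**2) @ W1
    # Gradient of the residual objective: 2 (J - I)^T (f(x) - x).
    grad = 2.0 * (J - np.eye(d)).T @ (f(x) - x)
    # Backtracking line search so the loss never increases.
    lr = 0.1
    while loss(x - lr * grad) >= loss(x) and lr > 1e-12:
        lr *= 0.5
    if loss(x - lr * grad) < loss(x):
        x = x - lr * grad

print("initial residual:", loss0, "final residual:", loss(x))
```

Gradient descent drives the residual toward a local minimum; it may stall at a nonzero value rather than reach an exact fixed point, which is why the backtracking step only guarantees monotone decrease.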