Commit 8c25911

updated example descriptions
1 parent 413c9da commit 8c25911

File tree

1 file changed: +5 -5 lines changed


README.md

Lines changed: 5 additions & 5 deletions
@@ -10,13 +10,13 @@ This repository explores KANs by porting the KAN Python implementation from [ML
 
 ## Empowering edges
 
-The fundamental innovation of KANs lies in their learnable activation functions on edges. The paper [KAN: Kolmogorov-Arnold Networks](https://arxiv.org/abs/2404.19756) suggests using a linear combination of B-Splines and the SiLU function. Subsequent research also recommends the use of Chebyshev polynomials among others. One key feature of these functions is that their derivatives are well defined and easy to calculate, which is crucial for gradient descent optimization.
+The fundamental innovation of KANs lies in their learnable activation functions on edges. The paper [KAN: Kolmogorov-Arnold Networks](https://arxiv.org/abs/2404.19756) suggests using a linear combination of B-Splines and the SiLU function. Subsequent research also recommends the use of Chebyshev polynomials among others. One key feature of these functions is that their derivatives are well defined and easy to calculate, which is crucial for gradient descent optimization.
 
 | **Basis Functions** | **Derivatives** |
 |--------------------|----------------|
-| **B-Spline** | |
+| **B-Splines & SiLU** | |
 | <img src="imgs/bspline_silu_basis.png" width="300"/> | <img src="imgs/bspline_silu_basis_der.png" width="300"/> |
-| **Chebyshev** | |
+| **Chebyshev Polynomials** | |
 | <img src="imgs/chebyshev_basis.png" width="300"/> | <img src="imgs/chebyshev_basis_der.png" width="300"/> |
 | **Gaussian RBF** | |
 | <img src="imgs/gaussian_rbf.png" width="300"/> | <img src="imgs/gaussian_rbf_der.png" width="300"/> |
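
As a rough illustration of why these bases suit gradient descent, the sketch below evaluates the SiLU, Chebyshev, and Gaussian RBF bases from the table together with their closed-form derivatives. It is plain NumPy rather than the repository's Mojo code, the function names are illustrative, and the B-spline basis (normally evaluated with the Cox-de Boor recursion) is omitted for brevity.

```python
import numpy as np

def silu(x):
    """SiLU (swish): x * sigmoid(x)."""
    s = 1.0 / (1.0 + np.exp(-x))
    return x * s

def silu_derivative(x):
    """d/dx [x * sigmoid(x)] = sigmoid(x) * (1 + x * (1 - sigmoid(x)))."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 + x * (1.0 - s))

def chebyshev_basis(x, degree):
    """T_0..T_degree via the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), for x in [-1, 1]."""
    T = [np.ones_like(x), x]
    for n in range(1, degree):
        T.append(2.0 * x * T[n] - T[n - 1])
    return np.stack(T[:degree + 1])

def chebyshev_basis_derivative(x, degree):
    """T_n'(x) = n * U_{n-1}(x), with U_n the Chebyshev polynomials of the second kind."""
    U = [np.ones_like(x), 2.0 * x]
    for n in range(1, degree):
        U.append(2.0 * x * U[n] - U[n - 1])
    dT = [np.zeros_like(x)] + [n * U[n - 1] for n in range(1, degree + 1)]
    return np.stack(dT)

def gaussian_rbf(x, centers, width):
    """exp(-((x - c) / width)^2) for a 1D array x and each center c."""
    d = (x[None, :] - centers[:, None]) / width
    return np.exp(-d**2)

def gaussian_rbf_derivative(x, centers, width):
    """d/dx exp(-((x - c) / width)^2) = -2 * (x - c) / width^2 * exp(-((x - c) / width)^2)."""
    d = (x[None, :] - centers[:, None]) / width
    return -2.0 * d / width * np.exp(-d**2)
```

Every value comes from a short recurrence or a closed-form expression, so backpropagating through such an edge is no harder than through a fixed activation function.
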
@@ -29,7 +29,7 @@ The [ML without tears](https://mlwithouttears.com/2024/05/15/a-from-scratch-impl
 
 ### 1D regression problem
 
-Refer to [train_1d.mojo](train_1d.mojo) for a simple 1D regression problem. This example compares the performance of a classical MLP with two KAN networks: one utilizing B-Spline-based edges and the other using Chebyshev polynomial-based edges.
+Refer to [train_1d.mojo](train_1d.mojo) for a simple 1D regression problem. This example compares the performance of a classical MLP with three KAN networks: one utilizing B-Spline-based edges, another using Chebyshev polynomial-based edges, and the third employing Gaussian RBF-based edges.
 
 <img src="imgs/train_1d.png" width="600"/>
 
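
For intuition about what these networks compute, here is a minimal, hypothetical NumPy sketch of a single KAN layer with Chebyshev-polynomial edges. It is not the code in [train_1d.mojo](train_1d.mojo); the coefficient layout, the array shapes, and the assumption that inputs are pre-scaled to [-1, 1] are illustrative choices.

```python
import numpy as np

def chebyshev_kan_layer(x, coeffs):
    """
    One KAN layer with Chebyshev-polynomial edges (illustrative shapes).

    x:      (batch, in_dim) inputs, assumed already scaled to [-1, 1]
    coeffs: (in_dim, out_dim, degree + 1) learnable coefficients, i.e. one
            polynomial per edge connecting input i to output j

    Each edge applies its own learned 1D function
        phi_ij(x_i) = sum_k coeffs[i, j, k] * T_k(x_i),
    and each output unit sums its incoming edges; there is no weight matrix
    and no fixed node nonlinearity as in an MLP.
    """
    in_dim, out_dim, num_basis = coeffs.shape
    # Chebyshev basis T_0..T_{degree} for every input value, via the recurrence.
    T = [np.ones_like(x), x]
    for k in range(1, num_basis - 1):
        T.append(2.0 * x * T[k] - T[k - 1])
    basis = np.stack(T[:num_basis], axis=-1)      # (batch, in_dim, num_basis)
    # y[b, j] = sum_i sum_k basis[b, i, k] * coeffs[i, j, k]
    return np.einsum("bik,ijk->bj", basis, coeffs)
```

Each edge learns its own one-dimensional function and each output node simply sums its incoming edges; stacking such layers and fitting the coefficients by gradient descent is what the regression examples compare against a classical MLP.
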
@@ -39,7 +39,7 @@ Performance:
 
 ### 2D regression problem
 
-[train_2d.mojo](train_2d.mojo) implements a 2D regression problem. We compare again the performance of a classical MLP with two KAN networks: B-Spline-based and Chebyshev polynomial-based edges.
+[train_2d.mojo](train_2d.mojo) implements a 2D regression problem. We again compare the performance of a classical MLP with three KAN networks: B-Spline-based, Chebyshev polynomial-based, and Gaussian RBF-based edges.
 
 <img src="imgs/train_2d.png" width="600"/>
 