Inspired by darknet and leaf
Features:
- does not require `std` (only `alloc` for tensor allocations; a bump allocator is fine, so it can be compiled for an stm32f4 board; see the allocator sketch after this list)
- available layers: `Linear`, `ReLu`, `Sigmoid`, `Softmax` (no backward), `Conv2d`, `ZeroPadding2d`, `MaxPool2d`, `AvgPool2d` (no backward), `Flatten`
- available optimizers: `Sgd`, `Adam`, `RMSProp`
- available losses: `CrossEntropy` (no forward), `MeanSquareError`
- available backends: `Native`, `NativeBlas` (no convolution yet)
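A minimal sketch of the kind of bump allocator the first bullet refers to, for a `no_std` target such as an stm32f4. Everything here (the `BumpAllocator` type, `HEAP_SIZE`, the 64 KiB figure) is illustrative and not part of yarnn; a real firmware would also need a `#[panic_handler]`:

```rust
#![no_std]

extern crate alloc;

use core::alloc::{GlobalAlloc, Layout};
use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicUsize, Ordering};

const HEAP_SIZE: usize = 64 * 1024; // sized to fit the MCU's RAM budget

struct BumpAllocator {
    heap: UnsafeCell<[u8; HEAP_SIZE]>,
    next: AtomicUsize, // offset of the first free byte
}

// Safe to share: `next` is atomic and the heap is only handed out
// in disjoint chunks.
unsafe impl Sync for BumpAllocator {}

unsafe impl GlobalAlloc for BumpAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let base = self.heap.get() as *mut u8 as usize;
        let mut offset = self.next.load(Ordering::Relaxed);
        loop {
            // Round the cursor up to the requested alignment.
            let start = (base + offset + layout.align() - 1) & !(layout.align() - 1);
            let end = start - base + layout.size();
            if end > HEAP_SIZE {
                return core::ptr::null_mut(); // out of memory
            }
            match self
                .next
                .compare_exchange(offset, end, Ordering::Relaxed, Ordering::Relaxed)
            {
                Ok(_) => return start as *mut u8,
                Err(current) => offset = current, // raced; retry with new cursor
            }
        }
    }

    // A bump allocator never frees individual blocks; memory comes back
    // only when the whole allocator is reset.
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

#[global_allocator]
static HEAP: BumpAllocator = BumpAllocator {
    heap: UnsafeCell::new([0; HEAP_SIZE]),
    next: AtomicUsize::new(0),
};
```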
Examples:
- running yarnn in the browser using WASM (a hypothetical wrapper sketch follows this list)
- running yarnn on an stm32f4 board
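A hypothetical sketch of what the WASM wrapper might look like: a single inference entry point exported to JavaScript via `wasm-bindgen` and built with `wasm-pack build --target web`. The `predict` name and its placeholder body are assumptions, since this README does not show yarnn's runtime API; consult the actual example for the real calls:

```rust
use wasm_bindgen::prelude::*;

// Exported to JavaScript as `predict(Float32Array) -> Float32Array`.
#[wasm_bindgen]
pub fn predict(pixels: &[f32]) -> Vec<f32> {
    // ... build the network and run a forward pass over `pixels` here ...
    // Placeholder body: return the first 10 values as stand-in class scores.
    pixels.iter().take(10).copied().collect()
}
```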
TODO:
- finish `AvgPool2d` backpropagation
- add `Dropout` layer
- add `BatchNorm` layer
- convolution with BLAS support
- CUDA support
- OpenCL support
- `DepthwiseConv2d` layer
- `Conv3d` layer
- `Deconv2d` layer
- k210 backend
Model definition example:

```rust
use yarnn::model;
use yarnn::layer::*;
use yarnn::layers::*;

model! {
    MnistConvModel (h: u32, w: u32, c: u32) {
        input_shape: (c, h, w),
        layers: {
            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },
            Conv2d<N, B, O> {
                filters: 8
            },
            ReLu<N, B>,
            MaxPool2d<N, B> {
                pool: (2, 2)
            },
            Flatten<N, B>,
            Linear<N, B, O> {
                units: 10
            },
            Sigmoid<N, B>
        }
    }
}
```
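For MNIST this model would be instantiated with `h = 28`, `w = 28`, `c = 1`: each `Conv2d`/`ReLu`/`MaxPool2d` stage downsamples the feature map, `Flatten` turns the last feature map into a vector, and the 10-unit `Linear` layer followed by `Sigmoid` yields one score per digit class. The `N`, `B`, and `O` parameters are presumably the numeric type, backend, and optimizer the model is compiled against (e.g. `Native` or `NativeBlas` from the backends list above).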