SIREN paper

Category: ML

Author: Nipun Batra

Published: April 27, 2023

Introduction

In this post, I'm noting some observations from the SIREN paper (Sitzmann et al., 2020), which proposes networks with sinusoidal activation functions for implicit neural representations.

These observations are based on some quick experiments with the authors' fork of the TensorFlow Playground, linked here.

OOD

ReLU does better in OOD regions: a ReLU network is piecewise linear, so outside the training domain it extrapolates a roughly linear trend.

Sine activations do "badly" in OOD regions: the periodic activations keep oscillating outside the training domain, so predictions there look arbitrary.
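
Here is a minimal way to probe this in PyTorch (a toy setup of my own, not the playground experiment): fit y = sin(4x) on x in [-1, 1] with both activations, then measure error on x in [1, 3]. The target function, network sizes, and training budget are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class Sine(nn.Module):
    # Plain sin activation, used in place of ReLU.
    def forward(self, x):
        return torch.sin(x)

def make_mlp(act):
    return nn.Sequential(nn.Linear(1, 64), act(),
                         nn.Linear(64, 64), act(),
                         nn.Linear(64, 1))

x_train = torch.linspace(-1, 1, 256).unsqueeze(1)
y_train = torch.sin(4 * x_train)
x_ood = torch.linspace(1, 3, 256).unsqueeze(1)  # outside the training range
y_ood = torch.sin(4 * x_ood)

for name, act in [("relu", nn.ReLU), ("sine", Sine)]:
    net = make_mlp(act)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = ((net(x_train) - y_train) ** 2).mean()
        loss.backward()
        opt.step()
    ood_mse = ((net(x_ood) - y_ood) ** 2).mean().item()
    print(f"{name}: train MSE {loss.item():.4f}, OOD MSE {ood_mse:.4f}")
```

Plotting net(x_ood) makes the contrast visible: the ReLU net extrapolates roughly linearly, while the sine net keeps oscillating with no relation to the target.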

Ability to learn with simple networks

A simple (small) network with ReLU activations is unable to learn the target function.

The same simple network with sine activations learns it well.
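
A sketch of this effect with a deliberately tiny network: one hidden layer of 8 units on the target sin(20x); both choices are my assumptions, not the playground settings. Following the paper, the sine variant scales the first pre-activation by w0 = 30.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class Sine(nn.Module):
    # sin(w0 * x) activation; the SIREN paper uses w0 = 30.
    def __init__(self, w0=1.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

x = torch.linspace(-1, 1, 512).unsqueeze(1)
y = torch.sin(20 * x)  # high-frequency target

for name, act in [("relu", nn.ReLU()), ("sine", Sine(w0=30.0))]:
    net = nn.Sequential(nn.Linear(1, 8), act, nn.Linear(8, 1))  # tiny network
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(3000):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    print(f"{name}: final train MSE {loss.item():.4f}")
```

With only 8 ReLU units the fit is piecewise linear with at most a handful of kinks, which cannot track several full oscillations; a single sine unit, in contrast, can match the target frequency directly.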

Ability to fit complex functions

On very complicated datasets, a ReLU network is not able to drive the training loss low.

On the same datasets, a sine network drives both the training and test loss very low!
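
Part of why SIREN fits complex signals well is its initialization scheme. Below is a sketch of a sine layer with the scheme as I understand it from the paper: w0 = 30, hidden-layer weights drawn from U(-sqrt(6/fan_in)/w0, sqrt(6/fan_in)/w0), and first-layer weights from U(-1/fan_in, 1/fan_in). Biases are left at the PyTorch default here, which is a simplification.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # Linear layer followed by sin(w0 * .), with SIREN-style weight init.
    def __init__(self, in_features, out_features, w0=30.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / w0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

# A small SIREN mapping 2-D coordinates to one output value.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 1),
)
```

The scaled-down initialization of the hidden layers compensates for the w0 factor inside the sine, which keeps activations well-behaved through depth and is what lets deep sine networks train at all.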

Fitting “speed”

ReLU networks fit slowly: the loss comes down gradually over many iterations.

Sine networks fit fast: they reach low loss in far fewer iterations.
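
A sketch for eyeballing the difference (hyperparameters are my assumptions): train both networks on the same target and log the MSE every few hundred steps.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class Sine(nn.Module):
    def __init__(self, w0=1.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

x = torch.linspace(-1, 1, 1024).unsqueeze(1)
y = torch.sin(12 * x) + 0.5 * torch.sin(31 * x)  # a "complicated" 1-D target

def run(name, net):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(1, 2001):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
        if step % 400 == 0:
            print(f"{name} step {step:4d}: MSE {loss.item():.5f}")

run("relu", nn.Sequential(nn.Linear(1, 128), nn.ReLU(),
                          nn.Linear(128, 128), nn.ReLU(),
                          nn.Linear(128, 1)))
run("sine", nn.Sequential(nn.Linear(1, 128), Sine(w0=30.0),
                          nn.Linear(128, 128), Sine(),
                          nn.Linear(128, 1)))
```

Comparing the printed losses at matching step counts gives a rough picture of each network's convergence speed.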