Which network architectures can we use with the current version of TF Lite for Micro?

In all the TinyML course examples, we used either fully connected layers or a tiny convolutional network. What other network architectures can we use with the current version of TF Lite for Micro?

One of the Colabs lists these: single_fc, low_latency_conv, low_latency_svdf, tiny_embedding_conv.
But maybe there are more.
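To get a feel for why these architectures suit microcontrollers, here is a rough parameter-count sketch for two of them. The 49×40 spectrogram input and 4 labels match the course's keyword-spotting Colabs; the conv filter shapes below are illustrative assumptions, not the exact values from the TensorFlow source.

```python
# Back-of-the-envelope parameter counts for two micro_speech-style
# architectures. Input shape and label count follow the course Colabs;
# the conv layer shapes are assumed for illustration.

INPUT_H, INPUT_W = 49, 40   # spectrogram: 49 time slices x 40 features
NUM_LABELS = 4              # e.g. "yes", "no", silence, unknown

# single_fc: flatten the spectrogram into one dense layer.
single_fc_params = (INPUT_H * INPUT_W) * NUM_LABELS + NUM_LABELS

# tiny conv (assumed shapes): one 10x8 conv with 8 filters, stride 2x2
# and "same" padding, followed by a dense layer over the feature map.
FILTER_H, FILTER_W, FILTERS = 10, 8, 8
conv_params = FILTER_H * FILTER_W * FILTERS + FILTERS
out_h = (INPUT_H + 1) // 2  # output height after stride-2 "same" conv
out_w = (INPUT_W + 1) // 2  # output width after stride-2 "same" conv
fc_params = out_h * out_w * FILTERS * NUM_LABELS + NUM_LABELS
tiny_conv_params = conv_params + fc_params

print(single_fc_params)  # 7844
print(tiny_conv_params)  # 16652
```

Even the conv variant is on the order of tens of thousands of weights, which (with 8-bit quantization) fits comfortably in the tens of kilobytes of flash typical of the target microcontrollers.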

The reason we have stayed with fully connected and conv models is that those are the ops that are well supported. There are no RNN or LSTM ops supported yet, though that is in the works.

Thanks VJ. It appears that, for now, mastering DNNs and CNNs should be enough to explore the current version of TF Lite Micro.

@ramkoppu There was a student in my tinyML class at Harvard who built an LSTM to recognize the keywords.

Check out Sequence-to-Sequence Models on Microcontrollers using TFLite Micro here:

Thanks @vjreddi
Nice to see that your students are also working on Federated Learning for tiny devices. I published a white paper on this topic at the beginning of this year.

Awesome, feel free to share it here :slight_smile:

The white paper was submitted to one of the government AI challenge competitions. Because of the confidentiality of that challenge, I can’t share it here. But all I can say is that it is an interesting topic to explore and to build products with.

What is the current roadmap for RNN or LSTM ops support?