Sensor fusion or not - is two better than one?

I wasn’t sure whether to post this here or in the projects section, but it’s not a project yet, more of a theoretical musing. I wonder what you all think, or whether some of you have experience with this already.

Imagine a TinyML device that is supposed to monitor the operating conditions of some operator-controlled machine and classify them as low/medium/high wear conditions. We know that high wear can come from running the machine at higher speeds, but also from running it unevenly, with rapid changes in speed. So the model should keep an eye on both things: the speed itself, and also whether the transitions are jerky or smooth.

Here’s the question: would it be better to try and get all this information from one accelerometer, or use two of them, to have separate features for the two phenomena we’re trying to monitor?

Actually, as soon as I wrote this, I realized that both sensors would generate the same output, so instead of two sensors it would maybe need two models, which then have their outputs combined, sort of like how random forests work.

What do you all think?

This is entirely theoretical on my part, so take it with a grain of salt, but I have to believe that using two is better than one. In fact, I would suggest using two independent devices running the two different models and combining their outputs one step up from the devices in some sort of a cascade. This sidesteps the latency issues one might run into using a bigger model for the two features, or using a multi-tenancy approach.
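
To make the cascade a bit more concrete, here’s a minimal sketch of what the upstream combiner could look like, assuming each device reports a single wear score between 0 and 1 for the feature it watches. The function name, score scale, and thresholds here are all made up for illustration:

```python
# Hypothetical gateway-side combiner: one device scores speed-related wear,
# the other scores rough/jerky operation, and the node one step up fuses them.

def combine_wear_scores(speed_score: float, smoothness_score: float) -> str:
    """Fuse two per-device scores (0.0 = fine, 1.0 = worst) into one label."""
    worst = max(speed_score, smoothness_score)  # either failure mode alone is enough
    if worst < 0.3:
        return "low"
    if worst < 0.7:
        return "medium"
    return "high"

# e.g. a fast but steady machine: speed device worried, smoothness device happy
print(combine_wear_scores(0.8, 0.1))  # -> "high"
```

Taking the max rather than the average is a deliberate choice here: either failure mode on its own should be enough to flag high wear.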

I have not seen a lot of discussion of multi-device monitoring (I believe some is coming up in the next EdX course), but since one of the selling points of TinyML is that the devices are inexpensive, it could very well be worth the trouble to set up.


Interesting!

Latency wouldn’t be a huge issue; we’re talking about a simple sort of visual feedback (like a smiley or frowning face on a display), nothing critical.
I like the idea of separating the models, but it almost sounds like having three MCUs, the third one being in charge of controlling the other two. :wink: Well, maybe not that complicated. Maybe a dual-core MCU like the new RP2040 could be an interesting choice for such an application.

I guess it would need a good balance between performance and cost, and the cost would definitely go up with multiple MCUs.

Thanks for your input!

:+1:

A problem with multiple MCUs is also the cost of managing these devices; in fact, just this morning I was putting this material into the Course 4 TinyML as a Service lecture! Managing many devices increases cost and complexity, and makes the system more error-prone (reliability issues).

I’ve given it some more thought since I last posted, and I came to the following conclusion: instead of trying to do everything at once, or with two models/sensors/MCUs, maybe it could be much simpler.

If one of the features I’m monitoring is simply the machine speed, measured as the frequency of vibrations, I can get that with conventional Digital Signal Processing methods and average it over time. That’s my first input.
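
In rough Python, the DSP part could be as simple as a windowed FFT peak. The sample rate and the idea that one axis of the accelerometer is enough are assumptions here:

```python
# A minimal sketch of the DSP half, assuming a single-axis accelerometer
# signal sampled at a known, fixed rate.
import numpy as np

FS = 1000.0  # sample rate in Hz (assumed)

def dominant_frequency(window: np.ndarray) -> float:
    """Estimate machine speed as the strongest vibration frequency in a window."""
    window = window - np.mean(window)                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    return float(freqs[np.argmax(spectrum)])               # peak bin ~ machine speed

# Averaging dominant_frequency() over several windows gives the first input.
```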

Meanwhile, the TinyML model does anomaly detection, telling me whether the machine is running smoothly or roughly, i.e. low vs. high amplitude and/or frequency of the speed changes. That’s the second input.
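
Just to illustrate the shape of that second input, here’s a crude statistical stand-in for the anomaly score. In the real system this number would come from the model (e.g. an autoencoder’s reconstruction error); this is only a placeholder to show how it plugs into the rest:

```python
# Stand-in for the anomaly detector's output: score roughness as the relative
# spread of recent speed estimates. Not the actual TinyML model, just a proxy.
import numpy as np

def roughness_score(speed_history: np.ndarray) -> float:
    """Return 0.0 for a perfectly steady speed, rising with jerky operation."""
    if len(speed_history) < 2:
        return 0.0
    jumps = np.abs(np.diff(speed_history))                 # window-to-window changes
    return float(np.mean(jumps) / (np.mean(speed_history) + 1e-9))
```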

Both inputs combined give me a simple indicator of how the machine is doing. Of course this wouldn’t work for more complex scenarios where we don’t really know the output, but in a case like the one described it could simplify things a lot, and allow us to use a cheaper MCU and just one sensor, something like the rough sketch below. Does that make sense, pun not intended?
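
A sketch of that combination step; the thresholds are invented and would have to come from data on the real machine:

```python
# Tying the two inputs together into the simple display indicator.

def wear_indicator(avg_speed_hz: float, roughness: float) -> str:
    """Map average speed and roughness score to low/medium/high wear."""
    high_speed = avg_speed_hz > 50.0     # assumed "way too fast" threshold
    medium_speed = avg_speed_hz > 30.0   # assumed "getting fast" threshold
    rough = roughness > 0.2              # assumed roughness threshold
    if high_speed or (medium_speed and rough):
        return "high"
    if medium_speed or rough:
        return "medium"
    return "low"

print(wear_indicator(35.0, 0.05))  # -> "medium": a bit fast, but running smoothly
```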

This makes a lot of sense. As much as I like to throw complexity at a problem, it usually works better when I can figure out how to make it simpler. I think you are on the right track.


Yeah, I agree as well. Furthermore, it makes tuning the models/tasks much easier. With multimodal execution, it is hard to know where issues stem from when you are dealing with more than one input.
