A feature of my HooVorNet is its minimal use of dynamic memory. Take a look at my general architecture for HooVorNet below. The input is 256 x 256 x 3 and the output is 16 x 16 x 768. If you carry through those multiplications you will find that 256 x 256 x 3 = 16 x 16 x 768 = 196,608 elements, which means the output uses exactly the same memory as the input. And if you look closely, you will see that the largest allocation in the pipeline is the output of the first feature generator, which is only 150% of the input memory; HooVorNet doesn't employ any bottlenecks or hidden memory expansions. This peak pipeline memory for HooVorNet is 75% of the peak memory used by MobileNetV2. The parameter memory is quite small at around 4700% of the size of the input image, though not as small as some implementations of MobileNetV2. This means that the static parameter memory of my HooVorNet is larger, but the dynamic pipeline memory use can be smaller, which I believe means that more HooVorNet pipelines could be processed simultaneously in parallel on hardware than could be with MobileNetV2 pipelines.
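The memory arithmetic above can be checked with a few lines of Python. This is just a sketch of the element counts stated in the text; the 150% peak and 4700% parameter figures are taken from the post, and any conversion to bytes (e.g. assuming 4-byte float32 values) is my own assumption, not part of the architecture description.

```python
# Element counts for HooVorNet's input and output tensors (from the post).
input_elems = 256 * 256 * 3    # input: 256 x 256 x 3
output_elems = 16 * 16 * 768   # output: 16 x 16 x 768

# Both work out to 196,608 elements, so input and output occupy
# the same amount of memory.
assert input_elems == output_elems == 196_608

# Largest intermediate allocation: the first feature generator's
# output, stated as 150% of the input memory.
peak_elems = int(input_elems * 1.5)    # 294,912 elements

# Parameter memory, stated as roughly 4700% of the input image size.
param_elems = input_elems * 47         # 9,240,576 elements

# Assumed byte sizes at float32 (4 bytes/element) -- my assumption only.
bytes_per_elem = 4
print(f"input/output: {input_elems * bytes_per_elem / 1e6:.2f} MB")
print(f"pipeline peak: {peak_elems * bytes_per_elem / 1e6:.2f} MB")
print(f"parameters:   {param_elems * bytes_per_elem / 1e6:.2f} MB")
```

At float32 this works out to roughly 0.79 MB for the input/output, about 1.18 MB at the pipeline's peak, and about 37 MB of parameters, which illustrates the trade-off described above: a larger static footprint but a small per-pipeline dynamic footprint.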