Paper Summary: Efficient Attention
This paper introduces Efficient Attention, an attention mechanism with linear complexity. Instead of materializing the quadratic query-key dot-product matrix of standard attention, it normalizes queries and keys separately and multiplies the keys with the values first, which reduces both memory usage and computational cost from quadratic to linear in the input size.
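The reordering described above can be sketched in NumPy. This is a minimal single-head illustration, not the authors' implementation: `standard_attention` builds the full n x n matrix, while `efficient_attention` applies softmax to queries (over the feature axis) and keys (over the token axis) separately and computes the small d x d context matrix `K^T V` first.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # O(n^2) memory: materializes the n x n similarity matrix.
    return softmax(Q @ K.T, axis=-1) @ V

def efficient_attention(Q, K, V):
    # Linear in n: softmax over the feature axis of Q and the
    # token axis of K, then compute the d x d matrix K^T V first.
    q = softmax(Q, axis=-1)
    k = softmax(K, axis=0)
    return q @ (k.T @ V)

# Toy shapes (illustrative values, not from the paper):
n, d = 6, 4
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, n, d))
out = efficient_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

The two functions are not numerically identical, but the efficient variant approximates standard attention while never allocating an n x n matrix, which is the source of the memory savings.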
The annotated paper can be found here, and the link to the feature pyramid network (FPN) can be found here.