Having worked for a number of companies that have implemented DPUs - offload cards, as we used to call them - they nearly always ended up being more trouble than they were worth. Aside from the very real "new batch of implementation flaws" issue, the problem usually boiled down to keeping connection state on the card. This is a nightmare when you want to load balance, fail over, etc., because you have to find a way to move that state between cards - which you end up doing via the main CPU anyway, defeating much of the point of the offload.
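To make the failover problem concrete, here is a minimal sketch (all names and structures invented for illustration, not any real DPU API): each card holds a per-flow connection table, so when a card fails, the host CPU has to pull the whole table off the dying card and replay it onto a standby - exactly the data path the offload was supposed to bypass.

```python
# Hypothetical model of per-card connection state and CPU-mediated failover.
# Nothing here corresponds to a real DPU SDK; it only illustrates the shape
# of the problem described above.

class OffloadCard:
    def __init__(self, name):
        self.name = name
        # (src_ip, dst_ip, sport, dport) -> per-connection state
        self.flows = {}

    def track(self, flow, state):
        """Card starts tracking a connection locally."""
        self.flows[flow] = state

    def dump_state(self):
        """Host CPU pulls the entire connection table off the card."""
        return dict(self.flows)

    def load_state(self, flows):
        """Standby card accepts the migrated connections."""
        self.flows.update(flows)


def fail_over(active, standby):
    # The host CPU mediates the transfer: serialize, copy, reload.
    # With thousands of flows this is neither fast nor transparent.
    migrated = active.dump_state()
    standby.load_state(migrated)
    return len(migrated)


primary = OffloadCard("dpu0")
backup = OffloadCard("dpu1")
primary.track(("10.0.0.1", "10.0.0.2", 49152, 443), {"seq": 1000, "ack": 2000})
primary.track(("10.0.0.3", "10.0.0.2", 49153, 443), {"seq": 5000, "ack": 6000})

moved = fail_over(primary, backup)
print(f"migrated {moved} flows to {backup.name}")  # migrated 2 flows to dpu1
```

In a real deployment the state is far richer (TLS contexts, NAT mappings, crypto offload sessions), and every extra field makes the migration path slower and more fragile.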
If you are Amazon/Microsoft/Google you can work around these problems with tightly controlled configurations, and if you are Joe Schmo running a single server in an office you probably aren't pushing any limits. For everybody else, you are opening the door to a world of difficult-to-diagnose network issues for the sake of a few percent more free CPU cycles.
Unusually, I find myself agreeing with Gartner: "Applicable to less than 1%."