Fairly short window of opportunity
LLMs are very much flavour of the day, in much the same way as everyone talks about the latest fantastic bridge project (the press around the building of the Clifton Suspension Bridge was vast and colourful[1]). Meanwhile, many more people are being regularly served by the hundreds (thousands?) of smaller footbridges and unglamorous overpasses put up over the same timescale. Similarly, for each honking great LLM there are numerically far more tasks that can be tackled with smaller ML models.[2]
If d-Matrix can avoid getting sucked wholly into the vision of selling ever-larger systems for a small number of big sales, before being overrun by the likes of nVidia, there should be a healthy long-term market for decent model-running chips that can go into smaller products and sell in larger numbers. Some alternatives/competition down there would be a Good Thing, especially if it brings down prices and becomes well known outside specialist circles. Okay, they are still up against nVidia (Jetson), but that sits at the pricey end, leaving plenty of room below, where there is, what, the K210, anything else?[3]
[1] actually, mostly b&w, 'cos that's all the newspapers could manage back then - look, figure of speech, ok!
[2] not really an amazing observation: so long as it is possible to build a smaller working X, you will see more mini-X than maxi-X in use; consider JCBs and the cute little mini-diggers.
[3] genuine question: do you know of other devices in that range or lower that help run your smaller ML models? Actually available as hardware, not just drop-in IP for your next fab run?
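
For anyone wondering what "running your smaller ML models" amounts to on that class of kit, here is a minimal Python sketch using the tflite_runtime package (the slimmed-down TensorFlow Lite interpreter that runs on small Linux boards). The model file name and the idea that it is an int8 person-detection classifier are placeholders for illustration, not a recommendation of any particular model or board.

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load a pre-quantised classifier; "person_detect_int8.tflite" is a
# made-up file name standing in for whatever small model you deploy.
interpreter = Interpreter(model_path="person_detect_int8.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in input frame matching the model's expected shape and dtype;
# on a real board this would come from a camera or other sensor.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print("class scores:", scores)

The point being that the whole workload is a small interpreter plus a model measured in kilobytes or megabytes rather than the tens of gigabytes an LLM drags around, which is exactly the sort of thing the cheaper end of the market could soak up.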