This is quite an interesting development. Basically, it means that an open source project team cannot guarantee that it is able to quietly fix security bugs without giving the game away that that's what they're doing. I suspect that as time passes this kind of data mining will only improve, to the point where it becomes difficult for work on critical fixes to be carried out on public-facing repos without highlighting that a fix is underway.
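To make the idea concrete, here is a minimal sketch of the kind of mining involved. The path list, off-hours window, and commit data are all illustrative assumptions, not a real tool: the point is just that commit timestamps plus touched paths already leak a lot.

```python
# Hypothetical sketch: flag off-hours commits touching security-sensitive
# paths as a possible "quiet fix in progress" signal. Thresholds and the
# path list are assumptions for illustration only.
from datetime import datetime

SENSITIVE_PATHS = ("crypto/", "auth/", "net/tls")  # assumed hot spots


def is_off_hours(ts: datetime) -> bool:
    """Commits between 22:00 and 06:00 count as off-hours."""
    return ts.hour >= 22 or ts.hour < 6


def suspicious_commits(commits):
    """commits: list of (timestamp, [changed paths]) tuples."""
    return [
        (ts, paths)
        for ts, paths in commits
        if is_off_hours(ts)
        and any(p.startswith(s) for p in paths for s in SENSITIVE_PATHS)
    ]


# Synthetic history standing in for `git log --name-only` output.
history = [
    (datetime(2023, 5, 1, 14, 30), ["docs/README.md"]),
    (datetime(2023, 5, 2, 23, 45), ["crypto/rsa.c", "crypto/rand.c"]),
    (datetime(2023, 5, 3, 1, 10), ["net/tls_handshake.c"]),
]
print(len(suspicious_commits(history)))  # → 2
```

A real miner would pull the same fields from the public git history and add smarter features (reviewer identity, branch naming, CI churn), but even this crude filter shows why "quiet" fixes are hard to keep quiet.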
The only way of hiding such work then is if there's also a closed source repo hidden away from the public eye. That raises all sorts of challenges, such as who is allowed inside that private bubble, and whether hiding the work breaks the license conditions.
So What About Closed Source?
I wouldn't mind betting that closed source projects have similar "tells". For example: what time of day does a key MS Windows software architect drive home? If they're burning the midnight oil, perhaps there's a reason.
A more elaborate take on that: if you know the network of people inside MS well enough and can observe their behaviour in intimate detail (ahem, Google and Facebook), you might infer that if all the developers associated with someone who talks about Windows networking are working late, Windows has a significant, not-yet-fixed bug in that area.
And where that gets very interesting is if you consider who has such detailed access and data. Facebook, Apple, Google are the obvious ones.
So, how about TikTok, with their permission-grabbing client apps?
The only way of truly defeating this kind of thing is if the "signal" is drowned out by deliberately generated "noise". So, perhaps whole teams get to work late for no other reason than to confuse the situation...
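The noise idea can be sketched just as simply. Assuming (purely for illustration) that an observer only sees a daily activity count, the defence is to pad every day up to a constant total with decoy activity so that a genuine spike no longer stands out:

```python
# Hypothetical sketch: pad each day's real commit count with decoy
# commits so the publicly visible total is flat. The target level is
# an illustrative assumption.
def pad_with_noise(daily_real_commits, target=12):
    """Return (real, decoys) pairs so every day's visible total
    is `target`, hiding any genuine burst of activity."""
    return [(real, max(0, target - real)) for real in daily_real_commits]


week = [2, 3, 11, 1, 0, 4, 2]  # the spike on day 3 would otherwise stand out
print(pad_with_noise(week))
```

Of course, convincing decoys are much harder to manufacture than this suggests (they have to look like plausible commits, not just counts), which is exactly why the countermeasure is so expensive in practice.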