Viva Blocked
Viva Insights is blocked in my tenant and we are doing just fine. No FOMO and no issues. Has anyone found a use for this? Legal declared it a "creepy version of Facebook in a cheap suit".
Microsoft is adding Copilot adoption benchmarks to Viva Insights, a tool that lets managers monitor teams to spot those that are gulping down the AI Kool-Aid fastest. Viva Insights is Microsoft's vaguely creepy monitoring tool, designed to slurp data from employee activities, verifying how their teams stack up against everyone …
Viva Insights (as opposed to Viva Engage, per AC's comment above) is just an analytics suite for Microsoft 365 adoption and use. Not sure why the article characterises the tool as 'creepy', the point is to do exactly what the article later on describes - give managers data-driven insights into how people are actually using Microsoft 365 apps on the ground. It's just an extension of the base analytics that were already available to IT teams centrally anyway. These are actually a pretty valuable tool for monitoring and forecasting licensing needs, and making and reviewing business cases.
We've already been using elements of this to look at Copilot license usage within our own business, and in particular, looking at the stats in conjunction with anecdotal feedback has helped us to tailor our AI usage policy to weed out emerging bad practices and promote the good. E.g. We had one specific user who was using it in Excel a lot more than everyone else, and when we asked what they were using it for, it turned out that they were using it to restructure pivot tables, without then cross-checking the results. Without those stats, we'd never have known that this was happening, and that user would likely have spread the bad practice around as adoption slowly increases. This is especially useful for us as we're about to slam the gates shut on staff using any other AI apps or tools given the risks involved in exposing business data. At least Copilot keeps it all within the tenant so it doesn't increase the attack surface beyond what's already there. If staff must use a chatbot, we'd rather it be one we have a degree of insight and control over.
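The outlier-spotting process described above (use the usage stats to find the one user doing something unusual, then follow up with a conversation) can be sketched in a few lines. This is purely illustrative: the user names, action counts, and threshold are invented, and real Viva Insights data would arrive via its own reports, not a hard-coded dict.

```python
# Hypothetical sketch: flag users whose per-app Copilot usage is a
# statistical outlier versus their peers, as a prompt for a follow-up
# conversation. All numbers here are made up for illustration.
from statistics import mean, stdev

def flag_outliers(usage: dict[str, int], z_threshold: float = 1.5) -> list[str]:
    """Return users whose usage is more than z_threshold SDs above the mean."""
    counts = list(usage.values())
    mu, sigma = mean(counts), stdev(counts)
    # If everyone's usage is identical, sigma is 0 and nobody is flagged.
    return [u for u, n in usage.items() if sigma and (n - mu) / sigma > z_threshold]

excel_copilot_actions = {
    "alice": 12, "bob": 9, "carol": 15, "dave": 11, "erin": 140,
}
print(flag_outliers(excel_copilot_actions))  # erin stands out
```

Note the low threshold: with small teams a single extreme user drags the mean and standard deviation up, so z-scores are bounded well below what you'd expect from large samples.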
A lot of the use cases we've been seeing are genuinely helpful, which is why we're not cancelling the subscriptions. The time savings on meetings alone have been enough to justify the cost, as we've been able to set up reminders with suggested prompts to help draft agendas and discussion points ahead of calls, and then to review and correct the post-meeting summaries the AI drafts from the transcripts. And there's other cases where it's been helpful - finding trends that were being missed in customer services and operations and suggesting metrics to better track these issues, restructuring documentation to be posted on our CMS, and generating human-readable incident reports from system logs. The key thing is that people can't be replaced, all of the output needs to be checked against the inputs to verify and ensure nothing's been missed - but this is still way faster than having to do the full write-ups manually.
The time savings on meetings alone have been enough to justify the cost, as we've been able to set up reminders with suggested prompts to help draft agendas and discussion points ahead of calls, and then to review and correct the post-meeting summaries the AI drafts from the transcripts
So MAYBE these meetings are useless, or the people organizing these meetings are just good at wasting everyone's time?
If a meeting is needed and is managed by competent people, results are obtained without the help of Copilot.
>” suggested prompts to help draft agendas and discussion points ahead of calls”
These are activities that you should be doing before asking an AI: without a reason for a meeting there is no point in arranging one, as you have no idea of its purpose, who should be invited, or whether the meeting should be phone/video/in-person…
This AI is bridging a gap. You're delusional if you think you can just handwave away the value it's adding here. If management says there must be a regular meeting, then there will be one regardless. You also underestimate people's inability to determine in advance what they want to cover and to express directly what they actually want. There is often plenty of talking in circles first, hence some are keen on meetings.
"Management says there must be a regular meeting, then there will be one regardless."
AI isn't going to fix bad management.
Go into the meeting with a single item on the agenda "any other business".
Finish the meeting about 10 seconds later when it is determined that there is none.
>” You also underestimate people's inability to determine in advance what they want to cover and to express what they actually want to directly. There is often plenty of talking in circles first, hence some are keen on meetings.”
Those types of meetings are best held over lunch, round a coffee table or at the pub, i.e. they are informal and without explicit agenda. These meetings serve two purposes: firstly getting everyone familiar with your thinking, and secondly helping you to clarify your thinking, i.e. making sure you haven't accidentally got yourself into a rabbit hole. Now hold a proper meeting with all that need to be involved, and your team members are already up to speed on the issue and able to meaningfully contribute. If your company doesn't permit you to talk with your desk neighbours or leave your desk unless it is to go to a meeting, then simply arrange recurring and ad-hoc meetings in the cafe et al… Yes, that means your "coffee break" may be an hour rather than 10 minutes, but you're not actually taking a coffee break, just attending a meeting at which coffee is available…
Basically, you will achieve more, and more quickly, if you actually talk to your colleagues outside the formality of fully structured and documented meetings.
Dear user. We understand from the feedback your system has been sending us that you are a regular user of our discounted CoPilot trial. Excellent. You won't mind if we now increase your subscription costs by 1000% to pay for it. You don't need to do anything to accommodate this change. It will be seamless. Alternatively you can stop using all of our products next Thursday.
Four months ago I was given a new laptop by our lovely IT department. You are supposed to check that everything you normally use and do works before you take it away with you. I was looked at with increasing incredulity as I first moved the start button over to the left and did other taskbar-related fixing, including killing Copilot. Then I started on Edge and switched off Copilot there, before trying to do the same in Office. The option wasn't there, and someone said you can't do so on our rollout of it. When I said "But what about if I want to", I was cut off mid-sentence by an IT boss who informed me that it was the only LLM we were allowed to use. I said that if I had been allowed to finish my sentence, it would have ended with "turn off and never use AI?" He just said you can't turn it off, so I opened Task Manager and killed the local one that launched with Outlook and the Copilot exe that was running. I then said everything was fine apart from having Copilot on the thing and walked out with my new machine.
Normally you get a feedback survey sent to you from the IT support person you visited. Strangely I didn’t!
This entire section from Scotech describes perfectly one of the major faults with LLM adoption. Manglement are foisting it on staff, but because no one understands how to utilise it, for the most part we're all guinea pigs. The staff, the people or systems whose data is processed via an LLM, the shareholders, society at large: all of us are the guinea pigs, and big tech are the mad scientists.
The spyware is there to enable gatekeeping which is fair enough in corporations, it's their company after all. Make no mistake, the root of the problem comes from "AI" investors looking for a problem to solve and not having a decent product to do that. What they should be doing is figuring out how to use these LLMs and then training staff rather than just throwing caution and their own data to the winds.
The cogent response to all this is for IT departments who want to invest time in AI to do exactly that and then roll it out with training and proper guardrails.
"How to use these LLMs"
Well I suppose if you just need to fill disk they're alright. Maybe shunt the models back and forth to stress test your network? In a pinch I suppose you could actually run them to keep the place warm when the heat goes out.
Never been more glad to be in trades than right now. No copilot, no AI at all. Not even any ML, though properly done ML might actually be useful, LLMs are decidedly shit.
To be fair, I have made AI tools for my colleagues that reduce entire tasks to simple clicks. The human is still necessary, as they have to verify and take ownership of the output. They nevertheless refuse to use it despite the significant gains in performance. The reason, in my opinion, is that there's a quick dopamine hit and sense of achievement in doing the little mundane things. This doesn't bolster productivity though, and hence the KPI can make sense, but only if it's not onerous.
Well, in the world where office attendance is monitored and reported on as an official Bums On Seats metric, surely a measure that shows you've pledged your allegiance to the new god can only be a good thing?
"How was your day today, dear?"
"I travelled unnecessarily to the office, sat at a desk to use the laptop I carried in from home, and was required to use an unreliable software tool that took up my time for no benefit to me, my employer or anybody other than the software vendor's share price. Same as yesterday."
I went onsite on Friday for no reason other than to be badged in. The rest of the team I'm working with all have ManFlu so were either off or working from home.
In the end I messaged a friend who works in the labs to have lunch so I could at least speak to one human at work!
"You are in the doghouse for AI avoidance."
Next week: "Oi you, stop that! You are in the doghouse for churning out AI slop all day long!"
"What's that, you want a meeting to agree an official AI usage window? Oh, dear, that's above my level of responsibility, I'll have to escalate that to my Demi-God."
Three years later: "Oi you, stop that! You are doing useful work and scuppering my attempts to tilt the AI Policy my way! Make sure you are at the 100th meeting next month."
When I mentioned the prevalence of errors in AI output, my manager suggested that I run my prompt through a competing AI system and compare the results. That might work for short snippets of code, but remembering my comp-sci classes from college, the rate of code variation grows quickly as program size and complexity increases.
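For what it's worth, the crudest version of that "compare the results" suggestion can be done mechanically with Python's stdlib difflib; this is purely illustrative (the two "AI outputs" are made up), and it shows exactly why the idea breaks down: even a trivial variable rename drops the similarity score, and divergence only grows with program size.

```python
# Illustrative only: mechanically comparing two AI code outputs with a
# similarity ratio. The snippets below are invented examples.
import difflib

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two code snippets."""
    return difflib.SequenceMatcher(None, a, b).ratio()

out_a = "def add(x, y):\n    return x + y\n"
out_b = "def add(a, b):\n    return a + b\n"

print(similarity(out_a, out_a))  # 1.0 for identical output
print(round(similarity(out_a, out_b), 2))  # below 1.0 for a mere rename
```

So a low score doesn't tell you which output is wrong, or even that either is: it mostly measures how differently two models happen to spell the same thing.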
This stuff is supposed to improve my productivity, yes? Yes?
"Microsoft 365 Copilot Chat (work)"
I have absolutely no idea what this might be, but it seems clear to me that the person who is responsible for naming MS products should be sacked (and maybe replaced by the brand new "MS365 Teams Copilot Azure Windows Office AI Branding Capability (new edition)")
Just for the fun of it, I looked up how much MS have invested in AI.
Google's AI-generated summary tells me that MS invested $88B in AI in 2024 (and fired 15,000 people to pay for it) and plans to spend another $80B in 2025.
That is serious money to turn into smoke. I assume the leadership would like to see a non-zero ROI from it, at least have something to show in the news.
"This time they are betting the farm on the next hype. A good bet?"
The outcome is assured. I thoroughly encourage their "courageous" gamble.
Given what the farm has produced, I am not sure anyone would want it even to recover a debt.
For anyone who isn't up to date, your data in OneDrive will be fed into the AI unless you use one of three chances per year to opt out (the usual caveats of a "temporary" failure preventing the switch being disabled and the option being randomly enabled anyway apply).
So it's time to get that data out of OneDrive if you had ever uploaded it.
Tron ("Brilliant", above) has it spot on.
Microsoft is very unlikely to go belly up if what is currently being called AI does turn out to be a colossal bubble. The company has long since achieved Too Big To Fail status and to change that will require time and coordinated and determined action by the authorities where its major customers are based.
A burst bubble would be embarrassing for the promoters of this "AI", and one might hope that there would be some changes in MS's senior management, and also in any other publicly-traded companies that have bet the farm on it and must book existence-threatening losses as a result.
In large companies that merely have to write off large sums, the blame will doubtless be placed on bad advice from IT, which can be handled by the standard procedure of installing a new CIO with instructions to cut headcount costs.
Move along, nothing to see here....
Ready to be watched like never before? Meet Microsoft Copilot. The productivity tool that doesn't just see all, it obsesses over your every keystroke. Your emails, spreadsheets, and calendar entries aren't just data anymore. Privacy? That's a bedtime story for babies now, and "personal space" got deleted just before Copilot suggested an edit to your awkward confession — right in the same search window.
It’s the surveillance system you didn’t ask for but definitely deserve, watching, taking notes, and ready to tattletale before your second coffee. Compliance and transparency have never been this invasive or this much fun. Microsoft Copilot: making productivity feel way less private since 2025.
+1 for use of the word “Panopticon”
I would go for camera obscura, literally "dark room": the ancient optical device which requires darkness, and in which the world is diminished and upside down.
This nonsense does not illuminate; rather it is confounding what little its users know about anything — casting a great dark swathe over the whole shop.
... AI is dead in the water already.
The technical people are not only no longer impressed; they mainly see AI as a risk.
Not for taking over the world, but for shooting them in the foot...
And I see the first signs that leaders are starting to listen and to notice the lack of real results from AI.
Microsoft giving boneheaded management a tool to "monitor usage" of AI by their minions is IMHO a sign of panic...
It's a sign of the whole wrongheaded approach.
Fancy sorting out the telemetry of an unknown and untested technology, whilst putting your very real data into that tech, and only *THEN* figuring out how to measure what 'success' looks like.
The hubris attached to those billions invested is breathtaking.
If you have followed Microsoft things on social media for a while, you'll have noticed that more than a few of its "MVP"s have from time to time demanded more convenient ways to turn off CoPilot. This is a problem for Microsoft, since it wants to feed all of a user's interactions back into its LLM. Most users, even if they do think CoPilot is worthwhile, prefer to simply make occasional queries to it - much like Google (sorry, Bing) search. "A better Bing" is not what Microsoft has been selling, and let's face it, no one pays for web searches, unless they are trying to avoid Google or Bing.
From the perspective of MS executives, forcing their programmer employees to use CoPilot gives them data to train their LLM on. It isn't like they are getting that from many non-Microsoft programmers. The same holds for Office products.
>From the perspective of MS executives, forcing their programmer employees to use CoPilot gives them data to train their LLM on.
And Copilot usage has been hooked directly into employees' performance reports, meaning their jobs pretty much depend on making some kind of meaningful use of it; in theory this ensures that the data will not be irrelevant garbage. In practice, however, as more and more people just use it for needless make-work projects to boost their own KPIs, the quality of the data will degrade anyway. In this case, though, it'll happen much more slowly than with scraping bulk data off the Internet, so it will be that much more difficult to sift out the chaff later on down the line.
But hey, at least they'll save a marginal amount by having the drones crank out the data for them, so the shareholders won't care.
Perhaps the data goes back to MS development so they can learn how to fix Windows and its entourage of over complicated management tools.
I’ve just discovered MS Graph interfaces need about another four things turned on since the last time I refreshed it. I have no idea why.
Well, for fun, I asked Copilot to rate the competence of stuff I've worked on, and it absolutely loves it and suggests ways that I can use to promote how valuable I am as a developer.
Obviously this makes me somewhat biased, but if used by someone to determine technical competency and quality of work, I really cannot fault it. Of course, I shall get it to write all my annual reviews from now on.
Scary
Judged by a machine for either using the machine or not using the machine
Seems like a perfect HR tool
We need to lose 1,000 people
OK no problem let’s just run a few queries to generate justifications and do you want them emailed or texted?
Auto-deletion of logins and they're gone
Dreamy for bad managers to hide their poor work
Dreamy for HR to hide their poor work
Dreamy for legal to justify their existence
Nightmare for the company as it goes tits up
Taking a management perspective: a good boss would be interested only in how productivity and quality reacted to the adoption of Copilot, and would judge whether that warranted the licences. Who cares whether particular individuals use it? You might insist they try it, but as long as they are productive, they are worth their salary. Even if you think there are people out there who will use it and be more productive, it doesn't matter: hire them too - it's called growth. If your market is topped out (unusual), OK, maybe then you will focus on margins only.