May I just be the first to say...
IT'S A TRAP!
Microsoft’s love of Linux is extending to its flagship Visual Studio suite. Redmond has released for download an extension it has developed that lets you roll C++ code for Linux servers, desktops and devices. Visual Studio will copy and remote build source and launch the application with a debugger. There’s added support in …
Who's trapping whom? This sounds like embrace-extend-extinguish in reverse.
Microsoft is actually helping disaffected Windows developers make the switch to Linux, and when Microsoft abandons them they'll say "screw you guys, I'm staying here even if I have to learn Eclipse or Emacs or even Vim". Am I missing something? Where's the world domination strategy here? (IT? icon because MS has dominated IT for 20+ years!)
Perhaps it's a "get those pesky programmers off our platform so we can dumb it down big-time" strategy?
If the disaffected Windows programmers embrace make, gcc or llvm, autoconf tools and so on, then great.
If they just take their dependence on VS to Linux the result is Linux world gets filled up with VS-compiler-specific C++ code generated by VS wizards. I can decipher most C++ written by people on Linux using just vi or another text editor and grep. Deciphering VS-generated C++ code on Windows using a text editor is a shortcut to a stress induced heart attack. This is the trap.
NT 3.1 to NT 4.0 (and maybe later) had five subsystems:
1) NTVDM: Runs DOS programs in a virtual machine, even x86 binaries on Alpha, MIPS, PowerPC etc. Today it's not as good as DOSBox, a cross-platform emulator that could even run DOS graphical adventure games on Symbian and works well on Windows and Linux.
2) WOW: Translates 16-bit Windows API calls to 32-bit NT calls, and probably runs the code in an NTVDM?
3) POSIX: Usually off. A sort of UNIX compatibility layer; you needed Interix or Services for UNIX to really make use of it.
4) Networking: essentially MS LAN Manager, which was optional on pre-Warp IBM OS/2 and bundled with pre-1993 MS OS/2 (briefly sold as a server by MS, mostly to schools, for Win3.x workstations) before IBM and MS parted company on OS/2.
5) OS/2: OS/2 text mode (not Presentation Manager), i.e. console apps, which ran perfectly on NT 4.0, as I had to deploy some for an accounts department.
I think Ballmer deprecated all this stuff. Mostly gone by Vista?
NT was originally very modular, and before NT 4.0 even graphics (screen and printer) drivers were sensibly kept outside the kernel. NT 3.1 was the first version, in 1993 - possibly numbered that way because it was partly based on MS OS/2, as well as on all of Dave Cutler's VMS-inspired work.
With Win95 MS started to lose the plot. They totally lost it in 2003: development of Vista went out of control, and the Ribbon was inflicted on Office. Were XP, Server 2003*, Visual Studio 6, SQL 7 and Office 2003 the peak of MS implementations?
[*Though I preferred Win2K Enterprise / Advanced Server to Server 2003, which was a bit bloated, so we switched to Linux-only servers in 2007]
The NT POSIX subsystem was (apparently) the result of a poorly worded US Gov/DoD specification for an OS refresh, which just required "POSIX compliant" - to which MS added the bare-minimum POSIX.1 layer to NT, thereby satisfying the spec and undercutting the big-iron UNIX vendors. Thus NT was foisted on poor end users who then had to rewrite their UNIX software for NT.
I was told this mid-90s by some of those poor end-users.
...lets not mention the Windows-for-Warships farce...
Face it - with today's powerful GPUs requiring a lot of hardware interaction to be used proficiently, a lot of driver code needs to be in the kernel, or you will just end up moving a lot of bytes between user and kernel space - Linux moved its drivers in as well.
While in the NT 3.5/4 era there were just low-res displays of 2D bitmaps, and the GPU did little more than put graphics-memory data on screen and draw the cursor, now you have something much more complex.
It would be nice to be able to select between a low-performance but more secure driver and a high-performance one, depending on the machine's use, though.
.. not necessarily: if your GPUs have an onboard CPU controller with local bus access, they can fetch and run code directly from memory and/or use the same bus to interact with other hardware, needing only a kernel-level control to pass the GPU a code-page pointer to get it moving. Security is maintained as the GPU only runs code passed to it, and it leaves the main CPU free to do what it does best.
I agree, Server 2003, XP and Office XP were the pinnacle of Microsoft's products. Nothing since then has provided anything more or anything better. Since those days Linux has improved to the point that it may be possible to replace MS Access.
Windows NT was designed with a "native API" upon which other "subsystems" could be implemented. "Win32" is actually a subsystem implementing the "Win32 API" over the "native API" - although over time some Win32 functions have come to call into the kernel directly.
Other subsystems can be created over the native API, as was done for OS/2 and POSIX (although both only partially - for different reasons...). I'm not 100% sure, but WinRT may also be implemented directly over the native API, not over Win32.
I guess they did something similar to support Ubuntu. After all, part of the plumbing to support POSIX applications was already there - and they explicitly say syscalls are somehow "converted", so there isn't a Linux kernel running in a VM underneath.
The native API was never officially documented, even though a few applications call it directly. For example, if you want to run something at boot time (like chkdsk and a few AV programs do), before Win32 support is fully available, you may have to code against the native API directly.
See for example:
Thanks for the responses, all. LDS's technet ref, in particular, gave me something of an insight into what is meant by 'Windows Subsystem', so I think my original query is answered.
I know a bit more now than I did this morning, but I think I'll stop there - it's way too big a topic to get into from a cold start. Nevertheless, it's odd to realise that I find the internal structure of Windows far more interesting than the external fluff that we all get hung up on.
I liked it for RAD prototyping of ideas in VB6. VB.Net is C# with a VB syntax - pointless. C# is probably the successor to VB6, but far slower to prototype ideas in, and the newer "visual" stuff replacing the VB6-era forms lacks some of the earlier features. VS now seems to have three incompatible, unfinished "models" for the visual side of Windows. Why would I use it for Linux?
I can't imagine why anyone developing for Linux isn't doing it on a Linux box, given how horrible Windows has got.
I'm sorry, but VB.Net is many things, but C# with VB syntax it is not. Nor is C# the successor to VB6 - VB.Net is the successor to VB6.
Obviously from your post, you haven't used Visual Studio in quite some time, and definitely not for longer than 5 or 10 minutes.
As for the actual topic itself, if MS wants to add support for Linux C++ devs then all the better, it gives those guys another tool they can choose to use if they want it. Indeed, it looks like you haven't read further than the title, as none of the templates offered appear to be relevant to visual development.
Remote build - does this mean the plugin takes my code, puts it onto a Microsoft server somewhere, and compiles it there? If so, what MS are asking is for me to give them the source code of whatever application I'm writing, so they can sniff whether it's of any value to them and, if so, 'nick' it. If I am working for a client, this arrangement is an absolute no-no.
I'll stick to writing C++ with Code::Blocks on an actual Linux system, thanks.
It looks like it's Windows 10 that will offer an "Ubuntu subsystem", while this plugin will let you compile on any Linux of your choice - and we all know how difficult it can be to run an executable built on one distro/version on another, especially if it isn't linked statically.
Still, this one asked for Android support to be installed - it looks like another feature derived from having implemented Android support in the IDE.
@Novex - They've not really done as much as you may think. It's not compiling the code with a Microsoft compiler. It's just a bit of software which uses SSH to copy your source code to a Linux machine, and then runs the gcc compiler there to compile the code and run it. It also appears that you can debug by using gdb. It's not much different from ssh-ing into a remote server via a terminal and doing it yourself, or writing a script to do it. Note that the "remote server" could quite easily be running in a VM on your workstation.
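The workflow described above can be sketched in a few lines of plain shell. The host name, file names and ports here are placeholders, not anything the plugin actually uses, and the commands are echoed rather than executed so the plan can be inspected:

```shell
# What "remote build" appears to boil down to, expressed as plain
# scp/ssh. REMOTE and SRC are placeholder values, not real defaults.
REMOTE="${REMOTE:-user@linuxbox}"
SRC="${SRC:-hello.cpp}"
BIN="${SRC%.cpp}"

remote_build_plan() {
    # Echo each step instead of running it; drop the echo wrappers
    # to use this against a real machine.
    echo "scp $SRC $REMOTE:/tmp/$SRC"                  # copy source over
    echo "ssh $REMOTE g++ -g -o /tmp/$BIN /tmp/$SRC"   # compile remotely
    echo "ssh $REMOTE gdbserver :2345 /tmp/$BIN"       # debug via gdb
}

remote_build_plan
```

Visual Studio presumably adds project integration and a debugger front end on top, but the moving parts look the same.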
I'm pretty sure people have been able to do all this with Emacs since at least the 1990s. I've never tried it myself, so I don't know the details however. It may be new for MS Visual Studio, but it isn't a new idea by any stretch of the imagination.
I'll take a guess that the reason that MS are doing this now is to try to keep MS VS relevant for Windows devs who are porting their C or C++ applications or libraries to Linux VMs running on MS Azure. Without it, Windows developers would need to learn to use a new editor, and if they did that, they might not go back to MS VS later.
It has zero relevance to people who are already developing on Linux full time, since they will use native editors or IDEs (or whatever strikes their fancy).
I do something which is somewhat the reverse of this for a project I am working on. It's a set of cross-platform libraries. In my case I develop the software on Ubuntu, and then set up a series of automated tests (tens of thousands of them) which run in different operating systems in a series of VMs. I use SSH to connect to them, send the source code, compile, and run the tests. Being able to edit the files remotely isn't of much value to me, since it would make keeping track of the source code too difficult. I find it much better to write it on Linux, get it working, and then port it (which has mainly been just satisfying different compiler warnings).
What MS have done is to take a simplified version of the above concept and let you trigger it by clicking on a menu in MS VS.
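A stripped-down sketch of that kind of multi-VM test driver, assuming passwordless SSH to each guest; the host names and paths are made up, and the commands are echoed rather than run:

```shell
# Push the source tree to each test VM and run the test suite there.
# The host list and directory names are illustrative only.
test_plan() {
    for host in debian-vm fedora-vm freebsd-vm; do
        echo "rsync -a src/ $host:build/"        # ship the tree over
        echo "ssh $host 'make -C build check'"   # build and test remotely
    done
}

test_plan
```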
Linux has had a fully functional development environment since day-1.
All I can see is that with everything burning around them, Microsoft is trying to find some way to make themselves relevant in the coming years.
Sorry, bridges burned, trust destroyed and I've moved onto better things years ago.
Other big names from the past that went the same way - SCO, Borland, etc you get the idea.
The point is that it gives people choice. In the Linux world there are a hundred and one tools to develop in the same language; adding another tool isn't a bad thing.
From the perspective of the Windows developer, this now gives them the possibility of working with Linux infrastructure in a less frustrating manner. If you can build a Linux console app in C++ from Visual Studio on your Windows box and then remotely build it against a separate Linux machine, wouldn't that be a not-bad thing to do?
> Compile on my I7 Windows Box, deploy to the Raspberry Pi. Recompile in a fraction of the time it would need on the Pi....
Sadly, no - according to the link posted above by Ian7, the code is copied to your RPi and compiled by the RPi, so the process may actually be slower than if you did it all on the RPi.
But OTOH, if you like VS, you get to edit using VS and there is more storage on your PC's hard drive for version control overhead etc.
The point of this is to require the developer to work from a Windows machine and pass Microsoft C++ with its libraries to Unix.
The developer stays tied to MS as long as they stay on MS C++, and the Unix system has to carry the MS libraries, rather than the developer using gcc and learning the freedom of not having to use a dying platform.
Lastly, is MS going to contribute its Unix components to GNU, or is there a hidden bill awaiting payment in the future?
Actually, no, and you can see it from the (bad) habits of many Linux developers. I still find some for whom a debugger is a strange new tool (and let's not talk about profilers or the like).
Eclipse came late to the party, and it came from Java. It is still an IDE which is too easy to hate, especially if you use it for anything but Java. Too many IDEs were written in Java as well. VI and friends can be useful for relatively small projects, but are a pain in the ass for large ones. I've been used to IDEs since the 1990s, and maybe I'm a bit spoiled - but I understand there are always Luddites who prefer a more complicated way of doing something, as their fathers and grandfathers did, because it makes them feel like "true men".
Borland went out of business not for bad IDEs, but for too many ill-fated decisions, including porting commercial IDEs to platforms where developers are terrified at the idea of paying for dev tools. They seem to prefer printf() - or any equivalent - liberally planted all around their code, and perusing endless logs.
Porting the Delphi and C++ Builder IDEs to Linux was a waste of resources, and actually stalled and crippled the Windows versions. JBuilder was killed by the price of Eclipse.
But it looks like the current owner is attempting to compile for Linux again, although it seems almost a by-product of their native (LLVM-based) compiler for Android - implementing, for example, an ARC model for memory management.
Forget about Eclipse (2001) and Java (1995); neither existed when Unix (1960s) and Linux (1991) first arrived.
Pick an editor of choice - VI, EMACS, ED, etc
Use make, cc, gcc etc.
Job done. Any platform, this CPU architecture, cross-compile to another CPU architecture - all out of the box.
OK, so it's not an integrated development environment (IDE) - those came later - but you can still choose what works best for you and the capabilities of your target platform: headless via SSH to a Raspberry Pi or midrange box, or RDP to a Pi or midrange box and do it via a GUI - you choose.
By comparison - the Microsoft IDE: target Windows, their framework, their way, no choices... until now. I don't see the need, other than a desperate land-grab in a vastly dwindling market - look at the car crash of the W10 app store as an example of the failure!
The current Linux tool sets are fine. Pick an old one, or a new one if you prefer, choose your language and target, and get coding. Done.
Not disputing that some people like IDEs.
The original point was that there is no point in MS trying to move into Linux when it's all comprehensively covered - whatever your likes, dislikes, platform, build or debug requirements. There isn't a gap I can see for VS, which is far more restrictive, like everything Microsoft - and I say that as someone who *used* to push for MS stuff a lot, before they turned bad..
The whole point is people LIKE using an IDE. That is actually a fact.
Add to this that VS is one of, if not the, best IDE so far.
I've heard there are still some people who prefer to use vi and handcrafted makefiles, but why not, after all if they want to live like it was the 20th century, it's their choice...
The first IDEs appeared in the '80s; Turbo Pascal, for example, was among the first. Makefiles can be a dark art - that's why tools like autotools or CMake are needed. Working on large projects needs support to easily manage all the required files and easily navigate them. Debugging and profiling also require something more than a command-line tool. Trying to work in 2016 as if it were 1976 is pretty dumb.
Anyway, MS had no reason to support other OSes in VS till now. After all, Linux does nothing to ease porting its applications to Windows.
If you're wandering around any medium or large-size project with vi, emacs, or ed, then that way lies madness. You need to be able to follow tags, get declarations, get call hierarchies, and so on. Split-screen editors, compiler errors taking you to the right file in the editor, debugging using the editor windows, and SVN browsers are also extremely useful. That is why people like using IDEs.
Also, are people actually compiling on the Pi instead of using a cross gcc?
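Cross-compiling for the Pi on a fast x86 box looks roughly like this. It's a sketch that assumes the `arm-linux-gnueabihf` cross toolchain is installed (e.g. from a distro's `g++-arm-linux-gnueabihf` package), so the commands are echoed rather than run here; the project and host names are placeholders:

```shell
CROSS=arm-linux-gnueabihf-g++   # typical Debian/Ubuntu armhf toolchain name

cross_plan() {
    echo "$CROSS -O2 -o myapp-armhf main.cpp"      # build on the fast box
    echo "scp myapp-armhf pi@raspberrypi.local:"   # ship the binary to the Pi
}

cross_plan
```

Shared-library versions on the Pi still need to match what the cross toolchain links against, which is the usual catch with this approach.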
Porting Delphi and C++ Builder IDEs to Linux was a waste of resources
I think it was called Kylix. Back then I was using only C++ Builder (first contact was version 3) for development and liked it a lot. I had high hopes when I first heard about Kylix: I could start developing and porting my existing software to Linux in a non-emacs environment, with powerful (at the time) UI RAD tools. Total disaster: Kylix was a steaming pile of <insert favourite excrement here>.
@Dwarf - The point is to try to keep MS VS relevant to current users who may be developing C or C++ (and perhaps C#) code which has to run in a Linux machine "in the cloud". Without it, the existing user base will gradually abandon ship to use cross-platform tools.
It's not meant for Linux developers. It's meant for MS Windows developers who now find that legacy platform Windows skills don't get them very far in an industry that is going through another platform change like it did from mainframe to PC.
"It has cemented its hold, in part, thanks to the environment's ease of use"
Are we talking about the same software? It's painfully difficult to use. Everything is hidden away in a byzantine maze of options menus, with vitally important settings given the same visual precedence as the trivial.
I am not entirely sure if you are taking the piss, but OK, I will bite.
NMake has been deprecated for quite some time now; MSBuild took over back in 2013 and is, by and large, better than NMake.
You can execute MSBuild from PowerShell at any point, as long as you have your path set up to point to the correct version of MSBuild (you will have a separate exe for each version of .Net you have installed, or you can explicitly call the one you want).
Alternatively, try PSake, a PowerShell-based build engine. It's actually OK: the build is defined as code rather than markup, which makes it quite powerful, along the lines of Rake etc.
I would seriously consider moving over to PSake then.
With your current problem, is it that nmake is defaulting back to the Windows directory when you are trying to invoke a target?
Oh, and MSBuild has been around since the first days of .Net; it's the only thing I have ever used to build .Net, to be honest. It's just that in 2013 it became the only way to build .Net.
Microsoft was nowhere to be seen when I was looking for a RAD Dev Studio kind of environment way back in 2005, only now, 11 years later, do they decide to come to the Linux development party. After funding so many attacks against Linux, coercing Android vendors to pay them for licensing then forcing them to sign an NDA. And did you hear? The SCO Zombie is still claiming they own Linux and are appealing the latest ruling. Didn't Groklaw discover that M$ was allegedly arms-length funding SCO's efforts?
Thanks but no thanks. Now sod off.
This goes back a dozen years or so and back then the SCO Group realised they were in big trouble because their brand of Unix was losing market share to both Linux and other proprietary brands of Unix hence their vexatious litigation. Microsoft had only just emerged from their antitrust battle with the US government so they had to be very careful about what they did.
They did two things and the first of those was to vastly overpay the SCO Group for the rights to use their version of Unix (that never went anywhere serious) and via their contacts, Microsoft introduced the SCO Group to new investment funders. The net result of both acts was the cash injection of millions of dollars which funded the anti-IBM and anti-Linux litigation at precisely the right time. As for the current reawakened appeal case, I don't think that'll get anywhere because every time SCO's arguments have been tested, they've pretty much lost.
One of my favourite statistics is that 98% of the world's supercomputers run on Linux, 2% on Unix and 0% on Windows. Satya Nadella's recent, and welcome, pragmatism is a realisation that in the commercial world there are powerful and unbeatable rivals, so accommodation and tolerance is the way forward - which will probably really annoy Messrs Ballmer and Gates. That said, I don't expect to see Microsoft Linux on Distrowatch anytime soon, although with the rate of change that's been going on, I guess anything is now possible!
Indeed, anything is possible, and the question is whether it is possible (and economically viable) to make something better than Linux - because either Microsoft needs to do that (and convince the business world that their offering is better) or they will be using Linux at the heart of Windows (probably cloud-based) in the future.
The next generation of computer users will have grown up with things like the Raspberry Pi (and Android), and even though they might have an Xbox or have used Windows 10, schools will advise them that modern businesses use Linux for critical applications - and honestly, the restrictions and limitations of Windows become very apparent after even limited exposure to Linux.
Visual Studio, aka "System Builder" these days, is a rich environment that doesn't really give you much for the $$$ that the Enterprise or eXpensive editions cost. For that outlay you're going to find that such and such an extension only works with such and such a product, assuming you have that particular Service Pack. In the end you'll have a build environment that never seems to quite work unless it's for very specific tasks - an environment that will demand to be updated all the time ($$$) or it will just stop working. MinGW seems to do the job as well or better.
As for cross-development, it probably works. But then so does Eclipse. So keep VS where it belongs - making programs for Windows.
I'm not an expert, of course. I just happen to need to use it to build some code for an embedded board. It would be the work of a moment with Linux but has proved to be an absolute pain in the a$$ getting the bits and pieces organized for System Builder. The debugger's nice, I'll give them that, but then if you're spending your programming life on a source debugger you're probably not doing as good a job as you should be doing designing and writing the code in the first place.
In Linux I can say things like this:
find . -type f -name '*.cpp' | grep -v root_of_dir_tree_to_not_process | xargs sed -i 's/wrong_variable_name/refactored_variable_name/g'
In VS, I think I need to load up MSDN to learn there is no dialog defined for that.
Isn't this a bit late? We get it: Gates and Ballmer weren't so hot on Linux and wanted to avoid it like the FOSS plague. Now, at the end of the Windows age, it's like they've seen the error of their ways. Or is this just, as it appears, merely the mutterings of some silly old codger yelling 'And another thing!'?
I would frankly ask the basic question: why now? And why that product? Assuming for the moment my target was some *nix, would it not be cheaper to just use the standard set of tools there? It's this kind of stigma that's holding Linux and BSD back, as everyone thinks it has to be Microsoft in order for it to be relevant.
I think the difference now is that the Gates and (particularly) Ballmer Microsoft nationalists have been replaced with an intelligent pragmatist in the form of Satya Nadella. I think he realises that Microsoft has lost on mobile and server operating systems, so it'll probably be coexistence and competition on that score, plus an attempt to keep Microsoft dominant on home and business desktop PCs, where it still has the lead, and a strong player in cloud-based productivity apps.
I suspect that the funding of destructive and hostile law suits (a la SCO Group) will probably be over and that there'll be an end to the billion dollar loss making botched tech launches and large acquisitions of the Ballmer years.
Now even Linux code can have a factory for the factory for the factory that allocates space for an int. And a different factory chain for unsigned int. And so on.
// BAD NOT THE M$ WAY, TOO SCARY FOR THE YOUNG ONES AND THE BEAN COUNTERS
int &foo = iMSIntFactory<iIntMSFactoryAllocatorPatternAdapter<iHeapAllocatorPattern>>(1); // or something even worse.
It only needs four cpp files and two assemblies.