* Posts by Steve Channell

168 posts • joined 19 Aug 2014


Renting IT hardware on a subscription basis is bad for customers

Steve Channell
Thumb Up

Architecture absence

Time was when hardware provisioning took so long that hardware orders needed to be placed near the beginning of a project.. which meant we needed quite a good idea of the architecture early on.

Architecture models needed to be quite comprehensive and sized to avoid ordering the wrong platform.

Without the architecture trigger, it is tempting to skip the platform architecture phase completely, and presume cloud scalability.

We're making F# more normal as a language, says its creator

Steve Channell
Happy

Re: I like playing in F major

No, it's not based on Fortran (in the same way C is not based on COBOL).

While F# is functional, it has always supported OO programming, and even mutability (you just need to declare variables as such, like Rust).

The biggest impediment to F# adoption is the number of features that have migrated to C#.

Steve Channell
Happy

Re: I like playing in F major

You can program microcontrollers in F# using Wilderness Labs Meadow

UK Treasury and Bank of England starting to sound serious about 'Britcoin'

Steve Channell

Re: Will happen, will be first, will succeed, but will be more like a database

"Proof of work" is inherently slow, but slowness prevents excessive forking (an issue for Ethereum relative to the slower bitcoin). A CBDC would only include clearing banks, other central banks, and some wholesale banking parties - no retail transactions. With a smaller group, simpler/faster hashes can be used.

In the retail channel, digital removes friction for large payments, and automates settlement for smaller payments.

The Bank of England will be amongst the first, because (unusually) it is not the only issuing bank for sterling£ notes (RBS, BoS, Ulster Bank etc also issue £ notes).

Steve Channell
Thumb Up

Will happen, will be first, will succeed, but will be more like a database

A CBDC will use digital ledger technology (DLT), but will be nothing like bitcoin: there will be no mining, and participating nodes will be limited and pre-trusted.

Unlike bitcoin or a distributed database, a CBDC DLT can be extremely fast because DLT time is relative, and forking provides distributed concurrency. Bitcoin hashing provides a trust mechanism (proof of waste), but it doesn't need to be as expensive if nodes are limited.
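A toy illustration of why "proof of waste" is expensive, and why a small pre-trusted node set changes the economics - illustrative Python only, not how any CBDC would actually be built:

```python
import hashlib

def mine(data: str, difficulty: int):
    """Search for a nonce whose SHA-256 digest starts with `difficulty`
    zero hex digits. Each extra digit multiplies the expected number of
    attempts by 16, which is why public proof-of-work chains are slow
    (and wasteful) by design."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# With a small, pre-trusted node set the difficulty can be minimal --
# or the hash race replaced entirely by signed votes among known parties.
nonce, digest = mine("block-payload", 2)
```

With pre-trusted nodes there is nothing to race for, so the same ledger structure can run at database speed.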

'We will not rest until the periodic table is exhausted' says Intel CEO on quest to keep Moore's Law alive

Steve Channell
FAIL

I do hope they paid you

For the gushing review of their polished mirrors and atmospheric smoke. RibbonFET might be a game-changer, but it was not demonstrated, and might not work.

The reality is that Intel needs to "promise" jam tomorrow because it has a weak position, and is about to take a beating. AMD makes faster AMD64 chips (x86-64, x64, etc. are aliases), nVidia makes more scalable chips, and Apple is humiliating them with the M1 Max chip.

Nvidia nerfs RTX 3080, 3070, 3060 Ti GPUs to shoo away Ethereum miners

Steve Channell
Facepalm

Re: nVidia is not a government agency

The key is "one of the definitions", not the definition: Friedrich Engels (Karl Marx's buddy) was a "socialist", but also a factory owner. The illustration was not to suggest that nVidia is "socialist", but a preface to the proposition that it will fail.

"Proof of waste" is just about the most stupid idea for consensus in a digital ledger, and a waste of the computational power of a GPU: the answer is not regulation of supply, but innovation to remove the need.

Steve Channell
Facepalm

Re: nVidia is not a government agency

"care about maintaining their multi-decade customer base" matches one of the definitions of socialist, while controlling supply matches one of the definitions of state socialism.

nVidia already has an "optimisation" that switches off the GPU when the monitor is disconnected, but that has just created a market for HDMI dongles that pretend to be monitors.

They are right to be wary of being pushed into the crypto segment, because the Ponzi scheme will collapse eventually, and they want a customer base left when it does.

My view is that crippling cards to perform integer arithmetic slowly won't work, but will make GPU programming more complex.

It's better to produce an alternative to "proof of waste" than to control supply.

Gartner's Windows 11 adoption advice: Explore but don't rush

Steve Channell
Thumb Up

Wayland and GPU access from WSLg is worth having as a developer

Otherwise, it's just another incremental improvement on Windows NT 3.51.

Microsoft's .NET Foundation under fire as resigning board member questions its role

Steve Channell
Unhappy

Funding is the key

The problem for any small developer in this space is that most users will always choose the MS option. NHibernate is one of a number of adequate ORM tools, but most people will just use Entity Framework.

This guy develops a UI framework (maybe with the thought of monetising it at some point), but can't compete with the MS UI stack, and can't use board membership to get better billing.

The answer is for MS to do something like "Google Summer of Code" and pay developers to develop interesting tools.

Imagine a fiber optic cable that can sense it's about to be dug up and send a warning

Steve Channell
Big Brother

Hunter Killer

The biggest advantage of this technology has to be in sub-sea fibre-optics, where access is difficult and distances between repeaters are long.

Detecting earthquakes and anchor damage seems like a euphemism for detecting physical tapping for espionage. Israel demonstrated that fibre-optic cables can be tapped; provided you don't splice all the fibres at the same time, it goes undetected by the owner of the cable.

Getting an early warning that somebody is interfering with a sub-sea cable is only really useful if you have a hunter-killer submarine lurking underwater to kill the attacker before they get away - for that you'd need nuclear-powered submarines.

While Australia is a relative backwater (in global telecoms), it is significant that none of its sub-sea cables go near China.. where anchor damage and earthquakes are such a problem.

Only one software giant to make impact on the robotic process automation market, says analyst

Steve Channell
Windows

Robotic Process *Analysis*, not automation

The biggest value of RPA tooling is not automation, but Analysis.

While RPA tools are great for screen scraping, the advent of Single-Page-Application frameworks {Angular, React, Vue, Elm, etc} has rendered a generation of automations redundant because [1] pages are more complex and [2] web-services are better suited to traditional integration (even if the integration is a node.js mashup).

*Attempting* RPA is a good way to perform process analysis, but the outcome is just as likely to provide information for an integration project as for a robot - and in a small number of low-volume cases it's actually cheaper to get people to do it.

The exception is where the integration touch-points are well understood by the vendor, or they have hooks in the API (usually because they wrote it). This is why Microsoft is likely to be the biggest winner:

[1] Outlook/Exchange/Office365 are the largest automation surface (even if few are using Dynamics, or hosting in Azure), and Active-Directory is the trust mediator

[2] RPA tooling can be discounted for platform lock-in

Microsoft emits last preview of .NET 6 and C# 10, but is C# becoming as complex as C++?

Steve Channell

native targets

For a very long time, .NET assemblies loaded into the Global Assembly Cache (GAC) have been compiled to native code. Today's .NET native target is more like LLVM than the VMs of old, and C# more like Clang.

If you don't like garbage collection, you won't like the .NET runtime providing GC for you. RAII does not obviate the need for smart pointers to avoid memory leaks. Porting .NET is no more difficult than porting LLVM.

Steve Channell

Re: "the ability to use operators on generic types."

Operator overloads are static functions that provide syntactic sugar so that statements like "hello " + "world" work in an idiomatic way.

This change is not about functions operating on (VB-style) Variant values, but about cases like matrix functions or complex numbers where type information is available at compile time.
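The "operators are syntactic sugar resolved from the operand types" point holds in most languages; a minimal Python sketch, with an invented toy Vec2 type standing in for matrices or complex numbers:

```python
class Vec2:
    """Toy 2-D vector: '+' is sugar for a method resolved from the
    operand types at the call site, not a runtime Variant dispatch."""
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    def __add__(self, other: "Vec2") -> "Vec2":
        # Element-wise addition, the idiomatic meaning of '+' here.
        return Vec2(self.x + other.x, self.y + other.y)

    def __eq__(self, other) -> bool:
        return isinstance(other, Vec2) and (self.x, self.y) == (other.x, other.y)

assert Vec2(1, 2) + Vec2(3, 4) == Vec2(4, 6)
```

In a statically-typed language like C# the same resolution happens at compile time, which is what makes operators on generic types useful.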

Please, no Moore: 'Law' that defined how chips have been made for decades has run itself into a cul-de-sac

Steve Channell

Moore core

The original Moore's law ran out decades ago; since then the computational power of CPUs has relied on adding more cores and tricks with speculative execution ("tricks" because of Spectre/Meltdown).

Aside from the excellent insights in the article, the other ceiling is the ability to break a problem into small enough chunks to keep graph processors busy with work - most algorithms can't be broken down into small tasks. Graphcore works because it focuses on AI.

The Register just found 300-odd Itanium CPUs on eBay

Steve Channell
Meh

Rose tinted view?

"Intel liked the idea of having another product line" is a rose-tinted view. Intel wanted to kill the x86 architecture and the licence agreements IBM had forced it to sign with AMD and other fab vendors. Intel thought it could end-of-life the x86 line and move everything (including desktops) to Itanic.

It wasn't a "wrong call"; it was outflanked by the dual-core AMD Opteron processor and the AMD64 extended instruction set.

Microsoft abandons semi-annual releases for Windows Server

Steve Channell
Mushroom

More pincer movements for Azure

Sounds like a pincer movement to undermine Windows VM in other people's cloud platforms.

MS have been touting Office365 interop as an advantage of Azure over AWS, and a "Windows works best on Azure" marketing campaign is just a matter of time.

Richard Branson uses two planes to make 170km round trip

Steve Channell
Thumb Down

Concorde

You could see the curvature of the earth and the darkness of the sky from Concorde at 2/3 of the altitude of Branson’s plane, but you didn’t experience weightlessness - mainly because the rapid descent would make passengers feel sick (there’s a reason NASA’s weightless simulator is called the vomit comet).

Branson’s plane is “innovative” in that it does not require air for the engine, but it does not use the lack of drag to get you anywhere other than where you started. Had it been a test-bed for a plane that could avoid the sonic boom, it would have a place in history, but it’s a technological dead-end that just provides a joyride (without the meal or champagne that Concorde provided).

In the joyride stakes, it’s not as good as Bezos’s craft.. had it been commercial a decade ago, it might have been a commercial success, but it missed the boat.

IBM's 18-month company-wide email system migration has been a disaster, sources say

Steve Channell
Coat

Re: PROFS

As if by chance, this year would be the 40th birthday of PROFS.

Microsoft flips request to port Visual Studio Tools for Office to .NET Core from 'Sure, we'll take a look' to 'No'

Steve Channell
Windows

Java version of .NET Core is OpenJDK

The "DLL Hell" question relates to versioning in the type binding: you can load different versions of the same library into a process without problems. COM support in .NET is like CORBA/IIOP support in Java; it is being omitted (wrongly) because there is very little demand for the feature.

Microsoft provided VSTO as a transition from VBA macros, but it was not widely adopted. Right or wrong, Microsoft are focusing customisation on web-services (for interoperability with the web version).

COM interop will remain in .NET (and even migrate to the Linux kernel), but VSTO will remain on .NET Framework because of "Enterprise Services" (COM+), in the same way that IIOP endpoints will remain on J2EE.

Java is "mature" in many respects, but its module system is twenty years behind .NET

Google drinks from Oracle's pond: SQL system log slurp part of grand data-sharing vision

Steve Channell
Facepalm

Golden Gate clone

Google produces a log shipping tool like Oracle Golden Gate and it's news?

This space is going to get interesting with the advent of standardized "append-only-tables" with multi-master replication (branded "Blockchain" natch)

Graph databases to map AI in massive exercise in meta-understanding

Steve Channell
Facepalm

The next graph database is not a graph database

Graph theory can be applied to any form of information (just like relational theory), but that does not necessarily mean you need to structure it as a network of nodes connected by edges.

If you want to find out if Vladimir Putin is connected to Donald Trump on LinkedIn, if Boris Johnson is related to Joseph Stalin on a DNA tracking service, or if the Taliban are financed by heroin, a graph database is an excellent solution because the “graph” is unbounded. Pandemic contact tracing is also an unbounded graph, but a graph database is not a good solution because of the rate of change.

Language parsing and bill-of-materials are also graph problems, but the best solution is to assemble them in memory because they are bounded, finite in scale, and apply complex constraints. Constraints are the weak point of graph databases because “structure” is not included in the meta-data: you can either apply constraints outside the database or suffer the performance problems of interpreting predicate logic.

The next “graph database” will take advantage of increased computer memory and GPGPU to traverse graphs in parallel.
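The bounded-graph case needs no graph database at all - a plain in-memory traversal will do. A minimal sketch, with an invented toy bill-of-materials as the adjacency list:

```python
from collections import deque

def connected(graph: dict, start: str, goal: str) -> bool:
    """Breadth-first search over an in-memory adjacency list.
    For bounded, finite graphs (bill-of-materials, parse trees) this
    beats a graph database: no I/O, and constraints can be checked
    inline while traversing."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

bom = {"car": ["engine", "wheel"], "engine": ["piston"], "wheel": ["tyre"]}
assert connected(bom, "car", "piston")
assert not connected(bom, "tyre", "car")
```

The unbounded cases (social graphs, contact tracing) are exactly the ones where this won't fit in memory, which is where a graph database earns its keep.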

Google proposes Logica data language for building more manageable SQL code

Steve Channell

Re: Little late for an April fool joke

SQL was designed for System/R and copied for DB2 and by Oracle. Even with DB2/MVS embedded SQL, the SQL was still sent to (and stored on) the database server during bind.

Nobody loves it, but at least you don't have to explain "void" in Java or { } ({ and } were originally to save space versus begin/end or do/end).

Steve Channell

Re: Little late for an April fool joke

The grammar is large because of the number of keywords, but those keywords enable it to be parsed in a single pass. I don't particularly like it, but I do like the fact that I can use any parser engine from yacc to ANTLR to Parsec, and haven’t had to re-learn it for different engines. (I’d change the verb order to match LINQ, but only because I like IntelliSense, and add lambda functions.)

It was inspired by DL/1, but it works.

Steve Channell
Facepalm

Little late for an April fool joke

The whole point of SQL is that it is easy (quick) to parse and optimise on the server side, relative to the time taken to execute.

For further optimisation, a hash indexed cache of execution-plans and cursor handles avoids most of the actual parsing.
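That cache can be sketched in a few lines: hash the normalised statement text, and parse only on a miss. The planner here is an invented stand-in, not any real server's API:

```python
import hashlib

def plan_for(sql: str, cache: dict, parse_and_optimise):
    """Return a cached execution plan, parsing only on a cache miss.
    Keying on a hash of the normalised text is how servers avoid
    re-parsing the same statement thousands of times."""
    key = hashlib.sha256(sql.strip().lower().encode()).hexdigest()
    if key not in cache:
        cache[key] = parse_and_optimise(sql)   # expensive: only on miss
    return cache[key]

calls = []
cache = {}
fake_planner = lambda s: calls.append(s) or f"PLAN({s})"
plan_for("select c.* from customers c where c.Id = 5", cache, fake_planner)
plan_for("select c.* from customers c where c.Id = 5", cache, fake_planner)
assert len(calls) == 1   # parsed once, served from cache the second time
```

Real servers also key on bind-parameter shapes, but the principle is the same: parsing cost is amortised away.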

Where things can be improved is in the area of mapping (to classes) and client-side syntax checking.. where C# LINQ has basically got it nailed:

from c in database.Customers where c.Id == 5 select c;

=>

select c.* from customers c where c.Id = 5;

So how's .NET 6 coming along? Oh wow, Microsoft's multi-platform framework now includes... Windows

Steve Channell
Windows

.NetCoreApp5.0

For added confusion, it is still .net Core in the assembly metadata.

It'll only really be finished when Tavis Ormandy's LoadLibrary DLL loader makes it into the Linux kernel

IBM creates a COBOL compiler – for Linux on x86

Steve Channell

Re: Too late....

CICS/6000 mapped the CICS command interface (API, not macros) to the Encina TP monitor (not BEA/Oracle Tuxedo).

Encina was slower than CICS/ESA because it used Unix processes instead of TCB threads, and pipes instead of the CSA.

Over a decade on, and millions in legal fees, Supreme Court rules for Google over Oracle in Java API legal war

Steve Channell
Meh

Tad unfair

CP/M-86 was designed as a feeder product for Concurrent CP/M and was a re-write of CP/M. Seattle Computer Products DOS was a simple program loader + shell, with just enough OS for a demo. IO.SYS and MSDOS.SYS were tiny. Microsoft got the OS gig from IBM because they pitched Xenix (Unix on a micro). MS BASIC and BASICA (BASIC Advanced) used the syntax of BASIC (including READ for in-line data cards).

SAP exec reminds the world that Microsoft is a customer

Steve Channell
Windows

30 year old news

Microsoft became an SAP customer more than 30 years ago, running R/2 on an IBM mainframe, and used to do development on a DEC VAX.

I'm somewhat surprised that this might still be the case given how capable Dynamics is (since Great Plains was migrated away from Btrieve files).

"Dog fooding" first referred to the move to OS/2 and LAN Manager.

You wouldn’t know my new database, she goes to another school: Oracle boasts of earthshattering tech the outside world cannot see

Steve Channell
Unhappy

Anyone who's negotiated with Oracle knows exactly what he means..

Oracle ALWAYS discount their official price list, and never ever allow customers to pay less without a fight.

Reduce your usage by 10%? That'll be a 10% price increase on what you have left. Thinking of JBoss instead of WebLogic to save money? Think again: the RDBMS price goes up.

The only leverage you have with Oracle is to threaten scorched-earth

Netflix reveals massive migration to new mix of microservices, asynchronous workflows and serverless functions

Steve Channell
Mushroom

Load Balancing and TSB

One of the reasons for the disastrous separation from Lloyds Bank was the late decision to use the load-balancing software that Netflix had open-sourced. It turned out that a routing service that worked well for routing requests to a CDN was not that good for stateful banking services.

They might call the deployment 'Strangler fig', but others will call it 'Canary testing': Just because they are "web scale" does not mean it is any good.

'It's where the industry is heading': LibreOffice team working on WebAssembly port

Steve Channell
Thumb Up

Re: It's the way the industry is heading

LibreOffice on WebAssembly is probably cheaper than re-writing it as an HTML5 app, so a reasonable strategy. To survive, Libre/OpenOffice needs to protect its flank from Microsoft and Google with some kind of web editor, but it is niche (nice to have) for normal office users.

Where it is interesting is in providing a benchmark for other desktop apps considering multi-platform deployment.. to look at the performance and bloat and make a decision based on someone else’s pain. The big news in this area would be Microsoft Foundation Classes (MFC) as WebAssembly.

Oracle sweetens Java SE subscriptions with a spoonful of free ‘GraalVM’ runtime said to significantly speed Java

Steve Channell
Windows

Playing catchup

The .NET strategy has always been multi-language. The only reason the CLR does not natively support Java is legal (IKVM adds JVM support).

The reason ECMAScript and Python don't work so well on a "foreign" runtime is dynamic code and its metadata for eval() performance.

Interoperability between Java and unmanaged code (C/C++) is still problematic because of the rubbish JNI.

In Rust we trust: Shoring up Apache, ISRG ditches C, turns to wunderkind lang for new TLS crypto module

Steve Channell
Flame

slow news day?

They're porting a Mozilla TLS library to the Apache httpd mod API and exporting a C API. The fact that the Mozilla library is implemented in Rust seems pretty incidental.

"In Rust we trust" is catchy, but given that all shared buffer management will continue to be done in C, it's not justified. Ironically, C++ shared_ptr<> and other smart pointers can't be used with this mod without compiling httpd using Clang.

'It's dead, Jim': Torvalds marks Intel Itanium processors as orphaned in Linux kernel

Steve Channell
Facepalm

Multi-core killed Itanic

While the AMD64 Opteron killed the market for Itanic, it was the multi-core approach that killed its projected performance advantage, leading to a change in software design. The remaining use-cases for very long instruction words were killed by GPGPU.

It was not just a commercial failure, but an architectural one: good riddance to bad rubbish.

Explained: The thinking behind the 32GB Windows Format limit on FAT32

Steve Channell
Meh

FAT fail

The File Allocation Table is a terrible design for SSD devices because it requires that the first storage blocks (where the table is stored) are re-written over and over again, reducing the life of the device, and because the table is a fixed size irrespective of the size or number of the files.. but it has the single advantage of being simple (and standard - copied from CP/M). Things would be different had MS licensed the NTFS format, or had the Unix i-node file-system been open source.. but we are where we are.
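The wear argument follows from the layout: every time a file grows, the table itself is rewritten, and the table sits in the device's first blocks. A toy cluster-chain model (illustrative only, not the on-disk FAT format):

```python
FREE, EOF = 0, -1

def append_cluster(fat, head):
    """Link a free cluster onto the end of a file's chain. Every append
    mutates the table itself; on real media the table lives in the first
    blocks of the device, which is exactly where an SSD then wears out."""
    free = fat.index(FREE, 2)      # first free cluster (0-1 reserved here)
    fat[free] = EOF                # new end-of-chain marker
    node = head
    while fat[node] != EOF:        # walk to the current end of the chain
        node = fat[node]
    fat[node] = free               # ...and link the new cluster in
    return free

fat = [FREE] * 8
fat[1] = EOF                       # a one-cluster file starting at cluster 1
append_cluster(fat, 1)             # file grows: the table block is rewritten
append_cluster(fat, 1)             # ...and rewritten again
```

An i-node or extent-based design spreads that metadata across the device, which is why modern file-systems are kinder to flash.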

Google Cloud (over)Run: How a free trial experiment ended with a $72,000 bill overnight

Steve Channell
Mushroom

kamikaze testing

They wrote a program that has been written thousands of times before, didn't do any effective analysis, design or testing before deployment to thousands of nodes... And the good news is they escaped the bill?

Public cloud is expensive because "we" pay a tax to subsidise these "free trial" services for stupid people who think they can use billing caps as a substitute for testing.

The risk that holds back cloud computing is that your critical workload might end up sharing infrastructure with these stupid people, and not be migrated away before IOPS overload the hypervisors.

I know we're supposed to feel bad for them, but bankruptcy is an effective way to weed out the fools who use kamikaze testing.

Business intelligence vendor MicroStrategy reveals it’s bought a billion bucks of bitcoin

Steve Channell
Pint

Re: "Dependable store of value"?

The bitcoin price is strongly correlated with the market-cap of illegal drugs.. so it will fall through the floor if they are legalised, or if the FBI think they've identified enough of the drug users/traders to swoop.

On the plus side, it provides hackers with a nice big target for penetration + good of them to let the NSA know that they're not a major drug baron.

Microsoft submits Linux kernel patches for a 'complete virtualization stack' with Linux and Hyper-V

Steve Channell

Re: "Windows 10 is on a path to becoming a hybrid Windows/Linux system"

This is likely about shifting Hyper-V into firmware that can be bundled with servers by hardware vendors.

A flurry of data warehouse activity surrounds Snowflake's staggering $120bn valuation

Steve Channell
Unhappy

Re: There is no such thing as magic

When I first designed a core banking database for millions of transactions per day, special care was needed in the design of indexes, segment sizes, extents and partitioning to match CPU and volume counts, with extensive testing of /*+.. */ query hints to use hash joins and temp space for sorting, and scheduler optimisation to avoid parallel jobs using buffer pools at the same time.. doing that today would be largely redundant, but legacy DBA procedures have not moved on.

Snowflake might have a great UI, but the biggest advances come from simply updating the application and using instrumentation in the database: when a $50k server can store all data in Optane memory with hundreds of CPU cores, there is a case for moving analytics back to the operational database and avoiding almost all of the data-warehouse use-cases. When Teradata launched (with i386 AMPs), it was a step-change in performance, but the biggest (multi-million) DBC/1012 then is less capable than a commodity laptop today..

Great technology, but the real competitor is that $50k box as a VDI running {Tableau, PowerBI, Qlikview}

Steve Channell
Facepalm

There is no such thing as magic

If we're honest, the reason Snowflake has a market sector is that [1] Hadoop-based databases are still very primitive (not significantly different from the Houston Automatic Spooling Program of the 1960s), and [2] licence prices of traditional RDBMS are very high.

There is no magic: Snowflake is not especially different from DATAllegro, which sharded data across commodity servers in a previous generation; but unlike DATAllegro it runs in the cloud, where it must compete with the cloud vendors' own products and cover the rental charge of servers. While it has a high valuation it can invest in marketing and technology to optimise (caching, distribution, query plans, serialisation) for specific use-cases.. but eventually economics will catch up with them. The question is whether they can lock in customers before prices drop for the alternatives.

Back to the Fuchsia, part IV: Google's in-development OS now open to community contributions

Steve Channell

Re: Whatever.

Both Windows NT and Darwin/macOS use a microkernel architecture, but include additional core services within the kernel to trade safety for performance. A hybrid kernel uses message passing as the API protocol, but skips permission checking on calls within the kernel.

With NUMA computers, message-passing becomes more efficient because it avoids slow access across the NUMA backplane. Message-passing has not involved actual copying for a very long time, because the OS can use copy-on-write pages to transfer pointers between user and supervisor code.

Linux is not a monolithic kernel like Unix used to be, because it uses modules; but user-mode TCP has highlighted that loading everything into the kernel is not always desirable.

Microkernels might yet have their day for container hosts and hypervisors.

Steve Channell
Go

Hypervisor operating system

While there is little appetite for another competitor to Apple and Windows (as evidenced by ChromeOS), a microkernel operating system is an ideal platform for hypervisors.

Fuchsia/Zircon, however, is more likely to drive the open-sourcing of the Microsoft Executive as a firmware kernel operating system to replace/augment UEFI.

Buggy behavior bites .NET SqlClient for unlucky Linux users

Steve Channell
Boffin

TCP issue

Managed languages are portable.. until they hit issues with the underlying operating system and TCP stack. Java had this issue with green threads, which were ditched because of the "write once, debug everywhere" anti-pattern.

The likely culprit is the SQL Server host side using fibres for lightweight scheduling, causing out-of-order message dispatch over the TCP socket.

The secondary culprit is session pooling: Windows uses COM+ to reuse a pool of database sessions to avoid multiple open (but quiescent) sessions. This can be addressed either by adjusting Linux kernel parameters or by implementing an async/pooling/queuing pattern.

The tertiary culprit is lost packets over the network fabric caused by a flood of messages.

The root cause is likely to be 1000+ open connections between servers: that's never going to be efficient whether it works or not - the solution is to hire someone who knows what they're doing, and stop pretending it's not fundamentally your own problem.
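The pooling pattern mentioned above, in miniature: a bounded pool that shares a few real sessions among many callers, instead of 1000+ quiescent connections per server pair. `open_connection` is an invented stand-in for a real driver call:

```python
import queue

class ConnectionPool:
    """Bounded pool: N real sessions, shared. Callers block until a
    session is free, so the server never sees more than N connections."""
    def __init__(self, open_connection, size: int):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(open_connection())   # open sessions up front

    def acquire(self):
        return self._pool.get()                 # blocks until one is free

    def release(self, conn):
        self._pool.put(conn)                    # return for reuse

opened = []
pool = ConnectionPool(lambda: opened.append(object()) or opened[-1], size=2)
c1, c2 = pool.acquire(), pool.acquire()
pool.release(c1)
c3 = pool.acquire()        # reuses c1: no new session is opened
assert len(opened) == 2 and c3 is c1
```

`queue.Queue` is thread-safe, so the same sketch works when many worker threads share the pool.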

AWS Babelfish for PostgreSQL: A chance to slip the net of some SQL Server licensing costs?

Steve Channell
Meh

Re: Lawsuit?

Oracle compatibility is much more likely to give rise to a lawsuit.. for SQL/Server there is always the option to buy a licence from SAP (for Sybase TSQL).

Snowflake Q3 losses almost double, stock market flinches, then reckons: Nah, it's fine

Steve Channell
Meh

Re: Snowflake comes from the snowflake schema

A snowflake schema is definitely a data-mart anti-pattern, and often a data-warehouse anti-pattern, but poor support for updates of columnar tables means it is still applicable for mixed-mode databases used for online/real-time lookup and analysis (especially when federated with the columnar store for history).

Snowflake-like products shine when the schema is complex because the domain is complex... if a sales-person tells you it was not named after the snowflake schema, it is more likely because they're struggling to address complex use-cases.

The pertinent data-warehouse anti-pattern is applying a meta-warehouse to a data-warehouse problem

Steve Channell
Meh

Snowflake comes from the snowflake schema

The star schema has already been optimised by OLAP cube engines, but for more complex problems Fact + Dimensions does not cut it.

The idea of a snowflake optimiser is to join the smaller tables together before a full scan of the fact table. Snowflake is good at translating a single SQL query into query steps for different stores, but struggles when joins span multiple stores: a query of store OR product OR customer OR region will die because of data distribution.
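The "join the smaller tables first" idea, in miniature: resolve the small dimension tables into in-memory lookups so the (huge) fact table is scanned exactly once. Toy data, purely illustrative:

```python
# Dimension tables are small: hold them as hash lookups, so filtering
# and labelling happen during a single pass over the fact table.
products = {1: "widget", 2: "gadget"}
regions  = {10: "EMEA", 20: "APAC"}
facts = [  # (product_id, region_id, amount) -- the big table
    (1, 10, 5.0), (2, 10, 7.5), (1, 20, 2.5),
]

def sales_by_region(facts, products, regions, product_name):
    """One scan of the fact table; dimensions resolved via lookups."""
    totals = {}
    for pid, rid, amount in facts:
        if products[pid] == product_name:
            region = regions[rid]
            totals[region] = totals.get(region, 0.0) + amount
    return totals

assert sales_by_region(facts, products, regions, "widget") == {"EMEA": 5.0, "APAC": 2.5}
```

When the dimensions live in a different store from the facts, that single-pass plan falls apart, which is the cross-store distribution problem described above.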

The challenge for Snowflake is: (1) colocation of data is always better, (2) query optimisation is what RDBMSs already do, and (3) in the end, established vertically-integrated RDBMSs will be better (Oracle & SQL/Server already support Hadoop files).

My bet is a Salesforce trade-sale

I work therefore I ache: Logitech aims to ease WFH pains with Ergo M575 trackball mouse

Steve Channell
FAIL

Re: Where's the Lefty Version

I find this right-handed prejudice deeply offensive, and the constant discrimination against left-handed people troubling... but on a serious note..

Many years ago, I joined a consultancy that took what appeared to be a forward-looking decision to equip consultants with ergonomic Logitech mice to go with the nipple mice of our Toshiba laptops.. as a left-hander I found them unusable.

Nvidia says microservices will drive a SmartNIC into every server

Steve Channell
Thumb Up

Re: @Steve Channell Haven't you heard?

The reference to microkernels was an example of the design process of loosely-coupled systems that evolve to include common core capabilities in every process/service to address performance issues.

Micro-services architecture is what we used to call service-oriented architecture before J2EE bundled all the services into a container. The key point is that microservices allow independent changes without the need for monolithic quarterly change cycles: there is no reason that the {caching, web, authentication, personalisation, inventory, basket, pricing, recommendation, purchase, warehouse, shipping, review, accounting, analysis} services should need the same infrastructure or change-management. Three key considerations:

1) The JVM/CLR stacks need to change quickly for web-facing services, which would impose a heavy testing cycle on purchase/shipping in a monolithic deployment.

2) Services touching payment systems must be thoroughly reviewed and tested to avoid crime.

3) Presentation tiers need to scale out with load, but other services need to scale up with load.

The architecture driver is not a design pattern, but governance, scalability and performance. My point is that micro-services are not a design pattern but a deployment pattern - IoC allows you to decouple design from deployment, deploying only the parts you need..

There are monolithic deployment architectures (J2EE, mainframe), pure micro-services architectures deployed as K8s containers, and hybrid deployment architectures where some standardisation facilitates performance optimisation.
