I'd be quite reticent to start urine dribbling on my iPhone
If you've got an enlarged prostate, reticence will be the least of your dribbling problems.
3354 publicly visible posts • joined 6 Sep 2013
Sorry, my original post was badly phrased. I meant that credit should go to the lawmakers rather than to Google for "opt-in" being the default - I very much doubt it was their preference. Why anyone would choose to do so is beyond me for the reason you so clearly state.
I'd left my dumb heating controller on its frost setting while away for several weeks, returning to find the house at a toasty 26C and the gas meter spinning like a demented fruit machine. If they can attach some of those robot arms to a Hive so that I can thwack a sticking motorized valve from a distance of 2000km, I might consider it worth the investment.
It strikes me that when many houses have access to more than one TV and there is a national curriculum and a shortage of laptops, there are cheaper and, potentially, more effective, ways of delivering education than by ad-hoc provision by individual schools. By all means have local - and familiar - teachers available to review work and answer questions, but turning every classroom into a broadcast studio seems a curious approach for a government that is otherwise committed to centralising everything. They have had a year.
I think one could equally worry about the future of new Open Source projects - why bother if you're simply giving a leg up to your massive potential competitors?
But it also works the other way - there wouldn't be the manpower to work on some crucial Open Source projects if large corporations weren't paying their staff to do it.
Open Source only really works if there's enough mutual interest between a disparate group of developers. It's hard for anyone to have a mutual interest with a corporation the size of Amazon, but licensing is probably not the answer: they have the resources to take a useful concept and reimplement it if it's worth their while. And if you insist on finding an answer, you might find yourself approaching software patent territory...
And if you use a database that allows you to define a "virtual" table based on an underlying SQL query, you can tweak the JOIN or the data model subsequently without having to tweak the application.
There is clearly some mechanistic tedium involved in constructing a runtime object from a database row, but that was a largely-solved problem when COBOL was invented. I'm not convinced the convolutions of EF to deal with foreign keys add anything, and as soon as you start having to think how the supposed abstraction is actually working behind the scenes, the abstraction loses its value.
Not to mention that a lot of work goes into database query optimisation - you don't want an ORM second-guessing it.
Nice to see PostgreSQL called out - it's actually a very good platform for creating application-specific views of the database in a way that decouples the data from the application much more effectively than ORMs - at least in my opinion. And frankly, if you're trying to manipulate a database efficiently without knowing SQL, you need someone who does know SQL (or at least how the queries are executed) to do the work behind the scenes for you.
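The decoupling a view buys you can be shown in a few lines - a minimal sketch using SQLite for portability (the principle is the same in PostgreSQL), with all table and column names invented for illustration. The application only ever queries the view; the JOIN behind it can be reworked without touching application code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 15.0), (12, 2, 30.0);

-- The application sees only this view; the JOIN and the underlying
-- schema can change without the application noticing.
CREATE VIEW customer_totals AS
SELECT c.name AS name, SUM(o.total) AS total_spent
FROM customers c JOIN orders o ON o.customer_id = c.id
GROUP BY c.id;
""")

rows = conn.execute(
    "SELECT name, total_spent FROM customer_totals ORDER BY name"
).fetchall()
print(rows)  # [('Alice', 40.0), ('Bob', 30.0)]
```

If the orders table is later split or the join rewritten, only the CREATE VIEW statement changes; the SELECT issued by the application stays the same.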
That's really not practical though
At one time, that was commonly said about putting guards on dangerous machinery. Making people liable proved it was extremely practical.
I'd like to see the judge's face when someone claims to have taken reasonable steps to protect his customers' data by relying on code whose origins are indeterminate, whose accuracy he hasn't personally attempted to verify and for which no-one else is prepared to accept responsibility.
the clue is there in the word mainframe
I'd have thought that having all the data in one place, rather than scattered across the cloud, should make it easier to manage, not harder, in principle - but, admittedly, that's not much help if you can't find anyone who still has a clue how it works.
Doesn't really bode well. I was never able to use the previous version because the Forms Designer in Visual Studio wouldn't work. There are lots of references online to it being unusable for many, with occasional replies from others saying "works OK for me" - but no suggestion as to what might make the difference. In the end it was quicker just to use the native UI components than to try to solve the problem. The Xamarin stuff is very good in principle, but that doesn't help if the tooling is flaky, and I wouldn't risk breaking an otherwise stable installation of Visual Studio to see if the new version works better than the last.
At least with native applications there's some consistency baked into the platform - you can pretty much guarantee the same shortcut will open the "File" menu, for example.
I don't have any disabilities that significantly affect my ability to interact with computers, but when I've tried to interact with websites using accessibility aids (like screen readers, for example), I'm struck by how laborious a process it is, particularly when there's no real semantic difference between the "commands" (links) and the regular text on the page. You often have to sit through a recital of most of the page's contents before you can find the link you want to follow.
There's an inbuilt tendency to put as much information on the page as possible, as it's quite easy - for many people - to visually screen out what they're not interested in, and it reduces the number of clicks needed to reach the target. But if you have to listen to the whole lot being read out (or have to scroll laboriously through it in large high-contrast letters, reading every word), it's much quicker to have multi-level menus with fewer choices, even given the extra navigation to move between them.
Of course, it would be good to get the basics right, but I think it would be helpful to offer different models of interaction for different users - with consistent mechanisms for invoking them.
I'm not sure it's JavaScript (per se) that's the problem - it's had enough performance work done on it for the purpose it serves. It's the operations on the DOM that take the time - if you were looking at the most efficient way to update a user interface it wouldn't be by applying a stream of incremental changes using a text-based mark-up language that then interacts with a text-based styling system that likely causes a significant amount of re-rendering that's then partially invalidated by the next incremental change.
It would be the same problem with Wasm - which is as near to native code as you're going to get - you still have to call back into the DOM to display the results.
Historically, the focus has always been on getting load off the servers and shifting it to the browser. If you're happy to do the work on the server, then there's an argument for doing all of it and simply sending a stream of GPU operations to the browser to draw the result. I think that's the way we may be headed, even if we're proceeding by a series of apparently random walks.
The beauty of low code is you can adapt and make changes at a rate of knots. It's not your traditional turnaround time if someone upends a policy.
Well, I'm now totally reassured that the speed of a low code response has no detrimental effect on its quality or accuracy.
The last time I had any sight of social security systems in England - which was about 20 years ago - the people processing benefit claims could only do around 60% of them using their ageing ICL mainframe; the other 40% had to be calculated manually. The reason was mostly that, as new benefits were introduced, there were a whole load of transitional issues: people still entitled to elements of old benefits that were no longer available to new claimants, but also entitled to other benefits from newer programmes with different criteria. The systems simply couldn't cope with the significant number of people who straddled various policy eras.
The only way to have a truly "low-code" benefits system is to select a set of parameters and for politicians to accept that their future policy options are mostly limited to changing the parameters rather than the algorithm for deriving the payment - or creating entirely independent benefits.
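As a hypothetical sketch (all parameter names and figures invented), the distinction looks like this: a "low-code" policy change tweaks the parameter table, while anything that alters the shape of the calculation itself means changing the algorithm - or bolting on a new one:

```python
# Invented, parameter-driven benefit calculation. Policy changes that
# only touch PARAMS are the "low-code" kind; changing the structure of
# weekly_benefit() below is an algorithm change.
PARAMS = {
    "personal_allowance": 80.0,  # weekly base amount
    "child_supplement": 25.0,    # per dependent child
    "income_disregard": 20.0,    # income ignored before tapering
    "taper_rate": 0.55,          # benefit withdrawn per unit of excess income
}

def weekly_benefit(income: float, children: int, p: dict = PARAMS) -> float:
    entitlement = p["personal_allowance"] + children * p["child_supplement"]
    excess = max(0.0, income - p["income_disregard"])
    return max(0.0, entitlement - p["taper_rate"] * excess)

print(weekly_benefit(0.0, 2))    # 130.0 - full entitlement, no income
print(weekly_benefit(120.0, 2))  # 75.0 - 130 less 55% of the 100 excess
```

The trouble described above is exactly what happens when successive policies each demand a different weekly_benefit(), with transitional cases entitled to pieces of several of them at once.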
This, of course, will never happen - so there is a perpetual accumulation of cruft, however little code you (pretend to) start with.
What's surprising is that they can behave like an intensely monopolistic corporation and it's taken so long for the people that are supposed to prevent that to notice.
Part of the problem is that the regulators are still working on bricks-and-mortar timescales - they see the dangers coming down the track but fail to appreciate how quickly they become embedded. Part of the problem is that when regulators outside the US try to take action they come under trade pressure for threatening the profits of US corporations.
The basic rule for "platform" companies has to be that you can't both own the platform and trade on it yourself (unless you use it exclusively) - it's inherently anti-competitive. If Amazon wants to have an e-commerce platform, it can't sell on it itself. If Apple and Google want to run appstores, they can't distribute their own content. That, at least, would be a start.
To be fair, the original licensing requirements for SQL Server (which essentially meant if you used it to support a web service you had to have a licence for every possible user of the website) forced me into the sub-optimal position of having to use an Access database instead and Jet drivers that leaked memory and locked up to the extent that the web server had to be restarted several times a day. I moved swiftly on to PostgreSQL, though in its more recent incarnations SQL Server became a much more credible candidate for a lot of applications.
SQL Server has been both useful and affordable for a relatively small proportion of its life.
There was I thinking their thin film division was responsible for The Da Vinci Code.
It's a difficult thing to evaluate. I'm currently on a volcanic rock in the Atlantic (I'd like to think that one day I will be hollowing it out for nefarious purposes), but can get a 100Mbit/sec connection for €27,99 a month - provided I take a contract for 2 years. ADSL has roughly the same monthly price, but because of fixed costs is significantly cheaper for a shorter contract (note that they load significant "installation costs" into the fibre product but not into the ADSL product). However, you can't get any sort of service without going through a person-to-person sale (in a shop or by telephone), and that of course has to be paid for by vigorous upselling. The great thing about the UK market is the relative ease of sign-up and switching.
Is only available if you've created an account, by which time it's a bit late.
On the one hand I find it frustrating that Google is constantly complaining about logins from unauthorized devices and flinging captchas in my path at every opportunity, but it at least means their stalking is partially frustrated. I don't think it's a good trade to offer more information in exchange for less annoyance. The same goes for advertisements.
You cannot optimize middleware for a given OS without causing performance problems and/or security issues
Middleware is just software. If you can't develop software, even complex software, that runs reliably on multiple platforms then presumably all the Linux and GNU contributors have been wasting their time.
Unfortunately, the enablers are a sizable proportion of the electorate - Trump won last time basically because enough voters chose to reject fact in favour of fantasy.
That applies in spades when it comes to Covid - a lot of people flatly reject the truth and there are votes in pandering to their beliefs.
All Trump did was to demonstrate that if you remove even the pretence of integrity from politics it is an electoral asset.
Indentured labour is a form of servitude in which you "voluntarily" agree to have your rights restricted for a defined period, but not in perpetuity. I'd say that was a reasonably accurate description of a system in which you are effectively tied to an employer and expect to be paid significantly less than the going rate for the job in the hope that eventually you might be released and treated like an equal.
Strange how creating a form of indentured labour in order to appease the natives is ultimately to their detriment.
If only there'd been other examples of the negative economic consequences of xenophobia that might have better guided immigration policy.
The point I think I failed to make is that "agile", like every development methodology, is a trap, it's just a trap of a different kind.
If you're firing out frequent updates, you're pretty much constrained by the skillset of the people you have to hand (because you haven't time to reskill or rehire), and you end up with dreadful decisions (like adding cruft to your applications to do database joins) because the frequent release cycle that lets you add functionality quickly means that long-term architectural changes have to be approached crabwise, likely with much ultimately unneeded effort to keep each iteration functional in its temporarily incomplete state.
Constant incremental change is fine right up to the point that the premises on which your development is based turn out never to have been valid, or have become invalid over time. It seems a little optimistic to assume that one development approach should be appropriate at all stages of a project lifecycle.
I know I'm too old to be agile and am insensitive to the Zeitgeist, but I got a distinct whiff of "Emperor's new clothes" from this interview.
We would then rewrite these queries into multiple queries that respect the domain boundaries and perform any necessary joins at the application layer
Nothing screams "modularity" like building a knowledge of your transient database schema into your applications.
For the foreseeable future we're going to remain with MySQL just because we have a lot of expertise there
If you had more expertise available with a database that facilitated doing joins across multiple data sources, you might not have had to build that into your applications. Shouldn't you be looking at the expertise you need, rather than the expertise you have, especially on a "foreseeable future" timeline?
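For illustration only - a sketch using SQLite's ATTACH as a stand-in for the cross-source query features of engines like PostgreSQL (e.g. foreign data wrappers), with all names invented. The join across two physically separate databases happens inside the engine; the application issues a single query rather than stitching rows together itself:

```python
import os
import sqlite3
import tempfile

# Two physically separate database files, standing in for two "domains".
d = tempfile.mkdtemp()
users_db = os.path.join(d, "users.db")
orders_db = os.path.join(d, "orders.db")

with sqlite3.connect(users_db) as u:
    u.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    u.execute("INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob')")
with sqlite3.connect(orders_db) as o:
    o.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL)")
    o.execute("INSERT INTO orders VALUES (10, 1, 9.99), (11, 2, 5.0)")

# The engine performs the cross-source join; no join logic in the app.
conn = sqlite3.connect(users_db)
conn.execute("ATTACH DATABASE ? AS ordersdb", (orders_db,))
rows = conn.execute("""
    SELECT u.name, o.total
    FROM users u JOIN ordersdb.orders o ON o.user_id = u.id
    ORDER BY u.name
""").fetchall()
print(rows)  # [('Alice', 9.99), ('Bob', 5.0)]
```

With this kind of facility, splitting storage along domain boundaries doesn't force the schema knowledge - or the join - up into the application layer.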
This.
"Does not scale" leads directly to "does not compute" in such scenarios.
I can't glean a great deal of useful insight from the AWS post, but it does seem that there's a kind of circular dependency: the scaling and provisioning depends on higher-level services that don't work when there are scaling and provisioning issues. Calling up the protocol stack is a risky business, because it inevitably calls right back down.
It's not just the hardware. There's not much you can do (presently) if you're, say, a European country concerned about most international traffic routing through one of the Five Eyes where it will be mined not just for military but also economic intelligence.
It seems reasonable to expect that there will be not only more control over infrastructure equipment, but also increasing control of peripheral connectivity between geopolitical blocs.
why we should have difficulty
I think one problem is that even at this particular point we don't actually know what the final regime is going to be. It would have been more logical to have a negotiation, agree on the regime and then have a transition period in which the necessary systems were developed and tested.
But given that it would have been even more logical not to bother with this self-inflicted chaos, we deserve all the difficulty we create.
While we're part of the ECHR, it wouldn't fly for the criminal offence - but if it's such a big deal, presumably bringing in legislation would be a cheaper and more certain way of dealing with it in future.
Retrospective legislation on taxation is actually quite common, particularly to close perceived loopholes.