Re: I have a better way to save space on phones
I came here to post the same comment, but you beat me to it.
Have a (see icon), later in the day, though.
(leaving aside the case of native code)
Java as a source language, yes (though you should have a look at Kotlin).
But Android does not run a standard JVM:
the Java bytecode is transformed into something very specific to Android before your phone dares to execute the application.
What matters for me as a driver is to be aware that there is something in that direction, and roughly what it looks like.
As long as there is a big enough area of really transparent stuff (a.k.a. windscreen and windows), a blurred, rather than invisible, pillar area may be enough.
Thanks for the clarification; I assumed kernel code was compiled for the "preferred" register size of the CPU.
Well, I reckon this is ambiguous for MIPS, but as I remember it (and I could be wrong), Alpha was essentially 64-bit only, with support for 32-bit.
I totally agree regarding endianness. Actually, I almost mentioned it, but left it out to stay brief.
Anyway, the NT kernel was able to run on beasts with very different ways of interfacing with peripherals, for instance, and I never heard my colleagues who wrote drivers say they had to take care of this.
From the ground up, Windows NT was built to be really, really portable.
The proof is its 4 initial platforms:
- 32-bit x86 (nobody had even thought of a 64-bit x86 at that time)
- 64-bit Alpha
- 64-bit MIPS (I got one at work!)
- PowerPC (I saw references in the documentation; possibly one of my colleagues even used one). I do not remember whether it was 32-bit or 64-bit.
Itanium came later.
To me, "is for use in applications that require fast and/or light structured data storage, where raw file access or the registry does not support the application's indexing or data size requirements." is a good description of SQLite.
Any comments from people working with this kind of tool?
I read the whole set of comments and saw a single reference to timing attacks
(or maybe 2, if you count the sentence about probing the generated assembly).
Now, I have a question for the people proposing other languages:
Is there a way to guarantee that a function executes in constant time (that is: the same time whatever the input, on the same machine)?
To be fair, the same question could be asked of C users.
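To make the question concrete, here is a minimal C sketch of the classic case: a data-dependent early-exit comparison versus one that always touches every byte. Note that even the "constant-time" version is only an idiom, not a guarantee; the C standard says nothing about timing, and a compiler is free to undo it, which is exactly the point of the question above.

```c
#include <stddef.h>
#include <stdint.h>

/* Early-exit comparison: run time depends on where the first mismatch
 * occurs, which a timing attack can exploit to recover a secret
 * byte by byte. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time idiom: always visits every byte and accumulates the
 * differences without branching on the data. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

In practice one has to inspect the generated assembly (as one commenter hinted) to check that the compiler kept the data-independent shape.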
I wonder which kind of domain produces the most lines of code per year?
Sure, the web is the most visible (tautology?), but how about all those embedded programs we do not even see, or do not even know exist?
(I once read that there are 4K of code inside an electric shaver)
Note this is a genuine question; I do not have a clue.
But what I do know is that JS did nothing for me when writing firmware for various devices. Neither did Python or Java, BTW.
Veteran Unix admins are used to the old adage: if it ain't broke, don't fix it.
And BTW, Debian did a good job of improving boot performance by replacing /bin/sh years ago: you can run a search for dash vs. bash.
And BTW*2, boot frameworks that include parallelism while still using shell scripts do exist.
As long as the platform does not get any new specific hardware gadget, is there a need for a change when all the CPU/board-specific stuff is already there?
To use the already cited example: does anything prevent the latest kernel version from working on powerpc/amigaone? (assuming, of course, that the devices involved are still operational)
Please avoid "ad hominem" arguments.
What I said is what I can say after several years of working on pieces of OS on several CPU architectures.
If you want performance, the ISA matters, but as soon as the expected features are present, what matters is the implementation and memory access (and I/O, if your data set needs it). Oh, and regarding SIMD, you should have a look at what the latest ARM ISA offers; you would discover why Fujitsu uses it for its HPC beasts.
You may as well apply your last sentence to yourself, as it looks like you forgot some lessons from history.
x86 was born as a desktop CPU rather than a server one.
Regarding the difference between desktop and server CPUs, I have already heard that one: it is what companies selling servers with their own CPUs told the world when they started to feel the heat from (comparatively) cheap x86 server lines.
BTW, yes, there is a difference between desktop and server CPUs, but the instruction set is moot for that purpose.
You could have:
ASSIGN 1000 TO LABEL
GOTO LABEL => will GOTO 1000
But then something worse was added:
a way to pass labels as parameters to a subroutine
(that is: a function without a return value),
with a specific syntax inside the subroutine to select a label from the parameter list:
the caller jumps there right after the return!
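For readers who never met FORTRAN's "alternate return": a rough C analogue (hypothetical sketch, not real FORTRAN semantics) is a subroutine that hands back a branch selector, which the caller switches on. That is effectively what CALL SUB(X, *100, *200) with RETURN 1 / RETURN 2 inside SUB did, except the jump was hidden in the call syntax.

```c
#include <string.h>

/* A rough C analogue of FORTRAN's alternate return: RETURN n inside
 * the subroutine makes the caller jump to the n-th label it passed. */
static int sub(int x)
{
    if (x < 0)
        return 1;   /* like RETURN 1: caller resumes at its first label (*100) */
    if (x == 0)
        return 2;   /* like RETURN 2: caller resumes at its second label (*200) */
    return 0;       /* normal return: fall through to the next statement */
}

/* The call site: the switch plays the role of the hidden jump table. */
static const char *call_site(int x)
{
    switch (sub(x)) {
    case 1:  return "label 100";
    case 2:  return "label 200";
    default: return "next statement";
    }
}
```

In C the control flow is at least visible at the call site; in FORTRAN the caller could land somewhere else entirely with nothing but a `*` in the argument list to warn you.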
RT as in Real Time, a new class of scheduling.
You said new? Then I had to test it.
This allowed the whole team to discover that looping on decrementing an unsigned 32-bit integer from its maximum value down to zero can take time (more back then than it would now).
And of course, there was no way to kill the process, as there was only one (single-core) CPU…
This reminds me of when I had to write a FORTRAN program whose natural data set was a small bag of stacks, but there was no way to create an array big enough to contain the biggest possible data set (I know, because I tried, and I got a "virtual memory full" error message).
I had to put the stacks into files and use windowing: operate on the stacks in memory, and just flush/reload when needed.
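The windowing trick can be sketched in C (a hypothetical reconstruction, not the original FORTRAN): keep only the top few elements of each stack in memory, spill the bottom of the window to a backing file when it fills, and reload from the file when it drains.

```c
#include <stdio.h>
#include <stdint.h>

#define WINDOW 4  /* illustrative window size; the real one would be tuned */

/* A stack whose elements live in a file, with only a small in-memory
 * window holding the top of the stack. */
struct fstack {
    FILE *backing;        /* elements below the window live here */
    long on_disk;         /* number of elements flushed to the file */
    int32_t win[WINDOW];  /* in-memory window: top of the stack */
    int in_win;           /* number of valid elements in the window */
};

static void fs_push(struct fstack *s, int32_t v)
{
    if (s->in_win == WINDOW) {
        /* window full: flush its bottom half to the backing file */
        fseek(s->backing, s->on_disk * (long)sizeof(int32_t), SEEK_SET);
        fwrite(s->win, sizeof(int32_t), WINDOW / 2, s->backing);
        s->on_disk += WINDOW / 2;
        for (int i = 0; i < WINDOW / 2; i++)
            s->win[i] = s->win[i + WINDOW / 2];
        s->in_win = WINDOW / 2;
    }
    s->win[s->in_win++] = v;
}

static int32_t fs_pop(struct fstack *s)
{
    if (s->in_win == 0 && s->on_disk > 0) {
        /* window empty: reload the topmost flushed elements */
        long k = s->on_disk < WINDOW / 2 ? s->on_disk : WINDOW / 2;
        s->on_disk -= k;
        fseek(s->backing, s->on_disk * (long)sizeof(int32_t), SEEK_SET);
        size_t got = fread(s->win, sizeof(int32_t), (size_t)k, s->backing);
        s->in_win = (int)got;
    }
    return s->win[--s->in_win];
}
```

Push and pop stay cheap as long as accesses cluster near the top; only when the window boundary is crossed does a file transfer happen, which is exactly the flush/reload behaviour described above.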
Given the size of the image sensor on a phone, I assume even a plain 16mm focal length could be called super-tele.
Also, even if the 35mm format totally disappears one day (sigh), I assume it will stay in use as a reference, so as to allow comparisons whatever the year of the models. BTW, I also assume there is no point trying to use the size of a phone sensor as a reference, as that size is prone to change at a high pace.
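The usual conversion behind that intuition: the 35mm-equivalent focal length scales the actual focal length by the ratio of sensor diagonals. A tiny sketch in C, where the diagonal figures are illustrative assumptions (full-frame ≈ 43.3 mm; 7.7 mm is roughly a 1/2.3" phone sensor):

```c
/* 35mm-equivalent focal length: scale by the crop factor, i.e. the
 * ratio of the full-frame diagonal to the sensor diagonal.
 * 43.3 mm is the (approximate) diagonal of the 35mm full frame. */
static double equiv_35mm(double focal_mm, double sensor_diag_mm)
{
    return focal_mm * (43.3 / sensor_diag_mm);
}
```

With those assumed numbers, a real 16mm lens on a 7.7mm-diagonal sensor comes out around a 90mm equivalent, i.e. already well into tele territory; on the even smaller sensors of some modules the same 16mm lands much longer still.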
In fact, every sensible OS takes care of clearing any page before handing it to a process.
Otherwise, fighting against Spectre et al. would be useless, because randomly reading old data from RAM by running a bunch of processes massively allocating stack and/or heap would be enough to harvest all sorts of /interesting/ information.
Sorry for the misunderstanding.
From the post I answered, I assumed you meant Sun participated in designing the chip(s).
Anyway, Sun was not alone in having a compiler and an OS port for Itanium (hint: the company I worked for had them, too). But a CPU unable to reach its announced performance, or reaching it too late, weighed heavily on the poor reception by potential customers. Plus the price. Plus abysmal performance on legacy binaries. Plus the need to recompile whenever the CPU version changed if you wanted maximum performance...
Trying to replace history with fantasy does not help Intel, you know.
Sun, as a competitor, was not involved in Itanium.
Some other competitors believed the Itanium tide would sweep away their own CPUs and gave up even before the fight. In the end, they no longer had their own CPU, nor a good enough one as a replacement.
BTW, among these competitors was Alpha, whose demise was a big boost for AMD, as some of the design team went there after their boat was sunk by management.
Running a PC is not the sole purpose of CPUs, you know.
I have participated in writing the specifications of several SoCs with various CPUs on them, and then in writing firmware while other engineers actually created the chip. In every case, the development platform was an x86 PC running either Windows or Linux.
You know, there is low power and low power.
Or: so-called low power and really low power.
Thus, it all depends on what kind of energy source is available to the thing (as in IoT).
If the juice comes from a battery nobody will change during the life span of the sensor, you need really low power (for which Sigfox is not the only existing option, but it is one, while LTE is not).
You are lucky if your users have no idea.
From what I saw, there are indeed such users. But among others, one can find:
- Those who assume it is magic ("Oh, you need time to think before acting?")
- Those who believe they know. <= ALERT: call for trouble
Note this is not specific to computer science, BTW.