I simultaneously disagree with you and them. I think you could do something useful by throwing money at the problem, but they act like you can get a person educated in a generic way that's primarily self-directed. You certainly can learn a lot of things, especially IT-related skills, entirely on your own. However, no matter how good the guides and resources are, that mostly works when you have someone motivated to learn a lot of tiny details and creative enough to put them together in a useful way. The benefit of more organized education is the ability to learn from others, both the teachers and the other students. I gained a lot of skills by discussing what I was doing with others who were interested and capable of doing similar and hearing their ideas, and I passed some ideas of my own to my fellows, and I also got some valuable information by talking with professors who had long careers in industry before they turned to teaching. No matter how great the lecture someone records and puts up on YouTube, it won't give you those things. You can do without them, but if you're going to, you don't need to bother with more textbooks as there are plenty of those for free right now which provide the information a self-taught student will need.
Posts by doublelayer
9408 publicly visible posts • joined 22 Feb 2018
IT needs more brains, so why is it being such a zombie about getting them?
From browser brat to backend boss: Will WASM win the web wars?

Re: There is no direction to go but up, starting from EXCEL
I didn't vote on it, but do you really not understand why people might support what they said and why your views on their comment are similarly open to debate? For example, you say that they have "no knowledge of the tech". You don't cite any reason for saying that and it wouldn't matter because they didn't say enough about anything to be able to guess what their experiences are.
As for why someone might be negative about Excel, it's because it's a really inefficient method of coding which supports almost none of the useful features of programming languages, and if you rely on it, those differences will start to be noticed. The original comment that started our thread pointed this out: if you can take a spreadsheet that takes twenty minutes to load, and possibly longer to update, and turn it into a task that lasts seconds without having to optimize it beyond a generic compiled language, then Excel is not only showing significant inefficiencies such that a 99.59% speed boost was possible, but it's inefficiency that is clearly noticeable to the user. At that rate, you always do the 99% speedup just because it's there, but if it's 0.5 seconds to 5 ms, then it doesn't matter as much unless you're doing it frequently. When you're improving from minutes to seconds, it always matters. This is a real problem, and you've already read about it here, but you attribute negative views toward Excel to snobbery and insult the person who wrote it on claims you can't back up.
Want tunes with that? India-made POS terminal includes a speaker

Why the confused headline and opening paragraph?
I know the writer probably didn't write the headline, but both their opening paragraph and the headline produced from it are written as if this is completely crazy as a feature, while the rest of their article provides a completely logical reason for its inclusion and it's not the only one. It would also work for those who can't see the terminal due to a disability and, given the language support, could help for those traveling to a location in which a language they don't speak is spoken. Of course, in the latter case, they could probably look at a screen and see the price, but not necessarily instructions or other text. I'm not surprised that many PoS systems have left it out, but seeing it put in this one isn't very surprising to me.
What happens when What3Words gets lost in translation?

Or a few primarily British accents where or and aw sounds can sound the same, meaning that flaw and floor can't be distinguished by sound. I wonder whether W3W, which appears to be American (where telling those words apart is trivial), has that pair in its list of homophones, extended to derivative words.

No, it does none of that stuff. It is an encoding system, not an error-detecting one, let alone an error-correcting one. An error detection system would have additional tokens such that most incorrect combinations would simply be invalid. You could, for example, add in a fourth word which is based on the three preceding words and must be a certain one for each preceding group, so a misheard combination would be rejected unless you happened to hit a workable combination by luck. Error correction would likely require even more words so the data could be sent twice. Their algorithm doesn't have that. I don't know why you said it. It makes you sound like you either have no understanding of what error correction is or of how W3W works, and I don't mean from a technical perspective but the bit that's obvious even if your only experience of it was reading this article, which points out that it doesn't do error correction.
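The fourth-word scheme described above can be sketched roughly like this. The tiny word list and the hash-based mapping are invented purely for illustration and have nothing to do with W3W's actual algorithm:

```python
# Sketch of an error-DETECTING extension: append a fourth "check" word
# derived deterministically from the three location words, so most
# mishearings produce an invalid combination. Word list and hash choice
# are illustrative only.
import hashlib

WORDS = ["flaw", "floor", "think", "sink", "thank", "tank", "laminate", "facade"]

def check_word(three_words):
    """Pick a check word deterministically from the three location words."""
    digest = hashlib.sha256(".".join(three_words).encode()).digest()
    return WORDS[digest[0] % len(WORDS)]

def encode(three_words):
    """Return the three words plus their check word."""
    return three_words + [check_word(three_words)]

def is_valid(four_words):
    """A misheard word almost always changes the expected check word."""
    return check_word(four_words[:3]) == four_words[3]
```

With a word list of size n, a single misheard word slips through only when the new combination happens to map to the same check word, roughly a 1-in-n chance, so a realistically sized list catches nearly all single-word errors at the cost of one extra word to say.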
From the rest of your statements, I'm going with both. For example, you say that "natural language has enormous redundancy". It doesn't, by the way. It has a bit of redundancy, but really not too much as you probably know if you've ever been misheard. However, this isn't natural language, it's words. You can probably guess the ends of my sentences if you've read the starting half because I'm making a single point. If, however, you read this clause and I invite you to guess the following ones: laminate energetic truckle facade, you don't have so much of a chance. All redundancy is lost even though the same tokens are used. You also cite some "forensic analysts" who say that "natural language is constantly self correcting". I don't even know what these analysts are trying to say, but I'm guessing it's not what you said.

Re: Amateurish, at best.
Admittedly, they have a much larger keyspace they're trying to include, which makes their challenge a bit harder. If they created a combination for every square on a Mercator map to 0.00001 degree, they'd need five words from a set of 1024 to include them all, which they evidently thought was too long. Of course, they could try improving this by tossing out the squares that are open ocean, Antarctica, etc., but they decided not to. The problem is that, having found this problem, they decided to allow plenty of confusable pairs, but to make their algorithm try to keep apart the pairs they thought of. That's a rather weak way to deal with that problem.
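The word-count arithmetic above holds up; a quick sanity check, using the grid resolution and word-list size from the comment:

```python
# Back-of-the-envelope check: covering every 0.00001-degree square on a
# 180 x 360 degree grid needs five words from a 1024-word list, because
# four words fall short.
squares = (180 * 100_000) * (360 * 100_000)   # ~6.5e14 grid squares
assert 1024 ** 4 < squares <= 1024 ** 5       # 4 words too few, 5 enough
print(f"{squares:.2e} squares vs 1024^5 = {1024 ** 5:.2e}")
```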

Take people with a lot of different accents and a bad phone line and the clashes will start to be noticeable. A lot of people speaking English will pronounce long E and short I differently. A person who has learned English recently and started with a different language may not. In fact, that vowel confusion is a stereotypical accent characteristic of a few languages that use more pure vowel sounds than English does, although its presence in practice varies a lot. Similarly, there are some languages that don't use the dental fricatives (the various sounds that th can make in English), and some people who have learned languages that lack them will do something else as they learn to make the same sounds. Two common methods effectively replace it with a t or s sound. Take a person like that and suddenly, words like think/sink or thank/tank become harder to tell apart, especially if you add in static on a phone or radio. This isn't only for people who recently learned English. For example, some Irish accents have a different pronunciation of th which is quite distinctive from t, most of the time, but close enough that I've seen a misunderstanding between an Irish person and a British person even though both were native English speakers.

Re: Hmmm...
Yes, in most cases. However, if the connection was brief for some reason, for example the person is injured, and the person handling the emergency is far from the area, for example a state-wide emergency service because the local ones are overloaded, you may lose that information. This is why a word-based location system needs to be very careful about not putting things that sound similar anywhere close to each other, which might include keeping them out of the same large areas. If they don't have error correction, which they don't, it might be worth keeping all the possible clashes in other countries, or at least on different sides of a large country.

Re: Mountain rescue
"If you muff the second or third decimal place of *any* lat/lon you can easily be several km out"
Assuming you meant taking 50.120 to 50.121, no, it wouldn't put you that far out, only about 110 m. Even changing the second decimal from 50.12 to 50.13 would be about 1.1 km, which is enough to cause a problem, but not quite as high as you state. This is why some code with error correction would be useful. However, the W3W system appears not to have any of that, just turning the numbers into words. If you know that you're going to hear a series of numbers, you have less possibility of mishearing than if you could hear any of ten thousand English words, especially if there is a bad connection or unfamiliar accent in play. On a bad phone connection, someone misheard me saying "I am tired" as "I was fired", so it's really not that hard to make a mistake. Their protocol doesn't have any correction mechanism, so you have to hope that they have placed any potentially misheard words very far apart. Being sent to Arizona instead of Britain is a giveaway, but being sent to somewhere twenty kilometers away might not be if the area is sparse enough.
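The rule of thumb behind those distances, using roughly 111 km per degree of latitude, can be written out like this (the constant is approximate and ignores how longitude lines converge away from the equator):

```python
# Rough distance moved by an off-by-one mistake in a single decimal
# place of a latitude: ~110 m for the third decimal, ~1.1 km for the
# second. KM_PER_DEGREE is an approximation for latitude only.
KM_PER_DEGREE = 111.0

def error_km(decimal_place):
    """Distance for an off-by-one error in the given decimal place."""
    return KM_PER_DEGREE * 10 ** (-decimal_place)

print(f"3rd decimal: {error_km(3) * 1000:.0f} m")   # about 110 m
print(f"2nd decimal: {error_km(2):.2f} km")         # about 1.1 km
```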
Farewell WordPad, we hardly knew ye

Re: So LibreOffice it is then.
I'm confused. Your comment and the article are both quite clear that WordPad can't save to .docx, but the version I have lists it along with .rtf, .odt, and .txt as options, although .rtf is the default. A simple document saved as .docx correctly, but I didn't try anything complicated in it. This is the latest production version of Windows 11, so maybe it was added more recently, but since I haven't used it in probably a decade, it could easily have been there longer than we thought.

Re: slow transformation
With Home, you're correct because the workarounds are not obvious. With Pro, however, you are not. If you're not connected to the internet, the setup asks for WiFi but has a button for continuing offline right in the GUI, which will allow you to set up a local account. The feature's clearly there in Pro, even though they have, in my opinion unacceptably, hidden it in the Home edition.
Right to repair advocates have a new opponent: Scientologists

Re: Expose
That was probably a nice-looking copy, not the text as a whole, which he probably already had access to. In addition, that was in the 1800s when printing was a bit more expensive. Nowadays, there are copies of most religious texts nearly everywhere you look. No matter what language you speak, if you have internet access you can get a copy in probably under a minute, and there are people who make it their life's work to print up paper copies of their favorite one and give them away, just as there are translators who will learn a language, or in the past make up a system of orthography, just to translate one into that language.
Arm wrestles assembly language guru's domains away citing trademark issues

It is true that "trademark violation" isn't a technical term. I think we all know what I meant, though. A use of a trademarked term that a court will agree is not allowed if the trademark holder pursues a case. It's not that unusual a term to describe it, since the law does have rules for what can happen with a trademark and the court will decide whether the use was permissible based on whether it follows those rules or not. The rest of your comment appears to say the same thing that I said.

I've stated in other comments that they're wrong about this being a violation, but they do have a responsibility to protect their trademark if they want to keep it. One of the reasons your trademark can be revoked is that you haven't been using it or that you have allowed others to do so to an extent that degrades the usefulness of the trademark to distinguish products. That is a relatively narrow thing, though, so this website not only doesn't fall under that responsibility, but likely doesn't violate the trademark in the first place.

Re: Seems a bit Monty Pythonesque...
In this case, it wasn't the registrar who was contacted but the host. I'm sure that, if you had a host that ignored the lawyers, they'd have gone after the registrar, but just having a name that the registrar won't take from you won't protect you if your host will respond and cancel your service.
East Germany didn't have a presence on the public internet, at least not an announced one. They did use the .dd domain for a private network between a couple universities, and they had some links that may have used someone else's internet connections, but probably not too much before the collapse.

Re: It's a bloody word in our bloody language!
The point, however, is that none of them actually violate trademark for exactly the reason you state. Apple Insider isn't a tech company, it is a media company that writes about tech companies. It doesn't matter that, when it says Apple, it's talking about the company instead of something else, because it isn't making products and alleging that Apple did so. I don't know what arm-assembly did exactly, but it was likely also an informational site that was not a chip manufacturer. Had they been selling something that implied that ARM endorsed it, they could have complained about that, but if that was the case, they could have complained no matter what domain it was and could complain now that it's on a different one. What they actually complained about was the use of their trademark as a substring on a site that didn't otherwise have any conflict with their trademark, so their complaint is probably invalid. Try getting a web host to understand the difference, though, because they deal with a lot of real fraudulent sites all the time and probably respond quickly if some verifiable company sends a note saying that something needs to be taken down.

Re: It's a bloody word in our bloody language!
"I think the lawyers of the first two companies are very versed with taking down sites."
No, not really. They don't do what ARM has done here. For example, if I look through a list of popular domains for Apple, I can find quite a few sites that contain those trademarked terms. For example, if I search for Apple and remove the ones owned by Apple, I find the following:
appleinsider.com: covers Apple products
applebees.com: This is a restaurant.
patentlyapple.com: Some guy posting articles related to Apple.
applevis.com: A site that talks about using Apple products if you're blind or visually impaired.
applegazette.com: I don't know why it's called this, but it's a blog that doesn't talk about Apple products.
This is not all the ones I found. This is a subset of addresses that clearly had "apple" in the name. All of them are still around. I can do the same with Windows. I'm sure they have filed trademark complaints before, but they probably reserve that for malicious uses or someone actually trying to impersonate them, not someone who merely talks about their products like ARM has done.
I'll see your data loss and raise you a security policy violation

Re: Sounds familiar
It is not IT's responsibility to audit everyone's files. Different users will have different requirements. For example, I keep some stuff on my work machine's local drive. It is there because I can run things faster locally if it doesn't have to be pulled down, but anything important also exists on some other system such that I can lose the disk without losing important data. A script that checks whether there are files on my disk will say that there are, but it won't indicate a problem. If you were strict about that, then any program which creates files in the documents folder (I've known several) would set it off. If you check what people are using on the server, a user who has exactly zero files might be suspicious, but there could be some jobs that don't require storing files there. If they mostly work in email, either having a job that's entirely communications or a job that's not done on the computer, then maybe they really don't have files. If they put one file there once and never used it again, the script will indicate that their folder isn't empty even though it contains nothing of use, so you'd need a much more complicated script to look for changing files or a certain expected quantity of stored data with both false positive and negative possibilities.
With this level of complexity, it really isn't as simple as you make it sound. It's also not what IT is there to do. They create the policies and make things work. They're not supposed to police everything a user does with a computer. When other policies exist, they usually don't come with a requirement for an automatic system to notice every violation. For example, if there is a room where you may not bring liquids because there is sensitive hardware there, then the general steps will be to put a sign on the door and to make it clear that this is important, not to build a water scanner and install it so you can't enter without passing. I'm generally supportive of having the computers check for risks and warn about them, but that doesn't mean that every possible problem needs to have a detection script. If employees don't bother to follow the policies that have been set up for a reason, bad things might happen.
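To illustrate why such a script ends up complicated, here is a rough sketch of the naive audit and a slightly less naive one. The paths and the 90-day threshold are made up, and, as argued above, both versions still produce false positives (program-created files, one stale file) and false negatives (genuinely file-free jobs):

```python
# A naive storage audit and a marginally better one. Neither can tell
# user data from files a program dropped in the folder, which is the
# core problem described above.
import os
import time

def naive_audit(folder):
    """Flag the folder if it contains any entries at all."""
    return len(os.listdir(folder)) > 0

def slightly_better_audit(folder, max_age_days=90):
    """Flag only recently modified files -- still guesswork, not policy."""
    cutoff = time.time() - max_age_days * 86400
    return any(
        os.path.getmtime(os.path.join(folder, name)) > cutoff
        for name in os.listdir(folder)
    )
```

Even the "better" version would flag a user whose backup tool touched a file yesterday and pass one whose only copy of a critical document has sat unchanged for a year, which is why detection scripts are a poor substitute for a stated policy.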

Re: BTDT
/tmp, on many systems, is not on a disk and is not maintained through reboots. There are lots of other places that are on filesystems that persist. A program can keep some temporary data, in fact it makes a lot of sense, but if you deliberately put your data where it will be lost in a certain case, then you should be able to recover from that situation somehow. In addition, unless the server was forcefully powered down, any properly written software would have had the standard shutdown notifications, whether that is the service system stopping them with the scripts they wrote for that purpose or monitoring for shutdown notifications. Software that expects that the server will never be rebooted is not adequate.
Another basic rule is that if you need a file to still be there no matter what happens to the computer, putting it on a ramdisk is stupid. If you put it on a ramdisk, have a plan for what you'll do if that disk ceases to exist. Depending on what the program does and how large it is, that can be anything from "no problem, they just run it again from the start and if that causes a delay, that's on them" to "store completed stages and information to resume in a different program-specific temp directory and only keep the current stage's work in /tmp".
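The checkpoint-and-resume pattern described above might look roughly like this; the paths, stage names, and JSON format are all illustrative:

```python
# Keep only the current stage's scratch data somewhere volatile,
# checkpoint completed stages to a persistent location, and checkpoint
# again on the standard shutdown signal.
import json
import os
import signal
import sys
import tempfile

CHECKPOINT = os.path.expanduser("~/.myjob/checkpoint.json")  # survives reboots
scratch = tempfile.mkdtemp()       # volatile; may vanish on reboot

state = {"completed_stages": []}

def save_checkpoint(*_args):
    os.makedirs(os.path.dirname(CHECKPOINT), exist_ok=True)
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

# Respond to the standard shutdown notification instead of assuming the
# server never reboots.
signal.signal(signal.SIGTERM, lambda *a: (save_checkpoint(), sys.exit(0)))

for stage in ["parse", "transform", "report"]:
    if stage in state["completed_stages"]:
        continue                   # already done before a restart
    # ... do this stage's work using files under `scratch` ...
    state["completed_stages"].append(stage)
    save_checkpoint()              # completed stages survive a reboot
```

If the scratch directory disappears mid-stage, only that stage is redone; everything recorded in the checkpoint is safe, which is the whole point of separating the two locations.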

Re: Outlook...
I mostly agree with you if we're only talking about email servers. While storing stuff in deleted isn't a good idea, if someone has three gigabytes of mail they want to archive in a different folder, we have servers which have plenty of storage to hold that. Admins that continue having tiny quotas do not help unless there is a real space problem, and if there is, it's more likely to be solved by buying bigger disks or focusing on users that really use a lot more than everyone else.
However, storage quantity was really not Huon's users' problem, even though that was the initial symptom. Their problem could have been whatever reason they had for saying that the servers were unreliable, which could have been a very serious problem. It could, however, be a completely invalid assumption made by someone who has no clue what they're talking about, and this seems more likely to me given that the person who said it didn't understand what C:\temp is for. You don't need a large disk if users are supposed to put all their data on a different disk. It should have had enough space for the temporary data that it would need to store and for system growth, but not necessarily for all the work the team did, since that was supposed to be stored elsewhere and that system could have as much storage provisioned as needed.
BOFH: What a beautiful tinfoil hat, Boss!

Re: Another new boss!
Whenever they get a boss who understands them and works to help them, one of three things happens:
1. The boss decides to move somewhere else to become a BOFH-style person themselves in their own fiefdom. This is the best scenario for the boss, because being the BOFH's friend often works well if you're careful.
2. The boss's BOFH training is not as strong as expected and the boss degrades back into a normal boss. This is a risky option which can lead to anything from firing to death.
3. The boss decides to try taking over the company and pushing out the BOFH. This is almost invariably fatal.
OpenAI urges court to throw out authors' claims in AI copyright battle

Re: Transformative means copyright does not apply
There is plenty of verbatim text in models like this. They're very large. The fact that they throw away some of the training data rather than retaining verbatim copies of everything and that the copies they have are in a different format doesn't change that. It can and has quoted large chunks of text verbatim without having someone provide it, simply when querying for the title to be quoted. True, they sometimes try to stop you because it was clear they would be sued, and I don't think anyone's gotten it to print an entire book with one command because it was designed for smaller output chunks, but the text is in there in a format that's much closer to the original than you have suggested.

Re: DMCA?
"because of course the only way OpenAI could find a copy of any of their works in digital form is if OpenAI had themselves broken into the Kindle app store or something like that."
When I read this, I thought you might be sarcastic. I can't make that make sense with the rest of your comment, though, so I fear you're being serious. If you were, you're quite wrong. There are plenty of copies of books on the internet without permission, and they're not that hard to find by accident. Publishers go after particularly popular ones, but there are many that are on sites with small readership that publishers either don't know about or don't want to spend hours to fight when they can pop up again in ten minutes if they want to. Those sites are breaking the law, but just because the publishers haven't stopped them doesn't mean that a crawler can read the book from there and do whatever it wants with it. The LLMs have scraped a lot of the internet, and I'm sure they've included plenty of information that wasn't supposed to be on the pages it was on. Having stumbled on copyrighted works when I wasn't even trying to, I know it's not that hard.

We get our language from what we read and what we say. The modifications that have been produced recently are of our own collective making, and most of them have a point. In fact, I would go as far as saying that they are an improvement in many cases, because modern writing tends to be more informal and thus gets a point across without the excessive wordiness common in past centuries. Nothing about the older style is intrinsically better just because the books we still read from that period tend to be pretty good. Many bad books were written like that but we don't read them nowadays because they didn't stay popular enough to become classics.

When I came back to read replies, I noticed an error of my own in my comment. In case anyone thought there was something wrong there, you're right, beta radiation involves the destruction (well, splitting that's unlikely to be reversed) of a neutron, not creation. I just have to correct myself on that one.

They had big crawler bots to collect a bunch of text from the internet, not paying too much attention to where they got it. You can find a lot of books online somewhere if you do that, because people like breaking copyright and because there is a lot of fair use out there. For example, I've quoted passages of books before, and for a book that's popular enough, there are probably quotes of most of the passages somewhere. A bot that's specifically designed to make statistical connections between text chunks won't have much problem stitching those back together, especially if it has partial or complete copies of the books from a site that wasn't supposed to have them up there.

For two reasons. One problem is that training only on old books will make a bot that can talk like old authors, and I know there are some people who view that as a benefit, but it's not natural for modern readers. The bot would also lack any information about the modern world because there would be nothing in its training set. If you asked a simple question like "What is beta radiation", you might not get any answers because one core part of understanding beta radiation is that a neutron is created, and the neutron wasn't even discovered until 1932, so it wouldn't be mentioned in much public domain material. Even as that got added in, you'd keep having this problem with stuff that is common knowledge and not found in works old enough to be in public domain. You would probably have to fill in the gaps by finding other public domain stuff, or more likely stuff under a permissive license, which could be used. Many wikis online would probably be licensed under acceptable terms, although that's no guarantee that each of those will necessarily add more useful information.
The second reason is a result of this. LLM companies don't want to produce a bot that has obvious flaws. They'd much rather produce a bot that has really big hidden flaws, especially if their competitors are making one like that. Unless everyone is restricted to using only public domain content, no big company will ever restrict itself in that way, and if they ever are restricted in that way, I expect that most of them will continue using content they're not allowed to, just doing a bit more to hide that they're doing so. Their product needs to sound authoritative or it will become too obvious that it made everything up and people won't buy it. They'll go to whatever lengths they need to to prevent people from seeing them backslide.

Re: Please can we have a standard like a robots.txt file
Do you really want it to be opt out? And sure, we can have such a standard just as soon as someone makes it clear that the copyright in stuff we made includes the right to restrict its use as AI training data. If regulators agree with companies like OpenAI that they are free to mash up anything they find online, then the opt-out system will have no legal meaning and can be cheerfully ignored by anyone who wants to.
We all scream for ice cream – so why are McDonald's machines always broken?

Re: Wait, their milkshake maker works like an HP printer ?
No problem, still display EC402 but have a book which lists those codes and the longer messages. You could digitize it into one HTML file with a bit of JS to jump to the code when entered. It wouldn't be the first manual to have an index of error codes. They're not doing it to save a bit on the screen. They're doing it to make a lot on resetting the computer, and the fact that they make enough on that to pay for people to travel to the restaurants and do it indicates that it's quite the profitable enterprise.
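The lookup-book idea is trivial to implement; here is a sketch with invented codes and messages (the real machines' error codes aren't public in this form):

```python
# A terse on-screen code plus a lookup table of full messages, the same
# idea as an HTML page with a bit of JS to jump to an entered code.
# Codes and texts below are made up for illustration.
ERROR_CODES = {
    "EC402": "Hopper temperature out of range; check the heat-treatment cycle.",
    "EC171": "Mix level low; refill the hopper before restarting.",
}

def describe(code):
    """Expand a displayed code into its full manual entry."""
    return ERROR_CODES.get(code.upper(), f"Unknown code {code}; consult the manual.")

print(describe("EC402"))
```

The point being that the terse display is a cost of pennies to work around, which supports the argument that the obscurity is a business choice rather than a screen limitation.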
We're about to hit peak device count, says HTC veep, as AR takes over

Re: @Pascal Monett - Not going to happen
However, that case was clearly related to an illness, not a new way of checking the internet. Brain surgery for medical reasons makes sense, although I think many would be wary about the risks, but brain surgery for anything else is not a great idea. If someone dies because they were a test subject for fixing nerve damage that led to paralysis, we'd understand that the risks were worth it, and they would hopefully agree, as they'd know about the risks. If someone dies because they were a test for a chip that can play video inside your head so you don't need a screen, then I don't think we would be so much in favor. The high risk of something like that means that neither we nor companies will probably want to do research into brain-embedded consumer electronics. I think futurists know this but like to pretend otherwise, since the idea of a flawless computer implant like that could be cool if they don't think too much, and not thinking is one of their specialities, right along with copying from science fiction authors who actually considered possible consequences.

Battery life
I don't think we'll see these sci-fi visions coming any time soon, but if they want them, the first problem will be making them last more than an hour or two before the battery dies. This is a problem people who want to look into the future don't often bother with, because the battery is one of the least fashionable parts of a device. Yet, it's going to prevent people from adopting lots of technology as a daily tool because they don't want to be tethered to a wall. Some hardware can get away with battery life like that. If people put this on to play a game and then take it off and do something else, it doesn't have to last all day. If people are going to take it with them, they'll want some reason to believe that it won't be dead when they turn it on.
That's far from the only problem. Just talking about technical limitations, there is the size and weight problem which doesn't work that well with the battery problem, the getting sufficient data in and out problem, and the interface problem of how you interact with the stuff that's being displayed in front of you. The latter will also be tricky, because people don't want to use voice commands for everything and I'm guessing waving your hands to mimic interacting with objects you can't feel is not going to prove any more popular. I get why these analysts haven't thought of the even more plentiful reasons why people won't have anything useful to do with the devices, but you'd think they could at least consider the obvious technical limitations.
Microsoft ain't happy with Russia-led UN cybercrime treaty

Nothing in the proposal, at least as I understand it, says that a company can exempt its employees from provisions by saying that they're really researchers. Not that this would be an acceptable solution to the problem anyway. Security research is a very broad thing, and a lot of activities can be included. None of them should be forbidden so that dictatorships can have more surveillance powers. Changing "forbidden" to "license required" doesn't change the situation.
Tor turns to proof-of-work puzzles to defend onion network from DDoS attacks

Re: This would work fine in the days of single machine attacks.
Yes, but in most cases, it's a few thousand machines repeatedly connecting. It's not just one connection from each machine in the botnet, but a flood from all of them combined. If all the nodes are restricted to a lower connection rate, then the scale of the attack will decrease unless the attacker can get even more machines from which to attack. Depending on who the attacker is, this might involve more expense than they want to incur. This is especially the case for so-called hacktivists, who often like DDoS as a weapon because it's pretty basic and they don't have the skills to do much else, which also means they don't have the skills to build a good botnet and so often simply buy time on someone else's.
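The proof-of-work gating in the article raises exactly that per-connection cost. As a rough illustration only (Tor's actual defense uses a different, purpose-built puzzle, and every name and parameter below is my own invention), a hashcash-style version of the idea looks like this: the client must burn CPU finding a nonce, while the server verifies it with a single hash.

```python
import hashlib
import itertools

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: find a nonce so that SHA-256(challenge || nonce)
    starts with `difficulty_bits` zero bits. Cost grows ~2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Server side: one hash to check, so verification stays cheap even
    under a flood of connection attempts."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: each bot pays a per-connection solving cost, while the defender only verifies, so a fixed-size botnet delivers proportionally fewer connections.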

"Doesn't that then immediately link an anonymous site to an entry in a global unchangeable ledger"
In a way, but three factors limit how bad that could be for them:
1. Wallets can be created easily, so in the worst case, it's linked to an empty wallet. Some Tor sites already accept donations through cryptocurrency, so known wallet IDs aren't that rare as it is.
2. There are some cryptocurrencies that are designed not to make public information about previous transactions, so they could use one of those.
3. It wouldn't be very hard to make a pool of a lot of different people, including but not limited to the hidden service. If you mined in the pool, you'd know that one of the twelve thousand wallet IDs in the pool belongs to the operator of the service, but not which one. The pool wouldn't have to know either; as far as they know, they get hashes from someone and pay out, but they don't know who did the hashing.
I'm still glad they didn't go that way. It would inevitably lead to more abuse and, by using a more complex system, there would be more capacity for vulnerabilities in a protocol that already has a few but to which no alternative at a similar scale really exists. Oh, and it would be more unpleasant for the users.
OpenTF forks Terraform, insists HashiCorp is the splinter group

Re: A question for y’all
That's the trouble with many proprietary semi-open licenses. They're intended to look like they are basically the same as open source, just not for AWS and Azure. Since you're not AWS or Azure, then you shouldn't feel any difference and should stop complaining about the change in license. They then have the freedom to interpret the new license conditions as applying to whoever they want, which means that the original people might actually limit it to the large cloud providers, but if someone purchases the company later and decides to apply it to you, they are able to send threatening letters. It might not stand up in court, but they have lots of ways to try to threaten someone into paying. This is why I don't view those licenses as anything other than proprietary. Like or hate the OSI, I support the definition they wrote.

It really depends on how conservative users are. I tend to let license conditions rule my actions even when there's little chance of enforcement; if there is a clause that says I need a license for production use, then I'll assume that production use probably includes a lot of things and will either buy a license or more likely just not use it. Yes, they may not care about my tiny uses and never try to enforce that bit, but I tend to assume that if they can, then someone might, so I should act as if they will after I've invested resources into that tool.
Silicon Valley billionaires secretly buy up land for new California city

Re: They keep trying to build in deserts
"In general water can be piped tens of miles to where people live at reasonable cost."
We're not talking about tens of miles. North America is a large continent. If you're living in the southwest desert region, you can be hundreds of miles from the nearest ocean. The solution they've used thus far is to pipe water from a river. This has caused some problems with all the people trying to use the same river for agriculture and cities, and that's with most of that desert still empty. If we tried to move people over there and build more cities for them to live in so they're not living where land is arable, you'd need a lot more water. What they're doing now is likely not sustainable for the river, and there's no way they could increase their draw without doing much worse damage. You can of course build a really long pipe to bring water in from the coast, but that's expensive, so a lot of people will be looking for reasons why they don't have to do it: how much does this one farm take from the river, it can't hurt that much, can it?
As for ethanol-based fuels, one reason people want to stop using fossil fuels is because they produce pollution including carbon dioxide and methane, which ethanol doesn't fix. It fixes some problems, but not all of them. Meanwhile, you could put solar panels in a desert where there is little obstruction for sunlight as a more efficient power source, although that too has its problems with the need for water to keep the panels clean and isn't a perfect power source. You could argue for keeping people out of desert areas for solar about as easily as keeping them away from arable land for ethanol.

Re: They keep trying to build in deserts
The problem is that, if you put people in places where nothing grows, you will spend a lot more resources doing things like delivering supplies to them and insulating them from the reasons that stuff won't grow. Large cities in the desert need lots of water, which obviously isn't there, since it's a desert. Some will be lucky enough to be large cities in a desert near the coast, so desalinization is an option. However, in a country like the US, there is a lot of desert that's not near the ocean, meaning that if you put too many people there, you'll quickly drain whatever water is nearby. Trying to use a few large rivers to provide water for millions of people and, where possible, agriculture is going to cause serious damage to those rivers. You also have an increased risk of fires in a place like that, and it is difficult to keep fires from being dangerous, or to keep rebuilding what they have destroyed.
At current levels of food production, do we really need to use all arable land for agriculture to the extent that cities are preventing us from doing so? There is more arable land in other parts of the continent which isn't farmed for various reasons, including environmental concerns and more complicated transportation routes. This is only relevant if we're building entirely new cities, since most existing cities were put on arable land because, when they were founded, they were getting more of their food supply either from the surrounding land or near the river that served as the primary method of long-range transportation, and now that there is a city there, it's not worth moving it.
Japan complains Fukushima water release created terrifying Chinese Spam monster

You have some logical problems. One is reading "2% of" as 102%, when it actually means 2%. The other is assuming that, if people are complaining about one thing being dangerous, then the genuinely dangerous one must be the thing they're not complaining about. People tend to complain about the things they think are worse. Neither is dangerous, and this should be obvious, but they're not focusing on the carbon because it's really quite safe and there's plenty of it all over the place, including in your body, right now. In fact, there's even more of it in your body, depending on your age, because when we were doing atmospheric nuclear testing, we put a lot more of it up there. It's not the only thing we put up there, which is why we should not do any more atmospheric testing (in fact, we can just not do any testing), but the carbon wasn't the dangerous part of that pollution.
Wordpress sells 100-year domain, hosting plan for $38K

Re: Off Topic
"On a day when the UKs air traffic control borked the Reg has nothing to say."
Oh, that's a shame. Tell you what, here's a different article that looks interesting instead, published eight hours before your comment:
UK flights disrupted by 'technical issue' with air traffic computer system

Re: Nearly half the web?
I have no numbers on it, and I'm sure they have a great incentive to exaggerate. However, I see it used on so many sites that don't need it. A lot of small companies and organizations seem to think it's the perfect CMS for their websites, and I downloaded firmware update files from a wp-content directory about two hours ago. I have two sites with it myself, although technically only one of those is mine, the other being one that someone else designs and maintains; my job is to make sure the server is still paid for and not broken. It is very common on small sites, both personal and business, so whatever the number is, it's probably quite a lot of domains that have a WP backend somewhere.

I think about this sometimes when I find sites that haven't been updated since 2004 but are still available without going through the Internet Archive. Sadly, this isn't very common, but I do stumble on such things from time to time and start to wonder how the author kept it going. Of course, the simple answer is that they are still alive and pay for the domain renewals and somewhere to host, but if they were doing that, I would assume they'd update at least a few things on their site. I'm thinking about personal sites that use ancient presentation tags that no longer work, partially because they were browser-specific, or sometimes it's a person with their personal freeware that "runs well on Windows 95, 98, ME, and probably 2000 but I haven't checked yet". Sometimes it still would run on Windows 10, sometimes it is broken on everything except a VM, but either way the files will make it obvious that they're the same as they were decades ago.
Since the sites I run will go down as soon as I stop paying certain regular bills, I kind of like the idea of one that's designed for more longevity. I wouldn't pay this much for WordPress to try to do it, though. My cheaper solution has been to make sure the archives are aware of public sites that I want to archive in case the proverbial bus comes for me.
Hope for nerds! ChatGPT's still a below-average math student

Re: " throw in a few trick questions"
"I'm even less impressed by handwaving appeals to res cogitans to try to claim human cognition is magically different."
It doesn't impress me either, and I've argued against it on multiple occasions. I question whether you correctly parsed my comment, as I didn't say that or anything like it for this case. I don't even have to argue by neuroscience here. My argument is purely technical and concerns what we are capable of doing as independent organisms and what an LLM is capable of doing.
At a very basic level, we are capable of taking an action and seeing what happens. We can drop an item and notice that it falls, and from this arrive at a guess that every item will fall when dropped. A bit more experimentation and we can start to guess that items will fall faster if they're more compact and start to form ideas about air resistance. An LLM is not able to drop any items or watch what happens, so any information it has about what objects do has to be told to it. It is able to answer questions about kinematics only because someone has written them down and it is repeating what someone else said.
The same applies to my point about whether LLMs reason (they don't). This does not mean that a program is incapable of reason. I think that, if we got a computer powerful enough and a program designed to make more general logical connections, it would be capable of making generalizations the way we do, and we definitely do, as generalization is one of the core parts of education. LLMs, however, are not written to do that. Their only purpose is to write, and they have not been set up with logical constructs more advanced than writing likely passages of text. I am not saying that human brains are more powerful than computers in some way. I'm saying that human brains do things that this program was never intended to do, which is why our brains will be capable of things that this program never will. Give me a new type of program doing different things in a different way, and that could all change.

Re: " throw in a few trick questions"
No, we analyze correlations between a lot of things, not just tokens. That's why we are more capable than LLMs. We can make correlations between actions and consequences, intentions and results, processes and products. It can only make correlations between one chunk of text and another chunk of text. We can also experiment on our own to gain new correlations, rather than waiting for a new training run to supply them from someone else's exogenous information. Oh, and we have the ability to generalize, which it mostly does not; this is why a lot of children can solve any mathematical equation using operations they know, and any computer can do it extremely quickly, but GPT cannot solve a simple equation if it hasn't already memorized the answer.
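To make the generalization point concrete: a conventional program embodies the rule rather than memorized answers, so any randomly generated instance is as cheap as any other. A trivial sketch (the function name and the equation form are my own, purely illustrative):

```python
import random

def solve_linear(a, b, c):
    """Solve a*x + b = c for x. Works for any coefficients, whether or
    not this particular instance has ever appeared on the internet."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return (c - b) / a

# Any randomly chosen instance is handled the same way as any other:
a, b, c = (random.randint(1, 99) for _ in range(3))
x = solve_linear(a, b, c)
assert abs(a * x + b - c) < 1e-9  # the rule generalizes; no memorization needed
```

Ten lines of code cover every instance of the problem; a system that can only correlate text with text has to have seen something close to each instance.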

Re: Just wondering
That is certainly optimal, and it's common as courses get more difficult, but it is sometimes difficult to find a new problem that can be solved by students that haven't learned much yet and has never been considered by anyone on the internet. The basics of education will unavoidably be things which are known and discussed by lots of people, and students will need to get past those levels in order to get to the places where they can be set more complex problems that haven't already been solved.

Re: Just wondering
I think it says about as much about the subject as the program, since what it says is whether the solutions to problems asked at that level are likely enough to have already existed on the internet. At low levels, the answer is yes for most subjects. A computer science question asked of relatively early students like "Write quicksort on arrays of doubles in C++" is a question where someone has written that code and published it on a page with most or all of those keywords on it, thus it's relatively easy for a program that has mashed up a lot of websites to find and reconstitute something close to it. A similarly difficult history essay is likely to be on a topic that has already been discussed by some people online, maybe even in these forums, and the bot can copy from that. A political course that focuses on modern events means that the bot can mash up some newspaper articles and make an essay that approximates an acceptable response. A mathematical problem, made from randomly chosen numbers, is probably not written on a website since the sites will have been written with instructions on solving such problems in general, not that one in particular. Hence, the bot can't solve it, since it hasn't been built to reason or to understand, just to repeat what the internet generally says in a format that looks like natural language.
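For illustration of how canned that early-coursework answer is, here is the kind of solution that exists on countless pages with those keywords, sketched in Python rather than the C++ the hypothetical question names:

```python
def quicksort(xs):
    """Textbook quicksort over a list of numbers (recursive, not in-place):
    pick a pivot, partition into smaller/equal/greater, recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

A bot that has absorbed a few thousand copies of this can reconstitute it on demand; that says nothing about whether it could derive the algorithm from a description of the problem.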
Uncle Sam accuses SpaceX of not considering asylees and refugees for employment

Re: Asylum isn't what it used to be
That may be true (before I go farther, I know neither Catia nor Solidworks, nor any CAD really). However, that information could be passed through to the manager for them to rank applicants. Perhaps they got several applicants who all know Catia, in which case Solidworks guy is probably not going to get invited. If nobody with existing Catia experience is available, they might be more willing to try someone with less experience than they want, but more experience than not hiring anyone. HR doesn't know how easy or hard it is to pick up a skill, nor do they know what related skills convey valuable information. The person who wants the job filled should understand those things and can better parse the available candidates for them.

While I don't know the word, it would seem to be quite distinct from "asylum seekers", which would include anyone who had requested asylum, no matter what state that application was in. This term would appear to be limited to those who applied for and were granted asylum. I don't really have a word or phrase to describe that particular set that's shorter than the eight I just used, but it's still valid to distinguish them from all seekers.
Dropbox limits ‘all the storage you need’ unlimited plan, blames abusive users

I don't know whether that is supported, but it really can cause problems. If you're planning to download a 110 GB file, you don't expect it to finish any time soon, so you might not bother watching the download. If you come back a couple of hours after starting and realize that it stopped after twenty minutes and has been doing nothing since, you now need to restart it and keep checking back to see whether it happened again. I experienced this once when trying to download a roughly 700 MB file from a server that did support resuming downloads but kept failing, for some reason, every 12-18 MB. Continually going back and pressing the resume button got annoying, but I couldn't just write a script to pick up Firefox's partial download and continue it there. I kept weighing whether to write a script that would do it automatically but start over from scratch, or to keep switching windows and pressing the button every five minutes (it wasn't a fast server).
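The script I kept considering is straightforward to sketch when the server honors HTTP Range requests: on each drop, ask again starting from the byte where the last attempt stopped. A minimal version (the function names are my own, hypothetical):

```python
import urllib.request

def download_with_resume(fetch_range, total_size, max_attempts=100):
    """Keep requesting the remainder of a file until all bytes arrive.

    fetch_range(start) should return whatever bytes the source delivered
    from offset `start` before the connection dropped (possibly few)."""
    data = bytearray()
    attempts = 0
    while len(data) < total_size and attempts < max_attempts:
        data.extend(fetch_range(len(data)))
        attempts += 1
    if len(data) < total_size:
        raise IOError("download did not complete within max_attempts")
    return bytes(data)

def http_fetch_range(url):
    """Build a fetch_range function backed by an HTTP Range header,
    so each retry continues from the current offset."""
    def fetch(start):
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()
    return fetch
```

With a server that fails every 12-18 MB, this loop turns "press the button every five minutes" into a few dozen automatic retries, at the cost of bypassing the browser's partial-download state entirely.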