Stolen Cookies
Why are cookies trusted across IP address changes?
All the better to track you with, Little Red Riding Hood...
More than 170,000 users are said to have been affected by an attack using fake Python infrastructure with "successful exploitation of multiple victims." According to Checkmarx, members of the Top.gg GitHub organization – a top resource for Discord bot makers – as well as other developers were targeted, and it all hinged on …
Will it? I thought most train networks were behind NAT? As are most mobile networks?
I realise NAT is not an ideal solution to anything by any means. There was once a thing called Mobile IP that would let you keep the same address on the move...
But if someone jumps in with an IP address from a completely different provider/continent with your cookies, it should be suspicious enough to require re-authentication, no?
I suspect that this refers to accessing the interwebs by using your mobe as an access point, rather than the on-board WiFi.
That's what I usually do. For 1) it's highly likely that we are using the same cell, but my phone isn't sharing the connection with everyone else on the train, and, 2) I don't trust them.
-A.
FWIW, GitHub asks you to re-authenticate on the same device/browser/IP after a certain amount of time has passed since the previous verification, when accessing privileged areas like some of the repo settings pages. They call it sudo mode. Though, if there's access to grab a new cookie immediately, a coordinated attack is not impossible.
That said, GH does not treat the web interface for making commits as a privileged area.
Another thing that could be stolen would be a personal access token, which even their official app uses under the hood to operate. It can be used with a normal Git client. Commits with such tokens are rejected when they modify `.github/workflows/` but that's the only file-based rule I'm aware of.
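For illustration, a leaked PAT needs no special tooling; it slots straight into an ordinary HTTPS remote URL as the password component (the token and repo below are made up):

```python
# Hypothetical token - a stolen PAT works identically from any machine.
token = "ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

# GitHub's HTTPS remotes accept a PAT as the password; the username
# part is essentially ignored, so any plain Git client can clone and
# push with a URL like this:
remote = f"https://git:{token}@github.com/example-org/example-repo.git"
print(remote)
```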
> I thought most train networks were behind NAT? As are most mobile networks?
Which will be different NATs with different IPs...
And in the case of larger NATs, a pool of IPs that individual TCP connections are balanced over. (Most operators will try to reuse the same IP for the same client, because of issues with the handful of services that assume an IP address and a user are somehow related - but I have seen plenty of NATs where every TCP connection gets a different IP.)
> There was once a thing called Mobile IP that would let you keep the same address on the move.
This is literally an always-on VPN that just happens to be (a largely unimplemented) part of the IP specs. If you want that, you can use any existing VPN protocol, with a higher probability that it will actually work.
Even if the hacker hijacked cookies for the GitHub account page, they shouldn't be able to push without the private SSH/GPG keys, or add keys without logging in again with a password.
Obviously, if the hacker was inside editor-syntax's computer with a keystroke collector, they could do anything - but that's more than cookies.
No, it only requires that for initial log-in. After that, it trusts the cookies, apparently.
But probably you were facetiously referring to the practice of applying MFA to everything as if it were a silver bullet that absolves one of all cybersecurity responsibilities. It isn't, of course; side channels abound.
I get that it's a nice language for teaching programming. I did, after all, grow up with Pascal.
I don't much like it, though. It seems unsuited to programming in the large.
Lots of people who need to write code but aren't really programmers day-to-day seem to rely on it to access useful library code, mostly written in C. In fact I would go as far as to say that their "coding" amounts to gathering together third party libraries, written in C, and providing boilerplate glue.
This seems dangerous to me. If you are working with code then you ought to be able to understand it, whether written in Python, C or Logo.
-A.
“Code” is an encoding. Like ASCII and UTF-8 are text encodings, programming languages, assembly, and even mouse clicks and config files are encoded instructions for a machine to operate on. Don’t be snooty about how you encode your instructions.
What you should take away from this article is that code written by developers (and their understanding of it) is not the issue. It’s supply chain risk awareness and management, and the reality of million-line code bases.
This is by design: once work is a process, it can be production-lined and wages suppressed. The issue is now systemic, not individual; you can’t blame someone on the line if things don’t drop in front of them in the right configuration.
Python ain’t the problem, look up a bit
"Python ain’t the problem, look up a bit"
No, Python isn't the problem, but this isn't the first supply chain compromise involving PyPI. It seems to be a favourite repeat offender for some reason. Python is fine, but I no longer trust PyPI or any of the auto-dependency installers. I may revisit that if we can go twelve months without another PyPI supply chain compromise.
In this example, it wasn't PyPI that had a problem. It was the package that was instructed to retrieve code from somewhere else, download it, and run it. Nobody broke into PyPI to submit a poisoned package; they broke into someone's GitHub to make a real package poisoned. The important part is that, unlike previous attacks which have indeed used PyPI, this could have been done to any project in any language, as long as something in the build system accepted a dependency's URL. They picked a Python package in this case, but that wasn't required for this to work.
So... you read and understood all of the C libraries you use in your projects? You do grok ATLAS? (OK, that one is written in Fortran - but you can use it from C, AFAIK, so...)
I don't really like Python. I can read it, but the differences between the old version and the new one break something in my understanding. I don't dislike Python; I just don't use it (much), except to change colleagues' code to suit my needs.
The main problem is that Python's way of pulling in dependencies makes these attacks a bit simpler than replacing, say, libblas in a Linux repository. But even there, attacks on the supply chain are possible. And don't get me started on the Flatpak-isation of the Linux world...
Old and new? I assume you're talking about going from Python 2 to Python 3: Python 3 has now been around longer than Python 2 was and it contained many major changes between 1.6/2.0 and 2.7.
The change I dislike the most was turning print into a function. Since Python 3.3 (when we got u"" back) I've had few complaints with the language itself.
Python has this wonderful module in its standard library, called ctypes. It lets you create wrappers around libraries that were never written for use from Python. And with clever use of wrapper types and wrapper routines, you can make it look like those libraries were indeed written for Python.
There’s an art to that. Ask me about it. I’ve done a few.
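A minimal sketch of the idea, calling the C math library from stock Python; the library lookup assumes a Unix-like system:

```python
import ctypes
import ctypes.util

# Locate and load the C math library (falls back to the common
# Linux soname if find_library can't resolve it).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature so ctypes converts Python floats correctly
# instead of guessing (the default is int, which would be wrong here).
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```

The "art" is in going further: wrapping raw pointers in Python classes, turning C error codes into exceptions, and so on, until the seam disappears.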
Don't knock such "glue" languages. They're extremely useful, especially for one-off research projects, prototyping and of course "scripting" - which is what they're for.
After all, sh/bash/zsh et al are in the same bucket.
There are a few things I don't like about Python - meaningful whitespace for one - but it is very useful, easily available everywhere and better than many of the alternatives.
We're all using code that is so complex that no single person could have written all of it, or do you think you could write your own modern OS or web browser?
And it's not as if the Python language is the risk or the source of exploits, these are all too often in C-libraries and C remains famously difficult to write safe code in, even for experts.
But this attack has nothing to do with either: as so often, it relies on getting people to install stuff without thinking. This is the same as the first PC viruses, or even the jokers who got people to run scripts like rm -rf /.
The problem isn't the language but the distribution mechanism, potentially you could get similar issues with R (cran), perl (cpan) or even TeX (ctan) (and doubtless many others), it's only less likely because they don't have such a high profile as Python. For me the only sensible solution is never to allow anything to automatically install dependencies other than the official OS repository, especially on production machines. The more automatic things are made the more they are likely automatically to go wrong.
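One middle ground short of banning third-party repos outright is hash-pinning, which is what pip's --require-hashes mode implements: every artifact must match a digest recorded ahead of time. The core check amounts to something like this (function name is mine):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bytes:
    """Refuse a downloaded artifact unless its SHA-256 digest matches
    a value pinned in advance - a swapped or tampered package then
    fails loudly at install time instead of running silently."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"hash mismatch: expected {expected_sha256}, got {actual}")
    return data

blob = b"pretend this is a downloaded wheel"
pinned = hashlib.sha256(blob).hexdigest()  # recorded when the dep was vetted
verify_artifact(blob, pinned)              # passes; any change would raise
```

It doesn't stop a poisoned upstream release from being pinned in the first place, but it does stop anything changing underneath you between vetting and deployment.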
There was a very good series discussing this, and related matters, on LWN last year (https://lwn.net/Articles/924104/) with a brief follow up here: https://lwn.net/Articles/959236/
Python itself isn't the problem so much as the ease of use of nonstandard libraries by any idiot in charge of a keyboard. It's the same problem with Rust (cargo), Node.js (npm) and Go (go get).
C and C++ only get a free pass because it is LESS simple to install a random library off the internet with all of its dependencies, and less simple to publish one. Except in Arduino land, of course, where it is just as bad as Python.
We are now in a world where people who call themselves programmers simply ask Google/Stackoverflow - or worse, ChatGPT - which library to pull from PyPI to do whatever they need, and follow it blindly.
Fsck yeah! Have an upvote. Python looks to me like a language that was designed to help you make mistakes. Get the indentation wrong: subtle but serious errors. Assign a value to an object using different character casing than used previously: a new member gets created; so hard to spot. Method not defined: you'll find out when the production system keels over. Multiple function return values: you can return different combinations of things from one method at various points. All this could be embarrassing if it were you producing the poor code, but now you have to debug a single 700-line routine written by some hare-brain who did not believe in commenting code.
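For instance, the casing slip described above is completely silent in Python (class and attribute names here are made up):

```python
class Counter:
    def __init__(self):
        self.total = 0

c = Counter()
c.Total = 5        # casing typo: silently creates a NEW attribute
print(c.total)     # prints 0 - the intended counter never changed
```

No compiler, no warning; the bug only surfaces when some distant piece of code reads the value you thought you'd set.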
Obviously it works for many people. But not for me. I can't get with the whitespace being important. Not at all. I just don't seem to be able to see it, and my editor doesn't help me like it does with braces (or BEGIN/END or whatever).
And making changes is just really, really hard - I can't enter the logic I want as a stream of consciousness and then tell the editor to indent it for me as a check. With spaces there is no way to separately verify if I have made a mistake in the structure.
When trying to use Python I feel I am back in the early 1970s using Fortran, having to make sure I didn't use one space too few, so that the first character of my intended line got eaten as a continuation marker! I suppose I am at least grateful I'm not having to use a hand card punch...
In conventional free-format languages, the compiler pays attention to the statement bracketing, but ignores the whitespace. The human programmer typically uses the whitespace to reflect the structure of the code as per the statement bracketing. This offers some redundancy; any discrepancy between the two should raise an alarm that something is not quite right with the code.
Python’s use of whitespace to define statement bracketing gets rid of this redundancy. That’s bad. But you can get it back, by using “#end” comment lines to mark the ends of statements. For example:
    async def sendall(self, data, timeout = None) :
        self.sock_need = SOCK_NEED.NOTHING
        deadline = AbsoluteTimeout(timeout)
        while True :
            try :
                sent = self.sock.send(data, socket.MSG_DONTWAIT)
            except BlockingIOError :
                sent = 0
            #end try
            data = data[sent:]
            if len(data) == 0 :
                timed_out = False
                break
            #end if
            send = (await sock_wait_async(self.sock, False, True, deadline.timeout))[1]
            if not send :
                timed_out = True
                break
            #end if
        #end while
        if timed_out :
            raise TimeoutError("socket taking too long to send")
        #end if
    #end sendall
I also have custom editor commands defined to jump between lines with matching indentation; this allows me to navigate quickly between beginnings and ends of such statement constructs.
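A rough sketch of what such a navigation command does, treating the buffer as a list of lines (the function name and details are my own approximation):

```python
def matching_indent_line(lines, i):
    """Return the index of the next line at the same (or shallower)
    indentation as line i, skipping deeper-indented lines and blanks -
    i.e. jump from the start of a block to the line that closes it."""
    def indent(s):
        return len(s) - len(s.lstrip(" "))
    base = indent(lines[i])
    for j in range(i + 1, len(lines)):
        if not lines[j].strip():
            continue  # blank lines carry no indentation information
        if indent(lines[j]) <= base:
            return j
    return None  # no matching line below

src = [
    "while True :",
    "    if done :",
    "        break",
    "    #end if",
    "#end while",
]
print(matching_indent_line(src, 0))  # 4 - jumps to the '#end while' line
```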
It's obviously largely down to personal preference and experience but it was a design decision specifically to make it easier for non-programmers to learn how to structure their programs.
Python has long outgrown the "hobby" domain and has been used successfully in large systems for well over a decade. While there are attempts to backfill the language with "real" features of grown-up languages, these are often misapplied by over-enthusiastic novices. I'd argue Python does best when it sticks to its strength, which is clarity of expression, and lets all the memory-critical stuff be done by C/C++ or, increasingly, Rust.
Try an IDE instead of a text editor. Modern tools are pretty good at that; the last time I had a Python-related mistake due to whitespace was over 5 years ago, when I was trying to write a Python one-liner to execute from bash and missed the whitespace between command arguments - so it was bash that reported the error, not Python. Also, use functions/scoping and iterators instead of nested rat's nests :)
I can't enter the logic I want as a stream of consciousness and then tell the editor to indent it for me as a check.
Maybe you might try designing your code first, then entering it. It's called Programming on Purpose; you might try it before complaining that Language X makes you have to think about what you are doing first.
Maybe you might try designing your code first, then entering it.
Nah. Why would I want to do that? <grin> I did that when I was being paid to code! And I used BLISS, which was a truly great language which I used for many years (I still have to be careful sometimes not to put dots in front of variable names!).
Now, for preference, I use C for compiled code, Bash for scripting and Perl for combinations of the two.
Seriously, my real beef with Python is that it is horrible for modifying existing code - that is where the whitespace problems occur, in my experience. It is fine if you have the luxury of being able to design something first. In fact, if I do need to use Python for something I have been known to develop it in Perl first and when it is working use that as a design to reimplement it in Python.
Put up your own repo server like Nexus Repository OSS (https://help.sonatype.com/en/download.html) - it is completely free in its basic version. You can proxy the original repos, run tests, and if they test OK, push to the verified repos you use in production. No more downloading directly from the internet.
I only proxy the packages that are actually used, so anything you don't use you don't have to care about.
I have a tiny Intel NUC with an image of this running: https://github.com/sonatype/docker-nexus3/blob/main/Dockerfile
serving Docker, PyPI, Maven and NuGet repos for all my needs.
Along with a Gogs git repo server and Jenkins CI, I have all I need so I don't have to push my stupid bad code out to gitlab.com or GitHub.
Since pip can install directly from any private Git server, a private repo is maybe the easiest way to start if you only use Python.
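For example, pointing pip at such a proxy instead of the public index is a one-file change (the hostname and repository path below are hypothetical, following Nexus's usual layout):

```ini
# ~/.config/pip/pip.conf (pip.ini on Windows)
[global]
index-url = https://nexus.internal.example/repository/pypi-proxy/simple
```

Every `pip install` then goes through the proxy, which caches (and can gatekeep) what actually gets pulled from PyPI.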
I saw it, but I have a serious point, that this is a very public place when it comes to mentioning software that you or your organisation are using in a possibly vulnerable way, whether you know it or not - telnet for instance.
For instance, we only very recently updated our installation of [name withheld] which is meant to be a secure data connection to [name withheld]. Unforgivable, really - anything could have happened.
It won't now. It isn't connecting at all.
I think that [name withheld] needs to upgrade their [name withheld] - but at the last word, they are blaming our [other name withheld]. I wish I could place a bet on it being both. But that would be unethical. After all, it's within my power to [withheld] the [withheld] myself, if I wanted to.