Surprise! Voting app maker roasted by computer boffins for poor security now begs US courts to limit flaw finding

Voatz, the maker of a blockchain-based mobile election voting app pilloried for poor security earlier this year, has urged the US Supreme Court not to change the 1986 Computer Fraud and Abuse Act (CFAA), a law that critics say inhibits security research because it's overly broad. The app maker filed an amicus brief [PDF] on …

  1. Anonymous Coward
    Anonymous Coward

    Voatz meet the Streisand Effect.

    In California, at least, the changes you want may be thwarted by a simple anti-SLAPP motion. SLAPP stands for Strategic Lawsuit Against Public Participation.

    Security researchers discussing your product's security, or lack thereof, is the very definition of that Public Participation part. Not to mention their 1st Amendment right to discuss what a shitty company you're being for trying to stop them from publicly discussing your abysmal security.

    And the harder you try to bury the truth the more the internet will keep bringing it to the front of searches on what all the fuss is about.

    "What's this Voatz thing? Oh wow. They wrote a program with such shitty security it took a team of security pros less time to get in than it takes to get into a hookers knickers!"

    1. doublelayer Silver badge

      Re: Voatz meet the Streisand Effect.

      Ah, but it's not the discussion that they mind. If people investigate a system to find that it's a hideous mass of holes, they're violating the trust of the organization that put out the hideous mass of holes. It's important that we respect the rights of places that don't bother doing their own security testing and choose to use untrustworthy and unsafe code to store and process our information to make money. More than that, we must protect those who don't want to bother making good products from people who shamelessly figure out whether something will become a safety risk and (these people have no scruples) have the gall to tell the public about it after they tell the company, which doesn't fix it. Consider how you would feel if someone researched the safety of cars and told people about the ones that blow up so you couldn't purchase one of those. Consider how you would feel if there was someone with the audacity to check whether the claims in other products' advertisements were true and call out the selfless manufacturers when they were found to be lying through their teeth. These people must be stopped today.

      1. ThatOne Silver badge
        Big Brother

        Re: Voatz meet the Streisand Effect.

        Indeed, whatever a company says or does is god-given law, and should be accepted and respected without questioning, period!

        1. FlamingDeath Silver badge

          Re: Voatz meet the Streisand Effect.

          In the murky world of sociopaths and their interchangeable masks they call companies, I'm not sure how much effect the Streisand effect has.

          Do you suppose PHORM, Bell Pottinger et al just disappeared into nothingness?

    2. John Brown (no body) Silver badge

      Re: Voatz meet the Streisand Effect.

      "And the harder you try to bury the truth the more the internet will keep bringing it to the front of searches on what all the fuss is about."

      I was thinking along similar lines. If the law stops unauthorised security researchers looking at stuff, then the respected security researchers just have to produce regular press releases stating which companies and systems they have been refused authorisation for. If they have nothing to hide, they have nothing to fear :-)

  2. Anonymous Coward
    Anonymous Coward

    Real Life Superheroes

    In my book security researchers/reverse engineers are like real-life, modern-day superheroes.

    They are the only ones who keep people safe and secure, and in some cases can even save lives, by exposing flaws and sh*tty security practices in today's internet-connected world.

    I also believe that overly broad and expansive terms and conditions that threaten users over reverse engineering a sh*tty app are usually in place because the developer has either baked in some kind of spyware/malware or has some really sh*tty security/privacy practices they are trying to hide (like the app in the article).

    I am sure there are many who would disagree with me and would gladly support laws that limit security researchers, like, say, DLink, Lenovo, Cisco, Linksys, Microsoft etc etc....

    There are many great heroes out there protecting lives, like Citizen Lab, Kaspersky Labs, SandboxEscaper, etc etc

    With the horrific state of our nations politics ALL voting apps should be completely open-source.

    /rant

    1. DS999 Silver badge

      Security of voting machines

      Is hardly the biggest problem with voting in the US. If you have a paper trail for every vote then I say who cares if the software running the machines is open source. Just have a requirement to conduct random hand recounts of a statistically significant percentage to verify the totals and you know the machines are counting accurately.

      The real problem that needs to be fixed is access to the polls. In Georgia in 2018 the Republican secretary of state, who is by law responsible for the election, was running for governor. He used his authority as SOS to order the closing of a bunch of polling places in majority-black precincts. As a result, the average wait time to vote in majority-white precincts was 6 minutes. The average wait time to vote in majority-black precincts was 51 minutes. That is what voter suppression looks like, and it is a far greater concern than whether the source code for the election machines is publicly available.

      1. theOtherJT Silver badge

        Re: Security of voting machines

        If you have a paper trail for every vote then I say who cares if the software running the machines is open source. Just have a requirement to conduct random hand recounts of a statistically significant percentage to verify the totals and you know the machines are counting accurately.

        No! No no no no no no no!

        So the machine has to print two copies of the ballot - one for the voter so they can check it and one to go in the ballot box for the theoretical recount. The voter obviously needs to be prevented from touching the latter, or they could deliberately vote for the wrong candidate and then tamper with the "log" paper vote to try and get the election invalidated.

        So, now we have 2 paper ballots, and the voter can't see that they're the same - and that's assuming that the voter even bothers to check the one they were given.

        Never trust. Never. EVER.

        I don't disagree that there are way more effective voter suppression tactics already in use - but imagine what happens should those be cracked down upon by some theoretical future honest administration. There's still a massive opening for someone to mass manipulate votes as long as these stupid, wasteful, pointless things remain in use.

        Don't make me post the Tom Scott video again.

        1. DS999 Silver badge

          Re: Security of voting machines

          No, if you have a touchscreen machine have it print a ballot for a scanner with the little bubbles filled in. You can check that it did what you want, then personally place it in the scanner just like if you had manually filled in the little bubbles yourself. There's no need for two copies, or for the touchscreen you use to keep ANY record of what you entered. The scanner, and the paper ballots it read, are the only official record of the vote.

          The only software compromise you need to worry about is in the scanners, or the system the scanners upload their results into. That's what the manual recount of the paper ballots printed by the touchscreens is intended to address, by using statistics to count a small number randomly to be assured that the results match.

          Of course there's no reason to have the touchscreens, it is simpler to just have people fill in the bubbles with #2 pencils, but if people insist on technology...
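
          (A back-of-the-envelope sketch of how big that random hand count needs to be, assuming a deliberately simplified model in which some fraction of ballots were misrecorded by the machines; the function and the numbers below are illustrative only, not any jurisdiction's actual audit rule.)

            import math

            def hand_count_sample(misreport_rate, risk_limit):
                # Ballots to hand-check so that, if at least `misreport_rate` of all
                # ballots were recorded wrongly by the machines, a purely random
                # sample misses every bad ballot with probability at most `risk_limit`.
                # Simplified model: P(miss everything) = (1 - rate)^n <= risk.
                return math.ceil(math.log(risk_limit) / math.log(1.0 - misreport_rate))

            # Catching a 1% misrecording rate with 99% confidence takes roughly
            # 459 randomly drawn ballots, regardless of how large the electorate is.
            print(hand_count_sample(0.01, 0.01))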

          1. John Brown (no body) Silver badge

            Re: Security of voting machines

            "Of course there's no reason to have the touchscreens, it is simpler to just have people fill in the bubbles with #2 pencils, but if people insist on technology..."

            Depends. Is this an election where the voter only chooses Trump, Biden or Other, or are there 25 other elections on the same ballot at the same time, each with 10 candidates?

            1. DS999 Silver badge

              Re: Security of voting machines

              Like all elections in the US, there will be a bunch of other elections on the ballot as well. Not sure why that would be easier with a touchscreen. It is about the same amount of effort to press "John Smith" or "Jane Doe" as it is to fill a bubble next to the name.

              More importantly, more can go wrong with the high tech solution. The touchscreen machine might break, or the touchscreen get out of calibration so you press one name but get another. With a pencil the worst that happens is the lead breaks, and they can solve that by having spare pencils on hand.

              1. Blank Reg

                Re: Security of voting machines

                Unless the vote is contested, in which case it will be hanging chads all over again, only now it will be bubbles.

    2. Cuddles

      Re: Real Life Superheroes

      "In my book security researchers/reverse engineers are like real-life modern day Superheroes."

      Which is exactly where the problems come from. Superheroes are (usually) vigilantes operating outside the law. Their actions are often overlooked because they help people with problems that can't be addressed by the authorities using normal means. But even then they still frequently have trouble with the law, and often require some forgiving authority figure to look the other way or actively cover up for them, because even though their actions may seem right they are still technically illegal. And of course, it's often difficult to draw a line between a hero who is forced into a difficult choice, a hero who regularly breaks the rules a bit more than normal, an anti-hero who doesn't care about the rules, and an outright villain.

      All sounds rather like security researchers really. The good ones are usually doing work that is morally right but often legally questionable, and there is often not a clear line between those genuinely acting for good and those who happen to expose bad practices while only doing the hacking for fun, or being otherwise actively malicious. This is why people like Snowden and Assange, for example, aren't exactly universally loved - just because they exposed some bad stuff doesn't mean they're actually good themselves. And similarly the likes of the research groups listed above mostly do seem to be acting for good, but may well be breaking the law in doing so, and since opinions on what is actually good vary they're not universally loved either.

      And as with superheroes, the big problem is that they're all dealing with things that the law can't actually handle. Just as superpowers aren't handled very well by the legal system, laws developed decades or centuries ago don't handle computers and the internet very well. Everyone has an opinion on when vigilantism should or shouldn't be allowed, but even agreeing on whether something does or doesn't break the law is difficult enough before you even start asking about should.

      So yes, security researchers are very much like superheroes. Without them, the internet would be a much less safe place. But relying on vigilantes operating in legal grey areas is far from the best way to deal with real world issues, especially given how difficult it can be to tell the heroes and villains apart, and when they can swap roles from comic to comic.

      1. Falmari Silver badge

        Re: Real Life Superheroes

        Wow, have an upvote. Loved it, so well argued.

      2. amanfromMars 1 Silver badge

        Re: Some Real Life Superheroes are/can be Extremely Deadly Effective.

        But relying on vigilantes operating in legal grey areas is far from the best way to deal with real world issues, especially given how difficult it can be to tell the heroes and villains apart, and when they can swap roles from comic to comic. ....Cuddles

        The military minded would probably disagree with you, Cuddles, and would most likely also deny in pleasant company that their spooky special forces are villainous.

        :-) Don't push your luck though and ask active remote agents doing spooky special forces work about any of that. They mightn't give the answers that you want them to.

        And don't you just love those legal grey areas :-)

    3. FlamingDeath Silver badge

      Re: Real Life Superheroes

      Your words remind me of the Grenfell fire.

      The parallels are uncanny

      Replace shitty mobile app developer with shitty company designing flammable building materials

      And also replace security researchers with firefighters

  3. Warm Braw

    Allowing tech companies to threaten criminal action...

    I can think of a number of tech companies that would be overjoyed if they could actually prosecute people for failure to comply with their tracking requirements. It's simple "law & order" - they buy the law, you follow their orders...

  4. You aint sin me, roit
    Coat

    Simple solution...

    Just leave it to the Russians to test the security of your voting software for you.

    You know they are going to in any case...

  5. LazLong

    'Cause spies respect EULAs, right?

    Yeah, because of course the Norks', Russia's, Iran's, and China's security agencies respect EULAs, so we don't need software to actually be written securely.

    Fucking idiots.

    1. Wayland

      Re: 'Cause spies respect EULAs, right?

      Better to have real criminals find and exploit the bugs than white hats who simply warn you and don't tell the criminals. Which one provides the most motivation? Clearly when your system is under heavy attack by criminals you are more inclined to fix bugs.

  6. Doctor Syntax Silver badge

    I can't see anything wrong with Voatz's position - providing, of course, that they're* then on the hook for the consequences, civil and/or criminal, if the product gets hacked in ways that the unauthorised testing could have brought to their attention.

    * "They" including the management in person as well as the company.

    1. doublelayer Silver badge

      I get that you're trying to be conciliatory, but that approach is completely unacceptable. They shouldn't only be liable after their travesty causes massive problems. That would help, but it's a lot like saying I'm responsible only after my non-IAEA-approved nuclear reactor spreads radioactive waste over the local area. When things are this important, they need to be responsible before anything happens. That means that, when there are sufficient amounts of consumer information involved, when the government is going to use it, or when a malfunction could cause injury or death, the law should require that they do testing with independent testers, and that they either implement fixes for any bugs the testers find or formally appeal the decision not to. Having researchers who test systems simply helps this process and makes it cheaper. We require it of people making medicines or medical devices. We require it of people making cars and aircraft. We require it of people growing or manufacturing food. We can require it of people using the public's data, too.

      1. Doctor Syntax Silver badge

        "I get that you're trying to be conciliatory"

        Moi?

        1. doublelayer Silver badge

          Yes, it seemed to me as if you were trying to be conciliatory. Statements that start like "I can't see anything wrong with Voatz's position - providing" sound as if you think there's a possibility that their recommendations could be accepted. You clearly hedge against that with the condition you provided, and that limitation is useful, but in my opinion you've already given up too much by giving them anything. My reasons for that opinion are stated above.

    2. DS999 Silver badge

      On the hook for consequences?

      So if the product gets hacked by someone in another country and a president is elected who should not have won based on the true vote count, how would even the most extreme penalty of being able to bankrupt the company with a big fine and execute its management come close to compensating the populace for that?

      The only remedy is to have enough checks and balances that it is completely impossible for software bugs or hacks to change the result. There is simply no room AT ALL for trusting ANYONE in something this important to a democracy.

  7. Anonymous Coward
    Anonymous Coward

    Talk Talk

    A friend of mine discovered how to get into Talk Talk routers through WiFi by activating WPS remotely.

    He did the moral but illegal thing of warning Talk Talk about the vulnerability.

    He was arrested and punished.

    So as not to get into trouble with the law, he should have published his findings anonymously and widely on black hat forums. Then, with routers being breached all over the UK, the vulnerability would have got fixed. The way he did it, the vulnerability is still there.

  8. Eclectic Man Silver badge

    Democracy is in the counting*

    *According to a quote I saw attributed to the playwright Tom Stoppard.

    When I was a practising IT security person I got annoyed at the relatively intelligent people** proposing 'cryptographic voting schemes' which, when analysed, did nothing to prevent cheating.

    I devised my own, the 'Socially secure cryptographic election scheme' (Electronics Letters, 23rd May 1991, pp955-957, or digital reference 10.1049/el:19910596), which is hopelessly complicated, but does allow everyone to check that all the votes have been counted correctly. Unfortunately I could not find a solution that both allowed for genuine voter secrecy AND restricted each voter to one vote. So there is the problem of coercion and of paying for votes (which is why photography in voting stations is illegal: you could be proving to your 'sponsor' that you voted the way they wanted).
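
    (A toy sketch of the 'everyone can check the count' half of such schemes, and of why it collides with coercion resistance: the same receipt that lets a voter verify their ballot was counted also lets them prove to a vote-buyer how they voted. Purely illustrative, and emphatically not the scheme from the paper cited above.)

      import hashlib
      import secrets

      def commit(choice):
          # Voter side: bind a choice to a random nonce. The receipt alone
          # reveals nothing; choice + nonce lets the voter verify it later.
          nonce = secrets.token_hex(16)
          receipt = hashlib.sha256(f"{choice}|{nonce}".encode()).hexdigest()
          return receipt, nonce

      def counted(bulletin_board, choice, nonce):
          # Anyone can recompute the hash and check that a ballot is on the board.
          return hashlib.sha256(f"{choice}|{nonce}".encode()).hexdigest() in bulletin_board

      receipt, nonce = commit("Candidate A")
      bulletin_board = {receipt}  # plus every other published receipt
      assert counted(bulletin_board, "Candidate A", nonce)
      # The catch: showing (choice, nonce) to a coercer proves the vote just as well.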

    Most of the electronic election schemes I've seen allow whoever programs the machines to cheat with impunity. You don't need to hack machines or deny people in opposition areas the vote if you programmed the machines to misreport the votes. Even if there is a paper trail, unless you are going to use people to count the votes (not entirely accurate, but people are witnesses, which machines are not), the machines that count the votes must not know which candidate they are supporting, or the system is open to fraud. Note that almost all of the USA's vote machine manufacturers are or were Republican Party donors.

    The system of putting a cross on a piece of paper that is subsequently counted by humans is actually quite robust, has a nice trail that allows for genuinely independent re-counts and, importantly, forensic examination of ballots in the event of suspected fraud, ballot box stuffing etc. Creating thousands of fake votes by computer is easy; doing the same for paper ballots is really tricky, and can lead to imprisonment.

    Basically, sometimes the old ways are best.

    As for the law, one hopes there is a defence of 'it being in the Public Interest' to protect democracy, or at least of 'legitimate comment'.

    **(Including surprisingly one from the late, lamented, Prof Roger Needham.)

  9. Neoc

    In other news...

    ...law proposed which makes it illegal for "Choice!" magazine to test products without first gaining approval, in case they find faults with them.

  10. hayzoos

    Twist the law

    I always viewed the "unauthorized use" as referring to the owner of the computer in question. If the owner of the computer authorizes a use, then the "unauthorized use" provision of the CFAA does not apply. I would think the DMCA could be better argued to apply, but that's a whole other can of worms. Either law is suitable to be twisted in the manner attempted.

    I don't believe the security research in question was a grey area. I think it was a bona fide effort. There was no risk of harm to anybody, except that revealing the poor security of the Voatz app would damage the company's undeserved reputation for building a secure voting app.

  11. FlamingDeath Silver badge

    ”Voatz goes on to argue that allowing security researchers to violate rules and policies upends the expectations of companies setting those policies, as if their words should be law.”

    Someone should tell this chump that, as a software engineer, it is he who defines what is allowed or not allowed; relying on human laws is indicative of his own failings as a software engineer. Knowing the language to define the structure is only a fraction of what is needed.

    Software is a joke. If it were a building it would have been torn down ages ago and marked as dangerous (actually it's not dangerous, just VERY badly implemented, by dangerous people).
