Automatic test case generation
Whilst this can be a major benefit, never forget that there still needs to be a set of tests against requirements.
Oxford-based Diffblue has claimed its AI will automate one of the most important but tedious tasks in software development: writing unit tests. Test-driven development (TDD) is a methodology invented – or, as Kent Beck himself has said, rediscovered – by Beck, who wrote a unit test framework for Smalltalk in the late '80s. The idea of …
Maybe it can be of benefit (although I question major) but as you say there needs to be tests against requirements. This is where the major benefits lie. Ideally you would develop a complete set of tests for all requirements before you develop your code. Automatic test generation might be useful as other software and hardware changes but, even then, you still need to run the requirements testing. Back before AI became a thing we called it regression testing.
P.S. Skipping the boring parts of programming is more fun, but it also leads to many system failures.
This kind of mass, automatic test case generation is only useful in documenting the current behaviour of the code - bugs and all. It's the kind of thing I'd use once when taking over a codebase with poor coverage at the unit level, and then never again. But that's because maintaining good quality tests that cover what should happen, with a high coverage percentage, is inherently a human activity driven by knowledge of requirements.
While you are mostly right, what it actually tells you is "this still does what the previous code did", or "compared to the previous code, these behaviours changed". That is totally unrelated to functional correctness, but it is still useful information. Those humans you are talking about can add this as another tool in their toolbox.
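For what it's worth, that "snapshot of current behaviour, bugs and all" idea can be sketched in a few lines. This is a toy Python illustration (the tool itself targets Java, and `legacy_discount` is an invented stand-in for inherited code):

```python
def legacy_discount(price, qty):
    # Imagine this is inherited legacy code; nobody remembers
    # why qty == 0 returns the full price rather than zero.
    if qty == 0:
        return price
    return price * qty * 0.9

# Characterisation tests pin down what the code currently does,
# bugs and all -- they assert observed behaviour, not intent.
def test_characterisation():
    assert legacy_discount(100, 0) == 100   # surprising, but current behaviour
    assert legacy_discount(100, 2) == 180.0

test_characterisation()
```

If a later change makes either assertion fail, that tells you behaviour changed compared to the previous code, not whether either version was correct.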
Tests have their place (just mocking APIs doesn't seem very adequate...) but the devil is always in the non-functional testing space. If this were truly AI, then it could be of use in that space. As it's not, it can't.
Edit: Oh, and writing adequate unit tests is part of the job and if you deliver code into production without them you should be shown the comfy chair.
..... I'd agree, but they are ALWAYS the first thing to go. In fact, I've found it more prevalent in Agile processes for them not to get done at all.
The reason seems to be that Agile is abused so much as an excuse to make changes/change minds on functionality that apathy sets in very quickly... and once it does, it is tremendously difficult to back out of.
It doesn't help when companies still (and lots of them do) have the viewpoint that time spent on TDD is wasted upfront time
"Writing unit tests may be important, but it is less interesting than adding features."
... but needlessly fixing regression bugs in code that has grown organically over time to meet client needs is even less interesting/more soul destroying still. Especially when you've got an incomplete test suite (or none at all) to back you up.
As developers, we need to embrace and champion the idea that unit tests are part of anything we develop, be it shiny new feature or extension as demanded by the client (especially so in this last case when the account managers start trying to push deadlines on us!)
I recall a good while back reading this utterly moronic statement that developers should be able to write their code in such a way that it anticipates all future needs and thus will never need testing again - requirements are fluid, therefore code has to be as well. It's up to us to ensure that these changes we make are an evolution rather than a cancerous growth, and unit tests are a key component of this.
There have been many instances where I said "it's likely they'll change their minds on this, so we should make the code easy to change". My assumptions were correct: they changed their minds. But I did not anticipate the level of change in the requirements, to the point where unit tests would have had to be rewritten extensively, because the code changes broke the existing tests.
Tests breaking as a result of scope change is only to be expected - in fact, understanding why those tests broke and fixing/re-working/removing as required only goes to strengthen things.
Of course, if the spec change is to the point where a considerable number of your tests are hopelessly broken or no longer relevant then it's time to push back to the product owner and say that what's being asked for is not what you were originally tasked with building...
I've had projects where I was frightened of changing anything, in case it broke something important and I wouldn't be able to tell. This was perhaps/probably a result of someone losing the original requirements, or never having any in the first place and then lying about it. I would like characterisation tests to be generated in an automated way, but only for those parts which are a dependency for something else. 100% coverage means that nothing can change without apparently failing a test, and that's not maintainable, so it would be great to have an intelligent tool that only protects the critical parts.
I think we might be building the tool you are describing - *characterisation tests to be generated in an automated way but for only those parts which are a dependency for something else. 100% coverage means that nothing can change without apparently failing a test and that's not maintainable so it would be great to have an intelligent tool that only protects the critical parts*
We are in pre-release mode but if you could do me a favor and take a look at what we are doing and let me know how close we come to hitting the mark - or not - we'd appreciate it
If any of this looks useful hit me up - We'd be happy to get you access.
"100% coverage means that nothing can change without apparently failing a test and that's not maintainable"
For unit tests yes, but for generated characterisation tests? With the right tooling it should just generate a list of things that are changed and generate new tests to adapt to those changes. I mean, tests are supposed to fail when you change functionality...
since they give the developer confidence that an application or service still works after they add or modify the code.
SMARTR AI Systems give the developer confidence that former SCADA applications and service servers ..... Remote Virtual AIgents ...... are enabled to still work after AIMODified code is added and which only to follow the Simple Instruction Sets with Advanced Information for Present Programming with New Products Always Available for Future and Derivative Power Players. .....Intellectual Property Market Movers and Shakers with Tin Pot Hotshots to Satisfy and Assuage/Restrain and Reeducate/Quarantine and Brainwash, just as the need may dictate and require.
There hasn't been a tool like this before. The purpose of the Community Edition is to have a free way for people to see what the tool can do.
Is Experimental Remote Third Party Secretarial Use of that particular programming tool aided and abetted by home development teams in order to train/gain most valuable alternate company experience, or is it thought to be better prohibited and best denied, ....... although that is decision made wisely by others absolutely ages ago ..... just in case you were thinking of wasting expansive time imagining such choices have never before been made and answered many, many times before.
El Reg would surely love to freely share your considered reply to such an Almighty Offer and Genuine Request of Future Project leaders/readers/writers.
Yes Let's Go or No Let's Stop here for a while and Panic with a Rapid Stock Check*?
And will someone wake up, Sanmigueelbeer, for he did ask .....
iWake me up when AI has come up with a methodology to do away AGILE (and their practitioner). ..... Sanmigueelbeer
It's supposed to be Test Driven Development. The tests are created first and constitute a template for the solution. But I've yet to see this approach used in real life.
One of the most irksome chores is when you modify some legacy code and the build fails because it only has 10% test coverage. That 10% is the bit you've worked on, but you now own the whole damn thing and you're going to spend hours writing tests to cover the rest. I can see that this utility would be valuable in this situation, but I'm unsure whether that's a good thing.
I've adopted this approach in real life. It's where the benefits really start to kick in.
Sure, full test coverage is great and can help detect and prevent bugs. But TDD isn't about testing.
It's about thinking about the code before you write it.
Teams that follow TDD create fewer bugs, write more maintainable code, and deliver more quickly, and I believe that's because their code is just better.
Agree that tests are supposed to drive the development, so tests can come first. They also document what code is supposed to do, so I think the AI angle is broken.
Although I personally like to test first, research has shown that testing first or last really doesn't make much difference. It appears to be a development process with small steps (small test granularity) that is beneficial, and this helps with quality and design.
Exactly - the tests are written first, then code is written to pass the tests: in TDD, the tests drive the code.
I can see that this software is good for legacy code, even with bugs, as a change that causes a test to fail will mean looking at the change, and may point out existing bugs. But I fail to see how it helps TDD.
Also I wonder how good it is at creating unit tests for larger functions that do more than a single thing. Legacy code will probably have large functions that would take multiple small functions to do the same. In TDD you tend to write small functions that do a single thing.
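For readers who haven't seen the red/green rhythm the thread keeps describing, here is a minimal sketch in Python (a toy with an invented `is_leap_year` function; a real project would use a test runner such as pytest or JUnit):

```python
# Step 1 (red): write the test first. Running it before the
# function exists fails, which is the point of "tests drive the code".
def test_leap_year():
    assert is_leap_year(2000) is True    # divisible by 400
    assert is_leap_year(1900) is False   # divisible by 100 but not 400
    assert is_leap_year(2024) is True    # divisible by 4
    assert is_leap_year(2023) is False   # not divisible by 4

# Step 2 (green): write the smallest function that passes the test.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3 (refactor): with the test in place, the implementation
# can be reshaped safely.
test_leap_year()
```

Note how the unit under test stays small and does a single thing, which is exactly the property the comment above says auto-generated tests for large legacy functions would lack.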
I've known a couple of projects fail with TDD because the tests led development in the wrong way. I think code and tests should, where possible, be developed in lock-step. This reduces the overhead of writing tests. For an existing project, some form of reflection should be able to create stub tests for nearly everything. You might want to sprinkle some AI pixie dust on them, but once the grunt work of creating the boilerplate test code has been done, there really isn't that much of a barrier to entry. And this is the key point when it comes to testing: developers should feel comfortable with reading and writing tests.
"we can't tell if the current logic that you have in the code is correct or not, because we don't know what the intent is of the programmer, and there's no good way today of being able to express intent in a way that a machine could understand."
Primarily because the machine can't "understand" anything in the sense we can.
In the past, where it mattered one used formal specification languages such as VDM or Z to handle this problem. Although they can't demonstrate whether what the application designer specified is sensible, they do show whether the code meets that specification.
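Z and VDM are specification languages rather than code, but the "code meets the spec, whether or not the spec is sensible" distinction can be illustrated with executable pre- and postconditions. A small Python sketch (the integer square root spec here is invented for illustration):

```python
def isqrt(x):
    """Integer square root, specified by: r*r <= x < (r+1)*(r+1)."""
    # Precondition (from the specification): the input is non-negative.
    assert x >= 0, "precondition violated"
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    # Postcondition: the implementation meets the specification.
    # Nothing here can tell us whether the spec itself was sensible.
    assert r * r <= x < (r + 1) * (r + 1), "postcondition violated"
    return r
```

Formal methods take this further by proving the postcondition holds for all inputs, rather than checking it at run time.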
The other important issues are boundary conditions, such as bad data, for which there are several automated techniques (e.g. fuzzing), and race conditions in multi-threaded code, which can be hard to track down. Any competent AI system would need to be able to handle these.
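The fuzzing idea mentioned above, in its crudest form, is just throwing random input at a unit and checking it fails safely. A minimal Python sketch (`parse_percentage` is an invented target; real fuzzers such as AFL or libFuzzer are far more sophisticated):

```python
import random

def parse_percentage(text):
    # A deliberately simple parser used as the fuzz target; it must
    # reject bad data with ValueError rather than crash or return nonsense.
    value = float(text)
    if not 0.0 <= value <= 100.0:
        raise ValueError("out of range")
    return value

# A crude fuzzer: feed random byte strings to the parser and check
# it only ever returns a valid value or raises ValueError.
random.seed(0)
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(8))
    try:
        result = parse_percentage(blob.decode("utf-8", errors="replace"))
        assert 0.0 <= result <= 100.0
    except ValueError:
        pass  # rejecting bad data is the correct behaviour
```

Race conditions are harder still, since they need controlled thread scheduling (or tools like ThreadSanitizer) rather than random input.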
Now, when I work as an application designer with a developer, we start from a specification of both what the code should do and what it should not do, and from that agree a test set that covers both for each unit developed individually and the application as a whole.
Until AI gets more proficient at deducing the purpose of a particular path of execution, humans will always be involved in testing code. With that being said, even humans can have a hard time trying to figure out what a piece of code does if it's complex enough. This leads to a pet peeve of mine: comments, or the lack thereof. I can't tell you the number of times that I have looked at code and found no comments in much of it. Then it takes time for me to trace through the logic to piece together the original programmer's mindset when the code was written. This applies more to legacy code than to recent code, especially if development is active and you are on the team.
With that being said, even humans can have a hard time trying to figure out what a piece of code does if it's complex enough. This leads to a pet peeve of mine. Comments or the lack thereof. I can't tell you the number of times that I have looked at code, and found no comments in much of it. Then it takes time for me to trace through the logic to piece together the original programmer's mindset when the code was written. .... Maelstorm
A similar pet peeve floats around in the El Reg commentary spaces, Maelstorm, whenever downvotes are not accompanied by authorship with anything intelligible to further process as valid and acceptable or ignorant nonsense to be avoided and rejected.
Such though is able to tell one considerably more than was surely ever desired or expected to be present in an anonymous virtually remote silence ..... so all is not lost.
In some cases, one could almost believe dumb virtual machine algorithms were responsible for negative voiceless human decisions.
If we could express intent in a way that a machine could understand... we wouldn't actually need unit tests. Just get the machine to generate the code to implement the intent. If the intent changes, re-generate.
All we would need then is a system to help us test our intent...
I and my team would be very interested in a tool that could automate writing unit tests for C#. You mentioned that you have a Visual Studio version in the works, that would be great. The company I work for makes medical diagnostic testing and chemical testing equipment. Having better and more unit tests would help us with compliance to FDA and ISO 13485 standards. Thank you for offering the details of this new product.
Biting the hand that feeds IT © 1998–2020