Python Continuous Integration and Delivery: A Concise Guide with Examples
Ebook · 245 pages · 2 hours


About this ebook

Gain the techniques and tools that enable a smooth and efficient software development process in this quick and practical guide on Python continuous integration (CI) and continuous delivery (CD). Based on example applications, this book introduces various kinds of testing and shows you how to set up automated systems that run these tests, and install applications in different environments in controlled ways. Python Continuous Integration and Delivery tackles the technical problems related to software development that are typically glossed over in pure programming texts.
After reading this book, you’ll see that in today's fast-moving world, no software project can afford to go through development, then an integration phase of unpredictable length and complexity, and finally be shipped to the customer, only to find out that the resulting application doesn't quite meet their needs. Instead, you’ll discover that practicing continuous integration and continuous delivery reduces these risks by keeping changes small and automating otherwise painful processes.
What You Will Learn
  • Carry out various kinds of testing, including unit testing and continuous integration testing, of your Python code using Jenkins
  • Build packages and manage repositories
  • Incorporate Ansible and Go for automated packaging and other deployments
  • Manage more complex and robust deployments
Who This Book Is For

Python programmers and operations staff who work with Python applications.
Language: English
Publisher: Apress
Release date: Dec 28, 2018
ISBN: 9781484242810

    Book preview

    Python Continuous Integration and Delivery - Moritz Lenz

    © Moritz Lenz 2019

    Moritz Lenz, Python Continuous Integration and Delivery, https://doi.org/10.1007/978-1-4842-4281-0_1

    1. Automated Testing

    Moritz Lenz¹

    (1) Fürth, Bayern, Germany

    Before diving into examples of how to test Python code, the nature of tests must be discussed in more detail. Why do we want to have tests? What do we gain from them? What are the downsides? What makes a good test; what’s a bad test? How can we classify tests? And how many of which kinds of tests should we write?

    1.1 What Do We Want from a Test?

    Why bother with writing tests at all? There are a number of reasons why we want to write or, at least, have tests.

    It is not uncommon to have several tests in a test suite, written in response to different needs.

    Fast Feedback

    Every change to code comes with the risk of introducing bugs. Research shows that somewhere in the range of 7% to 20% of all bug fixes introduce new bugs.¹

    Wouldn’t it be great if we could find those bugs before they find their way to the customer? Or even before your colleagues see them? This is not just a question of vanity. If you receive quick feedback that you have introduced a bug, you are more likely to remember all the details of the part of the code base you just worked on, so fixing the bug tends to be much faster when you get fast feedback.

    Many test cases are written to give this kind of fast feedback loop. You can often run them before you ever commit your changes to the source control system, and they make your work more efficient and keep your source control history clear.
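    As a minimal illustration, here is the kind of fast unit test this section has in mind, runnable with pytest (the test runner, the function, and the file name are assumptions for illustration, not something the text prescribes):

        # test_tax.py -- a hypothetical fast unit test; run with: pytest test_tax.py

        def add_vat(net_amount, rate=0.19):
            """Return the gross amount for a net amount and a VAT rate."""
            return round(net_amount * (1 + rate), 2)

        def test_add_vat_default_rate():
            # Runs in microseconds, so it can be executed before every commit.
            assert add_vat(100.00) == 119.00

        def test_add_vat_custom_rate():
            assert add_vat(100.00, rate=0.07) == 107.00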

    Confidence

    Related to the previous point, but worth mentioning separately, is the confidence boost you can get from knowing that the test suite will catch simple mistakes for you. In most software-based businesses, there are critical areas where serious bugs could endanger the whole business. Just imagine you, as a developer, accidentally mess up the login system of a health-care data management product, and now people see others’ diagnoses. Or imagine that automatic billing charges the wrong amount to customers’ credit cards.

    Even non-software businesses have had catastrophic failures from software errors. Both the Mars climate orbiter² and the first launch of the Ariane 5 rocket³ suffered the loss of the respective vehicle, owing to software issues.

    The criticality of their work puts emotional stress on software developers. Automated tests and good development methodology can help alleviate this stress.

    Even if the software that people are developing is not mission-critical, risk adversity can cause developers or maintainers to make the smallest change possible and put off necessary refactoring that would keep the code maintainable. The confidence that a good test suite provides can enable developers to do what is necessary to keep the code base from becoming the proverbial big ball of mud.⁴

    Debugging Aid

    When developers change code, which in turn causes a test to fail, they want the test to be helpful in finding the bug. If a test simply says something is wrong, this knowledge is better than not knowing about the bug. It would be even more helpful if the test could provide a hint to start debugging.

    If, for example, a test failure indicates that the function find_shortest_path raised an exception, rather than returning a path, as expected, we know that either that function (or one it called) broke, or it received wrong input. That’s a much better debugging aid.
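    For illustration, such a test might look as follows (the module name, the graph representation, and the expected result are invented; the text only mentions the function find_shortest_path):

        # test_pathfinding.py -- hypothetical; "pathfinding" is a made-up module.
        from pathfinding import find_shortest_path

        def test_find_shortest_path_prefers_cheaper_route():
            graph = {
                "A": {"B": 1, "C": 4},
                "B": {"C": 1},
                "C": {},
            }
            # If find_shortest_path raises an exception here, the failure
            # report names this function and this exact input, which is a
            # far better starting point for debugging than a generic error.
            assert find_shortest_path(graph, "A", "C") == ["A", "B", "C"]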

    Design Help

    The Extreme Programming (XP)⁵ movement advocates that you should practice test-driven development (TDD) . That is, before you write any code that solves a problem, you first write a failing test. Then you write just enough code to pass the test. Either you are done, or you write the next test. Rinse and repeat.

    This has obvious advantages: you make sure that all code you write has test coverage and that you don’t write unnecessary or unreachable code. However, TDD practitioners have also reported that the test-first approach helped them write better code. One aspect is that writing a test forces you to think about the application programming interface (API) that the implementation will have, and so you start implementing with a better plan in mind. Another reason is that pure functions (functions whose return value depends only on the input and that don’t produce side effects or read data from databases, etc.) are very simple to test. Thus, the test-first approach guides the developer toward a better separation of algorithms or business logic from supporting logic. This separation of concerns is an aspect of good software design.
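    A minimal sketch of this cycle, with invented names and pytest-style tests assumed:

        # Step 1: write a failing test for code that does not exist yet.
        def test_word_count():
            assert word_count("the quick brown fox") == 4
            assert word_count("") == 0

        # Step 2: write just enough code to make the test pass.
        def word_count(text):
            return len(text.split())

        # word_count is a pure function: its result depends only on its
        # argument and it has no side effects, so the test needs no setup,
        # no mocks, and no database.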

    It should be noted that not everybody agrees with these observations. Counterpoints come from experience or from the argument that some code is much harder to test than to write, so that requiring tests for everything wastes effort. Still, the design help that tests can provide is a reason why developers write them, and so it should not be missing here.

    Specification of the Product

    The days of big, unified specification documents for software projects are mostly over. Most projects follow some iterative development model, and even if there is a detailed specification document, it is often outdated.

    When there is no detailed and up-to-date prose specification, the test suite can take the role of specification. When people are unsure how a program should behave in a certain situation, a test might provide the answer. For programming languages, data formats, protocols, and other things, it might even make sense to offer a test suite that can be used for validating more than one implementation.
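    For example, a single test can settle a question such as "is a price without a cents part valid input?" more reliably than an outdated document (the function and its input format are hypothetical):

        def parse_price(text):
            """Parse a price string such as "1,99" into cents (hypothetical format)."""
            euros, _, cents = text.partition(",")
            return int(euros) * 100 + (int(cents) if cents else 0)

        def test_parse_price_accepts_missing_cents():
            # This test documents the intended behavior: "3" is valid input
            # and means exactly three euros.
            assert parse_price("3") == 300

        def test_parse_price_with_cents():
            assert parse_price("1,99") == 199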

    1.2 Downsides of Tests

    It would be disingenuous to keep quiet about the downsides that tests can have. These downsides should not deter you from writing tests, but being aware of them will help you decide what to test, how to write the tests, and, maybe, how many tests to write.

    Effort

    It takes time and effort to write tests. So, when you are tasked with implementing a feature, you not only have to implement the feature but also write tests for it, resulting in more work and less time to do other things that might provide direct benefit to the business. Unless, of course, the tests provide enough time savings (for example, through not having to fix bugs in the production environment and clean up data that was corrupted through a bug) to amortize the time spent on writing the tests.

    Extra Code to Maintain

    Tests are code themselves and must be maintained, just like the code that is being tested. In general, you want the least amount of code possible that solves your problem, because the less code you have, the less code must be maintained. Think of code (including test code) as a liability rather than an asset.

    If you write tests along with your features and bug fixes, you have to change those tests when requirements change. Some of the tests also require changing when refactoring, making the code base harder to change.

    Brittleness

    Some tests can be brittle, that is, they occasionally give the wrong result. A test that fails even though the code in question is correct is called a false positive. Such a test failure takes time to debug, without providing any value. A false negative is a test that does not fail when the code under test is broken. A false negative test provides no value either but tends to be much harder to spot than false positives, because most tools draw attention to failed tests.

    Brittle tests undermine the trust in the test suite. If deployment of a product with failing tests becomes the norm because everybody assumes those failed tests are false positives, the signaling value of the test suite has dropped to zero. You might still use it to track which of the tests failed in comparison to the last run, but this tends to degenerate into a lot of manual work that nobody wants to do.

    Unfortunately, some kinds of tests are very hard to do robustly. Graphical user interface (GUI) tests tend to be very sensitive to layout or technology changes. Tests that rely on components outside your control can also be a source of brittleness.
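    A classic source of such brittleness is an implicit dependency on something outside the test's control, such as the wall clock (a contrived sketch, with invented names):

        import datetime

        def greeting():
            """Return a greeting for the current time of day."""
            hour = datetime.datetime.now().hour
            return "Good morning" if hour < 12 else "Good afternoon"

        # Brittle: passes when run in the morning, fails in the afternoon.
        def test_greeting_brittle():
            assert greeting() == "Good morning"

        # More robust: make the dependency explicit and control it in the test.
        def greeting_at(hour):
            return "Good morning" if hour < 12 else "Good afternoon"

        def test_greeting_robust():
            assert greeting_at(9) == "Good morning"
            assert greeting_at(15) == "Good afternoon"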

    False Sense of Security

    A flawless run of a test suite can give you a false sense of security. This can be due either to false negatives (tests that should fail but do not) or missing test scenarios. Even if a test suite achieves 100% statement coverage of the tested code, it might miss some code paths or scenarios. Thus, you see a passing test run and take that as an indication that your software works correctly, only to be flooded with error reports once real customers get in contact with the product.
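    As an illustration, the following invented function reaches 100% statement coverage with a single test, yet that test misses the scenario that triggers the bug:

        def apply_discount(price, is_member):
            discount = 0.1 if is_member else 0.0
            # Bug: the discount is added instead of subtracted.
            return price * (1 + discount)

        def test_apply_discount_non_member():
            # Every statement executes, so statement coverage is 100%,
            # but the is_member=True branch of the conditional expression
            # is never exercised and the bug goes unnoticed.
            assert apply_discount(100.0, is_member=False) == 100.0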

    There is no direct solution for the overconfidence that a test suite can provide. Only through experience with a code base and its tests will you get a feeling for the realistic level of confidence that a green (i.e., passing) test run provides.
