Development teamwork: How software development becomes transparent

Transparency in software development projects matters not only to project managers, product owners and the developers themselves, but to all stakeholders who want to be sure that a project is heading in the right direction. Digital solutions for agile work with backlogs and task boards are available and also support remote collaboration. Examples include Jira from Atlassian, Azure DevOps Services from Microsoft and Trello. Ultimately, however, the tools for communication and collaboration contribute only marginally to the desired result.

More important are agile methods and frameworks such as Scrum or SAFe, which achieve short feedback cycles by means of sprints. Regular events such as Sprint Review, Sprint Planning, the Daily Scrum and retrospectives provide a toolbox for such feedback cycles. They can be used to continuously inspect and adapt the result of the work performed.

Regular reviews of the work in progress are the prerequisite for developing the product incrementally and iteratively according to stakeholder requirements, for detecting undesirable developments early, and for continuous learning and improvement. So that feedback from later users and customers arrives as quickly as possible, it is advisable to deliver product versions early that already provide part of the value of the later product. At the very least, interested parties should be invited to use these versions early and give feedback. The idea of such minimum viable products (MVPs) comes from the lean startup approach.

Agile methods such as Scrum and Extreme Programming provide the processes to realize incremental development and continuous improvement. But how do you create a view of the work in progress that stakeholders can inspect, ensuring transparency in product development at all times? Tools and procedures for this are described below.

To see what developers are doing, it is worth looking at where they store the results of their work: the source code and other file artifacts. This is the version control system, in most cases Git. Beyond simply storing files, it and comparable tools offer other important functions:

  • conflict resolution (merging) in case several people want to save changes to the same files at the same time;

  • a log that records who changed what and when; changes can also be undone to restore older but still working versions;

  • code reviews to ensure code quality;

  • synchronization of the central repository with the developers’ machines. This makes it possible to work offline and anywhere; once the work is done, the local changes can be transferred back to the central repository.

It is important that developers regularly synchronize their work with the central repository and merge it with their colleagues’ changes. Regularly means at least once a day. This is necessary not only to see progress and the current status, but also to keep the work of the developers involved from diverging too far. The result would be lengthy integration efforts that cost a lot of time and hurt the stability and quality of the software.

That is the first step: there is a central repository into which everyone involved, regardless of where they work, regularly saves their results. Here you can also see who worked on what and when. Does the source code now have to be browsed to check whether the work is going in the right direction? Fortunately not; that would also be impractical. It is better to check the current status of the work in the form of working software.

This is achieved with the concept of a continuous delivery pipeline. It builds the corresponding version of the software from the current state of the code in version control and installs it in the target environment. Depending on the type of software, a target environment can be a web server or an app store from which the app can be installed on a smartphone.
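The core idea can be sketched in a few lines: a pipeline is a sequence of stages in which each stage must succeed before the next may run. The stage names and the simple pass/fail logic below are illustrative assumptions, not the API of any real CI/CD tool.

```python
# Minimal sketch of a delivery pipeline as a sequence of stages.
# Stage names and the pass/fail logic are illustrative assumptions.

def run_pipeline(stages):
    """Run each (name, stage) pair in order; stop at the first failure."""
    log = []
    for name, stage in stages:
        ok = stage()
        log.append((name, ok))
        if not ok:
            break  # a failed stage blocks all later stages
    return log

# Illustrative stages: in a real pipeline these would compile the code,
# run the test suite and deploy to a target environment.
stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]

result = run_pipeline(stages)
print(result)
```

Real platforms add parallelism, caching and environment management on top, but the stop-on-failure principle is the same.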

Ideally, the continuous delivery pipeline also installs the required infrastructure of servers, network services, databases and web servers. This is done using scripts and configuration files, an approach called infrastructure as code. The scripts and configuration files also serve as documentation, which means the documentation and the systems created from it stay in sync. One tool for installing and configuring infrastructure is the open-source infrastructure-as-code tool Terraform, which is also supported by the major cloud providers.

This procedure works best with cloud-based infrastructure (IaaS, Infrastructure as a Service) as provided by the large cloud providers, but solutions also exist for infrastructure in classic data centers. Infrastructure based on software containers is a particularly flexible and currently popular option. It is important to store installation scripts and configuration files together with the source code in version control. Then the software and the underlying infrastructure stay in sync.

Because continuous delivery pipelines play such an important role in modern, agile software development, there is a large selection of tools and platforms; GitHub, GitLab, Atlassian Bitbucket, Jenkins and Azure DevOps are just a few of the well-known ones. Pipeline runs can be started manually or triggered by events. Ideally, the installation process takes place without further human intervention. The pipeline has to be set up once at the beginning and then adapted as development progresses. This effort pays off in larger, long-term product developments.

It is ideal if the pipeline is in place from the start of development. To put it bluntly: the first line of code should already run through a working pipeline. Then you are informed about the status of the product right from the start, and it is also easier and less risky to set up, develop and operate the pipeline.

Software products can usually be developed so that working product versions are available at an early stage. Sprint by sprint, the software is developed incrementally and iteratively based on user and customer feedback. If a development team reports that something visible can only be presented towards the end of the project, this should be read as an alarm signal.

The client and the developers should have a common understanding of when software is “finished”. What “finished” includes is laid down in a Definition of Done. It specifies which functional, non-functional and quality properties a function, a product increment or a release must have in order to be considered done and deliverable. Without a common understanding here, nasty surprises can follow in the course of the project. This is where agile working with short feedback cycles helps: if product versions are delivered early and regularly, discrepancies become visible quickly.

Does the software delivered this way now have to be tested by hand? That would certainly be too time-consuming, and it is not the most enjoyable activity. There is a variety of solutions for automating the testing of requirements: the tests are programmed, or written in a form that can be executed automatically. Since this is such an important function, there are innumerable frameworks and tools. Testing can be automated at all levels and from all perspectives.
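What “programming a test” means can be shown with a small sketch. The function under test, a hypothetical discount rule, stands in for any piece of application code; in a real project a framework such as pytest or unittest would discover and run such tests inside the pipeline.

```python
# Sketch of an automated functional test. The discount rule and its
# parameters are hypothetical, invented for illustration.

def discount(price, customer_is_premium):
    """Hypothetical business rule: premium customers get 10% off."""
    return round(price * 0.9, 2) if customer_is_premium else price

def test_premium_customer_gets_discount():
    assert discount(100.0, customer_is_premium=True) == 90.0

def test_regular_customer_pays_full_price():
    assert discount(100.0, customer_is_premium=False) == 100.0

# Run the checks; a CI pipeline would fail the build on an AssertionError.
test_premium_customer_gets_discount()
test_regular_customer_pays_full_price()
print("all tests passed")
```

Once such tests exist, every pipeline run re-verifies the requirement at no extra manual cost.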

Accordingly, tests fall into different types for different areas of application and target groups. They include functional tests and acceptance tests, which are interesting for people who want to test the software from a user or customer perspective. End-to-end tests are also possible, but they are usually more complex to automate.

The concept of Behavior Driven Development (BDD) provides an interesting approach. Using a descriptive language, for example Gherkin for Cucumber, requirements are written in a form that everyone involved can understand and are later implemented and executed as coded tests.

This facilitates the joint creation of requirements and the verification of the desired functionality by stakeholders or product owners. In this case, the documentation of the requirements itself tests the software. Requirements documentation and tests stay in sync.
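The principle can be illustrated with a hand-rolled sketch: a Gherkin-style scenario text is mapped to executable step functions. This is only an illustration of the idea, not the API of Cucumber or behave; real tools parse .feature files and bind steps to code via pattern matching, and the shopping-cart scenario here is invented.

```python
# Sketch of the BDD idea: readable specification text drives executable
# steps. Scenario and step functions are illustrative assumptions.

SCENARIO = """\
Given a cart with 2 items priced 10
When the customer checks out
Then the total is 20"""

def given_cart(ctx, text):
    ctx["items"] = [10, 10]          # set up the precondition

def when_checkout(ctx, text):
    ctx["total"] = sum(ctx["items"]) # perform the action

def then_total(ctx, text):
    assert ctx["total"] == 20, text  # verify the expected outcome

STEPS = {"Given": given_cart, "When": when_checkout, "Then": then_total}

def run_scenario(text):
    """Dispatch each scenario line to its step function (by keyword only,
    unlike real BDD tools, which match the full step text)."""
    ctx = {}
    for line in text.splitlines():
        keyword, _, rest = line.partition(" ")
        STEPS[keyword](ctx, rest)
    return ctx

result = run_scenario(SCENARIO)
```

The scenario text remains readable to non-developers while being executable, which is the point of the approach.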

There are also unit and integration tests, which test the technical functionality of the software. Unit tests focus on the functionality of the code without considering dependent systems such as databases. Integration tests exercise the software together with the surrounding systems with which it will later be operated. Tests in this area say a lot about the inner quality of the software. In this context, the “code coverage” metric is interesting: it describes the percentage of the code that is covered by tests. Functional and acceptance tests remain important too, even if they only cover the tip of the iceberg.
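How a unit test isolates code from a dependent system can be sketched as follows: the database layer is replaced by a stub so the business logic can be tested on its own, while an integration test would use the real dependency. All class and method names here are illustrative assumptions.

```python
# Sketch of a unit test with a stubbed dependency. Names are invented.

class UserService:
    def __init__(self, repository):
        self.repository = repository  # the dependency is injected

    def greeting(self, user_id):
        name = self.repository.find_name(user_id)
        return f"Hello, {name}!"

class StubRepository:
    """Stands in for the database layer so no real database is needed."""
    def find_name(self, user_id):
        return "Alice"

# The unit test exercises only the service's own logic.
service = UserService(StubRepository())
assert service.greeting(42) == "Hello, Alice!"
print("unit test passed")
```

Because the dependency is injected, swapping the stub for a real repository turns the same service into the subject of an integration test.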

There are also non-functional requirements such as security, performance and scalability; here the quality requirements for the software are checked. Frameworks and solutions, such as Apache JMeter, exist for these types of tests as well. It makes sense to integrate them into the continuous delivery pipeline.
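A simple performance check of the kind a tool such as JMeter automates at much larger scale might look like this: call an operation repeatedly and assert that the worst observed latency stays within a budget. The operation and the deliberately generous 500 ms budget are illustrative assumptions.

```python
# Sketch of a latency check for a non-functional requirement.
# The operation and the latency budget are illustrative assumptions.
import time

def operation():
    # Placeholder for the code path whose performance matters.
    return sum(range(1000))

def max_latency_ms(runs=100):
    """Return the worst latency, in milliseconds, over several runs."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        worst = max(worst, (time.perf_counter() - start) * 1000)
    return worst

# In a pipeline, a violated budget would fail the build.
assert max_latency_ms() < 500, "latency budget exceeded"
```

Dedicated load-testing tools additionally simulate many concurrent users and network conditions, which a single-process sketch like this cannot.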

The installed software is also tested automatically. In the end, not only is the current state of the software available, but the question of whether the desired functional and non-functional features have been fulfilled is also answered. On this basis, it can be decided whether the software is installed on the next staging level, for example User Acceptance Test (UAT). This can even be continued automatically up to production: the software is brought into production automatically if the quality and functional requirements are met on all staging environments (test, UAT, …) along the way.
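Such a promotion decision is often called a quality gate. A minimal sketch, assuming the thresholds "all tests green and at least 80% coverage" (both values are illustrative; real platforms let you configure gates per stage):

```python
# Sketch of a quality gate that decides whether a build is promoted
# to the next staging environment. Thresholds are illustrative.

def quality_gate(tests_passed, tests_total, coverage):
    """Return True if the build may move to the next stage."""
    all_green = tests_passed == tests_total
    return all_green and coverage >= 0.80

# A build with one failing test is held back ...
assert quality_gate(tests_passed=99, tests_total=100, coverage=0.95) is False
# ... while a green build with sufficient coverage moves on, e.g. to UAT.
assert quality_gate(tests_passed=100, tests_total=100, coverage=0.85) is True
```

Chaining such gates per environment is what makes fully automated promotion up to production safe.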

This procedure also works in large environments; Amazon, for example, deploys to its production systems every few seconds. The results should be presented in a user-friendly, clear manner so that, with this many tests, you can still see the forest for the trees.

Platforms for software lifecycle management and continuous delivery offer ways to display the results and status of installation, tests and other activities in a user-friendly manner. This way you can see at a glance whether the current work status leads to a working product and whether functional and non-functional requirements are met. A top-down approach makes sense: an overview shows quickly where a product stands, and from there you can navigate into detailed views, down to individual tests and code changes. This enables developers and system administrators to quickly find problems and their causes.

The delivery pipeline can also make the current state of the software available to users and customers. They don’t have to click through everything; tests are carried out automatically and the results are clearly presented. At a glance you can see where the product currently stands.

It is advisable to keep an eye on another topic, especially for important and critical applications that should continue to play a role in the future: internal code quality. This covers topics such as complexity, coupling and lines of code per method. It is relevant if the program code is to remain maintainable, extensible and modernizable in the future, without undue effort and risk of errors.

There are solutions for this too, such as SonarQube, which check the code for quality attributes, can be integrated into the delivery pipeline and, in the extreme case, prevent code that does not meet the specified standards from being delivered. These tools also offer clear graphical reports that give an overview of quality and critical areas.
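One such check, among the many that a tool like SonarQube performs, is flagging overlong functions. A sketch using Python's standard ast module (the 20-line limit is an illustrative assumption):

```python
# Sketch of a single internal code-quality check: report functions
# that exceed a maximum length. The limit of 20 lines is illustrative.
import ast

def long_functions(source, max_lines=20):
    """Return the names of functions longer than max_lines."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append(node.name)
    return offenders

sample = "def short():\n    return 1\n"
print(long_functions(sample))  # a two-line function passes the check
```

Running such checks in the pipeline turns a style guideline into an enforceable quality gate.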

With all the great dashboards and reports on tests and code quality, which hopefully always show green, the software itself remains the most important thing in the end. “Working software over comprehensive documentation” is one of the central agile values. Customers don’t pay for a continuous delivery pipeline or dashboards that show good test results, but for a product that meets their needs, is easy to use and looks good. The focus should always be on the software produced. It is crucial to provide users, customers and yourself with software quickly and regularly, to test it continuously and to collect feedback.

Is the right thing being developed, in the required quality? Determining this early saves a lot of money and helps to meet deadlines. This article can only provide an overview. Especially for larger, long-term and important product developments, the undoubtedly high effort is worthwhile. It depends on:

  • A source code management system such as Git, into which everyone involved checks in their work at short intervals, resolving any conflicts that arise. Automated code analyses and manual code reviews ensure that only code that meets the required quality criteria flows into the product.

  • A continuous delivery pipeline that automatically creates the current product version from the current state of the code and makes it available to those interested in the product. Tests and code quality checks are also carried out automatically in the pipeline. The success or failure of the tests and the code quality check can serve as quality gates deciding whether this version is made available to the next staging stage.

  • Dashboards and reports that present the results of installation, tests, quality metrics and bugs clearly, so that you are quickly informed about the status of the software and of the work.

  • Working software, which gives an early, subjective “gut feeling” of whether development is going in the right direction. Minimum Viable Products (MVPs) provide feedback from customers and users at an early stage, and agile practices provide feedback at short intervals. In this way, future work can be re-prioritized and all aspects of the project continuously improved. (hv / fm)
