Archives

Continuous Delivery within the .NET realm (The Continuous Improver)

On August 30, 2016, in Syndicated, by Association for Software Testing

Continuous what?

Well, if you browse the internet regularly, you will encounter two different terms that are used rather inconsistently: Continuous Delivery and Continuous Deployment. In my words, Continuous Delivery is a collection of techniques, principles and tools that allow you to deploy a system into production with a single press of a button. Continuous Deployment takes that to the next level by completely automating the process of putting code changes that were committed to source control into production, all without human intervention. These concepts are not trivial to implement and involve technological innovation as well as some serious organizational change. In most projects involving the introduction of Continuous Delivery, an entire cultural shift is needed, which requires great communication and coaching skills. But sometimes it helps to build trust within the organization by showing the power of technology. So let me use this post to highlight some tools and techniques that I use myself.

What do you need?
As I mentioned, Continuous Delivery involves a lot more than just development effort. Nonetheless, these are a few of the practices I believe you need to be successful.

  • As much of your production code as possible must be covered by automated unit tests. One of the most difficult parts of that is determining the right scope for those tests. Practicing Test-Driven Development (TDD), a test-first design methodology, can really help you with this (see the sketch after this list). After trying both traditional unit testing and TDD, I can tell you that it is really hard to add maintainable and fast unit tests after you’ve written your code.
  • If your system consists of multiple distributed subsystems that can only be tested after they’ve been deployed, then I would strongly recommend investing in acceptance tests. These ‘end-to-end’ tests should cover a single subsystem and use test stubs to simulate the interaction with the other systems.
  • Any manual testing should be banned. Period. Obviously I realize that this isn’t always possible due to legacy reasons. So if you can’t do that for certain parts of the system, document which part and do a short analysis on what is blocking you.
  • A release strategy and a branching strategy are crucial. Such a strategy defines the rules for shipping (pre-)releases, how to deal with hot-fixes, when to apply labels, and what version-numbering scheme to use.
  • Build artifacts such as DLLs or NuGet packages should be versioned automatically without the involvement of any development effort.
  • During a deployment, the administrator often has to tweak web.config/app.config settings such as database connection strings and other infrastructure-specific settings. This has to be automated as well, preferably by parametrizing deployment builds.
  • Build processes, if they exist at all, are quite often tightly integrated with build engines like Microsoft’s Team Build or JetBrains’ TeamCity. But many developers forget that the build script changes almost as often as the code itself. So in my opinion, the build script itself should be part of the same branching strategy that governs the code and be independent of the build product. This allows you to commit any changes needed to the build script together with the actual feature. An extra benefit of this approach is that developers can test the build process locally.
  • Nobody is more loathed by developers than DBAs. A DBA that needs to manually review and apply database schema changes is a frustrating bottleneck that makes true agile development impossible. Instead, use a technique where the system uses metadata to automatically update the database schema during the deployment.
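To make the TDD point above a bit more concrete, here is a minimal sketch of a test-first unit test using xUnit. The DiscountCalculator class and its discount rule are hypothetical, invented purely for illustration:

using Xunit;

// Hypothetical production class, included only so the sketch compiles.
public class DiscountCalculator
{
    public decimal CalculateFor(bool isGoldCustomer, decimal orderTotal) =>
        isGoldCustomer ? orderTotal * 0.10m : 0m;
}

public class DiscountCalculatorTests
{
    [Fact]
    public void Gold_customers_get_a_ten_percent_discount()
    {
        var calculator = new DiscountCalculator();

        decimal discount = calculator.CalculateFor(isGoldCustomer: true, orderTotal: 100m);

        // In TDD this assertion is written first and fails until the production code exists.
        Assert.Equal(10m, discount);
    }
}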

What tools are available for this?

Within the .NET open-source community a lot of projects have emerged that have revolutionized the way we build software.

  • OWIN is an open standard for building components that expose some kind of HTTP endpoint and that can be hosted anywhere. WebAPI, RavenDB and ASP.NET Core MVC are all OWIN-based, which means you can build NuGet packages that expose HTTP APIs and host them in IIS, a Windows Service or even a unit test without the need to open a port at all (see the sketch after this list). Since you have full control of the internal HTTP pipeline, you can even add code to simulate network connectivity issues or high-latency networks.
  • Git is much more than a version control system. It changes the way developers work at a fundamental level. Many of the more recent tools, such as those for automatic versioning and generating release notes, have been made possible by Git. Git even triggered de facto release strategies such as GitFlow and GitHub Flow that directly align with Continuous Delivery and Continuous Deployment. In addition to that, online services like GitHub and Visual Studio Team Services add concepts like Pull Requests that are crucial for scaling software development departments.
  • xUnit is a unit test framework that runs tests in parallel and will help you build software that behaves well in highly concurrent systems. Just try to convert existing unit tests built with more traditional test frameworks like MSTest or NUnit to xUnit: it’ll surface all kinds of concurrency issues that you normally wouldn’t detect until you run your system in production under high load.
  • Although manual testing of web applications should be minimized and superseded by JavaScript unit tests using Jasmine, you cannot entirely do without a handful of automated end-to-end tests. These smoke tests can really help you get a good feel for the overall end-to-end behavior of the system. If this involves automated tests against a browser and you’ve built them using the Selenium UI automation framework, then BrowserStack would be the recommended online service. It allows you to test your web application against various browser versions and provides excellent diagnostic capabilities.
  • Composing complex systems from small components maintained by individual teams has proven to be a very successful approach to scaling software development. MyGet offers (mostly free) online NuGet-based services that encourage teams to build, maintain and release their own components and libraries and distribute them using NuGet, all governed by their own release calendar. In my opinion, this is a crucial part of preventing a monolith.
  • PSake is a PowerShell-based, make-inspired build system that allows you to keep your build process in your source code repository just like all your other code. Not only does this allow you to evolve your build process with new requirements and commit it together with the code changes, it also allows you to test your build in complete isolation. How cool is it to be able to test your deployment build from your local PC?
  • So if your code and your build process can be treated as first-class citizens, why not do the same with your infrastructure? You can, provided you take the time to master PowerShell DSC and/or modern infrastructure platforms like Terraform. Does your new release require a newer version of the .NET Framework (and you’re not using .NET Core yet)? Simply commit an updated DSC script and your deployment server is re-provisioned automatically.
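As an illustration of the OWIN point above, the following sketch hosts an HTTP endpoint entirely in memory inside a unit test. It assumes the Microsoft.Owin, Microsoft.Owin.Testing and xunit NuGet packages, and the /ping endpoint is just an example:

using System.Threading.Tasks;
using Microsoft.Owin.Testing;
using Owin;
using Xunit;

public class InMemoryHostingTests
{
    [Fact]
    public async Task Responds_to_ping_without_opening_a_network_port()
    {
        // TestServer runs the OWIN pipeline in memory: no IIS, no Windows Service, no open port.
        using (var server = TestServer.Create(app =>
            app.Run(context => context.Response.WriteAsync("pong"))))
        {
            var response = await server.HttpClient.GetAsync("/ping");

            Assert.Equal("pong", await response.Content.ReadAsStringAsync());
        }
    }
}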

Where do you start?

By now, it should be clear that introducing Continuous Delivery or Deployment isn’t for the faint of heart. And I didn’t even talk about the cultural aspects and the change management skills you need to have for that. On the other hand, the .NET realm is flooded with tools, products and libraries that can help you to move in the right direction. Provided I managed to show you some of the advantages, where do you start?

  • Switch to Git as your source control system. All of the above is quite possible without it, but using Git makes a lot of it a lot easier. Just try to monitor multiple branches and pull requests with Team Foundation Server based on a wildcard specification (hint: you can’t).
  • Start automating your build process using PSake or something similar. As soon as you have a starting point, it’ll become much easier to add more and more of the build process and have it grow with your code-base.
  • Identify all configuration and infrastructural settings that deployment engineers normally change by hand and add them to the build process as parameters that can be provided by the build engine. This is a major step in removing human errors.
  • Replace any database scripts with a library like Fluent Migrator or the Entity Framework that allows you to update the schema through code (see the sketch after this list). By doing that, you could even decide to support downgrading the schema in case a (continuous) deployment fails.
  • Write so-called characterization tests around the existing code so that you have a safety net for the changes needed to facilitate continuous delivery and deployment.
  • Start the refactoring efforts needed to be able to automatically test more chunks of the (monolithic) system in isolation. Also consider extracting those parts into a separate source control project to facilitate isolated development, team ownership and a custom life cycle.
  • Choose a versioning and release strategy and strictly follow it. Consider automating the version number generation using something like GitVersion.
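To give one example of the code-driven schema approach mentioned above, here is a minimal Fluent Migrator sketch; the table, column and version number are hypothetical:

using FluentMigrator;

[Migration(201608301)]
public class AddEmailToCustomers : Migration
{
    public override void Up()
    {
        // Applied automatically during deployment to bring the schema up to date.
        Alter.Table("Customers")
            .AddColumn("Email").AsString(255).Nullable();
    }

    public override void Down()
    {
        // Having a Down() is what makes automated schema downgrades possible
        // when a (continuous) deployment fails.
        Delete.Column("Email").FromTable("Customers");
    }
}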

Let’s get started

Are you still building, packaging and deploying your projects manually? How much time have you lost trying to figure out what went wrong, only to find out you forgot some setting or important step along the way? If this sounds familiar, hopefully this post has given you some nice starting points. And if you still have questions, don’t hesitate to contact me on Twitter or by reaching out to me at TechDays 2016.

How do you do, Head of Testing? vol. 3 (The Pain and Gain of Edward Bear)

On August 30, 2016, in Syndicated, by Association for Software Testing

Quite OK, thanks for asking. I feel like this writing exercise helps me clear my head a bit. Many have learned by now that “all models are wrong but some are useful” – a saying attributed to the statistician George Box. In my quest for trying to understand an organization and its dynamics, I’ve come […]

Tools: Take Your Pick Part 4 (Hiccupps)

On August 30, 2016, in Syndicated, by Association for Software Testing

Back in Part 1 I started this series of posts one Sunday morning with a very small thought on tooling. (Thinking is a tool.) I let my mind wander over the topic and found that I had opinions, knowledge, ideas, and connections that I hadn’t made ex…

Tools: Take Your Pick Part 3 (Hiccupps)

On August 29, 2016, in Syndicated, by Association for Software Testing

In Part 1 of this series I observed my behaviour in identifying problems, choosing tools, and finding profitable ways to use them when cleaning my bathroom at home. The introspection broadened out in Part 2 to consider tool selection more generally. I speculated that, although we may see someone apparently thoughtlessly viewing every problem as a nail and hence treatable with the same hammer, that simple action can hide deeper conscious and unconscious thought processes. In Part 3 I find myself with these things in mind, reflecting on the tools I use in my day-to-day work.

One class of problems that I apply tools to involves a route to the solution being understood and a desire to get there quickly. I think of these as essentially productivity or efficiency problems and one of the tools I deploy to resolve them is a programming or scripting language.

Programming languages are tools, for sure, but they are also tool factories. When I have some kind of task which is repetitive or tiresome, or which is substantially the same in a bunch of different cases, I’ll look for an opportunity to write a script – or fabricate a tool – which does those things for me. For instance, I frequently clone repositories from different branches of our source code using Mercurial. I could type this every time:

$ hg clone -r branch_that_I_want https://our.localrepo.com/repo_that_I_want

… and swear a lot when I forget that this is secure HTTP or mistype localrepo again. Or I could write a simple bash script, like this one, and call it hgclone:

#!/bin/bash

# clone branch $1 of repository $2 from our local server
hg clone -r "$1" "https://our.localrepo.com/$2"

and then call it like this whenever I need to clone:

$ hgclone branch_that_I_want repo_that_I_want

Now I’m left dealing with the logic of my need but not the implementation details. This keeps me in flow (if you’re a believer in that kind of thing) or just makes me less likely to make a mistake (you’re certainly a believer in mistakes, right?) and, in the aggregate, saves me significant time, effort and pain.

Your infrastructure will often provide hooks for what I sometimes think of as micro tools too. An example of this might be aliases and environment variables. In Linux, because that’s what I use most often, I have set things up so that:

  • commands I like to run a particular way are aliased to always run that way.
  • some commands I run a lot are aliased to single characters.
  • some directory paths that I need to use frequently are stored as environment variables.
  • I can search forwards and backwards in my bash history to reuse commands easily.
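For what it’s worth, a minimal sketch of the kind of .bashrc entries I mean – the names and paths here are made up:

# commands I always run a particular way
alias ll='ls -alF'
# a frequent command shortened to a single character
alias g='grep --color=auto -n'
# a directory path I need all the time
export REPOS=/home/james/work/repos
# let the arrow keys search history forwards and backwards by prefix
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'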

One of the reasons that I find writing (and blogging, although I don’t blog anything like as much as I write) such a productive activity is that the act of doing it – for me – provokes further thoughts and connections and questions. In this case, writing about micro tools I realise that I have another kind of helper, one that I could call a skeleton tool.

Those scripts that you return to again and again as starting points for some other piece of work are probably useful because of some specific piece of functionality within them. You hack out the rest and replace it in each new usage, but keep that generally useful bit. That bit is the skeleton. I have one in particular that is so useful I’ve made a copy of it containing only the bits I was reusing, to make it easier to hack.
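Purely as an illustration of what I mean by a skeleton – this isn’t my actual script – the shape that tends to survive from task to task is something like this:

#!/bin/bash
# skeleton: check the arguments, then loop over the input files;
# the body of the loop is what gets hacked out and replaced each time
set -e
if [ $# -lt 1 ]; then
    echo "usage: $0 file..." >&2
    exit 1
fi
for f in "$@"; do
    echo "processing $f"
    # task-specific bit goes here
done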

Another class of problem I bump into is more open-ended. Often I’ll have some idea of the kind of thing I’d like to be able to do because I’m chasing an issue. I may already have a tool but its shortcomings, or my shortcomings as a user, are getting in the way. I proceed here in a variety of ways, including:

  • analogy: sometimes I can think of a domain where I know of an answer, as I did with folders in Thunderbird.
  • background knowledge: I keep myself open for tool ideas even when I don’t need tools for a task. 
  • asking colleagues: because often someone has been there before me.
  • research: that frustrated lament “if only I could …” is a great starting point for a search. Choosing sufficient context to make the search useful is a skill. 
  • reading the manual: I know, old-fashioned, but still sometimes pays off.

On one project, getting the data I needed was possible but frustratingly tiresome. I  had tried to research solutions myself, had failed to get anything I was happy with, and so asked for help:

#Testers: what tools for monitoring raw HTTP? I’m using tcpdump/Wireshark and Fiddler. I got networks of servers, including proxies #testing

— James Thomas (@qahiccupps) March 26, 2016

This led to a couple of useful, practical findings: that Fiddler will read pcap files, and that chaosreader can provide raw HTTP in a form that can be grepped. I logged these findings in another tool – our company wiki – categorised so that others stand a chance of finding them later.

Re-reading this now, I notice that in that Twitter thread I am casting the problem in terms of the solution that I am pursuing:

I would like a way to dump all HTTP out of .pcap. Wireshark cuts it up into TCP streams. 

Later, I recast the problem (for myself) in a different way:

I would like something like tcpdump for HTTP.

The former presupposes that I have used tcpdump to capture raw comms and now want to inspect the HTTP contained within it, because that was the kind of solution I was already using. The latter is agnostic about the method, but uses analogy to describe the shape of the solution I’m looking for. More recently still, I have refined this further:

I would like to be able to inspect raw HTTP in real time, and simultaneously dump it to a file, and possibly modify it on the fly, and not have to configure my application to use an external proxy (because that can change its behaviour).

Having this need in mind means that when I happen across a tool like mitmproxy (as I did recently) I can associate it with the background problem I have. Looking into mitmproxy, I bumped into HTTPolice, which can be deployed alongside it and used to lint my product’s HTTP.  Without the background thinking I might not have picked up on mitmproxy when it floated past me; without picking up on mitmproxy I would not have found HTTPolice or, at least, not found it so interesting at that time.
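For the record, the sort of invocation I have in mind looks something like this – exact flag names vary between mitmproxy versions, so treat it as a sketch rather than a recipe:

# watch decrypted HTTP flows live in the terminal while also dumping them to a file
mitmdump -p 8080 -w flows.mitm

# later, load the saved flows back into the console UI for closer inspection
mitmproxy -r flows.mitm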

Changing to a new tool can give you possibilities that you didn’t know were there before. Or expose a part of the space of possible solutions that you hadn’t considered, or change your perspective so that you see the problem differently and a different class of solutions becomes available.

Sometimes the problem is that you know of multiple tools that you could start a task in, but you’re unsure of the extent of the task, the time that you’ll need to spend on it, whether you’ll need to work and rework or whether this is a one-shot effort, and other meta-problems of the problem itself. I wondered about this a while ago on Twitter:

With experience I become more interested in – where other constraints permit – setting up tooling to facilitate work before starting work.

— James Thomas (@qahiccupps) December 5, 2015

And where that’s not possible (e.g. JFDI) doing in a way that I hope will be conducive to later retrospective tooling.

— James Thomas (@qahiccupps) December 5, 2015

And I mean “tooling” in a very generic sense. Not just programming.

— James Thomas (@qahiccupps) December 5, 2015

And when I say “where other constraints permit” I include contextual factors, project expectations, mission, length etc not just budget

— James Thomas (@qahiccupps) December 5, 2015

Gah. I should’ve started this at https://t.co/DWcsnKiSfS. Perhaps tomorrow.

— James Thomas (@qahiccupps) December 5, 2015

I wonder if this is irony.

— James Thomas (@qahiccupps) December 5, 2015

A common scenario for me at a small scale, when gathering data, is whether I should start in a text file, in Excel, or in an Excel table. Within Excel, these days, I usually expect to switch to tables as soon as it becomes apparent I’m doing something more than inspecting data.

Most of my writing starts as plain text. Blog posts usually start in Notepad++ because I like the ease of editing in a real editor, because I save drafts to disk, because I work offline. (I’m writing this in Notepad++ now, offline because the internet connection where I am is flaky.) Evil Tester wrote about his workflow for blogging and his reasons for using offline editors too.

When writing in text files I also have heuristics about switching to a richer format. For instance, if I find that I’m using a set of multiply-indented bullets that are essentially representing two-dimensional data it’s a sign that the data I am describing is richer than the format I’m using. I might switch to tabulated formatting in the document (if the data is small and likely to remain that way), I might switch to wiki table markup (if the document is destined for the wiki), or I might switch to a different tool altogether (either just for the data or for everything.)

At the command line I’ll often start in the shell, then move to a bash script, then move to a more sophisticated scripting language. If I think I might later add what I’m writing to a test suite I might make a different set of decisions than for a one-off script. If I know I’m searching for repro steps I’ll generally work in a shell script, recording various attempts as I go and commenting them out each time so that I can easily see what I did that led to what. But if I think I’m going to be doing a lot of exploration in an area I have little idea about, I might work more interactively but use the script command to log my attempts.
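For example – and this is just a sketch – the script command will happily record an exploratory session to a file I can mine afterwards:

# log everything typed and printed in this session, appending to a dated file
script -a exploration-$(date +%F).log
# ...poke around interactively...
exit    # stop recording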

At a larger scale, I will try to think through workflows for data in the project: what will we collect, how will we want to analyse it, who will want to receive it, how will they want to use it? Data includes reports: who are we reporting to, how would they like to receive reports, who else might be interested? I have a set of defaults here: use existing tooling, use existing conventions, be open about everything.

Migration between tools is also interesting to me, not least because it’s not always a conscious decision. I find I’ve begun to use Notepad++ more on Windows whereas for years I was an Emacs user on that platform. In part this is because my colleagues began to move that way and I wanted to be conversant in the same kinds of tools as them in order to share knowledge and experience. On the Linux command line I’ll still use Emacs as my starting point, although I’ve begun to teach myself vi over the last two or three years. I don’t want to become dependent on a tool to the point where I can’t work in common, if spartan, environments. Using different tools for the same task has the added benefit of opening my mind to different possibilities and seeing how different concepts repeat across tools, and what doesn’t, or what differs.

But some migrations take much longer, or never complete at all: I used to use find and grep together to identify files with certain characteristics and search them. Now I often use ack. But I’ll continue to use find when I want to run a command on the results of the search, because I find its -exec option a more convenient tool than the standalone xargs.
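Something along these lines, say, where the pattern and filenames are placeholders:

# ack for straightforward searching
ack 'connection reset' logs/

# find when each match needs a follow-up command run on it
find logs/ -name '*.log' -mtime -1 -exec grep -l 'connection reset' {} \;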

Similarly I used to use grep and sed to search and filter JSON files. Now I often use jq when I need to filter cleanly, but I’ll continue with grep as a kind of gross “landscaping” tool, because I find that the syntax is easier to remember even if the output is frequently dirtier.
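For instance, against a hypothetical results.json whose field names I have made up:

# quick and dirty 'landscaping': is anything marked failed in there at all?
grep -n '"failed"' results.json

# clean filtering once structure matters: the ids of the failed runs only
jq '.runs[] | select(.status == "failed") | .id' results.json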

On the other hand, there are sometimes tools that change the game instantly. In the past I used Emacs as a way to provide multiple command lines inside a single connection to a Linux server. (Aside: PuTTY is the tool I use to connect to Linux servers from Windows.) When I discovered screen I immediately ditched the Emacs approach. Screen gives me something that Emacs could not: persistence across sessions. That single attribute is enough for me to swap tools. I didn’t even know that kind of persistence was possible until I happened to be moaning about it to one of our Ops team. Why didn’t I look for a solution to a problem that was causing me pain?

I don’t know the answer to that.

I do know about Remote Desktop so I could have made an analogy and begun to look for the possibility of command line session persistence. I suspect that I just never considered it to be a possibility. I should know better. I am not omniscient. (No, really.) I don’t have to imagine a solution in order to find one. I just have to know that I perceive a problem.

That’s a lesson that, even now, I learn over and over. And here’s another: even if there’s not a full solution to my problem there may be partial solutions that are improvements on the situation I have.

In Part 4 I’ll try to tie together the themes from this and the preceding two posts.
Image: https://flic.kr/p/5mPY4G
Syntax highlighting: http://markup.su/highlighter

Tools: Take Your Pick Part 2 (Hiccupps)

On August 29, 2016, in Syndicated, by Association for Software Testing

In Part 1, I described my Sunday morning Cleaning the Bathroom problem and how I think about the tools I’m using, the way I use them, and why.  In particular I talked about using a credit card as a scraper for the grotty build up around the s…

Tools: Take Your Pick Part 1 (Hiccupps)

On August 29, 2016, in Syndicated, by Association for Software Testing

It’s early on a Sunday morning and I’m thinking about tools. Yes, that’s right: Sunday morning; early; tools.Sunday morning is bathroom cleaning morning for me1 and, alongside all the scrubbing, squirting, and sluicing I spend time evaluating the way I…

Stairs help for blind people (zagorski software tester)

On August 28, 2016, in Syndicated, by Association for Software Testing

TL;DR This post is follow up on my post about traveling to CAST 2016 as software tester. In that post I mentioned that I noticed at Paris Charles De Gaulle Airport metal pattern at the top of the stairs. Using Google, I tried to find out the purpose of that pattern, but with no result. … Continue reading Stairs help for blind people

Protecting your time (Nicky Tests Software)

On August 26, 2016, in Syndicated, by Association for Software Testing

Last night I attended a software testing meet-up  where Örjan spoke about his experiences in Managing Quality in an Agile team. He raised a few interesting points from a management perspective – but it was one in particular that caught my attention: protecting time.

In order to help his team achieve the tasks they planned in sprints, he would try and stop people hindering his team unnecessarily. He wanted to help make it easier for his team to work. And I get that. I’ve been on both sides of the equation – I’ve been the person trying to ask a developer questions only to be blocked by their dev team lead and I’ve also been the person who’s been “protected” by their team lead so I can focus on my work and reach a deadline. From both perspectives I’ve been able to appreciate both the frustrations and the benefits of such an approach.

But what interests me – is the different ways people go about protecting their time.

Blocking off certain periods of time for meetings

At a recent project, they protected their time by blocking off meetings on Tuesdays and Thursdays. If you wanted to have a meeting on one of those days, it’d just have to wait until the following Wednesday or Friday (this has been going on for almost a year now).

A guy in the meet-up said that his team had tried to block off meetings in the afternoons, but the approach only worked for a few months, so he asked the group for advice on how to make it stick. A few people encouraged him to get “buy-in” from different members of the team. In retrospect, I wish I had asked him a few more questions about his situation instead of just giving him my thoughts on the matter. Lesson learned.

Designated question person

Two people in the meet-up had a similar approach to protecting their time: a designated question person who would answer questions and deal with issues from people outside the team. The role would rotate on a regular basis (whether daily or weekly).

They also elaborated on the benefits of this role – other than having fewer people interrupted, it meant that the designated person probably knew the best time to put a question to a member of their own team (to minimise the risk of disrupting their flow).

The identity of this person would be communicated to other teams affected, for example – through a sign that says “Support” on their desk.

Do not Disturb

I worked at a company where headphones worked as a “Do not Disturb” sign. It meant that if someone had them on, it was best not to interrupt them unless it was particularly urgent (you could’ve also just pinged them on Slack). To be honest, I’m not entirely sure how I feel about this approach. While I understand that headphones are a great way of tuning everyone out so you can focus on your work, wearing them didn’t always mean I didn’t want to be disturbed. For example: I like to listen to music when I’m testing, but that doesn’t mean it’s a bad time to ask me a question.
A guy in the meet-up said that his team of 9 has 3 hourglasses. When you had an hourglass on your desk, it worked as a Do Not Disturb sign. I quite liked that approach (it made me think of that old soap Days of Our Lives).

Make meetings less appealing

I’m sure a lot of people can relate to being in meetings that went on for way too long, or to thinking at some point “why exactly am I here? The subject of the meeting doesn’t quite match what is currently being discussed”.
Face to face communication is awesome – don’t get me wrong. But often a quick chat, email or IM would also do. 

Last night, two approaches were discussed that strived to reduce the appeal of meetings.
The first approach was to calculate a rough estimate of the cost of each meeting. That is, estimate the average per hour salary of each attendee (this doesn’t just apply to consultants, but employees as well) and display it at the meeting. This seemed to help people stay on track and make sure only the people necessary were there.
The second approach was to set a weekly time budget for meetings for each person. At the person’s company, everyone has an allocated budget of 5 hours a week (I don’t know if this differed slightly depending on the role each person held). His company also has a designated minute-taker for each meeting, so that the meeting notes can be shared with all the relevant people.
That discussion at the meet-up has given me a lot to think about. Not only when it comes to coming up with ideas on how to protect your time, but also in communicating those ideas to your team, to your manager or to the people your work impacts.
Saying no to meetings and bringing these ideas up can be a somewhat intimidating experience. Even worse, being in a meeting, then 10 minutes in thinking “Can I leave? Does the law of two feet apply here?”.

Testers Role in Agile Requirements Exploration (Assert.This)

On August 26, 2016, in Syndicated, by Association for Software Testing

Black Box, White Box, Gray box throw those words away. Test the thing not the box it came in… That little gem came straight from Janet Gregory during the Testers Role in Agile Requirements Exploration Workshop at CAST 2016. I struggled with choosing what sessions to attend, I didn’t want to miss anything and this … [Read more…]

יום פקודה Command and Conquer (אשרי אדם מפחד תמיד Happy is the man who always fears)

On August 25, 2016, in Syndicated, by Association for Software Testing

echo "olleh#dlrow~" | rev | cut -d~ -f2 | sed "s/#/ /" | awk '{print $2 " " $1}'

Can you read this? Great! (By the way – some code, so no Hebrew.) When I got out of the university, I had exactly zero familiarity with the Linux shell, and found myself in an enviro…

Page 1 of 4
