Testing tours at TdT – a tester’s tale

The last TdT meetup was somewhat of a special occasion. For one, it was a workshop instead of a presentation, which meant we formed groups, worked together, then presented and discussed the work we had just done (instead of ONLY speaking from experience gained prior to the discussion). It was also “owned” by four people (instead of the typical one or two), which helped a lot with facilitating the discussion while we were working, as well as afterwards, when we talked about challenges and insights. I also had the opportunity to work with people I hadn’t had the chance to before, which is always nice :-).

The workshop was structured in a fairly typical way: we arrived at the venue, chatted about random things, I found and solved a rather interesting puzzle game in the kitchen :D, then Dolly, Dorel, Elena and Oana, the content owners, introduced us to the theme of the meeting – learning! – talked about their goals with the workshop – wanting to become teamSTAR this year – go team go! – and then described touring and how it fits with the theme.

They also introduced us to the app we were going to test, a chat platform named Slack, similar to an IRC client – or so I thought before the workshop. They had cue cards printed out with the different tours we could choose from – they went with the FCC CUTS VIDS tours.

Testing tours are not really new, but I’ve rarely seen them used on projects I’ve been on or heard other testers talk about using them, let alone consistently (this statement is of course limited by my experience :-)). I first heard about touring while doing the BBST Test Design course, where it was one of the techniques we covered (and where I first read this article by Michael Kelly). While that was my first contact with it, I only started to understand the power of touring after reading James Whittaker’s take on it in his book ‘Exploratory Software Testing’ (he also has a couple of articles dedicated to this technique here).

I read the book while I was on a project testing medical software, and this technique helped me a lot in learning, mapping and better understanding the intricate connections between elements of the product. We toured the application and mapped the central data elements it worked with, then used this map to vary the data we were working with. After that we toured the application again to figure out which features were important for different types of users, and created scenarios based on those user types.

While using tours proved invaluable on that project, on my current project I can’t really use the technique consistently (I’m helping set up an automation environment and writing automated checks, with a bit of testing sprinkled in when release candidates are promoted).

Given my current situation, attending the workshop was an excellent opportunity to use the technique once more. Participants formed groups of 2 to 4 people and chose a tour type. I had the pleasure to work alongside Tamara and Florin, both fairly new to testing, but eager to start. We chose the Variability tour, because it seemed easy enough: just look for things in the app that you can change and change them. We were somewhat wrong about this.

After choosing the type of tour, we quickly set up our laptops in a corner of the garden outside the venue and assigned roles: Tamara sat at the keyboard and did most of the navigating and the “trying things out”, Florin did research and compared what we found in the desktop version of Slack with his Android version, while I took notes and came up with suggestions for which elements to vary.

Our first challenge was figuring out how much to vary the data we were working with while still learning new things about the app. Our reasoning was that since the goal of the Variability tour is to learn the capabilities of the application by changing the different elements it is made up of (buttons, settings etc.), we should see these changes take effect, not just compile a list of possible variations. So we limited the amount of variation in the hope of covering more ground.

We started our tour with the different ways Notifications can be set up in the app. Right from the start, there were a lot of ways to customize them:

  • Do we want to receive notifications from everyone or just a certain group? How about specific channels within a group?
  • Do we only want direct notifications and a select group of highlighted words?
  • Or possibly from specific users?

Next there were specific ways the notification popup window could look and feel:

  • Should it contain the text being received?
  • Should it flash the screen?
  • Should it make a sound? If so, which sound?
  • Where should it appear on the monitor? How about if there are multiple monitors connected?

We also learned that users can set Do Not Disturb periods (based on time zones, and only reachable from the web version of the app), as well as timers that can act as a temporary DnD when we want to do something uninterrupted. These settings, of course, could be further customized at the group/channel level.

After we were done with Notifications, we moved on to varying the look of the app: changing its theme to a different preset, creating a new preset altogether, and even importing themes into Slack. We found out that this option is NOT variable in the mobile version of the app. When we started focusing on Advanced options to see if we could find any interesting features, we realized there were only 10 minutes left in the session, and we were at the 3rd point of a 6-point list of settings.

Managing time turned out to be our second challenge.

Needless to say, we tried to wrap up the session by finding elements that were more ephemeral than the settings of the app. We noticed another team had enabled TaskBot, so we varied the different elements of a task created with the bot, created temporary channels out of our group and tried to send different types of messages. We also ended up talking about the ways we had varied the messages sent throughout the session, so we added that to our notes as well.

After the one-hour testing session we gathered inside, where each team talked about their brief experience of using tours, and whether they saw it as something they could do on their current project:

  • There were a few feature “tourists”, who had interesting takes on what to include as a feature and what to exclude (my take on this is that while it’s important to have a common understanding of what counts as a feature on a given project, this understanding might change if I move on to another project)
  • There was one group that picked scenario tours, who had a hard time coming up with realistic scenarios – or at least they did not talk about the ways they tried to get to these scenarios. (my conclusion was that understanding the features and the users of the app would greatly help in this, so touring the app from these perspectives before searching for scenarios would probably be a better way to learn the app)
  • There was a group who did a Configuration tour, which was fairly similar to our Variability tour, since they are related in concept. (my take on the difference between the two is that while in the Variability tour you learn about the application by varying things in the app, the Configuration tour focuses on the permanent changes that you can “inflict” upon the app).
  • Lastly, there was a team that did a Testability tour; they hardly dealt with the application itself, and instead looked for logs, access to local databases, controls within the app that could be used to automate different checks, and possible bots to use for creating/modifying/deleting key elements of the app. This tour, while very different and interesting, was the least context-dependent of all the tours (making the experience more easily transferable to other projects)

While we didn’t get to discuss it during the workshop, a third challenge I see with ongoing tours is this: do we keep the map we created up to date as new changes are added to the app? In other words, does the artefact we create during a tour lose its value? I know it’s valuable for a while, but is it worth keeping the map/list updated? As I see it, the act of creating the map or list is vastly more valuable, and it keeps its value while you base other tours on it; I am more skeptical of its usefulness after that, though.

All in all, I came away from this workshop with interesting ideas and a firmer grasp on the technique than before. If I were to sum up the experience of touring an application and engaging in the discussions afterwards in one sentence, it would be this: without resorting to assumptions and our previous experience with the application, each type of tour takes a lot of time to do, and some tours are more useful early on (Testability, Feature, User tours) while others are better done later (Scenario tours), but ultimately it’s worth the effort.

Testability Tours – Testing Tours Meetup

Last week at the monthly meetup organized by Tabara de Testare, four of our colleagues – we call them content-owners @ Tabara de Testare – organized a workshop on Testing Tours. Even though many people have written about touring over the past 10–15 years, the technique seems little known and rarely used by practitioners. This is why, when my colleagues chose this theme, I thought it was a great idea. Also, there are many tours that I’m not familiar with, so it was a good learning opportunity for me.

About 25 people gathered for this meetup and, after the introduction delivered by the content-owners, we split into teams of 2, 3 or 4 testers and went out into the garden to get started. This month’s venue was ClujHub, and the great thing about this co-working space, besides the downtown location, is the garden!

I teamed up with Gabi Kis, a fellow facilitator from Tabara de Testare. We took the tour cards prepared by the content-owners and went through all of them to see which tour we’d like to perform – all the other teams had already chosen their tour, so it seemed to me that we were a little behind; now that I’m thinking back, I’m surprised that I was not stressed by this :). Having the cards in a physical form, not on a PowerPoint slide, made it easy for us to take them into the garden – I also noticed that all teams took the card for their chosen tour with them. Another aspect I liked about having the cards was that most of them contained enough information to give us starting ideas on how to approach the tour.

Out of all the tours, we stopped at the following 3:

Scenario Tour – List realistic scenarios for how different types of users would use the product.

Testability Tour – Find features and tools to help in testing the application.

Complexity Tour – Identify the most complex things present in the application.

For the Scenario Tour we weren’t sure if we just needed to list the scenarios or also perform them. For the Complexity Tour we couldn’t find a working definition for “complexity in the application”. We decided to choose the Testability Tour.

Testability Tour on Slack

The application we had to work with was Slack, and luckily for me I was familiar with it as we use it at Altom and also on several projects. Gabi didn’t know the application very well, so for him it was a good opportunity to discover it.

As we didn’t have access to the internal documentation of Slack, we started listing the items we thought would make the application easier to test. We structured our notes in 3 sections:

  1. App Features
    1. API – can be used for generating test data (see the sketch a bit further down)
    2. Websockets – same as API
    3. Logs
    4. Error messages (UI level, API level and logs) – these would help the tester better understand when something goes wrong
    5. File sharing – can we get access to the cloud storage directly, not through the application, to control the test data?
    6. Bots – can we use them in our chat tests to get responses?
  2. Tools – as this is a client-server application, we started to think about what kind of tools we could use to test it:
    1. Proxy (Charles / Fiddler) or Wireshark – to intercept the calls
    2. data sets – Gabi said that he would like to be able to build specific data sets to put the application in different states.
    3. server side
      1. Top / Perfmon / Nagios – to monitor server side resource utilization
      2. Jmeter – to send requests for load and performance testing
    4. client side
      1. Perfmon / Top / Activity monitor – to check the desktop application resource utilization (the desktop client is a web app packaged as a standalone application)
      2. adb / instruments – to check the mobile application resource utilization
      3. Yslow and Developer tools – for the browser client
    5. spider tools – to discover and list different features from the application; one aspect we thought of was that if the app uses custom controls the tool won’t be able to find too many things…
  3. App Structure
    1. ease of identifying elements for UI level test automation
    2. how easy it would be to build our own test environment
    3. client app installation – do we have a silent install? We asked ourselves if this is an app feature, as it can be used by sysadmins to install Slack on large networks, or a testability feature, as it would allow us to set up the test environment more easily.
    4. server side installation – Can it be installed on premise? If yes, how easy can it be set up?

The above list was not created in the order it is now displayed, but more as a brainstorming session: when we identified one item, we would seek to explain why it could be relevant and try to categorise it in one of the main sections. What I find interesting is that we initially started with 2 sections, Features and Tools, and while coming up with new ideas we thought of adding a new section, App Structure (one could argue that this last section could easily be part of the App Features section).
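To make the first item on that list (using the API to generate test data) a bit more concrete, here is a minimal sketch – written after the workshop, not during it – of how Slack’s chat.postMessage Web API method could be used to pump test messages into a channel. The token and channel name are placeholders you would replace with your own:

```python
import requests

SLACK_TOKEN = "xoxb-your-test-token"   # placeholder: a token for a test workspace
CHANNEL = "#touring-sandbox"           # placeholder: any channel the token can post to

def post_test_message(text):
    """Post a single message via the Slack Web API (chat.postMessage)."""
    response = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        json={"channel": CHANNEL, "text": text},
    )
    payload = response.json()
    if not payload.get("ok"):
        raise RuntimeError(f"Slack API error: {payload.get('error')}")
    return payload

if __name__ == "__main__":
    # Generate a handful of varied messages so there is some data to tour around.
    for i in range(5):
        post_test_message(f"test message #{i} with *formatting* and an emoji :tada:")
```

The same approach extends to other Web API methods (creating channels, uploading files, and so on), which is why the API ended up at the top of our testability notes.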

About 45 minutes into the exercise there was still no touring of the application. We thought of taking one item from our list and starting to investigate it, so we chose the logs feature: we wanted to know if there are any client-side logs in the desktop client.

I was using a Mac, so I thought Slack would save the data under the user’s Library folder, but nothing was there. We looked in Application Support and in Cache and we couldn’t see anything relevant. I looked in the System / Library folder, and still nothing.

I googled a bit for Slack logs and local storage, and for some reason Google decided to return https://status.slack.com, which is a nice status overview of the application. This makes me think that there should be more detailed monitoring on the server :). Unfortunately, we didn’t find anything else relevant in the Google search.

I looked in the application package in /Applications. Nothing relevant there either.

The next step was to open Activity Monitor and double-click on the Slack process, then go to Open Files and Ports, where I noticed that the user files are saved in ~/Library/Containers/com.tinyspeck.slackmacgap/Data.
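As a quick aside for anyone who would rather do this from code than click through Activity Monitor: the same information can be pulled out programmatically. Here is a small sketch using the third-party psutil library; the assumption that the desktop client shows up as a process named “Slack” is mine:

```python
import psutil  # third-party: pip install psutil

def open_files_for(process_name):
    """Yield (pid, path) pairs for files held open by processes matching the name."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            try:
                for f in proc.open_files():
                    yield proc.pid, f.path
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                # Some processes can't be inspected without elevated rights.
                continue

if __name__ == "__main__":
    for pid, path in open_files_for("Slack"):
        print(pid, path)
```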

[Screenshot: Activity Monitor showing the Slack process’s open files and ports]

So this is where all the goodies are! Listing the folder content, we noticed that most of the folders are aliases for system folders.

[Screenshot: the contents of Slack’s local Data folder]

We looked into Library, and found the same behaviour: lots of Aliases.

[Screenshot: the contents of the Library folder inside Slack’s Data folder]

We also noticed the Logs folder. Unfortunately it was empty…

Next we went to Application Support and found Crashes and Slack. We dug into Slack and found what we were looking for: the log file.

[Screenshot: the Application Support/Slack folder, containing the log file]

OS X comes with a pretty neat tool, Console, which helped us inspect the log. My Slack log today shows, for example, information about users joining teams and channels, who left a channel, whether websockets are opened, etc.

Now that we had found our log, we decided to also look at what is saved locally, so we googled for an SQLite browser and found http://sqlitebrowser.org. We downloaded it and first opened localStorage.sqlite, but this had data from 2015. We then opened localCache.sqlite and found the cached data. We also tried to open localCache.sqlite-wal, but it was password-protected.
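If you would rather poke at these files from code than from a GUI browser, Python’s built-in sqlite3 module is enough to list the tables and row counts in the cache. The path below is a placeholder – point it at the localCache.sqlite file you find under Slack’s container data folder:

```python
import sqlite3
from pathlib import Path

# Placeholder: point this at the cache file you locate under Slack's
# container data folder (~/Library/Containers/com.tinyspeck.slackmacgap/Data/...).
db_path = Path.home() / "path/to/localCache.sqlite"

with sqlite3.connect(db_path) as conn:
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    for (name,) in tables:
        # Print each table and how many rows it holds, to get a feel for the cached data.
        count = conn.execute(f"SELECT COUNT(*) FROM {name}").fetchone()[0]
        print(f"{name}: {count} rows")
```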

Going back to inspect the other folders in Application Support, we noticed that Slack has an alias for AddressBook, which made us wonder about the integration between these two applications and whether we could use Address Book to inject data into Slack for testing purposes.

One of our last thoughts before the time was up was that we might have needed to define what we wanted to test. We approached this tour as if we wanted to test “everything” (we thought of client and server features, tools for performance and UI-level automation). Had we started with a more focused mission, our notes would have been very different.

What I liked about this session with Gabi was that we started with a brainstorm based on our previous experiences with similar applications and then chose one aspect – client-side logging – and drilled into it. All this time we tried to take notes so that we could use them for future, maybe more focused, touring. Here is our log, with notes in English and Romanian, to give a better view of what our outcome was after the tour.

[Image: our original touring notes]

After one hour of touring, it was time to go back inside and do a debriefing. Each team presented their work and the discussions were facilitated by the content-owners (what they learned, how they organized their work and notes…).

One thing I learned during the debriefing is that the order in which the tours are performed matters. For example, the Scenario Tour is more suitable to be done after the User Tour, as one needs a list of users to identify the scenarios, or after the Feature and Configuration Tours, to get familiar with what the application can do. This makes total sense now.

One interesting discussion was between Team #4 (Variability Tour) and Team #5 (Configuration Tour) as they toured around similar areas, and they were debating if their work was relevant for the tour taken. One of the content owners clarified that the Configuration tour can be seen as a sub-tour of the Variability Tour, the main difference being that the Configuration Tour is focused on persistent changes, while the Variability Tour is focused on temporary changes.

All in all it was a great workshop. People were highly engaged, showing thirst for knowledge and discussion. I challenge you to try testing tours with your team as an exercise and see the benefits for yourself.

P.S. of course such a meetup couldn’t have finished without a beer 😀


EuroSTAR TeamSTAR competition – support the Altom team!

Hi,

Four of our colleagues from Altom have decided to take part in the TeamSTAR competition organized by EuroSTAR. For this, they made a 2-minute video presenting their passion for testing and why they want to attend the conference.

The video can be found here: http://www.eurostarconferences.com/blog/2012/9/3/creatures-of-testing-revealed

To get to Amsterdam, they need as many votes as possible from members of the EuroSTAR community and as many comments as possible on the page above.

Their request is that we help them get to the conference!

Alex