Tabara de Testare Brasov – Kick off meeting

Back to the origins.

On November 7, 2019, full of excitement and with my plans well structured in my mind, I arrived at the “Tabara de Toamna” (Autumn Camp) in Albota.

I knew that my biggest goal (besides learning and socializing with people from the field) was to put Brasov on the “Tabara de Testare” map in the near future. It kept nagging at me that a city with a fairly significant IT community was not already part of this great group.

So I started talking, asking questions and finding out how others make things work: how the meetups run, how speakers are encouraged to present and, in the end, how the community comes together month after month.

Three months later, on February 27, 2020, all the plans and ideas came to life at the “Tabara de Testare Brasov – Kick off meeting”.

Excitement, colleagues from Bucharest at our side, a great venue ready to host us, support and encouragement via Slack from our “Tabara de Testare” friends in other cities, cakes and lots of smiles, people ready to back the idea and newcomers curious to find out the plan for the future, a captivating presentation delivered by Andrei Pirvulescu, lots of photos and a game moderated by Andrei Dobrin that called for teamwork and communication. These were the things that defined the kick-off meeting.

The second meetup will take place in less than 30 days.

Fingers crossed!

Gearing up for CodeCamp

As CodeCamp 2018 is drawing near, I keep perusing my notes and wondering about the upcoming talks. Becoming a CodeCamper for a day was such a rewarding experience last year, especially since it gave me a sense of belonging and allowed me to get together with fellow enthusiasts. #ByTheCommunityForTheCommunity is the shared vision that prompted the Testing Camp and CodeCamp to partner up in the first place. Since April 2016 (Iasi) and May 2016 (Cluj), this partnership has brought together Content Owners as well as Participants from various IT fields, some of whom have later delivered presentations or workshops at the Testing Camp Meetups.

I’ve been switching between the Tester and Developer hats for a while now, which is all the more reason to look forward to the next gathering, with its cross-disciplinary approach. But for now, I’d like to give you an overview of what I took from the previous edition.

When I registered for the 2017 edition of CodeCamp in Timisoara (our first one), I struggled with a different kind of “knapsack problem”. Choosing between 8 parallel tracks and more than 50 speakers was no easy endeavor. Neither was packing them into a single day, especially since the Testing Camp had been allotted an entire track on the agenda. However, once I had settled on my conference line-up, I simply couldn’t wait to get there and learn the ropes of new testing-, marketing- and development-related topics.

Just read on for snippets from my Camping Log.

  1. Why do Projects Fail?

I first pitched camp at Track 5 and attended a presentation delivered by Andreea Bozesan and Andrei Panu from Softvision. It focused on reasons why projects may start off on the wrong foot or simply face hurdles along the way, which prevent them from achieving their milestones or trigger failure altogether. I found the speakers’ approach highly useful, because it provided examples for all stages of the Product Life Cycle. Instead of mere theoretical scenarios, these examples illustrated actual challenges from real-life projects, such as:

  • skipping the feasibility study
  • budgeting little time for software architecture and QA
  • scope creep
  • poor management of remote teams and/or cultural differences
  • insufficient project tracking

(to name but a few of the situations brought to the table).

If I were to find some common ground between all these examples, I’d say that, more often than not, it all boils down to (lack of) communication. Among the takeaways suggested for preventing project failure, I jotted down the following:

  • management and stakeholder support
  • clear vision & realistic objectives
  • clear and optimized scope
  • formal methodology in place
  • skilled and motivated team
  • proper testing process
  • user involvement
  2. Using Technology in Online Marketing: Chatbots

The second presentation targeted (but was not limited to) Generation Z and the marketing strategies that can be employed to engage such users, who are practically born with a digital footprint and favor social media interactions. Georgiana Dragomir from Grapefruit gave us a taster of how chatbots foster customer loyalty and retention. Several case studies backed this statement up and provided memorable examples. Here are some of them:

  • The Pizza Hut chatbot (Sales & Advertising) – available via Facebook Messenger and Twitter. It is meant to simplify the ordering experience and catch up with Domino’s more advanced technical options. After a mere three months, Pizza Hut managed to increase its engagement and boost customer retention.
  • SIMI (Creative Marketing) – designed as a Personal Bartender Chatbot, which comes up with recipes based on the ingredients input by the users. To prompt retention, it also rewards its customers with free drinks and paid taxi rides to and from the bar, so as to avoid any drunk driving.
  • ERICA (Customer Service) – the digital assistant released by Bank of America. It is a proactive chatbot which uses AI, predictive analytics and cognitive messages to oversee payments and offer support in developing saving plans. This initiative is aimed at encouraging customers to change their spending habits.

Consequently, emphasis was placed on the marketing aspects rather than on the technical implementation. This shift in perspective provided me with valuable interdisciplinary insights. What I also found interesting, in addition to the use cases, is the fact that Facebook Messenger offers the necessary infrastructure for developing chatbots. This means it takes little time to implement and maintain one, making it more accessible to developers and end users alike.

  3. Infrastructure Testing for Docker Containers

Next on my line-up was the presentation delivered by Alina Ionescu from Haufe Group. It brought me closer to a type of testing I was as yet unfamiliar with, so I found it very useful that Alina focused on an actual project to contextualize the subject matter. Infrastructure Testing had been conducted for a large backend project with more than 10 dependencies, a scale that entails working with an immutable infrastructure. Since the Docker containers don’t all come up at the same time, the need arises to check that everything is up and running.

Apart from the technical benefits of using such tools as Bash or Docker, what I found particularly interesting was the process itself, which is aimed at ensuring transparency and communication at team level. The workflow involves creating a ticket before the actual deploy, so that all involved parties are informed. The infrastructure tests are run. If they pass, the ticket is closed automatically and everyone is again briefed. In case of test failure, it is possible to roll back and work on a solution. Prioritizing your tests is also an option.
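The “check that everything is up and running” step can be sketched as a small retry loop in Bash (the tool the talk itself mentioned). This is only an illustration, not Alina’s actual script: the `backend` container name and the use of Docker health checks are my own assumptions.

```shell
#!/bin/sh
# Sketch of an infrastructure check: retry a probe until it succeeds
# or the attempts run out.

# wait_for <attempts> <command...>: run the command once per second
# until it succeeds; return non-zero if it never does.
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example probe, guarded so the sketch still runs where Docker is absent:
# wait until a hypothetical container named "backend" reports a healthy
# state via its HEALTHCHECK.
if command -v docker >/dev/null 2>&1; then
  wait_for 5 sh -c \
    '[ "$(docker inspect -f "{{.State.Health.Status}}" backend 2>/dev/null)" = healthy ]' \
    || echo "backend not healthy yet (expected outside a real deployment)"
fi
```

In a setup like the one described, a probe of this kind would run as the infrastructure-test stage, with its exit code deciding whether the ticket is closed automatically or a rollback is started.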

Having pointed out the process, it is also well worth mentioning that Infrastructure Testing is only one of the stages, slotted after the code deploy. Below is a visual rendition of how testing is parceled out:

Code Camp 2017 Infrastructure Testing

(Adapted from Alina’s presentation)

Visualizing the process aided me in understanding each stage better and grasping the benefits of this “Deploy-Destroy-Redeploy” approach, which is less time-consuming and more performance-oriented. Writing automated tests in the same environment that the Developers use is another plus. The deployments thus become more efficient and predictable, while focus is placed on decreased recovery times and higher quality. An extensive project like the one in the example benefits from this approach, which I think can also come in handy when scaling an initially smaller project.

  4. A Game of Performance

Delivered by Alex Moldovan from Fortech, this presentation revolved around the mobile aspect of performance, suggesting various approaches to handling browser issues, app size and JavaScript.

It was quite intriguing for me to take a peek behind the curtains, especially since I had already come across and muddled through some of those issues myself, yet only as a user. Being introduced to the challenges mobile developers face on the eclectic and ever-evolving browser and device market really puts things into perspective. For one, it definitely makes you empathize more with the struggles put into providing users with an efficient, effective, satisfactory and accessible experience.

The catchy titles, the well-chosen visuals and the Alice-Developer-Persona made the suggested solutions more memorable.

Here are some of my takeaways:

Code Camp 2017 Mobile Performance
  5. Testing Trends or Buzzwords?

The last item on the agenda of the Testing Camp set about rounding off a diverse and engaging Track. Throughout their sessions, the content owners had offered their views on a number of topics, ranging from Infrastructure, Front-end and Continuous Delivery to Planning and Exploratory Testing. It therefore seemed only fitting for Iulian Benea from Steadforce to prompt the audience to consider how Testing is evolving. Three aspects provided me with ample food for thought.

First of all, Iulian addressed the current need to automate tests as much as possible, in order to catch as many bugs as possible at an early stage. While this approach is cost-effective and less time-consuming, I think it should still leave room for Exploratory Testing, which can uncover important bugs in a shorter time span and can also be conducted in a structured and traceable manner (e.g. through SBTM).

The second aspect revolved around the specialization of testing. Usability, Performance, Security, Data Analysis and DevOps are just some of the focus points which have gained leverage and popularity over time. These are more often than not connected with or influenced by the new fields that are in high demand nowadays and constitute the third course of our “food for thought” meal: Big Data, Augmented Reality, Artificial Intelligence, the Internet of Things and the coveted Blockchain Technology, to name but a few.

Drawing on these three aspects, we went on to discuss how Testers could adapt to such almost paradigmatic changes in order to perform their tasks. Developing one’s skills beyond testing has become paramount. Adding requirements analysis, scripting, programming, management and even legal compliance to one’s profile are some examples in this respect. Specializing in Mobile Development, DevOps or Big Data has also been requested by various industries. During the Q&A session, we broached the current trend in Timisoara: from the audience’s experience, Testers are learning how to write code, while Developers are conducting more testing. Some companies are experimenting with Test-Driven Development, while others favor employing Automation Testers with JS.

It was a lively discussion, and I felt inwardly glad that I had selected such a varied range of topics at CodeCamp 2017 that I could add to my technical kit and further explore.

  6. Gamification

In addition to the various tracks, the Code Campers had the opportunity to engage in various gamified activities designed by the partner companies present at the event. During the breaks, you could take online quizzes on your topic(s) of interest, dabble in Augmented Reality, try your hand at technical trivia or participate in the Code Camp Raffle.

Bottom line: apart from dealing with the technical challenges prepared, you could also get to know fellow campers and network. Which is what getting together on such occasions is basically all about: experimenting in a safe environment, exchanging best practices and keeping up to date with the most recent trends.

Curious? Then just register here. You can also sign up as a Content Owner and prepare to share your experience with eager Code Campers! See you on April 21st!

Celebrating 5 years of “Tabara de Testare” Bucuresti

If you check our meetup of November 9, we had set up our “usual” second-Thursday-of-the-month meetup, except that it wasn’t usual at all… We celebrated 5 years of monthly meetups at Tabara de Testare Bucuresti!!!

On the agenda for that evening we had an overview of the presentations and workshops held during 2017, but this time we also prepared an overview of the previous years; afterwards we continued with the workshop on “Storytelling and communication” by Stefan Bratosin.

I’m going to start with Stefan’s workshop, because the 5-years part needs to be saved for last, like all good things.

Stefan’s workshop was inspired by some improvisation classes he took. As he got more and more involved in the classroom, he realized how the exercises he was doing could help other testers communicate better and become better storytellers, since this is a big part of what we do.

The workshop had a lot of cool and fun exercises, like:

  • The whole group had to count to 30 without anyone overlapping. The exercise was very interesting and we managed it as a whole group (there were about 20 of us). The “aha” part came when Stefan asked us to close our eyes and try to count to 30 that way. We listened and focused far better than when our eyes were open and we searched the room to see who would say the next number. This time we actually listened!
  • Question rally – something similar to “Whose Line Is It Anyway?” (here is an example of the show with Whoopi Goldberg), where there were only questions. We were given a theme and could answer only with a question. A really fun exercise, in which we could see open and closed questions, probing questions and rephrasing in action.
  • Another exercise from Stefan’s workshop showed us the difference between using “yes, and” and using “yes, but”. We noticed that “yes, but” was not at all constructive; at least during the exercise, it basically cut off the conversation.
  • In the storytelling exercise, a team of 5 volunteers had to create a story and tell it as well as they could. It was really fun, and again we noticed how important it is to listen to the others, or things can derail quickly. One example of a story was “New Year’s Eve”: our team had to create a story whose main character was our friend “Georgica”. The twist of the exercise was that Stefan would point at us when we had to switch and take over the role of the narrator. By the second team’s attempt, they had noticed that they needed to listen closely to their colleagues rather than focus on what came next, so they could continue what the story was all about. If you didn’t do that (as happened in our team’s story), the main character didn’t even make it to the New Year’s Eve party 🙂

Besides all the fun we had during the workshop, we were reminded how important communication and storytelling are to a tester’s job, so a big thank you to Stefan.

Here are some pictures from the exercises that we did:


5 years of Tabara de Testare Bucuresti

5 years means a lot of time but as they say “time flies when you’re having fun”.

So what we wanted to do differently this time from the other anniversary editions was to try and summarise all the years: basically, a trip down memory lane.

Personally, these were the best slides I have worked on so far for the anniversary editions. Putting them together reminded me how we started, how many people helped us get going, what we did over these past years and, of course, how much we accomplished, showing in the end that we are truly a community of software testers and that, without the people in it, we wouldn’t have anything.

It was really challenging to summarise all of the above (and much more), so we tried our best to do it through an infographic:


As you could see in the infographic, there were 71 meetups in 60 months. We wanted to showcase the meetups from each year, so we created GIFs with pictures from them and mentioned some out loud, since there were too many to go through one by one.

(GIF collages with pictures from the meetups of each year, 2012–2017)

We couldn’t celebrate 5 years of Tabara de Testare Bucuresti without our traditional “Cartoon Tester” special cake. A big thank you also goes to our supporters for this edition, ING Romania and QTeam Software Solutions, who helped us with the cake, snacks and beverages.

 


What’s next?

That’s a really good question!!! I remember that 5 years ago, when we started having the meetups, I was thinking: “let’s start it and see where it goes”. Oh well, it went very well, so for the next 5 years I expect even more amazing things: even cooler meetups, more international speakers, more people becoming content owners, more workshops throughout the years; and the list could continue.

I’m not sure what’s next, nor could I say specifically what the next 5 years will look like, but I certainly know they are going to be awesome, since YOU ARE “TABARA DE TESTARE”!!! And knowing the amazing people who are members of this community, there is no other way than an even greater journey in the years to come.

See you at the 10 years anniversary edition,

Andrei Pirvulescu

Looking for Content Owners for Tabara de Toamna – Edition No. 5

Are you a Software Tester, or do you work in a related field? Would you like to share your knowledge with others? Then don’t hesitate: deliver a workshop at Tabara de Toamna!

Whether you have already delivered numerous hands-on sessions or are only now getting used to public speaking, this is a place where you can keep improving. You will have our support in preparing the workshop, as well as feedback, so you can present an improved version at other national and international events.

As in previous editions, we encourage a hands-on approach and invite you to take part in a unique experiment over a long weekend.

Why? Because the 2017 edition is dedicated to experimenting with various methods, approaches and tools, in a safe environment that encourages exchanging experience and continuous learning, without the constraints of a particular product or a specific working environment.

If you would like to join us, fill in this form and send us your workshop proposal between 01.09 and 10.09.

Come join us at the Camp:

  • Why?   –  for “Experiments and Experiences”
  • When?  –  26.10 – 29.10.2017
  • Where? –  Hotel Silva, Busteni

See you there,

The TdT Facilitators

The English version is available here.


Risk Analysis in Software Testing with James Bach

The October 2016 meetup of Tabara de Testare Cluj was a special one: the content owner was James Bach, who delivered a presentation on Risk Analysis in Software Testing.

In preparation for the courses he runs for testers, James tried out an exercise with the participants on risks, their analysis and the implications they have for testing. James and the participants looked for test ideas for a Level 4 autonomous car, then compared notes. What came of it? We’ll let you discover for yourselves:

 

A Taster of Automated Testing

Last Saturday, I attended my first monthly meet-up hosted by the Testing Camp in Timisoara. This session placed special emphasis on Go.CD and Docker, while aiming at introducing the topic to those of us who are yet unfamiliar with such tools or frameworks. Since I’ve mainly focused on Manual Testing so far, I was eager to get a taster of what Automation is all about, especially in the context of Continuous Delivery. So I packed my laptop and went to join the fellow Meeples who had registered for the session, delivered by Alina Ionescu and facilitated by Camil Bradea, Iulian Benea and Ecaterina Ganenco.

In order to assist us in this process, Alina informed us in advance about the tools we needed to install on our computers prior to the meet-up. During the actual session, she also provided a step-by-step guide for us to rely on while creating our testing environment. Once we received the instructions, we paired up and started working on our assignment. It was quite challenging to navigate our way via the Terminal by typing and executing text-based commands, but it was all the more satisfying when the steps started rendering results. Whenever we got side-tracked by an error message or some other constraint, we exchanged views with other teams or received support from the facilitators, who mingled and tackled as many questions as possible. By the end of the meet-up, I was thrilled to have set up my own GitHub account and to have performed my first Commit and Push. Creating the necessary stages and jobs was equally rewarding.
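For anyone wanting to replay the Commit and Push part at home, the steps look roughly like this. This is only a sketch, not the session’s actual guide: a local bare repository stands in for GitHub so it runs offline, and the file name, commit message and identity are made up.

```shell
#!/bin/sh
# Replaying the "first Commit and Push" steps locally. A bare repository
# on disk stands in for GitHub, so no account or network is needed; with
# GitHub, the remote URL would be the repository's HTTPS or SSH address.
set -e

tmp=$(mktemp -d)

git init --bare "$tmp/remote.git"        # stand-in for the GitHub repo

git init "$tmp/work"                     # local working copy
cd "$tmp/work"
git config user.email "camper@example.com"   # placeholder identity
git config user.name  "Code Camper"

echo "hello, CodeCamp" > notes.txt       # something to commit
git add notes.txt
git commit -m "My first commit"

git remote add origin "$tmp/remote.git"  # register the stand-in remote
git push origin HEAD:main                # publish the commit
```

The same commands work against a real GitHub repository once `origin` points at its URL instead of the local stand-in.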

Most of the participants managed to run an automated test on their machine within the allotted time frame. Even those who had only succeeded in passing some of the stages simply resolved to try again at home, since we could all revisit the steps in the guide that Alina had provided at the beginning of the session.

To sum this whole experience up, I’ll just leave you with my “Lessons Learned”:

Risk Analysis for Testers – an evening with James Bach

 

On Monday, October 10, at 18:30, we are expecting you at The Office, Cluj for a special meetup: James Bach will deliver a presentation on Risk Analysis in Software Testing.

As last year, James Bach is coming to Cluj again on the occasion of the RST and Session-Based Test Management courses organized by Altom. James accepted our invitation to speak at a meetup on a topic highly relevant to many of us.

James Bach

“I’ve noticed that a lot of testers struggle with risk analysis, and yet risk analysis is really the core purpose of testing. So, for this presentation, I want to engage you in a risk analysis exercise.” – James Bach

At the end of the presentation, we will have an hour of networking to get to know each other better.

So, what do you say, will you come?

RSVP on Meetup: http://www.meetup.com/Tabara-de-Testare-Cluj/events/233995367/

 

Testing tours at TdT – a tester’s tale

The last TdT meetup was somewhat of a special occasion. For one, it was a workshop instead of a presentation, which meant we formed groups to work together, then presented and discussed the work we had just done (instead of only speaking from experience gained prior to the discussion). It was also “owned” by four people (instead of the typical one or two), which helped a lot with facilitating the discussion both while we were working and afterwards, while discussing challenges and insights. I also had the opportunity to work with people I hadn’t had the chance to before, which is always nice :-).

The workshop was structured in a fairly typical way: we arrived at the venue, chatted about random things (I found and solved a rather interesting puzzle game in the kitchen :D), then Dolly, Dorel, Elena and Oana, the content owners, introduced us to the theme of the meeting (learning!), talked about their goals for the workshop (wanting to become teamSTAR this year; go team go!), then described touring and how it fit with the theme.

They also introduced us to the app we were going to test: a chat platform named Slack, similar to an IRC client, or so I thought before the workshop. They had cue cards printed out with the different tours we could choose from; they went with the FCC CUTS VIDS tours.

Testing tours are not really new, but I’ve rarely seen them used on projects I’ve been on, or heard about them from other testers, not to mention used consistently (this statement is of course limited by my experience :-)). I first heard about touring while doing the BBST Test Design course, where one of the techniques we covered was touring (and where I first read this article by Michael Kelly). While this was my first contact, I only started to understand the power of touring after reading James Whittaker’s take on it, described in his book “Exploratory Software Testing” (he also has a couple of articles dedicated to this technique here).

I read the book while on a project testing medical software, and this technique helped me a lot in learning, mapping and better understanding the intricate connections between elements of the product. We toured the application and mapped the central data elements it worked with, then used this map to vary the data we were working with. After that, we toured the application again to figure out which features were important for different types of users, then created scenarios based on these user types.

While using tours proved invaluable on that project, on my current project I can’t really use the technique consistently (I’m helping set up an automation environment and writing automated checks, with a bit of testing sprinkled in when release candidates are promoted).

Given my current situation, attending the workshop was an excellent opportunity to use the technique once more. Participants formed groups of 2 to 4 people and chose a tour type. I had the pleasure of working alongside Tamara and Florin, both fairly new to testing, but eager to start. We chose the Variability tour because it seemed easy enough: just look for things in the app that you can change, and change them. We were somewhat wrong about this.

After choosing the type of tour, we quickly set our laptops up in a corner of the garden outside the venue and designated roles for each of us: Tamara sat at the keyboard and did most of the navigating and “trying things out”; Florin did research and compared what we found in the desktop version of Slack with his Android version; I took notes and came up with suggestions for which elements to vary.

Our first challenge came in the form of figuring out how much to vary the data we were working with while still learning new things about the app. Our reasoning was that, since the goal of the Variability tour was to learn the capabilities of the application by changing the different elements it is comprised of (buttons, settings etc.), we should see these changes take effect, not just compile a list of possible variations. So we limited the amount of variation in the hope of covering more ground.

We started our tour with the different ways Notifications can be set up in the app. Right from the start, there were a lot of ways to customize it:

  • Do we want to receive notifications from everyone or just a certain group? How about specific channels within a group?
  • Do we only want direct notifications and a select group of highlighted words?
  • Or possibly from specific users?

Next there were specific ways the notification popup window could look and feel:

  • Should it contain the text being received?
  • Should it flash the screen?
  • Should it make a sound? If so, which sound?
  • Where should it appear on the monitor? How about if there are multiple monitors connected?

We also learned that users can set Do Not Disturb periods (based on Time Zones only reachable from the web version of the app), as well as timers that can act as a temporary DnD when we want to do something uninterrupted. These settings, of course could be further customized on a group/channel level.

After we were done with Notifications, we moved on to varying the look of the app: changing its theme to a different preset, creating a new preset altogether, even looking at importing themes into Slack. We found out that this option is NOT variable in the mobile version of the app. When we started focusing on the Advanced options to see if we could find any interesting features, we realized there were only 10 minutes left of the session, and we were at the 3rd point of a 6-point list of settings.

Managing time turned out to be our second challenge.

Needless to say, we tried to wrap up the session by finding elements that were more ephemeral than the settings of the app. We noticed another team had enabled TaskBot, so we varied the different elements of a task created with the bot, created temporary channels made out of our group and tried to send different types of messages. We also ended up talking about the way we varied the messages sent throughout the session so we added that in our notes as well.

After the one hour testing session we gathered inside, where each team talked about their brief experience using tours, and whether they see this as something they could do on their current project:

  • There were a few feature “tourists”, who had interesting takes on what to include as a feature and what to exclude (my take on this is that while it’s important to have a common understanding of a feature on a given project I am on, this might change if I move on to another project)
  • There was one group that picked scenario tours, who had a hard time coming up with realistic scenarios – or at least they did not talk about the ways they tried to get to these scenarios. (my conclusion was that understanding the features and the users of the app would greatly help in this, so touring the app from these perspectives before searching for scenarios would probably be a better way to learn the app)
  • There was a group who did a Configuration tour, which was fairly similar to our Variability tour, since they are related in concept. (my take on the difference between the two is that while in the Variability tour you learn about the application by varying things in the app, the Configuration tour focuses on the permanent changes that you can “inflict” upon the app).
  • Lastly, there was a team that did a Testability tour, and they hardly dealt with the application itself, and looked for logs, access to local databases, controls within the app to use to automate different checks with, possible bots to use for creating/modifying/deleting key elements of the app. This tour, while very different and interesting, was the least context dependent of all of the tours (making the experience more easily transferable to other projects)

While we didn’t get to discuss it during the workshop, a third challenge I see with ongoing tours is this: do we keep the map we created up to date as new changes are added to the app? In other words, does the artefact we create during a tour lose its value? I know it’s valuable for a while, but is it worth keeping the map/list updated? As I see it, the act of creating the map or list is vastly more valuable, and it keeps its value while you base other tours on it; I am more skeptical of its usefulness after that, though.

All in all, I came away from this workshop with interesting ideas and a firmer grasp on the technique than before. If I were to sum up the experience of touring an application and engaging in the discussions afterwards in one sentence, it would be this: without resorting to assumptions and our previous experience with the application, each type of tour takes a lot of time, and some tours are more useful early on (Testability, Feature, User tours) while others are better done later (Scenario tours), but ultimately it’s worth the effort.

Testability Tours – Testing Tours Meetup

Last week, at the monthly meetup organized by Tabara de Testare, four of our colleagues organized a workshop on Testing Tours (we call them content-owners at Tabara de Testare). Even though many people have written about touring over the past 10–15 years, this technique seems very little known and used by practitioners. This is why, when my colleagues chose this theme, I thought it was a great idea. Also, there are many tours that I’m not familiar with, so it was a good learning opportunity for me.

About 25 people gathered for this meetup and, after the introduction delivered by the content-owners, we split into teams of 2, 3 or 4 testers and went out into the garden to get started. This month’s venue was ClujHub, and the great thing about this co-working space, besides the downtown location, is the garden!

p_20160907_190235_pn

I teamed up with Gabi Kis, a fellow facilitator from Tabara de Testare. We took the tour cards prepared by the content-owners and went through all of them to see which tour we’d like to perform – all the other teams had already chosen their tour, so it seemed we were a little behind; thinking back, I’m surprised I wasn’t stressed by this :). Having the cards in physical form, not on a PowerPoint slide, made it easy to take them into the garden – I noticed that every team took the card for their chosen tour with them. Another thing I liked about the cards was that most of them contained enough information to give starting ideas on how to approach the tour.

Out of all the tours, we stopped at the following 3:

Scenario Tour – List realistic scenarios for how different types of users would use the product.

Testability Tour – Find features and tools to help in testing the application.

Complexity Tour – Identify the most complex things present in the application.

For the Scenario Tour we weren’t sure whether we just needed to list the scenarios or also perform them. For the Complexity Tour we couldn’t find a working definition of “complexity in the application”. So we settled on the Testability Tour.

Testability Tour on Slack

The application we had to work with was Slack and, luckily for me, I was familiar with it, as we use it at Altom and on several projects. Gabi didn’t know the application very well, so for him it was a good opportunity to discover it.

As we didn’t have access to the internal documentation of Slack, we started listing the items we thought would make the application easier to test. We structured our notes in 3 sections:

  1. App Features
    1. API – can be used for generating test data
    2. Websockets – same as API
    3. Logs
    4. Error messages (UI level, API level and logs) – these would help the tester better understand when something goes wrong
    5. File sharing – can we access the cloud storage directly, rather than through the application, to control the test data?
    6. Bots – can we use them in our chat tests to get responses?
  2. Tools – as this is a client-server application, we started to think about what kind of tools we could use to test it:
    1. Proxy (Charles / Fiddler) or Wireshark – to intercept the calls
    2. data sets – Gabi said that he would like to be able to build specific data sets to put the application in different states.
    3. server side
      1. Top / Perfmon / Nagios – to monitor server side resource utilization
      2. Jmeter – to send requests for load and performance testing
    4. client side
      1. Perfmon / Top / Activity monitor – to check the desktop application resource utilization (the desktop client is a web app packaged as a standalone application)
      2. adb / instruments – to check the mobile application resource utilization
      3. Yslow and Developer tools – for the browser client
    5. spider tools – to discover and list different features from the application; one aspect we thought of was that if the app uses custom controls the tool won’t be able to find too many things…
  3. App Structure
    1. ease of identifying elements for UI level test automation
    2. how easy it would be to build our own test environment
    3. client app installation – do we have a silent install? We asked ourselves whether this is an app feature, as it can be used by sys admins to install Slack on large networks, or a testability feature, as it would make setting up the test environment easier.
    4. server side installation – Can it be installed on premise? If yes, how easy can it be set up?

The above list was not created in the order displayed here; it grew out of a brainstorming session: when we identified an item, we would try to explain why it could be relevant and categorise it under one of the main sections. What I find interesting is that we initially started with two sections, Features and Tools, and while coming up with new ideas we added a third, App Structure (one could argue that this last section could easily be part of App Features).
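The first item on our list – using the API to generate test data – can be sketched in a few lines. This is only an illustration: `chat.postMessage` is a real Slack Web API method, but the token and channel shown are placeholders, and error handling is omitted.

```python
import json
import urllib.request

SLACK_API = "https://slack.com/api/chat.postMessage"  # Slack Web API method

def build_test_message(channel, text):
    """Build the JSON payload for a chat.postMessage call."""
    return {"channel": channel, "text": text}

def post_test_message(token, channel, text):
    """Send one message via the Slack Web API; handy for seeding chat test data."""
    req = urllib.request.Request(
        SLACK_API,
        data=json.dumps(build_test_message(channel, text)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Seed a channel with predictable data before a UI check (placeholder token):
# post_test_message("xoxb-…", "#testing-tours", "fixture message 001")
```

A loop over `build_test_message` calls would let you put the application into a known state before a tour, which is exactly the kind of testability feature we were listing.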

About 45 minutes into the exercise, we still hadn’t toured the application. We decided to take one item from our list and start investigating it, so we chose the logs feature: we wanted to know whether there are any client-side logs for the desktop client.

I was using a Mac, so I thought Slack would save the data under the user’s Library folder, but nothing was there. We looked in Application Support and in Cache and we couldn’t see anything relevant. I looked in the System / Library folder, and still nothing.

I googled a bit for Slack logs and local storage, and for some reason Google returned https://status.slack.com, which is a nice status overview of the application. This makes me think there should be more detailed monitoring on the server :). Unfortunately, we didn’t find anything else relevant in the Google search.

I looked in the application package from /Applications. Nothing relevant there either.

The next step was to open Activity Monitor and double-click the Slack process; in the Open Files and Ports tab we noticed that the user files are saved in ~/Library/Containers/com.tinyspeck.slackmacgap/Data.

slack_activitymonitor_poza3

So this is where all the goodies are! Listing the folder contents, we noticed that most of the folders are aliases for system folders.

slack_localdata_list_poza4

We looked into Library, and found the same behaviour: lots of Aliases.

slack_localdata_library_list_poza5

We also noticed the Logs folder. Unfortunately it was empty…

Next we went to Application Support and found Crashes and Slack. We dug into Slack and found what we were looking for: the log file.

slack_localdata_appsupport_slack_list_poza6

OS X comes with a pretty neat tool, Console, which helped us inspect the log. Below is an example of what my Slack log shows today: you can find information about users joining teams and channels, who left a channel, whether websockets are opened, etc.
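This kind of inspection can also be scripted rather than done in Console. Here is a minimal sketch that filters a client log for join/leave events; the log line format below is invented for illustration, since Slack’s actual client log format differs.

```python
import re

# Illustrative log lines; the real Slack client log format may differ.
SAMPLE_LOG = """\
2016-09-07 19:02:35 info: user alice joined channel #testing-tours
2016-09-07 19:03:10 info: websocket opened
2016-09-07 19:05:42 info: user bob left channel #testing-tours
"""

def channel_events(log_text):
    """Return (user, action, channel) tuples for join/leave events."""
    pattern = re.compile(r"user (\S+) (joined|left) channel (\S+)")
    return [m.groups() for m in pattern.finditer(log_text)]

for user, action, channel in channel_events(SAMPLE_LOG):
    print(user, action, channel)
```

A filter like this could feed an automated check, e.g. asserting that a test user’s join actually shows up in the client log.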

Now that we had found our log, we decided to also look at what is saved locally, so we googled for an SQLite browser and found http://sqlitebrowser.org. We downloaded it and first opened localStorage.sqlite, but this had data from 2015. We then opened localCache.sqlite and found the cached data. We also tried to open localCache.sqlite-wal, but it was password-protected.
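If you would rather not install a GUI browser, Python’s standard sqlite3 module can do a first pass over such a file. A minimal sketch, assuming you have read access to the file found under the container folder:

```python
import sqlite3

def list_tables(db_path):
    """List the tables in an SQLite file, as an SQLite browser's sidebar would."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]

# e.g. list_tables("localCache.sqlite") on the file found under
# ~/Library/Containers/com.tinyspeck.slackmacgap/Data
```

From there, a `SELECT * FROM <table> LIMIT 10` per table gives a quick feel for what the client caches locally.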

Going back to inspect the other folders from Application Support, we noticed that Slack has an alias for AddressBook, which made us wonder about the integration between these two applications and whether we could use Address Book to inject data into Slack for testing purposes.

One of our last thoughts before time was up was that we might have to define what we wanted to test. We approached this tour as if we wanted to test “everything” (we thought of client and server features, tools for performance and UI-level automation). Had we started with a more focused mission, our notes would have been very different.

What I liked about this session with Gabi was that we started with a brainstorm based on our previous experiences with similar applications and then chose one aspect – client-side logging – and drilled into it. All the while we tried to take notes so that we could use them for future, maybe more focused, touring. Here is our log, with notes in English and Romanian, to give a better view of our outcome after the tour.

testingtours_originallog

After one hour of touring, it was time to go back inside for a debriefing. Each team presented their work, and the discussions were facilitated by the content-owners (what they learned, how they organized their work and notes…).

One thing I learned during the debriefing is that the order in which the tours are performed matters. For example, the Scenario Tour is more suitable after the User Tour, as one needs a list of users to identify the scenarios, or after the Feature and Configuration Tours, to get familiar with what the application can do. This makes total sense now.

One interesting discussion was between Team #4 (Variability Tour) and Team #5 (Configuration Tour): they toured around similar areas and were debating whether their work was relevant to the tour they had chosen. One of the content-owners clarified that the Configuration Tour can be seen as a sub-tour of the Variability Tour, the main difference being that the Configuration Tour focuses on persistent changes, while the Variability Tour focuses on temporary changes.

All in all it was a great workshop. People were highly engaged, showing thirst for knowledge and discussion. I challenge you to try testing tours with your team as an exercise and see the benefits for yourself.

P.S. of course such a meetup couldn’t have finished without a beer 😀

img_4069