OLAP cube for tech support process analysis

We have implemented an OLAP cube to analyze our products' tech support processes in real time. Here is how the system works and what advantages it has given us.


Tech support processes involve every participant in product operation: support engineers, developers, and of course the PM and the client. At the lower level, the SLA obliges us to monitor bug fixing; at the upper level, we have to watch the product's overall state so errors don't slow down its development.

When we decided to create tech support analytics, the main goal was process transparency. Support tickets are gathered in Jira, while the developers create their tasks in TFS. There was no direct link between the two; we had to track statuses manually, so the overall picture was very difficult to manage.

Therefore, last year the support team worked out metrics that would give all stakeholders consolidated information on workload dynamics and the request-processing speed of both the support team and the developers.

Under the OLAP cube hood

Our analytics operate based on the following metrics:

  1. Open tasks: what the tech support received, what the developers are fixing.
  2. Closed tasks: what the support handled by themselves, where the developers were involved, and which issues were moved to the backlog (when the team agreed with the client that fixing a feature wasn't a priority at that moment).
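The two metric groups above boil down to a simple classification over ticket records. A minimal sketch, assuming hypothetical field names (`status`, `tfs_task`) rather than our actual schema:

```python
from collections import Counter

def classify(tickets):
    """Bucket tickets into the open/closed metric groups described above.
    Field names are illustrative, not the real Jira schema."""
    counts = Counter()
    for t in tickets:
        if t["status"] == "open":
            # Open: still in support's queue, or already escalated to the developers.
            counts["open_dev" if t.get("tfs_task") else "open_support"] += 1
        elif t["status"] == "backlog":
            # Deprioritized by agreement with the client.
            counts["backlog"] += 1
        else:
            counts["closed_dev" if t.get("tfs_task") else "closed_support"] += 1
    return counts

tickets = [
    {"status": "open"},
    {"status": "open", "tfs_task": "TFS-101"},
    {"status": "closed"},
    {"status": "closed", "tfs_task": "TFS-102"},
    {"status": "backlog"},
]
print(classify(tickets))
```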

For a while the support team collected these metrics manually to prove to us that we needed them. After that, the BI engineers and infrastructure department specialists automated the process and assembled the cube.

It is built on the Microsoft BI stack (ETL: MS SSIS; analytics: MS SSAS; database: MS SQL Server). We connect to Jira via its API and get all the ticket data we need: when each ticket was created, its status, which TFS task it is tied to, etc. With TFS we work directly, taking data from its database.
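The ETL step essentially flattens each Jira issue payload into the columns the cube needs. A hedged sketch of that flattening, using Jira's REST issue shape; the `customfield_tfs` name is an assumption, since real custom-field IDs vary per installation:

```python
def flatten_issue(issue):
    """Flatten a Jira-style issue payload into a flat record for the cube.
    'customfield_tfs' is a hypothetical custom field holding the TFS link."""
    f = issue["fields"]
    return {
        "key": issue["key"],
        "created": f["created"],
        "status": f["status"]["name"],
        "tfs_task": f.get("customfield_tfs"),  # link entered by the support engineer
    }

issue = {
    "key": "SUP-42",
    "fields": {
        "created": "2020-05-01T10:00:00.000+0000",
        "status": {"name": "Open"},
        "customfield_tfs": "TFS-101",
    },
}
print(flatten_issue(issue))
```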

Aside from current states, we also pull the full history of task changes from Jira. This lets us calculate all transitions between task states, so, for instance, there is no duplication in the data if a task is closed and reopened several times.
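One way to picture the deduplication: derive the status transitions from a Jira-style changelog and count a resolution only once, however many reopen cycles the ticket went through. A sketch under assumed field names:

```python
def transitions(changelog):
    """Extract (from, to, at) status transitions from a Jira-style changelog."""
    out = []
    for h in changelog["histories"]:
        for item in h["items"]:
            if item["field"] == "status":
                out.append((item["fromString"], item["toString"], h["created"]))
    return out

def final_close_time(changelog):
    """A ticket closed and reopened several times counts once:
    only the last transition into Closed is kept."""
    closes = [at for frm, to, at in transitions(changelog) if to == "Closed"]
    return closes[-1] if closes else None

log = {"histories": [
    {"created": "2020-05-01", "items": [{"field": "status", "fromString": "Open", "toString": "Closed"}]},
    {"created": "2020-05-03", "items": [{"field": "status", "fromString": "Closed", "toString": "Open"}]},
    {"created": "2020-05-07", "items": [{"field": "status", "fromString": "Open", "toString": "Closed"}]},
]}
print(final_close_time(log))
```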

One intricacy is establishing the link between a Jira ticket and a TFS task. The support engineers add the TFS link themselves, and free-form data entry carries a risk of errors and inaccuracies. We developed a system of checks that validates the connections by indirect signs: projects, dates, assignees, etc.
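Such validation can be pictured as a plausibility score over those indirect signs. The weights, thresholds, and field names below are illustrative, not our production rules:

```python
def link_score(ticket, task):
    """Score how plausible a manually entered Jira->TFS link is,
    using indirect signs: project, dates, people. Weights are illustrative."""
    score = 0
    if ticket["project"] == task["project"]:
        score += 2  # same product project is the strongest signal
    if task["created"] >= ticket["created"]:
        score += 1  # the dev task should normally follow the support ticket
    if ticket["assignee"] in task["participants"]:
        score += 1  # shared people suggest a real hand-off
    return score

ticket = {"project": "ProductA", "created": "2020-05-01", "assignee": "ivanov"}
good = {"project": "ProductA", "created": "2020-05-02", "participants": ["ivanov", "petrov"]}
bad = {"project": "ProductB", "created": "2020-04-01", "participants": ["sidorov"]}
print(link_score(ticket, good), link_score(ticket, bad))
```

A link scoring below some threshold would be flagged for a human to re-check rather than rejected outright.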

For each metric we can define arbitrary rules, enriching the calculation logic. Additional controls verify that the links between the resulting metrics remain intact, so in the future we will be able to add metrics without compromising the overall picture. This approach will also let us add data from entirely new sources: for instance, to link tasks with articles in Confluence.

The results we got

The cube shows the tech support workload, the dynamics of product request processing, and how the development team executes the tasks created by tech support. The whole volume of requests is processed in about a minute, so updates happen virtually in real time.

As a result, PMs, managers and the tech support service itself can control:

  • How effectively the support and development teams cooperate, and how quickly they solve each other's tasks.
  • How well problem solving works within the tech support scope, and how quickly knowledge spreads between developers and support engineers: if everything's fine, the number of tickets escalated from tech support to the product devs should decrease.
  • Whether we have enough tech support staff on each product, judged by how quickly the ticket count goes down.

And most importantly, we can spot status misalignment between Jira and TFS, i.e. when a task is closed in TFS but open in Jira, or vice versa. The latter scenario is a serious risk for the team, because it means the support considers the task resolved while the developers are still working on it. Each such case must be investigated quickly: either someone forgot to flip a status, or there was a communication failure.
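Detecting the misalignment reduces to comparing the open/closed state of each linked pair. A minimal sketch, assuming closed-state flags are already extracted for both systems:

```python
def misaligned(pairs):
    """Flag Jira/TFS pairs whose open/closed states disagree.
    Each pair: (jira_key, jira_closed, tfs_id, tfs_closed)."""
    risky, minor = [], []
    for jira_key, jira_closed, tfs_id, tfs_closed in pairs:
        if jira_closed and not tfs_closed:
            # Support thinks it's resolved, developers are still on it: serious risk.
            risky.append((jira_key, tfs_id))
        elif tfs_closed and not jira_closed:
            # Likely someone just forgot to flip the Jira status.
            minor.append((jira_key, tfs_id))
    return risky, minor

pairs = [
    ("SUP-1", True, "TFS-10", True),    # consistent
    ("SUP-2", True, "TFS-11", False),   # serious: closed in Jira, open in TFS
    ("SUP-3", False, "TFS-12", True),   # minor: closed in TFS, open in Jira
]
print(misaligned(pairs))
```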

Finally, the cube has automated the preparation of the regular product support reports we provide to our clients. Data no longer has to be gathered manually: 90% of the job is done at the push of a button. The remaining 10% is the intellectual contribution of the support service and the PMs, who add their analysis of completed tasks.
