Rethinking Toggl projects to get better insights

Hello everyone. Following the discussions we had back at the gathering, I'm making a proposal with two things in mind:

  • We will never track everything, and there's little value in trying to do so
  • We need to know the main areas of work and how they relate to our budget, so we can make more informed decisions

For that, I'd start with a simple first step: tidying up the projects and tracking the basics. Here is the list and what I understand each to cover; please share your ideas/additions:

Sysadmin: anything related to ofn-install, plus server management such as provisioning, deployment, or dealing with incidents (what we called operations). For now, I don't think we'll need to differentiate between them.

Bugs: I didn't know this one existed until the gathering. I've started using it whenever I'm dealing with a bug, whether fixing it, investigating the issue, or even discussing it with product. I propose we keep it that way; it'll shed some light on how much time we spend on bugs.

Dev: obviously programming hours, but also any other task that requires dev know-how, like answering questions on Slack, debating in the community forum, etc., as long as it isn't covered by one of the categories above. That leaves us with features and general tech discussions.

Testing: pretty clear I think. Any time we spend testing.

Release: I heard some people suggest this. It makes sense, since it is one of the areas we want to improve.

In any case, it's important that we always state the ID of the PR or issue. AFAIK most of us have been doing so already.

These are the tech ones (and we could start creating/cleaning them), but I'd like others to share their ideas on the ATAP side. I can think of funding, communication, and instance support, but I'm sure there are others.

What I don't see an easy solution for is tracking specific features like PI or subscriptions. Tags? Do we need that at all?

So I'd say we start with an agreed list for the tech stuff and iterate from there. Then I'll document it.


awesome, sounds good.
we will have a list of the epics, right? like PI, Subs v1, API, Mobile, etc

Some thoughts you may want to incorporate:
I’d keep “Code Review and Merge”.
I’d add “Tech Debt” as a separate entry.
I'd remove "Dev", or maybe call it "Dev - Other features", because all dev should fall under "Bugs", tech debt, or one of the tech epics, right?
And I'd divide Sysadmin into "Ops" and "DevOps": keeping the servers running and up to date (Ops) is very different from improving processes and tools like ofn-install (DevOps). Ops is something that could be managed by local instances; DevOps is not.

we will have a list of the epics, right? like PI, Subs v1, API, Mobile, etc

That's exactly the doubt I have. How do we keep track of this while still having the other axis: sysadmin, regular development, testing, etc.? The only way I know of is using tags. That way we can know how much time is spent on testing, for instance, across all epics.
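To make the two-axes idea concrete: with projects as one axis and epic tags as the other, a CSV export of time entries can be cross-tabulated into hours per project per epic. A minimal sketch, assuming the export has "Project", "Tags", and "Duration" (HH:MM:SS) columns as in Toggl's detailed report export; the real column names and formats may differ, so adjust accordingly:

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample data in the shape of a Toggl detailed CSV export.
# Column names here are assumptions, not a guaranteed Toggl format.
SAMPLE = """Project,Tags,Duration
Testing,PI,01:30:00
Testing,Subs v1,00:45:00
Dev,PI,02:00:00
"""

def parse_duration(hhmmss):
    """Convert an HH:MM:SS string to hours as a float."""
    h, m, s = (int(part) for part in hhmmss.split(":"))
    return h + m / 60 + s / 3600

def hours_by_project_and_tag(csv_text):
    """Cross-tabulate tracked hours: one axis is the project, the other the epic tag."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        # An entry may carry several comma-separated tags; count it under each.
        for tag in row["Tags"].split(","):
            totals[(row["Project"], tag.strip())] += parse_duration(row["Duration"])
    return dict(totals)

print(hours_by_project_and_tag(SAMPLE))
```

With this shape you can answer both questions from the same data: summing over tags for a fixed project gives "time spent on Testing overall", while summing over projects for a fixed tag gives "time spent on the PI epic overall".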

I agree with all the other suggestions (I totally forgot about code review and merge). I wasn't sure whether it was worth splitting Sysadmin in two, but it does make sense. I'd also like more visibility into the ops side.

Yeah, I'd keep Testing as a separate thing, and for development we can create tasks for every big epic; otherwise we use the default "Dev - Other features".

I'd like to hear others' opinions @Rachel, @danielle, @MyriamBoure, @Kirsten, @Jen? If we go with this, it has to make sense to all of us. We want meaningful and actionable data, not just something else to take care of.

Another question I have is: where should this be documented?

I think this is great, but if I remember correctly we are paying for Toggl per user, right?

I'm tracking my testing time in the French free Toggl account, so it is stored somewhere (the data exists).

But if I join the global account, the Toggl licence becomes more expensive. Would it be worth it, given that we don't pay testers?

I mean, I can also export my time entries to Excel if we need to know how much time we spend on testing.