WFH: make it work for you

Mark Robinson

Years ago, before WFH even became an acronym, I was employed by a small technology company where everyone worked from home, apart from when we had to make necessary business trips to see clients. So home-working due to the arrival of the Coronavirus and the need to help prevent its spread is nothing new to me. Of course, in those days the technology was not what it is now. Communication was generally by phone, so we all had unlimited call packages with our respective phone companies, and we had to remember to finish and redial if a call lasted longer than an hour in order to avoid additional charges. Skype was a revelation when it arrived. Document sharing meant emailing changes back and forth, and testing required the developer to email the code (zipped so Outlook wouldn’t quarantine it) to the tester…

My experience of working from home – or let’s call it WFH and bring me into the 2010s – was largely positive. I was pretty disciplined and didn’t let myself get distracted, so I was productive. At the time my children were small, so being able to be flexible about my working hours meant I was usually available for the school run, sports days and school fairs, and I was lucky to be able to enjoy those. Not something that’s an advantage at the moment with the school closures, of course…

My business trips were invariably long distance, either by train or plane, so we were able to get by with just one car. In theory this meant we could save some money, but in practice we just bought a more expensive car. The lack of a daily commute effectively provided an extra hour a day, which I spread evenly between extra work and play time.

I became very friendly with the postman and delivery drivers once they realised there was almost always someone in our house, which is why we get excellent service from them to this day – one in particular will still redeliver almost any time we like. Of course, at busy times like Christmas our porch was full of other people’s parcels, but that did mean we got to know our neighbours a lot better too. And learned that some of them thought it entirely reasonable that we store their package for a week or so.

I recall feeling somewhat isolated at times, and I was fortunate to have my family in the house with me when I needed some human contact – though admittedly it’s hard to bounce ideas on the best way to model the cells in a demo mobile network off a four-year-old who really wants you to build him a train track.

The lack of face-to-face contact was a struggle occasionally, though of course today’s video conferencing through apps such as Teams does mitigate that in the main. However, nothing replaces being able to park yourself by someone’s desk and demand their attention; when remote it’s too easy to ignore emails or chat messages! You also miss out on those office conversations you overhear and become part of that can spark a great idea or help solve the problem you or a colleague has been fretting over.

Discipline is very important too. By that I mean disciplining ourselves to stop work, rather than falling into the trap of checking emails, replying to chat messages and finishing tasks off that if we were working in an office we’d probably leave and come back to the next day. And even with my almost fifteen years prior experience, I still catch myself doing that this time around.

So try and make the most of these strange times we’re living through, use the available technology to stay in contact and don’t be afraid to admit it can be difficult sometimes. I look forward to a face-to-face chat in the pub one day…

Mastering the basics

Tam Mageean

Many adolescent Agile issues spring from exceptions that teams make to the values and principles in order to better accommodate conventional ways of working.

Common examples include:

  • Waiving the need for a dedicated Scrum Master/Product Owner in Scrum
  • Creep of exceptions to the definition of done
  • Removing structured points of collaboration (e.g. stand-ups, retrospectives)
  • Introducing dependencies external to the team

The reasons behind allowing these problem-causing exceptions are often “it is what it is” instances – “we had to do that because that’s just how our business works”, “we aren’t a typical Agile implementation” etc.

The belief that a team or business is unique and therefore unable to adhere to all of the Agile values and principles is more often the norm than the exception. There’s nothing wrong with wanting to tailor an implementation. However, the ways in which people modify it – and when they choose to do so – are the source of the issues.

Every business, team and service is a little bit different, so it’s not surprising working in an Agile way isn’t ever a one-size-fits-all solution. Furthermore, in this age of “scrumdamentalism” and other rigid “dark agile” methodologies, we’re seeing more and more reasons why treating the manifesto and the scrum guide as gospel could cause more harm than good.

Most experienced agilists will likely agree with your desire to customise. After all, it wouldn’t be “Agile” to have a framework with a fixed means of implementation. Allowances can be a sign of a mature team and can boost team performance if done correctly.

So, how do you make exceptions to your implementation without them becoming the cause of your problems? The best thing you can arm yourself with, in true Agile fashion, is an awareness of where you are and where you want to be.

When you take a look at skill acquisition paths like Shu-Ha-Ri and the Dreyfus model, they share similar steps.

  1. Start by adhering closely to the rules.
  2. Learn to adapt and apply situational awareness.
  3. Apply your own rules based on experience, learned behaviours and awareness.

These models reinforce that tailoring is necessary for progression. The difference, however, is that exceptions are applied as a result of proven experience, not on assumptions informed by years of working incorrectly. In Agile, this is where the importance of empiricism comes into play and where exception-makers stumble.

We learn our way forward in Agile. Making exceptions for fear of things not working, rather than with empirical evidence that they don’t work, is a regular tripping point. When this happens, not only does it show that the exceptions are fallible, it also shows there is still a need for a more thorough understanding of the Agile values overall. At this point it’s recommended to go back to the basics and see if you can apply your learnings differently.

Until the values and principles are truly internalised (such as knowing the need for empiricism and improvement through retrospection when making changes), there’s enough evidence to show the team is still not at a point where it’s ready to jump into the much tougher realms of exceptions and custom, post-agile approaches.

For example – until you understand why you estimate, there’s no need to experiment with the concept of “no estimates”. That lack of understanding will become both a pitfall and a roadblock in your recovery from an inevitable failure.

Start with the fundamentals and aim to master them. Only then have you created a safe environment to experiment and adapt with any substance. At the very least, you’ll always have those basics to fall back on.

How one scrum team reached and maintained the ideal burndown line

Robin Moore

There is a general question in Scrum circles about how closely it is reasonable to expect a scrum team to adhere to the ideal line on a burndown chart during a development sprint.

Here we take a look at what a specific, Opencast-led scrum team, in a specific set of circumstances, did to move from a volatile burndown chart to one that fairly closely and consistently tracked the ideal line in just a few sprints.

For clarity, in Scrum a burndown chart is a graph used to track the progress of a team of software developers during a development cycle or sprint. The vertical axis shows an estimate of the amount of work the team has committed to complete during that sprint, and the horizontal axis depicts the number of days available in that sprint.

The ideal line is a line starting from the top left of the graph and moving down towards the bottom right.  It plots work completed by the team in equal daily increments until all the work is completed, exactly at the end of the sprint.
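As a minimal sketch (assuming work is measured in story points – the unit and function name here are illustrative), the ideal line is just a linear interpolation from the sprint’s total commitment down to zero:

```python
def ideal_burndown(total_points, sprint_days):
    """Remaining work on the ideal line at the start of each day.

    Assumes work burns down in equal daily increments, reaching
    zero exactly at the end of the sprint.
    """
    daily_burn = total_points / sprint_days
    return [total_points - daily_burn * day for day in range(sprint_days + 1)]

# e.g. a 40-point commitment over a 10-day sprint
print(ideal_burndown(40, 10))
# → [40.0, 36.0, 32.0, 28.0, 24.0, 20.0, 16.0, 12.0, 8.0, 4.0, 0.0]
```

A team’s actual burndown can then be plotted against this series to see how far daily progress deviates from the even pace the ideal line assumes.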

How closely teams adhere to this line can be used to identify and manage project risk early, but can also be seen as a measure of how effective a scrum team is in terms of estimating the amount of work to be done and their capacity to deliver it, in uncertain and complex situations.

A key Scrum principle is that software development teams improve by inspecting what they are doing and adapting what they do, to improve their predictability and productivity, in terms of delivering valuable working software.  The main mechanisms they use to do this are the sprint retrospective, where they identify areas for improvement, and the actions they agree to implement in the next sprint, to capitalise on those opportunities.

What type of improvements were made?

To keep things simple, we looked at the number and types of improvement actions the team agreed to implement over a 10-sprint period. We then looked at the changes to the burndown chart and team velocity (the amount of work done in each sprint). The actions fell into four categories, with 77% being things that the team were in control of themselves – their ways of working.

[Chart: number of improvements]

Direction & stakeholder engagement improvements

Stakeholder direction was clear at a roadmap level, and we worked with stakeholders to agree the concept of a single front door to the scrum team. This was to avoid stakeholders going direct to developers to ask for specific work outside the roadmap, or to challenge team estimates. We also agreed the principle that the scrum team owns the sprint backlog and what they commit to in a sprint.

Office facility improvement

The office environment and facilities were challenging: the team needed basics like functioning Wi-Fi and a large-screen TV to allow mobbing, and we also provided a coffee machine.

Development tool improvements

Challenges around the build pipeline and a lack of access rights to different environments were identified as problems. Although some of these issues were addressed, many had to be lived with, and it was accepted these would be a drain on the team’s resources over time.

Ways of working improvements

The team initially used the mobbing technique to develop software. This helped them to learn the new technologies together, share their different skill sets and bond as a team. Over the course of 10 sprints the team made lots of small improvements in the way they worked, which ultimately had an impact on their productivity and predictability. The areas improved included:

  • Communicating concerns around the size of the team to ensure it was understood that bringing in more developers would likely slow the team down.
  • Morning stand-ups initially took too long to go around the whole team. Also, on Fridays the team was dispersed, meaning we had to do the stand-up over a voice-only Skype call. A couple of useful improvements followed: the daily time taken doing stand-up was cut in half by walking the scrum board rather than going around the individuals, and the Friday Skype calls were replaced by stand-ups on Slack, which were quicker, more easily understood and more enjoyable.
  • New definitions of ready and done were created by the new team.
  • The approach to story development and elaboration underwent five or six improvement iterations over the sprints. This helped improve engagement within the team. Engagement between developers and testers was further improved by changing seating arrangements and by ensuring they discussed bugs before raising them on Jira and the physical scrum board.
  • Noise levels were identified as an issue for developer concentration, affecting productivity, because the office was fairly busy with a large scrum team and stakeholders. A number of approaches were iterated to improve this over time.
  • Improved engagement between the two teams working on the same code base was achieved by attending each other’s daily stand ups, ensuring development was aligned.

When were improvements identified?

Initially a broad mix of improvement opportunities was identified, but once these were addressed the majority of improvements came from changes to the way the team worked and engaged.

What impact did the improvements have on productivity?

Velocity improved quickly once the environmental impediments were cleared. There was a peak around sprint six, where the team worked outside the normal timeboxed sprint to hit a hard deadline. It took a couple of sprints for the team to recover from this, as can be seen by a dip in productivity; however, the overall improvement trajectory continued.

What impact did the improvements have on predictability?

The first 3 sprints

The highest variety in terms of action type and volume of improvement opportunities was identified here. The burndown charts were fairly volatile, with a number of environmental factors slowing the team and blocking developed code from being deployed.

The next 4 sprints

A smaller number of actions focused on fewer types of improvement – ones that the team could control. It can be seen that the burndown charts were less volatile and moving closer to the ideal burndown line.

The final 3 sprints

There was only a slight decrease in the number of improvement actions, but the team tracked much more closely to the ideal line – less volatile and therefore much more predictable.

Conclusion

Clearly a development team needs decent relationships with their stakeholders, a functioning office and a reasonable set of development tools in order to deliver working software. But once these are achieved to a workable level, improvements in productivity and predictability appear to be in the team’s own gift, through identifying and making improvements to the way they engage with each other and their approach to development.

So what is a Technical Lead anyway?

What’s the difference between a Technical Lead and other developers? It’s not just about the years of experience. Some people will never learn the skills they need to become Leads, or are more interested in becoming technical specialists in one area. Some developers will show their aptitude from the start. So what makes someone a good Lead?

It’s not necessarily about technical knowledge

Is someone a Lead because they know more Java patterns than you? Because they know what a monad is? Not necessarily. The Technical Lead does not need to be the developer with the most experience in the technology stack you’re working with right now. A junior developer with a year’s experience may have spent that time immersed in a single JavaScript framework and have the best knowledge on the team of its peculiarities of syntax and usage.

But it can be about bringing their experience to bear…

A good Lead will be looking at things more broadly. Using the example of the JavaScript framework above, a Lead will be more concerned with the following:

  • How does this framework fit with the rest of the architecture?
  • How do we share that developer’s experience with the rest of the team so that we eliminate the single point of failure that exists due to that key individual?
  • Can we write good tests for this code?

A Lead Developer will have spent years working with different technologies and approaches. This experience can be the key to being able to know how to ask the right questions, and then to take the correct actions based on those questions.

…and applying best practice

A Technical Lead knows that they won’t be spending all of their time coding; in fact they’ll spend less time coding than any other developer in the team and, when they do, it will be as part of a pair or in a mob – it should be rare to see a Lead typing alone with headphones on. Instead they’ll be spending their time trying to bring best practice to the whole development team – TDD, pairing, code review, running automated tests, monitoring builds and deployments, and in general ensuring that if they were to leave tomorrow the team would be in a better position than before they joined. They’ll also be spending time making sure they know what best practice is – attending developer communities, having conversations about upcoming changes on Slack and doing research on advantageous new technology.

Think of the development team as an orchestra – the Tech Lead is not the solo lead violinist, they’re the conductor. They make sure the whole team works harmoniously to produce the final result.

Characteristics of good and bad Leads

In the course of my career I’ve met a wide variety of lead developers. While they have exhibited a diverse set of personalities, there are certain characteristics I have seen that can make teams successful and happy or, conversely, immensely frustrated and dysfunctional.

Bad Leads

  • Think they know it all
  • Don’t listen to more junior staff
  • Sit and code all day
  • Don’t make themselves available
  • Don’t know what’s going on in their team
  • Take on all the complicated tasks themselves

Good Leads

  • Spend a substantial part of their day away from coding
  • Pair with and mentor other developers
  • Keep learning and are prepared to take advice from anyone
  • Make themselves open and available
  • Know what they don’t know
  • Know when to delegate

Still want to be a Lead?

Not everyone wants to lead a team. Many developers become contractors because they are more interested in broadening their technical experience and want to spend as much time as possible solving technical problems. But being a Lead Developer can be immensely satisfying for those who want to have an influence on the software development process and who want to help other team members to improve their skills and experience.

The skills you learn as a Lead can help you progress in your career as well – learning the fundamentals of Software Architecture, understanding approaches to Testing, Continuous Delivery and Agile Development can lead to other roles within the industry. Make sure you stay open to new ideas, listen to your peers and remember, no matter how senior you are, you should always keep learning.

At Opencast Software we’re always looking for good Leads, alongside strong developers and automated testers. Why not get in touch via https://opencastsoftware.com/careers/ ?

Using ML to improve alerting systems

Alerting systems are an important tool for IT support teams. These teams are often responsible for maintaining and fixing services, so they need to be informed of any faults in a timely manner in order to keep services running at peak performance.

Traditionally, alerting systems consist of simple threshold-based anomaly detection processes, where the threshold value is manually set by the team. For example, the error count for each service is calculated over rolling time periods (say, five minutes) and if this count is greater than or equal to the threshold value then the team is alerted.

The main benefit of this approach is it enables the support team to have a degree of control over the alerting process (e.g. adjusting the thresholds according to planned activity). However, there are a couple of drawbacks:

  • The threshold is static – it is the same value at every time of day and on every day of the week, which may not be true of user activity;
  • Low thresholds – usually chosen for services with lower user activity, they tend to alert on noise, or when there is an atypical burst of activity, as opposed to symptoms of a faulty service.
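As a hedged sketch (the function name, timestamp representation and five-minute window are illustrative assumptions, not the team’s actual implementation), the traditional rolling-window check described above might look like this:

```python
import time

def should_alert(error_timestamps, threshold, window_seconds=300, now=None):
    """Classic static-threshold check: alert when the number of errors
    seen in the rolling window meets or exceeds the manually set threshold."""
    now = time.time() if now is None else now
    # Keep only errors that fall inside the rolling window
    recent = [t for t in error_timestamps if now - t <= window_seconds]
    return len(recent) >= threshold

# Three errors within the last five minutes, threshold of three -> alert
print(should_alert([700, 900, 950], threshold=3, now=1000))  # True
```

The drawbacks listed above are visible in the sketch: `threshold` is a fixed number regardless of time of day, and a brief burst of harmless noise can trip a low threshold just as easily as a genuine fault.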

We decided to test a theory that Machine Learning could circumvent these drawbacks. An innovation project set up by the team prototyped an alerting system that dynamically and automatically determines if there is a genuine fault within the services.

Fundamental to the prototype is a predictive analysis tool that, for any given rolling 15-minute period, determines if the number of errors is typical or atypical.

It achieves this using statistical formulae to benchmark the real time error count value against historic data for that same 15-minute time period. Then, if the number of errors is atypical, the prototype computes if this is or isn’t noise. It achieves this by comparing the ratio of healthy calls to error calls in this 15-minute period.

Thus the prototype only alerts on a service if both the number of errors is atypical for the time period, and the error count is not the consequence of noise. In this way, the alerting of a service is controlled by an automated, unsupervised machine learning algorithm.
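The article doesn’t give the exact statistical formulae, so the following is only a plausible sketch of the two-stage check: a z-score against historic counts for the same 15-minute window to judge atypicality, then a minimum error ratio to filter noise (both thresholds and the function name are assumptions):

```python
import statistics

def is_genuine_fault(error_count, healthy_count, historic_counts,
                     z_threshold=3.0, min_error_ratio=0.05):
    """Two-stage check loosely following the prototype described above.

    1. Atypicality: benchmark the live error count for this 15-minute
       window against historic counts for the same window via a z-score
       (the exact statistical formulae used are an assumption here).
    2. Noise filter: only alert if errors also make up a meaningful
       share of all calls in the window.
    """
    mean = statistics.mean(historic_counts)
    stdev = statistics.pstdev(historic_counts) or 1.0  # avoid divide-by-zero
    atypical = (error_count - mean) / stdev >= z_threshold

    total = error_count + healthy_count
    not_noise = total > 0 and error_count / total >= min_error_ratio

    return atypical and not_noise
```

For example, 50 errors in a window whose history averages around 3 would alert, while 3 errors amid thousands of healthy calls would not. Because the benchmark is recomputed per window from historic data, the effective threshold adapts to daily and weekly activity patterns without manual tuning.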

To determine if machine learning can improve the alerting process, we tested the effectiveness of this prototype by comparing it to a traditional alerting system over a period of 14 days, for a single service and a single type of error.

Fortunately, this was a period where there was a lot of alerting activity on this service. The traditional service alerted on 32 separate occasions. In contrast, the prototype alerted on only four occasions (these overlapped with four of the 32 occasions when the traditional system alerted).

Clearly, the prototype has the potential to reduce the number of alerts but is it missing any genuine system faults that the traditional system is alerting on? The team found the answer to this was ‘no.’ This answer was arrived at by comparing the ratio of healthy calls to error calls for each alerting period.

In the periods alerted on only by the traditional system, the percentage of error to healthy calls was consistently low – mostly under 0.5%, with a maximum of 2.93%. In contrast, in the periods alerted on by both systems, this percentage was 50.2% at its lowest.

In conclusion, the prototype empirically demonstrates there is the potential for machine learning to seriously improve traditional alerting systems. Although this is good news, there are many other issues to consider in order to improve the prototype further. These include collecting more suitable historical data and improving the predictive analysis model.

Let us know if you’ve had any interesting findings or how you’ve used ML to improve service performance.

Andy.

Scrutinising The Product Vision

Why fix something that’s not broken? Or so the old saying goes. When implementing change in business, there is often a wide range of reasons for choosing to do so. Often there is a need to react to an ever-growing market of competition and to keep up with the digitalisation of services. Sometimes something breaks and it’s decided it must be fixed. In some cases it’s someone’s job to come up with snazzy new ideas that cost the company millions of pounds, and any new idea gets sanctioned regardless of its business value.

What we call the “Product Vision” is whatever it is we are setting out to change, and attached to this should be a problem statement – i.e. what are we trying to solve? If I make a change in my personal life, I tend to weigh it up very carefully. Granted, if it’s a choice between a pizza or an Indian takeaway on a Friday evening, I’m more frivolous (I might even have both). However, if I’m changing broadband supplier, for example, I analyse the options very carefully indeed. How much is it going to cost? What is the impact of me switching over? What happens if I do nothing and stick with what I’ve got?

From working across a variety of businesses over the years, I’ve noticed how commonly these fundamental questions get overlooked. Do business case templates get populated and sanctioned? Sure. Do those business cases get revisited throughout the project lifecycle and adapted appropriately as things change? Sometimes. Do those business cases get revisited at the end of the project to see if we achieved what we set out to? Significantly less often.

As a Business Analyst, I find projects are much more successful when I’m given the opportunity to scrutinise the proposal as far as I possibly can, asking the all-important “why?” questions. Often this leads to the proposal becoming something completely different – the product that was first proposed is transformed. On some occasions it turns out there isn’t actually a problem to solve.

Change can be attractive for the statistics, to be seen to be “doing something” in an attempt to make a business better or more profitable. However, the time taken to scrutinise an idea at the beginning is one of the most valuable exercises any business can undertake. It’s always tempting to get the ball rolling as quickly as possible, but taking time to talk with the right people about the problem statement is not something to be overlooked.

Some of the best pieces of work I’ve been part of have concluded with the business collectively saying, “We actually don’t need to change this thing we thought we needed to. We’d be better spending our money on this other thing that’s cheaper and more effective than a multi-million pound project.”

If you set off in your car and you realise you’re driving in the wrong direction for your destination, do you change course? Even the most stubborn of us would admit we’d get back on track, so why do so many businesses decide not to stop, think about things and look for the right direction?

Similarly, it’s never too late to scrutinise the product vision. Granted, it’s much more lean and effective to do so at the start, but it’s certainly not too late once a project is in flight.

It can be scary to be the one voice saying that something isn’t right. It’s difficult, when you’re paid to improve business performance, to admit that we need to change how we do things, or that we got it wrong on this occasion. But think how much more can be achieved if we do use that voice.

Be honest, transparent and realistic, and dare to drive the car to the place you actually need to go to.

Let us know your thoughts.

Sheena.

Going digital for the benefit of the people?

So it was that time of year again. The time when you get a letter from HMRC and you can see it looks pretty formal inside the envelope. So I gingerly opened it up and to my delight it was a refund of overpaid taxes. I’m not sure how that’s happened and it’s not a big amount, but it’s in my favour and I’m going to claim it.

Now, it’s been quite some time since I dealt with HMRC. The last time I had something like this, it was for an underpayment and they just adjusted my tax code. No bother. But here I was given the choice – I can receive a cheque in the post in a few weeks’ time or I can use the new digital route and get it in three days. Not wanting to keep my beer money out of reach for too long, I went the digital route…

As I said, it’s been some time. So I’m faced with signing up to HMRC’s service – Government Gateway beckons. Happily, it’s been thoroughly renovated since those days of dial-up internet and the multi-coloured self-assessment pages. Off I went to get my passport, popped a few details in and bingo! A fully-fledged account was set up in minutes. And it knew why I was there. A couple of button presses later to enter some payment details and I was done. Easy.

This feeds into a conversation we had at work the other day. Why force people to use digital services that don’t work for them? As a current example, there’s talk about the NHS using AI instead of face-to-face GP appointments for routine screening checks. But this doesn’t work for certain areas of society. So, instead of forcing people, be inclusive and offer some choices like I had above. When the digital solution is better/faster/more convenient, then the public will follow. After all, who goes to the bank to stand in a queue to withdraw money at lunchtime these days?

Carl.