Agile

Our Agile Process


February 14, 2019 by Fernando Lopez

At Ebury, all of our development teams use the Agile Scrum framework, so we thought it would be interesting to share a little bit about what goes on under the hood.

Scrum framework

Scrum is used by over 12 million people around the world for products big and small. It all starts with a Product Owner who represents customers and other stakeholders. The Product Owner drives the Product Backlog, a prioritized dynamic list of all the work that might be needed for the product.

Work is done by a self-organizing Development Team during the Sprint, a period of time between one and four weeks.

During Sprint Planning, based on the Sprint goal, the Sprint Backlog is populated. Once a day, the Development Team meets for 15 minutes for the Daily Scrum to inspect and adapt their progress toward the Sprint goal and to surface dependencies or impediments.

So, who makes sure that the Scrum Framework is understood and enacted? The Scrum Master. The Scrum Master is the servant leader of the Scrum Team and helps everyone understand Scrum theory, practices and rules. As the team works towards the Sprint goal, iterative delivery and feedback allow us to adapt our next steps. To improve transparency, the product increment can be released continuously during the Sprint.

At the end of the Sprint, the Scrum Team invites the stakeholders to the Sprint Review, where they collectively inspect the results. After the Sprint Review, the Scrum Team runs a Sprint Retrospective where they evaluate how they worked and build a plan for how to improve.

Scrum at a glance

Our Roles

People are important, but everyone has to play their role in order for the team to work fluently. The Development Team is responsible for delivering potentially shippable product increments (PSIs) at the end of each sprint, which at Ebury is every two weeks. A team is typically made up of 5-9 individuals, including the Product Owner and the Team Lead, with cross-functional skills (developers and QAs working hand in hand) who do the actual work: analyse, design, develop, test and so forth. The Development Team in Scrum is self-organizing, meaning that they decide how to implement the work committed to the Product Owner.

The Product Owner is in charge of organizing all the stakeholder requests so that they make sense in terms of agility and prioritizing them; she then takes them to the Development Team so that they can be implemented.

The Team Lead is part of the team and in direct contact with the Product Owner; they act as an umbrella for the team and remove impediments. The Team Lead is the go-to person for everything related to the sprint.

Most organizations adapt the agile framework to their needs; in our case, the role of the Scrum Master is shared between the Team Lead and the Agile Coaches.

The Scrum Master position at Ebury acts as an Agile Coach and is not tied to a single team. They work with several teams, facilitating meetings and ceremonies and helping everyone with anything related to agile practices, as well as anything else required to succeed in each sprint. At the end of the day, they are the facilitators of the agile process, ensuring that the Scrum Framework is used as intended. They work together with the Team Lead to remove impediments. They also facilitate key sessions, encouraging the team to improve, acting as change agents and mentors.

They also support the Tech Lead, who is responsible for designing the best technical solutions and for evolving the architecture to meet requirements. All devs rely on the Tech Lead for technical sparring.

Stakeholders have deep knowledge of our business and are accountable for accomplishing the company’s Objectives and Key Results (OKRs). Most of the communication between stakeholders and the dev team goes through the Product Owner, but we love to keep in touch with people working in the Operations, Sales or Risk departments, as we know that they appreciate our work.

Our Scrum Events

Planning

This is one of the most critical meetings for the Sprint goal. We usually spend between one and two hours reviewing the work from the Product Backlog that is most valuable to do next and moving it into the Sprint Backlog. To speed up this meeting, we hold smaller Refinement or ‘3 amigos’ meetings in advance when necessary.

Give me six hours to chop down a tree and I will spend the first four sharpening the axe.

– Abraham Lincoln

Daily meeting

We no longer use the term ‘Stand-up meeting’ since the teams are made up of people who work from different parts of the world.

A short organizational meeting is held each day, limited to 15 minutes. Each team member answers the following three questions:

  • What did you do yesterday?
  • What will you do today?
  • Are there any impediments in your way?

All members have to pay attention to what their teammates say. It is not a reporting meeting. Go! We are agile.

Scrum of Scrums

Given that we have many development teams working on functionalities that can interact with each other, we need a mechanism in which the teams update each other on what they are working on. In this way, we can identify potential problems early and stay coordinated.

The Scrum of Scrums is a brief weekly meeting attended by the Agile Coaches and a member of each team to discuss the tasks in progress, those that will be done during the current sprint and those that are coming soon.

Refinement meeting

To ensure that the stories coming in the next sprint are ready, we run an on-demand activity during the Sprint in which the Product Owner and the Development Team add granularity to the issues in the Product Backlog.

The scope and objective of each story are explained. The aim of the meeting is to create a future vision and detect dependencies, risks and impediments that need to be solved.

Sprint Review

When the Sprint is coming to an end, we review the work delivered and get feedback from the Product Owner. Stakeholder validation does not wait for this review: we use videos and User Acceptance Testing (UAT) so that stakeholders can approve the work asynchronously during the sprint. We try to identify the main problems and clarify the high-level plan for the next iteration. We also keep an eye on the team’s efficiency metrics.

Retrospective

In essence, it is the only meeting an agile team needs to improve the way they work. It embodies two of the key principles of Scrum: self-inspection and adaptability. In just over an hour, the team reviews the process of the last iteration(s), looking for improvement areas and the actions needed so that the next Sprint is always better than the previous one.

Sometimes we may think that the time spent reflecting on work is excessive, but it is the only way to continuously improve.

Scrum retrospectives

Want to join us?

If you like what you’ve read so far, we are always looking for passionate individuals to join our Scrum teams. Have a look at our Careers page and see you soon!


Data science, Design/UX

Visualizing User Experience Data with Google Data Studio (Part II)


January 21, 2019 by Carmel Hassan

In our previous article, ‘Visualizing User Experience Data’, we defined a framework to measure the User Experience of a product. In this article, we want to share how we can use Google Data Studio to visualise those metrics and facilitate decision-making during the design process.

Data Studio

Data Studio is a free tool offered by Google that allows you to create interactive dashboards and reports with data visualizations using multiple sources of data.

The Data Studio user interface is pretty intuitive; however, there are two key concepts you need to understand to get started:

  1. Data sources let you connect data sets. This is the first thing you have to do before adding charts to reports.
  2. Reports let you visualise data. You have to select one or more data sources to feed the data displays. Reports can be shared, interacted with and exported.

Starting a new report is as simple as clicking on a button. If you don’t want to start from scratch, you can follow their tutorial that explains how to work with reports step-by-step.

Data Studio user interface



Development

The Fin and the Tech


January 17, 2019 by Victor Tuson Palau

We often talk about Ebury being a disruptive Fintech company. Today I wanted to go into more detail about what makes Ebury a Financial and Technology company.

FINtech

Ebury’s core business is foreign currency exchange (Forex), and we focus on the enterprise segment of the market. To better illustrate what we do, let’s take as an example a made-up European toy company, which we will call TOYSA.


Design/UX

Visualizing User Experience Data (Part I)


January 14, 2019 by Carmel Hassan

Ebury Online is a platform that allows users to make international payments quickly, securely and efficiently. Using data analytics is an important part of Ebury’s design process.

We, as product designers, need to have a great understanding of data in order to make informed decisions that will impact both the business and the user experience (UX).

Services like Google Analytics or Hotjar facilitate the exploration and understanding of how websites are navigated. However, in 2017 and 2018, only 1 in 3 people in design-related roles used any experience-monitoring tool.

At Ebury, we think that data can certainly help us to find answers to questions like ‘who is our audience?’, ‘what do they do?’, ‘how do they perceive and experience the product?’ and ‘how good is that for the business?’.

As shown in the image below, data is collected directly from our Online platform using the huha.js library, sent through Segment and forwarded to different endpoints, which facilitates smart analytics.

Tool framework to collect UX metrics
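For illustration only, here is a minimal sketch of the kind of event that flows through Segment towards those endpoints. The real collection happens client-side via huha.js; the write key, event name and properties below are made up, and the snippet uses Segment’s analytics-python library rather than our actual setup.

import analytics  # Segment's analytics-python library

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder key

# Hypothetical UX event: a user finished a payment-creation task
analytics.track(
    user_id="user-123",
    event="Task Completed",
    properties={
        "task": "create_payment",
        "time_on_task_seconds": 42,
        "errors": 0,
        "success": True,
    },
)

analytics.flush()  # push any queued events before the process exits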

Among all the tools available on the market, we have selected a subset in which Google Data Studio plays an important role, helping us connect, visualise and share data insights coming from multiple sources, both to monitor and to proactively look for answers.

Defining a UX Metric Framework

Before jumping into the first dashboard with Data Studio, we need to understand what information will be represented as a User Experience Key Performance Indicator (KPI).

We define KPIs in terms of Goals, Signals and Metrics, as per the HEART framework, which is intended to provide guidance on how to measure the user experience at scale through automation.

Adoption and Retention

Adoption measures how many new users interact with your product. It seems fair to consider this metric fundamental, but getting new users is only as important as keeping them for ‘x’ amount of time, which is what Retention measures.

Together, Adoption and Retention represent how successfully your product attracts and retains users during a given timeframe.
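As a rough sketch of how these two numbers can be derived from raw activity data (the activity log and period boundaries below are invented for illustration, not our actual pipeline):

from datetime import date

# Invented activity log: user -> dates on which that user was active
activity = {
    "alice": {date(2019, 1, 3), date(2019, 2, 10)},
    "bob": {date(2019, 2, 5)},
    "carol": {date(2019, 1, 20)},
}

def active_between(start, end):
    """Users with at least one event inside [start, end]."""
    return {user for user, days in activity.items()
            if any(start <= d <= end for d in days)}

previous = active_between(date(2019, 1, 1), date(2019, 1, 31))
current = active_between(date(2019, 2, 1), date(2019, 2, 28))

adoption = len(current - previous)  # users seen this period but not the previous one
retention = len(current & previous) / len(previous) if previous else 0.0

print(f"Adoption: {adoption} new user(s), Retention: {retention:.0%}")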

Engagement

The engagement metric measures user interaction. A reasonable ratio will depend on the type of product: you won’t have the same rates of engagement with a social network app as with a billing platform.

Viewing Engagement metrics alongside Adoption and Retention metrics will allow us to compare the level of involvement of new and existing users.

Task Performance

The HEART framework defines a metric called Task Success, which we have renamed Task Performance. In addition to Results (the effectiveness of a task), we have extended it to include efficiency metrics such as Time on task, Effort and Errors.

We’ll design a dashboard to allow seamless analysis of task performance for different segments and cohorts of users. For example, we can filter down metrics to show task performance of ‘production users based in the UK’ as well as for ‘users who only do payment authorisations’.
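A minimal, hypothetical sketch of how Results, Time on task and Errors could be aggregated from task events (the field names below are illustrative, not the actual huha.js schema):

# Invented task events: one record per attempt at the 'create_payment' task
events = [
    {"completed": True,  "seconds": 35, "errors": 0},
    {"completed": False, "seconds": 80, "errors": 2},
    {"completed": True,  "seconds": 50, "errors": 1},
]

completed = [e for e in events if e["completed"]]
success_rate = len(completed) / len(events)  # Results (effectiveness)
avg_time_on_task = sum(e["seconds"] for e in completed) / len(completed)
avg_errors = sum(e["errors"] for e in events) / len(events)

print(f"Success rate: {success_rate:.0%}")
print(f"Avg. time on task (successful attempts): {avg_time_on_task:.1f}s")
print(f"Avg. errors per attempt: {avg_errors:.2f}")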

Happiness

Happiness is meant to measure user attitudes and perceived satisfaction. We’re measuring happiness based on the result of usability surveys. In the future, we expect to include data from inline feedback forms.

Summary

The HEART framework helps us easily define a set of metrics that will inform our design process. Although metrics about the audience are not part of the initial framework, getting to know how customers are distributed based on different traits such as language, location or activity time gives us additional context for our data.

Defining a relevant framework of user experience metrics is the first step before deciding how to collect and visualise them.

In the next post, I’ll share how to use Google Data Studio to create reports and facilitate the data analysis.


Design/UX, Events

The best of Generate Conference – 2018


November 12, 2018 by Carmel Hassan

Last month, the Ebury team attended Generate, a conference dedicated to designers who are looking to improve the user experience (UX) of their websites. We’ll be talking through what we learned and how we’ll be applying our new knowledge to our UX, but most importantly, what you can take away to apply yourself.

The opening talk was presented by Sarah Parmenter, who spoke about the importance of digital marketing strategies that can be easily applied by anyone. Parmenter shared key rules that can be applied today to help decide the best media tool to distribute your company’s message: first, think about your Product, then the client Experience, and then the Story. Only then can you choose the right media outlet.

Our tip for you: If you decide to do a video, make sure you include subtitles, as 85% of all videos are played without sound.

Do you struggle to get your videos noticed? Try Hashtagify to find the best hashtags to get your content noticed.

The closing talk was presented by Sara Soueidan, a front-end UI developer who talked through how cascading style sheets (CSS) and scalable vector graphics (SVG) can be used for better usability and accessibility. One takeaway we gathered from this was to make sure that you integrate these inclusive design practices as part of your natural design and development process.

While CodePen’s senior software engineer Cassidy Williams impressed attendees by live-coding an image chosen at random from Dribbble, designer and developer Ricardo Cabello demoed the Three.js library, showing how WebVR interfaces can be created with it.

UX consultant Trine Falbe talked through the importance of ethics when designing, highlighting how important it is to look after the data generated by users. In an age where data is the new oil, this is something for data-driven teams to consider.

Probably one of the most interesting talks of the day was presented by Andrew Godfrey, Senior Design Specialist at InVision, on Design System failures. Godfrey outlined some of the goals of a successful design system, such as:

  • Improved consistency
  • Efficient time on task
  • Efficient reuse
  • Inclusive design (accessibility)
  • Reduction of defects
  • Improved UX
  • Strong design community

Godfrey also highlighted common failures that need to — and can easily — be avoided, such as:

  • Low adoption by internal staff
  • Low understanding by internal staff
  • Mismanaged content
  • Difficulty scaling the system
  • Lack of support
  • Missing the bigger picture (not just in UI components)
    • Lack of style guides
    • Unclear visual components
    • Unclear standards
    • Accessibility
    • Animation
    • Information architecture

Godfrey advocates considering Design Systems as a core project inside the business, which means adopting processes such as:

  • A plan, strategy, and process
  • A roadmap and priorities
  • Scaling up when validating
  • Incorporating ways of measuring and sharing success
  • Creating prototypes that can be validated
  • Assigning a ‘person of expertise’ who knows the system well
  • Effectively calculating design debt

 

For us at Ebury, Design Systems are one of the key tools to create and maintain a good user experience across all of our services. This is the key principle behind Ebury Chameleon and the reason why we’ll continue investing and improving our processes to ensure high-quality products and services.


Events

Jenkins World: Fighting the Jenkinstein


November 9, 2018 by Luis Piedra Márquez

Jenkins World is a two-day conference held specifically for IT executives, DevOps practitioners, Jenkins users and partners.

This year, I attended the conference in Nice, France. Jenkins World has traditionally been held in San Francisco, but this year it expanded to Europe, with the additional conference admitting more than 800 attendees. Due to the success of the first European conference, another has been scheduled for next year, taking place in Lisbon.

The event was sponsored and driven by CloudBees, the company behind Jenkins (in fact, the creator of Jenkins, Kohsuke Kawaguchi, is CTO of CloudBees). A few big names attended, such as Amazon, Google, Microsoft, Docker and VMware. There was also a strong presence from the Jenkins OSS community.

While Jenkins has been around for more than ten years, its roots can be traced back almost 15 years, so it was created in a completely different landscape compared with today’s technology and industry practices; for example, there were neither cloud nor Agile methodologies back then. In 2014, the then-called workflow plugin (later renamed Pipeline) was introduced. It was a disruptive change, as there was no direct compatibility between old-style pipelines (which were, and still are, available) and the new ones. I remember being very critical at the time, since it was quite buggy and somewhat incomplete (even in 2016, when Jenkins 2.0 was announced), so I wouldn’t have recommended migrating to Jenkins pipelines at that point. However, I was proven wrong and it evolved well. By mid-2017 it was production-ready, so we started a successful migration at Ebury later that year.

Note that we are talking about three years of evolution to get a usable pipeline, which is a considerable amount of time. In fact, during this time other pipeline solutions, like CircleCI and Bitbucket Pipelines, managed to be built from scratch and launched. However, this last year has been a year of wonder for Jenkins, with lots of amazing initiatives crystallizing for Pipeline and beyond. All of this was presented at the Jenkins World conference in a quite well-structured way:

Pipeline

With Declarative Pipeline syntax and multibranch and organization jobs, Jenkins has taken a step forward in the last couple of years. Pipeline as Code continues to be the backbone of the project, and that’s good news.

Configuration as Code

Although Pipeline as Code helped us a lot with job configuration, and the different cloud plugins also help with configuring slaves/nodes, configuring the Jenkins instances that orchestrate all of that is still really painful, and this tool fills that gap.

We’ve been waiting for something like this for a long time, and thanks to Praqma and Ewelina Wilkosz, it’s finally here. It’s important to note that, while this did not originally come from CloudBees or the Jenkins core team, it has now been embraced as part of Jenkins core. It just looks awesome. We’re really excited to start configuring our instances this way.

Jenkins Evergreen

This project is about the Jenkins out-of-the-box experience. Think of it as a Linux distribution: a complete set of plugins thoroughly tested together for functionality and security, bundled with a Jenkins LTS version. Of course, it will reduce flexibility in some scenarios but, quoting Michaël Pailloncy, we used to install the latest Jenkins versions instead of LTS when we were young… but we don’t do it anymore, and the same may apply to the huge collection of plugins we use day to day.

Cloud Native Jenkins

Along with configuration, another long-lasting pain in Jenkins administration has always been the infrastructure for the master instance, particularly when it comes to storage. It has also weighed on the possibility of true high availability (HA) for the Jenkins master. This is by far the least polished of the superpowers, exemplified by the fact that it is a Special Interest Group and not a Project.

Jenkins X

Continuous integration and deployment for Kubernetes in an “opinionated” way. This means it is only for Kubernetes and it enforces a certain way of working, quite far from the extraordinary flexibility that has characterised Jenkins. As Kohsuke explained, it acts like a train with a fixed destination (i.e. Kubernetes) and fixed rails… but it’s a really comfortable and fast train.

Kubernetes has been a topic of discussion in almost every talk, as it’s now seen almost as a standard for microservices architectures.

Image: https://jenkins.io/

Another interesting topic at the conference was the “Jenkinstein”. Over time, as Jenkins instances grow in usage within organisations, they sometimes tend to grow in uncontrollable ways, essentially becoming monsters that need more and more dedicated maintenance work.

These ‘superpowers’ will help developers and Jenkins administrators fight these “Jenkinstein” monsters. At Ebury, we’ve taken advantage of everything included in Jenkins 2.0: over the last year we’ve managed to remove all the mess in job creation, linking and configuration. However, we still need some puppeting on the operations side, and we will for sure also take advantage of the Configuration as Code and Jenkins Evergreen projects.

Enjoy Jenkins and automation, and remember two quotes I have learnt over the last two days:

  • If you automate a mess, you get an automated mess
  • “Never send a human to do a machine’s job”

Events, LABS

EUROPYTHON 2018 Asyncio learnings


October 23, 2018 by hectoralvarez

Thanks to Ebury’s learning program, Héctor Álvarez and Jesús Gutiérrez were selected to attend EuroPython in Edinburgh.

EuroPython is a yearly conference in Europe that focuses on the Python programming language and its ecosystem. This year’s sessions were held at the Edinburgh International Conference Centre, an amazing building in the heart of the city, just a stone’s throw away from the historic centre of Edinburgh.

More than 1200 programmers and Python lovers from 51 countries attended the event. With over 150 sessions across 7 tracks, we were prepared to take away as much new information as possible.

From the outset, it was clear that the hottest topic at EuroPython was asynchronous programming. First integrated into Python 3.6, and improved in various iterations since, it is still entirely possible to use Python without needing or even knowing about the asynchronous paradigm. However, if you are interested in the nuts and bolts of the tech involved, read on.

For the beginners out there: your central processing unit (CPU) follows a synchronous programming model, which means that things happen one by one. For example, when you call a function that performs a long-running action, it returns only when the action has finished and the result is available. Even though each of the programs run by your Operating System (OS) is written synchronously, the OS manages them asynchronously. This is why multitasking Operating Systems have existed for a long time.

The pitfall of asynchronous programming is that it’s difficult to know which coroutine currently has the execution time, and which coroutine spawned which; the event loop obviously knows, but the programmer doesn’t.

The recently launched Python 3.7 tries to solve that problem with inheritance of tags.
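This most likely refers to context variables (PEP 567), new in Python 3.7: every asyncio task runs in a copy of the context it was created in, so a tag set by a parent coroutine is visible in the coroutines it spawns. A minimal sketch, under that assumption:

import asyncio
import contextvars

# A 'tag' set by the parent and inherited by the tasks it spawns
request_id = contextvars.ContextVar("request_id", default="unset")

async def child(name):
    # Each task runs in a copy of the context it was created in,
    # so it sees the value its parent set before spawning it.
    print(f"{name} sees request_id={request_id.get()}")

async def parent():
    request_id.set("req-42")
    await asyncio.gather(child("task-1"), child("task-2"))

asyncio.run(parent())  # asyncio.run() itself is also new in Python 3.7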

Under the asynchronous topic:

  • Asyncio in Python 3.7 and 3.8 (Yury Selivanov)
  • Asyncio in production (Hrafn Eiriksson)
  • Asyncio in practice: We did it wrong (Lynn Root)

Here, I’ll briefly explain how asynchronous programming has worked since Python 3.5.

As mentioned before, classic program code is executed in a single line of execution. In asynchronous programming, however, code is executed inside a single loop. That loop is the part of the code that orchestrates what is executed and when; inside the loop there is a group of tasks called coroutines. Coroutines are defined with the reserved word async.

While a coroutine is being executed, it reports to the loop that it is waiting for an external resource using the reserved word await.

When the loop detects that a coroutine is awaiting, it gives the execution time to the next coroutine; the loop then stores the memory state of the first coroutine and where it is waiting. When the external resource finally returns a response, it fires a callback so the loop knows that the coroutine is ready to keep working.

That’s the theory, but now it’s time to look at the code:

 

import asyncio
import logging


logging.basicConfig(format='%(asctime)s %(message)s', datefmt='[%H:%M:%S]')
log = logging.getLogger()
log.setLevel(logging.INFO)


# define a coroutine
async def sleeper(name, delay):
    """This coroutine will wait for 2 seconds and then keep working."""
    log.info(f"{name}: START (wait for {delay}s)")
    await asyncio.sleep(delay)
    log.info(f"{name}: END (wait for {delay}s)")
    return name

if __name__ == '__main__':
    # create the loop
    loop = asyncio.get_event_loop()

    coroutine1 = sleeper('first coroutine', 2)
    coroutine2 = sleeper('second coroutine', 5)
    task1 = loop.create_task(coroutine1)
    task2 = loop.create_task(coroutine2)

    log.info("main: START run_until_complete")
    loop.run_until_complete(asyncio.wait([task1, task2]))
    log.info("main: END   run_until_complete")


Here at Ebury, we don’t currently use asynchronous programming because Django (our framework) is not asynchronous. However, there are parts of the code that are slow (like sending an email, which can take a second or so), and in those cases we use a workaround: a task executor named Celery. If you want to know more about Celery, follow this link: https://labs.ebury.rocks/?s=celery
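As a rough illustration of that workaround, here is a minimal Celery sketch; the broker URL and the task itself are placeholders, not our actual configuration:

from celery import Celery

# Placeholder broker URL; in practice this points at the real message broker
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def send_welcome_email(address):
    """Runs in a Celery worker process, so the web request doesn't wait for it."""
    print(f"Sending welcome email to {address}")

# From the Django view we just enqueue the task and return immediately:
# send_welcome_email.delay("customer@example.com")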

Miscellaneous

Domain Driven Design (Robert Smallshire)

Domain Driven Design is an approach to software development that emphasises high-fidelity modelling of the problem domain and uses a software implementation of the domain model as a foundation for system design.

PEP 557 (data classes) versus the world (Guillaume Gelin)

Data classes are a very controversial feature, yet this talk explained why they are useful for us.
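For readers who haven’t tried them yet, a minimal example of what PEP 557 gives you (the Payment class is just an illustration, not code from the talk):

from dataclasses import dataclass, field

@dataclass
class Payment:
    """__init__, __repr__ and __eq__ are generated for us."""
    beneficiary: str
    amount: float
    currency: str = "EUR"
    tags: list = field(default_factory=list)

p = Payment("ACME Ltd", 1250.0)
print(p)                                 # readable repr for free
print(p == Payment("ACME Ltd", 1250.0))  # True: equality compares field values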

Getting started with mypy and type checking (Jukka Lehtosalo)

Mypy is defined as a static type checker for Python that aims to combine the benefits of dynamic (or “duck”) typing and static typing.

Static typing can help you find bugs faster with less testing and debugging. In large and complex projects, this can be a major time-saver.
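A small, made-up example of the kind of bug mypy catches before the code ever runs:

from typing import Optional

def find_discount(code: str) -> Optional[float]:
    """Return the discount for a known code, or None otherwise."""
    discounts = {"WELCOME": 0.10, "LOYAL": 0.15}
    return discounts.get(code)

def apply_discount(price: float, code: str) -> float:
    # mypy flags the next line: find_discount() may return None,
    # and subtracting None from 1 is not a valid operation.
    return price * (1 - find_discount(code))

print(apply_discount(100.0, "WELCOME"))  # 90.0 at runtime, but unsafe for unknown codes

Running mypy over this file reports the possible None without executing anything.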

Python decorators: Gift or poison? (Anastasiia Tymoshchuk)

A taxonomy of decorators: A-E (Andy Fundinger)

Python 2 is dead! Drag your old code into the modern age (Becky Smit)

What’s new in Python 3.7 (Stephane Wirtel)

Pythonic code vs. Performance (Łukasz Kąkol)

 



Events

Ebury Salesforce at the DreamOlé 2018 Event


August 30, 2018 by Javier Vázquez

On the 27th of April this year, the Ebury Salesforce team attended DreamOlé, the biggest Salesforce event in Spain, with collaborators and speakers from around the world gathering to share their knowledge, skills and experience with the audience.

The important presentations from our perspective were…



Development, Events

Takeaways from the 2018 ExpoQA in Madrid


July 23, 2018 by Daniel Gordillo

For the fourth consecutive year, Ebury attended the ExpoQA conference, held from 4-6 June in Madrid. Events such as these are paramount for staying up to date with the latest news in technology, tools, methodologies and all the nerdy stuff we love.

We would like to highlight the following presentations:

  • Focus on product quality instead of testing by Dana Aonofriesei. She offered a look into how we need to pay attention to quality through production monitoring. We loved her alert system, where alerts have the statuses “Pending”, “Researching” and “Solved” to help manage them and give better visibility. In addition, we really liked how her system automatically assigns bugs by “keywords”.
  • Yes, we can. Integrating test automation in a manual context by Andreas Faes. Based on his experience, he talked about implementing test automation processes in his company, to the point where developers use code created by QA (dev in test) to test their own code, similar to TDD but with tests driven by QA. This is something that we will be looking to apply in our own teams.

