Ebury LABS – Ebury’s official technology blog

Ebury Salesforce at the DreamOlé 2018 Event

On the 27th of April this year, the Ebury Salesforce team attended DreamOlé, the biggest Salesforce event in Spain, where collaborators and speakers from around the world gathered to share their knowledge, skills and experience with the audience.

The important presentations from our perspective were…

From Zero to CI in 30 minutes by Christian Szandor Knapp

Christian talked about the benefits of Continuous Integration and how it can streamline our deployment flow and free up the team to work more on delivering functionality to the business. As everyone who works with Salesforce is aware, changesets are horrible and time-consuming to use, but here we are in 2018 and a majority of us are still using them.

Salesforce DX was the first major step from Salesforce to close the deployment gap, taking the source code and metadata outside of a Salesforce org. We also got (quick to spin up) scratch orgs, a command-line interface and Heroku Flow’s integration with GitHub plus the ability to plug into 3rd party build and test automation tools. It was a good start.

Christian focussed on CircleCI in conjunction with Salesforce DX. He talked through the ease of setup, the minimal variables needed and the power of parallelisation when deploying. This was then demonstrated in a live demo, and indeed the setup process does appear to be a strong point of the product. Here at Ebury we have spent some time looking at Copado, probably a more rounded solution at this time; we liked the product, but in the end we decided not to proceed. However, CircleCI is something we will watch to see how it evolves.
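
To give an idea of the shape of such a setup, here is a minimal sketch of a CircleCI 2.0 job driving Salesforce DX. It is illustrative only, not Christian’s exact configuration; the Docker image, environment variable names and scratch org alias are our own assumptions:

# .circleci/config.yml (hedged sketch)
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8  # any image with Node.js will do
    steps:
      - checkout
      - run: sudo npm install --global sfdx-cli
      # authenticate against the Dev Hub with a JWT key (variables set in the CircleCI UI)
      - run: sfdx force:auth:jwt:grant --clientid $CONSUMER_KEY --jwtkeyfile server.key --username $HUB_USERNAME --setdefaultdevhubusername
      # spin up a scratch org, push the source and run the Apex tests
      - run: sfdx force:org:create -s -f config/project-scratch-def.json -a ci-scratch
      - run: sfdx force:source:push -u ci-scratch
      - run: sfdx force:apex:test:run -u ci-scratch --resultformat human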

Read more about CircleCI 2.0 here and about Ebury’s experience on the Salesforce DX pilot here. Watch the DreamOlé session here and follow Christian Szandor Knapp (ch_sz_knapp) on Twitter here.

A quest to stop Salesforce mutants, a testing tale! by Sara Sali

The goal of mutation testing is the measurement and improvement of test quality. How? By making small changes to our code; each changed copy is a “mutant” with undesired behaviour. If our tests fail against those mutants, the test suite is doing a good job. But if our tests do not detect the added “bugs”, then adjustments are required. It could be that the mutated code is never actually executed (dead code) or that your test coverage is incomplete.
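
As a hedged illustration (the names balance, limit and rejectTrade are hypothetical, not from Sara’s app), a typical mutant just flips an operator or a boundary condition:

// Original code under test
if (balance > limit) { rejectTrade(); }

// Mutant: a good test suite should fail against this change, "killing" the mutant
if (balance >= limit) { rejectTrade(); }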

Sara has built a beta app that enables mutation testing in Salesforce; this is a first for the platform. The scores that Sara’s app produces once the tests are run enable you to focus in on the areas that need attention; plus, if you are handing over to another developer or team, they can use these scores to give them confidence in the testing.

mutation score = killed mutants / all mutants

There can be issues with Apex tests, in that less conscientious developers can be focussed purely on playing the game rather than aiming for quality, and Sara has addressed this in her design. I think we would like to use something like this at Ebury, so hopefully Sara and her company will decide to go the open source route. Either way, this was a great session from someone who understands the subject very deeply.

You can watch Sara’s presentation here.

Platform Events and the Spanish Omelette by Jero Guerrero and Pedro M. Molina

A talk on Salesforce Platform Events, the enterprise messaging service that has now replaced the (push model) Streaming API. These Platform Events are mostly used to connect Salesforce with external systems through a pub/sub model and enable a highly distributed set of business applications to interact based on changes in the state of their customers, products, or anything at all that is meaningful to their organisation.

An event-driven approach is not new, but the challenge for Salesforce was to implement it in their metadata-driven, multi-tenant model. The usual approach would be to have a persistent queue for each subscriber, but with a huge number of tenants, each potentially having multiple subscribers, this could be a stateful nightmare for Salesforce to handle.

What could we do with it? Well, imagine we were listening to our online platform and we have a customer who is logging in and getting quotes but not trading. When we hit a defined threshold of quotes without trades, we could throw an event and update their status to something like “churn risk” in Salesforce and create a callback for their account manager to follow up with them.

And the Spanish Omelette reference? Well, that referred to a demo of an integration between Twitter and Salesforce, showing how a poll conducted with hashtags could be pulled into Salesforce and used there. Watch the Platform Events session here.

We at Ebury would like to thank all the presenters and organisers for making this event so educational and we are all looking forward to the next DreamOlé event. You can find all the sessions, not just the ones I have mentioned, on the DreamOlé site here.

Takeaways from the 2018 ExpoQA in Madrid

For the fourth consecutive year, Ebury attended the ExpoQA conference, held 4-6 June in Madrid. Events such as these are paramount for staying up to date with the latest news in technology, tools, methodologies and all the nerdy stuff we love.

We would like to highlight the following presentations:

  • Focus on product quality instead of testing by Dana Aonofriesei. She offered a look into how we need to pay attention to quality in production monitoring. We loved her alert system, where alerts have the statuses “Pending”, “Researching” and “Solved” to help manage them and give better visibility. In addition, we really liked how her system automatically assigns bugs by “keywords”.
  • Yes, we can. Integrating test automation in a manual context by Andreas Faes. Based on his experience, he talked about implementing test automation processes in his company, up to the point where developers use code created by QA (dev in test) to test their own code, similar to TDD but with tests driven by QA. This is something that we will be looking to apply in our own teams.

  • Why has software security gotten worse? And what can we do about it? by Santhosh Tuppad. A presentation on how security testing has become more important now that ever more devices and applications used in everyday life are connected to the internet. We learnt that you don’t have to be an expert to discover security issues in your systems, but if we want specific security tests we should hire a hacker.
  • The final frontier? Testing in production by Marcel Gehlen and Benjamin Hofmann. They gave us an interesting view on testing in different environments. We were interested in their solution for testing in a “production” environment by managing the architecture, e.g. redirecting the production environment to obfuscated data stores or using an A/B testing technique.


We would like to thank Ard Kramer, Miriam Miranda, Graham Moran and the organisers of the ExpoQA for giving all of us the opportunity to share ideas and learn from other developers.

DjangoCon Europe 2018

DjangoCon Europe 2018, the European conference for the Django framework, was held this year in Heidelberg, Germany, from May 23rd to 27th. Professionals from around the world gathered to enjoy a collaborative environment, with talks given on a variety of topics, from philosophical issues to technical details.

Ebury adopted Django a few years ago as part of its core technology stack, and this conference is always a great place to look at future opportunities.

We would particularly like to highlight the talks on Django Channels 2, a project that takes Django and extends its abilities beyond HTTP using protocols like WebSockets, allowing developers to build applications with asynchronous communication and to make synchronous and asynchronous Django code work together. Andrew Godwin, the main developer of Django Channels and South, explained the Channels architecture and recommended best use cases for synchronous and asynchronous code.
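
To make this concrete, here is a minimal sketch of a Channels 2 WebSocket consumer; it simply echoes messages back, and the EchoConsumer name is our own illustration rather than anything shown at the conference:

# A minimal, hedged sketch of a synchronous Channels 2 consumer
from channels.generic.websocket import WebsocketConsumer

class EchoConsumer(WebsocketConsumer):
    def connect(self):
        # accept the WebSocket handshake
        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        # echo back whatever the client sent
        self.send(text_data=text_data)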

Additionally, the conference covered technical topics such as: representing hierarchies in relational databases, improving Django deployments with packaging, using Docker, solid API creation and GraphQL usage, accessibility, advanced search, data editing in production environments, protecting personal data, authentication and authorisation using third parties, and more.

You can find all the talks presented at the event uploaded here. Thanks to all those involved for their dedication and effort in such a great event. See you next year!

Introducing huha.js: Analysing User Experience with Javascript

We love building great products, but a product is completely useless if it is not properly designed for the people who are meant to use it, and this lack of usability impacts the user experience (UX) of the solution. But how can we achieve a good UX when developing a product? Is there a way that we can measure user performance objectively?

While trying to answer these questions, we realised there aren’t any cheap and easy-to-use tools ready for any member of our team. So, since we have some experience in things like building software, we decided to develop our own tool.

We are glad to introduce huha.js, a Javascript framework that is intended to measure the usability and user experience in an automated way, considering the limitations of the model and best practices.

In this post, we would like to share how it was built, in order to get fast and detailed feedback on user experience and be able to provide support to a highly iterative agile development practice.


Measuring user performance

We needed a way to be ahead of customer feedback and understand how the actual user behaviour is impacted after the solution is applied. We asked ourselves “how can we collect information about what our customers are doing when they try to achieve a goal?”

We started simple, defining a model that represents how users interact with products, based on Tasks. A Task is a minimal activity that a user performs when using an application, like logging in, creating an item, searching or filtering data.

The model will be implemented with Javascript, since it will allow us to integrate it easily with any of our projects built with web technologies. So far, we just need a class with a name that will represent our tasks.

class Task {
  constructor(name) {
    this.name = name;
  }
}

Easy peasy, right? Let’s make this task more useful. Our goal is to include more metrics that will help us to understand the user performance.

Result

Our first metric is going to be the result of the task. A task can have two different results: completed or abandoned. If the user completes the task with success, it’s labeled as “completed”. If the user doesn’t go through the task completely, then it’s marked as “abandoned”.

In the code, we have a third label, used when the task is in progress, so the result is currently unknown.

const IN_PROGRESS = 'In progress'; 
const COMPLETED = 'Completed'; 
const ABANDONED = 'Abandoned';

class Task { 
  constructor(name) {
    // ...
    this.result = IN_PROGRESS; 
  }

  complete() {
    this.result = COMPLETED;
  }

  abandon() {
    this.result = ABANDONED;
  }
}

We will execute the “complete” and the “abandon” methods whenever we consider the task is finished. The applications integrated with the tool are responsible for changing the result of the task. This offers great flexibility while keeping things simple (we love the KISS principle!).

Interactions

Usually, the most efficient UI is the one that doesn’t require many interactions from the user. Therefore, we’ll try to keep the number of interactions as low as possible without affecting the other metrics.

This effort is quantified as a number in the task, which is initialised to zero. That value is increased whenever we consider there is an interaction.

class Task {
  constructor(name) {
    // ...
    this.effort = 0;
  }

  addInteraction() {
    this.effort += 1;
  }
}

Again, as we wanted a flexible tool, we offer a method for adding an interaction that needs to be executed by the application that includes this tool. It is normally triggered every time the user performs a click or a keystroke, or an input gets focus.

Errors

Another metric we wanted to measure and, keep as low as possible, is the number of errors that users make during the execution of a task. This is because a user who encounters many errors trying to achieve something is likely to become frustrated, increasing the chances of them abandoning the task altogether.

Like the effort, errors are modelled as a number.

class Task {
  constructor(name) {
    // ...
    this.errors = 0;
  }

  addError() {
    this.errors += 1;
  }
}

Time

Finally, the last metric we are going to collect is the time that the user needs to finish a task. We don’t want our users spending too much time on tasks if that is the result of more effort or errors.  

In order to record that time, we are going to store when the task started and when it finished. So the time spent on a task will be the difference between these two date times.

class Task {
  constructor(name) {
    // ...
    this.start = new Date();
    this.end = null;
  }

  get time() {
    return this.end - this.start;
  }

  finish(result) {
    this.result = result;
    this.end = new Date();
  }

  complete() {
    this.finish(COMPLETED);
  }

  abandon() {
    this.finish(ABANDONED);
  }
}

A real example

Now that we have everything we need to start measuring how the users use the applications, let’s apply it to a real example: a login page.

Our definition of the task will be the following:

  • Result: Completed when clicking on the “Login” button after entering a username and password. Abandoned when clicking on the “Lost your password?” link.
  • Interactions: Incremented every time the user focuses on one of the inputs or clicks on either the “Login” button or the “Lost your password?” link.
  • Errors: Increased whenever the user tries to log in without entering a username or password.

Keep in mind that this doesn’t need to be the only way to be authenticated in an application, so we could potentially define different “login” tasks. Besides, a “successful” login happens not when the user clicks on the login button but when the server authenticates the user; however, to keep the example simple, we are assuming that just clicking on the “Login” button is enough.

HTML

<form>
  <input type="text" id="user">
  <input type="password" id="pass">
  <button type="button" id="login">Login</button>
  <a href="#" id="forgot">Lost your password?</a>
</form>

Javascript

const task = new Task('Login');
console.log(task.name); // Login
console.log(task.result); // In progress

const user = document.querySelector('#user');
const pass = document.querySelector('#pass');
const login = document.querySelector('#login');
const forgot = document.querySelector('#forgot');

login.addEventListener('click', () => {
  task.addInteraction();
  if (user.value && pass.value) {
    task.complete();
    console.log(task.result); // Completed
  } else {
    task.addError();
  }
});

forgot.addEventListener('click', () => {
  task.addInteraction();
  task.abandon();
  console.log(task.result); // Abandoned
});

user.addEventListener('focus', () => {
  task.addInteraction();
});

pass.addEventListener('focus', () => {
  task.addInteraction();
});

Tracking the metrics

So, we already know how to get all the different metrics, but there is one important thing missing: how can we analyse them in order to make changes?

There are different approaches we can follow: 1) store all the data directly in a database (our own database or a cloud one like Firebase) and perform queries against it, or 2) use a third party tool that already provides the analysis part (such as Google Analytics, Intercom or Segment).

Due to its ease of use, and because we were already using it in our projects, the first tracker we’ve added to our library is Google Analytics. We send three different events for storing the time on task, the effort and the errors. The result is indicated in all of them so we can then compare the different results.

class Task {
  // ...
  finish(result) {
    // ...
    this.track();
  }

  track() {
    gtag('event', 'timing_complete', {
      event_category: this.name,
      event_label: 'Time on task',
      value: this.time,
      name: this.result,
    });
    gtag('event', this.result, {
      event_category: this.name,
      event_label: 'Error',
      value: this.errors,
    });
    gtag('event', this.result, {
      event_category: this.name,
      event_label: 'Effort',
      value: this.effort,
    });
  }
}

Open source library: huha.js

As mentioned previously, we implemented and released huha.js, a Javascript library that is intended to measure user experience automatically, based on the concepts explained in this post.

If you want to have a look at both the code and the documentation, you can check out the repository on GitHub. As an open source project, we are happy to receive contributions from the community!

Our context: we are agile

Before wrapping up, it is important to understand the context of the problem, how it originated and why usability is so relevant for our products.

We are applying an agile methodology. This means that we want to validate every new feature we launch as soon as possible. We release these features in small iterations, then we gather feedback from our customers to validate their usability and define new improvements according to the feedback.

One of the aspects that is going to affect those features most, and our customers’ feedback, is their experience with the solution. If clients don’t know how to use our products, or if they spend a long time trying to achieve a goal, their feedback will probably be negative and we would need to spend more time developing and improving the usability.

Queue tasks in Celery after database commit – Introducing django-transaction-hooks

At Ebury, we use Django and have followed an ongoing upgrade path from 1.3 to 1.5 to 1.7. During that time we have had an issue that kept messing with us. You might be familiar with it.

We use Celery for executing asynchronous tasks, and Django is our framework, with a PostgreSQL database.

The issue occurs when an asynchronous task makes use of an object that has just been updated or created. There is a dependency on the database: the object might not have the updated status when the asynchronous task starts, or might not even exist yet.

We are now able to utilise the library django-transaction-hooks, which works with Django 1.6 through 1.8, and has been merged into Django 1.9+.

What is important with this library is that it adds the “on_commit” event to manage timing with database transactions. So, we can use this for scheduling when to queue tasks for the Celery workers. The main advantage comes when we want to queue using an object created inside an atomic transaction. Consider the following example:

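A minimal sketch of the pattern (the Payment model and the process_payment task are hypothetical names):

from django.db import transaction

from myapp.models import Payment          # hypothetical model
from myapp.tasks import process_payment   # hypothetical Celery task

with transaction.atomic():
    payment = Payment.objects.create(amount=100)
    # queued immediately, before the transaction is committed
    process_payment.delay(payment.id)
    # <other actions>
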
When a task is queued inside the atomic block, the instance is not yet committed to the database, and the odds of a worker starting the task and hitting an “ObjectDoesNotExist” error increase with the number of instructions in <other actions>.

With django-transaction-hooks, the task is not queued until the atomic block is committed.
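
Under the same assumptions as the sketch above, the fix looks roughly like this:

from django.db import connection, transaction

with transaction.atomic():
    payment = Payment.objects.create(amount=100)
    # queued only once the surrounding transaction commits
    connection.on_commit(lambda: process_payment.delay(payment.id))
    # <other actions>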

Essentially, django-transaction-hooks just extends the database connection back-end, keeping in memory the callables added with the “on_commit” method inside each block, and popping the list once the transaction ends.

All perfect so far; this suits what we want. However, there are two things that still need addressing: compatibility with the standard database back-end, and an ugly syntax.

As reflected in the library’s documentation, to use it we just need to change the database engine in our settings.

DATABASES = {
    'default': {
        'ENGINE': 'transaction_hooks.backends.postgresql_psycopg2',
        'NAME': 'foo',
    },
}

However, people across our teams run their environments with different settings files, depending on their needs, where they could be using a different back-end. Calling “connection.on_commit” with the standard Django back-end will throw an “AttributeError”, so people would be forced to update their database back-end.

This brings us to the second point: we don’t like that syntax. I personally hate the lambda syntax, so I always try to avoid it.

At the moment we are only using “on_commit” events for queuing to Celery, and we have developed our tasks based on Task classes. So, this is the solution we have come up with: a new method that looks like native Celery and wraps compatibility between both engines.

from celery import Task
from django.conf import settings
from django.db import connection


class BaseTask(Task):
    """
    Base celery task for trades app
    """
    abstract = True

    def apply_on_commit(self, args=None, kwargs=None, task_id=None, producer=None,
                        link=None, link_error=None, **options):

        if settings.TRANSACTION_HOOKS_POSTGRE_BACKEND == settings.DATABASES['default']['ENGINE']:
            connection.on_commit(lambda: self.apply_async(args, kwargs, task_id, producer,
                                                          link, link_error, **options))
        else:
            self.apply_async(args, kwargs, task_id, producer, link, link_error, **options)

We check the engine value to either call the “apply_async” method directly or wrap it with the connection’s “on_commit”. Of course, this would need to be reviewed if we used more than one database. But it fits really cleanly in the code.
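
Usage then looks like regular Celery, sketched here with a hypothetical SendConfirmation task and the Payment model from the earlier example:

class SendConfirmation(BaseTask):

    def run(self, payment_id):
        pass  # look up the payment, send the confirmation, etc.

with transaction.atomic():
    payment = Payment.objects.create(amount=100)
    # queued on commit with the hooks back-end, immediately otherwise
    SendConfirmation().apply_on_commit(args=(payment.id,))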

This means that as the teams move to utilising this new approach we can maintain compatibility with legacy methods too for a nice controlled adoption.

Ebury Chameleon as an example of a Design System

How to build a design language that works across teams and platforms

Invision acquired Brand.ai, UXPin released Systems, and Uber, IBM and Salesforce are examples of companies who have decided to change the way they design digital products.

They all have one thing in common: using Design Systems as a way of creating outstanding user experiences.

A Design System is more than a style guide and a 2017 trend. It is the foundation of a design language for building consistent, convenient and scalable tech products. However, adopting a design system that also considers user requirements means choosing a clear strategy, to avoid the risk of never getting things done.

How we’ve built the Ebury Chameleon Design System

Evolution vs Revolution

At Ebury the conversation began by bringing together the Designers, Front-end Developers, Leaders and Product Team involved. We could not afford to wait for the perfect system before releasing it, so we have taken an evolving approach to ensure we can carry out optimisations and code refactors efficiently.

Design Principles

One of the initial steps we took was to identify the Design Principles that would guide the process. After performing some user research to identify the key pain points of our app, we defined the principles that best frame what we want to achieve with the implementation of the Design System:

Secure

Design must help us understand any financial information. Its primary goals are to reduce ambiguity, provide consistency, and use the proper metaphor for each piece of information.

Time-saving

Users have clear goals when using our platform, and we should be able to let them find what they need straight away. The design must allow quick and well-connected navigation as well as an efficient performance for both the client and the server.

Peace of mind

Our platform is there to help the user. The design is developed in order to reduce information density, allowing users to dig into data as they need it. We encourage interaction over static and crowded data display.

Tailor made

Personalisation is one of the key features that the creation of the design system relies on. We have to avoid making closed decisions that don’t allow users to adapt the brand or format to their specific needs and culture.

A System Design

Voice and tone

Our users are managing operations around the world and this implies many different situations where we have to communicate our messaging properly. We have defined some basic rules in order to write appropriate and convenient UI messages and to label our components. Similarly, every component is adapted to the language and culture so we can adapt to the user context easily.

8-point grid system

We decided to follow the 8-point grid system proposed by Bryn Jackson and built the layout system based on this. The grid definition allows us to find the perfect balance between alignment and proportions as well as to reach pixel perfection at different resolution sizes.

Typography

‘Roboto’ is the main family we’ve integrated to optimise the display view. We’ve also taken its variation ‘Roboto Condensed’ when it comes to representing numerical values on data tables, which gives us a neat text format.

Icons

We use system icons from the Material Design library to represent standard concepts across applications. Any newly-generated icon is created with the same design principles.

Icons are not only used to communicate a piece of information visually, but also to represent actions that will help the user to work more efficiently. Icons, shapes and colours are combined in order to allow our customers to understand what’s going on at any time.

Colour

There are different ways of creating colour schemes for a website or an app. At Ebury we’ve chosen a monochromatic scheme approach for several reasons:

  • To simplify interactive elements and their affordance, as well as reserve complementary colours for important elements
  • To define an automatic system of variations based on the hue, with full control of the final look and feel
  • To set up an easy definition of it in a SASS or LESS file and change it dynamically

The colour palette is defined using HSL representation. This allows us to play with Hue, Saturation and Luminosity values in order to build the colour system.

Based on our Key Colour ‘Tech Blue’ defined in RGB #00BEF0 we have obtained the HSL equivalent HSL(193,100,47) and created the accent colour HSL(193,100,20).

// Main brand colors
$main: #00BEF0 !default;
$main-h: hue($main);
$main-s: saturation($main);

With the same Hue and Saturation, simply by changing the Luminosity value, we have obtained a range of values that creates the reference set of variations.

// Luminosity values
$l1: 12;
$l2: 20;
$l3: 37;
$l4: 47;
$l5: 56;
$l6: 88;
$l7: 96;

$main1: hsl($main-h, $main-s, $l1); // #00313d
$main2: hsl($main-h, $main-s, $l2); // #005266
$main3: hsl($main-h, $main-s, $l3); // #0097bd
$main4: hsl($main-h, $main-s, $l4); // #00c0f0
$main5: hsl($main-h, $main-s, $l5); // #1fd2ff
$main6: hsl($main-h, $main-s, $l6); // #c2f3ff
$main7: hsl($main-h, $main-s, $l7); // #ebfbff

Since we needed to have a neutral colour palette for other UI elements such as borders, backgrounds, shadows, etc., we’ve reduced the amount of tint of the main colour palette to obtain a greyish scale. Following a similar process, we have achieved a less saturated palette with equal luminosity and a subtle tone of the original blue (hue is kept to 193, but saturation is set at 10).

// Grayscale colors
$gray-h: $main-h;
$gray-s: 10;

$gray1: hsl($gray-h, $gray-s, $l1);
$gray2: hsl($gray-h, $gray-s, $l2);
$gray3: hsl($gray-h, $gray-s, $l3);
$gray4: hsl($gray-h, $gray-s, $l4);
$gray5: hsl($gray-h, $gray-s, $l5);
$gray6: hsl($gray-h, $gray-s, $l6);
$gray7: hsl($gray-h, $gray-s, $l7);

With these two palettes we have total freedom to create and build components, and also to generate new palettes for specific branded products, while keeping accessibility and contrast. Just by changing the Hue value we can create new palettes that work well on screens, with similar brightness and ranges of colours.
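
As an illustration, a hedged sketch of how a branded palette could be derived just by swapping the hue (the 280 hue and the $partner names are arbitrary examples, not a real Ebury brand):

// Hypothetical partner palette: same system, different hue
$partner-h: 280;

$partner1: hsl($partner-h, $main-s, $l1);
$partner4: hsl($partner-h, $main-s, $l4);
$partner7: hsl($partner-h, $main-s, $l7);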

Layout

Ebury Chameleon layout is optimised for desktop usage but responsively designed to be usable at any resolution on any device.

It is intended to allow a flexible navigation as well as keeping the main work space visible. When it comes to digging into the specific details of an item the user is shown a side panel that is accessible from a URL. The main menu is reduced for small screen sizes to keep a wider space for the content that matters.

Other considerations

There are many other things to consider when building a Design System, for example how to collaborate between team members, how the system is maintained, or which kinds of strategy we can follow to give good visibility of it to other departments.

At Ebury we have worked on defining a process that maintains our evolving system and is open for improvements and other suggestions.

PyConEs 2017 overall

In September 2017 Ebury went to PyCon Spain 2017 which took place in Cáceres (Extremadura), at the beautiful location of San Francisco’s Cultural Complex.

Read on for insights on refactoring, unicode, serverless, testing factories, diversity in the work place and open source!

In this article we would like to pull out some key talks from the conference, and explain the different roles that we had there:

  1. As sponsors: We love Python, so we are PyConEs sponsors as part of our commitment; it’s great to meet others passionate about getting the best out of Python and, of course, we are hiring.
  2. As participants: We want to learn, listen to interesting speakers and share ideas and experiences (we are not a Developersaurus Rex Company).

PyConES 2017 and what we have learnt about…

Development

High-impact refactors while keeping the lights on

Diego showed how they are handling a big refactor at ticketea using an A/B strategy; it is different from our refactor, but similar in its main goal.

Unicode

With Python 3, strings are unicode by default. We are all happy with that, although do we really know what Unicode is? Did you know that emojis are Unicode, and that even the different colours of emojis are part of Unicode?

Serverless

Serverless is much hyped nowadays, and AWS Lambda is the main star. Creating a Python serverless function on AWS is as simple as:

def my_handler(event, context):
    message = 'Hello'
    return {'message': message}

Also, there are some Python frameworks which make it easier, in particular:

  • Chalice: a Flask-like microframework (see the sketch below).
  • Zappa: Tool to deploy your Python WSGI apps to AWS.
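
As a taste of the Chalice style, a minimal app looks roughly like this (the hello-world naming is ours):

from chalice import Chalice

app = Chalice(app_name='hello')

@app.route('/')
def index():
    # deployed behind API Gateway and Lambda by `chalice deploy`
    return {'message': 'Hello'}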

Testing

Factories, what the hell?

This talk showed an approach to solving the ‘empty database problem’ for new developers, or whilst executing tests, using Factory Boy to generate model objects programmatically (a sketch follows the pros and cons below).

  • Pro: High customisation level.
  • Con: Extra effort to develop a ‘model generator’.
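
A minimal Factory Boy sketch (the Contact model and its fields are hypothetical):

import factory

from myapp.models import Contact  # hypothetical Django model

class ContactFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Contact

    name = factory.Faker('name')
    email = factory.Sequence(lambda n: 'user%d@example.com' % n)

# seed an empty development database, or build fixtures for a test
contacts = ContactFactory.create_batch(10)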

“Pytest: recomendaciones, paquetes básicos para testing en Python y Django” (Pytest: recommendations and basics for Python/Django testing)

This talk discussed lots of useful solutions for py.test, such as using the coverage files to feed a monitor daemon that automatically runs only the tests affected by a modified piece of code. Lots of work still to be done here!

Miscellaneous

Gender gap

Diversity is important for improving what we do, and it is invaluable when it comes to problem solving.

Women represent half of the population; however, the technology industry reports that only around 30% of its workforce are women, and that percentage decreases to circa 20% when focusing on tech teams. If we analyse open source communities, they hardly reach 10% women, as is the case with the OpenStack Foundation or the Linux kernel. The Python community has some initiatives, such as young women communities, to try to tackle this issue.

Open source

Have you ever thought about how open source is managed? Are the people maintaining the software actually being paid for it? And if not, does anyone have time to do it? Find the answers to these questions by following the story of Werner Koch.

Lastly, did you know that PyPI and pip are maintained by one guy working part-time at Amazon? Is that the model we want? Have a look and form your own opinion.

I hope you have enjoyed reading this article and hope to see you on our blog soon!

Performance improvements in Salesforce Lightning components

In this post I will show you how to improve your component performance using caching, unbound expressions, and how and when to use the three types of Events available in LEX.

Cache it

Often, we are wasting precious time doing requests to the server whose responses we already know. But what if we could recycle these responses? The time saved would mean a lower response time to the client, which is great. Using “action.setStorable();”, the request is only done the first time, and after that we use the same response, which is cached in our callback. This only happens for identical requests: the same method and parameter values.

/** Example: We are getting a contact list to display by pages, displaying 10 contacts 
* in every page, and we know this list is not modified in our component, so it is 
* not necessary to get the list every time we change the page.
* Solution -> cache!
* When we access to the component, the first page is loaded (request done), and 
* when we move to the second page another request is done (pagination attribute has 
* changed). If we then go back to the first page, this is displayed instantly 
* because it is cached and therefore no request is made.
**/
...
getContacts : function(component, event, helper) {
 
  var action = component.get("c.getContactList");  
  var pagination = component.get("v.pagination"); // the page we have to display
  action.setParams({"pagination": pagination});    
  action.setStorable(); // activating cache
  action.setCallback(this, function(response) {

    if (component.isValid() && response.getState() == 'SUCCESS') {
      var result = response.getReturnValue();
      component.set("v.contactList", result['contactList']);
      ...
    } else {
      // manage errors
      ...  
    }

  });

  $A.enqueueAction(action);
}
...

Going deeper into this point, we find three different scenarios depending on how much time has passed since the last real request:

  • Less than 30 seconds: the callback is executed using the response that was returned the first time.
  • Between 30 and 900 seconds: the callback is executed using the cached response, BUT a real request is also sent in the background. When the real request is processed, its response is compared with the cached one and, if they are different, the callback is executed again using the new response. If the responses are equal, nothing happens.
  • More than 900 seconds: the cached response has been forgotten and a new real request is performed, like the first time.

Bound vs unbound expressions

You are probably used to bound expressions, but maybe you are not sure what they are? Well, have you ever seen an expression like “{!v.Contact.Name}” in your components? I’m sure you have.

These kinds of expressions are used to display attribute values in our components: every time the value is changed in the client or server controller, it is refreshed in the page, and every time the user changes the value in the page, the related variable is updated. This is pretty cool but also quite expensive, because many events are created to manage these changes, consuming platform resources.

With # instead of !, we are using unbound expressions, so the value is only displayed the first time, and later changes are not mirrored in the page. As no change is possible, the platform is not wasting resources to create and manage events.

<!-- If the contact name and title are modified in the controller, the user will only see how the title is updated -->
...
<aura:iteration items="{!v.contacts}" var="con">
    <ui:outputText value="{#con.Name}"/>
    ...
    <ui:outputText value="{!con.Title}"/>
</aura:iteration>
...

To sum up, use unbound expressions as long as:

  1. A value does not change.
  2. A value might change but you do not want to display it in the page.
  3. You are passing a variable to a child component and the parent component does not want to know anything about the changes.

Use the correct Event type

We have three event types available to us in LEX: Component Event, Application Event, and the new Platform Event. They are very powerful, but it is critical to know where we have to use them if we want to get the best performance.

Component Event: the smallest event, it is fired by a component and can be caught by the same component or by any parent component.

In the above diagram, a component event fired by child 2 can be caught by any component in the diagram, and a component event fired by child 1 can only be caught by parent and child 1 components.

The first step is to create our component event. It has a variable to pass a string between components.

<!-- testEvent.evt -->
<aura:event type="COMPONENT">
   <aura:attribute name="message" type="String"/>
</aura:event>

After that, in the child component, we are registering the event we will fire. Notice that the name is used to reference the event in the controller, and the type is the event name.

<!-- childComponent.cmp -->
<aura:component>
...
<aura:registerEvent name="testComponentEvent" type="c:testEvent"/>
...
</aura:component>

Once we have registered the event, we can fire it in the controller.

/* childComponentController.js */
...
 testFunction : function(component, event, helper) {
   var evt = component.getEvent("testComponentEvent");
   evt.setParam("message", "OK");
   evt.fire();
   ...
 }
...

In the parent component, we need to define a handler, which will capture this event and execute the associated action.

<!-- parentComponent.cmp -->
<aura:component>
...
<aura:handler name="testComponentEvent" event="c:testEvent" action="{!c.handleEvent}"/>
...
</aura:component>
/* parentComponentController.js */
...
 handleEvent : function(component, event, helper) {
   var message = event.getParam("message");
   // use message
   ...
 }
...

Application Event: the medium event, it is fired by a component and can be caught by any component subscribed to it.


In this diagram, an application event fired by component 1 can be caught by any component in the diagram. An application event fired by child 1 can also be caught by any component in the diagram, however it is not recommended in this case as our goal is that only component 3 is subscribed to that event. Therefore we should use a component event for that in order to improve the performance.

The code is very similar, so I am commenting only on the differences. The event is created with an application type.

<!-- testEvent.evt -->
<aura:event type="APPLICATION">
   <aura:attribute name="message" type="String"/>
</aura:event>

In the component controller (that is firing the event) the way to reference it is a bit different.

/* firingComponentController.js */
...
 testFunction : function(component, event, helper) {
   var evt = $A.get("e.c:testEvent"); // key point
   evt.setParam("message", "OK");
   evt.fire();
   ...
 }
...

And, in the parent component, we must not define the event name. This is pretty important: if you define a name like we did with component events, the event will never get caught.

<!-- parentComponent.cmp -->
<aura:component>
...
<aura:handler event="c:testEvent" action="{!c.handleEvent}"/>
...
</aura:component>

Platform Event: the biggest event, it can be fired from Apex code, flows, Process Builder or API calls (external systems). We can create an Apex trigger to handle the event in Salesforce, or use CometD to enable an external system to handle it. We will not talk in depth about these events because we would need a full blog post just for them, but I want to leave a comment before finishing: please do not use Platform Events to communicate between components; use Component or Application Events for that.

As you can see, there are best practices around managing performance in Salesforce LEX that we have to understand and internalise in order to apply them in our daily work. As you develop these new skills and gain fluency, the effort will be worth it, as users will be happier seeing how their Salesforce Lightning apps perform. Happy coding!

Security in JavaScript: An AmsterdamJS story

Given that security is so important for our applications, why are we, as front-end developers, so intimidated when we have to secure our projects? How can we easily improve the security layer in our own JavaScript code?

I recently attended the AmsterdamJS Conference 2017, where I met Ingrid Epure. I would like to share some useful tips from her workshop, The Art of Keeping Your Application Safe.

Preventing XSS

Cross-site scripting (XSS) is one of the most common vulnerabilities a web application can have. It allows potential attackers to inject malicious scripts into pages viewed by end-users.

But if we follow the common architecture in our projects that separates the HTML templates from the JS controllers, most of our work is already done, because the majority of the frameworks we use nowadays escape the HTML code when rendering our templates.

Divide and conquer

This old motto has been very present in the life of a developer. It is still very popular today under other cool names like Microservices; however, I prefer to call it Component Driven Development.

The smaller our components, the better they will operate. This way, we have to think less about security, as we don’t have to cover complex scenarios.

Delegate to the browsers

We often forget that the browsers do a good job when we are manipulating the DOM. They also handle security concerns very well. So instead of creating or updating DOM elements directly from a string, why don’t we use the JS API?

Bad:

return '<a href="...">...</a>';

Good:

let a = document.createElement('a');
a.href = '...';
let text = document.createTextNode('...');
a.appendChild(text);

Avoiding reverse tabnabbing

Reverse tabnabbing is a phishing attack, where an attacker replaces a page tab with a malicious document by using window.opener.location.assign().

But we can avoid this simply by using noopener and noreferrer in the rel attribute of a link.

By adding the noopener keyword, the new / other page cannot access the window object via window.opener. And the noreferrer keyword tells the browser not to collect HTTP referrer information when the link is followed.
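
For instance, a link to an external page opening in a new tab would look like this (the URL is a placeholder):

<a href="https://external.example.com" target="_blank" rel="noopener noreferrer">External site</a>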

Lint your code

Everybody knows the benefits of a linter. They provide feedback in real-time about our code, according to the rules we specify. But most importantly, they do it automatically.

So, if we want to be sure that all the security rules will be applied, it will be better if we configure our linter for that.

Content security policy

One last thing we can do is include a header in the HTTP responses, restricting the domains that can load content in our application.

Content-Security-Policy: script-src 'self' static.ebury.com

The previous policy will allow only the scripts hosted on the same server as the application (self), and those hosted on static.ebury.com, to run. Hence, it will mitigate potential XSS and data injection attacks performed by scripts hosted on external domains.

In case you would now like to watch the JavaScript Security Workshop, it is available on YouTube. Enjoy!

TrailheaDX 2017 highlights

TrailheaDX is the Salesforce developer conference. It was held in San Francisco on the 28th and 29th of June, in the Moscone West Center. The number of attendees has quadrupled since last year, with people coming from all around the world; my colleague and I travelled from Málaga (Spain).

This conference was born after Salesforce developers demanded an event of their own. It is true that there is a zone for developers at Dreamforce, but that was not enough for us; finally our demands were listened to and Salesforce created this amazing conference.

One thing to note about TrailheaDX is that the people giving the talks are product managers, directors or engineers working directly with the products, so you get first-hand information.

There were more than 180 sessions covering multiple and diverse topics. Three key areas led the whole conference and were highlighted during the opening keynote with amazing demos run by Leah McGowen-Hare (Director of Employee and Trailhead Content Strategy) and Sarah Joyce Franklin (SVP Dev Relations & GM Trailhead). The demos are already online: check this powerful demo of Einstein and this one about Salesforce DX, both run by Leah McGowen-Hare, and also this one about Platform Events run by Sarah Franklin.

Leah McGowen-Hare and Sarah Joyce Franklin running demo at Opening Keynote

Salesforce DX

Salesforce DX was released in beta during TrailheaDX. I have already written a blog post about it because I was participating in the private pilot; you can check it here to learn what Salesforce DX is in detail.

Salesforce DX Beta release

As a summary, Salesforce DX is a group of tools oriented towards developing and releasing Force.com applications following the industry standards of version control systems (VCS), Continuous Integration (CI) and Continuous Deployment (CD). It comes with a Command Line Interface (CLI) that we could enjoy in every single talk about Salesforce DX.

Many of the members of the Salesforce DX team were at the conference giving multiple talks about this topic.

We had Wade Wagner (VP Product) introducing Salesforce DX; his session is already online.

We also had Dileep Burki (Director) talking about the Second Generation of Packaging. It is still in development and will be offered as a private pilot in the next few weeks, and will not be available in beta until after Winter ’18, so there is still quite a long way to go. Second Generation Packaging is meant to be a single package type that ISVs, partners and end clients can use, working together with Salesforce DX for a more automated development and release experience.

There was also a workshop run by Jules Weijters called Salesforce DX and Continuous Integration, which showed how to use Salesforce DX with Travis as the CI system. This was one of my highlighted sessions to attend. Unfortunately the session was so packed that the WiFi couldn’t provide internet for all of us, and I couldn’t do the workshop. A real shame. On a positive note, the workshop was based on a trail that we can all do at home.

Salesforce DX and Continuous Integration Workshop

Thomas Dvornik (Technical Lead) gave a great geeky talk about how to customize your environment to be more productive using Salesforce DX. Mike Miller (Software Architect) and Jim Wunderlich’s (Salesforce DX Technical Lead) session was about how to migrate existing apps to Salesforce DX. This was the very same talk that I did for DreamOlé, although mine was in Spanish.

And there were more sessions I couldn’t attend! I could tell there were big expectations for Salesforce DX on this conference and I think expectations were exceeded.

Einstein

Artificial Intelligence has firmly arrived in Salesforce. It is called Einstein and there are several products around it.

Sales Cloud Einstein is a product that helps sales teams to focus on the leads with the highest conversion probabilities. Einstein takes the information from all your leads, and creates and trains a model that gives a score (probability of conversion) for each lead. Along with that score, Einstein gives feedback to the user about what is influencing it, so the user can act on it. For example, if the country is an important field for our model, and this field is empty, it will tell the user. It also offers Account and Opportunity Insights and can automatically log activities from your sales users’ email and calendar. The key point of Sales Cloud Einstein is that the user doesn’t need to upload any data or do anything to train the model; it takes all the data from your CRM (and your email client and calendar, if you allow it). At Ebury, we are already in a trial with this. Looks promising!

Sales Cloud Einstein session

Einstein is an application that lives outside Salesforce, so in order to communicate with it we need an API. There are 3 different APIs: Einstein Vision, Einstein Language Intent and Einstein Language Sentiment.

Einstein Vision API gives us the power to evaluate images based on a model that you train. I will use the same example we saw in a workshop: let’s say we want to classify cats based on their breed every time we upload a cat image. The first thing we need to do is create the model. We just have to upload images (sample data), the more the better, assigning a label for classification. In our example we would upload lots of cat images with a parameter saying whether they are British, Bengali or Siamese. Once we have uploaded the images, we call the Einstein API method to create the DataSet; we will then have our model ready to be trained. Then we use the Einstein API to train the model. This process is asynchronous and can take some time, depending on how big the DataSet is. When this is done, we are ready to get predictions against the model: when we upload an image, it will be automatically classified by Einstein. Amazing!

The other two APIs are for Natural Language Processing (NLP). Based on plain text, Einstein can tell you what your customer wants (Einstein Intent) and whether that message is positive, negative or neutral (Einstein Sentiment), giving you the result based on probability.

The concept is the same as for Einstein Vision. We need to create a model, uploading lots of plain text sentences with their corresponding labels. For Einstein Sentiment, this label would be positive, negative or neutral. For Einstein Intent, we would define labels based on what the customer is requesting: it could be support, services or sales, in order to redirect the inquiry to the right team. We would then create the DataSet and train the model asynchronously. And we are ready: the next thing is to upload a sentence and see what our models return. This is great for automating the routing of customer requests, for example.

One thing to keep in mind with AI: the better your data is, the better your predictions will be.

These are very powerful tools that I’m sure will be growing in the next releases.

Platform events

Salesforce knows that integrations are a key part of the platform and wants to make the communication between Salesforce and external systems easier for their customers.

At TrailheaDX Salesforce presented Platform Events, a custom messaging platform so that customers can build and publish their own events, within Salesforce or outside of Salesforce. This platform is based on the publish-subscribe model.

There are 3 main components. The Salesforce message bus, where events are added in chronological order with replay IDs. Publishers, which can be internal or external to Salesforce, push new events onto the bus; internal apps use Apex, Process Builder or Flows, while external apps use the native Salesforce APIs (REST, SOAP). Consumers, which can also be internal or external to Salesforce, subscribe to events via the CometD/Bayeux protocols or with Apex.

Platform Events architecture

A Custom Event looks very similar to a Custom Object, but its API name ends with “__e” (instead of “__c”). You can also add custom fields to it, keeping the “__c” suffix. To publish an event, you only need to create a record of that event object. To listen to an event, we can create an Apex trigger if we are inside Salesforce, or subscribe to it using the CometD/Bayeux protocols if we are outside the Salesforce ecosystem.
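
A hedged Apex sketch of both sides (OrderShipped__e and its field are hypothetical, not one of our real events):

// Publisher side: publishing is just creating a record of the event object
OrderShipped__e evt = new OrderShipped__e(OrderId__c = 'some-order-id');
Database.SaveResult sr = EventBus.publish(evt);

// Subscriber side: an after-insert trigger on the event object
trigger OrderShippedTrigger on OrderShipped__e (after insert) {
    for (OrderShipped__e e : Trigger.New) {
        // react to the event, e.g. update the related order
    }
}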

This is a very powerful tool that we are going to use at Ebury to make our integrations more effective.

These points that I have just summarised were the 3 main key areas of the conference, but there were many other topics covered in the sessions: testing, new features coming up like External Services or the Salesforce API explorer, Lightning components, the community Lightning builder, and many more. But I’m afraid I can’t talk about all of them in just one post, and it was literally impossible to go to all the sessions.

Alongside the conference there were also sessions focused on Equality for All and the importance of a diverse community. Tony Prophet, the Chief Equality Officer, was at the conference contributing to different panels. The sessions and panels were inspiring.

Tony Prophet, Chief Equality Officer at Equality For all panel

At the same time that sessions were happening there were booths distributed in the “Salesforce Forest” where you could go to see a demo or ask for information. There was a booth for almost everything: Salesforce DX, Einstein Vision, Einstein API, Community Lightning Builder, Lightning Application Builder, Certifications, Salesforce API, and many more. And of course there were booths for partners where you could go to get a demo for their products, such as DocuSign, Rabbit, and Copado.

Developer Forest at TrailheaDX

We could then relax after the first day of sessions with some beers and music at the party at the Warfield. This featured a live set from Thievery Corporation!

For the closing keynote we had Damon Lindelof, screenwriter and producer, best known for the television series Lost, sharing his experience with us. There were lots of Lost fans in the audience.

You can check TrailheaDX ‘17 sessions in the Salesforce Developer channel on YouTube. Enjoy!

As a closing note I have to say I just loved TrailheaDX. I came back home with a bag full of motivation, full of new things I want to learn, and full of new features that I’m willing to apply here at Ebury. Above all, I came back feeling very happy and very fortunate for what I do.
