What Is One-to-One Marketing & How Does It Work?

When you have experiences with brands that seem so tailored to you as an individual, have you ever wondered how they do it… or how you can too?

True one-to-one marketing interactions today go far beyond using your name. The brands that have truly mastered personalization present enticing product recommendations, send well-timed abandoned-cart emails, and issue unique codes and coupons to each individual.

The best marketing today reads the thoughts of customers, and even foretells the future (okay, not literally, but you get the idea, right?). It helps customers by giving them exactly what they want and even offers predictions as to what they are likely to want next.

How can this all be done? Artificial intelligence marketing.

AI — today more of a buzzword than well-understood technology — is designed to optimize marketing tasks, tools, and outputs. It’s on the verge of transforming our roles and is serving as a catalyst to the evolution of e-commerce.

Global marketing organizations are using AI and its subset technologies — machine learning and deep learning — within their strategies to unify the entire customer experience with “that little extra.”

AI for personalization

According to our Building Trust and Confidence report, more than half of global marketers are using AI to personalize, understand cross-channel customer behavior, and manage interactions.

Throughout the rest of 2019 and into 2020, we’ll begin to see one-to-one marketing in action and pinpoint areas of influence, success, and improvement.

This brings up several interesting questions: What is true one-to-one marketing (what does it look and feel like)? How does it actually work within a marketing platform (what’s actually happening beneath the hood)? And how does AI play into all of this?

What Is One-to-One Marketing?

In its simplest form, true one-to-one marketing means each person gets their own content, their own experience.

The beauty of AI is that it can define segments down to the individual — each person can get their own email, text message, app notification, web content, or overlay… virtually anything customizable can be customized.

If you have 3 million contacts, for instance, but you have 1 million segments, that’s not one-to-one.

One-to-one marketing is even more granular, more relevant, and ideally more engaging than mass marketing or its sister, “segmentation.” It ensures an individualized message is created for every customer, though the terms “personalization,” “individualization,” “one-to-one,” and “customization” are often used interchangeably.
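To make “each person gets their own content” concrete, here is a minimal sketch (with made-up contact fields) of rendering a message per individual rather than per segment:

```python
# Hypothetical contact records; in a real platform these come from a
# unified customer database, not a hard-coded list.
contacts = [
    {"name": "Ada",  "favorite_category": "running shoes", "channel": "email"},
    {"name": "Bert", "favorite_category": "sunglasses",    "channel": "push"},
]

def render_message(contact):
    """Build an individual message from a contact's own profile fields."""
    return (contact["channel"],
            f"Hi {contact['name']}, new arrivals in {contact['favorite_category']} just landed.")

messages = [render_message(c) for c in contacts]
```

The point is that the output is a function of the individual profile, so no two contacts need to share the same rendered message.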

Examples of One-to-One Marketing

What does one-to-one marketing look like? Let’s look at a few examples.


Individualized codes

One-to-one marketing brings an unmatched “human” touch. In that spirit, Sigma Beauty sends an individualized code to customers.


Image source: Sigma Beauty

Birthdays and other anniversaries

Many B2C brands already do some triggered communications around key dates for customers. These are a sort of “must” in e-commerce (and tactics you can’t afford to get wrong).


Image source: Sunglass Hut

Real-time availability

With real-time availability, the content of an email populates at the moment it’s opened, not when it’s sent. A cruise line like Carnival, for example, can show live availability and pricing, so the offer a customer sees is always current:


Image source: Carnival

Abandonment emails

You know the ones — you’re browsing an online store or begin to build a shopping cart, but exit your browser before completing a purchase… then, voilà! A couple hours later, you receive an email prompting you to resume your activity.

These examples scratch the surface of the potential of AI-infused marketing technologies, but they give you an idea of how this disruptive technology can manifest outwardly within your marketing.

The Problem with One-to-One Marketing

The problem is that one-to-one marketing is impossible to achieve manually. Marketers who attempt these tactics by hand may find some minimal, isolated success, but ultimately become crazed and overworked, like elves frantically breaking their backs to create, wrap, and organize every Christmas gift without an assembly line to mechanize the process.

Achieving one-to-one marketing at scale, across gigantic databases of hundreds of thousands of contacts, requires specialized technology that can handle massive amounts of customer data and produce intelligent outputs.

Overcoming Common One-to-One Marketing Challenges

The majority of modern-day marketing suites enable some level of personalization or customization, but only on a channel-by-channel basis.

This siloed approach limits the capabilities, creativity potential, and visibility of marketing teams, and, most importantly, almost always causes a fragmented customer experience.

To overcome this common technological limitation, marketing teams often find that communications work best when they’re calculated and executed based on singular customer events (defined, isolated instances like a customer making a purchase, or implicit events like customer churn and attrition).

The best one-to-one, omnichannel experiences are built in engines that are channel-agnostic, ensuring every interaction is unique, containing the best content, independent of channel.

For example, a platform that consolidates all web, mobile, email, and purchase information into a unified customer profile would include preferences, behavior trends, predicted behaviors, propensities, and affinities for individuals. This unified customer profile is the “single source of truth,” the foundation upon which hyper-personalization can take place across all channels.
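As a rough sketch of how such a unified profile might be assembled (all field names here are hypothetical, not a real platform’s schema), events from every channel get folded into one record per customer ID:

```python
from collections import defaultdict

# Invented event stream spanning three channels for the same customer.
events = [
    {"customer_id": 1, "channel": "web",   "type": "view",     "item": "boots"},
    {"customer_id": 1, "channel": "email", "type": "click",    "item": "boots"},
    {"customer_id": 1, "channel": "store", "type": "purchase", "item": "boots"},
]

def build_profiles(events):
    """Fold channel events into one profile per customer ID."""
    profiles = defaultdict(lambda: {"channels": set(), "purchases": [], "views": []})
    for e in events:
        p = profiles[e["customer_id"]]
        p["channels"].add(e["channel"])
        if e["type"] == "purchase":
            p["purchases"].append(e["item"])
        elif e["type"] == "view":
            p["views"].append(e["item"])
    return dict(profiles)

profiles = build_profiles(events)
```

In a real platform the consolidated record would also carry predicted behaviors and affinities, but the principle is the same: one profile, fed by every channel.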

Personalization engines understand the nature of individual events, and they can associate context with events, then choose relevant content (without any channel-specific styling), and ultimately syndicate that output via the preferred channel of the customer.
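Here is a toy sketch of that flow, with invented field names and drastically simplified logic: an event arrives, the engine derives context from the profile, picks channel-neutral content, and routes it to the customer’s preferred channel:

```python
def personalize(event, profile):
    """Event -> context -> channel-neutral content -> preferred channel."""
    # Derive context from the event and the customer's history.
    context = {"recent_purchase": profile["purchases"][-1] if profile["purchases"] else None}
    # Choose content with no channel-specific styling attached.
    if context["recent_purchase"]:
        content = f"How are you liking your {context['recent_purchase']}?"
    else:
        content = "Here is what's trending this week."
    # Syndicate via the customer's preferred channel.
    return {"channel": profile["preferred_channel"], "content": content}

msg = personalize({"type": "post_purchase"},
                  {"purchases": ["boots"], "preferred_channel": "push"})
```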

If this sounds far-fetched, complicated, or too techy, rest assured that it’s the only way true one-to-one marketing is happening today. By working with the machine (in essence, AI, as we get to below), you vastly simplify the process, relying on the machine to pull the strings of your show. You don’t have to understand the ins and outs of the machine for this to work. But if you want to pull back the curtain just a tad, read on.

How Does Artificial Intelligence Marketing Actually Happen?

…and what is the machine really doing that’s so cool?

As mentioned, by working with an AI-enabled marketing automation system, you can automate the process of delivering one-to-one experiences to your entire database, at scale, across every channel. Let’s lift up the hood and see how it all happens.

Artificial intelligence works best when it’s weaved into the fabric of a marketing platform. This distinction is critical — AI isn’t one individual tool or thing. It’s a part of the entire system. In other words, AI isn’t an individual atom making up the body; it’s the emotion or love or air that runs throughout the body.

An underlying layer of code that’s hard-wired to “self-learn” can work with multiple channels, in multiple instances, and across a database. That means that AI works best when vertically integrated across a platform for enhanced capabilities, not when implemented within a single channel.


An embedded code layer lies underneath and informs every task that an AI-enabled marketing platform handles. While the illustration condenses some complex actions and processes, it shows how AI is ingrained into an automation platform. Image source: Emarsys

Self-learning systems (which any tool branded as “AI” should be) have to learn on their own, creating, reacting to, and refining rules that humans never explicitly give them. On top of that, these processes all need to occur within a matter of milliseconds. A solution that does this sits atop all customer data with 360-degree coverage, including CRM, behavior, product, and external datasets.

The good news? AI-driven solutions are built to make this easy to manage so you’re as hands-off as possible — in fact, they work best this way.

The best AI systems let you tell them your strategy (e.g., setting a sliding scale to indicate how aggressively you want to discount an item for a particular campaign), and then do the rest.

AI systems lean on complex algorithms (which, like the fire burning the coal, you don’t touch) that work with previous behavioral data to calculate the probability of certain events happening (like a customer purchasing), taking into account expected revenue and cost as well as the guidelines you set, to create a final output. The whole process works at once, kind of like this:


AI-enabled machines use algorithms, customer data, and other variables to help marketers send content most likely to drive action, at the best time, to every contact.
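The probability-times-revenue logic described above can be sketched in a few lines (all numbers are invented for illustration): for each candidate action, score it by expected value, then pick the best.

```python
def expected_value(p_purchase, revenue, cost):
    """Expected payoff of an action: probability-weighted revenue minus cost."""
    return p_purchase * revenue - cost

# Hypothetical candidate actions: deeper discounts raise the purchase
# probability but lower the revenue per sale.
candidates = [
    {"action": "no_discount", "p": 0.05, "revenue": 100.0, "cost": 0.0},
    {"action": "10_percent",  "p": 0.12, "revenue": 90.0,  "cost": 0.0},
    {"action": "20_percent",  "p": 0.15, "revenue": 80.0,  "cost": 0.0},
]
best = max(candidates, key=lambda c: expected_value(c["p"], c["revenue"], c["cost"]))
```

Here the 20% discount wins because 0.15 × 80 beats both alternatives; with different predicted probabilities, a different action would win.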

Through an extendable model, users can add many contact fields, relational data tables, or decision trees to create quite sophisticated models for campaigns.

Use case: one-to-one incentives

Let’s look at one use case, and how it works: Incentives.

AI-enabled engines typically contain rules for personalizing. So, during “send-time,” the channels basically dial up the personalization engine, and request content for individual contacts.

Since the machine is programmed to (a) consider each contact’s previous purchase, historical, and website data, (b) understand that information, (c) develop the customized communications most likely to convert each contact, then (d) send, it can intelligently and automatically evaluate how much to discount for every customer:


Incentivization is unique to every person — some people require steeper discounts, while some require no incentive. The machine knows how much to take off for every customer to maximize likelihood of purchase.

Consumers are then provided with incentives most likely to engage them, personalized at the point of interaction.

Pro Tip: There’s a lot of fear and rhetoric about AI taking jobs. It won’t! Menial, repetitive tasks like segmenting data, crafting campaign blueprints, and creating if-then rules can and should be offloaded to the machine. Beyond a superior customer experience, half the point of using AI is to relieve your workload and create efficiencies that were previously unattainable.

When all of this comes together, the end result is true omnichannel one-to-one marketing that is driven by AI. AI marketing is, indeed, the bridge that leads to scalable, individual interactions.

Final Thoughts

AI is real, and it allows us to solve real problems for real people — to help our customers, by:

  • Trying to anticipate what else they might find helpful in tandem with a current or recent purchase, e.g. via product recommendations
  • Allowing customers to stay up to date with content that populates in real-time based on when it’s opened, e.g. weather updates or local hotel availability
  • Making it easy to take the next best action in a given circumstance, e.g. with timely push notifications from a mobile app

As an optimization technology, AI can help you deliver anywhere from a mildly better to a drastically enhanced customer experience, depending on how much quality data you have and which technology you’re using.

Whether the marketing masses know it or not, AI has the potential to be the salvation that time-strapped, over-worked teams have been waiting for.

AI marketing solutions connect the entire marketing spectrum, including customer data, access to that data by the marketer and machine, campaigns and content, and execution. AI will help you provide one-to-one personalization for each and every one of your deserving customers.

Handpicked Related Resources:


Preparing for the Future of Data: New Roles, Big Data, Blockchain, and VR/AR

Technology continues to create new possibilities for marketing, and the role of the marketer — now more of a moving target — is changing with new tech, more channels, and, as we’ll discuss in this post, the accumulation of more data.

The worlds of technology, data, and personalization have converged. If this were a tale, it might sound something like this: cloud computing broke onto the scene within the last decade… enabling better storage and organization of what we’ve deemed big data… marketing automation tools have standardized and accelerated all the things we can do with data… and AI (such as chatbots, predictive content engines, adtech, and 1:1 personalization) has catapulted all of this into another stratosphere.

How are marketing teams evolving? Where are they getting the skillset to deal with data? As the world around us becomes more fragmented and complicated (and as new technologies like blockchain and AR/VR come to the forefront), how can you simplify all of it for better outcomes? And where do your opportunities lie within all of this?

The Impact of Big Data on Marketing Organizations

In short, big data augments marketing in many important areas now, and will continue to. Soon, AI will do the same. Today, almost all marketing organizations are using big data, and while first-party data is your most valuable asset, data itself has become both commoditized and democratized, as I wrote about a couple weeks ago.

I’m most interested in where data is going — the functions or features data will enable that don’t already exist. Rule- and outcome-based marketing is one area that’s going to take off.

At the moment, a human decides on the strategy and tactics behind achieving a goal. Imagine, though, if we could categorize content and products, as well as group customers into cohorts and events. An interface that lets us simply define the goals, while the machine executes toward them (selecting which content for which customer on which channel at what time), would revolutionize what is, today, the status quo in marketing execution.


The Marketing Team of the Future: Mapping out Your Team and New, Key Roles

AI skills are spreading quickly. Do marketing organizations need dedicated data scientists or machine learning experts to leverage data for AI?

Part of the promise and benefit of AI-driven marketing platforms is that you don’t need developer-like or AI skills — marketers with no technical experience can now reap the rewards of such systems.

But that’s not to say there isn’t a market for or value in people with expertise in that area.

To hire/acquire or contract for data talent

The decision to hire/acquire vs. contract for AI- or data-skilled workers normally comes down to how much data a company has and how big its product catalog is.

If you only have one source of data entry and 100 products, it might suffice to just use business rules across relational tables rather than letting the machine decide. If you leverage multiple channels and have several thousand items, the matrix of combinations becomes complex to manage. If you have never used a machine learning approach before, ideally you’d first contract with a software provider who also consults beyond the pure software. Once they prove the value of a machine learning approach, the next step is to evaluate how your own team can learn the same skills by engaging with the experts. If your team realizes they won’t be able to handle the day-to-day, then reach out to specialized talent via popular platforms like Upwork, Topcoder, and Kaggle.

But with this in mind, what new, key roles might your marketing team of the future have?

The new marketing organization

The new technologies and advancements in data science that we’ve been discussing will open the door for new kinds of leadership and managerial roles in some (mostly retail) industries. Here are three new jobs that will begin to emerge.

   ► Chief Experience Officer

Many customer-centric leaders have been clamoring for a Chief Customer Officer for a while now. Soon, we’ll start to see the emergence of a related position: the Chief Experience Officer. This person will focus solely on the customer experience at companies that sell products and services.

   ► Augmented Reality Manager

AR/VR technologies are becoming a core part of marketing. But development of these technologies hasn’t really sat “in marketing,” because most marketers can’t program these kinds of machines and building them isn’t a marketing activity. Soon, though, the gap will tighten, and uniquely talented people with both right- and left-brain competencies will manage the entire AR experience, especially in retail.

   ► Bot Developer

Bots are already serving purposes across the Internet, transitioning what were basic, human duties into automated replicas, like customer support chats. Bots hold great potential for both desktop and mobile. Where mobile apps stand today, bots could stand tomorrow. Marketing-related bots will require developers to program them to not only be smart, but also benefit the brand.


How Blockchain Will Impact Marketing, Data, and Your Team

Data protection and privacy are huge concerns right now for consumers. If it seems like cryptocurrency and blockchain are all the hype, it’s because they are. But how will blockchain encryption impact marketing in the near term? I see three ways.

Related Content: What Is Blockchain & How Is It Changing Marketing?

   ► Online Ad Verification

As brands continue to pull online ad spend amid concerns over data safety, deploying blockchain for ad verification could prove genuinely valuable for marketers.

Blockchain could provide a mechanism for brands to verify where their ads are actually placed. The global forecast for revenue lost to ad fraud in 2017 is approximately $16.4 billion (Business Insider). Considering how expensive ad delivery auditing is today, blockchain offers a cost-effective alternative: decentralized ad auditing.

With blockchain, you could potentially take ad deliveries from a server, and then release them to the mining machines, which would then analyze them for fraud. Simple indicators like a “non-live” browser supposedly seeing an ad can help determine if the ad delivery actually took place. The next step down the line may be identifying that kind of fraudulent activity, and blacklisting it in real time.
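A drastically simplified version of that fraud check (field names are hypothetical) just filters delivery records on such indicators:

```python
# Invented ad-delivery records; a real auditing pipeline would pull these
# from delivery servers or mining nodes as described above.
deliveries = [
    {"ad_id": "a1", "browser_live": True,  "render_ms": 1200},
    {"ad_id": "a2", "browser_live": False, "render_ms": 0},
]

def suspect(delivery):
    """Flag deliveries where a 'non-live' browser supposedly saw the ad."""
    return (not delivery["browser_live"]) or delivery["render_ms"] == 0

flagged = [d["ad_id"] for d in deliveries if suspect(d)]
```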

   ► Data Protection and Regulatory Compliance

With the GDPR coming into effect across all major markets, you can also leverage blockchain technology to store large volumes of customer data securely.

Related Content: What You Need to Know About the GDPR [Plus Bonus Influencer eBook]

Because the technology is securely encrypted and decentralized, it offers an alternative to traditional data storage methods. Additionally, GDPR compliance, which requires marketers to obtain consent from their customers, can also be managed through blockchain.

Blockchain experts expect marketers to use this technology to negotiate consent in an era of mandatory digital consent.

   ► Media Buying

Facebook and Google have built their business models around their programmatic advertising and display ad networks. Google is essentially the middleman that helps advertisers and website owners build trust with each other.

But what if an advertiser and a website owner already trusted each other? They wouldn’t need a middleman to take a cut of their profits. By using blockchain technology for programmatic buying, advertisers can verify whether a user is authentic and ensure that the website owner only charges the advertiser for real clicks. Blockchain could very well change Google’s display network model.

The On- and Offline “Data-to-Device” Relationship Will Improve CX

Marketers are already starting to use offline data to influence the digital experience. One untapped area of data management is how online behavior influences offline advertising and offerings.

If I walk into a store, I’d love to have a personal assistant, based on all the data points I’ve given a company via every channel, that could support me in buying decisions for more expensive or luxury products. A personal assistant like this could be accessed either via an iPad I take with me when I enter a store (getting product locations based on online wish lists, browsing, cart, or purchase behavior) or even via a human consultant.

➤ Use Case Example: Let’s say I walk into a furniture store. I indicate, upfront, to a sales consultant which products, materials, and colors I’d like to discuss in detail, instead of losing time in the painful process of being consulted on products I already dislike. This is just one simple example of how the unified customer profile can create a positive sales experience for both the sales consultant and the client.

Data and augmented/virtual reality

Ever wondered what the implications of all of this customer data you’re collecting will be on augmented/virtual reality systems once you have them? Let’s not jump forward too far (these technologies are several years in the future), but it never hurts to take a look into the crystal ball.

Amazon Go has already spurred the next phase of retail by letting you get in and out of a store quickly — with its in-store systems, you select what you want and walk out without standing in line.

This in-store experience (paired with same day delivery) will not only decrease the cost to run a physical store, but will also finally offer a means for impatient window shoppers to start to buy more.

On the other hand, wearables like Google’s Glass project could start to render additional information about products you view. If you’re on a diet, the overlay could show you, in advance, whether the selected product exceeds your daily calorie limit, or whether a piece of furniture will look good in your own room. Based on your running behavior, it could also recommend the best running shoe for you.

These are just simple examples of how data can help people get meaningful advertising based on real day-to-day interactions with products.

Today, there is very limited information about how we use the products we purchase. But we’re not far off from a real-time marketing data interface that would let a marketer note customer behavior or intent within a brick-and-mortar store and react to it (with an incentive or product recommendation) before the customer leaves the store. The real challenge will be the time investment in rich customer-facing interaction; the relevant data and the needed technology already exist.

Closing Thoughts

As a result of all this evolution and convergence of tech, marketing, and data, data science has become a core competency for any high-flying marketing team. Historically, data work lay with the IT folks, but data skills are now embedded within the fabric of any marketing team that’s trying to knit together a data-driven strategy.

For now, continue to improve your data quality by using clean collection methods. Continue learning and experimenting with new capabilities with a keen eye to the future — looking out for one or two of these pieces of technology that could give you a competitive advantage down the line. ◾


Daniel Eisenhut

Daniel Eisenhut is Vice President of Services and Support at Emarsys, and has been with the company since 2011.

Connect with Daniel: LinkedIn | Email


Using AI for Marketing: How Machines Optimize Decision-Making

This article features content from Revolution 2017. Join us in London in March 2020 for our next event. Interested in learning more? Click here.

Retail sales and growth rely on the depth of your customer knowledge. Do you know your customers well enough, for example, to predict what they’ll want to purchase from you next? While some patterns are obvious, like Black Friday sales, the deeper into detail you go with each customer, the more complicated your predictions become.

At the recent Emarsys Revolution event in Berlin, physics professor and Blue Yonder founder Dr. Michael Feindt spoke about the state of prediction today and how artificial intelligence (AI) is taking data analysis far beyond what humans can do on their own.

Knowing Your Customer in an Uncertain World

Data not only helps us understand what happened in the past, but also provides clues about what we can expect to happen in the future. Some things, like recency and spend amount, can follow defined patterns that you can translate into a personalized email or offer. Without a technology solution, you might be able to sift through the data for a single customer in a day.

Now think about doing that for thousands of customers, each with thousands of data points. That’s where marketing teams begin to look for software or third-party services to scale all that analysis. And because predicting customer behavior is a central strategic activity for marketing, we must go beyond finding faster ways to make sense of the growing amount of data. Our databases are rich with clues that can help us deliver more personalized experiences to our clients. We must become far more accurate in the decisions we make based on all of that data.

Accuracy in prediction depends on a system that accounts for every possible input and influence. Tracking only recency and average order value gives you a rudimentary picture of your customer. Now add in channel, preferred device, average incentives, and related items purchased, and the decision becomes more complicated, takes longer to make, and may still only be a guess.
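To see why adding signals quickly outgrows human eyeballing, here is an illustrative scoring sketch: a weighted combination of features squashed into a probability. The weights and feature names are invented, not from any real model:

```python
import math

def purchase_probability(features, weights, bias=-3.0):
    """Combine weighted signals and squash to a probability with a logistic."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical weights; each added signal shifts the decision in a way
# no rule-of-thumb can track across thousands of customers.
weights = {"recency_days": -0.05, "avg_order_value": 0.01, "email_clicks": 0.4}
p = purchase_probability(
    {"recency_days": 10, "avg_order_value": 120, "email_clicks": 3}, weights)
```

With two features a human can reason about the outcome; with dozens, only the machine can score every customer consistently and in time.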

“What we’re trying to do is to give companies — now mainly retailers — real value through data and scientific software. So it’s about bringing science into economy but hiding the complexity and trying to build applications that are end-to-end applications for real problems that people have in business.”

Professor Michael Feindt • Founder, Blue Yonder • @M_Feindt

The Anatomy of a Prediction

To predict with any accuracy, you first need data to pore through; the more data, the better, because this is what creates the most accurate and in-depth picture of each customer. Then you’ll analyze the data and look for patterns, causes, and the relationships between them. These predictive insights form the basis of the prescriptive action you’re going to take.

But there’s a lot to account for between poring through the data and deciding on the best action to take.

Causation and correlation

When you find patterns in customer behavior, you then have to determine why those patterns are occurring. This often begins with correlating two data points around the pattern and seeing what insights you can glean, because ultimately, you’re trying to learn what created the pattern and whether it presents a marketing opportunity. The problem is that using only a few correlated data points doesn’t necessarily reveal the true cause, or even a true correlation.

Causation and correlation are mainstays of statistical analysis, but some people are confused by the relationship between the two. As Prof. Feindt’s colleague Lars Trieloff at Blue Yonder puts it:

“For instance, there is a distinct correlation between the number of people drowning in swimming pools in the US and the number of movies Nicolas Cage appears in. Yet not even his harshest critics would accuse Mr. Cage of causing people to drown and the hypothesis that swimming pool deaths inspire Hollywood studios to cast Mr. Cage does not align with Hollywood production timelines.”

To understand the real reason customers are behaving in a certain way, the marketer needs many data points to draw correlations between. The data’s out there, but it’s not humanly possible to analyze it all in such a way and still be able to act in time.
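The Nicolas Cage example can be reproduced in miniature with invented numbers: any two series that merely trend together will show a high correlation coefficient despite having no causal link.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up yearly counts: both simply rise over time...
drownings = [10, 11, 12, 13, 14, 15]
cage_films = [1, 2, 2, 3, 3, 4]
r = pearson(drownings, cage_films)  # ...so r is high with no causation
```

A correlation near 1 here proves nothing; only many data points and a causal hypothesis can separate real relationships from coincidence.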

Predictive + prescriptive analytics

Once the data’s been analyzed, the next step is to identify what’s predictable and then come up with an actionable plan to address that.

You can use predictive analytics to predict customer behavior in the near future based on recent behavior. This stage essentially finds the pattern in customer behavior and calculates the probability that the pattern will occur again.

Prescriptive analytics offer an action recipe, a plan for execution based on a decision fed by many data inputs. The factors that most influence these kinds of decisions include:

  • Objective data from as many sources as possible
  • Predictions from the predictive analysis
  • Cost/utility: how much budget you want to spend relative to the outcome you want to achieve
  • Reinforcement: measuring the approach’s performance and using that data to make your efforts better targeted and more efficient
  • Technology: taking all the analytical insights you’ve gathered and scaling the workflow
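The reinforcement factor above can be sketched as keeping a running conversion rate per action and favoring the winner. This is a naive greedy approach with invented counts; production systems balance it with more careful exploration:

```python
# Hypothetical send/conversion tallies accumulated by measurement.
stats = {
    "offer_a": {"sent": 100, "converted": 4},
    "offer_b": {"sent": 100, "converted": 9},
}

def conversion_rate(s):
    """Observed conversion rate; zero if nothing has been sent yet."""
    return s["converted"] / s["sent"] if s["sent"] else 0.0

# Greedily prefer the action that has performed best so far.
best_action = max(stats, key=lambda a: conversion_rate(stats[a]))
```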

Predicting retail probabilities

So when can you use predictive analytics? The world is a very uncertain place, so it’s hard to predict anything impacted by random chaos. Weather is chaos, and we’ve come a long way with our ability to forecast, but even so, we can’t predict what the weather will be more than 14 days into the future. There are too many variables that can occur over too long a period of time.

The key is balancing predictability and uncertainty. For marketers, this means aiming for the sweet spot between two impossible extremes: due to uncertainty, every decision is a compromise between complete unpredictability, where everything is random, and absolute predictability, where everything is foreseeable.

In retail, you’re trying to predict a behavior or an event where the main factors for calculating such probabilities are:

  • Items you want to sell
  • Stores where stock will be
  • Days the events will have the best chance of being successful

Once you predict the event, you have to prescribe the action to take, and you have to make that decision based on your KPIs. For a grocery store that stocks perishables, improving one KPI often comes at the cost of another: too much stock and you have to write it off; too little and customers are displeased to find you out of stock. This is impossible for a human to predict correctly, but it’s where AI technology offers a more efficient way to align with business goals, making it easier to reduce both write-offs and out-of-stocks.
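The stock trade-off described here is the classic newsvendor problem. Under its standard result, you stock the demand quantile at cu/(cu+co), where cu is the cost of a lost sale and co is the cost of a write-off. Here is a sketch with invented demand history:

```python
def newsvendor_stock(demand_samples, cost_understock, cost_overstock):
    """Stock level at the critical fractile cu/(cu+co) of observed demand."""
    ratio = cost_understock / (cost_understock + cost_overstock)
    ordered = sorted(demand_samples)
    idx = min(int(ratio * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Made-up daily demand for one perishable item.
past_demand = [80, 95, 100, 105, 110, 120, 130, 90, 100, 115]
# A lost sale costs twice what a write-off does, so we stock above median.
units = newsvendor_stock(past_demand, cost_understock=2.0, cost_overstock=1.0)
```

When losing a sale hurts more than writing stock off, the optimal order sits above median demand; flip the costs and it drops below. That asymmetry is exactly what a human gut feel tends to miss.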

The human limits of prediction

“One great example where machines are better than the humans is making decisions under uncertainty. Important personal and professional decisions are done by us, by gut feeling, and they will not be automated. But a retailer has many decisions to make that can be automated — in fact, 99% of all decisions can be automated. Not only can we automate them, but at the same time, the accuracy of those decisions is improved quite a bit.”

Professor Michael Feindt • Founder, Blue Yonder • @M_Feindt

As Prof. Feindt sees it, the human "gut feeling" is not a reliable decision-making tool. Marketing is a complex system, and its results and decisions can't be measured with the gut. You want to influence customers by observing their purchase behavior, but with so many external factors at play, gut feeling is tantamount to numerology. Sure, there may be people out there whose intuition is accurate, but with no way to measure such a subjective and potentially mythic decision-making process, your success might just be coincidental.

As in science, you have to observe what happens, but in marketing there's too much to observe. You need to separate correlation from causation to be sure you've correctly identified the relationship, and you can only do that by basing every predictive decision on data.

Prof. Feindt points out that the decisions machines make more effectively than humans are those that are constantly repeated. How many of each product should you stock? What is the best price, depending on time of day, year, location, and a million other possible variables? Which customers should you send the catalog to?

The problem, though, as Prof. Feindt outlines it, is that when humans are the ones making those repeat decisions, one of three things will happen:

  • Most companies will do nothing (business as usual, or BAU)
  • A few companies might consider their own business rules, mostly focusing on “How did we do this before?” or “What did we do last year?”
  • A sliver of companies will actually think about the process and aim for a way to do it better.

When these decisions become habits, the only thing you can predict is getting the same exact result as you have every time before, and worse, your competitors may kill you in the market for it.

“Just think about your company and your own process,” Feindt said. “How often is the default ‘what did we do last year?’ Yes, last year. Good, huh? But that’s doing nothing. It’s even worse than doing nothing! It’s wrong because you’re predictable. It’s really dangerous to do that because it will be easy for your competitors to find out. ‘These idiots! They do the same thing as last year.’”

Even under the best of circumstances, a human marketer can't keep up with the data or feed all available data into a decision in anything close to real time. That means the marketer makes most decisions based on an average result across many customers, which is not the data marketers need to personalize their interactions effectively. To do better than a guess, you must account for uncertainty, and the only way to do that with accuracy is to use AI.

“I automated the stuff that I did with my students and research in elementary particle physics,” Feindt said. “So we found out that, more or less, we do similar things again and again over the years in making statistical analysis of such elementary particle collider events. So we automated that, and in the meantime, we had 4 times superhuman performance. We have a computer program now that is 4 times better than 400 physicists over 10 years, including myself.”

Machine + Data = Most Accurate Prediction

The closest thing we will ever have to an accurate fortune teller is a machine running AI software. AI is already widely applied in machine learning, image recognition, and deep neural networks, and one of the things it is especially good at is reinforcement learning: building intelligent algorithms that can optimize very complex sequences of actions.

AI takes uncertainty into account

One of the reasons AI is so revolutionary for the marketer is that it can accommodate the unpredictable chaos of the real world. Where the standard approach predicts a single number, with no accounting for uncertainty, AI offers the complete conditional probability density: a whole range of possible outcomes with their likelihoods. Risk management is accounted for as well.

“That the future is uncertain is bad. If we want to decide now how many to order, then we have to find out what is the cost if I order too many and what is the cost if I order too few. We have a probability distribution and a cost function, and now we can use mathematical optimization algorithms to find the best decision, telling us: Okay, today I have to order 280 apples so that we will have the best compromise of being out of stock and having to throw it away.”

Professor Michael Feindt • Founder, Blue Yonder • @M_Feindt
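Prof. Feindt's apples example is essentially the classic newsvendor optimization, and it can be sketched in a few lines. The demand distribution and per-unit costs below are invented for illustration; this is not Blue Yonder's actual model.

```javascript
// Pick the order quantity that minimizes expected cost, given a demand
// distribution and asymmetric costs for over- and under-ordering.
function bestOrderQuantity(demandProbabilities, overstockCost, understockCost) {
  const demands = Object.keys(demandProbabilities).map(Number);
  let best = null;
  for (const order of demands) {
    let expectedCost = 0;
    for (const demand of demands) {
      const p = demandProbabilities[demand];
      expectedCost += order > demand
        ? p * (order - demand) * overstockCost    // surplus: write-off cost
        : p * (demand - order) * understockCost;  // shortage: out-of-stock cost
    }
    if (best === null || expectedCost < best.expectedCost) {
      best = { order, expectedCost };
    }
  }
  return best.order;
}

// Hypothetical demand distribution for apples on a given day:
const appleDemand = { 260: 0.2, 280: 0.5, 300: 0.3 };
console.log(bestOrderQuantity(appleDemand, 0.5, 1.0)); // 280
```

With a stock-out costing twice as much per unit as a write-off, the optimization lands on ordering 280 apples, the "best compromise" the quote describes.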

Where AI shines in retail

Specifically, AI can transform the retail supply chain through optimizing automation. With 99% automation, a retailer can see significant improvements:

  • Write-offs and waste decrease
  • Operations costs go down
  • Fewer out-of-stock events
  • In the case of food retail, the produce will be fresher
  • Inventory turnover goes up
  • Efficiency improves

Blue Yonder has successfully applied AI-driven supply chain automation in a variety of industries. For example, one German grocery chain used automation to bring its out-of-stock rate down from 7.5% to between 0.5% and 1.0%. The chain had access to all the necessary data and information before; it just didn't know how to put it to work.

The meat industry uses AI to waste less meat. The auto industry uses AI for predictive shipping. Not only does AI more accurately predict order times and volumes, but the technology can also be used to calculate the right price at the right times.

Making Uncertainty a Little Less Uncertain

AI overcomes the human limitations in predictive and prescriptive analysis. Causation and correlation are often intermixed, making it very difficult to know what really made a customer decide to purchase from you. There are simply too many variables at play for humans to form a good predictive picture quickly. Given the level of computing power available today, the machines are going to win in this respect every time.

However, with AI, you can use intelligent algorithms to determine the cause and effect by digging through loads of historical data, and by scientifically proving the relationship between cause and the desired effect, you’ll make far better predictions about customer behavior.

As Prof. Feindt has said, “No AI is no alternative.” BAU is already changing. As companies think about their strategies for tomorrow, AI is the best way available to step up to the next level and improve the customer experience.

 ► Catch Professor Feindt’s full-length, 35-minute Revolution presentation, here.


Clarifying Observables: A Tutorial

Most people, including myself, meet Observables for the first time when starting to develop Angular applications. Observables are key elements of the framework: you can't do much without using them. For example, HTTP requests return their results as an Observable. Because of this, it's tempting to treat them as just another fancy variation on Promises and not use them for anything else. If you do, weird things will sometimes happen (HTTP requests being sent multiple times, for example) and you'll miss out on many elegant solutions in your code. In this tutorial I'll show how I came to understand how Observables really work, and help make your Angular development more productive and relaxing.


Looking at HTTP requests in Angular as an alternative Promise implementation is a good starting point, but also a misleading one. Their APIs are somewhat similar: both provide a success and a failure callback for listening to results and errors.

We start the operation with a function call and the returned Observable/Promise emits the result/error later in time. This is where the similarities start and end. Everything else – execution, number of results and behavior – differs.

Multiple results

While a Promise only emits the result once, Observables can emit multiple values over time.
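The code sample for this section appears to be missing from the page. As a stand-in, here is a self-contained sketch with a minimal Observable-like class; the original presumably used something like Rx.Observable.interval(1000).take(5), so values arrived one second apart, while this sketch emits them synchronously to stay runnable on its own.

```javascript
// Minimal stand-in for an RxJS-style Observable (illustrative only).
class MiniObservable {
  constructor(producer) { this.producer = producer; }
  subscribe(next, error, complete) {
    this.producer({ next, error, complete });
  }
}

// Emits 0..4, then completes.
const numbers$ = new MiniObservable(observer => {
  for (let i = 0; i < 5; i++) observer.next(i);
  observer.complete();
});

const received = [];
let completed = false;
numbers$.subscribe(
  value => received.push(value),   // called five times
  err => console.error(err),
  () => { completed = true; }      // third callback: completion
);
console.log(received, completed); // [ 0, 1, 2, 3, 4 ] true
```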

In the above example, the Observable emits the values 0, 1, 2, 3, 4 one second apart and then completes. The subscribe callback is called five times. Besides receiving multiple values, we can also detect the end of the stream: on completion, the third callback passed to subscribe gets called. After completion no more values are emitted.

Emitting values over time makes Observables very similar to streams (for example in Node.js). You might have found out that they also have similar methods like merging two separate streams or buffering (merge, buffer).

Synchronous execution

When a promise is resolved, the then callback is called asynchronously: inside the JavaScript event loop, then callbacks are executed in the next cycle. By contrast, the subscriptions of an Observable are executed synchronously when a value is passed in.
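The snippet this refers to is also missing from the page; the following self-contained sketch reproduces the behavior, with a plain object standing in for a synchronous Observable:

```javascript
let promiseValue;
Promise.resolve('from promise').then(value => { promiseValue = value; });
console.log(promiseValue); // undefined: then runs in a later cycle of the event loop

// Plain-object stand-in for an Observable that delivers synchronously.
const observable = {
  subscribe(next) { next('from observable'); }
};

let observableValue;
observable.subscribe(value => { observableValue = value; });
console.log(observableValue); // 'from observable': already assigned
```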

If you run this example, you will see that the value assigned inside the then callback is still undefined when it is printed with console.log. On the other hand, the value inside the subscribe callback won’t be undefined and it will be printed out by console.log.

This synchronous execution also goes for Subjects when calling the next method.
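The original Subject example is missing here too; a minimal Subject stand-in (illustrative, not the RxJS implementation) makes the synchronous iteration visible:

```javascript
// Minimal Subject stand-in: next() notifies every subscriber synchronously.
class MiniSubject {
  constructor() { this.subscribers = []; }
  subscribe(next) { this.subscribers.push(next); }
  next(value) { this.subscribers.forEach(fn => fn(value)); }
}

const subject = new MiniSubject();
const log = [];
subject.subscribe(value => log.push(`result: ${value}`));
subject.next(42);
log.push('after next');
console.log(log); // [ 'result: 42', 'after next' ]
```

The line after next() only runs once every subscriber has already received the value.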

Any log statement placed after the next call will only appear once every subscriber has received the value, because next iterates through all the subscriptions synchronously.

Multiple executions

Have you experienced that things get weird when subscribing to an Observable multiple times? Like being executed multiple times, for example with an HTTP request?

It is because, when the subscribe method is called, a separate execution is created for the observable. And if that execution consists of an HTTP request, the endpoint will be called again.
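The lost code sample showed two subscriptions, with the second one (B) arriving 2 seconds later. This simplified synchronous stand-in shows the core mechanism, that every subscribe call creates a fresh execution:

```javascript
// Each subscribe() re-runs the producer: a separate execution per subscriber.
let executions = 0;
const fakeHttpGet$ = {
  subscribe(next) {
    executions++;                    // imagine an HTTP request fired here
    next(`response #${executions}`);
  }
};

fakeHttpGet$.subscribe(res => console.log('A got', res)); // A got response #1
fakeHttpGet$.subscribe(res => console.log('B got', res)); // B got response #2
console.log(executions); // 2: the "request" ran twice
```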

We would expect that the second subscription (B), which arrives after 2 seconds, receives the same values as the first subscription. But in reality B gets the values from the start, just delayed with 2 seconds. The reason behind this is that every subscribe method creates a new execution, restarting the observable separately from the previous one.

Promises won't restart when you attach multiple then callbacks to the same promise; they just execute asynchronously and receive the same value. To get the same behavior with Observables, we have to apply the share operator, which makes every subscription use the same execution. In the background, the operator creates a Subject and passes the values on to it.
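A rough sketch of what share does under the hood; note that this toy version also replays earlier values to late subscribers (closer to shareReplay), which keeps the synchronous demo simple:

```javascript
// share()-like helper: run the producer once, multicast to every subscriber.
function share(producer) {
  const subscribers = [];
  const emitted = [];
  let started = false;
  return {
    subscribe(next) {
      subscribers.push(next);
      emitted.forEach(next); // replay for late subscribers (shareReplay-style)
      if (!started) {
        started = true;
        producer(value => {
          emitted.push(value);
          subscribers.forEach(fn => fn(value));
        });
      }
    }
  };
}

let executions = 0;
const shared$ = share(emit => { executions++; emit('response'); });

const gotA = [];
const gotB = [];
shared$.subscribe(v => gotA.push(v));
shared$.subscribe(v => gotB.push(v));
console.log(executions, gotA, gotB); // 1 [ 'response' ] [ 'response' ]
```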

Array methods

While Promises have only the then method to transform the resolved value, Observables have multiple methods for it. These methods are named very similarly to array methods.

Inside the then method you can either return a new value or a new promise; either way, the next then in the chain receives the resulting value. With Observables we have to separate synchronous transformation (map) from asynchronous transformation (flatMap, also known as mergeMap). Observables also have many array-like methods (filter, reduce, join, includes, etc.) and methods familiar from utility libraries (Lodash: pluck, groupBy, etc.).
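As a toy illustration of how the map-style operators compose (these are simplified stand-ins, not the real RxJS operators):

```javascript
// Toy observable factory plus map/filter operators (illustrative only).
const of = (...values) => ({
  subscribe(next) { values.forEach(next); }
});

const map = (source, fn) => ({
  subscribe(next) { source.subscribe(value => next(fn(value))); }
});

const filter = (source, predicate) => ({
  subscribe(next) { source.subscribe(value => { if (predicate(value)) next(value); }); }
});

const results = [];
filter(map(of(1, 2, 3, 4), n => n * 10), n => n > 15)
  .subscribe(n => results.push(n));
console.log(results); // [ 20, 30, 40 ]
```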


Coming from Promises, it is absolutely possible to understand Observables; you just have to know the differences:

  • Multiple values over time
  • Synchronous callbacks
  • Multiple executions
  • Array-like methods

Hopefully the above comparisons have clarified the misunderstandings and obscure parts of Observables. For further learning I would recommend reading the blog of André Staltz (core contributor of RxJS) and listening to his tutorials on Egghead.

This article originally appeared on the Emarsys Craftlab blog.


Testing HTTP Requests in Angular has Never Been Easier


When Angular finally came out, it was possible to test HTTP requests, but setting it up properly was tedious work. Multiple dependencies were needed during module setup, and the faked connections were only available through an Observable object. To make things even harder, no built-in assertions were available for the requests. The Angular team knew about these problems, so in Angular 4.3 they introduced a new module, HttpClientModule, which is intended to replace the existing HttpModule and to make usage and testing easier by providing straightforward interfaces.

In this tutorial I'll show how to write tests with the new HttpClientModule.

Getting started


First we will test a basic request: a GET request. It will call a URL without a body or additional headers. The GitHub API has an endpoint for retrieving public profile information about users. The profile information is returned in JSON format.
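The service code seems to have been lost from the page; a sketch of what it likely looked like follows. The class name GithubService is assumed, as is the exact endpoint shape.

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

@Injectable()
export class GithubService {
  constructor(private http: HttpClient) {}

  // GET https://api.github.com/users/:username, returns the parsed JSON body
  getProfile(username: string): Observable<any> {
    return this.http.get(`https://api.github.com/users/${username}`);
  }
}
```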

The getProfile method sends a GET request to the API and returns the response. Every request made with the HttpClientModule returns an Observable. The returned value is the parsed JSON response body.

Writing the first test

The first thing we have to do is set up the test dependencies. The HttpClient dependency is required; if we don't provide it, we will get this error message: No provider for HttpClient!

Angular provides the HttpClientTestingModule that resolves every dependency needed for HTTP testing. There is no more tedious setup: you don't need MockBackend and BaseRequestOptions as dependencies, and the factory method for Http isn't necessary either. What used to be multiple lines of setup collapses into just one module.

Let’s use the new setup to write the first test that checks the result of the request.
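The test itself is missing from the page; a hedged reconstruction (service name, import path, and the mocked user are assumed) might look like this:

```typescript
import { TestBed } from '@angular/core/testing';
import { HttpClientTestingModule, HttpTestingController } from '@angular/common/http/testing';
import { GithubService } from './github.service'; // assumed path

describe('GithubService', () => {
  let service: GithubService;
  let httpMock: HttpTestingController;

  beforeEach(() => {
    TestBed.configureTestingModule({
      imports: [HttpClientTestingModule],
      providers: [GithubService]
    });
    service = TestBed.get(GithubService);
    httpMock = TestBed.get(HttpTestingController);
  });

  it('returns the profile of the given user', () => {
    const mockProfile = { login: 'octocat' };

    service.getProfile('octocat').subscribe(profile => {
      expect(profile).toEqual(mockProfile);
    });

    // Exactly one request to this exact URL is expected.
    const request = httpMock.expectOne('https://api.github.com/users/octocat');
    request.flush(mockProfile); // respond; the object is serialized to JSON
    httpMock.verify();          // no outstanding requests remain
  });
});
```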

We can control the backend with the HttpTestingController. After we get its instance from the TestBed, we can set expectations against the incoming requests. In this example only one request is expected, with an exact URL. To set responses we just need to call flush, which converts the given object to JSON format by default. Finally, we check the response through the subscribe method.

Digging deeper

GET requests are good for retrieving data, but we’ll make use of other HTTP verbs to send data. One example is POST. User authentication is a perfect fit for POST requests. When modifying data stored on the server we need to restrict access to it. This is usually done with a POST request on the login page.


Auth0 provides a good solution for handling user authentication. It has a feature to authenticate users based on username and password. To demonstrate how to test POST requests, we will send a request to the Auth0 API. We won’t be using their recommended package here, because it would abstract out the actual request, but for real-world scenarios I would recommend using it.

The main difference between this example and the previous one is that here we are sending a JSON payload to the server and appending additional headers to it. We don't have to manually JSON.stringify the payload; the request methods take care of it. The response will be in text format, so no conversion will be done.

Let’s look at the test to see how we can check every detail of the request.
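The POST test is also missing from the page; here is a sketch of it, with the Auth0 endpoint, service name, and field names all assumed:

```typescript
it('logs in the user', () => {
  const credentials = { username: 'user', password: 'secret' };

  authService.login(credentials).subscribe(token => {
    expect(token).toEqual('fake-jwt-token');
  });

  // Passing a function lets us assert on every detail of the request.
  const request = httpMock.expectOne(req =>
    req.method === 'POST' &&
    req.url === 'https://example.auth0.com/oauth/token' &&  // assumed tenant URL
    req.body.username === credentials.username &&
    req.headers.get('Content-Type') === 'application/json'
  );
  request.flush('fake-jwt-token'); // plain text response, no JSON conversion
});
```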

The expectOne method can take different arguments. The first one we used, for the profile request, was a simple string; it only checked the URL being called. When we pass an object to this method, we can also check the request's method, but nothing else. To make more precise assertions, we have to pass a function to the expectOne method.

The function gets the request as an argument. You have to return a boolean value: true if every detail matches your expectation and false otherwise. This way you can check every aspect of a request.


We managed to set up tests from a basic GET request to a more complex POST request. We have seen that services with the new HTTP module can be much cleaner compared to the old one. It is always a good idea to keep an eye on the new features of Angular.

To see the tests in action check out this GitHub repository.

This article originally appeared on the Emarsys Craftlab blog. 


A Guide for Developers: Making Interfaces Easy to Use

A guide to making easy-to-use interfaces, part 1

This is a two-part series about using UI heuristics: in other words, making interfaces not merely easy to use, but also easy to learn.

In this first part, we'll discover what heuristics are and why they're important. In part two, we'll learn some quick and simple ways anyone can make easy-to-learn interfaces.

As with most content on the Emarsys Craftlab blog, this article is written with developers in mind — but hopefully anyone, from product managers through researchers to designers, can utilise the things listed in this post.

Without further ado, let’s roll.


What Are Heuristics?

Heuristics is one of those quirky terms no one really knows until they look it up — but here’s a word most people have heard: “eureka”.

Both eureka and heuristics come from the same origin — the Greek “heuriskein”, meaning “find” or “discover”.

Supposedly, Archimedes screamed "eureka!" into the aether when, after much trial and error, he finally found a way to determine the purity of gold.

Similarly, “heuristics” refer to hands-on methods, where people can learn something on their own by experimenting.

Most people, especially programmers, have had eureka moments — after all, getting code to work tends to contain at least a pinch of trial and error. Making something work on your own is an entirely different feeling than following guides or merely replicating what a teacher jots on the blackboard.

With the proper use of UI heuristics, anyone can make their software not just simple to pick up and use, but also easier to master.

In different contexts, heuristics can have slightly different meanings. In education, in AI, in mathematics — they all use this word somewhat differently.

In this article, we’re focusing on UI heuristics — which is, generally speaking, a collection of guidelines and practices that help the usability and learnability of user interfaces.

Usability and Learnability

Thankfully, far more developers mind the usability aspect of their product than did two decades ago. After all, it's kind of important whether people can actually use your software properly or not.

But there’s another ingredient in the batter, usability’s less popular (but just as important) cousin: learnability.

If usability represents the ease and comfort of using your application, then learnability is the ease and comfort of mastering your application.

Because, like most things in this crazy world, your software inherently has something that's called…

The Learning Curve

Unlike heuristics, the term learning curve probably sounds more familiar. It’s a handsome visualization of the relation between proficiency (expertise, skill) and experience (time spent, number of tries).

A boring ol’ linear learning curve.

Generally, the more time you spend doing something, the more proficient you become — acquiring knowledge in the process.

People tend to throw around sentences like “its learning curve is as steep as a wall”, while ironically, a steep learning curve is what we aim for: it means one becomes really proficient in a short amount of time.

A steep learning curve—the graph equivalent of a “get rich fast” scheme.

A flat learning curve is the opposite: taking a long time to gather a measly amount of knowledge.

Flatter than a punctured tire.

But in most real-life scenarios, the average learning curve looks something like this:

A very typical S-shaped learning curve.

If you take a look at bits of this graph, you can see the following:

1. Slow start: as they say, there’s a first time for everything — and more likely than not, you’ll find it difficult to do something decent the first time you try; be it speaking a new language, drawing a face, playing the guitar, or coding an app. You’re still just tickling the fundamentals, trying to grasp the logic behind it all.

2. Picks up pace: you've nailed the basics and have an overall understanding of the hows and whys. Once this "click" happens, you shift into the fastest gear of learning, especially if you both experiment hands-on and proactively look up learning materials.

3. Plateauing: once you master all the ins and outs of the craft, it gets more difficult to become even more proficient: you've hit the so-called skill ceiling. Sure, you can still experiment and learn new things, but it'll be nowhere near as easy and quick as building up your knowledge was until this point.

With most things you learn, you'll go through these three phases.

But here’s the twist: it’s not only dealing with the craft itself that builds your knowledge.

The User vs The World

What I’m going to tell you is, of course, super obvious — but bear with me for a few minutes, it will all come together in the end.

Let’s say here’s you.

Wow, it’s you!

And you exist in the world.


In this world, there's this Thing, and you, for one reason or another, have to interact with it. You have some goal in mind, and that Thing, you reckon, would certainly help achieve said goal.

So you start interacting with the Thing, and as you do, you learn about it. Knowledge starts building up inside your brain, and the more knowledge you have, the more differently you interact with the Thing.

The learning loop between you and the Thing.

Here’s the catch: there’s not only that one particular Thing that builds your knowledge — it’s all the things in the world. Some of that knowledge might also affect the way you interact with the Thing.

Surprise: many other external experiences form your relationship with the Thing.

Let’s put this another way: the Thing is an App. You are the user. In the world, you use other apps, web pages, and interfaces that influence your understanding, habits, and expectations — all of them affecting your relationship with that one App.

(Okay, maybe this kind of looks like a well combed boy.)

Understanding how to reap the benefits of this existing knowledge is basically the essence of UI heuristics.

The Learning Curve vs The World

Now that we know how external factors influence one’s knowledge, let’s get back to our handy learning curve.

When designing the user interface of our app, we have two goals:

Take a mental photo of this — it’s important.

1. Starting our user as high as possible: by knowing their existing knowledge, we can build an interface that they already know — cutting off the dreaded flat bit of the learning curve, enabling them to be productive right from the start. This is mostly usability.

2. Helping the user reach the top as fast as possible: implementing UI heuristic practices, we can help the user reach the skill ceiling as fast as humanly possible — making them even more productive, and thus providing even more value. This is learnability.

Thankfully, we have tools to make these two things happen, and we'll learn about them in the second part of this article.


If you only take home a single paragraph from this article, I want that to be this:

1. Utilize the knowledge the user already has.

2. Extend the user’s knowledge.

This is a really simple but powerful mindset to have when designing an interface. You can lift a lot of weight off users' shoulders if you do your research and create something they can just grab and use right away; you can also empower users to master your app and become as efficient and effective as one can possibly be.

Coming Soon in Part 2

We’ll take a look at the heuristic practices that make your app not just easier to pick up and use, but also a breeze to learn. Stay tuned!

 This article originally appeared on the Emarsys Craftlab blog. 


Weekend Bias in Send Time Optimization

As data scientists, we are responsible for the whole process of implementing machine learning algorithms for the benefit of our customers: from choosing problems and algorithms to follow-up measurement and refinement. After an initial implementation of a new algorithm, we have to watch closely how it performs in real life and quickly iterate together with Product and Software Engineering to eliminate performance bottlenecks and respond to important effects we did not prepare for.

In this post, I will tell a story about how a supposedly minor difference between A/B testing groups can shift a whole experiment.

It started with a routine follow-up of a pilot using the new algorithm, but as more and more days passed, it became apparent that what we saw was not random noise but an important effect we did not yet understand. We brainstormed about possible causes and what we should experience if one of our assumptions was an oversimplification. Then we carefully checked each of these ideas until we understood what was behind the scenes.

The Machine Learning Part

The objective came from our customers: send emails to each of their contacts with the best possible timing, when the contacts are most receptive. We thought about and tested many different approaches and chose a slightly modified Bayesian multi-armed bandit algorithm. We decide on a sending time based on the previous success of different send times. We assume each contact has an open rate in every two-hour slot of the day, and opens each email sent in that slot with this probability. Depending on whether they opened a particular email in the slot, we update the parameters of that slot's beta distribution in a Bayesian manner. As time goes on we have more information, thus smaller variance in every time slot, and with higher and higher probability we send each letter in the time slot with the highest open rate.

Deciding on one send in detail: we sampled once from each of the 12 beta distributions and chose the slot with the highest sample value. Before each send, we took the one-year history of the contact and computed priors based on previous send time optimized campaigns globally, so we have as much information as possible for new contacts as well. Note that we learn not only from opens, but also from sends without opens.
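A compact sketch of the described mechanism: one Beta distribution per two-hour slot, Thompson sampling to pick the slot, and Bayesian count updates. The real system also computes priors from a year of campaign history; here every slot simply starts at Beta(1, 1), and the Beta sampler uses the order-statistic trick for integer parameters.

```javascript
// Beta(a, b) sample for integer a, b: the a-th smallest of a+b-1 uniforms.
function sampleBeta(a, b) {
  const uniforms = Array.from({ length: a + b - 1 }, Math.random).sort((x, y) => x - y);
  return uniforms[a - 1];
}

// 12 two-hour slots, each starting with a flat Beta(1, 1) prior.
const slots = Array.from({ length: 12 }, () => ({ opens: 1, ignores: 1 }));

// Thompson sampling: draw once from every slot, pick the highest draw.
function pickSendSlot() {
  let bestSlot = 0;
  let bestSample = -Infinity;
  slots.forEach((slot, i) => {
    const sample = sampleBeta(slot.opens, slot.ignores);
    if (sample > bestSample) { bestSample = sample; bestSlot = i; }
  });
  return bestSlot; // 0 = midnight to 2am, 1 = 2am to 4am, ...
}

// Bayesian update: sends without opens count too.
function recordOutcome(slot, opened) {
  if (opened) slots[slot].opens += 1;
  else slots[slot].ignores += 1;
}

const slot = pickSendSlot();
recordOutcome(slot, true);
console.log(slot >= 0 && slot < 12); // true
```

As a slot accumulates opens, its Beta distribution concentrates near a high open rate, so its draws win more often and the contact's emails converge on that slot.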

Tendencies We Expected

So the algorithm was optimizing the sending on an hourly basis regardless of the day of the week. However, people have different daily routines on weekends and on weekdays. We have five weekdays competing with two weekend days. Thus we assumed that the algorithm would learn weekday preferences because weekend performance would be worse. We also knew that once the algorithm learned something, it was slow to adapt if preferences changed. It would also easily get stuck in a sub-optimal state if the first few reactions were not typical: either by chance or if a contact gets exceptionally engaging campaigns in sub-optimal or even bad time slots.

In order to see the added value despite the natural fluctuation in campaign open rate we decided to measure results with A/B testing: send emails to the first group with send time optimization and send the other group emails at a fixed time. Then we measured the relative performance of the two groups getting exactly the same email at different times.

From previous experience we knew that different campaigns have a huge variance in their open rate. This is why we chose a series of similar daily campaigns for testing the algorithm. From simulations we could predict that the uplift we could hope for was much less than the effect of content or seasonality for example.

… and those that surprised us

Send time optimized campaigns performed better on weekends but significantly worse on weekdays. The variance was high, but the pattern was still clear.


When we looked at the data after one week, on a Thursday, the overall trend was downward: in all the data up to that point, the effect of weekdays outweighed the effect of weekends. However, looking at all the data two weeks later, on a Sunday, the numbers indicated the underlying upward trend.


The underlying effect

It was not obvious to figure out, since this phenomenon only became significant after a month: this customer's globally observed open rate differs markedly across days of the week. From Monday to Friday the open rate decreases, then it picks up again until Monday. The variance is high, of course, but by examining the median of campaign open rates on different days the pattern became apparent. There were two more important effects:

  1. Time optimized sending started in the afternoon and ended next day early afternoon.
  2. Morning is the best time for most of the contacts.

Combining these two effects, most of the contacts in the STO group got their email almost a day later than those in the control group. For example, on Tuesdays the open rate of the STO group was attributed to the Tuesday campaign, since sending started on Tuesday, but it actually reflected the open rate on Wednesday. The open rate of the control group, on the other hand, was indeed Tuesday's open rate, as their emails were sent out immediately.


The solution

First and foremost, we wanted to measure our performance from this data as precisely as possible. This was an important pillar of gaining our client's trust, so that they would keep using the send time optimization feature with a modified setup. We came up with two approaches.

  1. Compare same days of the week over time. With this approach we have much smaller amount of data but we completely eliminated the problem.
  2. Aggregate to weeks first and then compare weeks over time. With this approach we either take only weeks with sends on all 7 days of the week (not a realistic setup) or retain some part of the problem. We chose to include weeks with sends on at least 5 days and could see some improvement over time, although after aggregation few data points remained.

In the above chart we plotted data from 8 weeks before starting to use send time optimization to 8 weeks with send time optimization.

Next, we wanted to ensure that later comparisons are as easy as possible. The most straightforward and easy fix is described below.

Send the emails to the control group in the middle of the time range over which sending to the send time optimized group is spread. Rephrasing this, since marketers usually have a fixed time for sending: set the launch date of the send time optimised group around 12 hours earlier than the launch of the control group campaign. A good compromise is to set the launch date of the send time optimised campaign to midnight, as most campaign launch times fall between 6AM and 6PM. This way it is easier to identify pairs of campaigns later.

Closing thoughts

Own the assumptions of your algorithm and measurement process: real life will always differ from the lab, and you want to measure your results even when you know your model oversimplifies reality.

Think about alternatives to A/B testing, not only for making decisions but for measuring your existing algorithms as well.

You might have problems either with your machine learning algorithm or with your measurement process: watch out for both!

This post originally appeared on the Emarsys Craftlab blog. 


How to Do Proper Tree-Shaking in Webpack 2

Tree-shaking means that the JavaScript bundle will only include code that is necessary to run your application. The term tree-shaking was first introduced by Rich Harris' module bundler, Rollup. It is made possible by the static nature of ES2015 modules (exports and imports can't be modified at runtime), which lets us detect unused code at bundle time. This feature became available to Webpack users with the second version: Webpack now has built-in support for ES2015 modules and tree-shaking.

In this tutorial I’ll show you how tree-shaking works in Webpack and how to overcome the obstacles that come our way.

If you just want to skip to the working examples visit my Babel or Typescript repository.

How Tree-Shaking Works in Webpack 2

The way tree-shaking works in Webpack is best shown through a minimalistic example. I’ll compare it to a car that has a specific engine.

Example Application

The application consists of two files. The first one holds the different engines as classes and a function returning their version. Every class and function is exported from its file.


The next file describes the car with its engine and serves as the entry point for our application. We will start the bundling from this file.

After defining the car class, we only use the V8Engine class, the other exports remain untouched. When running the application it will output ‘V8 Sports Car’.
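Based on the description above, the two files would look roughly like this (the class bodies are illustrative guesses; only the exported names come from the text):

```javascript
// engine.js — every class and function is exported
export class V8Engine {
  toString() { return 'V8'; }
}

export class V6Engine {
  toString() { return 'V6'; }
}

export function getVersion() {
  return '1.0';
}

// car.js — the entry point; only V8Engine is imported,
// the other exports of engine.js stay untouched:
// import { V8Engine } from './engine';

class SportsCar {
  constructor(engine) { this.engine = engine; }
  toString() { return `${this.engine.toString()} Sports Car`; }
}

console.log(new SportsCar(new V8Engine()).toString()); // V8 Sports Car
```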

With tree-shaking in place we expect the output bundle to only include the classes and functions we use: in our case the V8Engine and SportsCar classes only. Let’s see how it works under the hood.


When we bundle the application without transformations (like Babel) and minification (like UglifyJS), we will get the following output:

Webpack marks classes and functions that are not used with comments (/* unused harmony export V6Engine */) and only exports those which are used (/* harmony export (immutable) */ __webpack_exports__[“a”] = V8Engine;). The very first question you may ask is: why is the unused code still there? Tree-shaking isn’t working, is it?

Dead Code Elimination vs Live Code Inclusion

The reason behind this is that Webpack only marks code as unused and doesn’t export it from the module. It pulls in all of the available code and leaves dead code elimination to minification libraries like UglifyJS. UglifyJS gets the bundled code and removes unused functions and variables before minifying. With this mechanism it should remove the getVersion function and the V6Engine class.

Rollup, on the other hand, only includes the code that is necessary to run the application. When bundling is done, there are no unused classes and functions. Minification only deals with the actually used code.

Setting It Up

UglifyJS doesn’t support the new language features of Javascript (aka ES2015 and above) yet. We need Babel to transpile our code to ES5 and then use UglifyJS to clean up the unused code.

The most important thing is to leave ES2015 modules untouched by Babel presets. Webpack understands harmony modules and can only find out what to tree-shake if modules are left in their original format. If we transpiled them to CommonJS syntax as well, Webpack wouldn’t be able to determine what is used and what is not. In the end, Webpack itself will translate them to CommonJS syntax.

We have to tell the preset (in our case babel-preset-env) to skip the module transpilation.

The corresponding Webpack config part.
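Combining the two, a sketch of such a setup (entry path and file layout are assumptions), with babel-preset-env told to leave modules alone via modules: false:

```javascript
// webpack.config.js — babel-preset-env keeps ES2015 modules intact
// ({ modules: false }) so Webpack can mark unused exports, then
// UglifyJS removes the dead code during minification.
const webpack = require('webpack');

module.exports = {
  entry: './src/car.js',
  output: { filename: 'bundle.js' },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        options: {
          presets: [['env', { modules: false }]],
        },
      },
    ],
  },
  plugins: [new webpack.optimize.UglifyJsPlugin()],
};
```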

Let’s look at the output we got after tree-shaking: link to minified code.

We see the getVersion function removed as expected, but the V6Engine class remains in the minified code. What can be the problem? What went wrong?

Troubles Ahead

First, Babel detects the ES2015 class and transpiles it down to its ES5 equivalent. Then Webpack puts the modules together, and in the end UglifyJS removes unused code. We can read the exact problem from the output of UglifyJS.

WARNING in from UglifyJs Dropping unused function getVersion [,9] Side effects in initialization of unused variable V6Engine [,4]

It tells us that the ES5 equivalent of the V6Engine class has side effects at initialization.

When we define classes in ES5, class methods have to be assigned to the prototype property; there is no way to avoid at least one such assignment. UglifyJS can’t tell whether it is just a class declaration or some random code with side effects, because it can’t do control flow analysis.

Transpiled code breaks the tree-shaking of classes. It only works for functions out of the box.

There are multiple ongoing bug reports related to this on Github, in the Webpack repository and in the UglifyJS repository. One solution would be to complete the ES2015 support in UglifyJS; hopefully it will be released with the next major version. Another solution would be an annotation for downleveled classes that marks them as pure (side effect free) for UglifyJS. This way UglifyJS can be sure that the declaration has no side effects. Support for the annotation is already implemented, but to make it work, transpilers have to emit the @__PURE__ annotation next to the downleveled class. There are ongoing issues implementing this behavior in Babel and Typescript.

Babili to the Rescue

The developers behind Babel thought: why not make a minifier based on Babel that understands ES2015 and above? They created Babili, which understands every new language feature that Babel can parse. Babili can transpile ES2015 code into ES5 and minify it, including the removal of unused classes and functions. It is as if UglifyJS had already implemented ES2015 support, with the addition that it will automatically catch up with new language features.

Babili removes unused code before transpilation. It is much easier to spot unused classes before they are downleveled to ES5, so tree-shaking also works for class declarations, not just functions.

We only have to replace the UglifyJS plugin with the Babili plugin and remove the loader for Babel. The other way around is to use Babili as a Babel preset and keep only the loader. I would recommend using the plugin, because it also works when we are using a transpiler other than Babel (for example Typescript).
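A sketch of the configuration with babili-webpack-plugin in place of UglifyJS (entry path assumed):

```javascript
// webpack.config.js — no babel-loader here: the bundle stays ES2015+
// all the way to Babili, which can remove unused classes before any
// downleveling happens.
const BabiliPlugin = require('babili-webpack-plugin');

module.exports = {
  entry: './src/car.js',
  output: { filename: 'bundle.js' },
  plugins: [new BabiliPlugin()],
};
```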

We always have to pass ES2015+ code down to the plugin, otherwise it won’t be able to remove classes.

ES2015+ is also important when using other transpilers like Typescript. Typescript has to output ES2015+ code and harmony modules to enable tree-shaking. The output of Typescript will be handed over to Babili to remove the unused code.
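For Typescript this means compiler options roughly like the following sketch of a tsconfig.json:

```json
{
  "compilerOptions": {
    "target": "es2015",
    "module": "es2015",
    "moduleResolution": "node"
  }
}
```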

The output now won’t contain the V6Engine class: link to minified code.


The same rules apply to libraries as to our own code: they should use the ES2015 module format. Luckily, more and more library authors release their packages in both the CommonJS format and the new module format. The entry point for the new module format is marked with the module field in package.json.
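Such a dual publish looks roughly like this in package.json (file names are illustrative):

```json
{
  "main": "dist/index.js",
  "module": "dist/index.es.js"
}
```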

With the new module format unused functions will be removed, but for classes it is not enough. The library classes also have to be in ES2015 format to be removable by Babili. It is very rare that libraries are published in this format, but for some it is available (for example lodash as lodash-es).

One last culprit can be when separate files of a library modify other modules by extending them; importing such files has side effects. The operators of RxJs are a good example: importing an operator modifies one of the classes. These are considered side effects, and they stop the code from being tree-shaken.

The Inner Workings of Webpack’s Tree-Shaking

With tree-shaking you can bring down the size of your application considerably. Webpack 2 has built-in support for it, but it works differently from Rollup. It includes everything but marks unused functions and classes, leaving the actual code removal to minifiers. This is what makes it a bit more difficult for us to tree-shake everything. Going with the default minifier, UglifyJS, only unused functions and variables will be removed. To also remove classes, we have to use Babili, which understands ES2015 classes. We also have to pay special attention to modules, whether they are published in a way that supports tree-shaking.

I hope this article clarifies the inner workings behind Webpack’s tree-shaking and gives you ideas to overcome the obstacles.

You can see the working examples in my Babel and Typescript repository.

This post originally appeared on the Emarsys Craftlab Blog.

How Your Application Can Benefit from AOT

Multiple solutions for Angular Ahead of Time (AOT) Compilation

When we started developing new applications at Emarsys in the early stages of Angular (2 beta), the first thing we noticed was the growing size and slowing speed of the application. The size of the unminified source code quickly grew above 3 MB, and the application took multiple seconds just to become responsive.


Just-in-Time (JIT) compilation

The main reason for this was that we were using JIT compilation. It creates a performance penalty by parsing the component templates every time the user opens the web page. The parsing also requires the compiler to be bundled into the application; the compiler is the part that transforms HTML templates into runnable code, and it can take up half of the bundled code size, which is a huge portion.


We generate the source code at build time and JIT compilation starts to parse the templates at run time. Only after this can the application start with the generated code.

Ahead-of-Time (AOT) compilation

We can avoid this performance penalty if we move the compilation out of run time (the browser) into the build step. AOT compilation statically analyzes and compiles our templates at build time.


This way compilation happens only once, at build time, and we no longer need to ship the Angular compiler and the HTML templates in the bundle. The generated source code can start running as soon as it is downloaded into the browser; no previous steps are needed.

The AOT compilation turns this HTML template

into this runnable code fragment
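As a heavily simplified illustration (not actual Angular compiler output), a template like the one in the comment below compiles into imperative code that builds the DOM directly:

```javascript
// template: <h1>{{ title }}</h1>
// becomes, roughly, a view function like this:
function viewAppComponent(component) {
  const h1 = document.createElement('h1');
  h1.textContent = component.title;
  return h1;
}
```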

Benefits of AOT compilation

  • Smaller application size (Angular compiler excluded)
  • Faster component rendering (already compiled templates)
  • Template parse errors detected earlier (at build time)
  • More secure (no need to evaluate templates dynamically)

For AOT compilation we need tools that accomplish it automatically in our build process. Currently three solutions exist. All of them work, but they serve different purposes and have different advantages and disadvantages.

Solution 1: ngc command line tool

The ngc command line tool comes with the package @angular/compiler-cli. It is a wrapper around the Typescript compiler (tsc). You can specify the files to be compiled within tsconfig.json with the files and exclude fields. Compiler specific options can be placed inside the angularCompilerOptions property.
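A sketch of what such a tsconfig.json might contain (file paths are assumptions):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  },
  "files": ["src/main.ts"],
  "angularCompilerOptions": {
    "genDir": "aot"
  }
}
```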

When running ngc, it searches for the entry module (it gives the context to the compilation) and the corresponding components, directives and pipes. For every one of them it compiles and outputs an .ngfactory.ts suffixed Typescript file where the compiled templates reside. The compiled files are generated next to the original file by default. The destination can be modified by genDir inside angularCompilerOptions.

Besides the compiled factory files, it also generates the files transpiled from Typescript to Javascript. These represent the uncompiled Typescript files, with a description of the original classes next to them in a .metadata.json suffixed file. These files are not needed to run the application in AOT mode; they only come in handy when building an Angular library that supports AOT compilation.

To use the newly generated files, we will have to change the bootstrap of the application.

We need to change the module file to the .ngfactory.ts suffixed one and import the bootstrap from @angular/platform-browser. This package doesn’t include the compiler, which is a huge gain in file size.
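A sketch of the changed bootstrap file (module and path names assumed; the NgFactory name follows the ngc convention for an AppModule):

```javascript
// main.ts — bootstrapping from the generated factory instead of
// compiling at run time; platformBrowser ships without the compiler.
import { platformBrowser } from '@angular/platform-browser';
import { AppModuleNgFactory } from './app/app.module.ngfactory';

platformBrowser().bootstrapModuleFactory(AppModuleNgFactory);
```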

After generating the AOT compiled Typescript files, another step is needed in which we bundle the application.


Advantages

  • It can always be used with the newest version of Angular just after it has been released
  • After compilation any kind of bundling tool can be used
  • Outputs metadata files for library development


Disadvantages

  • Only supports HTML in templates and CSS in styles
  • No watch mode yet
  • Need to maintain AOT version of the bootstrap file

Example repositories

Solution 2: @ngtools/webpack plugin

The next package, @ngtools/webpack, is a plugin for Webpack 2 published as part of the Angular CLI repository. It provides a loader and a plugin to set up the configuration.

The plugin needs the location of the Typescript configuration and the entry module of the application. The entryModule property consists of the file path and the exported module class divided by a hashmark. With these it can run the AOT compiler and generate the factory files.
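A sketch of the corresponding Webpack configuration (paths assumed):

```javascript
// webpack.config.js — the loader handles .ts files and the plugin
// runs the AOT compiler inside Webpack's memory filesystem.
const { AotPlugin } = require('@ngtools/webpack');

module.exports = {
  entry: './src/main.ts',
  output: { filename: 'bundle.js' },
  module: {
    rules: [{ test: /\.ts$/, loader: '@ngtools/webpack' }],
  },
  plugins: [
    new AotPlugin({
      tsConfigPath: './tsconfig.json',
      entryModule: './src/app/app.module#AppModule',
    }),
  ],
};
```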

One big difference here is that it won’t write the factory, metadata and transpiled JIT Javascript files to the filesystem. It bundles the application based on the factory files, which only exist inside Webpack’s memory filesystem. It also searches for the entry point and transforms the bootstrap file automatically to become suitable for AOT compiled files.

The loader inside the rules property makes it possible to use any kind of file type in the component decorator’s templateUrl and styleUrls properties, for example SCSS or LESS for stylesheets and PUG for templates. It replaces the relative URLs in templateUrl and styleUrls with require statements. Lazy loaded modules on routes are also transpiled to import statements, and the sub-modules are AOT compiled along with the main application.

This loader basically does the job of awesome-typescript-loader + angular-router-loader + angular2-template-loader and adds AOT compilation to the chain.


Advantages

  • Custom file types available for templates and styles through Webpack loaders (SCSS, PUG,…)
  • No separate process for compilation
  • Watch mode for AOT compiled files
  • No need to maintain AOT version of bootstrap file
  • No output to disk for separate *.ngfactory.ts files


Disadvantages

  • Can only be used with Webpack 2
  • Need to wait after an Angular release for the Angular CLI repository to catch up before new versions can be used
  • Only compatible with the currently supported Angular version
  • Not good for AOT compatible package publishing, because it doesn’t output separate compiled files

Example repositories

Solution 3: @ultimate/aot-loader plugin

A brand new Webpack 2 plugin (currently in beta) backed by the team behind Ultimate Angular. It provides a loader and a plugin to set up the configuration.

@ultimate/aot-loader is very similar in configuration and in abilities to the Angular CLI plugin. The factory and metadata files are only generated to the memory file system. The entry point is also transformed to load the factory files instead of the regular ones. Lazy loaded modules are transpiled and split into different chunks apart from the main bundle.

Loaders inside rules enable us to write templates and styles of components in the desired extension (SCSS, PUG, etc.).


Advantages

  • Same advantages as for the Angular CLI package
  • Compatible with Angular 4


Disadvantages

  • Can only be used with Webpack 2
  • Not good for AOT compatible package publishing, because it doesn’t output separate compiled files

Example repositories


No matter which solution you choose, your application can greatly benefit from AOT compilation in both size and speed. It halves the size of a small to medium sized application and multiplies its start-up speed. If you are only using HTML for templates and CSS for styles, using a build system other than Webpack, or developing a package in Angular 2, ngc can be a good fit. Otherwise I would stick with the @ngtools/webpack or the @ultimate/aot-loader plugin and enjoy its benefits over the command line solution.

If you want to dive deeper into AOT compilation, you can also read the official documentation.

Special thanks to Wassim Chegham for the images about AOT and JIT compilation.

This post originally appeared on the Emarsys Craftlab blog. 


Designing for Mobile Engagement: Enhancing Apps with Messaging

If you design a mobile app, whip out your phone and look at your home screen. Really, just do it now! Where do you expect your app to interact with your users?

Choose your home screen

You were probably looking at your app’s icon. You are building the app after all; you expect, or at least want, your users to start your app and use all the awesome features you designed. Maybe you even looked at the notification area, thinking about those push messages marketers were sending: one about a recent new feature and one every time the user has a new follower on her profile. But there are a few other channels you could think of: email (for sending forgotten password emails), SMS (when the security team succeeds in convincing the PM to implement two-factor authentication) or even a chat interface with bots (maybe you could experiment with those conversational interfaces in your next design sprint).

In any case, all of these channels should work together and provide a unified, great experience if you want more engagement, more returning users, and a successful app and business.

Why Engagement Matters

Getting people to download your app is generally not that hard. There are plenty of proven marketing strategies for that. If your app answers a human need, is built upon a validated hypothesis, and is executed well, you just have to create a nice app page, throw some money at SEO, and there you have a few hundred thousand downloads. The issue is how to get people to actually use your app and discover all those nice features you have designed.

Most downloaded apps don’t even get a chance, as 25% of installed apps will never get started. You have very few ways of reaching these people, and this just shows what vanity numbers like downloads tell us about success: almost nothing.

Even if your app does get started, 23% of apps are only used once. This can mean several things:

  • The app’s capabilities and the user’s expectations raised by marketing do not match.
  • The user is forced to use the app to complete a one time action. Maybe the users don’t need an app at all, but a better mobile web experience.
  • The app is not habit forming. There are very few apps (and even fewer 3rd party apps not coming from major players like Facebook) which can achieve regular use. Regular use is hard: 62% of apps are used less than 11 times. And you do want people coming back: retention rate is one of the basic pirate metrics that shows how a business is doing, and you usually want the highest possible number of returning users.

All three of these are design issues: you have to collaborate more with marketing (align your messaging already!) and with business (at least know your business model), and design your app in a way that actually enables people to discover the app’s features and keep on using them. But designing awesome screens alone is not enough. Enter messaging.

Neko Atsume — Cats guarantee retention

As a side note, there are a few apps which don’t actually need any messaging to see a record number of returning users, for example Pokemon Go, even with falling numbers they are still doing great, or Neko Atsume. So unless your app is built on a huge IP or it contains cats, read on.

Effective Messaging

Messaging is like any other UI element in your app: it should support the experience of the app. Also like any other UI element, it should be a consistent part of the flow. There are just too many apps where it’s plainly visible that messaging was not part of the original design, just added later.

The trickiest thing about messaging is using people’s attention effectively. Human attention is a finite resource, and I believe it’s the duty of designers to use this resource the right way (as with any human resource, like memory or focus). This means you should work with this resource with consideration, give meaning to every interaction, and make sure not to overdo it.

To achieve consideration, every message should follow a few broad guidelines:

  1. Relevant: it should be something that the users actually care about and that adds to their overall experience. Many apps try to force their way into your life without adding anything interesting (trending messages from Twitter, ring a bell?), like begging for attention. Relevancy comes from content: it should both have some actual substance and be designed in a way that helps digest it.
  2. Personal: the message should be tailored to the user getting it. Sometimes this just means actually using the user’s name; other times it’s a piece of information created just for the user or something based on the user’s behavior. The hard part about getting personal is that the users also have to perceive the content as personal. Many algorithmic personalizations, while working well, are not designed with empathy or even with common sense. Remember that one time when you looked at a bag on a website? Now you get bag ads forever.
  3. Contextual: the message comes at the right time, maybe at the right place, or even on the right channel. Getting notified about a trending festival while you are working is annoying. Getting notified of storms a few hundred kilometers away is useless. Getting a sale newsletter at the end of the sale is too late.
  4. Actionable: the user should be able to do something with the message she gets. And the action inside the message should be effective, so the user can actually do what the action called her to do.
  5. Opt-out: this should be available in some form. Most messaging channels provide a way of opting out that you have no control over, like users sending your email to spam. It’s better to be proactive and give the choice to your users, and maybe retain some form of messaging options for later.
Twitter’s “Popular in your network” email message. Here the issue with algorithmic recommendation is that both relevancy and personalization seem to be hit or miss. It is not really actionable: you may interact with the tweets, but there is no indication of why you should do that. Context only appears in the content itself. At least there is a clear opt-out button at the end.

To get started with messaging you should choose the right channels for your app.


Email

Stats seem to vary a lot and usually depend on your industry, but here are some examples:

  • 70% of emails opened on mobile
  • 50% open rate on mobile
  • 3.3% click rate (2.7% for non responsive)

Even before mobile, services used email to keep in touch with their users. Usually the email address is the first piece of contact information that you learn about your users, and since it’s so ubiquitous, it’s also easy to ask for permission to stay in touch. This also means that most people get too many emails, so important messages may go without getting read.


Thanks to the few design restrictions, there are plenty of ways to make emails personal and actionable. But it’s hard to make emails really contextual: even the shortest email may not be read at the right time or at the right place. So it’s better to only send emails that don’t really depend on when and where the user will open them.

Emails are great for sending out product news, weekly updates, summaries of activities, and information intended to be kept for later reference by the user.

Tips for designing email messages:

  • The subject line matters, even more than on desktop. Mobile email clients tend to display fewer characters (25–30), so the first few words should be very focused and make it clear what the email is about.
  • You have to use responsive design; there is no way around this, especially if the email is part of the app workflow. Here is a great article by Joe Munroe to get you started.
  • Limit the length. Since there is no limit on how long an email can be, it’s too easy to keep adding words, sentences, and calls to action. Keep in mind that the longer the email, the less likely it gets read in time. Remember when you last thought: “I don’t have time right now to read this, I’ll just keep it in my mailbox and read it later”? Transactional emails especially should be understood at once.
  • Make sure to test your email on most mobile clients before sending.
  • Mobile email doesn’t work well with big images, so if you intend to keep that one huge hero image, make sure your email is also understandable without it.
  • For clickable areas use the same size as you would use for any button on the app UI. You don’t want the user to fail tapping that tiny link.
Fitbit’s weekly progress report (I should walk more). This report looks way better on desktop; the fluid text makes everything look too tight. There is no action, although it would be great to view this report in the app as well. Also, an image is missing.


SMS

SMS design is also important for engagement. The statistics surrounding it are astounding:

  • Open rate: 98%
  • Click through rate: 36%

Texting as part of a service has existed for quite some time (remember when we used to get weather forecasts via SMS?), and emergency services, for example, still use it to reach people when necessary. The biggest advantage of texting is that you don’t even need an app to send timely messages to your users. On the other hand, the limitations of the format mean you need really well-written copy.

Besides using SMS for messaging, services built solely on texting have recently started to pop up. Digit is a savings service that does most of its communication via texting (although they have apps too). Cloe is a recommendation service for local businesses. DirtyLemon is a company serving lemonade where all customer interactions run over texting. These are examples showing how apps can be designed based only on conversation.

SMS is great for sending highly critical and time sensitive information, and when internet may not be available, for example travel information.

Tips for designing SMS messages:

  • Even though you can send multi-part SMS with images, each message you send costs you (and in some cases even your customer) money, so consider the length, especially if you use personalization (first names can be any length) or a link as a call to action.
  • Just like with other messages, the first few words count the most, as people will see them first in their list.
  • Unlike other forms of messaging, phone numbers can change owners. Make sure you’re sending critical information to the right person.


Push notifications

Push notifications can help you cut through the noise, and the numbers around them suggest they work.

  • Opt-in rate: 50.2%
  • Open rate: 10.2%

With the average smartphone user having 50+ apps, even regular use of an app won’t necessarily start with a tap on its icon. Many uses, like your friend answering your earlier message or your food delivery arriving, will happen via push notifications. They can be highly contextual, targeting an exact time, a place (or, for that matter, any sensor a device has, like the owner going too fast), or the opposite, like not sending updates that can wait while the owner of the device is asleep.

With push notifications there are huge differences between the platforms regarding permissions and interactions: iOS and Android do things differently, and they convert differently. Know the guidelines for both platforms. Before diving into the details, check the platform documentation:

Push notifications are great for sending transactional messages that require the user’s action to continue, and highly contextual information that may depend on time, a certain place, or any other device sensor.

Push notifications can also act as conversations. Lifeline is a game that you can play without starting the app, showing how engagement doesn’t need app usage.

Tips for designing push messages:

  • Use the platform’s capabilities, like embedded images on Android and the available actions. With actions, the user often doesn’t even need to open the app, for example to quickly reply to a message.
  • You can set up push messages to vibrate, but only do it for really important information.
  • Think about how your message will look among other messages. Will it be important enough that the user decides to open it, or will it just be cleared without getting read?

This is an excellent article on push notifications by Noah Weiss from Slack.

Chat bots

As chat services turn into platforms, more and more apps appear as chat bots. I think of these as the next level of texting services (and also as beefed-up IRC bots). There are already examples in Asia of how this works.

Weather chat bot on Facebook Messenger

Conversational interfaces are evolving towards a new interaction paradigm, but right now there doesn’t seem to be a huge difference compared to how other types of messages need to be designed.

Tips for Designing Engaging Messages

Knowing your channels and having a plan on how to use them should already help. Here are a few tips to get you started designing better messages.

  1. Do your research. To know what messages would be important for the users and how the messages fit into their daily routine, you should go out and talk to your customers. While this is also important for the whole design process, interviews and field studies can uncover new scenarios where messaging will be important.
  2. Design your story. Just like with app design, storytelling helps summarize research facts, identify contextual information, and carry these throughout the design process. A journey mapping or story mapping exercise helps visualize the story and uncover further message touch points.
  3. Have a copywriting guide. Just like having design principles for designing a UI, you should also have guidelines for writing words, to keep a consistent tone of voice. A great way of doing this is to have a product stance, a personality for your app. For smaller apps this doesn’t need to be a full persona; a few statements about what your product would do should suffice.
  4. Deep linking. A message with an action should be like an open door to your app that invites the user inside. So make sure that if you invite the user to the living room, she doesn’t end up in the kitchen. If you send a push to the user about a feature, tapping the message should bring the user straight to the exact point in the app where the feature is.
  5. Design with data. To achieve real personalization, designers should learn to use the data available. Never use placeholder text and dummy data if you have real data available, as it may look different from what you expect. Even if the users don’t input any data about themselves, there are still many things you can use, like app usage patterns, date and time, or other sensors. These may all influence what messages you can send. And also be prepared for missing data, dear <first_name>.
  6. Use automation. When creating messages, automation can help implement complex strategies for when and what messages should be sent. Besides the messages themselves, also design the rules for when to send (and when not to send) messages.
  7. Context, context, context. Context matters. Do you really think that new follower interests the user when her phone is at 5% battery? Why not wait until the phone is on the charger again? While research should uncover lots of interesting scenarios, a source of true delight is when you can anticipate and resolve the user’s anxiety.
  8. Measure the effects. With messages you should measure not just open rates, but also actions taken and unsubscribe or even uninstall rates. These may uncover problems where additional iterations are needed.
  9. First impressions count. Make sure your first message is great, as it will set the expectations for the rest of your messages. It’s easy for users to lose interest and opt out from further communication.


This article was written based on my talk at MobileWeekend 2016.

This post originally appeared on the Emarsys Craftlab blog.