The Gilt technology organization. We make gilt.com work.

Gilt Tech

Advanced tips for building an iOS Notification Service Extension

Kyle Dorman (iOS)

The Gilt iOS team is officially rolling out support for “rich notifications” in the coming days. By “rich notifications”, I mean the ability to include media (images/GIFs/video/audio) with push notifications. Apple announced rich notifications as part of iOS 10 at WWDC last year (2016). For a mobile-first e-commerce company with high-quality images, adding media to push notifications is an exciting way to continue to engage our users.


This post details four helpful advanced tips I wish I had when I started building a Notification Service Extension (NSE) for the iOS app. Although all of this information is available through different blog posts and Apple documentation, I am putting it all in one place in the context of building an NSE in the hopes that it saves someone the time I spent hunting around and testing this niche feature. Specifically, I will go over things I learned after the point where I was actually seeing modified push notifications on a real device (even something as simple as appending MODIFIED to the notification title).

If you’ve stumbled upon this post, you’re most likely about to start building an NSE, or you have started already and hit an unexpected roadblock. If you have not already created the shell of your extension, I recommend reading the official Apple documentation and some other helpful blog posts found here and here. These posts give a great overview of how to get started receiving and displaying push notifications with media.

Tip 0: Sending notifications

When working with NSEs, it is extremely helpful to have a reliable way of sending yourself push notifications. Whether you use a third-party push platform or a home-grown platform, validate that you can send yourself test notifications before going any further. Additionally, validate that you have the ability to send modified push payloads.
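As a reference point, a payload only reaches your NSE if the aps dictionary sets mutable-content to 1; the media URL itself travels in a custom key of your choosing (the “media-url” key and the values below are illustrative, not Gilt’s actual payload):

```json
{
  "aps": {
    "alert": {
      "title": "Sale starts now",
      "body": "Tap to see today's picks"
    },
    "mutable-content": 1
  },
  "media-url": "https://example.com/images/sale.jpg"
}
```

If mutable-content is missing, iOS delivers the notification directly and your extension never runs, which is a common reason a new NSE appears to do nothing.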

Tip 1: Debugging

Being able to debug your code while you work is paramount. If you’ve ever built an app extension, this tip may be old hat to you, but as a first-time extension builder it was a revelation to me! Because an NSE is not actually a part of your app, but an extension, it does not run on the same process id as your application. When you install your app on an iOS device from Xcode, the Xcode debugger and console only listen to your application’s process id. This means any print statements and breakpoints you set in the NSE won’t show up in the Xcode console and won’t pause the execution of your NSE.


You actually can see all of your print statements in the macOS Console app, but the Console also includes every print/log statement of every process running on your iOS device, and filtering these events is more pain than it’s worth.


Fortunately, there is another way. You can have Xcode listen to any of the processes running on your phone, including low-level processes like wifid; Xcode just happens to default to your application.


To attach to the NSE, you first need to send your device a notification to start up the NSE. Once you receive the notification, go to Xcode’s “Debug” menu, scroll down to “Attach to Process”, and look to see if your NSE is listed under “Likely Targets”.


If you don’t see it, try sending another notification to your device. If you do, attach to it! Once you have successfully attached to your NSE process, it should appear grayed out when you go back to Debug > Attach to Process.


You should also be able to select the NSE from the Xcode debug area.


To validate that both the debugger and print statements are working, add a breakpoint and a print statement to your NSE. Note: every time you rebuild the app, you will unfortunately have to repeat the process of sending yourself a notification before attaching to the NSE process.
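For orientation, here is a minimal sketch of where those statements live (the class name is whatever Xcode generated for your extension; appending MODIFIED to the title mirrors the sanity check mentioned earlier):

```swift
import UserNotifications

class NotificationService: UNNotificationServiceExtension {

    override func didReceive(_ request: UNNotificationRequest,
                             withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
        // Breakpoints here are only hit once Xcode is attached to the NSE process.
        print("NSE received notification: \(request.content.title)")

        if let content = request.content.mutableCopy() as? UNMutableNotificationContent {
            content.title = "MODIFIED \(content.title)"
            contentHandler(content)
        } else {
            contentHandler(request.content)
        }
    }
}
```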

Amazing! Your NSE development experience will now be 10x faster than my own. I spent two days appending “print statements” to the body of the actual notification before I discovered the ability to attach to multiple processes.


Tip 2: Sharing data between your application and NSE

Although your NSE is bundled with your app, it is not part of your app: it does not run on the same process id (see above) and does not have the same bundle identifier. Because of this, your application and NSE cannot talk to each other directly and cannot use the same file system. If you have any information you would like to share between the app and the NSE, you will need to add them both to an App Group. For the specifics of setting up an app group, check out Apple’s Sharing Data with Your Containing App.

This came up in Gilt’s NSE because we wanted the ability to collect logs from the NSE and include them with the rest of the app’s logs. For background, the Gilt iOS team uses our own open-sourced logging library, CleanroomLogger. The library writes log files to the app’s allocated file system. To collect the NSE’s log files from the application, we needed the NSE to save its log files to the shared app group container.

Another feature you get once you set up the App Group is the ability to share information using the app group’s NSUserDefaults. We aren’t using this feature right now, but we might in the future.
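As a sketch of both mechanisms (the group identifier below is a placeholder; use the App Group identifier you registered for your app and extension):

```swift
import Foundation

// Placeholder identifier; replace with your registered App Group.
let groupID = "group.com.example.myapp"

// Shared NSUserDefaults, readable and writable from both the app and the NSE.
if let sharedDefaults = UserDefaults(suiteName: groupID) {
    sharedDefaults.set(Date(), forKey: "lastNotificationReceived")
}

// Shared file-system container, e.g. where the NSE can write log files
// that the containing app later collects.
let containerURL = FileManager.default
    .containerURL(forSecurityApplicationGroupIdentifier: groupID)
let logsURL = containerURL?.appendingPathComponent("Logs", isDirectory: true)
```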

Tip 3: Using frameworks in your NSE

If you haven’t already realized it, rich notifications don’t contain the actual media, just links to media that your NSE downloads. If you’re a bolder person than I am, you might decide to forgo the use of an HTTP framework in your extension and re-implement any functions/classes you need. For the rest of us, it’s a good idea to include additional frameworks in the NSE. In the simplest case, adding a framework to an NSE is the same as including a framework in another framework or in your container app. Unfortunately, not all frameworks can be used in an extension.
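That said, for the common case of just downloading the media, Foundation’s URLSession may be all you need. Here is a rough sketch (error handling elided); the move at the end exists because notification attachments require a local file URL:

```swift
import Foundation

// Downloads the media referenced in the push payload to a local file URL,
// which is what UNNotificationAttachment requires.
func downloadMedia(from mediaURL: URL, completion: @escaping (URL?) -> Void) {
    let task = URLSession.shared.downloadTask(with: mediaURL) { tempURL, _, _ in
        guard let tempURL = tempURL else { return completion(nil) }
        // Move the file out of its temporary location and keep the file
        // extension so iOS can infer the attachment type.
        let destination = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(mediaURL.pathExtension)
        try? FileManager.default.moveItem(at: tempURL, to: destination)
        completion(destination)
    }
    task.resume()
}
```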


To use a framework in your extension, the framework target must have the “Allow app extension API only” (App Extensions) box checked.


Most popular open source frameworks are already set up to work with extensions, but it’s something you should look out for. The Gilt iOS app has one internal framework that we weren’t able to use in extensions, so I had to re-implement a few functions in the NSE. If you come across a framework that you think should work in an extension, but doesn’t, check out Apple’s Using an Embedded Framework to Share Code.

Tip 4: Display different media for thumbnail and expanded view

When the rich notification comes up on the device, users see a small thumbnail image beside the notification title and message.


And when the user expands the notification, iOS shows a larger image.


In the simple case (example above), you might have just a single image to use as both the thumbnail and the large image. In that case, setting a single attachment is fine. In the Gilt app, we came across a case where we wanted to show a specific square image as the thumbnail and a specific rectangular image when the notification is expanded. This is possible because UNMutableNotificationContent allows you to set an array of UNNotificationAttachments. Although this behavior is not documented, it works.

guard let bestAttemptContent = request.content.mutableCopy() as? UNMutableNotificationContent else { return }
// The attachment initializer throws and requires an identifier and a local file URL.
let expandedAttachment = try? UNNotificationAttachment(identifier: "expanded", url: expandedURL, options: [UNNotificationAttachmentOptionsThumbnailHiddenKey: true])
let thumbnailAttachment = try? UNNotificationAttachment(identifier: "thumbnail", url: thumbnailURL, options: [UNNotificationAttachmentOptionsThumbnailHiddenKey: false])
bestAttemptContent.attachments = [expandedAttachment, thumbnailAttachment].compactMap { $0 }

This code snippet sets two attachments on the notification. This may be confusing because, currently, iOS only shows one attachment at a time. If we can only show one attachment, then why set two attachments on the notification? Because we want to show different images in the collapsed and expanded notification views. The first attachment in the array, expandedAttachment, is hidden in the collapsed view (UNNotificationAttachmentOptionsThumbnailHiddenKey: true). The second attachment, thumbnailAttachment, is not. In the collapsed view, iOS selects the first attachment where UNNotificationAttachmentOptionsThumbnailHiddenKey is false. But when the notification is expanded, the first attachment in the array, in this case expandedAttachment, is displayed. If that is confusing, see the example images below. Notice that this is not one rectangular image cropped for the thumbnail.

[screenshot: collapsed notification showing the square thumbnail image]

[screenshot: expanded notification showing the rectangular image]

Note: There is a way to specify a clipping rectangle using the UNNotificationAttachmentOptionsThumbnailClippingRectKey option, but our backend system doesn’t include cropping-rectangle information, and we already have multiple appropriate crops of product/sale images available.
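If your system does have crop information, a hedged sketch of that option looks like the following. The rectangle is expressed in the unit coordinate space (0.0–1.0) and passed as a CGRect dictionary representation; localImageURL is an assumed file URL to an already-downloaded image:

```swift
import UserNotifications
import CoreGraphics

// Assumed: a local file URL produced by your media download step.
let localImageURL = URL(fileURLWithPath: "/tmp/downloaded-image.jpg")

// Clip the thumbnail to the center square of the image (normalized coordinates).
let clippingRect = CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5)
let clippedAttachment = try? UNNotificationAttachment(
    identifier: "clipped",
    url: localImageURL,
    options: [UNNotificationAttachmentOptionsThumbnailClippingRectKey:
                clippingRect.dictionaryRepresentation])
```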


That’s it! I hope this post was helpful and that you will now fly through building a Notification Service Extension for your app. If there is anything you think I missed and should add to the blog, please let us know!



Open Source Friday

HBC Tech (open source)

From the 54 public repos we maintain to the name of our tech blog (displayed in this tab’s header), open source has been part of our team’s DNA for years. Check out this blog post from 2015 if you’re not convinced.

Our open source love is why we’re excited to participate in our first Open Source Friday on June 30. Open Source Friday is an effort being led by GitHub to make it easier to contribute to the open source community. This blog post has more detail on the who, what and why. We’re hoping to make this a regular activity to help our teams foster an open-source-first culture as we grow and evolve.

Some of the projects we’ll be working on:

  • CleanroomLogger - Evan Maloney will be tackling a specific long-standing user request: custom “named subsystems” for logging.
  • ApiBuilder - Ryan Martin will be working on fixing some edge cases in the Swagger Generator for ApiBuilder.
  • gfc-guava - Sean Sullivan will be updating the documentation for gfc-guava and working on compatibility with Google Guava 22.0.

If you’re inspired but don’t know where to start, head to our directory of open source projects, visit this list by GitHub or ping us on Twitter and we can help point you in the right direction.


Hudson's Bay Company at QCon

HBC Tech (conferences)

Heading to QCon? Don’t miss these two sessions! If you can’t make it, stay tuned here for slides and recordings from the conference.

Removing Friction In the Developer Experience

If you follow this blog at all, you know that we talk a lot about how we work here. Whether it’s our approach to adopting new technology, the work of our POps team or our team ingredients framework, we’re not shy when it comes to our people and culture.

With that in mind, it only makes sense that Ade Trenaman, SVP Engineering at Hudson’s Bay Company, will be part of the Developer Experience track at QCon New York in June. In his session, titled “Removing Friction In the Developer Experience”, Ade will highlight a number of the steps we’ve taken as an organisation to improve how we work. The talk will cover:

  • how we blend microservice / serverless architectures, continuous deployment, and cloud technology to make it easy to push code swiftly, safely and frequently and operate it reliably in production.
  • the organisational tools like team self-selection, team ingredients (see above), voluntary adoption and internal startups that allow us to decentralise and decouple high-performing teams.

Survival of the Fittest - Streaming Architectures

Michael Hansen will also be at QCon this year. Mike’s talk will help guide the audience through a process to adopt the best streaming model for their needs (because there is no perfect solution).

In his own words: “Frameworks come and go, so this talk is not about the ‘best’ framework or platform to use; rather, it’s about core principles that will stand the tests of streaming evolution.”

His talk will also cover:

  • major potential pitfalls that you may stumble over on your path to streaming and how to avoid them
  • the next evolutionary step in streaming at Hudson’s Bay Company

Hope to see you there!


Let’s run an experiment! Self-selection at HBC Digital

Dana Pylayeva (Agile)


Inspired by Opower’s success story, we ran a self-selection experiment at HBC Digital.

Dubbed “the most anticipated event of the year,” it enabled 39 team members to self-select into 4 project teams. How did they do it? By picking a project they wanted to work on, picking the teammates they wanted to work with, and keeping a “Do what’s best for the company” attitude. Read on to learn about our experience and consider giving self-selection a try!

A little bit of introduction:

Who are we?

HBC Digital is the group that drives the digital retail/ecommerce and digital customer experience across all HBC retail banners including Hudson’s Bay, Lord & Taylor, Saks Fifth Avenue, Gilt, and Saks OFF 5TH.

Our process, trifectas and team ingredients

Our development process is largely inspired by the original Gilt process and has the ideas of intrinsic motivation at its core. Which agile flavor do we use? It depends on the team.

Each team has full autonomy in selecting Scrum, Kanban, XP, a combination thereof or none of the above as its process. As long as they remain small, nimble, able to collaborate and continuously deliver value, teams can tailor the process to their needs.

We do keep certain key components standard across all teams. One of them is a “Trifecta” – a group of servant-leaders in each team: a Product Manager, an Agile Project Manager and a Tech Lead. They work together to support their team and enable the team’s success. We value continuous learning and facilitate role blending by instilling our Team Ingredients framework. Originally designed by Heather Fleming, the Team Ingredients framework facilitates team-level conversations about the team strengths, learning interests and cross-training opportunities.

Over the years the framework evolved from being a management tool for assessing teams from “outside in” to being a team tool that supports self-organizing and learning discussions. After a major revamp and gamification of the framework in 2016, we now use it as part of our Liftoff sessions and team working agreement conversations.

Just like our Team Ingredients framework, our process continues to evolve. We experiment with new ideas and practices to facilitate teams’ effectiveness and create an environment for teams to thrive. The self-selection is our latest experiment and this blog post is a glimpse into how it went.

Self-selection triggers and enablers

Organizational change

As an organization that grew through acquisitions, at one point we found ourselves dealing with an unhealthy mix of cultures, duplicate roles and clashing mindsets. To remain lean and agile, we went through a restructuring at all levels.

Inspiring case studies

When we were evaluating the best ways to re-form the teams, we came across Amber King and Jess Huth’s talk on self-selection at Business Agility 2017 Conference. The lightbulb went on! Amber and Jess were describing exactly the situation we were in at that time and were reporting the positive effect of running a self-selection with the teams at Opower. We followed up with them on Skype afterwards. Hearing their compelling story again and being encouraged by their guidance, we left the call fired up to give the self-selection a try!

Self-selection manual

When it is your turn to plan for self-selection, pick up a copy of Sandy Mamoli and David Mole’s book “Creating Great Teams: How Self-Selection Lets People Excel”. This very detailed facilitation guide from the inventors of the self-selection process is indispensable in preparing for and facilitating a self-selection event.

Past success

What worked in our favor was the fact that Gilt had tried running a self-selection in 2012 as part of a transition to “two-pizza” teams. That self-selection event, called “Speed Dating”, involved 50 people and 6 projects. Fun fact: a number of today’s leaders took part in the 2012 event as regular participants.


We kept the preparation process very transparent. Dedicated Slack channel, Confluence page with progress updates and participants’ info, communication at the tech all-hands meetings and Q&A sessions – everything to avoid creating discomfort and to reduce the fear factor amongst team members.

Self-selection in seven steps

Seven Steps of Self-Selection

1. Get Leadership Buy-In

One of the first steps in a self-selection is getting buy-in from your leadership team. Whether you start from feature teams or component teams, a self-selection event has a potential of impacting the existing reporting structure in your organization. Have an open conversation with each of the leaders to clarify the process, understand their concerns and answer questions.

Is there a small modification you can make to the process to mitigate these concerns and turn the leaders into your supporters? From our experience, making a self-selection invitational and positioning it as “an experiment” fast-tracked its acceptance in the organization.

2. Identify Participants

How many people will be involved in your self-selection? Will it include all of your existing project teams or a subset?

Reducing the size of the self-selection to only a subset of the teams at HBC Digital made our experiment more feasible. By the same token, it created a bit of confusion around who was in vs. who was not.

If you are running a self-selection for a subset of your teams, make sure that the list of participants is known and publicly available to everyone. Verify that the total number of participants is equal to or smaller than the number of open spots on the new teams.

Pre-selected vs. free-moving participants

Decide if you need to have any of the team members pre-selected in each team. For us, the only two pre-selected roles in each team were a Product Manager and a Tech Lead. They were the key partners in pitching the initiative to team members. All others (including Agile Project Managers) were invited to self-select into new teams.

FTEs vs. Contractors

If you have contractors working on your projects alongside the full-time employees, you will need to figure out if limiting self-selection to full-time employees makes sense in your environment.

Since our typical team had a mix of full-time employees and contractors, it was logical for us to invite both groups to participate in the self-selection. After all, individuals were selecting the teams based on a business idea, a technology stack and the other individuals that they wanted to work with. We did make one adjustment to the process and asked contractors to give employees “first dibs” at selecting their new teams. Everyone had equal opportunity after the first round of the self-selection.


Usually, you would want to limit participation to those directly involved in a self-selection. In our case, there was so much interest in the self-selection experiment across the organization that we had to compromise by introducing an observer role. Observers were invited to join the first part of the self-selection event. They could check out how the room was set up and take a peek at the participants’ cards. They could listen to the initiative pitches for all teams, without making an actual selection. Observers were asked to leave after the break, before the start of the actual team selection.

3. Work with Your Key Partners

Adjust the process to fit your needs

During our prep work we discovered that some team members felt very apprehensive about the self-selection process. To some extent, it reminded them of negative childhood experiences with being picked for sports teams. We collaborated with the current teams’ Trifectas to reduce potential discomfort with the following adjustments:

  • We modified the “I have no squad” poster into “Available to help” poster for a more positive spin.
  • We made a compromise on consultants’ participation, asking them to add their cards to the “Available to help” poster in the first round and letting them participate equally starting from the second round.
  • We introduced a “No first come first serve” rule to keep the options open for everyone and avoid informal pre-selection.

Product Manager and Tech Lead pitches

Coach them to inspire people with their short pitches about a product vision and a technology stack:

  • Why is this initiative important to our business?
  • How can you make a difference if you join?
  • What exciting technologies will you get a chance to work with if you become a part of this team?
  • What kind of team are we looking to build?

Establish the team formula

This part is really critical.

Your team formula may include the core team only or, like in our case, include members from the larger project community (Infrastructure Engineers, UX Designers, etc.). As a facilitator, you want to understand the needs of each project very well, in terms of the specific roles and the number of people required for each role. Cross-check the total number of people implied by the team formulas against the number of people invited to participate in the self-selection. Avoid putting people into “musical chairs” at all costs!

4. Evangelize

Take the uncertainty out of the self-selection! Clarify questions, address concerns, play the “what-ifs”, collect questions and make answers available to everyone.

We learnt to use a variety of channels to spread the word about the self-selection:

  • announcements at Tech All-hands meetings
  • dedicated Q&A sessions with each existing group.
  • Confluence Q&A page
  • #self-selection Slack channel
  • formal and informal one-on-one conversations (including hallway and elevator chats)
  • discussion between the Tech Leads and Product Managers and their potential team members

5. Prepare


It was important for us to find the right space and set the right mood for the actual self-selection event. The space that worked for us met all of our criteria:

1) Appropriate for the size of the group
2) Natural light
3) Separate space for pitches and for team posters
4) Away from the usual team spaces (to minimize distractions)


Speaking of the right mood, we had enough good snacks brought in for all participants and observers!

Depending on the time of the day, you may plan on bringing breakfast, lunch or snacks into your self-selection event. We ran ours in the afternoon and brought in a selection of European chocolate, popcorn and juices.


Help the participants remember the rules and find the team corners by preparing posters. Be creative, make them visually appealing. Here is what worked for us:

1) One team poster per team with the project/team name, team formula and a team mascot.

2) Rules posters:

  • “Do what’s best for the company”
  • “Equal team selection opportunity”
  • “Teams have to be capable of delivering end to end”

3) “Available to help” poster. This is very similar to the “I have no squad” poster from Sandy Mamoli’s book. However, we wanted to make the message on that poster a little more positive.

Participant Cards

At a minimum, have a printed photo prepared for each participant and color-coded labels to indicate different roles.

We invested a little more time in making the participant cards look like game cards and included:

  • a LinkedIn photo of the participant
  • their name
  • a current role
  • their proficiency and learning interests in the eleven team ingredients
  • a space to indicate their first, second and third choices of the team (during the event).

Using our Team Ingredients framework and the Kahoot! survey platform, we created a gamified self-assessment to collect the data for these cards.

Participants rated their skill levels and learning interests for each of the ingredients using the following scale:

3 – I can teach it

2 – I can do it

1 – I’d like to learn it

0 – Don’t make me do it

6. Run

It took us exactly one month to get to this point. On the day of the self-selection the group walked into the room. The product managers, tech leads and the facilitator were already there. The room was set and ready for action!

Initiative Pitches

Participants picked up their cards and settled into their chairs, prepared to hear the initiative pitches and to make their selections. This was one of the most attentive audiences we’ve seen! We didn’t even have to set rules around device usage - everyone was giving the pitches their undivided attention.

After a short introduction from the facilitator and a “blessing” from one of the leaders, Product Managers and Tech Leads took the stage.

For each initiative they presented their vision of the product, the technology stack and their perspective on the team they’d like to build. It was impressive to see how each pair worked together to answer questions and inspire people. At the end of the pitches, we took a short break. It was a signal for observers to leave the room.

Two rounds of self-selection

After the break, Product Managers and Tech Leads took their places in the team corners. We ran two rounds of self-selection, ten minutes each.

During the first self-selection round, people walked around, checked the team formulas, chatted with others and placed their cards on the poster of their first-choice team. Contractors and others who didn’t want to make a selection in the first round placed their cards on the “Available to help” poster. At the end of the round, each tech lead was asked to give an update on the following:

  • Was the team complete after this round?
  • Were there any ingredients or skills missing in the team after the first round?

During the second round, there were more conversations, more negotiations and more movement between the teams. Some people agreed to move to their second choice teams to help fill the project needs. The “Do what’s best for the company” poster served as a good reminder during this process.

The debrief revealed that three teams out of four had been fully formed by the end of the second round. The last team still had open spots; it was decided that those would be filled later by hiring new people with the required skillset.

The self-selection event was complete. It was time to celebrate and to start planning the work with the new teams.

7. Support New Teams

Transition Plan

With the self-selection exercise, our teams formed a vision for their ideal “end state”. Afterwards, we needed to figure out how to achieve that vision. Tech leads worked with their new team members to figure out the systems they supported and the projects they were involved with at the time, and mapped out the transition plan.

Team Working Agreement

Once all members of the new teams were available to start, we facilitated Liftoff workshops to help them get more detail on the product purpose, establish team working agreements and understand the larger organizational context.

Coaching/Measuring Happiness

Our experiment didn’t stop there. We continue checking in with the team through coaching, measuring happiness (we use gamified Spotify Squad Health check) and facilitating regular retrospectives.

What’s next?

As our roadmap continues to change and as we get more people joining the organization, we may consider running a self-selection again with a new group. Or we may decide to move away from “large batches” of self-selection and experiment with a flow of Dynamic Reteaming.

Time will tell. One thing is clear - we will continue learning and experimenting.

How can you learn more?

We hope this blog post inspired you to think about a self-selection for your teams. Still have questions after reading it? Get in touch with us, we’d love to tell you more!

We are speaking

Join our talks and workshops around the World:

  1. “The New Work Order” keynote at Future of Work by Heather Fleming, VP People Operations & PMO

  2. Removing Friction In the Developer Experience at QCon New York by Adrian Trenaman, SVP Engineering

  3. Discover Your Dream Teams Through Self-Selection with a Team Ingredients Game at Global Scrum Gathering Dublin by Dana Pylayeva, Agile Coach

Great books that inspired us

  1. Sandy Mamoli and David Mole, “Creating Great Teams: How Self-Selection Lets People Excel”
  2. Diana Larsen and Ainsley Nies, “Liftoff: Launching Agile Teams & Projects”
  3. Heidi Shetzer Helfand, “Dynamic Reteaming: The Art and Wisdom of Changing Teams”

CloudFormation Nanoservice

Ryan Martin (AWS)

One of the big HBC Digital initiatives for 2017 is “buy online, pickup in store” - somewhat awkwardly nicknamed “BOPIS” internally. This is the option for the customer to, instead of shipping an order to an address, pick it up in a store that has the items in inventory.

A small part of this new feature is the option to be notified of your order status (i.e. when you can pickup the order) via SMS. A further smaller part of the SMS option is what to do when a customer texts “STOP” (or some other similar stop word) in response to one of the SMS notifications. Due to laws such as the Telephone Consumer Protection Act (TCPA) and CAN-SPAM Act, we are required to immediately stop sending additional messages to a phone number, once that person has requested an end to further messaging.

Our SMS provider is able to receive the texted response from the customer and POST it to an endpoint of our choosing. We could wrap such an endpoint into one of our existing microservices, but the one that sends the SMS (our customer-notification-service) is super-simple: it receives order events and sends notifications (via email or SMS) based on the type of event. It is essentially a dumb pipe that doesn’t care about orders or users; it watches for events and sends messages to customers based on those events. Wrapping subscription information into this microservice felt like overstepping the bounds of the simple, clean job that it does.

So this is the story of how I found myself writing a very small service (nanoservice, if you will) that does one thing - and does it with close-to-zero maintenance, infrastructure, and overall investment. Furthermore, I decided to see if I could encapsulate it entirely within a single CloudFormation template.

How we got here

Here are the two things this nanoservice needs to do:

  1. Receive the texted response and unsubscribe the customer if necessary
  2. Allow the customer notification service (CNS) to check the subscription status of a phone number before sending a SMS

In thinking about the volume of traffic for these two requests, we consider the following:

  1. This is on [] only (for the moment)
  2. Of the online Saks orders, only a subset of inventory is available to be picked up in the store
  3. Of the BOPIS-eligible items, only a subset of customers will choose to pickup in store
  4. Of those who choose to pickup in store, only a subset will opt-in for SMS messages
  5. Of those who opt-in for SMS, only a subset will attempt to stop messages after opting-in

For the service’s endpoints, the request volume for the unsub endpoint (#1 above) is roughly the extreme edge case of #5; the CNS check (#2) is the less-edgy-but-still-low-volume #4 above. So we’re talking about a very small amount of traffic: at most a couple dozen requests per day. This hardly justifies spinning up a microservice - even if it runs on a t2.nano, you still have the overhead of multiple nodes (for redundancy), deployment, monitoring, and everything else that comes with a new microservice. Seems like a perfect candidate for a serverless approach.

The architecture

As mentioned above, a series of order events flows to the customer notification service, which checks to make sure that the destination phone number is not blacklisted. If it is not, CNS sends the SMS message through our partner, who in turn delivers the SMS to the customer. If the customer texts a response, our SMS partner proxies that message back to our blacklist service.

The blacklist service is a few Lambda functions behind API Gateway; those Lambda functions simply write to and read from DynamoDB. Because the stack is so simple, it felt like I could define the entire thing in a single artifact: one CloudFormation template. Not only would that be a geeky because-I-can coding challenge, it also felt really clean to be able to deploy a service using only one resource with no dependencies. It’s open source, so anyone can literally copy-paste the template into CloudFormation and have the fully-functioning service in the amount of time it takes to spin up the resources - with no further knowledge necessary. Plus, the template is in JSON (which I’ll explain later) and the functions are in Node.js, so it’s a bit of a “YO DAWG” situation: JavaScript functions embedded inside a JSON template.

API-driven development
Here at HBC Digital, we’ve really started promoting the idea of API-driven development (ADD). I like it a lot because it forces you to fully think through the most important models in your API, how they’re defined, and how clients should interact with them. You can iron out a lot of the kinks (Do I really need this property? Do I need a search? How does the client edit? What needs to be exposed vs locked-down? etc) before you write a single line of code.

I like to sit down with a good API document editor such as SwaggerHub and define the entire API at the beginning. The ADD approach worked really well for this project because we needed a quick turnaround time: the blacklist was something we weren’t expecting to own internally until very late in the project, so we had to get it in place and fully tested within a week or two. With an API document in hand (particularly one defined in Swagger), I was able to go from API definition to fully mocked endpoints (in API Gateway) in about 30 mins. The team working on CNS could then generate a client (we like the clients in Apidoc, an open-source tool developed internally that supports Swagger import) and immediately start integrating against the API. This then freed me to work on the implementation of the blacklist service without being a blocker for the remainder of the team. We settled on the blacklist approach one day; less than 24 hours later we had a full API defined with no blockers for development.

The API definition is fairly generic: it supports blacklisting any uniquely-defined key for any type of notification. The main family of endpoints looks like this:

GET    /{notification_type}/{blacklist_id}
PUT    /{notification_type}/{blacklist_id}
DELETE /{notification_type}/{blacklist_id}

notification_type currently only supports sms, but could very easily be expanded to support things like email, push, facebook-messenger, etc. With this, you could blacklist phone numbers for sms independently from email addresses for email independently from device IDs for push.

A simple GET checks to see if the identifier of the destination is blacklisted for that type of notification:

> curl https://your-blacklist-root/sms/555-555-5555
{"message":"Entry not blacklisted"}

This endpoint is used by CNS to determine whether or not it should send the SMS to the customer. In addition to the GET endpoint, the API defines a PUT and a DELETE for manual debugging/cleanup - though a client could also use them directly to maintain the blacklist.
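By contrast, when a number has been blacklisted, the handler code shown later returns a 200 whose body carries the entry’s id, so the same call would presumably look like this (response inferred from the handler code, not captured from a live service):

> curl https://your-blacklist-root/sms/555-555-5555
{"id":"5555555555"}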

The second important endpoint is a POST that receives a XML document with details about the SMS response:

<?xml version="1.0" encoding="UTF-8"?>
<moMessage messageId="123456789" receiptDate="YYYY-MM-DD HH:MM:SS Z" attemptNumber="1">
    <source address="+15555555555" carrier="" type="MDN" />
    <destination address="12345" type="SC" />
    <message>Stop texting me</message>
</moMessage>

The important bits are the source address (the phone number that sent the message) and the message itself. With those, the API can determine whether or not to add the phone number to the blacklist. If it does, the next time CNS calls the GET endpoint for that phone number, the API will return a positive result for the blacklist and CNS will not send the SMS. The POST to /mo_message lives at the top-level because it is only through coincidence that it results in blacklisting for SMS; one could imagine other endpoints at the top-level that blacklist from other types of notifications - or even multiple (depending on the type of event).

Let’s see some code

First there are a couple functions shared across all the endpoints (and their backing Lambda functions):

function withSupportedType(event, context, lambdaCallback, callback) {
  const supportedTypes = ['sms'];
  if (supportedTypes.indexOf(event.pathParameters.notification_type.toLowerCase()) >= 0) {
    callback(event.pathParameters.notification_type.toLowerCase());
  } else {
    lambdaCallback(null, { statusCode: 400, body: JSON.stringify({ message: 'Notification type [' + event.pathParameters.notification_type + '] not supported.' }) });
  }
}

function sanitizeNumber(raw) {
  var numbers = raw.replace(/[^\d]+/g, '');
  if (numbers.match(/^1\d{10}$/)) numbers = numbers.substring(1, 11);
  return numbers;
}
These are there to ensure that each Lambda function is a) dealing with invalid notification_types and b) cleaning up the phone number in the same manner across all functions. Given those common functions, the amount of code for each function is fairly minimal.
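To make the normalization concrete, here is `sanitizeNumber` run standalone (the function body is the same as above; the sample inputs are invented):

```javascript
// Strip everything but digits, then drop a leading US country code so
// that "+1 (555) 555-5555" and "555.555.5555" normalize identically.
function sanitizeNumber(raw) {
  var numbers = raw.replace(/[^\d]+/g, '');
  if (numbers.match(/^1\d{10}$/)) numbers = numbers.substring(1, 11);
  return numbers;
}

console.log(sanitizeNumber('+1 (555) 555-5555')); // '5555555555'
console.log(sanitizeNumber('555.555.5555'));      // '5555555555'
```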

The GET endpoint simply queries the DynamoDB for the unique combination of notification_type and blacklist_id:

const AWS = require('aws-sdk'),
      dynamo = new AWS.DynamoDB();

exports.handler = (event, context, callback) => {
  const blacklistId = sanitizeNumber(event.pathParameters.blacklist_id);
  withSupportedType(event, context, callback, function(notificationType) {
    dynamo.getItem({
      TableName: event.stageVariables.TABLE_NAME,
      Key: { Id: { S: blacklistId }, Type: { S: notificationType } }
    }, function(err, data) {
      if (err) return callback(err);
      if ((data && data.Item && afterNow(data, "DeletedAt")) || !onWhitelist(blacklistId, event.stageVariables.WHITELIST)) {
        callback(null, { statusCode: 200, body: JSON.stringify({ id: blacklistId }) });
      } else {
        callback(null, { statusCode: 404, body: JSON.stringify({ message: "Entry not blacklisted" }) });
      }
    });
  });
};

function afterNow(data, propertyName) {
  if (data && data.Item && data.Item[propertyName] && data.Item[propertyName].S) {
    return Date.parse(data.Item[propertyName].S) >= new Date();
  } else {
    return true;
  }
}

// Set the whitelist in staging to only allow certain entries.
function onWhitelist(blacklistId, whitelist) {
  if (whitelist && whitelist.trim() != '') {
    const whitelisted = whitelist.split(',');
    return whitelisted.findIndex(function(item) { return blacklistId == item.trim(); }) >= 0;
  } else {
    return true;
  }
}

Disregarding the imports at the top and some minor complexity around a whitelist (which we put in place only for staging/test environments so we don’t accidentally spam people while testing), it’s about a dozen lines of code (depending on spacing) - with minimal boilerplate. This is the realization of one of the promises of the serverless approach: very little friction against getting directly to the meat of what you’re trying to do. There is nothing here about request routing or dependency-injection or model deserialization; the meaningful-code-to-boilerplate ratio is extremely high (though we’ll get to deployment later).
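As a quick illustration of the whitelist guard, here is the helper exercised standalone (the function body matches the one above; the sample values are invented):

```javascript
// An empty or unset whitelist lets everything through; otherwise the
// id must appear in the comma-separated list to receive messages.
function onWhitelist(blacklistId, whitelist) {
  if (whitelist && whitelist.trim() != '') {
    const whitelisted = whitelist.split(',');
    return whitelisted.findIndex(function(item) { return blacklistId == item.trim(); }) >= 0;
  } else {
    return true;
  }
}

console.log(onWhitelist('5555555555', '5555555555, 1234567890')); // true
console.log(onWhitelist('9999999999', '5555555555, 1234567890')); // false
console.log(onWhitelist('9999999999', ''));                       // true (no whitelist set)
```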

The PUT (add an entry to the blacklist, managing soft-deletes correctly)

exports.handler = (event, context, callback) => {
  const blacklistId = sanitizeNumber(event.pathParameters.blacklist_id);
  withSupportedType(event, context, callback, function(notificationType) {
    dynamo.updateItem({
      TableName: event.stageVariables.TABLE_NAME,
      Key: { Id: { S: blacklistId }, Type: { S: notificationType } },
      ExpressionAttributeNames: { '#l': 'Log' },
      ExpressionAttributeValues: {
        ':d': { S: (new Date()).toISOString() },
        ':m': { SS: [ toMessageString(event) ] }
      },
      UpdateExpression: 'SET UpdatedAt=:d ADD #l :m REMOVE DeletedAt'
    }, function(err, data) {
      if (err) return callback(err);
      callback(null, { statusCode: 200, body: JSON.stringify({ id: blacklistId }) });
    });
  });
};

and DELETE (soft-delete entries when present)

exports.handler = (event, context, callback) => {
  const blacklistId = sanitizeNumber(event.pathParameters.blacklist_id);
  withSupportedType(event, context, callback, function(notificationType) {
    dynamo.updateItem({
      TableName: event.stageVariables.TABLE_NAME,
      Key: { Id: { S: blacklistId }, Type: { S: notificationType } },
      ExpressionAttributeNames: { '#l': 'Log' },
      ExpressionAttributeValues: {
        ':d': { S: (new Date()).toISOString() },
        ':m': { SS: [ toMessageString(event) ] }
      },
      UpdateExpression: 'SET DeletedAt=:d, UpdatedAt=:d ADD #l :m'
    }, function(err, data) {
      if (err) return callback(err);
      callback(null, { statusCode: 200, body: JSON.stringify({ id: blacklistId }) });
    });
  });
};

functions are similarly succinct. The POST endpoint that receives the moMessage XML is a bit more verbose, but only because of a few additional corner cases (i.e. when the origin phone number or the message isn’t present).

exports.handler = (event, context, callback) => {
  const moMessageXml = event.body;
  if (messageMatch = moMessageXml.match(/<message>(.*)<\/message>/)) {
    if (messageMatch[1].toLowerCase().match(process.env.STOP_WORDS)) { // STOP_WORDS should be a Regex
      if (originNumberMatch = moMessageXml.match(/<\s*source\s+.*?address\s*=\s*["'](.*?)["']/)) {
        var originNumber = sanitizeNumber(originNumberMatch[1]);
        dynamo.updateItem({
          TableName: event.stageVariables.TABLE_NAME,
          Key: { Id: { S: originNumber }, Type: { S: 'sms' } },
          ExpressionAttributeNames: { '#l': 'Log' },
          ExpressionAttributeValues: {
            ':d': { S: (new Date()).toISOString() },
            ':m': { SS: [ moMessageXml ] }
          },
          UpdateExpression: 'SET UpdatedAt=:d ADD #l :m REMOVE DeletedAt'
        }, function(err, data) {
          if (err) return callback(err);
          callback(null, { statusCode: 200, body: JSON.stringify({ id: originNumber }) });
        });
      } else {
        callback(null, { statusCode: 400, body: JSON.stringify({ message: 'Missing source address' }) });
      }
    } else {
      callback(null, { statusCode: 200, body: JSON.stringify({ id: '' }) });
    }
  } else {
    callback(null, { statusCode: 400, body: JSON.stringify({ message: 'Invalid message xml' }) });
  }
};

A couple things to call out here. First - and I know this looks terrible - this function doesn’t parse the XML - it instead uses regular expressions to pull out the data it needs. This is because Node.js doesn’t natively support XML parsing and importing a library to do it is not possible given my chosen constraints (the entire service defined in a CloudFormation template); I’ll explain further below. Second, there is expected to be a Lambda environment variable named STOP_WORDS that contains a regular expression to match the desired stop words (things like stop, unsubscribe, fuck you, etc).
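To see the regex extraction in action, here is the same matching run against a sample moMessage payload (the stop-words pattern below is an invented stand-in; the real value comes from the STOP_WORDS environment variable):

```javascript
const xml = '<moMessage messageId="123456789" attemptNumber="1">' +
  '<source address="+15555555555" carrier="" type="MDN" />' +
  '<message>Stop texting me</message></moMessage>';

// Same patterns the Lambda uses - no XML parser is available inline.
const messageMatch = xml.match(/<message>(.*)<\/message>/);
const originNumberMatch = xml.match(/<\s*source\s+.*?address\s*=\s*["'](.*?)["']/);

// Stand-in for process.env.STOP_WORDS (configured, not hard-coded, in production)
const stopWords = /\b(stop|unsubscribe)\b/;

console.log(messageMatch[1]);                               // 'Stop texting me'
console.log(stopWords.test(messageMatch[1].toLowerCase())); // true
console.log(originNumberMatch[1]);                          // '+15555555555'
```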

That’s pretty much the extent of the production code.

Deployment - CloudFormation

Here’s where this project gets a little verbose. Feel free to reference the final CloudFormation template as we go through this. In broad strokes, this template matches the simple architecture diagram above: API Gateway calls Lambda functions which each interact with the same DynamoDB database. The bottom of the stack (i.e. the top of the template) is fairly simple: two DynamoDB tables (one for prod, one for stage) and an IAM role that allows the Lambda functions to access the databases.

On top of that are the four Lambda functions - which contain the Node.js code (this is the “YO DAWG” part, since the JavaScript is in the JSON template) - plus individual permissions for API Gateway to call each function. This section (at the bottom of the template) is long but is mostly code-generated (we’ll get to that later).

In the middle of the template lie a bunch of CloudFormation resources that define the API Gateway magic: a top-level Api record; resources that define the path components under that Api; methods that define the endpoints and which Lambda functions they call; separate configurations for stage vs prod. At this point, we’re just going to avert our eyes and reluctantly admit that, okay, fine, serverless still requires some boilerplate (just not inline with the code, damn it!). At some level, every service needs to define its endpoints; this is where our blacklist nanoservice does it.
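For a sense of what that middle section looks like, here is a trimmed, illustrative sketch of a single API Gateway method resource wired to one of the Lambda functions (the resource and function names here are invented; see the actual template for the real definitions):

```
"GetBlacklistEntryMethod": {
  "Type": "AWS::ApiGateway::Method",
  "Properties": {
    "RestApiId": { "Ref": "BlacklistApi" },
    "ResourceId": { "Ref": "BlacklistIdResource" },
    "HttpMethod": "GET",
    "AuthorizationType": "NONE",
    "Integration": {
      "Type": "AWS_PROXY",
      "IntegrationHttpMethod": "POST",
      "Uri": { "Fn::Join": [ "", [
        "arn:aws:apigateway:", { "Ref": "AWS::Region" },
        ":lambda:path/2015-03-31/functions/",
        { "Fn::GetAtt": [ "GetEntryFunction", "Arn" ] },
        "/invocations"
      ] ] }
    }
  }
}
```

Multiply something like this by four endpoints, two stages, and the accompanying resource and permission records, and the line count adds up quickly.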

All-in, the CloudFormation template approaches 1000 lines (fully linted, mind you, so there are a bunch of lines with just tabs and curly brackets). “But wait!” you say, “Doesn’t CloudFormation support YAML now?” Why yes, yes it does. I even started writing the template in YAML until I realized I shouldn’t.

Bringing CloudFormation together with Node.js

To fully embed the Node.js functions inside the CloudFormation template would have been terrible. How would you run the code? How would you test it? A cycle of: tweak the code => deploy the template to the CloudFormation stack => manually QA - that would be a painful way of working. It’s unequivocally best to be able to write fully isolated and functioning Node.js code, plus unit tests in a standard manner. The problem is that Node.js code then needs to be zipped and uploaded to S3 and referenced by the CloudFormation template - which would create a dependency for the template and would not have achieved the goal of defining the entire service in a single template with no dependencies.

To resolve this, I wrote a small packaging script that reads the app’s files and embeds them in the CloudFormation template. This can then be run after every code change (which obviously would have unit tests and a passing CI build) to keep the template in line with all code changes. The script is written in Node.js (hey, if you’re running tests locally, you must already have Node.js installed), so a CloudFormation template written in JSON (as opposed to YAML) is essentially native - no parsing necessary. The script loads the template as JSON, injects a CloudFormation resource for each function in the /app directory, and copies that function’s code into the resource. Which brings us to the constraints of this approach.

The other thing to note about going down the path of embedding the Node.js code directly in the CloudFormation template (as opposed to packaging it in a zip file): all code for a function must be fully contained within that function definition (other than the natively supported AWS SDK). This has two implications: first, we can’t include external libraries such as a XML parser or a Promise framework (notice all the code around callbacks, which makes the functions a little more verbose than I’d like). Second, we can’t DRY out the functions by including common functions in a shared library; thus they are repeated in the code for each individual function.


So that’s it: we end up with a 1000-line CloudFormation template that entirely defines a blacklist nanoservice that exposes four endpoints and runs entirely serverless. It is fully tested, can run as a true Node.js app (if you want), and will likely consume so few resources that it is essentially free. We don’t need to monitor application servers, we don’t need to administer databases, we don’t need any non-standard deployment tooling. And there are even separate stage and production versions.

You can try it out for yourself by building a CloudFormation stack using the template. Enjoy!

Gilt Tech

The POps Up Plant Shop

HBC Digital culture

How do we keep our teams happy and high-performing? That’s the focus for the People Operations (POps) team.

The POps mission is:

To build and maintain the best product development teams in the world through establishing the models around how we staff and organize our teams, how we plan and execute our work, and how we develop our people and our culture.

Our work includes:

We like to have some fun, too.

Surprise and Delight

This week we coordinated an intercontinental “POps Up Plant Shop” for our people in NYC and Dublin. Between the two offices, we distributed 350 plants. Crotons, ivies, succulents and more were on offer. Everyone loved the surprise. While POps is focused on working with our tech teams, we noticed a few folks from other departments at HBC taking plants for their desks - a good indicator that what we’re doing is working!

Beyond adding a dash of color to the office, office plants are proven to increase happiness and productivity, which aligns perfectly with the mission of the POps team.


Mobile Design Sprint

HBC Digital mobile

HBC Digital is a large organization. We are hundreds of technologists responsible for the retail experiences for many of North America’s largest retailers including Saks Fifth Avenue, Saks OFF 5TH, Gilt, Lord & Taylor and the Bay. Our breadth allows us to work on complex challenges with huge upsides. The number of opportunities available to us, however, requires commitment from our teams to ensure we are focused on the right problems.

Recently our mobile team took part in a week-long design sprint. The goal of the five-day process was to answer critical business questions through design, prototyping and testing ideas with customers, who are always at the center of our work. The team wanted to make sure they were solving the right problem for our customers.

The design sprint was inspired by past exercises we’ve conducted with Prolific Interactive, however, this iteration was facilitated by the Senior Program Manager on our mobile team. The goal was to use the Saks Fifth Avenue app to “reduce shopping friction, unifying the customer experience across physical and digital stores”.

The Process

Each day of the five-day sprint had a particular focus:

  • Day 1 - Goal Setting and Mapping the Challenge
  • Day 2 - Sketching Ideas and Setting a Direction
  • Day 3 - Prototyping
  • Day 4 - Prototyping
  • Day 5 - User Testing

The exercise involved experts from across Hudson’s Bay Company including product, engineering, UX, business partners from Saks Fifth Avenue stores and our customers.


Any team embarking on a design sprint should outline their goal and opportunities at the start of the sprint. These help to keep the team focused throughout the exercise. We identified three specific opportunities for our team:

  • Refine the vision for the Saks app
  • Seek business opportunities of being a partner with other divisions in HBC
  • Quickly vet ideas in line with Saks’ business themes

What We Learned

The “expert panel” conducted with our business partners from stores was one of the big wins of the week. The group setting allowed for lots of interaction and Q&A. Everyone on the team heard first-hand about the pain points of our partners in stores, which paid huge dividends during our storyboarding and prototyping sessions.

Day 5 was “judgement day”. We created a test environment in our Saks Downtown store to mimic the in-store experience we envisioned during our prototyping session. By demoing in-store with Saks Fifth Avenue shoppers, we were able to get real-time feedback from our customers as they interacted with the prototype. The ability to iterate based on customer feedback before entering production will help to reduce our engineering overhead.

An added bonus of the sprint was how it energized our people. The team decided what to focus on, experimented with new technologies and connected directly with our store operations team and customers. All of these opportunities boosted morale and engagement.

Some of the things we plan to change for next time include:

  • adjust the timing of some activities (diligent time keeping of activities will pay off when mapping out the agenda for our next design sprint)
  • involve more people from our engineering team to improve the fluidity of our prototyping sessions
  • invest more time in preparation ahead of the exercise to improve our efficiency

What’s Next

With the design sprint complete, we are moving on to the feasibility/technical discovery process and defining the MVP. The tech discovery process for the MVP will feature a hackathon next month to test and build on some of the themes and technologies we identified as opportunities in the design sprint. The user testing with customers in-store during the design sprint will also heavily influence our work during the hackathon.

Stay tuned to this blog or head over to the App Store and download the Saks Fifth Avenue app to keep an eye on what we’re building.


Meetups: April Recap and What's Happening In May

John Coghlan meetups

April Meetups: 105 guests, 48 seltzers, 45 All Day IPAs, 19 pizzas & 2 great speakers.

On April 20, we hosted the NYC Scrum User Group for the third time in 2017. Rob Purdie, founder of the group and Agile Coach at IBM, gave an update on IBM’s Agile Transformation. The talk repeatedly returned to the theme of ensuring your team is “doing the right work”, warning the room of agilists that becoming very efficient at doing work that doesn’t matter is the fastest way to get nowhere. It reminded me of a quote written on the wall of our office: “Our greatest fear should not be failure, but of succeeding in life at things that don’t really matter.” While every NYC SUG Meetup has been great, this one stood out for its accessibility and high levels of audience engagement.

NY Scala University Meetup

A few days later we hosted Li Haoyi (pictured above) who gave a great talk on ‘Designing Open Source Libraries’ at our NY Scala University Meetup. He focused on intuitiveness, layering and documentation as the three keys to creating an open-source library that will keep engineers happy and drive engagement. Haoyi, the author of many popular libraries and fresh off a talk at Scala Days, drew the biggest turnout yet to our new Lower Manhattan HQ. We had to order more pizza 10 minutes after we opened the doors! His honest insights and great delivery also set a record for laughs.

Looking Ahead

Here are some of the tech events on our calendar in May. Hope to see you there!

  • May 1 - Dana Pylayeva, HBC Digital’s Agile Coach, is organizing Big Apple Scrum Day, a one-day community conference focused on Scrum/Agile principles and practices. The 2017 theme is “Always Keep Growing”.
  • May 6-7 - We’re sponsoring !!Con (pronounced “bang bang con”), “two days of ten-minute talks (with lots of breaks, of course!) to celebrate the joyous, exciting, and surprising moments in computing”.
  • May 10 - Evan Maloney, Distinguished Engineer at HBC Digital, will be speaking at the Brooklyn Swift Developers Meetup at Work & Co in DUMBO. His talk will trace through the evolution of our project structure and development workflow to arrive at where we are today: a codebase that’s about halfway through a transition to Swift. Some folks from our mobile team will be visiting from Dublin for this one!
  • May 11 - Petr Zapletal of Cake Solutions will deliver a talk on how to avoid common pitfalls when designing reactive applications at our NY Scala University Meetup.
  • May 24 - Demo Day for our ScriptEd class - a group of high school students who have been learning web development from HBC Digital engineers in our offices every week since September.
  • May 25 - NYC PostgreSQL User Group Meetup - details coming shortly.
  • May 26 - Summer Fridays start!

HBC Digital is Sponsoring !!Con

HBC Digital conferences

On May 6-7 one of the year’s most unique tech events is taking place in NYC. !!Con (pronounced “bang bang con”) is two-days of ten-minute talks featuring a diverse array of speakers and topics. You won’t find a lineup like this at your typical tech conference - punch cards, cyborgs, glowing mushrooms, queer feminist cyberpunk manifestos and airplane noise are just a few of the topics on the agenda.

Given the excitement around this conference, tickets went fast - sold-out-in-minutes-fast - but there will be videos and a live stream so the 200+ person waiting list and those unable to be in NYC next weekend will still be able to enjoy the talks. Stay tuned to @bangbangcon on Twitter for more info.

We’re thrilled to be supporting this year’s !!Con as an AWESOME! Sponsor. Be sure to say hi to one of our friendly engineers and snag some HBC Digital swag if you’re there!


Pau Carré Cardona To Speak at O'Reilly AI Conference

HBC Digital AI

The O’Reilly Artificial Intelligence Conference is coming to New York City in June. From June 27-29, the top minds in AI will be meeting for “a deep dive into emerging AI techniques and technologies with a focus on how to use it in real-world implementations.”

We are excited that one of our software engineers, Pau Carré Cardona, will be leading a session as part of the “Implementing AI” track on June 29. Pau’s talk will expand upon his widely read blog post on how we have applied deep learning at Gilt to complete tasks that require human-level cognitive skills. He’ll touch on how we have leveraged Facebook’s open source Torch implementation of Microsoft’s ResNet for image classification and his open-source project TiefVision, which is used to detect image similarity.

You can find more details on Pau’s session here: Deep Learning in the Fashion Industry.
