I’m here to warn you about the dangers of front-end user tracking. Not because Google is tracking you, but because it doesn’t track you quite well enough.

    What follows is a story in three parts: the front-end tracking trap I fell into, how we dug ourselves out, and how you can go around the trap altogether.

    Part 1: A Cautionary Tale

    The year was 2019. Opendoor was signing my paychecks.

    We were launching our shiny new homepage.

    We had spent a month migrating our landing pages from the Rails monolith to a new Next.js app. The new site was way faster and would therefore convert better, saving us millions of dollars annually in Facebook and Google ad costs.

    Being responsible, we ran the roll-out as an A/B test, sending half of the traffic to the old site so we could quantify our impact1.

    The impact we measured was negative. Way negative. The new site got crushed.

    What happened?

    WTF. Google had told us our new page was way better. The new site even felt snappier.

    “Figure it out.” The engineers on the revamp paired up with a Data Scientist and set out to figure out what the hell was going on. They started digging into every nook and cranny of the relaunch.

    A week went by. Our director peeked in curiously. Murmurs about postponing the big launch started to circle. Weight was gained; hair was lost.

    Ultimately, the clue that cracked the case was bounces. Bounces (i.e., people leaving right away) were way up on the new site. But it was clear the new site loaded much faster. Bounce rates should have gone down, not up.

    How did we measure bounce rates? We dug in.

    How bounces work

    When the homepage loads, the front-end tracking code records a ‘page view’ event. If a ‘page view’ event is recorded but nothing else happens afterward, analytics considers that user to have “bounced”.

    It turned out that the old site was so slow that many folks left before their ‘page view’ ever got recorded. In other words, the old site was dramatically under-reporting bounces.
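    To make that concrete, here is a toy sketch (made-up events, not our actual pipeline) of how a bounce rate like ours gets computed. Note the failure mode: a session that leaves before the tracking JavaScript fires ‘page view’ never shows up in the log at all, so it silently drops out of the calculation.

    from collections import defaultdict

    # Toy event log of (session_id, event_name) pairs, as recorded by the front-end.
    # Sessions that bailed before the tracking JS loaded never appear here at all.
    events = [
        ("s1", "page view"),
        ("s2", "page view"), ("s2", "clicked cta"),
        ("s3", "page view"),
    ]

    def bounce_rate(events):
        """A session counts as a bounce if its only recorded event is the page view."""
        by_session = defaultdict(list)
        for session_id, event_name in events:
            by_session[session_id].append(event_name)
        bounced = sum(1 for names in by_session.values() if names == ["page view"])
        return bounced / len(by_session)

    print(bounce_rate(events))  # 2 of 3 recorded sessions bounced -> ~0.67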

    It was like comparing two diet plans and saying the one where half the subjects quit was better because the survivors tended to lose weight.

    Part 2: How we fixed bounces

    If the front-end was under-reporting bounces, could we find a way to track a ‘page view’ without relying on the client?

    There was a way: do it on the server. In our case, we tracked the event in Cloudflare, which we were already using for our A/B test setup.

    We started logging a ‘page about to be viewed’ event in place of the front-end ‘page view’ event, which was really a ‘page viewed long enough for the tracking JavaScript to load’ event. Then we updated our bounce metric calculation.
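    As a rough illustration of the idea (not our actual Cloudflare Worker; the event name, cookie handling, and write key are placeholders, and you should double-check signatures against Segment’s analytics-python docs), the server-side version of the event looks something like:

    import uuid

    import analytics  # Segment's Python library (analytics-python)

    analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

    def record_page_about_to_be_viewed(path, anonymous_id=None):
        # Fire the event while serving the HTML, before any client-side JS runs,
        # so slow loads and ad blockers can't stop it from being recorded.
        anonymous_id = anonymous_id or str(uuid.uuid4())
        analytics.track(
            anonymous_id=anonymous_id,
            event="Page About To Be Viewed",
            properties={"path": path},
        )
        return anonymous_id  # hand this back to the client as a cookie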

    Lo and behold, the new infra was better after all! We had been giving our old page too much credit this entire time, but nobody was incentivized to cry wolf.

    Part 3: Front-end tracking done right

    Forsake the front-end. ’Tis a terrible place to track things, for at least three reasons.

    1. Performance

    The less JavaScript (especially third-party) you have on your landing pages, the better. It’s a better customer experience, and it improves your page’s conversion and quality score.

    We calculated that getting rid of Segment and Google Tag Manager on our landing pages would yield about 10-15 points of Google PageSpeed. Google takes PageSpeed into account for Quality Score, which in turn makes your CPMs/CPC cheaper.

    2. Fidelity

    Somewhere between a quarter and a half of all users have ad blockers set up. If you’re relying on a pixel event to inform Google / Facebook of conversions, you’re not telling them about everybody. This makes it harder for their machine learning to optimize which customers to send your way. Which means you’re paying more for the same traffic.

    3. Powerlessness

    You want to believe that you have control of the JavaScript running on your page, but how many browser extensions does the user have? How much has actually loaded? Wait, what version of IE is this person on?

    What should I do instead?

    Take all your client-side tracking, and move it

    • to the edge for things like page views (though the server is fine here, if you KISS)
    • to the server for events that have consequences, like button presses (see the sketch after this list)
    • to the publishers for paid-traffic conversions: inform Google/Facebook via their server-side APIs when feasible, instead of trying to load a pixel
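    For the middle bullet, here is a sketch of what “track it on the server” can look like, using Flask and Segment’s Python library as stand-ins (the route, event name, and create_offer helper are hypothetical, not our actual setup): the event is recorded in the same request that performs the action, so ad blockers and half-loaded JavaScript can’t make it disappear.

    import analytics  # Segment's Python library (analytics-python)
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

    def create_offer(address):
        # Hypothetical stand-in for whatever the button press actually does.
        return "offer_123"

    @app.route("/api/request-offer", methods=["POST"])
    def request_offer():
        offer_id = create_offer(request.json["address"])
        # Recorded if and only if the consequential action actually happened.
        analytics.track(
            anonymous_id=request.cookies.get("anonymous_id", "unknown"),
            event="Offer Requested",
            properties={"offer_id": offer_id},
        )
        return jsonify({"offer_id": offer_id})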

    FAQs & Caveats

    Won’t this break identifying anonymous users?

    It shouldn’t. We used Segment to identify anonymous users; the change was just calling .identify() in Cloudflare (and handling the user cookie there).
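    In Flask terms (standing in for our Cloudflare Worker, with a made-up cookie name; a sketch, not production code, so check Segment’s analytics-python docs for exact signatures), that change amounts to something like:

    import uuid

    import analytics  # Segment's Python library (analytics-python)
    from flask import Flask, make_response, request

    app = Flask(__name__)
    analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

    @app.route("/")
    def homepage():
        # Read (or mint) the anonymous id on the server instead of in the browser,
        # then identify the visitor from here.
        anonymous_id = request.cookies.get("anonymous_id") or str(uuid.uuid4())
        analytics.identify(anonymous_id=anonymous_id, traits={})
        response = make_response("<html>...landing page...</html>")
        response.set_cookie("anonymous_id", anonymous_id, max_age=60 * 60 * 24 * 365)
        return response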

    I heard server-side conversion tracking for Google and Facebook doesn’t perform as well.

    I’ve heard (and experienced) this as well. We’re entering black magic territory here… try it.

    The End.

    Want to tell me I’m misinformed / on-point / needed? Hit me up.


    1. We explicitly only changed the infra which served our landing pages, and kept the content - the HTML/CSS/JS - identical. Once the new infra was shown to work, we would begin to experiment with the website itself. 

    “Alexey, do you feel the points you bring up during our post-mortems are productive?” my tech lead asked at our 1:1.

    Well, shit. I had thought so, but apparently not.

    Earlier in the year, I became the Engineering Manager on a team responsible for half of the outages at our 2,000-person company. After each incident, the on-call engineer would write up a doc and schedule a meeting.

    “How come this wasn’t caught in unit tests?” I found myself asking, in front of the assembled team. Next post-mortem, same thing. “I get that we didn’t have monitoring for this particular metric, but why not?” Week after week.

    The tech lead had asked a great question. Was my approach working?

    “I want to set high expectations,” I told him. “It’s not pleasant being critiqued in a group setting, but my hope is that the team internalizes my ‘good post-mortem’ bar.”

    The words sounded wrong even as I said them.

    “Thanks for the feedback,” I said. “Let me think on it.”

    Feedback budgets

    I thought about it.

    There’s a limited budget for criticism one can ingest productively in a single sitting. Managers will try to extend this budget through famed best-practices like the shit-sandwich and the not-really-a-question question. Employees learn these approaches over time and develop an immunity.

    This happened here. Once my questioning reached the criticism threshold, I was no longer “improving the post-mortem culture.” I was “building resentment and defensiveness”.

    I had run over budget. And yet, there was important feedback to give!

    Change the template, change the world

    Upon reflection, I ended up updating our post-mortem template. My questions became part of the template that got filled out before the meeting.

    This way, it was the template pestering the post-mortem author. My role was simply to insist that the template be filled out; an entirely reasonable ask.

    Surprisingly enough, this worked; post-mortems became more substantive. The team pared down outage frequency and met OKR goals.

    Process linters

    The One Simple Trick I had stumbled into: there is a way around feedback budgets. Turns out there’s this other, vaster budget to tap into: the budget of process automation. When feedback is automated, it arrives sooner, feels confidential, and carries no judgment. This makes it palatable; this is why the budget is vaster.

    The technical analogy here is how we use linters. “Nit: don’t forget to explicitly handle the return value” during code review feels mildly frustrating. Ugh. It’s “just a style thing” and “the code works”. I’ll make the change, but with slight resentment.

    Yet, if that same “unhandled return value” nudge arrives in the form of a linter, it’s a different story. I got the feedback before submitting the code for review; no human had to see my minor incompetence.

    For a software engineer, Have Good Linters is an obvious, uncontroversial best practice. The revelatory moment for me was that templates for documents are just another kind of linter.

    Happy Ending

    My insight completely transformed the way Opendoor Engineering thinks about feedback; I crowd-surfed, held aloft by the team’s grateful arms, to receive my due praise as the master of all process improvement.

    Just kidding; COVID-19 happened and I switched jobs.


    The Appendices Three

    I: Process linters seen in the wild

    Meetings

    Feedback: “we have too many meetings”; “what’s the point of this meeting?”; “do I need to be here?”
    Linter: mandate no-meetings days; mandate agendas; mandate a hard max on attendee count.

    Progress Updates

    Feedback: “Hey, how’s that project going? Haven’t heard from you in a bit.”
    Linter: daily stand-ups (synchronous or in Slack/an app); issue trackers (Linear, Asana, Jira, Trello).

    Bug Reports

    Feedback: “Hey, a friend who uses the app said that our unsubscribe page is broken?”
    Linter: quality pre-deploy test coverage; automated error reporting (Sentry); alerting on pages or business metrics with anomalous activity patterns (Datadog).

    II: You’ve gone too far with this process crap

    The process budget is vaster than the feedback budget, but it isn’t unlimited. A mature company is going to have lots of legacy process - process debt, if you will.

    Process requires maintenance and pruning, to avoid “we do this because we’ve always done this” type problems. High-process managers are just as likely to generate unhappy employees as high-feedback managers.

    III: The post-mortem template changes, if that’s what you’re here for

    A. “5 Whys” Prompts

    Our original 5 Whys prompt was “Why did this outage occur?” During the post-mortem review, I kept asking questions like “but why didn’t this get caught in regression testing?”

    So, after discussion, I added my evergreen questions to the post-mortem template. They are:

    • Why didn’t the issue get caught by unit tests?
    • Why didn’t the issue get caught by integration/smoke tests?
    • Why didn’t the issue get flagged in Code Review?
    • Why didn’t the issue get caught during manual QA?
    • If the outage took over an hour to get discovered, why didn’t the monitoring page our on-call?

    B. Defining “Root Cause”

    “5 Whys” recommends continuing to ask why until you’re about five levels deep. We were often stopping at one or two.

    To make stopping less ambiguous, here is a set of “root causes” that I think is close to exhaustive:

    • trade-off: we were aware of this concern but explicitly made the speed-vs-quality trade-off (e.g., not adding tests for an experiment). This was tech debt coming back to bite us.
    • knowledge gap: the person doing the work was not aware that this kind of error was even possible (e.g., tricky race conditions, worker starvation)
    • brain fart: now that we look at it, we should have caught this earlier. “Just didn’t get enough sleep that night” kind of thing.

      If you keep asking “why” but haven’t gotten to an answer that boils down to one of these, keep going deeper (or get a second opinion).

    2013: Marissa Mayer bans work-from-home at Yahoo.
    2020: Jack Dorsey permanently institutionalizes work-from-home at Twitter.

    Working remotely: not just for COVID-19 anymore.

    From academia to the Open Source movement, remote collaboration is not exactly novel. From GitHub to DuckDuckGo, successful remote-first businesses are no longer rare.

    Remote work occupied the cultural niche of something those whiz kids do, working on a beach in Thailand. Until today.

    Actually, we took this photo in Vietnam. Hi Luka!

    What has prevented remote work adoption?

    1. Inertia. I have a job and a system that “works” for me. Don’t change what isn’t broken.
    2. Status. I have a “real job” at a “real company.” Working remote is for weirdos and loners.
    3. Productivity. Real decisions happen in elevators on the way to lunch. We know how to be efficient and innovative in-person. How the hell do you build a team remotely?

    COVID-19 addressed inertia. Twitter, a Legitimate Tech Company™️, addressed status. Productivity is still pretty hit-or-miss, but it’s early days yet.

    Trust me, I’m an expert guinea pig

    I came to the Bay Area like a moth to the flame of start-ups after college.

    Lacking an H-1B, I left in 2014 and started Hacker Paradise, a boutique travel business catering to remote workers. During that time, I worked remotely as a software engineer for an SF client for over a year.

    In 2016, I came back to San Francisco. I did it for the same reason I came in the first place - the quality of the jobs and the community.

    I don’t love working remotely. I miss the energy I get from a well-run office environment. COVID-19 has taught me that I’m an extrovert, and shelter-in-place has sucked.

    And yet.

    Reality is that which, when you stop believing in it, doesn’t go away

    - Philip K. Dick.

    Implications

    1. Remote work is about to do to San Francisco what San Francisco did to the South Bay.

    Techies have been leaving San Francisco for cities like Portland, Austin, Denver, and Seattle (PADS) for over a decade. We seek a place to settle down and a better cost-to-quality-of-life trade-off.

    This is how you know they appreciate us.

    As it becomes easier than ever to keep your job while moving, this trend will pick up. The microkitchens at Twitter are nice, but not “an extra $2k/mo in rent” nice.

    As ambitious millennials realize they can optimize for quality of life, expect businesses like Culdesac and Hacker Paradise (see what I did there) to blossom.

    San Francisco will neither be abandoned quickly nor completely. It’ll retain a “city emeritus” status, like London, Philadelphia or Palo Alto.

    2. We’re going to learn to run remote organizations.

    We are, as an industry, pretty clueless at remote company building.

    In fairness, we’re not even that good at regular company building. We just figured out 1:1s were a good idea and are still deciding who dotted-lines to whom in the matrix org structure.

    You can’t just slap “remote friendly” on your jobs page and decide you’ve done a “heck of a job.”

    Should team sizes change? How do you measure and manage morale? What about onboarding and knowledge sharing?

    There’s some knowledge to be gleaned from the Basecamp folks; they wrote a book about remote work. That book is the COBOL of remote work; it works, but we can do better.

    3. We’re about to enter a renaissance in remote collaboration tooling.

    Slack is doing phenomenally well during COVID-19, but it still kind of sucks, right? At least when people emailed you, they didn’t expect a response right away. Etiquette around Slack usage is still pretty immature, am I right @channel?

    Also: software engineers use GitHub, and it’s a pretty mature way to collaborate on shared work. Every other industry is still sending around Presentation Final Final [2].pdf. Some of my favorite former co-workers are building companies in this space; expect more businesses like Figma and Loom to blossom.

    PS. You’re wrong, Alexey. Offices are here to stay.

    That’s true. They will.

    We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run

    - Amara’s Law

    The move to remote-as-mainstream-option will take a good decade or two. The inflection point, however, was today, on May 12, 2020.

    I for one look forward to kids asking what it was like when I had to leave for work every day.

    a user sums it up

    Create an open source project, they said.

    It’ll be great for your resume, they said.

    Part 0: Wherein we provide context

    The year was 2013: Meteor was the hip new kid on the block, and CoffeeScript was a reasonable JS dialect choice. We were fresh out of college. Meteor was hosting their first ‘Summer Hackathon’ in San Francisco at 10th and Minna, and we figured this was our shot.

    Sequestering ourselves on the second floor of the hackspace, a couple of college friends and I acquired a whiteboard and some markers and jotted down our big idea list. Not lacking for ambition, we figured the first thing we would fix was server-side rendering. It turned out this seemingly simple task was already on Meteor’s Trello roadmap, so, what the heck, we figured we’d pitch in.

    I inquire about server-side rendering

    Speaking with the actual development team a few hours later, we learned that server-side rendering was (1) hard and (2) coming soon anyway1. Fair enough - we’d solve our other pain point: the lack of a proper admin DSL.

    Part 1: introducing Z, the Mongo Admin

    Coming from Django land, I had always loved being able to describe admin UIs with a very high-level DSL - stuff like

    from django.db import models
    
    class Author(models.Model):
        name = models.CharField(max_length=100)
        title = models.CharField(max_length=3)
        birth_date = models.DateField(blank=True, null=True)

    leading to auto-generated pages that look like:

    django admin screenshot, from tutorial
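    For readers who haven’t used it: the piece that turns the model above into those pages is a one-time registration in the app’s admin.py, roughly:

    # admin.py -- register the model and Django generates the list and edit
    # views for Author automatically.
    from django.contrib import admin

    from .models import Author

    admin.site.register(Author)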

    Meteor didn’t have an admin DSL2 yet. I wanted very much to build one, and a zero-config one (i.e., one that automatically discovers your models) at that. I kept suggesting this idea until the others agreed. Leadership.

    We cranked out a desired feature-list on a whiteboard. It looked like this.

    The List

    • document view
    • dealing with arrays
    • dealing with nested objects
    • boolean fields
    • integer fields
    • editing
    • auto-discovery & Schema
    • collection (table) view
    • home page view
    • doesn’t look terrible
    • wrapping the admin into a package.

    Then we got started. I hid in a corner and tried to figure out how to get a list of Collections, and the fields in each collection, out of an arbitrary Meteor app. It turned out there was no official way, but I could get there by poking under the covers of the undocumented Meteor._RemoteCollectionDriver.mongo.db.collections call.

    It turned out you had to ‘warm up’ the _RemoteCollectionDriver (by creating an arbitrary collection) to get the collections loaded. This was the way things worked around Meteor 0.6.

    Package loading was the other relatively painful thing. We couldn’t quite figure out how to get our package loaded last (we relied on a router and a number of other packages), so Yefim solved the problem the pragmatic hackathon way and named the package Z-Meteor Admin. This way, unless we relied on a package whose name started with two Z’s, Meteor would load us last.

    Launching with the MVP

    Surprisingly, it kind of worked.

    an early version of the UI

    We demoed at the Hackathon and even won the ‘Award for Most Technically Impressive’.

    Greg and I kept working on the project throughout the summer and fall of 2013. We cleaned it up and released 1.0 in December 2013, after gaining over 200 stars on Github.

    For our 1.0 release, I demoed a slightly-less-hacky-and-now-renamed Houston Admin at Meteor Devshop 10:

    Here’s the blog post I wrote to celebrate the event.

    Part 2: A primer on gas tank emptying

    Between the 1.0 in December 2013 and roughly summer 2014, our enthusiasm for working on the project waned. There were many reasons, but here are some:

    1. We had shit to do.

    Greg finished school and was working for Gumroad, not in Meteor. I was doing consulting/startup prototyping work, also mostly not in Meteor. Yefim & Ceasar made the wise decision of not doing much contributing work past the initial release.

    2. We were no longer scratching our own itch.

    Originally, Yefim and I used Houston for the Intern Dinners we were running that summer. It was pretty helpful, and if we needed more UI stuff, we’d just add it. That was summer 2013.

    Even though many of the feature requests we were getting were quite sensible (and we implemented a bunch of them), we really weren’t scratching our own itch after summer 2013.

    3. ‘How do I even’-esque Github Issues

    Actually, having gone through some of the ~270 GitHub Issues on the project, I’m surprised by how good many of them were and how attentive and friendly we were about them. Still, there were a few bad apples, like “css not rendering”, “how to upload files”, and “Cannot login on Heroku app”, that either failed to provide enough context and/or came from folks who were not otherwise yet competent Meteor (or even JS) developers.

    4. No mounting support in Meteor & hacky CSS

    Meteor bundles all CSS & JavaScript together when it compiles. This is not ideal behavior for UI libraries: if the parent app has some logic that says, for example, $('.save').disable(), and later the user goes to Houston, all of a sudden all of our ‘save’ buttons are disabled and it’s unclear why. Likewise, any global CSS rules the parent app chooses to use, like (say) making table columns 200 pixels high, will also make our table columns 200 pixels high. Greg did some crazy things to avoid the namespace collisions.

    Django solves this problem through URL namespaces, allowing the ‘admin’ to behave largely like its own app. Express.js allows ‘mounting’ multiple sub-apps on certain directory paths. In either case, no shared CSS/JS is bundled, avoiding collisions like the above.
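    (In Django’s case the mounting is a single line in the URL config; in today’s syntax it looks roughly like the snippet below, and nothing under /admin/ bleeds CSS or JavaScript into the rest of the app.)

    # urls.py -- the admin is mounted as its own namespaced sub-app under /admin/.
    from django.contrib import admin
    from django.urls import path

    urlpatterns = [
        path("admin/", admin.site.urls),
    ]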

    Meteor did not support Mounting out of the box. I bugged the devs about it at a later Devshop, and got an off-hand note that this was not a priority for now. Later on we tried to add Mounting to Meteor and/or host the Admin as a separate app that shared only the database with the parent app, but by then lacked the enthusiasm to bring the projects to completion.

    5. Testing is tough!

    Reactive apps without a ton of business logic are tough to test! Perhaps we simply lacked the experience here, but when Greg and I tried to add proper integration tests to the app, we spent tens of hours beating our heads against the wall, time that would have been better spent actually fixing bugs.

    6. Router Wars

    When we created Houston in summer 2013, there was a router that I believe was just called Router. Later, Iron-Router became the default router (perhaps this was a rename?), and even later, Flow-Router became the preferred router. Here’s a post on the state of routing as of Summer 2015.

    The point was, you couldn’t really use both routers and so we would need lots of clever work to see if we could support both paths. We discussed this in a Github Issue and I wrote up a prototype but ultimately just didn’t have enough time/energy to ship a supports-both solution.

    7. When Undocumented Internal Dependencies Change

    Remember that _RemoteCollectionDriver hack? In most every release (0.7, 0.8) my ‘whatever, go ask mongo what collections there are’ hack kept breaking and I had to re-implement a new hack based on whichever refactor the Meteor Development Group implemented internally.

    Not particularly taxing, but just a drain.

    8. We never stepped back and thought about architecture.

    Things were in slightly better shape than they were back during the hackathon, but there was never a “how should this package be properly architected” conversation. As a result, in later months, as I tried to go back and implement changes, it was never clear to me what was where or why, or what edge cases I would need to consider and support. This made “weekend-a-month” type support less useful over time, as I knew the codebase less and less well.

    Part 3: Where we go to Costa Rica and try to turn things around

    Costa Rica: the plan

    Around Summer 2014, a year after we started, things were going kind of slumpey. As an effort to regroup, I invited Greg to Costa Rica where I was staying for a few months: Hey Greg, come to Costa Rica!

    The plan was basically to unbreak the critical stuff and make a plan for what we wanted to do next, and maybe also play some soccer on the beach.

    Costa Rica: what actually happened

    I got bogged down in a consulting project that was a bit behind schedule while Greg visited. So we got maybe 1-2 days of work done, and instead of dealing with any of the real issues, my part went into shipping a new feature I had decided I wanted, called Custom Actions, which was kind of cool but not at all the problem.

    Custom Actions

    We still played some soccer, though. Greg was way better at soccer than the rest of us.

    Get other maintainers, they said

    We had two pretty helpful maintainers turn up. First, Roger decided to redesign our UI from basic Bootstrap to the lime-green version you see today. Second, sometime around late 2014, this Swiss dude named Matteo showed up and fixed a bunch of things. That was awesome; we talked to him on Skype and gave him write access, as somebody had suggested doing to grow your maintainer base.

    Later, in 2015, my friend Sam came in and added some proper tests, but development had largely ceased by then.

    Still, for whatever reason, nobody stuck around and answered issues, etc, for the long term, so it was still up to us.

    Part 4: Decline and Fall of the Houston Admin

    Meteor 1.0 is released!

    Coinciding with Greg’s visit, Meteor finally released 1.0 in October 2014. Perhaps we’ll no longer have to adjust the incompatible hacks we use to figure out which collections are to be added and we can focus on the good stuff!

    Blaze vs React

    If only. A year later, the big question up in the air becomes whether to use Meteor’s original Blaze templating or switch to a React-based front-end. React is great and all, but we for one are not up for yet another rewrite of this thing we haven’t really gotten to use for our own stuff in, like, two years.

    1.3

    To add insult to injury, the Meteor Development Group ignores the principles of Semantic Versioning and releases 1.3 in April 2016 with breaking changes - and our package is broken yet again. Maybe.

    Part 5, an Epilogue

    Perhaps the most poignant symptom of the sort of stagnancy that the project fell into is that I had an idea to write this blog-post in early 2015, and it took two years to even get the post-mortem for this project done.

    I think Greg may have fixed the 1.3 issue, but the last change I can see in the codebase is from March 2016, and frankly I just don’t care anymore. On the bright side, it looks like there’s a half-decent competitor that people who want an admin can use3. Also, the Meteor Development Group looks like it’s less interested in Meteor and more focused on GraphQL tools these days, so maybe nobody needs this anyway. In any case: 880 stars on GitHub, so that’s cool!

    What have we learned?

    • Open-source maintenance is hard, especially if your only remaining motivation is altruism. Richard Schneeman gave a great talk about Saving Sprockets last year which nailed it.
    • Perhaps we should have quit while we were ahead and put a “not actively maintained, looking for maintainers” notice on the README back in 2014. That would have been a bit more responsible.
    • On the bright side, building a key piece of infrastructure for a hip new development framework turned out to be a great way to get a lot of users for your open source project.

    …one more thing

    I leave you with the contributions over time graph, which tells the whole story, but with graphics.

    contributions over time

    Thank you

    • To Greg, the prospect of collaborating with whom kept me in the project as long as it did.
    • To Yefim, Ceasar, Matteo, Roger, Sam, and the other contributors!
    • To the Meteor Development Group folks for their contributions to realtime web development, whose ideas will live on independent of Meteor’s future, and also for the free snacks and t-shirts and the Pebbles we won at that hackathon back in 2013.

    1. as far as I know, Meteor still doesn’t have proper Server-Side rendering. Arunoda wrote a community library in 2015, but given the Blaze/React split, I’m uncertain how much of this has come to pass. 

    2. Ironically, the closest thing to a Meteor admin at the time was Genghis, a standalone one-page PHP-based Mongo editor by Justin Hileman, whose younger brother I had somehow hired the year prior. Silicon Valley is a small place.

    3. though it too looks like it has not been updated in 10 months :-/. 

    Published: May 06 2017

    For most1 engineering tasks, I prefer to avoid TDD, or Test Driven Development.

    Yet when I job-hunt, I use TDD religiously2. It has been tremendously helpful.

    Hmpf. Why?

    Most interview challenges actually come in two parts.

    1. Can you take a broadly-worded problem and nail it down into something unambiguous?
    2. Can you implement a challenging but unambiguous spec in the allotted time?

    TDD forces you into the ideal mindset for nailing down (1) problem definitions. There’s no better way to properly grok a problem than to have to think through all the fun ways an implementation could be slightly off.

    The implementation itself (2) also gets easier since you no longer have to assume that your code “probably” works, or fiddle with a REPL each time you’re ready to check.

    So how do I do it?

    As soon as you hear the problem, resist the urge to rush straight to the solution.

    Instead, force yourself to fully think through the test cases for an arbitrary implementation. Make it a dialog - “what if the same element is in the array multiple times?” Get the interviewer engaged. Show you care.

    By the time you’ve written good tests and the interviewer agrees, you’re all set. Even though you tried not to, you probably already have a decent solution in mind. Your tests will let you know when you’re done. They’ll also give you the comfort that your subsequent clean-up / refactoring doesn’t inadvertently break your implementation.

    Yeah, but I can always just write tests later.

    When I conduct interviews, I keep running into candidates clever enough to power through engineering challenges on raw talent3. The candidate implements an optimal solution within fifteen minutes. Then it’s time for tests and she just cannot think of cases beyond the common path, which makes her look like a cowboy.

    I blame the curse of knowledge - once you’ve solved the problem, it’s harder to think about what wrong turns you could have taken. It all feels trivial.

    Seriously though, are you going to make me import a testing framework? Like, Mocha or Cucumber? Ugh.

    ¯\_(ツ)_/¯, I just use asserts and print statements. If I’m feeling fancy, I’ll create an array of test case scenarios and run them like

    test_cases = [
      [[2, 2], 4],
      [[1, 2, 3, 4], 10],
      [[], 0],
      [[-3, 2], -1],
      [[100, 2], 102],
      [["banana"], None]
    ]

    # quantum_addition is the function under test -- imagine your interview
    # solution here.
    for test_case in test_cases:
      args, expected_answer = test_case
      answer = quantum_addition(args)

      print("called quantum_addition({}) and got {}, expected {}"
        .format(args, answer, expected_answer))

      if answer != expected_answer:
        print("UH OH!")

      # (or you could just do: assert answer == expected_answer)
    

    If you insist on being fancy, you can do that thing where you ask your interviewer if they’d like you to use a proper testing framework first, and of course they don’t, but now it looks like you could and you’re just doing them a favor by using asserts, cause you’re like, super assertive.

    Ok so now what?

    Well obviously you should come work at Opendoor with me! We have the good coconut water and don’t run out of it until like noon, 12:30 on good days. Plus, there’s a decent chance you’ll run into me on your phone screen and we can have that awkward interaction of trying to figure out why my name sounds familiar.


    Thanks to Vaibhav, Joe, Nick, Gayle, Tess, Charlie, Kevin, Zain, and Yahel for giving feedback on earlier drafts that I incorporated.

    1. Don’t get me wrong, TDD has its uses, but we often mistake what Kent Beck meant by ‘unit’. Testing ‘just right’ is hard. 

    2. To be clear, TDD won’t help you with interviews where you don’t actually code (i.e., whiteboard interviews). You’ll still have to start by asking questions, though.

    3. I used to have that problem earlier in my career. I’ve addressed it, largely by becoming dumber.