• Your Last Business Card

    What do you bring with you when you go to an event? Do you have a business card, or do you just do the awkward phone number exchange if you connect with someone? What if there was an easier way?

    I went to an event last week with the prototype of my single, reusable “business card”, and it was a great success – lots of people asked where I had gotten it and were surprised to hear I had made it myself, so I wanted to share what I did. It’s inexpensive and very easy to do.

    What you’ll need

    • Some NFC stickers (It’s hard to find a non-Amazon source for these unfortunately)
    • An NFC encoding app (NFC Tools is free and available on both Android and iOS)
    • Access to something like Canva or Photopea
    • A QR code generator
    • A link to send people to
    • One paper business card
    • Optional – lanyard or card holder

    What to do

    Build your Landing Page

    The first thing you need is a place to send people. One of the simplest ways to do this is to use a service like Linktree – you can use it to build one page that you keep up to date with all the ways that folks can contact you, your social media profiles, and links you want to highlight. If you prefer to roll your own, that’s possible too – I built my own page using some repeatable WordPress blocks. However you choose to do this, you just need a link to be your home base.

    Generate Your QR Code

    Using any QR code generator, generate a code that goes to your landing page. The site I have linked has lots of configuration options, and will generate a code with a transparent background, which makes it easy to incorporate into many business card designs.

    Don’t forget to play around with the settings – there are lots of options to allow you to really customise the look of it, changing colours to match your card, inserting a logo, making it look less blocky, etc.

    Double-check that your phone can read it by opening your phone camera and pointing it at your new code – you should see your link pop up and be able to go straight to it. Once you’re happy, download your code as a PNG file (to keep that transparent background).
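
    If you’d rather script this step than rely on a web generator, a few lines of Python can do the same job. This is only a minimal sketch, assuming a recent version of the qrcode library (pip install "qrcode[pil]") and a placeholder URL – swap in your own landing page link:

        # Minimal sketch: generate a transparent-background QR code pointing
        # at your landing page. The URL below is a placeholder.
        import qrcode

        LANDING_PAGE = "https://example.com/links"

        qr = qrcode.QRCode(
            error_correction=qrcode.constants.ERROR_CORRECT_H,  # high error correction survives logos and print wear
            box_size=20,  # pixels per module; larger prints more sharply
            border=4,     # quiet zone around the code
        )
        qr.add_data(LANDING_PAGE)
        qr.make(fit=True)

        # Recent versions of the library accept "transparent" as a background
        # colour and produce an RGBA image, ready to drop onto a card design.
        img = qr.make_image(fill_color="black", back_color="transparent")
        img.save("qr-code.png")

    As with the web generator, point your phone camera at the saved PNG afterwards to make sure it still scans once you’ve tweaked colours.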

    Design a Business Card

    I used Canva to find a business card template that I liked, and modified it to my needs. Canva is free to use (though some templates are paid), and you can filter to exclude the paid template options too. It’s easy to click through the options and modify a card to suit you.

    You can upload your new QR code to Canva and drop it into your new card design. Because I only wanted a one-sided card, I put this at the bottom of mine, but this could also go on the back of your card depending on how you plan to store/share the card. If you’re planning to have a visible back on your business card, remember that you’ll be adding a sticker back there too, so either make sure it’s a printable sticker and print your card at home, or leave a space in your design where a sticker can be added without covering anything important.

    You can also use an app like Photopea to design a business card if you prefer to start from absolute scratch and want the most customisation options. Printers such as DPI will have PDF templates you can download and use to ensure you get the size right.

    Program Your NFC Sticker

    Install NFC Tools on your phone and head to the “WRITE” tab. Tap “Add a record” and then choose “URL/URI”. Enter the landing page address that you created earlier and hit OK. Tap “Write” and then hold your phone close to one of your NFC stickers. Tada! Your sticker will now direct people right to your site whenever they tap their phone to it!

    The video here shows the process in the NFC Tools app. It’s worth exploring all the options for writing to tags – there are a lot of fun applications, e.g. configuring a tag to connect to your wifi. No more calling out a wifi password to guests; they just tap to connect.
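
    The app is all you need, but if you ever want to batch-write a whole pile of stickers from a laptop, the same URL record can be written with a USB NFC reader and Python. This is a rough sketch only, assuming a reader supported by the nfcpy library (pip install nfcpy ndeflib) and my placeholder link from earlier:

        # Rough sketch: write one NDEF URL record per tap - the same kind of
        # record the NFC Tools app writes. Assumes a USB reader nfcpy supports.
        import ndef
        import nfc

        LANDING_PAGE = "https://example.com/links"  # placeholder - use your own link

        def write_url(tag):
            # Called each time a sticker touches the reader.
            if tag.ndef is None or not tag.ndef.is_writeable:
                print("This tag can't be written to")
            else:
                tag.ndef.records = [ndef.UriRecord(LANDING_PAGE)]
                print("Wrote", LANDING_PAGE)
            return False  # return immediately instead of waiting for tag removal

        with nfc.ContactlessFrontend("usb") as clf:
            # Blocks until a sticker is presented, then calls write_url.
            clf.connect(rdwr={"on-connect": write_url})

    Either way, the sticker ends up holding the same thing: a single URI record pointing at your landing page.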

    Package It Up

    Get your business card printed somewhere or print at home (I used Digital Printing Ireland as I’ve used them for a lot of things now and always loved their service), add your NFC sticker somewhere, and pop it into your lanyard.

    Why a QR code and an NFC sticker? Most phones have NFC now, but not everyone keeps that turned on or uses it, and certain older phone models don’t have NFC at all. Having the QR code gives you a fallback if your NFC tap doesn’t work.

    Take It For A Spin

    You’re all set with (hopefully) the last business card you’ll ever need to own. Need to exchange contact details? Invite others to tap your card or scan the QR code.

    Since my card links to my page, I can keep that landing page up to date with important links, new articles, and changing contact details, all without needing to change my business card. It worked really well at my most recent event, and was a quick, easy way to share all of my key details with folks. The lanyard was highly visible, encouraged people to ask about connecting with me, and cut right down on the awkward “calling out my phone number and misspelling my email address” shuffle.

    I hope this was helpful!

  • New Gender Pay Gap Portal

    If you’ve been on my site before, chances are you’ve seen the link in the menu to the 2022 gender pay gap database. When I set up this page on my website last year, I had to work around a number of tricky limitations and ended up having to insert custom code into WordPress to display the data the way I wanted to. There wasn’t a plugin (or three) that I could use to load the data from a database and allow for it to be searched, updated, etc. easily.

    I found some ways to make it work for the first year, but then the government announced that they would not actually have a portal for 2023 either, and that they didn’t have a clear date for when a portal would be available. I started to try and expand my existing setup, and ran into many of the same issues as last year, compounded by the fact that I was now trying to manage and display two different sets of data. I kept coming back to the conclusion that it would be better to build something more custom, so over the last week or so I’ve spent time doing just that. It’s all the same data, displayed as it was before in searchable, sortable tables, but with a number of improvements.

    The new site has the full 2022 database, and I’m building the 2023 database at the moment. There’s also a form where you can submit reports and their info directly, to help me build the 2023 dataset more quickly (and any future datasets too).

    This new sub-site will let me keep expanding the dataset, and hopefully expand to include some visualisations of data, comparisons, etc.

    If you appreciate what I’m doing here and want to help, one of the best ways you can do so is to use the form on the new site and submit links to any company’s gender pay gap report, and ideally pull out the headline figures for me.

    Watch this space for updates as the 2023 database grows!

  • What we owe to each other

    I attended a talk recently about migraine, and included in the talk was a demo and a quick blurb about a new-ish medical device that could potentially treat migraine and other headache conditions (an external vagus nerve stimulation device, for the curious). It seems an interesting development, since previous incarnations of the same required surgery and an implanted device, but when I did a little more investigating I was disappointed to discover that the device operates on a subscription model. Every 93 days, you have to buy a new card to “activate” the device and make it work for another block of time. It’s not to do with the monitoring of a patient, or wanting a clinical touchpoint every 3 months or so (because you can also opt for the very expensive “36 months in one go” option); it is simply a business model – sell a device that becomes an expensive paperweight if a subscription is not maintained.

    Over the last few days, this has prompted me to think about the landscape we are building for ourselves – one populated with smart devices, subscription devices, and an increasing cohort of medical devices – what it will look like in the future, what we owe to the customers of these devices if we are involved in making them, and ultimately, what we owe to each other.

    Subscription Business Models

    Subscription-based business models are nothing new – chances are you’re paying for at least one subscription service yourself. For many businesses they are an attractive choice as they mean a continuous revenue stream, a constant inflow of cash you can plan around, rather than one big bang purchase and then a drought. And lots of people are fine with paying for subscription models, even if they don’t love them, but what if we’re talking about more than just streaming music or paying monthly for Photoshop? What if instead of software or an intangible thing, we’re talking about physical devices?

    Physical devices with a subscription model aren’t exactly new, and they’ve had their problems – Peloton came under fire in 2021 after an update appeared to force a subscription onto its users and rendered its treadmills useless without one. BMW were recently the subject of much negative press for their subscription-model heated seats – shipping cars with the physical equipment needed to heat the seats, but locking it behind a subscription paywall. And HP Instant Ink subscribers found that once they cancelled the Instant Ink service, the ink left in their cartridges stopped working, even though it was still sitting there in their printers.

    This is all very annoying, but mostly you could argue the above are luxuries – your seats aren’t heated, your day still goes on. But these are not the only kinds of devices that, increasingly, are coming with subscriptions.

    What happens when your bionic eyes stop working?

    The merging of technology and medicine is, to a certain extent, inevitable. People have unofficially relied on technology to supplement and assist with medical issues for a long time now (such as those with diabetes hacking insulin pumps to essentially make an artificial pancreas, a process known as looping, or people with vision impairments using apps to see through the camera and receive audio descriptions), and as time goes on, manufacturers are joining the market with “official” solutions. There is huge potential to make lives better with assistive technologies – by automating processes that were manual or artificially replacing senses, to name just two examples. Often these developments have been lauded as the “way of the future” and a huge step forward for humanity, but what happens when the initial shine passes?

    A CNN article from 2009 speaks about Barbara Campbell, a woman who was diagnosed with retinitis pigmentosa – a condition which gradually robbed her of her sight. In 2009, she was participating in an FDA-approved study of an artificial retina – a technological solution to her impaired vision, a microchip to help her see again by stimulating the retina electrically in the way that light normally would. Combined with a pair of sunglasses and a camera to capture the world around her, the devices allowed her to see again, with her interpretation of the new signals improving all the time. By all accounts, it’s a dream scenario – technology that is really doing good and changing someone’s life for the better.

    Now, in 2022, things have changed. In 2020, the company that manufactured these implants, Second Sight, ran into financial difficulty. Their CEO left the company, employees were laid off, and when asked about their ongoing support, Second Sight told IEEE Spectrum that the layoffs meant it “was unable to continue the previous level of support and communication for Argus II centers and users.” Around 350 patients worldwide have some form of Second Sight’s implants, and as the company wound down operations, it told none of them. A limited supply of VPUs (video processing units) and glasses is available for repairs or replacements, and when those are gone, patients are figuratively and literally in the dark.

    Barbara Campbell was in a NYC subway station changing trains when her implant beeped three times, and then stopped working for good.

    Now patients are left with incredibly difficult decisions. Do they continue to rely on a technology which changed their lives but which has been deemed obsolete by the company, that may cause problems with procedures such as MRIs, with no support or repair going forward? Or do they undergo a potentially painful surgery to remove the devices, accruing more medical costs and removing what sight they have gained? Do they wait until the implant fails to remove it, or do they remove it now, gambling on whether it might continue working for many years? Do they walk around for the rest of their lives with obsolete, non-functional technology implanted in them, waiting for the day it fails and replacement parts can no longer be found?

    Meanwhile, Second Sight has moved on, promising to invest in continuing medical trials for Orion, their new brain implant (also intended to restore vision), for which it received NIH funding. Second Sight are also proposing a merger with a biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company. Will those who have participated in the Orion trials to date continue to receive support after this merger, or into the future?

    IEEE Spectrum have written a comprehensive and damning article examining the paths taken by Second Sight, piecing together the story through talking to patients, former doctors, employees, and more, and although it’s clear that Strickland and Harris know more about this than anyone, even they can’t get a good answer from the companies about what happens now to those who relied on the technology. Second Sight themselves don’t have a good answer.

    Subscription Paperweights

    Second Sight’s bionic eyes didn’t come with a subscription, but they should have come with a duty of care that meant their patients never had to worry about their sight permanently disappearing due to a bug that no one would ever fix or a wire failing. And while bionic eyes are an extreme example of medical tech, they’re an excellent example of the pitfalls that this new cohort of subscription-locked medical devices may leave patients in.

    Let’s return for a moment to the device that started me down this line of thought – the external vagus nerve stimulator. I have had a difficult personal journey with migraine treatment. I have tried many different drug-based therapies, acute and daily preventatives, and have yet to find one which has been particularly effective or that didn’t come with intolerable side effects. I am now at a point where the options available to me are specialist meds available only with neurologist consults, special forms, and exceptions, or the new and exciting world of migraine medtech devices. And the idea of a pocketable device that I use maybe twice a day, perhaps again if I am experiencing an attack, is appealing. With fewer side effects, no horrible slow taper off a non-working med, and, let’s be honest, a cool cyberpunk vibe, I’m more than willing to give this kind of thing a try. Or I would be, if it wasn’t tied to a subscription model.

    Because I can’t help but think of Barbara, and the day her eyes stopped. And I can’t help but think about how many companies fail, suddenly, overnight, with no grand plan for a future or a graceful wind-down. And I can’t help but worry that I might finally find something to soothe this terrible pain in my head, to give me my life back, only to have it all disappear a year down the line because the company fails and I have no one to renew a subscription with, and my wonder-device becomes an expensive and useless piece of e-waste.

    The decision to add a subscription model to a device such as this is financial, not medical. Internal implantable vagus nerve stimulators already exist for migraine and other conditions, and you don’t renew those by tapping a subscription card. This is a decision motivated by revenue.

    To whom do you have a duty of care?

    The migraine vagus nerve device is not alone. When I shared the news about this device with a friend, she told me that her consultant had been surprised to find that her smartphone-linked peak flow meter did not have a subscription attached. Subscription medtech devices have become a norm without many noticing, because many people do not (and may never) rely on devices like this to exist.

    The easy argument here is that companies deserve to recoup their expenses – they invested in the devices, in their testing and development, and in their production. If the devices are certified by medical testing boards, if they underwent clinical trials, there are significant costs associated with that, and given the potential market share of some of these devices, if they simply sell them for a one-time, somewhat affordable cost, they will never see a return on their investment. This, in turn, will discourage others from developing similar devices. And look, it’s hard to refute that because it is true – it is expensive to do all of these things, and a company will find it very hard to continue existing if it simply sells a handful of devices each year and has no returning revenue stream. If this were purely about the numbers, this would be the last you’d hear from me on the topic. But it’s not.

    If you develop a smart plug with a subscription model, and your company fails, this is bad news for smart plug owners, but a replacement exists. A world of replacements, in fact. And the option to simply unplug the smart device and continue without it is easy, and without major consequence. The ethical consequences are low. But developing a medtech device is simply not the same. It is about so much more than the numbers. This is not about whether someone can stream the latest Adele album, this is about ongoing health and in some cases lives, and this is an area of tech that should come with a higher burden than a smart doorbell or a plug.

    When you make any sort of business plan, you’ll consider your investors, your shareholders, perhaps your staff, and certainly your own financial health, but when it comes to medtech, these aren’t the only groups of people to whom you should owe your care and consideration. Your circle of interested parties extends to people who may rely on your device for their health, for their lives, beyond just a simple interest or desire to have a product that works. Simply put, is your duty of care to your shareholders, or to your patients?

    What do we owe to each other?

    Do you owe the same duty of care to a smart doorbell owner as to a smart heart monitor owner? Who will face a tougher consequence if your company fails without warning?

    Second Sight took grants and funding to continue developing and trialling their brain implant while they quietly shelved the products they had already implanted in real patients – is this ethical? Is it right? How can anyone trust their new implant, knowing how patients using their previous implant were treated and the position they were left in? And is it right to continue to grant funding to this research?

    Companies right now are developing devices which lock people into a subscription model that will fail if and when the company fails, at a time when we are all concerned about the impact of technology on the environment, conscious of e-waste, and trying to reduce our carbon footprint. They are developing devices that work for 12 months and then must be replaced with new ones. Is it right to develop throwaway medical devices that stop working simply so that you can lock people into a renewing subscription/purchase model?

    It is undeniable that technology can help where current medical options have failed. We have already seen this with devices that are on the market, and with new devices that arrive. We should want to pursue these avenues, to make lives better and easier for those who need help. We should fund these technologies, spur on innovation and development in these areas, and help everyone to reach their fullest potential.

    But we owe it to each other to push for better while we do. To push back on devices that will fail because someone’s payment bounces. To push back on devices that only have subscription models and no option to purchase outright. To push for higher standards of care, better long term support and repair plans which can exist even if the company fails. To push for companies to be held to these standards and more, even if it makes things more difficult for them. And to push companies to keep developing, even with these standards in place, to keep developing even though it is hard.

    We deserve a duty of care that extends not just to the lifetime of a device, but to the lifetime of a patient.

    This isn’t just about home security, or smart lights – this is people’s health, their lives. The duty of care should be higher, the ethical burden stronger. We owe it to each other to not allow this world to become one where vision and pain relief and continued survival depends on whether or not you can pay a subscription.

  • Actually inclusive engineering

    I want to talk about ethics, diversity, and inclusion in engineering, how we often miss the mark, the impact that has, and the changes we can make to truly bring change from the inside out. My goal is to explain why this is important, and show you some examples where a simple decision resulted in a barrier for someone.

    Why does this matter? Why is it important to be thinking about ethics when we’re developing software? Because software (apps, websites, etc) is becoming the fabric of society – increasingly it is involved in everything we do, from shopping for groceries to online banking to socialising. There is very little in our lives now that is not touched, in some way, by software.

    As we integrate software into more and more areas of our lives, we are also increasingly turning to automated and predictive solutions to perform tasks that were once manual. We are asking computers to do “human” things – more open-ended “thinking” tasks – but computers aren’t human. Most people, when they think of AI, think of something like Data from Star Trek. The reality, however, is that what we have is “narrow” AI – models which are trained to do a specific thing and that thing only. These models are unable to add context to their decisions, to take additional factors into account if they are not in the data model, or even to question their own biases. A model takes in data, and returns data.

    Lastly, we often spend a lot of time discussing how we will implement something, but perhaps not as much time discussing whether we should implement something. We know that it is possible to build software which will have in-app purchases, and that it’s possible to incentivise those in-app purchases so that they are very attractive to app users. We have seen that it is possible for people to target this marketing towards children – answering the “can”, but not addressing the “should we?”

    When I say we should consider the “should” rather than the “can”, what do I really mean? I’m going to show some real-world examples where decisions made during product design ripple out into the world with negative effects. In each of these examples, there probably wasn’t malicious intent, but the result is the same – a barrier for an individual. Most of these examples are not due to programming errors; they happen by (poor) design.

    Have you ever accidentally subscribed to Amazon Prime?

    Do you know what a dark UX pattern is? You’ve probably encountered one, even if you’ve never heard the term. Have you ever accidentally opted in to something you meant to deselect, or found an extra charge on a bill for something you didn’t even realise you had signed up for? Have you ever tried to cancel a service, only to discover that the button to “cancel” is hidden below confusing text, or that the button that looks like a cancel button actually signs you up for even longer? How about accidentally signing up to Amazon Prime when you just wanted to order a book? These are dark UX patterns – design choices that are intended to trick the user. They benefit the person who implements them, usually to the detriment of the user. In the image above, we see two buttons to add your tickets to the basket. An optional donation can be added with the tickets, but the option to add without a donation is much harder to read. It also points backwards, visually implying that it would bring you back a step. Is the value of this donation worth the confusion? Is this ethical? Should a donation be opt-out or opt-in?

    Have you ever been told that your name is incorrect?

    Your name is one of the first things you say to people you meet; it is how you present yourself to the world. It is personal and special. But what if you are told that your name is incorrect due to lazy or thoughtless programming every time you try to book an airline ticket, or access banking, healthcare, or any number of services online? A multitude of online forms fail to support diacritical marks, or declare that names are too short or too long, based on simple biases and the incorrect assumption that everyone’s name looks like our own. Instead, we should be asking: do we need to separate people’s names? Why do you need a “first” and “last” name? Could we simply have a field which accommodates a user’s name, whatever form that takes, and then another which asks what they prefer to be called?
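
    As a sketch of what that might look like in practice (the field names here are my own illustrative choices, not any standard), a sign-up model can simply store one free-text name plus whatever the person wants to be called:

        # Illustrative sketch: one unrestricted name field plus a preferred
        # form of address, instead of forcing "first" and "last" names.
        from dataclasses import dataclass

        @dataclass
        class Person:
            full_name: str       # however they write it: with diacritics, spaces, or a single name
            preferred_name: str  # what to call them in emails and greetings

        def validate_name(name: str) -> str:
            # Accept any non-empty name; don't reject diacritics, apostrophes,
            # spaces, or "unusual" lengths.
            name = name.strip()
            if not name:
                raise ValueError("Please tell us your name")
            return name

        user = Person(
            full_name=validate_name("Siobhán Ní Bhriain"),
            preferred_name=validate_name("Siobhán"),
        )
        print(f"Welcome, {user.preferred_name}!")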

    Let’s talk about everyday barriers

    We’ve never been more aware of handwashing, and a lot of places are using automatic soap or hand sanitiser dispensers to ensure that people stay safe without having to touch surfaces. But what if they don’t work for you? Many soap dispensers use near-infrared technology: an infrared LED sends out invisible light, and a hand placed underneath reflects that light back to a sensor. The reason the soap doesn’t just spill out all day is that the dispenser only activates when enough light bounces back. If your skin is darker, and actually absorbs that light instead of reflecting it, then the sensor will never trigger. Who tests these dispensers? Did a diverse team develop them or consider where they would be installed?

    Why don’t Zoom backgrounds work for me?

    If you’re like me, you’ve been using meeting backgrounds either to have some fun or to hide an untidy mixed-use working space while adapting to working from home during this past year. When a faculty member asked Colin Madland why the fancy Zoom backgrounds didn’t work for him, it didn’t take too long to debug: Zoom’s facial detection model simply failed to detect his face. If you train your facial detection models on data that isn’t diverse, you will release software that doesn’t work for lots of faces. This is a long-running problem in tech, and companies are just not addressing it.

    Why can’t I complain about it on Twitter?

    On Twitter, the longer image I shared was cropped… to include only Colin’s face.

    When Colin tweeted about this experience, he noticed something interesting about Twitter’s auto-cropping on mobile. Twitter crops photos when you view them on a phone, because a large image won’t fit on screen. They developed a smart cropping algorithm which attempted to find the “salient” part of an image, so that it would crop to an interesting part that would encourage users to click and expand, instead of cropping to a random corner which may or may not contain the subject. Why did Twitter decide that Colin’s face was more “salient” than his colleague’s? It could be down to the training data for their model, once again – they used a dataset of eye-tracking information, training their model to look for the kinds of things that people focus on when they look at an image. Were the photos tested diverse? Were the participants diverse? Do people just track to “bright” things on a screen? It certainly seems there was a gap, and the end result is insulting. Users tested the algorithm too, placing white and black faces on opposite ends of an image to see how Twitter would crop them. The results speak for themselves. Twitter said they tested for bias before shipping the model… but how?

    This impacts more than social media. It could impact your health

    Pulse oximeters measure oxygen saturation. If you’ve ever stayed in a hospital, chances are you’ve had one clamped to your finger. They use light penetration to measure oxygen saturation, and they often do not work as well on darker skin. This has come to particular prominence during the pandemic, because hospitals overwhelmed with patients started spotting differences between the oxygen levels reported by bloodwork and by the pulse ox. Because these devices can report higher oxygen saturations than are actually present, this could impact clinical treatment decisions and delay necessary treatment when a patient’s O2 level drops below critical thresholds.

    This could change the path your life takes

    COMPAS is an algorithm widely used in the US to guide sentencing by predicting the likelihood of a criminal reoffending. In perhaps the most notorious case of AI prejudice, in May 2016 the US news organisation ProPublica reported that COMPAS is racially biased. While COMPAS did predict reoffending with reasonable accuracy, black people were twice as likely to be rated higher risk and yet not actually reoffend. The graphs show that risk scores are very far from a normal distribution – they are skewed heavily towards low risk for white defendants. In the real-life example pairs from the ProPublica analysis, the black defendant was rated as higher risk despite having fewer previous offences, and in each case that individual did not reoffend, although the “lower risk” defendant did.

    And these are, sadly, just selected examples. There are many, many, many more.

    Clang, clang, clang went the trolley

    As we come to the end of the real-world examples, I want to leave you with a hypothetical that is becoming reality just a little bit too fast. Something that many people are excited about is the advent of self-driving cars. These cars will avoid crashes, keep drivers safe, and allow us to do other things with our commute. But…

    Have you ever heard of the trolley problem? It’s a well-known thought experiment that is often used to explore different ethical beliefs. In case you aren’t familiar with it yet, the picture above is a fair summary. Imagine you are walking along and you see a trolley, out of control, speeding down the tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

    • Do nothing and allow the trolley to kill the five people on the main track.
    • Pull the lever, diverting the trolley onto the side track where it will kill one person.

    What is the right thing to do?

    The tricky thing is that there are numerous ways to try and decide what is right, and there isn’t really a right answer. As humans we can perhaps draw on context to aid our decision, and take in all of the information about a situation even if we have never encountered it before, but even then we still can’t always arrive at a right answer. So how do we expect a smart car to decide?

    While we might not see a real life trolley problem in our lifetimes, the push towards self driving cars will almost certainly see a car presented with variations on this problem – in avoiding an accident, does the car swerve to hit one pedestrian to save five? Does it not swerve at all, to preserve the life of the driver? Given what we know about recognition software as it currently stands, will it accurately recognise every pedestrian?

    How will the car decide? And who is responsible for the decision that it makes? The company? The programmer who implemented the algorithm?

    I don’t have an answer for this one, and I’m not sure that anyone does. But there is a lot that we can do to action inclusive and diverse programming in our jobs, every single day, so that we remove the real barriers that I’ve already shown.

    What can we do?

    First and foremost, diversity starts from the very bottom up. We need to be really inclusive in our design – think about everyone who will use what you make and how they will use it, and really think beyond your own experience.

    Make decisions thoughtfully – many of the examples I’ve shown weren’t created with malicious intent, but they still hurt, dehumanised, or impaired people. Sometimes there isn’t going to be a simple answer, sometimes you will need to have a form with “first name” and “last name”, but we can make these decisions thoughtfully. We can choose to not “go with the default” and consider the impact of our decisions beyond our own office.

    Garbage in, garbage out – if you are using a dataset, consider where it came from. Is it a good, representative set? Is your data building bias into the system, or is it representative of all of your customers?

    Inclusive hiring – when many diverse voices can speak, we spot more of these problems, and some of them won’t make it out the door. Diverse teams bring diverse life experiences to the table, and show us the different ways our “defaults” may be leaving people out in the cold.

    Learn more – In the coming days and weeks, I’ll be sharing more links and some deep dives into the topics I’ve raised above, because there is so much more to say on each of them. I’m going to try and share as many resources and expert voices as I can on these topics, so that we can all try to make what we make better.