• Your Last Business Card

    What do you bring with you when you go to an event? Do you have a business card, or do you just do the awkward phone number exchange if you connect with someone? What if there were an easier way?

    I went to an event last week with my prototype for my single, reusable “business card”, and it was a great success: lots of people asked me where I had gotten it and were surprised to hear that I had made it, so I wanted to share what I did. It’s inexpensive and very easy to do.

    What you’ll need

    • Some NFC stickers (it’s hard to find a non-Amazon source for these, unfortunately)
    • An NFC encoding app (NFC Tools is free and available on Android and iOS)
    • Access to something like Canva or Photopea
    • A QR code generator
    • A link to send people to
    • One paper business card
    • Optional – lanyard or card holder

    What to do

    Build your Landing Page

    The first thing you need is a place to send people. One of the simplest ways to do this is using a service like Linktree – you can use this to build one page that you keep up to date with all the ways that folks can contact you, your social media profiles, and links you want to highlight. If you prefer to roll your own, that’s possible too; I built my own page using some reusable WordPress blocks. However you choose to do this, you just need a link to be your home base.

    Generate Your QR Code

    Using any QR code generator, generate a code that goes to your landing page. The site I have linked has lots of configuration options, and will generate a code with a transparent background, which makes it easy to incorporate into many business card designs.

    Don’t forget to play around with the settings – there are lots of options to allow you to really customise the look of it, changing colours to match your card, inserting a logo, making it look less blocky, etc.

    Double check your phone can read it by opening your phone camera and pointing it at your new code – you should see your link pop up and be able to go straight to it. Once you’re happy, download your code as a PNG file (to maintain that transparent background).
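
    If you’d rather script this step than use a web generator, here’s a minimal sketch in Python using the qrcode and Pillow packages – an assumption on my part, since any generator will do, and the URL below is just a placeholder for your own landing page. It makes the background transparent by converting the white pixels after rendering.

    ```python
    # Minimal sketch: generate a QR code PNG with a transparent background.
    # Assumes the qrcode and Pillow packages; the URL is a placeholder.
    import qrcode

    qr = qrcode.QRCode(box_size=10, border=4)
    qr.add_data("https://example.com/links")  # your landing page here
    qr.make(fit=True)

    # Render to a PIL image, then turn the white background transparent
    img = qr.make_image(fill_color="black", back_color="white").convert("RGBA")
    img.putdata([
        (r, g, b, 0) if (r, g, b) == (255, 255, 255) else (r, g, b, a)
        for r, g, b, a in img.getdata()
    ])
    img.save("qr-code.png")  # PNG preserves the alpha channel
    ```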

    Design a Business Card

    I used Canva to find a business card template that I liked, and modified it to my needs. Canva is free software (with some paid templates), and you can filter to exclude paid template options too. It’s easy to click through options and modify a card to suit you.

    You can upload your new QR code to Canva and drop it into your new card design. Because I only wanted a one-sided card, I put this at the bottom of mine, but this could also go on the back of your card depending on how you plan to store/share the card. If you’re planning to have a visible back on your business card, remember that you’ll be adding a sticker back there too, so either make sure it’s a printable sticker and print your card at home, or leave a space in your design where a sticker can be added without covering anything important.

    You can also use an app like Photopea to design a business card if you prefer to start from absolute scratch and want the most customisation options. Printers such as DPI will have PDF templates you can download and use to ensure you get the size right.

    Program Your NFC Sticker

    Install NFC Tools on your phone and head to the “WRITE” tab. Tap “Add a record” and then choose “URL/URI”. Enter the landing page address that you created earlier and hit OK. Tap “Write” and then hold your phone close to one of your NFC stickers. Tada! Your sticker will now direct people right to your site whenever they tap their phone to it!
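
    If you’re curious what the app is doing under the hood, or want to batch-write a stack of stickers from a computer, here’s a minimal sketch using the Python nfcpy and ndeflib packages with a USB NFC reader – all assumptions on my part, since the NFC Tools app is genuinely all you need. The URL is a placeholder.

    ```python
    # Minimal sketch: write a URL record to an NFC tag via a USB reader.
    # Assumes the nfcpy and ndeflib packages; the URL is a placeholder.
    import ndef  # ndeflib: builds NDEF records
    import nfc   # nfcpy: talks to the reader

    LANDING_PAGE = "https://example.com/links"  # your landing page here

    def write_url(tag):
        # Called by nfcpy when a sticker touches the reader
        if tag.ndef and tag.ndef.is_writeable:
            tag.ndef.records = [ndef.UriRecord(LANDING_PAGE)]
            print("Wrote", LANDING_PAGE, "to the sticker")
        else:
            print("This tag is not NDEF-writeable")
        return True

    with nfc.ContactlessFrontend("usb") as clf:
        clf.connect(rdwr={"on-connect": write_url})  # waits for a tap
    ```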

    The video here shows the process in the NFC Tools app. It’s worth exploring all the options for writing to the tags; there are a lot of fun applications – e.g. configure a tag to connect to your wifi. No more calling out a wifi password to guests; they just tap to connect.

    Package It Up

    Get your business card printed somewhere or print it at home (I used Digital Printing Ireland, as I’ve used them for a lot of things now and have always loved their service), add your NFC sticker somewhere, and pop it into your lanyard.

    Why a QR code and an NFC sticker? Most phones have NFC now, but not everyone keeps that turned on or uses it, and certain older phone models don’t have NFC at all. Having the QR code gives you a fallback if your NFC tap doesn’t work.

    Take It For A Spin

    You’re all set with (hopefully) the last business card you’ll ever need to own. Need to exchange contact details? Invite others to tap your card or scan the QR code.

    Since my card links to my page, I can keep that landing page up to date with important links, new articles, and changing contact details, all without needing to change my business card. It made a real impact at my most recent event, and was an easy and quick way to share all of my key details with folks. The lanyard was super visible and encouraged people to ask about connecting with me, and it really cut down on the awkward “calling out my phone number and misspelling my email address” shuffle.

    I hope this was helpful!

  • Progress with PayGap.ie

    I’ve made some great connections in the last few months with my PayGap.ie portal, and wanted to share some of what I’ve been doing.

    I visited the Geary Institute of Public Policy to speak at one of their lunchtime seminars about the portal, and following this, they invited me to write a short paper for PublicPolicy.ie. That paper outlined the state of reporting so far, what I’ve learned, and suggested some policy improvements that could be made.

    I visited Phoenix FM to talk to them about the portal, and also gave them a brief statement on the announcement of a government portal (at long last!).

    There’s more coming, and I’m very grateful for the opportunities I’ve had to share my work with others so far!

  • Ireland’s Gender Pay Gap Reporting

    In 2022, Irish companies with more than 250 employees published their first gender pay gap reports. While some companies were already voluntarily disclosing this information, legislation introduced in 2021 made the publication mandatory. It’s January 2024 and we’re two years in, so let’s take a look at what this legislation has given us, and what it’s still failing to do.

    A little background

    Why am I talking about these gender pay gap reports – isn’t it good enough that the data is out there? Well, in 2022 I added a page on this site which showed the data I had gathered for companies reporting that year. At the end of 2023, I realised that the page wasn’t enough, and that I needed more space and flexibility, so I moved the data to its own site. All of the data gathering has been manual (more on that later), which means that I have read almost every gender pay gap report published since 2022. All of this means that I am intimately familiar with the quality of the reports that have been published, and I’m unhappy to say that it absolutely is not good enough that the data is out there. Let’s talk about why.

    Accessibility

    The manual data gathering for the databases has been made much more difficult by the choices made by companies producing the reports. When the government published the legislation, they included specifics about what data should be included in the report, but absolutely no details on what format the report should take, how it should be laid out, etc. The result is that reports arrive in a mishmash of formats, everything from 13-page glossy PDFs with stock photos of happy working women, to single web pages that were overwritten with 2023’s data, to PowerPoint files that you have to download to access. With no standard format, there is no way to automate the gathering process – it all has to be read and recorded manually, as there’s no guarantee you’d be scraping the right data.

    Since all of the data gathering has been manual, this has given me a lot of time to think about how people are choosing to present the data, and consider the accessibility failures that they represent.

    A quartile graph from Coillte’s 2022 report.
    A headcount graph from Respond’s 2022 report.

    Above is a quartile graph from Coillte’s 2022 gender pay gap report (a format which they repeated in 2023). It’s clear that many entities chose to use brand or theme colours for their reports, but as this example from Coillte shows, that choice is very much to the detriment of readability. This green-on-green graph is extremely unclear even to me, a reader without colourblindness. The legend appears to show the colour of the text in the boxes, not the bars of the graph. The bars in the graph are a gradient which only partially matches the text colour. And finally, the legend itself is so small and so far from the graph that it’s very difficult to compare the colours at all. This is a barely readable graph, and the information doesn’t appear anywhere else in text-only format; you have to interpret the two similar shades of green in order to read the data. Near-identical shades of green proved a popular theme in 2022, with Respond choosing that palette to display their overall headcount. Again, the colour blocks on the legend are extremely small, making it difficult to be certain which block is which.

    Cork City Council headcount
    Royal Victoria Eye & Ear Part Time Mean & Median Gap

    In 2023, Cork City Council chose shades of red and slightly browner red to display the overall proportions of male and female employees. Once again, there’s a repeat of the tiny legend blocks, with not enough colour to allow for a comparison. These colours would also be extremely difficult for people with some forms of colourblindness to read. As with so many other reports, the information is only represented in graph form, so if you can’t accurately interpret the graph, you can’t get the data.

    The image on the right is from the Royal Victoria Eye & Ear Hospital’s 2022 report. I have not changed the size or quality of this upload; this is exactly how the image appears in the report. The blurry text in this heavily pixelated image is not reproduced elsewhere in the report, whether in a table or in explanatory text below the graph. You simply have to zoom in and do your best to read the figures. Is it a 6 or a 5? 26.50 or 25.60? Almost every graph in their report is included like this – obviously a screenshot from their reporting tool, included without any consideration of the quality.

    Quartiles from Kildare County Council’s report

    In their 2023 report, Kildare County Council chose to represent their quartile data using shades of red and slightly-darker-red. The numbers are represented only by the sideways text on the bars, and because the bars change size, the font changes size for each number too. And finally, in case you thought you could at least be certain about which bar represents male and which female, the little figure on top of each bar also changes for every single bar…

    At the risk of making this an overly long post, I wanted to highlight just one more pair of graphs.

    These are the quartile graphs for Analog Devices. On the left is the graph for 2022, and the right is 2023. At a quick glance, you would be forgiven for thinking that incredible progress had been made, and that the upper quartiles now contain far more women than men, but look again. The proportions haven’t changed much at all, but the colours have been flipped between the years, so that the colour which represented male in 2022 represents female in 2023. A charitable interpretation of this is that they are simply using brand colours, and whoever compiled the 2023 report didn’t see the 2022 version. A less charitable interpretation of this is obvious.

    Truly, this section could go on for pages; it could be (and maybe soon will be) a gallery of data visualisation sins spanning many more reports. The examples I’ve shared above are not the only offenders, merely some of those I noted while reviewing reports. And while getting the data out of these graphs has been merely frustrating for me, anyone using a screen reader, or anyone with a form of colourblindness, would likely struggle to extract any data at all from many of these reports. They don’t meet basic accessibility standards, making them useless for many people.

    Oversight is an oversight

    If you’re thinking that surely someone, somewhere, must be overseeing these reports, and wondering why they haven’t asked the companies to do a better job, you are going to be disappointed. With no government-provided central portal for the reports, there is also absolutely no oversight of them. No government body has been designated to oversee the reports and make sure they are readable and contain all the information they are supposed to. The result is clear to anyone who reads more than a few of them.

    Since the beginning of reporting, companies have been publishing reports that do not meet accessibility standards, that do not include all of the data they should, or that include data that has been incorrectly calculated or reported. The Brothers of Charity reported their figures in euro rather than in percentages (the specified reporting method) in both 2022 and 2023. Several companies (Actavo, Ardmac, Depaul, Dublin Bus, to name just a few) did not report their quartiles correctly, and some didn’t include part or all of the data (Mazars, Standard Life).
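
    For context, the legislation asks for the gaps as percentages of male hourly pay, not absolute euro amounts – here’s a quick sketch of the arithmetic with entirely made-up figures, to show why the two aren’t interchangeable.

    ```python
    # Illustration only, with invented numbers - not any company's real data.
    male_mean_hourly = 25.00    # EUR
    female_mean_hourly = 21.50  # EUR

    # The mean hourly gap, as specified: a percentage of male mean hourly pay
    gap_pct = (male_mean_hourly - female_mean_hourly) / male_mean_hourly * 100
    print(f"Mean hourly gender pay gap: {gap_pct:.1f}%")  # 14.0%

    # Reporting "EUR 3.50" instead of "14.0%" makes reports incomparable
    # across companies with different pay levels.
    ```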

    The legislation says that the reports must be accessible for three years, but when I undertook a review of my 2022 dataset, the number of broken links was substantial. Some companies moved the files and it was easy to find them again, but other companies have simply overwritten last year’s webpage, obliterating their 2022 data.

    Lastly, as I write this it is the 24th of January. Given that companies must choose a snapshot date in June and then have six months from that date to report, every single company’s report was due on or before December 31st. As you can see from the “missing” page on my portal, that has not happened. Allowing for the fact that some companies may have published in the few days since I last checked, or that I may be missing the link for one or two of them, it is fair to say that most (if not all) of the companies on that list are late with their reports. Exactly the same thing happened with 2022’s reports – companies were publishing well into the first quarter of the following year, with absolutely no sanctions. I expect to still be gathering 2023 data well into March or April of this year.

    Failure to centralise

    Gender pay gap reporting has been part of UK legislation for several years, and there is a central portal for the reports. Companies submit their data there, and anyone can search the resulting database. While the Irish government promised a similar central portal, they have now said they have no timeline for when it will be available. This is perhaps the biggest and most disappointing failure of this legislation, as the lack of a portal fundamentally undermines the point of publishing the data in the first place.

    Having data on the pay gaps in Irish companies is so useful, but only if people can actually use it. In its current form, it is effectively unusable. People who want to compare the data from one year to the next are left with no choice but to build their own database and perform their own calculations. With reports that go missing each year, formats that vary wildly, and data that is omitted, you have to build up some sort of spreadsheet yourself just to view the change over time. The same is true for people who wish to compare companies across an industry, for example. And for those who don’t want to become citizen data scientists simply to understand where their own employer stands, where do they go?

    With no central portal, no standard format, and no oversight about the correctness of the reports, the companies might as well be printing them out and then throwing them straight in the recycling bin.

    Wrapping it all up

    It will come as no surprise to anyone who’s read this blog, or spoken to me in person, that I believe that sunlight is the best disinfectant – i.e. the best way to promote change is to show the current state of play in black and white. Viewing the pay gaps and quartiles for companies makes it abundantly clear that there is still a long way to go in terms of female representation in certain industries, and in higher paying jobs in many industries. Companies often invest a lot in PR about how they do so much for their female employees, but the figures don’t lie, and that’s why it is so important to have this data accessible to everyone.

    While I am proud of the work I’ve done with my gender pay gap portal, my ultimate desire is that this be a redundant project. It shouldn’t be up to me or any other individual citizen to deliver on a promise made by our government, to hold companies accountable for the deadlines they miss, and to remind them of their legislative obligations. As time passes, the number of companies included in the mandatory reporting will increase and this will become unsustainable for me, as one individual, to maintain.

    The original legislation has gotten us halfway there; it’s not good enough for the government to simply drop the ball now. They need to bring this across the finish line, and deliver an effective portal that everyone can use, that can act as a long-term historical repository, and that actually makes this publishing worthwhile.

  • New Gender Pay Gap Portal

    If you’ve been on my site before, chances are you’ve seen the link in the menu to the 2022 gender pay gap database. When I set up this page on my website last year, I had to work around a number of tricky limitations and ended up having to insert custom code into WordPress to display the data the way I wanted to. There wasn’t a plugin (or three) that I could use to load the data from a database and allow for it to be searched, updated, etc. easily.

    I found some ways to make it work for the first year, but then the government announced that they would not have a portal for 2023 either, and that they didn’t have a clear date for when one would be available. I started trying to expand my existing setup, and ran into many of the same issues as last year, compounded by the fact that I was now trying to manage and display two different sets of data. I kept coming back to the fact that it would just be better to build something more custom, and over the last week or so I’ve spent time doing just that. It’s all the same data, displayed as it was before in searchable, sortable tables, but with a number of improvements.

    The new site has the full 2022 database, and I’m building the 2023 database at the moment. There’s also a form where you can submit reports with their info directly, to help me build 2023’s dataset more quickly (and any future datasets too).

    This new sub-site will let me keep expanding the dataset, and hopefully expand to include some visualisations of data, comparisons, etc.

    If you appreciate what I’m doing here and want to help, one of the best ways you can do so is to use the form on the new site and submit links to any company’s gender pay gap report, and ideally pull out the headline figures for me.

    Watch this space for updates as the 2023 database grows!

  • Email your representatives about Facial Recognition Technology

    In light of the recent riots in Dublin, Helen McEntee is calling for an expansion in the use of facial recognition technology. As I have discussed on this blog, there are numerous ethical, privacy, and civil liberties issues with facial recognition technology. While McEntee has said “There will have to be safeguards – codes of practice – in place. People’s individual privacy, GDPR issues, all of this will have to be addressed and will have to be brought forward with the legislation”, I do not believe our government have a good track record in respecting these issues, and rushing to expand the scope of this legislation will almost certainly ensure that safeguards, codes of practice, and other issues will not be addressed in time.

    I believe it is important to contact my government representatives to let them know that I do not support the expansion of this legislation, and that these concerns need to be addressed. I encourage you to do the same, and for ease, I have included below the email I have sent to my own TDs. Feel free to use this as is, or modify it to suit. You can find TD contact details, and identify those in your constituency, on the Oireachtas website.

    Dear [name],

    I am writing to express my concern over plans to expand the scope of facial recognition technology legislation. While I understand that there are calls for action following the riots in Dublin last week, I believe it is crucially important that the serious ethical, privacy, and civil liberties issues with using this technology are understood and addressed by all of our representatives before moving forward with the adoption of this technology, and certainly before rushing to expand the scope.

    The Irish Council for Civil Liberties has frequently discussed the issues with facial recognition technology and has been vocal in its desire for a ban on the technology, with good reason. There are known key issues with the technology that have not been addressed, including:

    1. Bias, Discrimination, and Accuracy – Countless studies have shown that most, if not all, facial recognition algorithms are biased, with much lower accuracy when recognising faces that are not white and male. This would be merely annoying if it were simply a case of being unable to face-unlock your phone, but the application of this technology in law enforcement has led to wrongful arrests in the US due to erroneous identification. A NIST study [1] of almost 200 facial recognition algorithms noted that they were 10 to 100 times more likely to misidentify Asian and African American faces. A test of Amazon’s facial recognition technology [2] matched 28 members of Congress with mugshots of people who had been arrested for crimes.

      FRT algorithms are trained on datasets, and unless that dataset is of high quality and represents a full diversity of individuals, the algorithm will learn the biases that already exist in society. Unchecked operation of these algorithms will result in individuals with facial differences, individuals from minority communities, individuals who are not white and male, being disproportionately affected by misidentifications.
    2. Transparency and Accountability – Citizens have a right to know if their faces are being used to train the FRT algorithms the Gardaí plan to deploy, and there is currently no established method for people to discover if their face has been used, or to opt out of their face being used. Biometric data privacy and security has been a continuing problem for this government, and we have seen from the outcomes of the investigations around the Public Services Card that the level of transparency and accountability necessary for the deployment of these systems is not in place, and not robust enough to be trusted. There is no room for scope creep with the gathering, storage, and use of biometric data, and the use of FRT with bodycam and CCTV footage offers citizens absolutely no way to opt out of this data gathering.

      Facial recognition algorithms themselves operate as a “black box” – offering no explanation as to why faces were matched or not, what criteria were used to match faces, etc. This means that Gardaí will also be unable to explain to someone why their face has been matched, or why they have been questioned in relation to an issue. The algorithms are completely opaque, and do not provide the kind of clear, understandable transparency that is absolutely necessary when applied to policing to ensure that they are not abused or misused. With a black box system, could the Gardaí even satisfy a request for a removal from the database? How could they ensure this has been completed?

      Investigations by the Data Protection Commission have shown numerous issues with, and violations of, data privacy laws by both An Garda Síochána, and government bodies. This has not established a basis for trust and transparency between the public and these organisations when it comes to respecting data privacy, and does not lead me to believe that these bodies will be held properly accountable for issues with data privacy in respect to FRT.
    3. Regulations and safeguards – If the technology must be used, then it is absolutely critical that clear regulations and safeguards are established before a single piece of footage is scanned. These cannot be delayed or applied after the fact, and they cannot be vague. There must be clear regulations about who can use it, how and where it can be used, and what options are available for people who feel it has been abused. There must be clear guidelines about what actions can be taken in the case of misidentification, or requests for removal from the database used by the algorithm.

      Anyone working in technology could speak to the idea of “least privilege” – i.e. that the way you apply security is by assessing the absolute least amount of privilege necessary to do something, and then allowing only that. If the government is going to insist on the use of FRT, I urge you to consider applying such a principle. If the use is to be restricted to reviewing footage after the fact, make that explicit and clear in the legislation. Those who can use it, and the exact circumstances in which it may be used, must be clearly and explicitly defined. The penalties for misuse of the technology by any individual should be clearly defined too.

    Facial recognition technology has the potential to seriously impact all of our daily lives, with implications for civil liberty, mass surveillance, and misuse of biometric data to name just a few issues. While I recognise the importance of using modern technology to enhance public safety, it is imperative that we not sleepwalk into writing loose legislation that will lead to misidentification of individuals and abuse of the systems.

    Regards,

    Jennifer Keane

    [1] NIST study on Facial Recognition – https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software

    [2] Amazon’s FRT – https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28

  • What we owe to each other

    I attended a talk recently about migraine, and included in the talk was a demo and a quick blurb about a new-ish medical device to potentially treat migraine and other headache conditions (an external vagus nerve stimulation device, for the curious). It seems an interesting development, since previous incarnations of the same required surgery and an implanted device, but when I did a little more investigating I was disappointed to discover that the device operates on a subscription model. Every 93 days, you have to buy a new card to “activate” the device and make it work for another block of time. It’s not to do with the monitoring of a patient, or wanting a clinical touchpoint every 3 months or so (because you can also opt for the very expensive “36 months in one go” option), it is simply a business model – sell a device that becomes an expensive paperweight if a subscription is not maintained.

    Over the last few days, it has prompted me to think about the landscape we are building for ourselves – one populated with smart devices, subscription devices, as well as an increasing cohort of medical devices – what it will look like in the future, what we owe to customers of these devices if we are involved in making them, and ultimately, what we owe to each other.

    Subscription Business Models

    Subscription-based business models are nothing new – chances are you’re paying for at least one subscription service yourself. For many businesses they are an attractive choice as they mean a continuous revenue stream, a constant inflow of cash you can plan around, rather than one big bang purchase and then a drought. And lots of people are fine with paying for subscription models, even if they don’t love them, but what if we’re talking about more than just streaming music or paying monthly for Photoshop? What if instead of software or an intangible thing, we’re talking about physical devices?

    Physical devices with a subscription model aren’t exactly new, and they’ve had their problems – Peloton came under fire in 2021 after it seemed to release an update that forced a subscription onto its users and rendered its treadmills useless without one. BMW were recently the subject of much negative press for their subscription-model heated seats – shipping cars with the physical equipment needed to heat the seats, but locking it behind a subscription paywall. And HP Instant Ink subscribers found that once they cancelled the service, the ink left in their cartridges stopped working, even though it was still sitting there in their printers.

    This is all very annoying, but mostly you could argue the above are luxuries – your seats aren’t heated, your day still goes on. But these are not the only kinds of devices that, increasingly, are coming with subscriptions.

    What happens when your bionic eyes stop working?

    The merging of technology and medicine is, to a certain extent, inevitable. People have unofficially relied on technology to supplement and assist with medical issues for a long time now (such as people with diabetes hacking insulin pumps to essentially make an artificial pancreas, a process known as looping, or people with vision impairments using apps to see through their phone’s camera and receive audio descriptions), and as time goes on, manufacturers are joining the market with “official” solutions. There is huge potential to make lives better with assistive technologies, by automating processes that were manual or artificially replacing senses, to name just two examples. Often these developments have been lauded as the “way of the future” and a huge step forward for humanity, but what happens when the initial shine passes?

    A CNN article from 2009 tells the story of Barbara Campbell, a woman who was diagnosed with retinitis pigmentosa – a condition which gradually robbed her of her sight. In 2009, she was participating in an FDA-approved study of an artificial retina – a technological solution to her impaired vision, a microchip to help her see again by stimulating the retina electrically in the way that light normally would. Combined with a pair of sunglasses and a camera to capture the world around her, the devices allowed her to see again, with her interpretation of the new signals improving all the time. By all accounts, it’s a dream scenario – technology that is really doing good and changing someone’s life for the better.

    Now, in 2022, things have changed. In 2020, the company that manufactured these implants, Second Sight, ran into financial difficulty. Their CEO left the company, employees were laid off, and when asked about ongoing support, Second Sight told IEEE Spectrum that the layoffs meant it “was unable to continue the previous level of support and communication for Argus II centers and users.” Around 350 patients worldwide have some form of Second Sight’s implants, and as the company wound down operations, it told none of them. A limited supply of VPUs (video processing units) and glasses is available for repairs or replacements, and when those are gone, patients are figuratively and literally in the dark.

    Barbara Campbell was in a NYC subway station changing trains when her implant beeped three times, and then stopped working for good.

    Now patients are left with incredibly difficult decisions. Do they continue to rely on a technology which changed their lives but which has been deemed obsolete by the company, that may cause problems with procedures such as MRIs, with no support or repair going forward? Or do they undergo a potentially painful surgery to remove the devices, accruing more medical costs and removing what sight they have gained? Do they wait until the implant fails to remove it, or do they remove it now, gambling on whether it might continue working for many years? Do they walk around for the rest of their lives with obsolete, non-functional technology implanted in them, waiting for the day it fails and replacement parts can no longer be found?

    Meanwhile, Second Sight has moved on, promising to invest in continuing medical trials for Orion, their new brain implant (also intended to restore vision), for which it received NIH funding. Second Sight are also proposing a merger with a biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company. Will those who participated in the Orion trials to date continue to receive support in the future, or after this merger?

    IEEE Spectrum have written a comprehensive and damning article examining the paths taken by Second Sight, piecing together the story through conversations with patients, former doctors, employees, and more. Although it’s clear that Strickland and Harris know more about this than anyone, even they can’t get a good answer from the companies about what happens now to those who relied on the technology. Second Sight themselves don’t have a good answer.

    Subscription Paperweights

    Second Sight’s bionic eyes didn’t come with a subscription, but they should have come with a duty of care that meant their patients never had to worry about their sight permanently disappearing due to a bug that no one would ever fix or a wire failing. And while bionic eyes are an extreme example of medical tech, they’re an excellent example of the pitfalls that this new cohort of subscription-locked medical devices may leave patients in.

    Let’s return for a moment to the device that started me down this line of thought – the external vagus nerve stimulator. I have had a difficult personal journey with migraine treatment. I have tried many different drug-based therapies, acute and daily preventatives, and have yet to find one which has been particularly effective or that didn’t come with intolerable side effects. I am now at a point where the options available to me are specialist meds available only with neurologist consults, special forms, and exceptions, or the new and exciting world of migraine medtech devices. And the idea of a pocketable device that I use maybe twice a day, perhaps again if I am experiencing an attack, is appealing. With fewer side effects, no horrible slow taper off a non-working med, and, let’s be honest, a cool cyberpunk vibe, I’m more than willing to give this kind of thing a try. Or I would be, if it wasn’t tied to a subscription model.

    Because I can’t help but think of Barbara, and the day her eyes stopped. And I can’t help but think about how many companies fail, suddenly, overnight, with no grand plan for the future or a graceful wind-down. And I can’t help but worry that I might finally find something to soothe this terrible pain in my head, to give me my life back, only to have it all disappear a year down the line because the company fails, I have no one to renew a subscription with, and my wonder-device becomes an expensive and useless piece of e-waste.

    The decision to add a subscription model to a device such as this is financial, not medical. Internal implantable vagus nerve stimulators already exist for migraine and other conditions, and you don’t renew those by tapping a subscription card. This is a decision motivated by revenue stream.

    To whom do you have a duty of care?

    The migraine vagus nerve device is not alone. When I shared the news about this device with a friend, she told me that her consultant had been surprised to find that her smartphone-linked peak flow meter did not have a subscription attached. Subscription medtech devices have become a norm without many noticing, because most people do not (and may never) need to rely on devices like this.

    The easy argument here is that companies deserve to recoup their expenses – they invested in the devices, in their testing and development, and in their production. If the devices are certified by medical boards, if they underwent clinical trials, there are significant costs associated with that, and given the potential market share of some of these devices, if they simply sell them for a one-time, somewhat affordable cost, they will never see a return on their investment. This, in turn, will discourage others from developing similar devices. And look, it’s hard to refute that, because it is true – it is expensive to do all of these things, and a company will find it very hard to continue existing if it simply sells a handful of devices each year with no returning revenue stream. If this were purely about the numbers, this would be the last you’d hear from me on the topic. But it’s not.

    If you develop a smart plug with a subscription model, and your company fails, this is bad news for smart plug owners, but a replacement exists. A world of replacements, in fact. And the option to simply unplug the smart device and continue without it is easy, and without major consequence. The ethical consequences are low. But developing a medtech device is simply not the same. It is about so much more than the numbers. This is not about whether someone can stream the latest Adele album, this is about ongoing health and in some cases lives, and this is an area of tech that should come with a higher burden than a smart doorbell or a plug.

    When you make any sort of business plan, you’ll consider your investors, your shareholders, perhaps your staff, and certainly your own financial health, but when it comes to medtech, these aren’t the only groups of people to whom you should owe your care and consideration. Your circle of interested parties extends to people who may rely on your device for their health, for their lives, beyond just a simple interest or desire to have a product that works. Simply put, is your duty of care to your shareholders, or to your patients?

    What do we owe to each other?

    Do you owe the same duty of care to a smart doorbell owner as to a smart heart monitor owner? Who will face a tougher consequence if your company fails without warning?

    Second Sight took grants and funding to continue developing and trialling their brain implant while they quietly shelved the products they had already implanted in real patients – is this ethical? Is it right? How can anyone trust their new implant, knowing how patients using their previous implant were treated and the position they were left in? And is it right to continue to grant funding to this research?

    Companies right now are developing devices which lock people into a subscription model that will fail if and when the company fails, at a time when we are all concerned about the impact of technology on the environment, conscious of e-waste, and trying to reduce our carbon footprint. They are developing devices that work for 12 months and then must be replaced with new ones. Is it right to develop throwaway medical devices that stop working simply so that you can lock people into a renewing subscription/purchase model?

    It is undeniable that technology can help where current medical options have failed. We have already seen this with devices that are on the market, and with new devices that arrive. We should want to pursue these avenues, to make lives better and easier for those who need help. We should fund these technologies, spur on innovation and development in these areas, and help everyone to reach their fullest potential.

    But we owe it to each other to push for better while we do. To push back on devices that will fail because someone’s payment bounces. To push back on devices that only have subscription models and no option to purchase outright. To push for higher standards of care, better long term support and repair plans which can exist even if the company fails. To push for companies to be held to these standards and more, even if it makes things more difficult for them. And to push companies to keep developing, even with these standards in place, to keep developing even though it is hard.

    We deserve a duty of care that extends not just to the lifetime of a device, but to the lifetime of a patient.

    This isn’t just about home security, or smart lights – this is people’s health, their lives. The duty of care should be higher, the ethical burden stronger. We owe it to each other to not allow this world to become one where vision and pain relief and continued survival depends on whether or not you can pay a subscription.

  • Facial recognition is terrible at recognising faces.

    If you’ve ever used a Snapchat, Instagram, or TikTok filter, you’ve probably used facial recognition technology. It’s the magic that makes it possible for the filters to put the virtual decorations in the right place, it’s why your beauty filter eyeshadow (usually) doesn’t end up on your cheeks.

    It’s fun and cute, but chances are that if you’ve never had any issues with these filters, it’s because your face looks a lot like mine. Unfortunately, the same cannot be said for many other people.

    Why don’t zoom backgrounds work for me?


    During the pandemic I, along with many other people, used Zoom virtual backgrounds extensively – calm pictures to hide my office background, funny pictures to communicate my current stress levels, you name it. I found they worked fairly well, perhaps struggling a little to handle the edges of my curly hair, but my experience wasn’t universal. When a faculty member asked Colin Madland why the fancy Zoom backgrounds didn’t work for him, it didn’t take too long to debug. Zoom’s facial detection model simply failed to detect his face.

    

    Why can’t I complain about it on twitter?

    On Twitter, the longer image I shared was cropped… to include only Colin’s face.

    When Colin tweeted about this experience, he noticed something interesting with Twitter’s auto-cropping for mobile. Twitter crops photos when you view them on a phone, because a large image won’t fit on screen. They developed a smart cropping algorithm which attempted to find the “salient” part of an image, so that it would crop to an interesting part that would encourage users to click and expand, instead of cropping to a random corner which may or may not contain the subject.

    Guess which part of the image Twitter’s algorithm cropped to. Why did Twitter decide that Colin’s face was more “salient” than his colleague’s?

    How does this happen?

    Facial Recognition Technology (FRT) is an example of Narrow Artificial Intelligence (Narrow AI) – an AI which is programmed to perform a single task. This is not Data from Star Trek, nor the replicants from Blade Runner; this is more like the drinking bird from The Simpsons, which works… until it doesn’t.

    Facial recognition algorithms are trained to do one thing, and one thing only – recognise faces. And the way you train an algorithm to recognise faces (the way you train any of these narrow algorithms) is by showing it a training dataset – a set of images that you know are faces – and telling it that all the images it sees are faces, so it should learn to recognise features in these images as parts of a face. But there isn’t an “intelligence” deciding what a face looks like, and while it is possible to try and help the algorithm by providing a descriptive dataset, it’s not possible to direct this “learning” specifically. The algorithm is a closed box. Once the algorithm has “learned” what a face is, you can test it by showing it images that do, or do not, contain faces, and see if it correctly tells you when a face is present. But, crucially, it is still very, very difficult to tell what the algorithm is actually using to decide whether a face is present. Is it looking for a nose? Two eyes? Hair in a certain location?
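
    To make the closed box concrete, here’s a minimal sketch using OpenCV’s bundled, pretrained face detector (the image filename is a placeholder). Note what you get back: a list of bounding boxes, and absolutely no explanation of why the detector decided those regions are faces, or why it might have missed one.

    ```python
    # Minimal sketch: run a pretrained face detector and inspect its output.
    # Assumes the opencv-python package; "photo.jpg" is a placeholder.
    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Output: (x, y, width, height) boxes - and no reasons whatsoever
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s):", list(faces))
    ```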

    Is it even looking at the faces at all?

    Of course it’s looking at the faces to determine if faces are present, right? How else would it do it?

    Well, if you’ve ever done any reading about AI recognition, you’ll undoubtedly be amused by the above statement, because (as I mentioned above) you can’t really specifically direct the “learning” that happens when an algorithm is figuring things out, and in perhaps their most human-like trait, AI algorithms take shortcuts. There’s a well known case of an algorithm trained to determine whether the animal in a photo was a wolf or a husky dog, and by all accounts the algorithm worked very well. Until it didn’t.

    The researchers intentionally did not direct the learning away from what the AI picked up the first time it tried to understand the images, and so what the AI actually learned was that wolves appear on light backgrounds and huskies don’t. It wasn’t even looking at the animals themselves – just the photo background. And thus, once it was tried on a larger dataset, it began incorrectly identifying wolves and dogs based on the background alone.

    This situation might have been constructed just for research, but the hundreds of AI tools developed to help with Covid detection during the pandemic were not. As you may have already guessed, those algorithms by and large were not the solution to the problem – instead, they had the same problem as above.

    A review of thousands of studies of these tools identified just 62 models of sufficient quality to be studied, and of those, none were found to be suitable for clinical use due to methodological flaws and/or underlying bias. Because of the datasets used to train these models, they didn’t learn to identify Covid. One commonly used dataset contained paediatric patients, all aged between 1 and 5. The resulting model learned not to identify Covid, but to identify children.

    Other models learned to “predict” Covid based on a person’s position while being x-rayed, because patients scanned lying down were more likely to be seriously ill and the models simply learned to tell which x-rays were taken standing up and which were taken lying down. In some other cases the models learned what fonts each hospital used and fonts from hospitals with more serious caseloads became predictors of Covid.

    And there are so many other incidents of AI failing to recognise something correctly, or failing to have human oversight or context applied, that the AIAAIC maintains an entire repository to catalogue them.

    What has this got to do with Facial Recognition Technology in Ireland?

    Quite a lot. I’ve already shown you that AI models are often not very good at recognising the things they are supposed to be trained to recognise. Much of this is down to the training dataset that you use – the pictures you show your algorithm to teach it how to recognise things.

    You may have already spotted a problem here because, of course, not all faces are alike – some people have facial differences that may mean their face does not have a typical structure, some may have been injured in a way that has changed their face. And, of course, the world contains a wide variety of skin tones. And if you are asking if the kinds of datasets available to train these models contain a diverse set of faces, incorporating all of the above and more, then the answer is a resounding no. Efforts are being made by some groups to address this, but progress is slow.

    And what this boils down to is this: if your training dataset only contains typical faces, then your facial recognition algorithm will only learn to recognise and identify typical faces. If you train your facial detection models using data that isn’t diverse, you will release software that doesn’t work for lots of faces. This is a well known, pre-existing, and long running problem in tech.

    Moreover, if you are not careful with your dataset, you will reinforce existing stereotypes and bias, a problem which is more severe for certain groups, such as women and POC. Models don’t just learn what you want them to learn, they also learn stereotypes like “men don’t like shopping” when they are trained on datasets in which most shoppers are female.

    Can’t we put in some checks and balances, some safeguards?

    Well, yes and no. While people are working to improve datasets, many existing datasets contain demographic information that will likely be used to train and build national or large-scale models, and since this demographic information represents how things currently are, it will also represent how people have been impacted by existing or past biases (e.g. historically poorer parts of a country, or groups of people who have been oppressed and, as a result, have lower rates of certain demographic indicators such as college education or home ownership). It’s hard to escape these biases when they are built into the data, because they are still built into our societies.

    Additionally, in order for there to be checks and balances, we would need those in charge to understand the implications of all of the above (and more) and to care enough to enforce them by writing legislation with nuance and skill. This is a complex area that has caused issues in many countries that have tried to adopt some form of FRT.

    We have examples in our own country’s recent history about how our government has legislated for (and cared about) personal and biometric data, and their record is not good. An investigation by the Data Protection Commission found significant and concerning issues with the way the Public Services Card had been implemented, and the scope expanded without oversight or additional safeguards, sharing data between organisations that were never intended to be in the scope of the card. The Commission said, in its findings, that “As new uses of the card have been identified and rolled-up from time to time, it is striking that little or no attempt has been made to revisit the card’s rationale or the legal framework on which it sits, or to consider whether adjustments may be required to safeguards built into the scheme to accommodate new data uses.” The government’s first response to this report was not to adjust course or review the card internally itself, but to appeal this ruling and continue to push the card without any revisiting. (They dropped the appeal quietly in December 2021).

    On this basis, I do not believe that the government in its current form has the capacity or the will to legislate safely for the use of FRT. A first foray into gathering public data en masse resulted in illegal scope creep, extending the card’s reach far beyond what was permitted without any announcement or oversight, and no review or change to the safeguards. This is something that simply cannot be permitted to happen when it comes to the use of facial recognition technology, which has the potential to be infinitely more invasive, undermining rights to privacy and data protection, and (with flawed datasets) potentially leading to profiling of individuals, groups, and areas.

    Facial recognition technology is not fit for purpose. Existing models are not good at recognising a diversity of faces, and are unable to account for the biases built into the datasets that are necessary to train and build them. It cannot be a one-stop solution for enforcing laws.

  • IWD 2022 Winners & Losers

    International Women’s Day is upon us once again! It seems like only yesterday that I was setting up this blog to discuss some of my experiences as a woman in tech, but here we are again.

    I thought I’d take a moment this year to instead recognise some of the winners and losers of International Women’s Day this year, and yes, there are very definitely winners and losers. The day wasn’t always about the marketing opportunity – it’s supposed to be about celebrating the achievements of women, about celebrating the social, political, cultural, global impact that women have, about recognising the barriers that women face and what we can do to dismantle them, etc. Over time, however, the day has become heavily commercialised, and is now treated largely as an opportunity by many companies to post a slogan or a hashtag without any real effort to shift the conditions for the women who work for their companies.

    Gold Star Winner

    The undeniable Winner of IWD 2022 is the Twitter Gender Pay Gap Bot. This clever little bot, created by Francesca Lawson and Ali Fensome, uses data sourced from the UK Government’s gender pay gap database, to which all companies with more than 250 employees are obliged to submit their figures. When those companies tweeted using the IWD hashtag, the bot retweeted them, quoting their median hourly pay gap percentage. Watching the posts roll in all day was a delightful source of merry chaos, and an occasional source of delight when you saw companies with genuine pay equality!

    Many companies, upon seeing themselves retweeted by the bot, chose the scorched-earth policy of blocking the bot, or deleting their message and reposting it without the associated hashtag. This, predictably, didn’t work, and usually just served to draw more attention to their particular case. A related honourable mention must, therefore, go to Madeline Odent and her wonderful curated thread of all the companies who deleted/blocked/modified their posts in an attempt to evade the bot, thereby ultimately making an even bigger mess for themselves. I salute you for your hours of tireless work, Madeline!

    Honourable Mention (Silver Boot)

    The Welsh Rugby Union used IWD to announce a suite of new initiatives such as providing free menstrual products, pelvic floor training, and a partnership with a menstruation underwear brand, not to mention highlighting their awarding of full-time professional contracts (in case you missed it at the start of the year).

    Jen, why are you talking about Wales Rugby, you might ask? Well, it’s just that some other rugby teams have been getting it fairly spectacularly wrong lately. Like the IRFU with regard to our own women’s rugby team just last week. And this year’s Golden Facepalm Winner……

    Golden Facepalm – The All Blacks

    In a world where the Black Ferns exist and have won five of the past six Women’s Rugby World Cups, where you had the option to retweet the message they shared for IWD and extend the reach of their Twitter account with a simple “we support you” or “we support this” note, or even a black heart emoji, or just a plain retweet without comment, the All Blacks chose to post this instead.

    This.

    It’s actually still up there as of writing, on March 9th, despite an almost universally negative response. Why so negative? Let me count the ways.

    This message is centred in the perspective of what women do for men, rather than what they may do for themselves, or how they may exist for themselves. It has the same structure, and same failing, as the “she’s someone’s wife/mother/daughter” trope. She is someone all by herself, not merely in relation to the service she can provide to a man or the relation she is to a man. It casts women as the enablers or in support roles to men, and on International Women’s Day, it’s just not the day. “Congrats women for being so good at supporting the men in being brilliant” is a message that wouldn’t be great on most days really but for it to be your key marketing message on International Women’s Day is a spectacularly poor choice.

    “Allow” is also a poor choice of word here because it has echoes of the “allowed out to play” attitude that we see reflected so often in mainstream media, which is infantilising for men and insulting for women, so it’s doubly awful. While I understand that word choice in a tweet is sometimes dictated by space, and I’d usually grant that this may have been a space-related choice, I did check: you could have replaced the word “allow” with the whole phrase “support us in playing” and the tweet would still have been under the character limit, so 🙅‍♀️.

    Lastly, I’ll mention the same thing which has been said in response to the tweet online, which is that the particular players chosen in some of these images are poor role models at the best of times, and especially poor role models for a day which is meant to honour and respect women. Players who have had domestic violence charges laid against them should not appear in promotional content for International Women’s Day, and that feels like such a basic rule that it is unbelievable that I should even have to type it, akin to “you should put on a coat if it is raining outside” or “look both ways before crossing the street”.

    They have weakly apologised for “not getting it right” – but not on the All Blacks twitter account, where the majority of their twitter followers actually are, and where that post still remains(?!). No, the apology went out on their @NZRugby account, which they *checks notes* almost never tweet from (1217 tweets total at time of writing), and which has even fewer followers than the Black Ferns account that they still haven’t chosen to promote from the All Blacks account. I guess some people might call that a little… insincere?

    Silver (?) Facepalm

    I’ve chosen to give this a Silver Facepalm because, like international brands everywhere considering their promotional material for IWD, I take the sanctity of these awards very seriously, and I couldn’t have two Golden Facepalms in the inaugural year of the awards – I felt it would make a mockery of the whole system. In iVisit London’s defence, I suppose they were just reposting copy given to them by the London Dungeon, so really it’s a shared award: a double Silver Facepalm.

    Again, in the category of “sentences and rules I didn’t think I’d be needing to clarify”, making a funny fun time joke about a notorious murderer of women and calling her Jackie *wowsparkle* is very much not quality copy for a day that is supposed to be about celebrating women. Maybe don’t try to yassify murderers for International Women’s Day? Maybe that’s not the vibe? Maybe if all today is to you is an opportunity to tweet some twee nonsense with a hashtag then you should just step away from your “murderous females” pinterest board and, just, take a personal day.

    And, I guess, it almost feels twee to say it myself, but go with me here – you couldn’t, even on this, the day of international women, have found a single female figure to advertise the London Dungeon? Leaving aside the fact that I think it is grotesque to use murder as a cutesy way to advertise yourself, even on this day you felt that the single well-known male serial killer needed to be front and centre in your ad copy? Zero stars.

    The post has since been deleted, and iVisit London have said they just shared ad copy from the London Dungeon and they shouldn’t have, it wasn’t up to their standards, etc. A fairly bland, standard apology. The London Dungeon said they wanted to highlight a theory that Jack the Ripper could have been female, but given that they were replacing their usual actor for “one day and one day only”, that one day could have been any day – there’s no reason for it to be International Women’s Day, and no reason for it to be one day only. A terrible marketing misstep on a day that should be about anything but marketing.

    What did you see yesterday?

    That’s what I saw in my corner of the internet yesterday. Did you see a particularly well thought out initiative that you’d like to share? Or a particularly egregious flop? I’d love to hear about it.

  • On Gender Quotas

    The Citizens’ Assembly has today voted for a programme of reforms on gender equality in Ireland, including some recommendations around extending gender quotas, and ahead of the predictable backlash against gender quotas, I want to share some thoughts on the inevitable “best person for the job” rhetoric.

    A frequent refrain when people mention gender quotas is that it should just be “the best person for the job”, and that gender shouldn’t matter, but the people who make this argument rarely pause to consider or explore the sexist ideal they prop up with this statement. Let’s dig into that now.

    Studies have shown that when people are blinded to gender, the choices they make reflect the actual spectrum of gender much more accurately. We see it in jobs, we see it in award nominations, we see it in all aspects of life. Which means that something different is happening when panels aren’t blinded. We’ve seen that panels are affected by unconscious bias, and end up hiring those who look like them, sound like them, etc. And we’ve seen that people who don’t fit the already established “mould” get left out of this process – even before we step into the interview, biased algorithms filter out the CVs of women and people of colour, and job descriptions discourage applications from underrepresented groups. We face an uphill battle to improve gender equality in hiring.

    Why not just more unconscious bias training?

    So why quotas? Why not just more training? Can’t we just trust that people will address their unconscious biases, or wait until we reach a more balanced representation organically?

    No. We can’t.

    Unconscious bias training remains a controversial topic. When people propose unconscious bias training, it is often met with resistance and mockery, and with people questioning whether the training leads to real change. Studies examining the effect of unconscious bias training suggest that, right now, while the training does raise awareness of these biases, that awareness is not translating into significant behavioural change. It is worthwhile, but it is not enough.

    And so here we are, with mandated gender quotas. Why? Frankly, because for years, you were asked nicely and you ignored it. Many of the studies which show gender bias in hiring are decades old – this is not a new problem, and people have been raising it for a very, very long time. Maybe you were given training about why diverse hiring matters, about unconscious bias, and you ignored it or didn’t internalise it enough to action it. Maybe you’ve never examined your job descriptions to see why all of your candidates look the same. So now your hand has to be forced with quotas, because you won’t do it voluntarily, and people should not have to wait ten more lifetimes for you to decide it suits you to make a change.

    But don’t you think it should be the best person?

    If we loop back to our original thesis, that it should always be just “the best person for the job” regardless of gender, I actually agree. It should be the “best” person. But the unspoken part of this is that you are saying that this is currently how things are actually done, that this idea of “best person regardless” is the current status quo. And there, I must firmly disagree.

    When you say “the best person” and imply that that’s what is happening right now, you’re propping up a myth, a status quo that isn’t. The status quo isn’t always hiring the best person, it’s hiring the one you like, and very often, the one you like is the one you match. And when the hiring panel is predominantly old white men, guess who matches them?

    Your status quo is a myth

    When you say “best person for the job” this is the unasked question which shows the problem with your statement: If we currently hire “the best person regardless of gender” then why are all of those best people white men? For decades? Really, not a single other person was better? Honestly? If, at the moment, the best person already always got the job, then why is there still such a lack of diversity in hiring? What is the reason?

    If we currently hire “the best person regardless of gender” then why are all of those best people white men?

    Please, honestly, examine this thing that you are implicitly saying. If you think that right now, we always hire the best person regardless of gender, then you are also saying that the current gender representation everywhere is an accurate reflection of skill and qualification. You are saying that the only bias which exists is one which would cause someone to hire an incompetent woman over a man because “diversity”, when a literal embarrassment of riches of evidence shows the very opposite. And if you don’t understand why such a statement might cause me to raise my eyebrows, well, you’ve got rather a lot of catching up to do.

    Can we completely eliminate bias from hiring? Maybe not. And maybe not soon. But gender quotas can force us to shine a light on how we currently hire, and make people think outside their current status quo.


  • Have you ever been told that your name is incorrect?

    Your name is one of the first things you say to people you meet; it is how you present yourself to the world. It is personal and special. But what if you were told that your name is incorrect every time you tried to book an airline ticket, access banking, healthcare, or any number of services online? This is all too often the case for people around the world, due to lazy or inconsiderate programming.

    What’s a fada?

    Diacritical marks are the marks which guide pronunciation, and they appear in numerous languages – if you’re a native English speaker, you might not use them frequently, but they can change not just the sound, but the meaning of a word. A fada changes the word “sean” (meaning old, pronounced shan) to the name Seán (pronounced shawn), changing the a sound to an aw sound. And if it is your given name, then to include the fada is correct. It is as crucial a part of the spelling of your name as any of the letters. Yet, all over the internet, people who try to include fadas, accents, umlauts, or other diacritical marks in their names are told their name is incorrect, invalid, or wrong.

    When it comes to including these diacritical marks on online forms, we too often hear the refrain that it’s a “technical issue”, but that doesn’t quite get to the heart of it, and also implies that it is very difficult to fix or perhaps not even possible. That’s not really true though.
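
    To make the problem concrete, here is a minimal sketch (in Python, with made-up function names) of the kind of validation rule that produces these rejections, next to a Unicode-aware alternative:

        import re

        def ascii_only_valid(name: str) -> bool:
            # The all-too-common pattern: only unaccented ASCII letters,
            # spaces, hyphens, and apostrophes are allowed.
            return re.fullmatch(r"[A-Za-z' -]+", name) is not None

        def inclusive_valid(name: str) -> bool:
            # A Unicode-aware alternative: accept anything containing
            # at least one letter, in any script.
            return any(ch.isalpha() for ch in name)

        print(ascii_only_valid("Seán"))  # False - the fada gets the name rejected
        print(inclusive_valid("Seán"))   # True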

    Back when people were first defining how computers would speak to each other, a character set was agreed upon, so that communication would be consistent. This character set was ASCII (American Standard Code for Information Interchange) and, due to memory limitations of the time, ASCII could only fit 128 characters. This is enough for all the letters, numbers, and punctuation marks used in English, but not nearly enough to include all of the “special” characters used by other languages (such as a letter with an accent, á). But these characters aren’t special – they are a part of the language, as much as any other character.
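
    You can see that limit directly. As a quick illustration in Python, UTF-8 happily encodes the name Seán, while the 128-character ASCII set simply has no representation for the á:

        name = "Seán"

        # UTF-8 has a byte sequence for every Unicode character.
        print(name.encode("utf-8"))   # b'Se\xc3\xa1n'

        # 7-bit ASCII does not, so the encode fails.
        try:
            name.encode("ascii")
        except UnicodeEncodeError as err:
            print(err)  # 'ascii' codec can't encode character '\xe1' in position 2...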

    Competing standards and character sets have existed for decades now. Character sets such as Unicode (most commonly used via the UTF-8 encoding) support the characters of the world’s languages. So why aren’t we using them? Well, there are probably two reasons:

    • Many older systems continue to use ASCII (such as legacy internal systems at banks and airlines) because they were designed when other character sets weren’t available, and many companies are running much older software than you would imagine at the core of their operations
    • Many things, such as databases and development platforms, default to non-inclusive character sets when you install them, and people don’t change those defaults before moving code into production because it doesn’t occur to them – and then it becomes a larger issue to fix because the system is already in use (see the sketch after this list)
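
    On that second point, the fix at design time is often a single explicit choice. As a hedged illustration (the table and column names are made up; the charset clauses are standard MySQL), this is the difference between trusting a default and stating your intent:

        # MySQL's legacy "utf8" charset stores at most 3 bytes per character;
        # "utf8mb4" is the complete UTF-8 encoding. Declaring it when the table
        # is created avoids the painful retrofit described above.
        CREATE_USERS_TABLE = """
        CREATE TABLE users (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            full_name VARCHAR(300) NOT NULL
        ) CHARACTER SET utf8mb4
          COLLATE utf8mb4_unicode_ci;
        """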

    I don’t think either of these reasons is a good enough excuse. Legacy systems should be updated, and when you are developing a new system, there is absolutely no good reason not to begin your architecture with support for other languages.

    Irish has status as the national and first official language of Ireland, and whether or not you speak it frequently, it is a common feature of our road signs, official documents, and yes, our names. And yet Irish people have had to battle national/state bodies for refusing to accept fadas in their names, and our own Data Protection Commission has decided against them. Gearóidín McEvoy points out that fadas aren’t exactly a new invention, so why should we have to fight for their inclusion?

    Your name is too long, too short

    How long is too long? And how can a name even be too long? Well, if you’re going to take part in a census in Ireland, you might be surprised to find how short the space is for a name on the form. The sample form for the 2016 census is available here and you can see that there is space for just 22 characters, including any spaces you might need. If you have a long name, you’re out of luck. And this is far from just an Irish problem. In Hawaii, Janice “Lokelani” Keihanaikukauakahihulihe’ekahaunaele had to fight the government to have her full name displayed on her official ID cards, and she spoke of her dismay at her name being treated like “mumbo-jumbo” and the disrespect she felt when a policeman told her to change her name back to her maiden name to have it fit on her license.

    Patrick McKenzie lives in Japan, where forms are designed to accommodate typical Japanese names – but with 8 characters in his surname, and Japanese surnames rarely exceeding 4 characters, Patrick routinely can’t fill in his name properly. Inspired by this, Patrick has also written a blog which lists falsehoods that programmers believe about names, and I highly recommend you read it.

    I have also known friends with shorter surnames (e.g. two character surnames) to have significant difficulties with online forms, with their legal surname declared “too short”.

    The reality is that, particularly when you think globally, there is no “too short” or “too long” surname, and arbitrary character limits on form fields cause unnecessary difficulties for people who have to butcher their name to make it fit, and then face questioning from others when the name on the ticket doesn’t match the name they put into the form.
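
    If a sketch helps, a more humane approach (illustrative Python only) is to normalise what you receive, reject only genuinely empty input, and let people decide how long their own name is:

        import unicodedata

        def clean_name(raw: str) -> str:
            # NFC normalisation stores "Seán" identically whether the á
            # arrived as one codepoint or as "a" plus a combining accent.
            name = unicodedata.normalize("NFC", raw.strip())
            if not name:
                # The only hard failure: nothing was entered at all.
                raise ValueError("Please enter a name")
            return name

        print(clean_name("Keihanaikukauakahihulihe'ekahaunaele"))  # kept in full
        print(clean_name("Ng"))  # two characters is a perfectly valid surname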

    First name and last name please

    If you have ever filled out an online form, chances are you have been asked to split your name, filling out your first name in one box, and your surname/second name/last name in a second box. But what if that is not how your name is structured? Around the world, names are structured in a number of ways that far exceed the constraint of “first” and “last” name. Many cultures use names that contain multiple family names, part of a mother’s or father’s name, different endings depending on the sex of the child being named, etc. Moreover, the idea of a “first” name simply doesn’t translate to a number of cultures, who order parts of the name differently as a matter of course, or depending on the situation. For example, in the Chinese name Mao Ze Dong, the “first” piece of this name (reading left to right) appears to be Mao, but this is in fact the family name. Ze Dong is the given or “first” name.

    The W3C has an excellent article which discusses the issues with forms and personal names, and which includes a number of clear examples of the ways in which the idea of a “first” name breaks down – it should be mandatory reading for anyone who is designing a form. They note a key question that form designers should ask themselves before writing a single line of code: do you actually need to have separate fields for given name and family name? If so, why? Could you not simply ask for a user’s full name as they would typically write it, and then, if necessary, ask what they prefer to be called in communications so that you can still auto-populate your “Hey <name>” email?
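
    In code, that advice reduces the whole problem to one required field and one optional question. A minimal sketch (the field names are my own, not any standard):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Person:
            # The name exactly as the person writes it - no first/last split.
            full_name: str
            # Optionally, what they would like to be called in communications.
            preferred_name: Optional[str] = None

            def greeting_name(self) -> str:
                # Powers the "Hey <name>" email without guessing at name order.
                return self.preferred_name or self.full_name

        print(Person("Mao Ze Dong").greeting_name())                  # Mao Ze Dong
        print(Person("Patrick McKenzie", "Patrick").greeting_name())  # Patrick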

    Inclusive form fundamentals

    A multitude of online forms fail to support diacritical marks, or declare that names are too short or too long based on simple biases and the incorrect assumption that everyone has a first and last name that looks like our own (or considers their name in terms of first and last).

    Beyond the frustration that this causes people, it is also dehumanising, insulting, and demeaning. Instead of telling the person “sorry, our system doesn’t handle this and that is our fault” the error messages tell people that their name is wrong, that they are wrong. It tells them they don’t know how to spell their name, or that their name is invalid. It makes people feel like their name or their culture is disrespected. It underlines the idea that this system is not built with everyone in mind, just with people who look like those who built it.

    It presents a barrier for someone every time they use your system, every time they are told they are wrong. It is an unfriendly user experience that turns users away.
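
    Even the wording of the failure matters. As a purely illustrative contrast, compare a message that blames the person with one that owns the system’s limitation:

        # Blames the person - their name becomes the error:
        BAD_MESSAGE = "Invalid name: contains unsupported characters."

        # Owns the limitation - the system is what needs fixing:
        GOOD_MESSAGE = ("Sorry, our system can't yet store some of the "
                        "characters in your name. That's our fault, not yours.")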

    It might require extra work in development, to retrofit existing systems to support extra characters, or to ensure that inputs are validated so that special characters are processed and stored correctly in the underlying databases, but the alternative is unacceptable. The time to begin this work is long overdue.

    Your name is not invalid, our form is.

    Key points

    • Inclusive form design makes your product better
    • Inclusive error messages should focus on the system, not the user – if your system can’t handle a character, the character is not invalid, your system needs to be improved.
    • Not everyone considers their name in terms as simple as “first” and “last”
    • And ask yourself whether you even need a name split this way, or whether you’re just defaulting to the forms you recognise from elsewhere
    • Special characters should be supported from the very beginning. They aren’t an edge case, they are critical.