S2:E2 – Design Principles to Combat Domestic Abuse (Lesley Nuttall)

Subscribe: Apple Podcasts | Spotify | Android

Since the outbreak of COVID-19, domestic violence against women and girls has increased. UN Women released a report with this information earlier this year. Lesley Nuttall, who works at IBM Security Labs, gave an overview of the statistics for domestic abuse. She also explained how technology is misused to intensify domestic abuse. Later we talked about her article “Five Technology Design Principles to Combat Domestic Abuse”. Lesley highlighted the importance of thinking about these concepts when we work on software and other technologies.

@techwomenshow

Lesley Nuttall


Transcript

ES: 

I’m Edaena Salinas, software engineer and host of The Women in Tech Show, a podcast about what we work on, not what it feels like to be a woman in tech. For more information about the show go to wit.fm.

Since the outbreak of COVID-19, domestic violence against women and girls has increased. UN Women released a report with this information earlier this year. Lesley Nuttall, who works at IBM Security Labs, gave an overview of the statistics for domestic abuse. She also explained how technology is misused to intensify domestic abuse. Later we talked about her article “Five Technology Design Principles to Combat Domestic Abuse”. Lesley highlighted the importance of thinking about these concepts when we work on software and other technologies.

Lesley Nuttall, who works at IBM Security Labs, is joining us today. Lesley, welcome to The Women in Tech Show. 

LN: 

Thank you, it’s a pleasure to be here. 

ES: 

I want to talk with you today about domestic abuse and how technology plays a role in it, particularly because I saw you wrote an article titled “Five Technology Design Principles to Combat Domestic Abuse”. Before we get into detail about what those design principles are, I first wanted to begin with some context around domestic abuse, and some statistics about it. What have been some recent findings that you’ve seen on this? 

LN: 

Yeah, so I mean, domestic abuse is basically about control, and it doesn’t discriminate: anyone of any gender, race, religion, age, or sexual orientation can be a victim or a perpetrator. 

But having said that, statistically it does appear that domestic abuse is largely a gendered crime. I have some UK crime statistics from 2018, and they showed that 92% of defendants in domestic abuse-related prosecutions were men, and 66% of the victims were recorded as female. 

The remaining figures were that 13% of victims were male, and for 21% of prosecutions the victim’s sex wasn’t recorded. So it is a prevalent issue. And here in the UK, it’s estimated that one in five people experience domestic abuse in their lifetime, but for women that rises to one in three. 

And in other countries around the world it’s also prevalent. In the United States, one in three women have experienced physical violence from an intimate partner, whereas in parts of sub-Saharan Africa, partner violence is thought to be a reality for 65% of women. Even an EU-wide survey indicated that 22% of women have experienced violence by a partner. Of course it varies in different countries, but it’s just everywhere, unfortunately. And there was a UK news investigation particularly related to tech-facilitated abuse, I think at the beginning of the year, and they found an 1,800% increase in alleged cyberstalking offenses between 2014 and 2018. So we know that domestic abuse is a prevalent issue, and while that hasn’t changed, the tools of the trade have, unfortunately. 

ES: 

As of June of this year, there are regions still around the world that are on lockdown because of the COVID-19 pandemic. How has this affected domestic abuse? 

LN: 

Yeah, you’re right. What’s deeply worrying is that there was a UN report that came out a little while ago exploring the impact of COVID-19 on women, and it particularly highlighted a trend of increased abuse, because basically when homes are placed in lockdown, they’re under more strain, and that unfortunately exacerbates the situation. There was another report, I think from the UN Population Fund, and they projected a 20% increase in abuse all around the world. It’s become so widespread that the UN chief made a call for measures to address it; I think he called it, and I quote, a “horrifying global surge in domestic violence”. And some other statistics: here in the UK we have a charity called Refuge, and they run the National Domestic Abuse Helpline. During lockdown they saw 60% more people calling the helpline, and they had a 950% increase in traffic to their website. So it has definitely exacerbated the situation. 

But the way that technology is being used to enforce control during lockdown may have actually shifted focus. It’s potentially less likely that perpetrators are tracking your location or your calendar, but they may be using other techniques more, such as monitoring video chats or using smart home devices for manipulation. And often victims in this lockdown situation have no means of escape, or they have limited support during this time, so it seems to be easier for abusers to gain that control.  

ES:  

What are other examples that you know of about how technology is misused? 

LN:  

Gosh, there are a lot, unfortunately. Pretty much any tool can become a weapon in the wrong hands, and that’s also the case with technology. One of the typical examples we give: at home I have a connected doorbell, and within the app I can see who is at the door, and it also tells me of any movement outside. It’s a really great piece of tech because it was built with safety in mind. But I could also use it to monitor and track my family, because I get instant notifications the moment anyone attempts to leave the home. So it can be used both ways, for comings and goings.

Another example: when you share the location of your device, it will often, if not always, share your battery level too. Now that seems like a really innocuous piece of data, and it can actually be quite useful, because if you’re on public transport, say, or driving, and your partner sees you suddenly stop moving, they know it isn’t because you’ve had an accident, it’s because your phone died. But with that small piece of information in the hands of an abuser, the victim can’t escape that relentless barrage of abuse just by turning off their phone and telling a little white lie that their battery had died. The abuser will know instantly that they’d lied, and that could lead to an escalation in abuse.

And there are loads of other examples: spyware for surveillance; mirroring devices, so that you can see exactly what the person is doing on their phone from your own device; bombarding with abusive messages; harassment on social media; financial monitoring; smart home devices. The list goes on, unfortunately. 

ES: 

In that article that I mentioned earlier, you talk about this investigation from the UK and how they found people are reporting that they’re experiencing technology-facilitated abuse.

Are the various examples you’re describing right now what technology-facilitated abuse consists of, and what it means? 

LN: 

Yes, so technology-facilitated abuse is basically using technology for malicious purposes. We know that technology is increasingly becoming part of our lives, and there can be immense benefits: in the home it can safeguard us, it can ease our routines and enrich our experiences. But those same technologies can be used in a dual way: they can make the abuser appear all-seeing and all-knowing, and to the victim they can appear all-powerful. That makes the victim feel vulnerable and powerless, and feel that they can’t escape and that nowhere is safe. 

ES: 

I’ve read different kinds of stories around this. Some of them have been about what you’re describing. Others have been about people finding ways to get out of these situations: they would order a pizza and put a note there to call the police, that kind of thing. So it’s also, as you’re saying, that there are pros and cons to the technology that’s everywhere right now, right? 

LN: 

Exactly, yes, and technology definitely can be used to help. There are quite a few apps already out there that can help victims or survivors find help, or call for help if they need to. But yes, it can also go the other way, where our technology is manipulated, and that’s basically what these design principles are all about: raising awareness that, unfortunately, even when you have the best intentions when you’re creating your technology, people will find ways of leveraging it to cause harm. So if you had that in the back of your mind right from the beginning, and you built your technology from the ground up thinking of that kind of bad-actor use case, then hopefully you’d make it more resistant, and maybe even a bit safer for people. 

ES: 

Exactly. In this article you begin that way: you talk about domestic abuse and some statistics, then some examples of how technology is being misused, and then you present the five principles. Before we talk in detail about those, I wanted to ask, in your opinion, should the companies or the people that develop technology take responsibility for the fact that the products they build are being used to facilitate domestic abuse? 

LN: 

Yeah, it’s a good question. I firmly believe that few of us in the technical community intentionally create technology that causes harm, but it’s possible that we’re disconnected from the unintended effects of our creations. When we started this initiative, we were looking at what’s available to help people in these kinds of abusive relationships where technology is being used against them. Like I said, there’s quite a lot out there already to help: how do I set my phone up so that the privacy settings are how I would like them? What should I remove? What should I add? So there’s a lot of advice like that out there, but it puts the onus on you and me, the end users, to keep ourselves safe. I don’t think we’ll ever get away from that, but I would like to put some of the onus on the technology, or the creators of the technology, to think about this and how we can make life easier or safer for you and me, the end user. So that’s the aim, anyway. 

ES: 

Let’s talk now about the five principles. The first one that you present is promoting diversity. Can you explain why this principle is important, particularly for what we’re talking about, developing technology and thinking how it can be used by abusers? 

LN: 

Yeah, definitely. Often when we’re developing a new technology, we have target users in mind, but they might not be the only type of users that end up using our systems, and often unexpected users utilize our apps or systems in unexpected ways. If we have a diverse design and development team, that broadens the understanding of user habits, and so it allows a greater exploration of use cases: both the positive ones that you’re targeting, but also the negative, which in this case would be how it could be used by a bad actor who’s trying to abuse.

A nice example here is location data that’s embedded in images. Say you’ve developed an image-sharing app and you haven’t considered that children might be using your app, so you haven’t put that use case through your system. In this particular scenario, let’s say we have a parent and child who are escaping an abusive relationship. The parent doesn’t necessarily want to restrict their child from talking to the other parent, and maybe the child is doing some homework one evening and the estranged parent asks for an image of the homework to help them. The child may be too young to understand the wording used in the privacy settings; they don’t realize that their location is included in that image, they think it’s just a photo, and the estranged parent then knows exactly where they are, and the family might have to scramble and run away. But if you had someone on your team who had that experience, or had read about these principles, and could share that with the rest of the team, you would then try to include children in your target users. 
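The image-sharing scenario Lesley describes can be sketched in code. This is only an illustrative outline, not from the interview: the tag names are simplified stand-ins for real EXIF fields, and a production app would parse actual image metadata (for example with a library such as Pillow) before sharing.

```python
# Illustrative sketch: strip location-revealing metadata before an image
# leaves the device. Tag names here are simplified stand-ins for EXIF tags.

SENSITIVE_TAGS = {"GPSLatitude", "GPSLongitude", "GPSAltitude", "GPSTimeStamp"}

def strip_location_metadata(exif: dict) -> dict:
    """Return a copy of the metadata with location-bearing tags removed."""
    return {tag: value for tag, value in exif.items() if tag not in SENSITIVE_TAGS}

photo_metadata = {
    "Make": "PhoneCo",
    "DateTime": "2020:06:01 19:42:00",
    "GPSLatitude": (51.5, 0.0, 7.2),
    "GPSLongitude": (-0.1, 16.0, 46.3),
}

safe = strip_location_metadata(photo_metadata)
```

Stripping by default, rather than expecting a child to find and understand a privacy setting, is one way the "unexpected user" from a diverse team's review could be protected.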

ES: 

The other one that you talk about is guaranteeing privacy and choice. What are current issues that you have seen of how privacy settings are presented? 

LN: 

Yeah, so this is all about making it easy for us to make active, informed decisions about our privacy settings. Often you might see small red buttons, or phrases like “advanced settings”, and that can be quite intimidating. And if you have a big green button next to a little red “advanced settings” link, it’s quite obvious which one you’re going to choose, which one is easier and more understandable, but you might not understand the consequences of that choice. So that’s quite important, and I’ll give you one other point on that aspect. 

It’s about your default privacy settings. They’re very important, because often when people don’t go towards the advanced settings, or don’t go towards the small red button, they end up with the default settings, so you need to make sure you factor in all these different kinds of scenarios. I’ll give you a personal example. It’s not quite domestic abuse related, but it gives a good explanation of why we might want appropriate default settings. My husband went for a run in the local forest, and he uses one of those fitness apps that keep track of your running. So he set it up and went running through the forest, and he passed another runner, a lady, as he was going; he didn’t think anything of it, did his 5K, came back, and passed her again at the car park. Once he got home, the app popped up and said, it looks like you were running with this lady, would you like to connect with her? And because her settings were left at the defaults, everything was open. He could see who she was, her links with social media, and what groups she was in.

She happened to be in a mums’ running group, so he knew she was a mother. She only had a few followers, so he could tell by the surname who her husband was. On his settings, like mine, everything is locked down. And she has no idea that he’s seen all that information about her. So that’s just a really simple case where the default settings leave everything open, and maybe that’s not the right choice for every app. 
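The fitness-app story boils down to a simple design rule: make every sharing field default to its most private value. A minimal sketch, with hypothetical field names for an app like the one described:

```python
from dataclasses import dataclass, asdict

# Hypothetical profile settings for a fitness app. The point is the
# defaults: every field starts at its most private value, so a user who
# never opens "advanced settings" shares nothing by accident.

@dataclass
class ProfileSettings:
    share_full_name: bool = False
    share_group_memberships: bool = False
    link_social_media: bool = False
    suggest_nearby_runners: bool = False  # the "ran with this person" feature

# A brand-new account shares nothing until the user actively opts in.
new_user = ProfileSettings()

# Opting in to one feature leaves everything else closed.
opted_in = ProfileSettings(suggest_nearby_runners=True)
```

The design choice here is that each capability is opted into individually, so enabling the social feature does not silently expose name, groups, or linked accounts.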

ES: 

Exactly, yes, because I get it that for some applications the whole purpose might be to connect so you can meet new people. If everybody has locked down their information, there’s nothing to see, so I get why some might choose to leave everything open by default. But a lot of the time this is misleading, or I’ve encountered cases where the option not to share is greyed out and the text is very small and it doesn’t even look like a button, but it is one. So it’s like they’re trying to guide you to choosing what they want you to choose.  

LN: 

Exactly, and a lot of apps now, because of wearables and our smartphones, are linking the real world with the virtual world. So yes, maybe having that social aspect of openness by default is great in the virtual world, but when that starts to be linked to the real person, and I can then see where they live and what car they drive, maybe then it should be more transparent about what exactly you’re sharing. It’s just something to think about, anyway. 

ES: 

The third principle that you talk about is combating gaslighting. Can you talk about what this means? 

LN: 

Sure, so gaslighting is where a person psychologically manipulates someone into doubting their memories or judgment. The point here is that if you were able to remove evidence of an action actually happening, or maybe there never was evidence to begin with, it could cause someone to start to question their memory. For example, say you have a messaging app, and you’ve discussed with your partner that you’re going out for the evening with a friend: is that OK, are you OK to look after the little one, something like that. And they said, yeah, sure, that’s fine. Everything goes well, but then you come home and they start abusing you, and you say, well, you said I could go out. Then you look in the app and there’s no evidence at all that the message was there, because they’ve removed it. And if that happens over and over again, relentlessly, that drip, drip, you might start to question yourself. So what we’re saying here is that, first of all, you shouldn’t be able to change the history, the truth, the audit trail, without some kind of notification to everyone who participated in it. We want there to be pertinent and timely notifications as well as that audit trail, so that it’s really clear who has done what and when, and they can’t try to manipulate you, because you’ll have the evidence in front of you, if that makes sense. 
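The audit trail and notification idea can be sketched as a toy message thread. This is an illustrative outline only; the class and event names are made up for the example, not taken from any real messaging product.

```python
# Minimal sketch of a tamper-evident message thread: deleting a message
# leaves a visible tombstone in the append-only event log and notifies
# every participant, so history cannot be silently rewritten.

class MessageThread:
    def __init__(self, participants):
        self.participants = list(participants)
        self.events = []          # append-only audit trail
        self.notifications = []   # (recipient, text) pairs

    def _notify_all(self, text):
        for person in self.participants:
            self.notifications.append((person, text))

    def send(self, sender, text):
        self.events.append(("sent", sender, text))

    def delete(self, actor, index):
        # Record the deletion as a new event instead of erasing the original.
        self.events.append(("deleted", actor, f"message #{index}"))
        self._notify_all(f"{actor} deleted message #{index}")

thread = MessageThread(["Alex", "Sam"])
thread.send("Sam", "Sure, go out, I'll watch the little one.")
thread.delete("Sam", 0)
```

After the deletion, the original "sent" event is still in `thread.events`, a "deleted" tombstone follows it, and both participants received a notification, which is exactly the evidence the principle asks for.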

ES: 

Yeah, that makes sense. Let’s talk about the next one, which is strengthening security and data. What are some of the ideas behind this principle? 

LN: 

OK, so we have GDPR, which requires that we only collect and share the personal information that we actually need. The idea here is that that consideration could be extended to other types of data. A car app is a good example: it might share all of the car’s journeys with all of the users of the app, ignoring who was driving the car, and ignoring the fact that some journeys might be considered private. Even in non-abusive relationships some journeys might be considered private: if I went to the doctor and I wasn’t ready to share that with my partner, or if I wanted to go and buy a secret birthday present, I might not want to share that with my partner just yet. So it’s about what is considered private, and whether we can give the consumer the choice to decide what is considered private. And back to the battery example: you would think that was a really innocuous piece of data, but it’s not necessarily so. Maybe we should think more about the data and how we’re using it. The other thing here is that most of the perpetrators out there are using standard functionality. They’re logging in as standard users; they’re not hacking in. And they’re using that standard functionality to abuse their victims, but that type of user is not normally considered as part of the security threat model. 

So if we start to think of that persona and put it into our threat model, then we can start mitigating those risks. 
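The car-app example suggests a simple data-minimisation rule: let the driver mark a journey private, and hide it from every other account. A minimal sketch with hypothetical field names:

```python
# Illustrative sketch of per-journey privacy in a shared car app: any
# authorised user can see shared journeys, but a journey its driver
# marked private is visible only to that driver.

journeys = [
    {"driver": "kim", "destination": "supermarket", "private": False},
    {"driver": "kim", "destination": "doctor", "private": True},
]

def visible_journeys(journeys, viewer):
    """Return the journeys this viewer is allowed to see."""
    return [j for j in journeys
            if not j["private"] or j["driver"] == viewer]
```

Note the key point from the interview: the other account here is a perfectly ordinary logged-in user, not an attacker, which is why this filter belongs in the threat model rather than behind a firewall.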

ES: 

You mentioned the threat model. I just want to make a quick parenthesis, because I know you’ve been working in security, and threat modeling is an important component of security. Can you explain, for those that aren’t familiar with it, what it consists of? 

LN: 

So in my role I don’t really do security threat modeling very much; I’m more of a troubleshooter, you could say. But normally you have personas, and you would work out how they could navigate your application or your software or your system, and you would try to mitigate each threat. 

A lot of the time you consider outside forces trying to get in, and you have your external perimeter firewalls, or whatever you have, to try to stop them. Then you might have insider threats: people who elevate their privileges within the system. How can they access the data? How can you ensure that privileges are set up appropriately, with access control lists and all those kinds of things? 

So those are normally the two main kinds of threats, external and internal, but you don’t necessarily think that people who are logged in with minimum privileges would have the ability to cause harm. Maybe that’s where this persona is slightly different from the ones you would normally consider. 

ES: 

What I’m trying to get at with this is also to show that people and companies developing software have various systems in place. One of them involves security reviews and threat modeling. There are discussions going on, documents being made: specifications, rules, analyses of use cases. And this is a good place where some of the discussion we’re talking about can take place, right?  

LN: Yes, definitely.  

ES:  

Some people might be thinking, well, we’re talking about these design principles, but when are they brought up? When should we talk about them, and with whom? Is it just about being aware and knowing they exist, or can they be embedded in our software development process? 

LN: 

So I think they could be. The first step, of course, is awareness: if you’re not aware of an issue, it’s really hard to tackle it. So that is definitely the first point. And if you’ve got it in the back of your mind and you find a place throughout that whole development cycle to mention it, or to raise the question, then yes. But you’re right, the security threat model is a good place. 

When you’re designing and thinking up your personas and your use cases, again, that’s a really good place. If you have a methodology that you use, you could implement it there, put it somewhere where it fits well. And we’ve tried to help, because like I said, the first step is being aware, but the next step is: well, what do I do about it? How do I go about helping? So in the paper we’ve created a set of questions that you could ask within your team meetings, or whatever you do, covering some of the areas you might want to consider and think about. And you’re right, they might fit into different parts of the cycle, but at least you have a checklist there, a sheet that can start to open up those conversations. And all technology is different; all teams know their own technology best. 

So we didn’t want it to be prescriptive, but we wanted it to enable conversations basically. 

ES: 

I want to talk now about the last principle that you present, which is making technology more intuitive. I know we touched a little bit on this one when we talked about the privacy and security settings. Can you explain in a little more detail what this principle is about? 

LN: 

Sure, so victims of coercive control live in complex, ever-shifting worlds, and sometimes they may lack the energy, or even the confidence, to navigate new technologies. What we’re asking here is that if all users could intuitively use and understand technology, that could help reduce the risk of abusers dominating with their greater technical confidence, whether with threats or by installing applications the victim doesn’t understand.

We’ve seen cases where, a lot of the time, there will be one person in the family who is more technically confident and sets up a lot of the devices and the software in the house. They have the admin privileges; they set up your smart speakers, your smart assistants, your IoT devices, and the other people in the family might not necessarily know how they work. Maybe when you look online, the wording is all really quite complex technical jargon, and they’re just not confident enough to understand and tinker with it to learn how it works. So we would ask that documentation be accessible. We don’t have to use jargon; we can make it readable to end users. And I’m talking about end-user devices and software here; I understand that at the enterprise level it’s slightly different. But if end users could intuitively use and understand software and devices, that could help. They could say: I know that when I’m about to leave the front door, that particular device will do this; or I know that when I move from room to room and the smart lights come on, that can be tracked on a device. If you know that information, you can change your habits, or do what you want. You have control, basically.

Another nice example here is making it obvious when remote functionality has been turned on. It’s not always obvious that, say, a smart assistant has been accessed remotely. Sometimes it might just have a tiny LED that goes from breathing to solid, or something like that. But we need to make it obvious what’s going on in your environment, and you need to understand how it works. 
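The remote-access point can be sketched as a toy device model. This is purely illustrative; the class, method names, and account string are invented for the example and don't correspond to any real smart-speaker API.

```python
# Sketch: a smart device that records an obvious, queryable status
# whenever it is accessed remotely, rather than relying on a subtle
# LED change that household members are unlikely to notice.

class SmartSpeaker:
    def __init__(self):
        self.remote_sessions = []
        self.alerts = []

    def start_remote_session(self, account):
        self.remote_sessions.append(account)
        # Surface the event where household members will actually see it,
        # e.g. the device screen or a companion app, not just an LED.
        self.alerts.append(f"Remote access started by {account}")

    def status(self):
        if self.remote_sessions:
            return "REMOTE ACCESS ACTIVE: " + ", ".join(self.remote_sessions)
        return "local only"

speaker = SmartSpeaker()
assert speaker.status() == "local only"
speaker.start_remote_session("admin@example.com")
```

The design choice is that remote access is never silent: it always produces both a persistent status and an alert that any user of the device can check.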

ES: 

And just to recap, the five design principles are: promoting diversity, guaranteeing privacy and choice, combating gaslighting, strengthening security and data, and the last one that we talked about, making technology more intuitive. One last question I wanted to ask you: I read the article and I saw it was published under the IBM Policy Lab. What is this? 

LN: 

Sure, so maybe I’ll talk a bit about the Policy Lab and why we published it with them. The IBM Policy Lab is very focused on providing policymakers with recommendations for all of the emerging problems in technology. The background is that IBM has always believed that when you have a big challenge, you really require a practical solution to it, and that’s what they’ve been chartered to create, so they’re focused on the big problems in society. Like I said, we’ve been running this initiative for quite a while, about a year now, doing all the research. 

And so we presented all our findings in the paper, and they completely agreed that this was a practical solution to quite a challenging societal issue of our time. As for myself, I work in what’s called IBM Security Expert Labs, and I basically liaise directly with customers all around Europe, helping them get the best from their security deployments. At the focus of it all is helping others, and that’s really what drew me to this initiative. 

ES: 

Well, Lesley, thank you for coming on the show. It’s been great talking to you about technology, how it’s impacting domestic abuse, and how we can make it better. 

LN: 

Thank you. It’s been lovely to have talked.