Ethical Dilemmas in Product Management: A candid conversation with Rich Mironov


It's not often that we get to talk about product ethics in such a candid way. But recently I sat down with Rich Mironov to explore the topic, and three themes emerged: products built to purposely cross ethical lines, products built with no intention of being unethical but that end up being used that way, and the responsibilities we have as product managers and leaders to prevent both.

You can watch the video or read the full transcript below, but here is the tl;dr version.

  1. Whenever we talk about ethical dilemmas and what actions a person is responsible for taking, it is key to recognize that taking those actions is not an option available to everyone, and we need to check our privilege.

  2. There are two big types of ethical issues when developing products -- those which are intentionally designed to harm (most often discovered by looking at how the money is made) and those which are being used in unintended ways that cause harm (surfaced through devil's-advocate techniques, diverse teams, and groups responsible for pretending to act maliciously).

  3. It is everyone's responsibility to speak up (see point 1 about privilege) when they realize a product can cause harm. A good test is to ask yourself if you'd be OK telling your friends, parents, or partner about what is happening. This is especially true for product leaders, who need to create environments where raising one's hand is encouraged, even if it is to share bad news, and potentially to offer ethical guidelines to help start the conversation.

Below is the transcript of the full conversation with outside participants listed as “guest”. Enjoy!

----------------

Tami: Let’s start with the topic of privilege.

Rich: I mentioned the word privilege because I think this is really important. There are a lot of people who have to work to put food on the table for their kids, or who are on H-1B visas in the US (which is a horrible system that punishes the people in it), or who have a sick kid who needs the health insurance that comes from their company, or who don't have money in the bank. There are a lot of reasons why you may be working at a job, on a product, that you may not love.

So we want to acknowledge first that this is going to be about people who have the freedom to stand up and say, "I don't want to do that," and maybe risk being fired or getting sidelined in some way, because it's never, never a popular discussion. We're talking about folks who have a little range of movement here.

And the other thing is that there's no governing body for what we do. There's no official license or certificate that comes from the State Board of Product Management Approvals. There's no Bar Association equivalent. It's pretty hard to get in trouble just for doing something that other people don't think is appropriate. So I think this is much more about how we approach the world individually. As product managers, it's not about rules and regulations, some product management code of ethics that some board is going to enforce. I haven't seen it. I don't believe in it. I don't think we're in a job where that works.

So I'm framing this as much more of a personal discussion: how we face the world, and whether we're doing things we're proud of.

Tami: Let's keep in mind that for people working in technology, part of our ethos is to challenge the status quo and to say rules need to be broken. Therefore you're probably involved in some sort of culture that's encouraging you to look the other way about something. The question is, where are those lines being drawn?

How do we get better at establishing for ourselves, for the companies we work for, and for the teams we work with where it's not okay? 

Rich: I think that's right. I think we end up drawing lines for ourselves and the people we care about and the folks we respect, but there's going to be a lot of people out there who are taking jobs for a lot of money in companies or doing things that I personally think are abominable, and hateful. And, you know, I may call them out in private, but I probably won't call those individuals out in public.

Products Built with Malicious Intent:

Tami: Let's talk about malicious intent. When we think about ethical problems with products, there are the bad outcomes that come exactly as intended, and then those that were unintended. So let's start with the intended ones: someone has bad intentions. Do you think this happens?

Rich: It happens all the time. The way I think of it is, the company itself is in a business where, if you inspect how they make money, you should be suspicious. Look, for instance, at financial products. There are a bunch of companies that are in the business of, or at least claim to be in the business of, helping you improve your credit rating, and they have apps and worksheets and all these things.

But for at least several "clean up your credit" companies that I know, if you ask where their revenue comes from, they make their money on referral fees from credit card companies when people apply for credit cards. And it's very likely that if you have bad credit and you're maxed out, the last thing you need is another credit card. So they're setting themselves up in a place where, in order to hit their revenue goals, they're going to take the users of their application somewhere those users probably shouldn't go, because the folks who are paying them and the folks who are using them have opposing objectives.

Tami: Like check cashing stores, like a digitized version of them, payday loans, etc. Where they act like they're your savior when in fact, they're going to charge you predatory rates and you're never going to get out of the hole. That's actually their intention.

Rich: That's right. And for folks who don't know, there are a couple of brilliant women, Nandini Jammi and Claire Atkin, who run the Check My Ads Institute (https://checkmyads.org/), which is taking on the ad tech companies that say they promote brand safety for major consumer brands. But in fact, those exchanges end up putting lots and lots of those paid ads on hate sites and disinformation sites and things selling Nazi memorabilia, against the ad exchanges' own stated policies.

And so these two women have figured out that if they take snapshots of the ads appearing on all these hateful sites, and then send them to the brand managers who didn't know they were paying for hate and disinformation, this is a way to de-platform some of the bad actors. Because in the ad tech space, there's a tremendous incentive to push ads out to more and more places, even if you think they stink. And the dirty secret is that a lot of these tech companies say they're all about brand safety, but when you watch how they behave and where the money flows, they're funding all the election-disinformation folks.

Tami: In the nature of ads, there are so many parts of the web that most people never learn about. Randomly, one of my jobs, back in 2008, was at a web arbitrage company. Web arbitrage, for those of you who are unfamiliar: if you've ever Googled something and landed on a page that is pretty much just a collection of ads leading to where you really wanted to go, that's web arbitrage. They pay for that Google click at 30 cents a click, and then they get paid 60 cents a click for whatever you actually click on.
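The arbitrage math Tami describes is simple enough to sketch. In this hypothetical calculation, only the per-click figures (30 cents paid to Google, 60 cents earned per outbound ad click) come from her description; the visitor counts and click-through rates are invented for illustration:

```python
# Hypothetical sketch of the web-arbitrage economics described above.
# The per-click prices come from the conversation; everything else is invented.
COST_PER_INBOUND_CLICK = 0.30      # paid to Google for each visitor
REVENUE_PER_OUTBOUND_CLICK = 0.60  # earned from the ad network per ad click

def arbitrage_profit(visitors: int, clickthrough_rate: float) -> float:
    """Net profit for a batch of visitors, given the share who click an ad."""
    cost = visitors * COST_PER_INBOUND_CLICK
    revenue = visitors * clickthrough_rate * REVENUE_PER_OUTBOUND_CLICK
    return revenue - cost

# The page only makes money if more than half of visitors click onward,
# which is why these pages are wall-to-wall ads.
print(arbitrage_profit(1000, 0.5))  # break-even
print(arbitrage_profit(1000, 0.8))  # profitable
```

The follow-the-money point is that the incentive (maximize outbound clicks at any cost to the user) is baked into the business model before any product decision is made.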

What was amazing to me was that Yahoo was funding us. Yahoo wanted their ad network's ads shown on sites, and they gave us this premium pipe. I only worked for them for about a month before I fully understood what was going on. And luckily, talking about privilege, I was 26, I wasn't really involved in product management yet, and I was lucky enough to be invited through my connections to join Cornerstone OnDemand. But I definitely would have stayed at the web arbitrage company longer, at $15 an hour, because I didn't have the ability to pay rent otherwise.

Rich: I'm just thinking about some other areas where we know we're likely to get in trouble, and I usually think about the financial sector. The whole crypto meltdown, of course, is horrible but completely predictable: in order to make those kinds of returns, you have to do immoral or illegal or highly risky things, and then, guess what, you go bankrupt. Or go back, for instance, to the bond rating agencies. Companies pay the bond rating agencies to rate the bonds they are going to bring to market, and that payment comes from the company floating the bonds. So the bond rating agencies have a lot of incentive to hand out higher scores than the bond buyers would like. Finance markets are rife with these kinds of mixed incentives.

We'll throw one more out before we let the category go: the company that used to be called Facebook, whatever it's called today. They are in the business of capturing as much of your personal information as they possibly can and selling it to advertisers who want to know all the things about you that you don't want to tell them. At every step, we've seen that the safeties they apply, and that others in the market apply (like everything happening at Twitter), are half-hearted. It's window dressing. It's spam-washing or whatever, because they're really trying to maximize revenue at the expense of the user's data. If you didn't think that's what they were doing, then I think you're not a product manager who understands your own economic model.

Tami: As we say, follow the money.

Rich: The other category we’re going to get to is bad outcomes that we didn’t intend, but let’s open for questions. Someone called out that Insurance companies incentivize their salespeople to sell insurance or warranties that you don’t really need. 

Guest: I work in finance and we’re utilizing people’s personal data to aid in machine learning and I think it’s a gray area. Even if it’s anonymized.

Rich: I think so too and I might ask what the use case is and whether it’s used for good things.

Bad Outcomes Not Intended:

Tami: The sort of thing Rich and I were talking about is certain employee groups that have stood up when a new client shows up. Whether it's the government using a product or otherwise, employees say, "We don't want the NSA using our technology, because they're going to do things we never intended it to do."

Rich: And how did that end? Most of those people got demoted or left the company, and the company signed the contract anyway. But it's a good example.

Let's pick on AI for a second, because I think there are a couple of good examples here. I know that a bunch of banks want to automate approvals of mortgage applications, which makes perfect sense: if we can take good applicants out of the loop and approve them in 16 seconds, that would be great. But the 60 years' worth of mortgage data they use just happens to be real historical data. And anybody in the mortgage business knows that until recently (maybe still) there was a lot of redlining in mortgages, where people who are in poor neighborhoods, or are of the wrong ethnic group, or whose names don't sound right get their mortgages turned down, even though they have the same finances as people in richer parts of town. And so when we use those 60 years of historical data to automate mortgages, what we can now do is redline and break the law, for people who should get mortgages, in 16 seconds, instead of having somebody spend a whole day doing it.
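Rich's point about automating on biased history can be sketched in a few lines. This is a deliberately toy illustration with entirely invented data (no real lender; the zip codes are arbitrary): a naive "model" that simply learns historical approval rates hands identical applicants different answers purely because of past redlining:

```python
# Toy illustration of bias propagation: all data below is invented.
from collections import defaultdict

# Hypothetical historical records: (zip_code, income, approved).
# Identical incomes, but one (made-up) zip code was systematically redlined.
history = [
    ("94110", 60_000, True),  ("94110", 60_000, True),  ("94110", 60_000, True),
    ("94124", 60_000, False), ("94124", 60_000, False), ("94124", 60_000, True),
]

# "Training": learn the historical approval rate per zip code.
approvals, totals = defaultdict(int), defaultdict(int)
for zip_code, _income, approved in history:
    totals[zip_code] += 1
    approvals[zip_code] += approved  # True counts as 1

def auto_approve(zip_code: str, income: int) -> bool:
    """Approve if applicants 'like this one' were usually approved before."""
    return approvals[zip_code] / totals[zip_code] >= 0.5

# Two applicants with identical finances get different answers in 16 seconds:
print(auto_approve("94110", 60_000))  # True
print(auto_approve("94124", 60_000))  # False
```

A real underwriting model is far more complex than this, but the failure mode is the same: if the label ("approved") already encodes discrimination, optimizing to reproduce the label reproduces the discrimination, just faster.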

Tami: There’s nothing like a computer that propagates bias. 

Rich: That's right. Propagates bias. You know, we've seen a lot of this in vision systems, where folks with different complexions either are or aren't recognized, because most of the training data was white men.

For years, new drugs going through FDA approval were not tested on women. We only tested them on men. And it turns out that sometimes women react to drugs differently than men do. So there are some structural things here that maybe we didn't notice.

Tami: But then what do you do when that happens? There was no malicious intent, but all of a sudden you realize that the seatbelt you designed doesn't protect pregnant women?

Rich: This is a different problem. If you're at a company that's doing something you believe is immoral and unethical, and you're able to leave, I think you might want to do that. But what if you're at a company that's got some really interesting products that really have a use in the world, and we just haven't thought through how the baddies are going to use them or where the trolls are going to take us?

Like 23andMe. You sent in your DNA, and you didn't think about the fact that some police department picks up DNA from some crime scene. It turns out one of your relatives left DNA at the site; they get a warrant, and they match you, so they know it's somebody in your family. It wasn't really what the 23andMe folks had in mind, but here we are.

Tami: And is that ethical or not? If you think about the positives of finding a mass murderer or serial rapist, which has happened, that's a good thing, putting those people behind bars. But then there are the invasion-of-privacy issues.

Rich: You know, this one feels complicated. The folks who have Nest doorbell cameras that capture things in the neighborhood are invading someone's privacy, but maybe it catches a crime? Apple's Find My is being abused by stalkers, but it's useful if you're trying to find your kids.

There are products where it's just not so clear. What I see on social platforms is that the trolls always figure out a way to bend the system, especially if we're not paying attention, if we're not doing what's called a red team exercise: we have a team that tries to hack our own product, or we bring in some outsiders who know how to manipulate systems and people, to see where we can get in trouble.

Tami: Anyone here have a Red Team? A group of people who try to break the product you’re working on.

Guest: I work for a tax software company, and we work with a third party who identifies security vulnerabilities, and we implement many of their suggestions. We still have a way to go, but we're making progress.

Tami: The third party's mainly concerned with hacking into your database.

Rich: Yeah. And I didn't see any other yeses. I think that's of course what we should all be doing, but it rarely happens, it's expensive, and in theory we should pay more attention. I think there's an equivalent exercise where we get some smart folks who have thought a lot about how to twist systems. By now we know that on hotel and travel review sites, the majority of the posts are actually paid posts by folks who don't get paid very much and live somewhere in Southeast Asia, and they give the paying hotel five stars. If you're going to be in the reviews business, you have to think about how folks are going to manipulate your system, and whether that behavior is OK or not.

Tami: To a certain degree, you could explore your terms and conditions and how somebody would violate them.

Rich: Yeah, but only to the extent that we hide behind terms and conditions. I have a slightly different test I would apply: if you told your parents or your kids what people are using your product for, would you be embarrassed (assuming they understood)? Or if you're out for drinks with a bunch of friends you really want to still be friends with over time, are you willing to say, "here's what somebody's doing with our system that we didn't anticipate," without utter embarrassment? I think that's a much more honest test than paragraph #411, where we wrote that it was OK to sell all of your personal data and locations in real time to people who don't like you.

Tami: Agreed. Each team should have a checklist of things that could go wrong, and maybe start with your T&Cs and the things people aren't supposed to do.

Rich: That at least gives you a place to stand. If you say we don't accept hate speech, and we don't accept videos of people doing things where they haven't agreed to be in the videos, at least there you have a way to start canceling some of this out. But then you have to look forward, too.

Tami: Yeah. Let's talk about one of the most public ethical issues of the moment, which is everything going on at Twitter. 

Who feels bad for the crew that is left there? I feel so bad for the people who are left, because I'm sure some of them believe that they're going to protect something, and I can't imagine they're going to be able to.

Rich: That's right. I think some people are there because they've worked for years to build really good systems, both moderation and back office, and they don't want those systems to fall apart. And I think a lot of people are there because they can't leave. So, back to the H-1B issues: there are a lot of people there who, if they quit or are fired, have 60 days to find a new job or they have to move back to where they're from. Indentured servitude is supposed to be illegal in the U.S., but there you go.

It was interesting. I read an 11- or 12-page document somebody shared out, from the team that had done all the analysis of what would happen if you let people pay $8 for the blue check instead of actually doing the verification. Every single thing in that document is exactly what happened, but that whole team got fired and nobody cared. We argue about the definition of free speech, but the terms and conditions are being violated all day long.

Tami: All right. Someone says yes, red teams are popular with engineering teams trying to break a product. Yes, we call that exploratory testing. That's one of the things Rich and I were talking about in advance: what we really need is red teams deciding whether a product or feature should be built at all. I've seen this sometimes with UX teams running design thinking exercises. This is where the product management function can make an impact.

Rich: So if you go to the very beginning of the process, before we're writing code, before we're interviewing users, when we're chartering this new thing, that's a really good time to decide whether this is a bad product to build. Now, unfortunately, if you're at that place, it's likely that the founders or the executives have already taken money to build this thing and have told the investors. So I think it's a really hard sell to go to your management and say we shouldn't build the thing we've announced or raised money for. I do think, depending on the situation, I would feel an obligation to do that.

But you know, you have to understand your organization and whether that's how that's gonna go over because you need to have a plan for yourself. 

Tami: But I think that there's also the option of figuring out how to prevent that poor use, once you surface it. Part of what we do as product managers is we think about a problem. We think about a market, a group of users that has this problem and how we're gonna solve it. I think after we get to that point, we should also say, what else could this possibly be used for?

Rich: How could this be misused? Where else could it apply? And by the way, there may be some really good use cases we haven't thought of that are good for the company, good for the planet, and make money: take the tech we've done and apply it to a bunch of other really cool things. Those are both the same sort of thinking. But back to this point, I think we really need to take some time out and ask how somebody with malicious intent would abuse the thing we love and are building. Then, can we fill in some of those cracks? Can we fill in the holes so it's way harder for somebody to abuse our product and embarrass us?

Tami: Let’s talk about some more examples.

Rich: Somebody I worked with years and years ago, Randy Farmer, put out a book in 2010 called Building Web Reputation Systems, which is all about upvotes and all these things people do online, and many of the ways that trolls can abuse your systems. His conclusion is that we have to try. We'll never get there completely, but we have to at least take the easy steps.

One of his great examples: I don't know if anybody's old enough to remember FarmVille, but one of the things some folks did there, if you remember, is that you could get a tractor and plant different crops on your field. So some folks figured out what the pixel layout would be on a field full of two different colored plants, and they planted their fields so that when the plants grew up, the two colors would spell out curse words and bad language. Of course it wasn't text, so no text engine would ever identify that these plants were words, in the same way that captchas defeat text recognition. The point of the book is that there are always ways to work around the system, but there are 10 or 20 obvious things you might do to reduce the volume and make it harder for folks to abuse your system. Now, that's about 12 years old, and the book was a couple of years in the writing, so this is not new stuff. It has been around for a long time.

Tami: I feel like every team, when you're coming up with ideas, should ask somebody to play devil's advocate. So similarly, play villain.

Rich: Fun, that's fun. 

Tami: Let them wear the hat, let them rotate responsibilities across your team, play villain. What would you do with our stuff that isn't so kosher?

Rich: That's right. Sometimes you find some outsiders who are just good at this and you can rent their brains for a couple hours and have them entertain you with the ways that they're gonna twist your system in knots. Maybe we can't fill all of the holes, but let's fill some of the easy ones. 

Tami: I just realized that there's a business opportunity here to hire a whole bunch of ex-cons to do that. People who think out of the box in a different way. 

Rich: Yeah. You want to be careful who you hire, but yes. It’s threat modeling, but it's not about security so much, it's about bad behavior. 

Tami: Something you and I were talking about in prep for this was the nature of photo filters against child pornography, and how it backfired?

Rich: Yeah. I don't know if anybody knows the story. For a brief time, and I don't know if it's still in place, Apple put a filter on the photos you uploaded to iCloud from your iPhone: they searched them with an AI image-matching system for child pornography. Now, it also turned out to catch all kinds of other things. Say I'm the parent of a small kid who's got a rash on his or her bottom, and I've taken a picture and sent it to my doctor. Suddenly those folks are locked out of all their accounts. So, unintended consequences. These are hard problems, but do we take the time to think them through?

Tami: This is a good time to jump into what we should do about it. We've been talking about a few different ideas, and something I think is important, especially for the scenario we were just talking about with the baby bottoms, is diversity on your team. There have been a number of studies about how so much of AI programming is done by white males, and how, as you were talking about with the mortgage situation, there's a lot of bias that the data surfaces without you even realizing it was there.

Having diversity on your team when it comes to working on these problems means not just thinking about women or people of color, but socioeconomic status, educational background, et cetera, to help you get rid of your groupthink. There are lots of initiatives to hire veterans and others. For example, I don't know how many of you know that when fireworks go off on July 4th, it triggers PTSD for a lot of people, and if you start shooting off fireworks the day before, or not at 9:00 PM when they're expecting it, it's even worse. So developing empathy for different groups by having representatives of those groups on your teams can be really helpful.

Rich: Yeah. It's hard enough to hire product managers at all, let alone across the spectrum. I've found that folks who are parents turn out to bring an awful lot of skills to product management that non-parents don't have. That's an illegal question to ask in an interview, so of course I wouldn't. But if you haven't tried to negotiate with your five-year-old about why we're going to have our vegetables before we have desserts and sweets, then you're really not equipped to argue with the executives at your company about why we're not going to take a deal that has a lot of money attached but is going to crash all your systems. It's the same set of skills. And I find that folks who've been in different settings, who've worked in different industries or raised kids, bring interesting points of view that I might not have.

Tami: Yeah. Anyone else have questions or ideas about how you could prevent this?

Rich: All, yeah, it’s wide open. Let's do it. Everybody's too quiet. 

Tami: Way too quiet. 

Speaker 5: Yeah. I actually read a book called Technically Wrong. For those of you who may have read it, it's a great book, where the author says: go ahead and hire misfits into your organization, just to get that diverse perspective. I experienced it myself. I used to be an engineer, and I joined a team that had no engineers in it. My way of communicating was very different from how the team was communicating, so it took me a while to figure out, OK, what's the lingo? How can I get less structured in how I present information, and so on. So you're absolutely right, Tami, in that it's not just about the typical race or gender attributes.

Rich: What do we do when we've got a product team of seven and four of them are misfits and can't get along with any of our stakeholders? 

Speaker 5: Hey, that's a culture problem. I think that's a culture problem that you can address differently. 

Tami: I also think that as you noted, there aren't that many product manager slots out there in any individual company. But when you're coming up with your product decisions in the same way, you should be incorporating different inputs. Find those misfits on the customer success team or on the sales team or otherwise to join the conversation. One of the participants says one of her best hires was a person who was on the verge of being let go because they wouldn't work within the rules. Do you want to tell us more about that?

Rich: What kind of rules were they breaking? 

Speaker 6: Sure, I can say a little bit about it. This person was in a non-programming role where he wasn't supposed to access the system, but he was really clever about accessing areas he wasn't supposed to and getting things done for his team, rather than going through channels. He had been told several times, "you can't just do that." I was managing a development team and a support team, and I thought that kind of resourcefulness and initiative was what I wanted on my team. So I hired him onto my team, and he was kind of an internal hacker, I guess. I knew him, and I knew his intentions were good; he's kind of a purehearted person. He went on to work for that company in a development capacity, and he might still be there.

Rich: Very sweet. Good. If you all don't mind, I'm going to shift a little bit, because I want to change it up slightly. For me, one of the core skills of product folks, particularly product leaders, is that everybody in the product organization should be able to explain to the executive team, particularly the less technical folks, in money terms, why something is important. I have the belief that, at least for some of your executive team, any sentence that comes out of your mouth that's not denominated in currency won't be listened to and doesn't matter. But if, for instance, you were going to talk about what advertisers will do on your social media platform when you reintroduce all this hate speech and other horrible stuff, you're able to have a discussion that's about money, not just about behaviors or ethics.

And if we're in a place where, when bad things happen, it hurts the company's bottom line, that's a very strong argument. So rather than saying, "I have to quit," go to the executive team and say, "if this series of things happens, if people get into our system and steal all the personal data of our customers, or if we're flooded with bots that all vote up some crazy thing we've now promised to do (because we say we're going to do the number-one thing that's voted up), here's how it's going to cost us money and public relations and embarrass us in the world." I think that's a much stronger argument than "I'm uncomfortable," because, while a lot depends on your company culture, in a lot of places your being uncomfortable is not the execs' problem.

Tami: I was just gonna add that execs don't want to be embarrassed. 

Rich: That's most of them, yeah. 

Tami: They don't want to be caught with their tail between their legs. Oh, we weren't thinking about that. Your job as an officer of your company is to have fiduciary duties and protect the brand, the company, the revenue, et cetera. And I'm telling you, you're not protecting it right now. And so in not those exact terms, help them have a realization that their name is on the line. 

Rich: If you can go down a list of bad behaviors or bad actors and say, we have this opening where some bad actors could do X; they haven't done it yet, but they're smart and they're going to figure it out; I want to make sure our team fixes this before you, the executive team, get a headline on page three of the New York Times that says, oops, another credit agency has accidentally leaked the credit histories of 50 million folks to people who are going to sell them on the dark web. Or whatever the thing is. Because I think most executive teams, at most companies, will react well if you can explain it in terms that are less technical and more painful.

Tami: Yeah. I remember one of my projects at Pivotal Labs, where we were working with JPMorgan Chase. Jamie Dimon had come to the realization, and this was before the Equifax leak, that if one of the credit agencies had a leak, he wanted to be able to turn off Chase's pipes immediately. If there was a leak and they found out about it, he no longer wanted Chase to be providing credit information to that company. In reality, it doesn't really help, because the data is already out there. But I liked the idea of saying: we're not going to keep doing harm once we find out something is wrong.

Rich: This is about thinking and anticipating as best we can. I gotta tell you that some of the smartest folks in the world are on the other side of this line and they're thinking of things that we haven't conceived of or we're not smart enough, or we're just not experienced enough. And so there's no perfect system here, but you know, we gotta try. 

Tami: Yeah. And I think something we were talking about is that a responsibility of leadership is to create an environment of safety for your employees, so that they can surface these problems. If someone feels like raising their hand will get them fired, that's not good. So how do we distribute our privilege to the people who work for us, so that they know how to surface a problem, how to raise their voice and their hand when something doesn't sit right?

Rich: To be their umbrella, right? To be their semipermeable membrane. I'm lucky enough to have been in the C-suite a bunch; I've been in the room where it happens instead of getting the news afterwards. Being able to walk in and say, we as a group have uncovered this potential problem, and I will own it. Psychological safety. As product leaders, how do we set this up so that our teams trust us to give them the resources and the time and the thinking and the little bit of cover to do the right things?

Tami: Something Matt Barcomb and I were talking about is how do we give them the sense that they could leave? How do we help them speak at conferences? How do we give them connections, et cetera? The privilege and the mobility that we feel, they should also feel.

Rich: It's double-edged, back to the parental thing. Every parent wants their kid to graduate and go off and do great things in the world without us. I'm terrifically proud of all the folks who've worked for me over the years who've gone on to be better and smarter and more successful than me. That's a source of pride. In the short term, you really want to keep the best folks on your team if you can, because they do really good work for you. There's this trade-off between helping my folks build their visibility and career for the long play, and not really wanting them to go yet. The other half of that: anybody who's been a product leader, I think, has been in this chair, when somebody on their team comes to them and says, “I need to leave, or I have to resign, or I'm gonna take a job somewhere else.”

We have to check our emotions on that and say, “I'm sorry to lose you, but what can I do to be of help? Can you and I plan how we're gonna backfill your job? Can you and I plan how we're gonna do this gracefully and cover the work? How do we position this in a way that's positive for you and the company? Can I help?” Because the day that somebody tells me they're leaving my group is the day that, for me and them, they're gone. Now, there may be three weeks or six months or whatever your local employment requirement is, but they are mentally checked out. At that moment, I think it's our obligation to help people do the thing they need and want to do, unless we think it's dangerous or illegal in some way, and to send them off with a good farewell and our best wishes. We don't want to put our folks in a situation where we're forcing them to do that. So again, this is about choice. How do we raise the kids well? How do we teach 'em everything we know and make 'em big and strong and great product leaders, so they can go on and be great product leaders somewhere else? Maybe not today, if I can keep 'em for another year. But that's the career arc. Anybody who takes the other approach, such as “you can't leave, I'm not gonna let you transfer to a better job within our company, I'm gonna block your career advancement,” has a different set of ethical issues.

Tami: Switching back to ethical issues, I think that as a leader you can also create guidelines for ethics, in the same way we have design guidelines or otherwise, so that if someone sees a problem, they can say, “this is a violation of number three,” or “I think this is a problem based on question number five.” This elevates the conversation to: we've agreed on a set of principles, and this doesn't feel right against one of them. Help me build a case, financially and otherwise, that we can escalate to say we shouldn't go down this path, because it violates this principle and also has financial, reputational, et cetera, impacts down the line.

Rich: Yeah. I think that's great. I don't have such a list. If somebody else does, that would be a great thing to share out. 

Tami: Maybe we can create one.

Rich: Together, maybe we can create one together. There's a blog post.

Tami: Would you want to tell your parents about it? 

Rich: That's right. I think that's a really good test. You know, do you want it written in the newspaper and published to everybody that you know? And if not… Aside from some things we do which are intellectual property or secret sauce, the bottom line is: I want to be proud of the work I do.

Tami: Yeah. Money shouldn’t be earned from bad actors.

Rich: Now again, that assumes you have enough money. But, I want to be able to come home and tell my kid what I did at work today and not have her back away and hide in the bathroom.

Tami: Or at a PTA meeting.

Rich: Good. There you go. We've got a couple minutes. Wide open on this or anything? Anybody got an interesting question to throw out, or a point of view?

Speaker 5: I'll just say this has been a great discussion, and thanks. If you start working on the blog or, you know, the best practices guide, I'm happy to chime in. I have some materials that I can potentially contribute. We don't have one yet, and we need one desperately.

Tami: Please send it over. We'd be happy to include it. The only thing I wanted to add is that yesterday I learned that Airbnb has a Chief Ethical Officer, and that this might become a trend. I want to emphasize to everybody: this is something that everybody needs to own. If you put one person in a title that makes them in charge, it often means they're the only one who feels responsible, and that's not okay. So think about what you as an individual, you as a leader, can be doing to enable more ethical decisions to be made in your org.

Rich: Sure. Big companies sometimes have somebody who's the advocate, the whistleblower go-to person, the ombudsman. Oh, cool, so they have one at Dell too. If your company has such a person, then you may not have to be the one who carries that burden, but that's a good signal.

Tami: Well, thank you all for joining. Thanks again, Rich. I'm really glad you were able to have this conversation.
-----

Product ethics is an interesting topic, and this only skims the surface of a greater conversation we need to be having as product teams. If you can raise your voice safely to point out ethical issues your company needs to consider, you are likely at a company that cares about the intended and unintended consequences of its products' use, as all ethical companies should.

Some additional resources:

https://www.mindtheproduct.com/how-to-make-ethical-choices-when-developing-software-common-questions-answered/

https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf

https://ethicalos.org/wp-content/uploads/2018/08/Ethical-OS-Toolkit.pdf

-----------------

Rich coaches product executives, product management teams, and revenue software organizations. He has also occasionally parachuted into software companies as interim VP of Products/CPO. A seasoned tech executive and serial entrepreneur, Rich has been the 'product guy' at six start-ups, including stints as CEO and VP of Product Management. He is a relentless blogger, speaker, teacher, and mentor on software strategy, product management, and aligning “what-we-can-build” with “what-markets-will-pay-for.”


Tami is the founder of The Product Leader Coach, where she works with product leaders and teams to realize their potential by focusing on their strengths.

If you enjoyed this post, I am available for product leadership coaching or team training. Learn more about my services and upcoming children’s book.