
Techonomy 2018 Conference Report #techonomy2018

Short Presentation: Jonathan Kreindler


  • Jonathan Kreindler at Techonomy 2018, Tuesday, November 13, 2018. (Photography by Paul Sakuma Photography)

Speaker

Jonathan Kreindler
CEO, Receptiviti


A presentation by Jonathan Kreindler of Receptiviti.

The following transcript has been lightly edited and condensed for ease of reading. 

Speaker: Jonathan Kreindler, Receptiviti

(Transcription by RA Fisher Ink)

Kreindler: Can you hear me okay? When I was chatting with David a couple of weeks ago about this talk, David said, “Focus on the science because what you guys do is really, really interesting,” and I’m going to focus on the science, David, don’t worry. But I’ve got a very short amount of time here to tell you a little bit about what we’re doing. I want to touch on our science, and I want to touch on two themes that have been talked about today and yesterday and the day before: socially responsible corporate culture and responsible technology. And I’m going to try and wrap that up. It looks like the clock says I have two and a half minutes; I thought I had five, so I’m going to talk really quickly. And if I’m talking too quickly, just raise your hand, and hopefully the act of raising your hand will distract you from realizing that I’m talking too quickly. So we help large corporations better understand their culture, to mitigate the sort of problems and PR fiascos and corporate culture meltdowns that many large organizations have seen over the past couple of years. And we do it with a really interesting science as the basis, and we do it in a way where we believe we’re being incredibly responsible with the technology. So I’m going to touch on each of those elements really, really quickly.

So to start it off, the science that we use is really around NLP, but it’s a different form of NLP than most people are familiar with. Essentially, the way we look at language, and the way we help organizations better understand their culture, is by analyzing the people in their workforce and actually understanding their psychological state. And we do this in an anonymized sort of way. But essentially, when you talk and when you speak and when you communicate, you use two different categories of words: you use content words and you use function words. Content words are the nouns and the verbs and the adjectives; these are the words that you intend to use, and you use them very consciously. It’s what traditional NLP focuses on when doing topic and theme extraction. And function words are words that you actually don’t even realize you’re generating. They’re largely prepositions and pronouns, and they are processed very, very differently in the brain. But there are two fascinating things about function words. One is what I just mentioned, which is that you don’t realize you’re using them. And the other really interesting thing is that when you break them down and categorize them and analyze the way that people use them, very interesting trends and correlations pop out: people in different psychological states use very, very different patterns of these categories of function words. And so essentially, when you’re looking at function words in just the right way, you can actually understand the psychological state, at a point in time, of the person who is speaking or communicating.
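The content-word/function-word split described above can be illustrated with a minimal sketch. This is not Receptiviti’s actual lexicon or model; the word lists and categories below are simplified assumptions, in the spirit of LIWC-style word counting, purely to show the mechanics of computing function-word usage rates.

```python
# Illustrative sketch only: toy function-word categories, not a real lexicon.
FUNCTION_WORDS = {
    "pronoun": {"i", "me", "my", "we", "you", "he", "she", "it", "they"},
    "preposition": {"in", "on", "at", "of", "to", "with", "for", "from"},
    "article": {"a", "an", "the"},
}

def function_word_rates(text):
    """Return each function-word category's share of total words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    total = len(words) or 1
    return {
        cat: sum(w in vocab for w in words) / total
        for cat, vocab in FUNCTION_WORDS.items()
    }

rates = function_word_rates("I think we did it, and I am proud of my team.")
```

In a real system, rates like these would be computed over many categories and compared against patterns correlated with psychological states; here they only demonstrate the counting step.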

And so what we’ve done is we’ve built the science into a platform that we’re now deploying with large corporations, and we now have the ability to actually understand the health of their culture at a very, very granular level in real time. And so with that comes quite a bit of responsibility, right? We get asked questions all the time like, “Is this ethical?” and, “Is this Big Brother-like?” And we can actually say no, because we’ve spent quite a bit of time on this. We actually spent two years building our platform, and that seems like a very, very long period of time given the fast rate of technology today, but the reason it took us so long is that we wouldn’t do it unless we could be absolutely sure that we were respecting PII and that we were keeping everybody safe at the end of the day. This is first and foremost for us as a company; it’s part of our ethical principles. We actually publish our ethics online, on our website: if you go to Receptiviti.com, you can read about our ethical guidelines. It’s actually interesting. It’s not boring legalese; it’s actually a really interesting document.

And we have a very firm belief that if you are holding PII, you are putting not just yourself but anybody whose PII is there at risk. And we take a much longer-term view of this than most organizations do, because we realize that the future is a very long period of time, and any corporation or any organization that’s amassing PII doesn’t know where it’s going to end up in 50 years, 75 years, or even 200 years. So our philosophy is that the only way to handle PII is to never touch it in the first place. So what we’ve done is we’ve built our platform in a way where we deploy a remote piece of code on prem at our client’s site, and that piece of code does all the analysis of language and turns it into outputs that can’t be reverse engineered. We understand those outputs, and we’re then able to analyze their workforce in a way where, even if that data were hacked or leaked, it would be completely useless to anybody but us, and it also can’t be re-associated with the individuals in the organization.
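The on-prem idea described above can be sketched in a few lines. Everything here is a hypothetical illustration: the names, the salting scheme, and the feature set are assumptions, not Receptiviti’s actual design. The point is simply that raw text and identities never leave the client’s site; only irreversible aggregates do.

```python
import hashlib

# Stays on prem; without this secret, hashed IDs can't be re-associated.
SITE_SALT = b"client-site-secret"

def anonymize_speaker(employee_id):
    """One-way, salted hash of an identifier (illustrative scheme)."""
    return hashlib.sha256(SITE_SALT + employee_id.encode()).hexdigest()[:16]

def extract_features(text):
    """Reduce raw language to coarse counts; the original text is discarded."""
    words = text.lower().split()
    return {
        "word_count": len(words),
        "first_person": sum(w in {"i", "me", "my"} for w in words),
        "negations": sum(w in {"not", "no", "never"} for w in words),
    }

record = {
    "speaker": anonymize_speaker("emp-42"),
    "features": extract_features("I am not sure we shipped it"),
}
# Only `record` would leave the premises: not the text, not the ID.
```

Because the feature extraction is lossy and the speaker ID is salted and hashed on site, a leaked `record` reveals neither the original language nor who produced it, which is the property the talk describes.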

And so I’m happy to talk about this later in more detail. I don’t have a lot of time to continue, but essentially we feel that we’ve solved a number of interesting problems. And as a relatively small organization in Toronto that has actually figured out how to generate incredibly deep insights about people without ever actually seeing their data, we think we are setting a really, really interesting example for a lot of other large technology companies that are sitting on what could be very, very dangerous data over time. And so it’s not a challenge, but I think we put it out there as an opportunity, and we’d really like to see other organizations try and follow this lead and act in a socially responsible sort of way when it comes to handling personally identifiable information. And I don’t think there really are limits to what you can do in this way. I think there’s an incredible amount you can do if you have the energy and the focus and the willingness to actually try and do things slightly differently. I think that many of the large technology organizations out there who have recently seen problems with personally identifiable information leaks could actually solve this problem if they really, really wanted to. So thank you all very much.
