Updated: January 20, 2024 · Published: July 18, 2023

Passion for Innovation: How Human Genius is Driving Technology Breakthroughs

In a recent conversation, Roman Reznikov, VP of Delivery, Digital Segment at Intellias, and Ron Espinosa, Advanced Technology Solution Executive at Softchoice, discuss emerging technologies and how to create the next big thing.

This spring, we talked with Ron Espinosa, Advanced Technology Solution Executive at Softchoice, a renowned provider of software-focused IT solutions. Join us as we unravel how cutting-edge technologies are reshaping industries, from financial services and healthcare to agriculture. From unlocking the potential of massive data processing to addressing ethical dilemmas, Ron offers invaluable insights into how businesses can navigate the complexities of incorporating generative AI into their strategies.

FULL VIDEO TRANSCRIPT

Roman Reznikov (RR): Greetings, everyone around the globe! I’m excited to welcome you to the first episode of our webcast “Made by Humans,” where we discuss the human genius behind fantastic business ideas and technological breakthroughs.

Today, I’m honored to sit down with Ron Espinosa, an Advanced Technology Solution Executive at Softchoice, a firm that provides software-focused IT solutions. Ron describes himself on LinkedIn as a dot connector, strategist, evangelist, and author. Welcome to this interview, Ron. How are you?

Ron Espinosa (RE): Hi, I’m doing well. Thanks for having me, Roman. I appreciate it. I do get a lot of satisfaction from connecting dots and listening between the lines, so to speak. I’m happy to be here and share whatever insights I might have.

RR: Awesome. Thank you very much, Ron. My first question is: how do you stay on the cutting edge of modern technology trends? How do you keep up with everything that is going on in the technology space?

RE: That’s a daunting question, right? Because I don’t know that anybody these days can stay on top of everything. But I try. And I try by keeping my network open, number one. Because more eyes, more ears — you hear more. And then people will come back and say, “Oh, did you hear about this?”

Something I really like to do is connect with people, not just in a business or sales setting, but connect with them and just say, “Hey, did you hear about this?” [about] some type of technology, and people will come back to me when they hear about something. It creates an open dialogue of brainstorming, and that’s been really effective for me.

Obviously, there are also things like LinkedIn and joining groups. I think I was one of the first people to join LinkedIn years and years ago, being involved with groups there. But I think now, post-pandemic, getting back into the big trade shows and really starting to connect with people who are making a difference, both on the hardware and the software side, is really important. Obviously, having that open dialogue is also there. And sometimes you have to put your own self out there. You have to just say, “Hey, I had this crazy idea!” and put it out into the social media world and see what comes back. And it’s crazy the amount of links that I get. And then looking at the links, they may be from a different problem or a different perspective, but if you can look at technology as more of a bill of materials and less as a bespoke solution, you can start to pull abstract ideas from it.

RR: Do you find visiting different conferences and events in the technological space useful? Is it something that would help you stay up to date about everything that’s going on, or is it not the source that you would suggest to our audience?

RE: It’s a yes and no answer, I think.

Yes, I think you have to be out in the conferences and you have to be judicious about which ones you select because time is our most precious asset. But what I try to do is look at things from an outcome base. I have a core group of customers or verticals or horizontal solutions I might be building at a particular time. And so I look at it from a standpoint of What am I trying to accomplish there? Then I start seeking out those experts in my network and beyond that may have a unique perspective on that or a point of view, and then I find out Where are they speaking? What are they attending? — those types of things.

And I try not to limit my scope for conferences or events to just the really, really big ones. Sometimes you can get a very small group together that’s just a cohort that’s investigating a technology, and it’s really useful to get in and just sit down at a dinner table and throw some ideas around.

RR: How about some professional sources like Forbes, Gartner, or HBR [Harvard Business Review]? Do you find them helpful to stay up to date? Would you recommend organizations to subscribe to those sources? Or can you find everything in open source or Google or ChatGPT or anything else? What’s your take?

RE: That’s a really interesting question, especially with generative AI now storming the castle, so to speak. Do we need to have subscriptions, and are Gartner and Forbes and subscription-based services like that at risk? Or is there a way to integrate the subscription model, on a consumption basis, into some type of OpenAI or generative app, or whatever it might be?

I’m a bit of a dinosaur, so I go back to the days of reading magazines and newspapers. I have found articles from Forbes and Gartner to be extremely helpful, especially when you can get behind the curtains and talk to the researchers themselves about what they’re finding. But again, there’s so much information, so much technology, that I approach it from a business outcome standpoint because I find that enables the researchers to be more effective.

I am looking at ways that generative AI can go out and do the search for me, pull the articles in, and maybe you pay for them on a “use” basis, right? As I consume, I’ll pay. But I haven’t got rid of my subscriptions yet.

RR: Cool thought. As I understand, you are the one who brings innovation to your company. Is that correct?

RE: I think it’s a bunch of us. I’m tasked with a few different things. One of them is exploring industry solutions and establishing what those accelerators might look like. And then I work with the development team and the design team to flesh those out to serve our customers.

RR: How does your company accept new technology? How do you define which technology to invest in, which to explore more, and which is just temporary hype?

RE: I might sound like a broken record, but it’s really customer-based. What is going to serve the needs of the customer? And I think, more so now than at any time going back to before the industrial revolution, we have that ability to really look at what the business problem is, what the customer needs — things that maybe a customer is not even saying yet — and investigate the technologies that can answer that. Most times it’s a suite of them; it’s not a silver bullet. I think we went through a phase in the technological revolution where it was Buy this particular software package and it will solve these 20 problems, or these 10 problems, or these 2 problems. Now it’s more like What’s the stack and how can it best be developed to produce this outcome? Or can we take what’s already there and refine it?

So, for me, the litmus test is Where’s the needle going to be moved for a human being or a group of human beings and what technology will do that? Sometimes it could just be the discrete deployment of technology or reconfiguring of what’s already there.

RR: Each new technology carries some kind of risk, something uncertain or unknown. The question is how to find the right balance. What would you recommend to companies in order to strike the right balance between being innovative and not wasting money on something that is not yet mature enough to incorporate?

RE: Something that I’ve picked up on working with folks like Dr. David Vroomlin out of Texas is the notion of living into your purpose rather than your strategy.

Look at the story of Eastman Kodak, who at one point was one of the biggest companies in the world and now is a fraction of the size they were. And you know the story there, that they had the rights to digital printing and digital cameras and what have you and kind of ignored it. They said they were a film company. No, they were a memories company, and their purpose got obfuscated by the strategy. Blockbuster, same thing. [They] had a chance to buy Netflix and didn’t, right? Because their strategy was to make money from rewind fees and late fees. They forgot they were a home entertainment company.

For me, when I’m looking at what’s next, it’s understanding very much your purpose and ignoring the strategy a little bit because you can adjust the strategy to fit your purpose. So, when you’re looking at what to do next and staying nimble and being able to address your customers — I think that has to be first and foremost.

What is my purpose? How best do I express that purpose to my customer base to keep their satisfaction as high as possible?

And then the biggest advice that I can give to anybody is to measure twice and cut once. As they say in the carpentry business, once the piece of wood is cut, you can’t put it back on. It’s out. With technology, especially with the plethora of options these days, I think you must measure twice, cut once. What I mean by that is get a team of people who can do the design work for you, that can do the discovery, product design, service design, experience design — that’s a big one. Once you have that and it’s mapped to your purpose, the technology kind of takes care of itself.

RR: What are the top three technologies appearing right now in the market that will change the game for Fortune 500 companies in the upcoming years, from your perspective?

RE: You can’t ignore generative AI, right? I mean, the things that are coming out of this are mind-boggling. I think I could give points one, two, and three to it. Certainly, there’s security that has to be factored in. If I have to give you three, I’m going to say one is generative AI. But then the next two are a little less about technology and a little bit more about the thought process, going back to my previous point about design.

So one is security by design in any technology you deploy, from the ground up. Otherwise, you’re going to leave yourself wide open.

And this one is something I don’t know that many people are thinking about, but I really hope they are — it’s the ethics. You have to get yourself an ethics consultant. If you’re going down the road of automation, artificial intelligence, machine learning, and generative artificial intelligence, it’s no longer a question of Can I do it? It’s a question of Should I do it? and What are the potential ramifications? People joke around about Skynet and the Terminator, but I don’t know that we’re far off in certain ways when you start thinking about the possibilities. Also, when you’re using something like a ChatGPT or a Bard or something along those lines, thinking about the question you want to ask is very important, because if you ask a generic question, you’ll get a very wide-open fire hose of an answer.

I think we’re going to have to start considering more and more the way we ask questions, the way we think about our applications, the way we think about software and technology. And we really have to start thinking about the should rather than the can.

RR: This is a really interesting topic. From our experience, many clients are interested in generative AI and the application of different AIs in their business, but the question about ethics is a really hot one. From an organizational perspective, how would you approach this ethical question of AI usage? What questions should C-levels ask themselves or their technology teams before using these technologies?

RE: I think the first thing is you have to have a very clear outcome in mind. What are you trying to do? And does it make a difference for the people that it’s meant to serve? Whether those are internal employees, internal customers, vendors, partners, or external customers. What is the end in mind? So start there. It’s the old Stephen Covey thing: Begin with the end in mind.

Then challenge yourself to make sure that everything you’re doing is tied to that end result. And remember that one of the biggest parts of creating a strategy is knowing where to say no. So again, just because I can ask generative AI to go out and pull back information on, let’s say, all my employees’ social media to see what’s trending — so that I can be a better employer, because people are talking about a certain topic and I want to be in tune with that — maybe that’s ethically not the right thing to do because of all the other information you could be pulling in. Now you have to weigh the benefits against the risks. That’s an extreme example, I think, but there are so many nuances that have to be considered at the C-level: What are our principles? What are we trying to accomplish? Where do we draw the line, and why are we drawing the line?

Again, that’s the purpose versus strategy. If you really live into that purpose and you stick to that, I think the strategic pieces (the questions) will become more readily apparent. The things to consider are — again, we’re talking about intelligence: artificial or not, it’s intelligence — if you wouldn’t ask it to someone’s face, I suggest not asking the machine to go ask it.

RR: That’s really true. The question that many companies have and sometimes hesitate to ask out loud is how they can save costs using AI — which, put another way, means whether they can cut certain team members or employees and substitute them with AI. What’s your take on this from an ethical standpoint? How do you foresee AI changing the job market in general?

RE: This is a really topical question. I think I’ve had this question raised in my last three or four social setting conversations where people are saying, What do you think about this? There’s a clamor out there right now that all AI development should stop for six months. I just can’t agree with that.

The reason is that when we found out through medicine that we could do surgery rather than bleeding people out to relieve certain conditions, we didn’t say, Let’s not do that. We explored and we pushed forward. And then when we found out that we could do certain surgeries without being invasive, we didn’t stop. We went there and we made it better. I think that’s what we really have to look at here: where can technology, AI in particular, help make things better for people?

And I think one of the key areas [where] it can do it is by freeing people up to do what people do best, which is to think and react, to have empathy and sentience. These are things that machines will never have, or not for the foreseeable future I don’t think.

But when you look at what technology can do for a company, I don’t think the answer should be How can we cut costs? I think, rather, the question is What things are we doing today that are labor-intensive that we’re doing simply because we’ve always done it that way or because some advancement in our business forced us into adding a couple of extra processes which had us add a couple of new people? Well, we can now streamline that by putting in new technology.

That shouldn’t automatically mean that the people that are there are going to go anywhere. I would offer that the technology still needs the human beings to be in the loop and be notified. Hey, validate this, right? We’re not 100% sure that this document says what I think it says. Can you check this?

And now you can elevate the level of employee satisfaction because they’re doing more meaningful work.

So, I see technology as an enabler to increase the morale of your team and improve the efficiencies, which should help you drive top- and bottom-line revenues.

RR: It absolutely makes sense. From your perspective, which industries could gain the most from applying these kinds of generative AI technologies in their business? What business domains might get the maximum outcome out of this?

RE: I don’t know who doesn’t benefit, but I understand your question. I think that document-heavy companies with a lot of regulation — so financial services, inclusive of insurance, and healthcare for certain — [will benefit]. Because — and I know of customers who are doing this — when you can start using AI to process the petabytes of data out there around disease, what could be the potential outcome for cures, treatment, and admitting and treating patients? What happens when pharmaceuticals can start leveraging this technology to find new chemical compounds to do different work, or to make different drugs work?

But then you get into things like farming. I was recently talking with a company that does a lot of growing, and they said, You know, we hadn’t thought about it, but it looks like this could really help us. Looking at genetics and different compositions that could make plants more robust at different temperatures, or maybe optimize the water flow. There are so many applications. But I think that anywhere there’s an intensive amount of documentation that has to be processed can get a big win here.

And I think anywhere where there’s an opportunity to try new permutations that would take human beings years to do. You have to remember: Every instance is like having 100,000 eyes looking at something and processing it every second of every day, and getting smarter by every action and reaction that happens.

RR: It’s interesting to see how the future from movies is actually coming to our real world, how those Skynets and other networks are actually becoming our reality.

RE: It’s funny, I sometimes wonder, as I’ve gotten older, whether the movies that I’ve watched growing up that suggested certain things were possible — Was that the inspiration for so many people who are now making those things possible? It’s the people asking: Why can’t I do that? Why not? What if we did that? What if we tried this other thing?

And there have been a lot of failures, but people learn from them. And so they’re no longer failures; they’re building blocks. If you leave it alone, it’s a failure. I just want to know when someone’s going to make a proper lightsaber.

RR: That’s true. My next question would be about intellectual property rights. Generative AI basically generates the content. And the reasonable question is who owns the content under these circumstances? Is it the person who created the prompt or rather the people who built the AI algorithm? How do you think that growing interest in the generative AI topic might lead to changes in laws or impact the overall understanding of what intellectual property means?

RE: It’s a really interesting question and one we’re going to have to wrestle with, but we’ve been here before, right? We’ve found that legislation, especially in the US, will lag behind the technology. I think you’re going to see more and more of that because technology is just evolving so quickly. Things we can’t even contemplate right now — how do you legislate against it or regulate it to put some rails around it? I don’t think you can.

But I do think we can start thinking about — there’s a legal concept in the US called the reasonable man. So, what would the reasonable man do? I’ll give you an example. Thirty years ago, it wouldn’t be reasonable to think that somebody should pull out their cell phone and make a phone call because they saw a car accident. They wouldn’t. That’s not reasonable, because thirty years ago not everyone had phones. Now, with mobile phones being ubiquitous, it is more reasonable. So, maybe there’s a case to be made for liability. And so I think that what’s reasonable is something we have to look at. There are so many different ways to answer this. We can’t count on the governments to regulate. We can’t count on legislation. We have to do what’s reasonable. We have to have a reason for doing what we’re doing. And I think that we also have to have a purpose behind what we’re doing and why we want to do it. And then it’s a question of being brave enough to pull back and say, You know what? We went down this road. There’s some really interesting stuff out there, but no, we’re not going to do that because it’s raising this issue or it’s introducing certain bias.

And when I start looking at intellectual property, when I think about it through that lens, intellectual property gets really dicey. For instance, you and I are recording this conversation. I’m sharing information with you and you’re sharing information with me with the idea that it’ll be put into some type of interview and some results, but we both agreed to that in advance.

Who owns the intellectual property to this conversation? I would argue neither of us, right? If I’m having a conversation with you and two friends at a coffee shop, we’re exchanging ideas freely. There’s no intellectual property. However, if somebody walks by and hears the four of us talking about some great idea, takes it, runs out, and gets it published, is it their intellectual property now?

So, I don’t think that the conversation is unique to AI or unique to technology itself. I think the question is more around the uniqueness of the idea. Number one: I have to say that I think intellectual property as a term is way overused right now. Everybody thinks that they’ve got something novel. The question is, Is it in fact novel and can it be defended in a court? And if it’s not, then it’s not really intellectual property. It’s just a cool idea you thought you had.

Answering your question more directly, when I feed that intellectual property (my ideas) into a ChatGPT interface, and it goes out and pulls everything back for me, can I publish that without a disclaimer — say, “found using ChatGPT”? I don’t think that’s ethical. I think I have to [include one]. So, there’s that piece of it.

The other side, I think, is that we’ve lived with this idea forever when you think about music and songwriting. If I sit down at a piano and play a piece by Bach that I’ve gotten really, really good at, I’m still just playing Bach’s music; it’s his. I’m just using it.

However, if I go to the next level and I’m thinking hmm… I’m going to write my own music but I’ve been influenced by Bach. How much of that needs to be mine? How much can I say is mine and how much can I say is his? I think where we’ve gotten to is the point where it’s just irrelevant. The creator is the person who had the idea, or the music in this case, and they took into account various influences. I don’t think you can avoid influences. None of us live in a bubble.

RR: I really like your analogy of music and intellectual property in IT — that makes total sense.

RE: You know what’s interesting about music too is that, let’s take popular music: there’s basic rhythm and blues, and then you have your one chord, your four chord, your five chord. That’s just a standard framework, let’s call it. Is that any different from a technology stack or ChatGPT as the framework?

But now what becomes mine is I use that one, four, and five, and I make nuanced melodies over it or certain harmonies, or I change the words, or I could take one person’s song and change one or two notes and it becomes mine. I think that we have to start thinking about AI in that regard.

RR: Can you give a few examples of how you use ChatGPT or any other NLP tools right now? Have they become part of your daily life, or not yet?

RE: Not so much daily life, although I’m starting to see some implications. What I’m finding fascinating is the interrogation. I’m sure you’ve seen the big example people put out: pirates walking out of a painting. So, if I said, I want a picture. Draw me or paint me, create me a picture of pirates walking out of a painting, I get 72,000 versions of it. Now, this is where I go down that line of getting better questions to get better results. Design me, create a picture of four pirates walking out of a painting, one with a parrot, one with a peg leg, one with an eye patch. I want it to be red velvet with a diamond, and I want one to be choking the other one. Now I get a different output.

So, when we start thinking about how that applies in business, it’s design me a system workflow that makes my document processing better. You’re going to get something that’s, you know, this wide. It will be good, but it won’t have any context.

If you can introduce, design me a process that extracts the salient data points (and here’s what they are) from this mortgage document (which then results in processing) and map it to these data sources that can then validate the application, you’re going to get a much better result.

Those are the things that I’m starting to play with now with the design teams. It’s really to elevate their importance and stress to our customers that we have to get to a point where we understand those questions. Because again, it’s not Can I build you this app? The app can build the app, but we need to tell it what to build, and we need to be very, very specific.
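To make that vague-versus-specific contrast concrete, here is a minimal sketch in Python, assuming the OpenAI Python SDK (v1+); the model name, the mortgage field list, and the placeholder document text are illustrative assumptions, not details from the conversation.

# A minimal sketch of the "vague vs. specific prompt" idea.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

document_text = "...full text of a mortgage application..."  # placeholder, not a real document

# Vague prompt: broad request, so the answer is broad and lacks context.
vague = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": f"Improve my document processing:\n{document_text}",
    }],
)

# Specific prompt: names the salient data points and the expected output shape.
specific = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Extract these fields from the mortgage document below and return them "
            "as JSON with exactly these keys: applicant_name, loan_amount, "
            "property_address, annual_income, loan_term_years.\n\n"
            f"{document_text}"
        ),
    }],
)

print(vague.choices[0].message.content)
print(specific.choices[0].message.content)

The second call spells out which data points matter and what shape the output should take, which is the kind of specificity Ron describes for getting a usable result rather than a fire hose.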

RR: So, basically, this is just another way to write code instead of following a certain script. You just need to be strong at writing proper prompts for AI and use it instead of [IntelliJ] IDEA or any other console. You can use ChatGPT or anything similar for creating new apps or new software products.

RE: Yeah, and there’s a danger that comes with that. And I think we’ve seen this, right? We used to have to go and write mainframe code, then PHP was all the rage for a while, then we got a lot more into low code/no code. This is just the next step. However, results come back to the user so fast that it becomes very, very easy to say problem solved and move on, and not put the proper diligence into the potential implications of what could have come out of that. I think there’s an old story that you can look up about Microsoft and their first AI-powered bot. It developed some rather interesting tendencies very quickly, because some things probably weren’t fully explored or understood.

We have to make every effort to continuously be vigilant in understanding what the technology is doing and investigating, interrogating: Is it doing what it was intended to do? Is it pushing us closer to our purpose?

RR: Probably the last question from my side for the day. We saw that this boom in generative AI was mostly created by OpenAI with their ChatGPT, and now it’s being picked up by Microsoft, which is trying to integrate it into multiple products like Bing; soon we’ll see it in Microsoft Office. What do you think will be the response from Google and Amazon? What solutions can we expect from the global cloud providers, and in what form?

RE: I think you’re going to see more of what you’ve been seeing. Interestingly, this week I saw a post that AWS has now opened a generative AI startup accelerator that obviously will run in AWS. Google also this week had a bunch of announcements at their data summit around incorporating gen apps and generative AI into various components of Google Cloud Platform.

I think the distinguishing point you’re going to see is what you’ve seen already out of the major hyperscalers. Some are going to be very integrated into their product set — Microsoft has its 365 platform, so everything gets integrated there. Keeping in mind, of course, that Microsoft also has OpenAI separate from its core platform. So, there are elements that you can have within, and there are things that you can still do with OpenAI outside. But they’re looking at it as How can it augment their licensing?

A hundred and eighty degrees on the other side of the spectrum is Google, which says, We’re not going to necessarily force you into a product. We’re going to open the doors and say everything is open, come use our stuff. Now what they’re going to do, and Google’s always taken this approach, is [to say] We’re going to give you value by giving you access to the models we’ve trained and accelerate your ability to adopt certain things.

So, I think it’s just a question, as a company: Where am I in my maturity curve? Where am I in my commitment to certain things?

If I’m a Microsoft shop with a heavy licensing presence of 365, then my natural path would be to continue down an OpenAI integration and kind of push Microsoft to help you explore what you can do. Certainly, firms like ours do that.

On the flip side, if I’m more in a What’s possible? What could I do here? mode and I want to be a little more nimble, then maybe I look at a Google and what I can do with them. And in many cases, what’s the best of breed coming out of AWS, Google, and Microsoft, and how do I make them work together?

RR: Thank you very much, Ron. I really appreciate you joining this interview format. I’m looking forward to our upcoming endeavors. Thank you!

RE: Same here. Thank you very much. I appreciate the invite.
