AI in iGaming: Make the Right Bet

In a recent conversation, Roman Reznikov, VP of Delivery, Digital Segment, at Intellias, and Pierre Lindh, Co-founder and Managing Director at Next.io, discuss the dynamic intersection of generative AI and the iGaming industry.

December 06, 2023 · 21 min read

We joined Pierre Lindh, Co-founder and Managing Director at Next.io, during a recent episode of the Next.io Spotlight podcast. Watch the full video to explore the benefits and challenges of generative AI for online gaming. Discover which technologies shape the future of interactive entertainment, learn how to empower responsible gaming through AI, and dive into AI business cases by Intellias.


Pierre Lindh: Roman Reznikov, Vice President of Delivery at Intellias. You have a PhD in Economics and a Master of Science in Finance; your CV is longer than my arm. It’s great to have you here, Roman!

Roman Reznikov: Thank you for inviting me.

Pierre Lindh: Intellias is a high-level and international technology provider. You focus a lot on asking how businesses can thrive using generative AI. It’s an interesting topic.

When I speak to organizations within the industry, many are aware of generative AI. Many individuals use ChatGPT, the most obvious example of generative AI. But many organizations are very much in the beginning; they are just trying to understand how their businesses can thrive using generative AI.

On one hand, we are very early in this because ChatGPT kicked off the current AI summer only nine months ago. On the other hand, AI products have been in development for many years.

I want to ask what feeling you get today about how far ahead companies are in their thinking about generative AI. Are most of them still in the starting blocks, or are many of them advanced?

Roman Reznikov: You can now find companies at both ends of the scale. Some companies do not consider generative AI at all; some even restrict employees from using ChatGPT for privacy or ethical reasons.

Other companies, especially smaller ones, are going all in on generative AI. They see it as an opportunity to disrupt the market and become one of the market leaders, and they are trying to find ways to use generative AI and its products.

We also see companies that are interested in generative AI but don’t know how to use it, or that use it in cases where there is no need for it.

There are specific patterns. Enterprises treat generative AI as something that can improve performance or reduce costs internally, while small companies, startups, and medium businesses see it as something that can bring them to the top of the market and add a competitive advantage.

Pierre Lindh: You’re saying that bigger companies focus on generative AI to determine how to cut costs. Smaller companies are trying to figure out how to use such tools to disrupt the market.

Roman Reznikov: Yeah, we see that.

Pierre Lindh: You mentioned some companies are reluctant to even look at generative AI products and tools. Is this a sure way of being disrupted? How important do you think generative AI products will be in the near future, and how important are they today?

Roman Reznikov: Generative AI, like other technologies that were at the peak of the hype cycle, such as blockchain, helps solve a certain set of problems or challenges more efficiently. Sometimes it helps companies differentiate. But it is not something companies should use in all circumstances.

We had a couple of customer requests to which we honestly replied: you don’t need it; this technology would not solve your problem; use a different toolset instead; the solution is already on the market.

On the one hand, generative AI provides many new opportunities and exciting ideas you can implement, specifically in personalization and visual content. On the other, we still see that not all companies know how to use it to solve the problems they have in their business.

That’s why we spend a lot of time with each client to understand whether an idea is feasible, whether a solution is already on the market, or whether someone is already using generative AI the way our client wants to. That helps us point our customers in the right direction so they get the maximum benefit out of this technology.

Pierre Lindh: The current AI state that we’re in now was kicked off by the launch of ChatGPT in December 2022. People got this massive Eureka moment when they tried out ChatGPT for the first time. What are some of the most interesting use cases you’ve seen in the field of generative AI since the launch of ChatGPT?

Roman Reznikov: Since we’re talking about topics closer to iGaming: the use case we are piloting now is commentary on sports events using GPT models. It works well for translating live commentary into different languages and for providing commentary during football matches, both as audio and as text. It helps sportsbook companies extend this kind of content.

We see a lot of personalization in iGaming, for example, skins or slot visuals that can be changed manually. There are implementations where, depending on user preferences, you see a different design. We have also observed experiments with virtual opponents in games. It could be real-money games where you play poker against a visual avatar generated by a GenAI toolset. These are the directions we see particular interest in right now.

Generative AI is also good for analytical purposes, for example, detecting gaming addiction. One experiment we’ve seen combines a GPT model with data science. By analyzing a person’s behavior, a data science model identifies whether that person shows signs of addiction. During certain game moments, a chatbot may also ask you questions; depending on the answers, it helps decide whether you have an addiction and whether you can continue playing. In some jurisdictions, you are not allowed to play if you have an addiction, or you are required to take pauses during the game.

That kind of tool is also helpful for fraud detection and anti-money-laundering. Still, this sits on the edge between generative AI and classic AI or data science; those are very close topics.
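The combination Roman describes, behavioral analytics plus chatbot check-ins feeding a single decision, can be sketched in a few lines. Everything here is illustrative: the feature names, thresholds, and weights are made up for the example, whereas a real responsible-gaming model would be trained and calibrated on labeled player data.

```python
from dataclasses import dataclass

@dataclass
class SessionStats:
    hours_played_7d: float   # total play time over the last 7 days
    deposits_7d: int         # number of deposits over the last 7 days
    night_sessions_7d: int   # sessions started between 00:00 and 06:00
    chatbot_flags: int       # concerning answers to in-game check-in questions

def risk_score(s: SessionStats) -> float:
    """Combine behavioral signals and chatbot answers into a 0..1 risk score.

    The weights below are hypothetical placeholders, not a validated model.
    """
    score = 0.0
    if s.hours_played_7d > 20:
        score += 0.3
    if s.deposits_7d > 10:
        score += 0.3
    if s.night_sessions_7d > 3:
        score += 0.2
    score += min(s.chatbot_flags * 0.1, 0.2)  # cap the chatbot contribution
    return min(score, 1.0)

def intervention(score: float) -> str:
    """Map the score to an action, e.g. a mandated pause in some jurisdictions."""
    if score >= 0.8:
        return "block_session"
    if score >= 0.5:
        return "suggest_pause"
    return "allow"
```

The thresholds in `intervention` encode the trade-off discussed later in the conversation: raising sensitivity catches more at-risk players but also produces more false positives.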

Pierre Lindh: On the responsible gambling front, two companies are pioneering how to detect potentially harmful play with the help of AI. Kindred Group is a publicly traded company committed to eliminating harmful gambling with AI, and Entain has launched its ARC system, which does something similar.

Here’s an interesting thing. Kindred launched its tool about two or three years ago and set a very ambitious goal of eliminating all harmful gambling. When Entain, which launched a similar system, was asked if it would also stop all harmful gambling, it made an interesting point: it is hard to eliminate all harm from gambling without generating a lot of false positives.

The problem is that if you turn up the sensitivity of this tool, you will also generate a lot of false positives: the AI will falsely flag people who show no signs of harmful play. So far, Kindred has been able to lower the revenue derived from harmful gambling, but they haven’t been able to eat into it that much yet. Presumably, they also want to avoid generating these false positives.

I assume that this is a tricky question to solve.

Roman Reznikov: Exactly. That’s the pitfall many companies face: you may have a brilliant idea for using generative AI, but then you realize it is more complex than it seems.

Whenever we work with our customers in this direction, we use small experiments and prototypes. It can be quite costly to invest heavily in a specific solution and only later realize it is less impressive than it was supposed to be. It’s not a straightforward solution, and it’s not easy to implement.

Pierre Lindh: For sure. With the generative AI and the LLMs we see, what are the general limitations of this approach to AI? Do you think these models will become more advanced and eventually be able to answer the most advanced questions, or are there limitations to how advanced these tools can become?

Roman Reznikov: I expect advancement not in the language models themselves but in the applications that use them. I hope to see it in our daily activities, in the office tools and other tools we use every day.

Speaking of large language models, there are two main limitations. One is the balance between a model’s creativity and its precision. Some models sound like robots but provide clear, structured answers; you clearly understand this is not a human being. Other models are more creative and sometimes deviate from reality in their answers, but they sound more natural and realistic. You need to find a balance and decide which one you prefer. The GPT model is very creative and good for creative tasks, but if you try to use it in healthcare, that might be pretty risky.

Another limitation is the data set you use with those GPT models. If you have good data, the outcome will be better. One of the new roles in companies that heavily use GPT or other language models is a kind of data validator. These are not data engineers: the goal is not to structure the data but to validate that there is no data that could corrupt the answers of the chatbot or tool.

Most generative AI models use existing content to create new content: existing visuals to create a unique visual, existing text or information to create new text. That may also lead to a situation where content quality declines because one generative AI model consumes another generative AI model’s output. That’s my feeling and belief.

In terms of how this could be used and the number of applications, it’s going to grow. It might make our life easier in some cases.

Pierre Lindh: And what do you think lies ahead of us? We are still at a time when most organizations need to catch up on using AI; most companies are just trying to figure out the space. Will organizations start relying more and more on AI in the near future? Where is the development heading? What are some obvious disruptive use cases for AI?

Roman Reznikov: Organizations will start using AI more and more. I would split this into two parts.

The first part is the use of AI by individuals in a business context. For example, software engineers already use copilots heavily, which helps boost productivity and performance. Our company invests a lot in this direction because we see real potential.

The second part is how organizations use AI in their business and how they enhance their products with it. There will likely be a significant boost here during the coming months, or maybe within a couple of years, because we see ecosystems developing around it.

Until now, you needed to think about connecting your applications via APIs and developing custom solutions, which required a certain level of complexity. Nowadays, we see Google, Microsoft, AWS, and others introducing AI products in their cloud environments.

It will be easier for companies to build their own applications, like with LEGO blocks, without over-engineering the solution: take a pre-built solution and make some minor customizations to address your business needs. That will significantly boost the usage of such toolsets in the future.

Pierre Lindh: Some people were amazed when using ChatGPT for the first time and saw how it could benefit them and their organizations. At the same time, a second cohort of people were terrified that their jobs might be disrupted.

Do you see specific jobs and positions being disrupted by AI, or do you think all existing work can leverage ChatGPT and other tools to do their job better?

Roman Reznikov: I believe some job titles will not exist in the future, but that’s not only because of ChatGPT or any other generative AI technology. Our business environment is changing, and some jobs we had 20 years ago are irrelevant today.

One of our customers created a role called “AI Ethical Engineer.” The goal was to ensure that any AI or generative AI toolset the company used would augment employees rather than replace them: AI should improve performance and make people more efficient, not substitute for specific roles. This kind of approach is quite sustainable, and it’s something I suggest organizations follow.

When introducing AI or generative AI toolsets in a company, the first thought might be, “I can cut costs because I will need fewer people.” But this position is misguided, as it creates resistance to AI adoption. We’ve seen the strikes in the Hollywood film industry. Organizations should avoid such situations and treat generative AI as something that can help employees and make their lives easier, not something that can fully replace them.

Pierre Lindh: At the same time, the most obvious use case is chatbot customer support. If you create a chatbot good enough to solve, say, 90% of all customer support requests, you will naturally need only 10% of the workforce. There are still cases where employees will be affected; it seems inevitable.

Roman Reznikov: Those are cases where employees need to transform their jobs. Talking about support, we’ve seen that AI helps scale, but you still need people to handle some of the cases. Service desk L1 support shifts toward the L2 level because an AI chatbot can execute some tasks; the company then upskills the people in those positions to L2 and L3 support.

That’s how you can transform the jobs of some of your employees by introducing a new toolset. Still, I have yet to see a successful complete substitution of L1 support with GPT. There are a couple of cases, but the customer service they deliver is terrible, so I wouldn’t suggest doing that. To build a great customer experience, you still need a human touch.

Pierre Lindh: Another interesting area to talk about within the latest LLM and generative AI push is privacy. Again, returning to the first reactions to ChatGPT: we can do incredible things with this tool. But at some point, people started waking up to the fact that, wait a minute, what are we inputting into ChatGPT? We would never want anyone to get their hands on sensitive material. And there have been a couple of alleged cases where information input into an LLM suddenly spread to other places.

What’s your approach to privacy within AI? How do you combat these two factors? On the one hand, you want to leverage the power of ChatGPT, but then, on the other hand, you can’t disclose sensitive company information to this tool. How do you marry the two?

Roman Reznikov: Let’s split this question into two parts. The first part is how individuals use ChatGPT, and here the recommendation is straightforward: don’t put sensitive information into it, just as you wouldn’t post it on Facebook. As for a company’s security policies, you need to account for the fact that people may misuse ChatGPT or any similar tool.

The second part is how corporations use generative AI or large language models. As mentioned, most of those solutions are built from predefined blocks on large cloud platforms, plus your own data set. That data set already sits in some AWS, Google, or Microsoft cloud, through which you can manage how your data is used; you remain the owner of the data. It’s the same approach many companies apply to most business operations.

It’s a bad idea to put sensitive business information into the public ChatGPT; that creates real risk for the company. But with custom solutions, your applications use your own data set and connect to a large language model within your cloud environment. That’s reasonable from a security perspective.

Pierre Lindh: I heard about another interesting solution for using ChatGPT with sensitive information without exposing that information to anyone else. There is a specific plugin: whenever you mention your company name or something identifying, the plugin changes the name to a placeholder and feeds the text to GPT. The answer then passes back through the plugin, which converts the placeholders back into the original words and returns the result to the employee. Problem solved.

Roman Reznikov: That’s an interesting solution. It does not solve the problem entirely, but it sounds creative.
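The substitution approach Pierre describes is essentially a redaction proxy around the LLM call. A minimal sketch follows; the company and project names, the placeholder format, and the `ask_llm` wrapper are all made up for illustration, and a production tool would also handle variants, inflections, and many more entity types (which is partly why, as Roman notes, it doesn’t solve the problem entirely).

```python
# Hypothetical mapping of sensitive terms to neutral placeholders.
SENSITIVE_TERMS = {
    "Acme Casino": "COMPANY_1",      # made-up company name
    "Project Phoenix": "PROJECT_1",  # made-up internal project
}

def redact(text: str) -> str:
    """Replace sensitive terms with placeholders before sending to the LLM."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = text.replace(term, placeholder)
    return text

def restore(text: str) -> str:
    """Map placeholders in the LLM's answer back to the original terms."""
    for term, placeholder in SENSITIVE_TERMS.items():
        text = text.replace(placeholder, term)
    return text

def ask_llm(prompt: str, llm) -> str:
    # `llm` stands in for any chat-completion call; the provider only
    # ever sees the redacted prompt, never the original terms.
    return restore(llm(redact(prompt)))
```

The weak point is exactly what Roman hints at: the surrounding context (figures, dates, strategy details) still leaves the provider’s side, so redacting the names alone is creative but not a complete fix.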

Pierre Lindh: Some of the companies I speak to are creating task forces to map the current landscape of generative AI, the existing tools, possibilities, and options, so they can decide what direction to take in this field.

However, these task forces usually consist of people who are not necessarily AI experts; they might be AI enthusiasts or simply interested in the field, so their research is fairly basic. At Intellias, you can act as an intermediary, coming into an organization and having those discussions to understand what approach to AI it should and shouldn’t take.

Can you talk more about interesting use cases that Intellias has produced in this field so far? What have you come up with?

Roman Reznikov: The first solutions we started building for our customers were support-related. Then we started implementing AI solutions for internal process automation, or intelligent automation. Our team had already practiced this: we helped our clients automate business processes, and we started adding AI to specific areas where it makes sense.

Among the interesting ones was an assessment of company mentions in public media, comments, and social networks. Our client is a large, well-known company, and monitoring their brand in the public space is essential. We assessed different sources, Facebook, Twitter, and other media, considering both articles and comments, to show the client how different actions and forms of public presence impact their brand and what works best. As a result, the company based its PR strategy on our assessment.

We had a couple of implementations in the automotive space. One was an assistant for truck drivers, mostly focused on minimizing fuel consumption; it was the kind of recommendation agent we implemented.

We used similar techniques in the AgriTech domain, on the edge between generative AI and large language models. We helped a neural network learn patterns of how different crops grow in vertical farms and gave farm owners recommendations so they could make informed choices. For example, if you are considering investing more in lighting, our solution offers probabilities and estimates of the likely outcomes of that investment.

We have used it in iGaming a couple of times. As I mentioned, it’s about personalization: adjusting the gameplay depending on the player’s preferences or previous choices. We also ran the experiment I mentioned, still a pilot, on detecting addiction in certain players. In the iGaming space, we see generative AI mostly in two directions: visual content and contextual help.

We had a project where we helped a company generate live commentary on different sports events and improve the experience of their betting system. Mostly it was online tutorials and contextual help when someone had a specific issue with the game or navigation. The solution also suggested tips.


Pierre Lindh: Many game suppliers are using tools like Midjourney to create visuals for their games. That has helped a lot of game developers increase their output.

Are there ways to use these tools more efficiently than that? Currently, designers just use Midjourney the way you use ChatGPT: they create the asset, and it goes into the game. Are there ways to make this process more efficient or improve it somehow?

Roman Reznikov: Right now, it depends mostly on the architecture of the online casino. Some are flexible and allow you to change the complete gameplay quickly, but some architectures are older, and changing the view requires significant changes to the code.

With a flexible architecture, several presets can be applied in different scenarios, depending on the user’s play style, awards history, and game preferences. Each user can get a unique experience and unique visuals while playing. But enabling this requires challenging work to build the right architecture. Companies should consider such scenarios and whether their solutions are flexible enough to accommodate generative AI. Right now, whoever has the more flexible architecture wins.

Pierre Lindh: The first wave of leveraging generative AI tools in game production was using Midjourney to produce graphical assets faster. Personalization is where things are heading now. Do you see other examples of personalization within games?

Roman Reznikov: More on an experimental level. For example, a couple of companies have tried metaverse lobbies for online casinos, where you can use generative AI to create your avatar. It may also be used to create a virtual opponent in games like poker, or avatars generated from your photo.

This is not widespread yet, but we see a couple of experiments that look quite interesting. It brings gambling and real-money games closer to casual games.

Pierre Lindh: We spoke about this just before we started recording, but as a layman here, I assume one of the biggest challenges operators face today is staying compliant.

You have 100 different regulations with 100 different rule sets, which means many headaches go into ensuring all your marketing campaigns and operations are compliant. You can face millions of euros in fines if you make one mistake.

If you could take the rule sets, the regulations, and all the historical findings from all the regulators, feed that into an AI model, and train it, then the model would understand which campaigns are compliant and which are not.

Let’s say I want to create a marketing campaign in Colombia. I should be able to ask this compliance AI model whether my campaign is compliant. Is that within the scope and reach of what generative AI can do today, or is it a thing of the future?

Roman Reznikov: The solution is quite feasible, but it might have multiple pitfalls, the same as with detecting addicted players. Still, this is a billion-dollar solution; whoever designs it will solve a big market problem.

Pierre Lindh: That would also fit into the question of trust. Let’s say you created an AI model that focuses on whether your campaign is compliant. This is also a potential concern today: how much can you trust AI?

Roman Reznikov: The problem is that the cost of such a model’s mistake could be very high; it would cost the company a lot. It is essential to ensure this model is less creative and more precise.

Pierre Lindh: For sure. There are some obvious examples of tools used by individuals today; we mentioned ChatGPT, Midjourney, and so on. For you as an expert, what are some of the most interesting generative AI tools you’ve seen to date, besides the obvious ones?

Roman Reznikov: A lot of engineers use GitHub Copilot and Amazon CodeWhisperer. They have also started using Amazon Bedrock, a new AI service, and the Microsoft Azure OpenAI stack for engineering solutions.

Right now, Microsoft is introducing copilots to its Office tools. Microsoft Bing, for example, has become far more productive than it used to be. We expect more improvements in Office products like Microsoft Word, Microsoft PowerPoint, and so on.

You know, there are a lot of tools, and I keep notes on how to use them for various purposes, but nothing else stands out to me right now.

Pierre Lindh: What do you think about the next generation of ChatGPT, agent-based GPT? The idea is to give the LLM a task: say, I want to organize an event for 20 people in Italy; help me organize this event; we want it to involve horse riding.

AgentGPT will split this task into ten questions, answer them all, and then put the answers together into a project. It uses agents rather than a single question and a single answer: one question leads to ten questions, which lead to ten more. You can imagine a product manager working to solve a problem rather than one person answering one question. What do you think of that approach?

Roman Reznikov: Such a tool already exists; it’s called Auto-GPT, and it does precisely this kind of thing, like arranging events. I’m not a big user of it myself, but some of our business analysts use Auto-GPT for requirements or backlog creation. They input some details, and Auto-GPT provides an initial work breakdown structure to review and adjust.

Tools like ChatGPT will keep growing their ecosystems of plugins suited to different purposes.

Regarding your previous question, a great tool popped into my mind: Perplexity.ai. It’s a good alternative to ChatGPT, and it also links its sources, which is helpful for scientific research.

I use Fireflies.ai to record my meetings and calls. You add it to conferences and get perfectly structured meeting notes.

Pierre Lindh: I’m just looking forward to the Microsoft copilot, so I no longer have to reply to my emails.

Once, I had a conversation about the best way to achieve “inbox zero,” the state all professionals strive for, where all your emails are read and none are pending a reply. If you have a Microsoft copilot that can respond to your emails automatically, is that the solution to “inbox zero,” or is it a recipe for an “infinite inbox,” where I get an email, my AI replies to it, another AI responds to that reply, and an infinite number of emails fly back and forth between the AIs? What do you think, Roman?

Roman Reznikov: That’s a good question. Once, I created a reimbursement ticket and got an automatic reply, so I set up my mailbox to send an automatic response to that reply and then forgot about it for a couple of months. When my mailbox started crashing, I realized we had been sending each other a million messages back and forth. I also hit their system, as they received the ticket multiple times. So common sense is still required.
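The loop Roman ran into is exactly what the standard email auto-response conventions exist to prevent: an automated responder should stay silent when the incoming message is itself automatic. A minimal sketch of that guard, checking the `Auto-Submitted` header from RFC 3834, the Microsoft `X-Auto-Response-Suppress` header, and a few common subject prefixes (the specific prefix list here is illustrative, not exhaustive):

```python
def should_auto_reply(headers: dict) -> bool:
    """Return True only for messages that appear to be written by a human."""
    # RFC 3834: anything other than "no" marks auto-generated/auto-replied mail.
    auto_submitted = headers.get("Auto-Submitted", "no").lower()
    if auto_submitted != "no":
        return False
    # Microsoft-style suppression header set by Exchange/Outlook auto-replies.
    if headers.get("X-Auto-Response-Suppress"):
        return False
    # Heuristic fallback: common auto-reply subject prefixes.
    subject = headers.get("Subject", "").lower()
    if subject.startswith(("auto:", "automatic reply:", "out of office")):
        return False
    return True
```

Had either side of Roman’s ticket exchange applied a check like this before responding, the loop would have stopped after the first automatic message.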

Pierre Lindh: You would hope they would resolve the issues rather than send a million emails back and forth.

Do you have any predictions for the coming one to two years in the field of generative AI? Where are things heading? How big will the future GenAI development impact the way we work and the tools we use?

Roman Reznikov: The first part is personal use of generative AI tools. Those tools will help us be more efficient. Many companies have a lot of people whose core work is creating PowerPoint presentations, usually boring ones with the same stock images, so the content could look better. With Midjourney, DALL-E, or ChatGPT, you can create more compelling content. It doesn’t mean those tools will create the content instead of you; they will provide nice visuals and text you can use in your presentations.

The quality, variability, and speed of such content creation will improve a lot. I expect more tools to help us with meeting notes, emails, and similar areas. I still believe that if you try to fully generate PowerPoint presentations with any GPT tool without involving human beings, the quality will not be that great.

The second part is how organizations use those tools. I expect more new products in the B2B and B2C areas, namely more personalized experiences and higher-quality content. If two users go to the same website, they may see completely different versions just because they are from different countries. That kind of personalization will play out on the consumer side, and the same goes for B2B: businesses could become more efficient and bring new products to market.

Evaluating the impact of AI in general, we expect improved performance and an improved consumer experience thanks to better personalization and higher-quality visual content. Content will be cheaper, easier to access, and more unique; no longer the same picture of happy people across multiple PowerPoint presentations. Those are the two directions I would outline.

From a long-term perspective, I also see an impact on science. Unfortunately, when I was writing my PhD thesis, there was no ChatGPT. It’s not that I could have done my research through ChatGPT prompts, but it could have helped me find and analyze information and made me more productive as a researcher. AI may boost the science field, though not in the very short term.

Pierre Lindh: Earlier, we talked about Intellias and the help you provide to companies, particularly within the iGaming industry. Can you give a quick elevator pitch describing who should contact you and what types of problems Intellias can help with?

Roman Reznikov: We primarily work with small and medium businesses as well as large enterprises. We help boost a company’s productivity: we analyze its products and services on the market and think about how generative AI, or any other modern technology, could help improve its product and business KPIs.

As a company, we generally focus on the Mobility, Retail, FS&I, and Telecom industries. We also implement different technologies in industries like iGaming, Agriculture, Travel, and Logistics, experimenting with our customers to see where we can bring them value.

Pierre Lindh: Brilliant! We’ve been working with your colleague Olga and others for a while. It’s been a pleasure, and thank you so much for coming here today and sharing your perspectives and thoughts.

Roman Reznikov: Thank you for having me!
