AI and Cybersecurity: Speed Bumps, Training, and Communication

Overview

If you are hesitant to embrace AI tools because of cybersecurity concerns, don't miss this illuminating presentation from Dennis Legori, who will explain why the risk of not using AI is greater than the risk of using it. What guardrails should you have in place to ensure AI is used for good and does not create unnecessary risk? How should you think about completing a privacy impact analysis and risk assessment as your use of AI tools evolves? How can security leaders add speed bumps and education to provide caution and risk mitigation as needed? Dennis will share his perspective on these and other important issues you need to consider as you widely adopt AI tools in your organization.

Transcript

00:00 [This transcript was auto-generated.]
00:08
Hi, I'm Dennis Legori, with Carrier's cybersecurity team, and I'm here to talk about the business and security impact of artificial intelligence tools. Many of you who are listening will be in this position: you and your company have either embraced artificial intelligence tools, or you may be skeptical of them. Some of you may have even blocked artificial intelligence tools, and others, myself included, may be trying to find the right balance between managing the risk and managing the business impact of artificial intelligence tools.

A little bit about myself. I'm Dennis Legori, and I'm at Carrier. I lived in England for eight years and in India for 16 years before coming to the United States in 1999. I have an MBA from Southern Illinois University and a master's in public administration. Before joining cybersecurity, I worked for 10 years at a manufacturing company, where I helped increase revenues from $5 million in 2003 to $50 million ten years later. Then I pivoted to cybersecurity, where I have over 10 years of experience, including three years at Carrier, and the security awareness program at Carrier achieved six industry awards between 2020 and 2023. I do encourage you to connect with me on LinkedIn, and if you have any questions related to this presentation, send your feedback or questions and I will be sure to respond.

For one of the training sessions we conducted, we did some research and referenced an article on the Beyond Identity survey. What the survey asked is: would ChatGPT be used for cyberattacks in 2023? 60% of the participants agreed that ChatGPT would be used for cyberattacks. From a security perspective, it's important for security teams to understand and embrace AI tools, because if attackers use them, cybersecurity teams need enough knowledge to understand those types of cyberattacks. Another question asked was whether artificial intelligence's benefits in cybersecurity outweigh its drawbacks. 55% agreed that the benefits outweigh the drawbacks, and only 11% disagreed, which indicates that the industry believes AI's benefits will outweigh the risks, especially when it comes to cybersecurity.

As we look at the different approaches that companies adopt, we identified three main categories. The first set of companies adopt a no-risk approach: they may decide to completely block tools like ChatGPT, perhaps out of fear of unintentional data exposure or other associated risks. That may reduce the risk of data exposure, of bias, and of getting wrong outputs from AI tools. However, it brings a significantly increased risk of technological obsolescence. AI is rapidly evolving, and whether it's users or companies, those who don't adapt to the change can quickly fall behind. That's why the no-risk approach carries that added risk of technological obsolescence.

The other category, especially common with startup companies, is the unmanaged risk approach, where the priority is innovation. These companies quickly embrace any AI tools, encourage their users to use them, and give no consideration to the risk. While these companies may innovate, there are also the risks that come with the reckless use of AI tools.
These could include unintentional data exposure; bias; the so-called garbage in, garbage out (AI tools depend on the input to generate an output); accuracy problems; and errors in code. All of these risks arise as these companies innovate, and those risks have to be considered.

Then there are companies like Carrier that take a managed risk approach, one that balances risk and innovation. They encourage users to embrace AI tools, but they also focus on other aspects, such as how to reduce risk: increasing training opportunities, making users aware of the policy, and creating policies targeted at the use of AI tools, so that the risk is addressed and users are made aware of it. Right now, the managed risk approach seems to be the way to go, because it balances both risk and innovation: the risk is reduced, but companies can still innovate.

The other consideration when incorporating AI applications is how safe the application is. To determine that, a series of steps have to be conducted. The first is to perform a risk assessment by asking: how is the AI application, or the company that makes the application, protected against cyber threats? What would happen if the application was compromised? What would happen if the company behind the application had a major compromise? What security controls are in place to protect against a data breach or impact to the business? Those questions are answered by performing a risk assessment.

The next step is an architectural review: what other applications are connected? For example, Zoom is a communication tool, and there is an AI application called fireflies.ai, which has the ability to transcribe notes and send a summary of the notes to all of the participants in a Zoom meeting. The catch with fireflies.ai is that once it is attached to Zoom, it changes the architecture: previously, Zoom did not connect to another application, and now it does. What happens if one application is compromised? How would that impact the other application? That's why it's important to conduct an architectural review.

Finally, there is the privacy impact analysis. What data does the AI application collect? What does the company do with the data? Does it sell the data? Does it collect the data and use it to make business decisions? Are those decisions or outputs shared with others? This is information companies should understand when evaluating AI applications.

Here lies the challenge: in the last few years, and especially since OpenAI launched ChatGPT last November, there have been so many different applications, either built on OpenAI or on other AI tools, and there's a lot of funding behind those companies, so they're popping up very quickly. Sometimes it can be hard to do an analysis of all of these companies, and that's where education, training, and policies are critical. Users should understand the risk of downloading applications. Is it a legitimate application? Attackers are also creating fake applications out there: applications designed just to collect data and misuse it, or applications used to get access to systems in an organization. So as a best practice, whenever possible, consider performing a risk assessment, consider conducting an architectural review, and definitely conduct a privacy impact analysis.
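To make that three-step vetting workflow concrete, here is a minimal sketch in Python of an approval gate that only clears an AI application for business use when all three reviews pass. The class, field names, and example application are hypothetical illustrations, not Carrier's actual tooling or process.

```python
# Minimal sketch of an AI-application vetting gate based on the three
# reviews described above. Names and the example app are illustrative.
from dataclasses import dataclass, field


@dataclass
class VettingResult:
    app_name: str
    # Risk assessment: how are the app and its vendor protected against
    # cyber threats, and what happens if either is compromised?
    risk_assessment_passed: bool = False
    # Architectural review: what other applications does it connect to,
    # and what is the blast radius if one side is compromised?
    architectural_review_passed: bool = False
    # Privacy impact analysis: what data does it collect, and does the
    # vendor sell, retain, or share that data?
    privacy_impact_passed: bool = False
    notes: list[str] = field(default_factory=list)

    def approved(self) -> bool:
        """Clear the app for business use only if all three reviews pass."""
        return (self.risk_assessment_passed
                and self.architectural_review_passed
                and self.privacy_impact_passed)


# Example: a meeting-transcription add-on fails its architectural review
# because compromising it would expose the connected meeting platform.
result = VettingResult(app_name="example-transcriber")
result.risk_assessment_passed = True
result.privacy_impact_passed = True
result.notes.append("Connects to the meeting platform; compromise of one "
                    "application would impact the other.")
print(result.approved())  # False -> do not approve for business use
```

A real review would capture evidence behind each answer; the point of the sketch is simply that approval is the conjunction of all three reviews.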
 
08:21
Now, there are other factors to consider when companies decide to embrace artificial intelligence. One of the first things a company should do is update the relevant policies, be it the security policy or the acceptable use policy, and put information there highlighting the risk of sensitive information being uploaded, specifically prohibiting users from uploading sensitive information to AI tools. It should also train users on how to safely use AI tools. They should be trained not just on how to use the tools, but on what information not to put into them. They should also be made aware of the output of these tools: the validity of the results has to be checked, and if it's software code, the code has to be checked as well. As a best practice at Carrier, what we tell users is: if you don't feel comfortable sharing information with your closest competitor, do not share it on an AI tool, especially an open tool that other people or other companies can access.

With that said, ChatGPT has now released an enterprise solution, as have other companies. If it's an enterprise solution, during the risk assessment you should be able to determine what the company does with the data and how it protects the data; that's what the privacy impact analysis, the risk assessment, and the architectural review will establish. If they check all the boxes, then you can safely use these tools for business purposes. The only other thing to consider is: how much would that cost? All of those aspects fall under the education umbrella, because it's important that users are trained on these factors so that they are aware of the steps that need to be taken, and so they avoid just downloading and using these tools without any guidance.

Finally, it is relatively inexpensive to deploy speed bumps, the so-called browser warnings. Today's modern browsers are able to detect when an application is an AI application, and there are tools out there that can capture that information. So having a page that cautions users and gives them a quick training message before they go to the AI application can greatly help with user awareness, and it can also help with training. Here we see an example of a speed bump, in other words, proceed with caution: when a user tries to go to chat.openai.com, they are prompted with a message asking, are you sure you want to visit this site? It categorizes the site as an AI and machine learning application. The page also has a link to the internet use policy, a link to best practices for using artificial intelligence applications, and some safety tips, and all of these are links to a SharePoint site. When a user clicks Continue, they are acknowledging that they are not violating the company's policies by using these AI tools. This step not only provides a quick training opportunity for the user, but it also helps transfer some of the risk to the user, so that the user is aware that by proceeding, they are acknowledging that they will not upload sensitive information to the AI site. The last benefit is that the company is able to collect information in the background on which AI applications users have been going to, and if there is any instance of data exposure, the information can be tied back to the user: where they went, what they did, and the fact that they were at least warned before proceeding.
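As a rough illustration of the mechanics behind such a speed bump, here is a minimal sketch written as a mitmproxy addon: it intercepts requests to a small hard-coded list of AI domains, serves a caution page with policy links, logs every visit, and lets users through once they have acknowledged via a cookie. The domain list, cookie name, and SharePoint URLs are assumptions for illustration; an enterprise deployment would instead use a secure web gateway's AI/ML URL category.

```python
# Minimal "speed bump" sketch as a mitmproxy addon.
# Run with: mitmdump -s speedbump.py
# Domain list, cookie name, and policy URLs are illustrative assumptions.
import logging

from mitmproxy import http

AI_DOMAINS = {"chat.openai.com", "gemini.google.com"}  # stand-in for an AI/ML URL category
ACK_COOKIE = "ai_speedbump_ack"  # set once the user clicks Continue

CAUTION_PAGE = b"""<html><body>
<h1>AI and Machine Learning application: proceed with caution</h1>
<p>Are you sure you want to visit this site?</p>
<ul>
  <li><a href="https://example.sharepoint.com/internet-use-policy">Internet use policy</a></li>
  <li><a href="https://example.sharepoint.com/ai-best-practices">Best practices for AI applications</a></li>
</ul>
<p>By clicking Continue you acknowledge that you will not upload
sensitive information to this site.</p>
<p><a href="?ack=1">Continue</a></p>
</body></html>"""


def request(flow: http.HTTPFlow) -> None:
    if flow.request.pretty_host not in AI_DOMAINS:
        return

    # Log every visit so any later data exposure can be tied back to a
    # user who was warned before proceeding.
    logging.info("AI site visit: %s%s", flow.request.pretty_host, flow.request.path)

    if ACK_COOKIE in flow.request.cookies:
        return  # already acknowledged; let the request through

    if flow.request.query.get("ack") == "1":
        # User clicked Continue: record the acknowledgement in a cookie
        # and redirect back to the page they originally asked for.
        flow.response = http.Response.make(
            302, b"",
            {"Location": flow.request.path.split("?")[0],
             "Set-Cookie": f"{ACK_COOKIE}=1; Path=/"},
        )
        return

    # First visit without acknowledgement: serve the caution page instead.
    flow.response = http.Response.make(
        200, CAUTION_PAGE, {"Content-Type": "text/html"}
    )
```

Note that intercepting HTTPS this way requires the proxy's CA certificate to be trusted on managed endpoints, which is why this pattern usually lives in an enterprise secure web gateway rather than a standalone script.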
Finally, as you think about this and about business decisions, whether you're a leader or a user, here are some points you should consider, and they apply to everyone. When it comes to AI tools, what is the risk? Do the benefits of AI outweigh the risk? In most cases, the answer is yes: when carefully used, AI tools will have significant benefits, especially if they're used safely.

The other consideration is budget. While some AI tools are free, most tools used for business will incur a cost, and as you develop your AI strategy, it's important to understand how much it is going to cost and what the potential return on investment is.

The next consideration, especially for larger organizations, is to consider creating an AI council or an AI task force: a group of people who will explore the business opportunities. Those people will also assess the risks, and they will decide how to communicate and which tools to adopt. So an AI council or AI task force should be able to carry out those business tasks and help deliver a business decision.

The other thing to consider when it comes to business decisions is to remind yourself that users and customers will make business decisions based on the output from AI tools, similar to Google today. Many users go on Google to perform a search, or use Google to check reviews of companies. Many companies, especially small businesses, have a significant presence on Google: they want to appeal to the Google search engine, and they want to make sure their company presents a positive image in a Google search. In the same way, users will make business decisions based on the output from AI tools, and companies have to think about that and see how they can adapt. For example, if a company issues press releases about key updates, and those press releases make it to the internet, this helps develop the company's brand. An AI tool is able to capture the information put out there through those press releases, and it is able to give an output based on a user's input when it comes to information pertaining to that company. We predict that businesses and people will be making business decisions based on output from AI tools.

And finally, there are the use cases. Unless the business or the user can clearly identify opportunities, AI is useless. It's important to remember that AI can only work when information is put into an AI tool: a series of inputs into an AI tool will generate the necessary output. That's where it's important for the business, whether it's sales, marketing, communications, or finance, to understand how it can use AI to enhance the business. It's really important to understand what the different use cases are, and to build the business cases that will justify whether the company should invest in AI tools, whether the risks outweigh the benefits, or whether there may not be a return on investment.
Clearly, looking at and exploring use cases is not just the biggest opportunity but also one of the biggest challenges. Companies that can identify those use cases, that can embrace artificial intelligence and appropriately pursue those different use cases, will be able to create a solid business case where they can invest in upcoming AI tools, be it from Google or from Microsoft: enterprise applications that can get very expensive, not just because of the tools but because of the resources that are required.

With that, I hope you liked the summary of our topic today. Feel free to reach out to me on LinkedIn at Dennis Legori and provide any feedback or questions that you may have, and I'd be happy to share any updates as we evolve our AI strategy. With that, I'd like to thank you and the Foundry team for the opportunity to present. Thank you.