Legally Bond

An Interview with Mario Ayoub, Artificial Intelligence

November 13, 2023 Bond, Schoeneck & King PLLC

In this episode of Legally Bond, Kim speaks with Bond cybersecurity and data privacy attorney Mario Ayoub. Mario discusses artificial intelligence and the potential impacts on business as well as the risks associated with the use of AI.

Speaker 1:

Hello and welcome to Legally Bond, a podcast presented by the law firm Bond, Schoeneck & King. I'm your host, Kim Wolf Price. Our guest on today's episode is Mario Ayoub, an associate in the business department who practices primarily in Bond's data privacy and cybersecurity practice. He's out of our Buffalo, New York office. Welcome to the podcast, Mario. You're joining us for your first solo episode.

Speaker 2:

Hi Kim, Thanks for having me back on. I guess I didn't bomb the other ones as much, so glad to be back.

Speaker 1:

Not even a little bit, so we're glad to have you. So when you joined us previously, it was to talk about cybersecurity and data privacy, which are your main practice areas. Today we're going to switch things up and talk about AI, artificial intelligence. Does that sound like a plan?

Speaker 2:

Sounds great.

Speaker 1:

Okay, all right. But before we get started, since this is your first solo full episode (now, you did a solo special episode, but this is a full one), I think it's important for listeners to learn a bit about our guests. So if you wouldn't mind, tell us just a little something about your background: where you grew up, family, whatever you'd like to share.

Speaker 2:

Sure, yeah. So I grew up in Buffalo. I've lived here my whole life up until law school. Early on, I was very interested in film production; that was a hobby of mine throughout high school. But soon after starting undergrad, with some of the different courses I took, I switched my focus a little more toward writing and advocacy. I had thought I was going to go into media content creation, but I adjusted course.

Speaker 1:

So you grew up in the Buffalo area and, as I mentioned, you're back in our Buffalo office. And where did you go to undergrad?

Speaker 2:

So I went to the University at Buffalo, where I got my degree in political science and international trade, which was a subset of the geography department. The geography department had a lot of connections with the political science department and had some great independent study opportunities.

Speaker 1:

That makes sense. You need to know the areas you're studying in political science, and those borders are big disputes.

Speaker 2:

Right, absolutely.

Speaker 1:

Fantastic, all right, so you alluded to this, but then you decided to maybe go for a little bit of warmth in law school and you headed down south. Where did you end up?

Speaker 2:

Yeah, so I chose to attend the University of North Carolina School of Law. I really liked their program and the faculty, and when I got down there I realized how close it was to Duke's campus and found out they had a joint program with Duke's public policy program. The public policy program really appealed to me because they had more detailed courses in national security, data science and other things I might not necessarily get at the law school. So I thought it would be a perfect combination with the law degree, especially since I was trying to focus on data privacy and security.

Speaker 1:

And was there a program through the two schools where credits could count for both, or did you have to do two full loads of study?

Speaker 2:

So yeah, it was two full programs, but when you did them together it shaved off a year, so I was able to get both done in four years. It did involve going up and down the highway between the two schools for four years. And I didn't realize what kind of basketball rivalry I was walking into; as a Northeasterner, I wasn't tuned in to how seriously those schools compete in basketball. It was a lot of fun.

Speaker 1:

Those are not names we use openly in Syracuse, New York, on a regular basis, the whole ACC thing. Well, that's great. And so you mentioned that you were interested in data privacy. So you knew in undergrad that that was something you might be interested in?

Speaker 2:

Yeah, I think when I got to Duke and started taking some of these data science courses, I started to realize how much data we're collecting, what we're using it for, and how powerful these data sets can be for analytics and for policymaking decisions. Then I started to really think about, well, where are we keeping this data and who has access to it? That led to some good conversations with privacy faculty at UNC. I took a few courses at UNC that were targeted toward privacy law, and it really appealed to me, so I wanted to try it out in practice, and I've been enjoying it ever since.

Speaker 1:

That's fantastic. Okay, so how did you get from UNC/Duke to Bond Buffalo?

Speaker 2:

Yeah. So before joining Bond, I was working remotely from Buffalo for a large firm, but I was really missing that in-person, in-office element, especially as a young attorney. I thought it was important to have more of that in-person community, especially as I was learning the ropes of being a lawyer. Looking at the options in Buffalo, I wanted to stay local, and Bond was building a full-fledged privacy and cybersecurity practice. I met with Jessica Copeland, the chair of the practice, and she had a very clear and strong vision of where she wanted to take the practice and grow the group. I thought it would be a great opportunity for me to stay local in my hometown while devoting most of my time and practice to an area that I really like. So it was a unique opportunity. I'm very glad to have joined Bond and have been enjoying it ever since.

Speaker 1:

That's great, and did you join a little over a year ago?

Speaker 2:

December of last year, so almost to one year.

Speaker 1:

Almost one year, very good. All right, well, happy early anniversary, then, for joining. (Thank you.) That's great; we're glad you made that choice. All right, so for our topic for today, I guess we should start with the most basic question: what is AI?

Speaker 2:

It's a basic question, but there are many answers that are not quite basic, so I'm going to try to keep this very high level, because we could spend a lot of time debating how to define it. We'll leave it at that. At its most basic level, AI is a branch of computer science concerned with creating technology to do things that would normally require human intelligence or human decision making. Many definitions spring from this, but most include four elements pretty consistently. The first is the use of technology, with specified objectives for that technology to achieve. The second is a level of autonomy by that technology in achieving those defined objectives: a human can set the goals, but the programmed algorithm takes over from there.

Speaker 2:

The third is the need for human input to train the technology and identify objectives to follow. This is changing as the technology advances; AI is now starting to set its own objectives, which is exciting and a little scary at the same time. But right now there's still a need for human input, both to supervise (we'll get into that a little bit in the regulatory context) and to set goals and objectives that make AI work for the human operators. Finally, the technology has to produce some sort of output. This could be performing a task, drafting language or text, solving problems, producing content, or helping with decision-making; there has to be an end output. All four elements have to work together with a large dataset. AI does not work without good quality data, usually a massive amount of it, to train the algorithm and to make sure things are running accurately.
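To make those four elements concrete, here is a minimal sketch in Python, using scikit-learn and toy, made-up data; the lending task is purely illustrative and not from the conversation. It shows technology with a specified objective, human-supplied training data, autonomy once trained, and an output:

```python
# A minimal sketch of the four elements, using scikit-learn and toy,
# made-up data (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# (1) Technology with a specified objective: predict whether a loan
#     application is approved (1) or denied (0) from two features.
# (3) Human input: a person supplies the labeled training examples.
X_train = [[45_000, 2], [80_000, 0], [30_000, 5], [95_000, 1]]  # [income, delinquencies]
y_train = [0, 1, 0, 1]  # human-provided labels

# (2) Autonomy: the human set the goal, but the learned rules are the
#     algorithm's own; nobody hand-coded the decision logic.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# (4) Output: a decision for a new, unseen applicant.
print(model.predict([[90_000, 0]]))  # [1], i.e., approve
```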

Speaker 1:

That's a lot that goes into that.

Speaker 2:

There is. There's a lot to it. And each subset of AI, like machine learning and deep learning, has its own unique subsets and definitions as well. I can't go into all of it right now, but keeping it high level: we're looking at technology, algorithms trained on large datasets, performing functions in a way that humans would.

Speaker 1:

There are a lot of different tools that perform these functions, those four elements, and not only are the tools constantly developing, but so are the uses of the technology and the parameters around how and when we can use it. Or should use it, I should say.

Speaker 2:

Right, yeah, the technology is changing rapidly. If you look throughout history, you'll see areas of rapid advancement, much like any other technology, and then some periods where there aren't as many breakthroughs. Fueled by the late-'90s big data boom, when companies and organizations started to realize the value of collecting and maintaining large data stores, we are now in an AI boom where advancements are much more rapid. You can see this with the iterations of ChatGPT; we're on the fourth iteration at this point. Each iteration comes with more advancements in response time, in the language that's used, and in the program's ability to catch on to context or maybe even human emotion. We're seeing a lot of advancement happen rapidly at this current stage.

Speaker 1:

Yeah, even with all the advancements and all the talk about it, I think there are some people who might think that they personally, or even their businesses, because of the type of business they're in, won't be impacted by AI. But that's likely not true, is it?

Speaker 2:

No, it's not true. Right now we're still in the early stages of implementation; we're not in the early stages of AI development. AI has been around for a long time. Just look at your Amazon account and all the product recommendations, which I really enjoy getting. I love my targeted ads. I know not everyone does, but I do.

Speaker 1:

I do, except maybe my budget doesn't.

Speaker 2:

Right. But we're reaching a point where this technology is available to small and medium-sized businesses, and the question is whether the use of AI is worth the risks for the efficiency gains a business could enjoy. Broadly speaking, AI can support three important business needs. The first is automating business processes; this is typically back-office work, administrative and financial activities, record keeping. The second is gaining insights through data analytics. A lot of company websites already have tracking built in, and AI can help harvest some of this data and build out analytic reports that would otherwise have needed a consultant or an employee to dig through the data. The third is engaging with customers and employees. When you open a lot of websites now, you will see a pop-up window for an automated chat powered by AI, and some of the user interfaces that make these websites interactive are powered by AI as well. Your business might be able to survive without using AI, but as competitors start to implement these efficiency tools, it becomes a question of: should I also join this wave so I can enjoy these efficiency gains and not get left behind?

Speaker 1:

It seems like, no matter what your industry, keeping some level of knowledge, keeping up to date on this, is really important.

Speaker 2:

Absolutely. We're seeing AI deployed across every industry: financial, manufacturing, customer service. It is really industry agnostic at this point.

Speaker 1:

I think with any new technology, anyone who might be an early adopter, or who has to make the change, wonders what the risks are. What are the major risks of using AI?

Speaker 2:

Some of the major risks; again, we'll keep this high level. To start, let's talk a little bit about accuracy. A recent study reported that 78 percent of the companies involved in the survey were getting less than 80 percent accuracy from the algorithmic decision tools they were using, whether that was software they developed themselves or vendor software. Eighty percent could be good or bad depending on the context. If it's 80 percent accuracy when you're using AI for, say, productivity decisions or certain back-end functions that don't impact consumer or customer data, maybe you're okay with that; maybe it's still giving you an efficiency boost. But take that 80 percent figure to an algorithm that's used to extend credit to someone, or to predict recidivism rates, or for use in law enforcement or other sensitive contexts, and 80 percent becomes a lot more problematic. So there are some real concerns regarding accuracy. Another example is ChatGPT. A recent Stanford and Berkeley study showed that a fair amount of drift is already occurring with ChatGPT, generating less and less accurate answers as time goes on. We're not sure exactly how this is occurring yet, because we don't have a lot of visibility into how the algorithms behind the OpenAI platform work, but we are noticing that with each generation, some accuracy issues are starting to accumulate within these generative AI platforms. That's accuracy. Bias can be introduced into an algorithm as well. Bias is a risk related to accuracy, and it has more to do with the data the model is trained on. Maybe the model is functioning accurately based on the data, but the data is flawed in some way. We're seeing this in a lot of facial recognition contexts, where a model is trained on photos of white, European facial features and then has problems making predictions for other racial groups. So there are some bias issues as well.
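As a hedged illustration of why a single headline accuracy number can hide the bias problem described here, a short Python sketch with entirely made-up predictions that computes accuracy overall and per demographic group:

```python
# Made-up predictions illustrating how overall accuracy can mask much
# worse performance for one group (the bias risk discussed above).
records = [
    # (group, prediction, actual)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0),
]

def accuracy(rows):
    return sum(pred == actual for _, pred, actual in rows) / len(rows)

print(f"Overall accuracy: {accuracy(records):.0%}")        # 70%
for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"Group {group} accuracy: {accuracy(rows):.0%}")  # A: 100%, B: 40%
```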

Speaker 2:

What causes AI accuracy issues? Poor quality or incomplete data sets; again, data is super important. Poorly written algorithms; maybe you have data that's great and accurate and someone's vetting the quality, but the algorithm doesn't consider certain aspects of the data and produces poor results. Machines also have difficulty understanding context, tone and other human emotion, so a user prompt can cause the AI model to veer off course. And, relatedly, poor user instructions: you can get wildly different results from ChatGPT, even when you're looking to generate the same thing, depending on how you draft the prompt to the machine.
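To illustrate the point about prompt design without assuming any particular vendor's API, here is a sketch in which a hypothetical generate() function stands in for any generative AI call; the only point is that wording changes in the prompt can constrain or loosen the output:

```python
# Hypothetical stand-in for a call to any generative AI service; the
# function name and canned behavior here are illustrative only.
def generate(prompt: str) -> str:
    # A real implementation would call a vendor API; this echo keeps
    # the sketch self-contained and runnable.
    return f"[model output for prompt: {prompt!r}]"

# Vague instructions leave the model free to improvise.
draft_a = generate("Write about liquidated damages.")

# A constrained prompt pins down jurisdiction, format and scope, and
# invites the model to admit uncertainty instead of guessing.
draft_b = generate(
    "In three sentences of plain English, summarize how New York courts "
    "generally treat liquidated damages clauses in commercial contracts. "
    "If you are not certain of a point, say so rather than guessing."
)

print(draft_a)
print(draft_b)
```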

Speaker 1:

I was just thinking, it probably doesn't pick up sarcasm. Probably not. And I've been known to use sarcasm sometimes. Only occasionally.

Speaker 2:

Another issue to keep in mind is transparency. This is changing a little bit as larger companies like Microsoft and Google start to create open source platforms for developers to work with their AI tools. But a lot of AI models are proprietary, and sometimes that's how the small and medium-sized companies that offer AI tools stay competitive in the market. While this is great from a product generation and IP standpoint, users are not able to effectively evaluate how their data is being used and how the model is treating their data, and it becomes a lot harder to check for things like bias and data accuracy. So there's been a recent push to open up development tools in a more open source manner, but there are still some transparency issues that are making it difficult for regulators to ensure that these tools are being used in an appropriate way. Another shortcoming and risk for AI is privacy. With personal and sensitive information that is put into a generative model such as ChatGPT, for instance, there's no guarantee that that information won't be generated for another user for something completely different.

Speaker 1:

That's right.

Speaker 2:

We'll probably talk a little bit more about best practices, but especially as attorneys, when you're using ChatGPT for generating documents or content, you have to be careful what you're putting into these platforms, because we don't necessarily know where that data is going and who might be able to retrieve it with a different prompt.

Speaker 2:

Then, finally, another risk is security. I've mentioned already the need to rely on large datasets, and this is a treasure trove for threat actors: bad actors who deploy ransomware or other malicious software to either steal data or extort companies by blocking access to their data and forcing them to pay a ransom. AI, although it is beneficial in many respects, does create these large troves of data, often sensitive and targeted information, that are very attractive to threat actors, who could turn this data around, sell it on the black market or use it for other fraudulent purposes. Any company that's implementing AI into its workflow needs to make sure it has security controls in place prior to implementing any AI tools. That's true even if you're relying on vendor software, where the data lives with the vendor; the onus is still on the end user to make sure the vendor is treating the data appropriately and adding the appropriate protections.

Speaker 1:

We always have to worry about our vendors making sure that they're doing the same thing we would do with our data.

Speaker 2:

Absolutely. When you're working with large datasets, that problem compounds even more.

Speaker 1:

Well, that's a lot of risks, but there always are with new technology as it develops, and there are other things in our profession that we can touch on a little bit. But there are also benefits to using AI, aren't there?

Speaker 2:

Absolutely. We mentioned a few of them already, but efficiency is certainly one major attraction. There's the ability, and this could be a good or bad thing, to automate some administrative tasks. Especially for smaller businesses and smaller companies that maybe cannot afford a full marketing department or a full content generation department, these tools can provide efficiency and cost-cutting measures. Another benefit is just the ability for anyone to use these tools. Right now, we're in a period of time where OpenAI is offering tools like ChatGPT for free. These tools, of course, are not actually free; they are being funded in part by venture capital. But right now anyone can access them and take advantage of the efficiency and cost-cutting gains that AI affords. So we're really in a great spot right now for smaller and mid-sized companies that maybe don't have as much to spend on advanced technology, because these tools are so readily available and at such a low cost.

Speaker 1:

Some of the things you mentioned sound like good ways for businesses to begin leveraging AI, even if they're not ready to fully dive in.

Speaker 2:

Absolutely. You can pick and choose right now what makes sense for your business. So, a couple of applications to consider. I mentioned content generation: if you have a storefront that relies on regular blog posts for SEO and marketing, you don't have to rely solely on your content writers. You can get an assist from ChatGPT or some other generative AI program to help keep the flow of content going and the conversations moving about your product. Similarly with CRM software: you can automate outreach campaigns, marketing campaigns, to your existing clients. Some software is starting to recommend when you should be reaching out to existing clients, what timeframes make sense, and what products they might be interested in. Of course, there are a lot of privacy implications that you'll have to consider before leveraging these tools, but they can automate some of the marketing and client outreach strategy that you might otherwise have to spend a lot of time doing.

Speaker 2:

For the legal industry, you're seeing a lot of legal research tools popping up. We can talk a little more in detail about how accurate those are, but we're starting to see some AI-assisted research that suggests cases to attorneys and helps with some of that initial fact-finding. And then, for businesses that want to leverage these tools, there's also the decision: do you want to rely on a vendor to provide them, or, for a larger company, do you want to develop your own models so you can have more control over the data and more control over the algorithm? That's always a decision when you're going to start implementing some of these tools. Do you want to bring that in-house or do you want to rely on a vendor?

Speaker 1:

Well, since you brought it up, I think we should pick on our own industry a little bit. There have been some instances in the law, I should say, where attorneys have submitted briefs to the courts using AI without first double-checking the tool's output and without citing their use of AI. Those didn't go very well, did they?

Speaker 2:

No, they didn't. Earlier this year, a New York federal judge sanctioned some lawyers who submitted a legal brief written by ChatGPT that, unfortunately, included citations to cases that did not exist.

Speaker 1:

And with real judges listed for the cases, even worse.

Speaker 2:

With real judges listed for the cases, yes. So, super problematic and embarrassing; if you're the client, I can't imagine how you feel. In addition to paying a $5,000 fine, these attorneys had to notify each of the judges who were falsely cited and identified as authors of the rulings about their use of those rulings. So a certain level of embarrassment there, having to admit that, especially since these judges weren't involved in the litigation at all. But the judge did note that he might not have punished these attorneys if they had come clean about using ChatGPT.

Speaker 2:

So there is a balance here to consider. Yes, ChatGPT might be helpful for drafting court documents and keeping costs lower for clients, but it has to come with notice; there has to be a way to notify the other parties that you're using ChatGPT. We're seeing some local rules put forth by judges saying that if you're going to use ChatGPT for briefs to the court, that's fine, but you have to disclose it. And obviously, as the example we just discussed shows, the onus is on the attorney to check that final work product. You cannot just take what is generated and throw it into your brief. There have to be checks in place, and that goes back to the rules of professional conduct; competency is the one that comes to the top of my mind. You have to understand the technology, understand that some of what it produces could be inaccurate, and take steps to make sure that what you're putting before the judge or the other party is accurate.

Speaker 2:

Earlier this week, a couple of colleagues and I were exploring ChatGPT, trying to look up case cites just to see how accurate it was. We asked ChatGPT to give us a list of recent New York cases regarding liquidated damages clauses, and very quickly it generated ten cases with great-looking citations, recent decisions, a little case summary for each. Very helpful. But when we started plugging these citations into Westlaw and other legal research tools, we quickly realized that they didn't exist. So we prompted ChatGPT again: are these real cases? The response was, no, but you didn't ask for real cases. (Oh, funny.) So again, that goes back to some of the limitations and risks of AI in understanding context. If a colleague of mine asked me to research a set of decisions, I wouldn't ask, well, would you like these decisions to be real or not? So we're still working through some of the prompt design and context clues for AI. But accuracy, especially in the legal realm, is a big concern, especially with generative AI confidently generating content that is just not accurate.
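One way to operationalize that duty to check the final work product is a verification pass over every AI-generated citation before anything is filed. This is a sketch under assumptions: lookup_citation() is a hypothetical stand-in for a query to a real legal research service, and the second citation below is deliberately fake:

```python
# Hypothetical verification pass over AI-generated citations.
# lookup_citation() stands in for a query to a real legal research
# database (Westlaw, Lexis, etc.); it is illustrative only.
KNOWN_CASES = {"Truck Rent-A-Center v. Puritan Farms, 41 N.Y.2d 420 (1977)"}

def lookup_citation(cite: str) -> bool:
    # Real code would query a research service, not a local set.
    return cite in KNOWN_CASES

generated_cites = [
    "Truck Rent-A-Center v. Puritan Farms, 41 N.Y.2d 420 (1977)",
    "Acme Corp. v. Widget Co., 99 N.Y.3d 123 (2021)",  # plausible-looking but fake
]

for cite in generated_cites:
    status = "verified" if lookup_citation(cite) else "NOT FOUND, do not file"
    print(f"{cite}: {status}")
```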

Speaker 1:

And I think you bring up a good point here. There have been many ethics opinions on social media and the rules of professional conduct, saying that competency means you have to understand those platforms and their use. I would venture to guess that very soon we will see something similar from the New York State Bar, the New York City Bar or Pennsylvania, one of the big bars that often issues these, saying that to be competent in practice, you have to have an understanding of the use of AI as you're engaging in it.

Speaker 2:

Absolutely, and I think it's safe to say at this point that AI is going to be a part of the drafting process for many, many attorneys. So rather than outright banning it, we need to think of ways to increase transparency around its use and set parameters so we can use it effectively, because of the efficiency gains. This is a good thing for clients, where we may be able to spend a little less time on certain portions of the drafting. So we don't want to close the door on this right away, but we also have to be very smart and cautious about how we implement it.

Speaker 1:

Absolutely. And not only are there issues of how to use AI in the law; there are many complicated legal issues around the use of AI itself, from copyright to other ethics issues and beyond. When I say all that, I'm thinking of Blaine Bettinger's writing on this; he's coming back to talk about the copyright issues, and I'm putting his invite right in the podcast. But are there any laws right now that govern the use of AI?

Speaker 2:

Yeah, there are a few, with more obviously on the way. I'll go through a few. Let's start with state consumer privacy laws. Currently, California, Colorado, Virginia and Connecticut have all passed consumer privacy laws that govern the use of AI to a certain extent. These state laws allow a consumer to opt out of an automated decision-making process that can evaluate, analyze or predict personal aspects related to the consumer. This includes their economic situation, health, personal preferences, sensitive personal preferences, interests, behavior, location and movements.

Speaker 2:

So for any type of AI decision, any type of automated decision that's going to affect that consumer personally in a sensitive way, we need to make sure consumers have the ability to opt out of processing in this manner. California takes it a step further and also requires businesses to disclose how they leverage the technology and how the underlying technology's process arrives at these decisions. So even if you're relying on a vendor that uses AI, maybe to sort your clients for a CRM application, for example, you, as the end user, still need to know how that works. So early in talks with the vendor, when you're setting up the engagement, you need to be asking these questions: how does the AI algorithm work? And if they can't tell you, or they don't want to tell you, then maybe it's time to think of an alternative service provider, because ultimately you, the end user, are going to be responsible for knowing how it works and explaining that to your customers.
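As a sketch of what honoring those opt-out rights might look like in code (the data model, names and threshold here are hypothetical, not drawn from any statute's text):

```python
# Hypothetical gate in front of an automated decision, honoring a
# consumer's opt-out as the state privacy laws discussed above require.
from dataclasses import dataclass

@dataclass
class Consumer:
    name: str
    opted_out_of_automated_decisions: bool

def run_model(application: dict) -> str:
    # Stand-in for a real scoring model; the threshold is made up.
    return "approved" if application.get("income", 0) > 50_000 else "denied"

def decide_credit(consumer: Consumer, application: dict) -> str:
    # Honor the opt-out before any automated processing happens.
    if consumer.opted_out_of_automated_decisions:
        return "queued for human review"
    return run_model(application)

print(decide_credit(Consumer("Pat", True), {"income": 90_000}))
# -> queued for human review
print(decide_credit(Consumer("Sam", False), {"income": 90_000}))
# -> approved
```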

Speaker 1:

I'm going to say that, unfortunately, it seems like criminal law is going to have to catch up with this as well. The Wall Street Journal had an article yesterday about a high school in New Jersey where inappropriate pictures of students were being circulated: the face of an actual student on an AI-generated body. So I'm imagining criminal laws like harassment and others are going to have to catch up to the use of AI.

Speaker 2:

Yeah. We're reaching a point now where it's very difficult to tell what's a real photo and what's a real video. And another risk I guess we can mention right here is how AI can fuel cyberattacks and security incidents, especially phishing incidents. Phishing relies on a very convincing representation of, maybe, a coworker or a boss asking for information or requesting access. It's easy to spoof an email, and it's easy to be suspicious of an email or a text, but when you're seeing a video of, say, a CEO, or getting a phone call that uses AI to generate a voice, it becomes much more difficult to discern what's real and what isn't. So that's going to be another area where AI will present some serious risks.

Speaker 2:

Moving on to the federal level: we don't have a lot in place yet, but we are moving toward more concrete policy and regulation. The first thing I want to mention is that in October 2022, the Biden administration released the Blueprint for an AI Bill of Rights, which sets out five guiding principles. First, you should be protected from unsafe and ineffective systems. Second, you should not face discrimination by algorithms, and systems should be used and designed in an equitable way. Third, you should be protected from abusive data practices via built-in protections, and the consumer has to have agency over how their data is used; that's closely related to how some states are already approaching it with these opt-out rights. Fourth, you should know when an automated system is being used and understand how it affects you; again, that's more of the disclosure requirement emphasized in the California law. And finally, you should be able to opt out where appropriate. So although we have a patchwork of laws and authorities starting to come together, we're seeing some general themes rise to the surface, with notice and consent being critical. You need to let your consumers and clients know how you're using AI and how they can opt out if they're not comfortable with it.

Speaker 2:

As recently as a couple of days ago, the Biden administration released an executive order built on this Blueprint for an AI Bill of Rights. Our data privacy team is going to do a deep dive in an upcoming client alert, so watch for that. But to highlight some of the major points from this executive order: it requires that AI developers, especially developers of very powerful AI systems that rely on big data, share their safety test results and other critical information with the U.S. government. We're not sure exactly which agency that will be; my guess is it will probably be the FTC, as they have taken the lead in this space, which I will get to in a second. The order also calls for developing standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy, relying on a set of principles put forward by the National Institute of Standards and Technology. NIST is a leader in setting standards and safeguards for cybersecurity, so it is being called on to create a new framework for AI systems. And finally, the order addresses protecting Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. That goes back to our discussion about fake photos and deepfake videos. It's going to be a challenge, and there has to be a framework to certify what is real and what might be an imitation. I think that's going to be an ongoing problem without an easy solution, but the executive order gets us started on the right track in thinking about these risks and about ways we can mitigate them.

Speaker 2:

To touch briefly on the FTC: the FTC right now is probably the leading agency for regulating the use of AI, and its main power comes from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices that affect the stream of commerce. On the deceptive side, marketers should know that false or unsubstantiated claims about a product's efficacy are considered deceptive, and that's going to attract regulatory attention. We're seeing that in the context of software companies that are making very broad claims about what their AI software can do: it's more intuitive than it really is, or it relies on more advanced technology than it really does. With AI being such a hot issue, we're seeing the marketplace crowded with applications, and the FTC is starting to pay close attention to whether these tools can really do what the developers say they can. So that's one area where the FTC is working.

Speaker 2:

And then there's the unfair side of Section 5. Acts or practices are unfair if they cause, or are likely to cause, substantial injury to consumers that the consumer cannot reasonably avoid.

Speaker 2:

In the AI context, that means bias and discrimination in the algorithms used to make AI decisions, a lack of transparency, problems with the integrity of the data used to fuel these models, and inappropriate uses of AI.

Speaker 2:

For example, a consumer consents to the use of their data for generating a decision, maybe for extending credit, but the software developer repurposes that data later: they retain the data and use it in another model without allowing the consumer to opt out. That's going to be an unfair practice under Section 5. The FTC's biggest enforcement tool right now for AI is algorithmic disgorgement, which means that if they find a practice was unfair or deceptive, they will often require the company to delete the problematic algorithm and all the data it generated. That's very costly, especially since a lot of these algorithms take years to develop and tune. It's a word of warning for any company that relies on these proprietary tools: make sure you are providing notice and providing ways to opt out, because the end result of a regulatory investigation could be throwing out everything the company had developed over the years. So the cost of non-compliance is great, and something to keep an eye on for sure.

Speaker 1:

So there's just so much with AI that we have to keep an eye on. It seems like staying up to date and being diligent about both the development of the tools and the development of the laws is critical for any business that wants to incorporate AI.

Speaker 2:

Absolutely. So, a couple of compliance tips to keep in mind generally, and this is going to echo some of the conversation we had earlier. First, you need to understand the AI software you're putting into practice and be prepared to explain its use. This does not apply just to developers; it applies to anyone who purchases a license to use software that leverages AI. You need to understand it because you're going to be asked about it, and the way the regulations are shaping up, there are going to be a lot of disclosure requirements about the underlying mechanics. Now, there's going to be a balance here: you can disclose so much technical information about how the AI model works that it's not usable to a consumer, and that would probably add more confusion, because this technology isn't straightforward. So it's about finding the right balance: what does the consumer really need to know to make an informed decision, so they can opt out or say, okay, I'm comfortable with this? Keep it simple enough for consumers who don't have a technical background to understand, but accurate enough to meet regulatory disclosure requirements. Second, for employers: if you allow your employees to leverage AI tools, you need to set parameters. You need to define the space of what's allowable and what's not. That could mean defining which tools or applications employees are allowed to use; maybe you're allowed to use ChatGPT, but only for certain applications like content generation, not for client work product. It's worth getting granular with these policies and really eliminating doubt in your workforce about what they can and can't use; that's going to create a safer environment for the use of AI, and there's a sketch of what that can look like below. Third, allow clients to opt out of AI decision-making even if you're not yet covered by a law that requires it. State privacy laws are popping up all over the country and moving quickly. New York doesn't have an overarching consumer privacy law just yet, but one is expected to be finalized in the next year or so. So if you're going to use these tools, especially if you're a web-based company that may have customers or clients from all 50 states, it might be simpler to start implementing some of these notice, consent and opt-out mechanisms now, so that you're ready when your state finally implements a consumer privacy law.
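Here is the policy sketch referenced above: a minimal, hypothetical way to encode which AI tools are approved for which uses, so an acceptable-use policy can be checked programmatically rather than left vague. The tool names and categories are made up for illustration:

```python
# Hypothetical internal acceptable-use policy for AI tools, expressed
# as data so it can be checked programmatically rather than left vague.
AI_POLICY = {
    "chatgpt":      {"marketing_copy", "blog_drafts", "brainstorming"},
    "internal_llm": {"marketing_copy", "blog_drafts", "document_templates"},
    # Client work product is deliberately absent from every tool's set.
}

def use_is_allowed(tool: str, purpose: str) -> bool:
    return purpose in AI_POLICY.get(tool, set())

print(use_is_allowed("chatgpt", "blog_drafts"))          # True
print(use_is_allowed("chatgpt", "client_work_product"))  # False
```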

Speaker 2:

Another tip is to not input client information into generative AI programs; redact sensitive information. This is particularly useful for lawyers to keep in mind. You can use ChatGPT and other content generation platforms to draft legal documents, but use placeholders for your client names. Don't put in information that you wouldn't want someone to be able to extract, even accidentally, using the platform at a later date. Make sure you're redacting information where necessary. I've found it useful, if you want to draft a legal document, to ask ChatGPT to draft the template, maybe get you started on the structure of the document, and then take that offline and put in the client information.
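A minimal sketch of that placeholder approach; the client names and the single-regex pass are made up for illustration, and a real redaction step would need to be far more thorough:

```python
import re

# Made-up client names; a real redaction step would need a vetted
# list (or human review) rather than one regex pass.
CLIENT_NAMES = ["Acme Holdings LLC", "Jane Doe"]

def redact(text: str) -> str:
    for i, name in enumerate(CLIENT_NAMES, start=1):
        text = re.sub(re.escape(name), f"[CLIENT_{i}]", text)
    return text

prompt = redact(
    "Draft a demand letter from Acme Holdings LLC to Jane Doe "
    "regarding a breached supply agreement."
)
print(prompt)
# Draft a demand letter from [CLIENT_1] to [CLIENT_2] regarding a
# breached supply agreement.
# Send only the redacted prompt to the generative tool; reinsert the
# real names offline, in your own document management system.
```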

Speaker 1:

Sort of treat it like you have a non-lawyer intern.

Speaker 2:

Exactly. That is a good way of thinking about it. Don't give it more information than it needs. When it's time to add that sensitive information, take it offline, into the secure environment you work in, your document management system, and do not put that information into a generative platform. And check it all. That was my next point: double-check AI work product for accuracy. We've seen that already lead to some embarrassing situations in court, but even if you're not in court and you're drafting a document for a client, it's still embarrassing, even outside a formal proceeding.

Speaker 1:

It can be worse than embarrassing; it can cause you some serious issues in your profession.

Speaker 2:

Right: malpractice concerns and conflicts with the rules of professional conduct. So make sure you're checking your work product for accuracy; nothing that ChatGPT spits out should be something you take at face value. Then, finally, deploy administrative, technical and physical controls to safeguard all the information that you use to inform these tools. Law firms already have to think about this from the document management side, but that's not unique to the legal industry.

Speaker 2:

Any company that houses data from clients or consumers needs to be thinking about ways to keep that data safe. Things like multi-factor authentication: this is the number-one, easiest, low-hanging-fruit safeguard you can implement. In most email and web environments, it's the simple click of a button to turn on multi-factor authentication for all users, and it's a very easy way to achieve some degree of security. Then encryption of the data, both at rest, maybe on your server or in the cloud, and in transit. Make sure you're using methods to transmit information in encrypted fashion; that could be a secure file-sharing site rather than putting something in the body of an email. So these are some tips to keep in mind. As the body of law grows in this area, we'll be able to refine these tips and continue to provide guidance to our clients about how to stay ahead of upcoming laws and regulations. But at this point: ask for consent, provide notice and be as transparent as possible.
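To make the encryption-at-rest point concrete, a minimal sketch using the widely used cryptography package for Python (an assumption; any vetted encryption library would do). Real deployments would keep the key in a key vault rather than next to the data:

```python
# Minimal encryption-at-rest sketch using the `cryptography` package
# (pip install cryptography). Key management is the hard part in real
# deployments; generating and holding the key in memory, as here, is
# for illustration only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: store in a key vault
fernet = Fernet(key)

plaintext = b"Client intake notes: sensitive details here."
ciphertext = fernet.encrypt(plaintext)  # what actually sits on disk

assert fernet.decrypt(ciphertext) == plaintext
print("Encrypted at rest:", ciphertext[:30], "...")
```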

Speaker 1:

In a lot of ways, those are just basic rules of practice, things you just have to do. I really appreciate you running through all of that with us, Mario. There's so much more we could talk about. Thanks for joining us on the podcast today; we're going to have to have you back again soon.

Speaker 2:

Thanks again for having me and, yes, I look forward to it.

Speaker 1:

Should I say that no AI was used in this conversation?

Speaker 2:

Yes, these are not AI-generated voices. These are not AI-generated.

Speaker 1:

This is actually Kim and Mario having a conversation. All right, well, I look forward to having you back next time. Thanks again.

Speaker 2:

Thank you.

Speaker 1:

Thank you for tuning in to this episode of Legally Bond. If you're listening and have any questions for me, want to hear from someone at the firm, or have a suggestion for a future topic, please email us at legallybond@bsk.com. Also, don't forget to rate, review and subscribe to Legally Bond wherever podcasts are downloaded. Until our next talk, be well.

Speaker 3:

Bond, seneca and King has prepared this communication to present only general information. This is not intended as legal advice, nor should you consider it as such. You should not act or decline to act based upon the contents. While we try to make sure that the information is complete and accurate, laws can change quickly. You should always formally engage a lawyer of your choosing before taking actions which have legal consequences. For information about our communication, firm practice areas and attorneys, visit our website BSKcom. This is Attorney Advertising.
