Video: Live Webinar: AI in Foundations: Govern First, Accelerate Second | Duration: 3576s | Summary: Live Webinar: AI in Foundations: Govern First, Accelerate Second | Chapters: Webinar Introduction (7.84s), Webinar Introduction (89.32s), AI's Organizational Impact (207.49s), AI in Foundations (277.265s), AI Use Considerations (426.045s), AI in Grant-Making (560.225s), Data Governance Fundamentals (704.435s), Data Governance Priorities (930.96s), Governance and Implementation (1075.705s), AI Risk Assessment (1402.615s), AI Risks and Accountability (1543.515s), AI Vendor Considerations (1846.465s), Overcoming Adoption Hurdles (2034.86s), Effective Prompt Design (2079.465s), Reducing AI Hallucinations (2253.905s), Effective Prompt Engineering (2344.66s), Information Sharing Benefits (2440.915s), Understanding AI Agents (2530.62s), AI-Assisted Grant Review (2615.91s), Governing AI Implementation (2805.805s), AI Adoption Considerations (2896.155s), AI Misconceptions Addressed (2977.56s), Internal AI Policies (3095.34s), Evaluating AI Platforms (3173.75s), AI Adoption Benefits (3294.735s), Privacy and Data (3398.405s), Closing Remarks (3521.015s)
Transcript for "Live Webinar: AI in Foundations: Govern First, Accelerate Second":
Good afternoon, and welcome to our live webinar, AI in Foundations: Govern First, Accelerate Second, hosted by PKF O'Connor Davies. Before we get started, I'd like to go over a few housekeeping items so you know how to participate in today's call. We're pleased to offer live closed captioning throughout the webinar. To access the captions, please use the StreamText link located in the chat section of your attendee panel. You'll also have the opportunity to submit text questions to today's presenters by clicking on the Q&A tab on the right-hand panel. You may send in your questions at any time during the webcast. We have a lot of material to cover, and if time permits, we will make an effort to respond. If we cannot get to your questions, a response will be sent post-event. This webinar is offering one CPE credit in information technology. Polling questions will be launched in the polls tab on your right-hand panel and will appear when we launch each poll question. You will need to respond to three of the polling questions to receive full credit. CPE certificates will be issued within eight to ten days via email. A copy of the PowerPoint slides and a recording of today's webinar will be made available to you via email four business days post-event. And as we near the end of the webinar, we do have a very short survey which will be prompted, and your response is greatly appreciated. At this time, I would like to introduce Scott Brown, partner with our private foundations practice at PKF O'Connor Davies. Scott?

Good afternoon, everyone, and welcome to our quarterly webinar series. As Harleen mentioned, my name is Scott Brown, and I'm one of the partners in our firm's private foundation practice. For those of you joining for the first time and getting to know our firm and practice area, let me share a little background; for those of you who know us, just a little refresher. I've had the pleasure of spending my entire eighteen-year career here at the firm working in our private foundation practice, and today I'm happy to say that we serve more than 650 private foundation clients, including family, corporate, community, and independent foundations, as well as other similar grant-making organizations. As part of our commitment to the foundation community, we publish monthly thought leadership bulletins, host quarterly webinars such as this one, and hold an annual private foundation symposium. That symposium has grown tremendously over the years and is typically held in December, so please keep an eye out for our save-the-date announcement, which should be coming out shortly. We're especially excited about today's presentation because it focuses on one of the most talked-about topics in today's environment, AI, and more specifically, how AI can be used by private foundations. Presenting today is Thomas DeMeo, who leads our firm's cybersecurity and privacy advisory group. Tom brings more than twenty years of experience and is very familiar with the private foundation space, having worked with many of our clients in a variety of capacities, including reviewing IT infrastructure, assessing threats and vulnerabilities, and advising on such topics as governance, privacy, business continuity, and disaster recovery. Tom, there's no pressure, but I think today we might have one of our biggest webinar audiences yet, with over 300 foundations signed up. So I'm sure everybody is anxiously awaiting your presentation. Take it away, Tom.

So first, to clarify, it's Tom De Mayo.
I think he called me tomato, which brought back grade-school trauma. He's already introduced me, so let's get into the session. So, I mean, look, AI clearly has come with plenty of hype, plenty of marketing, plenty of very confident promises. And depending on who you ask, and depending on the time of day, it's either, one, gonna solve all our challenges, or two, it's gonna bring the end of the world. Right? It's gonna be one of those two things. This session is not about that. When Scott, Mike, and I were brainstorming on how to really cater this to foundations, based on our experience and based on those we work with, we said we want something different. We don't want this to be about fear mongering, and not about efficiency in this abstract way where everyone focuses on the end goal but we never get to the heart of it. What we realized is that for foundations, the more useful question is: how does it actually help the organization, and how does it advance the mission? That is what we're gonna try to answer today. And it's not even so much how, but can it? And the answer is yes, it can. But it's not gonna do it automatically. It's not gonna do it without intervention. It can absolutely create real value, but it has to be approached thoughtfully, it has to be done in a controlled way, and it has to have the right supporting guardrails. That's what this session is about: how the foundation can embrace this change in a meaningful way, with practical use, responsible governance, and a clear connection to the mission. So with that being said, the real AI question for private foundations is not "can it do impressive things." It's "does it help us pursue change more effectively?" And if we think about when it actually matters, the way we look at it, and I think it's a valuable proposition, is that AI matters to a foundation when it improves research quality, mission alignment, and the preparation for accountable human judgment. Right? That human in the loop. Which means we have these elements. The research cycle: can staff understand the field faster without flattening nuance or missing context? Mission fit: can opportunities be compared against the strategy, and can we make the decision a little faster? Decision preparation: can it improve internal memos and follow-up questions without supplanting that final judgment? And can it retain for us the institutional learning that otherwise leaves as people go, gets lost, or isn't translated effectively over the years? When you look at the risk, the reality is it's not a question of whether AI is in your foundation. The question is whether you're governing it. That is the core issue. Maybe a month ago, I was sitting with a CFO and one of the CIOs of a foundation, a long-term client of ours. One of the things we were gonna propose to the trustees was a data governance program, centered around this, and we were thinking about ways to help the trustees understand. This was one of the core premises: that, yes, it's there.
But the question is, do we wanna control it? The risk of not controlling it is greater than just letting it run loose within the organization. Now, fortunately, we didn't have to do much convincing with the trustees. They got it, and there wasn't much to talk about. But think about it within the context of your environment. Microsoft 365, which we know many of you use: it's there. It's turned on by default. So right off the bat, it could be doing things that, quite frankly, you're not aware of. ChatGPT, Claude, Gemini, all these things that are out there, free to access. There's that risk that, yes, your people are using them, and quite frankly, they probably are, and they're putting confidential data into these public tools that are gonna expose the data. It's this informal use that's gonna create the documentation gaps. If AI assisted in any recommendation, period, that record should show that, yes, there was a human-in-the-loop component, and who performed it, before we actually placed reliance on it. The fact of the matter is, Scott mentioned the nearly 300 registrants, and I can almost say with certainty there's some of you from each one of these lenses sitting in the audience. These are the questions that each of you, with respect to your role, should be asking yourselves. As a board: are we letting AI influence judgment responsibly? Are things happening where we're using it, but at the end of the day, is that human-in-the-loop component there? The chief executive officer: does this actually improve mission execution, not just speed? A key component of all this is that speed is not the goal; trustworthiness is. That's an important concept to grasp when you're thinking about AI. The chief legal officer: can we explain and defend the use? The chief investment officer: does AI-assisted portfolio analysis maintain the rigor the investment committee would require, the same rigor we used before utilizing AI to expedite certain situations? The chief financial officer: is the output reliable? And, of course, the grant manager: will this improve the workflow without flattening the nuance? Because, yes, that is a real risk. When you start to use AI across maybe 40 or 50 grant programs or submissions, LOIs, those types of things, one of the key things you wanna factor in is that it's not weeding out the nuance, because that can be the difference between making the decision for a grant and not. So that nuance has to carry through the process, and it belongs in the research cycle. Say you have a small program team with 40 active grants. The reality is, you can't read every one to the same level every single cycle and maintain that knowledge. That's where that learning outcome comes into play. But if you look across a process going from left to right, this is articulating where AI could be very helpful, where it's a combined component where AI and the human have to be in control, and then where it's human only. So, yes, in the first few circles here, AI is very strong. It can scan the field, understand the context, test mission alignment.
Now, when it comes to comparing the opportunities, making sure we're not losing that nuance, that's where, yes, it can facilitate, but that's where the human has to step in as well. It can surface gaps; it's great at that. It can prepare internal memos; yes, it's good at that. But you need that human review component, because one of the things you don't wanna lose, and I think this is an issue across the board for everyone, is your personality and what makes you, as a foundation, you. If you start using AI too much to draft everything, it becomes sterile. So you wanna make sure your own personality, your inflection, your culture is embedded within those memos. Not just accuracy; that cultural part has a significant role. Exercising judgment: that's always gonna be the human. And then learning from the outcomes. That's a big vantage point, because we as humans forget, or people leave who had specific knowledge. Computers don't forget. It is hard-coded the second those zeros and ones hit that drive. So by leveraging that, you're gonna build context and comparisons over time that a human alone wouldn't necessarily be able to do. So here's the governing principle: AI performs downstream from your data, your controls, and your instructions. I want you to think about that. Because, yes, we have a stream, and we have AI sitting down here. If it's downstream from the data, what is that saying? It's telling us that the data is what's feeding the actual platform. So what does that imply? If we have bad data, we're gonna have bad outcomes. So we have to think about the input going into this. If we have weak controls, maybe we're oversharing within public platforms, or we're not structuring things correctly, or the data was never set up to be ingested by AI; then the confidence may not be that great. Why? Because it's gonna pick up on certain things that, quite frankly, you don't want, which is gonna lead to less confident outcomes. And with no governance, AI is frankly just a novelty: it's not an operating tool, it's embedded in there in some way, but with no controls whatsoever. Now I want you to think of this in terms of a framework, where each piece precedes the next; they compound on each other. So when you're thinking about building this out, you wanna govern the data first. That is the first thing you should be doing: your sources, your taxonomy, and we'll talk about taxonomy in a second. Then setting your AI guardrails: approved uses, risk tiers. Then reinforcing security: making sure it can access the systems it should access, and also that it's getting the complete picture by accessing all the systems it needs to access, so it's not getting fragmented data. Then prompting with structure, which we'll talk about in a little bit. And then, ultimately, getting to these agentic flows. That's the next terminology you keep hearing in the marketplace: agentic flows, simplifying the creation of agents. So, like I said, we have to cover the data first. And what does that actually mean? When you think about data governance, the key thing to remember is that this is not an IT issue.
If you've heard me speak in the past, when I was wearing my cybersecurity hat, I always said that cybersecurity is not an IT issue; it's a management issue. Same thing when it comes to AI. This is a management issue, not an IT problem. Data governance is operational discipline across the board. So how do we set this up? One, we need to define what information matters most to mission decisions. Not everything is created equal in terms of the data and the information. There needs to be ownership, in some capacity. We don't wanna make this overly complex or bureaucratic; a few very specific key steps go a long way. We need to set standards for quality, naming, and versioning. That's taxonomy: the verbiage you use internally and how you define certain things. Because if you don't have a consistent naming standard, or things are inconsistent in systems that have evolved over the years, between Fluxx, Foundant, or GivingData, or how you move between SharePoint, the AI is not necessarily gonna make the connection. If you don't have that consistent naming and taxonomy, your output is not gonna be as high quality, as accurate. You need to control access based on role, need, and sensitivity. And I want you to think of that in the context you hear about: AI returned answers for something it shouldn't have. It returned human resource records for staff, or payroll information. Well, that's because the access controls weren't correctly set. Somewhere, somehow, when it was set up, access was not correctly configured, which let the AI agent go and return something it potentially shouldn't have. And, of course, we have to keep it active, keep it up to date. When we think about what to govern first, start with these five. One, grantee and applicant master data: the EINs, the prior grant history, the LOIs. Two, grant files and expenditure responsibility. Three, financial and disbursement data. Four, board and committee minutes and materials. And five, institutional knowledge and policy. These are pretty much the five categories that, from a value proposition, and again without turning this into an overly complex exercise, are gonna actually drive the value. I've touched on this: weak data distorts research, comparability, and portfolio learning. So what does this look like when we start thinking about source data? If you have a grantee record set up in multiple ways across systems, one, you're gonna get false comparability. It's gonna look at that structure, and once it gets pushed into a common component, it's gonna miss the nuances; it's not gonna be able to put it together. Or you have multiple "final" versions of the same document with no clear source of truth. Look, it doesn't know that document seven actually superseded document five, and then you modified document five, which now becomes document eight. It's not gonna know that. So you have to have clear version history when you're feeding documents into the model, because otherwise you're gonna get incorrect output.
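To make that concrete, here is a minimal sketch, in Python with entirely hypothetical field names (not tied to any particular grants system), of the kind of metadata that makes "source of truth" explicit, so an ingestion step never has to guess which version of a document wins:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GovernedDocument:
    doc_id: str                # consistent taxonomy, e.g. "grantee-1234-LOI-2024"
    version: int               # explicit version, not "final_v2_FINAL.docx"
    effective_date: date       # the date the AI should treat as controlling
    supersedes: Optional[str]  # doc_id of the version this one replaces
    owner: str                 # functional owner accountable for quality

def authoritative(docs: list[GovernedDocument]) -> GovernedDocument:
    """Pick the single source of truth from a pile of 'final' versions."""
    superseded = {d.supersedes for d in docs if d.supersedes}
    live = [d for d in docs if d.doc_id not in superseded]
    # Newest effective date wins; version number breaks ties.
    return max(live, key=lambda d: (d.effective_date, d.version))
```

The specifics here are illustrative; the point is that supersession and effective dates are recorded as data the model can be instructed to respect, rather than left for it to infer from page order.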
We learned that early on with something we developed internally. It wasn't necessarily looking at the date: a large PDF was fed in from paper scans, not in any specific order, and the AI wasn't capturing the date and putting it into context; it was just going in sequence. So we got bad output. We had to teach it; we had to put in parameters to say, look, you're gonna look at that date, and that's gonna control what's authoritative versus what's not. Or grant categories used inconsistently across programs. Then layer on incomplete records: if they're not current, they're gonna create false confidence in the analysis. So: bad input, bad output. Making sound decisions matters more than speed. Again, I've said it, and I think it's worth internalizing: speed is not the goal; trustworthiness is. That is the motto we've embedded, even as a firm, into how we think about this. Yes, we wanna be more efficient, but we can't sacrifice the trustworthiness of the data or the trustworthiness of our opinion. That is a hard stop. Now, this has to flow through the entire process. Your source packet: what approved internal and external materials were actually used? The boundaries: what did the workflow exclude, and why? When we start talking about boundaries, that's where we instruct the model not to do certain things, like, maybe, not hallucinate. Assumptions: where did interpretation begin, and where did the evidence end? Make sure that if the model is making assumptions, it's transparent to you, so you know to go challenge those assumptions and confirm it didn't assume incorrectly. Unknowns: what remains unknown if there are gaps, what's weakly supported, what's flagged uncertainty. A lot of times when I'm prompting, I have it specifically give me a confidence score, especially if there are multiple elements. Because if it's 99% confident in one area, but I see it's 50% confident in another, then I know where I need to hone in; I know where I need to look. And then, of course, review: who actually checked the output before it influenced a decision or a recommendation, say, a recommendation for committing to a grant? Putting everything aside, this is your minimum governance model. You don't need to turn this into some big, huge enterprise scheme. You need your executive sponsor, that tone at the top. Again, if I put on my cybersecurity hat, and you've heard me talk in the past, I would always say the difference between a well-defined security program and those that aren't is the organizations that have backing at the top. This is gonna be part of that. That executive sponsor is gonna own it and help ensure it flows through the organization: they liaise with the board, and everything flows consistently down. You have your functional owners: your programs, your finance people, your operations.
Your functional owners are gonna become your subject matter experts in certain areas. They're gonna help define the use cases that third parties alone aren't gonna be able to flesh out. There are certain things you do that are relevant only to your organization, and that starts with the functional owners helping to find those use cases. You need approved systems: it needs to be very clear what you can and cannot use. You need your naming and retention; you need that taxonomy. You need that access control, and then you need that review cadence. The bottom of the slide says it: small and disciplined beats large and theoretical every time. Let's not overcomplicate for the sake of overcomplicating. If you've ever worked with me, I hate process for the sake of process. It drives me nuts. Same concept here. So what are the lightweight guardrails? Yes, I'm sure many of you have a policy, your acceptable use policy. I know, because many of you have asked me to provide a template, which we're always happy to do. But it doesn't stop at the policy. What you need is an operating model, and that's the difference. That's where it says a useful policy is only one part of the broader system. You need your governance, your data, your use cases. Those use cases become very important, because they make sure the AI is actually delivering you value, as opposed to just getting a system with prebaked features that isn't really tailored to you. The way you figure out where AI fits within your organization is by doing a workflow analysis to identify the specific use cases that would benefit you. Especially when systems start bringing AI into their own platforms, which is where the monitoring comes into play. You have to think about what's happening within there that I could leverage, versus what's happening over here, and how I might pull it together. Because, yes, you're gonna have systems with AI embedded within the applications. But there's also a value proposition in having an AI model, say, a ChatGPT or a Claude, outside of them, operating behind the scenes. The application knows what's within its boundaries, and its AI operates within that. What it doesn't know is what's across those systems. That's where that external AI component starts to have its value proposition. And then, yes, review and monitoring. Like I said, a key example would be keeping your finger on the pulse of what's coming into the applications you already use, Foundant, Fluxx, BoardEffect, and so on, that might be building AI into their platforms. Make it risk-based. This is true for everything: you gotta focus on the risk. Going from left to right, lower risk to higher risk, we shift as we go along. If it's administrative acceleration: meeting summaries, standard operating procedure cleanup, format conversions, first-draft outlines. Absolutely perfect: low risk, with the right prompts. Analytical support: mission alignment screening and first-pass synthesis, proposal comparison, issue scanning and gap identification.
As long as there are good sources, this is gonna be a pretty powerful use case. Judgment-adjacent risk: this is where AI and the human start to intersect. Recommendations that support due diligence summaries, board preparation, and briefing drafts. These are things you're not gonna wanna just let AI produce and send to the board; you have to have the human in the loop. Compliance documentation support: again, it can start the process and accelerate it, but the human has to step in. When it comes to the final decision, this is where you, the foundation, the person, owns it. Final funding recommendations or rationale, legal or compliance conclusions, external claims that are gonna have an outcome. Which gets at: AI can support judgment; it cannot own it. If you broke it out, there's what AI can assist with versus what has to remain with the person. First-pass LOI triage: AI can assist. Portfolio mapping: AI can assist. Surfacing mission elements: AI can assist. Synthesizing (I can't say that word) multi-year grantee progress reports to identify performance, combining it, putting it together: yes, AI can assist. When it comes to grant determinations, self-dealing analysis, minimum distribution calculations, you have to have that human-in-the-loop component. And again, this is very specific to you as a foundation, but really to any business, period. This is one of the things that, even when we deployed internally, we still deal with: this hallucination concern. It's funny, I don't wanna say funny, but it's interesting: even as we deployed it as a firm, the first thing a person leads with in a conversation is that one time they were using it and it gave them the wrong answer. Hallucinations, yes, can happen, but there are ways to control them. And the biggest risk with hallucination, the way we see it: if it's blatant fabrication, if it's completely out there, typically you're gonna know. The real risk is when it's so incredibly convincing, because it's that polished distortion. I remember I learned this early on. I was doing research, early on in using ChatGPT, just figuring out how to get my arms around all this stuff, and I was having it help me research for a particular article, and it started pulling these quotes from this person. I'm like, God, this is great. These are great quotes, and they support exactly what I'm trying to say. But then I paused. If I'm gonna put this out there, obviously I gotta make sure it's accurate. So I started doing more prompts to hone in on the factuality. And sure enough, it was like, well, the person didn't say that. So I'm like, well, why did you put it in there? Why did you say it was a quote? The response was that, based on how the person had answered in the past, it assumed this is what the response would be. That was my first wake-up call to just how polished the response can be, how it can make something seem absolutely perfect, and yet it's false. This is where, when you look at it, yes, it can compress nuance.
And when you're looking across grant applications, nuance matters. It can create false comparability: again, it looks across two documents and gives you a comparison, but it's not exactly accurate. It rewards polish. What do I mean by that? If you're going across a set of, say, LOIs, and some are incredibly well written and well thought out, but some, maybe not so much, that model, by default, unless you start putting guardrails around it, is gonna favor the ones that are more polished over the ones that aren't as well done. That's where you can start running into problems. It masks weak support. A model hallucinates when it doesn't have the facts, when it has to fill the gap. Because when you think about AI in general, it is probabilistic, not deterministic. That's the difference between AI and explicit code. In other words, it's searching for what it believes is the probable next answer. Now, if between two components it doesn't have that interlinking, it can start to invent connections to make the two link up. That's where you end up with hallucination. That's where you start putting in specific prompting, which we'll talk about in a little bit, to make sure that doesn't happen. And, of course, it implies judgment. These are the issues when you think about hallucination, in terms of where things can go wrong. Now, if you're a board member or management, and you take anything from this, these are the questions to go back and ask. As a trustee, as a board member: Where is AI currently influencing research, synthesis, or internal analysis? Where does AI-assisted work stop and accountable human judgment begin? How is source provenance preserved, and how is uncertainty surfaced, before outputs influence decisions? How will management know whether AI use is improving decision support rather than just speeding things up? Management: What exact problem is this use case solving? What does success actually look like? We sat with our CEO yesterday, talking about this kind of reshaping of how we do things, and this is always the question we have to define: what does success look like? We can't go chasing down a path with no clear understanding of the end goal. Because, unfortunately, with a lot of this AI stuff, everyone pushes that end goal as this dramatic, performance-transforming thing, but it's more than that. What data and systems are involved? Who is reviewing the source quality? How do we make sure we're not putting ourselves in a position where we're giving bad answers? What is the consequence of being wrong, and who is accountable for the output? Who reviews the output before we get to that final decision? These are the questions, whether you're a board member or management, to take with you. And, of course, all of these tie together. Now, cybersecurity fundamentals. Again, many of you have probably heard me talk in the past; this is not a deep dive into cybersecurity.
But, yes, cybersecurity comes into play when you're dealing with AI. There's the IT component to the risk, but remember, AI can also be attacked from a cybersecurity standpoint. If somebody gets into your AI systems, or gets to your source data and manipulates it, that's a problem. I don't wanna spend too much time on it, because it's really not the focus today, but nonetheless, these are your controls with the highest payoff, and that's true with or without AI. Again, just a little refresher on the cybersecurity element, because we don't wanna lose sight of that either. The way I've looked at it with certain companies I've worked with, especially on the data governance side: if anything, cybersecurity has helped position them for data governance. It's not one to one, but it's a foundational component that has to be there if you're gonna start relying on the data. How do we rely on the data? We know it's trustworthy. How do we know it's trustworthy? The only people who can access and modify it are the people who should. Here are questions you should be asking your vendors, whether it's Fluxx, Foundant, GivingData, and so on. Has your platform added AI features recently, and what do they do? Do you have an updated AI addendum or data processing agreement? Microsoft 365: is Copilot even on in our environment, and if so, who's using it? Is our application and grantee data used to train or fine-tune any model? Can we configure data residency? That one applies if, say, you're a global foundation with concerns about certain data. What is your incident response? And if we terminate the relationship, how long does our data actually stay? Now, some of these questions are part and parcel of normal vendor due diligence. And this is where I don't wanna send you off in the wrong direction either, when it comes to AI vendors. Maybe I'm a little biased because we're seeing it aimed at ourselves, but it's true nonetheless: treat this no differently than you treated any other vendor in the past. Ask the prudent questions. Expect they're gonna have a security program. Expect they're gonna use the data only for the stated purposes. This should have been embedded from day one, prior to AI. AI is just another piece of software you're introducing into your environment, with one slight deviation: yes, it can be used to train on data and do certain things that historically other systems couldn't do. So don't get so bent on treating this as something fundamentally different; it's not. I can tell you, even as a firm, we get these AI addendums, and you can tell they were driven by fear mongering, exactly what this session is not meant to do. This session is meant to ground you in the realities, what the threat actually is, and how to benefit from it. But some of the things in those addendums are so absolutely absurd that you can tell it's all baked in the fear mongering out there.
And I can tell you, too, from a deployment standpoint, from our own experience, because it's one thing to assist an organization and another thing to live through it. One of the hurdles we had to clear, especially when we started deploying OpenAI and going wall to wall: one, we had a governance model in place. That was a nonnegotiable, day-one component. We were not deploying anything until we had a very defined governance model. But we actually had to get people to use it and convince them it was okay, because we had certain guardrails in place. They were afraid to use it, and that was inhibiting adoption. Why were they afraid? Because there was so much fear mongering happening in the marketplace. When it comes to prompting: this is a discipline. This is where you're either gonna get good results or really bad results. When somebody says to me, God, Tom, I got this crazy output, or it hallucinated, the first thing I'm gonna ask is: what was your prompt? Because if you leave it too open-ended, and we'll see that in a second, you're going to get bad results. So what does good prompting do? It defines the exact task: what are you trying to get it to do? Keep it narrow; keep it focused. It limits the evidence base to only what's approved; we don't say go out there and start fabricating things. It surfaces missing information: hey, tell me if you're not clear; tell me if something else is needed; don't just assume. It separates confirmed facts from assumptions and inferences. That goes back to what I bake into my prompts: sometimes I have it give me confidence scores on the answers, so I know where I need to hone in and where I don't. And it creates a structure that humans can check and challenge. That means traceability: have it explain how it came to its conclusions, so you can look at it and say, you know what, that's a pretty damn good process; or, hold on a second, this doesn't look right. Even as a firm, this is embedded in our processes and culture: the things we develop have clear traceability, so the auditors, the tax professionals, the advisers can see exactly how a conclusion was reached and whether it was appropriate. Now, this is gonna be the first time you've seen this acronym. I made sure; I checked to make sure this is the first time you're gonna see it. So we own it: it's the GRANTS method, plus your checks. Incredibly clever, right? G: what is the Goal? This is how to start thinking when you put together your prompts: what are we trying to accomplish? R: what is the Role? When you're setting up prompts, one of the beneficial things is to have it assume an identity. I'm a private foundation program officer. I'm the CEO of a private foundation. I'm a board member of a private foundation. Have it assume a role, so it frames things through the lens of that perspective. A: Application. What foundation workflow is this for? Is it for grants? Is it for finance? N: Necessary context.
Does it need certain background information or facts so it has the context of the story? Because if it doesn't have the context, it's not gonna give the same outcome. T: what are the Terms and boundaries? This is where you put in the things to avoid, the do-not-dos, so that it does not hallucinate. S: Structure. How do we want this output put together? How do we have it come out in a way that's meaningful to me? And then your checks: how do we flag weak support, missing information, and ambiguity, and, of course, the human review component? So you saw it here first. If anyone else takes it, they stole it. Reducing hallucinations: I know this is a hot topic. Strong prompts will not eliminate them; they'll either make them less likely or make them easier to catch. No one can ever tell you it's gonna be 99% or 100% accurate and never make a mistake. False. You might get 90%, 95%, but there are gonna be elements where things aren't what you expect, because, quite frankly, it's still a computer. Sometimes I say AI is a hell of a lot smarter than me; other times, not so much. Because I have context. I have a lot more rational decision making. I'm not just trained on the binary information that's out there; there's emotion, there are a lot of things that come into it. So how do we do this? One, use only the materials we specify. We tell it so: yes, use only the materials I provided below. It's better if you have an authoritative document. Even if it's public and you can find it on the web, don't tell it to go find it on the web; give it the actual document. This way, there's no ambiguity or possibility that it goes outside of it. Do not invent facts, names, dates, amounts, conclusions. Hey, AI: don't hallucinate. If the information is missing, don't fill in the gap; say so, let me know. And then separate confirmed facts from assumptions, so I know where to focus and where things could go wrong. Here's a weak prompt versus a strong prompt. Weak prompt: "Hey, Chat, review this grant application and tell me if it's good." Way too broad. It invites unsupported judgment, which is where it can hallucinate, because it can make things up. It can overreach, and it can produce unverifiable conclusions. What's the strong prompt? "Act as a program officer at a private foundation. Using only the materials provided, evaluate this LOI against the following criteria. Do not infer or complete missing information." And then, what we're evaluating: alignment with our three published priority areas (attached); prior grants from our foundation (flag grant numbers for staff verification); geographic overlap; recurring grant activity. And the beauty of this is, once you hone these prompts and get them to a stable state, save them. Reuse them. Give them to your staff, to your colleagues. That's what we do internally: we share. If a prompt benefits one person, it can benefit others. Our motto is sharing. We wanna win as a team; we wanna win as one. That's our core model.
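Putting the GRANTS method and the checks together, here is a minimal sketch of what a saved, reusable prompt could look like, written in Python purely for storage and reuse; the criteria and section labels stand in for a foundation's own:

```python
# Hypothetical sketch of a reusable GRANTS-style prompt (Goal, Role,
# Application, Necessary context, Terms and boundaries, Structure)
# plus the human-checkable flags described above.
GRANTS_LOI_PROMPT = """\
Role: Act as a program officer at a private foundation.
Goal: Evaluate the attached LOI against the criteria below.
Application: First-pass LOI triage in the grants workflow.
Necessary context: Use ONLY the materials provided below, including our
three published priority areas.
Terms and boundaries:
- Do not invent facts, names, dates, amounts, or conclusions.
- Do not infer or complete missing information; if something is missing, say so.
- Separate confirmed facts from assumptions.
Structure: For each criterion, give a short finding, the supporting source,
and a confidence score (0-100).
Checks: Flag weak support, missing information, and ambiguity for human review.

Criteria:
1. Alignment with our three published priority areas (attached)
2. Prior grants from our foundation (flag grant numbers for staff verification)
3. Geographic overlap
4. Recurring grant activity
"""

def build_prompt(loi_text: str, priority_areas: str) -> str:
    # The evidence base is limited to what we hand it: no open-ended lookups.
    return (f"{GRANTS_LOI_PROMPT}\n--- Priority areas ---\n{priority_areas}"
            f"\n--- LOI ---\n{loi_text}")
```

Whether it lives in code or a shared document, the design point is the reuse just described: hone the prompt once, then the whole team triages LOIs the same way.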
But in order to do that, we have to share information. Prompts by department: here are example real workflows, with real guardrails. Program and grant-making: LOI triage, expenditure responsibility checklists, progress reports aggregated by portfolio. Finance and controller: the 990-PF, flagging grants needing, say, expenditure responsibility treatment; minimum distribution verification versus payout; audit PBC (provided by client), organizing requests and open items for your auditors. Operations and compliance: a grant file audit, flagging missing ER documents; board conflict disclosures for minutes; GMS data quality, EINs, incomplete fields, your grant files if things are missing within the package. Executive and board: investment committee, flagging IPS conflicts; IRS correspondence; strategic plans; synthesized officer reports. Again, all things that would work within your organization relatively readily. Moving along, because we're running up on time and I wanna leave time for questions: agents. This is the next term you're going to get bombarded with. Now, what is an agent? Think of an agent as a worker. It is something that performs a task on its own. It doesn't require you to prompt it; it goes and does it. It is autonomous. That's where the term comes from. So if you think about it in this context: a governed workflow in which AI performs a sequence of bounded, supported tasks across systems, with human review at the moments that matter. But human review comes after the fact; the agent has processed autonomously. What it is not: it should never be an autonomous grant-making, decision-making function. It should never, on its own, be a black-box scorecard for application worthiness. Absolutely not; we would never wanna do that. It should never communicate externally without somebody stepping in. Could it? Yes. Would you wanna design it like that? No. It should not be a shortcut around actually reviewing the policy and the accountability. And it's not something you build before the earlier disciplines are genuinely in place. In other words, you don't build an agent until you have that governed program established, the one-through-five pillars we talked about before. This builds on top of them. If you don't have the foundation, you're gonna put your foundation, well, your foundation, at risk. So here's a fake foundation: Horizon Pathways. Its mission: improve post-secondary completion and economic mobility for first-generation college students in three states. They get 180 letters of inquiry per cycle, 45 invited proposals, active progress reports, site visit notes, public research, state data, and peer funder intelligence. Leadership wants stronger research, a consistent first pass, clear gap finding, better internal memos, and time for judgment. They want more staff capacity for the things that add value, not the tedious things that just take up time. That's the value proposition here as well: you're freeing people up to do work that's more value-driven. These are the use cases they identified. So here's a governed agentic workflow.
This goes from start to decision support. One, the intake agent. These are all individual agents, aside from the human gate reviews. Agent one's job is to map submitted materials to the foundation taxonomy and check all the required fields. Then it goes to the research enrichment agent, which pulls approved public and internal content into a governed source packet. Then it moves to the mission alignment agent, which compares the opportunity to the foundation's strategy, geography, and priority areas. Then we stop: the program officer steps in and reviews the alignment summary before diligence even begins. Next, the diligence gap agent surfaces missing information and follow-up questions for staff, and then the briefing agent builds the first-pass memo, facts versus interpretation. And again, you have your human review at the end, because you make the final decisions as a human. Here's how it fits, and you have to keep it within the guardrails: submission and alignment summary, human review. Diligence gaps surfaced, human review. Staff briefing drafted, human review. Board packet finalized; well, don't send it to the board before you review it. That's your final decision. (A minimal code sketch of this gated flow follows at the end of this passage.) What stays human? Mission interpretation and strategic judgment, public impact, trade-offs and recommendations, and accountability for the decisions. No matter what you build, this has to stay human. Here's what good adoption looks like in 90 days, or 180 days, or a year, whatever it is. One, inventory your current AI use, formal or informal. You have to know what's there, because you have to be able to support it. There's something called shadow IT, where employees start using tools outside of what's approved because they feel they're not getting what should be provided internally, that there are better ways of doing things, so they take it upon themselves. Not a great idea, but not providing people with what they need produces these behaviors, so you have to listen to what people are asking for. Two, approve a living policy: figure out the tools you'll use, the data categories you'll touch, and come up with your first AI policy. Three, pilot: look at lower-risk workflows first, with built-in human review; get your feet wet, get comfortable. Four, train people, not models. Remember: GRANTS plus your checks, review discipline, a prompt library, configured AI assistants. Then you get into your agentic workflows. You don't need to roll out AI wall to wall; you need to govern a few practical use cases well and then evaluate whether an agent makes sense. Sometimes prompting is sufficient. Sometimes GPTs, which within the OpenAI ecosystem are essentially pre-formalized prompts, are sufficient. You don't need an agent; it doesn't need to be autonomous. At the end of the day, you'll most likely have a combination of the two. So this is what it is: govern first, accelerate second. Govern the data, bound the use, protect the environment. Again, this is not beyond your reach. If you ask us, if you ask me personally, should we be looking at this? Absolutely.
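Here is that sketch of the gated flow: a minimal, hypothetical Python outline in which every agent is a stub you would wire to your own systems, and the human gates are the only non-negotiable part:

```python
# Hypothetical sketch of the governed, human-gated agentic workflow above.
# Each "agent" is a bounded task passed in as a function; nothing moves
# downstream past a gate without an accountable person approving it.
from typing import Callable

def human_gate(label: str, artifact: str) -> str:
    """Pause for accountable human review before the flow continues."""
    print(f"[HUMAN REVIEW] {label}:\n{artifact}\n")
    if input("Approve? (y/n) ").strip().lower() != "y":
        raise RuntimeError(f"Stopped at human gate: {label}")
    return artifact

def run_workflow(submission: str,
                 intake: Callable[[str], str],
                 enrich: Callable[[str], str],
                 align: Callable[[str], str],
                 gaps: Callable[[str], str],
                 brief: Callable[[str], str]) -> str:
    packet = intake(submission)    # map materials to taxonomy, check fields
    packet = enrich(packet)        # pull approved sources into a source packet
    summary = align(packet)        # compare to strategy, geography, priorities
    summary = human_gate("Alignment summary", summary)  # officer steps in
    questions = gaps(summary)      # surface missing info and follow-ups
    questions = human_gate("Diligence gaps", questions)
    memo = brief(questions)        # first-pass memo: facts vs. interpretation
    return human_gate("Staff briefing draft", memo)     # decision stays human
```

The stubs are placeholders, not a prescribed implementation; the structural point is that the gates are in the control flow itself, so an agent cannot skip the moments that matter.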
You know, should we be afraid of it? I don't wanna say afraid of it, but you should be knowingly cautious. Everything has risk; it's whether you understand it and how to control it. You could be afraid of using your computer, to some extent, right? It's no different. But what do we do? We put controls in place to make sure it operates within an accepted range. Now, one of the things I did put in here are the sample AI use cases, different elements for you to look at: an LOI first-pass review assistant, a grantee report synthesizer, site visit and relationship preparation, trustee board draft support. Again, the dos, the don'ts, the human review gate, and so on. Our goal was to make this as value-add as possible: not to make you afraid, but to show how to embrace it effectively, in a controlled manner. So with that being said, let's move on to questions.

Great. Thank you, Tom, for that very relevant presentation for the greater not-for-profit industry, specifically private foundations and public charities. We've received about a couple dozen questions, and I've been able to put them into common themes. If we don't get to your question, we'll respond with a formal answer once the call is over. So, Tom, what have you witnessed as being one of the biggest misconceptions surrounding AI adoption?

I think the biggest misconception, and again, this goes back to the marketing, is that it's oversimplified in what it actually does, and the results are over-exaggerated. That's where organizations sometimes get frustrated: they go through these demos with the software providers, and, look, the demo's gonna look perfect. It operates on data the vendor knows; they've rehearsed it. But then when it gets put in, it doesn't really match reality. And yes, it's very easy to just fire up ChatGPT and use it, but using it effectively and building in the governance takes a bit more time and effort than just buying a solution. So those are the biggest misconceptions, at least from what I've seen.

Thank you. Can you touch on the privacy of family member data? What are the security concerns, especially with Copilot and Microsoft Office?

So, like anything else, it comes down to the AI you're gonna use. If you're within Microsoft and you're paying for Copilot, their agreement with you is that your data is supposed to be maintained as private and not used for training. If you're using free versions of anything, quite frankly, it's gonna be public. None of them that I'm aware of offer privacy guarantees on something they're providing for free. Part of the trade-off of using these free tools is that you're giving them your information to train their model. So as long as you have those enterprise agreements, with explicit callouts in the contracts that your data won't be used for training, and you understand how they can access it on the back end, all the controls around whether it's truly private, it comes down to asking the right questions and making sure you have the correct contractual guarantees. Thanks. Another one.
When developing internal AI policies, do you recommend a holistic approach where all departments, legal, grants management, accounting, etcetera, partake in the full process?

I think when you get to specifics within a department, yes, but for an overarching policy, you can have a pretty effective policy without getting into every nuance of every specific line. Overarching governing statements make sure things are being done appropriately. Now, certain departments might have specific standards they wanna follow, and that's where they might have input, because that's driven by risk as well. Certain departments, based on what they do, could have specific risks you might wanna call out for that department. So as with any policy, especially something far-reaching, start with the high level that covers the broad strokes, then get into the specifics with input from the relevant stakeholders, to make sure that from an implementation standpoint, with human safeguards, you've thought through how you're gonna do this effectively and what you might never allow AI to do, period. So it's gonna be a combination.

I can't hear you, Mike. I think you're on mute.

My apologies. How do folks evaluate choosing between ChatGPT, Claude, Gemini, etcetera? And once you have a paid version, is it safe to use our actual data?

You know, it's tough. They're all very good. I mean, let's face it: you've got OpenAI, you've got Claude, you've got Google. They're all powerhouses, all very well funded and backed. Claude has been a little better with the marketing as of late. I don't necessarily think you can go wrong, per se, with any one of them. They're gonna ebb and flow in their capabilities because they're all competing with each other. Like, even us: we went with OpenAI. Now Claude has gotten a lot of press, they've done great marketing, they had a Super Bowl campaign, all this other stuff, so it started raising a lot of questions about why we aren't using Claude. That's the natural human reaction. But one of the things we try to emphasize to our employees is patience. Yes, maybe Claude does this better now, but they have competition, and competition is healthy. So when you're with the big brands for a platform LLM, I don't think you're gonna go wrong with any one of them. When it comes to the paid version: typically, with the big players, yes, if you're paying for it, they'll have the option to not train on your data. Sometimes you have to turn that off explicitly. It's gonna depend, and you just gotta make sure you're looking at the terms for the platform. For example: yes, we have OpenAI, but we were looking at Claude from a coding standpoint, and Claude Team, without us going to a full enterprise version, gave us the privacy component we wanted for what we were trying to accomplish. So, again, it's asking the right questions and following through to make sure the terms are stated.

Yep. As with any time of change, in your opinion, what is the biggest risk or opportunity lost by not adopting AI?
I think, one, the opportunity lost, from my personal perspective, is that you're letting people do tasks that really aren't value-add. The opportunity lost is that they could potentially be doing a lot more for your mission than being bogged down with process. This is where it can really be of value in speeding things up. Plus, two, as a foundation, it's all about the mission, all about driving the outcome you're behind. This has the potential to let you do that better, potentially quicker, and more informed. So I do think the opportunity is only yours to lose by not going with it. I think that's a fair statement. There's a lot of power in it, but, again, with the correct guardrails. So our message, or at least the way I look at it, is: don't ask how do we justify not using it. Flip it: how do we justify using it? And I think you're gonna find a lot of valuable use cases where it provides real value, and that will signal what the opportunity cost really is for your specific foundation.

Thanks, Tom. Two or three more questions. We'd love for you to talk more about privacy and how foundations can protect their data when using AI to record board meetings and draft minutes for those meetings. Is there a template policy that we can utilize?

Yeah, so, the template AI acceptable use policy: I don't have an issue providing that. Our clients have asked us for it, and we provide it; even if you're not a client, we don't have an issue. When it comes to the privacy of it: one, the first question goes back to whether what you're using guarantees you the privacy of the platform itself. That's the first question. The second question becomes: wherever you're putting the data in, or exporting the data to, does it have the correct access controls around it? What do I mean by that? Maybe you're using a private platform, pulling in all the board meeting minutes, and exporting them. But then you put the export on a SharePoint share, not really thinking about where it's going, or you assume it's restricted, but it's not. What that opens up is this: most of these AI tools will respect the permissions you have. So if I'm a user accessing a SharePoint site and I don't have permissions to a folder, the AI agent knows I don't have permissions, and it won't read anything within that folder. It can't access it. But if the foundation mis-permissioned that folder, and I start asking certain questions and it can get to it, it's gonna give me the answer. (A short sketch of that permission-respecting behavior follows below.) So those are the two big things to think about: one, is the model private in and of itself; and two, what are you doing with the data feeding it, and once you get the export, where is it going? If you embrace those two things, you're gonna protect the data.

Great. Thank you so much, Tom. I think we have a minute left. I'm gonna hand it over to Harleen for some closing remarks and instructions. All right.
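Here is the short sketch referenced in that answer: a hypothetical Python illustration, with made-up helper names, of permission-trimmed retrieval, where the access check runs before any document text reaches the model:

```python
from typing import Callable

# Hypothetical illustration of the behavior described above: the assistant
# filters documents by the asking user's existing permissions BEFORE
# retrieval, so a correctly permissioned folder never leaks, and a
# mis-permissioned one is the real hole, not the AI.
def retrieve_for_user(user: str, query: str, index: list[dict],
                      can_read: Callable[[str, str], bool]) -> list[str]:
    readable = [doc for doc in index if can_read(user, doc["path"])]
    return [doc["text"] for doc in readable
            if query.lower() in doc["text"].lower()]
```

The `can_read` check stands in for whatever access-control system (SharePoint permissions, file ACLs) the foundation already maintains; if that is configured wrong, no AI-side control compensates for it.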
Thank you, Michael, and thank you, Tom and Scott. Thank you, everyone else, for attending our webinar today. If you have not already completed it, we have launched our survey, located in the survey tab of your panel and in a prompt on your screen. We appreciate your feedback, as it is important to us. A friendly reminder that a copy of the PowerPoint slides and a recording of today's webinar will be made available to all attendees via email four business days post-event. If you are interested in CPE and you answered three of the polling questions today, CPE certificates will be issued within eight to ten days via email. Thank you again, and have a great rest of your day.