AI in Enterprise: Security, governance, and robots
August 22, 2025 · 00:54:09


with Jim Spignardo, ProArch


Show Notes

Jim Spignardo is the Director of Cloud Strategy and AI Enablement at ProArch, a global IT consulting and services organization with nearly 20 years in business and locations across North America, the UK, and India. ProArch stands for Professional Architects - a name that gets mispronounced regularly enough that Jim flags it every time. Jim has been with ProArch for nine years and has watched the company navigate every major technology shift of the past two decades: the birth of cloud, the mobile era, and now the AI wave.

Jim's current role - Director of Cloud Strategy and AI Enablement - barely existed three years ago. He's one of the people creating the playbook for what enterprise AI adoption actually looks like in practice: not the headline features, but the governance, the use case qualification, the security posture, and the process discipline that make AI adoption stick rather than stall. He publishes prolifically on LinkedIn and runs a newsletter called Control Alt Innovate.

The AI Anxiety Problem: Everyone Is Asking, Nobody Knows What to Do

Jim hears the same conversation over and over: an IT admin says they're engaging with ProArch because their board keeps asking “what are we doing with AI?” - and they don't have an answer. The pressure is real. The expectation is that every organization is already using AI. But most aren't doing it in any structured way, and many believe they aren't doing it at all.

That last belief is almost always wrong. AI tools are already in use inside organizations that think they've banned them. Someone on the sales team is pasting prospect emails into ChatGPT. Someone in marketing is using a Chrome extension that runs on a third-party model. Someone in finance uploaded a spreadsheet to an AI tool to generate a report. The organization doesn't know it's happening, and because they don't know, they have no visibility into what data is leaving, where it's going, or what agreements govern it.

Jim's framing: if you don't know what AI tools your team is using, you don't know your risk exposure. The first priority is not picking the right AI tools - it's getting visibility into what's already happening.

Start With Policy, Not Tools

Before piloting any AI tool, before identifying use cases, before anything - organizations need an AI use policy. Jim is emphatic about this ordering. The policy should define which tools are acceptable, what data is permitted to flow into those tools, what is explicitly off-limits (customer PII, financial data, proprietary IP), and who is responsible for making those decisions going forward.

The policy doesn't need to be comprehensive on day one. If you don't have clarity yet, start narrow: “We are only allowing this one tool for this one purpose until we establish a better framework.” A narrow policy that is actually followed beats a broad policy that is ignored or unknown. The goal is to get ahead of the shadow AI problem before it compounds.

ProArch uses Microsoft Defender for Cloud Apps to give clients visibility into what third-party AI applications employees are accessing - even from personal devices on corporate networks. The tool generates a security review score and privacy score for each application, letting IT make informed allow/block decisions rather than blanket prohibitions that get bypassed anyway.
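The informed allow/block approach can be sketched as a simple decision rule. This is illustrative only: the 0–10 score scale and the thresholds below are assumptions for the example, not Microsoft's actual values or Defender for Cloud Apps' API.

```python
# Toy allow/block policy over discovered third-party AI apps, using
# security and privacy scores like those Defender for Cloud Apps reports.
# Score scale (0-10) and cutoffs are assumed for illustration.

def classify_app(name: str, security_score: int, privacy_score: int) -> str:
    """Return a policy action for a discovered third-party AI app."""
    if security_score >= 8 and privacy_score >= 8:
        return "allow"
    if security_score <= 4 or privacy_score <= 4:
        return "block"
    return "review"  # middle band gets a human decision, not a blanket ban

# Hypothetical discovered apps and scores
apps = [("ChatGPT", 9, 8), ("UnknownSummarizer", 3, 5), ("BrowserAIExtension", 6, 6)]
for name, sec, priv in apps:
    print(f"{name}: {classify_app(name, sec, priv)}")
```

The point of the middle "review" band is exactly Jim's argument: graded decisions based on visibility, rather than blanket prohibitions that get bypassed.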

The Two Universal Use Cases: Meeting Notes and RFPs

When Jim is asked where to start with AI, he names two use cases that work across almost every organization regardless of industry or size.

Meeting notes: Collecting, organizing, and acting on information from meetings is something almost no organization does well. AI tools that transcribe and summarize meetings are low risk, immediately valuable, and don't require building anything custom. The upgrade is integration: if the meeting notes tool can push summaries directly into your CRM, your PSA system, or your ticketing platform, you eliminate another manual step and the notes actually become actionable. Jim calls this the lowest-hanging fruit across the entire enterprise AI landscape.
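The "integration upgrade" described above can be sketched in a few lines: package the AI-generated summary as a structured record and push it to the downstream system. The endpoint URL, field names, and note shape here are hypothetical; a real integration would use your CRM or PSA vendor's API.

```python
# Minimal sketch: push an AI meeting summary into a CRM via a webhook.
# Field names and the endpoint are invented for illustration.
import json
from urllib import request

def build_crm_note(meeting_title: str, summary: str, action_items: list[str]) -> dict:
    """Package an AI-generated summary as a CRM activity record."""
    return {
        "type": "meeting_note",
        "title": meeting_title,
        "body": summary,
        "action_items": action_items,
    }

def push_to_crm(note: dict, url: str = "https://example.com/crm/webhook") -> None:
    """POST the note to a (hypothetical) CRM webhook."""
    req = request.Request(
        url,
        data=json.dumps(note).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # network call; swap in your real endpoint
```

Once the summary lands in the CRM automatically, the manual copy-paste step disappears, which is what makes the notes actionable rather than archival.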

RFP automation: ProArch used to avoid RFPs almost entirely - too time-consuming, too uncertain, too demoralizing. They built a custom no-code agent with full knowledge of their organization, capabilities, differentiators, and case studies. Feed it an RFP, it identifies the gaps, starts drafting responses, and flags where human judgment is needed. Result: approximately 50% reduction in time and effort, lower anxiety for the team, and consistent output quality. Every RFP used to be a unique snowflake. Now they have a standard.

Microsoft Copilot, Security, and the Privacy Misconception

Jim is a long-tenured Microsoft professional - Microsoft Certified Trainer for nine years - and is candid about both where Microsoft excels and where it doesn't. On security and AI governance, it excels.

The most important misconception he corrects: Microsoft uses OpenAI's models, but that does not mean Microsoft has the same privacy posture as OpenAI. Microsoft licenses the models and adapts them to their platform. Your data does not train the models. Microsoft respects your existing security boundaries - role-based access controls, permission structures, data classification - when AI surfaces results. If an employee doesn't have permission to see a document, Copilot won't surface it in a summary, even if it's relevant.
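The permission-trimming behavior described above amounts to filtering retrievable content by the user's existing access rights before the AI ever sees it. The data model below is invented for illustration; Copilot enforces this internally against your actual RBAC and permission structures.

```python
# Illustration of permission trimming: the AI can only summarize
# documents the user could already open. Data model is hypothetical.

documents = [
    {"id": 1, "title": "Q3 forecast", "allowed": {"cfo", "finance"}},
    {"id": 2, "title": "Team offsite plan", "allowed": {"everyone"}},
    {"id": 3, "title": "M&A memo", "allowed": {"exec"}},
]

def visible_docs(user_groups: set[str]) -> list[dict]:
    """Return only the documents this user's groups can access."""
    effective = user_groups | {"everyone"}  # everyone-readable docs always pass
    return [d for d in documents if d["allowed"] & effective]
```

A finance employee would see documents 1 and 2 in a summary; an employee with no group memberships would see only document 2, regardless of how relevant the M&A memo is to their prompt.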

Jim's broader point: Microsoft is not the most secure platform by default, but it is one of the most securable platforms if you invest in configuring it correctly. For organizations already in the M365 ecosystem, the marginal cost of adding Copilot and the security stack is low relative to the data governance capabilities it enables.

His real-world Copilot moment: opening a Word document and seeing a prompt at the top asking whether he wanted to draft a summary of a recent meeting or justify to a client why they should purchase a particular service - unprompted, based purely on what Copilot could infer from his recent activity across documents, meetings, and chats. That level of contextual integration is what distinguishes Copilot from standalone AI tools: not the model, but the data depth it can reason across.

AI Examine: ProArch's Tool for Evaluating AI System Outputs

ProArch built an internal product called AI Examine, designed for organizations that have developed their own full-stack AI solutions and want to evaluate those systems against responsible AI standards. The tool sits on top of an AI system's outputs and measures them for bias, accuracy, and ethical compliance - providing a dashboard that lets teams dial in system performance over time.

The use case: an organization builds an AI system for customer support, hiring, or underwriting. AI Examine evaluates the outputs of that system rather than the inputs or the model weights. If the system is producing biased results - by demographic, by geography, by any measurable dimension - AI Examine surfaces that and gives teams a tool to correct it. This is particularly relevant for regulated industries where AI governance is not optional.
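This is not AI Examine's actual method, but one common output-level bias check it describes (measuring results "by any measurable dimension") can be sketched as demographic parity: compare favorable-outcome rates across groups and flag large gaps.

```python
# Minimal demographic-parity sketch: favorable-outcome rate per group,
# and the max gap between groups as a bias signal. Illustrative only.
from collections import defaultdict

def outcome_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, favorable_outcome) pairs -> favorable rate per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        favorable[group] += ok
    return {g: favorable[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Max difference in favorable rates; a larger gap is a stronger bias signal."""
    return max(rates.values()) - min(rates.values())
```

Run continuously on a system's outputs, a metric like this gives teams the dashboard-style signal the section describes: a number to dial down over time rather than an opinion to debate.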

The Robot Convergence Is Coming Faster Than IT Roadmaps Allow

Jim's broader argument is that the window to get AI governance right is shorter than most organizations realize - not because of software AI, but because physical robotics is converging on the same timeline. Commercial-grade robots that are currently priced at $40,000–$50,000 are likely to hit consumer-accessible price points (leased for a few hundred dollars a month) within a few years. The governance frameworks, data policies, and IT infrastructure decisions companies are making now will determine whether they can absorb that transition quickly.

The C-3PO / R2-D2 distinction maps to real product categories: ambient, highly capable reasoning systems (closer to what Alexa should be but isn't yet), versus mobile task-completion robots that move through physical environments. Both will require organizations to have thought through what data these systems can see, what they can act on, and how human oversight is maintained.

Jim is genuinely frustrated with Amazon's pace on Alexa. The smart home AI category is wide open. An aging population, young families, companionship and health monitoring use cases - all of it is underserved. Sam Altman's collaboration with Jony Ive on a new AI device is exactly the kind of move that could take that market in a year or two if Amazon doesn't accelerate.

The AI Implementation Framework: 7 Steps

Jim's step-by-step framework for organizations starting their AI journey:

  1. Define your pain points first. Ask: if we could solve this problem, what would it mean to the business? More revenue? More clients served? More grants written? The business question precedes the technology question.
  2. Define your use cases precisely. Name the specific workflow, the specific team, the specific outcome you want. Vague use cases produce vague pilots that produce no useful learning.
  3. Establish an AI use policy before you deploy anything. Define acceptable tools, acceptable data inputs, and what is off-limits. Start narrow if you have to.
  4. Build a roadmap, not just a pilot. Show how each use case will evolve over time. Don't deploy AI into one workflow in isolation - plan the progression.
  5. Train your people on prompting. Do not assume anyone intuitively knows how to get value from these tools. The better someone communicates, the more they get from AI - but prompting is a skill, not a personality trait, and it can be taught.
  6. Designate an internal AI champion. Someone who is passionate, knowledgeable, and accountable for defining what success looks like. If you don't have that person internally, hire a Virtual Chief AI Officer to fill the role until you do.
  7. Measure, reinforce, and integrate. Define your metrics upfront. Return on investment from AI is not immediate. Repeatedly go back to your users, reinforce adoption, and gradually make the AI layer indistinguishable from the human layer in the workflows where it's been deployed.

Tools & Resources Mentioned

  • ProArch (proarch.com) - Global IT consulting and services. Services include AI strategy, Microsoft 365 implementation, security architecture, and the new Virtual Chief AI Officer offering for organizations without in-house AI leadership.
  • AI Examine - ProArch's proprietary tool for evaluating AI system outputs against bias, accuracy, and ethical standards. Designed for organizations with custom-built AI solutions that need ongoing governance monitoring.
  • Microsoft Copilot + M365 - Jim's recommended AI platform for enterprise organizations already in the Microsoft ecosystem. Key advantage: respects existing RBAC security permissions, does not train on your data, provides governance controls unavailable in consumer AI tools.
  • Microsoft Defender for Cloud Apps - The security tool Jim uses to surface shadow AI usage within organizations. Shows which third-party applications employees are accessing, their security and privacy scores, and enables informed allow/block decisions at the policy level.
  • NotebookLM - Google's research and summarization tool. Jim uses it to convert long-form content into audio - the example given: turning a ChatGPT-written village walking tour into an audio guide ready for municipal use.
  • Control Alt Innovate - Jim's LinkedIn newsletter covering AI strategy, enterprise adoption, and technology leadership. He publishes approximately three articles per week on LinkedIn and uses it both to share perspective and get feedback that evolves his thinking.

Frameworks

You Don't Know What You Don't Know (Shadow AI)

Organizations that believe they are 'not using AI' almost certainly are - through unauthorized employee use of consumer AI tools. The risk is not just policy violation; it's data exposure without visibility. The first step in any AI governance program is getting visibility into what tools are already in use, not choosing new ones. Microsoft Defender for Cloud Apps is one way to do this at scale.

The Drudgery-Distraction-Dull Framework

Jim's adaptation of the classic robotics framing ('dirty, dull, dangerous'). For AI: the best initial targets are tasks that are drudgery (low-value, repetitive, draining), distractions (work that pulls people away from higher-value activity), and dull (rote processing that humans do consistently worse than machines). RFP response and meeting notes both qualify on all three dimensions.

Art of the Possible

Jim borrows this framing from Microsoft: the primary barrier to AI adoption in most organizations is not cost or capability - it's imagination. Decision-makers haven't seen what AI can actually do in their specific workflows. Demonstrations that show specific, concrete capabilities for a specific audience unlock adoption faster than any case study or ROI analysis. Get the light bulb to go on first.

Use Cases Before Technology

The wrong implementation sequence: deploy AI tools, then develop use cases for them. The right sequence: define business pain points, define what solving them would mean, identify where AI fits, then select and deploy tools. Organizations that put technology first end up with expensive licenses and no adoption because nobody ever defined what success looks like.

Co-Creation (vs. AI Replacement)

Jim's preferred framing for AI-augmented work: not 'AI does this' but 'humans and AI co-create this.' The distinction matters for change management - employees who feel replaced disengage; employees who feel augmented invest. The goal is workflows where the AI and human contributions become indistinguishable from the output perspective, not workflows where humans are removed.

Most Securable (Not Most Secure)

Jim's long-standing characterization of Microsoft's security posture: it is not the most secure platform out of the box, but it is one of the most securable platforms if you invest in configuring it. This framing applies to AI governance too - the controls are available, but they require deliberate implementation. Organizations that deploy Microsoft and don't configure the security stack are not getting the advantage they're paying for.

FAQ

What should an organization do first when getting started with AI?

Establish an AI use policy before deploying any tool. The policy defines which tools are acceptable, what data can be used with them, and what is off-limits. If you're uncertain, start narrow: allow one tool for one purpose and expand as your understanding grows. Without a policy, you have no baseline for measuring compliance, no protection against shadow AI risk, and no framework for making future decisions.

What are the best first AI use cases for any organization?

Meeting notes collection and synthesis - ideally with CRM or ticketing integration - and RFP response automation. Both are universal, low-risk, and immediately measurable. Meeting notes address a problem every organization has but rarely solves well. RFP automation reduces a high-anxiety, high-effort process and creates quality consistency that manual responses cannot match. ProArch achieved approximately 50% time reduction on RFPs with a custom no-code agent.

Does Microsoft Copilot train on your company's data?

No. Microsoft licenses OpenAI's models but does not use your organizational data to train those models. More importantly, Copilot respects your existing security permissions - role-based access controls, document permissions, classification labels. If an employee doesn't have access to a file, Copilot won't surface it in a summary for that employee. This is a meaningful governance advantage over consumer AI tools that have no awareness of your organizational security structure.

How do you find out what AI tools your employees are already using?

Microsoft Defender for Cloud Apps can surface shadow AI usage - showing which third-party applications employees are accessing, including AI tools, and providing security and privacy scores for each application. This gives IT the visibility to make informed policy decisions rather than blanket bans that get bypassed. Organizations that 'don't allow AI' typically discover through this kind of visibility that employees are using multiple AI tools already.

What is a Virtual Chief AI Officer and when should a company hire one?

A Virtual Chief AI Officer is an outsourced role that provides the strategic AI leadership function - use case qualification, governance development, vendor evaluation, adoption measurement - for organizations that don't have that capability in-house. ProArch offers this as a service. It's appropriate for companies that recognize they need an AI strategy and an internal champion to drive it, but don't have the internal talent or aren't ready to hire a full-time executive for the function.

What fields are most at risk from AI disruption, and what should students study?

Jim points to corporate photography and headshots (AI-generated images for $45 produce 150 variations instantly), interior design (AI can redesign a room from a photo in seconds), and creative roles broadly where the work is primarily generative rather than curatorial or relational. His advice: whatever you study, understand the AI tools in that field deeply enough to use them as leverage, and identify specifically what human judgment or relationship capability you bring that the AI cannot. Study what makes you irreplaceable, not just what has historically had jobs.

What's Jim's view on LLMs as the long-term foundation of AI?

LLMs are one type of AI, not the final one. AI existed before LLMs and will include technologies beyond LLMs. Jim expects the eventual picture to be a combination: machine learning doing one thing, business intelligence doing another, LLMs doing a third, and together unlocking something closer to artificial general intelligence - or what is now being called superintelligence. The question of when AGI arrives and how we'll know is still open. What is clear is that treating LLMs as the sole focus is a temporary frame.
