
AI Is Booming At Work — But It Still Doesn't Understand Our Jobs
Generative and agentic AI tools like ChatGPT, Claude, and Slack AI are quickly becoming part of everyday work. Yet a new Salesforce and YouGov survey shows a critical problem: most of these tools still lack the job context they need to be truly useful, trusted, and safe in the enterprise.
In other words, AI is getting smarter — but it often has no idea what your role is, what systems you use, or what "good" looks like in your organization.
This blog breaks down what that gap is, why it matters, and how Salesforce + trusted enterprise data can close it.
The Job-Context Gap: AI That Talks Well But Doesn't Work Well

Workers are clearly embracing AI. Salesforce's Generative AI Snapshot research shows:
- More than half of workers believe generative AI will help them advance their careers.
- 61% of employees use or plan to use generative AI at work.
But there is a major disconnect between this excitement and real-world value. According to Salesforce research:
- 76% of workers say AI tools lack the job-specific context they need to be truly effective.
- Many employees don't know how to use these tools with trusted data, or how to use them securely.
Most popular AI tools are trained on public internet data. That makes them good at generic answers, but weak at:
- Understanding your role (for example, an SMB account executive vs. an enterprise customer success manager).
- Using your company's customer data, policies, and playbooks.
- Respecting your governance, permissions, and compliance requirements.
Without that context, AI becomes "interesting" instead of "indispensable."
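To make the gap concrete, here is a minimal sketch (all names and fields hypothetical) of how the same task prompt changes when job context is attached versus when it is not:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JobContext:
    """Hypothetical bundle of job context a public tool never sees."""
    role: str
    tools: list
    policies: list

def build_prompt(task: str, ctx: Optional[JobContext] = None) -> str:
    """Prepend role, tooling, and policy context to a raw task prompt."""
    if ctx is None:
        # What a generic public tool receives: the task, nothing else.
        return task
    return "\n".join([
        f"Role: {ctx.role}",
        f"Systems in use: {', '.join(ctx.tools)}",
        f"Policies to respect: {', '.join(ctx.policies)}",
        f"Task: {task}",
    ])

generic = build_prompt("Draft a renewal email.")
grounded = build_prompt(
    "Draft a renewal email.",
    JobContext(
        role="Enterprise CSM",
        tools=["Service Cloud", "Slack"],
        policies=["Only quote approved pricing tiers"],
    ),
)
```

The model answering `generic` can only guess at tone, product, and pricing; the model answering `grounded` at least knows whose job it is doing.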
Shadow AI: When Workers Go Around IT To Get Things Done

Because generic tools feel easier and faster, many workers are adopting them on their own — creating a new form of shadow IT: shadow AI.
Salesforce's global survey of over 14,000 workers found:
- 55% of generative AI users have used unapproved tools at work.
- 40% have used tools that are explicitly banned.
That behavior introduces real risk:
- Sensitive customer or company data copied into public tools.
- No audit trail, policy enforcement, or data residency controls.
- Inconsistent answers and experiences for customers and teams.
Interestingly, workers aren't trying to be reckless. Many say ethical and safe AI use means relying on company-approved tools — but they don't have access to tools that are as powerful or as easy to use as the public ones. This is the experience gap IT and business leaders need to close.
Why Job Context Is The Missing Ingredient

For AI to move from novelty to necessity in the enterprise, it must understand three things deeply:
1. The Worker
- Role, seniority, and responsibilities.
- Targets, KPIs, and metrics that matter.
- Preferred workflows and tools (Sales Cloud, Service Cloud, Slack, etc.).
2. The Work
- Customer history, open opportunities, active cases.
- Current deals, SLAs, campaigns, and projects.
- Internal knowledge: battlecards, macros, solution articles, templates.
3. The Guardrails
- Data access and sharing rules.
- Compliance requirements (for example, GDPR, HIPAA, PCI).
- Security, retention, and audit policies.
When AI has this context and is grounded in trusted customer data, it can move from generic content generation to high-value, role-aware assistance in the flow of work.
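One way to picture those three ingredients together is as a single context object, where the guardrails decide what the model is ever allowed to see. A toy sketch with invented field names:

```python
def assemble_context(worker, work, guardrails):
    """Merge worker and work context, then drop any field the
    guardrails mark as restricted before the model sees it.
    All field names here are hypothetical."""
    restricted = set(guardrails.get("restricted_fields", []))
    merged = {**worker, **work}
    return {k: v for k, v in merged.items() if k not in restricted}

ctx = assemble_context(
    worker={"role": "SMB AE", "kpi": "pipeline created"},
    work={"open_cases": 3, "customer_tax_id": "REDACT-ME"},
    guardrails={"restricted_fields": ["customer_tax_id"]},
)
```

The point of the shape: guardrails are applied before grounding, not bolted on after generation.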
From Generic LLM To Trusted Enterprise Copilot

Salesforce's approach to AI is built around a simple idea: every worker should have a copilot that understands their job and their customers — without compromising trust.
That requires combining three layers:
1. The Model: Generative + Agentic AI
- Large language models that excel at natural language understanding and generation.
- Agentic capabilities that can take action in systems — not just answer questions.
2. The Data: Secure, Unified, Customer-Centric
- Customer 360 data unified across sales, service, marketing, commerce, and more.
- Fine-grained permissions and security controls respected by default.
- Grounded responses that reference real, live enterprise data — not just the public web.
3. The Interface: Embedded In Flow of Work
- AI surfaces inside Salesforce, Slack, and other tools workers already use.
- Suggestions and automation triggered by real-time events (for example, a new case, an at-risk opportunity).
- Explainable outputs so users know why AI recommended a step or generated a response.
When those three layers connect, AI can understand not just language, but context, intent, and impact.
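As a rough sketch of how the three layers might connect, here is a simplified retrieve-and-ground loop. `llm` is a stand-in for any model call, and the record shapes are invented for illustration:

```python
def retrieve(records, query, user_teams):
    """Data layer: return only records the user is permitted to see
    that mention any term from the query."""
    terms = query.lower().split()
    return [
        r for r in records
        if r["owner_team"] in user_teams
        and any(t in r["text"].lower() for t in terms)
    ]

def ground_and_answer(question, records, user_teams, llm):
    """Model layer: answer from retrieved context only, and return the
    record ids used so the interface layer can explain the output."""
    hits = retrieve(records, question, user_teams)
    context = "\n".join(r["text"] for r in hits)
    answer = llm(f"Context:\n{context}\n\nQuestion: {question}")
    return {"answer": answer, "sources": [r["id"] for r in hits]}

records = [
    {"id": "case-1", "owner_team": "support", "text": "Refund policy: 30 days."},
    {"id": "opp-9", "owner_team": "sales", "text": "Renewal discount approved."},
]
result = ground_and_answer(
    "What is the refund policy?",
    records,
    user_teams={"support"},
    llm=lambda prompt: "Refunds are accepted within 30 days.",  # stub model
)
```

Note that permissions are enforced in `retrieve`, and `sources` gives the interface layer something to show the user, which is what makes the output explainable.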
What Context-Aware AI Looks Like In Salesforce

Sales: From "Write an email" to "Move this opportunity to closed won"
Instead of asking a generic tool "Write a follow-up email," a Salesforce copilot that knows your pipeline can:
- Review the opportunity history and last customer interaction.
- Draft a follow-up email aligned to your sales methodology and pricing rules.
- Suggest next-best actions (for example, loop in a technical specialist, schedule a demo).
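The "suggest next-best actions" step is, underneath, rules or a model over opportunity state. A toy rules-based version, with hypothetical field names and thresholds rather than any real sales methodology:

```python
def next_best_actions(opp):
    """Map opportunity state to suggested actions. Field names and
    thresholds are illustrative only."""
    actions = []
    if opp["days_since_last_touch"] > 14:
        actions.append("draft a follow-up email")
    if opp["stage"] == "technical evaluation" and not opp["specialist_assigned"]:
        actions.append("loop in a technical specialist")
    if opp["stage"] == "discovery":
        actions.append("schedule a demo")
    return actions

suggested = next_best_actions({
    "stage": "technical evaluation",
    "days_since_last_touch": 21,
    "specialist_assigned": False,
})
```

A real copilot would learn or configure these rules from the organization's own playbooks, which is exactly the job context generic tools lack.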
Service: From "Answer this question" to "Resolve this case"
A context-aware AI assistant for service can:
- Use past cases, knowledge articles, and entitlements to suggest resolutions.
- Generate a tailored, empathetic response that matches your brand tone.
- Trigger workflows: escalations, field service visits, or follow-up surveys.
Marketing: From "Write a campaign" to "Drive this segment outcome"
Grounded in real segment and performance data, AI can:
- Propose campaign ideas tied to specific personas and lifecycle stages.
- Generate copy variants aligned with brand and compliance guidelines.
- Recommend channels and timings based on past performance.
Leaders: How To Close The Context Gap In Your Organization

To move from fragmented, risky AI adoption to trusted, high-value AI at scale, leaders should focus on four priorities.
1. Establish Clear, Practical AI Policies
- Define which tools are approved and for what use cases.
- Communicate what data can and cannot be shared with external models.
- Create simple, role-specific guidelines — not just long legal PDFs.
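A policy like this can be made machine-checkable rather than living only in a PDF. A minimal sketch, assuming a simple allow-list keyed by use case (tool and use-case names invented for illustration):

```python
APPROVED_TOOLS = {
    # use case -> tools approved for it (illustrative entries)
    "customer_email_drafts": {"internal-copilot"},
    "public_research": {"internal-copilot", "public-chatbot"},
}

def is_approved(tool, use_case):
    """True only if the tool is on the allow-list for this use case;
    unknown use cases default to denied."""
    return tool in APPROVED_TOOLS.get(use_case, set())
```

Defaulting unknown use cases to "denied" keeps the policy conservative while the list is still being built.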
2. Ground AI In Trusted, Governed Data
- Invest in unifying customer and operational data with robust governance.
- Make sure permissions and security models are enforced at the AI layer.
- Prefer platforms where AI "comes to the data," instead of exporting data to unmanaged tools.
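Enforcing permissions "at the AI layer" means the retrieval step itself checks sharing rules and records an audit entry, rather than trusting whatever was exported. A toy sketch; the shapes are hypothetical, not a real Salesforce API:

```python
audit_log = []

def governed_fetch(record, user, sharing_rules):
    """Apply sharing rules before the model sees a record, and log
    every access attempt, allowed or not."""
    allowed = user["team"] in sharing_rules.get(record["object"], set())
    audit_log.append(
        {"user": user["name"], "record": record["id"], "allowed": allowed}
    )
    return record if allowed else None

acct = {"id": "acct-7", "object": "Account"}
rules = {"Account": {"sales"}}
seen = governed_fetch(acct, {"name": "dana", "team": "sales"}, rules)
denied = governed_fetch(acct, {"name": "lee", "team": "marketing"}, rules)
```

Copying the same record into a public chat window gives you neither the denial nor the audit trail.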
3. Build Role-Based AI Experiences
- Prioritize the highest-value, highest-friction workflows by role.
- Design prompts, templates, and automations tailored to how those roles actually work.
- Measure adoption, time saved, and business outcomes — not just "number of prompts run."
4. Upskill Your Workforce
- Train employees not only on how to use AI, but how to review, edit, and govern AI outputs.
- Give teams examples of good prompts, good reviews, and good guardrails.
- Encourage feedback loops so AI systems and prompts improve over time.
The Future: Agentic AI That Works Like A Teammate, Not A Tool

We are moving from AI that simply generates text to agentic AI that can take actions across your systems as a trusted teammate. In a Salesforce context, that could mean AI that can:
- Qualify leads, update records, and schedule meetings autonomously — with human oversight.
- Monitor accounts for risk signals and proactively recommend retention ("save") actions.
- Continuously learn from outcomes to improve future recommendations.
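"Autonomously, with human oversight" is typically implemented as a risk gate: low-risk actions execute on their own, higher-risk actions wait for approval. A minimal sketch, with the risk levels and action names invented:

```python
def run_agent_action(action, risk, approve):
    """Execute low-risk actions autonomously; hold higher-risk actions
    for human review. `approve` is a human-approval callback."""
    if risk == "low" or approve(action):
        return {"action": action, "status": "executed"}
    return {"action": action, "status": "held_for_review"}

auto = run_agent_action("update contact record", "low", approve=lambda a: False)
gated = run_agent_action("offer 20% discount", "high", approve=lambda a: False)
```

The gate is what keeps a teammate-like agent accountable: the human decides where the autonomy boundary sits, per action type.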
But that future only works if AI:
- Understands roles, processes, and data deeply.
- Operates within strong governance frameworks.
- Is embedded directly into the applications where work actually happens.
The survey findings are a clear call to action: AI in the enterprise must evolve from "smart text generator" to trusted, job-aware copilot — and that evolution will be powered by platforms like Salesforce that put customer data, security, and context at the center.