Compare LLM Model vs LLM Service

The risk profiles of AI models and of the providers running them are different. Learn how to evaluate them correctly.

I have seen many articles lately claiming that DeepSeek is dangerous, that DeepSeek can steal your data, that DeepSeek spies on you. The problem is that too many of these are not clear about what they are referring to, or simply misunderstand it: there is a huge difference in terms of risk between the model and the service.

  • ๐——๐—ฒ๐—ฒ๐—ฝ๐—ฆ๐—ฒ๐—ฒ๐—ธ ๐—ฅ๐Ÿญ, the open-source model that the company released, and
  • the services that ๐—›๐—ฎ๐—ป๐—ด๐˜‡๐—ต๐—ผ๐˜‚ ๐——๐—ฒ๐—ฒ๐—ฝ๐—ฆ๐—ฒ๐—ฒ๐—ธ ๐—”๐—ฟ๐˜๐—ถ๐—ณ๐—ถ๐—ฐ๐—ถ๐—ฎ๐—น ๐—œ๐—ป๐˜๐—ฒ๐—น๐—น๐—ถ๐—ด๐—ฒ๐—ป๐—ฐ๐—ฒ ๐—•๐—ฎ๐˜€๐—ถ๐—ฐ ๐—ง๐—ฒ๐—ฐ๐—ต๐—ป๐—ผ๐—น๐—ผ๐—ด๐˜† ๐—ฅ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—–๐—ผ., Ltd. offer running this model, such as the website, mobile apps and API.

Let's try an analogy (an imperfect one):

  • The model, e.g., DeepSeek R1
    • This is the car blueprint. It might have defects and not be perfect. Anyone can look at it, test it, and make sure it works well enough. But the blueprint won't hurt you; it's just a piece of paper. (Well, it might blow your mind 🤯😄)
  • The service, e.g., a website (ChatGPT), mobile apps (Siri) or an API (Perplexity API), provided using this model
    • This is the actual car built from the blueprint. The car manufacturer might have followed the blueprint, resolved some issues (e.g., biases), added software to track your driving habits (e.g., logging your prompts) or fitted a GPS tracker without you knowing (e.g., collecting private information).

The same model can be used in very different ways. And many of the risks relate to the car manufacturer more than to the blueprint itself.


I hope this helps clarify things a little.

Now, let's look at the risk profiles:

| Risk | DeepSeek R1 (model) | DeepSeek R1 run yourself | DeepSeek R1 served by a trustworthy website, mobile app or API provider | DeepSeek R1 served by a website, mobile app or API provider you don't trust |
|---|---|---|---|---|
| Control | N/A – it's open source, you can look at it. | 🟢 Full control over model deployment, fine-tuning and usage when self-hosted. | 🟡 It should be fine, but check the T&Cs. | 🔴 Can disappear from under you at any moment. |
| Privacy | N/A – the model itself doesn't do anything; it can't steal or exfiltrate data. | 🟢 | 🟡 If your account information (name, email…) and input (prompts / uploaded documents) are treated as privately as they should be, there is no major risk. | 🔴 All the information you type or upload could be siphoned by the provider. |
| Security | 🟡 There is a minimal risk of malicious reasoning that could then be injected into your own code/answers. | 🟢 If you don't let the model connect elsewhere. There is a theoretical possibility that the model could somehow hack its way out of its environment, but we're not there (yet). | 🟡 You rely on the provider's security controls to protect your data. | 🔴 Particularly risky if the app has extended permissions on your device (looking at you, vibe-coders). |
| Bias / Censorship | 🟡 Possible. | 🟡 Same as the model. | 🟡 Same as the model. | 🔴 If the provider further filters the input or output. |
| Monitoring | N/A – it's there, you can look at it. | No built-in monitoring; depends on your implementation. | Provider can monitor usage, implement rate limits and track user behaviour. | Provider can monitor usage, implement rate limits and track user behaviour. |
| Cost | N/A – it's free and open source. | One-time cost of computing resources for deployment. | Usually subscription or pay-per-use pricing. | Usually subscription or pay-per-use pricing. |
| Reliability | N/A | Dependent on the infrastructure you run it on. | Subject to provider uptime, server stability and maintenance schedules. | Subject to provider uptime, server stability and maintenance schedules. |
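The privacy column above boils down to one question: does your prompt ever leave your machine? Here is a minimal Python sketch of that distinction. The local endpoint assumes a self-hosted runtime such as Ollama on its default port; the hosted URL is purely hypothetical. No request is actually sent; we only inspect where each one would go.

```python
from urllib.parse import urlparse

# Scenario 1: DeepSeek R1 run yourself (e.g., via Ollama on its default port).
# The prompt would be sent to a server on your own machine.
local_endpoint = "http://localhost:11434/api/generate"

# Scenario 2: DeepSeek R1 served by a third party (hypothetical URL).
# The same prompt would leave your machine for the provider's servers.
hosted_endpoint = "https://api.example-provider.com/v1/chat/completions"

def prompt_leaves_your_machine(endpoint: str) -> bool:
    """True if the prompt would be sent to a host you don't control."""
    host = urlparse(endpoint).hostname
    return host not in ("localhost", "127.0.0.1")

print(prompt_leaves_your_machine(local_endpoint))   # False: stays on your machine
print(prompt_leaves_your_machine(hosted_endpoint))  # True: the provider sees it
```

Everything in the Privacy and Monitoring rows follows from that second `True`: once the prompt reaches the provider, what happens to it is governed by their terms, not by you.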

Conclusion

So, what's the takeaway? When you hear someone talking about the risks of DeepSeek (or any AI model), take a moment to ask: are they talking about the model itself, or the service running it? The distinction matters a lot.

The model is just the blueprint. It's the open-source code that anyone can inspect, test, and run on their own hardware. The real risks come from who's driving the car built from that blueprint, aka the service provider. That's where your data, privacy, and control are at stake.

If you're running it yourself, you're in the driver's seat. If you're using a trustworthy provider, you're probably fine, but always check the fine print. And if you're using a provider you don't trust? Well, you're essentially handing over the keys and hoping for the best.

Stay curious, stay critical, and don't let the hype cloud your judgement. The tech is exciting, but it's worth taking the time to understand what you're really signing up for.

Olivier Reuland