Comparing the results, there are of course some differences, but one glaring one stood out to me. Even though the prompt clearly stated "Please include code examples.", neither of the locally running Ollama versions of DeepSeek-R1 included code examples in its response, while the online AIs did. If I explicitly tell the local DeepSeek models to "Give an example of Python code.", they do include examples in their output; they just didn't in the test above. I'm not sure whether this is simply because the local DeepSeek LLMs are smaller models sized to fit in local memory, or whether there is some other reason for the difference.
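For anyone who wants to reproduce the comparison, here's a minimal sketch of querying a local model through Ollama's `/api/generate` endpoint. It assumes Ollama is running on its default port (11434) and that you've pulled a DeepSeek-R1 tag; `deepseek-r1:7b` below is just an example, so substitute whichever tag you actually have.

```python
import json
import urllib.request

# Assumption: Ollama is running locally on its default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of chunks
    }).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # "deepseek-r1:7b" is a placeholder tag -- use the one you pulled.
    print(ask("deepseek-r1:7b",
              "Explain Python decorators. Please include code examples."))
```

Running the same prompt against a couple of different local tags makes it easy to check whether the missing code examples are consistent across model sizes.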