Alas Poor Erinaceus to [email protected] • English • 4 months ago

‘Sputnik moment’: $1tn wiped off US stocks after Chinese firm unveils AI chatbot

www.theguardian.com

Emergence of DeepSeek raises doubts about sustainability of western artificial intelligence boom
  • @[email protected]
    link
    fedilink
    31•4 months ago

    My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

    • DeepSeek ~$6million https://www.theregister.com/2025/01/26/deepseek_r1_ai_cot/?td=rt-3a
    • Llama 2 estimated ~$4-5 million https://www.visualcapitalist.com/training-costs-of-ai-models-over-time/

    I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex, multimodal model designs. Those are going to be more expensive to train due to the image/video/audio processing, and unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

    • @[email protected]
      link
      fedilink
      English
      38•4 months ago

      The thing is that R1 is being compared to GPT-4 or, in some cases, GPT-4o. That model cost OpenAI something like $80M to train, so roughly equivalent performance at an order of magnitude less cost is no small thing. DeepSeek also says the model is much cheaper to run for inference, though I can’t find any figures on that.
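Taking the figures cited in this thread at face value (~$80M for GPT-4 and ~$6M for DeepSeek R1; both are unofficial third-party estimates, not confirmed numbers), the "order of magnitude" claim checks out:

```python
# Sanity check of the "order of magnitude" claim using the rough,
# unofficial cost estimates cited in this thread.
gpt4_training_cost = 80e6   # ~$80M estimated GPT-4 training cost
deepseek_r1_cost = 6e6      # ~$6M reported DeepSeek R1 training cost

ratio = gpt4_training_cost / deepseek_r1_cost
print(f"GPT-4 / R1 cost ratio: ~{ratio:.1f}x")  # ~13.3x, i.e. roughly an order of magnitude
```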

      • @[email protected]
        link
        fedilink
        5•4 months ago

        My main point is that GPT-4o and the other models it’s being compared to are multimodal, while R1 is only an LLM from what I can find.

        Something trained on audio/pictures/videos/text is probably going to cost more than just text.

        But maybe I’m missing something.

        • @[email protected]
          link
          fedilink
          English
          23•4 months ago

          The original GPT-4 is just an LLM though, not multimodal, and its estimated training cost is still over 10x R1’s if you believe the numbers. I think where R1 is compared to GPT-4o is in so-called reasoning, where you can see the chain of thought or internal prompt paths that the model uses to (expensively) produce an output.

          • @[email protected]
            link
            fedilink
            5•
            edit-2
            4 months ago

            I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4.

            That said, the comparisons are against GPT-4o’s LLM benchmarks, so that may be a valid argument for the LLM capabilities.

            However, I think a lot of the more recent models are pursuing architectures that can act on their own, like Claude’s computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use - which DeepSeek R1 is not attempting.

            Edit: and I think the real money will be in the more complex models focused on workflow automation.

        • @[email protected]
          link
          fedilink
          9•4 months ago

          Yea, except ~20 hours ago DeepSeek released a combined multimodal/generation model with performance similar to its contemporaries and a similarly reduced training cost:

          https://huggingface.co/deepseek-ai/Janus-Pro-7B

          • veroxii • 4 points • 4 months ago

            Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to deploy strategically.

    • modulus • 9 points • 4 months ago

      One of the things you’re missing is that the same techniques are applicable to multimodality. They’ve already released a multimodal model: https://seekingalpha.com/news/4398945-deepseek-releases-open-source-ai-multimodal-model-janus-pro-7b

[email protected]

[email protected]
Create a post
You are not logged in. However you can subscribe from another Fediverse account, for example Lemmy or Mastodon. To do this, paste the following into the search field of your instance: [email protected]

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

  • 25 users / day
  • 102 users / week
  • 333 users / month
  • 1.57K users / 6 months
  • 1 subscriber
  • 1.38K Posts
  • 7.57K Comments
  • Modlog
  • mods:
  • @[email protected]
  • BE: 0.18.4
  • Modlog
  • Instances
  • Docs
  • Code
  • join-lemmy.org