When I started angel investing in the late 1990s, a tech investment carried significant technology risk, with the potential upside being groundbreaking innovation. Being an investor at that time meant betting on actual tech, such as nanotech, semiconductors, or biotech.

E-commerce, albeit hyped and interesting, was not considered tech. It was “Business 2.0”, plain and straightforward, hype included.

  • kokope11i@lemmy.world · 2 days ago

    To summarize, “I have a POV that almost no one else has. Why is everyone not naming things the way I see them?”

    • corsicanguppy@lemmy.ca · 1 day ago

      > To summarize, “I have a POV that almost no one else has. Why is everyone not naming things the way I see them?”

      Yes, Pinocchio, your company is a real tech company because it uses tech tools.

      (sorry, it’s just a tech-leveraging company, the same way my bus driver leverages the bus but does not fix or build it. My bus driver is not a bus; just the driver.)

    • REDACTED@infosec.pub · 2 days ago

      Seriously. Your opinions and hate aside, LLMs, deep learning, and reasoning models are among the most advanced software technologies available to consumers.

      This post is lame

      • JayleneSlide@lemmy.world · 1 day ago

        No, no they’re not. These are just repackaged and scaled-up neural nets. Anyone remember those? The concept and good chunks of the math are over 200 years old. Hell, there was two-layer neural net software in the early 90s that ran on my x386. Specifically, Neural Network PC Tools by Russell Eberhart. The DIY implementation of OCR in that book is a great example of roll-your-own neural net. What we have today, much like most modern technology, is just lots MORE of the same. Back in the DOS days, there was even an ML application that would offer contextual suggestions for mistyped command line entries.

        Typical of Silicon Valley, they are trying to rent out old garbage and use it to replace workers and creatives.
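[For readers who never saw one: the kind of two-layer neural net described above fits in a handful of lines. This is a sketch in modern Python/numpy; the XOR task, layer sizes, and hyperparameters are illustrative choices of mine, not taken from Eberhart's book.]

```python
import numpy as np

# Toy two-layer (one hidden layer) neural net trained with plain
# backpropagation on XOR. The sizes and learning rate are arbitrary
# illustrative choices.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr, losses = 1.0, []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward: hidden layer
    out = sigmoid(h @ W2 + b2)            # forward: output layer
    losses.append(float(((out - y) ** 2).mean()))
    d_out = (out - y) * out * (1 - out)   # backprop through output
    d_h = (d_out @ W2.T) * h * (1 - h)    # backprop through hidden
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

Conceptually this is the whole machine; what changed since the 90s is mostly the parameter count and the hardware it runs on.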

        • REDACTED@infosec.pub · 1 day ago

          I genuinely can’t tell if you’re being for real. By the same logic, raytracing is ancient tech that should be abandoned.

          The stuff we had back when people thought Hitler was still alive on some island and the stuff we have now are barely comparable, even though, yes, they use a similar underlying technology.

          Since I never had the chance to try it out myself, how was your neural network’s reasoning back in the day? Imo that’s the most impressive part, not that it can write.

          • JayleneSlide@lemmy.world · 5 hours ago

            And an additional response, because I didn’t fully answer your question. LLMs don’t reason. They traverse a data structure based on weightings derived from the occurrence frequencies in their training content. Loosely speaking, it’s a graph (https://en.wikipedia.org/wiki/Graph_(abstract_data_type)). It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. An LLM can’t reason through a problem it hasn’t previously seen, unlike, say, a squirrel.
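[The weighted-traversal picture above can be caricatured with a toy bigram model: count which word follows which in some training text, then generate by sampling the next word in proportion to those counts. This is a deliberate simplification — real LLMs use learned continuous weights, not raw counts — and the sample text is invented.]

```python
import random
from collections import Counter, defaultdict

# Toy "frequency-weighted traversal": each word is a node, and the
# counts in `follows` are weighted edges to the words observed after it.

training = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(Counter)
for cur, nxt in zip(training, training[1:]):
    follows[cur][nxt] += 1          # weighted edge cur -> nxt

rng = random.Random(0)
out = ["the"]
for _ in range(6):
    candidates = follows[out[-1]]
    if not candidates:              # dead end: no observed successor
        break
    words, weights = zip(*candidates.items())
    out.append(rng.choices(words, weights=weights, k=1)[0])
```

Every step only ever follows an edge that existed in the training text, which is the point: the output looks fluent without any reasoning happening.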

          • JayleneSlide@lemmy.world · 6 hours ago (edited)

            > By the same logic, raytracing is ancient tech that should be abandoned.

            Nice straw man argument you have there.

            I’ll restate, since my point didn’t seem to come across. All of the “AI” garbage that is getting jammed into everything is merely scaled up from what came before. Scaling up is not advancement. A possible analogy would be automobiles in the late 60s versus the 90s: just put in more cubic inches and a bigger chassis! More power from more displacement does not mean more advanced. Continuing that analogy, 2.0L engines cranking out 400 ft-lb and 500 HP while delivering 28 MPG average is advanced engineering. Right now, the software and hardware running LLMs are just MOAR cubic inches. We haven’t come up with more advanced data structures.

            These types of solutions can have a place and can produce something adjacent to the desired results. We make great use of expert systems constantly within narrow domains. Camera autofocus systems leap to mind. When “fuzzy logic” autofocus was introduced, it was a boon to photography. Another example of narrow-ish domain ML software is medical decision support software, which I developed in a previous job in the early 2000s. There was nothing advanced about most of it; the data structures used were developed in the 50s by a medical doctor from Columbia University (Larry Weed: https://en.wikipedia.org/wiki/Lawrence_Weed). The advanced part was the computer language he also developed for quantifying medical knowledge. Any computer with enough storage, RAM, and the hardware ability to quickly traverse the data structures can be made to appear advanced when fed with enough collated data, i.e. turning data into information.
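[The general shape of a narrow-domain expert system like those mentioned above is a small rule engine. This sketch is mine; the toy rules are invented for illustration and have nothing to do with Weed's actual knowledge-coupling language or any real medical decision-support product.]

```python
# Minimal forward-chaining rule engine: repeatedly fire any rule whose
# conditions are all present as facts, adding its conclusion, until
# nothing new can be derived.

rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "recommend_chest_exam"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"fever", "cough", "short_of_breath"}, rules)
```

Within a narrow domain this works remarkably well; the "advanced" part is the curated knowledge fed into it, not the traversal itself.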

            > Since I never had the chance to try it out myself, how was your neural network’s reasoning back in the day? Imo that’s the most impressive part, not that it can write.

            It was slick for the time. It obviously wasn’t an LLM per se, but both were a form of LM. The OCR and auto-suggest for DOS were pretty shit-hot for x386. The two together inspired one of my huge projects in engineering school: a whole-book scanner* that removed page curl and gutter shadow, and then generated a text-under-image PDF. By training the software on a large body of varied physical books and retentively combing over the OCR output and retraining, the results approached what one would see in the modern suite that now comes with your scanner. I only achieved my results because I had unfettered use of a quad Xeon beast in the college library where I worked. That software drove the early digitization processes for this (which I also built): http://digitallib.oit.edu/digital/collection/kwl/search

            *in contrast to most book scanning at the time, which required the book to be cut apart and the pages run through a sheet-fed scanner; lots of books couldn’t be damaged like that.

            Edit: a word