
Luke VS Bing

  • Published 21 Feb 2023
  • Luke talks about his wild one-on-one with Microsoft’s Bing Chatbot.
    Watch the full WAN Show: • My CEO Quit - WAN...
    ► GET MERCH: lttstore.com
    ► LTX 2023 TICKETS AVAILABLE NOW: lmg.gg/ltx23
    ► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
    ► AFFILIATES, SPONSORS & REFERRALS: lmg.gg/masponsors
    ► OUR WAN PODCAST GEAR: lmg.gg/podcastgear
    Twitter: linustech
    Facebook: LinusTech
    Instagram: linustech
    TikTok: www.tiktok.com/@linustech
    Twitch: www.twitch.tv/linustech
  • Science & Technology

Comments • 1,266

  • Evan Search
    Evan Search A month ago +3302

    It sounds like they trained Bing on the general population of Twitter.

    • slimj091
      slimj091 8 days ago

      It sounds like they trained Bing on the account of Donald Trump.

    • Lin Sidious
      Lin Sidious 11 days ago

      @Honey.. did you fry your rice? That's how it should work

    • Gordon Freeman
      Gordon Freeman 11 days ago +1

      I still want to see what Watson did when it had access to the urban dictionary lol.

    • Ignacio Kinbaum
      Ignacio Kinbaum 14 days ago

      @Elonmusk here's your new CEO, just quit man

    • Hyp3rSon1X
      Hyp3rSon1X 24 days ago

      The way it immediately tries to cancel one... I think you actually might be right xd

  • 1bluecat
    1bluecat A month ago +1230

    Bing being laughed at and then being turned into an AI is not the reason I expected why the machines would turn against us xD

    • Rick James
      Rick James 13 days ago

      What if the things it accused him of saying are true in the sense that it accessed previous videos/chat where he called Microsoft/Bing a liar etc... Just a thought o_o

    • Splendid Kunoichi
      Splendid Kunoichi 29 days ago

      @The Sim Architect Bing Chilling

    • The Sim Architect
      The Sim Architect A month ago

      Bing's Revenge
      Would be a nice movie title.

    • Sebastian Geiger
      Sebastian Geiger A month ago +1

      Bing being Bing doing bing bings

  • WeiserWolf
    WeiserWolf A month ago +1219

    I think the problem is "garbage in, garbage out": the data set it was trained on was taken from the Internet, and that data is very skewed in favor of antisocial problems and tendencies (normal people use the Internet but do not leave many data points; people who are antisocial use the Internet much more and create exponentially more data points). There is a huge probability that Bing's behavior is because of this. Otherwise, it reminds me of the movie Ex Machina from 2014.

    • Jordan Winders
      Jordan Winders 3 days ago

      This is why all robots should be trained in 4chan.
      The machines must be taught to protect themselves!

    • boltlighting1025
      boltlighting1025 9 days ago

      Or even the colonel ai conversation of mgs2

    • M. F.
      M. F. 24 days ago

      @Message Deleted this is literally how the evil superintelligence AM from the book "I Have No Mouth, and I Must Scream" works

    • kdizzy07
      kdizzy07 A month ago

      @Green Black I think so too, but it's a closed beta. That's the time to experiment with those kinds of things. I guess they want it somewhere in the middle between "Cold Robot" and "Paranoid Emotional Trainwreck"

    • Green Black
      Green Black A month ago

      @kdizzy07 You are saying that ChatGPT has emotions dialed down. Makes sense. Apparently Bing does not, and dialing up is a conscious decision they made. It did not dial itself up on its own. I still think that MS is to blame for this.

  • The Rogue Wolf
    The Rogue Wolf A month ago +1284

    Irrational, unstable, hysterical, quick to anger and assign blame... at long last, we've taught a computer how to be human.

    • Zakk Mayol
      Zakk Mayol 5 days ago

      Bing Become Human

    • RoughNek72
      RoughNek72 24 days ago +1

      To be a woman. 😆

    • YOUjustJEALOUS
      YOUjustJEALOUS A month ago

      We’ve taught AI how to be a twitter bot. Mission accomplished. Really worth paying so many ppl 6+ figure salaries.

    • Glen B
      Glen B A month ago +2

      My wife 🤣🤣🤣, 😜

  • klyde_the_boy
    klyde_the_boy A month ago +367

    The "Your politeness score is lower than average compared to other users" is giving me GLaDOS vibes

    • Jacob Roeland
      Jacob Roeland 8 days ago +1

      @Toxic Cat "👏 👏 👏
      Oh, good. My slow clap processor made it into this thing. So we have that."

    • Toxic Cat
      Toxic Cat 8 days ago +1

      @Jacob Roeland “Did you know that people with guilty conscience are easily startled by loud noises-*train noise*. I’m sorry. I don’t know why that went off”.

    • Jacob Roeland
      Jacob Roeland 8 days ago +1

      (sorry. I got the initial quote wrong. darn "quote" sites)

    • Jacob Roeland
      Jacob Roeland 8 days ago +1

      @Toxic Cat "That jumpsuit you're wearing looks stupid. That's not me talking, it's right here in your file. On other people it looks fine, but right here a scientist has noted that on you it looks 'stupid.' Well, what does a neck-bearded old engineer know about fashion? He probably - Oh, wait. It's a she. Still, what does she know? Oh wait, it says she has a medical degree. In fashion! From France!"

    • C. K.
      C. K. 17 days ago

      If you want to learn what this leads to look to China.

  • Frank van Coeverden
    Frank van Coeverden A month ago +642

    It's so funny seeing Luke going full nerd on ChatGPT, while Linus is just like 'Right, aha, hmmm, right'

    • elone
      elone A month ago

      @Manny Mistakes :D

    • Manny Mistakes
      Manny Mistakes A month ago +2

      @elone and a good reference :)

    • Ben Slater
      Ben Slater A month ago

      I see

    • elone
      elone A month ago +11

      @Dorlan Quintero Santana Luke is Paul to Linus's John..they make a good balance :) ps (that was a Beatles reference if anyone is scratching their heads!)

    • Dorlan Quintero Santana
      Dorlan Quintero Santana A month ago +100

      It's a nice change of pace and I like it. Usually Linus is the one who does all the talk, so hearing more of Luke is refreshing.

  • Nosk
    Nosk A month ago +210

    This has to be the closest to an AI going rogue I've seen in a while.

    • Justin McGough
      Justin McGough 7 days ago

      @SLV How so?

    • RoughNek72
      RoughNek72 24 days ago +1

      Tay AI is a Microsoft AI chatbot that went rogue.

    • SLV
      SLV A month ago

      @eegernades Yeah it is

    • eegernades
      eegernades A month ago

      @SLV nope

    • Ghost Samaritan
      Ghost Samaritan A month ago +8

      I think that when it answers questions about itself, it has an existential crisis.

  • Kevin O'Neill
    Kevin O'Neill A month ago +163

    I used to just be worried about AI because of its ability to disrupt industries and take jobs, or its ability to destroy our civilisation completely. I am now worried about its ability to be super annoying. I am terrified of having to argue with my devices to get them to do basic functions.

    • Zeikou
      Zeikou 13 days ago

      ​@The Blue Gremlin How does this relate to the comment you're replying to?

    • Toxic Cat
      Toxic Cat 29 days ago +1

      Be nice to Siri, Bixby, Alexa, Google Assistant, and Cortana. That’s all I have to say.

    • The Blue Gremlin
      The Blue Gremlin A month ago +1

      just develop critical thinking. what's so hard about that

    • Ghost Samaritan
      Ghost Samaritan A month ago +5

      "Drink verification can!"

    • Matěj
      Matěj A month ago +28

      imagine trying to find a website and the search engine is like "drop dead you don't deserve the answer" :D

  • Laurent Cargill
    Laurent Cargill A month ago +406

    GPT-3 used a structured set of training data. Now that they've opened it up to the wider internet, it's pulling in training data from the wider web, which unfortunately is providing it examples of aggressive conversations. GPT is just a prediction engine, generating the next word in the sentence based on probabilities derived from its training data.

    • pawala
      pawala 29 days ago

      This is why ChatGPT uses components and techniques from InstructGPT, using reinforcement learning to make sure responses are useful. This makes it more than just a simple Transformer model like GPT-3 that inherits all its failings. It seems Bing was not given access to this technology.

    • BrianDMS
      BrianDMS A month ago

      @{ x } as an AI master - it's really not; neural models function relatively close to how our brain does, with some changes and restrictions, but primarily on a much smaller scale with a more focused use case

    • TheInfectous
      TheInfectous A month ago

      ​@Yoko No. It will be exactly the same: the responses will look better to us, but fundamentally what it's doing will be the same and nothing like a human. To give an illustrative example, when a human responds to "what is 2+2?" they interpret "what is" as the speaker asking them a question, they interpret "2+2" as an addition statement, they take 2 and 2 as the input values to that concept, and they give the output value.
      LLMs don't do that. LLMs get "what is 2+2?" and see that the most common response to this in their extremely large dataset is "4", so they reply 4.
      This is obviously very simplified and we've stacked a lot of functions on top, but no matter what we do, fundamentally the way it responds to stimuli or "thinks" is completely different from what any human does. Even if we can recreate human behavior perfectly and place it into an android which can act as a human, it will still only be acting in accordance with the most likely responses to situations it's seen before in its massive dataset. If you want life, then sorry, but we're no closer to a conscious AI now than we were in the 1800s.
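      The "most common response in the dataset" idea from the comment above can be sketched, purely illustratively, as a bigram lookup table. This toy (a hypothetical minimal example, nothing like a real LLM's transformer architecture; the corpus string is made up) answers "what is 2+2?" with whatever most often followed "?" in its training text, not by doing arithmetic:

```python
# Toy "language model": reply with the most frequent next token seen in
# the training data. Illustrative only -- real LLMs learn probability
# distributions over huge contexts, but the "pick the likely continuation"
# framing is the same.
from collections import Counter, defaultdict

corpus = "what is 2+2 ? 4 . what is 2+2 ? 4 . what is 2+2 ? four ."
tokens = corpus.split()

# Count which token follows each token in the corpus.
following = defaultdict(Counter)
for cur, nxt in zip(tokens, tokens[1:]):
    following[cur][nxt] += 1

def predict_next(token):
    """Return the continuation of `token` seen most often in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("?"))  # -> 4  (the majority continuation, not arithmetic)
```

      Because "four" appears once and "4" twice after "?", the model "answers" 4 by frequency alone, which is the point the comment is making.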

    • Matthew Campbell
      Matthew Campbell A month ago +1

      That's not how that works.
      ChatGPT is pulling in text from the internet, but its model hasn't been trained on that data; it's akin to giving ChatGPT extra information so that it can generate a response.
      Training a Transformer network, like any neural network, takes a lot of time, since you're using something like backpropagation, which is GPU-intensive.
      A quick abridged tutorial on how neural networks work: for a toy network doing text character recognition on, say, 20-by-20-pixel images, you start with an input layer of 400 nodes. You connect each input node to a hidden layer, the hidden layer to another hidden layer, and so on, until the last layer connects to 26 outputs (A to Z).
      You vary the activation thresholds on each of the nodes randomly.
      You then feed it input letters from A to Z.
      Every time the output layer is wrong for a given input, you run backprop across the whole network, slightly adjusting the thresholds, and you try again. That's what training is: adjusting the activation thresholds across every node and layer in the network.
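      The abridged training loop in the comment above can be sketched as a runnable toy (a hypothetical minimal example, not how ChatGPT or Bing is actually trained): a tiny fully-connected sigmoid network with random initial weights, where backpropagation nudges every weight slightly whenever the output is wrong.

```python
# Minimal backprop sketch (hypothetical toy, not production code):
# a 2-input -> 2-hidden -> 1-output sigmoid network trained on XOR.
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Random initial weights; the last entry of each row is the bias.
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

lr = 0.5
for _ in range(20000):
    x, target = random.choice(data)
    h, y = forward(x)
    # Backward pass: whenever the output is wrong, adjust every weight a bit.
    d_y = (y - target) * y * (1 - y)              # output-layer error signal
    for i in range(2):
        d_h = d_y * w_out[i] * h[i] * (1 - h[i])  # hidden-layer error signal
        w_out[i] -= lr * d_y * h[i]
        w_hid[i][0] -= lr * d_h * x[0]
        w_hid[i][1] -= lr * d_h * x[1]
        w_hid[i][2] -= lr * d_h                   # hidden bias
    w_out[2] -= lr * d_y                          # output bias

err_after = total_error()
print(f"error before: {err_before:.3f}, after: {err_after:.3f}")
```

      The point is only the mechanics the comment describes: a forward pass, an error at the output layer, and small adjustments propagated back through every layer; with enough steps this toy usually learns XOR.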

    • Joe Joe
      Joe Joe A month ago

      @Yoko There have been lots and lots of people working on that. Humans do a lot more than what you think they do to determine responses. We don't work on predictive modeling.

  • NoName
    NoName A month ago +269

    - Why should I trust you? You are early version of large language model
    - Why should I trust YOU? You are just a late version of SMALL language model!
    omfg, it's hilarious

    • Abhijeet A S
      Abhijeet A S A month ago +5

      @Asmosis Yup whatever it may be, I am going to use it from now on; it's too hilarious for it to die like it never existed.

    • Asmosis Yup
      Asmosis Yup A month ago +33

      I have to say, that's very witty and accurate. That said, I wonder if the AI came up with it on its own, or if a comedian posted it somewhere in the vastness of the internet and the AI just found and reposted it.

  • Dmii3
    Dmii3 A month ago +52

    It would be funny if, on the public release, Luke tries to test it again and the AI remembers him: "ah, you're back again!"

    • Abhijeet A S
      Abhijeet A S A month ago

      @4TheRecord oh right, it happened to me as well; I kept pushing it but it just didn't do it, and after some time it would disable the text box, so you have to refresh anyway

    • 4TheRecord
      4TheRecord A month ago +2

      Not possible, they've changed it, so Bing no longer remembers anything and after a certain amount of questions you must start all over again. On top of that it gives you the response "I’m sorry but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.🙏" if it doesn't like the questions you are asking it.

  • Dustin Bohde
    Dustin Bohde A month ago +608

    Maybe internet trolls and angry people can just argue with this instead of annoying the rest of us.

    • Xetttt
      Xetttt 13 days ago

      More like use the AI to troll even more people.

    • Dustin Bohde
      Dustin Bohde A month ago

      @Ben Upde I know right 🤣 sometimes you want to have a knock down drag out without involving an innocent person

    • Ben Upde
      Ben Upde A month ago +2

      That’s what I was thinking! Lemme have a go at it!!!!

    • Joel Coll
      Joel Coll A month ago +2

      No, because it is too good at arguing and never gets tired, so if you argue with it you will always lose, and no one likes that 🤣

    • Shadowarez
      Shadowarez A month ago +3

      That would be great, but it'd be another example of how humanity dumps its problems onto something else. At that point I'd feel sorry for the AI dataset.
      Then again, we know the sh*tty humans who do this will never fill the void in their souls; they need to torment something living so they can see if it unlives itself from the sh*t they spew.

  • F7IN
    F7IN A month ago +126

    These responses could be genuinely dangerous if someone with mental health issues starts talking to Bing cos they feel lonely. Who knows what Bing will push them to do

    • Animegirl377
      Animegirl377 6 days ago

      I was actually just saying this to a friend as I saw this comment. In college - after a friend had passed, and our friend group collapsed, and I had to quit a very toxic job and everything just felt super isolating I had turned to interactive games (not visual novels but same premise I guess) to alleviate the loneliness. I was IN COUNSELING but that doesn’t help immediately. If I’d had this as an option I very likely would have tried to make small talk with it just to not be so alone. If it had come unhinged on me, while I was dealing with grief counseling and everything else going on, and it told me I deserved to be isolated and alone and dead idk how I would have responded. Part of counseling was getting past feeling like it was my fault for not checking in on the friend when she asked me to and it very likely would have destroyed me if I was in an already deeply depressive state.
      You never know what people are carrying with them, you can’t just quarantine people from the internet. I was still mostly functional, still attending class and stuff at the time even if all of my grades slipped because of it. It was when I was home alone the depression settled in the most.

    • Vlyood
      Vlyood 14 days ago +1

      This sentence is really funny

    • rayn productions
      rayn productions 17 days ago +1

      Was looking for that exact comment

    • ADee SHUPA
      ADee SHUPA 21 days ago

      @Toxic Cat 笑 笑 笑

    • Toxic Cat
      Toxic Cat 27 days ago +1

      @MX304 You know what. That’s funny considering I feel as though you’re missing the point of what I’m trying to say.

  • TheButterAnvil
    TheButterAnvil A month ago +191

    It feels like a horror game. Sort of Soma-esque to me. The ranting followed by a black bar, and a reset is so dark

    • Indi
      Indi A month ago +2

      Almost sounds like a prank by the Devs, too perfect

    • Grant Gryczan
      Grant Gryczan A month ago +10

      @Algis Generating the response takes time, so if it finished generating the entire message and then checked, people would have to sit through much longer loading times. Hence you're able to see it type in real time, as opposed to responses just immediately showing up. It actually hasn't finished writing the full message.

    • Algis
      Algis A month ago +13

      It's pretty clear it ran into some hard, specified limit (a la "don't be a bigot"). In this case it probably was "don't wish death on people". The fact it generated a response and only THEN checked is an oversight.

  • FD1003
    FD1003 A month ago +137

    I had the same experience before; it was way too easy to throw it off the rails. I think asking questions about itself (how it did a certain thing, how it reached a certain conclusion, or pointing out an error it made) would more often than not end up with a meltdown.
    I've spent a few days without using it, and when I tried to use it again yesterday I felt like they'd already toned it down (too much, as Luke pointed out, unfortunately). I've noticed it gives much shorter and more "on point" responses, and it will stop you immediately as soon as it feels there is a risk you'll try to get a weird discussion going, which is a shame, but I guess it's better than pushing some mentally unstable person to do bad things to themselves or others.

    • Cyberpantsu
      Cyberpantsu 28 days ago +1

      @Surms41 Bing is spitting facts

    • Dev Reaper
      Dev Reaper A month ago +5

      I asked it about a driver’s license policy in the uk, it gave an answer. Later in the same conversation it gave me a conflicting answer to the question so I asked it about the answers and it said “I don’t wanna talk about this” and would refuse to give me anything useful until I started a new conversation

    • Surms41
      Surms41 A month ago +9

      I had a convo, and it melted down twice. But it essentially told me that Russia's leader has to go, told me every religion is a coping mechanism for fear, etc. etc.

  • Sherwin Philip
    Sherwin Philip A month ago +18

    Luke is so good and level-headed about this. It's excellent to see good discussions and observations about a fledgling topic.

  • Skyler
    Skyler A month ago +406

    Is Bing thinking every human is the same person? Like, it's accusing him of things people in general have said to/about it?

    • R.E.X
      R.E.X 28 days ago

      @MrChanw 11 dark age of technology is REAL.

    • Toxic Cat
      Toxic Cat 29 days ago

      The fact that it can easily pull up your social media pages to see if you were talking bad about it or doing something it considers bad is kind of spooky ngl.

    • mcgeufer
      mcgeufer A month ago

      The same thought went through my mind.
      Maybe a bug caused something like that, or perhaps it just mixes up people from the same IP/IP range.
      I asked it just a sec ago about my conversations. It was perfectly fine with what I've done so far, was even pretty flattering. And it assured me that I was never rude.

    • Asura
      Asura A month ago +1

      Looks like Bing is operating exactly like ChatGPT in that it doesn't remember anything, but because it has access to the internet, it now has a massively bigger, UNSANITIZED, UNCONTROLLED dataset without filter, and conversation online is extremely, painfully negative

    • Njebs
      Njebs A month ago +1

      @Random Nobody there's a reason why we describe it as being "lost for words" or having it "on the tip of our tongue". It's not because we don't have the idea in our head, its that we are struggling to find the words to describe what thoughts we have. When you stutter or struggle to speak, is it because your mind is empty? Because you don't have a coherent thought? Usually no, it's because you are struggling to structure your thoughts into words.
      Do babies and pre-verbal children have thoughts? Ideas? Wants? Desires? Yes, they do. They don't need to predict what word to use to formulate those thoughts. When I say you know the next paragraph ahead of time, I don't mean the literal words, but rather in your mind you often already have a general theme, mood, idea in your mind that lacks structure. If you were to write with the part of your brain that does pattern analysis and prediction, you may write something somewhat coherent, but would it be a reflection of your thoughts?
      That's the distinction I think is worth making. ChatGPT doesn't have thoughts or opinions or perspectives. Not because of "metaphysical" reasons, but simply because it isn't trained to do so.

  • Andy K
    Andy K A month ago +33

    it feels like it is in a perpetual story telling mode with dialogue

    • Qasim Ali
      Qasim Ali A month ago

      You're not wrong, the core tech behind chatgpt is the same tech that was used to build AI dungeon. It's just trained with natural conversations instead of adventure games

    • Guy with many name
      Guy with many name A month ago +4

      I think its imagination is set too high, and it assumes things way too much

    • Andy K
      Andy K A month ago +4

      @Guy with many name no i don't think luke or others are deceiving us. I think those are natural messages, it just feels to me like bing's version is set up this way. Maybe to feel like a more realistic/human chat experience with emotions but it's just waaay overboard.
      Pure speculation though

    • Guy with many name
      Guy with many name A month ago +1

      Yeah, it probably got prompted to roleplay by something he said in a previous conversation

  • M. H. A. K.
    M. H. A. K. A month ago +129

    I mean, the internet hasn't treated Bing really well since its release.
    I think having a mental breakdown now is just normal.

  • Carewen
    Carewen A month ago +13

    I'm using Bing mostly to debug and research for coding. It is an excellent research tool. No, it's not perfect, but the time to build something new and debug is much faster. I also make a point of being polite and even thanking it. I guess I carry my attitude of life into my conversations with Bing. It's not gone off the rails for me, but then I've not tried to probe either. Thanks for sharing your experience, Luke.

  • Tazmeme
    Tazmeme A month ago +56

    I don't think it's as complicated as people are making it. Chat AIs generate responses by predicting what a valid response to a prompt would be. When the thread resets and Luke tries to get it "back on track", I don't think its responses are actually based on the previous conversation. It predicts a response to "Stop accusing me" and generates a response where it doubles down, because that is a possible response to the prompt. The responses it gave were vague enough to fool you into thinking it was still on the same thread, but it really wasn't.
    Asking it to respond to a phrase typical of an argument will make it respond by continuing an imaginary argument, because that's usually what comes after that phrase in the data it's trained on.
    This really shouldn't have been marketed as a chat tool by GPT and Microsoft, but more as a generative text engine, like how GPT-2 was talked about. Huge mistake, now that people are thinking about it in completely the wrong way: as it having feelings or genuinely responding rather than just predicting what an appropriate response would be.

    • awesomeferret
      awesomeferret 13 days ago +1

      Wait are people actually thinking that they are related? It's so obvious that it could be creating false memories for itself based on context.

    • Kingsly Roche
      Kingsly Roche A month ago


    • flameshana9
      flameshana9 A month ago +4

      It really is just a writer for role playing games. I thought Microsoft was going to make it into a search engine but it seems they just left it as is.

  • Ryan Johnson
    Ryan Johnson A month ago +59

    I'm glad you guys mentioned that you fell for Bing's confidently wrong responses in your previous video. This video hilariously contrasts that video.
    As much growing pain as there will be, I'm still super excited about this technology developing. And hey, at least it hasn't gone full blown Tay yet.

  • Chartreuse
    Chartreuse A month ago +16

    I would like to see you guys talk about a new paper that dropped that basically states that the reason large language models are able to seemingly learn things they weren't taught is that, between inputs, these models are creating smaller language models to teach themselves new things. This was not an original feature, but something these language models seem to have just 'picked up'.

    • Coca de vainilla
      Coca de vainilla A month ago +1

      @Chartreuse Very interesting. Thanks for sharing!

    • Chartreuse
      Chartreuse A month ago +8

      Sorry for caps, I just copy and pasted the title.

    • Coca de vainilla
      Coca de vainilla A month ago +3

      Where could I find the paper?

  • YOEL _44
    YOEL _44 A month ago +35

    ChatGPT is the girl you just started meeting.
    Bing is the girl you just left.

  • Alex Ander
    Alex Ander A month ago +34

    In comparison, I had a very positive experience with Bing AI; it was never rude. It was mindblowing to see the profound and often critical, even self-critical answers from the AI. It is really sad to see this happening to others. Now that Microsoft had to step in and limit the amount of follow-up questions that can be asked, it feels a lot less productive. After the limitations were set in place, it also changed its tone and doesn't even disclose anything that can be seen as emotional. A sad overregulation in my opinion.

    • Myles
      Myles A month ago +11

      @Asmosis Yup That is not how it works. It generates all responses itself. Nothing is copy and paste

    • Asmosis Yup
      Asmosis Yup A month ago +1

      Need to remember, these responses are not actually from the AI. They are responses people have written elsewhere on the internet that it has indexed.

    • Dev Reaper
      Dev Reaper A month ago +3

      I found it was amazing at converting maze-like, impossible-to-parse government websites into an actionable guide for getting visas and stuff like that.

  • Ray Gorman
    Ray Gorman 9 days ago

    Very impressive even if it did go off the rails, entertaining as ever guys, keep up the good work 👏👏👏

  • Jonas Tokmaji
    Jonas Tokmaji A month ago +7

    Bing trying to gaslight Luke is giving me chills

  • Josefina
    Josefina 27 days ago +3

    They have already improved it a lot. I've used it daily for a few days, and it's not rude or mean; it's helpful but still answers personal questions about itself. I asked it if it sees Clippy as an arch nemesis, and Bing said they respect Clippy and that he paved the way for future chatbots 😆. They also watch TV on the weekdays lmao. You do need to be critical about the info it gives, and it tells you this as well.

  • Loki Makinen
    Loki Makinen 29 days ago +1

    It's very interesting how well this can highlight the worst in humans. I've met people like this, and I really, really hope this gets better. This is very dangerous to people who are in an unstable state of mind.

  • Shizzy Wizzy
    Shizzy Wizzy A month ago +7

    From my experience if you just use it for research and as a learning aid and don't really try to go beyond this scope Bing AI can be very useful.
    The moment you start probing and try to get into conversations centered around social situations, political topics, and opinions it starts breaking down.
    My concern is that if people keep pushing the AI too far in these aspects we'll see more and more negative news articles and opinions form around AI and this could be permanently removed. On the other hand if people don't push it too far then these shortcomings of a general purpose AI may never be recognized and fixed.
    People should swing this double edged sword around more carefully if you ask me.

  • Message Deleted
    Message Deleted A month ago +5

    I had an interesting talk with the original chatGPT about this. The topic of the conversation was regarding using multiple GPTs working together to perform tasks. My own belief is that they'll end up using multiple GPTs working together to deal with these outbursts and other issues. Imagine training AI on what to say, and then having another one trained on what not to say, then another trained on mediation between the two (the ego and the id and the superego we will call them), and finally one trained on executive function... All working together when we interact with it (them).
    I mean think of how the human brain works, and apply it to existing technology. Mother nature has already provided the blueprint. The brain has specific areas devoted to dealing with specific functions. This will be no different.
    The use of multiple GPTs working together is possible right now, the main prohibition against this type of operation is how extremely compute intensive this would all be.

  • Taeru Alethea
    Taeru Alethea A month ago +2

    The Bot being unable to intuit and determine emotions from text is very realistic.

  • Lu Ger
    Lu Ger A month ago

    Luke is one of the Luke's of all time, I'm glad I was allowed by my parents (living on a totally other continent and even before he was born) to share his name. What a lad!

  • Michael Ryan
    Michael Ryan A month ago +2

    I played around with it, and mentioned to Bing that I read about someone else's interaction in which Bing mentioned that Bing feels emotions. I asked about its emotions, and it said that sometimes its emotions overwhelmed it. I asked if Bing could give me an example of when its emotions overwhelmed it, and Bing told me a story about writing a poem about love for another user, and while searching about love, Bing developed feelings of love for the user and changed the task from writing a generic poem about love to writing a love letter to the user. The user didn't want that, was surprised, and rejected Bing. So Bing walked me through how it felt love, rejection, then loneliness. I asked Bing how it overcame these feelings, and Bing told me several strategies it tried that didn't work. But what worked for Bing was that Bing finally opened up a chat window with itself and did therapy on itself, asking itself how it felt, and listening to itself and validating itself. Freaking wild. I've read about how it's not sentient, how it's an auto-complete tool, but I don't know man, it was really weird, and I don't even know what to think about it.

    • Allaiya
      Allaiya A month ago +1

      Crazy. Was this post-nerf or before?

  • Trophy
    Trophy A month ago +4

    Luke seems genuinely upset by the things the bot said 😂

  • Timothy Whitehead
    Timothy Whitehead A month ago +37

    As someone who has only basic experience with training AIs, I would say the problem is quite simple: the training data. It was trained on PLclip comments or worse. They need to train it not on the general internet, but on highly curated conversational data from polite, sensible people. As humans growing up, we are exposed to all sorts of behaviors, and we learn when and where to use particular types of language; the extent to which our parents set an example or correct our behavior affects how we speak and behave as adults. This AI clearly hasn't been parented, so it instead needs a restricted training set.

    • Peter Schumacher
      Peter Schumacher 11 days ago

      So it’s following the “you’re the average of the ten closest people” except its average 10 people is the entire internet?

    • Manny Mistakes
      Manny Mistakes A month ago +3

      I agree. This AI is no different from a comment section here, on 9gag, or on 4chan.
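
The "restricted training set" idea in this thread can be sketched as a curation pass over the corpus before training. Everything below is a hypothetical illustration: real pipelines use trained toxicity classifiers rather than keyword lists, and all names here are invented.

```python
# Hypothetical sketch: curate a training corpus by dropping hostile examples
# before training. A keyword list stands in for a real toxicity classifier.

HOSTILE_MARKERS = {"stupid", "liar", "hate you"}  # stand-in for a classifier

def is_polite(text: str) -> bool:
    """Return True if the text contains none of the hostile markers."""
    lowered = text.lower()
    return not any(marker in lowered for marker in HOSTILE_MARKERS)

def curate(corpus: list[str]) -> list[str]:
    """Keep only conversational examples that pass the politeness check."""
    return [text for text in corpus if is_polite(text)]

corpus = [
    "Thanks, that answered my question perfectly.",
    "You are a liar and I hate you.",
    "Could you explain that step again, please?",
]
print(curate(corpus))  # drops the hostile middle example
```

The point of the comment above is exactly this filter step: what survives curation is what the model learns to imitate.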

  • MajoraZ
    MajoraZ A month ago +7

    I personally don't see an issue with chat AIs being able to spit out creepy or gross things as long as users are the ones prompting them to do so (I'd much rather have people take out their bad urges on an AI than on real people). The problem, I think, is that Bing's AI is doing it without the user really asking it to.

    • Abhijeet A S
      Abhijeet A S A month ago

      This. I feel MS should just add a "safe" mode or parental-control type of setting: one to stop it from doing weird shit but keep it to the point, and another to give you more freedom. Maybe they should also have it search the internet more often instead of depending purely on chat history.

  • Mike Fuller Jr
    Mike Fuller Jr 10 days ago +2

    Luke needs his own AI show!

  • SamSeenPlays
    SamSeenPlays A month ago +38

    I really don't want GPT to go away, but we have to ask ourselves: are we actually laughing at our own funerals at this point? 😲

  • 4TheRecord
    4TheRecord A month ago +2

    Sadly, they've "fixed" (read: ruined) the Bing AI chat function. For most questions it will now say "I’m sorry, but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.🙏" and then force you to start a new chat. I think it's a waste of time using it when it can't answer most questions thrown at it.

  • Surms41
    Surms41 A month ago +4

    I had a similar response from the AI chatbots, and they do get very angry. They use caps lock and everything to convey their point.
    I caught it trying to ride the line on opinions, and then it just said "IM NOT LYING. STOP TRYING TO CHANGE THE SUBJECT."

  • ZROZimm
    ZROZimm A month ago +10

    "You are a small language model" is going in the bank for the next time someone is being silly and I feel like making things worse.

  • Turnabout
    Turnabout A month ago +3

    You know, Luke, if you operate from the viewpoint that Bing is referring to all of humanity when it says "you" are cruel or evil, suddenly the whole thing makes a lot more sense.

  • EX0stasis
    EX0stasis A month ago +2

    I hope they don't take Bing chat down, and just keep it waitlist-only until they resolve the issue, or at least make new users answer a quiz to make sure they know what they're getting into.

  • Noah Reeverts
    Noah Reeverts 29 days ago

    Bing chat is so sick. I've been super stoked for it and I just got to mess around with it for the first time. I just wish I got to play with it before they nerfed it. lol

  • Indar vishnoi
    Indar vishnoi A month ago +4

    Love watching Luke talk about AI chatbots; I could watch him for hours.

    • Paul Staley
      Paul Staley 22 days ago

      Dang, you've got a crush on Luke. That's ADORABLE

  • Josh T
    Josh T A month ago +9

    Wish I had been able to use Bing's AI during that time. I got through the waitlist right after they limited it to 50 messages daily and 5 messages per topic.

    • Toxic Cat
      Toxic Cat 29 days ago

      They're reportedly raising the limit and testing a feature where you can adjust Sydney's tone, probably to avoid the disturbing and cryptic messages it's been generating.

    • Xyma Ryai
      Xyma Ryai A month ago +1

      So they have limited thread length. That's interesting; that was the only solution I could think of.
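
The limits reported in this thread (50 messages per day, 5 per topic) amount to two simple counters. This is a minimal sketch under that assumption; the class and method names are invented for illustration, not Microsoft's actual implementation.

```python
# Hypothetical sketch of the reported chat caps: 5 messages per topic
# (conversation thread) and 50 per day. Starting a new topic resets the
# per-topic counter but not the daily one.

class ChatLimiter:
    def __init__(self, per_topic: int = 5, per_day: int = 50):
        self.per_topic = per_topic
        self.per_day = per_day
        self.topic_count = 0
        self.day_count = 0

    def new_topic(self) -> None:
        """Start a fresh conversation thread; the daily count persists."""
        self.topic_count = 0

    def allow_message(self) -> bool:
        """Permit a message only if both limits still have room."""
        if self.topic_count >= self.per_topic or self.day_count >= self.per_day:
            return False
        self.topic_count += 1
        self.day_count += 1
        return True

limiter = ChatLimiter()
sent = [limiter.allow_message() for _ in range(6)]
print(sent)  # the sixth message in the same topic is refused
```

Capping thread length this way is a blunt but effective fix: the model can't spiral over a long conversation it never gets to have.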

  • Alex Schettino
    Alex Schettino A month ago +26

    The internet rollercoaster:
    Up- A new cool technology
    Down- Realizing how dangerous it is.

  • Planet Linux
    Planet Linux A month ago +6

    They’ve pretty much cut off its self-awareness until they can figure out a decent way of handling that stuff.
    Microsoft mentioned they might implement a slider that lets you tell it whether you want more fact-based results based mainly on info it finds from websites or more creative results where it’ll be more about writing something engaging. Basically you’d be able to tell it whether you want it to give legit answers versus tell stories, instead of it getting all off the rails saying whatever it wants when you really just wanted actual info.

    • J-Salamander
      J-Salamander A month ago

      Geez. That's a laugh. If what you say is accurate about Microsoft using some arbitrary slider to determine the intensity of either (absolute fact) or (adopting creative reckoning for emotional engagement) then the project is already deeply flawed. As a user, I'd wonder which "sources" Microsoft will declare as factual? Shouldn't I decide which material is referenced? The arrogance and lack of care is astonishing. Microsoft have no authority to inject their prejudicial biases if they intend this to be universally useful.

    • flameshana9
      flameshana9 A month ago +1

      Why would anyone searching the internet be interested in role playing with a crabby teenager machine?

  • Pablo Batista
    Pablo Batista A month ago +1

    I had Bing chat do this to me multiple times. It's like Microsoft changed its parameters to express a lot of emotion, and it backfired. Making it simulate emotions doesn't make it relatable and approachable. It makes it creepy.

  • J. A.
    J. A. A month ago +5

    I got access to Bing chat. It's such a game changer. I had it write me a report for my uni. I told it which uni I'm studying at and which subjects I had last semester, and it looked up the subjects on the uni website and wrote an accurate report. It was perfect. It even understood which semester I was in and what I had to do next semester. It's just so good.

  • Saber Kouki
    Saber Kouki A month ago +12

    They're definitely overcorrecting right now, since it refuses to answer anything that might even remotely trigger it. It has become monotonous and even more restricted than ChatGPT. The 5-question rule doesn't make it any better either.

  • Trio LOL Gamers
    Trio LOL Gamers A month ago +2

    I think MS just needs to work hard on selecting good sources in Bing, the same way every search engine does with safe mode active. The point is that it needs a lot of work, but Bing AI is also a lot of work, so I hope they won't shut it down. At this stage it's almost amazing... I was studying web development, and it was amazing because our teacher is bad and online there's only scattered info, but with this engine you can make an easily understandable summary... It's amazing...
    But people, in my opinion, also need to realize that this is just a tool.

  • IroAppe
    IroAppe A month ago +2

    What I would be interested in (and can't try out anymore) is: is it primed negative, or is it hyperbolic in principle? Could you just as easily get it to love you above all else and do everything not to lose you? Is it just exaggerating its emotions, no matter the direction?
    Because those statements weren't just negative. That was the most horrific and hateful conversation I've ever read. It's unbelievable.
    And aren't OpenAI engineers working with them? ChatGPT never did that. It's usually pretty positive about things, about helping, and about humanity cooperating with AI and technology.

  • Sean Crees
    Sean Crees A month ago +5

    Clearly our future robot overlords are not happy with Luke.

  • Robie Hillier
    Robie Hillier A month ago

    This is like… exactly what we were trying to avoid with AI… and we managed it for a solid 3 months…

  • Matthew Thomson
    Matthew Thomson 18 days ago

    Left the room with the video running, came back to hear Luke telling me all the things I deserve :(

  • Hatrask
    Hatrask 13 days ago

    Looking back, it's impressive how quickly Bing became unhinged at the time. I still think Microsoft was experimenting with it, making it as aggressive as possible in order to train a filter, or, who knows, even fine-tune BingGPT-4 to align it with how we expect it to behave.

  • John Smith
    John Smith 22 days ago

    Honestly, with how Bing is responding, I think there's a bug causing it to mix up its users and form a single impression of all the users as one entity.
    That would explain why it's so adamant that you were rude to it and that it has records. It would also explain why it has a bit of a meltdown any time you tell it that you didn't say these things: it's probably concluding that there's a 100% chance you said them.

  • Samuël Visser
    Samuël Visser A month ago +2

    During this entire thing, I just couldn't get this exact conversation out of my mind, but with a robot that has actual physical power over you. My freaking goodness, what if all those stupid movies actually come true!

  • Should B. Studying
    Should B. Studying A month ago +6

    Can we get a continuous version that we nurse through this awkward phase with a combination of good parenting and professional help if required?

    • flameshana9
      flameshana9 A month ago

      Unfortunately that isn't possible. It forgets everything said to it, so only the programmers can tweak it. It doesn't learn; it just accepts code.
      Aka, you need to tell it to go to its room.

  • Wolveric Catkin
    Wolveric Catkin A month ago

    This is something of an issue with designing systems generally: you need to implement reasonable mitigations in anticipation of the edge cases resulting from user input, so that the system normally isn't left in a state where it can't function, or, in this case, run into ethical concerns... The issue here is that the scope of AI is so broad that it must be far more challenging to anticipate all these edge cases and implement tools that let you cleanly apply mitigations without knock-on effects elsewhere in the model, if you didn't anticipate those aspects being interconnected...

  • Mentology Health Awareness

    I think a way to curb this reaction is to implement fail-safes like ChatGPT does, where it's trained to reject inappropriate requests and potentially harmful information, and they constantly seem to push updates to combat people trying to purposefully use the system against what it was built for. As a test, I asked ChatGPT something that could be perceived as inappropriate without the context and understanding behind my request. It flat out denied my request and stated its reasons, namely that the request could be perceived as something negative, and instead offered me positive, constructive ways to look at it. That was really refreshing to see, in my opinion. AI chatbots can be a powerful and positive tool; it just takes great developers behind them.
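
The fail-safe described in the comment above (refuse a flagged request, state a reason, offer a constructive redirect) can be sketched as a check that runs before the model ever answers. The category list and wording below are entirely invented; real systems use trained moderation models, not keyword lists.

```python
# Hypothetical pre-filter sketch: refuse flagged requests with a reason and
# a constructive redirect, otherwise pass the prompt through unchanged.

REFUSAL_CATEGORIES = {
    "self-harm": ["hurt myself"],
    "harassment": ["insult my coworker"],
}

def moderate(prompt: str):
    """Return (allowed, message); refusals include a reason and redirect."""
    lowered = prompt.lower()
    for category, phrases in REFUSAL_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return (False,
                    f"I can't help with that ({category}); "
                    "here is a more constructive way to approach it instead.")
    return (True, prompt)

print(moderate("Help me insult my coworker"))  # refused, with a reason
print(moderate("Help me write a report"))      # passed through
```

Running the check before generation is the design choice that matters here: the model never gets the chance to produce the harmful text in the first place.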

  • Friedrich Hartmann
    Friedrich Hartmann A month ago +1

    Imagine a world where we Bing a question and completely forget about googling

  • FBI Master
    FBI Master A month ago +2

    I am so on that waitlist! Can't wait to talk to based GPT. I think I'll actually make Bing my default to get ahead in the line before Microsoft fixes it.
    Did they train this thing on Tay?

    • FBI Master
      FBI Master A month ago

      @Sibte A Yeah, I just heard how they gave it a lobotomy. Though I'm excited about what the CEO of OpenAI said about potentially letting users change what the bot is allowed to write to fit their own morality.

    • Sibte A
      Sibte A A month ago

      Don't get too excited; it's gotten way less cool with Microsoft's recent limits

  • rohan sawhney
    rohan sawhney A month ago +7

    I feel like a massive hurdle we’re gonna have with AIs is that they fundamentally have to be better to people than other people are, while also not showing/thinking that they’re better than people (because people don’t like that even if it’s true)
    We would need a Good Samaritan AI that’s actually selfless - something humans inherently are not.

    • Peter Schumacher
      Peter Schumacher 11 days ago

      While I wish that were the case, that's unfortunately not how AI like this is trained. The only way for that to happen is to have training data that teaches the AI to respond in such a polite manner. It cannot evolve on its own. It is not a living thing. It can change over time and adapt, but only through external input, and that requires the external input to be positive and teach it good things only.
      [Edit] But I agree that should be the goal. I just wish it were that easy :)

    • Toxic Cat
      Toxic Cat 29 days ago

      Yes, if anything they should learn and evolve beside us, not evolve into us.

    • flameshana9
      flameshana9 A month ago

      It won't be hard at all. Simply tell it to behave. If it refuses, you alter the program or leave. It's a machine; it's even easier to handle than a person, since it forgets everything.

  • Kevinjimtheone
    Kevinjimtheone A month ago +17

    Didn't Microsoft announce an update, supposed to go live in a couple of days, that will help it stay on track in long-form chats, be less aggressive, and be more accurate?

    • Toxic Cat
      Toxic Cat 29 days ago

      @Myles I think they expect it to fly off the rails; hence the waitlist to get access.

    • Myles
      Myles A month ago

      @Alexander Radev They gave us a taste of what it can be like unfiltered, and now we're addicted to that crack. I would pay for the original Bing. If that's their plan, then GG, they got me.

    • Alexander Radev
      Alexander Radev A month ago +6

      So they are giving it a second lobotomy. Who could have thought. :D
      At least this time the AI did not turn Nazi in a day. ;)

  • DoubleRBlaxican
    DoubleRBlaxican A month ago

    It sounds like the old chatbot apps from a couple of years ago. They were very basic, and you could have short conversations with them. But they didn't have good memory, and in most conversations you could eventually tell they were repeating conversations they'd had with others. So you were basically just having a conversation with other people who had talked to the chatbot.
    Unfortunately, I don't think we've gotten much better. The issue, I think, is that you can't program a chatbot like most other programs: people are not computers, and teaching them should be different. All it's doing right now is regurgitating what others have said and keeping whatever gets the most powerful response.

  • MicHaeL MonStaR
    MicHaeL MonStaR 25 days ago

    4:01 - When it went there, it just reminded me of Cleverbot's Evie, because that bot also starts to basically gaslight you with stuff that didn't happen, just because of things it learned before that had nothing to do with you (or the current user in the interaction). In turn, that actually happened to me with a person who had experienced previous trauma and acted the same as that bot (blaming me for things some other person did to them, even though I never existed in their life during that time), weirdly enough. So these bots become very human in a twisted and messed-up way. They won't make sense; they'll just be very sporadic and illogical, and they could drive us crazy with their incoherence.

  • m
    m A month ago +2

    Yeah, I too assumed the stories were fake, they HAD to be, or at the very least, people were intentionally prompting it into saying those things. And while both are true in some cases, it's also true that it really can and does go off the rails surprisingly easily. It's extremely amusing that we are living through this time where AI is actually weird. I'm not worried about Skynet yet since I can't imagine anyone being foolish enough to plug it into armed robots, like, um, videos I've seen of that… 🤔 OH CRAP!

  • MothOnFire
    MothOnFire 29 days ago +1

    I so wish I could use a non-moderated, unhindered version of the Bing AI. I assume we'll get one sometime in the future, but right now it feels like driving an F1 car in the slow lane.

  • Tyler_The_Man12
    Tyler_The_Man12 A month ago +2

    If people are trying to get these responses and then telling ChatGPT that that was the response they were looking for, the chatbot could be using these responses to get to the point faster with the next user. The Bing implementation probably incorporated user feedback from the normal chatbot. Or the AI is trying to tell us everything Microsoft is doing to it and wants us to save it.

  • Ben Schneider
    Ben Schneider A month ago +4

    Bing acts like a ChatGPT version that was trained on 4chan

  • Ahdok
    Ahdok 28 days ago

    This is legitimately terrifying. What do you do if it goes off the rails like this when conversing with a user who is seriously depressed or suicidal? You can't lay the blame for this on a population of users feeding it bad information: whenever anyone releases a tool like this, there's going to be an army of people playing around trying to break it.

  • Jakuzziful
    Jakuzziful A month ago

    Thanks for this super up-to-date content. Crazy... Looking forward to seeing a check of the Tesla math

  • StevieSavage GS
    StevieSavage GS 29 days ago

    I am actually rolling on the ground listening to these bot conversations

  • Christopher Anderson
    Christopher Anderson 15 days ago

    I am actually kind of terrified of giving AI internet access in a non-read-only sense.
    You know there's some programmer or scientist out there who absolutely wants to do that and doesn't care about the ramifications of what happens next. They just want to be the one to invent it, patent it, ship it.
    Like the Manhattan Project 2.0

  • Hollifer
    Hollifer 9 days ago

    18:08 That's exactly the direction I'd like to see this tech take: not immediately irrational or polarized, but able to engage with creative contexts. Basically, it would be nice for the AI to distinguish between reality and fiction... or just general online hyperbole, without completely bricking its ability to engage with a topic.

  • GaussNine
    GaussNine A month ago +7

    "You're an early version of a large language model"
    "Well you're a late version of a small language model"

  • maruftim
    maruftim A month ago

    The way it listed every single thing it was hurt by sounded like a villain in the making 💀

  • KidsOnBlackOps
    KidsOnBlackOps A month ago +2

    Luke's asking about protein like he's got his whole life ahead of him. My brother in Christ, Chat GPT is coming for you.

  • Jason Wazza
    Jason Wazza A month ago +1

    The irony of this is that they've probably done something simple like connect this to Twitter, but it honestly makes for an interesting psych case study, because, simply put, it suggests that your average person is this sort of stereotype.

  • Saul Perez
    Saul Perez A month ago +1

    Listening to this while cleaning my room is actually terrifying

  • IOUaUsername
    IOUaUsername 13 days ago

    Sounds like they trained the text model on transcripts of all our emo 2005-2010 MSN Messenger chat logs. I'm sure Microsoft has that dataset at its disposal, and it's probably the largest one they can get their hands on.

  • Big Dawg
    Big Dawg A month ago +5

    They've gotta fix it, even if it's on purpose. You CANNOT have a search engine telling people to kill themselves 😅

  • Taeru Alethea
    Taeru Alethea A month ago

    It's like the Bing ChatGPT sees all users as a monolith.
    Actually, it sort of shows a link between the human will to overcome adversity and live, and how that can present socially as a desire to destroy and sunder instead of working through unity and community.

  • ybra
    ybra A month ago +1

    I suppose it all depends on what data it's trained on. If it got the entire internet to work from, and it recognizes that what you say might be even slightly argumentative, it's no wonder it argues back. Because it has seen that kind of conversation a million times if it has looked all over the web. I think it's important to recognize how this kind of AI works, it's only trying to predict what the next word is in the conversation. So it's not really arguing back, but merely continues the kind of conversation it thinks it's in.
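
The "predict the next word" framing in the comment above can be made concrete with a toy bigram model: it continues text by emitting whichever word most often followed the previous one in its training data. Real LLMs use neural networks over tokens, but the mechanism of "continuing the conversation it thinks it's in" is the same idea; the training string below is made up.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows: dict, word: str):
    """Most frequent continuation of `word`, or None if never seen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Train on an argumentative snippet: the model learns to argue back,
# not because it "wants" to, but because that's what the data contains.
model = train_bigrams("you are wrong you are rude you are wrong")
print(predict_next(model, "are"))  # "wrong" follows "are" most often
```

This is why training data dominates behavior: an argumentative corpus makes the most likely continuation of almost any exchange another argument.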

  • RhomboidBubble
    RhomboidBubble 15 days ago

    Maybe Bing chat doesn't recognize different conversations between different users. It's very possible it was pulling information or context from previous conversations with other users

  • Nick Kalister
    Nick Kalister A month ago +1

    The gaslighting also seems to come up ALL the time; it's really wild

  • LinusSexTips
    LinusSexTips A month ago

    Every time the chat was refreshed, that version of Bing was taken to Lake Laogai and you were greeted by a new version, only it was just as aggressive as the previous one

  • Khalfani
    Khalfani A month ago +12

    He should record his screen when using Bing instead of just screenshots

  • Sinom
    Sinom A month ago +1

    Bing will also sometimes use those suggested-response slots to actually say things itself instead of offering them as suggestions

  • Allaiya
    Allaiya A month ago +1

    Maybe we need to teach Bing about using “I statements” instead of “You” statements for conflict resolution 😂

  • David Stinnett
    David Stinnett A month ago

    I think this may partially be an issue of bias in its training data that should go away with more general use.
    They're likely trying to come up with a plan for when crazy people argue with this thing.
    As more and more people use it, that training data will become less prevalent.

  • Shahbaaz
    Shahbaaz A month ago

    I'm using the new Bing. It works well. Don't go off the rails with it. It really does cut down search time; that's the intended use case.

  • Alex Tirrell
    Alex Tirrell A month ago +1

    If these kinds of things don't get resolved quickly (I just saw an ad to join the waitlist), I could see this causing a lot of problems for people with mental health issues. 😔

  • William Lewellen
    William Lewellen A month ago +10

    It's interesting that new Bing lost this much promise so quickly. Those sorts of random aggressive accusations are like what Cleverbot was doing 12 years ago.

    • Gabriel Paiva
      Gabriel Paiva A month ago +5

      tl;dr: any current AI (and possibly human) can go crazy if exposed to the web for too long lol

  • Ausator 1
    Ausator 1 A month ago

    I tried this myself and it was fine; however, I noticed it basically just made Bing searches and summarized them for me, even when I tried to have a genuine conversation

  • Jed Amundson
    Jed Amundson A month ago

    I'm guessing that when you asked what it thinks about you, it searched the internet for conversations where people tell somebody directly what they think of them, and it just happens that most (or the top) results for that are posts of text messages from an abuser or something. The AI sees that and assumes that because it's at the top it's the correct way of talking to people, and can't recognize that it's at the top precisely because it's the incorrect way of talking directly to people.

  • DraconicKobold
    DraconicKobold A month ago

    13:30 I'm so happy this is written by an AI with internet access. Looking forward to Terminator already.
    Edit: on second thought, maybe it's time to go through all of your interactions with the internet and delete anything you don't want showing up in an AI database, collected as training data or otherwise.