Seemingly trivial things that intrigue you
Comments
-
orraloon said:
BBC R2 been playing lots of Queen tracks this evening. What got me intrigued is hearing an early live on t'BBC version of We Will Rock You then immediately followed by the (rather different) version we all know (and love 😊). Takes me back to ma yoof. Old Grey Whistle Test, Seven Seas of Rhye... happy days.
I suspect that the greatest musicians, irrespective of genre, are the ones who are/were even better live than in their most famous recordings.
-
The Universe and things.
-
An interesting take on why the likes of ChatGPT and Midjourney can produce such good facsimiles of writing and photographs, but still get such basic things wrong.
https://medium.com/the-conversation/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-llms-dont-know-what-they-re-saying-856c114529f6
-
rjsterry said: An interesting take on why the likes of ChatGPT and Midjourney can produce such good facsimiles of writing and photographs, but still get such basic things wrong.
I used it once in anger. I was impressed by how it gave me a comprehensive answer covering everything I needed. I was unimpressed, though, when I discovered the answer was made up and not factually correct. It seems it does this.
-
TheBigBean said: I was unimpressed, though, when I discovered the answer was made up and not factually correct. It seems it does this.
Jose Been used it on the Brabantse Pijl the other day and it claimed Lutsenko had won Strade Bianche and Amstel, which she confirmed he definitely hasn't.
-
Pross said: Jose Been used it on the Brabantse Pijl the other day and it claimed Lutsenko had won Strade Bianche and Amstel, which she confirmed he definitely hasn't.
There was an article in the Guardian where they explained that it just gives a plausible answer, not necessarily a factually correct one. Given that Google lost so much value over exactly this, it's a bit surprising that ChatGPT is no better.
-
TheBigBean said: There was an article in the Guardian where they explained that it just gives a plausible answer, not necessarily a factually correct one.
Doesn't it just suck up whatever it finds on the internet, including the inaccuracies, and put it into words?
-
Pross said: Doesn't it just suck up whatever it finds on the internet, including the inaccuracies, and put it into words?
Couldn't it cross-reference, though? Six the same, one different...
Weather modelling does this now.
-
I've said it before: I trust searching the internet for facts more than my own memory.
-
No, sorry, nice try, we're still doomed.
-
focuszing723 said: Couldn't it cross-reference, though? Six the same, one different...
I'm sure it does, and it probably gets things right most of the time, but I don't think anyone needs to fear it writing A*-graded essays for schoolkids quite yet.
-
Pross said: I'm sure it does, and it probably gets things right most of the time, but I don't think anyone needs to fear it writing A*-graded essays for schoolkids quite yet.
Would you trust your own memory more than cross-referencing the internet?
So "quite" is the key word here.
-
focuszing723 said: Couldn't it cross-reference, though? Six the same, one different... Weather modelling does this now.
That's not how it works. It's studying how words are arranged; it has no real understanding of what any of them mean.
Similarly with images: it is creating arrangements of pixels similar to those it has been trained on. It doesn't understand any part of the three-dimensional world that its images appear to depict.
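To illustrate what "studying how words are arranged" means, here is a minimal, purely illustrative sketch: a toy bigram generator, nothing like the scale or architecture of a real LLM, trained on a few made-up sentences. It happily produces fluent-looking claims with no notion of whether they are true.

```python
import random
from collections import defaultdict

# A tiny made-up training text - the model only ever sees word order, never facts.
corpus = (
    "lutsenko won the tour of oman . "
    "vino won amstel gold . "
    "iglinsky won strade bianche . "
    "lutsenko rode strade bianche ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    """Walk the bigram table, always picking a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("lutsenko"))
# Can easily print "lutsenko won strade bianche ." - fluent word order,
# but the model has no way of knowing whether that ever happened.
```
-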
Pross said: Doesn't it just suck up whatever it finds on the internet, including the inaccuracies, and put it into words?
I don't think so. If you read the article RJS posted, it is simply trying to connect words together in a way that might make sense.
For example, I'd be intrigued if you asked it to tell you about WvA's win at P-R. I'd imagine it would give you a detailed response despite it not having happened.
-
Pross said: I'm sure it does, and it probably gets things right most of the time, but I don't think anyone needs to fear it writing A*-graded essays for schoolkids quite yet.
Someone asked it to submit answers for the ARB exams and it achieved a pass. I think we will need to think more carefully about how we set exam questions.
-
rjsterry said: Someone asked it to submit answers for the ARB exams and it achieved a pass.
The irony is that all architects rely heavily on computers now.
-
rjsterry said: Someone asked it to submit answers for the ARB exams and it achieved a pass. I think we will need to think more carefully about how we set exam questions.
Should be relatively straightforward to use AI to check if AI wrote it.
-
rick_chasey said: Should be relatively straightforward to use AI to check if AI wrote it.
You could use any number of different strains of AI to check whatever you like, and use the percentage that agree to gauge the accuracy. The difference from humans doing this is the speed, and also the lack of bias.
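For what it's worth, a rough sketch of that cross-checking idea: several models answer the same question and the share that agree is used as a crude confidence score. Everything here (model names, canned answers) is invented for illustration; real API calls would replace the stub.

```python
from collections import Counter

def ask_model(model_name: str, question: str) -> str:
    # Stub standing in for a call to each separate model.
    canned = {
        "model-a": "Rider X",
        "model-b": "Rider X",
        "model-c": "Rider Y",  # one dissenting answer
    }
    return canned[model_name]

def cross_check(question: str, models: list[str]) -> tuple[str, float]:
    """Ask every model, take the majority answer, report the agreement level."""
    answers = [ask_model(m, question) for m in models]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(answers)

answer, agreement = cross_check("Who won the race?", ["model-a", "model-b", "model-c"])
print(f"{answer} (agreement: {agreement:.0%})")  # Rider X (agreement: 67%)
# Caveat: agreement only shows the models concur, not that they are right -
# models trained on the same data can all agree on the same wrong answer.
```
-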
focuszing723 said: The difference from humans doing this is the speed, and also the lack of bias.
AI is as biased as the humans who created it. Don't be fooled.
https://metro.co.uk/2017/07/13/racist-soap-dispensers-dont-work-for-black-people-6775909/
-
I mean Christ! An AI architect wouldn't come up with that $h1t.
-
Oh, I like that.
-
Pross said: Jose Been used it on the Brabantse Pijl the other day and it claimed Lutsenko had won Strade Bianche and Amstel, which she confirmed he definitely hasn't.
I know people who would give a BS answer like that, assuming that if you ask, you don't know. Vino won Amstel and Iglinsky won Strade Bianche; they are Kazakhs, and so is Lutsenko.
Sounds about right…
-
focuszing723 said: The irony is that all architects rely heavily on computers now.
Have done for decades. Also missing the point slightly: these are essay questions on professional conduct and the like; ChatGPT wasn't asked to design anything.
-
focuszing723 said: The difference from humans doing this is the speed, and also the lack of bias.
Language-based AI unfortunately has exactly the same biases as the data it's trained on.
-
rick_chasey said: Should be relatively straightforward to use AI to check if AI wrote it.
Apparently not, at least on image creation. Again, it has no external frame of reference. It's like someone who has learnt about the world through watching TV but never been outside.
-
Stop disagreeing with me, you know I'm right and you're wrong. Just accept Humanity is doomed and have some cake to pacify reality.
-
focuszing723 said: Just accept Humanity is doomed and have some cake to pacify reality.
Late Sunday night and you are on one. Time to reduce the salt intake again.
-
focuszing723 said: The irony is that all architects rely heavily on computers now.
I suspect that, as in my related field, there is a problem with over-reliance: people who don't understand the theory, or how to do the work manually, just accept whatever comes out of the software. I've regularly picked up work that I can see at a glance isn't right and get "that's what the software said" responses.