Seemingly trivial things that intrigue you


Comments

  • briantrumpet
    briantrumpet Posts: 20,005
    orraloon said:

    BBC R2 been playing lots of Queen tracks this evening. What got me intrigued is hearing an early live-on-t'BBC version of We Will Rock You, immediately followed by the (rather different) version we all know (and love 😊). Takes me back to ma yoof. Old Grey Whistle Test, Seven Seas of Rhye... happy days.


    I suspect that the greatest musicians, irrespective of genre, are the ones who are/were even better live than in their most famous recordings.
  • focuszing723
    focuszing723 Posts: 8,062
    edited April 2023
    The Universe and things.
  • rjsterry
    rjsterry Posts: 29,341
    An interesting take on why the likes of ChatGPT and Midjourney can produce such good facsimiles of writing and photographs, but still get such basic things wrong.

    https://medium.com/the-conversation/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-llms-dont-know-what-they-re-saying-856c114529f6
    1985 Mercian King of Mercia - work in progress (Hah! Who am I kidding?)
    Pinnacle Monzonite

    Part of the anti-growth coalition
  • TheBigBean
    TheBigBean Posts: 21,756
    rjsterry said:

    An interesting take on why the likes of ChatGPT and Midjourney can produce such good facsimiles of writing and photographs, but still get such basic things wrong.

    https://medium.com/the-conversation/it-takes-a-body-to-understand-the-world-why-chatgpt-and-other-llms-dont-know-what-they-re-saying-856c114529f6

    I used it once in anger. I was impressed by how it gave me a comprehensive answer covering everything I needed. I was unimpressed, though, when I discovered the answer was made up and not factually correct. It seems it does this.
  • Pross
    Pross Posts: 43,396

    I used it once in anger. I was impressed by how it gave me a comprehensive answer covering everything I needed. I was unimpressed, though, when I discovered the answer was made up and not factually correct. It seems it does this.
    Jose Been used it on the Brabantse Pijl the other day, and it claimed Lutsenko had won Strade Bianche and Amstel, which she confirmed he definitely hasn't.
  • TheBigBean
    TheBigBean Posts: 21,756
    Pross said:

    Jose Been used it on the Brabantse Pijl the other day, and it claimed Lutsenko had won Strade Bianche and Amstel, which she confirmed he definitely hasn't.
    There was an article in the Guardian explaining that it just gives a plausible answer, not necessarily a factually correct one. Given that Google lost so much value over exactly this, it's a bit surprising that ChatGPT is no better.
  • Pross
    Pross Posts: 43,396

    There was an article in the Guardian explaining that it just gives a plausible answer, not necessarily a factually correct one. Given that Google lost so much value over exactly this, it's a bit surprising that ChatGPT is no better.
    Doesn't it just suck up whatever it finds on the internet, inaccuracies included, and put it into words?
  • focuszing723
    focuszing723 Posts: 8,062
    Pross said:

    Doesn't it just suck up whatever it finds on the internet, inaccuracies included, and put it into words?
    Couldn't it cross-reference, though? Six the same, one different...

    Weather modelling does this now.
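To make the "six the same, one different" idea concrete, here is a minimal sketch of cross-referencing several models and keeping the majority answer, much like an ensemble weather forecast. The model names and the `ask_model` stub are hypothetical stand-ins for whatever APIs you would actually query.

```python
from collections import Counter

def ask_model(model_name: str, question: str) -> str:
    """Hypothetical stand-in for a call to a real LLM API.
    Here it just returns canned answers so the voting logic can run."""
    canned = {
        "model-a": "Maxim Iglinskiy",
        "model-b": "Maxim Iglinskiy",
        "model-c": "Alexey Lutsenko",   # the odd one out
        "model-d": "Maxim Iglinskiy",
    }
    return canned.get(model_name, "no idea")

def cross_referenced_answer(question: str, models: list[str]) -> tuple[str, float]:
    """Ask several models the same question and keep the majority answer,
    plus the share of models that agreed ("six the same, one different")."""
    answers = [ask_model(m, question) for m in models]
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes / len(answers)

answer, agreement = cross_referenced_answer(
    "Who won Strade Bianche in 2010?",
    ["model-a", "model-b", "model-c", "model-d"],
)
print(f"{answer} ({agreement:.0%} agreement)")   # Maxim Iglinskiy (75% agreement)
```

The catch, as comes up later in the thread, is that agreement only measures consistency, not truth: models trained on much the same data can all be confidently wrong in the same way.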
  • focuszing723
    focuszing723 Posts: 8,062
    I've said it before: I trust searching the internet for facts more than my own memory.
  • focuszing723
    focuszing723 Posts: 8,062
    No, sorry, nice try, we're still doomed.
  • Pross
    Pross Posts: 43,396

    Couldn't it cross-reference, though? Six the same, one different...

    Weather modelling does this now.
    I'm sure it does, and it probably gets things right most of the time, but I don't think anyone needs to fear it writing A*-graded essays for school kids quite yet.
  • focuszing723
    focuszing723 Posts: 8,062
    Pross said:

    I'm sure it does, and it probably gets things right most of the time, but I don't think anyone needs to fear it writing A*-graded essays for school kids quite yet.
    Would you trust your own memory more than cross-referencing the internet?

    So "quite" is the key word here.
  • rjsterry
    rjsterry Posts: 29,341

    Couldn't it cross-reference, though? Six the same, one different...

    Weather modelling does this now.
    That's not how it works. It's studying how words are arranged; it has no real understanding of what any of them mean (there's a toy sketch of the idea below).

    Similarly with images, it is creating arrangements of pixels that are similar to those it has been trained on. It doesn't understand any part of the three-dimensional world that its images appear to depict.
    1985 Mercian King of Mercia - work in progress (Hah! Who am I kidding?)
    Pinnacle Monzonite

    Part of the anti-growth coalition
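As a toy illustration of "studying how words are arranged", here is the smallest possible next-word model: it just counts which word follows which in a tiny made-up corpus and then samples. Real systems like ChatGPT are vastly more sophisticated, but the basic point holds: the output is scored on how familiar the word order looks, not on whether the sentence is true.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which,
# then generate text by sampling. It never knows whether a sentence
# is true -- only whether the word order looks familiar.
corpus = (
    "lutsenko won the tour of oman . "
    "van aert won strade bianche . "
    "vinokourov won amstel gold race . "
    "iglinskiy won strade bianche . "
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])   # pick a plausible next word
        out.append(word)
    return " ".join(out)

print(generate("lutsenko"))
# e.g. "lutsenko won strade bianche ." -- fluent, and confidently wrong.
```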
  • TheBigBean
    TheBigBean Posts: 21,756
    Pross said:

    Doesn't it just suck up whatever it finds on the internet, inaccuracies included, and put it into words?
    I don't think so. If you read the article RJS posted, it is simply trying to connect words together in ways that might make sense.

    For example, I'd be intrigued if you asked it to tell you about WvA's win at P-R. I'd imagine it would give you a detailed response despite it not having happened.
  • rjsterry
    rjsterry Posts: 29,341
    Pross said:

    I'm sure it does, and it probably gets things right most of the time, but I don't think anyone needs to fear it writing A*-graded essays for school kids quite yet.
    Someone asked it to submit answers for the ARB exams and it achieved a pass. I think we will need to think more carefully about how we set exam questions.
    1985 Mercian King of Mercia - work in progress (Hah! Who am I kidding?)
    Pinnacle Monzonite

    Part of the anti-growth coalition
  • focuszing723
    focuszing723 Posts: 8,062
    rjsterry said:

    Someone asked it to submit answers for the ARB exams and it achieved a pass. I think we will need to think more carefully about how we set exam questions.
    The irony is all architects rely heavily on computers now.
  • rick_chasey
    rick_chasey Posts: 75,661
    rjsterry said:

    Someone asked it to submit answers for the ARB exams and it achieved a pass. I think we will need to think more carefully about how we set exam questions.
    Should be relatively straightforward to use AI to check if AI wrote it.
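One common version of that idea is to score how predictable a text is to a language model, on the theory that machine-written text tends to be unusually predictable. A rough sketch using the public GPT-2 model via the Hugging Face transformers and PyTorch packages (assumed installed) might look like this; the approach is crude, any threshold would be arbitrary, and, as noted a few posts down, detectors built this way are unreliable in practice.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Crude "was this written by a model?" signal: perplexity under GPT-2.
# Lower perplexity = more predictable text, which *sometimes* hints at
# machine generation. Treat this as a sketch, not a detector.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy per token
    return float(torch.exp(loss))

sample = "Alexey Lutsenko won Strade Bianche and the Amstel Gold Race."
print(perplexity(sample))
# One could flag anything below some tuned threshold for human review,
# but paraphrasing or light editing defeats this kind of check easily.
```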
  • focuszing723
    focuszing723 Posts: 8,062

    Should be relatively straightforward to use AI to check if AI wrote it.
    You could use x number of different strains of AI to check whatever, and a certain percentage could be used to gauge the accuracy. The difference from humans doing this is the speed, and also the lack of bias.
  • rick_chasey
    rick_chasey Posts: 75,661

    You could use x number of different strains of AI to check whatever, and a certain percentage could be used to gauge the accuracy. The difference from humans doing this is the speed, and also the lack of bias.
    AI is as biased as the humans who created it. Don’t be fooled.


    https://metro.co.uk/2017/07/13/racist-soap-dispensers-dont-work-for-black-people-6775909/

  • focuszing723
    focuszing723 Posts: 8,062
    edited April 2023
    You could use x number of different strains of AI to check whatever, and a certain percentage could be used to gauge the accuracy. The difference from humans doing this is the speed, and also the lack of bias.

  • focuszing723
    focuszing723 Posts: 8,062

    I mean Christ! An AI architect wouldn't come up with that $h1t.
  • orraloon
    orraloon Posts: 13,227

  • focuszing723
    focuszing723 Posts: 8,062
    Oh, I like that.
  • secretsqirrel
    secretsqirrel Posts: 2,077
    Pross said:

    Jose Been used it on the Brabantse Pijl the other day, and it claimed Lutsenko had won Strade Bianche and Amstel, which she confirmed he definitely hasn't.
    I know people who would give a BS answer like that, assuming that if you ask, you don't know. Vino won Amstel and Iglinsky won Strade Bianche; they are Kazakhs and so is Lutsenko.
    Sounds about right …
  • rjsterry
    rjsterry Posts: 29,341
    edited April 2023

    The irony is all architects rely heavily on computers now.
    Have done for decades. Also missing the point slightly: these are essay questions on professional conduct and the like; ChatGPT wasn't asked to design anything.
    1985 Mercian King of Mercia - work in progress (Hah! Who am I kidding?)
    Pinnacle Monzonite

    Part of the anti-growth coalition
  • rjsterry
    rjsterry Posts: 29,341

    You could use x number of different strains of AI to check whatever, and a certain percentage could be used to gauge the accuracy. The difference from humans doing this is the speed, and also the lack of bias.

    Language-based AI unfortunately has exactly the same biases as the data it's trained on (see the tiny illustration below).
    1985 Mercian King of Mercia - work in progress (Hah! Who am I kidding?)
    Pinnacle Monzonite

    Part of the anti-growth coalition
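A deliberately tiny, made-up illustration of that point: if the text a model learns from is skewed, the skew comes straight back out, because nothing in the training maths knows or cares that it is a bias. This is not how any production model is built; it just shows where the bias lives.

```python
from collections import Counter

# Miniature, deliberately skewed "training data". A model that learns
# only from word statistics inherits whatever slant the text has.
corpus = [
    "the engineer fixed his code",
    "the engineer debugged his build",
    "the engineer shipped his release",
    "the nurse checked her charts",
]

def pronoun_after(corpus: list[str], role: str) -> Counter:
    """Count the word two places after `role` (the pronoun in these sentences)."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words[:-2]):
            if w == role:
                counts[words[i + 2]] += 1
    return counts

print(pronoun_after(corpus, "engineer"))  # Counter({'his': 3})
print(pronoun_after(corpus, "nurse"))     # Counter({'her': 1})
# The 3:1 slant in the data is reproduced exactly in what the model "knows".
```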
  • rjsterry
    rjsterry Posts: 29,341

    Should be relatively straightforward to use AI to check if AI wrote it.
    Apparently not, at least for image creation. Again, it has no external frame of reference. It's like someone who has learnt about the world through watching TV but has never been outside.
    1985 Mercian King of Mercia - work in progress (Hah! Who am I kidding?)
    Pinnacle Monzonite

    Part of the anti-growth coalition
  • focuszing723
    focuszing723 Posts: 8,062
    Stop disagreeing with me, you know I'm right and you're wrong. Just accept Humanity is doomed and have some cake to pacify reality.
  • webboo
    webboo Posts: 6,087

    Stop disagreeing with me, you know I'm right and you're wrong. Just accept Humanity is doomed and have some cake to pacify reality.

    Late Sunday night and you are on one. Time to reduce the salt intake again.
  • Pross
    Pross Posts: 43,396

    The irony is all architects rely heavily on computers now.
    I suspect that, as in my related field, there is a problem with over-reliance: people who don't understand the theory, or how to do the work manually, just accept whatever comes out of the software. I've regularly picked up work that I can see at a glance isn't right, and I get "that's what the software said" responses.