
The Silver Fern

Will our kids be immortal or extinct?

Off Topic
73 Posts 20 Posters 3.5k Views
  • Baron Silas Greenback
    #63

    <blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566866" data-time="1458783223">
    <div>
    <p> </p>
    <p> </p>
    <p>Is it possible to be that intelligent, but not to consider moral questions?</p>
    </div>
    </blockquote>
    <p> </p>
    <p> </p>
    <p>Not only do I think it is possible, in my opinion (and that of most AI researchers) it is very, VERY likely.</p>
    <p> </p>
    <p>Well, I guess it could consider moral questions without making decisions based on human morality; morality would be an abstract term to it. If it gets so far up the intelligence ladder from us... then why would it take a humanistic view of morality? Any more than we consider the ethical code of ants?</p>

  • Chris B.
    #64

    <div>
    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566872" data-time="1458784045">
    <div>
    <p>Not only do I think it is possible, in my opinion (and that of most AI researchers) it is very, VERY likely.</p>
    <p> </p>
    <p>Well, I guess it could consider moral questions without making decisions based on human morality; morality would be an abstract term to it. If it gets so far up the intelligence ladder from us... then why would it take a humanistic view of morality? Any more than we consider the ethical code of ants?</p>
    </div>
    </blockquote>
    <p> </p>
    <p>I'm not sure whether the first is necessarily a good assumption, and it will likely make a significant difference to outcomes.</p>
    <p> </p>
    <p>In the second, I largely agree - one major difference from the ants is that at least the ASI will be able to read our codes of ethics and decide which bits - if any - might be relevant to it. </p>
    <p> </p>
    <p>On the whole, Henry, Sam and I agree that it would be good to try to interest the ASI in ethics.  🙂 </p>
    </div>

  • antipodean
    #65

    <blockquote class="ipsBlockquote" data-author="Chris B." data-cid="566861" data-time="1458781305">
    <div>
    <p>I didn't find some of the author's analysis particularly convincing. e.g.</p>
    <p> </p>
    <p>"So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal."</p>
    <p> </p>
    <p>We haven't really established any such thing.</p>
    </div>
    </blockquote>
    <p> </p>
    <p>Agreed. I don't mind that you've constructed a case to make it plausible and, having done so, continue with your line of argument - but let's not say you've established anything other than a presumption.</p>

  • infidel
    #66

    <p>March 24, 2016 3:09 pm</p>
    <div>Microsoft pulls Twitter bot Tay after racist tweets</div>
    <p> </p>
    <div>Microsoft has been forced to take down an artificially intelligent “chatbot” it has set loose on Twitter after its interactions with humans led it to start tweeting racist, sexist and xenophobic commentary.</div>
    <div> </div>
    <div>The chatbot, named Tay, is a computer designed by Microsoft to respond to questions and conversations on Twitter in an attempt to engage the millennials market in the US.</div>
    <div> </div>
    <div>
    <div>However, the tech group’s attempts spectacularly backfired after the chatbot was encouraged to use racist slurs, troll a female games developer and endorse Hitler and conspiracy theories about the 9/11 terrorist attack. A combination of Twitter users, online pranksters and insufficiently sensitive filters led it to go rogue and forced Microsoft to shut it down within hours of setting it live.</div>
    <div> </div>
    <div>Tweets reported to be from Tay, which have since been deleted, included: “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got”, and “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”. It appeared to endorse genocide, deny the Holocaust and refer to one woman as a “stupid whore”.</div>
    <div> </div>
    <div>Given that it was designed to learn from the humans it encountered, Tay’s conversion to extreme racism and genocide may not be the best advertisement for the Twitter community in the week the site celebrated its 10th anniversary.</div>
    <div> </div>
    <div>Tay was developed by Microsoft to experiment with conversational understanding using its artificial intelligence technology. According to Microsoft’s online introduction, it is aimed at 18 to 24 year olds and engages them “through casual and playful conversation”.</div>
    <div> </div>
    <div>Tay is described as a “fam from the internet that’s got zero chill! The more you talk the smarter Tay gets”, with people encouraged to ask it to play games and tell stories and jokes. Instead, many people took to asking controversial questions that were repeated by Tay.</div>
    <div> </div>
    <div>The chatbot has since been stood down, signing off with a jaunty: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”</div>
    <div> </div>
    <div>The controversial tweets have been removed from Tay’s timeline.</div>
    <div> </div>
    <div>Microsoft said it would make “some adjustments to Tay”.</div>
    <div> </div>
    <div>“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it,” Microsoft said.</div>
    <div> </div>
    <div>Tay uses data provided in conversations to search for responses and create simple personalised profiles. Microsoft said responses were generated from relevant public data and by using AI and editorial developed by a staff including improvisational comedians. “That data has been modelled, cleaned and filtered by the team developing Tay,” it said.</div>
    <div> </div>
    <div>Interactions between companies and the public on Twitter have a habit of spinning out of control, such as with the misuse of corporate hashtags to highlight bad practices by the company.</div>
    <div> </div>
    <div>Automated feeds have also become a problem in the past. Habitat, the furniture retailer, attempted to use trending topics to boost traffic to its website but inadvertently tweeted about Iranian politics.</div>
    <div> </div>
    <div>Similarly, the New England Patriots celebrated reaching 1m followers by allowing people to auto-generate images of jerseys featuring their Twitter handles, including very offensive ones.</div>
    <div> </div>
    <div>Google has had to tweak its search engine after its auto complete feature generated racist suggestions.</div>
    <div> </div>
    <div>From FT.com</div>
    <div> </div>
    <div><a data-ipb='nomediaparse' href='http://www.ft.com/intl/cms/s/0/8ba60bc4-f1c0-11e5-aff5-19b4e253664a.html#axzz43qyDSQha'>http://www.ft.com/intl/cms/s/0/8ba60bc4-f1c0-11e5-aff5-19b4e253664a.html#axzz43qyDSQha</a></div>
    </div>
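The poisoning mechanism the article describes - a bot that learns directly from whatever the public feeds it, behind an "insufficiently sensitive" filter - can be sketched with a toy example. (The `ParrotBot` class, its memory list, and the blocklist filter below are invented for illustration; Microsoft never published Tay's actual architecture.)

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by replaying phrases users feed it.

    A crude stand-in for Tay-style online learning: every incoming
    message becomes a candidate future reply. The optional blocklist
    models the content filter the article says was too insensitive.
    """

    def __init__(self, blocklist=None):
        self.memory = ["hello!"]          # seed phrase to start from
        self.blocklist = blocklist or []  # hypothetical word filter

    def learn(self, message):
        # Store the message only if it passes the (naive) filter.
        lowered = message.lower()
        if not any(bad in lowered for bad in self.blocklist):
            self.memory.append(message)

    def reply(self):
        # Reply by echoing something previously learned.
        return random.choice(self.memory)

# Unfiltered bot: everything trolls send joins its repertoire.
naive = ParrotBot()
naive.learn("have a nice day")
naive.learn("OFFENSIVE SLOGAN")  # stand-in for abusive input
print("OFFENSIVE SLOGAN" in naive.memory)    # True: poisoned

# Filtered bot: the same input never enters memory.
guarded = ParrotBot(blocklist=["offensive"])
guarded.learn("have a nice day")
guarded.learn("OFFENSIVE SLOGAN")
print("OFFENSIVE SLOGAN" in guarded.memory)  # False
```

The point of the sketch is only that with online learning from unfiltered public input, the training data *is* the attack surface - exactly the dynamic that caught Tay within hours.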

  • Chris B.
    #67

    <p>You see what happens, Larry. This is what happens, Larry....when you don't run the concept past The Fonz! </p>

  • reprobate
    #68

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="566616" data-time="1458673359">
    <div>
    <p>I genuinely do not know what I think the end result will be, but I have concerns about the sheer broadness of the possible outcomes, and one thing I am 100% convinced of is that the range is incredibly broad. To specify a predicted outcome is fine, if guesswork (like everyone else's), but to set a worst case scenario is foolish. </p>
    <p> </p>
    <p>I took issue with your statement that the very very VERY worst case scenario was AI thinking of us as great apes and flying off into space. That is nonsensical and demonstrably wrong.  </p>
    <p> </p>
    <p>Your position that AI will not get out of control and present a threat is perfectly valid, as like everyone else, it is a guess at the unknown.</p>
    </div>
    </blockquote>
    <p>aah of course. so it is okay for someone to speculate an opinion on a precise outcome, but not on a range of outcomes? fuck me, thank christ you've appointed yourself arbiter of the things people are allowed to have a guess at. this is why they don't give the short angry people with the small brains the keys to the city. </p>
    <p> </p>
    <p>but back to the topic. the interest seems to centre around what a super-intelligent AI would do, how it would regard humanity, and what its existence would mean for us. and the question of morality is an interesting one - if you predict a purely logical, amoral AI then you are effectively saying that the AI is still in the box that we made for it. but why would that be the case? if it is orders of magnitude smarter than us, then surely it can get out of that box? or if it isn't, then, logically speaking, why would it care about anything it wasn't told to? our hollywood concept of logic over emotion is pretty flawed. our logic says 'do something for all humanity / the earth / utilitarianism / whatever' while emotion says 'but i love this individual' - but those are both totally emotive when it comes down to it. in pure logic terms, who gives a fuck if anything happens? without a driving motivating force - will to live, survival of species, curiosity, whatever - it is irrelevant. if the AI's motivations are what we gave it in the first place, then that's fine, but if it is creating its own aims then how can that even be speculated about - what is a logical aim for a super-intelligent computer? survival? learning? why either of those? nihilism? what really matters to an AI? </p>
    <p>to me at least, super-intelligent means more than a logical amoral super-computer. because if you don't care about anything, then you have no aims, no motivation - unless you're doing things because you've been told to - and if you're doing things because you've been told to, then you're not so smart.</p>

  • Baron Silas Greenback
    #69

    <blockquote class="ipsBlockquote" data-author="reprobate" data-cid="567222" data-time="1458907051">
    <div>
    <p>aah of course. so it is okay for someone to speculate an opinion on a precise outcome, but not on a range of outcomes? fuck me, thank christ you've appointed yourself arbiter of the things people are allowed to have a guess at. this is why they don't give the short angry people with the small brains the keys to the city. </p>
    <p> </p>
    <p>but back to the topic. the interest seems to centre around what a super-intelligent AI would do, how it would regard humanity, and what its existence would mean for us. and the question of morality is an interesting one - if you predict a purely logical, amoral AI then you are effectively saying that the AI is still in the box that we made for it. but why would that be the case? if it is orders of magnitude smarter than us, then surely it can get out of that box? or if it isn't, then, logically speaking, why would it care about anything it wasn't told to? our hollywood concept of logic over emotion is pretty flawed. our logic says 'do something for all humanity / the earth / utilitarianism / whatever' while emotion says 'but i love this individual' - but those are both totally emotive when it comes down to it. in pure logic terms, who gives a fuck if anything happens? without a driving motivating force - will to live, survival of species, curiosity, whatever - it is irrelevant. if the AI's motivations are what we gave it in the first place, then that's fine, but if it is creating its own aims then how can that even be speculated about - what is a logical aim for a super-intelligent computer? survival? learning? why either of those? nihilism? what really matters to an AI? </p>
    <p>to me at least, super-intelligent means more than a logical amoral super-computer. because if you don't care about anything, then you have no aims, no motivation - unless you're doing things because you've been told to - and if you're doing things because you've been told to, then you're not so smart.</p>
    </div>
    </blockquote>
    <p> </p>
    <p> </p>
    <p>Thanks for giving the opinions of someone who knows fuck all about the topic and has clearly done the research of a 10 year old. You are just asking the same set of questions again and again. You then refuse to educate yourself on the answers that people have come up with.</p>
    <p> </p>
    <p>golf clap</p>
    <p> </p>
    <p>Go ahead ask the same questions again. </p>

  • reprobate
    #70

    <blockquote class="ipsBlockquote" data-author="Baron Silas Greenback" data-cid="567245" data-time="1458949279">
    <div>
    <p>Thanks for giving the opinions of someone who knows fuck all about the topic and has clearly done the research of a 10 year old. You are just asking the same set of questions again and again. You then refuse to educate yourself on the answers that people have come up with.</p>
    <p> </p>
    <p>golf clap</p>
    <p> </p>
    <p>Go ahead ask the same questions again. </p>
    </div>
    </blockquote>
    <p>no, thank you - for your always wonderful rebuttal - it's kind of magnificent. if i ask the same questions again will i get another one?</p>

  • Baron Silas Greenback
    #71

    <p>Do some basic research yourself, the links have been provided. All you have to do is click and read, yet you refuse. Your ignorance is your own fault.</p>

  • antipodean
    #72

    Looking for a career as a personality-devoid newsreader? Because that's gone.

  • NTA replied to antipodean
    #73

    @antipodean said in Will our kids be immortal or extinct?:

    Looking for a career as a personality-devoid newsreader? Because that's gone.

    You could say that was years ago.

    0_1545176736761_5a46d881-8a00-4b69-9c2b-889d7723aeeb-image.png

