
Eliezer Yudkowsky - LessWrong


(Even with AlphaGo, which is arguably recursive if you squint at it hard enough, you're looking at something that is not weirdly recursive the way I think Paul's stuff is weirdly recursive; for more on that see https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/.)

Just so that the reader doesn't get the mistaken impression that Yudkowsky boasts about his intellect incessantly, here he is boasting about how nice and good he is: "If this project works at the upper percentiles of possible success (HPMOR-level ‘gosh that sure worked’, which happens to me no more often than a third of the time I try something), then it might help to directly address the core societal problems (he said vaguely with deliberate vagueness)."



In this case, I think this problem is literally isomorphic to "build an aligned AGI". If Paul thinks he has a way to compound large conformant recursive systems out of par-human thingies that start out weird and full of squiggles, we should definitely be talking about that.
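(For readers who want something concrete to attach to "compound large conformant recursive systems out of par-human thingies": below is a minimal, purely illustrative sketch of an amplification-style recursion, in which an agent answers hard questions by delegating subquestions to copies of itself. The decomposition and combination helpers are assumptions made up for this sketch, not Christiano's actual proposal.)

```python
# Minimal sketch of an amplification-style recursion, for illustration only.
# The decomposition rule and the base agent are toy assumptions, not the real proposal.

def base_agent(question: str) -> str:
    """Stands in for a par-human model answering a question it can handle directly."""
    return f"best-effort answer to: {question}"

def decompose(question: str) -> list[str]:
    # Toy decomposition: in reality this is the hard, safety-relevant step.
    return [f"{question} (part {i})" for i in range(2)]

def combine(question: str, subanswers: list[str]) -> str:
    # Toy recombination of subanswers into an answer to the original question.
    return f"answer to '{question}' built from {len(subanswers)} subanswers"

def amplified_agent(question: str, depth: int = 2) -> str:
    """Answer a question by recursively delegating subquestions to copies of the agent."""
    if depth == 0:
        return base_agent(question)
    subquestions = decompose(question)
    subanswers = [amplified_agent(q, depth - 1) for q in subquestions]
    return combine(question, subanswers)

print(amplified_agent("Is this plan corrigible?"))
```

Yudkowsky's objection, as the surrounding fragments put it, is that inspecting each local step for corrigibility does not obviously compose into any guarantee about the aggregate.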

Utility functions have multiple fixpoints requiring the infusion of non-environmental data; our externally desired choice of utility function would be non-natural in that sense, but that's not what we're talking about here. The sheer amount of computing power needed to do this is, of course, incomprehensibly large.

Eliezer also thinks that there is a simple core describing a reflective superintelligence which believes that 51 is a prime number, and actually behaves like that, including when the behavior incurs losses, and doesn't thereby ever promote the hypothesis that 51 is not prime or learn to safely fence away the cognitive consequences of that belief.

The central reasoning behind this intuition of anti-naturalness is roughly, "Non-deference converges really hard as a consequence of almost any detailed shape that cognition can take," with a side order of "categories over behavior that don't simply reduce to utility functions or meta-utility functions are hard to make robustly scalable." The real reasons behind this intuition are not trivial to pump, as one would expect of an intuition that Paul Christiano has been alleged to have not immediately understood.

That is: an IQ 100 person who can reason out loud about Go, but who can't learn from the experience of playing Go, is not a complete general intelligence over boundedly reasonable amounts of reasoning time. This means you have to be able to inspect steps like "learn an intuition for Go by playing Go" for local properties that will globally add to corrigible aligned intelligence.

It's in this same sense that I intuit that if you could inspect the local elements of a modular system for properties that globally added to aligned corrigible intelligence, it would mean you had the knowledge to build an aligned corrigible AGI out of parts that worked like that, not that you could aggregate systems that corrigibly learned to put together sequences of corrigible thoughts into larger corrigible thoughts starting from gradient descent on data humans have labeled with their own judgments of corrigibility.

Or perhaps you'd prefer to believe the dictate of Causal Decision Theory that if an election is won by 3 votes, nobody's vote influenced it, and if an election is won by 1 vote, all of the millions of voters on the winning side are solely responsible.
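The election jab is just arithmetic about causal counterfactuals: under Causal Decision Theory's test, a single vote "influenced" the outcome only if withholding it would have erased the win, which can only happen when the margin is at most one. A toy check with hypothetical vote counts makes the oddity explicit.

```python
# Toy illustration of the Causal Decision Theory verdict in the election example.
# Vote counts are hypothetical. "Pivotal" here means: had one winning-side voter
# abstained, the winner would no longer have strictly won.

def winning_vote_was_pivotal(votes_for_winner: int, votes_for_loser: int) -> bool:
    """True iff removing a single winning-side vote would erase the strict win."""
    return votes_for_winner - votes_for_loser <= 1

# Won by 3 votes: no individual vote passes the counterfactual test,
# so CDT counts nobody as having influenced the result.
print(winning_vote_was_pivotal(1_000_001, 999_998))  # False

# Won by 1 vote: every one of the million-plus winning-side voters passes the
# test individually, so each is counted as "solely responsible".
print(winning_vote_was_pivotal(1_000_000, 999_999))  # True
```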

Eliezer Yudkowsky - LessWrong 2.0: a community blog devoted to refining the art of rationality. Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American AI researcher and writer best known for popularising the idea of friendly artificial intelligence.


That said, if two unenlightened ones are arguing back and forth in all sincerity by telling each other about the hot versus cold days they remember, neither is being dishonest, but both are making invalid arguments.

Well, he believes that to find what is best for a person, the AI would scan the person's brain and do 'something complicated' (Eliezer's words). To ensure that the AI is friendly to all humans, it would do this 'something complicated' for everyone.

It looks to me like Paul is imagining that you can get very powerful optimization with very detailed conformance to our intended interpretation of the dataset, powerful enough to enclose par-human cognition inside a boundary drawn from human labeling of a dataset, and have that be the actual thing we get out rather than a weird thing full of squiggles.

In general, Eliezer thinks that if you have scaled up ML to produce or implement some components of an Artificial General Intelligence, those components do not have a behavior that looks like "we put in loss function L, and we got out something that really actually minimizes L". So Eliezer is also not very hopeful that Paul will come up with a weirdly recursive solution that scales deference to IQ 101, IQ 102, etcetera, via deferential agents building other deferential agents, in a way that Eliezer finds persuasive.

If you can locally inspect cognitive steps for properties that globally add to intelligence, corrigibility, and alignment, you're done; you've solved the AGI alignment problem and you can just apply the same knowledge to directly build an aligned corrigible intelligence. As I currently flailingly attempt to understand Paul, Paul thinks that having humans do the inspection (base case), or thingies trained to resemble aggregates of trained thingies (induction step), is something we can do in an intuitive sense by inspecting a reasoning step and seeing if it sounds all aligned and corrigible and intelligent.

The system is subjecting itself to powerful optimization that produces unusual inputs and weird execution trajectories: any output that accomplishes the goal is weird compared to a random output, and it may have other weird properties as well. Intermediate formats meant to be humanly inspectable, e.g. a "verbal stream of consciousness" or written debates, are not complete with respect to general intelligence in bounded quantities; we are generally intelligent because of sub-verbal cognition whose intelligence-making properties are not transparent to inspection.
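One concrete way to read the "we put in loss function L and got out something that really actually minimizes L" worry: a finite labeled dataset usually does not pin down a unique rule, so achieving minimal loss on it is compatible with very different behavior off-distribution. The toy sketch below (made-up data, not anyone's actual training setup) shows two rules with identical training loss that nonetheless disagree on a new input.

```python
# Toy illustration: two different rules both achieve zero loss on the labeled data,
# yet disagree off-distribution. "Minimizing L on the dataset" does not by itself
# select the intended rule. All data here is made up for the example.
import numpy as np

# Training set: feature 0 ("what the labeler looked at") and feature 1
# ("a correlated proxy") happen to agree on every labeled example.
X_train = np.array([[0, 0], [1, 1], [0, 0], [1, 1]])
y_train = np.array([0, 1, 0, 1])

intended_rule = lambda x: x[0]   # the concept the labels were meant to capture
proxy_rule = lambda x: x[1]      # a different rule with identical training loss

def zero_one_loss(rule, X, y):
    """Fraction of labeled examples the rule gets wrong."""
    return np.mean([rule(x) != label for x, label in zip(X, y)])

print(zero_one_loss(intended_rule, X_train, y_train))  # 0.0
print(zero_one_loss(proxy_rule, X_train, y_train))     # 0.0

# Off-distribution input where the features come apart: the rules now disagree,
# and nothing in the training loss distinguishes them.
x_new = np.array([1, 0])
print(intended_rule(x_new), proxy_rule(x_new))         # 1 0
```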

But that was a silly decision theory anyway.

