(Even with AlphaGo, which is arguably recursive if you squint at it hard enough, you're looking at something that is not weirdly recursive the way I think Paul's stuff is weirdly recursive, and for more on that see https://intelligence.org/2018/05/19/challenges-to-christianos-capability-amplification-proposal/.)
However, he admits to possibly being less smart than … Just so that the reader doesn't get the mistaken impression that Yudkowsky boasts about his intellect incessantly, here he is boasting about how nice and good he is: If this project works at the upper percentiles of possible success (HPMOR-level 'gosh that sure worked', which happens to me no more often than a third of the time I try something), then it might help to directly address the core societal problems (he said vaguely with deliberate vagueness).
A stand-up debate on the same themes took …

In this case, I think this problem is literally isomorphic to "build an aligned AGI". If Paul thinks he has a way to compound large conformant recursive systems out of par-human thingies that start out weird and full of squiggles, we should definitely be talking about that.
Utility functions have multiple fixpoints requiring the infusion of non-environmental data; our externally desired choice of utility function would be non-natural in that sense, but that's not what we're talking about.
Eliezer also thinks that there is a simple core describing a reflective superintelligence which believes that 51 is a prime number, and actually behaves like that, including when the behavior incurs losses, and doesn't thereby ever promote the hypothesis that 51 is not prime or learn to safely fence away the cognitive consequences of that belief.

The central reasoning behind this intuition of anti-naturalness is roughly, "Non-deference converges really hard as a consequence of almost any detailed shape that cognition can take", with a side order of "categories over behavior that don't simply reduce to utility functions or meta-utility functions are hard to make robustly scalable". The real reasons behind this intuition are not trivial to pump, as one would expect of an intuition that Paul Christiano has been alleged to have not immediately understood.

That is: an IQ 100 person who can reason out loud about Go, but who can't learn from the experience of playing Go, is not a complete general intelligence over boundedly reasonable amounts of reasoning time. This means you have to be able to inspect steps like "learn an intuition for Go by playing Go" for local properties that will globally add to corrigible aligned intelligence.

It's in this same sense that I intuit that if you could inspect the local elements of a modular system for properties that globally added to aligned corrigible intelligence, it would mean you had the knowledge to build an aligned corrigible AGI out of parts that worked like that, not that you could aggregate systems that corrigibly learned to put together sequences of corrigible thoughts into larger corrigible thoughts starting from gradient descent on data humans have labeled with their own judgments of corrigibility.

Or perhaps you'd prefer to believe the dictate of Causal Decision Theory that if an election is won by 3 votes, nobody's vote influenced it, and if an election is won by 1 vote, all of the millions of voters on the winning side are solely responsible. But that was a silly decision theory anyway.
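To make the election quip concrete, here is a toy counterfactual-impact calculation. It is my own illustration, not from the original post, and the function name `switched_vote_changes_result` is just a hypothetical stand-in: under the naive causal reading, a voter "influenced" the outcome only if switching that one vote would have changed the result, which is why responsibility lands on nobody at a 3-vote margin and on every winning-side voter at once at a 1-vote margin.

```python
# Toy counterfactual-impact calculation (illustrative only, not from the post)
# for the Causal Decision Theory quip about elections.

def switched_vote_changes_result(winner_votes: int, loser_votes: int) -> bool:
    """Would moving one vote from the winning side to the losing side change
    the outcome? (A tie after the switch counts as a changed outcome.)"""
    assert winner_votes > loser_votes
    return (winner_votes - 1) <= (loser_votes + 1)

# Won by 3 votes: no single switched vote changes the result,
# so on this reading "nobody's vote influenced it".
print(switched_vote_changes_result(1_000_001, 999_998))  # False

# Won by 1 vote: every winning-side voter's single vote is decisive,
# so all of them are "solely responsible" at once.
print(switched_vote_changes_result(1_000_000, 999_999))  # True
```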
Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American AI researcher and writer best known for popularising the idea of friendly artificial intelligence.
That said, if two unenlightened ones are arguing back and forth in all sincerity by telling each other about the hot versus cold days they remember, neither is being dishonest, but both are making invalid arguments.
Well, he believes that to find what is best for a person, the AI would scan the person's brain and do 'something complicated' (Eliezer's words). To ensure that the AI is friendly to all humans, it would do this 'something complicated' to everyone. The sheer amount of computing power needed to do this is, of course, incomprehensibly large.
It looks to me like Paul is imagining that you can get very powerful optimization with very detailed conformance to our intended interpretation of the dataset, powerful enough to enclose par-human cognition inside a boundary drawn from human labeling of a dataset, and have that be the actual thing we get out rather than a weird thing full of squiggles.

In general, Eliezer thinks that if you have scaled up ML to produce or implement some components of an Artificial General Intelligence, those components do not have a behavior that looks like "We put in loss function L, and we got out something that really actually minimizes L". So Eliezer is also not very hopeful that Paul will come up with a weirdly recursive solution that scales deference to IQ 101, IQ 102, etcetera, via deferential agents building other deferential agents, in a way that Eliezer finds persuasive.

If you can locally inspect cognitive steps for properties that globally add to intelligence, corrigibility, and alignment, you're done; you've solved the AGI alignment problem and you can just apply the same knowledge to directly build an aligned corrigible intelligence. As I currently flailingly attempt to understand Paul, Paul thinks that having humans do the inspection (base case), or thingies trained to resemble aggregates of trained thingies (induction step), is something we can do in an intuitive sense by inspecting a reasoning step and seeing if it sounds all aligned and corrigible and intelligent.

The system is subjecting itself to powerful optimization that produces unusual inputs and weird execution trajectories: any output that accomplishes the goal is weird compared to a random output, and it may have other weird properties as well. A "verbal stream of consciousness" or written debates are not complete with respect to general intelligence in bounded quantities; we are generally intelligent because of sub-verbal cognition whose intelligence-making properties are not transparent to inspection.
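For readers trying to keep the shape of the disputed proposal in mind, here is a rough schematic of the amplify-and-distill loop with step-level inspection, written as a Python sketch. Every name in it (`Agent`, `human_answer`, `amplify`, `distill`, `looks_corrigible`) is a hypothetical stand-in for exposition, not anything from Paul's or Eliezer's actual writing; the point is only to show where the base case (humans inspecting) and the induction step (aggregates of trained thingies) sit, and where the "does this step sound corrigible?" filter gets applied.

```python
# Schematic sketch of the amplify-and-distill loop under discussion.
# All names here are hypothetical stand-ins; this is not anyone's actual system.

from dataclasses import dataclass
from typing import Callable, List

Question = str
Answer = str

@dataclass
class Agent:
    answer: Callable[[Question], Answer]

def human_answer(q: Question) -> Answer:
    """Base case: a human answers directly (and can inspect reasoning steps)."""
    return f"[human judgment on: {q}]"

def looks_corrigible(step: str) -> bool:
    """The disputed filter: does this *visible* reasoning step sound aligned,
    corrigible, and intelligent to an inspector?"""
    return "deceive the operators" not in step  # placeholder check

def amplify(agent: Agent, q: Question, n_helpers: int = 3) -> Answer:
    """Induction step: decompose the question, consult copies of the current
    agent on sub-questions, and keep only the steps that pass inspection."""
    sub_answers: List[str] = []
    for i in range(n_helpers):
        step = agent.answer(f"sub-question {i} of: {q}")
        if looks_corrigible(step):
            sub_answers.append(step)
    return f"aggregate({sub_answers})"

def distill(amplified: Callable[[Question], Answer]) -> Agent:
    """Train a faster model to imitate the amplified system (stubbed here)."""
    return Agent(answer=amplified)

# One round per iteration: H -> amplify(H) -> distill -> amplify(distilled) -> ...
agent = Agent(answer=human_answer)
for _ in range(2):
    agent = distill(lambda q, a=agent: amplify(a, q))

print(agent.answer("How should the system respond to a shutdown command?"))
```

The objection above is then that `looks_corrigible` only ever sees the visible reasoning step, while the properties that actually make the system intelligent (and potentially dangerous) live in sub-verbal cognition that this kind of inspection cannot reach.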