text,start,duration hello everybody David Shapiro here with,0.0,4.98 another video so today's video is about,2.46,5.28 super alignment uh for those that you,4.98,5.039 that might not know openai recently,7.74,3.6 announced that they are creating a super,10.019,2.761 alignment team and they are going to,11.34,5.16 commit 20 of their compute resources to,12.78,6.9 the task of solving super alignment So,16.5,5.279 today we're going to talk about how it,19.68,3.84 would work or more specifically the,21.779,4.621 challenges with super alignment and also,23.52,5.64 some of my uh let's say criticism my,26.4,5.52 feedback for openai based on what I know,29.16,5.1 about how they have approached alignment,31.92,4.08 so far and what they have said about,34.26,3.12 super alignment,36.0,4.02 uh before we get into the show all of my,37.38,4.32 work is completely open source and free,40.02,3.42 of ads and that is because I am,41.7,3.78 supported by a Grassroots movement now,43.44,4.56 in order to keep doing this your support,45.48,4.559 would be greatly appreciated and all,48.0,4.079 tiers on my patreon get you access to,50.039,4.141 the private Discord server so without,52.079,4.681 further Ado moving on,54.18,5.1 uh first the question is what is super,56.76,5.88 alignment uh so I got this summary just,59.28,5.76 I took open ai's statement on super,62.64,4.62 alignment and and got this nice little,65.04,4.8 summary uh super alignment is the,67.26,4.08 process of ensuring that super,69.84,3.9 intelligent AI systems which are systems,71.34,4.74 much smarter than humans follow human,73.74,4.26 intent so they keep using this word,76.08,6.539 intent which I have some feedback on it,78.0,6.299 involves developing new scientific and,82.619,2.82 Technical breakthroughs that can,84.299,3.661 effectively guide and control these,85.439,4.32 highly Advanced systems this is making,87.96,3.36 the Assumption of Courage ability which,89.759,3.9 we'll talk about later the goal is to,91.32,3.839 prevent potentially catastrophic,93.659,3.121 scenarios such as super intelligent,95.159,3.721 going rogue or becoming uncontrollable,96.78,4.799 super super alignment is a critical,98.88,4.44 challenge in the field of AI safety and,101.579,3.381 is considered one of the most important,103.32,4.56 unsolved technical problems of our time,104.96,4.96 so again this is paraphrase paraphrasing,107.88,3.96 open AI,109.92,5.04 super alignment is not about ethics and,111.84,5.639 disinformation super alignment is,114.96,4.5 fundamentally about X risk or what we,117.479,3.721 used to call existential risk but what,119.46,3.42 people have simplified to just call,121.2,4.64 Extinction risk it's not about jobs,122.88,5.519 displacement it's not about preserving,125.84,4.54 the economy as it is it's not even about,128.399,4.041 ethics and privacy,130.38,4.56 or social Credit Systems it's not about,132.44,5.98 democracy it's not about uh manipulation,134.94,5.34 campaigns or making money or even,138.42,4.56 regulation it is about preventing,140.28,5.4 extinction level events,142.98,4.2 all right so to help you understand,145.68,3.72 super alignment uh I found a couple,147.18,4.62 memes so these are from the AI safety,149.4,4.68 memes uh Twitter which is hilarious and,151.8,4.68 I definitely recommend you follow him uh,154.08,5.64 or then whoever the probably a human,156.48,6.119 hopefully a human anyways uh so this is,159.72,5.7 the show goth Meme and the idea is that,162.599,5.941 uh when you when you train a gigantic,165.42,5.34 model and of course these models are now,168.54,4.62 pushing multiple trillions of parameters,170.76,3.9 and they're trained on trillions of,173.16,2.659 tokens,174.66,3.6 uh you,175.819,4.661 you don't know what is in the model you,178.26,3.6 don't know what it learns you don't know,180.48,3.92 how it thinks and it is entirely too big,181.86,6.36 to uh to be remotely interpretable all,184.4,5.619 you can do is train the model and then,188.22,4.14 test it based on input and output and,190.019,3.841 you can try and trick it you can try and,192.36,4.86 find failure conditions uh basically,193.86,5.64 this is the Mesa optimization problem,197.22,4.08 where it's like okay you don't really,199.5,3.9 know what's going on inside inside the,201.3,5.34 Black Box uh and so unsupervised,203.4,6.78 learning uh Foundation models uh they,206.64,5.16 are scary because they will just start,210.18,4.08 spewing out all kinds of stuff all the,211.8,4.74 stuff that you saw on uh on the Bing,214.26,3.839 chat Sydney,216.54,3.36 um that was because they were they they,218.099,5.401 you got a little bit more raw raw output,219.9,6.3 from the model and of course that was uh,223.5,4.739 if you go watch the Y files that just,226.2,3.179 came out,228.239,3.0 um he had a really great dramatization,229.379,3.36 of some of the conversations that people,231.239,4.92 had with Sydney or being Ai and so that,232.739,5.521 gives you a kind of a closer peek under,236.159,4.201 the hood as to what's going on they have,238.26,3.96 since fixed it with some supervised fine,240.36,3.84 tuning and then of course there's our,242.22,5.28 rlhf which makes it uh behave very well,244.2,6.06 but the thing is is every now and then,247.5,4.86 you'll get a peek behind you know what's,250.26,3.66 actually going on and what it's actually,252.36,3.659 capable of doing and you will realize,253.92,3.599 that you are communicating with a,256.019,3.78 non-human intelligence and it's pretty,257.519,4.321 scary when that happens,259.799,4.501 uh this meme was great because it really,261.84,4.32 kind of shows the context of what actual,264.3,3.42 super intelligence is and I love the,266.16,3.96 simplification of this of the show goth,267.72,3.479 meme,270.12,2.76 um you know but basically the smartest,271.199,4.621 humans that have ever existed uh are,272.88,4.68 several orders of magnitude lower,275.82,4.56 capability than super intelligence and,277.56,4.139 since we're starting to see the first,280.38,3.48 Sparks of super intelligence hopefully,281.699,3.72 people will start to believe that super,283.86,3.54 intelligence is actually a thing we,285.419,3.481 still have some deniers out there which,287.4,3.359 I'll cover in just a second,288.9,6.299 okay so General challenges why is super,290.759,6.421 alignment hard,295.199,4.801 first and foremost is the normalcy bias,297.18,5.88 so human brains we evolved on the,300.0,5.9 savannas of Africa and then we spread,303.06,6.12 across the world and so our brains just,305.9,5.859 do not comprehend exponential growth it,309.18,5.12 is not something that is in our,311.759,5.581 evolutionary distribution and so uh Gary,314.3,6.459 Marcus uh an AI safety researcher is,317.34,6.84 fond of of pointing out that llms really,320.759,5.401 often fail to generalize outside of,324.18,3.9 their training distribution humans are,326.16,4.74 no different and uh in our evolutionary,328.08,5.54 training disposition uh uh distribution,330.9,5.82 we never experience anything truly,333.62,4.54 exponential,336.72,3.72 and the things that we do experience,338.16,4.56 that are exponential like the uh light,340.44,4.5 and sound because those are on uh I,342.72,4.44 think logarithmic scales,344.94,4.62 um your brain handles for you so you,347.16,4.44 still perceive it as geometric even,349.56,3.66 though your brain automatically Tunes,351.6,4.92 audio and and light levels uh so that,353.22,4.86 you just experience it within a much,356.52,3.08 narrower range,358.08,4.5 so that's one part of normalcy bias,359.6,5.98 which is just we are evolutionarily not,362.58,5.1 equipped to comprehend exponential,365.58,5.22 growth and exponential change uh beyond,367.68,6.12 that it is very difficult to understand,370.8,5.64 super intelligence even when you look at,373.8,4.14 the trends because all you see is a,376.44,3.9 trend on a graph like okay uh you know,377.94,4.379 parameter goes up and to the right oh,380.34,5.22 okay great you know uh token window goes,382.319,4.801 up and to the right training data goes,385.56,4.32 up and to the right we don't really have,387.12,4.919 a visceral intuitive emotional,389.88,4.379 understanding of what that means because,392.039,4.921 again normalcy bias and this is I'm not,394.259,4.621 saying that like if you have normalcy,396.96,3.84 bias you're dumb this is literally just,398.88,4.34 a fundamental limitation of human brains,400.8,5.28 uh and even those of us who study this,403.22,5.199 stuff and know that it's coming we,406.08,4.92 cannot predict exactly what it's going,408.419,4.921 to imply or what it's going to feel like,411.0,4.56 once it actually happens because again,413.34,3.96 we are anchored in the present moment,415.56,4.02 the present time because evolutionarily,417.3,4.38 speaking That's What mattered most if,419.58,3.72 you're hungry right now go find food if,421.68,3.42 there's a tiger right now go you know,423.3,3.179 get away from it or hit it with a stick,425.1,3.24 and then once you're safe you're safe,426.479,4.981 again so our time Horizon that our brain,428.34,6.0 thinks about is relatively small,431.46,4.799 and this these are all components that,434.34,5.52 feed into normalcy bias so this normalcy,436.259,6.0 bias creates a lot of problems for for,439.86,6.66 uh many many reasons one for a lot of,442.259,6.78 people they're just not even really like,446.52,5.04 willing or able to engage with the,449.039,4.621 conversation of super alignment because,451.56,3.72 of normalcy bias,453.66,3.659 this is why you see so much skepticism,455.28,5.639 out there uh and and and even for like I,457.319,4.981 said even for those of us that are,460.919,3.0 engaged even though we know what's,462.3,3.959 coming just our cognitive limitations,463.919,4.441 make it really difficult to accurately,466.259,4.681 forecast and predict the impact of some,468.36,4.08 of these things and we have to trust the,470.94,3.78 numbers and even then we can only think,472.44,4.02 so far into the future especially with,474.72,4.16 things changing as fast as they are,476.46,5.4 so here's a thought experiment that I,478.88,4.539 came up with to help you understand,481.86,3.42 super intelligence,483.419,3.541 think of a pigeon,485.28,4.5 they're very common they uh basically,486.96,4.579 exist in every major city in the world,489.78,4.62 they're mildly intelligent creatures,491.539,5.141 they can learn a few things they can,494.4,4.079 solve some basic problems and they can,496.68,4.32 remember uh simple facts like you know,498.479,4.861 where where to go get food they can even,501.0,4.139 learn to recognize certain humans like,503.34,3.18 if you go to the park and feed the,505.139,3.361 pigeons every day the pigeons will learn,506.52,4.44 to recognize you but other than that,508.5,4.68 they're pretty simple creatures,510.96,4.62 now when you compare the cognitive,513.18,4.739 capacity of a pigeon to even the dumbest,515.58,7.079 humans uh human pigeons are cognitively,517.919,6.3 deficient,522.659,3.601 you can't even compete on the same,524.219,4.141 playing field because humans are in a,526.26,4.079 fundamentally different class of,528.36,3.539 cognitive ability,530.339,4.201 compared to Super intelligence you are,531.899,5.641 dumber than the pigeon is to you know a,534.54,5.52 typical person and then not to mention,537.54,5.6 the fact that uh it's entirely possible,540.06,6.42 that that super intelligence or AGI or,543.14,6.4 whatever is going to possess orders of,546.48,6.06 magnitude more cognitive abilities and I,549.54,5.16 don't just mean speed I don't just mean,552.54,5.94 the ability to read uh you know text at,554.7,6.3 a human level you know a million times,558.48,4.02 faster which it's already getting close,561.0,4.86 to doing that uh what I mean is that it,562.5,5.1 will possess cognitive abilities the,565.86,4.5 ability to make connections to solve,567.6,4.679 problems and to understand things in a,570.36,4.62 way that humans might not be able to,572.279,5.821 ever compete with we have the illusion,574.98,4.26 that we can understand everything,578.1,2.88 because you're looking at your own mind,579.24,3.9 from inside the Fishbowl this is a,580.98,3.84 commonly discussed problem in epistem,583.14,4.68 epistemology and philosophy but the,584.82,5.639 thing is is you can imagine the mind of,587.82,5.28 a pigeon by virtue of the fact that the,590.459,4.801 pigeon's mind is much simpler and dumber,593.1,4.44 than yours and you can you know look at,595.26,5.4 it uh and and make inferences but the,597.54,6.12 pigeon lacks the ability to even,600.66,5.52 remotely comprehend your mind because,603.66,5.1 its mind is so much more limited that is,606.18,4.74 the difference between humans and super,608.76,4.98 intelligence and so basically remember,610.92,5.76 that you are a pigeon in comparison and,613.74,5.64 that will help you keep in mind what,616.68,6.12 super intelligence actually is and when,619.38,5.82 I say actually is like it is coming and,622.8,4.32 it is coming fast,625.2,5.28 another thing is AI dysphoria so this is,627.12,5.82 a this is a term that that I coined,630.48,5.039 because I have noticed in the comments,632.94,4.56 and read it and Twitter and all other,635.519,4.741 kinds of places there's a there's a few,637.5,4.86 fundamental kinds of reactions and most,640.26,5.16 of these are emotional reactions uh or,642.36,6.9 or social cultural reactions to AI uh so,645.42,6.0 basically one is denialism so this is,649.26,5.1 people that just reject AI,651.42,4.5 um like there's even people in the,654.36,3.479 comments that say AI does not exist and,655.92,4.02 will never exist and I'm like okay but,657.839,4.981 that's like observably patently false so,659.94,4.38 there are people that are clinging to,662.82,4.079 this denialism because the fear or,664.32,4.38 discomfort of acknowledging the,666.899,4.741 existence of something is too much it's,668.7,4.379 too overwhelming and so they just say,671.64,3.12 I'm gonna pretend like it doesn't exist,673.079,3.541 and we saw this with the pandemic,674.76,3.66 remember there was plenty of people just,676.62,3.6 saying that like the pandemic isn't real,678.42,4.32 stop trying to control me and these are,680.22,4.98 there were plenty of people who denied,682.74,4.32 the existence of the pandemic even on,685.2,5.819 their deathbed they still got themselves,687.06,8.88 into mental uh gymnastics to say no it's,691.019,7.38 just emphysema or it's just what did,695.94,3.72 they call it,698.399,4.201 um uh pneumonia they they called it you,699.66,4.679 know oh I just have bad pneumonia and,702.6,3.479 then they would die and it's like you,704.339,3.601 literally died of of the pandemic but,706.079,3.781 the concept of the pandemic was too,707.94,3.66 terrifying that they could never,709.86,3.9 emotionally reconcile,711.6,4.02 the reality that they were literally,713.76,4.139 dying of it with its existence with the,715.62,4.26 fact of its existence and so I suspect,717.899,3.541 we're going to see the same thing with,719.88,3.0 artificial intelligence where some,721.44,3.66 people are just going to be locked in a,722.88,4.5 state of denial basically forever,725.1,4.859 another one is plain ignorance so this,727.38,4.139 is not technically dysphoria but it,729.959,3.541 needed to be on the list where some,731.519,4.741 people just don't get it like if you do,733.5,4.56 not understand how it works you don't,736.26,3.9 understand what it's capable of you're,738.06,4.56 just not exposed to it you're not uh,740.16,4.619 you're not educated enough or maybe in,742.62,3.48 some cases people are just not,744.779,3.481 intelligent enough to get it plain and,746.1,4.02 simple ignorance is another reason that,748.26,4.139 a lot of people are not going to engage,750.12,4.68 with AI at the level of discussion that,752.399,4.5 it needs to happen number three is,754.8,4.44 magical thinking so these are the kinds,756.899,4.981 of people that immediately assume and,759.24,5.279 and very desperately want to see a soul,761.88,5.519 in the machine the most famous example,764.519,6.181 is uh Blake Lemoine uh at Google who,767.399,5.101 basically there was a really great,770.7,4.02 Reddit meme when he got fired from,772.5,4.139 Google where he you know the the chat,774.72,3.72 log was basically like you know tell me,776.639,3.421 that you have a soul and then then the,778.44,2.88 language model says yes I have a soul,780.06,3.06 and see and the guy's like oh holy,781.32,4.019 like there are there are so many people,783.12,4.26 out there that want to imagine that we,785.339,4.021 already have super intelligence that the,787.38,4.44 the the machine is already sentient that,789.36,4.2 it already deserves rights and I'm like,791.82,4.86 it's it's still just a math model that's,793.56,4.86 telling you what it's programmed to,796.68,3.839 think that it wants having been working,798.42,3.78 with these large language models since,800.519,3.06 gpt2,802.2,3.96 I will tell you that understanding that,803.579,4.081 the that the underlying language model,806.16,3.84 is just predicting the next token right,807.66,4.739 it's they spew out absolute gibberish,810.0,5.339 like seriously go use gpt2 or the,812.399,7.081 original gpt3 and any any illusion that,815.339,6.24 you have that there's a soul in there or,819.48,4.02 that it has extraordinary powers or that,821.579,3.601 it's literally anything other than an,823.5,4.2 autocomplete engine will be dispelled so,825.18,4.08 that goes back to that shogoth thing,827.7,4.5 right the the absolute gibberish that,829.26,5.04 Foundation models spew out once before,832.2,4.8 they're trained will this will dispel,834.3,6.36 any myth of uh disabuse you of any uh,837.0,5.639 illusion that there's something else,840.66,4.739 going on other than just autocomplete,842.639,5.521 um it's the it's that it's the rlhf that,845.399,5.88 makes it uh appear more human-like and,848.16,5.52 that's peridolia uh probably saying that,851.279,4.021 right I got criticized last time I tried,853.68,3.839 to say peridolia,855.3,6.06 um but basically we are programmed to uh,857.519,5.421 perceive human-like things into,861.36,3.84 anthropomorphize things number four is,862.94,4.959 doomerism so doomerism as I've unpacked,865.2,4.439 in some of my other videos is often,867.899,4.44 rooted in uh intergenerational trauma,869.639,5.88 failed parents uh uh you know a,872.339,5.101 nihilistic outlook for whatever reason,875.519,4.5 and so basically what happens is that a,877.44,5.16 lot of people take their intrinsic dread,880.019,4.44 their intrinsic fear their intrinsic,882.6,4.2 self-loathing whatever it is based on,884.459,5.101 their experience and oftentimes it's uh,886.8,4.38 it's completely unconscious I'm not,889.56,3.54 saying that someone's like ah you know I,891.18,3.719 hate my life and so therefore I want to,893.1,3.0 see the world burn and no it's,894.899,3.12 completely unconscious it's basically,896.1,3.599 just that they have a negative outlook,898.019,4.021 because of their life experience and,899.699,4.44 then they project that onto artificial,902.04,3.84 intelligence and it's basically a,904.139,3.961 manifestation of a Death Wish,905.88,3.36 um that's not the only reason for,908.1,3.419 doomerism some people who are very,909.24,4.8 intelligent and oriented uh towards this,911.519,4.741 stuff they still rationally come to the,914.04,5.52 conclusion uh that uh that AI is,916.26,5.22 incredibly dangerous uh and I,919.56,3.36 acknowledge that I acknowledge that if,921.48,3.359 we do this wrong AI is is incredibly,922.92,4.08 dangerous and it could cause an,924.839,4.5 extinction level event but what a Doomer,927.0,3.839 is the difference is that this is,929.339,3.481 someone who seems to want to believe,930.839,5.281 that AI will kill us all and to me that,932.82,4.98 just looks like okay there's an,936.12,3.6 opportunity to fulfill a death wish,937.8,4.62 sorry uh and then the opposite of that,939.72,5.7 is utopianism which is the idea that AI,942.42,5.4 is going to intrinsically solve all of,945.42,4.14 our problems but as as you might have,947.82,3.6 seen in some of my other videos uh,949.56,4.019 technology is is always a double-edged,951.42,3.599 sword and it's a it's a dual use,953.579,3.421 technology and more often than not,955.019,4.021 technology actually makes some things,957.0,5.04 much much worse before it gets better so,959.04,5.34 it is not intrinsically a Force for good,962.04,4.739 it is a dangerous force it is an,964.38,3.899 energetic Force which must be used,966.779,3.481 responsibly,968.279,4.021 another challenge is the geopolitical,970.26,4.56 arms race that is already starting,972.3,5.46 uh the the the the one of the opening,974.82,5.16 one of those strongest opening moves was,977.76,4.68 when the United States cut off the sub,979.98,4.979 the the flow of AI chips to China,982.44,5.04 another thing that's less well known is,984.959,5.581 that we also uh basically recalled all,987.48,4.919 of our AI engineers and all of our chip,990.54,4.5 Fab Engineers it basically said like you,992.399,4.201 need a special permit if you're gonna,995.04,3.479 keep working in China otherwise you're,996.6,5.4 being recalled home uh so that's,998.519,5.701 basically saying hey we're gonna we're,1002.0,4.74 gonna we're gonna force uh brain drain,1004.22,4.559 on China by Taking Back all of our best,1006.74,4.62 engineers and scientists at the same,1008.779,5.521 time uh people are putting AI into,1011.36,5.64 drones we've seen this in uh the Russia,1014.3,4.8 Ukraine conflict where there are more,1017.0,3.6 and more autonomous drones being,1019.1,4.32 deployed meanwhile China Russia and,1020.6,5.099 America and everyone else is putting,1023.42,4.86 more and more AI into jet fighters and,1025.699,4.86 every and literally every other weapon,1028.28,5.639 uh so on top of the military incentives,1030.559,4.921 that there are to create create more,1033.919,3.54 sophisticated weapons there is the,1035.48,4.68 geopolitical incentive to maintain a,1037.459,5.581 level of influence on the geopolitical,1040.16,5.58 world stage whether that's being uh,1043.04,4.799 militarily competitive or economically,1045.74,3.9 competitive or whatever,1047.839,4.021 and one thing I want to caution here is,1049.64,4.56 that the geopolitical arms race is in no,1051.86,4.5 way open ai's responsibility or any,1054.2,4.62 other individual corporations because,1056.36,5.28 even if openai and Google and Microsoft,1058.82,4.68 and all of them literally just flat out,1061.64,4.2 refuse to serve the Pentagon or the,1063.5,5.039 Department of Defense guess what the the,1065.84,5.219 the United States military and every,1068.539,3.781 other military they have their own,1071.059,2.641 budget and they can hire their own,1072.32,3.359 experts and they can they can still make,1073.7,3.3 it happen,1075.679,3.901 um and so I want to say like yes I will,1077.0,5.76 be criticizing open ai's approach but in,1079.58,4.74 this particular case I want to say that,1082.76,4.32 this is way outside of the scope of open,1084.32,5.16 AI but this also underscores the fact,1087.08,5.88 that we absolutely 100 percent not not,1089.48,6.66 just need Federal level regulation and,1092.96,6.3 research uh I'm not we also need,1096.14,6.24 International and Global regulation and,1099.26,4.68 research because some of these things,1102.38,4.08 are so far outside of the scope of just,1103.94,5.7 deploying models and Commercial tools,1106.46,7.02 and then uh finally is open source so,1109.64,6.539 there are uh more than a few commenters,1113.48,5.64 out there like um uh Dr Roman uh,1116.179,4.561 Chowdhury I think I hope I'm saying her,1119.12,3.36 name right and Gary Marcus and quite a,1120.74,3.179 few others,1122.48,4.319 um who are not Eliezer utkowski but,1123.919,4.38 there are plenty of people basically,1126.799,3.661 saying uh the same thing that the the,1128.299,4.081 polls that I ran on my YouTube channel,1130.46,3.719 say which is that a lot of people,1132.38,4.799 anticipate that open source models are,1134.179,5.461 going to overtake and eventually replace,1137.179,4.74 closed Source models,1139.64,5.1 so the thing is is once it's open source,1141.919,4.981 you can't really put that Genie back in,1144.74,3.78 the bottle and a lot of people already,1146.9,3.3 say the cat is out of the bag the horse,1148.52,4.38 has left the barn and is down the street,1150.2,4.74 and so in this case you have a,1152.9,3.84 competitive landscape where it doesn't,1154.94,4.739 matter what open AI research does it,1156.74,4.5 doesn't matter what Google deepmind,1159.679,3.721 research does it doesn't matter what,1161.24,4.439 regulations anyone passes and this is,1163.4,4.8 one of the nightmare scenarios that,1165.679,4.62 people point out that regulation no,1168.2,3.839 matter what you do will not be enough,1170.299,4.201 that research no matter what you do will,1172.039,4.441 not be enough and so basically we're,1174.5,3.84 going to end up in a situation where you,1176.48,3.66 have to fight fire with fire you have to,1178.34,4.86 fight misaligned models misaligned AI,1180.14,6.0 with aligned AI but then that that if,1183.2,5.04 you're relying on AI to fight your Wars,1186.14,4.919 for you what if it switches sides,1188.24,4.5 so these are some major major major,1191.059,2.48 major,1192.74,3.96 challenges with open Ai and one thing,1193.539,4.481 that I'll say before we get into the,1196.7,3.9 criticism is the fact that open AI is,1198.02,4.62 talking about red teaming and,1200.6,4.26 deliberately creating misaligned AI,1202.64,4.74 models in order to test super alignment,1204.86,5.58 that is absolutely Far and Away the best,1207.38,5.58 thing about what they are planning on,1210.44,4.979 doing now with that said I do have some,1212.96,5.579 criticism of open ai's approach,1215.419,4.681 so first,1218.539,4.02 open AI is somewhat preoccupied with,1220.1,5.04 human intention and human values you've,1222.559,4.561 probably seen this in chat GPT whenever,1225.14,3.779 you talk about Ai and safety where it's,1227.12,3.0 like you know we need to make sure it,1228.919,3.541 stays aligned with human values this was,1230.12,4.5 very clearly shoehorned in by their own,1232.46,4.56 internal alignment process which to be,1234.62,4.74 fair it's a good start you know,1237.02,4.44 basically saying let's align AI to human,1239.36,5.46 values that's a good start for aligning,1241.46,6.78 as a universal principle to adhere to,1244.82,7.62 but uh there's very much a Walled Garden,1248.24,6.9 effect going on here or an ivory Tower,1252.44,4.56 effect and what I mean by that is that,1255.14,4.26 this is this is a particular and a,1257.0,4.26 well-documented trend in Silicon Valley,1259.4,4.26 and it's not just open AI that does this,1261.26,4.2 it's literally every tech company on the,1263.66,5.16 west coast of America uh where they kind,1265.46,4.86 of believe that they are the smartest,1268.82,2.76 people in the world and that they are,1270.32,3.12 the only people in the world capable of,1271.58,4.2 solving this problem but the thing is is,1273.44,4.44 that egotistical belief prevents them,1275.78,4.38 from looking out the window and and,1277.88,4.38 getting the help of other experts and so,1280.16,3.899 I have a really great example from my,1282.26,4.62 last corporate job I was talking to a,1284.059,5.641 seasoned software architect someone that,1286.88,6.96 you would assume had a masterful command,1289.7,6.3 of the full Tech stack that goes into,1293.84,4.8 producing good software,1296.0,5.46 and so at one point he said we're gonna,1298.64,4.62 do we're going to automate literally,1301.46,4.02 everything you infrastructure guys,1303.26,4.14 aren't going to need to touch jack,1305.48,4.559 after this and I said okay does that,1307.4,6.0 include authentication firewalls backup,1310.039,5.461 power Does it include all this other,1313.4,3.96 stuff and he just kind of like you could,1315.5,4.86 see the 404 not found in his eyes he,1317.36,5.46 literally had no idea how much actually,1320.36,5.28 goes into the full Tech stack to make,1322.82,5.219 software work when he said everything,1325.64,6.18 his definition of quote everything was,1328.039,6.241 just the software just the code he,1331.82,4.56 didn't know anything about containers he,1334.28,3.779 didn't know anything about data centers,1336.38,3.12 he didn't know anything about cyber,1338.059,3.961 security and so my point here is and I'm,1339.5,5.039 not saying that open AI is this bad but,1342.02,4.68 they're still human and when you look at,1344.539,4.861 who's on the payroll of open AI they,1346.7,4.56 haven't hired a lot of public policy,1349.4,3.72 people they haven't hired a lot of,1351.26,3.6 philosophers in ethicists they haven't,1353.12,5.52 hired civil rights people uh and so when,1354.86,5.88 when they come up with these somewhat,1358.64,4.2 contrived ideas about aligning to human,1360.74,4.439 intention and aligning to human values,1362.84,4.56 all you have to do is is have a five,1365.179,3.901 minute conversation with a philosopher,1367.4,4.08 to realize that those are really garbage,1369.08,6.12 things to align to and so again,1371.48,6.48 you know a for initial effort but they,1375.2,4.5 really really need to look out the,1377.96,4.14 window and bring in more experts so here,1379.7,4.8 are some solutions one,1382.1,5.1 open AI really really really needs to,1384.5,6.179 add human rights as a core discipline in,1387.2,5.339 their research of not just alignment but,1390.679,4.38 also super alignment and the reason is,1392.539,4.76 because human rights is one,1395.059,4.5 well-established and well-researched and,1397.299,5.26 and two it is uh there's plenty of,1399.559,4.681 people that are going to be able to talk,1402.559,4.081 about how protecting human rights is,1404.24,4.26 really the ultimate goal of super,1406.64,4.019 alignment it's not aligning to what,1408.5,4.5 humans want or what humans say they want,1410.659,5.101 because any psychologist again another,1413.0,4.32 five-minute conversation with any,1415.76,3.419 psychologist will tell you yeah humans,1417.32,3.839 are absolutely unable to express what,1419.179,4.74 they truly want and truly need but human,1421.159,5.941 rights however the objective rights to,1423.919,5.461 create the safe environment that we all,1427.1,4.439 want to live in that is a conversation,1429.38,4.14 that you can actually have and that is,1431.539,3.781 while research from the perspective of,1433.52,4.8 Sociology psychology philosophy ethics,1435.32,7.58 public policy Game Theory so,1438.32,7.68 yeah also so anthropic also already,1442.9,5.139 figured this out they're getting closer,1446.0,4.02 I do have some issues with anthropics,1448.039,3.781 constitutional AI but it's moving in the,1450.02,3.12 right direction and the difference is,1451.82,4.08 that anthropic is listing out in those,1453.14,5.94 clear objective terms the values the The,1455.9,5.04 Guiding principles that they want their,1459.08,4.52 AI to align to so in this respect,1460.94,5.4 anthropic gets an a in in super,1463.6,4.36 alignment they're already moving in the,1466.34,4.5 right direction and open AI I believe is,1467.96,4.98 still moving in the wrong direction at,1470.84,4.14 least with the exception of of some of,1472.94,4.14 the the the tactics that they outlined,1474.98,3.96 in their their paper and again I want to,1477.08,4.26 reiterate the fact that open AI is going,1478.94,4.92 to deliberately create misaligned AI to,1481.34,4.5 see how it behaves and to see if they,1483.86,5.88 can detect it that is absolutely 100 A,1485.84,5.52 Plus at least on that section of the,1489.74,2.939 quiz,1491.36,6.78 but oecd EU the UN the White House all,1492.679,7.38 of these other agencies that have a lot,1498.14,3.659 of researchers and a lot of advisors,1500.059,4.681 including uh machine learning and AI,1501.799,6.0 advisors all talk about protecting human,1504.74,6.419 rights so why is it that a that open AI,1507.799,6.181 has not talked about protecting human,1511.159,6.481 rights in their AI alignment research,1513.98,5.699 that is very concerning to me and we'll,1517.64,3.12 come back to that at the end of the,1519.679,2.221 video,1520.76,4.019 another the other major criticism that I,1521.9,4.139 have for open AI is that they're,1524.779,4.321 continuing to ignore autonomous agents,1526.039,5.401 what they in their in their description,1529.1,4.679 they have explicitly stated that they,1531.44,5.16 never want to lose control of of the,1533.779,4.621 machine they believe that they will,1536.6,3.42 remain in control they believe that they,1538.4,4.56 can uh remain in control and this is a,1540.02,5.279 very dangerous assumption to make if you,1542.96,4.98 listen to Conor Leahy and Ellie azer the,1545.299,5.221 yukowski and literally dozens of other,1547.94,6.54 uh people out there uh they uh Robert,1550.52,6.36 Miles lots and lots of people say that,1554.48,4.679 this is a far harder problem to solve,1556.88,4.26 and in my opinion it is actually not,1559.159,4.38 possible to solve that so this is called,1561.14,3.84 the control problem or the courage,1563.539,3.541 ability problem which is basically can,1564.98,4.319 you correct the Ai No matter how smart,1567.08,4.14 it becomes or autonomous,1569.299,5.101 the thing is is if and there seems to be,1571.22,5.52 some consensus amongst people that yes,1574.4,5.159 AI can get to the point where you cannot,1576.74,5.16 control it so instead what we should do,1579.559,3.5 is is,1581.9,4.68 seek to shape it set it on a trajectory,1583.059,5.921 so that you don't need to control it now,1586.58,4.32 this is where I say that the fact that,1588.98,3.24 they're going to be you know creating,1590.9,3.36 red teaming AIS and internally red,1592.22,4.62 teaming tests and sandboxes and that,1594.26,5.399 sort of stuff I think open AI might,1596.84,4.86 ultimately come to this realization on,1599.659,4.201 their own I wish they would be thinking,1601.7,3.839 about this up front I wish that they,1603.86,3.6 would be if they had just mentioned,1605.539,4.26 autonomous agents the fact that they,1607.46,4.079 want to test it and to see if they can,1609.799,5.041 just just for the sake of argument I,1611.539,5.041 really wish that openai would say we're,1614.84,2.9 going to see if we can make,1616.58,3.36 intrinsically stable and trustworthy,1617.74,3.58 autonomous agents no matter how,1619.94,4.5 intelligent and independent they become,1621.32,5.04 the fact that they're not willing to,1624.44,3.359 test that that they're not even willing,1626.36,3.299 to say it is really,1627.799,3.721 alarming to me because I think that they,1629.659,4.441 should be pursuing literally every,1631.52,4.5 Avenue that they can,1634.1,3.48 so here's the solution,1636.02,3.539 one,1637.58,4.62 just go ahead and maybe throw out human,1639.559,4.381 intention as something to align to,1642.2,4.14 because human intention is garbage uh,1643.94,4.979 and maybe like I just said pivot the,1646.34,4.86 research goal to creating models and,1648.919,3.541 agents that are intrinsically,1651.2,4.5 trustworthy uh stable and benevolent,1652.46,4.62 um go ahead and continue with the red,1655.7,3.3 teaming that's good you know a plus,1657.08,3.12 there,1659.0,3.0 um but do more research into those,1660.2,3.479 Universal principles those guiding,1662.0,4.32 principles and try and create autonomous,1663.679,5.161 agents that will very deliberately,1666.32,4.32 preserve and promote those principles,1668.84,4.8 and adhere to them for all times aka the,1670.64,4.62 heuristic imperatives research that I've,1673.64,3.06 been doing oh and by the way I wrote a,1675.26,2.82 book about this and demonstrated all,1676.7,3.3 this and now I'm Not The Only One Look,1678.08,4.02 up the self-aligned paper by Sun at all,1680.0,4.62 where basically yes you can create,1682.1,6.959 models that will not only adhere to uh,1684.62,6.539 to higher principles but they will get,1689.059,4.98 better at those principles over time and,1691.159,4.681 here's the thing in the testing that I,1694.039,3.661 did with Foundation models I took,1695.84,4.199 Foundation models from unaligned to,1697.7,5.219 aligned with my core objective functions,1700.039,4.681 my heuristic imperatives that's,1702.919,3.661 relatively easy but the thing is is that,1704.72,3.6 the decisions they then start to make,1706.58,3.959 they will double down on those,1708.32,4.8 principles on protecting those values,1710.539,5.461 which is exactly what you want to in in,1713.12,5.46 terms of Game Theory you want them the,1716.0,5.4 AI to adopt a strategy and not deviate,1718.58,4.86 from that strategy that is the essence,1721.4,4.139 of the control problem that is the Core,1723.44,3.96 Essence of super alignment and this is,1725.539,3.12 what I've been working on for the last,1727.4,2.94 four years,1728.659,5.76 so for a quick recap open AI one major,1730.34,5.699 problem they're Reinventing the wheel in,1734.419,5.88 a few places namely with uh by by you,1736.039,6.961 know inventing alignment on human values,1740.299,5.281 and humans human intent intentions,1743.0,4.62 just look at United Nations look at,1745.58,3.54 anthropic even just look at what's,1747.62,4.74 trending on GitHub uh you know aligning,1749.12,6.179 to Human Rights is going to be a lot,1752.36,4.919 better and aligning to Universal,1755.299,3.421 principles is going to be a lot better,1757.279,4.201 than aligning to something as squishy as,1758.72,5.819 human values and human intentions again,1761.48,5.52 those are when you when you study the,1764.539,4.201 philosophy the morality the ethics the,1767.0,4.02 information Theory the psychology of it,1768.74,4.919 those are absolutely 100 garbage things,1771.02,5.34 to align to number two open AI is,1773.659,4.62 failing to understand those basic fields,1776.36,4.14 of morality philosophy and ethics human,1778.279,5.221 rights are incredibly well researched uh,1780.5,5.7 don't reinvent the wheel and the fact,1783.5,4.5 that human rights have not even entered,1786.2,4.68 their lexicon is really really deeply,1788.0,6.12 disturbing in it I don't particularly I,1790.88,4.919 don't personally read it this way but I,1794.12,3.059 could imagine someone very cynical,1795.799,4.321 saying maybe open AI doesn't actually,1797.179,4.801 value human rights maybe they don't care,1800.12,3.24 about human rights maybe they don't,1801.98,3.36 believe in human rights the fact that,1803.36,4.199 they're talking about safety of the,1805.34,4.319 human race and not talking about human,1807.559,4.5 rights when you look at the note that's,1809.659,6.181 missing that is deeply alarming and so,1812.059,5.701 then finally they are still making a lot,1815.84,3.42 of assumptions about courage ability,1817.76,3.18 which is why I think that they're not,1819.26,3.6 talking about autonomous agents even,1820.94,3.479 though the fact that lots and lots of,1822.86,3.72 people are going as fast as they can to,1824.419,4.321 make autonomous agents and then in the,1826.58,4.92 grand scheme of things when you uh think,1828.74,4.38 about the competitive landscape that's,1831.5,3.72 going to exist the autonomous agents,1833.12,3.96 that are trustworthy are going to,1835.22,3.6 trounce the autonomous or the the,1837.08,3.66 non-autonomous agents that are waiting,1838.82,3.78 for human instruction so what we really,1840.74,4.14 need is we need to be working on,1842.6,4.199 creating autonomous agents that will,1844.88,4.14 Advocate on our behalf and that are,1846.799,4.441 going to be the strongest and best and,1849.02,4.62 fastest in the world because that is how,1851.24,6.179 we that is uh one component of solving,1853.64,5.519 the control problem of solving alignment,1857.419,4.441 is the competition between these agents,1859.159,5.161 so with all that being said I hope you,1861.86,3.84 got a lot out of this thanks for,1864.32,4.28 watching cheers,1865.7,2.9