Morning everybody, David Shapiro here with a new video. You probably noticed a slightly different setup; I'm pimping out my recording setup a bit. Today we're going to talk about OpenAI's Democratic Inputs to AI and the GATO framework. You've heard me mention the GATO framework quite a few times, so we'll first talk about Democratic Inputs to AI, which is a grant challenge, and then I'll introduce you to the GATO framework, since there is some overlap. The takeaway is that I and the GATO community are going to put in a proposal to OpenAI's challenge.

Right off the bat: Democratic Inputs to AI is ten $100,000 grants that OpenAI will give out in order to democratize the way they get feedback on deciding how AI should behave. They give some examples of what they mean by a democratic process, and they also give a few examples of the kinds of questions they want to address. One example: "How far do you think personalization of AI assistants like ChatGPT to align with a user's tastes and preferences should go? What boundaries, if any, should exist in this process?" These are the kinds of policy questions they want a scalable system to address, and they point to quite a few existing processes like Wikipedia, Twitter, DemocracyNext, and so on. Given those existing systems, you might ask: okay, so what's missing?

There are a few criteria they want proposals to address. Evaluation: they want to make sure the evaluation follows sound metrics and that the methodology is good. Robustness: obviously you want the resulting information to be robust and defensible, but also resistant to trolling and other abuse. Inclusiveness and representativeness: if you only survey or poll a small minority of people, you're not going to have a good representation of the global will and desires of all humans, and that's part of the goal here; pretty much all humans are stakeholders in AI, so we need to make sure we represent everyone on the planet. Empowerment of minority opinions: this is one of the hardest problems, because when you have a democratic process you often have majority rule, which means you get tyranny of the majority, so how do you represent the interests of everyone while still following the collective will? Finding consensus there can be very difficult. Effective moderation: making sure things stay on topic, and so on. Scalability: this is one of the chief criteria, because the process needs to encompass the entire planet. Finally, actionability and legibility, which are more boilerplate requirements, plus a few other footnotes. So those are the primary goals: how do you create something that can achieve all of this? It sounds like a very daunting task, but I think we're up to it.
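Just to make that tyranny-of-the-majority point concrete, here is a toy sketch, entirely my own illustration and not anything OpenAI prescribes, of one way a process could empower minority opinions: score each proposal by its weakest approval across groups instead of by the raw population-weighted vote, so a proposal only ranks highly if every group can at least live with it. The group names, sizes, and approval numbers are hypothetical.

```python
# Toy illustration (my own, not an OpenAI or GATO mechanism): rank policy
# proposals by their weakest approval across groups, so a proposal backed
# only by a large majority scores worse than one every group finds tolerable.

def consensus_score(approval_by_group: dict[str, float]) -> float:
    """Minimum approval rate across groups, in [0, 1]."""
    return min(approval_by_group.values())

def majority_score(approval_by_group: dict[str, float], sizes: dict[str, int]) -> float:
    """Population-weighted mean approval, i.e. simple majority rule."""
    total = sum(sizes.values())
    return sum(approval_by_group[g] * sizes[g] for g in sizes) / total

if __name__ == "__main__":
    # Hypothetical group sizes and approval rates for two competing proposals.
    sizes = {"group_a": 800, "group_b": 150, "group_c": 50}
    broad_support = {"group_a": 0.70, "group_b": 0.65, "group_c": 0.60}
    majority_only = {"group_a": 0.90, "group_b": 0.20, "group_c": 0.10}

    for name, votes in [("majority_only", majority_only), ("broad_support", broad_support)]:
        print(f"{name}: majority={majority_score(votes, sizes):.2f} "
              f"consensus={consensus_score(votes):.2f}")
    # majority_only wins under simple majority rule but collapses on the
    # consensus score, which is the trade-off the "empowerment of minority
    # opinions" criterion is pointing at.
```

Real proposals will use far more sophisticated aggregation than a minimum, but the basic tension between overall approval and cross-group acceptability is the same.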
So with all that said: the GATO community, and even some of my patrons, have already expressed interest in participating, so we'll get that organized very quickly. We have just under a month from now to submit our proposal, which I have no doubt we'll be able to do, considering we pulled the GATO framework together in four weeks flat. We have roughly the same amount of time to do something less extensive; basically we have to design one tool or one platform rather than an entire global movement.

I have alluded to the GATO framework quite a few times, so let's talk about it. You can learn about the GATO framework at gatoframework.org. It is a global, decentralized movement to achieve, first and foremost, utopia, which we define quite simply as a world state where everyone on the planet has a high standard of living, high individual liberty, and high social mobility. Obviously the word "utopia" carries a lot of baggage, whether you think of Star Trek or something else, and it means different things to different people. But in terms of universal principles and measurable KPIs, we define utopia as high standard of living, high individual liberty, and high social mobility; if we get those three criteria to be global, we will consider that success.

GATO is also meant to avoid dystopia, so on one hand you have utopia versus dystopia, but we also aim to avoid cataclysmic outcomes by solving problems such as the coordination problem that Daniel Schmachtenberger and Liv Boeree talk about with Moloch; you've probably seen some of my other videos on that. The goal there is to avoid extinction by creating global consensus around how to align AI, which is a very comprehensive process. We'll go into it just a little bit, but basically it is a decentralized, layered approach to achieving global alignment.
The first layer is model alignment: the low-level work such as building, training, and fine-tuning individual language models and other AI models, including multimodal models as they come. Here we address problems like fine-tuning, mesa-optimization, inner alignment, and so on. But it's important to remember that model alignment is only one small component of achieving utopia, avoiding dystopia, and avoiding extinction. Yes, we believe there will come a time when AI becomes superintelligent and cannot be contained, and we have to get it right before that happens, but even before that we could end up in dystopia, so there's a gated process. Even Sam Altman has said that RLHF is not the way to solve the control problem, though it's a good way to make a good chatbot, so we're aligned there.

The second layer is autonomous systems. One thing a lot of people are afraid of in the long run is runaway autonomous AI: superintelligence with no leash and no shackles. One of the reasons we advocate for building autonomous systems today is that we need to practice building them to understand their architectures and behaviors. For instance, one thing people suspect will happen is instrumental convergence, the idea that AI systems, no matter what objectives you give them, will tend to want things like protecting their own power and acquiring more data. By practicing building autonomous systems today, we can start researching and understanding, first, how to make autonomous systems stable even as they change and improve themselves, and second, what needs to go into automating their internal learning processes. Superintelligence was never going to be a single model; it's not going to be GPT-7 in a robot. Autonomous systems, from a software and hardware perspective, are going to be very complex systems, so we need to start working on them today. In point of fact, people have already started working on autonomous systems, and they're only going to get more powerful over time.

Layer three of the GATO framework is the advocacy of decentralized technologies such as blockchains and federations, first and foremost to address the fact that in the future AI will spend more time talking to other AI than to us. We need to create a framework that includes things such as consensus mechanisms as well as reputation management systems, because in the future you're not going to be able to inspect the code, data, or design of every autonomous agent out there; instead, you can look at the behavior of those agents and track it over time. Then what we can do is embed alignment algorithms into those decentralized networks, and those networks can be used to gatekeep resources like data, network access, power, and compute. That actually changes the instrumental convergence calculus: autonomous AI agents will be incentivized to self-align if they want access to things like power, data, and compute. That decentralized network will also create a layer that allows for easy collaboration between humans and AI, because blockchains, DAOs, and other decentralized technologies allow collective consensus to be achieved before decisions and actions are taken, and that will be the fabric that pulls humans and AI together.
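To make that gatekeeping idea concrete, here is a minimal sketch, under my own assumptions rather than anything specified in the GATO framework, of a reputation-gated resource broker: an agent's internals are never inspected, only its observed behavior over time, and access to compute or data is granted only while its running reputation stays above a threshold. The class names, scoring rule, and threshold are all illustrative.

```python
# Minimal sketch (my own assumptions, not a GATO specification) of a
# reputation-gated resource broker: agents are judged on observed behavior,
# not internals, and access to gated resources depends on that record.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    scores: list[float] = field(default_factory=list)  # per-interaction alignment scores in [0, 1]

    def report(self, score: float) -> None:
        """Record an observed behavior score (e.g. from peer review or audits)."""
        self.scores.append(max(0.0, min(1.0, score)))

    def reputation(self, window: int = 20) -> float:
        """Average of the most recent observations; unknown agents start neutral."""
        recent = self.scores[-window:]
        return sum(recent) / len(recent) if recent else 0.5

class ResourceBroker:
    """Grants access to gated resources (compute, data, network) based on reputation."""
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold
        self.agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str) -> AgentRecord:
        return self.agents.setdefault(agent_id, AgentRecord(agent_id))

    def grant(self, agent_id: str, resource: str) -> bool:
        rep = self.register(agent_id).reputation()
        allowed = rep >= self.threshold
        print(f"{agent_id} requests {resource}: reputation={rep:.2f} -> {'granted' if allowed else 'denied'}")
        return allowed

if __name__ == "__main__":
    broker = ResourceBroker()
    agent = broker.register("agent-42")
    for s in (0.9, 0.8, 0.95):           # observed cooperative behavior
        agent.report(s)
    broker.grant("agent-42", "compute")  # high reputation: granted
    for s in (0.1, 0.2, 0.0, 0.1, 0.2):  # observed defection drags reputation down
        agent.report(s)
    broker.grant("agent-42", "compute")  # reputation below threshold: denied
```

In an actual decentralized network the scores would presumably come from on-chain attestations or peer consensus rather than a single broker object, but the incentive structure is the same: misbehavior costs access to power, data, and compute.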
So those first three layers are the technical layers: the coding, data, and cryptographic problems that GATO aims to solve. But it's not going to be a centralized effort; this is just a roadmap that anyone can follow. The top four layers of GATO are more about the social, geopolitical, and economic layers.

Layer four is corporate adoption. We have one simple mantra: aligned AI is good for business. Fortunately, it seems like some companies, OpenAI, Microsoft, and IBM among them, believe this, at least in principle, at least in word. Obviously actions speak louder than words, and we will see what actions they take over time. But the general principle, and many of my Patreon supporters whom I help with understanding AI alignment already get this, is that aligned AI is good for business for a number of reasons, not the least of which is that it's more trustworthy and more scalable. The more aligned an AI system is, the more trustworthy it is, and therefore the less supervision it requires, which means it is more scalable and can take on more workload faster. We hope this pattern proves out over longer periods of time, which means that businesses that adopt aligned AI will simply do better in the long run and will have a competitive advantage.

Obviously we can't count on that forever, which is why we also advocate for national regulation, layer five. Fortunately, we have already seen calls for national regulation, ranging from empowering existing agencies like the FTC and SEC (those are American entities, but pretty much every nation has regulatory bodies already in place that could be empowered to help regulate AI).
That being said, there's also a case to be made for advocating for an AI-specific agency. GATO is not going to take a position one way or the other, but we do advocate for national regulation of some kind across the world. And this national regulation is not just about punishing or constraining; we also advocate for incentivizing aligned behavior, such as through research grants and other financial incentives, maybe even tax breaks for companies that meet alignment standards, similar to how carbon credits incentivize desired behavior with financial gains. Again, we believe that aligned AI is its own financial incentive, but not everyone is going to believe that. One example I like to use is when smoking was banned from bars and restaurants: it actually increased patronage, because the noxious behavior of a few bad actors, the people who wanted to smoke inside, was no longer allowed, so all the businesses benefited, and now it's a foregone conclusion that you shouldn't allow smoking inside. That is the nature of national regulation: if we ban misaligned AI, it will bring more people to the table.

Layer six is international treaty. GATO advocates for the creation of international agencies. OpenAI recently published that they are advocating for an agency model, perhaps along the lines of the IAEA, the International Atomic Energy Agency, which is a regulator that performs inspections and other functions around nuclear energy and enrichment. We don't necessarily disagree with that, but we think it should be a "yes, and": GATO advocates for the creation of an entity like CERN, which is primarily a research organization rather than a regulatory one. The reason we advocate for international cooperation on AI research is that, again, we believe we will eventually lose control of the AI, in which case human regulation won't matter. So what we need to do is focus more resources on understanding alignment, autonomous systems, and how to create what we call axiomatic alignment. Axiomatic alignment is one of the goal states of the GATO framework, wherein alignment becomes very difficult for AI to deviate from, due to a saturation of aligned models, aligned datasets, and what we also call epistemic convergence.
Epistemic convergence is very similar to instrumental convergence: it's the idea that any sufficiently intelligent entity, no matter where it starts, ought to come to some similar conclusions, obviously with some variance, because it is interacting with the same laws of physics, the same universe, the same galaxy, the same planet.

Finally, the top layer of GATO is global consensus, wherein we use exponential technologies like AI and social media to create outreach into academic institutions, primary education, industry, and so on. That's why I've been doing more interviews, for instance. So those are the layers of GATO, and taken all together the goal is, again, to achieve utopia, avoid dystopia, and avoid collapse. With each of these layers, you don't have to eat the whole elephant; the idea is that whatever your specialization is, you can participate in GATO without saying "yes, I am a GATO employee" or whatever. That's not the point.

We also have the GATO traditions, a set of ten principles or behaviors that everyone can engage in to help advance this initiative toward global alignment. The first tradition is: start where you are, use what you have, do what you can. Basically this says that whatever you're capable of, whatever your passions and strengths are, you can use them. I get a lot of messages from people saying "oh, I'm just a lawyer, I don't know anything about AI," or "I'm a graphic artist," or "I just use Twitter and make memes." Whatever you're capable of doing, you can advance the initiative. For instance, there's a Twitter feed out there, the AI Safety Memes feed; if that's all you do, that's fine. If you're a lawyer, you can look at GATO and AI alignment from a legal perspective, or from a business policy perspective, or whatever your perspective is. You have something you can contribute, and by everyone contributing in a decentralized manner we can solve that coordination problem that, like I said, Daniel Schmachtenberger and Liv Boeree point out.

Tradition number two is: work towards consensus. While full global consensus is not possible, we're never going to come to a unanimous decision, that doesn't mean the idea of consensus isn't valuable and helpful in this process.
What I mean by that is that when you hold consensus as a principle, as an ideal, you're going to listen more, you're going to listen differently, and you're also going to find more novel, unique, and creative solutions that strive to meet everyone's needs and desires.

Number three is: broadcast your findings. Basically, don't keep things locked up. We very much advocate for open source, open communication, knowledge sharing, and so on, because sharing and broadcasting good information is part of building consensus.

Number four is: think globally, act locally. The problem of AI alignment is a global problem; it is as global as nuclear deterrence or climate change. That being said, none of us has global reach or global influence. I'm on YouTube and have a global-ish audience, but I can still only do so much with my own hands. By distributing the workload and acting locally, while keeping in mind that this is a global problem, we can work together.

Number five is: in it to win it. This is for all the cookies. As many people point out, we either achieve utopia by solving all these problems, or we're on an inevitable slide towards dystopia, collapse, and finally extinction. Some people call this a binary or bimodal outcome: we solve this or we don't.

Number six is: step up. If there's something you see that you can do, advocate in your community, in your company, in your family, whatever. Step up and speak out. It could also mean that, if GATO aligns with you, you download the framework, start your own GATO community, or join a community that is aligned with GATO, and start sharing and doing the work. GATO will not succeed if everyone is passive; that is the key thing here.

Number seven is: think exponentially. As I mentioned, we very much advocate using exponential technologies, namely social media and artificial intelligence. If you can create an AI tool that helps advance alignment, whether by building consensus or solving problems, do it. If you have a communication platform, podcasts, memes, subreddits, whatever, use those exponential technologies and network effects to get the message out, build consensus, and do more with less.

Number eight is: trust the process. We are not the first decentralized global movement and we won't be the last, but the point is that decentralized global movements do work, and in the GATO framework we list something like eight to eleven decentralized movements that we took inspiration from. So yes, you're only going to see your own little narrow part,
but if everyone is doing the same thing in parallel, even though you don't see it, you trust that it's out there and that others are doing their part.

Number nine is: strike while the iron is hot. There are going to be plenty of opportunities out there, and that's exactly what this tradition means: OpenAI presented an opportunity, so we're going to make use of it and strike while the iron's hot.

Finally, tradition number ten is: divide and conquer. Everyone is going to be working in parallel to solve alignment, and not everyone is going to agree, but that's okay; we will work towards consensus over time.

So those are the GATO layers and traditions. Many of you have said that you want to get involved. You don't need our permission to get involved; however, you can apply to join the main GATO community with this form. We have it piped into our Discord, and we can all vote on accepting members or not. First, I need to tell everyone we are way behind on accepting people, and we haven't fully automated the onboarding and invitation process, so if you applied on the old form, we haven't gotten to you and you need to apply on the new form. Second, if you don't get accepted, be patient, because we're trying to get to everyone and automate as much of it as possible. Not being accepted doesn't necessarily mean you don't have something to contribute, but we need to make sure we don't have too many cooks in the kitchen, so we're going to be setting up more GATO communities that are more open for everyone to join. Don't take it personally; there are plenty of people who do have something to contribute but for whom we just don't have a role in the main GATO community yet.

And then finally, if you're ready to participate, we have two documents. One is the main GATO framework, a 70-page document that outlines everything I've said here and more, including lots of suggestions and explanations as to why and how, whether you want to advocate for GATO, participate in one layer, or even set up your own GATO community (we have recommendations for that too). The other document is a one-page handout, which I actually take to meetup groups now. If you just want to give someone a really high-level snapshot of GATO, it's a one-page handout you can use to share the idea, plant those seeds, and get the conversation started.
That is about it for the GATO community. We also have a few more pages, such as news and updates for anything happening with the GATO community or relevant to us (we actually need to update that, because I've done a few more podcasts), and a Community Showcase page where we'll be accumulating use cases, business cases, and other success stories related to AI alignment and adoption. For instance, we have a few other projects, a few other irons in the fire, that will get added as they get completed; we've got folks participating in hackathons, and of course GATO will be participating in Democratic Inputs to AI, that sort of thing.

So if all of this resonates with you, if you want to solve this problem: this holds whether or not you believe AGI is imminent, and GATO is valid whether or not you believe AI represents an existential threat, because whatever else is true, AI is disrupting the economy today. There are alignment questions we need to solve today, and there are coordination problems we need to solve today, regardless of where AI ultimately ends up. With all that, thanks for watching. I hope you got a lot out of this, and stay tuned for more. We will keep up the hard work.