source (string, lengths 26 to 381) | text (string, lengths 53 to 1.64M)
---|---
https://www.databricks.com/company/partners/cloud-data-migration-accenture-databricks | Cloud Data Migration by Accenture - Databricks
Brickbuilder Solution: Cloud Data Migration by Accenture
A migration solution developed by Accenture and powered by the Databricks Lakehouse Platform.

Less guesswork, more value

Accenture’s Cloud Data Migration helps you navigate any complexity, from building landing zones in the Cloud Continuum to regulating data sovereignty. Accenture works with you to determine the right cloud strategy, operating model, roadmap and additional ecosystem partners, then helps you accelerate a migration and modernization to the cloud that is secure, cost-effective and agile. Accenture’s comprehensive cloud migration framework combines industrialized capabilities with exclusive preconfigured, industry-specific tools, methods and automation across all cloud models, delivered through proven methods.

With Accenture’s Cloud Data Migration Service, you will benefit from:
- Increased innovation, agility and flexibility
- Relief from rising resource demands and better consumption management
- Reduced costs, immediate business results and cloud scalability

Deliver AI innovation faster with solution accelerators for popular industry use cases. See our full library of solutions.

Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121

© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
https://www.databricks.com/kr/discover/beacons | Beacons Hub Page | Databricks
Databricks Beacons Program

The Databricks Beacons program is our way to thank and recognize the community members, data scientists, data engineers, developers and open source enthusiasts who go above and beyond to uplift the data and AI community. Whether they are speaking at conferences, leading workshops, teaching, mentoring, blogging, writing books, creating tutorials, offering support in forums or organizing meetups, they inspire others and encourage knowledge sharing, all while helping to solve tough data problems.

Meet the Databricks Beacons

Beacons share their passion and technical expertise with audiences around the world. They are contributors to a variety of open source projects, including Apache Spark™, Delta Lake, MLflow and others. Don’t hesitate to reach out to them on social media to see what they’re working on.

- Adi Polak (Israel): Senior Software Engineer and Developer Advocate in the Azure Engineering organization at Microsoft.
- Bartosz Konieczny (France): Data Engineering Consultant and instructor.
- R. Tyler Croy (United States): Director of Platform Engineering at Scribd; an open source developer for over 14 years.
- Kent Yao (China): Apache Spark™ committer and staff software engineer at NetEase.
- Kyle Hamilton (Ireland): Chief Innovation and Data Officer at iQ4 and lecturer at the University of California, Berkeley.
- Jacek Laskowski (Poland): IT freelancer specializing in Apache Spark™, Delta Lake and Apache Kafka.
- Scott Haines (United States): Distinguished Software Engineer at Nike, where he helps drive Apache Spark™ adoption.
- Simon Whiteley (United Kingdom): Director of Engineering at Advancing Analytics, Microsoft Data Platform MVP and Data + AI Summit speaker.
- Geeta Chauhan (United States): leads AI/PyTorch Partnership Engineering at Facebook AI, focusing on strategic initiatives.
- Lorenz Walthert (Switzerland): data scientist, MLflow contributor, climate activist and GSoC participant.
- Yitao Li (Canada): software engineer at SafeGraph and current maintainer of sparklyr, an R interface for Apache Spark™.
- Maciej Szymkiewicz (Poland): Apache Spark™ committer, available for mentoring and consulting.
- Takeshi Yamamuro (Japan): software engineer, Apache Spark™ committer and PMC member at NTT, Inc., working mainly on Spark SQL.

Membership Criteria

Beacons are first and foremost practitioners in the data and AI community whose technology focus includes MLflow, Delta Lake, Apache Spark™, Databricks and related ecosystem technologies. Beacons actively build others up throughout the year by teaching, blogging, speaking, mentoring, organizing meetups, creating content, answering questions on forums and more.

Program Benefits

- Peer networking and sharing through a private Slack channel
- Access to Databricks and OSS subject matter experts
- Recognition on the Databricks website and social channels
- Custom swag
- In the future, sponsored travel and lodging to attend select Databricks events
- Sponsorship and swag for meetups

Nominate a Peer

We’d love to hear from you! Tell us who has made continued outstanding contributions to the data and AI community. Candidates must be nominated by someone in the community, and everyone, including customers, partners, Databricks employees or even a current Beacon, is welcome to submit a nomination. Applications are reviewed on a rolling basis, and membership is valid for one year.
https://www.databricks.com/ce-termsofuse | Databricks Community Edition | Databricks
Databricks Community Edition
Community Edition Terms of Service

Welcome to Databricks Community Edition! We are pleased to provide Databricks Community Edition (the “Community Edition Services”) at no charge to those interested in learning and exploring the use of Databricks’ cloud-based data analytics platform, which enables data analysts and others to easily tap the power of Apache Spark and Databricks’ other proprietary functionality. Your use of the Community Edition Services is governed by these Terms of Service, including the Arbitration Agreement (the “Terms”).
If you are using the Community Edition Services on behalf of an organization, you represent and warrant that you are authorized to bind that entity to these Terms, in which case “you” or “your” will refer to that entity (otherwise, such terms refer to you as an individual). If you do not have authority to bind your entity or do not agree with these Terms, you must not accept these Terms and may not use the Community Edition Services. The effective date of these Terms is the earliest to occur of the date you explicitly accept these Terms, or the date you first access or use the Community Edition Services.

BY CLICKING TO ACCEPT THESE TERMS OR USING THE COMMUNITY EDITION SERVICES, YOU ARE REPRESENTING THAT YOU HAVE CAREFULLY READ, UNDERSTOOD AND AGREE TO BE BOUND BY THESE TERMS, INCLUDING WITHOUT LIMITATION THE ACCEPTABLE USE POLICY, THE SECTION TITLED “YOUR DATA AND USE OF COMMUNITY EDITION - RESTRICTIONS APPLY” AND THE SECTION REGARDING MANDATORY, BINDING ARBITRATION OF DISPUTES ENTITLED “DISPUTES; BINDING ARBITRATION AND CLASS ACTION WAIVER”.

YOUR DATA AND USE OF COMMUNITY EDITION – RESTRICTIONS APPLY

There Are Strict Limits On What Your Data Can Include. In order for us to provide the Community Edition Services to you at no charge, we have implemented certain cost-saving elements within the architecture of the Community Edition Services including, among other things, the use of a multi-tenant environment with limited data security protections. In addition, Databricks personnel have generally unrestricted access to your account (“Your Account”) and any data used or exposed to the Community Edition Services for the purposes of monitoring and improving the quality of the service. Therefore, you should have no expectation of privacy regarding the data you submit or otherwise make available in any way to the Community Edition Services (collectively, “Your Data”) or the notebooks you create within or upload to the Community Edition Services (“Your Notebooks”, and collectively with Your Data, “Your Content”), and you must limit Your Content to only that data and other information that you can afford to lose, or have accessed, obtained or disseminated by other parties.

Without limiting the foregoing, under no circumstances are you permitted to use with or make available to the Community Edition Services (such data, “Prohibited Data”):

- any data for which you do not have all rights, power and authority necessary for its collection, use and processing as contemplated by this Agreement;
- any data with respect to which your use and provision to Databricks pursuant to this Agreement would breach any agreement between you and any third party;
- any data that includes pornography, incitements to violence, terrorism or other wrongdoing, or obscene, illicit or deceptive materials of any kind;
- any data with respect to which its usage as contemplated herein would violate any applicable local, state, federal or other laws, regulations, orders or rules, including without limitation any privacy laws;
- any (w) bank, credit card or other financial account numbers or login credentials, (x) social security, tax, driver’s license or other government-issued identification numbers, (y) health information identifiable to a particular individual, or (z) any data that would constitute “special categories of data,” “sensitive personal data,” or any similar concept under applicable law; or
- any data that is prohibited by the Acceptable Use Policy.

You Must Protect Access to Your Account and to Your Content. You are responsible for safeguarding your password and you must make sure no one else has access to it.
Additionally, in order to facilitate the sharing and widespread use of the Community Edition Services, we enable you, at your discretion, to share with others access to Your Content. You bear sole responsibility for protecting access to Your Content and for any and all liabilities that may result from the misuse of any sharing privileges granted by you to others. You agree and acknowledge that Databricks has a passive role in the transmission, reception and use of Your Content, and Databricks does not take any initiative in the transmission, reception, or use of Your Content. Moreover, as between you and Databricks, you agree and acknowledge that you are solely responsible for your use of Your Content and that Databricks cannot supervise, control, direct, choose, verify, investigate, or evaluate Your Content or your actions with respect to Your Content that you transmit or receive using the Community Edition Services. You acknowledge that we may (but are not obligated to) remove or disable access to any of Your Content, or interrupt any or all services, at any time at our own discretion. You understand and acknowledge that we have the right (but no obligation) to do so if we believe, or are notified, that you have breached any provision of this Agreement (including copyright breach), or if we discontinue or restrict the service that enables you to transmit or receive Your Content.

Limits Apply to How You Can Use the Community Edition Services. You agree that your use of the Community Edition Services is subject to the Acceptable Use Policy.

DATABRICKS’ LEGAL PROTECTIONS & OTHER PROVISIONS

Databricks Intellectual Property Rights. The Community Edition Services are protected in various ways by copyright, trademark, and other laws of the United States and other countries. These Terms don’t grant you any rights to use Databricks’ intellectual property, including trademarks, logos and other brand features, except those rights necessary for you to use the Community Edition Services as contemplated under these Terms. Databricks welcomes your feedback, but please note that we may use your comments and suggestions freely to improve the Community Edition Services or any of our other products or services, and accordingly you hereby grant Databricks a perpetual, irrevocable, non-exclusive, worldwide, fully-paid, sub-licensable, assignable license to incorporate into the Community Edition Services or otherwise use any feedback Databricks receives from you.

The Community Edition Services Are Provided “As Is” With No Warranty. Databricks cannot provide guarantees regarding the Community Edition Services. TO THE FULLEST EXTENT PERMITTED BY LAW, DATABRICKS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, ABOUT THE SERVICES, WHICH ARE PROVIDED “AS IS.” WE DISCLAIM ALL WARRANTIES OF ANY KIND, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, NON-INTERRUPTION, ACCURACY OR DATA SECURITY. Some jurisdictions don’t allow certain of these disclaimers, so they may not apply to you.

We Require Your Indemnification. You understand and agree that Databricks bears no liability whatsoever in the event you violate the restrictions and obligations imposed by these Terms regarding your use of the Community Edition Services or for the loss of, or unauthorized access to, Your Data, and that Databricks’ willingness and ability to provide access to the Community Edition Services at no charge is contingent upon this understanding, and upon your accepting and adhering to all other provisions of these Terms. You agree to indemnify, defend and hold harmless each of Databricks and its investors, directors, officers, employees, representatives and affiliates from any claims, costs, damages, liabilities or expenses (including reasonable attorneys’ fees) arising out of any third party claim alleging that Your Data or your use of our services infringes the rights of, or has caused harm to, any party, or violates any law or regulation.

Limitation on Liability. TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL DATABRICKS BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, EXEMPLARY OR CONSEQUENTIAL DAMAGES OR ANY LOSS OF USE, DATA, BUSINESS OR PROFITS, REGARDLESS OF LEGAL THEORY, REGARDLESS OF WHETHER DATABRICKS HAS BEEN WARNED OF THE POSSIBILITY OF SUCH DAMAGES, AND EVEN IF A REMEDY FAILS OF ITS ESSENTIAL PURPOSE. ADDITIONALLY, DATABRICKS’ AGGREGATE LIABILITY TO YOU FOR ALL CLAIMS RELATING TO THE SERVICES SHALL NOT EXCEED THE GREATER OF THE TOTAL OF ANY AMOUNTS YOU MAY HAVE PAID US IN FEES FOR ANY SERVICE IN THE SIX MONTHS IMMEDIATELY PRIOR TO THE EVENT FIRST GIVING RISE TO ANY SUCH LIABILITY OR $500 (FIVE HUNDRED DOLLARS). THE FOREGOING LIMITATIONS AND EXCLUSIONS SHALL NOT APPLY WITH RESPECT TO ANY LIABILITY ARISING UNDER FRAUD, FRAUDULENT MISREPRESENTATION, GROSS NEGLIGENCE, OR ANY OTHER LIABILITY THAT CANNOT BE LIMITED OR EXCLUDED BY LAW. Some jurisdictions don’t allow the types of limitations in this paragraph, so they may not apply to you. IN THESE JURISDICTIONS, EACH PARTY’S LIABILITY WILL BE FURTHER LIMITED TO THE GREATEST EXTENT PERMITTED BY LAW. You agree that this limitation of liability section is intended to allocate the risks between the parties, and that but for this limitation of liability, Databricks would not make available the Community Edition Services.

Entire Agreement; No Third Party Rights.
These Terms constitute the entire agreement between you and Databricks concerning the Community Edition Services, and these Terms create no third party beneficiary rights.

Termination, Modification, Waiver & Assignment. Either of us may suspend or terminate your use of the Community Edition Services or delete Your Account or Your Content at any time and for any reason (including without limitation for any suspected violations of the Acceptable Use Policy); however, obligations of these Terms that by their nature should survive termination shall so survive. In addition, we may revise these Terms from time to time, and will always post the most current version on our website. If we elect to terminate your access to the Community Edition Services or delete Your Account or Your Content, or if a revision of these Terms meaningfully reduces your rights, we will make a reasonable attempt to notify you (by, for example, sending a message to the email address associated with your account or posting for a reasonable time period a message to the login page of the Community Edition Services) unless Databricks deems it necessary to suspend or terminate Your Account without notice. By continuing to use or access the Community Edition Services after the revisions come into effect, you agree to be bound by the revised Terms. Databricks' failure to enforce a provision of these Terms is not a waiver of its right to do so later. If a provision of these Terms is found unenforceable, the remaining provisions of the Terms will remain in full effect and an enforceable term will be substituted reflecting our intent as closely as possible, provided that questions of unenforceability regarding the Arbitration Agreement shall be resolved according to the Severability Section of the Arbitration Agreement. You may not assign or transfer any of your rights under these Terms, and any such attempt will be void. Databricks may assign these Terms and/or its rights under these Terms to any of its affiliates or to any successor in interest.

DISPUTES; BINDING ARBITRATION AND CLASS ACTION WAIVER

Informal Resolution. You agree with us that, if either of us has concerns, we must first work together to resolve any dispute informally without resorting to legal action. You agree to contact us at [email protected] in the event you have a dispute prior to bringing a formal claim against Databricks. If the dispute is not resolved within 30 calendar days from the notice date, either of us may bring a claim subject to the procedures set forth below. You and Databricks agree to the jurisdiction of the Northern District of California to resolve any dispute, claim, or controversy that relates to or arises in connection with these Terms (and any non-contractual disputes/claims relating to or arising in connection with them) and is not subject to mandatory arbitration as set forth below (the “Arbitration Agreement”). Any disputes not subject to mandatory arbitration are subject to the laws of the state of California, without regard to choice or conflicts of law principles.

Arbitration Agreement. If you are located within the United States, you and Databricks agree that any dispute, claim, or controversy between you and Databricks arising in connection with or relating in any way to these Terms or to your relationship with Databricks as a user of the Community Edition Services (whether based in contract, tort, statute, fraud, misrepresentation, or any other legal theory, and whether the claims arise during or after the termination of these Terms) will be determined by mandatory binding individual (not class) arbitration. You and Databricks further agree that the arbitrator shall have the exclusive power to rule on his or her own jurisdiction, including any objections with respect to the existence, scope or validity of the Arbitration Agreement or to the arbitrability of any claim or counterclaim. The Arbitration Agreement will survive termination of the Terms. You and Databricks agree that the Federal Arbitration Act applies and governs the interpretation and enforcement of the Arbitration Agreement (despite the choice of law provision above).

Exceptions to Arbitration. Notwithstanding the prior clause, you and Databricks both agree that nothing in this Arbitration Agreement will be deemed to waive, preclude, or otherwise limit either of our rights, at any time, to (1) bring an individual action in a U.S. small claims court or (2) bring an individual action seeking only temporary or preliminary individualized injunctive relief in a court of law, pending a final ruling from the arbitrator. In addition, this Arbitration Agreement doesn’t stop you or us from bringing issues to the attention of federal, state, or local agencies. Such agencies can, if the law allows, seek relief against us on your behalf (or vice versa).

Prohibition of Class and Representative Actions and Non-Individualized Relief. You and Databricks agree that each of us may bring claims against the other only on an individual basis and not as a plaintiff or class member in any purported class or representative action or proceeding. Unless both you and Databricks agree otherwise, the arbitrator(s) may not consolidate or join more than one person’s or party’s claims and may not otherwise preside over any form of a consolidated, representative or class proceeding. Also, the arbitrator(s) may award relief (including monetary, injunctive and declaratory relief) only in favor of the individual party seeking relief and only to the extent necessary to provide relief necessitated by that party’s individual claim(s). Any relief awarded cannot affect other Databricks customers.

Arbitration Procedures. Arbitration is more informal than a lawsuit in court. Arbitration uses a neutral arbitrator or arbitrators instead of a judge or jury, and court review of an arbitration award is very limited.
However, the arbitrator(s) can award the same damages and relief on an individual basis that a court can award to an individual. The arbitrator(s) also must follow the terms of these Terms as a court would. The arbitration will be conducted by the American Arbitration Association (referred to as the "AAA") under its rules and procedures, including the AAA's Consumer Arbitration Rules (as applicable), unless you have accepted these Terms as a representative of a business entity, in which case AAA’s Commercial Arbitration Rules shall govern (as applicable), in each case as modified by this Arbitration Agreement. The AAA's rules and forms to commence arbitration are available at www.adr.org. A party who intends to seek arbitration must first send the other party, if to Databricks, by certified mail, a completed Demand for Arbitration. You should send this notice to Databricks at: Databricks, Inc., Attn: Legal Department, Re: Demand for Arbitration, 160 Spear St., Ste. 1300, San Francisco, CA 94105 USA (with a copy to [email protected]). Databricks will send any notice to you to the address we have on file associated with your Databricks account (which may solely be at your account email address); it is your responsibility to keep your address up to date. All information called for in the notice must be provided, including a description of the nature and basis of the claims the party is asserting and the relief sought. The arbitration shall be held in the county in which you reside or at another mutually agreed location. If the value of the relief sought is $10,000 or less, you or Databricks may elect to have the arbitration conducted by telephone or based solely on written submissions, which election shall be binding on you and Databricks subject to the discretion of the arbitrator(s) to require an in-person hearing, if the circumstances warrant. In cases where an in-person hearing is held, you and/or Databricks may attend by telephone, unless the arbitrator(s) require otherwise. Any settlement offer made by you or Databricks shall not be disclosed to the arbitrator(s). The arbitrator(s) will decide the substance of all claims in accordance with applicable law, including recognized principles of equity, and will honor all claims of privilege recognized by law. The arbitrator(s) shall not be bound by rulings in prior arbitrations involving different Databricks customers, but is/are bound by rulings in prior arbitrations involving the same Databricks customer to the extent required by applicable law. The award of the arbitrator(s) shall be final and binding, and judgment on the award rendered by the arbitrator(s) may be entered in any court having jurisdiction thereof.

Costs of Arbitration. Payment of all filing, administration, and arbitrator fees will be governed by the AAA's rules (either Consumer or Commercial, as applicable), unless otherwise stated in this Arbitration Agreement. If the value of the relief sought by an individual is $10,000 or less, at your request, Databricks will pay all filing, administration, and arbitrator fees associated with the arbitration. Any request for payment of fees by Databricks should be submitted by mail to the AAA along with your Demand for Arbitration, and Databricks will make arrangements to pay all necessary fees directly to the AAA. If the value of the relief sought by an individual is more than $10,000 and you are able to demonstrate that the costs of accessing arbitration will be prohibitive as compared to the costs of accessing a court for purposes of pursuing litigation on an individual basis, Databricks will pay as much of the filing, administration, and arbitrator fees as the arbitrator(s) deem necessary to prevent the cost of accessing the arbitration from being prohibitive. In the event the arbitrator(s) determine the claim(s) you assert in the arbitration to be frivolous, you agree to reimburse Databricks for all fees associated with the arbitration paid by Databricks on your behalf that you otherwise would be obligated to pay under the AAA's rules.

Severability. With the exception of any of the provisions in the Prohibition of Class and Representative Actions and Non-Individualized Relief section above, if a court decides that any part of this Arbitration Agreement is invalid or unenforceable, the other parts of this Arbitration Agreement shall still apply. If a court decides that any of the provisions in the Prohibition of Class and Representative Actions and Non-Individualized Relief section above is invalid or unenforceable because it would prevent the exercise of a non-waivable right to pursue public injunctive relief, then any dispute regarding the entitlement to such relief (and only that relief) must be severed from arbitration and may be litigated in court; in such case you irrevocably consent to the personal jurisdiction of the state and federal courts in the Northern District of California, and such dispute shall be governed by the laws of the state of California, without regard to choice or conflicts of law principles. All other disputes subject to arbitration under the terms of the Arbitration Agreement shall be arbitrated under its terms.

Amendments to Arbitration Agreement. Notwithstanding any provision in the Terms to the contrary, you and we agree that if we make any amendment to this Arbitration Agreement (other than an amendment to any notice address or website link provided herein) in the future, that amendment shall not apply to any claim that was filed in a legal proceeding against Databricks prior to the effective date of the amendment. The amendment shall apply to all other disputes or claims governed by this Arbitration Agreement that have arisen or may arise between you and Databricks.
We will notify you of amendments to this Arbitration Agreement by posting the amended terms on https://www.databricks.com/legal/ce-termsofuse at least 30 days before the effective date of the amendments and by providing notice through email where possible. If you do not agree to these amended terms, you may close Your Account within the 30-day period and you will not be bound by the amended terms.

Last Updated April 9, 2019

Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
Privacy Notice | Terms of Use | Your Privacy Choices | Your California Privacy Rights
https://www.databricks.com/dataaisummit/speaker/john-kutay/# | John Kutay - Data + AI Summit 2023 | Databricks

Data + AI Summit 2023 — San Francisco, June 26–29; virtual, June 28–29

John Kutay, Director of Product Management at Striim
John Kutay is Director of Product Management at Striim, with prior experience as a software engineer, product manager, and investor. His podcast "What's New in Data," with thousands of listeners across the globe, best captures his ability to understand upcoming trends in the data space. In addition, John has over 10 years of experience in the streaming data space through academic research and his work at Striim.

Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided at this event.
https://www.databricks.com/de/product/marketplace | Databricks Marketplace | Databricks

Databricks Marketplace
An open marketplace for data, analytics and AI

What is Databricks Marketplace?
Databricks Marketplace is an open marketplace for all your data, analytics and AI, built on the open source Delta Sharing standard. Databricks Marketplace expands your ability to bring innovations to market and advance all of your analytics and AI initiatives.

There you will find datasets as well as AI and analytics assets, such as ML models, notebooks, applications and dashboards, without proprietary platform dependencies, complicated ETL or expensive replication. This open approach lets you put data to work faster, in any cloud, with the tools of your choice.

Discover more than just data
Drive innovation and advance your organization's AI, ML and analytics initiatives. Access more than just datasets, including ML models, notebooks, applications and solutions.

Evaluate data products faster
Prebuilt notebooks and sample data help you evaluate quickly, and with far more confidence, whether a data product fits your AI, ML or analytics initiatives.

Avoid vendor lock-in
Gain meaningful insights from data in far less time and avoid lock-in through open, seamless sharing and collaboration across clouds, regions or platforms. Integrate directly with the tools of your choice, right where you work.

Featured data providers on Databricks Marketplace
Resources

Event: Register now for Data + AI Summit 2023
eBook: A New Approach to Data Sharing
eBook: Data, Analytics and AI Governance
Keynote: Data Governance and Sharing on the Lakehouse at Data + AI Summit 2022
Blogs: Announcing the Marketplace public preview; Announcing Databricks Marketplace at Data + AI Summit 2022
Documentation: AWS, Azure, GCP
https://www.databricks.com/dataaisummit/speaker/akira-ajisaka | Akira Ajisaka - Data + AI Summit 2023 | Databricks

Akira Ajisaka, Senior Software Development Engineer at Amazon Web Services

Akira Ajisaka is a Senior Software Development Engineer on the AWS Glue team at Amazon Web Services. He likes open source software and distributed systems. He is an Apache Hadoop committer and PMC member.
https://www.databricks.com/dataaisummit/speaker/satish-garla | Satish Garla - Data + AI Summit 2023 | Databricks

Satish Garla, Sr Solutions Architect at Databricks

Satish has a distinguished background in cloud modernization, data management, data science and financial risk management. He started his career implementing enterprise risk solutions using SAS. Satish leveraged open source tools and dotData technology to implement automated feature engineering and AutoML. Currently, Satish works as a Sr Solutions Architect at Databricks, helping enterprises with cloud and lakehouse adoption using open source technologies such as Apache Spark, Delta and MLflow.
https://www.databricks.com/explore/financial-services-resources/lakehouse-for-financial-services-ebook | Lakehouse For Financial Services eBook | Databricks
https://www.databricks.com/de/professional-services | Databricks Professional Services | Databricks

Databricks Professional Services
Contact us to learn more.

Complete projects successfully, faster, with best-in-class expertise in data engineering, data science and project management. Databricks Professional Services supports you at every stage of your data and AI journey.

Benefits

Accelerate your data and AI
Our offerings and expert services support your data and AI journey and accelerate it in the way that suits you best, from initial workspace onboarding to establishing DataOps and center-of-excellence practices at enterprise scale.

De-risk your project
Whether you are migrating legacy workloads to Databricks or building new data products, data and AI pipelines, or machine learning projects: as a trusted partner and advisor, we work with you at every step of your journey to reduce risk and increase value.

Operationalize at scale
Building a proof of concept for a data pipeline or a single-node model is still reasonably manageable. The hard part is successfully adopting and scaling data and AI practices across the entire enterprise. Our prescriptive offerings and expertise help you master this challenge.

Offerings
Succeed on projects faster with best-in-class expertise in data engineering, data science and project management.

Quickstart: Get to know the Databricks platform and its key capabilities, grounded in best practices, to significantly accelerate your projects.
Hadoop Migration: Use our prescriptive methodology to maximize the value of your existing data and pipeline investments and benefit from a smooth migration.
Lakehouse Build-Out: Accelerate the implementation of a unified, simplified platform for data analytics, data science and ML, laying the foundation for your lakehouse vision.
Machine Learning: Use our prescriptive methodology to optimize enterprise-wide ML initiatives and increase their adoption.
Shared Services Accelerator: Jump-start your enterprise-wide operating model and promote data and AI excellence with our prescriptive methodology.
Custom Services: Contact us for a custom statement of work tailored to your requirements.

Our professional services specialists have an extensive track record of meeting complex, highly specific and targeted requirements across the entire project lifecycle.

Resident Solutions Architects | Data Scientists
We provide experienced technical resources with strong leadership and advisory skills. To secure your success, we draw on support from our extensive network of professional partners as needed, and implement projects together with them where required.

Databricks and Spark experts
10 years of experience
Solid big data background
Hands-on implementation skills

Project planning and execution
Design and architecture support
Help aligning the project with the platform's capabilities
Input on project timelines and resource needs

Implementation and production planning
Architecture solutions for scalability
Prototype development support
Turning DevOps integration requirements into action

Support with establishing a COE
Developing shared standards and frameworks
Serving as a resource for multiple teams
Facilitating interaction with other Databricks teams
Resources
Success Credits program overview
Redemption request form

Why Databricks Academy?
People are at the heart of customer success. Training and certifications from Databricks Academy, the team that originally started the Spark research project at UC Berkeley, help you truly master data analytics. With Databricks Academy, you simply get more done, faster. Expand your skills now.

Ready to get started? Contact us.
https://www.databricks.com/solutions/accelerators/product-matching-with-ml | How to build: Product matching with machine learning | Databricks

Solution Accelerator
How to build: Product matching with machine learning
Optimize product matching to drive sales
Use machine learning and the Databricks Lakehouse Platform to build product matching that marketplaces and suppliers can apply across a range of use cases. Resolve differences between product definitions and descriptions, and determine which items are likely pairs and which are distinct across disparate data sets.
Read the full write-up
Download notebooks

Benefits and business value

Optimize your matching
Use a wide range of data formats and techniques, such as computer vision, NLP and deep learning, to extract product features
Scale as needed
With rapid provisioning of cloud resources, workflows can be allocated resources cost-effectively, as needed

Drive profitability
Save time and create effective upsell opportunities to drive profitability
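For a rough sense of what the matching step does, a minimal string-similarity baseline can pair products across two catalogs. This is only an illustrative sketch, not the accelerator's actual notebooks (which use richer features such as computer vision and NLP); the sample catalog items and the 0.5 threshold are invented for the example:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Normalized edit-based similarity in [0, 1]
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(item: str, candidates: list[str], threshold: float = 0.5):
    # Score every candidate and keep the best one above the threshold
    scored = [(c, similarity(item, c)) for c in candidates]
    candidate, score = max(scored, key=lambda pair: pair[1])
    return (candidate, score) if score >= threshold else None

catalog_a = ["Apple iPhone 14 Pro 128GB Black", "Samsung Galaxy S23 256GB"]
catalog_b = ["iPhone 14 Pro (128 GB, Black)", "Galaxy S23 Ultra 256GB Green"]

for item in catalog_a:
    print(item, "->", best_match(item, catalog_b))
```

In a lakehouse-scale setting, the hand-written score would typically be replaced by learned features, and the pairwise comparison would run as a distributed job rather than a nested loop.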
Reference Architecture

Deliver AI innovation faster with Solution Accelerators for popular industry use cases. See our full library of solutions.

Ready to get started? Try Databricks for free.
https://www.databricks.com/spark/about | About Spark – Databricks

Apache Spark™
Apache Spark is a lightning-fast unified analytics engine for big data and machine learning. It was originally developed at UC Berkeley in 2009.

The largest open source project in data processing
Since its release, Apache Spark, the unified analytics engine, has seen rapid adoption by enterprises across a wide range of industries. Internet powerhouses such as Netflix, Yahoo, and eBay have deployed Spark at massive scale, collectively processing multiple petabytes of data on clusters of over 8,000 nodes. It has quickly become the largest open source community in big data, with over 1,000 contributors from 250+ organizations.

The team that started the Spark research project at UC Berkeley founded Databricks in 2013. Apache Spark is 100% open source, hosted at the vendor-independent Apache Software Foundation. At Databricks, we are fully committed to maintaining this open development model. Together with the Spark community, Databricks continues to contribute heavily to the Apache Spark project, through both development and community evangelism.

Benefits of Apache Spark

Speed
Engineered from the bottom up for performance, Spark can be 100x faster than Hadoop for large-scale data processing by exploiting in-memory computing and other optimizations. Spark is also fast when data is stored on disk, and currently holds the world record for large-scale on-disk sorting.
Ease of Use
Spark has easy-to-use APIs for operating on large datasets. This includes a collection of over 100 operators for transforming data and familiar data frame APIs for manipulating semi-structured data.A Unified Engine
Spark comes packaged with higher-level libraries, including support for SQL queries, streaming data, machine learning and graph processing. These standard libraries increase developer productivity and can be seamlessly combined to create complex workflows.

Try Apache Spark on the Databricks cloud for free
The Databricks Unified Analytics Platform offers 5x performance over open source Spark, collaborative notebooks, integrated workflows, and enterprise security, all in a fully managed cloud platform. The open source Apache Spark project can be downloaded here.
https://www.databricks.com/glossary/hadoop-ecosystem | What is a Hadoop Ecosystem?

Hadoop Ecosystem

What is the Hadoop Ecosystem?
The Apache Hadoop ecosystem refers to the various components of the Apache Hadoop software library; it includes open source projects as well as a complete range of complementary tools. Some of the most well-known tools of the Hadoop ecosystem include HDFS, Hive, Pig, YARN, MapReduce, Spark, HBase, Oozie, Sqoop and Zookeeper. Here are the major Hadoop ecosystem components that developers use most frequently:

What is HDFS?
The Hadoop Distributed File System (HDFS) is one of the largest Apache projects and the primary storage system of Hadoop. It employs a NameNode and DataNode architecture. It is a distributed file system able to store large files across clusters of commodity hardware.

What is Hive?
Hive is an ETL and data warehousing tool used to query and analyze large datasets stored within the Hadoop ecosystem. Hive has three main functions: data summarization, querying, and analysis of unstructured and semi-structured data in Hadoop. It features a SQL-like interface, HQL, a language that works similarly to SQL and automatically translates queries into MapReduce jobs.

What is Apache Pig?
This is a high-level scripting language used to execute queries for larger datasets used within Hadoop.
Pig's simple SQL-like scripting language is known as Pig Latin; its main objective is to perform the required operations and arrange the final output in the desired format.

What is MapReduce?
This is another data processing layer of Hadoop. It can process large structured and unstructured data, and manage very large data files in parallel, by dividing a job into a set of independent tasks (sub-jobs).

What is YARN?
YARN stands for Yet Another Resource Negotiator, but it's commonly referred to by the acronym alone. It is one of the core components of open source Apache Hadoop and handles resource management. It is responsible for managing workloads, monitoring, and implementing security controls. It also allocates system resources to the various applications running in a Hadoop cluster and assigns which tasks should be executed by each cluster node. YARN has two main components: the Resource Manager and the Node Manager.

What is Apache Spark?
Apache Spark is a fast, in-memory data processing engine suitable for use in a wide range of circumstances.
Spark can be deployed in several ways; it offers Java, Python, Scala and R APIs, and supports SQL, streaming data, machine learning and graph processing, which can be used together in a single application.

Additional Resources

Step-by-Step Migration: Hadoop to Databricks
Cloud Modernization With Databricks and AWS
Databricks migration hub
Hidden value of Hadoop migration whitepaper
It's time to reconsider your relationship with Hadoop
Delta Lake and ETL
Making Apache Spark™ Better with Delta Lake
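The MapReduce model described in this glossary can be sketched in a few lines of plain Python. This is an illustrative toy, not Hadoop's actual implementation: a "map" phase emits (word, 1) pairs, and a "reduce" phase (with the shuffle collapsed for brevity) sums the counts per word.

```python
from collections import defaultdict

# Toy MapReduce word count: the map phase emits (key, value) pairs,
# the reduce phase aggregates all values that share a key.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:  # shuffle + reduce collapsed into one step
        counts[word] += n
    return dict(counts)

lines = ["Hadoop stores data", "Spark processes data"]
print(reduce_phase(map_phase(lines)))
# {'hadoop': 1, 'stores': 1, 'data': 2, 'spark': 1, 'processes': 1}
```

In real Hadoop, the map and reduce tasks run as independent processes on different cluster nodes, which is what makes the model scale.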
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
https://www.databricks.com/blog/2019/03/07/simplifying-genomics-pipelines-at-scale-with-databricks-delta.html
Simplifying Genomics Pipelines at Scale with Databricks Delta
by William Brandler and Frank Austin Nothaft
March 7, 2019 in Engineering Blog

Try this notebook in Databricks

This blog is the first in our "Genomics Analysis at Scale" series. In this series, we will demonstrate how the Databricks Unified Analytics Platform for Genomics enables customers to analyze population-scale genomic data. Starting from the output of our genomics pipeline, this series will provide a tutorial on using Databricks to run sample quality control, joint genotyping, cohort quality control, and advanced statistical genetics analyses.

Since the completion of the Human Genome Project in 2003, there has been an explosion in data, fueled by a dramatic drop in the cost of DNA sequencing: from $3B[1] for the first genome to under $1,000 today.
[1] The Human Genome Project was a $3B project, led by the Department of Energy and the National Institutes of Health, that began in 1990 and was completed in 2003.

Source: DNA Sequencing Costs: Data

Consequently, the field of genomics has now matured to a stage where companies have started to do DNA sequencing at population scale. However, sequencing the DNA code is only the first step; the raw data then needs to be transformed into a format suitable for analysis. Typically this is done by gluing together a series of bioinformatics tools with custom scripts and processing the data on a single node, one sample at a time, until we wind up with a collection of genomic variants. Bioinformatics scientists today spend the majority of their time building out and maintaining these pipelines. As genomic data sets have expanded into the petabyte scale, it has become challenging to answer even the following simple questions in a timely manner:
How many samples have we sequenced this month?
What is the total number of unique variants detected?
How many variants did we see across different classes of variation?

Further compounding this problem, data from thousands of individuals cannot be stored, tracked or versioned while also remaining accessible and queryable. Consequently, researchers often duplicate subsets of their genomic data when performing their analyses, causing the overall storage footprint and costs to escalate. In an attempt to alleviate this problem, researchers today employ a strategy of "data freezes," typically between six months and two years, where they halt work on new data and instead focus on a frozen copy of existing data. There is no solution for incrementally building up analyses over shorter time frames, causing research progress to slow down.
There is a compelling need for robust software that can consume genomic data at industrial scale, while also retaining the flexibility for scientists to explore the data, iterate on their analytical pipelines, and derive new insights.
Fig 1. Architecture for end-to-end genomics analysis with Databricks

With Databricks Delta: A Unified Management System for Real-time Big Data Analytics, the Databricks platform has taken a major step toward solving the data governance, data access, and data analysis issues faced by researchers today. With Delta Lake, you can store all your genomic data in one place and create analyses that update in real time as new data is ingested. Combined with optimizations in our Unified Analytics Platform for Genomics (UAP4G) for reading, writing, and processing genomics file formats, we offer an end-to-end solution for genomics pipeline workflows. The UAP4G architecture offers flexibility, allowing customers to plug in their own pipelines and develop their own tertiary analytics. As an example, we've highlighted the following dashboard showing quality control metrics and visualizations that can be calculated and presented in an automated fashion and customized to suit your specific requirements.
https://www.youtube.com/watch?v=73fMhDKXykU
In the rest of this blog, we will walk through the steps we took to build the quality control dashboard above, which updates in real time as samples finish processing. By using a Delta-based pipeline for processing genomic data, our customers can now operate their pipelines in a way that provides real-time, sample-by-sample visibility. With Databricks notebooks (and integrations such as GitHub and MLflow) they can track and version analyses in a way that will ensure their results are reproducible. Their bioinformaticians can devote less time to maintaining pipelines and spend more time making discoveries. We see the UAP4G as the engine that will drive the transformation from ad-hoc analyses to production genomics on an industrial scale, enabling better insights into the link between genetics and disease.
Read Sample Data

Let's start by reading variation data from a small cohort of samples; the following statement reads in data for a specific sampleId and saves it using the Databricks Delta format (in the delta_stream_output folder).
spark.read. \
  format("parquet"). \
  load("dbfs:/annotations_etl_parquet/sampleId=" + "SRS000030_SRR709972"). \
  write. \
  format("delta"). \
  save(delta_stream_outpath)
Note: the annotations_etl_parquet folder contains annotations generated from the 1000 Genomes dataset, stored in Parquet format. The ETL and processing of these annotations were performed using Databricks' Unified Analytics Platform for Genomics.

Start Streaming the Databricks Delta Table

In the following statement, we create the exomes Apache Spark DataFrame, which reads a stream (via readStream) of data in the Databricks Delta format. This is a continuously running, or dynamic, DataFrame: the exomes DataFrame will load new data as it is written into the delta_stream_output folder. To view the exomes DataFrame, we can run a DataFrame query to find the count of variants grouped by sampleId.
# Read the stream of data
exomes = spark.readStream.format("delta").load(delta_stream_outpath)
# Display the data via DataFrame query
display(exomes.groupBy("sampleId").count().withColumnRenamed("count", "variants"))
When executing the display statement, the Databricks notebook provides a streaming dashboard to monitor the streaming jobs. Immediately below the streaming job are the results of the display statement (i.e. the count of variants by sampleId).
Let’s continue answering our initial set of questions by running other DataFrame queries based on our exomes DataFrame.
Single Nucleotide Variant Count

To continue the example, we can quickly calculate the number of single nucleotide variants (SNVs), as displayed in the following graph.
%sql
select referenceAllele, alternateAllele, count(1) as GroupCount
from snvs
group by referenceAllele, alternateAllele
order by GroupCount desc
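As background on what the snvs table holds: a variant is a single nucleotide variant when both the reference and alternate alleles are one base long. A hedged pure-Python sketch of that classification (the function name and labels are illustrative, not the pipeline's own code):

```python
# Illustrative only: classify a variant by its reference/alternate alleles.
# Single-base ref and alt -> SNV; a length difference -> insertion/deletion.
def mutation_class(ref: str, alt: str) -> str:
    if len(ref) == 1 and len(alt) == 1:
        return "SNV"
    return "insertion" if len(alt) > len(ref) else "deletion"

print(mutation_class("A", "G"))    # SNV
print(mutation_class("A", "AT"))   # insertion
print(mutation_class("ACG", "A"))  # deletion
```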
Note: the display command is part of the Databricks workspace and allows you to view your DataFrame using Databricks visualizations (i.e. no coding required).

Variant Count

Since we have annotated our variants with functional effects, we can continue our analysis by looking at the spread of variant effects we see. The majority of the variants detected flank regions that code for proteins; these are known as noncoding variants.
display(exomes.groupBy("mutationType").count())
Amino Acid Substitution Heatmap

Continuing with our exomes DataFrame, let's calculate the amino acid substitution counts with the following code snippet. As with the previous DataFrames, we will create another dynamic DataFrame (aa_counts) so that, as new data is processed into the exomes DataFrame, it is subsequently reflected in the amino acid substitution counts as well. We are also writing the data into memory (i.e. .format("memory")) and processing batches every 60s (i.e. trigger(processingTime='60 seconds')) so the downstream Pandas heatmap code can process and visualize the heatmap.
# Calculate amino acid substitution counts
coding = get_coding_mutations(exomes)
aa_substitutions = get_amino_acid_substitutions(coding.select("proteinHgvs"), "proteinHgvs")
aa_counts = count_amino_acid_substitution_combinations(aa_substitutions)
aa_counts. \
  writeStream. \
  format("memory"). \
  queryName("amino_acid_substitutions"). \
  outputMode("complete"). \
  trigger(processingTime='60 seconds'). \
  start()
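The helper functions above (get_coding_mutations, get_amino_acid_substitutions, count_amino_acid_substitution_combinations) come from the accompanying Databricks genomics notebook. To give a feel for what extracting a substitution from a proteinHgvs value involves, here is a hedged pure-Python sketch for the common three-letter HGVS form such as p.Lys27Asn; the regex and function name are assumptions and do not cover every HGVS case:

```python
import re

# Assumed pattern: "p." + 3-letter reference residue + position + 3-letter
# alternate residue. Real HGVS has many more forms (Ter, frameshifts, etc.)
# that this illustrative parser does not handle.
HGVS_P = re.compile(r"p\.([A-Z][a-z]{2})(\d+)([A-Z][a-z]{2})")

def substitution_from_hgvs(hgvs):
    m = HGVS_P.match(hgvs)
    if m is None:
        return None
    reference, _position, alternate = m.groups()
    return reference, alternate

print(substitution_from_hgvs("p.Lys27Asn"))  # ('Lys', 'Asn')
```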
The following code snippet reads the preceding amino_acid_substitutions Spark table, determines the max count, creates a new Pandas pivot table from the Spark table, and then plots out the heatmap.
# Use pandas and matplotlib to build heatmap
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from pyspark.sql import functions as fx

amino_acid_substitutions = spark.read.table("amino_acid_substitutions")
max_count = amino_acid_substitutions.agg(fx.max("substitutions")).collect()[0][0]
aa_counts_pd = amino_acid_substitutions.toPandas()
aa_counts_pd = pd.pivot_table(aa_counts_pd, values='substitutions', index=['reference'], columns=['alternate'], fill_value=0)
fig, ax = plt.subplots()
with sns.axes_style("white"):
  ax = sns.heatmap(aa_counts_pd, vmax=max_count*0.4, cbar=False, annot=True, annot_kws={"size": 7}, fmt="d")
plt.tight_layout()
display(fig)
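To isolate what the pd.pivot_table call is doing here, consider a toy example with made-up substitution counts (the data is illustrative only):

```python
import pandas as pd

# Rows of (reference, alternate, substitutions) become a reference-by-alternate
# matrix; combinations never observed are filled with 0 via fill_value.
df = pd.DataFrame({
    "reference":     ["Lys", "Lys", "Gly"],
    "alternate":     ["Asn", "Arg", "Ala"],
    "substitutions": [5, 2, 7],
})
matrix = pd.pivot_table(df, values="substitutions", index=["reference"],
                        columns=["alternate"], fill_value=0)
print(matrix.loc["Lys", "Asn"], matrix.loc["Gly", "Arg"])
```

The resulting matrix is exactly the shape seaborn's heatmap expects: one row per reference residue, one column per alternate residue.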
Migrating to a Continuous Pipeline

Up to this point, the preceding code snippets and visualizations represent a single run for a single sampleId. But because we're using Structured Streaming and Databricks Delta, this code can be used (without any changes) to construct a production data pipeline that computes quality control statistics continuously as samples roll through our pipeline. To demonstrate this, we can run the following code snippet, which will load our entire dataset.
import time

parquets = "dbfs:/databricks-datasets/genomics/annotations_etl_parquet/"
files = dbutils.fs.ls(parquets)
counter = 0
for sample in files:
  counter += 1
  annotation_path = sample.path
  sampleId = annotation_path.split("/")[4].split("=")[1]
  variants = spark.read.format("parquet").load(str(annotation_path))
  print("running " + sampleId)
  if(sampleId != "SRS000030_SRR709972"):
    variants.write.format("delta"). \
      mode("append"). \
      save(delta_stream_outpath)
  time.sleep(10)
As described in the earlier code snippets, the source of the exomes DataFrame is the set of files loaded into the delta_stream_output folder. Initially, we loaded a set of files for a single sampleId (i.e., sampleId = "SRS000030_SRR709972"). The preceding code snippet now takes all of the generated Parquet samples (i.e. parquets) and incrementally loads those files by sampleId into the same delta_stream_output folder. The following animated GIF shows the abbreviated output of the preceding code snippet.
https://www.youtube.com/watch?v=JPngSC5Md-Q
Visualizing Your Genomics Pipeline

When you scroll back to the top of your notebook, you will notice that the exomes DataFrame is now automatically loading the new sampleIds. Because the structured streaming component of our genomics pipeline runs continuously, it processes data as soon as new files are loaded into the delta_stream_output folder. By using the Databricks Delta format, we can ensure the transactional consistency of the data streaming into the exomes DataFrame.
https://www.youtube.com/watch?v=Q7KdPsc5mbY
As opposed to the initial creation of our exomes DataFrame, notice how the structured streaming monitoring dashboard is now loading data (i.e., the fluctuating “input vs. processing rate”, fluctuating “batch duration”, and an increase of distinct keys in the “aggregations state”). As the exomes DataFrame is processing, notice the new rows of sampleIds (and variant counts). This same action can also be seen for the associated group by mutation type query.
https://www.youtube.com/watch?v=sT179SCknGM
With Databricks Delta, any new data is transactionally consistent in each and every step of our genomics pipeline. This is important because it ensures your pipeline is consistent (it maintains the consistency of your data, i.e. ensures all of the data is "correct"), reliable (each transaction either succeeds or fails completely), and able to handle real-time updates (many transactions can run concurrently, and any outage or failure will not impact the data). Thus even the data in our downstream amino acid substitution map (which had a number of additional ETL steps) is refreshed seamlessly.
As the last step of our genomics pipeline, we are also monitoring the distinct mutations by reviewing the Databricks Delta parquet files within DBFS (i.e. increase of distinct mutations over time).
Summary

Using the foundation of the Databricks Unified Analytics Platform, with a particular focus on Databricks Delta, bioinformaticians and researchers can apply distributed analytics with transactional consistency using the Databricks Unified Analytics Platform for Genomics. These abstractions allow data practitioners to simplify genomics pipelines. Here we have created a genomic sample quality control pipeline that continuously processes data as new samples are processed, without manual intervention. Whether you are performing ETL or sophisticated analytics, your data will flow through your genomics pipeline rapidly and without disruption. Try it yourself today by downloading the Simplifying Genomics Pipelines at Scale with Databricks Delta notebook.
Get Started Analyzing Genomics at Scale:
Read our Unified Analytics for Genomics solution guide
Download the Simplifying Genomics Pipelines at Scale with Databricks Delta notebook
Sign up for a free trial of Databricks Unified Analytics for Genomics

Acknowledgments

Thanks to Yongsheng Huang and Michael Ortega for their contributions.

Interested in the open source Delta Lake? Visit the Delta Lake online hub to learn more, download the latest code and join the Delta Lake community.
https://www.databricks.com/dataaisummit/speaker/christina-taylor-0/#

Christina Taylor - Data + AI Summit 2023 | Databricks
San Francisco, June 26-29 / Virtual, June 28-29

Christina Taylor
Data Engineer at Toptal

Christina is passionate about modern data platforms, multi-cloud architecture, scalable data pipelines, as well as the latest and greatest in the open source community. An intensely curious lifelong learner and effective team leader, she builds data lakes with a medallion structure that support advanced analytics, data science models, and customer-facing applications. She also has a keen interest in interdisciplinary areas such as Cloud FinOps, DevOps and MLOps.

Apache, Apache Spark, Spark, and the Spark logo are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with and does not endorse the materials provided at this event.
https://www.databricks.com/dataaisummit/speaker/joanna-gurry

Joanna Gurry - Data + AI Summit 2023 | Databricks

Joanna Gurry
Executive, Data Delivery at National Australia Bank (NAB)
https://www.databricks.com/company/careers/open-positions?location=singapore

Current job openings at Databricks
https://www.databricks.com/dataaisummit/speaker/john-thompson

John Thompson - Data + AI Summit 2023 | Databricks

John Thompson
Global Head, Artificial Intelligence at EY

John is an international technology executive with over 35 years of experience in the fields of data, advanced analytics, and artificial intelligence (AI). He is currently the Global Head of AI at EY.
John has built start-up organizations from the ground up, and he has reengineered business units of Fortune 500 firms to reach their potential. He has directly managed and run sales, marketing, consulting, support, and product development organizations.
https://www.databricks.com/solutions/accelerators/rd-optimization-with-knowledge-graphs

R&D Optimization With Knowledge Graphs | Databricks
Solution Accelerator
R&D Optimization With Knowledge Graphs
Pre-built code, sample data and step-by-step instructions ready to go in a Databricks notebook
Unlocking hidden insights in your data with knowledge graphs

Knowledge graphs combine the semantics, scalability and flexibility of a graph database with the performance and governance of a data lakehouse. With the Databricks Lakehouse for Healthcare and Life Sciences, R&D teams can:

Store and organize all forms of life sciences data in the lakehouse
Synthesize new insights through advanced network analytics and machine learning
Build a knowledge graph with Wisecube to reveal opportunities to improve R&D
https://www.databricks.com/dataaisummit/speaker/hitesh-sahni/#

Hitesh Sahni - Data + AI Summit 2023 | Databricks

Hitesh Sahni
Head of Cloud Data Platforms and Data Engineering Solutions at Deutsche Post DHL

Hitesh is head of Cloud Data Platforms and Solutions and lead architect at DPDHL Group, responsible for developing and scaling global cloud data management capabilities and delivering cloud data analytics projects, including both data platforms and data engineering solutions.
Hitesh is a results-oriented technology leader with the business acumen to enable data-driven digital transformation, and has diverse experience in big data analytics and the cloud space across various industry verticals (banking, logistics, etc.).
https://www.databricks.com/dataaisummit/speaker/christopher-locklin/#

Christopher Locklin - Data + AI Summit 2023 | Databricks

Christopher Locklin
Engineering Manager, Data Platform at Grammarly

Chris Locklin has been leading data infrastructure and engineering teams for the last 9 years (at Grammarly, Dropbox, and Verizon), and first entered the data space in 2010. He is currently the engineering manager of the Data Platform team at Grammarly. The team is responsible for ingesting, processing, and surfacing over 50 billion events every day.
https://www.databricks.com/p/webinar/modernising-risk-management-financial-services-virtual-workshop | Resources - Databricks. Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121. © Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation. |
https://www.databricks.com/customers/usps-oig | Applying advanced analytics for the sanctity and delivery of mail | Databricks. CUSTOMER STORY: Delivering integrity and efficiency for the U.S. Postal Service. USPS OIG supports efficient postal service to millions with Databricks Lakehouse. 40% reduction in total cost of ownership (compared to SAS); weeks to deliver projects instead of months. INDUSTRY: Federal Government. PLATFORM USE CASE: Lakehouse, Machine Learning, Data Warehousing. CLOUD: Azure. “With our move to the cloud and lakehouse architecture, we are well positioned to respond to new data challenges swiftly, enabling us to fulfill the agency’s mission of ensuring efficiency, accountability and integrity in the U.S. Postal Service.”
– Ben Joseph, CDO, USPS OIG.
The United States Postal Service (USPS) delivers more than 400 million pieces of mail each day. Established to sustain public trust in the mail system, the USPS OIG is critical to ensuring the integrity and accountability of the Postal Service, including its personnel, programs, assets and revenue. The USPS OIG embarked on a data modernization journey to better handle the challenges of today and prepare for those of tomorrow. Using the lakehouse architecture, the USPS OIG was able to centralize data analysis in the cloud for easier access and cleared data engineering bottlenecks for large-scale analytics and AI. With the means to use data to identify not only challenges but also new opportunities for innovation, the USPS OIG is better positioned to investigate, audit and research postal operations and programs to protect against fraud, waste and abuse, ensuring the efficiency and integrity of the USPS.
Inability to scale analytics with the data perpetuates stagnation. As one of the most trusted government agencies in the country, the USPS depends on a network of people and technology to collect, transport, process and deliver nearly 130 billion pieces of mail to over 163 million delivery points per year. The United States Postal Service Office of Inspector General (OIG) was established as an independent oversight agency to help maintain confidence in the postal system and improve the bottom line through audits and investigations. For instance, a key focus of the agency is to detect and prevent postal crimes such as mail dumping, which is when postal employees intentionally discard or delay mail rather than deliver it to its intended recipients.
By monitoring various USPS data points, the OIG can identify indicators of employees or routes that are involved with dumped mail. Prior to its use of a data lakehouse architecture, the OIG’s on-premises infrastructure was highly complex to manage and costly to scale. This became increasingly problematic as data volumes grew at such a rate that the office faced challenges extracting insights and developing timely solutions. The agency struggled to handle the influx of customer and delivery-related data, provide a centralized view for all teams, and support reliable and performant data pipelines for downstream analytics and machine learning, a requirement that became especially evident during the 2020 election season, which saw a historic spike in voting by mail due to COVID-19.
With a data team of more than 100 people who needed to work together to respond to anomalous activities in a timely manner, the OIG looked to the cloud and a new data architecture that would offer all its data teams easy access to any data and unlock new analytical and machine learning capabilities, furthering its efforts to improve mail delivery efficiency and accountability.
Lakehouse solves efficiency challenges and opens new doors. A combination of internal and external factors challenged the OIG to simplify the management of all its data at scale while also facilitating analytics and machine learning. “We wanted to leverage industry standards for data management and analytics in-house, which prompted our transition to the cloud,” explained Ben Joseph, CDO at USPS OIG. “As we explored our data infrastructure options, we found that a data lakehouse platform is the only environment that offers a common place to do ETL, analytics and machine learning under the same umbrella.” With a data lakehouse platform, the OIG has removed the barriers that once blocked its ability to deliver reliable and timely data for analytics and machine learning.
With an open lakehouse architecture, all the data the agency pulls from USPS is much clearer and easier to use across teams. It has also unlocked operational efficiency and performance at an unprecedented scale. Data engineers can simplify ETL pipeline development and improve data reliability, data analysts can use SQL to collaboratively query and share insights with built-in visualizations and dashboards, and data scientists can use AutoML to jump-start new machine learning projects by automating tasks and accelerating workflows.
For example, one solution called Informed Delivery provides eligible residential customers with a digital preview of their household’s incoming mail, allowing households to see and track their inbound mail more easily. Behind the scenes, the OIG stores images of mail alongside traditional data such as sender, receiver, package contents, weight and value. By combining this data, it can mitigate fraudulent activity in the shape of mail theft or dumping, while improving the customer experience. As a branch of the federal government and a long-established institution without a reputation for being modern in its approach to workflows, the Postal Service expects to see a strongly positive effect on customer satisfaction and the agency’s reputation through its adoption of the lakehouse.
Delivering insights that matter, with confidence. Migrating to a cloud-based data lakehouse architecture not only paid dividends from an innovation standpoint, but it also made an impact on costs. Compared to its previous architecture, the agency experienced a 40% reduction in total cost of ownership.
These savings were primarily due to the elimination of legacy data infrastructure, unnecessary storage and licensing costs, and productivity improvements. Simultaneously, the time needed to harness data for analytics and machine learning use cases dropped from months to days because of the data engineering efficiencies introduced by the lakehouse. In fact, the OIG has scaled its data ingestion from one production pipeline in its legacy system to over 90 production pipelines in the data lakehouse. This has helped the agency deliver more value to its various auditors and investigators in uncovering anomalous activity, while also giving the data team the satisfaction of creating customer-facing solutions in ways that were impossible with the previous system. Joseph explained, “Our society has experienced a lot of change on many levels in the last couple of years, and all we want to do as an organization is to be able to respond to those changes and support the various stakeholders who love our products but need more of them. The data lakehouse platform has enabled us to do that on a level we never dreamed of.” Now that the OIG can operate in a more forward-thinking and agile fashion, the team is taking on several additional use cases.
With the lakehouse as its data foundation, the OIG is well positioned to continue leveraging data, analytics and AI to deliver value efficiently and with confidence. |
https://www.databricks.com/dataaisummit/speaker/surya-turaga/# | Surya Turaga - Data + AI Summit 2023 | Databricks. Sr. Solutions Architect at Databricks. In the past, I have been responsible for strategic client development, working with senior client stakeholders as a thought leader and evangelist across the advanced analytics space, with strong conceptual and hands-on experience in cloud and Big Data environments.
I have also spoken at large events (30,000+ attendees) such as AWS re:Invent 2022 and the Qubole Data Lake Summit.
I love building analytical solutions that combine machine learning and artificial intelligence. Throughout my career, I have had the luxury of working very close to technology, and I love working on challenges around scaling up analytic methods in distributed computing environments.
I am passionate about turning ideas into data products. I enjoy developing high-performance teams, mentoring data science enthusiasts and working in cross-functional teams. |
https://www.databricks.com/company/partners/consulting-and-si/partner-solutions/datasentics-quality-inspector | Quality Inspector by DataSentics and Databricks | Databricks. Brickbuilder Solution: Quality Inspector by DataSentics. Industry-specific solution developed by DataSentics and powered by the Databricks Lakehouse Platform.
Automate your production quality control. Quality control is a crucial aspect of any production process, but traditional methods can be time-consuming and prone to human error. Quality Inspector by DataSentics, an Atos company, offers a solution that is both efficient and reliable. With out-of-the-box models for visual quality inspection, tailored to meet your specific requirements, you’ll experience stable, scalable quality control that’s easy to improve over time. DataSentics’ highly precise and efficient tailoring process allows them to quickly adapt models to your business needs. Quality Inspector is an end-to-end solution that can be seamlessly integrated into your existing setup, delivering high performance and reliability. Achieve more than 99.8% accuracy; reduce your quality control time by up to 90%; automate even the most challenging use cases, such as the use of 3D cameras, heat cameras, etc. |
https://www.databricks.com/jp/company/awards-and-recognition | Awards and Recognition | Databricks. Databricks has earned high marks from a range of industry-leading organizations: named a Leader in the 2022 Magic Quadrant for Cloud Database Management Systems; a 2022 Customers’ Choice for Cloud Database Management Systems; named a Leader in the 2021 Magic Quadrant for Cloud Database Management Systems; named a Leader in the 2021 Magic Quadrant for Data Science and Machine Learning Platforms; the lakehouse featured in the 2022 Hype Cycle for Data Management; a company to watch in 2023; one of the most innovative data science companies; the Cloud 100; the AI 50; one of America’s best startup employers; a best tech company to work for; a best place to work in the Bay Area; a best workplace for millennials; the CNBC Disruptor 50; and a 2022 best place to work. |
https://www.databricks.com/it/company/partners | Partners | Databricks. Databricks Partners. Databricks has more than 1,200 partners worldwide that deliver data, analytics and AI solutions and services to our customers using the Databricks Lakehouse Platform. These partners make it possible to leverage Databricks to unify all data and AI workloads for deeper, more meaningful insights. “Databricks handles the data volume, while Tableau ensures fast visualization. These solutions work in perfect harmony at the heart of our platform, giving our customers the performance they need to deliver leading capabilities in autonomous vehicles.” – Patrick McAuliffe, Lead Engineer, Incite. Cloud partners: Databricks runs on AWS, Microsoft Azure, Google Cloud and Alibaba Cloud, integrating tightly with each provider’s infrastructure, data and AI services. Technology partners: technology partners integrate their solutions with Databricks to offer complementary capabilities for ETL, data ingestion, BI, ML and governance. Consulting partners: consulting partners are experts who provide qualified support to define strategy and correctly implement and scale data management, analytics and AI with Databricks. Become a partner: our partners collaborate with Databricks to develop and deliver innovative solutions to customers. Join the Databricks Partner Program for access to an exclusive offering of tools, training, Solution Accelerators and go-to-market programs. |
https://www.databricks.com/dataaisummit/speaker/leon-eller | Leon Eller - Data + AI Summit 2023 | Databricks. Solutions Architect at Databricks. |
https://www.databricks.com/dataaisummit/speaker/shiv-trisal | Shiv Trisal - Data + AI Summit 2023 | Databricks. GTM Director, Manufacturing & Energy at Databricks. Shiv has successfully led diverse product and strategy teams and delivered game-changing data and AI-led innovation across the diversified manufacturing, transportation and logistics, and aerospace industries, with roles at Ernst & Young, Booz & Co./Strategy& and Raytheon Technologies. As an industry leader, Shiv regularly connects with executives to cover key trends and help enable data and AI strategies that unlock strategic competitive advantage in manufacturing and logistics. |
https://www.databricks.com/br/product/marketplace | Databricks Marketplace | DatabricksSkip to main contentPlataformaDatabricks Lakehouse PlatformDelta LakeGovernança de dadosData EngineeringStreaming de dadosArmazenamento de dadosData SharingMachine LearningData SciencePreçosMarketplaceTecnologia de código abertoCentro de segurança e confiançaWEBINAR Maio 18 / 8 AM PT
Adeus, Data Warehouse. Olá, Lakehouse.
Participe para entender como um data lakehouse se encaixa em sua pilha de dados moderna.
Inscreva-se agoraSoluçõesSoluções por setorServiços financeirosSaúde e ciências da vidaProdução industrialComunicações, mídia e entretenimentoSetor públicoVarejoVer todos os setoresSoluções por caso de usoAceleradores de soluçãoServiços profissionaisNegócios nativos digitaisMigração da plataforma de dados9 de maio | 8h PT
Descubra a Lakehouse para Manufatura
Saiba como a Corning está tomando decisões críticas que minimizam as inspeções manuais, reduzem os custos de envio e aumentam a satisfação do cliente.Inscreva-se hojeAprenderDocumentaçãoTreinamento e certificaçãoDemosRecursosComunidade onlineAliança com universidadesEventosData+AI SummitBlogLaboratóriosBeaconsA maior conferência de dados, análises e IA do mundo retorna a São Francisco, de 26 a 29 de junho. ParticipeClientesParceirosParceiros de nuvemAWSAzureGoogle CloudConexão de parceirosParceiros de tecnologia e dadosPrograma de parceiros de tecnologiaPrograma de parceiros de dadosBuilt on Databricks Partner ProgramParceiros de consultoria e ISPrograma de parceiros de C&ISSoluções para parceirosConecte-se com apenas alguns cliques a soluções de parceiros validadas.Saiba maisEmpresaCarreiras em DatabricksNossa equipeConselho de AdministraçãoBlog da empresaImprensaDatabricks VenturesPrêmios e reconhecimentoEntre em contatoVeja por que o Gartner nomeou a Databricks como líder pelo segundo ano consecutivoObtenha o relatórioExperimente DatabricksAssista às DemosEntre em contatoInício de sessãoJUNE 26-29REGISTER NOWDatabricks MarketplaceOpen Marketplace para dados, análises e IAComece agoraTo play this video, click here and accept cookiesO que é Databricks
Marketplace?
O Databricks Marketplace é um mercado aberto para todos os seus dados, análises e IA, alimentado pelo padrão de código aberto Delta Sharing. O Databricks Marketplace amplia sua oportunidade de oferecer inovação e avançar em todas as suas iniciativas de análise e IA.
Obtenha conjuntos de dados, bem como ativos de IA e análise — como modelos de ML, notebooks, aplicativos e painéis — sem dependências de plataforma proprietária, ETL complicado ou replicação cara. Essa abordagem aberta permite que você coloque os dados para trabalhar mais rapidamente em cada nuvem com as ferramentas de sua escolha.Descubra mais do que apenas dadosDesbloqueie inovação e promova as iniciativas de IA, ML e funções analíticas da sua organização. Acesse mais do que apenas conjuntos de dados, incluindo modelos ML, notebooks, aplicativos e soluções.Avalie produtos de dados mais rapidamenteNotebooks pré-incorporados e dados de amostra ajudam você a avaliar rapidamente e ter muito mais confiança de que um produto de dados é adequado para suas iniciativas de IA, ML ou análise.Evite a dependência do fornecedorReduza substancialmente o tempo para fornecer insights e evite ficar preso a compartilhamento e colaboração abertos e contínuos em nuvens, regiões ou plataformas. Integre-se diretamente com as ferramentas de sua escolha e exatamente onde você trabalha.Provedores de dados em destaque no Databricks MarketplaceTorne-se um provedor de dados no marketplace
Resources
Event: Register now for Data + AI Summit 2023
eBook: A New Approach to Data Sharing
eBook: Data, Analytics and AI Governance
Presentation: Data governance and sharing on the lakehouse, at Data + AI Summit 2022
Blogs: Marketplace public preview announcement; Databricks Marketplace announcement at Data + AI Summit 2022
Documentation: AWS, Azure, GCP
Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121 © Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
https://www.databricks.com/dataaisummit/speaker/naresh-yegireddi/# | Naresh Yegireddi - Data + AI Summit 2023 | Databricks
Naresh Yegireddi, Staff Data Engineer at Indigo AG
Naresh was born and raised in India. After finishing a master's in Electrical Engineering in 2007, he joined a multinational company as a software engineer, and he moved to the United States in 2010. He is currently a Staff Data Engineer at Indigo AG; before that, he worked in data warehousing and business intelligence technologies at Sony PlayStation, Grubhub, Comcast, Dell and AT&T.
https://www.databricks.com/dataaisummit/speaker/datin-ts-habsah-binti-nordin/# | Datin Ts. Habsah Binti Nordin - Data + AI Summit 2023 | Databricks
Datin Ts. Habsah Binti Nordin, Chief Data Officer at PETRONAS
Datin Habsah Nordin is the Head of Enterprise Data at PETRONAS. She is responsible for orchestrating PETRONAS' data strategy with the aim of liberating data seamlessly, institutionalizes a center of excellence for data at PETRONAS, and oversees the efforts to build the Enterprise Data Hub (EDH) and scale advanced analytics across the organization.
Datin Habsah is a PETRONAS scholar and holds a Bachelor of Science (BSc) in Computer Science from Case Western Reserve University, USA. She joined PETRONAS in 1995 and has since held various leadership roles in IT, Strategic Planning, Business Development, Marketing, Transformation, Internal Audit, Project Management, Data and Knowledge Management.
Prior to her current role, Datin Habsah was involved in a few successful transformation initiatives focused on business and operating model design. She is now a board member of PTSSB DMCC, a company incorporated in the United Arab Emirates (UAE). Since 2021, she has been the first female President of Kelab Sukan dan Rekreasi PETRONAS (KSRP), which was formed in 1976, and Chairman of the Board for Twin Towers Fitness Center.
Datin Habsah has completed the INSEAD Leadership Programme (Duke Program), focusing on strategic excellence. She is a certified solution-focused coach accredited by the Canadian Council of Professional Certification (CCPC Global Inc.) and a Certified Data Management Professional accredited by DAMA. She is an adjunct lecturer and industry advisory panel member for Universiti Teknologi PETRONAS in the Computer & Information Science Department, for undergraduate and postgraduate programs.
https://www.databricks.com/br/company/awards-and-recognition | Awards and Recognition | Databricks
Awards and recognition: discover all the reasons Databricks is recognized by industry leaders.
Leader in the 2022 Magic Quadrant for Cloud Database Management Systems
2022 Customers' Choice award for Cloud Database Management Systems
Leader in the 2021 Magic Quadrant for Cloud Database Management Systems
Leader in the 2021 Magic Quadrant for Data Science and Machine Learning
Lakehouse — Hype Cycle for Data Management Solutions, 2022
Companies to watch in 2023
The most innovative companies in data science
The Cloud 100
The AI 50
America's Best Startup Employers
Best Workplaces in Technology
Best Workplaces in the San Francisco Bay Area
Best Workplaces for Millennials
CNBC Disruptor 50
Best Places to Work 2022
Ready to learn more? We'd love to hear about your business goals. Our services team will do everything possible to help your business succeed. Try Databricks for free.
https://www.databricks.com/dataaisummit/speaker/anfisa-kaydak | Anfisa Kaydak - Data + AI Summit 2023 | Databricks
Anfisa Kaydak, VP, Data Product & Engineering at HealthVerity
Anfisa studied applied math at the State University in Minsk, Belarus. She started her career in the US as a web developer and quickly became fascinated with data. The journey progressed quickly from gigabytes to petabytes, from RDBMS to distributed systems, and from data exploration to complex APLD studies and pipeline engineering. She is an SME in healthcare data and analytics, and adept in data and AI technology transformations in healthcare.
https://www.databricks.com/jp/company/partners/consulting-and-si/candsi-partner-program | Consulting & SI (C&SI) Partners | Databricks
C&SI Partners
The Databricks Consulting & SI (C&SI) Partner Program is value-driven, helping you build better partnerships that deliver bigger outcomes. Join a global ecosystem of data and AI service providers and transform our joint customers into data-driven enterprises. As a partner, you play a key role in that transformation.
Sales impact: team up with our field organization to sell the lakehouse vision and expand the Databricks footprint.
Customer value: train your teams on the platform to grow hands-on capacity and capability.
Innovation: showcase your expertise by developing repeatable solutions for high-impact customer use cases.
Grow your business and build a successful partnership by taking full advantage of engagement and enablement benefits: Databricks technical training, access to the Databricks Lakehouse Platform, technical and sales support, opportunity registration and referral fees, customer investment funds, and go-to-market enablement and resources.
Contact us. Find a partner.
https://www.databricks.com/product/startups?itm_data=Ventures-StartupsPage-ApplyNow | Databricks for Startups | Databricks
Build your startup on Databricks: the most powerful platform for data and AI
Build on the Databricks Lakehouse to address all your data, analytics and AI on one platform. Accelerate speed to product while Databricks manages your data infrastructure. Prepare your product for growth with cost-efficient scalability and performance. Maintain flexibility with open source and multicloud options.
Databricks for Startups helps you get up and running quickly. Build data-driven applications on the lakehouse — the data platform that scales to all your data needs, from zero to IPO and beyond.
Free credits: get easy access to the Databricks Lakehouse Platform with up to $21K in free credits.
Expert advice: receive advice from experts and the community to help build your product.
Go to market: reach more customers with access to Databricks marketing, events and customers.
Ready to build? If you're a startup building data-driven applications and have raised VC funding, we want to hear from you.
Apply now
https://www.databricks.com/dataaisummit/speaker/matthew-hayes | Matthew Hayes - Data + AI Summit 2023 | Databricks
Matthew Hayes, Vice President of SAP Business at Qlik
Matthew Hayes is the Vice President of SAP Business at Qlik. He has been innovating solutions for the SAP market for over 20 years. With a strong technical background in SAP, Matt developed Gold Client for handling SAP Test Data Management. Currently, he works to extend Qlik's offerings to the SAP market and focuses on enabling those solutions for SAP customers and technology partners. Matt is based in Chicago and in the summer months prefers to work remotely from Wrigley Field.
https://www.databricks.com/dataaisummit/speaker/atiyah-curmally | Atiyah Curmally - Data + AI Summit 2023 | Databricks
Atiyah Curmally, Principal Environmental Specialist at International Finance Corporation
Atiyah Curmally's career centers on a passion for sustainable impact investing in emerging markets. At the International Finance Corporation, she focuses on providing insights, guidance, and practical solutions to investors, enabling them to assess risks and make decisions through the environmental, social, and governance (ESG) lens. Atiyah leads the ESG innovation and data science portfolio, including conception and development of an artificial intelligence (AI) solution called MALENA.
https://www.databricks.com/jp/company/partners/technology-partner-program | Databricks Technology Partner Program | Databricks
Technology Partner Program
Connect your product with thousands of Databricks users. Databricks integrates and promotes the best data and AI products on the market. Technology partners receive the technical and go-to-market support they need from Databricks to acquire new customers and grow their business.
Technology partner benefits:
Sales incentives — the Databricks field organization helps sell your product.
Access to customers — acquire new customers directly through Databricks Partner Connect.
Marketing support — expand your customer reach with access to marketing investments.
Product and R&D support — access to Databricks product, engineering and support staff.
Sandbox environment — build and test in a free sandbox environment.
Co-marketing programs — participate in joint marketing programs with Databricks.
Become a partner: contact us. Find a partner.
https://www.databricks.com/br/product/databricks-sql | Databricks SQL | Databricks
Databricks SQL price promotion — save over 40%. Take advantage of our 15-month promotion on Serverless SQL and the new SQL Pro.
Databricks SQL: the best data warehouse is a lakehouse.
Databricks SQL (DB SQL) is a serverless data warehouse on the Databricks Lakehouse Platform that lets you run all your SQL and BI applications at scale, with up to 12x better price/performance, a unified governance model, open formats and APIs, and the tools of your choice — with no vendor lock-in.
Best price/performance: lower costs and get the best price/performance by eliminating the need to manage, configure or scale your cloud infrastructure with the serverless model.
Built-in governance: establish one single copy of all your data using open standards, and one unified governance layer across all data teams using standard SQL.
Rich ecosystem: use SQL and tools like Fivetran, dbt, Power BI or Tableau together with Databricks to ingest, transform and query all your data in place.
Break down silos: empower every analyst to access the latest data faster for real-time analytics, and to move effortlessly from BI to ML.
How it works
Easily ingest, transform and orchestrate data from any source. Work with your data wherever it lives: out-of-the-box capabilities let analyst and analytics engineering teams easily ingest data from any kind of source — cloud storage, or enterprise applications such as Salesforce, Google Analytics or Marketo — using Fivetran, all available in one click. To manage dependencies and transform data in place, you can use the lakehouse's built-in ETL capabilities or your favorite tools, such as dbt on Databricks SQL, for best-in-class performance. "The combination of Databricks and Fivetran allowed us to build a robust, modern data pipeline in a short period of time. Fivetran had all the connectors and integrations we needed." — Justin Wille, Director of Insights and Analytics, Kreg Tool
Choose your modern BI and analytics tools. Work seamlessly with popular BI tools such as Tableau, Power BI and Looker. Analysts can now use their favorite tools to discover new business insights on the most complete and freshest data. To maximize collaboration, Databricks SQL also lets every analyst find and share new insights with a built-in SQL editor, visualizations and dashboards. "With data at our fingertips, we're far more confident that we're using the latest and most complete data to power our Power BI dashboards and reports." — Jake Stone, Senior Manager of Business Analytics, ButcherBox
Eliminate resource management with serverless compute. With Databricks SQL Serverless, you no longer need to manage, configure or scale cloud infrastructure on the lakehouse, giving your data team more time to focus on your core business. Databricks SQL warehouses provide instant, elastic SQL compute — decoupled from storage — that automatically rescales to handle high-concurrency query workloads without disruption. "Databricks SQL Serverless allows us to use the power of Databricks SQL while being much more efficient with our infrastructure." — R. Tyler Croy, Director of Platform Engineering, Scribd
Engineered end to end for best-in-class performance. Databricks SQL is optimized from the ground up to deliver the best performance for all your tools, query types and real-world applications. This includes Photon, the next-generation query engine, which, combined with SQL warehouses, delivers up to 12x better price/performance than other cloud data warehouses. "The Databricks Lakehouse Platform has enabled us to run analytics that cut the time to insight on audience behavior from weeks to minutes." — Stephane Caron, Senior Director of Business Intelligence, CBC/Radio-Canada
Centralize the storage and governance of all your data with standard SQL. Establish one single copy of all your data using the open Delta Lake format to avoid vendor lock-in. Perform in-place analytics and ETL/ELT on your lakehouse, without moving or copying data across disjointed systems. Fine-grained governance, data lineage and standard SQL make it easy to discover, secure and manage all your data across clouds with Databricks Unity Catalog. "Databricks is fundamental to our business because its lakehouse architecture provides a unified way to access, store and share actionable data." — Jagan Mangalampalli, Director of Big Data, Punchh
Built on a common data foundation, powered by the Lakehouse Platform. The Databricks Lakehouse Platform provides the most complete end-to-end data warehousing solution for all your modern analytics needs — and more. Get world-class performance at a far lower cost than cloud data warehouses, shorten the path from raw data to usable data at scale, and unify batch and streaming data. The lakehouse also lets data teams move effortlessly from descriptive to predictive analytics to uncover new insights. "Databricks has provided a platform for our data and analytics teams to access and share data across ABN AMRO. ML-based solutions drive automation and insight throughout the company." — Stefan Groot, Director of Analytics Engineering, ABN AMRO
Migrate to Databricks. Tired of data silos, slow performance and the high costs of outdated systems like Hadoop and enterprise data warehouses? Migrate to the Databricks Lakehouse: the modern platform for all your data, analytics and AI use cases.
Integrations. Seamless ecosystem integrations give your data teams maximum flexibility: integrate business-critical data with Fivetran, transform it in place with dbt, and uncover new insights with Power BI, Tableau or Looker — all without moving your data into an outdated data warehouse. Data ingestion and ETL, data governance, BI and dashboards, plus any other Apache Spark™-compatible client. "Today more than ever, organizations need a data strategy that delivers the speed and agility to be adaptable. As companies rapidly move their data to the cloud, we're seeing growing interest in analytics on the data lake. Databricks SQL delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need. We're proud to partner with Databricks to make this opportunity a reality." — Francois Ajenstat, Chief Product Officer, Tableau
Discover more: Delta Lake, Partner Connect, Unity Catalog, Delta Live Tables
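Because SQL warehouses sit behind standard REST endpoints, any Spark-compatible or plain HTTP client can submit queries; one route is the Databricks SQL Statement Execution API (`POST /api/2.0/sql/statements`). Below is a minimal sketch of composing such a request with only the standard library — the workspace URL and warehouse ID are hypothetical, and a real call would add an `Authorization: Bearer …` header and send the request with `urllib.request`:

```python
import json

# Hypothetical workspace and SQL warehouse — substitute your own.
WORKSPACE = "https://my-workspace.cloud.databricks.com"
WAREHOUSE_ID = "abc123def456"

def build_statement_request(sql: str) -> tuple[str, bytes]:
    """Compose the URL and JSON body for one statement-execution call."""
    payload = {
        "warehouse_id": WAREHOUSE_ID,   # which SQL warehouse runs the query
        "statement": sql,               # the SQL text itself
        "wait_timeout": "30s",          # block up to 30s for small results
    }
    return f"{WORKSPACE}/api/2.0/sql/statements", json.dumps(payload).encode()

url, body = build_statement_request("SELECT 1 AS probe")
print(url)
print(json.loads(body)["statement"])  # SELECT 1 AS probe
```

The sketch illustrates the decoupling the page describes: the client only names a warehouse and a statement, while compute sizing and scaling stay on the server side.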
All the resources you need. All in one place.
Explore our resource library, where you'll find ebooks and videos on the benefits of the lakehouse.
Ebooks
- Building the Data Lakehouse, by Bill Inmon, the father of the data warehouse
- Why the Data Lakehouse Is Your Next Data Warehouse
- Data, Analytics and AI Governance
- The Big Book of Data Engineering
- Migrating From a Data Warehouse to a Data Lakehouse for Dummies

Events
- Inner workings of the lakehouse at the Data + AI World Tour
- Webinar on performance tuning best practices on the lakehouse: the life of a query
- Free Databricks SQL training, on demand
- The Best Data Warehouse Is a Lakehouse

Blogs
- Databricks Sets Official Data Warehousing Performance Record
- Announcing the General Availability of Databricks SQL
- Evolution of the SQL Language at Databricks: ANSI by Default and Easier Migrations From Data Warehouses
- Deploying dbt on Databricks Just Got Even Simpler
- Data Warehousing Modeling Techniques and Their Implementation on the Databricks Lakehouse Platform
- How to Build a Marketing Analytics Solution With Fivetran and dbt on the Databricks Lakehouse

Ready to get started? Try Databricks for free, or join the community.

Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121
© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.
https://www.databricks.com/p/webinar/delta-lake-the-foundation-of-your-lakehouse?itm_data=lakehouse-link-lakehousefoundation | Delta Lake: The Foundation of Your Lakehouse | Databricks

Virtual Event + Live Q&A
Delta Lake: The Foundation of Your Lakehouse
Bring reliability, performance and security to your data lake
Available on-demand

As an open format storage layer, Delta Lake delivers reliability, security and performance to data lakes. Customers have seen 48x faster data processing, leading to 50% faster time to insight, after implementing Delta Lake.

Watch a live demo and learn how Delta Lake:
- Solves the challenges of traditional data lakes, giving you better data reliability, support for advanced analytics and lower total cost of ownership
- Provides the perfect foundation for a cost-effective, highly scalable lakehouse architecture
- Offers auditing and governance features to streamline GDPR compliance
- Has dramatically simplified data engineering for our customers

Speakers
- Himanshu Raja, Product Management, Databricks
- Sam Steiny, Product Marketing, Databricks
- Brenner Heintz, Product Marketing, Databricks
- Barbara Eckman, Software Architect, Comcast

Transcript
Sam Steiny: Hi, and welcome to the Databricks event, Delta Lake: The Foundation of Your Lakehouse. My name is Sam Steiny and I work in product marketing at Databricks, focusing specifically on data engineering and on Delta Lake. I'm excited to be here today. I get to be the MC for today's event, and I will be guiding you through today's sessions. More and more, we've seen the term lakehouse referenced in the news, at events, in tech blogs and in thought leadership. And beyond our work at Databricks, organizations across industries have increasingly turned to this idea of a lakehouse as the future for unified analytics, data science and machine learning.

Sam Steiny: In today's event, we'll see an overview of Delta Lake, which is the secure data storage and management layer for your data lake that really forms the foundation of a lakehouse. We'll see a demo of Delta Lake in action, and we'll hear how Comcast has leveraged Delta Lake to bring reliability, performance and security to their data. We'll finish today's event with a live Q&A, so come prepared with your questions and we'll do our best to answer as many as possible. Before we start, just some quick housekeeping: today's session is being recorded, so it'll be available on demand to anyone who has registered.

Sam Steiny: Also, if you have any questions throughout the event, please feel free to add them to the Q&A box. We'll do our best to answer them in real time there, but we'll also answer any leftover questions, as well as additional ones, in the live Q&A at the end of the session. Now, before we get to our speakers, I wanted to share a quick overview of Delta Lake in a video we recently launched.
This will give you a high-level understanding of what Delta Lake is, before Himanshu, who is the Delta Lake product manager, goes into more detail about Delta Lake and how it forms the foundation of a lakehouse.

Speaker 3: Businesses today have the ability to collect more data than ever before. And that data contains valuable insights into your business and your customers, if you can unlock it. As most organizations have discovered, it's no simple task to turn data into insights. Today's data comes in a variety of formats: video, audio and text. Data lakes have become the de facto solution because they can store these different formats at a low cost and don't lock businesses into a particular vendor like a data warehouse does. But traditional data lakes have challenges. As data lakes accumulate data in different formats, maintaining reliable data is challenging and can often lead to inaccurate query results.

Speaker 3: The growing data volume also impacts performance, slowing down analysis and decision-making, and with few auditing and governance features, data lakes are very hard to properly secure and govern. With all of these challenges, as much as 73% of company data goes unused for analytics and decision-making, and the value in it is never realized. Delta Lake solves these challenges. Delta Lake is a data storage and management layer for your data lake that enables you to scale insights throughout your organization with a reliable single source of truth for all data workloads, both batch and streaming, and to increase productivity by optimizing for speed at scale with performance features like advanced indexing and schema enforcement.

Speaker 3: Operate with flexibility in an open source environment stored in Apache Parquet format, and reduce risk by quickly and accurately updating data in your data lake for compliance, while maintaining better data governance through audit logging.
By unlocking your data with Delta Lake, you can do things like dramatically simplify data engineering by performing ETL processes directly on the data lake, make new real-time data instantly available for data analysis, data science and machine learning, and gain confidence in your ability to reliably meet compliance standards like GDPR and CCPA.

Speaker 3: Delta Lake on Databricks brings reliability, performance and security to your data, all in an open format, making it the perfect foundation for a cost-effective, highly scalable lakehouse architecture. Delta Lake: the open, reliable, performant and secure foundation of your lakehouse.

Sam Steiny: Great. So, with that high-level view, now you have an understanding of Delta Lake, and I'm going to pass it over to Himanshu Raja, who's the product manager for Delta Lake at Databricks. He's going to do a deeper dive into Delta Lake and explain how it really enables a lakehouse for our customers. Over to you, Himanshu.

Himanshu Raja: Thank you, Sam. I'm super excited to be here and talk to you about Delta Lake and why it is the right foundation for lakehouse. In today's session, I will cover the challenges of building a data analytics stack and why lakehouse is the only future-proof solution, what Delta Lake is, and why it is the best foundation for your lakehouse. Brenner will then jump into the most exciting part of the session and do a demo. After the session, you will have enough context, and links to the supporting material, to get started and build your first Delta Lake.

Himanshu Raja: Every company is feeling the pull to become a data company, because when large amounts of data are applied to even simple models, the improvements on use cases are exponential. And here at Databricks, our entire focus is on helping customers apply data to their toughest problems. I'll share examples of two such customers, Comcast and Nationwide.
Comcast is a great example of a media company that has successfully adopted data and machine learning to create new experiences for their viewers that help improve satisfaction and retention.

Himanshu Raja: They have built a voice-activated remote control that allows you to speak into the remote, ask it a question, and it will provide some really relevant results, leveraging things like natural language processing and deep learning. And they've built all of this on top of the Databricks platform. Nationwide is one of the largest insurance providers in the U.S. Nationwide saw that the explosive growth in data availability and increasing market competition was challenging them to provide better pricing to their customers. With hundreds of millions of insurance records to analyze for downstream ML, Nationwide realized that their legacy batch analysis process was slow and inaccurate, providing limited insights to predict the frequency and severity of claims.

Himanshu Raja: With Databricks, they have been able to employ deep learning models at scale to provide more accurate pricing predictions, resulting in more revenue from claims. Because of this potential, it's not surprising that 83% of CEOs say AI is a strategic priority, according to a report published by MIT Sloan Management Review, or that Gartner predicts AI will generate almost a trillion dollars in business value in only a couple of years. But it is very hard to get right. Gartner says 85% of big data projects will fail. VentureBeat published a report that said 87% of data science projects never make it into production. So, while some companies are having success, most still struggle.

Himanshu Raja: So, the story starts with data warehouses, which, it is hard to believe, will soon celebrate their 40th birthday. Data warehouses came around in the 80s and were purpose-built for BI and reporting. Over time they have become essential, and today every enterprise on the planet has many of them.
However, they weren't built for modern data use cases. They have no support for data like video, audio or text, datasets that are crucial for modern use cases. They require very structured data, queryable only with SQL. As a result, there is no viable support for data science or machine learning. In addition, there is no support for real-time streaming. They are great for batch processing, but either do not support streaming or can be cost-prohibitive.

Himanshu Raja: And because they are closed and proprietary systems, they force you to lock your data in, so you cannot easily move data around. So, today the result of all of that is that most organizations will first store all of their data in data lakes and blob stores, and then move subsets of it into the data warehouse. So, then the thinking was that potentially data lakes could be the answer to all our problems. Data lakes came around about 10 years ago, and they were great because they could indeed handle all your data, and they were good for data science and machine learning use cases. And data lakes served as a great starting point for a lot of enterprises.

Himanshu Raja: However, they aren't able to support the data warehousing or BI use cases. Data lakes are actually more complex to set up than a data warehouse. A warehouse has a lot of familiar semantics, like ACID transactions. With data lakes, you are just dealing with files, so those abstractions are not provided; you really have to build them yourself. And they're very complex to set up. And even after you do all of that, the performance is not great. You're just dealing with files, in the end. In most cases, customers end up with a lot of small files, and even the simplest queries will require you to list all those files. That takes time.

Himanshu Raja: And then lastly, when it comes to reliability, they are not that great either. We actually have a lot more data in the data lakes than in the warehouse, but is the data reliable?
Can I actually guarantee that the schema is going to stay the same? How easy is it for an analyst to merge a bunch of different schemas together? As a result of all of these problems, data lakes have sort of turned into these unreliable data swamps where you have all the data, but it's very difficult to make any sense of it. So, understandably, in the absence of a better alternative, what we are seeing with most organizations is a strategy of coexistence.

Himanshu Raja: So, this is what a data swamp looks like. There are tons of different tools to power each architecture required by a business unit or the organization. It's a whole slew of different open source tools that you have to connect. In the data warehousing stack, on the left side, you are often dealing with proprietary data formats. And if you want to enable advanced use cases, you have to move the data across to other stacks. It ends up being expensive and resource-intensive to manage. And what does it result in? Because the systems are siloed, the teams become siloed too. Communication slows down, hindering innovation and speed.

Himanshu Raja: Different teams often end up with different versions of the truth. The result is multiple copies of data, no consistent security governance model, closed systems, and disconnected, less productive data teams. So, how do we get the best of both worlds? We want some things from the data warehouse, and we want some things from the data lakes. We want the performance and reliability of the data warehouses, and we want the flexibility and the scalability of the data lakes. This is what we call the lakehouse paradigm. And the idea here is that the data is in the data lake, but now we are going to add some components so that we can do all the BI and reporting of the warehouse and all the data science and machine learning of the data lakes, and also support streaming analytics. So, let's build a lakehouse.
What are the things we need to build a lakehouse?

Himanshu Raja: We said that we want all our data to be in a really scalable storage layer, and we want a unified platform where we can achieve multiple use cases. So, we need some kind of transactional layer on top of that data storage layer. What you really need is something like ACID compliance, so that when you write data, it either fully succeeds or fully fails, and things are consistent. That structured transaction layer is what Delta Lake is. And then the other requirement we talked about was performance. To support the different types of use cases, it needs to be really fast, and we have a lot of data that we want to work with. So, there is the Delta engine, a high-performance query engine that Databricks has created in order to support different types of use cases, whether it is SQL, data science, ETL, BI reporting or streaming, all on top of the engine to make it really, really fast.

Himanshu Raja: So, let's do a deep dive on what Delta Lake is. Delta Lake is an open, reliable, performant and secure data storage and management layer for your data lake that enables you to create a true single source of truth. Since it's built upon Apache Spark, you are able to build high-performance data pipelines to clean your data from raw ingestion to business-level aggregates. And given the open format, it allows you to avoid unnecessary replication and proprietary lock-in. Ultimately, Delta Lake provides the reliability, performance and security you need to solve your downstream data use cases. Next, I'm going to talk about each of those benefits of Delta Lake. The first and foremost benefit that you get with Delta Lake is high-quality, reliable data in your analytics stack.

Himanshu Raja: Let me just talk about three key things here. The first is ACID transactions. The second is schema enforcement and schema evolution. And the third is unified batch and streaming.
So, on ACID transactions: Delta employs an all-or-nothing ACID transaction approach to guarantee that any operation you do on your data lake either fully succeeds or gets aborted so that it can be rerun. On schema enforcement: Delta Lake uses schema validation on write, which means that all new writes to a table are checked for compatibility with the target table's schema at write time. If the schema is not compatible, Delta Lake cancels the transaction altogether, no data is written, and it raises an exception to let the user know about the mismatch.

Himanshu Raja: We have very recently introduced capabilities to also do schema evolution, where we can evolve the schema on the fly as the data is coming in, especially in cases where the data is semi-structured or unstructured and you may not know what the data types are, or even, in a lot of cases, what the incoming columns are. The third thing I would like to talk about is unified batch and streaming. Delta is able to handle both batch and streaming data, including the ability to concurrently write batch and streaming to the same table. Delta Lake directly integrates with Spark Structured Streaming for low-latency updates.

Himanshu Raja: Not only does this result in a simpler system architecture, by not requiring you to build a Lambda architecture anymore, it also results in a shorter time from data ingest to query results. The second key advantage of Delta Lake is performance: lightning-fast performance. There are two aspects to performance in a data analytics stack. One is how the data is stored, and the other is performance during query, at run time. So, let's talk about how the data is stored and how Delta optimizes the data storage format as well. Delta comes with out-of-the-box capabilities to store the data optimally for querying. One such capability is Z-ordering, where the data is automatically structured along multiple dimensions for fast query performance.
Delta also has data skipping, where Delta maintains file statistics so that the data subsets relevant to the queries are used instead of the entire tables.

Himanshu Raja: We don't have to go and read all the files; files can be skipped based on the statistics. And then there is auto-optimize, a set of features that automatically compacts small files into fewer larger files so that query performance is great out of the box. It pays a small pause during writes to give a really great benefit for those tables during querying. So, that's the part about how the data is stored. Now, let's talk about the Delta engine, which comes into play when you actually query that data. Delta engine has three key components to provide super fast performance: Photon, the query optimizer and caching. Photon is a native vectorized engine, fully compatible with Apache Spark, built to accelerate all structured and semi-structured workloads by more than 20x compared to Spark 2.4.

Himanshu Raja: The second key component of Delta engine is the query optimizer. The query optimizer extends Spark's cost-based optimizer and adaptive query execution with advanced statistics to provide up to 18x faster query performance for data warehousing workloads than Spark 3.0. And the third key component of Delta engine is caching. Delta engine automatically caches I/O data and transcodes it into a more CPU-efficient format to take advantage of NVMe SSDs, providing up to 5x faster performance for table scans than Spark 3.0. It also includes a second cache for query results, to instantly provide results for any subsequent reruns. This improves performance for repeated queries like dashboards, where the underlying tables are not changing frequently.

Himanshu Raja: So, let me talk about the third main benefit of Delta Lake, which is to provide security and compliance at scale.
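The layout optimizations Himanshu describes (compaction and multi-dimensional clustering) are exposed as SQL commands on Databricks. A minimal sketch; the table and column names here are placeholders, not from the webinar:

```sql
-- Compact small files and cluster the data along a commonly
-- filtered column, so file-level statistics can skip more data
-- (loans_delta and funded_amnt are placeholder names)
OPTIMIZE loans_delta
ZORDER BY (funded_amnt);
```

Z-ordering is most useful on a high-cardinality column that appears frequently in query predicates.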
Delta Lake reduces risk by enabling you to quickly and accurately update data in your data lake to comply with regulations like GDPR, and to maintain better data governance through audit logging. Let me talk about two specific features: time travel, and table and role-based access controls. With time travel, Delta automatically versions the big data that you store in your data lake and enables you to access any historical version of that data. This temporal data management simplifies your data pipeline by making it easy to audit, to roll back data in case of accidental bad writes or deletes, and to reproduce experiments and reports.

Himanshu Raja: Your organization can finally standardize on a clean, centralized, versioned big data repository in your own cloud storage for your analytics. The second feature I would love to talk about is table and role-based access controls. With Delta Lake, you can programmatically grant and revoke access to your data based on a specific workspace or role, to ensure that your users can only access the data that you want them to. With Databricks' extensive ecosystem of partners, customers can enable a variety of security and governance functionality based on their individual needs.

Himanshu Raja: Lastly, but one of the most important benefits of Delta Lake: it's open and agile. Delta Lake is an open format that works with other open source technologies, avoiding vendor lock-in and opening up an entire community and ecosystem of tools. All the data in Delta Lake is stored in the open Apache Parquet format, allowing data to be read by any compatible reader. Developers can use Delta Lake with their existing data pipelines with minimal changes, as it is fully compatible with Spark, the most commonly used big data processing engine.
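The table access controls mentioned above can be expressed in Spark SQL on Databricks. A sketch only; the table name and the group name are hypothetical:

```sql
-- Grant read-only access on a single table to a group
GRANT SELECT ON TABLE loans_delta TO `data-analysts`;

-- Revoke that access again
REVOKE SELECT ON TABLE loans_delta FROM `data-analysts`;
```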
Delta Lake also supports SQL DML out of the box, to enable customers to migrate SQL workloads to Delta simply and easily.

Himanshu Raja: So, let's talk about how we have seen customers leverage Delta Lake for a number of use cases. Primary among them is improving data pipelines, doing ETL at scale; unifying batch and streaming, with direct integration with Apache Spark Structured Streaming to run both batch and streaming workloads instead of building a Lambda architecture; doing BI on your data lake, with our Delta engine's super fast query performance, so you don't need to choose between a data lake and a data warehouse; as we talked about with the lakehouse, you can do BI directly on your data lake. And then there is meeting regulatory needs, with standards like GDPR, by keeping a record of historical data changes. And who are these users?

Himanshu Raja: Delta Lake is being used by some of the largest Fortune 100 companies in the world. We have customers like Comcast, Viacom, Condé Nast, McAfee and Edmunds. In fact, here at Databricks, all of our data analytics is done using Delta Lake. So, I would love to dive in and talk about the Starbucks use case, to give you an idea of how our customers have used Delta Lake in their ecosystem. Starbucks today does demand forecasting and personalizes the experiences of their customers on their app. Their architecture was struggling to handle petabytes of data ingested for downstream ML and analytics, and they needed a scalable platform to support multiple use cases across the organization.

Himanshu Raja: And with Azure Databricks and Delta Lake, their data engineers are able to build pipelines that support batch and real-time workloads on the same platform. They have enabled their data science teams to blend various datasets to create new models that improve the customer experience.
And most importantly, data processing performance has improved dramatically, allowing them to deploy environments and deliver insights in minutes. So, let me wrap up by summarizing what Delta Lake can do for you and why it is the right foundation for your lakehouse. As we discovered, with Delta Lake you can improve analytics, data science and machine learning throughout your organization by enabling teams to collaborate and ensuring that they are working on reliable data, to improve the speed with which they make decisions.

Himanshu Raja: You can simplify data engineering, reduce infrastructure and maintenance costs with the best price-performance, and enable a multi-cloud secure infrastructure platform with Delta Lake. So, how do you get started on Delta Lake? It's actually really easy. If you already have a Databricks deployment on Azure or AWS, and now GCP, and you deploy a cluster with DBR, the Databricks Runtime, release version 8.0 or higher, you actually do not need to do anything: Delta is now the default format for all created tables and the DataFrame APIs. But we also have plenty of resources for you to try out the product and learn.

Himanshu Raja: It's actually a lot of fun to deploy your first Delta Lake and build a really cool dashboard using notebooks. If you have not tried Databricks before, you can sign up for a free trial account and then follow our getting started guide. And Brenner will do a demo very shortly to showcase the capabilities that we talked about. So, with that, over to you, Sam.

Sam Steiny: Awesome. Thank you, Himanshu. That was great. Now, I'm going to pass the stage over to Brenner Heintz, and Brenner is going to take us through a demo that really brings Delta Lake to life. Now that you've heard what it is and how powerful it can be, let's see it in action. So, over to you, Brenner.

Brenner Heintz: My name is Brenner Heintz.
I am a technical PMM at Databricks, and today I'm going to show you how Delta Lake provides the perfect foundation for your lakehouse architecture. We're going to do a demo, and I'm going to show you how it works from a practitioner's perspective. Before we do so, I want to highlight the Delta Lake cheat sheet. I've worked on this with several of my colleagues, and the idea here is to provide a resource for practitioners like yourself to quickly and easily get up to speed with Delta Lake and be productive with it very, very quickly. We've provided most, if not all, of the commands in this notebook as part of the cheat sheet, so I highly encourage you to download this notebook. You can click directly on this image, and it'll take you directly to the cheat sheet, which provides a one-pager for Delta Lake with Python and a one-pager for Delta Lake with Spark SQL.

Brenner Heintz: So, first, in order to use Delta Lake, you need to be able to convert your data to Delta Lake format. And the way that we're able to do that is, instead of saying parquet as part of your CREATE TABLE or your Spark DataFrame writer command, all you have to do is replace that with the word delta, to be able to start using Delta Lake right away. So, here's a look at what that looks like. With Python, we can use Spark to read in our data in Parquet format. You could also read in your data in CSV or other formats, for example; Spark is very flexible in that way. And then we simply write it out in Delta format by indicating delta here.

Brenner Heintz: And we're going to save our data in the loans Delta table. We can do the same thing with SQL. We can use a CREATE TABLE command using Delta to save our table in Delta format. And finally, the CONVERT TO DELTA command makes it really easy to convert our data to Delta Lake format in place.
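The approaches Brenner describes look roughly like this in Spark SQL. A sketch only; the path is a placeholder, and the table name follows the demo:

```sql
-- Create a Delta table from existing Parquet files
CREATE TABLE loans_delta
USING DELTA
AS SELECT * FROM parquet.`/data/loans/`;

-- Or convert an existing Parquet table to Delta in place,
-- without rewriting it through another format
CONVERT TO DELTA parquet.`/data/loans/`;
```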
So, now that we have shown you how to convert your data to Delta Lake format, let's take a look at a Delta Lake table and what that looks like. I've run the cell already: we have 14,705 batch records in our loans Delta table. Today, we're working with some data from Lending Club, and you can see the columns that are currently part of our table here.

Brenner Heintz: So, I went ahead and kicked off a couple of write streams to our table. The idea here was to show you that Delta Lake tables are able to handle batch and streaming data, and to integrate them straight out of the box without any additional configuration or anything else that's needed. You don't need to build a Lambda architecture, for example, to integrate both batch and real-time data; Delta Lake tables can easily manage both at once. So, as you can see, we're writing about 500 records per second into our existing Delta Lake table. And we're doing so with two different writers, just to show you that you can concurrently both read and write from Delta Lake tables consistently, with ACID transactions ensuring that you never deal with a pipeline breakage that corrupts the state of your table, for example.

Brenner Heintz: Everything in Delta Lake is a transaction, and so this allows us to create isolation between different readers and writers. That's really powerful; it saves us a lot of headache and a lot of time undoing mistakes that we may have made if we didn't have ACID transactions. In addition to those two streaming writes, as promised, I've also created two streaming reads to show you what's happening in the table in near real time. So, we had those initial 14,705 batch records here.
But since then, we have about 124,000 streaming records that have entered our table.

Brenner Heintz: This is essentially the same chart, but showing you what's happening over each 10-second window. Each of these bars represents a 10-second window, over which, as you can see, since our streams began, about 5,000 records per stream are written to our table at any time. So, all of this is just to say that Delta Lake is a very powerful tool that allows you to easily integrate batch and streaming data straight out of the box. It's very easy to use, and you can get started right away. To put the cherry on top, we added a batch query just for good measure, and we plotted it using Databricks' built-in visualization tools, which are very easy to use and allow you to visualize things very quickly.

Brenner Heintz: So, now that we've shown how easy it is to integrate batch and streaming data with Delta Lake, let's talk about data quality. You need tools like schema enforcement and schema evolution in order to enforce the quality of your tables. And the reason for that is that what you don't want is upstream data sources adding additional columns, removing columns, or otherwise changing your schema without you knowing about it, because that can cause a pipeline breakage that then affects all of your downstream data tables. So, to avoid that, we can use schema enforcement, first and foremost. Here I've created this new DataFrame that contains a new column, the credit score column, which is not present in our current table.
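The new-column scenario above can also be handled in Spark SQL. A sketch; the column name follows the demo, and the SET key assumes Databricks' Delta session configuration:

```sql
-- Evolve the schema explicitly by adding the new column
ALTER TABLE loans_delta ADD COLUMNS (credit_score INT);

-- Or allow writes in this session to merge new columns automatically
SET spark.databricks.delta.schema.autoMerge.enabled = true;
```

The explicit ALTER TABLE route keeps schema changes deliberate; the session setting trades that control for convenience.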
However, as long as we're aware and we want to intentionally migrate our schema, we can do so by adding a single command to our write command, we include the merge schema option. And now, that extra column is successfully written to our table, and we're also able to evolve our schema. So, now, when we try and select the records that were in our table, in our new data table, you can see that those records were in fact successfully written to the table and that new credit score column is now present in the schema of our table as well.Brenner Heintz: So, these tools give you, they're very powerful and they allow you to enforce your data quality the way that you need to in order to transition your data from raw unstructured data to high quality structured data, that's ready for downstream apps and users overtime. So, now, that we've talked about schema enforcement and scheme evolution, I want to move on to Delta Lake time travel. Time travel is a really powerful feature of Delta Lake. And because everything in Delta Lake as a transaction, and we're tracking all of the transactions that are made to our Delta Lake tables over time in the transaction log, that allows us to go back in time and recreate the state of our Delta Lake table at any point in time.Brenner Heintz: First, let's look at what that looks like. So, at any point, we can access the transaction log by running this describe history command. And as you can see, each of these versions of our table represent some sort of transaction, some sort of change that was made to our tables. So, our most recent change was that we upended those brand new records with a new column to our Delta Lake table. So, you can see that transaction here, before that we had some streaming updates. All of those rights that were occurring to our table were added as transactions. 
And basically this allows you to go back and use the version number or timestamp to query historical versions of your Delta Lake tables at any point. That's really powerful, because you can even do creative things like compare the current version of a table to a previous version to see what has changed since then.

Brenner Heintz: So let's go ahead and do that. We'll use time travel to view the original version of our table, version zero. This should include just those 14,705 records that we started with, because at version zero we hadn't streamed any new records into our table at all. And as you can see, those 14,705 records are the only records present in version zero. There is no credit score column either, because of course back in version zero we had not yet evolved the Delta Lake table's schema.

Brenner Heintz: Compare those 14,705 records to the current number of records in our table, which is over 326,000. Finally, another thing you can do with Delta Lake time travel is restore a previous version of your table at any given point in time. This is really powerful: if you accidentally delete a column or some records you didn't mean to, you can always go back and use the RESTORE command to bring the current version of your table back to exactly the way your data was at that timestamp or version number. As you can see, when we run this command to restore our table to its original state, version zero, we are able to do so successfully.
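The time travel and restore steps described here map onto Delta Lake SQL like this (again assuming the hypothetical `loans` table name; the version numbers follow the demo):

```sql
-- Inspect the transaction log: one row per commit
-- (streaming updates, appends, schema changes, restores, ...).
DESCRIBE HISTORY loans;

-- Query the table as of an earlier version number...
SELECT COUNT(*) FROM loans VERSION AS OF 0;

-- ...or as of a timestamp (illustrative value).
SELECT COUNT(*) FROM loans TIMESTAMP AS OF '2021-01-01 00:00:00';

-- Roll the live table back to its original state.
RESTORE TABLE loans TO VERSION AS OF 0;
```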
Now, when we query it, we only get those 14,705 records as part of the table.

Brenner Heintz: Next, one of the features that I think developers, data engineers and other data practitioners are really looking for when they're building their lakehouse is the ability to run simple DML commands with just one or two lines of code: operations like deletes, updates, merges and inserts. On a traditional data lake, those simply aren't possible. With Delta Lake, you can run those commands and they simply work, and they do so transactionally. They're very, very simple, so managing change data becomes much, much easier when you have these commands at your disposal.

Brenner Heintz: Let's take a look. We'll choose user ID 4420 as our test case here, and we'll modify their data specifically to show you what Delta Lake can do. As you can see, they are currently present in our table, but if we run this DELETE command specifying that user and then select all from our table, we now have no results: the delete has occurred successfully. Next, when we look at the DESCRIBE HISTORY command, you can see the delete that we just carried out is now present in the transaction log. You can also see the restore that we did to jump back to the original version of our table, version zero. We can also insert records directly back into our table if we want to do so.

Brenner Heintz: Here, we're going to use time travel to look at version zero, the original version of our table before this user was deleted, and then insert that user's data back in. Now, when we run the select all command, the user is again present in our table; the INSERT INTO command works great. Next, there's the UPDATE command. Updates are really useful if you have row-level changes that you need to make. Here, we're going to change this user's funded amount to 22,000.
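A sketch of the delete and the time-travel-based re-insert just described, with the same caveat that the table and column names (`loans`, `user_id`) are assumptions:

```sql
-- Delete one user's records; this is a single atomic transaction
-- and shows up as its own commit in DESCRIBE HISTORY.
DELETE FROM loans WHERE user_id = 4420;

-- Re-insert that user's rows by reading them from the pre-delete
-- version of the same table via time travel.
INSERT INTO loans
SELECT * FROM loans VERSION AS OF 0 WHERE user_id = 4420;
```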
Actually, let's make it 25,000; it looks like it was already 22,000 before.

Brenner Heintz: So we'll update that number, and when we query our table, the user's funded amount has in fact been updated successfully. Finally, in Delta Lake you have the ability to do really powerful merges. You can have a table full of change data that, for example, represents inserts and updates to your Delta Lake table, and with Delta Lake you can do an upsert in just one single step: for each row in the DataFrame that you want to write to your Delta Lake table, if that row is already present in your table, you simply update the values in that row, whereas if that row is not present, you insert it.

Brenner Heintz: That's what's known as an upsert, and upserts are completely possible and very easy in Delta Lake. They make managing your Delta Lake tables very simple. First we create a quick DataFrame with just two records in it: we want to add user 4420's data back into our table, and we also created a user whose user ID is one under 1 million, so it's 999,999. This user is not currently present in our table, so we want to insert them. This is what our little DataFrame looks like, and as you can see, we have one of each: an update and an insert. When we run our MERGE INTO command, Delta Lake is able to identify the rows that already exist, like user 4420, and those that don't; where they don't exist, we simply insert them.

Brenner Heintz: As you can see, these updates and inserts occurred successfully, and Delta Lake has no problem with upserts. Finally, the last thing I want to point out are some specific performance enhancements that are offered as part of Delta Lake, a couple of which are only available in the Databricks version of Delta Lake at the moment. First, there's the VACUUM command.
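The row-level update and the two-row upsert can be sketched in Delta Lake SQL as follows. The `loans` and `upserts` table names and the column names are illustrative assumptions; the user IDs follow the demo:

```sql
-- Row-level update: change one user's funded amount in place.
UPDATE loans SET funded_amnt = 25000 WHERE user_id = 4420;

-- Upsert: the upserts table holds change data with one existing user
-- (becomes an UPDATE) and one brand-new user, 999999 (becomes an INSERT).
MERGE INTO loans AS t
USING upserts AS s
  ON t.user_id = s.user_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```

`UPDATE SET *` / `INSERT *` is Delta Lake shorthand for copying every column from the source row when the schemas line up.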
The VACUUM command takes a look at the files associated with your table, and it removes any files that are no longer part of the current version and are older than a retention period that you specify. This allows you to clean up old versions of your table that fall outside that retention period, and save on cloud storage costs that way.

Brenner Heintz: Another thing you can do on Databricks Delta Lake is cache the results of specific commands in memory. If you have a table that your downstream analysts tend to always group by a specific dimension, you can cache that SQL command, and it will return much more quickly; that way it's able to avoid doing a full read of your data, for example. You also have the ability to use the Z-order OPTIMIZE command, which is really powerful. Z-order optimize looks at the layout of your data tables and figures out an efficient way to place your data in different files. It lays out your files in an optimized fashion, which allows you to save on cloud storage costs, because the layout is typically much more compact than it was at the start, and it also optimizes those tables for read and write throughput.

Brenner Heintz: So it's very powerful: it speeds up your queries and ultimately saves you on storage and compute costs. So that's the demo. I hope you've enjoyed it. Again, take a look at the Delta Lake cheat sheet that we will post in the description or in the chat that accompanies the presentation. Thanks so much. Check out Delta Lake and join us on GitHub, on Slack, or as part of our mailing list.

Sam Steiny: Awesome. Thanks, Brenner. That was really great. I'm excited now to be joined by Barbara Eckman.
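The three maintenance commands just mentioned look roughly like this in SQL; the table name, retention window, and Z-order column are illustrative assumptions (and, as noted above, CACHE SELECT and OPTIMIZE/ZORDER are Databricks-specific at the time of the talk):

```sql
-- Remove files no longer referenced by the table that are older than
-- the retention window (168 hours = the default 7 days).
VACUUM loans RETAIN 168 HOURS;

-- Databricks-only: cache a hot subset of the table for faster repeated reads.
CACHE SELECT * FROM loans WHERE addr_state = 'CA';

-- Databricks-only: compact small files and co-locate related data
-- so queries filtering on user_id touch fewer files.
OPTIMIZE loans ZORDER BY (user_id);
```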
Barbara is a senior principal software architect at Comcast, and she's going to be sharing her experience with Delta Lake and how working with Databricks has made an impact on her day-to-day work and on the Comcast business. So thanks so much for being here, Barbara. We're super excited to have you.

Barbara Eckman: Hi, everybody. Really glad to be here. Hope you're all doing well. I'm here to talk about hybrid cloud access control in a self-service compute environment here at Comcast. I want to briefly mention that Comcast takes very seriously its commitment to our customers to protect their data. I'm part of what we call the Comcast data experience big data group, and big data in this case means not only public cloud but also on-prem data. So we have a heterogeneous data set, which offers some challenges, and challenges are fun, right? Our vision is that data is treated as an enterprise asset. This is not a new idea, but it's an important one.

Barbara Eckman: And our mission is to power the Comcast enterprise through self-service platforms: data discovery, lineage, stewardship, governance, engineering services, all those important things that enable people to really use the data in important ways. And we know, as many do, that the most powerful business insights come from models that integrate data spanning silos: insights for improving the customer experience as well as business value. As for what this means for the business, there are some examples. Basically, this is based on the tons of telemetry data that we capture from sensors in Comcast's network. We capture things like latency, traffic, signal-to-noise ratio, downstream and upstream error rates, and other things that I don't even know the meaning of.

Barbara Eckman: But this enables us to do things that improve the customer experience, like planning the network topology: if there's a region that has a ton of traffic, we might change the policy to support that.
Minimizing truck rolls: truck rolls are what we call it when the Comcast cable technician comes to your house, and in these COVID times, we really would like to minimize that even more. If we can analyze the data ahead of time, we can perhaps make adjustments, or suggest adjustments the user can make, to minimize the need for people to come to their house.

Barbara Eckman: We can monitor, predict problems and remedy them, often before the user even knows, because of this data, and this involves both the telemetry data and integrating it with other kinds of data across the enterprise. And then optimizing network performance for a region or for the whole household. This is really important stuff and it really helps the customers, and we're working to make it even more prevalent. So, what makes your life hard? This is a professional question; if you want to talk about what makes your life hard personally, we can do that later. But what makes your life hard as a data professional?

Barbara Eckman: People usually say, "I need to find the data. If I'm going to be integrating data across silos, I need to find it. I know where it is in my own silo, but maybe not elsewhere." And the way we do that is metadata search and discovery, which we do through Elasticsearch. Then, once I find data that might be of interest to me, I need to understand what it means. What someone calls an account ID might not be the same account ID that you are used to calling an account ID; there are billing IDs and back-office account IDs. You need to know what a field means in order to join it in a way that makes sense, as opposed to Franken-data, monster data that isn't really appropriately joined. We need to know who produced it. Did it come from a set-top box?
Did it come from a third party? Who touched it while it was journeying through Comcast, through Tenet, through Kafka or Kinesis? Someone aggregated it, and then maybe somebody else enriched it with other data.

Barbara Eckman: And then it landed in a data lake. The user of the data in the data lake wants to know where the data came from and who added what piece. The publisher might look at the data in the data lake and say, "This looks screwy. What's wrong with this? Who messed up my data?" Or they could say, "Wow, this is enriched really well. I want to thank that person." And someone who's just using the data wants to know who to ask questions: What did you enrich this with? Where did that data come from? That kind of thing. All of that is really helpful when you're doing this integration. That's data governance and lineage, which we do in Apache Atlas.

Barbara Eckman: That's our metadata and lineage repository. Then, once you've found data and understood it, you have to be able to access it, and we do that through Apache Ranger and its extension provided by Privacera. Once you have access to it, you need to be able to integrate and analyze it across the enterprise. So finally we get to the good stuff: actually getting our hands on the data. We do that with self-service compute using Databricks, and Databricks is a really powerful tool for that. And finally, we find that we really do need ACID compliance for important operations, and we do that with Delta Lake. I can talk about all of this in more detail as the talk goes on, or in the question session.

Barbara Eckman: I'm an architect, so I have to have box-and-line diagrams. This is a high-level view of our hybrid cloud solution. In Comcast's data centers, we have a Hadoop data lake that involves Hadoop Ranger and Apache Atlas working together.
We are, as many companies are, kind of phasing that out, but not immediately; it takes a while. We have a Teradata enterprise data warehouse. Similarly, we are thinking of moving that, not necessarily to the cloud entirely, but maybe to another on-prem source, like the object store. We use MinIO, and basically that makes this object store look like S3, so the Spark jobs that we like to use on S3 can also run on our on-prem data store.

Barbara Eckman: And that's a big plus, of course. For that, we have a Ranger data service that helps with access control there. Up in the cloud, we use AWS, though Azure also has a big footprint at Comcast, and Databricks compute is kind of the center here. We use it to access Kinesis. With Redshift, we're just starting. We use Delta Lake and the S3 object store, and we have a Ranger plugin that the Databricks folks worked carefully with Privacera to create, so that our self-service Databricks environment can have all the init scripts and configurations it needs to run the access control that Privacera provides.

Barbara Eckman: We also use Presto for our federated query capability; it also has a Ranger plugin. All the tags that are applied to metadata, on which policies are built, are housed in Apache Atlas, and Ranger and Atlas sync together; that's how Ranger knows what policies to apply to what data. In the question session, if you want to dig deeper into any of this, I'd be very happy to do it. So, this is very exciting to me. We're just rolling this out, and it's so elegant, and I didn't create it, so I can say that. Ranger and Atlas together provide declarative, policy-based access control, and as I said, Privacera extends Ranger, which originally only worked in Hadoop, to AWS through plugins and proxies. One of the key ones that we use, of course, is the Databricks plugin in all three of these environments.
And basically what I like about this is we really have one Ranger to rule them all, and Atlas is its little buddy, because it provides the tags that really power our access control.

Barbara Eckman: So, here's again a diagram. We have a portal that we built for our self-service applications, and the user tags the metadata with tags like "this is PII" or "this is video domain data." That goes into Atlas; the tags and the metadata associations are synced with Ranger, along with the policies based on them: who gets to see PII, who gets to see video domain data. Those are synced and cached in the Ranger plugins. Then, when a user calls an application, whether it's a cloud application in Databricks or even an on-prem application, the application asks Ranger, "Does this user have access to do what they're asking to do on this data?" If the answer is yes, they get access, and it's very fast, because these are plugins.

Barbara Eckman: If no, then they get an error message. We can also do masking: if someone has access to many columns, but not all columns, in, say, a Glue table, we can mask out the ones they don't have access to and still give them the data they are allowed to see. Recently, we've really needed ACID compliance. Traditionally, big data lakes are write once, read many. We have things streaming in from set-top boxes; in the cable world, that's not transactional data. That's what we're used to, but now, increasingly, we are finding that we need to delete specific records from our Parquet files. We can do this in Spark, but it is not terribly performant. It certainly can be done, but it turns out Delta Lake does it much better.

Barbara Eckman: The deletes are much more performant, and you get to view snapshots of past data lake states, which is really pretty awesome.
So, we're really moving toward, I love this word, a lakehouse: being able to do write once, read many, and ACID all in one place. And that is largely thanks to Delta Lake. So, this is me. Please reach out to me via email if you wish, and I'll be happy to answer questions in the live session if you have any. Thank you very much for listening.

Sam Steiny: Thank you for joining this event, Barbara. That was awesome. It's great to hear the Comcast story. So, with that, let's get to some questions. We're going to move over to live Q&A, so please add your questions to that Q&A.Watch NowProductPlatform OverviewPricingOpen Source TechTry DatabricksDemoLearn & SupportDocumentationGlossaryTraining & CertificationHelp CenterLegalOnline CommunitySolutionsBy IndustriesProfessional ServicesCompanyAbout UsCareers at DatabricksDiversity and InclusionCompany BlogContact UsSee Careers
at DatabricksWorldwideEnglish (United States)Deutsch (Germany)Français (France)Italiano (Italy)日本語 (Japan)한국어 (South Korea)Português (Brazil)Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights |
https://www.databricks.com/company/partners/consulting-and-si/partner-solutions/deloitte-trellis | Deloitte Trellis | Databricks
Brickbuilder Solution: Trellis by Deloitte. Industry-specific solution developed by Deloitte and powered by the Databricks Lakehouse Platform.
Solve complex challenges around forecasting and procurement
Deloitte's Trellis provides capabilities designed to help solve retail's complex challenges around demand forecasting, replenishment, procurement, pricing and promotion services. Deloitte has leveraged their deep industry and client experience to build an integrated, secured and multicloud-ready "as-a-service" Solution Accelerator on top of Databricks Lakehouse for Retail that can be rapidly customized and tailored as appropriate based on the segments' unique needs. With Deloitte Trellis, you can:
Focus on critical shifts occurring both on the demand side and supply side of retail's value chain
Assess recommendations, associated impact and insights in real time
Achieve significant improvement to both top-line and bottom-line numbers
https://www.databricks.com/dataaisummit/speaker/mike-conover/# | Mike Conover - Data + AI Summit 2023 | Databricks
SAN FRANCISCO, JUNE 26-29; VIRTUAL, JUNE 28-29
Mike Conover, Software Engineer at Databricks
https://www.databricks.com/dataaisummit/speaker/robin-sutara/# | Robin Sutara - Data + AI Summit 2023 | Databricks
Robin Sutara, Field CTO at Databricks
From repairing Apache helicopters near the Korean DMZ to the corporate battlefield, Robin has demonstrated success in navigating the high-stress, and sometimes combative, complexities of data-led transformations. She has consulted with hundreds of organisations on data strategy, data culture, and building diverse data teams. Robin has had an eclectic career path across technical and business functions, with more than two decades in tech companies, including Microsoft and Databricks.
https://www.databricks.com/solutions/accelerators/real-time-point-of-sale-analytics | Real-Time Point-of-Sale Analytics | Databricks
Solution Accelerator: Real-Time Point-of-Sale Analytics. Pre-built code, sample data and step-by-step instructions ready to go in a Databricks notebook.
Calculate real-time inventories across multiple store locations to improve retail margins
Point-of-sale analytics is the process of collecting and analyzing data from the processing of transactions at a retail store. When a customer checks out, the data from that transaction feeds into several categories: inventory, sales, product, customer and staff.
With volatility in the market and narrowing margins in retail, POS analytics is critical for retailers to ensure they are running their inventory management program as effectively as possible. If a POS system stores and reports data about inventory, retailers are able to have a better idea of what they're selling, what they're storing and what isn't moving.
Get started with our Solution Accelerator for Real-Time Point-of-Sale Analytics to improve in-store operations by:
Rapidly ingesting all data sources and types at scale
Building highly scalable streaming data pipelines with Delta Live Tables to obtain a real-time view of your operation
Leveraging real-time insights to tackle your most pressing in-store information needs
https://www.databricks.com/dataaisummit/speaker/yang-you | Yang You - Data + AI Summit 2023 | Databricks
Yang You, Presidential Young Professor at National University of Singapore
Prof. Yang You is a Presidential Young Professor at the National University of Singapore. His team broke the world records for ImageNet and BERT training speed. He is a winner of the IPDPS Best Paper Award, ICPP Best Paper Award, AAAI Distinguished Paper Award, ACM/IEEE George Michael HPC Fellowship, Siebel Scholar award, and Lotfi A. Zadeh Prize, and was nominated for the ACM Doctoral Dissertation Award. He also made the Forbes 30 Under 30 Asia list (2021) for young leaders and received the IEEE-CS TCHPC early career award.
https://www.databricks.com/blog/2021/07/13/using-your-data-to-stop-credit-card-fraud-capital-one-and-other-best-practices.html | Using Your Data to Stop Credit Card Fraud: Capital One and Other Best Practices - The Databricks Blog
Using Your Data to Stop Credit Card Fraud: Capital One and Other Best Practices
by Fahmid Kabir, July 13, 2021 in Company Blog
Fraud is a costly and growing problem – research estimates that $1 of fraud costs companies 3.36x in chargeback, replacement and operational costs. Adding to the pain, experts say there are not enough regulations to protect small businesses from chargebacks and losses from fraud. Despite significant advancements in credit card fraud detection and risk management techniques, fraudsters are still able to find loopholes and exploit the system. For credit card companies, the threat of fraudulent card usage is a constant, which creates the need for accurate credit card fraud detection systems.
All organizations are at risk of fraud and fraudulent activities, but that risk is especially burdensome for those in financial services. “Threats can originate from internal or external sources, but the effects can be devastating – including loss of consumer confidence, incarceration for those involved, and even the downfall of corporations,” says Badrish Davay, a Data Engineering and Machine Learning leader at Capital One. CNBC reports that the US is the most credit card fraud-prone country in the world.
Fraud detection using machine learning
It’s not all bad news, though. With modern advancements, businesses are able to stay ahead of threats by leveraging data and machine learning. As part of a tech talk at the recent Data + AI Summit, we were able to get a glimpse into how Capital One is using data and artificial intelligence (AI) to address fraud. Badrish Davay from Capital One shared how we can utilize state-of-the-art ML algorithms to stay ahead of the attackers and, at the same time, constantly learn new ways a system is being exploited. “In order to more dynamically detect fraudulent transactions, one can train ML models on a dataset including credit card transaction data, as well as card and demographic information of the cardholder. Capital One uses Databricks to achieve this goal,” noted Davay.
Capital One analyzes all the fraudulent activities to understand what to look for in credit card fraud. Davay presented the 6 “W” questions they ask – what, who, when, where, why and what if? – used to uncover trends in fraudulent activities. Davay highlighted various scenarios in which card information may be compromised and how data can help with anomaly detection and identifying fraud. For example, he shared how geospatial data can detect stolen card information when it is being used away from its actual location, along with temporal data to determine fraud.
As Davay also explained, when a customer physically loses a card but doesn’t notify the organization, contextual information (e.g., work hours, spending habits, etc.) can help determine if transactions are routine or anomalous. A key takeaway from Davay is that we should be able to combine multiple independent signals to get a wider context around transaction and demographics data. With the availability of data and advancements in ML, fraud prevention is a key area in which ML is changing both workflows and outcomes, allowing organizations to stay ahead of increasingly technologically advanced criminals.
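To make the idea of combining independent signals concrete, here is a minimal sketch. The signal names, weights and threshold below are invented for illustration; a production system would learn them from data rather than hard-code them:

```python
# Illustrative only: combine independent fraud signals into one score.
# Signal names, weights and the 0.5 threshold are invented for this sketch.

def fraud_score(signals, weights):
    """Weighted sum of independent risk signals, each scaled to [0, 1]."""
    return sum(weights[name] * value for name, value in signals.items())

weights = {"geo_distance": 0.5, "odd_hour": 0.2, "amount_anomaly": 0.3}

# A transaction far from the cardholder's home, at 3am, for an unusual amount
signals = {"geo_distance": 0.9, "odd_hour": 1.0, "amount_anomaly": 0.6}

score = fraud_score(signals, weights)
flagged = score > 0.5  # route to review above an illustrative threshold
```

Each signal alone (location, time of day, amount) is weak evidence, but their weighted combination gives the wider context Davay describes.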
Today’s businesses are facing an increasingly sophisticated enemy that attacks, responds and changes tactics extremely quickly. Due to the dynamics of fraud, organizations need AI that constantly adapts to changing behaviors and patterns. AI brings agility that rules do not. With data analytics and ML, companies can get ahead of threats. Below are some key reasons why ML is apt for taking on fraud:
Fraud hides under massive amounts of data: The most effective way to detect fraud is to look at the overall behaviors of end users. Looking at transactions or orders is not enough — we need to follow the events leading up to and after the transaction. This culminates in a lot of structured and unstructured data, and the best way to detect fraud in such huge volumes is with ML and AI.
Fraud happens quickly: When an ML system updates in real time, that knowledge can be used within milliseconds to update fraud detection models and prevent an attack.
Fraud is always changing: Fraudsters constantly adapt their tactics, making them difficult for humans to detect – and impossible for static rules-based systems, which don’t learn. ML, however, can adapt to changing behavior.
Fraud looks fine on the surface: To the human eye, fraudulent and normal transactions don’t appear any different from each other. ML has a deeper and more nuanced way of viewing data, which helps avoid false positives.
Davay discussed how ML uses statistical models, such as classifiers and logistic regression, to look at past outcomes and anomalies to predict future outcomes. An ML system can learn, predict and make decisions on data as it arrives in real time. In his presentation, Davay outlined what a good fraud prevention model needs to have:
A one-stop shop for users to train the model and orchestrate execution
Real-time detection
Deep analytics and modeling by leveraging powerful ML tools, such as deep learning and neural networks, for what-if data analysis and testing new hypotheses
Adherence to company security policy and compliance requirements
A notification service to inform cardholders immediately of suspicious activity
Seamless integration with enterprise systems
MLflow in fraud prevention
Davay highlighted the value of Databricks and MLflow in their fraud prevention efforts. He talked about the platform and how different data and fraud teams collaboratively develop and run experiments with the team using Databricks. “Even though they share experiments and data collaboratively within the team, we can implement stringent security measures in order to respect data privacy, and each experiment can have its own compute environments and requirements,” said Davay. He referred to Databricks as “a one-stop shop for all of [their] data science and models, making it perfect for data science projects.” When the team has identified features for predicting whether a transaction is fraudulent or not, they pass these data points to Databricks’ hosted environment, where they can then perform feature engineering, data pre-processing and split the data into test and training sets. They then use a variety of supervised or unsupervised ML algorithms, such as SVM, decision tree and random forest, to train a model. They identify the best performing model and use the Databricks Lakehouse Platform to solve for fraud directly from within the platform. The lakehouse is a conducive environment for fraud detection and you can learn more from our solution accelerators here.
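The train/test split step described above can be sketched with the standard library alone. The snippet below is a stand-in for sklearn's `train_test_split`; the toy transactions and the `> 800` labelling rule are invented for illustration:

```python
import random

# Toy labelled transactions (amount, is_fraud); real features would come
# from the feature engineering step described in the post.
transactions = [(amount, amount > 800) for amount in range(0, 1000, 50)]

def train_test_split(rows, test_fraction=0.25, seed=42):
    """Shuffle, then carve off a held-out test set (illustrative sketch)."""
    shuffled = rows[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(transactions)
```

The training set then feeds whichever supervised algorithm (SVM, decision tree, random forest) performs best, while the held-out set measures generalization.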
Davay mentioned how “MLflow within the Databricks ecosystem is a great feature that we can use because it has numerous advantages in developing the ML workflow pipeline seamlessly.” MLflow allows Capital One to track their ML experiments from end-to-end throughout the ML model lifecycle. During the talk, Davay mentioned they can run experiments directly from GitHub without the need to go through the code and can directly deploy and train models by serializing them while utilizing packages such as Python’s pickle module, Apache Spark, and MLflow. They then deploy the serialized model and serve it as an API by harnessing MLflow.
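The serialization step mentioned here can be illustrated with the standard library alone. The toy `ThresholdModel` below is a placeholder for a trained classifier (it is not Capital One's model), but the pickle round-trip is the same pattern a deployment pipeline builds on before registering a model with MLflow:

```python
import pickle

# Placeholder for a trained fraud model; a real pipeline would pickle a
# fitted sklearn/Spark model and register it with MLflow instead.
class ThresholdModel:
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, amounts):
        return [amount > self.threshold for amount in amounts]

model = ThresholdModel(threshold=500.0)

# Serialize the fitted model to bytes, as a deployment pipeline would
blob = pickle.dumps(model)

# ...later, in the serving layer, restore it and score transactions
restored = pickle.loads(blob)
predictions = restored.predict([120.0, 980.0])
```

In the actual workflow, MLflow handles this packaging and serves the restored model behind an API.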
MLflow and microservices
Davay also touched on microservices and why they are useful with MLflow. A microservice is a gateway to a specific functional aspect of an application. It helps teams like Capital One’s develop applications in a standardized, consistent manner over time. Microservices allow Capital One to deploy application functionality independently. They help abstract the functionality while enabling the team to build a reusable and uniform way of interacting with an application. Furthermore, they let teams compose complex behavior by combining a variety of other microservices. Essentially, microservices empower companies to use any tech stack in the backend while maintaining compatibility on the front end.
With Capital One’s raw data stored in Amazon S3, they seamlessly integrate interactions between S3 and their framework through Databricks and can massively scale ML model training, validation and deployment pipelines through MLflow. Their team trains and validates models on custom clusters in AWS and deploys them through SageMaker directly by using MLflow APIs. MLflow is not limited to AI: it can embed any piece of business logic (as mentioned in the Databricks Rules + AI accelerator) and, as such, benefits from the same end-to-end governance and delivery principles as microservices.
Putting it all together
Davay shared how Databricks allows Capital One to query and deploy models, and to manage and clean up the deployment, using MLflow APIs within the AWS ecosystem. In addition, they can ensure secure, conditional access via AWS SSO.
Based on observations from Capital One and many other customers, there are several benefits of using data and AI for fraud prevention, including:
Reduced need for manual review: ML automates processes in which behaviors can be learned at the individual level and anomalies detected.
The ability to prevent fraud cases without impeding the user experience: AI brings automation to the process seamlessly and prevents fraud in advance without burdening users.
Lower operational costs than other approaches: With less manual work and more automation, data and AI require fewer resources and preempt losses associated with fraud.
Frees up teams’ time to focus on more strategic tasks: Most companies are not in the business of fraud detection, and an ML fraud prevention process can help them focus on core activities.
Adapts quickly: Coupled with human talent and experience, data and AI work together to constantly learn and adjust to new user behaviors and trends.
When it comes to operationalizing data and AI to build customer relationships and drive higher returns on equity, fraud should be considered a top priority. Curbing fraudulent or malicious behavior – starting with fraudulent card transactions – is key to mitigating negative revenue impact. To more dynamically detect fraudulent transactions, Capital One uses ML and credit card transaction information, as well as card and demographic information, to get a comprehensive view and identify anomalies. Data-driven innovators such as Capital One are paving the way in fraud detection and provide a successful model to follow to protect customers and business.
Get started with fraud prevention in Databricks: get a jump start with the prebuilt code and guides in our Fraud Solution Accelerators, and see all our Financial Services solutions.
https://www.databricks.com/blog/2020/07/10/a-data-driven-approach-to-environmental-social-and-governance.html | How to Take a Data-driven Approach to ESG Investing With Apache Spark, Delta Lake, and MLflow - The Databricks Blog
A Data-driven Approach to Environmental, Social and Governance
by Antoine Amend, July 10, 2020 in Engineering Blog
The future of finance goes hand in hand with social responsibility, environmental stewardship and corporate ethics. In order to stay competitive, financial services institutions (FSIs) are increasingly disclosing more information about their environmental, social and governance (ESG) performance. By better understanding and quantifying the sustainability and societal impact of any investment in a company or business, FSIs can mitigate reputation risk and maintain trust with both their clients and shareholders. At Databricks, we increasingly hear from our customers that ESG has become a C-suite priority.
This is not solely driven by altruism but also by economics: higher ESG ratings are generally positively correlated with valuation and profitability, and negatively correlated with volatility. In this blog post, we offer a novel approach to sustainable investing by combining natural language processing (NLP) techniques and graph analytics to extract key strategic ESG initiatives, learn companies' relationships in a global market, and measure their impact on market risk calculations.
Using the Databricks Unified Data Analytics Platform, we will demonstrate how Apache Spark™, Delta Lake and MLflow can enable asset managers to assess the sustainability of their investments and empower their business with a holistic and data-driven view of their environmental, social and corporate governance strategies. Specifically, we will extract the key ESG initiatives as communicated in yearly PDF reports and compare these with the actual media coverage from news analytics data.
In the second part of this blog, we will learn the connections between companies and understand the positive or negative ESG consequences these connections may have on your business. While this blog will focus on asset managers to illustrate the modern approach to ESG and socially responsible investing, this framework is broadly applicable across all sectors of the economy, from consumer staples and energy to media and healthcare.
Extracting key ESG initiatives
Financial services organisations are now facing more and more pressure from their shareholders to disclose more information about their environmental, social and governance strategies.
Typically released on their websites on a yearly basis in the form of a PDF document, companies communicate their key ESG initiatives across multiple themes, such as how they value their employees, clients or customers, how they positively contribute back to society, or how they mitigate climate change by, for example, reducing (or committing to reduce) their carbon emissions. Consumed by third-party agencies (such as MSCI or CSRHub), these reports are usually consolidated and benchmarked across industries to create ESG metrics.
Extracting statements from ESG reports
In this example, we would like to programmatically access 40+ ESG reports from top-tier financial services institutions (some are listed below) and learn key initiatives across different topics. However, with no standard schema nor regulatory guidelines, communication in these PDF documents can vary widely, making this approach a perfect candidate for the use of machine learning (ML).
Barclays: https://home.barclays/content/dam/home-barclays/documents/citizenship/ESG/Barclays-PLC-ESG-Report-2019.pdf
JP Morgan Chase: https://www.jpmorganchase.com/content/dam/jpmc/jpmorgan-chase-and-co/documents/jpmc-cr-esg-report-2019.pdf
Morgan Stanley: https://www.morganstanley.com/pub/content/dam/msdotcom/sustainability/Morgan-Stanley_2019-Sustainability-Report_Final.pdf
Goldman Sachs: https://www.goldmansachs.com/our-commitments/sustainability/sustainable-finance/documents/reports/2019-sustainability-report.pdf
Although our data set is relatively small, we show how one could distribute the scraping process using a user defined function (UDF), assuming the third-party library `PyPDF2` is available across your Spark environment.
import io

import requests
import PyPDF2
from pyspark.sql.functions import udf

@udf('string')
def extract_content(url):
    # retrieve PDF binary stream
    response = requests.get(url)
    open_pdf_file = io.BytesIO(response.content)
    pdf = PyPDF2.PdfFileReader(open_pdf_file)
    # return concatenated content across all pages
    text = [pdf.getPage(i).extractText() for i in range(0, pdf.getNumPages())]
    return "\n".join(text)
Beyond regular expressions and fairly complex data cleansing (reported in the attached notebooks), we also want to leverage more advanced NLP capabilities to tokenise content into grammatically valid sentences. Given the time it takes to load trained NLP pipelines in memory (such as the `spacy` library below), we ensure our model is loaded only once per Spark executor using a PandasUDF strategy as follows.
import gensim
import spacy
from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf('array<string>', PandasUDFType.SCALAR_ITER)
def extract_statements(content_series_iter):
    # download and load the spacy English model only once per executor
    spacy.cli.download("en_core_web_sm")
    nlp = spacy.load("en_core_web_sm")
    # provide process_text (defined in the attached notebooks) with our
    # loaded NLP model to clean and tokenize each batch of PDF content
    for content_series in content_series_iter:
        yield content_series.map(lambda x: process_text(nlp, x))
With this approach, we were able to convert raw PDF documents into well defined sentences (some are reported below) for each of our 40+ ESG reports. As part of this process, we also lemmatised our content – that is, we transformed each word into its simpler grammatical form, such as past tenses transformed to present form or plural forms converted to singular. This extra step will pay off in the modeling phase by reducing the number of words to learn topics from.
Goldman Sachs: we established a new policy to only take public those companies in the us and europe with at least one diverse board director (starting next year, we will increase our target to two)
Barclays: it is important to us that all of our stakeholders can clearly understand how we manage our business for good.
Morgan Stanley: in 2019, two of our financings helped create almost 80 affordable apartment units for low- and moderate-income families in sonoma county, at a time of extreme shortage.
Riverstone: in the last four years, the fund has conserved over 15,000 acres of bottomland hardwood forests, on track to meeting the 35,000-acre goal established at the start of the fund
Although it is relatively easy for the human eye to infer the themes around each of these statements (in this case diversity, transparency, social, environmental), doing so programmatically and at scale is of a different complexity and requires advanced use of data science.
Classifying ESG statements
In this section, we want to automatically classify each of the 8,000 sentences we extracted from 40+ ESG reports. Together with non-negative matrix factorisation, Latent Dirichlet Allocation (LDA) is one of the core models in the topic modeling arsenal; we can use either its distributed version on Spark ML or its in-memory sklearn equivalent, as follows. We compute our term frequencies and capture our LDA model and hyperparameters using MLflow experiment tracking.
import mlflow
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation as LDA

# compute word frequencies
# stop words are common english words + banking related buzzwords
word_tf_vectorizer = CountVectorizer(stop_words=stop_words, ngram_range=(1,1))
word_tf = word_tf_vectorizer.fit_transform(esg['lemma'])

# track experiment on mlflow
with mlflow.start_run(run_name='topic_modeling'):

    # Train a LDA model with 9 topics
    lda = LDA(random_state=42, n_components=9, learning_decay=.3)
    lda.fit(word_tf)

    # Log model and hyperparameters
    mlflow.sklearn.log_model(lda, "model")
    mlflow.log_param('n_components', '9')
    mlflow.log_param('learning_decay', '.3')
    mlflow.log_metric('perplexity', lda.perplexity(word_tf))
Following multiple experiments, we found that 9 topics would summarise our corpus best. By looking deeper at the importance of each keyword learned from our model, we try to describe our 9 topics with 9 specific categories, as reported below.
company strategy: board, company, corporate, governance, management, executive, director, shareholder, global, engagement, vote, term, responsibility, business, team
green energy: energy, emission, million, renewable, use, project, reduce, carbon, water, billion, power, green, total, gas, source
customer focus: customer, provide, business, improve, financial, support, investment, service, year, sustainability, global, include, help, initiative
support community: community, people, business, support, new, small, income, real, woman, launch, estate, access, customer, uk, include
ethical investments: investment, climate, company, change, portfolio, risk, responsible, sector, transition, equity, investor, sustainable, business, opportunity, market
sustainable finance: sustainable, impact, sustainability, asset, management, environmental, social, investing, company, billion, waste, client, datum, investment, provide
code of conduct: include, policy, information, risk, review, management, investment, company, portfolio, process, environmental, governance, scope, conduct, datum
strong governance: risk, business, management, environmental, customer, manage, human, social, climate, approach, conduct, page, client, impact, strategic
value employees: employee, work, people, support, value, client, company, help, include, provide, community, program, diverse, customer, service
With our 9 machine learned topics, we can easily compare each of our FSIs' ESG reports side by side to better understand the key priority focus for each of them.
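Naming each topic comes down to ranking its keyword weights. Here is a minimal sketch with a made-up vocabulary and weights; in the notebooks, the rows come from the trained LDA model's topic-word matrix and the vocabulary from the fitted vectorizer:

```python
# Toy topic-word weights; in practice, rows come from lda.components_
# and the vocabulary from the fitted CountVectorizer.
vocabulary = ["energy", "emission", "carbon", "employee", "diverse", "program"]
topic_word_weights = [
    [4.2, 3.1, 2.7, 0.1, 0.2, 0.3],  # reads like "green energy"
    [0.2, 0.1, 0.3, 5.0, 2.9, 2.2],  # reads like "value employees"
]

def top_keywords(weights, vocab, n=3):
    """Return the n highest-weighted words for one topic row."""
    ranked = sorted(zip(weights, vocab), reverse=True)
    return [word for _, word in ranked[:n]]

labels = [top_keywords(row, vocabulary) for row in topic_word_weights]
```

Reading the top-weighted words per topic is exactly how the 9 category names above were chosen.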
Using seaborn visualisation, we can easily flag key differences across our companies (organisations' names redacted). While some organisations put more focus on valuing employees and promoting diversity and inclusion (such as ORG-21), others seem to be more focused on ethical investments (ORG-14). As the output of LDA is a probability distribution across our 9 topics rather than one specific theme, we can easily unveil the most descriptive ESG initiative for any given organisation using a simple SQL statement and a partitioning function that captures the highest probability for each theme.
WITH ranked AS (
  SELECT
    e.topic,
    e.statement,
    e.company,
    dense_rank() OVER (
      PARTITION BY e.company, e.topic ORDER BY e.probability DESC
    ) AS rank
  FROM esg_reports e
)
SELECT
  t.topic,
  t.statement
FROM ranked t
WHERE t.company = 'goldman sachs'
AND t.rank = 1
This SQL statement provides us with an NLP-generated executive summary for Goldman Sachs (see original report), summarising a complex 70+ page document into 9 ESG initiatives / actions.
support community: Called the Women Entrepreneurs Opportunity Facility (WEOF), the program aims to address unmet financing needs of women-owned businesses in developing countries, recognizing the significant obstacles that women entrepreneurs face in accessing the capital needed to grow their businesses.
strong governance: The ERM framework employs a comprehensive, integrated approach to risk management, and it is designed to enable robust risk management processes through which we identify, assess, monitor and manage the risks we assume in conducting our business activities.
sustainable finance: In addition to the Swedish primary facility, Northvolt also formed a joint venture with the Volkswagen Group to establish a 16 GWh battery cell gigafactory in Germany, which will bring Volkswagen's total investment in Northvolt to around $1 billion.
green energy: Besides reducing JFK's greenhouse gas emissions by approximately 7,000 tons annually (equivalent to taking about 1,400 cars off the road), the project is expected to lower the Port Authority's greenhouse gas emissions at the airport by around 10 percent. The GSAM Renewable Power Group will hold the power purchase agreement for the project, while SunPower will develop and construct the infrastructure at JFK.
customer focus: Program alumni can also join the 10KW Ambassadors Program, an advanced course launched in 2019 that enables the entrepreneurs to further scale their businesses. In Beijing, 10,000 Women held a 10-year alumni summit at Tsinghua University School of Economics and Management.
ethical investments: We were one of the first US companies to commit to the White House American Business Act on Climate Pledge in 2015; we signed an open letter alongside 29 other CEOs in 2017 to support the US staying in the Paris Agreement; and more recently, we were part of a group of 80+ CEOs and labour leaders reiterating our support that staying in the Paris Agreement will strengthen US competitiveness in global markets.
value employees: Other key initiatives that enhance our diversity of perspectives include our Returnship Initiative, which helps professionals restart their careers after an extended absence from the workforce. The strength of our culture, our ability to execute our strategy, and our relevance to clients all depend on a diverse workforce and an inclusive environment that encourages a wide range of perspectives.
company strategy: Underscoring our conviction that diverse perspectives can have a strong impact on company performance, we have prioritized board diversity in our stewardship efforts.
code of conduct: Please see page 96 of our 2019 Form 10-K for further discussion of our approach to incorporation of environmental, social and governance (ESG) factors in credit analysis.
Although we may observe some misclassification (mainly related to how we have named each topic) and may have to tune our model further, we have demonstrated how NLP techniques can be used to efficiently extract well defined initiatives from complex PDF documents. These, however, may not always reflect companies' core priorities, nor do they capture every initiative for each theme. This can be further addressed using techniques borrowed from anomaly detection: grouping the corpus into broader clusters and extracting sentences that deviate the most from the norm (i.e. sentences specific to an organisation rather than mainstream). This approach, using K-Means, is discussed in the notebooks attached.
Create a data-driven ESG score
As covered in the previous section, we were able to compare businesses side by side across 9 different ESG initiatives.
Although we could attempt to derive an ESG score (the approach many third-party organisations would use), we want our score to be truly data-driven rather than subjective. In other terms, we do not want to solely base our assumptions on companies' official disclosures, but rather on how companies' reputations are perceived in the media across all three environmental, social and governance variables. For that purpose, we use GDELT, the Global Database of Events, Language, and Tone.
Data acquisition
Given the volume of data available in GDELT (100 million records for the last 18 months alone), we leverage the lakehouse paradigm by moving data from raw to filtered to enriched, respectively from Bronze, to Silver and Gold layers, and extend our process to operate in near real time (GDELT files are published every 15 minutes). For that purpose, we use a Structured Streaming approach that we `trigger` in batch mode, with each batch operating on a data increment only. By unifying streaming and batch, Spark is the de-facto standard for data manipulation and ETL processes in modern data lake infrastructures.
import pyspark.sql.functions as F

# filter_themes is a UDF defined in the attached notebooks
gdelt_stream_df = spark \
  .readStream \
  .format("delta") \
  .table("esg_gdelt_bronze") \
  .withColumn("themes", filter_themes(F.col("themes"))) \
  .withColumn("organisation", F.explode(F.col("organisations"))) \
  .select(
    F.col("publishDate"),
    F.col("organisation"),
    F.col("documentIdentifier").alias("url"),
    F.col("themes"),
    F.col("tone.tone")
  )

gdelt_stream_df \
  .writeStream \
  .trigger(once=True) \
  .option("checkpointLocation", "/tmp/gdelt_esg") \
  .format("delta") \
  .toTable("esg_gdelt_silver")
From the variety of dimensions available in GDELT, we focus on sentiment analysis (using the tone variable) for financial news articles only. We assume financial news articles to be well captured by the GDELT taxonomy starting with ECON_*. Furthermore, we assume all environmental articles to be captured by ENV_* and social articles by UNGP_* taxonomies (UN guiding principles on human rights).
Sentiment analysis as a proxy for ESG
Without any industry standard nor existing models to define environmental, social and governance metrics, and without any ground truth available to us at the time of this study, we assume that the overall tone captured from financial news articles is a good proxy for companies' ESG scores. For instance, a series of bad press articles related to maritime disasters and oil spills would strongly affect a company's environmental performance. On the opposite end, news articles about [...] financing needs of women-owned businesses in developing countries [source] with a more positive tone would positively contribute to a better ESG score. Our approach is to look at the difference between a company's sentiment and its industry average: how much more "positive" or "negative" a company is perceived across all its financial services news articles, over time.
In the example below, we show that difference in sentiment (using a 15-day moving average) between one of our key FSIs and its industry average. Apart from a specific time window around the COVID-19 outbreak in March 2020, this company has been consistently performing better than the industry average, indicating a good environmental score overall.
Generalising this approach to every entity mentioned in our GDELT dataset, we are no longer limited to the few FSIs we have an official ESG report for and are able to create an internal score for each and every company across the environmental, social and governance dimensions.
From the variety of dimensions available in GDELT, we focus on sentiment analysis (using the tone variable) for financial news articles only. We assume financial news articles to be well captured by the GDELT taxonomies starting with ECON_*. Furthermore, we assume all environmental articles to be captured by the ENV_* taxonomies and social articles by the UNGP_* taxonomies (UN Guiding Principles on human rights).

Sentiment analysis as a proxy for ESG

Without any industry standard or existing models to define environmental, social and governance metrics, and without any ground truth available to us at the time of this study, we assume that the overall tone captured from financial news articles is a good proxy for a company's ESG score. For instance, a series of bad press articles related to maritime disasters and oil spills would strongly affect a company's environmental performance. Conversely, news articles about [...] financing needs of women-owned businesses in developing countries [source] with a more positive tone would contribute to a better ESG score. Our approach is to look at the difference between a company's sentiment and its industry average: how much more "positive" or "negative" a company is perceived across all its financial news articles, over time.

In the example below, we show that difference in sentiment (using a 15-day moving average) between one of our key FSIs and its industry average. Apart from a specific time window around the COVID-19 outbreak in March 2020, this company has consistently performed better than the industry average, indicating a good environmental score overall. Generalising this approach to every entity mentioned in our GDELT dataset, we are no longer limited to the few FSIs we have an official ESG report for, and are able to create an internal score for each and every company across the environmental, social and governance dimensions.
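The company-versus-industry comparison above boils down to two smoothed daily tone series and their difference. A minimal sketch in plain Python, assuming one tone observation per day (function names are illustrative; the actual pipeline computes this with Spark window functions):

```python
def moving_average(series, window=15):
    """Trailing moving average over a list of daily values
    (simplified: assumes one observation per day, no gaps)."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def sentiment_vs_industry(company_tone, industry_tone, window=15):
    """Difference between a company's smoothed daily tone and the
    industry's smoothed daily tone, as described above: positive values
    mean the company is perceived better than its industry average."""
    company = moving_average(company_tone, window)
    industry = moving_average(industry_tone, window)
    return [c - i for c, i in zip(company, industry)]
```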
In other words, we have started to shift our ESG lens from being subjective to being data-driven.

Introducing a propagated weighted ESG metric

In a global market, companies and businesses are interconnected, and the ESG performance of one (e.g. a seller) may affect the reputation of another (e.g. a buyer). As an example, if a firm keeps investing in companies directly or indirectly related to environmental issues, this risk should be quantified and reflected back on the firm's reports as part of its ethical investment strategy. We could cite the example of Barclays' reputation being impacted in late 2018 because of its indirect connections to tar sand projects (source).

Identifying influencing factors

Popularised by Google for web indexing, Page Rank is a common technique used to identify nodes' influence in large networks. In our approach, we use a variant of Page Rank, Personalised Page Rank, to identify influential organisations relative to our key financial services institutions. The more influential these connections are, the more likely they will contribute (positively or negatively) to our ESG score. An illustration of this approach is reported below, where indirect connections to the tar sand industry may negatively contribute to a company's ESG score, proportionally to its Personalised Page Rank influence. Using GraphFrames, we can easily create a network of companies sharing common media coverage. Our assumption is that the more often companies are mentioned together in news articles, the stronger their link will be (edge weight). Although this assumption may also infer spurious connections caused by random co-occurrence in news articles (see later), this undirected weighted graph will help us find companies' importance relative to the core FSIs we want to assess.
import org.apache.spark.sql.functions._
import org.graphframes.GraphFrame

// as the graph is undirected, we create both IN and OUT connections
val buildTuples = udf((organisations: Seq[String]) => {
  organisations.flatMap(x1 => {
    organisations.map(x2 => (x1, x2))
  }).filter { case (x1, x2) =>
    x1 != x2 // remove self edges
  }
})

val edges = spark.read.table("esg_gdelt")
  .groupBy("url")
  .agg(collect_list(col("organisation")).as("organisations"))
  .withColumn("tuples", buildTuples(col("organisations")))
  .withColumn("tuple", explode(col("tuples")))
  .withColumn("src", col("tuple._1"))
  .withColumn("dst", col("tuple._2"))
  .groupBy("src", "dst")
  .count()

// vertices are the distinct organisations appearing as an edge source
val nodes = edges.select(col("src").as("id")).distinct()
val esgGraph = GraphFrame(nodes, edges)
By studying this graph further, we observe a power-law distribution of its edge weights: 90% of the connected businesses share only a few connections. We drastically reduce the graph size from 51,679,930 down to 61,143 connections by keeping only edges with a weight of 200 or above (an empirically derived threshold). Prior to running Page Rank, we also optimise our graph by further reducing the number of connections through a Shortest Path algorithm, computing the maximum number of hops a node needs to reach any of our core FSI vertices (captured in a `landmarks` array). The depth of a graph is the maximum of every possible shortest path, i.e. the number of connections it takes for any random node to reach any other (the smaller the depth, the denser the network).
val shortestPaths = esgGraph.shortestPaths.landmarks(landmarks).run()

// keep vertices within 4 hops of at least one of our core FSIs
val filterDepth = udf((distances: Map[String, Int]) => {
  distances.values.exists(_ <= 4)
})

// build the reduced graph used for Page Rank below
val esgDenseGraph = GraphFrame(
  shortestPaths.filter(filterDepth(col("distances"))).select("id"),
  edges
)

We filter our graph to a maximum depth of 4. This process reduces our graph further, down to 2,300 businesses and 54,000 connections, allowing us to run the Page Rank algorithm with more iterations in order to better capture industry influence.
val prNodes = esgDenseGraph
  .parallelPersonalizedPageRank
  .maxIter(100)
  .sourceIds(landmarks)
  .run()

We can directly visualise the top 100 nodes most influential to a specific business (in this case Barclays PLC), as per the graph below. Unsurprisingly, Barclays is well connected to most of our core FSIs (such as the institutional investors JP Morgan Chase, Goldman Sachs and Credit Suisse), but also to the Securities and Exchange Commission, the Federal Reserve and the International Monetary Fund. Further down this distribution, we find public and private companies such as Chevron, Starbucks or Johnson and Johnson. Strongly or loosely related, directly or indirectly connected, all these businesses (or entities, from an NLP standpoint) could theoretically affect Barclays' ESG performance, positively or negatively, and as such impact Barclays' reputation.

ESG as a propagated metric

By combining our ESG score captured earlier with the importance of each of these entities, it becomes easy to apply a weighted average over the "Barclays network", where each business contributes to Barclays' ESG score proportionally to its relative importance. We call this approach a propagated weighted ESG score (PW-ESG). We can observe the negative or positive influence of any company's network using a word cloud visualisation. In the picture below, we show the negative influence (entities contributing negatively to ESG) for a specific organisation (name redacted). Due to the nature of news analytics, it is not surprising to observe news publishers (such as Thomson Reuters or Bloomberg) or social networks (Facebook, Twitter) as strongly connected organisations. As these do not reflect the true connections of a given business but are rather explained by simple co-occurrence in news articles, we should consider filtering them out prior to our Page Rank process by removing nodes with a high degree of connectivity.
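Stripped down, the PW-ESG aggregation described above is a weighted average: each connected entity contributes its own ESG score in proportion to its Personalised Page Rank importance. A minimal sketch in plain Python, where the function name and the blending factor `alpha` are illustrative assumptions rather than values from the study:

```python
def propagated_esg(own_score, neighbours, alpha=0.5):
    """neighbours: list of (esg_score, page_rank_weight) tuples for the
    entities connected to the company under study. Blends the company's
    own ESG score with the importance-weighted score of its network;
    alpha is an illustrative blending factor, not from the original study."""
    total_weight = sum(w for _, w in neighbours)
    if total_weight == 0:
        return own_score  # isolated node: nothing to propagate
    network_score = sum(s * w for s, w in neighbours) / total_weight
    return alpha * own_score + (1 - alpha) * network_score
```

A company surrounded by poorly rated, highly influential entities thus sees its own score pulled down, which is exactly the reputational effect PW-ESG is meant to capture.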
Such co-occurrence noise, however, seems constant across our FSIs and as such does not appear to disadvantage one organisation over another. An alternative approach would be to build our graph using established connections extracted through more advanced NLP on the raw text content. This, however, would drastically increase the complexity of the project and the costs associated with news-scraping processes.

Finally, we represent the original ESG scores as computed in the previous section, and how much of these scores were reduced (or increased) by our PW-ESG approach across the environmental, social and governance dimensions. In the example below, for a given company, the initial scores of 69, 62 and 67 have been reduced to 57, 53 and 60, with the most negative PW-ESG influence on its environmental coverage (-20%). Using the agility of Redash coupled with the efficiency of the Databricks runtime, this series of insights can rapidly be packaged up as a BI/MI report, bringing ESG-as-a-service to your organisation so that asset managers can better invest in sustainable and responsible finance. It is worth mentioning that this framework is generic enough to accommodate multiple use cases: whilst core FSIs may use their own company as a Page Rank landmark to better evaluate reputational risk, asset managers could use all their positions as landmarks to assess the sustainability of each of their investment decisions.

ESG applied to market risk

In order to validate our initial assumption that [...] higher ESG ratings are generally positively correlated with valuation and profitability while negatively correlated with volatility, we create a synthetic portfolio made of random equities that we run through our PW-ESG framework and combine with actual stock information retrieved from Yahoo Finance.
As reported in the graph below, despite an evident lack of data to draw scientific conclusions, it would appear that our highest- and lowest-rated ESG companies (we report the sentiment analysis as a proxy for ESG in the top graph) are respectively the most and least profitable instruments in our portfolio over the last 18 months. Interestingly, CSRHub reports the exact opposite, Pearson (media) being 10 points above Prologis (property leasing) in terms of ESG scores, highlighting the subjectivity of ESG scoring and its inconsistency between what is communicated and what is actually observed.

Following up on our recent blog post about modernizing risk management, we can use this new information to drive better risk calculations. Splitting our portfolio into two distinct books, composed of the best and worst 10% of our ESG-rated instruments, we report in the graph below the historical returns and the corresponding 95% value-at-risk (historical VaR). Without any prior knowledge of our instruments beyond the metrics extracted through our framework, we observe a risk exposure twice as high for the portfolio made of poorly ESG-rated companies, supporting the assumption found in the literature that "poor ESG [...] correlates with higher market volatility", and hence with a greater value-at-risk.

As covered in our previous blog, the future of risk management lies with agility and interactivity. Risk analysts must augment traditional data with alternative data and alternative insights in order to explore new ways of identifying and quantifying the risks facing their business. Using the flexibility and scale of cloud compute and the level of interactivity enabled by the Databricks runtime, risk analysts can better understand the risks facing their business by slicing and dicing market risk calculations across industries, countries, segments, and now across ESG ratings.
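For reference, the historical VaR used above is simply an empirical quantile of observed returns. A minimal sketch, assuming a flat list of daily returns per book (the earlier risk-management post covers the full Spark implementation):

```python
def historical_var(returns, confidence=0.95):
    """95% historical value-at-risk: the return threshold such that only
    (1 - confidence) of observed daily returns fall below it.
    Simplified empirical-quantile sketch, no interpolation."""
    ordered = sorted(returns)
    index = int((1 - confidence) * len(ordered))
    return ordered[index]
```

Computing this separately for the best-10% and worst-10% ESG books is what surfaces the roughly twofold difference in risk exposure reported above.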
This data-driven ESG framework enables businesses to ask new questions, such as: how much would your risk decrease by bringing the environmental rating of this company up 10 points? How much more exposure would you face by investing in these instruments, given their low PW-ESG scores?

Transforming your ESG strategy

In this blog, we have demonstrated how complex documents can be quickly summarised into key ESG initiatives to better understand the sustainability of each of your investments. Using graph analytics, we introduced a novel approach to ESG that better identifies the influence a global market has on both your organisation's strategy and its reputational risk. Finally, we showed the economic impact of ESG factors on market risk calculations. As a starting point on a data-driven ESG journey, this approach can be further improved by bringing in the internal data you hold about your investments and additional metrics from third-party data, propagating the risks through our PW-ESG framework to keep driving more sustainable finance and profitable investments. Try the following notebooks on Databricks to accelerate your ESG development strategy today, and contact us to learn more about how we assist customers with similar use cases.

A Data-driven Approach to Environmental, Social and Governance (July 10, 2020, by Antoine Amend in Engineering Blog)
The future of finance goes hand in hand with social responsibility, environmental stewardship and corporate ethics. In order to stay competitive, Financial Services...
See all Engineering Blog postsProductPlatform OverviewPricingOpen Source TechTry DatabricksDemoProductPlatform OverviewPricingOpen Source TechTry DatabricksDemoLearn & SupportDocumentationGlossaryTraining & CertificationHelp CenterLegalOnline CommunityLearn & SupportDocumentationGlossaryTraining & CertificationHelp CenterLegalOnline CommunitySolutionsBy IndustriesProfessional ServicesSolutionsBy IndustriesProfessional ServicesCompanyAbout UsCareers at DatabricksDiversity and InclusionCompany BlogContact UsCompanyAbout UsCareers at DatabricksDiversity and InclusionCompany BlogContact UsSee Careers
at DatabricksWorldwideEnglish (United States)Deutsch (Germany)Français (France)Italiano (Italy)日本語 (Japan)한국어 (South Korea)Português (Brazil)Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights |
ConnectivityIntelligent auto complete, ANSI SQL & Rest APIIntelligent auto complete, ANSI SQL & Rest APIIntelligent auto complete, ANSI SQL & Rest APIPerformanceMassively parallel processingMassively parallel processing & Predictive I/OMassively parallel processing & Predictive I/OSQL ETL/ELTQuery federation, materialized views and workflow integrationQuery federation, materialized views and workflow integrationData Science and MLPython UDF, notebooks integration & geospatialPython UDF, notebooks integration & geospatialHigh Concurrency BIFully managed compute, workload management & query result cachingGovernance and ManageabilityQuery profiling, unity catalog integration & platform manageabilityQuery profiling, unity catalog integration & platform manageabilityQuery profiling, unity catalog integration & platform manageabilityEnterprise SecuritySee Platform Capabilities and Add-Ons for details* indicates Public Preview feature
**No data transfer charges to cloud storage region while feature in Preview. See FAQ for more details
Pay as you go with a 14-day free trial or contact us for committed-use discounts or custom requirements
Calculate priceStart free trialContact usWhat is the difference between SQL Serverless and SQL Classic / Pro?For SQL Classic and Pro, Databricks deploys cluster resources into your Cloud provider environment, and you are responsible for paying for the corresponding Compute infrastructure charges. For Serverless compute, Databricks deploys the cluster resources in Databricks’ Cloud provider account and you are not required to separately pay for Compute infrastructure charges. Please see here for more details.What cross-region data transfer charges may be incurred in Serverless offerings?The cross-regions storage access feature is currently in Preview. If your source data is in a different cloud region, Databricks is currently waiving charges for transferring data from the Databricks Serverless environment's region to your cloud storage’s region. We will start charging for this at market-competitive rates in the future when the feature is Generally Available (GA). Please note, the Cloud provider may still charge you directly for transferring data to the region the Databricks serverless environment is running in.Do the promotional discounts for SQL Pro and SQL Serverless stack on top of my negotiated discounts?Yes, the promotional discounts stack on top of any contracted discounts that you may have negotiated.How are the promotional discounts for SQL Pro and SQL Serverless implemented?On AWS, the promotional discounts are implemented through a reduction in the DBU rate that is emitted for SQL Pro and Serverless SQL warehouses during the promotional period.ProductPlatform OverviewPricingOpen Source TechTry DatabricksDemoProductPlatform OverviewPricingOpen Source TechTry DatabricksDemoLearn & SupportDocumentationGlossaryTraining & CertificationHelp CenterLegalOnline CommunityLearn & SupportDocumentationGlossaryTraining & CertificationHelp CenterLegalOnline CommunitySolutionsBy IndustriesProfessional ServicesSolutionsBy IndustriesProfessional ServicesCompanyAbout 
UsCareers at DatabricksDiversity and InclusionCompany BlogContact UsCompanyAbout UsCareers at DatabricksDiversity and InclusionCompany BlogContact UsSee Careers
at DatabricksWorldwideEnglish (United States)Deutsch (Germany)Français (France)Italiano (Italy)日本語 (Japan)한국어 (South Korea)Português (Brazil)Databricks Inc.
160 Spear Street, 13th Floor
San Francisco, CA 94105
1-866-330-0121© Databricks 2023. All rights reserved. Apache, Apache Spark, Spark and the Spark logo are trademarks of the Apache Software Foundation.Privacy Notice|Terms of Use|Your Privacy Choices|Your California Privacy Rights |
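The discount mechanics described in the FAQ above (a promotional discount applied as a reduced DBU rate, with any contracted discount stacking on top) can be made concrete with a small sketch. This is a minimal Python illustration; the list rate, promo percentage and contracted discount below are made-up numbers for demonstration, not published Databricks pricing.

```python
# Hedged sketch: how a stacked promotional + contracted discount could be
# applied to a DBU-based bill. All rates below are illustrative assumptions.

def effective_dbu_cost(dbus_consumed: float,
                       list_rate_per_dbu: float,
                       promo_discount: float = 0.0,
                       contracted_discount: float = 0.0) -> float:
    """Return the billed amount when the discounts stack multiplicatively.

    The promotional discount is modeled as a reduction of the emitted DBU
    rate (as the FAQ describes for AWS); any negotiated discount is then
    applied on top of the already-discounted rate.
    """
    promo_rate = list_rate_per_dbu * (1.0 - promo_discount)
    billed_rate = promo_rate * (1.0 - contracted_discount)
    return dbus_consumed * billed_rate

# Example: 1,000 DBUs at a hypothetical $0.70/DBU list rate, with a 40%
# promotional discount and a 10% contracted discount stacked on top.
cost = effective_dbu_cost(1000, 0.70, promo_discount=0.40, contracted_discount=0.10)
print(f"{cost:.2f}")  # prints 378.00
```

Because the discounts stack multiplicatively rather than additively, a 40% promo plus a 10% contract works out to a 46% effective discount, not 50%.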
https://www.databricks.com/de/documentation |
Databricks documentation | Databricks on AWS
Databricks on AWS (updated May 10, 2023)

Databricks documentation
Databricks documentation provides how-to guidance and reference information for data analysts, data scientists, and data engineers working in the Databricks Data Science & Engineering, Databricks Machine Learning, and Databricks SQL environments. The Databricks Lakehouse Platform enables data teams to collaborate.
Try Databricks:
- Get a free trial & set up
- Query data from a notebook
- Build a basic ETL pipeline
- Build a simple Lakehouse analytics pipeline
- Free training

What do you want to do?
- Data science & engineering
- Machine learning
- SQL queries & visualizations

Manage Databricks:
- Account & workspace administration
- Security & compliance
- Data governance

Reference guides:
- API reference
- SQL language reference
- Error messages

Resources:
- Release notes
- Other resources
https://www.databricks.com/it/solutions/data-pipelines | Data Engineering | Databricks

Data Engineering
Tens of millions of production workloads run daily on Databricks.
Easily ingest and transform batch and streaming data on the Databricks Lakehouse Platform. Orchestrate reliable production workflows while Databricks automatically manages infrastructure at scale. Boost team productivity with built-in data quality checks and support for software development best practices.

Unify batch and streaming: Eliminate silos by working on a single platform, with a single API, to ingest, transform and incrementally process batch and streaming data at scale.

Focus on getting value from your data: Databricks automatically manages the infrastructure and the operational components of production workflows, so users can focus on value rather than on tooling.

Connect your favorite tools: An open lakehouse platform for connecting and using your preferred data engineering tools for ingestion, ETL/ELT and orchestration.

Build on the lakehouse platform: The lakehouse platform is the best foundation for building and sharing trusted data assets, with centralized management, high reliability and fast delivery.

"For us, Databricks is becoming a true hub for all our ETL work. The more we work with the Lakehouse Platform, the easier it becomes for both users and platform administrators." - Hillevi Crognale, Engineering Manager, YipitData

How it works

Simplified data ingestion: Ingest data into the Lakehouse Platform and feed analytics, AI and streaming applications from a single source.
Auto Loader incrementally and automatically processes files as they arrive in cloud storage, with no state information to manage, in scheduled or continuous jobs. It efficiently tracks new files (scaling to billions) without having to list them in a directory, and it can also infer the schema from the source data and adapt as the schema changes over time. The COPY INTO command makes it easy for analysts to batch-ingest files into Delta Lake using SQL.

"We have seen a 40% productivity boost in data engineering, cutting the time needed to develop new ideas from days to minutes while increasing the availability and accuracy of our data." - Shaun Pearce, Chief Technology Officer, Gousto

Automated ETL processing: Once ingested, raw data must be transformed so that it is ready for analytics and AI. Databricks offers powerful ETL capabilities for data engineers, data scientists and analysts with Delta Live Tables (DLT). DLT is the first framework to take a simple, declarative approach to building ETL and ML pipelines over batch or streaming data, while automating complex tasks such as infrastructure management, task orchestration, error handling and recovery, and performance optimization. With DLT, engineers can treat data as code and apply modern software engineering best practices such as testing, error handling, monitoring and documentation to deploy reliable pipelines at scale.

Reliable workflow orchestration: Databricks Workflows is the fully managed orchestration service for all your data, analytics and AI, native to the Lakehouse Platform.
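The incremental file-discovery pattern that the Auto Loader description above relies on (process only files not yet seen, persisting state between runs) can be sketched in plain Python. This is a conceptual illustration only, not Databricks' actual implementation: the checkpoint format and the function name are assumptions made for the example.

```python
# Conceptual sketch of checkpointed incremental ingestion: each run picks up
# only files that arrived since the previous run, by persisting the set of
# already-processed file names in a small JSON checkpoint.

import json
from pathlib import Path

def discover_new_files(landing_dir: Path, checkpoint: Path) -> list[Path]:
    """Return files in landing_dir not yet recorded in the checkpoint,
    then commit the updated state so the next run skips them."""
    seen = set(json.loads(checkpoint.read_text())) if checkpoint.exists() else set()
    new_files = [p for p in sorted(landing_dir.iterdir()) if p.name not in seen]
    checkpoint.write_text(json.dumps(sorted(seen | {p.name for p in new_files})))
    return new_files
```

Called from a scheduled job, the first run returns every file in the landing directory; subsequent runs return only new arrivals, and a run with no new files returns an empty list. Auto Loader solves the same problem at far larger scale (billions of files, schema inference, exactly-once semantics) without directory listing.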
You can orchestrate a wide range of workloads across the full lifecycle, including Delta Live Tables and Jobs for SQL, Spark, notebooks, dbt, ML models and more. Deep integration with the underlying Lakehouse Platform lets you create and run reliable production workloads on any cloud, with accurate, centralized monitoring that stays simple for end users.

"Our mission is to transform how we power the planet. Our energy-sector customers need data, consulting services and research to achieve that transformation. Databricks Workflows gives us the speed and flexibility to deliver the insights our customers need." - Yanyan Wu, Vice President of Data, Wood Mackenzie

End-to-end observability and monitoring: The lakehouse platform provides visibility across the entire data and AI lifecycle, so data engineers and operations teams can see the health of their production workflows in real time, manage data quality and observe historical trends. Databricks Workflows provides dataflow graphs and dashboards that track the health and performance of production jobs and Delta Live Tables pipelines. Event logs are also exposed as Delta Lake tables, so you can monitor and visualize performance, data quality and reliability metrics from any angle.

Next-generation data processing engine: Databricks data engineering is powered by Photon, a next-generation engine compatible with the Apache Spark APIs that delivers record-setting price/performance and automatic scaling to thousands of nodes.
Spark Structured Streaming provides a single API for batch and streaming processing, making it easy to adopt streaming on the lakehouse without changing code or learning new skills.

State-of-the-art data governance, reliability and performance: Data engineering on Databricks benefits from the foundational components of the Lakehouse Platform: Unity Catalog and Delta Lake. Raw data is optimized with Delta Lake, an open source storage format that provides reliability through ACID transactions and scalable metadata handling with very high performance. Together with Unity Catalog, Delta Lake provides fine-grained governance for all data and AI assets, simplifying governance with a single consistent model to discover, access and share data across clouds. Unity Catalog also provides native support for Delta Sharing, the industry's first open protocol for simple and secure data sharing with other organizations.

Migrating to Databricks: Tired of data silos, slow performance and the excessive costs of legacy systems such as Hadoop and enterprise data warehouses? Migrate to the Databricks Lakehouse, the modern platform for all your data, analytics and AI use cases.

Integrations: Give data teams maximum flexibility by using Partner Connect and an ecosystem of technology partners for direct integration with popular data engineering tools. For example, ingest business-critical data with Fivetran, transform it in place with dbt, and orchestrate pipelines with Apache Airflow.
+ Any other Apache Spark™-compatible client

Learn more: Delta Lake, Workflows, Delta Live Tables, Delta Sharing
All the resources you need in one place. Explore the resource library for ebooks and videos on the benefits of data engineering with Databricks.

- eBooks: Building the Data Lakehouse, by Bill Inmon, father of the data warehouse; Data, analytics and AI governance; All about data management; The Big Book of Data Engineering; Discover the new Delta Sharing solution; Data warehouse to data lakehouse migration for beginners
- Events: Managing data transformation with Delta Live Tables; Hassle-free data ingestion webinar series; DATA+AI SUMMIT 2022: Modernizing the data warehouse
- Blog: Announcing the availability of Databricks Delta Live Tables (DLT); Introducing Databricks Workflows; An overview of all the new Structured Streaming features built in 2021 for Databricks and Apache Spark; 10 powerful features to simplify semi-structured data management in the Databricks lakehouse
https://www.databricks.com/solutions/industries/life-sciences-industry-solutions | Life Sciences Industry Solutions – Databricks

Databricks for the Life Sciences Industry
Bringing new treatments to patients in need with data analytics and AI

Data analytics and AI are critical for improving the success of drug discovery and ensuring the efficient delivery of new treatments to market. Databricks helps life science organizations consolidate massive volumes of data and apply powerful analytics so they can realize benefits across the entire drug lifecycle - lowering costs and improving patient outcomes. Learn how to accelerate R&D with big data: read the ebook.

Leading healthcare organizations use Databricks to drive innovation in patient care.

Why Databricks for healthcare
- Unlock health data and break data silos: Connect structured and unstructured data from EHRs, wearables, imaging platforms, genome sequencers and more to deliver a complete view into patient health.
- Patient insights at population scale: Better predict health risks with analytics and AI that scale to millions of patient records in the cloud.
- Deliver reproducibility and compliance: Collaborative analytics workspaces that bring data teams together while streamlining the machine learning lifecycle and providing regulatory-grade MLOps.

Use cases
Across the healthcare landscape, data and AI provide insights and predictive capabilities to personalize care, automate claims and payment processing, and improve patient engagement.
- Supply Chain: Create more resilient supply chains by improving accuracy in inventory prediction, understanding customer demand, reducing excess inventory, and avoiding lost sales (supply chain control tower, demand forecasting, safety stock, ESG safety)
- IoT and Robotics: Optimize productivity, increase inventory accuracy, and build a more agile warehouse experience (predictive maintenance, automated quality control, warehouse robotics)
- Cost Optimization: Lower the cost of manufacturing processes by boosting operational efficiency and ensuring fast time to market (picking and delivery pathing, commodity usage optimization, worker safety and health monitoring)

Resources
- Case studies: AstraZeneca, Regeneron, Biogen, CVS Health, Livongo
- Webinars: Improving Patient Insights With a Health Lakehouse; Enabling a Scalable Data Science Pipeline With MLflow at Thermo Fisher Scientific; How Regeneron Accelerates Genomic Discovery at Biobank-Scale
- eBooks: Databricks for Life Sciences Solution Sheet; Healthcare & Life Sciences Guidebook at Data + AI Summit 2021; Analyzing Real-World Evidence at Scale

Ready to get started? We'd love to understand your business goals and how our services team can help you succeed.
https://www.databricks.com/dataaisummit/speaker/chang-she | Chang She - Data + AI Summit 2023 | Databricks

Chang She, CEO at LanceDB
Chang She is the CEO and cofounder of LanceDB, the developer-friendly, serverless vector database for AI applications. Previously, Chang was VP of Engineering at Tubi TV, where he led all data and ML efforts. In a past life, he was one of the original co-authors of the pandas library.
https://www.databricks.com/blog/category/company | The Databricks Blog
https://www.databricks.com/dataaisummit/speaker/craig-wiley/# | Craig Wiley - Data + AI Summit 2023 | Databricks

Craig Wiley, Sr. Director of Product, Lakehouse AI at Databricks
https://www.databricks.com/jp/company/partners/consulting-and-si/partner-solutions?itm_data=menu-item-brickbuildersoverview | Partner Solutions | Databricks

Partner Solutions
Industry-specific migration solutions, built by partners and powered by the lakehouse. Databricks works with leading consulting partners to build innovative solutions for individual industries and migration use cases. Designed by partners with deep expertise and experience, and built for the Databricks Lakehouse, Brickbuilder Solutions reduce costs and maximize the value of your data. Find the solution that best fits your business.

Featured solutions:
- Public Sector: Cloud Data Migration by Accenture
- Retail & Consumer Goods: Unified View of Demand by Accenture
- Advertising & Marketing Technology: CPG Control Tower by Avanade; Intelligent Healthcare on Azure Databricks by Avanade; Intelligent Manufacturing; Legacy System Migration by Avanade; Risk Management by Avanade; Persona 360 by DataSentics and Databricks; SAP Migration Accelerator by DataSentics; PrecisionView™ by Deloitte; Trellis by Deloitte; Smart Migration to Databricks by EPAM; LeapLogic Migration Solution by Impetus
- Technology & Software: Migrate Legacy Cards and Core Banking Portfolios by Capgemini and Databricks; Migrate to Cloud and Databricks by Capgemini and Databricks; Capgemini Revenue Growth Management; Migrate to Databricks by Celebal Technologies and Databricks; Cognizant Video Quality of Experience
https://www.databricks.com/company/newsroom/press-releases/azure-databricks-achieves-fedramp-high-authorization-on-microsoft-azure-government-mag | Azure Databricks Achieves FedRAMP High Authorization on Microsoft Azure Government (MAG)
Databricks' Unified Data Analytics Platform Enables Global Enterprises to Process Highly Sensitive Data
SAN FRANCISCO, CA – November 25, 2020 – Databricks, the Data and AI Company, today announced that Microsoft Azure Databricks has received a Federal Risk and Authorization Management Program (FedRAMP) High Authority to Operate (ATO). This authorization validates Azure Databricks’ security and compliance for high-impact data analytics and AI across a wide range of public sector, industry, and enterprise use cases.
FedRAMP is a standardized approach to security assessment, authorization, and continuous monitoring for cloud services as defined by the National Institute of Standards and Technology (NIST). The ATO was granted by the FedRAMP Joint Authorization Board (JAB) which consists of representatives from the Department of Defense (DoD), Department of Homeland Security (DHS), and the General Services Administration (GSA).
“At Veterans Affairs, we included Azure Databricks as part of our Microsoft Azure Authority to Operate (ATO),” said Joseph Fourcade, Lead Cyber Security Analyst, U.S. Department of Veterans Affairs Enterprise Cloud Solutions Office (ECSO). “When Databricks received FedRAMP High approval, we were able to move quickly to inherit that same Azure ATO and approve Azure Databricks for production workloads. Timing couldn't have been better, as we have been working with a number of VA customers implementing Databricks for critical programs.”
“Numerous federal agencies are looking to build cloud data lakes and leverage Delta Lake for a complete and consistent view of all their data,” said Kevin Davis, VP, Public Sector at Databricks. “The power of data and AI is being used to dramatically enhance public services, lower costs and improve quality of life for citizens. Using Azure Databricks, government agencies have aggregated hundreds of data sources to improve citizen outreach, automated the processing of hourly utility infrastructure IoT data to enable predictive maintenance, deployed machine learning models to predict patient needs, and built dashboards to predict transportation needs and optimize logistics. FedRAMP High authorization for Azure Databricks further enables federal agencies to analyze all of their data for improved decision making and more accurate predictions.”
“We are pleased to add Azure Databricks to our portfolio of services approved for FedRAMP at the high impact level in Microsoft Azure Government,” said Lily Kim, General Manager of Azure Government at Microsoft. “Azure Government provides the most trusted cloud for mission-critical government workloads. FedRAMP High approval for Azure Databricks enables government customers to build fast and reliable data lakes for innovative new use cases, such as risk management and predictive analytics.”
With this certification, customers can now use Azure Databricks to process the U.S. government’s most sensitive, unclassified data in cloud computing environments, including data that involves the protection of life and financial assets. From personalized healthcare and education to space exploration and energy research, Azure Databricks enables organizations to accelerate new innovation while minimizing risk when working with highly sensitive, private and public sector data.
With FedRAMP, organizations have a consistent way to evaluate the security of cloud solutions using NIST and FISMA defined standards. FedRAMP offers several levels of assurance, and Azure Databricks has met all of the requirements for authorization at the highest level. Azure Databricks joins many other Azure services with the FedRAMP High authorization, enabling public sector, enterprise and industry vertical customers to create and deploy cloud-based applications with confidence.
See the list of Azure services by FedRAMP and DoD CC SRG audit scope. Learn more about FedRAMP by reading Microsoft Documentation.
About Databricks
Databricks is the data and AI company. Thousands of organizations worldwide — including Comcast, Condé Nast, Nationwide and H&M — rely on Databricks’ open and unified platform for data engineering, machine learning and analytics. Databricks is venture-backed and headquartered in San Francisco, with offices around the globe. Founded by the original creators of Apache Spark™, Delta Lake and MLflow, Databricks is on a mission to help data teams solve the world’s toughest problems. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Media Contact:
Keyana Corliss
Head of Global Communications
[email protected]
https://www.databricks.com/company/awards-and-recognition | Awards and Recognition | Databricks
Awards and Recognition
Take a look at the ways Databricks has been recognized by industry leaders.
Leader in the 2022 Magic Quadrant for Cloud Database Management Systems
2022 Customer Choice Award for Cloud Database Management Systems
Leader in the 2021 Magic Quadrant for Cloud Database Management Systems
Leader in the 2021 Magic Quadrant for Data Science and Machine Learning
Lakehouse — Hype Cycle for Data Management, 2022
Hot Companies to Watch in 2023
Most Innovative Companies in Data Science
The Cloud 100
The AI 50
America’s Best Startup Employers
Best Workplaces in Technology
Best Workplaces in the Bay Area
Best Workplaces for Millennials
CNBC Disruptor 50
Best Places to Work 2022
https://www.databricks.com/dataaisummit/speaker/ryan-johnson | Ryan Johnson - Data + AI Summit 2023 | Databricks
Ryan Johnson, Senior Staff Engineer at Databricks
Ryan Johnson is a Senior Staff Software Engineer and tech lead at Databricks, working with the Delta Lake table format at the boundary between the storage system and the query layer. Before joining Databricks, he worked on the storage layer at Amazon Redshift and previously did database systems research as a professor at the University of Toronto. He loves writing code and solving low-level systems problems.
https://www.databricks.com/br/company/careers | Careers at Databricks | Databricks
Attention: Databricks applicants
Due to reports of phishing, all Databricks applicants should apply on our official Careers page (good news — you are here). All official communication from Databricks will come from email addresses ending with @databricks.com, @us-greenhouse-mail.io or @goodtime.io.
Careers at Databricks
Our mission: to help data teams solve the world's toughest problems. Want to join us? See open roles.
Take your career to the next level. Invest in the future. Databricks is leading the data and AI revolution. We created a category called the lakehouse, and today thousands of companies are using it to solve problems like climate change, fraud, customer churn and more. If you are looking for an opportunity that can truly define your career, you have found it.
Why Databricks? We are growing fast and recruiting the best talent in the world. We are Bricksters: a special mix of smart, curious and enthusiastic thinkers. Ask a Brickster what they like about their job and you will probably hear about our culture.
Find your team: Administration, Business Development, Customer Success, Engineering, Field Engineering, Finance, IT, Legal, Marketing, Operations, People and HR, Product, Professional Services, Recruiting, Sales, Security, Internships and Early Careers.
Benefits, perks and hybrid work: Your health and well-being are essential to doing your best work. That is why we offer excellent benefits and perks, including flexible ways of working. What matters is that you find what works best for you and your team.
Diversity, Equity and Inclusion: We believe that diversity of backgrounds, perspectives and skills is what drives our success. That is why we strive to create and nurture an inclusive, supportive environment. Learn how we are reducing pay gaps, countering unconscious bias in recruiting and more.
Locations: Our headquarters are in San Francisco, California, and we have more than 20 offices in 12 countries. With more than 4,500 Bricksters around the world and an ambitious growth strategy, we are one of the fastest-growing companies in the enterprise cloud sector.
Americas: San Francisco, Seattle, Washington, D.C., New York City and more. Europe: London, Amsterdam, Berlin, Munich and more. Asia-Pacific: Tokyo, Singapore, Seoul, Hangzhou and more.
Opportunities for students and new grads: We are committed to developing the next generation of Databricks leaders. That is why we want our interns and recent university graduates to play an essential role in building our platform. Our program is designed so you get the most out of the experience: engineering hackathons, Intern Olympics, board game nights and happy hours.
https://www.databricks.com/dataaisummit/speaker/behzad-bordbar/# | Behzad Bordbar - Data + AI Summit 2023 | Databricks
Behzad Bordbar, Lead Data Scientist at Marks & Spencer
Behzad started working at M&S in Jan 2021 as Lead Data Scientist. As part of the Data Science Retail team, he is involved in the digital transformation of Retail operations utilizing Machine Learning and Artificial Intelligence. He has a PhD in mathematics and over 20 years of experience solving challenging and complex business problems using innovative data solutions. He has published over 120 technical and scientific papers and contributed to several open-source projects.
https://www.databricks.com/dataaisummit/speaker/william-zanine | William Zanine - Data + AI Summit 2023 | Databricks
William Zanine, Head of Data Management, Channel and Specialty North America at IQVIA
https://www.databricks.com/company/newsroom | Newsroom | Databricks
Newsroom
Explore articles and press releases related to Databricks news and announcements.
Featured Stories:
May 27, 2021: Accidental Billionaires: How Seven Academics Who Didn’t Want To Make A Cent Are Now Worth Billions
March 2, 2023: How Databricks became an A.I. sensation
July 29, 2022: Databricks CEO Ali Ghodsi joins CNBC's The Tech Trade
More Headlines:
InfoWorld, May 4, 2023: Databricks acquires AI-centric Okera to aid data governance in LLMs
TechCrunch, May 3, 2023: Databricks acquires AI-centric data governance platform Okera
CRN, May 3, 2023: Databricks Steps Up Data Governance With Okera Acquisition
VentureBeat, May 3, 2023: Databricks acquires Okera to boost its AI-driven data governance platform
Business Insider, May 3, 2023: Exclusive: $38 billion data and AI darling Databricks acquires security startup Okera
Press Releases:
May 5, 2023: Databricks plans to increase local headcount in India by more than 50% to support business growth and drive customer success; launching new R&D hub in 2023
April 4, 2023: Databricks Announces Lakehouse for Manufacturing, Empowering the World’s Leading Manufacturers to Realize the Full Value of Their Data
March 30, 2023: Databricks Announces EMEA Expansion, Databricks Infrastructure in the AWS France (Paris) Region
March 7, 2023: Databricks Launches Simplified Real-Time Machine Learning for the Lakehouse
January 17, 2023: Databricks Strengthens Commitment in Korea, Appointing Jungwook Jang as Country Manager
For press inquiries: [email protected]
https://www.databricks.com/br/company/partners/built-on-partner-program | Built on Databricks Partner Program
Build, market and grow your business with Databricks. Apply now.
The Built on Databricks partner program provides technical and go-to-market resources to accelerate the development of your modern SaaS application and grow your business. Built on the Databricks Lakehouse Platform, it offers a unified, cost-effective experience for building data applications, products and services, and for sharing data at scale with an open global ecosystem.
Companies built on Databricks generate business value at scale.
Benefits of the Built on Databricks partnership:
Access to Databricks experts: connect with the Databricks product, engineering and support teams
Build on a modern platform: the market-leading lakehouse platform for unified data, analytics and AI
Reach more customers: expanded coverage to more data customers from an open, secure platform
Marketing support: access marketing investments to help increase your exposure and expand your reach
Sales collaboration: develop more opportunities through enablement and collaboration with the Databricks sales team
Get started today. Apply now.
https://www.databricks.com/explore/de-data-warehousing/why-the-data-lakehouse-is-your-next-data-warehouse |
Why the Data Lakehouse is your next Data Warehouse - 2nd Edition
|
https://www.databricks.com/p/webinar/hassle-free-data-ingestion-webinar-series?itm_data=DataIngestionRelatedContent-DataIngestWebinarSeries | Hassle-Free Data Ingestion | Databricks
On Demand: Hassle-Free Data Ingestion Webinar Series
Ingesting data from hundreds of different data sources is a critical step before organizations can execute advanced analytics, data science and machine learning. But ingesting and unifying this data to create a reliable single source of truth is extremely time consuming and costly. In this webinar series, discover how Databricks simplifies data ingestion into Delta Lake for all data types. Each webinar includes an overview and demo to introduce you to the newly released features and tools that make structured, semi-structured and unstructured data ingestion even easier on the Databricks Lakehouse Platform.
Webinars in the series:
Intro to Data Ingestion on Databricks: learn how Databricks enables you to easily and quickly ingest data continuously or at low latency, and empowers more users with SQL-only ingestion capabilities
Semi-Structured Data Ingestion: a deep dive into how Databricks simplifies JSON data ingestion into Delta Lake at scale
Unstructured Data Ingestion: discover how Databricks makes it easy to ingest unstructured data at scale with Auto Loader, and see how partners like Labelbox can help you label your unstructured data
Download the series |
https://www.databricks.com/dataaisummit/speaker/danica-fine/# | Danica Fine - Data + AI Summit 2023 | Databricks
Danica Fine
Senior Developer Advocate at Confluent
Danica Fine is a Senior Developer Advocate at Confluent, where she helps others get the most out of their event-driven pipelines. In her previous role as a software engineer on a streaming infrastructure team, she predominantly worked on Kafka Streams- and Kafka Connect-based projects. She can be found on Twitter, tweeting about tech, plants, and baking @TheDanicaFine. |
https://www.databricks.com/br/company/board-of-directors | Databricks Board of Directors | Databricks
Leadership
With a long-term view, our leadership team draws on decades of experience to chart a new course for data and AI.
Meet our team: Executive team, Founding team, Board of Directors
Ion Stoica, Co-founder and Executive Chairman
Ben Horowitz, Co-founder of Andreessen Horowitz
Elena Donio, Board member
Ali Ghodsi, Co-founder and CEO
Pete Sonsini, General Partner at New Enterprise Associates
Jonathan Chadwick, Board member
Matei Zaharia, Co-founder and Chief Technologist
Scott Shenker, Professor of Computer Science at UC Berkeley |
https://www.databricks.com/professional-services/redemption-request-form.pdf |
�����a|��������7�T���W�١������������.iżGj���� >��s�Ih��)<^X���*�Uzm�a����G��:�xm,��u���4~5���[r ?���ӻf"�A���$gA�C*hT{C�,����7S���q���_HA��ͨ-Y�GE?:qf���&�f�PR X��
endstream
endobj
10 0 obj
<>stream
�����JRVn5κ��(�ܢւpmk��5���;�x6��?g`�!�@���Vl ��W�I� �D �ɝR�H��D暴⇡��R
���+���U�%�{i.W�D.U�R�8��g~x�v�6`q��I^a�Y�����xء\cj�C���+����������^H���T�l-�+���(�]F��[�_f�nf�y�j7��I����1'���U}��^>~�8ﱳh���X�j����\�7��Z���7>/Subtype/Form>>stream
��?�ev�����{ʇ`��c��b���G�ཤ#�������{�Q?�[
endstream
endobj
13 0 obj
<>/XObject<>>>/Subtype/Form>>stream
�|e3z4�-$��N�e{o�ԢK��5�˽���#�=�O���P�d�X����0K�6r&�f鱀��A���Y���a:Q 8�
M���+ѭ���j
endstream
endobj
15 0 obj
<>/ProcSet[/PDF]>>>>stream
ۤ kC\~��M�N5s69���ݭH��3w*�R�f�����_�{%B���9A
endstream
endobj
16 0 obj
<>/ProcSet[/PDF]>>>>stream
��pg�A���=%�f�ӂ�N���ދe��.�69������q�=�
endstream
endobj
17 0 obj
<>/ProcSet[/PDF]>>>>stream
7��ltQ���z�?��{'o=t+nr`[�v�4��q��N�5Ԩ��d���
endstream
endobj
18 0 obj
<>/ProcSet[/PDF]>>>>stream
k�v��Qo��O��hඐ��s=�r�'PQ0t3i Q=[ ��5��L�
endstream
endobj
19 0 obj
<>/ProcSet[/PDF]>>>>stream
v�u��ή�p�o�o;ך��8��>�ƨ!:�$�А/u�C�\K�/W!
endstream
endobj
20 0 obj
<>/ProcSet[/PDF]>>>>stream
�ӳ�p"ƍ�?���=�
���G�u� �+G������}�AŅ���!�'>f
endstream
endobj
21 0 obj
<>/ProcSet[/PDF]>>>>stream
5����_�1!����s�Y�Z@�a�� �ih����9{%k��%h�
endstream
endobj
22 0 obj
<>/ProcSet[/PDF]>>>>stream
-��U��8��M�ς�6����R[l^�?�g#���a�?�҇P\S�㶊,
endstream
endobj
23 0 obj
<>/ProcSet[/PDF]>>>>stream
��j
9V�}'��\8��Y�>/ProcSet[/PDF]>>>>stream
: }�E�Y�g�� ��=�lv�
endstream
endobj
25 0 obj
<>/ProcSet[/PDF]>>>>stream
BL��2���۾�L��3�jo�#�Ķ������J����i�`�E�hJ
endstream
endobj
26 0 obj
<>/ProcSet[/PDF]>>>>stream
P] 3-��aډ��1s�3�a k$6t��}���an��6����)�*
endstream
endobj
27 0 obj
<>stream
�{�g���[�R�Lr^�IJ�Ҧ��B-��Tn�]��қ V9�lz���E��ǿ�Sh(�Q�c��җ<��o {m��ZA��e/�ۀS]�;���\O8����X>0��h��?�Ԭ��94d�>L�� R X���ܵ(7���d�>�4#c�ƜƊ��r8ŏ4�Ʈ�Ύ������s��̱@�O�B� �iL8�$���n�������
� ��&>stream
W_�稽��a���歕#|
�x�
��b�Fc#��V���Epy���j��#��zIXǗi�XX��$�-`�<���5_X�B��p#��� 9u�D��za"�ʻ?Ŋ���w�ο�N(������x��xw�(O�Ҭ���a�����'���e ������!k�Fj�l0݊�L\@N�.ڄ
�-^����,�����.�a�C=%�;��cnC�ϖ�v݃��(tXQ��%��7x�# �3|e��r
�,ϴ��3�?kcfU^��Ϩ����'Q�?�rE;���W{����������.
��yZ#\C������K���x�1��G�j��X�Kd����Ӹ_�c�Z���
��$Ly�R:��37e�#��(wc�O��=b�O-}��i3�����q��� x5Yd�71�U~��代V�l�+v���4A[�j�o�2fM�o.��)�����
&y
�%�\1�g}�,���nn��{�d��I��{ďC�G %)�b�U��WS�,;���
endstream
endobj
29 0 obj
<>/ProcSet[/PDF/Text]>>/Subtype/Form/Type/XObject>>stream
O0�yɯ��"ty�ʛ��B,s�&����n���a+��V)9�NQ�w��J�}�/���b�PB����hஈ'��J,�\�.���7XZ� �#�^�W5*�(Z4���]t�������9�
;�[s&c�qS���H ��SL3ɘE��s�@�;�#��x�l��Fu�%b���xV��]>iM���nQH�S�(�zz��T�Ӕ���
endstream
endobj
30 0 obj
<>/ProcSet[/PDF/Text]>>/Subtype/Form/Type/XObject>>stream
���tr���<��v3������t� v�F���z��Q)aN��U��B��Q���R��_3��$��癒^_D��>Z=���톋� aJ �]��s��1F�o�ۦ>���T���I�:JY��KS�#�9�
endstream
endobj
31 0 obj
<>/ProcSet[/PDF/Text]>>/Subtype/Form/Type/XObject>>stream
�T��
EK��ì����H�̗r��k��!
�Pr������H��"���~��NW�xmP;e JR*?VX��g8��[���_����~m�ux�8�A e��ޥ"���_�6r�/����
endstream
endobj
32 0 obj
<>/ProcSet[/PDF/Text]>>/Subtype/Form/Type/XObject>>stream
p�v�n�N����~�����=ӄ~�a);���'�Ud�x�;D=I����\|g�U H��*R X䝀�9<��'��m<翝
9H�+ćT��t�cv5� �G3vu
���fj��ϖ�
endstream
endobj
33 0 obj
<>/ProcSet[/PDF/Text]>>/Subtype/Form/Type/XObject>>stream
�wNU˛Qܘ���T1$ �,��Fg�Fg���O=BK��w�����>��� ���,!Yf�(�Yq����brJ9c[���:�E����)�l^��k��Ӟ�2�^jp�������V߶�Ξ��Mb2�Q
endstream
endobj
34 0 obj
<>stream
��M�%���tDy^��'c2�|5Y�fAWD}~�3�?��
U��'JQ�y�;D<�}j�L�l�e�-Y1��_��e=����caS������C�O��O*O�1bXU\a��'�߄`��nr��e%�)��F�dz�܂��C�p�uR�|�둱*�������et��a���i��:�c}���� )O>;�5���p�zX?�=kfnm=�Їc�7���ob�oh#�8�~�лBB��ߟ����eIf?
����.�
�$��ԭBGRÎ���cS���w�ZQxRi���ێ�`��vJUR� fDں��h�)W��Z��
X������9Ke�y���,���a������ ԍ��:��۹�����K#����[\a]��%��mr�1���Z�
m�A�~�-��� t�)�{�����U�cXk�䢵��('��3�
"�}3����'C&��1�X���n�0��fH�d'��au��Պ�_s�2 ���k�
endstream
endobj
35 0 obj
<>stream
����f�$����v.AӃ�1���h^|�F�n5�jP��{&]��}�����:*g��<���z�Fh^�kJ�D>���R�� ���U�χMA���%��[v[��z��H�o����. 5.����J�ٌ<�>�ڽ1�w��P
�"GW����.���B�C�eM��[�˶괏'V�7
V܁d�I�����U�V��'��_O��F��j
���8�Y�.�M�j���ʉA����H��FXA[����ӗ�SH"���H?��4
;�i�?�Cv0�"�W��Z�0*f��gk&4�.��QZ�����a�w^c�بT�v$z1rIw��qh��=g�Nu�xCfI*��($C����hzNu�<�6�rTKuϧ�p���9݄nԞeJH`T�\@-w���(�~��&�d�G�=B���e2]��j���E��>姧�J�g?\�h�K��8�%���u�ո8l��.���\� p��u��Y�����&���s������K*��X#E��_Jc�lm˚ i5S�,q:���~�(џ��ؙ�]L$���w��?D�wW?�B�MJ�+ȸx��B��_
�9��M\N�1Ka��*�q��\,3��#c���l>����Ʈ,������{�9G��-��:.�y���е���D/� �VC��]�.�
`����Z�?g,ҮN���L���`a��^���OLn����.&
�N���?i�.>O� (lj�H�p��lg����Ȍz����e������[j�?L$u\=�S�G�=X�J PR6O�@���>�xmCx�q~���@�r`D;�+���J�UE�q�$
��각�W�#ŭ���K�9C�\X%�Ԙ�J��(����ܭ0&ϝńY�4��X;�����醬�i5�?Q�l����y),�l!s�}�]����#�� �N*�`�n� �p���pfĞ����[�A���qu�n�Y{�߅��Y�^O"h���V�TU)5Z
*.�V�.;�ݐ8�^@/y�lI�z��
�?�i���D�����Q�B,��eZ���\�-g ���R8��D�%2�N�K�;�y�^��9���<�"b(��1o�s#�j1�n$��)��0
�:�@5��/cwQ �CtܘJu*�d_ 7�bq'V��:ء��_b:Auˑ^�O��D�q ���Ϣ,��to���뇹��v gn��;�,;�W��k_�f�-��E�,\>*0�&��mz�1�bO=��s�7�H��v�c9[H@y;)���MI*�[&+�_��se�sK�H�)� �P �F��j�N�{\�_
Z�Yz)�Li��%�}į=�ZTJ3�����?ڸz̳�b�8� �����+��*����$V����y�ůQ��� ��Ebk
���V�y��3����>t
�A�m���jh��=�{2UF��E@4�4�������M[#qF�/�a��m��7}\Gv�"qV{�V���?F2дhWt$ĩ�Ze�(���m��w��q��Jt �`���֚[yF���).�*xB�o��J���d\�([C�}�Z#]�ξHl����v�c�#$M��!�`���ࡌ8h3
���p�?J)�%��!��
��]��5���3�~���k��,���NKq+KIlb�:�B����}�wij�����ޜC��7`�\C�u��{�#��PI�FE,Z��_x�tm&�}ɴA�{R@���k^F�/��I>��c���b4���7��#��-��#ՆE���p��VhO� /~
�}�� ��"f�hI��`���݇�'Py(\K�ѸL�����I�8t$��!�p#����?�p���k�/-��d��� =�mG]��Ќ0���ʕeS����9hLO;ET��&���[���)v����$��?���R�,ޙ��^�0-}JDžnD���
-i��_���؏���&G��C�$L��
^l������il�T��|@���R���L�M A&LjS����:$jß&�6Ϧ��ϛ���H�̺��$�D�ސE�OR1�-��i�w�������OO 7`I$�ks���\,���t�:�������&Jn��C�E�n`v���V�W�^�WB�p�l�ل�Ի����/�w����*(��ɸ��c���t�)�nu�9�սyR�[���H������Y����e���X�v=����I�Z��I�ݿ�WՋ \
]E�6�ֹ$ƀ���&�!"5�`1N<�Y��%�6����B$�t����>!d�����!���~�e�kJ^d�#�w�O��r s{؛��� ]6�ǥ,�5ϋ07�e����Mr�)���q+�]7N�����g�Ÿ���J(��AѦ\��1ԢV6�����Э&��^�FF��|)�����u���9���,��B�v$ ��[��h&8�2�����*E�[xh�l�C��ڟ]+aj�7�Rf�ip��q��OlJ]G�1�w&�9:OxЍg ;=0����.E����#ڃa �W)��w�6y�{������Q�9��1�
��\��?��h |w-9���ͧ�_z�[�4p%�>��F��f�?s:�v�f��x!����Y�@�-A��f��'}
��#2��r]�ص�������U��:e��D����R�g|�u�#�(P%Ӿ�>i�X�#��LYh;X�EE�����Xʣ+ĕt�kƠn@�`�GG�c�������i���rd�I@�����uPK�AɥZ0��*ު�\�)���ꂜ`u�)��voCdž�����Qyg�6���
�l�%X��ܪ�����Y�0ړՂć���62�������q��֖�'c�Ʃ��Gx�s3l!�:�0�� [1�$O�|
cc�I�s� Jŵ�Ecx���4�:,�aU4�H���O��y��c/� [&d�!l{�
:%=��uo%m9�a���oX����D^#�`:���i;M[-s�%}HTa��d��u��lQ�}�`�f>Zu��A��H��(�7�~m�{�xзXIb���U�_���`��|�y��n�+���o1Ic�7:Do�K���hE�����#��c�?GR��z7���8�e�&'����HT��ǎ<�9,;�:Ao��1��['{f?o���ׅ��e��۶�Mʻf~��N�p�hƗOB�X�Em�*�V"j�)�ѿ<�e�|�A��Ρ�W}�0˜��* �h�5�[���T�b�s]�7��I��q�C�d�빿3���͏�we���߄���Ö��Z�ҚvIՌ�k�=Y�a�0�Z��݃+n�� �4��(t���ȷX���������(N��Kæ_��u�Pύ�墑�f=�(6q���漻�M9��Wb����Z�
�i� [�kO3����#-�P�����&�WC0�oX�.i�@*�D��k5k��������q9��%h���]l;|^�*N&�b��(w�Y�����%���2���ւ��͜^^Ov�M�-���:���T��E�4~��JZi�
g�j�[0������W��W��Ɗ��5�|:�#�KuĆթN�h�(G�"{C���3�ֺ�Z� �g�8�I����ޣ�g%�BHNf���^)#���U�/6/ ڨɷ�O�hÁ��*�L���7����$/L�8�v+�����[И+����!6�O�x��K���(zj��c>�y�¾�DKܶRb�J�����p�����C���+@|H�P���X�i\p�2M:�*=�`�2�V{�9�(,=j�
����o]�p�Om�c�eJ�ח���T��Ië���c�2]@Y����O�_�4?#�-i�^`q1��?�mY:����{�9L��B�Q�$�+y��2�������xة!�06.&��c��]�zu�(�1)Yמ���pv φ��k�Lװ��.G��$p�3�i@j��m�
� )�\{��,w�k��^�b\r�^������Lt?�t<*;WD�'��������z��<�:$N��1���TC-VW\-��U)F�&��D ��2��>����̰�nZi�we�DZ���i���g���A��+W�����m�X�9Ur�\��� ��,0�e�`��n���H�m�i����mXҌ�G"��u��}8{H�B�ʣ��_H�[���`�G �A �����W(�V�=s��V��r/{���q ����_�-~�r%ߦ��
�yF�HǍ����j\�s��~�)|U˔��
���2e!x �g����ÇQ���������WaxD�_�[:�}D���D����Y���%v�R��&���y�� 8��a����BgB(�E�U���.����[�>�p���ܠeA��46$�S뻖��C�k�.�կk�֡�li�����H���� �]�щ�\���Z��*I��J.����+��U��µ�b' �b�8?���"/�g� ��"hq�Mï�� {�a�ɺ�O��4v�ǔF�<#��Wrܡ*_7L�%I���G�.�o��?A�*�F��( '5���5C�s�-p��J:!#kl1I��r���
SF(���Ua�{0��tF�����9ا�O��x$M2~�����t}f|��ik-����P�KZ[�y�Ԛ{���Q�YZ��oވ�Ѿ�Jϩ����:�>w�E��ɸ2?a��*'�%���~�j�Ѝ�
��?B��N���WCr
�w�k謹mEc��'�E�����^C���n��h���)���T��1=x����!.�(��m��e��)���|*��bfRx �X��ao��e�5q^��]�+�sS>�d�����,��5�9�n��iC~��tqI(q��?
vJ�2c//��B)ZȈ�J�+CF�,�zw���^���R���%3�^�1p6���g�*��)��'�hc�V>:8���ī*ڒkQx��������
ɑP��d���7����/� Z&"L�&-Z�:Bv�a��G�UtcF�m(��z�?@g�����Q?^7)����J�w�g���N��f�S�x}Ey�ν3|[�Bx��>O ��foXv,�,���H��t���,d�4�7�F�jɅ= �4;vN,�A���ːUn݇%Xҕ��je�7�kN�wE�B�#�[w��x�X�ӹ![��8�M������?f��*Zguc��N��K`�}J� �d�S��&{���@$�_t����J��=�m_ �т��&�^���l�pb�
fs����g+���i�䙓G4�+�s�2[��ݞ��SW"�i�7����Z���rm�O��S�~��Q!jL��V�q���"�����L� ϣ1h����zHE8k��1=e��g��?[b1��G�k���e���� ����P���o �P4��N�ǡm�Oa�S�;n~c�&:���9^���Q�nKv�F��r�&�),ɗy�ў��Gk�5��������B��3m�z"�ye��`�yȣ]G1�{ſٰ\z�y��7Y�0�+4�O�6�W�
�i��س�HU�����:���0 <^�@tPSYb���� 6�2Yj�wfL�fL :/�ѡ��G��]�k�_�����
B%�nkBm��H҇r�W��8{}VC�������T�R�R�O~f�H����
�x�-7 p��i������"�p���p�#{XY�{T�S㺇�B�ԏ
֙����v&ё]�-�*�˙1�h���v��33���0��㐵v.r+Q)�K֖���c���߬�
1wǑ�h�%(w�"5���K0-��X��,�.@���V��e]�lZK@����~��f��u�Tߛc�3�M��,�R,|��3q"SN�2�!@u8�JKJ緎b�)��+?��R�V��-,�`ŷˎ�z��sg[���W���0��^�6�(�ObJܝ��K�)���m8ql��DjYa�%2n���-���$[��(b��� #�\�c�la�U)�GP���V�H�(�Yk�+a�S�Ѫk�bS�PY,�ܑ�ul�ci�!�aPsr�j����+LM���@�<ɋ�#�o���u��N����i��R&DғM��J��] W}n��6����K�JO?�Q�i��WX��l���25��|�������e�ܨ�^�B��[�]�:�����p���܁���l������v�`4��&��D����OH�s��R[��\Δ�H'o;V�g�1A���P��sE#0�:,�C��ʄRM�j��%-{�~�ɛ�U�и�m{��EG~�g�k�{-�fN�,���{�fl�� tJ�'���q%��\.��A��߾���\Y!
m����*��BI�����R��R*������ߩbw]���'d��s��|G�
C�W
]i�A��Q���><q9�N=K�V�m��&��w$?������Nc�xCb���F��������1��UK� �k�~����rL�*�o���6��Gi�����z]N&H��)o�m�\�aݿ�m�Ps��ꝿ7O��E,v���
Fɞ�#����h��勉ΒH���GQr�ܡJ���xp�E^c�)��Hwlo��^w�J;�@Ey$λR`���I���^ ��,�0`�M��FHE��gAVsG-����a�zp�tSEr��Z,)\���D{�]0� d���'R9�aӓM��݃Cp����7�:��:�c����hz�) YN>ME��8��K!���38�p�l�P!l�H��ۼ�����\I_S�WX��.��wZ4i�M(Y�i��*�W�i���Ρ����@��D��䛅7B\,S8�'��s����1rS�t���F�*�x�Cp�kR��C�H��[�'r�Tb��glֹ�����@TLo���-h���G4�d�j��;�F�`�'�N��E�@=�8��*�%�6ۂ� .�Y��51�d���Nt�5�d�?���R ��F+�Tߑ�U��'�x����x�c�]f/[F&{�fd���ļ���=>�� ��x��:;� ��������/uwq,|�`��^w:�R1�` �?����MHǖ�d�n����8R�b{�:%�M�oU�3�ǺV4Y�u3[q����+�}�����X�ӗ�p:��,�V���Ü�����M4?�qRX\4[��\g(��l��s�3�2�|�l��f�r]��2J�L�H�u��6����f��Y�Z�
#��W7v��,�`�AH+@��"fJ��i��m�����5���C�Yj$�|Ƽ�z�c��Y�;3e^��)�v�ؿ�Q�Jnd5�N/-�V�e�q�,$�k�*�_�ȓ�)�B��O,�O�T M���i�i+�BI ��͜q�x�:�����P'��
]<�픏9<�l.�M����T�L��=���
M� ��f���z�U����;��g4�S����.ؓ�f�Z�X�!|q���G��Xѳ�w�N�/g����a �ʙ2��FY�
���I ����Բ�u(Z�=��p�n��Y/`�ȼ�Ҝy�����$��"�B�=:~|�VCFPg�h� �R$h���+����b2U��/Ԍ�A�9&��N)0�)Q6�u���f���� hh��a��kP�-����l�>���H`ı���fe=�-&]�2f'�d'r�� P���χ��͈��P���T�I�I9�h����Ngw��E,�����̴Ů
�ǻ��u�ńum��t��G��'s`Q����wi��l��h Ū�%'D:�6�߀�ͼ_)���l�b5�%[�:�ʽN���ʢ���K����9Ui+f�R`���
�E�T��M�ܕ�)�c��������[ɓ1�:�ff�OD�>�g�G���.�9E��߲n;�X3̂�%�j�M�Fz]�XM;V�%�C�%r��Y�А���N˨�!Im�kU��.��#q�ye�6�S�z��/8���'箸"��-�E���]5s�%W��q�w`Ի�~E1�. 2��g�����\ ,mڋEٌ[:�p�L+��p���UB�W�[�д:��ڎ��� �NF�c�kdS�L�'E�m��0���2�w�u��hn����#��|7�6�^"�|T6�a��R�A�ސ�: 9+���Es�s�@,%Y��y���2Y��8bUn����A;i���+�9�'��\JH��7�ي�k��F����s4�(up��r��m�#
�?MK��c��%B�\�֜�4P'ˢ�Մ�@�
��il��Pv���]�
�
}u�rP^tsI����2�N��J�g��VӬ}������e}��sh@w5JP9���v����t%P���ߛu/�m@s
�}7�.�o���
����bF������������$�8PH���<��{#M�ņ���.��Uf�?4d��^���G�2iP>]��GW����93��o�4Pҷ����!����++�6���J�~��︥�/kW;�iA%7ݞ�7�ͼ
ӵ��x��4��z�LF�i
ȿO�����1�#X
���l>��[.wi">^��[��(�O~�T
��^��P~����e�<�)��4l�1ށ�?���bۊ����t�a��
wI��Pt��2�0���s�C��e�e+�풋�j�K].���_�e�N;�KĩV |[;ߙH 3��;�6�b)��6v��1����YM@�� ������&`P��4�3��(��_+]��R<������ދ
������a�<�p�҅ț*��N��z�δ&�.�P�B)��(���=&�[���� ;�J�1�7�BX�l�Aȟ5����V���LUtAw>�}���ē�+}�����l�H����w�z��K�6�[�
bo�
)�'p��W��y��k�)|�f�y��]��.ÿ�i/�J�
�Lܡ�z��R��'��hT<��fb�'u�O��_���o�4ޑ�1�����p�Y�u�捯wc�l�f�s���N���b`�1���R&����S@�|P�s� ��߇���o��D���>���_�
4�8ܱ!^�<�>[��L����d(4�/�¡�`H.k|NW3g�P�E+��-VV�/��2�;H��qՃN�F&#���hɅ��'.m��${Is� ����`# �DjE2{����˺ɭfB�.C��4Y)���N�6eN���$�%�y�[d���i_OS
�.�ofR�dX��ek$A����8SZ���]�Jf=��nDb��t��;-��
�+��h�z�2��Lr�f,z]+@k ��`[[��c_�1E-h*Xނ|F��9Y�ԣ�Y �����Υi�{�+
�\Gf���x�Z��1�Q���A�C^j��G~��-�TǑwXp�z��7�V;���`���Z��� ��
0��~�1J�:�|�ͅV�M.��e~��0w�������)�iX>0>O����O�ZV�HҊ��������P4��BN=��u~��G�a>���C��QRr�R
��� ��Y�<�'$>��0a�R���r�pJ��sg�*ɤ
(_���߳�]��0���K�p���6���9�T�qcL$SI�l̉WUqm���"Ȳ`*�&`E"1�:yc�/N�R����V�pK�u����V�.�\ܮU��m��{��i� �G�B3Rt�_�k�h�%�I����n��X>��D6�������Cƀ�`w��5H�Ȧ��uM�������U-���˻���0|�:w�����;;�&�9)�u#���ڜ�`�e��P3�Q�C�f?��^!��6��"e(�@�"j�=���qd������C�D.�Оl��5��)ƕi�0!� ���I:�F�Z�m��K���B�#����j ��Ы=�RzL�0Ém�pF�Q�dN���c�Ja���ε���/9�ja���Ld�����t� �� ��ĺ���;�e4��Ўi��v�<�(<Qa%=�#*aZ7�� c��j��;���;��L��=N)N
��ʞ(���Rס>�s�6�i�斻�cR��H��
6m�(ꩩ&FxEfާW��A�։�X��g�J�xj/����4w�Dv_���ȂB��V*���� ��K�"n#��i�Y���M����%Y&���ObXj���ˌ��Z�e]9�n_��06F�e�9��o
L^N�(�"r�p�:U�My_���X+�``+�w�
�2,wh+W��N�����4V|F��T�dk(�eA1����B8yI]3̙�PaP����l�0Lw#�2�Ɍ<���m����ޛ�5Y;C��)���mE�9��C��}-j�b�)�n}�ٞ3~Ζ������r �w�>
F�o{��8f���z��� =�2}Xr�*V��{pa*���@�vW�-<�2�x�%��1�]D�p
�s�4L�0�~�kXjr�ИJ�����D����1Hc��Ի�s�WTr�Ϛ�Ur��ZN��K��U�T�C/D�S��
I��c�{.��&x��mjV�6�X�����o�}&7���|$�M\��+̊�� �^"�E`��q�{Iqk�����lG�������0&�|�J�[s�p���E�ja�ו��3��4t����hH:��3��ːWK�N*�n3Z�n묃�L��LrKP�=>�s2��.�� Kۊ%1���fq��ts�� �G� بۺ`������j�>ߞ &��-.WF�3��^g���7X����]3���ԓ�8����P��Q��%/W�k��)�2ͫ�n� ��2��o�J��{7_�c�qĨe�Y��NB�A 8_ڣ���]�� �O�3;χ��|�/O�f�˻�oO�'����-���@o���؈>C�9WC��/��&uhm�6+rRkic� ��Ca-�����F�
�V�V3�鉝�E$�R�)��V�vhíB+$1�����Y�^[g���)�"5 ��5�wק�<��76�<>nH�+O[Lތ�v(ϊr��9Jx5(l�/*^��uU��$��\�Q`�d�u����V��
F����a���
%�"�Q���u�.�2de {��Ê4*!��?�5I��M�}��N�擖����
O���[u�y�=$��^̰KJ���2O�R]A4���%c�Bl�tm��α UfW_N�k��A&xu���L����V�#��P��Xؠ�ّL��O%��ϐ(��u��������w�;��8%�I<��V��K�>2ɡ�y�5��v�U,_��k#4��7���H����[��2�ۢg�cϰ��k�/Z����K�O�r�!���bVl=�ʿeLC+��|Y�!���*�ʝ��Y�G
A�>�@Aw��}a�e�m��)����5�՞g�n�� ����;��ìO|��` ��6�~�N<��1��}m����-L�� �7Sxg?��|��)���W��U�\���O���
��1���ɲm�\�A��ç��Y:(4�P�v
q�y�|9F���T�W]�T��hr���2�\�6�"s��^��fH%��R�����k�%��h���.�>���`�U����?�ʛx�r�yT+��Pv�s�ʵu���e
�:ml����B'�52�i ԈZ[}ܷk��rR%O���cȉ�x����v> O��g0P��@�w~�E3���p��8覆C�i*��6�8q���@,��m����{�%����+����F����~���v*�x��Ec�J'��O�] 6)~,*��ڿN�Hd5G;wf�DC
I'+����yr��I��z6sHO��ʃ@�'���6���9�?<����I��d{��)�P��~�
?�8xc������s�BE�Ǖ �DO��ӡ�X+ž�,K���V��+%���~@�����i�ő����2��3�i�an ��a&��떍sY: q/ ���Vڇ{�iB1�h��*��^��Ϻ+�q�R��jӽ}$�S��2m�����C�����������BjK���@i`m����WF�ˡ�|K��v�6*r6�H.~ B2�|���xe$��Z_��K�����#����%�b�I��̧8rd��^�t�)ε3S2��{���b�ϢRT�¢��oDŽ����k���Hх�!�g������o�T�ֽ:��o���ELj�P�#kuQ�'E0��S��\:h�y� ����a� �̫=@���[06���ZT���+�;��Fb�Q�l �ʪ�53�K�8@|�6�(fc�W�MϺw�.���6~��UBi|jb�`AӚH����{���6A�cv&3�(nxxu����b?S禎fgV��B�[�K~�#z�U"�_B�����c#���3t_#�\��a�Jb'�&����&:"瘄�6�np���S� G���mw�:�=��u>�k�q����/ J���sl"P�Nin��AS4m8!�#E�J�#�v�X��l̗�1�����W
M��"�L�t�����m���B"�=�Q�R�p�N��a�D�v] ��\�;��I�ô�z/��� � �`��j;��G��7V����� "�o�����c{�s�6�nFԘ�#ث�V�5`0�`v��5�%�8���*k`z�4�Geƞ1�mj�n�'����`���n;@�*%R|���-�2��>0�I�`��s�t&24�o��0�u���0�3�Pv���i9 �!���k��7���yۖ�o��2�z�����X?k���E"�G�%'e��)>,/?���������jʊ�Dy��O�1���� �(�⥄���]�Y���+�5�$
�"��}��QF4��JYS��,��4<Q�b��E݃@��9):B��g%Ś��J+ ,��v�<^������뾆���9oGM�V��/Hr�p?�k���M:үm�<����M�8�����H�����AcM\�u��X�b�^c��k4�So���:qǬ��@юM�� D�i�����+�
�e�{���:�]�H~�d��*��pb����-�UUv�
��z�q��O������
�&\��q[�{���� �4�x{�w$�od��Ȭ���\&��n�N�z
K�^YD�\蛤m���mUK <+!�B��Tuo�+x,w���L�k����Q����D�+��W��'��̨}_�`�.�&�)���ʉT���).MY�B-�Q���λV˛ꝭ=���� �-dLzT��*QC�K�M{R�
7��Ƙ� �P��i���*��n���N�0�$|�
��j ��u�xw�
O��.m����y0���ؾۣr?��.�Q��������B��-��c�?r1�n��T���w��L�X&�#W��ž2�7֩N�^���;��0�c�����@7�K�%�9��TB�"�ˏ ��"ҒG�[V�ЛG,�{)o�I���?�JH�ln�c }�C�W�
dݐy�uUE���S��]Yy����x;���<���?�V�U�VOe���Ӻ���u��$�o�0GP����`|���3q+};��ǃ6E
|CWH��`E`��f��2s�����@U��K�S�8�-��g�w>
�d��us8YܖC�[�vuѾ�+;g&d8�����ű�?�.�A���Pq��b��v<u1��f\i�j�P�m�_�A�%��M¿?Ŋ�wS����7uF� �0�Ёf�c��?kߧ�M�
K4o�-�A5LV�a�I��8:bVv��z��6Ȳ�V�� �j�0i^�����
$���庆���uK�%�ԟ�0�v�=����|�W���s>�B ?ݶ8�d����
����5�c57���0�)��Z�%�Fps~�n����@�1Ӹ�Y%�h�c�
/V�c4�d�a#R���L���m��dC���$�>�6�Y��;|�yۦ��b�1�#��]��t����f�K�M��1�5��w�&����� ��ʯ1����]Q�ܟ(u�Xڪ�Q�&��`��j4�#�4��pؘ����͘T�e�dXwg#�锞�p����ݓj�N�-^��p7Ic1؎V�n�Z{�\�j�/Z�>�Z��� ���G���J�+h�*a�;WN�g���e��d�㻷�^�t�x&�M-��hk��Y�_��9�
�������h�v&�/�V�;��`�.�#Ґ��<���@U��n�c0]:)T��U�}w�@�P��� ���ʍ)��M�������4��������S'������'����@˗Դ8���K$7����E��6�Ĺ%��w0JY-i�lNX4���ܴ��(fX��Z�qc�LA�0AU�x.���%��@W*�c��_�q�7�Z�P(ϩ� �+ZP���v�����!C��
�CS��2GǴ��,�oe�o¹谚��=�}o��P��5��['����fd�
[�7=����ww���)�{6:?1�{�E���][ӺQq�d @u_�A�<�pd�*�sx�=$�G�I��LoY�R��aw����[2�ԓz�(-��Y�ZGd��Ǭ�yh��zr0t7T87�[A}� �ĵ���t�B����)���mT1s���y6��-eY�Y�x�@��{����|��B�K�����B
-��£
w��>S܍����K����L�;&�7���>uʄ[�X���$���
�&���5�u�8��q���|4����%�ӏ�eyw���t��C
�!�4*��q"Lw�&@=���p`�0�B)JfzO�Ľ�)2Ң���e���s������i4<� ��b7(�;�^i�i;�՚S8_5����U��9�D���x>�����}��iBP�^��,D�. �'�B��n�b���IJb���]�ٸ�S��LFP�,.4w��,Ф@IOl�SxW�����D����l92�{
��ҥd?�������c'�(����~+L��u�[>xƿ��
l]@�&fq㌇�Yv��_4�4,3h�|[��.p��ŷ��i���[T� �P?(�{�vW����q�������H��������g�B���0l�f��M �\9���\��o�lJ?G����7t �i�m��ʩ�
0� ��g�sܭZ
�R���H)�Jqe�rk��A�^���ԍ�"�G�I�;%0ׂ� �A_:���z�����N�I�f "R;���{B�����N̵�@% �������!+��|H�a��7�����#/x�u�@�Ge�[s�Lk���E�@���@�?�ͅ׆GN�u)�����6��ſb�E
�gOXP^��؟� l�R>��~�HI�:�˸蟟*�ɞ=-Uk���~����}��F�6
��������v���w�F�[�Q���AM+��T˝kV�?Yg�;�Zq����!{dC!�.�_YÉl���zv� \�����B���B�gwC �7B$�B�.حw��Kz��e}��}~m�R�'�� ��2Z,f%����c�1� �?��yG���:��+й��T�N�qbA��| &�x��2uf�}UQ70lFai�C��xۅf�w��H�v��癃��� �0�&wHԡ�2 ��d�15�nh�jz�<�~$���=
1|��ז<��ji���F�����'4�Ó�9-��v����3���#��'.C�0�x�҄�h��R/����k��� =�ȅ薯)l��?�6�E?~�~��Qc��J�Vl4���M.|�i���#��w����0��C���T����I��b@����sɬ�o΅L1C1�VI؎a")��@
X�����n�W����'C��*�
�����a�E)A@�W�>�(���-{�~������C�>�bsQv���@M�$`�T��d}%U
�)K��a\��PX��������T2_�M��Cve���3���c �U�yt���P���������ễ�]S���A���o���Y_�W�T8��������~�
��!,�>������9Wy�2����NR:�Ỷ��k��Fۀ5Q�B@ɪ��o�͜��� ����L�y�D����Z$�l/!��K�o���K7A��qA&[�F#�{ ����ѲZb���ļ�Iui&�w�|��}H �����sw�� ��8�t�W&uw� �Rc�!����2����l�Aꮼ��f�������U_@I�tÈ3��ұY��Ŀ���SFW!�J��xl���n
���o�*]*�}
�E�8eÀ�T;l�ѳ����TE�Edg�r04ʓ��o�~ ������bT��E��(�M��;TO��iȤpdX$�xA.�K�]�N�N���y��n3��A.�8�87� ���&|Ff�H�+��4��¾�m�Ι����^+n���KL'3H��;VijH�"*���z�[N�}=��)���TAU��1`z_>���o塺:C��*�V�����=�]܍���vy�v��5�G��5{d�����E�'O���^1e�M.�q�������ϟ��h@��8�*!���4$ħњ���c�R9e=�53�Ѵ@C����(�?�ز��/5j�>,��ԫ����-d73�w���>�dc�_l��oj��S�&x�B�4������N��`�M�}��ր�K�sªo��1��s��S���T�-]
��o�Т�2�Lm��ʅי�ośP�͜�"Rm!jbAr�I�v��*��T��TI����|��eI�*^��>Z���e���G� )�$b
ߒ�5��L�����mr8[yǑju�v���
2�����U����)�����#T� b�,�O��/cI%nΰq��C>ﴟ~�$� �%=.�����+�컎[��;^��X;��4��"��*8�N��E�I�X� w�U�����"e ��uL�#v��F�
�1�a����n� ,!-̢�.<# �BI�OF�U�˓4Pށ��Y���F���d���齠��9bC]��Kȃ�TZ�g'-X}��C�t6��j��D���!��Z�2���RϜ�����eJ�Q�US(Js:v:���ߖ5��Y�)�L$��/^�Es Sl���8r��_H�����)���Z_�_hZ;Oy�t�l{f>S���l��v���u�)��ݚ�M�F�B�^�^i��Kۭ��l5�Љ���#"�k��9"~��8 �2���};k�D��>b��#���kTlʘ�!��Z��Hd<��{c����ΖS����mr�ډ]��#�g;km <��e����l��W,�MW���LN��Y�r��y��/p;�hC�B�6�9}�"���>�F'Ce,�����:W2gm���}1�
+�{��&Q�E.�P���|�L}����Q�V� ���I5Ņ獨��wZ���1�.��D<������*������
[}(6`�h� �1� �/3��|K�$��'&�����>�g���?�}�_��{X1i��.A���n�@9ϕ�}�V-R]d����O�+AT>�vu��W���AF�w���8�-��.G%[8I�g�o`�fݙd�Nfƥ_�+�ځ�e1@�Q�*�� cJ)G���r�DA�1i������~�
*���϶�O6d ���@�ch�Aղ�4��A���5��n��9W�}��=�'�#fm@�Y����Q�,�����.��ĝH}m�oj��2�&�~���#@`tܙcn�&�h���1Z�!���q}�y��}M�=�v�5��F�b�Xu�:aLp)�o��ـ4�=����F��&OV������F�ЍeU��퀝z0�*�pbf�'uG�c-�(��y�-M��tb
l8����q
��a
���N��3?���������I/��@qcڽ�?�KEv=�[�"�Yj!o��B�ɪ�;�q$E>�
�ْ��]
ϡ�;�cϙf<1�����%uy�#�&����,�$�>�z=���m"�)/����s� c�;X�B���x1����۳�7o��ok�\T�Sc�% Mr
��1��� ���xCa
łwJܐuQ#cA?K�!%sy�ܿ0f���0z��i}?K�����8W�C�C���}�u�jRՌ��}�X����fH�]������6�)�L��a�(Ql�Ǖ�#R2{ ų�l���3�Gx�S,E��.��}���H�H�9�~$ci�ŗN��o�կ��}(rgF;�,Rň�Z<��
T��ʡxT֕U�̀��
M)���:F��K��� o�����D>�N�ʅ��.����,�3��1w�?:Ѕjj���otkF9(#�����@�gI<�N�����w ��۩�]�@���KyN<耛�m�0���x���1s�0%Ƹs��Y��ʄ�Ϡ
ܬ��З�]��,���M�$�9�5M��7^�B���0�pK`��F8@M@:f��̕�%7���t0@���Zl��YT� 6p����h}��n��me��J���%v���;$J�Q��+<z�K����!4��{�R��v�`�<� 6�]��4����u̠�����a�%�������8�VmZZ�&+`��|T:^�xo�ص��hƿ�c�R�
���x��v@/�yt�y|5�Uv�1f���s��O_s���CV|S�af6xS��#��F I&�W��
�n��-�r/�.X4�/���T=�j�N��X5���X.4.���G���kwa=y�Y�IG�����"��J�CdzX h{��\���0�p&LI rE��V���ft[x�}�s�l���Ơ����~<����\ii�5�>���(yǨ��۴%�:$�nŇ���(V
�"��O��0
$8�d`��(�<};����aռ��4���EG���D1�k�m�*�o�᙭TD����'��<'~I��b�̛�J�b3�72��'��N��զ�
Of���cXi��:m�}�L��M�3�����&�S�Ǣ{�WDsu��a5D��}�iذ]����딘WK����`V?�p�6^]�v^e������q*$�B�46�Հ��:| �ˈ�=פ�m+��#���B���<�I)|�-b��nٵޜ%���i��F�~s|e�� L���B��S`����tkx�/?��Zt%�V�.=��}���aw�N�[m���?����r����vs�nP&�ŵW�����7�a� �ݮ���5���H���^�����E��V�ʄ�x�}q�d�Y�?��P��w�x b�(����S8�\e���BhG6�� m��� ι�����q�(�Q�ig�-z�+a3�:�HH�
_�T�r�����܅R�ʤzO�w���89��]�cl+����!
§&�L�B��7����$�IԍG���%��-��:(��4��݈�[���$#�xo��m�_�@�~���z (!���)�[� n�j����]V���g<4G���p�R�L\�3T�|ja$�>��p��}�N]��`jAR���f<kD�������|���3s���&�C�P�i���Ͼ0���Ƥ��م�y.-��r7#h)#��萛b�V�U� ������sҧ���E��*Q�!0]��7�{��� �[�یŖ��uߌT�l��U$���B�F|�^n���"��s�q��^4@�2ڀ'�q��F9���}f'����k�´���%L��R���a��m�� � M��SPv�qE<[b�j\��� �m�(ۿt6�~���ֵ�e��MK_�c��>���,�0}2����.�uS6S��}���'�Q����m���!��Xt�j� n�Ѥ��*��:k�?0��
M0D�)��\|�T%�5C-=.1�6^L;UxL$Q���_}����?4B*�a�0`ޕц���1��mx̘ ��Ƣ"��´�/_��akK��^�k3!���M�0?��dz�"�����
�Q)�� ����4�^��� �ww3��9!���w��#��rG��ձ�y�-391^�$�\Nr���|�\�WV�ʊ��0�QB|/��Ԗ��L�c�3jS�]���dKv�g�b_L%��j:\��X�Q�ʈ"w�T��<��Bh� 1�~��\y|M��'���]��pK��w��鍡�vt�7�p���?��
n�3�������݃K&͗M}b��ܨ1X�۪:���� �Xb�v�`���V�L�!���K�$�j��7���lx���ЃW��0*ś ���|�IU^����|nF���7tťE�V�
_�� �0�v�d��L$[���M�aTI�u�ޏ���tȭTcݢ�<�;��\����eU[�g���g�d_����r����H�wԦ����D��x�y���:�?L_<�]�B�^n$��B�h��0/�_ɦ�C�lHl�R��lt�$3����}Jel�`(Xn!���S�αAe��B�h���~�Vj�1|�\��q`����JDwri��6�WP�p����T�{3��J7D^����
��r"���$��/7zd2{�t��9x�,���8��*`Tһ�S����P��L�̷�3��aa�n����y�B��wW�]9���f�X�ƕ�@-w}�S[`"k{��)[�Ԑ���E�پ���X V��*Jv��2�����ȉe/�k
p�se�>�����&�_O$��k�p�v���H�l� H���F!���i�s�����ԾDo�+�b�6�W~�;�i��?�gz#���
���io3)M���a�}r-r�?i�-���P���8a�j�%6�ɝ�փ����L�_wa�,a[E�-
�I:����8����=��*��Ѫ��Ϭj�K�z�T��U,���>AԪtNq2M�K�Z��>rb����U���F�8��n�>�������6o��R��4�V>� KR,ȵNj�C�NTG�}�;"��b�D�}e���i��\˘�����Pr���+�Z�C-ISط@�L�����T�z�^���%.rx&�$�kGճ˔�qm"�rs�n*3fO�v~ҥ_�i�L�rݝ��xN7��S��0���m��d�eʸn`���
�?v�X
��r��d�vdݦe�n������A͏����˷�Y�������aLd�D�ϧ�ӧv�d")��F� Q��u H����Q��*wl0� ?���~S���'�grd�Pyr#��q�(G>9�o�+��J���lۨ '(�bjh�N$�8����9J,�FpU&tU��h jcz�V�]ڇ �����~�q^~٬r�m/�Q�@��Sk�����z�����z��84� 9�ז��������r���׳#@��������&��~O
�V��7ܱ�
^DQ� ����
�yi��ٽcLd�U�lԀ�h�� u�y3��p9���3����`���ҭV�'K�&F�+XYsc�x�i>و���g%���
GO���)���a���xJs��G��4�#x��+���ghň�{�R�m*8�Ƀsfx,�{��K���l�����y�"u�xqԗ�g��g���vh�`@���8�>ɀ�2%g�����8^��)��@q��*<�d��+X_���+S��}�Ic _�B�L�'�K��/�}a�?���G��`�x\��R�e�H^J;%��"�y���f��*�K����#����Y����%�ǭ�箛!Kc=;`����z� �6{n,m����ݫLCi��Ō�o�q"M�@�Z
,�k`kJ�=[�#˼�^ h&�w����hH{1�]]=�-��� ^a�F��Q��͌����x�ٯH��
��G�e�]%�?S-�S*�]�R/�UU�{�����������j�[�G6���ԣ���#q{�O�DG����V�m�@�%�D<��|0�+�ַtD�@ه[��"5Z�%�Ӗ�KP�,9�U�l � x8nK�����rYڂT>���#zb_k�݊&r!������/��!E���]�kbP1�B,U/�oY������+�������l~:wdQO��W�J�=p�I-<�q�L �����t����m� ��aޔi��}�J�߳1����,��m��X�O�]�|�[�O��"u��n�=pVH�8^]�J[�"??�>���a�j1+�ԚK����7ކ����}����I���6*�jģ
�Px� ���M'^(,�
*R2~��aD�6�p�i/,�Df,ǁ"������Yu��A{^�����^w�͐�X4�;����)�M@L,�Wb��������D�x���(d���^0�v��}u�հ�v}]�'�n���DQ� �b���?{�>�w��,�Ke�R'����%=!�a�g�G���3��G}�q�tǮ���FԚ������Ɖy=�IU�����i��mWL �25�,�`-mm���h�`
�N�
�@T��f�2��������-2� �|̺U]�m�17�V�Լ�9��l_EKr.<���/�d�xh���s4��h�y�e��5��>������jՁfG���G���
e���¶Ix��ǫ�����$�83��������͌�E��=�i�MwQ��6�bucc}K7/jԂX�!�x[f�⨿��j�>���Fh��o
�O_
s����{pF�}�e*B6Lt����%��"8Y��X")1T����3bTM ���e�GTY�N$�����OB?qi
0��c [�����F7�[�
�R)S��Yl�B�9��Dҙ���O�ro|�~�c--����I�:��uM������%���5���߅*e��z�b��r��f`� ~3��dai������ڳ�O���A� 9�WPG��i(��J�B�2eJRF~�\�j���r�m�_��Qq%�)}�t��z}h=U�ÿ�4M1�n��+�rxW��r��[����l.5n��vQ��""m���G�WSM�
{�Ҝ�V&�}�Рm#:wjz;�A,l �vk��
����� ۗ�uz��p?LRyp�v(�ke�M������~���$�1Q�
JogD�֑���y�9�R�WE_W�%�a�~���!���J���ei�,��V6ğ�����g����]���Ƿ�ygbVh�|J�tE��c�p��=�$��"�{k�LZ,�u��f�T�ĝ�1(�m�C �l���&���_�i{3�� eŝ��j#���]�+$�i~Bg���F���"PH֑6**�G(5"�s&cP�9��"vdF�b�uV�ܨ��J�M��Ng\� ��Ӣ�B�^>U/����Dz�\�#!Հ�wf�^P��gZ����jwXCSk������;D�A6/Pw\{ ���/K8sZ�H���i�8���]�"���%0�P�X��n�u�<�P�/��+���,N;�?��5�I�b���� �u�Hq
D���������隇�틼n��q�8{Ȇ��!�.�s�2B7��һ��a�Oq^T{���d ����/��:�-o��S^��6+�]&/���=A���������@��8�f�yEL�߮HC��}I)%�)(\�aG�+-��\��v��1{��@_N�A�:���%�DD����2��"���2I�Uo�;˹�g�ICm���2o�TZ9XRg�K���{v"�#O���D�9�sUڎpR>�CC��!q����<�N'Nr�M��=S��]3�sd�?B���..���P��oA�(�ܷX�̌pc�Lf�fwD�g��`B��_�ϐ����s�����x6Z���1w�G�����b�w]���V�?�~����YJ���_����_6ŶFg֓*M�Z&=����'� ��;��z�����@�w���D�'�W��v�d7V�<�$��q9=U�;��y�g�b�1Z����#O�B͜�����!�X�l[H:?��g�U�@5�:��4�l�?Xd�{�����ǿ�b����h� .d{
^�����9V�ˋm�Hnt�sW�/u�υ8�>��3�י]�2�Ձ˚��~J[��0�扇�e���n{�
��-��i|걛&9��y�(�#��'F
���w$ԩU�`����gl��My(z�)�d��� \��ֲf��k�V���Rt�J�*81 Vh�������":
\7�`�
@�,S�@}�wm T�r��h��N{��p쫈lՏE)x�E[��Y�|�c^3�w��'R�s0{� o^����VR�\����p2��I��*��ci�����b��0�ki9����,��(��uC�C"K9�X�VGhw|!~��Sd��˦�(�b|S�=i���Z�r#�N��FV��3�ׅ�엤���Z}��Ii#G"1���9��������� �jZ���Ԍ��\F��1�
����d�ǑKө�z���O�%M�k�]�ّ:=_�ρ*q
�/*8v�kԍI(q�Wo]���N���"p�v�+��!g!0�Jg��X��RrX�u2��fzd'f��&p�o��n�CƋ_F3�fk��7�!�#�������#�Nק�����R��ғ�O�1����D��6As�V!}�7�-�|��`�`5O"�-�Ǘ�y����|�O��/s��Arp�� o&`����
��*#b���TC_��z1�� �և,�-�vV���I>�9 ��Ja���l�a�B>�V?��[A�
���B�=���a�}(3I3/��,/��7�t�[e?��<'
&�|(Ģ���%�(�� �)Т)@��5�:{�C@
۽�#�)�B'*����XJ�tzB���Gjz�[��=�S�2����zt�-/73�l*Կ��H�_)��� �D7��±������EUs3I�Aw�
��˯E�`X��_Vr�{�P�4���됚�u�b]�ȳ�Ӕ�������m������v�����[�MܵE��ɭ�cG�B0��M\���A��%4�+��s�S��7��N�y�<�cȂ�5��{��?�ST��#<�{_,gai��qt�d\�~��]e."G�.E'~d$������u1ժü{Cԧ�
����b���`��s�>N�.����$��n�[e��7]��-5ߛ{����H��i �(5���N�.h��uH�-��J%� _�-���$y�L�!��"��w���q���Ik���ɺ��-�ܞ]�䝪��d'aΖ�S��{���J>�kФ����ox)�R�L�ʡ��.Q���
�kq4k�y?(X�R�ϐ��;��'��Bq�]���\�A��ǫ�T�P�:DO�Z���G�)���pO��gƒח��w�tI��)�a�.\���N�Vn�,��d�3�f�)w
v,k@� �>�ӂ'������|��?ޕK�<K|D9�)�P�/x[���k��0�H<+��4�L?����DqݶkKQ��%��'���e�s��os2uE:�~��w^��;���PE�m,qO<�h� �9lt�2m3�9H���J9�7_nm���_<~��snc�����nk^�����2��4 ��֭d�$BMs�O[qCL����UC�!�;u j�4�nw�а� ('�;B9�0���%Pj������J2�~��y����7����'D(�ͻ��-���dW��w7�������(�`p������ �J��H���>/7�fj�^�Og/`311y!����8O��Ϣ��ϰ�0J�h\C��p]i_�p��A�z�(�
�Cq�
-ڦ�P������G�*�M~�)/�z6�a �"�N��F�̽ܩ���p���=�e����H��#6q,F萨�6��b��%!Fg�{��<�&�cgr��W���d=ò��#��6!�hY���$W5��F�s�.�������Q�:��%������i�<��c-63��|��2�y5�5y2��$�0�Z��D�T�UM��6�'�d�ʍ|�&Ɗ%�[�(�^Ym�#�J K�5�o#}F((E
������q����Ŗ2����}�[��|)ѣmx��ɩ�%3�N�P��ɝu�U�a��0j �x��~]~�D����1�âL�_c���ZG�_�Գ9W��RX%`%��?C��B ̟=Q(�ce �`��
0&����"�ԍ%��S}��{
h�7�@�$B���wu�G�7K��͒�l"]`Dh~��@��+^���rQ���Μ�C�� �@\Q�op��ǓL���j�瑅&H)�J�.1t���6\�k�A��-M�M������ˏ�ϴ����U���UA��>�[�xx��w���_t�Q�+�:}��("�uy�+i�f�zJ�U`�t���p�4����v5$-�$��k��%�*��##�%B�la\1�����C?���S/X'A�bƉ��&Y����T^��/L�lďc&gva'7W���k�o/�R�~T���k���b�Ϥ��
ͬ��_��.i�m_��i�D"�K ���5�;�T�j� Ue%�^��Y74��/�r�J��M/�Q��:
�I6%*�m
^��
��Fs._�h$p��K��P��Kp�s����.U�s���~��s���@ۅ�i~�
�������Q53�Ӿ��cҝO�ڌ8wnOF����$z��f���z�#0�R����a!�iCCU���/���JD��||���f߃�+���];�>����ۆ?�bo?�L���v�ѩn7��M�)��|�l�6=ֻ��>.���k(�z̕*i(>cS^��-䰫S-N?'��TN�w�-�-�{d2 a�Q�K�
�
�N�%{ �є�%Q8�~�,�+.o�8�T�r(p9?0�Z��
b0�L�u�|�G�vc'
�i'd֏��l�?�IF��o4��;���>�q �E��jH���&���[��1]�Si�\h���^s�"�]ʈ�2LO��$
�Xo\��~=�Ę<��`p��+��{���bIm� �%�H⭈�/2NhP
�0��qqC=�7��O�q���i���]��k@��b�����=�����"c��r!L� �l(���,GkIc�.m�pO�Nb��h���Ù2���Y���T���Q���
�ݤ�I���[r�ȽK��o�f?��R�迎�P���`�N�I�0���"+bVGvqS�vc����ڪ�r�e����g��5��h�p��7I1�ޟ�8�A�$Y�9t��~Q�h��@���}I�c&lgq"HK���^=�捑.���e�*h6�%»1�:��a�u�7��۟C$���:YW��Sp��\F����Ȧ�`2�-��m���.���!��2*+oO���JM�}���
��|:�����S#��A�����T��-�<��/`�q��pU_�ɚ`���r�ۘ�J�l�E]�D���yY�/H����E;���
��1�ѿ�13ݠ7�#��~����D���^C�3�5p}�R*My���U��(J�ܖd;����sC��e����#�]1 ݔ.�������p-��W���8 ��h���^!V6� �����E4�:��g����1`�s��8� ۊ��;m��;��x�{>��$_ ���\�u<�OE)2���ɢa�J��q�i���R��a^y�B~o3䦴;�#��@l�C[�+�p.�!o�q��&�B��a(
o�� |��e�#�gn�pA�8R����ւW�~�UY #;Ч_�KOk�|dz����3n��k�����v���k)8y��H���`X���h$,EӔ�J���
���
��K�+ N ��' zoE���l��t�l�;'|sHL�(⽶h}h�
ԍKi��&�����!���j|�;�W��<7�V��a����Y����ŅgI� �e�M]���^�2SF�� ����Ʀ2�]
_Դ�3���6H��<��
{����/�`
����p��� �� Z�Y�F��ۼW�٥y�����za�*�L�^5Y��� �8�����1���##� �=��_� )�Sm!��d��q���*�
Pq�+��2� �� �̈��,j�G���L��ԋ�<���i� ���������ZS@(�ї��Z��j�eō�����`�D�W�i�w�c����a�{�9�9��S��&��z<�y�;���r��1/
B�+���4I精Α�D.��hd'��/���eR� �!'d<2���d��8b��'��i�+y���bg��#̪���������.u�U(��p5n��M �B��!����Qj��|�r<����Q�����#|�4����#��
�����.5�G�׀��=��=��F5F�j��gq��#"�w���_����9*�P˨i���J/��36��<"7��_�2�ݍǥ�\k�mt[g�oࡨ(S��x��J�=Cm�Bh�Z�
�Jt�e�QC�zi|��y����i=�s(���ˡS����%x�`��6 $
M�=& ��Ŗ�ů�e䙆�_R�)�
c�!L.��g9�DF��ی�[�KcJ�F���^^
)�O3'�/Sg���J���f�f�bt6���l�����7S��!k4[