# Slacker

Created by Albert Lai, Hady Ibrahim, and Varun Kothandaraman

GitHub: *[Slacker Github](https://github.com/albertlai431/slacker-chore)*

## Inspiration
In shared housing, chores are a major hassle to organize, and it is hard to ensure everyone is doing their fair share of the work. In most cases, without direct instruction, people simply forget about the slice of work they need to complete.

## What it does
Slacker is a web app that lets users join a group containing the members of their household; from one overall list of items, tasks are automatically assigned to each member of the group. Each member has several task views, the main pages being the user's own personal list, the total group list, each group member's activity, and settings. The user's personal list of chores refreshes each week with both one-time and repeating chores, and forgotten or overdue chores appear at the top of every group member's personal page so they get completed sooner.

## How we built it
Slacker was built using a combination of React and Chakra UI, with GitHub for source control. Additionally, we created mockups of both the desktop pages and the mobile app we were planning to build. To see the mockups, check out the images attached to this Devpost.

## Challenges we ran into
Originally, our plan was to create an iOS/Android app with React Native and build out our full Figma mockups. The full idea simply had too many features and details for us to do both:

* Create the mobile application
* Create the full application, with all the features we brainstormed

The first challenge we ran into was the mockup and design of the application. UI/UX design caused us a lot of grief, as we found it difficult to create designs that both looked good and were easy to understand in terms of functionality. The second challenge was the Google authentication feature we created for logging into the website: its implementation created a lot of bugs that cost us a considerable amount of our total work time. Given the time constraint, we ended up building a React web application with basic functionality as a prototype of our original idea.

## Accomplishments that we're proud of
We are happy with the web application prototype we created in the time we had. We have implemented:

* The landing page
* Google authentication
* The home screen
* Creating tasks that are automatically assigned to users on a recurring basis
* Creating, inviting to, and joining groups
* Labelling the group member with the fewest tasks
* Donut graphs indicating task completion every week
* The ability to see every task for each day
* The ability to sign out of the webpage
* and even more!

## What we learned
Since this was the first hackathon for most of the group, we put a lot of emphasis and time on brainstorming an idea instead of just sitting down and starting to code. We learned that coming into the hackathon with some preconceived notion of what we each wanted to build would have saved us more than half a day. We were also surprised to learn how useful Figma is as a UI/UX design tool for web development. The ability to copy-paste CSS for each element of the webpage was instrumental in creating a working prototype faster.

## What's next for Slacker
For Slacker, the next steps are to:

* Finish the web application with all of the features
* Polish the full web application, with all the visual features we brainstormed
* Finish the mobile application with the same features as the web application
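The writeup above mentions that chores are auto-assigned and that the group member with the fewest tasks gets labelled. Slacker's own source isn't shown here, so the following is only a minimal Python sketch of one way such a greedy assignment could work; the function name and data shapes are hypothetical.

```python
# Hypothetical sketch of "assign each chore to the member with the fewest tasks";
# names and data shapes are assumptions, not Slacker's actual code.
from collections import defaultdict

def assign_chores(chores, members, existing=None):
    """Greedily hand each unassigned chore to the member with the lightest load."""
    load = defaultdict(int, existing or {})            # member -> current task count
    assignments = {}
    for chore in chores:
        slacker = min(members, key=lambda m: load[m])  # member doing the least so far
        assignments[chore] = slacker
        load[slacker] += 1
    return assignments

print(assign_chores(["dishes", "vacuum", "trash"], ["Albert", "Hady", "Varun"],
                    existing={"Albert": 2}))
```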
# Mental-Health-Tracker

## Mental & Emotional Health Diary
This project was made because we all know how much of a pressing issue mental health and depression can be, not only for ourselves, but for thousands of other students. Our goal was to make something that gives someone the chance to accurately assess and track their own mental health using the tools Google has made available. We wanted the person to be able to openly express their feelings to the diary for their own personal benefit.

Along the way, we learned about using Google's Natural Language processor, developing with Android Studio, and deploying an app on Google's App Engine with a `node.js` framework. Those last two parts turned out to be the greatest challenges. Android Studio was a challenge because one of our developers had not used Java in a long time, nor had he ever developed with `.xml`. He was pushed to learn a lot about the program in a limited amount of time. The greatest challenge, however, was deploying the app using Google App Engine. This tool is extremely useful and was made to seem easy to use, but we struggled to implement it with `node.js`. Issues arose with errors involving `favicon.ico` and `index.js`. It took us hours to resolve, and we were very discouraged, but we pushed through. After all, we had everything else - we knew we could push through this.

The end product is an app in which the user signs in with their Google account. It opens to the home page, where the user is prompted to answer four questions relating to their mental health for the day and then rate themselves on a scale of 1-10 in terms of their happiness for the day. After this is finished, the user is given their mental health score, along with an encouraging message tagged with a cute picture. After this, the user has the option to view a graph of their mental health and happiness statistics to see how they progressed over the past week, or a calendar option to see their happiness scores and specific answers for any day of the year.

Overall, we are very happy with how this turned out. We even have ideas for how we could do more, as we know there is always room to improve!
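The app computes a daily mental health score from four written answers plus a 1-10 happiness rating, but the exact formula isn't given in the writeup. This is only an illustrative Python sketch that blends sentiment scores (in the [-1, 1] range, as document sentiment from Google's Natural Language API is reported) with the self-rating; the equal weighting is an assumption.

```python
# Illustrative sketch only: combines sentiment of the four answers with the
# 1-10 happiness rating. `answer_sentiments` is assumed to already be in [-1, 1].
def daily_mental_health_score(answer_sentiments, happiness_rating):
    avg_sentiment = sum(answer_sentiments) / len(answer_sentiments)  # [-1, 1]
    sentiment_part = (avg_sentiment + 1) / 2 * 10                    # -> [0, 10]
    # Weight the written answers and the self-rating equally (an assumption).
    return round(0.5 * sentiment_part + 0.5 * happiness_rating, 1)

print(daily_mental_health_score([0.4, -0.2, 0.6, 0.1], happiness_rating=7))  # 6.6
```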
## What it does
Take a picture, get a 3D print of it!

## Challenges we ran into
The 3D printers going poof on the prints.

## How we built it
* An AI model transforms the picture into depth data. Then post-processing turns that into a printable 3D model. And of course, real 3D printing.
* MASV to transfer the 3D model files seamlessly.
* RBC reward system to incentivize users to engage more.
* Cohere to edit image prompts to be culturally appropriate for Flux to generate images.
* Groq to automatically edit the 3D models via LLMs.
* VoiceFlow to create an AI agent that guides the user through the product.
## Inspiration
*InTouch* was inspired by our joint frustration with the current system of networking. Despite constant contact with new people, we, and many others, find the majority of our connections to be underutilized and superficial. Since research has shown that the strength of acquaintances is what leads to career growth, current methods of networking may be ineffective. We hope to spur a paradigm shift, fostering mutually beneficial and genuine relationships out of our most distant ties.

## What it does
Based on research from Harvard Business School, *InTouch* is a mobile platform that analyzes, personalizes, and nurtures real relationships from superficial connections. *InTouch* focuses on three approaches: a) contact prioritization, b) regular interaction, and c) substance of contact. Through personalized data analysis and optimization, the platform reminds the user to reach out on a pre-determined schedule. We envision *InTouch* as an extension to many social networking sites. For instance, *InTouch* could assist in cultivating genuine relationships from new but distant LinkedIn connections.

## How we built it
The system is centered around a Flask web server deployed using Google App Engine. It makes use of a Firestore database for storing data, and queries LinkedIn's API to gather information on users and how their network changes. The information is displayed to the user through a Flutter application written in Dart, which is compatible with web, Android, and iOS. We handle reminding users to keep in contact with their network using Twilio, which we think is preferable to push notifications, as it is much easier to come back to a text message if you're busy when you receive the notification.

## Challenges we ran into
We ran into several challenges, including understanding and accessing the LinkedIn API and installing Google Cloud. We found the documentation for the LinkedIn API to be unclear in parts, so we spent a lot of time working together to try to understand how to use it.

## Accomplishments that we're proud of
We think that our idea is quite original and that it has a lot of potential; we envision it being useful even for our own networks. We spent over 6 hours deciding on it, so we're really proud that after all that time and discussion we ended up with something we think could help people.

## What we learned
We spent a lot more time than we normally would coming up with the idea, and this proved fruitful, so we learned that stopping to think about what you're doing can really help in the long run.

## What's next for *InTouch*
There are many research articles that suggest ways to cultivate and maintain a large network. For instance, the frequency of contact and how personal a message is can greatly strengthen connections. We hope to integrate many of these aspects into *InTouch*.
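The writeup describes Twilio text reminders on a pre-determined schedule. Below is a minimal sketch of that idea, assuming a simple list of contacts with a last-contacted date; the credentials, phone numbers, and data shape are placeholders, not InTouch's actual code.

```python
# Hedged sketch of the Twilio reminder idea; SID, token, and numbers are placeholders.
from datetime import datetime, timedelta
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")    # placeholder credentials

def send_due_reminders(user_phone, contacts, every_days=30):
    """Text the user about every contact they haven't reached out to recently."""
    cutoff = datetime.utcnow() - timedelta(days=every_days)
    for contact in contacts:
        if contact["last_contacted"] < cutoff:
            client.messages.create(
                to=user_phone,
                from_="+15550001111",           # your Twilio number (placeholder)
                body=f"It's been a while since you talked to {contact['name']} - "
                     f"why not reach out this week?",
            )
```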
## Inspiration
The loneliness epidemic is a real thing, and you don't get meaningful engagement with others just by liking and commenting on Instagram posts. You get meaningful engagement by having real conversations, whether it's a text exchange, a phone call, or a Zoom meeting. This project was inspired by the idea of reviving weak links in our network as described in *The Defining Decade*:

"Weak ties are the people we have met, or are connected to somehow, but do not currently know well. Maybe they are the coworkers we rarely talk with or the neighbor we only say hello to. We all have acquaintances we keep meaning to go out with but never do, and friends we lost touch with years ago. Weak ties are also our former employers or professors and any other associations who have not been promoted to close friends."

## What it does
This web app helps bridge the divide between wanting to connect with others and actually connecting with others. In our MVP, the web app brings up a card with information on someone you are connected to. Users can swipe right to show interest in reconnecting or swipe left if they are not interested. In this way, the process of finding people to reconnect with is gamified. If both people show interest in reconnecting, you are notified and can now connect! And if one person isn't interested, the other person will never know ... no harm done!

## How we built it
The web app was built using React and deployed with Google Cloud's Firebase.

## Challenges we ran into
We originally planned to use Twitter's API to aggregate data and recommend matches for our demo, but getting the developer account took longer than expected. After getting a developer account, we realized that we didn't use Twitter all that much, so we had no data to display. Another challenge we ran into was that we didn't have a lot of experience building web apps, so we had to learn on the fly.

## Accomplishments that we're proud of
We came into this hackathon with little experience in web development, so it's amazing to see how far we have been able to progress in just 36 hours!

## What we learned
REACT! Also, we learned how to publish a website and how to access APIs!

## What's next for Rekindle
Since our product is an extension or application within an existing social network, our next steps would be to partner with Facebook, Twitter, LinkedIn, or other social media sites. Afterward, we would develop an algorithm to aggregate a user's connections on a given social media site and optimize the card-swiping feature to recommend the people you are most likely to connect with.
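The mutual-interest mechanic above (both people must swipe right before anyone is notified) can be illustrated in a few lines of Python; the data structure here is hypothetical, not the team's Firebase schema.

```python
# Hedged sketch of the mutual-match logic described above.
def record_swipe(swipes, user, target, interested):
    """Store a swipe and report a match only when both users swiped right."""
    swipes[(user, target)] = interested
    if interested and swipes.get((target, user)):
        return f"It's a match: {user} and {target} both want to reconnect!"
    return None   # the other person never learns about a left swipe

swipes = {}
record_swipe(swipes, "alice", "bob", True)
print(record_swipe(swipes, "bob", "alice", True))
```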
## Inspiration
We all deal with nostalgia. Sometimes we miss our loved ones or places we visited and look back at our pictures. But what if we could revolutionize the way memories are shown? What if we said you can relive your memories and mean it literally?

## What it does
retro.act takes in a user prompt such as "I want uplifting 80s music" and then uses sentiment analysis and Cohere's chat feature to find potential songs, out of which the user picks one. Then the user chooses from famous dance videos (such as ones by Michael Jackson). Finally, we either let the user choose an image from their past or let our model match images based on the mood of the music, and implant the dance moves and music into the image(s).

## How we built it
We used Cohere Classify for sentiment analysis and to filter out songs whose mood doesn't match the user's current state. Then we use Cohere's chat and RAG over the database of filtered songs to identify songs based on the user prompt. We match images to music by first generating captions for the images using the Azure Computer Vision API, doing a semantic search using KNN and Cohere embeddings, and then using Cohere Rerank to smooth out the final choices. Finally, we make the image come to life by generating a skeleton of the dance moves using OpenCV and Mediapipe and then using a pretrained model to transfer the skeleton to the image.

## Challenges we ran into
This was the most technical project any of us have ever done, and we had to overcome huge learning curves. Many of us were not familiar with some of Cohere's features, such as Rerank, RAG, and embeddings. In addition, generating the skeleton turned out to be very difficult. Apart from simply generating a skeleton using the standard Mediapipe landmarks, we realized we had to customize which landmarks we were connecting to make it a suitable input for the pretrained model. Lastly, understanding and being able to use the model was a huge challenge. We had to deal with issues such as dependency errors, lacking a GPU, fixing import statements, and deprecated packages.

## Accomplishments that we're proud of
We are incredibly proud of getting a very ambitious project done. While it was already difficult to get a skeleton of the dance moves, manipulating the coordinates to fit our pretrained model's specifications was very challenging. Lastly, we're proud of the amount of experimentation and determination it took to find a working model that could successfully take in a skeleton and output an "alive" image.

## What we learned
We learned about using Mediapipe and manipulating a graph of coordinates depending on the output we need. We also learned how to use pretrained weights and run models from open-source code. Lastly, we learned about various new Cohere features such as RAG and Rerank.

## What's next for retro.act
Expand our database of songs and dance videos to allow for more user options, and build a more accurate indexing algorithm to iterate over and classify the data from the database. We also hope to make the skeleton's motions smoother for more realistic images. Lastly, and this is very ambitious, we hope to build our own model to transfer skeletons to images instead of using a pretrained one.
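The image-to-music matching step above relies on a KNN semantic search over Cohere embeddings. As a hedged illustration of just that step, here is a numpy-only cosine-similarity top-k lookup over precomputed vectors; the random vectors stand in for real embeddings.

```python
# Illustrative sketch: given precomputed embeddings for image captions and for the
# chosen song's mood text, pick the top-k closest captions by cosine similarity.
import numpy as np

def top_k_matches(song_embedding, caption_embeddings, k=3):
    song = np.asarray(song_embedding, dtype=float)
    captions = np.asarray(caption_embeddings, dtype=float)
    sims = captions @ song / (np.linalg.norm(captions, axis=1) * np.linalg.norm(song))
    return np.argsort(-sims)[:k]          # indices of the best-matching images

song_vec = np.random.rand(384)            # stand-ins for real Cohere embeddings
caption_vecs = np.random.rand(10, 384)
print(top_k_matches(song_vec, caption_vecs))
```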
## Inspiration
Today we live in a world that is all online, with the pandemic forcing us to stay home. Because of this, our team and the people around us were forced to rely on video conference apps for school and work. Although these apps function well, there was always something missing, and we were faced with new problems we weren't used to facing. Personally, it was forgetting to mute my mic when going to the door to yell at my dog, accidentally disturbing the entire video conference. For others, it was a lack of accessibility tools that made the experience more difficult. And for some, it was simply being scared of something embarrassing happening during class while it is being recorded, to be posted and seen on repeat! We knew something had to be done to fix these issues.

## What it does
Our app essentially takes over your webcam to give the user more control over what it does and when it does it. The goal of the project is to add all the missing features that we wished were available during all our past video conferences.

Features:

Webcam:

1. Detect when the user is away: this feature automatically blurs the webcam feed when the user walks away from the computer to ensure the user's privacy.
2. Detect when the user is sleeping: we all fear falling asleep on a video call and being recorded by others, so our app detects if the user is sleeping and automatically blurs the webcam feed.
3. Only show the registered user: our app allows the user to train a simple AI face-recognition model so that the webcam feed only shows when they are present. This is ideal for preventing one's children from accidentally walking in front of the camera and putting on a show for all to see :)
4. Display a custom unavailable image: rather than blur the frame, we give the option to choose a custom image to pass to the webcam feed when we want to block the camera.

Audio:

1. Mute the microphone when video is off: this option lets users additionally have the app mute their microphone when the app changes the video feed to block the camera.

Accessibility:

1. ASL subtitles: using another AI model, our app translates your ASL into text, giving mute people another channel of communication.
2. Audio transcriber: this option automatically transcribes everything you say to your webcam feed for anyone to read.

Concentration Tracker:

1. Tracks the user's concentration level throughout their session, making them aware of the time they waste and giving them the chance to change their bad habits.

## How we built it
The core of our app was built with Python, using OpenCV to manipulate the image feed. The AIs used to detect the different visual situations are a mix of Haar cascades from OpenCV and deep learning models that we built on Google Colab using TensorFlow and Keras. The UI of our app was created using Electron with React.js and TypeScript, using a variety of different libraries to help support our app. The two parts of the application communicate using WebSockets from socket.io as well as a synchronized Python thread.

## Challenges we ran into
Damn, where to start haha... Firstly, Python is not a language any of us are too familiar with, so from the start we knew we had a challenge ahead. Our first main problem was figuring out how to hijack the webcam video feed and pass it on to be used by any video conference app, rather than making our app for a specific one. The next challenge we faced was figuring out a method of communication between our front end and our Python code. With none of us having much experience in either Electron or Python, we might have spent a bit too much time on Stack Overflow, but in the end we figured out how to leverage socket.io to allow for continuous communication between the two apps. Another major challenge was making the core features of our application communicate with each other. Since the major parts (speech-to-text, camera feed, camera processing, socket.io, etc.) were mainly running on blocking threads, we had to figure out how to properly do multi-threading in an environment we weren't familiar with. This caused a lot of issues during development, but we ended up with a pretty good understanding near the end and got everything working together.

## Accomplishments that we're proud of
Our team is really proud of the product we have made and has already begun proudly showing it to all of our friends! Considering we all have an intense passion for AI, we are super proud of our project from a technical standpoint, finally getting the chance to work with it. Overall, we are extremely proud of our product and genuinely plan to optimize it further in order to use it within our courses and work conferences, as it is really a tool we need in our everyday lives.

## What we learned
From a technical point of view, our team has learnt an incredible amount over the past few days. Each of us tackled problems using technologies we had never used before that we can now proudly say we understand how to use. For me, Jonathan, it was mainly learning how to work with OpenCV, following a 4-hour tutorial to learn the inner workings of the library and how to apply it to our project. For Quan, it was mainly creating a structure that would allow our Electron app and Python program to communicate without killing performance. Finally, Zhi worked for the first time with the Google API to get our speech-to-text working; he also learned a lot of Python and about multi-threading in Python to set everything up together. Together, we all had to learn the basics of AI in order to implement the various models used within our application and to finally attempt (not a perfect model by any means) to create one ourselves.

## What's next for Boom. The Meeting Enhancer
This hackathon is only the start for Boom, as our team is exploding with ideas!!! We have a few ideas on where to bring the project next. Firstly, we want to finish polishing the existing features in the app. Then we would love to make a marketplace that allows people to choose from any kind of trained AI to determine when to block the webcam feed. This would allow for limitless creativity from us and anyone who would want to contribute!!!!
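One of the core features above is blurring the feed when the user steps away. Below is a hedged sketch of that idea using an OpenCV Haar cascade; the real app also pipes frames to a virtual webcam and talks to Electron over socket.io, which is omitted here.

```python
# Hedged sketch of the "blur the feed when the user is away" feature.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:                        # no face found: user is away -> blur
        frame = cv2.GaussianBlur(frame, (51, 51), 0)
    cv2.imshow("Boom preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```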
## Inspiration
It's easy to zone out in online meetings and lectures, and it's difficult to rewind without losing focus in the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we could just quickly skim through a list of keywords to immediately see what happened?

## What it does
Rewind is an intelligent, collaborative, and interactive web canvas with built-in voice chat that maintains a live-updated list of keywords summarizing the voice chat history. You can see timestamps for the keywords and click on them to reveal the actual transcribed text.

## How we built it
Communications: WebRTC, WebSockets, HTTPS. We used WebRTC, a peer-to-peer protocol, to connect the users through a voice channel, and we used WebSockets to update the web pages dynamically, so users get instant feedback for others' actions. Additionally, a web server is used to maintain stateful information.

Audio transcription and summary: for summarization and live transcript generation, we used Google Cloud APIs - Google Cloud Speech for live transcription and the Natural Language APIs for summarization.

## Challenges we ran into
There were many challenges we ran into when we tried to bring this project to reality. For the backend development, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend. We spent more than 18 hours on it to arrive at a working prototype. In addition, the frontend development was also full of challenges. The design and implementation of the canvas involved much trial and error, and the history-rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team, and we learned a lot from this experience.

## Accomplishments that we're proud of
Despite all the challenges we ran into, we were able to build a working product with many different features. Although the final product is by no means perfect, we had fun working on it, utilizing every bit of intelligence we had. We are proud to have learned many new tools and gotten through all the bugs!

## What we learned
For the backend, the main thing we learned was how to use WebRTC, including client negotiation and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the WebSockets. As for the frontend, we learned to use various JavaScript elements to help develop an interactive client web app. We also learned about event delegation in JavaScript to help with an essential component of the history page of the frontend.

## What's next for Rewind
We imagine a mini dashboard that also shows other live-updated information, such as the sentiment and a summary of the entire meeting, as well as the ability to examine information on a particular user.
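The live keyword list described above could be maintained in many ways; this is only an illustrative Python sketch that counts non-stopword terms per transcribed chunk and logs them with a timestamp, not the team's Google Cloud summarization pipeline.

```python
# Illustrative keyword logger: frequency-based, not the actual NLP API summarization.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "it", "we", "i", "should"}

def update_keywords(keyword_log, transcript_chunk, timestamp, top_n=3):
    words = [w.strip(".,!?").lower() for w in transcript_chunk.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    for word, _ in counts.most_common(top_n):
        keyword_log.append({"keyword": word, "time": timestamp,
                            "source": transcript_chunk})
    return keyword_log

log = []
update_keywords(log, "We should finalize the demo video before the deadline", "00:12:03")
print(log)
```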
## Inspiration
Every student knows the struggle that is course registration. You're tossed into an unfamiliar system with little advice and all these vague rules and restrictions to follow. All the while, courses are filling up rapidly. Far too often, students (often underclassmen) are stuck without the courses they need. We were inspired by these pain points to create Schedge, an automatic schedule generator.

## What it does
Schedge helps freshmen build their schedule by automatically selecting three out of a four-course load. The three courses consist of a Writing the Essay course (the mandatory writing seminar for NYU students), a Core course like Quantitative Reasoning, and a course in the major of the student's choosing. Furthermore, we provide sophomores with potential courses to take after their freshman year, whether that's a follow-up to Writing the Essay or a more advanced major course.

## How we built it
We wrote the schedule generation algorithm in Rust, as we needed it to be blazing fast and well designed. The front end is React with TypeScript and Material UI. The algorithm, while technically NP-complete for all courses, uses some shortcuts and heuristics to allow for fast schedule generation.

## Challenges we ran into
We had some trouble with the data organization, especially with structuring courses with their potential meeting times.

## Accomplishments that we're proud of
Using a more advanced systems language such as Rust in a hackathon. Also, our project has immediate real-world applications at NYU. We plan on extending it and providing it as a service.

## What we learned
Courses have a lot of different permutations and complications.

## What's next for Schedge
More potential majors and courses! Features for upperclassmen!
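Schedge's generator is written in Rust, so the snippet below is only a Python illustration of the core constraint any such generator must respect: selected sections must not overlap in meeting time. The course data is made up.

```python
# Illustrative greedy selection under a no-overlap constraint (not Schedge's Rust code).
def overlaps(a, b):
    """Meeting times as (day, start_minute, end_minute) tuples."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def compatible(existing_meetings, new_section):
    return all(not overlaps(m, n)
               for m in existing_meetings for n in new_section["meetings"])

picked = []
for section in [
    {"name": "Writing the Essay", "meetings": [("Mon", 540, 630)]},
    {"name": "Quantitative Reasoning", "meetings": [("Mon", 600, 690)]},  # clashes
    {"name": "Intro to CS", "meetings": [("Tue", 540, 630)]},
]:
    if compatible([m for s in picked for m in s["meetings"]], section):
        picked.append(section)

print([s["name"] for s in picked])   # the clashing section is skipped
```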
## Inspiration
We were inspired by Plaid's challenge to "help people make more sense of their financial lives." We wanted to create a way for people to easily view where they are spending their money so that they can better understand how to conserve it. Plaid's API allows us to see financial transactions, and the Google Maps API serves as a great medium to display the flow of money.

## What it does
GeoCash starts by prompting the user to log in through the Plaid API. Once the user is authorized, we are able to send requests for transactions, given the public\_token of the user. We then display the locations of these transactions on the Google Maps API.

## How I built it
We built this using JavaScript, including the Meteor, React, and Express frameworks. We also utilized the Plaid API for the transaction data and the Google Maps API to display the data.

## Challenges I ran into
Data extraction/responses from the Plaid API, InfoWindow displays in Google Maps.

## Accomplishments that I'm proud of
Successfully implemented a Meteor web app and integrated two different APIs into our product.

## What I learned
Meteor (Node.js and React.js), the Plaid API, the Google Maps API, the Express framework.

## What's next for GeoCash
We plan on integrating real user information into our web app; we are currently only using the sandbox user, which has a very limited scope of transactions. We would like to implement differently sized displays on the Maps API to represent the amount of money spent at each location. We would like to display different colors based on the time of day, which was not included in the sandbox user. We would also like to implement multiple different user displays at the same time, so that we can better describe the market based on the different categories of transactions.
## Inspiration
In the modern world, with a plethora of distractions and opportunities to spend money frivolously, the concept of saving has been neglected. Being college students ourselves, we understand the importance of every penny spent or saved. Using spreadsheets to maintain balances every month is cumbersome and can get messy. Hence, our team has developed an app called Cache to make saving fun and rewarding.

## What it does
Cache provides users with multiple saving strategies based on their predefined goals. We reward them for reaching those goals with offers and discounts that match the category of spending for which they plan to save money. The app keeps track of their overall expenditures and automatically classifies spending into different categories. It also maps the amount saved in each category towards their goals. If the user ends up spending more in any of the predefined spending categories (essential and non-essential), then we suggest methods to reduce spending. Moreover, the app provides relevant and rewarding offers to users based on timely meeting of their set goals. The offers may contain access to additional airline mileage points, cashback offers, and other discount coupons, to name a few.

## How we built it
We used React to build the frontend and Firebase services for the backend. All background tasks were carried out using Cloud Functions, and Cloud Firestore was used to store the app data.

## Challenges we ran into
Initially, we had planned to use Plaid to connect real bank accounts and fetch transaction history from them. However, Plaid requires the app to go through a verification process that takes several days. Therefore, we decided to populate our database with dummy data for the time being.

## Accomplishments that we're proud of
We are proud of the fact that our app promotes healthy saving habits amongst people of all ages. Our app is not restricted to any particular demographic or income level. It rewards users in return for meeting their goals, which forms a mutually nurturing relationship and creates long-term financial well-being. We hope that users can improve their budgeting skills and financial habits so that apps like Cache will not be required in the future.

## What's next for Cache
We plan to use Cache to provide investing tips based on the user's financial budget and investing knowledge. This includes investing in the stock market, mutual funds, cryptocurrency, and exchange-traded funds (ETFs). We can notify our users to pay their dues on time so that they don't incur any additional costs (penalties). In addition, we plan to provide a credit journey report, which would allow users to understand how their credit score has dropped or improved in the last few months. Improving the user's knowledge of their credit history would enable them to sensibly choose the kind and amount of debt they intend to take on.
## Inspiration
Resumes are boring, and we wanted something that would help us find a good job and develop our careers.

## What it does
So far, it is a bundle of webpages in HTML.

## How we built it
With teamwork.

## What we learned
So much that you could call it beautiful.

## What's next for Resume Customizer
We plan to learn machine vision and AI to scan for keywords in the future.
## Inspiration
JetBlue challenge of YHack

## What it does
Website with sentiment analysis of JetBlue

## How I built it
Python, data scraping, used TextBlob for sentiment analysis

## Challenges I ran into
Choosing between TextBlob and NLTK

## Accomplishments that I'm proud of
Having a finished product

## What I learned
How to do sentiment analysis

## What's next for FeelingBlue
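The writeup says the sentiment analysis came from TextBlob. A minimal sketch of that call follows; the sample review text and the polarity thresholds are made up for illustration.

```python
# Minimal TextBlob sentiment sketch; thresholds are arbitrary choices.
from textblob import TextBlob

def review_sentiment(text):
    polarity = TextBlob(text).sentiment.polarity   # -1.0 (negative) .. 1.0 (positive)
    if polarity > 0.1:
        return "positive", polarity
    if polarity < -0.1:
        return "negative", polarity
    return "neutral", polarity

print(review_sentiment("The JetBlue crew was friendly and the flight left on time."))
```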
## Inspiration
We got our inspiration from looking at the tools provided to us in the hackathon. We saw that we could use the Google APIs effectively when analyzing the sentiment of customer reviews on social media platforms. With the wide range of possibilities it gave us, we got the idea of using programs to see the data visually.

## What it does
JetBlueByMe is a program which takes over 16,000 reviews from TripAdvisor and hundreds of tweets from Twitter to present them in a graphable way. The first representation is an effective yet simple word cloud, which shows more frequently used adjectives in a larger size. The other is a bar graph showing which words appear most consistently.

## How we built it
The first step was to scrape data off multiple websites. To do this, a web scraping robot by UiPath was used. This saved a lot of time and allowed us to focus on other aspects of the program. For Twitter, Python had to be used in conjunction with the Beautiful Soup library to extract the tweets and hashtags. This was only possible after receiving permission 10 hours after applying to Twitter for API use. The Google Sentiment API and Syntax API were used to create the final product. The Syntax API helped extract the adjectives from the reviews so we could show a word cloud. To display the word cloud, the programming was done in R, as it is an effective language for data manipulation.

## Challenges we ran into
We were initially unable to use UiPath to scrape Twitter because the page didn't have a next button, so the robot did not continue on its own. This was fixed using Beautiful Soup in Python. Also, when trying to extract the adjectives, the compiling was very slow, causing us to fall back about 2 hours. None of us knew the ins and outs of the web, hence it was a challenging problem for us.

## Accomplishments that we're proud of
We are happy about finding an effective way to scrape words using both UiPath and Beautiful Soup. Also, we weren't aware that Google provided an API for sentiment analysis, so getting access to that was a big plus. We learned how to utilize our tools and incorporated them into our project. We also used Firebase to help store data in the cloud, so we know it's secure.

## What we learned
Word scraping was a big thing that we all learned, as it was new to all of us. We had to research extensively before applying any idea. Most of the group did not know how to use the language R, but we understood the basics by the end. We also learned how to set up Firebase and a Google Cloud service, which will definitely be a big asset in our future programming endeavours.

## What's next for JetBlueByMe
Our web scraping application can be optimized, and we plan on setting up a live feed to show review sentiment in real time. With time and resources, we would be able to implement that.
## Inspiration
We wanted to find a way to make transit data more accessible to the public as well as provide fun insights into their transit activity. As we've seen with Spotify Wrapped, people love seeing data about themselves. In addition, we wanted to develop a tool to help city organizers make data-driven decisions on how they operate their networks.

## What it does
Transit Tracker is simultaneously a tool for operators to analyze their network and an app for users to learn about their own activities and how they lessen their impact on the environment. For network operators, Transit Tracker allows them to manage data for a system of riders and individual trips. We developed a visual map that shows the activity of specific sections between train stations. For individuals, we created an app that shows data from their own transit activities. This includes gallons of gas saved, time spent riding, and their most visited stops.

## How we built it
We primarily used Palantir Foundry to provide a platform for our back-end data management. We used objects within Foundry to facilitate dataset transformation using SQL and Python, and utilized Foundry Workshop to create a user interface to display the information.

## Challenges we ran into
Working with the GeoJSON file format proved to be particularly challenging, because it is semi-structured data and not easily compatible with the datasets we were working with. Another large challenge we ran into was learning how to use Foundry. This was our first time using the software, so we had to learn the basics before we could even begin tackling our problem.

## Accomplishments that we're proud of
With TreeHacks being the first hackathon for all of us, we're proud of making it to the finish line and building something that is both functional and practical. Additionally, we're proud of the skills we've gained from learning to deal with large datasets, as well as our ability to learn and use Foundry in the short time frame we had.

## What we learned
We learned just how much we take everyday data analysis for granted. The amount of data being processed every day is unreal. We only tackled a small level of data analysis, and even we had a multitude of difficult issues to deal with. The understanding we've gained from dealing with data is valuable, and the skill of using a completely foreign application to build something in such a short amount of time has been truly insightful.

## What's next for Transit Tracker
The next step for Transit Tracker would be to translate our data (which is being generated through objects) onto a visual map where the routes constantly change in response to the data being collected. Being able to visually represent that change on a graph would be a valuable step to achieve, as it would mean we are working our way towards a fully functional application.
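The rider statistics mentioned above (gas saved, time riding, most visited stops) reduce to a simple aggregation. This sketch assumes a flat list of trip records and an average fuel-economy constant; neither comes from the team's Foundry datasets.

```python
# Hedged sketch of the per-rider statistics; trip shape and MPG value are assumptions.
from collections import Counter

AVG_CAR_MPG = 25.0   # assumed average fuel economy for the "gas saved" estimate

def rider_stats(trips):
    """Each trip: {'miles': float, 'minutes': float, 'start': str, 'end': str}."""
    total_miles = sum(t["miles"] for t in trips)
    stops = Counter(s for t in trips for s in (t["start"], t["end"])) 
    return {
        "gallons_saved": round(total_miles / AVG_CAR_MPG, 2),
        "hours_riding": round(sum(t["minutes"] for t in trips) / 60, 1),
        "top_stops": stops.most_common(3),
    }

print(rider_stats([{"miles": 12, "minutes": 35, "start": "Downtown", "end": "Airport"},
                   {"miles": 5, "minutes": 15, "start": "Downtown", "end": "Campus"}]))
```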
## Inspiration
An article reported that about 86 per cent of Canada's plastic waste ends up in landfill, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin.

## What it does
The smart bin is able, using object detection, to sort plastic, glass, metal, and paper. All around Canada we see trash bins split into different types of trash. It sometimes becomes frustrating, and this inspired us to build a solution that doesn't require us to think about the kind of trash being thrown out. The Waste Wizard takes any kind of trash you want to throw away, uses machine learning to detect which bin it should be disposed in, and drops it into the proper disposal bin.

## How we built it
Using recyclable cardboard, used DC motors, and 3D-printed parts.

## Challenges we ran into
We had to train our model from the ground up, even gathering all the data ourselves.

## Accomplishments that we're proud of
We managed to get the whole infrastructure built and all the motors and sensors working.

## What we learned
How to create and train a model, 3D-print gears, and use sensors.

## What's next for Waste Wizard
A smart bin able to sort the 7 types of plastic.
## Inspiration
The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw recyclable items in the trash. Additionally, the sheer number of restrictions related to what items can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental, since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recyclable objects with machine learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics, like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability and offering an additional source of motivation to recycle.

## What it does
RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid in the direction of one compartment or another depending on whether the AI model determines the object is recyclable or not. Once the object slides into the compartment, the lid re-aligns itself and prepares for the next piece of waste. Ultimately, RecyclAIble autonomously helps people recycle as much as they can and waste less, without them doing anything differently.

## How we built it
The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a Servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64, and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and inputs it into a TensorFlow convolutional neural network to identify whether the object is recyclable or not. This result is then stored in an SQLite database and returned to the hardware. Based on the AI model's analysis, the Servo motor on the Raspberry Pi flips the lid one way or the other, allowing the waste item to slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind.css, and React. This interface provides the user with insight into their current recycling statistics and how they compare to the nationwide averages for recycling.

## Challenges we ran into
The prototype had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid to be spun by a single Servo motor and to get the Logitech camera propped up for a top-down view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources, and research. Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble.

## Accomplishments that we're proud of
We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies that were new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help keep our environment clean.

## What we learned
First and foremost, we learned just how big a problem under-recycling is in America and throughout the world, and how important recycling is. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to recycling. The hackathon motivated us to learn a lot more about our respective technologies - whether it was new errors or desired functions, new concepts and ideas had to be introduced to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals.

## What's next for RecyclAIble
RecyclAIble has a lot of potential as far as development goes. RecyclAIble's AI can be improved with more training images of more varied items of trash, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality, like dates, tracking features, trends, and the weights of trash, that expand on the existing information and capabilities offered. And we're already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware and sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come.
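The writeup describes the Pi sending a base64-encoded image to a Flask server that runs a TensorFlow CNN. Here is a hedged sketch of what such an endpoint could look like; the model filename, input size, and label order are assumptions, not the team's actual code.

```python
# Hedged sketch of a Flask route that decodes a base64 image and classifies it.
import base64, io
import numpy as np
import tensorflow as tf
from PIL import Image
from flask import Flask, jsonify, request

app = Flask(__name__)
model = tf.keras.models.load_model("recyclable_cnn.h5")   # hypothetical model file
LABELS = ["recyclable", "trash"]                           # assumed label order

@app.route("/classify", methods=["POST"])
def classify():
    raw = base64.b64decode(request.json["image"])
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img) / 255.0, axis=0)
    probs = model.predict(batch)[0]
    return jsonify({"label": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```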
## Inspiration
We are designers from Sheridan College who wanted to create a game with social impact. When we were brainstorming issues we could tackle, we wanted to make an educational game that didn't take away any of the fun factor. A theme that caught our attention was **environment**. Did you know that Toronto is the worst offender in recycling contamination, with a whopping 26% rate? Or that 15% of the average homeowner's garbage is recyclable? In 2017, an estimated 55,000 tonnes of non-recyclable material was going into blue bins! This is costing Toronto **millions of dollars** because people don't know any better.

Some of the most common waste mistakes are:

* Throwing out toys/appliances in the recycling bin thinking that someone will reuse them
* Throwing out batteries in the garbage
* Throwing out coffee cups in the blue bins when they belong in the garbage
* ...and the list goes on!

The reason this is happening is a **lack of knowledge** - after all, there's no government official telling us where to throw our garbage. Current products in the market, such as Waste Wizard or other waste management games and tools, rely on people being proactive, but we put fun first so that learning comes naturally. We made it our mission to start a movement from the bottom up - and it starts with children becoming more educated on where waste really belongs. Our game teaches children about some of the most common recycling and garbage mistakes you can make, as well as alternatives such as donating. Join Pandee and Pandoo on their journey to sort through the 6ix's waste and put it in its place!

## What it does
Using the **Nintendo Joy-Cons**, players control Pandee and Pandoo to sort through the trash that has landed in their backyard. The waste can be sorted into 5 different receptacles, and players will need to use their wits to figure out which container it belongs in. The goal is to use teamwork to get the most points possible and have the cleanest lawn while minimizing their impact on the environment.

## How we built it
We used Unity and C# to bring it all together. Additionally, we used the Joy-Con library to integrate the Nintendo Switch controllers into the game.

## Challenges we ran into
We had trouble texturing the raccoon at the beginning because it didn't map properly onto the mesh. It was also difficult to integrate the Nintendo motion controls into the game design. It was the first time our team used them as our hardware, and it proved to be difficult.

## Accomplishments that we're proud of
We managed to finish it without major bugs! It works and it has a lot of different emergent designs. The game itself feels like it has a lot of potential. This game is a result of our 2 years of schooling - we used everything we learned to create this game. Shoutout to our professors and classmates.

## What we learned
Tibi: I learned that coffee cups go into the trash - I used to think they go in the recycling. We also learned how to use the Nintendo Joy-Cons.

## What's next for Recoon
We want to add secondary mechanics to the game to make it more interesting and flesh out the user experience.
## Inspiration
Our inspiration comes from our own experiences as programmers, where we realized that it was sometimes difficult (and a productivity drain) to move our right hands between the keyboard and the mouse. We wondered whether there was any possibility of redesigning the mouse to work without the need to move our hands from the keyboard.

## What it does
nullHands utilizes a variety of measurements to provide users with an accurate mouse movement system. First, we utilize a gyroscope to capture the user's head movements, mapping them to mouse movements on the screen. Then, we monitor the user's eyes, mapping the left and right eyes to the respective left and right mouse clicks.

## How we built it
We built this using iOS Swift 3 to monitor the gyroscope. We then used Python with sockets to run the backend, as well as the OpenCV and mouse movement libraries.

## Challenges we ran into
We did not have the hardware we needed, so we had to improvise with an iPhone and a headband.

## Accomplishments that we're proud of
We are proud of managing to create a working prototype in under 24 hours.

## What we learned
We learned about sockets and direct communication between iOS and a computer.

## What's next for nullHands
We are really passionate about nullHands and think this is a project that can definitely help a lot of people. Therefore, we plan on continuing to work on nullHands, improving and adding functionality so that we can one day release it as a product and everyone can experience nullHands.
# The Guiding Hand

## Are things becoming difficult to understand? Why not use The Guiding Hand?

## The Problem
Ever since the onset of COVID, the world has been depending heavily on various forms of online communication in order to keep the flow of work going. For the first time ever, we experienced teaching via Zoom, Microsoft Teams, and various other platforms. Understanding concepts without visual representation has never been easy, and not all teachers or institutions can afford Edupen, iPads, or other annotation software.

## What it does
Our product aims at building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces). It is an alternative user interface for providing real-time data to a computer instead of typing with keys, thereby reducing the amount of effort required to communicate over these platforms. When a user opens The Guiding Hand's website, they see a button. On clicking the button, they are led to our application. The application uses hand gestures to perform various functions:

* 1 finger to draw on the screen.
* 2 fingers to change pen color or select the eraser.
* 3 fingers to take a screenshot.
* 4 fingers to clear the screen.

## How we built it
We used React to build the site, and Flask and Python libraries such as mediapipe, opencv, numpy, and pyscreenshot to build the interface.

## Challenges we ran into
Our biggest challenge was integrating our Python application with our React.js website. To overcome this issue, we created a web server using Flask.

## Accomplishments that we're proud of
We are very proud of the fact that we came up with a fully functional site and application in such a short duration of time.

## What we learned
We learned how to create a site in React.js and link it to a Flask server that opens up our application, which allows a user to draw on the screen using just hand gestures.

## What's next for The Guiding Hand
* Adding more hand gestures for more features.
* Allowing multiple people to draw on the same screen simultaneously.
* Annotating over a shared screen.
* Converting hand-drawn text on-screen to text and saving it in a document.
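The gesture mapping above boils down to counting raised fingers with Mediapipe and dispatching an action. The following is only a hedged sketch of that idea, not the team's exact implementation; thumb handling and thresholds are simplified, and the action names are taken from the list above.

```python
# Hedged sketch: count raised fingers with Mediapipe and map the count to an action.
import cv2
import mediapipe as mp

ACTIONS = {1: "draw", 2: "change colour / eraser", 3: "screenshot", 4: "clear screen"}
hands = mp.solutions.hands.Hands(max_num_hands=1)

def count_raised_fingers(frame_bgr):
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return 0
    lm = result.multi_hand_landmarks[0].landmark
    tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]       # index..pinky landmark ids
    return sum(lm[t].y < lm[p].y for t, p in zip(tips, pips))  # y grows downward

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(ACTIONS.get(count_raised_fingers(frame), "no gesture"))
cap.release()
```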
## Inspiration
Due to a lot of work and family problems, people often forget to take care of their health and diet. Common health problems people face nowadays are blood pressure (BP) issues, heart problems, and diabetes. Many people also face mental health problems due to studies, jobs, or other issues. This project can help people find out about their health problems. It also helps people recycle items easily, as items are divided into 12 different classes, and it helps people who do not have any knowledge of plants by predicting whether a plant has a disease or not.

## What it does
On the Garbage page, when we upload an image, it classifies which kind of garbage it is, which helps people recycle more easily. On the Mental Health page, when we answer some questions, it predicts whether we are facing some kind of mental health issue. The Health page is divided into three parts: one page predicts whether you have heart disease, the second predicts whether you have diabetes, and the third predicts whether you have blood pressure problems. The COVID-19 page classifies whether you have COVID or not. The Plant\_Disease page predicts whether a plant has a disease or not.

## How we built it
I built it using Streamlit and OpenCV.

## Challenges we ran into
Deploying the website to Heroku was very difficult, because it is not something I generally do. Most of this was new to us except for deep learning and ML, so it was very difficult overall due to the time restraint. The overall logic, and figuring out how we should calculate everything, was difficult to determine within the time limit. Overall, time was the biggest constraint.

## Accomplishments that we're proud of

## What we learned
TensorFlow, Streamlit, Python, HTML5, CSS3, OpenCV, machine learning, deep learning, and using different Python packages.

## What's next for Arogya
## Inspiration:
The developer of the team was inspired by his laziness.

## What it does:
The script opens a web browser and lets you log in to your Tinder account. Then, it automates the swiping-right process for a set amount of time.

## How we built it:
We developed a script with Python and Selenium.

## Challenges we ran into:
Getting the script to identify and respond to other activity that happens on Tinder, such as pop-ups.

## Accomplishments that we're proud of
That it works (most of the time)!

## What we learned
How to implement the Selenium framework with Python, and the implicit and explicit waits necessary for all the components to load.

## What's next for Lazy seduction
* Being able to respond to matches using generative AI
* Receiving notifications of matches on your cellphone
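Below is a minimal sketch of the Selenium auto-swipe loop described above; the manual login step and the XPath selector are placeholders, since Tinder's markup changes and the post doesn't include the team's actual selectors or waits.

```python
# Hedged sketch of an auto-swipe loop; selector and timings are placeholders.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://tinder.com")
input("Log in manually, then press Enter to start swiping...")

end_time = time.time() + 10 * 60           # swipe for ten minutes
while time.time() < end_time:
    try:
        like_button = driver.find_element(By.XPATH, "//button[.//span[text()='Like']]")
        like_button.click()
    except Exception:
        pass                                # pop-up or missing button: skip this cycle
    time.sleep(2)                           # simple pacing so elements can load

driver.quit()
```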
## Inspiration
Using the Tinder app has three main drawbacks. Firstly, why focus on someone's physical appearance when instead you can be less shallow and focus on their bio, to really get to know them as a person? Secondly, the app is more of a solo activity - why not include your friends in your lovemaking decisions? Thirdly, what if the user is vision impaired? Making Tinder accessible to this group of users was central to our goal of making Tinder the best it can be for all. We set out to fix these issues for Valentine's Day 2016.

## What it does
Alexa Tinder lets you Tinder using voice commands, asking Tinder for details about the user, such as their bio, job title, and more. From there, you can choose to either swipe right or swipe left with similar voice commands, and go through your whole Tinder card stack!

## How we built it
We used the Alexa API and AWS Lambda functions to set up the Alexa environment, and then used Python to interact with the API. For the image descriptions, we used Clarifai.

## Challenges we ran into
Learning the Alexa API and AWS Lambda functions was the main drawback, as they had some funky quirks when used with Python! Check it out on [GitHub](https://github.com/Pinkerton/alexa-tinder).
## Inspiration
There are many scary things in the world, ranging from poisonous spiders to horrifying ghosts, but none of these things scare people more than the act of public speaking. Over 75% of humans suffer from a fear of public speaking, but what if there was a way to tackle this problem? That's why we created Strive.

## What it does
Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback by analyzing the voice delivery of a person's presentation. Once you have recorded your speech, Strive will calculate various performance variables such as voice clarity, filler word usage, voice speed, and voice volume. Once the performance variables have been calculated, Strive renders them in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. On the settings page, users have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive will also send the feedback results to the user via text message, allowing them to easily share or forward an analysis.

## How we built it
Using the collaboration tool Figma, we designed wireframes of our mobile app. We used services such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity, we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit to calculate the performance variables and used stdlib's cloud function features for text messaging.

## Challenges we ran into
Given that our skillsets come from technical backgrounds, one challenge we ran into was developing a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user.

## Accomplishments that we're proud of
Creating a fully functional mobile app, while leveraging an unfamiliar technology stack, that people can use to start receiving actionable feedback on improving their public speaking skills. Now anyone can use our app to improve their public speaking skills and conquer their fear of public speaking.

## What we learned
Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs.

## What's next for Strive - Your Personal AI Speech Trainer
* Model the voices of famous public speakers for a more realistic experience when giving personal feedback (using the Lyrebird API).
* The ability to calculate more performance variables for an even better analysis and more detailed feedback.
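Two of the performance variables above (voice speed and filler-word usage) can be computed directly from a transcript and its duration. This sketch is illustrative only, since the real app derives its transcript and metrics through IBM Watson's speech toolkit; the default filler list is an assumption.

```python
# Illustrative metrics from a transcript: words per minute and filler-word usage.
import re

DEFAULT_FILLERS = {"um", "uh", "like", "basically", "actually"}

def speech_metrics(transcript, duration_seconds, custom_fillers=()):
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    fillers = DEFAULT_FILLERS | set(custom_fillers)     # users can add their own
    filler_count = sum(w in fillers for w in words)
    return {
        "words_per_minute": round(len(words) / (duration_seconds / 60), 1),
        "filler_words": filler_count,
        "filler_ratio": round(filler_count / max(len(words), 1), 3),
    }

print(speech_metrics("Um, so basically our product is, like, really fast", 12))
```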
losing
## Inspiration Have you ever lost a valuable item that’s really important to you, only for it to never be seen again? * Over **60% of people** have lost something in their lifetime. * In the **US alone**, over **400 million items** are lost and found every year. * The average person loses up to **nine items** every day. The most commonly lost items include **wallets, keys, and phones**. While some are lucky enough to find their lost items at home or in their car, those who lose things in public often never see them again. The good news is that most places have a **“lost and found”** system, but the problem? It's **manual**, requiring you to reach out to someone to find out if your item has been turned in. ## What it does **LossEndFound** solves this problem by **automating the lost and found process**. It connects users who report lost items with those who find them. * Whether you're looking for something or reporting something found, the system uses **AI-powered vector similarity search** to match items based on descriptions provided by users. ## How we built it We built **LossEndFound** to make reconnecting lost items with their owners **seamless**: * **FastAPI** powers our backend for its speed and reliability. * **Cohere embeddings** capture the key features of each item. * **ChromaDB** stores and performs vector similarity searches, matching lost and found items based on cosine similarity. * On the frontend, we used **React.js** to create a user-friendly experience that makes the process quick and easy. ## Challenges we ran into As first-time hackers, we faced a few challenges: * **Backend development** was tough, especially when handling **numpy array dimensions**, which slowed us down during key calculations. * **Frontend-backend integration** was a challenge since it was our first time bridging these systems, making the process more complex than expected. ## Accomplishments that we're proud of We’re proud of how we pushed ourselves to learn and integrate new technologies: * **ChromaDB**, **Cohere**, and **CORS** were all new tools that we successfully implemented. * Overcoming these challenges showed us what’s possible when we **step outside our comfort zone** and **collaborate effectively**. ## What we learned We learned several key lessons during this project: * The importance of **clear requirements** to guide development. * How to navigate new technologies under pressure. * How to **grow, adapt, and collaborate** as a team to tackle complex problems. ## What's next for LossEndFound Moving forward, we plan to: * Add **better filters** for more precise searches (by date, location, and category). * Introduce **user profiles** to track lost/found items. * Streamline the process for reporting or updating item statuses. These improvements will make the app even more **efficient** and **user-friendly**, keeping the focus on **simplicity and effectiveness**.
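A minimal sketch of the matching flow described above, assuming Cohere's Python SDK and ChromaDB; the API key, embedding model name, and item descriptions are placeholders, and the exact SDK call shapes can vary between versions.

```python
# Embed found-item descriptions, store them in ChromaDB, and match a lost-item
# query by cosine similarity. Model name and key are placeholders.
import cohere
import chromadb

co = cohere.Client("YOUR_COHERE_API_KEY")
chroma = chromadb.Client()
collection = chroma.get_or_create_collection(
    name="found_items", metadata={"hnsw:space": "cosine"}  # cosine similarity index
)

def embed(texts, input_type):
    return co.embed(texts=texts, model="embed-english-v3.0", input_type=input_type).embeddings

# Found items reported by users get embedded and stored.
found = ["black leather wallet found near the library", "silver house key on a red lanyard"]
collection.add(ids=["f1", "f2"], documents=found, embeddings=embed(found, "search_document"))

# A lost-item description becomes a query against the same vector space.
query = "I lost my wallet, it's black and made of leather"
results = collection.query(query_embeddings=embed([query], "search_query"), n_results=1)
print(results["documents"][0])  # best-matching found-item description
```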
## Inspiration We hear all the time that people want a dog but don't want the commitment, and yet there are still issues with finding a pet sitter! We flipped the 'tinder'-esque mobile app experience around to reflect just how many people are desperate and willing to spend time with a furry friend! ## What it does Our web app allows users to create an account and see everyone who is currently looking to babysit a cute puppy or trying to find a pet sitter so that they can go away for vacation! The app also allows users to engage in chat messages so they can find a perfect weekend getaway for their dogs. ## How we built it Our web app is primarily a React app on the front end, and we used a combination of individual programming and extreme programming when we hit walls. Ruby on Rails and SQLite run the back end, so with a team of four we had two people manning the keyboards for the front end and the other two working diligently on the back end. ## Challenges we ran into GITHUB!!!! Merging, pushing, pulling, resolving, crying, fetching, syncing, sobbing, approving, etc etc. We put our repo through a stranglehold of indecipherable commits more than a few times and it was our greatest rival. ## Accomplishments that we're proud of IT WORKS! We're so proud to build an app that looks amazing and also communicates on a sophisticated level. The user experience is cute and delightful, but the complexities are still baked in, like session tokens and password hashing (plus salt!). ## What we learned The only way to go fast is to go well. The collaboration phase with GitHub ate up a large part of our time every couple of hours and there was nobody to blame but ourselves. ## What's next for Can I Borrow Your Dog We think this is a pretty cool little app that could use a LARGE refactoring. Whether we keep in touch as a group and maintain this project to spruce up our resumes is definitely being considered. We'd like to show our friends and family how much we accomplished in just 36 hours (straight lol)!
## Why We Created **Here** As college students, one question that we catch ourselves asking over and over again is – “Where are you studying today?” One of the most popular ways for students to coordinate is through texting. But messaging people individually can be time consuming and awkward for both the inviter and the invitee—reaching out can be scary, but turning down an invitation can be simply impolite. Similarly, group chats are designed to be a channel of communication, and as a result, a message about studying at a cafe two hours from now could easily be drowned out by other discussions or met with an awkward silence. Just as Instagram simplified casual photo sharing from tedious group-chatting through stories, we aim to simplify casual event coordination. Imagine being able to efficiently notify anyone from your closest friends to lecture buddies about what you’re doing—on your own schedule. Fundamentally, **Here** is an app that enables you to quickly notify either custom groups or general lists of friends of where you will be, what you will be doing, and how long you will be there for. These events can be anything from an open-invite work session at Bass Library to a casual dining hall lunch with your philosophy professor. It’s the perfect dynamic social calendar to fit your lifestyle. Groups are customizable, allowing you to organize your many distinct social groups. These may be your housemates, Friday board-game night group, fellow computer science majors, or even a mixture of them all. Rather than having exclusive group chat plans, **Here** allows for more flexibility to combine your various social spheres, casually and conveniently forming and strengthening connections. ## What it does **Here** facilitates low-stakes event invites between users who can send their location to specific groups of friends or a general list of everyone they know. Similar to how Instagram lowered the pressure involved in photo sharing, **Here** makes location and event sharing casual and convenient. ## How we built it UI/UX Design: Developed high fidelity mockups on Figma to follow a minimal and efficient design system. Thought through user flows and spoke with other students to better understand needed functionality. Frontend: Our app is built on React Native and Expo. Backend: We created a database schema and set up in Google Firebase. Our backend is built on Express.js. All team members contributed code! ## Challenges Our team consists of half first years and half sophomores. Additionally, the majority of us have never developed a mobile app or used these frameworks. As a result, the learning curve was steep, but eventually everyone became comfortable with their specialties and contributed significant work that led to the development of a functional app from scratch. Our idea also addresses a simple problem which can conversely be one of the most difficult to solve. We needed to spend a significant amount of time understanding why this problem has not been fully addressed with our current technology and how to uniquely position **Here** to have real change. ## Accomplishments that we're proud of We are extremely proud of how developed our app is currently, with a fully working database and custom frontend that we saw transformed from just Figma mockups to an interactive app. It was also eye opening to be able to speak with other students about our app and understand what direction this app can go into. 
## What we learned Creating a mobile app from scratch—from designing it to getting it pitch ready in 36 hours—forced all of us to accelerate our coding skills and learn to coordinate together on different parts of the app (whether that is dealing with merge conflicts or creating a system to most efficiently use each other’s strengths). ## What's next for **Here** One of **Here’s** greatest strengths is the universality of its usage. After helping connect students with students, **Here** can then be turned towards universities to form a direct channel with their students. **Here** can provide educational institutions with the tools to foster intimate relations that spring from small, casual events. In a poll of more than sixty university students across the country, most students rarely checked their campus events pages, instead planning their calendars in accordance with what their friends are up to. With **Here**, universities will be able to more directly plug into those smaller social calendars to generate greater visibility over their own events and curate notifications more effectively for the students they want to target. Looking at the wider timeline, **Here** is perfectly placed at the revival of small-scale interactions after two years of meticulously planned agendas, allowing friends who have not seen each other in a while casually, conveniently reconnect. The whole team plans to continue to build and develop this app. We have become dedicated to the idea over these last 36 hours and are determined to see just how far we can take **Here**!
losing
## Inspiration Inspired by [SIU Carbondale's Green Roof](https://greenroof.siu.edu/siu-green-roof/), we wanted to create an automated garden watering system that would help address issues ranging from food deserts to lack of agricultural space to storm water runoff. ## What it does This hardware solution takes in moisture data from soil and determines whether or not the plant needs to be watered. If the soil's moisture is too low, the valve will dispense water and the web server will display that water has been dispensed. ## How we built it First, we tested the sensor and determined the boundaries between dry, damp, and wet based on the sensor's output values. Then, we took the boundaries and divided them by percentage soil moisture. Specifically, the sensor measures the conductivity of the material around it, so water, being the most conductive, had the highest values and air, being the least conductive, had the lowest. Soil falls in the middle, and the moisture ranges were defined by the pure-air and pure-water boundaries. From there, we laid out the hardware setup: the sensor connected to an Arduino UNO microcontroller, which is connected to a Raspberry Pi 4 controlling a solenoid valve that releases water when the soil moisture reading is less than 40% wet. ## Challenges we ran into At first, we aimed too high. We wanted to incorporate weather data into our water dispensing system, but the information flow and JSON parsing were not cooperating with the Arduino IDE. We consulted with a mentor, Andre Abtahi, who helped us get a better perspective of our project scope. It was difficult to focus on what it meant to truly create a minimum viable product when we had so many ideas. ## Accomplishments that we're proud of Even though our team is spread across the country (California, Washington, and Illinois), we were still able to create a functioning hardware hack. In addition, as beginners we are very excited about this hackathon's outcome. ## What we learned We learned about wrangling APIs, how to work in a virtual hackathon, and project management. Upon reflection, creating a balance between feasibility, ability, and optimism is important for guiding the focus and motivations of a team. Being mindful about energy levels is especially important for long sprints like hackathons. ## What's next for Water Smarter Lots of things! What's next for Water Smarter is weather-controlled water dispensing. Given humidity, precipitation, and other weather data, our water system will dispense more or less water. This adaptive water feature will save water and let nature pick up the slack. We would use the OpenWeatherMap API to gather the forecasted volume of rain, predict the potential soil moisture, and have the watering system dispense an adjusted amount of water to maintain correct soil moisture content. In a future iteration of Water Smarter, we want to stretch the use of live geographic data even further by suggesting appropriate produce for each growing zone in the US, which will personalize the water conservation process. Not all plants are appropriate for all locations, so we would want to provide the user with options for optimal planting. We can use software like ArcScene to look at sunlight exposure according to regional 3D topographical images and suggest planting locations and times. We want our product to be user friendly, so we want to improve our aesthetics and show more information about soil moisture beyond just notifications of water dispensing.
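A rough sketch of the Raspberry Pi side of this setup: read the moisture percentage streamed over serial from the Arduino and open the solenoid valve below the 40% threshold. The GPIO pin, serial port, and one-value-per-line message format are assumptions, not the team's actual wiring.

```python
# Poll the Arduino's moisture readings over serial and drive the valve relay.
import time
import serial           # pyserial
import RPi.GPIO as GPIO

VALVE_PIN = 17          # hypothetical GPIO pin driving the solenoid valve relay
THRESHOLD = 40.0        # percent soil moisture below which we water

GPIO.setmode(GPIO.BCM)
GPIO.setup(VALVE_PIN, GPIO.OUT, initial=GPIO.LOW)
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

try:
    while True:
        line = arduino.readline().decode(errors="ignore").strip()
        if not line:
            continue
        moisture = float(line)                  # assume Arduino prints one percentage per line
        if moisture < THRESHOLD:
            GPIO.output(VALVE_PIN, GPIO.HIGH)   # dispense water
            print(f"moisture {moisture:.1f}%, opening valve")
        else:
            GPIO.output(VALVE_PIN, GPIO.LOW)
        time.sleep(5)
finally:
    GPIO.cleanup()
```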
## Inspiration As the world progresses into the digital age, there is a huge simultaneous focus on creating various sources of clean energy that are sustainable and affordable. Unfortunately, there is minimal focus on ways to sustain the increasingly rapid production of energy. Energy is wasted every day as utility companies oversupply power to certain groups of consumers. ## What It Does Thus, we bring you Efficity, a device that helps utility companies analyze and predict the load demand of a housing area. By leveraging the expanding, ubiquitous arrival of Internet of Things devices, we can access energy data in real time. Utility companies could then estimate the ideal power to supply to a housing area while still satisfying the load demand. With this, less energy will be wasted, thus improving energy efficiency. On top of that, everyday consumers can also have easy access to their own personal usage for tracking. ## How We Built It Our prototype is built primarily around a Dragonboard 410c, where a potentiometer is used to represent the varying load demand of consumers. By using the analog capabilities of a built-in Arduino (ATMega328p), we can calculate the power that is consumed by the load in real time. A Python script is then run on the Dragonboard to receive the data from the Arduino through serial communication. The Dragonboard further complements our design with its built-in WiFi capabilities. With this in mind, we can send HTTP requests to a web server hosted by energy companies. In our case, we explored sending this data to a free IoT platform web server, which allows a user from anywhere to track energy usage as well as perform analytics, for example using MATLAB. In addition, the Dragonboard comes with a fully usable GUI and a compatible HDMI monitor for users who are less familiar with command-line controls. ## Challenges We Ran Into There were many challenges throughout the hackathon. First, we had trouble grasping the operations of a Dragonboard. The first 12 hours were spent only on learning how to use the device itself; it also did not help that our first Dragonboard was defective and did not come with a pre-flashed operating system! Next time, we plan to ask more questions early on rather than fixating on problems we believed were trivial. Next, we had a hard time coding the Wi-Fi functionality of the DragonBoard. This was largely due to the lack of expertise in the area from most members. For future reference, we find it advisable to have a larger diversity of team members to facilitate faster development. ## Accomplishments That We're Proud Of Overall, we are proud of what we have achieved, as this was our first time participating in a hackathon. We ranged from first-year all the way to fourth-year students! From learning how to operate the Dragonboard 410c to having hands-on experience in implementing IoT capabilities, we thoroughly believe that HackWestern has broadened all our perspectives on technology. ## What's Next for Efficity If this pitch is successful in this hackathon, we are planning to iterate further and develop the full potential of the Dragonboard prototype. There are numerous algorithms we would love to implement and explore to process the collected data, since the Dragonboard is quite a powerful device with its own operating system. We may also want to include extra hardware add-ons such as silent alarms for over-usage or solar panels to allow a fully self-sustained device.
To take this one step further: if we are able to build a fully functional product, we can pitch the idea to investors!
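A sketch of what the Dragonboard-side Python script described above could look like: read real-time power readings from the Arduino over serial and push them to a free IoT platform over HTTP. The serial port, one-reading-per-line message format, and ThingSpeak-style endpoint are assumptions for illustration, not the team's actual code.

```python
# Forward serial power readings from the Arduino to an IoT platform.
import serial
import requests

ENDPOINT = "https://api.thingspeak.com/update"   # example free IoT platform endpoint
API_KEY = "YOUR_WRITE_API_KEY"                   # placeholder write key

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=2)

while True:
    line = arduino.readline().decode(errors="ignore").strip()
    if not line:
        continue
    power_watts = float(line)                    # assume Arduino prints one reading per line
    resp = requests.get(ENDPOINT, params={"api_key": API_KEY, "field1": power_watts})
    print(f"sent {power_watts:.2f} W, status {resp.status_code}")
```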
## Inspiration If we take a moment to stop and think of those who can't speak or hear, we will realize how much we have to be thankful for. To make the lives of these differently abled people easier, we needed to come up with a solution, and here we present Proximity. ## What it does Proximity uses the Myo armband for sign recognition and active voice recognition for speech. A machine learning model trained on armband data reads the signs made by human hands and interprets them, thereby helping the speech impaired share their ideas and communicate with people and digital assistants alike. The service is also for those who are hearing impaired, so that they can know when somebody is calling their name or giving them a task. ## Accomplishments that we're proud of We're proud of successfully recognizing a few gestures and setting up a web app that understands and learns the name of a person. Apart from that, we have built a to-do list that enables hearing-impaired people to actively note down tasks assigned to them. ## What we learned We learned an entirely new language, Lua, to set up and use the Myo armband SDK. Apart from that, we used a vast array of languages, scripts, APIs, and products for different parts of the product, including Python, C++, Lua, JS, Node.js, HTML, CSS, the Azure Machine Learning Studio, and Google Firebase. ## What's next for Proximity We look forward to exploring the unlimited opportunities with Proximity, from training it to recognize the entire American Sign Language using the powerful computing capabilities of the Azure Machine Learning Studio to advancing our speech recognition app so it can understand more complex conversations. Proximity should integrate seamlessly into the lives of the differently abled.
partial
## Inspiration The inspiration behind MoodJournal comes from a desire to reflect on and cherish the good times, especially in an era where digital overload often makes us overlook the beauty of everyday moments. We wanted to create a digital sanctuary where users can not only store their daily moments and memories but also discover what truly makes them happy. By leveraging cutting-edge technology, we sought to bring a modern twist to the nostalgic act of keeping a diary, transforming it into a dynamic tool for self-reflection and emotional well-being. ## What it does MoodJournal is a digital diary app that allows users to capture their daily life through text entries and photographs. Utilizing semantic analysis and image-to-text conversion technologies, the app evaluates the emotional content of each entry and assigns it one of five happiness scores, ranging from Very Happy to Very Sad. This innovative approach enables MoodJournal to identify and highlight the user's happiest moments. At the end of the year, it creates a personalized collage of these joyous times, showcasing a summary of the texts and photos from those special days and serving as a powerful visual reminder of the year's highlights. ## How we built it MoodJournal's development combined React and JavaScript for a dynamic frontend, utilizing open-source libraries for enhanced functionality. The backend was structured around Python and Flask, providing a solid foundation for simple REST APIs. Cohere's semantic classification API was integrated for text analysis, enabling accurate emotion assessment. ChatGPT helped generate training data, ensuring our algorithms could effectively analyze and interpret users' entries. ## Challenges we ran into The theme of nostalgia itself presented a conceptual challenge, making it difficult initially to settle on a compelling idea. Our limited experience in frontend development and UX/UI design further complicated the project, requiring substantial effort and learning. Thanks to invaluable guidance from mentors like Leon, Shiv, and Arash, we navigated these obstacles. Additionally, while the Cohere API served our text analysis needs well, we recognized the necessity for a larger dataset to enhance accuracy, underscoring the critical role of comprehensive data in achieving precise analytical outcomes. ## Accomplishments that we're proud of We take great pride in achieving meaningful results from the Cohere API, which enabled us to conduct a thorough analysis of emotions from text entries. A significant breakthrough was our innovative approach to photo emotion analysis; by generating descriptive text from images using ChatGPT and then analyzing these descriptions with Cohere, we established a novel method for capturing emotional insights from visual content. Additionally, completing the core functionalities of MoodJournal to demonstrate an end-to-end flow of our primary objective was a milestone accomplishment. This project marked our first foray into utilizing a range of technologies, including React, Firebase, the Cohere API, and Flask. Successfully integrating these tools and delivering a functioning app, despite being new to them, is something we are especially proud of. ## What we learned This hackathon was a tremendous learning opportunity. We dove into tools and technologies new to us, such as React, where we explored new libraries and features like useEffect, and Firebase, where we achieved data storage and retrieval.
Our first-hand experience with Cohere's APIs, facilitated by direct engagement with their team, was invaluable, enhancing our app's text and photo analysis capabilities. Additionally, attending workshops, particularly on Cohere technologies like RAG, broadened our understanding of AI's possibilities. This event not only expanded our technical skills but also opened new horizons for future projects. ## What's next for MoodJournal We're planning exciting updates to make diary-keeping easier and more engaging: * AI-Generated Entries: Users can have diary entries created by AI, simplifying daily reflections. * Photo Analysis for Entry Generation: Transform photos into diary texts with AI, offering an effortless way to document days. * Integration with Snapchat Memories: This feature will allow users to turn snaps into diary entries, merging social moments with personal reflections. * Monthly Collages and Emotional Insights: We'll introduce monthly summaries and visual insights into past emotions, alongside our yearly wrap-ups. * User Accounts: Implementing login/signup functionality for a personalized and secure experience. These enhancements aim to streamline the journaling process and deepen user engagement with MoodJournal.
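To make the emotion-scoring step concrete, here is a hedged sketch using Cohere's classify endpoint with a hypothetical fine-tuned five-label model; the model id, label set, and response handling are assumptions and may differ from the app's actual integration and SDK version.

```python
# Classify a diary entry into one of five happiness levels via a fine-tuned
# Cohere classifier (model id is a placeholder).
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")
LABELS = ["Very Happy", "Happy", "Neutral", "Sad", "Very Sad"]  # assumed label set

def happiness_label(entry_text: str) -> str:
    response = co.classify(
        model="your-finetuned-mood-classifier",   # hypothetical fine-tuned model id
        inputs=[entry_text],
    )
    top = response.classifications[0]
    return top.prediction        # one of the five labels, e.g. "Happy"

print(happiness_label("Spent the afternoon baking with my sister, the kitchen smelled amazing."))
```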
# Emotify ## Inspiration We all care deeply about mental health and we wanted to help those in need. 280 million people in the world have depression. However, we found out that people play a big role in treating depression - some teammates have experienced this first hand! So, we created Emotify, which brings back memories of nostalgia and happy moments with friends. ## What it does The application utilizes an image classification program to classify photos locally stored on one's device. The application then "brings back memories and feelings of nostalgia" by displaying photos which either match a person's mood (if positive) or invert a person's mood (if negative). Input mood is determined by Cohere's NLP API; negatively associated moods (such as "sad") are matched with happy photos to cheer people up. The program can also be used to find images, being able to distinguish between requests for individual and group photos, as well as the mood portrayed within the photo. ## How we built it We used the DeepFace API to effectively predict facial emotions and sort photos into different emotion categories: happy, sad, angry, afraid, surprised, and disgusted. Each of these emotions serves as a token for intelligently generating pictures, thanks to Cohere. Cohere's brilliant NLP helped us build a model that guesses which token we should feed our sorted picture generator to bring happiness and take users on a trip down memory lane, reminding them of the amazing moments they have been through with their loved ones or the times when they were proud of themselves. Users can take a step back and look at the journey they have been through via a React framework front end that displays images highlighting their fun times. We only generate two photos at a time because we want people to really enjoy these photos and remember what happened in them (especially the happy ones). Thanks to a streamlined pipeline, we turned these pictures into objects that return file folders, which feed into the front end through their static images folder using the Flask API. We ask the users for their input, then run it through our NLP, backed by Cohere, to generate a meaningful token that produces quality photos. We trained the model in advance since it is very time consuming for the DeepFace API to go through all the photos. Of course, we built with privacy in mind: thanks to Auth0, we implemented a user account system so that users' data is securely protected and kept private. ## Challenges we ran into One major challenge was front-end development. We were split on the frameworks to use (Flask? Django? React?), how the application was to be designed, the user experience workflow, and the changes we had to make to implement third-party integrations (such as Auth0) and make the application look visually appealing. ## Accomplishments that we're proud of We are very satisfied with the work that we were able to do at UofTHacks, and extremely proud of the project we created. Many of the features of this project are things that we did not have knowledge of prior to the event. So, to have successfully completed everything we set out to do and more, while meeting the criteria for four of the challenges, has been very encouraging to say the least. ## What we learned The most experienced among us has been to two hackathons, while it was the first for the rest of us. For that reason, the learning experience has been overwhelming.
Having the opportunity to work with new technologies while creating a project we are proud of within 36 hours has forced us to fill in many of the gaps in our skill set, especially with AI/ML and full-stack programming. ## What's next for Emotify We plan to further develop this application in our free time, so that we 'polish it' to our standards and ensure it meets our intended purpose. The developers would definitely enjoy using such an app in our daily lives to keep us going with more positive energy. Of course, winning UofTHacks is an asset.
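A small sketch of the offline photo-sorting pass described in "How we built it": DeepFace buckets locally stored photos by dominant facial emotion so the generator can later pick from the right bucket. The folder path is a placeholder, and the return shape is normalized because it differs across DeepFace versions.

```python
# Bucket local photos by dominant facial emotion with DeepFace.
import glob
from collections import defaultdict
from deepface import DeepFace

buckets = defaultdict(list)   # emotion label -> list of photo paths

for path in glob.glob("photos/*.jpg"):
    result = DeepFace.analyze(img_path=path, actions=["emotion"], enforce_detection=False)
    if isinstance(result, list):      # newer DeepFace versions return a list of faces
        result = result[0]
    buckets[result["dominant_emotion"]].append(path)

print(buckets["happy"][:2])   # e.g. the two "happy" photos to surface in the feed
```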
## Inspiration As someone who has always wanted to speak in ASL (American Sign Language), I have always struggled with practicing my gestures, as I, unfortunately, don't know any ASL speakers to try and have a conversation with. Learning ASL is an amazing way to foster an inclusive community for those who are hearing impaired or deaf. DuoASL is the solution for practicing ASL for those who want to verify their correctness! ## What it does DuoASL is a learning app, where users can sign in to their respective accounts, and learn/practice their ASL gestures through a series of levels. Each level has a *"Learn"* section, with a short video on how to do the gesture (ie 'hello', 'goodbye'), and a *"Practice"* section, where the user can use their camera to record themselves performing the gesture. This recording is sent to the backend server, where it is validated with our Action Recognition neural network to determine if you did the gesture correctly! ## How we built it DuoASL is built up of two separate components: **Frontend** - The Frontend was built using Next.js (React framework), Tailwind and Typescript. It handles the entire UI, as well as video collection during the *"Practice"* section, which it uploads to the backend. **Backend** - The Backend was built using Flask, Python, Jupyter Notebook and TensorFlow. It is run as a Flask server that communicates with the front end and stores the uploaded video. Once a video has been uploaded, the server runs the Jupyter Notebook containing the Action Recognition neural network, which uses OpenCV and TensorFlow to apply the model to the video and determine the most prevalent ASL gesture. It saves this output to an array, which the Flask server reads and responds to the front end with. ## Challenges we ran into As this was our first time using a neural network and computer vision, it took a lot of trial and error to determine which actions should be detected using OpenCV, and how the landmarks from MediaPipe Holistic (which was used to track the hands and face) should be converted into formatted data for the TensorFlow model. We, unfortunately, ran into a very specific and undocumented bug with using Python to run Jupyter Notebooks that import TensorFlow, specifically on M1 Macs. I spent a short amount of time (6 hours :) ) trying to fix it before giving up and switching the system to a different computer. ## Accomplishments that we're proud of We are proud of how quickly we were able to get most components of the project working, especially the frontend Next.js web app and the backend Flask server. The neural network and computer vision setup was pretty quickly finished too (excluding the bugs), especially considering that for many of us this was our first time even using machine learning on a project! ## What we learned We learned how to integrate a Next.js web app with a backend Flask server to upload video files through HTTP requests. We also learned how to use OpenCV and MediaPipe Holistic to track a person's face, hands, and pose through a camera feed. Finally, we learned how to collect videos and convert them into data to train and apply an Action Detection network built using TensorFlow. ## What's next for DuoASL We would like to: * Integrate video feedback that provides detailed steps on how to improve (using an LLM?) * Add more words to our model! * Create a practice section that lets you form sentences! * Integrate full mobile support with a PWA!
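A sketch of the keypoint-extraction step described above: read a recorded practice video with OpenCV, run MediaPipe Holistic per frame, and flatten the hand landmarks into a sequence the TensorFlow action-recognition model can consume. The video path and the choice to keep only hand landmarks are simplifying assumptions.

```python
# Extract per-frame hand keypoints from an uploaded practice video.
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_hand_keypoints(video_path: str) -> np.ndarray:
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_holistic.Holistic(min_detection_confidence=0.5) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            keypoints = []
            for hand in (results.left_hand_landmarks, results.right_hand_landmarks):
                if hand:
                    keypoints.extend([v for lm in hand.landmark for v in (lm.x, lm.y, lm.z)])
                else:
                    keypoints.extend([0.0] * 21 * 3)   # 21 landmarks per hand, zero-filled if missing
            frames.append(keypoints)
    cap.release()
    return np.array(frames)   # shape: (num_frames, 126), fed to the classifier

print(extract_hand_keypoints("uploads/practice_hello.mp4").shape)
```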
losing
## Inspiration Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit videogame "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb! ## What it does The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules the explosive includes are a "cut the wire" game where the wires must be cut in the correct order, a "press the button" module where different actions must be taken depending on the given text and LED colour, an 8 by 8 "invisible maze" where players must cooperate in order to navigate to the end, and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge". ## How we built it **The Explosive** The explosive defuser simulation is a modular game crafted from four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, keypads, mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D-printed plates. **The Code** Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation. Using the Grove LCD RGB Backlight Library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was also used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol. The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. With the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8 by 8 matrix. This further optimizes the maze hardware, minimizing the required wiring and improving signal communication. ## Challenges we ran into Participating in the biggest hardware hackathon in Canada, using all the various hardware components provided, such as the keypads or OLED displays, posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt by utilizing components that better suited our needs and to be flexible with the hardware provided. Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Therefore, optimizing software and hardware for efficient resource usage was necessary, and it remained a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system, to counteract this we had to come up with unique solutions mentioned below: ## Accomplishments that we're proud of During the Makeathon we often faced the issue of buttons creating noise, and often times the noise it would create would disrupt the entire system. To counteract this issue we had to discover creative solutions that did not use buttons to get around the noise. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module. ## What we learned * Familiarity with the functionalities of new Arduino components like the "Micro-OLED Display," "8 by 8 LED matrix," and "Keypad" is gained through the development of individual modules. * Efficient time management is essential for successfully completing the design. Establishing a precise timeline for the workflow aids in maintaining organization and ensuring successful development. * Enhancing overall group performance is achieved by assigning individual tasks. ## What's next for Keep Hacking and Nobody Codes * Ensure the elimination of any unwanted noises in the wiring between the main board and game modules. * Expand the range of modules by developing additional games such as "Morse-Code Game," "Memory Game," and others to offer more variety for players. * Release the game to a wider audience, allowing more people to enjoy and play it.
## Inspiration In online documentaries, we saw visually impaired individuals whose vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue. ## What it does When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to. ## How we built it We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a Python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API. ## Challenges we ran into Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity, in favour of both the user and us, the developers. ## Accomplishments that we're proud of This is our very first makeathon, and we are proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project. ## What we learned We learned how to operate and program a DragonBoard, as well as connect various APIs together. ## What's next for Aperture We want to implement hot-key detection instead of the push button to eliminate the need for tactile input altogether.
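A hedged sketch of the capture-to-text step: POST the captured image to the Azure Computer Vision OCR REST endpoint and collect the recognized words. The endpoint version, region, and response parsing are written from memory and should be treated as assumptions; the key is a placeholder, and the resulting string would then be handed to the Azure Speech API for playback.

```python
# Send a captured image to Azure Computer Vision OCR and return the extracted text.
import requests

VISION_ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/vision/v3.2/ocr"  # assumed endpoint
VISION_KEY = "YOUR_COMPUTER_VISION_KEY"

def image_to_text(image_path: str) -> str:
    with open(image_path, "rb") as f:
        resp = requests.post(
            VISION_ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": VISION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()
    words = []
    for region in resp.json().get("regions", []):   # assumed response shape
        for line in region["lines"]:
            words.extend(w["text"] for w in line["words"])
    return " ".join(words)

print(image_to_text("capture.jpg"))
```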
## Inspiration The best learning is interactive. We have an interest in education, Virtual Reality, and fun. This was the intersection. ## What it does This is the beginning of an interactive, 3-dimensional Virtual Reality game. Our vision for this game is rooted in education and fun, designed to draw on users' knowledge of words and vocabulary-building. Although we would still have to create logic to guide user interactions, we have created the (virtual) fundamental building blocks for such a game. ## How we built it We built it using Unity, the HTC Vive, and C#. The main frame of the program and the Virtual Reality system were programmed in C#. ## Challenges we ran into Unity presented a large learning curve, and compounded by our lack of previous experience with Virtual Reality, we had a large learning experience ourselves! ## Accomplishments that we're proud of Building blocks, creating scripts for interactions, and working towards our vision for this game. Overcoming our lack of knowledge and building our skill set. Solving unanticipated problems associated with combining different software. ## What we learned Camaraderie, the kindness of mentors, and a whole lot of newfound technical expertise in C#, Unity, and Virtual Reality. ## What's next for Virtual Reality Language Builder Time permitting, continued building with the hope of forming a complete educational game.
winning
## Inspiration Our inspiration came from the challenge proposed by Varient, a data aggregation platform that connects people with similar mutations together to help doctors and users. ## What it does Our application works by allowing the user to upload an image file. The image is then sent to Google Cloud’s Document AI to extract the body of text, which is processed and then matched against the datastore of gene names for matches. ## How we built it While originally we had planned to feed this body of text to a Vertex AI ML model for entity extraction, the trained model was not accurate due to a small dataset. Additionally, we attempted to build a BigQuery ML model but struggled to format the data in the target column as required. Due to time constraints, we explored a different path and downloaded a list of gene symbols from the HUGO Gene Nomenclature Committee’s website (<https://www.genenames.org/>). Using Node.js and Multer, the image is processed and the text contents are efficiently matched against the datastore of gene names for matches. The app returns a JSON of the matching strings in order of highest frequency. This web app is then hosted on Google Cloud through App Engine at (<https://uofthacksix-chamomile.ue.r.appspot.com/>). The UI, while very simple, is easy to use. The intent of this project was to create something that could easily be integrated into Varient’s architecture. Converting this project into an API to pass the JSON to the client would be very simple. ## How it meets the theme "restoration" The overall goal of this application, which has been partially implemented, was to create an application that could match mutated gene names from user-uploaded documents and connect users with resources and with others who share the common gene mutation. This would allow them to share strategies or items that have helped them deal with living with the gene mutation, letting these individuals restore some normalcy in their lives again. ## Challenges we ran into Some of the challenges we faced: * having a small data set to train the Vertex AI on * time constraints on learning the new technologies, and the best way to effectively use them * formatting the data in the target column when attempting to build a BigQuery ML model ## Accomplishments that we're proud of The accomplishment that we are all proud of is the exposure we gained to all the new technologies we discovered and used this weekend. We had no idea how many AI tools Google offers. The exposure to new technologies and taking the risk to step out of our comfort zone and attempt to learn and use them this weekend in such a short amount of time is something we are all proud of. ## What we learned This entire project was new to all of us. We have never used Google Cloud in this manner before, only for Firestore. We were unfamiliar with using Express, and working with machine learning was something only one of us had a small amount of experience with. ## What's next for Chamomile The hack is not as complete as we would like, since ideally there would be a machine learning aspect to confirm the guesses made by the substring matching and more data to improve the Vertex AI model. Improving on this would be a great step for this project. Also, adding a more put-together UI to match the theme of this application would help.
## Inspiration The inspiration for this project came from the UofTHacks Restoration theme and Varient's project challenge. The initial idea was to detect a given gene mutation in a given genetic testing report. This is an extremely valuable asset for the medical community, given the current global situation with the COVID-19 pandemic. As we can already see, misinformation and distrust in the medical community continue to grow, thus we must try to leverage technology to solve this ever-expanding problem. One way Geneticheck can restore public trust in the medical community is by providing a way to bridge the gap between confusing medical reports and the average person's medical understanding. ## What it does Geneticheck is a smart software that allows a patient, or parents of patients with rare diseases, to gather more information about their specific conditions and genetic mutations. The reports are scanned to find the gene mutation, and Geneticheck shows where the gene mutation is located on the original report. Geneticheck also provides the patient with more information regarding their gene mutation, specifically the associated diseases and phenotypes (or related symptoms) they may now have. Given a gene mutation, the software searches through the Human Phenotype Ontology database and auto-generates a PDF report that lists all the necessary information a patient will need following a genetic test. The descriptions for each phenotype are given in layman-friendly language, which allows the patient to understand the symptoms associated with the gene mutation, resulting in patients and loved ones being more observant of their status. ## How we built it Geneticheck was built using Python and Google Cloud's Vision API. Other libraries were also explored, such as PyTesseract, but they yielded lower gene detection results. ## Challenges we ran into One major challenge was initially designing the project in Python. Development in Python was initially chosen for its rapid R&D capabilities and the potential need to do image processing in OpenCV. As the project was developed and the Google Cloud Vision API was deemed acceptable for use, moving to a web-based Python framework was deemed too time-consuming. In the interest of time, the Python-based command-line tool had to be selected as the current basis of interaction. ## Accomplishments that we're proud of One proud accomplishment of this project is the success rate of the overall algorithm, being able to successfully detect all 47 gene mutations with their related images. The other great accomplishment was the quick development of PDF generation software to expand the capabilities and scope of the project, providing the end user/patient with more information about their condition and ultimately restoring their faith in the medical field through better understanding and knowledge. ## What we learned Topics learned include OCR in Python, optimizing images for text OCR with PyTesseract, PDF generation in Python, setting up Flask servers, and a lot about genetic data! ## What's next for Geneticheck The next steps include porting the working algorithms over to a web-based framework, such as React. Running the algorithms in JavaScript would allow the user web-based interaction, which is the best interactive format for the everyday person. Other steps are to gather more genetic test results and to provide treatment options in the reports as well.
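An illustrative sketch of the core flow: scan OCR'd report text for known gene symbols and write a simple PDF summary. The gene list, phenotype lookup, and layout are tiny placeholders, and reportlab stands in here for whatever PDF library the team actually used.

```python
# Detect known gene symbols in OCR'd report text and emit a simple PDF summary.
import re
from reportlab.pdfgen import canvas

GENE_SYMBOLS = {"BRCA1", "TP53", "CFTR", "MECP2"}          # loaded from HGNC in practice
PHENOTYPES = {"CFTR": ["Recurrent respiratory infections", "Salty-tasting skin"]}  # placeholder HPO lookup

def find_genes(report_text: str) -> set:
    tokens = set(re.findall(r"[A-Z0-9]{2,}", report_text.upper()))
    return tokens & GENE_SYMBOLS

def write_report(genes: set, out_path: str = "geneticheck_report.pdf") -> None:
    pdf = canvas.Canvas(out_path)
    y = 800
    for gene in sorted(genes):
        pdf.drawString(72, y, f"Gene mutation detected: {gene}")
        for phen in PHENOTYPES.get(gene, []):
            y -= 18
            pdf.drawString(90, y, f"- Associated symptom: {phen}")
        y -= 30
    pdf.save()

write_report(find_genes("Pathogenic variant identified in CFTR exon 11"))
```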
## Inspiration More than 2.7 million pets will be killed over the next year because shelters cannot accommodate them. Many of them will be abandoned by owners who are unprepared for the real responsibilities of raising a pet, and the vast majority of them will never find a permanent home. The only sustainable solution to animal homelessness is to maximize adoptions of shelter animals by families who are equipped to care for them, so we created Homeward as a one-stop foster tool to streamline this process. ## What it does Homeward allows shelters and pet owners to offer animals for adoption online. A simple and intuitive UI allows adopters to describe the pet they want and uses Google Cloud ML's Natural Language API to match their queries with available pets. Our feed offers quick browsing of available animals, multiple ranking options, and notifications of which pets are from shelters and which will be euthanized soon. ## How we built it We used the Node.js framework with Express for routing and MongoDB as our database. Our front-end was built with custom CSS/Jade mixed with features from several CSS frameworks. Entries in our database were sourced from the RescueGroups API, and salient keywords for query matching were extracted using Google's Natural Language API. Our application is hosted with Google App Engine. ## Challenges we ran into Incorporating Google's Natural Language API was challenging at first and we had to design a responsive front-end that would update the feed as the user updated their query. Some pets' descriptions had extraneous HTML and links that added noise to our extracted tags. We also found it tedious to clean and migrate the data to MongoDB. ## Accomplishments that we're proud of We successfully leveraged Google Cloud ML to detect salient attributes in users' queries and rank animals in our feed accordingly. We also managed to utilize real animal data from the RescueGroups API. Our front-end also turned out to be cleaner and more user-friendly than we anticipated. ## What we learned We learned first-hand about the challenges of applying natural language processing to potentially noisy user queries in real life applications. We also learned more about good javascript coding practices and robust back-end communication between our application and our database. But most importantly, we learned about the alarming state of animal homelessness and its origins. ## What's next for Homeward We can enhance posted pet management by creating a simple account system for shelters. We would also like to create a scheduling mechanism that lets users "book" animals for fostering, thereby maximizing the probability of adoption. In order to scale Homeward, we need to clean and integrate more shelters' databases and adjust entries to match our schema.
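A short sketch of the query-matching idea, assuming the Google Cloud Natural Language Python client rather than the project's Node.js stack: extract salient entities from an adopter's free-text query, which can then be matched against pet descriptions and used to rank the feed.

```python
# Rank the salient entities in an adopter's free-text query.
from google.cloud import language_v1

def salient_keywords(query: str, top_n: int = 5):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=query, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(document=document)
    ranked = sorted(response.entities, key=lambda e: e.salience, reverse=True)
    return [(e.name, round(e.salience, 3)) for e in ranked[:top_n]]

# Example output might look like [('golden retriever', 0.62), ('kids', 0.21), ('apartment', 0.17)]
print(salient_keywords("Looking for a calm golden retriever that is good with kids in an apartment"))
```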
losing
## Inspiration Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members is a tennis enthusiast who has always strived to refine their skills, and they soon realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills. ## What it does and how we built it TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data rendered with React.js, offering a comprehensive overview and further insights into the player's performance. ## Challenges we ran into Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during the development process. The depth of the ball became a challenge as we were limited to one camera angle, but we were able to tackle it by using machine learning techniques. ## Accomplishments that we're proud of We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team. ## What we learned Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also enhanced our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users. ## What's next for TRACY Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to meet the diverse needs of the tennis community.
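A greatly simplified sketch of per-frame 2D ball tracking from a single camera angle. TRACY uses pre-trained neural networks; this classical HSV-masking version only illustrates the shape of the tracking loop, and the colour range for a tennis ball is an assumption.

```python
# Track the ball per frame by masking its colour and taking the largest blob.
import cv2
import numpy as np

LOWER_YELLOW = np.array([25, 80, 80])     # approximate tennis-ball hue range (HSV)
UPPER_YELLOW = np.array([45, 255, 255])

def track_ball(video_path: str):
    trajectory = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_YELLOW, UPPER_YELLOW)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
            if radius > 3:                      # ignore specks of noise
                trajectory.append((int(x), int(y)))
    cap.release()
    return trajectory                           # pixel coordinates per frame

print(len(track_ball("rally_clip.mp4")), "ball positions recorded")
```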
## Inspiration Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call. This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies. ## What it does DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers. Emergency calls are aggregated onto a single platform and filtered based on severity. Critical details such as the location, time of emergency, and caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene. Our **human-in-the-loop system** ensures that control by human operators is always put at the forefront: dispatchers have the final say on all recommended actions, ensuring that no AI system stands alone. ## How we built it We developed a comprehensive systems architecture design to visualize the communication flow across the different pieces of software. ![Architecture](https://i.imgur.com/FnXl7c2.png) We developed DispatchAI using a comprehensive tech stack: ### Frontend: * Next.js with React for a responsive and dynamic user interface * TailwindCSS and Shadcn for efficient, customizable styling * Framer Motion for smooth animations * Leaflet for interactive maps ### Backend: * Python for server-side logic * Twilio for handling calls * Hume and Hume's EVI for emotion detection and understanding * Retell for implementing a voice agent * Google Maps geocoding API and Street View for location services * Custom fine-tuned Mistral model using our proprietary 911 call dataset * Intel Dev Cloud for model fine-tuning and improved inference ## Challenges we ran into * Curating a diverse 911 call dataset * Integrating multiple APIs and services seamlessly * Fine-tuning the Mistral model to understand and respond appropriately to emergency situations * Balancing empathy and efficiency in AI responses ## Accomplishments that we're proud of * Successfully fine-tuned a Mistral model for emergency response scenarios * Developed a custom 911 call dataset for training * Integrated emotion detection to provide more empathetic responses ## Intel Dev Cloud Hackathon Submission ### Use of Intel Hardware We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration: * Leveraged IDC Jupyter Notebooks throughout the development process * Conducted a live demonstration to the judges directly on the Intel Developer Cloud platform ### Intel AI Tools/Libraries We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project: * Utilized Intel® Extension for PyTorch (IPEX) for model optimization * Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds * This represents an over 90% decrease in processing time, showcasing the power of Intel's AI tools ### Innovation Our project breaks new ground in emergency response technology: * Developed the first empathetic, AI-powered dispatcher agent * Designed to support first responders during resource-constrained situations * Introduces a novel
approach to handling emergency calls with AI assistance ### Technical Complexity * Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud * Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI * Developed real-time call processing capabilities * Built an interactive operator dashboard for data summarization and oversight ### Design and User Experience Our design focuses on operational efficiency and user-friendliness: * Crafted a clean, intuitive UI tailored for experienced operators * Prioritized comprehensive data visibility for quick decision-making * Enabled immediate response capabilities for critical situations * Interactive Operator Map ### Impact DispatchAI addresses a critical need in emergency services: * Targets the 82% of understaffed call centers * Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times) * Potential to save lives by ensuring every emergency call is answered promptly ### Bonus Points * Open-sourced our fine-tuned LLM on HuggingFace with a complete model card (<https://huggingface.co/spikecodes/ai-911-operator>) + And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts> * Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>) * Promoted the project on Twitter (X) using #HackwithIntel (<https://x.com/spikecodes/status/1804826856354725941>) ## What we learned * How to integrate multiple technologies to create a cohesive, functional system * The potential of AI to augment and improve critical public services ## What's next for Dispatch AI * Expand the training dataset with more diverse emergency scenarios * Collaborate with local emergency services for real-world testing and feedback * Explore future integration
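A sketch of the IPEX optimization step mentioned above, assuming a Hugging Face causal-LM checkpoint (the model id is a placeholder); ipex.optimize() prepares the model for faster CPU inference on Intel hardware, which is the kind of speedup reported in the Intel section.

```python
# Optimize a fine-tuned causal LM for CPU inference with Intel Extension for PyTorch.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/finetuned-911-mistral"          # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()

model = ipex.optimize(model, dtype=torch.bfloat16)   # IPEX graph/kernel optimizations

prompt = "Caller: There's smoke coming from my neighbour's kitchen window.\nDispatcher:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```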
## Inspiration Our inspiration for creating this AI project stemmed from the desire to leverage current technology to enhance the lives of our community members, particularly the elderly. Recognizing the difficulties faced by aging individuals, including our own grandparents, in embracing technological advancements, and their concerns about health conditions, we sought a way to integrate technology and healthcare services seamlessly into their lives. Thus, SageWell was conceived: an AI companion designed to provide personalized support to seniors by navigating medical information alongside their healthcare providers. At SageWell, we believe in the accessibility of AI for all age groups. ## What it does SageWell enables users to seek medical information through natural language interactions, allowing elderly individuals to pose questions verbally and receive spoken responses, mimicking human conversation. ## How we built it SageWell leverages the capabilities of two Monster APIs: OpenAI-Whisper Large-v2 for speech-to-text conversion and Meta's llama-2-7b-chat-hf for refining our reinforcement learning model. Our model was trained on MedQuAD, a comprehensive dataset containing over 47,000 medical question-answer pairs, as well as the Drugs, Side Effects and Medical Condition dataset and the Drugs Related to Medical Conditions dataset. The frontend of the web app was built using React, a JavaScript library, and the service we integrated into our app to store all the audio files was Firebase. ## Challenges we ran into In the process of developing SageWell, we encountered several challenges. Since there were many integrations in our application, from speech-to-text transcription and Monster APIs for fine-tuned LLMs to storage service providers, we faced difficulties trying to link all the individual pieces together. We learned several new tools and technologies and also spent time fine-tuning models to provide medical information. Additionally, we encountered setbacks when we were finishing up our project, as we were facing CORS errors when making API calls from the browser; we were able to work around this by adding a proxy, which served as a bridge between our client and server. ## Accomplishments that we're proud of We empowered our primary user demographic, the elderly, to engage with SageWell through voice interactions instead of having to type text. We addressed the challenges that many seniors encounter when typing on digital devices, thereby increasing the accessibility of the process of seeking medical information and ensuring inclusivity in AI for all age groups, especially the elderly. Our reinforcement learning model has demonstrated effectiveness, evidenced by the training loss decreasing after each iteration and reaching 0.756 at the end. This indicates that the model fits the training data well. We merged three datasets: the Medical Question Answering dataset, the Drugs, Side Effects and Medical Condition dataset, and the Drugs Related to Medical Conditions dataset. By training our model for SageWell on these datasets, obtained from NIH websites and drugs.com, we allowed SageWell to provide medical information to users based on reliable, comprehensive databases. ## What we learned Through this project, we gained valuable insights into the startup ecosystem and developer space. We expanded our skill set by refining our model, working with new APIs and storage providers, and creating a solution that addresses the specific challenges faced by our target audience.
## What's next for SageWell The journey of SageWell is just beginning. Moving forward, we aim to expand its capabilities to assist in additional areas crucial for the well-being of the elderly, such as medication reminders and guidance on accessing support for domestic chores. Furthermore, we envision integrating features that facilitate connections between seniors and younger generations, including their grandchildren and other youth in their communities. By fostering these intergenerational connections, SageWell will expand its targeting market size by not only keeping the elderly engaged with their loved ones but also ensuring they remain connected to the evolving world around them. Through SageWell, we look forward to continuing to push for accessibility of AI for all age groups.
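To make the CORS workaround from the Challenges section concrete, here is a minimal sketch of a same-origin proxy, assuming a generic Flask server; the route name and upstream URL are placeholders, not SageWell's actual endpoints.

```python
# Minimal sketch of a same-origin proxy that forwards browser requests to an
# external API, sidestepping CORS errors. The route and UPSTREAM_URL are
# illustrative assumptions, not SageWell's real configuration.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
UPSTREAM_URL = "https://api.example.com/generate"  # hypothetical external endpoint

@app.route("/proxy", methods=["POST"])
def proxy():
    # Forward the browser's JSON payload server-side, where CORS does not apply.
    upstream = requests.post(UPSTREAM_URL, json=request.get_json(), timeout=30)
    # Relay the upstream response back to the client from the same origin.
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=5000)
```

The browser only ever talks to its own origin, while the server forwards the request to the external API, which is what made the proxy an effective bridge between our client and server.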
# Inspiration

As a team we decided to develop a service that we thought would not only be extremely useful to us, but to everyone around the world who struggles with storing physical receipts. We were inspired to build an eco-friendly as well as innovative application that targets the pain points behind filing receipts, losing receipts, missing return policy deadlines, not being able to find the proper receipt with a particular item, as well as tracking potentially bad spending habits.

# What it does

To solve these problems, we are proud to introduce Receipto, a universal receipt tracker whose mission is to empower users with their personal finances, to track spending habits more easily, and to replace physical receipts to reduce global paper usage. With Receipto you can upload or take a picture of a receipt, and it will automatically recognize all of the information found on the receipt. Once validated, it saves the picture and summarizes the data in a useful manner. In addition to storing receipts in an organized manner, you can get valuable information on your spending habits, and you can also search through receipt expenses based on certain categories, items and time frames. The most interesting feature is that once a receipt is loaded and validated, it will display a picture of all the items purchased thanks to the use of item codes and an image recognition API. Receipto will also notify you when a receipt may be approaching its potential return policy deadline, which is based on user input during receipt uploads.

# How we built it

We have chosen to build Receipto as a responsive web application, allowing us to develop a better user experience. We first drew up storyboards by hand to visually predict and explore the user experience, then we developed the app using React, ViteJS, ChakraUI and Recharts. For the backend, we decided to use NodeJS deployed on Google Cloud Compute Engine. In order to read and retrieve information from the receipt, we used the Google Cloud Vision API along with our own parsing algorithm. Overall, we mostly focused on developing the main ideas, which consist of scanning and storing receipts as well as viewing the images of the items on the receipts.

# Challenges we ran into

Our main challenge was implementing the image recognition API, as it involved a lot of trial and error. Almost all receipts are different depending on the store and province. For example, in Quebec, there are two different taxes displayed on the receipt, and that affected how our app was able to recognize the data. To fix that, we made sure that if two types of taxes are displayed, our app would recognize that the receipt comes from Quebec, and it would scan it as such. Additionally, almost all stores have different receipts, so we have adapted the app to recognize most major stores, but we also allow a user to manually add the data in case a receipt is very different. Either way, a user will know when it's necessary to change or add data through visual alerts when uploading receipts.

Another challenge was displaying the images of the items on the receipts. Not all receipts had item codes, and stores that did have these codes ended up having different APIs. We overcame this challenge by finding an API called stocktrack.ca that combines the most popular store APIs in one place.

# Accomplishments that we're proud of

We are all very proud to have turned this idea into a working prototype, as we agreed to pursue this idea knowing the difficulty behind it.
We have many great ideas to implement in the future and have agreed to continue this project beyond McHacks in hopes of one day completing it. We are grateful to have had the opportunity to work together with such talented, patient, and organized team members.

# What we learned

With all the different skills each team member brought to the table, we were able to pick up new skills from each other. Some of us got introduced to new coding languages, others learned new UI design skills as well as simple organization and planning skills. Overall, McHacks has definitely shown us the value of teamwork; we all kept each other motivated and helped each other overcome each obstacle as a team.

# What's next for Receipto?

Now that we have a working prototype ready, we plan to further test our application with a selected sample of users to improve the user experience. Our plan is to polish up the main functionality of the application, and to expand the idea by adding exciting new features that we just didn't have time to add. Although we may love the idea, we need to conduct more market research to see if it could be a viable service that could change the way people perceive receipts and convince them to adopt Receipto.
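To illustrate the province heuristic described in the Challenges section (two tax lines implying a Quebec receipt), here is a rough sketch of what such a parsing step could look like; the keywords, regex, and return shape are illustrative assumptions, not our actual parsing algorithm.

```python
# Rough sketch of the heuristic described above: if a receipt shows two distinct
# tax lines (GST/TPS and QST/TVQ), treat it as a Quebec receipt.
import re

TAX_KEYWORDS = {"gst": "GST/TPS", "tps": "GST/TPS", "qst": "QST/TVQ", "tvq": "QST/TVQ", "hst": "HST"}

def parse_taxes(ocr_lines):
    taxes = {}
    for line in ocr_lines:
        lowered = line.lower()
        for keyword, label in TAX_KEYWORDS.items():
            if keyword in lowered:
                amounts = re.findall(r"\d+[.,]\d{2}", line)
                if amounts:
                    # Take the rightmost amount on the line as the tax charged.
                    taxes[label] = float(amounts[-1].replace(",", "."))
    # Two distinct taxes (GST/TPS plus QST/TVQ) strongly suggests a Quebec receipt.
    province = "QC" if {"GST/TPS", "QST/TVQ"} <= set(taxes) else "unknown"
    return taxes, province

print(parse_taxes(["TPS 5% 0.75", "TVQ 9.975% 1.50", "TOTAL 17.25"]))
```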
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration

Games are the only force in the known universe that can get people to take actions against their self-interest, in a predictable way, without using force. I have been attracted to game development for a while and was planning to make something of my own. Since I love first-person shooter games, I thought, let's build something that I can enjoy myself. I took inspiration from CS:GO and other FPS games to get a brief idea of the mechanics and environment.

## What it does

Shoot'em Up is a first-person action shooter in which everything depends on your skill. Carefully consider your tactics for each battle, explore different locations and modes, develop your shooting skills and demonstrate your superiority! Step into the thrilling solo play campaign as you shoot your way through one dire situation after another to save the world, launching an attack against a lunatic's apocalyptic plans.

## How I built it

I used the Unity game engine to build this game from scratch. I put all my creativity into designing and creating the gameplay environment. Everything is built from the ground up. Unity enabled me to create 3D models and figurines. I used Unity's C# scripting API to form gameplay physics and actions.

## Challenges I ran into

I was not familiar with C# at a deeper level, so I had to keep looking things up on StackOverflow, and the environment design was the hardest part of the job.

## Accomplishments that I am proud of

Building a game of my own is the biggest and most prestigious accomplishment for me.

## What I learned

I learned that developing a game is not a piece of cake; it takes immense commitment and hard work. I also got myself familiar with the functionality of the Unity game engine.

## What's next for SHOOT 'EM UP

* The game only has 2 levels as of now, but I would love to add more levels to make the game even more enjoyable.
* The game can be played on PC for now, but I would love to port the game for cross-platform use.
* There are some nicks and cuts here and there, and I would love to make the gameplay smoother.

## Note

* I have uploaded various code snippets to my GitHub, but they would make little sense without the asset files, so I have uploaded the complete project along with all the assets to my Google Drive, whose link is attached to the submission.
* There was some problem with my mic 😥 so I was not able to do a voiceover; pardon me for it.
## What it does

Khaledifier replaces all quotes and images around the internet with pictures and quotes of DJ Khaled!

## How we built it

A Chrome web app written in JS interacts with live web pages to make changes. The app sends a quote to a server, which tokenizes its words into types using NLP. This server then makes a call to an Azure Machine Learning API that has been trained on DJ Khaled quotes to return the closest matching one.

## Challenges we ran into

Keeping the server running with older Python packages, and for free, proved to be a bit of a challenge.
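A minimal sketch of the server-side flow described above: tag the incoming quote's words by type, then ask a hosted model for the closest DJ Khaled quote. The scoring URL, key, and payload shape are assumptions for illustration, not the real deployment.

```python
# Sketch of the quote-matching server: POS-tag the quote, then query a hosted
# scoring endpoint for the closest Khaled quote. URL/key/payload are placeholders.
import requests
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

SCORING_URL = "https://example.azurewebsites.net/score"  # hypothetical endpoint
API_KEY = "YOUR_KEY"

def closest_khaled_quote(quote):
    tokens = nltk.word_tokenize(quote)
    tagged = nltk.pos_tag(tokens)            # e.g. [('major', 'JJ'), ('key', 'NN')]
    payload = {"tokens": tokens, "pos_tags": [tag for _, tag in tagged]}
    resp = requests.post(SCORING_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"})
    return resp.json().get("quote", "Another one.")

print(closest_khaled_quote("The quick brown fox jumps over the lazy dog"))
```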
## Inspiration Inspired by a team member's desire to study through his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist on any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app. ## What it does Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (only needs to be a few seconds long) and a PDF of a textbook and uses existing Deepfake technology to synthesize the dictation from the textbook using the users' favorite voice. The Deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time-domain via the Wave-RNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents! ## How we built it We combined a number of different APIs and technologies to build this app. For leveraging scalable machine learning and intelligence compute, we heavily relied on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing Deepfake code written for Python and Tensorflow (see Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating. ## Challenges we ran into Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front-end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing Deepfake/Voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning. ## Accomplishments that we're proud of We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into usable app within hours. ## What we learned We learned today that sometimes the seemingly simplest things (dealing with python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful. 
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products.

## What's next for EduVoicer

EduVoicer still has a long way to go before it can gain users. Our first next step is to implement functionality, possibly with some image segmentation techniques, to decide what parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan to both increase efficiency (time-wise) and scale the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating the individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
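For readers curious about the synthesis flow described in "How we built it" (voice embedding, then mel spectrogram, then vocoder), here is a bare-bones orchestration sketch; the three stage functions are simplified placeholders standing in for the encoder, synthesizer, and vocoder of the voice-cloning codebase, not its actual API.

```python
# Orchestration sketch of the three-stage flow: the stage functions below are
# dummy placeholders so the script runs end to end; they are NOT the real models.
import numpy as np
from scipy.io import wavfile

def embed_voice(reference_wav_path):        # placeholder: encoder -> fixed-size embedding
    return np.random.rand(256).astype(np.float32)

def text_to_mel(text, embedding):           # placeholder: seq2seq synthesizer -> mel frames
    return np.random.rand(80, 10 * len(text.split())).astype(np.float32)

def mel_to_audio(mel):                      # placeholder: vocoder -> time-domain samples
    return (np.random.rand(mel.shape[1] * 200) * 2 - 1).astype(np.float32)

def dictate(reference_wav_path, textbook_text, out_path="dictation.wav"):
    embedding = embed_voice(reference_wav_path)
    mel = text_to_mel(textbook_text, embedding)
    audio = mel_to_audio(mel)
    wavfile.write(out_path, 16000, audio)   # the .WAV the user downloads
    return out_path

print(dictate("favorite_voice.wav", "Chapter 1: Introduction to Biology."))
```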
## Inspiration

Our team was united in our love for animals, and our anger about the thousands of shelter killings that happen every day due to overcrowding. In order to raise awareness and educate others about the importance of adopting rather than shopping for their next pet, we framed this online web application from a dog's perspective of the process of trying to get adopted.

## What it does

In *Overpupulation,* users can select a dog who they will control in order to try to convince visitors to adopt them. To illustrate the realistic injustices some breeds face in shelters, different dogs in the game have different chances of getting adopted. After each rejection from a potential adopter, we expose some of the faulty reasoning behind their choices to try to debunk false misconceptions. At the conclusion of the game, we present ways for individuals to get involved and support their local shelters.

## How we built it

This web application is built in Javascript/jQuery, HTML, and CSS.

## Accomplishments that we're proud of

For most of us, this was our first experience working in a team coding environment. We all walked away with a better understanding of git, the front-end languages we utilized, and design. We have purchased the domain name overpupulation.com, but are still trying to work through redirecting issues. :)
## Inspiration As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad. ## What It Does After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse through the items in the receipts and add the items into the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make. ## How We Built It On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data. On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase. ## Challenges We Ran Into Some of the challenges we ran into included deploying our Flask to Google App Engine, and styling in react. We found that it was not possible to write into Google App Engine storage, instead we had to write into Firestore and have that interact with Google App Engine. On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
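A minimal sketch of the receipt-upload flow described in "How We Built It": a Flask endpoint that forwards the image to Google Cloud Vision for text extraction and applies some simple post-processing. The filtering rules and response shape are illustrative assumptions.

```python
# Sketch of the receipt-upload endpoint: accept an image, run Google Cloud
# Vision OCR, and return cleaned-up lines. Filtering rules are assumptions.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

@app.route("/upload", methods=["POST"])
def upload_receipt():
    content = request.files["receipt"].read()
    response = client.text_detection(image=vision.Image(content=content))
    lines = response.full_text_annotation.text.splitlines()
    # Post-processing: drop obvious noise such as blank lines and separators.
    items = [line for line in lines if line.strip() and not set(line) <= {"-", "*", " "}]
    return jsonify({"items": items})

if __name__ == "__main__":
    app.run(port=5000)
```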
## Inspiration

Being frugal students, we all wanted to create an app that would tell us what kind of food we could find around us based on a budget that we set. And so that's exactly what we made!

## What it does

You give us a price that you want to spend and the radius that you are willing to walk or drive to a restaurant; then voila! We give you suggestions based on what you can get for that price at different restaurants, providing all the menu items with price and calculated tax and tips! We keep the user history (the food items they chose), and by doing so we open the door to crowdsourcing massive amounts of user data as well as the opportunity for machine learning, so that we can give better suggestions for the foods that the user likes the most! But we are not gonna stop here! Our goal is to implement the following in the future for this app:

* Connect the app to delivery systems to get the food for you!
* Inform you about the food deals, coupons, and discounts near you

## How we built it

### Back-end

We have both an iOS and Android app that authenticates users via Facebook OAuth and stores user eating history in the Firebase database. We also made a REST server that conducts API calls (using Docker, Python and nginx) to amalgamate data from our targeted APIs and refine it for front-end use.

### iOS

Authentication using Facebook's OAuth with Firebase. Create UI using native iOS UI elements. Send API calls to Soheil's backend server using JSON via HTTP. Using the Google Maps SDK to display geolocation information. Using Firebase to store user data in the cloud, with the capability of updating multiple devices in real time.

### Android

The Android application is implemented with a great deal of material design while utilizing Firebase for OAuth and database purposes. The application utilizes HTTP POST/GET requests to retrieve data from our in-house backend server and uses the Google Maps API and SDK to display nearby restaurant information. The Android application also prompts the user for a rating of the visited stores based on how full they are; our goal was to compile a system that would incentivize food places to produce the highest "food per dollar" rating possible.

## Challenges we ran into

### Back-end

* Finding APIs to get menu items is really hard, at least for Canada.
* An unknown API kept continuously pinging our server and used up a lot of our bandwidth.

### iOS

* First time using OAuth and Firebase
* Creating the tutorial page

### Android

* Implementing modern material design with deprecated/legacy Maps APIs and other various legacy code was a challenge
* Designing the Firebase schema and generating the structure for our API calls was very important

## Accomplishments that we're proud of

**A solid app for both Android and iOS that WORKS!**

### Back-end

* Dedicated server (VPS) on DigitalOcean!

### iOS

* Cool-looking iOS animations and real-time data updates
* Nicely working location features
* Getting the latest data from the server

## What we learned

### Back-end

* How to use Docker
* How to set up a VPS
* How to use nginx

### iOS

* How to use Firebase
* How OAuth works

### Android

* How to utilize modern Android layouts such as the Coordinator, Appbar, and Collapsible Toolbar Layout
* How to optimize applications when communicating with several different servers at once

## What's next for How Much

* If we get a chance, we all want to keep working on it and hopefully publish the app.
* We are thinking of making it open source so everyone can contribute to the app.
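The "price with calculated tax and tips" suggestion logic boils down to simple arithmetic; here is a small sketch of a budget filter, assuming example tax and tip rates of 13% and 15% (the real app would use local rates and live menu data).

```python
# Sketch of the budget filter: keep only menu items whose price plus tax and tip
# fits the user's budget. The 13% tax and 15% tip rates are example assumptions.
TAX_RATE = 0.13
TIP_RATE = 0.15

def total_cost(price):
    return round(price * (1 + TAX_RATE + TIP_RATE), 2)

def affordable_items(menu, budget):
    # menu: list of (restaurant, item, price) tuples from the aggregated APIs
    return [(r, item, total_cost(p)) for r, item, p in menu if total_cost(p) <= budget]

menu = [("Pizza Place", "Slice", 4.50), ("Sushi Bar", "Roll", 11.00), ("Cafe", "Sandwich", 7.25)]
for restaurant, item, total in affordable_items(menu, budget=10.00):
    print(f"{item} at {restaurant}: ${total:.2f} all-in")
```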
## Inspiration

As a group of avid travelers and adventurers, we all share the same problem: googling "places to visit" for hours, deliberating over reviews, etc. We really wished someone did all that for us. This inspired us to create WizeWay.AI, which creates a personalized itinerary using AI algorithms.

## What it does

WizeWay.AI takes in your inputs, such as the travelers' info, the destination, and individual preferences such as pet accommodations and dietary restrictions, and creates an itinerary based on your departure/return dates.

## How we built it

We utilized available generative AI APIs such as GPT 4.0.11 interfacing with the Taipy framework to create a web application in Python.

## Challenges we ran into

Using Taipy was fairly challenging, as the only help available on the internet is the Taipy documentation. We further ran into challenges with formatting using Taipy.

## Accomplishments that we're proud of

We created a web application that uses AI to make people's lives easier, which is what coding and artificial intelligence are all about.

## What we learned

We learned how to use the Taipy framework and the OpenAI API. We also learned proper practices for code documentation and GitHub collaboration. We furthered our full-stack development skills as we incorporated a bit of CSS into the front end.

## What's next for WizeWay.AI

We are hoping to continue the project by interfacing with Google Maps and adding location tracking, shortest path algorithms, public transit integration, account validation, offline support and much more.
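As an illustration of the generative AI step, here is a hedged sketch of how an itinerary prompt could be assembled and sent through the OpenAI Python client; the model name, prompt wording, and input fields are assumptions rather than WizeWay.AI's exact implementation.

```python
# Sketch of the itinerary request. Model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_itinerary(destination, start, end, preferences):
    prompt = (
        f"Plan a day-by-day itinerary for {destination} from {start} to {end}. "
        f"Traveler preferences: {', '.join(preferences)}. Include food options."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(build_itinerary("Lisbon", "2024-05-01", "2024-05-04", ["vegetarian", "pet-friendly"]))
```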
## Inspiration

Osu! players often use drawing tablets instead of a normal mouse and keyboard setup because a tablet gives more precision than a mouse can provide. These tablets also provide a better way to input to devices. Overuse of conventional keyboards and mice can lead to carpal tunnel syndrome, and they can be difficult to use for those that have specific disabilities. Tablet pens can provide an alternate form of HID, and have better ergonomics, reducing the risk of carpal tunnel. Digital artists usually draw on these digital input tablets, as mice do not provide the control over the input that artists need. However, tablets can often come at a high cost of entry, and are not easy to bring around.

## What it does

Limestone is an alternate form of tablet input, allowing you to input using a normal pen and using computer vision for the rest. That way, you can use any flat surface as your tablet.

## How we built it

Limestone is built on top of the neural network library MediaPipe from Google. MediaPipe Hands provides a pretrained network that returns the 3D position of 21 joints in any hands detected in a photo. This provides a lot of useful data, which we could probably use to find the direction each finger points in and derive the endpoint of the pen in the photo. To save myself some work, I created a second neural network that takes in the joint data from MediaPipe and derives the 2D endpoint of the pen. This second network is extremely simple, since all the complex image processing has already been done. I used 2 1D convolutional layers and 4 hidden dense layers for this second network. I was only able to create about 40 entries in a dataset after some experimentation with the formatting, but I found a way to generate fairly accurate datasets with some work.

## Dataset Creation

I created a small Python script that marks small dots on your screen for accurate spacing. I could then place my pen on the dot, take a photo, and enter the coordinate of the point as the label.

## Challenges we ran into

It took a while to tune the hyperparameters of the network. Fortunately, due to the small size it was not too hard to get it into a configuration that could train and improve. However, it doesn't perform as well as I would like it to, and due to time constraints I couldn't experiment further. The mean average error loss of the final model trained for 1000 epochs was around 0.0015. Unfortunately, the model was very overtrained. The dataset was nowhere near large enough. Adding noise probably could have helped to reduce overtraining, but I doubt by much. There just wasn't anywhere near enough data, but the framework is there.

## What's Next

If this project is to be continued, the model architecture would have to be tuned much more, and the dataset expanded to at least a few hundred entries. Adding noise would also definitely help with the variance of the dataset. There is still a lot of work to be done on Limestone, but the current code at least provides some structure and a proof of concept.
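For concreteness, here is a sketch of what the second, endpoint-regression network could look like in Keras: input is the 21 hand joints with (x, y, z) coordinates from MediaPipe, output is the 2D pen tip. Only the 2-Conv1D-plus-4-dense structure comes from the write-up above; the layer widths are assumptions.

```python
# Sketch of the small endpoint-regression network. Layer widths are assumptions;
# the 2 Conv1D + 4 Dense structure is from the description above.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(21, 3)),          # 21 joints, 3 coordinates each
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),                      # (x, y) pen endpoint on the page
])
model.compile(optimizer="adam", loss="mae")        # loss comparable to the error reported above
model.summary()
# model.fit(joint_batches, endpoint_labels, epochs=1000) would mirror the training run.
```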
Fujifusion is our group's submission for Hack MIT 2018. It is a data-driven application for predicting corporate credit ratings.

## Problem

Scholars and regulators generally agree that credit rating agency failures were at the center of the 2007-08 global financial crisis.

## Solution

* Train a machine learning model to automate the prediction of corporate credit ratings.
* Compare vendor ratings with predicted ratings to identify discrepancies.
* Present this information in a cross-platform application for RBC's traders and clients.

## Data

Data obtained from RBC Capital Markets consists of 20 features recorded for 27 companies at multiple points in time for a total of 524 samples. Available at <https://github.com/em3057/RBC_CM>

## Analysis

We took two approaches to analyzing the data: a supervised approach to predict corporate credit ratings and an unsupervised approach to try to cluster companies into scoring groups.

## Product

We present a cross-platform application built using Ionic that works with Android, iOS, and PCs. Our platform allows users to view their investments, our predicted credit rating for each company, a vendor rating for each company, and visual cues to outline discrepancies. They can buy and sell stock through our app, while also exploring other companies they would potentially be interested in investing in.
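A minimal sketch of the supervised approach described above: train a classifier on the 20 recorded features and compare its predictions against vendor ratings to flag discrepancies. The feature and label names are placeholders for the dataset's actual columns.

```python
# Sketch of the supervised approach: predict ratings from the 20 features, then
# flag samples where the model disagrees with the vendor rating.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("rbc_samples.csv")                 # 524 samples, 20 features + rating
X, y = df.drop(columns=["rating"]), df["rating"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# Companies where the model disagrees with the vendor rating are worth reviewing.
disagreements = X_test[model.predict(X_test) != y_test]
print(f"{len(disagreements)} potential discrepancies to review")
```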
## Inspiration

Self-driving cars seem to be the focus of the cutting-edge industry. Although there have been many self-driving cars (such as Tesla's), none of them have been ported to the cloud to allow for modularity and availability to everyone. Perhaps self-driving can be served just as IaaS, PaaS, and SaaS are. This was also very much inspired by our long-living hero, @elonmusk, who was, unfortunately, unable to attend this year's McHacks.

## What it does

SDaaS is a cloud provider for serving steering instructions to self-driving cars from camera images.

## How We Built It

The integral component of our project is an N-series GPU-enabled VM hosted on Microsoft Azure. This allowed us to efficiently train a convolutional neural network (identical to Nvidia's End to End Learning) to control our project. To show the extensibility of our API, we used an open source car simulator called The Open Racing Car Simulator (TORCS) that interfaced with the backend that we had created before. The backend is a Python socket server that processes calls and replies to image frames with steering angles.

## Challenges we ran into

Being inexperienced with C++, many of our hours were spent looking through countless pages of documentation and Stack Overflow forums to fix simple bugs. Setting up sockets, along with a connection from the C++ code, proved to be very difficult.

## Accomplishments that we're proud of

We managed to set up almost all of the features that we had proposed in the beginning.

## What's next for SDaaS - Self Driving As A Service

Since we only had a virtual simulator for testing purposes, perhaps next time we may use a real car.
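The backend described in "How We Built It" can be sketched as a plain TCP socket server that reads an image frame and replies with a steering angle; the 4-byte length-prefix framing and the predict_angle placeholder are assumptions, not the actual protocol.

```python
# Sketch of the steering backend: receive a frame over TCP, reply with an angle.
import socket
import struct

def predict_angle(frame_bytes):
    # Placeholder for the CNN inference (Nvidia end-to-end style model).
    return 0.05  # radians

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen(1)

while True:
    conn, _ = server.accept()
    with conn:
        header = conn.recv(4)
        if len(header) < 4:
            continue
        (length,) = struct.unpack("!I", header)      # frame size sent first
        frame = b""
        while len(frame) < length:
            frame += conn.recv(length - len(frame))
        angle = predict_angle(frame)
        conn.sendall(struct.pack("!f", angle))       # reply with the steering angle
```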
## Inspiration

We've all had the experience of needing assistance with a task but not having friends available to help. As a last resort, one has to turn to large, class-wide GroupMe's to see if anyone can help. But most students have those chats muted because they're filled with a lot of spam. As a result, the most desperate calls for help often go unanswered. We realized that we needed to streamline the process for getting help. So, we decided to build an app to do just that. For every Yalie who needs help, there are a hundred who are willing to offer it, but they just usually aren't connected. So, we decided to build YHelpUs, with a mission to help every Yalie get better help.

## What it does

YHelpUs provides a space for students that need something to create postings, rather than having those that have something to sell create them. This reverses the roles of a traditional marketplace and allows for more personalized assistance. University students can sign up with their school email accounts and then view other students' posts for help as well as create their own posts. Users can access a chat for each posting to discuss details about the author's needs. In the future, more features relating to task assignment will be implemented.

## How we built it

Hoping to improve our skills as developers, we decided to carry out the app's development with the MERNN stack; although we had some familiarity with standard MERN, developing for mobile with React Native was a unique challenge for us all. Throughout the entire development phase, we had to balance what we wanted to provide the user and how these relationships could present themselves in our code. In the end, we managed to deliver on all the basic functionalities required to answer our initial problem.

## Challenges we ran into

The most notable challenge we faced was the migration towards React Native. Although plenty of documentation exists for the framework, many of the errors we faced were specific enough to force development to stop for a prolonged period of time. From handling multi-layered navigation to user authentication across all our views, we encountered problems we couldn't have expected when we began the project, but every solution we created simply made us more prepared for the next.

## Accomplishments that we're proud of

Enhancing our product with automated content moderation using the Google Cloud Natural Language API. Also, our side quest developing a simple matching algorithm for LightBox.

## What we learned

We learned new frameworks (MERNN) and how to use the Google Cloud API.

## What's next for YHelpUs

Better filtering options and a more streamlined UI. We also want to complete the accepted posts feature and enhance security for users of YHelpUs.
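One plausible way to run the automated content moderation mentioned above with the Google Cloud Natural Language API is to score a posting's sentiment and hold very negative posts for review; this sketch is an assumption about the approach rather than our exact call, and the threshold is illustrative.

```python
# Sketch of a moderation check: very negative postings get held for review.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def should_hold_for_review(post_text, threshold=-0.6):
    document = language_v1.Document(
        content=post_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    return sentiment.score < threshold

print(should_hold_for_review("Can anyone help me move a couch on Saturday?"))
```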
## Inspiration As university students, we have been noticing issues with very large class sizes. With lectures often being taught to over 400 students, it becomes very difficult and anxiety-provoking to speak up when you don't understand the content. As well, with classes of this size, professors do not have time to answer every student who raises their hand. This raises the problem of professors not being able to tell if students are following the lecture, and not answering questions efficiently. Our hack addresses these issues by providing a real-time communication environment between the class and the professor. KeepUp has the potential to increase classroom efficiency and improve student experiences worldwide. ## What it does KeepUp allows the professor to gauge the understanding of the material in real-time while providing students a platform to pose questions. It allows students to upvote questions asked by their peers that they would like to hear answered, making it easy for a professor to know which questions to prioritize. ## How We built it KeepUp was built using JavaScript and Firebase, which provided hosting for our web app and the backend database. ## Challenges We ran into As it was, for all of us, our first time working with a firebase database, we encountered some difficulties when it came to pulling data out of the firebase. It took a lot of work to finally get this part of the hack working which unfortunately took time away from implementing some other features (See what’s next section). But it was very rewarding to have a working backend in Firebase and we are glad we worked to overcome the challenge. ## Accomplishments that We are proud of We are proud of creating a useful app that helps solve a problem that affects all of us. We recognized that there is a gap in between students and teachers when it comes to communication and question answering and we were able to implement a solution. We are proud of our product and its future potential and scalability. ## What We learned We all learned a lot throughout the implementation of KeepUp. First and foremost, we got the chance to learn how to use Firebase for hosting a website and interacting with the backend database. This will prove useful to all of us in future projects. We also further developed our skills in web design. ## What's next for KeepUp * There are several features we would like to add to KeepUp to make it more efficient in classrooms: * Add a timeout feature so that questions disappear after 10 minutes of inactivity (10 minutes of not being upvoted) * Adding a widget feature so that the basic information from the website can be seen in the corner of your screen at all time * Adding Login for users for more specific individual functions. For example, a teacher can remove answered questions, or the original poster can mark their question as answered. * Censoring of questions as they are posted, so nothing inappropriate gets through.
## Inspiration Being Asian-Canadian, we had parents who had to immigrate to Canada. As newcomers, adjusting to a new way of life was scary and difficult. Our parents had very few physical possessions and assets, meaning they had to buy everything from winter clothes to pots and pans. Ensuring we didn't miss any sales to maximize savings made a HUGE difference. That's why we created Shop Buddy - an easy-to-use and convenient tool for people to keep an eye out for opportunities to save money without needing to monitor constantly. This means that people can focus on their other tasks AND know when to get their shopping done. ## What it does Shop Buddy allows users to input links to products they are interested in and what strike price they want to wait for. When the price hits their desired price point, Shop Buddy will send a text to the user's cell phone, notifying them of the price point. Furthermore, to save even MORE time, users can directly purchase the product by simply replying to the text message alert. Since security and transparency are a huge deal these days - especially with retail and e-commerce - we implemented a blockchain where all approved transactions are recorded for full transparency and security. ## How we built it The user submission form is built on a website using HTML/CSS/Javascript. All forms submissions are sent through requests to the Python Backend served via a Flask REST API. When a new alert is submitted via the user, the user is messaged via SMS using the Twilio API. If the user replies to a notification on their phone to instantly purchase the product, the transaction is performed with the Python Chrome Web Driver and then the transaction is recorded on the Shop Buddy Blockchain. ## Challenges we ran into The major challenge we faced was connecting the backend to the frontend. We worked with the mentors to help us submit POST and GET requests. Another challenge was testing. Websites have automatic bot detection, so when we tested our code to check prices and purchase items, we were warned by several sites that bots were detected. To overcome this challenge, we coded mock online retailer webpages, that would allow us to test our code. ## Accomplishments that we're proud of We're proud of completing this project in a group of 2! We both expanded our skillsets to complete Shop Buddy. We are proud of our idea as we believe it can help people be more productive and help newcomers to Canada. ## What we learned We wanted to learn something new, and we both did. Sheridan learned how to code in JavaScript and do Post and Get requests, and Ben learned how to use Blockchain and code a bot to buy items. Overall, we are very happy to see this project come together. ## What's next for Shop Buddy Currently, our product can only be used to purchase items from select retailers. We plan to expand our retail customer list as we get used to working with different websites. Shop Buddy's goal is to help those in need and those who want to be more productive. We would focus on companies catering to a wider audience range to meet these goals.
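A minimal sketch of the alert loop described above: check a tracked product's price and text the user through Twilio when it reaches the strike price. The get_price stub stands in for the real scraping/web-driver step, and the credentials and numbers are placeholders.

```python
# Sketch of the price-watch loop with a Twilio SMS alert. get_price is a stub.
import time
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")
TWILIO_NUMBER = "+15550000000"

def get_price(url):
    return 74.99  # stub: the real version fetches and parses the product page

def watch(url, strike_price, user_number):
    while True:
        price = get_price(url)
        if price <= strike_price:
            client.messages.create(
                body=f"Shop Buddy: price hit ${price:.2f}. Reply BUY to purchase.",
                from_=TWILIO_NUMBER,
                to=user_number,
            )
            return
        time.sleep(3600)  # re-check hourly

watch("https://example.com/product", strike_price=75.00, user_number="+15551234567")
```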
## 💡 INSPIRATION 💡 Many students have **poor spending habits** and losing track of one's finances may cause **unnecessary stress**. As university students ourselves, we're often plagued with financial struggles. As young adults down on our luck, we often look to open up a credit card or take out student loans to help support ourselves. However, we're deterred from loans because they normally involve phoning automatic call centers which are robotic and impersonal. We also don't know why or what to do when we've been rejected from loans. Many of us weren't taught how to plan our finances properly and we frequently find it difficult to keep track of our spending habits. To address this problem troubling our generation, we decided to create AvaAssist! The goal of the app is to **provide a welcoming place where you can seek financial advice and plan for your future.** ## ⚙️ WHAT IT DOES ⚙️ **AvaAssist is a financial advisor built to support young adults and students.** Ava can provide loan evaluation, financial planning, and monthly spending breakdowns. If you ever need banking advice, Ava's got your back! ## 🔎RESEARCH🔍 ### 🧠UX Research🧠 To discover the pain points of existing banking practices, we interviewed 2 and surveyed 7 participants on their current customer experience and behaviors. The results guided us in defining a major problem area and the insights collected contributed to discovering our final solution. ### 💸Loan Research💸 To properly predict whether a loan would be approved or not, we researched what goes into the loan approval process. The resulting research guided us towards ensuring that each loan was profitable and didn't take on too much risk for the bank. ## 🛠️ HOW WE BUILT IT🛠️ ### ✏️UI/UX Design✏️ ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911782991204876348/Loan_Amount.gif) Figma was used to create a design prototype. The prototype was designed in accordance with Voice UI (VUI) design principles & Material design as a base. This expedited us to the next stage of development because the programmers had visual guidance in developing the app. With the use of Dasha.AI, we were able to create an intuitive user experience in supporting customers through natural dialog via the chatbot, and a friendly interface with the use of an AR avatar. Check out our figma [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=206%3A3694&scaling=min-zoom&page-id=206%3A3644&starting-point-node-id=206%3A3694&show-proto-sidebar=1) Check out our presentation [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=61%3A250&scaling=min-zoom&page-id=2%3A2) ### 📈Predictive Modeling📈 The final iteration of each model has a **test prediction accuracy of +85%!** ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911592566829486120/unknown.png) We only got to this point because of our due diligence, preprocessing, and feature engineering. After coming up with our project, we began thinking about and researching HOW banks evaluate loans. Loan evaluation at banks is extremely complex and we tried to capture some aspects of it in our model. We came up with one major aspect to focus on during preprocessing and while searching for our datasets, profitability. There would be no point for banks to take on a loan if it weren't profitable. We found a couple of databases with credit card and loan data on Kaggle. The datasets were smaller than desired. 
We had to be very careful during preprocessing when deciding what data to remove and how to fill NULL values to preserve as much data as possible. Feature engineering was certainly the most painstaking part of building the prediction model. One of the most important features we added was the Risk Free Rate (CORRA). The Risk Free Rate is the rate of return of an investment with no risk of loss. It helped with the engineering process of another feature, min\_loan, which is the minimum amount of money that the bank can make with no risk of loss. Min\_loan would ultimately help our model understand which loans are profitable and which aren't. As a result, the model learned to decline unprofitable loans. ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911981729168887948/unknown.png) We also did market research on the average interest rate of specific types of loans to make assumptions about certain features to supplement our lack of data. For example, we used the average credit card loan interest rate of 22%. The culmination of newly engineered features and the already existing data resulted in our complex, high accuracy models. We have a model for Conventional Loans, Credit Card Loans, and Student Loans. The model we used was RandomForests from sklearn because of its wide variety of hyperparameters and robustness. It was fine-tuned using gridsearchCV to find its best hyperparameters. We designed a pipeline for each model using Pipeline, OneHotEncoder, StandardScaler, FunctionTransformer, GradientBoostingClassifier, and RandomForestClassifier from sklearn. Finally, the models were saved as pickle files for front-end deployment. ### 🚀Frontend Deployment🚀 Working on the frontend was a very big challenge. Since we didn't have a dedicated or experienced frontend developer, there was a lot of work and learning to be done. Additionally, a lot of ideas had to be cut from our final product as well. First, we had to design the frontend with React Native, using our UI/UX Designer's layout. For this we decided to use Figma, and we were able to dynamically update our design to keep up with any changes that were made. Next, we decided to tackle hooking up the machine learning models to React with Flask. Having Typescript communicate with Python was difficult. Thanks to these libraries and a lot of work, we were able to route requests from the frontend to the backend, and vice versa. This way, we could send the values that our user inputs on the frontend to be processed by the ML models, and have them give an accurate result. Finally, we took on the challenge of learning how to use Dasha.AI and integrating it with the frontend. Learning how to use DashaScript (Dasha.AI's custom programming language) took time, but eventually, we started getting the hang of it, and everything was looking good! ## 😣 CHALLENGES WE RAN INTO 😣 * Our teammate, Abdullah, who is no longer on our team, had family issues come up and was no longer able to attend HackWestern unfortunately. This forced us to get creative when deciding a plan of action to execute our ambitious project. We needed to **redistribute roles, change schedules, look for a new teammate, but most importantly, learn EVEN MORE NEW SKILLS and adapt our project to our changing team.** As a team, we had to go through our ideation phase again to decide what would and wouldn't be viable for our project. We ultimately decided to not use Dialogflow for our project. 
However, this was a blessing in disguise because it allowed us to hone in on other aspects of our project such as finding good data to enhance user experience and designing a user interface for our target market. * The programmers had to learn DashaScript on the fly which was a challenge as we normally code with OOP’s. But, with help from mentors and workshops, we were able to understand the language and implement it into our project * Combining the frontend and backend processes proved to be very troublesome because the chatbot needed to get user data and relay it to the model. We eventually used react-native to store the inputs across instances/files. * The entire team has very little experience and understanding of the finance world, it was both difficult and fun to research different financial models that banks use to evaluate loans. * We had initial problems designing a UI centered around a chatbot/machine learning model because we couldn't figure out a user flow that incorporated all of our desired UX aspects. * Finding good data to train the prediction models off of was very tedious, even though there are some Kaggle datasets there were few to none that were large enough for our purposes. The majority of the datasets were missing information and good datasets were hidden behind paywalls. It was for this reason that couldn't make a predictive model for mortgages. To overcome this, I had to combine datasets/feature engineer to get a useable dataset. ## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉 * Our time management was impeccable, we are all very proud of ourselves since we were able to build an entire app with a chat bot and prediction system within 36 hours * Organization within the team was perfect, we were all able to contribute and help each other when needed; ex. the UX/UI design in figma paved the way for our front end developer * Super proud of how we were able to overcome missing a teammate and build an amazing project! * We are happy to empower people during their financial journey and provide them with a welcoming source to gain new financial skills and knowledge * Learning and implementing DashaAi was a BLAST and we're proud that we could learn this new and very useful technology. We couldn't have done it without mentor help, 📣shout out to Arthur and Sreekaran📣 for providing us with such great support. * This was a SUPER amazing project! We're all proud to have done it in such a short period of time, everyone is new to the hackathon scene and are still eager to learn new technologies ## 📚 WHAT WE LEARNED 📚 * DashaAi is a brand new technology we learned from the DashaAi workshop. We wanted to try and implement it in our project. We needed a handful of mentor sessions to figure out how to respond to inputs properly, but we're happy we learned it! * React-native is a framework our team utilized to its fullest, but it had its learning curve. We learned how to make asynchronous calls to integrate our backend with our frontend. * Understanding how to take the work of the UX/UI designer and apply it dynamically was important because of the numerous design changes we had throughout the weekend. * How to use REST APIs to predict an output with flask using the models we designed was an amazing skill that we learned * We were super happy that we took the time to learn Expo-cli because of how efficient it is, we could check how our mobile app would look on our phones immediately. 
* First time using AR models in Animaze, it took some time to understand, but it ultimately proved to be a great tool! ## ⏭️WHAT'S NEXT FOR AvaAssist⏭️ AvaAssist has a lot to do before it can be deployed as a genuine app. It will only be successful if the customer is satisfied and benefits from using it, otherwise, it will be a failure. Our next steps are to implement more features for the user experience. For starters, we want to implement Dialogflow back into our idea. Dialogflow would be able to understand the intent behind conversations and the messages it exchanges with the user. The long-term prospect of this would be that we could implement more functions for Ava. In the future Ava could be making investments for the user, moving money between personal bank accounts, setting up automatic/making payments, and much more. Finally, we also hope to create more tabs within the AvaAssist app where the user can see their bank account history and its breakdown, user spending over time, and a financial planner where users can set intervals to put aside/invest their money. ## 🎁 ABOUT THE TEAM🎁 Yifan is a 3rd year interactive design student at Sheridan College, currently interning at SAP. With experience in designing for social startups and B2B software, she is interested in expanding her repertoire in designing for emerging technologies and healthcare. You can connect with her at her [LinkedIn](https://www.linkedin.com/in/yifan-design/) or view her [Portfolio](https://yifan.design/) Alan is a 2nd year computer science student at the University of Calgary. He's has a wide variety of technical skills in frontend and backend development! Moreover, he has a strong passion for both data science and app development. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/alanayy/) Matthew is a 2nd year student at Simon Fraser University studying computer science. He has formal training in data science. He's interested in learning new and honing his current frontend skills/technologies. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/) **📣📣 SHOUT OUT TO ABDULLAH FOR HELPING US THROUGH IDEATION📣📣** You can still connect with Abdullah at his [LinkedIn](https://www.linkedin.com/in/abdullah-sahapdeen/) He's super passionate about reactJS and wants to learn more about machine learning and AI! ### 🥳🎉 THANK YOU UW FOR HOSTING HACKWESTERN🥳🎉
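Circling back to the Predictive Modeling section above, here is a minimal sketch of the kind of sklearn pipeline it describes (OneHotEncoder and StandardScaler feeding a RandomForest, tuned with GridSearchCV and pickled for the Flask API); the column names and parameter grid are assumptions, not our actual configuration.

```python
# Sketch of the modeling flow: preprocess, fit a RandomForest in a Pipeline,
# tune with GridSearchCV, and pickle the best estimator for the Flask endpoint.
import pickle
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("credit_card_loans.csv")
X, y = df.drop(columns=["approved"]), df["approved"]

preprocess = ColumnTransformer([
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["loan_type", "employment"]),
    ("numeric", StandardScaler(), ["income", "loan_amount", "min_loan", "risk_free_rate"]),
])
pipeline = Pipeline([("prep", preprocess), ("clf", RandomForestClassifier(random_state=0))])

search = GridSearchCV(pipeline, {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 10]}, cv=5)
search.fit(X, y)

with open("credit_card_model.pkl", "wb") as f:
    pickle.dump(search.best_estimator_, f)   # loaded later by the Flask endpoint
```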
## Inspiration

Our inspiration came from the annoying number of times we have had to take out a calculator after a meal with friends and figure out how much to pay each other, make sure we have a common payment method (Venmo, Zelle), and remember if we paid each other back or not a week later. To answer this problem we came up with Split, which can easily divide our expenses for us, organize the amounts we owe friends, and handle payments without a common platform, all in one place.

## What it does

This application allows someone to record a value that someone owes them or that they owe someone and keep it organized. When entering an amount owed, you can also split an entire amount with multiple individuals, which will be reflected in the amount owed to each person. Additionally, you are able to clear your debts and make payments through the built-in Checkbook service, which allows you to pay someone given just their name, phone number, and the amount.

## How we built it

We built this project using HTML, CSS, Python, and SQL, implemented with Flask. Alongside these different languages, we utilized the Checkbook API to streamline the payment process.

## Challenges we ran into

Some challenges we ran into were not knowing how to implement new parts of web development. We had difficulty integrating the API we used, "Checkbook", into the Python backend of our website. We had no experience with APIs, so implementing this was a challenge that took some time to resolve. Another challenge that we ran into was coming up with ideas that were more complex than we could design. During the brainstorming phase we had many ideas of what would be impactful projects but were left with the issue of not knowing how to put them into code, so brainstorming, planning, and getting an attainable solution down was another challenge.

## Accomplishments that we're proud of

We were able to create a fully functioning, ready-to-use product with no prior experience with software engineering and very limited exposure to web dev.

## What we learned

One thing we learned from this project was that communication is the most important thing in the starting phase of a project. While brainstorming, we had different ideas that we would agree on, start, and then reconsider, which led to a loss of time. After completing this project we found that communicating what we could do and committing to that idea would have been the most productive decision toward making a great project. To complement that, we also learned to play to our strengths in the building of this project. In addition, we learned how to best structure databases in SQL to achieve our intended goals, and we learned how to implement APIs.

## What's next for Split

The next step for Split would be to move to a mobile application. Doing this would allow users to use this convenient application in a dedicated app instead of a browser. Right now the app is fully supported for a mobile phone screen, and thus users on iPhone can also use the "save to Home Screen" feature to use it effectively as an app while we create a dedicated one. Another feature that could be added to this application is bill scanning using a mobile camera to quickly split and organize payments. In addition, the app could be reframed as a social platform with a messenger and friend system.
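The even-split behaviour described in "What it does" comes down to a little integer arithmetic; here is a sketch that works in cents so the shares always add back up to the total. Distributing leftover cents to the first few participants is an assumption about rounding, not necessarily how Split does it.

```python
# Sketch of even splitting: divide an amount among participants and record what
# each non-payer owes the payer, keeping everything in integer cents.
def split_evenly(total_cents, payer, participants):
    share, remainder = divmod(total_cents, len(participants))
    debts = {}
    for i, person in enumerate(participants):
        owed = share + (1 if i < remainder else 0)   # spread leftover cents
        if person != payer:
            debts[person] = owed
    return debts

# $62.50 dinner paid by Alice, split among three people.
print(split_evenly(6250, "Alice", ["Alice", "Bob", "Carol"]))
# {'Bob': 2083, 'Carol': 2083}  (amounts in cents; Alice's own share absorbs the extra cent)
```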
## Inspiration

We were looking at the Apple Magic Trackpads last week since they seemed pretty cool. But then we saw the price tag: $130! That's crazy! So we set out to create a college-student-budget-friendly "magic" trackpad.

## What it does

Papyr is a trackpad for your computer that is just a single sheet of paper, with no wires, strings, or pressure-detecting devices attached. Papyr allows you to browse the computer just like any other trackpad and supports clicking and scrolling.

## How we built it

We use a webcam and a whole lot of computer vision to make the magic happen. The webcam first calibrates itself by detecting the four corners of the paper and maps every point on the sheet to a location on the actual screen. Our program then tracks the finger on the sheet by analyzing the video stream in real time, frame by frame: blurring, thresholding, performing Canny edge detection, then detecting the contours in the final result. The furthest point in the hand's contour corresponds to the user's fingertip and is translated into both movement and actions on the computer screen. Clicking is detailed in the next section, while scrolling is activated by double clicking.

## Challenges we ran into

Light sensitivity proved to be very challenging since, depending on the environment, the webcam would sometimes have trouble tracking our fingers. However, finding a way to detect clicking was by far the most difficult part of the project. The problem is that the webcam has no sense of depth perception: it sees each frame as a 2D image, and as a result there is no way to detect if your hand is on or off the paper. We turned to the Internet hoping for some previous work that would guide us in the right direction, but everything we found required either glass panels, infrared sensors, or other non-college-student-budget-friendly hardware. We were on our own. We made many attempts, including having the user press down very hard on the paper so that their skin would turn white and detecting this change of color, and tracking the shadow the user's finger makes on the paper to detect when the shadow disappears, which occurs when the user places their finger on the paper. None of these methods proved fruitful, so we sat down and for the better part of 5 hours thought about how to solve this issue. Finally, what worked for us was to track the "hand pixel" changes across several frames to detect a valid sequence that can qualify as a "click". Given the 2D image perception of our webcam, it was no easy task, and there was a lot of experimentation that went into this.

## Accomplishments that we're proud of

We are extremely proud of getting clicking to work. It was no easy feat. We also developed our own algorithms for fingertip tracking and click detection and wrote the code from scratch. We set out to create a cheap trackpad and we were able to. In the end we transformed a piece of paper, something that is portable and available nearly anywhere, into a makeshift high-tech device with only the help of a standard webcam. Also, one of the team members was able to win a ranked game of Hearthstone using a piece of paper, so that was cool (not the match shown in the video).

## What we learned

From normalizing the environment's lighting and getting rid of surrounding noise to coming up with the algorithm to provide depth perception to a 2D camera, this project taught us a great deal about computer vision.
We also learned about efficiency and scalability, since numerous calculations need to be made each second to analyze each frame and everything going on in it. ## What's next for Papyr - A Paper TrackPad We would like to improve the accuracy and stability of Papyr. This would allow Papyr to function as a very cheap replacement for Wacom digital tablets. Papyr already supports various "pointers" such as fingers or pens.
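For readers curious what the blur-threshold-Canny-contour pipeline described above might look like in code, here is a minimal OpenCV sketch. The threshold values, the choice of the largest contour as the hand, and the use of the topmost contour point as the "furthest point" fingertip are all assumptions for illustration, not Papyr's actual implementation.

```python
import cv2

def find_fingertip(frame):
    """Rough sketch of the described pipeline: blur, threshold, edge-detect,
    find the hand contour, then treat its extreme point as the fingertip.
    Threshold values below are placeholders."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    _, thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    edges = cv2.Canny(thresh, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)      # assume the largest contour is the hand
    tip = tuple(hand[hand[:, :, 1].argmin()][0])   # topmost contour point as the fingertip
    return tip  # (x, y) in frame coordinates, to be mapped onto screen coordinates

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(find_fingertip(frame))
cap.release()
```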
losing
# DeezSquats: Break PRs, not spines! 💪 Tired of feeling stuck? 🏋️‍♀️ Ready to take control of your health and fitness without the risk of injury? **DeezSquats** is your personalized fitness coach, designed to make exercise safe, enjoyable, and accessible for everyone. ❌ No more guesswork, no more fear. ✅ Our real-time feedback system ensures you're doing things right, every step of the way. Whether you're a seasoned athlete or just starting out, **DeezSquats** empowers you to move confidently and feel great. ## How it works: ``` 1. Personalized Training: Get tailored exercise plans based on your goals and fitness level. 2. Real-time Feedback: Our AI analyzes your form and provides instant guidance to prevent injuries and maximize results. 3. Accessible Fitness: Enjoy professional-quality training right from your phone, anytime, anywhere. 4. Data analyst approach to training: get all your fancy graphs on whatever statistics you want ``` Join a community of like-minded individuals. Together, we'll create a healthier, more vibrant world, one squat at a time. Are you ready to transform your fitness journey? Let's get started today! ## Key Features: ``` 1. AI-powered feedback ✅ 2. Personalized training plans 💪 3. User-friendly interface 📱 4. Community support 🤗 ``` ## What we've achieved: ``` 1. Created a unique solution to promote safe and effective exercise 🎉 2. Mitigated the risks of improper form and injury ❌ 3. Implemented state-of-the-art technology for a seamless user experience 🤖 ``` ## What's next for **DeezSquats**: ``` 1. Expanding our exercise library 🏋️‍♂️ 2. Introducing video feedback for enhanced guidance 🎥 3. Enhancing our virtual gym buddy experience 👯‍♀️ ``` Ready to take the next step? Join **DeezSquats** and experience the future of fitness. ## Authors Alankrit Verma Borys Łangowicz Adibvafa Fallahpour
**Come check out our fun Demo near the Google Cloud Booth in the West Atrium!! Could you use a physiotherapy exercise?** ## The problem Physiotherapy is often needed when joint movement becomes limited through muscle atrophy, surgery, accidents, stroke, or other causes. Reportedly, up to 70% of patients give up physiotherapy too early — often because they cannot see the progress. Automated tracking of range of motion (ROM) via a mobile app could help patients reach their physiotherapy goals. Insurance studies have shown that 70% of people quit physiotherapy sessions once the pain disappears and they regain their mobility. The reasons are many; a few of them are the cost of treatment, the feeling of having recovered, no more time to dedicate to recovery, and loss of motivation. The worst part is that half of them see the injury reappear within 2-3 years. Current pose-tracking technology is NOT real-time and automatic, requiring physiotherapists on hand and **expensive** tracking devices. Although these work well, there is HUGE room for improvement in developing a cheap and scalable solution. Additionally, many seniors are unable to comprehend current solutions and are unable to adapt to current in-home technology, let alone the kinds of tech that require hours of professional setup and guidance, as well as expensive equipment. [![IMAGE ALT TEXT HERE](https://res.cloudinary.com/devpost/image/fetch/s--GBtdEkw5--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://img.youtube.com/vi/PrbmBMehYx0/0.jpg)](http://www.youtube.com/watch?feature=player_embedded&v=PrbmBMehYx0) ## Our Solution! * Our solution **only requires a device with a working internet connection!!** We aim to revolutionize the physiotherapy industry by allowing for extensive scaling and efficiency of physiotherapy clinics and businesses. We understand that in many areas the therapist-to-patient ratio may be too high to be profitable, reducing quality and range of service for everyone, so an app to do this remotely is revolutionary. Using a machine learning model implemented directly in the browser, we collect real-time 3D position data of the patient's body while they exercise; the data is first analyzed within the app and then provided to a physiotherapist, who can adjust the exercises and analyze it further. The app also asks the patient for subjective feedback on a pain scale. This makes physiotherapy exercise feedback from a therapist more accessible to remote individuals **WORLDWIDE**. ## Inspiration * The growing need for accessible physiotherapy among seniors, stroke patients, and individuals in third-world countries without access to therapists but with a stable internet connection * The room for AI and ML innovation within the physiotherapy market for scaling and growth ## How I built it * Firebase hosting * Google Cloud services * React front-end * TensorFlow PoseNet ML model for computer vision * Several algorithms to analyze 3D pose data, one of which is sketched below.
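A typical building block for this kind of ROM analysis is computing the angle at a joint from three pose keypoints. The sketch below is a generic version of that calculation; the keypoint coordinates are hypothetical and this is not the app's exact code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint `b` (in degrees) formed by keypoints a-b-c.
    Works for 2-D or 3-D coordinates."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical shoulder-elbow-wrist keypoints from a pose model
shoulder, elbow, wrist = (0.10, 0.90, 0.0), (0.10, 0.60, 0.0), (0.35, 0.45, 0.0)
print(round(joint_angle(shoulder, elbow, wrist), 1))  # elbow flexion angle
```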
## Challenges I ran into * Testing in React Native * Getting accurate angle data * Setting up an accurate timer * Setting up the ML model to work with the camera using React ## Accomplishments that I'm proud of * Getting real-time 3D position data * Supporting multiple exercises * Collection of objective quantitative as well as qualitative subjective data from the patient for the therapist * Increasing the usability for senior patients by moving data analysis onto the therapist's side * **finishing this within 48 hours!!!!** We did NOT think we could do it, but we came up with a working MVP!!! ## What I learned * How to implement Tensorflow models in React * Creating reusable components and styling in React * Creating algorithms to analyze 3D space ## What's next for Physio-Space * Implementing the sharing of the collected 3D position data with the therapist * Adding a dashboard onto the therapist's side
## Inspiration We visit many places, yet we know very little about the historic events or historic places around us. Today In History notifies you of historic places near you so that you do not miss them. ## What it does Today In History notifies you about important events that took place on today's date a number of years ago in history. It also tells you about the historical places around you, along with the distance and directions. Today In History is also available as an Amazon Alexa skill, so you can always ask things like "Alexa, ask Today In History what's historic around me?", "What happened today?", or "What happened today in India?" ## How we built it We have two data sources: one is Wikipedia, from which we pull all the events for the date and filter them based on the user's location. We use data from Philadelphia to fetch the historic places nearest to the user's location, and we used the MapQuest libraries to give directions in real time. ## Challenges we ran into Alexa does not know a person's location beyond the address it is registered with, so we built a novel backend that acts as a bridge between the web app and Alexa to keep them synchronized with the user's location.
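One plausible way to pull "what happened on this date" data in Python is the Wikimedia "On this day" REST feed, sketched below. The endpoint, response shape, and formatting here are assumptions about one possible approach; the project's actual Wikipedia scraping and location filtering may differ.

```python
import datetime
import requests

def events_today(limit=5):
    """Fetch 'on this day' events from the Wikimedia REST feed.
    The URL and JSON layout are an assumption and should be verified."""
    today = datetime.date.today()
    url = (f"https://en.wikipedia.org/api/rest_v1/feed/onthisday/"
           f"events/{today.month:02d}/{today.day:02d}")
    resp = requests.get(url, headers={"User-Agent": "TodayInHistory-demo"}, timeout=10)
    resp.raise_for_status()
    events = resp.json().get("events", [])
    return [f"{e.get('year')}: {e.get('text')}" for e in events[:limit]]

if __name__ == "__main__":
    for line in events_today():
        print(line)
```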
partial
## Inspiration With the current pandemic and quarantining measures, many individuals feel like their life is passing them by. Restrictions for personal safety and health have hindered people from seeing their extended family and friends, and even prevented them from going to their favorite places and doing things they like. Being students in college, we’ve seen firsthand how our peers feel like the years that are supposed to be filled with self-discovery, active learning, and great memories are feeling meaningless, unmotivated, and lonely. It’s no surprise that mental health has been negatively impacted. However, our team at Cross Off is full of optimists. We wanted to create something that showed our quarantining friends and family that they can still enjoy their time and find purpose during these months, safely. ## What it does Essentially, Cross Off is a bucket list web app. Based on your location and keywords in your bucket list items, Cross Off will suggest local businesses to complete your items while advising you how to complete it safely. Cross Off also highlights black-owned businesses, so you can further support your community while squashing FOMO! The best way to try something new is to experience it with someone else. That’s why for each bucket list item you have, Cross Off will show you friends, family, and (if you are open to it) other people from your community who have that item in their list as well. Start a conversation to make plans and experience something new together. Not sure what to do? No worries, Cross Off has you covered. We show you some of the most popular bucket list items in your area so you are never drawing a blank. ### Example You’ve always wanted to go skydiving, so you add that to your bucket list. You see that someone in your community, Hannah, also wants to go skydiving. You start a conversation with Hannah and both agree on a time and day to go to Bungee Adventures, a skydiving place nearby. Leading up to the day, you and Hannah talk about your fears and why this activity is on your bucket lists. You meet at Bungee Adventures the day of and follow the safety measures for social distancing. You see all staff and customers are wearing required face coverings suited for skydiving. After both you and Hannah leapt off a plane together, you message each other to reflect on the experience. Then, you head back to Cross Off to *cross off* skydiving. You begin planning for your next adventure. ## How we built it We started with a responsive website template and we edited it to fit our needs by removing irrelevant sections and adding our own. We began first with the profile page, where our list sits at the top. Then we added a section below that for our suggested activities section. Next we built a demo page for one of the activities, which was skydiving. Continuing with our responsive template base, we removed unnecessary sections and added in a spot for finding people to make plans with. Lastly, we wanted to try integrating an API, so we chose the mapping API from ESRI ArcGIS to show pinpoints of the top 6 results for skydiving from Yelp. ## Challenges we ran into One of the challenges we ran into was figuring out the COVID-19 safety measures of local businesses that we used to populate our suggested activities section. For our example for Skydiving, one of our team members contacted the business suggestions from Yelp and asked about the safety measures in place. 
We also researched how to support local businesses safely (like ordering takeout, and live streaming a concert from home). To address this in the future, we will have to scrape safety measures from the businesses’ Yelp pages or website, and perhaps even work directly with each business to find out how they are being conscious of the pandemic. ## Accomplishments that we're proud of For most of our team members, this is our first time participating in a hackathon. Honestly, we came in planning to attend workshops and learn as much as we can about the whole hackathon process. However, we felt compelled to address a problem that we are all experiencing right now. Two of our members have never built a full-fledged app or website before. We are happy with what we were able to accomplish in the given amount of time, and are extremely proud to be submitting a project for the first time. ## What we learned While building Cross Off, we learned a lot as a team, including: * How to work together (figuring out working schedules, delegating tasks based on strengths) * How to collaborate on code (lots of balancing between screen sharing and sharing code on GitHub) * The importance of APIs (their role in making our lives easier, how to integrate them, reading through documentation) * Building an MVP (figuring out which features we wanted to include, deciding the scope of our project, agreeing on what is important to us) ## What's next for Cross Off Cross Off, by no means, is a finished product. We believe in the impact of our project, and we want to continue building it out. This includes: * Integrating the Yelp Fusion API to automatically populate the business suggestions for each bucket list item, which will then pass in location information to our ESRI map. * Using a database to store user information. * Tapping into social networking APIs to generate a list of friends/family to make plans with. * Creating a feed that allows users to post about their completed bucket list items (including adding pictures, tagging other users, etc.) * Partnering with small business owners and black-owned businesses to come up with bucket list items to suggest to users that they can then complete at their businesses. By continuing to work on Cross Off, we would like to empower users to accomplish their goals, from the things they’d like to try to the things they’ve only dreamed of. We hope you join us on this adventure!
## Inspiration Our society is becoming increasingly dependent on technology, and thus the usability and accessibility of commonly used devices for individuals with visual or auditory impairments becomes increasingly important. Our goal is to develop a solution that allows this minority group to communicate more conveniently, helping them fully participate and engage in society. We were driven by the goal of positively impacting visually and hearing-impaired users by providing a user-friendly and accessible product. ## What it does We came up with a solution called Visual Speech. Our app allows visually impaired users to connect with hearing-impaired users. Voice messages sent by visually impaired users are received as text messages by hearing-impaired users, allowing for seamless communication. ## How we built it We used the Google Cloud Speech-to-Text and Text-to-Speech APIs for the main functionality of our project. We used Express.js and Node.js for the backend, as well as Firebase to store the chat messages. The frontend was built with React. ## Challenges we ran into We faced some technical and team challenges. Difficulties included setting up environment variables, Git conflicts when merging, visual bugs, playing the audio from the Text-to-Speech API, and timezone and communication issues. ## Accomplishments that we're proud of For this project, we’re proud of being able to utilize Google Cloud solutions via the Speech-to-Text and Text-to-Speech APIs, albeit with roadblocks along the way. ## What we learned We came together to discuss ideas and were able to apply concepts we already knew while building on top of unfamiliar skills like Node.js, React, Git, and more! ## What's next for Visual Speech We’re really proud of what we’ve accomplished at this hackathon, and in the future we hope to add live two-way communication features with authentication. We would love to move this app onto React Native so that it can be an app people download from the app store.
## 💡 Inspiration 💡 Have you ever wished you could play the piano perfectly? Well, instead of playing yourself, why not get Ludwig to play it for you? Regardless of your ability to read sheet music, just upload it to Ludwig and he'll scan, analyze, and play the entire piece within a few seconds! Sometimes you just want someone to play the piano for you, so we aimed to make a robot that could be your own personal piano player! This project lets us bring music to places like elderly homes, where live performances can uplift residents who may not have frequent access to musicians. We were excited to combine computer vision, MIDI parsing, and robotics to create something tangible that shows how technology can open new doors. Ultimately, our project makes music more inclusive and brings people together through shared experiences. ## ❓What it does ❓ Ludwig is your music prodigy. Ludwig can read any sheet music that you upload to him, convert it to a MIDI file, convert that to playable notes on the piano scale, and then play each of those notes on the piano with its fingers! You can upload any kind of sheet music and see the music come to life! ## ⚙️ How we built it ⚙️ For this project, we leveraged OpenCV for the computer vision that reads the sheet music. The sheet reading goes through a process of image filtering, converting the image to binary, classifying the characters, identifying the notes, and exporting them as a MIDI file. We then have a server running for transferring the file over to Ludwig's brain via SSH. Using the Raspberry Pi, we leveraged multiple servo motors with a servo module to simultaneously move multiple fingers for Ludwig. On the Raspberry Pi, we developed functions, key mappers, and note-mapping systems that allow Ludwig to play the piano effectively. ## Challenges we ran into ⚔️ We had a few road bumps along the way. Some major ones included file transferring over SSH, as well as making fingers strong enough to press the piano keys and withstand the torque. It was also fairly difficult to figure out the OpenCV pipeline for reading the sheet music. We had a model that was fairly slow in reading and converting the music notes. However, we were able to learn from the mentors at Hack the North how to speed it up and make it more efficient. We also wanted to ## Accomplishments that we're proud of 🏆 * Got a working robot to read and play piano music! * File transfer working via SSH * Conversion from MIDI to key presses mapped to fingers * Piano melody-playing abilities! ## What we learned 📚 * Working with Raspberry Pi 3 and its libraries for servo motors and additional components * Working with OpenCV and fine-tuning models for reading sheet music * SSH protocols and general networking concepts for transferring files * Parsing MIDI files into useful data through some really cool Python libraries ## What's next for Ludwig 🤔 * MORE OCTAVES! We might add some sort of DC motor with a gearbox, essentially a conveyor belt, which can enable the motors to move up the piano keyboard to allow for more octaves. * Improved photo recognition for reading accents and BPM * Realistic fingers via 3D printing
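To give a sense of the MIDI-to-fingers step described above, here is a small Python sketch using the mido library. The one-octave window, the finger-per-note mapping, and the press() helper are hypothetical stand-ins for Ludwig's real key mappers and servo code.

```python
import mido  # pip install mido

# Hypothetical mapping: a one-octave window starting at middle C (MIDI 60),
# one servo-driven finger per white key. The real robot's mapping may differ.
FINGER_FOR_NOTE = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4, 69: 5, 71: 6, 72: 7}

def midi_to_presses(path):
    """Turn a MIDI file into (delay_seconds, finger_index) press events."""
    presses = []
    for msg in mido.MidiFile(path):  # iteration yields messages with time in seconds
        if msg.type == "note_on" and msg.velocity > 0:
            finger = FINGER_FOR_NOTE.get(msg.note)  # ignore notes outside the window
            if finger is not None:
                presses.append((msg.time, finger))
    return presses

# for delay, finger in midi_to_presses("sheet.mid"):
#     time.sleep(delay); press(finger)  # press() would drive the servo on the Pi
```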
losing
## Inspiration As college students, our lives are often filled with music: from studying at home, to partying, to commuting. Music is ubiquitous in our lives. However, we find the current process of listening to music and controlling our digital music player pretty mechanical and boring: it’s either clicking or tapping. We wanted to truly interact with our music. We want to feel our music. During one brainstorming session, a team member jokingly suggested a Minority Report-inspired gesture UI system. With this suggestion, we realized we could use this hackathon as a chance to build a cool, interactive, futuristic way to play music. ## What it does Fedoract allows you to control your music in a fun and interactive way. It wirelessly streams your hand gestures and allows you to control your Spotify with them. We use a camera mounted on a fedora to recognize hand gestures and, depending on the gesture, we can also control other home applications using IoT. The camera is mounted on the hat and its video feed is sent wirelessly to the main computer for processing. ## How we built it For the wireless fedora part, we are using an ESP32-CAM module to record and transmit the video feed of the hand gesture to a computer. The ESP32-CAM module is powered by a supply built from a 9V battery and a 3V3/5V Elegoo power supply. The video feed is transmitted over WiFi to the main computer, where it is analyzed using tools such as OpenCV. Our software then classifies the gesture and performs the corresponding action on Spotify. The software backend is built using OpenCV and the MediaPipe library. MediaPipe includes a hand model that has been pre-trained on a large dataset and is very accurate. We use this model to get the positions of different features (or landmarks) of the hand, such as the fingertips, the wrist, and the knuckles, and then use this information to determine the hand gesture made by the user. The Spotify front end is controlled and accessed using the Selenium web driver: depending on the action determined by gesture recognition, the program presses the corresponding button. Note that the new window instantiated by the web driver has no prior session information, so we need to log in to Spotify at the start of the process; then we can access the media buttons and other important controls on the web page. Backend: we used OpenCV in combination with a never-seen-before motion classification algorithm. Specifically, we used Python scripts with OpenCV to capture webcam input and recognize the various landmarks (joints) of the hand. Then, motion classification was done through a non-ML, trigonometric approach. First, a vector of the change in X and Y movement is computed from the first and last stored hand coordinates over a given period after receiving hand motion input. Using deltaX and deltaY, we compute the angle of the vector on the x-y plane relative to a reference angle obtained from the display's width and height. If the vector lies between the positive and negative reference angles, the motion is classified and interpreted as Play Next Song, and so on for the other actions. See the diagrams below for more details. ## Challenges we ran into The USB-to-TTL cable we got for the ESP32-CAM was defective, so we spent way too much time trying to fix it and find alternatives with the parts we had.
Worst of all, we were also having trouble powering the ESP32-CAM, both when it was connected directly to the computer and when it was running wirelessly on its own power supply. The speaker we bought was too quiet for our purposes, and we did not have the right equipment to get our display working in time. The ESP32-CAM module is very sensitive to power fluctuations, in addition to having an extremely complicated code upload process. The community around the device is very small, so there was often misleading advice, which led to a long debugging process. The software also had many issues. First of all, we needed to install MediaPipe on our ARM (M1) Macs to develop effectively with OpenCV, but we only figured out that it wasn’t supported after spending some time trying to install it. Eventually, we resorted to the Intel-chip version of PyCharm to install MediaPipe, which surprisingly worked, seeing as our chips are not Intel-manufactured. As a result, PyCharm was super slow, and this really slowed down the development process. Also, we had minor IDE issues when importing OpenCV in our scripts, so we hotfixed that by simply creating a new project (shrug). Another issue was trying to control the keyboard via the OS, which turned out to be difficult for keys other than volume, so we resorted to using Selenium to control the Spotify client. Additionally, in the hand gesture tracking, the thumbs-down gesture was particularly difficult because the machine kept thinking that other fingers were lifted as well. In the hand motion tracking process, the x and y coordinates were inverted, which made the classification algorithm a lot harder to develop. Then, bridging the video live stream coming from the ESP32-CAM to the backend was problematic, and we spent around 3 hours trying to find a way to effectively and simply establish a bridge using OpenCV so that we could redirect the live stream to be the software's input feed. Lastly, we needed to link the multiple functionality scripts together, which wasn’t obvious. ## Accomplishments that we're proud of One thing the hardware team is really proud of is the perseverance displayed while debugging our hardware. Because of faulty connection cords and an unstable battery supply, it took us over 14 hours just to get the camera to connect wirelessly. Throughout this process, we had to use an almost brute-force approach and tried all possible combinations of potential fixes. We were genuinely surprised by our own mental toughness. The motion classification algorithm! It took a while to figure out but was well worth it. Hand gesture recognition was the first working piece of the product and boosted team spirit. This was also the first fully working minimum viable product any of the team members had built at a hackathon. ## What we learned We learned how OpenCV works and, in depth, how serial connections work. We learned that you can use the MediaPipe module to perform hand gesture recognition and other image classification on captured frames; an important thing to note is that frames must be converted to RGB before being passed into the MediaPipe library. We also learned how to use webcam capture to test during development and how to draw helpful figures on the output image for debugging. ## What's next for Festive Fedora There is a lot of potential for improvements in this project. For example, we can move all the computing to a cloud computing service.
Right now, hand gesture recognition runs locally; moving it to the cloud would give us more computing power, letting us run more complicated algorithms and potentially connect to more devices. Something else we can improve is the hardware: better components would reduce the delay in the video feed, giving us more accurate gesture detection.
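To make the trigonometric classification described above concrete, here is a small Python sketch of the idea: compute deltaX and deltaY between the first and last hand positions, take the vector's angle, and compare it against a reference angle derived from the frame size. The movement threshold and the mapping of angle sectors to Spotify actions are illustrative guesses, not the team's tuned values.

```python
import math

def classify_swipe(start, end, frame_w=640, frame_h=480):
    """Classify a hand motion from its start/end coordinates using the
    angle-of-motion idea described above."""
    dx, dy = end[0] - start[0], start[1] - end[1]   # flip y: image y grows downward
    if math.hypot(dx, dy) < 40:                     # ignore tiny movements (threshold is a guess)
        return "none"
    angle = math.degrees(math.atan2(dy, dx))          # -180..180, 0 = rightward
    ref = math.degrees(math.atan2(frame_h, frame_w))  # diagonal reference angle
    if -ref <= angle <= ref:
        return "next_track"        # swipe right
    if angle >= 180 - ref or angle <= -(180 - ref):
        return "previous_track"    # swipe left
    return "volume_up" if angle > 0 else "volume_down"

print(classify_swipe((100, 240), (500, 230)))  # -> next_track
```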
## Inspiration The inspiration for ResuMate came from observing how difficult it can be for undergraduate students and recent graduates to get personalized and relevant feedback on their resumes. We wanted to create a tool that could provide intelligent, real-time resume analysis specifically for technology-related jobs, focusing on internship and new grad roles. By leveraging AI, we aim to help candidates enhance their resumes and improve their chances in the competitive tech job market. ## What it does ResuMate is an AI-powered web application that analyzes resumes by providing personalized eligibility and compatibility assessments. It identifies key strengths and areas for improvement based on keyword matching and specific job requirements for tech roles. Users receive insights on which parts of their resume align with job descriptions and suggestions to fill in missing skills or keywords. ## How we built it ResuMate is built using modern web technologies: * React for building a responsive frontend interface. * Next.js for server-side rendering and easy routing. * Pyodide to run Python in the browser, enabling advanced resume analysis through Python libraries like PyPDF2. * CSS Modules to style the application components consistently and modularly. * Cerebras API (Llama 3 model) as the AI API to generate personalized feedback recommendations powered by Large Language Models (LLMs). The core functionality revolves around uploading a PDF resume, processing it with Python code in the browser, and providing feedback based on keyword analysis and LLM API calls. ## Challenges we ran into One of the key challenges we faced was transferring PDF content to text within a JavaScript framework. Parsing PDFs in a web environment isn't straightforward, especially in a client-side context where JavaScript doesn't natively support the full breadth of PDF handling like Python does. Integrating Pyodide was crucial for running Python libraries like PyPDF2 to handle the PDF extraction, but it introduced challenges in managing the virtual filesystem and ensuring seamless communication between JavaScript and Python. ## Accomplishments that we're proud of We successfully integrated Python code execution in the browser through Pyodide, allowing us to analyze resumes in real time without needing a backend server for processing. Additionally, we created a user-friendly interface that helps users understand what keywords are missing from their resumes, which will directly improve their job applications. ## What we learned Throughout this project, we learned how to: * Seamlessly integrate Python within a JavaScript framework using Pyodide. * Handle complex file uploads and processing entirely on the client-side. * Optimize PDF text extraction and keyword matching for real-time performance. * Work as a team to overcome technical challenges and meet our project goals. ## What's next for ResuMate Moving forward, we plan to: * Improve the accuracy of our PDF text extraction, especially for resumes with complex formatting. * Expand the keyword matching and scoring algorithms to handle more specific job descriptions and fields. * Develop a more advanced suggestion system that not only identifies missing keywords but also provides actionable advice based on the latest job market trends. * Add support for more resume formats, including Word documents and plain text.
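A plain-Python equivalent of the in-browser flow described above (extract text from the PDF, then check it against job keywords) might look like the sketch below. It assumes PyPDF2 3.x and uses an example keyword set; ResuMate's real analysis also runs the text through the Cerebras LLM for feedback.

```python
from PyPDF2 import PdfReader

# Example keyword set for a generic new-grad software role; the real app
# derives its keywords from the job description.
KEYWORDS = {"python", "react", "sql", "git", "docker", "rest", "agile"}

def analyze_resume(pdf_path, keywords=KEYWORDS):
    """Extract text with PyPDF2, then report which keywords are hit or missed."""
    reader = PdfReader(pdf_path)
    text = " ".join((page.extract_text() or "") for page in reader.pages).lower()
    found = {k for k in keywords if k in text}
    return {"matched": sorted(found),
            "missing": sorted(keywords - found),
            "score": round(100 * len(found) / len(keywords))}

# print(analyze_resume("resume.pdf"))
```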
## Inspiration Digitized conversations have given the hearing impaired and other persons with disabilities the ability to better communicate with others despite the barriers in place as a result of their disabilities. Through our app, we hope to build towards a solution that extends the effects of technological development to help those with hearing disabilities communicate with others in real life. ## What it does Co:herent is (currently) a webapp which allows the hearing impaired to streamline their conversations with others by providing them with sentence or phrase suggestions given the context of a conversation. We use Co:here's NLP text generation API to achieve this, and to produce more accurate results we give the API context from the conversation and use prompt engineering to better tune the model. The other (non-hearing-impaired) person can communicate with the webapp naturally through speech-to-text input, and text-to-speech functionality is in place to better facilitate the flow of the conversation. ## How we built it We built the entire app using Next.js with the Co:here API, React Speech Recognition API, and React Speech Kit API. ## Challenges we ran into * Coming up with an idea * Learning Next.js as we went, since it was the first time any of us had used it * Calling APIs is difficult without a backend when using a server-side rendered framework such as Next.js * Coordinating and delegating tasks to be efficient and minimize code conflicts * .env and SSR compatibility issues ## Accomplishments that we're proud of Creating a fully functional app without cutting corners or deviating from the original plan despite various minor setbacks. ## What we learned We were able to learn a lot about Next.js as well as the various APIs through our first time using them. ## What's next for Co:herent * Better tuning the NLP model to generate better, more personal responses, as well as storing and maintaining more information on the user through user profiles and database integrations * Better tuning of the TTS, as well as giving users the choice to select from a menu of possible voices * Possibility of alternative forms of input for those who may be physically impaired (such as in cases of cerebral palsy) * Mobile support * Better UI
losing
## Inspiration Inspired by the expense and bulk of existing machinery, we set out to find a way for individuals suffering from Parkinson's disease and other hand-tremor conditions to have an affordable and easy-to-use therapy to manage their conditions. We built a game that gives the user the task of drawing a straight line on their computer or television. Their hand movements are tracked using a Leap Motion controller and a Pebble smartwatch. While they play, we track statistics and display easy-to-read graphs of progress over time. The user can then see how they improve, providing both encouragement and motivation to continue their treatment. Built using a wide variety of technologies, we believe we have created a fun, easily accessible method of treatment for those who need it.
## Inspiration There are over 1.3 billion people in the world who live with some form of vision impairment. Often, retrieving small objects, especially off the ground, can be tedious for those without full sight. We wanted to create a solution for those people where technology can not only advise them but physically guide their muscles as they interact with the world in their daily lives. ## What it does ForeSight is meant to be as intuitive as possible in assisting people with their daily lives. This means tapping into the sense of touch and guiding the user's muscles without them having to think about it. ForeSight straps onto the user's forearm and detects objects nearby. If the user begins reaching for an object to grab it, ForeSight emits multidimensional vibrations in the armband which guide the muscles to move in the direction of the object so they can grab it without seeing its exact location. ## How we built it This project involved multiple disciplines and leveraged our entire team's past experience. We used a Logitech C615 camera and ran two different deep learning algorithms, specifically convolutional neural networks, to detect the object. One CNN used the TensorFlow platform and served as our offline solution. Our other object detection algorithm, using AWS SageMaker, recorded significantly better results but only works with an Internet connection. Thus, we use a two-sided approach: TensorFlow if no or only a weak connection is available, and AWS SageMaker if there is a suitable connection. The object detection and processing component can be done on any computer; specifically, a single-board computer like the NVIDIA Jetson Nano is a great choice. From there, we powered an ESP32 that drove the 14 different vibration motors providing the haptic feedback in the armband. To supply power to the motors, we used transistor arrays to draw power from an external lithium-ion battery. On the software side, we implemented an algorithm that accurately selects and sets the right strength level for all the vibration motors. We used an approach that calculates the angular difference between the center of the object and the center of the frame, as well as the distance between them, to determine each vibration motor's strength. We also built a piece of simulation software that draws a circular histogram and graphs the usage of each vibration motor at any given time. ## Challenges we ran into One of the major challenges we ran into was the capability of the deep learning algorithms on the market. We had the impression that a CNN could work like a “black box” and have nearly perfect accuracy. However, this is not the case, and we experienced several glitches and inaccuracies. It then became our job to prevent these glitches from reaching the user's experience. Another challenge we ran into was fitting all of the hardware onto an armband without overwhelming the user. Especially on a body part used as much as the arm, users prioritize freedom of movement and low weight in their devices. Therefore, we aimed to provide a device that is light and small. ## Accomplishments that we're proud of We’re very proud that we were able to create a project that solves a true problem that a large population faces. In addition, we're proud that the project works and can't wait to take it further! Specifically, we're particularly happy with the user experience of the project.
The vibration motors work very well for influencing movement in the arms without involving too much thought or effort from the user. ## What we learned We all learned how to implement a project that has mechanical, electrical, and software components and how to pack it seamlessly into one product. From a more technical side, we gained more experience with Tensorflow and AWS. Also, working with various single board computers taught us a lot about how to use these in our projects. ## What's next for ForeSight We’re looking forward to building our version 2 by ironing out some bugs and making the mechanical design more approachable. In addition, we’re looking at new features like facial recognition and voice control.
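A rough sketch of the haptic mapping described in the build section (angular difference between the object's center and the frame's center, scaled by how far off-center the object is, spread across the 14 motors) could look like the following. The falloff width, normalization, and PWM scaling are placeholder choices, not ForeSight's exact algorithm.

```python
import math

NUM_MOTORS = 14  # motors assumed evenly spaced around the armband

def motor_strengths(obj_center, frame_size, max_pwm=255):
    """Motors nearest the target direction vibrate hardest, scaled by how
    far off-center the object is."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    dx, dy = obj_center[0] - cx, cy - obj_center[1]          # +y = up
    target_angle = math.atan2(dy, dx)
    dist = min(1.0, math.hypot(dx, dy) / math.hypot(cx, cy))  # off-center amount, 0..1
    strengths = []
    for i in range(NUM_MOTORS):
        motor_angle = 2 * math.pi * i / NUM_MOTORS
        diff = math.atan2(math.sin(motor_angle - target_angle),
                          math.cos(motor_angle - target_angle))  # wrapped angular difference
        closeness = max(0.0, 1 - abs(diff) / (math.pi / 3))      # only motors within 60 degrees respond
        strengths.append(int(max_pwm * dist * closeness))
    return strengths

print(motor_strengths(obj_center=(600, 120), frame_size=(640, 480)))
```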
## Inspiration We've had roommate troubles in the past, so we decided to make something to help us change that. ## What it does It keeps track of tasks and activities among roommates and gamifies these tasks with a reward system to motivate everyone to commit to the community. ## Challenges we ran into The two biggest obstacles we ran into were version control and the Firebase documentation/database. ## Accomplishments that we're proud of We completed our core features, and we made a decent-looking app. ## What we learned Take heed when it comes to version control. ## What's next for roomMe We would like to add more database support and more features that allow communication with other people in your group. We would also like to add integrations that further enhance the roomMe experience, such as Venmo, Google Calendar, and GroupMe. We are also considering creating a game where people can spend their credits.
partial
## 💡 Inspiration Are you a shutterbug, always trying to capture the world in photographs but forgetting to live in the moment? Do you find it difficult and unrewarding to take staged photos? Maybe you're someone who is often forced behind the lens in their social circles, being overlooked by the fact that you too deserve some nice pictures. Or perhaps you relate to @annaclendening... ![](https://static.demilked.com/wp-content/uploads/2019/07/5d2c220dbaa5b-photos-i-take-of-my-boyfriend-vs-photos-he-takes-2-5d284867244a5__700.jpg) Our point is everyone has been a victim of a bad photo, of having the joy sucked out of photography. With pic-perfect, we aim to provide people with the best solution for effortlessly capturing their moments, ensuring that every shot is a keeper. Our robot will independently follow your movements and employ a pre-trained facial detection algorithm to keep your face in focus. Whenever you feel the moment is perfect, simply signal your approval with a thumbs-up gesture. ## 🔍 What it does The pic-perfect is our friendly robot companion. It follows you wherever you go, using our backend facial detection model. With just a thumbs-up signal, it meticulously captures pictures of you and adds filters to create the most perfectly crafted portraits. These enhanced images are then displayed on a web app, with the intention of later storing them on a cloud service. The facial detection model is configured to position the robot at the proper angle and distance to center-align the person within the frame. Additionally, we utilize a hand gesture recognition algorithm to signal the robot when to begin capturing pictures. ## ⚙️ Our Tech Stack ![](https://i.imgur.com/A4mWJO2.png) * **Hardware**: The robot was built using the Viam Rover Kit and Raspberry Pi 4. We mounted our own webcam for better positioning of the camera. We had initially planned to 3D print a camera stand to achieve a better camera perspective. However, due to unforeseen challenges and time constraints, we were unable to continue with this concept. * **Backend**: We utilized Python to establish connections with the Viam robot, access the necessary pre-trained models for our design, and augment our image processing capabilities. For recognizing hand gestures, we used MediaPipe and Tensorflow to recognize the "thumb's up" hand sign while facial recognition was done through openCV and a Haar Cascade Classifier. * **Frontend**: We designed a simple and straightforward user interface with the primary goal of offering users an attractive platform for viewing their images. ## 🚧 Challenges we ran into Building a hardware AI hack using advanced technologies like the Viam Rover kit was quite a blissful learning curve. We also underwent a very exhaustive testing process for both our machine learning algorithms to configure the robot appropriately for a smooth user experience. * Hand Gesture Recognition: While using MediaPipe, TensorFlow, and OpenCV for hand gesture recognition, we encountered challenges related to model accuracy and real-world performance. Tuning the model and achieving consistent recognition accuracy was a significant task. * Camera Mounting: Initially, we had planned to 3D print a camera stand to achieve a better camera perspective. However, due to unexpected difficulties and time constraints, we were unable to proceed with this plan, which impacted the quality of the camera positioning. 
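A minimal version of the Haar-cascade face-centering step described in the backend section might look like this sketch: detect the largest face and report how far it sits from the center of the frame, which the rover logic could then turn into a steering command. The offset-to-motion mapping is assumed, not taken from the project.

```python
import cv2

# Stock OpenCV frontal-face Haar cascade, as mentioned in the backend description.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def framing_offset(frame):
    """Return (horizontal_offset_px, face_width_px) for the largest detected
    face, or None. A positive offset means the face is right of center, so
    the rover would turn right; the turn/velocity mapping itself is assumed."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    face_center_x = x + w / 2
    return face_center_x - frame.shape[1] / 2, w        # offset and apparent size

# cap = cv2.VideoCapture(0); ok, frame = cap.read()
# print(framing_offset(frame) if ok else "no frame")
```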
## ✔️ Accomplishments that we're proud of * We are proud to have achieved a fully functional prototype, which has the potential to become a highly marketable product. * Throughout the project, we gained valuable experience and developed new skills in hardware integration, computer vision, and user interface design. * A lot of firsts! We had never worked with the Viam Rover kit, and most of us had little experience with computer vision or photography. ## 📚 What we learned Coming into this project, our team did not have much experience with hardware-to-software interfacing, robotics, or AI/ML algorithms, so a project combining all three was a challenge we wanted to take on and learn from. Due to the nature of the project, we also needed to consider mechanical factors such as the positioning of the camera and the height of the mount, as well as determine suitable velocities that allow the robot to move and the live camera feed to be polled at the same time. Exercising technologies like Python's OpenCV and TensorFlow, and picking up the new skill of building robots with Viam, we were glad to have overcome the learning curve and come up with a final product to present. ## 🔭 What's next for pic-perfect We plan on using servo motors to control the vertical angle of the camera so pictures can be taken more accurately. We also discussed further refining the robot by building collision detection models and obstacle sensitivity. Furthermore, we would like to have it tell us when the photo is being taken, communicate its thoughts about positioning, and give encouraging feedback. Another possibility is uploading the images it takes to the user's personal Google Drive, or integrating it with a backend to support different users.
## Inspiration Everyone can relate to the scene of staring at messages on your phone and wondering, "Was what I said toxic?", or "Did I seem offensive?". While we originally intended to create an app to help neurodivergent people better understand both others and themselves, we quickly realized that emotional intelligence support is a universally applicable concept. After some research, we learned that neurodivergent individuals find it most helpful to have plain positive/negative annotations on sentences in a conversation. We also think this format leaves the most room for all users to reflect and interpret based on the context and their experiences. This way, we hope that our app provides both guidance and gentle mentorship for developing the users' social skills. Playing around with Co:here's sentiment classification demo, we immediately saw that it was the perfect tool for implementing our vision. ## What it does IntelliVerse offers insight into the emotions of whomever you're texting. Users can enter their conversations either manually or by taking a screenshot. Our app automatically extracts the text from the image, allowing fast and easy access. Then, IntelliVerse presents the type of connotation that the messages convey. Currently, it shows a positive, negative, or neutral connotation for the messages. The interface is organized similarly to a texting app, ensuring that the user effortlessly understands the sentiment. ## How we built it We used a microservice architecture to implement this idea. The technology stack includes React Native, while users' information is stored with MongoDB and queried using GraphQL. Apollo Server and Apollo Client are used to connect the frontend and the backend. The sentiment estimates are powered by custom Co:here finetunes, trained using a public chatbot dataset found on Kaggle. Text extraction from images is done using npm's text-from-image package. ## Challenges we ran into We were unfamiliar with many of the APIs and dependencies that we used, and it took a long time to understand how to get the different components to come together. When working with images in the backend, we had to do a lot of parsing to convert between image files and strings. When training the sentiment model, finding a good dataset to represent everyday conversations was difficult. We tried numerous options and eventually settled on a chatbot dataset. ## Accomplishments that we're proud of We are very proud that we managed to build all the features that we wanted within the 36-hour time frame, given that many of the technologies we used were completely new to us. ## What we learned We learned a lot about working with React Native and how to connect it to a MongoDB backend. When assembling everyone's components together, we solved many problems regarding dependency conflicts and converting between data types/structures. ## What's next for IntelliVerse In the short term, we would like to expand our app's accessibility by adding more interactable interfaces, such as audio inputs. We also believe that the technology of IntelliVerse has far-reaching possibilities in mental health, by helping users introspect on their thoughts or by supporting clinical diagnoses.
## Inspiration We wanted to do something fun and exciting, nothing too serious. Slang is a vital component of thriving in today's society. Ever seen Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform, Urban Dictionary, to educate people about today's ways and to show how today's music is changing with the slang thrown in. ## What it does You choose your desired song, and it will print out the lyrics for you and even sing them in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, substitute it into the original lyrics, and attempt to sing the result. ## How I built it We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the front end with the Bootstrap framework, utilizing Kaggle datasets and the Zdict API. ## Challenges I ran into Redirect issues with Flask were frequent, and the excessive API calls made the program very slow. ## Accomplishments that I'm proud of The excellent UI design, along with the amazing outcomes that can be produced from the translation of slang. ## What I learned We learned a lot of things. ## What's next for SlangSlack We are going to transform the way today's millennials keep up with growing trends in slang.
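The core lyric-translation step can be sketched with a small slang dictionary standing in for Urban Dictionary lookups; the real app fetches definitions via an API, so the mapping below is purely illustrative.

```python
import re

# Tiny stand-in for Urban Dictionary lookups; the real app queries the API
# and picks a definition for each slang term.
SLANG = {
    "dawg": "close friend",
    "prolly": "probably",
    "lit": "exciting",
}

def translate_lyrics(lyrics, slang=SLANG):
    """Replace each known slang word in the lyrics with its 'definition',
    keeping everything else untouched."""
    def swap(match):
        word = match.group(0)
        replacement = slang.get(word.lower())
        return replacement if replacement else word
    return re.sub(r"[A-Za-z']+", swap, lyrics)

print(translate_lyrics("My dawg would prolly do it for a Louis belt"))
# -> "My close friend would probably do it for a Louis belt"
```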
partial
# AiroScents: Project Story ## Inspiration The idea behind AiroScents was born from the desire to merge technology with environmental enhancement. We wanted to create a solution that could improve the atmosphere in indoor spaces using technology in a way that was both practical and unique. As we explored possibilities, we realized that scent plays a significant role in setting moods and enhancing spaces, whether in a home, office, or event setting. This led us to the concept of an autonomous drone that could navigate spaces while dispersing pleasant fragrances, creating a seamless blend of hardware innovation and software control. ## What We Learned Throughout the development process, we gained a deeper understanding of hardware-software integration. We learned about the complexities of working with Bluetooth modules like the HC-05, the challenges of creating stable flight systems for drones, and the importance of seamless communication between devices. On the software side, we explored React and TypeScript more thoroughly, learning how to integrate external APIs like Mappedin, and how to optimize user interaction for controlling a drone in real-time. The pathfinding research taught us the value of algorithmic efficiency and how crucial it is for navigating complex spaces autonomously. ## How We Built It The project was built in two main phases: hardware and software. ### Hardware: * We custom-built the drone, integrating an Arduino UNO microcontroller alongside a Speedybee f405 v4 Stack Flight Controller and a HC-05 Bluetooth module to communicate with our web app. -All code was written with the Arduino IDE, and Serial data was tested using the Serial Monitor tool. * The motors were set up for mobility, while a scent-dispensing mechanism was installed to allow the drone to release fragrances at intervals. * A secondary automatic scent dispensing system was built using a stepper motor and custom 3D printed arm attachment. * Sensors can be used to ensure stable flight and accurate navigation through indoor spaces. In the future, we intend to integrate a ToF sensor in order to allow the drone to locate itself in real space. ### Software: * The web application was developed using **React JS**, **Node JS** and **TypeScript**. * In the main landing page there are two buttons : 1) **Connect to Drone** that allows the web application to connect to drone via Bluetooth 2) The other one is **Fly Manually** which opens up the controller * We implemented the **Mappedin API** to enable users to define the route the drone would take, mapping out spaces for efficient coverage. * A manual control panel was also built into the web app, allowing users to control the drone in real-time. * We separately explored the potential for **C pathfinding algorithms** to allow the drone to navigate autonomously in the future. ## Challenges We Faced One of the main challenges was ensuring stable communication between the drone and the web app. The Bluetooth module presented issues with range and reliability, which required us to troubleshoot and optimize its performance. On the hardware side, building a stable and efficient drone was a major task, as we had to balance weight, flight stability, and the addition of the scent-dispensing mechanism. On the software side, integrating the Mappedin API presented its own challenges, particularly with mapping complex indoor environments and ensuring the drone would follow precise routes. 
## Conclusion AiroScents was a challenging yet highly rewarding project that pushed our skills. We are proud to have built a product that combines technology with a unique, practical application, and we look forward to exploring further improvements and features for future iterations.
## Inspiration It all started from two obscure repetitive tasks: open shutters through two pulleys in a specific order. Every morning when the sun rose, we'd open our blinds to get natural lighting in the dorm room; however, in this specific dorm room, two pulleys in sequence were required to open the shutters. After pondering on the possibility of automation, we looked at a general case where anyone could use a singular general component that can be integrated in any DIY automation project one could possibly desire. ## What it does The product enables the user to make use of a coupled shaft and rotary motion to automate any simple task which uses a transferable motion. This is joined with custom parts to aid in these special attachments, and subsequently, the user will be able to control this automation from a smart device for any presets or customizability they may have. ## How we built it The product is comprised of three main components: mechanical design, hardware design, and lastly software design. We used SolidWorks to model and 3D-print an enclosure for all hardware components, adding an edge in product appeal in design. We used Arduino Due for a large threshold for analog pin usage, and used a high torque stepper motor as well as a stepper motor driver to control speed, direction, power, and partial revolutions for a higher user quality experience as well as high adaptability for more applications. Bluetooth was established using the ESP32 microcontroller to establish single sided communication from the app to the Arduino Due board. Lastly, we used the MIT App Inventor to program and develop an android application where the user can enter desired inputs for already preloaded motor presets over bluetooth communication logic. ## Challenges we ran into This project was as challenging as it was ambitious. We ran into numerous technical challenges including: establishing bluetooth for Arduino over ESP32 board, mechanical design that can sufficiently support the stepper motor without any internal movement, how software programmable backend logic was often deemed out of scope over the 24 hour deadline, as well as numerous occasions where hardware failed during build and prototyping. ## Accomplishments that we're proud of Over the span of 24 hours, we have achieved milestones including mechanical design housing completion, stepper motor hardware completion, overall Arduino wiring and component diagrams completion, and establishing Bluetooth for Arduino over ESP32 microcontroller. The mechanical design was started 2 hours before the deadline, and the use of rapid prototyping and design was crucial for the ".stl" file to be submitted. The challenge with the mechanical design was not only the time constraint, but also how hardware placement wasn't finalized during the ideation process. Due to this reason, we celebrate the completion of the mechanical design. Entering MakeUofT with very minimal prior knowledge, the hardware components placement and design was definitely the biggest challenge. To overcome this, a lot of time was spent troubleshooting why certain hardware components weren't performing up to standard through various voltage calculations and wiring diagrams. Lastly, establishing bluetooth for Arduino over the ESP32 microcontroller was one of the largest celebrated successes. We had to incorporate the use of signal processing as well as programming a suitable backend technology that can efficiently send out data while the app is running. 
## What we learned In the last 24 hours, we learned a substantial amount about our technical skills as well as soft skills. Firstly, as mentioned earlier, our team entered the competition with minimal knowledge about Arduinos, motors and other important components. As a result, this meant that we had to learn about the components and apply what we learnt to the project. Additionally, we learned a lot about IoT and connectivity through Bluetooth and wifi modules which we can now use in other projects. The workshops offered were also very helpful and helped us navigate different fields related to robotics including computer vision which was super interesting. In terms of the new soft skills, we learned a lot about efficiency, perseverance and teamwork. Since we were trying to learn and apply our skills at the same time, it was very easy to lose track of time so it was important to pace ourselves and work smartly. Furthermore, perseverance was important as there were many roadblocks when it came to coding and trying to connect the Bluetooth and wifi. Both tasks were frustrating and often tedious and required lots of patience. Lastly, teamwork was the most strengthened skill. As we split up the work and all hit roadblocks, there were many times when things did not go the way expected. Instead of getting frustrated at each other, we ensured that the problem did not overcome us and worked together as a team to overcome the problem. All in all, we are grateful that UofT provided us with this opportunity. Even though there were many adversities we faced and there were many changes made to the project, we made major progress in our knowledge in all areas of robotics: hardware, 3D modelling, firmware and systems design. We believe that this knowledge will be incredibly useful to us as we advance in our careers. ## What's next for Automated Rotary Arm Machine (ARM) As a group of ambitious hackers and makers, we will strive to learn more about the concepts we don't know. We gained a plethora of knowledge about something that we are just getting introduced to. We plan on participating in future Hackathons and Make-a-thons to get exposed to the vast advancements that can be made in the world of technology. This Make-a-thon forced us to think out of the box and get out of our comfort zone. We tried our best to build something from scratch and while we may not have achieved what we set out to do, we are highly grateful for this opportunity. We are focused on completing this project and showcase that our idea can be brought to life. There are several refinements that we know we can make to improve our design and our code. For example, make attachments that will help integrate our prototype with several applications. ## Acknowledgements Chatgpt to help us begin our learning process in terms of Arduino, making apps and the wifi plus Bluetooth modules
Currently, about 600,000 people in the United States have some form of hearing impairment. Through personal experience, we understand how much guidance is usually needed to communicate with someone through ASL. Our software removes much of that barrier and promotes a more connected community, one with a lower barrier to entry for sign language users. Our web-based project detects signs using the live feed from the camera, and features like autocorrect and autocomplete reduce communication time so that the focus stays on the conversation rather than the medium (a small sketch of that idea follows below). Furthermore, the Learn feature enables users to explore and improve their sign language skills in a fun and engaging way. Because of limited time and computing power, we chose to train an ML model on ASL, one of the most popular sign languages, but extrapolation to other sign languages is easily achievable. With an extrapolated model, this could be a huge step towards bridging the chasm between the worlds of sign and spoken languages.
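As an aside on the autocorrect/autocomplete idea mentioned above, here is a minimal Python sketch of how a predicted (possibly misrecognized) word could be snapped to a vocabulary and how a partial word could be completed. The vocabulary and the use of `difflib` are our own illustrative choices, not necessarily what the project uses.

```python
# Minimal autocorrect/autocomplete sketch; VOCAB is a placeholder word list.
import difflib

VOCAB = ["hello", "help", "hospital", "house", "how", "thank", "thanks", "today"]

def autocorrect(word: str) -> str:
    # Snap the word to the closest vocabulary entry (or keep it if nothing is close).
    matches = difflib.get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.6)
    return matches[0] if matches else word

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    # Offer up to `limit` completions that start with the signed prefix.
    return [w for w in VOCAB if w.startswith(prefix.lower())][:limit]

print(autocorrect("helo"))   # -> "hello"
print(autocomplete("ho"))    # -> ["hospital", "house", "how"]
```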
Presentation Link: <https://docs.google.com/presentation/d/1_4Yy5c729_TXS8N55qw7Bi1yjCicuOIpnx2LxYniTlY/edit?usp=sharing>

SHORT DESCRIPTION (Cohere generated this): Enjoy music from the good old days? Your playlist will generate songs from your favourite year (e.g. 2010) and artist (e.g. Linkin Park).

## Inspiration
We all love listening to music on Spotify, but depending on the mood of the day, we want to listen to songs on different themes. Impressed by the cool natural language processing tech that Cohere offers, we decided to create Songify, which uses Cohere to create Spotify playlists based on the user's request.

## What it does
The purpose of Songify is to make discovering new music seamless and, hopefully, to provide our users with some entertainment. The search is not limited to particular words, so any prompt given to Songify will generate a playlist, whether it is for serious music interest or for laughs. Songify uses a web-based platform to collect user input, which Cohere then scans and extracts keywords from. Those keywords are sent to the Spotify API, which looks for matching songs, creates a new playlist under the user's account, and populates the playlist with the results. Songify then returns a webpage with an embedded playlist where you can examine the songs that were added instantly.

## How we built it
The project revolved around four main tasks: implementing the Spotify API, implementing the Cohere API, creating a webpage, and integrating our webpage and backend. Python was the language of choice since it supports Cohere and Spotipy, which extensively saved us time in learning to use Spotify's API. Our team then spent time learning about and executing our specific tasks and came together at the end for the integration. (A minimal sketch of the keyword-to-playlist flow appears at the end of this writeup.)

## Challenges we ran into
For most of our team, this was our first time working with Python, APIs, and integrating front-end and back-end code. Learning all these skills in the span of 3 days was extremely challenging and time-consuming. The first hurdle we had to overcome was learning to read API documentation. The documentation was very intimidating to look at, and understanding key concepts such as API keys, authorization, and REST calls was very confusing at first. The learning process included watching countless YouTube videos, asking mentors and sponsors for help, and hours of trial and error.

## Accomplishments that we're proud of
Although our project is not the most flashy, our team has a lot to be proud of. Creating a product with the limited knowledge we had and building an understanding of Python, APIs, integration, and front-end development in a tight time frame is an accomplishment to be celebrated. Our goal for this hackathon was to make a tangible product, and we succeeded in that regard.

## What we learned
Working with Spotify's API provided a lot of insight into how companies store and work with data. Through Songify, we learned that most Spotify information is stored in JSON objects spanning several hundred lines per song and several thousand for albums. Understanding the authentication process was also a headache, since it has many key details such as client IDs, API keys, redirect addresses, and scopes. Flask was very challenging to tackle, since it was our first time dealing with virtual environments, extensive use of the Windows command prompt, and new notation such as @app.route. Integrating Flask with our HTML skeleton and back-end Python files was also difficult due to our limited knowledge of integration.
Hack Western was a very enriching experience for our team, exposing us to technologies we may not have encountered if not for this opportunity.

## What's next for HackWesternPlaylist
In the future, we hope to implement a search algorithm not only for song names, but also for artists, artist genres, and the ability to comb through other people's playlists that the user enjoys listening to. The appearance of our product is also suboptimal, and cleaning up the front end of the site will make it more appealing to users. We believe that Songify has a lot of flexibility in terms of what it can grow into, and we are excited to keep working on it.
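Here is the minimal sketch of the keyword-to-playlist flow referenced above. It assumes a registered Spotify app (the client ID, secret, and redirect URI are placeholders) and stubs out the Cohere keyword-extraction step with a trivial split, so it approximates the pipeline rather than reproducing Songify's actual code.

```python
# Sketch: prompt -> keywords -> Spotify search -> new playlist (using Spotipy).
import spotipy
from spotipy.oauth2 import SpotifyOAuth

def extract_keywords(prompt: str) -> list[str]:
    # Placeholder for the Cohere keyword-extraction step described above.
    return prompt.split()

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",              # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://localhost:8888/callback",
    scope="playlist-modify-private",
))

prompt = "songs from 2010 that sound like Linkin Park"
track_ids = []
for keyword in extract_keywords(prompt):
    results = sp.search(q=keyword, type="track", limit=5)
    track_ids += [t["id"] for t in results["tracks"]["items"]]

user_id = sp.current_user()["id"]
playlist = sp.user_playlist_create(user_id, "Songify playlist", public=False)
sp.playlist_add_items(playlist["id"], track_ids)
print("Created:", playlist["external_urls"]["spotify"])
```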
## Inspiration
We're huge fans of Spotify, but we've always hoped there were more filters for our liked songs beyond just album, artist, and song title. What if we could instead filter by the mood, or "vibe", each song represents? Vibe's got you covered.

## What it does
Vibe, powered by Wolfram AI, analyzes your musical tastes and gives you a holistic overview of your top Spotify songs. It creates a custom playlist for you from your top songs, based on your current mood.

## How I built it
Vibe leverages the Spotify API and our own sentiment analysis to get the musical and lyrical attributes of each top song in your Spotify account. We then trained a machine learning classifier API using the Wolfram Platform (Wolfram One Instant API) to classify the "vibe" of a song according to its attributes. The training data for this classifier was obtained from publicly available Spotify playlists that were tagged with a specific mood. For the frontend, both React and Bootstrap were used. For the backend, we used the Wolfram One platform for the classifier, while the sentiment analysis was built with a Python/Flask stack, the Genius API to get URLs of song lyrics, BeautifulSoup4 to scrape the lyrics, and vaderSentiment to carry out sentiment analysis (a small sketch of this step appears at the end of this writeup).

## Challenges I ran into
This was our first time using Flask and Wolfram, and it was interesting to learn about these new technologies while navigating through the difficulties.

## Accomplishments that I'm proud of
Using new technologies!

## What I learned
Sentiment analysis, Wolfram, Flask.

## What's next for Vibe
We hope to:

* improve our analysis/machine-learning metrics
* raise the accuracy of our model by introducing 10x more training data
* add more vibes
* build a similar app for Apple Music
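The lyric-sentiment sketch referenced above, assuming lyrics have already been fetched (the Genius/BeautifulSoup scraping is stubbed out here) and using vaderSentiment's compound score; the "vibe" thresholds are illustrative only.

```python
# Sketch of scoring lyric sentiment with vaderSentiment.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def lyric_sentiment(lyrics: str) -> float:
    # 'compound' ranges from -1 (very negative) to +1 (very positive).
    return analyzer.polarity_scores(lyrics)["compound"]

lyrics = "I'm on top of the world, hey"   # placeholder lyrics
score = lyric_sentiment(lyrics)
vibe = "upbeat" if score > 0.3 else "mellow" if score > -0.3 else "moody"
print(score, vibe)
```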
## Inspiration
Every time I talk to someone about board games, a few games always slip my mind because there are just so many good games to keep track of! If only there were a convenient location where I could access all of the board games that I own or am interested in.

## What it does
Boardhoard allows you to access your library of games from anywhere! It lets you search for board games and add them to your library so you can access them with ease whenever you want.

## How we built it
We leveraged the versatility of React to create a beautiful UI and the depth of information from BoardGameGeek's API to provide the necessary information to display the games. We used Charles and Postman to generate queries, and used Java and HTTP libraries to fetch sample data to test our implementation.

## Challenges we ran into
BoardGameGeek's (BGG) API returns XML responses, which are not ideal. We found an alternative server that converted the responses to JSON, which we then used to populate our app (a short Python illustration of parsing these XML responses appears at the end of this writeup). Another challenge involved fetching a complete catalogue of all games on BGG; it simply could not be done, so we had to come up with workarounds to fetch large amounts of data. We also had trouble implementing individual user databases.

## Accomplishments that we're proud of
It worked! It was a great accomplishment that we were able to maintain code quality and styling throughout the project.

## What we learned
We learned about the importance of setting proper headers and authorization on POST requests, and how to persevere and make something work when faced with a limited set of APIs.

## What's next for Boardhoard
Add the ability to share your library with other people. Include more metadata in each game's details.
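For illustration, the XML-handling step described above might look like this in Python (the project itself used Java). The `/xmlapi2/search` endpoint and the response fields reflect our reading of BGG's public XML API and should be checked against its documentation.

```python
# Sketch: query BGG's XML API and pull out a few fields per game.
import requests
import xml.etree.ElementTree as ET

def search_games(query: str) -> list[dict]:
    resp = requests.get(
        "https://boardgamegeek.com/xmlapi2/search",   # assumed endpoint
        params={"query": query, "type": "boardgame"},
        timeout=10,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    games = []
    for item in root.findall("item"):
        name = item.find("name")
        year = item.find("yearpublished")
        games.append({
            "id": item.get("id"),
            "name": name.get("value") if name is not None else None,
            "year": year.get("value") if year is not None else None,
        })
    return games

print(search_games("Catan")[:3])
```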
## Inspiration
The impact of COVID-19 has had lasting effects on the way we interact and socialize with each other. Even when engulfed by bustling crowds and crowded classrooms, it can be hard to find our friends and the comfort of not being alone. Too many times have we grabbed lunch, coffee, or boba alone, only to find out later that there was someone who was right next to us! Inspired by our improvised use of Apple's FindMy feature, we wanted to create a cross-device platform that's actually designed for promoting interaction and social health!

## What it does
Bump! is a geolocation-based social networking platform that encourages and streamlines day-to-day interactions.

**The Map**

On the home map, you can see all your friends around you! By tapping on their icon, you can message them or, even better, Bump! them. If texting is like talking, you can think of a Bump! as a friendly wave. Just a friendly Bump! to let your friends know that you're there!

Your bestie cramming for a midterm at Mofitt? Bump! them for good luck! Your roommate in the classroom above you? Bump! them to help them stay awake! Your crush waiting in line for a boba? Make that two bobas! Bump! them.

**Built-in Chat**

Of course, Bump! comes with a built-in messaging chat feature!

**Add Your Friends**

Add your friends to allow them to see your location! Your unique settings and friends list are tied to the account that you register and log in with.

## How we built it
Using React Native and JavaScript, Bump! is built for both iOS and Android. For the backend, we used MongoDB and Node.js. The project consisted of four major and distinct components.

**Geolocation Map**

For our geolocation map, we used Expo's geolocation library, which allowed us to cross-match the positional data of all the user's friends.

**User Authentication**

The user authentication process was built using additional packages such as Passport.js, Jotai, and Bcrypt.js. Essentially, we wanted to store new users through registration and verify old users through login by looking them up in MongoDB, hashing and salting their password on registration with Bcrypt.js, and comparing their password hash to the existing hash in the database on login. We also used Passport.js to create JSON Web Tokens, and Jotai to store user ID data globally in the front end.

**Routing and Web Sockets**

To keep track of user location data, friend lists, conversation logs, and notifications, we used MongoDB as our database and a Node.js backend to save and access data. While this worked for the majority of our use cases, using HTTP protocols for instant messaging proved to be too slow and clunky, so we made the design choice to include WebSockets for client-to-client communication. Our architecture involved using the server as a WebSocket host that would receive all client communication but would filter messages so they would only be delivered to the intended recipient (a sketch of this relay pattern appears at the end of this writeup).

**Navigation and User Interface**

For our UI, we wanted to focus on simplicity, cleanliness, and neutral aesthetics. After all, we felt that the Bump! experience was really about the time spent with friends rather than on the app, so we designed the UX such that Bump! is really easy to use.

## Challenges we ran into
To begin, package management and setup were fairly challenging. Since we had never done mobile development before, having to learn how to debug, structure, and develop our code was definitely tedious.
In our project, we initially programmed our frontend and backend completely separately; integrating them and working out the moving parts was really difficult and required everyone to teach each other how their part worked. When building the instant messaging feature, we ran into several design hurdles: HTTP requests are half-duplex by design, since they assume client initiation, so there is no elegant method for server-initiated client communication. Another challenge was that the server needed to act as the host for all WebSocket communication, resulting in the need to selectively filter and forward received messages.

## Accomplishments that we're proud of
We're particularly proud of Bump! because we came in with limited or no mobile app development experience (in fact, this was the first hackathon for half the team). This project was definitely a huge learning experience for us; not only did we have to grind through tutorials, YouTube videos, and a Stack Overflowing of tears, we also had to learn how to efficiently work together as a team. Moreover, we're proud that we were not only able to build something that would make a positive impact in theory, but a platform that we see ourselves actually using on a day-to-day basis. Lastly, despite setbacks and complications, we're super happy that we developed an end product that resembled our initial design.

## What we learned
In this project, we really had an opportunity to dive headfirst into mobile app development; specifically, learning all about React Native, JavaScript, and the unique challenges of implementing a backend for mobile devices. We also learned how to delegate tasks more efficiently, and we learned to give some big respect to front-end engineers!

## What's next for Bump!
**Deployment!**

We definitely plan on using the app with our extended friends, so the biggest next step for Bump! is polishing the rough edges and getting it on app stores. To get Bump! production-ready, we're going to robustify the backend, as well as clean up the frontend for a smoother look.

**More Features!**

We also want to add some more functionality to Bump! Here are some of the ideas we had; let us know if there's any we missed!

* Adding friends with QR-code scanning
* Bump! leaderboards
* Status updates
* Siri! "Hey Siri, bump Emma!"
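Here is the relay sketch referenced above. Bump!'s actual backend is Node.js; this Python version (using the `websockets` package, version 10+ for the single-argument handler) only illustrates the "server as WebSocket host that forwards each message to its intended recipient" pattern, with a made-up registration message and payload shape.

```python
# Sketch of a WebSocket relay that delivers each message only to its recipient.
import asyncio
import json
import websockets

CONNECTIONS: dict[str, websockets.WebSocketServerProtocol] = {}

async def handler(websocket):
    # Assumed protocol: the first message registers the sender's user id.
    hello = json.loads(await websocket.recv())
    user_id = hello["user_id"]
    CONNECTIONS[user_id] = websocket
    try:
        async for raw in websocket:
            msg = json.loads(raw)              # e.g. {"to": "emma", "text": "Bump!"}
            target = CONNECTIONS.get(msg["to"])
            if target is not None:             # forward only to the intended recipient
                await target.send(json.dumps({"from": user_id, "text": msg["text"]}))
    finally:
        CONNECTIONS.pop(user_id, None)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```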
## Inspiration
While we were doing preliminary research, we found overwhelming evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increase long-term self-worth and decrease depression. While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood, and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that they only lasted for the duration of the online class. With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.

## What it does
Reach is an anonymous forum focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, a tone analyzer blocks posts and replies that contain mean-spirited content from being published, while still letting posts of a venting nature through. There is also reCAPTCHA to block bot spamming.

## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: reCAPTCHA v3 and IBM Watson Tone Analyzer
* Cloud: Heroku

## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)

## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*

## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.

UXer: I feel like I've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the devs work and finding out what kind of things people can code was exciting to see.

## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces.
We'd also work on improving moderation, and perhaps add a voice chat for users who want to call.
Data Dump is the future of IoT bathroom technology. Features include:

* Automatic unrolling of toilet paper
* Unable to manually unroll toilet paper - must go through app (feature)
* Weather report printed on toilet paper
* Status of toilet paper indicated on web app
* Dynamically rendered toilet paper to visualize number of remaining wipes #bigdata
* Blockchain uses number of remaining wipes to mine a block with every wipe #disrupt
* HERE.com API showcases nearby locations where you can buy more toilet paper

Venture capital welcomed

AR experience coming soon

Our startup is hiring and we must recruit ten new people by the end of the hack-'a-thon. Also download our app

## Technologies used
Business side handled; need someone to code the algo's. Written in Flask with an Arduino Uno, but looking to convert to React Native / MEAN stack / Ember.js. 7+ years experience preferred.

Add us on LinkedIn!
Coded prototype on GitHub: <https://github.com/shawnd/foodie>
Design prototype on MarvelApp: <https://marvelapp.com/c1h9c5>

## Inspiration
To solve students' problems around:

1. Not knowing what to cook
2. Not knowing what groceries to buy at a store
3. Not discovering new meals to eat
4. Not having the time to manually create grocery lists based on online recipes
5. Sharing meals and recipes amongst friends

## What it does
1. Helps discover meals that can be cooked within a specified budget (e.g. $20).
2. Curates recipes from online sources into a database.
3. Creates a 'meal list' from recipes you want to cook.
4. Shows approximate prices for each entire meal list.
5. Auto-generates a grocery list that compiles saved recipes and their ingredients.
6. Allows one-click sharing of recipes with friends.

## How we built it
We used a Python Bottle API that communicates with a Firebase distributed database service and returns data to our front-end Ionic interface. The end result was a mobile application. (A minimal sketch of such a Bottle endpoint appears at the end of this writeup.)

## Challenges we ran into
* Scaling down features for the application.

## Accomplishments that we're proud of
* Finishing on time
* Building an application over 24 hours!!
* A fun experience

## What's next for Foodie
Feedback, iteration, feedback, iteration, test, feedback, iteration, release!
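The Bottle endpoint sketch referenced above: it returns recipes whose estimated cost fits within a requested budget. The route name, fields, and in-memory recipe list are placeholders for what the real app reads from Firebase.

```python
# Sketch of a budget-filtered meals endpoint with Bottle.
from bottle import Bottle, request, run

app = Bottle()

RECIPES = [  # placeholder data; the real app pulls this from Firebase
    {"name": "Veggie stir fry", "cost": 8.50},
    {"name": "Chicken curry", "cost": 12.00},
    {"name": "Pasta aglio e olio", "cost": 5.25},
]

@app.get("/meals")
def meals_under_budget():
    # e.g. GET /meals?budget=20
    budget = float(request.query.get("budget", 20))
    return {"meals": [r for r in RECIPES if r["cost"] <= budget]}

if __name__ == "__main__":
    run(app, host="localhost", port=8080)
```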
## Inspiration
We were all very intrigued by machine learning and AI, so we decided to incorporate them into our project. We also wanted to create something involving the webcam, so we tied it all together with ScanAI.

## What it does
ScanAI aims to detect guns in schools and public areas to help alert authorities quickly in the event of a public shooting.

## How we built it
ScanAI is built entirely out of Python. Computer vision Python libraries were used, including OpenCV, facial\_recognition, yolov5 and tkinter. (A compact sketch of the face-recognition piece appears at the end of this writeup.)

## Challenges we ran into
When training models, we ran into issues with a lack of RAM and a lack of training data. We were also challenged by the problem of tackling multiple faces at once.

## Accomplishments that we're proud of
ScanAI is able to take imported files, detect multiple faces at once, and apply facial recognition to all of them. ScanAI is highly accurate and has many features, including Barack Obama facial recognition, object detection, live webcam viewing and scanning, and file uploading functionality.

## What we learned
We all learned a lot about machine learning and its capabilities. Using these modules expanded our knowledge of AI and all its possible uses.

## What's next for ScanAI
Our next steps would be to improve our interface for user friendliness and platform compatibility. We would also want to run our program on a Raspberry Pi to increase its usage flexibility. Lastly, we would want to improve the accuracy of the detection system by feeding it more images and feedback.
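The compact sketch referenced above, assuming the widely used `face_recognition` package (our reading of the "facial_recognition" library listed) together with OpenCV; the image paths and the single reference encoding are placeholders.

```python
# Sketch: recognize multiple faces in one image against a known reference face.
import cv2
import face_recognition

reference = face_recognition.load_image_file("obama.jpg")      # placeholder path
known_encoding = face_recognition.face_encodings(reference)[0]

frame = cv2.imread("classroom.jpg")                             # placeholder path
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# Handle multiple faces in a single frame.
locations = face_recognition.face_locations(rgb)
encodings = face_recognition.face_encodings(rgb, locations)

for (top, right, bottom, left), enc in zip(locations, encodings):
    match = face_recognition.compare_faces([known_encoding], enc)[0]
    label = "known" if match else "unknown"
    cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
    cv2.putText(frame, label, (left, top - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("annotated.jpg", frame)
```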
## Inspiration
There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who are not as fortunate; we wanted to utilize technology to create a positive impact on their quality of life.

## What it does
We utilized a smartphone camera to analyze the surroundings and warn visually impaired people about obstacles in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with, storing their names and facial attributes so they can be recalled later. An Alzheimer's patient can use the face recognition feature to be reminded of who a person is and when they last saw them.

## How we built it
We built our app around **Azure's APIs**. We created a **Custom Vision** network that identified different objects and learned from the data that we collected in the hacking space. The UI of the iOS app was designed to be simple and useful for the visually impaired, so that they could operate it without having to look at it.

## Challenges we ran into
Through the process of coding and developing our idea, we ran into several technical difficulties. Our first challenge was to design a simple UI that visually impaired people could use effectively without getting confused. The next challenge was grabbing the visual feed from the camera and running it through the Azure services fast enough to get a quick response. Another challenging task was creating and training our own neural network with relevant data.

## Accomplishments that we're proud of
We are proud of several accomplishments within our app. First, we are especially proud of setting up a clean UI with two gestures, plus voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network capable of identifying faces and objects.

## What we learned
We learned how to implement the **Azure Custom Vision and Azure Face APIs** in **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, so it was interesting for those of us who hadn't to learn about neural networks.

## What's next for BlindSpot
In the future, we want to make the app hands-free for the visually impaired by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device.
## Inspiration
Since we are all stuck at home, it seemed like a good time to bring out the old games we used to play as kids. We are bringing back the wooden labyrinth game, but with a modern twist.

## What it does
As in the classic wooden labyrinth game, you guide your marble (in this case, your bunny) from start to finish. On your journey, you will have to move the joystick in different directions to avoid the holes and dead ends. So have fun watching your bunny hop from side to side when you tilt, and please don't kill it...

## How we built it
Our A-MAZE-ing labyrinth is built from two Arduino Unos. The Arduinos communicate through Bluetooth transceivers; one acts as the sender while the other acts as the receiver. The sending end uses a joystick shield that controls the labyrinth with the analog sticks. An OLED screen is attached to the joystick for fun animations while the game is running. On the other end, the receiver side uses two servo motors and two QTI sensors. The motors maneuver the labyrinth while the QTI sensors sense the marble. If it falls into the wrong hole, one sensor sends a signal to play a sad/angry emoji. When the marble successfully makes it to the end, a different sensor tells the OLED to play the winning animation.

## Challenges we ran into
While creating this project, we ran into both hardware and software problems. On the software side, the two boards would not talk to each other through the Bluetooth modules: information sent from the sender side didn't match what arrived on the receiving end, and this problem took a bit longer than anticipated to fix. On the hardware side, the main problem was getting the QTI sensors to detect the marble moving at a fast pace. We tackled this by creating a few tubes to guide the marble when it dropped into a hole.

## Accomplishments that we're proud of
We are proud that we were able to complete the model of our labyrinth. Besides that, we are both satisfied that we completed our first hackathon.

## What we learned
We learned that combining items together can cause a lot of problems. When adding the OLED alongside the motors and detection, any delays added for the animations had to finish before anything else could run.

## What's next for our A-MAZE-ing Labyrinth
In the future, we want to redesign our model to make it more visually appealing to the user. Looking even further down the line, it would be a huge achievement to see our product sold in stores and online to beginners and coders of all ages.
## Inspiration
Snore or get poured on yo pores. Coming into grade 12, the decision to go to a hackathon at this time was super ambitious. We knew that coming to this hackathon we needed to be fully focused 24/7. Problem being, we both procrastinate and push things to the last minute, so we created a project to help us with exactly that.

## What it does
It's a project with 3 stages for getting our attention. In the first stage, we use a voice command and a text message to get our own attention. If I'm still distracted, we move to stage two, where it sends a more serious voice command and then a phone call to my phone, since I'm probably on my phone anyway. If I decide to ignore the phone call, the project gets serious and commences the final stage, where we bring out the big guns. When you ignore all 3 stages, we send a command that triggers the water gun and shoots the distracted victim, which is myself. If I try to resist and run away, the water gun automatically tracks me and shoots me wherever I go.

## How we built it
We built it using fully recyclable materials; as the future innovators of tomorrow, our number one priority is the environment. We made our foundation entirely from scrap cardboard, chopsticks, and hot glue. The turret was built using the hardware kit we brought from home, with 3 servos mounted on stilts to hold the water gun in the air. On the software side, we hacked a MindFlex to read brainwaves and activate the water gun trigger. We used a string mechanism to pull the trigger and OpenCV to track the user's face. (A stripped-down sketch of the tracking loop appears at the end of this writeup.)

## Challenges we ran into
One challenge was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass; we came up with many different solutions, but none of them were efficient. The data was delayed from reading and writing back and forth, and the camera display speed was slowing down because of it, making the tracking worse. We eventually carried through and figured out a solution.

## Accomplishments that we're proud of
We are proud of our engineering capabilities in creating a turret from spare scraps. Combining the Arduino and the MindFlex was something we'd never done before, and making it work was such a great feeling. Using Twilio to send messages and calls was also new to us, but getting familiar with its capabilities opened a new door of opportunities for future projects.

## What we learned
We learned many things from using Twilio and hacking into the MindFlex; we learned a lot more about electronics and circuitry through this, and about procrastination. After creating this project, we learned discipline, as we never missed a deadline ever again.

## What's next for You snooze you lose. We dont lose
Coming into this hackathon, we had a lot of ambitious ideas that we had to scrap due to the lack of materials, including a life-size human robot, although we concluded with an automatic water gun turret controlled by brain signals. We want to keep expanding on brain-signal control, as this was our first hackathon trying it out.
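The stripped-down tracking-loop sketch referenced above, assuming OpenCV's bundled Haar cascade and an Arduino listening on a serial port; the port name, baud rate, and angle mapping are placeholders rather than the project's actual values.

```python
# Sketch: track the nearest face with OpenCV and send a servo angle over serial.
import cv2
import serial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # placeholder port
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        center_x = x + w // 2
        # Map the face's horizontal position to a 0-180 servo angle.
        angle = int(center_x / frame.shape[1] * 180)
        arduino.write(f"{angle}\n".encode())
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```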
## Summary
OrganSafe is a revolutionary web application that tackles the growing health & security problem of black market trading of donated organs. The verification of organ recipients leverages the Ethereum blockchain to provide critical security and prevent improper allocation of such a pivotal resource.

## Inspiration
The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is significant demand for solving this problem, which impacts thousands of people every year who are struggling to find a donor for a desperately needed transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place!

## What it does
OrganSafe facilitates organ donations with authentication via the Ethereum blockchain. Users start by registering on OrganSafe with their health information and desired donation, and the application's algorithms automatically match users based on qualifying priority for available donations. Hospitals can easily track organ donations and record when recipients receive their donation.

## How we built it
This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity + Web3.js for the Ethereum blockchain. (A bare-bones sketch of such Flask endpoints appears at the end of this writeup.)

## Challenges we ran into
Some of the biggest challenges we ran into were connecting the different components of our project. We had three major components that were developed separately and needed to be integrated together: the frontend, the backend, and the blockchain. This turned out to be the biggest hurdle in our project. Dealing with the API endpoints and the Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of backend development and setting up API endpoints: without persistent data storage in the backend, we implemented basic storage using localStorage in the browser to keep the user experience working. This allowed us to implement the majority of our features as a temporary fix for our demonstration. Other challenges included figuring out certain syntactic elements of the new technologies we dealt with (such as using Hooks and state in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology!

## Accomplishments that we're proud of
One notable accomplishment is that every member of our group worked with new technology that they had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3 technology such as the Ethereum blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have the time to complete due to the scope of TreeHacks, we were still proud to be able to put together a minimum viable product in the end!
## What we learned
* Full-stack web development (with React.js frontend development and Python Flask backend development)
* Web3 & security (with Solidity & the Ethereum blockchain)

## What's next for OrganSafe
After TreeHacks, OrganSafe will first look to tackle some of the areas that we did not get to finish during the hackathon. Our first step would be to complete the full-stack web application we intended to build by fleshing out our backend and moving forward from there. Persistent user data in a database would also allow users and donors to continue to use the site beyond an individual session. Furthermore, scaling both the site and the blockchain would allow greater usage by a larger audience, allowing more recipients to be matched with donors.
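The bare-bones Flask sketch referenced above. The route names, fields, and in-memory list are placeholders; the real design calls for persistent storage plus Ethereum-based verification through Solidity/Web3, which is not shown here.

```python
# Sketch: register recipients and return the highest-priority match for an organ.
from flask import Flask, jsonify, request

app = Flask(__name__)
RECIPIENTS = []  # stand-in for persistent storage

@app.post("/recipients")
def register_recipient():
    data = request.get_json()
    recipient = {
        "name": data["name"],
        "organ_needed": data["organ_needed"],
        "priority": data.get("priority", 0),
    }
    RECIPIENTS.append(recipient)
    return jsonify(recipient), 201

@app.get("/matches/<organ>")
def match(organ: str):
    # Highest-priority waiting recipient for a given organ.
    candidates = [r for r in RECIPIENTS if r["organ_needed"] == organ]
    candidates.sort(key=lambda r: r["priority"], reverse=True)
    return jsonify(candidates[:1])

if __name__ == "__main__":
    app.run(debug=True)
```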
## Inspiration
My grandmother is from Puerto Rico, and she only speaks Spanish. She recently tried applying for a passport, but she could not fill out the document because she does not understand English. After doing some research, we found that about 45 million people in America alone face a similar issue, and worldwide that number is 6.5 billion, especially considering America is the most sought-after country in the world to live in. But now, there is a solution.

## What it does
The cross-platform app allows users to upload important documents and have them converted to their native language. The user responds, either manually or by voice, in their chosen language, and the document, both questions and answers, is converted back to English, ready for download and export.

## How we built it
The front end uses Flutter and Dart. The back end uses Spring with Java, the Google Cloud API, Heroku, and PDFBox.

## Challenges we ran into
Initially, we wanted to be able to upload any document, have the software scan and interpret it, and convert it to any selected language. For our MVP, however, we successfully implemented 4 languages: Spanish, English, Dutch, and German. Another challenge was finding a way to modify PDFs in a reliable way. We wanted to do one form really well as opposed to multiple forms that were jerky, so the visa application was our only form.

## Accomplishments that we're proud of
Creating a clean-looking app that is simple yet extremely effective. The language updates according to what language the user has set in their settings, so it already has the potential to help people apply for visas, jobs, and passports, and fill out tax documents.

## What we learned
First, we learned that making a top-notch UI is rather difficult. It is easy to implement a clean-looking app, but much more challenging to build next-level animated designs. Aside from that, we learned that we can make a significant impact on the community in a very short amount of time.

## What's next for Phillinda.space
## Inspiration
Determined to create a project that could make impactful change, we sat and discussed together as a group our own lived experiences, thoughts, and opinions. We quickly realized how the lack of thorough sexual education in our adolescence greatly impacted each of us as we made the transition to university. Furthermore, we began to really see how this kind of information wasn't readily available to female-identifying individuals (and others who would benefit from this information) in an accessible and digestible manner. We chose to name our idea 'Illuminate' as we are bringing light to a very important topic that has been in the dark for so long.

## What it does
This application is a safe space for women (and others who would benefit from this information) to learn more about themselves and their health regarding their sexuality and relationships. It covers everything from menstruation to contraceptives to consent. The app also includes a space for women to ask questions, find which products are best for them and their lifestyles, and a way to find their local sexual health clinics. Not only does this application shed light on a taboo subject, it empowers individuals to make smart decisions regarding their bodies.

## How we built it
Illuminate was built using Flutter as our mobile framework in order to support both iOS and Android. We learned the fundamentals of the Dart language to fully take advantage of Flutter's fast development and created a functioning prototype of our application.

## Challenges we ran into
As individuals who had never used either Flutter or Android Studio, the learning curve was quite steep. We were unable to create much of anything for a long time as we struggled quite a bit with the basics. However, with lots of time, research, and learning, we quickly built up our skills and were able to carry out the rest of our project.

## Accomplishments that we're proud of
In all honesty, we are so proud of ourselves for being able to learn as much as we did about Flutter in the time that we had. We really came together as a team and created something we are all genuinely proud of. This will definitely be the first of many stepping stones in what Illuminate will do!

## What we learned
Despite this being our first time, by the end of all of this we learned how to successfully use Android Studio and Flutter, and how to create a mobile application!

## What's next for Illuminate
In the future, we hope to add an interactive map component that will show users where their local sexual health clinics are using a GPS system.
## Inspiration
The Telus prompt really pushed us to look at the problem from a unique angle.

## What it does
A machine learning algorithm detects the emotions of the user and sends a JSON object containing the detected mood and the percentage accuracy of the prediction. The p5.js sketch takes that JSON object and changes the amount of Perlin noise in the drawing to attempt to abstract the emotion into some kind of visual representation.

## How we built it

## Challenges we ran into
Libraries, dependencies, and sometimes even storage space were some of the many issues that we had to face when working on this project. Along with being a complete novice when it comes to neural networks, I also had to work with Processing, which I personally had only heard of once before.

## Accomplishments that we're proud of
After many hours, the neural network finally connected to the training dataset and was fully operational.

## What's next for Mirror Journal
Our next steps would be to create a platform that integrates all these individual moving parts so they flow as one.
## Inspiration
After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants.

## What it does
Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players' plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants.

## How we built it
### Back-end:
The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for WebSocket support so that we could have multiplayer; this messed us up for hours and hours until we finally got it working. Aside from this, an Arduino reads the moisture of the soil and the brightness of the surroundings, and a camera takes a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain we developed an agent to pass all of the Arduino info to the front end and manage state, and for storage we used MongoDB to hold all of the data needed. (A pared-down sketch of the Socket.IO relay appears at the end of this writeup.)

### Front-end:
The front end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players.

## Challenges we ran into
We had a lot of difficulty setting up Socket.IO and connecting the API to the front end and the database.

## Accomplishments that we're proud of
We are incredibly proud of integrating web sockets between the frontend and backend and using the Arduino data from the sensors.

## What's next for Poképlants
* Since the game was designed with a multiplayer experience in mind, we want to add more social capabilities by creating a friends list and leaderboard
* Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help
* Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
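The pared-down Socket.IO relay sketch referenced above, assuming Flask-SocketIO on the backend; the event names and payload shapes are placeholders, and the real project adds MongoDB persistence and the LangChain agent on top of this.

```python
# Sketch: broadcast sensor updates and battle events to all connected players.
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

plant_state = {}  # latest sensor readings keyed by plant id (stand-in for MongoDB)

@socketio.on("sensor_update")
def handle_sensor_update(data):
    # e.g. {"plant_id": "blahaj-cactus", "moisture": 0.42, "light": 0.77}
    plant_state[data["plant_id"]] = data
    # Push the new state to every connected player for the multiplayer view.
    emit("plant_state", data, broadcast=True)

@socketio.on("battle_request")
def handle_battle(data):
    emit("battle_started", {"players": [data["from"], data["to"]]}, broadcast=True)

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```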
## Inspiration
Fashion has always been a world that seemed far away from tech. We want to bridge this gap with StyleList, which understands your fashion within a few swipes and makes personalized suggestions for your daily outfits. When you and I visit the Nordstrom website, we see the exact same product page, but we could have completely different styles and preferences. With machine intelligence, StyleList makes it convenient for people to figure out what they want to wear (you simply swipe!) and it also allows people to discover a trend that they favor!

## What it does
With StyleList, you don't have to scroll through hundreds of images and filters and search so many different websites to compare clothes. Rather, you can enjoy a personalized shopping experience with a simple movement of your fingertip (a swipe!). StyleList shows you a few clothing items at a time. Like it? Swipe left. No? Swipe right! StyleList will learn your style and show you clothes similar to the ones you favored, so you won't need to waste your time filtering clothes. If you find something you love and want to own, just click "Buy" and you'll have access to the purchase page.

## How I built it
We use a web scraper to get clothing item information from Nordstrom.ca and then feed these data into our backend. Our backend is a machine learning model trained on a bank of keywords; it picks the next items after a swipe based on the cosine similarity between candidate items and the liked items (a small illustration of this step appears at the end of this writeup). The interaction with the clothing items and the swipes happens on our React frontend.

## Accomplishments that I'm proud of
Good teamwork! Connecting the backend, frontend, and database took us more time than we expected, but now we have a full-stack project completed (starting from scratch 36 hours ago!).

## What's next for StyleList
In the next steps, we want to help people who wonder "what should I wear today" in the morning with a simple one-click page: they fill in the weather and their plan for the day, and StyleList provides a suggested outfit from head to toe!
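The small illustration referenced above: ranking not-yet-seen items by their average cosine similarity to the liked items. The TF-IDF keyword vectors and the four item descriptions are placeholders for the scraped Nordstrom data.

```python
# Sketch: recommend unseen items most similar to the user's liked items.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "black slim-fit denim jacket",
    "floral summer midi dress",
    "oversized black hoodie streetwear",
    "pastel knit cardigan cozy",
]
liked = [0, 2]        # indices the user swiped "like" on
candidates = [1, 3]   # items not yet shown or decided

vectors = TfidfVectorizer().fit_transform(items)
scores = cosine_similarity(vectors[candidates], vectors[liked]).mean(axis=1)

# Rank remaining items by average similarity to everything the user liked.
ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
print(ranked)
```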
## Inspiration
A lot of us do not think about how convenient and lucky it is to be able to speak, listen, and write. A huge number of deaf people use American Sign Language (ASL) to communicate with their peers, friends, and family. Did you know that although deaf people may have no trouble communicating any idea in ASL, they can have difficulty reading letters, because they never learned to connect letters with sounds? On the other hand, children (4-10) nowadays use iPads and iPhones to watch what they want and play what they want; imagine if parents could teach them to recognize hand gestures as a form of communication. Wouldn't that be an awesome way to learn something?

## What it does
ASLBoard provides a platform for deaf people, people who suffer from dyslexia, and young kids to have accessibility and to make sure that they do not feel left out of society. ASLBoard allows users to have an easy time chatting with their peers or to simply have a learning experience. ASLBoard follows the QWERTY format to make typing easier, and whenever the user taps on an ASL sign icon, the letter that sign represents appears in the text box automatically.

## How I built it
I used Xcode as the IDE to build my application and used Apple documentation references on how to implement a keyboard extension. I had to sketch and edit to see which ASL sign translates into which letter. After sketching, I coded the layout of the keyboard, calculated the distances between the sign icons, and used CocoaPods for the keyboard constraints.

## Challenges I ran into
I should have been more organized in my work process; I wasted a lot of time trying to think of an optimized solution. Learning how to use the iOS keyboard extension and working without the storyboard in Xcode were also challenges. Lastly, timing myself.

## Accomplishments that I'm proud of
I am proud of being able to help out those who are in need and to be able to do something for people with disabilities.

## What I learned
I learned how iOS development and debugging for that platform work. I am thankful to have learned what machine learning is all about and to understand cloud infrastructure more deeply. And to save time.

## What's next for ASLBoard
I would implement all the different sign languages (British, Indian, etc.), and add another option for the keyboard where you can also input numbers and other characters.
## Overview
Crop diseases pose a significant threat to global food security, especially in regions lacking proper infrastructure for rapid disease identification. To address this challenge, we present a web application that leverages the widespread adoption of smartphones and cutting-edge transfer learning models. Our solution aims to streamline the process of crop disease diagnosis, providing users with insights into disease types, suitable treatments, and preventive measures.

## Key Features
* **Disease Detection:** Our web app employs advanced transfer learning models to accurately identify the type of disease affecting plants. Users can upload images of afflicted plants for real-time diagnosis.
* **Treatment Recommendations:** Beyond disease identification, the app provides actionable insights by recommending suitable treatments for the detected diseases. This feature aids farmers and agricultural practitioners in promptly addressing plant health issues.
* **Prevention Suggestions:** The application doesn't stop at diagnosis; it also offers preventive measures to curb the spread of diseases. Users receive valuable suggestions on maintaining plant health and preventing future infections.
* **Generative AI Interaction:** To enhance the user experience, we've integrated generative AI capabilities for handling additional questions users may have about their plants. This interactive feature provides users with insightful information and guidance.

## How it works
* **Image Upload:** Users upload images of plant specimens showing signs of disease through the web interface.
* **Transfer Learning Model:** The uploaded images undergo real-time analysis using an advanced transfer learning model, enabling accurate identification of diseases with the help of the PlantID API (see the sketch after this list).
* **Treatment and Prevention Recommendations:** Once the disease is identified, the web app provides detailed information on suitable treatments and preventive measures, empowering users with actionable insights.
* **Generative AI Interaction:** Users can engage with generative AI to seek additional information, ask questions, or gain knowledge about plant care beyond disease diagnosis.
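As a rough sketch of the transfer-learning idea referenced above (the backbone choice, image size, and data paths are our assumptions, and the production app additionally relies on the PlantID API), fine-tuning a pretrained network on labelled leaf images might look like this:

```python
# Sketch: fine-tune a frozen MobileNetV2 backbone on a folder of labelled leaf images.
import tensorflow as tf

IMG_SIZE = (224, 224)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/leaves", image_size=IMG_SIZE, batch_size=32)  # placeholder path
num_classes = len(train_ds.class_names)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # map [0, 255] -> [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```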
## Inspiration
Inspired by the Making the Mundane Fun prize category, our team immediately focused on helping users achieve long-term goals. We observed a surge in cooking at the start of lockdown, and wanted to keep that momentum going with a simple yet entertaining app to inspire users to enjoy cooking in the long term.

## What it does
With the login system, each user stores their own personalized information. The homepage allows users to track when they cooked and when they didn't, showing a visual of their daily cooking routine. Weekly challenges add interest to cooking, randomly generating an ingredient of the week to cook with, which earns you bonus points to go toward purchasing more plants. The garden is a fun game for users to design a personal space. The more you level up, the more plants you can add.

## How we built it
We built the app through Android Studio using Java. The UI images were created using Adobe Illustrator, and authentication was achieved through a Firebase database.

## Challenges we ran into
Although the project was relatively smooth sailing, we did run into some challenges along the way. The biggest challenge was implementing the Firebase database. We also experienced some difficulties with adding a clickable image grid to our garden pop-up.

## Accomplishments that we're proud of
First, our team was rather unfamiliar with Android Studio, and we learned a lot about implementing a project in it. As well, Firebase was a new tool for our team; although it was a challenge to get the implementation running, we are proud of the authentication. Finally, this was the first time the team member who created the UI images had done so, and we are proud of their achievement!

## What we learned
Overall, we learned a lot about Firebase, UI creation, and implementing projects in Android Studio.

## What's next for Harvest
In the future, we hope to fully implement user-associated data and a platform to share latest creations with friends, to further personalize the app and include a broader community aspect.
## Inspiration
Too many times have broke college students looked at their bank statements and lamented how much money they could've saved if they had known about alternative purchases or savings earlier.

## What it does
SharkFin helps people analyze and improve their personal spending habits. SharkFin uses bank statements and online banking information to determine areas in which the user could save money. We identified multiple different patterns in spending that we then provide feedback on to help the user save money and spend less. (A simplified sketch of this kind of recurring-spending detection appears at the end of this writeup.)

## How we built it
We used Node.js to create the backend for SharkFin, and we used the Viacom DataPoint API to manage multiple other APIs. The front end, in the form of a web app, is written in JavaScript.

## Challenges we ran into
The Viacom DataPoint API, although extremely useful, was something brand new to our team, and there were few online resources we could look at. We had to understand completely how the API simplified and managed all the APIs we were using.

## Accomplishments that we're proud of
Our data processing routine is highly streamlined and modular, and our statistical model identifies and tags recurring events, or "habits," very accurately. By using the DataPoint API, our app can very easily accept new APIs without structurally modifying the back end.

## What we learned

## What's next for SharkFin
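The habit-tagging sketch referenced under "What it does": a simplified pandas version that flags merchants with roughly evenly spaced, similarly priced charges. The thresholds and sample transactions are illustrative, not SharkFin's actual statistical model.

```python
# Sketch: tag recurring charges ("habits") in a small transaction table.
import pandas as pd

tx = pd.DataFrame({
    "merchant": ["Coffee Hut", "Coffee Hut", "Coffee Hut",
                 "BookStore", "StreamFlix", "StreamFlix"],
    "amount":   [4.50, 4.75, 4.50, 32.00, 12.99, 12.99],
    "date": pd.to_datetime(["2023-01-03", "2023-01-10", "2023-01-17",
                            "2023-01-12", "2023-01-05", "2023-02-05"]),
})

def is_recurring(group: pd.DataFrame) -> bool:
    if len(group) < 2:
        return False
    gaps = group.sort_values("date")["date"].diff().dropna().dt.days
    # Roughly evenly spaced and similarly priced -> call it a habit.
    return gaps.std(ddof=0) < 3 and group["amount"].std(ddof=0) < 1.0

habits = tx.groupby("merchant").filter(is_recurring)
habit_spend = habits.groupby("merchant")["amount"].sum()
print(habit_spend)
```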
## Inspiration
As a traveller, resident, or explorer, it's often hard to know what's happening locally and precisely where. It's even harder to start a conversation with those people. This progressive web app aims to solve this issue with a location-based, thread-style social media platform.

## What it does
View conversations, threads, and news on a map. Visit Australia and see what's happening over there, or just browse specific locations in your local area to check out what's happening. See a huge red patch? That means there's a lot of activity going on there. Search for the best content visually and explore the world as we know it.

## How we built it
React framework, using OpenStreetMap, the Leaflet API, and a heatmaps library.

## Challenges we ran into
It was a learning experience for all of us, and we learnt a lot. Many of us did not know much about the React framework, and much of the time was spent teaching others. It was worth it, however!

## Accomplishments that we're proud of
The heat maps were pretty cool.

## What we learned
We learnt that teamwork matters.

## What's next for News Radar
Implement the backend and build some momentum locally for some users.

# ⚠️ Try it out on your mobile devices ⚠️
## Inspiration
We wanted to work on a project that 1) dealt with maps, 2) could benefit any urban environment regardless of how others view it, and 3) had a sense of intimacy. We found many of our initial ideas to be too detached: solutions that lacked a personal connection with the communities they aimed to serve. Then we came up with the idea of an application where users could simply look at a map and see all the areas that are recommended by locals, rather than popular locations that overshadow smaller and underrated areas in a community. From this, we expanded our idea to improve upon inaccurate and sometimes predatory apps claiming to protect users from dangerous incidents, yet only warning users when they are in proximity to a "high-crime" area. By simply showing how often crime really happens, and in a much more realistic area, users have more knowledge and freedom to decide and understand what's going on in the local community around them. This, combined with local recommendations, lets users get the "word on the street": they would hear it through the grapevine.

## What it does
Grapevine is an application designed to make it easier for people to get the inside scoop on an area, based on local reports and recommendations. Locals can anonymously submit incident or recommendation reports, with the corresponding mark showing up on the map. Visitors can then search a location and get a map of their immediate surroundings that shows any reports in the area. They can also specify the radius and filter for certain types of reports (an illustration of the underlying distance check appears at the end of this writeup). Reports also have an upvote/downvote system.

## How we built it
We knew we wanted to build a web application, and so we decided to try Node.js and Express.js as our backend framework. Given this, we also decided to use MongoDB to complete the well-known ME(no React)N tech stack, and also because of its popularity and reputation for being relatively easy to set up and use (which it was). Our frontend was built very simply with HTML/CSS. For the maps on our frontend, we used Leaflet.js, an interactive map JavaScript library that allowed us to easily display user recommendations and reports.

## Challenges we ran into
This was our first time using MongoDB/Express.js/Node.js, so there were many difficulties learning these tools on the fly. There were a lot of complications involving missing forward slashes, and a good portion of our time was spent trying to figure out how to route pages. Fortunately, we were able to adapt and create a solid code structure that made the rest of our working process easier. We also thought that, given how GitHub is way easier when people aren't making contributions every 30 minutes, it would be better to use VSCode's Live Share feature to work collaboratively at the same time. However, this turned out to be more difficult than expected, especially when only the host can see what their code changes do. Despite this, we were able to push through and develop a finished product that does exactly what we envisioned it to do.

## Accomplishments that we're proud of
We're very proud of being able to split the work efficiently and stay organized on top of all of our contributions (given that we were using Live Share instead of Git). We are also proud of being able to implement the tech stack and use it in an application. We also successfully used Leaflet, an interactive map library, for the first time, which was a new learning experience for us.
## What we learned Since this was a full-stack project that included everything from backend to frontend, there were many aspects that some of us had never worked with before. Learning to use the resources available to us online, reading documentation, and simply using trial and error until something worked taught us a lot about how to build an application with this tech stack. ## What's next for Grapevine We would like to scale this internationally and optimize the search function. It would also be good to create a way to verify locals vs non-locals, perhaps through user login and personal information authentication (but still give the option of posting anonymously). We also have ideas of adding routing to the map, so that a user could input a destination and see local reports and recommendations along their route. Finally, we would like to flesh out the upvote system (differentiate between local/visitor feedback).
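To make Grapevine's radius filtering concrete, here is a minimal sketch of how reports with locations can be stored and queried by distance using MongoDB's geospatial features. It is written with Python's pymongo rather than the team's Node/Express stack, and the database, collection, and field names are assumptions, not the project's actual schema.

```python
# Hypothetical sketch of radius-based report filtering with MongoDB geospatial
# queries (pymongo stand-in for the project's Node/Express backend).
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
reports = client["grapevine"]["reports"]
reports.create_index([("location", GEOSPHERE)])  # 2dsphere index enables $near

def add_report(kind, text, lon, lat):
    """Store an anonymous report with a GeoJSON point and a vote counter."""
    reports.insert_one({
        "kind": kind,                      # e.g. "incident" or "recommendation"
        "text": text,
        "votes": 0,
        "location": {"type": "Point", "coordinates": [lon, lat]},
    })

def nearby_reports(lon, lat, radius_m=500, kind=None):
    """Return reports within radius_m metres, optionally filtered by type."""
    query = {"location": {"$near": {
        "$geometry": {"type": "Point", "coordinates": [lon, lat]},
        "$maxDistance": radius_m,
    }}}
    if kind:
        query["kind"] = kind
    return list(reports.find(query))

if __name__ == "__main__":
    add_report("recommendation", "Great hole-in-the-wall noodle shop", -79.39, 43.66)
    print(nearby_reports(-79.39, 43.66, radius_m=1000))
```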
## Inspiration It's pretty common that you will come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that you'll throw away food that used to be perfectly good. And that's only the food you and I are wasting. What about everything that Walmart or Costco trashes on a day-to-day basis? Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year. About 30 percent of food in American grocery stores is thrown away, and US retail stores generate about 16 billion pounds of food waste every year. A solution that could ensure that no food is needlessly wasted would change the world. ## What it does PantryPuzzle scans images of food items, extracts their expiration dates, and adds them to an inventory of items that users can manage. When food nears expiration, it notifies users to incentivize action. The app also suggests actions to take with any particular food item, like recipes that use the items in a user's pantry according to their preferences. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers. ## How we built it We built it with a React frontend and a Python Flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of Google Vision API's object detection and optical character recognition (OCR), respectively. For the recipe recommendation feature, we used OpenAI's GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used Nominatim (OpenStreetMap). ## Challenges we ran into * Getting React to properly display our data * Storing multiple values in the database at once (food item, expiration date) * Displaying all Firebase elements (doing a proof of concept with console.log) * Donated food being displayed before even clicking the button (fixed by using a function for the onclick handler) * Getting the user's location to be accessed and stored as a place, not just longitude/latitude * Needing to log the day a food item was acquired * Deleting an item when it expired * Syncing a user's stash with donations (an item shouldn't stay listed if the user no longer wants to donate it) * Deleting food from Firebase (tricky because of the document IDs) * Predicting when unlabeled foods expire (using OpenAI) ## Accomplishments that we're proud of * We were able to get a good computer vision algorithm that detects the type of food and a very accurate expiry date. * Integrating the API that helps us figure out our location from latitude and longitude. * Using a scalable database like Firebase, and completing all the features we originally wanted regarding generative AI, computer vision, and efficient CRUD operations. ## What we learned We learned how big a problem food waste disposal is, and were surprised to find out how much food is being thrown away. ## What's next for PantryPuzzle We want to add user authentication, so every user in every home and grocery has access to their personal pantry, and also maintains their access to the global donations list to search for food items others don't want.
We want to integrate this app with the Internet of Things (IoT) so refrigerators can come with this product built in to detect food and its expiry date. We also want to add a feature where, if the expiry date is not visible, the app can predict the likely expiration date using computer vision (texture and color of food) and generative AI.
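As a rough illustration of the expiration-date extraction step described in "How we built it", the sketch below runs Google Cloud Vision OCR on a photo and pulls out the first date-looking string. The regex, the accepted date formats, and the function name are assumptions; PantryPuzzle's tuned pipeline is not shown here.

```python
# Hedged sketch: OCR a label photo with Google Vision and parse a date from it.
import re
from datetime import datetime
from google.cloud import vision

def extract_expiry(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)  # OCR pass
    text = response.text_annotations[0].description if response.text_annotations else ""
    # Look for dates like 2024-05-01, 05/01/2024 or 05/01/24 (assumed formats)
    match = re.search(r"\d{4}-\d{2}-\d{2}|\d{1,2}/\d{1,2}/\d{2,4}", text)
    if not match:
        return None
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%m/%d/%y"):
        try:
            return datetime.strptime(match.group(), fmt).date()
        except ValueError:
            continue
    return None

if __name__ == "__main__":
    print(extract_expiry("milk_carton.jpg"))
```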
losing
## Inspiration Many people on our campus use an app called When2Meet to schedule meetings, but its UI is terrible, its features are limited, and overall we thought it could be done better. We brainstormed what would make When2Meet better and thought the biggest things would be a simple new UI as well as a proper account system to see all the meetings you have. ## What it does Let's Meet is an app that allows people to schedule meetings effortlessly. "Make an account and make scheduling a breeze." A user can create a meeting and share it with others. Then everyone with access can choose which times work best for them. ## How we built it We used a lot of Terraform! We really wanted to go with a serverless microservice architecture on AWS and thus chose to deploy via AWS. Since we were already using Lambdas for the backend, it made sense to add Amplify for the frontend, Cognito for logging in, and DynamoDB for data storage. We wrote over 900 lines of Terraform to get our Lambdas deployed, API Gateway properly configured, permissions correct, and everything else we use in AWS configured. Other than AWS, we utilized React with Ant Design components. Our Lambdas ran on Python 3.12. ## Challenges we ran into The biggest challenge we ran into was a bug with AWS. For roughly 5 hours we fought intermittent 403 responses. Initially we had an authorizer on the API Gateway, but after a short time we removed it. We confirmed it was deleted by searching for it with the CLI. We double-checked in the web console because we still suspected the authorizer, but it wasn't there either. This ended up requiring everything around the API Gateway to be manually deleted and rebuilt. Thanks to Terraform, restoring everything was relatively easy. Another challenge was using Terraform and AWS themselves. We had almost no knowledge of them going in, and coming out we know there is still much more to learn, but with these skills we feel confident setting up almost anything in AWS. ## Accomplishments that we're proud of We are so proud of our deployment and cloud architecture. We think that having built a cloud project of this scale in this time frame is no small feat. Even with some challenges, our determination to complete the project helped us get through. We are also proud of our UI as we continue to strengthen our design skills. ## What we learned We learned that implementing Terraform can sometimes be difficult depending on the scope and complexity of the task. This was our first time using a component library for frontend development, and we now know how to design, connect, and build an app from start to finish. ## What's next for Let's Meet We would add more features such as syncing meetings to Google Calendar. More customizations and features such as location would also be added so that users can communicate where to meet through the web app itself.
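To ground the "Python 3.12 Lambdas behind API Gateway, backed by DynamoDB" architecture, here is a minimal sketch of what one create-meeting handler could look like. The table name, item fields, response shape, and the assumption of a Cognito authorizer on the request context are all illustrative, not Let's Meet's actual code.

```python
# Hypothetical Lambda handler: create a meeting record in DynamoDB.
import json
import uuid
import boto3

table = boto3.resource("dynamodb").Table("meetings")  # assumed table name

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    # Assumes a Cognito authorizer attached to the REST API Gateway stage.
    owner = event["requestContext"]["authorizer"]["claims"]["email"]
    meeting = {
        "meeting_id": str(uuid.uuid4()),
        "title": body.get("title", "Untitled meeting"),
        "owner": owner,
        "slots": body.get("slots", []),   # candidate time slots
        "responses": {},                  # username -> chosen slots
    }
    table.put_item(Item=meeting)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"meeting_id": meeting["meeting_id"]}),
    }
```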
## Inspiration We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun one to build in. ## What it does Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams. ## How we built it We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application. ## Challenges we ran into This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application! ## What we learned We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, as well as the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers. ## What's next for Discotheque If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music), similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure a steady supply of good quality music.
# Pythia Camera Check out the [github](https://github.com/philipkiely/Pythia). ![Pythia Diagram](https://raw.githubusercontent.com/philipkiely/Pythia/master/images/PythiaCamera.jpg) ## Inspiration #### Original Idea: Deepfakes and more standard edits are a difficult threat to detect. Rather than reactively analyzing footage to attempt to find the marks of digital editing, we sign footage on the camera itself to allow the detection of edited footage. #### Final Idea: Using the same technology, but with a more limited threat model allowing for a narrower scope, we can create the world's most secure and intelligent home security camera. ## What it does Pythia combines robust cryptography with AI video processing to bring you a unique home security camera. The system notifies you in near-real-time of potential incidents and lets you verify by viewing the video. Videos are signed by the camera and the server to prove their authenticity in courts and other legal matters. Improvements of the same technology have potential uses in social media, broadcasting, political advertising, and police body cameras. ## How we built it * Records video and audio on a camera connected to a basic Wi-Fi-enabled board, in our case a Raspberry Pi 4 At regular intervals: * Combines video and audio into a .mp4 file * Signs the combined file * Sends the file and metadata to AWS ![Signing](https://raw.githubusercontent.com/philipkiely/Pythia/master/images/ChainedRSASignature.jpg) On AWS: * Verifies the signature and adds a server signature * Uses Rekognition to detect violence or other suspicious behavior * Uses Rekognition to detect the presence of people * If there are people with detectable faces, uses Rekognition to analyze the faces * Uses SMS to notify the property owner about the suspicious activity and links a video clip ![AWS](https://raw.githubusercontent.com/philipkiely/Pythia/master/images/AWSArchitecture.jpg) ## Challenges we ran into None. Just Kidding: #### Hardware Raspberry Pi * All software runs on the Raspberry Pi * WiFi issues * Compatibility issues * Finding a screwdriver The hardware lab didn't have the type of sensors we were hoping for, so no heat map :(. #### Software * Continuous batched recording * Creating complete .mp4 files * Processing while recording #### Web Services * An asynchronous architecture has lots of race conditions ## Accomplishments that we're proud of * Complex AWS deployment * Chained RSA signature * Proper video encoding and processing, combining separate frame and audio streams into a single .mp4 ## What we learned #### Bogdan * Gained experience designing and implementing a complex, asynchronous AWS architecture * Practiced with several different Rekognition functions to generate useful results #### Philip * Video and audio encoding is complicated, but fortunately we have great command-line tools like `ffmpeg` * Watchdog is a Python library for watching folders for a variety of events and changes. I'm excited to use it for future automation projects. * A Raspberry Pi never works right the first time ## What's next for Pythia Camera A lot of work is required to fully realize our vision for Pythia Camera as a whole solution that resists a wide variety of much stronger threat models, including state actors. Here are a few areas of interest: #### Black-box resistance: * A camera pointed at a screen will record and verify the video from the screen * Solution: Capture IR footage to create a heat map of the video and compare the heat map against Rekognition's object analysis (people should be hot, objects should be cold, etc.)
* Solution: Use a laser dot projector like the iPhone's Face ID sensor to measure distance and compare to machine learning models using Rekognition #### Flexible Cryptography: * Upgrade the chained RSA signature to a chained RSA additive map signature to allow for combining videos * Allow for basic edits like cuts and filters while recording a signed record of changes #### More Robust Server Architecture: * Better RBAC for online assets * Multi-region failover for constant operation
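The chained-signature idea at the core of Pythia can be shown with a short sketch: each clip is signed over its own bytes plus the previous clip's signature, so clips cannot be reordered or dropped without breaking the chain. This is a generic reading of the diagram using the `cryptography` library, not Pythia's exact scheme or key management.

```python
# Illustrative chained RSA signing/verification over video clips (assumed scheme).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def sign_clip(clip_bytes: bytes, prev_signature: bytes) -> bytes:
    # Chain: sign the previous signature concatenated with this clip's bytes.
    return key.sign(prev_signature + clip_bytes, PSS, hashes.SHA256())

def verify_clip(clip_bytes: bytes, prev_signature: bytes, signature: bytes) -> bool:
    try:
        key.public_key().verify(signature, prev_signature + clip_bytes, PSS, hashes.SHA256())
        return True
    except Exception:  # cryptography raises InvalidSignature on failure
        return False

# Chain two clips together; tampering with clip 1 would invalidate clip 2's check.
sig1 = sign_clip(b"clip-1.mp4 bytes", prev_signature=b"")
sig2 = sign_clip(b"clip-2.mp4 bytes", prev_signature=sig1)
assert verify_clip(b"clip-2.mp4 bytes", sig1, sig2)
```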
partial
## Inspiration We, as passionate tinkerers, understand the struggles that come with making a project come to life (especially for beginners). **80% of U.S. workers agree that learning new skills is important, but only 56% are actually learning something new**. From not knowing how electrical components should be wired, to not knowing what a particular component does, to not knowing the correct procedure to assemble a creation, TinkerFlow is here to ease this process, all in one interface. ## What it does -> Image identification/classification or text input of available electronic components -> Powered by Cohere and Groq LLMs, generates a wiring scheme and detailed instructions (with personality!) to complete an interesting project that is possible with the electronics available -> Using React Flow, we developed our own library (as other existing software was deprecated) that generates electrical schematics to make the fine, precise and potentially tedious work of wiring projects easier -> Displays the generated instructions to complete the project ## How we built it We allow the user to upload a photo, which is sent to the backend (handled by Flask), where we use Python and Google Vision AI to classify the image and identify the component with roughly 80% accuracy. To provide our users with a high-quality and creative response, we used a central LLM to find projects that could be created from the inputted components, and from there generate instructions, schematics, and code for the user to create their project. For this central LLM, we offer two options: Cohere and Groq. Our default model is the Cohere LLM, which, using its integrated RAG and preamble capabilities, offers superior accuracy and a custom personality for our responses, providing more fun and engagement for the user. Our second option, Groq, provides a lower-quality response but much faster processing times, a shortcoming of Cohere. Both of these LLMs rely on large, meticulously defined prompts (covering everything from the output structure to the method of listing wires) to produce the final results seen by the user. To provide the user with different forms of information, we decided to present electrical schematics on the webpage. However, during development our group had to build this functionality on top of simple JavaScript libraries. ## Challenges we ran into * LLM misbehaving: The biggest challenge in incorporating the Cohere LLM was generating consistent results through the prompts used to produce all of the information about the proposed project. The solution was to include very specifically defined prompts with examples to reduce the number of errors generated by the LLM. * We were not able to find a maintained electrical schematics library for generating schematic diagrams, so we had to start from scratch and create our own schematic drawer based on a basic JS library. ## Accomplishments that we're proud of Creating electrical schematics using a basic JS library, and getting the LLMs to output consistent results across multiple fields. ## What we learned The ability to overcome troubles - consistently innovating for solutions, even if there may not have been an easy route (e.g. an existing library) to use - our schematic diagrams were custom made! ## What's next for TinkerFlow Aiming for faster LLM processing speed.
Update the user interface of the website, especially the electrical schematic graph generation. Implement the export of code files to provide even more information to the user for their project.
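For the image-identification step in TinkerFlow's "How we built it", a minimal sketch of a Flask endpoint that runs Google Vision label detection and returns candidate component names is shown below. The route name, field names, and the 0.8 confidence cutoff (echoing the write-up's ~80% accuracy figure) are assumptions, and the LLM prompting step is omitted.

```python
# Hedged sketch: accept a photo upload, label it with Google Vision, return guesses.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
client = vision.ImageAnnotatorClient()

@app.route("/identify", methods=["POST"])
def identify_component():
    image = vision.Image(content=request.files["photo"].read())
    labels = client.label_detection(image=image).label_annotations
    components = [
        {"name": label.description, "confidence": round(label.score, 2)}
        for label in labels if label.score >= 0.8   # assumed threshold
    ]
    return jsonify({"components": components})

if __name__ == "__main__":
    app.run(debug=True)
```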
## Inspiration Algorithm interviews... suck. They're more a test of sanity (and your willingness to "grind") than a true performance indicator. That being said, large language models (LLMs) like Cohere and ChatGPT are rather *good* at doing LeetCode, so why not make them do the hard work...? Introducing: CheetCode. Our hack takes the problem you're currently screensharing, feeds it to an LLM target of your choosing, and gets the solution. But obviously, we can't just *paste* in the generated code. Instead, we wrote a non-malicious (we promise!) keylogger to override your key presses with the next character of the LLM's given solution. Mash your keyboard and solve hards with ease. The interview doesn't end there though. Afterwards, an email notification will appear on your computer with the subject "Urgent... call asap." Who is it? It's not mom! It's CheetCode, with a detailed explanation including both the time and space complexity of your code. Ask your interviewer to 'take this quick' and then breeze through the follow-ups. ## How we built it The hack is the combination of three major components: a Chrome extension, a Node (actually... Bun) service, and a Python script. * The **extension** scrapes LeetCode for the question and function header, and forwards the context to the Node (Bun) service * Then, the **Node service** prompts an LLM (e.g., Cohere, gpt-3.5-turbo, gpt-4) and forwards the response to a keylogger written in Python * Finally, the **Python keylogger** enables the user to toggle cheats on (or off...), and replaces the user's input with the LLM output, seamlessly (Why the complex stack? Well... the extension makes it easy to interface with the DOM, the LLM prompting is best written in TypeScript to leverage the [TypeChat](https://microsoft.github.io/TypeChat/) library from Microsoft, and Python had the best tooling for creating a fast keylogger.) (P.S. hey Cohere... I added support for your LLM to Microsoft's project [here](https://github.com/michaelfromyeg/typechat). gimme job plz.) ## Challenges we ran into * HTML `Collection` data types are not fun to work with * There were no actively maintained cross-platform keyloggers for Node, so we needed another service * LLM prompting is surprisingly hard... the models were not as smart as we were hoping (especially in creating 'reliable' and consistent outputs) ## Accomplishments that we're proud of * We can now solve any LeetCode hard in 10 seconds * What else could you possibly want in life?!
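The keylogger component can be illustrated with a short pynput sketch: swallow the user's real key presses and emit the next character of an LLM-generated solution instead. Key suppression and feedback from injected keystrokes are platform-specific, and the hard-coded solution string and guard flag are assumptions, so treat this as an illustration of the idea rather than CheetCode's actual implementation.

```python
# Hypothetical sketch of replacing keystrokes with a canned solution (pynput).
from pynput import keyboard

solution = iter("class Solution:\n    def twoSum(self, nums, target): ...")
controller = keyboard.Controller()
injecting = False  # guard so we don't react to our own injected keystrokes

def on_press(key):
    global injecting
    if injecting:
        return
    try:
        next_char = next(solution)
    except StopIteration:
        return False              # solution fully typed; stop the listener
    injecting = True
    controller.type(next_char)    # emit the LLM's character instead
    injecting = False

# suppress=True swallows the user's real key presses (platform-dependent behavior)
with keyboard.Listener(on_press=on_press, suppress=True) as listener:
    listener.join()
```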
## Inspiration The novel coronavirus (COVID-19) has quickly changed our lives around the world. As of today, the total number of COVID-19 cases has surpassed 105 million. In response to the alarming rise of COVID-19 cases, our team felt propelled to raise awareness about important information as the pandemic progresses, to better equip people worldwide with crucial knowledge on how to protect themselves. ## What it does CovidHub allows users to navigate different pages, ranging from a daily COVID screening test to the latest region-specific COVID trends. The homepage presents the user with the latest COVID-19 news to keep them informed with real-time reports. The site also includes a daily screening test that lets users complete a checklist of symptoms and reveals whether it is safe for them to attend work, school, daycare, etc. Next, the trends page is a space where users can enter the region or city in which they are located in order to view trends of COVID-19, including graphs of newly reported COVID-19 cases by day. Finally, the tips and tricks page offers users multiple ways to prevent the spread of COVID. ## How we built it Using HTML, CSS, JavaScript, and the Bootstrap framework, we built this application to create a dynamic and user-friendly website for the audience. ## Challenges we ran into Generally, our collaboration was smooth and effective. However, there were some minor setbacks. For instance, our team had defined, thorough goals pertaining to the development of our website. Yet, as time ticked away, we realized that we would be unable to accomplish all of our coding ambitions. Thus, we quickly decided that to succeed, we needed to be flexible and adjust the layout of our website. While our aspirations persisted, we determined simpler and faster ways to attain them instead of being too detail-oriented. ## Accomplishments that we're proud of Among the many accomplishments that we are proud of in the making of our website, our most successful one is that our team managed to embed Google News into our project to provide the latest coverage on the pandemic. Despite the physical barrier, our coordination as a team was seamless. Communication between team members was highly effective, with a mutual understanding of objectives. As a result, we accommodated all ideas to create a vision we all agreed on. Aside from that, we were able to achieve the majority of our ambitions within a limited time frame. ## What we learned During this hackathon, we learned how to fully utilize HTML and CSS to create a dynamic website for users. This included styling, creating class elements, and providing users with specific and real-time information. We also developed our knowledge of building websites for the first time while gaining valuable front-end experience. ## What's next for CovidHub Our next step is to turn our vision, CovidHub, into reality. We will continue to develop our website and add more features that will let all users easily navigate it and offer a wider scope of information on the rapidly changing situation of the coronavirus pandemic. For example, we plan on allowing users to contribute their own stories and COVID-19 symptoms to provide real-time statistics every minute of every day. In the future, they will be able to connect with others like themselves so that people can support each other through similar experiences.
Our ambition is that one day, users worldwide will turn to our website for the most trustworthy, helpful, and accurate information and advice on COVID-19.
winning
## Inspiration gitpizza was inspired by a late night development push and a bout of hunger. What if you could order a pizza without having to leave the comfort of your terminal? ## What it does gitpizza is a CLI based on git which allows you to create a number of pizzas (branches), add toppings (files), configure your address and delivery info, and push your order straight to Pizza Hut. ## How I built it Python is the bread and butter of gitpizza, parsing the provided arguments and using selenium to automatically navigate through the Pizza Hut website. ## Challenges I ran into Pizza Hut's website is mostly built with Angular, meaning selenium would retrieve a barebones HTML page that was later dynamically populated with JavaScript. But selenium didn't see these changes, so finding elements by ID and such was impossible. That, along with the generic names and general lack of IDs on the website, meant that my only solution was to physically move the mouse and click on pixel-perfect positions to add toppings and place the user's order. ## Accomplishments that I'm proud of Just the number of commands that gitpizza supports. `gitpizza init` to start a new order, `gitpizza checkout -b new-pizza` to create a second pizza, `gitpizza add --left pepperoni` to add pepperoni to only the left half of your pizza, and `gitpizza diff` to see the differences between each side of your pizza. Visit [the repository](https://github.com/Microsquad/gitpizza) for the full list of commands
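As a toy sketch of how git-style subcommands like `init`, `checkout -b`, and `add --left` can be wired up, the snippet below uses Python's argparse and a local JSON file for state. The command names mirror the write-up, but the bodies are placeholders and the state file name is an assumption; the real repository also drives Pizza Hut's site via selenium, which is not shown.

```python
# Hypothetical argparse wiring for gitpizza-style subcommands (no actual ordering).
import argparse
import json
import pathlib

STATE = pathlib.Path(".gitpizza.json")  # assumed local state file

def load():
    return json.loads(STATE.read_text()) if STATE.exists() else {"pizzas": {"main": []}, "current": "main"}

def save(state):
    STATE.write_text(json.dumps(state, indent=2))

parser = argparse.ArgumentParser(prog="gitpizza")
sub = parser.add_subparsers(dest="command", required=True)
sub.add_parser("init")
checkout = sub.add_parser("checkout")
checkout.add_argument("-b", "--branch", required=True)
add = sub.add_parser("add")
add.add_argument("--left", action="store_true", help="apply topping to the left half only")
add.add_argument("topping")

args = parser.parse_args()
state = load()
if args.command == "init":
    save({"pizzas": {"main": []}, "current": "main"})
elif args.command == "checkout":
    state["pizzas"].setdefault(args.branch, [])
    state["current"] = args.branch
    save(state)
elif args.command == "add":
    half = "left" if args.left else "whole"
    state["pizzas"][state["current"]].append({"topping": args.topping, "half": half})
    save(state)
```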
## Inspiration I was hungry af, and there was a cool post online about NFC cards. ## What it does Each NFC card is mapped to a topping available from Domino's pizza. Scan each topping you want on an NFC sensor attached to a Raspberry Pi to build your own pizza. Then scan the "end" card, and the Raspberry Pi uses Domino's internal API to order a pizza directly. ## How we built it We attached a Raspberry Pi to a SPI powered RFID/NFC sensor. For each NFC card, we mapped its UID to a topping using Python, and then built a JSON object that could be sent to Domino's API over HTTPS. ## Challenges we ran into Yeah ## Accomplishments that we're proud of Learned a lot about the SPI interface. ## What we learned A lot about the SPI interface. ## What's next for Pizza Eating it.
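The card-scanning loop on the Raspberry Pi side can be sketched as follows: read NFC card UIDs over SPI with the `mfrc522` library, map each UID to a topping, and build an order payload once the "end" card is scanned. The UID-to-topping table is made up for illustration, and the final call to Domino's internal API is intentionally omitted since its exact request format isn't covered in the write-up.

```python
# Hedged sketch of the Raspberry Pi NFC topping scanner (runs only on a Pi with SPI enabled).
import RPi.GPIO as GPIO
from mfrc522 import SimpleMFRC522

TOPPINGS = {
    123456789012: "Pepperoni",   # placeholder UIDs; real cards have their own
    987654321098: "Mushrooms",
    111122223333: "END",
}

reader = SimpleMFRC522()
order = []
try:
    while True:
        uid, _ = reader.read()           # blocks until a card is presented
        item = TOPPINGS.get(uid)
        if item is None:
            print(f"Unknown card {uid}")
        elif item == "END":
            payload = {"size": "large", "toppings": order}
            print("Submitting order:", payload)  # real code POSTs this to Domino's API
            break
        else:
            order.append(item)
            print("Added", item)
finally:
    GPIO.cleanup()
```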
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when someone within a 300 meter radius is having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Auth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page: users can take a picture of their ID and their information is extracted. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up implementing things we had never done before.
partial
## Inspiration **Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness. ## Problem Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data from which past candidates received interviews. However, what if in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in. Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting. ## What is fairness? There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) for unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicates that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group. ## What our app does **jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness. ### Reweighing Algorithm If the data is unbiased, we would think that the probability of being accepted and the probability of being a woman would be independent (so the product of the two probabilities). By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight (for the woman + accepted category) as expected/actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1). Otherwise, they are given a lower weight. This formula is applied for the other 3 out of 4 combinations of gender x acceptance. Then the reweighed sample is used for training. ## How we built it We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. 
We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem on prediction time, classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier. ## Challenges we ran into Training and choosing models with appropriate fairness constraints. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric. ## Accomplishments that we're proud of We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her. ## What we learned Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts. ## What's next for jobFAIR Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
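The reweighing step described above can be written out directly: for each (gender, accepted) cell, the weight is the expected probability under independence divided by the observed probability. The column names and toy data below are assumptions; the project itself used AIF360's Reweighing preprocessor, which implements the same idea (Kamiran and Calders 2012).

```python
# Minimal sketch of the reweighing formula on a pandas DataFrame.
import pandas as pd

def reweigh(df, group_col="gender", label_col="accepted"):
    weights = pd.Series(1.0, index=df.index)
    for g in df[group_col].unique():
        for y in df[label_col].unique():
            p_group = (df[group_col] == g).mean()
            p_label = (df[label_col] == y).mean()
            expected = p_group * p_label                          # independence assumption
            cell = (df[group_col] == g) & (df[label_col] == y)
            actual = cell.mean()                                  # observed cell probability
            if actual > 0:
                weights[cell] = expected / actual                 # >1 for under-represented cells
    return weights

df = pd.DataFrame({"gender": ["F", "F", "M", "M", "M", "F"],
                   "accepted": [0, 1, 1, 1, 0, 0]})
df["weight"] = reweigh(df)
print(df)
# These weights can then be passed to sklearn as sample_weight, e.g.
# LogisticRegression().fit(X, y, sample_weight=df["weight"]).
```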
## Inspiration Every year roughly 25% of recyclable material is not able to be recycled due to contamination. We set out to reduce the amount of material needlessly sent to the landfill by reducing how often people put the wrong things into recycling bins (i.e. no coffee cups). ## What it does This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically sorts it. To do this it follows the following process: 1. Waits for an object to be placed on the lid 2. Takes a picture of the object using a webcam 3. Does image processing to normalize the image 4. Sends the image to a TensorFlow model 5. The model predicts the material type and confidence ratings 6. If the material isn't recyclable, it sends a *YEET* signal; if it is, it sends a *drop* signal to the Arduino 7. The Arduino performs the motion sent to it (aka slaps it *Happy Gilmore* style or drops it) 8. The system resets and waits to run again ## How we built it We used an Arduino Uno with an ultrasonic sensor to detect the proximity of an object, and once it meets the threshold, the Arduino sends information to the pre-trained TensorFlow ML model to detect whether the object is recyclable or not. Once the processing is complete, information is sent from the Python script to the Arduino to determine whether to yeet or drop the object into the recycling bin. ## Challenges we ran into The main challenge we ran into was integrating the individual hardware and software components, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we spent a lot of time debugging the servo and many issues with the ML model. ## Accomplishments that we're proud of We are proud of successfully integrating the software and hardware components into a whole project. Additionally, it was the first time any of us had experimented with technologies such as TensorFlow/machine learning or worked with an Arduino. ## What we learned * TensorFlow * Arduino development * Jupyter * Debugging ## What's next for Happy RecycleMore Currently the model tries to predict everything in the picture, which leads to inaccuracies: it detects things in the background, like people's clothes, which aren't recyclable, causing it to yeet the object when it should drop it. To fix this, we'd like to use only the object in the centre of the image for prediction, or reorient the camera so it can't see anything else.
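A simplified sketch of the PC-side decision loop from the numbered process above is shown below: classify a webcam frame with a TensorFlow/Keras model and tell the Arduino over serial whether to yeet or drop the item. The model path, class names, serial port, and one-byte 'Y'/'D' protocol are all assumptions for illustration.

```python
# Hedged sketch: webcam frame -> TensorFlow prediction -> serial signal to Arduino.
import cv2
import numpy as np
import serial
import tensorflow as tf

RECYCLABLE = {"paper", "cardboard", "plastic", "metal", "glass"}
CLASS_NAMES = ["paper", "cardboard", "plastic", "metal", "glass", "trash"]  # assumed order

model = tf.keras.models.load_model("recycle_model.h5")      # assumed model file
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)     # assumed port
camera = cv2.VideoCapture(0)

def classify_frame():
    ok, frame = camera.read()
    if not ok:
        return None
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(np.expand_dims(img, axis=0))[0]
    return CLASS_NAMES[int(np.argmax(probs))]

label = classify_frame()
if label is not None:
    arduino.write(b"D" if label in RECYCLABLE else b"Y")     # drop vs. yeet
```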
# RiskWatch ## Inspiration ## What it does Our project allows users to report fire hazards with images to a central database. False images could be identified using machine learning (image classification). We also implemented methods for people to find fire stations near them, along with a way to contact law enforcement and fire departments for a speedy resolution. In return, users get compensation from insurance companies. The idea is relevant because of the large wildfires in California and other states. ## How we built it We built the site from the ground up using ReactJS, HTML, CSS and JavaScript. We also created a MongoDB database to hold location data and retrieve it on the website. Python was also used to connect the frontend to the database. ## Challenges we ran into We initially wanted to create a physical hardware device using a Raspberry Pi 2 and a RaspiCamera. Our plan was to create a device that could utilize object recognition to classify general safety issues. We knew going in that performance would suffer greatly, but we thought 1-2 FPS would be enough. After spending hours compiling OpenCV, TensorFlow and Protobuf on the Pi, it was worth it: it was surprising to achieve 2-3 FPS for object recognition using Google's SSDLiteNetv2Coco algorithm. But unfortunately, the Raspberry Pi camera would disconnect often and eventually failed due to a manufacturing defect. Another challenge we faced in the final hours was that our original domain choice was mistakenly marked available by the registry when it really was taken, but we eventually resolved it by talking to a customer support representative. ## Accomplishments that we're proud of We are proud of being able to quickly get back on track after we had issues with our initial hardware idea and repurpose the project as a website. We were all relatively new to React, having used Materialize.CSS at all the other hackathons we went to, and we transitioned quickly. ### Try it out (production)! * clone the repository: `git clone https://github.com/dwang/RiskWatch.git` * run `./run.sh` ### Try it out (development)! * clone the repository: `git clone https://github.com/dwang/RiskWatch.git` * `cd frontend` then `npm install` * run `npm start` to run the app - the application should now open in your browser * start the backend with `./run.sh` ## What we learned Our group learned how to construct and manage databases with MongoDB, along with seamlessly integrating them into our website. We also learned how to make a website with React, build a chatbot, use image recognition, and more! ## What's next for us? We would like to make it so that everyone uses our application to be kept safe - right now, it is missing a few important features, but once we add those, RiskWatch could be the next big thing in information consumption. Check out our GitHub repository at: <https://github.com/dwang/RiskWatch>
winning
Imagine tourism redefined by an app, by community collaboration, by utilizing everyone's opinion, to provide the best tourism experience... Introducing... AdvenTour ## Inspiration Everyone loves to travel. TripAdvisor is the go-to website for tourist attractions. However, it only lists the major attractions of a city. Many times, one cannot fully experience the local culture and vibe from visiting those attractions (because they are filled with other tourists). To truly experience the culture of a city, a local tour guide gives the best advice for activities, food, and places to stay. Our inspiration for the app comes from the fact that we have a better time when a local friend shows us around the city. Thus, why not make everyone who lives in the city our tour guide? And voila, AdvenTour was born. ## What it does AdvenTour is a multi-platform application which allows locals to suggest activities to do at a certain location, whether a restaurant, an attraction, or a bar. The app automatically detects the user's location and determines the city they are in. Then, "challenges" in the region pop up as things to do, ordered from the most popular suggestions to the least. Tourists get to engage in a locally guided scavenger hunt where they pick a challenge and complete it for points. As their points accumulate, they can achieve milestones and discover special rewards. ## How we built it We created a beautiful application through Android Studio and a webpage powered by React and Node.js. In our Android application, we used the Snap Kit API to authenticate users (which also provides their unique ID) and we store all information regarding challenges and users on Google Cloud Platform's Firebase. The webpage is also powered by the Firebase API. ## Challenges we ran into The Snap Kit API was extremely difficult to use on Android. What we realized was that Snap Kit does not put an emphasis on Android development. Furthermore, there were limited resources on Snap Kit's API for Android Studio. Another major difficulty we faced was interacting with the Firebase API. In Java, Firebase doesn't offer blocking reads and only supports event-based listeners, which made it difficult to structure many sequential data reads in the Android application. The most difficult challenge we faced was integrating multiple technologies together as a whole. We dealt with multi-platform integration and worked with two APIs in one go. None of us had any prior knowledge of those APIs, which made developing the app a real learning experience. Lastly, everyone was low on sleep throughout the weekend, which definitely made things harder. ## Accomplishments that we are proud of We are proud to deliver the product at the end of the day. We managed to combine all the technologies seamlessly. We successfully built a full-stack application along with a mobile application in only 24 hours. We are extremely proud of our dedication, hard work, and continuous effort to produce the best app we could in such a short time span. ## What we learned Integration was challenging but worth it in the end. Through working with the Snapchat and Google Cloud Platform APIs, we gained knowledge in publishing and extracting information. We learned how to get the device's GPS location and used the Google Maps API to extract city locations and activities.
## What's next for AdvenTour Next up, we need to integrate the points reward system with local vendors and businesses to offer incentives. Rewards can range from a free coffee to restaurant coupons to encourage user interaction and satisfaction. We did not have a lot of time to spend on the app's UI, and we could use some time to make it look more aesthetically pleasing.
## Inspiration There are many occasions where we see a place in a magazine, or in any image source online, and we don't know where the place is. There is no description anywhere, and a possible vacation destination may very well just disappear into thin air. We certainly did not want to miss out. ## What it does Take a picture of a place. Any place. And upload it onto our web app. We will not only tell you where that place is located, but immediately generate a possible trip plan from your current location. That way, you will be able to know how far away you are from your desired destination, as well as how feasible this trip is in the near future. ## How we built it We first figured out how to use Google Cloud Vision to retrieve the data we wanted. We then processed pictures uploaded to our Flask application, retrieved the location, and wrote the location to a text file. We then used Beautiful Soup to read the location from the text file, and integrated the Google Maps API, along with numerous tools within the API, to display possible vacation plans and the route to the location. ## Challenges we ran into This was our first time building a dynamic web app and using so many APIs, so it was pretty challenging. Our final obstacle of reading from a text file using JavaScript turned out to be our toughest challenge, because we realized it was not possible due to security concerns, so we had to do it through Beautiful Soup. ## Accomplishments that we're proud of We're proud of being able to integrate many different APIs into our application, and being able to make significant progress on the front end, despite having only two beginner members. We encountered many difficulties throughout the building process, and had some doubts, but we were still able to pull through and create a product with an aesthetically pleasing GUI that users can easily interact with. ## What we learned We got better at reading documentation for different APIs, learned how to integrate multiple APIs in a single application, and realized we could create something useful with just a bit of knowledge. ## What's next for TravelAnyWhere TravelAnyWhere can definitely be taken to a whole other level. Users could be provided with different potential routes, along with recommended trip plans that visit other locations along the way. We could also allow users to add multiple pictures corresponding to the same location to get a more precise reading on the destination through machine learning techniques.
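The "retrieve the location and write it to a text file" step can be sketched with Google Cloud Vision's landmark detection, which returns a place name and coordinates for recognizable landmarks. The output file name and format below are assumptions, not the project's actual code.

```python
# Hedged sketch: landmark detection on an uploaded photo, written to a text file.
from google.cloud import vision

def locate(image_path, out_path="location.txt"):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    landmarks = client.landmark_detection(image=image).landmark_annotations
    if not landmarks:
        return None
    best = landmarks[0]                      # highest-confidence landmark
    latlng = best.locations[0].lat_lng
    with open(out_path, "w") as out:
        out.write(f"{best.description},{latlng.latitude},{latlng.longitude}\n")
    return best.description, latlng.latitude, latlng.longitude

if __name__ == "__main__":
    print(locate("vacation_photo.jpg"))
```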
## Inspiration GeoGuesser is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations in addition to exciting trivia, like movies and monuments, for that extra hit of dopamine when you get the right answers! ## What it does The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After you select a playlist, five random locations are chosen from a curated list. You are then shown a picture from which you have to guess the location, plus the bit of trivia associated with it, like the name of the movie the location appeared in. You get points for how close you are to the location and for whether you got the trivia correct. ## How we built it We used the *discord.py* library for coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs. ## Challenges we ran into For initially storing the data, we thought to use a playlist class while storing the playlist data as an array of playlist objects, but instead used Excel for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were overcome soon. ## Accomplishments that we're proud of Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the haversine formula for distances on spheres was also an accomplishment we're proud of. ## What we learned We learned better syntax and practices for writing Python code. We learnt how to use the Google Cloud Platform and the Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about human-computer interaction, as designing an interface for gameplay on Discord was rather interesting. ## What's next for Geodude? Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
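Since Geodude's scoring rests on the haversine great-circle distance, a small sketch of that calculation is shown below. The exponential points curve and the trivia bonus are assumptions for illustration, not the bot's actual tuning.

```python
# Haversine distance plus an assumed distance-based scoring curve.
from math import asin, cos, exp, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def score(guess, answer, trivia_correct, max_points=5000):
    distance = haversine_km(*guess, *answer)
    points = int(max_points * exp(-distance / 2000))   # decays with distance (assumed curve)
    return points + (500 if trivia_correct else 0)     # assumed trivia bonus

print(score((48.86, 2.35), (51.51, -0.13), trivia_correct=True))  # Paris guess vs. London answer
```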
losing
# Speculator ### (Working title) From the subprime mortgage crisis of the 2000s to the marked volatility of Tesla stock over the entirety of its lifespan, it's no secret that much of what drives stock prices is purely emotional and not backed by facts. This is supported by the fact that a key determinant of stock prices is the general population's expectations for the trajectory of those very stock prices, in accordance with elementary financial theory. [1] We can do better in 2020. This web app will use sentiment analysis to scrape a curated Twitter feed from a selection of reliable news sources to create a holistic view of a company's public perception and relevant analysis of political, economic, social, and technological factors. It will then utilize the Yahoo Finance API to analyze a company's key financial statements (statement of financial position, statement of comprehensive income, statement of cash flows) to assess the company's financial health. Finally, it will assess the competency of management by comparing their publicly available CVs to those of exceptional industry leaders. With all of this information, we will predict whether or not the price of the stock is inflated (in the context of its industry) and provide an overall investment recommendation to users. ## Tech Stack * Mongo / Express / Node for backend * Google Cloud Platform for Machine Learning * React for Front-end * Figma for UX/UI ### References 1 <https://www.investopedia.com/articles/basics/04/100804.asp>
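To make the two Speculator signals concrete, here is a back-of-the-envelope Python sketch combining average text sentiment with basic financial-statement pulls. NLTK's VADER and the `yfinance` package stand in for the planned Google Cloud sentiment model and Yahoo Finance integration, and the sample headlines are placeholders.

```python
# Hedged sketch: average news sentiment + financial statements for one ticker.
import nltk
import yfinance as yf
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Tesla deliveries beat expectations this quarter",
    "Analysts worry about margins after the latest price cuts",
]
sentiment = sum(analyzer.polarity_scores(h)["compound"] for h in headlines) / len(headlines)

ticker = yf.Ticker("TSLA")
balance_sheet = ticker.balance_sheet   # statement of financial position
cash_flow = ticker.cashflow            # statement of cash flows

print(f"Average news sentiment: {sentiment:+.2f}")
print(balance_sheet.head())
print(cash_flow.head())
```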
## Inspiration Millions of dollars are invested in algorithmic trading, and we wanted to develop an open source model which could predict stock market pricing for both long term and short term trading. ## What it does Predicts the opening and closing prices of stocks in the future. It uses sentiment analysis of the news feed to tap into the mood of the market. The stock market is largely based on the actions and decisions of every individual participating in it, and every participant's action has an impact on it. ## How I built it We trained a recurrent neural network using TensorFlow in the backend, with sentiment analysis of the news feed trained and saved in a database created using the Hasura API. The frontend, a web app, provides an interactive platform for the user to use the pre-trained models and visualize the predictions. ## Challenges I ran into TensorFlow training of the data. ## Accomplishments that I'm proud of Integrating the trained models, database, and user interface for a smooth experience. ## What I learned Teamwork and full-stack integration. ## What's next for StockAdvisors Analyzing weather to predict trends in the commodity trading market.
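A minimal sketch of the kind of recurrent network described above is an LSTM over a sliding window of past closing prices that predicts the next close. The window size, layer sizes, and the synthetic price series are assumptions for illustration only, not StockAdvisors' trained model.

```python
# Hedged sketch: LSTM on a sliding window of closing prices (synthetic data).
import numpy as np
import tensorflow as tf

WINDOW = 30
prices = np.cumsum(np.random.randn(1000)).astype("float32")  # stand-in price series

# Build (samples, timesteps, features) windows and next-step targets.
X = np.stack([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])[..., None]
y = prices[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_close = model.predict(prices[-WINDOW:].reshape(1, WINDOW, 1))
print("Predicted next close:", float(next_close))
```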
## Inspiration Have you ever wondered where to travel, or how to make your trip plans more interesting? Want to make trips more adventurous? ## What it does Xplore is an **AI-based travel application** that allows you to experience destinations in a whole new way. It keeps your adrenaline pumping by keeping your vacation destinations undisclosed. ## How we built it * Xplore is a completely functional web application built with HTML, CSS, Bootstrap, JavaScript and SQLite. * Multiple Google Cloud APIs, such as the Geolocation API, Maps JavaScript API and Directions API, were used to achieve our map functionality. * Web3.Storage was also used as a data storage service to store and retrieve data on IPFS and Filecoin. ## Challenges we ran into Integrating multiple cloud APIs and the Web3.Storage API token into our project turned out to be a little complex. ## What's next for Xplore * A mobile application for easier access. * Multiple language support. * Seasonal travel suggestions.
losing
## Inspiration This project was a response to the events of Hurricane Harvey in Houston last year, the wildfires in California, and the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We also inform people of the risk factors in certain areas so that they will be more knowledgeable when making decisions about travel, their futures, and preventative measures. ## What it does Supermaritan provides a platform for people who are in danger and affected by disasters to send out "distress signals" specifying how severe their damage is and the specific type of issue they have. We store their location in a database and present it live on a map via the react-native-map API. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need. Dispatchers will thus be able to quickly and efficiently aid victims. More importantly, the live map feature allows local users to see live incidents on their map and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they will have the option to resolve their issue and store it in our database to aid our analytics. Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate what type of incident (whether it be floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previous resolved incidents of all types, we can generate a risk factor that gauges how safe the user's area is relative to the most dangerous range within our database. ## How we built it We used React Native, MongoDB, JavaScript, NodeJS, the Google Cloud Platform, and various open source libraries to build our hack. ## Challenges we ran into Ejecting our React Native app from Expo took a very long time and blocked the member of our group who was working on the client side. This left us with a lot more work to divide amongst ourselves once it finally ejected. Getting acquainted with React Native in general was difficult. It was fairly new to all of us, and some of the libraries we used did not have documentation, which required us to learn from their source code. ## Accomplishments that we're proud of Implementing the heat map analytics feature was something we are happy we were able to do, because it is a nice way of presenting information about disaster incidents and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Generating successful scripts to test our app and debug any issues was also something we were proud of, and it helped us get past many challenges. ## What we learned We learned that while some frameworks have their advantages (for example, React can create projects at a fast pace using built-in components), many times they have glaring drawbacks and limitations which may make another, more 'complicated' framework a better choice in the long run.
## What's next for Supermaritan In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for certain areas. Showing disaster trends overtime and displaying risk factors for each individual incident type is something we definitely are going to do in the future.
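The risk-factor idea described in Supermaritan's "What it does" (most common incident type per coordinate range, plus a severity average normalised against the most dangerous range) can be sketched as follows. The severity scale, grid size, and sample data are assumptions for illustration.

```python
# Hedged sketch: per-area incident summary and normalised risk factor.
from collections import Counter

incidents = [  # (lat, lon, type, severity 1-5), placeholder data
    (29.76, -95.36, "flood", 5), (29.77, -95.37, "flood", 4),
    (29.76, -95.35, "injury", 2), (34.05, -118.24, "fire", 3),
]

def bucket(lat, lon, size=0.1):
    """Group coordinates into a lat/lon grid cell (assumed ~0.1 degree cells)."""
    return (round(lat / size), round(lon / size))

by_area = {}
for lat, lon, kind, severity in incidents:
    by_area.setdefault(bucket(lat, lon), []).append((kind, severity))

# Most dangerous area's average severity, used as the normalisation baseline.
worst = max(sum(s for _, s in v) / len(v) for v in by_area.values())

def area_report(lat, lon):
    entries = by_area.get(bucket(lat, lon), [])
    if not entries:
        return {"risk_factor": 0.0, "most_common": None}
    avg_severity = sum(s for _, s in entries) / len(entries)
    most_common = Counter(k for k, _ in entries).most_common(1)[0][0]
    return {"risk_factor": round(avg_severity / worst, 2), "most_common": most_common}

print(area_report(29.765, -95.36))
```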
## Inspiration We were trying for an IM-meets-MS-Paint experience, and we think it looks like that. ## What it does Users can create conversations with other users by putting a list of comma-separated usernames in the To field. ## How we built it We used Node.js combined with the Express.js web framework, Jade for templating, Sequelize as our ORM and PostgreSQL as our database. ## Challenges we ran into Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging. ## Accomplishments that we're proud of Getting a (mostly) fully functional chat client up and running in 24 hours! ## What we learned We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production Node app. ## What's next for SketchWave We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would also like to include mobile platform support via a responsive web design, and possibly in the future even have a mobile app.
## Inspiration Stockbroker visualizations make much more sense than the typical banking app "Balance: $". This app gives people much more control over their own finances and spending decisions. Additionally, in the tips section, the app makes recommendations to help the user save money, which can be monetized through advertising. ## What it does Gives a couple of different ways to visualize your financial life, then gives recommendations to help you meet your savings goals. The app breaks down the user's finances into several different graphs and charts, then provides the user with tips on how he or she can do better in the future. ## How we built it We built a Swift iOS app and a web application that take data from our custom Python/Flask API, which in turn draws from Nessie. ## Challenges we ran into Building a frontend proved difficult. ## Accomplishments that we're proud of We built a good system for managing the Nessie data, bundling it, and handling it with our custom API. Our recommendations system is also very useful. Our analysis tools are actually helpful to the user, and provide a visual depiction of the state of their finances rather than simply showing them a number. ## What's next for Rosi Continuing to build on the analysis tools we provide to users. Possible machine learning integration in the future.
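The custom Python/Flask layer over Nessie can be sketched as an endpoint that pulls an account's purchases from Capital One's Nessie sandbox and bundles them into per-category totals for the charts. The endpoint path and response fields follow Nessie's public documentation but should be treated as assumptions here, as should the route and field names.

```python
# Hedged sketch: Flask endpoint that bundles Nessie purchase data for the charts.
import os
from collections import defaultdict

import requests
from flask import Flask, jsonify

app = Flask(__name__)
NESSIE_KEY = os.environ.get("NESSIE_KEY", "demo-key")   # assumed env var
BASE = "http://api.nessieisreal.com"                    # Nessie sandbox base URL

@app.route("/accounts/<account_id>/summary")
def spending_summary(account_id):
    resp = requests.get(f"{BASE}/accounts/{account_id}/purchases",
                        params={"key": NESSIE_KEY})
    purchases = resp.json() if resp.ok else []
    totals = defaultdict(float)
    for purchase in purchases:
        totals[purchase.get("description", "other")] += purchase.get("amount", 0.0)
    return jsonify({"account": account_id, "by_category": totals})

if __name__ == "__main__":
    app.run(debug=True)
```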
winning
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
Demo: <https://youtu.be/cTh3Q6a2OIM?t=2401> ## Inspiration Fun mobile AR experiences such as Pokemon Go. ## What it does First, a single player hides a virtual penguin somewhere in the room. Then, the app creates hundreds of obstacles for the other players in AR. The player that finds the penguin first wins! ## How we built it We used AWS and Node.js to create a server to handle realtime communication between all players. We also used Socket.IO so that we could easily broadcast information to all players. ## Challenges we ran into For the majority of the hackathon, we were aiming to use Apple's Multipeer Connectivity framework for realtime peer-to-peer communication. Although we wrote significant code using this framework, we had to switch to Socket.IO due to connectivity issues. Furthermore, shared AR experiences are a very new field with a lot of technical challenges, and it was very exciting to work through bugs to ensure that all users see similar obstacles throughout the room. ## Accomplishments that we're proud of For two of us, it was our very first iOS application. We had never used Swift before, and we had a lot of fun learning to use Xcode. As an entire team, we had never worked with AR or Apple's ARKit before. We are proud we were able to make a fun and easy-to-use AR experience. We were also happy we were able to use retro styling in our application. ## What we learned * Creating shared AR experiences is challenging but fun * How to work with iOS's Multipeer framework * How to use ARKit ## What's next for ScavengAR * Look out for an App Store release soon!
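The actual server was written in Node.js, but the broadcast pattern is easy to sketch with the python-socketio library as an analogue; the event names and payload shape below are assumptions, not ScavengAR's real protocol.

```python
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print("player joined:", sid)

@sio.on("hide_penguin")
def hide_penguin(sid, data):
    # Hypothetical payload: the penguin's position in the shared AR frame.
    # Broadcast the round start (and generated obstacles) to everyone else.
    sio.emit("round_started", data, skip_sid=sid)

@sio.on("penguin_found")
def penguin_found(sid, data):
    # First finder wins; tell every connected player the round is over.
    sio.emit("round_over", {"winner": sid})

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```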
## Inspiration We were inspired by the website <https://thispersondoesnotexist.com>, deepfakes, and how realistic images produced by GANs (generative adversarial networks) can be. ## What it does We created a website where people play a game: they are shown one image that is real and one image that is produced by a GAN. Can the user tell which one the GAN generated? ## How I built it We built it using a React front end that pulls images from a real image dataset and a GAN-generated dataset. ## Challenges I ran into We were originally going to train our own custom GAN, but complications arose, so we decided to build a fun and easy application using a pre-existing GAN architecture. ## Accomplishments that I'm proud of I am proud of my team members for understanding the concepts of a GAN and putting them into use. ## What I learned An application of GANs! Also, some of us had never used React or done front-end work before. ## What's next for Gan Game Who knows where GANs can take us next?
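The real app is a React front end, but the round logic it implies is small enough to sketch in Python; the filenames and scoring function are placeholders for illustration only.

```python
import random

# Placeholder filenames standing in for the two datasets the site pulls from.
REAL_IMAGES = ["real_001.jpg", "real_002.jpg", "real_003.jpg"]
GAN_IMAGES = ["gan_001.jpg", "gan_002.jpg", "gan_003.jpg"]

def new_round():
    """Pick one real and one GAN image and shuffle their on-screen order."""
    pair = [(random.choice(REAL_IMAGES), False), (random.choice(GAN_IMAGES), True)]
    random.shuffle(pair)
    return pair  # list of (filename, is_gan)

def check_guess(pair, guessed_index):
    """Return True if the player correctly pointed at the GAN image."""
    return pair[guessed_index][1]

round_pair = new_round()
print([name for name, _ in round_pair], "correct guess:", check_guess(round_pair, 0))
```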
winning
## Inspiration Conventional language learning apps like Duolingo don't offer the ability to have freeform and dynamic conversations. Additionally, finding a language partner can be difficult and costly. Lingua Franca tackles this head-on by offering intermediate to advanced language learners an immersive, interactive experience. Although other apps exist that try to do the same thing, their interaction topics are hard-coded, meaning that you find yourself in the same dialogue over and over again. By leveraging LLMs, we're able to ensure that no two experiences are the same! ## What it does You stumble into a foreign land and must communicate with the townsfolk in order to get by. As you talk with them, you must reply by recording yourself speaking in their language. Aided by LLMs, their responses dynamically change depending on what you say. Additionally, at some points in the conversation, they will give you checkpoints that you must accomplish, which encourages you to talk to other villagers. After each of your responses, you can also see alternative phrases you could've said in response to the villager. Seeing these alternative responses can aid in learning vocabulary and grammar, and can help the user branch out beyond their usual go-to phrases in the language they are learning. Not only can you guide the conversation to whatever topic you'd like to practice, but to keep the user engaged, we've also added backstories to the characters in the village. Each time you talk with them, you can learn something more about their relationship with others in the village! ## How we built it Development was done in Unity3D. We used Wit.ai to capture and transcribe the user's recorded responses. Those transcribed responses were then fed into an LLM from Together.ai, along with extra information to give context and guide the LLM to prompt the user to complete checkpoints. The response from the LLM becomes the villager's response to the player. We created the world using assets from the Unity Asset Store, and the character models are from Mixamo. ## What we learned Developing in VR was new to all team members, so developing for the Oculus Quest and using Unity3D was a great learning experience. LLMs aren't perfect, and working to mitigate poor, harmful, or unproductive responses is difficult. However, we took this challenge seriously while working on this app and carefully tuned our prompts to give the model the context it needed to avoid these situations. ## What's next for Lingua Franca The next steps for this app include: * Adding more languages * Adding audio feedback from the villagers in addition to text responses * Adding new locations, characters, and worlds for more variation in the experience.
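The writeup describes feeding transcriptions plus character context and checkpoints into an LLM; here is a minimal Python sketch of that prompt-assembly step. The prompt format, the `call_llm` stub, and the character fields are assumptions, not the team's actual Unity/Together.ai integration.

```python
def build_villager_prompt(character, checkpoints, history, player_utterance):
    """Assemble the context handed to the LLM before each villager reply."""
    lines = [
        f"You are {character['name']}, a villager. Backstory: {character['backstory']}.",
        f"Reply only in {character['language']}, in one or two short sentences.",
        "Open checkpoints the player should be nudged toward: " + "; ".join(checkpoints),
        "Conversation so far:",
    ]
    lines += [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"Player: {player_utterance}")
    lines.append(f"{character['name']}:")
    return "\n".join(lines)

def call_llm(prompt):
    """Placeholder for the hosted LLM call (the team used Together.ai)."""
    return "¡Bienvenido! ¿Buscas la panadería del pueblo?"

character = {"name": "Marisol", "backstory": "runs the village bakery", "language": "Spanish"}
prompt = build_villager_prompt(
    character,
    checkpoints=["ask for directions to the market"],
    history=[("Marisol", "Hola, viajero.")],
    player_utterance="Hola, ¿dónde está el mercado?",
)
print(call_llm(prompt))
```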
## Inspiration Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese Aravind is sad. We want to solve this problem for all the Aravinds in the world -- not just for Chinese though, for any language! ## What it does TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to other people speaking a foreign language. This is an augmented reality app which means the subtitles will appear floating in front of you! ## How we built it We used Microsoft Cognitive Services's Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes. ## Challenges we ran into One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source. ## Accomplishments that we're proud of Our biggest achievement is definitely multi-threading the app to be able to translate a lot of different languages at the same time using different endpoints. This makes real-time multi-lingual conversations possible! ## What we learned We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system that works very well from scratch using OpenCV libraries and Python Imaging Library. ## What's next for TranslatAR We want to launch this App in the AppStore so people can replicate VR/AR on their own phones with nothing more than just an App and an internet connection. It also helps a lot of people whose relatives/friends speak other languages.
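The multithreading approach described above (one thread per input source that listens, translates, and captions) can be sketched in a few lines of Python; the `translate` stub stands in for the Microsoft Cognitive Services call, and the sample utterances are placeholders rather than a live audio pipeline.

```python
import threading
import queue

captions = queue.Queue()

def translate(text, target="en"):
    """Placeholder for the Microsoft Cognitive Services translation call."""
    return f"[{target}] {text}"

def caption_worker(source_name, utterances):
    """One thread per microphone: listen, translate, and queue a caption."""
    for utterance in utterances:          # stands in for a live speech stream
        captions.put((source_name, translate(utterance)))

sources = {
    "mic_left": ["bonjour tout le monde", "comment ça va ?"],
    "mic_right": ["hola, mucho gusto"],
}

threads = [threading.Thread(target=caption_worker, args=(name, utts))
           for name, utts in sources.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not captions.empty():
    print(*captions.get())
```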
Random maze generator built using Java. Follows the Aldous-Broder Algorithm.
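The original generator is written in Java; for illustration, here is a compact Python sketch of the same Aldous-Broder random walk, where a passage is carved only when the walk steps into an unvisited cell. The grid representation is an assumption.

```python
import random

def aldous_broder(width, height):
    """Return the set of carved passages for a width x height grid."""
    cell = (random.randrange(width), random.randrange(height))
    visited = {cell}
    passages = set()
    while len(visited) < width * height:
        x, y = cell
        neighbors = [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                     if 0 <= nx < width and 0 <= ny < height]
        nxt = random.choice(neighbors)          # unbiased random walk
        if nxt not in visited:                  # only carve into unvisited cells
            passages.add(frozenset((cell, nxt)))
            visited.add(nxt)
        cell = nxt                              # move regardless of carving
    return passages

maze = aldous_broder(8, 8)
print(f"{len(maze)} passages carved")           # a perfect maze has width*height - 1
```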
winning
## Inspiration One of the 6 most common medication problems in an ageing population comes from the scheduling and burden of taking several medications several times a day. At best, this can be a hassle and an annoying process. However, it is often more likely than not that many may simply forget to take certain medication without supervision and reminders which may result in further deterioration of their health. In order to address this issue to make living healthy a smoother process for the ageing population as well as provide better support for their healthcare providers, MediDate was born. Designed with the user in mind, the UI is simple and intuitive while the hardware is also clean and dependable. The diversity of features ensures that all aspects of the medication and caretaking process can be managed effectively through one comprehensive platform. A senior citizen now has a technological solution to one of their daily problems, which can all be managed easily by a caretaker or nurse. ## What it does MediDate is a combination of hardware components and a web application. The hardware aspect is responsible for tracking dosage & supply for the patient as well as communicate issues (such as low supply of medication) to the web application and the caretaker. The web application is made up of several different features to best serve both patient and caretaker. Users are first brought to a welcome page with a daily schedule of their medications as well as a simulation of the pillbox below to keep track of total medication supply. When the web app detects that supply is below a certain threshold, it will make calls to local pharmacies to reorder the medication. Along the side navigation bar, there are several features that the users can take advantage of including a notifications page, monitoring pharmacy orders, uploading new subscriptions, and descriptions of their current medication. The notifications page is pretty self-explanatory, it keeps track of any notifications patients and/or caretakers should be aware of, such as successful prescription uploads, low medication supply, and errors in uploads. The upload page allows users to take photos of new prescriptions to upload to the web app which will then make the appropriate processes in order to add it to both the schedule and the explanation bar through RX numbers, dates, etc... Finally, the prescription pages offer quick shortcuts for descriptions of the medication to make understanding meds easier for users. In order to be as accessible as possible, an Alexa skill has also been created to support functionality from the web application for users to interact more directly with the caretaking solution. It currently supports limited functionality including querying for today's prescription, descriptions of different medication on the patients' schedules, as well as a call for help function should the need arise. This aspect of MediDate will allow more efficient service for a larger population, directly targeting those with vision impairment. Another feature was integrated using Twilio's SMS API. For the convenience of the user, a notification text would be sent to a registered Pharmacy phone number with details of prescription requirements when the current pill inventory fell below an adjustable threshold. Pharmacies could then respond to the text to notify the user when their prescription was ready for pick-up. This enables seamless prescription refills and reduces the time spent in the process. 
## How I built it **Hardware** Powered by an Arduino UNO, buttons were attached to the bottom of the pillbox to act as weight sensors for pills. When pills are removed, the button would click "off", sending data to the web application for processing. We used CoolTerm and a Python script to store Arduino inputs before passing it off to the web app. This aspect allows for physical interaction with the user and helps to directly manage medication schedules. **Google Cloud Vision** In order to turn images of prescriptions into text files that could be processed by our web app, we used Google Cloud Vision to parse the image and scan for relevant text. Instead of running a virtual machine, we made API calls through our web app to take advantage of the free Cloud Credits. **Backend** Scripting was done using JavaScript and Python/Flask, processing information from Cloud Vision, the Arduino, and user inputs. The goal here was to send consistent, clear outputs to the user at all times. **Frontend** Built with HTML, CSS, bootstrap, and javascript, the design is meant to be clean and simple for the user. We chose a friendly UI/UX design, bright colours, and great interface flow. **Alexa Skill** Built with Voiceflow, the intents are simple and the skill does a good job of walking the user through each option carefully with many checks along the way to ensure the user is following. Created with those who may not be as familiar communicating with technology verbally, MediDate is an excellent way to integrate future support technologies seamlessly into users' lives. **Twilio SMS** The Twilio SMS API was integrated using Python/Flask. Once the pill inventory fell below an adjustable pill quantity, the Twilio outbound notification text workflow is triggered. Following receipt of the text by pharmacies and the preparation of prescriptions, a return text triggers a notification status on the user's home page. ## Challenges I ran into Flask proved to be a difficult tool to work with, causing us many issues with static and application file paths. Dhruv and Allen spent a long time working on this problem. We were also a bit rusty with hardware and didn't realize how important resistors were. Because of that, we ran into some issues getting a basic model set up, but it was all smooth sailing from there. The reactive calendar with the time blocks also turned out to be a very complex problem. There were many different ways to take on the difference arrays, which was the big hurdle to solving the problem. Finding an efficient solution was definitely a big challenge. ## Accomplishments that I'm proud of Ultimately, getting the full model off the ground is certainly something to be proud of. We followed Agile methodology and tried (albeit unsuccessfully at times) to get a minimum viable product with each app functionality we took on. This was a fun and challenging project, and we're all glad to have learned so much in the process. ## What's next for MediDate The future of MediDate is bright! With a variety of areas to spread into in order to support accessible treatment for ALL users, MediDate is hoping to improve the hardware. Many elderly also suffer from tremors and other physical ailments that may make taking pills a more difficult process. As a result, implementing a better switch system to open the pillbox is an area the product could expand towards.
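As a rough sketch of the Twilio workflow described above, the snippet below texts a pharmacy once the tracked pill count drops under a threshold. The credentials, phone numbers, threshold, and medication name are placeholders, not MediDate's actual configuration.

```python
from twilio.rest import Client

# Placeholder credentials and numbers; swap in real values before running.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"
APP_NUMBER = "+15550000000"
PHARMACY_NUMBER = "+15551111111"

LOW_SUPPLY_THRESHOLD = 5  # adjustable pill count that triggers a refill request

def check_inventory_and_notify(medication, pills_left):
    """Text the pharmacy when the pillbox inventory drops below the threshold."""
    if pills_left >= LOW_SUPPLY_THRESHOLD:
        return None
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body=f"Refill request: {medication} is down to {pills_left} pills.",
        from_=APP_NUMBER,
        to=PHARMACY_NUMBER,
    )
    return message.sid

if __name__ == "__main__":
    print(check_inventory_and_notify("Medication A", pills_left=3))
```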
## Inspiration Automation is at its peak when it comes to technology, but one area that has lacked to keep up, is areas of daily medicine. We encountered many moments within our family members where they had trouble keeping up with their prescription timelines. In a decade dominated by cell phones, we saw the need to develop something fast and easy, where it wouldn’t require something too complicated to keep track of all their prescriptions and timelines and would be accessibly at their fingertips. ## What it does CapsuleCalendar is an Android application that lets one take a picture of their prescriptions or pill bottles and have them saved to their calendars (as reminders) based on the recommended intake amounts (on prescriptions). The user will then be notified based on the frequency outlined by the physician on the prescription. The application simply requires taking a picture, its been developed with the user in mind and does not require one to go through the calendar reminder, everything is pre-populated for the user through the optical-character recognition (OCR) processing when they take a snap of their prescription/pill bottle. ## How we built it The application was built for Android purely in Java, including integration of all APIs and frameworks. First, authorization of individualized accounts was done using Firebase. We implemented and modified Google’s optical-character recognition (OCR) cloud-vision framework, to accurately recognize text on labels, and process and parse it in real-time. The Google Calendar API was then applied on the parsed data, and with further processing, we used intents to set reminders based on the data of the prescriptions labels (e.g. take X tablets X daily - where X was some arbitrary number which was accounted for in a (or multiple) reminders). ## Challenges we ran into Working with the OCR Java framework was quite difficult to implement into our personalized application due to various dependency failures - it took us way too long to debug and get the framework to work *sufficiently* for our needs. Also, the default OCR graphics toolkit only captures very small snippets of text at a single time whereas we needed multiple lines to be processed at once and text at different areas within the label at once (e.g. default implementation would allow one set to be recognized and processed - we needed multiple sets). The default OCR engine wasn't quite effective for multiple lines of prescriptions, especially when identifying both prescription name and intake procedure - tweaking this was pretty tough. Also, when we tried to use the Google Calendar API, we had extensive issues using Firebase to generate Oauth 2.0 credentials (Google documentation wasn’t too great here :-/). ## Accomplishments that we're proud of We’re proud of being able to implement a customized Google Cloud Vision based OCR engine and successfully process, parse and post text to the Google Calendar API. We were just really happy we had a functional prototype! ## What we learned Debugging is a powerful skill we took away from this hackathon - it was pretty rough going through complex, pre-written framework code. We also learned to work with some new Google APIs, and Firebase integrations. Reading documentation is also very important… along with reading lots of StackOverflow. ## What's next for CapsuleCalendar We would like to use a better, stronger OCR engine that is more accurate at reading labels in a curved manner, and does not get easily flawed from multiple lines of text. 
Also, we would like to add functionality to parse pre-taken images (if the patient doesn't have their prescription readily available and only happens to have a picture of it). We would also like to improve the UI. ## Run the application Simply download/clone the source code from the GitHub link provided and run it in Android Studio. A physical Android device is required because the app uses the camera, which is not possible on an emulator.
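The app itself is Java/Android, but the label-parsing step it describes (turning OCR text like "take 1 tablet 3 times daily" into reminder times) is easy to sketch in Python. The regex, the sample label, and the even spacing of reminders across waking hours are assumptions for illustration.

```python
import re
from datetime import time

def parse_directions(ocr_text):
    """Pull dose count and daily frequency out of OCR'd label text."""
    match = re.search(r"take\s+(\d+)\s+tablets?\s+(\d+)\s+times?\s+daily",
                      ocr_text, re.IGNORECASE)
    if not match:
        return None
    return {"tablets": int(match.group(1)), "times_per_day": int(match.group(2))}

def reminder_times(times_per_day, start_hour=8, end_hour=22):
    """Spread reminders evenly across waking hours."""
    if times_per_day == 1:
        return [time(start_hour)]
    step = (end_hour - start_hour) / (times_per_day - 1)
    return [time(int(start_hour + i * step)) for i in range(times_per_day)]

label = "SAMPLE MED 250MG  TAKE 1 TABLET 3 TIMES DAILY  RX# 0000000"
dose = parse_directions(label)
print(dose, [t.strftime("%H:%M") for t in reminder_times(dose["times_per_day"])])
```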
## Inspiration This year's theme was nostalgia, and in an urban environment like Toronto, I often find myself missing the greenspace I grew up with. ## What it does I Need To Touch Grass allows users to quickly and easily find various natural areas near them, as well as pictures and directions. ## How I built it I used the Google Maps API to generate a list of nearby natural areas based on user input, pandas to sort and visualize the data, and Django to create a user interface. ## Challenges I ran into My teammate was unfortunately in the hospital, so I had to do it myself, which was difficult. I didn't accomplish everything I wanted to, but I'm proud of what I did accomplish. ## Accomplishments that I'm proud of This was my first time using an API, and it was also my first time doing Python full-stack development! I'm proud of myself for learning Django on the job. ## What I learned Building a web app seems like it would be easy, but it isn't! ## What's next for I Need To Touch Grass Hopefully finishing all the aspects of Django I didn't get to finish.
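As a hedged sketch of the Google Maps piece described above, the snippet below hits the Places Nearby Search endpoint for parks around a point and sorts the results with pandas. The API key is a placeholder, and the radius, fields, and sort order are assumptions rather than the app's actual queries.

```python
import requests
import pandas as pd

API_KEY = "YOUR_GOOGLE_MAPS_API_KEY"  # placeholder

def nearby_green_spaces(lat, lng, radius_m=3000):
    """Query the Places Nearby Search endpoint for parks around a point."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "key": API_KEY,
            "location": f"{lat},{lng}",
            "radius": radius_m,
            "type": "park",
        },
        timeout=10,
    )
    results = resp.json().get("results", [])
    df = pd.DataFrame(
        [{"name": r.get("name"),
          "address": r.get("vicinity"),
          "rating": r.get("rating")} for r in results]
    )
    return df if df.empty else df.sort_values("rating", ascending=False)

if __name__ == "__main__":
    print(nearby_green_spaces(43.6532, -79.3832))  # downtown Toronto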
winning
## Inspiration With a great amount of experience teaching and tutoring at the university level, we knew there was a lot to be desired in the grading experience for both students and instructors. We wished that there was a way students could receive feedback quickly and overworked instructors could focus their attention on more impactful things than grading. As a result, we decided to build a tool that auto-grades short-answer responses while allowing a high degree of accuracy and customization. ## What it does Given a student response, our program analyzes its similarity to teacher-provided answers. Furthermore, it uses GPT to provide quick feedback for students. ## How we built it We used ChromaDB to handle our vector database operations and GPT-4 to provide feedback for students. For our front end, we used Reflex as our full-stack solution. ## Demo [Quick demo](https://youtu.be/S7EiVUkjzv4)
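To illustrate the similarity step described above, here is a minimal ChromaDB sketch that scores a student response by its distance to the closest teacher-provided answer. The reference answers, the distance threshold, and the scoring rule are assumptions; the real tool also calls GPT-4 for written feedback, which is omitted here.

```python
import chromadb

client = chromadb.Client()
answers = client.create_collection(name="reference_answers")

# Teacher-provided reference answers for one short-answer question (example data).
answers.add(
    ids=["ref-1", "ref-2"],
    documents=[
        "Photosynthesis converts light energy into chemical energy stored as glucose.",
        "Plants use sunlight, water, and CO2 to produce glucose and oxygen.",
    ],
)

def grade(student_response, full_marks_distance=0.45):
    """Score a response by its distance to the closest reference answer."""
    result = answers.query(query_texts=[student_response], n_results=1)
    distance = result["distances"][0][0]          # smaller means more similar
    closest = result["documents"][0][0]
    score = 1.0 if distance <= full_marks_distance else max(0.0, 1 - distance)
    return score, closest

score, closest = grade("Plants turn sunlight and CO2 into sugar and oxygen.")
print(f"score={score:.2f}, closest reference: {closest}")
```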
## Inspiration The education system is broken and teachers are under-appreciated. So, we wanted to create something to help teachers. We spoke to a teacher who told us that a lot of her time is spent editing students’ work and reviewing tests. So, we started thinking about how we could help teachers save time, while building a straight forwards user friendly solution. ## What it does Rework enables teachers to produce easier and harder versions of the same test based on the test scores of past class sections. Teachers input a photo of the test with questions & answer key, the average student score for each question, and the desired class average for each question. Rework then looks at the test questions where the class average was far below or above the desired average, and makes the harder or easier based on how much above or below the average it was. ## How we built it We built the backend of our product using Python and Flask paired with a basic frontend built with HTML, CSS, and JavaScript. Our program utilizes an OpenAI API to access GPT-3 for the main functionality in generating normalized test questions. We also made use of flask and a virtual environment for our API along with leveraging some OCR software to read our PDF inputs. We built our project centered around the normalization of test questions and then added functionality from there namely interacting with the service through pdfs. ## Challenges we ran into Our team faced a multitude of setbacks and problems which we had to either solve or circumnavigate over the course of the hackathon. Primarily our product makes use of an API connected to GPT-3, working with this API and learning how to correctly prompt the chatbot to obtain desired responses was a challenge. Additionally, correctly fragmenting our project into manageable goals proved to be important to time management. ## Accomplishments that we're proud of We created an MVP version of our product where a .json file would be needed to submit the test questions and answers. We wanted to finish this quickly so that then we could use the rest of our time implementing an OCR so that teachers could simply submit a picture of the test and answers and the questions would be read and parsed into a readable format for Rework to be able to understand, making the life of the teacher significantly easier. We are proud that we were able to add this extra OCR component without having any previous experience with this. ## What we learned Our group had a wide range of technical abilities and we had to learn quickly how to use all of our strengths to benefit our group. We were a fairly new team, the majority of whom were at their first major hackathon, so there were lots of growing pains. Having each team member understand the technologies used was a important task for Friday, as well as organizing ourselves into roles where we could each excel with our diversity of experiences and comfortable languages. Almost all of us had little expertise with front-end development, so that is the technical area where we improved most—along with creating a full project from scratch without a framework. ## What's next for Rework In the future with more time we would like to expand on the feature offering of Rework. Namely, the inclusion of automatic grading software after we convert the image would allow for a more wholistic experience for the teachers, limiting their number of inputs while simultaneously increasing the functionality. 
We would also like to implement a more powerful OCR such as Mathpix, ideally one that is capable of LaTeX integration and improved handwriting recognition, as this would give more options and allow a higher level of problems to be accurately solved. Ultimately, the ideal goal for the program would be to replace ChatGPT with LangChain and GPT-3, as this allows for more specialized queries, specifically for math, enabling more accurate responses.
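As a rough sketch of the core normalization idea (comparing the class average on a question to the teacher's desired average and regenerating the question accordingly), here is some illustrative Python. The prompt wording and the `call_gpt` stub are assumptions, not Rework's actual OpenAI integration.

```python
def difficulty_shift(class_average, desired_average):
    """Positive result: make the question easier; negative: make it harder."""
    return round(desired_average - class_average, 2)

def build_rewrite_prompt(question, class_average, desired_average):
    shift = difficulty_shift(class_average, desired_average)
    direction = "easier" if shift > 0 else "harder"
    return (
        f"Rewrite this test question to be {direction}. The class averaged "
        f"{class_average:.0%} but the teacher wants {desired_average:.0%}, so adjust "
        f"the difficulty by roughly {abs(shift):.0%} while testing the same concept.\n"
        f"Question: {question}"
    )

def call_gpt(prompt):
    """Placeholder for the GPT-3 API call used in the real app."""
    return "Rewritten question would appear here."

prompt = build_rewrite_prompt(
    "Solve for x: 3x^2 - 12x + 9 = 0", class_average=0.42, desired_average=0.70
)
print(prompt)
print(call_gpt(prompt))
```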
## Inspiration Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females. On average, women make 88 cents per every dollar a male makes in Ontario. This is why it is important to encourage women to become more cautious of spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users. ## What it does Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the financial balancing act of the user. By balancing the scale and achieving their monthly financial goals, users will be provided with various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app re-enforces good financial behaviour by providing gamified experiences with small incentives. The app will be provided to users free of charge. As with any free service, the anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and spending from our users for participating partners. The customized reward is an opportunity for targeted advertising ## Persona Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising costs of living make it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards for just achieving the budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided simple balancing scale animation for immediate visual feedback of her financial well-being. The app frequently provided words of encouragement and useful tips to maximize the chance of her success. She especially loves how she could set the goals and follow through on her own terms. The personalized reward was sweet, and she managed to save on a number of essentials such as groceries. She is now on 3 months streak with a chance to get better rewards. ## How we built it We used : React, NodeJs, Firebase, HTML & Figma ## Challenges we ran into We had a number of ideas but struggled to define the scope and topic for the project. * Different design philosophies made it difficult to maintain consistent and cohesive design. * Sharing resources was another difficulty due to the digital nature of this hackathon * On the developing side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time in order to understand the documentation and implement it into our app. * Additionally, resolving merge conflicts proved to be more difficult. The time constraint was also a challenge. ## Accomplishments that we're proud of * The use of harder languages including firebase and react hooks * On the design side it was great to create a complete prototype of the vision of the app. 
* Being some members first hackathon, the time constraint was a stressor but with the support of the team they were able to feel more comfortable with the lack of time ## What we learned * we learned how to meet each other’s needs in a virtual space * The designers learned how to merge design philosophies * How to manage time and work with others who are on different schedules ## What's next for Re:skale Re:skale can be rescaled to include people of all gender and ages. * More close integration with other financial institutions and credit card providers for better automation and prediction * Physical receipt scanner feature for non-debt and credit payments ## Try our product This is the link to a prototype app <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1> This is a link for a prototype website <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
losing
## Inspiration The inspiration behind Aazami comes from a personal experience with a loved one who had dementia. Witnessing the struggles and frustration of forgetting memories was heart-wrenching. It made us realize the need for a simple yet effective solution that could alleviate this issue. That's when the idea of Aazami was born - to create a device that could ease the burden of memory loss, not just for our family, but for millions of families worldwide. Our hope is that Aazami can help people with dementia cherish and relive their precious memories, and provide a small but significant sense of comfort in their daily lives. ## What it does Aazami's main function is to record the last 10 seconds of voice, which can be replayed by using a voice command, "I forgot." This innovative feature helps users to retrieve lost memories and ease the frustration caused by forgetfulness. Aazami is compact and easy to use, making it a convenient companion for people with dementia, their families, and caregivers. By providing an easy and reliable way to recall memories, Aazami aims to enhance the quality of life of people with dementia and their loved ones. Aazami has the potential to significantly support patients with reorientation therapy, a common treatment for dementia. By providing users with a reliable tool to help recall recent memories, Aazami can reduce feelings of confusion and disorientation. With the ability to record and replay the last 10 seconds of voice, patients can use Aazami as a reminder tool to help them remember their daily routines or important details about their environment. In turn, this can help patients feel more confident and in control of their lives. With continued use, Aazami can also help patients engage in reorientation therapy, as they can use the device to actively recall information and strengthen their memory skills. Ultimately, Aazami has the potential to improve the quality of life for patients with dementia, helping them to feel more independent and empowered in their daily lives. ## How we built it To develop Aazami, we utilized a combination of hardware and software components including Arduino and Adafruit's Neopixel for the hardware, and Edge Impulse for machine learning. Our team started off by recording our own voices to create a dataset for "I forgot" voice detection, and refined it through trial and error to ensure the most appropriate dataset for our constraints. We generated Arduino code and improved it to optimize the hardware performance, and also created an amplifier circuit to boost the sound of the device. Through these iterative processes, we combined all the components to create a functional and effective solution. Our website (aazami.netlify.app), developed using Vue.js, helped to promote our technology and increase its accessibility to those who need it most. ## Challenges we ran into While experimenting with Arduino and Edge Impulse, we faced an issue where the sound detection interval was set to 5 seconds. However, this was not sufficient for the user to say "I forgot" in perfect timing. To overcome this problem, we had to develop a separate algorithm that could detect sound at the ideal phase, enabling us to accurately capture the user's command and trigger the playback of the previous 10 seconds of voice. Another significant challenge we encountered was that we were consistently receiving error messages, including "ERR: MFCC failed (-1002)," "ERR: Failed to run DSP process (-1002)," and "ERR: Failed to run classifier (-5)." 
These errors likely resulted from limitations in the memory size of the Arduino Nano BLE 33 we were using. To address this issue, we were required to manually adjust the size of our data sets, allowing us to process the data more efficiently and minimize the likelihood of encountering these errors in the future. Our initial dataset initially had an accuracy of 100% (as provided above), but we had some tradeoffs due to this error (~97% accuracy now). ## Accomplishments that we're proud of We take great pride in this project as we were able to identify a clear need for this technology and successfully implement it. By addressing the challenges faced by people with dementia and their caregivers, we believe that Aazami has the potential to enhance the quality of life for millions of people worldwide. Our team's dedication and hard work in creating this innovative solution has been a fulfilling and rewarding experience. ## What we learned Through this project, we gained valuable insights into the integration of ML in hardware. Although each member of our team brought unique expertise in either hardware or ML, working together to build a complete system was a new and exciting challenge. Creating our own ML dataset and identifying the necessary components for Aazami enabled us to apply ML in a real-world context, providing us with valuable experience and skills (i.e. Edge Impulse). ## What's next for Aazami Looking ahead, our next goal for Aazami is to expand our dataset to include voices of various ages and pitches. By incorporating a wider range of data, we can improve the accuracy of our model and provide an even more reliable solution for people with dementia. Additionally, we are eager to share this technology with individuals and groups who could benefit from it the most. Our team is committed to demonstrating the capabilities of Aazami to those in need, and we are continuously exploring ways to make it more accessible and user-friendly.
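The device itself runs on an Arduino Nano BLE 33 with an Edge Impulse keyword model, but the core "replay the last 10 seconds" behaviour is essentially a ring buffer, sketched below in Python. The sample rate, keyword label, and playback step are assumptions for illustration only.

```python
from collections import deque

SAMPLE_RATE = 16_000          # samples per second (assumed)
BUFFER_SECONDS = 10

# Ring buffer that always holds the most recent 10 seconds of audio samples.
rolling_audio = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def on_audio_chunk(samples):
    """Append each incoming chunk; old samples fall off the far end automatically."""
    rolling_audio.extend(samples)

def on_keyword_detected(keyword):
    """When the classifier hears 'I forgot', replay whatever is in the buffer."""
    if keyword == "i_forgot":
        playback = list(rolling_audio)      # snapshot of the last 10 seconds
        print(f"replaying {len(playback) / SAMPLE_RATE:.1f} s of audio")

# Simulated use: stream three seconds of silence, then trigger the keyword.
on_audio_chunk([0] * SAMPLE_RATE * 3)
on_keyword_detected("i_forgot")
```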
## Inspiration According to the United State's Department of Health and Human Services, 55% of the elderly are non-compliant with their prescription drug orders, meaning they don't take their medication according to the doctor's instructions, where 30% are hospital readmissions. Although there are many reasons why seniors don't take their medications as prescribed, memory loss is one of the most common causes. Elders with Alzheimer's or other related forms of dementia are prone to medication management problems. They may simply forget to take their medications, causing them to skip doses. Or, they may forget that they have already taken their medication and end up taking multiple doses, risking an overdose. Therefore, we decided to solve this issue with Pill Drop, which helps people remember to take their medication. ## What it does The Pill Drop dispenses pills at scheduled times throughout the day. It helps people, primarily seniors, take their medication on time. It also saves users the trouble of remembering which pills to take, by automatically dispensing the appropriate medication. It tracks whether a user has taken said dispensed pills by starting an internal timer. If the patient takes the pills and presses a button before the time limit, Pill Drop will record this instance as "Pill Taken". ## How we built it Pill Drop was built using Raspberry Pi and Arduino. They controlled servo motors, a button, and a touch sensor. It was coded in Python. ## Challenges we ran into Challenges we ran into was first starting off with communicating between the Raspberry Pi and the Arduino since we all didn't know how to do that. Another challenge was to structurally hold all the components needed in our project, making sure that all the "physics" aligns to make sure that our product is structurally stable. In addition, having the pi send an SMS Text Message was also new to all of us, so incorporating a User Interface - trying to take inspiration from HyperCare's User Interface - we were able to finally send one too! Lastly, bringing our theoretical ideas to fruition was harder than expected, running into multiple road blocks within our code in the given time frame. ## Accomplishments that we're proud of We are proud that we were able to create a functional final product that is able to incorporate both hardware (Arduino and Raspberry Pi) and software! We were able to incorporate skills we learnt in-class plus learn new ones during our time in this hackathon. ## What we learned We learned how to connect and use Raspberry Pi and Arduino together, as well as incorporating User Interface within the two as well with text messages sent to the user. We also learned that we can also consolidate code at the end when we persevere and build each other's morals throughout the long hours of hackathon - knowing how each of us can be trusted to work individually and continuously be engaged with the team as well. (While, obviously, having fun along the way!) ## What's next for Pill Drop Pill Drop's next steps include creating a high-level prototype, testing out the device over a long period of time, creating a user-friendly interface so users can adjust pill-dropping time, and incorporating patients and doctors into the system. ## UPDATE! We are now working with MedX Insight to create a high-level prototype to pitch to investors!
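The "internal timer" logic described above is small enough to sketch in Python, the language the project was coded in; the `button_pressed` stub stands in for the real GPIO read on the Raspberry Pi, and the time limit and return labels are assumptions.

```python
import time

TAKE_LIMIT_SECONDS = 30 * 60   # how long the patient has after a dose is dispensed

def button_pressed():
    """Placeholder for reading the confirmation button via the Raspberry Pi GPIO."""
    return False

def wait_for_confirmation(limit=TAKE_LIMIT_SECONDS, poll_every=1.0):
    """Start an internal timer after dispensing and return the recorded outcome."""
    deadline = time.monotonic() + limit
    while time.monotonic() < deadline:
        if button_pressed():
            return "Pill Taken"
        time.sleep(poll_every)
    return "Dose Missed"        # would trigger the SMS reminder in the real device

if __name__ == "__main__":
    print(wait_for_confirmation(limit=3))   # short limit so the demo finishes quickly
```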
## Problem In these times of isolation, many of us developers are stuck inside which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy to connect, all in one platform where all you and your developer friends can come together to learn, code, and brainstorm together. ## About Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface. We made it one page to have access to all the tools on one screen and transition between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is Synced between users in real-time. ## Features Our platform allows multiple users to enter one room and access tools like watching youtube tutorials, brainstorming on a drawable whiteboard, and code in our inbuilt browser IDE all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraints. We ran into performance issues while syncing data between two clients where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room where users can visit. Adding more relevant tools, more tools, widgets, and expand on other work fields to increase our User demographic. Include interface customizing options to allow User’s personalization of their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
partial
## Inspiration After witnessing countless victims of home disasters like robberies and hurricanes, we decided, there must be a way to preemptively check against such events. It's far too easy to wait until something bad happens to your home before doing anything to prevent it from happening again. That's how we came up with a way to incentivize people taking the steps to protect their homes against likely threats. ## What it does Insura revolutionizes the ways to keep your home safe. Based on your location and historical data of items that typically fall under home insurance (burglary, flooding, etc.), Insura will suggest items for fixes around the house, calculating potential premium savings if done properly. With the click of a button, you can see what needs to be done around the house to collect big savings and protect your home from future damage. Insura also connects with a user's insurance provider to allow for users to send emails to insurance providers detailing the work that was done, backed by pictures of the work. Based on this insurance providers can charge premium prices as they see fit. To incentivize taking active steps to make changes, Insura "gamified" home repair, by allowing people to set goals for task completion, and letting you compete with friends based on the savings they are achieving. The return on investment is therefore crowdsourced; by seeing what your friends are saving on certain fixes around the house, you can determine whether the fix is worth doing. ## How I built it To build the application we mainly used swift to build the UI and logic for displaying tasks and goals. We also created a server using Node to handle the mail to insurance providers. We used to heroku to deploy the application. ## Challenges I ran into We had a hard time finding free APIs for national crime and disaster data and integrating them into the application. In addition, we had a tough time authenticating users to send emails from their accounts. ## Accomplishments that I'm proud of We are really proud of the way the UI looks. We took the time to design everything beforehand, and the outcome was great. ## What I learned We learned a lot about iOS development, how to integrate the backend and frontend on the iOS application, and more about the complicated world of insurance. ## What's next for Insura Next we plan on introducing heatmaps and map views to make the full use of our API and so that users can see what is going on locally.
## Problem Statement As the number of the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger, and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs. For the cloud-based machine learning algorithms, we used Computer Vision, Open CV, Numpy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones as an analogue to cameras to do the live streams for the real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allow the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges have been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. 
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In this case scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
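For illustration only, here is a much simpler OpenCV heuristic than the pose-based model described above: it flags motion blobs whose bounding box is wide and short, a rough proxy for a person lying on the floor. The camera index, area threshold, and aspect-ratio rule are assumptions, and it assumes OpenCV 4.

```python
import cv2

def looks_like_fall(w, h):
    """Crude heuristic: a person lying down produces a wide, short bounding box."""
    return w > 1.4 * h

cap = cv2.VideoCapture(0)                     # phone or webcam stream in the demo
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 5000:   # ignore small blobs of noise
            continue
        x, y, w, h = cv2.boundingRect(contour)
        if looks_like_fall(w, h):
            print("possible fall detected: notify emergency contacts")
    cv2.imshow("motion", mask)
    if cv2.waitKey(1) == 27:                  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```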
## Inspiration My printer is all the way in the cold, dark, basement. The Wi-Fi down there is not great either. So for the days where I need to print important documents but lack the strength to venture down into the basement's depths, I need a technological solution. ## What it does The Raspberry Pi 4 hosts a server for the local network that allows printing from any device connected to Wi-Fi. Useful when you want to print on a mobile device or Chromebook that doesn't support printer drivers. ## How we built it I was initially going to make an arcade station with my Pi but because of a snowstorm, out of all the hardware I ordered, only the Pi arrived on time. Thus, I had to pivot and think of a hardware project using only a Pi and some old Micro SD cards. ## Challenges I ran into At first, the Pi refused to connect through SSH. Since I did not have a video adapter (who thought it was a good idea to replace the HDMI ports with Micro HDMI??) I could not change the settings on the device manually, for there was no display output. It was at that moment I realized I would have to do this headless. Then there was the issue where my printer was so old that the drivers were no longer available. With some forum browsing and sketchy workarounds, I was able to get it working. Most of the time. ## What I learned It is probably easier to just print the old-fashioned way, but why do things faster when you can over-engineer a solution? ## What's next Finding ways to make it reliably work with all devices.
winning
## Inspiration E-cigarette use, specifically Juuling, has become an increasing public health concern among young adults and teenagers over the past few years. While e-cigarettes are often viewed as a safe alternative to traditional tobacco cigarettes, e-cigarettes have been proven to have negative health effects on both the user and second-hand smokers, as shown in multiple CDC- and Surgeon General-reviewed studies. E-cigarettes also still contain the active ingredient nicotine, which is a well-known addictive drug. Yet, students across the United States on high school and college campuses continue to vape. For us, high school students, it is a common sight to see classmates skipping class and "Juul-ing" in the bathroom. The Juul is one of the most popular e-cigarettes as it has a sleek design and looks like a USB drive. This design, coupled with the fact that there is no lasting smell or detectable smoke, makes it easy for users to go undetected in the high school environment. Moreover, this results in students not receiving help for their addiction or even realizing they have an addiction. With the increasing use of e-cigarettes among millennials, a vape culture has been created, filled with vape gods performing vape porn and displaying the artistic style of their smoke creations. Users often post pictures and videos of themselves Juuling on social media platforms, specifically Instagram and Facebook. With this in mind, we set out to create a research-based solution that could identify e-cigarette users and deter them from future use, something school administrations have attempted and failed at. Juuly the Bear was created as the mascot leading the war on teenage e-cigarette use. ## What it does Juuly the Bear is intended to fight the growth of vape culture by creating a counterculture that actively discourages Juuling while informing users of its dangers. It does this by using computer vision to analyze the Instagram account of an inputted user. The program flags images it detects to be of a person using an e-cigarette. If more than 40% of the images analyzed are of a person vaping, the user is classified as a "frequent e-cigarette user" as defined by a study by Jung Ah Lee (2017), and categorized as high-risk for nicotine addiction. Juuly will then automatically message the high-risk user on Facebook Messenger, informing them of their status and offering suggestions on how to cut down on their Juul use. Juuly will also provide external resources that the user can utilize. ## How I built it We built Juuly's computer vision using the Clarify API in Python. First, we trained a machine learning model with images of e-cigarette users actively vaping. We then tested images of other people vaping to evaluate and further train the model until a sufficient accuracy level was reached. Then, we built a data-scraping program for Instagram. When a username is inputted, the program gathers the most recent posts, which are then fed into the computer vision program, analyzing the images with the previously trained model. If more than 40% of the images are of vaping, a Facebook Messenger bot automatically messages the user with warnings and resources. ## Challenges I ran into We ran into many challenges with implementing Juuly the Bear, especially because the technology was initially foreign to us. As high school students, we did not have a huge background in computer vision or machine learning. Initially, we had to completely learn the Clarify API and the Facebook Messenger API.
We also had a hard time settling on a design and thinking of a way to maximize our outreach. We decided that adding a bit of humor to the design would resonate better with teenagers, the age group most likely to Juul. In addition, we were unsuccessful when trying to combine the backend Juuly program with our frontend. We initially wanted to create a fully functional website where one can enter Instagram and Facebook profiles to analyze, but when we had both the front and back ends completed, we had a hard time seamlessly integrating the two. In the end, we had to scrap the front-end in favor of a more functional backend. ## Accomplishments that I'm proud of As a group of high school students, we were able to use many new tools that we had never encountered before. The tools described above were extremely new to us before the hackathon; however, by working with various mentors and continually striving to learn these tools, we were able to create a successful program. The most successful part of the project was creating a powerful backend that was able to detect people Juuling. By training a machine learning model with the Clarify API, we were able to reach over an 80% accuracy rate for the set of images we had, despite initially having barely any knowledge of machine learning. Another very successful part was our scraping program. This was completely new to us, and we were able to create a program that perfectly fit our application. Scraping was also a very powerful tool, and by learning how to scrape social media pages, we had a lot more data than we would have had otherwise. ## What's next for Juuly the Bear Our immediate next step would be combining our already designed front-end website with our backend. We spent a lot of time trying to understand how to do this successfully, but we ultimately just ran out of time. In the future, we would ideally partner with major social media organizations, including Facebook and Twitter, to create a large-scale implementation of Juuly. This will have a much larger impact on vape culture as people become more informed. This can have major impacts on public health and adolescent behavior/culture, and also increase the quality of life of all as the number of vapers is reduced.
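The 40% "frequent user" rule described above reduces to a small amount of logic, sketched here in Python; the `classify_image` stub stands in for the trained Clarify model, and the filenames are placeholders, not real scraped posts.

```python
FREQUENT_USER_THRESHOLD = 0.40   # fraction of flagged posts, per the Lee (2017) definition cited above

def classify_image(image_path):
    """Placeholder for the trained vaping classifier; True means vaping was detected."""
    return image_path.endswith("_vape.jpg")

def is_high_risk(recent_post_images):
    """Flag a user when at least 40% of their scraped posts show e-cigarette use."""
    if not recent_post_images:
        return False
    flagged = sum(classify_image(path) for path in recent_post_images)
    return flagged / len(recent_post_images) >= FREQUENT_USER_THRESHOLD

posts = ["post1_vape.jpg", "post2.jpg", "post3_vape.jpg", "post4.jpg", "post5_vape.jpg"]
if is_high_risk(posts):
    print("High-risk user: send Facebook Messenger warning and resources.")
```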
# Inspiration

There are a variety of factors that contribute to *mental health* and *wellbeing*. For many students, the stresses of remote learning have taken a toll on their overall sense of peace. Our group created **Balance Pad** as a way to serve these needs. Thus, Balance Pad's landing page gives users access to various features that aim to improve their wellbeing.

# What it does

Balance Pad is a web-based application that gives users access to **several resources** relating to mental health, education, and productivity. Its initial landing page is a dashboard tying everything together to make a clear and cohesive user experience.

### Professional Help

1. *Chat Pad:* The first subpage of the application has a built-in *chatbot* offering direct access to a **mental health professional** for instant messaging.

### Productivity

1. *Class Pad:* With the use of the Assembly API, users can convert live lecture content into text-based notes. This feature allows students to focus on live lectures without the stress of taking notes. Additionally, this speech-to-text aide increases accessibility for those requiring note takers.
2. *Work Pad:* Timed working sessions using the Pomodoro technique and notification restriction are also available on our webpage. The Pomodoro technique is a proven method for enhancing focus and productivity and will benefit students.
3. *To Do Pad:* Helps users stay organized.

### Positivity and Rest

1. *Affirmation Pad:* Users can upload their accomplishments throughout their working sessions. Congratulatory texts and positive affirmations are sent to the provided mobile number during break sessions!
2. *Relaxation Pad:* Offers options to entertain students while resting from studying. Users are given a range of games to play and streaming options for fun videos!

### Information and Education

1. *Information Pad:* Dedicated to info about all things mental health.
2. *Quiz Pad:* This subpage tests what users know about mental health. By taking the quiz, users gain valuable insight into how they are doing and information on how to improve their mental health, wellbeing, and productivity.

# How we built it

**React:** Balance Pad was built using React. This allowed us to easily combine the different webpages we each worked on.

**JavaScript, HTML, and CSS:** React builds on these languages, so it was necessary to gain familiarity with them.

**Assembly API:** The Assembly API was used to convert live audio/video into text.

**Twilio:** This was used to send instant messages to users based on tracked accomplishments.

# Challenges we ran into

* Launching new apps with React via Visual Studio Code
* Using Axios to run API calls
* Displaying JSON information
* Domain hosting of Class Pad
* Working with Twilio

# Accomplishments that we're proud of

*Pranati:* I am proud that I was able to learn React from scratch, work with new tech such as Axios, and successfully use the Assembly API to create the Class Pad (something I am passionate about). I was able to persevere through errors and build a working product that is impactful. This is my first hackathon and I am glad I had so much fun.

*Simi:* This was my first time using React, Node.js, and Visual Studio. I don't have a lot of CS experience, so the learning curve was steep but rewarding!

*Amitesh:* Got to work with a team to bring a complicated idea to life!

# What we learned

*Amitesh:* Troubleshooting domain creation for various pages, supporting teammates, and teaching concepts.

*Pranati:* I learned how to use new tech such as React, new concepts such as API calls using Axios, how to debug efficiently, and how to work and collaborate in a team.

*Simi:* I learned how APIs work, basic HTML, and how React modularizes code. I also learned the value of hackathons, as this was my first.

# What's next for Balance Pad

*Visualizing Music:* Our group hopes to integrate BeatCaps software into our page in the future. This would allow a more interactive music experience for users and also allow hearing-impaired individuals to experience music.

*Real-Time Transcription:* Our group hopes to implement real-time transcription in the Class Pad to make it even easier for students.
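As a small illustration of the Affirmation Pad flow described above, here is a minimal Python sketch of the Twilio call that sends a congratulatory text. The credentials and phone numbers are placeholders, and the `accomplishment` string would come from the user's tracked session.

```python
# Minimal sketch of the Affirmation Pad's Twilio text (placeholder credentials/numbers).
from twilio.rest import Client

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
AUTH_TOKEN = "your_auth_token"                      # placeholder
TWILIO_NUMBER = "+15550000000"                      # placeholder Twilio number

def send_affirmation(user_number: str, accomplishment: str) -> str:
    """Text the user a positive affirmation referencing what they just finished."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body=f"Great work finishing: {accomplishment}! Take a well-earned break.",
        from_=TWILIO_NUMBER,
        to=user_number,
    )
    return message.sid  # Twilio's ID for the sent message

# Example: send_affirmation("+15551234567", "Chapter 3 practice problems")
```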
## Inspiration

Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.

## What it does

You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.

## How we built it

The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was also done in Canva, with a dash of Harvard colors.

## Challenges we ran into

One major challenge was getting the different parts of the app (frontend, backend, and AI) to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.

## Accomplishments that we're proud of

We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.

## What we learned

We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.

## What's next for Harvard Burger

Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
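For a sense of how the Flask backend fits together, here is a minimal sketch of an order endpoint. The route name and the `transcribe_audio` / `extract_order_items` helpers are hypothetical stand-ins for our actual transcription and prompt-engineering steps.

```python
# Hypothetical sketch of the Flask order endpoint (helper functions are stand-ins).
from flask import Flask, request, jsonify

app = Flask(__name__)

def transcribe_audio(audio_bytes: bytes) -> str:
    """Stand-in for the speech-to-text step."""
    return "one large vanilla milkshake and a cheeseburger, no onions"

def extract_order_items(transcript: str) -> list:
    """Stand-in for the LLM prompt that pulls out items, sizes, and modifications."""
    return [
        {"item": "milkshake", "size": "large", "flavor": "vanilla"},
        {"item": "cheeseburger", "modifications": ["no onions"]},
    ]

@app.post("/order")  # hypothetical route
def order():
    audio = request.files["audio"].read()  # audio blob sent by the React frontend
    transcript = transcribe_audio(audio)
    items = extract_order_items(transcript)
    return jsonify({"transcript": transcript, "items": items})

if __name__ == "__main__":
    app.run(debug=True)
```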
partial
## Inspiration

Both of us are students who hope to enter the now-deflated computer science market, and we shared similar experiences in the mass LeetCode grind. However, we also understood that simply completing LeetCode questions wasn't enough; often, potential candidates are met with an unpleasant surprise when they are asked to walk through their thought process. We believed that not being able to communicate your algorithmic ideas was an overlooked problem, and we hoped to eliminate it with our solution.

## What it does

DaVinci Solve is a way for users to practice communicating their thought process when approaching LeetCode problems (and any other competitive programming problems). The website is an AI-simulated interview scenario which prompts users to outline their solutions to various LeetCode problems through speech and provides feedback on the algorithmic approach they describe.

## How we built it

We used Gradio as a quick front-end and back-end solution. We implemented both the Groq and OpenAI APIs for speech-to-text, text-to-speech, and LLM generation. We used Leetscrape to scrape problems off of LeetCode, then used Beautiful Soup to format each problem for display.

We started by mapping out our idea in a flow chart that we could incrementally complete in order to keep our progress on track. Then we organized our APIs and created a console-based version using various calls from our APIs. Finally, we used Gradio to wrap up the functionality in an MVP localhost website.

## Challenges we ran into

There were a good number of bugs that kept surfacing as we wrote more code, and it was also hard to figure out how to move forward given the lacking documentation and flexibility of Gradio. However, eventually we pulled out the bug spray. With a good amount of perseverance, we finally weeded out every single bug from our code, which provided a good amount of relief.

## Accomplishments that we're proud of

We're pretty proud of being able to go all out on our first hackathon project while spending lots of time on other activities and enjoying the hackathon experience. We even managed to avoid consuming any caffeine and steered away from all-nighters in favour of a good amount of sleep. But definitely the most satisfying thing was getting our project to work in the end.

## What we learned

This was our first hackathon, and it was an eye-opening experience to see how much could happen within 36 hours. We developed skills in building and testing as fast as possible. Lastly, we learned how to cooperate on an exciting project while squeezing as much fun out of Hack the North as we could.

## What's next for DaVinci Solve

We plan on implementing a better front-end, a memory system for the feedback-giving LLM, and an in-built IDE for users to test their actual code after illustrating their approach.
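For a sense of how little Gradio code the MVP loop needs, here is a minimal sketch of the interview interface; the `give_feedback` helper is a hypothetical stand-in for our speech-to-text, LLM critique, and text-to-speech chain.

```python
# Minimal Gradio sketch of the interview loop (give_feedback is a stand-in).
import gradio as gr

def give_feedback(audio_path, problem_slug):
    """Stand-in for: transcribe the spoken approach, ask the LLM to critique it,
    and synthesize the critique back to speech."""
    if audio_path is None:
        return "Please record your approach first."
    return f"Feedback on your approach to '{problem_slug}': (LLM critique goes here)"

demo = gr.Interface(
    fn=give_feedback,
    inputs=[
        gr.Audio(type="filepath", label="Explain your approach"),
        gr.Textbox(label="LeetCode problem slug", value="two-sum"),
    ],
    outputs=gr.Textbox(label="Interviewer feedback"),
    title="DaVinci Solve (sketch)",
)

if __name__ == "__main__":
    demo.launch()  # serves a localhost web UI
```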
## 💡 Inspiration

Whenever I was going through educational platforms, I always wanted to use one website to store everything. The notes, lectures, quizzes, and even the courses had to be accessed from different apps. This inspired me to create a centralized platform that acknowledges learning diversity, and a place where many people can **collaborate, learn and grow.**

## 🔎 What it does

Using **Assembly AI**, I incorporated a model that enhances the user experience with **speech-to-text** functionality. My application puts the user in control of when they study; from there, they can choose from ML transcription with summarization and labels, studying techniques to optimize time and comprehension, and an ISR (Incremental Static Regeneration) platform which continuously provides support. **The tools used can be scaled, as the connections to the APIs and CMS are easy to scale *vertically*.**

## 🚧 How we built it

* **Frontend**: built in React but optimized with **NextJS**, with extensive use of Tailwind CSS and Chakra UI.
* **Backend**: Authentication with Sanity CMS; TypeScript and GraphQL/GROQ used to power a serverless async webhook engine for an API interface.
* **Infrastructure**: All connected through **NodeJS** and implemented with *vertical* scaling technology.
* **Machine learning**: Summarization, transcription, and labels from the **AssemblyAI** API, with an optimized study strategy built on top of that.
* **Branding, design and UI**: Elements created in Procreate and some docs in Chakra UI.
* **Test video**: Edited with CapCut.

## 🛑 Challenges we ran into

* Implementing ISR technology in an app such as this required a lot of effort and troubleshooting. However, I made sure to complete it.
* Integrating such powerful models and connecting to them through TypeScript and Axios was hard. However, after learning them more fully, we were ready to tackle the problem and succeed. I even optimized one of the algorithm's attributes with asynchronous recursion.
* Learning a query language such as **GROQ** (really similar to GraphQL) was difficult, but we were able to use it with the Sanity plugin and build on the **codebases** it automatically provides.

## ✔️ Accomplishments that we're proud of

Literally, the front end and the backend required technologies and frameworks that were way beyond what I knew 3 months ago. **However, I learned a lot in the space between to fuel my passion to learn.** Over the past few weeks, I planned, read the docs of **AssemblyAI**, learned **GROQ**, implemented **ISR**, and put that through a **Content Management Service (CMS)**.

## 📚 What we learned

Throughout Hack the North 2022 and prior, I learned a variety of different frameworks, techniques, and APIs to build such an idea. When I started coding, I felt like I was going ablaze as the technologies went together like **bread and butter**.

## 🔭 What's next for SlashNotes?

While I was able to complete a considerable amount of the project in the given timeframe, there are still places where I can improve:

* Implementation in the real world! I aim to push this out to Google Cloud.
* Integration with school course systems, and improving the backend by adding more scaling and tips for user retention.
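For reference, here is a minimal Python sketch of the AssemblyAI transcription-plus-summarization call that powers the notes. The production app makes the equivalent REST calls from TypeScript; the API key and audio URL are placeholders, and the summarization fields are my reading of the documented parameters, so double-check them against the AssemblyAI docs.

```python
# Python sketch of the AssemblyAI transcription + summarization flow (placeholders inside;
# summarization parameters are an assumption to verify against the official docs).
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"  # placeholder
HEADERS = {"authorization": API_KEY}

def transcribe_and_summarize(audio_url: str) -> dict:
    # Request a transcript with summarization enabled.
    job = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers=HEADERS,
        json={
            "audio_url": audio_url,
            "summarization": True,         # assumption: documented summary settings
            "summary_model": "informative",
            "summary_type": "bullets",
        },
    ).json()

    # Poll until the transcript is ready, then return the full result.
    while True:
        result = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}", headers=HEADERS
        ).json()
        if result["status"] in ("completed", "error"):
            return result
        time.sleep(3)

# Example: transcribe_and_summarize("https://example.com/lecture.mp3")["summary"]
```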
## Motivation

Coding skills are in high demand and will soon become a necessary skill for nearly all industries. Jobs in STEM have grown by 79 percent since 1990, and are expected to grow an additional 13 percent by 2027, according to a 2018 Pew Research Center survey. This provides strong motivation for educators to find a way to engage students early in building their coding knowledge. Mixed reality may very well be the answer. A study conducted at Georgia Tech found that students who used mobile augmented reality platforms to learn coding performed better on assessments than their counterparts. Furthermore, research at Tufts University shows that tangible programming encourages high-level computational thinking.

Two of our team members are instructors for an introductory programming class at the Colorado School of Mines. One team member is an interaction designer at the California College of the Arts and is new to programming. Our fourth team member is a first-year computer science student at the University of Maryland. Learning from each other's experiences, we aim to create the first mixed reality platform for tangible programming, which is also grounded in the reality-based interaction framework. This framework has two main principles:

1) First, interaction **takes place in the real world**, so students no longer program behind large computer monitors where they have easy access to distractions such as games, IM, and the Web.
2) Second, interaction behaves more like the real world. That is, tangible languages take advantage of **students’ knowledge of the everyday, non-computer world** to express and enforce language syntax.

Using these two concepts, we bring you MusicBlox!

## What it is

MusicBlox combines mixed reality with introductory programming lessons to create a **tangible programming experience**. In comparison to other products on the market, like the LEGO Mindstorms, our tangible programming education platform **cuts costs in the classroom** (no need to buy expensive hardware!), **increases reliability** (virtual objects never suffer wear and tear), and **allows greater freedom in the design** of the tangible programming blocks (teachers can print out new cards/tiles and map them to new programming concepts). This platform is currently usable on the **Magic Leap** AR headset, but will soon be expanded to more readily available platforms like phones and tablets.

Our platform is built on the research performed by Google’s Project Bloks and operates under a similar principle of gamifying programming and using tangible programming lessons. The platform consists of a baseboard where students must place tiles. Each of these tiles is associated with a concrete, real-world item. For our first version, we focused on music. Thus, the tiles include a song note, a guitar, a piano, and a record. These tiles can be combined in various ways to teach programming concepts. Students must order the tiles correctly on the baseboard in order to win the various levels on the platform. For example, on level 1, a student must correctly place a music note, a piano, and a sound in order to reinforce the concept of a method. That is, an input (song note) is fed into a method (the piano) to produce an output (sound). Thus, this platform not only provides a tangible way of thinking (students are able to interact with the tiles while visualizing augmented objects), but also makes use of everyday, non-computer world objects to express and enforce computational thinking.

## How we built it

Our initial version is deployed on the Magic Leap AR headset. There are four components to the project, which we split equally among our team members. The first is image recognition, which Natalie worked predominantly on. This required using the Magic Leap API to locate and track various image targets (the baseboard, the tiles) and render augmented objects on those tracked targets. The second component, which Nhan worked on, involved extended reality interaction. This involved both Magic Leap and Unity to determine how to interact with buttons and user interfaces in the Magic Leap headset. The third component, which Casey spearheaded, focused on integration and scene development within Unity. As the user flows through the program, there are different game scenes they encounter, which Casey designed and implemented. Furthermore, Casey ensured the seamless integration of all these scenes for a flawless user experience. The fourth component, led by Ryan, involved project design, research, and user experience. Ryan tackled user interaction layouts to determine the best workflow for children to learn programming, concept development, and packaging of the platform.

## Challenges we ran into

We faced many challenges with the nuances of the Magic Leap platform, but we are extremely grateful to the Magic Leap mentors for providing their time and expertise over the duration of the hackathon!

## Accomplishments that We're Proud of

We are very proud of the user experience within our product. This feels like a platform that we could already begin testing with children and getting user feedback on. With our design expert Ryan, we were able to package the platform to be clean, fresh, and easy to interact with.

## What We learned

Two of our team members were very unfamiliar with the Magic Leap platform, so we were able to learn a lot about mixed reality platforms that we previously did not know. By implementing MusicBlox, we learned about image recognition and object manipulation within Magic Leap. Moreover, with our scene integration, we all learned more about the Unity platform and game development.

## What’s next for MusicBlox: Tangible Programming Education in Mixed Reality

This platform is currently only usable on the Magic Leap AR device. Our next big step would be to expand to more readily available platforms like phones and tablets. This would allow for more product integration within classrooms. Furthermore, we only have one version, which depends on music concepts and teaches methods and loops. We would like to expand our versions to include other everyday objects as a basis for learning abstract programming concepts.
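As a sketch of how a level is validated, the check below captures the idea of level 1 (input, method, output) in plain Python. The tile names and level definition are illustrative; the real check runs inside our Unity scene scripts.

```python
# Illustrative sketch of MusicBlox level validation (the real check lives in Unity).
LEVELS = {
    1: ["music_note", "piano", "sound"],  # input -> method -> output, i.e. a "method"
}

def check_level(level_id, placed_tiles):
    """A level is solved when the tiles on the baseboard match the expected order."""
    return placed_tiles == LEVELS[level_id]

# Example: a student places the tiles left to right on the baseboard.
print(check_level(1, ["music_note", "piano", "sound"]))  # True  -> level complete
print(check_level(1, ["piano", "music_note", "sound"]))  # False -> try again
```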
losing
## Inspiration

We were inspired primarily by the co-founder's (A. Brady-Mine) experience developing educational plans to teach human rights through their non-profit. From this, we already had an idea of the value of well-designed and well-informed lesson plans, especially in increasing access to education. The other co-founder (J. Falagan) is an environmental engineer with specific interests in conservation and climate change mitigation. Combining these two interests led to an obvious conclusion: provide relevant, high-quality information about the global environment that incorporates scientific information alongside studies on global sustainability and action efforts. Because we wanted the product to be easy to access and highly scalable, we decided the ideal prototyping platform would be a Squarespace website integrated with OpenAI.

## What it does

After entering the website, the teacher is given a description of the product and is directed to fill out the lesson plan request form. They select a range of student ages and a lesson plan length, and submit a comma-separated list of desired environmental topics. Within 5 minutes, the teacher receives an email with the lesson plan.

## How we built it

The initial form response is taken from the Squarespace page and stored in Google Sheets. From there, the information is passed through Zapier, where it is combined with a preset prompt template that proved to be the most effective for GPT-3. Multiple prompts were tested and optimized to produce lesson plans that are informative, concise, and accurate relative to the user-selected options. The prompts are then passed to GPT-3, and the generated lesson plan is passed back to Zapier, where it is emailed to the user.

## Challenges we ran into

The first major challenge we faced was determining a single prompt capable of producing consistent results for a wide range of topics related to environmental issues. We went through several iterations of prompts and eliminated those that provided inaccurate or incomplete results. We also had issues with triggering the workflow when the forms are submitted. Due to the method that Squarespace uses to update Google Sheets, the workflow was not being triggered. We were able to work around this issue by using a modified-or-changed trigger in Zapier.

## Accomplishments that we're proud of

We were able to create a functional app and integrate previous outside knowledge with practical development tools.

## What we learned

How to create effective workflows using GPT-3, and how to incorporate large language models into simple user-facing projects.

## What's next for EcoLessons

A crucial next step is working on validating the information provided by the model, in particular identifying and reducing hallucinations. In the future, we hope to feed GPT-3 a pre-selected set of external sources with known accuracy related to the topic requested. This has been shown to significantly reduce the likelihood of hallucinations. This method would also allow us to provide educators with the resources that were used in crafting the final material.

## View and Password:

<https://turtle-dove-hx63.squarespace.com/>
Password: 123abc
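For reference, the Zapier step that talks to GPT-3 boils down to a single prompt-and-completion call. Here is a rough Python equivalent using the OpenAI SDK; the model name, prompt wording, and token limit are illustrative, not our exact production prompt (the hack itself used a GPT-3 completion model through Zapier).

```python
# Rough Python equivalent of the Zapier -> GPT-3 step (model and prompt are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_lesson_plan(ages: str, length: str, topics: str) -> str:
    prompt = (
        f"Write a {length} environmental science lesson plan for students aged {ages}. "
        f"Cover these topics: {topics}. Include learning objectives, activities, "
        "and a short assessment."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the hack used a GPT-3 completion model
        messages=[{"role": "user", "content": prompt}],
        max_tokens=800,
    )
    return response.choices[0].message.content

# Example: generate_lesson_plan("10-12", "45 minute", "recycling, ocean plastic")
```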
## Inspiration

We got the inspiration while solving some math questions. We were getting some of the questions wrong, but couldn't tell which step we were doing incorrectly. Online, it was even worse: there were only videos, and you had to figure all of the rest out by yourself. The only way to see exactly where you made a mistake was to have a teacher with you. How crazy! Then we realized technology could help us solve this, and it could even enable us to build a platform that intelligently gives the most efficient learning route to each person, so no time would be wasted solving the same things again and again!

## What it does

The app provides you with some questions (currently math) and a drawing area to solve each question. While you are solving, the app compares your handwritten solution steps with the correct ones and tells you whether each step was correct or not. Even more, since it also has educational content built in, it can track and show you more of the questions that you did incorrectly, and even questions involving steps you got wrong while solving other questions.

## How we built it

We built the recognition part using the MyScript math handwriting recognition API, and all the tracking, statistics, and other features using Swift, UIKit, and AVFoundation.

## Challenges we ran into

We ran into lots of challenges while building all the data models, since each one is interconnected with the others, and all the steps, questions, tags, etc. make up quite a large variety of data. With said variety of data also came a torrent of user interface bugs, and it took *some* perseverance to solve them all as quickly as possible. Also, probably one of the biggest challenges we dealt with was the IDE itself crashing :)

## Accomplishments that we're proud of

We are proud of the data collection and recommendation system that we built from the ground up (entirely in Swift!), and of the UI that we built. Even though the app doesn't have a large quantity of educational content inside yet, we built it with easy expansion in mind as content gets added.

## What we learned

The biggest thing we learned was how to build a data set large enough to give personalized recommendations, and how to divide and conquer it before it gets too complex. We also learned to go beyond what the documentation on the internet offers while debugging, and to solve things by working from examples when there was no documentation on how to implement something.

## What's next for Tat

We think that Tat has a lot of potential to redefine education for years to come if we can build more upon it, with more content, more data, and even the possibility of integrating crowd-trained AI.
## Inspiration

As college students, we can all relate to having a teacher who was not engaging enough during lectures, or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves, create better lecture sessions, and earn better RateMyProfessors ratings.

## What it does

Morpheus is a machine learning system that analyzes a professor’s lesson audio in order to differentiate between various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor’s body language throughout the lesson using motion detection and analysis software. We then store everything in a database and show the data on a dashboard, which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.

## How we built it

### Visual Studio Code/Front End Development: Sovannratana Khek

I used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them to work for our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new original functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end, we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.

### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto

I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we’re dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way. In order to make the software easily portable across different platforms, I containerized the whole tech stack using Docker and docker-compose to handle the interaction among several containers at once.

### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas

I developed a machine learning model to recognize speech emotion patterns using MATLAB’s Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to seamlessly visualize it using a pie chart, providing an easy and seamless integration with the database that connects to our website.

### Solidworks/Product Design Engineering: Riki Osako

Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher’s movements and hear the volume while not disturbing the flow of class. Currently, the main sensors utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow it to switch from a stationary position to a mobile one. It’s able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.

### Figma/UI Design of the Product: Riki Osako

Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor, so that all they would need to do is scan in using their school ID, then either check their lecture data or start the lecture. Overall, the professor is able to see whether the device is tracking their movements and volume throughout the lecture and see the results of the lecture at the end.

## Challenges we ran into

Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand, through Solidworks and Figma (using it for the first time). I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and I looked back at my UI/UX notes from the Google Coursera certification course that I’m taking.

Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method, as the logic was specific to our software. Fortunately, those problems just needed time and a lot of debugging, with some help from peers and existing resources, and since React is JavaScript-based, I was able to draw on past experience with JS and Django despite using an unfamiliar framework.

Giuseppe Steduto: The main issue I faced was making everything run smoothly and interact in the correct manner. Often I ended up in dependency hell and had to rethink the architecture of the whole project so as not to over-engineer it, without losing speed or consistency.

Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.

## Accomplishments that we're proud of

Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis, all while learning new technologies (such as React and Docker), even though our time was short.

## What we learned

As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to use his strengths in that area to create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman who had his first hackathon experience and was able to use it to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just in the coding aspect but with different ideas as well.

## What's next for Untitled

We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days. From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, we would like to add motion tracking data feedback so the professor can get a general idea of how they should be changing their gestures. We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand where the students need the most help. From a business standpoint, we would like to see if we could team up with our university, Illinois Institute of Technology, and test the functionality in actual classrooms.
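To give a feel for the grading step, here is a simplified Python sketch of turning the model's per-lecture emotion distribution into a single engagement score shown on the dashboard. The emotion weights are illustrative, not our exact algorithm.

```python
# Simplified sketch of scoring a lecture from its emotion distribution
# (weights are illustrative, not the exact algorithm used in Morpheus).

ENGAGEMENT_WEIGHTS = {
    "happiness": 1.0,
    "neutral": 0.5,
    "anger": 0.3,
    "sadness": 0.2,
    "boredom": 0.1,
}

def lecture_score(emotion_fractions):
    """emotion_fractions maps emotion -> fraction of lecture time (sums to ~1.0).
    Returns a 0-100 engagement score for the dashboard's line graph."""
    score = sum(
        ENGAGEMENT_WEIGHTS.get(emotion, 0.0) * fraction
        for emotion, fraction in emotion_fractions.items()
    )
    return round(100 * score, 1)

# Example: a lecture that is mostly neutral with some happy moments.
print(lecture_score({"neutral": 0.7, "happiness": 0.2, "boredom": 0.1}))  # 56.0
```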
losing
## Inspiration

Despite having different experiences learning Chinese, we agree on one thing: **the best way to learn is to speak it frequently in daily conversation**. Daniel learned Chinese from his parents, and the immersion of Chinese in his home has been the biggest catalyst for his Chinese education. Joanna has always wanted to improve her spoken Chinese, but practicing with relatives and other native speakers induces a lot of anxiety. *If only there was a low-stakes, accessible way to practice conversing in Chinese...*

## What it Does

LanguageLink, utilizing Google’s **DialogFlow API** and **Natural Language** technology combined with **text-to-speech and speech-to-text**, creates organic conversations for accessible language practice. By asking users to converse with the bot in their native language before practicing in their target language, LanguageLink is able to collect natural dialog from languages all around the world. The dialog is fed back into DialogFlow as both training phrases and entry dialogue, so that Otto, our agent, can hold a conversation that goes many ways! Native speakers will additionally give feedback and edits to Otto on fluency, context, and grammar.

The approachable and friendly UI makes practicing a language with Otto easy. Anybody can improve at speaking a new language on their own timetable without waiting on a partner. LanguageLink also tracks progress and identifies areas for improvement. Users will be able to access translations, transcripts, and even suggestions while they chat with Otto.

## How We Built it

We designed the UI in **Figma**, made illustrations in **Procreate**, and constructed the frontend in vanilla **JavaScript**. The linked demo is a client-side proof of concept, though the real product will require connecting our **Node.js** backend to communicate with Google’s APIs.

## Challenges We Ran Into

Taking on backend servers and requesting APIs came with all the challenges of learning new technology, and figuring out how to present our multilayered idea concisely and efficiently was its own semantic hurdle.

## Accomplishments That We're Proud Of

This was a day of many firsts for both of us! It was Joanna’s first time creating a visual brand identity from 0 to 1, and it was Daniel’s first time creating and connecting to a Node.js backend. **We’re proud that we created a fun, meaningful product experience for users to enjoy, and we encourage you to try out our demo!**

## What We Learned

Though the backend wasn’t used in the demo, it was a great learning experience simply to research it and focus on the frontend aspects of the project. Since PennApps was our first hackathon, we learned how rapidly time seems to fly by while experiencing excitement, frustration, and adrenaline all at the same time. We learned how to learn quickly, gained experience with new technologies, and connected with cool mentors.

## What's next for LanguageLink

The most immediate step is to connect the backend tech for LanguageLink, namely DialogFlow. Along with this comes finding a method of sorting the responses into training phrases and entry dialogue, and beginning to construct our conversation trees in different languages. In the long run, our goal is for Otto to be familiar with every language and for LanguageLink to have a global user base. LanguageLink should be simple, friendly, and accessible, lending itself to everyday use.
With a user base that collectively trains Otto, we want LanguageLink to be a forum that makes language education more accessible and facilitates connection around the world.
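Once the Node.js backend is wired up, the core exchange with Otto is a single Dialogflow `detect_intent` call per user utterance. Here is roughly what that call looks like with Google's Python client (we will port the same flow to Node); the project ID, session ID, and language code are placeholders.

```python
# Sketch of one conversational turn with Otto via Dialogflow ES (placeholder IDs).
from google.cloud import dialogflow

def talk_to_otto(project_id: str, session_id: str, text: str, language_code: str) -> str:
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Wrap the user's utterance and send it to the agent.
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # Otto's reply, ready to hand to text-to-speech on the frontend.
    return response.query_result.fulfillment_text

# Example: talk_to_otto("my-gcp-project", "user-123", "你今天过得怎么样?", "zh-CN")
```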
## Inspiration

The complexity of navigating opportunities across borders due to pre-existing language barriers sparked our inspiration. Whether it’s missing out on a global business deal or struggling with personal connections when travelling, language differences often become a hurdle. We sought to create a solution to combat this challenge, and that’s how we landed on 'The Voice', a tool designed to bridge these gaps and make international communication seamless.

## What it does

"The Voice" enables real-time translation by converting spoken language into text, translating it on the fly, and delivering instant voice output in the desired language. Whether you're in a business meeting or travelling abroad, the app allows users to break language barriers effortlessly, making global communication accessible for everyone.

## How we built it

Backend: We used Python, leveraging OpenAI’s Whisper library for initial speech-to-text functionality. Recognizing its limitations in translation quality, we integrated the DeepL API to provide more accurate and natural translations.

Frontend: For the UI, we used React with Tailwind CSS to create a clean, intuitive user interface that simplifies interaction.

Electron: To run the app natively on the desktop and integrate voice input, we built it with Electron, making the app platform-independent.

## Challenges we ran into

Compiling audio inputs: Capturing and processing audio from the laptop’s microphone required working with tools like Voiceflow for user interaction and managing inter-process communication.

Hijacking computer audio: We faced complications in rerouting system audio for the app to capture and process it in real-time, especially integrating this within Electron’s framework.

## Accomplishments that we're proud of

We are proud to have overcome several technical challenges, including successfully hijacking the computer's audio for real-time processing, which enabled seamless voice translation. Additionally, we designed a pleasant, intuitive UI that not only enhances the user experience but also showcases our proficiency in React and Tailwind CSS. This allowed us to create a visually appealing and functional interface, making cross-language communication both easy and enjoyable. Combining these backend and frontend accomplishments is a major highlight of our project.

## What we learned

Beyond improving our teamwork, communication, and collaboration, we gained technical insights in:

* Working with Voiceflow to enhance voice interaction.
* Using Electron to develop cross-platform desktop applications.
* Refining our React skills to design a smooth, intuitive UI/UX.

## What's next for The Voices

Looking ahead, we aim to expand The Voice by integrating support for all major languages, making it a truly global solution. Additionally, we plan to implement advanced text-to-speech capabilities, allowing users not only to receive translated text but also to hear the translation in real-time, further enhancing accessibility. We’re also exploring strategic partnerships with telecommunications enterprises to embed our app into their services, enabling smoother communication experiences for international calls and meetings. These future developments will make The Voice a powerful tool for breaking language barriers worldwide.
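The backend pipeline reduces to two calls: Whisper for speech-to-text and DeepL for translation. Here is a minimal Python sketch of that core; the API key, audio path, and target language are placeholders, and the voice-output playback step is omitted.

```python
# Minimal sketch of the speech-to-text -> translation core (voice playback omitted).
import whisper
import deepl

def translate_speech(audio_path: str, target_lang: str = "FR") -> str:
    # 1. Transcribe the captured audio with Whisper.
    model = whisper.load_model("base")  # a small model keeps latency reasonable
    transcript = model.transcribe(audio_path)["text"]

    # 2. Translate the transcript with DeepL for a more natural result.
    translator = deepl.Translator("YOUR_DEEPL_API_KEY")  # placeholder key
    translated = translator.translate_text(transcript, target_lang=target_lang)
    return translated.text

# Example: translate_speech("meeting_clip.wav", target_lang="ES")
```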
## Inspiration

Recently, character experiences powered by LLMs have become extremely popular. Platforms like Character.AI, boasting 54M monthly active users and a staggering 230M monthly visits, are a testament to this trend. Yet, despite these figures, most experiences in the market offer text-to-text interfaces with little variation. We wanted to take chatting with characters to the next level. Instead of a simple, standard text-based interface, we wanted an intricate visualization of your character with a 3D model viewable in your real-life environment; actual low-latency, immersive, realistic, spoken dialogue with your character; and a really fun, dynamic (generated on-the-fly) 3D graphics experience, seeing objects appear as they are mentioned in conversation, a novel innovation only made possible recently.

## What it does

An overview: CharactAR is a fun, immersive, and **interactive** AR experience where you get to speak your character’s personality into existence, upload an image of your character or take a selfie, pick their outfit, and bring your custom character to life in an AR world, where you can chat using your microphone or type a question, and even have your character run around in AR! As an additional super cool feature, we compiled, hosted, and deployed the open-source OpenAI Shap-E model (by ourselves, on NVIDIA A100 GPUs from Google Cloud) to do text-to-3D generation, meaning your character is capable of generating 3D objects (mid-conversation!) and placing them in the scene. Imagine the Terminator generating robots, or a marine biologist generating fish and other wildlife! Our combination and intersection of these novel technologies enables experiences like those to now be possible!

## How we built it

![flowchart](https://i.imgur.com/R5Vbpn6.png)

*So how does CharactAR work?*

To begin, we built <https://charactar.org>, a web application that utilizes Assembly AI (state-of-the-art speech-to-text) to do real-time speech-to-text transcription. Simply click the “Record” button, speak your character’s personality into existence, and click the “Begin AR Experience” button to enter your AR experience. We used HTML, CSS, and JavaScript to build this experience, bought the domain using GoDaddy, and hosted the website on Replit! In the background, we’ve already used OpenAI Function Calling, a novel OpenAI product offering, to choose voices for your custom character based on the original description that you provided.

Once we have the voice and description for your character, we’re ready to jump into the AR environment. The AR platform that we chose is 8th Wall, an AR deployment platform built by Niantic, which focuses on web experiences. Due to the emphasis on web experiences, any device can use CharactAR, from mobile devices, to laptops, or even VR headsets (yes, really!). In order to power our customizable character backend, we employed the Ready Player Me avatar generation SDK, providing us with a responsive UI that enables our users to create any character they want, from taking a selfie, to uploading an image of their favorite celebrity, or even just choosing from a predefined set of models. Once the model is loaded into the 8th Wall experience, we then use a mix of OpenAI (character intelligence), InWorld (microphone input & output), and ElevenLabs (voice generation) to create an extremely immersive character experience from the get-go.

We animated each character using the standard Ready Player Me animation rigs, and you can even see your character move around in your environment by dragging your finger on the screen. Each time your character responds to you, we make an API call to our own custom-hosted Shap-E API, which runs on Google Cloud on an NVIDIA A100. A short prompt based on the conversation between you and your character is sent to the text-to-3D model to be generated into a 3D object that is automatically inserted into your environment. For example, if you are talking with Barack Obama about his time in the White House, our Shap-E API will generate a 3D object of the White House, and it’s really fun (and funny!) in-game to see what Shap-E will generate.

## Challenges we ran into

One of our favorite parts of CharactAR is the automatic generation of objects during conversations with the character. However, the addition of these objects also led to an unfortunate spike in triangle count, which quickly builds up lag. So when designing this pipeline, we worked on reducing unnecessary detail in model generation. One of these methods is the selection of the number of inference steps prior to generating 3D models with Shap-E. The other is to compress the generated 3D model, which ended up being more difficult to integrate than expected. At first, we generated the 3D models in the .ply format, but realized that .ply files are a nightmare to work with in 8th Wall. So we decided to convert them into .glb files, which would be more efficient to send through the API and better to include in AR. The .glb files could get quite large, so we used Google’s Draco compression library to reduce file sizes by 10 to 100 times. Getting this to work required quite a lot of debugging and package dependency resolving, but it was awesome to see it functioning. Below, we have “banana man” renders from our hosted Shap-E model.

![bananaman_left](https://i.imgur.com/9i94Jme.jpg) ![bananaman_right](https://i.imgur.com/YJyRLKF.jpg)

*Even after transcoding the .glb file with Draco compression, the banana man still stands gloriously (1 MB → 78 KB).*

Although 8th Wall made development much more streamlined, AR development as a whole still has a ways to go, and here are some of the challenges we faced. There were countless undefined errors with no documentation, many of which took hours of debugging to overcome. Working with the animated Ready Player Me models and the .glb files generated by our Shap-E model imposed a lot of challenges with model formats and dynamically generated models, which required lots of reading up on 3D model formats.

## Accomplishments that we're proud of

There were many small challenges in each of the interconnected portions of the project, and we are proud to have persevered through the bugs and roadblocks. The satisfaction of small victories, like seeing our prompts come to life in 3D or seeing the character walk around our table, always invigorated us to keep on pushing.

Running AI models is computationally expensive, so it made sense for us to allocate this work to Google Cloud’s servers. This allowed us to access the powerful A100 GPUs, which made Shap-E model generation thousands of times faster than would be possible on CPUs. This also provided a great opportunity to work with FastAPI to create a convenient and extremely efficient method of inputting a prompt and receiving a compressed 3D representation of the query.

We integrated AssemblyAI's real-time transcription services to transcribe live audio streams with high accuracy and low latency. This capability was crucial for our project, as it allowed us to convert spoken language into text that could be further processed by our system. The WebSocket API provided by AssemblyAI was secure, fast, and effective in meeting our requirements for transcription.

The function calling capabilities of OpenAI's latest models were an exciting addition to our project. Developers can now describe functions to these models, and the models intelligently output a JSON object containing the arguments for those functions. This feature enabled us to integrate GPT's capabilities seamlessly with external tools and APIs, offering a new level of functionality and reliability.

For enhanced user experience and interactivity between our website and the 8th Wall environment, we leveraged the URLSearchParams interface. This allowed us to pass the initial character prompt along seamlessly.

## What we learned

For the majority of the team, it was our first AR project using 8th Wall, so we learned the ins and outs of building with AR, the A-Frame library, and deploying a final product that can be used by end users. We also had never used Assembly AI for real-time transcription, so we learned how to use WebSockets for real-time transcription streaming. We also learned many of the intricacies of 3D objects and their file types, and really got low-level with the meshes, the object file formats, and the triangle counts to ensure a smooth rendering experience.

Since our project required so many technologies to be woven together, there were many times when we had to find unique workarounds and weave together our distributed systems. Our prompt engineering skills were put to the test, as we needed to experiment with countless phrasings to get our agent behaviors and 3D model generations to match our expectations. After this experience, we feel much more confident in utilizing state-of-the-art generative AI models to produce top-notch content. We also learned to use LLMs for more specific and unique use cases; for example, we used GPT to identify the most important object prompts from a large dialogue conversation transcript, and to choose the voice for our character.

## What's next for CharactAR

Using 8th Wall technology like Shared AR, we could potentially have up to 250 players in the same virtual room, meaning you could play with your friends, no matter how far away they are from you. These kinds of collaborative, virtual, and engaging experiences are the types of environments that we want CharactAR to enable. While each CharactAR custom character is animated with a custom rigging system, we believe there is potential for using the new OpenAI Function Calling schema (which we used several times in our project) to generate animations dynamically, meaning we could have endless character animations and facial expressions to match endless conversations.
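For reference, here is a stripped-down Python sketch of the FastAPI wrapper around our hosted text-to-3D model. The route name is hypothetical, and the `generate_glb` helper stands in for the actual Shap-E sampling and Draco compression steps, which are too long to reproduce here.

```python
# Stripped-down sketch of the hosted text-to-3D endpoint (generate_glb is a stand-in
# for the actual Shap-E sampling + Draco compression pipeline on the A100).
from fastapi import FastAPI
from fastapi.responses import FileResponse
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str                # e.g. "the White House"
    inference_steps: int = 32  # fewer steps -> lower detail, lower triangle count

def generate_glb(prompt: str, inference_steps: int) -> str:
    """Stand-in: run Shap-E on the prompt, export a mesh, compress it to .glb."""
    output_path = "/tmp/generated.glb"
    # ... Shap-E sampling and Draco transcoding would happen here ...
    return output_path

@app.post("/generate")  # hypothetical route name
def generate(request: PromptRequest):
    glb_path = generate_glb(request.prompt, request.inference_steps)
    # Send the compressed .glb back to the 8th Wall client for placement in AR.
    return FileResponse(glb_path, media_type="model/gltf-binary")
```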
losing
## What it does

Paste in a text and it will identify the key scenes before turning it into a narrated movie. Favourite book, historical battle, or rant about work. Anything and everything: if you can read it, Lucid.ai can dream it.

## How we built it

Once you hit generate on the home UI, our frontend sends your text and video preferences to the backend, which uses our custom algorithm to cut up the text into key scenes. The backend then uses multithreading to make three simultaneous API calls. First, a call to GPT-3 to condense the chunks into image prompts to be fed into a Stable Diffusion/Deforum AI image generation model. Second, a sentiment keyword analysis using GPT-3, which is then fed to the YouTube API for a fitting background song. Finally, a call to TortoiseTTS generates a convincing narration of your text. Collected back at the front-end, you end up with a movie, all from a simple text.

## Challenges we ran into

Our main challenge was computing power. With no access to industry-grade GPU power, we were limited to running our models on personal laptop GPUs. External computing power also limited payload sizes, forcing us to find roundabout ways to communicate our data to the front-end.

## Accomplishments that we're proud of

* Extremely resilient commitment to the project, despite repeated technical setbacks
* Fast on-our-feet thinking when things don't go to plan
* A well-laid-out front-end development plan

## What we learned

* AWS S3 cloud storage
* TortoiseTTS
* How to dockerize a large open-source codebase

## What's next for Lucid.ai

* More complex camera motions beyond simple panning
* More frequent frame generation
* Real-time frame generation alongside video watching
* Parallel cloud computing to handle rendering at faster speeds
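The three simultaneous API calls mentioned above are simple to express with Python's thread pool. Here is a minimal sketch in which the three helper functions are hypothetical stand-ins for the GPT-3, YouTube, and TortoiseTTS calls.

```python
# Minimal sketch of running the three backend calls in parallel (helpers are stand-ins).
from concurrent.futures import ThreadPoolExecutor

def generate_image_prompts(scenes):   # stand-in for the GPT-3 -> Stable Diffusion step
    return [f"cinematic illustration of: {s}" for s in scenes]

def find_background_song(scenes):     # stand-in for sentiment keywords -> YouTube search
    return "https://youtube.com/watch?v=placeholder"

def narrate_text(text):               # stand-in for the TortoiseTTS narration call
    return "narration.wav"

def build_movie_assets(text, scenes):
    with ThreadPoolExecutor(max_workers=3) as pool:
        prompts_future = pool.submit(generate_image_prompts, scenes)
        song_future = pool.submit(find_background_song, scenes)
        audio_future = pool.submit(narrate_text, text)
        # .result() blocks until each call finishes; all three run concurrently.
        return prompts_future.result(), song_future.result(), audio_future.result()

# Example: build_movie_assets(full_text, ["the battle begins", "the city falls"])
```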
## Inspiration

The amount of data in the world today is mind-boggling. We are generating 2.5 quintillion bytes of data every day at our current pace, and the pace is only accelerating with the growth of IoT. We felt that the world was missing a smart find-feature for videos, so we decided to build an innovative and accessible solution that gives everyone the ability to unlock the heaps of important and relevant data locked away in video.

## What it does

CTRL-F is a web application implementing computer vision and natural language processing to determine the most relevant parts of a video based on keyword search, and to automatically produce accurate transcripts with punctuation.

## How we built it

We leveraged the MEVN stack (MongoDB, Express.js, Vue.js, and Node.js) as our development framework, and integrated multiple machine learning/artificial intelligence techniques provided by industry leaders, shaped by our own neural networks and algorithms, to provide the most efficient and accurate solutions. We perform keyword matching and search-result ranking with results from both speech-to-text and computer vision analysis. To produce accurate and realistic transcripts, we used natural language processing to produce phrases with accurate punctuation. We used Vue to create our front-end and MongoDB to host our database. We implemented both IBM Watson's speech-to-text API and Google's Computer Vision API, along with our own algorithms, to perform solid keyword matching.

## Challenges we ran into

Trying to implement both Watson's API and Google's Computer Vision API presented many challenges. We originally wanted to host our project on Google Cloud's platform, but with the many barriers that we ran into, we decided to create a RESTful API instead. The number of new technologies we were figuring out caused us to face sleep deprivation; however, staying up far longer than you're supposed to is the best way to increase your rate of errors and bugs.

## Accomplishments that we're proud of

* Implementation of natural language processing to automatically determine punctuation between words.
* Utilizing both computer vision and speech-to-text technologies, along with our own rank-matching system, to determine the most relevant parts of the video.

## What we learned

* Learning a new development framework a few hours before a submission deadline is not the best decision to make.
* Having a set scope and specification early on in the project was beneficial to our team.

## What's next for CTRL-F

* Expansion of the product into many other uses (professional education, automated information extraction, cooking videos; the possibilities are endless)
* The launch of a new mobile application
* Implementation of a machine learning model to let CTRL-F learn from its correct/incorrect predictions
## Inspiration

Emergencies are something that the city must handle on a day-to-day basis, and as residents of Kingston, we understand that every minute counts when responding to a call. We were thus inspired to use Kingston's Open Data resources to model an optimised distribution of emergency services across Kingston.

## What it does

Kingston Bernard, named after the famous Alpine rescue dogs, uses historical data on Fire & Rescue incidents from 2018 to now to map out common emergency areas, whether they be fire, medical, or vehicular. Then, using a greedy metric k-center algorithm, an approximately evenly distributed positional map is generated to inform the Kingston government which locations require the most attention when providing more emergency services (such as highlighting areas that may require more police patrolling, first aid kits, etc.).

## How I built it

The web application uses a React frontend with an Express backend that computes the distribution given a number of units available to place (it generates a set of coordinates for the map). It also uses the Google Cloud API to display the data on a Google Map.

## What's next for Kingston Bernard

Kingston Bernard aims to continue improving its algorithm to further optimise distribution, as well as including more data from Open Data Kingston to better implement a resourceful application.

We are team 44: ManchurioX#3808, CheezWhiz#8656, and BluCloos#8986
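The placement step is the classic greedy 2-approximation for metric k-center. Here is a small Python sketch of the idea over incident coordinates; distances here are straight-line on toy points, whereas the real app works with geographic coordinates from the open data set.

```python
# Greedy (Gonzalez) k-center sketch: pick k unit locations among incident points
# so that the farthest incident is as close as possible to some unit.
import math

def greedy_k_center(points, k):
    centers = [points[0]]  # start from an arbitrary incident location
    # Track each point's distance to its nearest chosen center.
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        # The next center is the incident currently farthest from all centers.
        farthest = max(range(len(points)), key=lambda i: dist[i])
        centers.append(points[farthest])
        dist = [min(d, math.dist(p, points[farthest])) for p, d in zip(points, dist)]
    return centers

# Example with toy incident coordinates and 2 units to place.
incidents = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9), (2.5, 2.4)]
print(greedy_k_center(incidents, 2))  # [(0.0, 0.0), (5.2, 4.9)]
```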
## Inspiration The inspiration for InstaPresent came from our frustration with constantly having to create presentations for class, and from the 'in-game advertising' episode of Silicon Valley. ## What it does InstaPresent is a tool that uses your computer's microphone to generate a presentation in real-time. It can retrieve images and graphs and summarize your words into bullet points. ## How we built it We used Google's Speech-to-Text API to process audio from the laptop's microphone. The transcribed text is captured while the user speaks and, when they stop speaking, the aggregated text is sent to the server via WebSockets to be processed. ## Challenges we ran into Summarizing text into bullet points was a particularly difficult challenge as there are not many resources available for this task. We ended up developing our own pipeline for bullet-point generation based on part-of-speech and dependency analysis. We also had plans to create an Android app for InstaPresent, but were unable to do so due to limited team members and time constraints. Despite these challenges, we enjoyed the opportunity to work on this project. ## Accomplishments that we're proud of We are proud of creating a web application that utilizes a variety of machine learning and non-machine learning techniques. We also enjoyed the challenge of working on an unsolved machine learning problem (sentence simplification) and being able to perform real-time text analysis to determine new elements. ## What's next for InstaPresent In the future, we hope to improve InstaPresent by predicting what the user intends to say next and improving the text summarization with word reordering.
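The write-up describes a custom bullet-point pipeline built on part-of-speech and dependency analysis without naming the tooling, so the sketch below is only a rough approximation of that idea using spaCy (our choice of library, not necessarily theirs): keep each sentence's subject, root verb, and object phrase.

```python
import spacy  # assumed tooling; requires: python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def to_bullets(text):
    """Turn spoken sentences into terse bullets by keeping each sentence's
    subject, root verb, and the object phrase found via dependency parsing."""
    bullets = []
    for sent in nlp(text).sents:
        root = sent.root                                   # main verb of the sentence
        subj = next((t for t in root.lefts if t.dep_ in ("nsubj", "nsubjpass")), None)
        obj = next((t for t in root.rights if t.dep_ in ("dobj", "attr", "oprd")), None)
        pieces = []
        if subj is not None:
            pieces.append(subj.text)
        pieces.append(root.text)
        if obj is not None:
            pieces.append(" ".join(w.text for w in obj.subtree))  # keep multi-word objects intact
        bullets.append("- " + " ".join(pieces))
    return bullets

print(to_bullets("Photosynthesis converts sunlight into chemical energy. "
                 "Plants absorb carbon dioxide."))
```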
## Inspiration We're 4 college freshmen that were expecting new experiences with interactive and engaging professors in college; however, COVID-19 threw a wrench in that (and a lot of other plans). As all of us are currently learning online through various video lecture platforms, we found out that these lectures sometimes move too fast or are just flat-out boring. Summaread is our solution to transform video lectures into an easy-to-digest format. ## What it does "Summaread" automatically captures lecture content and uses an advanced AI NLP pipeline to generate a condensed note outline. All one needs to do is provide a YouTube link to the lecture or a transcript, and the corresponding outline will be rapidly generated for reading. Summaread currently generates outlines that are shortened to about 10% of the original transcript length. The outline can also be downloaded as a PDF for annotation purposes. In addition, our tool uses the Google Cloud API to generate a list of Key Topics and links to Wikipedia to encourage further exploration of lecture content. ## How we built it Our project is composed of many interconnected components, which we detail below: **Lecture Detection** Our product is able to automatically detect when lecture slides change to improve the performance of the NLP model in summarizing results. This tool uses the Google Cloud Platform API to detect changes in lecture content and records timestamps accordingly. **Text Summarization** We use the Hugging Face summarization pipeline to automatically summarize groups of text that fall within a certain word count (a brief sketch of this step follows this write-up). This is repeated across every group of text previously generated by the Lecture Detection step. **Post-Processing and Formatting** Once the summarized content is generated, the text is processed into a set of coherent bullet points and split by sentences using Natural Language Processing techniques. The text is also formatted for easy reading by including “sub-bullet” points that give a further explanation of the main bullet point. **Key Concept Suggestions** To generate key concepts, we used the Google Cloud Platform API to scan over the condensed notes our model generates and provide Wikipedia links accordingly. Some examples of Key Concepts for a COVID-19 related lecture would be medical institutions, famous researchers, and related diseases. **Front-End** The front end of our website was set up with Flask and Bootstrap. This allowed us to quickly and easily integrate our Python scripts and NLP model. ## Challenges we ran into 1. Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening conversational sentences like those found in a lecture into bullet points. 2. Our NLP model is quite large, which made it difficult to host on cloud platforms ## Accomplishments that we're proud of 1) Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
2) Working on an unsolved machine learning problem (lecture simplification) 3) Real-time text analysis to determine new elements ## What we learned 1) First time for multiple members using Flask and doing web development 2) First time using the Google Cloud Platform API 3) Running deep learning models makes my laptop run very hot ## What's next for Summaread 1) Improve our summarization model by refining data pre-processing techniques and decreasing run time 2) Adding more functionality to generated outlines for a better user experience 3) Allowing users to set parameters for how much the lecture is condensed
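As referenced in the Summaread write-up above, here is a minimal sketch of the Hugging Face summarization pipeline applied to slide-aligned transcript chunks. The default model, length limits, and chunking are our assumptions, not Summaread's actual settings.

```python
from transformers import pipeline

# Loads a default summarization model; Summaread's actual model choice is not stated.
summarizer = pipeline("summarization")

def summarize_chunks(transcript_chunks, max_len=60, min_len=15):
    """Summarize each slide-aligned chunk of lecture transcript into a short blurb
    that can later be split into bullet points."""
    outlines = []
    for chunk in transcript_chunks:
        result = summarizer(chunk, max_length=max_len, min_length=min_len, do_sample=False)
        outlines.append(result[0]["summary_text"])
    return outlines

chunks = ["In this lecture we will look at how viruses spread through a population "
          "and why exponential growth makes early intervention so important for "
          "public health authorities trying to contain an outbreak."]
print(summarize_chunks(chunks))
```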
## Inspiration We couldn't think of a project we both wanted to do — should we address a societal problem? Do a technically challenging hardware hack in a field we were familiar with? In the end, after about 20-30 hours of hemming and hawing, we decided we just wanted to have fun, and made the world's most useless and obnoxious Valentine's Day Chrome extension! ## What it does Who needs constant reminders about love? Go celebrate yourself! We replace any mention of Valentine's Day and romance, and every link redirects to a single empowerment-songs playlist. Queen Bey gifs make an appearance, and your cursor becomes a meme cat. It's a very chaotic Chrome extension. For text replacements, we created a dictionary of romance-related words and possible creative replacements to select from. We also drew custom graphics for this project (cursor, backgrounds, other icons). ## How we built it We looked at tutorials online for making a Chrome extension and added our own flavor. ## Challenges we ran into Neither of us knows anything about front-end development, so making a Chrome extension was a new learning process! ## Accomplishments that we're proud of We made something that made us laugh! ## What we learned How to do front-end design badly. ## What's next for Happy Valentine's Day Being single.
### Simple QR Code Bill Payment #### nwHacks 2020 Hackathon Project #### Main repository for the RapidServe application ### Useful Links * [Github](https://github.com/rossmojgani/rapidserve) * [DevPost](https://devpost.com/software/rapidserve-g1skzh) ### Team Members * Ross Mojgani * Dryden Wiebe * Victor Parangue * Aric Wolstenholme ### Description RapidServe is a mobile application which allows restaurants to charge their customers through a mobile application interface. Powered by a React Native frontend and a Python Flask API server with a MongoDB database, RapidServe links a QR code to each table so a customer can scan the code at their table and pay for any item charged to it. Once all the items at the customer's table are paid for, the customer is free to go, and the waiter/waitress does not need to wait for each customer at the table to pay individually. ### Technical Details * Frontend Mobile Application **(React Native)** + The frontend was implemented using React Native; there is a landing page where the user can register or log in, using a Facebook integration to link their Facebook account. + While creating an account, if the user is a waiter/waitress, they are prompted to enter their restaurant ID along with their username/password combination. If the user is a customer, they will just be prompted for a username/password combination. + The next page lets the user scan a QR code corresponding to the table the waiter/waitress is serving or the customer is sitting at; the customer can see which items have been charged to their table and pay for whichever items they need to, while the waiter/waitress can add items to the table they are serving. + The user can pay for their items, and the waiter/waitress can see whether the table has been paid for and know the customers are good to go. * API Details **(Flask/Python API)** + The API for this application was implemented using the Flask framework with Python. The documented endpoints ([API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/API.md)) were the contract between the frontend and the backend in terms of what arguments were sent with which type of HTTP request. The API was hosted on a virtual machine in the cloud. + The API queried our MongoDB database (also hosted on a virtual machine in the cloud, more below) based on which requests were being processed. * Database Details **(MongoDB)** + The database used was MongoDB, queried from the Flask/Python server using PyMongo and Flask\_PyMongo. We mainly used two collections, **users and orders**, which stored objects based on what a user needed to have stored (see [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/API.md) for a user object example) and what a table's order would be (again, see [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/backend/API.md) for a table object example)
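RapidServe's real endpoints are defined in the linked API.md; the sketch below is a hypothetical Flask + Flask-PyMongo pair of routes (route names, fields, and the Mongo URI are placeholders, not the documented contract) just to illustrate how a QR-encoded table ID can key the orders collection.

```python
from flask import Flask, jsonify, request
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config["MONGO_URI"] = "mongodb://localhost:27017/rapidserve"  # placeholder URI
mongo = PyMongo(app)

@app.route("/order/<table_id>", methods=["GET"])
def get_table_order(table_id):
    """Return all items charged to the table encoded in the scanned QR code."""
    order = mongo.db.orders.find_one({"table_id": table_id}, {"_id": 0})
    return jsonify(order or {"table_id": table_id, "items": []})

@app.route("/order/<table_id>/items", methods=["POST"])
def add_item(table_id):
    """Called by waitstaff to charge an item to a table."""
    item = request.get_json()          # e.g. {"name": "Pad Thai", "price": 14.5}
    mongo.db.orders.update_one({"table_id": table_id},
                               {"$push": {"items": item}}, upsert=True)
    return jsonify({"status": "ok"}), 201

if __name__ == "__main__":
    app.run(debug=True)
```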
## Inspiration With millions of tonnes of food wasted in Canada every year, we knew that there needed to exist a cost-effective way to reduce food waste that can empower restaurant owners to make more eco-conscious decisions while also incentivizing consumers to choose more environmentally-friendly food options. ## What it does Re-fresh is a two-pronged system that allows users to search for food from restaurants that would otherwise go to waste, at a lower price than normal. On the restaurant side, we provide a platform to track and analyze inventory in a way that allows restaurants to better manage their requisitions for produce so that they do not generate any extra waste and can ensure profits are not being thrown away. ## How we built it For the backend portion of the app, we utilized CockroachDB with Python and JavaScript, as well as React Native for the user mobile app and the enterprise web application. To protect sensitive user information such as usernames and passwords, we used SHA-256 hashing. ## Challenges we ran into Due to the lack of adequate documentation as well as a plethora of integration issues with React.js and Node, CockroachDB was a difficult database to work with. We also ran into some problems on the frontend with utilizing Chart.js for displaying graphical representations of enterprise data. ## Accomplishments that we're proud of We are proud of the end design of our mobile app and web application. Our team members are not native web developers, so it was a unique experience stepping out of our comfort zone and getting to try new frameworks, and overall we are happy with what we learned as well as how we were able to utilize our broad understanding of programming principles to create this project. ## What we learned We learned more about web development than we knew before. We also learned that despite the design-oriented nature of frontend development there are many technical hurdles to go through when creating a full-stack application, and that there is a wide array of different frameworks and APIs that are useful in developing web applications. ## What's next for Re-Fresh The next step for Re-Fresh is restructuring the backend architecture to allow ease of scalability for future development, as well as hopefully being able to publish it and attract a customer base.
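The write-up mentions protecting credentials with SHA-256; below is a small Python illustration using hashlib. The per-user salt, and the note about preferring a slow KDF for passwords in production, are our own additions rather than a description of Re-fresh's actual scheme.

```python
import hashlib
import os
from typing import Optional, Tuple

def hash_credential(value: str, salt: Optional[bytes] = None) -> Tuple[str, str]:
    """Return (salt_hex, digest_hex) for a username or password.

    Plain SHA-256 plus a salt is shown here because that is what the write-up
    mentions; for real password storage a slow KDF (bcrypt, PBKDF2, argon2)
    would be the safer choice.
    """
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + value.encode("utf-8")).hexdigest()
    return salt.hex(), digest

def verify_credential(value: str, salt_hex: str, expected_digest: str) -> bool:
    return hash_credential(value, bytes.fromhex(salt_hex))[1] == expected_digest

salt_hex, digest = hash_credential("hunter2")
print(verify_credential("hunter2", salt_hex, digest))   # True
```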
Fujifusion is our group's submission for Hack MIT 2018. It is a data-driven application for predicting corporate credit ratings. ## Problem Scholars and regulators generally agree that credit rating agency failures were at the center of the 2007-08 global financial crisis. ## Solution * Train a machine learning model to automate the prediction of corporate credit ratings. * Compare vendor ratings with predicted ratings to identify discrepancies. * Present this information in a cross-platform application for RBC’s traders and clients. ## Data Data obtained from RBC Capital Markets consists of 20 features recorded for 27 companies at multiple points in time for a total of 524 samples. Available at <https://github.com/em3057/RBC_CM> ## Analysis We took two approaches to analyzing the data: a supervised approach to predict corporate credit ratings and an unsupervised approach to try to cluster companies into scoring groups. ## Product We present a cross-platform application built using Ionic that works with Android, iOS, and PCs. Our platform allows users to view their investments, our predicted credit rating for each company, a vendor rating for each company, and visual cues to outline discrepancies. They can buy and sell stock through our app, while also exploring other companies they would potentially be interested in investing in.
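The write-up does not say which estimator Fujifusion's supervised approach used, so the Python sketch below only shows one plausible setup on the same shape of data (524 samples by 20 features) with randomly generated stand-in values, plus the vendor-versus-predicted discrepancy check described in the Solution section. Everything here is illustrative, not the team's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for the RBC dataset: 524 samples x 20 financial features,
# labels are rating buckets (e.g. 0=AAA ... 6=CCC).
rng = np.random.default_rng(0)
X = rng.normal(size=(524, 20))
y = rng.integers(0, 7, size=524)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))

# Flag discrepancies between vendor ratings and model predictions for review
vendor = rng.integers(0, 7, size=len(pred))        # placeholder vendor ratings
discrepancies = np.flatnonzero(vendor != pred)
print("samples needing review:", discrepancies[:10])
```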
## Inspiration One of the biggest problems we have constantly faced when interacting with insurance companies lies in the process of filing and receiving insurance claims. When one of our team members got into a huge traffic accident (luckily everyone was safe), they had to wait over 5 weeks before they got their claim back for their damaged vehicle. After hearing this, we knew we had to pursue a fix for this extremely lengthy claim process. ## What it does Autosurance is meant to integrate into an auto insurance firm's claim process. Typically, there are three very time-consuming steps in filing for and receiving a claim. This process is automated effortlessly with Autosurance, where you can file a claim, have it verified, and get your money back way faster. ## How we built it Our machine learning solutions were created solely using AWS SageMaker, which provided a really convenient way to train and create an endpoint for our two models: one for crash verification (image classification) and the other for cost analysis (regression). These endpoints were linked to our software's backend using AWS Lambda, an extremely convenient gateway connecting AWS and our website. Our CRUD operations run on a Flask server which acts as an intermediary between our AWS S3 buckets and our ReactJS front-end. ## Challenges we ran into We faced a lot of problems setting up AWS SageMaker and getting Lambda to work, as all of us were extremely new to AWS. However, with help from the awesome mentors, we managed to get all the machine learning to work with our back-end. Since our project has such a diverse stack, consisting of AWS (SageMaker, Lambda, S3), Flask, and ReactJS, it was also quite a challenge to integrate all of these components and get them to work with each other. ## Accomplishments that we're proud of We're really happy that we ~~managed to get decent sleep~~ were able to create an interesting solution to such a far-reaching problem and had *tons* of fun. ## What we learned We learned tons about using AWS, and are really happy that we were able to make something useful with it. All of us also got some great first-hand experience in developing a full tech stack for a functioning app. ## What's next for Autosurance We want to have Autosurance integrate into a current insurance platform's web or mobile service, to be able to perform its intended use: to make filing and receiving claims really frickin' fast.
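Since the write-up describes calling the two SageMaker endpoints from Lambda without showing the glue code, here is a minimal boto3 sketch of that pattern. The endpoint names, payload formats, and event fields are assumptions for illustration, not Autosurance's real configuration.

```python
import base64
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

def lambda_handler(event, context):
    """Hypothetical Lambda: forward an uploaded crash photo to the image-classification
    endpoint, then the claim features to the cost-regression endpoint."""
    image_bytes = base64.b64decode(event["image_b64"])

    verify = runtime.invoke_endpoint(
        EndpointName="crash-verification",          # assumed endpoint name
        ContentType="application/x-image",
        Body=image_bytes,
    )
    class_probs = json.loads(verify["Body"].read())  # e.g. [0.02, 0.98]

    estimate = runtime.invoke_endpoint(
        EndpointName="cost-analysis",                # assumed endpoint name
        ContentType="text/csv",
        Body=",".join(str(v) for v in event["claim_features"]),
    )
    cost = float(estimate["Body"].read())

    return {"crash_probability": max(class_probs), "estimated_cost": cost}
```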
## Inspiration As students, we realized that insurance is a topic that is often seen as unapproachable and difficult to learn about. Getting insurance and finding the right policy for a certain situation can seem scary and hard to understand, so we wanted to build a platform where a user could punch in the appropriate information and find the most suitable options for them. ## What it does **1.** Glass Wings creates an equal platform where anyone can share information about what kind of insurance they have bought or encountered based on the environmental factors of their property **2.** Glass Wings can predict the type and cost of insurance a user can expect based on the property they are trying to buy. **3.** Glass Wings uses real-time data from actual users, raising awareness about insurance for individuals while simultaneously calculating insurance quickly and easily. ## How we built it We built this platform with Python Django, then utilised AWS in order to train our model to predict the right insurance based on our crowd-sourced data. Not only is this trustworthy because it is based on real-time user-verified data, but an individual can get a sense of how much everyone else is paying so that they don't feel they are being ripped off by a company. ## Challenges we ran into AWS SageMaker and ML are not easy topics to learn overnight. Using new technologies and a new concept was a huge learning curve, which made it a challenge for us to build the product we envisioned. ## Accomplishments that we're proud of We are tackling real-life issues. The environment is a hot topic right now because more and more people are becoming aware of climate change and the circumstances we are living in, and we believe that we are hopping on the right trends and tackling the appropriate issues. ## What we learned The team learned a lot about insurance. Especially as students in pure tech, we weren't too aware of the finance and insurance industry. We realized that these are real-life problems that everyone faces (we will too eventually!), so we understood that this is a problem that everyone should be more aware of. Not only this, we got to learn a good number of new technologies such as Django and also ML techniques with AWS. ## What's next for Glass Wings Improve our ML model. Although we did train our set with some mock data, we would love to crowd-source more data for more accurate and interesting information.
## Inspiration We wanted to reduce the global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for environmental activism. We were inspired by the acts of 17-year-old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, combined with our technical knowledge, we created Recycle.space. ## What it does Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste items simply by holding them up to a camera. This application will be especially useful once permanent fixtures are erected in malls, markets, and large public locations. ## How we built it Using a Flask-based backend to connect to the Google Vision API, we captured images and categorized which waste category each item belongs to. This was visualized using Reactstrap. ## Challenges I ran into * Deployment * Categorization of food items using the Google API * Setting up a dev environment for a brand-new laptop * Selecting an appropriate backend framework * Parsing image files using React * UI design using Reactstrap ## Accomplishments that I'm proud of * WE MADE IT! We are thrilled to have created such an incredible app that makes people's lives easier while helping improve the global environment. ## What I learned * UI is difficult * Picking a good tech stack is important * Good version control practices are crucial ## What's next for Recycle.space Deploying a scalable and finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
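To make the Flask-plus-Vision flow above concrete, here is a small Python sketch that sends an image to the Google Cloud Vision label-detection endpoint and maps the returned labels to a waste stream. The keyword-to-category mapping is a simplified assumption; the team's real categorization logic is not described in detail.

```python
from google.cloud import vision

# Simplified, assumed mapping from Vision API labels to waste streams.
CATEGORY_KEYWORDS = {
    "recycling": {"bottle", "plastic", "tin can", "paper", "cardboard", "glass"},
    "compost": {"food", "fruit", "vegetable", "banana", "coffee"},
    "landfill": {"styrofoam", "chip bag", "wrapper"},
}

def classify_waste(image_bytes: bytes) -> str:
    """Return the waste category suggested by the image's Vision API labels."""
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    labels = {label.description.lower() for label in response.label_annotations}
    for category, keywords in CATEGORY_KEYWORDS.items():
        if labels & keywords:
            return category
    return "landfill"   # default bin when nothing matches

with open("item.jpg", "rb") as f:
    print(classify_waste(f.read()))
```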
## Inspiration Learning a new instrument is hard. Inspired by games like Guitar Hero, we wanted to make a fun, interactive music experience but also have it translate to actually learning a new instrument. We chose the violin because most of our team members had never touched a violin prior to this hackathon. Learning the violin is also particularly difficult because there are no frets, such as those on a guitar, to help guide finger placement. ## What it does Fretless is a modular attachment that can be placed onto any instrument. Users can upload any MIDI file through our GUI. The file is converted to music numbers and sent to the Arduino, which then lights up LEDs at locations corresponding to where the user needs to press down on the string. ## How we built it Fretless is composed of software and hardware components. We used a Python MIDI library to convert MIDI files into music numbers readable by the Arduino. Then, we wrote an Arduino script to match the music numbers to the corresponding lights. Because we were limited by the space on the violin's fingerboard, we could not put four rows of LEDs (one for each string). Thus, we implemented logic to color-code the lights to indicate which string to press. ## Challenges we ran into One of the challenges we faced was that only one member on our team knew how to play the violin. Thus, the rest of the team was essentially learning how to play the violin and coding the functionalities and configuring the electronics of Fretless at the same time. Another challenge we ran into was the lack of hardware available. In particular, we weren't able to check out as many LEDs as we needed. We also needed some components, like a female DC power adapter, that were not present at the hardware booth. And so, we had limited resources and had to make do with what we had. ## Accomplishments that we're proud of We're really happy that we were able to create a working prototype together as a team. Some of the members on the team are also really proud of the fact that they are now able to play Ode to Joy on the violin! ## What we learned Do not crimp lights too hard. Things are always harder than they seem to be. Ode to Joy on the violin :) ## What's next for Fretless We can make the LEDs smaller and less intrusive on the violin, ideally an LED pad that covers the entire fingerboard. Also, we would like to expand the software to include more instruments, such as cello, bass, guitar, and pipa. Finally, we would like to incorporate a converter from PDF sheet music to MIDI files so that people can learn to play a wider range of songs.
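The write-up says a Python MIDI library converts MIDI files into "music numbers" for the Arduino but does not name the library or the mapping, so the sketch below uses mido (our assumption) and maps each note to a violin string plus a semitone offset, which the Arduino side could then translate into colour-coded LED positions.

```python
import mido  # one of several Python MIDI libraries; the write-up does not name the one used

# Open-string MIDI numbers for a violin: G3, D4, A4, E5
OPEN_STRINGS = {"G": 55, "D": 62, "A": 69, "E": 76}

def midi_to_lights(path):
    """Convert note_on events into (string, finger_position) pairs.

    finger_position is the number of semitones above the open string,
    with 0 meaning the open string itself.
    """
    events = []
    for msg in mido.MidiFile(path):
        if msg.type == "note_on" and msg.velocity > 0:
            if msg.note < min(OPEN_STRINGS.values()):
                continue                      # below the violin's range, skip
            # choose the highest open string at or below the note
            string, open_note = max(
                ((s, n) for s, n in OPEN_STRINGS.items() if n <= msg.note),
                key=lambda sn: sn[1],
            )
            events.append((string, msg.note - open_note))
    return events

print(midi_to_lights("ode_to_joy.mid"))   # e.g. [('A', 2), ('A', 2), ('A', 4), ...]
```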
## 💡 Inspiration 💡 Have you ever wished you could play the piano perfectly? Well, instead of playing yourself, why not get Ludwig to play it for you? Regardless of your ability to read sheet music, just upload it to Ludwig and he'll scan, analyze, and play the entire piece within the span of a few seconds! Sometimes, you just want someone to play the piano for you, so we aimed to make a robot that could be your little personal piano player! This project allows us to bring music to places like elderly homes, where live performances can uplift residents who may not have frequent access to musicians. We were excited to combine computer vision, MIDI parsing, and robotics to create something tangible that shows how technology can open new doors. Ultimately, our project makes music more inclusive and brings people together through shared experiences. ## ❓What it does ❓ Ludwig is your music prodigy. Ludwig can read any sheet music that you upload to him, convert it to a MIDI file, convert that to playable notes on the piano scale, then play each of those notes on the piano with its fingers! You can upload any kind of sheet music and see the music come to life! ## ⚙️ How we built it ⚙️ For this project, we leveraged OpenCV for computer vision to read the sheet music. The sheet reading goes through a process of image filtering, converting the image to binary, classifying the characters, identifying the notes, then exporting them as a MIDI file. We then have a server running for transferring the file over to Ludwig's brain via SSH. Using the Raspberry Pi, we leveraged multiple servo motors with a servo module to simultaneously move multiple fingers for Ludwig. On the Raspberry Pi, we developed functions, key mappers, and note-mapping systems that allow Ludwig to play the piano effectively. ## Challenges we ran into ⚔️ We hit a few bumps in the road along the way. Some major ones included transferring files over SSH, as well as making fingers strong enough to withstand the torque of pressing the piano keys. It was also fairly difficult trying to figure out the OpenCV pipeline for reading the sheet music. We had a model that was fairly slow at reading and converting the music notes; however, we were able to learn from the mentors at Hack the North how to speed it up and make it more efficient. ## Accomplishments that we're proud of 🏆 * Got a working robot to read and play piano music! * File transfer working via SSH * Conversion from MIDI to key presses mapped to fingers * Piano melody-playing abilities! ## What we learned 📚 * Working with the Raspberry Pi 3 and its libraries for servo motors and additional components * Working with OpenCV and fine-tuning models for reading sheet music * SSH protocols and general networking concepts for transferring files * Parsing MIDI files into useful data through some really cool Python libraries ## What's next for Ludwig 🤔 * MORE OCTAVES! We might add some sort of DC motor with a gearbox, essentially a conveyor belt, which can enable the motors to move up the piano keyboard to allow for more octaves. * Improved photo recognition for reading accents and BPM * Realistic fingers via 3D printing
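The write-up mentions transferring the generated MIDI file to the Raspberry Pi over SSH as one of the trickier steps, but not how. The sketch below shows one way to do it in Python with paramiko; the library choice, hostname, credentials, and remote paths are all assumptions rather than Ludwig's actual setup.

```python
import paramiko  # assumed library; the write-up only says "SSH"

def send_midi_to_ludwig(local_path, host="ludwig.local", user="pi", password="raspberry"):
    """Copy the generated MIDI file to the Raspberry Pi and kick off playback.

    Hostname, credentials, and remote paths here are placeholders.
    """
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)

    sftp = client.open_sftp()
    sftp.put(local_path, "/home/pi/ludwig/queue/next_song.mid")
    sftp.close()

    # tell the player process on the Pi to start on the new file
    client.exec_command("python3 /home/pi/ludwig/play.py /home/pi/ludwig/queue/next_song.mid")
    client.close()

send_midi_to_ludwig("sheet_music_output.mid")
```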
## Inspiration Huddle draws inspiration from the desire to simplify and enhance social gatherings. The idea comes from the need to effortlessly plan and capture spontaneous moments with friends while addressing the challenges of coordinating hangouts. ## What it does Huddle streamlines planning hangouts by allowing users to create instant gatherings, send invites to friends, and track live locations (ETA). It features a collaborative photo gallery to capture and share moments and built-in expense-splitting functionality for added convenience. ## How we built it We used Flutter to develop the Android application and Firebase as the database and backend, including OAuth, GCP Storage, Google Maps API, etc. ## Challenges we ran into During the development of Huddle, challenges were encountered in optimizing real-time location tracking, integrating the collaborative gallery, and fine-tuning the expense-splitting mechanism. Overcoming these challenges required innovative problem-solving and collaboration among team members. ## Accomplishments that we're proud of Huddle's team takes pride in successfully creating an app that simplifies the planning and execution of spontaneous hangouts. The achievement lies in delivering a user-friendly, feature-rich experience that aligns with the initial vision. ## What we learned The development of Huddle provided valuable insights into optimizing location-based features, implementing collaborative photo-sharing functionalities, and addressing the complexities of expense-splitting in real-time scenarios. The team gained hands-on experience in overcoming technical challenges and enhancing user interactions. ## What's next for Huddle Looking ahead, Huddle aims to expand its feature set by incorporating user feedback, enhancing security measures for location sharing, and exploring additional social features. The roadmap includes refining the user interface, optimizing performance, and potentially integrating with other popular platforms to enhance the overall user experience further.
## Inspiration We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool. ## What it does AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures. The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch. ## How we built it In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between body parts to map them to a dance move. Although this was successful for a few gestures, more complex gestures like the "shoot" were not well suited to this method. We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model achieves 98% accuracy on the test data set. We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done using dlib and OpenCV to detect facial features and map a static image over these features. ## Challenges we ran into We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time. It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in. ## Accomplishments that we're proud of It was one of our first experiences training an ML model for image recognition, and it's a lot more accurate than we had even expected. ## What we learned All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new! ## What's next for AirTunes The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
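The write-up says a convolutional neural network was trained in TensorFlow on roughly 1000 samples per gesture across 10 classes, but gives no architecture. Here is a minimal Keras sketch of that kind of classifier; the input size, layer sizes, and training settings are our guesses rather than AirTunes' actual model.

```python
import tensorflow as tf

# Small CNN for 10 dance-move classes; all hyperparameters below are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),            # grayscale silhouette frames
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),      # one output per dance move
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (N, 64, 64, 1) float32 frames, y_train: (N,) integer gesture labels
# model.fit(x_train, y_train, validation_split=0.1, epochs=15, batch_size=32)
```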
## Inspiration How much carbon does farming really sequester? This is one question that inspired us to create this solution. With governments around the world showing rising interest in taxing farmers for their emissions, we wanted to find a way to calculate them. ## What it does A drone with a variety of sensors measures the CO2, CH4, and albedo of the land underneath it to estimate the actual carbon offset. The data collected by the drone is sent to our online server, where it is fetched by MATLAB to calculate the carbon offset. The drone also has sensors to measure water quality. In the future, the drone will also have soil moisture detection capability using microwaves, similar to remote sensing satellites. With the offset we are able to calculate the carbon credits, which can then be traded over the Pi platform. By using blockchain we enable: 1) no double counting of credits, 2) wider participation from around the world (Pi already has over 35 million users), and 3) assurance that only algorithmically calculated credits exist. ## How we built it Using Arduino, Pi, and MATLAB. ## Challenges we ran into Pi was a tough challenge to implement. Mounting sensors on the drone was another big challenge. ## Accomplishments that we're proud of We were able to get all sensors to work, collect data in real time, and run MATLAB analysis on it. ## What we learned ## What's next for We Are Sus Farms
### [GitHub](https://github.com/gablavoiie/MovieRecommender.git) ### [Slideshow](https://docs.google.com/presentation/d/1NSfLr-kHyz2wyOUcPxQO0ypUPL2kQKgIdYUXQ3zuD7g/edit?usp=sharing) We are tired of scrolling through Netflix aimlessly for hours on end hoping to come across a movie that interests us. While several movie recommendation systems exist out there, they are largely based on previously-collected data and are not equipped to process real-time parameters like current mood. We set out to create a comprehensive content-based movie recommender that engages in an interactive conversation with the user to output the optimal movie suggestion for them. The project began with a dataset of [IMDB’s top 1,000 movies](https://www.kaggle.com/datasets/omarhanyy/imdb-top-1000?resource=download) from 1920–2019 that contained the movie duration, genre, cast, etc., for each entry. [Another dataset](https://www.kaggle.com/datasets/cryptexcode/mpst-movie-plot-synopses-with-tags) containing tags from a list of 69 words for over 14,000 movies, such as “feel good”, “gut-wrenching”, and “psychedelic”, was combined with the first dataset. A four-part feature vector was then created for each of the 1,000 movies. The first row contains a value from 0–9 that represents the decade it was produced; the second row contains a value from 0–2 that represents whether the movie is short (<60 min), medium (60 min <= length <= 120 min), or long (>120 min); the third row contains a one-hot encoding of twenty 0s and 1s based on whether the movie belongs to a certain genre; and the final row contains a one-hot encoding of sixty-nine 0s and 1s that correspond to tags. The web app was built using Python Flask. Most of our team had never used Flask before, so it was a steep learning curve. JavaScript and HTML were used to create our front end, an interactive chatbot that prompts and responds to user input. We used the OpenAI library and ChatGPT to generate unique and engaging responses to the user’s choices. A profanity filter is used to refine ChatGPT outputs to limit controversial or offensive speech. The user’s inputs are classified as either constraints or preferences. The selection process begins with the constraints, such as maturity rating, director preference, and cast preference, narrowing down the dataset. The IMDbPY package was used to search for cast members, directors, etc., based on the user's input (e.g., "The Rock" will be mapped to "Dwayne Johnson"). BreadBot then offers the user the opportunity to enter open-ended text to describe anything they wish to express (e.g., “I want something with a lot of explosions”, “Give me something gut-wrenching and intimate”, “I like long montages”). We use the natural language processing (NLP) tools of the Cohere API to analyze the genres and tags that are best expressed in the user’s text. The classifier tool trained a model on 40 examples to classify each of the 20 standard IMDB genres. The embedding tool was used to calculate embeddings for the 69 subjective tags and the textbox input, and a cosine similarity is used to determine the 4 best tags. Asking for “explosions”, for example, will map directly to action movies thanks to the classification examples used to train the Cohere model. Other subjective inputs, like genre and duration preferences, are then used in combination with the NLP results to create a unique input vector for the user’s submission that is identically structured to the movie-specific vectors created earlier.
A cosine similarity is calculated between each of the movies and the input vector, and the film with the highest similarity is returned as the output to the front end. We used Beautiful Soup to web-scrape the poster of the output movie from a Google Images query so that it could be displayed to the user. Using the Cohere API posed a challenge as it was our first time using external APIs for machine learning. Learning how to deal with errors and bad requests took time. Moreover, managing communication between the front end and back end, especially when dealing with large amounts of variable user input and data, was challenging. We used Ajax in the development of the front end to make this process easier. We are proud to have pulled off what we thought was too ambitious of an idea at the start. It is really cool how we were able to integrate so many different tools to create this project as a whole. We learned to take advantage of the tools out there to facilitate the process of creating a large-scale project. Given the time constraints for this hackathon project, there are several improvements to our project that we wish to implement in the future. One of the issues with our algorithm is its run time, because we iterate excessively through the dataset when calculating similarities for each movie. Although the similarity computation is essential, there is perhaps a more optimized method of delivering the movie suggestion. In addition, we love our sleek UI, but given more time, we would like to integrate more graphics, designs, colours, and perhaps an audio feature into the chatbot. Furthermore, it would be great to extend this project to allow groups of people to choose an optimal movie for them by combining individual preferences. Lastly, we would want to extend our dataset beyond just the 1,000 most popular movies on IMDB.
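Based on the vector layout described above (decade, length code, 20 genre bits, 69 tag bits), here is a small NumPy sketch of building those vectors and ranking movies by cosine similarity. The helper names and toy data are ours; the actual project's code may differ.

```python
import numpy as np

def build_vector(decade, length_code, genre_onehot, tag_onehot):
    """Concatenate the four components described above into one flat vector."""
    return np.concatenate(([decade], [length_code], genre_onehot, tag_onehot)).astype(float)

def recommend(user_vec, movie_matrix, titles, top_k=1):
    """Vectorized cosine similarity between the user's vector and every movie row."""
    norms = np.linalg.norm(movie_matrix, axis=1) * np.linalg.norm(user_vec)
    sims = movie_matrix @ user_vec / np.where(norms == 0, 1, norms)
    best = np.argsort(sims)[::-1][:top_k]
    return [(titles[i], float(sims[i])) for i in best]

# Toy example: 2 movies with 20 genre bits and 69 tag bits each
genres = np.zeros(20); genres[3] = 1          # e.g. "Action"
tags = np.zeros(69);   tags[10] = 1           # e.g. "suspenseful"
movies = np.stack([build_vector(9, 1, genres, tags),
                   build_vector(5, 2, np.zeros(20), np.zeros(69))])
user = build_vector(9, 1, genres, tags)
print(recommend(user, movies, ["Movie A", "Movie B"]))
```

Incidentally, computing all similarities in a single matrix-vector product, as sketched here, is one plausible way to address the run-time concern the team mentions about iterating through the dataset movie by movie.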
## Inspiration Energy is the foundation for everyday living. Productivity from the workplace to lifestyle—sleep, nutrition, fitness, social interactions—is dependent on sufficient energy levels for each activity [1]. Various generalized interventions have been proposed to address energy levels, but no current method takes a personal approach that uses daily schedules and habits as determinants of energy. ## What it does Boost AI is an iOS application that uses machine learning to predict energy levels based on daily habits. Simple and user-specific questions on sleep schedule, diet, fitness, social interaction, and current energy level are used as determinants to predict future energy level. Notifications give the user personalized recommendations to increase energy throughout the day. Boost AI allows you to visualize your energy trends over time, including predictions for personalized intervention based on your own lifestyle. ## How we built it We used MATLAB and TensorFlow for our machine learning framework. The current backend utilizes a support vector machine that is trained on simulated data, based on a subject's "typical" week, with relevant data augmentation. The linear support vector machine is continually trained with each new user input, and each prediction is based on a moving window as well as historical daily trends. We have further trained an artificial neural network to make these same predictions, using TensorFlow with a Keras wrapper. In the future, this neural network model will allow an individual to get accurate predictions from their first use by applying a network trained on a large and diverse set of individuals, then continually fine-tuning their personal network to give the best predictions and the most accurate trends for them. We used Sketch to visualize our iOS application prototype. ## Challenges we ran into Although we come from the healthcare field, we were limited in domain knowledge of human energy and productivity. We did research on each parameter that is a determinant of energy levels. ## Accomplishments that we're proud of Boost AI is strongly translatable to improving energy in everyday life. We're proud of the difference it can make to the everyday lives of our users. ## What's next for Boost AI We aim to improve our prototype by training our framework with a real-world dataset. We would like to explore two main applications: **1) Workspace.** Boost AI can be optimized for the workplace by implementing the application into workspace-specific software. We predict that Boost AI will "boost" energy with specific individual interventions for improved productivity and output. **2) Healthcare.** Boost AI can use health-based data such as biometric markers and research-backed questionnaires to predict energy. The data and trends can be used for clinically driven interventions and improvements, as well as personal use. ## References: [1] Arnetz, BB., Broadbridge, CL., Ghosh, S. (2014) Longitudinal determinants of energy levels in knowledge workers. Journal of Occupational and Environmental Medicine.
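The description above (a linear SVM continually retrained on each new check-in, with predictions based on a moving window) can be sketched in a few lines of scikit-learn. The feature set, label scale, and window size below are assumptions for illustration, not Boost AI's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

class EnergyPredictor:
    """Minimal sketch of the described setup: a linear SVM refit on a moving
    window of the user's most recent daily check-ins."""

    FEATURES = ["sleep_hours", "diet_score", "exercise_min", "social_hours"]  # assumed

    def __init__(self, window_days=30):
        self.window_days = window_days
        self.history_X, self.history_y = [], []
        self.model = SVC(kernel="linear")

    def add_day(self, features, energy_level):
        """Record a day's answers and the self-reported energy level (e.g. 1-5)."""
        self.history_X.append(features)
        self.history_y.append(energy_level)
        X = np.array(self.history_X[-self.window_days:])
        y = np.array(self.history_y[-self.window_days:])
        if len(set(y)) > 1:                  # SVC needs at least two distinct classes
            self.model.fit(X, y)

    def predict(self, features):
        return int(self.model.predict([features])[0])

p = EnergyPredictor()
p.add_day([8, 4, 30, 2], 4)
p.add_day([5, 2, 0, 0], 2)
print(p.predict([7, 3, 20, 1]))
```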
## Inspiration Research shows that most people face mental or physical health problems due to an unhealthy daily diet or symptoms ignored at an early stage. This app helps you track your diet and your symptoms daily and provides recommendations for an overall healthier diet. We were inspired by MyFitnessPal's ability to access the nutrition information of foods at home, in restaurants, and at the grocery store. Diet is extremely important to the body's wellness, but something that is hard for any one person to narrow down is: What foods should I eat to feel better? It is a simple question, but actually very hard to answer. We eat so many different things in a day; how do you know what is making a positive impact on your health and what is not? ## What it does Right now, the app is in a pre-alpha phase. It takes a day's intake of carbs, fats, protein, vitamins, and electrolytes as input. It sends this data to a Mage API, and Mage predicts how well the user will feel that day. The Mage AI is based on sample data that is not real-world data, but as the app gets users it will become more accurate. Based on the data set we gathered and the model type, the AI maintains 96.4% accuracy at predicting the wellness of a user on a given day. This is based on 10,000 users over 1 day, or 1 user over 10,000 days, or somewhere in between. The idea is that the AI will be constantly learning as the app gains users and individual users enter more data. ## How we built it We built it in Swift, using Mage.ai for data processing and the API. ## Challenges we ran into Outputting the result in the app after the API returns the final prediction. We had the prediction score displayed in the terminal, but initially we could not display it in the app; we were able to do that after a lot of struggle. All of us made an app and implemented an API for the very first time. ## Accomplishments that we're proud of -- Successfully implementing the API with our app -- Building an app for the very first time -- Creating a model for AI data processing with 96% accuracy ## What we learned -- How to implement an API and how it works -- How to build an iOS app -- Using AI in our application without actually knowing AI in depth ## What's next for NutriCorr -- Adding different categories of symptoms -- Giving the user recommendations on how to change their diet -- Adding a food object to the app so that the user can enter specific foods instead of nutrient details -- Connecting our results to mental-health wellness and recommendations. Research shows that people who generally have higher sugar intake in their diet tend to be more depressed.
## Inspiration Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females. On average, women in Ontario make 88 cents for every dollar a man makes. This is why it is important to encourage women to become more conscious of their spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users. ## What it does Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the financial balancing act of the user. By balancing the scale and achieving their monthly financial goals, users are provided with various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app reinforces good financial behaviour by providing gamified experiences with small incentives. The app will be provided to users free of charge. As with any free service, the anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and spending from our users for participating partners. The customized reward is an opportunity for targeted advertising. ## Persona Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising costs of living make it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards simply for achieving her budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided a simple balancing-scale animation for immediate visual feedback on her financial well-being. The app frequently provided words of encouragement and useful tips to maximize her chance of success. She especially loved how she could set the goals and follow through on her own terms. The personalized reward was sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards. ## How we built it We used: React, Node.js, Firebase, HTML & Figma ## Challenges we ran into We had a number of ideas but struggled to define the scope and topic for the project. * Different design philosophies made it difficult to maintain a consistent and cohesive design. * Sharing resources was another difficulty due to the digital nature of this hackathon * On the development side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it in our app. * Additionally, resolving merge conflicts proved to be more difficult than expected. The time constraint was also a challenge. ## Accomplishments that we're proud of * The use of less familiar technologies, including Firebase and React Hooks * On the design side, it was great to create a complete prototype of the vision of the app.
* As it was some members' first hackathon, the time constraint was a stressor, but with the support of the team they were able to feel more comfortable despite the lack of time ## What we learned * We learned how to meet each other’s needs in a virtual space * The designers learned how to merge design philosophies * How to manage time and work with others who are on different schedules ## What's next for Re:skale Re:skale can be rescaled to include people of all genders and ages. * Closer integration with other financial institutions and credit card providers for better automation and prediction * A physical receipt scanner feature for payments made outside of debit and credit cards ## Try our product This is the link to a prototype app <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1> This is the link to a prototype website <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
## Inspiration We were feeling sad because we had no ideas and got rejected by Alexa when we asked to marry her. So we played around with Alexa for many hours straight and did not feel any better, because Alexa was quite cold and unwelcoming at times. However, we wanted to change this and make Alexa the emotional support that people can rely on. ## What it does Provides the user with a shoulder (or cylinder) to cry on. Alexa will listen to all your life problems and provide you with helpful, motivational, and warm comments that will make you feel all nice and tingly inside. ## How we built it We used JavaScript for everything and used Amazon Alexa's API for natural language understanding and token parsing. In addition, we programmed our interpretation logic in Node.js and hosted it on Microsoft Azure. ## Challenges we ran into * Simulating a therapeutic environment with the best of our limited knowledge and experience. * Designing a language interface that correctly understands user sentiments and provides accurate, appropriate feedback. * Setting up AWS and learning the various APIs, such as AWS, Azure, and Alexa. * Learning how Amazon Echo works and how to create new skills with it. ## Accomplishments that we're proud of It may have significant social impact and help improve the lives of those suffering from negative emotions. It's also our team's first hardware hack! ## What we learned Alexa can be really helpful and fun to interact with. We were fascinated by how cool the natural language processing was. ## What's next for Alexa: My Personal Therapist Perhaps sending therapy logs to clinics or real therapists.
## Inspiration We were inspired to create Alexa Sage after researching the uses of a voice user interface (VUI) and seeing how it could apply to medical contexts. One social problem relevant to one of our team members was elderly care and loneliness, as their grandparents had recently been facing health issues. We identified that there was a need for more scalable forms of health promotion and cognitive decline monitoring among seniors. 15% of Canadians over 65 are in institutional care, and depression rates while in care are 40%, which is much higher than among the general population of seniors. Further, VUIs can be more intuitive for people not used to a smartphone, such as seniors who may be more wary of technology, and have the potential for more organic communication between people and their devices. We also had an interest in wellness, positive psychology, and the medical field, and combined these with our skills in backend programming, data science, and psychology to tackle Alexa Skills. ## What it does Alexa Sage has two primary components: emotional resilience building (a 3 gratitudes exercise) and cognitive acuity monitoring (sentence repetition and analysis). Both exercises are backed by empirical research: the first promotes long-term happiness and resilience against depression, and the second is a standard cognitive test administered as a dementia measure. Users are prompted on a periodic basis to engage in an informal discussion with Alexa, where they are asked to make voice entries in a gratitude journal; the results are given to an R API that analyzes sentiment and dictates Alexa's responses, as well as storing happy memories for later replay. During this process, users are also given a brief test of their mental faculties - confirming their name, and checking if they can remember a brief sentence and repeat it back with proper pronunciation. The results of the gratitude journal analysis and cognitive acuity tests are then stored in a Google spreadsheet, where long-term trends or abrupt shifts in emotional affect are identified via an R API and used to notify a user's primary caregiver through the Twilio API. Should users have particularly negative responses when asked to recount something they're grateful for, Alexa can also offer to call their loved ones for them. ## How we built it The user interface and the bulk of our program's structure come from Voiceflow, a visual coding program for building Alexa skills that can integrate with external APIs. We built R scripts to analyze text inputs on one computer and used the Plumber package to set up an R API to send character strings back and forth from a Google Sheets data storage location. We then used the numerical output from R (a sentiment level based on the syuzhet package) to store the data and compare it against both a baseline of the user's sentiment and an absolute threshold (if very negative), and to make Alexa offer users the option to call loved ones; should they accept, the Twilio API will call or send a text to their loved ones' phones. ## Challenges we ran into This was the first hackathon for three of our four members, so understanding the norms of the event was a big hurdle, and only one of us had much coding experience.
Some highlights of the challenges we faced: * Translating psychological concepts, tests, and exercises into a VUI * Coordinating Voiceflow with a Mongo database, then pivoting to a spreadsheet and R data analysis * Integrating the Twilio and R APIs with Voiceflow * Getting Voiceflow to export to the physical Alexa device * The time constraints of the competition * Narrowing our focus to what is feasible ## What we're proud of & what we learned **Charvi** learned to use Voiceflow and more about data science and R **Anthony** is proud we were able to integrate psychological theory with a social impact focus into a cohesive app **Yang** learned to use the Twilio API **Ethan** learned how to create an R API **All of us** are proud of what we made, and happy that we learned so much in the process. ## What's next for Alexa Sage: Promoting cognitive & mental health for seniors We want to add additional tests and exercises to Alexa, to better build emotional resiliency and monitor the cognitive health of seniors. Some possible ones are: * more cognitive games such as drawing and memory recall * suggesting different activities based on the user's mood and time of day, such as reading a book, calling a friend, or going for a walk * integrating with a user's medical exercises and medication, to prompt them to do these As well, the Alexa Prize is currently underway in the US to develop conversational capabilities with Alexa. During these conversations, we could conduct cognitive assessments and promote positive psychology habits as well. ## Link: <https://docs.google.com/presentation/d/1GrK3_w8w3Fr8feRcucNr2EoXvo0pl_BCWkYn2zdZhGg/edit?usp=sharing>
## Inspiration We wanted to create a new way to interact with the thousands of amazing shops that use Shopify. ![demo](https://res.cloudinary.com/devpost/image/fetch/s--AOJzynCD--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0G1Pdea.jpg) ## What it does Our technology can be implemented inside existing physical stores to help customers get more information about products they are looking at. What is even more interesting is that our concept can be implemented in ad spaces where customers can literally window shop! Just walk in front of an enhanced Shopify ad and voila, you have the product on the seller's store, ready to be ordered right there from wherever you are. ## How we built it WalkThru is an Android app built with the AltBeacon library. Our localisation algorithm allows the application to pull the Shopify page of a specific product when the consumer is in front of it. ![Shopify](https://res.cloudinary.com/devpost/image/fetch/s--Yj3u-mUq--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/biArh6r.jpg) ![Estimote](https://res.cloudinary.com/devpost/image/fetch/s--B-mjoWyJ--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0M85Syt.jpg) ![Altbeacon](https://avatars2.githubusercontent.com/u/8183428?v=3&s=200) ## Challenges we ran into Using the Estimote beacons in a crowded environment has its caveats because of interference problems. ## Accomplishments that we're proud of The localisation of the user is really quick, so we can show a product page as soon as you get in front of it. ![WOW](https://res.cloudinary.com/devpost/image/fetch/s--HVZODc7O--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.giphy.com/xT77XWum9yH7zNkFW0.gif) ## What we learned We learned how to use beacons in Android for localisation. ## What's next for WalkThru WalkThru can be installed in current brick-and-mortar shops as well as ad panels all over town. Our next step would be to create a whole app for Shopify customers which lets them see what shops/ads are near them. We would also want to extend our localisation algorithm to 3D space so we can track exactly where a person is in a store. Some analytics could also be integrated into a Shopify app directly in the store admin page, where a shop owner would be able to see how much time people spend in which parts of their store. Our technology could help store owners increase their sales and optimise their stores.
## Inspiration CredLo was inspired by a teammate's personal story about the various challenges that immigrants face when moving to a new country. The primary challenge among immigrants is restarting their life in a new place, which begins with their inability to obtain credit, as credit scores are not transferable across countries. On top of this, we saw many individuals lack quick access to a lump sum of money at low interest rates, sparking the need for an automated micro-loan system that is diversified, low-risk and easy to use. ## What it does CredLo uses user-inputted data and personal submissions to generate a credit score for the country that the individual is moving to. Additionally, borrowers are able to attain loans quickly at low interest rates, and lenders are able to lend small amounts of money to a large number of people at a level of risk that they choose. ## How we built it We built the backend using Flask/Python to process requests from the lender/borrower as well as for the borrower's onboarding process. We used Capital One's API to make actual transactions between the lender and the lendee. We trained our ML model using sklearn on a dataset we found online. Most of the frontend was built using vanilla HTML/CSS/JS (no wonder it took us ages to build the UI), with a little bit of Vue sprinkled in. The data was stored as a JSON object (with periodic serialization, which to answer your unasked question, yes we eventually intend to use Cloud Firestore for this instead :) ## Challenges we ran into 1. Naming the product and coming up with the tagline was difficult 2. Since none of our teammates are front-end developers, a large chunk of time was spent trying to make our UI look somewhat bearable to vue (expect a few more puns as you read along). Time spent working on the UI could have been spent working on additional features instead. ## Accomplishments that we're proud of 1. As a team with zero front-end developers, we have a passably pretty UI. 2. We are proud that our product attempts to solve a real need posed by many individuals around the globe. We had other ideas that were more technically sophisticated, but we instead decided to work on a product that had a real-world impact and could positively impact lives in society. After speaking to various individuals in our target market who said that they would have greatly appreciated assistance from the CredLo platform when moving countries, we are proud that we developed a product that can be incorporated into society. ## What we learned We learned about the various challenges that different groups in society face and the ways in which we can alleviate their stress and headaches. We also learned to collaborate and work together, as we are a group of students with different backgrounds and skills. ## What's next for CredLo 1. We were restricted by the kind of datasets we had available to use to generate the credit scores. With more time and research, we can improve on the metrics used to come up with an accurate credit score. The eventual goal is to work with banks and other institutions to become a reliable source of information that individuals and institutions can trust. 2. Instead of using user input (which can be faked), we would include verifiable sources of claims such as bank statements, utility bills etc. and extract the necessary data out of them using computer vision. 3. There is currently only an auto-investment mode for lenders. That is, they do not choose who they lend their money to (a toy sketch of such an auto-allocation follows this write-up).
We would like to expand the project to allow investors to choose people they think have a sincere need, adjust their rate of interest down if they so wish to, along with the amount of investment (up or down). Eventually, CredLo would provide lenders the possibility to manually invest their money instead of having it automated. 4. Complete integration with Capital One's APIs to facilitate actual bank transfers. We started working on this but left it unfinished due to technical issues.
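The sklearn step mentioned in "How we built it" isn't spelled out above, so here is only a minimal, hedged sketch of the usual shape of such a scoring model; the CSV path, feature names, label, and choice of logistic regression are all assumptions, not CredLo's actual code.

```python
# Hypothetical sketch of the sklearn step described above; feature names,
# file path, and model choice are assumptions, not CredLo's actual code.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("credit_history.csv")          # assumed dataset layout
features = ["income", "debt_ratio", "years_employed", "past_defaults"]
X, y = df[features], df["repaid"]               # 1 = repaid, 0 = defaulted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Map the predicted repayment probability onto a familiar 300-850 score range.
def to_credit_score(prob_repay: float) -> int:
    return int(300 + prob_repay * 550)
```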
## Team 4 - Members
Henry Wong, Kieran Hansen, Matthew Kompel, Zuhair Siddiqi
## Inspiration
We wanted to do something related to modern technology. Electric cars are new and revolutionary, and we thought it would be perfect to make a website about them. We realize that a lot of people might be confused about the differences and benefits of different models, and we wanted to make something that clears that up.
## What it does
Find Your EV takes in a set of user specifications and uses our own search algorithm to find the most relevant electric vehicle for the user (a rough sketch of this kind of matching logic appears at the end of this writeup). These specifications include price, range, safety, drivetrain, and much more.
## How I built it
We used vanilla JavaScript to create our front-end, linked with our search algorithm in our Python Flask backend. We originally deployed the website using GitHub Pages, but switched to Heroku to support our backend scripts. We also managed to get and set up a custom domain with Domain.com, [FindYourEV.online](http://findyourev.online/) (side note: the domain was actually really creative and easy to remember!). Lastly, we built the project on GitHub @ <https://github.com/henryinqz/FindYourEV> (direct link to website is <http://findyourev.herokuapp.com/client/index.html>)
## Challenges I ran into
Our entire team had no prior experience with web development. Over the past 36 hours, we were able to gain valuable experience creating our very own full-stack program. A challenge that stumped us was running the backend code on Heroku, and unfortunately, we were unable to deploy the backend, so the search function only works locally.
## Accomplishments that I'm proud of
I am proud of my group for being able to manifest our idea into this website in the short time frame provided. While we certainly did not create the most polished program, we had fun making it and that's what matters! 😎
## What I learned
I learned a lot about all aspects of web development, such as creating front-end UIs with HTML/CSS/JS, creating backend APIs with Python, linking the two together, and also web hosting.
## What's next for Find Your EV
Find Your EV is an interesting concept that we see as a valuable utility in today's world. With electric vehicles quickly rising in the automotive industry, it can be challenging for buyers to find a suitable EV. Thus, there is a market that Find Your EV could reach after being polished up.
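The search algorithm itself is not described in detail above, so the following is only a guessed-at sketch of how a spec-based matcher like this is commonly written: a Flask route that scores each vehicle by weighted closeness to the user's preferences. The EV fields, weights, and scoring formula are hypothetical, not Find Your EV's actual algorithm.

```python
# Hypothetical sketch of a spec-matching endpoint; the EV fields, weights,
# and scoring formula are assumptions, not Find Your EV's actual algorithm.
from flask import Flask, request, jsonify

app = Flask(__name__)

EVS = [  # in practice this would come from a dataset of EV specs
    {"name": "EV Alpha", "price": 45000, "range_km": 420, "safety": 5},
    {"name": "EV Beta",  "price": 38000, "range_km": 350, "safety": 4},
]

def score(ev, prefs):
    # Smaller price gap and higher range/safety push the score up.
    s = 0.0
    s -= abs(ev["price"] - prefs["budget"]) / 1000.0
    s += ev["range_km"] / 50.0
    s += ev["safety"] * 2.0
    return s

@app.route("/search", methods=["POST"])
def search():
    prefs = request.get_json()            # e.g. {"budget": 40000}
    ranked = sorted(EVS, key=lambda ev: score(ev, prefs), reverse=True)
    return jsonify(ranked[:5])            # top matches
```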
## Inspiration
Since its emergence in December of 2019, the COVID-19 pandemic continues to change the lives of the global population. In Canada, the daily case rate has increased eightfold since last October. Financial security and physical health have become the top priorities of people around the world. We aim to protect both and provide solutions for the community during the challenging era of COVID-19.
## What it does
The Protect-21 application sends a friendly reminder notification to the user to wear their mask every time they leave or depart from key locations (such as their home and place of business); a small sketch of this location-trigger logic appears at the end of this writeup. Our application will also encourage the proper wearing of the mask and verify it to minimize the risk of exposure. This app promotes the safe practice of wearing a mask during this pandemic and helps the user avoid preventable costs, such as buying an overpriced, single-use mask, or paying a fine for violating a mask mandate, such as those seen in British Columbia aboard transit vehicles. To further encourage hygiene, a key factor in preventing the contraction of the coronavirus, the user receives a friendly reminder notification once they return to a key location to wash their hands for 20 seconds.
## How we built it
Technologies used:
* APIs: Google Login, Firebase, Maps, Geolocation, Trusted Web Activity, Google Assistant, Teachable Machine
* Tools: React, HTML, CSS, JavaScript, Ionic, TensorFlow.js
* Libraries: p5 and ml5 (React)
## Challenges we ran into
Serving static HTML on React, and refining the UI to maintain the accessibility of the app, such as ensuring that appearance is uniform across all platforms.
## Accomplishments that we're proud of
The usability, accessibility (web, Android, iOS), and cost-effectiveness (no expensive technologies or proprietary hardware used) of the app.
## What we learned
JavaScript, how to convert a React app into a Progressive Web App, and how to implement Google Assistant and Alexa.
## What's next for Protect-21
Implementing functionality for saving multiple key locations.
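The departure reminder described above boils down to a geofence check: compare each new GPS fix against the saved key locations and fire a notification on the inside-to-outside transition. The app itself is built with React/Ionic, so the snippet below is only a language-agnostic illustration written in Python; the 100 m radius and the notify hook are assumptions.

```python
# Illustrative geofence-departure logic (a Python stand-in for the app's JS);
# the 100 m radius and notify() hook are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

RADIUS_M = 100
was_inside = {}   # key location name -> True/False from the previous GPS fix

def on_gps_fix(lat, lon, key_locations, notify):
    for name, (klat, klon) in key_locations.items():
        inside = haversine_m(lat, lon, klat, klon) <= RADIUS_M
        if was_inside.get(name, True) and not inside:
            notify(f"Leaving {name}: remember to put on your mask!")
        elif not was_inside.get(name, True) and inside:
            notify(f"Back at {name}: wash your hands for 20 seconds.")
        was_inside[name] = inside
```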
## Inspiration
Right now, it is more important than ever to support local businesses. When you support local businesses, you help hard work and creativity thrive. Consumers today face challenges accessing many local businesses, and we wanted to help alleviate that.
## What it does
Locl is an ecommerce platform that allows you to purchase from different local businesses and have your order delivered right to your door. It utilizes location-based services to help you discover all the products around you and get them brought right to you.
## How I built it
Locl is a Vue web application that uses Firebase as a database and backend.
## Challenges I ran into
Messing up the domain configuration, learning all-new technologies and skills, and building an ecommerce platform in 24 hours were all challenges that we are glad to have faced.
## Accomplishments that I'm proud of
## What I learned
We learned new skills from designing and making Locl, from Figma and UX to content hosting and custom domain configuration.
## What's next for Locl
We will move on to signing up businesses and building out a local-first delivery network for Vancouver.
## Inspiration Imagine this: You’re overwhelmed, scrolling through countless LinkedIn profiles, trying to figure out which clubs or activities will help you land your dream job. It feels like searching for a needle in a haystack! Here’s where UJourney steps in: We simplify your career planning by providing personalized paths tailored specifically to your goals. UJourney uses LinkedIn data from professionals in your dream job to recommend the exact clubs to join, events to attend, skills to acquire, and courses to take at your university. Our mission is to transform career exploration into a clear, actionable journey from aspiration to achievement. ## What it does UJourney is like having a career GPS with a personality. Tell it your dream job, and it will instantly scan the LinkedIn career cosmos to reveal the paths others have taken. No more endless profile scrolling! Instead, you get a curated list of personalized steps—like joining that robotics club or snagging that perfect internship—so you can be the most prepared candidate out there. With UJourney, the path to your dream job isn’t just a distant vision; it’s a series of clear, actionable steps right at your fingertips. ## How we built it The UJourney project is built on three core components: 1. Gathering Personal Information: We start by seamlessly integrating LinkedIn authorization to collect essential details like name and email. This allows users to create and manage their profiles in our system. For secure login and sign-up, we leveraged Auth0, ensuring a smooth and safe user experience. 2. Filtering LinkedIn Profiles: Next, we set up a MongoDB database by scraping LinkedIn profiles, capturing a wealth of career data. Using Python, we filtered this data based on keywords related to company names and job roles. This process helps us pinpoint relevant profiles and extract meaningful insights. 3. Curating Optimal Career Paths: Our AI model takes it from here. By feeding the filtered data and user information into an advanced model via the Gemini API, we generate personalized career paths, complete with timelines and actionable recommendations. The model outputs these insights in a structured JSON format, which we then translate into an intuitive, user-friendly UI design. ## Challenges we ran into Problem: LinkedIn Scraping Restrictions. Our initial plan was to directly scrape LinkedIn profiles based on company names and job roles to feed data into our AI model. However, LinkedIn’s policies prevented us from scraping directly from their platform. We turned to a third-party LinkedIn scraper, but this tool had significant limitations, including a restriction of only 10 profiles per company and no API for automation. While we utilized automation tools like Zapier and HubSpot CRM to streamline part of our workflow, we ultimately faced a significant roadblock. Despite these challenges, we adapted our approach to continue progressing with the project. Solution: Manual Database Creation. To work around these limitations, we manually built a database focused on the top five most commonly searched companies and job roles. While this approach allowed us to gather essential data, it also meant that our database was initially limited in scope. This manual effort was crucial for ensuring we had enough data to effectively train our AI model and provide valuable recommendations. Despite these hurdles, we adapted our approach to ensure UJourney could deliver accurate and practical career insights. ## Accomplishments that we're proud of 1. 
Rapid Development: We successfully developed and launched UJourney in a remarkably short period of time. Despite the tight timeline, we managed to pull everything together efficiently and effectively. 2. Making the Most of Free Tools: Working with limited resources and relying on free versions of various software, we still managed to create a fully functional version of UJourney. Our resourcefulness allowed us to overcome budget constraints and still deliver a high-quality product. 3. University-Specific Career Plans: One of our standout achievements is the app’s ability to provide personalized career plans tailored to specific universities. By focusing on actionable steps relevant to users' educational contexts, UJourney offers unique value that addresses individual career planning needs with precision. ## What we learned 1. Adaptability is Key: Our journey taught us that flexibility is crucial in overcoming obstacles. When faced with limitations like LinkedIn's scraping restrictions, we had to quickly pivot our approach. This experience reinforced the importance of adapting to challenges and finding creative solutions to keep moving forward. 2. Data Quality Over Quantity: We learned that the quality of data is far more important than sheer volume. By focusing on the most commonly searched companies and job roles, we ensured that our AI model could provide relevant and actionable insights, even with a limited dataset. This underscored the value of precision and relevance in data-driven projects. 3. Resourcefulness Drives Innovation: Working within constraints, such as using free software and limited resources, highlighted our team’s ability to innovate under pressure. We discovered that resourcefulness can turn limitations into opportunities for creative problem-solving, pushing us to explore new tools and methods. 4. User-Centric Design Matters: Our focus on creating university-specific career plans taught us that understanding and addressing user needs is essential for success. Providing tailored, actionable steps for career planning showed us the impact of designing solutions with the user in mind, making the tool genuinely useful and relevant. ## What's next for UJourney What exciting features are on the horizon? 1. Resume Upload Feature: To kick things off, we’re introducing a resume upload feature. This will allow users to gather personal information directly from their resumes, streamlining profile creation and reducing manual data entry. 2. Real-Time University Information: Next, we’ll be scraping university websites to provide real-time updates on campus events and activities. This feature will enable users to see upcoming events and automatically add them to their calendars, keeping them informed and organized. 3. Enhanced Community Involvement: We’ll then roll out features that allow users to view their friends' dream jobs and career paths. This will facilitate connections with like-minded individuals and foster a community where students can share experiences related to jobs and university clubs. 4. Automated LinkedIn Web Scraping: To improve data collection, we’ll automate LinkedIn data scraping. This will help expand our database with up-to-date and relevant career information, enhancing the app’s ability to provide accurate recommendations. 5. AI-Driven Job Recommendations: Finally, we’ll leverage real-time market information and AI to recommend job opportunities that are ideal for the current year. 
Users will also be able to apply for these jobs directly through the app, making the job application process more efficient and seamless. These upcoming features are designed to enhance the UJourney experience, making career planning, networking, and job applications more intuitive and effective. Stay tuned for these exciting updates!
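The "Filtering LinkedIn Profiles" step described in "How we built it" above (keyword filtering of scraped profiles in MongoDB before they are handed to the model) would look roughly like the following. This is a hedged sketch only: the connection string, collection and field names, and prompt wording are hypothetical, not UJourney's actual code.

```python
# Hypothetical sketch of the profile-filtering step; database/collection/field
# names and the prompt format are assumptions, not UJourney's actual code.
import json
import re
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
profiles = client["ujourney"]["linkedin_profiles"]

def find_relevant_profiles(company: str, role: str, limit: int = 10):
    query = {
        "experience.company": {"$regex": re.escape(company), "$options": "i"},
        "experience.title": {"$regex": re.escape(role), "$options": "i"},
    }
    return list(profiles.find(query).limit(limit))

def build_prompt(user, matches):
    # The filtered profiles plus the user's own info become the LLM input.
    return (
        "Given these career histories:\n"
        + json.dumps(matches, default=str)[:8000]
        + f"\nSuggest a step-by-step plan for a {user['year']} student at "
        + f"{user['university']} aiming to become a {user['dream_job']}, as JSON."
    )
```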
*Everything in this project was completed during TreeHacks.* *By the way, we've included lots of hidden fishy puns in our writeup! Comment how many you find!* ## TL; DR * Illegal overfishing is a massive issue (**>200 billion fish**/year), disrupting global ecosystems and placing hundreds of species at risk of extinction. * Satellite imagery can detect fishing ships but there's little positive data to train a good ML model. * To get synthetic data: we fine-tuned Stable Diffusion on **1/1000ths of the data** of a typical GAN (and 10x training speed) on images of satellite pictures of ships and achieved comparable quality to SOTA. We only used **68** original images! * We trained a neural network using our real and synthetic data that detected ships with **96%** accuracy. * Built a global map and hotspot dashboard that lets governments view realtime satellite images, analyze suspicious activity hotspots, & take action. * Created a custom polygon renderer on top of ArcGIS * Our novel Stable Diffusion data augmentation method has potential for many other low-data applications. Got you hooked? Keep reading! ## Let's get reel... Did you know global fish supply has **decreased by [49%](https://www.scientificamerican.com/article/ocean-fish-numbers-cut-in-half-since-1970/)** since 1970? While topics like deforestation and melting ice dominate sustainability headlines, overfishing is a seriously overlooked issue. After thoroughly researching sustainability, we realized that this was an important but under-addressed challenge. We were shocked to learn that **[90%](https://datatopics.worldbank.org/sdgatlas/archive/2017/SDG-14-life-below-water.html) of fisheries are over-exploited** or collapsing. What's more, around [1 trillion](https://www.forbes.com/sites/michaelpellmanrowland/2017/07/24/seafood-sustainability-facts/?sh=2a46f1794bbf) (1,000,000,000,000) fish are caught yearly. Hailing from San Diego, Boston, and other cities known for seafood, we were shocked to hear about this problem. Research indicates that despite many verbal commitments to fish sustainably, **one in five fish is illegally caught**. What a load of carp! ### People are shellfish... Around the world, governments and NGOs have been trying to reel in overfishing, but economic incentives and self-interest mean that many ships continue to exploit resources secretly. It's hard to detect small ships on the world's 140 million square miles of ocean. ## What we're shipping In short (we won't keep you on the hook): we used custom Stable Diffusion to create realistic synthetic image data of ships and trained a convolutional neural networks (CNNs) to detect and locate ships from satellite imagery. We also built a **data visualization platform** for stakeholders to monitor overfishing. To enhance this platform, we **identified several hotspots of suspicious dark vessel activity** by digging into 55,000+ AIS radar records. While people have tried to build AI models to detect overfishing before, accuracy was poor due to high class imbalance. There are few positive examples of ships on water compared to the infinite negative examples of patches of water without ships. Researchers have used GANs to generate synthetic data for other purposes. However, it takes around **50,000** sample images to train a decent GAN. The largest satellite ship dataset only has ~2,000 samples. We realized that Stable Diffusion (SD), a popular text-to-image AI model, could be repurposed to generate unlimited synthetic image data of ships based on relatively few inputs. 
We were able to achieve highly realistic synthetic images using **only 68** original images.
## How we shipped it
First, we read scientific literature and news articles about overfishing, methods to detect overfishing, and object detection models (and their limitations). We identified a specific challenge: class imbalance in satellite imagery. Next, we split into teams. Molly and Soham worked on the front-end, developing a geographical analysis portal with React and creating a custom polygon renderer on top of existing geospatial libraries. Andrew and Sayak worked on curating satellite imagery from a variety of datasets, performing classical image transformations (rotations, flips, crops), fine-tuning Stable Diffusion models and GANs (to compare quality), and finally using a combo of real and synthetic data to train a CNN. Andrew also worked on design, graphics, and AIS data analysis. We explored Leap ML and Runway fine-tuning methods.
## Challenges we tackled
Building Earth visualization portals is always quite challenging, but we could never have predicted the waves we would face. Among animations, rotations, longitude, latitude, country and ocean lines, and the most-feared WebGL, we had a lot to learn. For ocean lines, we made an API call to a submarine transmissions library and recorded features to feed into a JSON. Inspired by the beautiful animated globes of Stripe's and CoPilot's landing pages alike, we took on the challenge and succeeded in writing our own. Additionally, the transition from globe to 3D map was difficult, as it required building a new scroll effect compatible with the globe. These challenges, although significant at the time, were ultimately surmountable, as we navigated through their waters unforgivingly. This enabled the series of accomplishments that ensued. It was challenging to build a visual data analysis layer on top of the ArcGIS library. The library was extremely granular, requiring us to assimilate the meshes of each individual polygon to display. To overcome this, we built our own component-based layer that enabled us to draw on top of a preexisting map.
## Making waves (accomplishments)
Text-to-image models are really cool but have failed to find that many real-world use cases besides art and profile pics. We identified and validated a relevant application for Stable Diffusion that has far-reaching effects for agriculture, industry, medicine, defense, and more. We also made a sleek and refined web portal to display our results, in just a short amount of time. We also trained a CNN on the real and synthetic data that detects ships with 96% accuracy.
## What we learned
### How to tackle overfishing
We learned a lot about existing methods to combat overfishing that we didn't know about. We really became more educated on ocean sustainability practices and the pressing nature of the situation. We schooled ourselves on AIS, satellite imagery, dark vessels, and other relevant topics.
### Don't cast a wide net. And don't go overboard.
Originally, we were super ambitious with what we wanted to do, such as implementing Monte Carlo particle tracking algorithms to build probabilistic models of ship trajectories. We realized that we should really focus on a couple of ideas at max because of time constraints.
### Divide and conquer
We also realized that splitting into sub-teams of two to work on specific tasks and being clear about responsibilities made things go very smoothly.
### Geographic data visualization Building platforms that enable interactions with maps and location data. ## What's on the horizon (implications + next steps) Our Stable Diffusion data augmentation protocol has implications for few-shot learning of any object for agricultural, defense, medical and other applications. For instance, you could use our method to generate synthetic lung CT-Scan data to train cancer detection models or fine-tune a model to detect a specific diseased fruit not covered by existing general-purpose models. We plan to create an API that allows anyone to upload a few photos of a specific object. We will build a large synthetic image dataset based off of those objects and train a plug-and-play CNN API that performs object location, classification, and counting. While general purpose object detection models like YOLO work well for popular and broad categories like "bike" or "dog", they aren't feasible for specific detection purposes. For instance, if you are a farmer trying to use computer vision to detect diseased lychees. Or a medical researcher trying to detect cancerous cells from a microscope slide. Our method allows anyone to obtain an accurate task-specific object detection model. Because one-size-fits-all doesn't cut it. We're excited to turn the tide with our fin-tech! *How many fish/ocean-related puns did you find?*
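As a rough illustration of the final training step described in "How we shipped it" (combining real and synthetic ship chips into a binary ship / no-ship classifier), a minimal Keras sketch might look like the following. The directory layout, image size, and layer choices are assumptions, not the team's actual model.

```python
# Minimal ship / no-ship CNN sketch; folder layout, input size, and layer
# choices are assumptions, not the project's actual model.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "chips/train",            # assumed layout: chips/train/{ship,no_ship}/*.png
    image_size=(80, 80),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(80, 80, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # ship vs. no ship
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```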
## Inspiration
We wanted to work on a project that 1) dealt with maps, 2) could benefit any urban environment regardless of how others view it, and 3) had a sense of intimacy. We found many of our initial ideas to be too detached—solutions that lacked a personal connection with the communities they aimed to serve. Then we came up with the idea of an application where users could simply look at a map and see all the areas that are recommended by locals, rather than popular locations that overshadow smaller and underrated areas in a community. From this, we expanded our idea to improve upon inaccurate and sometimes predatory apps claiming to protect users from dangerous incidents, yet only warning users when they are in proximity to a "high-crime" area. By simply showing how often crime really happens in a much more realistic area, users have more knowledge and freedom to decide and understand what's going on in the local community around them. This, combined with local recommendations, lets users get the "word on the street" - they would hear it through the grapevine.
## What it does
Grapevine is an application designed to make it easier for people to get the inside scoop on an area, based on local reports and recommendations. Locals can anonymously submit incident or recommendation reports, with the corresponding mark showing up on the map. Visitors can then search a location and get a map of their immediate surroundings that shows any reports in the area. They can also specify the radius and filter for certain types of reports. Reports also have an upvote/downvote system.
## How we built it
We knew we wanted to build a web application, and so we decided on trying out Node.js and Express.js as our backend framework. Given this, we also decided to use MongoDB to complete the well-known ME(no React)N tech stack, and also because of its popularity and reputation for being relatively easy to set up and use (which it was). Our frontend was built very simply with HTML/CSS. For the maps on our frontend, we used Leaflet.js, an interactive map JavaScript library that allowed us to easily display user recommendations and reports.
## Challenges we ran into
This was our first time using MongoDB/Express.js/Node.js, so there were many difficulties learning these tools on the fly. There were a lot of complications involving missing forward slashes, and a good portion of our time was spent trying to figure out how to route pages. Fortunately, we were able to adapt and create a solid code structure that made the rest of our working process easier. We also thought that, given how GitHub is way easier when people aren't making contributions every 30 minutes, it would be better to use VSCode's Live Share feature to work collaboratively at the same time. However, this turned out to be more difficult than expected, especially when only the host can see what their code changes do. Despite this, we were able to push through and develop a good finished product that does exactly what we envisioned it to do.
## Accomplishments that we're proud of
We're very proud of being able to split the work efficiently and being able to stay organized on top of all of our contributions (given that we were using Live Share instead of Git). We are also proud of being able to implement the tech stack and use it in an application. We also successfully used Leaflet, an interactive map library, for the first time, which was a new learning experience for us.
## What we learned
Since this was a full-stack project that included everything from backend to frontend, there were many aspects that some of us did not know how to work with. Learning how to use the different resources available to us online, reading documentation, and simply using trial and error until we found something that worked taught us a lot about building an application with this tech stack.
## What's next for Grapevine
We would like to scale this internationally and find a way to optimize the search function. It would also be good to create a way to verify locals vs. non-locals, perhaps through user login and personal information authentication (but still give the option of posting anonymously). We also have ideas for adding routing to the map, so that a user could input a destination and see local reports and recommendations along their route. Finally, we would like to flesh out the upvote system (differentiate between local/visitor feedback).
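The radius filter described in "What it does" (show only reports within a user-chosen distance of the searched location) maps naturally onto MongoDB's geospatial queries. Grapevine's backend is Express/Node, so the snippet below is only an illustrative Python/pymongo version of the same query; the collection and field names are assumptions.

```python
# Illustrative radius query for reports near a point; the Express backend would
# issue the equivalent query in JavaScript. Names here are assumptions.
from pymongo import MongoClient, GEOSPHERE

db = MongoClient("mongodb://localhost:27017")["grapevine"]
db.reports.create_index([("location", GEOSPHERE)])   # GeoJSON 2dsphere index

def reports_near(lng, lat, radius_km, kind=None):
    earth_radius_km = 6378.1
    query = {
        "location": {
            "$geoWithin": {"$centerSphere": [[lng, lat], radius_km / earth_radius_km]}
        }
    }
    if kind:                      # e.g. "incident" or "recommendation"
        query["type"] = kind
    return list(db.reports.find(query).sort("votes", -1))
```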
## Inspiration
While we were researching datasets from the CDC, we realized that there was a crisis in adolescents' overall well-being, including physical and mental health, due to a lack of healthy eating. We took this as an opportunity to make it easier for users to find recipes that are most beneficial for their health.
## What it does
The user is provided with a checklist of various wellness issues. They can check off which symptoms they are experiencing, and the web app provides recipes with ingredients that are known to directly help with their current health symptoms (a small sketch of this recipe lookup appears at the end of this writeup).
## How we built it
* HTML, React (JavaScript), CSS, Spoonacular API
## Challenges we ran into
As this was our first-ever experience with web development, we struggled with figuring out where to start. This was our first time using JavaScript and HTML. At first, we struggled to learn how to use React to add interactive features to our web app, but we ultimately got it to work. This was also our first time using APIs, so we had to spend time researching which APIs we needed to use and how to access the data that we needed from them. We also originally had difficulty setting up a collaborative coding environment, but we learned to use VSCode Live Sharing to edit our files within the shared project.
## Accomplishments that we're proud of
We are very proud that we were able to begin learning how to use JavaScript and HTML within our first hackathon. As this was our first web development project, we are pleased that our collaboration produced a web app with visible results.
## What we learned
We learned how to use React, JavaScript and HTML for the first time. We also learned how to make API calls.
## What's next for Wellness Recipe Assistant
The next thing we would look into for Wellness Recipe Assistant is adding dietary restrictions to the recipe finder. Additionally, we would add a difficulty level selector for the recipes, ranging from easy to hard, so that those who don't like cooking as much can still find easy, healthy recipes.
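As a hedged sketch of the symptom-to-recipe lookup described above: map each checked symptom to ingredients believed to help, then query Spoonacular's recipe search. The endpoint and parameter names below are from memory of Spoonacular's public docs and should be double-checked, and the symptom-to-ingredient mapping is entirely hypothetical.

```python
# Hypothetical symptom -> ingredient -> recipe lookup. The Spoonacular endpoint
# and parameters are recalled from its public docs and may need verification.
import requests

API_KEY = "YOUR_SPOONACULAR_KEY"          # placeholder
SYMPTOM_INGREDIENTS = {                   # illustrative mapping only
    "fatigue": ["spinach", "lentils"],
    "poor sleep": ["almonds", "kiwi"],
}

def recipes_for(symptoms, number=5):
    ingredients = {i for s in symptoms for i in SYMPTOM_INGREDIENTS.get(s, [])}
    resp = requests.get(
        "https://api.spoonacular.com/recipes/complexSearch",
        params={
            "includeIngredients": ",".join(sorted(ingredients)),
            "number": number,
            "apiKey": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [r["title"] for r in resp.json().get("results", [])]
```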
## What it does Tickets is a secure, affordable, and painless system for registration and organization of in-person events. It utilizes public key cryptography to ensure the identity of visitors, while staying affordable for organizers, with no extra equipment or cost other than a cellphone. Additionally, it provides an easy method of requesting waiver and form signatures through Docusign. ## How we built it We used Bluetooth Low Energy in order to provide easy communication between devices, PGP in order to verify the identities of both parties involved, and a variety of technologies, including Vue.js, MongoDB Stitch, and Bulma to make the final product. ## Challenges we ran into We tried working (and struggling) with NFC and other wireless technologies before settling on Bluetooth LE as the best option for our use case. We also spent a lot of time getting familiar with MongoDB Stitch and the Docusign API. ## Accomplishments that we're proud of We're proud of successfully creating a polished and functional product in a short period of time. ## What we learned This was our first time using MongoDB Stitch, as well as Bluetooth Low Energy. ## What's next for Tickets An option to allow for payments for events, as well as more input formats and data collection.
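Tickets' check-in relies on the visitor proving they hold the private key matching a registered public key. The project uses PGP over Bluetooth LE; purely as an illustration of the same challenge-response idea (not the app's actual stack), here is a sketch using Ed25519 from Python's cryptography package.

```python
# Challenge-response identity check, sketched with Ed25519 instead of the
# project's PGP-over-BLE setup; purely illustrative.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration time: the visitor generates a key pair, the organizer stores the public key.
visitor_key = Ed25519PrivateKey.generate()
registered_public_key = visitor_key.public_key()

# Check-in time: the organizer sends a random challenge over BLE,
# the visitor signs it, and the organizer verifies the signature.
challenge = os.urandom(32)
signature = visitor_key.sign(challenge)

try:
    registered_public_key.verify(signature, challenge)
    print("Identity verified - admit visitor")
except InvalidSignature:
    print("Verification failed - reject")
```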
## What it does Alzheimer's disease and dementia affect many of our loved ones every year; in fact, **76,000 diagnoses** of dementia are made every year in Canada. One of the largest issues caused by Alzheimer's is the loss of ability to make informed, cognitive decisions about their finances. This makes such patients especially vulnerable to things such as scams and high-pressure sales tactics. Here's an unfortunate real-life example of this: <https://www.cbc.ca/news/business/senior-alzheimers-upsold-bell-products-source-1.6014904> We were inspired by this heartbreaking story to build HeimWallet. HeimWallet is a digital banking solution that allows for **supervision** over a savings account owned by an individual incapable of managing their finances, and is specifically **tailored** to patients with Alzheimer's disease or dementia. It can be thought of as a mobile debit card linked to a savings account that only allows spending if certain conditions set by a designated *guardian* are met. It allows a family member or other trusted guardian to set a **daily allowance** for a patient and **keep track of their purchases**. It also allows guardians to keep tabs on the **location of patients via GPS** every time a purchase is attempted, and to authorize or refuse attempted purchases that go beyond the daily allowance. This ensures that patients and their guardians can have confidence that the patient's assets are in safe hands. Further, the daily allowance feature empowers patients to be independent and **shop with confidence**, knowing that their disease will not be able to dominate their finances. The name "HeimWallet" comes from "-Heim" in "Alzheimer's". It also alludes to Heimdall, the mythical Norse guardian of the bridge leading to Asgard. ## How we built it The frontend was built using React-Native and Expo, while the backend was made using Python (Flask) and MongoDB. SMS functionality was added using Twilio, and location services were added using Google Maps API. The backend was also deployed to Heroku. We chose **React-Native** because it allowed us to build our app for both iOS and Android using one codebase. **Expo** enabled rapid testing and prototyping of our app. **Flask**'s lightweightness was key in getting the backend built under tight time constraints, and **MongoDB** was a natural choice for our database since we were building our app using JavaScript. **Twilio** enabled us to create a solution that worked even for guardians who did not have the app installed. Its text message-based interactions enabled us to build a product accessible to those without smartphones or mobile data. We deployed our backend to **Heroku** so that Twilio could access our backend's webhook for incoming text messages. Finally, the **Google Maps API**'s reverse geocoding feature enables guardians to see the addresses of where patients are located when a transaction is attempted. ## Challenges we ran into * Fighting with Heroku for almost *six hours* to get the backend deployed. The core mistake ended up being that we were trying to deploy our Python-based backend as a Node.js app.. oops. * Learning to use React Native -- all of us were new to it, and although we all had experience building web apps, we didn't quite have that same foundation with mobile apps. * Incorporating Figma designs on React Native in a way such that it is cross-platform between Android, iOS, and Web. A lot of styling works differently between these platforms, so it was tricky to make our app look consistent everywhere. 
* Managing a mix of team members who were hacking in person and online. Constant communication to keep everyone in the loop was key!
## Accomplishments that we're proud of
We're super proud that we managed to come together and make our vision a reality! And we're especially proud of how much we learned and took away from this hackathon. From learning React Native, to Twilio, to getting better with Figma and sharpening our video-editing skills for our submission, it was thrilling to have gained exposure to so much in so little time. We're also proud of the genuine hard work every member of our team put in to make this project happen -- we worked deep into the A.M. hours, and constantly sought to improve the usability of our product with continuous suggestions and improvements.
## What's next for HeimWallet
Here are some things we think we can add on to HeimWallet in order to bring it to the next level:
* Proper integration of SOS (e.g. call 911) and Send Location functionality in the patient interface
* Ability to have multiple guardians for one patient, so that there are many eyes safeguarding the same assets
* Better security and authentication features for the app; of course, security is vital in a fintech product
* Feature to allow patients to send a voice memo to a guardian in order to clarify a spending request
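The guardian-approval flow described in "What it does" and "How we built it" (allow a purchase if it fits today's remaining allowance, otherwise text the guardian via Twilio) might be sketched in Flask like this. The route names, data model, and message wording are assumptions, not HeimWallet's actual implementation.

```python
# Hypothetical sketch of the allowance check; routes, fields, and the Twilio
# message wording are assumptions, not HeimWallet's actual code.
from flask import Flask, request, jsonify
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")     # placeholders

patients = {                                     # would live in MongoDB
    "alice": {"allowance": 40.0, "spent_today": 12.5,
              "guardian_phone": "+15551234567"},
}

@app.route("/purchase", methods=["POST"])
def purchase():
    data = request.get_json()                    # {"patient": ..., "amount": ..., "address": ...}
    p = patients[data["patient"]]
    remaining = p["allowance"] - p["spent_today"]

    if data["amount"] <= remaining:
        p["spent_today"] += data["amount"]
        return jsonify(status="approved")

    # Over the daily allowance: ask the guardian by SMS and hold the purchase.
    twilio.messages.create(
        to=p["guardian_phone"],
        from_="+15557654321",                    # placeholder Twilio number
        body=f"{data['patient']} wants to spend ${data['amount']:.2f} "
             f"near {data['address']}. Reply YES to approve or NO to decline.",
    )
    return jsonify(status="pending_guardian_approval")
```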
<https://www.townsquares.tech>
Discord Usernames: `jkarbi#1190`, `Leland#1463`, `Dalton#6802`
Discord Team Name: `Team 13`, channel `team-13-text`
## Inspiration
Traditionally, citizens write to city councillors or stage protests when they are unhappy with how their government is acting. Nowadays, citizens can use social media to express their opinions, but the many voices make platforms crowded and messages can get lost. Ever wondered if there was a better way? That's why we built TownSquares.
## What it does
TownSquares lets anyone ask their community for its opinion by creating GPS-based polls. **Polls are locked to GPS coordinates** and can only be **answered by community members within a set radius**. Polls can be used to **inspire change in a community** by making the voice of the people heard loud and clear. Not happy with how a city service is being delivered in your community? Post a poll on TownSquares and see if your neighbours agree. Then use the results to get the attention of your representatives in government!
## How we built it
Tech stack: **MEAN (MongoDB, Express.js, Angular, Node.js)**. **Mapbox API** used to display a map and the poll locations. Backend deployed on **Google Cloud** using **App Engine**. **MongoDB** running as a shared cluster on MongoDB Atlas.
## Challenges we ran into
Deploying the app on GCP and mapping to a custom domain name. Working with Angular, since we had limited frontend development experience.
## Accomplishments that we're proud of
We came into this hackathon with a plan for what we were going to build and which components of the project we would all be responsible for. That really set us up for success, and is something we are really proud of!
## What we learned
Deployment using GCP App Engine and mapping to custom domain names, integrating with Mapbox, and frontend development with Angular!
## What's next for TownSquares
We hope to continue working on this following the hackathon because we think it could really be popular!! We know there's more for us to build and we're excited to do that :).
“DJ Bot” is a web-based application that aims to make DJing parties easier for people. Based on customer research, we believe that the current user experience with Spotify has several shortcomings that we’d like to address through our web application; most notably, our potential users have the following pain points:
* They are unable to fully engage in parties because they have to constantly monitor the vibe and change the tunes manually
* Can’t make smooth transitions between songs (e.g., no crossfading, and can’t start new songs at the right moments)
* Have difficulty identifying playlists they’ll like to fit the mood of their activity
* Spend a lot of time browsing for the right music or recommendations
* When they select the wrong playlists or song it detracts from the activity and lowers enjoyment

We hope to improve the user experience through a web application that allows users to better create playlists to match the vibe of the party and more easily adjust the playlist in real time as the party commences.
## Inspiration
Adults over the age of 50 take an average of 15 prescription medications annually. Keeping track of this is very challenging. Pillvisor is a smart pillbox that solves the issue of medication error by verifying that pills are taken correctly, in order to keep your loved ones safe. Unlike other products on the market, Pillvisor integrates with a real pillbox and is designed with senior users in mind. As we can imagine, keeping track of the pill schedule is challenging, and taking incorrect medications can lead to serious avoidable complications. The most common drugs taken at home that have serious complications from medication errors are cardiovascular drugs and painkillers. One study found that almost a third of a million Americans contact poison control annually due to medication errors made at home. One third of the errors result in hospital admissions, which are on a steady rise. This only includes at-home errors; medication errors can also occur in health care facilities.
## What it does
Pillvisor is an automated pillbox supervisor designed to help people who take many medications daily to ensure they actually take the correct pills at the correct time. Unlike the many reminder and alarm apps that are widely available on the app store, our custom pillbox product actually checks that pills are taken, so the alarm isn't just turned off and ignored.
## How we built it
The user interface to set the alarms is made with Flask and is connected to Firebase. Our blacked-out pillbox uses photoresistors to detect which day is open; this verifies that the pill is removed from the correct day, and the alarm does not stop if an incorrect day is opened. Once the medication is removed, a photo of the medication is taken to check that it is indeed the correct medication; otherwise the user will be reminded to try to scan another pill. We have green LEDs to indicate the correct day of the week. If the user opens an incorrect day or scans the wrong pill, a red LED will flash to alert the user. An LCD display shows the medication name and instructions for using the system. We used TensorFlow to develop a convolutional neural network for image recognition to distinguish the different pills from one another. Our Raspberry Pi takes a photo, runs the neural network on it, and checks to see if the correct pill has been photographed (a small sketch of this verification loop appears at the end of this writeup). For our user interface, we developed an isolated Flask application which is connected to our Firebase database and allows alarms to be set, deleted and edited easily and quickly (for example, changing the time or day of a certain alarm). A sync button on the Raspberry Pi allows it to be constantly up to date with the backend after changes are made in the cloud.
## Challenges we ran into
Due to the complexity of the project, we ran into many issues with both software and hardware. Our biggest challenge for the project was getting the image recognition to work and produce accurate results, due to noise coming from the hand holding the pill. Additionally, getting all the packages and dependencies such as TensorFlow and OpenCV installed onto the system proved to be a huge challenge. On the hardware side, we ran into issues detecting if the pillbox is opened or closed because of imperfections in 'blacking out' the pillbox. Due to constraints, we didn't have an opaque box.
## Accomplishments that we're proud of
We did this hackathon to challenge ourselves to apply our skills to technologies that we were unfamiliar with or relatively new to, such as databases, Flask, machine learning, and hardware. Additionally, this was the first hackathon for 2 of our team members, and we are very proud of what we achieved and what we have learned in such a short period of time. We were happy that we were able to integrate hardware and software together for this project and apply our skills from our varying engineering backgrounds.
## What I learned
* How to set up a database
* Machine learning, TensorFlow and convolutional neural networks
* Using Flask, learning JavaScript and HTML
## What's next for Pillvisor
Due to time constraints, we were unable to implement all the features we wanted. One feature we still need to add is a snooze feature to delay the alarm by a set amount of time, which is especially useful if the medication has eating constraints attached to it. Additionally, we want to improve the image recognition of the pills, which we believe could be made into a separate program that would be highly valuable in healthcare facilities as a last line of defence, since pills are normally handled using patient charts and delivered through a chain of people, so it could serve as an extra line of defence.
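The verification loop described in "How we built it" (check the photoresistors for which lid is open, photograph the pill, and only silence the alarm when both the day and the CNN's predicted pill match) could be sketched on the Pi roughly as follows. GPIO pin numbers, the model path, and the class labels are assumptions, and the digital photoresistor read is a simplification.

```python
# Rough sketch of the alarm-verification loop; pin numbers, model path, and
# class labels are assumptions, not Pillvisor's actual code.
import RPi.GPIO as GPIO
import numpy as np
import tensorflow as tf

DAY_SENSOR_PINS = {"Mon": 5, "Tue": 6, "Wed": 13, "Thu": 19,
                   "Fri": 26, "Sat": 16, "Sun": 20}          # photoresistor inputs
CLASS_NAMES = ["aspirin", "vitamin_d", "metformin"]          # hypothetical pills

GPIO.setmode(GPIO.BCM)
for pin in DAY_SENSOR_PINS.values():
    GPIO.setup(pin, GPIO.IN)

model = tf.keras.models.load_model("pill_classifier.h5")     # assumed path

def open_day():
    # Simplification: a lit photoresistor (lid open) reads HIGH in this sketch.
    for day, pin in DAY_SENSOR_PINS.items():
        if GPIO.input(pin):
            return day
    return None

def verify(expected_day, expected_pill, photo):
    if open_day() != expected_day:
        return False                                          # wrong compartment opened
    probs = model.predict(np.expand_dims(photo, 0))[0]
    return CLASS_NAMES[int(np.argmax(probs))] == expected_pill
```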
## Inspiration
According to the United States Department of Health and Human Services, 55% of the elderly are non-compliant with their prescription drug orders, meaning they don't take their medication according to the doctor's instructions, where 30% are hospital readmissions. Although there are many reasons why seniors don't take their medications as prescribed, memory loss is one of the most common causes. Elders with Alzheimer's or other related forms of dementia are prone to medication management problems. They may simply forget to take their medications, causing them to skip doses. Or, they may forget that they have already taken their medication and end up taking multiple doses, risking an overdose. Therefore, we decided to solve this issue with Pill Drop, which helps people remember to take their medication.
## What it does
The Pill Drop dispenses pills at scheduled times throughout the day. It helps people, primarily seniors, take their medication on time. It also saves users the trouble of remembering which pills to take, by automatically dispensing the appropriate medication. It tracks whether a user has taken the dispensed pills by starting an internal timer. If the patient takes the pills and presses a button before the time limit, Pill Drop will record this instance as "Pill Taken".
## How we built it
Pill Drop was built using a Raspberry Pi and an Arduino, which controlled servo motors, a button, and a touch sensor. It was coded in Python.
## Challenges we ran into
The first challenge we ran into was communicating between the Raspberry Pi and the Arduino, since we all didn't know how to do that. Another challenge was structurally holding all the components needed in our project, making sure that all the "physics" aligned so that our product is structurally stable. In addition, having the Pi send an SMS text message was also new to all of us, so by incorporating a user interface - taking inspiration from HyperCare's user interface - we were finally able to send one too! Lastly, bringing our theoretical ideas to fruition was harder than expected, as we ran into multiple roadblocks within our code in the given time frame.
## Accomplishments that we're proud of
We are proud that we were able to create a functional final product that incorporates both hardware (Arduino and Raspberry Pi) and software! We were able to incorporate skills we learnt in class, plus learn new ones during our time at this hackathon.
## What we learned
We learned how to connect and use a Raspberry Pi and an Arduino together, as well as how to incorporate a user interface with the two, with text messages sent to the user. We also learned that we can consolidate code at the end when we persevere and build each other's morale throughout the long hours of the hackathon - knowing how each of us can be trusted to work individually and continuously be engaged with the team as well. (While, obviously, having fun along the way!)
## What's next for Pill Drop
Pill Drop's next steps include creating a high-level prototype, testing out the device over a long period of time, creating a user-friendly interface so users can adjust pill-dropping times, and incorporating patients and doctors into the system.
## UPDATE!
We are now working with MedX Insight to create a high-level prototype to pitch to investors!
## Inspiration After hearing a representative from **Private Internet Access** describe why internet security is so important, we wanted to find a way to simply make commonly used messaging platforms more secure for sharing sensitive and private information. ## What it does **Mummify** provides in-browser text encryption and decryption by simply highlighting and clicking the Chrome Extension icon. It uses a multi-layer encryption by having both a private key and a public key. Anyone is able to encrypt using your public key, but only you are able to decrypt it. ## How we built it Mummify is a Chrome Extension built using Javascript (jQuery), HTML, and CSS. We did a lot of research about cryptography, deciding that we would be using asymmetric encryption with private key and public key to ensure complete privacy and security for the user. We then started to dive into building a Chrome extension, using JavaScript, JQuery and HTML to map out the logics behind our encryption and decryption extension. Lastly, we polished our extension with simple and user-friendly UI design and launched Mummify website! We used Microsoft Azure technologies to host and maintain our webpage which was built using Bootstrap (HTML+CSS), and used Domain.com to get our domain name. ## Challenges we ran into * What is the punniest domain name (in the whole world) that we can come up with? * How do we make a Chrome Extension? * Developing secure encryption algorithms. * How to create shareable keys without defeating the purpose of encryption. * How to directly replace the highlighted text within an entry field. * Bridging the extension and the web page. * Having our extension work on different chat message platforms. (Messenger, HangOuts, Slack...) ## Accomplishments that we're proud of * Managing to overcome all our challenges! * Learning javascript in less than 24 hours. * Coming together to work as the Best Team at nwHacks off of a random Facebook post! * Creating a fully-usable application in less than 24 hours. * Developing a secure encryption algorithm on the fly. * Learning how to harness the powers of Microsoft Azure. ## What we learned Javascript is as frustrating as people make it out to be. Facebook, G-mail, Hotmail, and many other sites all use very diverse build methods which makes it hard for an Extension to work the same on all. ## What's next for Mummify We hope to deploy Mummify to the Chrome Web Store and continue working as a team to develop and maintain our extension, as well as advocating for privacy on the internet!
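To illustrate the public/private-key scheme described above (anyone can encrypt to your public key, but only you can decrypt), here is a minimal sketch using RSA-OAEP from Python's cryptography package. Mummify itself is JavaScript running in a Chrome extension, so this is only a stand-in for the same idea, not the extension's actual code.

```python
# Minimal asymmetric encrypt/decrypt sketch (RSA-OAEP); a Python stand-in for
# the extension's JavaScript logic, not Mummify's actual code.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key generation: the private key never leaves the user; the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can "mummify" a message with the recipient's public key...
ciphertext = public_key.encrypt(b"meet at 7pm, door code 4521", oaep)

# ...but only the holder of the matching private key can unwrap it.
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext.decode())
```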
## Inspiration
As impressive as modern technology is, there are many people in the world, such as seniors, who have no clue how to use today's technology and take advantage of the usefulness of the internet. To make matters worse, most diet and exercise apps are difficult for beginners due to their complex interfaces and intimidating numbers. So we thought: instead of taking a photo of your food and posting it on social media, why not use an app to scan your food and plan your diet with the help of AI? Our app aims to help people of all ages manage their nutrition and balance a healthy lifestyle through a friendly conversational interface.
## What it does
SnackChat allows users to scan their food and evaluate the nutritional values of the food they are going to eat. The app also implements an AI assistant that chats with users about their diets and calorie intake through a user-friendly conversational interface.
## How we built it
We built the AI using IBM Watson Assistant and the scanner using IBM Visual Recognition. We built the interface using Xcode.
## Challenges we ran into
Designing an interface that is easy for everyone to use, while tackling a topic that varies for everyone.
## Accomplishments that we're proud of
We programmed Watson Assistant to take user input and compute ideal meal plans according to the user's plans for the future.
## What we learned
How impressive today's AI technology is and the possibilities of technology. The IBM Visual Recognition and Watson Assistant technologies are very impressive, and it was fun to learn and build with them. Also, Red Bull and Doritos alone are not a healthy diet.
## What's next for SnackChat
Implementing a larger database for all kinds of food and programming Watson Assistant to be informed about more health topics, such as exercise, supplements, and the allergies and preferences of users.
## Inspiration
As gym freaks, it was always a major inconvenience to us that we had to manually enter the nutrition facts for every single meal. We set out to make the process super simple using machine learning and Azure.
## What it does
It automatically detects what food you ate just by looking at it! (using Azure)
## Challenges we ran into
The first challenge for us was that our team members were good at different languages, so we decided to work with something completely different that neither of us was familiar with. We started with Flutter, and then we also tried to learn Azure, and this is how we got here.
## Accomplishments that we're proud of
We are proud that we could learn JSON and Flutter.
## What we learned
JSON and Flutter
## What's next for NuTri
* Subscription-based monetization
* Personalized intake computation based on the statistical database from Azure
* Implementing and fixing the features that are broken in the current state
* Partnerships with fitness trainers and exercise centres

The target audience is the average gym-going folk.
## Inspiration
Our inspiration for Sustain-ify came from observing the current state of our world. Despite incredible advancements in technology, science, and industry, we've created a world that's becoming increasingly unsustainable. This has a domino effect, not just on the environment, but on our own health and well-being as well. With rising environmental issues and declining mental and physical health, we asked ourselves: *How can we be part of the solution?* We believe that the key to solving these problems lies within us—humans. If we have the power to push the world to its current state, we also have the potential to change it for the better. This belief, coupled with the idea that *small, meaningful steps taken together can lead to a big impact*, became the core principle of Sustain-ify.
## What it does
Sustain-ify is an app designed to empower people to make sustainable choices for the Earth and for themselves. It provides users with the tools to make sustainable choices in everyday life. The app focuses on dual sustainability—a future where both the Earth and its people thrive. Key features include:
1. **Eco Shopping Assistant**: Guides users through eco-friendly shopping.
2. **DIY Assistant**: Offers DIY sustainability projects.
3. **Health Reports**: Helps users maintain a healthy lifestyle.
## How we built it
Sustain-ify was built with a range of technologies and frameworks to deliver a smooth, scalable, and user-friendly experience.
Technical Architecture:
Frontend Technologies:
* Frameworks: Flutter (Dart) and Streamlit (Python) were used for the graphical user interface (GUI/front-end).
* Future services: Integration with third-party services such as Twilio, Lamini, and Firebase for added functionalities like messaging and real-time updates.
Backend & Web Services:
* Node.js & Express.js: For the backend API services.
* FastAPI: RESTful API pipeline used for HTTP requests and responses.
* Appwrite: Backend server for authentication and user management.
* MongoDB Atlas: For storing pre-processed data chunks in a vector index.
Data Processing & AI Models:
* ScrapeGraph.AI: LLM-powered web scraping framework used to extract structured data from online resources.
* Langchain & LlamaIndex: Used to preprocess scraped data and split it into chunks for efficient vector storage (a rough sketch of this chunking and embedding step appears at the end of this writeup).
* BGE-Large Embedding Model: From Hugging Face, used for embedding textual content.
* Neo4j: For building a knowledge graph to improve data retrieval and structuring.
* Gemini, GPT-4o & Groq: Large language models used for inference, running on LPUs (Language Processing Units) for a sustainable inference mechanism.
Additional Services:
* Serper: Provides real-time data crawling and extraction from the internet, powered by LLMs that generate queries based on the user's input.
* Firebase: Used for storing and analyzing user-uploaded medical reports to generate personalized recommendations.
Authentication & Security:
* JWT (JSON Web Tokens): For secure data transactions and user authentication.
## Challenges we ran into
Throughout the development process, we faced several challenges:
1. Ensuring data privacy and security during real-time data processing.
2. Handling large amounts of scraped data from various online sources and organizing it for efficient querying and analysis.
3. Scaling the inference mechanisms using LPUs to provide sustainable solutions without compromising performance.
## Accomplishments that we're proud of
We're proud of creating an app that:
1.
Addresses both environmental sustainability and personal well-being. 2. Empowers people to make sustainable choices in their everyday lives. 3. Provides practical tools like the Eco Shopping Assistant, DIY Assistant, and Health Reports. 4. Has the potential to create a big impact through small, collective actions. ## What we learned Through this project, we learned that: 1. Sustainability isn't just about making eco-friendly choices; it's about making *sustainable lifestyle* choices too, focusing on personal health and well-being. 2. Small, meaningful steps taken together can lead to a big impact. 3. People have the power to change the world for the better, just as they have the power to impact it negatively. ## What's next for Sustain-ify Moving forward, we aim to: 1. Continue developing and refining our features to better serve our users. 2. Expand our user base to increase our collective impact. 3. Potentially add more features that address other aspects of sustainability. 4. Work towards our vision of creating a sustainable future where both humans and the planet can flourish. Together, we believe we can create a sustainable future where both humans and the planet can thrive. That's the ongoing mission of Sustain-ify, and we're excited to continue bringing this vision to life!
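As a rough sketch of the pre-processing pipeline mentioned under "Data Processing & AI Models" (split scraped text into chunks, embed them with the BGE-Large model, and store the vectors for retrieval), something like the following is the usual shape of that step. The chunk sizes and storage format are assumptions, and the Langchain import path varies between library versions.

```python
# Sketch of the chunk-and-embed step; chunk sizes and storage format are
# assumptions. In the project the vectors go into a MongoDB Atlas vector index.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
embedder = SentenceTransformer("BAAI/bge-large-en-v1.5")   # BGE-Large from Hugging Face

def index_document(text: str):
    chunks = splitter.split_text(text)
    vectors = embedder.encode(chunks, normalize_embeddings=True)
    # Each record pairs the raw chunk with its embedding, ready to be inserted
    # into a vector index (MongoDB Atlas in Sustain-ify's case).
    return [{"chunk": c, "embedding": v.tolist()} for c, v in zip(chunks, vectors)]
```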
## Inspiration
In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs. This adds up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention (a small sketch of this fetch-and-predict step appears at the end of this writeup). Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely passed each user a unique and near-impossible-to-reverse hash to allow them to check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set up a server and configure it for remote access. We learned a lot about how cybersecurity plays a crucial role in the information technology industry.
This opportunity allowed us to connect on a more personal level with the users around us, and to create a more reliable and user-friendly interface.

## What's next for InfantXpert
We're looking to develop a mobile application for iOS and Android. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
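For illustration, here is a minimal Python sketch of two of the ideas described in the InfantXpert write-up above: deriving a hard-to-reverse SHA-256 status token from a combination of user data, and turning a fitted regression model's output into a 0-1 "needs attention" score. The field names, feature set, and sample data are our own placeholders, not the team's actual schema or model.

```python
# Minimal sketch (not the team's actual code) of the hashing and scoring ideas above.
import hashlib

import numpy as np
from sklearn.linear_model import LinearRegression

def status_token(email: str, infant_name: str, signup_ts: str) -> str:
    """Derive a hard-to-reverse identifier from a combination of user data."""
    combined = f"{email}|{infant_name}|{signup_ts}".encode("utf-8")
    return hashlib.sha256(combined).hexdigest()

# Hypothetical training data: [temperature_C, hours_since_meal, fluid_intake_ml]
X = np.array([[36.8, 2.0, 120.0], [39.1, 6.5, 30.0], [37.2, 3.0, 90.0], [40.0, 8.0, 10.0]])
y = np.array([0.0, 1.0, 0.0, 1.0])  # 1 = required medical attention

model = LinearRegression().fit(X, y)

def needs_attention_probability(temp_c: float, hours_since_meal: float, fluid_ml: float) -> float:
    raw = model.predict([[temp_c, hours_since_meal, fluid_ml]])[0]
    # Clamp the linear output into a probability-like 0-1 range.
    return float(np.clip(raw, 0.0, 1.0))

print(status_token("parent@example.com", "Sam", "2019-02-01T10:00"))
print(needs_attention_probability(39.5, 7.0, 25.0))
```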
### 💡 Inspiration 💡 We call them heroes, **but the support we give them is equal to the one of a slave.** Because of the COVID-19 pandemic, a lot of medics have to keep track of their patient's history, symptoms, and possible diseases. However, we've talked with a lot of medics, and almost all of them share the same problem when tracking the patients: **Their software is either clunky and bad for productivity, or too expensive to use on a bigger scale**. Most of the time, there is a lot of unnecessary management that needs to be done to get a patient on the record. Moreover, the software can even get the clinician so tired they **have a risk of burnout, which makes their disease predictions even worse the more they work**, and with the average computer-assisted interview lasting more than 20 minutes and a medic having more than 30 patients on average a day, the risk is even worse. That's where we introduce **My MedicAid**. With our AI-assisted patient tracker, we reduce this time frame from 20 minutes to **only 5 minutes.** This platform is easy to use and focused on giving the medics the **ultimate productivity tool for patient tracking.** ### ❓ What it does ❓ My MedicAid gets rid of all of the unnecessary management that is unfortunately common in the medical software industry. With My MedicAid, medics can track their patients by different categories and even get help for their disease predictions **using an AI-assisted engine to guide them towards the urgency of the symptoms and the probable dangers that the patient is exposed to.** With all of the enhancements and our platform being easy to use, we give the user (medic) a 50-75% productivity enhancement compared to the older, expensive, and clunky patient tracking software. ### 🏗️ How we built it 🏗️ The patient's symptoms get tracked through an **AI-assisted symptom checker**, which uses [APIMedic](https://apimedic.com/i) to process all of the symptoms and quickly return the danger of them and any probable diseases to help the medic take a decision quickly without having to ask for the symptoms by themselves. This completely removes the process of having to ask the patient how they feel and speeds up the process for the medic to predict what disease their patient might have since they already have some possible diseases that were returned by the API. We used Tailwind CSS and Next JS for the Frontend, MongoDB for the patient tracking database, and Express JS for the Backend. ### 🚧 Challenges we ran into 🚧 We had never used APIMedic before, so going through their documentation and getting to implement it was one of the biggest challenges. However, we're happy that we now have experience with more 3rd party APIs, and this API is of great use, especially with this project. Integrating the backend and frontend was another one of the challenges. ### ✅ Accomplishments that we're proud of ✅ The accomplishment that we're the proudest of would probably be the fact that we got the management system and the 3rd party API working correctly. This opens the door to work further on this project in the future and get to fully deploy it to tackle its main objective, especially since this is of great importance in the pandemic, where a lot of patient management needs to be done. ### 🙋‍♂️ What we learned 🙋‍♂️ We learned a lot about CRUD APIs and the usage of 3rd party APIs in personal projects. We also learned a lot about the field of medical software by talking to medics in the field who have way more experience than us. 
However, we hope that this tool helps them in their productivity and to remove their burnout, which is something critical, especially in this pandemic. ### 💭 What's next for My MedicAid 💭 We plan on implementing an NLP-based service to make it easier for the medics to just type what the patient is feeling like a text prompt, and detect the possible diseases **just from that prompt.** We also plan on implementing a private 1-on-1 chat between the patient and the medic to resolve any complaints that the patient might have, and for the medic to use if they need more info from the patient.
## Inspiration
I wanted to do something for students.

## What it does
A student can easily access all the books, PDFs, and mock tests for free, and they don't need to sign up or log in on the page. This webpage is focused on Physics, Chemistry, and Mathematics, and it's a one-stop destination for students to grab basic to advanced-level knowledge and then test it through mock tests and practice papers.

## How I built it
I built it using HTML.

## Challenges we ran into

## Accomplishments that we're proud of

## What we learned
Nothing is impossible if you have the right skill and an ambition.

## What's next
I'll keep modifying it at regular intervals. I'll add some tasks, a question of the day, pictures, and explainer videos, and most importantly I want to keep it free of cost.
winning
## Inspiration
It's Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that would eventually end up in their storage lockers. This cycle of buy-store-collect dust inspired us to develop LendIt, a product that aims to curb the growing waste economy and generate passive income for the users on the platform.

## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.

## How we built it
Our Smart Lockers are built with Raspberry Pi 3 (64-bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API. For our app, we've used Flutter/Dart and interfaced with Firebase. To ensure *trust*, which is core to borrowing and lending, we've experimented with Ripple's API to create an escrow system.

## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker! On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and the Raspberry Pi would work. Our app developer had previously worked only on web apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky. With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.

## Accomplishments that we're proud of
The project has various hardware and software components, like the Raspberry Pi, Flutter, the XRP Ledger escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users is the biggest accomplishment we are proud of.

## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good proof of concept, giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our object detection and facial recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers, we believe our product LendIt will grow with it. We would be honoured if we could contribute in any way to reduce the growing waste economy.
## Inspiration 🍪 We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks... Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock. ## What it does 📸 Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see. ## How we built it 🛠️ * **Backend:** Node.js * **Facial Recognition:** OpenCV, TensorFlow, DLib * **Pipeline:** Twilio, X, Cohere ## Challenges we ran into 🚩 In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time. Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision. Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders. ## Accomplishments that we're proud of 💪 * Successfully bypassing Nest’s security measures to access the camera feed. * Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm. * Fine-tuning Cohere to generate funny and engaging social media captions. * Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner. ## What we learned 🧠 Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application. ## What's next for Craven 🔮 * **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates. 
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy. * **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves. * **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
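As a rough sketch of the K-nearest-neighbours classification step described in the Craven write-up above, the snippet below matches an incoming face embedding against labelled roommate embeddings. How the 128-dimensional encodings are produced (dlib/OpenCV/TensorFlow) is assumed and stubbed out with random vectors, and the distance threshold is an arbitrary placeholder.

```python
# Sketch of KNN-based face identification over precomputed embeddings.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: one 128-d embedding per labelled snapshot of each roommate.
embeddings = np.random.rand(12, 128)            # stand-in for real face encodings
labels = ["alice"] * 4 + ["bob"] * 4 + ["carol"] * 4

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(embeddings, labels)

def identify(face_embedding: np.ndarray, threshold: float = 0.6) -> str:
    """Return the best-matching roommate, or 'intruder' if the match is too weak."""
    distances, _ = knn.kneighbors([face_embedding], n_neighbors=1)
    if distances[0][0] > threshold:
        return "intruder"
    return knn.predict([face_embedding])[0]

print(identify(np.random.rand(128)))
```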
## Our Mission
To hack the Lutron Sliver kit using Android Things and Google Vision.

## Inspiration
The idea was to automate your home using just your face. Lutron's Sliver kit is a representation of the modern smart home, open to tinkering and intriguing possibilities. Google's Vision API allows us to go further with the smart home vision, opening our team to machine learning opportunities not possible before. Keys in lock, you look up and smile at the camera. You are home, and since it's you, the lights welcome you back. The inspiration for the title is the famous line from the movie "The Shining".

## What it does
We use the Pico i.MX7 Development Board with the Android Things OS and Google Vision to communicate over TCP with the Lutron Sliver Kit. We imagined a smart home that could welcome you home.

## How we built it
Honey I'm Home uses a variety of technologies, starting with the Pico development board running the Android Things OS. The intersection of all these technologies allows for the simplicity and potential to control your home without speaking or pressing a button. Honey I'm Home relies on Android Things OS to drive our camera and communication to the Lutron home system. We started by trying to just get the lights of the Lutron to turn on by sending Telnet commands to it. We then wrote a bash script that automated entering the username and password while also executing light commands. We tried to take it a step further using the Pico development board. The Pico proved to be difficult; learning how to navigate the Android workflow was a huge challenge.

## Challenges we ran into
Our biggest challenges were getting the Pico board to communicate with the Lutron system over the Telnet protocol using Java. We also had trouble learning how to use the Android APIs.

## Accomplishments that we're proud of
Being able to stretch ourselves to learn multiple new technologies at once, and connect them in this one project.

## What we learned
We went through the challenge of working with hardware and embraced how fun it was. We learned that, even if nothing is working, it's all a part of the learning process. Persistence is important to make whatever we attempt possible.

## What's next for Honey I'm Home
The concept of Honey I'm Home could potentially be extended to tell when you fall asleep and shut off the lights.
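To make the Telnet idea above more concrete, here is a hedged Python sketch of sending a light command to a Lutron-style system over a raw socket. The host, credentials, prompt handling, and the command string are all placeholders; the real integration syntax comes from the Lutron documentation, and the team's actual automation was a bash script.

```python
# Hedged sketch of Telnet-style automation against a smart-lighting controller.
import socket

HOST = "192.168.1.50"   # hypothetical Sliver kit address
PORT = 23               # standard Telnet port

def send_light_command(command: str, user: str = "lutron", password: str = "integration") -> str:
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        conn.recv(1024)                          # crude wait for the login prompt
        conn.sendall(f"{user}\r\n".encode())
        conn.recv(1024)                          # crude wait for the password prompt
        conn.sendall(f"{password}\r\n".encode())
        conn.recv(1024)
        conn.sendall(f"{command}\r\n".encode())  # e.g. set a dimmer level
        return conn.recv(1024).decode(errors="ignore")

# Placeholder command meaning "set output 1 to 75%" in a Lutron-style integration syntax.
print(send_light_command("#OUTPUT,1,1,75"))
```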
winning
## Inspiration The need for faster and more reliable emergency communication in remote areas inspired the creation of FRED (Fire & Rescue Emergency Dispatch). Whether due to natural disasters, accidents in isolated locations, or a lack of cellular network coverage, emergencies in remote areas often result in delayed response times and first-responders rarely getting the full picture of the emergency at hand. We wanted to bridge this gap by leveraging cutting-edge satellite communication technology to create a reliable, individualized, and automated emergency dispatch system. Our goal was to create a tool that could enhance the quality of information transmitted between users and emergency responders, ensuring swift, better informed rescue operations on a case-by-case basis. ## What it does FRED is an innovative emergency response system designed for remote areas with limited or no cellular coverage. Using satellite capabilities, an agentic system, and a basic chain of thought FRED allows users to call for help from virtually any location. What sets FRED apart is its ability to transmit critical data to emergency responders, including GPS coordinates, detailed captions of the images taken at the site of the emergency, and voice recordings of the situation. Once this information is collected, the system processes it to help responders assess the situation quickly. FRED streamlines emergency communication in situations where every second matters, offering precise, real-time data that can save lives. ## How we built it FRED is composed of three main components: a mobile application, a transmitter, and a backend data processing system. ``` 1. Mobile Application: The mobile app is designed to be lightweight and user-friendly. It collects critical data from the user, including their GPS location, images of the scene, and voice recordings. 2. Transmitter: The app sends this data to the transmitter, which consists of a Raspberry Pi integrated with Skylo’s Satellite/Cellular combo board. The Raspberry Pi performs some local data processing, such as image transcription, to optimize the data size before sending it to the backend. This minimizes the amount of data transmitted via satellite, allowing for faster communication. 3. Backend: The backend receives the data, performs further processing using a multi-agent system, and routes it to the appropriate emergency responders. The backend system is designed to handle multiple inputs and prioritize critical situations, ensuring responders get the information they need without delay. 4. Frontend: We built a simple front-end to display the dispatch notifications as well as the source of the SOS message on a live-map feed. ``` ## Challenges we ran into One major challenge was managing image data transmission via satellite. Initially, we underestimated the limitations on data size, which led to our satellite server rejecting the images. Since transmitting images was essential to our product, we needed a quick and efficient solution. To overcome this, we implemented a lightweight machine learning model on the Raspberry Pi that transcribes the images into text descriptions. This drastically reduced the data size while still conveying critical visual information to emergency responders. This solution enabled us to meet satellite data constraints and ensure the smooth transmission of essential data. 
## Accomplishments that we’re proud of We are proud of how our team successfully integrated several complex components—mobile application, hardware, and AI powered backend—into a functional product. Seeing the workflow from data collection to emergency dispatch in action was a gratifying moment for all of us. Each part of the project could stand alone, showcasing the rapid pace and scalability of our development process. Most importantly, we are proud to have built a tool that has the potential to save lives in real-world emergency scenarios, fulfilling our goal of using technology to make a positive impact. ## What we learned Throughout the development of FRED, we gained valuable experience working with the Raspberry Pi and integrating hardware with the power of Large Language Models to build advanced IOT system. We also learned about the importance of optimizing data transmission in systems with hardware and bandwidth constraints, especially in critical applications like emergency services. Moreover, this project highlighted the power of building modular systems that function independently, akin to a microservice architecture. This approach allowed us to test each component separately and ensure that the system as a whole worked seamlessly. ## What’s next for FRED Looking ahead, we plan to refine the image transmission process and improve the accuracy and efficiency of our data processing. Our immediate goal is to ensure that image data is captioned with more technical details and that transmission is seamless and reliable, overcoming the constraints we faced during development. In the long term, we aim to connect FRED directly to local emergency departments, allowing us to test the system in real-world scenarios. By establishing communication channels between FRED and official emergency dispatch systems, we can ensure that our product delivers its intended value—saving lives in critical situations.
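The payload-reduction trick described in the FRED write-up above (sending a short text caption instead of the raw image over the satellite link) can be illustrated with the sketch below. The captioning model itself is stubbed out with a canned string; in FRED it is a lightweight ML model running on the Raspberry Pi, and the coordinates and file name here are placeholders.

```python
# Illustrative only: caption the scene locally, then transmit a compact JSON payload.
import json
from pathlib import Path

def caption_image(image_path: Path) -> str:
    # Placeholder for the lightweight on-device transcription model.
    return "Overturned kayak on rocky shoreline, one adult waving, no visible fire."

def build_payload(image_path: Path, lat: float, lon: float) -> bytes:
    payload = {
        "lat": lat,
        "lon": lon,
        "scene": caption_image(image_path),
    }
    return json.dumps(payload).encode("utf-8")

image = Path("scene.jpg")                     # hypothetical capture from the mobile app
packet = build_payload(image, 49.2827, -123.1207)
raw_size = image.stat().st_size if image.exists() else "unknown"
print(len(packet), "bytes to transmit vs.", raw_size, "bytes of raw image")
```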
## Inspiration
Every year, hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need.

## What it does
It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.

## How we built it
The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Authentication. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users can take a picture of their ID and their information can be extracted.

## Challenges we ran into
There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.

## Accomplishments that we're proud of
We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before.
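A minimal sketch of the core matching step described above (finding registered respondents within a 300-meter radius of an emergency) is shown below. In the real app this is driven by locations stored in the Firebase Realtime Database; here a small in-memory list of hypothetical respondents stands in for it.

```python
# Sketch of radius-based respondent matching using the haversine formula.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical respondent locations (would come from the realtime database).
respondents = [
    {"id": "u1", "lat": 43.6535, "lon": -79.3840},
    {"id": "u2", "lat": 43.6510, "lon": -79.3470},
]

def nearby_respondents(emergency_lat, emergency_lon, radius_m=300):
    return [
        r["id"]
        for r in respondents
        if haversine_m(emergency_lat, emergency_lon, r["lat"], r["lon"]) <= radius_m
    ]

print(nearby_respondents(43.6532, -79.3832))  # -> ['u1']
```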
## Inspiration
Global warming is one of the biggest problems facing the world. It has to be tackled correctly, or it may lead to very bad outcomes.

## What it does
Predicts the effects global warming will have on the world in a given future year.

## How we built it
Using C++.

## Challenges we ran into
Large amounts of data were required for the analysis.

## Accomplishments that we're proud of
Our analytical skills.

## What we learned
Global warming will cause huge losses for the planet; it has to be tackled carefully.

## What's next for Global Warming effects predictor
Innovative ways to control climate change.
winning
## Inspiration dwarf fortress and stardew valley ## What it does simulates farming ## How we built it quickly ## Challenges we ran into learning how to farm ## Accomplishments that we're proud of making a frickin gaem ## What we learned games are hard farming is harder ## What's next for soilio make it better
## Inspiration
Save the World is a mobile app meant to promote sustainable practices, one task at a time.

## What it does
Users begin with a colorless Earth prominently displayed on their screens, along with a list of possible tasks. After completing a sustainable task, such as saying no to a straw at a restaurant, users obtain points towards their goal of saving this empty world. As points are earned and users level up, they receive lively stickers to add to their world. Suggestions for activities are given based on the time of day. They can also connect with their friends to compete for the best scores and sustainability. Both the fun stickers and friendly competition encourage heightened sustainability practices from all users!

## How I built it
Our team created an iOS app with Swift. For the backend of tasks and users, we utilized a Firebase database. To connect these two, we utilized CocoaPods.

## Challenges I ran into
Half of our team had not used iOS before this hackathon. We worked together to get past this learning curve and all contribute to the app. Additionally, we created a setup in Xcode for the wrong type of database at first. At that point, we made a decision to change the Xcode setup instead of creating a different database. Finally, we found that it is difficult to use CocoaPods in conjunction with GitHub, because every computer needs to do the pod init anyway. We carefully worked through this issue along with several other merge conflicts.

## Accomplishments that I'm proud of
We are proud of our ability to work as a team even with the majority of our members having limited Xcode experience. We are also excited that we delivered a functional app with almost all of the features we had hoped to complete. We had some other project ideas at the beginning but decided they did not have a high enough challenge factor; the ambition worked out and we are excited about what we produced.

## What I learned
We learned that it is important to triage which tasks should be attempted first. We attempted to prioritize the most important app functions and leave some of the fun features for the end. It was often tempting to try to work on exciting UI or other finishing touches, but having a strong project foundation was important. We also learned to continue to work hard even when the due date seemed far away. The first several hours were just as important as the final minutes of development.

## What's next for Save the World
Save the World has some wonderful features that could be implemented after this hackathon. For instance, the social aspect could be extended to give users more points if they meet up to do a task together. There could also be forums for sustainability blog posts from users and chat areas. Additionally, the app could recommend personal tasks for users and start to "learn" their schedule and most-completed tasks.
## Inspiration
There are 1.1 billion people without an official identity (ID). Without this proof of identity, they can't get access to basic financial and medical services, and often face many human rights offences due to the lack of accountability. The concept of a digital identity is extremely powerful. In Estonia, for example, everyone has a digital identity; the solution was developed in tight cooperation between public and private sector organizations. Digital identities are also the foundation of our future, enabling:

* P2P Lending
* Fractional Home Ownership
* Selling Energy Back to the Grid
* Fan Sharing Revenue
* Monetizing data
* Bringing the unbanked, banked

## What it does
Our project starts by getting the user to take a photo of themselves. Through the use of Node.js and AWS Rekognition, we do facial recognition in order to allow the user to log in or create their own digital identity. Through the use of both S3 and Firebase, that information is passed to both our dashboard and our blockchain network! It is stored on the Ethereum blockchain, enabling one source of truth that neither corrupt governments nor hackers can edit. From there, users can get access to a bank account.

## How we built it
Front end: HTML | CSS | JS
APIs: AWS Rekognition | AWS S3 | Firebase
Back end: Node.js | mvn
Crypto: Ethereum

## Challenges we ran into
Connecting the front end to the back end!!!! We had many different databases and components. There are also a lot of access issues with APIs, which makes it incredibly hard to do things on the client side.

## Accomplishments that we're proud of
Building an application that can better the lives of people!!

## What we learned
Blockchain, facial verification using AWS, databases.

## What's next for CredID
Expand on our idea.
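The project calls Rekognition from Node.js; as a rough equivalent, the Python (boto3) sketch below shows the kind of face-comparison step used to decide whether a new selfie matches a registered identity photo. The file names, region, and similarity threshold are assumptions.

```python
# Sketch of face comparison with Amazon Rekognition via boto3.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def faces_match(selfie_path: str, registered_path: str, threshold: float = 90.0) -> bool:
    with open(selfie_path, "rb") as selfie, open(registered_path, "rb") as registered:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": selfie.read()},
            TargetImage={"Bytes": registered.read()},
            SimilarityThreshold=threshold,
        )
    # Any returned match at or above the threshold counts as the same person.
    return len(response.get("FaceMatches", [])) > 0

if faces_match("new_selfie.jpg", "stored_identity_photo.jpg"):
    print("Identity verified - fetch the blockchain-backed identity record")
else:
    print("No match - offer to create a new digital identity")
```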
partial
## Inspiration
**“I do think that a significant portion of the population of developed countries, and eventually all countries, will have AR experiences every day, almost like eating three meals a day. It will become that much a part of you”** ~ Tim Cook, CEO, Apple, at a technical conference in Utah

The beauty of AR with its camera-based object recognition is that the whole world becomes the interface to this data: just look at something to receive more information, and it can work on most smartphones without major hardware requirements, unlike VR (Virtual Reality). However, most of the existing AR solutions are:

* Discrete (targeting a specific user group)
* Built to solve very specific problems
* Limited to predefined image targets
* Proprietary

Would you prefer using 15 different task-specific search services for daily use over using just Google? No, right? That's what we intend to do with AR as well!

## What it does
VisionAR turns your native camera into a search engine, overlaying relevant, rich information right on your screen in real time. It continuously scans your camera stream and comes up with the most relevant explanation for the feed in your native language. Our solution can also be further extended to specific use cases like identifying calorie counts, purchasing the products you see on the go from e-commerce platforms, and much more. We also extended our hack to identify a bus and show details like the destination, expected arrival, weather, promotions, etc. on the screen.

## Technologies used
Unity3D, Python, Google Custom Search API, DuckDuckGo Instant Answers API, Google Cloud Vision, Heroku, and Wolfram Alpha

## Challenges we ran into
This was our first attempt at building something with Unity, and we ran into a few bugs that took us quite some time to debug and fix.

## What we learned
* It's much more convenient to develop a hack with your familiar tech stack (but less fun)
* That AR is awesome!

## What's next for visionAR
We plan to make this project open source so that it can be a community-driven project that benefits the entire tech community and can be developed further by taking into account the aggregated opinions of the community.
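As a hedged sketch of the lookup pipeline described above, the snippet below takes a label produced for the current camera frame and fetches a short explanation from the DuckDuckGo Instant Answers API. The label is hard-coded here; in the real app it comes from Google Cloud Vision running on the camera stream, and the overlay itself is handled in Unity.

```python
# Sketch: label from the vision model -> short explanation for the AR overlay.
import requests

def instant_answer(label: str) -> str:
    response = requests.get(
        "https://api.duckduckgo.com/",
        params={"q": label, "format": "json", "no_html": 1},
        timeout=5,
    )
    data = response.json()
    return data.get("AbstractText") or data.get("Heading") or "No summary found."

detected_label = "Golden Gate Bridge"   # placeholder for a Cloud Vision label
print(instant_answer(detected_label))   # text that would be overlaid on the AR view
```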
## Inspiration
The inspiration for TranslatAR comes from the desire to understand different languages in order to better connect with people around the world, and from wanting to implement AR in our application for a fully interactive experience. Learning a new language can be extremely difficult, and nowadays people have a hard time finding excitement in learning one. With TranslatAR, we are able to reinvigorate people's interest in and passion for learning a new language, and hence connect people around the world.

## What it does
TranslatAR uses the iPhone's camera to detect objects with our own custom-trained model from Microsoft's Cognitive Services Custom Vision API. Once the objects are detected, they can be translated into the user's selected language in real time using augmented reality. What's unique about our AR app is that the words are anchored to the object in space, identifying it and creating a "label" in a unique environment for users to learn from.

## How we built it
We built our application primarily using Microsoft technologies. The Microsoft Cognitive Services we used include the Microsoft Translator API and the Custom Vision API. Other tools used included Microsoft Azure, ARKit, and Swift.

## Challenges we ran into
We had no prior experience building a mobile app using Swift. We wanted to use Swift because of Apple's new ARKit. As you might have predicted, we ran into many challenges with understanding and programming in an unfamiliar language and with communicating between new partners. Challenges in Swift included embedding Microsoft APIs within the code because of the lack of documentation for Swift 4. Other challenges included training our model to make precise and accurate predictions over 95% of the time. We had to train it on more than 50 instances of each object.

## Accomplishments that we're proud of
We are proud of the progress we made while utilizing a completely unfamiliar language. Implementing AR in our application was very fun to do and something we feel many users will enjoy. We are also very proud of being able to learn, have fun, and meet new people while at this hackathon!

## What we learned
We learned a completely new language and were able to overcome obstacles intertwined with learning an unfamiliar topic in a short period of time. This allowed us to develop our skills in understanding languages we are unfamiliar with.

## What's next for TranslatAR
We want to continue to add more languages to the application so there are fewer barriers to connecting with and understanding different parts of the world. Future endeavors can include expanding API recognition to more objects and adding extra features such as practice phrases and speech pronunciation. With response capabilities and better UI/UX functionality, we believe that TranslatAR can truly change the way we learn.
# FaceConnect ##### Never lose a connection again! Connect with anyone, any wallet, and send transactions through an image of one's face! ## Inspiration Have you ever met someone and instantly connected with them, only to realize you forgot to exchange contact information? Or, even worse, you have someone's contact but they are outdated and you have no way of contacting them? I certainly have. This past week, I was going through some old photos and stumbled upon one from a Grade 5 Summer Camp. It was my first summer camp experience, I was super nervous going in but I had an incredible time with a friend I met there. We did everything together and it was one of my favorite memories from childhood. But there was a catch – I never got their contact, and I'd completely forgotten their name since it's been so long. All I had was a physical photo of us laughing together, and it felt like I'd lost a precious connection forever. This dilemma got me thinking. The problem of losing touch with people we've shared fantastic moments with is all too common, whether it's at a hackathon, a party, a networking event, or a summer camp. So, I set out to tackle this issue at Hack The Valley. ## What it does That's why I created FaceConnect, a Discord bot that rekindles these connections using facial recognition. With FaceConnect, you can send connection requests to people as long as you have a picture of their face. But that's not all. FaceConnect also allows you to view account information and send transactions if you have a friend's face. If you owe your friend money, you can simply use the "transaction" command to complete the payment. Or even if you find someone's wallet or driver's license, you can send a reach out to them just with their ID photo! Imagine a world where you never lose contact with your favorite people again. Join me in a future where no connections are lost. Welcome to FaceConnect! ## Demos Mobile Registration and Connection Flow (Registering and Detecting my own face!): <https://github.com/WilliamUW/HackTheValley/assets/25058545/d6fc22ae-b257-4810-a209-12e368128268> Desktop Connection Flow (Obama + Trump + Me as examples): <https://github.com/WilliamUW/HackTheValley/assets/25058545/e27ff4e8-984b-42dd-b836-584bc6e13611> ## How I built it FaceConnect is built on a diverse technology stack: 1. **Computer Vision:** I used OpenCV and the Dlib C++ Library for facial biometric encoding and recognition. 2. **Vector Embeddings:** ChromaDB and Llama Index were used to create vector embeddings of sponsor documentation. 3. **Document Retrieval:** I utilized Langchain to implement document retrieval from VectorDBs. 4. **Language Model:** OpenAI was employed to process user queries. 5. **Messaging:** Twilio API was integrated to enable SMS notifications for contacting connections. 6. **Discord Integration:** The bot was built using the discord.py library to integrate the user flow into Discord. 7. **Blockchain Technologies:** I integrated Hedera to build a decentralized landing page and user authentication. I also interacted with Flow to facilitate seamless transactions. ## Challenges I ran into Building FaceConnect presented several challenges: * **Solo Coding:** As some team members had midterm exams, the project was developed solo. This was both challenging and rewarding as it allowed for experimentation with different technologies. * **New Technologies:** Working with technologies like ICP, Flow, and Hedera for the first time required a significant learning curve. 
However, this provided an opportunity to develop custom Language Models (LLMs) trained on sponsor documentation to facilitate the learning process.
* **Biometric Encoding:** It was my first time implementing facial biometric encoding and recognition! Although cool, it required some time to find the right tools to convert a face to a biometric hash and then compare these hashes accurately.

## Accomplishments that I'm proud of
I'm proud of several accomplishments:

* **Facial Recognition:** Successfully implementing facial recognition technology, allowing users to connect based on photos.
* **Custom LLMs:** Building custom Language Models trained on sponsor documentation, which significantly aided the learning process for new technologies.
* **Real-World Application:** Developing a solution that addresses a common real-world problem - staying in touch with people.

## What I learned
Throughout this hackathon, I learned a great deal:

* **Technology Stacks:** I gained experience with a wide range of technologies, including computer vision, blockchain, and biometric encoding.
* **Solo Coding:** The experience of solo coding, while initially challenging, allowed for greater freedom and experimentation.
* **Documentation:** Building custom LLMs for various technologies, based on sponsor documentation, proved invaluable for rapid learning!

## What's next for FaceConnect
The future of FaceConnect looks promising:

* **Multiple Faces:** Supporting multiple people in a single photo to enhance the ability to reconnect with groups of friends or acquaintances.
* **Improved Transactions:** Expanding the transaction feature to enable users to pay or transfer funds to multiple people at once.
* **Additional Technologies:** Exploring and integrating new technologies to enhance the platform's capabilities and reach beyond Discord!

### Sponsor Information

ICP Challenge: I leveraged ICP to build a decentralized landing page and implement user authentication so spammers and bots are blocked from accessing our bot. I built a custom LLM trained on ICP documentation to assist me in learning about ICP and building on ICP for the first time! I really disliked deploying on Netlify, and now that I've learned to deploy on ICP, I can't wait to use it for all my web deployments from now on!

Canister ID: be2us-64aaa-aaaaa-qaabq-cai

Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/ICP.md>

Best Use of Hedera: With FaceConnect, you are able to see your Hedera account info using your face; no need to memorize your public key or search your phone for it anymore! It also allows people to send transactions to others based on their face! (I wasn't able to get it working, but I have all the prerequisites to make it work in the future: sender Hedera address, recipient Hedera address.) In the future, to pay someone or a vendor in Hedera, you can just scan their face to get their wallet address instead of preparing QR codes or copying and pasting! I also built a custom LLM trained on Hedera documentation to assist me in learning about Hedera and building on Hedera as a beginner!

Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/hedera.md>

Best Use of Flow: With FaceConnect, to pay someone or a vendor in Flow, you can just scan their face to get their wallet address instead of preparing QR codes or copying and pasting! I also built a custom LLM trained on Flow documentation to assist me in learning about Flow and building on Flow as a beginner!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/flow.md> Georgian AI Challenge Prize I was inspired by the data sources listed in the document by scraping LinkedIn profile pictures and their faces for obtaining a dataset to test and verify my face recognition model! I also built a custom LLM trained on Georgian documentation to learn more about the firm! Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/GeorgianAI.md> Best .Tech Domain Name: FaceCon.tech Best AI Hack: Use of AI include: 1. Used Computer Vision with OpenCV and the Dlib C++ Library to implement AI-based facial biometric encoding and recognition. 2. Leveraged ChromaDB and Llama Index to create vector embeddings of sponsor documentation 3. Utilized Langchain to implement document retrieval from VectorDBs 4. Used OpenAI to process user queries for everything Hack the Valley related! By leveraging AI, FaceConnect has not only addressed a common real-world problem but has also pushed the boundaries of what's possible in terms of human-computer interaction. Its sophisticated AI algorithms and models enable users to connect based on visuals alone, transcending language and other barriers. This innovative use of AI in fostering human connections sets FaceConnect apart as an exceptional candidate for the "Best AI Hack" award. Best Diversity Hack: Our project aligns with the Diversity theme by promoting inclusivity and connection across various barriers, including language and disabilities. By enabling people to connect using facial recognition and images, our solution transcends language barriers and empowers individuals who may face challenges related to memory loss, speech, or hearing impairments. It ensures that everyone, regardless of their linguistic or physical abilities, can stay connected and engage with others, contributing to a more diverse and inclusive community where everyone's unique attributes are celebrated and connections are fostered. Imagine trying to get someone’s contact in Germany, or Thailand, or Ethiopia? Now you can just take a picture! Best Financial Hack: FaceConnect is the ideal candidate for "Best Financial Hack" because it revolutionizes the way financial transactions can be conducted in a social context. By seamlessly integrating facial recognition technology with financial transactions, FaceConnect enables users to send and receive payments simply by recognizing the faces of their friends. This innovation simplifies financial interactions, making it more convenient and secure for users to settle debts, split bills, or pay for services. With the potential to streamline financial processes, FaceConnect offers a fresh perspective on how we handle money within our social circles. This unique approach not only enhances the user experience but also has the potential to disrupt traditional financial systems, making it a standout candidate for the "Best Financial Hack" category.
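Not the bot's actual code, but a minimal illustration of the biometric matching idea described in the FaceConnect write-up: each registered face is stored as a numeric encoding, and an incoming photo is matched by vector distance. The encoding step (dlib/OpenCV) is stubbed with random vectors so the snippet runs anywhere, and the wallet keys and tolerance are placeholders.

```python
# Sketch: match an incoming face encoding against registered users by distance.
import numpy as np

registered = {
    "0xSenderWalletAddress": np.random.rand(128),    # hypothetical user -> encoding
    "0xAnotherWalletAddress": np.random.rand(128),
}

def match_face(encoding: np.ndarray, tolerance: float = 0.6):
    """Return the wallet/contact key of the closest registered face, if close enough."""
    best_key, best_dist = None, float("inf")
    for key, known in registered.items():
        dist = float(np.linalg.norm(known - encoding))
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= tolerance else None

query = np.random.rand(128)          # stand-in for the encoding of an uploaded photo
print(match_face(query) or "No connection found for this face")
```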
losing
## Inspiration
I wanted to create a way to encourage users to recycle and help the environment that is fun and does not feel like a chore. There are more than 3.3 million tons of plastic waste every year and only 9% is recycled.

## What it does
Eco Nation is a mobile app built on the DeSo blockchain. Users sign up using their DeSo social account and scan plastic items using machine learning and computer vision to score points that can be redeemed for cryptocurrencies, NFTs, and other exclusive items. Users can also challenge their DeSo friends to see who can recycle the most to earn even cooler prizes.

## How we built it
Eco Nation is built on the DeSo blockchain. It is easier to develop this app for iOS devices, since Android tends to block or disable some of its features. Using Google Cloud, the user's points and history are stored in the Google Cloud database. Google Cloud also serves as the backbone of the computer vision technology for scanning barcodes. Users sign up with their DeSo account, which links their friends and is how users claim their rewards.

## Challenges we ran into
I had a hard time developing this app since the DeSo documentation is difficult to follow and flow through. I also had some issues linking DeSo friends with the user's account and challenging them. The Google Cloud storage of the user's progress was giving me problems because some of the features were blocked by my university's network, but I was able to VPN around it and use my hotspot when needed.

## Accomplishments that we're proud of
I am proud of developing a mobile app on the DeSo blockchain. Web3 technologies are becoming mainstream, and developing an app like this may incentivize people to recycle more and keep the planet healthy.

## What we learned
I learned how to use the DeSo blockchain technologies, the inner workings of DeSo accounts, and how users' data is stored on the blockchain rather than on a corporate server.

## What's next for Eco Nation
I hope to refine the computer vision aspect of this app and work with a larger team to get better ideas for the design and development of blockchain technologies.

Track: Social Good
Discord Username for HackPrinceton: Neil52900#3856
## Inspiration
Because of COVID-19 and the holiday season, we are feeling increasingly guilty over the carbon footprint caused by our online shopping. This is not a coincidence: Amazon alone contributed over 55.17 million tonnes of CO2 in 2019, the equivalent of 13 coal power plants. We have seen many carbon footprint calculators that aim to measure individual carbon pollution. However, the raw mass of a carbon footprint is too abstract and has little meaning to average consumers. After calculating footprints, we would feel guilty about the carbon consumption caused by our lifestyles, and maybe, maybe donate once to offset the guilt inside us. The problem is, climate change cannot be eliminated by a single contribution because it's a continuous process, so we thought to gamify the carbon footprint to cultivate engagement, encourage donations, and raise awareness over the long term.

## What it does
We built a Google Chrome extension to track the user's Amazon purchases and determine the carbon footprint of each product in real time, using all available variables scraped from the page, including product type, weight, distance, and shipping options. We set up Google Firebase to store users' account information and purchase history, and created a gaming system to track user progression, achievements, and pet status in the backend.

## How we built it
We created the front end using React.js, developed our web scraper in JavaScript to extract Amazon information, and used Netlify to deploy the website. We developed the back end in Python using Flask, storing our data on Firestore, calculating shipping distance using Google's Distance Matrix API, and hosting on Google Cloud Platform. For the user authentication system, we used SHA-256 hashes and salts to store passwords securely in the cloud.

## Challenges we ran into
This was the first time developing a web application for most of us, because our backgrounds are in Mechatronics Engineering and Computer Engineering.

## Accomplishments that we're proud of
We are very proud that we were able to accomplish an app of this magnitude, as well as of its potential impact on social good by reducing carbon footprint emissions.

## What we learned
We learned about utilizing the Google Cloud Platform and integrating the front end and back end to make a complete web app.

## What's next for Purrtector
Our mission is to build tools to gamify our fight against climate change, cultivate user engagement, and make it fun to save the world. We see ourselves as a non-profit, and we would welcome collaboration from third parties to offer additional perks and discounts to our users for reducing carbon emissions by unlocking designated achievements with their pet. This would bring in additional incentives towards a carbon-neutral lifestyle on top of the emotional attachment to their pet.

## Domain.com Link
<https://purrtector.space>

Note: We weren't able to register this via domain.com due to site errors, but Sean said we could have this domain considered.
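A small sketch of the salted SHA-256 password scheme mentioned in the Purrtector write-up above; the parameter choices are ours, not necessarily the project's, and in production a dedicated password hash such as bcrypt or Argon2 is usually preferred over plain SHA-256.

```python
# Sketch of salted SHA-256 password hashing and constant-time verification.
import hashlib
import hmac
import os
from typing import Optional

def hash_password(password: str, salt: Optional[bytes] = None) -> tuple[str, str]:
    salt = salt or os.urandom(16)                       # fresh random salt per user
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt.hex(), digest

def verify_password(password: str, salt_hex: str, expected_digest: str) -> bool:
    _, digest = hash_password(password, bytes.fromhex(salt_hex))
    return hmac.compare_digest(digest, expected_digest)  # avoid timing leaks

salt_hex, digest = hash_password("hunter2")
print(verify_password("hunter2", salt_hex, digest))   # True
print(verify_password("wrong", salt_hex, digest))     # False
```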
## Inspiration The inspiration behind this app was accredited to the absence of a watchOS native or client application for LinkedIn, a fairly prominent and ubiquitously used platform on the iOS counterpart. This propelled the idea of introducing a watchOS client to perform what LinkedIn is used most practically for, connecting with professionals and growing one's network, in an intuitively gesture-delivering method through a handshake. ## What it does ShakeIn performs a peer-to-peer LinkedIn connection using two Apple watches through a handshake gesture from the two parties. This is done so through initiation of sender and receiver and then simply following through with the handshake. ## How we built it ShakeIn was built using Xcode in Swift, with CocoaPods, Alamofire, LinkedIn's Invitation and Oauth 2.0 APIs, Core Bluetooth, Apple Watch Series 4 and Series 2, and WatchConnectivityKit. ## Challenges we ran into Counteracting our barrier of demonstration due to CoreBluetooth only being supported for Apple Watch Series 3 and up was the biggest challenge. This was due to the fact that only one of our watches supported the framework and required a change of approach involving phone demonstration instead. ## Accomplishments that we're proud of Introducing a LinkedIn based client to perform its most purposeful task through a simple, natural, and socially conventional method ## What we learned Building this app invoked greater awareness for purposeful usage and applicability for the watchOS development platform and how gesture activated processing for an activity can come to fruition. ## What's next for ShakeIn Progressive initiatives involve expanding ShakeIn's applicability in acknowledgement of the fact that the watch is more commonly worn on the left wrist. This implies introducing alternative gesturing that is also intuitive in nature comparable to that of a handshake in order to form a LinkedIn connection.
partial
## Inspiration Rates of patient nonadherence to therapies average around 50%, particularly among those with chronic diseases. One of my closest friends has Crohn's disease, and I wanted to create something that would help with the challenges of managing a chronic illness. I built this app to provide an on-demand, supportive system for patients to manage their symptoms and find a sense of community. ## What it does The app allows users to have on-demand check-ins with a chatbot. The chatbot provides fast inference, classifies actions and information related to the patient's condition, and flags when the patient’s health metrics fall below certain thresholds. The app also offers a community aspect, enabling users to connect with others who have chronic illnesses, helping to reduce the feelings of isolation. ## How we built it We used Cerebras for the chatbot to ensure fast and efficient inference. The chatbot is integrated into the app for real-time check-ins. Roboflow was used for image processing and emotion detection, which aids in assessing patient well-being through facial recognition. We also used Next.js as the framework for building the app, with additional integrations for real-time community features. ## Challenges we ran into One of the main challenges was ensuring the chatbot could provide real-time, accurate classifications and flagging low patient metrics in a timely manner. Managing the emotional detection accuracy using Roboflow's emotion model was also complex. Additionally, creating a supportive community environment without overwhelming the user with too much data posed a UX challenge. ## Accomplishments that we're proud of ✅deployed on defang ✅integrated roboflow ✅integrated cerebras We’re proud of the fast inference times with the chatbot, ensuring that users get near-instant responses. We also managed to integrate an emotion detection feature that accurately tracks patient well-being. Finally, we’ve built a community aspect that feels genuine and supportive, which was crucial to the app's success. ## What we learned We learned a lot about balancing fast inference with accuracy, especially when dealing with healthcare data and emotionally sensitive situations. The importance of providing users with a supportive, not overwhelming, environment was also a major takeaway. ## What's next for Muni Next, we aim to improve the accuracy of the metrics classification, expand the community features to include more resources, and integrate personalized treatment plans with healthcare providers. We also want to enhance the emotion detection model for more nuanced assessments of patients' well-being.
## Inspiration Peer-review is critical to modern science, engineering, and healthcare endeavors. However, the system for implementing this process has lagged behind and results in expensive costs for publishing and accessing material, long turn around times reminiscent of snail-mail, and shockingly opaque editorial practices. Astronomy, Physics, Mathematics, and Engineering use a "pre-print server" ([arXiv](https://arxiv.org)) which was the early internet's improvement upon snail-mailing articles to researchers around the world. This pre-print server is maintained by a single university, and is constantly requesting donations to keep up the servers and maintenance. While researchers widely acknowledge the importance of the pre-print server, there is no peer-review incorporated, and none planned due to technical reasons. Thus, researchers are stuck with spending >$1000 per paper to be published in journals, all the while individual article access can cost as high as $32 per paper! ([source](https://www.nature.com/subscriptions/purchasing.html)). For reference, a single PhD thesis can contain >150 references, or essentially cost $4800 if purchased individually. The recent advance of blockchain and smart contract technology ([Ethereum](https://www.ethereum.org/)) coupled with decentralized file sharing networks ([InterPlanetaryFileSystem](https://ipfs.io)) naturally lead us to believe that archaic journals and editors could be bypassed. We created our manuscript distribution and reviewing platform based on the arXiv, but in a completely decentralized manner. Users utilize, maintain, and grow the network of scholarship by simply running a simple program and web interface. ## What it does arXain is a Dapp that deals with all the aspects of a peer-reviewed journal service. An author (wallet address) will come with a bomb-ass paper they wrote. In order to "upload" their paper to the blockchain, they will first need to add their file/directory to the IPFS distributed file system. This will produce a unique reference number (DOI is currently used in journals) and hash corresponding to the current paper file/directory. The author can then use their address on the Ethereum network to create a new contract to submit the paper using this reference number and paperID. In this way, there will be one paper per contract. The only other action the author can make to that paper is submitting another draft. Others can review and comment on papers, but an address can not comment/review its own paper. The reviews are rated on a "work needed", "acceptable" basis and the reviewer can also upload an IPFS hash of their comments file/directory. Protection is also built in such that others can not submit revisions of the original author's paper. The blockchain will have a record of the initial paper submitted, revisions made by the author, and comments/reviews made by peers. The beauty of all of this is one can see the full transaction histories and reconstruct the full evolution of the document. One can see the initial draft, all suggestions from reviewers, how many reviewers, and how many of them think the final draft is reasonable. ## How we built it There are 2 main back-end components, the IPFS file hosting service and the Ethereum blockchain smart contracts. They are bridged together with ([MetaMask](https://metamask.io/)), a tool for connecting the distributed blockchain world, and by extension the distributed papers, to a web browser. We designed smart contracts in Solidity. 
The IPFS interface was built using a combination of Bash, HTML, and a lot of regex! Then we connected the IPFS distributed network with the Ethereum blockchain using MetaMask and JavaScript.

## Challenges we ran into
On the Ethereum side, setting up the Truffle framework and test networks was challenging. Learning the limits of Solidity and constantly reminding ourselves that we had to remain decentralized was hard! The IPFS side required a lot of clever regex-ing. Ensuring public access to researchers' manuscripts and review histories required proper identification and distribution on the network. The hardest part was using MetaMask and JavaScript to call our contracts and connect the blockchain to the browser. We struggled for hours trying to get JavaScript to deploy a contract on the blockchain. We were all new to functional programming.

## Accomplishments that we're proud of
Closing all the curly bois and parentheticals in JavaScript. Learning a whole lot about the blockchain and IPFS. We went into this weekend wanting to learn how the blockchain worked, and came out learning about Solidity, IPFS, JavaScript, and a whole lot more. You can see our "genesis-paper" on an IPFS gateway (a bridge between HTTP and IPFS) [here](https://gateway.ipfs.io/ipfs/QmdN2Hqp5z1kmG1gVd78DR7vZmHsXAiSbugCpXRKxen6kD/0x627306090abaB3A6e1400e9345bC60c78a8BEf57_1.pdf)

## What we learned
We went into this knowing that there was a way to write smart contracts, that IPFS existed, and minimal JavaScript. We gained intimate knowledge of setting up the Truffle Ethereum framework, Ganache, and test networks, along with the development side of Ethereum Dapps, like the Solidity language and JavaScript tests with the Mocha framework. We learned how to navigate the filespace of IPFS, hash and organize directories, and how file distribution works on a P2P swarm.

## What's next for arXain
With some more extensive testing, arXain is ready for the Ropsten test network *at the least*. If we had a little more ETH to spare, we would consider launching our Dapp on the Main Network. arXain PDFs are already on the IPFS swarm and can be accessed by any IPFS node.
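Roughly how the submission step described in the arXain write-up above can be scripted (our sketch, not the team's Bash/regex code): add a manuscript directory to IPFS via the CLI and pull the resulting content hash out of the output, ready to be passed to the paper's smart contract. The directory name is a placeholder, and the snippet assumes a local IPFS daemon and CLI are installed.

```python
# Sketch: add a manuscript directory to IPFS and extract the root content hash.
import re
import subprocess

def add_to_ipfs(directory: str) -> str:
    """Return the root IPFS hash for a directory added with `ipfs add -r`."""
    output = subprocess.run(
        ["ipfs", "add", "-r", directory],
        check=True, capture_output=True, text=True,
    ).stdout
    # `ipfs add -r` prints one "added <hash> <name>" line per entry;
    # the last line corresponds to the top-level directory.
    matches = re.findall(r"^added\s+(\S+)\s+", output, flags=re.MULTILINE)
    if not matches:
        raise RuntimeError("No IPFS hash found in `ipfs add` output")
    return matches[-1]

paper_hash = add_to_ipfs("./my_paper")          # hypothetical manuscript directory
print("Submit this hash to the paper's smart contract:", paper_hash)
```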
Introducing playc3, a revolutionary Web3 streaming platform that utilizes open-source, decentralized video technology from Livepeer to provide a seamless and more efficient viewing experience for users. playc3 uses blockchain technology to allow viewers to purchase token shares of their favourite streams from any streamer on the platform, giving them a direct stake in the success of the stream and streamer. playc3 is built on a decentralized network, eliminating the need for centralized servers, which reduces costs while maintaining scalability and security. Streamers will be able to monetize their streams in numerous ways, including through token shares, ads, subscriptions, and token mining via watch time. Overall, playc3 is a platform that empowers streamers and viewers, providing a new method of monetization for content creators and a new path for viewers to support their favourite streamers.
## Inspiration

We were inspired by the challenge of identifying future visionary leaders in tech using AI. We wanted to use OpenAI's API and other sources to create models that can analyze the attributes and impact of potential tech leaders.

## What it does

We built several models for interpreting which aspiring entrepreneur or visionary is most likely to lead a successful startup. They take in the name of a person who is involved in the tech ecosystem and output a score and further details on 33 features in total. The scores are based on factors such as personality traits, experience, emotional and social skills, and leadership skills. The README.md contains a detailed report of our general approach.

## How we built it

We built GeorgianAI using various Python programs and the OpenAI API.

## Challenges we ran into

Some of the challenges we faced were:

* Finding reliable and relevant data sources for each person
* Dealing with noisy and incomplete data
* Balancing the trade-off between creativity and accuracy
* Ensuring the scalability and feasibility of our solution

## Accomplishments that we're proud of

Some of the accomplishments we are proud of are:

* Creating a novel and innovative solution for the challenge
* Utilizing the GenAI repository effectively
* Generating insightful and comprehensive reports for each person
* Demonstrating our solution with a captivating demo

## What we learned

Some of the things we learned are:

* How to use the OpenAI API and its models and datasets
* How to scrape and process data from various sources
* How to apply NLP and computer vision techniques to analyze data
* How to present our solution effectively

## What's next for GeorgianAI

Some of the next steps for GeorgianAI are:

* Improving the accuracy and robustness of our model
* Adding more data sources and features to our model
* Testing our model on more people and scenarios
* Deploying our model as a web app or a chatbot
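The scoring step can be sketched with the official OpenAI Node client. This is a hedged TypeScript illustration only: the feature names, prompt, and model choice below are assumptions, not GeorgianAI's actual pipeline, which scores 33 features.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical scorer: asks the model to rate a named founder on a few features
// and return strict JSON that the rest of the pipeline can aggregate.
async function scoreCandidate(name: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      { role: "system", content: "You rate tech founders. Reply with JSON only." },
      {
        role: "user",
        content: `Rate ${name} from 0-10 on: leadership, resilience, technical depth. ` +
                 `Return {"leadership":n,"resilience":n,"technical_depth":n}.`,
      },
    ],
  });
  return JSON.parse(response.choices[0].message.content ?? "{}");
}

scoreCandidate("Ada Lovelace").then(console.log);
```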
In the public imagination, the year 1956 brings to mind a number of things – foremost the Hungarian Revolution, and its subsequent bloody suppression. Those of a certain vintage would recall the Suez Crisis, or the debut album of Elvis Presley. But those in the know would associate 1956 with the Dartmouth workshop, often considered the seminal event in artificial intelligence. In the intervening decades the field of AI bore witness to several cycles of hype and bust, as it broadened and matured. The field is once again in a frenzy, and public perception of AI is divided. Evangelists, believing it a tool of Promethean promise, herald the coming of what they call the AI revolution. Others, wary of the limits of today’s computational powers and the over-promise of previous hypes, warn of a market correction of sorts. Because of its complexity and apparent inaccessibility, the average layperson views it with both awe and suspicion. Still others are unaware of its developments at all. However, there is one major difference between the present flowering of AI and the previous decades. It is here in our everyday lives, and here to stay. Yet most people are not aware of this. We aim to make AI more accessible by creating a user-friendly experience that gives easy and fun example use-cases, and provides users with a memento after completion. We initially started off rather ambitiously, and wanted to create a cinematic experience that would incorporate computer vision, and natural language processing. However, we quickly discovered that this would prove difficult to implement within the 36-hour time limit, especially given that this is the first hackathon that our team members have participated in, and that some of us had limited exposure to the tools and frameworks that we used to deploy our project. Nevertheless, we are proud of the prototype that we built and we hope to expand upon it after the conclusion of TreeHacks. We used AWS to host our website and produce our conversational agents, Gradio to host our OpenAI GPT-3 demo, and HTML, CSS, Javascript to build the front-end and back-end of our website.
![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png)

# What is GitStarted?

GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules.

## Installation

To install:

```
npm install
```

## Usage

To run:

```
gulp
```

## Credits

Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)!

Made with :heart: in Kingston, Ontario for QHacks 2016
## Inspiration Our inspiration for this project was to bring plants to "life" by enabling an interaction layer using an intersection of OpenAI API and Hume.ai's API. By specifying certain criteria that pertain to a plant's overall health, we created a voice-based interface that allows us to communicate with our plant and understand if it requires any care. ## What it does Our plant has two sensors manually placed in its soil, pH and moisture. We use our device cameras, either on an iPad or laptop, to capture image data and process that data to a server. OpenAI's API helps us with vision to analyze the plant image, extract the sensor readings, and detail a description of the plant. We also implement an input prompt that can give the plant a personality and have the voice output believe it is a plant. After the sensor readings and prompt are processed, we store this data in a Supabase data table. Applying Hume.ai gave us the ability to read this data and integrate it into our model such that the voice output is dynamically adjusted based on the sensor readings. With this implementation, we are able to communicate with our plant through voice and determine its requirements for success. ## How we built it Bloom Buddy was built using a modern tech stack, combining powerful frontend technologies with robust backend services to create an interactive and responsive plant monitoring system. We create a NextJs frontend that interacts with the client by sending image data from the sensors to an LLM which gets sent to Hume and then gets outputted on the frontend. ### Frontend We used Next.js 13 with React and TypeScript to build a fast, server-side rendered application. This allowed us to create a seamless user experience with quick load times and efficient routing. Tailwind CSS was employed for styling, enabling rapid UI development with a consistent design language. Key components of our frontend include: 1. Dashboard: This is the main interface where users interact with their plant. It displays real-time sensor data and hosts the AI conversation interface. 2. AudioVisualizer: This component creates a visual representation of the audio input and output, enhancing the conversational experience with the plant. 3. Controls: Manages the voice chat controls, allowing users to start and stop conversations with their plant. ### Backend and APIs We leveraged several backend services and APIs to power Bloom Buddy's features: 1. Supabase: We used Supabase as our database to store and retrieve plant sensor data. The database schema was set up using SQL. 2. OpenAI API: This powers the AI conversation feature, allowing the plant to respond intelligently to user inputs. 3. Hume AI Voice API: We integrated this for voice processing, enabling spoken interactions between the user and the plant. ### AI Integration The AI component of Bloom Buddy is particularly interesting. We created a system prompt that dynamically changes based on the plant's current sensor readings. This allows the plant's personality and responses to adapt based on its current state, creating a more realistic and engaging interaction. ### Image Analysis We implemented an image analysis feature to assess plant health. This feature handles image uploads, processes them using AI (likely via the OpenAI API), and updates the plant's status in the database accordingly. ## Challenges we ran into One of the main challenges was integrating real-time sensor data with the AI conversation system. 
We solved this by implementing a polling mechanism that frequently updates the sensor data and dynamically adjusts the AI's context. Another challenge was creating a responsive and visually appealing audio visualizer. We addressed this by using the Hume AI Voice API's FFT (Fast Fourier Transform) data to create a dynamic visualization that represents both input and output audio.

## Accomplishments that we're proud of

We're most proud of how dynamic Bloom Buddy is. Our teammates experimented with different inputs (e.g. low water, medium sunlight) and tuned the responses to adapt to those readings. More technically, we adjusted the context window for our LLM responses so the conversations feel more dynamic, emulating what a plant might "feel" when it's upset or angry. This was a very fun implementation, and we had a lot of fun talking to our Bloom Buddy.

## What we learned

We learned a ton. Collectively, we learned how to develop an architecture that makes the LLM responses more seamless, how to interact with the database, and how to build API endpoints. We worked with Hume, OpenAI, and Twilio, which were all stacks we were unfamiliar with.

## What's next for Bloom Buddy

Bloom Buddy has a lot of potential. Internal and external improvements include:

1. Developing a more robust data pipeline that streams accurate information to the client
2. Integrating sensors within the plant vase to increase scalability
3. Scaling to greenhouses and primary educational programs
4. Creating a mobile application that allows users to communicate with and monitor their plants
5. Verifying Twilio so plants can message you when they need resources

### Conclusion

By combining these technologies and approaches, we were able to create Bloom Buddy, an interactive plant monitoring system that brings plants to life through AI-powered conversations and real-time data visualization. The project demonstrates the potential of combining IoT, AI, and modern web technologies to create engaging and useful applications in the realm of smart home and plant care.

Our approach focused on creating a seamless user experience while leveraging powerful backend services. The real-time nature of the application, combined with the AI-driven conversational interface, provides users with an innovative way to interact with and care for their plants. The dynamic system prompts and adaptive AI responses ensure that each interaction is unique and tailored to the current state of the plant, making Bloom Buddy not just a tool, but a companion in plant care.
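The polling approach mentioned above can be illustrated with a small TypeScript sketch using `@supabase/supabase-js`: fetch the latest sensor row on an interval and fold it into the system prompt before the next AI turn. The table and column names here are assumptions, not Bloom Buddy's actual schema.

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Hypothetical schema: sensor_readings(plant_id, moisture, ph, created_at).
async function latestReading(plantId: string) {
  const { data, error } = await supabase
    .from("sensor_readings")
    .select("moisture, ph, created_at")
    .eq("plant_id", plantId)
    .order("created_at", { ascending: false })
    .limit(1)
    .single();
  if (error) throw error;
  return data;
}

// Rebuild the system prompt from the freshest readings every few seconds.
function buildSystemPrompt(r: { moisture: number; ph: number }) {
  return `You are a houseplant. Soil moisture is ${r.moisture}% and pH is ${r.ph}. ` +
         `Let your mood reflect these readings when you talk to your owner.`;
}

setInterval(async () => {
  const reading = await latestReading("plant-1");
  const systemPrompt = buildSystemPrompt(reading);
  console.log(systemPrompt); // would be passed to the LLM as conversation context
}, 5000);
```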
## Overview Crop diseases pose a significant threat to global food security, especially in regions lacking proper infrastructure for rapid disease identification. To address this challenge, we present a web application that leverages the widespread adoption of smartphones and cutting-edge transfer learning models. Our solution aims to streamline the process of crop disease diagnosis, providing users with insights into disease types, suitable treatments, and preventive measures. ## Key Features * **Disease Detection:** Our web app employs advanced transfer learning models to accurately identify the type of disease affecting plants. Users can upload images of afflicted plants for real-time diagnosis. * **Treatment Recommendations:** Beyond disease identification, the app provides actionable insights by recommending suitable treatments for the detected diseases. This feature aids farmers and agricultural practitioners in promptly addressing plant health issues. * **Prevention Suggestions:** The application doesn't stop at diagnosis; it also offers preventive measures to curb the spread of diseases. Users receive valuable suggestions on maintaining plant health and preventing future infections. * **Generative AI Interaction:** To enhance user experience, we've integrated generative AI capabilities for handling additional questions users may have about their plants. This interactive feature provides users with insightful information and guidance. ## How it Works ? * **Image Upload:** Users upload images of plant specimens showing signs of disease through the web interface. * **Transfer Learning Model:** The uploaded images undergo real-time analysis using advanced transfer learning model, enabling the accurate identification of diseases with the help of PlantID API. * **Treatment and Prevention Recommendations:** Once the disease is identified, the web app provides detailed information on suitable treatments and preventive measures, empowering users with actionable insights. * **Generative AI Interaction:** Users can engage with generative AI to seek additional information, ask questions, or gain knowledge about plant care beyond disease diagnosis.
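The upload-and-diagnose flow can be sketched as a small client-side helper: read the user's image, send it to a diagnosis endpoint, and render the disease, treatment, and prevention fields that come back. This is an illustrative TypeScript sketch; the `/api/diagnose` route and the response shape are assumptions rather than the app's real API, and the backend behind it would do the transfer-learning / PlantID work.

```typescript
// Hypothetical response shape returned by the diagnosis backend.
interface Diagnosis {
  disease: string;
  confidence: number;
  treatment: string;
  prevention: string;
}

async function diagnosePlant(imageFile: File): Promise<Diagnosis> {
  const form = new FormData();
  form.append("image", imageFile); // photo of the afflicted plant

  // The backend would run the transfer-learning model / PlantID lookup here.
  const res = await fetch("/api/diagnose", { method: "POST", body: form });
  if (!res.ok) throw new Error(`Diagnosis failed: ${res.status}`);
  return res.json();
}

// Example usage from an <input type="file"> change handler:
// const result = await diagnosePlant(input.files[0]);
// console.log(result.disease, result.treatment);
```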
## Inspiration

Learning a new instrument is hard. Inspired by games like Guitar Hero, we wanted to make a fun, interactive music experience that also translates to actually learning a new instrument. We chose the violin because most of our team members had never touched a violin prior to this hackathon. Learning the violin is also particularly difficult because there are no frets, such as those on a guitar, to help guide finger placement.

## What it does

Fretless is a modular attachment that can be placed onto any instrument. Users can upload any MIDI file through our GUI. The file is converted to music numbers and sent to the Arduino, which then lights up LEDs at locations corresponding to where the user needs to press down on the string.

## How we built it

Fretless is composed of software and hardware components. We used a Python MIDI library to convert MIDI files into music numbers readable by the Arduino. Then, we wrote an Arduino script to match the music numbers to the corresponding light. Because we were limited by the space on the violin fingerboard, we could not put four rows of LEDs (one for each string). Thus, we implemented logic to color-code the lights to indicate which string to press.

## Challenges we ran into

One of the challenges we faced is that only one member of our team knew how to play the violin. Thus, the rest of the team was essentially learning how to play the violin while coding the functionality and configuring the electronics of Fretless at the same time.

Another challenge we ran into was the lack of hardware available. In particular, we weren't able to check out as many LEDs as we needed. We also needed some components, like a female DC power adapter, that were not present at the hardware booth. And so, we had limited resources and had to make do with what we had.

## Accomplishments that we're proud of

We're really happy that we were able to create a working prototype together as a team. Some of the members on the team are also really proud of the fact that they are now able to play Ode to Joy on the violin!

## What we learned

Do not crimp lights too hard. Things are always harder than they seem to be. Ode to Joy on the violin :)

## What's next for Fretless

We can make the LEDs smaller and less intrusive on the violin, ideally an LED pad that covers the entire fingerboard. Also, we would like to expand the software to include more instruments, such as cello, bass, guitar, and pipa. Finally, we would like to incorporate a PDF sheet music to MIDI file converter so that people can learn to play a wider range of songs.
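The team used a Python MIDI library, but the same conversion step can be sketched in TypeScript with the `@tonejs/midi` parser: read a MIDI file, pull out each note's pitch and start time, and map it to a compact "music number" that the Arduino sketch can translate into an LED position. The pitch-only encoding below is an invented placeholder, not Fretless's actual scheme.

```typescript
import { readFileSync } from "node:fs";
import { Midi } from "@tonejs/midi";

// Hypothetical encoding: one entry per note, where the value is the MIDI pitch (0-127).
// The Arduino side would map pitch -> (string colour, LED index).
function midiToMusicNumbers(path: string): { time: number; pitch: number }[] {
  const midi = new Midi(readFileSync(path));
  const notes = midi.tracks.flatMap((track) => track.notes);
  return notes
    .sort((a, b) => a.time - b.time)
    .map((n) => ({ time: n.time, pitch: n.midi }));
}

const sequence = midiToMusicNumbers("ode_to_joy.mid");
console.log(sequence.slice(0, 8)); // these values would be streamed to the Arduino over serial
```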
## Inspiration All of us attend meetings almost every week of our lives. Some of us are early birds, who get to meetings first and wonder when everyone else will arrive. Others of us are always late, and want to know what the latest time we can leave home is without having a conspicuous arrival. We wanted to create a web app that allows such people to know exactly where they are relative to everyone else coming to a meeting, so that they have the correct expectations of the situation upon arrival. That is how the idea of WYA was born. ## What it does WYA allows a user to create a meeting event and set the location and time of the meeting. Other users who want to join the meeting can then provide the meeting id as well as their name. All of the users for each meeting then have their location and estimated time of arrival shown on a map that is updated real-time. This allows each meeting participant to see exactly where each person is and when they will arrive to the meeting. ## How we built it We used HTML5, and CSS3 to create the UI of our app. To store the event and user data, we created a database using Firebase. Javascript allowed us to connect everything in between. ## Challenges we ran into / Things we learned This project gave us our first exposure to databases, and it took a while to learn the relationship between the frontend and backend and how to access and modify data in the way that we needed for our app. We also used APIs (Firebase, Google Maps) that we had never used before, which took a long time to figure out how to implement. ## What's next for WYA (Where You At)? We would like to add features that help users more on their journey to their meeting, such as providing directions to the meeting location. The app also currently relies largely on the latitude and longitude coordinates of locations, and it would be more user-friendly to have the ability to search for nearby landmarks so that places have a human-readable name. We could also make the ability to add oneself to an event more secure, for example, by storing passwords in our database. The UI can also be improved by changing our login page into a series of pages, so that the space is more aesthetically pleasing.
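The real-time location sharing can be sketched with the modern modular Firebase SDK (v9+): each participant writes their coordinates under the meeting's node, and every client subscribes to that node to redraw its map markers. The `meetings/<id>/members` path layout below is an assumption for illustration, not WYA's actual schema.

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, set, onValue } from "firebase/database";

const app = initializeApp({ databaseURL: "https://your-project.firebaseio.com" });
const db = getDatabase(app);

// Each participant periodically writes their latest position.
function shareLocation(meetingId: string, name: string, lat: number, lng: number) {
  return set(ref(db, `meetings/${meetingId}/members/${name}`), {
    lat,
    lng,
    updatedAt: Date.now(),
  });
}

// Every client subscribes to the same path and redraws its map markers on change.
function watchMeeting(meetingId: string, onUpdate: (members: Record<string, unknown>) => void) {
  onValue(ref(db, `meetings/${meetingId}/members`), (snapshot) => {
    onUpdate(snapshot.val() ?? {});
  });
}
```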
## Inspiration Whenever I go on vacation, what I always fondly look back on is the sights and surroundings of specific moments. What if there was a way to remember these associations by putting them on a map to look back on? We strived to locate a problem, and then find a solution to build up from. What if instead of sorting pictures chronologically and in an album, we did it on a map which is easy and accessible? ## What it does This app allows users to collaborate in real time on making maps over shared moments. The moments that we treasure were all made in specific places, and being able to connect those moments to the settings of those physical locations makes them that much more valuable. Users from across the world can upload pictures to be placed onto a map, fundamentally physically mapping their favorite moments. ## How we built it The project is built off a simple React template. We added functionality a bit at a time, focusing on creating multiple iterations of designs that were improved upon. We included several APIs, including: Google Gemini and Firebase. With the intention of making the application very accessible to a wide audience, we spent a lot of time refining the UI and the overall simplicity yet useful functionality of the app. ## Challenges we ran into We had a difficult time deciding the precise focus of our app and which features we wanted to have and which to leave out. When it came to actually creating the app, it was also difficult to deal with niche errors not addressed by the APIs we used. For example, Google Photos was severely lacking in its documentation and error reporting, and even after we asked several experienced industry developers, they could not find a way to work around it. This wasted a decent chunk of our time, and we had to move in a completely different direction to get around it. ## Accomplishments that we're proud of We're proud of being able to make a working app within the given time frame. We're also happy over the fact that this event gave us the chance to better understand the technologies that we work with, including how to manage merge conflicts on Git (those dreaded merge conflicts). This is our (except one) first time participating in a hackathon, and it was beyond our expectations. Being able to realize such a bold and ambitious idea, albeit with a few shortcuts, it tells us just how capable we are. ## What we learned We learned a lot about how to do merges on Git as well as how to use a new API, the Google Maps API. We also gained a lot more experience in using web development technologies like JavaScript, React, and Tailwind CSS. Away from the screen, we also learned to work together in coming up with ideas and making decisions that were agreed upon by the majority of the team. Even with being friends, we struggled to get along super smoothly while working through our issues. We believe that this experience gave us an ample amount of pressure to better learn when to make concessions and also be better team players. ## What's next for Glimpses Glimpses isn't as simple as just a map with pictures, it's an album, a timeline, a glimpse into the past, but also the future. We want to explore how we can encourage more interconnectedness between users on this app, so we want to allow functionality for tagging other users, similar to social media, as well as providing ways to export these maps into friendly formats for sharing that don't necessarily require using the app. 
We also seek to better merge AI into our platform by using generative AI to summarize maps and experiences, but also help plan events and new memories for the future.
## Inspiration: The inspiration behind Pisces stemmed from our collective frustration with the time-consuming and often tedious process of creating marketing materials from scratch. We envisioned a tool that could streamline this process, allowing marketers to focus on strategy rather than mundane tasks. ## Learning: Throughout the development of Pisces, we learned the intricate nuances of natural language processing and machine learning algorithms. We delved into the psychology of marketing, understanding how to tailor content to specific target audiences effectively. ## Building: We started by gathering a diverse team with expertise in marketing, software development, and machine learning. Collaborating closely, we designed Pisces to utilize cutting-edge algorithms to analyze input data and generate high-quality marketing materials autonomously. ## Challenges: One of the main challenges we faced was training the machine learning models to accurately understand and interpret product descriptions. We also encountered hurdles in fine-tuning the algorithms to generate diverse and engaging content consistently. Despite the challenges, our dedication and passion for innovation drove us forward. Pisces is not just a project; it's a testament to our perseverance and commitment to revolutionizing the marketing industry. ## Interested to Learn More? **Read from the PROS!** Pisces has the power to transform marketing teams by reducing the need for extensive manpower. With traditional methods, it might take a team of 50 individuals to create comprehensive marketing campaigns. However, with Pisces, this workforce can be streamlined to just 5 people or even less. Imagine the time saved by automating the creation of ads, videos, and audience insights! Instead of spending weeks on brainstorming sessions and content creation, marketers can now allocate their time more strategically, focusing on refining their strategies and analyzing campaign performance. This tool isn't just a time-saver; it's a game-changer for the future of marketing. By harnessing the efficiency of Pisces, companies can launch campaigns faster, adapt to market trends more seamlessly, and ultimately, achieve greater success in their marketing endeavors. Pisces can be effectively used across various industries and marketing verticals. Whether you're a small startup looking to establish your brand presence or a multinational corporation aiming to scale your marketing efforts globally, Pisces empowers you to create compelling campaigns with minimal effort and maximum impact. ## Demos Walkthrough (bad compression): [YouTube Link](https://www.youtube.com/watch?v=VGiHuQ7Ha9w) Muted Demo (for ui/ux purposes): [YouTube Link](https://youtu.be/56MRUErwfPc)
## Inspiration

I wanted to make something that let me explore everything you need to do at a hackathon.

## What it does

Currently, the web app stores and encrypts passwords in a database hosted by CockroachDB via the "sign up" form. The web app also allows you to retrieve and decrypt your password with the "fetch" form.

## How we built it

I used Python to build the server-side components and Flask to connect the server to the web app. I stored the user data using the CockroachDB API. I used HTML, Jinja2, and Bootstrap to make the front end look pretty.

## Challenges we ran into

Originally, I was going to use the @sign API and take my project further, but the @platform uses Dart. I do not use Dart and I did not plan on learning it within the submission period. I then had to descale my project to something more achievable, which is what I have now.

## Accomplishments that we're proud of

I made something when I had little idea of what I was doing.

## What we learned

I learned a lot of the basic elements of creating a web app (front end + back end) and using databases (CockroachDB).

## What's next for Password Manager

Fully fleshing out the entire web app.
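The store-encrypted / fetch-and-decrypt idea can be illustrated outside of Flask as well. Below is a hedged TypeScript sketch using Node's built-in `crypto` and the `pg` driver (CockroachDB is wire-compatible with PostgreSQL); the table name and key handling are assumptions, and a production password manager would need proper key management on top of this.

```typescript
import { Pool } from "pg";
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const pool = new Pool({ connectionString: process.env.DATABASE_URL }); // CockroachDB DSN
const key = Buffer.from(process.env.MASTER_KEY_HEX ?? "", "hex");      // 32-byte AES key (assumption)

// Assumed table: CREATE TABLE vault (username STRING PRIMARY KEY, iv BYTES, tag BYTES, secret BYTES);
export async function storePassword(username: string, password: string) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const secret = Buffer.concat([cipher.update(password, "utf8"), cipher.final()]);
  await pool.query(
    "UPSERT INTO vault (username, iv, tag, secret) VALUES ($1, $2, $3, $4)",
    [username, iv, cipher.getAuthTag(), secret],
  );
}

export async function fetchPassword(username: string): Promise<string | null> {
  const { rows } = await pool.query("SELECT iv, tag, secret FROM vault WHERE username = $1", [username]);
  if (rows.length === 0) return null;
  const decipher = createDecipheriv("aes-256-gcm", key, rows[0].iv);
  decipher.setAuthTag(rows[0].tag);
  return Buffer.concat([decipher.update(rows[0].secret), decipher.final()]).toString("utf8");
}
```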
## Inspiration

Recently, security has come to the forefront of media with the events surrounding Equifax. We took that fear and distrust and decided to make something to secure and protect data such that only those who should have access to it actually do.

## What it does

Our product encrypts QR codes such that, if scanned by someone who is not authorized to see them, they present an incomprehensible amalgamation of symbols. However, if scanned by someone with proper authority, they reveal the encrypted message inside.

## How we built it

This was built using Cloud Functions and Firebase as our back end and a React Native front end. The encryption algorithm was RSA and the QR scanning was open sourced.

## Challenges we ran into

One major challenge we ran into was writing the back-end cloud functions. Despite how easy and intuitive Google has tried to make it, it still took a lot of man-hours of effort to get it operating the way we wanted it to. Additionally, making React Native compile and run on our computers was a huge challenge, as every step of the way it seemed to want to fight us.

## Accomplishments that we're proud of

We're really proud of introducing encryption and security into this previously untapped market. Nobody, to our knowledge, has tried to encrypt QR codes before, and being able to segment the data in this way is sure to change the way we look at QR.

## What we learned

We learned a lot about Firebase. Before this hackathon, only one of us had any experience with Firebase, and even that was minimal; however, by the end of this hackathon, all the members had some experience with Firebase and appreciate it a lot more for the technology that it is. A similar story can be said about React Native, as that was another piece of technology that only a couple of us really knew how to use. Getting both of these technologies off the ground and making them work together, while not a gargantuan task, was certainly worthy of a project in and of itself, let alone rolling cryptography into the mix.

## What's next for SeQR Scanner and Generator

Next, if this gets some traction, is to try and sell this product on the marketplace. Particularly for corporations with, say, QR codes used for labelling boxes in a warehouse, such a technology would be really useful to prevent people from gaining unnecessary and possibly debilitating information.
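A minimal version of the encrypt-then-encode flow looks like this in TypeScript, using Node's built-in `crypto` for RSA and the widely used `qrcode` package to render the ciphertext. This is an illustrative sketch rather than SeQR's actual code; in practice RSA is usually paired with a symmetric key for payloads longer than a couple hundred bytes.

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";
import QRCode from "qrcode";

// Key pair for the authorized reader (in the real app this lives with the user, not in code).
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

async function makeSecretQr(message: string): Promise<string> {
  // Encrypt with the reader's public key, then base64-encode the ciphertext into a QR code.
  const ciphertext = publicEncrypt(publicKey, Buffer.from(message, "utf8"));
  return QRCode.toDataURL(ciphertext.toString("base64")); // data: URL of the QR image
}

function readSecretQr(scannedText: string): string {
  // An unauthorized scanner only ever sees the base64 gibberish in scannedText.
  return privateDecrypt(privateKey, Buffer.from(scannedText, "base64")).toString("utf8");
}

makeSecretQr("meet at dock 7 at 9pm").then((dataUrl) => console.log(dataUrl.slice(0, 40) + "..."));
```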
# 💻 GitFlow 💻 ## Inspiration As a project lead for ACM responsible for hosting various projects, each project (repo) is configured with continuous integration through Github Actions. These workflows are often copy pasted across multiple repositories. Adding, updating or deleting these workflows becomes cumbersome since there is no concept of "bulk updating" at an organization level. Instead, developers must manually configure them on a per repo basis introducing risk for inconsistencies and performance bottlenecks. Many companies, projects, teams often do not invest in continuous integration / continuous development resources due to the initial friction in getting started. Scaling CI/CD can be an intensive process, but GitFlow aims to provide a foot in the door by simplifying the process with LLM generated workflows and Github automation to plug the generated workflows directly into their repos. Hopefully, resulting in more stable development, happier engineers, and less technical debt. ## What it does Creates a single platform, from which developers can access ALL organizational workflows. They can view which repositories have them installed and can optionally install them as needs arise. Developers can also "bulk delete" workflows and quickly install new workflows with a click of a button. It would then create pull requests on a specified Github Repository with the generated workflow, reducing the required intervention. Using TogetherAI and Llama 2, Github Action Workflows are generated with an LLM to open the possibilities. Developers can also share their workflows with special share links, which provide viewers with the source code and meta data, which can be useful for releasing workflow configurations to the public while protecting the primary source code. ## How we built it Built with Next.js and TailwindCSS and powered by Bun, Convex, Together AI, and the Github API. Next.js and TailwindCSS provide the React framework to develop the frontend and backend. Convex provides a realtime datastore solution used to store the various workflows and their installations. Together AI is utilized to create the workflows from an LLM, although it may not have the best outcomes, developers can still tweak the configurations before finalizing a workflow. ## Challenges we ran into Connecting to the Github API was the most challenging part as there are multiple methods of connecting either PAT (Personal Access Tokens), Github Actions, GitHub CLI, or a GitHub App. Ultimately, the GitHub App was chosen as it would work with multiple organizations looking to benefit from GitFlow. Attempting to run SHELL/BASH commands on a Node.js environment was challenging as there were special packages and considerations. This would make accessing the repository difficult, since the GitHub App would create pull requests. ## Accomplishments that we're proud of Explored Shadcn, a TailwindCSS based component library, which drastically accelerated the development process. Although verbose it provided the necessary functionalities. Understanding the Github API, having access to virtually every Git/Github operation is daunting, but also rewarding as GitFlow attempts to automate as much of the process as possible. Leaving the developers to focus their energy on more complicated problems. ## What we learned Sometimes the best way to learn was to try 3 different things and go with the best one. Attempting to get a concrete Youtube video or an article was impossible since each one did not fit the needs or was outdated. 
Playing around with the API gave me deeper insight into how APIs are constructed, since GitHub provides both a GraphQL and a REST API.

## What's next for GitFlow

Fleshing out the current functionality would be the highest priority before introducing new features. There are surely ways to optimize/improve code written by a sleep-deprived developer at 4 AM 😅. The next biggest milestone for GitFlow will be to integrate webhooks. This would allow push-based notifications about particular GitHub events, such as when a new repository is created or a pull request is deleted.
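The "plug the generated workflow directly into the repo" step can be sketched with Octokit: create a branch, commit the workflow YAML under `.github/workflows/`, and open a pull request. This is a hedged illustration of the general GitHub API calls, not GitFlow's exact implementation; in the real app the token would come from the GitHub App installation, and the default branch is assumed to be `main`.

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN }); // installation token in the real app

async function installWorkflow(owner: string, repo: string, yaml: string) {
  // Branch off the default branch's current HEAD.
  const { data: main } = await octokit.rest.git.getRef({ owner, repo, ref: "heads/main" });
  await octokit.rest.git.createRef({
    owner, repo, ref: "refs/heads/gitflow/add-ci", sha: main.object.sha,
  });

  // Commit the generated workflow file onto the new branch.
  await octokit.rest.repos.createOrUpdateFileContents({
    owner, repo,
    path: ".github/workflows/gitflow-ci.yml",
    message: "chore: add CI workflow via GitFlow",
    content: Buffer.from(yaml).toString("base64"),
    branch: "gitflow/add-ci",
  });

  // Open the pull request for the maintainers to review.
  await octokit.rest.pulls.create({
    owner, repo, title: "Add CI workflow", head: "gitflow/add-ci", base: "main",
  });
}
```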
## Inspiration Genes are the code of life, a sequencing that determines who you are, what you look like, what you do and how you behave. Sequencing is the process of determining the order of bases in an organisms genome. Knowing one's genetic sequence can give insight into inherited genetic disorders, one's ancestry, and even one's approximate lifespan. Next-generation sequencing (NGS) is a term for the massive advancements made in genetic sequencing technologies made over the past 20 years. Since the first fully sequenced genome was released in 2000, the price of sequencing has dropped drastically, resulting in a wealth of biotech start-ups looking to commercialize this newfound scientific power. Given that the human genome is very large (about 3 GB for an individual), the combination of computational tools and biology represent a powerful duo for medical and scientific applications. The field of bioinformatics, as it is known, represents a growth area for life sciences that will only increase in years to come. ## What it does Reactive Genetics is a web portal. Individuals who have either paid to have their genes sequenced, or done it themselves (an increasing probability in coming years), can paste in their sequence into the home page of the web portal. It then returns another web page telling them whether they hold a "good" or "bad" gene for one of six common markers of genetic disease. ## How I built it Reactive Genetics uses a flask server that queries the National Center for Biotechnology Information's Basic Local Alignment Search Tool (BLAST) API. "BLASTING" is commonly used in modern biological research to find unknown genes in model organisms. The results are then returned to a React app that tells the user whether they are positive or negative for a certain genetic marker. ## Challenges I ran into The human genome was too large to return reliably or host within the app, so the trivial solution of querying the sequence against the reference genome wasn't possible. We resorted to BLASTing the input sequence and making the return value a boolean about whether the gene is what it "should" be. ## Accomplishments that I'm proud of One team member hopes to enter serious bioinformatics research one day and this is a major first step. Another team member gave a serious shot at learning React, a challenging endeavour given the limited time frame. ## What I learned One team member learned use of the BLAST API. Another team member became familiar with Bootstrap. ## What's next for Reactive genetics The app is currently running both a React development server and a Flask server. Eventually, porting everything over to one language and application would be ideal. More bioinformatics tools are released on a regular basis, so there is potential to use other technologies in the future and/or migrate completely to React.
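The BLAST round-trip at the core of the app can be sketched directly against NCBI's public URL API: submit the sequence with `CMD=Put`, note the returned request ID (RID), then poll with `CMD=Get` until the report is ready. The TypeScript sketch below mirrors that flow in a hedged way; the parameter names follow NCBI's documented URL API, but the response parsing here is simplified.

```typescript
const BLAST_URL = "https://blast.ncbi.nlm.nih.gov/Blast.cgi";

async function blastSequence(sequence: string): Promise<string> {
  // 1. Submit the query sequence against the nucleotide database.
  const put = new URLSearchParams({ CMD: "Put", PROGRAM: "blastn", DATABASE: "nt", QUERY: sequence });
  const submitted = await (await fetch(BLAST_URL, { method: "POST", body: put })).text();
  const rid = submitted.match(/RID = (\S+)/)?.[1];
  if (!rid) throw new Error("No request ID returned by BLAST");

  // 2. Poll until NCBI finishes the search, then fetch a plain-text report.
  for (;;) {
    await new Promise((r) => setTimeout(r, 15_000)); // be polite to the public service
    const report = await (
      await fetch(`${BLAST_URL}?CMD=Get&FORMAT_TYPE=Text&RID=${rid}`)
    ).text();
    if (!report.includes("Status=WAITING")) return report;
  }
}

// blastSequence("ATGGCC...").then((r) => console.log(r.slice(0, 500)));
```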
## Inspiration In an AI course, an assignment was to build a simple chatbot. We took concepts learned in class and worked it into a web application that focuses on QHacks. ## What it does It's an AI that chats with you - answer it's questions or say anything and it'll respond! ## How I built it First we built the application using Javascript/JQuery using a simple textbox and console output. Then we added CSS and "chat bubbles" to make it feel like a regular conversation. ## Challenges I ran into * Figuring out RegEx in Javascript * Getting the response format correct using CSS ## Accomplishments that I'm proud of The more you interact with the chatbot, the more it seems like it could be human. We made our responses conversational, and are proud of the outcome. ## What I learned How to manipulate and then map user input segments to custom responses in a way that seems almost human-like. ## What's next for QHacks Chatbot * Adding more responses * Add response animations or delays
## Inspiration As someone who has always wanted to speak in ASL (American Sign Language), I have always struggled with practicing my gestures, as I, unfortunately, don't know any ASL speakers to try and have a conversation with. Learning ASL is an amazing way to foster an inclusive community, for those who are hearing impaired or deaf. DuoASL is the solution for practicing ASL for those who want to verify their correctness! ## What it does DuoASL is a learning app, where users can sign in to their respective accounts, and learn/practice their ASL gestures through a series of levels. Each level has a *"Learn"* section, with a short video on how to do the gesture (ie 'hello', 'goodbye'), and a *"Practice"* section, where the user can use their camera to record themselves performing the gesture. This recording is sent to the backend server, where it is validated with our Action Recognition neural network to determine if you did the gesture correctly! ## How we built it DuoASL is built up of two separate components; **Frontend** - The Frontend was built using Next.js (React framework), Tailwind and Typescript. It handles the entire UI, as well as video collection during the *"Learn"* Section, which it uploads to the backend **Backend** - The Backend was built using Flask, Python, Jupyter Notebook and TensorFlow. It is run as a Flask server that communicates with the front end and stores the uploaded video. Once a video has been uploaded, the server runs the Jupyter Notebook containing the Action Recognition neural network, which uses OpenCV and Tensorflow to apply the model to the video and determine the most prevalent ASL gesture. It saves this output to an array, which the Flask server reads and responds to the front end. ## Challenges we ran into As this was our first time using a neural network and computer vision, it took a lot of trial and error to determine which actions should be detected using OpenCV, and how the landmarks from the MediaPipe Holistic (which was used to track the hands and face) should be converted into formatted data for the TensorFlow model. We, unfortunately, ran into a very specific and undocumented bug with using Python to run Jupyter Notebooks that import Tensorflow, specifically on M1 Macs. I spent a short amount of time (6 hours :) ) trying to fix it before giving up and switching the system to a different computer. ## Accomplishments that we're proud of We are proud of how quickly we were able to get most components of the project working, especially the frontend Next.js web app and the backend Flask server. The neural network and computer vision setup was pretty quickly finished too (excluding the bugs), especially considering how for many of us this was our first time even using machine learning on a project! ## What we learned We learned how to integrate a Next.js web app with a backend Flask server to upload video files through HTTP requests. We also learned how to use OpenCV and MediaPipe Holistic to track a person's face, hands, and pose through a camera feed. Finally, we learned how to collect videos and convert them into data to train and apply an Action Detection network built using TensorFlow ## What's next for DuoASL We would like to: * Integrate video feedback, that provides detailed steps on how to improve (using an LLM?) * Add more words to our model! * Create a practice section that lets you form sentences! * Integrate full mobile support with a PWA!
## Inspiration

People who are mute struggle to communicate in their day-to-day lives because most people do not understand American Sign Language. We were inspired by the desire to empower mute individuals by bridging communication gaps and fostering global connections.

## What it does

VoiceLens uses advanced lip-reading technology to transcribe speech and translate it in real time, allowing users to choose their preferred output language for seamless interaction.

## How we built it

We used ReactJS for the frontend and the Symphonic Labs API to silently transcribe and read the lips of the user. We then used Groq's Llama 3 to allow for translation between various languages, and Google's Text-to-Speech API to voice the sentences.

## Challenges

We faced challenges in accurately capturing and interpreting lip movements, as well as ensuring fast and reliable translations across diverse languages.

## Accomplishments that we're proud of

We're proud of achieving high accuracy in lip-reading and successfully providing real-time translations that facilitate meaningful communication.

## What we learned

We learned the importance of collaboration between technology and accessibility, and how innovative solutions can make a real difference in people's lives.

## What's next for VoiceLens

We plan to enhance our language offerings, improve speed and accuracy, and explore partnerships to expand the app's reach and impact globally.
## Inspiration

We wanted to simplify communication between any user and a person who speaks mainly sign language.

## What it does

In one direction, it converts sign language from camera input into text to be displayed to the user. In the other direction, it takes speech from the user and converts it into text to be displayed to the person who mainly speaks sign language.

## How we built it

The entire front end was built using VueJS, with speech recognition done using Chrome's Web Speech API. Two machine learning models were built: the first is a frozen pre-trained model that works with a convolutional neural network, and the second was built using Microsoft's Custom Vision, with images manually taken and fed in.

## Challenges we ran into

Getting a model that works well for detecting sign language.

## Accomplishments that we're proud of

Getting a semi-working model for detecting sign language.

## What we learned

Loads of machine learning knowledge.

## What's next for sli.ai

Supporting displaying to multiple screens at once. Refining the machine learning model to be more accurate. Implementing text-to-speech for sign language that's converted into text via the model.
## Personal Statement It all started when our team member (definitely not Parth), let's call him Marth, had a crush on a girl who was a big fan of guitar music. He decided to impress her by playing her favorite song on the guitar, but there was one problem - Marth had never played the guitar before. Determined to win her over, Marth spent weeks practicing the song, but he just couldn't get the hang of it. He even resorted to using YouTube tutorials, but it was no use. He was about to give up when he had a crazy idea - what if he could make the guitar play the song for him? That's when our team got to work. We spent months developing an attachment that could automatically parse any song from the internet and play it on the guitar. We used innovative arm technology to strum the strings and servos on the headstock to press the chords, ensuring perfect sound every time. Finally, the day arrived for Marth to show off his new invention to the girl of his dreams. He nervously set up the attachment on his guitar and selected her favorite song. As the guitar began to play, the girl was amazed. She couldn't believe how effortlessly Marth was playing the song. Little did she know, he had a secret weapon! Marth's invention not only won over the girl, but it also sparked the idea for our revolutionary product. Now, guitar players of all levels can effortlessly play any song they desire. And it all started with a boy, a crush, and a crazy idea. ## Inspiration Our product, Strum it Up, was inspired by one team member's struggle to impress a girl with his guitar skills. After realizing he couldn't play, he and the team set out to create a solution that would allow anyone to play any song on the guitar with ease. ## What it does Strum it Up is an attachment for the guitar that automatically parses any song from the internet and uses an innovative arm technology to strum the strings and servos on the headstock to help press the chords, ensuring perfect sound every time. ## How we built it We spent hours developing Strum it Up using a combination of hardware and software. We used APIs to parse songs from the internet, custom-built arm technology to strum the strings, and servos on the headstock to press the chords. ## Challenges we ran into One of the biggest challenges we faced was ensuring that the guitar attachment could accurately strum and press the chords on a wide range of guitar models. This was because different models have different actions (action is the height between strings and the fretboard, the more the height, the harder you need to press the string) We also had to ensure that the sound quality was top-notch and that the attachment was easy to use. ## Accomplishments that we're proud of We're incredibly proud of the final product - Strum it Up. It's a game-changer for guitar players of all levels and allows anyone to play any song with ease. We're also proud of the innovative technology we developed, which has the potential to revolutionize the music industry. ## What we learned Throughout the development process, we learned a lot about guitar playing, sound engineering, and hardware development. We also learned the importance of persistence, dedication, and teamwork when it comes to bringing a product to market. ## What's next for Strum it Up We're excited to see where Strum it Up will take us next. We plan to continue improving the attachment, adding new features, and expanding our reach to guitar players all over the world. 
We also hope to explore how our technology can be used in other musical applications.
We were inspired by the daily struggle of social isolation. Our tool shows the emotion of a text message on Facebook. We built it using Javascript, the IBM Watson NLP API, a Python HTTPS server, and jQuery. Accessing the message string was a lot more challenging than initially anticipated; finding the correct API for our needs and updating in real time also posed challenges. We're proud of the fact that we have a fully working final product. We learned how to interface JavaScript with a Python backend and how to manually scrape a templated HTML document for specific keywords in specific locations. Next, we'd like to incorporate the ability to display alternative messages after a user types their initial response.
## Inspiration

We explored IBM Watson and realized its potential, with features that let people build anything they want using its cloud skills. We all read, and we always want to read books and articles that suit our taste. We made this easier with our web app: just upload a PDF file and get detailed entities, keywords, concepts, and emotions visualized in our dashboard.

## What it does

Our web app analyzes the content of articles using IBM Watson NLU and displays entities, keywords, concepts, and emotions graphically.

## How we built it

Our backend is developed using Spring Boot and Java, while the front end is designed using Bootstrap and HTML. We used d3.js for the graphical representation of the data. The content of the article is read using the Apache Tika framework.

## Challenges we ran into

Completing a project within 24 hours was a big challenge. We also struggled with connecting the front end and backend. Fortunately, we found a template and leveraged it to develop our project.

## Accomplishments that we're proud of

We are proud to say that we worked as a team aiming for a specific prize and were able to finish the project with pretty much all the features we wanted.

## What we learned

We learned the potential of IBM Watson NLU and other IBM Cloud technologies. We also learned different technologies, like d3.js and Spring Boot, that we were not familiar with.

## What's next for Know before you read

We want this app to be accessible to more people, and we are planning to deploy it after finishing up the UI.
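The core NLU call is straightforward; here is a hedged sketch using IBM's Node.js SDK (`ibm-watson`) rather than the team's Java/Spring Boot backend, requesting the same four feature groups the dashboard visualizes. The service URL, API key, and feature limits are placeholders.

```typescript
import NaturalLanguageUnderstandingV1 from "ibm-watson/natural-language-understanding/v1";
import { IamAuthenticator } from "ibm-watson/auth";

const nlu = new NaturalLanguageUnderstandingV1({
  version: "2022-04-07",
  authenticator: new IamAuthenticator({ apikey: process.env.WATSON_APIKEY ?? "" }),
  serviceUrl: "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com",
});

async function analyzeArticle(text: string) {
  // Ask Watson for the same feature groups the dashboard charts.
  const { result } = await nlu.analyze({
    text,
    features: { entities: { limit: 10 }, keywords: { limit: 10 }, concepts: { limit: 5 }, emotion: {} },
  });
  return result;
}

analyzeArticle("Extracted PDF text goes here...").then((r) => console.log(JSON.stringify(r, null, 2)));
```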
## Inspiration Our team wanted to try learning web development, so we needed a simple but also fun project. One day at lunch, we thought of creating a personality quiz that would determine what kind of chess piece you were. This evolved into making a game where you could move around a chess piece however you like and our program would return what chess piece the moves you made were similar to. --- ## What it does A player must move around a chess piece on a board, and the program will return the chess piece which moves in a similar way. --- ## How we built it We built our project on repl.it, using their HTML, CSS, and JS default project option. The team did the programming all on one computer, because we were all learning together. --- ## Challenges we ran into Our team had never done any kind of JS web dev project before, so we had a lot of trouble learning the languages we were using. In particular, creating an interactive chess board that looked decent was very time consuming. We tried many methods of creating a chess board, including dynamically using JS and a CSS grid. We also had trouble making our web page look good, because we did not know a lot about CSS. --- ## Accomplishments that we're proud of The interactive chess board (TM) is the achievement we are most proud of. At one point, we didn't think moving around a piece on a board would even be possible. However, we somehow managed to pull it off. --- ## What we learned We learned a lot about how HTML, CSS, and JS work together to deliver a complete functioning web page. Hack the North was a great learning experience and now the team is a lot more comfortable using the three languages. --- ## What's next for Chess Piece Personality Quiz Our original idea was to take an image and analyze it to determine what chess piece the image was representing. This might be what is next for the Chess Piece Personality Program, after we've figured out how to analyze an image of course.
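The matching step (deciding which chess piece a set of user moves most resembles) can be done with plain move-pattern checks. Below is a small, self-contained TypeScript sketch of one possible scoring approach, not the team's actual code; pawn captures, castling, and board edges are ignored for simplicity.

```typescript
type Move = { dx: number; dy: number }; // displacement of one user move on the board

// Predicates describing how each piece is allowed to move.
const pieceRules: Record<string, (m: Move) => boolean> = {
  rook:   (m) => m.dx === 0 || m.dy === 0,
  bishop: (m) => Math.abs(m.dx) === Math.abs(m.dy) && m.dx !== 0,
  knight: (m) => (Math.abs(m.dx) === 1 && Math.abs(m.dy) === 2) || (Math.abs(m.dx) === 2 && Math.abs(m.dy) === 1),
  queen:  (m) => m.dx === 0 || m.dy === 0 || Math.abs(m.dx) === Math.abs(m.dy),
  king:   (m) => Math.abs(m.dx) <= 1 && Math.abs(m.dy) <= 1 && (m.dx !== 0 || m.dy !== 0),
  pawn:   (m) => m.dx === 0 && (m.dy === 1 || m.dy === 2),
};

// The piece whose rules explain the largest share of the user's moves wins.
function personality(moves: Move[]): string {
  let best = "pawn";
  let bestScore = -1;
  for (const [piece, allowed] of Object.entries(pieceRules)) {
    const score = moves.filter(allowed).length;
    if (score > bestScore) { best = piece; bestScore = score; }
  }
  return best;
}

console.log(personality([{ dx: 1, dy: 2 }, { dx: -2, dy: 1 }, { dx: 1, dy: 2 }])); // "knight"
```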
## Inspiration

MyWeekend was created to provide users with interesting and consistent itinerary plans when they suffer from a lack of creativity.

## What it does

MyWeekend allows users to generate personalized itineraries based on their chosen location, personal interests, budget, and the size of their group. Using ChatGPT and other helper APIs, it takes the request and provides several activities curated to the user's request. The user is then able to select these potential activities to build their personalized itinerary.

## How we built it

The MERN stack, consisting of MongoDB, Express.js, React.js, and Node.js, offers a seamless full-stack JavaScript environment for efficient web development. React.js enables dynamic user interfaces, while Node.js ensures scalability for real-time applications. Express.js simplifies API development, and MongoDB provides flexible data storage. On top of this, we used the Google Maps API to get geolocation and place data to take full advantage of Google Cloud.

## Challenges we ran into

Our team encountered challenges in the form of server issues with Node.js, and we were still learning how to use React.js, which slowed down our development. Regardless of our shortcomings, we kept finding solutions and pushing forward to prototype our idea.

## Accomplishments that we're proud of

Our team is proud of our ability to create and deliver a full-stack application that utilizes artificial intelligence within its user interactions. We're also proud of our backend work and separate client module, which made it easy to deploy our application on Google Cloud Compute Engine and allowed us to host it on the internet. Given the time constraints and our experience, we were able to create something we were proud of.

## What we learned

Full-stack development requires a lot of time and patience in order to create something consistent and beautiful. Although React.js and Node.js were unfamiliar to some of our members, we learned that these development tools are important for speeding up the development process.

## What's next for MyWeekend

Our team is planning to keep improving the app with every iteration and resolving issues that we may encounter. We hope to finish features that we didn't get time to implement, such as using AI and various other APIs to find the cheapest plane and attraction tickets, or even scheduling international trips.
## Inspiration

What inspired the beginning of the idea was terrible gym music and the thought of automatic music selection based on the tastes of people in the vicinity. Our end goal is to sell a hosting service that plays music that people in a local area would actually want to listen to.

## What it does

The app has two parts. The client side connects to Spotify and allows our app to collect users' tokens, user IDs, emails, and top played songs. These values are placed inside a Mongoose database, and the user ID and top songs are the main values needed. The host side can control the location and the radius they want to cover. This allows the server to be populated with nearby users, whose top songs are added to the host account's playlist. The songs most commonly added to the playlist have a higher chance of being played. This app could be used at parties to avoid arguments over songs, at retail stores to play songs that cater to specific groups, at weddings, or at all kinds of social events, inherently creating an automatic DJ that caters to the tastes of people in an area.

## How we built it

We began by planning and fleshing out the app idea, then split the tasks into four sections: location, front end, Spotify, and database. At this point we decided to use React Native for the mobile app and Node.js for the backend. After getting started, the help of the mentors and the sponsors was crucial: they showed us the many different JS libraries and APIs available to make life easier. Programming a full-stack MERN app was a first for everyone on this team. We all hoped to learn something new and create something cool.

## Challenges we ran into

We ran into plenty of problems. We experienced many syntax errors and plenty of bugs. At the same time, compatibility between the different APIs and libraries had to be maintained, along with the general stress of completing on time. In the end, we are happy with the product that we made.

## Accomplishments that we are proud of

Learning something we were not familiar with and being able to make it this far into our project is a feat we are proud of.

## What we learned

Learning the minutiae of Javascript development was fun. It was because of the mentors' assistance that we were able to resolve problems and develop quickly enough to finish. The versatility of Javascript was surprising: the ways it is able to interact with other tools and the immense catalog of open source projects were staggering. We definitely learned plenty... now we just need a good sleep.

## What's next for SurroundSound

We hope to add more features and see this application reach its full potential. We would make it as autonomous as possible, with seamless location-based switching and database logging. Being able to collect proper user information would be a benefit for businesses. There were features that did not make it into the final product, such as voting for the next song on the client side and the ability for both client and host to see the playlist. The host would have more granular control, such as allowing explicit songs, specifying genres, and anything that is accessible via the Spotify API. The client side can be gamified to keep GPS scanning enabled on their devices, such as collecting points for visiting more areas.
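Populating the host playlist starts with each client's top tracks, which the Spotify Web API exposes at a single endpoint. A hedged TypeScript sketch of that call is below; the access token is assumed to come from the app's existing OAuth flow, and the playlist-merging logic on the host side is only hinted at.

```typescript
interface TopTrack { id: string; name: string; artist: string }

// Fetch the signed-in user's most played tracks via the Spotify Web API.
async function getTopTracks(accessToken: string, limit = 10): Promise<TopTrack[]> {
  const res = await fetch(`https://api.spotify.com/v1/me/top/tracks?limit=${limit}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Spotify API error: ${res.status}`);
  const body = await res.json();
  return body.items.map((t: any) => ({ id: t.id, name: t.name, artist: t.artists[0].name }));
}

// The host server would tally these across all nearby users and weight the playlist
// toward the tracks that appear most often (merging logic omitted here).
```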
## Inspiration As University of Waterloo students who are constantly moving in and out of many locations, as well as constantly changing roommates, there are many times when we discovered friction or difficulty in communicating with each other to get stuff done around the house. ## What it does Our platform allows roommates to quickly schedule and assign chores, as well as provide a messageboard for common things. ## How we built it Our solution is built on ruby-on-rails, meant to be a quick simple solution. ## Challenges we ran into The time constraint made it hard to develop all the features we wanted, so we had to reduce scope on many sections and provide a limited feature-set. ## Accomplishments that we're proud of We thought that we did a great job on the design, delivering a modern and clean look. ## What we learned Prioritize features beforehand, and stick to features that would be useful to as many people as possible. So, instead of overloading features that may not be that useful, we should focus on delivering the core features and make them as easy as possible. ## What's next for LiveTogether Finish the features we set out to accomplish, and finish theming the pages that we did not have time to concentrate on. We will be using LiveTogether with our roommates, and are hoping to get some real use out of it!
losing
## Bringing your music to life, not just to your ears but to your eyes 🎶
## Inspiration 🍐
Composing music through scribbling notes or drag-and-dropping from MuseScore couldn't be more tedious. As pianists ourselves, we know the struggle of trying to bring our impromptu improvisation sessions to life without forgetting what we just played or having to record ourselves and write out the notes one by one.
## What it does 🎹
Introducing PearPiano, a cute little pear that helps you pair the notes to your thoughts. As a musician's best friend, Pear guides pianists through an augmented simulation of a piano where played notes are directly translated into a recording and stored for future use. Pear can read both single notes and chords played on the virtual piano, allowing playback of your music with cascading tiles for full immersion. Seek musical guidance from Pear by asking, "What is the key signature of C-major?" or "Tell me the notes of the E-major diminished 7th chord." To fine-tune your compositions, use "Edit mode," where musicians can rewind the clip and drag-and-drop notes for instant changes.
## How we built it 🔧
Using the Unity game engine and the Oculus Quest, musicians can air-play their music on an augmented piano for real-time music composition. We used OpenAI's Whisper for voice dictation and C# for all game-development scripts. The AR environment is entirely designed and generated using the Unity UI Toolkit, allowing our engineers to realize an immersive yet functional musical corner.
## Challenges we ran into 🏁
* Calibrating and configuring hand tracking on the Oculus Quest
* Reducing positional offset when making contact with the virtual piano keys
* Building the piano in Unity: setting the pitch of the notes and being able to play multiple at once
## Accomplishments that we're proud of 🌟
* Bringing a scaled **AR piano** to life with close-to-perfect functionality
* Working with OpenAI to synthesize text from speech to provide guidance for users
* Designing an interactive and aesthetic UI/UX with cascading tiles upon recording playback
## What we learned 📖
* Designing and implementing our character/piano/interface in 3D
* Emily had 5 cups of coffee in half a day and is somehow alive
## What's next for PearPiano 📈
* VR overlay feature to attach the augmented piano to a real one, enriching each practice or composition session
* A rhythm checker to help an aspiring pianist stay on-beat and in-tune
* A smart chord suggester to streamline harmonization and enhance the composition process
* Depth detection for each note-press to provide feedback on the pianist's musical dynamics
* With the upcoming release of Apple Vision Pro and Meta Quest 3, full-colour AR pass-through will be more accessible than ever; PearPiano will "pair" great with all those headsets!
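Setting the pitch of each virtual key, one of the challenges listed above, comes down to the equal-temperament formula. PearPiano itself is written in C# inside Unity; the snippet below is a small Python sketch of the same arithmetic, with MIDI note numbers assumed as input.

```python
def midi_to_frequency(midi_note: int) -> float:
    """Equal-temperament pitch: A4 (MIDI note 69) is tuned to 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Example: the notes of a C-major triad starting at middle C (MIDI 60, 64, 67)
for note in (60, 64, 67):
    print(note, round(midi_to_frequency(note), 2))  # ~261.63, 329.63, 392.00 Hz
```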
## Our Inspiration
We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gamify the entire learning experience and make it immersive, all while providing users with the resources to dig deeper into concepts.
## What it does
EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy, allowing users to learn in whatever way is most comfortable for them. We believe the immersive environment will make language learning even more memorable and enjoyable.
## How we built it
We built the VisionOS app using the Beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and a concurrent MVVM design architecture. 3D models were converted through Reality Converter into .usdz files for AR modelling. We stored these files in a Google Cloud Storage bucket, with their corresponding metadata in CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.
## Challenges we ran into
Learning to build for the VisionOS was challenging, mainly due to the lack of documentation and libraries available. We faced various problems with 3D modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations of the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!
## Accomplishments that we're proud of
We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets combined, which was really rewarding to see come to life.
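One of the backend microservices described above might look roughly like the sketch below: a Flask endpoint that looks up a .usdz asset's storage path and metadata in CockroachDB (which speaks the Postgres wire protocol, so psycopg2 works). The table name, columns, and connection string are illustrative assumptions, not EduSphere's actual schema.

```python
import psycopg2
from flask import Flask, jsonify

app = Flask(__name__)

def get_conn():
    # Placeholder connection string; CockroachDB is Postgres-wire compatible.
    return psycopg2.connect("postgresql://user:pass@localhost:26257/edusphere")

@app.route("/models/<model_id>")
def model_metadata(model_id):
    """Return the storage path and metadata for one .usdz asset."""
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT name, language, gcs_path FROM models WHERE id = %s", (model_id,)
        )
        row = cur.fetchone()
    if row is None:
        return jsonify({"error": "not found"}), 404
    name, language, gcs_path = row
    return jsonify({"name": name, "language": language, "usdz_url": gcs_path})
```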
# [www.StakeTogether.org](http://www.StakeTogether.org)
## Democratizing Property Investments
StakeTogether.org is a website which connects everyday investors to purchase rental properties together. Currently, investing in real estate requires significant capital and involves taking on large liabilities via loans. This discourages all but the wealthiest investors from purchasing rental properties. **Stake Together allows anyone to easily invest in real estate**.
StakeTogether highlights properties for sale which are the best bang for your buck. Investors then pledge a partial investment on the properties they choose. Once a property has enough pledged, StakeTogether buys and manages the property on behalf of the investors. Rent revenue is paid proportionally back to the investors, with a small percentage withheld for property management fees and a renovation fund.
## Impact
StakeTogether aims to remove barriers to real estate investing. By doing so, we hope to increase the supply of rental properties available in high-margin housing markets, driving down rental costs for consumers. By pushing the rental market towards more affordable options, StakeTogether gives access to both affordable housing and profitable investing.
## Tech
* Python
* Pandas
* AWS EC2 & RDS
* Flask
* Postgres
* React
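The payout model described above (rent split proportionally to each investor's pledge, with a slice withheld for management and renovations) can be illustrated with a short sketch. The fee rates and field names below are assumptions for illustration, not StakeTogether's actual figures.

```python
def distribute_rent(rent, pledges, mgmt_fee=0.08, renovation_fund=0.05):
    """Split one month's rent proportionally to each investor's pledge.

    `pledges` maps investor id -> amount pledged toward the property.
    The fee rates are placeholder assumptions.
    """
    withheld = rent * (mgmt_fee + renovation_fund)
    distributable = rent - withheld
    total_pledged = sum(pledges.values())
    payouts = {
        investor: round(distributable * amount / total_pledged, 2)
        for investor, amount in pledges.items()
    }
    return payouts, round(withheld, 2)

payouts, withheld = distribute_rent(2000, {"alice": 30000, "bob": 10000, "cara": 10000})
# alice holds 60% of the pledges, so she receives $1,044 of the $1,740 distributed;
# bob and cara receive $348 each, and $260 is withheld for fees and renovations.
```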
winning
# Inspiration
The inspiration for Floo came from the need for personalized interview preparation. Many candidates struggle with behavioral interviews and lack the resources to practice effectively. We aimed to create a solution that tailors the interview experience to individual users based on their unique backgrounds and qualifications.
# What it does
Floo is an AI-driven platform that simulates behavioral interviews. It provides users with realistic interview scenarios and gives personalized feedback. By analyzing the user's past experiences and resumes, Floo recommends the best responses, helping users build confidence and improve their interviewing skills.
# How we built it
We developed Floo using a combination of APIs including Hume, OpenAI, and Deepgram. The backend is powered by Flask, which manages user data and interacts with a database storing users' past experiences and resumes. The front end is built with React, creating a seamless and interactive user experience.
# Challenges we ran into
One of the main challenges was ensuring that the AI accurately interpreted user experiences and provided meaningful feedback. We also faced difficulties in designing an intuitive user interface that effectively communicated the AI's recommendations. Integrating the database with the AI model posed additional technical challenges.
# Accomplishments that we're proud of
We are proud to have successfully developed an AI agent to conduct realistic behavioral interviews. We go beyond a simple AI system that would only ask questions and give surface-level advice. Driven by the user's personal experiences derived from past responses and resume details, it curates advice specifically for the user, maximizing its impact on their learning experience.
# What we learned
Through this project, we learned the importance of user-centered design and the value of iterative testing. We gained hands-on experience with speech and text APIs, deepening our understanding of how to leverage AI in practical applications. Collaborating as a team taught us the significance of communication and adaptability in problem-solving.
# What's next for Floo
Moving forward, we plan to enhance Floo's capabilities by incorporating more advanced AI algorithms for better feedback and recommendations. We aim to expand our database to include a wider range of industries and roles, providing users with a more comprehensive practice experience. Additionally, we want to explore partnerships with career services and educational institutions to reach a broader audience.
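The "personalized feedback" step can be pictured as assembling the candidate's stored context into the prompt sent to the language model. This is a hedged sketch of that assembly only; the field names and the final feedback call are placeholders, not Floo's actual backend.

```python
def build_feedback_prompt(question, answer, profile):
    """Combine one interview exchange with the user's stored background.

    `profile` is assumed to hold resume highlights and past responses
    pulled from the database described above.
    """
    highlights = "; ".join(profile.get("resume_highlights", []))
    history = "; ".join(profile.get("past_experiences", []))
    return (
        "You are an interview coach. Tailor feedback to this candidate.\n"
        f"Resume highlights: {highlights}\n"
        f"Relevant past experiences: {history}\n"
        f"Question asked: {question}\n"
        f"Candidate's answer: {answer}\n"
        "Give specific, constructive feedback and a stronger sample answer."
    )

prompt = build_feedback_prompt(
    "Tell me about a time you handled conflict.",
    "I once disagreed with a teammate about project scope...",
    {
        "resume_highlights": ["led a 4-person project"],
        "past_experiences": ["resolved a scheduling dispute"],
    },
)
# `prompt` would then be sent to the chosen LLM API to generate the feedback.
```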
## Inspiration The inspiration behind ReflectAI stems from the growing prevalence of virtual behavioral interviews in the modern hiring process. We recognized that job seekers face a significant challenge in mastering these interviews, which require not only the right words but also the right tone and body language. We wanted to empower job seekers by providing them with a platform to practice, improve, and receive personalized feedback on their performance. Our goal is to level the playing field and increase the chances of success for job seekers everywhere. ## What it does ReflectAI combines language analysis, prosody analysis, and facial expression analysis to offer comprehensive feedback on interview responses. Key features include: Practice Environment: Users can simulate real interview scenarios, record their responses to common behavioral questions, and receive feedback on their performance. Multi-Modal Analysis: Our platform assesses not just what you say but how you say it and what your body language conveys. Personalized Feedback: ReflectAI provides detailed feedback and actionable recommendations to help users improve their communication skills. ![Mockup](https://cdn.discordapp.com/attachments/1166663245285830666/1168069411165454362/First_interview_question.png?ex=65506c69&is=653df769&hm=8754d04503253d35c4d30e968a00d2dd5761010f707aefebc744414af490a092&) ## How we built it We built a React frontend connected to Firebase for storing intermediate artifacts and a backend that utilizes Hume for facial expression, prosody, and language emotion detection alongside OpenAI for feedback generation. ## Challenges we ran into The main challenges were building a React frontend from scratch, and understanding all facets of the Hume API and how it would work within our application. ## Accomplishments that we're proud of We built a full-stack app from scratch that is capable of processing large artifacts (videos) in a performant manner. ## What we learned We learned how to use tools like Figma and the Hume API, and how to effectively set expectations so that we weren't overly scrunched for time. ## What's next for ReflectAI Our journey with ReflectAI is just beginning. We have ambitious plans for the future, including: * Expanding our library of interview questions and scenarios to cover a wide range of industries and job types. * Enhancing our AI models to provide even more detailed and personalized feedback. * Exploring partnerships with educational institutions and employers to integrate ReflectAI into training and hiring processes. * Continuously improving our platform based on user feedback and evolving technology to remain at the forefront of interview preparation.
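One way to turn per-frame facial-expression scores into a single summary for feedback generation is a simple average per emotion. This is a hedged sketch of that aggregation step only; the shape of the response and the emotion names here are assumptions, not the actual Hume API schema.

```python
from collections import defaultdict

def summarize_emotions(frames):
    """Average each emotion's score across all analysed video frames.

    `frames` is assumed to be a list of {emotion_name: score} dicts,
    one per sampled frame of the interview recording.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for frame in frames:
        for emotion, score in frame.items():
            totals[emotion] += score
            counts[emotion] += 1
    return {emotion: totals[emotion] / counts[emotion] for emotion in totals}

frames = [
    {"calmness": 0.72, "anxiety": 0.18, "joy": 0.35},
    {"calmness": 0.64, "anxiety": 0.25, "joy": 0.40},
]
print(summarize_emotions(frames))  # {'calmness': 0.68, 'anxiety': 0.215, 'joy': 0.375}
```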
Demo using react native app version: [Click here](https://youtu.be/vS-FjCjrDqI)
![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633849416/title_ivhrmy.jpg)
![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633754807/background_uc94qv.png)
---
## Problem Statement: Racial discrimination in the hiring process
> Minority applicants are “whitening” their resumes by removing references to their race in hopes of increasing their job opportunities, and research shows the strategy is paying off.

![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633844080/Frame_109_zi0enm.png)
![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633849734/1-f_sn7dxn.jpg)
**eraCe** is a mobile app that allows job seekers to apply for jobs anonymously while placing the emphasis on their talent. To achieve this, we encrypt all applicant data related to their backgrounds (race, age, gender, religion, etc.) to help recruiters make fair and unbiased hiring decisions.
### 1. Changing the hiring system, standardizing the process
![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633851963/feature1_ytcxls.jpg)
One of the main problems any organization faces in its hiring process is unconscious bias. Hiring managers often succumb to beliefs and perceptions that influence their hiring decisions, and the scary thing is that they are often unaware of their own biases. The reason is that these are mostly unconscious prejudices that arise from perceptions, stereotypical beliefs, and social conditioning.
We propose an **encrypted CV** to support this blind recruitment process: an AI algorithm automatically encrypts the job seeker's name, address, picture, gender, and any other such descriptors. The applicant no longer has to whiten their resume themselves (or hide that they did), while companies that value diversity can shift their focus from racial backgrounds to skills. This is the first step, especially in the early hiring stages, which are crucial, and it takes effort from both parties to overcome this issue. (A minimal sketch of this reversible-token idea appears after the references at the end of this write-up.)
We made it work! [Demo Video](https://www.youtube.com/watch?v=gvgwjOe2HYE)
### 2. One-on-one mentoring system with transcription
![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633877884/mentorship_xdx3xv.jpg)
We made it work! [Demo Video](https://youtu.be/6-_Y7wRPV4M)
## Design
Our app was designed using the design thinking method in Figma. [Click here for HD version](https://www.docdroid.net/2MlIqSp/user-journey-pdf)
![alt text](https://res.cloudinary.com/dm5iiylmn/image/upload/v1633853597/Screen_Shot_2021-10-10_at_15.13.05_azk47z.png)
## How we built it
The frontend was built in React Native, based on the designs we created in Figma. The backend was written in Python, hosted on Google Cloud, and serves to analyse/whiten a given resume, using machine learning to determine where the identifying features are and updating them algorithmically, using a two-way hashing system. Additionally, one of the main use cases of our app is a call between a prospective job seeker and a mentor at a company. We accomplished this by using Twilio's API for the call, and to allow the job seeker to review what was said we use Assembly AI's API to transcribe it.
## Challenges we ran into
In addition to the technical challenges, we ran into human challenges.
We collectively live in three separate time zones, which made scheduling calls that suited everyone harder, and one of our members also got sick in the lead-up to the hackathon.
## Accomplishments
We developed a mobile application that has the potential to make a difference for those who may be unknowingly, and obviously unjustly, discriminated against.
## What we learned
This was eye-opening even for us, knowing that racism still occurs even when we don't notice it. In fact, according to the study done by researchers at the University of Toronto and Stanford, "organizational diversity statements are not actually associated with reduced discrimination against unwhitened resumes", so this bias is prevalent among both 'diversity neutral' and 'diversity positive' institutions.
## What's next
Along with generally beautifying the app, we plan on adding company verification and more of a company interface, allowing companies to interact with their mentors and the related statistics on a dashboard. Voice modulation during phone-call interviews might be the next potential feature in our app.
## Research
HBS Working Knowledge: **Minorities Who 'Whiten' Job Resumes Get More Interviews**, July 2013, <https://hbswk.hbs.edu/item/minorities-who-whiten-job-resumes-get-more-interviews>
University of Toronto and Stanford: **Whitened Resumes: Race and Self-Presentation in the Labor Market**, January 2016, <http://www-2.rotman.utoronto.ca/facbios/file/Whitening%20MS%20R2%20Accepted.pdf>
Workable: **Unconscious bias in recruitment**, <https://resources.workable.com/stories-and-insights/unconscious-bias-in-recruitment>
Inside Google: **You don’t know what you don’t know: How our unconscious minds undermine the workplace**, Sep 2014, <https://blog.google/inside-google/life-at-google/you-dont-know-what-you-dont-know-how/>
ABC News: **Top 20 'Whitest' and 'Blackest' Names**, May 2015, <https://abcnews.go.com/2020/top-20-whitest-blackest-names/story?id=2470131>
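As promised above, here is a minimal sketch of the reversible "whitening" idea: identifying fields are swapped for opaque tokens and the mapping is kept server-side so the original values can be restored later. A plain hash would not be reversible, so a token map is used instead; the field names and token format are illustrative assumptions, not eraCe's implementation.

```python
import secrets

def whiten_resume(resume: dict, sensitive_fields=("name", "address", "photo", "gender")):
    """Replace identifying fields with opaque tokens; return the token map for later reversal."""
    token_map = {}
    whitened = dict(resume)
    for field in sensitive_fields:
        if field in whitened:
            token = f"[{field.upper()}-{secrets.token_hex(4)}]"
            token_map[token] = whitened[field]  # kept server-side, never shown to recruiters
            whitened[field] = token
    return whitened, token_map

def reveal(whitened: dict, token_map: dict) -> dict:
    """Restore the original values once the candidate consents (e.g. at offer stage)."""
    return {key: token_map.get(value, value) for key, value in whitened.items()}

resume = {"name": "Jane Doe", "skills": "Python, React Native", "gender": "F"}
anonymous, mapping = whiten_resume(resume)
assert reveal(anonymous, mapping) == resume
```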
losing
## Inspiration
The bitalino system is a great new advance in affordable, do-it-yourself biosignals technology. Using this technology, we want to make an application that provides an educational tool for exploring how the human body works.
## What it does
Currently, it uses the ServerBIT architecture to get ECG signals from a connected bitalino and draw them in an HTML file in real time using JavaScript. In this hack, the smoothie.js library was used instead of the jQuery flot plugin to provide smoother plotting.
## How I built it
I built the Lubdub Club using Hugo Silva's ServerBIT architecture. From that, the ECG data was drawn using smoothie.js. A lot of work was put in to make a good and accurate ECG display, which is why smoothie was used instead of flot. Other work involved adjusting for the correct ECG units and optimizing the scroll speed and scale of the plot.
## Challenges I ran into
The biggest challenge we ran into was getting the Python API to work. There are a lot more dependencies for it than are listed in the documentation, but that may be because I was using a regular Python installation on Windows. I installed WinPython to make sure most of the math libraries (pylab, numpy) were installed, and installed everything else afterwards. In addition, there is a problem with the server where the TCP listener will not close properly, which caused a lot of trouble in testing. Apart from that, getting a good ECG signal was very challenging, as testing was done using electrode leads on the hands, which admittedly gives a signal quite susceptible to interference (both from surrounding electronics and from movement). Although we never got an ECG signal as clean as the ones in the demos online, we did end up with a signal that was definitely an ECG and had recognizable PQRS phases.
## Accomplishments that I'm proud of
I am proud that we were able to get the Python API working with the bitalino, as it seems that many others at Hack Western 2 were unable to. In addition, I am happy with the way the smoothie.js plot came out, and I think it is a great improvement over the original flot plot. Although we did not have time to set up a demo site, I am quite proud of the name our team came up with (lubdub.club).
## What I learned
I learned a lot about JavaScript, jQuery, Python, and getting ECG signals from less-than-optimal electrode configurations.
## What's next for Lubdub Club
What's next is to implement some form of wave-signal analysis to clean up the ECG waveform, and to perform calculations to find values like heart rate. Also, I would like to make the Python API / ServerBIT easier to use (maybe rewrite it from scratch, or at least collect all the dependencies in an installer). Other things include adding more features to the HTML site, like changing colour to match heart rate, music, and more educational content. I would like to set up lubdub.club, and maybe find a way to have the data from the bitalino sent to the cloud and then displayed on the webpage.
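The heart-rate calculation mentioned under "What's next" is a natural first piece of wave analysis: detect R-peaks in the sampled ECG and convert the average R-R interval to beats per minute. This is a simplified, hedged sketch with a naive threshold detector, not the ServerBIT code.

```python
def detect_r_peaks(samples, threshold, min_gap):
    """Indices where the signal crosses `threshold` upward, at least `min_gap` samples apart."""
    peaks, last = [], -min_gap
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i] and i - last >= min_gap:
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(samples, sampling_rate_hz, threshold=0.6, min_gap=None):
    """Average beats per minute from the mean R-R interval."""
    if min_gap is None:
        min_gap = int(0.3 * sampling_rate_hz)  # ~300 ms refractory period
    peaks = detect_r_peaks(samples, threshold, min_gap)
    if len(peaks) < 2:
        return None
    rr_intervals = [(b - a) / sampling_rate_hz for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr_intervals) / len(rr_intervals))
```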
## Inspiration
Accessibility of open-source software: <https://github.com/alicevision/meshroom>
## What it does
Makes open-source photogrammetry software more usable. We wanted to create a website that cranks out 3D models from 2D images that can be obtained and downloaded, while having GCP do the heavy lifting.
## How we built it
* A program that automates creating directories with UIPath, a staggering number of times... infinite directories, infinite fun
* Golang client and server web sockets
* GCP VMs and the Google Cloud SDK
* TeamViewer for remote access to a powerful computer, to simulate a real-world setup
* Validation through Python scripts
* Websites using HTML and CSS
* A beautiful MLH photo setup
## Challenges we ran into
* GCP didn't have CUDA for Windows VMs
* The open-source software we used didn't have a Linux download and needed CUDA cores on Windows machines
* All the things we tried were initially jank, continued to be jank, or do not yet exist
* Name a challenge we didn't face
* People cutting out our power, using half a laptop
* Little to no documentation, plus outdated and unfixed documentation
* Lighting, and determining the perfect angles for the pictures
## Accomplishments that we're proud of
* Implemented Golang web sockets
* Created dope 3D models from pictures
* Killer website
* Got comfortable-ish with UIPath and GCP
## What we learned
UIPath basics, advanced CMD tricks, GCP, and how to software.
## What's next for YeeHacks
Sleep. Having models added into Unity and interacted with through the Oculus.
## Inspiration More creators are coming online to create entertaining content for fans across the globe. On platforms like Twitch and YouTube, creators have amassed billions of dollars in revenue thanks to loyal fans who return to be part of the experiences they create. Most of these experiences feel transactional, however: Twitch creators mostly generate revenue from donations, subscriptions, and currency like "bits," where Twitch often takes a hefty 50% of the revenue from the transaction. Creators need something new in their toolkit. Fans want to feel like they're part of something. ## Purpose Moments enables creators to instantly turn on livestreams that can be captured as NFTs for live fans at any moment, powered by livepeer's decentralized video infrastructure network. > > "That's a moment." > > > During a stream, there often comes a time when fans want to save a "clip" and share it on social media for others to see. When such a moment happens, the creator can press a button and all fans will receive a non-fungible token in their wallet as proof that they were there for it, stamped with their viewer number during the stream. Fans can rewatch video clips of their saved moments in their Inventory page. ## Description Moments is a decentralized streaming service that allows streamers to save and share their greatest moments with their fans as NFTs. Using Livepeer's decentralized streaming platform, anyone can become a creator. After fans connect their wallet to watch streams, creators can mass send their viewers tokens of appreciation in the form of NFTs (a short highlight clip from the stream, a unique badge etc.) Viewers can then build their collection of NFTs through their inventory. Many streamers and content creators have short viral moments that get shared amongst their fanbase. With Moments, a bond is formed with the issuance of exclusive NFTs to the viewers that supported creators at their milestones. An integrated chat offers many emotes for viewers to interact with as well.
partial
# Food Cloud
Curbing food waste for a sustainable future
## Inspiration
The awareness vertical: every year, food companies throw away an excessive amount of food. According to the Food and Agriculture Organization of the United Nations, “roughly one-third of the food produced in the world for human consumption every year — approximately 1.3 billion tonnes — gets lost or wasted.”
## What it does
With this in mind, we brainstormed a way for food companies and restaurants to make use of the extra food they produce. Given the two-day time constraint and our team's skills, we decided to make a web application that lets an ordinary consumer buy excess food within a desired location radius. A business signs up and logs in to post food; a consumer can then buy discounted food from the post.
## How we built it
For our project, we built our web pages using HTML, CSS, and JavaScript. Our choice of database was Firebase, and we used it with Flask as our framework. Our backend was created using Flask and Python.
## Challenges we ran into
A challenge was using Flask with Firebase. The documentation is skewed toward a pure Python solution with the Firebase Admin SDK; time should have been spent on Pyrebase, a Python wrapper for the Firebase API. Another challenge was developing against Firebase's real-time database; we opted for Firestore instead.
## Accomplishments that we're proud of
We are proud of the idea and of the application we made. We coded it completely from scratch and productively utilized agile methodologies. The idea is unique, and we hope to spread awareness of reducing food waste with this application.
## What we learned
Everyone learned different things throughout this project. However, after discussing and solving our problems, we have all gained a better understanding of the full-stack environment. We have also all learned how to use Firebase for web applications alongside Flask.
## What's next for FoodCloud
The next step for FoodCloud would be a better design identity. Projects should have consistency in their design, as noted by Scott Forstall, the creator of iOS. Another must-have feature is a better schema for the user/business Firebase database.
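The Flask-plus-Firestore combination that caused trouble can be sketched with the firebase-admin client. Collection and field names are assumptions, and the service-account path is a placeholder; this is an illustration of the approach, not FoodCloud's code.

```python
import firebase_admin
from firebase_admin import credentials, firestore
from flask import Flask, jsonify, request

cred = credentials.Certificate("serviceAccountKey.json")  # placeholder path
firebase_admin.initialize_app(cred)
db = firestore.client()

app = Flask(__name__)

@app.route("/posts", methods=["POST"])
def create_post():
    """A business posts surplus food; consumers later query it by location radius."""
    data = request.get_json()
    doc = {
        "business_id": data["business_id"],
        "item": data["item"],
        "discounted_price": data["discounted_price"],
        "lat": data["lat"],
        "lon": data["lon"],
    }
    _, ref = db.collection("food_posts").add(doc)  # add() returns (timestamp, DocumentReference)
    return jsonify({"id": ref.id}), 201
```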
## Inspiration
Hosts of social events and parties create their own music playlists and ultimately control the overall mood of the event. By giving attendees a platform to express their song preferences, more people end up feeling satisfied and content with the event.
## What it does
Shuffle allows hosts to share their event/party playlists with attendees using a web interface. Attendees have the ability to view and vote for their favorite tracks using a cross-platform mobile application. The tracks in the playlist are shuffled in real time based on user votes.
## How we built it
We used React for the web application (host) and react-native for the mobile application (client). Both applications access a central database made using MongoDB Stitch. We also used socket.io, deployed on Heroku, to provide real-time updates.
## Challenges we ran into
Integrating MongoDB Stitch and socket.io in order to show real-time updates across multiple platforms.
## Accomplishments that we're proud of
We're proud of the fact that we were able to create a cross-platform web and mobile application. Only a valid internet connection is required to access our platform.
## What we learned
All team members were able to learn and experiment with a new tool or technology.
## What's next for Shuffle
Integration with various music streaming services such as Spotify or Apple Music, and the ability to filter playlists by mood using machine learning.
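The core behaviour, tracks "shuffled in real time based on user votes", can be expressed as a weighted shuffle: tracks with more votes are more likely to surface near the top, but nothing is ever excluded. A minimal Python sketch with an assumed vote structure (Shuffle itself runs on MongoDB Stitch and socket.io):

```python
import random

def vote_weighted_shuffle(tracks, votes):
    """Order tracks randomly, biased toward higher vote counts.

    `votes` maps track id -> vote count; every track keeps a base weight of 1
    so unvoted songs still get played occasionally.
    """
    remaining = list(tracks)
    ordered = []
    while remaining:
        weights = [1 + votes.get(track, 0) for track in remaining]
        pick = random.choices(remaining, weights=weights, k=1)[0]
        ordered.append(pick)
        remaining.remove(pick)
    return ordered

playlist = ["song_a", "song_b", "song_c", "song_d"]
print(vote_weighted_shuffle(playlist, {"song_b": 5, "song_d": 2}))
```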
## Inspiration
We wanted to explore more of what GCP has to offer in a practical sense, while trying to save money as poor students.
## What it does
The app tracks your location and, using Google Maps' API, calculates a geofence; it notifies you of the restaurants you are within the vicinity of and lets you load coupons that are valid there.
## How we built it
React-native; Google Maps for pulling the location; Python for the web scraper (*<https://www.retailmenot.ca/>*); Node.js for the backend; MongoDB to store authentication, location, and coupons.
## Challenges we ran into
React-Native was fairly new to us, as were linking a Python script to a Node backend and connecting Node.js to react-native.
## What we learned
New exposure to APIs, and experience linking tools together.
## What's next for Scrappy.io
Improvements to the web scraper, and potentially expanding beyond restaurants.
losing
# Stegano
## End-to-end steganalysis and steganography tool
#### Demo at <https://stanleyzheng.tech>
Please see the video before reading the documentation, as the video is more concise: <https://youtu.be/47eLlklIG-Q>
A technicality: GitHub user RonanAlmeida ghosted our group after committing react template code, which has been removed in its entirety.
### What is steganalysis and steganography?
Steganography is the practice of concealing a message within a file, usually an image. It can be done in one of three ways: JMiPOD, UNIWARD, or UERD. These are beyond the scope of this hackathon, but each algorithm requires its own unique brute-force tools and methods, contributing to the massive compute required to crack it.
Steganalysis is the opposite of steganography: detecting or breaking/decoding steganographs. Think of it like cryptanalysis and cryptography.
### Inspiration
We read an article about the use of steganography in Al Qaeda, notably by Osama Bin Laden[1]. The concept was interesting. The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny; plainly visible encrypted messages, no matter how unbreakable they are, arouse interest. Another curious case was its use by Russian spies, who communicated in plain sight through images uploaded to public websites hiding steganographed messages.[2] Finally, we were utterly shocked by how difficult these steganographs were to decode: two images sent to the FBI, claimed to hold a plan to bomb 11 airliners, took a year to decode.[3]
We thought to each other, "If this is such a widespread and powerful technique, why are there so few modern solutions?" We were therefore inspired to build this project to streamline steganalysis with a deployed model, and also to educate others on steganography and steganalysis, two underappreciated areas.
### What it does
Our app is split into three parts. First, we provide users a way to encode their images with a steganography technique called least significant bit, or LSB (a toy sketch of LSB encoding appears after this project's references). It's a quick and simple way to encode a message into an image. This is followed by our decoder, which decodes PNGs downloaded from our LSB steganograph encoder. In this image, our decoder can be seen decoding a previously steganographed image:
![](https://i.imgur.com/dge0fDw.png)
Finally, we have a model (learn more about the model itself in the section below) which classifies an image into four categories: unsteganographed, MiPOD, UNIWARD, or UERD. You can input an image into the encoder, save it, then input the encoded and original images into the model, and they will be distinguished from each other. In this image, we are running inference on the image we decoded earlier, and it is correctly identified as steganographed.
![](https://i.imgur.com/oa0N8cc.png)
### How I built it (very technical machine learning)
We used data from a previous Kaggle competition, [ALASKA2 Image Steganalysis](https://www.kaggle.com/c/alaska2-image-steganalysis). The dataset's biggest problem is its massive size: 305,000 512x512 images, or about 30 GB. I first tried training on it with my local GPU alone, but at over 40 hours for an Efficientnet b3 model, it wasn't within our timeline for this hackathon. I ended up running this model on dual Tesla V100s with mixed precision, bringing the training time to about 10 hours. We then inferred on the train set and distilled a second model, an Efficientnet b1 (a smaller, faster model). This was trained on the RTX3090.
The entire training pipeline was built with PyTorch and optimized with a number of small optimizations and tricks I had used in previous Kaggle competitions. Top solutions in the Kaggle competition use techniques that marginally increase score while hugely increasing inference time, such as test-time augmentation (TTA) or ensembling. In the interest of scalability and low latency, we used neither of these. These are by no means the most optimized hyperparameters, but with only a single fold we didn't have good enough cross-validation, or enough time, to tune them further. Considering we achieved 95% of the performance of the state of the art with a tiny fraction of the compute, thanks to our use of mixed precision and the absence of TTA and ensembling, I'm very proud.
One aspect of this pipeline I found very interesting was the metric. The metric is a weighted area under the receiver operating characteristic curve (AUROC, often abbreviated as AUC), biased towards the true positive rate and against the false positive rate. This way, as few unsteganographed images as possible are mislabelled.
### What I learned
I learned about a ton of resources I would never have discovered otherwise. I've used GCP for cloud GPU instances, but never for hosting, and was super surprised by its utility; I will definitely be using it more in the future. I also learned about steganography and steganalysis; these were fields I knew very little about but was very interested in, and this hackathon proved to be the perfect place to learn more and implement ideas.
### What's next for Stegano - end-to-end steganalysis tool
We put a ton of time into the steganalysis aspect of our project, expecting a simple, easy-to-use steganography library to exist in Python. We found two libraries, one of which had not been updated for five years; ultimately we chose stegano[4], the namesake for our project. We'd love to create our own module, adding more steganography algorithms and incorporating audio data and models. Scaling to larger models is also something we would love to do: Efficientnet b1 offered us the best mix of performance and speed at this time, but further research into the new NFNet models or others could yield significant performance uplifts on the modelling side, though many GPU hours are needed.
## References
1. <https://www.wired.com/2001/02/bin-laden-steganography-master/>
2. <https://www.wired.com/2010/06/alleged-spies-hid-secret-messages-on-public-websites/>
3. <https://www.giac.org/paper/gsec/3494/steganography-age-terrorism/102620>
4. <https://pypi.org/project/stegano/>
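As a toy illustration of the least-significant-bit technique described in "What it does" above, the snippet below hides a short ASCII message in the low bits of a byte array. The real encoder works on image channels via the stegano library; this standalone sketch only shows the bit manipulation.

```python
def lsb_encode(carrier: bytearray, message: bytes) -> bytearray:
    """Write each bit of `message` into the least significant bit of one carrier byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def lsb_decode(carrier: bytearray, message_len: int) -> bytes:
    """Read `message_len` bytes back out of the carrier's least significant bits."""
    bits = [carrier[i] & 1 for i in range(message_len * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

pixels = bytearray(range(64))   # stand-in for raw image channel values
stego = lsb_encode(pixels, b"hi")
print(lsb_decode(stego, 2))     # b'hi'
```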
## Inspiration 🌱
Climate change is affecting every region on earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment, and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness.
## What it does 🌎
Inspired by BeReal, the most popular app of 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Every time you make a sustainable choice, snap a photo, upload it, and you’ll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability, and even claim prizes once you have enough points 😍.
## How we built it 🧑‍💻
We used React with JavaScript to create the app, coupled with Firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo.
## Challenges we ran into 🥊
One of our biggest obstacles was settling on an idea, as there were so many great challenges for us to draw inspiration from.
## Accomplishments that we're proud of 🏆
We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team, and we’re all leaving uOttahack with new capabilities.
## What we learned 💚
Everyone was able to work with new technologies that they’ve never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI, Firebase, and routers in React.
## What's next for BeGreen ✨
It would be amazing to collaborate with brands to give more rewards as an incentive to make more sustainable choices. We’d also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
## Inspiration
An informed electorate is as vital as the ballot itself in facilitating a true democracy. In this day and age, it is not a lack of information but rather an excess that threatens to take power away from the people. Finding the time to research all 19 Democratic nominee hopefuls to make a truly informed decision is a challenge for most, and out of convenience, many voters tend to rely on just a handful of major media outlets as the source of truth. This monopoly on information gives mass media considerable ability to project its biases onto the public opinion. The solution to this problem presents an opportunity to utilize technology for social good.
## What it does
InforME returns power to the people by leveraging Google Cloud's Natural Language API to detect systematic biases across a large volume of articles pertinent to the 2020 Presidential Election from 8 major media sources, including ABC, CNN, Fox, Washington Post, and Associated Press. We accomplish this by scraping relevant and recent articles from a variety of online sources and using the Google Cloud NLP API to perform sentiment analysis on them. We then aggregate individual entity sentiments and statistical measures of linguistic salience in order to synthesize our data into a meaningful and convenient format for understanding and comparing the biases each major media outlet holds towards or against each candidate.
## How we built it and Challenges we ran into
One of the many challenges we faced was learning new technology. We dedicated ourselves to learning multiple GCP technologies throughout HackMIT, from calling GCP APIs to serverless deployment. We employed the Google NLP API to make sense of the huge data set scraped from major news outlets, the Firebase real-time database to log data, and finally GCP App Engine to deploy our web apps. Coming into the hackathon with little experience with GCP, we found the learning curve to be steep yet rewarding. This immersion in GCP technology gave us a deeper understanding of how different components of GCP work together, and how much potential GCP has for contributing to social good.
Another challenge we faced was how to represent the data in a visually meaningful way. Though we were able to generate a lot of insightful technical data, we chose to represent it in a straightforward, easy-to-understand way without losing information or precision. It is undoubtedly challenging to find the right balance between technicality and aesthetics, and our front-end design tackles this task of using technology for social good in an accessible way without compromising the complexity of current politics. Just as there is no simple solution to current social problems, there is no perfect way to contribute to social good. Despite this, InforME is an attempt to return power to the people, providing for a more just distribution of information and a better-informed electorate: a gateway to a society where information is open and accessible.
## What's next for InforME
Despite our progress, there is room for improvement. First, we can allow users to filter results by date to better represent data in a specific time range. We can also identify pressing issues or hot topics associated with each candidate via entity sentiment analysis. Moreover, with enough data, we can also build a graph of relationships between the candidates to better serve our audience.
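A hedged sketch of the per-article analysis step described above: run entity sentiment analysis with the Cloud Natural Language client and accumulate salience-weighted sentiment per candidate. The weighting scheme is our own assumption about how such bias scores could be combined; credentials and candidate matching are simplified.

```python
from collections import defaultdict

from google.cloud import language_v1

client = language_v1.LanguageServiceClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

def article_entity_sentiment(text):
    """Entity-level sentiment and salience for one scraped article."""
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entity_sentiment(request={"document": document})
    return [(e.name, e.salience, e.sentiment.score) for e in response.entities]

def aggregate_bias(articles, candidates):
    """Salience-weighted mean sentiment per candidate across one outlet's articles."""
    weighted, weights = defaultdict(float), defaultdict(float)
    for text in articles:
        for name, salience, score in article_entity_sentiment(text):
            if name in candidates:
                weighted[name] += salience * score
                weights[name] += salience
    return {c: weighted[c] / weights[c] for c in weighted if weights[c] > 0}
```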
winning
## Background Before we **don't** give you financial advice, let's go through some brief history on the financial advisors and the changes they've seen since their introduction. Financial advisors have been an essential part of the financial world for decades, offering individuals tailored advice on everything from investments to retirement plans. Traditionally, advisors would assess a client's financial situation and suggest investment strategies or products, charging a fee for their services. In Canada, these fees often range from 1% to 2% of a client's assets under management (AUM) annually. For example, if a client had $500,000 invested, they could be paying $5,000 to $10,000 a year in advisor fees. However, over the past two decades, consumers have been migrating away from traditional financial advisors toward lower-cost alternatives like Exchange-Traded Funds (ETFs) and robo-advisors. ETFs, which are passively managed and track indexes like the S&P 500, became popular because they offer diversification at a fraction of the cost—typically charging less than 0.5% in fees. This shift is part of a broader trend toward fee transparency, where investors demand to know exactly what they're paying for and opt for lower-cost options when they can. But while ETFs offer cost savings, they come with their own set of risks. For one, passive investing removes the active decision-making of traditional advisors, which can lead to market-wide issues. In times of high volatility, ETFs can exacerbate market instability because of their algorithmic trading patterns and herd-like behaviours. Furthermore, ETFs don't account for an investor's specific financial goals or risk tolerance, which is where human advisors can still play a critical role. Understanding this transition helps illustrate why a tool like NFA (Not Financial Advice) can fill the gap—offering insights into personal finances without the high fees or potential drawbacks of fully passive investing or even the requirements to invest! Whether it be an individual who is looking to optimize their existing investments, or one who simply wants to learn about what their options are to begin with, NFA is a platform for all! ## Inspiration The inspiration for NFA came from recognizing a gap in the market for accessible, personalized financial insights. Traditional financial advice is often expensive and not readily available to everyone. We wanted to create a platform that could analyze a user's financial situation and provide valuable insights without crossing the line into regulated financial advice. The rise of fintech and the increasing financial literacy needs of younger generations also played a role in inspiring this project. We saw an opportunity to leverage technology to empower individuals to make more informed financial decisions. ## Team Background and Individual Inspiration Our diverse team brings a unique blend of experiences and motivations to the NFA project: 1. **Cole Dermott**: A 21-year-old fourth-year student at the University of Waterloo (Computer Science) and Wilfrid Laurier University (Business Administration). With experience in various software fields, Cole's business background and frequent interactions with financial news and peer inquiries inspired him to develop a tool that could provide quick financial insights. 2. **Daniel Martinez**: A grade 12 student from Vaughan, Ontario, with experience in fullstack development including mobile, web, and web3. 
As a young startup founder (GradeAssist), Daniel has faced challenges navigating the financial world. These systemic and information barriers motivated him to join the NFA team and create a solution for others facing similar challenges. 3. **Musa Aqeel**: A second-year university student working full-time as a fullstack developer at Dayforce. Musa's personal goal of setting himself up for an early retirement drove him to develop a tool that would help him truly understand his finances in-depth and make informed decisions. 4. **Alex Starosta**: A second-year Software Engineering student at the University of Waterloo. Alex's meticulous approach to personal finance, including constant budgeting and calculations, inspired him to create a financial tool that would provide insights at a glance, eliminating the need for continuous manual checks. ## What it does NFA is a comprehensive platform that: 1. Collects detailed user financial information, including: * Age and location * Invested assets and liabilities * Credit score * Interests and financial goals * Cash and salary details 2. Analyzes this data to provide personalized insights. 3. Identifies potential "red flags" in the user's financial situation. 4. Offers notifications and alerts about these potential issues. 5. Provides educational resources tailored to the user's financial situation and goals. All of this is done without crossing the line into providing direct financial advice, hence the name "Not Financial Advice" - since this is **MOST DEFINITELY NOT FINANCIAL ADVICE!!!!!** ## How we built it We leveraged a modern tech stack to build NFA, focusing on scalability, performance, and developer experience. Our technology choices include: 1. **Frontend:** * Next.js: For server-side rendering and optimized React applications * React: As our primary frontend library * TypeScript: For type-safe JavaScript development * Tailwind CSS: For rapid and responsive UI development 2. **Backend and Database:** * Firebase Auth: For secure user authentication * Firestore: As our scalable, real-time NoSQL database 3. **API and AI Integration:** * Cohere API: For advanced natural language processing, AI-driven insights, and it's web search functionality 4. **Development Tools:** * ESLint: For code quality and consistency * Vercel: For seamless deployment and hosting This tech stack allowed us to create a robust, scalable application that can handle complex financial data processing while providing a smooth user experience. The combination of Firebase for backend services and Next.js for the frontend enabled us to rapidly develop and iterate on our platform. The integration of Cohere API for AI capabilities was crucial in developing our intelligent insights engine, allowing us to analyze user financial data and provide personalized recommendations without crossing into direct financial advice territory. ## Challenges we ran into Building NFA presented us with a unique set of challenges that pushed our skills and creativity to the limit: 1. **Navigating Regulatory Boundaries:** One of our biggest challenges was designing a system that provides valuable financial insights without crossing into regulated financial advice territory. We had to carefully craft our algorithms and user interface to ensure we were providing information and analysis without making specific recommendations that could be construed as professional financial advice. 2. 
**Ensuring Data Privacy and Security:** Given the sensitive nature of financial data, implementing robust security measures was paramount. We faced challenges in configuring Firebase Auth and Firestore to ensure end-to-end encryption of user data while maintaining high performance. This required a deep dive into Firebase's security rules and careful consideration of data structure to optimize for both security and query efficiency. 3. **Integrating AI Responsibly:** Incorporating AI through the Cohere API and Groq presented unique challenges. We needed to ensure that the AI-generated insights were accurate, unbiased, and explainable. This involved extensive testing and fine-tuning of our prompts and models to avoid potential biases and ensure the AI's outputs were consistently reliable and understandable to users of varying financial literacy levels. 4. **Optimizing Performance with Complex Data Processing:** Balancing the need for real-time insights with the computational intensity of processing complex financial data was a significant challenge. We had to optimize our Next.js and React components to handle large datasets efficiently, implementing techniques like virtualization for long lists and strategic data fetching to maintain a smooth user experience even when dealing with extensive financial histories. 5. **Creating an Intuitive User Interface for Complex Financial Data:** Designing an interface that could present complex financial information in an accessible way to users with varying levels of financial literacy was a major hurdle. We leveraged Tailwind CSS to rapidly prototype and iterate on our UI designs, constantly balancing the need for comprehensive information with clarity and simplicity. 6. **Cross-Browser and Device Compatibility:** Ensuring consistent functionality and appearance across different browsers and devices proved challenging, especially when dealing with complex visualizations of financial data. We had to implement various polyfills and CSS tweaks to guarantee a uniform experience for all users. 7. **Managing Team Dynamics and Skill Diversity:** With team members ranging from high school to university students with varying levels of experience, we faced challenges in task allocation and knowledge sharing. We implemented a peer programming system and regular knowledge transfer sessions to leverage our diverse skillsets effectively. 8. **Handling Real-Time Updates and Notifications:** Implementing a system to provide timely notifications about potential financial "red flags" without overwhelming the user was complex. We had to carefully design our notification system in Firebase to balance immediacy with user experience, ensuring critical alerts were not lost in a sea of notifications. 9. **Scalability Considerations:** Although we're starting with a prototype, we had to design our database schema and server architecture with future scalability in mind. This meant making tough decisions about data normalization, caching strategies, and potential sharding approaches that would allow NFA to grow without requiring a complete overhaul. 10. **Ethical Considerations in Financial Technology:** Throughout the development process, we grappled with the ethical implications of providing financial insights, especially to potentially vulnerable users. We had to carefully consider how to present information in a way that empowers users without encouraging risky financial behavior. 
These challenges not only tested our technical skills but also pushed us to think critically about the broader implications of financial technology. Overcoming them required creativity, teamwork, and a deep commitment to our goal of empowering users with financial insights. ## Accomplishments that we're proud of 1. **Innovative Financial Insight Engine:** We successfully developed a sophisticated algorithm that analyzes user financial data and provides valuable insights without crossing into regulated financial advice. This delicate balance showcases our understanding of both technology and financial regulations. 2. **Seamless Integration of AI Technologies:** We effectively integrated Cohere API and Groq to power our AI-driven insights, creating a system that can understand and analyze complex financial situations. This accomplishment demonstrates our ability to work with cutting-edge AI technologies in a practical application. 3. **Robust and Scalable Architecture:** Our implementation using Firebase, Firestore, and Next.js resulted in a highly scalable and performant application. We're particularly proud of our data model design, which allows for efficient querying and real-time updates while maintaining data integrity and security. 4. **User-Centric Design:** We created an intuitive and accessible interface for complex financial data using React and Tailwind CSS. Our design makes financial insights understandable to users with varying levels of financial literacy, a crucial aspect for broadening financial education and accessibility. 5. **Advanced Data Visualization:** We implemented sophisticated data visualization techniques that transform raw financial data into easily digestible graphs and charts. This feature significantly enhances user understanding of their financial situation at a glance. 6. **Responsive and Cross-Platform Compatibility:** Our application works seamlessly across various devices and browsers, ensuring a consistent user experience whether accessed from a desktop, tablet, or smartphone. 7. **Real-Time Financial Alerts System:** We developed a nuanced notification system that alerts users to potential financial issues or opportunities without being overwhelming. This feature demonstrates our attention to user experience and the practical application of our insights. 8. **Comprehensive Security Implementation:** We implemented robust security measures to protect sensitive financial data, including end-to-end encryption and careful access control. This accomplishment showcases our commitment to user privacy and data protection. 9. **Efficient Team Collaboration:** Despite our diverse backgrounds and experience levels, we established an effective collaboration system that leveraged each team member's strengths. This resulted in rapid development and a well-rounded final product. 10. **Ethical AI Implementation:** We developed guidelines and implemented checks to ensure our AI-driven insights are unbiased and ethically sound. This proactive approach to ethical AI use in fintech sets our project apart and demonstrates our awareness of broader implications in the field. 11. **Rapid Prototyping and Iteration:** Using our tech stack, particularly Next.js and Tailwind CSS, we were able to rapidly prototype and iterate on our designs. This allowed us to refine our product continuously based on feedback and testing throughout the hackathon. 12. 
**Innovative Use of TypeScript:** We leveraged TypeScript to create a strongly-typed codebase, significantly reducing runtime errors and improving overall code quality. This showcases our commitment to writing maintainable, scalable code. 13. **Successful Integration of Multiple APIs:** We seamlessly integrated various APIs and services (Firebase, Cohere, Groq) into a cohesive platform. This accomplishment highlights our ability to work with diverse technologies and create a unified, powerful solution. 14. **Creation of Educational Resources:** Alongside the main application, we developed educational resources that help users understand their financial situations better. This additional feature demonstrates our holistic approach to financial empowerment. 15. **Performance Optimization:** We implemented advanced performance optimization techniques, resulting in fast load times and smooth interactions even when dealing with large datasets. This showcases our technical proficiency and attention to user experience. These accomplishments reflect not only our technical skills but also our ability to innovate in the fintech space, our commitment to user empowerment, and our forward-thinking approach to financial technology. ## What we learned 1. **Navigating Financial Regulations:** We gained a deep understanding of the fine line between providing financial insights and giving regulated financial advice. This knowledge is crucial for anyone looking to innovate in the fintech space. 2. **The Power of AI in Finance:** Through our work with Cohere API and Groq, we learned how AI can be leveraged to analyze complex financial data and provide valuable insights. We also understood the importance of responsible AI use in financial applications. 3. **Importance of Data Privacy and Security:** Working with sensitive financial data reinforced the critical nature of robust security measures. We learned advanced techniques in data encryption and secure database management using Firebase and Firestore. 4. **User-Centric Design in Fintech:** We discovered the challenges and importance of presenting complex financial information in an accessible manner. This taught us valuable lessons in UX/UI design for fintech applications. 5. **Full-Stack Development with Modern Technologies:** Our team enhanced their skills in full-stack development, gaining hands-on experience with Next.js, React, TypeScript, and Tailwind CSS. We learned how these technologies can be integrated to create a seamless, efficient application. 6. **Real-Time Data Handling:** We learned techniques for efficiently managing and updating real-time financial data, balancing the need for immediacy with performance considerations. 7. **Cross-Platform Development Challenges:** Ensuring our application worked consistently across different devices and browsers taught us valuable lessons in responsive design and cross-platform compatibility. 8. **The Value of Rapid Prototyping:** We learned how to quickly iterate on ideas and designs, allowing us to refine our product continuously throughout the hackathon. 9. **Effective Team Collaboration:** Working in a diverse team with varying levels of experience taught us the importance of clear communication, task delegation, and knowledge sharing. 10. **Balancing Features and MVP:** We learned to prioritize features effectively, focusing on creating a viable product within the hackathon's time constraints while planning for future enhancements. 11. 
**The Intersection of Finance and Technology:** This project deepened our understanding of how technology can be used to democratize financial insights and empower individuals in their financial decision-making. 12. **Ethical Considerations in AI and Finance:** We gained insights into the ethical implications of using AI in financial applications, learning to consider potential biases and the broader impact of our technology. 13. **Performance Optimization Techniques:** We learned advanced techniques for optimizing application performance, especially when dealing with large datasets and complex calculations. 14. **The Importance of Financial Literacy:** Through creating educational resources, we deepened our own understanding of financial concepts and the importance of financial education. 15. **API Integration and Management:** We enhanced our skills in working with multiple APIs, learning how to integrate and manage various services within a single application. 16. **Scalability Considerations:** We learned to think beyond the immediate project, considering how our application architecture could scale to accommodate future growth and features. 17. **The Power of Typed Programming:** Using TypeScript taught us the benefits of strongly-typed languages in creating more robust, maintainable code, especially in complex applications. 18. **Data Visualization Techniques:** We gained skills in transforming raw financial data into meaningful visual representations, learning about effective data visualization techniques. 19. **Agile Development in a Hackathon Setting:** We applied agile methodologies in a compressed timeframe, learning how to adapt these principles to the fast-paced environment of a hackathon. 20. **The Potential of Open Banking:** Although not directly implemented, our project made us aware of the possibilities and challenges in the emerging field of open banking and its potential impact on personal finance management. These learnings not only enhanced our technical skills but also broadened our understanding of the fintech landscape, ethical technology use, and the importance of financial empowerment. The experience has equipped us with valuable insights that will inform our future projects and career paths in technology and finance. ## What's next for NFA (Not Financial Advice) 1. **Enhanced AI Capabilities:** * Implement more advanced machine learning models to provide even more accurate and personalized financial insights. * Develop predictive analytics to forecast potential financial outcomes based on user behavior and market trends. 2. **Open Banking Integration:** * Partner with banks and financial institutions to integrate open banking APIs, allowing for real-time, comprehensive financial data analysis. * Implement secure data sharing protocols to ensure user privacy while leveraging the power of open banking. 3. **Expanded Financial Education Platform:** * Develop a comprehensive, interactive financial education module within the app. * Create personalized learning paths based on user's financial knowledge and goals. 4. **Community Features:** * Implement an anonymized peer comparison feature, allowing users to benchmark their financial health against similar demographics. * Create a forum for users to share financial tips and experiences, moderated by AI to ensure quality and prevent misinformation. 5. **Gamification of Financial Goals:** * Introduce gamification elements to encourage positive financial behaviors and goal achievement. 
* Develop a reward system for reaching financial milestones, potentially partnering with financial institutions for tangible benefits. 6. **Advanced Data Visualization:** * Implement more sophisticated data visualization techniques, including interactive charts and 3D visualizations of complex financial data. * Develop AR/VR interfaces for immersive financial data exploration. 7. **Personalized Financial Product Recommendations:** * Develop an AI-driven system to suggest financial products (savings accounts, investment options, etc.) based on user profiles and goals, while maintaining our commitment to not providing direct financial advice. 8. **Multi-Language Support:** * Expand the platform to support multiple languages, making financial insights accessible to a global audience. 9. **Blockchain Integration:** * Explore the integration of blockchain technology for enhanced security and transparency in financial tracking. * Develop features to analyze and provide insights on cryptocurrency investments alongside traditional financial assets. 10. **Mobile App Development:** * Create native mobile applications for iOS and Android to provide a seamless mobile experience and leverage device-specific features. 11. **API for Developers:** * Develop and release an API that allows third-party developers to build applications on top of NFA's insights engine, fostering an ecosystem of financial tools. 12. **Sustainability Focus:** * Implement features to help users understand the environmental impact of their financial decisions. * Provide insights and recommendations for sustainable investing options. 13. **Customizable Dashboard:** * Allow users to create fully customizable dashboards, tailoring the NFA experience to their specific financial interests and goals. 14. **Integration with Financial Advisors:** * Develop a feature that allows users to safely share their NFA insights with professional financial advisors, bridging the gap between AI-driven insights and professional advice. 15. **Expanded AI Ethics Board:** * Establish an AI ethics board comprising experts in finance, technology, and ethics to ensure ongoing responsible development and use of AI in our platform. 16. **Research Partnerships:** * Collaborate with universities and financial institutions to conduct research on personal finance trends and the impact of AI-driven financial insights. 17. **Accessibility Enhancements:** * Implement advanced accessibility features to make NFA usable for individuals with various disabilities, ensuring financial insights are available to everyone. 18. **Predictive Life Event Planning:** * Develop features that help users plan for major life events (buying a home, having children, retirement) by predicting financial needs and suggesting preparation strategies. 19. **Voice Interface:** * Implement a voice-activated interface for hands-free interaction with NFA, making financial insights even more accessible in users' daily lives. 20. **Continuous Learning AI:** * Develop a system where the AI continuously learns and improves from anonymized user data and feedback, ensuring that insights become increasingly accurate and valuable over time. By implementing these features, NFA aims to become a comprehensive, intelligent, and indispensable tool for personal financial management. Our goal is to democratize access to high-quality financial insights, empower individuals to make informed financial decisions, and ultimately contribute to improved financial well-being on a global scale.
## Inspiration
There is a simple and powerful truth: investing is a skill that has a long-lasting impact on our ability to reach the most important goals in our lives, from student loans to buying a house to retirement funds. However, our education system doesn't prepare us to make these daunting decisions. When we were teenagers, we felt finance was complex: another mysterious "adult thing" that only remotely related to us. Higher tuition and less money available for financial aid are pushing more and more young people into debt, even after years of working. According to Harvard Law School, about 110,000 youths under 25 filed for bankruptcy in 2017. Investing shouldn't just be a sophisticated tool reserved for the professionals. Investing should be a mindset that helps anyone make better decisions in this world. Let's democratize finance.
## What it does
We have designed Dr. Trade to simulate an interactive investing game that makes it simple and fun for teenagers to learn the most important skills of an intelligent investor:
* A habit of following world news in the morning. The alarm clock UI design with voice control seamlessly integrates learning to trade into the user's daily life.
* Communication skills are essential for traders. The hands-free UX design allows users to practice putting in orders and discussing news like traders do.
* The best traders are the calm and persistent ones. Users need to wait until market close at 4pm to see daily profits and are encouraged to trade every day with the built-in reward system, including peer rankings and medals.
* Traders learn from their mistakes. At the end of the day, Dr. Trade will reflect with the user on what went well and what went wrong based on real market data.
* A curious mind to connect the dots between news and stock performance.
* Making tough decisions: users are only allowed to make one trade per day.
## How we built it
* Machine learning with ASR, NLP, TTS (Action Google)
* Interaction design > Conversation design (Dialogflow/Java) > UI/UX design
* Portfolio analysis (BlackRock Aladdin/Python)
* Live stock market data (Yahoo Finance/Python); see the sketch after this write-up
* Speaker (Google Home Mini)
## Accomplishments that we're proud of
We have managed not only to identify a true problem, but also to develop a unique and effective solution to address it. Along the way, we grew as team players through synergy and delved into the Google Action platform and the ML behind it. Needless to say, by focusing on education, we are empowering the future generation to lead a prosperous life, as they will now possess the financial literacy to be independent.
## What we learned
By concentrating our efforts on the educational perspective of the project, our team discovered the inner motivation and the vision that kept us pushing over the last few days and will keep us going in the future.
## What's next for Dr. Trade
We are hoping to further develop Dr. Trade and hopefully find product-market fit in the next few months, as it is a project deeply embedded in the team's values for world change. Here's what we plan to do in the next month:
* Integrate BlackRock Aladdin with Dialogflow to automate the portfolio snapshot
* Install a 5-inch display on our MVP for portfolio snapshot and performance visualization
* User research: test our MVP with high school students in the greater Philadelphia community
## Keywords:
Education, Experiential Learning, Machine Learning, Interaction Design, UI/UX Design, Action Google, Blackrock, Portfolio Analysis, Financial Literacy
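As a rough illustration of the "Live stock market data (Yahoo Finance/Python)" step in Dr. Trade's stack, here is a minimal sketch using the `yfinance` package. The write-up does not name a specific library, so `yfinance`, the ticker, and the buy price below are assumptions, not the team's actual code.

```python
# Illustrative only: fetching end-of-day prices with the yfinance package.
import yfinance as yf

def daily_profit(ticker: str, shares: int, buy_price: float) -> float:
    """Compute the user's paper profit after the 4pm market close."""
    history = yf.Ticker(ticker).history(period="1d")
    close_price = float(history["Close"].iloc[-1])  # most recent daily close
    return (close_price - buy_price) * shares

if __name__ == "__main__":
    # Hypothetical single daily trade: 10 shares bought at $175.00.
    print(f"Today's profit: ${daily_profit('AAPL', 10, 175.00):.2f}")
```

In a voice-first build like this one, a result of this kind would be read back by the speaker at market close rather than shown on screen.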
## Inspiration
Digitized conversations have given the hearing impaired and other persons with disabilities the ability to better communicate with others despite the barriers in place as a result of their disabilities. Through our app, we hope to build towards a solution where we can extend the effects of technological development to help those with hearing disabilities communicate with others in real life.
## What it does
Co:herent is (currently) a web app which allows the hearing impaired to streamline their conversations with others by providing them sentence or phrase suggestions given the context of a conversation. We use Co:here's NLP text generation API to achieve this, and in order to provide more accurate results we give the API context from the conversation and use prompt engineering to better tune the model. The other (non hearing impaired) person is able to communicate with the web app naturally through speech-to-text input, and text-to-speech functionality is in place to better facilitate the flow of the conversation.
## How we built it
We built the entire app using Next.js with the Co:here API, React Speech Recognition API, and React Speech Kit API.
## Challenges we ran into
* Coming up with an idea
* Learning Next.js as we went, as this was all of our first time using it
* Calling APIs is difficult without a backend when working through a server-side rendered framework such as Next.js
* Coordinating and designating tasks in order to be efficient and minimize code conflicts
* .env and SSR compatibility issues
## Accomplishments that we're proud of
Creating a fully functional app without cutting corners or deviating from the original plan despite various minor setbacks.
## What we learned
We were able to learn a lot about Next.js as well as the various APIs through our first time using them.
## What's next for Co:herent
* Better tuning the NLP model to generate better/more personal responses, as well as storing and maintaining more information on the user through user profiles and database integrations
* Better tuning of the TTS, as well as giving users the choice to select from a menu of possible voices
* Possibility of alternative forms of input for those who may be physically impaired (such as in cases of cerebral palsy)
* Mobile support
* Better UI
## Inspiration 🤔
The inspiration behind the project is to provide a tool that can assist people in persuasive writing and critical thinking. The idea is to create an application that can help students and professionals improve their writing and argumentation skills by providing them with high-quality and well-reasoned arguments in a matter of seconds. The application uses natural language processing technology and ChatGPT, a large language model, to generate arguments for, against, and neutral to any given prompt. This can save a lot of time and effort for people who need to write essays, articles, speeches, or any other type of written work that requires persuasive arguments. Additionally, it can also help to improve the quality of the arguments, as the application is able to generate well-reasoned and thoughtful arguments.
## What it does ❓
The application works by connecting to the GPT-3.5 language model, also known as ChatGPT, using an in-house built API. The API allows the application to interact with the language model and generate three types of arguments in response to a given prompt. The first type of argument is meant to provide reasons why the claim holds, the second type is meant to provide reasons why the claim fails, and the last type is meant to provide a neutral argument that is neither for nor against the topic. In this way, the application is able to generate a comprehensive set of arguments that cover different perspectives and provide a well-rounded understanding of the topic. The arguments generated by the application are high-quality, well-reasoned, and thought-provoking, which can help users improve their persuasive writing and critical thinking skills. The application is designed to be user-friendly and easy to use, allowing users to generate arguments on any topic quickly and easily.
## How we built it 🛠️
Because OpenAI does not currently provide direct access to an API for the GPT-3.5 (ChatGPT) model, we had to engineer other methods to connect to the model and use it to produce the arguments for our application. This involved building a custom API that allows our application to interact with the model and retrieve the generated arguments. This custom API was built to work in conjunction with the OpenAI API that allows access to the GPT-3.5 model.
## Challenges we ran into ❌
During the development of the application, we encountered several challenges that needed to be overcome. One of the main challenges we faced was the lack of direct access to the GPT-3.5 model through an API, as mentioned earlier. This required a significant amount of time and resources to develop a custom API that could interact with the model and retrieve the generated arguments. In terms of team collaboration, one of the main challenges was coordinating the work of the different team members and ensuring that everyone was on the same page. This required effective communication and collaboration, as well as clear and concise project management. Additionally, it was also important to ensure that everyone was aware of the latest developments and any changes in the project plan. Overall, the project was challenging, but with the right team, resources, and perseverance, we were able to overcome these obstacles and deliver a powerful tool that can help users improve their persuasive writing and critical thinking skills.
## What's next for Controversy.io ⏭️
Our goal is to expand the capabilities of our application to support other language models in addition to ChatGPT. This would allow the application to generate arguments in multiple languages and cater to a wider audience. It will require additional development and resources, but we believe that the benefits of having a multilingual argument generation tool will be well worth the investment.
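Controversy.io's custom ChatGPT bridge is not shown in this write-up. As a hedged stand-in, here is a minimal Python sketch of the same three-stance idea using OpenAI's current official client; the model name, prompts, and client library are illustrative assumptions, not the team's actual setup.

```python
# Sketch of the for / against / neutral prompting idea, using the modern
# OpenAI Python client as a stand-in for the team's custom bridge.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STANCES = {
    "for": "Give well-reasoned arguments supporting the claim.",
    "against": "Give well-reasoned arguments refuting the claim.",
    "neutral": "Give a balanced, neutral analysis of the claim.",
}

def generate_arguments(prompt: str) -> dict:
    """Return one generated argument per stance for the given prompt."""
    results = {}
    for stance, instruction in STANCES.items():
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": prompt},
            ],
        )
        results[stance] = response.choices[0].message.content
    return results
```

Keeping one system prompt per stance makes it easy to add further stances (or other languages, as planned above) without touching the rest of the pipeline.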
## Inspiration:
My inspiration for creating the Homework Helper was the sheer increase in workload from my freshman to my sophomore year of high school, and the fact that a tool like this will be in high demand by students at any level. It has only been 2 weeks into the semester, and I've already felt the need for a tool like this (I would like to mention that I have enough integrity to not use it, even though the tool, in my opinion, works really well).
## What it does:
It takes any example prompt (i.e., assignment prompts, homework content, etc.) and provides an output of either the assignment being completed in full, OR a request for the user to provide more information so the assignment can be done correctly and effectively. Moreover, the most versatile feature of the application is that it allows you to choose intelligence levels, which include Stupid, High School Student, Smart, College, and College Professor. This lets students essentially guarantee their success on an assignment, since the main way students get caught using AI to complete their work is that their previously completed assignments clearly differ in grade level compared to the work completed by the AI model (e.g., ChatGPT).
## How we built it:
Built using Python, and through extensive research into the OpenAI documentation.
## Challenges we ran into:
Getting my OpenAI API key to work. It took about 10 tries before I realized there were issues both in how I inputted the key and in parts of my code that didn't reference the key correctly.
## Accomplishments that we're proud of:
The Homework Helper is really effective, and gives significantly different responses depending on the intelligence level you choose (e.g., if you told the AI that you'd like a "Stupid" response, it would produce a pretty effective response at about a 3rd-grade reading level). Which, as mentioned previously, would nearly guarantee a student's success on an assignment.
## What we learned:
Read every piece of documentation related to what you're trying to use an API for.
## What's next for Homework Helper 3000:
Create a clean GUI.
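The "intelligence level" feature described above could look roughly like the following Python sketch: each level maps to a system prompt that shapes the writing style of the completion. The level wording and model choice are hypothetical, not the author's actual code.

```python
# Hypothetical sketch of level-conditioned completions with the OpenAI client.
from openai import OpenAI

LEVELS = {
    "Stupid": "Answer at roughly a 3rd-grade reading level, with simple words.",
    "High School Student": "Answer like a typical high school student.",
    "Smart": "Answer clearly and competently.",
    "College": "Answer like an undergraduate student.",
    "College Professor": "Answer with expert depth and precise terminology.",
}

def complete_assignment(prompt: str, level: str) -> str:
    """Generate a response to the assignment prompt at the chosen level."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": LEVELS[level]},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```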
## Inspiration
While there are several applications that use OCR to read receipts, few take the leap towards informing consumers on their purchase decisions. We decided to capitalize on this gap: we currently provide information to customers about the healthiness of the food they purchase at grocery stores by analyzing receipts. In order to encourage healthy eating, we are also donating a portion of the total value of healthy food to a food-related non-profit charity in the United States or abroad.
## What it does
Our application uses Optical Character Recognition (OCR) to capture items and their respective prices on scanned receipts. We then parse through these words and numbers using an advanced Natural Language Processing (NLP) algorithm to match grocery items with their nutritional values from a database. By analyzing the amount of calories, fats, saturates, sugars, and sodium in each of these grocery items, we determine if the food is relatively healthy or unhealthy. Then, we calculate the amount of money spent on healthy and unhealthy foods, and donate a portion of the total healthy value to a food-related charity. In the future, we plan to run analytics on receipts from other industries, including retail, clothing, wellness, and education, to provide additional information on personal spending habits.
## How We Built It
We use AWS Textract and the Instabase API for OCR to analyze the words and prices in receipts. After parsing out the purchases and prices in Python, we used Levenshtein distance optimization for text classification to associate grocery purchases with nutritional information from an online database. Our algorithm utilizes Pandas to sort nutritional facts of food and determine if grocery items are healthy or unhealthy by calculating a "healthiness" factor based on calories, fats, saturates, sugars, and sodium. Ultimately, we output the amount of money spent in a given month on healthy and unhealthy food.
## Challenges We Ran Into
Our product relies heavily on utilizing the capabilities of OCR APIs such as Instabase and AWS Textract to parse the receipts that we use as our dataset. While both of these APIs have been developed on finely-tuned algorithms, the accuracy of parsing from OCR was lower than desired due to abbreviations for items on receipts, brand names, and low-resolution images. As a result, we were forced to dedicate a significant amount of time to expanding abbreviated words and then matching them to a large nutritional dataset.
## Accomplishments That We're Proud Of
Project Horus has the capability to utilize powerful APIs from both Instabase and AWS to solve the complex OCR problem of receipt parsing. By diversifying our software, we were able to glean useful information and higher accuracy from both services to further strengthen the project itself, which leaves us with a unique dual capability. We are exceptionally satisfied with our solution's food health classification. While our algorithm does not always identify the exact same food item on the receipt due to truncation and OCR inaccuracy, it still matches items to substitutes with similar nutritional information.
## What We Learned
Through this project, the team gained experience with developing on APIs from Amazon Web Services. We found Amazon Textract extremely powerful and integral to our work of reading receipts. We were also exposed to the power of natural language processing, and its applications in bringing ML solutions to everyday life.
Finally, we learned about combining multiple algorithms in a sequential order to solve complex problems. This placed an emphasis on modularity, communication, and documentation. ## The Future Of Project Horus We plan on using our application and algorithm to provide analytics on receipts from outside of the grocery industry, including the clothing, technology, wellness, education industries to improve spending decisions among the average consumers. Additionally, this technology can be applied to manage the finances of startups and analyze the spending of small businesses in their early stages. Finally, we can improve the individual components of our model to increase accuracy, particularly text classification.
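As a rough, simplified illustration of the matching and scoring steps described in Project Horus's "How We Built It" section, here is a small Python sketch: fuzzy-match an OCR'd receipt item to a nutrition table with Levenshtein distance, then compute a "healthiness" score. The column names, weights, and sample data are assumptions for demonstration, not the team's actual values.

```python
# Fuzzy matching of OCR'd receipt items plus a toy healthiness score.
import pandas as pd
import Levenshtein  # pip install Levenshtein

nutrition = pd.DataFrame({
    "item": ["banana", "potato chips", "greek yogurt"],
    "calories": [105, 536, 59],
    "fat": [0.4, 34.6, 0.4],
    "sugar": [14.4, 0.5, 3.2],
    "sodium": [1, 525, 36],
})

def match_item(ocr_text: str) -> pd.Series:
    """Return the nutrition row whose name is closest to the OCR'd text."""
    distances = nutrition["item"].apply(
        lambda name: Levenshtein.distance(ocr_text.lower(), name)
    )
    return nutrition.loc[distances.idxmin()]

def healthiness(row: pd.Series) -> float:
    """Lower is healthier; a simple weighted sum of the nutrition columns."""
    return 0.01 * row["calories"] + 0.5 * row["fat"] + 0.3 * row["sugar"] + 0.01 * row["sodium"]

item = match_item("GRK YOGRT PLN")  # truncated receipt text still matches a substitute
print(item["item"], "healthiness score:", round(healthiness(item), 2))
```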
## Inspiration
Tinder, but for volunteering
## What it does
Connects people to volunteering organizations. Makes volunteering fun, easy, and social.
## How we built it
React for web and React Native
## Challenges we ran into
So MANY
## Accomplishments that we're proud of
Getting a really solid idea and a decent UI
## What we learned
SO MUCH
## What's next for hackMIT
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution.
## What it does
It helps developers find projects to work on, and helps project leaders find group members. By using data from GitHub commits, it can determine what kind of projects a person is suitable for.
## How we built it
We decided on building an app for the web, then chose a GraphQL, React, Redux tech stack.
## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't get all the data we needed. The authentication was hard to implement, since we had to try a number of approaches to get it to work. The last challenge was determining how to build a relationship between the users and the projects they could be paired up with.
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database, and the authentication are all ready to show.
## What we learned
We learned that working with external APIs brings its own unique challenges.
## What's next for Hackr\_matchr
Scaling up is next: having it used for more kinds of projects, with more robust matching algorithms and higher user capacity.
## Inspiration
As musicians, the COVID-19 pandemic has disrupted our ability to physically perform pieces together in bands, orchestras, and small ensembles. Meeting in online video calls to perform is also difficult due to issues with audio lag. Another option is to record videos individually and then manually edit them together, which is a very laborious and painstaking process. This inspired us to create Musync: an app that synchronizes recordings via an easy-to-use interface and efficiently integrates them into a polished video.
## What it does
Each musician creates a Musync account and joins a "class". The lead musician can then upload a lead recording (or a metronomic click-track) for the class to play along with. Each member records themselves playing an individual part (while listening to the lead recording with headphones), then uploads the video file to Musync. Our app synchronizes the parts by listening for a distinctive clap at the beginning of each recording. The final output is a synchronized music video which features the musicians on the screen (grouped by instrument).
## How we built it
To create this application, we used Bulma for our frontend CSS framework, Google Cloud Services for our backend, and the ffmpeg library to manipulate the video and audio files. To process the recordings, we start by splitting the mp4 video from each user into a video part and an audio part. For each of the audio files, we detect the distinctive clap by analyzing spikes in volume and trim the audio and video files at the clap time so they are all synchronized. Then, we overlay the trimmed audio files into a master audio file and arrange the trimmed videos.
## Challenges we ran into
One of the hardest parts of developing this application was trying to use ffmpeg and the Google Cloud server, because of the time it took to load dependencies each time we loaded the application to test the functionality.
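A much-simplified Python sketch of Musync's synchronization step, assuming ffmpeg is available on the PATH: extract mono audio, find the first loud spike (the clap), then trim the video so every part starts at its clap. The threshold, temporary-file handling, and file names are illustrative, not the team's actual pipeline.

```python
# Clap detection by volume spike, then trimming with ffmpeg.
import subprocess
import numpy as np
from scipy.io import wavfile

def find_clap_time(video_path: str, threshold: float = 0.6) -> float:
    """Return the time (seconds) of the first sample louder than `threshold`."""
    subprocess.run(["ffmpeg", "-y", "-i", video_path, "-ac", "1", "tmp.wav"], check=True)
    rate, samples = wavfile.read("tmp.wav")
    samples = samples.astype(np.float64)
    loudness = np.abs(samples) / (np.abs(samples).max() or 1.0)
    first_spike = int(np.argmax(loudness > threshold))  # index of first loud sample
    return first_spike / rate

def trim_at_clap(video_path: str, out_path: str) -> None:
    """Cut everything before the clap so all parts line up."""
    clap = find_clap_time(video_path)
    subprocess.run(["ffmpeg", "-y", "-ss", str(clap), "-i", video_path, out_path], check=True)

trim_at_clap("violin_part.mp4", "violin_trimmed.mp4")
```

After each part is trimmed this way, overlaying the audio tracks and tiling the videos is a straightforward ffmpeg filter step.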
## Inspiration As university students, we have been noticing issues with very large class sizes. With lectures often being taught to over 400 students, it becomes very difficult and anxiety-provoking to speak up when you don't understand the content. As well, with classes of this size, professors do not have time to answer every student who raises their hand. This raises the problem of professors not being able to tell if students are following the lecture, and not answering questions efficiently. Our hack addresses these issues by providing a real-time communication environment between the class and the professor. KeepUp has the potential to increase classroom efficiency and improve student experiences worldwide. ## What it does KeepUp allows the professor to gauge the understanding of the material in real-time while providing students a platform to pose questions. It allows students to upvote questions asked by their peers that they would like to hear answered, making it easy for a professor to know which questions to prioritize. ## How We built it KeepUp was built using JavaScript and Firebase, which provided hosting for our web app and the backend database. ## Challenges We ran into As it was, for all of us, our first time working with a firebase database, we encountered some difficulties when it came to pulling data out of the firebase. It took a lot of work to finally get this part of the hack working which unfortunately took time away from implementing some other features (See what’s next section). But it was very rewarding to have a working backend in Firebase and we are glad we worked to overcome the challenge. ## Accomplishments that We are proud of We are proud of creating a useful app that helps solve a problem that affects all of us. We recognized that there is a gap in between students and teachers when it comes to communication and question answering and we were able to implement a solution. We are proud of our product and its future potential and scalability. ## What We learned We all learned a lot throughout the implementation of KeepUp. First and foremost, we got the chance to learn how to use Firebase for hosting a website and interacting with the backend database. This will prove useful to all of us in future projects. We also further developed our skills in web design. ## What's next for KeepUp * There are several features we would like to add to KeepUp to make it more efficient in classrooms: * Add a timeout feature so that questions disappear after 10 minutes of inactivity (10 minutes of not being upvoted) * Adding a widget feature so that the basic information from the website can be seen in the corner of your screen at all time * Adding Login for users for more specific individual functions. For example, a teacher can remove answered questions, or the original poster can mark their question as answered. * Censoring of questions as they are posted, so nothing inappropriate gets through.
## Inspiration
In many of our own lectures, professors ask students to indicate whether they are following along (e.g., hand raising and thumbs up/down reactions; one professor even said "clap if you're confused"). We asked around, and students, including ourselves, often opt out of these reactions in class to avoid looking, in front of everyone, like we're the only ones not understanding the material. In turn, professors do not get an accurate overview of students' understanding, harming both the professor and students. We want to make learning more productive through convenient and instantaneous feedback. We know people need what we're making because we want to use this ourselves and have heard professors eager to gain student feedback during their lectures multiple times.
## What it does
We are creating software that allows professors to gain real-time feedback on the pace and clarity of their lectures. During lectures, students can anonymously indicate that they are confused, give feedback on the speed of the lecture, and ask and upvote questions at any point. The feedback is instantaneously and anonymously communicated to the professor. On the professor's side, as more students click on the confused button, a small circle floating on the professor's screen turns red to alert the professor to re-explain a concept in real time. If no one is confused, the circle remains green. If the professor wants more information, he/she can hover over the circle to expand the window. The window includes student data on preferred lecture speed, the percentage of students confused, and top student questions. Professors can clear the question bank when they click on the clear button. Confusion and speed reactions are cleared every 30 seconds automatically.
## How we built it
We used ElectronJS to build a cross-platform desktop client for both the professor and the student. In ElectronJS, we used HTML, CSS, and JavaScript to build a frontend, as well as asynchronous techniques (using the Electron ipcRenderer) to communicate between different processes. By leveraging special functions in ElectronJS, we're able to produce a frameless, non-resizable, yet draggable floating window (that remains present even in fullscreen) that perfectly achieves the behavior we intend for the floating indicator. We used Firebase as a backend, leveraging the Firestore NoSQL database as a way to communicate the students' engagement and feedback on the material, anonymously, with the professor. Implementation of snapshot listeners on both the student and professor clients allows for real-time feedback and achieves our value proposition of seamless feedback.
## Challenges we ran into
While designing the interface for the professor, we really wanted to make it as simple as possible while still providing essential information about student sentiment. As such, we found it challenging to design a UI that fulfilled these requirements without disrupting the professor's lecture. Ultimately, we created a small, circular floating icon that can be moved throughout the screen. The icon changes color depending on students' reported confusion and lecture speed. Another design challenge that we faced was whether or not to incorporate a "speed up" request button for the students. We felt conflicted that this button might be rarely used, but that when it was used it would offer a lot of benefits. Ultimately we decided to incorporate this feature because the increase in UI complexity was minimal compared to the benefit it provided.
This is because if a lecture is going too slow, it can actually increase student confusion because the points may seem disconnected. ## Accomplishments that we're proud of We’re proud of narrowing down our scope to create a solution that solves a specific problem in the University track. VibeCheck effectively solves the problem that professors cannot gauge student understanding in lectures. ## What we learned We learned how to work as a team, and bounce ideas off each other. For design, wireframes, and pitch deck, we brushed up on Figma and learned how to use some of their new features. In order to build our software, we learned how to use HTML, CSS, and JavaScript in a lightweight and scalable way as we built VibeCheck. We also learned how to use ElectronJS to realize the value proposition (e.g., seamless, non-disruptive, immediate feedback) we’ve envisioned. We also learned how to integrate Firebase with ElectronJS (given that this integration is not officially supported), learned how to use the NoSQL database structure of FireStore, and use its real-time database features to achieve real-time feedback (another one of our value propositions) between the student and the professor. Coming from a background of iOS app development with Swift, our developer really enjoyed learning how to use web-dev languages and platforms to create VibeCheck. ## What's next for VibeCheck The next feature we want to implement is to allow professors to monitor the progress of the class and potentially reach out to students who, based on the statistics tracked by our platform, indicate they struggled with the class material (whose identity is hidden from the professor unless they otherwise consent). Additionally, this data can be played back during lecture recordings so that viewers can identify parts of the lecture requiring careful attention. \*Github repo is not runnable because Google Cloud credentials are removed.
## Inspiration
In an unprecedented time of fear, isolation, and *can everyone see my screen?*, no one's life has been the same since COVID. We saw people come together to protect others, but also those who refused to wear cloth over their noses. We've come up with cutting-edge wearable technology to protect ourselves against the latter, because in 2022, no one wants anyone invading their personal space. Introducing the anti anti-masker mask, the solution to all your pandemic-related worries.
## What it does
The anti anti-masker mask is a wearable defense mechanism to protect yourself from COVID-19 mandate breakers. It detects if someone within 6 feet of you is wearing a mask or not, and if they dare be mask-less in your vicinity, the shooter mechanism will fire darts at them until they leave. Never worry about anti-maskers invading your personal space again!
## How we built it
The mask can be split into 3 main subsystems.
**The shooter/launcher**
The frame and mechanisms are entirely custom modeled and built using SolidWorks and FDM 3D printing technology. We also bought a NERF gun, and the NERF launcher is powered by a small LiPo battery and uses 2 brushless drone motors as flywheels. The darts are automatically loaded into the launcher by a rack and pinion mechanism driven by a servo, and the entire launcher is controlled by an Arduino Nano which receives serial communications from the laptop.
**Sensors and vision**
We used a single-point lidar to detect whether a non-mask-wearer is within 6 ft of the user. For the mask detection system, we use a downloadable app to stream live video to a web server where the processing takes place. Finally, for the vision processing, our OpenCV pipeline reads the data from the web server.
**Code**
Other than spending 9 hours trying to install OpenCV on a Raspberry Pi 🤡, the software was one of the most fun parts. To program the lidar, we used an open-source library that has premade methods that return the distance from the lidar to the next closest object. By checking if the lidar reading is between 500 and 1500 mm, we can ensure that a target who is not wearing a mask is within cough range (6 ft) before punishing them. The mask detection with OpenCV allowed us to find those public anti-maskers and then send a signal to the serial port. The Arduino then takes the signals and runs the motors to shoot the darts until the offender is gone.
## Challenges we ran into
The biggest challenge was working with the Pi Zero. Installing OpenCV was a struggle, the camera FPS was a struggle, the lidar was a struggle; you get the point. Because of this, we changed the project from the Raspberry Pi to an Arduino, but neither the Arduino Uno nor the Arduino Nano supported dual serial communication, so we had to downgrade to a VL53L0X lidar, which supports I2C, a protocol that the Nano supports. After downloading DFRobot's VL53L0X lidar library, we used their sample code to gather the distance measurement, which was used in the final project. Another challenge we faced was designing the feeding mechanism for our darts. We originally wanted to use a slider-crank mechanism; however, it was designed to be quite compact, and as a result the crank caused too much friction with the servo mount and the printed piece cracked. In our second iteration we used a rack and pinion design, which significantly reduced the lateral forces and actuated linearly very accurately; this was ultimately used in our final design.
## Accomplishments that we're proud of
We have an awesome working product that's super fun to play with / terrorize your friends with. The shooter, albeit after many painful hours of getting it working, worked SO WELL, and the fact that we adapted and ended up with robust and consistently working software was a huge W as well.
## What we learned
Install ur python libraries before the hackathon starts 😢 but also interfacing with lidars, making wearables, all that good stuff.
## What's next for Anti Anti-Masker Mask
We would want to add dart targeting and a turret to track victims. During our prototyping process we explored running the separate flywheels at different speeds to try to curve the dart; this would have ensured more accurate shots at our 2 meter range. Ultimately we did not have time to finish this process, however we would love to explore it in the future. Improve wearability: replace the laptop with something like a Jetson or a Pi, and maybe try to shrink the dart shooter or create a more compact "punishment" device. Try to mount it all to one clothing item instead of 2.5.
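For illustration, here is a hedged Python sketch of the decision logic in the "Code" subsection above, collapsed into one place: fire only when someone without a mask is inside "cough range" (a lidar reading between 500 and 1500 mm). In the actual build the VL53L0X lidar is read on the Arduino over I2C, so `read_distance_mm()`, the serial port, and the one-line message protocol here are placeholders, not real APIs from this project.

```python
# Combined range + mask check, sending a command to the Arduino over serial.
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # placeholder port

def read_distance_mm() -> int:
    # Placeholder: in the real build the VL53L0X is read on the Arduino side.
    return 1200

def maybe_fire(mask_detected: bool) -> None:
    distance = read_distance_mm()
    in_cough_range = 500 <= distance <= 1500  # roughly within 6 ft
    if in_cough_range and not mask_detected:
        arduino.write(b"FIRE\n")  # firmware would spin up the flywheels
    else:
        arduino.write(b"HOLD\n")
```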
## Inspiration
For most of the population, picking out clothes to purchase can be a relatively hassle-free job, something we tend to take for granted. All it takes is a stroll down a mall or a scroll down a site for us to have a full shopping cart, ready to be purchased. In contrast, for the 20 million people in the US who are visually impaired, the task of picking clothes to buy can be incredibly daunting and stressful. We were inspired by the struggles of a woman who is blind, as described on her blog. She wrote about how much of her shopping experience was marred by a lack of accommodations for those who are visually impaired. Even online shopping, an experience created and celebrated for its convenience, is made difficult, as some of the most popular retail sites lack accessibility options, like alt text. Without any easy way to remotely evaluate the color, texture, size, fit, and style of a garment, blind users are forced to rely on other people, limit the possibilities of their creativity, or simply take a stab in the dark when it comes to buying garments. But shopping should be an enjoyable, carefree pastime for everyone, not one blocked by an accessibility barrier. That is where moody fits comes in.
## What it does
Our workflow starts with Hume AI's incredible Empathic Voice Interface technology listening to a user speak and internally outputting the strongest emotions present in the user's voice. These extracted emotions are then fed to an OpenAI model, which searches through our database and produces product recommendations. Once product recommendations are collected, they are shown to the user, allowing the user to either 'swipe left' (reject) or 'swipe right' (add to cart) on them. Finally, the user is able to view all favorite products in their shopping cart, with easy access to product vendors and descriptions. Voice functionality then allows the user's spoken response to be recorded and analyzed, and based on that verbal response, the appropriate action is taken with that specific product. Our use of Hume AI throughout the project ensures that accessibility is not a problem when it comes to moody fits.
## How we built it
We employed React for our frontend and Python for our backend infrastructure. Our approach included integrating Hume AI as the empathetic AI agent and leveraging the OpenAI API to accurately match user emotions with personalized product recommendations.
## Challenges we ran into
Implementing the Hume API was difficult and required a lot of experimenting in the playground before implementing it in our project.
## Accomplishments that we're proud of
Despite the initial challenge of lacking access to open-source data for e-commerce apparel, we overcame this hurdle by scraping the internet to compile a robust database of products available for users to explore.
## What we learned
We learned a lot about making API calls and how the front end and back end interact with each other, especially from using Flask for our Python backend.
## What's next for moody fits
We'd like to open-source our platform and continue to add more functionality. We also plan on improving the UX by integrating an empathetic AI agent to verbally present the recommendations to users, rather than the existing monotonous voice.
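A minimal sketch of the Flask backend step in moody fits' "How we built it" section: receive the top emotions (as produced by Hume's analysis upstream) and map them to products. The route name, emotion-to-style mapping, and product list are illustrative placeholders, not the actual scraped database or the team's matching logic.

```python
# Toy Flask endpoint mapping detected emotions to product recommendations.
from flask import Flask, request, jsonify

app = Flask(__name__)

PRODUCTS = [
    {"name": "Cozy knit sweater", "mood": "calmness"},
    {"name": "Bright windbreaker", "mood": "excitement"},
    {"name": "Classic denim jacket", "mood": "confidence"},
]

@app.route("/recommend", methods=["POST"])
def recommend():
    payload = request.get_json(silent=True) or {}
    emotions = payload.get("emotions", [])  # e.g. ["excitement", "joy"]
    matches = [p for p in PRODUCTS if p["mood"] in emotions]
    return jsonify(matches or PRODUCTS)  # fall back to everything if no match

if __name__ == "__main__":
    app.run(debug=True)
```

The React frontend would then render these matches as swipeable cards, with the voice layer reading them aloud.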
## Inspiration
Google, as part of their customary April Fools' prank, made a device that would tell a person about their style! We liked the idea of detecting style, and wanted to actually use it to make people look cool!
## What it does
It is a mirror that is capable of projecting onto its surface. We use this to virtually display images on the mirror, and in particular to virtually project your clothes onto your body. **The mirror knows all the clothes you have in your wardrobe, be it T-shirts, shirts, jackets, or lowers. It then uses our custom algorithm to suggest an outfit for the day!** The mirror is capable of matching different colors. Our algorithm is able to distinguish which colors look good together and which don't. It also has the added advantage of using the current temperature and weather conditions to recommend the ideal type of clothes to be worn.
## How we built it
We started by making the hardware of the mirror. We took an old LCD monitor and mounted a **two-way glass on top of it**. It enables the viewer to see their reflection along with a partial image from the display behind it.
Then we started the software implementation. Our software uses the **Google Cloud Vision API** to detect the "upper body" and "lower body" and give us the coordinates for each. We use these coordinates to mask images of the clothes recommended by our algorithm on top of the viewer's body. The coordinates from the Google Cloud Vision API are passed to **Unity**, which enables optimal placement of the image on the body.
Then we started the implementation of our algorithm, which suggests clothes from the wardrobe. Currently, the algorithm uses two methods to suggest a combination:
* Color matching - We match color combinations based on presets defined using the data from **EffortlessGent.com**
* Weather prediction - We use the **openweathermap API** to fetch the predicted temperature. If it exceeds a threshold, thinner clothes are suggested
Finally, we implemented a basic clothes recommendation and transaction system where the mirror suggests to the user which clothes to buy, with the transactions verified by **CapitalOne's Purchase API**.
## Challenges we ran into
**Recognizing and segregating the human body** into the upper and lower half to superimpose two different images was a major challenge. The Google Cloud Vision API helped a lot with this. Integrating it with OpenCV and Unity for real-time detection was also one of the challenges. Deciding which colors look good with each other and which combinations can be used was also a significant challenge. Finally, we went forward with one of the most widely accepted patterns from EffortlessGent.
## What's next for TRYOUR
A lot!!! We had a lot of ideas in mind to improve our algorithm's efficiency, but due to time constraints, we were not able to pull them off! In the future, we can use **Pinterest and Tag-Walk** to scrape the latest designs and trends available in the market and suggest something similar. TRYOUR can also be developed into a complete platform where the mirror will **suggest clothes that a user can buy to enrich the experience and stay up to date with current fashion trends**. With a single gesture, the user can place an order for the clothes, which is automatically reflected in their digital wardrobe.
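As a small illustration of TRYOUR's weather-prediction method, here is a hedged Python sketch that queries OpenWeatherMap's current-weather endpoint and suggests lighter or warmer clothes around a temperature threshold. The API key, city, and 20 °C cutoff are placeholders; TRYOUR's actual threshold and integration are not shown in the write-up.

```python
# Temperature-based clothing-weight suggestion via OpenWeatherMap.
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder

def clothing_weight(city: str, threshold_c: float = 20.0) -> str:
    """Return 'thin' when the current temperature exceeds the threshold."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": API_KEY, "units": "metric"},
        timeout=10,
    )
    temp = resp.json()["main"]["temp"]
    return "thin" if temp > threshold_c else "warm"

print("Suggested clothing weight:", clothing_weight("Mumbai"))
```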
## Inspiration for Creating sketch-it
Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we are inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more than ever we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives.
## What it does
Sketch-it accepts any image and breaks down how you can sketch that image into 15 easy-to-follow steps, so that you can follow along one line at a time.
## How we built it
On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server. On the backend, we used the Python libraries scikit-image and Matplotlib to create visualizations of the lines that make up the image. We broke the process down into frames and adjusted the features of the image to progressively create a more detailed image.
## Challenges we ran into
We initially had some issues with scikit-image, as it was our first time using it, but we soon found our way around the import errors and were able to utilize it effectively.
## Accomplishments that we're proud of
Challenging ourselves to use frameworks and libraries we hadn't used before and grinding the project through until the end! 😎
## What we learned
We learned a lot about personal working styles, the integration of different components on the front end and back end, as well as some new possible projects we would want to try out in the future!
## What's next for sketch-it
Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience!
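A condensed Python sketch of the progressive-detail idea behind sketch-it: run edge detection at progressively finer scales so each of the 15 steps reveals more of the drawing. The sigma schedule and Canny-based approach are assumptions for illustration; sketch-it's actual feature adjustments may differ.

```python
# Generate 15 progressively more detailed line-art frames from one image.
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, feature

def make_steps(image_path: str, steps: int = 15) -> None:
    gray = color.rgb2gray(io.imread(image_path))
    sigmas = np.linspace(6.0, 1.0, steps)  # large sigma = coarse outlines first
    accumulated = np.zeros_like(gray, dtype=bool)
    for i, sigma in enumerate(sigmas, start=1):
        accumulated |= feature.canny(gray, sigma=sigma)  # add finer edges each step
        # Save the inverted edge map so lines appear dark on a light background.
        plt.imsave(f"step_{i:02d}.png", (~accumulated).astype(float), cmap="gray")

make_steps("input.jpg")
```

Stitching these saved frames together is also a natural starting point for the planned video export feature.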
# Pitch Every time you throw trash in the recycling, you either spoil an entire bin of recyclables, or city workers and multi-million dollar machines separate the trash out for you. We want to create a much more efficient way to sort garbage that also trains people to sort correctly and provides meaningful data on sorting statistics. Our technology uses image recognition to identify the waste and opens the lid of the correct bin. When the image recognizer does not recognize the item, it opens all bins and trusts the user to deposit it. It also records the number of times a lid has been opened to estimate what and how much is in each bin. The statistics would have many applications. Since we display the proportion of all garbage in each bin, it will motivate people to compost and recycle more. It will also allow cities to recognize when a bin is full based on how much it has collected, allowing garbage trucks to optimize their routes. In addition, information about what items are commonly thrown into the trash would be useful to material engineers who can design recyclable versions of those products. Future improvements include improved speed and reliability, IOTA blockchain integration, facial recognition for personalized statistics, and automatic self-learning. # How it works 1. Raspberry Pi uses webcam and opencv to look for objects 2. When an object is detected the pi sends the image to the server 3. Server sends image to cloud image recognition services (Amazon Rekognition & Microsoft Azure) and determines which bin should be open 4. Server stores information and statistics in a database 5. Raspberry Pi gets response back from server and moves appropriate bin
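A hedged Python sketch of steps 1, 2, and 5 of the flow above, from the Raspberry Pi's point of view: grab a frame, send it to the classification server, and act on the bin it names. The server URL, response format, and bin names are assumptions for illustration, not the project's actual protocol.

```python
# Raspberry Pi side: capture a frame, POST it, and read back the chosen bin.
import cv2
import requests

SERVER_URL = "http://example.com/classify"  # placeholder endpoint

def classify_and_open_bin() -> str:
    camera = cv2.VideoCapture(0)
    ok, frame = camera.read()
    camera.release()
    if not ok:
        raise RuntimeError("could not read from webcam")
    _, jpeg = cv2.imencode(".jpg", frame)
    resp = requests.post(SERVER_URL, files={"image": jpeg.tobytes()}, timeout=15)
    bin_name = resp.json().get("bin", "all")  # "recycle", "compost", "trash", or "all"
    # A real build would drive the lid servo for `bin_name` here and log the count.
    return bin_name

print("Opening bin:", classify_and_open_bin())
```

Opening every lid when the response is "all" mirrors the fallback described above for unrecognized items.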
## Inspiration
Canadians produce more garbage per capita than any other country on earth, with the United States ranking third in the world. In fact, Canadians generate approximately 31 million tonnes of garbage a year. According to the Environmental Protection Agency, 75% of this waste is recyclable, yet only 30% of it is recycled. In order to increase this recycling rate and reduce our environmental impact, we were inspired to propose a solution through automating waste sorting.
## What it does
Our vision takes control away from the user and lets the machine do the thinking when it comes to waste disposal! By showing our app a type of waste through the webcam, we detect and classify the category of waste as either recyclable, compost, or landfill. From there, the appropriate compartment is opened to ensure that the right waste gets to the right place!
## How we built it
Using TensorFlow and object detection, a Python program analyzes the webcam image input and classifies the objects shown. The TensorFlow data is then collected and pushed to our MongoDB Atlas database via Google Cloud. For this project, we used machine learning with a single-shot detector (SSD) model to maintain a balance between accuracy and speed. For the hardware, an Arduino 101 and a stepper motor were responsible for manipulating the position of the lid and opening the appropriate compartment.
## Challenges we ran into
We had many issues with training our ML models on Google Cloud, due to the meager resources provided by Google. Another issue we encountered was finding the right datasets, due to the novelty of our product. Because of these setbacks, we resorted to modifying a TensorFlow-provided model.
## Accomplishments that I'm proud of
We managed to work through difficulties and learned a lot during the process! We learned to connect TensorFlow, Arduino, MongoDB, and Express.js to create a synergistic project.
## What's next for Trash Code
In the future, we aim to create a mobile app for improved accessibility and to create a fully custom-trained ML model. We also hope to design a fully functional full-sized prototype with the Arduino.
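Illustrative Python glue code for the Trash Code pipeline described above: map a detected label to a bin, log the event to MongoDB, and tell the Arduino which compartment to open. The label-to-bin table, connection string, serial port, and one-byte protocol are all placeholders, not the team's actual implementation.

```python
# Glue between the detector output, the database, and the Arduino.
import serial  # pyserial
from pymongo import MongoClient

BIN_FOR_LABEL = {
    "bottle": "recycle", "can": "recycle",
    "banana": "compost", "apple": "compost",
}
BIN_CODES = {"recycle": b"R", "compost": b"C", "landfill": b"L"}

client = MongoClient("mongodb://localhost:27017")   # stand-in for the Atlas URI
collection = client.trash.events
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # placeholder port

def handle_detection(label: str) -> None:
    """Route one detected object to its bin and record the event."""
    bin_name = BIN_FOR_LABEL.get(label, "landfill")  # unknown items go to landfill
    collection.insert_one({"label": label, "bin": bin_name})
    arduino.write(BIN_CODES[bin_name])  # firmware turns the stepper to that lid
```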
# DriveWise: Building a Safer Future in Route Planning Motor vehicle crashes are the leading cause of death among teens, with over a third of teen fatalities resulting from traffic accidents. This represents one of the most pressing public safety issues today. While many route-planning algorithms exist, most prioritize speed over safety, often neglecting the inherent risks associated with certain routes. We set out to create a route-planning app that leverages past accident data to help users navigate safer routes. ## Inspiration The inexperience of young drivers contributes to the sharp rise in accidents and deaths as can be seen in the figure below. ![Injuries and Deaths in Motor Vehicle Crashes](https://raw.githubusercontent.com/pranavponnusamy/Drivewise/refs/heads/main/AccidentsByAge.webp) This issue is further intensified by challenging driving conditions, road hazards, and the lack of real-time risk assessment tools. With limited access to information about accident-prone areas and little experience on the road, new drivers often unknowingly enter high-risk zones—something traditional route planners like Waze or Google Maps fail to address. However, new drivers are often willing to sacrifice speed for safer, less-traveled routes. Addressing this gap requires providing insights that promote safer driving choices. ## What It Does We developed **DriveWise**, a route-planning app that empowers users to make informed decisions about the safest routes. The app analyzes 22 years of historical accident data and utilizes a modified A\* heuristic for personalized planning. Based on this data, it suggests alternative routes that are statistically safer, tailoring recommendations to the driver’s skill level. By factoring in variables such as driver skill, accident density, and turn complexity, we aim to create a comprehensive tool that prioritizes road safety above all else. ### How It Works Our route-planning algorithm is novel in its incorporation of historical accident data directly into the routing process. Traditional algorithms like those used by Google Maps or Waze prioritize the shortest or fastest routes, often overlooking safety considerations. **DriveWise** integrates safety metrics into the edge weights of the routing graph, allowing the A\* algorithm to favor routes with lower accident risk. **Key components of our algorithm include:** * **Accident Density Mapping**: We map over 3.1 million historical accident data points to the road network using spatial queries. Each road segment is assigned an accident count based on nearby accidents. * **Turn Penalties**: Sharp turns are more challenging for new drivers and have been shown to contribute to unsafe routes. We calculate turn angles between road segments and apply penalties for turns exceeding a certain threshold. * **Skillfulness Metric**: We introduce a driver skill level parameter that adjusts the influence of accident risk and turn penalties on route selection. New drivers are guided through safer, simpler routes, while experienced drivers receive more direct paths. * **Risk-Aware Heuristic**: Unlike traditional A\* implementations that use distance-based heuristics, we modify the heuristic to account for accident density, further steering the route away from high-risk areas. By integrating these elements, **DriveWise** offers personalized route recommendations that adapt as the driver's skill level increases, ultimately aiming to reduce the likelihood of accidents for new drivers. 
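As a toy, self-contained illustration of the safety-weighted routing idea just described, here is a Python sketch using plain `networkx` rather than the full OSMnx pipeline; the accident weighting, skill scaling, and tiny four-node graph are invented for demonstration and are not DriveWise's actual parameters.

```python
# Safety-weighted A*: edge cost = length + accident penalty scaled by skill.
import math
import networkx as nx

def edge_cost(length_m: float, accidents: int, skill: float) -> float:
    # Less skilled drivers (skill near 0) weight accident history more heavily.
    accident_weight = 200.0 * (1.0 - skill)
    return length_m + accident_weight * accidents

coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
edges = [  # (u, v, length in metres, historical accident count)
    ("A", "B", 1000, 5), ("B", "D", 1200, 4),
    ("A", "C", 1500, 0), ("C", "D", 1400, 0),
]

G = nx.Graph()
for u, v, length, accidents in edges:
    G.add_edge(u, v, weight=edge_cost(length, accidents, skill=0.2))

def heuristic(u, v):
    """Straight-line distance estimate, scaled to rough metres per grid unit."""
    return math.dist(coords[u], coords[v]) * 1000

print(nx.astar_path(G, "A", "D", heuristic=heuristic, weight="weight"))
```

With a low skill value, the accident-free detour A-C-D wins over the shorter but crash-heavy A-B-D, which is exactly the trade-off described above; raising `skill` toward 1 collapses the cost back to plain distance.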
## Accomplishments We're Proud Of We are proud of developing an algorithm that not only works effectively but also has the potential to make a real difference in road safety. Creating a route-planning tool that factors in historical accident data is, to our knowledge, a novel approach in this domain. We successfully combined complex data analysis with an intuitive user interface, resulting in an app that is both powerful and user-friendly. We are also kinda proud about our website. Learn more about us at [idontwannadie.lol](https://idontwannadie.lol/) ## Challenges We Faced This was one of our first hackathons, and we faced several challenges. Having never deployed anything before, we spent a significant amount of time learning, debugging, and fixing deployment issues. Designing the algorithm to analyze accident patterns while keeping the route planning relatively simple added considerable complexity. We had to balance predictive analytics with real-world usability, ensuring that the app remained intuitive while delivering sophisticated results. Another challenge was creating a user interface that encourages engagement without overwhelming the driver. We wanted users to trust the app’s recommendations without feeling burdened by excessive information. Striking the right balance between simplicity and effectiveness through gamified metrics proved to be an elegant solution. ## What We Learned We learned a great deal about integrating large datasets into real-time applications, the complexities of route optimization algorithms, and the importance of user-centric design. Working with the OpenStreetMap and OSMnx libraries required a deep dive into geospatial analysis, which was both challenging and rewarding. We also discovered the joys and pains of deploying an application, from server configurations to domain name setups. ## Future Plans In the future, we see the potential for **DriveWise** to go beyond individual drivers and benefit broader communities. Urban planners, law enforcement agencies, and policymakers could use aggregated data to identify high-risk areas and make informed decisions about where to invest in road safety improvements. By expanding our dataset and refining our algorithms, we aim to make **DriveWise** functional in more regions and for a wider audience. ## Links * **Paper**: [Mathematical Background](https://drive.google.com/drive/folders/1Q9MRjBWQtXKwtlzObdAxtfBpXgLR7yfQ?usp=sharing) * **GitHub**: [DriveWise Repository](https://github.com/pranavponnusamy/Drivewise) * **Website**: [idontwannadie.lol](https://idontwannadie.lol/) * **Video Demo**: [DriveWise Demo](https://www.veed.io/view/81d727bc-ed6b-4bba-95c1-97ed48b1738d?panel=share)
partial
## Inspiration During the coronavirus pandemic, our healthcare systems collapsed due to the many loopholes in them; more people needing care and fewer hospitals available was one of them. Due to a lack of awareness about common diseases, people find themselves in panicked and stressful situations. Our idea for Swasthya stemmed from these problems, so that people can have a basic idea about common diseases. ## What it does Swasthya (meaning "health" in English) is a niche blog that covers health topics, related content from the health industry, and the general community. Here, diverse users collaborate to seek and/or contribute health content within the standard guidelines. We have covered six major domains: heart, skin, mental health, orthopedics, pulmonology, and eyes. A specialty of our website is a "book of hope" in which the stories of lion-hearted people who survived grave diseases are narrated. We also provide a crowdfunding page; donations through our website will help various NGOs around the world. ## How we built it After forming the idea, we started working with various tech stacks like HTML, CSS, and JavaScript, and hence our website was formed. ## Challenges we ran into The toughest challenge was arriving at the ideation behind the website (how to make it beneficial for a larger section of society while keeping an easy interface) and, of course, the limited time frame in which this project had to be completed. ## Accomplishments that we're proud of Completing it in a limited time span, working with a new team of perfect strangers, and arriving at the ideation. ## What we learned Time management (on a serious note), working in a team, and, of course, the tech stacks that we used in the project. ## What's next for SWASTHYA We are planning to make Swasthya more interactive through chatbots, and the main feature we want to add is live video conferencing between troubled people and doctors all around the world. We are also planning to build a Swasthya community where survivors can share their stories and people can listen to them.
## Inspiration As college freshmen, we noticed that our team and many of our classmates often don't drink enough water throughout the day, mostly because we forget to stay hydrated. Since most students own a water bottle, we thought of creating a water bottle modification that could detect how much water one drinks and display it both on the bottle and in an app in order to help people drink more water. Furthermore, the app would store the history of water consumption and give more analytics. ## What it does An ultrasonic sensor is mounted to the lid of the water bottle and measures the distance from the lid to the current water level. Based on the diameter of the water bottle, the change in water level associated with drinking can therefore be converted into a volume of water consumed. The ultrasonic sensor communicates with an Arduino Nano, which then interfaces with the user through push buttons and an LCD screen. Furthermore, a Bluetooth module relays data to a phone app, which computes more advanced statistics about water consumption and health based on user height and weight. ## How we built it We started by setting high-level goals for the project: it should measure water level, communicate with the phone, and display statistics. After that we began designing mounts and enclosures for the electronics and sensors, which were then modeled in SolidWorks and 3D-printed. Additionally, we designed our necessary circuits in Fritzing, using an Arduino Nano, an HC-06 Bluetooth module, an HC-SR04 ultrasonic sensor, a 9V battery, and a 16x2 LCD screen. The app was created in Android Studio, where we made three separate activities: a main screen, a settings menu, and a history screen. We coded firmware for the Arduino using the Arduino IDE to process values from the ultrasonic sensor, relay data to the phone, and display data on the LCD. ## Challenges we ran into We had to make an ergonomic and compact design since we were constrained to mounting everything on a water bottle. That proved to be difficult when designing our circuits, since we had to fit a large number of sensors and a lot of functionality into small compartments. Furthermore, we had to figure out algorithms to differentiate between different user activities such as opening the water bottle, drinking, re-capping the bottle, or even filling the bottle up. ## Accomplishments that we're proud of We were able to produce a system that detects the amount of water consumed and relays that information to a phone. Furthermore, it occupies a relatively small space and doesn't interfere with regular usage of the bottle. ## What we learned We learned that it is very important to consider space constraints when designing circuits. Furthermore, sensors can have inherent variability that we must account for. Lastly, we learned that CalHacks 6.0 was a lot of fun ;)) ## What's next for HydroHelper We want to compact our electronics further and perhaps use PCBs in order to reduce the space taken up by breadboards. Furthermore, we could improve the mounts for our LCD and create sheaths for our wires.
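The firmware itself runs on the Arduino in C++, but the core volume math described above is simple enough to show in a few lines. The sketch below is a rough Python illustration (not the team's code) of converting an ultrasonic distance change into millilitres, plus the kind of threshold logic used to tell a drink apart from a refill; the bottle diameter and thresholds are made-up values.

```python
# Illustrative only: converting ultrasonic lid-to-water distances into volume.
# The real HydroHelper logic runs as Arduino (C++) firmware on the Nano.
import math

BOTTLE_DIAMETER_CM = 7.0                                  # assumed inner diameter
ML_PER_CM = math.pi * (BOTTLE_DIAMETER_CM / 2) ** 2       # cross-section in cm^2 => mL per cm of height

def volume_change_ml(prev_distance_cm, new_distance_cm):
    """A drop in water level (distance from lid grows) maps to volume consumed."""
    return (new_distance_cm - prev_distance_cm) * ML_PER_CM

def classify_event(prev_distance_cm, new_distance_cm, noise_cm=0.3):
    """Crude event classification: drink, refill, or sensor noise."""
    delta = new_distance_cm - prev_distance_cm
    if delta > noise_cm:
        return "drink", volume_change_ml(prev_distance_cm, new_distance_cm)
    if delta < -noise_cm:
        return "refill", -volume_change_ml(prev_distance_cm, new_distance_cm)
    return "no_change", 0.0

if __name__ == "__main__":
    readings = [5.0, 5.1, 7.6, 7.5, 3.0]   # lid-to-water distances over time (cm)
    for prev, new in zip(readings, readings[1:]):
        event, ml = classify_event(prev, new)
        print(f"{prev:.1f} -> {new:.1f} cm: {event} ({ml:.0f} mL)")
```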
## Inspiration TL;DR: Cut Lines, Cut Time. With the overflowing amount of information and the limited time that we have, it is important to efficiently distribute the time and get the most out of it. With people scrolling short videos endlessly on the most popular apps such as Tiktok, Instagram, and Youtube, we thought, why not provide a similar service but for texts that can not only be fun but also productive? As a group of college students occupied with not only school but also hobbies and goals, we envisioned an app that can summarize any kind of long text effectively so that while we can get the essence of the text, we can also spend more time on other important things. Without having to ask someone to provide a TL;DR for us, we wanted to generate it ourselves in a matter of few seconds, which will help us get the big picture of the text. TL;DR is applicable anywhere, from social media such as Reddit and Messenger to Wikipedia and academic journals, that are able to pick out the most essentials in just one click. Ever on a crunch for time to read a 10-page research paper? Want to stay updated on the news but are too lazy to actually read the whole article? Got sent a box of texts from a friend and just want to know the gist of it. TL;DR: this is the app for you! ## What it does TL;DR helps summarize passages and articles into more short forms of writing, making it easier (and faster) to read on the go. ## How we built it We started by prototyping the project on Figma and discussing our vision for TL;DR. From there, we separated our unique roles within the team into NLP, frontend, and backend. We utilized a plethora of services provided by the sponsors for CalHacks, using Azure to host much of our API and CockRoachDB Serverless to seamlessly integrate persistent data on the cloud. We also utilized Vercel’s Edge network to allow our application to quickly be visited by all people across the globe. ## Web/Extension The minimalistic user interface portraying our goal of simplification provides a web interface and a handy extension accessible by a simple right click. Simply select the text, and it will instantly be shortened and stored for future use! ## Backend and connections The backend was built with Flask via Python and hosted on Microsoft Azure as an App Service. GitHub Actions were also used in this process to deploy our code from GitHub itself to Microsoft Azure. Cockroach Lab’s DB to store our user data (email, phone number, and password) and cached summaries of past TL;DR. Twilio is also used for user authentication as well as exporting a TL;DR from your laptop to your phone. We utilized Co:here’s APIs extensively, making use of the text summarization and sentiment classifier endpoints. Leveraging Beautiful Soup’s capability to extract information, these pair together to generate the output needed by our app. In addition, we went above and beyond to better the NLP landscape by allowing our users to make modifications to Co:here’s generations, which we can send to Co:here. Through this, we are empowering a community of users that help support the development of accessible ML and get their work done as well - win/win! ## Challenges we ran into Every successful project comes with its own challenges, and we sure had to overcome some bugs and obstacles along the way! First, we took our time settling on the perfect idea, as we all wanted to create something that really impacts the lives of fellow students and the general population. 
Although our project is “quick”, we were slow to make sure that everything was thoroughly thought through. In addition, we spent some time debugging our database connection, where a combination of user error and inexperience stumped our progress. However, with a bit of digging around and pair programming, we managed to solve all these problems and learned so much along the way! ## Accomplishments that we're proud of The integration of different APIs into one platform was a major accomplishment since the numerous code bases that were brought into play and exchanged data had to be done carefully. It did take a while but felt amazing when it all worked out. ## What we learned From this experience, we learned a lot about using new technologies, especially the APIs and servers provided by the sponsors, which helped us be creative in how we implement them in each part of our backend and analysis. We have also learned the power of collaboration and creating a better product through team synergy and combining our creativity and knowledge together. ## What's next for TL;DR We have so much in store for TL;DR! Specifically, we were looking to support generating TL;DR for youtube videos (using the captions API or GCP’s speech-to-text service). In addition, we are always striving for the best user experience possible and will find new ways to make the app more enjoyable. This includes allowing users to make more editions and moving to more platforms!
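As a rough picture of the scrape-then-summarize flow described in the backend section above (not the team's actual code), the sketch below pulls an article's text with requests and Beautiful Soup and then hands it to a summarizer. The Co:here call is deliberately left as a stub, since the exact SDK endpoint and parameters the team used aren't shown here; the stand-in simply keeps the first few sentences so the script still runs end to end.

```python
# Illustrative sketch of a TL;DR-style scrape-and-summarize pipeline (not the real code).
import re
import requests
from bs4 import BeautifulSoup

def extract_text(url: str) -> str:
    """Fetch a page and pull out its visible text with Beautiful Soup."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()                      # drop non-content elements
    return " ".join(soup.get_text(separator=" ").split())

def summarize(text: str, max_sentences: int = 3) -> str:
    """Stand-in for the Co:here summarization call used by the real app.
    Here we simply keep the first few sentences so the example is runnable."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return " ".join(sentences[:max_sentences])

if __name__ == "__main__":
    article_url = "https://en.wikipedia.org/wiki/Hackathon"   # any article-like page
    text = extract_text(article_url)
    print("TL;DR:", summarize(text))
```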
losing
## Inspiration Often times health information is presented in a very clinical manner that is unfriendly for kids, there's too much medical jargon and children have difficulty engaging with the information. We wanted to turn learning about things like diabetes, first aid, and other medical ailments into a fun and interactive experience by utilizing superheroes! Our goal is to help spread awareness and educate students about various health topics in a fun and exciting manor. ## What it does Marvel Medical Dictionary (MMD) is an mobile augmented reality experience that allows users to learn about different health topics from their favorite Marvel superheroes. After searching a topic like diabetes or spider bites, MMD utilizes natural language processing using data from the Marvel API to select a super hero that is closely related to the search query. For example, if our bite-sized hero/heroine searched for spider bites Spider-Man would be there to provide easily understandable information. Users are able to "watch" Spider-man and other Avengers on their mobile device and learn about different types of health issues. ## How we built it We built MMD on the Unity Game Engine, C#, and imported 3D models found online. We also utilized flask's python framework to retrieve health information, Marvel API data, as well as run our natural language processing code. ## Challenges we ran into A little bit of everything. We have limited experience with Unity and have never developed in AR before. This is also our first time working with web development and had a lot of \_ fun \_ trying to find bugs in our python code as well as learning the beauty of JavaScript and its relation with HTML. ## What's next for Marvel Medical Dictionary Moving forward we would love to integrate more Marvel character models and add more dynamic movements and animations to the AR experience. We also want to more closely integrate engagement principles to increase information retention in regards to health knowledge. Although, MMD was just an idea created on a whim, in the future it could bring awareness and health education to young children all across the world.
# **MedKnight** #### Professional medical care in seconds, when the seconds matter ## Inspiration Natural disasters often put emergency medical responders (EMTs, paramedics, combat medics, etc.) in positions where they must assume responsibilities beyond the scope of their day-to-day job. Inspired by this reality, we created MedKnight, an AR solution designed to empower first responders. By leveraging cutting-edge computer vision and AR technology, MedKnight bridges the gap in medical expertise, providing first responders with life-saving guidance when every second counts. ## What it does MedKnight helps first responders perform critical, time-sensitive medical procedures on the scene by offering personalized, step-by-step assistance. The system ensures that even "out-of-scope" operations can be executed with greater confidence. MedKnight also integrates safety protocols to warn users if they deviate from the correct procedure and includes a streamlined dashboard that streams the responder’s field of view (FOV) to offsite medical professionals for additional support and oversight. ## How we built it We built MedKnight using a combination of AR and AI technologies to create a seamless, real-time assistant: * **Meta Quest 3**: Provides live video feed from the first responder’s FOV using a Meta SDK within Unity for an integrated environment. * **OpenAI (GPT models)**: Handles real-time response generation, offering dynamic, contextual assistance throughout procedures. * **Dall-E**: Generates visual references and instructions to guide first responders through complex tasks. * **Deepgram**: Enables speech-to-text and text-to-speech conversion, creating an emotional and human-like interaction with the user during critical moments. * **Fetch.ai**: Manages our system with LLM-based agents, facilitating task automation and improving system performance through iterative feedback. * **Flask (Python)**: Manages the backend, connecting all systems with a custom-built API. * **SingleStore**: Powers our database for efficient and scalable data storage. ## SingleStore We used SingleStore as our database solution for efficient storage and retrieval of critical information. It allowed us to store chat logs between the user and the assistant, as well as performance logs that analyzed the user’s actions and determined whether they were about to deviate from the medical procedure. This data was then used to render the medical dashboard, providing real-time insights, and for internal API logic to ensure smooth interactions within our system. ## Fetch.ai Fetch.ai provided the framework that powered the agents driving our entire system design. With Fetch.ai, we developed an agent capable of dynamically responding to any situation the user presented. Their technology allowed us to easily integrate robust endpoints and REST APIs for seamless server interaction. One of the most valuable aspects of Fetch.ai was its ability to let us create and test performance-driven agents. We built two types of agents: one that automatically followed the entire procedure and another that responded based on manual input from the user. The flexibility of Fetch.ai’s framework enabled us to continuously refine and improve our agents with ease. ## Deepgram Deepgram gave us powerful, easy-to-use functionality for both text-to-speech and speech-to-text conversion. Their API was extremely user-friendly, and we were even able to integrate the speech-to-text feature directly into our Unity application. 
It was a smooth and efficient experience, allowing us to incorporate new, cutting-edge speech technologies that enhanced user interaction and made the process more intuitive. ## Challenges we ran into One major challenge was the limitation on accessing AR video streams from Meta devices due to privacy restrictions. To work around this, we used an external phone camera attached to the headset to capture the field of view. We also encountered microphone rendering issues, where data could be picked up in sandbox modes but not in the actual Virtual Development Environment, leading us to scale back our Meta integration. Additionally, managing REST API endpoints within Fetch.ai posed difficulties that we overcame through testing, and configuring SingleStore's firewall settings was tricky but eventually resolved. Despite these obstacles, we showcased our solutions as proof of concept. ## Accomplishments that we're proud of We’re proud of integrating multiple technologies into a cohesive solution that can genuinely assist first responders in life-or-death situations. Our use of cutting-edge AR, AI, and speech technologies allows MedKnight to provide real-time support while maintaining accuracy and safety. Successfully creating a prototype despite the hardware and API challenges was a significant achievement for the team, and was a grind till the last minute. We are also proud of developing an AR product as our team has never worked with AR/VR. ## What we learned Throughout this project, we learned how to efficiently combine multiple AI and AR technologies into a single, scalable solution. We also gained valuable insights into handling privacy restrictions and hardware limitations. Additionally, we learned about the importance of testing and refining agent-based systems using Fetch.ai to create robust and responsive automation. Our greatest learning take away however was how to manage such a robust backend with a lot of internal API calls. ## What's next for MedKnight Our next step is to expand MedKnight’s VR environment to include detailed 3D renderings of procedures, allowing users to actively visualize each step. We also plan to extend MedKnight’s capabilities to cover more medical applications and eventually explore other domains, such as cooking or automotive repair, where real-time procedural guidance can be similarly impactful.
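MedKnight's backend is a Flask API that ties the headset stream, the LLM, and SingleStore logging together. The snippet below is only a skeletal sketch of what such a route could look like: the endpoint name, request fields, and the stubbed `next_step` function are invented for illustration, and the SingleStore write is reduced to an in-memory list.

```python
# Skeletal illustration of a MedKnight-style assistance endpoint (not the real backend).
# In the actual project the reasoning call goes to OpenAI and logs are stored in SingleStore.
from flask import Flask, jsonify, request

app = Flask(__name__)
procedure_log = []   # stand-in for a SingleStore table of chat/performance logs

def next_step(procedure: str, observation: str) -> str:
    """Placeholder for the LLM-backed reasoning step."""
    return f"For '{procedure}', given '{observation}': proceed to the next step and monitor the patient."

@app.post("/assist")   # hypothetical route name
def assist():
    payload = request.get_json(force=True)
    procedure = payload.get("procedure", "unknown procedure")
    observation = payload.get("observation", "")
    guidance = next_step(procedure, observation)
    procedure_log.append({"procedure": procedure, "observation": observation, "guidance": guidance})
    return jsonify({"guidance": guidance, "steps_logged": len(procedure_log)})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```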
# BananaExpress A self-writing journal of your life, with superpowers! We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling! Features: * User photo --> unique question about that photo based on 3 creative techniques + Real time question generation based on (real-time) user journaling (and the rest of their writing)! + Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question! + Question-corpus matching - we search for good questions about the user's current topics * NLP on previous journal entries for sentiment analysis I love our front end - we've re-imagined how easy and futuristic journaling can be :) And, honestly, SO much more! Please come see! ♥️ from the Lotus team, Theint, Henry, Jason, Kastan
partial
## Inspiration The COVID-19 pandemic has changed the way we go about everyday errands and trips. Along with needing to plan around wait times, distance, and reviews for a location we may want to visit, we now also need to consider how many other people will be there and whether it's even a safe establishment to visit. *Planwise helps us plan our trips better.* ## What it does Planwise searches for the places around you that you want to visit and calculates a PlanScore that weighs the Google **reviews**, current **attendance** vs. usual attendance, **visits**, and **wait times**, so that locations that are rated highly, currently have fewer visitors than their usual weekly attendance, and have short wait times receive the highest scores. A location's PlanScore **changes by the hour** to give users the most up-to-date information about whether they should visit an establishment. Furthermore, Planwise also **flags** common types of places that are prone to promoting the spread of COVID-19, but still allows users to search for them in case they need to visit them for **essential work**. ## How we built it We built Planwise as a web app with Python, Flask, and HTML/CSS. We used the Google Places and Populartimes APIs to get and rank places. ## Challenges we ran into The hardest challenges weren't technical - they had more to do with our *algorithm* and considering the factors of the pandemic. Should we penalize an essential grocery store for being busy? Should we even display results for gyms in counties that have enforced shutdowns on them? Calculating the PlanScore was tough because a lot of places didn't have some of the information needed. We also spent some time considering which factors to weigh more heavily in the score. ## Accomplishments that we are proud of We're proud of being able to make an application that has actual use in our daily lives. Planwise makes our lives not just easier but **safer**. ## What we learned We learned a lot about location data and what features are relevant when ranking search results. ## What's next for Planwise We plan to further develop the web application and start a mobile version soon! We would also like to further **localize** advisory flags on search results depending on the county. For example, if a county has a strict lockdown, then Planwise should flag more types of places than in the average county.
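The exact PlanScore weights aren't spelled out above, so the function below is only a guess at the shape of the calculation: ratings push the score up, crowding relative to usual attendance and long waits push it down, and COVID-risky place types get flagged. All field names and weights are illustrative, not the values Planwise actually uses.

```python
# Illustrative PlanScore-style calculation; weights, thresholds, and field names are made up.
RISKY_TYPES = {"gym", "bar", "night_club"}   # example place types that get an advisory flag

def plan_score(rating, current_popularity, usual_popularity, wait_minutes, place_type):
    """Return a 0-100 score plus an advisory flag for COVID-risky place types."""
    rating_part = (rating / 5.0) * 40                       # up to 40 points for reviews
    crowding = current_popularity / max(usual_popularity, 1)
    crowding_part = max(0.0, 1.0 - crowding) * 40           # emptier than usual => more points
    wait_part = max(0.0, 1.0 - wait_minutes / 60.0) * 20    # shorter waits => more points
    score = round(rating_part + crowding_part + wait_part, 1)
    flagged = place_type in RISKY_TYPES
    return score, flagged

if __name__ == "__main__":
    print(plan_score(rating=4.6, current_popularity=20, usual_popularity=70,
                     wait_minutes=5, place_type="grocery_store"))   # high score, not flagged
    print(plan_score(rating=4.0, current_popularity=90, usual_popularity=60,
                     wait_minutes=30, place_type="gym"))            # low score, flagged
```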
## Inspiration People are increasingly aware of climate change but lack actionable steps. Everything in life has a carbon cost, but it's difficult to understand, measure, and mitigate. Information about carbon footprints of products is often inaccessible for the average consumer, and alternatives are time consuming to research and find. ## What it does With GreenWise, you can link email or upload receipts to analyze your purchases and suggest products with lower carbon footprints. By tracking your carbon usage, it helps you understand and improve your environmental impact. It provides detailed insights, recommends sustainable alternatives, and facilitates informed choices. ## How we built it We started by building a tool that utilizes computer vision to read information off of a receipt, an API to gather information about the products, and finally ChatGPT API to categorize each of the products. We also set up an alternative form of gathering information in which the user forwards digital receipts to a unique email. Once we finished the process of getting information into storage, we built a web scraper to gather the carbon footprints of thousands of items for sale in American stores, and built a database that contains these, along with AI-vectorized form of the product's description. Vectorizing the product titles allowed us to quickly judge the linguistic similarity of two products by doing a quick mathematical operation. We utilized this to make the application compare each product against the database, identifying products that are highly similar with a reduced carbon output. This web application was built with a Python Flask backend and Bootstrap for the frontend, and we utilize ChromaDB, a vector database that allowed us to efficiently query through vectorized data. ## Accomplishments that we're proud of In 24 hours, we built a fully functional web application that uses real data to provide real actionable insights that allow users to reduce their carbon footprint ## What's next for GreenWise We'll be expanding e-receipt integration to support more payment processors, making the app seamless for everyone, and forging partnerships with companies to promote eco-friendly products and services to our consumers [Join the waitlist for GreenWise!](https://dea15e7b.sibforms.com/serve/MUIFAK0jCI1y3xTZjQJtHyTwScsgr4HDzPffD9ChU5vseLTmKcygfzpBHo9k0w0nmwJUdzVs7lLEamSJw6p1ACs1ShDU0u4BFVHjriKyheBu65k_ruajP85fpkxSqlBW2LqXqlPr24Cr0s3sVzB2yVPzClq3PoTVAhh_V3I28BIZslZRP-piPn0LD8yqMpB6nAsXhuHSOXt8qRQY)
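To make the vector-similarity step concrete, here is a small sketch in the spirit of the description above, using ChromaDB's default embedding function to index product descriptions with a carbon-footprint value in the metadata and then querying for similar items with a lower footprint. The product names and carbon figures are invented, and the real GreenWise schema may differ.

```python
# Illustrative sketch of a GreenWise-style "find a lower-carbon alternative" lookup.
# Product names and carbon figures are made up; the real schema may differ.
import chromadb

client = chromadb.Client()                       # in-memory client; the real app persists data
products = client.create_collection(name="products")

products.add(
    ids=["p1", "p2", "p3"],
    documents=["ground beef 1 lb", "plant-based burger patties", "chicken breast 1 lb"],
    metadatas=[{"carbon_kg": 27.0}, {"carbon_kg": 3.5}, {"carbon_kg": 6.9}],
)

def greener_alternatives(purchased_item: str, purchased_carbon_kg: float, k: int = 3):
    """Query for semantically similar products and keep only the lower-carbon ones."""
    result = products.query(query_texts=[purchased_item], n_results=k)
    suggestions = []
    for doc, meta in zip(result["documents"][0], result["metadatas"][0]):
        if meta["carbon_kg"] < purchased_carbon_kg:
            suggestions.append((doc, meta["carbon_kg"]))
    return suggestions

if __name__ == "__main__":
    print(greener_alternatives("beef burger meat", purchased_carbon_kg=27.0))
```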
## Inspiration Inclusivity is the cornerstone of thriving communities. As we continue to grow and interact across various cultures, races, and genders, the need to foster diverse and welcoming environments becomes more crucial than ever. Our inspiration for Inclusivity Among Us stemmed from the desire to help individuals and organizations ensure their communication aligns with the values of diversity that are commonly overlooked. We wanted to create a tool that helps people make meaningful changes in how they speak and write, driving positive social impact in communities of all kinds. ## What it does Inclusivity Among Us is a tool designed to help users analyze their communication for inclusivity. It highlights non-inclusive language and provides specific tips for improvement, focusing on topics such as race, gender expression, disability, and educational attainment. The app provides: * An inclusivity rating (out of 100) to measure how inclusive the content is. * Specific changes to improve the inclusivity of the text, using color-coded highlights for non-inclusive phrases. * Multilingual support, allowing users to check content in various languages like English, Spanish, French, and more. * A downloadable report in PDF format, which summarizes the inclusivity rating, flagged text, and suggestions for improvement. ## How we built it We built the Python app using Streamlit for the user interface and integration with OpenAI’s GPT-3.5 to perform the inclusivity analysis. ## Challenges we ran into One of the primary challenges was ensuring that the feedback generated by the AI was both accurate and meaningful. Our prompt engineering skills had to be used to prevent the model from nitpicking trivial language choices and focus only on significant inclusivity issues. Another challenge was ensuring the app could handle text in multiple languages and still provide relevant suggestions, which required fine-tuning how the model interprets cultural nuances in language. ## Accomplishments that we're proud of We are proud of creating a usable tool that helps users improve their language in a meaningful way. The ability to dynamically highlight non-inclusive language, provide concise suggestions, and offer multilingual support are features that make the app impactful across various contexts. We are also proud of the seamless user experience, where anyone can simply paste content, check for inclusivity, and download a report within seconds. ## What we learned This was our first time using Streamlit and it surprised us with how seamless we were able to integrate other features into our app. The time saved from styling and implementing basic features allowed us to focus on refining the actual product. We also learned more about prompt engineering through a lot of trial and error, figuring out how to create the most effective instructions for the model. ## What's next for Inclusivity Among Us * Refine the AI suggestions further, ensuring the advice given is always contextually relevant and culturally sensitive. * Broaden the scope by including more categories of inclusivity, such as socio-economic status, mental health considerations, and age diversity. * Allow users to flag suggestions they find particularly helpful, building a feedback loop that continuously improves the tool. * Custom reports for organizations, offering deeper insights and strategies for making their communication more inclusive. 
* Explore the possibility of integrating with corporate communication tools like Slack or Gmail, allowing users to check inclusivity in real-time while drafting messages.
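A bare-bones sketch of the Streamlit-plus-GPT-3.5 flow described in the "How we built it" section above might look like the following. The prompt, the rating scale, and the use of the pre-1.0 `openai` SDK call style are assumptions for illustration; the actual app's prompt engineering, multilingual handling, and PDF report generation are more involved.

```python
# Minimal illustration of a Streamlit + GPT-3.5 inclusivity check (not the real app).
# Assumes the pre-1.0 `openai` SDK style and an OPENAI_API_KEY in the environment.
import os
import openai
import streamlit as st

openai.api_key = os.environ.get("OPENAI_API_KEY", "")

PROMPT = (
    "Rate the following text for inclusive language on a scale of 0-100, "
    "list any non-inclusive phrases, and suggest concise replacements:\n\n{text}"
)

st.title("Inclusivity checker (demo sketch)")
text = st.text_area("Paste your content here")

if st.button("Check inclusivity") and text.strip():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0.2,
    )
    st.markdown(response["choices"][0]["message"]["content"])
```

Saved as `app.py`, a sketch like this would be launched with `streamlit run app.py`.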
winning
## Inspiration Has your browser ever looked like this? ![](https://i.imgur.com/waCM1K0.png) ... or this? ![](https://i.imgur.com/WX2dTfz.png) Ours have, *all* the time. Regardless of who you are, you'll often find yourself working in a browser on not just one task but a variety of tasks. Whether its classes, projects, financials, research, personal hobbies -- there are many different, yet predictable, ways in which we open an endless amount of tabs for fear of forgetting a chunk of information that may someday be relevant. Origin aims to revolutionize your personal browsing experience -- one workspace at a time. ## What it does In a nutshell, Origin uses state-of-the-art **natural language processing** to identify personalized, smart **workspaces**. Each workspace is centered around a topic comprised of related tabs from your browsing history, and Origin provides your most recently visited tabs pertaining to that workspace and related future ones, a generated **textual summary** of those websites from all their text, and a **fine-tuned ChatBot** trained on data about that topic and ready to answer specific user questions with citations and maintaining history of a conversation. The ChatBot not only answers general factual questions (given its a foundation model), but also answers/recalls specific facts found in the URLs/files that the user visits (e.g. linking to a course syllabus). Origin also provides a **semantic search** on resources, as well as monitors what URLs other people in an organization visit and recommend pertinent ones to the user via a **recommendation system**. For example, a college student taking a History class and performing ML research on the side would have sets of tabs that would be related to both topics individually. Through its clustering algorithms, Origin would identify the workspaces of "European History" and "Computer Vision", with a dynamic view of pertinent URLs and widgets like semantic search and a chatbot. Upon continuing to browse in either workspace, the workspace itself is dynamically updated to reflect the most recently visited sites and data. **Target Audience**: Students to significantly improve the education experience and industry workers to improve productivity. ## How we built it ![](https://i.imgur.com/HYsZ3un.jpg) **Languages**: Python ∙ JavaScript ∙ HTML ∙ CSS **Frameworks and Tools**: Firebase ∙ React.js ∙ Flask ∙ LangChain ∙ OpenAI ∙ HuggingFace There are a couple of different key engineering modules that this project can be broken down into. ### 1(a). Ingesting Browser Information and Computing Embeddings We begin by developing a Chrome Extension that automatically scrapes browsing data in a periodic manner (every 3 days) using the Chrome Developer API. From the information we glean, we extract titles of webpages. Then, the webpage titles are passed into a pre-trained Large Language Model (LLM) from Huggingface, from which latent embeddings are generated and persisted through a Firebase database. ### 1(b). Topical Clustering Algorithms and Automatic Cluster Name Inference Given the URL embeddings, we run K-Means Clustering to identify key topical/activity-related clusters in browsing data and the associated URLs. We automatically find a description for each cluster by prompt engineering an OpenAI LLM, specifically by providing it the titles of all webpages in the cluster and requesting it to output a simple title describing that cluster (e.g. "Algorithms Course" or "Machine Learning Research"). ### 2. 
Web/Knowledge Scraping After pulling the user's URLs from the database, we asynchronously scrape through the text on each webpage via Beautiful Soup. This text provides richer context for each page beyond the title and is temporarily cached for use in later algorithms. ### 3. Text Summarization We split the incoming text of all the web pages using a CharacterTextSplitter to create smaller documents, and then attempt a summarization in a map reduce fashion over these smaller documents using a LangChain summarization chain that increases the ability to maintain broader context while parallelizing workload. ### 4. Fine Tuning a GPT-3 Based ChatBot The infrastructure for this was built on a recently-made popular open-source Python package called **LangChain** (see <https://github.com/hwchase17/langchain>), a package with the intention of making it easier to build more powerful Language Models by connecting them to external knowledge sources. We first deal with data ingestion and chunking, before embedding the vectors using OpenAI Embeddings and storing them in a vector store. To provide the best chat bot possible, we keep track of a history of a user's conversation and inject it into the chatbot during each user interaction while simultaneously looking up relevant information that can be quickly queries from the vector store. The generated prompt is then put into an OpenAI LLM to interact with the user in a knowledge-aware context. ### 5. Collaborative Filtering-Based Recommendation Provided that a user does not turn privacy settings on, our collaborative filtering-based recommendation system recommends URLs that other users in the organization have seen that are related to the user's current workspace. ### 6. Flask REST API We expose all of our LLM capabilities, recommendation system, and other data queries for the frontend through a REST API served by Flask. This provides an easy interface between the external vendors (like LangChain, OpenAI, and HuggingFace), our Firebase database, the browser extension, and our React web app. ### 7. A Fantastic Frontend Our frontend is built using the React.js framework. We use axios to interact with our backend server and display the relevant information for each workspace. ## Challenges we ran into 1. We had to deal with our K-Means Clustering algorithm outputting changing cluster means over time as new data is ingested, since the URLs that a user visits changes over time. We had to anchor previous data to the new clusters in a smart way and come up with a clever updating algorithm. 2. We had to employ caching of responses from the external LLMs (like OpenAI/LangChain) to operate under the rate limit. This was challenging, as it required revamping our database infrastructure for caching. 3. Enabling the Chrome extension to speak with our backend server was a challenge, as we had to periodically poll the user's browser history and deal with CORS (Cross-Origin Resource Sharing) errors. 4. We worked modularly which was great for parallelization/efficiency, but it slowed us down when integrating things together for e2e testing. ## Accomplishments that we're proud of The scope of ways in which we were able to utilize Large Language Models to redefine the antiquated browsing experience and provide knowledge centralization. This idea was a byproduct of our own experiences in college and high school -- we found ourselves spending significant amounts of time attempting to organize tab clutter systematically. 
## What we learned This project was an incredible learning experience for our team, as we took on multiple technically complex challenges to reach our final solution -- something we all felt we could genuinely use ourselves. ## What's next for Origin We believe Origin will become even more powerful at scale: with many users and organizations on the product, the ChatBot gets better at answering commonly asked questions, and the recommender system performs better at aiding users' education and productivity experiences.
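As a toy version of steps 1(a) and 1(b) above (embedding page titles, clustering them, and naming each cluster), the sketch below uses sentence-transformers and scikit-learn. The specific embedding model is an assumption, and the cluster "name" here is just the title closest to the centroid rather than an OpenAI-generated label.

```python
# Toy version of an Origin-style embed -> cluster -> name pipeline (illustrative only).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

titles = [
    "Dijkstra's algorithm - lecture notes", "CS course syllabus: graph algorithms",
    "Intro to convolutional neural networks", "PyTorch image classification tutorial",
    "French Revolution timeline", "Napoleon Bonaparte - Wikipedia",
]

model = SentenceTransformer("all-MiniLM-L6-v2")      # assumed embedding model
embeddings = model.encode(titles)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)

for cluster_id in range(kmeans.n_clusters):
    members = [i for i, label in enumerate(kmeans.labels_) if label == cluster_id]
    # Cheap stand-in for the LLM naming step: use the title nearest the centroid.
    centroid = kmeans.cluster_centers_[cluster_id]
    nearest = min(members, key=lambda i: np.linalg.norm(embeddings[i] - centroid))
    print(f"Workspace '{titles[nearest]}':")
    for i in members:
        print("   -", titles[i])
```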
## Inspiration I wanted to create an easier way for people to access the internet and interact with their desktop devices from afar. ## What it does It's a mobile app that allows a user to control their web browser on their laptop/desktop in nearly every way they can imagine, from scrolling up and down, switching tabs, and going back/forth through history to advanced features such as refreshing the page, navigating and clicking on links, and even "turning off" and "turning on" the browser. ## How I built it I built the mobile app using React Native and Expo while testing on my Google Pixel. The backend was written in Python with Selenium for the browser controls and a server that served websockets to facilitate communication between the two devices. ## Challenges I ran into Positioning things for the front end was hard and reminded me I don't want to be a front end developer. Besides that, getting back into React Native and creating an API to communicate with the back end took some design thinking. ## Accomplishments that I'm proud of Built alone in < 24 hours. The design also isn't too bad which is nice. ## What I learned Learned about CSS rules/flexbox and more advanced React Native stuff. Oh, and turns out hacking without a team isn't so bad. ## What's next for Webmote I'm going to keep building on this idea and expanding the feature set/ease of distribution. I believe there should be greater connectivity between our devices.
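The Selenium side of a remote like this is fairly compact. Below is a rough sketch (not the Webmote code) of a command dispatcher that a websocket handler could call: each short command string maps to a Selenium action on a Chrome window. It assumes Chrome and a matching chromedriver are available and skips the websocket plumbing.

```python
# Illustrative Selenium command dispatcher in the spirit of Webmote (not its actual code).
# Assumes Chrome and a matching chromedriver are available on the machine.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://news.ycombinator.com")

COMMANDS = {
    "scroll_down": lambda: driver.execute_script("window.scrollBy(0, 600);"),
    "scroll_up":   lambda: driver.execute_script("window.scrollBy(0, -600);"),
    "back":        lambda: driver.back(),
    "forward":     lambda: driver.forward(),
    "refresh":     lambda: driver.refresh(),
    "next_tab":    lambda: driver.switch_to.window(driver.window_handles[-1]),
    "quit":        lambda: driver.quit(),
}

def handle(command: str) -> None:
    """In the real app this would be called for each message arriving over the websocket."""
    action = COMMANDS.get(command)
    if action is None:
        print(f"unknown command: {command}")
        return
    action()

if __name__ == "__main__":
    for cmd in ["scroll_down", "scroll_down", "refresh", "quit"]:
        handle(cmd)
```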
Track: Social Good [Disaster Relief] [Best Data Visualization Hack] [Best Social Impact Hack] [Best Hack for Building Community] ## Inspiration Do you ever worry about what is in your water? While some of us have the luxury of taking clean water for granted, some of us do not. In Flint, MI, another main water pipeline broke. Again. Under a boil-water advisory, citizens are subject to government inaction and a lack of communication. Our goal is to empower communities with knowledge of what is in their water, using data analytics and crowd-sourced reports of pollutants found in the tap water in people's homes. ## What it does Water Crusader is an app that uses two categories of data to give communities an informed assessment of their water quality: publicly available government data and crowd-sourced tap water assessments taken from people's homes. Firstly, it takes available government data on blood lead levels tested in children, records of home age to indicate the age of utilities and materials used in pipes, and income levels as an indicator of home maintenance. With the programs we have built, governments can expand our models by giving us more data to use. Secondly, users are supplied with a cost-effective, open-source IoT sensor that uploads water quality measurements to the app. This empowers citizens to participate in knowing the water quality of their communities. As a network of sensors is deployed, real-time, crowd-sourced data can greatly improve our risk assessment models. By fusing critical information from these two components, we use regression models to give our users a risk measurement of lead poisoning from their water pipelines. Armed with this knowledge, citizens are empowered to make more informed health decisions and call on their governments to act on the problem of lead in drinking water. ## How we built it Hardware: To simulate the sensor system with the hardware available at HackMIT, we used an ESP32 and a DHT11 temperature and humidity sensor. The ESP32 takes temperature and humidity data read from the DHT11. Data is received in Node-RED as JSON by specifying an HTTP request and sending the actual POST in the Arduino IDE. Data Analytics: Using IBM's Watson Studio and AI development tools, data from government sources was cleaned and used to model lead poisoning risk. Blood lead levels tested in children, from the Centers for Disease Control, were used as the feature we wanted to predict, with house age and poverty levels taken from the U.S. Census as the predictors. ## Challenges we ran into 1. We were limited by the amount of hardware available. We tried our best to create the best simulation of the sensor system possible. 2. It was hard to retrieve and clean government data, especially with the need to make data requests. 3. Not all of us were familiar with Node-RED and there was a lot to learn! ## Accomplishments that we're proud of 1. Learning new software! 2. Working in a super diverse team!! ## What's next for Water Crusader We will continue to research different water sensors that can be added to our system. From our research, we learned that there is a lead sensor in development.
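The modeling step described above (predicting child blood-lead levels from housing age and income) boils down to a supervised regression. The sketch below shows the general shape of such a fit with scikit-learn on fabricated numbers; the real project used IBM Watson Studio and actual CDC and Census data.

```python
# Illustrative regression in the spirit of Water Crusader's risk model.
# The data below is fabricated; the real model was trained on CDC and Census data.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per area: [median home age in years, poverty rate in %]
X = np.array([[85, 38], [70, 30], [40, 12], [25, 8], [95, 45], [55, 20]])
# Target: share of tested children with elevated blood lead levels (%)
y = np.array([6.5, 4.8, 1.9, 1.1, 7.9, 2.8])

model = LinearRegression().fit(X, y)
print("coefficients (home age, poverty):", model.coef_)

# Predict the risk for a new area with old housing stock and high poverty
print("predicted elevated-BLL rate:", model.predict([[90, 40]])[0])
```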
partial
## Inspiration We were inspired by the resilience of freelancers, particularly creative designers, during the pandemic. As students, it's easy to feel overwhelmed and to undervalue our own work. We wanted to empower emerging designers and remind them of what we can do with a little bit of courage. And support. ## What it does Bossify is a mobile app that cleverly helps students adjust their design fees. It focuses on equitable upfront pay, which in turn increases the amount of money saved. This can be put towards an emergency fund. On the other side, clients receive high-quality, reliable work, and the platform's transparent rating system makes it easy to find quality freelancers. It's a win-win situation. ## How we built it We got together as a team the first night to hammer out ideas. This was our second idea, and everyone on the team loved it. We all pitched in ideas for product strategy. Afterwards, we divided the work into two parts: 1) user flows, UI design, and prototype; 2) writing and testing the algorithm. For the design, Figma was the main software used. The designers (Lori and Janice) used a mix of iOS components and icons for speed. Stock images were taken from Unsplash and Pexels. After quickly drafting the storyboards, we created a rapid prototype. Finally, the pitch deck was made to synthesize our ideas. For the code, Android Studio was the main software used. The developers (Eunice and Zoe) together implemented the back end and front end of the MVP (minimum viable product), where Zoe developed the intelligent price prediction model in TensorFlow and deployed the trained model in the mobile application. ## Challenges we ran into One challenge was not having the appropriate data immediately available, which was needed to create the algorithm. On the first night, it was a challenge to quickly research and determine the types of information/factors that contribute to design fees. We had to cap our research time to figure out the design and algorithm. There were also technical limitations, where our team had to determine the best way to integrate the prototype with the front end and back end. With limited time, and after consulting with the hackathon mentor, the developers decided to aim for the MVP instead of spending too much time and energy on turning the prototype into a real front end. It was also difficult to integrate the machine learning algorithm into our mini app's backend, mainly because we didn't have any experience implementing machine learning algorithms in Java, especially as part of the back end of a mobile app. ## Accomplishments that we're proud of We're proud of how cohesive the project reads. As the first COVID-era hackathon for all the team members, we were still able to communicate well and put our synergies together. ## What we learned Although it is a simple platform with minimal pages, we learned that it was still possible to create an impactful app. We also learned the importance of making a plan and timeline before we start, which helped us keep track of our progress and allowed us to use our time more strategically. ## What's next for Bossify Making partnerships to incentivize clients to use Bossify! #fairpayforfreelancers
## Inspiration We as a team shared the same interest in knowing more about Machine Learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area. ## What it does We have set up a signal that, if performed in front of the camera, a machine learning algorithm is able to detect, notifying authorities that they should check out that location - whether to catch a potentially suspicious suspect or simply to be present and keep civilians safe. ## How we built it First, we collected data from the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project. Donya: Getting to know the basics of how machine learning works. Alok: How to deal with unexpected challenges and look at them as a positive change. Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What I learned Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information. ## What's next for Smart City SOS Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
# YouTrends Project for YHack 2016 Google Chrome extension for YouTube video discovery ## How the code works * Make request to get topic page source code with [Yahoo YQL](https://developer.yahoo.com/yql/) API * Parse through videos from HTTP request and store in a list * Pick a random video from the list and open a tab to that video ## Categories * Music * Sports * Gaming * Movies & TV * News * Education * Politics * Food * Art * Travel * Comedy * Fashion * Technology
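The YQL service the extension relied on has since been retired, so as a rough stand-in for the three steps listed above, the sketch below fetches a topic page directly with requests, pulls video IDs out of the source with a regex, and opens a random one in a browser tab. It's a loose Python analogue of the extension's JavaScript flow, not the original code.

```python
# Loose Python analogue of YouTrends' fetch -> parse -> open-random-video flow.
# The original extension used Yahoo YQL (now retired) and ran as JavaScript in Chrome.
import random
import re
import webbrowser

import requests

def random_video(topic_url: str) -> str:
    html = requests.get(topic_url, timeout=10).text
    video_ids = list(set(re.findall(r"watch\?v=([\w-]{11})", html)))   # dedupe video IDs
    if not video_ids:
        raise RuntimeError("no videos found on the page")
    return "https://www.youtube.com/watch?v=" + random.choice(video_ids)

if __name__ == "__main__":
    url = random_video("https://www.youtube.com/results?search_query=gaming")
    print("opening", url)
    webbrowser.open(url)      # opens a new browser tab, like the extension does
```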
winning
## Inspiration Frustrating and intimidating banking experience leads to loss of customers and we wanted to change that making banking fun and entertaining. Specifically, senior citizens find it harder to navigate online bank profiles and know their financial status. We decided to come up with an android app that lets you completely control your bank profile either using your voice or the chat feature. Easily integrate our app into your slack account and chat seamlessly. ## What it does Vocalz allows you to control your online bank profile easily using either chat or voice features. Easily do all basic bank processes like sending money, ordering a credit card, knowing balances and so much more just using few voices or text commands. Unlike our competitors, we give personalized chat experience for our customers. In addition, Vocalz also recommends products from the bank they use according to their financial status as well as determine eligibility for loans. The future of banking is digital and we thrive to make the world better and convenient. Slack integration makes it convenient for working professionals to easily access bank data within slack itself. Join the workspace and use @ to call our Vocalzapp. Experience the next generation of banking directly from your slack account. <https://join.slack.com/t/vocalzzz/shared_invite/enQtOTE0NTI3ODg2NjMxLTdmMWVjODc1YWMwNWQ0ZjI2MDJkODAyYzI2YTZiMmEzYjA3NmExYzZlNjM5Yzg0NGVjY2VlYjE5OGJhNGFmZTM> Current Features Know balance Pay bills Get customized product information from respective banks Order credit cards/financial products Open banking accounts Transaction history You can use either voice or chat features depending on your privacy needs. ## How we built it We used Plaid API to get financial data from any bank in the world and we integrated it within our android app. After logging in securely using your bank credentials, Vocalz automatically customizes your voice-enabled and chat features according to the data provided by the bank. In our real product, We trained the IBM Watson chatbot with hundreds of bank terminology and used Dialogflow to create a seamless conversational experience for the customers. IBM Watson uses machine learning to understand the customer's needs and then responds accordingly regardless of spelling or grammar errors. For voice-enabled chat, we will use google's speech-to-text API which sends the information to IBM Watson and Google text-audio API will return the response as audio. The app will be deployed in the Google Cloud because of its high-security features. For demo purposes and time constraints, we used Voiceflow to demonstrate how our voice-enabled features work. ## Challenges we ran into Getting to know and learn the IBM Watson environment was very challenging for us as we don't have much experience in machine learning or dialogue flow. We also needed to find and research different API's required for our project. Training IBM Watson with specific and accurate words was very time consuming and we are proud of its present personalized features. ## Accomplishments that we're proud of We ran into several challenges and we made sure we are on the right path. We wanted to make a difference in the world and we believe we did it. ## What we learned We learned how to make custom chatbots and bring customized experience based on the app's needs. We learned different skills related to API's, android studio, machine learning within 36 hours of hacking. 
## What's next for Vocalz RBC
* Further training of our chatbot with more words, making the app useful in different situations
* Notifications for banking-related deadlines and transactions
* Create a personalized budget
* Compare different financial products and give proper suggestions and recommendations
* Integrate a VR/AR customer service experience
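Stripped of Watson, Dialogflow, and Plaid, the core of a chat-banking flow like the one described above is just mapping a recognized intent to a banking action. The toy dispatcher below illustrates that shape with a mock account; every intent name, keyword, and balance figure here is invented for demonstration.

```python
# Toy intent-to-action dispatcher in the spirit of Vocalz (all data and intents invented).
# The real app resolves intents with IBM Watson/Dialogflow and pulls data via Plaid.
account = {"balance": 1240.50, "transactions": [("Groceries", -62.10), ("Paycheck", 1500.00)]}

def detect_intent(utterance: str) -> str:
    """Crude keyword matching standing in for the NLP layer."""
    text = utterance.lower()
    if "balance" in text:
        return "check_balance"
    if "send" in text or "pay" in text:
        return "send_money"
    if "history" in text or "transactions" in text:
        return "transaction_history"
    return "unknown"

HANDLERS = {
    "check_balance": lambda: f"Your balance is ${account['balance']:.2f}.",
    "send_money": lambda: "Sure - who would you like to pay, and how much?",
    "transaction_history": lambda: "; ".join(f"{name}: {amt:+.2f}" for name, amt in account["transactions"]),
    "unknown": lambda: "Sorry, I didn't catch that. Try asking about your balance or transactions.",
}

if __name__ == "__main__":
    for msg in ["What's my balance?", "Show my recent transactions", "Order a pizza"]:
        print(f"> {msg}\n{HANDLERS[detect_intent(msg)]()}")
```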
# Summary Echo is an intelligent, environment-aware smart cane that acts as assistive tech for the visually or mentally impaired. --- ## Overview Over 5 million Americans are living with Alzheimer's. In fact, 1 in 10 people of age 65 and older has Alzheimer's or dementia. Often, those afflicted will have trouble remembering names from faces and recalling memories. **Echo does exactly that!** Echo is a piece of assistive technology that helps the owner keep track of people he/she meets and provide a way for the owner to stay safe by letting them contact the authorities if they feel like they're in danger. Using cameras, microphones, and state of the art facial recognition, natural language processing, and speech to text software, Echo is able recognize familiar and new faces allowing patients to confidently meet new people and learn more about the world around them. When Echo hears an introduction being made, it uses its camera to continuously train itself to recognize the person. Then, if it sees the person again it'll notify its owner that the acquaintance is there. Echo also has a button that, when pressed, will contact the authorities- this way, if the owner is in danger, help is one tap away. ## Frameworks and APIs * Remembering Faces + OpenCV Facial Detection + OpenCV Facial Recognition * Analyzing Speech + Google Cloud Speech-To-Text + Google Cloud Natural Language Processing * IoT Communications + gstreamer for making TCP video and audio streams + SMTP for email capabilities (to contact authorities) ## Challenges There are many moving parts to Echo. We had to integrate an interface between Natural Language Processing and Facial Recognition. Furthermore, we had to manage a TCP stream between the Raspberry Pi, which interacts with our ML backend on a computer. Ensuring that all the parts seamlessly work involved hours of debugging and unit testing. Furthermore, we had to fine tune parameters such as stream quality to ensure that the Facial Recognition worked but we did not experience high latency, and synchronize the audio and video TCP streams from the Pi. We wanted to make sure that the form factor of our hack could be experience just by looking at it. On our cane, we have a Raspberry Pi, a camera, and a button. The button is a distress signal, which will alert the selected contacts in the event of an emergency. The camera is part of the TCP stream that is used for facial recognition and training. The stream server and recognition backend are managed by separate Python scripts on either end of the stack. This results in a stable connection between the smart cane and the backend system. ## Echo: The Hacking Process Echo attempts to solve a simple problem: individuals with Alzheimer's often forget faces easily and need assistance in order to help them socially and functionally in the real world. We rely on the fact that by using AI/ML, we can train a model to help the individual in a way that other solutions cannot. By integrating this with technology like Natural Language Processing, we can create natural interfaces to an important problem. Echo's form factor shows that its usability in the real world is viable. Furthermore, since we are relying heavily on wireless technologies, it is reasonable to say that it is successful as an Internet of Things (IoT) device. ## Empowering the impaired Echo empowers the impaired to become more independent and engage in their daily routines. 
This smart cane acts both as a helpful accessory that can catalyze social interaction and also a watchdog to quickly call help in an emergency.
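As a stripped-down illustration of the detect-then-recognize loop described above (not the project's code), the sketch below uses OpenCV's Haar cascade for face detection and the LBPH recognizer from opencv-contrib-python. The training image paths, label IDs, and confidence threshold are placeholders; Echo's real pipeline additionally ties recognition to names heard via speech-to-text over the TCP streams.

```python
# Simplified face detect + recognize loop in the spirit of Echo (illustrative only).
# Requires opencv-contrib-python for cv2.face; the training image paths are placeholders.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Placeholder training set: grayscale face crops with integer labels (0 -> "Alice", 1 -> "Bob").
names = {0: "Alice", 1: "Bob"}
train_faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["alice.jpg", "bob.jpg"]]
recognizer.train(train_faces, np.array([0, 1]))

cap = cv2.VideoCapture(0)                      # stand-in for the cane's camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        if confidence < 80:                    # lower LBPH distance = better match (threshold is a guess)
            print(f"Recognized {names[label]} - notify the owner")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```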
## ☁️ Inspiration Trying to come up with creative ideas was especially difficult. Brainstorming many ideas until we ended up getting off-topic. One group member was complaining about how their plant died that day which is where this idea came to. ## ⚙️ What it does Currently, our app has two options. You can either pick from our database of plants to add to your list or add a custom plant. Each plant has different watering cycles so we know it can be hard to keep track, that's why Flora does that for you. Flora allows the user to keep tabs on their plant's water cycle and notifies you when the next time your plant needs to be watered. Flora also tracks the weather in Toronto. If it's raining your watering cycle will reset or if it's snowing we'll let you know to bring your plant inside. If the conditions are all clear then your water cycle will remain the same. ## 🔨 How we built it We built Flora using Java and Android Studio. Java was used to grab information from the OpenWeatherMap API so we can track the current live weather conditions in Toronto. We also used Java to add functionality to the buttons in Android Studio. We used Android Studio as our UX/UI program. We took our Java code and integrated it into Android Studio so our users can interact with the application. ## 💀 Challenges we ran into The two biggest challenges were grabbing the current temperature and weather conditions from the OpenWeatherMap API and grabbing inputs from the user to add their own custom plants to their list of plants. The way we solved the former was through lots of trial and error, we were eventually able to grab the necessary information. The latter may not sound that difficult on paper, however, adding our user's inputs to our database required us to write and read from internal files which were difficult to set up. ## 💪 Accomplishments that we're proud of Overall we are really proud of what we achieved in such a brief period of time. Our biggest accomplishment was figuring out how to create a slider in which our users can scroll up and down our database as well as interact and view the information within. ## 🧠What we learned We learned many things in the last two days, such as: * How to use API's * How to create a mobile app * How to create and use a database * How to transfer and interact with Java code using Android Studio * That there are many different interesting plants! ## 🍀 What's next for Flora We have some plans already in store for Flora that we would like to share. Firstly, we would like to have an option for selecting whether your plant is an indoor or outdoor plant. Secondly, we are adding more information about our plants to our database and expanding it for our users. Finally, we would like to add more information that could be available to the user such as soil level, soil conditions, water intake, sun exposure and more that can't be known just by looking at the plant.
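Flora itself is written in Java for Android, but the weather check at its core is a single HTTP call. The snippet below sketches the same decision in Python against OpenWeatherMap's current-weather endpoint; the city, the API-key placeholder, and the rule for resetting the watering cycle are assumptions for illustration.

```python
# Illustrative weather check in the spirit of Flora (the real app does this in Java/Android).
# Requires a free OpenWeatherMap API key; the reset rule below is a simplification.
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"   # placeholder
CITY = "Toronto"

def current_conditions(city: str) -> str:
    url = "https://api.openweathermap.org/data/2.5/weather"
    data = requests.get(url, params={"q": city, "appid": API_KEY, "units": "metric"}, timeout=10).json()
    return data["weather"][0]["main"]          # e.g. "Rain", "Snow", "Clear"

def adjust_watering(days_until_next_watering: int) -> tuple[int, str]:
    conditions = current_conditions(CITY)
    if conditions == "Rain":
        return 7, "It's raining - the watering cycle has been reset."   # assumed 7-day cycle
    if conditions == "Snow":
        return days_until_next_watering, "It's snowing - bring your plant inside!"
    return days_until_next_watering, "All clear - keep to your current schedule."

if __name__ == "__main__":
    print(adjust_watering(days_until_next_watering=3))
```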
winning
## Inspiration Personal experiences with trying to get a job in the industry. The societal pressure to succeed is felt by everyone in our generation. This leads to what we categorize as "Internal Harassment" (e.g., blaming yourself, self-abuse, self-doubt). It is an overlooked area compared to "External" harassment, but nonetheless an important issue to discuss. ## What it does Lipht provides a safe space with a small group of peers and a "counselor". It also takes a photo of you when you're feeling positive, as a reminder of those good moments for a later time when you're feeling under the weather. ## How we built it We incorporated Google's Speech-to-Text API, Google's Firebase backend, and Android's hardware camera API, programmed in Java in the Android Studio environment. ## Challenges we ran into The idea actually didn't come to us until 11 am on Sunday, two hours before the competition deadline. We scrapped what we were working on and started this. Not much hope, but yep, we did it. All of the APIs we used were new to the members, and getting the code to compile was an issue, but with dedication and motivation we came out on top. ## Accomplishments that we're proud of The app compiles. ## What we learned Incorporating third-party APIs now feels more natural to us. ## What's next for Lipht Anonymity in the group chat. More filters for the positive camera. Shout out to the Google dev that gave us guidance. Shout out to Nick from MajorLeagueHacking. You motivated us.
## Inspiration

The recent surge in criminal activity on university grounds inspired our team to develop a solution that tackles student concerns. We aimed to empower students with a tool that allows them to quickly request assistance from peers in case of an emergency, fostering a safer atmosphere on campus.

## What it does

Our app, SafeSpace, equips users with an emergency button which, when pressed, sends out an SOS to other nearby users to request immediate assistance. In these life-or-death situations, a quick response time is crucial to preventing escalation. Rather than having to wait for the arrival of 911 services or campus police, students in danger can get help promptly from nearby peers. The alert pins the location of the vulnerable student on a map, allowing other students to easily locate and respond to the threat. In addition, a future revision will let students see heat maps of criminal occurrences on campus, helping them avoid dangerous areas.

## How we built it

We used Google's Flutter development kit to create the application. Flutter provides developers with a variety of libraries, including the Firebase Messaging API, which we used to send push notifications. When the user launches the app for the first time, we make a POST request to the back-end server and store the device's token. If that user ever presses the button, we identify their location and make another POST request to the server, providing data that can be used to locate them. We then call the Firebase Messaging API and send a push notification to all nearby devices. These notifications contain the location, which we use to pin the user on a map (provided by the Google Maps API). The back end was developed using Python and a Flask server to handle the POST requests and store the data (a simplified sketch of this flow appears below).

## Challenges we faced

One of our greatest hurdles was integrating the Firebase Cloud Messaging API with both our front and back ends. Firstly, sending push notifications to devices through the Firebase API was initially very sluggish, taking up to minutes to send. This rendered the app virtually useless, as users would not be able to get help quickly enough. We explored alternative ways of sending HTTP requests to Firebase from Flutter and ultimately found a way to send alerts instantaneously. Next, ensuring that Flask and Python were well integrated with Firebase was another big hurdle. Despite using the default Python requests library correctly and receiving success codes, Firebase would simply refuse to send push notifications to the Flutter app. After meticulous debugging and exploration, we finally discovered and implemented the firebase_admin library, which solved the issue completely.

## Accomplishments that we're proud of

We are proud of our implementation of the Google Maps API to provide users with a pinned live location of the student in danger, so that requesting students can be given aid quickly and intuitively. This was a difficult and tedious task to undertake; nevertheless, in line with our team values, we prioritized user safety and usability over all else. All in all, we take the greatest pride in having developed a functional and impactful app that directly addresses the safety concerns of students. By fostering a peer-to-peer support system to fight crime, we promote security and a sense of community on campus.
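As a simplified illustration of the backend flow described above (a Flask server that stores device tokens, receives the SOS POST, and relays a push notification through firebase_admin), here is a minimal sketch. The route names, payload fields, and in-memory token store are assumptions, not SafeSpace's actual code.

```python
# Hypothetical sketch of the SOS flow: one endpoint registers device tokens,
# another receives an alert and relays it via firebase_admin.
import firebase_admin
from firebase_admin import credentials, messaging
from flask import Flask, request, jsonify

app = Flask(__name__)
firebase_admin.initialize_app(credentials.Certificate("service-account.json"))

device_tokens = set()  # a real backend would persist these

@app.route("/register", methods=["POST"])
def register():
    device_tokens.add(request.json["token"])
    return jsonify(status="ok")

@app.route("/sos", methods=["POST"])
def sos():
    data = request.json  # expected shape (assumed): {"lat": ..., "lng": ..., "token": ...}
    for token in device_tokens:
        if token == data["token"]:
            continue  # don't alert the sender
        messaging.send(messaging.Message(
            token=token,
            notification=messaging.Notification(
                title="SOS nearby",
                body="A student near you needs help.",
            ),
            data={"lat": str(data["lat"]), "lng": str(data["lng"])},
        ))
    return jsonify(status="sent")

if __name__ == "__main__":
    app.run()
```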
## What's next for SafeSpace

Firstly, we plan to add a heat map to the app that highlights the locations of past criminal occurrences. This will deter users from entering dangerous areas of campus and give valuable insight to campus police, who can further investigate and improve these hotspots. We will work with university authorities and campus security to add more features curated to their respective campus locations. Moreover, in addition to providing users with the victim's latitude and longitude, we will add altitude to better describe their location in buildings, where it may be difficult to discern which floor the victim is on.
## Inspiration

Too many times have broke college students looked at their bank statements and lamented how much money they could have saved if they had known about alternative purchases or savings options earlier.

## What it does

SharkFin helps people analyze and improve their personal spending habits. SharkFin uses bank statements and online banking information to determine areas in which the user could save money. We identified multiple patterns in spending that we provide feedback on to help the user save money and spend less.

## How we built it

We used Node.js to create the backend for SharkFin, and we used the Viacom DataPoint API to manage multiple other APIs. The front end, in the form of a web app, is written in JavaScript.

## Challenges we ran into

The Viacom DataPoint API, although extremely useful, was brand new to our team, and there were few online resources we could look at. We had to understand completely how the API simplified and managed all the APIs we were using.

## Accomplishments that we're proud of

Our data processing routine is highly streamlined and modular, and our statistical model identifies and tags recurring events, or "habits," very accurately. By using the DataPoint API, our app can very easily accept new APIs without structural changes to the back end.

## What we learned

## What's next for SharkFin
## Inspiration

Since the outbreak of COVID-19, while the rest of the world has moved online, ASL speakers have faced even greater inequities, making it difficult for so many of them to communicate. However, this has to come to an end. In pursuit of accessibility, I created a tool that empowers ASL speakers to speak freely with the help of AI.

## What it does

Uses a webcam to translate ASL signing into text.

## How we built it

We used MediaPipe to generate landmark points on the hands, then used those points to build a training dataset. I used a Jupyter Notebook to run OpenCV and MediaPipe. Running our data through MediaPipe gave us a skeleton map with 22 points for each hand. These points can be mapped in three dimensions, as each contains an X, Y, and Z value. We processed these features (22 points x 3) by saving them into a spreadsheet, then divided the spreadsheet into training and testing data (a minimal sketch of this pipeline appears at the end of this write-up). Using the training set, we created six machine learning models:

* Gradient Boosting Classifier
* XGBoost Classifier
* Support Vector Machine
* Logistic Regression
* Ridge Classifier
* Random Forest Classifier

## Challenges we ran into

* Had to work solo due to issues with the team
* Time management
* Project management
* Lack of data

## Accomplishments that we're proud of

Proud of pivoting my original idea and completing this epic hackathon, and proud of making a useful tool.

## What we learned

* Time management
* Project management

## What's next for Voice4Everyone

* More training data and more classifications
* Phone app and Chrome extension
* Reverse translation: converting English text to ASL
* Cleaner UI
* Support for the entire ASL dictionary and other sign languages
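As a minimal sketch of the pipeline described in "How we built it" above (MediaPipe hand landmarks flattened into one feature row per frame, then a classifier trained on the saved spreadsheet), the following is illustrative only; the file name, label column, and the choice of a single Random Forest are assumptions.

```python
# Extract hand landmarks from one frame with MediaPipe, flatten them into a
# feature row, and train a classifier on previously saved, labelled rows.
import cv2
import mediapipe as mp
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

mp_hands = mp.solutions.hands

def landmark_row(image_bgr):
    """Return a flat list of x, y, z values for the detected hand landmarks."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    row = []
    for hand in results.multi_hand_landmarks:
        for lm in hand.landmark:
            row.extend([lm.x, lm.y, lm.z])
    return row

# Assume features were already saved to a spreadsheet, as in the write-up:
# one row per frame plus a "label" column with the sign (file name is assumed).
df = pd.read_csv("asl_features.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = RandomForestClassifier().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```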
# CourseAI: AI-Powered Personalized Learning Paths

## Inspiration

CourseAI was born from the challenges of self-directed learning in our information-rich world. We recognized that the issue isn't a lack of resources, but rather how to effectively navigate and utilize them. This inspired us to leverage AI to create personalized learning experiences, making quality education accessible to everyone.

## What it does

CourseAI is an innovative platform that creates personalized course schedules on any topic, tailored to the user's time frame and desired depth of study. Users input what they want to learn, their available time, and their preferred level of complexity. Our AI then curates the best online resources into a structured, adaptable learning path. Key features include:

* AI-driven content curation from across the web
* Personalized scheduling based on user preferences
* Interactive course customization through an intuitive button-based interface
* Multi-format content integration (articles, videos, interactive exercises)
* Progress tracking with checkboxes for completed topics
* Adaptive learning paths that evolve based on user progress

## How we built it

We developed CourseAI using a modern, scalable tech stack:

* Frontend: React.js for a responsive and interactive user interface
* Backend server: Node.js to handle API requests and serve the frontend
* AI model backend: Python for its robust machine learning libraries and natural language processing capabilities
* Database: MongoDB for flexible, document-based storage of user data and course structures
* APIs: Integration with various educational content providers and web scraping for resource curation

The AI model uses advanced NLP techniques to curate relevant content and generate optimized learning schedules. We implemented machine learning algorithms for content quality assessment and personalized recommendations.

## Challenges we ran into

1. API Cost Management: Optimizing API usage for content curation while maintaining cost-effectiveness.
2. Complex Scheduling Logic: Creating nested schedules that accommodate various learning styles and content types.
3. Integration Complexity: Seamlessly integrating diverse content types into a cohesive learning experience.
4. Resource Scoring: Developing an effective system to evaluate and rank educational resources.
5. User Interface Design: Creating an intuitive, button-based interface for course customization that balances simplicity with functionality.

## Accomplishments that we're proud of

1. High Accuracy: Achieving a 95+% accuracy rate in content relevance and schedule optimization.
2. Elegant User Experience: Designing a clean, intuitive interface with easy-to-use buttons for course customization.
3. Premium Content Curation: Consistently sourcing high-quality learning materials through our AI.
4. Scalable Architecture: Building a robust system capable of handling a growing user base and expanding content library.
5. Adaptive Learning: Implementing a flexible system that allows users to easily modify their learning path as they progress.
## What we learned

This project provided valuable insights into:

* The intricacies of AI-driven content curation and scheduling
* Balancing user preferences with optimal learning strategies
* The importance of UX design in educational technology
* Challenges in integrating diverse content types into a cohesive learning experience
* The complexities of building adaptive learning systems
* The value of user-friendly interfaces in promoting engagement and learning efficiency

## What's next for CourseAI

Our future plans include:

1. NFT Certification: Implementing blockchain-based certificates for completed courses.
2. Adaptive Scheduling: Developing a system for managing backlogs and automatically adjusting schedules when users miss sessions.
3. Enterprise Solutions: Creating a customizable version of CourseAI for company-specific training.
4. Advanced Personalization: Implementing more sophisticated AI models for further personalization of learning paths.
5. Mobile App Development: Creating native mobile apps for iOS and Android.
6. Gamification: Introducing game-like elements to increase motivation and engagement.
7. Peer Learning Features: Developing functionality for users to connect with others studying similar topics.

With these enhancements, we aim to make CourseAI the go-to platform for personalized, AI-driven learning experiences, revolutionizing education and personal growth.
## Inspiration

We wanted to try to create something that uses OpenAI or a language model to see how these tools work.

## What it does

Our chatbot can answer a user's questions about specific products and can provide links to documentation.

## How we built it

It's a SvelteKit project that uses Tailwind CSS for the chat interface. In the backend, we made use of OpenAI to get the results the user was looking for (a rough sketch of this kind of call appears at the end of this write-up).

## Challenges we ran into

We had a lot of issues crafting the responses for the user. We noticed that they changed drastically depending on the model we were using.

## Accomplishments that we're proud of

We successfully made a chatbot that can helpfully answer a user's questions.

## What we learned

We learned that different language models can have a really big impact on the responses that are generated.

## What's next for Cisco chatbot

We could tweak the UI a bit, or we could try to make the responses even more specific.
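The actual backend lives in the SvelteKit/Node project, but as a rough sketch of the kind of OpenAI call it makes, here is a Python version; the model name and system prompt are placeholders, not the project's real configuration.

```python
# Minimal sketch of asking an OpenAI chat model a product question and
# returning its answer. Model and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # responses varied noticeably between models
        messages=[
            {"role": "system",
             "content": "You answer questions about Cisco products and, when "
                        "possible, point the user to relevant documentation."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I reset a Catalyst switch to factory defaults?"))
```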
## Inspiration

The impact of COVID-19 has had lasting effects on the way we interact and socialize with each other. Even when engulfed by bustling crowds and crowded classrooms, it can be hard to find our friends and the comfort of not being alone. Too many times have we grabbed lunch, coffee, or boba alone, only to find out later that there was someone who was right next to us! Inspired by our improvised use of Apple's FindMy feature, we wanted to create a cross-device platform that's actually designed for promoting interaction and social health!

## What it does

Bump! is a geolocation-based social networking platform that encourages and streamlines day-to-day interactions.

**The Map**

On the home map, you can see all your friends around you! By tapping on their icon, you can message them or, even better, Bump! them. If texting is like talking, you can think of a Bump! as a friendly wave: just a friendly Bump! to let your friends know that you're there! Your bestie cramming for a midterm at Moffitt? Bump! them for good luck! Your roommate in the classroom above you? Bump! them to help them stay awake! Your crush waiting in line for a boba? Make that two bobas! Bump! them.

**Built-in Chat**

Of course, Bump! comes with a built-in messaging chat feature!

**Add Your Friends**

Add your friends to allow them to see your location! Your unique settings and friends list are tied to the account that you register and log in with.

## How we built it

Using React Native and JavaScript, Bump! is built for both iOS and Android. For the backend, we used MongoDB and Node.js. The project consisted of four major and distinct components.

**Geolocation Map**

For our geolocation map, we used Expo's geolocation library, which allowed us to cross-match the positional data of all the user's friends.

**User Authentication**

The user authentication process was built using additional packages such as Passport.js, Jotai, and Bcrypt.js. Essentially, we wanted to store new users through registration and verify returning users through login by looking them up in MongoDB, hashing and salting their password for registration using Bcrypt.js, and comparing their password hash to the existing hash in the database for login. We also used Passport.js to create JSON Web Tokens, and Jotai to store user ID data globally in the frontend.

**Routing and Web Sockets**

To keep track of user location data, friend lists, conversation logs, and notifications, we used MongoDB as our database and a Node.js backend to save and access data from the database. While this worked for the majority of our use cases, using HTTP protocols for instant messaging proved to be too slow and clunky, so we made the design choice to use WebSockets for client-to-client communication. Our architecture uses the server as a WebSocket host that receives all client communication but filters messages so they are only delivered to the intended recipient (a small sketch of this pattern appears at the end of this write-up).

**Navigation and User Interface**

For our UI, we wanted to focus on simplicity, cleanliness, and neutral aesthetics. After all, we felt that the Bump! experience was really about the time spent with friends rather than on the app, so we designed the UX so that Bump! is really easy to use.

## Challenges we ran into

To begin, package management and setup were fairly challenging. Since we had never done mobile development before, having to learn how to debug, structure, and develop our code was definitely tedious.
In our project, we initially programmed our frontend and backend completely separately; integrating them and working out the moving parts was really difficult and required everyone to teach each other how their part worked. When building the instant messaging feature, we ran into several design hurdles: HTTP requests are only half-duplex, as they are designed with client initiation in mind, so there is no elegant method for server-initiated client communication. Another challenge was that the server needed to act as the host for all WebSocket communication, resulting in the need to selectively filter and forward received messages.

## Accomplishments that we're proud of

We're particularly proud of Bump! because we came in with limited or no mobile app development experience (in fact, this was the first hackathon for half the team). This project was definitely a huge learning experience for us; not only did we have to grind through tutorials, YouTube videos, and a Stack Overflowing of tears, we also had to learn how to work together efficiently as a team. Moreover, we're proud that we were able to build not just something that would make a positive impact in theory, but a platform that we see ourselves actually using on a day-to-day basis. Lastly, despite setbacks and complications, we're super happy that we developed an end product that resembles our initial design.

## What we learned

In this project, we really had an opportunity to dive headfirst into mobile app development; specifically, learning all about React Native, JavaScript, and the unique challenges of implementing a backend for mobile devices. We also learned how to delegate tasks more efficiently, and we learned to give some big respect to front-end engineers!

## What's next for Bump!

**Deployment!**

We definitely plan on using the app with our extended friends, so the biggest next step for Bump! is polishing the rough edges and getting it on app stores. To get Bump! production-ready, we're going to robustify the backend, as well as clean up the frontend for a smoother look.

**More Features!**

We also want to add some more functionality to Bump! Here are some of the ideas we had; let us know if there's any we missed!

* Adding friends with QR-code scanning
* Bump! leaderboards
* Status updates
* Siri! "Hey Siri, bump Emma!"
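The real Bump! backend is written in Node.js; purely as a small Python sketch of the design described in "Routing and Web Sockets" above (the server hosts every WebSocket connection and forwards each message only to its intended recipient), the following is illustrative. The message fields and the choice of the `websockets` package are assumptions.

```python
# Sketch of a WebSocket host that filters messages and delivers each one only
# to its intended recipient. Message schema (type/userId/to) is assumed.
import asyncio
import json
import websockets

connections = {}  # user_id -> websocket

async def handler(websocket):
    # recent versions of the websockets package accept a single-argument handler
    user_id = None
    try:
        async for raw in websocket:
            msg = json.loads(raw)
            if msg["type"] == "hello":          # first message identifies the user
                user_id = msg["userId"]
                connections[user_id] = websocket
            elif msg["type"] in ("chat", "bump"):
                target = connections.get(msg["to"])
                if target is not None:          # deliver only to the recipient
                    await target.send(raw)
    finally:
        if user_id is not None:
            connections.pop(user_id, None)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()                  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```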
## Inspiration

* COVID-19 is impacting all musicians, from students to educators to professionals everywhere
* Performing together in person is not viable due to health risks
* School orchestras still need a way to perform to keep students interested and motivated
* A lot of effort is required to put together separate recordings virtually, and some ensembles don't have the time or resources to do it

## What it does

Ludwig is a direct response to the rise of remote learning. Our online platform optimizes virtual interaction between music educators and students by streamlining the process of creating music virtually. Educators can create new assignments and send them out with a description and sheet music to students. Students can then access the assignment, download the sheet music, and upload their recordings of the piece separately. Given the tempo and sampling specificity, Ludwig detects the musical start point of each recording, then syncs and merges them into one combined WAV file (a simplified sketch of this step appears at the end of this write-up).

## Challenges we ran into

Our synchronization software is not perfect. On the software side, we have to balance the tradeoff between sampling specificity and delivery speed, so we sacrifice pinpoint synchronization to make sure our users don't get bored while using the platform. On the human side, without the presence of the rest of the ensemble, it is easy for students to play at an inconsistent tempo, play out of tune, or make other mistakes. These sorts of mistakes are hard for any software to adapt to.

## What's next for Ludwig

We aim to improve Ludwig's syncing algorithm by adjusting tuning and paying more attention to tempo. We will also refine and expand Ludwig's platform to allow teachers to have different classes with different sets of students.
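As a heavily simplified sketch of the sync-and-merge step described above, the code below trims each recording's leading silence at its first "loud" sample and averages the aligned tracks into one WAV. Ludwig's real algorithm also uses the tempo and a configurable sampling specificity; the threshold and file names here are assumptions.

```python
# Align recordings at their detected musical start point and mix them down.
import numpy as np
from scipy.io import wavfile

def musical_start(samples, threshold=0.05):
    """Index of the first sample whose amplitude exceeds threshold * peak."""
    mono = samples.mean(axis=1) if samples.ndim > 1 else samples.astype(float)
    peak = np.abs(mono).max() or 1.0
    above = np.nonzero(np.abs(mono) >= threshold * peak)[0]
    return int(above[0]) if above.size else 0

def merge(paths, out_path="combined.wav"):
    rate, aligned = None, []
    for path in paths:
        sr, samples = wavfile.read(path)
        rate = rate or sr                       # assume matching sample rates
        mono = samples.mean(axis=1) if samples.ndim > 1 else samples.astype(float)
        aligned.append(mono[musical_start(samples):])
    length = min(len(a) for a in aligned)       # trim to the shortest track
    mix = np.mean([a[:length] for a in aligned], axis=0)
    wavfile.write(out_path, rate, mix.astype(np.int16))

merge(["violin.wav", "cello.wav", "flute.wav"])  # placeholder file names
```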
## Inspiration

The point of the Sustain app is to bring a competitive spirit and a rewarding feeling to doing good actions that help the environment. The app is basically a social media app where you can add friends and see your leaderboards and your community's progress toward a green goal you set for yourself. The intended way to use the app is that every time you find a can or a piece of garbage on the ground, you scan the item using the machine learning algorithm, which over time will be able to detect more and more garbage of all different types, and then you throw it away to earn points (based on garbage type) that stack up over the weeks. The app also keeps track of the barcode so that it isn't scanned over and over again to hack points. In the future, we also plan to add a variety of other ways to earn points, such as ride sharing or using reusable containers or bottles.

## How we built it

We built the front end using HTML and CSS along with frameworks like Tailwind and Bootstrap. For the back end, we used Django. Finally, the machine learning part was implemented using PyTorch and the YOLOv5 algorithm (a rough sketch of this detection step appears at the end of this write-up).

## Challenges we ran into

We encountered several challenges in deploying an object detection algorithm that was both accurate and lightweight, ensuring it did not significantly impact inference time, which would hurt the user experience. This required us to curate a diverse dataset while keeping the model small. Additionally, we faced significant difficulties in implementing a conversational virtual assistant using AWS for our system. Despite investing more than three hours in setup, it ultimately crashed, leading us to unfortunately drop the idea.

## What we learned

Our two front-end developers learned a lot about how HTML works and the different frameworks available. Our back-end and machine-learning developers learned how complicated it can be to implement chatbots and deploy them onto web apps. They also learned new technologies such as YOLOv5 and frameworks like PyTorch and Ultralytics.

## What's next for the Sustain app
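As a rough sketch of the garbage-scanning step described earlier (not the team's custom-trained model), here is how a pretrained YOLOv5 model can be loaded from torch.hub and its detections mapped to points; the class names and point values are illustrative assumptions.

```python
# Load a pretrained YOLOv5 model, run it on a photo of a scanned item, and
# award points for recognized garbage types. POINTS table is hypothetical.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

POINTS = {"bottle": 10, "cup": 5}   # hypothetical garbage-type -> points table

def score_photo(image_path):
    detections = model(image_path).pandas().xyxy[0]   # one row per detection
    total = 0
    for _, det in detections.iterrows():
        if det["confidence"] > 0.5 and det["name"] in POINTS:
            total += POINTS[det["name"]]
    return total

print(score_photo("scanned_item.jpg"))  # placeholder image path
```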