# Slacker
Created by Albert Lai, Hady Ibrahim, and Varun Kothandaraman. GitHub: *[Slacker Github](https://github.com/albertlai431/slacker-chore)*

## Inspiration
In shared housing, chores are a major hassle to organize so that everyone does their fair share of the work. In most cases, without direct instruction, people simply forget about the slice of work they need to complete.

## What it does
Slacker is a web app that lets users join a group containing the members of their household; from a larger shared list of items, tasks get automatically assigned to each member of the group. Each member has a few task views, the main pages being the user's own personal list, the total group list, each group member's activity, and settings. The user's personal list of chores refreshes each week with one-time and repeating chores, and forgotten/overdue chores appear at the top of every group member's personal page for quicker completion.

## How we built it
Slacker was built with React and Chakra UI, using GitHub for source control. We also created mockups of both the desktop pages and the mobile app we were planning to build; pictures of the mockups are attached to this Devpost.

## Challenges we ran into
Originally, our plan was to create an iOS/Android app with React Native and fully flesh out our Figma mockups. The full idea simply had too many features and details to do both:
* create the mobile application
* create the full application, with all the features we brainstormed

The first challenge we ran into was the mockup and design of the application. UI/UX design caused us a lot of grief, as we found it difficult to create a design that both looked good and was easy to understand in terms of functionality. The second challenge was the Google authentication feature we built for logging into the website: its implementation created a lot of bugs that delayed us considerably. Within the time constraint, we were still able to create a React web application with basic functionality as a prototype of our original idea.

## Accomplishments that we're proud of
We are happy with the web application we have created so far as a prototype in the given time. We have implemented:
* the landing page
* Google authentication
* the home screen
* tasks that are automatically assigned to users on a recurring basis
* group invite and join
* labelling the "slacker" member with the fewest tasks
* donut graphs indicating task completion every week
* the ability to see every task for each day
* the ability to sign out of the web page
* and even more!

## What we learned
Since this was the first hackathon for most of us, we put more emphasis and time on brainstorming an idea instead of just sitting down and starting to code. We learned that coming into the hackathon with some preconceived notion of what we individually wanted to build would have saved us more than half a day. We were also surprised to learn how useful Figma is as a UI/UX design tool for web development. The ability to copy-paste CSS for each element of the web page was instrumental in creating a working prototype faster.

## What's next for Slacker
The next steps are to:
* finish the web application with all of the features
* polish the full web application, with all the visual features we brainstormed
* finish the mobile application with the same features as the web application
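The repo isn't reproduced here, but the "assign each new chore to the member with the fewest open tasks" idea behind Slacker's automatic assignment can be sketched in a few lines of Python; the member and chore names below are only illustrative.

```python
from collections import defaultdict

def assign_chores(chores, members, existing=None):
    """Assign each chore to the group member with the fewest open tasks."""
    load = defaultdict(int, existing or {})   # member -> current open-task count
    assignments = {}
    for chore in chores:
        # pick the current "slacker": the member carrying the lightest load
        slacker = min(members, key=lambda m: load[m])
        assignments[chore] = slacker
        load[slacker] += 1
    return assignments

if __name__ == "__main__":
    chores = ["dishes", "vacuum", "trash", "bathroom"]
    members = ["Albert", "Hady", "Varun"]
    # Albert already has two open tasks, so new chores flow to the others first
    print(assign_chores(chores, members, existing={"Albert": 2}))
```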
# Mental-Health-Tracker

## Mental & Emotional Health Diary
This project was made because we all know how pressing an issue mental health and depression can be, not only for ourselves, but for thousands of other students. Our goal was to make something that gives someone the chance to accurately assess and track their own mental health using the tools that Google has made available. We wanted the person to be able to openly express their feelings in the diary for their own personal benefit.

Along the way, we learned about using Google's Natural Language processor, developing with Android Studio, and deploying an app on Google's App Engine with a `node.js` framework. Those last two parts turned out to be the greatest challenges. Android Studio was a challenge because one of our developers had not used Java in a long time, nor had he ever developed with `.xml`; he was pushed to learn a lot about the tool in a limited amount of time. The greatest challenge, however, was deploying the app using Google App Engine. The tool is extremely useful and seems like it should be easy to use, but we struggled to implement it with `node.js`. Issues arose with errors involving `favicon.ico` and `index.js`. It took us hours to resolve, and we were very discouraged, but we pushed through. After all, we had everything else; we knew we could push through this.

The end product is an app in which the user signs in with their Google account. It opens to the home page, where the user is prompted to answer four questions relating to their mental health for the day and then rate their happiness for the day on a scale of 1-10. After this is finished, the user is given their mental health score, along with an encouraging message tagged with a cute picture. The user can then view a graph of their mental health and happiness statistics to see how they progressed over the past week, or a calendar to see their happiness scores and specific answers for any day of the year.

Overall, we are very happy with how this turned out. We even have ideas for how we could do more, as we know there is always room to improve!
## Inspiration
STEM was inspired by our group members, who have all experienced failing a personal health goal. We believed that setting similar goals with our friends and seeing their progress would inspire us to work harder towards completing our own. We also agreed that this might encourage us to start challenges we see our friends partaking in, which can help us develop healthy lifestyle habits.

## What it does
STEM provides a space where users can set their health goals in the form of challenges and visualize their progress in the form of a growing tree. Additionally, users can see others' progress within the same challenges to further motivate them. Users can help promote fitness and health by creating their own challenges and inviting their friends, family, and colleagues.

## How we built it
This mobile application was built with React Native, the Expo CLI, and Firebase.

## Challenges we ran into
One challenge we ran into was the time limit. There were a few parts of our project that we designed in Figma and intended to code, but were unable to. Furthermore, none of our group members had prior experience with React Native, which, combined with the time limit, led to some planned features going undeveloped. Another challenge was the fact that our project is a very simple idea with a lot of competition.

## Accomplishments that we're proud of
We are very proud of our UI and the aesthetics of our project. None of us had prior experience with React Native, so we are proud that we were able to build and submit a functional project within 36 hours. Lastly, we are also very proud that we developed an idea with the potential to become a future business.

## What we learned
Throughout this weekend, we learned how to be more consistent with version control in order to work better and faster as a team. We also learned how to build an effective NoSQL database schema.

## What's next for STEM
As we all believe that STEM has the potential to become a future business, we will continue developing the code and deploy it. We will be adding a live feed page that allows you to see, like, and comment on friends' posts, and users will be able to post about their progress in challenges. STEM will also reach out to companies to create incentives for certain achievements made by users (e.g. a discount on certain sportswear brands after completing a physical challenge or reaching a certain tree level).
## Inspiration
With ubiquitous and readily available ML/AI turnkey solutions, the major bottlenecks of data analytics lie in the consistency and validity of datasets. **This project aims to enable a labeller to be consistent with both their fellow labellers and their past self while seeing the live class distribution of the dataset.**

## What it does
The UI allows a user to annotate datapoints from a predefined list of labels while seeing the distribution of labels this particular datapoint has previously been assigned by other annotators. The project also leverages AWS's BlazingText service to suggest labels for incoming datapoints from models that are retrained and redeployed as more labelled information is collected. Furthermore, the user also sees the top N similar datapoints (using Overlap Coefficient similarity) and their corresponding labels. In theory, this added information will motivate the annotator to remain consistent when labelling datapoints and to be aware of the labels that other annotators have assigned.

## How we built it
The project uses Google's Firestore realtime database with AWS SageMaker to streamline the creation and deployment of text classification models. For the front end we used Express.js, Node.js, and CanvasJS to create the dynamic graphs. For the backend we used Python, AWS SageMaker, Google's Firestore, and several NLP libraries such as spaCy and Gensim. We leveraged the realtime functionality of Firestore to trigger functions (via listeners) in both the front end and the backend. After K detected changes in the database, a new BlazingText model is trained, deployed, and used for inference on the current unlabelled datapoints, with the pertinent changes shown on the dashboard.

## Challenges we ran into
The initial setup of SageMaker was a major timesink; the constant permission errors when trying to create instances and assign roles were very frustrating. Additionally, our limited knowledge of front-end tools made the process of creating dynamic content challenging and time-consuming.

## Accomplishments that we're proud of
We actually got the ML models deployed and predicting our unlabelled data in a pretty timely fashion using a fixed number of triggers from Firebase.

## What we learned
Clear and effective communication is super important when designing the architecture of technical projects. There were numerous times when two team members were vouching for the same structure but a lack of clarity led to an apparent disparity. We also realized Firebase is pretty cool.

## What's next for LabelLearn
Creating a more interactive UI, optimizing performance, and adding more sophisticated text similarity measures.
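As a rough illustration of the Overlap Coefficient similarity that LabelLearn uses to surface the top-N similar datapoints (the project's actual implementation may differ), treating each datapoint as a bag of tokens:

```python
def overlap_coefficient(a_tokens, b_tokens):
    """Overlap coefficient: |A intersect B| / min(|A|, |B|)."""
    a, b = set(a_tokens), set(b_tokens)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def top_n_similar(query, labelled_corpus, n=5):
    """Return the n labelled datapoints most similar to `query`."""
    q = query.lower().split()
    scored = [(overlap_coefficient(q, text.lower().split()), text, label)
              for text, label in labelled_corpus]
    return sorted(scored, reverse=True)[:n]

# hypothetical labelled datapoints, just to show the call shape
corpus = [("the service was slow", "complaint"),
          ("great service and food", "praise")]
print(top_n_similar("slow service today", corpus, n=2))
```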
## Inspiration
During extreme events such as natural disasters or virus outbreaks, crisis managers are the decision makers. Their job is difficult because the right decision can save lives while the wrong decision can lead to their loss. Making such decisions in real time can be daunting when there is insufficient information, which is often the case. Recently, big data has gained a lot of traction in crisis management by addressing this issue; however, it creates a new challenge: how can you act on data when there's just too much of it to keep up with? One example of this is the use of social media during crises. In theory, social media posts can give crisis managers an unprecedented level of real-time situational awareness. In practice, the noise-to-signal ratio and volume of social media are too large for it to be useful. I built CrisisTweetMap to address this issue by creating a dynamic dashboard for visualizing crisis-related tweets in real time. The focus of this project was to make it easier for crisis managers to extract useful and actionable information. To showcase the prototype, I used tweets about the current coronavirus outbreak.

## What it does
* Scrapes live crisis-related tweets from Twitter
* Classifies tweets into relevant categories with a deep learning NLP model
* Extracts geolocation from tweets with different methods (see the sketch after this writeup)
* Pushes classified and geolocated tweets to a database in real time
* Pulls tweets from the database in real time to visualize on the dashboard
* Allows dynamic user interaction with the dashboard

## How I built it
* Tweepy + a custom wrapper for scraping and cleaning tweets
* AllenNLP + torch + BERT + the CrisisNLP dataset for model training/deployment
* spaCy NER + geotext for extracting location names from text
* geopy + a gazetteer Elasticsearch Docker container for resolving locations to geolocations
* shapely for sampling geolocations from bounding boxes
* SQLite3 + pandas for database push/pull
* Dash + plotly + mapbox for live visualizations

## Challenges I ran into
* Geolocation is hard
* Stream stalling due to a large/slow neural network
* Responsive, interactive visualization of large amounts of data

## Accomplishments that I'm proud of
* A working prototype

## What I learned
* Different methods for fuzzy geolocation from text
* Live map visualizations with Dash

## What's next for CrisisTweetMap
* Other crises, like extreme weather events
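The full pipeline isn't shown here, but the location-extraction step described above could look roughly like the following with spaCy NER and geopy; the gazetteer/Elasticsearch fallback and the bounding-box sampling are omitted, and the example tweet is made up.

```python
import spacy
from geopy.geocoders import Nominatim

# requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
geocoder = Nominatim(user_agent="crisis-tweet-map-demo")

def geolocate_tweet(text):
    """Extract place names from a tweet and resolve the first match to lat/lon."""
    doc = nlp(text)
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
    for place in places:
        hit = geocoder.geocode(place)
        if hit:
            return {"place": place, "lat": hit.latitude, "lon": hit.longitude}
    return None  # no resolvable location found in the text

print(geolocate_tweet("New confirmed cases reported in Wuhan and Beijing today"))
```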
## Inspiration
We started off by asking, "What is something someone needs today?" In light of the stock market not doing so well, and the amount of false information being spread over the Internet these days, we figured it was time to get things right by understanding the stock market. No human can analyze a company without bias from the company's history and its potential stereotypes, but nothing beats using an NLP model to understand the current situation of a company. Thinking about the capabilities of the Cohere NLP API and what we know and want from the stock market led us to a solution: Stocker.

## What it does
The main application allows you to search for words that make up different stocks. Then, for each company matching the inputted string, the backend grabs the company and searches through its recent news via a web scraper on Google News. We collect all of the headings, evaluate the status of the company according to a rating system, and finally summarize all of the text that was read by running Generate on it and outputting the result.

## How we built it
The stocks corresponding to the search were grabbed via the NASDAQ API. Once the promise is fulfilled, the React page can update the list with ratings already prepended. The backend runs on Google Cloud and was built in Python with a Flask server. It communicates directly with the Cohere API, specifically the Generate and Classify functionalities. Classify is used to evaluate company status from the headings, and Generate produces a text summary of all the headings. The best ones are then selected and displayed with links to the specific articles so people can verify the truthfulness of the information. We trained Classify with several tests to ensure the API understood what we were asking of it, rather than being too extreme or imprecise.

## Challenges we ran into
Coming up with a plan to bring everything together was difficult: we knew we wanted data to pass into a Classify model, but figuring out how the scraping would work and being able to communicate that data took time. The entire backend was a little challenging for the team members, as it was their first time working with Flask. This resulted in some trouble getting things set up, and, more significantly, the process of deploying the backend involved lots of research and testing, as nobody on our team knew how our backend could be deployed. On the front-end side, there were some hiccups with getting the data to show for all objects being outputted (i.e., how mapping and conditional rendering work in React was a learning curve). There were also some bugs in small technical details, but those were eventually figured out. Finally, bringing the backend and front end together and troubleshooting all the small errors was a bit challenging given the amount of time remaining. Overall, though, most errors were solved in appropriate amounts of time.

## Accomplishments that we're proud of
Finally figuring out the deployment of the backend was one of the highlights for sure, as it took some time of researching and experimenting. Another big one was getting the front end built from the Figma prototype we made and combining it with the functional but very barebones infrastructure of our app that we made as a proof of concept. Having the front-end design work smoothly with whole object arrays, rather than individual ones, made the code a lot more standardized and consolidated in the end, which was nice to see.

## What we learned
We learned that it is important to do more research on standard templates for writing code so that it can be deployed easily. Some of us also got experience with Flask while others fine-tuned their React skills, which was great to see, as that proficiency became useful when the backend, front end, and REST API were coming together (sudden edits were very easy and smooth to make).

## What's next for Stocker
Stocker can add more categories and get smarter. For example, it could interpret the recent trend of where the stock has been heading, and draw on sources of data other than the news. Stocker relies heavily on the training model and the severity of article names, but in the future it could get smarter with more sources like these.
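A simplified sketch of Stocker's rating flow — scrape recent headlines, classify each one, aggregate into a score. The Google News URL/markup handling is only illustrative, and `classify_headline` is a hypothetical stand-in for the real Cohere Classify call.

```python
import requests
from bs4 import BeautifulSoup

def fetch_headlines(company, limit=10):
    """Pull recent news headline text for a company (selector is illustrative)."""
    url = f"https://news.google.com/search?q={company}"
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [a.get_text(strip=True) for a in soup.find_all("a") if a.get_text(strip=True)][:limit]

def classify_headline(headline):
    """Stand-in for the Cohere Classify call; returns 'positive'/'neutral'/'negative'."""
    return "neutral"  # hypothetical placeholder

def rate_company(company):
    """Average headline sentiment into a single rating in [-1, 1]."""
    headlines = fetch_headlines(company)
    if not headlines:
        return 0.0
    scores = {"positive": 1, "neutral": 0, "negative": -1}
    return sum(scores[classify_headline(h)] for h in headlines) / len(headlines)

print(rate_company("AAPL"))
```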
## Inspiration Every student knows the struggle that is course registration. You're tossed into an unfamiliar system with little advice and all these vague rules and restrictions to follow. All the while, courses are filling up rapidly. Far too often students—often underclassmen— are stuck without the courses they need. We were inspired by these pain points to create Schedge, an automatic schedule generator. ## What it does Schedge helps freshmen build their schedule by automatically selecting three out of a four course load. The three courses consist of a Writing the Essay course, the mandatory writing seminar for NYU students, a Core course like Quantitative Reasoning, and a course in the major of the student's choosing. Furthermore, we provide sophomores with potential courses to take after their freshman year, whether that's a follow up to Writing the Essay, or a more advanced major course. ## How we built it We wrote the schedule generation algorithm in Rust, as we needed it to be blazing fast and well designed. The front end is React with TypeScript and Material UI. The algorithm, while technically NP complete for all courses, uses some shortcuts and heuristics to allow for fast schedule generation. ## Challenges we ran into We had some trouble with the data organization, especially with structuring courses with their potential meeting times. ## Accomplishments that we're proud of Using a more advanced systems language such as Rust in a hackathon. Also our project has immediate real world applications at NYU. We plan on extending it and providing it as a service. ## What we learned Courses have a lot of different permutations and complications. ## What's next for Schedge More potential majors and courses! Features for upperclassmen!
## Inspiration As a team of post-secondary students, we’ve all been through the torment of realising that the courses you intended to take have times that conflict with each other. But if there’s one thing AI can do, it’s making decisions in a short period of time (provided they have the data). Rather than having students search through each course description to decide on how they’ll arrange their schedule, we wanted to create a product that could generate schedules for them, so long as they are provided sufficient information to decide which courses should be in the schedule, and when. ## What it does Borzoi Scheduler is a website that builds course schedules for UofT students. Users just need to provide their program of study, the semester they’re planning for, and the times when they don’t want classes, then Borzoi will generate a schedule for them. With additional exchanges between the user and Borzoi’s AI chat, further specifications can be made to ensure the schedule is as relevant as possible to the user and their needs. ## How we built it Figma was used to create a high-fidelity prototype of the website, demonstrating its functionalities with a sample use case. Meanwhile, Python was used in combination with the ChatGPT API to build the chat that users will interact with to create the personalised schedules. As for the website itself, we used HTML, CSS, and Javascript for its creation and design. Last, but not least, we attempted to use Flask to bring the frontend and backend together. Given the time constraint, we were unable to incorporate the databases that would’ve been required if we actually had to create schedules with UofT courses. However, our team was able to utilise these tools to create a bare-bones version of our website. ## Challenges we ran into Although we were able to settle on an idea relatively early on, due to a lack of experience with the software tools we’d previously learned about, our team had trouble identifying where to start on the project, as well as the technicalities behind the way it worked. We recognised the need for implementing AI, databases, and some sort of frontend/backend, but were unsure how, exactly, that implementation worked. To find our way to the start of actually creating the project, we consulted multiple resources: from Google, to the mentors, and even to ChatGPT, the very AI we intended to use in our website. Many of the answers we got were beyond our understanding, and we often felt just as confused as when we first started searching. After a good night’s rest and some more discussion, we then realised that our problem was that we were thinking too broadly. By breaking our ideas down into smaller, simpler chunks, we were able to get clearer answers and simultaneously identify the steps we needed to take to complete the implementation of our ideas. Our team still came across many unknowns along the way, but with the support of the mentors and quite a bit of self-learning, each of these points were clarified, and we were slowly, but surely, able to move along our development journey. ## Accomplishments that we're proud of Our team is proud of all that we were able to learn in these past 2-3 days! Although we weren’t able to come up with how to write all the code completely on our own, it was a rewarding experience, being exposed to so many development tools, learning the pros and cons of each, and (at the cost of our sleep) figuring out how to use the new knowledge. 
In particular, at the start of this event, our group wanted to work with AI specifically because none of us had experience with it; we wanted to use this hackathon as an excuse to learn more about this topic and the tools needed to apply it, and we were not disappointed. The time spent doing research and asking mentors for suggestions deepened our understanding of the use of AI, as well as a variety of other tools that we’d often heard of, but had never interacted with until we participated in this hackathon. ## What we learned As mentioned in the accomplishments section, after these past 2-3 days, we now know quite a bit more about AI and other topics such as APIs, JavaScript, etc. But technical knowledge aside, we discovered the importance of breaking problems down into more manageable pieces. When we first started trying to work on our idea, it felt almost impossible for us to even get one function working. But by setting mini goals, and working through each one slowly, and carefully, we were eventually able to create what we have now! ## What's next for Borzoi Scheduler At the moment, there are still a number of functionalities we’re hoping to add (features we wanted to add if we had more time). For one, we want to make the service more accessible by providing voice input and multilingual support (possibly with the use of WhisperAI). For another, we’re hoping to allow users to save their schedule in both a visual and textual format, depending on their preferences. Once those functions are implemented and tested, we want to consider the scope of our service. Currently, Borzoi Scheduler is only available for the students of one school, but we’re hoping to be able to extend this service to other schools as well. Knowing that many students also have to work to pay for rent, tuition, and more, we want to allow as many people as possible to have access to this service so that they can save time that can be used to focus on their hobbies, relationships, as well as their own health. Though this is a big goal, we’re hoping that by collaborating with school services to provide accurate course information, as well as to receive possible funding for the project from the schools, this mission will be made possible. Furthermore, as scheduling is not only done by students, but also by organisations and individuals, we would like to consider creating or adapting Borzoi Scheduler to these audiences so that they may also save time on organising their time.
# Are You Taking
It's the anti-scheduling app. 'Are You Taking' is the no-nonsense way to figure out if you have class with your friends by comparing your course schedules with ease. No more screenshots, only good vibes!

## Inspiration
The fall semester is approaching... too quickly. And we don't want to be in class by ourselves. Every year, we do the same routine of sending screenshots to our peers of what we're taking that term. It's tedious, and every time you change courses, you have to resend a picture. It also doesn't scale well to groups of people trying to find all of the different overlaps. So, we built a fix. Introducing "Are You Taking" (AYT), an app that allows users to upload their calendars and find event overlap. It works very similarly to scheduling apps like when2meet, except with the goal of finding where there *is* conflict, instead of where there isn't.

## What it does
The flow goes as follows:
1. Users upload their calendar and get a custom URL like `https://areyoutaking.tech/calendar/<uuidv4>`
2. They can then send that URL wherever it suits them most
3. Other users may then upload their own calendars
4. The link stays alive so users can go back to see who has class with who

## How we built it
We leveraged React on the front end, along with Next, Sass, React-Big-Calendar, and Bootstrap. For the back end, we used Python with Flask. We also used CockroachDB for storing events and handled deployment using Google Cloud Run (GCR) on GCP. We were able to create Dockerfiles for our front end and back end separately and deploy each to its own GCR instance.

## Challenges we ran into
There were two major challenges we faced in development. The first was modelling relationships between the various entities involved in our application. From one-to-one, to one-to-many, to many-to-many, we had to write effective schemas to ensure we could render data efficiently. The second was connecting our front-end code to our back-end code; we waited perhaps a bit too long to pair them together and really felt a time crunch as the deadline approached.

## Accomplishments that we're proud of
We managed to cover a lot of new ground!
* Effectively rendering calendar events
* Handling file uploads and storing event data
* Deploying the application on GCP using GCR
* Capturing various relationships with database schemas and SQL

## What we learned
We used each of these technologies for the first time:
* Next
* CockroachDB
* Google Cloud Run

## What's next for Are You Taking (AYT)
There are a few major features we'd like to add!
* Support for direct Google Calendar, Apple Calendar, and Outlook links
* Editing the calendar so you don't have to re-upload the file
* Integrations with common platforms: Messenger, Discord, email, Slack
* Simple passwords for calendars and users
* Rendering a 'generic week' as the calendar, instead of specific dates
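The heart of AYT's "where *is* there conflict" check is an interval intersection over two users' events. A toy in-memory version might look like this (the real app stores events in CockroachDB and works with uploaded calendar files):

```python
from datetime import datetime

def overlaps(a_start, a_end, b_start, b_end):
    """Two events overlap when each one starts before the other ends."""
    return a_start < b_end and b_start < a_end

def shared_classes(cal_a, cal_b):
    """Return pairs of events (one per user) that occur at the same time."""
    return [(a["name"], b["name"]) for a in cal_a for b in cal_b
            if overlaps(a["start"], a["end"], b["start"], b["end"])]

# hypothetical sample calendars
alice = [{"name": "CS 101", "start": datetime(2022, 9, 12, 10), "end": datetime(2022, 9, 12, 11)}]
bob   = [{"name": "CS 101", "start": datetime(2022, 9, 12, 10), "end": datetime(2022, 9, 12, 11)}]
print(shared_classes(alice, bob))   # -> [('CS 101', 'CS 101')]
```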
## Inspiration
JetBlue challenge of YHack

## What it does
Website with sentiment analysis of JetBlue

## How I built it
Python, data scraping, used TextBlob for sentiment analysis

## Challenges I ran into
Choosing between TextBlob and NLTK

## Accomplishments that I'm proud of
Having a finished product

## What I learned
How to do sentiment analysis

## What's next for FeelingBlue
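FeelingBlue's scraping side isn't shown, but the TextBlob sentiment step it describes is only a couple of lines; the example tweets below are made up.

```python
from textblob import TextBlob

tweets = [
    "JetBlue got me home early, great crew!",
    "Third delay this month with JetBlue, so frustrating.",
]
for tweet in tweets:
    # polarity ranges from -1.0 (negative) to +1.0 (positive)
    polarity = TextBlob(tweet).sentiment.polarity
    print(f"{polarity:+.2f}  {tweet}")
```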
## Inspiration
We got our inspiration from looking at the tools provided to us in the hackathon. We saw that we could use the Google APIs effectively when analyzing the sentiment of customer reviews on social media platforms. With the wide range of possibilities it gave us, we got the idea of presenting the data visually.

## What it does
JetBlueByMe is a program that takes over 16,000 reviews from TripAdvisor, and hundreds of tweets from Twitter, and presents them in a graphable way. The first representation is a simple but effective word cloud, which renders more frequently used adjectives larger. The other is a bar graph showing which words appear most consistently.

## How we built it
The first step was to scrape data off multiple websites. To do this, a web-scraping robot from UiPath was used. This saved a lot of time and allowed us to focus on other aspects of the program. For Twitter, Python had to be used in conjunction with the Beautiful Soup library to extract the tweets and hashtags. This was only possible after receiving permission 10 hours after applying to Twitter for API access. The Google Sentiment API and Syntax API were used to create the final product. The Syntax API helped extract the adjectives from the reviews so we could show a word cloud. To display the word cloud, the programming was done in R, as it is an effective language for data manipulation.

## Challenges we ran into
We were initially unable to use UiPath to scrape Twitter because the page didn't have a next button, so the robot could not continue on its own. This was fixed using Beautiful Soup in Python. Also, when trying to extract the adjectives, the processing was very slow, causing us to fall back about 2 hours. None of us knew the ins and outs of the web, hence it was a challenging problem for us.

## Accomplishments that we're proud of
We are happy about finding an effective way to scrape words using both UiPath and Beautiful Soup. Also, we weren't aware that Google provided an API for sentiment analysis, so access to that was a big plus. We learned how to utilize our tools and incorporate them into our project. We also used Firebase to store data in the cloud so we know it's secure.

## What we learned
Web scraping was a big thing that we all learned, as it was new to all of us. We had to research extensively before applying any idea. Most of the group did not know how to use the language R, but we understood the basics by the end. We also learned how to set up Firebase and a Google Cloud service, which will definitely be a big asset in our future programming endeavours.

## What's next for JetBlueByMe
Our web-scraping application can be optimized, and we plan on getting a live feed set up to show review sentiment in real time. With time and resources, we would be able to implement that.
## Inspiration Given the increase in mental health awareness, we wanted to focus on therapy treatment tools in order to enhance the effectiveness of therapy. Therapists rely on hand-written notes and personal memory to progress emotionally with their clients, and there is no assistive digital tool for therapists to keep track of clients’ sentiment throughout a session. Therefore, we want to equip therapists with the ability to better analyze raw data, and track patient progress over time. ## Our Team * Vanessa Seto, Systems Design Engineering at the University of Waterloo * Daniel Wang, CS at the University of Toronto * Quinnan Gill, Computer Engineering at the University of Pittsburgh * Sanchit Batra, CS at the University of Buffalo ## What it does Inkblot is a digital tool to give therapists a second opinion, by performing sentimental analysis on a patient throughout a therapy session. It keeps track of client progress as they attend more therapy sessions, and gives therapists useful data points that aren't usually captured in typical hand-written notes. Some key features include the ability to scrub across the entire therapy session, allowing the therapist to read the transcript, and look at specific key words associated with certain emotions. Another key feature is the progress tab, that displays past therapy sessions with easy to interpret sentiment data visualizations, to allow therapists to see the overall ups and downs in a patient's visits. ## How we built it We built the front end using Angular and hosted the web page locally. Given a complex data set, we wanted to present our application in a simple and user-friendly manner. We created a styling and branding template for the application and designed the UI from scratch. For the back-end we hosted a REST API built using Flask on GCP in order to easily access API's offered by GCP. Most notably, we took advantage of Google Vision API to perform sentiment analysis and used their speech to text API to transcribe a patient's therapy session. ## Challenges we ran into * Integrated a chart library in Angular that met our project’s complex data needs * Working with raw data * Audio processing and conversions for session video clips ## Accomplishments that we're proud of * Using GCP in its full effectiveness for our use case, including technologies like Google Cloud Storage, Google Compute VM, Google Cloud Firewall / LoadBalancer, as well as both Vision API and Speech-To-Text * Implementing the entire front-end from scratch in Angular, with the integration of real-time data * Great UI Design :) ## What's next for Inkblot * Database integration: Keeping user data, keeping historical data, user profiles (login) * Twilio Integration * HIPAA Compliancy * Investigate blockchain technology with the help of BlockStack * Testing the product with professional therapists
MediBot: Help us help you get the healthcare you deserve ## Inspiration: Our team went into the ideation phase of Treehacks 2023 with the rising relevance and apparency of conversational AI as a “fresh” topic occupying our minds. We wondered if and how we can apply conversational AI technology such as chatbots to benefit people, especially those who may be underprivileged or underserviced in several areas that are within the potential influence of this technology. We were brooding over the six tracks and various sponsor rewards when inspiration struck. We wanted to make a chatbot within healthcare, specifically patient safety. Being international students, we recognize some of the difficulties that arise when living in a foreign country in terms of language and the ability to communicate with others. Through this empathetic process, we arrived at a group that we defined as the target audience of MediBot; children and non-native English speakers who face language barriers and interpretive difficulties in their communication with healthcare professionals. We realized very early on that we do not want to replace the doctor in diagnosis but rather equip our target audience with the ability to express their symptoms clearly and accurately. After some deliberation, we decided that the optimal method to accomplish that using conversational AI was through implementing a chatbot that asks clarifying questions to help label the symptoms for the users. ## What it does: Medibot initially prompts users to describe their symptoms as best as they can. The description is then evaluated to compare to a list of proper medical terms (symptoms) in terms of similarity. Suppose those symptom descriptions are rather vague (do not match very well with the list of official symptoms or are blanket terms). In that case, Medibot asks the patients clarifying questions to identify the symptoms with the user’s added input. For example, when told, “My head hurts,” Medibot will ask them to distinguish between headaches, migraines, or potentially blunt force trauma. But if the descriptions of a symptom are specific and relatable to official medical terms, Medibot asks them questions regarding associated symptoms. This means Medibot presents questions inquiring about symptoms that are known probabilistically to appear with the ones the user has already listed. The bot is designed to avoid making an initial diagnosis using a double-blind inquiry process to control for potential confirmation biases. This means the bot will not tell the doctor its predictions regarding what the user has, and it will not nudge the users into confessing or agreeing to a symptom they do not experience. Instead, the doctor will be given a list of what the user was likely describing at the end of the conversation between the bot and the user. The predictions from the inquiring process are a product of the consideration of associative relationships among symptoms. Medibot keeps track of the associative relationship through Cosine Similarity and weight distribution after the Vectorization Encoding Process. Over time, Medibot zones in on a specific condition (determined by the highest possible similarity score). The process also helps in maintaining context throughout the chat conversations. Finally, the conversation between the patient and Medibot ends in the following cases: the user needs to leave, the associative symptoms process suspects one condition much more than the others, and the user finishes discussing all symptoms they experienced. 
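A rough sketch of the description-to-symptom matching step described above, using the sentence-transformers library as one way to get the cosine similarities between a free-text description and the list of official symptoms. The model choice, threshold, and symptom list are assumptions for illustration, not the team's exact setup.

```python
from sentence_transformers import SentenceTransformer, util

# hypothetical subset of the "official" symptom list
SYMPTOMS = ["headache", "migraine", "fever", "nausea", "chest pain", "shortness of breath"]

model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed model choice
symptom_vecs = model.encode(SYMPTOMS, convert_to_tensor=True)

def match_symptoms(description, top_k=3, threshold=0.35):
    """Return the official symptoms most similar to a patient's free-text description."""
    query_vec = model.encode(description, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, symptom_vecs)[0]
    ranked = sorted(zip(SYMPTOMS, scores.tolist()), key=lambda p: p[1], reverse=True)
    # keep only reasonably close matches; vague descriptions fall below the threshold
    return [(name, round(score, 2)) for name, score in ranked[:top_k] if score >= threshold]

print(match_symptoms("my head hurts and I feel like throwing up"))
```

When the top score is low or several symptoms tie, that is the cue for the clarifying-question flow described above rather than a direct match.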
## How we built it We constructed the MediBot web application in two different and interconnected stages, frontend and backend. The front end is a mix of ReactJS and HTML. There is only one page accessible to the user which is a chat page between the user and the bot. The page was made reactive through several styling options and the usage of states in the messages. The back end was constructed using Python, Flask, and machine learning models such as OpenAI and Hugging Face. The Flask was used in communicating between the varying python scripts holding the MediBot response model and the chat page in the front end. Python was the language used to process the data, encode the NLP models and their calls, and store and export responses. We used prompt engineering through OpenAI to train a model to ask clarifying questions and perform sentiment analysis on user responses. Hugging Face was used to create an NLP model that runs a similarity check between the user input of symptoms and the official list of symptoms. ## Challenges we ran into Our first challenge was familiarizing ourselves with virtual environments and solving dependency errors when pushing and pulling from GitHub. Each of us initially had different versions of Python and operating systems. We quickly realized that this will hinder our progress greatly after fixing the first series of dependency issues and started coding in virtual environments as solutions. The second great challenge we ran into was integrating the three separate NLP models into one application. This is because they are all resource intensive in terms of ram and we only had computers with around 12GB free for coding. To circumvent this we had to employ intermediate steps when feeding the result from one model into the other and so on. Finally, the third major challenge was resting and sleeping well. ## Accomplishments we are proud of First and foremost we are proud of the fact that we have a functioning chatbot that accomplishes what we originally set out to do. In this group 3 of us have never coded an NLP model and the last has only coded smaller scale ones. Thus the integration of 3 of them into one chatbot with front end and back end is something that we are proud to have accomplished in the timespan of the hackathon. Second, we are happy to have a relatively small error rate in our model. We informally tested it with varied prompts and performed within expectations every time. ## What we learned: This was the first hackathon for half of the team, and for 3/4, it was the first time working with virtual environments and collaborating using Git. We learned quickly how to push and pull and how to commit changes. Before the hackathon, only one of us had worked on an ML model, but we learned together to create NLP models and use OpenAI and prompt engineering (credits to OpenAI Mem workshop). This project's scale helped us understand these ML models' intrinsic moldability. Working on Medibot also helped us become much more familiar with the idiosyncrasies of ReactJS and its application in tandem with Flask for dynamically changing webpages. As mostly beginners, we experienced our first true taste of product ideation, project management, and collaborative coding environments. ## What’s next for MediBot The next immediate steps for MediBot involve making the application more robust and capable. In more detail, first we will encode the ability for MediBot to detect and define more complex language in simpler terms. 
Second, we will improve the initial response to allow for more substantial multi-symptom functionality. Third, we will expand the processing of qualitative answers from users to include information like the length of pain, the intensity of pain, and so on. Finally, after this more robust system is implemented, we will begin the training phase by speaking to healthcare providers and testing it out on volunteers.

## Ethics:
Our design aims to improve patients' healthcare experience for the better and bridge the gap between having a condition and getting the desired treatment. We believe expression barriers and technical knowledge should not be missing stones in that bridge. The ethics of our design therefore hinge on providing quality healthcare for all. We intentionally stopped short of providing a diagnosis with MediBot because of the following ethical considerations:

* **Bias mitigation:** Whatever diagnosis we provide might induce unconscious biases, like confirmation or availability bias, affecting the medical provider's ability to give a proper diagnosis. It must be noted, however, that MediBot is capable of producing a diagnosis. Perhaps MediBot can be used in further research to ensure the credibility of AI diagnosis by checking its prediction against the doctor's after the diagnosis has been made.
* **Patient trust and safety:** We're not yet at the point in our civilization's history where patients are comfortable getting a diagnosis from an AI. MediBot's intent is to nudge us a step down that path by seamlessly, safely, and without negative consequence integrating AI within the more physical, intimate environments of healthcare. We envision MediBot in these hospital spaces, helping users articulate their symptoms better without fear of getting a wrong diagnosis. We're humans; we like when someone gets us, even if that someone is artificial.

However, the implementation of AI for pre-diagnosis still raises many ethical questions and considerations:

* **Fairness:** Use of MediBot requires a working knowledge of the English language, which automatically limits its accessibility. There are still many immigrants for whom the questions, as simple as we have tried to make them, might be too much. This is a severe limitation on our goal of assisting these people. A next step might include explaining troublesome terms in their own language (note: the pre-diagnosis process will remain in English; only troublesome terms the user cannot understand in English may be explained in a more familiar language, so that we further build patients' vocabulary and familiarity with English). There are also accessibility concerns, as hospitals in certain regions or economic strata may not have the resources to incorporate this technology.
* **Bias:** We put serious thought into bias mitigation on the side of both the doctor and the patient. It is important to ensure that MediBot does not lead the patient into reporting symptoms they don't necessarily have or induce availability bias. We aimed to circumvent this by asking questions seemingly at random from a list of symptoms generated by our sentence similarity model. This avoids leading the user in just one direction. However, it does not eradicate all biases, as associative symptoms are hard to mask from the patient (i.e., a patient may think of chills if you ask about cold), so this remains a consideration.
* **Accountability:** Errors in symptom identification can be tricky to detect, making it very hard for the medical practitioner to know when the symptoms are a true reflection of the patient's actual state. Who is responsible for the consequences of a wrong pre-diagnosis? It is important to establish clear systems of accountability and checks for detecting and improving errors in MediBot.
* **Privacy:** MediBot will be trained on patient data and patient-doctor diagnoses in future operations. There remain concerns about privacy and data protection. This information, especially identifying information, must be kept confidential and secure. One method of handling this is asking users at the very beginning whether they want their data to be used for diagnostics and training or not.
## Inspiration
No one likes waiting around too much, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty. And this problem affects around 60 million people every year, just in the US. We would like to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience.

## What it does
We let people upload their medical history and list of symptoms before they reach the waiting rooms of hospitals. They can do this through the voice assistant feature, where in a conversational style they describe their symptoms, related details, and circumstances. They also have the option of just writing these in a standard form, if that's easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate' or 'critical' and is added to the virtual queue. This way, hospitals can take care of their patients more efficiently by having a fair ranking system (including time of arrival) that determines the queue, and patients have a higher satisfaction level as well, because they see a transparent process without the usual uncertainty and they feel attended to. They can be told an estimated range of waiting time, which frees them from stress, and they are also shown a progress bar to see whether a doctor has reviewed their case, whether insurance has been contacted, or whether any status has changed. Patients are also provided with tips and educational content regarding their symptoms and pains, battling the abundant stream of misinformation and incorrectness that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative; let's try to change that!

## How we built it
We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for the voice assistant and the Azure Text Analytics API for processing. The designs were built in Figma.

## Challenges we ran into
Brainstorming took longer than we anticipated and we had to keep our cool and not stress, but in the end we agreed on an idea that has enormous potential, and it was worth chewing on it longer. We have had a little experience with voice assistants in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We were thinking of implementing multiple user-input languages so that less fluent English speakers could use the app as well.

## Accomplishments that we're proud of
Treehacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real and large-scale societal problem, and we enjoyed creating something in the domain.

## What we learned
We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, and for some of us this was the first hackathon attended, so working together with new people with different experiences definitely proved to be exciting and valuable.
## Inspiration
As the number of Internet users grows, there is an increasing need for online technical support. We asked ourselves, what would a help desk look like 30 years from now? Would there be human-to-human interaction? Perhaps the user's problem could be solved before it even happens. While we aren't currently able to predict the future, we can still make it as easy as possible for a client to find the help they need in a timely manner. Technical support teams waste too much time on users not being able to find what they need or not knowing how to solve their problems. We decided to make a tool that makes it as easy as possible for them to find the solutions.

## What it does
Penguin Desk is a universal Google Chrome extension. Most technical problems can be reduced to specific roots, allowing solutions to be streamlined for the majority of websites. The user tells our extension what issue they're having or what they would like to accomplish. Our extension quickly searches for all the possible paths the user could take and automatically performs what it determines is the best action. These actions could be anything from page redirection, to highlighting steps on each page, to autofilling forms. Because of the similar format of most websites, the extension automatically adapts to the user's needs.

## How we built it
We built the tool as a Google Chrome extension using JavaScript, HTML and CSS. To run it, we sideload it into the browser. We use a synonyms API to find other words on the page that could help the user solve their problem.

## Challenges we ran into
We ran into numerous bugs, everything from console logging breaking our program to comments being ignored. We found it difficult to get the files to communicate with each other.

## Accomplishments and what we learned
None of us had built a Google Chrome extension before, so it was quite the learning experience.

## What's next for Penguin Desk
Penguin Desk was initially going to have users log in to an account so that we could store data such as their personal information and previous help requests. We didn't have time to implement that, but it would have been nice to have.
## Inspiration
Due to heavy workloads and family problems, people often forget to take care of their health and diet. Common health problems people face nowadays are blood pressure, heart problems, and diabetes. Many people also face mental health problems due to studies, jobs, or other pressures. This project can help people find out about such health problems. It also helps people recycle items more easily, as items are divided into 12 different classes, and it can help people who have no knowledge of plants predict whether a plant has a disease or not.

## What it does
On the Garbage page, when we upload an image, it classifies which kind of garbage it is, which helps people recycle more easily. On the Mental Health page, when we answer some questions, it predicts whether we are facing some kind of mental health issue. The Health page is divided into three parts: one page predicts whether you have heart disease, the second predicts whether you have diabetes, and the third predicts whether you have blood pressure problems. The Covid-19 page classifies whether you have covid or not, and the Plant\_Disease page predicts whether a plant has a disease or not.

## How we built it
I built it using Streamlit and OpenCV.

## Challenges we ran into
Deploying the website to Heroku was very difficult. Most of this was new to us except for deep learning and ML, so it was very difficult overall due to the time restraint. The overall logic and finding out how we should calculate everything was difficult to determine within the time limit. Overall, time was the biggest constraint.

## Accomplishments that we're proud of

## What we learned
TensorFlow, Streamlit, Python, HTML5, CSS3, OpenCV, machine learning, deep learning, and using different Python packages.

## What's next for Arogya
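The Streamlit pages described above follow a common upload-and-predict pattern; a minimal sketch of what the garbage-classification page might look like is below. The model file, input size, and class list are hypothetical stand-ins, not Arogya's actual assets.

```python
import cv2
import numpy as np
import streamlit as st
from tensorflow.keras.models import load_model

# hypothetical 12-class label list matching the trained model's output order
CLASSES = ["battery", "biological", "brown-glass", "cardboard", "clothes", "green-glass",
           "metal", "paper", "plastic", "shoes", "trash", "white-glass"]

model = load_model("garbage_classifier.h5")   # hypothetical trained model file

st.title("Garbage Classification")
uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    # decode the uploaded bytes into an OpenCV BGR image
    img = cv2.imdecode(np.frombuffer(uploaded.read(), np.uint8), cv2.IMREAD_COLOR)
    st.image(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), caption="Uploaded image")
    batch = cv2.resize(img, (224, 224))[np.newaxis] / 255.0   # assumed input size
    prediction = CLASSES[int(np.argmax(model.predict(batch)))]
    st.write(f"Predicted category: **{prediction}**")
```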
## Inspiration
Over 70 million people around the world use sign language as their native form of communication: 70 million voices that are not fully recognized in today's society. This disparity inspired our team to develop a program that allows real-time translation of sign language onto a text display. It brings more inclusivity, letting those who do not know sign language communicate with a new community by breaking down language barriers.

## What it does
It translates sign language into text in real time.

## How we built it
We set up the environment by installing different packages (OpenCV, MediaPipe, scikit-learn) and a webcam.
* Data preparation: We collected data for our ML model by capturing a few sign language letters through the webcam, taking the whole image frame and sorting it into different categories to classify the letters.
* Data processing: We used MediaPipe's computer vision inference to capture the hand gestures and localize the landmarks of the fingers (see the sketch after this writeup).
* Train/test model: We trained our model to detect matches between the trained images and hand landmarks captured in real time.

## Challenges we ran into
The challenges began with our team struggling to come up with a topic to develop. We then ran into issues integrating our sign language detection code with the hardware, because our laptop lacked the ability to effectively process the magnitude of our code.

## Accomplishments that we're proud of
The accomplishment we are most proud of is that we were able to use hardware in our project as well as machine learning with a focus on computer vision.

## What we learned
At the beginning of our project, our team was inexperienced with developing machine learning code. However, through extensive research on machine learning, we were able to expand our knowledge in under 36 hours to develop a fully working program.
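A condensed sketch of the landmark-capture step with MediaPipe and OpenCV; the classifier itself is omitted, and the flattened (x, y) feature layout is an assumption about how the landmarks would be fed to a scikit-learn model.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        # flatten the 21 (x, y) landmark pairs into one feature vector
        features = [coord for point in landmarks for coord in (point.x, point.y)]
        # letter = classifier.predict([features])   # hypothetical scikit-learn model
    cv2.imshow("Sign language capture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```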
## Inspiration
Amidst hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought gave birth to our inspiration for this web-based app, which provides activities customised to a person's mood that help them relax and rejuvenate.

## What it does
We planned to create a platform that detects a user's mood through facial recognition, recommends yoga poses to lighten the mood and evaluates their correctness, and helps the user jot down their thoughts in a self-care journal.

## How we built it
Frontend: HTML5, CSS (framework used: Tailwind CSS), JavaScript
Backend: Python, JavaScript
Server side: Node.js, Passport.js
Database: MongoDB (for user login), MySQL (for mood-based music recommendations)

## Challenges we ran into
Incorporating OpenCV into our project was a challenge, but it was very rewarding once it all worked. Since all of us were first-time hackers and due to time constraints, we couldn't deploy our website externally.

## Accomplishments that we're proud of
Mental health issues are among the least addressed diseases even though medically they rank in the top 5 chronic health conditions. We at Umang are proud to have taken notice of such an issue and to help people recognise their moods and cope with the stresses encountered in their daily lives. Through our app we hope to give people a better perspective as well as push them towards a more sound mind and body. We are really proud that we could create a website that could help break the stigma associated with mental health. It was an achievement that this website includes so many features to improve the user's mental health, like letting the user vibe to music curated just for their mood, engaging the user in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture just by sitting at home with an AI instructor. Furthermore, completing this within 24 hours was an achievement in itself, since it was our first hackathon, which was very fun and challenging.

## What we learned
We learned how to implement OpenCV in projects. Another skill set we gained was how to use Tailwind CSS. Besides that, we learned a lot about backends and databases, how to create shareable links, and how to create to-do lists.

## What's next for Umang
While the core functionality of our app is complete, it can of course be further improved.
1) We would like to add a chatbot which can be the user's guide/best friend and give advice when the user is in mental distress.
2) We would also like to add a mood log which keeps track of the user's daily mood, and if a serious degradation of mental health is seen, it can directly connect the user to medical helpers and therapists for proper treatment.
This lays the grounds for further expansion of our website. Our spirits are up and the sky is our limit.
## Inspiration There are many scary things in the world, ranging from poisonous spiders to horrifying ghosts, but none of them scare people more than the act of public speaking. Over 75% of humans suffer from a fear of public speaking, so what if there was a way to tackle this problem? That's why we created Strive.
## What it does Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback on the vocal delivery of a person's presentation. Once you have recorded your speech, Strive calculates various performance variables such as voice clarity, filler word usage, voice speed, and voice volume. Once the performance variables have been calculated, Strive renders them in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. In the settings page, users have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive sends the feedback results to the user via text message, allowing them to share or forward an analysis easily.
## How we built it Using the collaboration tool Figma, we designed wireframes of our mobile app. We used services such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit to calculate the performance variables and used stdlib's cloud function features for text messaging.
## Challenges we ran into Given our technical backgrounds, one challenge we ran into was developing a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user.
## Accomplishments that we're proud of Creating a fully functional mobile app, while leveraging an unfamiliar technology stack, that people can use to start receiving actionable feedback on improving their public speaking skills. With what we built, anyone can use our app to improve their public speaking skills and conquer their fear of public speaking.
## What we learned Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs.
## What's next for Strive - Your Personal AI Speech Trainer
* Model the voices of famous public speakers for a more realistic experience when giving personal feedback (using the Lyrebird API).
* Calculate more performance variables for an even better analysis and more detailed feedback.
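As a rough illustration of the kind of performance variables described above, the snippet below computes a filler-word ratio and speaking speed from a transcript. The real app does this in C# against IBM Watson output, so the function and names here are hypothetical.

```python
# Illustrative only: per-speech metrics computed from a plain transcript.
FILLER_WORDS = {"um", "uh", "like", "basically", "actually"}

def speech_metrics(transcript: str, duration_seconds: float, custom_fillers=()):
    words = transcript.lower().split()
    fillers = FILLER_WORDS | set(custom_fillers)          # user-added filler words
    filler_count = sum(1 for w in words if w.strip(".,!?") in fillers)
    return {
        "words_per_minute": len(words) / (duration_seconds / 60),
        "filler_ratio": filler_count / max(len(words), 1),
    }

print(speech_metrics("So um basically the idea is simple", duration_seconds=5))
```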
## Inspiration It is nearly a year since the start of the pandemic, and going back to normal still feels like a distant dream. As students, most of our time is spent attending online lectures, reading e-books, listening to music, and playing online games. This forces us to spend immense amounts of time in front of a large monitor, clicking the same monotonous buttons. Many surveys suggest that this has increased anxiety levels in the youth. Essentially, we are losing the physical stimulus of reading an actual book in a library, going to an arcade to enjoy games, or playing table tennis with our friends.
## What it does It does three things:
1) Controls any game, such as Asphalt 9, using your hand or a physical (non-digital) steering wheel.
2) Helps you zoom in, zoom out, scroll up, and scroll down using only hand gestures.
3) Helps you browse any music of your choice using voice commands, with gesture controls for volume, pause/play, skip, etc.
## How we built it The three main technologies used in this project are:
1) Python 3: The software suite is built using Python 3 and was initially developed in the Jupyter Notebook IDE.
2) OpenCV: The software uses the OpenCV library in Python to implement most of its gesture recognition and motion analysis tasks.
3) Selenium: Selenium is a web driver that was used extensively to control the web interface interaction component of the software.
## Challenges we ran into
1) Selenium only works with Google Chrome version 81 and is very hard to debug :(
2) Finding the perfect HSV ranges corresponding to different colours was a tedious task and required me to write a special script to make it easier.
3) Pulling an all-nighter (a coffee does NOT help!)
## Accomplishments that we're proud of
1) Successfully amalgamated computer vision, speech recognition and web automation into a suite of software rather than just a single program!
## What we learned
1) How to debug Selenium efficiently
2) How to use angle geometry to steer a car using computer vision
3) Stabilizing errors in object detection
## What's next for E-Motion I plan to implement more components in E-Motion that will help browse the entire computer, and to make the voice commands more precise by ignoring background noise.
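A minimal sketch of the HSV-based colour tracking mentioned above: threshold a colour range, keep the largest contour, and draw its bounding box. The HSV bounds are placeholders that would come from the calibration script.

```python
# Rough sketch of colour-marker tracking with OpenCV; bounds below are illustrative.
import cv2
import numpy as np

lower, upper = np.array([35, 80, 80]), np.array([85, 255, 255])   # e.g. a green marker

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)                 # keep only the marker colour
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)            # largest blob = the marker
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("E-Motion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```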
## Inspiration The inspiration behind our project stems from the personal challenges we’ve faced in the past with public speaking and presentations. Like many others, we often found ourselves feeling overwhelmed by nerves, shaky voices, and the pressure of presenting in front of peers. This anxiety would sometimes make it difficult to communicate ideas effectively, and the fear of judgment made the experience even more daunting. Recognizing that we weren’t alone in these feelings, we wanted to create a solution that could help others overcome similar hurdles. That’s where Vocis comes in. Its aim is to give people the freedom and the ability to practice their presentation skills at their own pace, in a safe, supportive environment. Whether it’s practicing for a school project, a work presentation, or simply building the confidence to speak in front of others, the platform allows users to refine their delivery.
## What It Does Our project simulates real-life challenges that presenters might face, for example handling difficult situations like Q&A sessions, dealing with hecklers, or responding to aggressive viewers. By creating these simulated scenarios, our software prepares users for the unpredictability of live presentations. We hope that by giving people the tools and the setting to practice on their own terms, they can gradually build the skills and self-assurance needed to present with ease in any setting.
## Tech Stack - How Vocis is built
* ReactJs
* Shadcn
* NextJs
* TailwindCSS
* Hume.Ai
* OpenAI
## Challenges We Faced During the hackathon, one of the key challenges we faced was the need to dive into extensive documentation while implementing the APIs, as we had never worked with Hume before. On top of that, since none of us has much experience with the backend of an app, it was taxing to learn and implement at the same time. This already time-consuming task became more difficult due to unstable internet connectivity, which led to unexpected delays in accessing resources and troubleshooting problems in real time and put additional pressure on our timeline. Despite these setbacks, our team worked hard to adapt and maintain momentum.
## Accomplishments Despite the challenges we faced, we were able to build a functional prototype that demonstrates the core of our program: simulating difficult real-life scenarios for presenters and public speakers. It is the bare bones, but we’re very proud of ourselves for being able to do that and create a wonderful project.
## What We Learned We learned to create a viable project in limited time, allowing us to overcome our shortcomings in our ability to build a project. Through multiple workshops and insightful help from mentors, we learned more about APIs, implementing them, making sure they cooperate with each other, and streamlining the process. We also discovered a lot of new, cool and amazing technologies created by a lot of amazing people that allowed us to achieve the aim of our project.
## What’s Next For Vocis
* Allow multiple users to present at the same time, with the AI creating situations for multiple “panelists”
* Add many more situations that panelists and presenters may face, like different types of aggressive people and journalists that are a little too overbearing
* Add reactions from the audience listening to the presentation, creating a more realistic experience for the user (the “presenter”)
* More security measures and authentication
## Inspiration Inspired by [SIU Carbondale's Green Roof](https://greenroof.siu.edu/siu-green-roof/), we wanted to create an automated garden watering system that would help address issues ranging from food deserts to lack of agricultural space to storm water runoff. ## What it does This hardware solution takes in moisture data from soil and determines whether or not the plant needs to be watered. If the soil's moisture is too low, the valve will dispense water and the web server will display that water has been dispensed.
## How we built it First, we tested the sensor and determined the boundaries between dry, damp, and wet based on the sensor's output values. Then, we converted those boundaries into percentage soil moisture. Specifically, the sensor measures the conductivity of the material around it, so water, being the most conductive, gave the highest values and air, being the least conductive, gave the lowest. Soil falls in the middle, and the moisture ranges were defined by the pure air and pure water boundaries. From there, we put together the hardware setup, with the sensor connected to an Arduino UNO microcontroller, which is connected to a Raspberry Pi 4 controlling a solenoid valve that releases water when the soil moisture reading is less than 40% wet (see the sketch below).
## Challenges we ran into At first, we aimed too high. We wanted to incorporate weather data into our water dispensing system, but the information flow and JSON parsing were not cooperating with the Arduino IDE. We consulted with a mentor, Andre Abtahi, who helped us get a better perspective of our project scope. It was difficult to focus on what it meant to truly create a minimum viable product when we had so many ideas.
## Accomplishments that we're proud of Even though our team is spread across the country (California, Washington, and Illinois), we were still able to create a functioning hardware hack. In addition, as beginners we are very excited about this hackathon's outcome.
## What we learned We learned about wrangling APIs, how to work in a virtual hackathon, and project management. Upon reflection, creating a balance between feasibility, ability, and optimism is important for guiding the focus and motivation of a team. Being mindful about energy levels is especially important for long sprints like hackathons.
## What's next for Water Smarter Lots of things! What's next for Water Smarter is weather-controlled water dispensing. Given humidity, precipitation, and other weather data, our water system will dispense more or less water. This adaptive watering feature will save water and let nature pick up the slack. We would use the OpenWeatherMap API to gather the forecasted volume of rain, predict the potential soil moisture, and have the watering system dispense an adjusted amount of water to maintain the correct soil moisture content. In a future iteration of Water Smarter, we want to stretch the use of live geographic data even further by suggesting appropriate produce for each growing zone in the US, which will personalize the water conservation process. Not all plants are appropriate for all locations, so we would want to provide the user with options for optimal planting. We can use software like ArcScene to look at sunlight exposure according to regional 3D topographical images and suggest planting locations and times. We want our product to be user friendly, so we want to improve our aesthetics and show more information about soil moisture beyond just notifying about water dispensing.
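A hedged sketch of the Raspberry Pi side of this loop, assuming the Arduino prints a soil-moisture percentage over USB serial and the solenoid's relay sits on GPIO pin 17; both the port and the pin are assumptions, not the exact wiring.

```python
# Sketch only: read moisture over serial, open the valve briefly when soil is too dry.
import time
import serial
import RPi.GPIO as GPIO

VALVE_PIN = 17                                # assumption: relay driving the solenoid
GPIO.setmode(GPIO.BCM)
GPIO.setup(VALVE_PIN, GPIO.OUT)

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=2)   # assumption: Arduino USB port

try:
    while True:
        line = arduino.readline().decode().strip()
        if not line:
            continue
        try:
            moisture = float(line)            # percent soil moisture reported by the sensor
        except ValueError:
            continue
        if moisture < 40:                     # drier than 40% wet -> dispense water
            GPIO.output(VALVE_PIN, GPIO.HIGH)
            time.sleep(2)                     # short burst, then re-check
            GPIO.output(VALVE_PIN, GPIO.LOW)
            print("Water dispensed at %.1f%% moisture" % moisture)
        time.sleep(5)
finally:
    GPIO.cleanup()
```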
## Inspiration As the world progresses into the digital age, there is a huge simultaneous focus on creating various sources of clean energy that are sustainable and affordable. Unfortunately, there is minimal focus on ways to sustain the increasingly rapid production of energy. Energy is wasted every day as utility companies oversupply power to certain groups of consumers.
## What It Does Thus, we bring you Efficity, a device that helps utility companies analyze and predict the load demand of a housing area. By leveraging the expanding, ubiquitous arrival of Internet of Things devices, we can access energy data in real time. Utility companies can then estimate the ideal power to supply to a housing area while still satisfying the load demand. With this, less energy is wasted, improving energy efficiency. On top of that, everyday consumers can also have easy access to their own personal usage for tracking.
## How We Built It Our prototype is built primarily around a Dragonboard 410c, with a potentiometer used to represent the varying load demand of consumers. By using the analog capabilities of a built-in Arduino (ATmega328P), we can calculate the power consumed by the load in real time. A Python script is then run on the Dragonboard to receive the data from the Arduino through serial communication (see the sketch below). The Dragonboard further complements our design with its built-in WiFi capabilities. With this in mind, we can send HTTP requests to a web server hosted by energy companies. In our case, we explored sending this data to a free IoT platform web server, which allows a user anywhere to track energy usage and perform analytics, for example with MATLAB. In addition, the Dragonboard comes with a fully usable GUI and a compatible HDMI monitor for users who are less familiar with command-line controls.
## Challenges We Ran Into There were many challenges throughout the hackathon. First, we had trouble grasping the operation of the Dragonboard. The first 12 hours were spent just learning how to use the device itself; it also did not help that our first Dragonboard was defective and did not come with a pre-flashed operating system! Next time, we plan to ask more questions early on rather than fixating on problems we believed were trivial. Next, we had a hard time coding the WiFi functionality of the Dragonboard, largely due to the lack of expertise in the area from most members. For future reference, we find it advisable to have a greater diversity of team members to facilitate faster development.
## Accomplishments That We're Proud Of Overall, we are proud of what we have achieved, as this was our first time participating in a hackathon. We range from first-year all the way to fourth-year students! From learning how to operate the Dragonboard 410c to getting hands-on experience implementing IoT capabilities, we thoroughly believe that HackWestern has broadened all our perspectives on technology.
## What's Next for Efficity If this pitch is successful at this hackathon, we plan to further iterate and develop the full potential of the Dragonboard prototype. There are numerous algorithms we would love to implement and explore to process the collected data, since the Dragonboard is quite a powerful device with its own operating system. We may also want to include extra hardware add-ons such as silent alarms for over-usage or solar panels to create a fully self-sustained device.
To take this one step further: if we manage to build a fully functional product, we can opt to pitch this idea to investors!
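A rough sketch of the Dragonboard-side Python script described above: read the computed power value from the Arduino over serial and push it to an IoT dashboard over HTTP. The serial port, endpoint URL, and field name are placeholders rather than the actual platform details.

```python
# Sketch only: forward power readings from the Arduino to a (hypothetical) IoT endpoint.
import serial
import requests

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)       # placeholder port
ENDPOINT = "https://example-iot-platform.com/update"           # placeholder endpoint

while True:
    raw = arduino.readline().decode().strip()
    if not raw:
        continue
    try:
        power_watts = float(raw)                               # power computed on the Arduino
    except ValueError:
        continue
    requests.post(ENDPOINT, data={"field1": power_watts}, timeout=5)
```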
# see our presentation [here](https://docs.google.com/presentation/d/1AWFR0UEZ3NBi8W04uCgkNGMovDwHm_xRZ-3Zk3TC8-E/edit?usp=sharing)
## Inspiration Without purchasing hardware, there are few ways to have contact-free interactions with your computer. To make such technologies accessible to everyone, we created one of the first touch-less, hardware-less means of computer control by employing machine learning and gesture analysis algorithms. Additionally, we wanted to make it as accessible as possible in order to reach a wide demographic of users and developers.
## What it does Puppet uses machine learning technology, such as k-means clustering, to distinguish between different hand signs. Then, it interprets the hand signs into computer inputs such as keys or mouse movements to give the user full control without a physical keyboard or mouse.
## How we built it Using OpenCV to capture the user's camera input and MediaPipe to parse hand data, we could capture the relevant features of a user's hand. Once these features are extracted, they are fed into the k-means clustering algorithm (built with scikit-learn) to distinguish between different types of hand gestures (see the sketch at the end of this write-up). The hand gestures are then translated into specific computer commands, pairing AppleScript and PyAutoGUI to provide the user with the Puppet experience.
## Challenges we ran into One major issue was that in the first iteration of our k-means clustering algorithm the clusters were colliding. We fed the model the distance of each finger on your hand from your wrist and designed it to return the relevant gesture. Though we considered changing this to a coordinate-based system, we settled on making the hand gestures more distinct within our current distance system. This was ultimately the best solution because it allowed us to keep a small model while increasing accuracy. Mapping a finger position on camera to a point for the cursor on the screen was not as easy as expected. Because of inaccuracies in the hand detection, among other things, the mouse was at first very shaky. Additionally, it was nearly impossible to reach the edges of the screen because your finger would not be detected near the edge of the camera's frame. In our Puppet implementation, we constantly *pursue* the desired cursor position instead of directly *tracking it* with the camera. Also, we scaled our coordinate system so it requires less hand movement to reach the screen's edge.
## Accomplishments that we're proud of We are proud of the gesture recognition model and motion algorithms we designed. We also take pride in the organization and execution of this project in such a short time.
## What we learned A lot was discovered about the difficulties of utilizing hand gestures. From a data perspective, many of the gestures look very similar, and it took us time to develop the specific transformations, models and algorithms to parse our data into individual hand motions and signs. Also, our team members possess diverse and separate skillsets in machine learning, mathematics and computer science. We can proudly say it required nearly all three of us to overcome any major issue presented. Because of this, we all leave here with a more advanced skillset in each of these areas and better continuity as a team.
## What's next for Puppet Right now, Puppet can control presentations, the web, and your keyboard. In the future, Puppet could control much more.
* Opportunities in education: Puppet provides a more interactive experience for controlling computers. This could be utilized in elementary school classrooms to give kids hands-on learning with maps, science labs, and language.
* Opportunities in video games: As Puppet advances, it could give game developers a way to create games where the user interacts without a controller. Unlike technologies such as the Xbox Kinect, it would require no additional hardware.
* Opportunities in virtual reality: Cheaper VR alternatives such as Google Cardboard could be paired with Puppet to create a premium VR experience with at-home technology. This could be used in both examples described above.
* Opportunities in hospitals / public areas: People have been especially careful about avoiding germs lately. With Puppet, you won't need to touch any keyboards or mice shared by many doctors, providing a more sanitary way to use computers.
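Two of the ideas from the write-up above, shown in miniature and under assumptions about data shapes: k-means over wrist-to-landmark distances to bucket gestures, and a cursor that pursues its target a fraction of the way each frame instead of jumping to it.

```python
# Illustrative sketch, not the project's exact code; constants and shapes are assumptions.
import numpy as np
from sklearn.cluster import KMeans
import pyautogui

def wrist_distances(landmarks):
    """landmarks: (21, 2) array of hand landmark (x, y); index 0 is the wrist."""
    return np.linalg.norm(landmarks - landmarks[0], axis=1)

# gestures: list of (21, 2) arrays captured during a short calibration pass
# kmeans = KMeans(n_clusters=4).fit([wrist_distances(g) for g in gestures])
# label = kmeans.predict([wrist_distances(current_landmarks)])[0]

def pursue_cursor(target_xy, step=0.2):
    """Move only a fraction of the way toward the mapped fingertip position each frame."""
    cur_x, cur_y = pyautogui.position()
    tx, ty = target_xy
    pyautogui.moveTo(cur_x + (tx - cur_x) * step, cur_y + (ty - cur_y) * step)
```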
## Inspiration Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese Aravind is sad. We want to solve this problem for all the Aravinds in the world -- not just for Chinese though, for any language! ## What it does TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to other people speaking a foreign language. This is an augmented reality app which means the subtitles will appear floating in front of you! ## How we built it We used Microsoft Cognitive Services's Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes. ## Challenges we ran into One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source. ## Accomplishments that we're proud of Our biggest achievement is definitely multi-threading the app to be able to translate a lot of different languages at the same time using different endpoints. This makes real-time multi-lingual conversations possible! ## What we learned We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system that works very well from scratch using OpenCV libraries and Python Imaging Library. ## What's next for TranslatAR We want to launch this App in the AppStore so people can replicate VR/AR on their own phones with nothing more than just an App and an internet connection. It also helps a lot of people whose relatives/friends speak other languages.
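A skeleton of the one-thread-per-input-source pattern described above; the Cognitive Services speech and translation calls are replaced with a placeholder so the structure stands on its own.

```python
# Structure sketch only: one daemon thread per microphone/language pair.
import threading
import time
import queue

subtitles = queue.Queue()

def transcribe_and_translate(audio_chunk, source_lang):
    # Placeholder for the speech-to-text + translation API calls.
    return f"[{source_lang} -> en subtitle]"

def caption_loop(mic_id, source_lang):
    while True:
        audio_chunk = b""                      # placeholder: capture a chunk from this mic
        subtitles.put((mic_id, transcribe_and_translate(audio_chunk, source_lang)))
        time.sleep(1)

for mic_id, lang in [(0, "zh-CN"), (1, "es-ES")]:
    threading.Thread(target=caption_loop, args=(mic_id, lang), daemon=True).start()

for _ in range(4):                             # main loop would render these in the AR view
    print(subtitles.get())
```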
## Our Inspiration We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gamify the entire learning experience and make it immersive, all while providing users with the resources to dig deeper into concepts.
## What it does EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy, allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable.
## How we built it We built the VisionOS app using the Beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and a concurrent MVVM design architecture. 3D models were converted through Reality Converter into .usdz files for AR modelling. We stored these files in Google Cloud Storage buckets, with their corresponding metadata in CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.
## Challenges we ran into Learning to build for VisionOS was challenging, mainly due to the lack of documentation and libraries available. We faced various problems with 3D modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations of the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!
## Accomplishments that we're proud of We're really proud that we were able to get a functional app working on VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets combined, which was really rewarding to see come to life.
## Inspiration Our inspiration for this project was to empower individuals, enabling them to make well-informed financial choices effortlessly. Our goal was to eliminate the need for extensive hours of sifting through intricate credit card terms and conditions, making financial decision-making accessible to everyone.
## What it does The application stores information about the various credit cards used by the client. When a client wishes to make a purchase, they can simply open the app. It will then use location data to automatically identify the type of store they are in and recommend the most suitable credit card for that specific transaction, optimizing their benefits and rewards.
## How we built it The framework of the mobile application is React Native. We used Python for information processing and retrieval to and from the Google Places API as well as the OpenAI API. We also used JavaScript for app functionality.
## Challenges we ran into When we retrieved users' location data, we could initially only map their location to an address that could contain multiple stores, depending on the density of stores in the area (plazas, malls, etc.). We solved this problem by having the program select the closest store within a 100 m radius of the retrieved address (see the sketch below).
## Accomplishments that we're proud of Building a React Native app with no prior experience with the framework. Working around hurdles within our tech stack while maintaining the original project idea. Utilizing and learning how to use new APIs (OpenAI, Google Places).
## What we learned How to work efficiently in a team, by finding what needs to be done and delegating tasks among the group.
## What's next for Blitz Future additions for Blitz:
* Add a database of credit cards and their rewards so the user can search for the card they have and add it to their account.
* Index our own map data to make it more accurate for our use case.
* Track user spending to recommend new cards.
* Create a physical card that stores all the other credit card data and automatically chooses the card for you.
* A Chrome extension for online checkout.
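A simplified sketch of the closest-store lookup, assuming the Places Nearby Search REST endpoint; the API key, coordinates, and the way results are ranked are placeholders rather than the app's exact logic.

```python
# Sketch only: find a nearby store within 100 m of the user's coordinates.
import requests

API_KEY = "YOUR_GOOGLE_PLACES_KEY"             # placeholder

def nearest_store(lat: float, lng: float, radius_m: int = 100):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={"location": f"{lat},{lng}", "radius": radius_m, "key": API_KEY},
        timeout=5,
    )
    results = resp.json().get("results", [])
    # Nearby Search already ranks results; take the first plausible store in range.
    return results[0] if results else None

store = nearest_store(43.6532, -79.3832)       # example coordinates
if store:
    print(store["name"], store.get("types", []))
```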
## Inspiration Our inspiration came from the challenge proposed by Varient, a data aggregation platform that connects people with similar mutations together to help doctors and users.
## What it does Our application works by allowing the user to upload an image file. The image is then sent to Google Cloud’s Document AI to extract the body of text, which is processed and then matched against the datastore of gene names.
## How we built it While we had originally planned to feed this body of text to a Vertex AI ML model for entity extraction, the trained model was not accurate because of the small dataset. We also attempted to build a BigQuery ML model but struggled to format the data in the target column as required. Due to time constraints, we explored a different path and downloaded a list of gene symbols from the HUGO Gene Nomenclature Committee’s website (<https://www.genenames.org/>). Using Node.js and Multer, the image is processed and the text contents are efficiently matched against the datastore of gene names (see the sketch below). The app returns a JSON of the matching strings in order of highest frequency. This web app is then hosted on Google Cloud through App Engine at (<https://uofthacksix-chamomile.ue.r.appspot.com/>). The UI, while very simple, is easy to use. The intent of this project was to create something that could easily be integrated into Varient’s architecture. Converting this project into an API to pass the JSON to the client would be very simple.
## How it meets the theme "restoration" The overall goal of this application, which has been partially implemented, was to match mutated gene names from user-uploaded documents and connect users with resources and with others who share the same gene mutation. This would allow them to share strategies or items that have helped them live with the gene mutation, helping these individuals restore some normalcy in their lives.
## Challenges we ran into Some of the challenges we faced:
* having a small dataset to train the Vertex AI on
* time constraints on learning the new technologies and the best way to use them effectively
* formatting the data in the target column when attempting to build a BigQuery ML model
## Accomplishments that we're proud of The accomplishment that we are all proud of is the exposure we gained to all the new technologies we discovered and used this weekend. We had no idea how many AI tools Google offers. The exposure to new technologies, and taking the risk to step out of our comfort zone to learn and use them in such a short amount of time, is something we are all proud of.
## What we learned This entire project was new to all of us. We had never used Google Cloud in this manner before, only for Firestore. We were unfamiliar with Express, and working with machine learning was something only one of us had a small amount of experience with. We learned a lot about Google Cloud and how to access its APIs through Python and Node.js.
## What's next for Chamomile The hack is not as complete as we would like, since ideally there would be a machine learning component to confirm the guesses made by the substring matching, and more data to improve the Vertex AI model. Improving on this would be a great next step for this project. We would also add a more put-together UI to match the theme of this application.
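The matching step re-sketched in Python for clarity (the deployed version does this in Node.js): tokenize the OCR output and count hits against the HGNC symbol list, returning the highest-frequency matches first.

```python
# Illustrative re-sketch of the gene-symbol matching idea.
from collections import Counter
import re

def match_genes(ocr_text: str, gene_symbols: set[str]) -> list[tuple[str, int]]:
    tokens = re.findall(r"[A-Za-z0-9-]+", ocr_text.upper())
    hits = Counter(tok for tok in tokens if tok in gene_symbols)
    return hits.most_common()                 # highest-frequency matches first

symbols = {"BRCA1", "TP53", "CFTR"}           # in practice: the full HGNC download
print(match_genes("Variant detected in BRCA1; BRCA1 c.68_69delAG ...", symbols))
```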
## Inspiration The inspiration for this project came from UofTHacks' Restoration theme and Varient's project challenge. The initial idea was to detect a given gene mutation in a given genetic testing report. This is an extremely valuable asset for the medical community, given the current global situation with the COVID-19 pandemic. As we can already see, misinformation and distrust in the medical community continue to grow, so we must try to leverage technology to solve this ever-expanding problem. One way Geneticheck can restore public trust in the medical community is by bridging the gap between confusing medical reports and the average person's medical understanding.
## What it does Geneticheck is a smart software that allows a patient, or the parents of patients with rare diseases, to gather more information about their specific conditions and genetic mutations. The reports are scanned to find the gene mutation, and Geneticheck shows where the gene mutation is located on the original report. Geneticheck also provides the patient with more information regarding their gene mutation, specifically the associated diseases and phenotypes (or related symptoms) they may now have. Given a gene mutation, the software searches through the Human Phenotype Ontology database and auto-generates a PDF report that lists all the necessary information a patient will need following a genetic test (see the sketch below). The descriptions for each phenotype are given in layman-like language, which allows the patient to understand the symptoms associated with the gene mutation, resulting in patients and loved ones being more observant of their status.
## How we built it Geneticheck was built using Python and Google Cloud's Vision API. Other libraries were also explored, such as PyTesseract, however they yielded lower gene detection results.
## Challenges we ran into One major challenge was initially designing the project in Python. Python was chosen for its rapid R&D capabilities and the potential need to do image processing in OpenCV. As the project developed and the Google Cloud Vision API proved acceptable for use, moving to a web-based Python framework was deemed too time-consuming. In the interest of time, the Python-based command-line tool had to be selected as the current basis of interaction.
## Accomplishments that we're proud of One proud accomplishment of this project is the success rate of the overall algorithm, which was able to successfully detect all 47 gene mutations with their related image. The other great accomplishment was the quick development of the PDF generation software, which expanded the capabilities and scope of the project to provide the end user/patient with more information about their condition, ultimately restoring their faith in the medical field through better understanding and knowledge.
## What we learned Topics learned include OCR in Python, optimizing images for OCR with PyTesseract, PDF generation in Python, setting up Flask servers, and a lot about genetic data!
## What's next for Geneticheck The next steps include porting the working algorithms over to a web-based framework, such as React. Running the algorithms in JavaScript would allow for web-based interaction, which is the best interactive format for the everyday person. Other steps are to gather more genetic test results and to provide treatment options in the reports as well.
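A hedged sketch of the auto-generated report, using reportlab as a stand-in for the actual PDF code; the gene and phenotype strings are hard-coded here where the real tool would pull them from the HPO lookup.

```python
# Sketch only: write a one-page report listing plain-language phenotypes for a gene.
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def write_report(gene: str, phenotypes: list[str], path: str = "report.pdf"):
    pdf = canvas.Canvas(path, pagesize=letter)
    pdf.drawString(72, 740, f"Geneticheck report - gene mutation: {gene}")
    y = 710
    for name in phenotypes:
        pdf.drawString(90, y, f"- {name}")    # one plain-language symptom per line
        y -= 18
    pdf.save()

write_report("CFTR", ["Recurrent respiratory infections", "Salty-tasting skin"])
```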
## Inspiration
## What it does PhyloForest helps researchers and educators by improving how we see phylogenetic trees. Strong, useful data visualization is key to finding new discoveries and patterns. Thanks to our product, users have a greater ability to perceive the depth of trees, since we communicate widths rather than lengths. The distances between proteins are based on actual branch lengths scaled to size.
## How we built it We used EggNOG to get phylogenetic trees in Newick format, then parsed them using a recursive algorithm to get the differences within the protein group in question (see the sketch below). We connected names to IDs using the EBI (European Bioinformatics Institute) database, then took the lengths between the proteins and scaled them to size for our Unity environment. After we put together all this information, we went through an extensive integration process with Unity. We used EBI APIs for taxon information, and EggNOG gave us NCBI (National Center for Biotechnology Information) identities and structure. We could not use local NCBI lookup (as EggNOG does) due to the limitations of virtual reality headsets, so we used the EBI taxon lookup API instead to make the tree interactive and accurately reflect the taxon information of each species in question. Lastly, we added UI components to make the app easy to use for both educators and researchers.
## Challenges we ran into Parsing the EggNOG Newick tree was our first challenge because there was limited documentation and the data sets were very large. Therefore, it was difficult to debug results, especially with the Unity interface. We also had difficulty finding a database that could connect NCBI IDs to taxon information from our VR headset. We also had to implement a binary tree structure from scratch in C#. Lastly, we had difficulty scaling the orthologs horizontally in VR in a way that would preserve the true relationships between the species.
## Accomplishments that we're proud of The user experience is very clean and immersive, allowing anyone to visualize these orthologous groups. Furthermore, we think this occupies a unique space at the intersection of VR and genetics. Our features, such as depth and linearized length, could not be as cleanly implemented in a 2-dimensional model.
## What we learned We learned how to parse Newick trees, how to display a binary tree with branches dependent on certain lengths, and how to create a model that relates large amounts of data on base-pair differences in DNA sequences to something that highlights these differences in an innovative way.
## What's next for PhyloForest Making the UI more intuitive so that anyone would feel comfortable using it. We would also like to display more information when you click on each ortholog in a group. We want to expand the number of proteins people can select, and we would like to manipulate proteins by dragging branches to better identify patterns between orthologs.
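The recursive Newick parsing can be illustrated compactly; this is a Python re-sketch of the idea (the project implements it in C# inside Unity), and it assumes well-formed input with branch lengths.

```python
def parse_newick(s: str, i: int = 0):
    """Parse a Newick subtree starting at s[i]; return (node, next_index).
    A node is {"name": str, "length": float, "children": [...]}."""
    node = {"name": "", "length": 0.0, "children": []}
    if s[i] == "(":                            # internal node: recursively parse children
        while True:
            child, i = parse_newick(s, i + 1)  # skip '(' or ','
            node["children"].append(child)
            if s[i] == ")":
                i += 1
                break
    start = i                                  # optional label, then ":branch_length"
    while i < len(s) and s[i] not in ",():;":
        i += 1
    node["name"] = s[start:i]
    if i < len(s) and s[i] == ":":
        start = i = i + 1
        while i < len(s) and s[i] not in ",();":
            i += 1
        node["length"] = float(s[start:i])     # branch length drives the scaled width in VR
    return node, i

tree, _ = parse_newick("((A:0.12,B:0.08):0.05,C:0.2);")
print(tree["children"][0]["children"][0])      # {'name': 'A', 'length': 0.12, 'children': []}
```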
## Inspiration I'm taking a class called How To Make (Almost) Anything that goes through many aspects of digital fabrication and embedded systems. For the first assignment we had to design a model for our final project while trying out different modeling software. As a beginner, I decided to take the opportunity to learn more about Unity through this hackathon.
## What it does Plays like the classic 15-tile sliding puzzle game.
## How we built it I used Unity.
## Challenges we ran into Unity is difficult to navigate; there were a lot of hidden settings that made things not show up or scale. Since I'm not familiar with C# or Unity, I spent a lot of time learning about different methods and data structures. Referencing objects across the different scripts and attributes is not obvious, and I ran into a lot of those kinds of issues.
## Accomplishments that we're proud of About 60% functional.
## What's next for 15tile puzzle game Making it 100% functional.
## Inspiration We wanted to create something that helped other people. We had so many ideas, yet couldn't stick to one. Luckily, we ended up talking to Phoebe(?) from Hardware, who talked about how using textiles would be great in a project. Something clicked, and we started brainstorming ideas. We ended up with this project, which could help a lot of people in need, including friends and family close to us.
## What it does It senses the orientation of your hand and outputs either a key press, a mouse move, or a mouse press. What it outputs is completely up to the user.
## How we built it We sewed a glove, attached a gyroscopic sensor, wired it to an Arduino Uno, and programmed it in C# and C++.
## Challenges we ran into Limited resources because certain hardware components were out of stock, time management (because of all the fun events!), and Arduino communication through the serial port.
## Accomplishments that we're proud of We all learned new skills, like sewing, coding in C++, and programming the Arduino to communicate with other languages, like C#. We're also proud of the fact that we actually fully completed our project, even though it's our first hackathon.
## What we learned ~~how 2 not sleep lolz~~ Sewing, coding, how to wire gyroscopes, sponsors, DisguisedToast winning Hack the North.
## What's next for this project We didn't get to add all the features we wanted, due to both hardware and time limitations. Some features we would like to add are the ability to save and load configs, automatic input setup, making it wireless, and adding a touch sensor to the glove.
## Inspiration I was inspired to make this device while sitting in physics class. I felt compelled to take something I learned inside the classroom and apply my education to something practical. Growing up, I always remember playing with magnetic kits and loved the feeling of repulsion between magnets.
## What it does There is a base layer of small magnets all taped together so the north pole faces up. Hall effect sensors measure the variations in the magnetic field created by the user's magnet attached to their finger. This allows the device to track the user's finger and determine how they are interacting with the magnetic field pointing up.
## How I built it It is built using the Intel Edison. Each hall effect sensor is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate, which determines where the user's finger is. From there, the analog data is sent via serial port to a Processing program on the computer that demonstrates that it works; it just takes the data and maps the motion of the object.
## Challenges I ran into There were many challenges I faced, two of which dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which means they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would have allowed the device to be more accurate. The other challenge was dealing with a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together, because they are in an unstable configuration for creating an almost uniform magnetic field on the base. Another problem I ran into was the Edison itself: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up dealing with serial ports; I initially tried mapping it into a Unity game.
## Accomplishments that I'm proud of I am proud of creating a hardware hack that I believe is practical. I used this device to prove the concept of creating a more interactive environment for the user with a sense of touch, rather than things like the Kinect and Leap Motion that track your motion in thin air without any real interaction. Some areas where this concept can be useful are learning environments, or helping people in physical therapy learn to do things again after a tragedy, since it is always better to learn with a sense of touch.
## What I learned I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at a machine level.
## What's next for Tactile Leap Motion Creating a better prototype with better hardware (stronger magnets and more accurate sensors).
## What it does Khaledifier replaces all quotes and images around the internet with pictures and quotes of DJ Khaled!
## How we built it A Chrome web app written in JS interacts with live web pages to make changes. The app sends a quote to a server which tokenizes words into types using NLP. This server then makes a call to an Azure Machine Learning API that has been trained on DJ Khaled quotes to return the closest matching one.
## Challenges we ran into Keeping the server running with older Python packages, and for free, proved to be a bit of a challenge.
## Inspiration Inspired by a team member's desire to study through his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist on any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app.
## What it does Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (it only needs to be a few seconds long) and a PDF of a textbook, and uses existing deepfake technology to synthesize the dictation from the textbook using the user's favorite voice. The deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time domain via the WaveRNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). The user then automatically downloads the .WAV file of his/her favorite voice reading the PDF contents!
## How we built it We combined a number of different APIs and technologies to build this app (see the sketch at the end of this write-up). For scalable machine learning and intelligence compute, we relied heavily on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing deepfake code written for Python and TensorFlow (see the Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating.
## Challenges we ran into Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front-end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing deepfake/voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning.
## Accomplishments that we're proud of We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into a usable app within hours.
## What we learned We learned today that sometimes the seemingly simplest things (dealing with Python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful.
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products. ## What's next for EduVoicer EduVoicer still has a long way to go before it can gain users. Our first next step is to implement functionality, possibly with some image segmentation techniques, to decide which parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan to both increase efficiency (time-wise) and scale the app by splitting PDFs into fragments, processing them in parallel, and returning the output to the user after collating the individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
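A bare-bones sketch of the web server's glue code; the OCR and voice-cloning calls are placeholders standing in for the Google Cloud and Real-Time-Voice-Cloning pieces described above.

```python
# Sketch only: Flask endpoint that accepts a voice clip and a PDF, returns a WAV.
from flask import Flask, request, send_file

app = Flask(__name__)

def pdf_to_text(pdf_bytes: bytes) -> str:
    return "placeholder text"                 # stand-in for the Cloud PDF-to-text call

def clone_and_speak(voice_sample: bytes, text: str, out_path: str) -> str:
    with open(out_path, "wb") as f:           # stand-in for encoder -> synthesizer -> vocoder
        f.write(b"RIFF....WAVE")              # placeholder bytes, not a real WAV
    return out_path

@app.route("/synthesize", methods=["POST"])
def synthesize():
    voice = request.files["voice"].read()     # few-second template voice clip
    text = pdf_to_text(request.files["pdf"].read())
    wav_path = clone_and_speak(voice, text, "/tmp/output.wav")
    return send_file(wav_path, mimetype="audio/wav")

if __name__ == "__main__":
    app.run(port=8080)
```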
## Inspiration What if you could automate one of the most creative performances that combine music and spoken word? Everyone's watched those viral videos of insanely talented rappers online, but what if you could get that level of skill? Enter **ghostwriter**, freestyling reimagined.
## What it does **ghostwriter** allows you to skip through pre-selected beats; it will then listen to your bars, suggesting possible rhymes to help you freestyle. With the 'record' option, you can listen back to your freestyles and even upload them to share with your friends and listen to your friends' freestyles.
## How we built it In order to build **ghostwriter** we used Google Cloud services for speech-to-text transcription, the Cohere API for rhyming suggestions, Socket.io for real-time communication between frontend and backend, Express.js for the backend, and the CockroachDB distributed SQL database to store the transcriptions as well as the audio files. We used React for the frontend and styled with the Material UI library.
## Challenges we ran into We had some challenges detecting where the end of a bar might be, as different rhyming schemes and flows have varying pauses. Instead, we decided to display rhyming suggestions for each word, as the user then has the freedom to determine when they want to end one bar and start another. Another issue was the latency of the API calls: we had to make sure the data was retrieved in time for the user to think of another bar. Finally, we also had some trouble using audio media players to record the user's freestyle along with the background music; however, we were able to find a solution in the end.
## Accomplishments that we're proud of We are really proud to say that what we created during the past 36 hours meets its intended purpose. We were able to put all the components of this project in motion so the software could successfully hear our words and generate rhyming suggestions in time for the user to think of another line and continue their freestyle. Additionally, using technologies that were new to us and coding away until we reached our goal expanded our technological expertise.
## What we learned We learned how to use React and move the text around to match our desired styling. Next, we learned how to interact with numerous APIs (including Cohere's) in order to get the data we want, organized in the way that is most efficient for us to display to the user. Finally, we learned how to freestyle a bit better ourselves.
## What's next for Ghostwriter For **ghostwriter**, we aim to curate freestyle beats more carefully and to build a social community to highlight the most fire freestyles. Our goal is to turn today's rappers into tomorrow's Hip-Hop legends!
## Why we made Time Capsule Traditional physical photo albums & time capsules are not easily accessible or sharable and are limited in storage capabilities. And while cloud-based photo album services offer remote access, collaborative sharing, and automatic backup, you are not in full control of your photos, there is often a subscription cost, and there is a risk of deletion.
## What it does Time\_capsule.tech is a blockchain-based **photo album** that employs smart contracts to function as a **virtual time capsule** for each image. By storing and encrypting your photos on an *Interplanetary File System* (IPFS) 🪐🌌, it greatly minimises the risk of data loss while adding **unparalleled security, permanence, and control of your own memories**. 📷
## How we built it While similar to Next.js, the front end was built with **Starknet.js**, a frontend library for easy integration with Starknet custom hooks and components. **Cairo**, with intermediary **Sierra**, was used for the deployment of contracts both locally and remotely on IDEs such as Remix. Finally, to ensure that images remained decentralized, we strived to use an **IPFS** system to host our images. And also *a lot* of dedication. 🔥
## Accomplishments that we're proud of
* Setting up a local devnet for deploying contracts
* Understanding the file structure of Starknet.js
* Trying most of the outdated tech for IPFS
## What we learned / Challenges we ran into We learned about blockchain, specifically smart contracts and their use cases. On a technical level, we learned about Cairo development, standards for ERC20 contracts, and differences in Starknet.js. On a more practical level, each member brought unique skills and perspectives to the table, fostering a fun and constructive environment. Our collective efforts resulted in an overall successful outcome as well as a positive and enjoyable working experience.
## What's next for Time Capsule
* A more thorough implementation of DevOps tools such as Vercel for branch deployment as well as GitHub Actions for functional testing
* 3-D visualisation of photos with libraries such as three.js or CSS animations
* Incorporate other Ethereum branches onto the network
* Sleep 🛌, gaming 🖥️ 🎮
Overall, it was a great time for all and it was a pleasure attending this year’s event.
## Inspiration According to Statistics Canada, nearly 48,000 children are living in foster care. In the United States, there are ten times as many. Teenagers aged 14-17 are the most at risk of aging out of the system without being adopted. Many choose to opt-out when they turn 18. At that age, most youths like our team are equipped with a lifeline back to a parent or relative. However, without the benefit of a stable and supportive home, fostered youths, after emancipation, lack the consistent security for their documents, tacit guidance for practical tasks, and moral aid in building meaningful relationships through life’s ups and downs. Despite the success possible during foster care, there is overwhelming evidence that shows how our conventional system alone inherently cannot guarantee the necessary support to bridge a foster youth’s path into adulthood once they exit the system. ## What it does A virtual, encrypted, and decentralized safe for essential records. There is a built-in scanner function and a resource of contacts who can mentor and aid the user. Alerts can prompt the user to tasks such as booking the annual doctors' appointments and tell them, for example, about openings for suitable housing and jobs. Youth in foster care can start using the app at age 14 and slowly build a foundation well before they plan for emancipation. ## How we built it The essential decentralized component of this application, which stores images on an encrypted blockchain, was built on the Internet Computer Protocol (ICP) using Node JS and Azle. Node JS and React were also used to build our user-facing component. Encryption and Decryption was done using CryptoJS. ## Challenges we ran into ICP turned out to be very difficult to work with - attempting to publish the app to a local but discoverable device was nearly impossible. Apart from that, working with such a novel technology through an unfamiliar library caused many small yet significant mistakes that we wouldn't be able to resolve without the help of ICP mentors. There were many features we worked on that were put aside to prioritize, first and foremost, the security of the users' sensitive documents. ## Accomplishments that we're proud of Since this was the first time any of us worked on blockchain, having a working application make use of such a technology was very satisfying. Some of us also worked with react and front-end for the first time, and others worked with package managers like npm for the first time as well. Apart from the hard skills developed throughout the hackathon, we're also proud of how we distributed the tasks amongst ourselves, allowing us to stay (mostly) busy without overworking anyone. ## What we learned As it turns out, making a blockchain application is easier than expected! The code was straightforward and ICP's tutorials were easy to follow. Instead, we spent most of our time wrangling with our coding environment, and this experience gave us a lot of insight into computer networks, blockchain organization, CORS, and methods of accessing blockchain applications through code run in standard web apps like React. ## What's next for MirrorPort Since the conception of MirrorPort, it has always been planned to become a safe place for marginalized youths. Often, they would also lose contact with adults who have mentored or housed them. This app will provide this information to the user, with the consent of the mentor. 
Additionally, alerts will be implemented to prompt the user to tasks such as booking annual doctors' appointments and to tell them, for example, about openings for suitable housing and jobs. It could also be a tool for tracking progress against their aspirations and providing tailored resources that map out a transition plan. We're looking to migrate the dApp to mobile for more accessibility and portability, and 2FA would be implemented for login security. Adding a document translation feature would also make the dApp work well with immigrant documents across borders.
## Inspiration
It's Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy, store, and collect dust inspired us to develop Lendit, a product that aims to stem the growing waste economy and generate passive income for users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with Raspberry Pi 3 (64-bit, 1 GB RAM, ARM64) boards and are connected to our app through Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API. For our app, we used Flutter/Dart and interfaced with Firebase. To ensure *trust*, which is core to borrowing and lending, we experimented with Ripple's API to create an escrow system.
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker! On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and the Raspberry Pi would work. Our app developer had previously worked only with web apps backed by SQL databases. NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky. With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to use its capabilities over an internet connection that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components, like the Raspberry Pi, Flutter, the XRP Ledger escrow, and Firebase, which all have their own independent frameworks. Integrating them all into an end-to-end automated system for our users is the accomplishment we are proudest of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and built a solid proof of concept, giving us the confidence that we can do a city-wide launch of the project in the near future. To see these ambitions come to life, we would have to improve our object detection and facial recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers, we believe our product Lendit will grow with us. We would be honoured if we could contribute in any way to reducing the growing waste economy.
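For a sense of the locker-side face check described in "How we built it", here is a minimal OpenCV sketch. It only detects a face using OpenCV's bundled Haar cascade; the team's actual recognition step (matching a detected face to a registered user) is not shown, and the camera index and parameters are assumptions.

```python
# Minimal sketch of the locker-side face check, assuming OpenCV's bundled
# Haar cascade. Detection only; the project's actual recognition pipeline
# (identifying *which* user is at the locker) may differ.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # Pi camera / webcam
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) in frame")   # unlock logic would hang off this
```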
## Inspiration
The Canadian winter's erratic bouts of chilling cold cause people who have to be outside for extended periods of time (like avid dog walkers) to suffer from frozen fingers. The current method of warming up your hands, hot pouches that don't last very long, is inadequate in our opinion. Our goal was to make something that kept your hands warm and *also* let you vent your frustrations at the terrible weather.
## What it does
**The Screamathon3300** heats up the user's hand based on the intensity of their **SCREAM**. It interfaces an *analog electret microphone*, *LCD screen*, and *thermoelectric plate* with an *Arduino*. The Arduino continuously monitors the microphone for changes in volume intensity. When an increase in volume occurs, it triggers a relay, which supplies 9 volts, at a relatively large amperage, to the thermoelectric plate embedded in the glove, thereby heating the glove. Simultaneously, the Arduino displays an encouraging prompt on the LCD screen based on the volume of the scream.
## How we built it
The majority of the design process was centered around the use of the thermoelectric plate. Some research and quick experimentation helped us conclude that the thermoelectric plate's heat output depends on the amount of supplied current. This realization led us to use two separate power supplies: a 5 volt supply from the Arduino for the LCD screen, electret microphone, and associated components, and a 9 volt supply solely for the thermoelectric plate. Both circuits were connected through a relay (driven by an Arduino output) which controlled the connection between the 9 volt supply and the thermoelectric load. This design decision provided electrical isolation between the two circuits, which is much safer than sharing sources and ground when 9 volts and large currents are involved with an Arduino and its components. Safety features directed the rest of our design process, like the inclusion of a kill switch which immediately cuts power to the thermoelectric load, even if the user continues to scream. Furthermore, a potentiometer placed in parallel with the thermoelectric load gives control over how quickly the heating occurs, as it limits the current flowing to the load.
## Challenges we ran into
We tried to implement a feedback loop with ambient temperature sensors, but even with large temperature changes at the plate, the sensors registered only very small changes. Our goal of offering an optional, non-scream-controlled mode ultimately failed because we never got that sensor feedback system working. Since we did not own components such as the microphone, relay, or battery pack, we could not solder many connections, so we could not make a permanent build.
## Accomplishments that we're proud of
We're proud of using a unique transducer (thermoelectric plate) with an uncommon trigger (current instead of voltage level), which forced us to design with added safety considerations in mind. Our design was also constructed entirely of sustainable materials, other than the electronics. We also achieved a seamless integration of analog and digital signals in the circuit (baby mixed signal processing).
## What we learned
We had very little prior experience interfacing thermoelectric plates with an Arduino. We learned to effectively leverage analog signal inputs to reliably trigger our desired system output, as well as manage physical device space restrictions (for it to be wearable).
## What's next for Screamathon 3300
We love the idea of people having to scream continuously to get a job done, so we will expand our line of *Scream* devices, such as the scream-controlled projectile launcher, scream-controlled coffee maker, and scream-controlled alarm clock. Stay screamed-in!
## Inspiration Noise sensitivity is common in autism, but it can also affect individuals without autism. Research shows that 50 to 70 percent of people with autism experience hypersensitivity to everyday sounds. This inspired us to create a wearable device to help individuals with heightened sensory sensitivities manage noise pollution. Our goal is to provide a dynamic solution that adapts to changing sound environments, offering a more comfortable and controlled auditory experience. ## What it does SoundShield is a wearable device that adapts to noisy environments by automatically adjusting calming background audio and applying noise reduction. It helps individuals with sensory sensitivities block out overwhelming sounds while keeping them connected to their surroundings. The device also alerts users if someone is behind them, enhancing both awareness and comfort. It filters out unwanted noise using real-time audio processing and only plays calming music if the noise level becomes too high. If it detects a person speaking or if the noise is low enough to be important, such as human speech, it doesn't apply filters or background music. ## How we built it We developed SoundShield using a combination of real-time audio processing and computer vision, integrated with a Raspberry Pi Zero, a headphone, and a camera. The system continuously monitors ambient sound levels and dynamically adjusts music accordingly. It filters noise based on amplitude and frequency, applying noise reduction techniques such as Spectral Subtraction, Dynamic Range Compression to ensure users only hear filtered audio. The system plays calming background music when noise levels become overwhelming. If the detected noise is low, such as human speech, it leaves the sound unfiltered. Additionally, if a person is detected behind the user and the sound amplitude is high, the system alerts the user, ensuring they are aware of their surroundings. ## Challenges we ran into Processing audio in real-time while distinguishing sounds based on frequency was a significant challenge, especially with the limited computing power of the Raspberry Pi Zero. Additionally, building the hardware and integrating it with the software posed difficulties, especially when ensuring smooth, real-time performance across audio and computer vision tasks. ## Accomplishments that we're proud of We successfully integrated computer vision, audio processing, and hardware components into a functional prototype. Our device provides a real-world solution, offering a personalized and seamless sensory experience for individuals with heightened sensitivities. We are especially proud of how the system dynamically adapts to both auditory and visual stimuli. ## What we learned We learned about the complexities of real-time audio processing and how difficult it can be to distinguish between different sounds based on frequency. We also gained valuable experience in integrating audio processing with computer vision on a resource-constrained device like the Raspberry Pi Zero. Most importantly, we deepened our understanding of the sensory challenges faced by individuals with autism and how technology can be tailored to assist them. ## What's next for SoundSheild We plan to add a heart rate sensor to detect when the user is becoming stressed, which would increase the noise reduction score and automatically play calming music. Additionally, we want to improve the system's processing power and enhance its ability to distinguish between human speech and other noises. 
We're also researching specific frequencies that can help differentiate between meaningful sounds, like human speech, and unwanted noise to further refine the user experience.
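To make the pipeline above more concrete, here is a hedged Python sketch of the amplitude gate plus spectral subtraction step. The threshold, frame size, and noise-profile handling are illustrative assumptions rather than the values used on the actual device.

```python
# Sketch of the amplitude check + spectral subtraction described above.
# Assumptions: mono float32 audio frames from the mic, and a noise profile
# estimated from a short "quiet" capture. Parameters are illustrative.
import numpy as np

LOUD_THRESHOLD = 0.1          # RMS above this -> filter and cue calming audio

def rms(frame: np.ndarray) -> float:
    return float(np.sqrt(np.mean(frame ** 2)))

def spectral_subtract(frame: np.ndarray, noise_mag: np.ndarray) -> np.ndarray:
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - noise_mag, 0.0)       # subtract the noise estimate
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))

def process(frame: np.ndarray, noise_mag: np.ndarray) -> np.ndarray:
    if rms(frame) < LOUD_THRESHOLD:
        return frame                                 # quiet / likely speech: pass through
    return spectral_subtract(frame, noise_mag)       # loud: filter (and start music)

# noise_mag would come from averaging |rfft| over a second of ambient sound.
noise = (np.random.randn(1024) * 0.01).astype(np.float32)
noise_mag = np.abs(np.fft.rfft(noise))
print(process(noise, noise_mag).shape)
```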
## Inspiration Our goal as a team was to use emerging technologies to promote a healthy living through exclaiming the importance of running. The app would connect to the user’s strava account track running distance and routes that the user runs and incentivise this healthy living by rewarding the users with digital artwork for their runs. There is a massive runners’ community on strava where users regularly share their running stats as well as route maps. Some enthusiasts also try to create patterns with their maps as shown in this reddit thread: <https://www.reddit.com/r/STRAVAart/>. Our app takes this route image and generates an NFT for the user to keep or sell on OpenSea. ## What it does ruNFT’s main goal is to promote a healthy lifestyle by incentivizing running. The app connects to users’ Strava accounts and obtains their activity history. Our app uses this data to create an image of each run on a map and generates an NFT for the user. Additionally, this app builds a community of health enthusiasts that can view and buy each other’s NFT map collections. There are also daily, weekly, and all time leaderboards that showcase stats of the top performers on the app. Our goal is to use this leaderboard to derive value for the NFTs as users with the best stats will receive rarer, more valuable tokens. Overall, this app serves as a platform for runners to share their stats, earn tokens for living a healthy lifestyle, and connect with other running enthusiasts around the world. With the growing interest of NFTs booming in the blockchain market with many new individuals taking interest in collecting NFTs, runners can now use our app to create and access their NFTs while using it as motivation to improve their physical health. ## How we built it The front-end was developed using flutter. Initial sketches of how the user interface would look was conceptualized in photoshop where we decided on the color-scheme and the layout. We took these designs to flutter using some online tutorials as well as some acquired tips from the DeltaHacks Flutter workshop. Most of the main components in the front-end were buttons and a header for navigation as well as a form for some submissions regarding minting. The backend was hosted on Heroku and consisted of manipulating and providing data to/from the Strava API, our MongoDB database, in which we used express to serve the data. We also integrated the ability to automate the minting process in the backend by using web.js and the alchemy api. We simply initiate the mintNFT method from on smart contract, while passing a destination wallet address, this is how our users are able to view and receive their minted strava activities. ## Challenges we ran into One of the biggest challenges we ran into was merge conflicts. While GitHub makes it very easy to share and develop code with a group of people, it became hard to distribute who was coding what, oftentimes creating merge conflicts. At many times, this obstacle would take away from our precious time so our solution was to use a scrum process where we had a sprint to develop a certain feature for 2 hour sprints and meeting after using discord to keep ourselves organized. Other challenges that we faced included production challenges with Rinkeby TestNet where its servers were down for hours into Saturday halting our production significantly, however, we overcame that challenge by developing creative ways in the local environment to test our features. 
Finally, working with flutter being new to us was a challenge of its own, however, it became very annoying when implementing some of the backend features from the Strava API. ## Accomplishments that we're proud of We are really proud of how we used the emerging popularity of NFTs to promote running as now the users will have an incentive to go running and practise a healthier lifestyle essentially giving running a value. We are also really proud of learning flutter and other technologies we used for development that we were not really familiar with. As emerging software engineers, we understand that it will be very important to keep up with new software languages, technologies and methodologies, and this weekend, from what we accomplished by building an app using something none of us knew proves we can continue to adapt and grow as developers. ## What we learned The biggest point of learning for us was how to use flutter for mobile app development since none of us had used flutter before. We were able to do research and learn how the flutter environment works and how it can make it really easy to create apps. With our group's growing interest in NFTs and the NFT market we also learnt a few important things when it comes to creating NFTs and managing them and also what gives NFTs or digital artwork value. ## What's next for ruNFT There are many features that we would like to continue developing in the interface of the app itself. We believe that there is so much more that the app can do for the user. One of the primary motives we have is to create a page that allows the user to see their own collection from within the app as well as a feature such as a blog where stories of running and the experiences of the users can be posted like a feed. Since the app is focused around NFTs, we want to set up a place where NFTs can be sold and bought from within the app using current blockchain technologies and secure transactions. This can make it easier for newer users to operate selling and buying of NFTs easily and do not need to access other resources for this. All in all, we are proud of what we have accomplished and with the constant changes in the markets and blockchain technologies, there are so many more new things that will come for us to implement.
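As a small illustration of the Strava side of the backend, the sketch below pulls recent activities (including each route's encoded polyline) from the REST API. Rendering the polyline into the map image and minting the NFT are omitted, and the access-token handling is a placeholder.

```python
# Sketch of pulling recent activities (with route polylines) from Strava,
# assuming an OAuth access token has already been obtained. The `polyline`
# package can decode summary_polyline into lat/lng points for rendering.
import requests

ACCESS_TOKEN = "..."   # hypothetical placeholder

resp = requests.get(
    "https://www.strava.com/api/v3/athlete/activities",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"per_page": 10},
)
resp.raise_for_status()

for activity in resp.json():
    route = activity.get("map", {}).get("summary_polyline")
    print(activity["name"], activity["distance"], "m | has route:", bool(route))
```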
## 💡 INSPIRATION 💡 Many students have **poor spending habits** and losing track of one's finances may cause **unnecessary stress**. As university students ourselves, we're often plagued with financial struggles. As young adults down on our luck, we often look to open up a credit card or take out student loans to help support ourselves. However, we're deterred from loans because they normally involve phoning automatic call centers which are robotic and impersonal. We also don't know why or what to do when we've been rejected from loans. Many of us weren't taught how to plan our finances properly and we frequently find it difficult to keep track of our spending habits. To address this problem troubling our generation, we decided to create AvaAssist! The goal of the app is to **provide a welcoming place where you can seek financial advice and plan for your future.** ## ⚙️ WHAT IT DOES ⚙️ **AvaAssist is a financial advisor built to support young adults and students.** Ava can provide loan evaluation, financial planning, and monthly spending breakdowns. If you ever need banking advice, Ava's got your back! ## 🔎RESEARCH🔍 ### 🧠UX Research🧠 To discover the pain points of existing banking practices, we interviewed 2 and surveyed 7 participants on their current customer experience and behaviors. The results guided us in defining a major problem area and the insights collected contributed to discovering our final solution. ### 💸Loan Research💸 To properly predict whether a loan would be approved or not, we researched what goes into the loan approval process. The resulting research guided us towards ensuring that each loan was profitable and didn't take on too much risk for the bank. ## 🛠️ HOW WE BUILT IT🛠️ ### ✏️UI/UX Design✏️ ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911782991204876348/Loan_Amount.gif) Figma was used to create a design prototype. The prototype was designed in accordance with Voice UI (VUI) design principles & Material design as a base. This expedited us to the next stage of development because the programmers had visual guidance in developing the app. With the use of Dasha.AI, we were able to create an intuitive user experience in supporting customers through natural dialog via the chatbot, and a friendly interface with the use of an AR avatar. Check out our figma [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=206%3A3694&scaling=min-zoom&page-id=206%3A3644&starting-point-node-id=206%3A3694&show-proto-sidebar=1) Check out our presentation [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=61%3A250&scaling=min-zoom&page-id=2%3A2) ### 📈Predictive Modeling📈 The final iteration of each model has a **test prediction accuracy of +85%!** ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911592566829486120/unknown.png) We only got to this point because of our due diligence, preprocessing, and feature engineering. After coming up with our project, we began thinking about and researching HOW banks evaluate loans. Loan evaluation at banks is extremely complex and we tried to capture some aspects of it in our model. We came up with one major aspect to focus on during preprocessing and while searching for our datasets, profitability. There would be no point for banks to take on a loan if it weren't profitable. We found a couple of databases with credit card and loan data on Kaggle. The datasets were smaller than desired. 
We had to be very careful during preprocessing when deciding what data to remove and how to fill NULL values to preserve as much data as possible. Feature engineering was certainly the most painstaking part of building the prediction model. One of the most important features we added was the Risk Free Rate (CORRA). The Risk Free Rate is the rate of return of an investment with no risk of loss. It helped with the engineering process of another feature, min\_loan, which is the minimum amount of money that the bank can make with no risk of loss. Min\_loan would ultimately help our model understand which loans are profitable and which aren't. As a result, the model learned to decline unprofitable loans. ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911981729168887948/unknown.png) We also did market research on the average interest rate of specific types of loans to make assumptions about certain features to supplement our lack of data. For example, we used the average credit card loan interest rate of 22%. The culmination of newly engineered features and the already existing data resulted in our complex, high accuracy models. We have a model for Conventional Loans, Credit Card Loans, and Student Loans. The model we used was RandomForests from sklearn because of its wide variety of hyperparameters and robustness. It was fine-tuned using gridsearchCV to find its best hyperparameters. We designed a pipeline for each model using Pipeline, OneHotEncoder, StandardScaler, FunctionTransformer, GradientBoostingClassifier, and RandomForestClassifier from sklearn. Finally, the models were saved as pickle files for front-end deployment. ### 🚀Frontend Deployment🚀 Working on the frontend was a very big challenge. Since we didn't have a dedicated or experienced frontend developer, there was a lot of work and learning to be done. Additionally, a lot of ideas had to be cut from our final product as well. First, we had to design the frontend with React Native, using our UI/UX Designer's layout. For this we decided to use Figma, and we were able to dynamically update our design to keep up with any changes that were made. Next, we decided to tackle hooking up the machine learning models to React with Flask. Having Typescript communicate with Python was difficult. Thanks to these libraries and a lot of work, we were able to route requests from the frontend to the backend, and vice versa. This way, we could send the values that our user inputs on the frontend to be processed by the ML models, and have them give an accurate result. Finally, we took on the challenge of learning how to use Dasha.AI and integrating it with the frontend. Learning how to use DashaScript (Dasha.AI's custom programming language) took time, but eventually, we started getting the hang of it, and everything was looking good! ## 😣 CHALLENGES WE RAN INTO 😣 * Our teammate, Abdullah, who is no longer on our team, had family issues come up and was no longer able to attend HackWestern unfortunately. This forced us to get creative when deciding a plan of action to execute our ambitious project. We needed to **redistribute roles, change schedules, look for a new teammate, but most importantly, learn EVEN MORE NEW SKILLS and adapt our project to our changing team.** As a team, we had to go through our ideation phase again to decide what would and wouldn't be viable for our project. We ultimately decided to not use Dialogflow for our project. 
However, this was a blessing in disguise because it allowed us to hone in on other aspects of our project such as finding good data to enhance user experience and designing a user interface for our target market. * The programmers had to learn DashaScript on the fly which was a challenge as we normally code with OOP’s. But, with help from mentors and workshops, we were able to understand the language and implement it into our project * Combining the frontend and backend processes proved to be very troublesome because the chatbot needed to get user data and relay it to the model. We eventually used react-native to store the inputs across instances/files. * The entire team has very little experience and understanding of the finance world, it was both difficult and fun to research different financial models that banks use to evaluate loans. * We had initial problems designing a UI centered around a chatbot/machine learning model because we couldn't figure out a user flow that incorporated all of our desired UX aspects. * Finding good data to train the prediction models off of was very tedious, even though there are some Kaggle datasets there were few to none that were large enough for our purposes. The majority of the datasets were missing information and good datasets were hidden behind paywalls. It was for this reason that couldn't make a predictive model for mortgages. To overcome this, I had to combine datasets/feature engineer to get a useable dataset. ## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉 * Our time management was impeccable, we are all very proud of ourselves since we were able to build an entire app with a chat bot and prediction system within 36 hours * Organization within the team was perfect, we were all able to contribute and help each other when needed; ex. the UX/UI design in figma paved the way for our front end developer * Super proud of how we were able to overcome missing a teammate and build an amazing project! * We are happy to empower people during their financial journey and provide them with a welcoming source to gain new financial skills and knowledge * Learning and implementing DashaAi was a BLAST and we're proud that we could learn this new and very useful technology. We couldn't have done it without mentor help, 📣shout out to Arthur and Sreekaran📣 for providing us with such great support. * This was a SUPER amazing project! We're all proud to have done it in such a short period of time, everyone is new to the hackathon scene and are still eager to learn new technologies ## 📚 WHAT WE LEARNED 📚 * DashaAi is a brand new technology we learned from the DashaAi workshop. We wanted to try and implement it in our project. We needed a handful of mentor sessions to figure out how to respond to inputs properly, but we're happy we learned it! * React-native is a framework our team utilized to its fullest, but it had its learning curve. We learned how to make asynchronous calls to integrate our backend with our frontend. * Understanding how to take the work of the UX/UI designer and apply it dynamically was important because of the numerous design changes we had throughout the weekend. * How to use REST APIs to predict an output with flask using the models we designed was an amazing skill that we learned * We were super happy that we took the time to learn Expo-cli because of how efficient it is, we could check how our mobile app would look on our phones immediately. 
* First time using AR models in Animaze, it took some time to understand, but it ultimately proved to be a great tool! ## ⏭️WHAT'S NEXT FOR AvaAssist⏭️ AvaAssist has a lot to do before it can be deployed as a genuine app. It will only be successful if the customer is satisfied and benefits from using it, otherwise, it will be a failure. Our next steps are to implement more features for the user experience. For starters, we want to implement Dialogflow back into our idea. Dialogflow would be able to understand the intent behind conversations and the messages it exchanges with the user. The long-term prospect of this would be that we could implement more functions for Ava. In the future Ava could be making investments for the user, moving money between personal bank accounts, setting up automatic/making payments, and much more. Finally, we also hope to create more tabs within the AvaAssist app where the user can see their bank account history and its breakdown, user spending over time, and a financial planner where users can set intervals to put aside/invest their money. ## 🎁 ABOUT THE TEAM🎁 Yifan is a 3rd year interactive design student at Sheridan College, currently interning at SAP. With experience in designing for social startups and B2B software, she is interested in expanding her repertoire in designing for emerging technologies and healthcare. You can connect with her at her [LinkedIn](https://www.linkedin.com/in/yifan-design/) or view her [Portfolio](https://yifan.design/) Alan is a 2nd year computer science student at the University of Calgary. He's has a wide variety of technical skills in frontend and backend development! Moreover, he has a strong passion for both data science and app development. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/alanayy/) Matthew is a 2nd year student at Simon Fraser University studying computer science. He has formal training in data science. He's interested in learning new and honing his current frontend skills/technologies. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/) **📣📣 SHOUT OUT TO ABDULLAH FOR HELPING US THROUGH IDEATION📣📣** You can still connect with Abdullah at his [LinkedIn](https://www.linkedin.com/in/abdullah-sahapdeen/) He's super passionate about reactJS and wants to learn more about machine learning and AI! ### 🥳🎉 THANK YOU UW FOR HOSTING HACKWESTERN🥳🎉
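Circling back to the 📈Predictive Modeling📈 section, here is a condensed sketch of that kind of sklearn pipeline: preprocessing, a RandomForest classifier tuned with GridSearchCV, and a pickled model for the Flask API to load. The dataset file, column names, and parameter grid are illustrative assumptions, not the project's exact setup.

```python
# Condensed sketch of a loan-approval pipeline in the spirit of the
# Predictive Modeling section. File name, columns, and grid are assumptions.
import pickle
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("credit_card_loans.csv")          # hypothetical dataset file
X, y = df.drop(columns=["approved"]), df["approved"]

numeric = ["income", "loan_amount", "min_loan"]    # min_loan as engineered above
categorical = ["employment_type"]

pipe = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("clf", RandomForestClassifier(random_state=0)),
])

search = GridSearchCV(pipe, {"clf__n_estimators": [100, 300],
                             "clf__max_depth": [None, 10]}, cv=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
search.fit(X_tr, y_tr)
print("test accuracy:", search.score(X_te, y_te))

with open("loan_model.pkl", "wb") as f:            # later loaded by the Flask API
    pickle.dump(search.best_estimator_, f)
```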
## Inspiration Our inspiration came from the annoying amount of times we have had to take out a calculator after a meal with friends and figure out how much to pay each other, make sure we have a common payment method (Venmo, Zelle), and remember if we paid each other back or not a week later. So to answer this question we came up with a Split that can easily divide our expenses for us, and organize the amount we owe a friend, and payments without having a common platform at all in one. ## What it does This application allows someone to put in a value that someone owes them or they owe someone and organize it. During this implementation of a due to someone, you can also split an entire amount with multiple individuals which will be reflected in the amount owed to each person. Additionally, you are able to clear your debts and make payments through the built-in Checkbook service that allows you to pay just given their name, phone number, and value amount. ## How we built it We built this project using html, css, python, and SQL implemented with Flask. Alongside using these different languages we utilized the Checkbook API to streamline the payment process. ## Challenges we ran into Some challenges we ran into were, not knowing how to implement new parts of web development. We had difficulty implementing the API we used, “Checkbook” , using python into the backend of our website. We had no experience with APIs and so implementing this was a challenge that took some time to resolve. Another challenge that we ran into was coming up with different ideas that were more complex than we could design. During the brainstorming phase we had many ideas of what would be impactful projects but were left with the issue of not knowing how to put that into code, so brainstorming, planning, and getting an attainable solution down was another challenge. ## Accomplishments that we're proud of We were able to create a fully functioning, ready to use product with no prior experience with software engineering and very limited exposure to web dev. ## What we learned Some things we learned from this project were first that communication was the most important thing in the starting phase of this project. While brainstorming, we had different ideas that we would agree on, start, and then consider other ideas which led to a loss of time. After completing this project we found that communicating what we could do and committing to that idea would have been the most productive decision toward making a great project. To complement that, we also learned to play to our strengths in the building of this project. In addition, we learned about how to best structure databases in SQL to achieve our intended goals and we learned how to implement APIs. ## What's next for Split The next step for Split would be to move into a mobile application scene. Doing this would allow users to use this convenient application in the application instead of a browser. Right now the app is fully supported for a mobile phone screen and thus users on iPhone could also use the “save to HomeScreen” feature to utilize this effectively as an app while we create a dedicated app. Another feature that can be added to this application is bill scanning using a mobile camera to quickly split and organize payments. In addition, the app could be reframed as a social media with a messenger and friend system.
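A minimal sketch of what the core split endpoint could look like with Flask and SQL is shown below; the table layout and route shape are assumptions for illustration, and the Checkbook payment call is left out.

```python
# Minimal sketch of a split endpoint: divide a bill across friends and
# record who owes whom. Table/column names are assumptions, not the
# project's actual schema; the Checkbook API call is omitted.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "split.db"

@app.route("/split", methods=["POST"])
def split_bill():
    data = request.get_json()
    payer, friends, total = data["payer"], data["friends"], float(data["total"])
    share = round(total / (len(friends) + 1), 2)     # payer takes a share too

    with sqlite3.connect(DB) as conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS debts
                        (debtor TEXT, creditor TEXT, amount REAL)""")
        conn.executemany("INSERT INTO debts VALUES (?, ?, ?)",
                         [(friend, payer, share) for friend in friends])
    return jsonify({"each_owes": share})

if __name__ == "__main__":
    app.run(debug=True)
```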
## Inspiration Everyone on our team comes from a family of newcomers and just as it is difficult to come into a new country, we had to adapt very quickly to the Canadian system. Our team took this challenge as an opportunity to create something that our communities could deeply benefit from when they arrive in Canada. A product that adapts to them, instead of the other way around. With some insight from our parents, we were inspired to create this product that would help newcomers to Canada, Indigenous peoples, and modest income families. Wealthguide will be a helping hand for many people and for the future. ## What it does A finance program portal that provides interactive and accessible financial literacies to customers in marginalized communities improving their financially intelligence, discipline and overall, the Canadian economy 🪙. Along with these daily tips, users have access to brief video explanations of each daily tip with the ability to view them in multiple languages and subtitles. There will be short, quick easy plans to inform users with limited knowledge on the Canadian financial system or existing programs for marginalized communities. Marginalized groups can earn benefits for the program by completing plans and attempting short quiz assessments. Users can earn reward points ✨ that can be converted to ca$h credits for more support in their financial needs! ## How we built it The front end was built using React Native, an open-source UI software framework in combination with Expo to run the app on our mobile devices and present our demo. The programs were written in JavaScript to create the UI/UX interface/dynamics and CSS3 to style and customize the aesthetics. Figma, Canva and Notion were tools used in the ideation stages to create graphics, record brainstorms and document content. ## Challenges we ran into Designing and developing a product that can simplify the large topics under financial literacy, tools and benefits for users and customers while making it easy to digest and understand such information | We ran into the challenge of installing npm packages and libraries on our operating systems. However, with a lot of research and dedication, we as a team resolved the ‘Execution Policy” error that prevented expo from being installed on Windows OS | Trying to use the Modal function to enable pop-ups on the screen. There were YouTube videos of them online but they were very difficult to follow especially for a beginner | Small and merge errors prevented the app from running properly which delayed our demo completion. ## Accomplishments that we're proud of **Kemi** 😆 I am proud to have successfully implemented new UI/UX elements such as expandable and collapsible content and vertical and horizontal scrolling. **Tireni** 😎 One accomplishment I’m proud of is that despite being new to React Native, I was able to learn enough about it to make one of the pages on our app. **Ayesha** 😁 I used Figma to design some graphics of the product bringing the aesthetic to life! ## What we learned **Kemi** 😆 I learned the importance of financial literacy and responsibility and that FinTech is a powerful tool that can help improve financial struggles people may face, especially those in marginalized communities. **Tireni** 😎 I learned how to resolve the ‘Execution Policy” error that prevented expo from being installed on VS Code. **Ayesha** 😁 I learned how to use tools in Figma and applied it in the development of the UI/UX interface. 
## What's next for Wealthguide Newsletter Subscription 📰: Up to date information on current and today’s finance news. Opportunity for Wealthsimple product promotion as well as partnering with Wealthsimple companies, sponsors and organizations. Wealthsimple Channels & Tutorials 🎥: Knowledge is key. Learn more and have access to guided tutorials on how to properly file taxes, obtain a credit card with benefits, open up savings account, apply for mortgages, learn how to budget and more. Finance Calendar 📆: Get updates on programs, benefits, loans and new stocks including when they open during the year and the application deadlines. E.g OSAP Applications.
## Inspiration
Fully homomorphic computing is a hip new crypto trick that lets you compute on encrypted data. It's pretty wild, so I wanted to try something wild with it. FHE has been getting super fast - boolean operations now only take tens of milliseconds, down from minutes or hours just a few years ago. Most applications of FHE still focus on computing known functions on static data, but it's fast enough now to host a real language all on its own. The function I'm homomorphically evaluating is *eval*, and the data I'm operating on is code. "Brainfreeze" is what happens if you think about this too hard for too long.
## What it does
Brainfreeze is a fully-homomorphic runtime for the language [Brainfuck](https://en.wikipedia.org/wiki/Brainfuck).
## How I built it
I wrote Python bindings for the TFHE C library for fast FHE. TFHE only exposes boolean operations on single bits at a time, so I wrote a framework for assembling and evaluating virtual homomorphic circuits in Python. Then I wrote an ALU for simple 8-bit arithmetic, and a tiny CPU for dispatching on Brainfuck's 8 possible operations.
## Does it work?
No! I didn't have time to finish the entire instruction set - only moving the data pointer (< and >) and incrementing and decrementing the data (+ and -) work right now :-/. It turns out that computers are complicated and I don't remember as much of 6.004 as I thought I did.
## Could it work?
Definitely at small scales! But there are some severe limiting factors. FHE guarantees - mathematically - to leak **absolutely no** information about the data it's operating on, and that results in a sort of catastrophically exponential branching nightmare: the computer has to execute *every possible instruction on every possible memory address **during every single clock cycle***, because it can't tell which is the "real" data or the "real" instruction and which is just noise.
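To see why that branching blow-up happens, here is a plaintext mock of one "oblivious" clock cycle for the supported subset (>, <, +, -). Plain integers stand in for encrypted bits, and the multiplexer-style select stands in for homomorphic gates; it is a conceptual sketch, not the actual TFHE circuit code.

```python
# Plaintext mock of one oblivious Brainfuck clock cycle. Real FHE would use
# homomorphic gates on encrypted bits (e.g. via TFHE); here plain 0/1 ints
# stand in for ciphertexts to show the no-branching control flow.

OPS = ['>', '<', '+', '-']            # the subset the project supports

def select(selector_bits, candidates):
    """Multiplexer-style select: sum(sel_i * cand_i), never branching."""
    return sum(s * c for s, c in zip(selector_bits, candidates))

def step(op_onehot, ptr, mem):
    """Advance the machine one cycle without ever inspecting op_onehot."""
    # Candidate data pointers for every possible opcode.
    ptr_candidates = [(ptr + 1) % len(mem), (ptr - 1) % len(mem), ptr, ptr]
    new_ptr = select(op_onehot, ptr_candidates)

    # Every memory cell is recomputed every cycle: blend increment, decrement,
    # and no-op by whether the pointer addresses it and which op is selected.
    new_mem = []
    for i, cell in enumerate(mem):
        here = 1 if i == ptr else 0   # in real FHE this comparison is encrypted too
        inc = op_onehot[2] * here
        dec = op_onehot[3] * here
        new_mem.append((cell + inc - dec) % 256)
    return new_ptr, new_mem

# '+' as a one-hot selector over OPS, applied to a 4-cell tape.
ptr, mem = step([0, 0, 1, 0], 0, [0, 0, 0, 0])
print(ptr, mem)   # 0 [1, 0, 0, 0]
```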
## Inspiration We want to share the beauty of the [Curry-Howard isomorphism](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) and automated proof checking with beginning programmers. The concepts of Types and Formal Proofs are central to many aspects of computer science. ProofLang is to Agda the way Python is to C. We believe that the beauty of mathematical proofs and formal verification can be appreciated by more than CS theorists, when taught the right way. The best way to build this intuition is using visualizations, which is what this project aims to do. By presenting types as containers of variants, it allows a teacher to demonstrate the concept of type inhabitation, and why that is central to automated theorem proving. ## What it does ProofLang is a simplified, type-based, programming language. It also comes with an online interpreter and a real time visualization tool, which displays all the types in a way that solidifies the correct type of intuition about types (with regards to theorem proving and the [calculus of constructions](https://en.wikipedia.org/wiki/Calculus_of_constructions)), alongside the instantiations of the types, showing a ledger of evidence. ## How we built it We wrote ProofLang the programming language itself from the ground up based on the [calculus of constructions](https://en.wikipedia.org/wiki/Calculus_of_constructions), but simplified it enough for beginner audiences. The interpreter is written in Rust and complied down to WebAssembly which is imported as a Javascript library into our React frontend. ## Challenges we ran into We ran into challenges integrating WebAssembly with our React frontend. `web-pack` compiles our Rust code down into Javascript for Node.js rather than the Web JS that React uses. Since the interpreter is written in Rust, there was some fighting with the borrow-checker involved as well. ## Accomplishments that we're proud of We are proud of building our own interpreter! We also created a whole programming language which is pretty awesome. We even wrote a tiny parser combinator framework similar to [nom](https://docs.rs/nom/latest/nom/), since we could not figure out a few edge cases. ## What's next for ProofLang Support for function types, as well as type constructors that are not unit-like! Going forward, we would also like to add a visual programming aspect to it, where users can click and drag on a visual interface much like [Snap](https://snap.berkeley.edu/) to write code, which would make it even more accessible to beginner programmers and mathematicians.
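Since the interpreter's parsing layer was a hand-rolled parser-combinator framework (written in Rust, in the style of nom), here is the same idea sketched in Python for illustration; the toy grammar at the end is an assumption, not ProofLang's actual syntax.

```python
# Parser-combinator idea in miniature: a parser is a function that maps
# text -> (value, remaining_text) or None, and combinators compose parsers.

def literal(s):
    return lambda text: (s, text[len(s):]) if text.startswith(s) else None

def seq(p1, p2):
    def parser(text):
        r1 = p1(text)
        if r1 is None:
            return None
        v1, rest = r1
        r2 = p2(rest)
        if r2 is None:
            return None
        v2, rest = r2
        return (v1, v2), rest
    return parser

def alt(p1, p2):
    return lambda text: p1(text) or p2(text)

# Toy declaration grammar (illustrative only): "type Bool" or "type Nat".
decl = seq(literal("type "), alt(literal("Bool"), literal("Nat")))
print(decl("type Bool"))    # (('type ', 'Bool'), '')
```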
## Inspiration We enjoyed playing the computer party game [*Keep Talking and Nobody Explodes*](http://www.keeptalkinggame.com/) with our friends and decided that a real-life implementation would be more accessible and interesting. It's software brought to life. ## What it does Each randomly generated "bomb" has several modules that must be defused in order to win the game. Here's the catch: only one person can see and interact with the bomb. The other players have the bomb defusal manual to defuse the bomb and must act as "experts," communicating quickly with the bomb defuser. And you only have room for three errors. Puzzle-solving, communication, and interpretation skills will be put to the test as players race the five-minute clock while communicating effectively. Here are the modules we built: * **Information Display** *Sometimes, information is useful.* In this display module, we display the time remaining and the serial number of the bomb. How can you use this information? * **Simple Wires** *Wires are the basis of all hardware hacks. But sometimes, you have to pull them out.* A schematic is generated, instructing players to set up a variety of colored wires into six pins. There's only one wire to pull out, but which one? Only the "experts" will know, following a series of conditional statements. * **The Button** *One word. One LED. One button.* Decode this strange combination and figure out if the button saying "PRESS" should be pressed, or if you should hold it down and light up another LED. * **Password** *The one time you wouldn't want a correct horse battery.* Scroll through letters with buttons on an LCD display, in hopes of stumbling upon an actual word, then submit it. * **Simon Says** *The classic childhood toy and perfect Arduino hack, but much, much crueler.* Follow along the flashing LEDs and repeat the pattern - but you must map it to the correct pattern first. ## How we built it We used six Arduino Unos, with one for each module and one for a central processor to link all of the modules together. Each module is independent, except for two digital outputs indicating the number of strikes to the central processor. On breadboards, we used LEDs, LCD displays, and switches to provide a simple user interface. ## Challenges we ran into Reading the switches on the Simon Says module, interfacing all of the Arduinos together ## Accomplishments that we're proud of Building a polished product in a short period of time that made use of our limited resources ## What we learned How to use Arduinos, the C programming language, connecting digital and analog components ## What's next for Keep Talking Arduino More modules, packaging and casing for modules, more options for players
## Inspiration As college students, our lives are often filled with music: from studying at home, partying, to commuting. Music is ubiquitous in our lives. However, we find the current process of listening to music and controlling our digital music player pretty mechanical and boring: it’s either clicking or tapping. We wanted to truly interact with our music. We want to feel our music. During one brainstorming session, a team member jokingly suggested a Minority Report-inspired gesture UI system. With this suggestion, we realized we can use this hackathon as a chance to build a cool interactive, futuristic way to play music. ## What it does Fedoract allows you to control your music in a fun and interactive way. It wireless streams your hand gestures and allows you to control your Spotify with them. We are using a camera mounted on a fedora to recognize hand gestures, and depending on which gesture, we can control other home applications using the technology of IoT. The camera will be mounted wirelessly on the hat and its video feed will be sent to the main computer to process. ## How we built it For the wireless fedora part, we are using an ESP32-CAM module to record and transmit the video feed of the hand gesture to a computer. The ESP32-CAM module will be powered by a power supply built by a 9V battery and a 3V3/5V Elegoo Power Supply. The video feed is transmitted through WiFi and is connected to the main computer to be analyzed using tools such as OpenCV. Our software will then calculate the gesture and perform actions on Spotify accordingly. The software backend is built using the OpenCV and the media pipe library. The media pipe library includes a hand model that has been pre-trained using a large set of data and it is very accurate. We are using this model to get the positions of different features (or landmarks) of the hand, such as fingertips, the wrist, and the knuckles. Then we are using this information to determine the hand gesture made by the user. The Spotify front end is controlled and accessed using the Selenium web driver. Depending on the action determined by hand gesture recognition, the program presses the corresponding button. Note the new window instantiated by the web driver does not have any prior information. Therefore, we need to log in to Spotify through an account at the start of the process. Then we can access the media buttons and other important buttons on the web page. Backend: we used OpenCV in combination with a never-seen-before motion classification algorithm. Specifically, we used Python scripts using OpenCV to capture webcam input to get hand recognition to recognize the various landmarks (joints) of the hand. Then, motion classification was done through a non-ML, trigonometric approach. First, a vector of change in X and Y input movement was computed using the first and last stored hand coordinates for some given period after receiving some hand motion input. Using deltaX and delta Y, we were able to compute the angle of the vector on the x-y plane, relative to a reference angle that is obtained using the display's width and height. If the vector is between the positive and negative reference angles, then the motion is classified and interpreted as Play Next Song, and so on for the other actions. See the diagrams below for more details. ## Challenges we ran into The USB-to-TTL cable we got for the ESP32 CAM was defective, so we were spending way too much time trying to fix and find alternative ways with the parts we have. 
Worse of all, we were also having trouble powering the ESP32-CAM both when it was connected directly to the computer and when it was running wirelessly using its own power supply. The speaker we bought was too quiet for our purposes, and we did not have the right types of equipment to get our display working in time. The ESP32 CAM module is very sensitive to power fluctuations in addition to having an extremely complicated code upload process. The community around the device is very small therefore there was often misleading advice. This led to a long debugging process. The software also had many issues. First of all, we needed to install MediaPipe on our ARM (M1) Macs to effectively develop using OpenCV but we figured out that it wasn’t supported only after spending some time trying to install it. Eventually, we resorted to the Intel chip version of PyCharm to install MediaPipe, which surprisingly worked, seeing as our chips are not Intel-manufactured. As a result, PyCharm was super slow and this really slowed down the development process. Also, we had minor IDE issues when importing OpenCV in our scripts, so we hotfixed that by simply creating a new project (shrug). Another thing was trying to control the keyboard via the OS but it turned out to be difficult for keys other than volume, so we resorted to using Selenium to control the Spotify client. Additionally, in the hand gesture tracking, the thumbs down gesture was particularly difficult because the machine kept thinking that other fingers were lifted as well. In the hand motion tracking process, the x and y coordinates were inverted, which made the classification algorithm a lot harder to develop. Then, bridging the video live stream coming from the ES32-CAM to the backend was problematic and we spent around 3 hours trying to find a way to effectively and simply establish a bridge using OpenCV so that we could easily redirect the video live stream to be the SW's input feed. Lastly, we needed to link the multiple functionality scripts together, which wasn’t obvious. ## Accomplishments that we're proud of One thing the hardware team is really proud of is the perseverance displayed during the debugging of our hardware. Because of faulty connection cords and unstable battery supply, it took us over 14 hours simply just to get the camera to connect wirelessly. Throughout this process, we had to use an almost brute force approach and tried all possible combinations of potential fixes. We are really surprised we have mental toughness. The motion classification algorithm! It took a while to figure out but was well worth it. Hand gesture (first working product in the team, team spirit) This was our first fully working Minimum Viable Product in a hackathon for all of the team members ## What we learned How does OpenCV work? We learned extensively how serial connection works. We learned that you can use the media pipe module to perform hand gesture recognition and other image classification using image capture. An important thing to note is the image capture must be in RGB format before being passed into the Mediapipe library. We also learned how to use the image capture with webcams to test in development and how to draw helpful figures on the output image to debug. ## What's next for Festive Fedora There is a lot of potential for improvements in this project. For example, we can put all the computing through a cloud computing service. 
Right now, we have the hand gesture recognition calculated locally, and having it online means we will have more computing power, meaning that it will also have the potential to connect to more devices by running more complicated algorithms. Something else we can improve is that we can try to get better hardware such that we will have less delay in the video feed, giving us more accuracy for the gesture detection.
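For reference, a stripped-down Python sketch of the gesture backend described in "How we built it" is shown below: MediaPipe hand landmarks from a frame, then the trigonometric swipe classification using the displacement vector's angle. The reference angle and action names are illustrative, not the team's tuned values.

```python
# Sketch of the gesture backend: MediaPipe hand landmarks from a video frame,
# then an angle-based swipe classifier whose result would be mapped to a
# Selenium click on the Spotify web player. Thresholds are illustrative.
import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def wrist_xy(frame):
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # MediaPipe expects RGB
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark[0]   # landmark 0 = wrist
    return lm.x, lm.y

def classify_swipe(start, end, ref_angle_deg=30):
    dx = end[0] - start[0]
    dy = -(end[1] - start[1])                         # image y grows downward
    angle = math.degrees(math.atan2(dy, dx))
    if -ref_angle_deg <= angle <= ref_angle_deg:
        return "next_track"                           # rightward swipe
    if abs(angle) >= 180 - ref_angle_deg:
        return "previous_track"                       # leftward swipe
    return "none"

# start/end would be wrist_xy() samples taken at the beginning and end of a
# short motion window captured from the ESP32-CAM stream.
```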
## Inspiration The inspiration for ResuMate came from observing how difficult it can be for undergraduate students and recent graduates to get personalized and relevant feedback on their resumes. We wanted to create a tool that could provide intelligent, real-time resume analysis specifically for technology-related jobs, focusing on internship and new grad roles. By leveraging AI, we aim to help candidates enhance their resumes and improve their chances in the competitive tech job market. ## What it does ResuMate is an AI-powered web application that analyzes resumes by providing personalized eligibility and compatibility assessments. It identifies key strengths and areas for improvement based on keyword matching and specific job requirements for tech roles. Users receive insights on which parts of their resume align with job descriptions and suggestions to fill in missing skills or keywords. ## How we built it ResuMate is built using modern web technologies: * React for building a responsive frontend interface. * Next.js for server-side rendering and easy routing. * Pyodide to run Python in the browser, enabling advanced resume analysis through Python libraries like PyPDF2. * CSS Modules to style the application components consistently and modularly. -Cerebras API (Llama3 model) as AI API to generate personalized feedback recommendations based on Large Language Models (LLMs) The core functionality revolves around uploading a PDF resume, processing it with Python code in the browser, and providing feedback based on keyword analysis using LLM call API. ## Challenges we ran into One of the key challenges we faced was transferring PDF content to text within a JavaScript framework. Parsing PDFs in a web environment isn't straightforward, especially in a client-side context where JavaScript doesn't natively support the full breadth of PDF handling like Python does. Integrating Pyodide was crucial for running Python libraries like PyPDF2 to handle the PDF extraction, but it introduced challenges in managing the virtual filesystem and ensuring seamless communication between JavaScript and Python. ## Accomplishments that we're proud of We successfully integrated Python code execution in the browser through Pyodide, allowing us to analyze resumes in real time without needing a backend server for processing. Additionally, we created a user-friendly interface that helps users understand what keywords are missing from their resumes, which will directly improve their job applications. ## What we learned Throughout this project, we learned how to: * Seamlessly integrate Python within a JavaScript framework using Pyodide. * Handle complex file uploads and processing entirely on the client-side. * Optimize PDF text extraction and keyword matching for real-time performance. * Work as a team to overcome technical challenges and meet our project goals. ## What's next for ResuMate Moving forward, we plan to: * Improve the accuracy of our PDF text extraction, especially for resumes with complex formatting. * Expand the keyword matching and scoring algorithms to handle more specific job descriptions and fields. * Develop a more advanced suggestion system that not only identifies missing keywords but also provides actionable advice based on the latest job market trends. * Add support for more resume formats, including Word documents and plain text.
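A minimal sketch of the analysis step, extracting text with PyPDF2 and doing a simple keyword match, is shown below. In the app this runs in the browser under Pyodide and the recommendations come from the LLM call; here it is plain Python, and the keyword set and file name are assumptions.

```python
# Sketch of the resume analysis step: PyPDF2 text extraction plus a naive
# keyword match. The keyword list and file name are illustrative only.
from PyPDF2 import PdfReader

JOB_KEYWORDS = {"python", "react", "sql", "git", "docker"}

def resume_text(path: str) -> str:
    reader = PdfReader(path)
    return " ".join((page.extract_text() or "") for page in reader.pages)

def keyword_report(path: str) -> dict:
    words = set(resume_text(path).lower().split())
    matched = JOB_KEYWORDS & words
    missing = JOB_KEYWORDS - words
    score = round(100 * len(matched) / len(JOB_KEYWORDS))
    return {"score": score, "matched": sorted(matched), "missing": sorted(missing)}

print(keyword_report("resume.pdf"))
```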
## Inspiration The cryptocurrency market is an industry which is expanding at an exponential rate. Everyday, thousands new investors of all kinds are getting into this volatile market. With more than 1,500 coins to choose from, it is extremely difficult to choose the wisest investment for those new investors. Our goal is to make it easier for those new investors to select the pearl amongst the sea of cryptocurrency. ## What it does To directly tackle the challenge of selecting which cryptocurrency to choose, our website has a compare function which can add up to 4 different cryptos. All of the information from the chosen cryptocurrencies are pertinent and displayed in a organized way. We also have a news features for the investors to follow the trendiest news concerning their precious investments. Finally, we have an awesome bot which will answer any questions the user has about cryptocurrency. Our website is simple and elegant to provide a hassle-free user experience. ## How we built it We started by building a design prototype of our website using Figma. As a result, we had a good idea of our design pattern and Figma provided us some CSS code from the prototype. Our front-end is built with React.js and our back-end with node.js. We used Firebase to host our website. We fetched datas of cryptocurrency from multiple APIs from: CoinMarketCap.com, CryptoCompare.com and NewsApi.org using Axios. Our website is composed of three components: the coin comparison tool, the news feed page, and the chatbot. ## Challenges we ran into Throughout the hackathon, we ran into many challenges. First, since we had a huge amount of data at our disposal, we had to manipulate them very efficiently to keep a high performant and fast website. Then, there was many bugs we had to solve when integrating Cisco's widget to our code. ## Accomplishments that we're proud of We are proud that we built a web app with three fully functional features. We worked well as a team and had fun while coding. ## What we learned We learned to use many new api's including Cisco spark and Nuance nina. Furthermore, to always keep a backup plan when api's are not working in our favor. The distribution of the work was good, overall great team experience. ## What's next for AwsomeHack * New stats for the crypto compare tools such as the number of twitter, reddit followers. Keeping track of the GitHub commits to provide a level of development activity. * Sign in, register, portfolio and watchlist . * Support for desktop applications (Mac/Windows) with electronjs
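As a small illustration of the kind of data fetch behind the comparison tool, the sketch below queries CryptoCompare's public price endpoint (the app itself does this with Axios in the React front end); the symbols and the response fields used here are assumptions based on that public API.

```python
# Sketch of one data fetch for the coin comparison tool, using CryptoCompare's
# public pricemultifull endpoint. Field names are assumptions from that API.
import requests

resp = requests.get(
    "https://min-api.cryptocompare.com/data/pricemultifull",
    params={"fsyms": "BTC,ETH,ADA,XRP", "tsyms": "USD"},
)
resp.raise_for_status()
raw = resp.json()["RAW"]

for symbol, quote in raw.items():
    usd = quote["USD"]
    print(f"{symbol}: ${usd['PRICE']:.2f} ({usd['CHANGEPCT24HOUR']:+.2f}% 24h)")
```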
## Inspiration I got annoyed at Plex's lack of features ## What it does Provides direct database and disk access to Plex configuration ## How I built it Python ## Challenges I ran into ## Accomplishments that I'm proud of ## What I learned ## What's next for InterPlex
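Since the writeup above is sparse, here is a rough Python sketch of the kind of direct database access InterPlex provides. Plex keeps library metadata in an SQLite file; the file path, table, and column names below are assumptions based on a typical Linux install and may differ from the project's actual code.

```python
# Sketch only: open Plex's library database read-only and list recent titles.
# Path, table ("metadata_items"), and columns ("title", "year", "added_at")
# are assumptions and vary by install; always open read-only to avoid
# corrupting the live Plex database.
import sqlite3
from pathlib import Path

PLEX_DB = Path(
    "/var/lib/plexmediaserver/Library/Application Support/"
    "Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"
)

def list_recent_titles(limit: int = 10):
    conn = sqlite3.connect(PLEX_DB.as_uri() + "?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT title, year FROM metadata_items "
            "WHERE title != '' ORDER BY added_at DESC LIMIT ?",
            (limit,),
        ).fetchall()
    finally:
        conn.close()
    return rows

if __name__ == "__main__":
    for title, year in list_recent_titles():
        print(f"{title} ({year})")
```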
## Inspiration
We've had roommate troubles in the past, so we decided to make something to help us change that.
## What it does
It keeps track of tasks and activities among roommates, gamifying these tasks with a reward system to motivate everyone to commit to the community.
## Challenges we ran into
The two biggest obstacles we ran into were version control and the Firebase documentation and database.
## Accomplishments that we're proud of
We completed our core features, and we made a decent-looking app.
## What we learned
Take heed when it comes to version control.
## What's next for roomMe
We would like to add more database support, and more features that allow communication with other people in your group. We would also like to add extension apps to further enhance the experience of roomMe, such as Venmo, Google Calendar, and GroupMe. We are also considering creating a game where people can spend their credits.
## Inspiration
We wanted to use Livepeer's features to build a unique streaming experience for gaming content for both streamers and viewers. Inspired by Twitch, we wanted to create a platform that increases exposure for small and upcoming creators and establishes a more unified social ecosystem for viewers, allowing them to connect and interact on a deeper level.
## What it does
kizuna has aspirations to implement the following features:
* Livestream and upload videos
* View videos (both on a big screen and in a small mini-player for multitasking)
* Interact with friends (on stream, in a private chat, or in public chat)
* View activities of friends
* Highlight smaller, local, and upcoming streamers
## How we built it
Our web application was built using React, utilizing React Router to navigate through webpages and Livepeer's API to allow users to upload content and host livestreams. For background context, Livepeer describes themselves as a decentralized video infrastructure network. The UI design was made entirely in Figma and was inspired by Twitch. However, as a result of a user research survey, changes to the chat and sidebar were made in order to facilitate a healthier user experience. New design features include a "Friends" page, introducing a social aspect that allows users of the platform, both streamers and viewers, to interact with each other and build a more meaningful connection.
## Challenges we ran into
We had barriers with the API key provided by Livepeer.studio. This put a halt to the development side of our project. However, we still managed to get our livestreams working and our videos uploading! Implementing the design from Figma in the application acted as a barrier as well. We hope to tweak the application in the future to be as accurate to the UX/UI as possible. Otherwise, working with Livepeer's API was a blast, and we cannot wait to continue to develop this project! You can discover more about Livepeer's API [here](https://livepeer.org/).
## Accomplishments that we're proud of
Our group is proud of our persistence through all the challenges that confronted us throughout the hackathon. From learning a whole new programming language to staying awake no matter how tired we were, we are all proud of each other's dedication to creating a great project.
## What we learned
Although we knew of each other before the hackathon, we all agreed that having teammates you can collaborate with is a fundamental part of developing a project. The developers (Josh and Kennedy) learned lots about implementing APIs and working with designers for the first time. For Josh, this was his first time applying his practice from small projects in React. This was Kennedy's first hackathon, where she learned how to implement CSS. The UX/UI designers (Dorothy and Brian) learned more about designing web applications as opposed to the mobile applications they are used to. Through this challenge, they were also able to learn more about Figma's design tools and functions.
## What's next for kizuna
Our team intends to continue developing this application to its full potential.
Although all of us are still learning, we would like to accomplish the following next steps in our application:
* Completing the full UX/UI design on the development side, utilizing a CSS framework like Tailwind
* Implementing Lens Protocol to create a unified social community in our application
* Redesigning some small aspects of each page
* Implementing filters to categorize streamers, see who is streaming, and categorize stream genres
## Inspiration
We wanted to do something fun and exciting, nothing too serious. Slang is a vital component to thrive in today's society. Ever seen Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform, Urban Dictionary, to educate people about today's ways and show how today's music is changing with the slang thrown in.
## What it does
You choose your desired song, and it will print out the lyrics for you and even sing them in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, replace the original, and attempt to sing the translated version.
## How I built it
We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the front end with the Bootstrap framework, utilizing Kaggle datasets and the Zdict API.
## Challenges I ran into
Redirection issues with Flask were frequent, and the excessive API calls made the program very slow.
## Accomplishments that I'm proud of
The excellent UI design, along with the amazing outcomes that can be produced from the translation of slang.
## What I learned
We learned a lot of things.
## What's next for SlangSlack
We are going to transform the way today's millennials keep up with growing trends in slang.
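A simplified sketch of the core substitution idea described above: swap slang terms in a lyric line for their dictionary meanings. The real app pulls definitions through the Zdict API; the mini slang dictionary here is purely illustrative.

```python
# Sketch of SlangSlack's translation step. The SLANG dict stands in for real
# Urban Dictionary lookups; only the word-by-word replacement logic is shown.
import re

SLANG = {  # illustrative entries, not real API output
    "dawg": "close friend",
    "prolly": "probably",
    "lit": "exciting or excellent",
}

def translate_line(line: str, slang: dict = SLANG) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return slang.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", swap, line)

if __name__ == "__main__":
    print(translate_line("My dawg would prolly do it for a Louis belt"))
    # -> "My close friend would probably do it for a Louis belt"
```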
## Inspiration It's easy to zone off in online meetings/lectures, and it's difficult to rewind without losing focus at the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we can just quickly skim through a list of keywords to immediately see what happened? ## What it does Rewind is an intelligent, collaborative and interactive web canvas with built in voice chat that maintains a list of live-updated keywords that summarize the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text. ## How we built it Communications: WebRTC, WebSockets, HTTPS We used WebRTC, a peer-to-peer protocol to connect the users though a voice channel, and we used websockets to update the web pages dynamically, so the users would get instant feedback for others actions. Additionally, a web server is used to maintain stateful information. For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition. Audio transcription and summary: Google Cloud Speech (live transcription) and natural language APIs (for summarization) ## Challenges we ran into There are many challenges that we ran into when we tried to bring this project to reality. For the backend development, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend. We spent more than 18 hours on it to come to a working prototype. In addition, the frontend development was also full of challenges. The design and implementation of the canvas involved many trial and errors and the history rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team and we have learned a lot from this experience. ## Accomplishments that we're proud of Despite all the challenges we ran into, we were able to have a working product with many different features. Although the final product is by no means perfect, we had fun working on it utilizing every bit of intelligence we had. We were proud to have learned many new tools and get through all the bugs! ## What we learned For the backend, the main thing we learned was how to use WebRTC, which includes client negotiations and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the websockets. As for the frontend, we learned to use various javascript elements to help develop interactive client webapp. We also learned event delegation in javascript to help with an essential component of the history page of the frontend. ## What's next for Rewind We imagined a mini dashboard that also shows other live-updated information, such as the sentiment, summary of the entire meeting, as well as the ability to examine information on a particular user.
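As a rough sketch of Rewind's keyword step described above: feed a chunk of transcribed speech to the Google Cloud Natural Language API and keep the highest-salience entities as the clickable keywords. This assumes application-default credentials are already configured; the live Speech-to-Text and WebRTC sides are omitted.

```python
# Sketch only: turn a transcript chunk into candidate timeline keywords by
# ranking entities from the Natural Language API by salience.
from google.cloud import language_v1

def extract_keywords(transcript: str, top_n: int = 5) -> list[str]:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": document})
    # Salience ranks how central an entity is to the text (0.0 to 1.0).
    ranked = sorted(response.entities, key=lambda e: e.salience, reverse=True)
    return [entity.name for entity in ranked[:top_n]]

if __name__ == "__main__":
    chunk = "Next week's demo covers the WebRTC voice channel and the shared canvas."
    print(extract_keywords(chunk))
```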
## Inspiration We were motivated to tackle linguistic challenges in the educational sector after juxtaposing our personal experience with current news. There are currently over 70 million asylum seekers, refugees, or internally-displaced people around the globe, and this statistic highlights the problem of individuals from different linguistic backgrounds being forced to assimilate into a culture and language different than theirs. As one of our teammates was an individual seeking a new home in a new country, we had first hand perspective at how difficult this transition was. In addition, our other team members had volunteered extensively within the educational system in developing communities, both locally and globally, and saw a similar need with individuals being unable to meet the community’s linguistics standards. We also iterated upon our idea to ensure that we are holistically supporting our communities by making sure we consider the financial implications of taking the time to refine your language skills instead of working. ## What it does Fluently’s main purpose is to provide equitable education worldwide. By providing a user customized curriculum and linguistic practice, students can further develop their understanding of their language. It can help students focus on areas where they need the most improvement. This can help them make progress at their own pace and feel more confident in their language skills while also practicing comprehension skills. By using artificial intelligence to analyze pronunciation, our site provides feedback that is both personalized and objective. ## How we built it Developing the web application was no easy feat. As we were searching for an AI model to help us through our journey we stumbled upon OpenAI, specifically Microsoft Azure’s cognitive systems that utilize OpenAI’s comprehensive abilities in language processing. This API gave us the ability to analyze voice patterns and fluency and transcribe passages that are mentioned in the application. Figuring out the documentation as well as how the AI will be interacting with the user was most important for us to execute properly since the AI would be acting as the tutor/mentor for the students in these cases. We developed a diagram that would break down the passages read to the student phonetically and give them a score of 100 for how well each word was pronounced based on the API’s internal grading system. As it is our first iteration of the web app, we wanted to explore how much information we could extract from the user to see what is most valuable to display to them in the future. Integrating the API with the web host was a new feat for us as a young team. We were confident in our python abilities to host the AI services and found a library by the name of Flask that would help us write html and javascript code to help support the front end of the application through python. By using Flask, we were able to host our AI services with python while also continuously managing our front end through python scripts. This gave room for the development of our backend systems which are Convex and Auth0. Auth0 was utilized to give members coming into the application a unique experience by having them sign into a personalized account. The account is then sent into the Convex database to be used as a storage base for their progress in learning and their development of skills over time. 
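A condensed sketch of the pronunciation-scoring call described above, using Azure's Speech SDK for Python. The key, region, and file name are placeholders, and the Flask, Auth0, and Convex plumbing is left out; this only shows the word-level grading on the 0-100 scale.

```python
# Sketch of the Azure pronunciation assessment step: grade a recorded reading
# against the reference passage. Subscription key/region and the wav path are
# placeholders, not the project's real configuration.
import azure.cognitiveservices.speech as speechsdk

def score_reading(wav_path: str, reference_text: str):
    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)
    pron_config = speechsdk.PronunciationAssessmentConfig(
        reference_text=reference_text,
        grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
        granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    )
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, audio_config=audio_config
    )
    pron_config.apply_to(recognizer)
    result = recognizer.recognize_once()
    assessment = speechsdk.PronunciationAssessmentResult(result)
    # Overall score plus a per-word breakdown for the feedback page.
    per_word = [(w.word, w.accuracy_score) for w in assessment.words]
    return assessment.pronunciation_score, per_word
```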
All in all, each component of the application from the AI learning models, generating custom passages for the user, to the backend that communicated between the Javascript and Python server host that streamlines the process of storing user data, came with its own challenges but came together seamlessly as we guide the user from our simple login system to the passage generator and speech analyzer to give the audience constructive feedback on their fluency and pronunciation. ## Challenges we ran into As a majority beginning team, this was our first time working with many of the different technologies, especially with AI APIs. We need to be patient working with key codes and going through an experiment process of trying different mini tests out to then head to the major goal that we were headed towards. One major issue that we faced was the visualization of data to the user. We found it hard to synthesize the analysis that was done by the AI to translate to the user to make sure they are confident in what they need to improve on. To solve this problem we first sought out how much information we could extract from the AI and then in future iterations we would simply display the output of feedback. Another issue we ran into was the application of convex into the application. The major difficulty came from developing javascript functions that would communicate back to the python server hosting the site. This was resolved thankfully; we are grateful for the Convex mentors at the conference that helped us develop personalized javascript functions that work seamlessly with our Auth0 authentication and the rest of the application to record users that come and go. ## Accomplishments that we're proud of: One accomplishment that we are proud of was the implementation of Convex and Auth0 with Flask and Python. As python is a rare language to host web servers in and isn't the primary target language for either service, we managed to piece together a way to fit both services into our project by collaboration with the team at Convex to help us out. This gave way to a strong authentication platform for our web application and for helping us start a database to store user data onto. Another accomplishment was the transition of using a React Native application to using Flask with Python. As none of the group has seen Flask before or worked for it for that matter, we really had to hone in our abilities to learn on the fly and apply what we knew prior about python to make the web app work with this system. Additionally, we take pride in our work with OpenAI, specifically Azure. We researched our roadblocks in finding a voice recognition AI to implement our natural language processing vision. We are proud of how we were able to display resilience and conviction to our overall mission for education to use new technology to build a better tool. ## What we learned As beginners at our first hackathon, not only did we learn about the technical side of building a project, we were also able to hone our teamwork skills as we dove headfirst into a project with individuals we had never worked with before. As a group, we collectively learned about every aspect of coding a project, from refining our terminal skills to working with unique technology like Microsoft Azure Cognitive Services. We also were able to better our skillset with new cutting edge technologies like Convex and OpenAI. 
We were able to come out of this experience not only growing as programmers but also as individuals who are confident they can take on the real world challenges of today to build a better tomorrow. ## What's next? We hope to continue to build out the natural language processing applications to offer the technology in other languages. In addition, we hope to hone to integrate other educational resources, such as videos or quizzes to continue to build other linguistic and reading skill sets. We would also love to explore the cross section with gaming and natural language processing to see if we can make it a more engaging experience for the user. In addition, we hope to expand the ethical considerations by building a donation platform that allows users to donate money to the developing community and pay forward the generosity to ensure that others are able to benefit from refining their linguistic abilities. The money would then go to a prominent community in need that uses our platform to fund further educational resources in their community. ## Bibliography United Nations High Commissioner for Refugees. “Global Forced Displacement Tops 70 Million.” UNHCR, UNHCR, The UN Refugee Agency, <https://www.unhcr.org/en-us/news/stories/2019/6/5d08b6614/global-forced-displacement-tops-70-million.html>.
## Inspiration We asked ourselves what tools we could use to help eliminate the problem of food waste plaguing North American retailers, and considered weather patterns to be an exciting potential solution. ## What it does Compares historical sales figures of products to historical weather data in the region, and derives formulas to make sales projections for the near future using relationships with factors like temperature and precipitation. ## How we built it We built a simple GUI in Java that allows the user to input sales data as a file. This data is compared to historical weather data from Environment Canada, and linear regression is applied to determine correlation. Using the Weather Network API we determine the forecast for the next week and determine projected sales, then display the information to the user. ## Challenges we ran into Neither of us had experience using APIs, so it was definitely a learning process on how to incorporate the Weather Network. ## Accomplishments that we're proud of It's our first Hackathon and we managed to finish a product! ## What we learned ## What's next for Sales Forecasting Using Weather Patterns Developing more complicated algorithms to find relationships with more obscure weather types.
## Inspiration
I was cooking at home one day and noticed we always had half a carrot, half an onion, and a quarter pound of ground pork lying around. More often than not it was from me cooking a fun dish that my mother has to somehow clean up over the week. So I wanted to create an app that would help me use those neglected ingredients, so that even if both my mother and I forget about them we would not contribute to food waste.
## What it does
Our app uses a database to keep track of the food in each user's fridge. When the user wants a recipe recommendation, our app helps them finish off the food that would otherwise go to waste. Using the power of ChatGPT, our app is very flexible: unknown foods and foods you are too lazy to weigh can be quickly worked into a flexible and delicious recipe.
## How we built it
Using Figma for design, React.js with Bootstrap for the frontend, a Flask backend, a MongoDB database, and OpenAI APIs, we were able to create this stunning-looking demo.
## Challenges we ran into
We messed up our database schema and made poor design choices in our APIs, resulting in a complete refactor. Our group also ran into problems with React, since we were relearning it. The OpenAI API gave us inconsistency problems too. We pushed past these challenges by dropping our immediate work and thinking of a solution together.
## Accomplishments that we're proud of
We finished our demo and it looks good. Our dev-ops practices were professional and efficient, and our kanban board saved us a lot of time when planning and implementing tasks. We also wrote plenty of documentation; after our first bout of failure, we planned everything out as a group.
## What we learned
We learned the importance of good API design and planning to save headaches when implementing our API endpoints. We also learned a lot about the nuances and intricacies of CORS. Another interesting thing we learned is how to write detailed prompts to retrieve formatted data from LLMs.
## What's next for Food ResQ : AI Recommended Recipes To Reduce Food Waste
We are planning to add a receipt scanning feature so that our users would not have to manually add each ingredient to their fridge. We are also working on a feature that prioritizes ingredients closer to expiry. Another feature we are looking at is notifications to remind our users that their ingredients should be used soon, to drive up engagement. We are looking for payment processing vendors to allow our users to access the most advanced LLMs at a slight premium, for less than a coffee a month.
## Challenges, themes, prizes we are submitting for
Sponsor Challenges: None
Themes: Artificial Intelligence & Sustainability
Prizes: Best AI Hack, Best Sustainability Hack, Best Use of MongoDB Atlas, Most Creative Use of Github, Top 3 Prize
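An illustrative sketch of the recipe call described above: hand the LLM whatever is left in the user's fridge and ask for a recipe that uses it up. The model name and prompt wording are assumptions; the MongoDB fridge lookup and Flask route are omitted.

```python
# Sketch of the LLM recipe step. Assumes OPENAI_API_KEY is set in the
# environment; model choice and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def suggest_recipe(ingredients: list[str]) -> str:
    prompt = (
        "Suggest one dinner recipe that uses as many of these leftover "
        f"ingredients as possible: {', '.join(ingredients)}. "
        "List the steps briefly and note any common pantry items needed."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_recipe(["half a carrot", "half an onion", "ground pork"]))
```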
## Inspiration Food is a basic human need. As someone who often finds themselves wandering the aisles of Target, I know firsthand how easy it is to get lost among the countless products and displays. The experience can quickly become overwhelming, leading to forgotten items and a less-than-efficient shopping trip. This project was born from the desire to transform that chaos into a seamless shopping experience. We aim to create a tool that not only helps users stay organized with their grocery lists but also guides them through the store in a way that makes shopping enjoyable and stress-free. ## What it does **TAShopping** is a smart grocery list app that records your grocery list in an intuitive user interface and generates a personalized route in **(almost)** any Target location across the United States. Users can easily add items to their lists, and the app will optimize their shopping journey by mapping out the most efficient path through the store. ## How we built it * **Data Aggregation:** We utilized `Selenium` for web scraping, gathering product information and store layouts from Target's website. * **Object Storage:** `Amazon S3` was used for storing images and other static files related to the products. * **User Data Storage:** User preferences and grocery lists are securely stored using `Google Firebase`. * **Backend Compute:** The backend is powered by `AWS Lambda`, allowing for serverless computing that scales with demand. * **Data Categorization:** User items are classified with `Google Gemini` * **API:** `AWS API Endpoint` provides a reliable way to interact with the backend services and handle requests from the front end. * **Webapp:** The web application is developed using `Reflex`, providing a responsive and modern interface for users. * **iPhone App:** The iPhone application is built with `Swift`, ensuring a seamless experience for iOS users. ## Challenges we ran into * **Data Aggregation:** Encountered challenges with the rigidity of `Selenium` for scraping dynamic content and navigating web page structures. * **Object Storage:** N/A (No significant issues reported) * **User Data Storage:** N/A (No significant issues reported) * **Backend Compute:** Faced long compute times; resolved this by breaking the Lambda function into smaller, more manageable pieces for quicker processing. * **Backend Compute:** Dockerized various builds to ensure compatibility with the AWS Linux environment and streamline deployment. * **API:** Managed the complexities of dealing with and securing credentials to ensure safe API access. * **Webapp:** Struggled with a lack of documentation for `Reflex`, along with complicated Python dependencies that slowed development. * **iPhone App:** N/A (No significant issues reported) ## Accomplishments that we're proud of * Successfully delivered a finished product with a relatively good user experience that has received positive feedback. * Achieved support for hundreds of Target stores across the United States, enabling a wide range of users to benefit from the app. ## What we learned > > We learned a lot about: > > > * **Gemini:** Gained insights into effective data aggregation and user interface design. > * **AWS:** Improved our understanding of cloud computing and serverless architecture with AWS Lambda. > * **Docker:** Mastered the process of containerization for development and deployment, ensuring consistency across environments. > * **Reflex:** Overcame challenges related to the framework, gaining hands-on experience with Python web development. 
> * **Firebase:** Understood user authentication and real-time database capabilities through Google Firebase. > * **User Experience (UX) Design:** Emphasized the importance of intuitive navigation and clear presentation of information in app design. > * **Version Control:** Enhanced our collaboration skills and code management practices using Git. > > > ## What's next for TAShopping > > There are many exciting features on the horizon, including: > > > * **Google SSO for web app user data:** Implementing Single Sign-On functionality to simplify user authentication. > * **Better UX for grocery list manipulation:** Improving the user interface for adding, removing, and organizing items on grocery lists. > * **More stores:** Expanding support to additional retailers, including Walmart and Home Depot, to broaden our user base and shopping capabilities. > > >
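As a rough sketch of the Gemini-based item categorization mentioned in the build notes above: map each grocery-list item to a store section so the route can be ordered. The model name, section list, prompt, and fallback are assumptions; the Lambda and API Gateway wiring is omitted.

```python
# Sketch only: classify a grocery item into a store section with Gemini.
# API key, model name, and SECTIONS are placeholders.
import google.generativeai as genai

SECTIONS = ["produce", "dairy", "frozen", "pantry", "household"]  # illustrative

def categorize(item: str) -> str:
    genai.configure(api_key="YOUR_GEMINI_KEY")  # placeholder
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = (
        f"Which one of these Target sections does '{item}' belong to: "
        f"{', '.join(SECTIONS)}? Answer with the section name only."
    )
    answer = model.generate_content(prompt).text.strip().lower()
    return answer if answer in SECTIONS else "pantry"  # simple fallback

if __name__ == "__main__":
    print(categorize("frozen peas"))
```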
Coded prototype on GitHub: <https://github.com/shawnd/foodie>
Design prototype on MarvelApp: <https://marvelapp.com/c1h9c5>
## Inspiration
To solve students' problems around:
1. Not knowing what to cook
2. Not knowing what groceries to buy at a store
3. Not discovering new meals to eat
4. Not having the time to manually create grocery lists based on online recipes
5. Sharing meals and recipes amongst friends
## What it does
1. Helps discover meals that can be cooked within a specified budget (e.g. $20).
2. Curates recipes from online sources into a database.
3. Creates a 'meal list' from the recipes you want to cook.
4. Each meal list shows approximate prices for the entire meal.
5. Auto-generates a grocery list that compiles saved recipes and their ingredients.
6. Allows one-click sharing of recipes with friends.
## How we built it
We used a Python Bottle API that communicates with a Firebase distributed database service and returns data to our front-end Ionic interface. The end result was a mobile application.
## Challenges we ran into
Scaling down features for the application.
## Accomplishments that we're proud of
* Finishing on time
* Building an application over 24 hours!!
* A fun experience
## What's next for Foodie
Feedback, iteration, feedback, iteration, test, feedback, iteration, release!
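A minimal sketch of the Python Bottle API described above: one endpoint that returns recipes fitting a requested budget. In the real app the recipe data would come from Firebase; here it is an in-memory placeholder, and the route name and response shape are assumptions.

```python
# Sketch of a budget-filtered recipe endpoint with Bottle.
from bottle import Bottle, request, run

app = Bottle()

RECIPES = [  # placeholder for the curated Firebase data
    {"name": "Veggie stir fry", "price": 8.50},
    {"name": "Chicken curry", "price": 12.00},
    {"name": "Pasta bake", "price": 15.75},
]

@app.get("/recipes")
def recipes_within_budget():
    budget = float(request.query.get("budget", 20))
    matches = [r for r in RECIPES if r["price"] <= budget]
    return {"budget": budget, "recipes": matches}  # Bottle serializes dicts to JSON

if __name__ == "__main__":
    # e.g. GET http://localhost:8080/recipes?budget=10
    run(app, host="localhost", port=8080)
```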
## Inspiration We were all very intrigued by machine learning and AI, so we decided to incorporate it into our project. We wanted to create something involving the webcam as well, so we tied it altogether with ScanAI. ## What it does ScanAI aims to detect guns in schools and public areas to help alert authorities quickly in the event of a public shooting. ## How we built it ScanAI is built entirely out of python. Computer vision python libraries were used including OpenCV, facial\_recognition, yolov5 and tkinter. ## Challenges we ran into When training models, we ran into issues of a lack of ram and a lack of training data. We also were challenged by the problem of tackling multiple faces at once. ## Accomplishments that we're proud of ScanAI is able to take imported files and detect multiple faces at once and apply facial recognition to all of them. ScanAI is highly accurate and has many features including Barack Obama facial recognition, object detection, live webcam viewing and scanning, and file uploading functionalities. ## What we learned We all learned a lot about machine learning and its capabilities. Using these modules expanded our knowledge on AI and all its possible uses. ## What's next for ScanAI Our next steps would be to improve our interface to improve user friendliness and platform compatibility. We would also want to incorporate our program with Raspberry Pi to increase its usage flexibility. Lastly, we would also want to work on improving the accuracy of the detection system by feeding it more images and feedback.
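A rough sketch of the detection pass described above: run a pretrained YOLOv5 model over a frame and flag suspicious detections. The class names checked here are placeholders from the stock COCO classes; detecting guns, as ScanAI does, requires a custom-trained model, and the facial recognition and Tkinter UI are omitted.

```python
# Sketch only: object detection on a single frame with a pretrained YOLOv5
# model via torch.hub. SUSPECT_CLASSES is illustrative.
import cv2
import torch

SUSPECT_CLASSES = {"knife", "baseball bat"}  # placeholders; a gun class needs custom training

def scan_frame(image_path: str):
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    frame = cv2.imread(image_path)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB
    results = model(rgb)
    detections = results.pandas().xyxy[0]  # columns: xmin..ymax, confidence, name
    hits = detections[detections["name"].isin(SUSPECT_CLASSES)]
    for _, det in hits.iterrows():
        print(f"ALERT: {det['name']} detected ({det['confidence']:.0%} confidence)")
    return hits

if __name__ == "__main__":
    scan_frame("camera_snapshot.jpg")
```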
# imHungry ## Inspiration As Berkeley students we are always prioritizing our work over our health! Students often don't have as much time to go buy food. Why not pick a food for other students while buying your own food and make a quick buck? Or perhaps, place an order with a student who was already planning to head your way after buying some food for themselves? This revolutionary business model enables students to participate in the food delivery business while avoiding the hassles that are associated with the typical food delivery app. Our service does not require students to do anything differently than what they already do! ## What it does This application allows students to be able to purchase or help purchase food from/for their fellow students. The idea is that students already purchase food often before heading out to the library or another place on campus. This app allows these students to list their plans in advance and allow other students to put in their orders as well. These buyers will then meet the purchaser at wherever they are expected to meet. That way, the purchaser doesn't need to make any adjustments to their plan besides buy a few extra orders! The buyers will also have the convenience of picking up their order near campus to avoid walking. This app enables students to get involved in the food delivery business while doing nearly nothing additional! ## How we built it We used Flask, JavaScript, HTML/CSS, and Python. Some technologies we used include Mapbox API, Google Firebase, and Google Firestore. We built this as a team at CalHacks! ## Challenges we ran into We had some trouble getting starting with using Google Cloud for user authentication. One of our team members went to the Google Cloud sponsor stand and was able to help fix part of the documentation! ## Accomplishments that we're proud of We're proud of our use of the Mapbox API because it enabled us to use some beautiful maps in our application! As a food delivery app, we found it quite important that we are able to displays restaurants and locations on campus (for delivery) that we support and Mapbox made that quite easy. We are also quite proud of our use of Firebase and Firestore because we were able to use these technologies to authenticate users as Berkeley students while also quickly storing and retrieving data from the cloud. ## What we learned We learned how to work with some great APIs provided by Mapbox and Google Cloud! ## What's next for imHungry We hope to complete our implementation of the user and deliverer interface and integrate a payments API to enable users to fully use the service! Additional future plans are to add time estimates, improve page content, improve our back-end algorithms, and to improve user authentication.
## Inspiration: seeing and dealing with rude and toxic comments on popular forums like youtube, reddit, and being aware that sometimes it might be you who leaves that rude comment, and you may not even realize it. ## What it does: This chrome extension warns you and reminds you not to be too heated if it finds that you are in the process of leaving a particularly rude or toxic comment using google's perspective api - an NLP algorithm for analyzing sentiment. It reads the users comment into an editable text box field in real time, and is able to inform them if their comment is above the threshold before it is posted. ## How I built it * JS, POST requests to Perspective API, Local Node.js instance ## Challenges I ran into * Found it difficult to figure out how to find when a user is typing a comment - what text fields are activated? When do we collect a users input? Also, sometimes we spent a lot of time on something just to find out that it was made by someone else already. ## Accomplishments that I'm proud of: Was able to get a working extension running on localhost using js and node.js, none of us had substantial experience in either coming into this hackathon. ## What I learned Learned a lot about javascript, how to build an extension, how frustrating creating an extension can be, but how fun hackathons are! ## What's next for TypeMeNot2 Improving the graphics - as of right now, we have a full on alert for toxicity above a certain threshold, but we want to make better representation such as a color fader with a multiplier based off of the toxicity score. Example: icon is bright red for extremely offensive comments, and dark blue for non offensive ones.
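The extension itself is JavaScript, but the Perspective API request it makes looks roughly like the Python sketch below, shown here only to illustrate the payload and the threshold check. The API key and the 0.8 threshold are placeholders.

```python
# Sketch of a Perspective API toxicity check (the real extension does this
# from JavaScript). Key and threshold are placeholders.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(comment: str, api_key: str) -> float:
    body = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("you are the worst", api_key="YOUR_API_KEY")
    if score > 0.8:  # illustrative threshold
        print(f"Heads up: this reads as toxic ({score:.0%}). Maybe rephrase?")
```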
## Inspiration Everyone can relate to the scene of staring at messages on your phone and wondering, "Was what I said toxic?", or "Did I seem offensive?". While we originally intended to create an app to help neurodivergent people better understand both others and themselves, we quickly realized that emotional intelligence support is a universally applicable concept. After some research, we learned that neurodivergent individuals find it most helpful to have plain positive/negative annotations on sentences in a conversation. We also think this format leaves the most room for all users to reflect and interpret based on the context and their experiences. This way, we hope that our app provides both guidance and gentle mentorship for developing the users' social skills. Playing around with Co:here's sentiment classification demo, we immediately saw that it was the perfect tool for implementing our vision. ## What it does IntelliVerse offers insight into the emotions of whomever you're texting. Users can enter their conversations either manually or by taking a screenshot. Our app automatically extracts the text from the image, allowing fast and easy access. Then, IntelliVerse presents the type of connotation that the messages convey. Currently, it shows either a positive, negative or neutral connotation to the messages. The interface is organized similarly to a texting app, ensuring that the user effortlessly understands the sentiment. ## How we built it We used a microservice architecture to implement this idea The technology stack includes React Native, while users' information is stored with MongoDB and queried using GraphQL. Apollo-server and Apollo-client are used to connect both the frontend and the backend. The sentiment estimates are powered by custom Co:here's finetunes, trained using a public chatbot dataset found on Kaggle. Text extraction from images is done using npm's text-from-image package. ## Challenges we ran into We were unfamiliar with many of the APIs and dependencies that we used, and it took a long to time to understand how to get the different components to come together. When working with images in the backend, we had to do a lot of parsing to convert between image files and strings. When training the sentiment model, finding a good dataset to represent everyday conversations was difficult. We tried numerous options and eventually settled with a chatbot dataset. ## Accomplishments that we're proud of We are very proud that we managed to build all the features that we wanted within the 36-hour time frame, given that many of the technologies that we used were completely new to us. ## What we learned We learned a lot about working with React Native and how to connect it to a MongoDB backend. When assembling everyone's components together, we solved many problems regarding dependency conflicts and converting between data types/structures. ## What's next for IntelliVerse In the short term, we would like to expand our app's accessibility by adding more interactable interfaces, such as audio inputs. We also believe that the technology of IntelliVerse has far-reaching possibilities in mental health by helping introspect upon their thoughts or supporting clinical diagnoses.
## Inspiration We got inspired by an article \* from Plan international Canada that opened our eyes on the Cyberbullying problem. This problem continues to be a serious issue affecting our **social life** and the **mental health** of teens, young adults and especially girls. So, we thought it would be a great idea if there was an extension that reminds users to be mindful and reflective when they are about to send an insulting comment/message. * <https://stories.plancanada.ca/data-reveals-impact-of-cyberbullying-on-girls-lives-in-canada/> ## What it does Our project"Bully-in-spect" is a google chrome extension that gets what the user is currently typing, analyzes the text, and detects if there is any hate or offensive speech. * If so, a pop-up box will appear to warn the user to be careful and to take a moment to reflect before sending the message. * Another feature is, to analyze the web pages and detects any hate and offensive language. Then it would give the result in a pop-up box! ## How we built it/how scalable is it? We built it with love and programming languages! * It is easily scalable to different platforms and browsers as we are using **Javascript** with the React Framework which is cross-platform widely used everywhere. * We used content scripts to trigger page scans for sentiment analysis after each webpage loads. * We had background scripts to interact with the **Google Cloud Natural Language Processing API** to fetch the sentiment scores for each sentence. * The popup UI was done with CSS. ## Challenges we ran into With a new project, new challenges come! Some of the challenges we encountered during the Hackathon were * How to create a Google Extension * Connect Extension with Google Cloud; authenticate * Accuracy issue with API * Collection of User's Data * Handling React Hooks State ## Accomplishments that we're proud of * I teamed-up with total strange participants and they end up being amazing and creative people! * I am proud to keep learning about many new things during the Hackathon. * Able to build a functional extension! ## How We used Google Cloud * Initially we used AutoML to train the Model of Sentiment ,but we didn't get the needed accuracy. * Then we used [Google Cloud NLP Sentiment API](https://cloud.google.com/natural-language/docs/sentiment-tutorial) to get the analysis done. * We used HTTP POST requests to call the NLP API and get our results. ## What we learned * We learned to create a **Google Chrome Extension** * We learned about the latest technologies from Daniel and Aniket. Such as: Google Cloud, API, Authentications, and React. * It was great working with team members from different timezones and trying to create such a project in 24 Hour Time-Limit ## What's next for Bully-in-spect We would like to go bigger and implement this extension to other larger browser platforms such as safari and firefox and social media platforms (where bullying incidents occur more readily). We would love to create it as an Android-App with extra features: * Parental restrictions * Highlights the offensive words * Show words suggestions for the insulting comments
## Inspiration \_ "According to Portio Research, the world will send 8.3 trillion SMS messages this year alone – 23 billion per day or almost 16 million per minute. According to Statistic Brain, the number of SMS messages sent monthly increased by more than 7,700% over the last decade" \_ The inspiration for TextNet came from the crazy mobile internet data rates in Canada and throughout North America. The idea was to provide anyone with an SMS enabled device to access the internet! ## What it does TextNet exposes the following internet primitives through basic SMS: 1. Business and restaurant recommendations 2. Language translation 3. Directions between locations by bike/walking/transit/driving 4. Image content recognition 5. Search queries. 6. News update TextNet can be used by anyone with an SMS enabled mobile device. Are you \_ roaming \_ in a country without access to internet on your device? Are you tired of paying the steep mobile data prices? Are you living in an area with poor or no data connection? Have you gone over your monthly data allowance? TextNet is for you! ## How we built it TextNet is built using the Stdlib API with node.js and a number of third party APIs. The Stdlib endpoints connect with Twilio's SMS messaging service, allowing two way SMS communication with any mobile device. When a user sends an SMS message to our TextNet number, their request is matched with the most relevant internet primitive supported, parsed for important details, and then routed to an API. These API's include Google Cloud Vision, Yelp Business Search, Google Translate, Google Directions, and Wolfram Alpha. Once data is received from the appropriate API, the data is formatted and sent back to the user over SMS. This data flow provides a form of text-only internet access to offline devices. ## Challenges we ran into Challenge #1 - We arrived at HackPrinceton at 1am Saturday morning. Challenge #2 - Stable SMS data flow between multiple mobile phones and internet API endpoints. Challenge #3 - Google .json credential files working with our Stdlib environment Challenge #4 - Sleep deprivation ft. car and desks Challenge #5 - Stdlib error logging ## Accomplishments that we're proud of We managed to build a basic offline portal to the internet in a weekend. TextNet has real world applications and is built with exciting technology. We integrated an image content recognition machine learning algorithm which given an image over SMS, will return a description of the contents! Using the Yelp Business Search API, we built a recommendation service that can find all of the best Starbucks near you! Two of our planned team members from Queen's University couldn't make it to the hackathon, yet we still managed to complete our project and we are very proud of the results (only two of us) :) ## What we learned We learned how to use Stdlib to build a server-less API platform. We learned how to interface SMS with the internet. We learned *all* about async / await and modern Javascript practices. We learned about recommendation, translate, maps, search queries, and image content analysis APIs. ## What's next for TextNet Finish integrate of P2P payment using stripe ## What's next for HackPrinceton HackPrinceton was awesome! Next year, it would be great if the team could arrange better sleeping accommodations. The therapy dogs were amazing. Thanks for the experience!
## Inspiration
In large corporations such as RBC, the help desk receives hundreds of phone calls every hour, each lasting about 8 minutes on average and costing the company $15 per hour. We thought this was both a massive waste of time and resources, not to mention quite ineffective and inefficient. We wanted to create a product that improved the efficiency of a help desk to optimize productivity. We designed a product that can wrap a custom business model and a help service together in an accessible SMS link. This is a novel innovation that is heavily needed in today's businesses.
## What it does
SMS Assist offers the ability for a business to **automate their help desk** using SMS messages. This allows requests to be answered both online and offline, an innovative accessibility perk that many companies need. Our system has no limit on concurrent users, unlike a live help desk. It provides assistance for exactly what you need, and this is ensured by our IBM Watson model, which trains on client data and uses machine learning/NLU to interpret client responses with an extremely high degree of accuracy. **Assist** also has the ability to receive orders from customers if the business so chooses. The order details and client information are all stored by the Node server, so that employees can view orders in real time. Finally, **Assist** utilizes text sentiment analysis to analyse each client's tone in their texts. It then sends a report to the console so that the company can receive feedback from customers automatically and improve their systems.
## How we built it
We used Node.js, Twilio, and IBM Watson to create SMS Assist. **IBM Watson** was used to create the actual chatbots, and we trained it on user data in order to recognize the user's intent in their SMS messages. Through several data sets, we utilized Watson's machine learning and natural language & sentiment analysis to make communication with Assist hyper efficient. **Twilio** was used for the front end: connecting an SMS client with the server. Using our Twilio number, messages can be sent and received from any number globally! **Node.js** was used to create the server on which SMS Assist runs. Twilio first receives data from a user and sends it to the server. The server feeds it into our Watson chatbot, which then interprets the data and generates a formulated response. Finally, the response is relayed back to the server and into Twilio, where the user receives the response via SMS.
## Challenges we ran into
There were many bugs involving the Node.js server. Since we didn't have much initial experience with Node or the IBM API, we encountered many problems, such as the SessionID not being saved and the messages not being sent via Twilio. Through hours of hard work, we persevered and solved these problems, resulting in a perfected final product.
## Accomplishments that we're proud of
We are proud that we were able to learn the new APIs in such a short time period. All of us were completely new to IBM Watson and Twilio, so we had to read lots of documentation to figure things out. Overall, we learned a new useful skill and put it to good use with this project. This idea has the potential to change the workflow of any business for the better.
## What we learned
We learned how to use the IBM Watson API and Twilio to connect SMS messages to a server. We also discovered that working with these APIs is quite complex, as many IDs and auth factors need to be perfectly matched for everything to execute.
## What's next for SMS Assist
With some more development and customization for actual businesses, SMS Assist has the capability to help thousands of companies with their automated order systems and help desk features. More features can also be added.
## Inspiration
When Shayan and Alishan came from a different university to attend the DeltaHacks competition, the first thing we tried to do was connect them to the WiFi so we could get started on our hack; however, they had issues connecting to the internet. They were also both over their data plans, so we couldn't hotspot. This catalyzed a thought process about all the times we need an internet connection but aren't able to access it. We identified a widespread problem: people who need directions but are out of data, and so can't access the internet to look them up. Furthermore, the low-income and homeless population may not have the luxury of a data plan, yet still needs a directions service. This is where Textination came in.
## What it does
Textination acts as a text chatbot that engages in an interactive conversation with the user to get them where they need to go, without the use of the internet. The user first texts Textination their current location and their desired destination. Textination then utilizes a cloud dataset, currently accessed through the Google Maps API, to find the optimal route to the destination and texts back the directions to get there.
## How we built it
Coded in Python and hosted on PythonAnywhere. Implemented various APIs and libraries such as Twilio, Google Maps, Flask, and Geopy.
## Challenges we ran into
1. One challenge that we ran into was deciding which language to use to work with the APIs. We ended up choosing Python because of its ease of use with the data and its ability to stack APIs. Furthermore, it allows Textination to switch out databases and APIs easily, which can give users the most up-to-date and greatest breadth of data.
2. Only one of our teammates had substantial coding experience; the rest of us were business students with minimal programming experience. We worked around this by playing to our strengths and dividing up the work to maximize efficiency.
## Accomplishments that we're proud of
* Successfully getting the Flask server and Twilio API working, with no previous experience
* Stacking 4 APIs with no previous stacking experience
* Performing a quality needs assessment and detailed research into relevant data; leveraging primary and secondary research sources
## What we learned
* How to successfully bind a server to an API
* How to successfully program a back-end web app
* Effective brainstorming of solutions to social problems
## What's next for Textination - SMS Directions Chat Bot
* In the future, we want to use artificial intelligence and machine learning to identify areas that are travelled to most often, and optimize our program's web scraping for those areas to get information to the user faster
* We will also add transit directions and bus timings to Textination
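A condensed sketch of the flow described above: Twilio posts the incoming SMS to a Flask webhook, the Google Maps Directions API is asked for a walking route, and the reply goes back as plain-text steps. The API keys are placeholders, and the "origin to destination" message format is a simplifying assumption rather than the project's actual parsing logic.

```python
# Sketch of an SMS-to-directions webhook with Flask, Twilio, and googlemaps.
import re
import googlemaps
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
gmaps = googlemaps.Client(key="YOUR_MAPS_KEY")  # placeholder

@app.route("/sms", methods=["POST"])
def sms_reply():
    body = request.form.get("Body", "")
    origin, _, destination = body.partition(" to ")  # assumed message format
    routes = gmaps.directions(origin, destination, mode="walking")
    reply = MessagingResponse()
    if not routes:
        reply.message("Sorry, couldn't find a walking route for that.")
        return str(reply)
    steps = routes[0]["legs"][0]["steps"]
    directions = [re.sub(r"<[^>]+>", "", step["html_instructions"]) for step in steps]
    reply.message("\n".join(directions[:10]))  # keep within SMS-friendly length
    return str(reply)
```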
Introducing Melo-N – where your favorite tunes get a whole new vibe! Melo-N combines "melody" and "Novate" to bring you a fun way to switch up your music. Here's the deal: You pick a song and a genre, and we do the rest. We keep the lyrics and melody intact while changing up the music style. It's like listening to your favourite songs in a whole new light! How do we do it? We use cool tech tools like Spleeter to separate vocals from instruments, so we can tweak things just right. Then, with the help of the MusicGen API, we switch up the genre to give your song a fresh spin. Once everything's mixed up, we deliver your custom version – ready for you to enjoy. Melo-N is all about exploring new sounds and having fun with your music. Whether you want to rock out to a country beat or chill with a pop vibe, Melo-N lets you mix it up however you like. So, get ready to rediscover your favourite tunes with Melo-N – where music meets innovation, and every listen is an adventure!
## Inspiration As music producers, we feel that music production is a severely underrepresented niche in the field of machine learning and generative AI. We wanted to help supercharge the human race by bringing the newest global revolution to your DAW. ## What it does Melodify is a plugin for the music production software Ableton Live. We utilize the Max for Live platform along with the Ableton LiveAPI to integrate seamlessly with your workflow and provide fresh melodic and harmonic ideas based on the context of the project. Melodify is made possible by streaming MIDI JSON data over UDP to and from an open source GenAI model hosted on Hugging Face. You can try it out in the browser at <https://huggingface.co/spaces/skytnt/midi-composer>. This is the heart of the entire platform, and after much fine tuning, works as the perfect digital musician's assistant. ## How we built it We used Ableton's "Max for Live" plugin framework along with a python back-end and an open source MIDI data AI model from HuggingFace to create the melody in MIDI. ## Challenges we ran into **Parsing json to MIDI**: MIDI data is unique in that it has meta messages that communicate more than just the obvious parameters you'd expect from musical data (beyond notes, length of note, velocity), which are encoded in the relationships between the different default parameters. When we were parsing to and from MIDI in order to get an accurate suggestion from the model, it was quite difficult to preserve the complex structure when the MIDI data would have multiple layers of tracks or great length. **Max for Live**: The Ableton Max for Live platform has poor support for HTTP/TCP requests, so we had to adapt UDP for a more request/response style API rather than the typical live data streams that UDP is meant for. **Incorporating sponsor tech-stacks into our idea:** We tried and failed to incorporate a few technologies into our vision for the project, noticeably one being Fetch.ai. It felt like we hyper-focused on using agents to do the simple back-end and calls to the AI model, so when it wasn't working we ended up having wasted a ton of time. ## Accomplishments that we're proud of A few of us are new to hacking, and we all contributed to multiple parts of the project in different ways. There was a ton of stuff that was new for us. Working with MIDI data. Building an audio plugin. Using UDP. Even though we don't have everything fully integrated I think the parts that we did accomplish we are proud of, especially as a proof of concept. ## What we learned Python back-end dependencies are rough. We learned a lot about how to prevent breaking python dependency trees. We experimented with different solutions like poetry and virtualenv, and used both in this project. We found that poetry had poor support for ML libraries like PyTorch/Tensorflow, especially when it came to edge cases. ## What's next for Melodify Maybe one day we will create a plug in with fully integrated features. We know for a fact that LiveAPI supports insertion of MIDI. So we could make it more seamless and interactive for the user and provide more features.
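A tiny sketch of the UDP transport described above: the Max for Live device sends MIDI notes as JSON datagrams to the Python back end, which would forward them to the model and stream suggestions back the same way. The port numbers and message shape are assumptions for illustration, and the model call is stubbed out.

```python
# Sketch of the MIDI-JSON-over-UDP bridge between Ableton and the back end.
import json
import socket

LISTEN_ADDR = ("127.0.0.1", 9000)   # from the Max for Live device (assumed port)
REPLY_ADDR = ("127.0.0.1", 9001)    # back to Ableton (assumed port)

def continue_melody(notes):
    # Stand-in for the Hugging Face model call: echo the phrase up a third.
    return [{**n, "pitch": n["pitch"] + 4} for n in notes]

def serve():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    while True:
        data, _ = sock.recvfrom(65535)
        notes = json.loads(data)  # e.g. [{"pitch": 60, "start": 0.0, "dur": 0.5}, ...]
        suggestion = continue_melody(notes)
        sock.sendto(json.dumps(suggestion).encode(), REPLY_ADDR)

if __name__ == "__main__":
    serve()
```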
## Inspiration
We are students from Vanderbilt University, Nashville, TN. **Nashville, aka Music City,** is also one of the most dangerous cities in the US: it ranked #15 last year for its crime rate, with a violent crime rate of 1,101 per 100,000 people. Ten days ago, we went to a hookah bar very close to Vanderbilt. We did not stay there very long, but two security guards got shot at that bar the very same night. When we looked it up, we found that there had been previous instances of similar incidents in the same area and we had no idea about it. This freaked us out. How could we enjoy the city safely? How could we avoid the unsafe areas? The fact is, a lot of cities in the US are not safe to walk in. How do we solve this problem? As data science students, we wanted to find a data-driven solution that can help people know when they are walking through an area with violent crime. This year, our theme is connecting the dots. When we think about **connecting the dots**, daily navigation comes to mind. Is it possible to use crime event data to build a navigation application and help people know if they are about to walk through a particularly unsafe area?
## What it does
This application shows you areas with violent crime and helps you navigate around them. It can also help you know if you are in a particularly crime-heavy area. A lot of useful information, such as Vanderbilt University Police Department contact info and the nearest hospital to your location, is provided as well.
## How we built it
1. We created data pipelines using Python to extract Nashville open crime incident data, clean the data, and add features.
2. We use the Google Maps API for the navigation part and serve it through Flask.
3. We use Flutter to develop the mobile application.
## Challenges we ran into
1. No experience in phone app development.
2. No experience with the Google Maps API.
## Accomplishments that I'm proud of
Finishing the application to a decent extent.
## What I learned
Phone app development and developing pipelines between different components of the software.
## What's next for Walkbuddy
1. Route optimization calculation
2. Live location
3. Location sharing
4. Safety scores of different paths
### References:
[Two Security Guards Got Shot At Nashville Hookah Bar](https://www.wsmv.com/news/two-security-guards-shot-at-nashville-hookah-bar-overnight/article_0150204a-edcf-11e9-bffe-5fa2003f5895.html)
[World's Most Dangerous Cities](https://www.worldatlas.com/articles/most-dangerous-cities-in-the-united-states.html)
## Inspiration Approximately **half** of the United States’ annual violent crimes are reported each year. Much of this discrepancy can be attributed to a widespread fear of calling 911. To provide a safeguarding mechanism for situations precluding conventional means of communication, our project undertakes the development of a sophisticated real-time monitoring and alerting system. Identifying the difficulty of talking aloud in many situations showed us the immense potential that an audiocentric emergency reporting app has to reduce this statistic. This system is predicated upon intricate audio data analysis for the explicit purpose of crime detection and the facilitation of swift emergency response. ## What it does At the heart of this convenient app lies a high-level architectural framework for audiocentric crime detection. The crux of our technical infrastructure is the application of advanced deep learning paradigms, notably the integration of stacking classifiers, which consists of multiple classifiers such as RandomForestClassifier, SGDClassifier, Support Vector Classifier, etc. Augmenting this neural architecture is the strategic deployment of integration of classification models as well as data transformation and dimension reduction, thereby optimizing the intricate classification of audio clips based on the nature of the crime they encapsulate. ## How we built it The backend of the pipeline was written in Python, using Flask and Django as the backbone. The backend is supported by a pipeline of stacking classification models that determines the type of crimes the users ran into by collecting the surrounding ambient sounds that could signal a dangerous environment. For the frontend, we utilized Android Studio to create the interactive user interface of the project, which will enhance the user experience when using the application. The minimalist design for the frontend ensures that the users can spend the least amount of time interacting with our product as it is designed for emergency use, thus requiring minimum initiation steps. The integration of the interactive frontend with a convolutional, yet efficient backend are the characteristics of our product. ## Challenges we ran into *ML Model*: We struggled to find the right parameters for the classification model that resulted in a notable amount of time spent on adjusting minor values to maximize accuracy. *Android Studio*: It was difficult to create a minimal user interface that would recognize a finger holding the record button in loops of 5 seconds. There was some unexpected difficulty in using the “Audio Recorder” constructor of Android Studio that provides live feedback as the audio is being fed in. ## Accomplishments that we're proud of The foundation of our app depends on the pipeline developed to classify types of violent crimes. Though the accuracy of the model could be improved on in the future, it was a fundamental step towards knowing that we could bring safety many steps closer to individuals in such dangerous situations. ## What we learned Among others, we learned that it’s important to achieve the overall goal first - regardless of how well it was reached. We struggled to maximize our accuracy with the pipeline - having switched from CNN to our original method largely due to a lack of computation power - but had to realize that it is better to walk away with something over nothing. 
## What's next for SilentSignal *Shorter timeframe and location*: In the future, it would be greatly beneficial to shorten the required 5-second timeframe so that we can identify and report situations where even that is too long. Additionally, it is crucial to know where the user is located, since we will be contacting local officials, so this is also an area of interest.
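For readers curious what the classification backbone could look like, here is a minimal scikit-learn sketch of the stacking pipeline described in "How we built it": several base classifiers combined by a meta-learner, preceded by scaling and dimension reduction. The feature matrix X and labels y are assumed to come from the audio preprocessing step, and the specific estimators and parameters shown are illustrative.

```
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_crime_audio_classifier():
    base_learners = [
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("sgd", SGDClassifier(random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ]
    stack = StackingClassifier(estimators=base_learners,
                               final_estimator=LogisticRegression(max_iter=1000))
    # scale and reduce dimensionality before the ensemble (assumes >= 20 extracted features)
    return make_pipeline(StandardScaler(), PCA(n_components=20), stack)

# usage: clf = build_crime_audio_classifier(); clf.fit(X_train, y_train); clf.predict(X_clip)
```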
## Inspiration As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them! ## What it does Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now! ## How we built it We started out by brainstorming use cases for our app and and discussing the populations we want to target with our app. Next, we discussed the main features of the app that we needed to ensure full functionality to serve these populations. We collectively decided to use Android Studio to build an Android app and use the Google Maps API to have an interactive map display. ## Challenges we ran into Our team had little to no exposure to Android SDK before so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience for us to get working and figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours! ## Accomplishments that we're proud of We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display. ## What we learned As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing! ## What's next for FixIt An Issue’s Perspective \* Progress bar, fancier rating system \* Crowdfunding A Finder’s Perspective \* Filter Issues, badges/incentive system A Fixer’s Perspective \* Filter Issues off scores, Trending Issues
## Motivation We saw firsthand how **long queues** can get at Hack The North and found that we all shared a common interest in solving this issue. ## Approach Our software **generates a digital ticket** for attendees at an event and orders them, **sending a notification** to those who are close to being attended to (users who enter the bubble). ## Languages & Technologies * Python * JavaScript * React.js * Flask * Vultr * Domain.com ## Challenges We struggled with the frontend development (*it's so hard to make things look nice!*) as well as with combining the frontend and backend. ## Solution We are proud that we managed to work together to solve all potential problems by discussing and debating possible workarounds. ## What we learned We all got better with React.js and considered options for a machine learning algorithm to estimate wait times. ## Future We are considering pursuing this project further past this hackathon in the following areas: * Enhanced database * Security considerations (we don't want hackers to cut the line!) * Crash recovery ("What if the app crashes?"): unique user IDs
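A tiny Python sketch of the queue-and-bubble idea from the Approach section above; the bubble size and the notify callback are placeholders rather than the project's real values.

```
from collections import deque

BUBBLE_SIZE = 5  # how many people from the front count as "almost there"

class DigitalQueue:
    def __init__(self):
        self.tickets = deque()
        self.next_number = 1

    def join(self, user_id):
        ticket = {"user": user_id, "number": self.next_number}
        self.next_number += 1
        self.tickets.append(ticket)
        return ticket

    def serve_next(self, notify):
        """Pop the front ticket and notify everyone now inside the bubble."""
        served = self.tickets.popleft() if self.tickets else None
        for position, ticket in enumerate(self.tickets):
            if position < BUBBLE_SIZE:
                notify(ticket["user"], position + 1)  # e.g. trigger a push notification
        return served

# usage: q = DigitalQueue(); q.join("alice"); q.serve_next(lambda user, pos: print(user, pos))
```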
## Inspiration First thing I think about when I think about software is removing redundancy / tedious work on humans and transferring that work to computers. Yet during Covid we have needed to fill out more forms than ever. Some universities require daily forms, covid test scheduling, and other overhead that adds up over time. We want to offer a solution to remove this burden. ## What it does App that acts as a hub for all things Covid-related. Each row is a possible task the user can start from the comfort of their phone in just one button click. For now, the only feature present is to fill out the form for the daily check. ## How I built it I developed the Android app using the flutter framework and the REST server using flask and selenium with my WiFi network acting as the Local Area Network (LAN) allowing the app and server to communicate with one another. Additionally I stored most of the user's information in Firestore and had my server fetch that information as needed. (email and password stored as environmental variables) ## Challenges I ran into I initially wanted the UI to be a physical button so I spent a significant amount of time trying to scavenge my hardware setup (nodeMCU) to see if I can get that up and running and have that send REST requests to the server but ran into issues there. I also attempted to containerize my server so I could deploy it conveniently in GCP with little to no configuration. I wasn't able to get this to work either and in hindsight dealing with the initial configuration work to get it to work on a virtual machine would have been way more feasible for this hackathon. Lastly, Cayuga's health system form for scheduling a Covid test is much harder than expected to interact with to automate the form submission. Made some progress on this functionality but it cannot submit the entire form yet. ## Accomplishments that I'm proud of Assuming the server is online, I can swiftly press a single button (after opening the app) to fill out a form that would normally require 2 minutes of work daily. Was this worth 16 hours of work? Well assuming best conditions I would equalize in time saved in 480 days but this is my last year of college haha. But I got to learn a lot about selenium and I can leave this hackathon with a product I can keep working on for my own personal benefit. ## What I learned Learned more about Selenium during this weekend and Firestore. I also became more proficient in Flutter development. ## What's next for If No Corona Then That The App could definitely be more smoothened out, especially an onboarding page to allow the user to provide their information to the app (so they're data doesn't need to be written to firebase manually), more features developed by me, and allowing the user to provide their own backend to automate services so they can add functionality without having to reflash the app. My server side logic could be improved so it can handle dealing with tougher forms to fill out, be deployed on the cloud (ex. GCP) so it can be accessible outside of my home network, require authentication for API requests, and managing users' credentials on the server side in a more scalable but safe way.
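To illustrate the server side described above, here is a hedged sketch of a Flask route that kicks off a Selenium form fill. The form URL, element IDs, and environment variable names are made up for illustration; the real daily-check form differs.

```
import os
from flask import Flask, jsonify
from selenium import webdriver
from selenium.webdriver.common.by import By

app = Flask(__name__)
DAILY_CHECK_URL = "https://example.edu/daily-check"  # placeholder URL

@app.route("/daily-check", methods=["POST"])
def daily_check():
    driver = webdriver.Chrome()
    try:
        driver.get(DAILY_CHECK_URL)
        # element IDs below are illustrative, not the real form's
        driver.find_element(By.ID, "email").send_keys(os.environ["CHECK_EMAIL"])
        driver.find_element(By.ID, "password").send_keys(os.environ["CHECK_PASSWORD"])
        driver.find_element(By.ID, "no-symptoms").click()
        driver.find_element(By.ID, "submit").click()
        return jsonify({"status": "submitted"})
    finally:
        driver.quit()
```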
## Inspiration The impact of COVID-19 has had lasting effects on the way we interact and socialize with each other. Even when engulfed by bustling crowds and crowded classrooms, it can be hard to find our friends and the comfort of not being alone. Too many times have we grabbed lunch, coffee, or boba alone only to find out later that there was someone who was right next to us! Inspired by our undevised use of Apple's FindMy feature, we wanted to create a cross-device platform that's actually designed for promoting interaction and social health! ## What it does Bump! is a geolocation-based social networking platform that encourages and streamlines day-to-day interactions. **The Map** On the home map, you can see all your friends around you! By tapping on their icon, you can message them or even better, Bump! them. If texting is like talking, you can think of a Bump! as a friendly wave. Just a friendly Bump! to let your friends know that you're there! Your bestie cramming for a midterm at Mofitt? Bump! them for good luck! Your roommate in the classroom above you? Bump! them to help them stay awake! Your crush waiting in line for a boba? Make that two bobas! Bump! them. **Built-in Chat** Of course, Bump! comes with a built-in messaging chat feature! **Add Your Friends** Add your friends to allow them to see your location! Your unique settings and friends list are tied to the account that you register and log in with. ## How we built it Using React Native and JavaScript, Bump! is built for both IOS and Android. For the Backend, we used MongoDB and Node.js. The project consisted of four major and distinct components. **Geolocation Map** For our geolocation map, we used the expo's geolocation library, which allowed us to cross-match the positional data of all the user's friends. **User Authentication** The user authentication proceeds was built using additional packages such as Passport.js, Jotai, and Bcrypt.js. Essentially, we wanted to store new users through registration and verify old users through login by searching them up in MongoDB, hashing and salting their password for registration using Bcrypt.js, and comparing their password hash to the existing hash in the database for login. We also used Passport.js to create Json Web Tokens, and Jotai to store user ID data globally in the front end. **Routing and Web Sockets** To keep track of user location data, friend lists, conversation logs, and notifications, we used MongoDB as our database and a node.js backend to save and access data from the database. While this worked for the majority of our use cases, using HTTP protocols for instant messaging proved to be too slow and clunky so we made the design choice to include WebSockets for client-client communication. Our architecture involved using the server as a WebSocket host that would receive all client communication but would filter messages so they would only be delivered to the intended recipient. **Navigation and User Interface**: For our UI, we wanted to focus on simplicity, cleanliness, and neutral aesthetics. After all, we felt that the Bump! the experience was really about the time spent with friends rather than on the app, so designed the UX such that Bump! is really easy to use. ## Challenges we ran into To begin, package management and setup were fairly challenging. Since we've never done mobile development before, having to learn how to debug, structure, and develop our code was definitely tedious. 
In our project, we initially programmed our frontend and backend completely separately; integrating them both and working out the moving parts was really difficult and required everyone to teach each other how their part worked. When building the instant messaging feature, we ran into several design hurdles; HTTP requests are only half-duplex, as they are designed with client initiation in mind. Thus, there is no elegant method for server-initiated client communication. Another challenge was that the server needed to act as the host for all WebSocket communication, resulting in the need to selectively filter and send received messages. ## Accomplishments that we're proud of We're particularly proud of Bump! because we came in with limited or no mobile app development experience (in fact, this was the first hackathon for half the team). This project was definitely a huge learning experience for us; not only did we have to grind through tutorials, youtube videos, and a Stack Overflowing of tears, we also had to learn how to efficiently work together as a team. Moreover, we're also proud that we were not only able to build something that would make a positive impact in theory but a platform that we see ourselves actually using on a day-to-day basis. Lastly, despite setbacks and complications, we're super happy that we developed an end product that resembled our initial design. ## What we learned In this project, we really had an opportunity to dive headfirst in mobile app development; specifically, learning all about React Native, JavaScript, and the unique challenges of implementing backend on mobile devices. We also learned how to delegate tasks more efficiently, and we also learned to give some big respect to front-end engineers! ## What's next for Bump! **Deployment!** We definitely plan on using the app with our extended friends, so the biggest next step for Bump! is polishing the rough edges and getting it on App Stores. To get Bump! production-ready, we're going to robustify the backend, as well as clean up the frontend for a smoother look. **More Features!** We also want to add some more functionality to Bump! Here are some of the ideas we had, let us know if there's any we missed! * Adding friends with QR-code scanning * Bump! leaderboards * Status updates * Siri! "Hey Siri, bump Emma!"
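Bump!'s server is written in Node.js, but the recipient-filtering WebSocket pattern described above is easy to sketch in Python with the third-party websockets library; the message fields and port are assumptions for illustration.

```
import asyncio
import json
import websockets  # pip install websockets

CONNECTED = {}  # user_id -> websocket connection

async def handler(ws):  # some versions of the library also pass a path argument
    user_id = None
    try:
        async for raw in ws:
            msg = json.loads(raw)
            if msg["type"] == "register":
                user_id = msg["user"]
                CONNECTED[user_id] = ws
            elif msg["type"] in ("bump", "chat"):
                target = CONNECTED.get(msg["to"])
                if target is not None:  # deliver only to the intended recipient
                    await target.send(json.dumps({"type": msg["type"],
                                                  "from": user_id,
                                                  "body": msg.get("body", "")}))
    finally:
        if user_id is not None:
            CONNECTED.pop(user_id, None)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```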
## Inspiration I love playing the guitar, and I thought it would be interesting to have a teleprompter for guitar sheet music, as it could double as a metronome and a replacement for standard paper sheet music. Guitar sheet music, or *tab*, is much simpler than standard sheet music: It's presented in a somewhat unstandardized text based format, with six lines of hyphens representing strings on a guitar, and integers along the lines to indicate the placement of the notes (on the guitar's frets). The horizontal position of the notes in a bar roughly indicates the note's timing, though there is no universal way to specify a note's length. ``` Q=120, 3/4 time G G/B C G/B e|--------------------|-------------------|-0----------0--2---|-3-----------------| B|--3----------0--1---|-3-----------------|------1--3---------|-------------------| G|o------0--2---------|-------0-----0-----|-------------------|-------0-----0-----| D|o-------------------|-------------------|-------------------|-------------------| A|-------------0------|-2-----------------|-3-----------------|-2-----------------| E|--3-----------------|-------------------|-------------------|-------------------| Am G D G/B G D7 e|--------------------|-------------------|-------------------|-------------------| B|--1----3--1--0------|-0----1--0---------|------------0------|-------------------| G|----------------2---|------------2--0---|------0--2-----0---|-2-----------------| D|--------------------|-------------------|-4-----------------|-0----0------------| A|--0-----------------|-------------------|-5----2------------|---------3--2--0---| E|--------------------|-3-----------------|------------3------|-------------------| ``` For example, here are the first 8 measures of Bach's Minuet in G major. [source: Ultimate Guitar](https://tabs.ultimate-guitar.com/tab/christian-petzold/minuet-in-g-major-tabs-189404) Many popular tab sites have an autoscrolling feature, which performs a similar music-teleprompter-like role, suggesting a need for something like TabRunner. ## What it does I built: 1. React-based tab parser to take in tab like the one above and extract out the note data into a machine instruction inspired encoding. 2. An Arduino-powered hardware device that takes in the encoded data and renders it onto an 128x64 OLED screen, one measure at a time, with buttons that control the tempo (the delay between measures). ## How we built it 1. Used the [ES8266 OLED SSD1306](https://github.com/ThingPulse/esp8266-oled-ssd1306) library to render lines (representing guitar strings) and circles with numbers in them (representing notes). 2. Built a small application with ReactJS for frontend and minimal styling that has a textbox for notes. ![Webapp Screenshot](https://i.ibb.co/JnTpFYN/tabrunner.png). 3. Made up a way to represent "Note" objects, and ran the tab, which was just a standard, ASCII string, through a gauntlet of string manipulation functions to ultimately build up an array of Note data objects. 4. Developed an encoding to represent note objects. Inspired by machine instructions, I took the note data, which was a collection of ints/floats, and converted it into a single 12 bit binary value. ``` 0b0SSS_HHHH_FFFF ``` S are string bits, representing one of the six strings (ie. 001 = high E, 010 = B, etc) H are horizontal position/timing bits - stored as a value to be normalized from 0 to 15, with 0 representing the left end of a measure and 15 representing the right end. F are the fret bits. 
There are 19 frets on a standard guitar but I've never seen anything above 15 used, nor are the higher frets particularly playable. 5. Used bit manipulation to parse out the note data from the integer encoding in the Arduino, then used that information to generate X/Y/Bar coordinates in which to render the notes. 6. Added utility functions and wired up some buttons to control the speed, as well as left and right solid bar indicators to show when the start/end of a song is, since it loops through all the bars of a song, infinitely. ## Challenges we ran into * Processing the string data using elementary string functions was quite difficult, requiring a 4 dimensional array and a lot of scribbling odd shapes on paper * Interfacing the Javascript objects into Arduino readable (ie. C++) code was harder than expected. I thought I could just use the standard JSON (Javascript Object Notation) string encode/decode functions but that produced a lot of issues, so I had to come up with that custom integer encoding to flatten the objects into integers, then a few bit manipulations to decode them. ## Accomplishments that we're proud of * Building this in time! * Coming up with an elegant encoding for notes * I think some of the tab parsing is somewhat clever, if messy ## What we learned * A lot about how/why to use bit manipulation * How to wire buttons with an Arduino * How to create a nice control loop with good feedback * How to deal with multidimensional arrays (harder than expected!) in Arduino code ## What's next for TabRunner * Extending this for Ukelele tab, with supporting 4 strings instead of 6 * Adding more complex tab notation like hammer on/pull off parsing * More buttons to reset/fast forward * Rotary encoder for tempo setting rather than buttons * Larger screen/better assembly to mount easily to a guitar, improved visibility
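Since the 0b0SSS_HHHH_FFFF encoding is the heart of the project, here is the same bit layout expressed as a small Python sketch; the actual project encodes in JavaScript and decodes in Arduino C++, so treat this as an illustration of the format rather than TabRunner's code. String numbering follows the 1 = high E convention from the write-up.

```
def encode_note(string_index, position, fret):
    """Pack a note into the 0b0SSS_HHHH_FFFF layout."""
    assert 1 <= string_index <= 6 and 0 <= position <= 15 and 0 <= fret <= 15
    return (string_index << 8) | (position << 4) | fret

def decode_note(encoded):
    """Unpack the 12-bit value back into (string, position, fret)."""
    return (encoded >> 8) & 0b0111, (encoded >> 4) & 0b1111, encoded & 0b1111

# e.g. a 3rd-fret note on string 6 at the left edge of a measure:
packed = encode_note(string_index=6, position=0, fret=3)
assert decode_note(packed) == (6, 0, 3)
```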
## Inspiration Our inspiration comes from the desire to democratize music creation. We saw a gap where many people, especially youth, had a passion for making music but lacked the resources, knowledge, or time to learn traditional instruments and complex DAW software. We wanted to create a solution that makes music creation accessible and enjoyable for everyone, allowing them to express their musical ideas quickly and easily. ## What it does IHUM is an AI application that converts artists' vocals or hums into MIDI instruments. It simplifies the music creation process by allowing users to record their vocal melodies and transform them into instrumental tracks with preset instruments. This enables users to create complex musical pieces without needing to know how to play an instrument or use a DAW. ## How we built it For the frontend, we used HTML, CSS and React JS to develop IHUM. Using React JS and its libraries such as Pitchy, we were able to process, change, and output the sound waves of audio inputs. For the backend, we used Auth0's API to create a login/signup system, which stores and verifies user emails and passwords. ## Challenges we ran into One of the main challenges we faced was ensuring the AI's accuracy in interpreting vocal inputs and converting them into MIDI data that sounds natural and musical. Furthermore, the instruments that we converted had a ton of issues in how it sounds, especially regarding pitch, tone, etc. However, we were able to pull and troubleshoot through our way through most of them. ## Accomplishments that we're proud of Through all the hours of hard work and effort, an accomplishment we are extremely proud of is the fact that our program is able to process the audio. By using Pitchy JS, we were able to change the audio to fit how we want it to sound. On top of this, we are also proud that we were able to implement a fully working login/signup system using Auth0's API and integrate it within the program. ## What we learned As this was our first time working with audio in web development and many of our group's first hackathons, we faced many issues that we had to overcome and learn from. From processing to setting up APIs to modifying the sound waves, it definitely provided us valuable insight and expanded our skillsets. ## What's next for IHUM Our future updates allow running multiple audio samples simultaneously and increasing our instrument libraries. By doing so, IHUM can essentially be a simple and easy-to-use DAW which permits the user to create entire beats out of their voice on our web application.
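The core conversion behind any hum-to-MIDI tool is mapping a detected pitch in Hz (e.g. from Pitchy) to the nearest MIDI note number. IHUM itself is written in JavaScript, so the Python below is only a sketch of the math involved.

```
import math

def frequency_to_midi(freq_hz):
    """Map a frequency in Hz to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_note_name(midi_number):
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[midi_number % 12]}{midi_number // 12 - 1}"

# a hummed pitch around 261.6 Hz should land on middle C (MIDI 60)
assert frequency_to_midi(261.63) == 60
print(midi_to_note_name(60))  # C4
```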
## Inspiration At the age of 10, three of us had found our true passions in life... **Playing trumpet.** However, when learning how to play, we had to endure the immense toil of arduously and meticulously penciling in fingerings for notes. Back in our day, it was **impossible** to be a beginner and learn to play trumpet quickly. And that is why, we decided once and for all, at HACKMIT2017, that sheet music with no fingerings must be stopped. ## What it does The snappily named Sheet Music Helper for Beginner Trumpeters takes in a pdf file of a piece of music and quickly generates for the user the same music but with the fingerings for each note written below it. ## How we built it We used the Mac OS Automator and a program called PDFtoMusic Pro to transform the PDF into a MusicXML file. That file was then inputted into a python program which uses the music21 library and lilypad to reconstruct the visual version of the piece, analyzing each note to determine its fingering and include it in the new PDF. ## Challenges we ran into So many... We originally were doing a completely different project with the Microsoft Hololens: an augmented reality fidget spinner. Severe issues even setting up the device and not having access to it at night forced us to scrap it. Afterwards, when trying to find APIs or SDKs to convert PDFs to MusicXML or MIDI files, we realized that there were none available for free or even for a reasonable price. This meant we were forced to use a GUI program (and a free trial at that) to try to make this project work. This is why we had to use the Mac OS automator to at least make the project mostly automatic. However, the program is pretty outdated and the dialog boxes were not cooperating with the Automator. So at one point we did have to manually click through one of the dialog boxes to make the program even work. Also because of this, we didn't get the chance to set up a web app which would've made the project cooler. ## Accomplishments that we're proud of The fact that we got to play around with the Hololens- even though we couldn't really do anything it was really cool to see it in action. The fact that Andrew stayed up for 27 hours straight (and Francisco only napped for 20 minutes). The fact that we didn't give up and at least produced something ## What we learned Programming in Unity is really hard Hololens is **really** cool Converting from PDF to MusicXML is not a simple task ## What's next for Sheet Music Helper for Beginner Trumpeters If we were to continue this project, here would be some of our next steps: Support for other instruments like flute, saxophone, french horn, trombone, etc. Developing a web app Actually getting an API to do the conversion Using our own algorithm to convert from PDF to MusicXML
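For the curious, the note-annotation step can be sketched in a few lines of music21. The fingering table below covers only a handful of written pitches and is illustrative, not a complete trumpet chart, and the file paths are placeholders.

```
from music21 import converter

# partial B-flat trumpet fingering chart: written pitch -> valve combination (illustrative)
FINGERINGS = {"C4": "0", "D4": "1-3", "E4": "1-2", "F4": "1",
              "G4": "0", "A4": "1-2", "B4": "2", "C5": "0"}

def annotate_fingerings(musicxml_path, out_path):
    score = converter.parse(musicxml_path)
    for n in score.recurse().notes:
        if n.isNote:  # skip chords in this simple sketch
            n.addLyric(FINGERINGS.get(n.nameWithOctave, "?"))  # fingering shown under the note
    score.write("musicxml", fp=out_path)

# usage: annotate_fingerings("minuet.xml", "minuet_with_fingerings.xml")
```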
## Inspiration **Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness. ## Problem Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data from which past candidates received interviews. However, what if in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in. Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting. ## What is fairness? There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) for unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicates that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group. ## What our app does **jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness. ### Reweighing Algorithm If the data is unbiased, we would think that the probability of being accepted and the probability of being a woman would be independent (so the product of the two probabilities). By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight (for the woman + accepted category) as expected/actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1). Otherwise, they are given a lower weight. This formula is applied for the other 3 out of 4 combinations of gender x acceptance. Then the reweighed sample is used for training. ## How we built it We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. 
We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem on prediction time, classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier. ## Challenges we ran into Training and choosing models with appropriate fairness constraints. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric. ## Accomplishments that we're proud of We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her. ## What we learned Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts. ## What's next for jobFAIR Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
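The reweighing step reduces to a few lines of pandas; this sketch computes the weights from scratch to show the idea, whereas jobFAIR itself uses AIF360's implementation, and the column names here are assumptions.

```
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(df, group_col="is_female", label_col="accepted"):
    """Weight each row by expected / observed probability of its (group, label) cell."""
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    def weight(row):
        g, y = row[group_col], row[label_col]
        expected = p_group[g] * p_label[y]  # what independence would predict
        return expected / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# usage sketch: under-represented (group, label) cells get weights above 1
# w = reweighing_weights(candidates)
# LogisticRegression(max_iter=1000).fit(X, candidates["accepted"], sample_weight=w)
```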
## Inspiration Every year roughly 25% of recyclable material is not able to be recycled due to contamination. We set out to reduce the amount of things that are needlessly sent to the landfill by reducing how much people put the wrong things into recycling bins (i.e. no coffee cups). ## What it does This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically does it. To do this it follows the following process: 1. Waits for object to be placed on lid 2. Take picture of object using webcam 3. Does image processing to normalize image 4. Sends image to Tensorflow model 5. Model predicts material type and confidence ratings 6. If material isn't recyclable, it sends a *YEET* signal and if it is it sends a *drop* signal to the Arduino 7. Arduino performs the motion sent to it it (aka. slaps it *Happy Gilmore* style or drops it) 8. System resets and waits to run again ## How we built it We used an Arduino Uno with an Ultrasonic sensor to detect the proximity of an object, and once it meets the threshold, the Arduino sends information to the pre-trained TensorFlow ML Model to detect whether the object is recyclable or not. Once the processing is complete, information is sent from the Python script to the Arduino to determine whether to yeet or drop the object in the recycling bin. ## Challenges we ran into A main challenge we ran into was integrating both the individual hardware and software components together, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we debugged a lot in terms of the servo not working and many issues when working with the ML model. ## Accomplishments that we're proud of We are proud of successfully integrating both software and hardware components together to create a whole project. Additionally, it was all of our first times experimenting with new technology such as TensorFlow/Machine Learning, and working with an Arduino. ## What we learned * TensorFlow * Arduino Development * Jupyter * Debugging ## What's next for Happy RecycleMore Currently the model tries to predict everything in the picture which leads to inaccuracies since it detects things in the backgrounds like people's clothes which aren't recyclable causing it to yeet the object when it should drop it. To fix this we'd like to only use the object in the centre of the image in the prediction model or reorient the camera to not be able to see anything else.
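A rough sketch of the Python glue between the webcam, the TensorFlow model, and the Arduino; the serial port, command bytes, label list, and input size are assumptions, not the project's exact protocol.

```
import cv2                      # webcam capture
import numpy as np
import serial                   # pip install pyserial
import tensorflow as tf

LABELS = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]  # assumed order
RECYCLABLE = {"cardboard", "glass", "metal", "paper", "plastic"}

def classify_and_actuate(model_path="recycle_model.h5", port="/dev/ttyACM0"):
    model = tf.keras.models.load_model(model_path)
    arduino = serial.Serial(port, 9600, timeout=1)
    camera = cv2.VideoCapture(0)
    ok, frame = camera.read()
    if not ok:
        raise RuntimeError("webcam capture failed")
    img = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0  # assumed model input
    probs = model.predict(img[np.newaxis, ...])[0]
    label = LABELS[int(np.argmax(probs))]
    arduino.write(b"D" if label in RECYCLABLE else b"Y")  # D = drop, Y = yeet
    camera.release()
    arduino.close()
```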
## Inspiration Our inspiration is the multiple travelling salesman problem, in which multiple salesmen are required to visit a given number of destinations once and return while minimizing travel cost. This problem occurs in many subjects but is prominent in logistics. It is essential that goods and services are delivered to customers at the the right time, at the right place and at a low cost, with the use of multiple assets. Solving this problem would result in improved quality of service to customers and cost reductions to service providers. ## What it does Our project generates delivery routes for a given number of mail trucks. We receive a list of all recipients along with their physical locations. These recipients are assigned to trucks based on their locations. The sequence of delivery for each truck is then optimized to minimize each of their travel times, factoring in road conditions such as traffic and weather. ## How we built it Our project generates delivery routes by simplifying the multiple travelling salesman problem into multiple instances of the single travelling salesman problem. 1 - Data Preperation. The list of addresses is converted into latitude and longitude GPS coordinates. The travel times between locations is acquired for and to each other location using the google maps API. 2 - Assignment of recipients based on physical location. We use KMeans clustering to group recipients based on their proximity to each other. Each truck has their own set of recipients. This step simplifies the problem from multiple travelling sales to multiple instances of single travelling salesman, for each truck. 3 - Individual route sequencing. We use the metropolis algorithm to optimize the sequence of recipients for each truck. For each truck, an initial random route is generated, followed by subsequent trial routes. The difference in route lengths are computed and the trial route is either accepted or rejected based on optimization parameters. 4 - Visualization. We use folium to visualize the locations of recipients, grouping and assignment of recipients and finally the route calculated for each truck. ## Challenges we ran into Our two main challenges were: 1 - Our algorithm has inherent limitations that can make it unwieldy. It does converge onto optimal routes but it can be time consuming and computationally intensive 2 - Our algorithm relies on the assumption that going from point A to point B has the same travel time as from point B to point A. This is true if the path was a straight line and both ways have the same speed. However, in reality, going from point A to point B may take longer than the other way, such as different roads and speed limits. To overcome this limitation, we used the greater time requirement for our algorithm. ## Accomplishments that we're proud of We put together a working solution for the multiple travelling salesman problem that can be applied on real world logistics problems. This problem is considered NP hard; as it is a generalization of the single travelling salesman which is also NP hard because the number of viable routes grows exponentially as more destinations are added. Overall, implementing a solution for this problem, to the level of a minimum viable product under tight time constraints was a non-trivial task. ## What we learned We improved our knowledge on optimization for complex problems, as well as integrating mathematical solutions with APIs and visualization tools. 
## What's next for ShipPy ShipPy may be in a usable state but it can use more work. The individual route sequencing can be computationally intensive so that represents an area of improvement. For real time data, we used google maps API for travel time estimate, but this is all done prior to departure of the mail trucks. It may be possible to implement a solution that can adjust mail trucks' routes in real time should unforeseeable changes in road conditions occur.
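A condensed sketch of steps 2 and 3 from "How we built it": cluster recipients with KMeans, then improve one truck's visiting order with Metropolis-style swap moves over a (symmetric) travel-time matrix, per the assumption noted above. The iteration count and temperature are illustrative.

```
import numpy as np
from sklearn.cluster import KMeans

def assign_trucks(coords, n_trucks):
    """coords: (n, 2) array of lat/lon; returns a truck label per recipient."""
    return KMeans(n_clusters=n_trucks, n_init=10, random_state=0).fit_predict(coords)

def metropolis_route(times, n_iter=20000, temperature=1.0, seed=0):
    """times: symmetric numpy matrix of travel times between one truck's stops."""
    rng = np.random.default_rng(seed)
    n = len(times)
    route = rng.permutation(n)

    def tour_length(r):
        return sum(times[r[i], r[(i + 1) % n]] for i in range(n))

    current_len = tour_length(route)
    for _ in range(n_iter):
        i, j = rng.integers(n, size=2)
        trial = route.copy()
        trial[i], trial[j] = trial[j], trial[i]
        delta = tour_length(trial) - current_len
        # always accept improvements; accept worse routes with Boltzmann probability
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            route, current_len = trial, current_len + delta
    return route, current_len
```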
## Inspiration We met at the event, on the spot. We are a diverse team from different parts of the world and different ages, with up to 7 years between us! We believe there is an important shift towards using data technology to make investment decisions, and towards data applications that enable new financial products and services we have not even considered yet. ## What it does Trendy analyzes over 300,000 projects from Indiegogo, a crowd-funding website, over the last year. Trendy monitors and evaluates on average 20+ data points per company; we focus on 6 main variables and illustrate the use case with statistical models. To keep the interface as user-friendly as possible while still surfacing as much information as possible, we decided to build a chatbot through which the investor interacts with our platform. The user can see graphs and trend analyses, and adjust their preferences accordingly. ## Challenges we ran into We had a lot of trouble setting up the cloud to host everything. We also struggled to build the bot, due to the many restrictions Facebook has set. These challenges kept us from innovating more on our product. ## Accomplishments that we're proud of We are very proud of our sharp data analysis and clean interface. Our results are logical, and we believe we have one of the best interfaces. ## What we learned We learned a lot about cloud hosting, data management, and chatbot setup. More concretely, we have built ourselves a great platform to support our own financial planning! ## What's next for Trendy We foresee adding a few predictive analytics concepts to our trend-hacking platform, like random forests, the Kelly criterion, and a couple of others. We also plan to improve our database and the accuracy of our analysis by implementing some machine learning models.
# stockbro.ai 🏆 Winner of the Best Newbie Hack at McHacks 10. 🎉 Project built for McHacks 10 by Thabnir, clarariachi, and Rain1618 ## Inspiration Money. Stonks. We like them. Originally inspired by [a research paper](https://doi.org/10.1038/srep01684) which found that the search volume of certain words associated with the economy (e.g. debt) could be indicative of the general direction of the stock market and predict future stock movements. We wanted to extend this idea and see if we could find other keywords that may be indicative of the future of the stock market. As data can be sometimes tedious to understand, we also built a bot that tells the user in no uncertain terms whether of not there is a correlation between the keyword and the stock (and occasionally *questions* the thought process behind the user's selection). ## What it does stockbro.ai attempts to predict the stock market by finding obscure correlations between a stock's prices and Google Trends over time by calculating the Pearson correlation coefficient. Spoiler: it doesn't work. It knows it doesn't work. Whether stockbro.ai cares enough to tell you that it doesn't know what's going on or simply dons a cool professional guise as it shepherds you to financial ruin is up to chance. ## How we built it We made a Flask app and used html + Bokeh to plot graphs showing correlations between a Google search's popularity over time and the associated stock's price. A variety of data science libraries like pandas along with yfinance and pytrends were used to retrieve and process the necessary data from Yahoo Finance and Google Trends respectively. ## Challenges we ran into We first made our chatbot using Cohere but had to redo it by training it with OpenAI to be able to implement it into the project with a more expansive training set. The dataset used to train the robot had to be manually created on the spot as there are no preexisting dataset that quite fit our criteria of condescending, sarcastic and cynically humorous. Lastly, for our time-series prediction model, we were unable to obtain enough data to extract the features necessary to implement a proper Multi layer perceptron (MLP) algorithm and thus didn't feel comfortable using a model that would potentially generate innacurate information. Instead, we used an Exponential Moving Average (EMA) to help filter out the noise and daily fluctuations to obtain the long term trends. Moving averages have been traditionally used to analyze stock prices over en extended period of time and the EMA provides the added advantage of placing more weight on recent values. For any graph with a Pearson correlation above 0.5, we found that the EMA was reasonably reliable in predicting the general direction using our test data. ## Accomplishments that we're proud of We're proud of being able to make a working project in under 24 hours by learning as we go, especially as none of us had extensive previous experience with any of the tools we used. We also found the robot very funny (unsurprising, given it was trained on insults *we* wrote). ## What we learned We had to read up on and implement a variety of data science concepts in order to process large amounts of data and create interactive visualizations of said data. Additionally, we learned how to use Flask, pandas, pytrends, yfinance, bokeh, NLP, basic HTML... and trained a little bot to say bad things! Most importantly, we learned that Stackoverflow takes you far in life. 
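Here is a condensed sketch of the correlation-and-EMA core; the trends data is taken as an already-downloaded weekly pandas Series (the real app pulls it with pytrends), and the ticker and span are placeholders.

```
import pandas as pd
import yfinance as yf  # pip install yfinance

def keyword_vs_stock(trend_series, ticker="GME", span=20):
    """trend_series: weekly Google Trends interest for one keyword, indexed by date."""
    prices = yf.download(ticker, start=trend_series.index.min(),
                         end=trend_series.index.max())["Close"].squeeze()
    # put prices on the same weekly grid as the trends data before comparing
    weekly = prices.resample("W").last().reindex(trend_series.index, method="nearest")
    pearson_r = weekly.corr(trend_series)             # Pearson correlation by default
    ema = weekly.ewm(span=span, adjust=False).mean()  # smooths out daily fluctuations
    return pearson_r, ema
```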
## What's next for Stock Bro AI I'll let *it* tell you what's next: it won't stop making its "predictions"! More seriously, we will work on implementing a proper machine learning model (perhaps an MLP Classifier) as inspired by the literature and see if it is able to predict the up and downturns of the stock market more closely. If the urge strikes, we could also improve upon the aesthetics of the website. ## References Preis, T., Moat, H. & Stanley, H. Quantifying Trading Behavior in Financial Markets Using Google Trends. Sci Rep 3, 1684 (2013). [DOI Link](https://doi.org/10.1038/srep01684)
## Inspiration Companies lack insight into their users, audiences, and marketing funnel. This is an issue I've run into on many separate occasions. Specifically, * while doing cold marketing outbound, need better insight onto key variables of successful outreach * while writing a blog, I have no idea who reads it * while triaging inbound, which users do I prioritize Given a list of user emails, Cognito scrapes the internet finding public information about users and the companies they work at. With this corpus of unstructured data, Cognito allows you to extract any relevant piece of information across users. An unordered collection of text and images becomes structured data relevant to you. ## A Few Example Use Cases * Startups going to market need to identify where their power users are and their defining attributes. We allow them to ask questions about their users, helping them define their niche and better focus outbound marketing. * SaaS platforms such as Modal have trouble with abuse. They want to ensure people joining are not going to abuse it. We provide more data points to make better judgments such as taking into account how senior of a developer a user is and the types of companies they used to work at. * VCs such as YC have emails from a bunch of prospective founders and highly talented individuals. Cognito would allow them to ask key questions such as what companies are people flocking to work at and who are the highest potential people in my network. * Content creators such as authors on Substack looking to monetize their work have a much more compelling case when coming to advertisers with a good grasp on who their audience is. ## What it does Given a list of user emails, we crawl the web, gather a corpus of relevant text data, and allow companies/creators/influencers/marketers to ask any question about their users/audience. We store these data points and allow for advanced querying in natural language. [video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0) ## How we built it we orchestrated 3 ML models across 7 different tasks in 30 hours * search results person info extraction * custom field generation from scraped data * company website details extraction * facial recognition for age and gender * NoSQL query generation from natural language * crunchbase company summary extraction * email extraction This culminated in a full-stack web app with batch processing via async pubsub messaging. Deployed on GCP using Cloud Run, Cloud Functions, Cloud Storage, PubSub, Programmable Search, and Cloud Build. ## What we learned * how to be really creative about scraping * batch processing paradigms * prompt engineering techniques ## What's next for Cognito 1. predictive modeling and classification using scraped data points 2. scrape more data 3. more advanced queries 4. proactive alerts [video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)
## Inspiration Have you ever wondered where to travel, or how to make planning a trip more interesting? Want to make your trips more adventurous? ## What it does Xplore is an **AI-based travel application** that allows you to experience destinations in a whole new way. It keeps your adrenaline pumping by keeping your vacation destinations undisclosed. ## How we built it * Xplore is a fully functional web application built with HTML, CSS, Bootstrap, JavaScript, and SQLite. * Multiple Google Cloud APIs, such as the Geolocation API, Maps JavaScript API, and Directions API, were used for our map functionality. * Web3.Storage was used as a data storage service to store and retrieve data on IPFS and Filecoin. ## Challenges we ran into Integrating the multiple cloud APIs and the API token from Web3.Storage into our project turned out to be a little complex. ## What's next for Xplore * A mobile application for easier access * Multiple language support * Seasonal travel suggestions
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
## Inspiration I dreamed about the day we would use vaccine passports to travel long before the mRNA vaccines even reached clinical trials. I was just another individual, fortunate enough to experience stability during an unstable time, having a home to feel safe in during this scary time. It was only when I started to think deeper about the effects of travel, or rather the lack thereof, that I remembered the children I encountered in Thailand and Myanmar who relied on tourists to earn $1 USD a day from selling handmade bracelets. 1 in 10 jobs are supported by the tourism industry, providing livelihoods for many millions of people in both developing and developed economies. COVID has cost global tourism $1.2 trillion USD and this number will continue to rise the longer people are apprehensive about travelling due to safety concerns. Although this project is far from perfect, it attempts to tackle vaccine passports in a universal manner in hopes of buying us time to mitigate tragic repercussions caused by the pandemic. ## What it does * You can login with your email, and generate a personalised interface with yours and your family’s (or whoever you’re travelling with’s) vaccine data * Universally Generated QR Code after the input of information * To do list prior to travel to increase comfort and organisation * Travel itinerary and calendar synced onto the app * Country-specific COVID related information (quarantine measures, mask mandates etc.) all consolidated in one destination * Tourism section with activities to do in a city ## How we built it Project was built using Google QR-code APIs and Glideapps. ## Challenges we ran into I first proposed this idea to my first team, and it was very well received. I was excited for the project, however little did I know, many individuals would leave for a multitude of reasons. This was not the experience I envisioned when I signed up for my first hackathon as I had minimal knowledge of coding and volunteered to work mostly on the pitching aspect of the project. However, flying solo was incredibly rewarding and to visualise the final project containing all the features I wanted gave me lots of satisfaction. The list of challenges is long, ranging from time-crunching to figuring out how QR code APIs work but in the end, I learned an incredible amount with the help of Google. ## Accomplishments that we're proud of I am proud of the app I produced using Glideapps. Although I was unable to include more intricate features to the app as I had hoped, I believe that the execution was solid and I’m proud of the purpose my application held and conveyed. ## What we learned I learned that a trio of resilience, working hard and working smart will get you to places you never thought you could reach. Challenging yourself and continuing to put one foot in front of the other during the most adverse times will definitely show you what you’re made of and what you’re capable of achieving. This is definitely the first of many Hackathons I hope to attend and I’m thankful for all the technical as well as soft skills I have acquired from this experience. ## What's next for FlightBAE Utilising GeoTab or other geographical softwares to create a logistical approach in solving the distribution of Oyxgen in India as well as other pressing and unaddressed bottlenecks that exist within healthcare. I would also love to pursue a tech-related solution regarding vaccine inequity as it is a current reality for too many.
## Inspiration We were inspired by the website <https://thispersondoesnotexist.com>, deepfakes, and how realistic images produced by GANs (Generative adversarial networks) can be. ## What it does We created a website where people play a game where they are given one image that is real and one image that is produced from a GAN. Can the user detect what's the GAN or not? ## How I built it We built it using React front-end that pulls images from a real image dataset and a GAN-generated dataset. ## Challenges I ran into We were originally going to customize our own GAN, but complications arose so we decided to have a fun and easy application using pre-existing GAN architecture. ## Accomplishments that I'm proud of I am proud of my team members of understanding the concepts of a GAN and putting it into use. ## What I learned An application of GANs! Also, some of us had never used React or done front-end before. ## What's next for Gan Game Who knows where GANs can take us next?
## Inspiration We decided to pursue the CSE Challenge since it appealed to our fun-loving nature as young adults. We felt as though we could create a game that is both informative and extremely fun and attention-grabbing for young children. The CSE challenge provided a perfect opportunity for us to show our passion for programming as well as create a humorous and silly application that we hope puts a smile on the face of its user! ## What it does The web application is an endless tower defence game. Upon game start, the user will see bugs approaching their computer, intending to hurt the poor mainframe. It is the user's job to protect the computer and not allow the bugs to deplete its health bar. In order to get rid of the bugs, the user must click on them (or cycle through them using the TAB key), prompting a multiple choice question testing their knowledge on important cybersecurity topics and answering it correctly. The game is endless, with the speed and spawn rate of the bugs increasing over time to increase difficulty. The game ends when prompted or when the mainframe's health is reduced to 0, as indicated by its health bar. ## How we built it We wanted to include as many basic game design features as we could brainstorm. These included catchy music, visually appealing graphics and visuals, as well as an intuitive user interface. We managed to accomplish this using React as a game engine, an unconventional choice. The web application itself is built using TypeScript and CSS for its front end and NextJS for a bit of the back end. hosted using Cloudfare. ## Challenges we ran into Some challenges we faced were implementing React's useEffect and useState without bugs, as well as creating custom game sprites. ## Accomplishments that we're proud of Some of the accomplishments we're proud of: * The user experience is perfectly tailored to the target audience. * The app is extremely accessible and packed with features to enhance the user experience. * Sprites are custom-made and animated ## What we learned * A further understanding of React applications * Revisiting high school physics with vector calculations * CSS styling techniques ## What's next for Cyber Hero * Adding an ingame currency, allowing users to earn credits when eradicating bugs and using said credits to buy weapons/armour to help them progress further in the game * Adding a scoring system and a leaderboard to add a competitive aspect to the game * Pitch the project to game publishers, specifically mobile app publishers for possible game publishment to help spread the importance of cybersecurity.
## Inspiration Security and facial recognition were our main interests when planning our project for this Makeathon. We were inspired by a documentary in which a man was convicted of murder by the police department in Los Angeles. The man was imprisoned for six months away from his daughter and wife. He was wrongfully convicted, and this was discovered through a video that showed evidence of him being present at an LA Dodgers game at the time of the alleged murder. Thus, he was set free, and this story truly impacted us from an emotional standpoint because the man had to pay a hefty price of six months of prison time for no reason. This exposed us to the world of facial recognition and software that can help identify faces that are not explicitly shown. We wanted to employ software that would help identify faces based on preloaded neural networks. ## What it does The webcam takes a picture of the user's face, and it compares it to preloaded images of the user's face from the database. The algorithm will then draw boxes around the user's face and eyes. ## How I built it To build this project, we used a PYNQ board, a computer with an Ethernet cable, several cables to power the PYNQ board, pretrained neural network cascade files (XML files) to identify faces and eyes, and Python programming to power the software. We used a microprocessor, Ethernet cable, HDMI cable, and webcam to power the necessary devices for the PYNQ board. The Python programming, coupled with the XML files trained to recognize different faces and eyes, was used on the Jupyter platform to display the picture taken as well as boxes around the face and eyes. ## Challenges I ran into We faced a plethora of problems while completing this project. These range from technical gaps in knowledge to hardware malfunctions that were unexpected by the team. The first issue we ran into was being given an SD card for the PYNQ board that was not preloaded with the required information. This meant that we had to download a PYNQ image with 1.5 GB of data from pynq.io. This would hinder our progress as it could lead to future difficulties, so we decided to swap the SD card for one that was preloaded. This led us to lose valuable time trying to debug the PYNQ board. Another issue was that the SD card was corrupted. This was because we unintentionally and ignorantly uploaded files to the Jupyter platform by clicking “Upload” and choosing the files from our personal computer. What we should have done was use network drive mapping to load the files from our personal computer to Jupyter successfully. That way, we would have been able to load pictures for recognition properly. The final issue we had was trying to import the face recognition API that was developed by the Massachusetts Institute of Technology. We did not know how to import the library for use, and given more time, we would explore that avenue further, as this was our very first hackathon. We would have needed to export it into the PYNQ folder and not the data folder, a detail that was elaborated upon by the Xilinx representative. ## Accomplishments that I'm proud of Loading code and images from our computers onto the PYNQ board. We were also able to link a web camera with the board while also being able to analyse the pictures taken from the web camera.
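To make the detection pipeline above concrete, here is a minimal sketch of the kind of Haar-cascade face and eye detection that the Jupyter/Python setup describes. It assumes OpenCV is installed and uses the cascade XML files that ship with OpenCV rather than the project's own preloaded files; the webcam index and output file name are illustrative assumptions, not the team's exact configuration.

```python
# Minimal sketch (not the team's exact code): Haar-cascade face and eye detection
# on a single webcam frame, assuming OpenCV's bundled cascade XML files.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)          # webcam index 0 is an assumption
ok, frame = cap.read()             # grab one picture of the user's face
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)   # box around the face
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = frame[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)  # boxes around the eyes
    cv2.imwrite("annotated.png", frame)  # in a notebook this would be displayed inline instead
```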
## What I learned As a team we were able to learn more about neural networks and how the PYNQ board technology could be incorporated into various areas, including our intended purpose in security. To be specific, we learned how to use Jupyter and Python as tools to create these possible embedded systems and even got to explore ideas of possible machine learning. ## What's next for PYNQ EYE Our project is able to recognize users using their facial features. With that being said, there is a huge application in the security industry. In instances where employees/workers have security recognize them and their ID to enter the premises of a company, this technology could prove to be useful. The automation of the security aspect of facial recognition would allow employees to show their face to a camera and be granted access to the building/premises, removing the need for extra security detail and identification that could easily be falsified, making the security of the premises much safer. Another application would be home security, where the facial recognition system would be used to disable home alarms by recognizing the faces of the residents of the property. Such applications prove that this project has the potential to boost security in the workforce and at home.
## Inspiration One of the 6 most common medication problems in an ageing population comes from the scheduling and burden of taking several medications several times a day. At best, this can be a hassle and an annoying process. However, it is often more likely than not that many may simply forget to take certain medication without supervision and reminders, which may result in further deterioration of their health. In order to address this issue, make living healthy a smoother process for the ageing population, and provide better support for their healthcare providers, MediDate was born. Designed with the user in mind, the UI is simple and intuitive while the hardware is also clean and dependable. The diversity of features ensures that all aspects of the medication and caretaking process can be managed effectively through one comprehensive platform. A senior citizen now has a technological solution to one of their daily problems, which can all be managed easily by a caretaker or nurse. ## What it does MediDate is a combination of hardware components and a web application. The hardware aspect is responsible for tracking dosage & supply for the patient as well as communicating issues (such as a low supply of medication) to the web application and the caretaker. The web application is made up of several different features to best serve both patient and caretaker. Users are first brought to a welcome page with a daily schedule of their medications as well as a simulation of the pillbox below to keep track of total medication supply. When the web app detects that supply is below a certain threshold, it will make calls to local pharmacies to reorder the medication. Along the side navigation bar, there are several features that users can take advantage of, including a notifications page, monitoring pharmacy orders, uploading new prescriptions, and descriptions of their current medication. The notifications page is pretty self-explanatory: it keeps track of any notifications patients and/or caretakers should be aware of, such as successful prescription uploads, low medication supply, and errors in uploads. The upload page allows users to take photos of new prescriptions to upload to the web app, which then runs the appropriate processes to add them to both the schedule and the explanation bar using RX numbers, dates, etc. Finally, the prescription pages offer quick shortcuts to descriptions of the medication to make understanding meds easier for users. In order to be as accessible as possible, an Alexa skill has also been created to support functionality from the web application for users to interact more directly with the caretaking solution. It currently supports limited functionality including querying for today's prescription, descriptions of different medication on the patients' schedules, as well as a call-for-help function should the need arise. This aspect of MediDate will allow more efficient service for a larger population, directly targeting those with vision impairment. Another feature was integrated using Twilio's SMS API. For the convenience of the user, a notification text would be sent to a registered pharmacy phone number with details of prescription requirements when the current pill inventory fell below an adjustable threshold. Pharmacies could then respond to the text to notify the user when their prescription was ready for pick-up. This enables seamless prescription refills and reduces the time spent in the process.
## How I built it **Hardware** Powered by an Arduino UNO, the pillbox has buttons attached to its bottom to act as weight sensors for pills. When pills are removed, the button clicks "off", sending data to the web application for processing. We used CoolTerm and a Python script to store Arduino inputs before passing them off to the web app. This aspect allows for physical interaction with the user and helps to directly manage medication schedules. **Google Cloud Vision** In order to turn images of prescriptions into text files that could be processed by our web app, we used Google Cloud Vision to parse the image and scan for relevant text. Instead of running a virtual machine, we made API calls through our web app to take advantage of the free Cloud Credits. **Backend** Scripting was done using JavaScript and Python/Flask, processing information from Cloud Vision, the Arduino, and user inputs. The goal here was to send consistent, clear outputs to the user at all times. **Frontend** Built with HTML, CSS, Bootstrap, and JavaScript, the design is meant to be clean and simple for the user. We chose a friendly UI/UX design, bright colours, and great interface flow. **Alexa Skill** Built with Voiceflow, the intents are simple and the skill does a good job of walking the user through each option carefully, with many checks along the way to ensure the user is following. Created for those who may not be as familiar with communicating with technology verbally, MediDate is an excellent way to integrate future support technologies seamlessly into users' lives. **Twilio SMS** The Twilio SMS API was integrated using Python/Flask. Once the pill inventory falls below an adjustable pill quantity, the Twilio outbound notification text workflow is triggered. Following receipt of the text by pharmacies and the preparation of prescriptions, a return text triggers a notification status on the user's home page. ## Challenges I ran into Flask proved to be a difficult tool to work with, causing us many issues with static and application file paths. Dhruv and Allen spent a long time working on this problem. We were also a bit rusty with hardware and didn't realize how important resistors were. Because of that, we ran into some issues getting a basic model set up, but it was all smooth sailing from there. The reactive calendar with the time blocks also turned out to be a very complex problem. There were many different ways to take on the difference arrays, which was the big hurdle to solving the problem. Finding an efficient solution was definitely a big challenge. ## Accomplishments that I'm proud of Ultimately, getting the full model off the ground is certainly something to be proud of. We followed Agile methodology and tried (albeit unsuccessfully at times) to get a minimum viable product with each app functionality we took on. This was a fun and challenging project, and we're all glad to have learned so much in the process. ## What's next for MediDate The future of MediDate is bright! With a variety of areas to spread into in order to support accessible treatment for ALL users, MediDate is hoping to improve the hardware. Many elderly people also suffer from tremors and other physical ailments that may make taking pills a more difficult process. As a result, implementing a better switch system to open the pillbox is an area the product could expand towards.
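As an illustration of the low-supply notification flow described above, here is a minimal, hypothetical Python sketch of the Twilio outbound text. The threshold value, phone numbers, and environment variable names are assumptions made for the example, not MediDate's actual configuration.

```python
# Hypothetical sketch of the "pill inventory below threshold -> text the pharmacy" flow.
# Credentials, numbers, and the threshold are placeholders, not the project's real values.
import os
from twilio.rest import Client

ACCOUNT_SID = os.environ["TWILIO_ACCOUNT_SID"]
AUTH_TOKEN = os.environ["TWILIO_AUTH_TOKEN"]
FROM_NUMBER = os.environ["TWILIO_FROM_NUMBER"]      # Twilio-provisioned number
PHARMACY_NUMBER = os.environ["PHARMACY_NUMBER"]     # registered pharmacy contact

REFILL_THRESHOLD = 5  # adjustable pill-count threshold (example value)

def check_inventory_and_notify(med_name: str, pills_left: int) -> bool:
    """Send a refill request text when the count for a medication drops too low."""
    if pills_left >= REFILL_THRESHOLD:
        return False
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(
        body=f"Refill request: patient is down to {pills_left} pills of {med_name}. "
             f"Please reply when the prescription is ready for pick-up.",
        from_=FROM_NUMBER,
        to=PHARMACY_NUMBER,
    )
    return True
```

In the real workflow the pharmacy's reply would come back through a Twilio webhook and flip a notification status on the user's home page; this sketch only covers the outbound half.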
## Inspiration Automation is at its peak when it comes to technology, but one area that has lagged behind is daily medicine. We encountered many moments when our family members had trouble keeping up with their prescription timelines. In a decade dominated by cell phones, we saw the need to develop something fast and easy that wouldn't require anything too complicated to keep track of all their prescriptions and timelines, and that would be accessible at their fingertips. ## What it does CapsuleCalendar is an Android application that lets one take a picture of their prescriptions or pill bottles and have them saved to their calendars (as reminders) based on the recommended intake amounts (on prescriptions). The user will then be notified based on the frequency outlined by the physician on the prescription. The application simply requires taking a picture; it's been developed with the user in mind and does not require one to set up the calendar reminder themselves - everything is pre-populated for the user through the optical-character recognition (OCR) processing when they take a snap of their prescription/pill bottle. ## How we built it The application was built for Android purely in Java, including integration of all APIs and frameworks. First, authorization of individualized accounts was done using Firebase. We implemented and modified Google's optical-character recognition (OCR) cloud-vision framework to accurately recognize text on labels, and to process and parse it in real-time. The Google Calendar API was then applied to the parsed data, and with further processing, we used intents to set reminders based on the data of the prescription labels (e.g. take X tablets X daily, where X is some arbitrary number which is accounted for in one or multiple reminders). ## Challenges we ran into The OCR Java framework was quite difficult to implement into our personalized application due to various dependency failures - it took us way too long to debug and get the framework to work *sufficiently* for our needs. Also, the default OCR graphics toolkit only captures very small snippets of text at a single time, whereas we needed multiple lines to be processed at once and text at different areas within the label at once (e.g. the default implementation would allow one set of text to be recognized and processed - we needed multiple sets). The default OCR engine wasn't quite effective for multiple lines of prescriptions, especially when identifying both the prescription name and intake procedure - tweaking this was pretty tough. Also, when we tried to use the Google Calendar API, we had extensive issues using Firebase to generate OAuth 2.0 credentials (Google documentation wasn't too great here :-/). ## Accomplishments that we're proud of We're proud of being able to implement a customized Google Cloud Vision based OCR engine and successfully process, parse and post text to the Google Calendar API. We were just really happy we had a functional prototype! ## What we learned Debugging is a powerful skill we took away from this hackathon - it was pretty rough going through complex, pre-written framework code. We also learned to work with some new Google APIs and Firebase integrations. Reading documentation is also very important… along with reading lots of StackOverflow. ## What's next for CapsuleCalendar We would like to use a better, stronger OCR engine that is more accurate at reading curved labels and does not get easily thrown off by multiple lines of text.
Also, we would like to add functionality to parse pre-taken images (if the patient doesn't have their prescription readily available and only happens to have a picture of their prescription). We would also like to improve the UI. ## Run the application Simply download/clone the source code from the GitHub link provided and run it in Android Studio. A physical Android device is required since the app uses the camera - this is not possible on an emulator.
## Inspiration Being human is a given, but keeping our humanity is our choice. Many people can't afford medical care, but those who can are able to donate it. Seeing that so many people suffer from a lack of medical care and quality health care, we think it's our time to become helping hands. The aim and objective of this project is to develop a system for donating unused medicines to people in need through an Android application. The application collects information about NGOs, orphanages, old age homes, and government hospitals near the user (donor). The user can donate unused medicines, by his/her own choice, to any of these non-profit organizations and hospitals, which will give out the medicines for free. ## What it does Med-Vita is an Android mobile application where users can donate their unused medicines, by his/her own choice, to a nearby NGO or government hospital. Once the user arrives at the home page, a prompt screen automatically pops up to ask for location access; once the user has granted it, the app shows lists of NGOs and government hospitals. The user can select any one from the lists. In the design shown, the user has selected an NGO and its basic details are displayed. The user then needs to enter details about their medicines and can donate money to that NGO. ## How we built it We used Figma to design the user interface. ## Challenges we ran into Our challenge was finding a developer at the last moment. ## Accomplishments that we're proud of We were able to complete the application on a very short deadline. ## What we learned Learned how to communicate through Discord. Using Figma to design the UI. Making videos to promote our product. ## What's next for Med-Vita We are planning on creating a web version of the application and deploying it.
## Inspiration The inspiration for Nova came from the overwhelming volume of emails and tasks that professionals face daily. We aimed to create a solution that simplifies task management and reduces cognitive load, allowing users to focus on what truly matters. ## What it does Nova is an automated email assistant that intelligently processes incoming emails, identifies actionable items, and seamlessly adds them to your calendar. It also sends timely text reminders, ensuring you stay organized and on top of your commitments without the hassle of manual tracking. ## How we built it We built Nova using natural language processing algorithms to analyze email content and extract relevant tasks. By integrating with calendar APIs and SMS services, we created a smooth workflow that automates task management and communication, making it easy for users to manage their schedules. ## Challenges we ran into One of the main challenges was accurately interpreting the context of emails to distinguish between urgent tasks and general information. Additionally, ensuring seamless integration with various calendar platforms and messaging services required extensive testing and refinement. ## Accomplishments that we're proud of We are proud of developing a fully functional prototype of Nova that effectively reduces users' daily load by automating task management. Initial user feedback has been overwhelmingly positive, highlighting the assistant's ability to streamline workflows and enhance productivity. ## What we learned Throughout the development process, we learned the importance of user feedback in refining our algorithms and improving the overall user experience. We also gained insights into the complexities of integrating multiple services to create a cohesive solution. ## What's next for Nova Moving forward, we plan to enhance Nova's capabilities by incorporating machine learning to improve task recognition and prioritization. Our goal is to expand its features and ultimately launch it as a comprehensive productivity tool that transforms how users manage their daily tasks.
## Inspiration: We're trying to get involved in the AI chat-bot craze and pull together cool pieces of technology -> including Google Cloud for our backend, Microsoft Cognitive Services and the Facebook Messenger API ## What it does: Have a look - message Black Box on Facebook and find out! ## How we built it: SO MUCH PYTHON ## Challenges we ran into: State machines (i.e. mapping out the whole user flow and making it as seamless as possible) and NLP training ## Accomplishments that we're proud of: Working NLP, many API integrations including Eventful and Zapato ## What we learned ## What's next for BlackBox: Integration with Google Calendar - and movement towards a more general interactive calendar application. It's an assistant that will actively engage with you to try and get your tasks/events/other parts of your life managed. This has a lot of potential - but for the sake of the hackathon, we thought we'd try to do it on a topic that's more fun (and of course, I'm sure quite a few of us can benefit from its advice :) )
## Inspiration Have you ever wished you had…another you? This thought has crossed all of our heads countless times as we find ourselves swamped in too many tasks, unable to keep up in meetings as information flies over our heads, or wishing we had the feedback of a third perspective. Our goal was to build an **autonomous agent** that could be that person for you — an AI that learns from your interactions and proactively takes **actions**, provides **answers**, offers advice, and more, to give back your time to you. ## What it does Ephemeral is an **autonomous AI agent** that interacts with the world primarily through the modality of **voice**. It can sit in on meetings, calls, anywhere you have your computer out. Its power is the ability to take what it hears and proactively carry out repetitive actions for you, such as being a real-time AI assistant in meetings, drafting emails directly in your Google inbox, scheduling calendar events and inviting attendees, searching knowledge corpuses or the web for answers to questions, generating images, and more. Multiple users (in multiple languages!) can use the technology simultaneously through the server/client architecture that efficiently handles multiprocessing. ## How we built it ![link](https://i.imgur.com/PatcdIi.png) **Languages**: Python ∙ JavaScript ∙ HTML ∙ CSS **Frameworks and Tools**: React.js ∙ PyTorch ∙ Flask ∙ LangChain ∙ OpenAI ∙ TogetherAI ∙ Many More ### 1. Audio to Text We utilized OpenAI's Whisper model and the Python speech\_recognition library to convert audio in real-time to text that can be used by downstream functions. ### 2. Client → Server via Socket Connection We use socket connections between the client and server to pass the textual query to the server for it to determine a particular action and action parameters. The socket connections enable us to support multiprocessing, as multiple clients can connect to the server simultaneously while performing concurrent logic (such as real-time, personalized agentic actions during a meeting). ### 3. Neural Network Action Classifier We trained a neural network from scratch to handle the multi-class classification problem of going from text to action (or none at all). Because the agent is constantly listening, we need a way to efficiently and accurately determine if each transcribed chunk necessitates a particular action (if so, which?) or none at all (most commonly). We generated data for this task utilizing data augmentation sources such as ChatGPT (web). ### 4. LLM Logic: Query → Function Parameters We use in-context learning via few-shot prompting and RAG to query the LLM for various agentic tasks. We built a RAG pipeline over the conversation history and past related, relevant meetings for context. The agentic tasks take in function parameters, which are generated by the LLM. ### 5. Server → Client Parameters via Socket Connection We pass back the function parameters as a JSON object from the server socket to the client. ### 6. Client Side Handler: API Call A client side handler receives a JSON object that includes which action (if any) was chosen by the Action Planner in step 3, then passes control to the appropriate handler function which handles authorizations and makes API calls to various services such as Google's Gmail client, the Calendar API, text-to-speech, and more. ### 7.
Client Action Notifications → File (monitored by Flask REST API) After the completion of each action, the client writes the results of the action to a file, which is then read by the React Web App to display ephemeral updates on a UI, in addition to suggestions/answers/discussion questions/advice on a polling basis. ### 8. React Web App and Ephemeral UI To communicate updates to the user (specifically notifications and suggestions from Ephemeral), we poll the Flask API for any updates and serve them to the user via a React web app. Our app is called Ephemeral because we show information minimally yet expressively to the user, in order to promote focus in meetings. ## Challenges we ran into We spent a significant amount of our time optimizing for lower latency, which is important for a real-time consumer-facing application. In order to do this, we created sockets to enable 2-way communication between the client(s) and the server. Then, in order to support concurrent and parallel execution, we added support for multithreading on the server-side. Choosing action spaces that can be articulated precisely enough in text for a language model to carry out was a troublesome task. We went through a lot of experimentation on different tasks to figure out which would have the highest value to humans and also the highest correctness guarantee. ## Accomplishments that we're proud of Successful integration of numerous OSS and closed source models into a working product, including Llama-70B-Chat, Mistral-7B, Stable Diffusion 2.1, OpenAI TTS, OpenAI Whisper, and more. Seeing real actions that we can see ourselves directly using go from a hypothetical to a reality was very cool. The potential for impact of this general workflow in various domains is not lost on us, as while the general productivity purpose stands, there are many more specific gains to be seen in fields such as digital education, telemedicine, and more! ## What we learned The possibility of powerful autonomous agents supplementing human workflows signals the shift to a new paradigm where more and more of our imprecise language can be taken by these programs and turned into real actions on our behalf. ## What's next for Ephemeral An agent is only constrained by the size of the action space you give it. We think that Ephemeral has the potential to grow boundlessly as more powerful actions are integrated into its planning capabilities and it returns more of a user's time to them.
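To make the client/server flow above more concrete, here is a minimal, hypothetical Python sketch of a threaded socket server that accepts transcribed text from multiple clients and returns an action decision as JSON. The `classify_action` stub stands in for the neural-network action classifier and LLM parameter generation of steps 3-5; the port, names, and rule inside the stub are illustrative assumptions, not Ephemeral's actual code.

```python
# Hypothetical sketch: a threaded socket server that receives transcribed text chunks
# and replies with a JSON action decision. classify_action is a stand-in for the
# project's action classifier + LLM parameter generation.
import json
import socket
import threading

def classify_action(text: str) -> dict:
    # Placeholder for: neural-net action classifier -> LLM -> function parameters.
    if "schedule" in text.lower():
        return {"action": "create_calendar_event", "params": {"title": text}}
    return {"action": "none", "params": {}}

def handle_client(conn: socket.socket, addr) -> None:
    with conn:
        while True:
            data = conn.recv(4096)           # one transcribed chunk per message
            if not data:
                break
            decision = classify_action(data.decode("utf-8"))
            conn.sendall(json.dumps(decision).encode("utf-8"))

def serve(host: str = "127.0.0.1", port: int = 9000) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((host, port))
        server.listen()
        while True:
            conn, addr = server.accept()
            # One thread per client keeps multiple meetings/users concurrent.
            threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

if __name__ == "__main__":
    serve()
```

A client in this scheme would simply connect, send each transcribed chunk, and act on the JSON it gets back (step 6), which is what keeps the per-chunk round trip short.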
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and all your developer friends can come together to learn, code, and brainstorm together. ## About Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page to have access to all the tools on one screen and to make transitions between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real-time. ## Features Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our inbuilt browser IDE, all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit. We plan to add more relevant tools and widgets, expand into other work fields to increase our user demographic, and include interface customization options to allow users to personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
## Inspiration Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves. ## What it does The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done. ## How We built it We used Flask for the backend and used OpenCV, TensorFlow, and the Google Cloud speech-to-text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations. ## Challenges we ran into We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time. ## Accomplishments that we're proud of We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches. ## What we learned We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start. ## What's next for RealTalk We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server. ![Image](https://i.imgur.com/aehDk3L.gif) Tracks movement of hands and face to provide real-time analysis on expressions and body-language. ![Image](https://i.imgur.com/tZAM0sI.gif)
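As a rough illustration of the multi-threading challenge RealTalk describes (analysing video and voice at the same time), here is a hypothetical Python sketch that runs a webcam-frame loop and an audio-chunk loop in separate threads and pushes their results onto one shared queue that a Flask endpoint could poll. The `analyze_frame` and `transcribe_chunk` stubs stand in for the OpenCV/TensorFlow and Google Cloud speech-to-text calls; they are assumptions, not the team's code.

```python
# Hypothetical sketch: analysing video frames and audio chunks in parallel threads,
# feeding results into one queue that the dashboard backend can drain.
import queue
import threading
import time
import cv2

results: "queue.Queue[dict]" = queue.Queue()

def analyze_frame(frame) -> dict:
    # Placeholder for the OpenCV/TensorFlow gesture + expression analysis.
    return {"type": "video", "faces": 1}

def transcribe_chunk(chunk_id: int) -> dict:
    # Placeholder for the Google Cloud speech-to-text call on a recorded chunk.
    return {"type": "audio", "chunk": chunk_id, "text": "..."}

def video_loop(stop: threading.Event) -> None:
    cap = cv2.VideoCapture(0)
    while not stop.is_set():
        ok, frame = cap.read()
        if ok:
            results.put(analyze_frame(frame))
    cap.release()

def audio_loop(stop: threading.Event) -> None:
    chunk_id = 0
    while not stop.is_set():
        results.put(transcribe_chunk(chunk_id))
        chunk_id += 1
        time.sleep(1.0)   # pretend each audio chunk is about a second long

stop_event = threading.Event()
for target in (video_loop, audio_loop):
    threading.Thread(target=target, args=(stop_event,), daemon=True).start()
# A Flask route could now drain `results` and stream the combined feedback to the dashboard.
```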
## Inspiration We were inspired by all the people who go along their days thinking that no one can actually relate to what they are experiencing. The Covid-19 pandemic has taken a mental toll on many of us and has kept us feeling isolated. We wanted to make an easy-to-use web-app which keeps people connected and allows users to share their experiences with other users that can relate to them. ## What it does Alone Together connects two matching people based on mental health issues they have in common. When you create an account you are prompted with a list of the general mental health categories that most fall under. Once your account is created you are sent to the home screen and entered into a pool of individuals looking for someone to talk to. When Alone Together has found someone with matching mental health issues you are connected to that person and forwarded to a chat room. In this chat room there is video-chat and text-chat. There is also an icebreaker question box that you can shuffle through to find a question to ask the person you are talking to. ## How we built it Alone Together is built with a React frontend, a backend in Golang (using Gorilla for websockets), WebRTC for video and text chat, and Google Firebase for authentication and database. The video chat is built from scratch using WebRTC and signaling with the Golang backend. ## Challenges we ran into This is our first remote Hackathon and it is also the first ever Hackathon for one of our teammates (Alex Stathis)! Working as a team virtually was definitely a challenge that we were ready to face. We had to communicate a lot more than we normally would to make sure that we stayed consistent with our work and that there was no overlap. As for the technical challenges, we decided to use WebRTC for our video chat feature. The documentation for WebRTC was not the easiest to understand, since it is still relatively new and obscure. This also means that it is very hard to find resources on it. Despite all this, we were able to implement the video chat feature! It works; we just ran out of time to host it on a cloud server with SSL, meaning the video is only sent over localhost (no encryption). Google App Engine also doesn't allow websockets in standard mode, and also doesn't allow `go.mod` on `flex` mode, which was inconvenient, and we didn't have time to rewrite parts of our webapp. ## Accomplishments that we're proud of We are very proud of bringing our idea to life and working as a team to make this happen! WebRTC was not easy to implement, but hard work pays off. ## What we learned We learned that whether we work virtually together or physically together we can create anything we want as long as we stay curious and collaborative! ## What's next for Alone Together In the future, we would like to allow our users to add other users as friends. This would mean that in addition to meeting new people with the same mental health issues as them, they could build stronger connections with people that they have already talked to. We would also allow users to have the option to add moderation with AI. This would offer a more "supervised" experience to the user, meaning that if our AI detects any dangerous change of behavior we would provide the user with tools to help them or (with the authorization of the user) we would give the user's phone number to appropriate authorities to contact them.
## Inspiration E-cigarette use, specifically Juuling, has become an increasing public health concern among young adults and teenagers over the past few years. While e-cigarettes are often viewed as a safe alternative to traditional tobacco cigarettes, e-cigarettes have been proven to have negative health effects on both the user and second-hand smokers, as shown in multiple CDC and Surgeon General reviewed studies. E-cigarettes also still contain the active ingredient nicotine, which is a well-known addictive drug. Yet, students across the United States on high school and college campuses continue to vape. For us, high school students, it is a common sight to see classmates skipping class and “Juul-ing” in the bathroom. The Juul is one of the most popular e-cigarettes as it has a sleek design and looks like a USB drive. This design, coupled with the fact that there is no lasting smell or detectable smoke, makes it easy for users to go undetected in the high school environment. Moreover, this results in students not receiving help for their addiction or even realizing they do have an addiction. With the increasing use of e-cigarettes among millennials, there has been the creation of a vape culture filled with vape gods performing vape porn, displaying the artistic style of their smoke creations. Users often post pictures and videos of themselves Juuling on social media platforms, specifically Instagram and Facebook. With this in mind, we set out to create a research-based solution that could identify e-cigarette users and deter them from future use, a process school administrations have attempted and failed at. Juuly the Bear was created as the mascot leading the war on teenage e-cigarette use. ## What it does Juuly the Bear is intended to fight the growth of vape culture by creating a counter culture that actively discourages Juuling while informing users of its dangers. It does this by using computer vision to analyze the Instagram account of an inputted user. The program flags images it detects to be of a person using an e-cigarette. If more than 40% of the images analyzed are of a person vaping, the user is classified as a “frequent e-cigarette user” as defined by a study by Jung Ah Lee (2017), and categorized as high-risk for nicotine addiction. Juuly will then automatically message the high-risk user on Facebook Messenger, informing them of their status and offering suggestions on how to cut down on their Juul use. Juuly will also provide external resources that the user can utilize. ## How I built it We built Juuly's computer vision using the Clarifai API in Python. First, we trained a machine learning model with images of e-cigarette users actively vaping. We then tested images of other vaping people to evaluate and further train the model until a sufficient accuracy level was reached. Then, we used the library to create a data scraping program for Instagram. When a username is inputted, the program gathers the most recent posts, which are then fed into the computer vision program, analyzing the images with the previously trained model. If more than 40% of the images are of vaping, a Facebook Messenger bot automatically messages the user with warnings and resources. ## Challenges I ran into We ran into many challenges with implementing Juuly the Bear, especially because the technology was initially foreign to us. As high school students, we did not have a huge background in computer vision or machine learning. Initially, we had to completely learn the Clarifai API and the Facebook Messenger API.
We also had a hard time settling on the design and thinking of a way to maximize our outreach. We decided that adding a bit of humor into the design would better resonate with teenagers, the age group in which people most commonly Juul. In addition, we were unsuccessful when trying to combine the backend Juuly program with our frontend. We initially wanted to create a fully functional website where one can enter Instagram and Facebook profiles to analyze, but when we had both the front and back ends completed, we had a hard time seamlessly integrating the two. In the end, we had to scrap the front-end in favor of a more functional backend. ## Accomplishments that I'm proud of As a group of high school students, we were able to use many new tools that we had never encountered before. The tools described above were extremely new to us before the hackathon; however, by working with various mentors and continually striving to learn these tools, we were able to create a successful program. The most successful part of the project was creating a powerful backend that was able to detect people Juuling. By training a machine learning model with the Clarifai API, we were able to reach over an 80% accuracy rate for the set of images we had, while initially we had barely any knowledge of machine learning. Another very successful part was our scraping program. This was completely new to us and we were able to create a program that perfectly fit our application. Scraping was also a very powerful tool, and by learning how to scrape social media pages, we had a lot more data than we would have had otherwise. ## What's next for Juuly the Bear Our immediate next step would be combining our already designed front-end website with our backend. We spent a lot of time trying to understand how to do this successfully, but we ultimately just ran out of time. In the future, we would optimally partner up with major social media organizations including Facebook and Twitter to create a large scale implementation of Juuly. This will have a much larger impact on vape culture as people become more informed. This can have major impacts on public health and adolescent behavior/culture, and also increase the quality of life of all as the number of vapers is reduced.
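Here is a small, hypothetical Python sketch of the 40% threshold rule described above: score each recent Instagram post with an image classifier and flag the account if too many posts are predicted to show vaping. The `predict_vaping_probability` stub stands in for the custom-trained image model (its real SDK calls are omitted), and the per-image confidence cutoff is an illustrative assumption; only the 40% share comes from the write-up.

```python
# Hypothetical sketch of Juuly's flagging rule: classify recent posts and mark the
# account "high-risk" if more than 40% are predicted to show e-cigarette use.
from typing import Iterable

VAPING_SHARE_THRESHOLD = 0.40   # fraction of posts, per the write-up
MIN_CONFIDENCE = 0.50           # per-image decision cutoff (illustrative)

def predict_vaping_probability(image_url: str) -> float:
    """Placeholder for the custom-trained image model (e.g. a vaping concept score)."""
    raise NotImplementedError

def is_frequent_user(post_image_urls: Iterable[str]) -> bool:
    urls = list(post_image_urls)
    if not urls:
        return False
    flagged = sum(1 for url in urls if predict_vaping_probability(url) >= MIN_CONFIDENCE)
    return flagged / len(urls) > VAPING_SHARE_THRESHOLD
```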
# Inspiration There are a variety of factors that contribute to *mental health* and *wellbeing*. For many students, the stresses of remote learning have taken a toll on their overall sense of peace. Our group created **Balance Pad** as a way to serve these needs. Thus, Balance Pad's landing page gives users access to various features that aim to improve their wellbeing. # What it does Balance Pad is a web-based application that gives users access to **several resources** relating to mental health, education, and productivity. Its initial landing page is a dashboard tying everything together to make a clear and cohesive user experience. ### Professional Help > > 1. *Chat Pad:* The first subpage of the application has a built-in *Chatbot* offering direct access to a **mental health professional** for instant messaging. > > > ### Productivity > > 1. *Class Pad:* With the use of the Assembly API, users can convert live lecture content into text-based notes. This feature will allow students to focus on live lectures without the stress of taking notes. Additionally, this speech-to-text aid will increase accessibility for those requiring note takers. > 2. *Work Pad:* Timed working sessions using the Pomodoro technique and notification restriction are also available on our webpage. The Pomodoro technique is a proven method to enhance focus and productivity and will benefit students > 3. *To Do Pad:* Helps users stay organized > > > ### Positivity and Rest > > 1. *Affirmation Pad:* Users can upload their accomplishments throughout their working sessions. Congratulatory texts and positive affirmations will be sent to the provided mobile number during break sessions! > 2. *Relaxation Pad:* Offers options to entertain students while resting from studying. Users are given a range of games to play with and streaming options for fun videos! > > > ### Information and Education > > 1. *Information Pad:* Dedicated to info about all things mental health > 2. *Quiz Pad:* This subpage tests what users know about mental health. By taking the quiz, users gain valuable insight into how they are doing and information on how to improve their mental health, wellbeing, and productivity. > > > # How we built it **React:** Balance Pad was built using React. This allowed us to easily combine the different webpages we each worked on. **JavaScript, HTML, and CSS:** React builds on these languages so it was necessary to gain familiarity with them **Assembly API:** The Assembly API was used to convert live audio/video into text **Twilio:** This was used to send instant messages to users based on tracked accomplishments # Challenges we ran into > > * Launching new apps with React via Visual Studio Code > * Using Axios to run API calls > * Displaying JSON information > * Domain hosting of Class Pad > * Working with Twilio > > > # Accomplishments that we're proud of *Pranati:* I am proud that I was able to learn React from scratch, work with new tech such as Axios, and successfully use the Assembly API to create the Class Pad (something I am passionate about). I was able to persevere through errors and build a working product that is impactful. This is my first hackathon and I am glad I had so much fun. *Simi:* This was my first time using React, Node.js, and Visual Studio. I don't have a lot of CS experience so the learning curve was steep but rewarding! *Amitesh:* Got to work with a team to bring a complicated idea to life!
# What we learned *Amitesh:* Troubleshooting domain creation for various pages, supporting teammates and teaching concepts *Pranati:* I learned how to use new tech such as React, new concepts such as API calls using Axios, how to debug efficiently, and how to work and collaborate in a team *Simi:* I learned how APIs work, basic HTML, and how React modularizes code. Also learned the value of hackathons as this was my first # What's next for Balance Pad *Visualizing Music:* Our group hopes to integrate BeatCaps software into our page in the future. This would allow a more interactive music experience for users and also allow hearing-impaired individuals to experience music *Real Time Transcription:* Our group hopes to implement real-time transcription in the Class Pad to make it even easier for students.
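As a rough sketch of the Class Pad flow described above (lecture audio in, text notes out), here is a hypothetical Python example of calling AssemblyAI's REST API with the `requests` library: upload an audio file, request a transcript, and poll until it completes. The endpoint shapes follow AssemblyAI's public documentation, but the file name, polling cadence, and error handling are illustrative assumptions rather than Balance Pad's actual code (the project itself calls the API from its React front end).

```python
# Hypothetical sketch: transcribe a recorded lecture with AssemblyAI's REST API.
# API key, file name, and polling cadence are placeholders.
import os
import time
import requests

API_KEY = os.environ["ASSEMBLYAI_API_KEY"]
HEADERS = {"authorization": API_KEY}
BASE = "https://api.assemblyai.com/v2"

def transcribe(audio_path: str) -> str:
    # 1) Upload the raw audio and get a temporary URL back.
    with open(audio_path, "rb") as f:
        upload = requests.post(f"{BASE}/upload", headers=HEADERS, data=f)
    audio_url = upload.json()["upload_url"]

    # 2) Ask for a transcript of the uploaded audio.
    job = requests.post(f"{BASE}/transcript", headers=HEADERS,
                        json={"audio_url": audio_url}).json()

    # 3) Poll until the transcript is ready.
    while True:
        status = requests.get(f"{BASE}/transcript/{job['id']}", headers=HEADERS).json()
        if status["status"] == "completed":
            return status["text"]
        if status["status"] == "error":
            raise RuntimeError(status.get("error", "transcription failed"))
        time.sleep(3)

if __name__ == "__main__":
    print(transcribe("lecture.wav"))
```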
Shashank Ojha, Sabrina Button, Abdellah Ghassel, Joshua Gonzales # ![](https://drive.google.com/uc?export=view&id=1admSn1s1K2eioqtCisLGZD4zXZjGWsy8) "Reduce Reuse Recoin" ## Theme Covered: The themes covered in this project include post-pandemic restoration for the environment, small businesses, and personal finance! The app pitched uses an extensively trained AI system to detect trash and sort it into the proper bin from your smartphone. Users will be incentivized to use the app and help restore the environment through the opportunity to earn points, which will be redeemable at partnering stores. ## Problem Statement: As our actions continue to damage the environment, it is important that we invest in solutions that help restore our community through more sustainable practices. Moreover, an average person creates over 4 pounds of trash a day, and the EPA has found that over 75% of the waste we create is recyclable. As garbage sorting rules vary so much from town to town, students have reportedly agreed that accurately sorting garbage is difficult, which causes significant misplacement of garbage. Our passion to make our community globally and locally more sustainable has fueled us to use artificial intelligence to develop an app that not only makes sorting garbage as easy as using Snapchat, but also rewards individuals for sorting their garbage properly. For this reason, we would like to introduce Recoin. This intuitive app allows a person to scan any product and easily find the bin that the trash belongs in, based on their location. Furthermore, if they attempt to sell their product, or use our app, they will earn points which will be redeemable at partnering stores that advocate for the environment. The more the user uses the app, the more points they receive, resulting in better items to redeem in stores. With this app we will not only help restore the environment, but also increase sales at small businesses, which struggled to recover during the pandemic. ## About the App: ### Incentive Breakdown: ![](https://drive.google.com/uc?export=view&id=1CU2JkOJqplaTxNo7B8s_UXN_UaEydsd3) Please note that these expenses are estimated expectations for potential benefit packages and are not defined yet. We are proposing a $1 discount at participating small businesses when 100 coffee/drink cups are returned to participating restaurants. This will be easy for small companies to uphold financially, while providing a motivation for individuals to use our scanner. Amazon spends around $0.5 to $2 on packaging, so we are proposing that Amazon provides a $15 gift card per 100 packages returned to Amazon. As the 100 packages can cost from $50 to $200, this incentive will save Amazon resources at 5 to 100 times the amount, while providing positive public perception for reusing. As recycling plastic into 3D filament is an up-and-coming technology that can revolutionize environmental sustainability, we would like to create a system where providing materials for such causes gives individuals benefits. Lastly, as metals become more valuable, we hope to provide recyclable metals to companies to reduce their expenses through our platform. The next step in this endeavor will be to provide some sort of incentive for individuals who return batteries and electronics as well.
## User Interface: ![](https://drive.google.com/uc?export=view&id=1QR2fNvrkpB7q_PAI_5iZfL3M7i9nV6m5) ## Technological Specifics and Next Steps: ![](https://drive.google.com/uc?export=view&id=1vlfSjhtg-_JZZzVITaRMiOsXC-9TwyNS) ### Frontend ![](https://drive.google.com/uc?export=view&id=1TSAcKAPLFtZdJrn8OZVRfkBvS_29Y8Dk) We used React.js to develop components for the webcam footage and to capture screenshots. It was also utilized to create the rest of the overall UI design. ### Backend #### Waste Detection AI: ![](https://drive.google.com/uc?export=view&id=1Fx8uAW3I_OntNn74aZXYQbThnfPkZs56) Using PyTorch, we utilized open-source trash detection software and data to train the trash detection system originally developed by IamAbhinav03. The system uses over 2500 images to train, test, and validate the model. To improve the system, we increased the number of epochs to 8 rather than 5 (the number of passes over the training data). This allowed the accuracy to increase by 4% over the original system. We also modified the train/validation/test split to 70%, 10%, and 20% respectively, as more prominent AI studies have found this distribution to give the best results. Currently, the system is predicted to have a 94% accuracy, but in the future, we plan on using reinforcement learning in our beta testing to continuously improve our algorithm. Reinforcement learning allows the model to become more accurate by learning from user corrections. This will allow the AI to become more precise as the app gains popularity. A Flask server is used to make contact with the waste detection neural network; an image is sent from the front end as a POST request, the Flask server generates a tensor and runs it through the neural net, then sends the response from the algorithm back to the front end. This response is the classification of the waste as either cardboard, glass, plastic, metal, paper or trash. #### Possible next steps: By using the Mapbox API and the Google Suite/API, we will be creating maps to find recycling locations and a thorough Recoin currency system that can easily be converted to real money for consumers and businesses (as shown in the user interface above). ## Stakeholders: After the completion of this project, we intend to continue to pursue the app to improve our communities' sustainability. After looking at the demographic of interest in our school itself, we know that students will be interested in this app, not only for its convenience but also for the reward system. Local cafes and Starbucks already have initiatives to improve public perception and support the environment (e.g., using paper straws and cups), so supporting this new endeavor will be of interest to them. As branding is everything in a business, having a positive public perception will increase sales. ![](https://drive.google.com/uc?export=view&id=189sA5C0KDT8VIaRdD6jQQtnN32qO87h8) ## Amazon: As Amazon continues to be the leading online marketplace, more packages will continue to be made, which can be detrimental to the world's limited resources. We will be training the AI to recognize packages that are Amazon-based. With such training, we would like to be able to implement a system where the packaging can be sent back to Amazon to be reused for credit. This will allow Amazon to form a more environmentally friendly corporate image, while also saving on resources.
## Small Businesses: As the pandemic has caused a significant decline in small business revenue, we intend to mainly partner with small businesses in this project. The software will also help increase small business sales, as by supporting the app, stores gain a positive public image that makes students more inclined to visit, and the added discounts will attract more customers. In the future, we wish to train the AI to also detect trash of value (e.g. broken smartphones, precious metals), so that consumers can sell it in a bundle to local companies that can benefit from the material (e.g. 3D-printing companies that convert used plastic to filament). ## Timeline: The following timeline will be used to ensure that our project will be on the market as soon as possible: ![](https://drive.google.com/uc?export=view&id=1jA6at4g31KpTOJdwna-10UwP_U-z4JUp) ## About the Team: We are first and second year students from Queen's University who are very passionate about sustainability and designing innovative solutions to modern-day problems. We all have the mindset to give any task our all and obtain the best results. We have a diverse skillset in the team, and throughout the hackathon we utilized it to work efficiently. We are first-time hackathoners, so even though we all had experience in our own fields, this whole experience was very new and educationally rewarding for us. We would like to thank the organisers and mentors for all their support and for organizing the event. ## Code References • <https://medium.datadriveninvestor.com/deploy-your-pytorch-model-to-production-f69460192217> • <https://narainsreehith.medium.com/upload-image-video-to-flask-backend-from-react-native-app-expo-app-1aac5653d344> • <https://pytorch.org/tutorials/beginner/saving_loading_models.html> • <https://pytorch.org/tutorials/intermediate/flask_rest_api_tutorial.html> • <https://pytorch.org/get-started/locally/> • <https://www.kdnuggets.com/2019/03/deploy-pytorch-model-production.html> ## References for Information • <https://www.rubicon.com/blog/trash-reason-statistics-facts/> • <https://www.dosomething.org/us/facts/11-facts-about-recycling> • <https://www.forbes.com/sites/forbesagencycouncil/2016/10/31/why-brand-image-matters-more-than-you-think/?sh=6a4b462e10b8> • <https://www.channelreply.com/blog/view/ebay-amazon-packaging-costs>
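To make the Recoin backend described above more concrete, here is a minimal, hypothetical sketch of the Flask + PyTorch inference endpoint: the front end POSTs an image, the server converts it to a tensor, runs the classifier, and returns the predicted waste category. The model file name, input size, and transforms are illustrative assumptions rather than the project's actual values; only the six class names come from the write-up.

```python
# Hypothetical sketch of the waste-classification endpoint: image in, category out.
# Model path, input size, and preprocessing are placeholders.
import io
import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.load("trash_classifier.pt", map_location="cpu")  # assumed saved model object
model.eval()

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify():
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    tensor = preprocess(image).unsqueeze(0)          # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(tensor)
    prediction = CLASSES[int(logits.argmax(dim=1))]
    return jsonify({"category": prediction})

if __name__ == "__main__":
    app.run(port=5000)
```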
## Inspiration The best way to learn to code is usually through trial and error. As a team, we all know first hand how hard it can be to maintain the proper standards, techniques, and security practices necessary to keep your applications secure. SQLidify is a teaching tool and a security tool all in one, with the goal of helping coders keep their applications secure. ## What it does SQLidify uses our own unique dataset/training model, which consists of over 250 labelled data entries, to identify SQL vulnerabilities in an application. To use it, simply paste your code into our website, where our machine learning model will identify vulnerabilities in your back-end code and then suggest strategies to fix these issues. ## How we built it We used a Flask, Python-based backend that handles API calls from a front end designed in React.js and Tailwind CSS. When called, our Python backend reads data from users and then sends the data to our AI model. At the same time, our own simplified natural language processing model identifies keywords in specific lines of code and sends these lines individually to our AI model. The model makes a prediction for each, which is then compared to help improve reliability. If the predictions don't match, further instructions are sent to the user in order to improve our accuracy. The AI is designed using Cohere's classification workflow. We generated over 250 code snippets labeled as either vulnerable or safe. We have another model that is triggered if the code is determined to be vulnerable, which will then generate 3 appropriate options to resolve the vulnerabilities. ## Challenges we ran into We had trouble setting up Cohere and getting it to integrate with our application, but we were luckily able to resolve the issues in time to build our app. We also had a lot of trouble finding a dataset fit for our needs, so we ended up creating our own from scratch. ## Accomplishments that we're proud of Despite setbacks, we managed to integrate the AI, React frontend, and Flask backend all together in less than 24 hours. ## What we learned We all learned so much about machine learning and Cohere in particular, since none of us were experienced at working with AI until McHacks. ## What's next for SQLidify Expansion. We hope to eventually integrate detection for other vulnerabilities such as buffer overflow and many more.
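For illustration, here is a hypothetical Python sketch of the dual-check flow described above: the whole snippet and any keyword-flagged lines are each sent to a Cohere classification call, and the two verdicts are compared. The labelled examples, keyword list, and the `Example` import path are assumptions for the sketch (the Cohere SDK's classify call shape has varied across versions), not SQLidify's real dataset or code.

```python
# Hypothetical sketch of the two-pass SQL-injection check: classify the full snippet
# and any suspicious lines, then compare predictions. Examples/keywords are placeholders.
import cohere
from cohere.responses.classify import Example  # import path may differ by SDK version

co = cohere.Client("YOUR_API_KEY")

EXAMPLES = [  # stand-ins for the ~250 labelled snippets in the real dataset
    Example('cursor.execute("SELECT * FROM users WHERE id = " + user_id)', "vulnerable"),
    Example('query = f"DELETE FROM orders WHERE id = {order_id}"; cursor.execute(query)', "vulnerable"),
    Example('cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))', "safe"),
    Example('cursor.execute("DELETE FROM orders WHERE id = ?", (order_id,))', "safe"),
]
SQL_KEYWORDS = ("execute", "select", "insert", "update", "delete")

def classify(snippets):
    response = co.classify(inputs=list(snippets), examples=EXAMPLES)
    return [c.prediction for c in response.classifications]

def analyze(code: str) -> str:
    whole_code_verdict = classify([code])[0]
    suspicious = [ln for ln in code.splitlines()
                  if any(k in ln.lower() for k in SQL_KEYWORDS)]
    line_verdicts = classify(suspicious) if suspicious else []
    line_verdict = "vulnerable" if "vulnerable" in line_verdicts else "safe"
    if whole_code_verdict != line_verdict:
        return "inconclusive - please provide more context"   # mirrors the "ask the user" step
    return whole_code_verdict
```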
## Inspiration The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators, utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient. ## What it does Wise Up is a website that takes many different types of file format, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression. ## How we built it With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used JavaScript, HTML and CSS for the website, and used it to communicate with a Flask backend that can run our Python scripts involving API calls and such. We have API calls to OpenAI text embeddings, to Cohere's xlarge model, to GPT-3's API, and to OpenAI's Whisper speech-to-text model, plus several modules for getting an mp4 from a YouTube link, text from a PDF, and so on. ## Challenges we ran into We had problems getting the Flask backend to run on an Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to JavaScript. ## Accomplishments that we're proud of Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions about it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and it costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness). ## What we learned As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU time spent running Whisper.
## Inspiration
Compiled C and C++ binaries compose a large portion of the software vulnerabilities present in today's server and commercial codebase. Oftentimes, these vulnerabilities can be detected and prevented by static analysis algorithms, such as the Clang static analyzer. However, with the ever-increasing complexity and frequency of exploits involving various mechanisms of memory corruption and arbitrary program control, static methods are becoming ineffective at identifying all possible attack surfaces in a given program. Machine learning, a powerful data analysis technique that has been used for finding patterns in a wide variety of datasets, is proposed as a solution to more quickly and effectively identify potential weak points in a program so that they may be patched before deployment.
## What it does
CodeHeat (short for automatic Code Heat Map analysis) is a machine-learning based vulnerability detector built specifically for C and C++ programs, but whose concepts may easily be expanded to perform similar analysis on programs in other languages. Instead of analyzing compiled binaries - which is what disassemblers such as IDA and Ghidra do - CodeHeat analyzes the source program directly, exactly what is visible to the developer. This offers several advantages: first, source file analysis allows the developer to make changes to their program as it is being built, without having to repeatedly wait for compilation. Furthermore, vulnerabilities at the source level are much easier for the developer to identify and fix. This is much easier than having to map the compiled code back to the text to address a vulnerability.
## How we built it
The machine learning library used to generate, train, and evaluate the model was Keras, which runs atop TensorFlow. Since Keras is a Python library, all analysis programs we built were written in Python. The data that is passed to the classifier is a series of tokens - C/C++ text source files had to first be tokenized with a lexer. The lexer was implemented from scratch in Python using the PLY (Python Lex-Yacc) library. The machine learning model itself consists of 7 types of internal layers: (1) an embedding layer, (2) a reshaping layer, (3) a 2-dimensional convolutional layer, (4) a maximum pooling layer, (5) a flattening layer, (6) a dropout layer, and (7) dense layers (there are three). Parameters were selected according to a previous [research paper](https://arxiv.org/pdf/1807.04320.pdf) investigating the properties of a similar vulnerability detection model. The convolutional neural network is apposite for this application because the tokens are embedded into a higher-dimensional space, allowing a block of program text to be represented as an intensity image. In programming, neighboring tokens are known to affect each other's meanings, and the convolution reflects this proximity.
## Challenges we ran into
Tokenization of the C code became our biggest challenge. The research paper we were following used its own custom tokenizer that reduced the token space to 156 symbols, and we had a hard time matching that while still accounting for the different symbols that could be captured.
## Accomplishments that we're proud of
We picked an idea that we thought was interesting, and we stuck with it beginning to end no matter the challenges. We've had to overcome many hurdles, and although we didn't get the results that we would have liked, we are very happy with the progress we made.
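As a rough sketch of the layer stack described under "How we built it", a Keras model with those seven layer types could look like the following. The vocabulary size, sequence length, filter counts, and dropout rate here are made-up placeholders, not the parameters taken from the referenced paper:

```python
# Hypothetical sketch of the token-image CNN described above, not CodeHeat's exact model.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 156   # size of the token alphabet after lexing (placeholder)
SEQ_LEN = 500      # tokens per code sample (placeholder)
EMBED_DIM = 13     # embedding dimension (placeholder)

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM, input_length=SEQ_LEN),   # (1) embedding
    layers.Reshape((SEQ_LEN, EMBED_DIM, 1)),                          # (2) reshape to an "image"
    layers.Conv2D(64, kernel_size=(9, EMBED_DIM), activation="relu"), # (3) 2-D convolution
    layers.MaxPooling2D(pool_size=(4, 1)),                            # (4) max pooling
    layers.Flatten(),                                                 # (5) flatten
    layers.Dropout(0.5),                                              # (6) dropout
    layers.Dense(64, activation="relu"),                              # (7) three dense layers
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # vulnerable vs. safe
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```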
## What we learned
We learned about the process of lexing program text data into a set of symbols that makes it easiest for a machine learning model to find patterns among the program data. We also expanded our thinking about machine learning and its applicability to various problems - even though our datasets were text files (at most described by a one-dimensional string of characters), embedding into a higher-dimensional space and using convolution allows patterns that would otherwise be difficult to observe to become clear.
## What's next for ML-Based Software Vulnerability Detection
To improve CodeHeat, the central model must be trained to better identify offending code. This can be accomplished by selecting appropriate token rules for a tokenizer that more effectively represents the program code and its meaning. Additionally, visualizing which parts of the code are most vulnerable would be desirable; such a heat map could be obtained by carefully inspecting the outputs of the early layers of the convolutional network.
## Inspiration
I really related to the problem the career counsellors described. Reflecting on things can be hard. Especially when you're reflecting on yourself. Our goal is to make this process fun and easy, with a little help from our friends.
## What it does
Meerar compares HOW OFTEN you demonstrate a skill and how much you ENJOY demonstrating it. To elaborate:
* Whenever you engage in an event, Meerar will prompt you to take a little note, describing your experience and how you felt.
* We then use the Lexalytics API to decide how much you enjoyed the event.
* Your reflection is posted anonymously on ExperienceMine, along with other people's reflections.
* Here's the twist: other students read your reflection and assign a "skill" that they feel you demonstrated.
* The more you review and post reflections, the more points you earn. You can earn achievements and compete with your friends like this!
* At this point, we have 2 pieces of info: how you felt about the event, and what skill you demonstrated at the event. Once you have enough data, we plot a graph that compares HOW OFTEN you demonstrate a skill and how much you ENJOY demonstrating it, provide an analysis of your strengths and weaknesses, and recommend events to attend to play to your strengths.
## How I built it
I built Meerar with a PHP backend, which is used to store and process user information. The front end was built on the trusty Twitter Bootstrap framework, with a bunch of JavaScript libraries sprinkled in (i.e. materialize.js, chart.js). I used the Lexalytics API in order to gauge the sentiment in users' descriptions. I also wrote a little scraper to scrape from users' LinkedIn profiles, which I execute with a bash script (what can I say, I'm learning). It's a bit patchwork-ish, but it works!
## Challenges I ran into
I came to McGill not knowing anyone, and being a beginner, it was challenging to find a team that would take me in. I managed to find another team of beginners but unfortunately, my team decided to leave the hackathon just after dinner was served on Saturday. As a result I had to do the project all by myself: a pretty daunting task! This is my first major hackathon so I was a little bummed out. Still, I followed through because I thought I had an interesting idea. Hopefully the judges agree!
## Accomplishments that I'm proud of
I almost ran into a dead end with the LinkedIn API. Essentially, I wanted to get job information about a user from their profile. However, you needed to register your app with LinkedIn and have an official website (which I did not have) in order to qualify to receive the access key to use their API. Since I didn't have this, I found a workaround: scrape the page manually with Python, and then exec() a bash script to run the scraper! Though it may not seem like much, I'm proud of finding the "way" with my "will".
## What I learned
Tech-wise, I learned how to use chart.js. It's a really cool way to visualize data with beautiful graphs in the browser. I will definitely be using it again in the future. But what I really learned is that I have a lot left to learn. Everyone at McGill really knows their stuff. Next year, that'll be me.
## What's next for Meerar
Hopefully, if the judges like it, we can work on making it a full-fledged app available online for students at McGill.
## Inspiration
Currently, in war-torn and disaster-struck areas, first responders are risking their lives unnecessarily, as they lack the resources needed to accurately and safely assess a disaster zone. By using robotics, we can avoid putting human lives at risk.
## What it does
The SPOT robot has been enabled to act as an independent rescue machine that understands human emotion and natural language using AI, with the notable ability to detect the speaker's language and adjust its output accordingly.
## How we built it
We built it using a variety of tools, including Hume for AI/transcription, OpenCV for ML models, Flask for the backend, and Next.js for the frontend.
## Challenges we ran into
Connecting to and controlling SPOT was extremely difficult. We got around this by building a custom control server that connects directly to SPOT and controls its motors. The Hume API was relatively friendly to use, and we connected it to a live stream of data via the Continuity Camera.
## Accomplishments that we're proud of
Fixing SPOT's internal Linux dependencies. This is something that blocked all teams from using SPOT and took up most of the first day. But by solving this, we enabled SPOT to be used by all teams.
## What we learned
We learned it is quite complex to combine various tech stacks across a variety of products, both hardware and software. We learned to approach these problems by introducing levels of abstraction that would allow parts of the team to work in parallel.
## What's next for Spotter - Revolutionizing Disaster Relief
We hope to fully autonomize SPOTTER so that SPOT can traverse and navigate disaster environments completely independently. In this way, SPOT can locate survivors and assess the situation globally.
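A custom control server like the one described above essentially relays motion requests from the web frontend to the robot. The sketch below is a hypothetical stand-in only: the route name and the `send_velocity_command` helper are invented, and the actual robot SDK calls are omitted.

```python
# Hypothetical sketch of a Flask relay that forwards motion commands to SPOT.
from flask import Flask, request, jsonify

app = Flask(__name__)

def send_velocity_command(vx, vy, yaw_rate):
    # Placeholder: in the real server this would call into the robot's SDK
    # to drive the motors; here we just log the request.
    print(f"drive vx={vx} vy={vy} yaw={yaw_rate}")

@app.route("/move", methods=["POST"])
def move():
    body = request.json
    send_velocity_command(body.get("vx", 0.0), body.get("vy", 0.0), body.get("yaw", 0.0))
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```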
## Inspiration
Everyone on this team has previously been to post-secondary and noticed that their large group of friends has been slowly dwindling since graduation, especially after COVID. It's already well known that once you leave school it's a lot harder to make friends, so we got the idea to make FriendFinder to match you with people with similar hobbies in the same neighbourhood as you.
## What it does
**Find friends!** When making an account on FriendFinder, you will be asked to input your hobbies, whether you prefer chatting or hanging out, whether you enjoy outdoor activities or not, and your neighbourhood. It then gives other users a relative score based on your profile, with more matching hobbies and preferences yielding a higher score (see the sketch at the end of this write-up). Now whenever you log in, the front page will show you a list of people near you with the highest scores, allowing you to send them friend requests to start a chat.
## How we built it
**With friends!** We used HTML, CSS, and JavaScript for the frontend and Firebase and Firestore for the backend.
## Challenges we ran into
**Our friends...** Just kidding. One of the biggest challenges we faced was the short amount of time (24 hours) in this hackathon. Being first-year students, we made a project of similar scale in school, but over 4 months! Another challenge was that none of us knew how to implement a real-time chat app into our project. At first we wanted to learn a new framework, React, and make the chat app beautiful, but due to time constraints, we researched a simpler way to do it just to give it base functionality.
## Accomplishments that we're proud of
**Our friendship survived!** After the initial scramble to figure out what we were doing, we managed to get a minimum viable product done in 24 hours. We are really proud that we incorporated our knowledge from school, learned something new, and integrated it all together without any major issues.
## What we learned
**Make good friends** The most important thing we learned is that teamwork is one of the most important things needed for a good development team. Being able to communicate with your team and dividing work up by each team member's strengths is what made it possible to finish this project within the strict time limit. The hackathon was a really fun experience and we're really glad that we could form a team together.
## What's next for FriendFinder
**More features to find more friends better**
* beautify the app
* add friend / pending friend requests feature
* security/encryption of messages
* report user function
* more detailed hobby selection list for better matching
* update user's profile / hobby selection list at any time
* let users add photos
* group chat function
* rewrite sections of code to become more efficient
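The app itself is plain JavaScript with Firebase, but the matching idea described under "What it does" can be sketched in a few lines of Python. The field names and weights below are invented for illustration, not FriendFinder's actual values:

```python
# Hypothetical sketch of FriendFinder's match score (weights and fields are made up).
def match_score(me: dict, other: dict) -> float:
    score = 0.0
    # Shared hobbies contribute the most.
    score += 2.0 * len(set(me["hobbies"]) & set(other["hobbies"]))
    # Matching preferences add a smaller bonus.
    if me["prefers_chatting"] == other["prefers_chatting"]:
        score += 1.0
    if me["likes_outdoors"] == other["likes_outdoors"]:
        score += 1.0
    # Being in the same neighbourhood matters a lot for meeting up.
    if me["neighbourhood"] == other["neighbourhood"]:
        score += 3.0
    return score

# Front page: show nearby users with the highest scores first.
def rank_candidates(me, candidates):
    return sorted(candidates, key=lambda c: match_score(me, c), reverse=True)
```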
## Inspiration
Recently, character experiences powered by LLMs have become extremely popular. Platforms like Character.AI, boasting 54M monthly active users and a staggering 230M monthly visits, are a testament to this trend. Yet, despite these figures, most experiences on the market offer text-to-text interfaces with little variation. We wanted to take chatting with characters to the next level. Instead of a simple and standard text-based interface, we wanted intricate visualization of your character with a 3D model viewable in your real-life environment, actual low-latency, immersive, realistic, spoken dialogue with your character, and a really fun dynamic (generated on-the-fly) 3D graphics experience - seeing objects appear as they are mentioned in conversation - a novel innovation only made possible recently.
## What it does
An overview: CharactAR is a fun, immersive, and **interactive** AR experience where you get to speak your character's personality into existence, upload an image of your character or take a selfie, pick their outfit, and bring your custom character to life in an AR world, where you can chat using your microphone or type a question, and even have your character run around in AR! As an additional super cool feature, we compiled, hosted, and deployed the open-source OpenAI Shap-E model (by ourselves, on Nvidia A100 GPUs from Google Cloud) to do text-to-3D generation, meaning your character is capable of generating 3D objects (mid-conversation!) and placing them in the scene. Imagine the Terminator generating robots, or a marine biologist generating fish and other wildlife! Our combination of these novel technologies makes experiences like those possible.
## How we built it
![flowchart](https://i.imgur.com/R5Vbpn6.png)
*So how does CharactAR work?*
To begin, we built <https://charactar.org>, a web application that utilizes AssemblyAI (state-of-the-art speech-to-text) to do real-time speech-to-text transcription. Simply click the "Record" button, speak your character's personality into existence, and click the "Begin AR Experience" button to enter your AR experience. We used HTML, CSS, and JavaScript to build this experience, bought the domain using GoDaddy, and hosted the website on Replit! In the background, we've already used OpenAI Function Calling, a novel OpenAI product offering, to choose a voice for your custom character based on the original description that you provided (a sketch of this step follows below). Once we have the voice and description for your character, we're ready to jump into the AR environment. The AR platform that we chose is 8th Wall, an AR deployment platform built by Niantic which focuses on web experiences. Due to the emphasis on web experiences, any device can use CharactAR, from mobile devices to laptops or even VR headsets (yes, really!). In order to power our customizable character backend, we employed the Ready Player Me avatar generation SDK, providing us a responsive UI that enables our users to create any character they want, from taking a selfie, to uploading an image of their favorite celebrity, or even just choosing from a predefined set of models. Once the model is loaded into the 8th Wall experience, we then use a mix of OpenAI (character intelligence), InWorld (microphone input and output), and ElevenLabs (voice generation) to create an extremely immersive character experience from the get-go.
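As a sketch of that voice-selection step, here is roughly how OpenAI function calling (the 2023-era ChatCompletion API) can force the model to return a structured choice. The function name, voice list, and prompt below are illustrative stand-ins, not CharactAR's actual values:

```python
# Hypothetical sketch: pick a voice for the character via OpenAI function calling.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

VOICES = ["deep_male", "bright_female", "robotic", "elderly", "childlike"]  # made-up IDs

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user",
               "content": "Character description: a gruff retired astronaut who loves dad jokes."}],
    functions=[{
        "name": "choose_voice",
        "description": "Choose the best matching voice for the described character.",
        "parameters": {
            "type": "object",
            "properties": {"voice": {"type": "string", "enum": VOICES}},
            "required": ["voice"],
        },
    }],
    function_call={"name": "choose_voice"},  # force the structured call
)

args = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
print(args["voice"])
```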
We animated each character using the standard Ready Player Me animation rigs, and you can even see your character move around in your environment by dragging your finger on the screen. Each time your character responds to you, we make an API call to our own custom-hosted OpenAI Shap-E API, which is hosted on Google Cloud, running on an NVIDIA A100. A short prompt based on the conversation between you and your character is sent to OpenAI's novel text-to-3D model to generate a 3D object that is automatically inserted into your environment. For example, if you are talking with Barack Obama about his time in the White House, our Shap-E API will generate a 3D object of the White House, and it's really fun (and funny!) in-game to see what Shap-E will generate.
## Challenges we ran into
One of our favorite parts of CharactAR is the automatic generation of objects during conversations with the character. However, the addition of these objects also led to an unfortunate spike in triangle count, which quickly builds up lag. So when designing this pipeline, we worked on reducing unnecessary detail in model generation. One of these methods is selecting the number of inference steps prior to generating 3D models with Shap-E. The other is to compress the generated 3D model, which ended up being more difficult to integrate than expected. At first, we generated the 3D models in the .ply format, but realized that .ply files are a nightmare to work with in 8th Wall. So we decided to convert them into .glb files, which would be more efficient to send through the API and better to include in AR. The .glb files could get quite large, so we used Google's Draco compression library to reduce file sizes by 10 to 100 times. Getting this to work required quite a lot of debugging and package dependency resolving, but it was awesome to see it functioning. Below, we have "banana man" renders from our hosted Shap-E model.
![bananaman_left](https://i.imgur.com/9i94Jme.jpg) ![bananaman_right](https://i.imgur.com/YJyRLKF.jpg)
*Even after transcoding the .glb file with Draco compression, the banana man still stands gloriously (1 MB → 78 KB).*
Although 8th Wall made development much more streamlined, AR development as a whole still has a ways to go, and here are some of the challenges we faced. There were countless undefined errors with no documentation, many of which took hours of debugging to overcome. Working with the animated Ready Player Me models and the .glb files generated by our OpenAI Shap-E model posed a lot of challenges with model formats and dynamically generated models, which required lots of reading up on 3D model formats.
## Accomplishments that we're proud of
There were many small challenges in each of the interconnected portions of the project, and we are proud to have persevered through the bugs and roadblocks. The satisfaction of small victories, like seeing our prompts come to life in 3D or seeing the character walk around our table, always invigorated us to keep on pushing.
Running AI models is computationally expensive, so it made sense for us to allocate this work to Google Cloud's servers. This allowed us to access the powerful A100 GPUs, which made Shap-E model generation thousands of times faster than would be possible on CPUs. This also provided a great opportunity to work with FastAPI to create a convenient and extremely efficient method of inputting a prompt and receiving a compressed 3D representation of the query.
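A stripped-down sketch of what such a hosted text-to-3D endpoint could look like is shown below. This is not the actual CharactAR server: `generate_glb` is a placeholder that stands in for the real Shap-E inference, mesh export, and Draco post-processing.

```python
# Hypothetical sketch of a text-to-3D API server (FastAPI on a GPU box).
from fastapi import FastAPI
from fastapi.responses import FileResponse
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str
    inference_steps: int = 32  # fewer steps => faster, lower-detail meshes

def generate_glb(prompt: str, steps: int) -> str:
    # Placeholder for the real pipeline: run Shap-E on the prompt, export a mesh,
    # convert .ply -> .glb, Draco-compress it, then return the file path.
    raise NotImplementedError

@app.post("/generate")
def generate(prompt: Prompt):
    path = generate_glb(prompt.text, prompt.inference_steps)
    return FileResponse(path, media_type="model/gltf-binary", filename="object.glb")
```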
We integrated AssemblyAI's real-time transcription services to transcribe live audio streams with high accuracy and low latency. This capability was crucial for our project, as it allowed us to convert spoken language into text that could be further processed by our system. The WebSocket API provided by AssemblyAI was secure, fast, and effective in meeting our requirements for transcription.
The function calling capabilities of OpenAI's latest models were an exciting addition to our project. Developers can now describe functions to these models, and the models intelligently output a JSON object containing the arguments for those functions. This feature enabled us to integrate GPT's capabilities seamlessly with external tools and APIs, offering a new level of functionality and reliability.
For enhanced user experience and interactivity between our website and the 8th Wall environment, we leveraged the URLSearchParams interface. This allowed us to pass the initial character prompt along seamlessly.
## What we learned
For the majority of the team, it was our first AR project using 8th Wall, so we learned the ins and outs of building with AR, the A-Frame library, and deploying a final product that can be used by end users. We had also never used AssemblyAI for real-time transcription, so we learned how to use WebSockets for real-time transcription streaming. We also learned so many of the intricacies of 3D objects and their file types, and really got low-level with the meshes, the object file types, and the triangle counts to ensure a smooth rendering experience. Since our project required so many technologies to be woven together, there were many times where we had to find unique workarounds and weave together our distributed systems. Our prompt engineering skills were put to the test, as we needed to experiment with countless phrasings to get our agent behaviors and 3D model generations to match our expectations. After this experience, we feel much more confident in utilizing state-of-the-art generative AI models to produce top-notch content. We also learned to use LLMs for more specific and unique use cases; for example, we used GPT to identify the most important object prompts from a large dialogue transcript, and to choose the voice for our character.
## What's next for CharactAR
Using 8th Wall technology like Shared AR, we could potentially have up to 250 players in the same virtual room, meaning you could play with your friends no matter how far away they are from you. These kinds of collaborative, virtual, and engaging experiences are the types of environments that we want CharactAR to enable. While each CharactAR custom character is animated with a custom rigging system, we believe there is potential for using the new OpenAI Function Calling schema (which we used several times in our project) to generate animations dynamically, meaning we could have endless character animations and facial expressions to match endless conversations.
## Inspiration
There are millions of people around the world who have a physical or learning disability which makes creating visual presentations extremely difficult. They may be visually impaired, suffer from ADHD, or have disabilities like Parkinson's. For these people, being unable to create presentations isn't just a hassle. It's a barrier to learning, a reason for feeling left out, or a career disadvantage in the workplace. That's why we created **Pitch.ai.**
## What it does
Pitch.ai is a web app which creates visual presentations for you as you present. Once you open the web app, just start talking! Pitch.ai will listen to what you say and, in real time, generate a slide deck based on the content of your speech, just as if you had a slideshow prepared in advance.
## How we built it
We used a **React** client combined with a **Flask** server to make our API calls. To continuously listen for audio to convert to text, we used a React library called "react-speech-recognition". Then, we designed an algorithm to detect pauses in the speech in order to separate sentences, which would be sent to the Flask server. The Flask server then uses multithreading to make several API calls simultaneously (a sketch of this fan-out appears below). First, the **MonkeyLearn** API is used to find the most relevant keyword in the sentence. Then, the keyword is sent to **SerpAPI** in order to find an image to add to the presentation. At the same time, an API call is sent to OpenAI's GPT-3 in order to generate a caption to put on the slide. The caption, keyword, and image for a single slide are all combined into an object to be sent back to the client.
## Challenges we ran into
* Learning how to make dynamic websites
* Optimizing audio processing time
* Increasing the efficiency of the server
## Accomplishments that we're proud of
* Made an aesthetic user interface
* Distributed work efficiently
* Good organization and integration of many APIs
## What we learned
* Multithreading
* How to use continuous audio input
* How to use React hooks, animations, Figma
## What's next for Pitch.ai
* Faster and more accurate picture, keyword and caption generation
* "Presentation mode"
* Integrate a database to save your generated presentations
* Customizable templates for slide structure, color, etc.
* Build our own web scraping API to find images
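The fan-out referenced above sends each sentence to several services at once and merges the results into one slide object. In the sketch below, the three helper functions are placeholders standing in for the MonkeyLearn, SerpAPI, and GPT-3 calls:

```python
# Hypothetical sketch of building one slide from a sentence with parallel API calls.
from concurrent.futures import ThreadPoolExecutor

def extract_keyword(sentence: str) -> str:
    return "placeholder-keyword"          # stands in for the MonkeyLearn call

def find_image(keyword: str) -> str:
    return "https://example.com/img.png"  # stands in for the SerpAPI image search

def write_caption(sentence: str) -> str:
    return "placeholder caption"          # stands in for the GPT-3 caption call

def build_slide(sentence: str) -> dict:
    with ThreadPoolExecutor(max_workers=2) as pool:
        keyword_future = pool.submit(extract_keyword, sentence)
        caption_future = pool.submit(write_caption, sentence)
        keyword = keyword_future.result()
        image_url = find_image(keyword)   # depends on the keyword, so it runs afterwards
        return {"keyword": keyword, "image": image_url, "caption": caption_future.result()}

print(build_slide("Mitochondria are the powerhouse of the cell."))
```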
## Inspiration
Emergencies are something that the city must handle on a day-to-day basis, and as residents of Kingston, we understand that every minute counts when responding to a call. We were thus inspired to use Kingston's Open Data resources to model an optimised distribution of emergency services across Kingston.
## What it does
Kingston Bernard - named after the famous Alpine rescue dogs - uses historical data on Fire & Rescue incidents from 2018 to now to map out common emergency areas, whether they be fire, medical, or vehicular. Then, using a greedy metric k-center algorithm, an approximately evenly distributed positional map is generated to inform the Kingston government which locations require the most attention when providing more emergency services (such as highlighting areas that may require more police patrolling, first aid kits, etc.).
## How I built it
The web application uses a React frontend with an Express backend that computes the distribution given a number of units available to place (it generates a map of that many coordinates). It also uses the Google Cloud API to display the data as a Google Map.
## What's next for Kingston Bernard
Kingston Bernard aims to continue improving its algorithm to further optimise distribution, as well as including more data from Open Data Kingston to better implement a resourceful application. We are team 44: ManchurioX#3808, CheezWhiz#8656, and BluCloos#8986
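For reference, the classic greedy (farthest-point) heuristic for the metric k-center problem mentioned above looks roughly like this; the incident coordinates are made up and the distance metric is simplified to 2-D Euclidean for illustration, so this is a sketch of the idea rather than the project's actual backend code:

```python
# Hypothetical sketch of the greedy k-center heuristic (a 2-approximation).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_k_center(points, k):
    """Pick k 'service' locations so the farthest incident is as close as possible."""
    centers = [points[0]]  # start from an arbitrary incident
    while len(centers) < k:
        # Next center: the incident farthest from all chosen centers.
        farthest = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(farthest)
    return centers

# Example: historical incident coordinates (made up) and 3 units to place.
incidents = [(0, 0), (1, 2), (5, 5), (6, 4), (9, 1), (2, 8)]
print(greedy_k_center(incidents, 3))
```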
## Inspiration
The inspiration for InstaPresent came from our frustration with constantly having to create presentations for class, and from the 'in-game advertising' episode of Silicon Valley.
## What it does
InstaPresent is a tool that uses your computer's microphone to generate a presentation in real time. It can retrieve images and graphs and summarize your words into bullet points.
## How we built it
We used Google's Speech-to-Text API to process audio from the laptop's microphone. Speech is captured as the user talks, and when they stop speaking, the aggregated text is sent to the server via WebSockets to be processed.
## Challenges we ran into
Summarizing text into bullet points was a particularly difficult challenge, as there are not many resources available for this task. We ended up developing our own pipeline for bullet-point generation based on part-of-speech and dependency analysis. We also had plans to create an Android app for InstaPresent but were unable to do so due to limited team members and time constraints. Despite these challenges, we enjoyed the opportunity to work on this project.
## Accomplishments that we're proud of
We are proud of creating a web application that utilizes a variety of machine learning and non-machine learning techniques. We also enjoyed the challenge of working on an unsolved machine learning problem (sentence simplification) and being able to perform real-time text analysis to determine new elements.
## What's next for InstaPresent
In the future, we hope to improve InstaPresent by predicting what the user intends to say next and improving the text summarization with word reordering.
## Inspiration
We're 4 college freshmen who were expecting new experiences with interactive and engaging professors in college; however, COVID-19 threw a wrench in that (and a lot of other plans). As all of us are currently learning online through various video lecture platforms, we found that these lectures sometimes move too fast or are just flat-out boring. Summaread is our solution to transform video lectures into an easy-to-digest format.
## What it does
Summaread automatically captures lecture content using an advanced AI NLP pipeline to generate a condensed note outline. All one needs to do is provide a YouTube link to the lecture or a transcript, and the corresponding outline will be rapidly generated for reading. Summaread currently generates outlines that are shortened to about 10% of the original transcript length. The outline can also be downloaded as a PDF for annotation purposes. In addition, our tool uses the Google Cloud API to generate a list of key topics with links to Wikipedia to encourage further exploration of lecture content.
## How we built it
Our project is composed of many interconnected components, which we detail below:
**Lecture Detection**
Our product automatically detects when lecture slides change to improve the performance of the NLP model in summarizing results. This tool uses the Google Cloud Platform API to detect changes in lecture content and records timestamps accordingly.
**Text Summarization**
We use the Hugging Face summarization pipeline to automatically summarize groups of text that are between a certain number of words. This is repeated across every group of text previously generated in the lecture detection step.
**Post-Processing and Formatting**
Once the summarized content is generated, the text is processed into a set of coherent bullet points and split by sentences using natural language processing techniques. The text is also formatted for easy reading by including "sub-bullet" points that give further explanation of the main bullet point.
**Key Concept Suggestions**
To generate key concepts, we used the Google Cloud Platform API to scan over the condensed notes our model generates and provide Wikipedia links accordingly. Some examples of key concepts for a COVID-19 related lecture would be medical institutions, famous researchers, and related diseases.
**Front-End**
The front end of our website was set up with Flask and Bootstrap. This allowed us to quickly and easily integrate our Python scripts and NLP model.
## Challenges we ran into
1. Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening conversational sentences like those found in a lecture into bullet points.
2. Our NLP model is quite large, which made it difficult to host on cloud platforms.
## Accomplishments that we're proud of
1) Making a multi-faceted application with a variety of machine learning and non-machine learning techniques.
2) Working on an unsolved machine learning problem (lecture simplification)
3) Real-time text analysis to determine new elements
## What we learned
1) First time for multiple members using Flask and doing web development
2) First time using the Google Cloud Platform API
3) Running deep learning models makes my laptop run very hot
## What's next for Summaread
1) Improve our summarization model by improving data pre-processing techniques and decreasing run time
2) Add more functionality to generated outlines for a better user experience
3) Allow users to set parameters for how much the lecture is condensed
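As a reference for the text-summarization step described under "How we built it", the Hugging Face summarization pipeline can be driven in just a few lines; the default model and the length limits below are illustrative, not Summaread's exact settings:

```python
# Hypothetical sketch of chunked summarization with the Hugging Face pipeline.
from transformers import pipeline

summarizer = pipeline("summarization")  # default model; a specific checkpoint can be named

def summarize_chunks(chunks, max_len=60, min_len=15):
    """Summarize each slide-sized group of transcript text."""
    summaries = []
    for chunk in chunks:
        out = summarizer(chunk, max_length=max_len, min_length=min_len, do_sample=False)
        summaries.append(out[0]["summary_text"])
    return summaries
```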
## Inspiration
Oftentimes we find ourselves not understanding the content that has been taught in class and rarely remembering what exactly was conveyed. Some of us also have the habit of misplacing notes and forgetting where we put them. So, to help all the ailing students, we had the idea to make an app that would give students automatically curated content from the notes they upload online.
## What it does
A student uploads their notes to the application. The application creates a summary of the notes, additional information on the subject of the notes, flashcards for easy remembering, and quizzes to test their knowledge. There is also the option to view other students' notes (for those who have uploaded them to the same platform) and do all of the above with them as well. We made an interactive website that can help students digitize and share notes!
## How we built it
Google Cloud Vision was used to convert images into text files. We used the Google Cloud NLP API for the formation of questions from the plain text by identifying the entities and syntax of the notes. We also identified the most salient features of the text and assumed them to be the topic of interest. By doing this, we are able to scrape more detailed information on the topic using the Google Custom Search Engine API. We also scrape information from Wikipedia. Then we make flashcards based on the questions and answers, and also make quizzes to test the knowledge of the student. We used Django as the backend to create a web app. We also made a chatbot in Google Dialogflow to enable the use of Google Assistant skills.
## Challenges we ran into
Extending the platform to a collaborative domain was tough. Connecting the chatbot framework to the backend and sending back dynamic responses using a webhook was more complicated than we expected. Also, we had to go through multiple iterations to get our question formation framework right. We used the assumption that the main topic would be the noun at the beginning of the sentence. Also, we had to replace pronouns in order to keep track of the conversation.
## Accomplishments that we're proud of
We have only 3 members in the team, and one of them has a background in electronics engineering and no experience in computer science. We had an idea of what we were planning to make but no idea of how we would make it. We are very proud to have achieved a fully functional application at the end of this 36-hour hackathon. We learned a lot of concepts regarding UI/UX design, backend logic formation, connecting backend and frontend in Django, and general software engineering techniques.
## What we learned
We learned a lot about the problems of integration and deploying an application. We also had a lot of fun making this application because we had the motivation to contribute to a large number of people in day-to-day life. Also, we learned about NLP, UI/UX, and the importance of having a well-set plan.
## What's next for Noted
In the best-case scenario, we would want to convert this into an open-source startup and help millions of students with their studies, so that they can score good marks in their upcoming examinations.
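A small sketch of the OCR step described above, using the Google Cloud Vision Python client (this assumes standard application credentials are configured, and the exact response handling may vary by client-library version):

```python
# Hypothetical sketch: extract raw text from an uploaded notes image with Cloud Vision.
from google.cloud import vision

def image_to_text(path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)  # handles dense/handwritten text
    return response.full_text_annotation.text

print(image_to_text("lecture_notes.jpg"))
```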
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto's Entrepreneurship Hatchery incubator executing on our vision. We've built a sweet platform that solves many of the issues surrounding meetings, but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting, but sentiment can be used anywhere from the boardroom to the classroom to a psychologist's office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO and Chart.js. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker and Kubernetes to launch the software, making it scalable right out of the box. For all relevant processing, we used Google Speech-to-Text, Google Diarization, Stanford Empath, scikit-learn and GloVe (for word-to-vec).
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours. Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format. Apart from that, we didn't encounter any major roadblocks, but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off, as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market, so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned too much about how computers store numbers (:p), and did a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup's vision for the future of meetings, so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
## Inspiration
The post-COVID era has increased the number of in-person events and the need for public speaking. However, more individuals are anxious about publicly articulating their ideas, whether for a class presentation, a technical workshop, or an upcoming interview. It is often difficult for audience members to catch the true intent of the presenter, so key factors including tone of voice, verbal excitement and engagement, and physical body language can make or break a presentation. A few weeks ago, during our first project meeting, we were responsible for leading the meeting and were overwhelmed with anxiety. Despite knowing the content of the presentation and having done projects for a while, we understood the impact that a single below-par presentation could have. To the audience, you may look unprepared and unprofessional, despite knowing the material and simply being nervous. Regardless of your intentions, this can leave a bad taste in the audience's mouth. As a result, we wanted to create a judgment-free platform to help presenters understand how an audience might perceive their presentation. By creating Speech Master, we provide an opportunity for presenters to practice without facing a real audience while still receiving real-time feedback.
## Purpose
Speech Master aims to provide a platform for practice presentations with real-time feedback that captures details about your body language and verbal expression. In addition, presenters can invite real audience members to their practice sessions, where those audience members can provide real-time feedback that the presenter can use to improve. Practice presentations are recorded and saved for later reference, so presenters can go back and review feedback from the ML models as well as from live audiences. Presenters get a user-friendly dashboard to cleanly organize their presentations and review them before upcoming events. After each practice presentation, the data aggregated during the recording is processed to generate a final report. The final report includes the most common emotions expressed verbally as well as times when the presenter's physical body language could be improved. The timestamps are also saved to show the presenter, alongside the video playback, when the alerts arose and what might have caused them in the first place.
## Tech Stack
We built the web application using [Next.js v14](https://nextjs.org), a React-based framework that seamlessly integrates backend and frontend development. We deployed the application on [Vercel](https://vercel.com), the parent company behind Next.js. We designed the website using [Figma](https://www.figma.com/) and later styled it with [TailwindCSS](https://tailwindcss.com) to streamline the styling, allowing developers to put styling directly into the markup without the need for extra files. Code formatting and linting are maintained via [Prettier](https://prettier.io/) and [EsLint](https://eslint.org/). These tools were run on every commit by pre-commit hooks configured with [Husky](https://typicode.github.io/husky/).
[Hume AI](https://hume.ai) provides the [Speech Prosody](https://hume.ai/products/speech-prosody-model/) model with a streaming API enabled through native WebSockets, allowing us to provide emotional analysis to a presenter in near real time. The analysis helps the presenter see the various emotions conveyed through tune, rhythm, and timbre.
Google and [Tensorflow](https://www.tensorflow.org) provide the [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:%7E:text=MoveNet%20is%20an%20ultra%20fast,17%20keypoints%20of%20a%20body.) model, a large improvement over the prior [PoseNet](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) model, which allows for real-time pose detection. MoveNet is an ultra-fast and accurate model capable of detecting 17 body keypoints at 30+ FPS on modern devices.
To handle authentication, we used [Next Auth](https://next-auth.js.org) to sign in with Google, hooked up to a [Prisma Adapter](https://authjs.dev/reference/adapter/prisma) to interface with [CockroachDB](https://www.cockroachlabs.com), allowing us to maintain user sessions across the web app. [Cloudinary](https://cloudinary.com), an image and video management system, was used to store and retrieve videos. [Socket.io](https://socket.io) was used to interface with WebSockets to enable the messaging feature, allowing audience members to provide feedback to the presenter while simultaneously streaming video and audio. We utilized various services within Git and GitHub to host our source code, run continuous integration via [Github Actions](https://github.com/shahdivyank/speechmaster/actions), make [pull requests](https://github.com/shahdivyank/speechmaster/pulls), and keep track of [issues](https://github.com/shahdivyank/speechmaster/issues) and [projects](https://github.com/users/shahdivyank/projects/1).
## Challenges
It was our first time working with Hume AI and a streaming API. We had experience with traditional REST APIs, which is what Hume AI's batch API uses, but the streaming API was more advantageous for providing real-time analysis. Instead of an HTTP client such as Axios, it required creating our own WebSockets client and calling the API endpoint from there. It was also a hurdle to capture and save the correct audio format to be able to call the API, while also syncing audio with the webcam input.
We also worked with Tensorflow, an end-to-end machine learning platform, for the first time. As a result, we faced many hurdles when trying to set up Tensorflow and get it running in a React environment. Most of the documentation uses Python SDKs or vanilla HTML/CSS/JS, which were not options for us. Attempting to convert the vanilla JS to React proved to be difficult due to the complexities of execution order and React's useEffect and useState hooks. Eventually, a working solution was found; however, it can still be improved to yield better performance and fewer bugs.
We originally wanted to use the YouTube API for video management, where users would be able to post and retrieve videos from their personal accounts. Next Auth and YouTube did not originally agree in terms of available scopes and permissions, and once that was resolved, more issues arose. We were unable to find documentation for a Node.js SDK and eventually even reached our quota. As a result, we decided to drop YouTube, as it did not provide a feasible solution, and found Cloudinary instead.
## Accomplishments
We are proud of being able to incorporate machine learning into our application for a meaningful purpose. We did not want to reinvent the wheel by creating our own models, but rather use existing and incredibly powerful models to create new solutions.
Although we did not hit all the milestones that we were hoping to achieve, we are still proud of the application that we were able to make in such a short amount of time, and of being able to deploy the project as well. Most notably, we are proud of our Hume AI and Tensorflow integrations that took our application to the next level. Those two features took the most time, but they were also the most rewarding, as in the end we got to see real-time updates of our emotional and physical states. We are proud of being able to run the application and get feedback in real time, which gives small cues to the presenter on what to improve without risking distracting the presenter completely.
## What we learned
Each of the developers learned something valuable, as each of us worked with a new technology that we did not know previously. Notably, Prisma and its integration with CockroachDB made sessions and general database usage simple and user-friendly. Interfacing with CockroachDB caused barely any problems, and it was a powerful tool to work with. We also expanded our knowledge of WebSockets, both native and Socket.io. Our prior experience was more rudimentary, but building upon that knowledge showed us the power WebSockets have, both when used internally within the application and with external APIs, and how they can enable real-time analysis.
## Future of Speech Master
The first step for Speech Master will be to shrink the codebase. Currently, there is tons of potential for components to be created and reused. Structuring the code to be more strict and robust will ensure that when adding new features the codebase will be readable, deployable, and functional. The next priority will be responsiveness: due to the lack of time, many components appear strangely on different devices, throwing off the UI and potentially making the application unusable.
Once the current codebase is restructured, we would be able to focus on optimization, primarily of the machine learning models and the audio/visual handling. Currently, there are multiple audio and video streams being used to show webcam footage, stream footage to other viewers, and send data to Hume AI for analysis. By reducing the number of streams, we should see significant performance improvements, with which we can upgrade our audio/visual streaming to use something more appropriate and robust.
In terms of new features, Speech Master would benefit greatly from additional forms of audio analysis, such as speed and volume. Different presentations and environments require different talking speeds and volumes of speech. Given some initial parameters, Speech Master should be able to reflect on those measures. In addition, having transcriptions that can be analyzed for vocabulary and diction, ensuring that appropriate language is used for a given target audience, would drastically improve the way a presenter could prepare for a presentation.
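For reference, the MoveNet model mentioned in the tech stack can also be loaded from TensorFlow Hub in Python (the app itself uses the TensorFlow.js version); a rough sketch of a single-frame inference, following the standard TF Hub usage pattern, might look like this:

```python
# Hypothetical sketch: single-pose keypoint detection with MoveNet from TF Hub.
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def detect_keypoints(frame):
    """frame: an RGB image tensor of shape [height, width, 3]."""
    img = tf.expand_dims(frame, axis=0)
    img = tf.image.resize_with_pad(img, 192, 192)     # the lightning variant expects 192x192
    outputs = movenet(tf.cast(img, dtype=tf.int32))
    return outputs["output_0"]  # shape [1, 1, 17, 3]: (y, x, confidence) per keypoint
```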
## Inspiration
We wanted a low-anxiety tool to boost our public speaking skills. With an ever-accelerating shift of communication away from face-to-face and towards pretty much just memes, it's becoming difficult for younger generations to express themselves or articulate an argument without a screen as a proxy.
## What does it do?
DebateABot is a web app that allows the user to pick a topic and make their point, while arguing against our chat bot.
## How did we build it?
Our website is bootstrapped with JavaScript/jQuery and HTML5. The user can talk to our web app, which uses NLP to convert speech to text and sends the text to our server, which was built with PHP with background processing written in Python. We perform keyword matching and search-result ranking using the indico API, after which we run sentiment analysis on the text. The counter-argument, as a string, is sent back to the web app and is read aloud to the user using the Mozilla Web Speech API.
## Some challenges we ran into
First off, trying to use the Watson APIs and the Azure APIs led to a lot of initial difficulties trying to get set up and get access. Early on we also wanted to use our Amazon Echo, but reached a point where it wasn't realistic to use AWS and Alexa skills for what we wanted to do. A common theme amongst other challenges has simply been sleep deprivation; staying up past 3am is a sure-fire way to exponentiate your rate of errors and bugs. The last significant difficulty is the bane of most software projects, and ours is no exception: integration.
## Accomplishments that we're proud of
The first time that we got our voice input to print out on the screen, in our own program, was a big moment. We also kicked ass as a team! This was the first hackathon EVER for two of our team members, and everyone had a role to play and was able to be fully involved in developing our hack. Also, we just had a lot of fun together. Spirits were kept high throughout the 36 hours, and we lasted a whole day before swearing at our chat bot. To our surprise, instead of echoing out our exclaimed profanity, the Web Speech API read aloud "eff-asterisk-asterisk-asterisk you, chat bot!" It took 5 minutes of straight laughing before we could get back to work.
## What we learned
The Mozilla Web Speech API does not swear! So don't get any ideas when you're talking to our innocent chat bot...
## What's next for DebateABot?
While DebateABot isn't likely to evolve into the singularity, it definitely has the potential to become a lot smarter. The immediate next step is to port the project over to be usable with Amazon Echo or Google Home, which eliminates the need for a screen, making the conversation more realistic. After that, it's a question of taking DebateABot and applying it to something important to YOU. Whether that's a way to practice for Model UN or practice your thesis defence, it's just a matter of collecting more data. <https://www.youtube.com/watch?v=klXpGybSi3A>
## Inspiration
No one likes being stranded at late hours in an unknown place with unreliable transit as the only safe, affordable option to get home. Between paying for an expensive taxi ride yourself or sharing a taxi with random street-goers, the current options aren't looking great. WeGo aims to streamline taxi ride sharing, creating a safe, efficient and affordable option.
## What it does
WeGo connects you with people around you who have similar destinations and are also looking to share a taxi. The application aims to reduce taxi costs by splitting rides, improve taxi efficiency by intelligently routing taxis, and improve sustainability by encouraging ride sharing.
### User Process
1. User logs in to the app/web
2. Nearby riders requesting rides are shown
3. The user then may choose to "request" a ride by entering a destination.
4. Once the system finds a suitable group of people within close proximity, the user will be sent the taxi pickup and rider information. (Taxi request is initiated)
5. User hops on the taxi, along with other members of the application!
## How we built it
The user begins by logging in through their web browser (ReactJS) or mobile device (Android). Through API calls to our NodeJS backend, our system analyzes outstanding requests and intelligently groups people together based on location, user ratings & similar destinations - all in real time.
## Challenges we ran into
A big hurdle we faced was the complexity of our ride analysis algorithm. To create the most cost-efficient solution for the user, we wanted to always try to fill up taxi cars completely. This, along with scaling up our system to support multiple locations with high taxi request traffic, was definitely a challenge for our team.
## Accomplishments that we're proud of
Looking back on our work over the 24 hours, our team is really excited about a few things about WeGo. First, the fact that we're encouraging sustainability on a city-wide scale is something really important to us. With the future leaning towards autonomous vehicles & taxis, we see having a system like WeGo in place as something necessary for the future. On the technical side, we're really excited to have a single, robust backend that can serve our multiple front-end apps. We see this as something necessary for mass adoption of any product, especially for solving a problem like ours.
## What we learned
Our team members definitely learned quite a few things over the last 24 hours at nwHacks! (Both technical and non-technical!) Working under a time crunch, we really had to rethink how we managed our time to ensure we were always working efficiently towards our goal. Coming from different backgrounds, team members learned new technical skills such as interfacing with the Google Maps API, using Node.js on the backend, or developing native mobile apps with Android Studio. Through all of this, we learned that persistence is key when solving a new problem outside of your comfort zone. (Sometimes you need to throw everything and the kitchen sink at the problem at hand!)
## What's next for WeGo
The team wants to look at improving the overall user experience with a better UI, figuring out better tools for specifically what we're looking for, and adding improved taxi & payment integration services.
# Sharemuters
Web application built on NodeJS that uses the Mojio API to get user data. Finds other people that have similar trips as you so you can carpool with them and share the gas money! Uses CockroachDB for storing user data.
## Built at nwhacks2017
### Sustainable Future
Reaching a sustainable future requires intermediate steps to achieve. Electric car adoption is currently under 1% in both the US and Canada. Most cities do not have an amazing transportation system like Vancouver; they rely on cars. Carpooling can reduce the environmental impact of driving, and benefit users by saving money, meeting new people, and making commutes more enjoyable.
### Barriers
Transport Canada states the biggest barriers to carpooling are personal safety, flexibility, and the effort to organize. Sharemuters targets all three of these inconveniences. Users are vetted through Mojio accounts, and the web app pairs drivers with similar commutes.
### Benefits
Carpooling is a large market; currently just under 10% of drivers in the US and Canada carpool, and 75% of those drivers carpool with family. In addition to reducing pollution, carpooling saves money on gas and parking, reduces time spent commuting, and offers health benefits from reduced stress. Many companies also offer perks to employees that carpool.
## Inspiration
As university students and soon-to-be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has long existed as a method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling as a way of commuting, we hope to work towards more sustainable cities.
## What it does
FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as account verification to make signing into your account secure yet effortless.
## How we built it
We used IBM Watson Visual Recognition as a way to recognize users from a live image, after which they can request money from riders in the carpool by taking a picture of them and calling our API, which leverages the Interac e-Transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib, which collects information from the Interac, IBM Watson, Firebase and SmartCar APIs.
## Challenges we ran into
IBM's facial recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges that came up as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough when considering the flow for authenticating the SmartCar, as it required a redirection of the URL.
## Accomplishments that we're proud of
We successfully got all of our APIs to work together! (SmartCar API, Firebase, Watson, StdLib, Google Maps, and our own Standard Library layer). Another tough feat we accomplished was the entire webcam-to-image-to-API flow, which wasn't trivial to design or implement.
## What's next for FaceLyft
While creating FaceLyft, we created a security API for requesting payment via visual recognition. We believe that this API can be used in many more scenarios than carpooling, and we hope to expand it into different use cases.
## Inspiration
Drawing inspiration from our personal academic journeys and identifying challenges faced by fellow students, we wanted to create a solution that resonated with students seeking motivation tailored to their unique circumstances. Reflecting on our own experiences, we acknowledged that attendance struggles were not always rooted in a lack of motivation; sometimes external factors played a role. This realization fueled the integration of flexibility into our goal-setting. While achieving goals in their entirety is undoubtedly ideal, we recognized the importance of striking a balance between productivity and well-being. In contrast to traditional methods that rely on guilt and unrealistic objectives, our approach embraces gamification and realistic metrics. By doing so, we aimed to create a positive and achievable path toward academic success and class attendance, acknowledging the nuanced nature of students' lives and motivations. Seeing a lack of attendance in class also discourages other people from going, and a valuable part of education is learning from others and peers and making that in-person connection.
## What it does
While onboarding, Rise allows its users to select their weekly attendance goal based on their personal circumstances. Users are also able to either manually input their class schedule or import their entire class schedule as an .ics calendar file. The app uses geolocation to record whether or not the user has attended class, as well as to determine if the user's friends have also attended class and are nearby. Rise incorporates a gamification aspect through the character that lives in the app, who is happy as you consistently meet your personalized goal and sad when you don't. As you attend class, you receive a 'sun' currency that you can use to buy customizations for your character. As a user consistently attends class, they maintain an attendance streak. This contributes towards their weekly attendance goal, motivating students to keep up their attendance so they don't lose their streak. A user would be able to add their friends and optionally share notifications with each other when the other leaves for or arrives at class! There would also be notifications for when a user should leave for their class, based on their current location, as well as for when a user's attendance starts to improve or decline.
## How we built it
First, the app was wireframed, and then designed and prototyped with Figma. Then, we set up the frontend with React Native so that our app could run on both iOS and Android devices. The backend is built with Node.js, a JavaScript runtime. For the database, we used MongoDB and Mongoose to maintain the structure of our data. To predict the likelihood of whether or not the user will attend class on a given weekday based on prior data, we used Brain.js, a JavaScript library for neural networks, in the backend. To host the backend, we used AWS Lambda.
## Challenges we ran into
None of our members had prior experience integrating an AI model into an application, so it was a struggle to both learn about how AI works and train the AI model while getting it to predict data accurately. Initially, we had planned on using TensorFlow due to its popularity, but we decided to pivot to Brain.js as it was more beginner friendly. In addition, we ran into challenges connecting our backend to our frontend with CORS and hosting.
The time constraint was also a challenge to work with, given our lack of experience in working with both React Native and AI.
## Accomplishments that we're proud of
For two of our three members, it was their first hackathon, which is a huge accomplishment. Some of our proudest work is the time we took on our design to make it more personal and user-friendly. We were able to successfully use Brain.js to make a basic AI model and train it on the data we provided, which was an accomplishment given that this was the first time we worked with AI. The fact that we were able to make a portion of a working full-stack application within the time limit was also something we are proud of.
## What we learned
We learned a lot about UX/UI design interfaces and how to use Figma. Going through the process of designing the application allowed us to learn how to use the tools of Figma more efficiently. This was the first time some of us had used technologies such as React Native, JavaScript, GitHub, and MongoDB. We learned how to use React Native to implement features and style our application. Using JavaScript, we were able to build a working backend. With MongoDB, we learned how a database works as well as how it interacts with a backend. Using Git, we learned how to collaborate with one another on a shared codebase by using commands such as commit, push, and pull. We also learned about AI concepts and how to create a basic AI model. We learned more about the process of training the model to acquire better predictions.
## What's next for Rise
Unfortunately, due to the time constraints, we were unable to implement the onboarding flow or develop some of the more advanced features that were in our initial design. Completing these would be our immediate next step for Rise. We also wanted to implement a notification feature as explained in the 'What it does' section. We envisioned a more advanced AI model that could detect deviations in attendance as well as more accurately predict the likelihood of a user attending classes on a given day in the future. The next step would be for us to learn more about how AI works and relevant technologies, and use that knowledge to create a more powerful model. In addition, the design of our app could be fleshed out to look more sophisticated and eye-catching. Based on user feedback, our app could be made more accommodating and helpful towards those who are unable to make it to class for reasons out of their control. We would take these into consideration so that Rise can help more people make it to class.
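The weekly-goal and streak bookkeeping described in 'What it does' is simple enough to sketch. Below is a small, hypothetical Python version of that logic; the real app keeps this state in MongoDB behind its Node.js backend, so the data shapes and function names here are assumptions for illustration only.

```python
from datetime import date, timedelta

def weekly_goal_met(attended_dates, week_start, goal):
    """Count attendances that fall inside the week starting at week_start (a Monday)."""
    week_end = week_start + timedelta(days=7)
    count = sum(1 for d in attended_dates if week_start <= d < week_end)
    return count >= goal, count

def current_streak(weekly_results):
    """Length of the run of most recent consecutive weeks where the goal was met."""
    streak = 0
    for met in reversed(weekly_results):  # newest week last
        if not met:
            break
        streak += 1
    return streak

# Hypothetical data: a user with a goal of 3 classes per week
attended = [date(2024, 1, 8), date(2024, 1, 9), date(2024, 1, 11)]
met, count = weekly_goal_met(attended, week_start=date(2024, 1, 8), goal=3)
print(met, count)                        # True 3
print(current_streak([True, True, met]))  # 3
```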
## Inspiration
We've noticed that it's often difficult to form intentional and lasting relationships when life moves so quickly. This issue has only been compounded by the pandemic, as students spend more time than ever isolated from others. As social media is increasingly making the world feel more "digital", we wanted to provide a means for users to develop tangible and meaningful connections. Last week, I received an email from my residential college inviting students to sign up for a "buddy program" where they would be matched with other students with similar interests to go for walks, to the gym, or for a meal. The program garnered considerable interest, and we were inspired to expand upon the Google Forms setup into a more full-fledged social platform.
## What it does
We built a social network that abstracts away the tediousness of scheduling and reduces the "activation energy" required to reach out to those you want to connect with. Scheduling a meeting with someone on your friend's feed is only a few taps away. Our schedule-matching algorithm automatically determines the best times for the meeting based on the inputted availabilities of both parties. Furthermore, since forming meaningful connections is a process, we plan to provide data-driven reminders and activity suggestions to keep the ball rolling after an initial meeting.
## How we built it
We built the app for mobile, using React Native to leverage cross-platform support. We used Redux for state management and Firebase for user authentication.
## Challenges we ran into
Getting the environment (emulators, dependencies, Firebase) configured was tricky because of the many different setup methods. Also, getting the state management with Redux set up was challenging given all the boilerplate needed.
## Accomplishments that we're proud of
We are proud of the cohesiveness and cleanliness of our design. Furthermore, the structure of state management with Redux drastically improved maintainability and scalability, allowing data to be passed around the app seamlessly.
## What we learned
We learned how to create an end-to-end app in Flutter, wireframe in Figma, and use APIs like Firebase authentication and dependencies like React-Redux.
## What's next for tiMe
Further flesh out the post-meeting follow-ups for maintaining connections and relationships.
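The schedule-matching step described in 'What it does' boils down to intersecting two availability lists and ranking the overlaps. Here is a minimal Python sketch of that idea (tiMe itself is a React Native/JavaScript app); the data shapes and the "longest overlap first" ranking are assumptions, not the app's exact algorithm.

```python
def top_meeting_times(avail_a, avail_b, top_n=3):
    """Intersect two availability lists of (start_hour, end_hour) tuples and
    return the top_n overlaps, longest first."""
    overlaps = []
    for a_start, a_end in avail_a:
        for b_start, b_end in avail_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:                  # the two windows actually overlap
                overlaps.append((end - start, start, end))
    overlaps.sort(reverse=True)              # longest overlap first
    return [(start, end) for _, start, end in overlaps[:top_n]]

# Hypothetical availabilities, in hours of the day
alice = [(9, 11), (13, 17)]
bob = [(10, 12), (14, 15.5), (16, 18)]
print(top_meeting_times(alice, bob))  # [(14, 15.5), (16, 17), (10, 11)]
```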
## Inspiration
The impact of COVID-19 has had lasting effects on the way we interact and socialize with each other. Even when engulfed by bustling crowds and crowded classrooms, it can be hard to find our friends and the comfort of not being alone. Too many times have we grabbed lunch, coffee, or boba alone, only to find out later that there was someone who was right next to us! Inspired by our undevised use of Apple's FindMy feature, we wanted to create a cross-device platform that's actually designed for promoting interaction and social health!
## What it does
Bump! is a geolocation-based social networking platform that encourages and streamlines day-to-day interactions.
**The Map**
On the home map, you can see all your friends around you! By tapping on their icon, you can message them or, even better, Bump! them. If texting is like talking, you can think of a Bump! as a friendly wave. Just a friendly Bump! to let your friends know that you're there! Your bestie cramming for a midterm at Moffitt? Bump! them for good luck! Your roommate in the classroom above you? Bump! them to help them stay awake! Your crush waiting in line for a boba? Make that two bobas! Bump! them.
**Built-in Chat**
Of course, Bump! comes with a built-in messaging chat feature!
**Add Your Friends**
Add your friends to allow them to see your location! Your unique settings and friends list are tied to the account that you register and log in with.
## How we built it
Using React Native and JavaScript, Bump! is built for both iOS and Android. For the backend, we used MongoDB and Node.js. The project consisted of four major and distinct components.
**Geolocation Map**
For our geolocation map, we used Expo's geolocation library, which allowed us to cross-match the positional data of all the user's friends.
**User Authentication**
The user authentication process was built using additional packages such as Passport.js, Jotai, and Bcrypt.js. Essentially, we wanted to store new users through registration and verify old users through login by searching them up in MongoDB, hashing and salting their password for registration using Bcrypt.js, and comparing their password hash to the existing hash in the database for login. We also used Passport.js to create JSON Web Tokens, and Jotai to store user ID data globally in the front end.
**Routing and Web Sockets**
To keep track of user location data, friend lists, conversation logs, and notifications, we used MongoDB as our database and a Node.js backend to save and access data from the database. While this worked for the majority of our use cases, using HTTP protocols for instant messaging proved to be too slow and clunky, so we made the design choice to include WebSockets for client-client communication. Our architecture involved using the server as a WebSocket host that would receive all client communication but would filter messages so they would only be delivered to the intended recipient.
**Navigation and User Interface**
For our UI, we wanted to focus on simplicity, cleanliness, and neutral aesthetics. After all, we felt that the Bump! experience was really about the time spent with friends rather than on the app, so we designed the UX such that Bump! is really easy to use.
## Challenges we ran into
To begin, package management and setup were fairly challenging. Since we had never done mobile development before, having to learn how to debug, structure, and develop our code was definitely tedious.
In our project, we initially programmed our frontend and backend completely separately; integrating them both and working out the moving parts was really difficult and required everyone to teach each other how their part worked. When building the instant messaging feature, we ran into several design hurdles; HTTP requests are only half-duplex, as they are designed with client initiation in mind. Thus, there is no elegant method for server-initiated client communication. Another challenge was that the server needed to act as the host for all WebSocket communication, resulting in the need to selectively filter and send received messages. ## Accomplishments that we're proud of We're particularly proud of Bump! because we came in with limited or no mobile app development experience (in fact, this was the first hackathon for half the team). This project was definitely a huge learning experience for us; not only did we have to grind through tutorials, youtube videos, and a Stack Overflowing of tears, we also had to learn how to efficiently work together as a team. Moreover, we're also proud that we were not only able to build something that would make a positive impact in theory but a platform that we see ourselves actually using on a day-to-day basis. Lastly, despite setbacks and complications, we're super happy that we developed an end product that resembled our initial design. ## What we learned In this project, we really had an opportunity to dive headfirst in mobile app development; specifically, learning all about React Native, JavaScript, and the unique challenges of implementing backend on mobile devices. We also learned how to delegate tasks more efficiently, and we also learned to give some big respect to front-end engineers! ## What's next for Bump! **Deployment!** We definitely plan on using the app with our extended friends, so the biggest next step for Bump! is polishing the rough edges and getting it on App Stores. To get Bump! production-ready, we're going to robustify the backend, as well as clean up the frontend for a smoother look. **More Features!** We also want to add some more functionality to Bump! Here are some of the ideas we had, let us know if there's any we missed! * Adding friends with QR-code scanning * Bump! leaderboards * Status updates * Siri! "Hey Siri, bump Emma!"
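The register/login flow described under "User Authentication" (hash and salt on registration, compare hashes on login) is the same pattern in any language. A minimal sketch using Python's bcrypt package is below; Bump! itself does this with Bcrypt.js in Node, and the function names here are purely illustrative.

```python
import bcrypt

def register(password: str) -> bytes:
    """Hash and salt a new user's password before storing it in the database."""
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def login(password: str, stored_hash: bytes) -> bool:
    """Compare a login attempt against the hash saved for that user."""
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = register("hunter2")        # value persisted in the users collection
print(login("hunter2", stored))     # True
print(login("wrong-pass", stored))  # False
```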
## Inspiration
Road safety has become an escalating concern in recent years. According to Transport Canada, the number of collisions and casualties has been rising in the past three years. Meanwhile, as AI technology grows exponentially, we identified a niche where we could leverage powerful AI to offer valuable advice and feedback to drivers of all levels, effectively promoting road safety.
## What it does
Our system utilizes Computer Vision (CV) techniques and gyroscopes to collect meaningful data about driving performance, such as whether one does shoulder checks on turns or drives when drowsy. The data is then passed to the backend and analyzed by a Large Language Model (LLM). When users want to review their performance and ask for advice, they can ask the LLM to find relevant driving records and offer helpful insights. Examples of potential use cases are supplementing driving lessons or exams for learners, encouraging concerned drivers to enforce good driving habits, providing objective evaluation of professional driving service quality, etc.
## How we built it
Our system consists of:
**a data collection script (Python, React Native),**
* Runs a CV algorithm (utilizing Roboflow models) and streams the output with the video and gyroscope (from mobile) data to the frontend web app
**a frontend web app (React),**
* Receives and displays driving performance data
* Provides an interface to review the driving records and interact with the LLM to get valuable insights
* Authenticates user logins with Auth0
**a backend (Flask),**
* Connects to Google Gemini for LLM interactions
* Transfers data and LLM outputs between the frontend and the database (utilizing VectorSearch to extract relevant trip records as context for the LLM to generate advice)
**a database (MongoDB Atlas),**
* Saves and transfers metadata and analysis information of each driving trip
* Configured to support VectorSearch
**a cloud storage service (Google Cloud Storage)**
* Hosts driving videos, which are media data of larger sizes
## Challenges we ran into
* Setting up web sockets to connect individual components for real-time data transfer
* Configuring Auth0 to perform authentication correctly within the React app
* Deciding how to store videos (saving them to the DB as BLOBs vs. using a paid cloud storage service)
## Accomplishments that we're proud of
* Built components that achieve their corresponding functionalities (identifying shoulder checks and closed eyes, interacting with the LLM, querying the database by vectors, etc.)
* Overcame or worked around errors arising from using libraries or SDKs
## What we learned
* Collaborating as a small team to deliver quickly
* Utilizing web sockets for real-time data transfer
* Utilizing VectorSearch in MongoDB Atlas to query documents
* Utilizing Auth0 for authentication
* Connecting various programs and technologies to construct the end-to-end system
## What's next for DriveInsight
* Consulting domain experts (drivers, driving examiners, etc.) for more driving habits to monitor
* Fine-tuning specialized LLMs for even more robust and insightful responses
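The retrieval step described in "How we built it" (using Atlas Vector Search to pull relevant trip records as LLM context) can be sketched with a `$vectorSearch` aggregation via PyMongo. The database, collection, index, and field names below are assumptions, and the query vector would come from whatever embedding model the team actually used.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")   # placeholder connection string
trips = client["driveinsight"]["trips"]               # hypothetical db/collection names

def relevant_trips(query_embedding, k=5):
    """Use Atlas Vector Search to pull the k trip records most similar to the
    user's question, to be passed to the LLM as context."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "trip_vector_index",   # assumed index name
                "path": "embedding",            # assumed field holding the vector
                "queryVector": query_embedding,
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"_id": 0, "summary": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(trips.aggregate(pipeline))
```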
## Inspiration We were looking for an innovative solution to keep us aware of what we were eating. ## What it does Nutlogger is a web and mobile application that tracks nutritional data over a period of the day, month, year. With this data, we hope that users can get a better understanding of their eating habits. ## How I built it Lots of hard work and patience. The web application was built with MERN and the mobile application was built with Android Studio. ## Challenges I ran into Parsing the information for Google's vision API was difficult. ## Accomplishments that I'm proud of Developing a functional application that actually works ## What I learned * Google cloud platform * React Typescript * android camera ## What's next for Nutlogger * account system and profiles * admin panel for doctors * chat with nutritionists ## Credits: Icons made by Freepik from [www.flaticon.com](http://www.flaticon.com) is licensed by CC 3.0 BY
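As a rough illustration of the Vision API step mentioned above, here is a hedged Python sketch that sends an image for label detection and keeps only reasonably confident labels; the confidence cutoff and any mapping from labels to nutrition data are assumptions, not Nutlogger's actual parsing logic.

```python
from google.cloud import vision

def food_labels(image_path, min_score=0.7):
    """Send an image to the Vision API and keep reasonably confident labels."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [
        (label.description, round(label.score, 2))
        for label in response.label_annotations
        if label.score >= min_score
    ]

# e.g. food_labels("lunch.jpg") might return [("Food", 0.97), ("Banana", 0.92), ...]
```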
## Inspiration
Ontario Provincial Police say distracted driving continues to be the No. 1 cause of accidents in the province. According to AAA, distracted driving was a factor in nearly 6 out of 10 moderate-to-severe teen crashes. This personal element was a contributing factor in our decision to pursue this project. Through DriveToArrive, we hope to reduce the number of crashes that result from distracted driving and encourage safe driving practices.
## What it does
A video stream is taken in from the camera. The video frames are continually analyzed to detect the presence or absence of an individual's face and eyes. If a face or eyes are unable to be detected, an audible message is played alerting the driver to pay attention to the road.
## How we built it
DriveToArrive implements Python and OpenCV. A multitude of facial image data is provided to the program. This data comprises arbitrarily good or bad images of a face. Incoming frames from a video stream are analyzed using this pre-trained model, and the logic-based decision-making process is executed. When an individual frame is analyzed, a 1 is stored within a continually updated array if a face or eyes are detected; if not, a 0 is stored. This array is used to calculate a rolling percentage of consecutive frames in which the driver is distracted. 75% was chosen as the threshold for determining whether a driver is distracted. The driver is alerted with a different audible message depending on whether the program determines an absence of a face, or the presence of a face but the absence of eyes.
## Challenges we ran into
Initially, we had planned to use a Raspberry Pi to run our application, but when borrowing hardware a critical cable was missing. This held up progress for a long period of time, as the cable had to be purchased before we could continue. The other challenge was getting access to a monitor, as a limited number were available at the facility, all of which were in high demand. As we were trying to resolve these issues we lost a great deal of time, which limited the time invested into the project. Additionally, we were going to use IR sensors to detect, more accurately, whether a person's eyes are open or closed while driving, but the IR sensors did not behave as expected. This once again resulted in a great loss of time, as implementing and testing the sensors was time-consuming and did not end up working. While using OpenCV, our team had difficulties distinguishing between a driver having their eyes closed and having their face turned away. This needed to be handled by OpenCV after the IR sensors were not an option, in order to allow us to detect the eyes' position.
## Accomplishments that we're proud of
As a group, we are very proud of the accuracy and speed at which the script can identify the lack of face and eyes in the image stream. We are also proud of our resilience when faced with the challenges provided by the usage of the Raspberry Pi. As described in the previous section, we ran into multiple issues with the Raspberry Pi, including not having the proper equipment to utilize the versatility that it provides.
## What we learned
Most of the team had never programmed in Python before, leading to skill development. OpenCV was a library that none of us knew before attending this hackathon, so this was a new skill all of us learned as well.
## What's next for Drive2Arrive In the future, we are looking to continue development on DriveToArrive, extending the service to a mobile application to increase the scalability of the project.
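For illustration, here is a compressed Python/OpenCV sketch of the rolling-threshold loop described in "How we built it", using OpenCV's stock Haar cascades. The window size, cascade parameters, and the print-based alert are placeholders rather than the team's exact implementation.

```python
from collections import deque
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

history = deque(maxlen=60)   # attention flags for the last ~60 frames (assumed window)
THRESHOLD = 0.75             # alert when 75% of recent frames look distracted

cap = cv2.VideoCapture(0)
while True:                  # runs until the process is interrupted (Ctrl+C)
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    eyes = eye_cascade.detectMultiScale(gray, 1.3, 5) if len(faces) else []
    attentive = len(faces) > 0 and len(eyes) > 0
    history.append(1 if attentive else 0)
    distracted_fraction = 1 - sum(history) / len(history)
    if len(history) == history.maxlen and distracted_fraction >= THRESHOLD:
        print("Distracted: play audible alert")  # the real system plays a voice message
cap.release()
```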
## Inspiration
CredLo was inspired by a teammate's personal story about the various challenges that immigrants face when moving to a new country. The primary challenge among immigrants is restarting their life in a new place, which begins with their inability to obtain credit, as credit scores are not transferable across countries. On top of this, we saw many individuals lack access to a lump sum of money quickly at low interest rates, sparking the need for an automated micro-loan system that is diversified, low-risk and easy to use.
## What it does
CredLo uses user-inputted data and personal submissions to generate a credit score for the country that the individual is moving to. Additionally, borrowers are able to attain loans quickly at low interest rates and lenders are able to lend small amounts of money to a large number of people at a level of risk that they choose.
## How we built it
We built the backend using Flask/Python to process requests from the lender/borrower as well as for the borrower's onboarding process. We used Capital One's API to make actual transactions between the lender and the lendee. We trained our ML model using sklearn on a dataset we found online. Most of the frontend was built using vanilla HTML/CSS/JS (no wonder it took us ages to build the UI), with a little bit of Vue sprinkled in. The data was stored as a JSON object (with periodic serialization, which, to answer your unasked question: yes, we eventually intend to use Cloud Firestore for this instead :) )
## Challenges we ran into
1. Naming the product and coming up with the tagline was difficult.
2. Since none of our teammates are front-end developers, a large chunk of time was spent trying to make our UI look somewhat bearable to vue (expect a few more puns as you read along). Time spent working on the UI could have been spent working on additional features instead.
## Accomplishments that we're proud of
1. As a team with zero front-end developers, we have a passably pretty UI.
2. We are proud that our product attempts to solve a real need posed by many individuals around the globe. We had other ideas that were more technically sophisticated, but we instead decided to work on a product that had a real-world impact and could positively impact lives in society. After speaking to various individuals in our target market who said that they would have greatly appreciated assistance from the CredLo platform when moving countries, we are proud that we developed a product that can be incorporated into society.
## What we learned
We learned about the various challenges that different groups in society face and the ways in which we can alleviate their stress and headache. We also learned to collaborate and work together, as we are a group of students with different backgrounds and skills.
## What's next for CredLo
1. We were restricted by the kind of datasets we had available to use to generate the credit scores. With more time and research, we can improve on the metrics used to come up with an accurate credit score. The eventual goal is to work with banks and other institutions to become a reliable source of information that individuals and institutions can trust.
2. Instead of using user input (which can be faked), we would include verifiable sources of claims such as bank statements, utility bills, etc. and extract the necessary data out of them using computer vision.
3. There is currently only an auto-investment mode for lenders. That is, they do not choose who they can lend their money to.
We would like to expand the project to allow investors to choose people they think have a sincere need, adjust their rate of interest down if they so wish to, along with the amount of investment (up or down). Eventually, CredLo would provide lenders the possibility to manually invest their money instead of having it automated. 4. Complete integration with Capital One's APIs to facilitate actual bank transfers. We started working on this but left it unfinished due to technical issues.
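The credit-score model itself isn't described beyond "sklearn on a dataset we found online", so the following Python sketch only shows the general shape of such a pipeline; the features, target scores, and choice of regressor are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical features: [monthly_income, years_employed, existing_debt, on_time_payment_rate]
X = np.array([
    [3200, 2.0, 5000, 0.95],
    [1800, 0.5, 12000, 0.60],
    [5400, 6.0, 2000, 0.99],
    [2500, 1.0, 8000, 0.80],
])
y = np.array([710, 540, 790, 620])   # illustrative target credit scores

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

applicant = np.array([[3000, 1.5, 6000, 0.90]])
print(int(model.predict(applicant)[0]))   # estimated local-equivalent credit score
```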
## Team 4 - Members
Henry Wong, Kieran Hansen, Matthew Kompel, Zuhair Siddiqi
## Inspiration
We wanted to do something related to modern technology. Electric cars are new and revolutionary, and we thought it would be perfect to make a website about them. We realize that a lot of people might be confused about the differences and benefits of different models, and we wanted to make something that clears that up.
## What it does
Find Your EV takes in a set of user specifications and uses our own search algorithm to find the most relevant electric vehicle that would fit the user. These specifications include price, range, safety, drivetrain, and much more.
## How I built it
We used vanilla JavaScript to create our front-end, linked with our searching algorithm in our Python Flask backend. We originally deployed the website using GitHub Pages, but switched to Heroku to support our backend scripts. We also managed to get and set up a custom domain with Domain.com, [FindYourEV.online](http://findyourev.online/) (side note: the domain was actually really creative and easy to remember!). Lastly, we built the project on GitHub @ <https://github.com/henryinqz/FindYourEV> (the direct link to the website is <http://findyourev.herokuapp.com/client/index.html>)
## Challenges I ran into
Our entire team had no prior experience with web development. Over the past 36 hours, we were able to gain valuable experience creating our very own full-stack program. A challenge that stumped us was running the backend code on Heroku, and unfortunately, we were unable to deploy the backend, so the search function only works locally.
## Accomplishments that I'm proud of
I am proud of my group for being able to manifest our idea into this website in the short time frame provided. While we certainly did not create the most polished program, we had fun making it and that's what matters!😎
## What I learned
I learned a lot about all aspects of web development, such as creating front-end UIs with HTML/CSS/JS, creating backend APIs with Python, linking these two models, and also web hosting.
## What's next for Find Your EV
Find Your EV is an interesting concept that we see as a valuable utility in today's world. With electric vehicles quickly rising in the automotive industry, it can be challenging for buyers to find a suitable EV for them. Thus, there is a market that Find Your EV could reach after being polished up.
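The search algorithm isn't published, so the sketch below shows one plausible shape for it in Python (matching the Flask backend): hard constraints filter the candidates, and soft preferences score and rank what's left. The field names, weights, and example cars are assumptions, not Find Your EV's actual scoring.

```python
def rank_evs(evs, prefs):
    """Score each EV by how well it matches the user's preferences.
    Hard constraints (budget, drivetrain) filter; soft ones (range, safety, price) score."""
    def score(ev):
        s = 0.0
        s += min(ev["range_km"] / prefs["min_range_km"], 2.0)  # more range is better, capped
        s += ev["safety_rating"] / 5.0                         # normalised to 0-1
        s -= ev["price"] / prefs["max_price"]                  # cheaper is better
        return s

    candidates = [
        ev for ev in evs
        if ev["price"] <= prefs["max_price"] and ev["drivetrain"] in prefs["drivetrains"]
    ]
    return sorted(candidates, key=score, reverse=True)

cars = [
    {"model": "EV Hatch", "price": 42000, "range_km": 400, "safety_rating": 5, "drivetrain": "AWD"},
    {"model": "EV Sedan", "price": 55000, "range_km": 500, "safety_rating": 5, "drivetrain": "RWD"},
]
prefs = {"max_price": 50000, "min_range_km": 350, "drivetrains": {"AWD", "FWD"}}
print([c["model"] for c in rank_evs(cars, prefs)])  # ['EV Hatch']
```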
## Inspiration
It's Friday afternoon, and as you return from your final class of the day, cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that would eventually end up in their storage lockers. This cycle of buy-store-collect dust inspired us to develop LendIt, a product that aims to stem the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with Raspberry Pi 3 (64-bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API. For our app, we've used Flutter/Dart and interfaced with Firebase. To ensure *trust*, which is core to borrowing and lending, we've experimented with Ripple's API to create an escrow system.
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker! On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our app developer had previously worked only with web apps backed by SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore, writing queries for our read operations was tricky. With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like Raspberry Pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good proof of concept, giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our object detection and facial recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers, we believe our product LendIt will grow with it. We would be honoured if we can contribute in any way to reducing the growing waste economy.
## Inspiration Imagine this: You’re overwhelmed, scrolling through countless LinkedIn profiles, trying to figure out which clubs or activities will help you land your dream job. It feels like searching for a needle in a haystack! Here’s where UJourney steps in: We simplify your career planning by providing personalized paths tailored specifically to your goals. UJourney uses LinkedIn data from professionals in your dream job to recommend the exact clubs to join, events to attend, skills to acquire, and courses to take at your university. Our mission is to transform career exploration into a clear, actionable journey from aspiration to achievement. ## What it does UJourney is like having a career GPS with a personality. Tell it your dream job, and it will instantly scan the LinkedIn career cosmos to reveal the paths others have taken. No more endless profile scrolling! Instead, you get a curated list of personalized steps—like joining that robotics club or snagging that perfect internship—so you can be the most prepared candidate out there. With UJourney, the path to your dream job isn’t just a distant vision; it’s a series of clear, actionable steps right at your fingertips. ## How we built it The UJourney project is built on three core components: 1. Gathering Personal Information: We start by seamlessly integrating LinkedIn authorization to collect essential details like name and email. This allows users to create and manage their profiles in our system. For secure login and sign-up, we leveraged Auth0, ensuring a smooth and safe user experience. 2. Filtering LinkedIn Profiles: Next, we set up a MongoDB database by scraping LinkedIn profiles, capturing a wealth of career data. Using Python, we filtered this data based on keywords related to company names and job roles. This process helps us pinpoint relevant profiles and extract meaningful insights. 3. Curating Optimal Career Paths: Our AI model takes it from here. By feeding the filtered data and user information into an advanced model via the Gemini API, we generate personalized career paths, complete with timelines and actionable recommendations. The model outputs these insights in a structured JSON format, which we then translate into an intuitive, user-friendly UI design. ## Challenges we ran into Problem: LinkedIn Scraping Restrictions. Our initial plan was to directly scrape LinkedIn profiles based on company names and job roles to feed data into our AI model. However, LinkedIn’s policies prevented us from scraping directly from their platform. We turned to a third-party LinkedIn scraper, but this tool had significant limitations, including a restriction of only 10 profiles per company and no API for automation. While we utilized automation tools like Zapier and HubSpot CRM to streamline part of our workflow, we ultimately faced a significant roadblock. Despite these challenges, we adapted our approach to continue progressing with the project. Solution: Manual Database Creation. To work around these limitations, we manually built a database focused on the top five most commonly searched companies and job roles. While this approach allowed us to gather essential data, it also meant that our database was initially limited in scope. This manual effort was crucial for ensuring we had enough data to effectively train our AI model and provide valuable recommendations. Despite these hurdles, we adapted our approach to ensure UJourney could deliver accurate and practical career insights. ## Accomplishments that we're proud of 1. 
Rapid Development: We successfully developed and launched UJourney in a remarkably short period of time. Despite the tight timeline, we managed to pull everything together efficiently and effectively. 2. Making the Most of Free Tools: Working with limited resources and relying on free versions of various software, we still managed to create a fully functional version of UJourney. Our resourcefulness allowed us to overcome budget constraints and still deliver a high-quality product. 3. University-Specific Career Plans: One of our standout achievements is the app’s ability to provide personalized career plans tailored to specific universities. By focusing on actionable steps relevant to users' educational contexts, UJourney offers unique value that addresses individual career planning needs with precision. ## What we learned 1. Adaptability is Key: Our journey taught us that flexibility is crucial in overcoming obstacles. When faced with limitations like LinkedIn's scraping restrictions, we had to quickly pivot our approach. This experience reinforced the importance of adapting to challenges and finding creative solutions to keep moving forward. 2. Data Quality Over Quantity: We learned that the quality of data is far more important than sheer volume. By focusing on the most commonly searched companies and job roles, we ensured that our AI model could provide relevant and actionable insights, even with a limited dataset. This underscored the value of precision and relevance in data-driven projects. 3. Resourcefulness Drives Innovation: Working within constraints, such as using free software and limited resources, highlighted our team’s ability to innovate under pressure. We discovered that resourcefulness can turn limitations into opportunities for creative problem-solving, pushing us to explore new tools and methods. 4. User-Centric Design Matters: Our focus on creating university-specific career plans taught us that understanding and addressing user needs is essential for success. Providing tailored, actionable steps for career planning showed us the impact of designing solutions with the user in mind, making the tool genuinely useful and relevant. ## What's next for UJourney What exciting features are on the horizon? 1. Resume Upload Feature: To kick things off, we’re introducing a resume upload feature. This will allow users to gather personal information directly from their resumes, streamlining profile creation and reducing manual data entry. 2. Real-Time University Information: Next, we’ll be scraping university websites to provide real-time updates on campus events and activities. This feature will enable users to see upcoming events and automatically add them to their calendars, keeping them informed and organized. 3. Enhanced Community Involvement: We’ll then roll out features that allow users to view their friends' dream jobs and career paths. This will facilitate connections with like-minded individuals and foster a community where students can share experiences related to jobs and university clubs. 4. Automated LinkedIn Web Scraping: To improve data collection, we’ll automate LinkedIn data scraping. This will help expand our database with up-to-date and relevant career information, enhancing the app’s ability to provide accurate recommendations. 5. AI-Driven Job Recommendations: Finally, we’ll leverage real-time market information and AI to recommend job opportunities that are ideal for the current year. 
Users will also be able to apply for these jobs directly through the app, making the job application process more efficient and seamless. These upcoming features are designed to enhance the UJourney experience, making career planning, networking, and job applications more intuitive and effective. Stay tuned for these exciting updates!
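The keyword-filtering step described in "How we built it" (narrowing scraped LinkedIn profiles by company and role) might look roughly like the Python sketch below. The profile schema, file name, and keywords are assumptions for illustration, not UJourney's actual pipeline.

```python
import json

def filter_profiles(profiles, company_keywords, role_keywords):
    """Keep profiles whose experience mentions both a target company and a target role."""
    def matches(profile):
        text = " ".join(
            f"{job.get('company', '')} {job.get('title', '')}"
            for job in profile.get("experience", [])
        ).lower()
        return (any(k.lower() in text for k in company_keywords)
                and any(k.lower() in text for k in role_keywords))
    return [p for p in profiles if matches(p)]

with open("profiles.json") as f:   # hypothetical dump of scraped profiles
    profiles = json.load(f)

relevant = filter_profiles(profiles, ["Google", "DeepMind"], ["machine learning engineer"])
print(len(relevant))
```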
## Inspiration
Selin's journey was the spark that ignited the creation of our platform. Originally diving into the world of chemistry, she believed it was her calling. However, as time unfolded, she realized it wasn't the path that resonated with her true passions. This realization, while enlightening, also brought with it a wave of confusion and stress. The weight of expectations, both self-imposed and from the university, pressed down on her, urging her to find a new direction swiftly. Yet, the vast expanse of potential careers felt overwhelming, leaving her adrift in a sea of options, not knowing which shore to swim towards. Selin's story isn't unique. It's a narrative that echoes across university halls, with countless students grappling with the same feelings of uncertainty and pressure. Recognizing this widespread challenge became the cornerstone of our mission: to illuminate the myriad of career paths available and guide students towards their true calling.
## What it does
Our platform is an AI-powered career mapping tool designed for students navigating the tech landscape. Utilizing advanced machine learning algorithms combined with psychology-driven techniques, it offers a dynamic and data-driven representation of potential career paths. Each job node within the map is informed by LLM and RAG methodologies, providing a comprehensive view based on real user trajectories and data. Beyond mere visualization, the platform breaks down tasks into detailed timelines, ensuring clarity at every step. By integrating insights from both AI and psychology algorithms, we aim to provide students with a clear, strategic blueprint for their ideal tech career.
## How we built it
We integrated advanced machine learning algorithms with psychology-driven techniques. The platform's backbone is built on LLM and RAG methodologies, informed by real user trajectories. We also incorporated various APIs, like the Hume AI API, to enhance user experience and data collection.
## Challenges we ran into
Embarking on this journey, we were rookies in the arena of hackathons, stepping into uncharted territory with a blend of enthusiasm and trepidation. The path was riddled with unexpected hurdles, the most formidable being a persistent bug in the RAG model from MindsDB. Hours that could have been spent refining and enhancing were instead consumed in troubleshooting this elusive issue. As if the technical challenges weren't daunting enough, the spirit of the hackathon was challenged as friends, one after another, decided to step away despite having amazing ideas. The weight of their absence, combined with the mounting pressure of having to reconstruct a new model (the bug turned out to be within MindsDB's RAG handler, which we had no control over) in a race against time, was palpable. With the clock ticking, sleep became a luxury we could scarcely afford, operating on a mere three hours. Yet, in the face of these adversities, it was our shared vision and unwavering determination that became our beacon, guiding us through the darkest hours and reminding us of the potential impact of our creation. The true essence of our challenge wasn't just in navigating technical glitches or decreasing excitement in the event; it was about resilience, adaptability, and the relentless pursuit of innovation.
## Accomplishments that we're proud of
Successfully integrating the Hume AI API to translate user opinions into actionable data was a significant win, as was connecting it to a spreadsheet for further analysis.
Despite the hurdles, and it being our inaugural hackathon, our team's perseverance saw us through to the end.
## What we learned
We gained a lot of insights into the nature of LLMs and the intricacies of the RAG model. The experience also taught us the importance of adaptability and persistence in the face of unforeseen challenges.
## What's next for Orna
Our immediate goal is to finalize the MVP, refining the suggestion system. With these enhancements, we aim to secure seed investment to propel Orna to new heights.
## Inspiration There should be an effective way to evaluate company value by examining the individual values of those that make up the company. ## What it does Simplifies the research process of examining a company by showing it in a dynamic web design that is free-flowing and easy to follow. ## How we built it It was originally built using a web scraper that scraped from LinkedIn which was written in python. The web visualizer was built using javascript and the VisJS library to have a dynamic view and aesthetically pleasing physics. In order to have a clean display, web components were used. ## Challenges we ran into Gathering and scraping the data was a big obstacle, had to pattern match using LinkedIn's data ## Accomplishments that we're proud of It works!!! ## What we learned Learning to use various libraries and how to setup a website ## What's next for Yeevaluation Finetuning and reimplementing dynamic node graph, history. Revamping project, considering it was only made in 24 hours.
## What it does Tickets is a secure, affordable, and painless system for registration and organization of in-person events. It utilizes public key cryptography to ensure the identity of visitors, while staying affordable for organizers, with no extra equipment or cost other than a cellphone. Additionally, it provides an easy method of requesting waiver and form signatures through Docusign. ## How we built it We used Bluetooth Low Energy in order to provide easy communication between devices, PGP in order to verify the identities of both parties involved, and a variety of technologies, including Vue.js, MongoDB Stitch, and Bulma to make the final product. ## Challenges we ran into We tried working (and struggling) with NFC and other wireless technologies before settling on Bluetooth LE as the best option for our use case. We also spent a lot of time getting familiar with MongoDB Stitch and the Docusign API. ## Accomplishments that we're proud of We're proud of successfully creating a polished and functional product in a short period of time. ## What we learned This was our first time using MongoDB Stitch, as well as Bluetooth Low Energy. ## What's next for Tickets An option to allow for payments for events, as well as more input formats and data collection.
## Inspiration
With an ever-increasing rate of crime and internet deception on the rise, cyber fraud has become one of the premier methods of theft across the world. From frivolous scams like phishing attempts to the occasional Nigerian prince who wants to give you his fortune, it is all too easy for the common person to fall into the hands of an online predator. With this project, I attempted to address this situation, beginning with the aspect of document verification and credentialization.
## What does it do?
SignRecord is an advanced platform hosted on the Ethereum Inter-Planetary File System (an advanced peer-to-peer hypermedia protocol, built with the intention of making the web faster, safer, and more open). Connected with secure DocuSign REST APIs, and the power of smart contracts to store data, SignRecord acts as an open-sourced, widespread ledger of public information and the average user's information. By allowing individuals to host their data, media, and credentials on the ledger, they are given the safety and security of having a proven blockchain verify their identity, protecting them not only from identity fraud but also from potential wrongdoers.
## How I built it
SignRecord is a responsive web app backed with the robust power of both NodeJS and the Hyperledger. With authentication handled by MongoDB, routing by Express, front-end through a combination of React and Pug, and asynchronous requests through Promises, it offers a fool-proof solution. Not only that, but I've also built and incorporated my own external API, so that other fellow developers can easily integrate my platform directly into their applications.
## Challenges I ran into
The real question should be: what challenge didn't I run into? From simple mistakes like missing a semicolon to significant headaches figuring out deprecated dependencies and packages, this development was nothing short of a roller coaster.
## Accomplishments that I'm proud of
Of all the things that I'm proud of, my usage of the Ethereum blockchain, the DocuSign APIs, and the collective UI/UX of my application stand out as the most significant achievements I made in this short 36-hour period. I'm especially proud that I was able to accomplish what I could, alone.
## What I learned
Like any good project, I learnt more than I could have imagined. From learning how to use advanced MetaMask libraries to building my very own API, this journey was nothing short of a race with hurdles at every mark.
## What's next for SignRecord
With the support of fantastic mentors, a great hacking community, and the fantastic sponsors, I hope to be able to continue expanding my platform in the near future.
## Inspiration
**DronAR** was inspired by a love of cool technology. Drones are hot right now, and the question is, why not combine them with AR? The result is an awesome product that allows drone management to be more **visually intuitive**, letting users interact with drones in ways never done before.
## What it does
**DronAR** allows users to view realtime information about their drones, such as positional data and status. Using this information, users can make on-the-spot decisions about how to interact with their drone.
## How I built it
Unity + Vuforia for AR. Node + Socket.IO + Express + Azure for the backend.
## Challenges I ran into
C# is *beautiful*
## What's next for DronAR
Adding SLAM in order to make it easier to interact with the AR items.
## Inspiration
Adults over the age of 50 take an average of 15 prescription medications annually. Keeping track of this is very challenging. Pillvisor is a smart pillbox that solves the issue of medication error by verifying that pills are taken correctly, in order to keep your loved ones safe. Unlike other products on the market, Pillvisor integrates with a real pillbox and is designed with senior users in mind. As we can imagine, keeping track of a pill schedule is challenging, and taking incorrect medications can lead to serious avoidable complications. The most common drugs taken at home that have serious complications from medication errors are cardiovascular drugs and painkillers. One study found that almost a third of a million Americans contact poison control annually due to medication errors made at home. One third of these errors result in hospital admissions, which are on a steady rise. This only includes at-home errors; medication errors can also occur in health care facilities.
## What it does
Pillvisor is an automated pillbox supervisor designed to help people who take many medications daily to ensure they actually take the correct pills at the correct time. Unlike the many reminder and alarm apps that are widely available on the app store, our custom pillbox product actually checks that pills are taken, so the alarm isn't just turned off and ignored.
## How we built it
The user interface to set the alarms is made with Flask and is connected to Firebase. Our blacked-out pillbox uses photoresistors to detect which day is open; this verifies the pill is removed from the correct day, and the alarm does not stop if an incorrect day is opened. Once the medication is removed, a photo of the medication is taken to check that it is indeed the correct medication; otherwise, the user will be reminded to try to scan another pill. We have green LEDs to indicate the correct day of the week. If the user opens an incorrect day or scans the wrong pill, a red LED will flash to alert the user. An LCD display shows the medication name and instructions for using the system. We used TensorFlow to develop a machine learning convolutional neural network for image recognition to distinguish the different pills from one another. Our Raspberry Pi takes a photo, runs the neural network on it, and checks whether the correct pill has been photographed. For our user interface, we developed an isolated Flask application which is connected to our Firebase database and allows alarms to be set, deleted and edited easily and quickly (for example, changing the time or day of a certain alarm). A sync button on the Raspberry Pi allows it to stay constantly up to date with the backend after changes are made in the cloud.
## Challenges we ran into
Due to the complexity of the project, we ran into many issues with both software and hardware. Our biggest challenge for the project was getting the image recognition to work and produce accurate results, due to noise coming from the hand holding the pill. Additionally, getting all the packages and dependencies, such as TensorFlow and OpenCV, installed onto the system also proved to be a huge challenge. On the hardware side, we ran into issues detecting if the pillbox is opened or closed, based on the imperfection in 'blacking out' the pillbox. Due to constraints, we didn't have an opaque box.
## Accomplishments that we're proud of
We did this hackathon to challenge ourselves to use and apply our skills to new technologies that we were unfamiliar with or relatively new to, such as databases, Flask, machine learning, and hardware. Additionally, this was the first hackathon for two of our team members, and we are very proud of what we achieved and what we have learned in such a short period of time. We were happy that we were able to integrate hardware and software together for this project and apply our skills from our varying engineering backgrounds.
## What I learned
* How to set up a database
* Machine learning, TensorFlow and convolutional neural networks
* Using Flask, learning JavaScript and HTML
## What's next for Pillvisor
Due to time constraints, we were unable to implement all the features we wanted. One feature we still need to add is a snooze feature to allow a delay of the alarm by a set amount of time, which is especially useful if the medication has eating constraints attached to it. Additionally, we want to improve the image recognition on the pills, which we believe could be made into a separate program that would be highly valuable in healthcare facilities as a last line of defence, since pills are normally handled using patient charts and delivered through a chain of people.
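As an illustration of the verification flow described in "How we built it", here is a hedged Python sketch that combines the photoresistor day check with a TensorFlow/Keras pill classifier. The model file, class names, input size, and confidence cutoff are assumptions; the photoresistor reading is abstracted into an `opened_day` value.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("pill_classifier.h5")   # hypothetical trained CNN
CLASS_NAMES = ["aspirin", "lisinopril", "metformin"]        # assumed label order

def identify_pill(image_path, input_size=(128, 128)):
    """Classify a photo of the removed pill with the trained CNN."""
    img = tf.keras.utils.load_img(image_path, target_size=input_size)
    batch = np.expand_dims(tf.keras.utils.img_to_array(img) / 255.0, axis=0)
    probs = model.predict(batch)[0]
    return CLASS_NAMES[int(np.argmax(probs))], float(np.max(probs))

def verify_dose(opened_day, expected_day, image_path, expected_pill):
    """Mirror the pillbox logic: right day opened, then right pill photographed."""
    if opened_day != expected_day:
        return "flash red LED: wrong day opened"
    pill, confidence = identify_pill(image_path)
    if pill != expected_pill or confidence < 0.8:
        return "flash red LED: please scan another pill"
    return "alarm off: correct pill taken"
```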
## Inspiration
According to the United States Department of Health and Human Services, 55% of the elderly are non-compliant with their prescription drug orders, meaning they don't take their medication according to the doctor's instructions, and 30% of these cases lead to hospital readmissions. Although there are many reasons why seniors don't take their medications as prescribed, memory loss is one of the most common causes. Elders with Alzheimer's or other related forms of dementia are prone to medication management problems. They may simply forget to take their medications, causing them to skip doses. Or, they may forget that they have already taken their medication and end up taking multiple doses, risking an overdose. Therefore, we decided to solve this issue with Pill Drop, which helps people remember to take their medication.
## What it does
The Pill Drop dispenses pills at scheduled times throughout the day. It helps people, primarily seniors, take their medication on time. It also saves users the trouble of remembering which pills to take, by automatically dispensing the appropriate medication. It tracks whether a user has taken the dispensed pills by starting an internal timer. If the patient takes the pills and presses a button before the time limit, Pill Drop will record this instance as "Pill Taken".
## How we built it
Pill Drop was built using Raspberry Pi and Arduino. They controlled servo motors, a button, and a touch sensor. It was coded in Python.
## Challenges we ran into
The first challenge we ran into was communicating between the Raspberry Pi and the Arduino, since none of us knew how to do that. Another challenge was structurally holding all the components needed in our project, making sure that all the "physics" aligned so that our product is structurally stable. In addition, having the Pi send an SMS text message was also new to all of us, so by incorporating a user interface, taking inspiration from HyperCare's user interface, we were able to finally send one too! Lastly, bringing our theoretical ideas to fruition was harder than expected, and we ran into multiple roadblocks within our code in the given time frame.
## Accomplishments that we're proud of
We are proud that we were able to create a functional final product that is able to incorporate both hardware (Arduino and Raspberry Pi) and software! We were able to incorporate skills we learnt in class, plus learn new ones during our time in this hackathon.
## What we learned
We learned how to connect and use Raspberry Pi and Arduino together, as well as how to incorporate a user interface within the two, along with text messages sent to the user. We also learned that we can consolidate code at the end when we persevere and build each other's morale throughout the long hours of the hackathon, knowing how each of us can be trusted to work individually and continuously be engaged with the team as well. (While, obviously, having fun along the way!)
## What's next for Pill Drop
Pill Drop's next steps include creating a high-level prototype, testing out the device over a long period of time, creating a user-friendly interface so users can adjust pill-dropping times, and incorporating patients and doctors into the system.
## UPDATE!
We are now working with MedX Insight to create a high-level prototype to pitch to investors!
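The dispense-then-confirm timer described in "What it does" can be sketched in a few lines of Python. The hardware hooks (`dispense_pill`, `button_pressed`, `notify`) stand in for the servo, button, and SMS pieces, and the 15-minute window is an assumption, not the project's actual setting.

```python
import time

CONFIRM_WINDOW_S = 15 * 60   # how long the user has to confirm after dispensing (assumed)

def dispense_and_track(dispense_pill, button_pressed, notify):
    """Dispense a dose, then wait for the confirm button before the window closes."""
    dispense_pill()                      # e.g. rotate the servo to drop the pill
    deadline = time.monotonic() + CONFIRM_WINDOW_S
    while time.monotonic() < deadline:
        if button_pressed():             # hardware hook: physical confirm button
            return "Pill Taken"
        time.sleep(0.5)
    notify("Missed dose: reminder SMS sent")   # illustrative follow-up action
    return "Missed"
```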
View presentation at the following link: <https://youtu.be/Iw4qVYG9r40> ## Inspiration During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group take medication for health-related reasons, and as a result, have certain external medications that cause negative drug interactions. More often than not, one of us is unable to have certain other medications (e.g. Advil, Tylenol) and even certain foods. Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take. ## What it does Our mobile application scans barcodes of medication and outputs to the user what the medication is, and any negative interactions that follow it, to ensure that users don't experience negative side effects of drug mixing. ## How we built it Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done through Java and stored in a CSV file for the API to access when requests were made. This API was then integrated with a Python backend and Flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request, which then scans the barcode and sends the drug information back to the Flutter mobile application. ## Challenges we ran into The consistent challenge that we seemed to run into was the integration between our parts. Another challenge was that one group member's laptop imploded (and stopped working) halfway through the competition; Windows recovery did not pull through, and the member had to grab a backup laptop and set the entire thing up again for smooth coding. ## Accomplishments that we're proud of During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end. Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative. ## What we learned Throughout the project, we gained valuable experience working with various skills such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. All these skills were things we had only seen or heard of elsewhere, but learning and subsequently applying them was a new experience altogether. Additionally, throughout the project, we encountered various challenges, and each one taught us a new outlook on software development. Overall, it was a great learning experience for us and we are grateful for the opportunity to work with such a diverse set of technologies. ## What's next for Medisafe Medisafe has all 3 dimensions to expand on, being the baby app that it is.
Our main focus would be to integrate the features into the normal camera application or Google Lens. We realize that a standalone app for a seemingly minuscule function is disadvantageous, so having it as part of a bigger application would boost its usage. Additionally, we'd also like to have the possibility to take an image from the gallery instead of fresh from the camera. Lastly, we hope to be able to implement settings like a default drug to compare to, dosage dependency, etc.
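To make the scan-and-lookup flow from "How we built it" concrete, here is a rough Python sketch of the client side of that POST request; the endpoint URL and the response fields are placeholders for illustration, since the real Medisafe API is not shown here.

```python
# Hypothetical client for the barcode-lookup flow: photo in, drug info out.
import requests

API_URL = "https://example.com/api/scan"  # placeholder endpoint, not the real service

def lookup_drug(image_path: str) -> dict:
    """POST a photo of a medication barcode and return the decoded drug info."""
    with open(image_path, "rb") as f:
        response = requests.post(API_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    return response.json()  # assumed shape: {"name": "...", "interactions": [...]}

if __name__ == "__main__":
    info = lookup_drug("barcode_photo.jpg")
    print(info.get("name"), "-", ", ".join(info.get("interactions", [])))
```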
partial
# turnip - food was made for sharing. ## Inspiration After reading about the possible projects, we decided to work with Velo by Wix on a food tech project. What are two things that we students never get tired of? Food and social media! We took some inspiration from Radish and GoodReads to throw together a platform for hungry students. Have you ever wanted takeout but not been sure what you're in the mood for? turnip is here for you! ## What it does turnip is a website that connects local friends with their favourite food takeout spots. You can leave reviews and share pictures, as well as post asking around for food recommendations. turnip also keeps track of your restaurant wishlist and past orders, so you never forget to check out that place your friend keeps telling you about. With integrated access to partnered restaurants, turnip would allow members to order right on the site seamlessly and get food delivered for cheap. Since the whole design is built around sharing (sharing thoughts, sharing secrets, sharing food), turnip would also allow users to place orders together, splitting the cost right at payment to avoid having to bring out the calculator and figure out who owes who what. ## How we built it We used Velo by Wix for the entire project, with Carol leading the design of the website while Amir and Tudor worked on the functionality. We also used Wix's integrated "members" area and forum add-ons to implement the "feed". ## Challenges we ran into One of the bigger challenges we had to face was that none of us had any experience developing full-stack, so we had to learn on the spot how to write a back-end and try to implement it into our website. It was honestly a lot of fun trying to "speedrun" learning the ins and outs of Javascript. Unfortunately, Wix made the project even more difficult to work on as it doesn't natively support multiple people working on it at the same time. As such, our plan to work concurrently fell through and we had to "pass the baton" when it came to working on the website and keep ourselves busy the rest of the time. Lastly, since we relied on Wix add-ons we were heavily limited in the functionality we could implement with Velo. We still created a few functions; however, much of it was already covered by the add-ons and what wasn't was made very difficult to access without rewriting the functionality of the modules from scratch. Given the time crunch, we made do with what we had and had to restrict the scope for McHacks. ## Accomplishments that we're proud of We're super proud of how the design of the site came together, and all the art Carol drew really flowed great with the look we were aiming for. We're also very proud of what we managed to get together despite all the challenges we faced, and the back-end functionality we implemented. ## What we learned Our team really learned about the importance of scope, as well as about the importance of really planning out the project before diving right in. Had we done some research to really familiarize ourselves with Wix and Velo we might have reconsidered the functionalities we would need to implement (and/or implemented them ourselves, which in hindsight would have been better), or chosen to tackle this project in a different way altogether! ## What's next for Turnip We have a lot of features that we really wanted to implement but didn't quite have the time to. 
A simple private messaging feature would have been great, as well as fully implementing the block feature (sometimes we don't get along with people, and that's okay!). We love the idea that a food delivery service like Radish could implement some of our ideas, like the social media/recommendations/friends feature aspect of our project, and would love to help them do it. Overall, we're extremely proud of the ideas we have come up with and what we have managed to implement, especially the fact that we kept in mind the environmental impact of meal deliveries with the order sharing.
## Inspiration Our team is full of food lovers and what’s a better way to show this passion than to design and develop a website related to it! We were inspired by the Hack The 6ix sponsor BMO, who proposed the challenge of answering “What to eat for dinner?”. We realized that this is probably the most asked question during the pandemic, since we can’t go out normally, and because we search our fridges every 15 minutes at home for something to eat. But don’t worry! TastyDinner is here for you! ## What it does TastyDinner is here to answer the question: “What to eat for dinner?” by: * Giving inspiration with a gallery of delicious food items to look at! * Outputting recipes you can make with the ingredients you already have! * Using Vision AI from Google Cloud’s Vision API for a cool experience! The gallery presented allows you to scroll and gather inspiration, which can help you find the answer to what you want to eat! It’s done by using the Flickr API, where the application dynamically gets many photos related to delicious food items to display for your eyes! As for the output of recipes, users are able to input ingredients they already have and our application will handle the rest! Our team implemented two ways for a user to input their ingredients. The first way is sending a photo of their ingredients list or receipt, and from there, it would be passed through the Google Cloud Vision API for processing of text! The second way is a simpler approach, where users could just type in ingredients themselves. After we efficiently process the received ingredients, we then use the Spoonacular API to receive a list of recipes one could make that best fits the ingredients given! With this web app, you can enter your list of ingredients available or take a picture of a written note and then we’ll recommend the ideal meal for you! ## How we built it The project was built in Visual Studio Code with MongoDB, Express, React.js, Node.js, HTML, CSS, and JavaScript. We also built an android version using Android Studio and Java that integrates with the same Node.js server being used on the web app. ## Challenges we ran into Our team is composed of beginner hackers, and we struggled with some of the most basic things. From trouble with github, and learning what a “pull request” was to being unable to connect our React frontend with our Node.js backend, somehow we were able to push through. After staying up till 3AM on the first day, and then pulling an all nighter on the last day of the hackathon, we worked really hard to get our current results! In the end, we had a huge blast laughing about the dumbest things past midnight, and we loved the process of fixing 3 hour long bugs. We learned in this hackathon that anything is possible, and that we were able to build a full stack app in just 36 hours! ## Accomplishments that we're proud of We are proud of developing a website/android app that looks aesthetically pleasing, and with a fully functioning, modularized backend given our skillset. Our team worked really hard together to develop all aspects of our product. ## What we learned We learned an incredible amount about web development and integrating the frontend and backend. Many of us came into the project with very diverse skills, so we were able to learn a lot from each other. ## What's next for TastyDinner Stay tuned, stay hungry, cause you are going to get a #TastyDinner. **TastyLunch coming soon!**
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and all your developer friends can come together to learn, code, and brainstorm together. ## About Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page to give access to all the tools on one screen and make transitions between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real-time. ## Features Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our inbuilt browser IDE, all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit; to add more relevant tools and widgets and expand to other fields of work to increase our user demographic; and to include interface customization options that allow users to personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
partial
## Inspiration Disasters can strike quickly and without notice. Most people are unprepared for situations such as earthquakes, which occur with alarming frequency along the Pacific Rim. When WiFi and cell service are unavailable, medical aid, food, water, and shelter are hard to share because the community can only communicate and connect in person. ## What it does In disaster situations, Rebuild allows users to share and receive information about nearby resources and dangers by placing icons on a map. Rebuild uses a mesh network to automatically transfer data between nearby devices, ensuring that users have the most recent information in their area. What makes Rebuild a unique and effective app is that it does not require WiFi to share and receive data. ## How we built it We built it with Android and the Nearby Connections API, a built-in Android library which manages discovering, connecting to, and exchanging data with nearby devices over Bluetooth and Wi-Fi, without needing an internet connection. ## Challenges we ran into The main challenges we faced while making this project were updating the device location so that the markers are placed accurately, and establishing a reliable mesh-network connection between the app users. While these features still aren't perfect, after a long night we managed to reach something we are satisfied with. ## Accomplishments that we're proud of WORKING MESH NETWORK! (If you heard the scream of joy last night I apologize.) ## What we learned ## What's next for Rebuild
## Inspiration The only thing worse than no WiFi is slow WiFi. Many of us have experienced the frustrations of terrible internet connections. We have too, so we set out to create a tool to help users find the best place around to connect. ## What it does Our app runs in the background (completely quietly) and maps out the WiFi landscape of the world. That information is sent to a central server and combined with location and WiFi data from all users of the app. The server then processes the data and generates heatmaps of WiFi signal strength to send back to the end user. Because of our architecture these heatmaps are real time, updating dynamically as the WiFi strength changes. ## How we built it We split up the work into three parts: mobile, cloud, and visualization and had each member of our team work on a part. For the mobile component, we quickly built an MVP iOS app that could collect and push data to the server and iteratively improved our locationing methodology. For the cloud, we set up a Firebase Realtime Database (NoSQL) to allow for large amounts of data throughput. For the visualization, we took the points we received and used gaussian kernel density estimation to generate interpretable heatmaps. ## Challenges we ran into Engineering an algorithm to determine the location of the client was significantly more difficult than expected. Initially, we wanted to use accelerometer data, and use GPS as well to calibrate the data, but excessive noise in the resulting data prevented us from using it effectively and from proceeding with this approach. We ran into even more issues when we used a device with less accurate sensors like an Android phone. ## Accomplishments that we're proud of We are particularly proud of getting accurate paths travelled from the phones. We initially tried to use double integrator dynamics on top of oriented accelerometer readings, correcting for errors with GPS. However, we quickly realized that without prohibitively expensive filtering, the data from the accelerometer was useless and that GPS did not function well indoors due to the walls affecting the time-of-flight measurements. Instead, we used a built in pedometer framework to estimate distance travelled (this used a lot of advanced on-device signal processing) and combined this with the average heading (calculated using a magnetometer) to get meter-level accurate distances. ## What we learned Locationing is hard! Especially indoors or over short distances. Firebase’s realtime database was extremely easy to use and very performant Distributing the data processing between the server and client is a balance worth playing with ## What's next for Hotspot Next, we’d like to expand our work on the iOS side and create a sister application for Android (currently in the works). We’d also like to overlay our heatmap on Google maps. There are also many interesting things you can do with a WiFi heatmap. Given some user settings, we could automatically switch from WiFi to data when the WiFi signal strength is about to get too poor. We could also use this app to find optimal placements for routers. Finally, we could use the application in disaster scenarios to on-the-fly compute areas with internet access still up or produce approximate population heatmaps.
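As a rough illustration of the heatmap step described in "How we built it", the sketch below turns a handful of made-up (x, y, signal-strength) samples into a smooth grid using SciPy's Gaussian kernel density estimation; the sample data, the weighting choice, and the grid resolution are assumptions for the example, not Hotspot's actual pipeline.

```python
# Minimal Gaussian-KDE heatmap sketch: scattered WiFi readings -> smooth grid.
import numpy as np
from scipy.stats import gaussian_kde

# Fake readings: columns are x position, y position, WiFi signal strength (0..1).
readings = np.array([
    [0.10, 0.20, 0.9],
    [0.15, 0.25, 0.8],
    [0.80, 0.70, 0.3],
    [0.85, 0.75, 0.2],
])

xy = readings[:, :2].T                          # shape (2, N), as gaussian_kde expects
kde = gaussian_kde(xy, weights=readings[:, 2])  # weight stronger readings more heavily

# Evaluate the density on a 50x50 grid to get heatmap values.
xs, ys = np.mgrid[0:1:50j, 0:1:50j]
heatmap = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
print(heatmap.shape, heatmap.max())
```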
## Inspiration We wanted to build an app that could help save lives. So many people among my family, friends, and acquaintances have seen catastrophes take lives, and we wanted to prevent that. ## What it does Helps users prepare for and safely evacuate during an earthquake depending on their particular situation. In the case of an earthquake, the app will consider the magnitude and suggest to users whether to remain in the house or to evacuate. ## Challenges I ran into Difficulty incorporating MapKit and Uber API integrations. ## Accomplishments that I'm proud of My team dedicated themselves to solving a major problem and saving lives. ## What I learned API integrations. Collaborating with designers and other developers. Additional strategies for hackathons. ## What's next for QuakeX Better integrations and optimizations with APIs. Specifically, in the upcoming version, rescuers will be able to locate people trapped in houses during natural disasters.
winning
## Inspiration The need for faster and more reliable emergency communication in remote areas inspired the creation of FRED (Fire & Rescue Emergency Dispatch). Whether due to natural disasters, accidents in isolated locations, or a lack of cellular network coverage, emergencies in remote areas often result in delayed response times, and first responders rarely get the full picture of the emergency at hand. We wanted to bridge this gap by leveraging cutting-edge satellite communication technology to create a reliable, individualized, and automated emergency dispatch system. Our goal was to create a tool that could enhance the quality of information transmitted between users and emergency responders, ensuring swift, better-informed rescue operations on a case-by-case basis. ## What it does FRED is an innovative emergency response system designed for remote areas with limited or no cellular coverage. Using satellite capabilities, an agentic system, and a basic chain of thought, FRED allows users to call for help from virtually any location. What sets FRED apart is its ability to transmit critical data to emergency responders, including GPS coordinates, detailed captions of the images taken at the site of the emergency, and voice recordings of the situation. Once this information is collected, the system processes it to help responders assess the situation quickly. FRED streamlines emergency communication in situations where every second matters, offering precise, real-time data that can save lives. ## How we built it FRED is composed of four main components: a mobile application, a transmitter, a backend data processing system, and a simple front-end. 1. Mobile Application: The mobile app is designed to be lightweight and user-friendly. It collects critical data from the user, including their GPS location, images of the scene, and voice recordings. 2. Transmitter: The app sends this data to the transmitter, which consists of a Raspberry Pi integrated with Skylo’s Satellite/Cellular combo board. The Raspberry Pi performs some local data processing, such as image transcription, to optimize the data size before sending it to the backend. This minimizes the amount of data transmitted via satellite, allowing for faster communication. 3. Backend: The backend receives the data, performs further processing using a multi-agent system, and routes it to the appropriate emergency responders. The backend system is designed to handle multiple inputs and prioritize critical situations, ensuring responders get the information they need without delay. 4. Frontend: We built a simple front-end to display the dispatch notifications as well as the source of the SOS message on a live-map feed. ## Challenges we ran into One major challenge was managing image data transmission via satellite. Initially, we underestimated the limitations on data size, which led to our satellite server rejecting the images. Since transmitting images was essential to our product, we needed a quick and efficient solution. To overcome this, we implemented a lightweight machine learning model on the Raspberry Pi that transcribes the images into text descriptions. This drastically reduced the data size while still conveying critical visual information to emergency responders. This solution enabled us to meet satellite data constraints and ensure the smooth transmission of essential data.
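A hedged sketch of that on-device captioning workaround is shown below; the model choice, file name, and size comparison are illustrative assumptions, not FRED's actual pipeline.

```python
# Illustrative on-device captioning: send a short text description instead of the image.
from transformers import pipeline

# Assumed model: a small image-captioning checkpoint; any lightweight captioner would do.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def summarize_scene(image_path: str) -> str:
    """Return a short text caption of the emergency-scene photo."""
    result = captioner(image_path)
    return result[0]["generated_text"]

if __name__ == "__main__":
    caption = summarize_scene("scene.jpg")  # e.g. "a car overturned on a snowy road"
    payload = caption.encode("utf-8")
    print(f"caption: {caption!r} ({len(payload)} bytes vs. a multi-hundred-KB image)")
```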
## Accomplishments that we’re proud of We are proud of how our team successfully integrated several complex components—mobile application, hardware, and AI powered backend—into a functional product. Seeing the workflow from data collection to emergency dispatch in action was a gratifying moment for all of us. Each part of the project could stand alone, showcasing the rapid pace and scalability of our development process. Most importantly, we are proud to have built a tool that has the potential to save lives in real-world emergency scenarios, fulfilling our goal of using technology to make a positive impact. ## What we learned Throughout the development of FRED, we gained valuable experience working with the Raspberry Pi and integrating hardware with the power of Large Language Models to build advanced IOT system. We also learned about the importance of optimizing data transmission in systems with hardware and bandwidth constraints, especially in critical applications like emergency services. Moreover, this project highlighted the power of building modular systems that function independently, akin to a microservice architecture. This approach allowed us to test each component separately and ensure that the system as a whole worked seamlessly. ## What’s next for FRED Looking ahead, we plan to refine the image transmission process and improve the accuracy and efficiency of our data processing. Our immediate goal is to ensure that image data is captioned with more technical details and that transmission is seamless and reliable, overcoming the constraints we faced during development. In the long term, we aim to connect FRED directly to local emergency departments, allowing us to test the system in real-world scenarios. By establishing communication channels between FRED and official emergency dispatch systems, we can ensure that our product delivers its intended value—saving lives in critical situations.
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people were in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300 meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication is done through the use of Fireauth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users can take a picture of their ID and their information is extracted automatically. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before.
## Inspiration dwarf fortress and stardew valley ## What it does simulates farming ## How we built it quickly ## Challenges we ran into learning how to farm ## Accomplishments that we're proud of making a frickin gaem ## What we learned games are hard farming is harder ## What's next for soilio make it better
winning
## EDUCHANGE Helping students who have trouble speaking up. ## THE PROBLEM In today's classroom, teachers are often frustrated with the lack of participation by students during class discussions. Most studies conclude that the main issues are fears of negative social repercussions, such as being judged by their peers and their teachers for giving a "stupid" response. ## OUR SOLUTION High school is a sensitive age for students, and they care greatly about how their peers view them. We decided that an application that would both reduce their fear of negative social repercussions and earn them social recognition for their contributions would encourage them to participate in class discussions. Our app provides the teacher with a list of questions from students, all while keeping the students' names anonymous from their peers. If the teacher deems a question especially good, they can select it to award the asker points and reveal that student's name, giving them social recognition and validation. Similarly, peers can vote on their favorite questions asked by their classmates. Scores are tallied to show the top scorers.
## Inspiration Looking around our team, we see a common trait that hinders us in the classroom: Shyness. It's hard to ask questions during class, especially in front of large audiences. We want to combat this issue by giving the large population of students like us an opportunity to engage actively in classroom participation with confidence. Additionally, we tackled another problem. When questions are asked during class, many students have difficulty hearing what was asked as well as reviewing that material, as it may not always be in the lecture material. With our product, important questions along with the teacher's answers will be recorded and sent out via email post-lesson. This way, key supplemental information will be provided to students to further enhance their learning experience. ## What it does To address this lack of confidence during class, we provide an anonymous question portal that allows any student to ask a question through voice or text. This is disguised as a word processor, further allowing anonymity for students who feel too shy to ask questions. These questions will be sent to the teacher portal in real-time, where the teacher can select questions to answer. Once the class has concluded, Q/A pairs will be sent out to the class through email. ## How we built it We implemented our idea with a variety of frameworks. Using MongoDB, we first started the development of the database to store the questions being asked. This database served as our main backend tool to store and send the data collected from student and teacher responses. We utilized Twilio's SendGrid to email the data. Our front-end application displayed this information using React JS. To control our microphone, we used an interface between javascript and the Google Cloud API for converting speech to text. ## Challenges we ran into One of our main challenges was formulating the idea. Because we wanted to create a product that was meaningful to both us and our peers, we spent many nights brainstorming a project that resonated with us. Thus, Qloak was born. During Qloak's development, we also ran into many technical problems, such as making sure button clicks were working properly with incoming data, correctly timing that data, and sharing it between users. We worked together by breaking up and assigning tasks that would capitalize on each of our individual strengths. As a team, we were able to successfully coordinate with each other and ultimately integrate our code together. ## Accomplishments that we are proud of Our team is proud that we brought to life an idea that will meaningfully impact and help our fellow peers. Additionally, during this process, each of us had to tackle programs and concepts that we had never worked with before. ## What we learned We learned the importance of teamwork. At the end of our hack, we reflected on our accomplishments and realized how we were able to combine our talents together and efficiently finish our task. Working together allowed us to bounce ideas off each other to polish and strengthen. Also, we became stronger developers after this project, as the product required a wide variety of parts and provided us with full exposure to software such as React, mongoDB, and Twilio. ## What's next for Qloak Our next step is to deploy our product to a larger scale so that we can increase visibility to a wider userbase. We would also like to work on a more intuitive user interface with additional functionalities to allow for smoother flows and interactions.
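As an illustration of the speech-to-text step described in "How we built it", here is a minimal Python sketch using the Google Cloud Speech client; the audio format, sample rate, and file name are assumptions, and Google Cloud credentials are assumed to already be configured in the environment.

```python
# Minimal sketch of transcribing a recorded question with Google Cloud Speech-to-Text.
from google.cloud import speech

def transcribe_question(audio_path: str) -> str:
    """Send a short recorded question to the Speech-to-Text API and return the transcript."""
    client = speech.SpeechClient()
    with open(audio_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumes 16-bit PCM WAV
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(result.alternatives[0].transcript for result in response.results)

if __name__ == "__main__":
    print(transcribe_question("question.wav"))
```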
## Inspiration We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem. ## What it does The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality. The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material. ## How we built it We built the mobile portion using react-native for the front-end and firebase for the backend. The web app is built with react for the front end and firebase for the backend. We also implemented a few custom python modules to facilitate the client-server interaction to ensure a smooth experience for both the instructor and the student. ## Challenges we ran into One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a python script that would help bridge the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site. ## Accomplishments that we’re proud of Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the ## What we learned We learned more about React and its usefulness when coding in JavaScript. Especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want will facilitate coding as everyone will be on the same page on what is going on and all thats needs to be done is made very evident. We used some API’s such as the Google Speech to Text API and a Summary API. We were able to work around the constraints of the API’s to create a working product. We also learned more about other technologies that we used such as: Firebase, Adobe XD, React-native, and Python. ## What's next for Gradian The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens.
losing
## Inspiration Not having to worry about coming home to feed our dog. ## What it does Automatically dispenses food and water for the dog based on sensor readings. Will be connected to an app in the future so you can choose when to feed the dog. ## How we built it Hooked up an Arduino Uno to a force sensor and a water level sensor. If the water and food levels are low, dispense food if it’s been 6 hours since the last meal and constantly dispense water. ## Challenges we ran into The DC motor can’t be controlled by the Arduino so we needed to make a servo push a button. We didn’t have a transistor. This was the only way we could turn on the motor. We also were not able to hook the Arduino to WIFI because we were using an ESP 8266 shield. We were going to use the Blynk app to control the project over WIFI but the ESP wasn’t able to pass the information correctly. ## Accomplishments that we're proud of We created what we wanted. An automated food and water dispenser. ## What we learned Problem solving. Building circuits for the hardware. Learning how to connect hardware over WIFI. ## What's next for Feed The Dog Connect the project to WIFI.
## Inspiration We were inspired by plug for adaptors that allow for devices to be controlled over the internet and their power consumption monitored. ## What it does The microcontroller onboard the ESP8266 controls relays to activate the outlets. Current sensors measure the current consumption, and this data is sent back to the web. Plots and graphics are generated using the streaming data. Outlets can be controlled via the web interface. ## How I built it We built a custom circuit board to connect the WiFi module, relays, current sensors and other associated parts. Care was taken to build in safety features like fuses to the system, as mains voltage is being switched. On the web side, we created a webapp using Javascript and the d3 framework for graphics. This communicates to the server on the WiFi module using websockets to monitor information and to control the outlet. ## Challenges I ran into We ran into an issue where the WiFi module would reset when the outlets were toggled via the web interface. This issue didn't present itself during testing without the web interface, so it was a surprise and a challenge to deal with. ## Accomplishments that I'm proud of I'm proud of building a circuit board and doing the entire system design and implementation in a night. We used a number of new tools and components in the system, and working with them was exciting! ## What I learned I learned that testing at each stage of hardware development is key, to avoid having a number of issues present themselves at once. Additionally, more testing with the hardware and software integrated would've given us more time to fix bugs. ## What's next for Smart Outlet Packaging the system so that it is attractive and durable would be a future goal. Additionally, enhancing precision of the current measurements would give users more precise data, which could be important in some applications.
## Inspiration Our inspiration for Smart Sprout came from our passion for both technology and gardening. We wanted to create a solution that not only makes plant care more convenient but also promotes sustainability by efficiently using water resources. ## What it does Smart Sprout is an innovative self-watering plant system. It constantly monitors the moisture level in the soil and uses this data to intelligently dispense water to your plants. It ensures that your plants receive the right amount of water, preventing overwatering or underwatering. Additionally, it provides real-time moisture data, enabling you to track the health of your plants remotely. ## How we built it We built Smart Sprout using a combination of hardware and software. The hardware includes sensors to measure soil moisture, an Arduino microcontroller to process data, and a motorized water dispenser to regulate watering. The software utilizes custom code to interface with the hardware, analyze moisture data, and provide a user-friendly interface for monitoring and control. ## Challenges we ran into During the development of Smart Sprout, we encountered several challenges. One significant challenge was optimizing the water dispensing mechanism to ensure precise and efficient watering. The parts required by our team, such as a water pump, were not available. We also had to fine-tune the sensor calibration to provide accurate moisture readings, which took much more time than expected. Additionally, integrating the hardware with a user-friendly software interface posed its own set of challenges. ## Accomplishments that we're proud of The rotating bottle, and mounting it. It has to be rotated such that the holes are on the top or bottom, as necessary, but the only motor we could find was barely powerful enough to turn it. We reduced friction on the other end by using a polygonal 3d-printed block, and mounted the motor opposite to it. Overall, finding an alternative to a water pump was something we are proud of. ## What we learned As is often the case, moving parts are the most complicated, but we also are using the arduino for two things at the same time: driving the motor and writing to the display. Multitasking is a major component of modern operating systems, and it was interesting to work on it in this case here. ## What's next for Smart Sprout The watering system could be improved. There exist valves that are meant to be electronically operated, or a human designed valve and a servo, which would allow us to link it to a municipal water system.
losing
## Inspiration This game was inspired by the classical game of connect four, in which one inserts disks into a vertical board to try to get four in a row. As big fans of the game, our team sought to improve it by adding new features. ## What it does The game is played like a regular game of connect four, except each player may choose to use their turn to rotate the board left or right and let gravity force the pieces to fall downwards. This seemingly innocent change to connect four adds many new layers of strategy and fun to what was already a strategic and fun game. We developed two products: an iOS app, and a web app, to run the game. In addition, both the iOS and web apps feature the abilities of: 1) Play local "pass and play" multiplayer 2) Play against multiple different AIs we crafted, each of differing skill levels 3) Play live online games against random opponents, including those on different devices! ## How we built it The iOS app was built in Swift and the web app was written with Javascript's canvas. The bulk of the backend, which is crucial for both our online multiplayer and our AIs, came from Firebase's services. ## Challenges we ran into None of us are particularly artistic, so getting a visually pleasant UI wasn't exactly easy... ## Accomplishments that we're proud of We are most proud of our ability to successfully run an online cross-platform multiplayer, which we could not have possibly done without the help of Firebase and its servers and APIs. We are also proud of the AIs we developed, which so far tend to beat us almost every time. ## What we learned Most of us had very little experience working with backend servers, so Firebase provided us with a lovely introduction to allowing our applications to flourish on my butt. ## What's next for Gravity Four Let's get Gravity Four onto even more types of devices and into the app store!
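To make the rotate-and-fall rule concrete, here is a small Python illustration (the actual game is built in Swift and JavaScript); the board layout is made up, and the helpers below only model the mechanic, not the full game or the win check.

```python
# Gravity Four core mechanic: rotate the board 90 degrees, then let pieces fall.
EMPTY = "."

def rotate_clockwise(board):
    """Rotate a list-of-rows board 90 degrees clockwise."""
    return [list(row) for row in zip(*board[::-1])]

def apply_gravity(board):
    """Let every piece fall to the lowest empty cell in its column."""
    rows, cols = len(board), len(board[0])
    for c in range(cols):
        stack = [board[r][c] for r in range(rows) if board[r][c] != EMPTY]
        for r in range(rows):
            board[r][c] = EMPTY
        for i, piece in enumerate(reversed(stack)):
            board[rows - 1 - i][c] = piece  # fill from the bottom up, preserving order
    return board

if __name__ == "__main__":
    board = [
        list("......."),
        list("......."),
        list("..R...."),
        list("..Y...."),
        list(".RY...."),
        list("RYYR..."),
    ]
    new_board = apply_gravity(rotate_clockwise(board))
    print("\n".join("".join(row) for row in new_board))
```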
# Catch! (Around the World) ## Our Inspiration Catch has to be one of our most favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the entire world, we thought it'd be nice to play catch with those relatives that we haven't seen due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic that we're in, so who says we can't we play a little game while social distancing? ## What it does Our application uses AR and Unity to allow you to play catch with another person from somewhere else in the globe! You can tap a button which allows you to throw a ball (or a random object) off into space, and then the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chatting application so they can have some commentary going on while they are playing catch. ## How we built it For the AR functionality of the application, we used **Unity** with **ARFoundations** and **ARKit/ARCore**. To record the user sending the ball/object to another user, we used a **Firebase Real-time Database** back-end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized **EchoAR** to create/instantiate different 3D objects that users can choose to throw. Furthermore for the chat application, we developed it using **Python Flask**, **HTML** and **Socket.io** in order to create bi-directional communication between the web-user and server. ## Challenges we ran into Initially we had a separate idea for what we wanted to do in this hackathon. After a couple of hours of planning and developing, we realized that our goal is far too complex and it was too difficult to complete in the given time-frame. As such, our biggest challenge had to do with figuring out a project that was doable within the time of this hackathon. This also ties into another challenge we ran into was with initially creating the application and the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve. There was also some difficulty learning how to use the EchoAR api with Unity since it had a specific method of generating the AR objects. However we were able to use the tool without investigating too far into the code. ## Accomplishments * Working Unity application with AR * Use of EchoAR and integrating with our application * Learning how to use Firebase * Creating a working chat application between multiple users
## Inspiration After seeing the breakout success that was Pokemon Go, my partner and I were motivated to create our own game that was heavily tied to physical locations in the real world. ## What it does Our game is supported on every device that has a modern web browser, absolutely no installation required. You walk around the real world, fighting your way through procedurally generated dungeons that are tied to physical locations. If you find that a dungeon is too hard, you can pair up with some friends and tackle it together. Unlike Niantic, who monetized Pokemon Go using micro-transactions, we plan to monetize the game by allowing local businesses to bid on enhancements to their location in the game-world. For example, a local coffee shop could offer an in-game bonus to players who purchase a coffee at their location. By offloading the cost of the game onto businesses instead of players we hope to create a less "stressful" game, meaning players will spend more time having fun and less time worrying about when they'll need to cough up more money to keep playing. ## How We built it The stack for our game is built entirely around the Node.js ecosystem: express, socket.io, gulp, webpack, and more. For easy horizontal scaling, we make use of Heroku to manage and run our servers. Computationally intensive one-off tasks (such as image resizing) are offloaded onto AWS Lambda to help keep server costs down. To improve the speed at which our website and game assets load, all static files are routed through MaxCDN, a content delivery network with over 19 datacenters around the world. For security, all requests to any of our servers are routed through CloudFlare, a service which helps to keep websites safe using traffic filtering and other techniques. Finally, our public facing website makes use of Mithril MVC, an incredibly fast and light one-page-app framework. Using Mithril allows us to keep our website incredibly responsive and performant.
partial
![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png) # What is gitStarted? GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules. ## Installation To install: ``` npm install ``` ## Usage To run: ``` gulp ``` ## Credits Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)! Made with :heart: in Kingston, Ontario for QHacks 2016
## Inspiration gitpizza was inspired by a late night development push and a bout of hunger. What if you could order a pizza without having to leave the comfort of your terminal? ## What it does gitpizza is a CLI based on git which allows you to create a number of pizzas (branches), add toppings (files), configure your address and delivery info, and push your order straight to Pizza Hut. ## How I built it Python is the bread and butter of gitpizza, parsing the provided arguments and using selenium to automatically navigate through the Pizza Hut website. ## Challenges I ran into Pizza Hut's website is mostly created with Angular, meaning selenium would retrieve a barebones HTML page that would later be dynamically populated with JavaScript. But selenium didn't see these changes, so finding elements by ids and such was impossible. That, along with the generic names and general lack of ids on the website, meant that my only solution was to physically move the mouse and click on pixel-perfect positions to add toppings and place the user's order. ## Accomplishments that I'm proud of Just the amount of commands that gitpizza supports. `gitpizza init` to start a new order, `gitpizza checkout -b new-pizza` to create a second pizza, `gitpizza add --left pepperoni` to add pepperoni to only the left half of your pizza, and `gitpizza diff` to see the differences between each side of your pizza. Visit [the repository](https://github.com/Microsquad/gitpizza) for the full list of commands
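A hedged sketch of that pixel-click workaround using selenium is below; the coordinates, wait time, and page URL are placeholders rather than the real Pizza Hut layout, and are not gitpizza's actual values.

```python
# Illustrative pixel-position clicking with selenium, for pages whose elements
# are hard to locate by id because they are rendered dynamically.
import time
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

PEPPERONI_XY = (412, 655)  # hypothetical viewport position of the "pepperoni" tile

def click_at(driver, x, y):
    """Move the virtual mouse to an absolute viewport position and click."""
    ActionChains(driver).move_by_offset(x, y).click().perform()
    # Move back so the next call is again measured from the viewport origin.
    ActionChains(driver).move_by_offset(-x, -y).perform()

if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("https://www.pizzahut.ca/")  # Angular page: content appears after JS runs
    time.sleep(5)                           # crude wait for the page to finish rendering
    click_at(driver, *PEPPERONI_XY)
    driver.quit()
```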
winning
## Inspiration ## What it does Takes an input from the user describing their symptoms and recommends over-the-counter medications if the symptoms are mild enough to not require a doctor visit. ## How we built it Python with the ChatGPT API to parse user input; React, TypeScript, and Tailwind CSS for the frontend; AWS to deploy the backend. ## Challenges we ran into Deploying the backend to AWS. ## Accomplishments that we're proud of ## What we learned ## What's next for PharmaBot * Include MongoDB for ease of database management * User authentication and personalization for data persistence and preferences * Incorporate a drug interaction database so patients can enter their current medical conditions and current drugs, then recommend over-the-counter medications based on that information * Multilingual support for countries outside of the Anglophone world * A symptom severity scale to more promptly direct the user to help
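As a rough sketch of the symptom-parsing step described above, here is what the ChatGPT API call could look like in Python; the model name, system prompt, and the "SEE_A_DOCTOR" convention are assumptions for illustration, not PharmaBot's actual code.

```python
# Hypothetical triage call: mild symptoms -> OTC suggestion, otherwise defer to a doctor.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend_otc(symptoms: str) -> str:
    """Ask the model for an over-the-counter suggestion, or a 'see a doctor' flag."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": (
                "You triage mild symptoms. Suggest a common over-the-counter medication "
                "only if the symptoms are mild; otherwise reply exactly 'SEE_A_DOCTOR'."
            )},
            {"role": "user", "content": symptoms},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(recommend_otc("mild headache and a runny nose since yesterday"))
```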
## Inspiration We were inspired to create our product because of one of our teammate's recent experience in our healthcare system. After a particularly bad bike accident, he went to the emergency department to be checked on but faced egregious wait times due to inefficiencies within the hospital. Unfortunately, the medical staff is so occupied with administrative work such as filling out forms, that their valuable time, which could be spent with patients, is drawn thin. We hoped to make something that could address this issue, which will both lower costs of operation and increase patient satisfaction. ## What it does Our product is a web page that uses the language model GPT-3 to expedite the process of creating patient visit summaries. The UI prompts the user to simply enter a few words or phrases pertaining to the patient's situation - their initial incident, symptoms, and treatments - and the model, along with our back-end, works to synthesize it all into a summary. ## How we built it Much of the beauty of this project lies in the UI, which streamlines the whole process. The web page was built using React with components from Google's Material UI to easily integrate front- and back-end. We also used OpenAI's GPT-3 playground to test various queries and eventually decide on the exact ones that would be used within the React framework. ## Challenges we ran into Working with GPT-3 proved to be a trickier task than expected. The language model was often rather fickle, producing content that we found to be irrelevant or even incorrect. Even more confounding was formatting the results we got. We tried a variety of methods of generating the multi-paragraph structure that we wanted, yet all of them had some sort of inconsistency. Ultimately, we realized that the reliability we needed depended on more simplicity, and thus came up with simpler, but more streamlined queries that got the job done consistently. ## Accomplishments that we're proud of We are proud of having built a product from scratch while implementing cutting-edge natural language technology. It was exciting to see the components of our site develop from its planning stages and then come together as an actual product that can feasibly be deployed for actual use. ## What we learned Having been so entranced by ChatGPT recently, we learned how to integrate large language models into applications for ourselves. It turns out that it was much more difficult than just typing a question into ChatGPT, and designing the pipeline became a valuable learning experience. ## What's next for Untitled Despite having such a niche application, our project has many possibilities for expansion. We can further optimize the process with better, perhaps more specifically trained language models that will be able to predict possible symptoms or treatments for an incident. Additionally, we can expand our concept and product to other similar administrative tasks that take up the valuable time of medical workers, helping to expedite many more facets of our healthcare system.
## Inspiration This project is inspired by our personal experiences of seeing elderly relatives struggle with keeping track of their prescriptions as well as with the complexity of modern tech. By creating simple, no-signup-required, and user-friendly apps, we aim to enhance their quality of life and improve their health as well. ## What it does Althea allows users to enter which prescription drugs they are taking so that they have a checklist to remember what they have taken on a certain day. Althea then asks the user if they felt any symptoms the same day and notifies the user if the symptoms they had could be side effects of a drug they use. The user can rate the severity of the symptoms they felt. Finally, users are able to see past logs and even export them as PDFs if they would like to share the information easily with their primary care providers. ## How we built it For the front-end, we used JavaScript with Tailwind CSS and HTML for a smooth and sleek user experience. For the back-end, we used Python with Django. We organized schemas and used SQLite for patient data because of its sensitive nature. We used PostgreSQL for medicine data since we wanted it to be shared among different people, and used Gemini AI in conjunction with it. For the mobile app alternative, we used pure Flutter and emulated it on our machines. ## Challenges we ran into One significant challenge we ran into early on was deciding which tech to build our app with. We originally agreed on React Native since we were all familiar with React, but we learned that they're not that similar. This caused us to lose quite some time as we struggled to figure it out. After that, we decided to build a React+Django web app with a Flutter mobile app in parallel, with the huge ambition of interconnecting them. However, time constraints and technical challenges didn't allow us to achieve our initial goals. Additionally, our web team was challenged by the undertaking of linking the frontend and backend via API endpoints. ## Accomplishments that we're proud of We're proud of a few things. First, we have a large codebase with a well-maintained structure, tech, and features. Second, we have a neat, streamlined, and user-friendly navigation system for both the web and mobile app. Third, we came up with a decent design considering we are all primarily backend developers with minor experience in React or any other frontend tech. Lastly, we worked on two projects in parallel, even though linking them together didn't work out. ## What we learned Teamwork and communication are a must. Hackathons are the grindiest grind out there (lol). Gradual development and constant updates via Git are the path to success. We made sure to work on separate branches and then merge them to avoid conflicts, as well as keeping a structured file system that promotes collaboration. And lastly, simplicity is key; trying too hard or aiming too high often doesn't end well. ## What's next for Althea The next step is implementing all the features that stayed on the whiteboard, as well as connecting the web and mobile apps into one ecosystem, allowing seamless access for a wide audience. We definitely want to make our UI/UX even better, while keeping it streamlined and simple so that any user, even the least tech-savvy, can use it. A cool and useful feature would be allowing users to securely scan their prescription label with their phone camera.
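To illustrate the schema idea described in "How we built it", here is a minimal Django models sketch; the field names, relationships, and severity scale are assumptions for the example, not Althea's actual schema.

```python
# Hypothetical models for the prescription checklist and daily symptom log.
from django.db import models

class Prescription(models.Model):
    name = models.CharField(max_length=100)
    dosage = models.CharField(max_length=50, blank=True)

    def __str__(self):
        return self.name

class SymptomLog(models.Model):
    prescription = models.ForeignKey(
        Prescription, on_delete=models.CASCADE, related_name="symptom_logs"
    )
    date = models.DateField(auto_now_add=True)
    description = models.CharField(max_length=200)
    severity = models.PositiveSmallIntegerField(default=1)  # e.g. 1 (mild) to 5 (severe)

    class Meta:
        ordering = ["-date"]
```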
## Inspiration Genes are the code of life, a sequencing that determines who you are, what you look like, what you do and how you behave. Sequencing is the process of determining the order of bases in an organisms genome. Knowing one's genetic sequence can give insight into inherited genetic disorders, one's ancestry, and even one's approximate lifespan. Next-generation sequencing (NGS) is a term for the massive advancements made in genetic sequencing technologies made over the past 20 years. Since the first fully sequenced genome was released in 2000, the price of sequencing has dropped drastically, resulting in a wealth of biotech start-ups looking to commercialize this newfound scientific power. Given that the human genome is very large (about 3 GB for an individual), the combination of computational tools and biology represent a powerful duo for medical and scientific applications. The field of bioinformatics, as it is known, represents a growth area for life sciences that will only increase in years to come. ## What it does Reactive Genetics is a web portal. Individuals who have either paid to have their genes sequenced, or done it themselves (an increasing probability in coming years), can paste in their sequence into the home page of the web portal. It then returns another web page telling them whether they hold a "good" or "bad" gene for one of six common markers of genetic disease. ## How I built it Reactive Genetics uses a flask server that queries the National Center for Biotechnology Information's Basic Local Alignment Search Tool (BLAST) API. "BLASTING" is commonly used in modern biological research to find unknown genes in model organisms. The results are then returned to a React app that tells the user whether they are positive or negative for a certain genetic marker. ## Challenges I ran into The human genome was too large to return reliably or host within the app, so the trivial solution of querying the sequence against the reference genome wasn't possible. We resorted to BLASTing the input sequence and making the return value a boolean about whether the gene is what it "should" be. ## Accomplishments that I'm proud of One team member hopes to enter serious bioinformatics research one day and this is a major first step. Another team member gave a serious shot at learning React, a challenging endeavour given the limited time frame. ## What I learned One team member learned use of the BLAST API. Another team member became familiar with Bootstrap. ## What's next for Reactive genetics The app is currently running both a React development server and a Flask server. Eventually, porting everything over to one language and application would be ideal. More bioinformatics tools are released on a regular basis, so there is potential to use other technologies in the future and/or migrate completely to React.
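A minimal sketch of the BLAST step described above. The write-up doesn't say exactly how the Flask server calls NCBI, so this assumes Biopython's `qblast` helper, and the marker-checking rule is purely illustrative:

```python
from flask import Flask, request, jsonify
from Bio.Blast import NCBIWWW, NCBIXML  # Biopython wrappers around NCBI BLAST

app = Flask(__name__)

@app.route("/blast", methods=["POST"])
def blast_marker():
    """Take a pasted sequence and report whether it aligns strongly to a reference gene."""
    sequence = request.json["sequence"]
    # Submit the sequence to NCBI's BLAST service (blastn against the nt database).
    handle = NCBIWWW.qblast("blastn", "nt", sequence)
    record = NCBIXML.read(handle)
    # Illustrative rule: call the marker "good" if any alignment has a very low e-value.
    has_hit = any(hsp.expect < 1e-10
                  for alignment in record.alignments
                  for hsp in alignment.hsps)
    return jsonify({"marker_ok": bool(has_hit)})

if __name__ == "__main__":
    app.run(port=5000)
```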
## Inspiration
In an AI course, an assignment was to build a simple chatbot. We took concepts learned in class and worked them into a web application that focuses on QHacks.
## What it does
It's an AI that chats with you - answer its questions or say anything and it'll respond!
## How I built it
First we built the application using Javascript/JQuery with a simple textbox and console output. Then we added CSS and "chat bubbles" to make it feel like a regular conversation.
## Challenges I ran into
* Figuring out RegEx in Javascript
* Getting the response format correct using CSS
## Accomplishments that I'm proud of
The more you interact with the chatbot, the more it seems like it could be human. We made our responses conversational, and are proud of the outcome.
## What I learned
How to manipulate and then map user input segments to custom responses in a way that seems almost human-like.
## What's next for QHacks Chatbot
* Adding more responses
* Adding response animations or delays
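The bot itself was written in JavaScript/jQuery; as a language-agnostic illustration of the "map user input segments to custom responses" idea, here is the same pattern sketched in Python. The patterns and replies are made up:

```python
import re

# Ordered (pattern, response) pairs; the first match wins.
RULES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I), "Hey there! How's QHacks treating you?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"\b(tired|sleepy)\b", re.I), "Grab a coffee, the hacking isn't over yet!"),
]

FALLBACK = "Interesting! Tell me more."

def respond(user_input: str) -> str:
    """Return the response for the first rule whose regex matches the input."""
    for pattern, reply in RULES:
        match = pattern.search(user_input)
        if match:
            # Reuse captured groups (e.g. the user's name) in the reply.
            return reply.format(*match.groups())
    return FALLBACK

if __name__ == "__main__":
    print(respond("Hi!"))
    print(respond("my name is Alex"))
```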
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution.
## What it does
It helps developers find projects to work on, and helps project leaders find group members. By using the data from GitHub commits, it can determine what kind of projects a person is suited for.
## How we built it
We decided on building an app for the web, then chose a GraphQL, React, Redux tech stack.
## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't get all the data we needed. The authentication was hard to implement since we had to try a number of approaches to get it to work. The last challenge was determining how to relate users to the projects they could be paired up with.
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database, and the authentication are all ready to show.
## What we learned
We learned that working with external APIs brings its own unique challenges.
## What's next for Hackr\_matchr
Scaling up is next: having it used for more kinds of projects, with more robust matching algorithms and higher user capacity.
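The team's stack used GitHub's GraphQL API; as a rough sketch of the underlying idea (inferring what a developer works on from their repositories), here is an equivalent call against GitHub's REST API. The scoring heuristic is invented for illustration, and an access token is needed in practice to avoid the rate limits mentioned above:

```python
from collections import Counter
import requests

def language_profile(username: str, token: str | None = None) -> Counter:
    """Count the primary languages across a user's public repositories."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:  # authenticated requests get a much higher rate limit
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(
        f"https://api.github.com/users/{username}/repos",
        params={"per_page": 100},
        headers=headers,
        timeout=10,
    )
    resp.raise_for_status()
    return Counter(repo["language"] for repo in resp.json() if repo["language"])

def match_score(profile: Counter, project_languages: list[str]) -> int:
    """Toy matching heuristic: how many of the user's repos use the project's languages."""
    return sum(profile.get(lang, 0) for lang in project_languages)

if __name__ == "__main__":
    profile = language_profile("octocat")
    print(profile, match_score(profile, ["JavaScript", "Python"]))
```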
# INSPIRATION
Until now, nothing really existed that could teach you how to manage your money the right way in a practical manner. Our team REVA brings FinLearn, not another budgeting app. Money is one thing around which everyone's life revolves. Yet, no one teaches us how to manage it effectively. Earning money is not easy, and neither is managing it. As a student, when you start to live alone, take a student loan, or plan to study abroad, all of this becomes a pain if you don't understand how to manage your personal finances. We faced this problem ourselves and eventually educated ourselves. Hence, we bring a solution for all. FinLearn is a fin-ed mobile application that can teach you about money management in a practical way. You can set practical finance goals for yourself and learn while achieving them. Now, learning personal finance is easier than ever with FinLearn.
# WHAT IT DOES
FinLearn is a fin-ed mobile application that can teach you about money and finances in a practical way. You can set practical finance goals for yourself and learn while achieving them. Now, learning personal finance is easier than ever with FinLearn. It has features like a Financial Learning Track, Goal Streaks, Reward-Based Learning Management, and a News Feed for all the latest information from the business world.
# HOW WE BUILT IT
* We built the mobile application on the Flutter framework and designed it in Figma.
* It consists of Learning and Goal Tracker APIs built with Flask and Cosmos DB.
* The learning track also has a voice feature built with Azure Text-to-Speech cognitive services.
* Our Budget Diary feature helps you record all your daily expenses into major categories, which can be visualized over time and can help in forecasting your future expenses.
* These recorded expenses aid in managing your financial goals in the app.
* The reward-based learning system unlocks more learning paths as you complete your goals.
# CHALLENGES WE RAN INTO
Building this project in such a short time was quite a challenge. Building the logic for the whole reward-based learning system was not easy, yet we were able to pull it off. Integrating APIs with proper data/error handling while maintaining the sleek UI and great performance was a tricky task. Making reusable/extractable snippets of widgets helped a lot to overcome this challenge.
# ACCOMPLISHMENTS WE ARE PROUD OF
We are proud of the effort we put in and of pulling off the entire application within 1.5 days. Going from just an idea to an entire beautiful application is more than enough to make us feel content. The whole Learning Track we made is the charm of the application.
# WHAT'S NEXT
FinLearn will have a lot of other things in the future. Our first agenda would be to build a community feature for the students on our app. Building a learning community is gonna give it an edge.
# Credits
Video editing: Aaditya VK
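A minimal sketch of what the Budget Diary API described above might look like as a Flask endpoint. Route names and fields are illustrative; FinLearn's real APIs sit on top of Cosmos DB rather than the in-memory list used here:

```python
from collections import defaultdict
from flask import Flask, request, jsonify

app = Flask(__name__)
EXPENSES = []  # stand-in for the Cosmos DB collection

@app.route("/expenses", methods=["POST"])
def add_expense():
    """Record a daily expense under a major category."""
    entry = {
        "date": request.json["date"],          # e.g. "2021-08-14"
        "category": request.json["category"],  # e.g. "food", "rent", "travel"
        "amount": float(request.json["amount"]),
    }
    EXPENSES.append(entry)
    return jsonify(entry), 201

@app.route("/expenses/summary")
def summary():
    """Totals per category: the numbers behind the visualizations and goal tracking."""
    totals = defaultdict(float)
    for e in EXPENSES:
        totals[e["category"]] += e["amount"]
    return jsonify(totals)

if __name__ == "__main__":
    app.run(port=5000)
```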
## Inspiration Everyone on our team comes from a family of newcomers and just as it is difficult to come into a new country, we had to adapt very quickly to the Canadian system. Our team took this challenge as an opportunity to create something that our communities could deeply benefit from when they arrive in Canada. A product that adapts to them, instead of the other way around. With some insight from our parents, we were inspired to create this product that would help newcomers to Canada, Indigenous peoples, and modest income families. Wealthguide will be a helping hand for many people and for the future. ## What it does A finance program portal that provides interactive and accessible financial literacies to customers in marginalized communities improving their financially intelligence, discipline and overall, the Canadian economy 🪙. Along with these daily tips, users have access to brief video explanations of each daily tip with the ability to view them in multiple languages and subtitles. There will be short, quick easy plans to inform users with limited knowledge on the Canadian financial system or existing programs for marginalized communities. Marginalized groups can earn benefits for the program by completing plans and attempting short quiz assessments. Users can earn reward points ✨ that can be converted to ca$h credits for more support in their financial needs! ## How we built it The front end was built using React Native, an open-source UI software framework in combination with Expo to run the app on our mobile devices and present our demo. The programs were written in JavaScript to create the UI/UX interface/dynamics and CSS3 to style and customize the aesthetics. Figma, Canva and Notion were tools used in the ideation stages to create graphics, record brainstorms and document content. ## Challenges we ran into Designing and developing a product that can simplify the large topics under financial literacy, tools and benefits for users and customers while making it easy to digest and understand such information | We ran into the challenge of installing npm packages and libraries on our operating systems. However, with a lot of research and dedication, we as a team resolved the ‘Execution Policy” error that prevented expo from being installed on Windows OS | Trying to use the Modal function to enable pop-ups on the screen. There were YouTube videos of them online but they were very difficult to follow especially for a beginner | Small and merge errors prevented the app from running properly which delayed our demo completion. ## Accomplishments that we're proud of **Kemi** 😆 I am proud to have successfully implemented new UI/UX elements such as expandable and collapsible content and vertical and horizontal scrolling. **Tireni** 😎 One accomplishment I’m proud of is that despite being new to React Native, I was able to learn enough about it to make one of the pages on our app. **Ayesha** 😁 I used Figma to design some graphics of the product bringing the aesthetic to life! ## What we learned **Kemi** 😆 I learned the importance of financial literacy and responsibility and that FinTech is a powerful tool that can help improve financial struggles people may face, especially those in marginalized communities. **Tireni** 😎 I learned how to resolve the ‘Execution Policy” error that prevented expo from being installed on VS Code. **Ayesha** 😁 I learned how to use tools in Figma and applied it in the development of the UI/UX interface. 
## What's next for Wealthguide
* Newsletter Subscription 📰: Up-to-date information on today's finance news. An opportunity for Wealthsimple product promotion, as well as partnering with Wealthsimple companies, sponsors and organizations.
* Wealthsimple Channels & Tutorials 🎥: Knowledge is key. Learn more and get access to guided tutorials on how to properly file taxes, obtain a credit card with benefits, open a savings account, apply for mortgages, learn how to budget and more.
* Finance Calendar 📆: Get updates on programs, benefits, loans and new stocks, including when they open during the year and the application deadlines, e.g. OSAP applications.
## Inspiration Ever felt a shock by thinking where did your monthly salary or pocket money go by the end of a month? When Did you spend it? Where did you spend all of it? and Why did you spend it? How to save and not make that same mistake again? There has been endless progress and technical advancements in how we deal with day to day financial dealings be it through Apple Pay, PayPal, or now cryptocurrencies and also financial instruments that are crucial to create one’s wealth through investing in stocks, bonds, etc. But all of these amazing tools are catering to a very small demographic of people. 68% of the world population still stands to be financially illiterate. Most schools do not discuss personal finance in their curriculum. To enable these high end technologies to help reach a larger audience we need to work on the ground level and attack the fundamental blocks around finance in people’s mindset. We want to use technology to elevate the world's consciousness around their personal finance. ## What it does Where’s my money, is an app that simply takes in financial jargon and simplifies it for you, giving you a taste of managing your money without affording real losses so that you can make wiser decisions in real life. It is a financial literacy app that teaches you A-Z about managing and creating wealth in a layman's and gamified manner. You start as a person who earns $1000 dollars monthly, as you complete each module you are hit with a set of questions which makes you ponder about how you can deal with different situations. After completing each module you are rewarded with some bonus money which then can be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth. Each quiz captures different data as to how your overview towards finance is. Is it inclining towards savings or more towards spending. ## How we built it The project was not simple at all, keeping in mind the various components of the app we first started by creating a fundamental architecture to as to how the our app would be functioning - shorturl.at/cdlxE Then we took it to Figma where we brainstormed and completed design flows for our prototype - Then we started working on the App- **Frontend** * React. **Backend** * Authentication: Auth0 * Storing user-data (courses completed by user, info of stocks purchased etc.): Firebase * Stock Price Changes: Based on Real time prices using a free tier API (Alpha vantage/Polygon ## Challenges we ran into The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic. ## What we learned We researched about the condition of financial literacy in people, which helped us to make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data. ## What's next for Where’s my money? We are looking to complete the backend of the app to make it fully functional. Also looking forward to adding more course modules for more topics like crypto, taxes, insurance, mutual funds etc. Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech)
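A minimal sketch of the real-time price lookup that the stock exchange simulator relies on, using Alpha Vantage's free GLOBAL_QUOTE endpoint. The app's own backend is JavaScript-based; this Python version just shows the shape of the call, and the API key is a placeholder:

```python
import requests

API_KEY = "YOUR_ALPHA_VANTAGE_KEY"  # free-tier key; placeholder

def latest_price(symbol: str) -> float:
    """Fetch the latest traded price for a ticker from Alpha Vantage."""
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={
            "function": "GLOBAL_QUOTE",
            "symbol": symbol,
            "apikey": API_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    quote = resp.json()["Global Quote"]
    return float(quote["05. price"])

if __name__ == "__main__":
    # The simulator's buy/sell logic would compare this against the user's virtual balance.
    print(latest_price("IBM"))
```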
## Inspiration Our team wanted to try learning web development, so we needed a simple but also fun project. One day at lunch, we thought of creating a personality quiz that would determine what kind of chess piece you were. This evolved into making a game where you could move around a chess piece however you like and our program would return what chess piece the moves you made were similar to. --- ## What it does A player must move around a chess piece on a board, and the program will return the chess piece which moves in a similar way. --- ## How we built it We built our project on repl.it, using their HTML, CSS, and JS default project option. The team did the programming all on one computer, because we were all learning together. --- ## Challenges we ran into Our team had never done any kind of JS web dev project before, so we had a lot of trouble learning the languages we were using. In particular, creating an interactive chess board that looked decent was very time consuming. We tried many methods of creating a chess board, including dynamically using JS and a CSS grid. We also had trouble making our web page look good, because we did not know a lot about CSS. --- ## Accomplishments that we're proud of The interactive chess board (TM) is the achievement we are most proud of. At one point, we didn't think moving around a piece on a board would even be possible. However, we somehow managed to pull it off. --- ## What we learned We learned a lot about how HTML, CSS, and JS work together to deliver a complete functioning web page. Hack the North was a great learning experience and now the team is a lot more comfortable using the three languages. --- ## What's next for Chess Piece Personality Quiz Our original idea was to take an image and analyze it to determine what chess piece the image was representing. This might be what is next for the Chess Piece Personality Program, after we've figured out how to analyze an image of course.
## Inspiration
As students, we have seen too many peers in debt, struggling to get the education they deserve. Finding and comparing loan options can be overwhelming for a lot of people, so we built an app to simplify the process.
## What it does
Our app matches students with our database of institutions and banks that give out loans. They can easily see and compare loan options, and apply to the one of their choice.
## How we built it
We built our app on top of React, using the NextJS routing framework. Our data is stored in a MongoDB database, which we query to give loan recommendations to students.
## Challenges we ran into
A big challenge for us was the concept of server-side rendered components versus client-side rendered components in React. This often caused issues in routing and fetching from relative routes.
## Accomplishments that we're proud of
We went above and beyond on styling our website so it looks modern and appealing to any user. Our hearts are in the right place: providing a service for those in need.
## What we learned
We learned a lot of cool technologies, from styling with Tailwind CSS to server-side rendered components.
## What's next for College Coin
We plan to integrate and facilitate loan transactions within the app.
## Inspiration 2020 has definitely been the year of chess. Between 2020 locking everyone indoors, and Netflix's Queen Gambit raking in 62 million viewers, everyone is either talking about chess, or watching others play chess. ## What it does **Have you ever wanted to see chess through the eyes of chess prodigy Beth Harmon?** Where prodigies and beginners meet, BethtChess is an innovative software that takes any picture of a chessboard and instantly returns the next best move given the situation of the game. Not only does it create an experience to help improve your own chess skills and strategies, but you can now analyze chessboards in real-time while watching your favourite streamers on Twitch. ## How we built it IN A NUTSHELL: 1. Take picture of the chessboard 2. Turn position into text (by extracting the FEN code of it by using some machine learning model) 3. Run code through chess engine (we send the FEN code to stockfish (chess engine)) 4. Chess engine will return next best move to us 5. Display results to the user Some of our inspiration came from Apple's Camera app's ability to identify the URL of QR codes in an instant -- without even having to take a picture. **Front-end Technology** * Figma - Used for prototyping the front end * ReactJS - Used for making the website * HTML5 + CSS3 + Fomantic-UI * React-webcam * Styled-components * Framer-motion **Back-end Technology** * OpenCV - Convert image to an ortho-rectified chess board * Kaggle - Data set which has 100,000 chess board images * Keras - Deep Learning (DL) model to predict FEN string * Stockfish.js - The most powerful chess engine * NodeJS - To link front-end, DL model and Stockfish **User Interface** Figma was the main tool we used to design a prototype for the UI/UX page. Here's the link to our prototype: [<https://www.figma.com/proto/Vejv1dzQyZ2ZGOMoFw5w2L/BethtChess?node-id=4%3A2&scaling=min-zoom>] **Website** React.js and node.js were mainly used to create the website for our project (as it is a web app). **Predicting next best move using FEN stream** To predict the next best move, Node.js (express module) was used and stockfish.js was used to communicate with the most powerful chess engine so that we could receive information from the API to deliver to our user. We also trained the Deep Learning model with **Keras** and predicted the FEN string for the image taken from the webcam after image processing using **OpenCV**. ## Challenges we ran into Whether if it's 8pm, 12am, 4am, it doesn't matter to us. Literally. Each of us live in a different timezone and a large challenge was working around these differences. But that's okay. We stayed resilient, optimistic, and determined to finish our project off with a bang! **Learning Curves** It's pretty safe to say that all of us had to learn SOMETHING on the fly. Machine learning, image recognition, computing languages, navigating through Github, are only some of the huge learning curves we had to overcome. Not to mention, splitting the work and especially connecting all components together was a challenge that we had to work extra hard to achieve. Here's what Melody has to say about her personal learning curve: *At first, it felt like I didn't know ANYTHING. Literally nothing. I had some Python and Java experience but now I realize there's a whole other world out there full of possibilities, opportunities, etc. What the heck is an API? What's this? What's that? What are you doing right now? What is my job? What can I do to help? 
The infinite loop of questions kept on racing through my head. Honestly, though, the only thing that got me through all this was my extremely supportive team!!! They were extremely understanding, supportive, and kind and I couldn't have asked for a better team. Also, they're so smart??? They know so much!!* ## Accomplishments that we're proud of Only one hour into the hackathon (while we were still trying to work out our idea), one of our members already had a huge component of the project (a website + active camera component + "capture image" button) as a rough draft. Definitely, a pleasant surprise for all of us, and we're very proud of how far we've gotten together in terms of learning, developing, and bonding! As it was most of our members' first hackathon ever, we didn't know what to expect by the end of the hackathon. But, we managed to deliver a practically **fully working application** that connected all components that we originally planned. Obviously, there is still lots of room for improvement, but we are super proud of what we achieved in these twenty-four hours, as well as how it looks and feels. ## What we learned Our team consisted of students from high school all the way to recent graduates and our levels of knowledge vastly differed. Although almost all of our team consisted of newbies to hackathons, we didn't let that stop us from creating the coolest chess-analyzing platform on the web. Learning curves were huge for some of us: APIs, Javascript, node.js, react.js, Github, etc. were some of the few concepts we had to wrap our head around and learn on the fly. While more experienced members explored their limits by understanding how the stockfish.js engine works with APIs, how to run Python and node.js simultaneously, and how the two communicate in real-time. Because each of our members lives in a different time zone (including one across the world), adapting to each other's schedules was crucial to our team's success and efficiency. But, we stayed positive and worked hard through dusk and dawn together to achieve goals, complete tasks, and collaborate on Github. ## What's next for BethtChess? Maybe we'll turn it into an app available for iOS and Android mobile devices? Maybe we'll get rid of the "capture photo" so that before you even realize, it has already returned the next best move? Maybe we'll make it read out the instructions for those with impaired vision so that they know where to place the next piece? You'll just have to wait and see :)
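Steps 3-5 of the pipeline above (FEN in, best move out) were done with Node and stockfish.js; an equivalent sketch in Python, assuming a local Stockfish binary and the python-chess library, looks like this:

```python
import chess
import chess.engine

def best_move(fen: str, stockfish_path: str = "stockfish") -> str:
    """Given a FEN string (from the board-recognition model), return Stockfish's best move."""
    board = chess.Board(fen)
    with chess.engine.SimpleEngine.popen_uci(stockfish_path) as engine:
        result = engine.play(board, chess.engine.Limit(time=0.5))  # think for half a second
    return board.san(result.move)  # e.g. "Nf3"

if __name__ == "__main__":
    start_fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
    print(best_move(start_fen))
```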
## Inspiration As more and more people embrace the movement toward a greener future, there still remain many concerns surrounding the viability of electric vehicles. In order for complete adoption of these greener technologies, there must be incentives toward change and passionate communities. ## What it does EVm connects a network of electric vehicle owners and enables owners to rent their charging stations when they aren't needed. Facilitated by fast and trustless micropayments through the Ethereum blockchain, users can quickly identify nearby charging stations when batteries are running low. ## How we built it Using Solidity and the Hardhat framework, smart contracts were deployed to both a localhost environment and the Goerli testnet. A React front-end was created to interact with the smart contract in a simple and user-friendly way and enabled a connection to a metamask wallet. A Raspberry Pi interface was created to demonstrate a proof of concept for the interaction between the user, electric vehicle, and charging station. While the actual station would be commercially manufactured, this setup provides a clear understanding of the approach. The Raspberry Pi hosted a Flask server to wirelessly communicate data to the web-app. An LCD display conveys the useful metrics so the user can rest assured that their interaction is progressing smoothly. ## Challenges we ran into This was our first experience in the blockchain development space. Not only learning the syntax of Solidity, but gaining an understanding of the major underlying blockchain concepts made for a steep learning curve in little time. Configuration with Hardhat did not go smoothly and required a great deal of debugging. Integrating the hardware with the web-app and smart contracts through the Flask REST API required extensive testing and modification. ## Accomplishments that we're proud of Building our first dApp was a huge accomplishment in itself. Our ambition to connect two of the most rapidly emerging fields, IoT and Blockchain, sparked new creativity in areas that are still very complex and unknown. ## What we learned We not only learned how to write and deploy efficient smart contracts to the Ethereum network, but also saw how they can be integrated into user-friendly web-apps. Building EVm also provided us an opportunity to develop modular, low-level software by learning more about interrupt-driven design as well as various serial communication protocols. ## What's next for EVm We look forward to deploying the smart contracts on actual blockchain networks. To improve transaction times and minimize gas fees, layer 2 chains will be explored to host the project in the future. Extensive testing and refactoring will be done to augment the security and efficiency. Reaching out to industry leaders to make the product more viable will be essential for its adoption.
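A rough sketch of the Raspberry Pi side described above: a small Flask server that the web app polls for charging-session data. Endpoint names, fields, and the charger power figure are assumptions for illustration; the real station logic, LCD updates, and contract interaction are not shown:

```python
import time
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the charging hardware state on the Pi.
session = {"active": False, "started_at": None, "energy_kwh": 0.0}

@app.route("/session/start", methods=["POST"])
def start_session():
    """Called once the smart-contract payment is confirmed by the web app."""
    session.update(active=True, started_at=time.time(), energy_kwh=0.0)
    return jsonify(session)

@app.route("/session/status")
def status():
    """Polled by the React front end to show progress (mirrors the LCD display)."""
    if session["active"]:
        elapsed_hours = (time.time() - session["started_at"]) / 3600
        session["energy_kwh"] = round(7.2 * elapsed_hours, 3)  # assume a 7.2 kW charger
    return jsonify(session)

@app.route("/session/stop", methods=["POST"])
def stop_session():
    session["active"] = False
    return jsonify(session)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```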
# Ether on a Stick Max Fang and Philip Hayes ## What it does Ether on a Stick is a platform that allows participants to contribute economic incentives for the completion of arbitrary tasks. More specifically, it allows participants to pool their money into a smart contract that will pay out to a specified target if and only if a threshold percentage of contributors to the pool, weighted by contribution amount, votes that the specified target has indeed carried out an action by a specified due date. Example: A company pollutes a river, negatively affecting everyone nearby. Residents would like the river to be cleaned up, and are willing to pay for it, but only if the river is cleaned up. Solution: Residents use Ether on a Stick to pool their funds together that will pay out to the company if and only if a specified proportion of contributors to the pool vote that the company has indeed cleaned up the river. ## How we built it Ether on a Stick implements with code a game theoretical mechanism called a Dominant Assurance Contract that coordinates the voluntary creation of public goods in the face of the free rider problem. It is a decentralized app (or "dapp") built on the Ethereum network, implementing a "smart contract" in Serpent, Ethereum's Python-like contract language. Its decentralized and trustless nature enables the creation of agreements without a 3rd party escrow who can be influenced or corrupted to determine the wrong user. ## Challenges The first 20 hours of the hackathon were mostly spent setting up and learning how to use the Ethereum client and interact with the network. A significant portion was also spent planning the exact specifications of the contract and deciding what mechanisms would make the network most resistant to attack. Despite the lack of any kind of API reference, writing the contract itself was easier, but deploying it to Ethereum testnet was another challenge, as large swaths of the underlying technology hasn't been built yet. ## What's next for Ether on a Stick We'd like to take a step much closer to a game-theoretically sound system (don't quote us, we haven't written a paper on it) by implementing a sort of token-based reputation system, similar to that of Augur. In this system, a small portion of pooled funds are set aside to be rewarded to reputation token bearing oracles that correctly vote on outcomes of events. "Correctly voting" means voting with the majority of the other randomly selected oracles for a given event. We would also have to restrict events to only those which are easily and publically verifiable; however, by decoupling voting from contribution, this bypasses a Sybil attack wherein malicious actors (or the contract-specified target of the funds) can use a large amount of financial capital to sway the vote in their favor.
## Inspiration Cars of today can function well by themselves, but for autonomous driving to be realistic, cars have to communicate and determine efficient routes as a network. *Kansei dorifto!* ## What it does Inertia uses blockchain to implement secure hive-mind thinking. Each device/car runs a node on the Ethereum network, and these devices update their location, destination, and their calculations of routes through smart contracts. Once all the cars have submitted this information, the network is able to come to a consensus on the best routes to take. The calculation itself (which all users take part in) is loosely based on Dijkstra's algorithm as well as other graph theory algorithms. Users that contribute to this calculation are rewarded with tokens. Why use a distributed app? There are three advantages: 1. Privacy. People should not have to constantly send their location and destination to a third party. With Inertia, they can stay anonymous on the Ethereum network instead. 2. Security. If something goes wrong, a car accident could occur - which is why it is important to use the blockchain, which is much harder to corrupt. 3. Lower costs! Inertia does not require a server or additional infrastructure to implement. The clients do the heavy lifting, which also makes it scalable. ## How we made it We used Ganache to create nodes with worthless "fake Ethereum", and we used web3 to allow them to communicate to the network through the smart contracts (which we wrote with Remix). The front-end was made with just HTML/CSS and vanilla JavaScript and was served by a node.js back-end. Finally, we used ngrok to create tunnels to other devices. ## Accomplishments that we're proud of All of our team members are relatively new to blockchain, so we are excited to have successfully implemented it. ## What's next for Inertia Although we were unable to demo on physical cars at this hackathon, we would like to be able to implement this onto toy cars and even real ones eventually.
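The shared route calculation is described as "loosely based on Dijkstra's algorithm"; contract and token logic aside, the core of that computation (which each contributing node would run locally) looks roughly like this:

```python
import heapq

def dijkstra(graph, start, destination):
    """Shortest path on a weighted road graph.

    `graph` maps an intersection to a list of (neighbour, travel_time) pairs;
    weights could include live congestion reported by other cars on the network.
    """
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

if __name__ == "__main__":
    roads = {
        "A": [("B", 4), ("C", 2)],
        "B": [("D", 5)],
        "C": [("B", 1), ("D", 8)],
        "D": [],
    }
    print(dijkstra(roads, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```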
## ⭐ Inspiration We've all been there—sitting in a quiet room with a mountain of notes, textbooks sprawled open, and suddenly a nagging question pops into your mind. You're tempted to pick up your phone, but you know one search might lead to an hour on social media. Then there's that longing for a little treat after a focused study session. Cue the birth idea of the "Study BUDD-E", a result of collective student experiences, caffeine highs, and a dash of techy magic. ## 🍭 What it does Introducing "Study BUDD-E", not just a study companion, but your own personal Q&A machine: 1. **Concentration Tracking**: Through its advanced sensors, Study BUDD-E is in sync with your study dynamics. Detecting your reading, typing, or pondering moments, it differentiates between genuine focus and those wandering-mind intervals. 2. **Question & Answer Buddy:** Hit a snag? Unsure about a concept? Just ask! With a vast database and smart processing, "Study BUDD-E" provides you with answers to your academic queries. No need to browse the web and risk distractions. Your BUDD-E's got your back. 3. **Study Stats:** After wrapping up your study session, prepare for some insights! "Study BUDD-E" showcases stats on your concentration levels, time well spent, and moments of diversion, helping you understand your study patterns and where you can improve. 4. **Reward System:** All work and no play makes Jack a dull boy. For every successful, focused study session, "Study BUDD-E" cheers you on by dispensing a sweet candy treat. Your academic achievements, no matter how small, deserve a sweet celebration. In a digital age where every beep and buzz can sidetrack our study mojo, "Study BUDD-E" stands as a beacon of focus, ensuring you stay on track, get answers in real-time, and celebrate the small victories. ## 🔧 How we built it Building the "Study BUDD-E" was a blend of robotics, software development, and a sprinkle of sweet creativity. Here's a behind-the-scenes look at our construction journey: 1. **Robot Base:** At the heart of "Study BUDD-E" is the Viam rover platform. We chose it for its robustness and flexibility. This provided a solid foundation upon which we could customize and bring our sweet-treat-giving academic aide to life. 2. **The Add-ons:** 3. **Camera:** We equipped BUDD-E with a camera to better understand and respond to the user's study behaviors, ensuring that the rewards and stats provided were in sync with real-time engagement. 4. **Candy Dispenser:** No reward system is complete without the rewards! Our candy dispenser is strategically integrated to give out those well-deserved treats after fruitful study sessions. 5. **Speaker:** To make the Q&A experience more interactive, we added a speaker. This allows BUDD-E to vocalize answers to any academic queries, making the study experience more engaging. 6. **The Brain - Raspberry Pi:** Orchestrating the movements, rewards, and interactions is the Raspberry Pi. This mini-computer takes care of processing, managing the camera feed, controlling the candy dispenser, and handling speaker outputs. All this while seamlessly integrating with the Viam Platform. 7. **Frontend Magic:** To offer a user-friendly interface, we developed a study page using React. This not only tracks study progress but also facilitates smooth communication between the student and "Study BUDD-E". It's sleek, intuitive, and keeps you connected with your robotic study companion. 
By amalgamating a strong robotic base with added functionalities and a dynamic front-end interface, we've aimed to make "Study BUDD-E" an essential part of every student's study routine. ## 🚧 Challenges we ran into Creating "Study BUDD-E" was an enlightening journey, but like every innovation story, it wasn’t without its hurdles. Here are some challenges that kept us on our toes: 1. **Connectivity Conundrums:** One of the main challenges was maintaining a stable connection between our systems and the robot. There were times when the connection was as elusive as the solution to a tricky math problem! Ensuring a consistent and robust link was crucial, as it affected the real-time feedback and control of the robot. After some deep troubleshooting and testing, we managed to create a more stable bridge of communication. 2. **Hardware Hurdles:** Securing all the desired hardware components wasn't a walk in the park. Due to various constraints, some components were out of reach, which led us to think on our feet. It was a masterclass in improvisation as we figured out alternative solutions that would still align with our vision for "Study BUDD-E". 3. **Balancing Acts:** Building a robot is part science and part art. One unexpected challenge was ensuring that "Study BUDD-E" remained stable and balanced, especially with all the new additions. There were a few tumbles and wobbles, but with some recalibrations and tweaks, we managed to get our BUDD-E to stand tall and steady. Each challenge presented a learning opportunity. They pushed us to refine our ideas, think outside the box, and come together as a team to bring our vision of the perfect study companion to life. ## 🏆 Accomplishments that we're proud of Building "Study BUDD-E" was no simple task, but looking back, there are several accomplishments that make us beam with pride: 1. **Resilient Connectivity:** Overcoming the connectivity challenge was a significant win. We not only managed to establish a stable connection between our systems and the robot but ensured that it remained consistent even under varying conditions. This means that students can rely on "Study BUDD-E" without worrying about sudden disruptions. 2. **Innovative Problem-Solving:** When faced with hardware shortages, instead of being deterred, we embraced the art of improvisation. Finding alternative solutions that aligned with our initial vision showcased our team's adaptability and innovation. The end result? A robot that, while slightly different from our first blueprint, embodies the essence of "Study BUDD-E" even better. 3. **Achieving Balance:** Literally and metaphorically! We not only solved the physical balancing issues of the robot but also struck a balance between user-friendly design, functionality, and entertainment. "Study BUDD-E" is stable, efficient, and fun—attributes that we believe will resonate with every student. 4. **Intuitive User Interface:** Our React-based study page is something we take immense pride in. It’s sleek, user-friendly, and bridges the gap between the student and the robot seamlessly. Seeing our users navigate it with ease and finding it genuinely beneficial makes all the coding hours worth it. 5. **Team Synergy:** Last but not least, the way our team collaborated, shared ideas, tackled challenges head-on, and remained committed to the goal is an accomplishment in itself. "Study BUDD-E" is a testament to our collective spirit, determination, and passion for making study sessions sweeter and smarter. 
In a nutshell, the journey of creating "Study BUDD-E" has been filled with challenges, but the accomplishments along the way have made it an unforgettable experience. ## 🎓 What we learned The process of creating "Study BUDD-E" has been as enlightening as a dense textbook, but way more fun! Here's a glimpse of our takeaways: 1. **The Importance of Resilience:** In the face of connectivity challenges and other unexpected hitches, we discovered the true value of resilience. Keeping the bigger picture in mind, tweaking, adjusting, and not being afraid to iterate were crucial lessons. Just as in studying, sometimes you have to revisit a problem multiple times before finding the right solution. 2. **Adaptability is Key:** When our ideal hardware was out of reach, we learned that sometimes the best solutions come from thinking on the fly. Embracing improvisation not only led to effective outcomes but also made us more versatile as innovators. 3. **Balancing Theory and Practicality:** Just like in academics, where theoretical knowledge needs practical application, building the "Study BUDD-E" taught us the significance of balancing design ideas with real-world functionality. It's one thing to imagine a feature, but another to make it work seamlessly in practice. 4. **Collaboration Overcomes Challenges:** Diverse perspectives lead to comprehensive solutions. By pooling our skills, sharing ideas, and being open to feedback, we were able to address challenges more holistically. The camaraderie we built is a reminder that teamwork amplifies results. 5. **User-Centric Design:** Through developing our React-based interface and integrating the robot's functionalities, we've come to appreciate the importance of a user-centric approach. Building something technically impressive is one thing, but ensuring it's intuitive and caters to the user's needs is what truly makes a product stand out. 6. **Continuous Learning:** Just as "Study BUDD-E" aims to enhance study sessions, the process of building it reiterated the essence of continuous learning. Whether it was diving deep into robotics, exploring new software nuances, or mastering the art of problem-solving, every step was a learning curve. In essence, the path to creating "Study BUDD-E" reinforced that the journey is as valuable as the destination, packed with insights, challenges, and growth at every turn. ## 🚀 What's next for Study BUDD-E The journey with "Study BUDD-E" has only just begun! While we're incredibly proud of what we've accomplished so far, the horizon is brimming with exciting possibilities: 1. **Version 2.0:** Building on the feedback and experiences from this prototype, we're gearing up for a more polished and refined "Study BUDD-E 2.0". This next iteration will not only enhance usability but will also showcase a sleeker design. 2. **From Hacky to High-Tech:** While hackathons are about innovation at lightning speed, which sometimes means opting for quick-fixes, our vision for "Study BUDD-E" is far grander. We aim to revisit every aspect we rushed or improvised, ensuring that the robot’s performance, durability, and user experience are top-notch. 3. **Enhanced Features:** As we refine, we're also looking to innovate! There might be new features in the pipeline that further boost the study experience. Whether it’s advanced analytics, broader question-answer capabilities, or even gamifying the study process, the possibilities are limitless. 4. **Community Engagement:** We believe in evolving with feedback. 
Engaging with students and users to understand their needs, preferences, and recommendations will play a significant role in shaping the next version of "Study BUDD-E". 5. **Scalability:** Once the refined version is out, we're also looking at potential scalability options. Can "Study BUDD-E" cater to group study sessions? Could it be used in libraries or study halls? We're excited to explore how our little robot can make a larger impact! In essence, the future for "Study BUDD-E" is all about evolution, enhancement, and expansion. We're committed to making study sessions not just rewarding, but also revolutionarily efficient and enjoyable.
## Inspiration
Our inspiration came from our curiosity and excitement to turn a simple household item into a high-tech tool that changes the way we interact with everyday things. We wanted to maximize the functionality of an everyday object while implementing innovative technology. We specifically wanted to focus on a mirror by designing and creating a smart mirror that responds and adapts to its diverse environment.
## What it does
It gives a modern and aesthetic appeal to the room it is installed in. What makes our Intelligent Mirror System smart is its ability to display any information you want on it. The mirror can be customized to display local weather forecasts, news bulletins, upcoming notifications, and daily quotes. One crucial feature is that it can store data about the medication inside a cupboard and remind the user to take specific pills at their predetermined times. It uses facial recognition to detect whether a user is in front of the mirror, which triggers the mirror to start up.
## How I built it
We built our project using Mind Studio for Huawei, GitHub for sharing the code, Raspbian to program the Raspberry Pi, and a JavaScript environment.
## Challenges I ran into
During the initial brainstorming phase, our group thought of multiple ideas. Unfortunately, due to the large pool of possibilities, we had a difficult time committing to a single idea. Once we began working on an idea, we ended up switching to another only a couple of hours later. At last, we settled on utilizing the Telus Dev Shield. After making a major stride of progress, we were left stuck and disappointed when we were unable to set up the environment for the Telus Dev Shield. Although we tried our best to figure out a solution, we were, in the end, unable to do so, forcing us to abandon our idea once again. Another major barrier we faced was having the Raspberry Pi and the Huawei Atlas 200 DK communicate with each other. Unfortunately, we had used every available port on the Raspberry Pi, giving us no room to connect the Atlas.
## Accomplishments that I'm proud of
Finishing the vision recognition model, a trained AI model for facial recognition running alongside Huawei's Atlas 200 DK, was the standout accomplishment for our group.
## What I learned
An important ability we learned over the course of 24 hours was to be patient and overcome challenges that appear nearly impossible at first. For roughly the first half of our making session, our group was unable to make substantial progress. After frequent reflection and brainstorming sessions, we were able to find an idea that our whole team agreed on. With the remaining time, we were able to finalize a product that satisfied our desire to make an everyday item into something truly unique. The second thing we learned was more on the technical side. Since our team consisted primarily of first-years, we were unfamiliar with the hardware offered at the event, and thus the related software. However, after going through the tutorials and attending the workshops, we were able to learn something new about the Huawei Atlas 200 DK and the Raspbian OS.
## What's next for I.M.S (Intelligent, Mirror, System)
* Create a better physical model with a two-way mirror and LED lights behind the frame
* Integrate with a smart home system (Alexa, Google Home...)
* Add personal profiles based on facial recognition so the mirror can be personalized for each person
* Add voice control to give commands to the mirror, and a speaker for responsive feedback
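A minimal sketch of the "wake when a face is in front of the mirror" behaviour, using OpenCV's bundled Haar cascade on a camera feed. The team's actual recognition model ran on the Huawei Atlas 200 DK, so treat this as an illustrative stand-in:

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_present(frame) -> bool:
    """Return True if at least one face is visible in the camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def main():
    camera = cv2.VideoCapture(0)  # Pi camera / USB webcam
    mirror_on = False
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        detected = face_present(frame)
        if detected and not mirror_on:
            mirror_on = True
            print("Face detected: waking mirror display")   # would start the UI here
        elif not detected and mirror_on:
            mirror_on = False
            print("No face: dimming mirror display")
    camera.release()

if __name__ == "__main__":
    main()
```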
## Inspiration *Mafia*, also known as *Werewolf*, is a classic in-person party game that university and high school students play regularly. It's been popularized by hit computer games such as Town of Salem and Epic Mafia that serve hundreds of thousands of players, but where these games go *wrong* is that they replace the in-person experience with a solely online experience. We built Super Mafia as a companion app that people can use while playing Mafia with their friends in live social situations to *augment* rather than *replace* their experience. ## What it does Super Mafia replaces the role of the game's moderator, freeing up every student to play. It also allows players to play character roles which normally aren't convenient or even possible in-person, such as the *gunsmith* and *escort*. ## How we built it Super Mafia was built with Flask, Python, and MongoDB on the backend, and HTML, CSS, and Javascript on the front-end. We also spent time learning about mLab which we used to host the database. ## Challenges we ran into Our biggest challenge was making sure that our user experience would be simple-to-use and approachable for young users, while still accommodating of the extra features we built. ## Accomplishments that we're proud of We survived the deadly combo of a cold night and the 5th floor air conditioning. ## What we learned How much sleeping during hackathons actually improves your focus...lol ## What's next for Super Mafia * Additional roles (fool, oracle, miller, etc) including 3rd party roles. A full list of potential roles can be found [here](https://epicmafia.com/role) * Customization options (length of time/day) * Last words/wills * Animations and illustrations
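A small sketch of the moderator duty that Super Mafia takes over: dealing out roles to everyone in a group. The role list and ratios here are illustrative, not the app's actual configuration:

```python
import random

def assign_roles(players: list[str]) -> dict[str, str]:
    """Randomly assign Mafia roles so no human moderator has to sit out."""
    n = len(players)
    roles = ["mafia"] * max(1, n // 4)                             # roughly a quarter of the group
    roles += ["gunsmith", "escort"][: max(0, n - len(roles) - 1)]  # special roles if there's room
    roles += ["villager"] * (n - len(roles))                       # everyone else is a villager
    random.shuffle(roles)
    return dict(zip(players, roles))

if __name__ == "__main__":
    print(assign_roles(["Ada", "Ben", "Cho", "Dev", "Eli", "Fay"]))
```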
## Inspiration
E-cigarette use, specifically Juuling, has become an increasing public health concern among young adults and teenagers over the past few years. While e-cigarettes are often viewed as a safe alternative to traditional tobacco cigarettes, e-cigarettes have been proven to have negative health effects on both the user and second-hand smokers, as shown in multiple CDC and Surgeon General reviewed studies. E-cigarettes also still contain the active ingredient nicotine, which is a well-known addictive drug. Yet, students across the United States on high school and college campuses continue to vape. For us, high school students, it is a common sight to see classmates skipping class and "Juul-ing" in the bathroom. The Juul is one of the most popular e-cigarettes as it has a sleek design and looks like a USB drive. This design, coupled with the fact that there is no lasting smell or detectable smoke, makes it easy for users to go undetected in the high school environment. Moreover, this results in students not receiving help for their addiction or even realizing they have an addiction. With the increasing use of e-cigarettes among millennials, a vape culture has been created, filled with vape gods performing vape porn, displaying the artistic style of their smoke creations. Users often post pictures and videos of themselves Juuling on social media platforms, specifically Instagram and Facebook. With this in mind, we set out to create a research-based solution that could identify e-cigarette users and deter them from future use, a process school administrations have attempted and failed at. Juuly the Bear was created as the mascot leading the war on teenage e-cigarette use.
## What it does
Juuly the Bear is intended to fight the growth of vape culture by creating a counter-culture that actively discourages Juuling while informing users of its dangers. It does this by using computer vision to analyze the Instagram account of an inputted user. The program flags images it detects to be of a person using an e-cigarette. If more than 40% of the images analyzed are of a person vaping, the user is classified as a "frequent e-cigarette user" as defined by a study by Jung Ah Lee (2017), and categorized as high-risk for nicotine addiction. Juuly will then automatically message the high-risk user on Facebook Messenger, informing them of their status and offering suggestions on how to cut down on their Juul use. Juuly will also provide external resources that the user can utilize.
## How I built it
We built Juuly's computer vision using the Clarify API in Python. First, we trained a machine learning model with images of e-cigarette users actively vaping. We then tested images of other vaping people to evaluate and further train the model until a sufficient accuracy level was reached. Then, we used the library to create a data scraping program for Instagram. When a username is inputted, the program gathers the most recent posts, which are then fed into the computer vision program and analyzed with the previously trained model. If more than 40% of the images are of vaping, a Facebook Messenger bot automatically messages the user with warnings and resources.
## Challenges I ran into
We ran into many challenges with implementing Juuly the Bear, especially because the technology was initially foreign to us. As high school students, we did not have a huge background in computer vision or machine learning. Initially, we had to completely learn the Clarify API and the Facebook Messenger API.
We also had a hard time finalizing the design and thinking of ways to maximize our outreach. We decided that adding a bit of humor to the design would better resonate with teenagers, the age group most likely to Juul. In addition, we were unsuccessful when trying to combine the backend Juuly program with our frontend. We initially wanted to create a fully functional website where one can enter Instagram and Facebook profiles to analyze, but when we had both the front and back ends completed, we had a hard time seamlessly integrating the two. In the end, we had to scrap the front-end in favor of a more functional backend.
## Accomplishments that I'm proud of
As a group of high school students, we were able to use many new tools that we had never encountered before. The tools described above were extremely new to us before the hackathon; however, by working with various mentors and continually striving to learn these tools, we were able to create a successful program. The most successful part of the project was creating a powerful backend that was able to detect people Juuling. By training a machine learning model with the Clarify API, we were able to reach over an 80% accuracy rate for the set of images we had, while initially we had barely any knowledge of machine learning. Another very successful part was our scraping program. This was completely new to us, and we were able to create a program that perfectly fit our application. Scraping was also a very powerful tool, and by learning how to scrape social media pages, we had a lot more data than we would have had otherwise.
## What's next for Juuly the Bear
Our immediate next step would be combining our already-designed front end with our backend. We spent a lot of time trying to understand how to do this successfully, but we ultimately just ran out of time. In the future, we would ideally partner with major social media organizations, including Facebook and Twitter, to create a large-scale implementation of Juuly. This would have a much larger impact on vape culture as people become more informed. It could have major impacts on public health and adolescent behavior/culture, and also increase quality of life for all as the number of vapers is reduced.
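The decision step at the heart of Juuly is simple once the classifier and scraper described above exist: flag the account if at least 40% of its recent posts are classified as vaping. A sketch of that step, with the image classifier left as a placeholder for the trained model:

```python
def classify_image(image_url: str) -> bool:
    """Placeholder for the trained model: True if the image shows someone vaping."""
    raise NotImplementedError("call the trained image-recognition model here")

def is_frequent_user(recent_post_urls: list[str], threshold: float = 0.40) -> bool:
    """Flag an account as high-risk if enough of its recent posts are vaping images."""
    if not recent_post_urls:
        return False
    flagged = sum(1 for url in recent_post_urls if classify_image(url))
    return flagged / len(recent_post_urls) >= threshold

# A flagged account would then be messaged through the Facebook Messenger bot
# with warnings and resources, as described above.
```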
## Inspiration
The majority of cleaning products in the United States contain harmful chemicals. Although many products pass EPA regulations, it is well known that many products still contain chemicals that can cause rashes, asthma, allergic reactions, and even cancer. It is important that the public has easy access to information about the chemicals that may be harmful to them as well as the environment.
## What it does
Our app allows users to scan a product's ingredient label and retrieves information on which ingredients to avoid for the good of the environment as well as their own health.
## How we built it
We used Xcode and Swift to design the iOS app. We then used Vision for iOS to detect text from a still image. We used a Python scraper to collect data from ewg.org, providing the product's ingredients as well as the side effects of certain harmful additives.
## Challenges we ran into
We had very limited experience in developing an iOS app for the idea we had, but we wanted to challenge ourselves. The challenges on the front end were incorporating the camera feature and the text detector into a single app, as well as navigating the changes between the newer Swift 11 and older versions. Our backend members had difficulties incorporating databases from Microsoft/Google, but ended up using JSON.
## Accomplishments that we're proud of
We are extremely proud of pushing ourselves to do something we hadn't done before. Initially, we had some doubts about our project because of how difficult it was, but as a team we were able to help each other along the way. We're very proud of creating a single app that handles both the camera feature and optical character recognition because, as we found out, it's very complicated and error-prone. Additionally, for data scraping, even though the HTML code was not consistent, we managed to successfully scrape the necessary data from more than three thousand HTML files with a 100% success rate by taking all corner cases into consideration, and we are very proud of it.
## What we learned
Our teammates working on the front end learned how to use Xcode and Swift in under 24 hours. Our backend team members learned how to scrape data from a website for the first time as well. Together, we learned how to adjust our original expectations of the final product based on the time constraints.
## What's next for Ingredient Label Scanner
Currently our project is specific to cleaning products; however, in the future we would like to incorporate other products such as cosmetics, hair care, skin care, medicines, and food products. Additionally, we hope to present the list of ingredients in a more visual way so that users can clearly understand which ingredients are more dangerous than others.
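A rough sketch of the kind of Python scraper described above. The actual HTML structure of ewg.org isn't reproduced here, so the URL, tag, and class names are placeholders; the point is the requests-plus-parser pattern and tolerating inconsistent markup:

```python
import json
import requests
from bs4 import BeautifulSoup

def scrape_ingredient(url: str) -> dict:
    """Pull an ingredient's name and hazard notes from a product-database page."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    # Placeholder selectors: the real pages use their own (and inconsistent) markup,
    # which is why corner-case handling mattered so much in practice.
    name_tag = soup.find("h1")
    concern_tags = soup.select(".concern")

    return {
        "name": name_tag.get_text(strip=True) if name_tag else None,
        "concerns": [t.get_text(strip=True) for t in concern_tags],
    }

if __name__ == "__main__":
    records = [scrape_ingredient(u) for u in ["https://example.org/ingredient/123"]]
    with open("ingredients.json", "w") as f:
        json.dump(records, f, indent=2)  # the JSON store the app reads from
```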
## Inspiration
Irresponsible substance use causes fatalities due to accidents and related incidents. Driving under the influence is one of the top 5 leading causes of death in young adults. 37 people die every day from a DUI-related accident--that's one person every 38 minutes. We aspire to build a convenient and accessible app that people can use to accurately determine whether they are in good condition to drive.
## What it does
BACScanner is a computer vision mobile app that compares iris–pupil ratio before and after substance intake for safer usage.
## How we built it
We built the mobile app with SwiftUI and the CV model using PyTorch and OpenCV. Our machine learning model was linked to the frontend by deploying a Flask API.
## Challenges we ran into
We were originally hoping to figure out your sobriety based on one video of your eyes. However, we found that a sober image was needed as a control to compare against, so we had to amend our app to support taking a "before" image and an "after" image and comparing the two.
## Accomplishments that we're proud of
We implemented eye tracking and the segmentation neural network with 92% accuracy. We also made an elegant UI for the mobile app.
## What we learned
We learned about building full-stack apps that involve ML. Prior to this, we didn't know how to attach an ML model to a frontend app. We thus learned how to deploy our ML model to an API and link it to our front end using Flask.
## What's next for BACScanner
We hope to add better recognition for narcotic usage, as right now our app can only accurately detect BAC.
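The app itself uses a segmentation neural network for the eye; as a much simpler stand-in that illustrates the before/after ratio idea, the sketch below estimates iris and pupil radii with OpenCV's Hough circle transform. The thresholds and radius ranges are guesses, not the project's real parameters.

```python
# Simplified stand-in for the segmentation model: estimate pupil/iris radii
# with Hough circles and compare the ratio between a sober "before" photo
# and a later "after" photo. Parameter values are illustrative guesses.
import cv2

def pupil_iris_ratio(image_path: str) -> float:
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                               param1=80, param2=30, minRadius=10, maxRadius=150)
    if circles is None or circles.shape[1] < 2:
        raise ValueError("could not find both iris and pupil")
    radii = sorted(c[2] for c in circles[0])
    return radii[0] / radii[-1]          # smallest circle ~ pupil, largest ~ iris

def dilation_change(before_path: str, after_path: str) -> float:
    return pupil_iris_ratio(after_path) - pupil_iris_ratio(before_path)
```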
## Inspiration
While looking for genuine problems that we could solve, it came to our attention that recycling is actually much harder than it should be. For example, when you go to a place like Starbucks and are presented with the options of composting, recycling, or throwing away your empty coffee, it can be confusing, and for many people it can lead to selecting the wrong option.
## What it does
EcoLens uses a cloud-based machine learning webstream to scan an item and tell the user what category of item they scanned, providing them with a short description of the object, updating their overall count of recyclable vs. unrecyclable items consumed, and updating the number of items they consumed in that specific category (e.g. the number of water bottles consumed).
## How we built it
This project consists of both a front end and a back end. The backend of this project was created using Java Spring and JavaScript. JavaScript was used in the backend in order to utilize Roboflow and Ultralytics, which allowed us to display the visuals from Roboflow on the website for the user to see. Java Spring was used in the backend for creating a database that consisted of all of the scanned items and tracked them as they were altered (i.e. another item was scanned or the user decided to dump the data). The front end of this project was built entirely through HTML, CSS, and JavaScript. HTML and CSS were used in the front end to display text in a format specific to the User Interface, and JavaScript was used in order to implement the functions (buttons) displayed in the User Interface.
## Challenges we ran into
This project was particularly difficult for all of us because most of our team consists of beginners, and there were multiple parts during the implementation of our application that no one was truly comfortable with. For example, integrating camera support into our website was particularly difficult as none of our members had experience with JavaScript, and none of us had fully fledged web development experience. Another notable challenge came up in the backend of our project when attempting to delete the user history of items used while simultaneously adding them to a larger, “trash can”-like database. From a non-technical perspective, our group also struggled to come to an agreement on how to make our implementation truly useful and practical. Originally we thought to have hardware that would physically sort the items, but we concluded that this was out of our skill range and also potentially less sustainable than simply telling the user what to do with their item digitally.
## Accomplishments that we're proud of
Although we can acknowledge that there are many improvements that could be made, such as having a cleaner UI, optimized (fast) usage of the camera scanner, or even better responses for when an item is accidentally scanned, we’re all collectively proud that we came together to find an idea that allowed each of us to not only have a positive impact on something we cared about but to also learn and practice things that we actually enjoy doing.
## What we learned
## What's next for EcoLens
The most effective next course of action for EcoLens is to assess whether there really is demand for this product and what people think about it. Would most people genuinely use it if it were fully shipped? Answering these questions would give us grounds to move forward with our project.
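The item history described above lives in the Java Spring backend; purely as a language-agnostic illustration of the counting logic, here is a small Python sketch with an invented label-to-category table and class names.

```python
# Illustrative tracker for scanned items; the label-to-category mapping and
# class names are invented, and the real project stores this in Java Spring.
from collections import Counter

CATEGORY = {"water bottle": "recyclable", "coffee cup": "compostable",
            "chip bag": "landfill"}

class WasteTracker:
    def __init__(self):
        self.by_item = Counter()      # e.g. number of water bottles consumed
        self.by_category = Counter()  # recyclable vs. unrecyclable totals

    def record_scan(self, label: str) -> str:
        category = CATEGORY.get(label, "unknown")
        self.by_item[label] += 1
        self.by_category[category] += 1
        return category

    def dump(self):                   # "empty the trash can"
        self.by_item.clear()
        self.by_category.clear()
```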
## Inspiration
All of our parents like to recycle plastic bottles and cans to make some extra money, but we always thought it was a hassle. After joining this competition and seeing sustainability as one of the prize tracks, we realized it would be interesting to create something that makes the recycling process more engaging and incentivized on a larger scale.
## What it does
We gamify recycling. People can either compete against friends to see who recycles the most, or compete against others for a prize pool given by sponsors (similar to how Kaggle competitions work). To verify that a person recycles, there's a camera section that uses an object detection model to check if a valid bottle and recycling bin are in sight.
## How we built it
We split the project into 3 major parts: the app itself, the object detection model, and another ML model that predicted how trash in a city would move so users can move with it to pick up the most trash. We implemented an object detection model, for which we created our own dataset of cans and bottles at PennApps with pictures around the building, and used Roboflow to create the dataset. Our app was created using Swift, and it was inspired by a previous GitHub project that deployed a model of the same type as ours onto iOS. The UI was designed using Figma. The ML model that predicted the movement of trash concentration was a CNN with a differential equation as part of its loss function, which gave better results than just the vanilla loss functions.
## Challenges we ran into
None of us had coded an app before, so it was difficult doing anything with Swift. It actually took us 2 hours just to get things set up and get the build running, so this was for sure the hardest part of the project. We also ran into problems finding good datasets for both of the models, as they were either poor quality or didn't have the aspects that we wanted.
## Accomplishments that we're proud of
Everyone on our team specializes in backend, so with limited initial experience in frontend, we're especially proud of the app we've created; it's our first time working on such a project. Integrating all the components posed significant challenges too. Getting everything to work seamlessly, including the CNN model and object detection camera within the same app, required countless attempts. Despite the challenges, we've learned a great amount throughout the process and are incredibly proud of what we've achieved so far.
## What we learned
How to create an iOS app, find datasets, and integrate models into apps.
## What's next for EcoRush
A possible quality change to the app would be to find a way to differentiate bottles from each other so people can't "hack" the system. We are also looking for more ways to incentivize people to recycle litter they see every day other than with money. After all, our planet would be a whole lot greener if every citizen of Earth did just a small part!
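The write-up mentions a CNN whose loss function includes a differential equation but does not give the equation or the data layout, so the sketch below is only an assumption: it treats trash concentration as a (batch, time, space) grid and penalizes the finite-difference residual of a 1-D diffusion equation, du/dt = D * d2u/dx2, alongside the ordinary data loss.

```python
# Hedged sketch of a "physics-informed" loss in PyTorch; the diffusion model,
# grid layout, and weights are assumptions, not the team's actual choices.
import torch
import torch.nn as nn

class DiffusionInformedLoss(nn.Module):
    """MSE data loss plus a penalty for violating du/dt = D * d2u/dx2."""
    def __init__(self, diffusivity=0.1, weight=0.5, dx=1.0, dt=1.0):
        super().__init__()
        self.mse = nn.MSELoss()
        self.D, self.weight, self.dx, self.dt = diffusivity, weight, dx, dt

    def forward(self, pred, target):
        # pred, target: (batch, time, space) grids of trash concentration
        data_loss = self.mse(pred, target)
        du_dt = (pred[:, 1:, 1:-1] - pred[:, :-1, 1:-1]) / self.dt
        d2u_dx2 = (pred[:, :-1, 2:] - 2 * pred[:, :-1, 1:-1]
                   + pred[:, :-1, :-2]) / self.dx ** 2
        residual = du_dt - self.D * d2u_dx2
        return data_loss + self.weight * residual.pow(2).mean()

loss_fn = DiffusionInformedLoss()
pred = torch.rand(4, 10, 32, requires_grad=True)   # 4 samples, 10 steps, 32 cells
target = torch.rand(4, 10, 32)
loss = loss_fn(pred, target)
```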
## Inspiration
Let’s take you through a simple encounter between a recruiter and an aspiring student looking for a job during a career fair. The student greets the recruiter eagerly after having to wait in a 45-minute line and hands him his beautifully crafted paper resume. The recruiter, having been talking to thousands of students, knows that his time is short and tries to skim the resume rapidly, inevitably skipping important skills that the student brings to the table. In the meantime, the clock has been ticking, and while the recruiter is still reading non-relevant parts of the resume, the student waits, blankly staring at the recruiter. The recruiter finally looks up, only able to exchange a few words of acknowledgement and a good luck before having to move on to the next student. And the resume? It ends up tossed in the back of a bin, jumbled together with thousands of other resumes. The clear bottleneck here is the use of the paper resume. Instead of having the recruiter stare at a thousand-word page crammed with everything someone has done with their life, it would make much more sense to have the student show his achievements in a quick, easy way and have them elegantly displayed for the recruiter. With Reko, both recruiters and students will be geared for an easy, digital way to transfer information.
## What it does
By allowing employers and job-seekers to connect in a secure and productive manner, Reko calls forward a new era of stress-free, peer-to-peer-style data transfer. The magic of Reko is in its simplicity. Simply walk up to another Reko user, scan their QR code (or have them scan yours!), and instantly enjoy a UX-rich file transfer channel between your two devices. During PennApps, we set out to demonstrate the power of this technology in what is mainly still a paper-based ecosystem: career fairs. With Reko, employers no longer need to peddle countless informational pamphlets, and students will never again have to rush to print out countless resume copies before a career fair. Not only can this save a large amount of paper, but it also allows students to freely choose which aspects of their resumes they want to accentuate. Reko also allows employers to interact with the digital resume cards sent to them by letting them score each card on a scale of 1 - 100. Using this data alongside machine learning, Reko then provides the recruiter with an estimated candidate match percentage which can be used to streamline the hiring process. Reko also serves to help students by providing them a recruiting dashboard. This dashboard can be used to understand recruiter impressions and aims to help students develop better candidate profiles and resumes.
## How we built it
### Front-End // Swift
The frontend of Reko focuses on delivering a phenomenal user experience through an exceptional user interface and efficient performance. We utilized native frameworks and a few CocoaPods to provide a novel, intriguing experience. The QR code exchange handshake protocol is accomplished through the very powerful VisionKit. The MVVM design pattern was implemented and protocols were introduced to make the most out of the information cards. The hardest part was the WebSocket implementation of the creative exchange of information cards between the student and the interviewer.
### Back-End // Node.js
The backend of Reko focuses on handling websocket sessions, establishing the connection between the front-end and our machine learning service, and managing the central MongoDB.
Every time a new ‘user-pair’ is instantiated via a QR code scan, the backend stores the two unique socket machine IDs as ‘partners’, and by doing so is able to handle which events are sent to one or both partners. By also handling the MongoDB, Reko’s backend is able to relate these unique socket IDs to stored user account data. In turn, this allows Reko to take advantage of data sets to provide the user with valuable, unique data analysis. Using the User ID as context, Reko’s backend is able to POST to our self-contained machine learning web service. Reko’s ML web service responds with an assortment of statistical data, which is then emitted to the front-end via websocket for display to the user.
### Machine Learning // Python
In order to properly integrate machine learning into our product, we had to build a self-contained web application. This container application was built on a virtual environment with a REST API layer and the Django framework. We chose these technologies because they are scalable and easy to deploy to the cloud. With the Django framework, we used POST to easily communicate with the Node backend and thus streamline the overall workflow via abstraction. We were then able to use Python to train a machine learning model based on data sent from the Node backend. After connecting to MongoDB with the pymongo library, we were able to prepare training and testing data sets. We used the pandas Python library to develop DataFrames for each data set and built a machine learning model using algorithms from the scikit-learn library. We tested various algorithms with our dataset and finalized a model that utilized the logistic regression algorithm. Using these data sets and the machine learning model, our service can predict the percentage a candidate matches a recruiter’s job profile. The final container application is able to receive data and return results in under 1 second and is over 90% accurate.
## Challenges we ran into
* Finding a realistic data set to train our machine learning model
* Deploying our backend to the cloud
* Configuring the container web application
* Properly populating our MongoDB
* Finding the best web service for our use case
* Finding the optimal machine learning model for our data sets
## Accomplishments that we're proud of
* UI/UX Design
* Websocket implementation
* Machine Learning integration
* Scalably structured database
* Self-contained Django web application
## What we learned
* Deploying container applications on the cloud
* Using MongoDB with Django
* Data Modeling/Analysis for our specific use case
* Good practices in structuring a MongoDB database as opposed to a SQL database
* How to successfully integrate three software layers to generate a consistent and fluid final product
* Strategies for linking iOS devices in a peer-to-peer fashion via websockets
## What's next for reko
* Our vision for Reko is an app that allows general, easy-to-use data transfer between two people who may be complete strangers.
* We hope to move from QR codes to NFC to allow for even easier data transfer and thus a better user experience.
* We believe that a data transfer system such as the one Reko showcases is the future of in-person data transfer due to its “no-username” operation. This system allows individuals to keep their anonymity if desired, and thus protects their privacy.
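As a rough illustration of the match-percentage model described in the Machine Learning section: the actual features and MongoDB schema are not shown in this write-up, so the column names below (card_score, years_experience, skill_overlap, hired) are invented placeholders.

```python
# Hedged sketch of a logistic-regression match model; the feature columns and
# label are invented placeholders, not Reko's actual schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["card_score", "years_experience", "skill_overlap"]

def train_match_model(records: list[dict]) -> LogisticRegression:
    df = pd.DataFrame(records)                     # one row per scored resume card
    return LogisticRegression(max_iter=1000).fit(df[FEATURES], df["hired"])

def match_percentage(model: LogisticRegression, candidate: dict) -> float:
    X = pd.DataFrame([candidate])[FEATURES]
    return 100 * model.predict_proba(X)[0, 1]      # probability of a match, as a percent
```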
## Inspiration
Business cards haven't changed in years, but cARd can change this! Inspired by the rise of augmented reality applications, we see potential for creative networking. Next time you meet someone at a conference, a career fair, etc., simply scan their business card with your phone and watch their entire online portfolio enter the world! The business card will be saved, and the experience will be unforgettable.
## What it does
cARd is an iOS application that allows a user to scan any business card to bring augmented reality content into the world. Using OpenCV for image rectification and OCR (optical character recognition) with the Google Vision API, we can extract both the business card and the text on it. Feeding the extracted image back to the iOS app, ARKit can effectively track our "target" image. Furthermore, we use the OCR result to grab information about the business card owner in real time! Using selenium, we effectively gather information from Google and LinkedIn about the individual. When returned to the iOS app, the user is presented with information populated around the business card in augmented reality!
## How I built it
Some of the core technologies that go into this project include the following:
* ARKit for augmented reality in iOS
* Flask for the backend server
* selenium for collecting data about the business card owner on the web in real time
* OpenCV to find the rectangular business card in the image and use a homography to map it into a rectangle for AR tracking
* Google Vision API for optical character recognition (OCR)
* Text to speech
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for cARd
Get cARd on the App Store for everyone to use! Stay organized and have fun while networking!
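The OpenCV rectification step mentioned above (find the card, then map it onto a rectangle with a homography) could look roughly like the following; the output size and Canny thresholds are arbitrary choices for illustration, not values from the project.

```python
# Rough sketch of card rectification: find the largest 4-point contour,
# order its corners, and warp it to a frontal rectangle for OCR/AR tracking.
import cv2
import numpy as np

def order_corners(pts):
    # top-left, top-right, bottom-right, bottom-left via coordinate sums/diffs
    s, d = pts.sum(axis=1), np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def rectify_card(image, out_w=800, out_h=450):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 75, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                      # first quadrilateral = the card
            src = order_corners(approx.reshape(4, 2).astype(np.float32))
            dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
            M = cv2.getPerspectiveTransform(src, dst)
            return cv2.warpPerspective(image, M, (out_w, out_h))
    raise ValueError("no card-like quadrilateral found")
```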
## Inspiration
Being students in a technical field, we all have to write and submit resumes and CVs on a daily basis. We wanted to incorporate multiple unsupervised machine learning algorithms to allow users to view their resumes from different lenses, all the while avoiding the bias introduced by the labeling of supervised machine learning.
## What it does
The app accepts a resume in .pdf or image format as well as a prompt describing the target job. We wanted to judge the resume based on layout and content. Layout encapsulates font, color, etc., and the coordination of such features. Content encapsulates semantic clustering for relevance to the target job and preventing repeated mentions.
### Optimal Experience Selection
Suppose you are applying for a job and you want to mention five experiences, but only have room for three. cv.ai will compare the experience section in your CV with the job posting's requirements and determine the three most relevant experiences you should keep.
### Text/Space Analysis
Many professionals do not use the space on their resume effectively. Our text/space analysis feature determines the ratio of characters to resume space in each section of your resume and provides insights and suggestions about how you could improve your use of space.
### Word Analysis
This feature analyzes each bullet point of a section and highlights areas where redundant words can be eliminated, freeing up more resume space and allowing for a cleaner representation of the user.
## How we built it
We used a word-encoder TensorFlow model to provide insights about semantic similarity between two words, phrases or sentences. We created a REST API with Flask for querying the TF model. Our front end uses Angular to deliver a clean, friendly user interface.
## Challenges we ran into
We are a team of two new hackers and two seasoned hackers. We ran into problems with deploying the TensorFlow model, as it was initially available only in a restricted Colab environment. To resolve this issue, we built a RESTful API that allowed us to process user data through the TensorFlow model.
## Accomplishments that we're proud of
We spent a lot of time planning and defining our problem and working out the layers of abstraction that led to actual processes with a real, concrete TensorFlow model, which is arguably the hardest part of creating a useful AI application.
## What we learned
* Deploying Flask as a RESTful API to the GCP Kubernetes platform
* Using most of the Google Cloud Vision services
## What's next for cv.ai
We plan on adding a few more features and making cv.ai into a real web-based tool that working professionals can use to improve their resumes or CVs. Furthermore, we will extend our application to include LinkedIn analysis between a user's LinkedIn profile and a chosen job posting on LinkedIn.
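The write-up does not name the specific word-encoder model, so the sketch below assumes the Universal Sentence Encoder from TensorFlow Hub behind the Flask REST API; whichever encoder the team actually served could be swapped in.

```python
# Minimal similarity endpoint; the TF Hub model choice (Universal Sentence
# Encoder) is an assumption standing in for the team's word-encoder model.
import numpy as np
import tensorflow_hub as hub
from flask import Flask, jsonify, request

app = Flask(__name__)
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

@app.post("/similarity")
def similarity():
    data = request.get_json()
    a, b = embed([data["experience"], data["job_posting"]]).numpy()
    score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return jsonify({"similarity": score})   # cosine similarity in [-1, 1]

if __name__ == "__main__":
    app.run(port=5000)
```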
## Inspiration Tinder but Volunteering ## What it does Connects people to volunteering organizations. Makes volunteering fun, easy and social ## How we built it react for web and react native ## Challenges we ran into So MANY ## Accomplishments that we're proud of Getting a really solid idea and a decent UI ## What we learned SO MUCH ## What's next for hackMIT
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution.
## What it does
It helps developers find projects to work on, and helps project leaders find group members. By using data from GitHub commits, it can determine what kind of projects a person is suitable for.
## How we built it
We decided on building an app for the web, then chose a GraphQL, React, Redux tech stack.
## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble. The limit on API calls made it so we couldn't get all the data we needed. The authentication was hard to implement since we had to try a number of ways to get it to work. The last challenge was determining how to make a relationship between the users and the projects they could be paired up with.
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database and the authentication are all ready to show.
## What we learned
We learned that using APIs can be challenging, as each one presents its own unique issues.
## What's next for Hackr\_matchr
Scaling up is next: having it used for more kinds of projects, with more robust matching algorithms and higher user capacity.
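One way to build a commit-based profile like the one described above is to aggregate language byte counts from a user's public repos via the GitHub REST API (the same API whose rate limits caused trouble). The aggregation below is only a simplification of whatever matching signal the team actually used.

```python
# Sketch: aggregate a GitHub user's language bytes across public repos.
# Unauthenticated requests hit the rate limits mentioned above quickly,
# so pass a personal access token where possible.
import requests
from collections import Counter

def language_profile(username: str, token: str | None = None) -> Counter:
    headers = {"Authorization": f"token {token}"} if token else {}
    repos = requests.get(f"https://api.github.com/users/{username}/repos",
                         headers=headers, timeout=10).json()
    profile = Counter()
    for repo in repos:
        langs = requests.get(repo["languages_url"], headers=headers, timeout=10).json()
        profile.update(langs)              # bytes of code per language
    return profile
```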
## Inspiration: Many people that we know want to get more involved in the community but don't have the time for regular commitments. Furthermore, many volunteer projects require an extensive application, and applications for different organizations vary so it can be a time-consuming and discouraging process. We wanted to find a way to remove these boundaries by streamlining the volunteering process so that people can get involved, doing one-time projects without needing to apply every time. ## What it does It is a website aimed at streamlining volunteering hiring and application processes. There are 2 main users: volunteer organizations, and volunteers. Volunteers will sign-up, registering preset documents, waivers, etc. These will then qualify them to volunteer at any of the projects posted by organizations. Organizations can post event dates, locations, etc. Then volunteers can sign-up with the touch of a button. ## How I built it We used node.js, express, and MySQL for the backend. We used bootstrap for the front end UI design and google APIs for some of the functionality. Our team divided the work based on our strengths and interests. ## Challenges I ran into We ran into problems with integrating MongoDB and the Mongo Daemon so we had to switch to MySQL to run our database. MySQL querying and set-up had a learning curve that was very discouraging, but we were able to gain the necessary skills and knowledge to use it. We tried to set up a RESTful API, but ultimately, we decided there was not enough time/resources to efficiently execute it, as there were other tasks that were more realistic. ## Accomplishments that I'm proud of We are proud to all have completed our first 24hr hackathon. Throughout this process, we learned to brainstorm as a team, create a workflow, communicate our progress/ideas, and all acquired new skills. We are proud that we have something that is cohesive functioning components and to have completed our first non-academic collaborative project. We all ventured outside of our comfort zones, using a language that we weren't familiar with. ## What I learned This experience has taught us a lot about working in a team and communicating with other people. There is so much we can learn from our peers. Skillwise, many of our members gained experience in node.js, MySQL, endpoints, embedded javascript, etc. It taught us a lot about patience and persevering because oftentimes, problems could seem unsolvable but yet we still were able to solve them with time and effort. ## What's next for NWHacks2020 We are all very proud of what we have accomplished and would like to continue this project, even though the hackathon is over. The skills we have all gained are sure to be useful and our team has made this a very memorable experience.
## Inspiration
We often find downtimes where we want to do some light exercise. Throwing a ball is perfect, but it's not easy to find another person to throw the ball to. We realized that we wanted to make a machine that can substitute for a partner... hence the creation of *PitchPartner*!
## What it does
PitchPartner is a motorized machine designed to catch and return a tennis ball automatically. When receiving the ball, the machine keeps a counter of how many throws were made. It then feeds the ball to the bike-wheel propeller to be launched back! PitchPartner is portable, making it great to use out in an open field!
## How we built it
The launcher is made with a bike tire connected to a drill that provides high rpm and torque. It's housed in a wooden frame that was nailed together. The catcher is a net (webbing) augmented with a Raspberry Pi and sensors, namely an ultrasonic sensor for speed detection and a button for the counter.
## Challenges we ran into
We initially wanted to make a frisbee thrower and catcher, but due to the complexity arising from the asymmetrical nature of the frisbee (top versus bottom), we changed the *trajectory* of our project to use the tennis ball! Because our machine has fast-moving parts, we weren't comfortable with the possibility of the frisbee being fed into the launcher upside down and causing safety problems. Some of us are relative beginners at coding for the Raspberry Pi (Python), so we spent a lot of time getting the counter displayed on the 7-segment display. We were able to make it work once the programmer described in detail what type of problem he was encountering. We were able to draw on knowledge from another programming language, C, to fix the problem.
## Accomplishments that we're proud of
This was our first online hackathon! Our team was able to distribute work even though the hardware was built in one place. We played to our strengths and made quick decisions, enabling us to finish PitchPartner on time!
## What we learned
Our team members come from two different cities! One thing we learned is to bounce our ideas and problems off of each other because we all have different strengths and weaknesses. Clear explanation of the problems we faced helped us move closer to finding a solution.
## What's next for PitchPartner
Installing a brushless DC motor could make the design even more compact and stable, not to mention enable true autonomy of the machine. We are also interested in having the machine change directions, making it capable of throwing left and right. Lastly, because we bought a used bike to extract its wheel, we think it would be very cool to incorporate the rest of the bike body into a steering system for PitchPartner!
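For a sense of the catcher electronics, here is a minimal Raspberry Pi sketch: a button press increments the throw counter, and an HC-SR04-style ultrasonic reading times the ball's approach. The GPIO pin numbers are placeholders, and the 7-segment display driver is left out.

```python
# Minimal catcher sketch: count button presses and read an ultrasonic sensor.
# Pin numbers are placeholders; driving the 7-segment display is omitted.
import time
import RPi.GPIO as GPIO

BUTTON, TRIG, ECHO = 17, 23, 24          # placeholder BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

catches = 0

def measure_distance_cm() -> float:
    # One ultrasonic ping; polling this over time gives an approach speed.
    GPIO.output(TRIG, True); time.sleep(1e-5); GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2     # speed of sound, round trip

def on_catch(channel):
    global catches
    catches += 1
    print(f"catches: {catches}")         # the real build shows this on the 7-segment display

GPIO.add_event_detect(BUTTON, GPIO.FALLING, callback=on_catch, bouncetime=300)
```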
## **Inspiration**
The president of the US has proposed a target of achieving *80 percent* clean electricity by *2030*, and in today's data-driven era of escalating energy demands, the imperative for renewable resources has never been more pressing. As global energy consumption continues to surge, the reliance on AI-powered, sustainable solutions becomes paramount. EcoNet.ai leverages AI to optimize energy utilization, reducing costs, enhancing efficiency, and mitigating environmental impact. Using predictive analytics, our platform promotes sustainable development globally, paving the way for a resilient and eco-friendly future.
## **What it does**
EcoNet.ai is a cutting-edge platform using AI-driven geospatial analysis and deep learning to optimize renewable energy deployments. By *identifying optimal locations for solar arrays and wind farms, and allowing users to easily analyze past and future climate data*, it reduces energy costs and supports data-driven decision-making for governments and corporations. This leads to significant cost savings, improved sustainability, and economic growth. The system's neural networks and real-time data processing provide adaptive, future-proof energy solutions.
## **How we built it**
We built EcoNet.ai using a suite of AWS services to optimize renewable energy installations: **AWS Bedrock**, **DynamoDB** for managing complex geospatial data, and **Amazon S3** for storing our vast datasets. We used PyTorch for time series predictions. This integration of AWS technologies ensured scalability, reliability, and intelligent resource management, enabling EcoNet.ai to deliver actionable AI-driven insights for sustainable energy initiatives.
## **Challenges we ran into**
Developing EcoNet.ai involved several challenges. Efficient API querying through DynamoDB required iterative optimization and predictive caching. Hyperspectral satellite imagery was impractical due to long processing times, and incomplete government datasets hindered reliable data sourcing for our models.
## **Accomplishments that we're proud of**
Our team engineered a cutting-edge AI model using machine learning to optimize renewable energy placement by analyzing factors like terrain, local energy demands, and grid infrastructure. Our data-driven solution leads in the field, and we are proud that it can revolutionize energy accessibility and sustainability, potentially transforming communities globally through intelligent resource allocation and adaptive planning.
## What we learned
Through this project, we gained invaluable experience applying machine learning and AI to real-world sustainability challenges. We deepened our understanding of renewable energy systems and their optimal placement, enhancing our skills in big data analytics, predictive modeling, and large-scale data processing for next-generation energy solutions.
## What's next for EcoNet.ai
EcoNet.ai's future plans include:
* *Leveraging AI and ML for real-time, hyper-localized insights in renewable energy planning.*
* *Expanding globally.*
* *Improving data collection processes.*
* *Using computer vision to analyze satellite imagery for precise land-use planning.*
These solutions will help governments, corporations, and communities make informed, AI-assisted decisions, speeding up the global transition to sustainable energy sources.
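The write-up says PyTorch was used for time series predictions but does not describe the model, so this is only a minimal sketch assuming a small LSTM that forecasts the next step of a multi-feature climate/energy series from a sliding window.

```python
# Hedged sketch of a time-series forecaster; the architecture, window length,
# and feature count are assumptions, not EcoNet.ai's actual model.
import torch
import torch.nn as nn

class ClimateForecaster(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, window):            # window: (batch, timesteps, n_features)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])      # predict the next timestep

model = ClimateForecaster()
x = torch.randn(8, 30, 4)                 # e.g. 8 sites, 30 past days, 4 features
next_step = model(x)                       # shape (8, 4)
```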
## Inspiration
We were inspired to create a health-based solution (despite focusing on sustainability) due to the recent trend of healthcare digitization, spawning from the COVID-19 pandemic and progressing rapidly with increased commercial usage of AI. We did, however, want to create a meaningful solution with a large enough impact that we could go through the hackathon motivated and with a clear goal in mind. After a few days of research and project discussions/refinement sessions, we finally came up with a solution that we felt was not only implementable (with our current skills), but also dealt with a pressing environmental/human interest problem.
## What it does
WasteWizard is designed to be used by two types of hospital users: custodians and admins. At the custodian user level, alerts are sent based on timer countdowns to check on wastebin statuses in hospital rooms. When room waste bins need to be emptied, there is an option to select the type of waste and the current room to locate the nearest large bin. Wastebin status (for that room) is then updated to Empty. On the admin side, there is a dashboard to track custodian wastebin cleaning logs (by time, location, and type of waste), large bin status, and overall aggregate data to analyze their waste output. Finally, there is also an option for the admin to empty large garbage bins (once collected by partnering waste management companies) to update their status.
## How we built it
The UI/UX designers employed Figma, keeping user intuitiveness in mind. Meanwhile, the backend was developed using Node.js and Express.js, employing JavaScript for server-side scripting. MongoDB served as the database, and Mongoose simplified interactions with MongoDB by defining schemas. A crucial aspect of our project was using the MappedIn SDK for indoor navigation. For authentication and authorization, the developers used Auth0, which greatly enhanced security. The development workflow followed agile principles, incorporating version control for collaboration. Thorough testing at both front-end and back-end levels ensured functionality and security. The final deployment on Azure optimized performance and scalability.
## Challenges we ran into
There were a few challenges we had to work through:
* MappedIn SDK integration/embedding: we used a front-end system that, while technically compatible, was not the best choice to use with the MappedIn SDK, so we ended up needing to debug some rather interesting issues
* Front-end development, in general, was not any of our strong suits, so much of that phase of the project required us to switch between CSS tutorial tabs and our coding screens, which led to us taking more time than expected to finish that up
* Auth0 token issues related to redirecting users and logging out users after the end of a session + redirecting them to the correct routes
* Needing to pare down our project idea to limit the scope to something we could feasibly build in 24 hours while making sure we could defend it in a project pitch as an impactful idea with potential future growth
## Accomplishments that we're proud of
In general, we're all quite proud of essentially full-stack developing a working software project in 24 hours. We're also pretty proud of our project idea, as our initial instinct was to pick broad, flashy projects that were either fairly generic or completely unbuildable in the given time frame.
We managed to set realistic goals for ourselves and we feel that our project idea is niche and applicable enough to have potential outside of a hackathon environment. Finally, we're proud of our front-end build. As mentioned earlier, none of us are especially well-versed in front-end, so having our system be able to speak to its user (and have it look good) is a major success in our books. ## What we learned We learned we suck at CSS! We also learned good project time management/task allocation and to plan for the worst as we were quite optimistic about how long it would take us to finish the project, but ended up needing much more time to troubleshoot and deal with our weak points. Furthermore, I think we all learned new skills in our development streams, as we aimed to integrate as many hackathon-featured technologies as possible. There was also an incredible amount of research that went into coming up with this project idea and defining our niche, so I think we all learned something new about biomedical waste management. ## What's next for WasteWizard As we worked through our scope, we had to cut out a few ideas to make sure we had a reasonable project within the given time frame and set those aside for future implementation. Here are some of those ideas: * more accurate trash empty scheduling based on data aggregation + predictive modelling * methods of monitoring waste bin status through weight sensors * integration into hospital inventory/ordering databases As a note, this can be adapted to any biomedical waste-producing environment, not just hospitals (such as labs and private practice clinics).
# Pitch
Every time you throw trash in the recycling, you either spoil an entire bin of recyclables, or city workers and multi-million dollar machines separate the trash out for you. We want to create a much more efficient way to sort garbage that also trains people to sort correctly and provides meaningful data on sorting statistics. Our technology uses image recognition to identify the waste and opens the lid of the correct bin. When the image recognizer does not recognize the item, it opens all bins and trusts the user to deposit it. It also records the number of times a lid has been opened to estimate what and how much is in each bin. The statistics would have many applications. Since we display the proportion of all garbage in each bin, it will motivate people to compost and recycle more. It will also allow cities to recognize when a bin is full based on how much it has collected, allowing garbage trucks to optimize their routes. In addition, information about what items are commonly thrown into the trash would be useful to materials engineers who can design recyclable versions of those products. Future improvements include improved speed and reliability, IOTA blockchain integration, facial recognition for personalized statistics, and automatic self-learning.
# How it works
1. The Raspberry Pi uses a webcam and OpenCV to look for objects
2. When an object is detected, the Pi sends the image to the server
3. The server sends the image to cloud image recognition services (Amazon Rekognition & Microsoft Azure) and determines which bin should be opened
4. The server stores information and statistics in a database
5. The Raspberry Pi gets the response back from the server and opens the appropriate bin
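Step 3 is not spelled out beyond naming the services, so here is one plausible shape for the Rekognition half in Python with boto3; the label-to-bin table is a simplified guess, and the parallel Azure call is omitted.

```python
# Sketch of step 3 using Amazon Rekognition; the label-to-bin mapping is a
# simplified guess, and the Microsoft Azure call is left out.
import boto3

BIN_FOR_LABEL = {"Bottle": "recycling", "Can": "recycling",
                 "Banana": "compost", "Food": "compost"}

rekognition = boto3.client("rekognition")

def choose_bin(image_bytes: bytes) -> str:
    resp = rekognition.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=10)
    for label in resp["Labels"]:
        if label["Name"] in BIN_FOR_LABEL:
            return BIN_FOR_LABEL[label["Name"]]
    return "all"       # unrecognized item: open every lid and trust the user
```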
## Inspiration
Canadians produce more garbage per capita than any other country on earth, with the United States ranking third in the world. In fact, Canadians generate approximately 31 million tonnes of garbage a year. According to the Environmental Protection Agency, 75% of this waste is recyclable. Yet, only 30% of it is recycled. In order to increase this recycling rate and reduce our environmental impact, we were inspired to propose a solution through automating waste sorting.
## What it does
Our vision takes control away from the user and lets the machine do the thinking when it comes to waste disposal! By showing our app a type of waste through the webcam, we detect and classify the category of waste as either recyclable, compost, or landfill. From there, the appropriate compartment is opened to ensure that the right waste gets to the right place!
## How we built it
Using TensorFlow and object detection, a Python program analyzes the webcam image input and classifies the objects shown. The TensorFlow data is then collected and pushed to our MongoDB Atlas database via Google Cloud. For this project, we used a single-shot detector model to maintain a balance between accuracy and speed. For the hardware, an Arduino 101 and a stepper motor were responsible for manipulating the position of the lid and opening the appropriate compartment.
## Challenges we ran into
We had many issues with training our ML models on Google Cloud, due to the meager resources provided by Google. Another issue we encountered was finding the right datasets, due to the novelty of our product. Due to these setbacks, we resorted to modifying a TensorFlow-provided model.
## Accomplishments that I'm proud of
We managed to work through difficulties and learned a lot during the process! We learned to connect TensorFlow, Arduino, MongoDB, and Express.js to create a synergistic project.
## What's next for Trash Code
In the future, we aim to create a mobile app for improved accessibility and to train a fully customized ML model. We also hope to design a fully functional full-sized prototype with the Arduino.
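The write-up does not say how the classification result reaches the Arduino 101; one common pattern (assumed here) is a serial link, with the Python side sending a compartment index and the Arduino sketch translating it into stepper moves.

```python
# Assumed serial bridge between the Python classifier and the Arduino 101;
# the port name and the single-byte protocol are placeholders.
import serial

COMPARTMENT = {"recyclable": b"0", "compost": b"1", "landfill": b"2"}

def open_compartment(category: str, port: str = "/dev/ttyACM0") -> None:
    with serial.Serial(port, 9600, timeout=2) as link:
        link.write(COMPARTMENT[category])   # Arduino turns this into stepper motion
```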
## Inspiration
The beginnings of this idea came from long road trips. When driving, good visibility is very important. When driving into the sun, the sun visor never seemed to be able to actually cover the sun. When driving at night, the headlights of oncoming cars made for a few moments of dangerous low visibility. Why isn't there a better solution for these things? We decided to see if we could make one, and discovered a wide range of applications for this technology, going far beyond simply blocking light.
## What it does
EyeHUD is able to track objects on opposite sides of a transparent LCD screen in order to render graphics on the screen relative to all of the objects it is tracking; i.e., depending on where the observer and the object of interest are located on each side of the screen, the locations of the graphical renderings are adjusted. Our basic demonstration is based on our original goal of blocking light. When sitting in front of the screen, eyeHUD uses facial recognition to track the position of the user's eyes. It also tracks the location of a bright flashlight on the opposite side of the screen with a second camera. It then calculates the exact position to render a dot on the screen that completely blocks the flashlight from the view of the user, no matter where the user moves their head or where the flashlight moves. By tracking both objects in 3D space, it can calculate the line that connects the two objects and then where that line intersects the monitor to find the exact position at which it needs to render graphics for the particular application.
## How we built it
We found an LCD monitor that had a broken backlight. Removing the case and the backlight from the monitor left us with just the glass and liquid crystal part of the display. Although this part of the monitor is not completely transparent, a bright light would shine through it easily. Unfortunately, we couldn't source a fully transparent display, but we were able to use what we had lying around. The camera on a laptop and a small webcam gave us the ability to track objects on both sides of the screen. On the software side, we used OpenCV's Haar cascade classifier in Python to perform facial recognition. Once the facial recognition is done, we must locate the user's eyes in their face in pixel space for the user camera, and locate the light with the other camera in its own pixel space. We then wrote an algorithm that translates the two separate pixel spaces into real 3D space, calculates the line that connects the object and the user, finds the intersection of this line with the monitor, and finally translates this position into pixel space on the monitor in order to render a dot.
## Challenges we ran into
First, we needed to determine a set of equations that would allow us to translate between the three separate pixel spaces and real space. It was important not only to be able to calculate this transformation, but also to be able to calibrate the position and the angular resolution of the cameras. This meant that when we found our equations, we needed to identify the linearly independent parts of the equation to figure out which parameters actually needed to be calibrated. Coming up with a calibration procedure was a bit of a challenge. There were a number of calibration parameters that we needed to constrain by making some measurements. We eventually solved this by having the monitor render a dot on the screen in a random position.
Then the user would move their head until the dot completely blocked the light on the far side of the monitor. We then had the computer record the positions in pixel space of all three objects. This told the computer that these three pixel-space points correspond to a straight line in real space, providing one data point. We then repeated this process several times (enough to constrain all of the degrees of freedom in the system). After we had a number of data points, we performed a chi-squared fit to the line defined by these points in the multidimensional calibration space. The parameters of the best-fit line determined the calibration parameters to use in the transformation algorithm. This calibration procedure took us a while to perfect, but we were very happy with the speed and accuracy we were able to calibrate at. Another difficulty was getting accurate tracking of the bright light on the far side of the monitor. The webcam we were using was cheap, and we had almost no access to settings like aperture and exposure, which meant the light would easily saturate the camera's CCD. Because the light was saturating and the camera was trying to adjust its own exposure, other lights in the room were also saturating the CCD, and so even bright spots on the white walls were being tracked as well. We eventually solved this problem by reusing the radial diffuser that was on the backlight of the monitor we took apart. This diffused any bright spots on the walls to well under the threshold for tracking. Even after this, we had a bit of trouble locating the exact center of the light, as we were still getting a bit of glare from the light on the camera lens. We were able to solve this problem by applying a Gaussian convolution to the raw video before trying any tracking, which allowed us to accurately locate the center of the light.
## Accomplishments that we are proud of
The fact that our tracking display worked at all felt like a huge accomplishment. Every stage of this project felt like a huge victory. We started with a broken LCD monitor and two whiteboards full of math. Reaching a well-working final product was extremely exciting for all of us.
## What we learned
None of our group had any experience with facial recognition or the OpenCV library. This was a great opportunity to dig into a part of machine learning that we had not used before and build something fun with it.
## What's next for eyeHUD
Expanding the scope of applicability:
* Infrared detection for pedestrians and wildlife in nighttime conditions
* Displaying information on objects of interest
* Police information via license plate recognition
Transition to a fully transparent display and more sophisticated cameras. General optimization of software.
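For a concrete picture of the two tracking halves described above, here is a minimal OpenCV sketch: Haar-cascade face and eye detection on the user-side camera, and bright-spot localization after a Gaussian blur on the far-side camera. The detector parameters and blur kernel size are guesses, and the calibration/3D math is omitted.

```python
# Minimal sketch of the two trackers: Haar-cascade eye centres on one camera,
# brightest point after a Gaussian blur on the other. Parameters are guesses.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def find_eyes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(gray[y:y + h, x:x + w]):
            eyes.append((x + ex + ew // 2, y + ey + eh // 2))   # eye centres in pixel space
    return eyes

def find_light(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (21, 21), 0)   # suppress glare and wall reflections
    _, _, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc                                   # (x, y) of the brightest point
```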
## Inspiration
Our mission is to foster a **culture of understanding**, a culture where people of diverse backgrounds get to truly *connect* with each other. But how can we reduce the barriers that exist today and make the world more inclusive? Our solution is to bridge the communication gap between **people of different races and cultures** and **people of different physical abilities**.
## What we built
In 36 hours, we created a mixed reality app that allows everyone in the conversation to communicate using their most comfortable method. You want to communicate using your mother tongue? Your friend wants to communicate using sign language? Your aunt is hard of hearing and wants to communicate without that back-and-forth frustration? Our app enables everyone to do that.
## How we built it
VRbind takes in speech and converts it into text using the Bing Speech API. Internally, that text is then translated into your mother tongue using the Google Translate API and played back as speech through the built-in speaker on the Oculus Rift. Additionally, we provide a platform where the user can communicate using sign language. This is detected using the Leap Motion controller and interpreted as English text. Similarly, the text is then translated into your mother tongue and played as speech through the Oculus Rift.
## Challenges we ran into
We are running our program in Unity, so the challenge was converting all of our API integrations into C#.
## Accomplishments that we are proud of
We are proud that we were able to complete all the essential features that we intended to implement and successfully troubleshoot the problems we had throughout the competition.
## What we learned
We learned how to code in C# as well as how to select, implement, and integrate different APIs on the Unity platform.
## What's next for VRbind
Facial, voice, and body-language emotional analysis of the person that you are speaking with.
## Inspiration Students often have a hard time finding complementary co-founders for their ventures/ideas and have limited interaction with students from other universities. Many universities don't even have entrepreneurship centers to help facilitate the matching of co-founders. Furthermore, it is hard to seek validation from a wide range of perspectives on your ideas when you're immediate network is just your university peers. ## What it does VenYard is a gamified platform that keeps users engaged and interested in entrepreneurship while building a community where students can search for co-founders across the world based on complementary skill sets and personas. VenYard’s collaboration features also extend to the ideation process feature where students can seek feedback and validation on their ideas from students beyond their university. We want to give the same access to entrepreneurship and venture building to every student across the world so they can have the tools and support to change the world. ## How we built it We built VenYard using JS, HTML, CSS, Node.js, MySQL, and a lack of sleep! ## Challenges we ran into We had several database-related issues related to the project submission page and the chat feature on each project dashboard. Furthermore, when clicking on a participant on a project's dashboard, we wanted their profile to be brought up but we ran into database issues there but that is the first problem we hope to fix. ## Accomplishments that we're proud of For a pair of programmers who have horrible taste in design, we are proud of how this project turned out visually. We are also proud of how we have reached a point in our programming abilities where we are able to turn our ideas into reality! ## What we learned We were able to advance our knowledge of MySql and Javascript specifically. Aside from that, we were also able to practice pair programming by using the LiveShare extension on VSCode. ## What's next for VenYard We hope to expand the "Matching" feature by making it so that users can specify more criteria for what they want in the ideal co-founder. Additionally, we probably would have to take a look at the UI and make sure it's user-friendly because there are a few aspects that are still a little clunky. Lastly, the profile search feature needs to be redone because our initial idea of combining search and matching profiles doesn't make sense. ## User Credentials if you do not want to create an account username: [revantkantamneni@gmail.com](mailto:revantkantamneni@gmail.com) password: revant ## Submission Category Education and Social Good ## Discord Name revantk16#6733, nicholas#2124
## Inspiration
Our inspiration stemmed from the desire to empower the deaf and hard of hearing community by providing them with a more inclusive means of communication. We recognized the importance of American Sign Language (ASL) as a primary mode of communication for many individuals and sought to leverage technology to make ASL more accessible in virtual environments.
## What it does
ASL GestureSense Unity is a groundbreaking project that enables real-time recognition and interpretation of American Sign Language (ASL) gestures in virtual environments. It allows users to interact naturally and intuitively with digital applications using ASL, bridging communication gaps and fostering inclusivity.
## How we built it
ASL GestureSense Unity was developed using Unity, Meta Quest 2, and various XR interaction toolkits. Unity provided the foundational environment for our project, allowing us to leverage its powerful rendering capabilities and cross-platform compatibility. The integration of Meta Quest 2 extended our project's capabilities with its advanced hardware features, including high-resolution displays, precise motion tracking, and hand gesture recognition. We harnessed these features to create immersive experiences that closely mimic real-world interactions. In conjunction with Unity and Meta Quest 2, we employed XR interaction toolkits such as the XR Interaction Toolkit and VRTK (Virtual Reality Toolkit). These toolkits enabled us to implement complex interaction mechanics, including hand tracking, gesture recognition, and object manipulation. Additionally, VRTK complemented our development efforts by offering a wide range of tools and utilities designed to streamline the creation of interactive VR applications. With VRTK, we were able to enhance our project with advanced hand gesture recognition capabilities, enabling precise detection and interpretation of gestures performed by users' left and right hands. We utilized Unity's native scripting API along with custom shaders and physics simulations to achieve lifelike interactions and visual effects. Throughout the development cycle, we conducted rigorous testing and optimization to ensure optimal performance across different devices and platforms.
## Challenges we ran into
During our journey, we encountered a variety of challenges that tested our problem-solving skills and resilience. One significant hurdle was grappling with compatibility issues between Meta Quest 2 and both mobile phones and laptops. Ensuring seamless interaction across different devices proved to be a daunting task. Furthermore, we faced numerous build issues, particularly in configuring project settings and managing packages. Unity editor version compatibility also emerged as a persistent issue, requiring careful navigation and troubleshooting to maintain project stability and functionality. Moreover, addressing boundary loss in Meta Quest 2 controllers presented its own set of challenges, demanding innovative solutions to ensure reliable tracking and user experience in virtual environments. Despite these obstacles, our team remained dedicated and resourceful, leveraging our collective expertise to overcome each challenge and propel the project forward.
## Accomplishments that we're proud of
One of our proudest achievements is venturing into virtual reality (VR) and Meta Quest 2 for the first time. Despite the novelty and complexity, we swiftly integrated the hardware with our software in a remarkably short period.
Navigating VR hardware and establishing seamless communication demanded dedication and perseverance, expanding our technical prowess. Witnessing our user interface (UI) on the Meta Quest platform was a significant milestone. For VR novices, this integration validated our adaptability and enthusiasm. It fueled further exploration in VR design and development, marking a transformative journey into uncharted technological terrain. ## What we learned Our project journey underscored the value of embracing new challenges without hesitation. We learned that perseverance and a willingness to explore the unknown are vital for growth. By confronting difficulties head-on, we gained invaluable insights and experiences that transcend technical skills. Above all, we discovered that pushing beyond our comfort zones fosters resilience and adaptability, essential qualities in the dynamic world of technology. ## What's next for ASL GestureSense Unity Our vision extends beyond gesture recognition. We aim to incorporate subtitle features for hand gestures, enhancing accessibility and learning opportunities. Additionally, we're committed to developing comprehensive tutorials to facilitate the learning of sign language. By combining innovation with education, we aspire to empower individuals and promote inclusivity on a broader scale.
# BananaExpress A self-writing journal of your life, with superpowers! We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling! Features: * User photo --> unique question about that photo based on 3 creative techniques + Real time question generation based on (real-time) user journaling (and the rest of their writing)! + Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question! + Question-corpus matching - we search for good questions about the user's current topics * NLP on previous journal entries for sentiment analysis I love our front end - we've re-imagined how easy and futuristic journaling can be :) And, honestly, SO much more! Please come see! ♥️ from the Lotus team, Theint, Henry, Jason, Kastan
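The write-up doesn't share implementation details, but the general shape of a "photo in, question out" CNN + LSTM generator can be sketched. Below is a minimal, hypothetical PyTorch version: a small CNN encodes the photo into a feature vector that seeds an LSTM decoder, which predicts the question token by token. All layer sizes and names are illustrative assumptions, not the Lotus team's actual model.

```python
# Minimal sketch of a CNN + LSTM question generator (architecture only, not the
# team's actual model). The CNN encodes the user's photo into a feature vector
# that seeds the LSTM, which then decodes a prompt/question token by token.
import torch
import torch.nn as nn

class PhotoQuestionGenerator(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Tiny CNN encoder (a real system might use a pretrained backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, token_ids):
        # Use the image features as the LSTM's initial hidden state.
        h0 = self.cnn(images).unsqueeze(0)   # (1, B, H)
        c0 = torch.zeros_like(h0)
        emb = self.embed(token_ids)          # (B, T, E)
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)              # logits over the vocabulary

# Example: one 224x224 photo and a 10-token partial question.
model = PhotoQuestionGenerator()
logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 5000, (1, 10)))
print(logits.shape)  # torch.Size([1, 10, 5000])
```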
## Inspiration 2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find them. But what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today. ## What it does A camera records you as you go about your day-to-day life, storing "comic book strip" panels containing images and context of what you're doing as you go about your life. When you want to remember something you can ask out loud, and it'll use OpenAI's API to search through its "memories" to bring up the location, time, and your action when you lost it. This can help with knowing where you placed your keys, whether you locked your door/garage, and other day-to-day tasks. ## How we built it The React-based UI interface records using your webcam, screenshotting every second, and stopping at the 9 second mark before creating a 3x3 comic image. This was done because a single static image would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, it sends it to OpenAI's turbo vision model, which then gives contextualized info about the image. This info is then sent to our Express.JS service hosted on Vercel, which in turn parses this data and sends it to Cloud Firestore (stored in a Firebase database). To re-access this data, we use the browser's built-in speech recognition along with the SpeechSynthesis API in order to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by OpenAI, which then classifies it as either a search for an action or an object find. It then searches through the database and speaks out loud, giving information with a naturalized response. ## Challenges we ran into We originally planned on using a VR headset, webcam, NEST camera, or anything external with a camera, which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we decided to make use of macOS's Continuity feature, using our iPhone camera connected to our MacBook as our primary input. ## Accomplishments that we're proud of As a two-person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.JS and Firebase, so we're proud of how fast we were able to learn! ## What we learned We learned about OpenAI's turbo vision API capabilities, how to work together as a team, and how to sleep effectively on a couch and with very little sleep. ## What's next for ReCall: Memories done for you! We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue of the people that they've met in the past to help them as they recover. We didn't have too much context on these health problems, however, and our scope was limited, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
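The 3x3 "comic" step described above is straightforward to illustrate. ReCall does this in the React frontend; the sketch below shows the same idea in Python with Pillow, assuming nine one-second screenshots saved to disk (file names are placeholders).

```python
# Sketch of the "comic strip" step: tile nine sequential screenshots into one
# 3x3 image before sending it to a vision model (a Pillow version of what the
# team does in the browser; file names are placeholders).
from PIL import Image

def make_comic(paths, cell=(320, 240)):
    assert len(paths) == 9, "expects nine 1-second screenshots"
    sheet = Image.new("RGB", (cell[0] * 3, cell[1] * 3))
    for i, p in enumerate(paths):
        frame = Image.open(p).convert("RGB").resize(cell)
        sheet.paste(frame, ((i % 3) * cell[0], (i // 3) * cell[1]))
    return sheet

comic = make_comic([f"frame_{i}.png" for i in range(9)])
comic.save("comic.jpg")  # this single image is what gets captioned/contextualized
```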
## Inspiration In the fast-paced world of networking and professional growth, connecting with students, peers, mentors, and like-minded individuals is essential. However, the need to manually jot down notes in Excel or the risk of missing out on valuable follow-up opportunities can be a real hindrance. ## What it does Coffee Copilot transcribes, summarizes, and suggests talking points for your conversations, eliminating manual note-taking and maximizing networking efficiency. It is also able to take forms with Genesys. ## How we built it **Backend**: * Python + FastAPI was used to serve CRUD requests * Cohere was used for both text summarization and text generation using their latest Coral model * CockroachDB was used to store user and conversation data * AssemblyAI was used for speech-to-text transcription and speaker diarization (i.e. identifying who is talking) **Frontend**: * We used Next.js for its frontend capabilities ## Challenges we ran into We ran into a few of the classic problems - going in circles about what idea we wanted to implement, biting off more than we could chew with scope creep, and some technical challenges that **seemed** like they should be simple (such as sending an audio file as a blob to our backend 😒). ## Accomplishments that we're proud of A huge last-minute push to get us over the finish line. ## What we learned We learned some new technologies like working with LLMs at the API level, navigating heavily asynchronous tasks and using event-driven patterns like webhooks. Aside from technologies, we learned how to disagree but move forward, when to cut our losses and how to leverage each other's strengths! ## What's next for Coffee Copilot There are quite a few things on the horizon to look forward to: * Adding sentiment analysis * Allowing the user to augment the summary and the prompts that get generated * Fleshing out the user structure and platform (adding authentication, onboarding more users) * Using smart glasses to take pictures and recognize people you've met before
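Since sending the audio blob to the backend was one of the pain points mentioned above, here is a minimal, hypothetical FastAPI sketch of that upload flow. The transcription and summarization helpers are placeholders standing in for the AssemblyAI and Cohere calls, not Coffee Copilot's actual code.

```python
# Minimal FastAPI sketch of the audio-upload flow (helper functions are
# placeholders standing in for the AssemblyAI and Cohere calls).
from fastapi import FastAPI, UploadFile, File

app = FastAPI()

def transcribe_with_diarization(audio_bytes: bytes) -> list[dict]:
    # Placeholder: hand the bytes to a speech-to-text service that labels speakers.
    return [{"speaker": "A", "text": "..."}]

def summarize(utterances: list[dict]) -> str:
    # Placeholder: condense the conversation with an LLM.
    return "Summary of the conversation."

@app.post("/conversations")
async def upload_conversation(audio: UploadFile = File(...)):
    audio_bytes = await audio.read()   # the browser sends the blob as multipart form data
    utterances = transcribe_with_diarization(audio_bytes)
    return {"summary": summarize(utterances), "utterances": utterances}
```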
## Inspiration Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit videogame, "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb! ## What it does The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules the explosive includes are a "cut the wire" game where the wires must be cut in the correct order, a "press the button" module where different actions must be taken depending on the given text and LED colour, an 8 by 8 "invisible maze" where players must cooperate in order to navigate to the end, and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge". ## How we built it **The Explosive** The explosive defuser simulation is a modular game crafted using four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, Keypads, Mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D printed plates. **The Code** Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation. Using the Grove LCD RGB Backlight Library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was also used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol. The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. With the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8 by 8 matrix. This module further optimizes the maze hardware, minimizing the required wiring, and improves signal communication. ## Challenges we ran into Participating in the biggest hardware hackathon in Canada, using all the various hardware components provided, such as the keypads or OLED displays, posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt, utilizing components that better suited our needs, and to be flexible with the hardware provided. Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Therefore, optimizing software and hardware for efficient resource usage was necessary, and remained a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system; to counteract this, we had to come up with the unique solutions mentioned below: ## Accomplishments that we're proud of During the Makeathon we often faced the issue of buttons creating noise, and oftentimes that noise would disrupt the entire system. To counteract this issue, we had to discover creative solutions that did not use buttons to get around the noise. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module. ## What we learned * Familiarity with the functionalities of new Arduino components like the "Micro-OLED Display," "8 by 8 LED matrix," and "Keypad" is gained through the development of individual modules. * Efficient time management is essential for successfully completing the design. Establishing a precise timeline for the workflow aids in maintaining organization and ensuring successful development. * Enhancing overall group performance is achieved by assigning individual tasks. ## What's next for Keep Hacking and Nobody Codes * Ensure the elimination of any unwanted noises in the wiring between the main board and game modules. * Expand the range of modules by developing additional games such as "Morse-Code Game," "Memory Game," and others to offer more variety for players. * Release the game to a wider audience, allowing more people to enjoy and play it.
## Inspiration After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants. ## What it does Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players’ plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants. ## How we built it ### Back-end: The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for websockets so that we could support multiplayer; this messed us up for hours and hours until we finally got it working. Aside from this, we have an Arduino reading the moisture of the soil and the brightness of the surroundings, as well as taking a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage the states, and for storage we used MongoDB to hold all of the data needed. ### Front-end: The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old pokémon games, which we thought might evoke nostalgia for many players. ## Challenges we ran into We had a lot of difficulty setting up Socket.IO and connecting the API through it to the front end and the database. ## Accomplishments that we're proud of We are incredibly proud of integrating our web sockets between frontend and backend and using Arduino data from the sensors. ## What's next for Poképlants * Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard * Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help * Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
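As a rough illustration of the websocket layer described under "How we built it" above, the sketch below uses Flask-SocketIO to re-broadcast a plant's sensor reading to every connected player. The event and field names are made up; Flask-SocketIO simply stands in for whichever Python Socket.IO server the team actually used.

```python
# Minimal Flask-SocketIO sketch of broadcasting plant sensor updates to all
# connected players (event and field names are illustrative, not the real ones).
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("sensor_update")
def handle_sensor_update(data):
    # data might look like {"plant_id": 1, "moisture": 0.42, "light": 0.8}.
    # Re-broadcast to every client so each player's battle view stays in sync.
    emit("plant_state", data, broadcast=True)

if __name__ == "__main__":
    socketio.run(app, port=5000)
```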
## Introducing Nuisance ### Inspiration When prompted with the concept of **Useless Inventions**, and the slight delay from procrastinating the brainstorming process of our idea, we suddenly felt very motivated to make a little friend to help us. Introducing **Nuisance**. A (not so friendly) Bot that will sense when you have given him your phone. Promptly running away, and screaming if you get too close. An interesting take on the game of manhunt. ### What it does **Nuisance** detects when a *phone* is placed in its possession. It then embarks on a random journey, in an effort to play everyone's favourite game, keep away. If a daring human approaches before *Nuisance* is ready to end the game, he screams and runs away; only a genuine scream of horror stands a chance of reclaiming the device. Adding a perfect touch of embarrassment and a loss of dignity. ### How we built it 1. Arduino Due 2. 2 wheels 3. Caster Wheel 4. H-bridge/ Motor Driver 5. Motors 6. 2 UltraSonic Sensors 7. Noise Sound Audio Sensor 8. PIR Motion sensor 9. 1 Grove Buzzer v1.2 10. large breadboard 11. 2 small breadboards 12. OLED display 13. Force sensor 14. 9V battery 15. 3\*1.5 = 4.5 V battery 16. A bit of wires and a **lot** of cardboard *and some software* ### Challenges we ran into * different motor powers / motors not working anymore We had an issue during the debugging phase of our code regarding the *Ultrasonic Sensors*. No matter what was done, they just seemed to constantly be timing out. After looking extensively into the issue, we figured out that the issue was neither hardware nor software related. The breadboard had sporadic faulty pins that we had to be considerate of. This caused us to test the rest of the breadboard for integrity. Furthermore, we had a lot of coding issues regarding the swap between our Arduino Uno and Due. The Arduino Due did not support the same built-in libraries, such as tone (for the buzzer). We also had issues with the collision detection algorithm at first. However, with a lil tenacity, *and the power of friendship*, you too can solve this problem. We originally had the wrong values being processed, causing our algorithm to disregard the numbers we required to gauge distance accurately. ### Accomplishments that we're proud of * completed project..? ### What we learned * yell at a nuisance if you want ur stuff back? Never doubt the ## What's next for Nuisance Probably more crying
## Inspiration Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions. While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care. ## What it does Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user’s medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity. ## How we built it This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards. ## Challenges we ran into Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation associated with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours were spent just trying to figure out the correct audio stream encoding. Of course, it wouldn’t be a hackathon project without overscoping and then panicking as the deadline drew nearer, but because our project uses mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself. ## Accomplishments that we're proud of Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief. ## What we learned Among many things: * The complexity and difficulty of implementing mechanical systems * How to adjust mechatronics design parameters * Usage of Azure SQL and WordPress for dynamic user pages * Use of the Houndify API and custom commands * Raspberry Pi audio streams ## What's next for Medley One feature we would have liked more time to implement is better database reporting and analytics.
We envision Medley’s database as a patient- and doctor-usable extension of the existing state PDMPs, and would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem.
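As a toy illustration of the trend-detection idea just mentioned, the sketch below flags days with an unusually high dispense count using a rolling z-score over counts that would come from the Azure SQL database. The numbers and threshold are made up; a production model would need to be considerably more careful.

```python
# Toy sketch of the "flag abnormal behavior" idea: a rolling z-score over daily
# dispense counts pulled from the database (the data here is made up).
from statistics import mean, stdev

def flag_unusual_days(daily_counts, window=14, threshold=2.5):
    flags = []
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > threshold:
            flags.append(i)  # day index with an unusually high count
    return flags

counts = [2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 9]  # spike on the last day
print(flag_unusual_days(counts))  # [14]
```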
### 💡 Inspiration 💡 We call them heroes, **but the support we give them is equal to the one of a slave.** Because of the COVID-19 pandemic, a lot of medics have to keep track of their patient's history, symptoms, and possible diseases. However, we've talked with a lot of medics, and almost all of them share the same problem when tracking the patients: **Their software is either clunky and bad for productivity, or too expensive to use on a bigger scale**. Most of the time, there is a lot of unnecessary management that needs to be done to get a patient on the record. Moreover, the software can even get the clinician so tired they **have a risk of burnout, which makes their disease predictions even worse the more they work**, and with the average computer-assisted interview lasting more than 20 minutes and a medic having more than 30 patients on average a day, the risk is even worse. That's where we introduce **My MedicAid**. With our AI-assisted patient tracker, we reduce this time frame from 20 minutes to **only 5 minutes.** This platform is easy to use and focused on giving the medics the **ultimate productivity tool for patient tracking.** ### ❓ What it does ❓ My MedicAid gets rid of all of the unnecessary management that is unfortunately common in the medical software industry. With My MedicAid, medics can track their patients by different categories and even get help for their disease predictions **using an AI-assisted engine to guide them towards the urgency of the symptoms and the probable dangers that the patient is exposed to.** With all of the enhancements and our platform being easy to use, we give the user (medic) a 50-75% productivity enhancement compared to the older, expensive, and clunky patient tracking software. ### 🏗️ How we built it 🏗️ The patient's symptoms get tracked through an **AI-assisted symptom checker**, which uses [APIMedic](https://apimedic.com/i) to process all of the symptoms and quickly return the danger of them and any probable diseases to help the medic take a decision quickly without having to ask for the symptoms by themselves. This completely removes the process of having to ask the patient how they feel and speeds up the process for the medic to predict what disease their patient might have since they already have some possible diseases that were returned by the API. We used Tailwind CSS and Next JS for the Frontend, MongoDB for the patient tracking database, and Express JS for the Backend. ### 🚧 Challenges we ran into 🚧 We had never used APIMedic before, so going through their documentation and getting to implement it was one of the biggest challenges. However, we're happy that we now have experience with more 3rd party APIs, and this API is of great use, especially with this project. Integrating the backend and frontend was another one of the challenges. ### ✅ Accomplishments that we're proud of ✅ The accomplishment that we're the proudest of would probably be the fact that we got the management system and the 3rd party API working correctly. This opens the door to work further on this project in the future and get to fully deploy it to tackle its main objective, especially since this is of great importance in the pandemic, where a lot of patient management needs to be done. ### 🙋‍♂️ What we learned 🙋‍♂️ We learned a lot about CRUD APIs and the usage of 3rd party APIs in personal projects. We also learned a lot about the field of medical software by talking to medics in the field who have way more experience than us. 
However, we hope that this tool helps them in their productivity and to remove their burnout, which is something critical, especially in this pandemic. ### 💭 What's next for My MedicAid 💭 We plan on implementing an NLP-based service to make it easier for the medics to just type what the patient is feeling like a text prompt, and detect the possible diseases **just from that prompt.** We also plan on implementing a private 1-on-1 chat between the patient and the medic to resolve any complaints that the patient might have, and for the medic to use if they need more info from the patient.
## Inspiration In the United States, every 11 seconds, a senior is treated in the emergency room for a fall. Every 19 minutes, an older adult dies from a fall, directly or indirectly. Deteriorating balance is one of the direct causes of falling in seniors. This epidemic will only increase, as the senior population will double by 2060. While we can’t prevent the effects of aging, we can slow down this process of deterioration. Our mission is to create a solution to senior falls with Smart Soles, a shoe sole insert wearable and companion mobile app that aims to improve senior health by tracking balance, tracking number of steps walked, and recommending senior-specific exercises to improve balance and overall mobility. ## What it does Smart Soles enables seniors to improve their balance and stability by interpreting user data to generate personalized health reports and recommend senior-specific exercises. In addition, academic research has indicated that seniors are recommended to walk 7,000 to 10,000 steps/day. We aim to offer seniors an intuitive and more discrete form of tracking their steps through Smart Soles. ## How we built it The general design of Smart Soles consists of a shoe sole that has Force Sensing Resistors (FSRs) embedded on it. These FSRs will be monitored by a microcontroller and take pressure readings to take balance and mobility metrics. This data is sent to the user’s smartphone, via a web app to Google App Engine and then to our computer for processing. Afterwards, the output data is used to generate a report whether the user has a good or bad balance. ## Challenges we ran into **Bluetooth Connectivity** Despite hours spent on attempting to connect the Arduino Uno and our mobile application directly via Bluetooth, we were unable to maintain a **steady connection**, even though we can transmit the data between the devices. We believe this is due to our hardware, since our HC05 module uses Bluetooth 2.0 which is quite outdated and is not compatible with iOS devices. The problem may also be that the module itself is faulty. To work around this, we can upload the data to the Google Cloud, send it to a local machine for processing, and then send it to the user’s mobile app. We would attempt to rectify this problem by upgrading our hardware to be Bluetooth 4.0 (BLE) compatible. **Step Counting** We intended to use a three-axis accelerometer to count the user’s steps as they wore the sole. However, due to the final form factor of the sole and its inability to fit inside a shoe, we were unable to implement this feature. **Exercise Repository** Due to a significant time crunch, we were unable to implement this feature. We intended to create a database of exercise videos to recommend to the user. These recommendations would also be based on the balance score of the user. ## Accomplishments that we’re proud of We accomplished a 65% success rate with our Recurrent Neural Network model and this was our very first time using machine learning! We also successfully put together a preliminary functioning prototype that can capture the pressure distribution. ## What we learned This hackathon was all new experience to us. We learned about: * FSR data and signal processing * Data transmission between devices via Bluetooth * Machine learning * Google App Engine ## What's next for Smart Soles * Bluetooth 4.0 connection to smartphones * More data points to train our machine learning model * Quantitative balance score system
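The write-up doesn't say which framework the recurrent model was built in, so the following is only a hedged Keras sketch of the idea: a window of FSR pressure readings goes in, and a good/bad balance probability comes out. The sensor count, window length, and layer sizes are assumptions rather than the team's exact model.

```python
# Hedged sketch of a recurrent balance classifier: a window of FSR pressure
# readings in, a good/bad balance probability out (shapes are assumptions).
import numpy as np
import tensorflow as tf

TIMESTEPS, N_SENSORS = 100, 4   # e.g. 100 samples from 4 FSRs under the sole

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_SENSORS)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(bad balance)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Fake data just to show the expected shapes.
X = np.random.rand(8, TIMESTEPS, N_SENSORS).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1], verbose=0))
```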
## Inspiration Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves. ## What it does The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done. ## How We built it We used Flask for the backend and used OpenCV, TensorFlow, and Google Cloud speech to text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations. ## Challenges we ran into We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time. ## Accomplishments that we're proud of We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches. ## What we learned We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start. ## What's next for RealTalk We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server. ![Image](https://i.imgur.com/aehDk3L.gif) Tracks movement of hands and face to provide real-time analysis on expressions and body-language. ![Image](https://i.imgur.com/tZAM0sI.gif)
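The video/voice multi-threading challenge mentioned above usually comes down to having each analysis loop push its results into a shared queue that the web layer drains. Below is a minimal, generic Python sketch of that pattern; the analyze functions are placeholders for the OpenCV and speech-to-text code, not RealTalk's actual implementation.

```python
# Minimal sketch of the video + voice threading pattern: two worker threads
# push their analysis results into one queue that the web layer drains.
import queue
import threading
import time

results = queue.Queue()

def analyze_video():
    while True:
        results.put({"source": "video", "expression": "neutral"})
        time.sleep(1.0)   # pretend we analyzed one frame per second

def analyze_audio():
    while True:
        results.put({"source": "audio", "filler_words": 2})
        time.sleep(2.0)   # pretend we analyzed a two-second audio chunk

for worker in (analyze_video, analyze_audio):
    threading.Thread(target=worker, daemon=True).start()

# The web endpoint (not shown) would drain the queue on each poll:
time.sleep(3)
while not results.empty():
    print(results.get())
```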
## Problem In these times of isolation, many of us developers are stuck inside which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy to connect, all in one platform where all you and your developer friends can come together to learn, code, and brainstorm together. ## About Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface. We made it one page to have access to all the tools on one screen and transition between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is Synced between users in real-time. ## Features Our platform allows multiple users to enter one room and access tools like watching youtube tutorials, brainstorming on a drawable whiteboard, and code in our inbuilt browser IDE all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraints. We ran into performance issues while syncing data between two clients where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room where users can visit. Adding more relevant tools, more tools, widgets, and expand on other work fields to increase our User demographic. Include interface customizing options to allow User’s personalization of their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
## Introduction Our innovative AI utilizes cutting-edge technology to analyze your facial expressions and speech through your webcam and microphone. Once you start interacting, the AI will adapt its responses according to your emotions and engagement level, providing a unique, immersive, and engaging conversational experience. It's not just a chat; it's a dynamic interaction crafted just for you to boost your confidence and mental health. ## Inspiration As a group, we decided to address the issue of impostor syndrome and lack of confidence among students. We built the project with the intention of creating a unique way to boost self-esteem and emotional well-being by providing feedback the user can improve upon. ## What it does Our project uses both webcam analysis as well as chatbot interactions. Through these means, Fake:It aims to provide all students with valuable insight, support, and encouragement. Whether the user is struggling with self-doubt or anxiety, the main goal is to contribute to a student's well-being and personal growth. ## Technologies Employed ### Backend #### [Flask](https://flask.palletsprojects.com/en/2.1.x/) ### Frontend #### [React](https://reactjs.org/) #### [Tailwind CSS](https://tailwindcss.com/) #### [Vite](https://vitejs.dev/) ## Additional Integrations ### [TensorFlow Face API](https://www.tensorflow.org/) ### [OpenAI API](https://beta.openai.com/) ## Challenges we ran into To begin, setting up local development environments with Flask proved to take longer than expected. In addition, we faced trouble managing global React states once the project grew in complexity. Finally, our team faced trouble trying to implement a 'mood report' that would graph a user's facial expressions over time after their session. Facing these challenges made this hackathon engaging and memorable. ## Accomplishments that we're proud of We are very happy with our achievements, including successfully tackling the issue of boosting self-confidence through the use of tracking various moods and making an effort to quantify them for our project, enhancing the user experience. These accomplishments reflect our commitment to providing a more empathetic and personalized platform for our users, which includes tailored responses based on their moods to improve themselves. It's a significant step towards fostering emotional well-being and support within our community. ## What we learned We gained technical expertise on challenges throughout the hackathon with tools like React, Flask, and RESTful APIs. On top of this, being able to adapt and overcome certain challenges by using teamwork and collaboration is something we found to be paramount to this project's success. ## What's next for FakeIt Next steps would be adding analytical features to the project, such as a way to display users' moods throughout their session. This could be compounded with an account system, where users could create accounts to track long-term improvement in moods. Finally, adding more AI personalities the users can choose from would boost engagement with our platform.
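One way to picture the mood-aware chat flow: the frontend sends the detected facial expression along with the user's message, and the Flask backend folds both into the prompt it passes to the language model. The sketch below is hypothetical; generate_reply is a placeholder rather than the actual OpenAI call.

```python
# Minimal Flask sketch of folding the detected mood into the chatbot prompt
# (generate_reply is a placeholder for the actual language-model call).
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(prompt: str) -> str:
    return "You're doing better than you think — tell me more."  # placeholder

@app.post("/chat")
def chat():
    body = request.get_json()
    mood, message = body.get("mood", "neutral"), body.get("message", "")
    prompt = (
        f"The student currently appears {mood}. "
        f"Respond encouragingly and address impostor syndrome.\n"
        f"Student: {message}"
    )
    return jsonify({"reply": generate_reply(prompt)})

if __name__ == "__main__":
    app.run(debug=True)
```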
# Picify **Picify** is a [Flask](http://flask.pocoo.org) application that converts your photos into Spotify playlists that can be saved for later listening, providing a uniquely personal way to explore new music. The experience is facilitated by interacting and integrating a wide range of services. Try it [here](http://picify.net/). ## Workflow The main workflow for the app is as follows: 1. The user uploads a photo to the Picify Flask server. 2. The image is passed onto the [Google Cloud Vision](https://cloud.google.com/vision/) API, where labels and entities are predicted/extracted. This information then gets passed back to the Flask server. 3. The labels and entities are filtered by a dynamic confidence threshold which is iteratively lowered until a large enough set of descriptors for the image can be formed. 4. Each of the descriptors in the above set are then expanded into associated "moods" using the [Datamuse API](https://www.datamuse.com/api/). 5. All of the descriptors and associated moods are filtered against a whitelist of "musically-relevant" terms compiled from sources such as [AllMusic](https://www.allmusic.com/moods) and [Every Noise at Once](http://everynoise.com/genrewords.html), excepting descriptors with extremely high confidence (for example, for a picture of Skrillex this might be "Skrillex"). 6. Finally, the processed words are matched against existing Spotify playlists, which are sampled to form the final playlist. ## Contributors * [Macguire Rintoul](https://github.com/mrintoul) * [Matt Wiens](https://github.com/mwiens91) * [Sophia Chan](https://github.com/schan27)
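Steps 3 and 4 of the workflow are easy to sketch in Python: relax the confidence threshold until enough descriptors survive, then expand each one into related "mood" words with the Datamuse API. The labels below are assumed to already be (label, score) pairs returned by the Cloud Vision step, and `ml` (means-like) is just one of several Datamuse relations that could be used here.

```python
# Sketch of workflow steps 3-4: relax the confidence threshold until enough
# descriptors survive, then expand each one into related "mood" words via
# Datamuse (labels are assumed to be (label, score) pairs from the vision step).
import requests

def pick_descriptors(labels, min_count=5, start=0.9, step=0.1, floor=0.3):
    threshold = start
    while threshold >= floor:
        kept = [word for word, score in labels if score >= threshold]
        if len(kept) >= min_count:
            return kept
        threshold -= step
    return [word for word, _ in labels]

def expand_moods(word, limit=5):
    # "ml" asks Datamuse for words with a similar meaning.
    resp = requests.get("https://api.datamuse.com/words",
                        params={"ml": word, "max": limit}, timeout=5)
    return [item["word"] for item in resp.json()]

labels = [("sunset", 0.97), ("calm", 0.88), ("beach", 0.81), ("sky", 0.60)]
descriptors = pick_descriptors(labels, min_count=3)
print({d: expand_moods(d) for d in descriptors})
```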
## Inspiration Phreesia's challenge about storing medical data with third-party software ## What it does Allows users to analyze an image and see a prediction for skin cancer (benign or malignant), and allows upload of the image and the prediction into cloud storage ## How we built it TensorFlow Lite and Android Studio, with Firebase for user authentication and cloud storage (+GitHub, Proto.io) ## Challenges we ran into Implementing TensorFlow Lite into Android Studio ## Accomplishments that we're proud of Building an app with the functionality we intended and a model UI for what the app could be ## What we learned How to implement an ML model in Android Studio, how to use Firebase for cloud storage ## What's next for Skin Cancer Detection App Using blockchain, we can enable safe transfer of patients' data to doctors. Adding access to more ML analysis tools to create an ecosystem of physician tools available. We also worked on doctor's-note summarization with a Python API using co:here NLP generation. (Did not finish in time)
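One concrete piece of that pipeline is converting the trained model into a .tflite file that can be bundled with the Android app. The sketch below shows the standard TensorFlow Lite conversion step; the tiny Keras model is only a stand-in for the real classifier.

```python
# Sketch of the model-prep side of the pipeline: convert a trained Keras
# classifier to a .tflite file for the Android app (the tiny model here is
# just a stand-in for the real one).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # benign vs. malignant
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for mobile
tflite_model = converter.convert()

with open("skin_model.tflite", "wb") as f:
    f.write(tflite_model)
```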
## Inspiration The point of the sustain app is to bring a competitive spirit and a rewarding feeling for doing good actions that help the environment. The app is basically a social media app where you can add friends, and see your leaderboards and community progress toward a green goal you set for yourself. The intended way to use the app is that every time you find a can or a piece of garbage on the ground, you can scan the item using the machine learning algorithm that over time will be able to detect more and more garbage of all different types and then you throw it away to get points(based on garbage type) that stack up over the weeks. The app also keeps track of the barcode so that it isn't used over and over again to hack points. We also planned in the future to add a variety of other methods to gain points such as ride sharing or using reusable containers or bottles. ## How we built it We built the front end using HTML and CSS along with frameworks like Tailwind or Bootstrap. For the back end, we created it using Django. Finally, the machine learning part was implemented using PyTorch and the YOLOv5 Algorithm. ## Challenges we ran into We encountered several challenges in deploying an object detection algorithm that was both accurate and lightweight, ensuring it did not significantly impact inference time which would hurt user experience. This required us to curate a dataset that's diverse while also having a small model. Additionally, we faced significant difficulties in implementing a conversational virtual assistant using AWS for our system. Despite investing more than three hours in setup, it ultimately crashed, leading us to unfortunately drop the idea. ## What we learned For the 2 front-end developers, we learned a lot about how HTML works and the different available frameworks. Our back-end and machine-learning developers learned how complicated it could be to implement chatbots and deploy them onto web apps. They have also been able to learn new technologies such as YOLOv5 and frameworks like PyTorch and Ultralytics. ## What's next for the Sustain app
Record your happy, sad and motivational moments with our web-app. We hope to help you with your mental health during this time. When things get hard, remind yourself of a happy moment that has happened in your life, and motivate yourself with the goals you have set for yourself! We plan on further developing this site and increasing customizability in the future.
## 💡 Inspiration We got inspiration from our back-end developer Minh. He mentioned that he was interested in the idea of an app that helped people record their positive progress and showcase their accomplishments there. This then led our product/UX designer Jenny to think about what this app would target as a problem and what kind of solution it would offer. From our research, we came to the conclusion that quantity-over-quality social media use resulted in people feeling less accomplished and more anxious. As a solution, we wanted to focus on an app that helps people stay focused on their own goals and accomplishments. ## ⚙ What it does Our app is a journalling app that has the user enter 2 journal entries a day: one in the morning and one in the evening. During these journal entries, it would ask the user about their mood at the moment, generate an appropriate response based on their mood, and then ask questions that get the user to think about topics such as gratitude, their plans for the day, and what advice they would give themselves. Our questions follow many of the common journalling practices. The second journal entry then follows a similar format of mood and questions, with a different set of questions to finish off the user's day. These help them reflect and look forward to the upcoming future. Our most powerful feature would be the AI that takes data such as emotions and keywords from answers and helps users generate journal summaries across weeks, months, and years. These summaries would then provide actionable steps the user could take to make self-improvements. ## 🔧 How we built it ### Product & UX * Online research, user interviews, looked at stakeholders, competitors, infinity mapping, and user flows. * Doing the research allowed our group to have a unified understanding for the app. ### 👩‍💻 Frontend * Used React.JS to design the website * Used Figma for prototyping the website ### 🔚 Backend * Flask, CockroachDB, and Cohere for ChatAI function. ## 💪 Challenges we ran into The challenge we ran into was the time limit. For this project, we invested most of our time in understanding the pain point in a very sensitive topic such as mental health and psychology. We truly wanted to identify and solve a meaningful challenge, so we had to sacrifice some portions of the project, such as front-end code implementation. Some team members were also working with the developers for the first time, and it was a good learning experience for everyone to see how different roles come together and how we could improve for next time. ## 🙌 Accomplishments that we're proud of Jenny, our team designer, did tons of research on the problem space, such as competitive analysis, research on similar products, and user interviews. We produced a high-fidelity prototype and were able to show the feasibility of the technology we built for this project. (Jenny: I am also very proud of everyone else who had the patience to listen to my views as a designer and be open-minded about what a final solution may look like. I think I'm very proud that we were able to build a good team together although the experience was relatively short over the weekend. I had personally never met the other two team members and the way we were able to have a vision together is something I think we should be proud of.) ## 📚 What we learned We learned that preparing some plans ahead of time would make it easier for developers and designers to get started next time.
However, the experience of starting from nothing and making a full project over 2 and a half days was great for learning. We learned a lot about how we think and approach work not only as developers and designer, but as team members. ## 💭 What's next for budEjournal Next, we would like to test out budEjournal on some real users and make adjustments based on our findings. We would also like to spend more time to build out the front-end.
# We'd love if you read through this in its entirety, but we suggest reading "What it does" if you're limited on time ## The Boring Stuff (Intro) * Christina Zhao - 1st-time hacker - aka "Is cucumber a fruit" * Peng Lu - 2nd-time hacker - aka "Why is this not working!!" x 30 * Matthew Yang - ML specialist - aka "What is an API" ## What it does It's a cross-platform app that can promote mental health and healthier eating habits! * Log when you eat healthy food. * Feed your "munch buddies" and level them up! * Learn about the different types of nutrients, what they do, and which foods contain them. Since we are not very experienced at full-stack development, we just wanted to have fun and learn some new things. However, we feel that our project idea really ended up being a perfect fit for a few challenges, including the Otsuka Valuenex challenge! Specifically, > > Many of us underestimate how important eating and mental health are to our overall wellness. > > > That's why we we made this app! After doing some research on the compounding relationship between eating, mental health, and wellness, we were quite shocked by the overwhelming amount of evidence and studies detailing the negative consequences.. > > We will be judging for the best **mental wellness solution** that incorporates **food in a digital manner.** Projects will be judged on their ability to make **proactive stress management solutions to users.** > > > Our app has a two-pronged approach—it addresses mental wellness through both healthy eating, and through having fun and stress relief! Additionally, not only is eating healthy a great method of proactive stress management, but another key aspect of being proactive is making your de-stressing activites part of your daily routine. I think this app would really do a great job of that! Additionally, we also focused really hard on accessibility and ease-of-use. Whether you're on android, iphone, or a computer, it only takes a few seconds to track your healthy eating and play with some cute animals ;) ## How we built it The front-end is react-native, and the back-end is FastAPI (Python). Aside from our individual talents, I think we did a really great job of working together. We employed pair-programming strategies to great success, since each of us has our own individual strengths and weaknesses. ## Challenges we ran into Most of us have minimal experience with full-stack development. If you look at my LinkedIn (this is Matt), all of my CS knowledge is concentrated in machine learning! There were so many random errors with just setting up the back-end server and learning how to make API endpoints, as well as writing boilerplate JS from scratch. But that's what made this project so fun. We all tried to learn something we're not that great at, and luckily we were able to get past the initial bumps. ## Accomplishments that we're proud of As I'm typing this in the final hour, in retrospect, it really is an awesome experience getting to pull an all-nighter hacking. It makes us wish that we attended more hackathons during college. Above all, it was awesome that we got to create something meaningful (at least, to us). ## What we learned We all learned a lot about full-stack development (React Native + FastAPI). Getting to finish the project for once has also taught us that we shouldn't give up so easily at hackathons :) I also learned that the power of midnight doordash credits is akin to magic. ## What's next for Munch Buddies! 
We have so many cool ideas that we just didn't have the technical chops to implement in time * customizing your munch buddies! * advanced data analysis on your food history (data science is my specialty) * exporting your munch buddies and stats! However, I'd also like to emphasize that any further work on the app should be done WITHOUT losing sight of the original goal. Munch buddies is supposed to be a fun way to promote healthy eating and wellbeing. Some other apps have gone down the path of too much gamification / social features, which can lead to negativity and toxic competitiveness. ## Final Remark One of our favorite parts about making this project, is that we all feel that it is something that we would (and will) actually use in our day-to-day!
## Inspiration We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool. ## What it does AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures. The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch. ## How we built it In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between the body parts to map to a dance move. Although successful with a few gestures, more complex gestures like the "shoot" were not ideal for this method. We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model works with 98% accuracy on the test data set. We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music and the superhero overlay, which was done with the use of dlib and OpenCV to detect facial features and map a static image over these features. ## Challenges we ran into We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time. It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in. ## Accomplishments that we're proud of It was one of our first experiences training an ML model for image recognition and it's a lot more accurate than we had even expected. ## What we learned All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new! ## What's next for AirTunes The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
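The write-up doesn't describe the network itself, so the following is only a hedged Keras sketch of a 10-class dance-move classifier along the lines described; the input size, preprocessing, and layer sizes are assumptions rather than the team's actual architecture.

```python
# Hedged sketch of a 10-class dance-move classifier in Keras (input size and
# layer sizes are assumptions; the team's actual network may differ).
import tensorflow as tf

NUM_MOVES = 10

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),   # e.g. thresholded contour frames
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_MOVES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_frames, train_labels, epochs=10)  # ~1000 samples per gesture
```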
## Inspiration We wanted to create a proof-of-concept for a potentially useful device that could be used commercially and at a large scale. We ultimately designed to focus on the agricultural industry as we feel that there's a lot of innovation possible in this space. ## What it does The PowerPlant uses sensors to detect whether a plant is receiving enough water. If it's not, then it sends a signal to water the plant. While our proof of concept doesn't actually receive the signal to pour water (we quite like having working laptops), it would be extremely easy to enable this feature. All data detected by the sensor is sent to a webserver, where users can view the current and historical data from the sensors. The user is also told whether the plant is currently being automatically watered. ## How I built it The hardware is built on an Arduino 101, with dampness detectors being used to detect the state of the soil. We run custom scripts on the Arduino to display basic info on an LCD screen. Data is sent to the websever via a program called Gobetwino, and our JavaScript frontend reads this data and displays it to the user. ## Challenges I ran into After choosing our hardware, we discovered that MLH didn't have an adapter to connect it to a network. This meant we had to work around this issue by writing text files directly to the server using Gobetwino. This was an imperfect solution that caused some other problems, but it worked well enough to make a demoable product. We also had quite a lot of problems with Chart.js. There's some undocumented quirks to it that we had to deal with - for example, data isn't plotted on the chart unless a label for it is set. ## Accomplishments that I'm proud of For most of us, this was the first time we'd ever created a hardware hack (and competed in a hackathon in general), so managing to create something demoable is amazing. One of our team members even managed to learn the basics of web development from scratch. ## What I learned As a team we learned a lot this weekend - everything from how to make hardware communicate with software, the basics of developing with Arduino and how to use the Charts.js library. Two of our team member's first language isn't English, so managing to achieve this is incredible. ## What's next for PowerPlant We think that the technology used in this prototype could have great real world applications. It's almost certainly possible to build a more stable self-contained unit that could be used commercially.
## Inspiration At this stage of our lives, a lot of students haven’t yet developed the financial discipline to save money and tend to be wasteful with their spending. With this app, we hope to design an interface that focuses on minimalism. The app is easy to use and provides users with a visual breakdown of where their money is going to and from. This gives users a better idea of what their day-to-day spending habits look like and help them develop the necessary money saving skills that would be beneficial in the future. ## What it does BreadBook enables users to input their expenses and income and categorize them chronologically from daily, monthly, weekly, to yearly perspectives. BreadBook also helps you visualize these finances across different time periods and assists you in budgeting properly throughout them. ## How we built it This project was built using a simple web stack of Angular, Node.js and various Node libraries and packages. The back-end of the server is a simple REST api running on a Node.js express server that handles requests and allows the transmitting of data to the front-end. Our front-end was built using Angular and a few vfx packages such as chart.js. ## Accomplishments that we're proud of Being able to implement various libraries of Angular and Node greatly helped us better understand our weaknesses and strengths as team members, and expanded our knowledge greatly regarding these technologies. Implementing chart.js to graphically show our data was a huge achievement given our limited experience with Angular modules. ## What we learned Throughout the two day development process of our application, we all gained experience in using angular and what it allowed us to do in the creation of our web application. As a result, we all definitely became more comfortable with this framework, along with web development overall. Our team decided to focus on the app functionalities right off the bat, as we all saw the potential and usefulness in our project idea and believed it should be our primary focus in the app’s development. As things progressed, we began to implement a cleaner UI and presentation aspect of the app as well, which was an entirely different realm of development. As a result, we all developed a better understanding of what to prioritize in the process of development as time is limited, as well as the importance in deciding whether or not to implement certain ideas based on their effort, required work and value to the project. Finally one of the greatest parts about our participation in this event and being part of this project is the collaboration aspect. We can definitely all say we had an amazing experience from simply getting together, being creative and working in a group. This is especially different to us, as during this event, we created this project not as a school requirement, but through our own interests. It is when we work on projects like this that we are reminded of why we enjoy programming and the process of developing our ideas into something we can all use. ## What's next for BreadBook The current state of BreadBook tracks all the day-to-day and recurring purchases that the user has made throughout daily, monthly or annual time periods. In the future, we would like to implement ways to identify or cut out unneeded speeding. We would give estimates on how much money could be saved daily/monthly/annually if this spending was reduced. 
We would also like to add a monthly spending plan that would allow you to allocate different amounts of money to different spending categories. When the spending limit of one or more of these categories is approached, the user would receive a warning so they realize they are near their limit.
Welcome to our demo video for our hack “Retro Readers”. This is a game created by our two-man team, including myself, Shakir Alam, and my friend Jacob Cardoso. We are both heading into our senior year at Dr. Frank J. Hayden Secondary School and enjoyed participating in our first hackathon ever, Hack The 6ix, a tremendous amount. We spent over a week brainstorming ideas for our first hackathon project and because we are both very comfortable with the idea of making, programming and designing with pygame, we decided to take it to the next level using modules that work with APIs and complex arrays. Retro Readers was inspired by a social media post pertaining to a text font that was proven to help mitigate reading errors made by dyslexic readers. Jacob found OpenDyslexic, which is an open-source text font that does exactly that. The game consists of two overall gamemodes. These gamemodes are aimed mainly at children, especially young children with dyslexia, who are working to become better readers. We know that reading books is becoming less popular among the younger generation, so we decided to incentivize readers by providing them with a satisfying retro-style arcade reading game. The first gamemode is a read-and-research style gamemode where the reader or player can press a key on their keyboard, which leads to a Python module calling a database of semi-sorted words from the Wordnik API. The game then displays the word back to the reader and reads it aloud using a TTS module. As for the second gamemode, we decided to incorporate a point system. Using the points, players can purchase unique customizables and visual modifications such as characters and backgrounds. This provides a little dopamine rush for the players for participating in a tougher gamemode. The gamemode itself is a spelling-type game where a random word is selected using the same Python modules and API. Then a TTS module reads the selected word out loud for readers. The reader must then correctly spell the word, without seeing it, to earn 5 points. The task we found the most challenging was working with APIs, as a lot of them were not deemed fit for our game. We had to scratch a few APIs off the list for incompatibility reasons. A few of these APIs include: Oxford Dictionary, WordsAPI and more. Overall we found the game to be challenging in all the right places and we are highly satisfied with our final product. As for the future, we’d like to implement more reliable APIs and as for future hackathons (this being our first) we’d like to spend more time researching viable APIs for our project. And as far as business practicality goes, we see it as feasible to sell our game at a low price, including ads and/or paid cosmetics. We’d like to give a special shoutout to our friend Simon Orr for allowing us to use two original music pieces for our game. Thank you for your time and thank you for this amazing opportunity.
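For a sense of how the spelling gamemode could be wired together in Python, here is a minimal sketch. The Wordnik randomWord endpoint, the API key placeholder, and the pyttsx3 TTS engine are assumptions for illustration, not necessarily the team's exact modules:

```python
import requests
import pyttsx3

WORDNIK_RANDOM_WORD = "https://api.wordnik.com/v4/words.json/randomWord"  # assumed endpoint
API_KEY = "YOUR_WORDNIK_KEY"  # placeholder

def fetch_random_word():
    # Ask Wordnik for a single random word; fall back to a built-in word on failure.
    try:
        resp = requests.get(WORDNIK_RANDOM_WORD, params={"api_key": API_KEY}, timeout=5)
        resp.raise_for_status()
        return resp.json()["word"]
    except Exception:
        return "reader"

def read_aloud(word):
    # Speak the word with an offline TTS engine so the player hears it before spelling it.
    engine = pyttsx3.init()
    engine.say(word)
    engine.runAndWait()

if __name__ == "__main__":
    word = fetch_random_word()
    read_aloud(word)
    guess = input("Spell the word you just heard: ").strip().lower()
    print("Correct! +5 points" if guess == word.lower() else f"Not quite - it was '{word}'")
```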
## Inspiration When coming up with the idea for our hack, we realized that as engineering students, and specifically first-year students, we all had one big common problem... time management. We all somehow manage to run out of time and procrastinate our work, because it's hard to find motivation to get tasks done. Our solution to this problem is an app that would let you make to-do lists, but with a twist. ## What it does The app will allow users to make to-do lists, but each task is assigned a number of points you can receive on completion. Earning points allows you to climb leaderboards, unlock character accessories, and most importantly, unlock new levels of a built-in game. The levels of the built-in game are not too long to complete, so as not to take away too much studying time, but they act as a reward system for people who love gaming. It also has a feature where you can take pictures of your tasks as you're completing them, which you can share with friends also on the app, or archive for yourself to see later. The app includes a Pomodoro timer to promote studying, and a forum page where you are able to discuss various educational topics with other users to further enhance your learning experience on this app. ## How we built it Our prototype was built on HTML using a very basic outline. Ideally, if we were to go further with this app, we would use a framework such as Django or Flask to add a lot more features than this first prototype has. ## Challenges we ran into We are *beginners*!! This was a first hackathon for almost all of us, and we all had very limited coding knowledge previously, so we spent a lot of time learning new applications and skills, and didn't get much time to actually build our app. ## Accomplishments that we're proud of Learning new applications! We went through many different applications over the past 24 hours before landing on HTML to make our app with. We looked into Django, Flask, and Pygame, before deciding on HTML, so we gained some experience with these as well. ## What we learned We learned a lot over the weekend from various workshops, and hands-on personal experience. A big thing we learned is the multiple components that go into web development and how complicated it can get. This was a great insight into the world of real coding, and the application of coding, that is sure to stick with us, and keep us motivated to keep teaching ourselves new things! ## What's next for Your Future Hopefully, in the future we're able to further develop Your Future to make it complete, and make it run the way we hope. This will involve a lot of time and dedication to learning new skills for us, but we hope to take that time and put in the effort to learn those skills!
## Inspiration Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book. Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars! Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible. ## What it does Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud. ## How we built it **Software (Alex K)** - On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS. The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words. Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation. **Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses. ## Challenges we ran into The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging! Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time. Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length. 
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together. ## Acknowledgements The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
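The unigram language model mentioned above can be sketched very simply: score each candidate spelling of an OCR token by its corpus frequency and keep the most common one. The corpus file and the way candidates are generated (e.g. from edit-distance neighbours of the OCR output) are assumptions here, not the project's actual pipeline:

```python
from collections import Counter
import re

def build_unigram_model(corpus_path):
    # Count word frequencies from any large English text file (the path is a placeholder).
    words = re.findall(r"[a-z']+", open(corpus_path, encoding="utf-8").read().lower())
    return Counter(words)

def correct_token(token, candidates, unigrams):
    # Prefer the candidate the language model has seen most often;
    # keep the raw OCR output if no candidate is known at all.
    scored = [(unigrams.get(c.lower(), 0), c) for c in candidates]
    best_count, best = max(scored)
    return best if best_count > 0 else token

# e.g. OCR emits "Intreduction" along with a couple of near-miss alternatives
unigrams = build_unigram_model("big.txt")  # hypothetical corpus file
print(correct_token("Intreduction", ["Intreduction", "Introduction", "Intraduction"], unigrams))
```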
## Inspiration People's criteria for connecting with others online have up until now been predominantly superficial. We wanted to create something that proves that human connections are initiated by a lot more than just looks. We are bringing that idea into the spotlight by taking advantage of the users' birth dates. We imagine a situation where the zodiac world becomes a revolutionary idea for our modern social media reality. ## What it does It is a chatting app that hides the chatters' personal information until after a few minutes of texting. If they want to know more about each other, they can explore each other's personal information step by step and potentially get closer. The unique part of the app is that we aspire to match people using an algorithm based on their zodiac signs. ## How we built it The app is built on React Native; we use 'react-native-router-flux' and 'react-native-gifted-chat' to implement the UI, and JavaScript to build the zodiac matching algorithm. ## Challenges we ran into We were all unfamiliar with app making, and we decided to build an Android app from scratch. It took a lot of time until all 4 of us got the hang of React Native. The connection between different components, especially, gave us a really hard time. We kept on going, though, because the entire process was extremely stimulating in terms of learning. ## Accomplishments that we're proud of Within 36 hours we learned how to build a chat-room environment without any prior knowledge. ## What we learned We gained familiarity with JavaScript and React Native and very passionately experienced the "never give up" spirit. We also feel like we've built uniquely valuable teamwork skills. It is one thing to learn something on your own, and it is another to find other people (who you had never met before this hackathon) who are equally passionate about learning and do it all together. We definitely believe that these skills will follow us on our academic and professional paths. ## What's next for Who We are planning to: 1) Add a time-limit function 2) Make the UI better 3) Considerably improve the matching algorithm, since the one presented is a very simplified version 4) Make the app interactive between two people. 5) Add fancy features to people's profiles that will allow even more interaction, such as first and last impressions. 6) Advertise the zodiac sign aspect of the app in a way that sounds fun, enjoyable and surprisingly accurate.
## Inspiration During these trying times, the pandemic impacted many people by isolating them in their homes. People are not able to socialize like they used to and find people they can relate with. For example, students who are transitioning to college or a new school where they don’t know anyone. Matcher aims to improve students' mental health by matching them with people who share similar interests and allowing them to communicate. Overall, its goal is to connect people across the world. ## What it does The user first logs in and answers a series of comprehensive, research-backed questions (AI-determined questions) to determine his/her personality type. Then, we use machine learning to match people and connect them. Users can email each other after they are matched! Our custom machine learning pipeline used the K-Means algorithm and Random Forest to study people's personalities. ## How we built it We used React on the front end, Firebase for authentication and storage, and Python for the server and machine learning. ## Challenges we ran into We all faced unique challenges, but losing one member midway really dampened our spirits and limited our potential. * Gordon: I was new to Firebase and I didn’t follow the right program flow in the first half of the hackathon. * Lucia: The challenge I ran into was trying to figure out how to properly route the web pages together on React. Also, how to integrate the Firebase database on the front end, since I had never used it before. * Anindya: Time management. ## Accomplishments that we're proud of We are proud that we were able to persevere after losing a member and still managed to achieve a lot. We are also proud that we showed resiliency when we realized that we messed up our program flow midway and had to start over from the beginning. We are happy that we learned and implemented new technologies that we had never used before. Our hard work and perseverance resulted in an app that is useful and will make an impact on people's lives! ## What we learned We believe that what doesn't kill you makes you stronger. * Gordon: After chatting with mentors, I learned about SWE practices, Firebase flow, and Flask. I also learned to handle setbacks and failure after losing 10 hours of work. * Lucia: I learned about Firebase and how to integrate it into a React front end. I also learned more about how to use React Hooks! * Anindya: I learned how to study unique properties of data using unsupervised learning methods. I also learned how to integrate Firebase with Python. ## What's next for Matcher We would like to finish our web app by completing our integration of the Firebase Realtime Database. We plan to add social networking features such as messaging and video chat, which allow users to communicate with each other on the web app. This will allow them to discuss their interests with one another right on our site! We would also like to make this project accessible on multiple platforms, such as mobile.
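A minimal sketch of the K-Means part of the matching idea, using scikit-learn on made-up questionnaire answers (the real pipeline also used Random Forest and ran behind the Python server, so this is only an illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row: one user's numeric answers to the personality questions (1-5 scale assumed).
answers = np.array([
    [5, 1, 4, 2, 3],
    [4, 2, 5, 1, 3],
    [1, 5, 2, 4, 4],
    [2, 4, 1, 5, 5],
])
users = ["alice", "bob", "carol", "dave"]

# Group users into personality clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(answers)

def match(user_idx):
    # Suggest the closest other user in the same cluster (Euclidean distance on answers).
    same = [i for i, lbl in enumerate(kmeans.labels_)
            if lbl == kmeans.labels_[user_idx] and i != user_idx]
    if not same:
        return None
    dists = [np.linalg.norm(answers[user_idx] - answers[i]) for i in same]
    return users[same[int(np.argmin(dists))]]

print(match(0))  # likely "bob", whose answers are closest to alice's
```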
## Inspiration Databases are wonderfully engineered for specific tasks. Every time someone wants to add a different type of data or use their data with a different access pattern, they need to either use a sub-optimal choice of database (one that they already support) or support a totally new database. The former damages performance, while the latter is extremely costly in both complexity and engineering effort. For example, Druid on 100GB of time-series data is about 100x faster than MySQL, but it's slower on other types of data. ## What it does We set up a simple database auto-selector that makes the decision of whether to use Druid or MySQL. We set up a metaschema for data -- thus we can accept queries and then direct them to the database containing the relevant data. Our core technical contributions are a tool that assigns data to the appropriate database based on the input data and a high-level schema for incoming data. We demonstrated our approach by building a web app, StockSolver, that shows these trade-offs and the advantages of using 1DB for database selection. It has both time-series data and text data. Using our metaschema, 1DB can easily mix and match data between Druid and MongoDB. 1DB finds that the time-series data should be stored on Druid, while MongoDB should store the text data. We show the results of making these decisions in our demo! ## How we built it We created a web app for NASDAQ financial data. We used React and Node.js to build our website. We set up MongoDB on Microsoft's Cosmos DB and Druid on the Google Cloud Platform. ## Challenges we ran into It was challenging just to set up each of these databases and load large amounts of data onto them. It was even more challenging to try to load data and build queries that the database was not necessarily made for, in order to make clear comparisons between the performance of the databases in different use-cases. Building the queries to back the metaschema was also quite challenging. ## Accomplishments that we're proud of Building an end-to-end system from databases to 1DB to our data visualizations. ## What we learned We collectively had relatively little database experience, and thus we learned how to better work with different databases. ## What's next for 1DB: One Database to rule them all We would like to support more databases and to experiment with using more complex heuristics to select among databases. An extension that follows naturally from our work is to have 1DB track query usage statistics and, over time, make the decision to select among supported databases. The extra level of indirection makes these switches natural and can potentially be automated.
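A toy version of the kind of routing heuristic 1DB could apply on top of its metaschema; the field names and rules below are illustrative assumptions, not the team's actual implementation:

```python
def choose_database(schema):
    """Rough routing heuristic: append-only time-series data goes to Druid,
    free-text-heavy data goes to MongoDB, everything else to MySQL.
    The schema fields used here are assumptions, not 1DB's real metaschema."""
    col_types = set(schema["column_types"])
    if "timestamp" in col_types and schema.get("append_only", False):
        return "druid"
    if "text" in col_types:
        return "mongodb"
    return "mysql"

stock_prices = {"column_types": ["timestamp", "float", "symbol"], "append_only": True}
news_blurbs = {"column_types": ["text", "symbol"], "append_only": False}
print(choose_database(stock_prices))  # druid
print(choose_database(news_blurbs))   # mongodb
```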
## Inspiration There is a growing number of people sharing gardens in Montreal. As a lot of people share apartment buildings, it is often more convenient for them to share gardens than to have their own. ## What it does With that in mind, we decided to create a smart garden platform that is meant to make sharing gardens as fast, intuitive, and community-friendly as possible. ## How I built it We use a plethora of sensors that are connected to a Raspberry Pi. Sensors range from temperature to light sensitivity, with one sensor even detecting humidity levels. Through this, we're able to collect data from the sensors and post it on a Google Sheet, using the Google Drive API. Once the data is posted on the Google Sheet, we use a Python script to retrieve the 3 latest values and take an average of those values. This allows us to detect a change and send a flag to other parts of our algorithm. For the user, it is very simple. They simply have to text a number dedicated to a certain garden. This will allow them to create an account and to receive alerts if a plant needs attention. This part is done through the Twilio API and Python scripts that are triggered when the user sends an SMS to the dedicated cell-phone number. We even thought about implementing credit and verification systems that allow active users to gain points over time. These points are earned once the user decides to take action in the garden after receiving a notification via the Twilio API. The points can be redeemed through the app via Interac transfer or by simply keeping the plant once it is fully grown. In order to verify that the user actually takes action in the garden, we use visual recognition software that runs on the Azure API. Through a very simple system of QR codes, the user can scan their QR code to verify their identity.
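The averaging-and-alert step described above might look roughly like this in Python; the dryness threshold and phone numbers are placeholders, and pulling the readings from the Google Sheet is assumed to have happened already:

```python
from twilio.rest import Client

DRYNESS_THRESHOLD = 300                            # assumed sensor units
account_sid, auth_token = "ACXXXXXXXX", "your_token"   # placeholder credentials
client = Client(account_sid, auth_token)

def latest_average(readings, n=3):
    # Average the last n values pulled from the Google Sheet to smooth out noise.
    recent = readings[-n:]
    return sum(recent) / len(recent)

def check_and_alert(readings, member_number):
    avg = latest_average(readings)
    if avg < DRYNESS_THRESHOLD:
        # Flag a garden member; they earn credits if they respond and water the plant.
        client.messages.create(
            body=f"Your shared plant looks thirsty (moisture avg {avg:.0f}). Can you water it?",
            from_="+15550001111",   # the garden's dedicated Twilio number (placeholder)
            to=member_number,
        )

check_and_alert([420, 380, 310, 290, 280], "+15552223333")
```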
## Inspiration **75% of adults over the age of 50** take prescription medication on a regular basis. Of these people, **over half** do not take their medication as prescribed - either taking them too early (causing toxic effects) or taking them too late (non-therapeutic). This type of medication non-adherence causes adverse drug reactions which is costing the Canadian government over **$8 billion** in hospitalization fees every year. Further, the current process of prescription between physicians and patients is extremely time-consuming and lacks transparency and accountability. There's a huge opportunity for a product to help facilitate the **medication adherence and refill process** between these two parties to not only reduce the effects of non-adherence but also to help save tremendous amounts of tax-paying dollars. ## What it does **EZPill** is a platform that consists of a **web application** (for physicians) and a **mobile app** (for patients). Doctors first create a prescription in the web app by filling in information including the medication name and indications such as dosage quantity, dosage timing, total quantity, etc. This prescription generates a unique prescription ID and is translated into a QR code that practitioners can print and attach to their physical prescriptions. The patient then has two choices: 1) to either create an account on **EZPill** and scan the QR code (which automatically loads all prescription data to their account and connects with the web app), or 2) choose to not use EZPill (prescription will not be tied to the patient). This choice of data assignment method not only provides a mechanism for easy onboarding to **EZPill**, but makes sure that the privacy of the patients’ data is not compromised by not tying the prescription data to any patient **UNTIL** the patient consents by scanning the QR code and agreeing to the terms and conditions. Once the patient has signed up, the mobile app acts as a simple **tracking tool** while the medicines are consumed, but also serves as a quick **communication tool** to quickly reach physicians to either request a refill or to schedule the next check-up once all the medication has been consumed. ## How we built it We split our team into 4 roles: API, Mobile, Web, and UI/UX Design. * **API**: A Golang Web Server on an Alpine Linux Docker image. The Docker image is built from a laptop and pushed to DockerHub; our **Azure App Service** deployment can then pull it and update the deployment. This process was automated with use of Makefiles and the **Azure** (az) **CLI** (Command Line Interface). The db implementation is a wrapper around MongoDB (**Azure CosmosDB**). * **Mobile Client**: A client targeted exclusively at patients, written in swift for iOS. * **Web Client**: A client targeted exclusively at healthcare providers, written in HTML & JavaScript. The Web Client is also hosted on **Azure**. * **UI/UX Design**: Userflow was first mapped with the entire team's input. The wireframes were then created using Adobe XD in parallel with development, and the icons were vectorized using Gravit Designer to build a custom assets inventory. 
## Challenges we ran into * Using AJAX to build dynamically rendering websites ## Accomplishments that we're proud of * Built an efficient privacy-conscious QR sign-up flow * Wrote a custom MongoDB driver in Go to use Azure's CosmosDB * Recognized the needs of our two customers and tailored the delivery of the platform to their needs ## What we learned * We learned the concept of "Collections" and "Documents" in the Mongo(NoSQL)DB ## What's next for EZPill There are a few startups in Toronto (such as MedMe, Livi, etc.) that are trying to solve this same problem through a pure hardware solution using a physical pill dispenser. We hope to **collaborate** with them by providing the software solution in addition to their hardware solution to create a more **complete product**.
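The real EZPill backend is written in Go, but the QR-generation step is easy to illustrate; this Python sketch (using the qrcode package, with an invented payload format) shows how a prescription ID can be minted and embedded without tying it to any patient until they scan and consent:

```python
import json
import uuid
import qrcode

def create_prescription_qr(medication, dosage, total_quantity):
    # Generate a unique prescription ID; the QR carries only the ID and basic indications,
    # so no patient identity is attached until the patient scans it and consents in the app.
    prescription_id = str(uuid.uuid4())
    payload = json.dumps({
        "prescription_id": prescription_id,
        "medication": medication,
        "dosage": dosage,
        "total_quantity": total_quantity,
    })
    img = qrcode.make(payload)
    img.save(f"prescription_{prescription_id}.png")  # printed and attached to the paper script
    return prescription_id

create_prescription_qr("Amoxicillin 500mg", "1 capsule every 8 hours", 21)
```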
# CourseAI: AI-Powered Personalized Learning Paths ## Inspiration CourseAI was born from the challenges of self-directed learning in our information-rich world. We recognized that the issue isn't a lack of resources, but rather how to effectively navigate and utilize them. This inspired us to leverage AI to create personalized learning experiences, making quality education accessible to everyone. ## What it does CourseAI is an innovative platform that creates personalized course schedules on any topic, tailored to the user's time frame and desired depth of study. Users input what they want to learn, their available time, and preferred level of complexity. Our AI then curates the best online resources into a structured, adaptable learning path. Key features include: * AI-driven content curation from across the web * Personalized scheduling based on user preferences * Interactive course customization through an intuitive button-based interface * Multi-format content integration (articles, videos, interactive exercises) * Progress tracking with checkboxes for completed topics * Adaptive learning paths that evolve based on user progress ## How we built it We developed CourseAI using a modern, scalable tech stack: * Frontend: React.js for a responsive and interactive user interface * Backend Server: Node.js to handle API requests and serve the frontend * AI Model Backend: Python for its robust machine learning libraries and natural language processing capabilities * Database: MongoDB for flexible, document-based storage of user data and course structures * APIs: Integration with various educational content providers and web scraping for resource curation The AI model uses advanced NLP techniques to curate relevant content, and generate optimized learning schedules. We implemented machine learning algorithms for content quality assessment and personalized recommendations. ## Challenges we ran into 1. API Cost Management: Optimizing API usage for content curation while maintaining cost-effectiveness. 2. Complex Scheduling Logic: Creating nested schedules that accommodate various learning styles and content types. 3. Integration Complexity: Seamlessly integrating diverse content types into a cohesive learning experience. 4. Resource Scoring: Developing an effective system to evaluate and rank educational resources. 5. User Interface Design: Creating an intuitive, button-based interface for course customization that balances simplicity with functionality. ## Accomplishments that we're proud of 1. High Accuracy: Achieving a 95+% accuracy rate in content relevance and schedule optimization. 2. Elegant User Experience: Designing a clean, intuitive interface with easy-to-use buttons for course customization. 3. Premium Content Curation: Consistently sourcing high-quality learning materials through our AI. 4. Scalable Architecture: Building a robust system capable of handling a growing user base and expanding content library. 5. Adaptive Learning: Implementing a flexible system that allows users to easily modify their learning path as they progress. 
## What we learned This project provided valuable insights into: * The intricacies of AI-driven content curation and scheduling * Balancing user preferences with optimal learning strategies * The importance of UX design in educational technology * Challenges in integrating diverse content types into a cohesive learning experience * The complexities of building adaptive learning systems * The value of user-friendly interfaces in promoting engagement and learning efficiency ## What's next for CourseAI Our future plans include: 1. NFT Certification: Implementing blockchain-based certificates for completed courses. 2. Adaptive Scheduling: Developing a system for managing backlogs and automatically adjusting schedules when users miss sessions. 3. Enterprise Solutions: Creating a customizable version of CourseAI for company-specific training. 4. Advanced Personalization: Implementing more sophisticated AI models for further personalization of learning paths. 5. Mobile App Development: Creating native mobile apps for iOS and Android. 6. Gamification: Introducing game-like elements to increase motivation and engagement. 7. Peer Learning Features: Developing functionality for users to connect with others studying similar topics. With these enhancements, we aim to make CourseAI the go-to platform for personalized, AI-driven learning experiences, revolutionizing education and personal growth.
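As a very rough illustration of the scheduling idea, and not CourseAI's actual algorithm (which also weighs resource quality scores and depth preferences), curated topics can be spread across the learner's time frame like this:

```python
from datetime import date, timedelta

def build_schedule(topics, start, total_days, minutes_per_day):
    """Spread curated topics evenly across the learner's time frame.
    A stand-in for the real scheduler, kept deliberately simple."""
    days_per_topic = max(1, total_days // len(topics))
    schedule, day = [], start
    for topic in topics:
        schedule.append({
            "topic": topic,
            "start": day.isoformat(),
            "end": (day + timedelta(days=days_per_topic - 1)).isoformat(),
            "minutes_per_day": minutes_per_day,
        })
        day += timedelta(days=days_per_topic)
    return schedule

for block in build_schedule(["Python basics", "NumPy", "Pandas", "Matplotlib"],
                            date(2024, 10, 1), 28, 45):
    print(block)
```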
## Members Keith Khadar, Alexander Salinas, Eli Campos, Gabriella Conde ## Inspiration Last year, Keith had a freshman roommate named Cayson who had played high school football until a knee injury sidelined him. While his condition improved in college—allowing him to walk, he couldn’t run. Keith remembered how he often had to make a 30-minute walk to his physical therapist. It was through witnessing his struggle and through Keiths experience working on medical devices in Dream Team Engineering, a Club at the University of Florida dedicated to improving patient care, and our curiosity to work on real world problems with AI, that we began to think about this issue. ## What it does Our device tracks an injured athlete's movements and provides personalized advice, comparable to that of a world-class physical therapist, ensuring the patient recovers effectively and safely. Our device helps users perform physical therapy exercises at home safely while AI analyzes their movements to ensure they operate within their expected range of motion and effort values. ## How we built it Using web technologies (Angular, Python, Tune) and microcontrollers (Flex-sensor + ESP32) to track, give insights, and show improvement over time. ## Challenges we ran into Bluetooth Implementation: Establishing a reliable and efficient Bluetooth connection between the microcontroller and our web application proved more complex than anticipated. Sleeve Assembly: Designing and constructing a comfortable, functional sleeve that accurately houses our sensors while maintaining flexibility was a delicate balance. Data Interpretation: Translating raw sensor data into meaningful, actionable insights for users required extensive algorithm development and testing. Cross-platform Compatibility: Ensuring our web application functioned seamlessly across various devices and browsers presented unexpected complications. Specifically browser as well as ## Accomplishments that we're proud of Seamless Bluetooth Integration: We successfully implemented robust Bluetooth communication between our hardware and software components, enabling real-time data transfer. Real Time Digital Signal Processing: Our team developed sophisticated algorithms to analyze data from our sensors, providing a comprehensive view of the user's movements and progress. Intuitive User Interface: We created a user-friendly interface that clearly presents complex data and personalized recommendations in an easily digestible format. Rapid Prototyping: Despite time constraints, we produced a fully functional prototype that demonstrates the core capabilities of our concept. Tune AI Integration: We are proud of our connection to tune ai and using their llama ai model to provide insights into the patients movements. ## What we learned Full-Stack Development: We gained valuable experience in integrating frontend and backend technologies, particularly in using Python for backend operations and Angular for the frontend. Interdisciplinary Collaboration: We learned the importance of effective communication and teamwork when combining expertise from various fields (e.g., software development, hardware engineering, and physical therapy). Real-world Problem Solving: This experience reinforced the value of addressing genuine societal needs through innovative technological solutions. ## What's next for Glucose Enhanced Sensor Array: Integrate additional sensors (e.g., accelerometers, gyroscopes) for more comprehensive movement tracking and analysis. 
Machine Learning Integration: Implement more advanced ML algorithms to improve the personalization and predictive capabilities of our advice engine. Clinical Trials: Conduct rigorous testing with physical therapists and patients to validate and refine our system's effectiveness. Mobile App Development: Create dedicated iOS and Android apps to increase accessibility and user engagement. Expanding Use Cases: Explore applications beyond athletic injuries, such as rehabilitation for stroke patients or elderly care. Another use case is to help correct diagnostic errors: since we will have a lot of data on how the patient moves, we can train machine learning models to analyze that data and confirm or challenge the diagnosis the doctor gave.
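A hedged sketch of how a raw flex-sensor reading from the ESP32 might be turned into a bend angle and checked against a prescribed range of motion; the calibration constants and thresholds are invented for illustration and are not the team's actual values:

```python
# Assumed calibration points for one flex sensor on the knee sleeve:
ADC_STRAIGHT, ADC_FULL_BEND = 1800, 3200   # raw ESP32 ADC readings (placeholders)
ANGLE_FULL_BEND = 135.0                    # degrees at full bend (assumption)

def adc_to_angle(adc_value):
    # Linear interpolation between the two calibration points, clamped to a sane range.
    frac = (adc_value - ADC_STRAIGHT) / (ADC_FULL_BEND - ADC_STRAIGHT)
    return max(0.0, min(ANGLE_FULL_BEND, frac * ANGLE_FULL_BEND))

def check_rep(adc_samples, target_min=20.0, target_max=90.0):
    # Flag reps that fall outside the range of motion prescribed for this exercise.
    peak = max(adc_to_angle(a) for a in adc_samples)
    if peak > target_max:
        return f"Peak bend {peak:.0f} degrees - ease off, that's past your safe range."
    if peak < target_min:
        return f"Peak bend {peak:.0f} degrees - try to bend a little further if it's pain-free."
    return f"Peak bend {peak:.0f} degrees - nice, right in your target range."

print(check_rep([1850, 2100, 2600, 2900, 2400, 1900]))
```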
## Inspiration We have a desire to spread awareness surrounding health issues in modern society. We also love data and the insights it can provide, so we wanted to build an application that made it easy and fun to explore the data that we all create and learn something about being active and healthy. ## What it does Our web application processes data exported by Apple Health and provides visualizations of the data as well as the ability to share data with others and be encouraged to remain healthy. Our educational component uses real-world health data to educate users about the topics surrounding their health. Our application also provides insight into just how much data we all constantly produce. ## How we built it We built the application from the ground up, with a custom data processing pipeline from raw data upload to visualization and sharing. We designed the interface carefully to allow for the greatest impact of the data while still being enjoyable and easy to use. ## Challenges we ran into We had a lot to learn, especially about moving and storing large amounts of data and doing so in a timely and user-friendly manner. Our biggest struggle was handling the daunting task of taking in raw data from Apple Health and storing it in a format that was easy to access and analyze. ## Accomplishments that we're proud of We're proud of the completed product that we arrived at despite early struggles to find the best approach to the challenge at hand. An architecture this complicated, with so many moving components - large data, authentication, user experience design, and security - was above the scope of projects we worked on in the past, especially to complete in under 48 hours. We're proud to have come out with a complete and working product that has value to us and hopefully to others as well. ## What we learned We learned a lot about building large-scale applications and the challenges that come with rapid development. We had to move quickly, making many decisions while still focusing on producing a quality product that would stand the test of time. ## What's next for Open Health Board We plan to expand the scope of our application to incorporate more data insights and educational components. While our platform is built to be entirely mobile-friendly, a native iPhone application is hopefully in the near future to aid in keeping data in sync with minimal work from the user. We plan to continue developing the data sharing and social aspects of the platform to encourage communication around the topic of health and wellness.
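A minimal sketch of the kind of parsing the Apple Health ingestion step involves - the export is a large export.xml made up of Record elements - though the team's actual pipeline is more involved than this:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def daily_step_counts(export_path):
    # Apple Health's export.xml is full of <Record> elements; step counts use the
    # HKQuantityTypeIdentifierStepCount type. iterparse keeps memory use reasonable
    # for the very large files a real upload produces.
    totals = defaultdict(float)
    for _, elem in ET.iterparse(export_path, events=("end",)):
        if elem.tag == "Record" and elem.get("type") == "HKQuantityTypeIdentifierStepCount":
            day = elem.get("startDate", "")[:10]          # "YYYY-MM-DD" prefix
            totals[day] += float(elem.get("value", 0))
        elem.clear()  # free the element once processed
    return dict(totals)

print(daily_step_counts("export.xml"))
```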
## Inspiration All three of are international students who converged on this idea based on a shared experience: our grandmothers, who dreamed of coming to see us graduate, could not make the long trip due to loss of mobility. We have also seen our parents, who are around 60 years old, unable to enjoy the things they used to with us when we were kids: playing basketball, backpacking, skiing, leaving those cherished moments as memories. We realised that even though children born today are expected to live maybe 100+ years, their quality of life in their later years depends on how we take care of our bodies today. We did extensive research into how earlier life experiences affect mobility and realised that this is an uphill battle that starts today. We thus developed an accessible and intuitive approach to tracking and understanding factors that go into your present and future mobility. ## What it does Hercules can see and understand your pain using only your iPhone camera and/or a voice note describing your pain, therefore simplifying understanding pains in the body to even those who aren't health literate. He allows you to log any fleeting, acute, or chronic pain, all while seamlessly tracking your activity using wearable data. He will create a long-term log correlating the two, helping develop an exercise routine based on the user's specific needs and limitations, ensuring that they say safe but active enough to promote long-term mobility health. ## How we built it We first did in-depth research on what affects mobility health in the future and what we can do to alleviate effects of it deteriorating early on. We mapped out user journeys and decided that our target users would be everyone, with an emphasis on those who still have the power to prevent future mobility issues Our product is a react-native app built in JavaScript/Node.js, and utilizes the OpenAI API, namely Whisper for speech-to-text of user voice input, GPT-3.5 Turbo for semantic understanding of that text, GPT-4 Vision for image recognition and reasoning, and OpenAI TTS for text-to-speech for Hercules' voice. We used Figma to mock up the app design while we built the bare-bones user flow, then added the styling. Additionally, although we did not have time to implement this into our app, we worked on developing a "Mobility Score" that uses wearable data to evaluate an individual's mobility, which was inspired by the Oura Ring's "Sleep Score" and "Activity Score" measurements and equations. ## Challenges we ran into We ended up having to design and implement a lot of screens, which had to be done after implementing the image recognition and AI speech software. We also realised that there simply isn't enough data that measures changes in one's mobility long-term, and how acute pains could manifest into more serious conditions with old age. This actually made us more excited about building the product however, as it could allow us to accumulate such data, especially for those historically underrepresented in medical data such as women and minorities. Technically, the main challenge we ran into was that OpenAI's GPT-4 vision model was not recognizing body parts in some cases from the pictures we sent it. We eventually realized that it was due to OpenAI's API automatically cropping our image if it wasn't in the correct aspect ratio, and were able to fix it by implement cropping client-side. 
## Accomplishments that we're proud of We were able to design and implement 80% of our screens that would be required for a fully functional product. We also managed to make Hercules a very helpful and personable AI agent that makes the user feel at ease. We are most proud that we have created a tool for our generation to enjoy the later stages in life more than our older family members ever did. ## What we learned We understand mobility physiology far better, as well as designing not only for older generations but the older generations of the future. We also learned that the biggest causes of the health span - lifespan gap are lack of health literacy and access to personalised care, both of which could now be solved given new technologies such as AI agents, which is what we envision for our product. ## What's next for Hercules We would love to develop personalised fitness programs for those who have very specific pains and injuries but not the means to go to a doctor or physical therapist. We would likely offer these programs at a very low cost and keep them accessible in terms of ease of following them. We also want to introduce educational aspects of monitoring pain and habit-building to ensure users are maintaining the actions needed to preserve their mobility until later in life. Finally, a key indicator of mobility health is nutrition, so we want to incorporate a nutrition journal that's as easy to use as the pain log, using just a picture for Hercules to tell the user about their eating habits as it relates to their long-term mobility health.
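The aspect-ratio fix described above can be illustrated with a small cropping helper; the app itself is React Native, so this Python/PIL version (with a 1:1 target ratio assumed) is only a sketch of the idea:

```python
from PIL import Image

def center_crop_square(path, out_path):
    # Crop to a 1:1 aspect ratio ourselves so a downstream automatic crop can't
    # silently remove the painful body part from the frame. (Target ratio is an assumption.)
    img = Image.open(path)
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img.crop((left, top, left + side, top + side)).save(out_path)
    return out_path

center_crop_square("knee_photo.jpg", "knee_photo_square.jpg")
```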
## Inspiration Every few days, a new video of a belligerent customer refusing to wear a mask goes viral across the internet. On neighborhood platforms such as NextDoor and local Facebook groups, neighbors often recount their sightings of the mask-less minority. When visiting stores today, we must always remain vigilant if we wish to avoid finding ourselves embroiled in a firsthand encounter. With the mask-less on the loose, it’s no wonder that the rest of us have chosen to minimize our time spent outside the sanctuary of our own homes. For anti-maskers, words on a sign are merely suggestions—for they are special and deserve special treatment. But what can’t even the most special of special folks blow past? Locks. Locks are cold and indiscriminate, providing access to only those who pass a test. Normally, this test is a password or a key, but what if instead we tested for respect for the rule of law and order? Maskif.ai does this by requiring masks as the token for entry. ## What it does Maskif.ai allows users to transform old phones into intelligent security cameras. Our app continuously monitors approaching patrons and uses computer vision to detect whether they are wearing masks. When a mask-less person approaches, our system automatically triggers a compatible smart lock. This system requires no human intervention to function, saving employees and business owners the tedious and at times hopeless task of arguing with an anti-masker. Maskif.ai provides reassurance to staff and customers alike with the promise that everyone let inside is willing to abide by safety rules. In doing so, we hope to rebuild community trust and encourage consumer activity among those respectful of the rules. ## How we built it We use Swift to write this iOS application, leveraging AVFoundation to provide recording functionality and Socket.io to deliver data to our backend. Our backend was built using Flask and leveraged Keras to train a mask classifier. ## What's next for Maskif.ai While members of the public are typically discouraged from calling the police about mask-wearing, businesses are typically able to take action against someone causing a disturbance. As an additional deterrent to these people, Maskif.ai can be improved by providing the ability for staff to call the police.
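A minimal sketch of the Keras-side decision the backend has to make for each incoming frame; the model file, input size and threshold are assumptions, not the team's exact classifier:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("mask_classifier.h5")   # hypothetical trained Keras model
MASK_THRESHOLD = 0.5                       # assumed decision threshold

def is_wearing_mask(frame_bgr):
    # Resize and normalize the frame the same way the training data was prepared
    # (224x224 RGB is an assumption about the classifier's input shape).
    face = cv2.cvtColor(cv2.resize(frame_bgr, (224, 224)), cv2.COLOR_BGR2RGB)
    batch = np.expand_dims(face.astype("float32") / 255.0, axis=0)
    return float(model.predict(batch, verbose=0)[0][0]) >= MASK_THRESHOLD

def handle_frame(frame_bgr, unlock_door):
    # Only trigger the compatible smart lock when a mask is detected;
    # otherwise the door simply stays shut, with no human intervention needed.
    if is_wearing_mask(frame_bgr):
        unlock_door()
```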
## Inspiration Recent mass shooting events are indicative of a rising, unfortunate trend in the United States. During a shooting, someone may be killed every 3 seconds on average, while it takes authorities an average of 10 minutes to arrive on a crime scene after a distress call. In addition, cameras and live closed circuit video monitoring are almost ubiquitous now, but are almost always used for post-crime analysis. Why not use them immediately? With the power of Google Cloud and other tools, we can use camera feed to immediately detect weapons real-time, identify a threat, send authorities a pinpointed location, and track the suspect - all in one fell swoop. ## What it does At its core, our intelligent surveillance system takes in a live video feed and constantly watches for any sign of a gun or weapon. Once detected, the system immediately bounds the weapon, identifies the potential suspect with the weapon, and sends the authorities a snapshot of the scene and precise location information. In parallel, the suspect is matched against a database for any additional information that could be provided to the authorities. ## How we built it The core of our project is distributed across the Google Cloud framework and AWS Rekognition. A camera (most commonly a CCTV) presents a live feed to a model, which is constantly looking for anything that looks like a gun using GCP's Vision API. Once detected, we bound the gun and nearby people and identify the shooter through a distance calculation. The backend captures all of this information and sends this to check against a cloud-hosted database of people. Then, our frontend pulls from the identified suspect in the database and presents all necessary information to authorities in a concise dashboard which employs the Maps API. As soon as a gun is drawn, the authorities see the location on a map, the gun holder's current scene, and if available, his background and physical characteristics. Then, AWS Rekognition uses face matching to run the threat against a database to present more detail. ## Challenges we ran into There are some careful nuances to the idea that we had to account for in our project. For one, few models are pre-trained on weapons, so we experimented with training our own model in addition to using the Vision API. Additionally, identifying the weapon holder is a difficult task - sometimes the gun is not necessarily closest to the person holding it. This is offset by the fact that we send a scene snapshot to the authorities, and most gun attacks happen from a distance. Testing is also difficult, considering we do not have access to guns to hold in front of a camera. ## Accomplishments that we're proud of A clever geometry-based algorithm to predict the person holding the gun. Minimized latency when running several processes at once. Clean integration with a database integrating in real-time. ## What we learned It's easy to say we're shooting for MVP, but we need to be careful about managing expectations for what features should be part of the MVP and what features are extraneous. ## What's next for HawkCC As with all machine learning based products, we would train a fresh model on our specific use case. Given the raw amount of CCTV footage out there, this is not a difficult task, but simply a time-consuming one. This would improve accuracy in 2 main respects - cleaner identification of weapons from a slightly top-down view, and better tracking of individuals within the frame. 
SMS alert integration is another feature that we could easily plug into the surveillance system to further improve reaction time.
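The geometry-based holder heuristic can be sketched as picking the person whose bounding-box centre lies closest to the gun's centre; this is an illustration of the idea, not the team's exact calculation:

```python
import math

def center(box):
    # box = (x_min, y_min, x_max, y_max) in pixels
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def likely_holder(gun_box, person_boxes):
    """Return the index of the person whose bounding-box centre is closest to the
    gun's centre - a rough stand-in for a fuller geometry-based heuristic."""
    gx, gy = center(gun_box)
    distances = [math.hypot(gx - cx, gy - cy)
                 for cx, cy in (center(p) for p in person_boxes)]
    return distances.index(min(distances))

gun = (410, 300, 460, 340)
people = [(100, 120, 220, 400), (380, 100, 500, 420), (600, 150, 700, 430)]
print(likely_holder(gun, people))  # 1 -> the middle person is flagged as the suspect
```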
## Inspiration It all started a couple of days ago when my brother told me he'd need over an hour to pick up a few items from a grocery store because of the weekend checkout line. This led to us reaching out to other friends of ours and asking them about the biggest pitfalls of existing shopping systems. We got a whole variety of answers, but the overwhelming response was the time it takes to shop and, more particularly, to check out. This inspired us to ideate and come up with an innovative solution. ## What it does Our app uses computer vision to add items to a customer's bill as they place items in the cart. Similarly, removing an item from the cart automatically subtracts it from the bill. After a customer has completed shopping, they can check out on the app with the tap of a button and walk out of the store. It's that simple! ## How we built it We used React with Ionic for the frontend, and Node.js for the backend. Our main priority was the completion of the computer vision model that detects items being added to and removed from the cart. The model we used is a custom YOLOv3-Tiny model implemented in TensorFlow. We chose TensorFlow so that we could run the model using TensorFlow.js on mobile. ## Challenges we ran into The development phase had its fair share of challenges. Some of these were: * Deep learning models can never have too much data! Scraping enough images to get accurate predictions was a challenge. * Adding our custom classes to the pre-trained YOLOv3-Tiny model. * Coming up with solutions to security concerns. * Last but not least, simulating shopping while quarantining at home. ## Accomplishments that we're proud of We're extremely proud of completing a model that can detect objects in real time, as well as our rapid pace of frontend and backend development. ## What we learned We learned and got hands-on experience with transfer learning. This was always a concept that we knew in theory but had never implemented before. We also learned how to host TensorFlow deep learning models in the cloud, as well as make requests to them. Using the Google Maps API with Ionic React was a fun learning experience too! ## What's next for MoboShop * Integrate with customer shopping lists. * Display ingredients for recipes added by the customer. * Integration with existing security systems. * Provide analytics and shopping trends to retailers, including insights based on previous orders, customer shopping trends and other statistics.
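One way to picture the bill-updating logic is as a diff between the items detected in the cart on consecutive frames; the real system runs its model with TensorFlow.js on mobile, so this Python sketch (with an invented price list, and no handling of detection jitter) is only illustrative:

```python
from collections import Counter

PRICES = {"milk": 3.49, "bread": 2.99, "eggs": 4.29}   # illustrative catalogue

def update_bill(bill, prev_detections, curr_detections):
    """Compare the items the model sees in the cart on consecutive frames
    and adjust the running bill accordingly."""
    prev, curr = Counter(prev_detections), Counter(curr_detections)
    for item in set(prev) | set(curr):
        delta = curr[item] - prev[item]
        bill[item] = bill.get(item, 0) + delta
        if bill[item] <= 0:
            bill.pop(item, None)
    return bill

bill = {}
bill = update_bill(bill, [], ["milk", "bread"])          # both items placed in the cart
bill = update_bill(bill, ["milk", "bread"], ["milk"])    # bread taken back out
total = sum(PRICES[i] * n for i, n in bill.items())
print(bill, f"${total:.2f}")                             # {'milk': 1} $3.49
```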
## Inspiration The most important part of any quality conversation is knowledge. Knowledge is what ignites conversation and drive - knowledge is the spark that gets people on their feet to take the first step to change. While we live in a time where we are spoiled by the abundance of accessible information, trying to keep up and consume information from a multitude of sources can give you information indigestion: it can be confusing to extract the most relevant points of a news story. ## What it does Macaron is a service that allows you to keep track of all the relevant events that happen in the world without combing through a long news feed. When a major event happens in the world, news outlets write articles. Macaron aggregates these articles from multiple sources and uses NLP to condense the information, classify the summary into a topic, and extract some keywords, then presents it all to the user in a digestible, bite-sized info page. ## How we built it Macaron also goes through various social media platforms (Twitter at the moment) to perform sentiment analysis to see what the public opinion is on the issue: this is displayed by the sentiment bar on every event card! We used a lot of Google Cloud Platform to help publish our app. ## What we learned Macaron also finds the most relevant charities for an event (if applicable) and makes donating to them a super simple process. We think that by adding an easy call-to-action button on an article informing you about an event itself, we'll lower the barrier to everyday charity for the busy modern person. Our front end was built on NextJS, with a neumorphism-inspired design incorporating usable and contemporary UI/UX design. We used the Tweepy library to scrape Twitter for tweets relating to an event, then used NLTK's VADER to perform sentiment analysis on each tweet to build a ratio of positive to negative tweets surrounding an event. We also used MonkeyLearn's API to summarize text, extract keywords and classify the aggregated articles into a topic (Health, Society, Sports, etc.). The scripts were all written in Python. The process was super challenging as the scope of our project was way bigger than we anticipated! Between getting rate-limited by Twitter and the script not running fast enough, we did hit a lot of road bumps and had to make quick decisions to cut out the elements of the project we didn't or couldn't implement in time. Overall, however, the experience was really rewarding and we had a lot of fun moving fast and breaking stuff in our 24 hours!
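The tweet sentiment ratio described above maps closely onto NLTK's VADER; a minimal sketch, using the conventional +/-0.05 compound-score cutoffs rather than whatever thresholds the team actually picked:

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# import nltk; nltk.download("vader_lexicon")  # one-time setup

def sentiment_ratio(tweets):
    """Return the share of positive tweets about an event - the kind of number
    that could drive the sentiment bar on each event card."""
    sia = SentimentIntensityAnalyzer()
    positive = negative = 0
    for text in tweets:
        compound = sia.polarity_scores(text)["compound"]
        if compound >= 0.05:
            positive += 1
        elif compound <= -0.05:
            negative += 1
    judged = positive + negative
    return positive / judged if judged else 0.5  # neutral bar when nothing is judged

print(sentiment_ratio([
    "Incredible news, the relief effort raised millions!",
    "This is devastating for the families affected.",
    "Volunteers on the ground are doing amazing work.",
]))
```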
## About Us Discord Team Channel: #team-64 omridan#1377, dylan28#7389, jordanbelinsky#5302, Turja Chowdhury#6672 Domain.com domain: positivenews.space ## Inspiration Over the last year, headlines across the globe have been overflowing with negative content, which has clouded over any positive information. In addition, everyone has been so focused on what has been going on in other corners of the world that they have not been focusing on their local community. We wanted to bring some pride and positivity back into everyone's individual community by spreading positive headlines from the user's own location. Our hope is that our contribution shines a light in these darkest of times and spreads a message of positivity to everyone who needs it! ## What it does Our platform utilizes the general geolocation of the user along with a filtered API to produce positive articles about the user's local community. The page displays all the articles by showing the headlines and a brief summary, and the user has the option to go directly to the source of the article or view the article on our platform. ## How we built it The core of our project uses the Aylien news API to gather news articles from a specified country and city while reading only positive sentiments from those articles. We then used the IPStack API to gather the user's location via their IP address. To reduce latency and maximize efficiency, we used JavaScript in tandem with React, as opposed to a backend solution, to filter the data received from the APIs, display the information and embed the links. Finally, using a combination of React, HTML, CSS and Bootstrap, we created a clean, modern and positive front-end design to display the information gathered by the APIs. ## Challenges we ran into The most significant challenge we ran into while developing the website was determining the best way to filter through news articles and classify them as "positive". Due to time constraints, the route we went with was to create a library of common keywords associated with negative news and filter articles containing those keywords out of the data pulled from the API. ## Accomplishments that we're proud of We managed to support a standard Bootstrap layout comprised of a grid of rows and columns to enable responsive design for compatibility purposes and display more content on every device. We also utilized React functionality to enable randomized background gradients from a selection of pre-defined options to add variety to the site's appearance. ## What we learned We learned a lot of valuable skills surrounding remote group work. While designing this project, we were working across multiple frameworks and environments, which meant we couldn't rely on utilizing just one location for shared work. We made combined use of Repl.it for core HTML, CSS and Bootstrap, and GitHub in conjunction with Visual Studio Code for the JavaScript and React workloads. While using these environments, we made use of Discord, IM group chats, and Zoom to allow for constant communication and breaking out into subgroups based on how work was being split up. ## What's next for The Good News In the future, the next major feature to be incorporated is one we titled "Travel the World". This feature will utilize Google's Places API to incorporate an embedded Google Maps window in a pop-up modal, which will allow the user to search or navigate and drop a pin anywhere around the world.
The location information from the Places API will replace that provided by the IPStack API, allowing us to serve positive news from the desired location. This feature aims to allow users to experience positive news from all around the world, rather than just their local community. We also want to continue iterating on our design to maximize the user experience.
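The keyword filter lives in JavaScript on the React side, but the logic is simple enough to sketch in Python; the keyword list here is illustrative, not the team's actual library:

```python
NEGATIVE_KEYWORDS = {"death", "killed", "crash", "war", "fraud", "outbreak", "lawsuit"}

def keep_positive(articles):
    """Drop any article whose headline or summary mentions a negative keyword."""
    def is_positive(article):
        text = f"{article['title']} {article.get('summary', '')}".lower()
        return not any(word in text for word in NEGATIVE_KEYWORDS)
    return [a for a in articles if is_positive(a)]

articles = [
    {"title": "Local bakery donates 1,000 loaves to food bank"},
    {"title": "Highway crash closes lanes downtown"},
]
print(keep_positive(articles))  # only the bakery story survives the filter
```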
## Inspiration The media we consume daily has an impact on our thinking, behavior, and emotions. If you’ve fallen into a pattern of regularly watching or listening to the news, the majority of what you’re consuming is likely about the coronavirus (COVID-19) crisis. And while staying up to date on local and national news, especially as it relates to mandates and health updates, is critical during this time, experts say over-consumption of the news can take a toll on your physical, emotional, and mental health. ## What it does The app first greets users with a screen prompting them to either sign up for an account or sign in to a pre-existing account. With the usual authentication formalities out of the way, the app gets straight to business as our server scrapes oodles of articles from the internet and filters out the good from the bad, before presenting the user with a smorgasbord of good news. ## How we built it We used Flutter to create our Android-based application, Firebase as a database, and Express.js as a backend web framework. With the help of RapidAPI, we fetch lists of top news headlines. ## Challenges we ran into Initially, we tried to include Google Cloud-based sentiment analysis of each news item, since we wanted to try some new technology. However, as the majority of our team members were new to machine learning, we faced too many challenges to even get started, including a lack of available examples. So we limited our app to showing customized positive news instead. We wanted to add more features during the hacking period, but due to time constraints we had to limit our scope. ## Accomplishments that we're proud of A completely working Android-based application, integrated with the backend, with contributions from each and every member of the team. ## What we learned We learned to fetch and upload data to Firebase's Realtime Database through the Flutter application. We learned the value of team contribution and teamwork, which is the ultimate key to the success of a project. We also learned about using text-based sentiment analysis to rank news by positivity through Cloud Natural Language Processing. ## What's next for Hopeful 1. More customized feed 2. Update profile section 3. Like and reply to comments
## Inspiration Being our first hackathon, we wanted to make something simple but impactful. As university students who enjoy music, we got the idea of revamping our experiences in our dorm rooms, so we came up with LIT, which visualizes audio in great detail. ## What it does LIT, as the name suggests, is a music visualizer that syncs in real time. The system is connected to a music-emitting device, records the frequency, and visualizes it via LED lights on the circuit. ## How we built it LIT was created using an Arduino and electric circuitry. Utilizing LED lights, a breadboard, resistors and an Arduino Uno, the system visualizes any audio that is sent through the auxiliary cable. ## Challenges we ran into A major challenge was finding the right pieces for the circuit. Due to the unavailability of some parts, such as a potentiometer, we had to adapt. In addition, finding an ideal sensitivity range for the LED lights to react to the frequency was a little difficult without some key parts, but through trial testing we were able to find a suitable frequency range. ## Accomplishments that we're proud of This was the first hackathon for three of the four members on this team, all in the first year of their undergraduate careers. We are proud that we were able to deliver a finished project that not only executes but is super COOL!
## Inspiration As part of UC Berkeley's Bollywood fusion dance team, we have performed on countless stages across the United States. However, no matter if we were in San Diego, Los Angeles, or Chicago, we always wanted to incorporate a wide variety of light-up props. Our biggest limitations were time, money, and transportability. We need to be able to put a device in a bag or small suitcase, we don't want to spend loads of money on it, and we want it to be endlessly customizable and synchronizable across the stage. Being able to control your performance's lighting at your fingertips is what we wanted in order to step up in the dance circuit. ## What it does An iOS mobile app which lets you import music and create lighting cues which can be instantly synced to a series of Arduino devices. Upon a start cue, all the lights will go off in an elaborate display of effects. The best thing is, all you need is your phone! ## How we built it We used the Swift programming language to make our iOS app. We also built a temporary Android mobile app to connect to our Arduino devices, as iOS doesn't support connecting to Arduino modules. The Android app sets up a Bluetooth connection with a selected device to upload its unique lighting-cue encoding. After parsing this request, the Arduino lights up accordingly. ## Challenges we ran into Bluetooth was very finicky when we tried to connect to it, and actually writing data at such a low level led to some byte data loss issues. Also, programming in C took longer than expected, after facing several sleep-deprivation-induced segmentation faults. ## Accomplishments that we're proud of We divided work effectively and thought through several algorithmic efficiencies in our data management. ## What we learned We learned a great deal about hardware hacking. It was the first time any of us worked on a hardware hack, and we realized that it was actually an amazing feeling to have a physical product which you can go back home and use every day, versus an app that may or may not see the light of day. ## What's next for RobinsLight RobinsLight isn't only applicable to dance teams, but can be used in everyday homes and events. You don't need to go buy an Amazon Alexa or a Google Home to have an aesthetic and simple light setup. Since all of our parts are very low level, we save our consumers a lot of money and stay away from any privacy issues.
## Inspiration With COVID-19, the world has been forced to stay safe inside their homes and avoid social contact, a measure which has taken a noticeable toll on everyone’s mental well being. With All of the Lights, individuals can connect in new and fun ways with products that they likely already own - RGB light strips. ## What it does All of the Lights is a web-enabled LED strip control system. It allows friends to synchronize their lights and remotely participate in each other's lives. **Note:** The devices made for this project use only a short LED strip as proof of concept. In real use, the device would be mounted with the user's LED strip that typically runs around the perimeter of their ceiling, controlling the lights for an entire room. Users access our web app to choose a different light pattern depending on whether they want to study together, party, or just chill. Each All of the Lights device is updated with the new pattern, immediately changing everyone's lights. All of the Lights has several different modes or patterns, including: * White Light (On or Off) * Slow colour fading for vibing * Fast colour jumping for parties * Custom colour patterns (such as Blue-Orange fading) * Pomodoro Study Mode With the Pomodoro Study Mode, users can use their LED lights as a way to boost their productivity by changing colour when they should take a break from studying, then returning to the original colour to notify the user to resume studying. ## How we built it All of the Lights is primarily a hardware hack. We began with a rough device circuit diagram to determine the necessary components and used CAD to design a enclosure to be 3D printed. While waiting for the prints, we split up to work on the two major components: creating a circuit to control high-power LEDs and interfacing between Raspberry Pi's to synchronize the devices. The control circuit uses an ATtiny84 microcontroller to drive 3 MOSFET transistors which adjust the brightness of each 12V RGB channel. This utilizes Pulse Width Modulation (PWM) to access the entire range of colour values. To control the light patterns, the Raspberry Pi sends a 32 bit serial packet to the ATtiny. This packet contains the red, green, and blue values, as well as information about whether the colours should fade or not and the duration of the current pattern element. Using a system inspired by floating point integers, an accurate duration between 10 milliseconds and 3 hours can be specified using just 9 bits. All of the Lights supports several nodes in the local network using Python threading and sockets combined with Flask to submit GET requests from the localhost. One Raspberry Pi is used as the server node, which retrieves a string from the Flask server containing information about desired light pattern. The server Pi supports multiple client Pi’s to join its network and updates each with the pattern data upon a new POST to the server. The clients and the server all send a serial message to the ATTiny on the LED driver board to change the light colours. ## Challenges we ran into With the tight time constraints of this Hackathon, waiting for 3D prints to finish could be the difference between complete a product and not. To avoid this, we had to design our 3D printed case before having a concrete list of parts that would be enclosed. This required making intelligent design decisions to estimate how parts would eventually fit together in the case, without being too tight or oversized. 
The serial communication between the Raspberry Pi web client and the ATtiny LED driver board was made difficult by the different logic levels of the two devices. A voltage step-up circuit was needed to convert the 3.3V serial output from the Pi to a 5V serial input for the ATtiny. This required several prototype circuits that tried using diodes or MOSFETs, but the final solution uses a double bipolar transistor inverter to accomplish the step up. With the current system, one of the All of the Lights devices acts as both a client to the web app and a server to the other devices. This means that it must simultaneously fetch data from the web app, relay this information to each client, and control its own LEDs via serial. Organizing all of these concurrent tasks required lots of integration testing to get right. ## Accomplishments that we're proud of We focused heavily on modularizing both the hardware and software components of this project to facilitate future development. This was a rewarding endeavour as we got to see all of the modules, such as the LED driver board, power circuit and LED strip, being seamlessly integrated. As a project that required many interactions between hardware and software, there were many challenges and bugs during the Hackathon. However, after finally fixing all of the issues, it was a great accomplishment to see a physical, real-world device behaving exactly as we had designed, even if that meant pulling an all-nighter to see it work at 7:30am! We are especially excited about this device since we intend to further develop All of the Lights for us and our friends to use. ## What we learned One of the main features of this project is the various device interactions. We learned how to use sockets to interface between Raspberry Pi's, how to collect information from a web server with Flask, and how to communicate over serial between devices with different logic levels. We also improved our engineering soft skills, primarily teamwork and communication. Throughout the competition, our team members frequently discussed the objective of each component of the project, allowing us to work in parallel and design hardware or code that would be relatively easy to integrate later on. ## What's next for All of the Lights With all of the technical groundwork complete, All of the Lights possesses the necessary hardware and software to expand out and create more intricate and useful LED patterns. The localhost server is a crucial aspect of the build, and currently allows people in the same household to connect to and control the lights from any browser. The server will eventually be deployed to the web, allowing people to connect their lights from anywhere in the world. Additionally, All of the Lights will allow users to use the Spotify API to synchronize music on a device with their LED lights. Finally, more productivity features will be implemented to allow users to structure their day. All of the Lights will launch a custom alarm setting, and let users be naturally woken with lights simulating the sunrise. Thanks to its modular design, launching custom settings on a device has never been easier!
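As a rough illustration of the 9-bit, floating-point-style duration field mentioned above: the exact bit layout is not documented here, but a 5-bit mantissa plus a 4-bit exponent over 10 ms ticks is one split that covers roughly 10 milliseconds to about 3 hours, sketched below in Python (the real encoding runs on the Pi and ATtiny).

```python
# Hedged sketch of how a 9-bit "floating point"-style duration field could span
# 10 ms to ~3 hours, as described above. This 5-bit mantissa + 4-bit exponent
# split is one plausible layout, not necessarily the project's actual one.
# Decoded value = mantissa * 10 ms * 2**exponent; encoding is lossy for long
# durations and clamps near a ceiling of about 2.8 hours.

def encode_duration(ms: int) -> int:
    exponent = 0
    units = max(1, round(ms / 10))           # duration in 10 ms ticks
    while units > 31 and exponent < 15:       # shrink mantissa until it fits 5 bits
        units = (units + 1) // 2
        exponent += 1
    return (exponent << 5) | min(units, 31)

def decode_duration(packed: int) -> int:
    mantissa = packed & 0b11111
    exponent = (packed >> 5) & 0b1111
    return mantissa * 10 * (2 ** exponent)

for ms in (10, 500, 60_000, 3 * 60 * 60 * 1000):
    code = encode_duration(ms)
    print(f"{ms} ms -> 9-bit code {code:#011b} -> {decode_duration(code)} ms")
```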
## Inspiration We hate making resumes and customizing them for each employer, so we created a tool to speed that up. ## What it does A user creates "blocks" which are saved; then they can pick and choose which ones they want to use. ## How we built it [Node.js](https://nodejs.org/en/) [Express](https://expressjs.com/) [Nuxt.js](https://nuxtjs.org/) [Editor.js](https://editorjs.io/) [html2pdf.js](https://ekoopmans.github.io/html2pdf.js/) [mongoose](https://mongoosejs.com/docs/) [MongoDB](https://www.mongodb.com/)
## Inspiration Let’s take you through a simple encounter between a recruiter and an aspiring student looking for a job during a career fair. The student greets the recruiter eagerly after having to wait in a 45-minute line and hands him his beautifully crafted paper resume. The recruiter, having been talking to thousands of students, knows that his time is short and tries to skim the resume rapidly, inevitably skipping important skills that the student brings to the table. In the meantime, the clock has been ticking, and while the recruiter is still reading irrelevant parts of the resume, the student waits, blankly staring at the recruiter. The recruiter finally looks up, only to be able to exchange a few words of acknowledgement and a good luck before having to move on to the next student. And the resume? It ends up tossed in the back of a bin and jumbled together with thousands of other resumes. The clear bottleneck here is the use of the paper resume. Instead of having the recruiter stare at a thousand-word page crammed with everything someone has done with their life, it would make much more sense to have the student be able to show their achievements in a quick, easy way and have them elegantly displayed for the recruiter. With Reko, both recruiters and students will be equipped for an easy, digital way to transfer information. ## What it does By allowing employers and job-seekers to connect in a secure and productive manner, Reko ushers in a new era of stress-free, peer-to-peer-style data transfer. The magic of Reko is in its simplicity. Simply walk up to another Reko user, scan their QR code (or have them scan yours!), and instantly enjoy a UX-rich file transfer channel between your two devices. During PennApps, we set out to demonstrate the power of this technology in what is mainly still a paper-based ecosystem: career fairs. With Reko, employers no longer need to peddle countless informational pamphlets, and students will never again have to rush to print out countless resume copies before a career fair. Not only can this save a large amount of paper, but it also allows students to freely choose which aspects of their resumes they want to accentuate. Reko also allows employers to interact with the digital resume cards sent to them by letting them score each card on a scale of 1 - 100. Using this data alongside machine learning, Reko then provides the recruiter with an estimated candidate match percentage which can be used to streamline the hiring process. Reko also serves to help students by providing them with a recruiting dashboard. This dashboard can be used to understand recruiter impressions and aims to help students develop better candidate profiles and resumes. ## How we built it ### Front-End // Swift The frontend of Reko focuses on delivering a phenomenal user experience through an exceptional user interface and efficient performance. We utilized native frameworks and a few CocoaPods to provide a novel, intriguing experience. The QR code exchange handshake protocol is accomplished through the very powerful VisionKit. The MVVM design pattern was implemented and protocols were introduced to make the most out of the information cards. The hardest part was the WebSocket implementation of the exchange of information cards between the student and the interviewer. ### Back-End // Node.Js The backend of Reko focuses on handling websocket sessions, establishing the connection between the front-end and our machine learning service, and managing the central MongoDB. 
Every time a new ‘user-pair’ is instantiated via a QR code scan, the backend stores the two unique socket machine IDs as ‘partners’, and by doing so is able to control which events are sent to one or both partners. By also handling the MongoDB, Reko’s backend is able to relate these unique socket IDs to stored user account data. In turn, this allows Reko to take advantage of its data sets to provide the user with valuable, unique data analysis. Using the user ID as context, Reko’s backend is able to POST to our self-contained machine learning web service. Reko’s ML web service responds with an assortment of statistical data, which is then emitted to the front-end via websocket for display to the user. ### Machine Learning // Python In order to properly integrate machine learning into our product, we had to build a self-contained web application. This container application was built in a virtual environment with a REST API layer and the Django framework. We chose these technologies because they are scalable and easy to deploy to the cloud. With the Django framework, we used POST requests to easily communicate with the node backend and thus streamline the overall workflow via abstraction. We were then able to use Python to train a machine learning model based on data sent from the node backend. After connecting to the MongoDB with the pymongo library, we were able to prepare training and testing data sets. We used the pandas Python library to develop DataFrames for each data set and built a machine learning model using algorithms from the scikit-learn library. We tested various algorithms with our dataset and finalized a model that utilized the logistic regression algorithm. Using these data sets and the machine learning model, our service can predict the percentage a candidate matches a recruiter’s job profile. The final container application is able to receive data and return results in under 1 second and is over 90% accurate. ## Challenges we ran into * Finding a realistic data set to train our machine learning model * Deploying our backend to the cloud * Configuring the container web application * Properly populating our MongoDB * Finding the best web service for our use case * Finding the optimal machine learning model for our data sets ## Accomplishments that we're proud of * UI/UX design * Websocket implementation * Machine learning integration * Scalably structured database * Self-contained Django web application ## What we learned * Deploying container applications on the cloud * Using MongoDB with Django * Data modeling/analysis for our specific use case * Good practices in structuring a MongoDB database as opposed to a SQL database. * How to successfully integrate three software layers to generate a consistent and fluid final product. * Strategies for linking iOS devices in a peer-to-peer fashion via websockets. ## What's next for reko * Our vision for Reko is to have an app which allows for general and easy-to-use data transfer between two people who may be complete strangers. * We hope to move from QR codes to NFC to allow for even easier data transfer and thus a better user experience. * We believe that a data transfer system such as the one Reko showcases is the future of in-person data transfer due to its “no-username” operation. This system allows individuals to keep their anonymity if desired, and thus protects their privacy.
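To illustrate the match-percentage idea described above, here is a minimal pandas + scikit-learn sketch of a logistic regression that turns recruiter card scores into a predicted probability of a hire. The column names and toy data are hypothetical; the actual feature set lives in the team's MongoDB.

```python
# Rough sketch of the candidate-match scoring described above: a logistic
# regression trained on recruiter feedback, then used to output a match
# percentage. Feature names and the toy data are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "avg_card_score": [82, 45, 91, 30, 77, 58, 95, 40],   # 1-100 recruiter ratings
    "num_cards_shared": [5, 2, 6, 1, 4, 3, 7, 2],
    "hired": [1, 0, 1, 0, 1, 0, 1, 0],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["avg_card_score", "num_cards_shared"]], df["hired"],
    test_size=0.25, random_state=0, stratify=df["hired"],
)

model = LogisticRegression().fit(X_train, y_train)

# "Match percentage" for a new candidate = predicted probability of the positive class.
candidate = pd.DataFrame({"avg_card_score": [88], "num_cards_shared": [5]})
match_pct = model.predict_proba(candidate)[0][1] * 100
print(f"Estimated candidate match: {match_pct:.0f}%")
```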
## Inspiration Our video is in the attached google drive folder link at the bottom if the youtube link in devpost isn't loading. Our inspiration for this project came from noticing the struggle of finding work during the pandemic. There are many small businesses that are looking for people to fill part-time positions and there are many people who do not know where to find these positions. Being a part of several local Facebook pages, most people have resorted to posting on Facebook to look to fill available part-time opportunities. However, it can be quite tedious sifting through hundreds of Facebook comments so our app, PartTime, offers an easy and accessible solution! Similarly, employers have found it difficult to find new employees to fill positions that they need. This has been especially problematic for companies who have had increased workloads through the pandemic, as certain industries are still high in demand and these companies can’t source the necessary manpower to meet the demand. ## What it does Our project provides an easy way for both small businesses to find part-time employees and for unemployed workers to find part-time positions. On a high level, employers create job listings through the app, and then users can go through job listings in a fashion similar to tinder, swiping left to ignore a job listing and swiping right to instantly apply for a job listing. The employers are then able to check the app and see basic information about applicants including their resume and contact information so that they can contact the applicants they are interested in to move further down the application process. Applicants are also able to look back at listings that they’ve applied for. ## How I built it On the frontend, we used react native and javascript to create various components each of which represent a different view on the application. We integrated the frontend with the backend storage management with certain buttons and while rendering the view, the backend will continuously send data in case there are any updates. On the backend we used Firebase and Firestore to manage user authentication and data storage. Users and employer accounts both go through essentially the same sign-up and sign-in process, and we use this to our advantage by keeping track of the userIDs. For data storage, we have two separate parts, Firestore which keeps track of the basic information of employers and applicants and then Firebase Storage which handles the image and resume upload and download process. On Firestore, we have three collections, users, companies, and listings. For the users and companies collections, each applicant or company document is accessed by using their userID from the authentication. For the listings collection, there is an auto generated ID when a document is made, so we keep track of those. For each user, basic information about them is stored including name, email, phone number, location, as well as which job listings they’ve swiped right to, applied for, and swiped left to, ignored. This way we ensure that they aren’t swiping on the same listings multiple times. For the companies collection, there is information about the company name, email, and then we keep track of all the job listings a company has created once again using the auto generated IDs. For the listings collection, each listing has information regarding the position title, compensation, employer, location, description, and it keeps track of all the people that have applied for the listing. 
## Challenges I ran into We initially had issues retrieving data from Firebase, most of which we were able to figure out. ## Accomplishments that I'm proud of Completing a finished and functional product! Implementing the Google Geocoding API so that a user's location can be reverse geocoded into an address, and so that a typed-in address can be turned into a location. Using Firebase and its Cloud Firestore and Storage for the database in our project. We had problems using it in our last hackathon, so getting it to work this time was really satisfying! ## What I learned We learned a lot about React Native and using APIs, especially since half of our team had never formally learned anything about iOS app development. It was a really enjoyable experience and taught us a lot. ## What's next for PartTime Making the app look nicer and adding more functionality concerning images and profiles. Adding more data and more analysis with the Google Maps API so that a company can visually see employees and possible applicants for each of their locations. This might even call for a possible website counterpart to our app.
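To make the Firestore layout described above more concrete, here is a small sketch of the swipe handling using the Python Firestore client purely for illustration (the app itself uses the React Native / JS SDK); collection and field names follow the description, but the exact names are assumptions.

```python
# Sketch of the described Firestore model: users, companies, and listings
# collections, with swipes recorded on both sides. Python client shown for
# illustration only; IDs and field names are hypothetical.
from google.cloud import firestore

db = firestore.Client()

def apply_to_listing(user_id: str, listing_id: str) -> None:
    """Right swipe: add the listing to the user's applied list and the user
    to the listing's applicant list, so it is never shown to them again."""
    db.collection("users").document(user_id).update(
        {"applied": firestore.ArrayUnion([listing_id])}
    )
    db.collection("listings").document(listing_id).update(
        {"applicants": firestore.ArrayUnion([user_id])}
    )

def ignore_listing(user_id: str, listing_id: str) -> None:
    """Left swipe: remember the ignored listing on the user's document."""
    db.collection("users").document(user_id).update(
        {"ignored": firestore.ArrayUnion([listing_id])}
    )
```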
## What it does Blink is a communication tool for those who cannot speak or move, while being significantly more affordable and accurate than current technologies on the market. [The ALS Association](http://www.alsa.org/als-care/augmentative-communication/communication-guide.html) recommends a $10,000 communication device to solve this problem—but Blink costs less than $20 to build. You communicate using Blink through a modified version of **Morse code**. Blink out letters and characters to spell out words, and in real time from any device, your caretakers can see what you need. No complicated EEG pads or camera setup—just a small, unobtrusive sensor can be placed to read blinks! The Blink service integrates with [GIPHY](https://giphy.com) for GIF search, [Earth Networks API](https://www.earthnetworks.com) for weather data, and [News API](https://newsapi.org) for news. ## Inspiration Our inspiration for this project came from [a paper](http://www.wearabletechnologyinsights.com/articles/11443/powering-devices-through-blinking) published on an accurate method of detecting blinks, but it uses complicated, expensive, and less-accurate hardware like cameras—so we made our own **accurate, low-cost blink detector**. ## How we built it The backend consists of the sensor and a Python server. We used a capacitive touch sensor on a custom 3D-printed mounting arm to detect blinks. This hardware interfaces with an Arduino, which sends the data to a Python/Flask backend, where the blink durations are converted to Morse code and then matched to English characters. The frontend is written in React with [Next.js](https://github.com/zeit/next.js) and [`styled-components`](https://styled-components.com). In real time, it fetches data from the backend and renders the in-progress character and characters recorded. You can pull up this web app from multiple devices—like an iPad in the patient’s lap, and the caretaker’s phone. The page also displays weather, news, and GIFs for easy access. **Live demo: [blink.now.sh](https://blink.now.sh)** ## Challenges we ran into One of the biggest technical challenges building Blink was decoding blink durations into short and long blinks, then Morse code sequences, then standard characters. Without any libraries, we created our own real-time decoding process of Morse code from scratch. Another challenge was physically mounting the sensor in a way that would be secure but easy to place. We settled on using a hat with our own 3D-printed mounting arm to hold the sensor. We iterated on several designs for the arm and methods for connecting the wires to the sensor (such as aluminum foil). ## Accomplishments that we're proud of The main point of PennApps is to **build a better future**, and we are proud of the fact that we solved a real-world problem applicable to a lot of people who aren't able to communicate. ## What we learned Through rapid prototyping, we learned to tackle difficult problems with new ways of thinking. We learned how to efficiently work in a group with limited resources and several moving parts (hardware, a backend server, a frontend website), and were able to get a working prototype ready quickly. ## What's next for Blink In the future, we want to simplify the physical installation, streamline the hardware, and allow multiple users and login on the website. Instead of using an Arduino and breadboard, we want to create glasses that would provide a less obtrusive mounting method. In essence, we want to perfect the design so it can easily be used anywhere. 
Thank you!
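Since the backend is Python, here is a minimal sketch of the real-time decoding step described above: blink durations are split into short and long blinks, and the resulting Morse sequence is mapped to a character. The 0.4-second threshold is an illustrative choice, not the project's exact parameter.

```python
# Minimal sketch of blink-duration -> Morse -> character decoding, as described
# above. Thresholds are illustrative assumptions, not the project's tuned values.

MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y", "--..": "Z",
}

SHORT_LONG_THRESHOLD = 0.4  # seconds: shorter blinks are dots, longer are dashes

def blink_to_symbol(duration: float) -> str:
    return "." if duration < SHORT_LONG_THRESHOLD else "-"

def decode_letter(blink_durations: list[float]) -> str:
    sequence = "".join(blink_to_symbol(d) for d in blink_durations)
    return MORSE_TO_CHAR.get(sequence, "?")

print(decode_letter([0.2, 0.9]))       # ".-"  -> A
print(decode_letter([0.8, 0.2, 0.1]))  # "-.." -> D
```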
## Inspiration Retinal degeneration affects 1 in 3000 people, slowly robbing them of vision over the course of their mid-life. The need to adjust to life without vision, often after decades of relying on it for daily life, presents a unique challenge to individuals facing genetic disease or ocular injury, one which our teammate saw firsthand in his family, and inspired our group to work on a modular, affordable solution. Current technologies which provide similar proximity awareness often cost many thousands of dollars, and require a niche replacement in the user's environment; (shoes with active proximity sensing similar to our system often cost $3-4k for a single pair of shoes). Instead, our group has worked to create a versatile module which can be attached to any shoe, walker, or wheelchair, to provide situational awareness to the thousands of people adjusting to their loss of vision. ## What it does (Higher quality demo on google drive link!: <https://drive.google.com/file/d/1o2mxJXDgxnnhsT8eL4pCnbk_yFVVWiNM/view?usp=share_link> ) The module is constantly pinging its surroundings through a combination of IR and ultrasonic sensors. These are readily visible on the prototype, with the ultrasound device looking forward, and the IR sensor looking to the outward flank. These readings are referenced, alongside measurements from an Inertial Measurement Unit (IMU), to tell when the user is nearing an obstacle. The combination of sensors allows detection of a wide gamut of materials, including those of room walls, furniture, and people. The device is powered by a 7.4v LiPo cell, which displays a charging port on the front of the module. The device has a three hour battery life, but with more compact PCB-based electronics, it could easily be doubled. While the primary use case is envisioned to be clipped onto the top surface of a shoe, the device, roughly the size of a wallet, can be attached to a wide range of mobility devices. The internal logic uses IMU data to determine when the shoe is on the bottom of a step 'cycle', and touching the ground. The Arduino Nano MCU polls the IMU's gyroscope to check that the shoe's angular speed is close to zero, and that the module is not accelerating significantly. After the MCU has established that the shoe is on the ground, it will then compare ultrasonic and IR proximity sensor readings to see if an obstacle is within a configurable range (in our case, 75cm front, 10cm side). If the shoe detects an obstacle, it will activate a pager motor which vibrates the wearer's shoe (or other device). The pager motor will continue vibrating until the wearer takes a step which encounters no obstacles, thus acting as a toggle flip-flop. An RGB LED is added for our debugging of the prototype: RED - Shoe is moving - In the middle of a step GREEN - Shoe is at bottom of step and sees an obstacle BLUE - Shoe is at bottom of step and sees no obstacles While our group's concept is to package these electronics into a sleek, clip-on plastic case, for now the electronics have simply been folded into a wearable form factor for demonstration. ## How we built it Our group used an Arduino Nano, batteries, voltage regulators, and proximity sensors from the venue, and supplied our own IMU, kapton tape, and zip ties. (yay zip ties!) I2C code for basic communication and calibration was taken from a user's guide of the IMU sensor. Code used for logic, sensor polling, and all other functions of the shoe was custom. All electronics were custom. 
Testing was done on the circuits by first assembling the Arduino Microcontroller Unit (MCU) and sensors on a breadboard, powered by a laptop. We used this setup to test our code and fine-tune our sensors, so that the module would behave how we wanted. We tested and wrote the code for the ultrasonic sensor, the IR sensor, and the gyro separately, before integrating them as a system. Next, we assembled a second breadboard with LiPo cells and a 5v regulator. The two 3.7v cells are wired in series to produce a single 7.4v 2S battery, which is then regulated back down to 5v by an LM7805 regulator chip. One by one, we switched all the MCU/sensor components off of laptop power, and onto our power supply unit. Unfortunately, this took a few tries, and resulted in a lot of debugging. After a circuit was finalized, we moved all of the breadboard circuitry to harnessing only, then folded the harnessing and PCB components into a wearable shape for the user. ## Challenges we ran into The largest challenge we ran into was designing the power supply circuitry, as the combined load of the sensor DAQ package exceeds the amp limits on the MCU. This took a few tries (and smoked components) to get right. The rest of the build went fairly smoothly, with the other main pain points being the calibration and stabilization of the IMU readings (this simply necessitated more trials) and the complex folding of the harnessing, which took many hours to arrange into its final shape. ## Accomplishments that we're proud of We're proud that we found a good solution for balancing the sensitivity of the sensors. We're also proud of integrating all the parts together, supplying them with appropriate power, and assembling the final product as small as possible, all in one day. ## What we learned Power was the largest challenge, both in terms of the electrical engineering and the product design: ensuring that enough power can be supplied for long enough, while not compromising on the wearability of the product, as it is designed to be a versatile solution for many different shoes. Currently the design has a 3 hour battery life, and is easily rechargeable through a pair of front ports. The challenges with the power system really taught us firsthand how picking the right power source for a product can determine its usability. We were also forced to consider hard questions about our product, such as whether there was really a need for such a solution, and what kind of form factor would be needed for a real impact to be made. Likely the biggest thing we learned from our hackathon project was the importance of the end user, and of the impact that engineering decisions have on the daily life of people who use your solution. For example, one of our primary goals was making our solution modular and affordable. Solutions in this space already exist, but their high price and uni-functional design mean that they are unable to have the impact they could. Our modular design hopes to allow for greater flexibility, acting as a more general tool for situational awareness. ## What's next for Smart Shoe Module Our original idea was to use a combination of miniaturized LiDAR and ultrasound, so our next steps would likely involve the integration of these higher-quality sensors, as well as a switch to custom PCBs, allowing for a much more compact sensing package, which could better fit into the sleek, usable clip-on design our group envisions. 
Additional features might include the use of different vibration modes to signal directional obstacles and paths, and indeed expanding our group's concept of modular assistive devices to other solution types. We would also look forward to making a more professional demo video. Current example clip of the prototype module taking measurements: (<https://youtube.com/shorts/ECUF5daD5pU?feature=share>)
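For readers who want the per-step decision logic described above in one place, here is a compact sketch. It is written in Python purely for readability (the real firmware is Arduino C); the distance thresholds come from the write-up, while the IMU thresholds and function shapes are assumptions.

```python
# Pseudocode-style sketch of the Smart Shoe decision logic described above:
# only evaluate the proximity sensors when the IMU says the shoe is planted,
# then keep the pager motor on until a clear step is taken. IMU thresholds
# are illustrative; distance limits come from the write-up.

FRONT_LIMIT_CM = 75   # ultrasonic sensor, looking forward
SIDE_LIMIT_CM = 10    # IR sensor, looking to the flank

def shoe_is_planted(gyro_dps: float, accel_g: float) -> bool:
    """Bottom of the step cycle: angular speed ~0 and no significant acceleration."""
    return abs(gyro_dps) < 5.0 and abs(accel_g - 1.0) < 0.1

def update_motor(gyro_dps: float, accel_g: float,
                 front_cm: float, side_cm: float, motor_on: bool) -> bool:
    if not shoe_is_planted(gyro_dps, accel_g):
        return motor_on                                   # mid-step: leave motor alone
    obstacle = front_cm < FRONT_LIMIT_CM or side_cm < SIDE_LIMIT_CM
    return obstacle                                       # vibrate until a clear step

print(update_motor(0.5, 1.02, 60, 40, motor_on=False))    # planted, obstacle -> True
print(update_motor(0.5, 1.02, 200, 40, motor_on=True))    # planted, clear    -> False
```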
## Inspiration Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact in people's everyday lives. We also wanted to make an affordable solution, as most solutions to paralysis cost thousands and are inaccessible. We wanted something that was modular, that we could 3D print and also make open source for others to use. ## What it does and how we built it The main component is a bionic hand assistant called the PulseGrip. We used an ECG sensor in order to detect electrical signals. When it detects that your muscles are trying to close your hand, it uses a servo motor to close your hand around an object (a foam baseball, for example). If it stops detecting a signal (you're no longer trying to close), it will loosen your hand back to a natural resting position. Along with this, it constantly sends a signal through websockets to our Amazon EC2 server and game. This is stored in a MongoDB database, and using API requests we can communicate between our games, the server and the PulseGrip. We can track live motor speed, angles, and whether it's open or closed. Our website is a full-stack application (React styled with Tailwind on the front end, Node.js on the backend). Our website also has games that communicate with the device to test the project and provide entertainment. We have one to test continuous holding and another for rapid inputs; these could be used in recovery as well. ## Challenges we ran into This project forced us to consider different avenues and work through difficulties. Our main problem was when we fried our EMG sensor, twice! This was a major setback, since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead; it is designed for heartbeats, but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our websocket didn't work. We troubleshot for an hour, looking at the WiFi, the device itself and more. Without this, we couldn't send data from the PulseGrip to our server and games. We decided to ask for some mentors' help and reset the device completely; after using different libraries we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem. ## Accomplishments that we're proud of Firstly, just getting the device working was a huge achievement, as we had so many setbacks and times we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project: we have a physical side with 3D-printed materials, sensors and complicated algorithms, but we also have a game side, with two (questionably original) games that can be used. They are not just random games, but ones that test the user in two different ways that are critical to using the device: short bursts and longer-term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats on the device. 
## What's next for PulseGrip * Working to improve the sensors, adding more games, and seeing how we can help people. We think this project has a ton of potential, and we can't wait to see what we can do with the ideas learned here. ## Check it out <https://hacks.pulsegrip.design> <https://github.com/PulseGrip>
## Inspiration We were inspired by the development of foldscope, a very low-cost, high-resolution foldable microscope (<https://en.wikipedia.org/wiki/Foldscope>) capable of imaging blood cells. We wanted to create tools that can integrate with this microscope and, more generally, other applications and better improve health in the developing world. Malaria is a leading cause of death in many developing countries, where blood smears are used to identify the presence of parasites in red blood cells (RBCs). To improve the efficiency of detecting parasitic cells in blood smears, ultimately speeding up malaria diagnosis, we aimed to create an online tool that can do this in minutes. ## What it does The project allows a user to upload a thin blood smear image, and it classifies the RBCs in the blood that are infected with malaria parasites. ## How we built it We utilized a thin blood smear dataset from 193 patients in a Bangladesh hospital curated by the NIH. It consists of 20,000 labeled cells (exhibiting malaria parasitic infection or not) across 965 blood smear images. Given these images, we performed a multi-step image segmentation procedure to isolate the red blood cells (RBCs): we first used U-Net to segment the blood smears into cell clusters, then used Faster R-CNN to segment the cell clusters into individual RBCs, and then incorporated thresholding techniques to refine the segmentation and smooth the edges. Once each RBC in every blood smear was individually segmented, we trained a CNN to classify whether these segmented images contained the malaria parasite. Mapping these now-labeled segmented images back to their parent blood smears allowed us to output modified blood smear images highlighting the RBCs containing a malaria parasite. ## Challenges we ran into We encountered challenges in building an effective RBC segmentation pipeline. Variations in the segmentation procedure greatly affected the classification performance of the CNN, which was somewhat surprising. The various segmentation methodologies we explored yielded segments that looked visually very similar to hand-drawn segmentations provided in the NIH dataset, and these hand-drawn segmentations were classified very well by CNN. We tried integrating various thresholding, grayscale manipulations, filtering, and flood-filling methodologies to integrate with the U-Net + R-CNN for RBC segmentation. In addition, we originally started with pre-trained models like ResNet-18 for classification. However, they tended to overfit the training data, so we opted for a simple, untrained one-layer CNN architecture, which worked the best. ## Accomplishments that we're proud of We are proud that we were able to build a comprehensive segmentation and classification pipeline and that we were able to integrate this into a full-stack web app with a front end and a back end. ## What we learned We learned many technical skills along the way, such as using Python’s OpenCV framework for image processing/manipulation, various image segmentation methodologies, and using flask to build out the web app. ## What's next for Plasmodium In the future, we hope to continue developing our app and refining our segmentation/classification methodologies to increase accuracy. Furthermore, we plan to expand our pipeline to other diseases, such as sickle cell anemia, to create a more comprehensive health diagnostic tool for the developing world.
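To give a feel for the final classification stage described above, here is a deliberately small Keras CNN that labels a segmented RBC crop as parasitized or not. The input size, filter count, and training settings are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch of the per-RBC classification step: a small, single-conv-layer
# CNN over segmented red blood cell crops. Hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_rbc_classifier(input_shape=(64, 64, 3)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, kernel_size=3, activation="relu"),  # single conv layer
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),                # infected vs. uninfected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_rbc_classifier()
model.summary()
# Hypothetical training call once segmented crops and labels are prepared:
# model.fit(train_crops, train_labels, validation_data=(val_crops, val_labels), epochs=10)
```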
## Inspiration Every year, the amount of data collected grows exponentially. As the abundance of data grows, so do the possibilities that come along with it. In conjunction with machine learning in Python, we decided to utilize the tools available to try to improve a critical aspect of the health industry: cancer diagnosis. ## What it does Our algorithm diagnoses the patient, given traits from their biopsy lab results. With data on breast cancer at a cellular level, we were able to train a learning algorithm to reach an accuracy of 99% on our test set. In an effort to decrease the number of false negative diagnoses made by our algorithm, we were able to achieve a false negative rate of 0.4%. ## How we built it In terms of data, we accessed the breast cancer dataset from UCI's machine learning repository. Once we had the data, we used Python and various packages within Python to both clean up and visualize our data. We then used TensorFlow to model this data using 3 different machine learning algorithms: logistic regression, softmax regression, and neural networks. Using a 60% / 40% split of our data, we trained and tested our models. ## Challenges we ran into The breast cancer dataset that we used contained only 539 instances. At the beginning, we had hoped for larger datasets that could train a more sophisticated model. As a result, we had to make do with a smaller model, but still managed to achieve great results. ## Accomplishments that we're proud of Both Tate and I are incredibly proud of ourselves for coming this far at all. This is the first hackathon at which either of us has submitted a project. Furthermore, neither of us had attempted a project in this field in the past, and we found that our respective knowledge of machine learning and TensorFlow piggybacked off of each other and pushed us to a new level. ## What we learned Throughout TreeHacks, we experienced the effects of extreme sleep deprivation, poor diet, and high strain. We vow to pack acai bowls, along with an air mattress, for the next hackathon we go to. Jokes aside, we threw ourselves into the deep end analyzing and modeling learning algorithms in TensorFlow, as we had little prior experience beforehand. We also went beyond the typical matplotlib in Python for visuals and experimented with Seaborn for next-level visualizations. ## What's next for Breast Cancer Classifier We look to expand to bigger datasets
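For a sense of the baseline workflow described above, here is a minimal sketch of a logistic-regression classifier with a 60/40 split. The team built their models in TensorFlow; this uses scikit-learn and its bundled copy of the UCI breast cancer (Wisconsin diagnostic) data purely to show the train/test and false-negative evaluation flow.

```python
# Minimal sketch (not the team's TensorFlow implementation) of a logistic
# regression on the UCI breast cancer data with a 60/40 split.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # in this copy: 0 = malignant, 1 = benign
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)

y_pred = model.predict(scaler.transform(X_test))
accuracy = model.score(scaler.transform(X_test), y_test)

# A clinically dangerous false negative here is a malignant sample predicted benign.
false_negatives = int(((y_test == 0) & (y_pred == 1)).sum())
print(f"Accuracy: {accuracy:.3f}")
print(f"Malignant-missed count: {false_negatives} of {len(y_test)} test samples")
```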
## Inspiration dermalab was inspired by a very common problem that doctors face: disease misdiagnoses. Skin diseases are often misdiagnosed by dermatologists because conditions often contain many similarities in appearance. Doctors - or even patients at times -, unaided with advanced technology, are often less sure of skin disease diagnoses, which may result in incorrect treatment, or even failure of treatment altogether. Skin disease misdiagnosis is primarily relevant in low-income communities where there is a lack of medical doctors with the appropriate specialty and expertise. Machine learning (ML) and Artificial Intelligence (AI) technologies can be used to detect differences more accurately than the human eye but currently aren’t widely adopted due to a lack of algorithms that are able to conduct a variety of tasks for disease diagnosis. Current algorithms are specialized to a single task in this journey and are often difficult to use, for both patients and doctors. Therefore, we strove to create an algorithm that was able to successfully leverage these technologies while catering to the entirety of the disease diagnosis process. We wanted to enable doctors to have more confidence in their diagnoses as well as provide helpful, understandable explanations to patients during this journey. dermalab is what we've produced to solve these issues, and we hope you enjoy it! ## What it does dermalab helps fill the aforementioned gaps via real-time skin condition classification, disease severity calculation, and spread predictions using cutting-edge technology. We've consolidated all of our amazing features on a website that anybody can use! Users first have an option of uploading pictures of the affected area to the website. Our machine learning algorithm will then classify and identify the disease or condition that is present, if any. However, we don't stop there! Users also receive a summary of their conditions - without all of the hard-to-understand medical jargon. That way, both patients and doctors are on the same page when it comes to understanding a disease diagnosis. Our web page also gives users the choice of completing a questionnaire to return the severity level of the disease. We provide explanations for each severity level to ensure that this process is empathetic to patients in the best way possible. Our deep learning algorithm also outputs the predicted disease spread for doctors as well. This was done via further analysis through deep learning and MATLAB to understand region, rate, and direction of disease growth. We used the InceptionResNetV2 model, with lesion images and their annotated masks to train the deep learning model, to perform semantic segmentation on unseen data. Once the semantic segmentation of lesions are done, thereby performing binary masks, MATLAB could be used to perform an onion ring segmentation of the region, where dilation and erosion of mask is done and the difference is taken. Combined, the variety of services that dermalab provides a robust interface for doctors and patients during disease diagnoses, and helps prevents misdiagnoses. Check it out [here](https://dermalabs-pennapps.co/)! ## How we built it dermalab can be primarily be split into the frontend and backend. The frontend consists of the web application and the backend contains the ML/AI/deep learning/LLM models. The backend has four objectives: A. Diagnosis: Classification of skin diseases B. Evaluation: Calculation of disease severity level C. Prognosis: Prediction of disease spread D. 
Explanation: Understandable relaying of information to patients A. Diagnosis: Classification of skin diseases dermalab uses machine learning and image processing to identify the type of disease that a person is suffering from. * We trained a random forest model on a dataset consisting of multiple labeled types of skin diseases * Weights of the trained model were saved and then loaded along with the user-provided image containing the affected region. * The model returns the class the image falls under. * Success! The identification of the disease from the provided image is output to the user. B. Evaluation: Calculation of disease severity level While it is certainly helpful to identify the diseases the person might have, the severity of the diseases plays a key role in determining what measures to take next. * Users are asked to give their medical history, and the severity level of their disease is returned. * A Random Forest model is trained with cross-validation on a medical-history dataset of patients with erythemato-squamous disease * Results vary from 0 to 5, where 0 indicates the absence of disease and 5 indicates the greatest severity. C. Prognosis: Prediction of disease spread dermalab also predicts disease progression. * Further analysis via deep learning and MATLAB to understand region, rate, and direction of disease growth. * InceptionResNetV2 model with lesion images and annotated masks for semantic segmentation * Onion ring segmentation of the affected region performed via MATLAB * More informed doctors making better decisions for treatment plans D. Explanation: Understandable relaying of information to patients dermalab supports powerful physician-patient interactions. When it comes to disease classification, the Metaphor API returns up-to-date information about the disease, overcoming a limitation of ChatGPT. In terms of disease severity (0 to 5), Llama 2.0 and Replicate provide an explanation of the severity, allowing us to show that LLMs can be used for language simplification. **Other Tools!** *MATLAB* dermalab uses MATLAB as a major tool to analyse the spread of lesions in skin: the semantic segmentation is further split into onion-ring segments, providing information about origination, growth rate, and direction. *Metaphor* We use Metaphor's API because we believe that medical communication requires the most recent research. Patients have a right to access the most recent information available, and Metaphor's API is the perfect solution. The frontend was connected using Flask and GitHub. ## Challenges we ran into One challenge we ran into was getting the Metaphor API running. It took a lot of troubleshooting, debugging, and asking for help to resolve the issue, but we're glad we did! Another challenge was the lack of annotated dermatological data. Since we aren't medical professionals who are qualified to annotate available data, we had to get creative when it came to finding data that could achieve our objectives. ## Accomplishments that we're proud of We're proud, first and foremost, that we were able to bring dermalab to completion. One accomplishment was linking the frontend (web page) and the backend (multiple ML models) to create a seamless interface for the user. Another major accomplishment was troubleshooting with Metaphor's API until it was able to output the results we wanted. ## What we learned We're so grateful for everything that we learned! 
* Combining frontend and backend via Flask * Using Metaphor's API to return improved responses from ChatGPT * Employing machine learning, deep learning, and MATLAB for dermatology applications ## What's next for dermalab We want dermalab to be a tool for all doctors and patients. To get there, we want to refine our algorithm to be able to take into account more disease classes, so that it can classify a greater variety of diseases.
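As a small illustration of the "onion ring" idea described above, the sketch below uses OpenCV in Python rather than the MATLAB the team used: dilate and erode the binary lesion mask, then take the difference to get a ring straddling the lesion boundary. The kernel size and toy mask are arbitrary choices for the example.

```python
# OpenCV sketch of onion-ring segmentation: difference between a dilated and an
# eroded version of the binary lesion mask (the team did this step in MATLAB).
import cv2
import numpy as np

def onion_ring(mask: np.ndarray, thickness: int = 5) -> np.ndarray:
    """mask: uint8 binary mask (0 or 255) from the semantic segmentation step."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (thickness, thickness))
    dilated = cv2.dilate(mask, kernel)
    eroded = cv2.erode(mask, kernel)
    return cv2.subtract(dilated, eroded)   # band around the lesion edge

# Toy example: a filled circle stands in for a segmented lesion.
toy_mask = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(toy_mask, (50, 50), 30, 255, thickness=-1)
ring = onion_ring(toy_mask)
print("ring pixels:", int((ring > 0).sum()))
```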
# Nexus, **Empowering Voices, Creating Connections**. ## Inspiration The inspiration for our project, Nexus, comes from our experience as individuals with unique interests and challenges. Often, it isn't easy to meet others with these interests or who can relate to our challenges through traditional social media platforms. With Nexus, people can effortlessly meet and converse with others who share these common interests and challenges, creating a vibrant community of like-minded individuals. Our aim is to foster meaningful connections and empower our users to explore, engage, and grow together in a space that truly understands and values their uniqueness. ## What it Does In Nexus, we empower our users to tailor their conversational experience. You have the flexibility to choose how you want to connect with others. Whether you prefer one-on-one interactions for more intimate conversations or want to participate in group discussions, our application Nexus has got you covered. We allow users to either get matched with a single person, fostering deeper connections, or join one of the many voice chats to speak in a group setting, promoting diverse discussions and the opportunity to engage with a broader community. With Nexus, the power to connect is in your hands, and the choice is yours to make. ## How we built it We built our application using a multitude of services/frameworks/tool: * React.js for the core client frontend * TypeScript for robust typing and abstraction support * Tailwind for a utility-first CSS framework * DaisyUI for animations and UI components * 100ms live for real-time audio communication * Clerk for a seamless and drop-in OAuth provider * React-icons for drop-in pixel perfect icons * Vite for simplified building and fast dev server * Convex for vector search over our database * React-router for client-side navigation * Convex for real-time server and end-to-end type safety * 100ms for real-time audio infrastructure and client SDK * MLH for our free .tech domain ## Challenges We Ran Into * Navigating new services and needing to read **a lot** of documentation -- since this was the first time any of us had used Convex and 100ms, it took a lot of research and heads-down coding to get Nexus working. * Being **awake** to work as a team -- since this hackathon is both **in-person** and **through the weekend**, we had many sleepless nights to ensure we can successfully produce Nexus. * Working with **very** poor internet throughout the duration of the hackathon, we estimate it cost us multiple hours of development time. ## Accomplishments that we're proud of * Finishing our project and getting it working! We were honestly surprised at our progress this weekend and are super proud of our end product Nexus. * Learning a ton of new technologies we would have never come across without Cal Hacks. * Being able to code for at times 12-16 hours straight and still be having fun! * Integrating 100ms well enough to experience bullet-proof audio communication. ## What we learned * Tools are tools for a reason! Embrace them, learn from them, and utilize them to make your applications better. * Sometimes, more sleep is better -- as humans, sleep can sometimes be the basis for our mental ability! * How to work together on a team project with many commits and iterate fast on our moving parts. ## What's next for Nexus * Make Nexus rooms only open at a cadence, ideally twice each day, formalizing the "meeting" aspect for users. 
* Allow users to favorite or persist their favorite matches to possibly re-connect in the future. * Create more options for users within rooms to interact with not just their own audio and voice but other users as well. * Establish a more sophisticated and bullet-proof matchmaking service and algorithm. ## 🚀 Contributors 🚀
| | | | |
| --- | --- | --- | --- |
| [Jeff Huang](https://github.com/solderq35) | [Derek Williams](https://github.com/derek-williams00) | [Tom Nyuma](https://github.com/Nyumat) | [Sankalp Patil](https://github.com/Sankalpsp21) |
This is a simulation of astronomical bodies interacting gravitationally, forming orbits. Prize Submission: Best Design, Locals Only, Lost Your Marbles, Useless Stuff that Nobody Needs, Best Domain Name Registered With Domain.com Domain.com: [Now online!](http://www.spacesim2k18.org) [Video demo of website](https://youtu.be/1rnRuP8i8Vo) We wanted to make the simulation interactive, shooting planets, manipulating gravity, to make a fun game! This simulation also allows testing to see what initial conditions allowed the formation of our solar system, and potentially in the future, macroscopic astronomical entities like galaxies!
## Inspiration This project was inspired by my love of walking. We all need more outdoor time, but people often feel like walking is pointless unless they have somewhere to go. I have fond memories of spending hours walking around just to play Pokemon Go, so I wanted to create something that would give people a reason to go somewhere new. I envision friends and family sending mystery locations to their loved ones with a secret message, picture, or video that will be revealed when they arrive. You could send them to a historical landmark, a beautiful park, or just like a neat rock you saw somewhere. The possibilities are endless! ## What it does You want to go out for a walk, but where to? SparkWalk offers users their choice of exciting "mystery walks". Given a secret location, the app tells you which direction to go and roughly how long it will take. When you get close to your destination, the app welcomes you with a message. For now, SparkWalk has just a few preset messages and locations, but the ability for users to add their own and share them with others is coming soon. ## How we built it SparkWalk was created using Expo for React Native. The map and location functionalities were implemented using the react-native-maps, expo-location, and geolib libraries. ## Challenges we ran into Styling components for different devices is always tricky! Unfortunately, I didn't have time to ensure the styling works on every device, but it works well on at least one iOS and one Android device that I tested it on. ## Accomplishments that we're proud of This is my first time using geolocation and integrating a map, so I'm proud that I was able to make it work. ## What we learned I've learned a lot more about how to work with React Native, especially using state and effect hooks. ## What's next for SparkWalk Next, I plan to add user authentication and the ability to add friends and send locations to each other. Users will be able to store messages for their friends that are tied to specific locations. I'll add a backend server and a database to host saved locations and messages. I also want to add reward cards for visiting locations that can be saved to the user's profile and reviewed later. Eventually, I'll publish the app so anyone can use it!
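As a rough illustration of the proximity check described above, here is a sketch of the distance-and-reveal logic. The app itself does this with geolib in React Native; the Python below is only for readability, and the 30 m reveal radius is an assumed threshold.

```python
# Hedged sketch of SparkWalk's arrival check: great-circle distance to the
# secret destination, with the message revealed inside a small radius.
import math

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Distance in metres between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

REVEAL_RADIUS_M = 30  # illustrative threshold, not the app's exact value

def check_arrival(user: tuple, destination: tuple) -> bool:
    return haversine_m(*user, *destination) <= REVEAL_RADIUS_M

print(check_arrival((49.2827, -123.1207), (49.2830, -123.1210)))  # ~40 m away -> False
```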
## Inspiration
The inspiration for our project came from three of our members being involved with Smash in their community. With one of us an avid competitor, one an avid watcher, and one working in an office where Smash is played quite frequently, we agreed that the way Smash Bros. games were matched and organized needed to be leveled up. We hope that this becomes a frequently used bot for big and small organizations alike.

## How it Works
We broke the project up into three components: the front end made using React, the back end made using Golang, and a middle layer connecting the back end to Slack using StdLib.

## Challenges We Ran Into
A big challenge we ran into was understanding how exactly to create a bot using StdLib. There were many nuances that had to be accounted for. However, we were helped by amazing mentors from StdLib's booth. Our first specific challenge was getting messages to be ephemeral for the user that called the function. Another hurdle was getting DMs to work using our custom bot. Finally, we struggled to get the input from the buttons and commands in Slack to the back-end server. However, it was fairly simple to connect the front end to the back end.

## The Future for 'For Glory'
Due to the time constraints and difficulty, we did not get to implement a tournament function. This is a future goal because it would allow workspaces and other organizations that use a Slack channel to run a casual tournament that keeps the environment light-hearted, competitive, and fun. Our tournament function could also extend to help hold local competitive tournaments within universities. We also want to support different types of rankings in the future. One thing we want to add to the front end is a more interactive display for matches and tournaments, with live updates and useful statistics.
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.

## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm together.

## About
Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page to give access to all the tools on one screen and make transitions between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real time.

## Features
Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our inbuilt browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers.

## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads.

## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.

## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit. We want to add more relevant tools and widgets, and expand to other fields of work to increase our user demographic. We also want to include interface customization options so users can personalize their rooms.

Try it live here: <http://35.203.169.42/>

Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>

Thanks for checking us out!
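As a rough illustration of the room-based real-time sync described above, here is what the idea looks like with the Python Socket.IO server. The actual Study Buddy backend uses Node.js + Express with Socket.IO, so this is only an analogous sketch, and the event names are invented:

```python
# Analogous sketch of room-based sync with python-socketio (the real backend is Node.js).
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join_room(sid, data):
    room = data["room"]                      # the study-room URL slug
    sio.enter_room(sid, room)
    sio.emit("user_joined", {"sid": sid}, room=room, skip_sid=sid)

@sio.event
def whiteboard_update(sid, data):
    # Broadcast only the drawing delta to everyone else in the same room,
    # rather than the whole canvas, to keep per-message payloads small.
    sio.emit("whiteboard_update", data["delta"], room=data["room"], skip_sid=sid)

# Run behind an async-capable WSGI server, e.g.: gunicorn -k eventlet -w 1 module:app
```

Sending deltas instead of full state, and skipping the sender with `skip_sid`, is one way to address the "too much data / too many broadcasts" performance issue mentioned above.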
## Inspiration
Coming into this hackathon I wanted to create a project that I found interesting and one that I could see being used. Many of my past projects were interesting, but there were factors that would always keep them from being widely used. Often this was a barrier to entry: they required extensive setup for a fairly minimal reward. For Hack Western, one of my goals was to create something with a low barrier to entry. However, a low barrier to entry means nothing if the use is not up to par. I wanted to create a project that many people could use. Based on these two concepts, ease of use and wide-ranging usefulness, I decided to create a chat bot that automates answering science questions.

## What it does
What my project does is very simple. On the messaging platform Discord, you message my bot a question. It will attempt to search for a relevant answer. If one is found, it will give you a bit of context and the answer that it found.

## How I built it
I built this project in three main sections. The first section is my sleuther. This searches the web for various science-based questions as well as their answers. When a question is found, I use IBM's natural language processing API to determine tags that represent the topic of the question. All of this data is then stored in an Algolia database where it can be quickly accessed later. The second section of my project is the server. To implement this server I used StdLib to easily create a web API. When this API is accessed, it queries the Algolia database to retrieve any relevant questions and returns the best entry. The third and final part of my project is the front-end Discord bot. When you send the bot a message, it generates tags to determine the general topic of the question and uses these to query the Algolia index. It does this by calling the StdLib endpoint that was set up as the server. Overall these three sections combine to create my final project.

## Challenges I ran into
The first challenge that I ran into was unfamiliar technology. I had never used StdLib before and getting it to work was a struggle at times. Thankfully the mentors from StdLib were very helpful and allowed me to get my service up and running with not too much stress. The main challenge for this project, though, was trying to match questions. Since a user's questions and my database questions will not be worded the exact same way, a simple string match would not suffice. In order to match questions, the general meaning of the question needs to be found, and then those meanings can be roughly matched. To work around this I tried various methods before settling on a sort of tag-based system.

## Accomplishments that I'm proud of
To be honest this project was not my initial hackathon project. When I first started, the idea was very different. It had the same base theme of low barrier to entry and wide usability, but the actual subject was quite different. Unfortunately, partway through that project there was an insurmountable issue that did not really allow it to progress. This forced me to pivot and find a new idea. This pivot came many hours into the hackathon, which severely reduced the time for my current hack. Because of this, the accomplishment that I think I am most proud of is the fact that I am here, able to submit this hack.

## What I learned
Because I was forced to pivot on my idea partway through the hackathon, a fair amount of stress was created for me.
This fact really taught me the value of planning -- most particularly, determining the scope of the project. If I had planned ahead and written out everything that my first hack would have entailed, then many of the issues encountered in it could have been avoided.

## What's next for Science-Bot
I want to continue to improve my question-matching algorithm so that more questions can be matched. But beyond simply improving current functionality, I would like to broaden the scope of the questions my bot handles. This could mean either increasing the breadth or the depth of the questions. An interesting new topic would be questions about a very specific scientific area. I would likely need to be much more selective in the questions I match, and this is something I think would pose a difficult and interesting challenge.
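A toy version of the tag-overlap matching idea looks roughly like this. The real bot generates tags with IBM's NLP API and queries an Algolia index; here the tags and stored questions are hard-coded purely to keep the sketch self-contained:

```python
def score(question_tags, candidate_tags):
    """Jaccard-style overlap between the user's tags and a stored question's tags."""
    q, c = set(question_tags), set(candidate_tags)
    return len(q & c) / len(q | c) if q | c else 0.0

# In the real system these entries would come back from the Algolia index.
index = [
    {"q": "Why is the sky blue?", "a": "Rayleigh scattering of sunlight...",
     "tags": {"light", "scattering", "atmosphere"}},
    {"q": "What do mitochondria do?", "a": "They produce ATP for the cell...",
     "tags": {"cell", "energy", "biology"}},
]

def best_match(user_tags, threshold=0.3):
    """Return the highest-overlap entry, or None if nothing clears the threshold."""
    ranked = sorted(index, key=lambda e: score(user_tags, e["tags"]), reverse=True)
    top = ranked[0]
    return top if score(user_tags, top["tags"]) >= threshold else None

print(best_match({"light", "atmosphere", "color"}))
```

The threshold keeps the bot from answering confidently when the best overlap is still weak, which matches the "be more selective" direction described above.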
## Inspiration
Choose-your-own-adventure books and tabletop RPGs.

## What it does
Has Google Assistant act as a storyteller.

## How we built it
We built it using Firebase for data storage, StdLib for the backend connection, Google Home as the main front-end interface, and Dialogflow for understanding user input, and we used the Google sentiment analysis API to calculate a sentiment score for different stories.

## Challenges we ran into
It was hard to find a suitable database, so we hardcoded the data. It was also hard to decide on a specific topic; before this, we were trying to do something healthcare-related, a data visualization, and an audio diary.

## Accomplishments that we're proud of
A comprehensive machine learning process, a fully functioning backend, and how everyone was able to learn JavaScript from no background.

## What we learned
JavaScript, StdLib, Dialogflow.

## What's next for googleAdventure
That depends on you.
## Inspiration
We often cannot decide for ourselves on questions such as "Where should I get food?" or "What should I do today?". Now we can outsource our decision-making to a smart home.

## What it does
Utilizing historical user input as a reference point, we build a profile of the user's likes and dislikes. When a command is issued to Google Home, its tone is first analyzed with IBM Watson's NLP API, and based on the tone, a list of relevant words is used to scrape the news with the Google News API. This gives us a current list of topics relevant to the user's preferences. Finally, a decision is formulated from the multiple factors considered previously.

## How we built it
We tested our code as a proof of concept in Python, then translated and executed it in Node.js, because that was universally accepted by all the platforms we used. The whole project is operated from Firebase, all inputs are sent through Watson, and all decisions are sent to the neural network to improve its weightings.

## Challenges we ran into
Google Home had trouble connecting if it was more than 4 meters away from the Wi-Fi router. Syntax problems when translating from Python to JS. Time-complexity optimization issues for the Watson API when it is not run locally. Trouble converting categorical data when trying to train the NN.

## What we learned
Bring a sleeping bag.

## What's next for No Bored
IPO
## Inspiration
2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find them -- but what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today.

## What it does
A camera records you as you go about your day-to-day life, storing "comic book strip" panels containing images and context of what you're doing as you go about your life. When you want to remember something you can ask out loud, and it'll use OpenAI's API to search through its "memories" to bring up the location, the time, and what you were doing when you lost the item. This can help with knowing where you placed your keys, whether you locked your door or garage, and other day-to-day concerns.

## How we built it
The React-based UI records using your webcam, taking a screenshot every second and stopping at the 9-second mark before creating a 3x3 comic image. This was done because static images alone would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, we send it to OpenAI's turbo vision model, which then gives contextualized info about the image. This info is then sent to our Express.js service hosted on Vercel, which in turn parses the data and sends it to Cloud Firestore (stored in a Firebase database). To re-access this data, we use the browser's built-in speech recognition along with the SpeechSynthesis API in order to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by OpenAI, which classifies it as either a search for an action or an object lookup. It then searches through the database and speaks out loud, giving information in a naturalized response.

## Challenges we ran into
We originally planned on using a VR headset, webcam, Nest camera, or anything external with a camera which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we decided to make use of macOS's Continuity feature, using our iPhone camera connected to our MacBook as the primary input.

## Accomplishments that we're proud of
As a two-person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.js and Firebase, so we're proud of how fast we were able to learn!

## What we learned
We learned about the capabilities of OpenAI's turbo vision API, how to work together as a team, and how to sleep effectively on a couch with very little sleep.

## What's next for ReCall: Memories done for you!
We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue of the people they've met in the past to help them as they recover. We didn't have much context on these health problems, however, and limited scope, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
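The 3x3 "comic" compositing step described above can be sketched in a few lines. The real app does this in the React client before calling the vision model; this Pillow version is only an illustration, and the file names are hypothetical:

```python
from PIL import Image

def make_comic(frames, cols=3, rows=3, cell=(320, 240)):
    """Tile up to nine frames into one 3x3 sheet so a single vision-API call sees ~9 s of context."""
    sheet = Image.new("RGB", (cols * cell[0], rows * cell[1]))
    for i, frame in enumerate(frames[: cols * rows]):
        tile = frame.resize(cell)
        x, y = (i % cols) * cell[0], (i // cols) * cell[1]
        sheet.paste(tile, (x, y))
    return sheet

# One screenshot per second for nine seconds (hypothetical file names).
frames = [Image.open(f"shot_{i}.png") for i in range(9)]
make_comic(frames).save("comic.jpg", quality=80)
```

Downscaling each tile and batching nine frames into one image is what keeps both the request rate and the per-request payload manageable.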
## Inspiration
Wondering what Vine, Tinder, and Yik Yak would be like when combined.

## What it does
Users can take and upload videos, or explore videos uploaded nearby in a unique UI.

## How I built it
Android app with a Node.js back end.

## Challenges I ran into
Video streaming.

## Accomplishments that I'm proud of
Getting video streaming to work.

## What I learned
How to video stream.

## What's next for QuickVid
IDK
## Inspiration
Imagine a social networking app that uses your face as your unique handle/identifier rather than a username or @handle.

## What it does
Find out someone's name, interests, and pronouns by simply scanning their face! Users first register themselves in our database, providing a picture of themselves, their name, and a few of their interests. Using the search functionality, users can scan the area for anyone who has registered in the app. If a person's face is recognized in the search, their information will be displayed on screen!

## How we built it
The entire app was built with Android Studio, XML, and Java. The user registration functionality relies on Google Cloud Firebase, and the user search functionality uses the Microsoft Azure Face API.

## Challenges we ran into
Because Firebase returns data asynchronously, it was challenging to work with calls to Firebase and threads.

## Accomplishments that we are proud of
* Getting data asynchronously from Firebase
* Consistent facial verification between database photos and Microsoft Azure

## What we learned
* How to work with APIs from both Google and Microsoft
* Building Android applications

## What's next for first hello
Larger scaling, better performance, and displaying information in AR.
## Inspiration
We live in a place where the majority of auctions take place in person, in a third-party environment. So, we would like to create a platform where buyers can directly bid on the products sold by the seller.

## What it does
It provides a platform for users to sell and buy products without the involvement of a third party. Users can sell products as well as buy products without any hassle. Users can bid on all products other than their own. The user has control over the starting bid amount and when to sell the product.

## How we built it
We used the MERN stack and deployed it on GCP.

## Challenges we ran into
Deployment to the cloud.

## What we learned
Managing the inventory state and how the APIs work.

## What's next for Auctioneer
We would love to diversify the payment pipeline by including cryptocurrency payments.
## Inspiration
The inspiration for T-Error came from the common frustration that tech leads and developers face when debugging problems. Errors can occur frequently, but understanding their patterns and seeing what is really holding your team up can be tough. We wanted to create something that captures these errors in real time, visualizes them, and lets you write and seamlessly integrate documentation, making it easier for teams to build faster.

## What it does
T-Error is a terminal error-monitoring tool that captures and logs errors as developers run commands. It aggregates error data in real time from various client terminals and provides a frontend dashboard to visualize error frequencies and insights, along with the option to seamlessly add documentation. A feature we are really excited about is the ability to automatically run the commands in the documentation without needing to leave the terminal.

## How we built it
We built T-Error using:

* Custom shell: we implemented a custom shell in C++ to capture stderr and seamlessly interface with our backend.
* Backend: powered by Node.js, the server collects, processes, and stores error data in MongoDB.
* Frontend: developed with React.js, the dashboard visualizes error trends with interactive charts, graphs, and logs, as well as an embedded Markdown editor :).

## Challenges we ran into
One of the main challenges was ensuring the terminal wrappers were lightweight and didn't disrupt normal command execution while still effectively capturing errors. We spent hours trying to get bash scripts to do what we wanted, until we gave up and implemented a shell instead, which worked much better. Additionally, coming up with the UX for how best to deliver existing documentation was a challenge, but after some attempts we arrived at a solution we were happy with.

## Accomplishments that we're proud of
We're proud of building a fully functional MVP that successfully captures and visualizes error data in real time. Our terminal wrappers integrate seamlessly with existing workflows, and the error analysis and automatic documentation execution have the potential to significantly speed up development.

## What we learned
Throughout this project, we learned about the complexities of error logging across multiple environments and how to efficiently process large volumes of real-time data. We also gained experience integrating frontend and backend technologies, as well as diving into the lower layers of the tech stack and smoothly chaining everything together.

## What's next for T-Error
Going forward, there are a few features that we want to implement. First is error reproduction -- we could potentially gain more context about the error from the file system and previous commands, and use that context to help replicate errors automatically. We also want to automate the process of solving these errors -- as helpful as it is to have engineers write documentation, there is a reason there are gaps. This could be done using an intelligent agent for simple tasks, and more complex systems for others. We also want to better accommodate teams, allowing them to have groups where internal errors are tracked.
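The real capture layer is a custom C++ shell, but the "run the command, mirror its output, and ship non-zero exits to the backend" idea can be illustrated in a few lines of Python. The endpoint URL and payload shape below are invented for the sketch:

```python
import subprocess
import sys
import requests

BACKEND = "http://localhost:3000/errors"  # hypothetical T-Error ingest endpoint

def run_and_report(cmd):
    """Run a command, pass its output through to the user, and log stderr on failure."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    sys.stdout.write(proc.stdout)
    sys.stderr.write(proc.stderr)
    if proc.returncode != 0:
        # Ship a trimmed error record to the aggregation backend.
        requests.post(BACKEND, json={
            "command": " ".join(cmd),
            "exit_code": proc.returncode,
            "stderr": proc.stderr[-2000:],   # keep payloads small
        }, timeout=2)
    return proc.returncode

if __name__ == "__main__":
    sys.exit(run_and_report(sys.argv[1:]))
```

A wrapper like this buffers output rather than streaming it, which is exactly the kind of disruption the custom shell avoids; it is shown only to make the data flow concrete.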
## Inspiration
Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses, such as trend recognition, on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, Capital One, which has volumes of financial transaction data that are very difficult to parse manually, or even programmatically. We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data.

## What it does
On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses.

## How we built it
The front end was very simple. We created the platform using Next.js and React.js and hosted it on Vercel. The back end was created using Python, where we employed technologies such as Docker and MLBot to perform data analyses and return charts, which were then rendered on the front end using ApexCharts.js.

## Challenges we ran into
* It was one of our first times working live with multiple people on the same project, which advanced our understanding of how Git's features work.
* It was difficult to make the Docker server publicly available to our front end, since the server was hosted locally on the back end.
* Even once it was publicly available, it was difficult to figure out how to actually connect it to the front end.

## Accomplishments that we're proud of
* We were able to create a full-fledged, functional product within the allotted time we were given.
* We utilized our knowledge of how APIs work to incorporate multiple of them into our project.
* We worked positively as a team even though we had not met each other before.

## What we learned
* How to incorporate multiple APIs into one product with Next.
* A new tech stack.
* How to work simultaneously on the same product with multiple people.

## What's next for DataDaddy

### Short Term
* Broaden applicability to different types of datasets and statistical analyses.
* Add more compatibility with SQL/NoSQL commands generated from natural language.
* Attend more hackathons :)

### Long Term
* Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results.
* Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
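As a rough illustration of the kind of automated analysis the backend produces, here is a minimal pandas pass over an uploaded CSV. The real pipeline delegates this to an LLM via Docker/MLBot, and the column and file names here are hypothetical:

```python
import pandas as pd

def summarize(csv_path):
    """Basic profile of an uploaded dataset: shape, schema, descriptive stats, simple trends."""
    df = pd.read_csv(csv_path)
    report = {
        "rows": len(df),
        "columns": {col: str(dtype) for col, dtype in df.dtypes.items()},
        "describe": df.describe(include="all").to_dict(),
    }
    # Naive trend detection: correlation of each numeric column with row order.
    numeric = df.select_dtypes("number")
    order = pd.Series(range(len(numeric)))
    report["trends"] = {
        col: float(numeric[col].reset_index(drop=True).corr(order))
        for col in numeric.columns
    }
    return report

print(summarize("transactions.csv"))  # hypothetical upload
```

A report like this (plus the raw series) is the sort of structured output a charting layer such as ApexCharts can render on the front end.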
## Inspiration
At the University of Toronto, accessibility services are always in demand of more volunteer note-takers for students who are unable to attend classes. Video lectures are not always available, and most profs either don't post notes or post very imprecise, sparsely detailed ones. Without a doubt, the best way for students to learn is to attend in person, but what is the next best option? That is the problem we tried to tackle this weekend with notepal. Other applications include large-scale presentations such as corporate meetings, or use by regular students who learn better through visuals and audio rather than note-taking.

## What it does
notepal is an automated note-taking assistant that uses both computer vision and speech-to-text NLP to generate nicely typed LaTeX documents. We made a built-in file management system, and everything syncs with the cloud upon command. We hope to provide users with a smooth, integrated experience that lasts from the moment they start notepal to the moment they see their notes on the cloud.

## Accomplishments that we're proud of
Being able to integrate so many different services, APIs, and command-line SDKs was the toughest part, but also the part we tackled really well. This was our hardest project in terms of the number of services/tools we had to integrate, but a rewarding one nevertheless.

## What's Next
* A better command/cue system to avoid having to use direct commands each time the "board" refreshes.
* Creating our own word editor so the user can easily edit the document, then export it and share it with friends.

## See For Yourself
Primary: <https://note-pal.com>
Backup: <https://danielkooeun.lib.id/notepal-api@dev/>
## Inspiration
2020 has been a difficult year for all of us. It's been a year full of bad news, boredom, isolation, and a disconnect from friends and the community. The pandemic has forced us to spend most of the year in isolation and has, for the most part, negatively impacted both mental and physical wellbeing. We seem to have forgotten what day of the week it is, and every day feels repetitive and mundane.

## What it does
ProMotion is here to change that. We have a platform that allows you to use machine learning and augmented reality to connect with your community and make mundane tasks fun. We have created an iOS application that gives you instant feedback and helps you better yourself by correcting your form and tracking your progress in many sports, such as adjusting your jump shot form in basketball and your spike form in volleyball.

## How we built it
This is done through bleeding-edge tech and the latest machine learning technologies available from Apple, including MLKit and RealityKit. At a high level, we begin by training a machine learning model on specific action sequences -- for example, a jump shot in basketball. From there we train and learn what the ideal movements for this action are. For a jump shot, we may want to train on highlights of NBA players. We can then record a user completing this specific action and capture key pivot points on their body such as their knees, elbows, and wrists. We compare these pivot points to those of the ideal model and compute the difference. Lower levels of difference result in higher scores, which is signified by the user's rating in the top right of the screen. The user can record themselves completing any action and has the ability to play back their video and view the change in their rating over time.

More specifically, getting into the technical details, the tech stack is implemented as follows:

* MLKit along with the Vision framework running PoseNet, a human body pose detector, to capture 17 different body landmarks (wrists, ankles, knees, elbows, etc.) and perform the motion tracking. This happens live on device at 60fps, an incredibly difficult feat.
* This captured body landmark data is then fed to CreateML, where another neural network classifier classifies several types of actions (e.g. for volleyball, what a spike is; for basketball, what a jump shot is).
* Additional body landmark data is captured from ideal reference videos, including NBA highlights, etc., to give us ideal comparison models to evaluate the user's poses against, giving us a comprehensive rating system.
* Next, we had to map the comparison model to the user's model when actually executing in-app, in both the time and spatial dimensions, as some people are taller or differ slightly in other dimensions as well. This was a huge pain point, as we had to dynamically scale our ideal reference data.
* Finally, tying it all together, we had to visualize everything, including the error/difference regions. This required us to write custom renderers that draw the wireframe as a series of Bezier paths, along with freeform polygons to visualize the error regions. Getting this right and visually appealing was quite a challenge.

## Key Features

### Practice Mode
It's been difficult to stay in shape mentally and physically, and we all know that physical exercise alone can get mundane. Practice gets boring: there are no stimuli, no one to cheer you on, and no way to track your progress.
We recognize these major issues, and ProMotion's Practice Mode completely changes the game. In this mode, users can select from a large number of sports and practice essential skills and moves. There is an interactive UI in augmented reality which provides instant feedback about your form and your progress by comparing your moves to those of an ideal athlete.

### Challenge Mode
Our friends and community are easy to miss in these tough times. Even completing the simplest of tasks is a blast when doing it with those you care about. ProMotion's Challenge Mode allows users to record their practices, share their progress, and compete with friends. See how you stack up against your buddies on the leaderboard and share the fun of physical activity with your friend group. Friends can easily share and post their attempts, and you can replay them in augmented reality.

### Custom Challenges
As we've been alluding to, ProMotion isn't only for athletes. The pandemic has affected everyone, and ProMotion is here to help. We take our features one step further by allowing users to create custom motions instantly in our Create mode. Simply name your activity and submit a few video clips to train our ML action classifier. Then instantly share your motion and let your friends hop on the trend. No action is too simple; even brushing your teeth or drinking water can be made into a fun activity. COVID has made everything mundane, and ProMotion is rolling it back.

Furthermore, for those looking for a more practical use of Create mode, we help remote teachers and instructors evaluate and assess their students. Record complex yoga moves or workout techniques and share them with your class to help them from a distance. They can practice these moves on their own time and compare themselves to your example.

## What's next for ProMotion
We plan to add even more functionality, like commenting on your friends' videos or competing with them in real time. The sky is truly the limit with ProMotion.
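As an aside on the rating idea described in "How we built it": comparing a user's joint angles against a reference athlete's, frame by frame, can be sketched in a few lines. The real app does this on-device with Apple's frameworks, so the landmark format, joint names, and scoring scale below are purely illustrative:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, e.g. shoulder-elbow-wrist."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def form_score(user_frames, ideal_frames, joints):
    """Average per-frame angle error mapped to a 0-100 score (made-up scaling)."""
    errors = []
    for user, ideal in zip(user_frames, ideal_frames):
        for a, b, c in joints:   # each joint is a triple of landmark names
            errors.append(abs(joint_angle(user[a], user[b], user[c])
                              - joint_angle(ideal[a], ideal[b], ideal[c])))
    mean_err = float(np.mean(errors))
    return max(0.0, 100.0 - mean_err)   # zero angle error -> perfect 100

# Example joint triples; frames would be dicts mapping landmark names to (x, y) points.
joints = [("shoulder", "elbow", "wrist"), ("hip", "knee", "ankle")]
```

Using angles rather than raw coordinates sidesteps part of the scaling problem mentioned above, since angles are invariant to a person's height, though the time alignment still has to be handled separately.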
## Inspiration
Our inspiration came from a shared interest in sports and the unfortunate realization that getting better was getting harder! Private coaching was getting more expensive, and self-coaching was difficult and time-consuming. To learn proper basketball shooting form, an athlete would have to be willing to pay inflated prices for lessons or be left to record themselves, play, analyze the film, and scour the internet to get better. We realized that we needed a coach; we needed an AI-powered coach to make our jobs as athletes much more efficient. Inspired by the hit film *Coach Carter (2005)* and the Bostonian accent, we developed "Coach Cahtah"!

## What it does
Coach Cahtah takes a video input of a player shooting a basketball anywhere on the court and uses machine learning to track and trace the ball's path to the hoop, so that the user can better understand how their shooting form affects their field goal percentage.

## How we built it
We used Python for the backend of the project and the public computer vision library OpenCV to track the path of the basketball in the video. Instead of using hours of nonexistent basketball training data, we used a color-mapping method on the basketball to isolate its motion and cut our time to production exponentially.

## Challenges we ran into
First off, we had to come to terms with how difficult image recognition is. We did not have enough manpower or existing data to fulfill our intended goals, so instead we simplified our ambitions to reach an MVP. We also attempted to use the Terra API with the Apple Watch, but due to its closed-system nature, it was difficult to pull gyroscopic data from the device.

## Accomplishments that we're proud of
Fortunately, we were able to adapt very well considering the situation. Instead of using TensorFlow, we ended up using OpenCV, which was more beginner-friendly, and we were able to circumvent the hours of training data we would have needed to train a model by taking a shortcut through the whole process: we used color mapping to isolate the basketball and trace its path to the hoop instead.

## What we learned
Machine learning has INCREDIBLE opportunities to make our daily lives INCREDIBLY efficient. The problem is that it requires a strong understanding and high-quality training data. With time and effort, this tool can be used to create unique products.

## What's next for Coach Cahtah
Coach Cahtah's greatest asset is its expandability. We can integrate all of these programs into a single app to allow athletes to analyze their shooting on the court efficiently. We can also move into other sports like tennis.
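A condensed sketch of the colour-mapping approach: mask the ball's orange in HSV space, take the largest contour per frame, and accumulate its centres to draw the shot arc. The HSV thresholds and the video file name are rough guesses and would need tuning; this is an illustration, not the project's exact code:

```python
import cv2
import numpy as np

# Rough HSV range for a basketball's orange; real values depend on the gym lighting.
LOWER, UPPER = np.array([5, 120, 120]), np.array([20, 255, 255])

cap = cv2.VideoCapture("jumpshot.mp4")   # hypothetical input clip
path = []                                # ball centres, frame by frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.dilate(cv2.erode(mask, None, iterations=2), None, iterations=2)  # remove specks
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        if radius > 5:                   # ignore tiny orange blobs that aren't the ball
            path.append((int(x), int(y)))
    for point in path:                   # draw the accumulated arc so far
        cv2.circle(frame, point, 3, (0, 255, 0), -1)
    cv2.imshow("Coach Cahtah", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The appeal of this shortcut is exactly what the write-up describes: no labelled training data is needed, only a colour range and a little contour filtering.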
## Inspiration
Large corporations are spending more and more money on digital media advertising these days, but their data collection tools have not been improving at the same rate. Nike spent over $3.03 billion on advertising alone in 2014, which amounted to approximately $100 per second, yet they only received a marginal increase in profits that year. This is where Scout comes in.

## What it does
Scout uses a webcam to capture facial feature data about the user. It sends this data through a facial recognition engine in Microsoft Azure's Cognitive Services to determine demographic information, such as gender and age. It also captures facial expressions throughout an Internet browsing session, say a video commercial, and applies sentiment analysis machine learning algorithms to instantaneously determine the user's emotional state at any given point during the video. This is also done through Azure's Cognitive Services. Content publishers can then aggregate this data and analyze it later to determine which creatives were received positively and which generated negative sentiment. Scout follows an opt-in philosophy, so users must actively turn on the webcam to be a subject in Scout. We highly encourage content publishers to incentivize users to participate in Scout (something like $100/second) so that both parties can benefit from this platform. We also take privacy very seriously! That is why photos taken through the webcam by Scout are not persisted anywhere and we do not collect any personal user information.

## How we built it
The platform is built on top of a Flask server hosted on an Ubuntu 16.04 instance in Azure's Virtual Machines service. We use nginx, uWSGI, and supervisord to run and maintain our web application. The front end is built with the Materialize UI framework, and we use Plotly for complex analytics visualization. The facial recognition and sentiment analysis intelligence modules are from Azure's Cognitive Services suite, and we use Azure SQL Server to persist aggregated data. We also have a chatbot built on Azure's Bot Service so data analysts can quickly see insights.

## Challenges we ran into
**CORS, CORS, CORS!** Cross-Origin Resource Sharing was a huge pain in the head for us. We divided the project into three main components: the Flask backend, the UI/UX visualization, and the webcam photo collection and analysis. We each developed our modules independently of each other, but when we tried to integrate them, we ran into a huge number of CORS issues with the REST API endpoints on our Flask server. We were able to resolve this with a couple of extra libraries, but it was definitely a challenge figuring out where these errors were coming from.

SSL was another issue we ran into. In 2015, Google released a new WebRTC policy that prevented webcams from being accessed on insecure (HTTP) sites in Chrome, with the exception of localhost. This forced us to use OpenSSL to generate self-signed certificates and reconfigure our nginx routes to serve our site over HTTPS. As one can imagine, this caused havoc for our testing suites and our original endpoints, and forced us to re-sift through most of the code we had already written to accommodate the change in protocol. We don't like implementing HTTPS, and neither does Flask, apparently. On top of our code, we had to reconfigure the firewalls on our servers, which only added more time wasted in this short hackathon.
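For the CORS and HTTPS headaches described above, the usual fix on the Flask side is the flask-cors extension plus an SSL context. A minimal sketch follows; the route name, origin, and certificate file names are assumptions, not Scout's actual configuration:

```python
from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
# Allow the front end's origin (hypothetical) to call the REST API routes.
CORS(app, resources={r"/api/*": {"origins": "https://scout.example.com"}})

@app.route("/api/frames", methods=["POST"])
def ingest_frame():
    # Webcam captures arrive here before being forwarded to Azure Cognitive Services.
    payload = request.get_json()
    return jsonify({"received": True, "timestamp": payload.get("timestamp")})

if __name__ == "__main__":
    # Self-signed certs (generated with OpenSSL) satisfy Chrome's HTTPS-only webcam policy.
    app.run(ssl_context=("cert.pem", "key.pem"))
```

In production the TLS termination would normally sit in nginx rather than Flask, which matches the nginx/uWSGI setup described above.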
## Accomplishments that we're proud of
We were able to multi-process our consumer application to handle the massive amount of data we were sending back to the server (two photos taken by the webcam each second, each photo relatively high quality and memory-heavy). We were also able to get our chatbot to communicate with the REST endpoints on our Flask server, so any metric in our web portal is also accessible in Messenger, Skype, Kik, or whatever messaging platform you prefer. This allows marketing analysts who are frequently on the road to easily review the emotional data on Scout's platform.

## What we learned
When you stack cups, start with a 3x3 base and stack them in inverted directions.

## What's next for Scout
You tell us! Please feel free to contact us with your ideas, questions, comments, and concerns!
## Inspiration
Kimoyo is named after the kimoyo beads in Black Panther -- beads that allow you to start a 3D video call right in the palm of your hand. Hologram communication, or "holoportation" as we put it, is not a new idea in movies; similar scenes occur in Star Wars and Kingsman, for example. However, holoportation is certainly an up-and-coming idea in the real world!

## What it does
In the completed version of Kimoyo, users will be able to use an HTC Vive to view the avatars of others in a video call, while simultaneously animating their own avatar through inverse kinematics (IK). Currently, Kimoyo has a prototype IK system working, and has a sample avatar and sample environment to experience!

## How I built it
Starting this project with only a basic knowledge of Unity and no other VR experience (I wasn't even sure what the HTC Vive was!), I leaned on mentors, friends, and many YouTube tutorials to learn enough about the Vive to put together a working model. So far, Kimoyo has been built almost entirely in Unity using SteamVR, VRTK, and MakeHuman assets.

## Challenges I ran into
My lack of experience was a limiting factor, and I feel that I had to spend quite a bit of time watching tutorials, debugging, and trying to solve very simple problems. That being said, the resources available saved me a lot of time, and I feel that I was able to learn enough to put together a good project in the time available. The actual planning of the project -- deciding which hardware to use and reasoning through design problems -- was also challenging, but very rewarding as well.

## Accomplishments that I'm proud of
I definitely could not have built Kimoyo alone, and I'm really glad and very thankful that I was able to learn so much from the resources all around me. There have been bugs and issues and problems that seemed absolutely intractable, but I was able to keep going with the help of others around me!

## What's next for Kimoyo
The next steps for Kimoyo are to get a complete, working version up. First, we plan to expand the hand inverse kinematics so the full upper body moves naturally. We also plan to add additional camera perspectives and settings, integrate sound, begin work with a Unity network manager to allow multiple people to join an environment, and of course build and deploy an app. After that? Future steps might include writing interfaces for creating custom environments (including AR?) and custom avatars, as well as developing a UI involving the Vive controllers -- Kimoyo has so many possibilities!
## Inspiration
The present disconnect between an equation written on the board and the visual representation of that equation. As former Multivariable Calculus students, we struggled to grasp the complex equations thrown at us that were not trivial to conceptualize.

## What it does
This application is that connection. We interpret handwritten math equations using OCR within Wolfram's Mathematica platform. This data is then used to generate 3D models which are layered over a chosen surface in augmented reality. Essentially, you strap on your HoloLens, start the HoloWolfram app, then scan a math equation to instantly see the 3D model right on your desk.

## How we built it
HoloWolfram was built using the Wolfram Mathematica platform to parse the data obtained from a picture taken with the HoloLens. Once Wolfram was able to successfully read the equations we were writing out by hand on a standard sheet of paper, it generated a 3D model of the equation. This 3D model is loaded into Unity as a GameObject using the Unity Link beta that was graciously provided to us by Kyle from Wolfram. Once we were able to load the 3D model into Unity, it was just a matter of bringing all these separate pieces together and building the project to the HoloLens.

## Challenges we ran into
Each piece had its own pitfalls, and being new to all of the different technologies used in HoloWolfram's creation was not an advantage. Successfully building a minimum viable product to the HoloLens was perhaps our greatest feat, because there are just so many moving parts in an app such as this, and under the strict time constraints of a hackathon, decisions have to be made swiftly.

## Accomplishments that we're proud of
* Successfully using the Unity Link in its beta form
* Learning how to build applications for the HoloLens
* Diving deep into the Unity development environment

## What we learned
* Wolfram Mathematica is extremely powerful.

## What's next for HoloWolfram
* More interactivity
* Gamification
* Polished UI
* Publication to the Windows Store
## Inspiration
Virtual reality is a blooming technology. It has a bright future regardless of which sector we use it in, from medical training (holographic human body analysis) to entertainment. To me in particular, it's an accessory that I believe our civilization may one day be obsessed with, just like the television and the internet.

## What it does
This app is capable of closing the border between the real world and the "supposed to be" scientific-fantasy world. It uses the concepts of both augmented reality and virtual reality, bringing us one step closer to a different kind of hologram. In fact, this app is far superior at what it does to the $4,000 HoloLens, which relies on a projector at the side of the lens and has a very narrow field of view. This app opens up the virtual and augmented worlds in a way that makes it a far cheaper alternative than its counterpart, along with many other possibilities ranging from a virtual observatory to confidential information transfer in the form of holograms.

## How I built it
Using Unity, Vuforia, Google VR, the Android SDK, and a Google Cardboard, this app was designed to open up our own world to virtual reality, with augmented reality turned into accepted reality. For a while I had to work on the aspect ratio of the object file relative to the surface, and on getting the target photo properly detected by the camera. The connection and proper conversion of APK files took many trials due to the rendering and processing speed of the app. In the end it took me 25 hours to achieve a successful prototype.

## Challenges I ran into
The hardest part was setting up the target image and detecting it. The problem was discovered after 3 hours of brute-forcing: the image used to create the target image file had a significantly lower resolution than the printed one, which made the app's recognition sloppy and sometimes undetectable. Getting the app to work on the VR cardboard wasn't easy either. Due to the coagulation of pixels and many other rendering errors and bugs, the output on my Android device was drastically different from the simulations (which were perfectly normal). The object, most of the time, was hovering half a meter from the point of the target image, and sometimes was nowhere to be seen. This was fixed by relocating the image points through changing the coordinates.

## Accomplishments that I'm proud of
This is my first hackathon, and I can't believe how hard I have worked for this and finally completed my hack. On the first day I was planning an entirely different hack regarding fingerprints and security, but I got into deep trouble when I realized that the hardware collection had a fingerprint scanner but not the modules that are absolutely necessary for it to even be used. I had almost given up. I had little to no experience in app development, but I started learning the whole concept and idea from scratch, slowly. Finally, when I realized what I could do with a VR hack, I pulled all-nighters until I could present something worthy of this hackathon.

## What I learned
This whole week has been very educational.
I recently bought a MacBook and realized how easy things are when I can use the terminal on a Mac and run Vuforia exclusively for making augmented reality; on the other hand, I had to pay to use the macOS version of Adobe 123D Catch while the Windows one was free, and, most importantly, a two-finger swipe across the touchpad cost me a whole night of writing a different version of this essay/cover letter. All jokes aside, I learned a lot about app design and the use of different APIs, and I learned how to deal with bugs in app development. I also learned about many technologies that I never knew existed, like galvanic vestibular stimulation.

## What's next for Virtual-Augmented Reality
This project is merely a prototype compared to the future this idea has. Upcoming technologies capable of photogrammetry will give rise to higher-quality OBJ/3D files which, if incorporated into camera-based augmented reality, could confuse anyone about what is real and what is fantasy. People could actually see their wildest dreams, bringing a new era to our society.
## Inspiration
It is really easy for a university student to procrastinate and get distracted when they should be studying. As university students, we wanted to create an application that encourages students to stay focused while studying. In addition, our project is inspired by the mobile app Forest.

## What it does
Deer Timer works as follows: (1) A deer is on screen for the duration of the app's runtime. (2) The user can enter a certain amount of time they would like to study for, and the application will count down from this time. (3) If the user switches windows away from the application, the deer will spontaneously combust and slowly die. The user being away from the application window is our way of indicating that the user is getting distracted by something that isn't relevant to what they're supposed to be focusing on. (4) If the user never switches windows, then the adorable deer stays alive, and the user receives the satisfaction of saving them.

## How we built it
We used Python and the PyQt libraries. We also used various sources from the web for the sound effects, music, and graphics (cited below).

## Challenges we ran into
PyQt5 was a new package we had to learn, so there was somewhat of a learning curve. In addition, we learned DaVinci Resolve to help with some of the deer image creation.

## Accomplishments that we're proud of
We thought the screen detection (detecting window switching) would be difficult to implement, but it was actually the opposite, so we're proud of that. Also, making a user interface that connects every component we all made felt fulfilling. In addition, we think our deer is quite cute 🦌.

## What we learned
(1) PyQt5 (2) DaVinci Resolve

## What's next for Deer Timer
We plan on incorporating more ways to tell that the user has been distracted. An example is using the user's webcam and machine learning to detect whether the user has picked up their phone.

## Citations
Idea: inspired by the Forest App: <https://www.forestapp.cc/>
Deer Drawing: tutorial from <https://www.easypeasyandfun.com/how-to-draw-a-deer/>
Fire Greenscreen: <https://www.youtube.com/watch?v=mhKFAgv6umY>
Picture of DH in the thumbnail: <https://www.utm.utoronto.ca/facilities/building/deerfield-hall>
Music playing: <https://www.youtube.com/watch?v=IEEhzQoKtQU>
Music links:
1. rain: <https://www.qtfm.cn/channels/276506/programs/12076436/>
2. fire: <https://www.qtfm.cn/channels/402136/programs/17815663/>
3. bird: <https://www.qtfm.cn/channels/402136/programs/17816074/>
4. piano: <https://www.qtfm.cn/channels/402136/programs/17826879/>
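Returning to Deer Timer's focus-detection mechanic described above: in PyQt5, losing window focus can be caught via the application's `applicationStateChanged` signal, and the countdown is a `QTimer`. The sketch below is a simplified, hypothetical version of that wiring, not the project's actual code:

```python
import sys
from PyQt5.QtCore import Qt, QTimer
from PyQt5.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget

class DeerTimer(QWidget):
    def __init__(self, minutes=25):
        super().__init__()
        self.remaining = minutes * 60
        self.label = QLabel(self.fmt())
        self.label.setAlignment(Qt.AlignCenter)
        QVBoxLayout(self).addWidget(self.label)
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.tick)
        self.timer.start(1000)              # tick once per second

    def fmt(self):
        return f"{self.remaining // 60:02d}:{self.remaining % 60:02d}  🦌"

    def tick(self):
        self.remaining = max(0, self.remaining - 1)
        self.label.setText(self.fmt())

    def on_state_change(self, state):
        # Fires when the app gains or loses focus, i.e. when the user switches windows.
        if state != Qt.ApplicationActive:
            self.label.setText("🔥 The deer is on fire! Come back! 🔥")

app = QApplication(sys.argv)
win = DeerTimer()
app.applicationStateChanged.connect(win.on_state_change)
win.show()
sys.exit(app.exec_())
```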
## Inspiration
As students, we know that keeping on top of all your school work can be hard, especially when assignments, tests, and exams start building up. This app's inspiration is to help you be more efficient in your studying and build good habits.

## What it does
The program takes as input the dates of upcoming tests, assignments, and exams. Using this data, it works out what subject the user should prioritize each day to be ready for each assessment. Using the number of days until the assessment is due and the percentage weight of the assessment, it ranks each assessment type and subject from most important to least and tells the user the most important thing to study.

## How we built it
First, we talked about the design and the user interface; brainstorming was really useful. We decided to take all the weights and past tests of the assessments and record them. These weights are usually found in the course outline of the subject. Then, through careful revision and analysis, we made an algorithm that figures out how you should work to succeed and raise your average. It was implemented in Python; we were planning to make an app, but we needed a little more resources and time.

## Challenges we ran into
We wanted to make the application as detailed as possible, so it would be able to say which subject to study for and what type of assessment; we also wanted to add a daily timer feature that would suggest the number of hours of studying. This was a little difficult, as we were in a tight spot with time and resources. Finally, as we were working on making the application Android-friendly, the software we were using did not work and we were unable to turn it into an app, but the code worked well.

## Accomplishments that we're proud of
The final output of the code, and the process to get to the output the user sees, took a long time, and it was a proud moment when the code started to work.

## What we learned
We learned a lot about classes and objects, as well as parent and child classes.

## What's next for Success Prio (SP)
We have also been researching machine learning to personalize the application for each user, as every user has a different study style and is naturally better at some subjects than others. We would do this by introducing a factor into the ranking process for whether or not they need more or less time for certain subjects. We would also like the user to be able to input their grades after following our schedule, adjusting the factors for how good they are at each subject by training the model for each user. As well, we would like the application to be more powerful in managing the subjects and the hourly rate of studying.
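A bare-bones version of the ranking idea, scoring each assessment by its weight and how soon it is due, might look like this (the scoring formula and example data are made up for illustration; the real algorithm may differ):

```python
from datetime import date

def priority(assessment, today=None):
    """Higher score = study this first. Grows with weight and with shrinking days left."""
    today = today or date.today()
    days_left = max(1, (assessment["due"] - today).days)   # clamp to avoid division by zero
    return assessment["weight"] / days_left

assessments = [
    {"subject": "Calculus", "type": "exam",       "weight": 40, "due": date(2021, 12, 14)},
    {"subject": "Physics",  "type": "assignment", "weight": 10, "due": date(2021, 12, 3)},
    {"subject": "CS",       "type": "test",       "weight": 20, "due": date(2021, 12, 6)},
]

for a in sorted(assessments, key=priority, reverse=True):
    print(f"{a['subject']:10} {a['type']:11} score = {priority(a):.2f}")
```

A per-user "naturally better/worse at this subject" factor, as described in the What's next section, would simply multiply into this score.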
## Inspiration
With everything being done virtually these days, including this hackathon, we spend a lot of time at our desks and behind screens. It's more important now than ever before to take breaks from time to time, but it's easy to get lost in our activities. Studies show that breaks increase overall energy and productivity, and decrease exhaustion and fatigue. If only we had something to keep us from forgetting...

## What it does
The screen connected to the microcontroller tells you when it's time to give your eyes a break, or to move around a bit to get some exercise. Currently, it tells you to take a 20-second break for your eyes for every 20 minutes of sitting, and a few minutes of break to exercise for every hour of sitting.

## How we built it
The hardware includes an RPi 3B+, aluminum foil contacts underneath the chair cushion, a screen, and wires to connect all these components. The software uses the RPi.GPIO library for reading the signal from the contacts and the tkinter library for the GUI displayed on the screen.

## Challenges we ran into
Some Python libraries were written for Python 2 and others for Python 3, so we took some time to resolve these dependency issues. The compliant structure underneath the cushion had to be a specific size and rigidity to allow the contacts to move appropriately when someone gets up from or sits down on the chair. Finally, the contacts were sometimes inconsistent in the signals they sent to the microcontroller.

## Accomplishments that we're proud of
We built this system in a few hours and were successful in not spending all night or all day working on the project!

## What we learned
Tkinter takes some time to learn to properly utilize its features, and hardware debugging needs to be a very thorough process!

## What's next for iBreak
Other kinds of reminders could be implemented later, like reminders to drink water, or custom exercises that involve repeatedly standing up and sitting down.
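A trimmed-down sketch of the sensing loop: poll the foil contact through RPi.GPIO, accumulate sitting time, and trigger the two reminders. The pin number and the way the contact is wired (closed when seated) are placeholders; the real device also drives a tkinter GUI instead of printing:

```python
import time
import RPi.GPIO as GPIO

CONTACT_PIN = 17                    # placeholder BCM pin wired to the foil contacts
GPIO.setmode(GPIO.BCM)
GPIO.setup(CONTACT_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

sitting_seconds = 0
try:
    while True:
        seated = GPIO.input(CONTACT_PIN) == GPIO.LOW   # contacts close when someone sits down
        sitting_seconds = sitting_seconds + 1 if seated else 0
        if seated and sitting_seconds % (20 * 60) == 0:
            print("20 minutes up: rest your eyes for 20 seconds")
        if seated and sitting_seconds % (60 * 60) == 0:
            print("1 hour up: stand up and move around for a few minutes")
        time.sleep(1)
finally:
    GPIO.cleanup()                  # release the pins on exit
```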
## Inspiration
Inspired by Leap Motion applications.

## What it does
Users can use their gestures to control a motor, a speaker, and an LED matrix.

## How we built it
We use an Arduino to control the motor, speaker, and LED matrix, and Bluetooth to connect to a computer that is hooked up to the Oculus and the Leap Motion.

## Challenges we ran into
Putting an augmented reality overlay onto the things that we want to control.

## Accomplishments that we're proud of
Successfully controlling the components using gestures.

## What we learned
How to make use of the Oculus and Leap Motion.

## What's next for Augmented Reality Control Experience (ARCX)
People with disabilities could use this technology to control their devices, such as turning on lights and playing music.
## Inspiration
Video games evolved when the Xbox Kinect was released in 2010, but for some reason we reverted back to controller-based games. We are here to bring back the amazingness of movement-controlled games with a new twist -- re-innovating how mobile games are played!

## What it does
AR.cade uses a body-part detection model to track movements that correspond to controls for classic games that run in an online browser. The user can choose from a variety of classic games, such as Temple Run and Super Mario, and play them with their body movements.

## How we built it
* The first step was setting up OpenCV and importing a body-part tracking model from Google MediaPipe.
* Next, based on the positions of and angles between the landmarks, we created classification functions that detect specific movements, such as when an arm or leg is raised or the user jumps.
* We then correlated these movement identifications to keybinds on the computer. For example, when the user raises their right arm, it corresponds to the right arrow key.
* We then embedded some online games of our choice into our front end, and when the user makes a certain movement corresponding to a certain key, the respective action happens.
* Finally, we created a visually appealing and interactive frontend/loading page where the user can select which game they want to play.

## Challenges we ran into
A large challenge we ran into was embedding the video output window into the front end. We tried passing it through an API, and it worked with a basic plain video; however, the difficulties arose when we tried to pass the video with the body-tracking model overlaid on it.

## Accomplishments that we're proud of
We are proud of the fact that we have a functioning product, in the sense that multiple games can be controlled with body-part commands of our specification. Thanks to threading optimization, there is little latency between user input and video output, which was a fear when starting the project.

## What we learned
We learned that it is possible to embed other websites (such as simple games) into our own local HTML sites. We learned how to map landmark node positions into meaningful movement classifications considering positions and angles. We learned how to resize, move, and give priority to external windows such as the video output window. We learned how to run Python files from JavaScript to make automated calls to further processes.

## What's next for AR.cade
The next steps for AR.cade are to implement a more accurate body-tracking model in order to track more precise parameters. This would allow us to scale our product to more modern games that require more user inputs, such as Fortnite or Minecraft.
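A compressed sketch of the pose-to-keypress pipeline described above: MediaPipe Pose gives per-frame landmarks, simple position checks classify a movement, and a key tap is sent to the browser game. The landmark comparisons and the pynput key mapping here are illustrative; the project's actual classifiers also use joint angles:

```python
import cv2
import mediapipe as mp
from pynput.keyboard import Controller, Key

mp_pose = mp.solutions.pose
keyboard = Controller()

def tap(key):
    keyboard.press(key)
    keyboard.release(key)

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            # Wrist above shoulder (smaller y = higher in the image) maps to an arrow key.
            if lm[mp_pose.PoseLandmark.RIGHT_WRIST].y < lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].y:
                tap(Key.right)
            if lm[mp_pose.PoseLandmark.LEFT_WRIST].y < lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y:
                tap(Key.left)
        cv2.imshow("AR.cade", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```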
## Inspiration We attended an AR workshop and thought it would be interesting to do an AR project with Unity. Initially, we hoped to make an educational tool for Physics classrooms, but later we found the idea hard to implement and changed our idea to making an AR game. ## What it does LIDAR Marble Duel is a 1v1 game where players take turns using tilt control to move the marble from a spawn position to a goal position. But watch out! The game takes into account the terrain of the physical world, so a physical obstacle in real life has a collision size! After each round, players can place various kinds of virtual obstacles (for example, bumpers, spikes, and fans) to increase the difficulty. If the marble touches spikes three times, the player is out. The first player who fails to move the marble to the goal position loses, and the opponent wins. ## How we built it LIDAR Marble Duel is built with Unity. ## Challenges we ran into We had difficulty manipulating quaternions while implementing tilt control. Also, we only had one iOS device with LIDAR, which made testing difficult. ## Accomplishments that we're proud of It's our first time dealing with AR, and we made a functional (and fun!) game! ## What we learned Never trust Euler angles. Aside from that, we learned the basics of AR development using Unity and how to deploy a build to an iOS device. ## What's next for LIDAR Marble Duel We plan to add more features (e.g., more virtual obstacles) and improve the UI of the game before submitting it to the App Store.
winning
## Inspiration We saw a short video on a Nepalese boy who had to walk 10 miles each way for school. From this video, we wanted to find a way to bring unique experiences to students in constrained locations. This could be for students in remote locations, or in cash-strapped low-income schools. We learned that we all share a passion for creating fair learning opportunities for everyone, which is why we created Magic School VR. ## What it does Magic School VR is an immersive virtual reality educational platform where you can attend one-on-one lectures with historical figures, influential scientists, or the world's best teachers. You can have Albert Einstein teach you quantum theory, Bill Nye the Science Guy explain the importance of mitochondria, or Warren Buffett educate you on investing. **Step 1:** Choose a subject *(physics, biology, history, computer science, etc.)* **Step 2:** Choose your teacher *(Elon Musk, Albert Einstein, Neil deGrasse Tyson, etc.)* **Step 3:** Choose your specific topic *(Quantum Theory, Data Structures, WWII, the nitrogen cycle, etc.)* **Step 4:** Get immersed in your virtual learning environment **Step 5:** Examination *(small quizzes, short answers, etc.)* ## How we built it We used Unity, the Oculus SDK, and Google VR to build the VR platform, as well as a variety of tools and APIs such as: * The Lyrebird API to recreate Albert Einstein's voice. We trained the model by feeding it audio data. Through machine learning, it generated audio clips for us. * Cinema 4D to create and modify 3D models. * Adobe Premiere to put together our 3D models and speech, as well as to chroma-key mask objects. * Adobe After Effects to create UI animations. * C# to code camera instructions, displays, and interactions in Unity. * Hardware used: Samsung Gear VR headset, Oculus Rift VR headset. ## Challenges we ran into We ran into a lot of errors with deploying Magic School VR to the Samsung Gear headset, so instead we used the Oculus Rift. However, we had hardware limitations when it came to running the Oculus Rift off our laptops, as we did not have HDMI ports that connected to dedicated GPUs. This led to a lot of searching around trying to find a desktop PC that could run Oculus. ## Accomplishments that we're proud of We are happy that we got the VR to work. Coming into QHacks we didn't have much experience in Unity, so a lot of hacking was required :) Every little accomplishment motivated us to keep grinding. The moment we managed to display our program in the VR headset, we were mesmerized and in love with the technology. We experienced first hand how impactful VR can be in education. ## What we learned * Developing with VR is very fun!!! * How to build environments, camera movements, and interactions within Unity * You don't need a technical background to make cool stuff. ## What's next for Magic School VR Our next steps are to implement eye-tracking engagement metrics in order to see how engaged students are with the lessons. This will help give structure to create more engaging lesson plans. In terms of expanding it as a business, we plan on reaching out to VR partners such as Merge VR to distribute our lesson plans, as well as to reach out to educational institutions to create lesson plans designed for the public school curriculum. [via GIPHY](https://giphy.com/gifs/E0vLnuT7mmvc5L9cxp)
## Inspiration With everything being done virtually these days, including this hackathon, we spend a lot of time at our desks and behind screens. It's more important now than ever before to take breaks from time to time, but it's easy to get lost in our activities. Studies show that breaks increase overall energy and productivity and decrease exhaustion and fatigue. If only we had something to keep us from forgetting... ## What it does The screen connected to the microcontrollers tells you when it's time to give your eyes a break, or to move around a bit to get some exercise. Currently, it tells you to take a 20-second break for your eyes for every 20 minutes of sitting, and a few minutes of break to exercise for every hour of sitting. ## How we built it The hardware includes an RPi 3B+, aluminum foil contacts underneath the chair cushion, a screen, and wires to connect all these components. The software includes the RPi.GPIO library for reading the signal from the contacts and the tkinter library for the GUI displayed on the screen. ## Challenges we ran into Some Python libraries were written for Python 2 and others for Python 3, so we took some time to resolve these dependency issues. The compliant structure underneath the cushion had to be a specific size and rigidity to allow the contacts to move appropriately when someone gets up/sits down on the chair. Finally, the contacts were sometimes inconsistent in the signals they sent to the microcontrollers. ## Accomplishments that we're proud of We built this system in a few hours and were successful in not spending all night or all day working on the project! ## What we learned Tkinter takes some time to learn to properly utilize its features, and hardware debugging needs to be a very thorough process! ## What's next for iBreak Other kinds of reminders could be implemented later, like a reminder to drink water or some custom exercises that involve sitting up/down repeatedly.
# EasyCampus Download our Unity Project in this Google Drive Link: <https://drive.google.com/open?id=1G7KM3N9Tv7HMqMhF524LlaOv-tWK5p8Y> ## Inspiration When we were freshmen, we had a hard time getting ourselves familiarized with the campus: we didn't know which classes were in which buildings, and even if we sometimes knew the name of the building, we didn't know how to get there as fast as possible. Even if we had a campus map, it was very confusing and abstract. As a result, we spent a lot of time finding buildings and sometimes we were even late for classes because of it. Therefore, in order to improve freshmen's experience, we want to use VR to help people feel and know the campus in a better way. Moreover, visitors can also get familiar with the campus before they visit. ## What it does * This VR application shows you around the campus. * It automatically walks you through the shortest path from where you are to the building you want to go to. * If you just say where you want to go or a course you take, it can recognize the words you say and lead you to the building. * It displays a description of a building near you and courses related to it. ## How we built it We used Unity to build the campus and wrote scripts in C# to control VR-related behaviors. We transformed the campus into a graph, and then utilized the Bellman-Ford algorithm to find the best path to buildings. For voice control, we used the Microsoft Azure Cognitive Services Speech API. ## Challenges we ran into * Adjusting the camera to the right position to prevent users from feeling dizzy when using this app. To fit both the first-person controller and the Cardboard camera, we used the gyroscope to build a distinguisher between mouse motion and CardboardHead. * Even though we learned the shortest path algorithms at school, we were usually given a well-formatted graph. It was much harder to transform a campus map into nodes and edges because the shapes of buildings and roads were difficult to parse. * Building models in Unity to reflect the campus. It took a lot of effort for us to adjust details: lights, wind, and locations of buildings to make the campus more vivid and attractive. * Adjusting the positions of texts to display when users are near a certain building. ## Accomplishments that we're proud of * Built buildings and natural landscapes in Unity. * Successfully figured out the algorithm to find the shortest path on the campus. * Utilized voice control. ## What we learned * Adjusting the VR camera in Unity. * Adding details to models in Unity. * Using voice control. ## What's next Incorporate the app on mobile phones or HoloLens. Add more details to the campus: slopes, stairs, and the interiors of buildings.
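The routing step boils down to running Bellman-Ford over a graph built from the campus map. The project implements this in C# inside Unity; the sketch below shows the same algorithm in Python with a made-up toy graph, purely to illustrate the relax-all-edges idea.

```python
# Language-agnostic Bellman-Ford sketch in Python (the project's version is in C#).
# Building names and distances below are made-up placeholders.
def bellman_ford(edges, nodes, source):
    """Return shortest distances and predecessors from `source` over weighted edges."""
    dist = {n: float("inf") for n in nodes}
    prev = {n: None for n in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):          # relax all edges |V| - 1 times
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v], prev[v] = dist[u] + w, u
                updated = True
        if not updated:
            break
    return dist, prev

def path_to(prev, target):
    """Walk the predecessor chain back from the target to rebuild the route."""
    path = []
    while target is not None:
        path.append(target)
        target = prev[target]
    return list(reversed(path))

# Placeholder campus graph: (from, to, walking distance in meters)
edges = [("Gate", "Library", 120), ("Gate", "Gym", 200),
         ("Library", "EngBuilding", 90), ("Gym", "EngBuilding", 60)]
nodes = {"Gate", "Library", "Gym", "EngBuilding"}
dist, prev = bellman_ford(edges, nodes, "Gate")
print(path_to(prev, "EngBuilding"), dist["EngBuilding"])
```

Bellman-Ford is a reasonable fit here because the campus graph is small and the algorithm is simple to port into a Unity script.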
winning
## Inspiration Have you ever wanted to listen to music based on how you're feeling? Now, all you need to do is message MoodyBot a picture of yourself or text your mood, and you can listen to the Spotify playlist MoodyBot provides. Whether you're feeling sad, happy, or frustrated, MoodyBot can help you find music that suits your mood! ## What it does MoodyBot is a Cisco Spark Bot linked with Microsoft's Emotion API and Spotify's Web API that can detect your mood from a picture or a text. All you have to do is click the Spotify playlist link that MoodyBot sends back. ## How we built it Using Cisco Spark, we created a chatbot that takes in portraits and gives the user an optimal playlist based on his or her mood. The chatbot itself was implemented on built.io, which controls feeding image data through Microsoft's Emotion API. Microsoft's API outputs into a small Node.js server in order to compensate for the limited features of built.io, like its limitations when importing modules. From the external server, we use the moods classified by Microsoft's API to select a Spotify playlist using Spotify's Web API, which is then sent back to the user on Cisco Spark. ## Challenges we ran into Spotify's Web API requires a new access token every hour. In the end, we were not able to find a solution to this problem. Our inexperience with Node.js also led to problems with concurrency. We had problems with built.io having limited APIs, which hindered our project. ## Accomplishments that we're proud of We were able to code around the fact that built.io would not encode our images correctly. Built.io was also unable to support other solutions to this problem that we tried to use. ## What we learned Sometimes, the shortcut is more work, or it won't work at all. Writing the code ourselves solved all the problems we were having with built.io. ## What's next for MoodyBot MoodyBot has the potential to have its own app and automatically open the Spotify playlist it suggests. It could also connect over Bluetooth to a speaker.
[Repository](https://github.com/BradenC82/A_Treble-d_Soul/) ## Inspiration Mental health is becoming decreasingly stigmatized and more patients are starting to seek solutions to well-being. Music therapy practitioners face the challenge of diagnosing their patients and giving them effective treatment. We like music. We want patients to feel good about their music and thus themselves. ## What it does Displays and measures the qualities of your 20 most liked songs on Spotify (requires you to log in). Seven metrics are then determined: 1. Chill: calmness 2. Danceability: likeliness to get your groove on 3. Energy: liveliness 4. Focus: helps you concentrate 5. Audacity: absolute level of sound (dB) 6. Lyrical: quantity of spoken words 7. Positivity: upbeat These metrics are given to you in bar graph form. You can also play your liked songs from this app by pressing the play button. ## How I built it Using Spotify API to retrieve data for your liked songs and their metrics. For creating the web application, HTML5, CSS3, and JavaScript were used. React was used as a framework for reusable UI components and Material UI for faster React components. ## Challenges I ran into Learning curve (how to use React). Figuring out how to use Spotify API. ## Accomplishments that I'm proud of For three out of the four of us, this is our first hack! It's functional! It looks presentable! ## What I learned React. API integration. ## What's next for A Trebled Soul Firebase. Once everything's fine and dandy, registering a domain name and launching our app. Making the mobile app presentable.
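The metrics above map fairly directly onto Spotify's audio-features fields. The project does this in JavaScript/React; the following is a hedged Python sketch of one plausible mapping, where the formulas for Chill and Focus are assumptions rather than the app's actual definitions.

```python
# Illustrative Python sketch (the project itself is React/JavaScript): pull the user's
# 20 most recently liked songs and average Spotify's audio features into rough versions
# of the app's metrics. The metric formulas marked below are assumptions.
import requests

API = "https://api.spotify.com/v1"

def liked_track_metrics(access_token: str) -> dict:
    headers = {"Authorization": f"Bearer {access_token}"}
    liked = requests.get(f"{API}/me/tracks", headers=headers,
                         params={"limit": 20}).json()["items"]
    ids = ",".join(item["track"]["id"] for item in liked)
    features = requests.get(f"{API}/audio-features", headers=headers,
                            params={"ids": ids}).json()["audio_features"]
    features = [f for f in features if f]           # skip tracks with no analysis
    avg = lambda key: sum(f[key] for f in features) / len(features)
    return {
        "danceability": avg("danceability"),
        "energy": avg("energy"),
        "positivity": avg("valence"),               # Spotify calls this "valence"
        "lyrical": avg("speechiness"),
        "audacity_db": avg("loudness"),             # loudness is reported in dB
        "chill": 1.0 - avg("energy"),               # assumed proxy for calmness
        "focus": avg("instrumentalness"),           # assumed proxy for focus
    }
```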
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves: enabling people to learn more about themselves and how they feel. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps the essential rite of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning on integrating a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
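A minimal sketch of the brainwave-to-prompt step might look like the following, assuming a window of raw EEG samples has already been read from the board. The alpha/beta band boundaries are standard, but the ratio thresholds and the keyword choices are illustrative assumptions, not the team's actual mapping.

```python
# Hedged sketch of the mood-to-prompt step: given a window of raw EEG samples (already
# read off the OpenBCI board), estimate alpha/beta band power with an FFT and turn the
# ratio into Stable Diffusion prompt keywords. Thresholds and keywords are assumptions.
import numpy as np

def band_power(samples: np.ndarray, fs: float, low: float, high: float) -> float:
    """Mean spectral power of the signal between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    power = np.abs(np.fft.rfft(samples)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return float(power[mask].mean())

def mood_keywords(samples: np.ndarray, fs: float = 200.0) -> str:
    alpha = band_power(samples, fs, 8, 13)    # associated with relaxed / calm states
    beta = band_power(samples, fs, 13, 30)    # associated with alert / stressed states
    ratio = alpha / (beta + 1e-9)
    if ratio > 1.5:
        return "calm, serene landscape, soft pastel colors, watercolor"
    if ratio < 0.7:
        return "intense, dramatic storm, bold contrast, expressionist painting"
    return "dreamlike, abstract shapes, warm ambient light"

# The returned keyword string would then be sent as the prompt to the Stable Diffusion model.
```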
partial
## Inspiration While we were thinking about the sustainability track, we realized that one of the biggest challenges faced by humanity is carbon emissions, global warming, and climate change. According to Dr. Fatih Birol, IEA Executive Director - *"Global carbon emissions are set to jump by 1.5 billion tonnes this year. This is a dire warning that the economic recovery from the Covid crisis is currently anything but sustainable for our climate."* With this concern in mind, we decided to work on a model which could serve as a small, compact carbon-capturing system to reduce the carbon footprint around the world. ## What it does The system is designed to capture CO2 directly from the atmosphere using microalgae as our biofilter. ## How we built it Our plan was to first develop a design that could house the microalgae. We designed a chamber in Fusion 360 which we later 3D printed to house the microalgae. The air from the surroundings is directed into the algal chamber using an aquarium aerator. The pumped-in air moves into the algal chamber through an air stone bubble diffuser, which allows the air to break into smaller bubbles. These smaller air bubbles make the CO2 sequestration easier by giving the microalgae more time to act upon it. We have made a spiral design inside the chamber so that the bubbles travel upward through the chamber in a spiral fashion, giving the microalgae even more time to act upon it. This continuous process would, in due course, lead to the capture of CO2 and the production of oxygen. ## Challenges we ran into 3D printing the parts of the chamber within the specified time. Getting our hands on enough microalgae to fill up the entire system in its optimal growth period (log phase) for the best results. Making the chamber leak-proof. ## Accomplishments that we're proud of The hardware that we were able to design and build over the stipulated time. Developing a system which could actually bring down CO2 levels by utilizing the unique qualities of microalgae. ## What we learned We came across a lot of research papers describing the best use of microalgae in its role of capturing CO2. Time management: we learned to design and develop a system from scratch in a short period. ## What's next for Aria We plan to conduct more research using microalgae and enhance the design of the existing system we built so that we can increase the carbon capture efficiency of the system. Keeping in mind deteriorating indoor air quality, we also plan to integrate it with inorganic air filters so that it could help in improving overall indoor air quality. We also plan to conduct research on how much area one unit of Aria covers.
## Inspiration Mood tracking apps are known to help many individuals dealing with mental health struggles understand and learn about their mood changes, improve their mood, and manage their mental illnesses (AMIA Annu Symp Proc. 2017; 2017: 495–504). However, many people have trouble fitting the task of charting mood into their daily routine -- recording daily mood seems unimportant, many individuals struggling with their mental health have low motivation to accomplish this kind of mundane task, it's a small task that's easy to push off indefinitely, and it serves as a daily reminder of their difficulties or disabilities. ## What it does Pilot encourages individuals to fill out their mood tracker, and it helps them do so in an efficient, natural, and engaging way. And while many suffering from low mood find themselves stuck in self-propagating cycles of negative thinking and isolation, Pilot strives to boost mood more effectively than current trackers by actively listening and responding, and by introducing people to unique places and cultures around the world! Pilot serves as a browser landing page, and each day a unique picture of a mystery location encourages users to engage with the app in order to identify where the image was taken. Users are prompted to discuss the events of the day out loud, though they can enter text if they wish -- this style of information is more efficient and natural than the process required by most mood trackers, and encourages comprehensiveness and candor. Pilot then responds to the individual's sentiment and prompts them to reflect on their current mood further with a probing question. Once the user has responded to these questions, Pilot recommends how the user may be feeling on three different scales, mood, anxiety, and cynicism, allowing the user to adjust as they see fit. Once this step is complete, the destination is revealed and the user can explore the location they were matched to for that day. Pilot also provides users with a clean environment with which to view their mood and mental health history. ## How I built it We used Watson for sentiment analysis, Rev for speech-to-text, and Amadeus to identify sights and Mapbox and Wikipedia to display information about locations. ## Accomplishments that I'm proud of We're extremely proud of all the features we were able to pack into Pilot in 24 hours.
## Inspiration It all started when I said it was impossible to get access to satellite data. I started my search and found plenty of open sources, either from space agency data or other online tools. First, I identified a disaster to work on: algal blooms. I chose it because of its riskiness, as blooms are often instigated by pollution and changing temperatures and can kill a variety of marine and freshwater life through eutrophication. ## What it does Algal blooms affect coastal communities. I came up with a tracking mechanism to measure the presence of harmful algal blooms (HABs). The use of remote sensing (RS) products, drawing on a variety of available NASA satellite imagery, other open sites, and geospatial information, helps address this challenge effectively. I noticed that chlorophyll is one major variable in algal blooms in the area; using the Ocean and Land Color Instrument (OLCI), I could capture signatures of the biogeochemical variables that affect algal bloom growth. I developed an algorithm that uses AI to analyze the data and, according to the presence of the causative factors, generate early warning signals for detected hazards. Finally, I used a visual interface (GUI) to understand the impacts of the phenomenon. ## How we built it Usually, these blooms give a distinct coloration visible in imagery, such as the red tide, although the coloration does vary depending on the type of bloom. Given the importance of knowing how these blooms affect aquatic life, remote sensing techniques using a variety of available imagery have been developed. Variation in chlorophyll is one major variable in algal blooms in the area; the Sentinel-3 Ocean and Land Color Instrument (OLCI) has been designed to capture signatures of the biogeochemical variables that affect algal bloom growth. My idea can be divided into two parts: the first part is gathering information using geographic information systems (GIS) and remote sensing, and collecting sufficient information about the disaster, its causes, and everything related to it. The second part is developing an algorithm using artificial intelligence to analyze the data and generate early warning signals for detected hazards. * GIS: The GIS will collect, save, retrieve, process, analyze, and display spatial data and information. It will produce maps and extract information, using DBMS software for data management and analysis, and AutoCAD to design and display the data. * RS (Remote Sensing): I will collect information about the phenomenon using satellites. This works through an interaction between electromagnetic energy (the light source) and the features of the Earth's surface to be photographed; special imaging systems then record the reflected energy. I found a prior solution that depends on the Integrated Ocean Observing System: satellites, buoys on the surface, and sensors on the ocean floor collect data on ocean color and currents, and when algal blooms are predicted, scientists track them to estimate where they are travelling. But to detect the disaster before it happens, I decided to look at past satellite images regarding fossil fuel factors, agricultural land, and marine population, and then collect statistics on nitrogen and phosphorus presence in the air, lakes and rivers, coasts and bays, groundwater, and drinking water. Those factors play a crucial role in nutrient pollution, which in turn causes algal blooms. The more data we get, the more accurate we can be.
I processed the data through imagery processing. These data are put into an algorithm in which several conditions are tested; according to the statistics, it can estimate the predicted timing, depending on how long the bloom took to develop in the past in the presence of those factors. So far, I have completed the image processing and the data collection from the satellites. Now, I am working on the exact algorithm and trying to experiment with it to show how accurate it could be. ## Challenges we ran into Finding NASA open resources to identify the differences in colors of algal blooms. ## Accomplishments that we're proud of Working solely on imagery processing and collecting data from satellites. ## What we learned I learned how to use AI algorithms to process and detect hazards. ## What's next for Earthling Developing a more sophisticated algorithm to predict the hazard more quickly and more efficiently.
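As one hedged illustration of the imagery-processing stage, the sketch below computes the Normalized Difference Chlorophyll Index (NDCI) from two reflectance bands and flags pixels above a threshold as a possible bloom. The band choice and the 0.2 threshold are assumptions for demonstration, not the project's calibrated values.

```python
# Minimal sketch, assuming the relevant bands have already been read into NumPy arrays
# of reflectance values. NDCI is a common chlorophyll proxy; the 0.2 alert threshold
# below is an assumption for illustration.
import numpy as np

def ndci(red: np.ndarray, red_edge: np.ndarray) -> np.ndarray:
    """NDCI = (RedEdge - Red) / (RedEdge + Red); higher values track chlorophyll."""
    return (red_edge - red) / (red_edge + red + 1e-9)

def bloom_mask(red: np.ndarray, red_edge: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Boolean mask of pixels whose chlorophyll proxy exceeds the alert threshold."""
    return ndci(red, red_edge) > threshold

def bloom_fraction(red: np.ndarray, red_edge: np.ndarray) -> float:
    """Share of the scene flagged as a possible bloom, usable as an early-warning signal."""
    return float(bloom_mask(red, red_edge).mean())
```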
partial
## Inspiration As AR develops and becomes more discreet, we are going to see it more and more in everyday life. We were excited by the social side of AR, and how integrating social networks into real-life interactions brings the "social" side of social networking back. We wanted to find a way to identify humans through facial tracking/recognition and bring up a known set of social characteristics to augment your interaction. ## What it does Once you've talked to a person for a certain amount of time, SocialEyes recognizes and stores their face on the Android app. It groups those faces by person and you then have the ability to connect their face to their Facebook account. From then on, whenever you encounter that person, SocialEyes tracks and recognizes the face and brings up his or her information in a HUD environment. SocialEyes can even tell you the person's heart rate as you are talking to them. ## How I built it We tuned Haar cascades trained for finding faces in OpenCV to locate the positions of heads in space from the view of the camera of our Meta Augmented Reality (AR) Headset. If we've never seen the face before, we send the face to our back end, which can be accessed and edited using our companion Android app. From here, faces are sent to Azure, where they are grouped using Project Oxford. The groups can be tagged using our Android app, and then linked to Facebook using the Facebook API. The next time that we see the face, we can send the face to our back end, which then sends the face to Azure for identification (among the different tagged face groups we've accumulated). The back end transmits information about the person originally grabbed from the Facebook API back to the computer running the headset, which then displays the information on the AR headset next to the identified person's head. At the same time, we also calculated heart rates of subjects given only the AR headset's video feed using Eulerian magnification, a technique originally developed in MIT's Computer Science and Artificial Intelligence Laboratory. We captured groups of pixels located on people's foreheads, found using a combination of Haar cascades for the eyes and head. We then manipulated these captured groups of pixels using a signal processing library (part of OpenMDAO) to calculate the subject's heart rate (after a small calibration period). ## Challenges I ran into The scope of this project presented the most issues--we were working on basically three different projects that all had to come together and function smoothly. Latency was easily the largest issue we had that covered all three projects. We had to use computer vision to detect and track heads in the frame, while simultaneously updating the UI and sending off our images for external processing in Azure. Concurrently, we had to find the heart rate through image processing. We were also working with a Meta kit that is still only for developers, so there are limitations on things like field of view and resolution that we had to work around. The facial tracking had its own latency issues to contend with as well. ## Accomplishments that I'm proud of We're most proud of the sheer complexity of this project. There was so much to do on so many different platforms that we couldn't be sure anything would work together smoothly. Because it was so multifaceted, we had to work evenly as a team and strategically divide up tasks, so we're also proud of how well the team worked together.
## What I learned We learned a lot about Azure and Facebook Graph API, which were both instrumental in our project's success. We had to learn a lot in a very short amount of time, but both ended up working flawlessly with our product. ## What's next for SocialEyes AR What's most exciting about this project is the interaction between AR and humans. AR headsets can only augment reality so much without interacting with humans, and we think this is the next step in human-to-tech interaction. Five or ten years down the road, this is the kind of thing that will humanize Artificial Intelligence--the ability to identify and "know" a human being by face.
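To give a feel for the heart-rate portion described above, here is a hedged Python/OpenCV sketch: it tracks the mean green-channel value of a rough forehead region over time and takes the dominant frequency in the 0.75-3 Hz band as the pulse. The ROI proportions, frame rate, and thresholds are assumptions; the real project additionally uses Eulerian magnification and a calibration period.

```python
# Hedged sketch of the heart-rate idea only (not the full SocialEyes pipeline).
import cv2
import numpy as np

FPS = 30.0  # assumed camera frame rate
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def forehead_mean(frame):
    """Return the mean green-channel value of a rough forehead ROI, or None if no face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = frame[y + h // 10 : y + h // 4, x + w // 4 : x + 3 * w // 4]  # rough forehead
    return float(roi[:, :, 1].mean())        # green channel carries most of the pulse signal

def estimate_bpm(samples, fps: float = FPS) -> float:
    """Dominant frequency of the signal in the 0.75-3 Hz band, converted to beats/minute."""
    signal = np.array(samples) - np.mean(samples)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.75) & (freqs <= 3.0)   # 45-180 beats per minute
    return float(freqs[band][np.argmax(power[band])] * 60.0)

# Usage: append forehead_mean(frame) for roughly 10 seconds of frames, drop the Nones,
# then call estimate_bpm() on the collected samples.
```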
## Inspiration There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life. ## What it does We utilized a smartphone camera to analyze the surrounding and warn visually impaired people about obstacles that were in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with and we stored their name and facial attributes that can be recalled later. An Alzheimer's patient can utilize the face recognition software to remind the patient of who the person is, and when they last saw him. ## How we built it We built our app around **Azure's APIs**, we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it. ## Challenges we ran into Through the process of coding and developing our idea we ran into several technical difficulties. Our first challenge was to design a simple UI, so that the visually impaired people could effectively use it without getting confused. The next challenge we ran into was attempting to grab visual feed from the camera, and running them fast enough through the Azure services to get a quick response. Another challenging task that we had to accomplish was to create and train our own neural network with relevant data. ## Accomplishments that we're proud of We are proud of several accomplishments throughout our app. First, we are especially proud of setting up a clean UI with two gestures, and voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network, that was capable of identifying faces and objects. ## What we learned We learned how to implement **Azure Custom Vision and Azure Face APIs** into **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for the rest of us to learn about neural networking. ## What's next for BlindSpot In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device.
## Inspiration When networking / talking with people, names and other info is typically forgotten very quickly. Having information readily available about the potential employers while you speak can greatly improve your networking experience. ## What it does Uses facial recognition to determine locations of faces in a live video feed and overlays info about detected people. This information is viewed in real-time on a VR display as an augmented reality pass-through. The system allows users to have the most relevant info in their sight while talking to others, reducing stress and increasing efficiency in communication. ## How we built it The frontend is a Unity 3D Android app using ARFoundation for base AR functionality. Facial detection is done by sending a JPG every 10 frames to the Azure Face API backend which returns bounding boxes of all faces. Next the bounding box centers are mapped into the 3D world to display info text around faces. ## Challenges we ran into The original tools we used (ARCore) did not allow for facial detection using a back camera (by design). We had to change the tools we used and develop our own solutions for certain tasks. Mapping screen space coordinates from camera to world coordinates was difficult to do reliably. We had access to limited hardware, mainly only having a VR headset instead of an AR one. ## Accomplishments that we are proud of * Integrated many different tools to make application work * Able to recognize faces, do augmented reality and display visual information in a suitable way for a VR headset. ## What we learned * Azure Face API * Coding Unity 3D games in C# * Google Cloud Platform * HTTP requests ## What's next for NetworkSmartAR Adding more displayed information about individuals. Adding voice recognition for names and conversation details being added to displayed information.
partial
## Inspiration On our way to PennApps, our team was hungrily waiting in line at Shake Shack while trying to think of the best hack idea to bring. Unfortunately, rather than being able to sit comfortably and pull out our laptops to research, we were forced to stand in a long line to reach the cashier, only to be handed a clunky buzzer that countless other greasy-fingered customers had laid their hands on. We decided that there has to be a better way: a way to simply walk into a restaurant, be able to spend more time with friends, and stand in line as little as possible. So we made it. ## What it does Q'd (pronounced queued) digitizes the process of waiting in line by allowing restaurants and events to host a line through the mobile app and for users to line up digitally through their phones as well. Also, it gives users a sense of the different opportunities around them by letting them search for nearby queues. Once in a queue, the user "takes a ticket" which counts down until they are the first person in line. In the meantime, they are free to do whatever they want and not be limited to the 2-D pathway of a line for the next minutes (or even hours). When the user is soon to be the first in line, they are sent a push notification and requested to appear at the entrance, where the host of the queue can check them off, let them in, and remove them from the queue. In addition to removing the hassle of waiting in line, hosts of queues can access their Q'd Analytics to learn how many people were in their queues at what times and learn key metrics about the visitors to their queues. ## How we built it Q'd comes in three parts: the native mobile app, the web app client, and the Hasura server. 1. The mobile iOS application was built with Apache Cordova, which allows the native iOS app to be built in pure HTML and JavaScript. This framework allows the application to work on Android, iOS, and the web, as well as to be incredibly responsive. 2. The web application is built with good ol' HTML, CSS, and JavaScript. Using the Materialize CSS framework gives the application a professional feel, as do resources such as AmCharts that provide the user with a clear understanding of their queue metrics. 3. Our beast of a server was constructed with Hasura, which allowed us to build our own data structure as well as to use its API calls for the data across all of our platforms. Therefore, every method dealing with queues or queue analytics deals with our Hasura server through API calls and database use. ## Challenges we ran into A key challenge we discovered was the implementation of Cordova and its associated plugins. Having been primarily Android developers, the native environment of the iOS application challenged our skills and gave us a lot to learn before we were ready to properly implement it. Next, although less of a challenge, Hasura had a learning curve before we were able to really use it successfully. In particular, we had issues with relationships between different objects within the database. Nevertheless, we persevered and were able to get it working really well, which allowed for an easier time building the front end. ## Accomplishments that we're proud of Overall, we're extremely proud of coming in with little knowledge about Cordova and iOS development, and only learning about Hasura at the hackathon, then being able to develop a fully responsive app using all of these technologies relatively well.
While we considered making what we are comfortable with (particularly web apps), we wanted to push our limits and take on the challenge of learning about mobile development and cloud databases. Another accomplishment we're proud of is making it through our first hackathon longer than 24 hours :) ## What we learned During our time developing Q'd, we were exposed to and became proficient in various technologies ranging from Cordova to Hasura. However, besides technology, we learned important lessons about taking the time to properly flesh out our ideas before jumping in headfirst. We devoted the first two hours of the hackathon to really understanding what we wanted to accomplish with Q'd, so that in the end we can be truly satisfied with what we have been able to create. ## What's next for Q'd In the future, we're looking towards enabling hosts of queues to include premium options that users can take advantage of to skip lines or be part of more exclusive lines. Furthermore, we want to expand the data analytics that the hosts can take advantage of in order to improve their own revenue and to make a better experience for their visitors and customers.
## Inspiration 🔥 While on the way to CalHacks, we drove past a fire in Oakland Hills that had started just a few hours prior, meters away from I-580. Over the weekend, the fire quickly spread and ended up burning an area of 15 acres, damaging 2 homes and prompting 500 households to evacuate. This served as a harsh reminder that wildfires can and will start anywhere as long as few environmental conditions are met, and can have devastating effects on lives, property, and the environment. *The following statistics are from the year 2020[1].* **People:** Wildfires killed over 30 people in our home state of California. The pollution is set to shave off a year of life expectancy of CA residents in our most polluted counties if the trend continues. **Property:** We sustained $19b in economic losses due to property damage. **Environment:** Wildfires have made a significant impact on climate change. It was estimated that the smoke from CA wildfires made up 30% of the state’s greenhouse gas emissions. UChicago also found that “a single year of wildfire emissions is close to double emissions reductions achieved over 16 years.” Right now (as of 10/20, 9:00AM): According to Cal Fire, there are 7 active wildfires that have scorched a total of approx. 120,000 acres. [[1] - news.chicago.edu](https://news.uchicago.edu/story/wildfires-are-erasing-californias-climate-gains-research-shows) ## Our Solution: Canary 🐦🚨 Canary is an early wildfire detection system powered by an extensible, low-power, low-cost, low-maintenance sensor network solution. Each sensor in the network is placed in strategic locations in remote forest areas and records environmental data such as temperature and air quality, both of which can be used to detect fires. This data is forwarded through a WiFi link to a centrally-located satellite gateway computer. The gateway computer leverages a Monogoto Satellite NTN (graciously provided by Skylo) and receives all of the incoming sensor data from its local network, which is then relayed to a geostationary satellite. Back on Earth, we have a ground station dashboard that would be used by forest rangers and fire departments that receives the real-time sensor feed. Based on the locations and density of the sensors, we can effectively detect and localize a fire before it gets out of control. ## What Sets Canary Apart 💡 Current satellite-based solutions include Google’s FireSat and NASA’s GOES satellite network. These systems rely on high-quality **imagery** to localize the fires, quite literally a ‘top-down’ approach. Google claims it can detect a fire the size of a classroom and notify emergency services in 20 minutes on average, while GOES reports a latency of 3 hours or more. We believe these existing solutions are not effective enough to prevent the disasters that constantly disrupt the lives of California residents as the fires get too big or the latency is too high before we are able to do anything about it. To address these concerns, we propose our ‘bottom-up’ approach, where we can deploy sensor networks on a single forest or area level and then extend them with more sensors and gateway computers as needed. ## Technology Details 🖥️ Each node in the network is equipped with an Arduino 101 that reads from a Grove temperature sensor. This is wired to an ESP8266 that has a WiFi module to forward the sensor data to the central gateway computer wirelessly. The gateway computer, using the Monogoto board, relays all of the sensor data to the geostationary satellite. 
On the ground, we have a UDP server running in Google Cloud that receives packets from the satellite and is hooked up to a Streamlit dashboard for data visualization. ## Challenges and Lessons 🗻 There were two main challenges to this project. **Hardware limitations:** Our team as a whole is not very experienced with hardware, and setting everything up and getting the different components to talk to each other was difficult. We went through 3 Raspberry Pis, a couple Arduinos, different types of sensors, and even had to fashion our own voltage divider before arriving at the final product. Although it was disheartening at times to deal with these constant failures, knowing that we persevered and stepped out of our comfort zones is fulfilling. **Satellite communications:** The communication proved to be tricky due to inconsistent timing between sending and receiving the packages. We went through various socket ids and ports to see if there were any patterns to the delays. Through our thorough documentation of steps taken, we were eventually able to recognize a pattern in when the packages were being sent and modify our code accordingly. ## What’s Next for Canary 🛰️ As we get access to better sensors and gain more experience working with hardware components (especially PCB design), the reliability of our systems will improve. We ran into a fair amount of obstacles with the Monogoto board in particular, but as it was announced as a development kit only a week ago, we have full faith that it will only get better in the future. Our vision is to see Canary used by park services and fire departments in the most remote areas of our beautiful forest landscapes in which our satellite-powered sensor network can overcome the limitations of cellular communication and existing fire detection solutions.
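A minimal sketch of the ground-station side might look like the following, assuming each sensor reading arrives over the satellite link as a small JSON packet. The field names, port, and alert thresholds are assumptions rather than Canary's actual packet format.

```python
# Minimal UDP receiver sketch, assuming packets like {"node": 3, "temp_c": 41.2, "aqi": 160}.
# Field names and thresholds are illustrative assumptions.
import json
import socket

HOST, PORT = "0.0.0.0", 5005
TEMP_ALERT_C = 50.0      # unusually hot for a shaded forest sensor
AQI_ALERT = 150          # heavy smoke

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"Listening for sensor packets on udp://{HOST}:{PORT}")

while True:
    data, addr = sock.recvfrom(1024)
    try:
        reading = json.loads(data.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError):
        continue                                  # drop malformed packets
    hot = reading.get("temp_c", 0) >= TEMP_ALERT_C
    smoky = reading.get("aqi", 0) >= AQI_ALERT
    if hot or smoky:
        # In the real system this is where the dashboard / fire department alert would fire.
        print(f"ALERT from node {reading.get('node')} via {addr}: {reading}")
    else:
        print(f"ok node {reading.get('node')}: {reading}")
```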
## INSPIRATION: **Have you ever been on a lunch break at work, or passed by a restaurant and really wanted to try it, but had no one to go with at the time? Have some anxiety about eating alone?** We here at team TableForTwo understand the feeling. So we built an app that helps people find an eating buddy, and potentially build some new social connections! After all, food tastes better when it's shared, right? ## WHAT IT DOES: Search restaurants, see who's searching for a buddy, and make a reservation to connect with a fellow lone eater. Connect with Facebook to get going, and browse restaurants in your local vicinity. Clicking a restaurant will pull up a reservation modal where you can submit a reservation request or view the information tags for the joint. Once someone views your request and accepts it, a 'reservation' will be made (no real reservation made at the restaurant though), and you two will be connected! Now **get eating**. ## How we built it: HTML, CSS, Node.js, Angular ## Challenges we ran into: We hit some limits on APIs, then APIs went down... ## What we learned: All The Things.
winning
## Inspiration With the recent COVID-19 pandemic, musicians around the globe don't have a reliable way to reach their audience and benefit from what they love to do — perform. I personally have seen my music teacher suffer from this, because his business performing at weddings, concerts, and at other public events has been put on hold due to COVID-19. Because of this, we wanted to find a way for musicians, and any other live artist, to be able to perform to a live audience and make money as if it were a real concert. ## What it does WebStage is an online streaming platform where a performer can perform for their audience. Users can make accounts with their email or with Google sign-in. Once they do so, they can create their own "stages" with set times to perform, and they can also buy tickets and go to the stages of other musicians. ## How we built it We used Figma to design and prototype the site. For the front-end, we used HTML/CSS/JS. For the back-end, we used Node.js with Express.js for the server and WebRTC for the live-streaming. We also used Firebase and Cloud Firestore. ## Challenges we ran into Figuring out how to live-stream, establishing chat functionality, ensuring that stages would be visible on the site with proper pop-up messages when needed, saving usernames, verifying authentication tokens, tweaking the CSS to make sure everything fit on screen ## Accomplishments that we're proud of Everything, from the design to the functionality of things like the live-streaming. It's amazing that we were able to build the platform in such a short period of time. ## What we learned We learned how to use WebRTC, Firebase authentication, and Cloud Firestore. We also grew much more comfortable collaborating with others while working towards a strict deadline. ## What's next for WebStage We plan to add various payment options so that users would have to buy tickets (if the streamer charges for them); in the near future for purposes of the hackathon, we elected to not implement this feature in the current build. Other features such as saving streams as videos are possibilities as well.
## Inspiration College students oftentimes find themselves taking the first job they see. However, this often leaves them with a job that is stressful, hard, or pays less than what they're worth. We realized that students don't have a good tool to discover the best job in their area. Job boards like LinkedIn and Glassdoor typically don't have low-key part-time jobs, while university job boards are limited to university-specific jobs. We wanted to create a means for students to post reviews for job postings within their university area. This would allow students to share experiences and inform other students of the best job options for them. ## What it does Rate My University Job is a job postings website created for college students. To access the website, a user must first create an account using a .edu email address. Users can search for job postings based on tags or the job title. The search results are filtered by .edu domain name for users from the same university. A job posting contains information like the average pay reviewers received at the job, the location, a description, and an average rating out of 5 stars. If a job posting doesn't exist for their position, they can create a new posting and provide the title, description, location, and a tag. Other students can read these posts and contribute with their own reviews. ## How we built it We created the front end using vanilla HTML, CSS, and JavaScript. We implemented Firebase Firestore to update and query a database where all job postings and reviews are stored. We also use Firebase Auth to authenticate emails and ensure a .edu email is used. We designed the interactive components using JavaScript to create a responsive user interface. The website is hosted using both GitHub Pages and Domain.com. ## Challenges we ran into (1) Website design and UI/UX (2) Developing a schema and using a database ## Accomplishments that we're proud of (1) Being able to store account data of multiple users and authenticate the .edu domain. (2) Completing a first project in a collaborative environment (3) Curating a list of job postings from the same university email domains. (4) A robust search engine based on titles and search tags. ## What we learned In general, we learned how to format and beautify HTML files with CSS, in addition to connecting an HTML front end to a database. We learned how to use the Firestore database and how to query, upload, and update data. ## What's next for Rate My University Job We would seek to improve the UI/UX. We would also look to add additional features such as upvoting and downvoting posts and a reporting system for malicious/false posts. We also look to improve the search engine to allow for more concise searches and allow the searches to be sorted based on rating/pay/tags/etc. Overall, there are a lot of additional features we can add to make this project even better.
## Inspiration In a world where finance is extremely important, everyone needs access to **banking services**. Citizens within **third world countries** are no exception, but they lack the banking technology infrastructure that many of us in first world countries take for granted. Mobile applications and web portals don't work 100% for these people, so we decided to make software that requires nothing more than a **cellular connection to send SMS messages** in order to operate. This resulted in our hack, **UBank**. ## What it does **UBank** allows users to operate their bank accounts entirely through **text messaging**. Users can deposit money, transfer funds between accounts, transfer accounts to other users, and even purchase shares of stock via SMS. In addition to this text messaging capability, UBank also provides a web portal so that when our users gain access to a steady internet connection or PC, they can view their financial information on a more comprehensive level. ## How I built it We set up a backend HTTP server in **Node.js** to receive and fulfill requests. **Twilio** with ngrok was used to send and receive the text messages through a webhook on the backend Node.js server, and applicant data was stored in Firebase. The frontend was primarily built with **HTML, CSS, and JavaScript**, and HTTP requests were sent to the Node.js backend to receive applicant information and display it in the browser. We utilized Mozilla's speech-to-text library to incorporate speech commands and Chart.js to display client data with intuitive graphs. ## Challenges I ran into * Some team members were new to Node.js, and therefore working with some of the server coding was a little complicated. However, we were able to leverage the experience of other group members, which allowed all of us to learn and figure out everything in the end. * Using Twilio was a challenge because no team members had previous experience with the technology. We had difficulties making it communicate with our backend Node.js server, but after a few hours of hard work we eventually figured it out. ## Accomplishments that I'm proud of We are proud of making a **functioning**, **dynamic**, finished product. It feels great to design software that is adaptable and begging for the next steps of development. We're also super proud that we made an attempt at tackling a problem that is having severe negative effects on people all around the world, and we hope that someday our product can make it to those people. ## What I learned This was our first time using **Twilio**, so we learned a lot about utilizing that software. Front-end team members also got to learn and practice their **HTML/CSS/JS** skills, which was a great experience. ## What's next for UBank * The next step for UBank probably would be implementing an authentication/anti-fraud system. Being a banking service, it's imperative that our customers' transactions are secure at all times, and we would be unable to launch without such a feature. * We hope to continue the development of UBank and gain some beta users so that we can test our product and incorporate customer feedback in order to improve our software before making an attempt at launching the service.
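The team's webhook lives in Node.js; as a hedged illustration of the same idea in Python, the Flask sketch below parses an incoming Twilio SMS and replies with TwiML. The command names and the in-memory balance store are assumptions, not UBank's real schema, which uses Firebase.

```python
# Illustrative Flask + Twilio sketch of an SMS-command webhook (the real backend is Node.js).
# Commands and the in-memory "database" are assumptions for demonstration only.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
balances = {}    # phone number -> balance; a real system would use Firebase instead

@app.route("/sms", methods=["POST"])
def sms_webhook():
    sender = request.form.get("From", "")
    words = request.form.get("Body", "").strip().lower().split()
    reply = MessagingResponse()

    if not words:
        reply.message("Commands: BALANCE, DEPOSIT <amount>")
    elif words[0] == "balance":
        reply.message(f"Your balance is ${balances.get(sender, 0.0):.2f}")
    elif words[0] == "deposit" and len(words) == 2:
        try:
            amount = float(words[1])
            balances[sender] = balances.get(sender, 0.0) + amount
            reply.message(f"Deposited ${amount:.2f}. New balance: ${balances[sender]:.2f}")
        except ValueError:
            reply.message("Usage: DEPOSIT 25.00")
    else:
        reply.message("Sorry, I didn't understand that. Try BALANCE or DEPOSIT <amount>.")
    return str(reply)

# Expose this route publicly (e.g. with ngrok) and point the Twilio number's
# "A message comes in" webhook at it.
```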
partial
# Operation Trousers We're trying to help educate people about shorts, because they're important. The end result is a simple and informative webpage, coupled with beautiful visualizations from real data. Come check it out! ![Men In Shorts](https://res.cloudinary.com/devpost/image/fetch/s--fsMpZYq7--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://ak-hdl.buzzfed.com/static/2014-08/1/5/enhanced/webdr08/enhanced-buzz-wide-15634-1406883700-16.jpg)
# Don't Dis My Ability ## 💡 Inspiration Post pandemic has seen a tremendous surge in people's inclination towards solo traveling and "workstation" and the era of solo traveling has also experienced a rise in specially-abled solo travelers stepping out and exploring the world. In researching the solo travel experiences and issues faced by specially-abled travelers, we identified two major problems faced by the people which are: i) Getting Money and Using The ATM- Even after carrying sufficient local currency, many specially-abled travelers are bound to use the ATM at least once to retrieve cash and in situations like these, many travelers have to either reach out to a fellow traveler or a staff member of accommodation and trusting them with sensitive banking information. ii) Difficulty in ordering food- Travelling to countries with diverse cultures is often reflected in the diverse variety of dishes available and this can often cause a problem of not knowing what to eat especially for a specially-abled person having vision and hearing impairments. ## 💻 What it does To make solo travel convenient for specially-abled people we bring to you "Don't Dis My Ability" a web app working on ensuring a smooth and easy-going solo travel experience for the differently-abled people. The website uses Google Vision API for OCR implementation to retrieve the necessary information which is then converted to speech for ease in understanding. The information retrieved is also converted to required language. This also ensures the screen reading technology is not being fooled and only focuses on necessary information making the process of ordering food at a restaurant simplified for the specially-abled traveler. The web app also allows micropayment to bypass the hassle of making visits to the ATMs and sharing sensitive banking information with strangers for help by allowing direct payment from native currency via payment gateways. ## ⚙️ How we built it * Figma: For design * DeSo: For user authentication * React.js: For frontend * Python: For backend * Google Vision API: OCR * Payment Gateway: Razorpay API (It provides lots of features like UPI, direct payment from native currency etc.) * Text to Speech: react-speech-kit * For multilingual: i18n ## ✈ Travel Track Our team embarked on a journey to create an adaptive user interface for specially-abled people that made it easier for them to use technology whenever they travel abroad. We spearheaded the project by creating a platform that allows users to: * Upload the menu and extract the information from it * Convert the extracted text from the menu into speech * Allow micropayments to be made using Razorpay * Classify the food as vegetarian or non-vegetarian * Get nutrition information about the food, such as its ingredients and its calories * Change the language of the website to the user's preferred language ## 📚 Research Research is paramount to gaining a full understanding of the user and their needs. Beyond our own experiences, we needed to dig deeper into the web, outside of our network, and find both organizations that could shed light on how better to help our targeted users as well as to conduct research into other similar applications or products. This was crucial to avoid re-inventing the wheel and wasting valuable time and resources. 
## 📚 Research

Research is paramount to gaining a full understanding of the user and their needs. Beyond our own experiences, we needed to dig deeper into the web, outside of our network, to find organizations that could shed light on how to better help our targeted users and to research similar applications and products. This was crucial to avoid re-inventing the wheel and wasting valuable time and resources.

Here are a few of the resources that were helpful to us:

* <https://www.narayanseva.org/blog/10-problems-faced-by-people-with-disabilities>
* <https://www.sagetraveling.com/25-things-that-can-go-wrong-traveling-with-a-disability>
* <https://www.digitalartsonline.co.uk/features/interactive-design/how-design-websites-for-disabled-people-in-2017/p>

## 🤝 Most Creative Use of GitHub

We are using GitHub for the following reasons:

* **Collaboration**: GitHub makes it easy to share code and collaborate.
* **GitHub Projects**: We used GitHub's project management tool for planning and keeping track of our progress.
* **CI/CD workflow**: GitHub makes it easy to implement a CI/CD workflow and simplifies deployment.
* **Deployment**: Deploying the project through GitHub got it onto the network where other people can access it.
* **PRs and Issues**: We opened multiple PRs and issues to keep the project on track.

## 🔐 Best Use of DeSo

We are using **DeSo** for secure user authentication. DeSo is the first Layer 1 blockchain custom-built for decentralized social media applications.

## 🌐 Best Domain Name from Domain.com

* dontdismyability.tech

## 🧠 Challenges we ran into

Time-zone differences made it a bit difficult to collaborate with the other developers on the team, and completing the project within the given time frame was a challenge, but we managed to get it done in time.

## Accomplishments that we're proud of

We are proud of shipping a platform that allows users to:

* Upload a menu and extract the information from it
* Convert the extracted text from the menu into speech
* Make micropayments using Razorpay
* Change the language of the website to the user's preferred language

## 📖 What we learned

* Collaborating with other developers.
* Implementing the payment gateway and Google Cloud APIs.

## 🚀 What's next for Don't Dis My Ability

* Building a mobile app for the project.
* Simplifying the accommodation process by providing proper room descriptions, reducing graphic clutter and overcrowding on webpages, following international standards on color contrast, etc.
* Using NLP and ML on the data retrieved via the Vision AI API to simplify the whole travel experience.
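To round out the write-up, here is a minimal, hypothetical sketch of the micropayment step listed above, using Razorpay's Node.js SDK to create an order. The keys, amount, currency, and receipt id shown are placeholders, not the team's actual configuration.

```javascript
// Minimal micropayment sketch (assumed Node.js variant; values are placeholders).
// Requires `npm install razorpay`; key_id/key_secret come from the Razorpay dashboard.
const Razorpay = require('razorpay');

const razorpay = new Razorpay({
  key_id: process.env.RAZORPAY_KEY_ID,
  key_secret: process.env.RAZORPAY_KEY_SECRET,
});

// Creates an order for a small payment. Razorpay expects the amount in the
// smallest currency unit (e.g. paise for INR), so 5000 means ₹50.00.
async function createMicropaymentOrder(amountInSmallestUnit) {
  return razorpay.orders.create({
    amount: amountInSmallestUnit,
    currency: 'INR',              // placeholder currency
    receipt: 'receipt_demo_001',  // hypothetical receipt id
  });
}

createMicropaymentOrder(5000)
  .then((order) => console.log('Order created:', order.id))
  .catch((err) => console.error('Payment setup failed:', err));
```

The order id returned here is what a React frontend would typically hand to Razorpay's checkout flow to collect the actual payment.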
# Need 2 Know

## Inspiration

We wanted to help financial investors and people with political backgrounds save valuable reading time on financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental and political biases. We also wanted to promote objectivity and news literacy in the general public by making people aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and to see the real news behind the sentiment, especially considering today's current events.

## What it does

Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allow users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a quick glance at our visualized data, users can immediately gauge whether an article is worth further reading, based on their own views.

The Google Chrome extension lets users analyze articles in real time with a single button press, popping up a minimalistic window with visualized data. The web application lets users analyze articles more thoroughly, adding highlights to keywords in the article on top of the previous functions, so users can get straight to reading the most important parts. Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sectors for optimizing time and wording.

## How we built it

We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome extension, web application, back-end server, and database.

## Challenges we ran into

Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside in favor of a specific one, which we gradually aligned with. It was overall a good experience using Google's APIs.

## Accomplishments that we're proud of

We are especially proud of launching a minimalist Google Chrome extension in tandem with a web application, allowing users to analyze news articles either at their leisure or to a more professional degree. We reached more than a few of our stretch goals, and couldn't have done it without the amazing team dynamic we had.

## What we learned

Trusting your teammates to tackle goals they had never attempted before, understanding compromise, and putting the team ahead of personal views made this hackathon one of the most memorable for everyone. Emotional intelligence played just as important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone is rowing in the same direction.

## What's next for Need 2 Know

We consider what we have now a proof of concept. There is so much growing potential, and we hope to keep working together on a more professional product capable of automatically parsing entire sites, detecting new articles in real time, working with big data to visualize differences and biases across news sites, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future.
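As a rough, non-authoritative sketch of the single-button extension flow described under "What it does": the popup grabs the active tab's text and ships it to a backend that wraps the Indico analysis calls. The Manifest V3 APIs, the `analyze` button id, and the backend URL are assumptions for illustration, not the team's actual implementation.

```javascript
// popup.js — minimal sketch of the one-click analysis flow.
// Assumptions: Manifest V3 with "activeTab" and "scripting" permissions,
// and a hypothetical backend endpoint that forwards text to Indico's models.
document.getElementById('analyze').addEventListener('click', async () => {
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });

  // Pull the visible article text out of the current page.
  const [{ result: articleText }] = await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    func: () => document.body.innerText,
  });

  // Hypothetical backend route that returns sentiment/political/keyword scores.
  const response = await fetch('https://example.com/api/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: articleText }),
  });

  const scores = await response.json();
  // Render the visualized data in the popup (rendering code omitted).
  console.log(scores);
});
```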