instruction | input | output
---|---|---|
Write an article about "Your first full stack app" | Get your first full stack app on your portfolio
Always wanted to build a complete web app but aren't sure how to get started? This course is for you.
<Button
  variant={"green"}
  href={"/learn/your-first-full-stack-app/0"}
>
  Your first full stack app - start course
</Button> |
|
Write an article about "Generative AI Bootcamp" | export const status = 'available'
return ();
} |
|
Write an article about "an article" | {
if (name) this.name = name;
if (email) this.email = email;
try {
await sqlUPDATE users SET name = ${this.name}, email = ${this.email} WHERE id = ${this.id};
console.log('User profile updated successfully.');
} catch (error) {
console.error('Error updating user profile:', error);
// Handle error (e.g., rollback transaction, log error, etc.)
}
}
// Delete user profile from the database
async deleteProfile() {
try {
await sqlDELETE FROM users WHERE id = ${this.id};
console.log(User with ID ${this.id} deleted successfully.);
} catch (error) {
console.error('Error deleting user profile:', error);
// Handle error
}
}
// Method to display user info - useful for debugging
displayUserInfo() {
console.log(User ID: ${this.id}, Name: ${this.name}, Email: ${this.email});
}
}
// Example usage (inside an async function or top-level await in a module)
const user = new User(1, 'John Doe', 'john.doe@example.com');
user.displayUserInfo(); // Display initial user info
// Update user info
await user.updateProfile({ name: 'Jane Doe', email: 'jane.doe@example.com' });
user.displayUserInfo(); // Display updated user info
// Delete user
await user.deleteProfile();
Prior to the widespread availability of Generative AI tools, you needed to understand JavaScript, its most recent syntax changes, object-oriented programming conventions, and database abstractions, at a minimum, to produce this code.
You also needed to have recently gotten some sleep, be more or less hydrated, and have already had your caffeine to create this simple example.
And even the most highly skilled keyboard-driven developers would have taken quite a bit longer than a few seconds to write it out.
GenAI is not just for text or code...
Here's an example of me asking ChatGPT4 to generate me an image with the following prompt:
I'm creating a video game about horses enhanced with Jetpacks.
Please generate me a beautiful, cheerful and friendly sprite of a horse with a jetpack strapped onto its back that would be suitable for use in my HTML5 game.
Use a bright, cheery and professional retro pixel-art style.
I can use Large Language Models (LLMs) like ChatGPT to generate pixel art and other assets for my web and gaming projects.
Within a few moments, I got back a workable image that was more or less on the money given my prompt.
I didn't have to open my image editor, spend hours tweaking pixels using specialized tools, or hit up my designer or digital artist friends for assistance.
How does GenAI work?
Generative AI works by "learning" from massive datasets to draw out similarities and "features."
Generative AI systems learn from vast datasets to build a model that allows them to produce new outputs.
For example, by learning from millions of images and captions, an AI can generate brand new photographic images based on text descriptions provided to it.
The key technique that makes this possible involves training machine learning models using deep neural networks that can recognize complex patterns.
Imagine you have a very smart robot that you want to teach to understand and use human language.
To do this, you give the robot a huge pile of books, articles, and conversations to read over and over again.
Each time the robot goes through all this information, it's like it's completing a grade in school, learning a little more about how words fit together and how they can be used to express ideas.
In each "grade" or cycle, the robot pays attention to what it got right and what it got wrong, trying to improve.
Think of it like learning to play a video game or a sport; the more you practice, the better you get.
The robot is doing something similar with language, trying to get better at understanding and generating it each time it goes through all the information.
This process of going through the information, learning from mistakes, and trying again is repeated many times, just like going through many grades in school.
For a model as capable as ChatGPT 4, the cost to perform this training can exceed $100 million, as OpenAI's Sam Altman has shared.
With each "generation" of learning, the robot gets smarter and better at using language, much like how you get smarter and learn more as you move up in school grades.
Why is GenAI having its moment right now?
GenAI is the confluence of many complementary components and approaches reaching maturity at the same time
Advanced architectures: New architectures like transformers that are very effective for language and generation
Progressive advancement of the state of the art: Progressive improvements across computer vision, natural language processing, and AI in general
Why is GenAI such a big deal?
Prior to the proliferation of LLMs and Generative AI models, you needed pixel art skills and proficiency with photo editing and creation software such as Photoshop, Illustrator, or GIMP in order to produce high-quality pixel art.
Prior to Gen AI, you needed to be a software developer to produce working code.
Prior to Gen AI, you needed to be a visual artist to produce images, or a digital artist to produce pixel art, video game assets, logos, etc.
With Generative AI on the scene, this is no longer strictly true.
You do still need to be a specialist to understand the outputs and have the capability to explain them.
In the case of software development, you still require expertise in how computers work, architecture, and good engineering practices to employ the generated outputs to good effect.
There are major caveats to understand here, such as why Generative AI is currently a huge boon to senior-and-above developers yet commonly misleading and actively harmful to junior developers, but in general it holds true:
Generative AI lowers the barrier for people to produce specialized digital outputs.
MIT News: Explained - Generative AI
McKinsey - The State of AI in 2023: Generative AI's breakout year
McKinsey - What is ChatGPT, DALL-E and generative AI?
Accenture - What is Generative AI?
GenAI in the wild - successful use cases
Since the initial explosion of interest around GenAI, most companies have sprinted toward integrating generative AI models into their products and services, with varying success.
Here's a look at some of the tools leveraging Generative AI successfully to accelerate their end users:
v0.dev
Vercel's v0.dev tool generates user interfaces in React in response to natural language queries.
In the above example, I prompted the app with:
A beautiful pricing page with three large columns showing my free, pro and enterprise tiers of service for my new saas news offering
and the app immediately produced three separate versions that I can continue to iterate on in natural language, asking for visual refinements to the layout, style, colors, font-size and more.
Prior to Gen AI, all of this work would have been done by hand by a technical designer working closely with at least one frontend developer.
Pulumi AI
Pulumi AI generates working Pulumi programs that describe infrastructure as code in response to natural language prompts.
There are some current pain points, such as the tool strongly favoring older versions of Pulumi code that are now deprecated or slated for removal, but in general this tool is capable of saving a developer a lot of time by outlining the patterns needed to get a tricky configuration working with AWS.
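For a sense of what such generated programs look like, here is a minimal Pulumi TypeScript sketch of the sort the tool emits; the resource name and tags are illustrative, not actual Pulumi AI output:

```typescript
// A minimal Pulumi program: declare an S3 bucket as code, and
// `pulumi up` makes the AWS account match the declaration.
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("app-assets", {
  acl: "private",
  tags: { environment: "dev" },
});

// Export the generated bucket name for other tooling to consume
export const bucketName = bucket.id;
```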
If Generative AI opens the door for non-specialists to create specialized outputs, it simultaneously accelerates specialists.
Generative AI is powerful because it enables development use cases that were previously out of reach due to being too technically complex for one developer to build out on their own.
I've experienced this phenomenon myself.
I have been pair-coding alongside Generative AI models for {timeElapsedSinceJanuary2023()}, and in that time I have started work on more ambitious applications than I normally would have tackled as side projects.
I have also completed more side projects as a result.
I have gotten unstuck faster when faced with a complex or confusing failure scenario, because I was able to talk through the problem with ChatGPT and discuss alternative approaches.
ChatGPT4 responds to me with the quality of response and breadth of experience that I previously would have expected only from senior and staff level engineers.
I have enjoyed my work more, because now I have a supremely helpful colleague who is always available, in my time zone.
Gen AI is never busy, never frustrated or overwhelmed, and is more likely to have read widely and deeply on a given technology subject than a human engineer.
I employ careful scrutiny to weed out hallucinations.
Because I've been developing software and working at both small and large Silicon Valley companies since 2012, I am able to instantly detect when ChatGPT or a similar tool is hallucinating, out of its depth, or poorly suited to a particular task due to insufficient training data.
Sanity-checking a change from an SEO perspective before making it
Configuring a new GitHub Action for my repository that automatically validates my API specification
Cooperatively building a complex React component for my digital school
Collaboratively updating a microservice's database access pattern
Collaboratively upgrading a section of my React application to use a new pattern
Large Language Models (LLMs)
LLMs are a critical component of Generative AI
Large Language Models (LLMs) are the brains behind Generative AI, capable of understanding, generating, and manipulating language based on the patterns they've learned from extensive datasets.
Their role is pivotal in enabling machines to perform tasks that require human-like language understanding, from writing code to composing poetry.
Think of LLMs as the ultimate librarian, but with a superpower: instant recall of every book, article, and document ever written.
They don't just store information; they understand context, draw connections, and create new content that's coherent and contextually relevant.
This makes LLMs invaluable in driving forward the capabilities of Generative AI, enabling it to generate content that feels surprisingly human.
One of the main challenges with LLMs is "hallucination," where the model generates information that's plausible but factually incorrect or nonsensical.
This is akin to a brilliant storyteller getting carried away with their imagination.
While often creative, these hallucinations can be misleading, making it crucial to use LLMs with a critical eye, especially in applications requiring high accuracy.
Hallucinations refer to when an AI like ChatGPT generates responses that seem plausible but don't actually reflect truth or reality.
The system is essentially "making things up" based on patterns learned from its language data - hence "hallucinating".
The critical challenge here is that hallucination is more or less inextricable from the LLM behaviors we find valuable - and LLMs do not know when they do not know something.
This is precisely why it can be so dangerous for junior or less experienced developers, for example, to blindly follow what an LLM says when they are attempting to pair-code with one.
Without a sufficient understanding of the target space, its challenges and potential issues, it's possible to make a tremendous mess by following the hallucinations of an AI model.
Why does hallucination happen?
LLMs like ChatGPT have been trained on massive text datasets, but have no actual connection to the real world. They don't have human experiences or knowledge of facts.
Their goal is to produce outputs that look reasonable based on all the text they've seen.
So sometimes the AI will confidently fill in gaps by fabricating information rather than saying "I don't know."
This is one of the reasons you'll often see LLMs referred to as "stochastic parrots". They are attempting to generate the next best word based on all of the words and writing they have ever seen.
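Here is a toy TypeScript sketch of that "next best word" idea: a bigram model that picks continuations purely from frequencies observed in its (tiny, illustrative) training text, with no notion of truth:

```typescript
// A miniature "stochastic parrot": predict the next word from
// how often words followed each other in the training text.
const corpus = "the cat sat on the mat and the cat ran".split(" ");

// Count which word follows which
const next: Record<string, Record<string, number>> = {};
for (let i = 0; i < corpus.length - 1; i++) {
  const [a, b] = [corpus[i], corpus[i + 1]];
  next[a] = next[a] ?? {};
  next[a][b] = (next[a][b] ?? 0) + 1;
}

// Sample a continuation in proportion to observed frequency;
// plausibility is the only criterion - truth never enters into it.
function nextWord(word: string): string {
  const options = Object.entries(next[word] ?? {});
  if (options.length === 0) return "<end>";
  const total = options.reduce((sum, [, count]) => sum + count, 0);
  let r = Math.random() * total;
  for (const [candidate, count] of options) {
    r -= count;
    if (r <= 0) return candidate;
  }
  return options[options.length - 1][0];
}

console.log(nextWord("the")); // "cat" twice as often as "mat"
```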
Should this impact trust in LLMs?
Yes, hallucinations mean we can't fully rely on LLMs for complete accuracy and truthfulness. They may get core ideas directionally right, but details could be invented.
Think of them more as an aid for content generation rather than necessarily fact sources.
LLMs don't have true reasoning capacity comparable to humans.
Approaching them with appropriate trust and skepticism is wise as capabilities continue advancing.
GenAI meets software development: AI Dev Tools
What is a developer's IDE?
IDE stands for Integrated Development Environment.
It is a text editor designed specifically for programmers' needs.
IDEs provide syntax highlighting, autocompletion of code, and boilerplate text insertion to accelerate the coding process.
Most modern IDEs are highly customizable.
Through plugins and configuration changes, developers customize keyboard shortcuts, interface color themes, extensions that analyze code or connect to databases, and more based on their workflow.
Two very popular IDEs are Visual Studio Code (VSCode) from Microsoft and Neovim, which is open-source and maintained by a community of developers.
In VSCode, developers can install all sorts of plugins from a central marketplace - plugins to lint and format their code, run tests, interface with version control systems, and countless others.
There is also rich support for changing the visual theme and layout.
Neovim is another IDE centered around modal editing optimized for speed and keyboard usage over mice.
Its users can create key mappings to quickly manipulate files and code entirely from the keyboard.
It embraces Vim language and edit commands for coding efficiency.
For example, my personal preference is to combine tmux with Neovim into a custom IDE: a highly flexible setup that expands and contracts to the size of my current task.
Developers tend to "live in" their preferred IDE - meaning they spend a great deal of their time coding in it.
Developers are also highly incentivized to tweak their IDE and add automations for common tasks in order to make themselves more efficient.
For this reason, developers may try many different IDEs over the course of their career, but most tend to find something they're fond of and stick with it, which has implications for services that are or are not available in a given IDE.
Usually, a service or Developer-facing tool gets full support as a VSCode plugin long before an official Neovim plugin is created and maintained.
In summary, IDEs are incredibly valuable tools that can match the preferences and project needs of individual developers through customizations.
VSCode and Neovim have strong followings in their ability to adapt to diverse workflows. Developers can write code and configuration to customize the IDE until it perfectly suits their style.
Generative AI in Software Development: Codeium vs. GitHub Copilot
Codeium and GitHub Copilot represent the cutting edge of Generative AI in software development, both leveraging LLMs to suggest code completions and solutions.
While GitHub Copilot is built on OpenAI's Codex, Codeium offers its own AI-driven approach.
The key differences lie in their integration capabilities, coding style adaptations, and the breadth of languages and frameworks they support, making each tool uniquely valuable depending on the developer's needs.
These tools, while serving the common purpose of enhancing coding efficiency through AI-assisted suggestions, exhibit distinct features and use cases that cater to different aspects of the development workflow.
Codeium review
Codeium vs ChatGPT
GitHub Copilot review
ChatGPT 4 and Codeium are still all I need
The top bugs all AI developer tools are suffering from
Codeium, praised for its seamless integration within popular code editors like VSCode and Neovim, operates as a context-aware assistant, offering real-time code suggestions and completions directly in the IDE.
Its ability to understand the surrounding code and comments enables it to provide highly relevant suggestions, making it an indispensable tool for speeding up the coding process.
Notably, Codeium stands out for its free access to individual developers, making it an attractive option for those looking to leverage AI without incurring additional costs, whereas GitHub has been perpetually cagey about its Copilot offerings and their costs.
As a product of GitHub, Copilot is deeply integrated with the platform's ecosystem, potentially offering smoother workflows for developers who are heavily invested in using GitHub for version control and collaboration.
Imagine AI developer tools as ethereal companions residing within your IDE, whispering suggestions and solutions as you type.
They blend into the background but are always there to offer a helping hand, whether it's completing a line of code or suggesting an entire function.
These "code spirits" are revolutionizing how developers write code, making the process faster, more efficient, and often more enjoyable.
Here's what I think about the future of Generative AI, after evaluating different tools and pair-coding with AI for {timeElapsedSinceJanuary2023()}.
Thoughts and analysis
Where I see this going
In the rapidly evolving field of software development, the integration of Generative AI is not just a passing trend but a transformative force.
In the time I've spent experimenting with AI to augment my workflow and enhance my own human capabilities, I've realized incredible productivity gains: shipping more ambitious and complete applications than ever before.
I've even enjoyed myself more.
I envision a future where AI-powered tools become indispensable companions, seamlessly augmenting human intelligence with their vast knowledge bases and predictive capabilities.
These tools will not only automate mundane tasks but also inspire innovative solutions by offering insights drawn from a global compendium of code and creativity.
As we move forward, the symbiosis between developer and AI will deepen, leading to the birth of a new era of software development where the boundaries between human creativity and artificial intelligence become increasingly blurred.
What I would pay for in the future
In the future, what I'd consider invaluable is an AI development assistant that transcends the traditional boundaries of code completion and debugging.
I envision an assistant that's deeply integrated into my workflow and data sources (email, calendar, GitHub, bank, etc), capable of understanding the context of my projects across various platforms, project management tools, and even my personal notes.
This AI wouldn't just suggest code; it would understand the nuances of my projects, predict my needs, and offer tailored advice, ranging from architectural decisions to optimizing user experiences.
This level of personalized and context-aware assistance could redefine productivity, making the leap from helpful tool to indispensable partner in the creative process.
My favorite AI-enhanced tools
| Job to be done | Name | Free or paid? |
|---|---|---|
| Architectural and planning conversations | ChatGPT 4 | Paid |
| Autodidact support (tutoring and confirming understanding) | ChatGPT 4 | Paid |
| Accessing ChatGPT on the command line | mods | Free |
| Code-completion | Codeium | Free for individuals. Paid options |
| AI-enhanced video editing suite | Kapwing AI | Paid |
| AI-enhanced video repurposing (shorts) | OpusClip | Paid |
| Emotional support | Pi.ai | Free |
Emotional support and mind defragging with Pi.ai
Pi.ai is the most advanced model I've encountered when it comes to relating to human beings.
I have had success using Pi to have a quick chat and talk through something that is frustrating or upsetting me at work, and within 15 to 25 minutes of conversation, I've processed and worked through the issue and my feelings and am clear-headed enough to make forward progress again.
This is a powerful remover of obstacles, because the longer I do what I do, the clearer it becomes that EQ is more critical than IQ.
Noticing when I'm irritated or overwhelmed and having a quick talk with someone highly intelligent and sensitive in order to process things and return with a clear mind is invaluable.
Pi's empathy is off the charts, and it feels like you're speaking with a highly skilled relational therapist.
How my developer friends and I started using GenAI
Asking the LLM to write scripts to perform one-off tasks (migrating data, cleaning up projects, taking backups of databases, etc)
Asking the LLM to explain a giant and complex stack trace (error) that came from a piece of code we're working with
Asking the LLM to take some unstructured input (raw files, log streams, security audits, etc), extract insights and return a simple list of key-value pairs
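Here is a minimal sketch of that last pattern, assuming the official OpenAI Node SDK and an OPENAI_API_KEY in the environment; the model name and log line are illustrative:

```typescript
// Ask the model to turn unstructured input into key-value pairs.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function extractKeyValues(raw: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "user",
        content: `Extract the key facts from this input as a simple list of key-value pairs:\n${raw}`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

console.log(
  await extractKeyValues("ERROR 12:01 payments-svc timeout after 30s region=us-east-1"),
);
```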
Opportunities
The advent of Generative AI heralds a plethora of opportunities that extend far beyond the realms of efficiency and productivity.
With an expected annual growth rate of 37% from 2023 to 2030, this technology is poised to revolutionize industries by democratizing creativity, enhancing decision-making, and unlocking new avenues for innovation.
In sectors like healthcare, education, and entertainment, Generative AI can provide personalized experiences, adaptive learning environments, and unprecedented creative content.
Moreover, its ability to analyze and synthesize vast amounts of data can lead to breakthroughs in research and development, opening doors to solutions for some of the world's most pressing challenges.
Challenges
Potential biases perpetuated
Since models are trained on available datasets, any biases or problematic associations in that data can be propagated through the system's outputs.
Misinformation risks
The ability to generate convincing, contextually-relevant content raises risks of propagating misinformation or fake media that appears authentic. Safeguards are needed.
Lack of reasoning capability
Despite advances, these models currently have a limited understanding of factual knowledge and common sense compared to humans. Outputs should thus not be assumed fully accurate or truthful.
Architectures and approaches such as Retrieval Augmented Generation (RAG) are commonly deployed to anchor an LLM in facts and proprietary data.
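As a rough sketch of the idea in TypeScript, the helpers below (embedFn, retrieveFn, completeFn) are hypothetical stand-ins for an embedding model, a vector database query, and an LLM call:

```typescript
// Conceptual RAG flow: embed the question, retrieve matching facts,
// and ground the prompt in those facts before the LLM answers.
type Retrieved = { text: string; score: number };

async function answerWithRag(
  question: string,
  embedFn: (text: string) => Promise<number[]>,
  retrieveFn: (vector: number[], topK: number) => Promise<Retrieved[]>,
  completeFn: (prompt: string) => Promise<string>,
): Promise<string> {
  const queryVector = await embedFn(question);      // text -> vector
  const matches = await retrieveFn(queryVector, 3); // nearest stored facts
  const context = matches.map((m) => m.text).join("\n");
  // Answering from retrieved context anchors the model in facts
  return completeFn(
    `Answer using only this context:\n${context}\n\nQuestion: ${question}`,
  );
}
```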
Hallucinations can lead junior developers astray
One of the significant challenges posed by Generative AI in software development is the phenomenon of 'hallucinations' or the generation of incorrect or nonsensical code.
This can be particularly misleading for junior developers, who might not have the experience to discern these inaccuracies.
Ensuring that AI tools are equipped with mechanisms to highlight potential uncertainties and promote best practices is crucial to mitigate this risk and foster a learning environment that enhances, rather than hinders, the development of coding skills.
Tool fragmentation and explosion
As the landscape of Generative AI tools expands, developers are increasingly faced with the paradox of choice.
The proliferation of tools, each with its unique capabilities and interfaces, can lead to fragmentation, making it challenging to maintain a streamlined and efficient workflow.
Navigating a rapidly evolving landscape
The pace at which Generative AI is advancing presents a double-edged sword.
While it drives innovation and the continuous improvement of tools, it also demands that developers remain perennial learners to keep abreast of the latest technologies and methodologies.
This rapid evolution can be daunting, necessitating a culture of continuous education and adaptability within the development community to harness the full potential of these advancements.
To be fair, this has always been the case with software development, but forces like Generative AI accelerate the subjective pace of change even further.
Ethics implications
Given the challenges in safely deploying Generative AI, these are some of the most pressing implications for ethical standards:
Audit systems for harmful biases
We also need the ability to make and track corrections when needed.
Human oversight
We need measures to catch and correct or flag AI mistakes.
In closing: As a developer...
Having worked alongside Generative AI for some time now, the experience has been occasionally panic-inducing, but mostly enjoyable.
Coding alongside ChatGPT4 throughout the day feels like having a second brain that's tirelessly available to bounce ideas off, troubleshoot problems, and help me tackle larger and more complex development challenges on my own. |
|
Write an article about "Infrastructure as Code" | Build systems in the cloud - quickly
Infrastructure as code is a critical skill these days. Practitioners are able to define and bring up reproducible copies of architectures on cloud providers
such as AWS and Google Cloud.
This course will get you hands on with CloudFormation, Terraform and Pulumi.
<Button
  variant={"green"}
  href={"/learn/infrastructure-as-code/0"}
>
  Infrastructure as code intro - start course
</Button> |
|
Write an article about "GitHub Automations" | Time to automate with GitHub!
GitHub Automations help you maintain software more effectively with less effort
<Button
  variant={"green"}
  href={"/learn/courses/github-automations/0"}
>
  GitHub Automations - start course
</Button> |
|
Write an article about "Taking Command" | Time to build a command line tool!
Project-based practice: building a command line tool in Go
<Button
  variant={"green"}
  href={"/learn/courses/taking-command/0"}
>
  Taking Command - start course
</Button> |
|
Write an article about "an article" | export const meta = {
}
Segment 1
One |
|
Write an article about "Pair coding with AI" | More than the sum of its parts...
Learning how to effectively leverage AI to help you code, design systems, generate high quality images in any style and more
can make you more productive, and can even make your work more enjoyable and less stressful.
This course shows you how.
<Button
  variant={"green"}
  href={"/learn/pair-coding-with-ai/0"}
>
  Pair coding with AI - start course
</Button> |
|
Write an article about "Emotional Intelligence for Developers" | Procrastination is about negative emotions
As you get further into your career, you come to realize that the technical chops come over time and are the easy part.
Mastering your own emotions, working with emotional beings (other humans), and recognizing when something has come up and needs attention or
skillful processing will take you further than memorizing 50 new books or development patterns.
<Button
  variant={"green"}
  href={"/learn/emotional-intelligence-for-developers/0"}
>
  Emotional Intelligence for Developers - start course
</Button> |
|
Write an article about "Coming out of your shell" | Don't stick to the default terminal
Have you ever heard of ZSH? Alacritty? Butterfish?
In this course you'll learn to install, configure and leverage powerful custom shells to supercharge your command line skills.
<Button
  variant={"green"}
  href={"/learn/courses/coming-out-of-your-shell/0"}
>
  Coming out of your shell - start course
</Button> |
|
Write an article about "an article" | export const meta = {
}
Customization makes it yours
By changing your shell, you not only make your computer more comfortable, but
perhaps even more importantly, you learn about Unix, commands, tty, and more. |
|
Write an article about "an article" | export const meta = {
}
Your shell is your home as a hacker
But most people never dare to experiment with even changing their shell.
What a shame. |
|
Write an article about "Git Going" | Most developers don't understand git...
Yet everyone needs git. Learning git well is one of the best ways to differentiate yourself to hiring managers.
Git is your save button!
Never lose work once you learn how to use git. While git has a lot of complex advanced features and configuration options, learning the basic workflow for being effective
doesn't take long, and this course will show you everything you need to know with a hands-on project.
<Button
  variant={"green"}
  href={"/checkout?product=git-going"}
>
  Git Going - start course
</Button> |
|
Write an article about "an article" | export const meta = {
}
Why Version Control?
Version control is super important! |
|
Write an article about "an article" | export const meta = {
}
Git vs GitHub
Git and GitHub are intertwined but different. |
|
Write an article about "an article" | export const meta = {
}
Most developers don't know git well
```mermaid
gitGraph
  commit
  commit
  branch develop
  checkout develop
  commit
  commit
  checkout main
  merge develop
  commit
  commit
```
And this is a great thing for you. You can differentiate yourself to hiring managers, potential teams considering you, and anyone else you collaborate with
professionally by demonstrating a strong grasp of git.
Some day, you'll need to perform complex git surgery, likely under pressure, in order to fix something or restore a service. You'll be glad then that you practiced and learned git well now. |
|
Write an article about "an article" | export const meta = {
}
Git is your save button
That's why it's so critical to learn the basics well. Git enables you to save your work and keep multiple copies of your code distributed to other machines, so that you can recover even if you spill tea all over your laptop, and to share code and collaborate professionally with other developers. |
|
Write an article about "an article" | export const meta = {
}
Learning git pays off
Learning git is very important. |
|
Write an article about "an article" | export const meta = {
}
Git configuration
Configuring git is important. |
|
Write an article about "an article" | export const metadata = {
}
Descript
Descript is a powerful video editing tool that allows users to edit videos by editing the transcript, making the process more intuitive and accessible.
Features
Free Tier: No
Chat Interface: No
Supports Local Model: No
Supports Offline Use: No
IDE Support
No IDE support information available
Language Support
No language support information available
Links
Homepage
Review |
|
Write an article about "Part #2 Live code" | Part 2 of our previous video.
Join Roie Schwaber-Cohen and me as we continue to step through and discuss the Pinecone Vercel starter template that deploys an AI chatbot that is less likely to hallucinate thanks to Retrieval Augmented Generation (RAG). |
|
Write an article about "Pinecone & Pulumi" | I co-hosted a webinar with Pulumi's Scott Lowe about:
The delta between getting an AI or ML technique working in a Jupyter Notebook and prod
How to deploy AI applications to production using the Pinecone AWS Reference Architecture
How Infrastructure as Code can simplify productionizing AI applications |
|
Write an article about "Live code" | Join Roie Schwaber-Cohen and me for a deep dive into The Pinecone Vercel starter template that deploys an AI chatbot that is less likely to hallucinate thanks to Retrieval Augmented Generation (RAG).
This is an excellent video to watch if you are learning about Generative AI, want to build a chatbot, or are having difficulty getting your current AI chatbots to return factual answers about specific topics and your proprietary data.
You don't need to already be an AI pro to watch this video, because we start off by explaining what RAG is and why it's such an important technique. The majority of this content was originally captured as a live Twitch.tv stream co-hosted by Roie (rschwabco) and myself (zackproser).
Be sure to follow us on Twitch for more Generative AI deep dives, tutorials, live demos, and conversations about the rapidly developing world of Artificial Intelligence. |
|
Write an article about "How to use Jupyter notebooks, langchain and Kaggle.com to create an AI chatbot on any topic" | In this video, I do a deep dive on the two Jupyter notebooks which I built as part of my office oracle project.
Both notebooks are now open source:
Open-sourced Office Oracle Test Notebook
Open-sourced Office Oracle Data Bench
I talk through what I learned, why Jupyter notebooks were such a handy tool for getting my data quality to where I needed it to be, before worrying about application logic.
I also demonstrate langchain DocumentLoaders, how to store secrets in Jupyter notebooks when open sourcing them, and much more.
What's involved in building an AI chatbot that is trained on a custom corpus of knowledge?
In this video I break down the data preparation, training, and app development components and explain why Jupyter notebooks were such a handy tool while creating this app and tweaking my model.
App is open source at https://github.com/zackproser/office-... and a demo is currently available at https://office-oracle.vercel.app. |
|
Write an article about "Deploying the Pinecone AWS Reference Architecture - Part 1" | In this three part video series, I deploy the Pinecone AWS Reference Architecture with Pulumi from start to finish. |
|
Write an article about "Deploying the first Cloudflare workers in front of api.cloudflare.com and www.cloudflare.com" | On the Cloudflare API team, we were responsible for api.cloudflare.com as well as www.cloudflare.com.
Here's how we wrote the first Cloudflare Workers to gracefully deprecate TLS 1.0 and set them in front of
both properties, without any downtime.
And no, if you're paying attention, my name is not Zack Prosner, it's Zack Proser :) |
|
Write an article about "Project" | Adding speech-to-text capabilities to Panthalia allows me to commence blog posts faster and more efficiently
than ever before, regardless of where I might be.
In this video I demonstrate using speech to text to create a demo short story end to end, complete with generated images,
courtesy of StableDiffusionXL. |
|
Write an article about "Deploying the Pinecone AWS Reference Architecture - Part 2" | In this three part video series, I deploy the Pinecone AWS Reference Architecture with Pulumi from start to finish. |
|
Write an article about "Destroying the Pinecone AWS Reference Architecture" | I demonstrate how to destroy a deployed Pinecone AWS Reference Architecture using Pulumi. |
|
Write an article about "Exploring a Custom Terminal-Based Developer Workflow - Tmux, Neovim, Awesome Window Manager, and More" | This video showcases my custom terminal-based developer workflow that utilizes a variety of fantastic open-source tools like tmux, neovim, Awesome Window Manager, and more.
So, let's dive in and see what makes this workflow so efficient, powerful, and fun to work with.
One of the first tools highlighted in the video is tmux, a terminal multiplexer that allows users to manage multiple terminal sessions within a single window.
I explain how tmux can increase productivity by letting developers switch between tasks quickly and easily.
I also show off how tmux can make your workflow fit your task, using techniques like pane splitting to expand and contract the available space to the work at hand.
Next up is neovim, a modernized version of the classic Vim text editor.
I demonstrate how neovim integrates seamlessly with tmux, providing powerful text editing features in the terminal.
I also discuss some of the advantages of using neovim over traditional text editors, such as its extensibility, customization options, and speed.
The Awesome Window Manager also gets its moment in the spotlight during the video.
This dynamic window manager is designed for developers who want complete control over their workspace.
I show how Awesome Window Manager can be configured to create custom layouts and keybindings, making it easier to manage multiple applications and terminal sessions simultaneously.
Throughout the video, I share a variety of other open-source tools that I have integrated into my workflow.
Some of these tools include fzf, a command-line fuzzy finder that makes searching for files and directories a breeze; ranger, a file manager designed for the terminal; and zsh, a powerful shell that offers a multitude of productivity-enhancing features.
One of the key takeaways from the video is how, by combining these open-source tools and tailoring them to my specific needs, I have created a workflow that helps me work faster, more efficiently, and with greater satisfaction.
So, if you're looking for inspiration on how to build your terminal-based developer workflow, this YouTube video is a must-watch.
See what I've learned so far in setting up a custom terminal-based setup for ultimate productivity and economy of movement. |
|
Write an article about "How to build an AI chatbot using Vercel's ai-chatbot template" | Curious how you might take Vercel's ai-chatbot template repository from GitHub and turn it into your own GPT-like chatbot of any identity? That's what I walkthrough in this video.
I show the git diffs and commit history while talking through how I integrated langchain, OpenAI, ElevenLabs for voice cloning and text to speech and Pinecone.io for the vector database in order to create a fully featured chat-GPT-like custom AI bot that can answer, factually, for an arbitrary corpus of knowledge. |
|
Write an article about "How to build chat with your data using Pinecone, LangChain and OpenAI" | I demonstrate how to build a RAG chatbot in a Jupyter Notebook end to end.
This tutorial is perfect for beginners who want help getting started, and for experienced developers who want to understand how LangChain, Pinecone and OpenAI
all fit together. |
|
Write an article about "A full play through of my HTML5 game, CanyonRunner" | CanyonRunner is a complete HTML5 game that I built with the Phaser.js framework. I wrote about how the game works in my blog post here. |
|
Write an article about "Pinecone & Pulumi" | I co-hosted a webinar with Pulumi's Engin Diri about:
The Pinecone AWS Reference Architecture,
How it's been updated to use Pinecone Serverless and the Pinecone Pulumi provider
How to deploy an AI application to production using infrastructure as code |
|
Write an article about "How to use ChatGPT in your terminal" | If you're still copying and pasting back and forth between ChatGPT in your browser and your code in your IDE, you're missing out.
Check out how easily you can use ChatGPT in your terminal! |
|
Write an article about "Project" | I've been working on this side project for several months now, and it's ready enough to demonstrate. In this video I talk through:
What it is
How it works
A complete live demo
Using Replicate.com for a REST API interface to StableDiffusion XL for image generation
Stack
Next.js
Vercel
Vercel Postgres
Vercel serverless functions
Pure JavaScript integration with git and GitHub thanks to isomorphic-git
Features
Secured via GitHub OAuth
StableDiffusion XL for image generation
Postgres database for posts and images data
S3 integration for semi-volatile image storage
Start and complete high quality blog posts in MDX one-handed while on the go
Panthalia is open-source and available at github.com/zackproser/panthalia |
|
Write an article about "What is a vector database?" | I walk through what a vector database is, by first explaining the types of problems that vector databases solve, as well as how AI "thinks".
I use clowns as an example of a large corpus of training data from which we can extract high level features, and I discuss architectures such as
semantic search and RAG. |
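As a toy illustration of the core operation a vector database performs, here is nearest-neighbor ranking with cosine similarity in TypeScript; the 3-dimensional vectors are made up, since real embeddings have hundreds or thousands of dimensions:

```typescript
// Rank stored items by how similar their vectors are to a query vector.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const stored = [
  { text: "clown juggling at a circus", vector: [0.9, 0.1, 0.2] },
  { text: "quarterly earnings report", vector: [0.1, 0.8, 0.5] },
];
const query = { text: "circus performer", vector: [0.85, 0.15, 0.25] };

const ranked = stored
  .map((item) => ({ ...item, score: cosineSimilarity(query.vector, item.vector) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].text); // "clown juggling at a circus"
```

The principle is the same at production scale; vector databases just make this lookup fast across millions of vectors. |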
|
Write an article about "How to use Jupyter Notebooks for Machine Learning and AI tasks" | In this video, I demonstrate how to load Jupyter Notebooks into Google Colab and run them for free. I show how to load Notebooks from GitHub and how to execute individual cells and how to run
Notebooks end to end. I also discuss some important security considerations around leaking API keys via Jupyter Notebooks. |
|
Write an article about "Deploying the Pinecone AWS Reference Architecture - Part 3" | In this three part video series, I deploy the Pinecone AWS Reference Architecture with Pulumi from start to finish. |
|
Write an article about "Master GitHub Pull Request Reviews with gh-dash and Octo - A YouTube Video Tutorial" | In this video tutorial, I demonstrate the power of combining gh-dash and Octo for a seamless terminal-based GitHub pull request review experience.
In this video, I show you how these two powerful tools can be used together to quickly find, organize, and review pull requests on GitHub, all from the comfort of your terminal.
Topics covered
Discovering Pull Requests with gh-dash
We'll kick off the tutorial by showcasing gh-dash's impressive pull request discovery capabilities.
Watch as we navigate the visually appealing TUI interface to find, filter, and sort pull requests under common headers, using custom filters to locate the exact pull requests you need.
Advanced GitHub Searches in Your Terminal
Explore gh-dash's advanced search functionality in action as we demonstrate how to perform fully-featured GitHub searches directly in your terminal.
Learn how to search across repositories, issues, and pull requests using a range of query parameters, streamlining your pull request review process.
In-Depth Code Reviews Using Octo
Once we've located the pull requests that need reviewing, we'll switch gears and dive into Octo, the powerful Neovim plugin for code reviews.
Witness how Octo integrates seamlessly with Neovim, enabling you to view code changes, commits, and navigate the codebase with ease.
Participating in Reviews with Comments and Emoji Reactions
See how Octo takes code reviews to the next level by allowing you to leave detailed in-line comments and even add GitHub emoji reactions to comments.
With Octo, you can actively participate in the review process and provide valuable feedback to your colleagues, all within the Neovim interface.
Combining gh-dash and Octo for a Streamlined Workflow
In the final segment of the video tutorial, we'll demonstrate how to create a seamless workflow that combines the strengths of gh-dash and Octo.
Learn how to harness the power of both tools to optimize your GitHub pull request review process, from locating pull requests using gh-dash to conducting comprehensive code reviews with Octo.
By the end of this video tutorial, you will have witnessed the incredible potential of combining gh-dash and Octo for a robust terminal-based GitHub pull request review experience.
We hope you'll be inspired to integrate these powerful tools into your workflow, maximizing your efficiency and productivity in managing and reviewing pull requests on GitHub.
Happy coding! |
|
Write an article about "Mastering Fast, Secure AWS Access with open source tool aws-vault" | Don't hardcode your AWS credentials into your dotfiles or code! Use aws-vault to store them securely
In this YouTube video, I demonstrate how to use the open-source Golang tool, aws-vault, for securely managing access to multiple AWS accounts.
aws-vault stores your permanent AWS credentials in your operating system's secret store or keyring and fetches temporary AWS credentials from the AWS STS endpoint.
This method is not only secure but also efficient, especially when combined with Multi-Factor Authentication.
In this video, I demonstrate the following aspects of aws-vault:
Executing arbitrary commands against your account: The video starts by showing how aws-vault can be used to execute any command against your AWS account securely.
By invoking aws-vault with the appropriate profile name, you can fetch temporary AWS credentials and pass them into subsequent commands, ensuring a secure way of managing AWS access.
Quick AWS account login: Next, I show how to use aws-vault to log in to one of your AWS accounts quickly.
This feature is particularly helpful for developers and system administrators who manage multiple AWS accounts and need to switch between them frequently.
Integration with Firefox container tabs: One of the most exciting parts of the video is the demonstration of how aws-vault can be used in conjunction with Firefox container tabs to log in to multiple AWS accounts simultaneously.
This innovative approach allows you to maintain separate browsing sessions for each AWS account, making it easier to manage and work with different environments.
The video emphasizes how using aws-vault can significantly improve your command line efficiency and speed up your workflow while working with various test and production environments.
If you're a developer or system administrator looking to enhance your AWS account management skills, this YouTube video is for you. |
|
Write an article about "Building an AI chatbot with langchain, Pinecone.io, Jupyter notebooks and Vercel" | What's involved in building an AI chatbot that is trained on a custom corpus of knowledge?
In this video I break down the data preparation, training, and app development components and explain why Jupyter notebooks were such a handy tool while creating this app and tweaking my model.
App is open source at https://github.com/zackproser/office-... and a demo is currently available at https://office-oracle.vercel.app. |
|
Write an article about "Deploying a jump host for the Pinecone AWS Reference Architecture" | I demonstrate how to configure, deploy and connect through a jump host so that you can interact with RDS Postgres
and other resources running in the VPC's private subnets. |
|
Write an article about "Cloud-Nuke - A Handy Open-Source Tool for Managing AWS Resources" | In this video, we'll have a more casual conversation about cloud-nuke, an open-source tool created and maintained by Gruntwork.io.
I discuss the benefits and features of cloud-nuke, giving you an idea of how it can help you manage AWS resources more efficiently.
First and foremost, cloud-nuke is a Golang CLI tool that leverages the various AWS Go SDKs to efficiently find and destroy AWS resources.
This makes it a handy tool for developers and system administrators who need to clean up their cloud environment, save on costs, and minimize security risks.
One of the main benefits of cloud-nuke is its ability to efficiently search and delete AWS resources.
It does this by using a powerful filtering system that can quickly identify and remove unnecessary resources, while still giving you full control over what gets deleted.
This means that you don't have to worry about accidentally removing critical resources.
Another useful feature of cloud-nuke is its support for regex filters and config files.
This allows you to exclude or target resources based on their names, giving you even more control over your cloud environment.
For example, you might have a naming convention for temporary resources, and with cloud-nuke's regex filtering, you can quickly identify and delete these resources as needed.
Configuring cloud-nuke is also a breeze, as you can define custom rules and policies for managing resources.
This means you can tailor the tool to meet the specific needs of your organization, ensuring that your cloud environment stays clean and secure.
One thing to keep in mind when using cloud-nuke is that it's wise to review the list of resources it plans to delete before confirming each run. This will help you avoid accidentally deleting critical resources, and it will also ensure that you're keeping up with any changes in your cloud environment.
In addition to using cloud-nuke as a standalone tool, you can also integrate it with other cloud management tools and services.
This will help you create a more comprehensive cloud management strategy, making it easier to keep your environment secure and well-organized.
To sum it up, cloud-nuke is a versatile open-source tool that can help you manage your AWS resources more effectively.
Its efficient search and deletion capabilities, support for regex filters and config files, and easy configuration make it a valuable addition to any developer's or system administrator's toolkit.
So, if you're looking for a better way to manage your AWS resources, give cloud-nuke a try and see how it can make your life easier. |
|
Write an article about "Semantic Search with TypeScript and Pinecone" | Roie Schwaber-Cohen and I discuss semantic search and step through the code for performing semantic search with Pinecone's vector database. |
|
Write an article about "Episode" | Table of contents
Welcome to Episode 2
In today's episode, we're looking at interactive machine learning demos, vector databases compared, and developer anxiety.
My work
Introducing - interactive AI demos
I've added a new section to my site, demos. To kick things off, I built two interactive demos:
Tokenization demo
Embeddings demo
Both demos allow you to enter freeform text and then convert it to a different representation that machines can understand.
The tokenization demo shows you how the tiktoken library converts your natural language into token IDs from a given vocabulary, while the embeddings demo shows you how text is converted to an array of floating point numbers representing the features that the
embedding model extracted from your input data.
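For a feel of what the tokenization demo does under the hood, here is a minimal sketch, assuming the js-tiktoken package (a JavaScript port of OpenAI's tiktoken):

```typescript
// Convert natural language into the integer token IDs a model reads.
import { getEncoding } from "js-tiktoken";

const enc = getEncoding("cl100k_base"); // vocabulary used by GPT-4-era models
const tokenIds = enc.encode("Machines read token IDs, not words.");

console.log(tokenIds); // an array of integers, one per token
```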
I'm planning to do a lot more with this section in the future. Some initial ideas:
Create a nice intro page linking all the demos together in a sequence that helps you to iteratively build up your understanding
Add more demos - I plan to ship a new vector database demonstration using Pinecone shortly that will introduce the high level concepts involved in working with vector databases and potentially even demonstrate visualizing high-dimensional vector space
Take requests - If you have ideas for new demos, or aspects of machine learning or AI pipelines that you find confusing, let me know by responding to this email.
Vector databases compared
I wrote a new post comparing top vector database offerings. I'm treating this as a living document, meaning that I'll likely
add to and refine it over time.
What's abuzz in the news
Here's what I've come across and have been reading lately.
The common theme is developer anxiety: the velocity of changes and new generative AI models and AI-assisted developer tooling, combined with ongoing
industry layoffs and the announcement of "AI software developer" Devin, has many developers looking to the future with deep concern and worry.
Some have wondered aloud if their careers are already over, some are adopting the changes in order to continue growing their careers, and still others remain deeply skeptical of AI's ability to replace all of the squishy aspects to our jobs that don't fit in a nice spec.
What's my plan?
As usual, I intend to keep on learning, publishing and growing.
I've been hacking alongside "AI" for a year and a half now, and so far my productivity and job satisfaction have only improved.
Are we going to need fewer individual programmers at some unknown point in the future?
Probably.
Does that mean that there won't be opportunities for people who are hungry and willing to learn?
Probably not.
Recommended reading
The AI Gold Rush
The Top 100 GenAI Consumer Apps
Can You Replace Your Software Engineers With AI?
Developers are on edge
My favorite tools
High-level code completion
I am still ping-ponging back and forth between ChatGPT 4 and Anthropic's Claude 3 Opus.
I am generally impressed by Claude 3 Opus, but even with the premium subscription, I'm finding some of the limits to be noticeably dear, if you will.
Several days in a row now I've gotten the warning about butting up against my message-sending limits.
At least for what I'm using them both for right now: architecture sanity checks and boilerplate code generation, it's not yet the case that one is so obviously superior that I'm ready to change up my workflow.
Autocomplete / code completion
Codeium!
AI-assisted video editing
Kapwing AI
That's all for this episode! If you liked this content or found it helpful in any way, please pass it on to someone you know who could benefit. |
|
Write an article about "Episode" | Table of contents
Starting fresh with episode 1
Why Episode 1? I've decided to invest more time and effort into my newsletter. All future episodes will now be
available on my newsletter archive at https://zackproser.com/newsletter.
Going forward, each of my newsletter episodes will include:
My work - posts, videos, open-source projects I've recently shipped
What's abuzz in the news - new AI models, open-source models, LLMs, GPTs, custom GPTs and more
My favorite tools - a good snapshot of the AI-enhanced and other developer tooling I'm enamored with at the moment
I will aim to publish and send a new episode every two weeks.
My work
The Generative AI Bootcamp: Developer Tooling course is now available
I've released my first course!
The Generative AI Bootcamp: DevTools course is designed for semi and non-technical folks who want to understand:
What Generative AI is
Which professions and skillsets it is disrupting, why and how
Which AI-enhanced developer tooling on the scene is working and why
This course is especially designed for investors, analysts and market researchers looking to understand the opportunities and challenges of Generative AI as it relates to Developer Tooling, Integrated Developer Environments (IDEs), etc.
2023 Wins - My year in review
2023 was a big year for me, including a career pivot to a new role, a new company and my first formal entry into the AI space.
I reflect on my wins and learnings from the previous year.
Testing Pinecone Serverless at Scale with the AWS Reference Architecture
I updated the Pinecone AWS Reference Architecture to use Pinecone Serverless, making for an excellent test bed for
putting Pinecone through its paces at scale.
Just keep an eye on your AWS bill!
Codeium vs ChatGPT
I get asked often enough about the differences between Codeium for code completion (intelligent autocomplete) and ChatGPT4 that I figured I should just write a comprehensive comparison of their capabilities and utility.
My first book credit - My Horrible Career
What started out as an extended conversation with my programming mentor John Arundel became a whole book!
How to build a sitemap for Next.js that captures static and dynamic routes
Some old-school tutorial content for my Next.js and Vercel fans.
What's abuzz in the news
Anthropic releases Claude 3 family of models
I've been experimenting with Claude 3 Opus, their most intelligent model, to see how it slots in for high-level architecture discussions and code generation compared to ChatGPT 4.
So far, so good, but I'll have more thoughts and observations here soon. Watch this space!
My favorite tools
High-level code completion
Currently neck and neck between ChatGPT 4 and Anthropic's Claude 3 Opus. Stay tuned for more thoughts.
Autocomplete / code completion
Codeium!
AI-assisted video editing
Kapwing AI
That's all for this episode! If you liked this content or found it helpful in any way, please pass it on to someone you know who could benefit. |
|
Write an article about "ChatGPT4 and Codeium are still my favorite dev assistant stack" | As of October 10th, 2023, ChatGPT4 and Codeium are all I need to make excellent progress and have fun doing it.
As of October 10th, 2023, the Generative AI hype cycle is still in full-swing and there are more startups with their own developer-focused AI-assisted coding tools than ever before. Here's why
I'm still perfectly content with ChatGPT4 (with a Plus subscription for $20 per month) and Codeium, which I've reviewed here for code completion.
They are available everywhere
ChatGPT4 can be opened from a browser anywhere, even on internet-connected machines I don't own: chat.openai.com is part of my muscle memory now, and once I log in, my entire conversational history is available to me. Now that ChatGPT4 is available on Android, it's truly with me wherever I go.
They are low-friction
Now that ChatGPT4 is on my phone, I can start up a new conversation when I'm walking and away from my desk.
Some days, after working all day and winding down for sleep, I'll still have a couple of exciting creative threads I don't want to miss out on, so I'll quickly jot or speak a paragraph of context into a new GPT4 chat thread to get it whirring away on the idea.
I can either tune it by giving it more
feedback or just pass out and pick up the conversation the next day.
I don't always stick to stock ChatGPT4 form-factors, however. Charmbracelet's mods wrapper is the highest quality and most delightful tool I've found for working with
GPT4 in your unix pipes or just asking quick questions in your terminal such as, "Remind me of the psql command to connect to my Postgres host".
Being able to pipe your entire Python file to mods and ask it
to help you find the bug is truly accelerating.
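Here's the kind of thing I mean - a quick sketch that assumes you have mods installed and configured with an OpenAI API key (the file name is just an example):
bash
# Pipe an entire file to GPT4 and ask a question about it
cat main.py | mods "help me find the bug in this script"

# Or ask a one-off question without leaving the terminal
mods "remind me of the psql command to connect to a remote Postgres host"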
Codeium works like a dream once you get it installed. I tend to use Neovim by preference but also work with VSCode - once you're over the initial installation and auth hurdles, it "just works".
Most everything else I've tried doesn't work
No disrespect to these tools or the teams behind them. I do believe the potential is there and that many of them will become very successful in due time once the initial kinks are worked out.
But I've spent a great deal of time experimenting with these across various form factors: an Ubuntu desktop, my daily driver Linux laptop, a standard MacBook pro, and the reality is that the code or tests
or suggestions they output are often off the mark.
ChatGPT4 extends my capabilities with minimal fuss
Since ChatGPT4 is available in the browser, I can access it from any machine with an internet connection, even if I'm traveling.
Since it's now also available as an Android app, I can also reference past
conversations and start new ones on my phone.
The Android app has a long way to go until it's perfect, yet it does already support speech to text, making for the lowest possible friction entrypoint to a new
app idea, architectural discussion or line of inquiry to help cement my understanding of a topic I'm learning more about.
They are complementary
ChatGPT4 excels at having long-winded discussions with me about the many ways I might implement desired functionality in one of my applications or side projects.
It often suggests things that I would have missed
on my first pass, such as the fact that Vercel has deployment hooks I can take advantage of, and it's especially useful once the project is underway.
I can say things like:
I'm changing the data model yet again now that I understand how I want this UX to work - drop these fields from the posts table, add these to the images table and re-generate the base SQL migrations I run to scaffold the app.
I think of and treat ChatGPT4 as a senior level technical peer.
I do sometimes ask it to generate code for me as a starting point for a new component, or to explain a TypeScript error that is baking my noodle, but it's main value is in being that intelligent always-available
coding partner I can talk through issues with.
Meanwhile, Codeium runs in my actual IDE and is one of the best tools I've found for code completion - it does a better job than just about anything else I've evaluated at grokking the surrounding context, and its suggestions are often scarily spot-on. That means it saves me a couple of seconds here and there, constantly: closing HTML tags for me, finishing up that convenience JavaScript function I'm writing, even completing
my thought as I'm filling in a README.
That's the other key feature of Codeium that makes it such a winner for me - it's with me everywhere and it can suggest completions in any context, ranging from prose, to TOML, to Python,
to TypeScript, to Go, to Dockerfiles, to YAML, and on and on.
With GPT4 as my coding buddy who has the memory of an elephant and who can deconstruct even the nastiest stack traces or log dumps on command, and Codeium helping me to save little bits of time here and there but constantly,
I have settled on a workflow that accelerates me and, perhaps more importantly, keeps it fun.
Looking forward and what I'm still missing
I have no doubt that the current generation of developer-focused AI tools are going to continue improving at a rapid pace.
ChatGPT itself has seen nothing but enhancement at breakneck speed since I started using it and I haven't
even gotten my hands on its multi-modal (vision, hearing, etc) capabilities yet.
However, even with excellent wrappers such as mods, which I've mentioned above, what I find myself missing is the ability for ChatGPT4 to read and
see my entire codebase when I'm working on a given project.
The quality of its answers are occasionally hobbled by its inability to load my entire codebase into its context, which leads to it generating more generic sample code than it really needs to.
I'm confident that with additional work and a little time, it won't be long until ChatGPT4 or one of its competitors is able to say: "The reason your current Jest test is choking on that..." I'm able to get that information out of it now, but it just takes a good bit of careful prompting and more copy/paste than I would ideally like.
What I really want is the helpful daemon looking over my shoulder, who is smart enough to know
when to raise its hand to point out something that's going to cause the build to fail, and when to keep quiet because even if it knows better that's just my personal coding style preference so better to leave it alone.
We're not quite there yet, but all of the base ingredients needed to create this experience already exist.
Indeed, many different companies both large and small are sprinting full-tilt toward this experience, as I've written about recently, but there's still quite a way to go until these tools present uniformly smooth experiences to their end users:
GitHub Copilot review
The top bugs all AI developer tools have right now
Codeium review
CodiumAI PR agent for eased GitHub maintenance
Can ChatGPT4 help me complete side projects more quickly? |
|
Write an article about "Opengraph dynamic social images" | ${process.env.NEXT_PUBLIC_SITE_URL}/api/og}
alt="Zachary Proser's default opengraph image"
/>
What is opengraph?
Opengraph is a standard for social media image formats. It's the "card" that is rendered whenever you or someone else shares a URL to your site on social media:
It's considered a good idea to have an opengraph image associated with each of your posts because it's a bit of nice eye candy that theoretically helps improve your click-through rate.
A high quality opengraph image can help
make your site look more professional.
Implementing my desired functionality
This took me a bit to wrap my head around.
The examples Vercel provides were helpful and high quality as usual (they even have a helpful opengraph playground), but I wish there had been more of them.
It took me a while to figure out how to implement the exact
workflow I wanted:
I add a "hero" image to each of my posts which renders on my blog's index page. I wanted my opengraph image for a post to contain the post's title as well as its hero image
I wanted a fallback image to render for my home or index pages - and in case the individual post's image couldn't be rendered for whatever reason
In this way, I could have an attractive opengraph image for each post shared online, while having a sane default image that does a good job of promoting my site in case of any issues.
In general, I'm pretty happy with how the final result turned out, but knowing myself I'll likely have additional tweaks to make in the future to improve it further.
If you look closely (right click the image and open it in a new tab), you can see that my image has two linear gradients, one for the green background which transitions between greens from top to bottom, and one for blue which transitions left to right.
In addition, each band has a semi-transparent background image - giving a brushed aluminum effect to the top and bottom green bands and a striped paper effect to the center blue card where the title and hero image are rendered.
I was able to
pull this off due to the fact that Vercel's '@vercel/og' package allows you to use Tailwind CSS in combination with inline styles.
Per-post images plus a fallback image for home and index pages
This is my fallback image, and it is being rendered by hitting the local /api/og endpoint.
Its src parameter is `${process.env.NEXT_PUBLIC_SITE_URL}/api/og`, which computes to https://zackproser.com/api/og in production:
<Image src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og`} alt="Zachary Proser's default opengraph image" />
Example dynamically rendered opengraph images for posts:
Blog post with dynamic title and hero image
javascript
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=Retrieval Augmented Generation (RAG)&image=/_next/static/media/retrieval-augmented-generation.2337c1a1.webp`}
  alt="Retrieval Augmented Generation post"
/>
Another blog post with dynamic title and hero image
javascript
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=AI-powered and built with...JavaScript?&image=/_next/static/media/javascript-ai.71499014.webp`}
  alt="AI-powered and built with JavaScript post"
/>
Blog post with dynamic title but fallback image
Having gone through this exercise, I would highly recommend implementing a fallback image that renders in two cases:
1. If the page or post shared did not have a hero image associated with it (because it's your home page, for example)
2. Some error was encountered in rendering the hero image
Here's an example opengraph image where the title was rendered dynamically, but the fallback image was used:
javascript
<Image
  src={`${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=This is still a dynamically generated title`}
  alt="This is still a dynamically generated title"
/>
Understanding the flow of Vercel's '@vercel/og' package and Next.js
This is a flowchart of how the sequence works:
In essence, you're creating an API route in your Next.js site that can read two query parameters from requests it receives:
1. The title of the post to generate an image for
2. The hero image to include in the dynamic image
and use these values to render the final @vercel/og ImageResponse.
Honestly, it was a huge pain in the ass to get it all working the way I wanted, but it would be far worse without this library and Next.js integration.
In exchange for the semi-tedious experience of building out your custom OG image you get tremendous flexibility within certain hard limitations, which you can read about here.
Here's my current /api/og route code, which still needs to be improved and cleaned up, but I'm sharing it in case it helps anyone else trying to figure out this exact same flow.
This entire site is open-source and available at github.com/zackproser/portfolio
javascript
import { ImageResponse } from '@vercel/og';

export const config = {
  // @vercel/og requires the Edge runtime
  runtime: 'edge',
};

export default async function handler(request) {
  const { searchParams } = new URL(request.url);
  console.log('og API route searchParams %o:', searchParams);

  const hasTitle = searchParams.has('title');
  const title = hasTitle ? searchParams.get('title') : 'Portfolio, blog, videos and open-source projects';

  // This is horrific - need to figure out and fix this
  const hasImage = searchParams.has('image') || searchParams.has('amp;image');
  // This is equally horrific - need to figure out and fix this for good
  const image = hasImage ? (searchParams.get('image') || searchParams.get('amp;image')) : undefined;

  console.log(`og API route hasImage: ${hasImage}, image: ${image}`);

  // My profile image is stored in /public so that we don't need to rely on an external host like GitHub
  // that might go down
  const profileImageFetchURL = new URL('/public/zack.webp', import.meta.url);
  const profileImageData = await fetch(profileImageFetchURL).then(
    (res) => res.arrayBuffer(),
  );

  // This is the fallback image I use if the current post doesn't have an image for whatever reason (like it's the homepage)
  const fallBackImageURL = new URL('/public/zack-proser-dev-advocate.webp', import.meta.url);

  // This is the URL to the image on my site
  const ultimateURL = hasImage ? new URL(`${process.env.NEXT_PUBLIC_SITE_URL}${image}`) : fallBackImageURL;

  const postImageData = await fetch(ultimateURL).then(
    (res) => res.arrayBuffer(),
  ).catch((err) => {
    console.log(`og API route err: ${err}`);
  });

  return new ImageResponse(
    (
      <div
        tw="flex h-full w-full flex-col items-center justify-center"
        style={{
          backgroundImage:
            'linear-gradient(to right, rgba(31, 97, 141, 0.8), rgba(15, 23, 42, 0.8)), url(https://zackproser.com/subtle-stripes.webp)',
        }}
      >
        <div tw="flex flex-col items-center">
          <img src={profileImageData} alt="Zachary Proser" tw="h-24 w-24 rounded-full" />
          <h2 tw="text-2xl text-white">Zachary Proser</h2>
          <h3 tw="text-xl text-white">Staff Developer Advocate @Pinecone.io</h3>
          <h1 tw="text-4xl text-white">{title}</h1>
          <div tw="flex w-64 h-85 rounded overflow-hidden mt-4">
            <img
              src={postImageData}
              alt="Post Image"
              className="w-full h-full object-cover"
            />
          </div>
        </div>
        <div tw="flex flex-col items-center">
          <h1
            tw="text-white text-3xl pb-2"
          >
            zackproser.com
          </h1>
        </div>
      </div>
    ),
  );
}
Here's my ArticleLayout.jsx component, which renders the <meta name="og:image" content={ogURL} /> tag in the head of each post to provide the URL that social media sites will call when rendering
their cards:
javascript
function ArrowLeftIcon(props) {
  // SVG markup elided from this excerpt
  return null
}
export function ArticleLayout({
children,
metadata,
isRssFeed = false,
previousPathname,
}) {
let router = useRouter()
if (isRssFeed) {
return children
}
const sanitizedTitle = encodeURIComponent(metadata.title.replace(/'/g, ''));
// opengraph URL that gets rendered into the HTML, but is really a URL to call our backend opengraph dynamic image generating API endpoint
let ogURL = `${process.env.NEXT_PUBLIC_SITE_URL}/api/og?title=${sanitizedTitle}`
// If the post includes an image, append it as a query param to the final opengraph endpoint
if (metadata.image && metadata.image.src) {
ogURL = ogURL + `&image=${metadata.image.src}`
}
console.log(`ArticleLayout ogURL: ${ogURL}`);
let root = '/blog/'
if (metadata?.type == 'video') {
root = '/videos/'
}
const builtURL = `${process.env.NEXT_PUBLIC_SITE_URL}${root}${metadata.slug ?? null}`
const postURL = new URL(builtURL)
return (
    <>
      <Head>
        <title>{`${metadata.title} - Zachary Proser`}</title>
        <meta property="og:title" content={metadata.title} />
        <meta property="og:image" content={ogURL} />
        <meta name="twitter:card" content="summary_large_image" />
        <meta property="twitter:domain" content="zackproser.com" />
        <meta property="twitter:url" content={postURL} />
        <meta name="twitter:title" content={metadata.title} />
        <meta name="twitter:description" content={metadata.description} />
        <meta name="twitter:image" content={ogURL} />
      </Head>
<Container className="mt-16 lg:mt-32">
<div className="xl:relative">
<div className="mx-auto max-w-2xl">
{previousPathname && (
<button
type="button"
onClick={() => router.back()}
aria-label="Go back to articles"
className="group mb-8 flex h-10 w-10 items-center justify-center rounded-full bg-white shadow-md shadow-zinc-800/5 ring-1 ring-zinc-900/5 transition dark:border dark:border-zinc-700/50 dark:bg-zinc-800 dark:ring-0 dark:ring-white/10 dark:hover:border-zinc-700 dark:hover:ring-white/20 lg:absolute lg:-left-5 lg:-mt-2 lg:mb-0 xl:-top-1.5 xl:left-0 xl:mt-0"
>
<ArrowLeftIcon className="h-4 w-4 stroke-zinc-500 transition group-hover:stroke-zinc-700 dark:stroke-zinc-500 dark:group-hover:stroke-zinc-400" />
</button>
)}
<article>
<header className="flex flex-col">
<h1 className="mt-6 text-4xl font-bold tracking-tight text-zinc-800 dark:text-zinc-100 sm:text-5xl">
{metadata.title}
</h1>
<time
dateTime={metadata.date}
className="order-first flex items-center text-base text-zinc-400 dark:text-zinc-500"
>
<span className="h-4 w-0.5 rounded-full bg-zinc-200 dark:bg-zinc-500" />
<span className="ml-3">{formatDate(metadata.date)}</span>
</time>
</header>
<Prose className="mt-8">{children}</Prose>
</article>
<Newsletter />
<FollowButtons />
</div>
</div>
</Container>
)
}
Thanks for reading
If you enjoyed this post or found it helpful in any way, do me a favor and share the URL somewhere on social media so that you can see my opengraph image in action. |
|
Write an article about "Wash three walls with one bucket" | Without much more work, you can ensure your side projects are not only expanding your knowledge, but also expanding your portfolio of hire-able skills.
Building side projects is my favorite way to keep my skills sharp and invest in my knowledge portfolio. But this post is about more than picking good side projects that
will stretch your current knowledge and help you stay up on different modes of development. It's also about creating a virtuous cycle that goes from:
Idea
Building in public
Sharing progress and thoughts
Incorporating works into your portfolio
Releasing and repeating
It's about creating leverage even while learning a new technology or ramping up on a new paradigm and always keeping yourself in a position to seek out your next opportunity.
Having skills is not sufficient
You also need to display those skills, display some level of social proof around those skills, and, importantly, be findable for them. It's one thing to actually complete good and interesting work, but does it really exist if people can't find it? You already did the work and learning - now, be find-able for it.
In order to best capture the value generated by your learning, it helps to run your own tech blog as I recently wrote about.
Let's try a real world example to make this advice concrete. Last year, I found myself with the following desires while concentrating on Golang and wanting to start a new side project:
I wanted to deepen my understanding of how size measurements for various data types works
I wanted a tool that could help me learn the comparative sizes of different pieces of data while I worked
I wanted to practice Golang
I wanted to practice setting up CI/CD on GitHub
How could I design a side project that would incorporate all of these threads?
Give yourself good homework
Simple enough: I would build a Golang CLI that helps you understand visually the relative size of things in bits and bytes.
I would build it as an open-source project on GitHub, and write excellent unit tests that I could wire up to run via GitHub actions on every pull request and branch push. This would not
only give me valuable practice in setting up CI/CD for another project, but because my tests would be run automatically on branch pushes and opened pull requests, maintenance for the project
would become much easier:
Even folks who had never used the tool before would be able to understand in a few minutes if their pull request failed any tests or not.
I would keep mental or written notes about what I learned while working on this project, what I would improve and what I might have done differently. These are seeds for the
blog post I would ultimately write about the project.
Finally I would add the project to my /projects page as a means of displaying these skills.
You never know what's going to be a hit
In art as in building knowledge portfolios, how you feel about the work, the subject matter and your ultimate solutions may be very different from how folks who are looking to hire or work with people like you
may look at them.
This means a couple of things: it's possible for you to think a project is stupid or simple or gross, and yet have the market validate a strong desire for what you're doing.
It's possible for something you considered
trivial, such as setting up CI/CD for this particular language in this way for the 195th time, to be exactly what your next client is looking to hire you for.
It's possible for something you consider unfinished, unpolished or not very good to be the hook that sufficiently impresses someone looking for folks who know the tech stack or specific technology you're working with.
It's possible for folks to hire you for something that deep down you no longer feel particularly fired up about - something stable or boring or "old hat" that's valuable regardless, which you end up doing for longer to
get some cash, make a new connection or establish a new client relationship.
This means it's also unlikely to be a great use of your time to obsess endlessly about one particular piece of your project - in the end, it could be that nobody in the world cares or shares your vision about how the
CLI or the graphics rendering engine works and is unique, but that your custom build system you hacked up in Bash and Docker is potentially transformative for someone else's business if applied by a trusted partner or consultant.
Release your work and then let go of it
Releasing means pushing the big red scary button to make something public: whether that means merging the pull request to put your post up, sharing it to LinkedIn or elsewhere, switching your GitHub repository from private or public,
or making the video or giving the talk.
Letting go of it means something different.
I've noticed that I tend to do well with the releasing part, which makes my work public and available to the world, but then I tend to spend too much time checking stats, analytics, click-through
rates, etc once the work has been out for a while. I want to change this habit up, because I'd rather spend that time and energy learning or working on the next project.
Depending on who you are and where you are in your creative journey, you may find different elements of this phase difficult or easy.
My recommendation is to actually publish your work, even if it's mostly there and not 100% polished.
You never
know what feedback you'll get or connection you'll make by simply sharing your work and allowing it to be out in the world.
Then, I recommend, while noting that I'm still working on this piece myself, that you let it go so that you are clear to begin work on the next thing.
Wash three walls with one bucket
The excitement to learn and expand your skillset draws you forward into the next project.
The next project gives you ample opportunity to encounter issues, problems, bugs, your own knowledge gaps and broken workflows.
These are valuable and part of the process; they are not indications of failure.
Getting close enough to the original goal for your project allows you to move into the polishing phase and to consider your activities retrospectively.
What worked and what didn't?
Why?
What did I learn?
Writing about or making videos about the project allows you to get things clear enough in your head to tell a story - which further draws in your potential audience and solidifies your expertise as a developer.
Your finished writing and other artifacts, when shared with the world, may continue to drive traffic to your site and projects for years to come, giving you leads and the opportunity to apply the skills you've been
honing in a professional or community context.
Create virtuous cycles that draw you forward toward your goals, and wash three walls with one bucket.
Where did this phrase come from?
"Kill two birds with one stone" is a popular catchphrase meaning to solve two problems with one effort. But it's a bit on the nose and it endorses avicide, which I'm generally against.
One of my professors once related the story of one of her professors who was a Catholic monk and an expert
in the Latin language.
He would say "Bullshit!", it's "wash two walls with one bucket" when asked for the equivalent to "kill two birds with one stone" in Latin.
I liked that better so I started using it where previously I would have
suggested wasting a pair of birds.
For this piece, I decided the key idea was to pack in dense learning opportunities across channels as part of your usual habit of exploring the space and practicing skills via side projects.
So, I decided to add another wall. |
|
Write an article about "The Pain and Poetry of Python" | export const href = "https://pinecone.io/blog/pain-poetry-python"
This was the fourth article I published while working at Pinecone:
Read article |
|
Write an article about "A Blueprint for Modern API" | Introduction
Pageripper is a commercial API that extracts data from webpages, even if they're rendered with Javascript.
In this post, I'll detail the Continuous Integration and Continuous Delivery (CI/CD) automations I've configured via GitHub Actions for my Pageripper project, explain how they work, and show why they make working on Pageripper delightful (and fast).
Why care about developer experience?
Working on well-automated repositories is delightful.
Focusing on the logic and UX of my changes allows me to do my best work, while the repository handles the tedium of running tests and publishing releases.
At Gruntwork.io, we published git-xargs, a tool for multiplexing changes across many repositories simultaneously.
Working on this project was a delight, because we spent the time to implement an excellent CI/CD pipeline that handled running tests and publishing releases.
As a result, reviewing and merging pull requests, adding new features and fixing bugs was significantly snappier, and felt easier to do.
So, why should you care about developer experience? Let's consider what happens when it's a mess...
What happens when your developer experience sucks
I've seen work slow to a crawl because the repositories were a mess: long-running tests that took over 45 minutes to complete a run and that were flaky.
Even well-intentioned and experienced developers experience a
slow-down effect when dealing with repositories that lack CI/CD or have problematic, flaky builds and ultimately untrustable pipelines.
Taking the time to correctly set up your repositories up front is a case of slowing down to go faster. Ultimately, it's a matter of project velocity.
Developer time is limited and expensive, so making sure the path
is clear for the production line is critical to success.
What are CI/CD automations?
Continuous Integration is about constantly merging into your project verified and tested units of incremental value.
You add a new feature, test it locally and then push it up on a branch and open a pull request.
Without needing to do anything else, the automation workflows kick in and run the project's tests for you, verifying you haven't broken anything.
If the tests pass, you merge them in, which prompts more automation to deploy your latest code to production.
In this way, developers get to focus on logic, features, UX and doing the right thing from a code perspective.
The pipeline instruments the guardrails that everyone needs in order to move very quickly.
And that's what this is all about at the end of the day. Mature pipelines allow you to move faster. Safety begets speed.
Pageripper's automations
Let's take a look at the workflows I've configured for Pageripper.
On pull request
Jest unit tests are run
Tests run on every pull request, and they run quickly. Unit tests are written with Jest.
Developers get feedback on their changes in a minute or less, tightening the overall iteration cycle.
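As a rough sketch, a pull request test workflow can be as small as the following - note that the Node version and exact steps here are assumptions for illustration, not Pageripper's actual configuration:
yaml
name: tests
on:
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test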
npm build
It's possible for your unit tests to pass but your application build to still fail due to any number of things: from dependency issues to incorrect configurations and more.
For that reason, whenever tests are run, the workflow also runs an npm build to ensure the application builds successfully.
docker build
The Pageripper API is Dockerized because it's running on AWS Elastic Container Service (ECS).
Because Pageripper uses Puppeteer, which uses Chromium or an installation of the Chrome browser, building the Docker image is a bit involved and
also takes a while.
I want to know immediately if the build is broken, so if and only if the tests all pass, then a test docker build is done via GitHub actions as well.
OpenAPI spec validation
For consistency and the many downstream benefits (documentation and SDK generation, for example), I maintain an OpenAPI spec for Pageripper.
On every pull request, this spec is validated to ensure no changes or typos broke anything.
This spec is used for a couple of things:
Generating the Swagger UI for the API documentation that is hosted on GitHub pages and integrated with the repository
Generating the test requests and the documentation and examples on RapidAPI, where Pageripper is listed
Running dredd to validate that the API correctly implements the spec
Pulumi preview
Pageripper uses Pulumi and Infrastructure as Code (IaC) to manage not just the packaging of the application into a Docker container, but the orchestration of all other supporting infrastructure and AWS resources that comprise a functioning production API service.
This means that on every pull request we can run pulumi preview to get a delta of the changes that Pulumi will make to our AWS account on the next deployment.
To further reduce friction, I've configured the Pulumi GitHub application to run on my repository, so that the output of pulumi preview can be added directly to my pull request as a comment:
On merge to main
OpenAPI spec is automatically published
A workflow converts the latest OpenAPI spec into a Swagger UI site that details the various API endpoints, and expected request and response format:
Pulumi deployment to AWS
The latest changes are deployed to AWS via the pulumi update command. This means that what's at the HEAD of the repository's main branch is what's in production at any given time.
This also means that developers never need to:
Worry about maintaining the credentials for deployments themselves
Worry about maintaining the deployment pipeline themselves via scripts
Worry about their team members being able to follow the same deployment process
Worry about scheduling deployments for certain days of the week - they can deploy multiple times a day, with confidence
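The deployment step itself boils down to something like this sketch (the stack name is an assumption):
bash
# Select the stack that models the production environment
pulumi stack select production

# Apply the delta that pulumi preview reported, without an interactive prompt
pulumi up --yes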
Thanks for reading
If you're interested in automating more of your API development lifecycle, have a look at the workflows in the Pageripper repository.
And if you need help configuring CI/CD for the ultimate velocity and developer productivity, feel free to reach out! |
|
Write an article about "How to generate images with AI" | You can generate images using AI for free online through a variety of methods
I've been generating images using models such as StableDiffusion and DALLE for blog posts for months now. I can quickly produce
high-quality images that help tell my story.
This blog post will give you a lay of the land in what's currently possible, and point you to some resources for generating AI images whether you are as
technical as a developer or not - and whether you'd prefer to produce images via a simple UI or programmatically via an API.
In addition, I'll give you the minimum you need to understand about prompts and negative prompts and how to use them effectively.
DALLE-3 via Bing Create
Overall, this is probably the best option right now if you want high quality images without needing to pay.
You will need a Microsoft live account (which is free),
but otherwise you just log into bing.com/create and you write your prompt directly into the text input at the top:
This is using OpenAI's DALLE-3 model under the hood, which is noticeably better at converting the specific details and instructions in natural human language into an image that
resembles what the user intended.
I have been generally pretty impressed with its outputs, using them for recent blog post hero images as well as my own 404 page.
For example, I used Bing and DALLE-3 to generate the hero image for this post in particular via the following prompt:
Neon punk style. A close up of a hand holding a box and waving a magic wand over it. From the box, many different polaroid photos of different pixel art scenes are flying upward and outward.
Bing currently gives you 25 "boosts" per day, which appears to mean 25 priority image generation requests.
After you use them up, your requests might slow down as they fall toward the back of the queue.
Using DALLE-3 in this way also supports specifying the style of art you want generated upfront, such as "Pixel art style. Clowns juggling in a park".
Discord bots
Discord is the easiest and lowest friction way to get started generating images via StableDiffusion right now, especially if you're unwilling to pay for anything.
Stable Foundation is a popular Discord channel that hosts several different instances of the bot that you can ask for image generations via chat prompts.
Here's the direct link to the Stable Foundation discord invite page
This is a very handy tool if you don't need a ton of images or just want the occasional AI generated image with a minimum of setup or fuss required.
You can run discord in your browser, which makes things even
simpler as it requires no downloads.
There are some important caveats, though. Once you've generated a couple of images, you'll eventually ask for another and be told to chill out for a bit.
This is their Discord channel's way of rate-limiting you so that you don't cost them too much money and overwhelm
the service so that other users can't generate images.
And this is fair enough - they're providing you with free image generation services, after all.
Like other free online services, they also will not allow you to generate content that is considered not safe for work, or adult.
Also fair enough - it's their house, their rules - but occasionally you'll run into slight bugs with the NSFW content detector that will incorrectly flag your innocent image prompt as resulting in NSFW content even when you didn't want it to,
which can lead to failed generations and more wasted time. If you want total control over your generations, you need to go local and use a tool like AUTOMATIC1111, mentioned below.
Finally, because it's a Discord channel that anyone can join, when you ask the bot for your images and the bot eventually returns them, everyone else in the channel can see your requests and your generated images and could download them if they
wanted to.
If you are working on a top-secret project or you just don't want other people knowing what you're up to, you'll want to look into AUTOMATIC1111 or other options for running image generation models locally.
Replicate.com
Replicate is an outstanding resource for technical and non-technical folks alike.
It's one of my favorite options and I use both their UI for quick image generations when I'm writing content, and I use their
REST API in my Panthalia project which allows me to start blog posts by talking into my phone and request images via StableDiffusion XL.
Replicate.com hosts popular machine learning and AI models and makes them available through
a simple UI that you can click around in and type image requests into, as well as a REST API for developers to integrate into their applications.
Replicate.com is one of those "totally obvious in retrospect" ideas: with the explosion of useful machine learning models, providing a uniform interface to running those models
easily was pretty brilliant.
To use Replicate go to replicate.com and click the Explore button to see all the models you can use. You'll find more than just image generation models,
but for the sake of this tutorial, look for StableDiffusionXL.
Once you're on the StableDiffusionXL model page, you can enter the prompt for the image you want to generate. Here's an example of a simple prompt that works well:
Pixel art style. Large aquarium full of colorful fish, algae and aquarium decorations. Toy rocks.
If you're a developer and you don't feel like wrangling Python models into microservices or figuring out how to properly Dockerize StableDiffusion, you can take advantage of Replicate's REST API, which is truly a delight, from experience:
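For example, creating a StableDiffusionXL prediction is a single HTTP call. Here's a sketch - the version hash is a placeholder you'd copy from the current SDXL model page, not a real value:
bash
# Create a prediction via Replicate's REST API;
# REPLICATE_API_TOKEN comes from your account settings
curl -s -X POST https://api.replicate.com/v1/predictions \
  -H "Authorization: Token $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "version": "<sdxl-version-hash>",
    "input": {
      "prompt": "Pixel art style. Large aquarium full of colorful fish, algae and aquarium decorations."
    }
  }'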
I have generated a ton of images via Replicate every month for the past several months and the most they've charged me is $2 and some change. Highly recommended.
AUTOMATIC1111
This open-source option requires that you be comfortable with GitHub and git at a minimum, but it's very powerful because it allows you to run StableDiffusion, as well as checkpoint models based on StableDiffusion,
completely locally.
As in, once you have this up and running locally using the provided script, you visit the UI on localhost and you can then pull your ethernet cord out of your laptop, turn off your WiFi
card's radio and still generate images via natural language prompts locally.
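If you want to try it, getting started looks roughly like this sketch (assuming a Linux machine with git and Python already installed):
bash
# Clone the project and launch the web UI using its provided script
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
./webui.sh
# Then visit http://127.0.0.1:7860 in your browser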
There are plenty of reasons why you might want to generate images completely locally without sending data off your machine, which we won't get into right now.
AUTOMATIC1111 is an open-source project, which means that it's going to have some bugs, but there's also a community of users who are actively engaging with the project, developers who are fixing those bugs regularly,
and plenty of GitHub issues and discussions where you can find fellow users posting workarounds and fixes for common problems.
The other major benefit of using this tool is that it's completely free.
If your use case is either tricky to capture the perfect image for, or if it necessitates you generating tons of images over and over again,
it may be worth the time investment to get this running locally and learn how to use it.
AUTOMATIC1111 is also powerful because it allows you to use LoRA and LyCORIS models to essentially fine-tune whichever base model you're using to further customize your final image outputs.
LoRA (Low-Rank Adaptation) models are smaller versions of Stable Diffusion models designed to apply minor alterations to standard checkpoint models.
For example, there might be a LoRA model for Pikachu,
making it easier to generate scenes where Pikachu is performing certain actions.
The acronym LyCORIS stands for "Lora beYond COnventional methods, Other Rank adaptation Implementations for Stable diffusion."
Unlike LoRA models, LyCORIS encompasses a variety of fine-tuning methods.
It's a project dedicated to exploring diverse ways of parameter-efficient fine-tuning on Stable Diffusion via different algorithm implementations.
If you want to go deeper into understanding the current state of AI image generation via natural language, as well as checkpoint models, LoRA and LyCORIS models and similar techniques for getting specific outputs,
AUTOMATIC1111 is the way to go.
If you are working with AUTOMATIC1111, one of the more popular websites for finding checkpoint, LoRA and LyCORIS models is civit.ai, which hosts a vast array of both SFW and NSFW models contributed by the community.
Prompt basics
Prompting is how you ask the AI model for an image in natural human language, like "Pixel art style. Aquarium full of colorful fish, plants and aquarium decorations".
Notice in the above examples that I tend to start by describing the style of art that I want at the beginning of the prompt, such as "Pixel art style" or "Neon punk style".
Some folks use a specific artist or photographer's name if they want the resulting image to mirror that style, which will work if the model has been trained on that artist.
Sometimes, results you'll get back from a given prompt are pretty close to what you want, but for one reason or another the image(s) will be slightly off.
You can actually re-run
generation with the same prompt and you'll get back slightly different images each time due to the random value inputs that are added by default on each run.
Sometimes, it's better to modify your prompt and try to describe the same scene or situation in simpler terms.
Adding emphasis in StableDiffusion image generation prompts
For StableDiffusion and StableDiffusionXL models in particular, there's a trick you can use when writing out your prompt to indicate that a particular phrase or feature is more important and should be given more "weight" during image generation.
Adding parends around a word or phrase increases its weight relative to other phrases in your prompt, such as:
Pixel art style. A ninja running across rooftops ((carrying a samurai sword)).
You can use this trick in both StableDiffusion and StableDiffusionXL models, and you can use (one), ((two)) or (((three))) levels of parends, according to my testing, to signify that something is more important.
Negative prompts
The negative prompt is your opportunity to "steer" the model away from certain features or characteristics you're getting in your generated images that you don't want.
If your prompt is generating images close to what you want, but you keep getting darkly lit scenes or extra hands or limbs, sometimes adding phrases like "dark", "dimly lit", "extra limbs" or "bad anatomy"
can help.
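For example, if a portrait prompt like "Pixel art style. A portrait of a knight holding a sword" keeps producing mangled hands, re-running it with a negative prompt of "dark, dimly lit, blurry, extra limbs, bad anatomy" will often steer the model back toward what you wanted.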
Why generate images with AI?
Neon punk style.
An android artist wearing a french beret, sitting in the greek thinker position, and staring at a half-finished canvas of an oil painting landscape.
In a french loft apartment with an open window revealing a beautiful cityscape.
My primary motivation for generating images with AI is that I write a ton of blog posts both in my free time and as part of my day job, and I want high-quality eye candy
to help attract readers to click and to keep them engaged with my content for longer.
I also find it to be an absolute blast to generate Neon Punk and Pixel art images to represent even complex scenarios I'm writing about - so it increases my overall enjoyment of the creative process itself.
I have visual arts skills and I used to make assets for my own posts or applications with Photoshop or Adobe Illustrator - but using natural language to describe what I want is about a thousand times faster and
certainly less involved.
I've gotten negative comments on Hacker News before (I know, it sounds unlikely, but hear me out) over my use of AI-generated images in my blog posts, but in fairness to those commenters who didn't feel the need
to use their real names in their handles, they woke up and started their day with a large warm bowl of Haterade.
I believe that the content I produce is more interesting overall because it features pretty images that help to tell my overall story. |
|
Write an article about "Pinecone AWS Reference Architecture Technical Walkthrough" | export const href = "https://pinecone.io/learn/aws-reference-architecture"
I built Pinecone's first AWS Reference Architecture using Pulumi.
This is the seventh article I wrote while working at Pinecone:
Read the article |
|
Write an article about "How I keep my shit together" | I've been working in intense tech startups for the past {RenderNumYearsExperience()} years. This is what keeps me healthy and somewhat sane.
The timeline below describes an ideal workday. Notice there's a good split between the needs of my body and mind and the needs of my employer. After 5 or 6PM, it's family time.
Beneath the timeline, I explain each phase and why it works for me.
Wake up early
I tend to be an early riser, but with age and additional responsibilities it's no longer a given that I'll spring out of bed at 5AM.
I set a smart wake alarm on my fitbit which attempts to rouse me when I'm
already in light sleep as close to my target alarm time as possible.
The more time I give myself in the morning for what is important to me, the better my day tends to go. For the past two jobs now I've used this time to read, sit in the sun, meditate, drink my coffee, and hack on stuff that I care about like my many side projects.
Get sunlight
This helps me feel alert and gets my circadian cycle on track.
Vipassana meditation
I sit with my eyes closed, noticing and labeling gently: inhaling, exhaling, thinking, hearing, feeling, hunger, pain, fear, thinking, etc.
Metta meditation
I generate feelings of loving kindness for myself, visualizing myself feeling safe, healthy, happy and living with ease.
This one I may do alongside a YouTube video.
Manoj Dias of Open has a great one.
Coffee and fun
Some days, I'll ship a personal blog post, finish adding a feature to one of my side projects, read a book, or work on something that is otherwise important to me.
First block of work and meetings
Depending on the day, I'll have more or less focus time or meetings. Sometimes I block out my focus time on my work calendar to help others be aware of what I'm up to and to keep myself focused.
I'll do open-source work, write blog posts, create videos, attend meetings, or even do performance analysis on systems and configure a bunch of alerting services to serve as an SRE in a pinch, in my current role as a staff developer advocate at Pinecone.io.
I work until noon or 1pm before stopping to break my fast.
Break my fast
I eat between the hours of noon and 8pm. This is the form of intermittent fasting that best works for me.
A few years ago, a blood panel showed some numbers indicating I was heading toward a metabolic syndrome I had no interest in acquiring, so I follow this protocol and eat mostly vegan but sometimes vegetarian (meaning I'll have cheese in very sparing amounts occasionally).
Sometimes I'll eat fish and sometimes I'll even eat chicken, but for the most part I eat vegan.
In about 3 months of doing this, an updated blood panel showed I had completely reversed my metabolic issues.
In general, I try to follow Michael Pollen's succinct advice: "Eat food.
Not too much.
Mostly plants".
Long walk
I've reviewed the daily habits of a slew of famous creatives from the past, from sober scientists to famously drug-using artists and every combination in between.
One thing that was common amongst most of them is that they took two or more longer walks during the day.
I try to do the same.
I find that walking is especially helpful if I've been stuck on something for a while or if I find myself arm-wrestling some code, repository or technology that won't cooperate the way I initially thought it should.
It's usually within the first 20 minutes of the walk that I realize what the issue is or at least come up with several fresh avenues of inquiry to attempt when I return, plus I get oxygenated and usually find myself in a better mood when I get back.
I carry my son in my arms as I walk, talking to him along the way.
Ice bath
This is from Wim Hof, whose breathing exercises I also found helpful.
I started doing cold showers every morning and tolerated them well and found they gave me a surge in energy and focused attention, so I ended up incrementally stepping it up toward regular ice baths.
First I bought an inflatable ice bath off Amazon and would occasionally go to the store and pick up 8 bags of ice and dump them into a tub full of hose water.
I'd get into the bath for 20 minutes, use the same bluetooth mask I use for sleep and play a 20 minute yoga nidra recording.
The more I did this, the more I found that ice baths were for me.
They not only boosted my energy and focus but also quieted my "monkey mind" as effectively as a deep meditative state that normally takes me more than 20 minutes to reach.
According to Andrew Huberman, the Stanford professor of neurology and ophthalmology who runs his own popular podcast, cold exposure of this kind can increase your available dopamine levels by 2x, which is similar to what cocaine would do, but for 6 continuous hours.
I've never tried cocaine so I can't confirm this from
experience, but I can say that when I get out of a 20 minute ice bath I'm less mentally scattered and I feel like I have plenty of energy to knock out the remainder of my workday.
Now, I produce my own ice with a small ice machine and silicon molds I fill with hose water and then transfer into a small ice chest.
Long walk at end of day and family time
Usually, working remotely allows me to be present with my family and to end work for the day between 4 and 6pm depending on what's going on.
We like to take a long walk together before returning to settle in for the night.
Sleep
I try to get to sleep around 11pm but that tends to be aspirational.
I use the manta bluetooth sleep mask because it helps me stay asleep longer in the morning as I've found I'm very sensitive to any light.
I connect it to Spotify and play a deep sleep playlist without ads that is 16 hours long.
I turn on do not disturb on my phone.
Sometimes if my mind is still active I'll do breath counting or other breathing exercises to slow down. |
|
Write an article about "Building data-driven pages with Next.js" | ;
I've begun experimenting with building some of my blog posts - especially those that are heavy on data, tables, comparisons and multi-dimensional considerations - using scripts, JSON and home-brewed schemas.
Table of contents
What are data-driven pages?
I'm using this phrase to describe pages or experiences served up from your Next.js project that you compile rather than edit.
Whereas you might edit a static blog post to add new information, with a data-driven page you would update the data-source and then run the associated build process, resulting
in a web page you serve to your users.
Why build data driven pages?
In short, data driven pages make it easier to maintain richer and more information-dense experiences on the web.
Here's a couple of reasons I like this pattern:
There is more upfront work to do than just writing a new MDX file for your next post, but once the build script is stable, it's much quicker to iterate (Boyd's Law)
By iterating on the core data model expressed in JSON, you can quickly add rich new features and visualizations to the page such as additional tables and charts
If you have multiple subpages that all follow a similar pattern, such as side by side product review, running a script one time is a lot faster than making updates across multiple files
You can hook your build scripts either into npm's prebuild hook, which runs before npm run build is executed, or to the pnpm build target, so that your data driven pages are freshly rebuilt with no additional effort on your part
This pattern is a much more sane way to handle data that changes frequently or a set of data that has new members frequently.
In other words, if you constantly have to add Product or Review X to your site, would you rather manually re-create HTML sections by hand or add a new object to your JSON?
You can drive more than one experience from a single data source: think a landing page backed by several detail pages for products, reviews, job postings, etc.
How it works
The data
I define my data as JSON and store it in the root of my project in a new folder.
For example, here's an object that defines GitHub's Copilot AI-assisted developer tool for my giant AI-assisted dev tool comparison post:
javascript
"tools": [
{
"name": "GitHub Copilot",
"icon": "@/images/tools/github-copilot.svg",
"category": "Code Autocompletion",
"description": "GitHub Copilot is an AI-powered code completion tool that helps developers write code faster by providing intelligent suggestions based on the context of their code.",
"open_source": {
"client": false,
"backend": false,
"model": false
},
"ide_support": {
"vs_code": true,
"jetbrains": true,
"neovim": true,
"visual_studio": true,
"vim": false,
"emacs": false,
"intellij": true
},
"pricing": {
"model": "subscription",
"tiers": [
{
"name": "Individual",
"price": "$10 per month"
},
{
"name": "Team",
"price": "$100 per month"
}
]
},
"free_tier": false,
"chat_interface": false,
"creator": "GitHub",
"language_support": {
"python": true,
"javascript": true,
"java": true,
"cpp": true
},
"supports_local_model": false,
"supports_offline_use": false,
"review_link": "/blog/github-copilot-review",
"homepage_link": "https://github.com/features/copilot"
},
...
]
As you can see, the JSON defines every property and value I need to render GitHub's Copilot in a comparison table or other visualization.
The script
The script's job is to iterate over the JSON data and produce the final post, complete with any visualizations, text, images or other content.
The full script is relatively long. You can read the full script in version control, but in the next sections I'll highlight some of the more interesting parts.
Generating the Post Content
One of the most important parts of the script is the function that generates the post content. Here's a simplified version of that function:
javascript
const generatePostContent = (categories, tools, existingDate) => {
  const dateToUse = existingDate || `${new Date().getFullYear()}-${new Date().getMonth() + 1}-${new Date().getDate()}`;

  const toolTable = generateToolTable(tools);

  const categorySections = categories.map((category) => {
    return generateCategorySection(category);
  }).join('\n');

  const tableOfContents = categories.map((category) => {
    // ... generate table of contents ...
  }).join('\n');

  // ... interpolate dateToUse, tableOfContents, toolTable and categorySections
  // into the final page template (elided here) and return it ...
  return content;
}

fs.writeFileSync(filename, content, { encoding: 'utf-8', flag: 'w' });

console.log(`Generated content for "The Giant List of AI-Assisted Developer Tools Compared and Reviewed" and wrote to ${filename}`);
This code does a few important things:
It determines the correct directory and filename for the generated page based on the project structure.
It checks if the file already exists and, if so, extracts the existing date from the page's metadata. This allows us to preserve the original publication date if we're regenerating the page.
It generates the full page content using the generatePostContent function.
It creates the directory if it doesn't already exist.
It writes the generated content to the file.
Automating the Build Process with npm and pnpm
One of the key benefits of using a script to generate data-driven pages is that we can automate the build process to ensure that the latest content is always available.
Let's take a closer look at how we can use npm and pnpm to run our script automatically before each build.
Using npm run prebuild
In the package.json file for our Next.js project, we can define a "prebuild" script that will run automatically before the main "build" script:
json
{
"scripts": {
"prebuild": "node scripts/generate-ai-assisted-dev-tools-page.js",
"build": "next build",
...
}
}
With this setup, whenever we run npm run build to build our Next.js project, the prebuild script will run first, executing our page generation script and ensuring that the latest content is available.
Using pnpm build
If you're using pnpm instead of npm, then the concept of a "prebuild" script no longer applies, unless you enable the enable-pre-post-scripts option in your .npmrc file as noted here.
If you decline setting this option, but still need your prebuild step to work across npm and pnpm, then you can do something gross like this:
json
{
"scripts": {
"prebuild": "node scripts/generate-ai-assisted-dev-tools-page.js",
"build": "npm run prebuild && next build",
...
}
}
Why automation matters
By automating the process of generating our data-driven pages as part of the build process, we can ensure that the latest content is always available to our users.
This is especially important for content that changes frequently. With this approach, we don't have to remember to run the script manually before each build - it happens automatically as part of the standard build process.
This saves time and reduces the risk of forgetting to update the content before deploying a new version of the site.
Additionally, by generating the page content at build time rather than at runtime, we can improve the performance of our site by serving static HTML instead of dynamically generating the page on each request.
This can be especially important for pages that are expensive to render or that receive a lot of traffic.
Key Takeaways
While the full script is quite long and complex, breaking it down into logical sections helps us focus on the key takeaways:
Generating data-driven pages with Next.js allows us to create rich, informative content that is easy to update and maintain over time.
By separating the data (in this case, the categories and tools) from the presentation logic, we can create a flexible and reusable system for generating pages based on that data.
Using a script to generate the page content allows us to focus on the high-level structure and layout of the page, while still providing the ability to customize and tweak individual sections as needed.
By automating the process of generating and saving the page content, we can save time and reduce the risk of errors or inconsistencies.
While the initial setup and scripting can be complex, the benefits in terms of time savings, consistency, and maintainability are well worth the effort. |
|
Write an article about "Writing code on Mac or Linux but testing on Windows with hot-reloading" | Read article |
|
Write an article about "Warp AI terminal review" | Warp brings AI assistance into your terminal to make you more efficient
Table of contents
Warp is an AI-assisted terminal that speeds you up and helps you get unblocked with LLM-suggested command completion,
custom workflows and rich theme support.
Unlike command line tools like Mods which can be mixed and matched in most environments,
Warp is a full on replacement for your current terminal emulator.
The good
It works
The core experience works out of the box as advertised: it's pretty intuitive to get help with complex commands, ask about errors and get
back useful responses that help you move forward more quickly.
It's pretty
It's also great to see first-class theming support, and I will say that Warp looks great out of the box - even using the default theme.
The painful
No tmux compatibility currently
I'm an avowed tmux user. I love being able to quickly section off a new piece of screen real estate and have it be a full fledged terminal
in which I can interact with Docker images or SSH to a remote server or read a man page or write a script.
I like the way tmux allows me to flow my workspace to the size of my current task - when I'm focused on writing code or text I can zoom in
and allow that task to take up my whole screen.
When I need to do side by side comparisons or plumb data between files and projects, I can open up as many panes as I need to get the job done.
Unfortunately, at the time of writing, Warp does not support Tmux and it's not clear how far away
that support will be.
Sort of awkward to run on Linux
I have another quibble about the default experience of running warp on Linux currently:
It's currently a bit awkward, because I launch the warp-terminal binary from my current terminal emulator, meaning that I get a somewhat janky experience and an extra floating window to manage.
Sure, I could work around this - but the tmux issue prevents me from making the jump to warp as my daily driver.
You need to log in to your terminal
I know this will bother a lot of other folks even more than it bugs me, but one of the things I love about my current workflow is that hitting my control+enter hotkey gives me a fresh
terminal in under a second - I can hit that key and just start typing.
Warp's onboarding worked fine - there were no major issues or dark patterns - but needing to log into my terminal gives me pause, and it makes me wonder how gracefully Warp degrades when it cannot phone home.
Getting locked out of your terminal due to a remote issue would be a bridge too far for many developers.
Looking forward
I'm impressed by Warp's core UX and definitely see the value.
While I do pride myself on constantly learning more about the command line,
terminal emulators and how to best leverage them for productivity, it's sort of a no-brainer to marry
the current wave of LLMs and fine-tuned models with a common developer pain point: not knowing how to fix something in their terminal.
Not every developer wants to be a terminal nerd - but they do want to get stuff done more efficiently and with less suffering than before.
I can see Warp being a great tool for helping folks accomplish that.
Check out my detailed comparison of the top AI-assisted developer tools. |
|
Write an article about "CatFacts rewrite in Golang" | Visit the repo on GitHub
I rewrote CatFacts from scratch in Golang just for the practice. I wanted an excuse to understand Go modules.
In keeping with the spirit of going way over the top, this service is deployed via Kubernetes on Google Cloud, for the most resilient pranking service possible.
Read my complete write-up on Medium
I wrote up a technical deep dive on this project on Medium. You can check it out here. |
|
Write an article about "Git-xargs allows you to run commands and scripts against many Github repos simultaneously" | Demo
Intro
Have you ever needed to add a particular file across many repos at once?
Or to run a search and replace to change your company or product name across 150 repos with one command?
What about upgrading Terraform modules to all use the latest syntax?
How about adding a CI/CD configuration file, if it doesn't already exist, or modifying it in place if it does, but only on a subset of repositories you select?
You can handle these use cases and many more with a single git-xargs command.
Just to give you a taste, here's how you can use git-xargs to add a new file to every repository in your Github organization:
bash
git-xargs \
--branch-name add-contributions \
--github-org my-example-org \
--commit-message "Add CONTRIBUTIONS.txt" \
touch CONTRIBUTIONS.txt
In this example, every repo in the my-example-org GitHub org will have a CONTRIBUTIONS.txt file added, and an easy-to-read report will be printed to STDOUT:
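The search-and-replace use case mentioned above follows the same pattern. Here's an illustrative sketch - the org, file and product names are placeholders - that rewrites a product name across every repo in an organization:

```bash
git-xargs \
  --branch-name rename-product \
  --github-org my-example-org \
  --commit-message "Rename OldProduct to NewProduct" \
  sed -i 's/OldProduct/NewProduct/g' README.md
```

git-xargs clones each repository, runs the command you pass it, commits any resulting changes to the named branch and opens a pull request per affected repo.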
Try it out
git-xargs is free and open-source - so you can grab it here: https://github.com/gruntwork-io/git-xargs
Learn more
Read the introductory blog post to better understand what git-xargs can do and its genesis story. |
|
Write an article about "How do you write so fast?" | ;
How do I write so fast?
Occasionally someone will ask me how I am able to write new content so quickly. This is my answer.
There are two reasons I'm able to write quickly:
1. I write in my head
I mostly write new articles in my head while I'm walking around doing other things. This means that by the time I am back at a computer, I usually just need to type in what I've already hashed out.
2. I automate and improve my process constantly
The fact that I'm constantly writing in my own head means that my real job is rapid, frictionless capture.
When I have an idea that I want to develop into a full post, I will capture it in one of two ways:
I'll use Obsidian, my second brain, and add a new note to my Writing > In progress folder
I'll use my own tool, Panthalia, (intro and update), which allows me to go from idea fragment to open pull request in seconds
I've found there's a significant motivational benefit to only needing to finish an open pull request versus writing an entire article from scratch.
Going from post fragment (or, the kernel of the idea) to open pull request reduces the perceived lift of the task. |
|
Write an article about "Keep Calm and Ship Like" | ;
Catching a breath
I want to reflect on what I accomplished last year and what I consider my biggest wins:
I netted hundreds of new email newsletter subscribers, LinkedIn followers, and YouTube subscribers.
I open-sourced several projects and published many articles and YouTube demos and tutorials that I'm proud of.
I landed a Staff Developer Advocate role at Pinecone.io, where I shipped a separate set of articles on Generative AI and machine learning, plus webinars, open-source improvements to our clients and applications, and Pinecone's first AWS Reference Architecture in Pulumi.
The beginning of my "Year in AI"
In January 2023, I continued doing two things I had been doing for years, namely: open-sourcing side projects and tools and writing or making videos about them.
However, for some reason I felt a surge of enthusiasm around sharing my projects, perhaps because I was beginning to experiment with LLMs and realizing the productivity and support gains they could unlock.
So, I put a little extra polish into the blog posts and YouTube videos that shared my latest experiments with ChatGPT.
Early in the year, I wrote Can ChatGPT4 and GitHub Copilot help me produce a more complete side project more quickly?.
As I wrote in maintaining this site no longer fucking sucks, I also re-did this site for the Nth time, this time using the latest Next.js, Tailwind and a Tailwind UI template, that I promptly hacked up to my own needs, and deployed my new site to Vercel.
Here's my commit graph for the year on my portfolio project:
Which makes it a little easier to believe that it was only 9 months ago that I started building this version of the site, as shown in this incredibly hard-to-read screenshot of my first commit on the project:
My blogging finally got me somewhere
Writing about what I learn and ship in tech has been a long-running habit. In the beginning of the year, I was working at Gruntwork.io, doing large-scale AWS deployments for customers using Terraform, but as I wrote in You get to keep the neural connections, it came time for the next adventure.
And as I wrote about in Run your own tech blog, one of the key benefits of doing a lot of writing about your open-source projects and learnings is that you have high quality work samples ever at the ready.
This year, in the middle of working an intense job as a tech lead, I managed to do some self-directed job hunting in a down market, start with 5 promising opportunities and ultimately winnow the companies I wanted to work
for down to two.
I received two excellent offers in hand at the same time and was able to take my pick: I chose to start at Pinecone as a Staff Developer Advocate.
In the break between Gruntwork.io and Pinecone.io, I took one week to experiment with Retrieval Augmented Generation and built a Michael Scott from the office chatbot.
I open-sourced the data prep and quality testing Jupyter Notebooks I built for this project
plus the chatbot Next.js application itself, as I wrote about in my Office Oracle post.
I shipped like crazy at Pinecone
Articles
Once I started at Pinecone, I shipped a bunch of articles on Generative AI, machine learning and thought pieces on the future of AI and development:
Retrieval Augmented Generation
AI Powered and built with...JavaScript?
How to use Jupyter Notebooks to do Machine Learning and AI tasks
The Pain and Poetry of Python
Making it easier to maintain open-source projects with CodiumAI and Pinecone
Videos
Semantic Search with TypeScript and Pinecone
Live code review: Pinecone Vercel starter template and RAG - Part 1
Live code review: Pinecone Vercel starter template and RAG - Part 2
What is a Vector Database?
Deploying the Pinecone AWS Reference Architecture - Part 1
Deploying the Pinecone AWS Reference Architecture - Part 2
Deploying the Pinecone AWS Reference Architecture - Part 3
How to destroy the Pinecone AWS Reference Architecture
How to deploy a Jump host into the Pinecone AWS Reference Architecture
Projects
Introducing Pinecone's AWS Reference Architecture with Pulumi
Exploring Pinecone's AWS Reference Architecture
My personal writing was picked up, more than once
This was equally unexpected, thrilling and wonderful.
I did not know these people or outlets, but they found something of value in what I had to say.
Each of these surprises netted me a group of new newsletter and YouTube subscribers.
Daniel Messier included my rant Maintaining this site fucking sucks in his Unsupervised Learning newsletter
The Changelog picked up my Run your own tech blog post
Habr picked up and translated my First see if you've got the programming bug post into Russian. This resulted in about 65 new YouTube subscribers and new readers from Russian-speaking countries.
In addition, my programming mentor, John Arundel graciously linked to my blog when he published the blog post series I lightly collaborated on with him (He did the lion's share of the work).
You can read his excellent series, My horrible career, here.
The new subscribers and followers kept coming
My site traffic saw healthy regular growth and some spikes...
As I hoped, regularly publishing a stream of new content to my site and selectively sharing some of them on social media led to more and more organic traffic and a higher
count of indexed pages in top search engines.
By continuously building and sharing valuable content, tools and posts, I intend to continuously build organic traffic to my site, while eventually adding offerings like courses, training, books and more.
EmailOctopus Newsletter cleared 200...
When I rebuilt the latest version of my portfolio site, I wired up a custom integration with EmailOctopus so that I could have total control over how my Newsletter experience looks and behaves within my site.
In a way, this is the channel I'm most excited about because it's the channel I have the most control over.
These folks signed up directly to hear from me, so growing this audience is critical for reaching my goals.
YouTube went from 0 to over 150...
I tend to do demos and technical walkthroughs on my YouTube channel. The various unexpected re-shares of my content to other networks led to a spike in YouTube subscribers.
I went from effectively no YouTube subscribers at the beginning of the year to 156 at the end of 2023.
I got a surprise hit on my video about performing GitHub pull request reviews entirely in your terminal.
More evidence that you should constantly publish what you find interesting, because you never know which topic or
video is going to be a hit.
LinkedIn
LinkedIn remained the most valuable channel for sharing technical and thought leadership content with my audience.
I saw the highest engagement on this platform, consistently since the beginning of the year.
Reddit
Reddit was a close second to LinkedIn, or perhaps slightly ahead of it, judging solely from referral traffic.
I found that:
longform technical tutorials tended to perform best on Reddit
the community is suspicious even when you're just giving away a tutorial or sharing something open-source
Reddit posts that do well tend to deliver steady trickles of traffic over time
Consulting wins
I started being tapped for my insight into Generative AI, developer tooling and vector databases.
Initially, this came in the form of various think tanks and research firms asking me to join calls as an expert, and
to give my opinions and share my experiences as an experienced software developer experimenting with the first raft of AI-assisted developer tooling.
Realizing the opportunity at hand, I quickly gave my about page a face lift, making it more clear that I do limited engagements for my key areas of interest.
By the end of the year, I had successfully completed several such engagements, but was also beginning to see an uptick in direct outreach, not mediated by any third party.
Personal wins
There were many reasons I wanted to work at Pinecone as a developer advocate.
One of those many reasons was that the role involved some flying and some public speaking, both of which I have some phobia around.
I intentionally wanted to go to the places that scare me, and I am pleased to report that even after just a couple of sessions of exposure therapy this last year, I'm already feeling better about both.
I did some talks, webinars and conferences this year in Atlanta, San Francisco and New York, and they all went really well, resulting in new contacts, Pinecone customers, followers and follow-up content.
Takeaways and learnings
Publish. Publish. Publish. You cannot know in advance what will be successful and what will fall flat. Which articles will take off and which will get a few silent readers.
I am regularly surprised by how well certain posts, videos and projects do, and which aspects of them folks find interesting, and how poorly certain projects do, despite a great deal of preparation.
Build self-sustaining loops
I use what I learn at work to write content and build side projects that people will find interesting.
I use what I learn in my side project development at work - constantly. Side projects have been an invaluable constant laboratory in which to expand my skill set and experience.
I use my skill sets and experience to help other people, including clients and those looking for assistance in development, understanding industry trends, and building better software.
Rinse and repeat constantly for many years, with minimal breaks in between. |
|
Write an article about "Programmer emotions" | | When | I feel |
|---|---|
| My program compiles after an onerous refactoring | elation |
| People add meetings to my calendar to talk through deliverables they haven't thought through or locked down yet| like my precious focus time is being wasted |
| Another developer says something I created was helpful to them| like a link in a long chain stretching from the past into the future|
| Someone downloads my code, tool or package | absolutely victorious |
| I ship something | absolutely victorious |
| My pull request is merged | absolutely victorious | |
|
Write an article about "Codeium vs ChatGPT" | Codeium began its life as an AI developer tool that offered code-completion for software developers, and
ChatGPT was originally a general purpose AI language model that could assist with a variety of tasks.
But as I write this post on February 2nd, 2024, many of these products' unique capabilities are beginning to
overlap. What are the key differences and what do you need to know in order to get the most out of them both?
When you're finished reading this post you'll understand why these tools are so powerful, which capabilities remain unique to each,
and how you can use them to level up your development or technical workflow.
Codeium vs ChatGPT - capabilities at a glance
| | Code generation | Image generation | Chat capabilities | Code completion | General purpose chat | Runs in IDEs | Free |
|---|---|---|---|---|---|---|---|
| Codeium | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |
| ChatGPT | ✅ | ✅ | ✅ | ✅ | ✅ | ✴️ | ❌ |
Legend
| Supported | Not supported | Requires extra tooling |
|---|---|---|
| ✅ | ❌ | ✴️ |
Let's break down each of these attributes in turn to better understand how these two tools differ:
Code generation
Both Codeium and ChatGPT are capable of advanced code generation, meaning that developers can ask the tool to write code in most any programming language and get back something pretty reasonable
most of the time.
For example, in the browser interface of ChatGPT 4, you could ask for a Javascript class that represents a user for a new backend system you're writing and get something
decent back, especially if you provide notes and refinements along the way.
For example, here's an actual conversation with ChatGPT 4 where I do just that.
Unless you're using a third party wrapper like a command line interface (CLI) or IDE plugin that calls the OpenAI API, it's slightly awkward to do this in ChatGPT's browser chat window -
because you're going to end up doing a lot of copying from the browser and judiciously pasting into your code editor.
Even with this limitation, I've still found using ChatGPT 4 to discuss technical scenarios as I work to be a massive accelerator.
Runs in IDEs
Codeium's advantage here is that it tightly integrates with the code editors that developers already use, such as VSCode and Neovim.
Think of Codeium as a code assistant that is hanging out in the background of whatever file you happen to be editing at the moment.
It can read all of the text and code in the file to build up context.
As you type, you will begin to see Codeium suggestions, which are written out in a separate color (light grey by default) ahead of your cursor.
As the developer, if you feel that the suggestion
is a good one, or what you were about to type yourself, you hit the hotkeys you've configured to accept the suggestion and Codeium writes it out for you, saving you time.
In a good coding or documentation writing session, where Codeium is correctly following along with you and getting the right context, these many little autocompletions add up to saving you
quite a bit of time.
Like GitHub CoPilot, you can also write out a large comment block describing the code or functionality you want beneath it, and that is usually more than enough for Codeium to outright write your
function, method or class as you've described it, which can also be very accelerating, e.g.,:
// This API route accepts the product slug and returns product details
// from the database, or an error if the product does not exist
Once you move your cursor below this, Codeium will start writing out the code necessary to fulfill your description.
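To make that concrete, here's the flavor of completion you might see for that comment block. This is an illustrative sketch assuming a Next.js-style API route and a hypothetical getProductBySlug data-access helper, not Codeium's exact output:

```javascript
// This API route accepts the product slug and returns product details
// from the database, or an error if the product does not exist
export default async function handler(req, res) {
  const { slug } = req.query;
  // getProductBySlug is a hypothetical helper that queries your database
  const product = await getProductBySlug(slug);
  if (!product) {
    return res.status(404).json({ error: `Product ${slug} not found` });
  }
  return res.status(200).json(product);
}
```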
With some extra work, you can bring ChatGPT into your terminal or code editor
This is not to say that you can't get ChatGPT into your terminal or code editor - because I happen to use it there everyday. It just means you
need to leverage one of many third party tools that call OpenAI's API to do so.
My favorite of these is called mods.
This makes the full power of OpenAI's latest models, as well as many powerful local-only and open-source models, available
in your terminal where developers tend to live.
I can have it read a file and suggest code improvements:
cat path/to/file | mods "Suggest improvements to this code"
or assign it the kinds of tasks I previously would have had to stop and do manually:
ls -lh /local/dir | mods "These files are all too large and I want them
all converted to .webp. Write me a script that performs the
downsizing and conversion"
There are many community plugins for VSCode and Neovim that wrap the OpenAI API in a more complete way, allowing you to highlight code in your editor and have ChatGPT4 look at it, rewrite it, etc.
Is it free to use?
When you consider that it's possible to bring ChatGPT4 into your code editors and terminal with a little extra work, one of the key advantages that Codeium retains is its price.
I'm currently happy to pay $20 per month for ChatGPT Plus because I'm getting value out of it daily for various development tasks and for talking through problems.
But Codeium is absolutely free for individual developers, which is not to be overlooked, because the quality of its output is also very high.
What advantage does ChatGPT have over Codeium?
As of this writing, one of the most powerful things that ChatGPT can do that Codeium can't is rapidly create high quality images in just about any artistic style. Users describe the image
they want, such as:
"A bright and active school where several young hackers are sitting around working on computers while the instructor explains code on the whiteboard. Pixel art style."
Having an on-demand image generator that responds to feedback, has a wide array of artistic styles at its disposal and can more or less follow directions (it's definitely not perfect)
is a pretty incredible time-saver and assistant when you publish as much on the web as I do.
What about general purpose chat?
Up until recently, ChatGPT had the upper hand here. It's still one of the most powerful models available at the time of this writing, and it is not constrained to technical conversations.
In fact, one of my favorite ways to use it is as a tutor on some new topic I'm ramping up on - I can ask it complex questions to check my understanding and ask for feedback on the mental
models I'm building. Anything from pop culture to outer space, philosophy and the meaning of life are up for grabs - and you can have a pretty satisfying and generally informative discussion
with ChatGPT on these and many more topics.
Tools like Codeium and GitHub's CoPilot used to be focused on the intelligent auto-completion functionality for coders, but all of these "AI-assisted developer tools" have been scrambling to add
their own chat functionality recently.
Codeium now has free chat functionality - and from some initial testing, it does quite well with the kinds of coding assistant tasks I would normally delegate to ChatGPT:
Should you use Codeium or ChatGPT?
Honestly, why not both? As I wrote in Codeium and ChatGPT are all I need, these two tools are incredibly powerful on their own,
and they're even more powerful when combined.
I expect that over time we'll begin to see more comprehensive suites of AI tools and assistants that share context,
private knowledge bases and are explicitly aware of one another.
Until then, I'm getting great effect by combining my favorite tools in my daily workflow.
How do I use Codeium and ChatGPT together?
As I write this blog post on my Linux laptop in Neovim, I first tab over to Firefox to ask ChatGPT to generate me a hero image I can use in this blog post. I do this in the chat.openai.com
web interface, because that interface is tightly integrated with DALLE, OpenAI's image generating model.
I'll let it do a first few iterations, giving notes as we go, and as I write, until we
get the right image dialed in.
Meanwhile, as I write out this blog post in Neovim, Codeium is constantly suggesting completions, which is generally less useful when I'm writing prose, but very useful whenever I'm coding, writing
documentation, writing scripts, etc. |
|
Write an article about "How to Run a Quake 3 Arena Server in an AWS ECS Fargate Task" | {metadata.description}
Read article |
|
Write an article about "Why your AI dev tool startup is failing with developers" | A frustated senior developer trying our your improperly tested dev tool for the first time
When I evaluate a new AI-assisted developer tool, such as Codeium, GitHub CoPilot or OpenAI's ChatGPT4, this is the thought process I use to determine if it's something I can't live without or if it's not worth paying for.
Does it do what it says on the tin?
This appears simple and yet it's where most AI-assisted developer tools fall down immediately. Does your product successfully do what it says on the marketing site?
In the past year I've tried more than a few well-funded, VC-backed,
highly-hyped coding tools that claim to be able to generate tests, perform advanced code analysis, or catch security issues that simply do not run successfully when loaded in Neovim or vscode.
The two cardinal sins most AI dev tool startups are committing right now
Product developers working on the tools often test the "happy path" according to initial product requirements
Development teams and their product managers do not sit with external developers to do user acceptance testing
Cardinal sin 1 - Testing the "happy path" only
When building new AI developer tooling, a product engineer might use one or more test repositories or sample codebases to ensure their tool can perform its intended functionality, whether it's generating tests or finding bugs.
This is fine for getting started, but a critical error I've noticed many companies make is that they never expand this set of test codebases to proactively attempt to flush out their bugs.
This could also be considered laziness and poor testing practices, as it pushes the onus of verifying your product works onto your busy early adopters,
who have their own problems to solve.
Cardinal sin 2 - Not sitting "over the shoulder" of their target developer audience
The other cardinal sin I keep seeing dev tool startups making is not doing user acceptance testing with external developers.
Sitting with an experienced developer who is not on your product team and watching them struggle to use your product successfully is often painful and very
eye-opening, but failing to do so means you're pushing your initial bug reports off to chance.
Hoping that the engineers with the requisite skills to try your product are going to have the time and inclination to write you a detailed bug report after your supposed wonder-tool just
failed for them on their first try is foolish and wasteful.
Most experienced developers would rather move on and give your competitors a shot, and continue evaluating alternatives until they find a tool that works.
Trust me - when I was in the market for an AI-assisted video editor, I spent 4 evenings in a row trying everything from incumbents
like Vimeo to small-time startups before finding and settling on Kapwing AI, because it was the first tool that actually worked and supported my desired workflow. |
|
Write an article about "Weaviate vs Milvus" | Table of contents
Vector database comparison: Weaviate vs Milvus
This page contains a detailed comparison of the Weaviate and Milvus vector databases.
You can also check out my detailed breakdown of the most popular vector databases here.
Deployment Options
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Local Deployment | ✅ | ✅ |
| Cloud Deployment | ✅ | ❌ |
| On-Premises Deployment | ✅ | ✅ |
Scalability
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Horizontal Scaling | ✅ | ✅ |
| Vertical Scaling | ✅ | ❌ |
| Distributed Architecture | ✅ | ✅ |
Data Management
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Data Import | ✅ | ✅ |
| Data Update / Deletion | ✅ | ✅ |
| Data Backup / Restore | ✅ | ✅ |
Security
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Authentication | ✅ | ✅ |
| Data Encryption | ✅ | ❌ |
| Access Control | ✅ | ✅ |
Vector Similarity Search
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Distance Metrics | Cosine, Euclidean, Jaccard | Euclidean, Cosine, Jaccard |
| ANN Algorithms | HNSW, Beam Search | IVF, HNSW, Flat |
| Filtering | ✅ | ✅ |
| Post-Processing | ✅ | ✅ |
Integration and API
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Language SDKs | Python, Go, JavaScript | Python, Java, Go |
| REST API | ✅ | ✅ |
| GraphQL API | ✅ | ❌ |
| GRPC API | ❌ | ❌ |
Community and Ecosystem
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Open-Source | ✅ | ✅ |
| Community Support | ✅ | ✅ |
| Integration with Frameworks | ✅ | ✅ |
Pricing
| Feature | Weaviate | Milvus |
|---------|-------------|-------------|
| Free Tier | ❌ | ✅ |
| Pay-as-you-go | ❌ | ❌ |
| Enterprise Plans | ✅ | ✅ | |
|
Write an article about "Office Oracle - a complete AI Chatbot leveraging langchain, Pinecone.io and OpenAI" | What is this?
The Office Oracle AI chatbot is a complete AI chatbot built on top of langchain, Pinecone.io and OpenAI's GPT-3 model. It demonstrates how you can build a fully-featured chat-GPT-like experience
for yourself, to produce an AI chatbot with any identity, who can answer factually for any arbitrary corpus of knowledge.
For the purposes of demonstration, I used the popular Office television series, but this same stack and approach will work for AI chatbots who can answer for a company's documentation, or specific processes, products,
policies and more.
Video series
Be sure to check out my three-part video series on YouTube, where I break down the entire app end to end, and discuss the Jupyter notebooks and data science elements, in addition to the Vercel ai-chatbot template
I used and modified for this project:
AI Chatbots playlist on YouTube
Intro video and demo
Jupyter notebooks deep-dive
Next.js vercel template ai-chatbot deep-dive
Open source code
I open sourced the Jupyter notebooks that I used to prepare, sanitize and spot-check my data here:
Office Oracle Data workbench
Office Oracle Data test workbench
The data workbench notebook handles fetching, parsing and writing the data to local files, as well as converting the text to embeddings and upserting the vectors into the Pinecone.io vector database.
The test workbench notebook demonstrates how to create a streamlined test harness that allows you to spot check and tweak your data model without requiring significant development changes to your application layer.
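To give a sense of the retrieval side of this stack, here's a minimal JavaScript sketch using the OpenAI and Pinecone clients. The index name and question are placeholders, and the real app layers chat history, prompt templating and streaming on top:

```javascript
import OpenAI from 'openai';
import { Pinecone } from '@pinecone-database/pinecone';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const pinecone = new Pinecone(); // reads PINECONE_API_KEY from the environment

// Convert the user's question into an embedding vector
const { data } = await openai.embeddings.create({
  model: 'text-embedding-ada-002',
  input: 'What does Michael Scott think about parkour?',
});

// Find the transcript chunks most semantically similar to the question
const index = pinecone.index('office-oracle'); // hypothetical index name
const results = await index.query({
  vector: data[0].embedding,
  topK: 5,
  includeMetadata: true,
});

// These matches become the factual context handed to the chat model
console.log(results.matches.map((m) => m.metadata?.text));
```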
I also open sourced the Next.js application itself. |
|
Write an article about "First, find out if you've got the programming bug" | I'm thinking about learning to code. Which laptop should I get? Should I do a bootcamp? Does my child need special classes or prep in order to tackle a computer science degree?
A lot of different folks ask me if they should learn to code, if software development is a good career trajectory for them or their children, and what they need to study in school in order
to be successful.
Here's my advice in a nutshell
Before you should worry about any of that: your major, which school you're trying to get your kid into, which laptop you should purchase, you need to figure out if you (or your kid) have the "programming bug".
This will require a bit of exploration and effort on your part, but the good news is there's a ton of high quality and free resources online that will give you enough of a taste for coding and
building to help you determine if this is something worth pursuing as a career or hobby. I'll share some of my favorites in this post.
What is the programming bug?
"The programming bug" is the spark of innate curiosity that drives your learning forward. Innate meaning that it's coming from you - other people don't need to push you to do it.
In software development, coding, systems engineering, machine learning, data science; basically, in working with computers while also possibly working with people - there are periods of profound frustration and tedium, punctuated by anxiety and stress.
I have personally reached a level of frustration that brought
tears to my eyes countless times. If you pursue the path of a digital craftsperson, be assured that you will, too. Especially in the beginning. That's okay.
I also happen to think that being able to describe to machines of all shapes and sizes exactly what you want them to do in their own languages; to solve problems in collaboration with machines, and to be able to bring an idea from your imagination all the
way to a publicly accessible piece of software that people from around the world use and find utility or joy in - borders on magic.
The spark of curiosity allows you to continually re-ignite your own passion for the craft
In my personal experience, considering my own career, and also the folks I've worked with professionally who have been the most effective and resilient, the single determining criterion for success is
this innate curiosity and drive to continue learning and to master one's craft; curiosity in the field, in the tools, in the possibilities, in what you can build, share, learn and teach.
That's all well and good, but how do you actually get started?
Use free resources to figure out if you have the programming bug
Don't buy a new macbook. Don't sign up for a bootcamp. Not at first.
Use the many excellent free resources on the internet that are designed to help folks try out programming in many different languages and contexts.
Here are a few that I can recommend to get you started:
Exercism.io
Codewars
Codecademy
Edabit
Give the initial exercises a shot. It doesn't really matter what language you start with first, but if you have no clue, try Python, PHP, or JavaScript. When you come across a phrase or concept
you don't understand, try looking it up and reading about it.
It's key that none of these services require you to pay them anything to get started and get a taste for programming. You can do them in your browser on a weak, old computer or at the library or an
internet cafe, before shelling out for a fancy new laptop.
If it turns out you could go happily through the rest of your life without ever touching a keyboard again, you've lost nothing but a little time.
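If you'd like a taste right now, here's the flavor of an early exercise, written in JavaScript so you can paste it straight into your browser's developer console; the exact tasks vary by site, but they feel a lot like this:

```javascript
// Classic warm-up: print 1 through 20, replacing multiples of 3 with
// "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz"
for (let i = 1; i <= 20; i++) {
  let output = '';
  if (i % 3 === 0) output += 'Fizz';
  if (i % 5 === 0) output += 'Buzz';
  console.log(output || i);
}
```

If tweaking a loop like this and predicting what it will print holds your attention, that's an early sign worth paying attention to.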
How can you get a feel for what the work is like?
Jobs in software development vary wildly in how they look - a few parameters are company size, team size, technology stack, the industry you're in (coding for aviation is very, very different from coding for advertising
in some meaningful ways), etc.
Nevertheless, it can be helpful to watch some professional developers do developer things, in order to gauge if it even seems interesting to you or not.
How can you peek into the day to day of some working developers?
Luckily, plenty of developers make it easy for you to do that, by sharing content on YouTube and Twitch.
This is very far from an exhaustive list, but here's a few channels I've watched recently that can help you see some
on-screen action for yourself:
Ants Are Everywhere - An ex-Googler reads the source code to popular open-source projects on YouTube, thinking through the process and showing how he answers his own questions
as they arise. Really excellent code spelunking.
Yours truly - I make tutorials on open source tools as well as share some recordings of myself live-coding on some open source projects.
Lately, and for the foreseeable future, I'll be going deep on A.I.
applications, LLMs (large language models such as ChatGPT and others), vector databases and machine learning topics.
TJ DeVries - A great open source developer, author of a very popular Neovim plugin (a coding tool for developers) and someone who makes their content accessible and interesting for all viewers.
The Primeagen - A spicier, no-holds-barred look at all things programming, getting into coding, learning to code, and operating at a high level as a software engineer from a Netflix engineer who isn't afraid to say it like it is.
I'll continue to add more as I find good channels to help folks get a feel for the day in, day out coding tasks.
Keep in mind: these channels will give you a good taste of working with code, using a code editor and working with different languages and tools, but that's only a part of the overall job of being a professional developer.
There's entire bookshelves worth of good content on the soft skills of the job: working effectively up and down your organization, planning, team structure and dynamics, collaborative coding, team meetings, methods for planning and tracking work,
methods for keeping things organized when working with other developers, etc.
These skills are perhaps even more important to develop than the technical ones.
You might not find the programming bug overnight
I've been on a computer since I was 3 years old, but for the first few years I was really only playing games and making dioramas with paint and similar programs.
Around age 11, I had a neighborhood friend who showed me an early Descent game on his PC.
He also had a C++ textbook that he let me borrow and read through.
At the tender age of 11, I was thinking to myself that I would read this book, become a developer, and then make my own games.
I started by
trying to understand the textbook material. This didn't pan out - and it would be another 15 years until I'd make a conscious decision to learn to code.
At age 26, I joined my first tech company as a marketing associate.
Luckily, there was a component of the job that was also quality assurance, and our product was very technical, so I had to use the command line
to send various test payloads into our engine and verify the outputs made sense. I was hooked.
The staff-level developer who was sitting next to me gave me just the right amount of encouragement and said that if I kept at it - I would be like "this" (he made a motion with his hand of an upward ramp).
From that point forward, I was teaching myself everything I could on nights and weekends.
Practicing, coding, reading about
coding and trying to build things. And I've never stopped.
The timeline for learning to code can be lumpy and will look different for different people. That's okay, too.
What do you do if you think you DO have the programming bug?
So what should you do if you try out some of these types of programming exercises and you find out that you really do like them?
That you find yourself thinking about them when you're doing something else?
What do you do next?
Start building and never stop.
This is the advice from a Stack Overflow developer survey from a few years ago about how to stay current and how to grow as a developer: "Build things all the time and never stop".
I couldn't agree more.
The first complete web app I built for myself was my Article Optimizer.
It was brutal.
I didn't even know the names of the things I didn't know - so I couldn't Google them.
I had to work backwards by examining apps that were similar enough
to what I was trying to build (for example, an app that presented the user with a form they could use to submit data) and reverse engineer it, reading the page source code, and trying to find out more information about the base technologies.
Form processing, APIs, custom fonts, CSS, rendering different images based on certain conditions, text processing and sanitization.
I learned a metric ton from my first major web app, even though it took me months to get it live.
And the first version was thrilling, but not at all
what I wanted.
So I kept on refining it, re-building it.
Learning new frameworks and techniques.
Around the third time I rewrote it, I got it looking and functioning the way I wanted, and I got it running live on the internet so that other people could use it.
Then I maintained it as a freely available app for many years. Hosting something on the internet, on your own custom domain, will teach you a ton as well.
This is the path that worked for me: find something that's outside of your comfort zone.
Figure out how to build it.
Chase down every curiosity - research it as best you can and then try to get it working.
Once you do, it's time for the next project.
This time, do something more ambitious
than last time around - something that will push you out of your comfort zone again so that you can learn even more.
Don't pony up your cash until you've gotten a free taste
I've seen people take out a loan for $12,000 in order to complete a coding bootcamp, just to discover during their first job placement that they don't actually enjoy working on the computer all day or want to continue building digital things.
If you're currently considering learning to code or getting into computers as a possible career, don't over invest until you've given yourself a taste of coding and building.
When you're on the job site day in and day out - doing the actual work, feeling the stress, and the joy and the panic and accomplishment, Mom and Dad are not going to be leaning over your shoulder (hopefully).
Software development, hacking, designing and building systems, creating apps and sites, solving hard engineering challenges with your ever-expanding toolkit can be a wonderful career - if you enjoy doing the work.
You need to figure out if you can find that spark and continually use it to renew your own interest and passion.
Looking for advice or have a question?
You can subscribe to my newsletter below, and the first email in the series will allow you to reply in order to share with me any challenges you're currently facing in your career, or questions you might have.
All the best and happy coding! |