Noa Roggendorff

nroggendorff

AI & ML interests

None yet

Recent Activity

updated a Space 7 minutes ago
nroggendorff/nroggendorff
updated a model 32 minutes ago
nroggendorff/smallama
updated a Space about 1 hour ago
nroggendorff/train-llama

Articles

Organizations

nroggendorff's activity

posted an update about 11 hours ago
I've done it.
~~well, some of it~~
replied to their post 5 days ago
Reacted to luigi12345's post with πŸ‘€ 13 days ago
Best Debug Prompt

You are a frustrated user who has tested this application extensively. Your job is to list EVERY possible way this app could completely break or become unusable.

For each potential failure:

1. What would make you say "This app is totally broken!"?
2. What exact steps did you take when it broke?
3. What did you see on your screen when it broke?
4. How angry would this make a typical user (1-10)?
5. What would you expect the app to do instead?

Think about:
- What happens if you click buttons really fast?
- What if your internet is slow/disconnected?
- What if you upload weird files/images?
- What if you try to break the app on purpose?
- What if multiple people use it at once?
- What if you use it on mobile/tablet?
- What if you refresh/navigate while it's working?
- What if you paste invalid inputs?
- What if you upload HUGE files?
- What if you leave it running overnight?

Don't worry about being technical - just describe what you saw break as a user.

Format each issue like:

ISSUE #1: [Brief angry user description]
- STEPS TO BREAK IT: [Exactly what you did]
- WHAT HAPPENED: [What you saw]
- ANGER LEVEL: [1-10]
- EXPECTED: [What should happen]

Keep going until you've found every possible way to break this app from a user's perspective!

After outputting the list, produce an optimized Composer edit block based on it, fixing the severe issues that make sense to adjust given Gradio's limitations and the current usage target (don't assume we need unnecessary functions).
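The prompt above is meant to be reusable across apps. A minimal sketch of wiring it into code, assuming a plain chat-style API downstream; `build_debug_request` and the trimmed `DEBUG_PROMPT` constant are hypothetical names, and the actual LLM call is left out:

```python
# Hypothetical wrapper around the debug prompt above. The prompt text
# is abbreviated here; in practice you would paste the full version.

DEBUG_PROMPT = """You are a frustrated user who has tested this application extensively.
Your job is to list EVERY possible way this app could completely break or become unusable.

Format each issue like:

ISSUE #1: [Brief angry user description]
- STEPS TO BREAK IT: [Exactly what you did]
- WHAT HAPPENED: [What you saw]
- ANGER LEVEL: [1-10]
- EXPECTED: [What should happen]
"""

def build_debug_request(app_description: str) -> str:
    """Prepend a description of the app under test to the reusable prompt."""
    return f"APP UNDER TEST:\n{app_description}\n\n{DEBUG_PROMPT}"

request = build_debug_request("A Gradio image-upload demo with one Submit button.")
```

The resulting string is what you would send as the user message to whatever model runs the review.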
Reacted to MonsterMMORPG's post with πŸ”₯ 19 days ago
Hunyuan3D-1 - SOTA Open Source Text-to-3D and Image-to-3D - 1-Click Install and use both Locally on Windows and on Cloud - RunPod and Massed Compute

Automatic Installers
Works amazingly well on 24 GB GPUs
Files > https://www.patreon.com/posts/115412205

So what is Hunyuan3D-1
Official repo : https://github.com/tencent/Hunyuan3D-1
On Hugging Face : tencent/Hunyuan3D-1

Tencent Hunyuan3D-1.0: A Unified Framework for Text-to-3D and Image-to-3D Generation

Abstract

While 3D generative models have greatly improved artists' workflows, the existing diffusion models for 3D generation suffer from slow generation and poor generalization. To address this issue, we propose a two-stage approach named Hunyuan3D-1.0 including a lite version and a standard version, that both support text- and image-conditioned generation.

In the first stage, we employ a multi-view diffusion model that efficiently generates multi-view RGB images in approximately 4 seconds. These multi-view images capture rich details of the 3D asset from different viewpoints, relaxing the task from single-view to multi-view reconstruction. In the second stage, we introduce a feed-forward reconstruction model that rapidly and faithfully reconstructs the 3D asset from the generated multi-view images in approximately 7 seconds. The reconstruction network learns to handle noise and inconsistency introduced by the multi-view diffusion and leverages the available information from the condition image to efficiently recover the 3D structure.

Our framework incorporates the text-to-image model Hunyuan-DiT, making it a unified framework supporting both text- and image-conditioned 3D generation. Our standard version has 3x more parameters than our lite version and other existing models. Our Hunyuan3D-1.0 achieves an impressive balance between speed and quality, significantly reducing generation time while maintaining the quality and diversity of the produced assets.
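The abstract's two-stage flow can be sketched as a toy pipeline. The function names and data shapes below are illustrative placeholders, not the actual tencent/Hunyuan3D-1 API:

```python
# Toy sketch of the two-stage Hunyuan3D-1.0 flow described above.
# All names and return values are placeholders for illustration.

def multiview_diffusion(condition, n_views=6):
    # Stage 1: generate multi-view RGB images (~4 s per the abstract),
    # turning a single-view problem into multi-view reconstruction.
    return [f"view_{i}_of_{condition}" for i in range(n_views)]

def feedforward_reconstruction(views):
    # Stage 2: reconstruct the 3D asset from the generated views
    # (~7 s per the abstract), tolerating noise and inconsistency.
    return {"mesh": "placeholder", "n_views_used": len(views)}

def hunyuan3d_pipeline(prompt):
    views = multiview_diffusion(prompt)       # single view -> multi-view
    return feedforward_reconstruction(views)  # multi-view -> 3D asset

asset = hunyuan3d_pipeline("a ceramic teapot")
```

Text conditioning would simply route the prompt through Hunyuan-DiT first to obtain the condition image.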




posted an update 20 days ago
I still think whitespace in tokenizers is so dumb.
Congrats, you just doubled your vocab size for no reason.
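The complaint above comes from BPE-style vocabularies treating "word" and " word" (with a leading space) as distinct tokens. A toy illustration, not a real tokenizer:

```python
# In BPE-style vocabs, a word at sentence start ("hello") and the same
# word mid-sentence (" hello", leading space) get separate entries,
# so common words are effectively stored twice. Toy vocab below.

vocab = {}
for word in ["hello", "world"]:
    vocab[word] = len(vocab)        # sentence-start form
    vocab[" " + word] = len(vocab)  # mid-sentence form with leading space

# Two entries per word: the vocab doubles.
```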
replied to their post 22 days ago

The model I made isn't very good, by the way; it was only trained for one epoch.

posted an update 22 days ago
replied to their post 23 days ago
replied to their post 24 days ago

They will charge you a $10 verification fee (so don't lock the card until that happens), but that charge is cancelled after a few days.

posted an update 24 days ago
Did you guys know that if you try to link a prepaid card to huggingface it won't work, but then if you press the button again it links anyway? Then you can lock the card (deny any charges), and get resources for free? You're welcome :P
posted an update 25 days ago
wdym you can't pickle
_io.TextIOWrapper

~!??
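The error above is reproducible in a few lines: open file handles wrap OS-level state that can't survive serialization, so pickle rejects them.

```python
import os
import pickle

# Reproducing the complaint above: an open text-mode file handle is an
# _io.TextIOWrapper, which pickle refuses to serialize with a TypeError.

with open(os.devnull, "w") as fh:
    try:
        pickle.dumps(fh)
        raised = False
    except TypeError as err:
        raised = True
        message = str(err)  # names the offending _io.TextIOWrapper type
```

The usual workaround is to pickle the path (or whatever state you need) and reopen the file on the other side.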
posted an update 30 days ago
@echo off
echo hello world
pause

replied to MichaelBoll's post about 1 month ago
posted an update about 1 month ago
100 followers? When did that happen?
replied to their post about 1 month ago
posted an update about 1 month ago
she assert on my device until i give up AHAHEGHFDGHJHASUFSHD
Reacted to their post with πŸš€ about 2 months ago
When huggingface patches this, I'm going to be really sad, but in the meantime, here you go:

When AutoTrain creates a new space to train your model, it does so via the huggingface API. If you modify the code so that it includes a premade README.md file, you can add these two lines:

---
app_port: 8080 # or any integer besides 7860 that's greater than 2 ** 10
startup_duration_timeout: 350m
---


This will tell huggingface to listen for the iframe on your port instead of the one AutoTrain is actually hosting on, and because startup time isn't charged, you get the product for free. (You can take this even further by switching the compute type to an A100 or something.)
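For the trick above to keep the Space "starting", whatever runs inside it only needs to serve HTTP on the port named by `app_port`. A minimal stdlib sketch of such a placeholder server, assuming port 8080 as in the README snippet:

```python
# Minimal placeholder app for the app_port trick above: serve anything
# on the port the README's front matter declares (8080 here) so the
# Space's iframe has something to connect to.

from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8080  # must match app_port in the README front matter

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"training still running...")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```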
posted an update about 2 months ago
posted an update about 2 months ago
Pretty much all of the values in the llama training post are placeholders, so if you don't get a desirable result, tweak and tweak and tweak. It took months to get smallama to do anything.
Reacted to zamal's post with πŸ€— about 2 months ago
πŸš€ New Model Release: zamal/Molmo-7B-GPTQ-4bit πŸš€

Hello lovely community,

zamal/Molmo-7B-GPTQ-4bit model is now available for all! This model has been highly quantized, reducing its size by almost six times. It now occupies significantly less space and VRAM, making it perfect for deployment on resource-constrained devices without compromising performance.

Now we get:
- Efficient performance: maintains high accuracy while being highly quantized.
- Reduced size: the model size is reduced by nearly six times, optimizing storage and memory usage.
- Versatile application: ideal for integrating a powerful visual language model into various projects, particularly multi-RAG chains.

Check it out!
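A back-of-envelope check of the size claim above, assuming the nominal 7B parameter count. Going from 16-bit to 4-bit weights alone gives about a 4x cut in raw weight storage; the "almost six times" figure presumably measures against a larger original checkpoint or includes activation/overhead savings, so treat these numbers as rough:

```python
# Rough storage arithmetic for a 7B-parameter model at different
# weight precisions. Ignores quantization metadata and non-weight state.

params = 7e9
fp16_gb = params * 2 / 1e9    # 2 bytes per weight at 16-bit
int4_gb = params * 0.5 / 1e9  # 4 bits = 0.5 bytes per weight

# fp16_gb is 14.0 GB; int4_gb is 3.5 GB; a 4x reduction from raw weights alone.
```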
