Techne

AI Experiences

Stories of AI adoption and real-world experiences from Hacker News

Experience Report · 4d ago

Using LLMs for Efficient Anki Card Creation

The discussion revolves around the trade-offs between hand-writing Anki flashcards and using language models to automate the process. Manual creation aids deeper learning and memory retention, but it is time-consuming. Using LLMs like ChatGPT can accelerate card creation and enable faster progression through learning topics, provided the output is verified for accuracy. The actionable insight is to use AI-generated content as a starting point, then combine it with human review or cross-checking across multiple models to maintain quality and learning effectiveness.
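The review-then-import workflow above can be sketched in a few lines: have the model draft question/answer pairs, parse them, inspect them by hand, and only then export to a file Anki can import (Anki's importer accepts tab-separated front/back pairs). The "Q:"/"A:" line format and the helper names here are illustrative assumptions, not details from the thread.

```python
# Sketch: turn LLM-drafted "Q: ... / A: ..." lines into an Anki-importable
# TSV file, keeping a human review step in the loop.
import csv

def parse_cards(llm_output: str) -> list[tuple[str, str]]:
    """Pair up 'Q:' and 'A:' lines from raw model output."""
    cards, question = [], None
    for line in llm_output.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            cards.append((question, line[2:].strip()))
            question = None
    return cards

def write_anki_tsv(cards: list[tuple[str, str]], path: str) -> None:
    """Anki's text importer accepts tab-separated front/back pairs."""
    with open(path, "w", newline="") as f:
        csv.writer(f, delimiter="\t").writerows(cards)

draft = """
Q: Why verify LLM-generated flashcards?
A: Models can state plausible but wrong facts; review before memorizing.
Q: What format does Anki import?
A: Tab- or comma-separated front/back pairs, among others.
"""
cards = parse_cards(draft)
# Review `cards` by hand (or cross-check with a second model) before:
# write_anki_tsv(cards, "deck.tsv")
```

Keeping the export behind a manual step is the point: generation is cheap, so the human effort shifts entirely to verification.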

Experience Report · 4d ago

Challenges of Using LLMs in Interviews

The discussion centers on the impact of LLMs in technical interviews, highlighting concerns that reliance on these models erodes fundamental skills. The original poster shares practical mitigation strategies: conducting in-person whiteboard interviews, designing questions that expose LLM limitations (particularly around complex hierarchical and temporal data), and preparing edge cases that force candidates to rethink their solutions. A respondent suggests asking esoteric questions a typical candidate would not know offhand, another way of probing the limits of LLM assistance. These insights give interviewers concrete methods for adapting to an era of widespread LLM usage.

Experience Report · 9d ago

ChatGPT Use in Microbenchmarks and Data Structures

The thread discusses practical experiences using ChatGPT for programming microbenchmarks and for container-choice advice in C++. The original poster used ChatGPT as a quick code generator for benchmarking small-array search methods, discovering that a branchless linear scan is fastest at small sizes. Another user notes ChatGPT's frequent recommendation of std::unordered_map even when a std::vector would be more efficient for small data sets, pointing out that ChatGPT can be persuaded to acknowledge the vector's benefits after prompting. Key takeaway: ChatGPT can accelerate simple benchmarking code generation, but user expertise is still needed to vet its container recommendations, especially for small data sizes.
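The "branchless linear scan" the poster benchmarked usually refers to counting how many elements fall below the key instead of early-exiting on a match, so the loop body contains no data-dependent branch. Shown here in Python for clarity only; the speedup materializes in C++, where the comparison compiles to a flag-setting instruction rather than a mispredictable jump. This is the common formulation of the technique, not necessarily the exact code from the thread.

```python
# Branchless search over a small sorted array: sum the boolean comparison
# directly instead of branching, yielding the insertion index (lower bound).
def branchless_lower_bound(arr: list[int], key: int) -> int:
    idx = 0
    for x in arr:
        idx += x < key   # adds 1 while x < key, 0 afterwards; no if/else
    return idx

small = [2, 3, 5, 7, 11, 13]
assert branchless_lower_bound(small, 7) == 3    # first element >= 7
assert branchless_lower_bound(small, 1) == 0
assert branchless_lower_bound(small, 99) == len(small)
```

Scanning every element looks wasteful, but for tiny arrays the absence of a mispredicted exit branch (and the compiler's ability to vectorize the loop) tends to beat both early-exit scans and binary search.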

Experience Report · 57m ago

Learning and Using LLMs in Software Development

The discussion contrasts traditional software development learning—focusing on domain, problem space, and abstract solutions—with using Large Language Models (LLMs) mainly as tools to automate code generation and assist in orchestration, testing, and validation. One participant emphasizes that code is a disposable artifact, while abstract problem-solving is key. Another highlights learning by observing LLM decision-making processes, leveraging the model's ability to rapidly access knowledge pools, thus gaining understanding through practical usage rather than manual coding. The actionable insight is to consider LLMs as accelerators for repetitive tasks and learning enhancers through observation, not just code producers.

Experience Report · 1h ago

Local vs Cloud LLM Control

The conversation centers on the value of building local language model agents with tools like llama.cpp or vLLM to gain a fundamental understanding of LLMs and retain control, compared to relying on cloud API providers. One participant recommends this approach to demystify LLMs and highlights the trade-off of control versus ease of use. Another notes hardware limitations and tempers expectations about performance relative to commercial cloud models, indicating that local inference is currently constrained by personal hardware capabilities.

Experience Report · 7h ago

Using the Moonshot LLM for SVG generation

The original poster shares their experience using the Moonshot large language model (LLM) via OpenRouter to generate SVG images of a pelican riding a bicycle, providing code snippets and links to the rendered SVGs. A follow-up query asks where one can run trillion-parameter models, implying interest in the infrastructure requirements for models of that scale. The practical insight is how to set up and experiment with a very large LLM for creative generation tasks, and that users are curious about the scalability and deployment aspects of such models.
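Since the thread's own snippets aren't reproduced here, a minimal sketch of the setup: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a plain HTTP POST suffices. The model slug "moonshotai/kimi-k2" is an assumption (check OpenRouter's model list), and the key name is the conventional environment variable, not a detail from the post.

```python
# Sketch: build a chat completions request for a Moonshot model on
# OpenRouter's OpenAI-compatible API. Only request construction runs here;
# sending it requires a real API key.
import json
import os

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> tuple[dict, bytes]:
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(payload).encode()

headers, body = build_request(
    "moonshotai/kimi-k2",  # assumed slug; verify against openrouter.ai/models
    "Generate an SVG of a pelican riding a bicycle.",
)
# To send it:
#   import urllib.request
#   req = urllib.request.Request(API_URL, data=body, headers=headers)
#   reply = json.loads(urllib.request.urlopen(req).read())
#   svg_markup = reply["choices"][0]["message"]["content"]
```

Routing through OpenRouter is what makes the trillion-parameter question moot for casual experimentation: the model runs on the provider's hardware, not yours.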

Experience Report · 11h ago

Simple UX with LLM integration

In the discussion, a user appreciates a feature reminiscent of Warp Terminal's command style that integrates seamlessly with an LLM, citing it as an example of simplicity and effective problem-solving in UX design.

Experience Report · 15h ago

Impact of LLMs on Medical Assistance

The thread discusses a personal experience where ChatGPT provided life-saving medical information, emphasizing the current benefits of LLMs in healthcare. A counterpoint highlights the variability of LLM outcomes, acknowledging both their potential benefits and harms. This insight stresses the importance of cautious and critical use of LLMs for medical advice.

Experience Report · 20h ago

Using ChatGPT for Medical Record Understanding

Users share personal experiences using ChatGPT to assist with interpreting medical records, noting it is faster and can rival telehealth services in accuracy. However, caution is advised as ChatGPT is not a substitute for professional medical advice, reflecting the need to balance convenience with responsible usage and recognizing the limitations of AI in sensitive decision-making contexts.

Experience Report · 1d ago

Using ChatGPT for long-term issue resolution

The original poster shares a brief experience of using ChatGPT to solve a long-standing problem, indicating significant reliance on AI tools. The follow-up comment appears to be a sarcastic remark, but the thread highlights how AI like ChatGPT is becoming a go-to resource for resolving persistent issues. Actionable insight: evaluate AI assistance when troubleshooting, while staying aware of its growing role in problem solving.

Experience Report · 1d ago

ChatGPT policy update on medical advice

The conversation centers on OpenAI's recent update to ChatGPT's usage policies restricting it from providing tailored legal or medical advice without licensed professional involvement. One user shares firsthand experience noting that ChatGPT now refuses to offer medical opinions, marking a technical and policy change rather than just legal precaution. The user appreciates the tool's educational value but regrets the reduced functionality in medical contexts. This insight highlights how AI service providers enforce ethical boundaries through software behavior changes, impacting user interaction and reliance on AI for sensitive advice.

Experience Report · 1d ago

Challenges of Using LLMs with Real-World Excel

The discussion highlights frustrations with large language models (LLMs) when applied to real-world Excel tasks, emphasizing that demos often involve clean, textbook data unlike messy, practical spreadsheets. One user notes how LLMs confidently produce errors in complex Excel scenarios, limiting their utility, while another less experienced user finds potential value for learning Excel functions and possibilities. The key insight is that LLMs may offer conceptual assistance but currently struggle with reliable execution on unstructured, real-world Excel data.

Experience Report · 2d ago

On-device Local LLM Performance

The discussion focuses on experiences with different on-device local LLM stacks, comparing llama.cpp, MLC, and other implementations like llama.rn. Users report that MLC has shown better performance than llama.cpp in the past, but the evaluation is somewhat dated. Another contributor shares that llama.rn was too slow to be conversational due to WebGL rendering demands, suggesting that using newer models or different frameworks like expo-llm-mediapipe might yield improved performance. Actionable insight includes exploring newer models or alternative local LLM implementations to achieve better conversational speeds on mobile devices.

Experience Report · 3d ago

Managing Context in LLM Sessions

Users discuss the challenge of 'flagged' or 'poisoned' responses in language model sessions, which can require starting a new session to recover. They share practical advice, including clearing context frequently (e.g., every third message) to prevent 'context rot' and improve performance, referencing Anthropic's recommended best practices.
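The "clear context frequently" advice can be sketched as a trimming step applied before each new request: retain the system prompt, drop all but the most recent exchanges. The message shape is the common chat-API format, and the keep_last value is illustrative, not a recommendation from the thread.

```python
# Sketch: prune chat history to fight "context rot" -- keep system
# messages, discard everything but the last few conversational turns.
def trim_history(messages: list[dict], keep_last: int = 4) -> list[dict]:
    """Keep system messages plus the last `keep_last` non-system messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "first question"},
    {"role": "assistant", "content": "first answer"},
    {"role": "user", "content": "second question"},
    {"role": "assistant", "content": "second answer"},
]
trimmed = trim_history(history, keep_last=2)
assert trimmed[0]["role"] == "system"
assert len(trimmed) == 3
assert trimmed[-1]["content"] == "second answer"
```

Trimming is a softer version of the thread's nuclear option: when a session is truly poisoned, a fresh session (empty history apart from the system prompt) is the equivalent of keep_last=0.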

Experience Report · 3d ago

Challenges using LLM tools for large codebases

The original poster suggests a 'shoot and forget' approach when using LLM tools for coding, focusing on final PR results rather than intermediate outputs. A respondent counters that this strategy is ineffective for large or complex projects, as the volume of generated code and related artifacts can become overwhelming, leading to extensive reviews and bloated codebases. The actionable insight is to use LLM tools selectively for brainstorming and targeted implementations rather than broad, unsupervised code generation.