philschmid.de - RSS feed


AI
Updated 2026-05-15 01:27 · 100 items
  1. How Agents Manage Other Agents: Four Subagents Patterns in 2026
  2. How to use Deep Research with the Gemini API
  3. How to correctly use MCP servers with your AI Agents
  4. 8 Tips for Writing Agent Skills
  5. How to use Gemma 4 with the Gemini API and Google AI Studio
  6. How Kimi, Cursor, and Chroma Train Agentic Models with RL
  7. Combine Built-in Tools and Function Calling in the Gemini Interactions API
  8. Developer Guide: Nano Banana 2 with the Gemini Interactions API
  9. How Autoresearch will change Small Language Models adoption
  10. Practical Guide to Evaluating and Testing Agent Skills
  11. Writing a Good AGENTS.md
  12. Agents: Inner Loop vs Outer Loop
  13. Can We Close the Loop in 2026?
  14. Multimodal Function Calling with Gemini 3 and Interactions API
  15. Getting Started with Gemini Deep Research API
  16. The Agent Client Protocol Overview
  17. Gemini Interactions API Quick Start
  18. MCP is Not the Problem, It's your Server: Best Practices for Building MCP Servers
  19. Transparent PNG Stickers with Nano Banana Pro and Gemini interactions API
  20. Building Agents with the Gemini Interactions API
  21. Introducing MCP CLI: A way to call MCP Servers Efficiently
  22. The importance of Agent Harness in 2026
  23. 8 Predictions for 2026. What comes next in AI?
  24. Context Engineering for AI Agents: Part 2
  25. Why (Senior) Engineers Struggle to Build AI Agents
  26. Practical Guide on how to build an Agent from scratch with Gemini 3
  27. Gemini 3 Prompting: Best Practices for General Usage
  28. Gemini API File Search: A Web Developer Tutorial
  29. Build your first AI Agent with Gemini, n8n and Google Cloud Run
  30. AI Agent Benchmark Compendium
  31. Agents 2.0: From Shallow Loops to Deep Agents
  32. The Rise of Subagents
  33. The 10 Steps for product AI generation with Gemini 2.5 Flash
  34. Memory in Agents, Make LLMs remember.
  35. Google Gemini CLI Cheatsheet
  36. Code Sandbox MCP: A Simple Code Interpreter for Your AI Agents
  37. Integrating Long-Term Memory with Gemini 2.5
  38. The New Skill in AI is Not Prompting, It's Context Engineering
  39. Single vs Multi-Agent System?
  40. Zero to One: Learning Agentic Patterns
  41. Google Gemini LangChain Cheatsheet
  42. OpenAI Codex CLI, how does it work?
  43. Model Context Protocol (MCP) an overview
  44. ReAct agent from scratch with Gemini 2.5 and LangGraph
  45. Pass@k vs Pass^k: Understanding Agent Reliability
  46. Google Gemma 3 Function Calling Example
  47. Function Calling Guide: Google DeepMind Gemini 2.0 Flash
  48. From PDFs to Insights: Structured Outputs from PDFs with Gemini 2.0
  49. Mini-R1: Reproduce Deepseek R1 „aha moment" a RL tutorial
  50. How to align open LLMs in 2025 with DPO and synthetic data
  51. How to use Anthropic MCP Server with open LLMs, OpenAI or Google Gemini
  52. Bite: How Deepseek R1 was trained
  53. Fine-tune classifier with ModernBERT in 2025
  54. How to fine-tune open LLMs in 2025 with Hugging Face
  55. Deploy QwQ-32B-Preview the best open Reasoning Model on AWS with Hugging Face
  56. Deploy Llama 3.2 Vision on Amazon SageMaker
  57. How to Fine-Tune Multimodal Models or VLMs with Hugging Face TRL
  58. Evaluate open LLMs with Vertex AI and Gemini
  59. Evaluate LLMs using Evaluation Harness and Hugging Face TGI/vLLM
  60. Deploy open LLMs with Terraform and Amazon SageMaker
  61. LLM Evaluation doesn't need to be complicated
  62. Evaluating Open LLMs with MixEval: The Closest Benchmark to LMSYS Chatbot Arena
  63. Train and Deploy open Embedding Models on Amazon SageMaker
  64. Deploy Mixtral 8x7B on AWS Inferentia2 with Hugging Face Optimum
  65. Fine-tune Llama 3 with PyTorch FSDP and Q-Lora on Amazon SageMaker
  66. Fine-tune Embedding models for Retrieval Augmented Generation (RAG)
  67. Understanding the Cost of Generative AI Models in Production
  68. Deploy Llama 3 70B on AWS Inferentia2 with Hugging Face Optimum
  69. Deploy open LLMs with vLLM on Hugging Face Inference Endpoints
  70. Efficiently fine-tune Llama 3 with PyTorch FSDP and Q-Lora
  71. Deploy Llama 3 on Amazon SageMaker
  72. Accelerate Mixtral 8x7B with Speculative Decoding and Quantization on Amazon SageMaker
  73. Deploy Llama 2 70B on AWS Inferentia2 with Hugging Face Optimum
  74. Fine-Tune and Evaluate LLMs in 2024 with Amazon SageMaker
  75. Evaluate LLMs with Hugging Face Lighteval on Amazon SageMaker
  76. How to fine-tune Google Gemma with ChatML and Hugging Face TRL
  77. How to Fine-Tune LLMs in 2024 with Hugging Face
  78. RLHF in 2024 with DPO and Hugging Face
  79. Scale LLM Inference on Amazon SageMaker with Multi-Replica Endpoints
  80. Fine-tune Llama 7B on AWS Trainium
  81. Programmatically manage 🤗 Inference Endpoints
  82. Deploy Mixtral 8x7B on Amazon SageMaker
  83. Deploy Embedding Models on AWS inferentia2 with Amazon SageMaker
  84. Deploy Llama 2 7B on AWS inferentia2 with Amazon SageMaker
  85. Deploy Stable Diffusion XL on AWS inferentia2 with Amazon SageMaker
  86. Amazon Bedrock: How good (bad) is Titan Embeddings?
  87. Evaluate LLMs and RAG a practical example using Langchain and Hugging Face
  88. Deploy Idefics 9B and 80B on Amazon SageMaker
  89. Train and Deploy Mistral 7B with Hugging Face on Amazon SageMaker
  90. Llama 2 on Amazon SageMaker a Benchmark
  91. Fine-tune Falcon 180B with DeepSpeed ZeRO, LoRA and Flash Attention
  92. Fine-tune Falcon 180B with QLoRA and Flash Attention on Amazon SageMaker
  93. Deploy Falcon 180B on Amazon SageMaker
  94. Optimize open LLMs using GPTQ and Hugging Face Optimum
  95. LLMOps: Deploy Open LLMs using Infrastructure as Code with AWS CDK
  96. Deploy Llama 2 7B/13B/70B on Amazon SageMaker
  97. Introducing EasyLLM - streamline open LLMs
  98. Extended Guide: Instruction-tune Llama 2
  99. LLaMA 2 - Every Resource you need
  100. Fine-tune LLaMA 2 (7-70B) on Amazon SageMaker