{"id":194,"date":"2026-05-05T05:04:46","date_gmt":"2026-05-05T05:04:46","guid":{"rendered":"https:\/\/abrarqasim.com\/blog\/laravel-pulse-production-what-i-use-what-i-turned-off\/"},"modified":"2026-05-05T05:04:46","modified_gmt":"2026-05-05T05:04:46","slug":"laravel-pulse-production-what-i-use-what-i-turned-off","status":"publish","type":"post","link":"https:\/\/abrarqasim.com\/blog\/laravel-pulse-production-what-i-use-what-i-turned-off\/","title":{"rendered":"Laravel Pulse in Production: What I Use, What I Turned Off"},"content":{"rendered":"<p>Confession: I avoided Laravel Pulse for the first six months it was out. The dashboard screenshots looked great, but I already had Laravel Telescope wired up and a Grafana board that nobody ever opened. Adding another monitoring tool felt like the project equivalent of buying another notebook to fix the fact that I never write in any of them.<\/p>\n<p>Then the support inbox started filling up with &ldquo;the app is slow on Tuesdays&rdquo; and I had no good way to point at why. Telescope is great for one request, useless for trends. Grafana told me CPU was fine. I needed something between those two.<\/p>\n<p>Pulse turned out to be exactly that. I wired it into a Laravel 12 app on a small Hetzner VPS and let it run for six weeks. Some of it was boring. A few bits caught real problems, and a couple of defaults needed disabling before they bit me. Here&rsquo;s what actually happened.<\/p>\n<h2 id=\"what-pulse-actually-is-the-thing-the-readme-glosses-over\">What Pulse actually is (the thing the readme glosses over)<\/h2>\n<p>Pulse is a first-party performance dashboard that lives inside your Laravel app. It records slow queries, slow jobs, slow requests, exceptions, cache hit ratios, queue throughput, and a few other things. Everything goes into your database (you can use a separate connection, more on that), and the dashboard renders at <code>\/pulse<\/code>. 
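The separate-connection bit is a single env var; this fragment mirrors the shape of the stock <code>config\/pulse.php<\/code>, trimmed to the relevant keys:<\/p>\n<pre><code class=\"language-php\">\/\/ config\/pulse.php\n'storage' =&gt; [\n    'driver' =&gt; env('PULSE_STORAGE_DRIVER', 'database'),\n    'database' =&gt; [\n        \/\/ point PULSE_DB_CONNECTION at a dedicated connection\n        \/\/ to keep Pulse writes off your primary database\n        'connection' =&gt; env('PULSE_DB_CONNECTION'),\n    ],\n],\n<\/code><\/pre>\n<p>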
The <a href=\"https:\/\/laravel.com\/docs\/12.x\/pulse\" rel=\"nofollow noopener\" target=\"_blank\">official docs<\/a> cover install in five minutes.<\/p>\n<p>The bit I missed for a while: Pulse isn&rsquo;t really a logging tool. It&rsquo;s a sampling tool. Every record it stores has been aggregated into rolling buckets. So when you look at &ldquo;slow queries&rdquo; you&rsquo;re not seeing every slow query, you&rsquo;re seeing the top N by aggregated execution time over the window you picked. That&rsquo;s why the dashboard stays fast even when your app doesn&rsquo;t.<\/p>\n<p>The docs do mention this. It still took me a couple of days to internalise it.<\/p>\n<h2 id=\"what-i-actually-look-at-in-the-dashboard\">What I actually look at in the dashboard<\/h2>\n<p>I dropped about half the cards from the default layout. The ones I kept:<\/p>\n<ol>\n<li><strong>Slow Queries<\/strong> \u2014 by far the most useful card, because the SQL shows up next to the call site.<\/li>\n<li><strong>Slow Jobs<\/strong> \u2014 caught a queue job that was loading every user&rsquo;s relations because of a missing <code>with()<\/code>.<\/li>\n<li><strong>Exceptions<\/strong> \u2014 overlaps with Sentry, but Pulse groups them by class plus location, which is a nicer first look.<\/li>\n<li><strong>Cache<\/strong> \u2014 hit ratios per key. Surprisingly useful. I had a key with a 4% hit rate that I&rsquo;d assumed was hot.<\/li>\n<li><strong>Usage by User<\/strong> \u2014 the one I check on bad days.<\/li>\n<\/ol>\n<p>The cards I dropped: Servers, Storage, Slow Outgoing Requests (we don&rsquo;t make many), and the Application Usage card. 
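<\/p>\n<p>Dropping cards is just editing Blade: publish the dashboard view with <code>php artisan vendor:publish --tag=pulse-dashboard<\/code> and delete the components you don&rsquo;t want. A sketch of roughly my layout; the card names are Pulse&rsquo;s real Livewire components, but the <code>cols<\/code> values are my own grid choices:<\/p>\n<pre><code class=\"language-blade\">&lt;!-- resources\/views\/vendor\/pulse\/dashboard.blade.php --&gt;\n&lt;x-pulse&gt;\n    &lt;livewire:pulse.slow-queries cols=\"8\" \/&gt;\n    &lt;livewire:pulse.exceptions cols=\"4\" \/&gt;\n    &lt;livewire:pulse.slow-jobs cols=\"6\" \/&gt;\n    &lt;livewire:pulse.cache cols=\"6\" \/&gt;\n    &lt;livewire:pulse.usage cols=\"12\" \/&gt;\n&lt;\/x-pulse&gt;\n<\/code><\/pre>\n<p>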
They&rsquo;re fine, I just don&rsquo;t use them.<\/p>\n<h2 id=\"the-recorders-i-turned-off-and-why\">The recorders I turned off (and why)<\/h2>\n<p>This is the section I wish I&rsquo;d had on day one.<\/p>\n<pre><code class=\"language-php\">\/\/ config\/pulse.php\n'recorders' =&gt; [\n    Recorders\\SlowQueries::class =&gt; [\n        'sample_rate' =&gt; 0.1,\n        'threshold' =&gt; 250, \/\/ ms\n        'ignore' =&gt; [\n            '\/(.*pulse_.*)\/',     \/\/ don't record Pulse's own queries\n            '\/(.*telescope_.*)\/', \/\/ or Telescope's\n        ],\n    ],\n\n    \/\/ Recorders\\SlowOutgoingRequests::class removed entirely;\n    \/\/ deleting the entry is how you disable a recorder\n],\n<\/code><\/pre>\n<p>Two changes worth flagging.<\/p>\n<p><code>sample_rate: 0.1<\/code> for slow queries. The default is 1.0. On a busy app that&rsquo;s a write-amplifier: every slow query becomes a database write. Sampling at 10% kept the resolution I needed at a tenth of the write cost, and Pulse scales sampled values back up on the dashboard (they show up prefixed with a <code>~<\/code>), so the numbers stay roughly honest.<\/p>\n<p><code>threshold: 250<\/code>. The default is 1000ms. That&rsquo;s too lax for most web requests. I dropped it to 250 and immediately saw a <code>WHERE ... LIKE '%foo%'<\/code> query in a user search I&rsquo;d forgotten about.<\/p>\n<p>Also: explicitly ignore Pulse&rsquo;s own tables. If you don&rsquo;t, and you put Pulse on the same database as your app, the dashboard will start recording itself. Not catastrophic, but it pollutes your top-N for no reason.<\/p>\n<h2 id=\"pulse-vs-telescope-i-run-both-but-for-different-things\">Pulse vs Telescope: I run both, but for different things<\/h2>\n<p>I keep seeing this framed as if you have to pick one. I run both.<\/p>\n<p><a href=\"https:\/\/laravel.com\/docs\/12.x\/telescope\" rel=\"nofollow noopener\" target=\"_blank\">Telescope<\/a> is the per-request inspector. When a customer reports &ldquo;this page is broken&rdquo;, Telescope is where I go. It records every query, every cache call, every event, every exception, for that one request.<\/p>\n<p>Pulse is the across-time aggregator. 
When I want to know &ldquo;which queries were slow last week&rdquo;, Pulse is where I go.<\/p>\n<p>They overlap (both can record exceptions), but the use cases are different enough that I don&rsquo;t feel bad running both. Telescope is on its own database connection so it doesn&rsquo;t compete with the app. Pulse uses the main connection because the writes are sampled.<\/p>\n<p>If you have to pick one for a small app, pick Pulse. Telescope is a heavier tool that pays off only when something breaks. Pulse pays you back every time you glance at the dashboard.<\/p>\n<h2 id=\"the-trap-i-fell-into-with-the-storage-driver\">The trap I fell into with the storage driver<\/h2>\n<p>Pulse has two ingest drivers: <code>storage<\/code> (buffers events and flushes them in batches to your main DB) and <code>redis<\/code> (queues events to Redis and flushes via a long-running <code>pulse:work<\/code> worker). I&rsquo;d assumed I was on <code>storage<\/code>, but a stale <code>PULSE_INGEST_DRIVER=redis<\/code> in our env was sending events through a worker I hadn&rsquo;t deployed.<\/p>\n<p>The symptom: pgbouncer connections climbing during peak hours, and Pulse data showing up half the time.<\/p>\n<p>The fix:<\/p>\n<pre><code class=\"language-php\">\/\/ config\/pulse.php\n'ingest' =&gt; [\n    'driver' =&gt; env('PULSE_INGEST_DRIVER', 'storage'),\n    'trim' =&gt; [\n        'lottery' =&gt; [1, 1000],\n        'keep' =&gt; '7 days',\n    ],\n],\n<\/code><\/pre>\n<p>What I&rsquo;d do differently:<\/p>\n<ul>\n<li>Set the driver explicitly. Don&rsquo;t lean on env defaults; they&rsquo;re easy to forget.<\/li>\n<li>The <code>trim<\/code> config controls how long Pulse keeps data. 
Default is 7 days; on a small VPS I dropped it to 3.<\/li>\n<li>Make sure trimming actually runs. There&rsquo;s no <code>pulse:trim<\/code> artisan command to schedule (Telescope habits die hard); trimming piggybacks on ingest via that <code>lottery<\/code>, so if ingest stalls, your <code>pulse_entries<\/code> table will grow without bound.<\/li>\n<\/ul>\n<p>The knob worth tuning is the lottery itself:<\/p>\n<pre><code class=\"language-php\">\/\/ config\/pulse.php (inside the 'ingest' block)\n'trim' =&gt; [\n    'lottery' =&gt; [1, 1000], \/\/ roughly 1 in 1,000 ingests runs a trim\n    'keep' =&gt; '3 days',     \/\/ down from the 7-day default\n],\n<\/code><\/pre>\n<p>(<code>pulse:check<\/code> isn&rsquo;t a scheduled command either; it&rsquo;s a long-running daemon that feeds the Servers card, which I dropped.)<\/p>\n<p>Let trimming lapse and the entries table balloons. Mine hit 4M rows in a week before I noticed. Not catastrophic, but the dashboard slowed enough that I started avoiding it. Which defeats the point.<\/p>\n<h2 id=\"should-you-bother\">Should you bother?<\/h2>\n<p>Yes if any of these match you:<\/p>\n<ul>\n<li>An app slow enough that you&rsquo;re guessing at causes.<\/li>\n<li>A queue worker doing things you can&rsquo;t easily inspect.<\/li>\n<li>A team where someone other than you needs to glance at &ldquo;is the app OK&rdquo;.<\/li>\n<\/ul>\n<p>Probably not yet if:<\/p>\n<ul>\n<li>You&rsquo;re still on Laravel 10 and not planning to upgrade soon. Pulse runs on 10 but a few recorders assume 11+.<\/li>\n<li>You&rsquo;re a one-person side project with 100 requests a day. Telescope is enough.<\/li>\n<li>You already pay Datadog or New Relic for the same thing.<\/li>\n<\/ul>\n<p>Setup took me about two hours including the recorder tuning and the storage-driver fix. The first useful insight (a slow query I&rsquo;d been ignoring) showed up on day one. Fair trade.<\/p>\n<p>If you want the longer story of the rest of the Laravel stack I lean on, I covered the new Volt syntax and how I actually use it in my <a href=\"https:\/\/abrarqasim.com\/blog\/laravel-volt-six-months-in-what-im-actually-using\" rel=\"noopener\">Laravel Volt notes<\/a>. 
The <a href=\"https:\/\/github.com\/laravel\/pulse\" rel=\"nofollow noopener\" target=\"_blank\">source on GitHub<\/a> is also worth skimming if you want to see what each recorder queries; it&rsquo;s all <code>Recorder<\/code> classes, no magic.<\/p>\n<h2 id=\"what-to-do-this-week\">What to do this week<\/h2>\n<p>Open <code>config\/pulse.php<\/code>. Drop your slow query threshold to 250ms. Add <code>sample_rate: 0.1<\/code>. Restart your queue workers (plus <code>php artisan pulse:restart<\/code> if you&rsquo;re on the Redis ingest, so the worker picks up the new config). Wait a day. Then look at your Slow Queries card and find one you didn&rsquo;t know about. There&rsquo;s almost always one.<\/p>\n<p>I cover this kind of &ldquo;what did I actually learn from shipping it&rdquo; stuff over on <a href=\"https:\/\/abrarqasim.com\/work\" rel=\"noopener\">my project work<\/a>. If you&rsquo;ve disabled a Pulse recorder I haven&rsquo;t, drop me a line: which one, and what burned you.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Six weeks running Laravel Pulse in production. The recorders I turned off, the slow query that surprised me, and when I still reach for Telescope.<\/p>\n","protected":false},"author":2,"featured_media":193,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"rank_math_title":"","rank_math_description":"Six weeks running Laravel Pulse in production. 
The recorders I turned off, the slow query that surprised me, and when I still reach for Telescope.","rank_math_focus_keyword":"laravel pulse","rank_math_canonical_url":"","rank_math_robots":"","footnotes":""},"categories":[173,52],"tags":[56,74,226,227,19,53,228],"class_list":["post-194","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-laravel","category-php","tag-laravel","tag-laravel-12","tag-laravel-pulse","tag-monitoring","tag-performance","tag-php","tag-telescope"],"_links":{"self":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/posts\/194","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/comments?post=194"}],"version-history":[{"count":0,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/posts\/194\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/media\/193"}],"wp:attachment":[{"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/media?parent=194"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/categories?post=194"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/abrarqasim.com\/blog\/wp-json\/wp\/v2\/tags?post=194"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}