27 June 2022

Fun with Rails Query Tracing

Finding the Origins of Database Queries

There are a number of reasons why trying to find the specific origin of a DB query comes up, most often in legacy systems that have spiraled into extreme complexity. A common cause is a shared database, where you are trying to figure out which application is issuing some specific query. Even in a single application, you may need to tease out DB queries that have embedded themselves deep within a shared library. Whatever the cause, finding the source of a DB query is a fairly common code detective task. Let’s look at a few approaches to tracking down the query.

Adding Query Comments

A really helpful tool for Rails is Marginalia, which can append information about the source of a query to the query itself as a comment. This lets you leverage traditional DB tools like slow query logs and cross-reference from the query back to the code that issued it. The output looks something like the below, depending on the tool you use to look at the queries.

my-service 	SELECT COUNT(*) FROM some_table WHERE some_table.some_reference_id = ??? /*application:my-service,controller:endpoint_a,action:show*/ 	1,181
my-service 	SELECT some_table.* FROM some_table WHERE some_table.id = ??? LIMIT ??? /*application:my-service,controller:endpoint_b,action:create*/ 	3,227

Some tools strip the comments or need to be configured to keep them. You can also set up your Rails logs to output this, but it can be really verbose, so I leave it off by default and toggle it on when debugging something specific.

# add in config/initializers/marginalia.rb
Marginalia.application_name = "my-service"

if ENV["ENABLE_AR_LOGGING"]
  ActiveRecord::Base.logger = Rails.logger
  ActiveRecord::Base.logger.level = Logger::DEBUG
end
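
If you want the comment to point even closer to the calling code, the marginalia gem also lets you choose which components it appends. This is a minimal sketch, assuming marginalia’s optional :line component, which adds the file and line of the caller to the comment.

# also in config/initializers/marginalia.rb
# :line appends the application file/line that triggered the query
Marginalia::Comment.components = [:application, :controller, :action, :line]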

DD Database APM

If you are already using an observability tool, many offer rich support for databases these days. For example, Datadog now offers database monitoring. This can be a really powerful tool and can help you understand query load and drive the deprecation of tables over time. A few quick notes:

  • If you have multiple applications connecting to a shared DB, ensure each uses a unique DB username so that Datadog can attribute queries to specific applications
  • DD, unfortunately, strips query comments; it would be really nice if they kept the query comment in DB Monitoring for both query metrics and samples.
  • query_metrics - understand query load, speed, etc especially a historical view
  • query_samples - all the details around specific queries
  • The APM application traces can help pinpoint where the code initiates the database span.

Fun with Rails LogSubscriber

While the others are great general tooling, what if you need something more specific, or you just want to try to find specific queries by running tests and seeing the call stack? Well, this being Ruby, of course we can hack together something for development.

Finding the Query Needle In the Haystack

Looking for call chains causing specific Active Record queries? Why not log the call stack info or drop into a debugger whenever a particular query is executed? Combining this with a robust test suite should help track down all the offending callers in no time.

Rails, or more specifically ActiveRecord/ActiveSupport, has all sorts of cool tools. A really useful one is LogSubscriber, which is part of the Active Support Instrumentation / Notifications ecosystem. It can be used to hook into many of the common abstractions in Rails, for example adding custom action controller logging.
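
As a minimal sketch of that broader Notifications API, here is a subscriber that logs how long each controller action took (the log message and format are just illustrative):

ActiveSupport::Notifications.subscribe("process_action.action_controller") do |name, start, finish, id, payload|
  # start/finish are timestamps, so the difference is the duration in seconds
  Rails.logger.info "#{payload[:controller]}##{payload[:action]} took #{(finish - start).round(3)}s"
end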

In our case, let’s listen to all Active Record queries with a log subscriber and capture any of interest. This is throw-away code I add when needed; I track down whatever I am looking for and remove it when my work is complete.

# add in spec_helper.rb, or other test setup files
module ActiveRecord
  class LogSubscriber < ActiveSupport::LogSubscriber
    def sql(event)
      # NOTE: I add a global flag (e.g. $ignore_query) and check it here when I need to skip
      # queries from factories or before/after spec setup, so I only find callers in application code.
      if /FROM "some_table" WHERE "some_condition"/.match?(event.payload[:sql])
        Rails.logger.info "SQL FOUND #{caller_locations[15...150]}"
        binding.irb if ENV["QUERY_BINDING"]
      end
    end
  end
end

ActiveRecord::LogSubscriber.attach_to :active_record
...



Dan Mayer
25 June 2022

Remarkable2 Review

I bought a Remarkable2 early after launch (I believe I was in batch 3). I haven’t really said much about it, in some sense because it doesn’t feel particularly cool, but what is clearly special is that I still use it nearly every day. I have had the device for over a year now and still enjoy it. It isn’t really something to show off like a cool tech toy, but after settling into some basic usage patterns, it is a nearly daily tool for me and has effectively replaced pen and paper, allowing me to digitize and archive my notes when I want. The device isn’t perfect, and future iterations will hopefully improve the writing experience, but after many attempts it has gotten good enough that I rarely reach for traditional pen and paper. Although, as the company tries to move from selling a device to making profits off a subscription service, I am more likely to look for alternatives that aren’t subscription based.

TL;DR: I have used this almost daily for 18 months and have hundreds of pages of notes on it; it has become a part of my daily workflow.

History of Digital Notes Attempts

I have been wanting quick digital notes for decades and have tried a number of solutions, most of which fell short so quickly that I never developed any habit of using them on a day-to-day basis… The first attempts might date me and remind folks of a time when many companies were pushing for quick digital notes, with the Palm Pilot’s Graffiti script and progress on natural digital note conversion for Pocket PCs. In recent years, it feels like there has been less interest in natural writing on digital devices, although Apple brought some interest back with the Apple Pencil.

  • Pocket PC, with pen
  • Palm Pilot (handspring)
  • Samsung Galaxy, tablet with pen
  • iPad
  • live scribe
    • This was an interesting one, but the requirement of special paper, and needing to have a Bluetooth phone app running and syncing, caused too much friction.
  • Samsung Galaxy S21 Ultra, with pen

Why the Remarkable Works for Me

I basically use the Remarkable as an endless stream of notes. I have a primary notebook that I just keep adding pages to the end of, ever growing. I used to have multiple notebooks like work, personal, sketching, etc… Even the friction of having to switch from one notebook to another was too much, and I would break my habit of writing something down. I then back up and archive that file… If I need something in particular digitized or want to work more on some particular page (say a sketch of a graph), I export that to a single file. That is kind of it; my primary use case is just replacing the pen and notepad with an endless digital notepad. I love it because it really doesn’t work for much else than pen and paper replacement. It isn’t fancy, and many of the extra features haven’t been too useful to me; for example, other than being a neat trick, streaming it to my screen over Zoom as a whiteboard isn’t particularly engaging for anyone.

Example of My Notes Remarkable

An example of what my list of notes often looks like on the Remarkable

Alternative Usages

OK, yes, from time to time I do use the Remarkable for other things, but the only thing that is a daily habit is taking quick notes, most often during meetings, to serve as a reminder before moving the notes to better long-term and more collaborative storage.

  • sketching up a graph or chart to think through the design before spending more time on a detailed chart using real data
  • books: it is much easier for me to buy books on Kindle as I am hooked into that store, while the Remarkable requires finding non-DRM content (or hacking around DRM), and the lack of backlight limits usable reading locations/times.

Comics On The Remarkable

Look how great these look, seriously an amazing reading experience.

Black & White Comics Look Gorgeous on the Remarkable


These comics are very cool… but without color, most folks would prefer comics on an iPad. There are a lot of great older comics that were always black and white, which you can buy and put on the Remarkable. I bought most of mine from Comixology before Amazon bought them. I am not sure if they still support non-DRM comic formats that work on the Remarkable.

Long Form Articles are Easy on the Eyes

I also read long-form articles on it during the day with good lighting; I mostly read tech and blog entries this way to give my eyes a break from the computer screen. I use the highlighter while reading to mark sections of text to help me remember them or reference them later. I still use my Kindle at night for the backlight, and it is simpler to buy content on the Kindle, while it takes a bit of effort on the Remarkable. For long articles in the morning with my coffee, the Remarkable is a great fit.

Sketching

It is fun to sketch and draw while watching TV; it isn’t something I do a ton, but it can be a nice way to let the mind wander.

Doodling on the Remarkable

doodling on the Remarkable

A Future Full of Improvements in Digital Notes

Even though this product has found a place in my daily workflow, it doesn’t mean it has solved the problem… In fact, I am still chasing the same dream; now I just have something acceptable enough that I don’t entirely give up on it. A short list of things that need to be improved:

  • No backlight
  • Slow initial response: getting out of sleep/power-off and into notes is annoying, so at work I have it set up to basically always be awake, but it just doesn’t work for taking a quick note before bed… I just email myself from my phone instead
  • Handwriting conversion to digitized text is still pretty bad; I convert drawings, charts, etc… but otherwise things are just stored/archived as what are basically PDF drawings.
    • I still dream of being able to handwrite blog posts in markdown format
  • Article sync isn’t perfect; sending to the Remarkable strips out images and graphics… The PDF view is less easy on the eyes but keeps images as part of the file
  • The pen capture resolution is still notably less sharp and responsive than a real pen… My wife’s cursive, which is already hard to read with a good pen and paper, is particularly difficult because the Remarkable’s line is just not as fine or subtle as it could be.
  • Not every device should require a service; as with all software companies these days, Remarkable is now trying to have a monthly/yearly subscription as part of using the device. I am trying to find and remove all the subscriptions I can from my life. I don’t feel like I need to pay for the device and then again for each month I use it. Most of the cloud features are useless to me; I just need it to back up a file.
  • Color: I really love the e-ink experience, but it is still common that I don’t read on the device because articles have pictures, charts, or leverage color… I keep hearing e-ink color is just around the corner
  • A ridiculously good drawing environment. I want layers, colors, a smoothing curve line drawing mode… Like the best of Photoshop with e-ink
  • Higher resolution writing capture, use a really nice pen and sign your name and then compare to your remarkable signature… There is still a way to go
  • Give me the Web
    • Web browser… Yeah, I know, e-ink web browsing? Yes, I think it would be great for textually rich sites…
    • Email: I would love to do email on the Remarkable, if instant text digitization worked well

In the end, it is great and I am happy with it, but it is still kind of an expensive toy. I am sure most folks wouldn’t find it worth it and would be better off with a good phone or tablet. I know I have grown accustomed to it and would miss it if it broke and was no longer a tool I could reach for.

Grid view on the Remarkable

example of browsing multiple pages on the Remarkable

...



Dan Mayer
19 May 2022

NFTs, Art Investing, and Collectibles

I am still skeptical of crypto in general; I do own some coins for fun / investment, but consider it interesting and very risky. As far as NFTs built on top of the blockchain, I looked into them and decided that in their current form and iteration they aren’t compelling to me… I think there are some real concerns around the crypto ecosystem for now, and I am happy to watch and wait. Hearing all the swirl around NFTs and art investments did get me thinking a bit more about how I spend money on art and where I could perhaps do some more purposeful spending to support it.

Art As An Investment

Does anyone else have these ads following them around?

Art vs Market

NFTs brought art investment to the software investment market… and it keeps growing! Or so they say… as NFTs collapse back down.

I know there are a number of startups trying to “open up” the art market to collective investment, both for traditional art masterpieces and other digital markets, positioning it as diversification to get money out of the markets, move away from large cash stockpiles, etc… It’s all a bit much for me, but with NFTs and these types of non-traditional investments being marketed all over the internet, the ads did make me think: is there any art I would like that might be an investment? Should we always be treating art as something that needs to hold or increase value? I hope not, but if I am not really buying any art, I am not supporting artists either, so perhaps I should buy more art and pretend it is a small investment.

I have never really held art as an investment or followed the art market, but as a kid I was into various collectibles… baseball cards, comics, POGs (yes, I have hundreds of them), and Magic: The Gathering cards. I offloaded most of my collections over time but always held on to some toys, comics, and Magic cards… Turns out a few of my Magic cards have, in terms of percentage growth, beaten my index funds! I guess if I can consider collectibles pop art, I have some experience, and it feels a lot more fun to participate in than NFTs.

A Few Dual Lands

Oh wow, that box of Magic cards in my basement has a handful of dual lands and dozens of other older interesting cards.

I don’t really want to be in a world where all art is an investment, so while it might be fun to buy some pop art/collectibles that could increase in value, I also wanted to look at art that just makes me happy and can help support artists.

Comics

Comics? Seriously? Yeah, why not; with NFTs as art, it feels like folks are really looking to get into collectibles again. Along with the current state of blockbuster movies being mostly comics, the value of comic collectibles going up over time seems likely… Old nerds watch Marvel movies nostalgic for our childhood comic days; soon folks will be nostalgic for the early comic movies and the comics of the time that inspired them… A number of comic movies are a generation behind, like Spawn, The Maxx, and others that might be due for a movie revisit. Seems like some of the comics around those movie reboots will jump in value whenever that nostalgia hits.

Comics Framed

A few comics after I framed them

Theo with his comics

Theo smiling after we put up Iron Man (current favorite) on his wall

Also, if I am wrong, and I probably am, it isn’t a big investment, and it is cool pop culture art you can hang on your wall or gift to kids, cousins, and friends. I picked up a number of comics and got them framed for my kids and some friends’ kids. Instead of paying a ton of money in ETH for a digital NFT image that at the moment seems mostly like bragging rights, my kiddos have their favorite comic hero up on their wall.

Spider Man Spread

Spiderman before framing

Spider Man Framed

Spiderman comics after framing

Commissioned Art

There are lots of ways to commission custom art, and it isn’t always insanely expensive. After our dog passed, our family commissioned a piece to remember our dog Spot.

Spot Commission Drawing

Our dog Spot

After a small family reunion-type vacation, we had an artist take a group photo and turn it into a drawing that we sent to all the families…

Family Art

Supporting Small Artists

On sites like Etsy, you can find much smaller and more unique art that is possibly a better fit for your tastes than what you will find in a gallery or the more commercial art market. You can support smaller, more niche artists who have a quirky hobby or just haven’t found a large commercial following. Anyway, it is a nice way to support them.

Photographers

This is probably the most common way many non-arty folks help support artists… hiring a photographer. Wedding photos, new baby photos, etc… We have enjoyed the experience and the photos, so we have extended this practice by hiring photographers for larger family trips when cousins get together, major life milestones, etc… It is fun, and we get some great photos that end up on walls, Christmas cards, and so on.

Family Photo

A family photo during a vacation

Local Art Fairs

While I haven’t attended many of these recently, we are planning to hit up the First Friday Art Walks in Denver’s Art District on Santa Fe soon. I remember as a kid often visiting art fairs in Springfield, IL and collecting small drawings or paintings intended to be sold to kids for $3-5; these pieces proudly hung on my walls and inspired my own creations. I hope my kids enjoy and remember the experience of finding little pieces of art that speak to them. I even found a favorite artist, whose small photo-realistic pencil drawings of animals I would seek out every year in the kids’ art tent.

Frog Drawing

Pencil drawings I bought as a kid… I still have a gorilla, a dog, and my favorite, a frog

Floating Girl Painting

A painting my parents bought in Springfield that was handed down to me

Original art

There are various places to pick up original art. We like to purchase some art while traveling.

Vietnam Painting

A painting we bought when in Vietnam, shipped back, and then framed

While traveling, we met a local painter and loved many of her pieces, but the color scheme wouldn’t work for our house. She mentioned commissioning pieces with loose guidelines like color themes and taking it from there. We are actively considering commissioning a piece; it wouldn’t cost more than any of the paintings she already had for sale, which had standard prices based on canvas size.

Interested in just finding a custom piece online? If you have any favorite sites let me know.

  • SingulArt - reasonable prices for custom paintings (we haven’t purchased from here yet, but are currently considering a piece).
...



Dan Mayer
10 April 2022

Redis & Sidekiq

A collection of notes about working with Sidekiq and Redis. A previous post about Ruby & Redis briefly touched on some things, but I will get into more specifics in this post.

Redis and Background Jobs

A common usage of Redis for Rubyists is background jobs. Two popular libraries for jobs are Sidekiq and Resque. At this point, I highly recommend Sidekiq over Resque as it is more actively maintained and has more community support around it. I am not going to get into too many specifics of Sidekiq and Resque, but will talk a bit more about how they use Redis. There are always some gotchas when working with Redis; ask around and folks will often have a story about an incident caused by a keys, flushall, or flushdb command. Some of these commands are destructive, which is always something to be careful with, but they also have very slow performance characteristics. It is also critical to understand how some of the calls in Resque and Sidekiq scale with queue depth.
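
As a quick aside on those slow commands, here is a minimal sketch (the connection and key pattern are just placeholders) of preferring SCAN-based iteration over KEYS, which blocks Redis while it walks the whole keyspace:

require "redis"

redis = Redis.new(url: ENV["REDIS_URL"])

# Risky on a large production instance: KEYS is O(N) and blocks the server
# redis.keys("sessions:*")

# Safer: scan_each iterates in small batches using the SCAN command
redis.scan_each(match: "sessions:*") do |key|
  puts key
end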

An Incident Caused by our Sidekiq/Redis Observability

UPDATE: We no longer think the line called out below is the culprit; we observe latency growth and decline with queue size, but we are unsure of the cause and unable to reproduce it. As noted further down, the Sidekiq latency call is O(1+1) and therefore fast and predictable.

We got into trouble when moving from Resque to Sidekiq because our observability instrumentation was frequently making an O(S+N) call (Sidekiq’s queue latency). It wasn’t much of a problem until we had one of our common traffic spikes, which result in a briefly deep queue. Our previous Resque code didn’t have any issues and sent similar instrumentation to Datadog. While our Sidekiq code had been live for days, this behavior, where our processing speed decreased with queue depth, hadn’t been observed or noticed. The problem came to light when, on a weekend (of course), a small spike caused a background job backlog, which we expected as a common case. The latency went way up due to our instrumentation, and we started processing jobs slower than we enqueued them. This fairly quickly filled our entire Redis, leading to OOM errors.

Redis Sidekiq Analytics

Analytics Monitoring our Recovery

These charts are from after the incident. We moved to a new Redis to get things back up and running during the incident, and after things were under control, we worked on draining the old, full Redis in an isolated way that couldn’t impact production load. In this graph, you can see that as we reduce the queue size, the latency of our Redis calls also drops in step. I included CPU to show how hard we were taxing our Redis. This chart isn’t 1:1, as we were adding and removing workers and making some other tweaks, but the queue size -> latency relationship is a direct correlation.

Code Culprit

NOTE: Update: Mike responded that he doesn’t think the latency call is the issue, so we are investigating further.

UPDATE: We no longer think the line below is the culprit; we observe latency growth and decline with queue size, but we are unsure of the cause and unable to reproduce it. As seen in the NOTE above, the Sidekiq latency call is O(1+1) and therefore fast and predictable.

As mentioned, it wasn’t any of our normal code that was really the problem; it was this line, part of our instrumentation and observability tooling: Sidekiq::Queue.new(queue_name).latency. As with any incident, there were a ton of other related factors, but it is worth noting that this seemingly simple line could have hidden gotchas or an outsized impact on your Redis performance. That latency call scales linearly with queue size, as it calls Redis’s LRANGE under the hood, which is an O(S+N) operation.
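
For context, this is a sketch of the kind of periodic instrumentation loop involved, not our exact code; the metric names are made up, and it assumes dogstatsd-ruby and Sidekiq’s API are available:

require "sidekiq/api"
require "datadog/statsd"

statsd = Datadog::Statsd.new("localhost", 8125)

Sidekiq::Queue.all.each do |queue|
  # size uses LLEN, an O(1) call
  statsd.gauge("sidekiq.queue.size", queue.size, tags: ["queue:#{queue.name}"])
  # latency peeks at the oldest job in the queue via LRANGE
  statsd.gauge("sidekiq.queue.latency", queue.latency, tags: ["queue:#{queue.name}"])
end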

Sidekiq / Redis Performance

A colleague, @samsm, helped dig into this incident by putting together the queue size -> latency charts above as well as all the helpful tables I am sharing below. These show how Sidekiq calls translate into their underlying Redis operations and the operational costs.

Redis / Sidekiq Math

Doing the math on various Sidekiq operations: how much will they impact Redis?

Big O Complexity Notation 101

Big O Notation

Redis Big O

Sidekiq Operation Complexity

Redis Sidekiq Mapping

Additional Sidekiq / Redis Reading

Some additional reading if you want to dig in further on working with Sidekiq and Redis:

...



Dan Mayer
04 April 2022

CDNs

I am starting a series of self-reference materials on staples of web application delivery: a place to collect and document my learnings and understanding of various technologies I often reach for when designing systems. The first in this series was about Ruby and Redis, while this one will focus on Content Delivery Networks. These reference pages will be updated over time and evolve as my usage or focus changes.

Content Delivery Network (CDN)

A Content Delivery Network can help serve high-traffic or high-performance websites, as well as offer a number of features.

A content delivery network, or content distribution network (CDN), is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet[1][2] as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today… – wikipedia

Minimum of what you want your CDN to be doing for you

  • Smart routing: last-mile network distribution
  • Speedy established TLS: speed up your TLS handshakes
  • DDOS Protection: Cached pages are kind of already protected but many CDNs offer DDOS protection (you can also do this at your load balancer layer)
  • Serving assets: handling serving assets to avoid having static file load hit dynamic servers
  • Caching: at least assets, but even better for HTML / API content
  • Compression: gzip and brotli

CDN Setups

Often you might want different configurations and settings for different purposes and uses of CDNs. For example, it is fairly common to have an asset CDN with long-lived caches for assets, while you might want applications to be able to specify fine-grained caching headers for HTML and API content.
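
As a minimal sketch of what app-specified caching headers can look like in a Rails controller (assuming your CDN respects standard Cache-Control headers; the controller and TTL are just illustrative):

class ArticlesController < ApplicationController
  def show
    @article = Article.find(params[:id])
    # sends Cache-Control: public, max-age=300, letting the CDN cache this HTML briefly,
    # while the asset CDN can use far longer TTLs
    expires_in 5.minutes, public: true
  end
end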

Why You Should Have a CDN

It can help even if you aren’t quite ready to fully leverage it. CDNs have a lot of hidden advantages, and with some early setup they can very quickly be utilized to handle insane amounts of traffic, such as the initial experience of your largest advertising campaigns, even if all those new signups have to be pushed into a queue to be processed when you can get to them. As this tweet points out, even after folks think of all the obvious reasons to have a CDN, there are often many other clever ways to use one.

A Start Up Story

My first startup got a viral link that took it down, and by the time we got page caching on our Rails server, a few hours had passed and we lost the bump… For that first bit of traction, a CDN could have made all the difference; the content would have been easy to protect with the single viral page behind the CDN and appropriate cache headers. That startup never succeeded, but early on much time was wasted trying to improve app-layer caching and performance where a CDN could have been a big help.

CDN Gotchas

While CDNs are great, nothing is free, and every abstraction adds some complexity to your system. Understanding the value a CDN can provide, as well as some of the gotchas, can help your team decide if it is the right choice for the system.

  • Accidentally caching private pages/data!
  • caches including things like a set cookie (sessions, return_to, etc) that should be user-specific
  • CSRF, many traditional protections like the built-in Rails CSRF don’t work well with cached pages
  • difficulties with various security implementations like content-security-policy nonce implementations
  • Having different rules for cache keys and what information is sent to the origin
  • you may only cache on specific cookies and headers… but you might want your origin to receive all headers to help with debugging or other info; this is an additional mental load when understanding a request-response cycle

Understand the reliability risk and how to mitigate it

I have written previously about request depth and availability. A CDN is an additional layer in the request depth of your application stack… It also only has a 99.9% SLA on AWS; as the entry point to your systems, that becomes the upper limit on your overall SLA, pretending for a moment everything else is 100% reliable. Now, AWS and most CDNs actually have far better real-world uptime and success rates during normal operations, but all of the major CDNs have had notable outages, including CloudFront having a major outage the day before Thanksgiving. There are a couple of protective measures a team can take if it really needs to ensure higher availability of its site; neither suggestion is cheap.

  • A team can implement and support High Availability Origin Failover as a protective layer within Cloudfront, to protect against origin-level failures.
  • A team can implement multi-CDN DNS failover. This can be done in an automated or manual fashion depending on complexity and cost concerns.

CDNs vs Load Balancers

These days CDNs can do a lot of what load balancers do, by routing different types of requests to different back-end “origin” servers, and by geo-targeted routing to support locales and the nearest edge servers… While this overlaps with load balancers, I generally end up with a configuration where I have both a CDN and load balancers in place. My setup often looks like so: web browser -> CDN -> ALB -> Application Servers.

CDNs Enhancing Request Payloads

As part of the ability to route different requests, CDNs now often handle basic IP address geolocation, allowing one to route requests to different origins based on country, language, city, postal code, or more. Even if your application doesn’t use different origins, the geolocation information is often extremely useful, and you can avoid additional network calls or 3rd-party integrations by leveraging the CDN’s built-in geolocation support. For example, AWS Cloudfront can provide geolocation info on all requests to the CDN, both to impact caching logic and as header hints to your application servers. It is helpful to use this data to detect if a user might be accessing your site on the wrong domain given the country they are in. Example additional header data:

CloudFront-Viewer-Country-Name: United States
CloudFront-Viewer-Country-Region: MI
CloudFront-Viewer-Country-Region-Name: Michigan
CloudFront-Viewer-City: Ann Arbor
CloudFront-Viewer-Postal-Code: 48105
CloudFront-Viewer-Time-Zone: America/Detroit
CloudFront-Viewer-Latitude: 42.30680
CloudFront-Viewer-Longitude: -83.70590
CloudFront-Viewer-Metro-Code: 505

Note: inside a Ruby app, Rails/Rack will reformat the headers, adding an HTTP_ prefix and upcasing them, so you could access this data in a typical Rails app like so: request.headers["HTTP_CLOUDFRONT_VIEWER_CITY"].
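
For example, here is a hedged sketch of using the viewer country header to nudge users toward the right country domain; the domain names and message are hypothetical:

class ApplicationController < ActionController::Base
  before_action :check_country_domain

  private

  def check_country_domain
    country = request.headers["HTTP_CLOUDFRONT_VIEWER_COUNTRY_NAME"]
    return if country.blank?

    # hypothetical rule: Canadian visitors landing on the .com site get a hint
    if country == "Canada" && request.host != "example.ca"
      flash.now[:notice] = "It looks like you are in Canada, did you mean example.ca?"
    end
  end
end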

CDNs Are Evolving

This post covers more traditional CDNs and features common consumer-facing sites should consider leveraging. CDNs are now moving into the realm of cloud infrastructure, with Lambda@Edge and other CDN offerings allowing code (most commonly Javascript) to be deployed to the CDN network. Having fully supported runtime code at the edge opens up new architecture and application stack options. I am looking at FaaS and edge-deployed code, but haven’t leveraged it in any significant production environment yet. It is definitely a space to keep an eye on that promises to simplify global app distribution while maintaining extremely performant consumer experiences.

Additional CDN Links

...



Dan Mayer
26 March 2022

Ruby & Redis

A collection of notes and some tips about using Redis.

Redis Setup

Redis is super easy to set up, and in dev mode it often just works right out of the box, but as you leverage and scale it in production, you might want to think more about its setup beyond just setting a default REDIS_URL ENV var. Often a basic Redis for a simple product is just set up like so…

Redis.current = Redis.new(url: ENV['REDIS_URL'])

This has some issues: it relies on default timeouts and reconnect behavior, and it leans on the now-deprecated Redis.current global, a single connection shared across threads.

A better setup adds configurable options:

Redis.new(
  url: ENV['REDIS_URL'],
  # ENV values come in as strings, so cast them to numbers
  timeout: ENV.fetch("REDIS_TIMEOUT", 1).to_f,
  reconnect_attempts: ENV.fetch("REDIS_RECONNECT_ATTEMPTS", 3).to_i,
  reconnect_delay: ENV.fetch("REDIS_RECONNECT_DELAY", 0.5).to_f,
  reconnect_delay_max: ENV.fetch("REDIS_RECONNECT_DELAY_MAX", 5).to_f
)

If you want to configure a Redis connection and use it across threads, using a Redis connection pool is recommended.

# uses the connection_pool gem
pool_size = ENV.fetch("RAILS_MAX_THREADS", 10).to_i
redis_pool = ConnectionPool.new(size: pool_size) do
  Redis.new(
    url: ENV['REDIS_URL'],
    # ENV values come in as strings, so cast them to numbers
    timeout: ENV.fetch("REDIS_TIMEOUT", 1).to_f,
    reconnect_attempts: ENV.fetch("REDIS_RECONNECT_ATTEMPTS", 3).to_i,
    reconnect_delay: ENV.fetch("REDIS_RECONNECT_DELAY", 0.5).to_f,
    reconnect_delay_max: ENV.fetch("REDIS_RECONNECT_DELAY_MAX", 5).to_f
  )
end

Although this means that when using it, you need to check out a connection from the pool first:

# original style, which is deprecated and would block across threads
Redis.current.get("some_key")

# utilizing a pool
redis_pool.with do |conn|
  conn.get("some_key")
end

Thanks @ericactripp for sharing the link about connection pools.

Redis in Common Libraries

All of the above helps when you are working with Redis directly, but often we are configuring common libraries with Redis. How many of them are able to leverage the same kinds of benefits, like a connection pool?

# common config that won't leverage a redis connection pool
config.cache_store = :redis_cache_store, {
  url: ENV["REDIS_URL"],
}

# by setting the pool size and timeout, you can leverage a connection pool with your Redis
config.cache_store = :redis_cache_store, {
  url: ENV["REDIS_URL"],
  pool_size: 8,
  pool_timeout: 6
}
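
Once configured, normal cache usage goes through the pooled Redis store; the key, TTL, and Article model below are just placeholders:

Rails.cache.fetch("popular_articles", expires_in: 10.minutes) do
  Article.order(views: :desc).limit(10).to_a
end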

Investigate Combining Redis Calls

If you have an app that is making many sequential Redis calls, there is a good chance you could make a significant improvement by leveraging Redis pipelining or MGET. I think the Flipper codebase is a great way to learn and see various Ruby techniques; it is high quality and widely adopted, so you can trust it has been put through its paces. If you want to dig into combining calls, read about the differences between pipeline and mget in terms of code and latency.
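
A minimal sketch of the difference, assuming a redis-rb connection and placeholder keys:

keys = ["feature:a", "feature:b", "feature:c"]

# N sequential round trips to Redis
values = keys.map { |key| redis.get(key) }

# one round trip using MGET (one command, many keys)
values = redis.mget(*keys)

# one round trip using pipelining (many commands batched together)
values = redis.pipelined do |pipeline|
  keys.each { |key| pipeline.get(key) }
end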

Pipeline Syntax Change

As long as we are updating some of our calls, it is worth being aware of another deprecation: “Pipelining commands on a Redis instance is deprecated and will be removed in Redis 5.0.0.”

redis.pipelined do
  redis.get("key")
end

# should be replaced by

redis.pipelined do |pipeline|
  pipeline.get("key")
end

Redis Usage Beyond The Basics

If you are looking to do a bit more than the default Rails.cache capabilities with Redis, you will find it supports a lot of powerful features and, along with pipelining, can be extremely performant. If you want to push performance as far as you can, set up hiredis-rb as the connection driver for redis-rb; it uses C extensions to be as fast as possible. This post goes into some details on where direct caching with Redis can provide more powerful capabilities than using Rails.cache.
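
A minimal sketch of switching the driver, assuming redis-rb 4.x and the hiredis gem in your Gemfile (gem "hiredis"):

require "redis"

# the :hiredis driver uses the C extension for parsing replies
redis = Redis.new(url: ENV["REDIS_URL"], driver: :hiredis)
redis.ping # => "PONG"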

Redis CLI

A few useful tips around using Redis and understanding how your application is using Redis.

  • brew install redis: install redis via homebrew
  • brew uninstall redis: uninstall redis via homebrew
  • brew info redis: Get info on currently installed redis
  • redis-cli ping: Check if redis service is running
  • redis-cli monitor: stream every command the server receives in real time (use with care on a busy production instance)
  • redis-cli slowlog get 100: show the 100 most recent entries in the slow query log

Exciting Things Are Happening with Ruby Redis

A good deal is changing in redis-rb 5.0; we mentioned the Redis.current and redis.pipelined changes above. These and other changes help support a move to a simpler and faster redis-client under the hood.

A move to simplify the redis-rb codebase and drop a mutex looks like it will roll out redis-client, which can significantly speed up some use cases. It looks like Sidekiq, for example, will move to this in the near future.

Update: Looks like that perf win was a bit too good to be true.

...


