What the Actual Research Says About Airbnb Listing Optimization (and What It Doesn't)

We went looking for peer-reviewed evidence behind the advice STR operators hear every day. Here is what we found, what we couldn't verify, and why the most-cited study in Airbnb content might not mean what you think it means.

Written by Parker Place

April 8, 2026

Every short-term rental blog on the internet will tell you the same things. Professional photos add $2,500 a year. Your cover photo moves bookings by 25 to 30 percent. There are thirteen ranking factors. Wishlist saves matter.

We went looking for where any of that actually comes from.

Some of it is real. Some of it is one unverifiable operator study that got repeated until it became common knowledge. And the single most rigorous academic paper on the topic is over a decade old, studies one city, and measures something subtly different from what the blogs claim it measures.

This post is the honest version of what the research says. We are writing it because we got tired of citing numbers we could not trace, and because we think operators deserve to know which claims hold up and which ones do not.

The paper nobody in STR Twitter has read

The most methodologically serious study of Airbnb listing performance we could find is a multilevel hedonic analysis out of McGill's tourism research platform. It uses 386,153 listing-month observations from New York City between 2014 and 2016, which is a real dataset and a real regression, not a blog post with a bar chart.

The headline finding most operators have never heard: about 79.8 percent of variance in nightly price is explained by listing-level characteristics (the unit itself), but only 32.1 percent of variance in revenue is.

Sit with that for a second. It means the house you buy largely determines what you can charge per night. It does not determine what you will actually earn. Roughly two-thirds of revenue variance lives somewhere else: positioning, copy, photos, reviews, pricing strategy, operational quality, the stuff you can actually change after closing.

That is the entire case for listing optimization being worth doing, and it comes from the one paper nobody cites.
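
If it helps to see the shape of that claim, here is a minimal sketch of the variance-decomposition idea. Every number below is invented, and the column names are hypothetical; the McGill paper uses a multilevel specification on real listing-month data, not this plain OLS. Only the exercise matches: regress price and revenue on unit traits, then compare how much variance each regression explains.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated listing-month panel (made-up coefficients and noise levels).
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "bedrooms": rng.integers(1, 6, n),
    "bathrooms": rng.integers(1, 4, n),
})
# Price is mostly the unit; revenue layers big operational noise on top.
df["log_price"] = 4 + 0.20 * df["bedrooms"] + 0.10 * df["bathrooms"] + rng.normal(0, 0.15, n)
df["log_revenue"] = df["log_price"] + np.log(30) + rng.normal(0, 0.40, n)

traits = "bedrooms + bathrooms"
print(smf.ols(f"log_price ~ {traits}", df).fit().rsquared)    # high, around 0.8
print(smf.ols(f"log_revenue ~ {traits}", df).fit().rsquared)  # much lower, around 0.3
```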

Per-unit coefficients, and why they confused us

The McGill paper also reports per-unit marginal coefficients for things like bedroom count and bathroom count, holding everything else constant. When we first read this, one number jumped out: adding a bathroom looked like it increased revenue slightly more than adding a bedroom (+14.8 percent versus +11.0 percent per unit).

We almost shipped that as a finding. Then we checked it against our own data and realized we would have been wrong in a way that matters.

The McGill coefficient answers a very specific question: if you took the exact same listing and added one bathroom, what would happen? That is not the question an investor is asking. An investor is asking whether a 5BR earns more than a 3BR in the same market, which is a totally different calculation that sweeps in square footage, lot size, group capacity, ADR tier, management sophistication, pool presence, and about twenty other things correlated with bedroom count.

Both things can be true. Per-unit, bathrooms are competitive with bedrooms in a 2016 NYC apartment sample. In aggregate, across every market we track, bedroom count dominates total revenue by a wide margin because everything else scales with it.
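
A toy simulation makes the distinction concrete. Everything below is invented; the point is only that when something correlated with bedrooms (square footage, here) is left out of the regression, the bedroom coefficient absorbs it. That absorption is exactly the gap between "per-unit, all else equal" and "in aggregate."

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
bedrooms = rng.integers(1, 7, n)
# Things that travel with bedroom count in real data: sqft, capacity, pools.
sqft = 400 * bedrooms + rng.normal(0, 150, n)
log_revenue = 0.10 * bedrooms + 0.0005 * sqft + rng.normal(0, 0.3, n)

naive = sm.OLS(log_revenue, sm.add_constant(bedrooms.astype(float))).fit()
controlled = sm.OLS(log_revenue, sm.add_constant(np.column_stack([bedrooms, sqft]))).fit()

print(naive.params[1])       # ~0.30: bedrooms absorb everything that scales with them
print(controlled.params[1])  # ~0.10: the true per-unit, all-else-equal effect
```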

The lesson is not about bathrooms. The lesson is that regression coefficients from one paper do not transfer cleanly into operator intuition, and when they conflict you should trust the bigger, more recent, more relevant dataset. Which is awkward to say about a peer-reviewed paper, but it is also true.

What Airbnb's own engineers have published

Here is something most STR advice skips entirely: Airbnb's search ranking team publishes papers. Real ones. Peer-reviewed, at real conferences, with A/B test results from production traffic.

A 2023 paper from Airbnb's search team, Learning To Rank Diversely, describes how the ranking model was changed from a listing-level relevance score to a query-level objective that explicitly rewards diversity of results. The A/B test showed a meaningful bookings lift when they stopped ranking each listing independently and started ranking the result set as a whole. In plain English: if your listing looks too similar to everything else on the results page, the ranker may downrank you even if you are individually strong.
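
To get a feel for what a set-level objective means, here is a generic greedy diversity re-rank, the classic maximal-marginal-relevance pattern. This is our illustration of the idea, not Airbnb's production model, and every number in it is made up.

```python
import numpy as np

def rerank_with_diversity(relevance, embeddings, k, lam=0.7):
    """Greedy set-level re-rank: each candidate is scored by its own
    relevance minus its similarity to listings already placed on the page."""
    chosen, remaining = [], list(range(len(relevance)))
    while remaining and len(chosen) < k:
        def score(i):
            if not chosen:
                return relevance[i]
            max_sim = max(float(embeddings[i] @ embeddings[j]) for j in chosen)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Two near-duplicate strong listings and one distinct but weaker listing:
rel = np.array([0.95, 0.94, 0.80])
emb = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
print(rerank_with_diversity(rel, emb, k=2))  # [0, 2]: the near-duplicate loses its slot
```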

A 2025 follow-up, Beyond Pairwise Learning-to-Rank, extends the loss function beyond pairwise comparisons and reports further production gains. A separate paper, Learning to Rank for Maps at Airbnb, notes that map-based search behaves differently from list-based search and has its own ranking objective. And Airbnb's embedding-based retrieval work describes a two-tower neural network trained on booking signals, where wishlist-saves that never convert to bookings are treated as a weak or negative signal, not a positive one.
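
The two-tower pattern itself is generic and easy to sketch. Below is our minimal version of the shape, with invented dimensions and hypothetical label weights; the only detail taken from Airbnb's paper is the direction of the wishlist signal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """One side of a two-tower retrieval model: a small MLP to an embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)

query_tower, listing_tower = Tower(in_dim=20), Tower(in_dim=40)

# Hypothetical per-interaction training weights mirroring the paper's point:
# bookings are strong positives, saves that convert are positives, and saves
# that never convert are weak-to-negative rather than wins.
WEIGHTS = {"booked": 1.0, "saved_then_booked": 0.5, "saved_no_booking": -0.1}

queries, listings = torch.randn(8, 20), torch.randn(8, 40)
affinity = (query_tower(queries) * listing_tower(listings)).sum(dim=-1)
print(affinity.shape)  # one score per (query, listing) pair in the batch
```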

Two things to pull out of all that.

First, the popular "thirteen ranking factors" list that circulates in STR content (originally popularized by Daniel Rusteen at OptimizeMyBnb, and the basis for most of the practitioner advice on this topic) is, at best, a rough mental model of signal categories. Airbnb's actual ranker is a learned neural model with hundreds of features, retrained continuously, with different objectives for map search versus list search. The thirteen-factor framing is not wrong exactly. It is a useful teaching tool for operators who are not going to read neural ranking papers. It is just less precise than it sounds.

Second, "get wishlist saves" as advice is more complicated than the blogs say. If a user wishlists your listing and then books something else, Airbnb's own embedding model may be using that as a signal that you are a close-but-not-quite substitute for what the user actually wanted. Wishlist-to-book is the signal that helps you. Wishlist-without-book might not be.

The photo study that actually controls for something

The most careful study of listing photos we could find analyzed roughly half a million Airbnb photos using deep learning to score both aesthetic and content features. The authors controlled for listing, host, and market characteristics, and then asked which photo attributes predicted booking performance.

The two findings that matter:

Content beats aesthetic. What is in the photo matters more than how the photo is shot. A well-composed photo of the wrong subject underperformed a worse-composed photo of the right subject.

Bedroom cover photos decreased bookings. Listings with a bedroom as the first (cover) photo booked less than otherwise comparable listings. Living rooms and other common spaces as covers performed best. This is counterintuitive and almost never mentioned in STR advice.

There is a useful caveat. This study does not directly test whether "pool hero for warm-weather markets" beats "living room hero for warm-weather markets," which is the question most operators actually want answered. It tests bedroom versus non-bedroom. But the directional finding (the cover photo should not be the bedroom) is about as well established as anything in this space.
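
The general shape of that test is easy to picture even without their half-million photos. Below is a toy version with invented columns and random data: regress a booking outcome on photo attributes with market fixed effects standing in for the study's listing, host, and market controls. In our random data the coefficient is near zero; in the study's real data, the bedroom-cover coefficient came out negative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2_000
df = pd.DataFrame({
    "bookings": rng.poisson(10, n),
    "cover_is_bedroom": rng.integers(0, 2, n),
    "aesthetic_score": rng.normal(0, 1, n),
    "market": rng.choice(["austin", "miami", "denver"], n),
})
# Photo attributes plus market fixed effects as controls.
fit = smf.poisson("bookings ~ cover_is_bedroom + aesthetic_score + C(market)", df).fit(disp=0)
print(fit.params["cover_is_bedroom"])
```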

What the text of your listing does

A 2021 paper used topic modeling on Airbnb description text (Latent Dirichlet Allocation and Structural Topic Models) to identify which language patterns correlated with booking performance across thousands of listings.
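
If you have never seen topic modeling, the mechanics are simple to sketch. Here is a minimal LDA example on four invented descriptions (scikit-learn, two topics); the paper's actual pipeline is far larger and uses structural topic models as well.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

descriptions = [
    "luxurious stunning retreat with world-class amenities and stunning views",
    "we lived on this block for eight years; the bakery two doors down opens at 6",
    "spacious luxurious suite with world-class finishes and a stunning terrace",
    "the farmers market is a five minute walk; ask us for trail recommendations",
]
counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)  # per-listing topic mix, which the paper
                                    # then relates to booking performance
print(doc_topics.round(2))
```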

The two patterns worth stealing:

Descriptions that read like hotel marketing ("luxurious," "stunning," "world-class amenities," generic superlatives) underperformed. Descriptions that positioned the host as a credible local guide ("we lived on this block for eight years," "the bakery two doors down opens at 6") outperformed.

This is not surprising if you have read any guest reviews. Guests consistently mention specific, ground-truth details in five-star reviews and vague generic language in three-star ones. The paper just confirmed it across a large enough sample to publish.

What we cannot find in the literature is a clean test of the Rusteen-style conventions specifically: pipe-separated amenity titles, 500-character summaries with review quotes, wishlist CTAs, bracket-notation photo captions. Those are practitioner consensus from OptimizeMyBnb and Rusteen's book Profitable Properties, not peer-reviewed findings. They may well work. We have never seen a controlled test of any of them.

The Rankbreeze study

Now for the part that bothered us the most.

The numbers you see everywhere in STR content (professional photos add $2,521 per year, cover photo optimization shifts booking rate by 25 to 30 percent, 24 to 30 photos is the magic range) get attributed to "a Rankbreeze study." Rankbreeze is a paid Airbnb ranking tool, and the study is used in their marketing.

We tried to find the actual study. Not the blog summaries, not the infographic, the actual methodology document. Sample size, control group, statistical tests, anything that would let us evaluate whether the numbers mean what they are being used to mean.

We could not find it. The Rankbreeze site does not publish methodology. The numbers propagate across Hostfully, Hostaway, iGMS, and AirDNA blogs, always attributed to Rankbreeze, never with a direct link to a primary source.

We are not saying Rankbreeze made the numbers up. Operators often have excellent internal data and see booking outcomes academics never see. But there is a specific problem with any operator-published photo study that Rankbreeze almost certainly does not address:

Selection bias. Hosts who hire professional photographers are also hosts who invest in cleaning, respond faster to messages, write better descriptions, price more dynamically, and generally run better operations. If you compare "listings with pro photos" against "listings without pro photos" in observational data, you are not measuring the photos. You are measuring the kind of operator who buys pro photos. Separating those two things requires either randomization (which almost nobody does on a real Airbnb listing because it is expensive and weird) or an instrumental variable, and neither appears in any public Rankbreeze material we could find.
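
The bias is easy to demonstrate with a few lines of simulation. Every number below is invented; the point is that a naive with-photos versus without-photos comparison recovers a lift several times larger than the true causal effect we built in.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
operator_quality = rng.normal(0, 1, n)
# Better operators are far more likely to buy pro photos...
pro_photos = (operator_quality + rng.normal(0, 1, n)) > 0.5
# ...and the true causal lift from the photos alone is only +3% here:
revenue = 100 * np.exp(0.20 * operator_quality + 0.03 * pro_photos + rng.normal(0, 0.1, n))

naive_lift = revenue[pro_photos].mean() / revenue[~pro_photos].mean() - 1
print(f"naive lift: {naive_lift:.0%}")  # far above the true 3%
```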

We are keeping the directional claim. Professional photos very likely do improve bookings. The directionality is consistent with every academic paper that has looked at the question. But the specific dollar figure, the specific percentage lift, the specific 24-to-30 photo range, these should be treated as marketing material until somebody publishes the methodology behind them.

If anyone at Rankbreeze reads this and can point us at the actual study, we will update this post immediately and apologize for the skepticism. The offer is genuine.

The collage question

One of the oldest debates in STR marketing: single hero photo, or a multi-panel collage showing the property, the amenities, and the brand in one image?

It is mostly a settled question, and most operators have not updated their priors.

Our intuition first: Airbnb is a design-first company. From first principles, why would it reward a barely readable collage? Roughly 80 percent of its users are on a mobile device, and four to six images stuffed into one frame look crowded and ugly there. Open a collage cover on your phone next to your computer and see for yourself.

Four reasons the collage lost.

Platform policy. Airbnb's photo guidelines explicitly say each image must stand alone and tell hosts to "avoid collages or composite images." VRBO lists composite images as a rejected photo type that its automated system flags. On the two platforms where the listing itself lives, the question is not open.

Instagram killed the collage post around 2017. When carousels launched, the algorithm began rewarding swipe-through engagement that a static collage cannot generate. The format has been replaced by multi-slide carousels, short video, and single hero shots. Scroll any STR brand's feed and the trend is obvious.

Mobile rendering. A four-panel collage on a 390-pixel-wide phone screen gives each panel roughly 180 pixels of width, and half the height. That is smaller than the Airbnb search thumbnail the panel is supposed to be competing with. Every element (logo, text, accent line, hero) gets compressed below legibility on the device where most browsing actually happens.

Airbnb already does the collage for you. The listing page displays a grid of your first five photos automatically. Building one yourself is doing work the platform has already solved, and doing it worse because you are compressing into one frame what the platform is already showing across five.

The debate resolves to a single question: where is this asset going to live? If it has to earn a click, use a single strong photo. If it is going on Instagram, use a carousel. The skill of STR marketing is matching the asset to the channel, not picking a universal winner.

What we are doing differently now

We run BNBCalc, which means we have our own data. Millions of listings across 2,369 markets in the US and internationally, with historical revenue, occupancy, ADR, and booking outcomes.

For questions the academic literature cannot answer cleanly (does pipe-separated title formatting beat sentence case? Does adding a review quote to the summary lift bookings? Does a cover photo swap at month N correlate with a revenue lift at month N+1?), we are starting to run the analyses on our own data rather than cite things we cannot verify.
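
As one example of the shape these checks take, here is a toy version of the cover-photo question on an invented six-row panel. The column names are hypothetical, not our actual schema.

```python
import pandas as pd

panel = pd.DataFrame({
    "listing_id":    [1, 1, 1, 2, 2, 2],
    "month":         [1, 2, 3, 1, 2, 3],
    "revenue":       [3000, 3100, 3600, 2800, 2750, 2790],
    "cover_changed": [0, 1, 0, 0, 0, 0],
})
panel = panel.sort_values(["listing_id", "month"])
panel["lift_next_month"] = panel.groupby("listing_id")["revenue"].shift(-1) / panel["revenue"] - 1
print(panel.groupby("cover_changed")["lift_next_month"].mean())
# Even a clean gap here is only a correlation: hosts who swap covers often
# reprice, re-shoot, and rewrite in the same month.
```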

The short version

If you skipped to the bottom, here are the load-bearing claims from peer-reviewed research:

  • About 68 percent of revenue variance is not explained by listing-level characteristics alone, which is the entire case for listing optimization (McGill hedonic analysis).
  • The cover photo should not be a bedroom, and photo content matters more than photo aesthetics (510K photo deep learning study).
  • Descriptions that read like hotel marketing underperform descriptions that read like a knowledgeable local (topic modeling of Airbnb narratives).
  • Airbnb's actual ranker is a learned neural model with hundreds of features and separate objectives for map versus list search, not a fixed list of thirteen factors (Learning To Rank Diversely, Beyond Pairwise LTR, Learning to Rank for Maps).
  • Wishlist saves that never convert to bookings may be a neutral or negative signal in Airbnb's retrieval model, not a positive one (Embedding-Based Retrieval).
  • The "$2,521 from photos" number does not appear to have public methodology behind it and should be treated accordingly.
  • Do not use a collage. Airbnb specifically advises against it, and it genuinely looks terrible on mobile, which is the primary device for browsing and booking listings.

Everything else you have read, including most of what we have written in our own previous content, is practitioner consensus. It might be right. It is not, at this point, proven.
