99 MyFitnessPal customers want a classic-look toggle back

99 of 1,981 MyFitnessPal App Store reviews ask for the same thing - a way back to the old interface. It's the strongest single signal in a post-redesign review pile.

We pulled 1,981 MyFitnessPal App Store reviews from the post-redesign window into a dashboard at app.sunbeam.cx/public/nnlf47vs. One finding stands out: 99 reviews contain the same specific request - restore the old interface, or add a customisation option to choose between old and new, sometimes called a "classic look" toggle.

That's the largest single suggestion in the dataset.

Reading a post-redesign review pile is hard

Every consumer app team eventually ships a redesign. The week after launch is the most concentrated window of customer reaction your product will ever generate. Hundreds, sometimes thousands of reviews land in days, mostly negative because people who liked the old version had no reason to write before and now do.

Reading them manually doesn't scale. Running a post-launch survey gives you what you asked about, not what customers thought. Listening to the loudest five reviewers gets you a vocal-minority bias that probably isn't representative.

A better approach is to read all of them, and look for what customers consistently and specifically asked for. The pattern that survives the noise is the pattern worth acting on.
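The counting step described above can be sketched in a few lines of Python. This is a minimal, illustrative version: the phrase lists below are hypothetical stand-ins, not the actual taxonomy behind the dashboard.

```python
# Count how many reviews contain each specific request.
# REQUEST_PHRASES is illustrative, not the dashboard's real taxonomy.
from collections import Counter

REQUEST_PHRASES = {
    "classic-look toggle": ["old interface", "previous version", "classic look"],
    "full-day diary view": ["entire day", "one screen"],
    "food lookup regression": ["lookup", "search is slow"],
}

def count_requests(reviews: list[str]) -> Counter:
    """One vote per review for each request it mentions."""
    counts: Counter = Counter()
    for review in reviews:
        text = review.lower()
        for request, phrases in REQUEST_PHRASES.items():
            if any(p in text for p in phrases):
                counts[request] += 1
    return counts

reviews = [
    "Please bring back the old interface, or at least a classic look option.",
    "I want to see the entire day on one screen again.",
    "The old interface was so much better.",
]
print(count_requests(reviews))
```

The point of counting per review rather than per mention is that one angry customer repeating a phrase ten times still counts once - which is what keeps the vocal minority from dominating the tally.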

The nutrition logic still works

Before we get to the asks, the data has a useful contour worth noting.

MyFitnessPal's core nutrition surfaces still get genuine love. Calorie tracking and macro tracking both show net-positive sentiment, and a recurring theme in reviews from long-term users (people who joined in 2011 or 2012) is crediting the app with their weight loss. 45 of the 1,981 reviews include some version of "I've used this for years and it's helped me." That's a meaningful loyalty cohort.

The regression is concentrated in the navigation layer wrapping the nutrition logic. Diary navigation, the redesigned home screen, the layout of the meal-tracking flow - these are where the dataset's negative sentiment piles up. The split matters for two reasons. First, it tells the team what's actually broken (navigation, not the underlying tracking). Second, it gives a defensible internal narrative that the value props are intact, the wrapper isn't.

The timing is concrete. App Updates spans (the umbrella topic covering recent-update complaints) ranged between 4 and 86 per fortnight from February through March 2026. In the fortnight of 2026-04-13 to 2026-04-27, that volume jumped to 597 spans; the next fortnight saw 431. The redesign landed in mid-April; the dashboard captures roughly one month of post-launch reaction.
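The fortnight figures above come down to bucketing dated spans into 14-day windows. A minimal sketch, assuming each span carries a review date (the anchor date and example dates here are illustrative, not the dashboard's pipeline):

```python
# Bucket dated spans into 14-day windows counted from an anchor date.
from collections import Counter
from datetime import date, timedelta

def fortnight_start(d: date, anchor: date) -> date:
    """Snap a date to the start of its 14-day bucket, counted from `anchor`."""
    offset = (d - anchor).days // 14
    return anchor + timedelta(days=offset * 14)

def spans_per_fortnight(span_dates: list[date], anchor: date) -> Counter:
    return Counter(fortnight_start(d, anchor) for d in span_dates)

# Assumed example: anchor on the 2026-04-13 fortnight boundary from the text.
anchor = date(2026, 4, 13)
dates = [date(2026, 4, 14), date(2026, 4, 20), date(2026, 5, 1)]
print(spans_per_fortnight(dates, anchor))
```

Plotting those bucket counts over time is what makes the mid-April step change visible at a glance.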

What 99 customers asked for

In the dataset, two adjacent things are happening. There's a topic called "Strong preference for the previous app version" - 88 reviews where customers fondly recall the old version, most of them written warmly rather than angrily. Sitting alongside it, 99 reviews contain a specific, actionable request: restore the old interface, or add a customisation option to choose between old and new. Some examples:

"At least give us a customisation option to have the classic look."
"I'd do anything to go back to the previous version."
"Why take away the diary? Please put this app right back the way it was."

The next-largest specific request comes from 75 customers asking to "see all foods for the entire day on one screen without clicking into each meal" - the diary-collapse change. After that, 64 customers call out the food lookup function as slower or broken. Then 61 customers name specific failing integrations: Garmin, Strava, MapMyWalk, Trainerize, Fitbit, Samsung Health, Google Fit, smart scales.

These are different shapes of feedback. Some are reversible product decisions (the layout). Some are explicit feature regressions (the lookup). Some are integration breakages (the partner syncs). A team triaging this dataset can put each request into a different lane: ship a toggle, fix the regression, debug the integrations.
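That lane assignment can be sketched as a simple rule pass. The lane names come from the paragraph above; the keyword rules and the fallback are hypothetical, and in practice anything the rules can't place should go to a human reader:

```python
# Assign each request to a triage lane via keyword rules.
# Lane names follow the text; keywords are illustrative assumptions.
LANES = {
    "ship a toggle": ["old interface", "classic look", "previous version"],
    "fix the regression": ["lookup", "search", "slow"],
    "debug the integrations": ["garmin", "strava", "fitbit", "sync"],
}

def triage(request: str) -> str:
    text = request.lower()
    for lane, keywords in LANES.items():
        if any(k in text for k in keywords):
            return lane
    return "needs manual read"  # unmatched requests go to a human

print(triage("Bring back the classic look"))
print(triage("Garmin stopped syncing"))
```

The value of the lanes isn't the classifier - it's that each lane has a different owner and timeline, so the backlog stops being one undifferentiated pile of anger.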

What this means for redesign post-mortems

The lesson generalises beyond MyFitnessPal. If you're sitting on a post-redesign review pile, the question isn't "what are people angry about?" That's the easy question and the answer is unhelpfully broad ("everything"). The useful question is "what specific change are they asking for, and how many customers asked for it?"

Sentiment scores tell you the temperature. They don't tell you the action. A topic with 70% negative sentiment that nobody has a specific fix for is harder to act on than a topic with 50% negative sentiment where 99 customers all named the same fix.

Most analytics tools optimise for the temperature reading, which is why teams running them end up with dashboards full of red without a clear roadmap.

Counting requests separates structural objections (which are roadmap decisions) from reversible product changes (which are revert decisions) from bugs (which are fixes). Different lanes, different owners, different timelines.

Most companies don't do this because reading 1,981 reviews manually is roughly two days of work. The dashboard at app.sunbeam.cx/public/nnlf47vs was built in a couple of hours from the public reviews. We didn't need access to MyFitnessPal's data, didn't need an integration, didn't need to ask anyone. The reviews are public; the analysis is what we do.

Run this on your own app

If you've shipped a redesign in the last 90 days, or if you're about to, the same analysis takes minutes to set up against your own App Store reviews. Try it at sunbeam.cx/try.