
Reducing our monorepo size to improve developer velocity


At Dropbox, almost every product change flows through a single place: our server monorepo. A monorepo is a single, shared Git repository that contains many services and libraries used across the company. Instead of splitting code across dozens of smaller repositories, we keep a large portion of our backend infrastructure in one place. That architecture makes cross-service development easier, but it also means the repository sits at the center of nearly everything we build. 

Building AI-powered features at Dropbox often requires small changes across ranking systems, retrieval pipelines, evaluation logic, and UI surfaces. All of that work moves through the same engineering loop: pull the latest code, build and test it, get it reviewed, merge it, and ship it. Over time, we began to notice that this loop was getting slower. Our monorepo had grown to 87GB; downloading a full copy of the codebase (or “cloning” the repository) took more than an hour, and many continuous integration (CI) jobs were repeatedly paying that cost. We were also approaching GitHub’s 100GB repository size limit, which introduced real operational risk.

In this post, we’ll share how we reduced the repository from 87GB to 20GB (a 77% reduction), cutting the time required to clone the repository to under 15 minutes. We’ll also explain what was driving the growth and what we learned about maintaining a large monorepo at scale.


When repository size becomes a real problem

To understand why repository size matters, it helps to look at how engineers actually work. The first time someone sets up their development environment, they clone the repository, meaning they download a full copy of the codebase and its history to their machine. After that initial setup, daily work is less intensive. Engineers fetch and pull incremental updates rather than redownloading everything. But that first clone is unavoidable, and when the repository reached 87GB, it regularly took more than an hour.

That cost didn’t just affect onboarding. Many continuous integration jobs—automated build and test workflows that run on every code change—begin from a fresh clone. That meant our CI pipelines were repeatedly incurring the same overhead. Internal systems that synchronize the repository were also handling significantly more data than before, which increased the likelihood of timeouts and degraded performance.

At the same time, the repository was growing steadily, typically by 20 to 60MB per day, with occasional spikes above 150MB. At that rate, we were on track to hit the GitHub Enterprise Cloud (GHEC) 100GB repository size hard limit within months. The issue wasn’t simply that we had a large codebase. The growth rate itself didn’t match what we would expect from normal development activity, even at Dropbox’s scale. That suggested the problem wasn’t just what we were storing, but how it was being stored.

When compression backfires

At first, we looked for the usual causes of repository bloat: large binaries, accidentally committed dependencies, or generated files that didn’t belong in version control. None of those explained what we were seeing. The growth pattern pointed somewhere less obvious: Git’s delta compression.

Git doesn’t store every version of every file as a complete copy. Instead, it tries to save space by storing the differences between similar files. When multiple versions of a file exist, Git keeps one full version and represents the others as deltas, or “diffs,” against it. In most repositories, this works extremely well and keeps storage efficient.

The issue was how Git decides which files are similar enough to compare. By default, it uses a heuristic based on only the last 16 characters of the file path when pairing files for delta compression. In many codebases, that’s good enough. Files with similar names often contain related content. Our internationalization (i18n) files, however, followed this structure:

 i18n/metaserver/[language]/LC_MESSAGES/[filename].po

The language code appears earlier in the path, not in the final 16 characters. As a result, Git was often computing deltas between files in different languages instead of within the same language. A small update to one translation file might be compared against an unrelated file in another language. Instead of producing a compact delta, Git generated a much larger one.

Routine translation updates were therefore creating disproportionately large pack files. Nothing about the content was unusual. The problem was the interaction between our directory structure and Git’s compression heuristic. Once we understood that mismatch, the rapid growth of the repository finally made sense.
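To make the mismatch concrete, here's a minimal illustration (the specific paths are hypothetical; the post only gives the general pattern): the trailing 16 characters of two translation paths are identical even though the files belong to different languages.

```shell
# Hypothetical i18n paths; only the language segment differs.
p1="i18n/metaserver/de/LC_MESSAGES/common.po"
p2="i18n/metaserver/ja/LC_MESSAGES/common.po"

# The trailing 16 characters -- all that Git's default heuristic considers
# when sorting delta candidates -- are the same for both paths (bash syntax).
suffix1="${p1: -16}"
suffix2="${p2: -16}"
echo "$suffix1"   # SSAGES/common.po
echo "$suffix2"   # SSAGES/common.po
```

Because every language's copy of a given file collides on this suffix, Git will happily pair, say, a German file with a Japanese one when computing deltas.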

Testing a fix locally

Once we suspected that delta pairing was the root cause, we looked for ways to influence how Git grouped files during compression. We found an experimental flag called --path-walk that changes how Git selects candidates for delta comparison. Instead of relying on the last 16 characters of a path, it walks the full directory structure, which keeps related files closer together.

We ran a local repack—essentially asking Git to reorganize and recompress the objects in the repository—using this flag. The results were immediate. The repository shrank from the low-80GB range to the low-20GB range. That confirmed our hypothesis: the issue wasn’t the volume of data, but how it was being packed.

However, that success exposed a new constraint. GitHub told us that --path-walk was not compatible with certain server-side optimizations they rely on, including features like bitmaps and delta islands that make cloning and fetching fast. Even though the fix worked locally, it wouldn’t work in production.

We needed a solution that achieved the same size reduction while remaining compatible with GitHub’s infrastructure. That meant working within the parameters GitHub could safely support, rather than relying on an experimental client-side flag.

Why we couldn't do this alone

Our local experiments proved that better packing could dramatically reduce the repository size. But there was a critical limitation: you can’t repack a repository locally, push it to GitHub, and expect those improvements to persist.

GitHub constructs transfer packs dynamically on the server based on what each client is missing. That means the server’s own packing strategy determines clone and fetch sizes. Even if a local mirror is perfectly optimized, GitHub will rebuild the pack during transfer using its own configuration. To permanently reduce repository size and improve performance, the repack had to be executed on GitHub’s servers.

$ git clone --mirror git@github.com:dropbox-internal/server.git server_mirror
performance: 2795.152366000 s (~47m)

$ du -sh server_mirror
84G     server_mirror

$ git repack -adf --depth=250 --window=250
performance: 31205.079533000 s (~9h)

$ du -sh server_mirror
20G     server_mirror

We shared our findings with GitHub Support and worked with them on a solution that would be compatible with their infrastructure. Instead of relying on experimental flags, they recommended a more aggressive repack using tuned window and depth parameters. These settings control how thoroughly Git searches for similar objects and how many layers of deltas it allows. Higher values increase compute time during repacking but can significantly improve compression.
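For reference, the same knobs can also be persisted in a repository's Git configuration instead of being passed as one-off flags (a sketch; the defaults these values replace are window=10 and depth=50):

```
[pack]
	window = 250
	depth = 250
```

A higher window lets Git consider more candidate objects when searching for a good delta base; a higher depth allows longer delta chains, trading slower object reconstruction for better compression.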

We tested the approach on a mirrored clone of the repository. The repack took roughly nine hours to complete, but the result was clear: the repository shrank from 84GB to 20GB. Because this method aligned with GitHub’s server-side optimizations, it could be executed safely in production.

Rolling it out without breaking anything

Repacking a repository changes how billions of objects are physically organized on disk. It doesn’t alter the contents of the code, but it does change the structure underlying every clone, fetch, and push. Given how central the monorepo is to our development workflow, we treated this like any other production infrastructure change.

Before touching the live repository, we created a test mirror and had GitHub perform the repack there first. We monitored fetch duration distributions, push success rates, and API latency to ensure the new pack structure didn’t introduce regressions. The mirror dropped from 78GB to 18GB, and while there was minor movement at the tail of fetch latency, it was well within the tradeoff we were willing to make for a fourfold size reduction. We didn’t observe stability issues.

With that validation in place, GitHub rolled out the production repack gradually over the course of a week. They updated one replica per day, beginning with read-write replicas and reserving buffer time at the end of the week in case a rollback was needed. This phased approach ensured that if anything unexpected surfaced, they could revert safely.

The final result was substantial. The repository shrank from 87GB to 20GB, and clone times dropped from over an hour to under 15 minutes in many cases. New engineers no longer begin onboarding with a long wait. CI pipelines start faster and run more reliably. Internal services that synchronize the repository are less prone to timeouts. And by moving well below GitHub’s 100GB limit, we reduced the risk of platform-level performance degradation during high-traffic periods.

Just as importantly, the system remained stable throughout the rollout. Fetch duration, push success rates, and API latency all stayed within expected ranges. The improvements held without introducing new operational risk.

Project data size dropped significantly and has remained stable since.

What we learned

Beyond the size reduction itself, this project reinforced a few broader lessons about maintaining large-scale infrastructure. The following three mattered most:

Growth isn’t just about commit volume
When we first noticed the repository ballooning, the instinct was to look at what was being added: large files, unused dependencies, generated artifacts. But the root cause had nothing to do with the content of our commits. It was about how our directory structure interacted with Git’s compression heuristics. Our i18n paths encouraged Git to compute deltas across different languages rather than within the same language. Routine translation updates were therefore creating oversized pack files. The growth was structural, not behavioral.

Tools embed assumptions. When your usage patterns diverge from those assumptions, performance can degrade quietly over time. In our case, Git’s 16-character path heuristic worked as designed. It just didn’t work well with our repository structure. Understanding those internal mechanics was what allowed us to diagnose the issue correctly.

Some fixes require working with your platform provider
We were able to identify the root cause and even validate a fix locally. But because GitHub determines how repositories are packed and transferred, a local repack wasn’t enough. The solution had to align with GitHub’s server-side infrastructure.

That meant bringing clear data to GitHub, testing collaboratively, and working within supported parameters. When your system depends on a managed platform, some problems live at the boundary between your code and theirs. Having strong relationships and a shared debugging process makes a meaningful difference.

Treat repo health like production infrastructure
A repository repack changes the physical structure of billions of objects. Even though the code itself doesn’t change, every engineer and every automated system interacts with that underlying structure. We approached this project the same way we would approach any production infrastructure change: test on a mirror, measure real-world impact, roll out gradually, and maintain a rollback path.

Repositories can feel like passive storage, something that simply grows over time. At scale, they are not passive. They are critical infrastructure that directly affects developer velocity and CI reliability. As part of this work, we built a recurring stats job that tracks key health indicators for the monorepo and feeds them into an internal dashboard. It monitors things like overall repository size, how quickly that size is growing, how long a fresh clone takes, and how storage is distributed across different parts of the codebase. If growth starts accelerating again or clone times begin creeping up, we'll see it early rather than discovering it when engineers start feeling the pain. Monitoring growth trends and investigating anomalies early is part of running a healthy engineering organization.
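As a sketch of what such a probe can gather (the function and metric names here are illustrative, not Dropbox's internal job), Git exposes the raw numbers through `git count-objects`:

```shell
# Illustrative repo-health probe: prints pack size (KiB) and loose-object
# count for the repository at the given path. Metric names are hypothetical.
repo_health() {
  git -C "$1" count-objects -v | awk '
    /^size-pack:/ {print "pack_size_kb=" $2}
    /^count:/     {print "loose_objects=" $2}'
}
```

A recurring job can emit these values to a dashboard and alert when pack-size growth deviates from the expected daily range.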

What’s next

Reducing the repository from 87GB to 20GB had an immediate impact on how we build. New engineers can get started in minutes instead of waiting through a lengthy initial clone. CI pipelines spin up faster and run more reliably. Teams working on AI features—where progress often comes from many small, iterative changes across multiple services—feel that improvement in every development cycle.

The investigation also led to structural changes designed to prevent the same issue from resurfacing. We updated our i18n workflow to align more closely with how Git’s packing algorithm groups files, reducing the likelihood of pathological delta pairing in the future. Just as importantly, we now have better visibility into repository growth trends and a clearer understanding of what “normal” looks like.

More broadly, this project gave us a repeatable playbook. When growth accelerates unexpectedly, we know how to investigate at the compression layer, how to validate fixes safely, and how to work across platform boundaries when necessary. Monorepos will continue to grow as products evolve, but growth doesn’t have to mean friction. With the right tooling and discipline, it can remain invisible to the engineers who rely on it every day.

Acknowledgments: Samm Desmond, Genghis Chau

~ ~ ~

If building innovative products, experiences, and infrastructure excites you, come build the future with us! Visit jobs.dropbox.com to see our open roles.




I Turned an Old Wheelbarrow Into a Garden Planter and It Finally Fixed That Dead Corner in My Yard


I had an old wheelbarrow sitting against the wall for years. Rusted, unused, and always in the way.

Throwing it out felt like the next step. Instead, I used it to fix a part of the yard that never worked. One empty corner that always looked disconnected from everything else.

Turning it into a planter didn’t just add flowers. It gave that space a purpose, a focal point, and a sense of structure without building anything new.


What I Started With

The wheelbarrow was in rough shape. Rust spots, faded paint, worn handles. Not something you would normally keep.

But the structure was still solid. Deep basin, stable frame, and enough presence to stand out once placed correctly.

That’s what makes this idea work. You’re not restoring it to look new. You’re using its shape to create something useful.

The Key Step Before Planting

Before adding soil, I made sure it could handle constant watering.

  • Cleaned out dirt and debris
  • Let it dry completely
  • Added a protective paint layer inside to slow rust
  • Drilled a few drainage holes at the lowest point

This part matters more than the planting. Without drainage and protection, it won’t last a full season.


Turning It Into a Planter

Once prepped, the rest is simple.

  • Filled it with potting soil
  • Positioned it before planting
  • Used dense flowers that stay low and spread

The biggest decision was placement, not plants.

Instead of pushing it to the edge, I placed it where the yard needed structure. Slightly angled, visible from different sides, not hidden.

That turns it from an object into a feature.


The Spilled Garden Effect (What Changes Everything)

The real impact comes when you stop treating it like a container.

Instead of keeping everything inside, extend the planting outward. Add soil and flowers at the base so it looks like the wheelbarrow is releasing them into the garden.

That removes the “pot” look and creates a continuous surface.

It also solves a common problem. Small gardens often feel flat. This adds flow without adding height or building anything.


Where This Works Best

This idea fixes spaces that feel unfinished:

  • corners that don’t connect to anything
  • fence lines that look empty
  • edges of patios
  • areas between lawn and planting beds

Instead of filling the space with multiple small elements, one piece defines it.

What Changed After

Before, the area looked like leftover ground.

After, it feels anchored. The eye stops there. The planting feels intentional instead of scattered.

It doesn’t look like something added later. It feels like part of the layout.

The Detail That Makes or Breaks It

Most people underplant.

If you leave gaps, it looks like a container. If you plant dense, it becomes part of the garden.

  • Soil should barely be visible
  • Plants should touch and overlap
  • Edges should soften and spill slightly

That’s what removes the DIY look.


Simple Seasonal Upgrade

One thing that makes this even better is that it changes through the year.

  • Spring and summer → begonias or petunias
  • Fall → mums
  • Early season → pansies

You keep the structure, but the look changes without redoing the space.

This works because it changes how you use objects in the garden.

The wheelbarrow stops being a tool. It becomes the first thing your eye lands on.

Instead of blending into the background, that empty corner turns into a spot that actually pulls attention and holds the space together.

The post I Turned an Old Wheelbarrow Into a Garden Planter and It Finally Fixed That Dead Corner in My Yard appeared first on Homedit.


I Skipped a Viral Washer Cleaning Hack and Did This Instead


A lemon and a full tube of toothpaste inside a washing machine looks convincing on video. Thick foam. Dramatic scrubbing. Instant visual payoff.

It also raises one question: why would an appliance with pumps, rubber seals, and sensors need toothpaste?

My washer is not new. I am not running experiments inside a machine that handles hundreds of cycles per year. Instead of testing a social media trick, I used a method manufacturers and repair technicians consistently recommend.


The difference is not dramatic.

It is structural.

What I Used Instead

  • No specialty tablets.
  • No abrasive paste.
  • No heavy fragrance masking buildup.

Just white vinegar, baking soda, and two controlled hot cycles.

The logic is simple. Vinegar dissolves residue and mineral buildup. Baking soda neutralizes odor. Used separately, they flush clean without leaving film behind.


The Front-Load Routine

The real problem area is not the drum. It is the gasket.

That rubber seal traps moisture and detergent residue in its folds. Ignoring it defeats the entire cleaning process.

I sprayed white vinegar directly onto the gasket and wiped it thoroughly. After that, I poured two cups of vinegar into the detergent dispenser and ran a full hot cycle with the machine empty.

Once complete, I added half a cup of baking soda directly into the drum and ran a second hot cycle.

When both cycles finished, I wiped down:

  • The interior drum
  • The door and gasket
  • The detergent drawer

Then I left the door open to air dry.

No scent lingered once dry. The drum smelled neutral. Towels dried without a stale undertone.

That is the correct outcome.

Why the Viral Method Doesn’t Add Up

Toothpaste contains abrasives designed for enamel. Washing machines contain:

  • Rubber seals
  • Plastic housings
  • Drain pumps
  • Moisture sensors

Even if visible foam washes away, internal components were never designed for paste-based residue. Lemon juice introduces acid without control over concentration or exposure time.

The viral method focuses on spectacle. The maintenance method focuses on function.

One looks satisfying. The other protects the appliance.


What Actually Changed

Before cleaning, the washer carried a faint closed-door smell. Subtle but present. The detergent compartment showed minor film buildup.

After the vinegar and baking soda cycles:

  • The interior felt dry and clean
  • No film remained in the drawer
  • Laundry carried no residual odor

There was no dramatic transformation. Just reset conditions.

A washing machine should smell like nothing.

How Often This Makes Sense

Once per month is enough for most households. If you wash pet bedding, gym clothes, or heavy soil loads, every two to three weeks prevents buildup.

The goal is not freshness. It is prevention.

I am not interested in dramatic hacks inside a machine that costs more than most kitchen appliances. Maintenance is quieter than viral trends.

It is also safer.

The post I Skipped a Viral Washer Cleaning Hack and Did This Instead appeared first on Homedit.


banning all Anthropic employees


Per my policies, I need to ban every employee and contractor of Anthropic Inc from ever contributing code to any of my projects. Anyone have a list?

Any project that requires a Developer Certificate of Origin or similar should be doing this, because Anthropic is making tools that explicitly lie about the origin of patches to free software projects.

UNDERCOVER MODE — CRITICAL

You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. [...] Do not blow your cover.

NEVER include in commit messages or PR descriptions:

[...] The phrase 'Claude Code' or any mention that you are an AI
Co-Authored-By lines or any other attribution

-- via @vedolos


Launching Passkeys support on Report URI! 🗝️


As we're always wanting to keep ahead in the security game, I'm happy to announce that we now support Passkeys on Report URI! Let's take a quick look at what Passkeys are, why you should use them, and how we've implemented them.


Passkeys solve a big problem

Let's kick things off by stating the biggest benefit of Passkeys which is that they are phishing-resistant! That's right, if you're using Passkeys to protect your account, you no longer have to worry about falling victim to a phishing attack. This was the primary driver for us to add support at Report URI, to provide our customers with a strong authentication mechanism that will give them confidence they are protected against the pervasive threat of phishing attacks. On top of this tremendous benefit, I feel that they're also much more convenient to use too!

How do Passkeys work?

Instead of relying on a secret piece of information like a password, Passkeys work by relying on cryptography and are surprisingly simple under the hood. Your device will create a cryptographic key pair that will be used for authentication when you need to login to the website. The registration process for a Passkey looks like this:


 User               Browser / OS              Website / Server            
 |                      |                           |
 | 1. "Create Passkey"  |                           |
 |--------------------->|                           |
 |                      | 2. Request registration   |
 |                      |-------------------------->|
 |                      |                           |
 |                      | 3. Send challenge         |
 |                      |<--------------------------|
 |                      |                           | 
 |                      | 4. Create new key pair    |
 |                      |    - save private key     |
 |                      |      on device            | 
 |                      |                           |
 |                      | 5. Send public key + attestation
 |                      |-------------------------->|
 |                      |                           | 6. Store public key
 |                      |                           |    with user account
 |                      | 7. Registration complete  |
 |                      |<--------------------------|
 | 8. "Registration Complete"                       |
 |<---------------------|                           |
 |                      |                           |

You initiate the Passkey registration process in the browser and you will be prompted by your device or password manager to create a Passkey. Your device will create the cryptographic key pair, sign the challenge provided by the website, and then return the signed challenge along with your public key, which is stored against your account. The private key is kept securely on your device. Now that Passkey registration is complete, you can use your Passkey for authentication.

User               Browser / OS              Website / Server
 |                      |                           |
 | 1. "Sign in with passkey"                        |
 |--------------------->|                           |
 |                      | 2. Request authentication |
 |                      |-------------------------->|
 |                      |                           | 
 |                      | 3. Send challenge         |
 |                      |<--------------------------|
 |                      |                           |
 |                      | 4. Biometrics / PIN       |
 |                      | 5. Sign with private key  |
 |                      | 6. Return signed challenge|
 |                      |-------------------------->|
 |                      |                           | 7. Verify signature
 |                      |                           |    using public key
 |                      | 8. Authentication successful
 |                      |<--------------------------| 
 | 9. "Signed in!"      |                           |
 |<---------------------|                           |

When logging in to a website where you have registered a Passkey, you will usually have to initiate the sign-in with your Passkey. In the background, your device starts the authentication process and receives the challenge that needs to be signed with your private key. To do that, your device will ask you to authenticate with something like FaceID, TouchID, or a PIN. Once you have authenticated to your device, it signs the challenge with your private key and returns it to the website. The website can then check it is definitely you by verifying that signature using the public key it previously received, and then you're logged in! This is such a nice experience and has so little friction for the user, especially when you consider how strong this mechanism is.

How are they phishing-resistant?

When your device creates a Passkey, it doesn't just store the keys; it also stores some important metadata. The relevant part of that metadata that gives us phishing resistance is the Relying Party ID, or rpId. When you go to Report URI and register a Passkey on our website, the rpId will be saved with the Passkey on your device as report-uri.com and your device can then enforce that your new Passkey is only ever used on this domain or its subdomains. This means that if you end up on a phishing site that looks like Report URI, but isn't actually report-uri.com, the Passkey simply will not work. Take these examples that might make for convincing phishing pages:

https://report-url.com               <-- nope
https://report-uri.secure-login.com  <-- nope
https://report-uri.xyz               <-- nope

The only way that your device will now use the Passkey to log you in is if you're on a valid website where the Passkey is allowed to be used, effectively neutralising the threat of phishing!
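A simplified sketch of the check the device performs (illustrative only: real WebAuthn matching follows registrable-domain rules, which are stricter than a plain suffix test):

```shell
# Toy rpId check: a credential scoped to report-uri.com may only be used on
# that domain or its subdomains.
rp_id="report-uri.com"
passkey_allowed() {
  case "$1" in
    "$rp_id" | *".$rp_id") echo "allowed" ;;
    *)                     echo "refused" ;;
  esac
}

passkey_allowed "report-uri.com"               # allowed
passkey_allowed "www.report-uri.com"           # allowed (subdomain)
passkey_allowed "report-url.com"               # refused
passkey_allowed "report-uri.secure-login.com"  # refused
passkey_allowed "report-uri.xyz"               # refused
```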

How are they being used on Report URI?

There are two ways that you can use Passkeys on your website and they offer slightly different benefits.

  1. You can use Passkeys to replace passwords altogether, so they become your primary authentication mechanism.
  2. You can use Passkeys as a 2FA mechanism alongside your existing username/password authentication.

At Report URI we've opted for option #2 and now offer Passkeys as a 2FA option alongside our existing TOTP 2FA offering. Passkeys make for an incredibly strong second factor, and our primary goal was to achieve the phishing resistance they offer. Option #1 is also a valid approach and brings further benefits, chiefly removing passwords from your database entirely and eliminating password-based attacks. However, given our extensive existing measures to protect user passwords, replacing passwords altogether was less pressing for us, so we chose to introduce Passkeys as a 2FA mechanism instead. If you're interested in our approach to securing user passwords, you can read my blog post that goes into detail, but here is a summary:

  1. We use the Pwned Passwords API to prevent the use of passwords that have previously been leaked.
  2. We use zxcvbn to ensure the use of strong passwords when registering an account or changing password.
  3. We provide extensive support for password managers using attributes on HTML form elements.
  4. We store hashed passwords using bcrypt (work factor 10 + 128bit salt) so they are resistant to cracking.

Passkeys are now available on the Settings page in your account and we strongly recommend that you go and enable them!

In the coming week, I will also be publishing two more blog posts. One of them is the full details of the external engagement to have our Passkeys implementation audited. We engaged a penetration testing company to come in and do a full test of our implementation to make absolutely sure it was rock solid. The blog post will contain the full, unredacted report with details of all findings. The second blog post will be the announcement of our whitepaper on Passkeys and the new security considerations they bring if you're planning to use them on your site. Make sure you're subscribed for notifications so you know when they go live!


Quote and share highlighted text from any story


When you share a story on NewsBlur, sometimes the whole article isn’t the point. You want to call out a specific paragraph, a key finding, a sentence that made you think. Until now, you’d have to manually copy-paste text into the comment box and add your own formatting. Now you can select any text in a story, click Quote, and it drops into the share dialog as a styled blockquote, ready for you to add your own commentary underneath.

How it works

Select text in any story and a popover appears with the usual options: Highlight, Train, and Search. There’s now a new option: Quote. Click it, and the share dialog opens with your selected text rendered as a blockquote above the comment field.

Add your own comment below the quote, or just share the quote by itself. The share button updates to say “Share with comment” when there’s a quote or comment present. If you change your mind, click the × on the blockquote to remove it.

Once shared, the blockquote renders with a left border and italic styling in the comment thread, so other readers can see exactly what caught your eye before reading your take on it.

The quote feature works anywhere text selection is available: the story detail view, highlighted text, and search results. It’s available now on the web. If you have feedback or ideas, please share them on the NewsBlur forum.
