This is part of a series exploring how collective intelligence can create a better world. Read the whole series here

As I type these very words, I’m helping scientists figure out the future of climate change. I’m doing it by running a program that I downloaded from Climateprediction.net (CPDN), a group that’s using my laptop’s extra processing power to forecast how global warming is going to impact heat waves in Australia.


They’re not just doing it on my skimpy Mac (sorry, Apple — sleek as the Air is, it’s still underwhelming when up against the sorts of computers that normally get this kind of job done). They’ve got tens of thousands of other people’s laptops and desktops involved too. The technique is called “distributed computing,” and it’s being used to help researchers in everything from the search for extraterrestrial life to understanding how proteins fold.

Before I get into the details of climate models and computing, a quick note about the safety of this kind of thing, for those of you who are (wisely) wary of giving strangers access to your computer. CPDN is a project of Oxford University — a fairly reputable outfit, in my book. David Anderson, who leads the project that developed the software CPDN uses to distribute its models (the project is called BOINC), says they use a technique called code signing to protect against hackers. Over ten years of operating on about 500,000 computers, Anderson says, there have thus far been no security incidents involving BOINC.
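
What does code signing actually buy you? Roughly this: the project signs each program with a private key it keeps to itself, and the software on your computer checks that signature against the project’s public key before running anything, so a tampered-with download gets rejected. Here’s a bare-bones Python sketch of the general idea (it uses the third-party cryptography package, and the setup is mine for illustration, not BOINC’s actual code):

```python
# Minimal sketch of the idea behind code signing, using Python's third-party
# "cryptography" package. Purely illustrative -- this is not BOINC's actual scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The project holds a private key; the client software ships with the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

program = b"climate_model_binary_contents"   # stand-in for the downloaded model

# Project side: sign the program before handing it out to volunteers.
signature = private_key.sign(program, padding.PKCS1v15(), hashes.SHA256())

# Volunteer side: check the signature before running anything.
try:
    public_key.verify(signature, program, padding.PKCS1v15(), hashes.SHA256())
    print("Signature checks out -- run the model.")
except InvalidSignature:
    print("Signature doesn't match -- refuse to run it.")
```

If a hacker swaps in a different program somewhere along the way, the signature check fails and the client simply refuses to run it.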


Ok, then, so why do these scientists want into my computer in the first place? Well, if we want to get a firmer grasp on what’s in store for our planet, we need climate scientists to run models — and those models require a heck of a lot of computing power.

As you may have noticed, scientists are more apt to offer up fuzzy statements like “droughts and floods are increasingly likely to happen” than they are to say, “THIS IS EXACTLY HOW FUCKED WE ARE.” While we do know enough about climate change to say that we’re causing it, and even a lot about just how nasty its bite is going to be, scientists sometimes still get hazy on the specifics.

This is because of something called the butterfly effect, which sounds prettier than it is. It just means that we live in a non-linear world, where even a tiny change in the starting conditions can lead to a massively different end result (just think of the two lives Gwyneth Paltrow’s character could’ve led based on whether she caught that train in Sliding Doors). The term was coined by Edward Lorenz, based on his theoretical example that a hurricane’s formation could depend on when and where a butterfly flapped its wings.
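
If you want to see the butterfly effect in action, Lorenz’s own toy system of three equations does the trick. In this quick Python sketch (my own bare-bones setup, nothing CPDN actually runs), two simulations that start a hair’s breadth apart end up nowhere near each other:

```python
# Lorenz's classic three-equation system: two runs that start almost exactly
# alike end up in completely different places. Step size and run length are
# arbitrary choices for the demo.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one small Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x, y, z, steps=5000):
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

print("run A ends at:", run(1.0, 1.0, 1.0))
print("run B ends at:", run(1.00000001, 1.0, 1.0))   # nudged by one part in 100 million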

And there are A LOT of elements that factor into the Earth’s climate. It’s one big non-linear system made up of tons of smaller (but still pretty big), equally non-linear systems (such as the atmosphere, the oceans, and the biosphere). Which means that projecting future climate change gets complicated in a hurry.


In order to whittle down the uncertainties enough to make meaningful statements, researchers have to tinker with A LOT of slightly different models, each with its own slightly different starting conditions, and they need to be able to run those models hundreds of thousands of times. That way, they can “explore what is the possible weather under the given climate conditions,” Friederike Otto, scientific coordinator at CPDN, explains.
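
In spirit, one of those perturbed-ensemble experiments looks something like the toy Python sketch below (a chaotic stand-in I made up, nothing like CPDN’s real model code): run the same model thousands of times from slightly jittered starting points, then look at the distribution of outcomes rather than trusting any single run.

```python
# Toy "perturbed ensemble": run the same (stand-in) model thousands of times
# from slightly jittered starting points and look at the spread of outcomes.
import random
import statistics

def toy_model(x0, steps=200, r=3.9):
    """A chaotic logistic map standing in for one model run; returns its end state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

random.seed(42)
ensemble = [toy_model(0.5 + random.uniform(-0.001, 0.001)) for _ in range(10_000)]

print("mean end state:", round(statistics.mean(ensemble), 3))
print("spread (stdev):", round(statistics.stdev(ensemble), 3))
print("share of runs ending up 'extreme' (> 0.9):",
      round(sum(x > 0.9 for x in ensemble) / len(ensemble), 3))
```

No single run tells you much, but the whole pile of them tells you how often the extreme outcomes show up — which is exactly the kind of statement climate researchers are after.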

Normally, you’d need a fancy supercomputer to do that sort of thing. But you can expect to pony up at least $100 million for one of those, not to mention the $6 to $7 million in annual maintenance costs, and it’ll probably be only a few years before the technology becomes obsolete. Researchers can submit proposals for a chance to run their models on the supercomputers that are already out there, but space on those machines is limited. That’s where distributed computing comes in.

Otto says her group has “re-written the models so that they can run on laptops and desktop machines. And with this we can, so to speak, put our own supercomputer together, with all the volunteers who donate their free processing power to our project.”
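
Here’s a cartoon of how that home-brewed supercomputer gets its marching orders, at least as I picture it (the names and scheduling logic are mine, not BOINC’s actual scheduler): the project carves a huge sweep of model runs into bite-sized work units, and each volunteer machine grabs units until the sweep is done.

```python
# Cartoon of the distributed-computing setup: a big sweep of model runs gets
# carved into small work units, and volunteer machines take units until the
# sweep is done. Names and structure are illustrative only.

# Project side: a sweep over slightly different starting conditions.
all_runs = [{"run_id": i, "perturbation": 0.001 * i} for i in range(1000)]

# Carve the sweep into work units of a handful of runs each.
CHUNK = 10
work_units = [all_runs[i:i + CHUNK] for i in range(0, len(all_runs), CHUNK)]

def crunch(work_unit):
    """Stand-in for actually running the climate model on one work unit."""
    return [run["perturbation"] ** 2 for run in work_unit]   # pretend result

# Volunteer side: machines take turns pulling the next available work unit.
volunteers = ["my MacBook Air", "someone's desktop", "a dorm-room PC"]
results = []
while work_units:
    for machine in volunteers:
        if not work_units:
            break
        results.extend(crunch(work_units.pop(0)))   # real results get uploaded back

print(f"{len(results)} model runs finished across {len(volunteers)} machines")
```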

The folks at CPDN are the first climate scientists to experiment with this new approach. In addition to exploring the Australian heat waves and droughts, they’ve used distributed computing to gain some insight into the connection between climate change and the UK floods this past winter, and into what’s going to happen with the world’s ocean currents.

Not only is distributed computing in some ways more accessible to researchers, but, like the other forms of citizen science I talked about in my last post, it also helps people feel more involved. Like they can actually do something about everything that’s going wrong. CPDN tries to let volunteers in on its results, too, by publishing them to its website even before the academic papers get written up.

So far, I’ve had the model running in the background on my laptop for about two months (it’d go a lot faster if I left my laptop on and the model running for more of the day, and if I didn’t sometimes put the model on pause when things get slow and I want to use my laptop’s processing power myself). Overall, I’d rate the experience pretty highly: Sure, sometimes it bogs things down, but those extra whirring noises it gets out of my computer make me feel pretty dang powerful. No, I’m not just messing around on Facebook — I’ve got important problems to solve!

Ultimately, however, as important as it is to narrow down what the effects of climate change will be, all that snazzy distributed computing doesn’t really help us do anything to prevent the problem in the first place. Don’t worry, collective intelligence can step in yet again! Stay tuned for my next post, on a group that’s working to crowdsource solutions to climate change.