
If we could express all of that data as a number of bits, our fundamental unit of information, that number would be, well, astronomical. But that’s not all: in the next year that number is going to double, and the year after that it will double again, and so on. There are two reasons astronomy is experiencing this accelerating explosion of data. First, we have become very good at building telescopes that can image enormous portions of the sky. Second, the sensitivity of our detectors is subject to the exponential force of Moore’s Law. That means these enormous images are increasingly dense with pixels – digital cameras of 10 billion pixels and counting. So far, our data storage capabilities have kept pace with the massive output of these electronic stargazers. The real struggle has been figuring out how to search and synthesize that output, with AI and related techniques.
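The arithmetic behind that doubling is simple but unforgiving. A minimal sketch, assuming a hypothetical archive of 1 petabyte today (the starting size is an illustrative assumption, not a figure from the post):

```python
# A yearly doubling means the archive grows as initial * 2**years.
initial_pb = 1.0  # petabytes today (hypothetical starting size)

for year in range(6):
    print(f"after {year} years: {initial_pb * 2 ** year:.0f} PB")
```

After ten doublings that hypothetical 1 PB archive would exceed 1,000 PB, which is why storage merely keeping pace is itself remarkable.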

Earlier, raw images would simply be deposited in a data archive, and astronomers were told to come and get them. Each astronomer would download the images and run software on them to find all of the objects matching certain parameters, and then assess the quality of the data – for instance, whether an object that was thought to be a star really was a star. So you had to do a lot of analysis before you could really get into your research. Eventually people realized that no one is going to be able to download and process a terabyte of images on their own. It’s a huge waste of time.
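That object-finding step is exactly what source-extraction software automates today. A minimal sketch of the idea using the Astropy and photutils libraries (the file name, FWHM, and detection threshold are illustrative assumptions, not parameters from the post):

```python
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

# Load one monochrome image (the path is a hypothetical placeholder).
data = fits.getdata("deep_field.fits")

# Estimate the background level and noise with sigma clipping.
mean, median, std = sigma_clipped_stats(data, sigma=3.0)

# Detect point sources: the FWHM and threshold are the "certain
# parameters" the text alludes to; these values are illustrative.
finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)
sources = finder(data - median)

print(sources)  # a table of positions and brightness estimates
```

Multiply that by a terabyte of images and the waste of every astronomer repeating it locally becomes obvious.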

Teams and systems then began processing the data and cataloguing it, so that every object in the deep field had a description in terms of size, distance, color, brightness and so forth. Such catalogues are then released to the world. They could take forever to download, not because the data set was especially large, but because so many people were accessing the archive at the same time. That was one of astronomy’s first open source exercises, in the sense that we use that term today. These systems now produce more data in a single year than we have on the entire Internet.
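The catalogue model means you query a remote database for object descriptions instead of downloading the images at all. A sketch using astroquery against SDSS, purely as an example of such a public archive (the post does not name a specific survey, and the magnitude cut is illustrative):

```python
from astroquery.sdss import SDSS

# Ask the archive's database for objects matching our criteria,
# rather than downloading and reprocessing the raw images.
query = """
SELECT TOP 10 objID, ra, dec, psfMag_r
FROM PhotoObj
WHERE psfMag_r BETWEEN 15.0 AND 16.0
"""

results = SDSS.query_sql(query)
print(results)  # positions and brightnesses, no pixels transferred
```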

All data in astronomy is monochrome – black and white. The processing team combines exposures taken through different filters into layers of red, green, blue, and so forth. An imaging team then takes those colored layers and combines them, trying to make the result as accurate as possible in terms of how it would look to the human eye, or to a slightly more sensitive eye. Software lets you take the monochrome images, assign each one any color you like, and merge them into a single beautiful image. (Feb 2, 2014)
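That layer-combination step can be sketched with Astropy’s implementation of the Lupton et al. color-composition algorithm (the file names and stretch parameters below are hypothetical placeholders):

```python
from astropy.io import fits
from astropy.visualization import make_lupton_rgb

# Three monochrome exposures taken through different filters;
# the file names are hypothetical placeholders.
r = fits.getdata("field_red.fits").astype(float)
g = fits.getdata("field_green.fits").astype(float)
b = fits.getdata("field_blue.fits").astype(float)

# Merge the layers into one color image; `stretch` and `Q` control
# the contrast mapping and are illustrative choices.
rgb = make_lupton_rgb(r, g, b, stretch=5, Q=8, filename="combined.png")
```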

One thought on “Bigdata of universe is doubling each year!”

  1. M/s Tweety Elon has fired its core staff, torpedoed years of effort to rein in the worst online behaviors, and reinstated accounts and developer apps that had been kicked off Twitter for violating its terms of service (among other reasons, “deep state”). It changed the API policy and revoked the access that allowed third-party client apps to work with Twitter, while smartly crushing the small-business legal protections, anti-trust laws and monopolies acts cherished by many so-called Gx leaders of the civilized world. With great power comes a license to thrill, and zero responsibility. Bravo!
