Sebastian,
I can make these statistics more understandable, but I work in JavaScript. If you use a tab-separated format, I can read it, convert it to JSON, and share that with others who might want to process and analyze the data. The raw data as well.
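To make the offer concrete, here is a minimal sketch of that tab-separated-to-JSON conversion in plain JavaScript. It assumes the first line of the file holds column names; the sample field names are illustrative only, not your actual export columns.

```javascript
// Convert tab-separated text (header row first) into an array of objects.
// Assumption: fields contain no embedded tabs or newlines.
function tsvToJson(tsvText) {
  const lines = tsvText.split(/\r?\n/).filter(line => line.length > 0);
  const headers = lines[0].split("\t");
  return lines.slice(1).map(line => {
    const fields = line.split("\t");
    const record = {};
    headers.forEach((name, i) => { record[name] = fields[i]; });
    return record;
  });
}

// Illustrative sample, not real Common Crawl column names:
const sample = "domain\tpages\nexample.com\t42";
console.log(JSON.stringify(tsvToJson(sample)));
// [{"domain":"example.com","pages":"42"}]
```

The resulting JSON can then be loaded directly by any browser script, with no special tooling or dependencies.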
Python is NOT a universal Internet standard, and not likely to become one until a compiler and basic sharing are cleaned up. That is another of the Internet Foundation projects, but a low priority, since so few people (relative to the whole Internet) are using it.
If you share your data in a global format (I can help you), then anyone can bolt into your output. A community using it can then generate usage data to guide your development. You should have about 10,000 people already working in this area globally. I can help you find them. I have found many groups doing statistics on the web. They are not working together effectively.
From better statistics, I can help you find and connect to the groups who can use Common Crawl. I want to replace Google search for certain types of topics where a for-profit company is not allowed, or always suspect.
I have been working to redesign the Internet internals and policies for the last 22 years full time. I think we have talked before. But I want to write some proposals for new organizations and basic statistics on the characteristics of the Internet are needed.
Later I want to profile domains like EDU, GOV and others to show exactly what is happening with them. Then ask that they be re-written or mapped to a more useful, visible and auditable form. I can be quite specific. But I want to get you statistics in a form to show others. I am not going to screen-scrape your results, or invest my scarce time in a language I think is totally inadequate right now. You have done a good job; I just want to help use the data you are producing to change the Internet. That includes helping Common Crawl to tackle some global problems to demonstrate effective ways to index and understand all of what is available and how it is structured and functions. "Covid-19" is a bit too large. But individual nodes in that problem can be tackled with the resources available. If I can demonstrate a few, then there are donors and sponsors and organizations to help.
I am negotiating to have my own InternetFoundation.Org site rebuilt. If you have specific questions, you have my email. I think this conversation is not private. So I am only explaining basic things that anyone can do.
Thank you for what you are doing. Can you export your results in a computer-readable form? If you have thought about it, I would rather adapt to your export format, and give feedback or rewrite. Adding JavaScript should greatly expand the community of people who can connect to CommonCrawl and its derivative products and tools. I will look at all the others in the coming days. But my time is pretty covered. I will take time to give you some ideas for how to better use these statistics. In my working career, I was a senior mathematical statistician. I have about 50 years in statistics, mainly on global economic, social, technological and scientific modeling and simulation. Since those all require massive collection and curating of information from many different places and forms, I have also become adept at finding and gathering data. Connecting all the data on the Internet with a few basic human- and computer-readable forms is much faster and more efficient than the current proprietary, binary, compressed and obscure formats, many of which require a substantial investment of time just to find the required tools and dependencies.
GitHub.com itself is one of the Internet Foundation projects. I am working to index and remap it completely. It has massive duplication and too many undocumented pieces. There are many groups working individually, but not together.
site:GitHub.com has 75.1 million entry points.
site:GitHub.com "covid-19" has 111 thousand entry points.
site:GitHub.com "Common Crawl" OR "CommonCrawl" has 4,200 entry points.
As a sample, if you can give me that list of 111,000 URLs, I can see about profiling who is doing what. I, and many other people, could use the data from CommonCrawl, starting with basic statistics. But it needs to be easy to try basic things, like looking at all the pages that contain "covid-19", and to get the result back in a form that is easy for JavaScript to use. Batch processing in Python will NOT help get the information to billions of browsers and users of the Internet who only have JavaScript - content scripts, scripts in pages, background HTML and scripts.
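One existing path already comes close to this: the Common Crawl index server returns one JSON object per line for a URL query, which plain JavaScript can consume directly. Below is a sketch under those assumptions; the collection name (CC-MAIN-2024-10) is a placeholder for whatever crawl is current, and the field names (url, status, and so on) are those the index server normally returns.

```javascript
// Parse the newline-delimited JSON that the Common Crawl CDX index
// endpoint returns (one record per line).
function parseCdxLines(ndjsonText) {
  return ndjsonText
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line));
}

// Usage in a browser or Node 18+ (global fetch). The collection name
// below is an assumption; substitute a current crawl identifier:
// fetch("https://index.commoncrawl.org/CC-MAIN-2024-10-index?url=github.com/*&output=json")
//   .then(resp => resp.text())
//   .then(text => {
//     const records = parseCdxLines(text);
//     console.log(records.map(r => r.url));
//   });

// Local sample record, illustrative only:
const sample = '{"url": "https://github.com/commoncrawl", "status": "200"}\n';
console.log(parseCdxLines(sample)[0].url);
```

If your statistics exports followed the same one-JSON-object-per-line convention, any browser user could consume them with a few lines like these, with no tooling to install.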
I can help on many things but don't ask me to learn things you can do in minutes. I can reasonably tackle "Covid-19" on the Internet, but not if I have to do it alone.
Pardon me if I am writing a bit formally, and at length. I am sharing this note with some people I hope will help. There are about 5,000 immediate problems on the Internet that could be helped with data and statistics from CommonCrawl. I can only do them one by one, and I need data from you to do that. I hope you can spend a bit of time to help me get started. If there are things that need doing, and I profile the 4,200 entry points for "Common Crawl" OR "CommonCrawl" on GitHub, then you can more easily get that whole community working together. Not in ones and twos, but as a complete, open, visible and clear group. Yes, I know CC is working to get them organized, but I think I can speed up the process, if they will help on "Covid-19" and "Remapping the Internet" and related topics.
Richard Collins, Director, The Internet Foundation