Tuesday, 5 September 2017

My CNRS webcast "Enabling open and reproducible research at computer systems conferences: good, bad and ugly"

This spring I was kindly invited by Dr. Arnaud Legrand (a CNRS research scientist promoting reproducible research in France) to present our practical experience of enabling open and reproducible research at computer systems conferences (the good, the bad and the ugly).

This CNRS webinar took place in Grenoble on March 14, 2017 and attracted a very lively audience.

You can find the following online resources related to this talk:

Monday, 4 September 2017

Microsoft sponsors non-profit cTuning foundation

We would like to thank Microsoft for providing an Azure sponsorship to our non-profit cTuning foundation to host our public repository of cross-linked artifacts and optimization results in the unified and reusable Collective Knowledge format.

Many thanks to Dr. Aaron Smith for his assistance!

Successful PhD defense at the University of Paris-Saclay (advised by cTuning foundation members)

We would like to congratulate Abdul Memon (a PhD student advised by Dr. Grigori Fursin from the cTuning foundation) on successfully defending his thesis "Crowdtuning: Towards Practical and Reproducible Auto-tuning via Crowdsourcing and Predictive Analytics" at the University of Paris-Saclay.

Most of the software, data sets and experiments were shared in a unified, reproducible and reusable way using the Collective Mind framework, and were later converted to the new Collective Knowledge framework.
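To give an idea of what such unified sharing looks like in practice, here is a minimal sketch using the Collective Knowledge Python API. It is illustrative only: it assumes CK is installed (pip install ck), and the repository name "ck-crowdtuning" is used as a plausible example rather than an exact pointer to the thesis artifacts.

    # Minimal sketch: fetching and listing shared CK artifacts.
    # Assumption: the CK framework is installed (pip install ck);
    # the repository name "ck-crowdtuning" is illustrative.
    import ck.kernel as ck

    # Pull a public CK repository with shared programs, data sets and
    # experimental workflows (equivalent to "ck pull repo:ck-crowdtuning").
    r = ck.access({'action': 'pull',
                   'module_uoa': 'repo',
                   'data_uoa': 'ck-crowdtuning'})
    if r['return'] > 0:
        # CK convention: a non-zero 'return' signals an error
        raise RuntimeError(r['error'])

    # List all shared programs now registered with CK.
    r = ck.access({'action': 'search',
                   'module_uoa': 'program'})
    if r['return'] > 0:
        raise RuntimeError(r['error'])

    for entry in r['lst']:
        print(entry['data_uoa'])

Each CK entry is simply a directory with a unified JSON meta-description, which is what makes such artifacts easy to reuse across experiments and frameworks.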

We helped prepare ACM policy on Result and Artifact Review and Badging

After arranging many Artifact Evaluations to reproduce and validate experimental results from published papers at various ACM and IEEE computer systems conferences (CGO, PPoPP, PACT, SC), we saw the need for a common reviewing and badging methodology.

In 2016, the cTuning foundation joined the ACM internal workgroup on reproducibility and provided feedback and suggestions to help develop a common methodology for artifact evaluation. The outcome of this collaborative effort is the common ACM policy on Result and Artifact Review and Badging, published here:
We have also started aligning artifact submission and reviewing procedures across computer systems conferences:
We expect this document to gradually evolve based on our Artifact Evaluation experience; please stay tuned for more news!

Monday, 16 November 2015

Join our experiment on public discussion of ADAPT'16 paper submissions!

Dear colleagues,

As a part of our ongoing initiative towards open, collaborative and reproducible computer systems research, we cordially invite you to participate in the public pre-reviewing of submissions to ADAPT'16 (the 6th International Workshop on Adaptive, Self-Tuning Computing Systems), co-located with HiPEAC'16.

Each submission is now available on arXiv and has a separate Reddit discussion thread here:

Note that several papers have shared artifacts (benchmarks, data sets, models) to help you validate the presented techniques and even build upon them!

Please feel free to comment on these papers, exchange ideas, reproduce results, suggest extensions, note missing references and related work, and so on. We hope such public pre-reviewing will speed up the dissemination of novel ideas while letting authors actively engage in discussions and improve their open articles before the final review by the ADAPT Program Committee!

You can find more details about this publication model at http://adapt-workshop.org/motivation2016.html

Looking forward to your participation!

Thursday, 24 September 2015

Further posts in the new blog

Dear colleagues,

From now on, I plan to publish posts related to collaborative, systematic and reproducible computer engineering on my startup's blog:
http://dividiti.blogspot.com

Take care,
Grigori