FAQs

 

Eligibility

Yes! The Tools Competition is eager to hear from participants from across the globe. Participants must be able to accept funds from US-based entities.
Yes! We are eager to hear from and support individuals who are new to the field. We encourage you to request a smaller award to be more competitive. Please take the eligibility quiz for more guidance.
We encourage you to apply and make a note of your conflict. For competitive solutions, we will try our best to make accommodations to allow you to participate in the competition despite the conflict. Email ToolsCompetition@the-learning-agency.com if you would like to discuss your specific circumstances.
Yes! Anyone 18 years or older is eligible, and we are eager to hear from people at all stages of the development process.
Please refer to the Official Rules. All participants must agree to these rules to compete.

Developing successful proposals

Submissions for the 2021 Tools Competition closed on October 1, 2021. You can read more about our submission process here.

The competition has four priority areas or ‘tracks’ that reflect the pressing needs and opportunities in education.

  • Accelerate learning in elementary and secondary literacy and math. This track focuses on tools that help students achieve or exceed proficiency in grade-level literacy or math skills, despite learning loss due to COVID.
  • Transform K-12 assessments in both cost and quality. In this track, the focus is on tools that improve the quality of assessment to better meet the needs of educators, students and families while reducing the time or cost to develop or administer them. Tools that support diagnostic, formative, interim, summative and direct-to-family assessments are eligible.
  • Drive improvements in adult learning that boost middle class wages. Here the emphasis is on tools that increase the effectiveness or reach of post-secondary education or skill training to prepare adults, particularly non-college educated adults, for the changing economy.
  • Facilitate faster, better, and cheaper learning science research. This track looks at tools that accelerate the learning science research process by facilitating A/B testing and randomized controlled trials, improving research design, promoting replication, or releasing knowledge and data for external research.

To ensure your proposal is competitive, describe how your tool fits one or multiple of the tracks.

For support or feedback on how to tailor your proposal to fit the track, reach out to ToolsCompetition@the-learning-agency.com.

Each track has somewhat different requirements and eligibility criteria. For more information, read the competition overview and Official Rules.

Yes, proposals may address one or multiple tracks. The initial submission form will ask you to select the primary track, which should be the track that is the best match. You may also select additional tracks. Before Phase II, competition organizers will determine which track is most competitive for your proposal and you will be evaluated against proposals within that track.

You may also choose to submit multiple proposals.

Yes, proposals must be in English.
Complete the eligibility quiz to determine how to make your solution most competitive. For general information, refer to the guidelines for award sizes. If you have additional questions, reach out to ToolsCompetition@the-learning-agency.com.
The Tools Competition seeks to spur new tools and technology. This means that something about the proposal needs to be fresh, innovative, or original.

For a Mid-Scale or Large Prize, this might be an API that will improve the platform or a new tool to improve effectiveness. Or it could mean adding infrastructure that allows outside researchers to access your data.

This does not mean you have to create a new tool or new platform. Proposals seeking a Mid-Scale or Large Prize should build off of what they have and what’s already having an impact.

For this track, we’re looking for tools that help elementary and secondary students achieve or exceed proficiency in grade-level reading or math skills, despite learning loss due to COVID.

Please review last year’s winners for examples of competitive proposals. Note: All of these proposals would be competitive for the “accelerate learning” track but a few of them would also be competitive for other tracks.

For this track, we are looking for tools that improve the quality of assessment to better meet the needs of educators, students and families while reducing the time or cost to develop or administer them. Tools that support diagnostic, formative, interim, summative and direct-to-family assessments are eligible.

Possible examples of competitive proposals include but are not limited to:

  • Springboard Collaborative’s 2020 Winning Proposal, a speech recognition powered literacy screen that will automate literacy assessment such that it requires neither classroom time nor teaching expertise to administer. The tool is designed to enable families to more deeply understand their children’s reading development, set goals, and measure progress.
  • A platform that uses natural language processing to evaluate student writing for organization, idea development and use of evidence in order to provide real-time feedback to students and information on students’ individual and group needs to educators.
  • An open source algorithm that can evaluate students’ handwritten math equations and solutions to assess correctness. The algorithm could either be incorporated into a new tool or integrated into existing digital learning platforms.

For this track, we are looking for tools that increase the effectiveness or reach of post-secondary education or skill training to prepare adults, particularly non-college educated adults, for the changing economy.

Possible examples of competitive proposals include but are not limited to:

  • A platform that creates a new tool to help adult learners gain data science skills.
  • A new algorithm that assesses open job descriptions in order to determine demand for reskilling programs or certificate programs at universities.
  • A chatbot that allows learners to receive real-time feedback and support to build and improve in-demand skills on the job.

For this track, we are looking for tools that accelerate the learning science research process by facilitating A/B testing and randomized controlled trials, improving research design, promoting replication, or releasing knowledge and data for external research. Tools can focus on any topic related to education, but tools that concentrate on math, specifically driving mastery toward Algebra 1, will receive a competitive priority.

Examples of existing research tools that fit the goals of this track include but are not limited to:

  • ASSISTments’ E-TRIALS and Carnegie Learning’s UpGrade, which both allow outside researchers to run experiments with their user base in existing learning environments.
  • TerraCotta, which democratizes research by integrating into learning management systems so that teachers can conduct rigorous research in their classrooms.
  • Kaggle, Datashop, and Data.World, which make educational data more easily accessible to researchers.

Research areas that qualify for the competitive priority include, but are not limited to, research tools that address Algebra 1 in:

  • Math curriculum
  • Proximate measures of student mastery in math
  • Math tutoring programs
  • Math learning progressions

The Tools Competition has a phased selection process in order to give competitors time and feedback to strengthen their tool and build a team. Proposals will be reviewed at each phase and selected submissions will be invited to submit to the next round.

For more information see here.

If you have questions about specific phases, reach out to ToolsCompetition@the-learning-agency.com.

Proposals will be evaluated against others within the same priority area. Proposals requesting a larger prize amount will be subject to greater scrutiny. At each stage of the competition, reviewers will evaluate proposals based on eligibility requirements for the prize bands as well as:
  • Potential impact and likelihood to improve learning
  • Attention to equity to support learning of historically marginalized populations
  • Ability to support rapid experimentation and continuous improvement
  • Ability to scale to additional users and/or domains
  • Team passion and readiness to execute

Yes! Before the October 1st deadline, the organizing committee will host two informational webinars. A webinar is scheduled for July 27th from 12-1pm ET, and updates on how to register for the webinar will be posted here. Interested competitors are also welcome to reach out to ToolsCompetition@the-learning-agency.com with questions or feedback. Additional avenues for support will be emailed to our mailing list, so please make sure to sign up by adding your email address to the sign-up form on the site. We also recommend joining the Learning Engineering Google Group, where opportunities for partnership and additional support are frequently posted.

Learning Engineering

Learning engineering is the use of computer science to pursue rapid experimentation and continuous improvement with the goal of improving student outcomes.

The learning engineering approach is critical because the current process to test and establish the efficacy of new ideas is too long and too expensive. Learning science research remains slow, small-scale, and data-poor, compared to other fields. The result is that teachers and administrators often have neither proven tools nor the research at hand they need to make informed pedagogical decisions. Learning engineering aims to solve this problem using the tools of computer science.

For individual platforms, the learning engineering approach is important because it allows platforms to engage in rapid experimentation and continuous improvement. In other words, learning engineering allows platforms to quickly understand whether an approach works, for whom, and when. This is central to scaling an effective product.

Far too often, education research proves to be a frustrating process. Experiments often take years. Costs are high, sometimes many millions of dollars per study. Quality is also uneven, and many studies have small sample sizes and lack rigorous controls. Similarly, the field lacks high-quality datasets that can spark better research and richer understanding of student learning.

Part of the issue is that learning is a complicated domain that takes place in highly varied contexts. Another issue is that the subjects of the studies are typically young people and so there are heightened concerns around privacy.

But the consequences of weak research processes are clear, and in education, experts often don't know much about what works, why it works, for whom it works, and in what contexts.

Take the example of interleaved practice, or mixing up problem sets while learning. Research into middle school math has established that students learn better when their practice is interleaved, meaning students practice a mix of new concepts and concepts from earlier lessons. But it’s an open research question how far this principle extends. Does interleaved practice work equally well for reading comprehension or social studies? Does it work for younger math students too? Does the type of student (high-achieving versus behind) matter?

This lack of knowledge has important consequences, and far too much money, time, and energy is wasted on unproven educational theories and strategies.

Learning engineering, at its core, is really about three processes: (1) systematically collecting data as users interact with a platform, tool, or procedure while protecting student privacy; (2) analyzing the collected data to make increasingly educated guesses about what’s leading to better learning; and (3) iterating based on these data to improve the platform, tool, or procedure for better learning outcomes. Some but not all platforms will partner with researchers to better learn what’s working best for students. These findings can then be shared with the community at large to help improve learner outcomes everywhere.
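
To make those three processes concrete, here is a minimal sketch of a collect-analyze-iterate loop in Python. It is illustrative only: the event log, variant names, and simulated outcomes are hypothetical rather than drawn from any real platform, and a production system would add consent checks, privacy protections, and proper statistical analysis.

    import random
    from collections import defaultdict

    event_log = []  # (1) systematically collect anonymized interaction data

    def log_event(student_id, variant, correct):
        """Record one interaction: which variant the student saw and the outcome."""
        event_log.append({"student": student_id, "variant": variant, "correct": correct})

    def success_rate_by_variant(log):
        """(2) Analyze the collected data to estimate what is leading to better learning."""
        totals, correct = defaultdict(int), defaultdict(int)
        for e in log:
            totals[e["variant"]] += 1
            correct[e["variant"]] += int(e["correct"])
        return {v: correct[v] / totals[v] for v in totals}

    def choose_default_variant(rates):
        """(3) Iterate: promote the better-performing variant for future learners."""
        return max(rates, key=rates.get)

    # Simulated usage with two hypothetical hint styles.
    for i in range(200):
        variant = random.choice(["text_hint", "video_hint"])
        correct = random.random() < (0.55 if variant == "video_hint" else 0.50)
        log_event(f"student_{i}", variant, correct)

    rates = success_rate_by_variant(event_log)
    print(rates, "->", choose_default_variant(rates))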

Instrumentation means building out a digital learning platform so that external researchers can conduct research on it. To be more exact, the platform offers its data as an “instrument” for research. In this sense, instrumentation is central to learning engineering: it is the process by which a platform turns its data into a research tool.

One primary way to instrument is by building a way for external researchers to run A/B experiments. Several platforms have created systems that allow outside researchers to run research trials within them; in other words, they have “opened up” their platforms to outside researchers. These platforms facilitate large-scale A/B trials and offer open-source trial tools, as well as tools that teachers themselves can use to conduct their own experiments.

When it comes to building A/B instrumentation within a platform, the process usually begins with identifying key data flows and ways in which there could be splits within the system. Platforms will also have to address issues of consent, privacy, and sample size. For instance, the average classroom does not provide a large enough sample size, and so platforms will need to think about ways to coordinate across classrooms. A number of platforms have also found success building “templates” to make it easier for researchers to run studies at scale.
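
As a rough illustration of what such instrumentation can involve, the sketch below shows hypothetical student-level random assignment with a consent check. The consent store, experiment ID, and condition names are invented for this example and are not part of any particular platform’s API.

    import hashlib

    CONDITIONS = ["control", "treatment"]

    def has_consent(student_id, consent_store):
        """Only students (or their guardians) who have opted in are randomized."""
        return student_id in consent_store

    def assign_condition(student_id, experiment_id):
        """Deterministic student-level randomization: the same student always sees
        the same condition, and assignment pools students across classrooms so no
        single (small) class has to supply the whole sample."""
        digest = hashlib.sha256(f"{experiment_id}:{student_id}".encode()).hexdigest()
        return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

    # Example usage with made-up student IDs.
    consented = {"s1", "s2", "s4"}
    for sid in ["s1", "s2", "s3", "s4"]:
        if has_consent(sid, consented):
            print(sid, assign_condition(sid, "hint-style-experiment-01"))
        else:
            print(sid, "excluded (no consent)")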

One example of this approach is the E-TRIALS testbed created by the ASSISTments team. As co-founder Neil Heffernan has argued, E-TRIALS “allows researchers to examine basic learning interventions by embedding RCTs within students’ classwork and homework assignments. The shared infrastructure combines student-level randomization of content with detailed log files of student- and class-level features to help researchers estimate treatment effects and understand the contexts within which interventions work.”

To date, the E-TRIALS tool has been used by almost two dozen researchers to conduct more than 100 studies, and these studies have yielded useful insights into student learning. For example, Neil Heffernan has shown that crowdsourcing “hints” from teachers has a statistically significant positive effect on student outcomes. The platform is currently expanding to increase the number of researchers by a factor of ten over the next three years.
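
As a sketch of what “estimating treatment effects” from such log files can look like at its simplest, the example below computes a difference in success rates between two groups with an approximate 95% confidence interval. The data are simulated and the analysis is deliberately bare-bones; real studies on these platforms also account for class-level clustering, prior achievement, and other covariates.

    import math
    import random

    random.seed(0)

    # Simulated per-student outcomes (1 = answered the follow-up problem correctly).
    control = [1 if random.random() < 0.62 else 0 for _ in range(300)]
    treatment = [1 if random.random() < 0.68 else 0 for _ in range(300)]

    p_c = sum(control) / len(control)
    p_t = sum(treatment) / len(treatment)
    effect = p_t - p_c  # estimated treatment effect (difference in success rates)

    # Standard error of a difference in proportions, and an approximate 95% CI.
    se = math.sqrt(p_c * (1 - p_c) / len(control) + p_t * (1 - p_t) / len(treatment))
    low, high = effect - 1.96 * se, effect + 1.96 * se

    print(f"effect = {effect:+.3f}, 95% CI = ({low:+.3f}, {high:+.3f})")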

Other examples of platforms that have “opened up” in this way include Canvas, Zearn, and Carnegie Learning.

Carnegie Learning created the powerful UpGrade tool to help edtech platforms conduct A/B tests. The project is designed to be a “fully open source platform” that “aims to provide a common resource for learning scientists and educational software companies.” Using Carnegie Learning’s UpGrade, the Playpower Labs team found that adding “gamification” actually reduced learner engagement by 15 percent.

Questions to Consider to Assess and Bolster Your Proposal:

Does your platform allow outside researchers to run science of learning studies within your platform?
If the answer is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.

Does your platform allow outside researchers to mine data within your platform to better understand the science of learning?
If the answer is yes, then your platform is instrumented and you should address how this instrumentation will scale and grow with the support of the Tools Competition.

If the answer to either of the above questions is “no,” then we highly recommend that you partner with a researcher to help you think through how to begin to instrument your platform as part of the Tools Competition.

A secondary way that learning platforms can contribute to the field of learning engineering is to produce large, shareable datasets. Sharing large datasets that have been anonymized (stripped of all personally identifiable markers to protect student privacy) is a big catalyst for progress in the field as a whole.
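
The sketch below shows one minimal way a platform might de-identify records before sharing: dropping direct identifiers and replacing raw student IDs with salted hashes so rows can still be linked across tables. The field names and salt are hypothetical, and real releases also require review for quasi-identifiers (for example, rare demographic combinations) and compliance with applicable privacy law.

    import hashlib

    PII_FIELDS = {"name", "email", "date_of_birth"}   # hypothetical direct identifiers
    SALT = "replace-with-a-secret-salt"               # keep secret; never publish

    def pseudonymize(student_id):
        """Replace the raw ID with a salted hash so records remain linkable
        across tables without exposing the original identifier."""
        return hashlib.sha256((SALT + student_id).encode()).hexdigest()[:16]

    def anonymize_record(record):
        cleaned = {k: v for k, v in record.items() if k not in PII_FIELDS}
        cleaned["student_id"] = pseudonymize(record["student_id"])
        return cleaned

    raw = {"student_id": "12345", "name": "Jane Doe", "email": "jane@example.com",
           "date_of_birth": "2010-04-01", "problem_id": "alg1-07", "correct": True}
    print(anonymize_record(raw))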

In the field of machine learning for image recognition, there is a ubiquitously used open-source dataset of more than 100,000 labeled images called “ImageNet”. The creation and open-source release of this dataset has allowed researchers to build better and better image recognition algorithms, catapulting the field to a new, higher standard. We need similar datasets in the field of education.

An example of this approach is the development of a dataset aimed at improving assisted feedback on writing. Called the “Feedback Prize,” this effort will build on the Automated Student Assessment Prize (ASAP) that occurred in 2012 and support educators in their efforts to give feedback to students on their writing.

To date, the project has developed a dataset of nearly 400,000 essays from more than a half-dozen different platforms. The data are currently being annotated for discourse features (e.g., evidence, claims) and will be released as part of a data science competition. More on the project here.

Another example of an organization that has created a shared dataset is CommonLit, which uses algorithms to determine the readability of texts. CommonLit has shared its corpus of 3,000 level-assessed reading passages for grades 6-12. This will allow researchers to create open-source readability formulas and applications.
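
To give a sense of what an open-source readability formula can look like, the sketch below implements the classic Flesch-Kincaid grade-level formula. This is not CommonLit’s own model, and the syllable counter is a rough heuristic, but it illustrates the kind of baseline a shared, level-assessed corpus makes it possible to evaluate and improve on.

    import re

    def count_syllables(word):
        """Rough heuristic: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        """Classic formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

    sample = "The cat sat on the mat. It was a warm and quiet afternoon."
    print(round(flesch_kincaid_grade(sample), 1))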

Yet another platform that has created useful large-scale datasets is Infinite Campus. Their student information system (SIS) and learning management system (LMS) dataset includes demographic, enrollment, program, behavior, health, schedule, attendance, curriculum, and assessment information. Using this data with the proper permissions, the company facilitates partnerships between research organizations, education agencies, and research funders to ask questions at scale about what works for student learning.

For the Learning Engineering Tools Competition 2021, a dataset alone would not make a highly competitive proposal. Teams with a compelling dataset are encouraged to partner with a researcher or developer who will design a tool or an algorithm based on the data.

For a list of researchers, please email ToolsCompetition@the-learning-agency.com. We have a large and growing network of researchers who can assist platforms with:

1) How best to instrument a platform in ways that would serve the field,
2) Determining what data a platform is able to collect and how best to collect it,
3) Using the data and related research to answer questions of interest.

We are also happy to make connections to researchers through individual requests or broader networking listservs and events.

Research Partnerships for Mid-Scale and Large Prizes

External researchers must be external to the immediate organization that is receiving the funds, but they may work for the same institution in another department.
You can include costs for external researchers, but ideally, your tool allows multiple researchers to leverage the data. Given that, your budget should cover establishing the infrastructure to allow external researchers to access your data. We anticipate interested researchers will be able to fundraise to conduct research using your data.

Competitors seeking a Mid-Scale or Large Prize must have a commitment from one or more external researchers who are interested in using the data from their platform by the time they submit their detailed proposal for Phase 2, which is due December 17th.

If you need help identifying a researcher, please reach out to
ToolsCompetition@the-learning-agency.com and we will share a list of researchers with demonstrated interest in supporting competitors.

This does not need to be a formal agreement, and the researcher does not need to have already secured funding. Instead, we want to see that you have started forming partnerships with outside researchers to share your data and consider how that will require you to adapt your tool.

Most importantly, the tool must be designed so that multiple researchers can access data from the platform over time. Given this, we assume that if the researcher you are working with falls through for any reason, you will be able to establish another partnership quickly. Regardless of track, the tool must be able to support at least two researchers within a two-year period.

Budget

The funding is a prize, not a grant. Therefore, there are no specific requirements on what costs are allowed or not allowed (within reason, of course). There are no specific requirements around indirect costs, either.

Proposals will be evaluated based on whether they are clear, concise, actionable, and attainable, with budgets that are realistic and aligned with what is being proposed. Judges will evaluate how you will maximize your impact.

There is no definitive time period for the award. It is recommended that awarded proposals demonstrate significant progress by Product Review Day in Summer 2022 to receive the second installment of funds. This progress will be measured against the timeline for execution outlined in the proposal.

What happens after the competition?

Winners will receive their award by check or bank transfer in two installments.

Winners will receive the second installment of the prize after Product Review Day if they are making sufficient progress on the plan they outline in their Phase 2 detailed proposal.

Winners will present during a virtual Product Review Day to their peers and others in the field to get feedback and perspective on their progress.

Approximately one year after winners are notified, winners will convene again to present their progress in a Demo Day.

Yes! At each phase, the organizers will compile lists of opportunities for additional funding, support, and partnership. We also encourage your team, if not selected, to stay in touch with the organizers through ToolsCompetition@the-learning-agency.com and the Learning Engineering Google Group.
 
 
 

SPONSORED BY

“Bill & Melinda Gates Foundation” is a registered trademark of the Bill & Melinda Gates Foundation in the United States and is used with permission.