Prerequisite-related MOOC recommendation on learning path locating

Abstract

Prerequisite inadequacy is a major cause of MOOC drop-out. Recommendation is an effective way to intervene in the learning process, but existing MOOC recommendation mainly suggests subsequent learning objects that have not been learned before. This paper proposes a locating-based MOOC recommendation method that takes prerequisite relationships into account. It locates qualified and unqualified learning objects on the target learner's learning paths, recommends prerequisite objects for the unqualified ones, and recommends subsequent objects for the qualified ones. Experiments on real-world data show that our method improves both recommendation accuracy and learning performance.

Introduction

MOOC (Massive Open Online Course) platforms have developed rapidly in recent years, but drop-out rates reach as high as 90% [1]. Kizilcec found that frustration is an important factor in learners' persistence [2]. Pappano observed that MOOC learners are often frustrated by inadequate prerequisites, so they fail to keep pace and tend to drop out [3].

Recommendation helps guide learners through MOOC learning by providing suitable learning objects. Traditional MOOC recommendation mainly concerns learning objects that have not been learned before.

However, the prerequisite relationship between learning objects plays an important role in MOOC learning. Figure 1 shows the categories of the math subject on Khan Academy (https://www.khanacademy.org), one of the most popular MOOC platforms. MOOCers usually learn the objects in sequence: earlier knowledge provides the prerequisites for subsequent learning.

Fig. 1

Knowledge sequence of math subject of Khan Academy

MOOC platforms make efforts to support prerequisites for a better learning experience. Coursera (https://www.coursera.org) lists prerequisites in the course introduction. Khan Academy (https://www.khanacademy.org) lists courses by grade level and tests learners to find a suitable starting point. But few MOOC platforms provide recommendations on prerequisites, and to our knowledge no existing method recommends previously learned objects for further review.

This paper proposes a MOOC recommendation solution based on prerequisite correlation. Unlike traditional MOOC recommendation, which mainly concerns learning objects not learned before, locating-based MOOC recommendation (LMR) covers both prerequisite and subsequent learning objects according to the learner's situation. The situation is measured by locating qualified and unqualified learning objects on the target learner's learning paths (learning behaviors ordered in time). For unqualified learning objects, LMR recommends their prerequisites; for qualified learning objects, LMR recommends objects that take them as prerequisites. To generate recommendation candidates, collaborative filtering is extended to train on the paths of similar qualified learners.

The main contributions of LMR are as follows:

  • Different recommendation methods based on prerequisites are proposed. Both prerequisite and subsequent learning objects are recommended. Learning objects learned before are also covered, to support review.

  • Learning path locating enables adaptive recommendation according to the learner's performance on the path. The recommendation methods target the located qualified and unqualified learning objects.

  • Time decay due to forgetting is incorporated to modify learning scores. It discounts the scores to reflect the real knowledge retention at different time points. The modified scores feed into the prerequisite correlation calculation and serve as one of the recommendation features.

  • Experiments on real-world data show that LMR improves both recommendation accuracy and learning performance.

The rest of this paper is organized as follows. "Related work" reviews research on prerequisite relations and learning-path recommendation. The next section introduces the symbols used in this paper. After that, LMR is described following the recommendation workflow: learning path locating, the forgetting curve function, prerequisite coefficient calculation, qualified similar learners for collaborative filtering, and recommendation of prerequisite and subsequent learning objects. "Experiments" describes the dataset and compares recommendation methods on accuracy and learning performance. The last section is the "Summary".

Related work

Prerequisites are crucial for a satisfying learning experience. They are usually defined by expert labeling. Polyzou predicts academic performance based on expert-annotated prerequisite relationships between courses [4]. But manual labeling cannot keep up with the increasingly massive number of MOOCers.

In most research, prerequisite correlation is calculated through knowledge-based concept analysis. Yang built a concept map from the prerequisite relationships of existing curricula and used it to predict prerequisites [5]. Liu studied learning dependencies between knowledge points through text analysis [6]. Other research establishes prerequisite relationships between knowledge by analyzing concept maps [7,8,9]. Wikipedia is often used for prerequisite training. Liang defines knowledge prerequisite relationships based on links between Wikipedia pages [10, 11]. Wang adopts Wikipedia's links between knowledge concepts to build a concept map for teaching materials [12]. Agrawal extracts key concepts from textbooks and calculates prerequisite values between concept pairs from their frequency and order [13]. These methods are all based on knowledge content; they depend on text analysis and existing knowledge bases.

Sequential learning data are also used for recommendation. Lu used association rule mining to recommend courses, training on other learners' learning paths [14]. Sun analyzed learning paths with a meta-path method and enriched the learner's portfolio [15]. Chen compared the homogeneity between a user and an item's image by path similarity [16]. Yu learned similar users' behavioral sequences through collaborative filtering to make sequential recommendations [17]. These methods focus on sequential data analysis; the learner's location in the sequence is ignored.

Huang proposed a Markov-based recommendation on learning sequences, analyzing the learning path from the learner's history [18]. Mi recommended based on context trees, focusing more on solution design than implementation [19]. Yu used collaborative filtering over other users' sequential actions to recommend in a game with a storyline [17]. Lee recommended courses by learning from learners' behavioral sequences [20]. These recommendation methods mainly consider subsequent objects rather than prerequisite relationships, and their features mainly capture learning preference without considering learning performance.

We propose LMR to recommend both prerequisite and subsequent learning objects according to the learning path locating result. Both preference and performance are adopted as recommendation features.

Symbols

Before further discussion, the related symbols are listed and described in Table 1.

Table 1 Symbols

Locating-based MOOC recommendation for prerequisite and subsequent learning objects

Adaptive learning is a learning mode that responds to learners according to their learning situation [21, 22]. It helps lessen frustration and drop-out [23]. To fulfill adaptive learning, LMR recommends according to the target learner's current situation, which is measured by learning path locating. Learning path locating detects qualified and unqualified learning objects. LMR recommends prerequisite learning objects for the unqualified objects, and subsequent learning objects that take the qualified objects as prerequisites. Prerequisite correlation and learning scores are adopted as recommendation features; both are modified with time decay for forgetting. Following the LMR workflow, learning path locating, forgetting decay, prerequisite correlation, and recommendation on locating are introduced in turn.

Learning path locating

According to adaptive learning theory, learning recommendation should be based on the learner's situation [23]. Prerequisite and subsequent recommendation need to locate where the target learner failed and where the learner succeeded.

Learning path locating is shown in Fig. 2. The learner moves along the path successfully until d2, but fails on arriving at d1. We define d1 as the start of prerequisite inadequacy; prerequisite recommendation is required for learning d1. We define d2 as the subsequent start; subsequent objects that take d2 as a prerequisite are ready to be recommended for further successful learning.

Fig. 2

Learning path locating

Before locating the qualified and unqualified learning objects, the criterion for whether a learning score is qualified is defined as in (1). Following real-world exam criteria, if the score is no less than 60, the learner succeeds on the corresponding learning object; otherwise it is unqualified.

$$ g\left( {\text{se}} \right) = \begin{cases} 0, & {\text{se}} < 60 \\ 1, & {\text{se}} \ge 60 \end{cases} $$
(1)
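Criterion (1) and the locating step can be sketched together; this is a minimal illustration with hypothetical data structures (a learning path as time-ordered `(object, score)` pairs), not the authors' implementation:

```python
# Sketch of criterion (1) and learning path locating: given a learner's
# time-ordered (object, score) pairs, find the first unqualified object d1
# (the prerequisite-inadequacy start) and the most recent qualified object
# d2 (the subsequent start). The 60-point threshold follows Eq. (1).

def qualified(se, threshold=60):
    return 1 if se >= threshold else 0    # g(se) of Eq. (1)

def locate(path, threshold=60):
    """path: list of (learning_object, score) in learning order."""
    d1 = next((obj for obj, se in path if not qualified(se, threshold)), None)
    passed = [obj for obj, se in path if qualified(se, threshold)]
    d2 = passed[-1] if passed else None
    return d1, d2

# The learner passes o1 and o2, then fails o3: d1 = "o3", d2 = "o2".
print(locate([("o1", 85), ("o2", 72), ("o3", 40)]))  # -> ('o3', 'o2')
```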

Decay of learning score for forgetting over time

The German psychologist H. Ebbinghaus found that forgetting begins immediately after learning: knowledge retention declines as time passes.

Ebbinghaus also found that forgetting is not uniform: it is fast at first and then gradually slows down. He modeled the retention of mastered learning content as a function of time [24]. Table 2 lists Ebbinghaus' experimental results.

Table 2 Time points of Ebbinghaus’ experimental results

From the data in Table 2, Ebbinghaus proposed the forgetting curve shown in Fig. 3. Memory divides into short-term and long-term memory. The first memory zone is 5 min, the second 30 min, and the third 12 h; these first three zones belong to short-term memory [25]. The fourth zone is 1 day, the fifth 2 days, the sixth 4 days, the seventh 7 days, and the eighth 15 days; these last five zones belong to long-term memory [26].

Fig. 3

Forgetting curve

Even for science knowledge, proficiency declines over time; the learner still needs repeated practice to strengthen the skills.

By fitting the curve, the corresponding mathematical model is obtained as in (2). It represents the decay of score \( {\text{se}}_{si} \) due to forgetting over the time distance dis.

$$ f\left( {\text{se}}_{si} , {\text{dis}} \right) = {\text{se}}_{si} \cdot \left( 0.34 \cdot {\text{dis}}^{-0.2} + 0.13 \right) $$
(2)
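The fitted decay (2) is simple enough to state directly in code. The unit of dis is not specified here (Ebbinghaus' data mixes minutes, hours, and days), and dis must be positive for the power term; both points are assumptions of this sketch:

```python
def forget(se, dis):
    """Decay a learning score se over time distance dis, as in Eq. (2).
    dis must be > 0 for the negative power to be defined."""
    return se * (0.34 * dis ** -0.2 + 0.13)

# A score of 80 at time distance 1 retains 80 * (0.34 + 0.13) = 37.6,
# and retention keeps shrinking toward the 0.13 * se floor as dis grows.
print(round(forget(80, 1), 2))  # -> 37.6
```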

According to the Ebbinghaus forgetting curve [24], MOOC learners forget over time. LMR incorporates the forgetting curve to reflect the real knowledge retention after forgetting.

Measuring prerequisite correlation on learning scores

With improved collaborative filtering, learning paths of qualified similar learners are adopted for prerequisite recommendation (PR) and subsequent recommendation (SR).

Breese compared collaborative filtering based on correlation coefficients, vector similarity, and Bayesian statistics, and found the correlation coefficient more accurate [27]. LMR measures the prerequisite correlation coefficient with time-decayed learning scores.

If the learners' scores on i and \( d_{1} \) are positively correlated, the coefficient should be high. We calculate the correlation coefficient \( q\left( {i,d_{1} } \right) \) between the scores with the Pearson coefficient, as shown in (3):

$$ q\left( i,d_{1} \right) = \frac{\sum_{s = 0}^{n_{s}} \left( {\text{se}}_{si} - \overline{{\text{se}}_{i}} \right)\left( {\text{se}}_{sd_{1}} - \overline{{\text{se}}_{d_{1}}} \right)}{\sqrt{\sum_{s = 0}^{n_{s}} \left( {\text{se}}_{si} - \overline{{\text{se}}_{i}} \right)^{2} \sum_{s = 0}^{n_{s}} \left( {\text{se}}_{sd_{1}} - \overline{{\text{se}}_{d_{1}}} \right)^{2}}}. $$
(3)

The correlation between two learning behaviors is influenced by their time distance, since knowledge retention decays with forgetting. Assuming the learning behavior on object i takes place first, the score on i decays along the Ebbinghaus forgetting curve as in (2). Replacing \( {\text{se}}_{si} \) with (2) turns (3) into (4), where \( {\text{dis}}_{si, sd_{1}} \) is the time distance between the two learning behaviors of learner s on objects i and \( d_{1} \).

$$ q\left( i,d_{1} \right) = \frac{\sum_{s = 0}^{n_{s}} \left( f\left( {\text{se}}_{si} ,{\text{dis}}_{si, sd_{1}} \right) - \overline{{\text{se}}_{i}} \right)\left( {\text{se}}_{sd_{1}} - \overline{{\text{se}}_{d_{1}}} \right)}{\sqrt{\sum_{s = 0}^{n_{s}} \left( f\left( {\text{se}}_{si} ,{\text{dis}}_{si, sd_{1}} \right) - \overline{{\text{se}}_{i}} \right)^{2} \sum_{s = 0}^{n_{s}} \left( {\text{se}}_{sd_{1}} - \overline{{\text{se}}_{d_{1}}} \right)^{2}}} $$
(4)

To keep the correlation values within (0, 1), the sigmoid function is applied to (4), giving (5):

$$ q\left( i,d_{1} \right) = \frac{1}{1 + \exp \left( - \frac{\sum_{s = 0}^{n_{s}} \left( f\left( {\text{se}}_{si} ,{\text{dis}}_{si, sd_{1}} \right) - \overline{{\text{se}}_{i}} \right)\left( {\text{se}}_{sd_{1}} - \overline{{\text{se}}_{d_{1}}} \right)}{\sqrt{\sum_{s = 0}^{n_{s}} \left( f\left( {\text{se}}_{si} ,{\text{dis}}_{si, sd_{1}} \right) - \overline{{\text{se}}_{i}} \right)^{2} \sum_{s = 0}^{n_{s}} \left( {\text{se}}_{sd_{1}} - \overline{{\text{se}}_{d_{1}}} \right)^{2}}} \right)}. $$
(5)
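Equations (2)–(5) compose into one short computation. The sketch below assumes list-based inputs (per-learner scores on i and d1, plus per-learner time distances) and uses the mean of the decayed scores for \( \overline{{\text{se}}_{i}} \), which the paper leaves ambiguous; `exp(-r)` is equivalent to the negated numerator in (5):

```python
import math

# Sketch of the prerequisite-correlation measure (5): a Pearson coefficient
# between decayed scores on object i and raw scores on d1, squashed by a
# sigmoid into (0, 1). Input shapes are hypothetical.

def forget(se, dis):
    return se * (0.34 * dis ** -0.2 + 0.13)  # Eq. (2)

def prereq_corr(scores_i, scores_d1, dists):
    dec = [forget(se, d) for se, d in zip(scores_i, dists)]
    mi = sum(dec) / len(dec)                 # mean of decayed scores (assumed)
    md = sum(scores_d1) / len(scores_d1)
    num = sum((a - mi) * (b - md) for a, b in zip(dec, scores_d1))
    den = math.sqrt(sum((a - mi) ** 2 for a in dec)
                    * sum((b - md) ** 2 for b in scores_d1))
    r = num / den if den else 0.0            # Pearson coefficient, Eq. (4)
    return 1.0 / (1.0 + math.exp(-r))        # sigmoid, Eq. (5)
```

Positively correlated score pairs give a value above 0.5, negatively correlated pairs a value below 0.5, so the sigmoid preserves the direction of the correlation while bounding the range.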

Qualified similar learners for collaborative filtering

Collaborative filtering is usually used to recommend unlearned objects to the target learner based on the preferences of similar learners.

As shown in Fig. 4, solid lines indicate a learner's selection of a learning object, dashed lines indicate a recommendation to a learner, and dotted lines connect similar learners. Learner A selects learning objects 3 and 4; learner B selects learning objects 2, 3, and 4. Because A and B share the most learned courses, they are similar learners. Learning object 2, learned by similar learner B but not by A, is recommended to A.

Fig. 4

Collaborative filtering on similar learners

Collaborative filtering thus calculates the recommendation value by accumulating the 0/1 selections or preference values (i.e., favorable scores) of the top similar learners.

LMR improves this method to recommend along similar learners' paths. The improvements are mainly as follows:

  • Qualified similar learners Qualified similar learners on the located objects have successful learning experience, so their learning paths are worth referencing. Learning objects on their paths are adopted as recommendation candidates.

  • Learning object candidates according to the sequence of learning paths Collaborative filtering usually adopts all objects of similar learners as candidates. LMR instead selects candidates according to the sequence of similar learners' paths, because that sequence reflects the prerequisite relations between objects. Even if a candidate was learned by the target learner before, reviewing it is necessary to resolve the prerequisite inadequacy, whereas traditional collaborative filtering mainly covers only objects the target learner has not learned.

  • Time-featured recommendation value Learning scores are modified with time-based forgetting decay. The modified scores are used in the prerequisite correlation and recommendation value calculations, which helps increase recommendation accuracy.
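The first improvement, selecting qualified similar learners, can be sketched as follows. The paper only requires learners to be "similar" and "qualified on d1"; the Jaccard similarity over learned-object sets used here is an assumed placeholder for whatever similarity measure is actually employed:

```python
# Sketch of qualified-similar-learner selection: among learners who passed
# d1, rank by similarity of learned-object sets to the target learner and
# keep the top k1. The Jaccard measure is an assumption of this sketch.

def qualified_similar(target_objs, learners, d1, k1, threshold=60):
    """learners: {name: {obj: score}}; target_objs: set of learned objects."""
    def jaccard(objs):
        return len(target_objs & objs) / len(target_objs | objs)
    cand = [(name, jaccard(set(scores)))
            for name, scores in learners.items()
            if scores.get(d1, 0) >= threshold]    # qualified on d1 only
    cand.sort(key=lambda x: -x[1])
    return [name for name, _ in cand[:k1]]

learners = {
    "A": {"o1": 80, "o2": 90, "d1": 75},  # qualified on d1, most similar
    "B": {"o1": 70, "d1": 40},            # failed d1 -> excluded
    "C": {"o3": 95, "d1": 88},            # qualified, less similar
}
print(qualified_similar({"o1", "o2"}, learners, "d1", k1=2))  # -> ['A', 'C']
```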

Prerequisite recommendation for the unqualified learning object

According to the learning path locating, the first unqualified learning object \( d_{1} \) requires more proficiency on its prerequisites. LMR recommends prerequisite learning objects from the paths of similar learners qualified on \( d_{1} \): along those paths, the objects before \( d_{1} \) are adopted as recommendation candidates, as Fig. 5 shows.

Fig. 5

Prerequisite recommendation candidates for the unqualified learning object

The prerequisite recommendation procedure is shown in Algorithm 1. It recommends the prerequisite learning object set \( I_{pr} \) for the target learner r. L{l1,l2,…,lm} is the vector representation of the learners; each learner vector lm comprises the learning objects that learner has learned. The loop goes through the similar learners in set S{s1,s2,…,sk1} who are qualified on \( d_{1} \). Their learning objects before \( d_{1} \), denoted s{i1, i2,…,iindex(d1)}, are adopted as recommendation candidates i. The qualified similar learners' scores sesi on the candidate objects contribute to the recommendation value pri, with the prerequisite correlation \( q\left( {i,d_{1} } \right) \) multiplied as a weight. After the loop, the recommendation value is normalized by the sum of weights decri.

seri is the target learner r's score on learning object i. It correlates negatively with the recommendation value, reflecting the necessity of review. To account for forgetting after learning, the score is decayed by the forgetting curve function, as in \( f\left( {\text{se}}_{si} , {\text{dis}}\left( {\text{time}}\left( r,d_{1} \right),t_{n} \right) \right) \), where \( t_{n} \) is the system time of recommendation. \( 100 - f\left( {\text{se}}_{r,i} , {\text{dis}}\left( {\text{time}}\left( r,d_{1} \right),t_{n} \right) \right) \) expresses this negative correlation. \( {\text{se}}_{r,d_{1}} \) is the target learner's score on the prerequisite start \( d_{1} \); it is also negatively correlated with the recommendation value, to indicate the necessity of review.

Algorithm 1 Prerequisite recommendation for the unqualified learning object
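The description above can be sketched roughly as follows. This is a simplified reading of Algorithm 1, not the authors' code: the similarity weight and the target learner's \( d_{1} \) score are omitted for brevity, `q` stands for the prerequisite correlation (5), and `target_decayed` holds the target learner's already-decayed scores:

```python
# Rough sketch of Algorithm 1 (prerequisite recommendation): accumulate
# qualified similar learners' scores on objects before d1, weighted by the
# prerequisite correlation q(i, d1), normalize by the summed weights, and
# scale by the review-need factor (100 - decayed target score).

def prereq_recommend(similar_paths, d1, q, target_decayed, k2):
    """similar_paths: per qualified similar learner, the (obj, score)
    pairs that precede d1 on that learner's path."""
    value, weight = {}, {}
    for path in similar_paths:
        for i, se in path:
            w = q(i, d1)
            value[i] = value.get(i, 0.0) + w * se
            weight[i] = weight.get(i, 0.0) + w
    rec = {}
    for i in value:
        base = value[i] / weight[i]               # normalized by weight sum
        need = 100 - target_decayed.get(i, 0.0)   # review necessity
        rec[i] = base * need
    return sorted(rec, key=rec.get, reverse=True)[:k2]
```

With a constant correlation, an object the target learner has nearly forgotten (low decayed score) outranks a well-retained one, matching the review-oriented design above.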

Subsequent recommendation for the qualified learning objects

According to the learning path locating result, the most recent qualified learning object of the learner is d2, meaning all of the target learner's learning objects before d2 were successfully learned; they are denoted b. According to the qualified similar learners' paths, the subsequent learning objects with b as a prerequisite are adopted as recommendation candidates. For example, for the qualified prerequisite d2, the first learning object after d2 on each qualified similar learner's path is adopted as a recommendation candidate, as Fig. 6 shows.

Fig. 6

Subsequent recommendation candidates for the qualified prerequisite d2

The subsequent recommendation values of the candidates are calculated from the learning features of the qualified similar learners, as Algorithm 2 shows. It recommends the subsequent learning object set Ifr for the target learner r.

In Algorithm 2, the qualified similar learners' performance on the learning objects following d2 is accumulated, with the learning scores weighted by similarity and prerequisite correlation values. For each b in r{i1, i2,…,iindex(d2)}, the qualified similar learners' scores and the target learner's own score are combined for recommendation. The learning scores of b are modified by the forgetting curve function to fit the real knowledge retention.

Algorithm 2 Subsequent recommendation for the qualified learning objects
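A simplified sketch of this accumulation, restricted for brevity to the single prerequisite d2 rather than the full set b, with `q` again standing for the prerequisite correlation and `sims` for the per-learner similarities (both hypothetical inputs of this sketch):

```python
# Rough sketch of Algorithm 2 (subsequent recommendation): along each
# qualified similar learner's path, the first object after d2 is a
# candidate; its score is accumulated, weighted by that learner's
# similarity and the prerequisite correlation q(d2, candidate).

def subseq_recommend(similar_paths, sims, d2, q, k2):
    """similar_paths: per learner, the ordered (obj, score) pairs;
    sims: per learner, similarity to the target learner."""
    value = {}
    for path, sim in zip(similar_paths, sims):
        objs = [obj for obj, _ in path]
        if d2 not in objs or objs.index(d2) + 1 >= len(path):
            continue                               # no object after d2
        cand, se = path[objs.index(d2) + 1]        # first object after d2
        value[cand] = value.get(cand, 0.0) + sim * q(d2, cand) * se
    return sorted(value, key=value.get, reverse=True)[:k2]
```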

Experiments

Our experiments are conducted on learning data from the micro-video platform jClass (http://jclass.pte.sh.cn). The platform records various data of the learning process, including learning scores, learning objects, learning time, etc.

Experiment on accuracy improvement

The number of selected top similar qualified learners for recommendation is denoted k1, and the number of top-valued recommended objects is denoted k2. Different combinations of k1 and k2 are tested separately to find the best recommendation accuracy. Weight parameters between features are set to 1 without loss of generality.

PR and SR are compared with two collaborative filtering baselines: one based on preference (CFPreference) and the other based on learning scores (CFScore).
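The accuracy metrics reported below follow the standard top-k definitions; the exact hit criterion is assumed here to be "recommended object was actually learned in the test period":

```python
# Minimal sketch of the accuracy metrics used in the comparison: for each
# learner, the top-k2 recommended objects are checked against the objects
# actually learned in the test data (hit = recommended and learned).

def prf(recommended, actual):
    hits = len(set(recommended) & set(actual))
    p = hits / len(recommended) if recommended else 0.0   # precision
    r = hits / len(actual) if actual else 0.0             # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0            # harmonic mean
    return p, r, f1

# Two of three recommendations hit, out of four learned objects.
print(prf(["o1", "o2", "o3"], ["o2", "o3", "o4", "o5"]))
```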

Figure 7 shows the precision comparison over different k1 and k2 combinations. Overall, SR achieves the best precision: along each qualified similar learner's path, only the first learning object after the subsequent start is adopted as a candidate. PR's precision is better than that of the CF methods, showing that recommendation based on prerequisite correlation performs better. CFScore has higher precision than CFPreference, indicating that incorporating the score feature in MOOC recommendation helps improve accuracy.

Fig. 7

Precision comparison between different recommendations

Figure 8 compares recall over different combinations of k1 and k2. PR and SR outperform the CF methods, and PR outperforms SR: PR's recommendation candidates cover all possible prerequisite learning objects of similar learners, which helps improve recall.

Fig. 8

Recall comparison between different recommendations

Figure 9 compares the F1-score over different combinations of k1 and k2. As the balance of precision and recall, the F1-scores of PR and SR outperform CFPreference and CFScore: prerequisite-aware MOOC recommendation achieves better accuracy than the traditional CF methods.

Fig. 9

F1-score comparison between different recommendations

Experiment on application to different types of learning objects

To analyze the influence of forgetting decay on different types of knowledge, the learning objects in the dataset are classified into arts and science, which differ in memory mode. In the whole dataset, arts and science objects each account for nearly half, but among the learning objects hit in the test dataset, 62.1% belong to arts, as Table 3 shows. Science objects also perform well: science knowledge may not rely on memory as much, but reviewing is still necessary to improve proficiency, as the Ebbinghaus forgetting curve suggests.

Table 3 Performance difference on learning objects between arts and science

Experiment on learning performance

Further experiments verify the improvement in learning scores achieved by PR and SR.

Figure 10 compares all learning scores in the whole dataset with the scores on objects hit by PR. The lowest score improves from 8.33 to 16.67 and the average score from 61.7 to 74.6; the highest score remains 100, the full score.

Fig. 10

Learning score comparison on PR

Figure 11 compares all learning scores in the whole dataset with the scores on objects hit by SR. The lowest score improves from 8.33 to 14.29 and the average score from 61.7 to 83.9; the highest score remains 100, the full score.

Fig. 11

Learning score comparison on SR

PR and SR thus help improve learning performance, bringing the target learner more satisfaction and less frustration. SR has a higher average score than PR because PR targets prerequisite inadequacy: it uses the target learner's past scores on candidate objects as a recommendation feature, and a lower past score implies a greater need for review and a higher recommendation value.

Summary

This paper proposes a prerequisite-aware MOOC recommendation method. Prerequisite correlation is calculated on learning scores modified by time-based forgetting decay. The recommendation covers both prerequisite and subsequent learning according to the learning path locating result, which differs from usual MOOC recommendation of objects not learned before. Experiments verify the accuracy improvement of both prerequisite and subsequent recommendation, and additional experiments demonstrate the improvement in learning performance. Future research on forgetting functions for different kinds of learning objects may further improve recommendation accuracy.

Availability of data and materials

The data that support the findings of this study are available from JClass but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of JClass.

References

  1. Breslow L, Pritchard DE, De Boer J, Stump GS, Ho AD, Seaton DT. Studying learning in the worldwide classroom: research into edX's first MOOC. Res Pract Assess. 2013;8:13–25.

  2. Kizilcec RF, Piech C, Schneider E. Deconstructing disengagement: analyzing learner subpopulations in massive open online courses. In: Proceedings of the third international conference on learning analytics and knowledge. New York: ACM; 2013. p. 170–9.

  3. Pappano L. The year of the MOOC. New York: New York Times; 2012.

  4. Polyzou A, Karypis G. Grade prediction with course and student specific models. In: Pacific-Asia conference on knowledge discovery and data mining. Berlin: Springer; 2016. p. 89–101.

  5. Yang Y, Liu H, Carbonell J, Ma W. Concept graph learning from educational data. In: Proceedings of the eighth ACM international conference on web search and data mining. New York: ACM; 2015. p. 159–68.

  6. Liu J, Jiang L, Wu Z, Zheng Q, Qian Y. Mining learning-dependency between knowledge units from text. VLDB J. 2011;20(3):335–45.

  7. Huang X, Yang K, Lawrence VB. An efficient data mining approach to concept map generation for adaptive learning. In: Industrial conference on data mining. Berlin: Springer; 2015. p. 247–60.

  8. Scheines R, Silver E, Goldin IM. Discovering prerequisite relationships among knowledge components. In: EDM. 2014. p. 355–6.

  9. Vuong A, Nixon T, Towle B. A method for finding prerequisites within a curriculum. In: EDM. 2011. p. 211–6.

  10. Liang C, Wu Z, Huang W, Giles CL. Measuring prerequisite relations among concepts. In: Proceedings of the 2015 conference on empirical methods in natural language processing. 2015. p. 1668–74.

  11. Talukdar PP, Cohen WW. Crowdsourced comprehension: predicting prerequisite structure in Wikipedia. In: Proceedings of the seventh workshop on building educational applications using NLP. Association for Computational Linguistics; 2012. p. 307–15.

  12. Wang S, Ororbia A, Wu Z, Williams K, Liang C, Pursel B, Giles CL. Using prerequisites to extract concept maps from textbooks. In: Proceedings of the 25th ACM international conference on information and knowledge management. New York: ACM; 2016. p. 317–26.

  13. Agrawal R, Golshan B, Terzi E. Grouping students in educational settings. In: Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining. New York: ACM; 2014. p. 1017–26.

  14. Lu Z, Pan SJ, Li Y, Jiang J, Yang Q. Collaborative evolution for user profiling in recommender systems. In: IJCAI. 2016. p. 3804–10.

  15. Sun Y, Han J, Yan X, Yu PS, Wu T. PathSim: meta path-based top-k similarity search in heterogeneous information networks. Proc VLDB Endow. 2011;4(11):992–1003.

  16. Chen Y, Zhao X, Gan J, Ren J, Hu Y. Content-based top-n recommendation using heterogeneous relations. In: Australasian database conference. Berlin: Springer; 2016. p. 308–20.

  17. Yu H, Riedl MO. A sequential recommendation approach for interactive personalized story generation. In: Proceedings of the 11th international conference on autonomous agents and multiagent systems, vol. 1. International Foundation for Autonomous Agents and Multiagent Systems; 2012. p. 71–8.

  18. Huang YM, Huang TC, Wang KT, Hwang WY. A Markov-based recommendation model for exploring the transfer of learning on the web. J Educ Technol Soc. 2009;12(2):144.

  19. Mi F, Faltings B. Adaptive sequential recommendation using context trees. In: IJCAI. 2016. p. 4018–9.

  20. Lee Y, Cho J. An intelligent course recommendation system. Smart. 2011;1(1):69–84.

  21. Education Growth Advisors. Learning to adapt: understanding the adaptive learning supplier landscape. Seattle: PLN/Bill and Melinda Gates Foundation; 2013.

  22. Oxman S, Wong W, DVX Innovations. White paper: adaptive learning systems. Integrated Education Solutions; 2014. p. 6–7.

  23. Clark D. Adaptive MOOCs. CogBooks adaptive learning. CogBooks; 2013.

  24. Ebbinghaus H. Memory: a contribution to experimental psychology. Ann Neurosci. 2013;20(4):155.

  25. Schacter DL. The seven sins of memory: insights from psychology and cognitive neuroscience. Am Psychol. 1999;54(3):182.

  26. Averell L, Heathcote A. The form of the forgetting curve and the fate of memories. J Math Psychol. 2011;55(1):25–35.

  27. Breese JS, Heckerman D, Kadie C. Empirical analysis of predictive algorithms for collaborative filtering. In: Proceedings of the fourteenth conference on uncertainty in artificial intelligence. Burlington: Morgan Kaufmann; 1998. p. 43–52.


Acknowledgements

The work is funded by the computer science and technology subject of Shanghai Polytechnic University under No. xxkzd1604. There is no conflict of interest for this paper. As a paper recommended by the conference to this journal, part of it was published in the proceedings of the conference of the same name; additional content on ideas and experiments has been added, and more than 30% of the two papers' content differs.

Funding

The work is funded by the computer science and technology subject of Shanghai Polytechnic University under No. xxkzd1604.

Author information

YP contributed to the algorithm and draft of this paper. NW contributed to the paper writing. YZ and YJ worked on the coding and experiment result collection. WJ updated the paper writing. WT provided support of technology and fund. All authors read and approved the final manuscript.

Correspondence to Yanxia Pang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Keywords

  • MOOC
  • Recommendation
  • Prerequisite
  • Learning path