
Factorization threshold models for scale-free networks generation



Several models for producing scale-free networks have been suggested; most of them are based on the preferential attachment approach. In this article, we suggest a new approach for generating scale-free networks with an alternative source of the power-law degree distribution.


The model derives from matrix factorization methods and geographical threshold models that were recently proven to show good results in generating scale-free networks. We associate each node with a vector having latent features distributed over a unit sphere and with a weight variable sampled from a Pareto distribution. We join two nodes by an edge if they are spatially close and/or have large weights.

Results and conclusion

The network produced by this approach is scale free and has a power-law degree distribution with an exponent of 2. In addition, we propose an extension of the model that allows us to generate directed networks with tunable power-law exponents.


Most social, biological, topological and technological networks display distinct nontrivial topological features, demonstrating that connections between their nodes are neither purely regular nor purely random [1]. Such systems are called complex networks. One of the well-known and well-studied classes of complex networks is scale-free networks, whose degree distribution P(k) follows a power law \(P(k) \sim k^{-\alpha }\), where \(\alpha \) is a parameter whose value typically lies in the range \(2< \alpha < 3\). Many real networks have been reported to be scale-free [2].

Generating scale-free networks is an important problem because they usually have useful properties, such as high clustering [3], robustness to random attacks [4] and easily achievable synchronization [5]. Several models for producing scale-free networks have been suggested; most of them are based on the preferential attachment approach [1]. This approach forces existing nodes of higher degrees to gain edges added to the network more rapidly in a “rich-get-richer” manner. This paper offers a model with an alternative explanation of the scale-free property.

Our approach is inspired by matrix factorization, a machine learning method that has been successfully used for link prediction [6]. The main idea is to approximate a network adjacency matrix by a product of matrices V and \(V^T\), where V is the matrix of nodes’ latent feature vectors. To create a generative model of scale-free networks, we sample latent features V from some probabilistic distribution and generate a network adjacency matrix. Two nodes are connected by an edge if the dot product of their latent features exceeds some threshold. This threshold condition is influenced by the geographical threshold models that are applied to scale-free network generation [7]. Because of the methods used (adjacency matrix factorization and threshold condition), we call our model the factorization threshold model.

A network produced in such a way is scale-free and follows a power-law degree distribution with an exponent of 2, which differs from the results for basic preferential attachment models [8–10], where the exponent equals 3. We also suggest an extension of our model that allows us to generate directed networks with a tunable power-law exponent.

This paper is organized as follows. “Related work” section provides information about related works that inspired us. The formal description of our model in the case of an undirected fixed size network is presented in “Model description” section, which is followed by a discussion of how to generate growing networks. In “Generating sparse networks” section, the problem of making the resulting networks sparse is considered. “Degree distribution” section shows that our model indeed produces scale-free networks. Extensions of our model, which allow us to generate directed networks with tunable power-law exponents and some other interesting properties, are discussed in “Model modifications” section. “Conclusion” section concludes the paper.

Related work

In this section, we consider related works that encouraged us to create a new model for complex networks generation.

Matrix factorization

Matrix factorization is a group of algorithms where a given matrix R is factorized into two smaller matrices Q and P such that: \(R \approx Q^TP\) [11].

There is a popular approach in recommendation systems which is based on matrix factorization [12]. Assume that users express their preferences by rating some items; these ratings can be viewed as an approximate representation of their interests. Combining the known ratings, we get a partially filled matrix R; the idea is to approximate the unknown ratings using a factorization \(R \approx Q^TP\). The geometrical interpretation is the following. The rows of the matrices Q and P can be seen as latent feature vectors \(\vec {q}_i\) and \(\vec {p}_u\) of items and users, respectively. The dot product \((\vec {q}_i, \vec {p}_u)\) captures the interaction between a user u and an item i, and it should approximate the rating of the item i by the user u: \(R_{ui} \approx (\vec {q}_i, \vec {p}_u)\). The mapping of users and items to latent features is treated as the optimization problem of minimizing the distance between R and \(Q^TP\), which is usually solved with stochastic gradient descent (SGD) or alternating least squares (ALS).
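As an illustration, the SGD variant of this optimization can be sketched in a few lines (a minimal sketch with illustrative names and synthetic data, not the implementation used in [12]; it performs the standard regularized gradient step on each observed rating):

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.02, reg=0.01, epochs=2000, seed=0):
    """Approximate a partially observed rating matrix R by Q^T P using SGD."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent feature vectors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent feature vectors
    for _ in range(epochs):
        for u, i, r in ratings:                   # (user, item, rating) triples
            err = r - P[u] @ Q[i]                 # residual on one known entry
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return P, Q

# Tiny synthetic example: two users with opposite tastes over two items
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 5.0)]
P, Q = sgd_mf(ratings, n_users=2, n_items=2)
```

After training, an unknown rating \(R_{ui}\) is predicted by the dot product `P[u] @ Q[i]`.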

Furthermore, matrix factorization has also been applied to link prediction in networks [6]. Link prediction refers to the problem of finding missing or hidden links which probably exist in a network [13]. In [6] it is solved via matrix factorization: a network adjacency matrix A is approximated by a product of the matrices V and \(V^T\), where V is the matrix of nodes’ latent features.

Geographical threshold models

Geographical threshold models have recently been shown to produce good results in scale-free network generation [7]. We briefly summarize one variation of these models [14].

Suppose the number of nodes is fixed. Each node carries a randomly and independently distributed weight variable \(w_i \in \mathbb {R}\). The nodes are also uniformly and independently distributed with a specified density in \(\mathbb {R}^d\). A pair of nodes with weights \(w, w'\) at Euclidean distance r is connected if and only if:

$$\begin{aligned} (w + w') \cdot h(r) \ge \theta , \end{aligned}$$

where \(\theta \) is the model threshold parameter and h(r) is the distance function that is assumed to decrease in r. For example, we can take \(h(r) = r^{-\beta },\) where \(\beta > 0.\)

First, exponential distribution of weights with the inverse scale parameter \(\lambda \) has been studied. This distribution of weights leads to scale-free networks with a power-law exponent of 2: \(P(k) \propto k^{-2}\). Interestingly, the power-law exponent does not depend on \(\lambda \), d and \(\beta \) in this case. Second, Pareto weight distribution with scale parameter \(w_0\) and shape parameter a has been considered. In this case, a tunable power-law degree distribution has been achieved: \(P(k) \propto k^{-1 - \frac{a \beta }{d} }\).
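A small simulation sketch of this connection rule with exponential weights and \(h(r) = r^{-\beta }\) (following the unit-cube variant [15]; the function name and parameter values are illustrative):

```python
import numpy as np

def geo_threshold_graph(n, lam, theta, beta=1.0, d=2, seed=0):
    """Connect i, j iff (w_i + w_j) * r_ij^{-beta} >= theta."""
    rng = np.random.default_rng(seed)
    w = rng.exponential(1.0 / lam, n)            # exponential weights, rate lam
    pos = rng.random((n, d))                     # uniform coordinates in [0, 1]^d
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(r, np.inf)                  # exclude self-pairs
    return (w[:, None] + w[None, :]) * r ** (-beta) >= theta

adj = geo_threshold_graph(300, lam=1.0, theta=2.0, seed=1)
```

For large n and a suitable threshold, the degree sequence of such a sampler should exhibit the heavy tail described above.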

There are other variations of this approach: uniform distribution of coordinates in the \(d\)-dimensional unit cube [15], lattice-based models [16, 17] and even networks embedded in fractal space [18].

Model description

We study matrix factorization theoretically by turning it from a trainable supervised model into a generative probabilistic model. When matrix factorization is used in machine learning, the adjacency matrix A is given and the goal is to train the model by tuning the matrix of latent features V in such a way that \(A \approx V^T V\). In our model, we do the reverse: latent features V are sampled from some probabilistic distribution and we generate a network adjacency matrix A based on \(V^T V\).

Formally our model is described in the following way:

$$\begin{aligned} {\left\{ \begin{array}{ll} A_{ij} = \mathrm {I}\left[ ( \vec {v_i}, \vec {v_j}) \ge \theta \right] \\ \vec {v_i} = w_i \vec {x_i} \in \mathbb {R}^d \\ w_i \sim \text {Pareto}(a, w_0) , ~ \vec {x_i} \sim \text {Uniform} ( S^{d-1}) \\ i = 1\ldots n, ~ j = 1\ldots n \end{array}\right. } \end{aligned}$$
  • Network has n nodes and each node is associated with a d-dimensional latent features vector \(\vec {v_i}\).

  • Each latent features vector \(\vec {v_i}\) is a product of weight \(w_i\) and direction \(\vec {x_i}\).

  • Directions \(\vec {x_i}\) are i.i.d. random vectors uniformly distributed over the surface of \((d-1)\)-sphere.

  • Weights are i.i.d. random variables distributed according to Pareto distribution with the following density function f(w):

    $$\begin{aligned} f(w) = \frac{a}{w_0} {\left( \frac{w_0}{w}\right) }^{a + 1}\; (w \ge w_0). \end{aligned}$$
  • Edges between nodes i and j appear if a dot product of their latent features vectors \((\vec {v_i}, \vec {v_j})\) exceeds a threshold parameter \(\theta \).

Therefore, we take into consideration both a node’s importance \(w_i\) and its location \(\vec {x_i}\) on the surface of a \((d-1)\)-sphere (which can be interpreted as the earth in the case of \(\vec {x_i} \in S^{2} \subset \mathbb {R}^3\)). Thus, inspired by the matrix factorization approach, we achieve the following model behavior: edges are formed when a pair of nodes is spatially close and/or has large weights. Compared with the geographical threshold models, we use the dot product to measure the proximity of nodes instead of the Euclidean distance.
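The model above can be sampled directly; a minimal sketch (illustrative function name and arbitrary parameter values) is:

```python
import numpy as np

def factorization_threshold_graph(n, a, w0, theta, d=3, seed=0):
    """Sample v_i = w_i * x_i and connect i, j iff (v_i, v_j) >= theta."""
    rng = np.random.default_rng(seed)
    # NumPy's pareto samples the Lomax distribution; shifting by 1 and
    # scaling by w0 gives a classical Pareto(a, w0) variable.
    w = (rng.pareto(a, n) + 1.0) * w0
    x = rng.normal(size=(n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # uniform on S^{d-1}
    v = w[:, None] * x                                # latent feature vectors
    adj = v @ v.T >= theta                            # all pairwise dot products
    np.fill_diagonal(adj, False)                      # no self-loops
    return adj

adj = factorization_threshold_graph(2000, a=3.0, w0=1.0, theta=25.0)
degrees = adj.sum(axis=1)
```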

We have defined our model for fixed size networks, but in principle, our model can be generalized to the case of growing networks. The problem is that, as the size of a network tends to infinity, a fixed threshold \(\theta \) leads with high probability to a complete graph. But real networks are usually sparse.

Therefore, to introduce growing factorization threshold models, we use a threshold function \(\theta := \theta (n)\) which depends on the number of nodes n in the network. Then for every value of the network size n we have the same parameters except for the threshold \(\theta \). This means that at every step, when a new node is added to the graph, some of the existing edges are removed. In the next section, we find threshold functions which lead to sparse networks.

To preserve readability of the proofs, we consider only the case \(d = 3\) because proofs for higher dimensions can be derived in a similar way. However, we will give not only mean-field approximations but also strict probabilistic proofs, which to the best of our knowledge have not been done for geographical threshold models yet and can likely be applied in other works too.

Generating sparse networks

The aim of this section is to model sparse growing networks. To do this, we need to find a proper threshold function.

First, we have studied the growth of real networks. For example, Fig. 1 shows the growth of a citation graph. The data were obtained from the SNAP database. It can be seen that the function \(y(x) = 4.95 x \log x - 40 x\) is a good estimate of the growth rate of this network. That is why we decided to focus on the linearithmic or sub-linearithmic growth rate of the model (here and subsequently, by the growth of the model we mean the growth of the number of edges).

Fig. 1 The growth of the citation graph Arxiv HEP-PH

Analysis of the expected number of edges

Let M(n) denote the number of edges in the network of size n. To find its expectation, we need the two following lemmas.

Lemma 1

The probability for a node with weight w to be connected to a random node is

$$\begin{aligned} P_{e}(w) = {\left\{ \begin{array}{ll} \frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) , \quad &{}w > \frac{\theta }{w_0}, \\ \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)} w^a, \quad &{}w \le \frac{\theta }{w_0}. \end{array}\right. } \end{aligned}$$

Lemma 2

The edge probability in the network is

$$\begin{aligned} P_{e} = {\left\{ \begin{array}{ll} \frac{1}{2} - \frac{1}{2} \frac{a^2}{(a+1)^2}\frac{\theta }{w_0^{2}}, &{}\quad \theta < w_{0}^2, \\ \frac{w_0^{2a}}{2 \theta ^a} \left (\frac{a (\ln \theta - 2 \ln w_0)}{a+1} - \frac{a^2}{(a+1)^2} + 1\right ), &{}\quad \theta \ge w_{0}^2. \end{array}\right. } \end{aligned}$$

To improve readability, we moved the proofs of Lemmas 1 and 2 to the Appendix.
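Lemma 1 is easy to sanity-check numerically. The sketch below (arbitrary parameter values, chosen so that \(w \le \theta /w_0\) and the second branch applies) uses the fact that for \(d = 3\) the cosine of the angle between a fixed direction and a uniformly random one is uniform on \([-1, 1]\):

```python
import numpy as np

a, w0, theta, w = 3.0, 1.0, 10.0, 2.0      # note w <= theta / w0
rng = np.random.default_rng(42)
m = 2_000_000                              # Monte Carlo sample size

wp = (rng.pareto(a, m) + 1.0) * w0         # partner weights ~ Pareto(a, w0)
cos = rng.uniform(-1.0, 1.0, m)            # cosine of the angle for d = 3
estimate = np.mean(w * wp * cos >= theta)  # empirical edge probability

predicted = 0.5 * w0 ** a * w ** a / (theta ** a * (a + 1))  # Lemma 1: = 0.001 here
```

The two numbers agree to within Monte Carlo error (roughly \(10^{-4}\) at this sample size).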

The next theorem shows that our model can have any growth which is less than quadratic.

Theorem 1

Let R(n) be a function such that \(R(n) = o(n^2)\) and \(R(n)>0\). Then there exists a threshold function \(\theta (n)\) such that the growth of the model is R(n):

$$\begin{aligned} \exists N \quad \mathrm {E}M(n) = R(n) \quad (n \ge N). \end{aligned}$$

Proof It is easy to check that \(P_{e}\) is a continuous function of \(\theta \). By the intermediate value theorem, \(P_{e}(\theta )\) takes every value between \(P_{e}(\theta = 0) = 1/2\) and \(P_{e}(\theta = \infty ) = 0\) at some point within the interval.

Since \(R(n) = o(n^2)\) and positive, there exists N such that for all \(n \ge N\), \(0< R(n) < \frac{1}{2} \times \frac{n(n-1)}{2}\).

It means that the equation \(\mathrm {E}M(n) = R(n)\) is feasible for all \(n \ge N\). \(\square \)

Taking into account Theorem 1, we obtain parameters for the linearithmic and linear growths of the expected number of edges.

Theorem 2

Suppose the following threshold function: \({\theta (n) = D n^{\frac{1}{a}}}\) where D is a constant. Then the growth of the model is linearithmic:

$$\begin{aligned} \mathrm {E}M (n) = A n \ln n (1 + o(1)) \quad \left( n \ge \frac{w_0^{2a}}{D^a}\right) , \end{aligned}$$

where \(\mathrm {A}\) is a constant depending on the Pareto distribution parameters.

Proof We can rewrite inequality \(n \ge \frac{w_0^{2a}}{D^a}\) as \({Dn^{\frac{1}{a}} \ge w_0^2}\) and apply Lemma 2 in the case \(\theta (n) = Dn^{\frac{1}{a}} \ge w_0^2\)

$$\begin{aligned} \mathrm {E}M = \frac{n(n-1)}{2}\frac{w_0^{2a}}{2 \theta ^a} \Big (\frac{a (\ln \theta - 2 \ln w_0 )}{a+1} - \frac{a^2}{(a+1)^2} + 1\Big ). \end{aligned}$$

If we replace \(\theta \) by \(Dn^{\frac{1}{a}}\), we obtain

$$\begin{aligned} \mathrm {E}M(n) &=\frac{n(n-1)w_0^{2a}}{4 (Dn^{\frac{1}{a}})^a} \left (\frac{a (\ln (Dn^{\frac{1}{a}}) - 2 \ln w_0)}{a+1} - \frac{a^2}{(a+1)^2} + 1\right) \\ &= \frac{(n-1)w_0^{2a}}{4D^a} \left (\frac{\ln n}{a+1} - \frac{a^2}{(a+1)^2} + 1 + \frac{a(\ln D - 2\ln w_0)}{a+1}\right) \\ &= \mathrm {A} n \ln n (1 + o(1)). \end{aligned}$$

\(\square \)
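Reading off the leading term above, the constant is \(\mathrm {A} = \frac{w_0^{2a}}{4 D^a (a+1)}\), and the convergence of \(\mathrm {E}M(n)/(n \ln n)\) to it can be checked numerically (a small sketch; the function names are illustrative):

```python
import math

def edge_probability(a, w0, theta):
    """Edge probability P_e from Lemma 2, case theta >= w0^2."""
    assert theta >= w0 ** 2
    return (w0 ** (2 * a) / (2 * theta ** a)) * (
        a * (math.log(theta) - 2 * math.log(w0)) / (a + 1)
        - a ** 2 / (a + 1) ** 2 + 1)

def expected_edges(n, a, w0, D):
    """E M(n) = n(n-1)/2 * P_e with the threshold function of Theorem 2."""
    return n * (n - 1) / 2 * edge_probability(a, w0, D * n ** (1 / a))

a, w0, D = 3.0, 1.0, 1.0
limit = w0 ** (2 * a) / (4 * D ** a * (a + 1))   # leading coefficient A
ratios = [expected_edges(n, a, w0, D) / (n * math.log(n))
          for n in (10 ** 4, 10 ** 6, 10 ** 8)]  # slowly decreasing toward A
```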

Theorem 3

Suppose that the growth of the model is sub-linearithmic: \({\frac{\mathrm {E}M(n)}{n\ln n} = o(1)}\). Then \({\frac{n^{\frac{1}{a}}}{\theta (n)} = o(1)}\).

Proof Let us consider another model with a threshold function \(\theta '(n) = Dn^{\frac{1}{a}}\) and the expected number of edges \(\mathrm {E}M'(n)\). According to Theorem 2 and the condition \({\frac{\mathrm {E}M(n)}{n\ln n} = o(1)}\) there exists a natural number \(N_D\) such that

$$\begin{aligned} \forall n \ge N_D \quad \mathrm {E}M'(n) = \mathrm {A} n \ln n (1 + o(1)) \ge \mathrm {E}M(n). \end{aligned}$$

Since \(\mathrm {E}M\) is a decreasing function of the threshold, this also means that for all \(n \ge N_D\) we have \(\theta (n) \ge \theta '(n)\). Therefore

$$\begin{aligned} \forall n \ge N_D \quad \frac{n^{\frac{1}{a}}}{\theta (n)} \le \frac{n^{\frac{1}{a}}}{\theta '(n)} = \frac{1}{D}. \end{aligned}$$

By the arbitrariness of the choice of D, we have \( \frac{n^{\frac{1}{a}}}{\theta (n)} = o(1)\). \(\square \)

Concentration theorem

In this section, we find the variance of the number of edges and prove the concentration theorem.

Proofs of the following lemmas can be found in the Appendix.

Lemma 3

Suppose that x, y and z are random nodes. Let \(P_{<}\) be the probability for the node x to be connected to both nodes y and z. Then the variance of the number of edges M is

$$\begin{aligned} \mathrm {Var}(M) = \frac{n(n-1)}{2} P_{e}(1 - P_{e}) + n \frac{(n-1)(n-2)}{2}(P_{<} - P_{e}^2). \end{aligned}$$

Lemma 4

Suppose that x, y and z are random nodes. Let \(P_{<}\) be the probability for the node x to be connected to both nodes y and z. Then

$$\begin{aligned} P_{<} = {\left\{ \begin{array}{ll} \frac{1}{4}\frac{w_0^{2a}}{\theta ^{2a}(a+1)^2} [\theta ^{a} - w_0^{2a}] + \frac{1}{4}\frac{w_0^{2a}}{\theta ^{a}}\Big [ 1 - 2 \frac{a^2 }{(a+1)^2} + \frac{a^3 }{(a+1)^2(a+2)} \Big ], \quad &{}\theta \ge w_{0}^2,\\ \frac{1}{4} - \frac{1}{2} \frac{a^2 \theta }{(a+1)^2} \frac{1}{w_0^2} + \frac{1}{4}\frac{a^3 \theta ^2 }{(a+1)^2(a+2)} \frac{1}{w_0^4} , \quad &{}\theta < w_{0}^2. \end{array}\right. } \end{aligned}$$

Combining these results, we get the following theorem, which will be needed to prove the concentration theorem.

Theorem 4

If \(\theta \ge w_0^2\), the variance is

$$\begin{aligned} \mathrm {Var}(M) = \mathrm {E}M + n \frac{(n-1)(n-2)}{2}\left [A\frac{1}{\theta ^{a}} + B \frac{1}{\theta ^{2a}} \right ] - \frac{2(n-2)}{n(n-1)} (\mathrm {E}M)^2 , \end{aligned}$$

where A and B are constants which depend on the Pareto distribution parameters.

Proof According to Lemmas 3 and 4, in the case \(\theta \ge w_0^2\) the variance is

$$\begin{aligned} \mathrm {Var}(M) = \frac{n(n-1)}{2} P_{e}(1 - P_{e}) + n \frac{(n-1)(n-2)}{2}(P_{<} - P_{e}^2). \end{aligned}$$
$$\begin{aligned} P_{<} = \frac{1}{4}\frac{w_0^{2a}}{\theta ^{2a}(a+1)^2} [\theta ^{a} - w_0^{2a}] + \frac{1}{4}\frac{w_0^{2a}}{\theta ^{a}}\Big [ 1 - 2 \frac{a^2 }{(a+1)^2} + \frac{a^3 }{(a+1)^2(a+2)} \Big ] \end{aligned}$$

According to Lemma 2, the expected number of edges is

$$\begin{aligned} \mathrm {E}M = \frac{n(n-1)}{2}P_{e}. \end{aligned}$$

Combining (8) and (6), we obtain

$$\begin{aligned} \mathrm {Var}(M) = \mathrm {E}M(1 - P_{e}) + n \frac{(n-1)(n-2)}{2}P_{<} - \mathrm {E}M (n-2) P_{e} = \mathrm {E}M + n \frac{(n-1)(n-2)}{2}P_{<} - \frac{2(n-2)}{n(n-1)} (\mathrm {E}M)^2. \end{aligned}$$


$$\begin{aligned} P_{<} = \frac{1}{4}\frac{w_0^{2a}}{\theta ^{2a}(a+1)^2} [\theta ^{a} - w_0^{2a}] + \frac{1}{4}\frac{w_0^{2a}}{\theta ^{a}}\left [ 1 - 2 \frac{a^2 }{(a+1)^2} + \frac{a^3 }{(a+1)^2(a+2)} \right] = \frac{1}{\theta ^a}C_{1} - \frac{1}{\theta ^{2a}}C_{2} + \frac{1}{\theta ^a} C_{3} = A\frac{1}{\theta ^{a}} + B \frac{1}{\theta ^{2a}}, \end{aligned}$$

where \(C_1\), \(C_2\), \(C_3\), A and B are constants depending on the Pareto distribution parameters.

Finally, we obtain

$$\begin{aligned} \mathrm {Var}(M) = \mathrm {E}M + n \frac{(n-1)(n-2)}{2}\left [A\frac{1}{\theta ^{a}} + B \frac{1}{\theta ^{2a}} \right] - \frac{2(n-2)}{n(n-1)} (\mathrm {E}M)^2. \end{aligned}$$

\(\square \)

Theorem 5

Concentration theorem If \(\theta (n)\) and \(\mathrm {E}M(n)\) tend to infinity as \(n \rightarrow \infty \) and \(\frac{n^3}{(\mathrm {E}M(n))^2\theta (n)^a} = o(1)\), then

$$\begin{aligned} \forall \varepsilon > 0 \quad P(|M - \mathrm {E}M| \ge \varepsilon \cdot \mathrm {E}M) \xrightarrow []{n\rightarrow \infty } 0 , \end{aligned}$$

where M is the number of edges in the graph.

Proof According to Chebyshev’s inequality, we have

$$\begin{aligned} P(|M - \mathrm {E}M | \ge \varepsilon \cdot \mathrm {E}M )\le \frac{\mathrm {Var}(M)}{\varepsilon ^2 \cdot (\mathrm {E}M)^2} . \end{aligned}$$

Let us estimate the right part of the inequality. Using Theorem 4, we get

$$\begin{aligned} \frac{\mathrm {Var}(M)}{\varepsilon ^2 \cdot (\mathrm {E}M)^2} = \frac{1}{\varepsilon ^2 \mathrm {E}M} + \frac{O(n^3)}{(\mathrm {E}M)^2}\left [ A\frac{1}{\theta ^{a}} + B \frac{1}{\theta ^{2a}} \right] + O\left( \frac{1}{n}\right) = \frac{1}{\varepsilon ^2 \mathrm {E}M} + \frac{O(n^3)}{(\mathrm {E}M)^2}\frac{1}{\theta ^{a}}\left [ A + \frac{B}{\theta ^{a}} \right ] + O\left( \frac{1}{n}\right) . \end{aligned}$$

Using the conditions of the theorem, we obtain

$$\begin{aligned} \frac{\mathrm {Var}(M)}{\varepsilon ^2 \cdot (\mathrm {E}M)^2} \rightarrow 0 \text { as } n \rightarrow \infty . \end{aligned}$$

\(\square \)

Combining Theorems 2, 3 and 5, we obtain the following corollary.

Corollary 1

Suppose that one of the following conditions holds:

  • The threshold function \(\theta (n)\) equals \(D n^{\frac{1}{a}}\)

  • \(\frac{n}{\mathrm {E}M(n)} = O(1)\) and \(\frac{\mathrm {E}M(n)}{n\ln n} = o(1)\)

Then
$$\begin{aligned} \forall \varepsilon > 0 \quad P(|M - \mathrm {E}M| \ge \varepsilon \cdot \mathrm {E}M) \xrightarrow [n\rightarrow \infty ]{} 0 , \end{aligned}$$

where M is the number of edges in the graph.

In this way, we have proved that the number of edges in the graph does not deviate much from its expected value. This means that if the expected number of edges has linearithmic or sub-linearithmic growth, the actual number of edges has the same growth.

Degree distribution

In this section, we show that our model produces a power-law degree distribution with an exponent of 2, and we give two proofs. The first is a mean-field approximation, which is usually applied for fast checking of hypotheses. The second is a strict probabilistic proof; to the best of our knowledge, such a proof has not yet been given in the context of geographical threshold models.

To confirm our proofs, we carried out a computer simulation and plotted the complementary cumulative distribution of node degree, which is shown in Fig. 2. We also used a discrete power-law fitting method, which is described in [2] and implemented in the network analysis package igraph. We obtained \(\alpha = 2.16\), \(x_{\min } = 4\) and a quite large p-value of 0.9984 for the Kolmogorov–Smirnov goodness-of-fit test.
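As a rough illustration of the fitting step, the discrete MLE of [2] can be approximated in a few lines (a sketch with synthetic data, not the igraph implementation; the \(k_{\min } - 1/2\) shift is the standard discreteness correction):

```python
import math
import random

def estimate_alpha(degrees, k_min):
    """Approximate discrete power-law MLE for the exponent (Clauset et al. [2]):
    alpha ~ 1 + m * [sum ln(k_i / (k_min - 1/2))]^{-1} over the tail k_i >= k_min."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / (k_min - 0.5)) for k in tail)

# Synthetic check: floor of a continuous power-law sample with exponent 2.5
random.seed(0)
k_min, alpha_true = 20, 2.5
sample = [int(k_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0)))
          for _ in range(50_000)]
alpha_hat = estimate_alpha(sample, k_min)      # near 2.5, up to flooring bias
```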

Fig. 2 Complementary cumulative distribution of node degree: \(n = 3 \cdot 10^5\), \(\vec {x_i} \in \mathbb {R}^{3}\), \(w_i \sim {\text{Pareto}}(3, 1)\), \(\theta = 66.9\)

Theorem 6

Let P(k) be the probability of a random node to have degree k. If \(\frac{n^{\frac{1}{a}}}{\theta (n)} = o(1)\), then there exist constants \(C_0\) and \(N_0\) such that \(\forall ~k(n):\forall ~n~>~N_0 \,\, k(n)~<~C_0n\) we have

$$\begin{aligned} P(k) = (1+o(1)) k^{-2}. \end{aligned}$$

Mean-field approximation

This approximation gives a power law only for nodes with weights \(w \le \frac{\theta }{w_0}\). But the expected number \(\mathrm {E}m\) of nodes with weights not satisfying this inequality is extremely small:

$$\begin{aligned} \mathrm {E} m = n P\left(w > \frac{\theta }{w_0}\right) = n\left( \frac{w_0^2}{\theta }\right) ^a = o(1). \end{aligned}$$

As shown in Lemma 1, the probability of the node \(\vec {v_i} = w_i \vec {x_i}\) with weight \(w_i = w \le \frac{\theta }{w_0}\) to have an edge to another random node is

$$\begin{aligned} P_{e}(w) = \frac{w_{0}^a}{2 \theta ^a (a + 1)} w^a. \end{aligned}$$

Let \(k_i(w)\) be the degree of the node \(v_i\). Then

$$\begin{aligned} k_i(w) = \sum _{i \ne j} I[v_i \text { is connected to } v_j], \end{aligned}$$

where I stands for the indicator function.

As all nodes are independent, we get

$$\begin{aligned} E k_i(w) = (n-1) P_{e}(w). \end{aligned}$$

In the mean-field approximation, we assume that \(k_i(w)\) is close to its expectation and substitute it by \({(n-1) P_{e}(w)}\) in the following expression for the degree distribution \(P(k) = f(w) \frac{\mathrm {d}w}{\mathrm {d}k},\) where f(w) is the density of the weights. Thus,

$$\begin{aligned} P(k) = \frac{2 \theta ^a(a+1)}{(n-1) w^{2a}} = \frac{(n-1)\, w_0^{2a}}{2 \theta ^a (a+1)}\, k^{-2} \propto k^{-2} \end{aligned}$$

\(\square \)

Note that we have not used conditions on k(n) and \(\theta (n)\) yet, they are needed to estimate residual terms in the following rigorous proof.

Proof Degree \(k_i\) of the node \(v_i\) is a binomial random variable. Using the probability \(P_{e}(w)\) of the node \(v_i\) with weight \({w_i = w}\) to have an edge to another random node, we can get the probability that \(k_i\) equals k:

$$\begin{aligned} P(k_i = k | w_i = w) = {n -1 \atopwithdelims ()k} \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1}. \end{aligned}$$

To get the total probability, we need to integrate this expression with respect to w

$$\begin{aligned} P(k_i = k) = {n - 1 \atopwithdelims ()k} \int\limits _{w_0}^{\infty } \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}} \mathrm {d}w. \end{aligned}$$

Because \(P_{e}(w)\) is a piecewise function, the integral splits into two parts:

$$\begin{aligned} I_1 = \int\limits _{w_0}^{\theta /w_0} \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}} \mathrm {d}w,\\ I_2 = \int\limits _{\theta /w_0}^{\infty } \left( P_{e}(w)\right) ^k (1 - P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}}\mathrm {d}w. \end{aligned}$$


$$\begin{aligned} P(k_i = k) = {n - 1 \atopwithdelims ()k} (I_1 + I_2). \end{aligned}$$

For estimating \(I_1\) we can use the formula \(P_{e}(w) = \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)} w^a\) from Lemma 1. After making the substitution to integrate with respect to \(P_{e}(w)\) and using the incomplete beta-function, we get

$$\begin{aligned} I_1 = \frac{w_0^{2a}}{2\theta ^a a(a+1)} \cdot \left( B \left( \frac{1}{2(a+1)}; k-1, n-k \right) - B \left( \frac{w_0^{2a}}{2\theta ^a(a+1)}; k-1, n-k \right) \right) . \end{aligned}$$

For \(I_2\) we can derive an upper bound. Note that for \(w \ge \theta /w_0\) we have

$$\begin{aligned} P_{e}(w) = \frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_0}\right) < \frac{1}{2} \end{aligned}$$
$$\begin{aligned} 1 - P_{e}(w) \le 1 - P_{e}(\theta / w_0) = \frac{1}{2}\left( 1 + \frac{a}{a+1} \right) = \varepsilon _0 < 1. \end{aligned}$$

Therefore, we obtain the following upper estimate

$$\begin{aligned} I_2 = O\left( \frac{ (\varepsilon _0)^{n-k-1} }{2^k} \int\limits _{\theta /w_0}^{\infty } \frac{aw_0^a}{w^{a+1}}\mathrm {d}w \right) = O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \right) \end{aligned}$$

We now combine estimates for \(I_1\), \(I_2\) and the following estimates for the incomplete beta-function:

$$\begin{aligned} B(x; a, b) = O\Big (\frac{x^a}{a}\Big ),\\ B(x; a, b) = B(a, b) + O\Big (\frac{(1-x)^b}{b}\Big ),\\ \frac{1}{B(k-1, n-k)} = \frac{\Gamma (n-1)}{\Gamma (k-1)\Gamma (n-k)} = O\Big (\frac{n^{k-1}}{\Gamma (k-1)}\Big ). \end{aligned}$$

This gives us

$$\begin{aligned} P(k_i = k) &= \left( {\begin{array}{c}n-1\\ k\end{array}}\right) \frac{w_0^{2a}}{2 \theta ^a a(a+1)} \left[ B(k-1, n-k) + O\left( \frac{\left( 1-\frac{1}{2(a+1)}\right) ^{n-k}}{n - k}\right) \right. \\ & \qquad \qquad \qquad \qquad \qquad \left. - O\left( \frac{\left( \frac{w_0^{2a}}{2\theta ^a(a+1)}\right) ^{k-1}}{k-1} \right) + O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \right) \right] \\ &= \left( {\begin{array}{c}n-1\\ k\end{array}}\right) \frac{w_0^{2a}}{2 \theta ^a a(a+1)} B(k-1, n-k) \\ & \quad \left[ 1+ O\left( \frac{\left( \varepsilon _1\right) ^{n-k} n^{k-1}}{(n - k)\Gamma (k-1)}\right) + \ O\left( \frac{\left( \frac{w_0^{2a}}{2\theta ^a(a+1)}\right) ^{k-1}n^{k-1}}{(k-1)\Gamma (k-1)} \right) \right. \\& \qquad \left. + \ O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \frac{n^{k-1}}{\Gamma (k-1)} \right) \right] . \end{aligned}$$

Let us introduce the following notations:

$$\begin{aligned} A &= O\left( \frac{\left( \varepsilon _1\right) ^{n-k} n^{k-1}}{(n - k)\Gamma (k-1)}\right) , \text { where } \varepsilon _1 = 1 - \frac{1}{2(a+1)}, \\ B &= O\left( \frac{\left( \frac{w_0^{2a}}{2\theta ^a(a+1)}\right) ^{k-1}n^{k-1}}{(k-1)\Gamma (k-1)} \right) ,\\ C &= O\left( \frac{ (\varepsilon _0)^{n-k-1}}{\theta ^a 2^k} \frac{n^{k-1}}{\Gamma (k-1)} \right) , \text { where } \varepsilon _0 =\frac{1}{2}\left( 1 + \frac{a}{a+1} \right) . \end{aligned}$$

Using \(\frac{n}{\theta ^a(n)} = o(1)\), for \(k(n) < C_0n\) we get

$$\begin{aligned} B = O\left( \frac{\left( \frac{w_0^{2a}}{2(a+1)}\right) ^{k-1}(\frac{n}{\theta ^a})^{k-1}}{\Gamma (k)} \right) = o(1). \end{aligned}$$

If k(n) is a bounded function, then since \(\varepsilon _0 < 1\) and \(\varepsilon _1 < 1\) we have

$$\begin{aligned} A = O\left( \left( \varepsilon _1\right) ^\frac{n-k}{k-1} n^{k-1}\right) = o(1),\\ C = O\left( \left( \varepsilon _0\right) ^{n-k} n^{k-1} \right) = o(1). \end{aligned}$$

If \(k(n) \rightarrow \infty \) as \(n \rightarrow \infty \), using Stirling’s approximation \(\Gamma (k-1) \sim \sqrt{2 \pi (k-2)} \left( \frac{e}{k-2}\right) ^{k-2}\) we get

$$\begin{aligned} A &= O \left( \frac{k - 2}{(n-k) \sqrt{k-2}} \left( (\varepsilon _1)^\frac{n-k}{k-1} \frac{n}{k - 2}\right) ^{k-1}\right) ,\\ C &= O \left( \frac{\sqrt{k-2}}{\theta ^a} \left( (\varepsilon _0)^{\frac{n-k-1}{k-1}} \frac{n}{k-2} \right) ^{k-1} \right) . \end{aligned}$$

Since \( \varepsilon ^x x \rightarrow 0\) for \(\varepsilon < 1\) as \(x \rightarrow \infty \) there exist constants \(C_0\) and \(N_0\) such that for \(n > N_0\) and \(k(n) < C_0n\) we have \((\varepsilon _1)^\frac{n-k}{k-1} \frac{n}{k - 2} < 1\) and \((\varepsilon _0)^{\frac{n-k-1}{k-1}} \frac{n}{k-2} < 1\). This implies that \(A=o(1)\) and \(C=o(1)\).

Thus, we obtain

$$\begin{aligned} P(k_i = k) =(1 + o(1)) {n - 1 \atopwithdelims ()k} B(k - 1, n - k) = (1 + o(1)) k^{-2}. \end{aligned}$$

\(\square \)

Note that regardless of the shape parameter of the Pareto distribution of weights, we always generate networks with a degree distribution following a power law with an exponent of 2. In the next section, we modify our model to change the exponent of the degree distribution and some other properties of the resulting networks.

Model modifications

In this section, we will show how to modify our model to get new properties and how these modifications will affect the degree distribution.

Directed network

Many real networks are directed. To model them and obtain a power-law exponent that differs from 2, we change the condition for the existence of an edge. There is a directed edge \((v_i, v_j)\) if and only if

$$\begin{aligned} (w_i^{\alpha } \vec {x_i}, w_j^{\beta } \vec {x_j}) \ge \theta, \quad \alpha ,\beta > 0. \end{aligned}$$

As follows from the next theorem, this modification allows us to tune the exponent of the power law.
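This directed rule can be sampled analogously to the undirected model (a sketch for \(d = 3\) with illustrative names; each node reuses its direction \(\vec {x_i}\) in both roles):

```python
import numpy as np

def directed_ft_graph(n, a, w0, theta, alpha, beta, seed=0):
    """adj[i, j] is True iff (w_i^alpha x_i, w_j^beta x_j) >= theta."""
    rng = np.random.default_rng(seed)
    w = (rng.pareto(a, n) + 1.0) * w0              # weights ~ Pareto(a, w0)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # directions uniform on S^2
    src = (w ** alpha)[:, None] * x                # vectors in the source role
    dst = (w ** beta)[:, None] * x                 # vectors in the target role
    adj = src @ dst.T >= theta
    np.fill_diagonal(adj, False)                   # no self-loops
    return adj

adj = directed_ft_graph(800, a=3.0, w0=1.0, theta=8.0, alpha=1.0, beta=2.0)
```

With \(\alpha \ne \beta \) the adjacency matrix is generally asymmetric, so the out- and in-degree distributions can have different tails.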

Theorem 7

Let \(P_{out}(k)\) be the probability of a random node to have out-degree k, and \(P_{in}(k)\) to have in-degree k. If \({n^{\max \{\alpha , \beta \}/a}/\theta (n) = o(1)},\) then there exist constants \(C_0\) and \(N_0\) such that \(\forall k(n):\forall n > N_0 \,\, k(n) < C_0n\) we have

$$\begin{aligned} P_{out}(k) = (1+o(1)) k^{-1 - \alpha /\beta }, P_{in}(k) = (1+o(1))k^{-1 - \beta /\alpha }. \end{aligned}$$

Proof Here is a proof for the out-degree distribution. The case of the in-degree distribution is similar.

First, let us compute \(P_{e}(w)\)—the probability of the node \(\vec {v_i} = w_i \vec {x_i}\) with weight \(w_i = w\) to have an edge to another random node.

$$\begin{aligned} P_{e}(w) = \int\limits _{w_0}^{\infty }f(w')\int\limits _{\begin{array}{c} x' \in S^2 \\ (w^{\alpha } x, (w')^{\beta } x')\ge \theta \end{array}} \frac{1}{4\pi } \mathrm {d}x' \mathrm {d}w'. \end{aligned}$$

Similarly to Lemma 1, we get

$$\begin{aligned} P_{e}(w) = \int\limits _{\max \{w_0, \theta ^{1/\beta } / w^{\alpha /\beta }\}}^{\infty }\frac{a w^a_0}{(w')^{a+1}} \frac{1}{2} \left( 1 - \frac{\theta }{w^\alpha (w')^\beta }\right) \mathrm {d}w'. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} P_{e}(w) = {\left\{ \begin{array}{ll} \frac{1}{2}\left( 1 - \frac{a\theta }{w^\alpha (a+\beta )w_{0}^\beta }\right) , \quad &{}w > \left( \frac{\theta }{w_0^\beta }\right) ^{1/\alpha }, \\ \frac{w^{a\alpha /\beta }w_0^a}{2\theta ^{a/\beta }} \cdot \frac{\beta }{a + \beta }, \quad &{}w \le \left( \frac{\theta }{w_0^\beta }\right) ^{1/\alpha }. \end{array}\right. } \end{aligned}$$

As in Theorem 6, we have

$$\begin{aligned} P(k_i = k) = {n - 1 \atopwithdelims ()k} \int\limits _{w_0}^{\infty } \left( P_{e}(w)\right) ^k (1-P_{e}(w))^{n-k-1} \frac{aw_0^a}{w^{a+1}} \mathrm {d}w. \end{aligned}$$

The rest of the proof is similar to the corresponding steps of Theorem 6, so we omit details here. \(\square \)

With \(\alpha = \beta \), this model reduces to the undirected case with a power-law exponent of 2, which agrees with Theorem 6.

Functions of the dot product

In our model, because of the condition \({w_i w_j (\vec {x_i}, \vec {x_j}) \ge \theta \ge 0}\), a node \(\vec {v_i}\) can be connected to a node \(\vec {v_j}\) only if the angle between \(\vec {x_i}\) and \(\vec {x_j}\) is less than \(\pi /2\). This constraint on the possible neighbors of a node restricts the scope of our model.

We can solve this issue by changing the condition for the existence of an edge:

$$\begin{aligned} w_i^{\alpha } w_j^{\beta } h((\vec {x_i},\vec {x_j})) \ge \theta , \end{aligned}$$

where \(h:[-1, 1] \rightarrow \mathbb {R}\). Figure 3 shows an example of how this works in \(\mathbb {R}^2\).

Fig. 3 Example in \(\mathbb {R}^2\) of the influence of \(h(x) = x\), \(h(x) = e^x\) and \(h(x) = x^2\)
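A toy check makes the effect concrete. In the sketch below, the helper `edge` and all numeric values are ours, for illustration only: two nodes whose latent vectors form an obtuse angle (negative dot product) cannot be connected under \(h(x) = x\), but can be under \(h(x) = e^x\) if their weights are large enough.

```python
import math

def edge(w_i, w_j, dot, h, alpha=1.0, beta=1.0, theta=1.0):
    """Modified edge condition: w_i^alpha * w_j^beta * h((x_i, x_j)) >= theta."""
    return w_i ** alpha * w_j ** beta * h(dot) >= theta

# Dot product -0.5 corresponds to an angle greater than pi/2.
print(edge(10.0, 10.0, -0.5, lambda t: t))   # False: 100 * (-0.5) < 1
print(edge(10.0, 10.0, -0.5, math.exp))      # True: 100 * e^(-0.5) >= 1
```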

Theorem 8

Let \(P_{\text{out}}(k)\) be the probability that a random node has out-degree k and \(P_{\text{in}}(k)\) the probability that it has in-degree k. If \({n^{\max \{\alpha , \beta \}/a}/\theta (n) = o(1)}\) and \(h:[-1, 1] \rightarrow \mathbb {R}\) is a continuous, strictly increasing function that is positive in at least one point of \((-1, 1)\), then there exist constants \(C_0\) and \(N_0\) such that \(\forall k(n):\forall n > N_0 \,\, k(n) < C_0n\) we have

$$\begin{aligned} P_{out}(k) = k^{-1 - \alpha /\beta }(1+o(1)), P_{in}(k) = k^{-1 - \beta /\alpha }(1+o(1)). \end{aligned}$$

Short scheme of proof

Here is the scheme of the proof for the out-degree distribution. The case of the in-degree is similar.

The restrictions on the function h allow us to modify the proof of the directed case. The main difference is the value of the probability \(P_{e}(w)\) that a node \(\vec {v_i} = w_i \vec {x_i}\) with weight \(w_i = w\) has an edge to another random node.

$$\begin{aligned} P_{e}(w) = \int\limits _{w_0}^{\infty } \frac{aw_0^a}{(w')^{a+1}}\int\limits _{\begin{array}{c} x' \in S^2 \\ w^\alpha (w')^\beta h((x,x'))\ge \theta \end{array}} \frac{1}{4\pi } \mathrm {d}x' \mathrm {d}w'. \end{aligned}$$

We will denote the inner integral by I:

$$\begin{aligned} I = \int\limits _{\begin{array}{c} x' \in S^2 \\ w^\alpha (w')^\beta h((x,x'))\ge \theta \end{array}} \frac{1}{4\pi } \mathrm {d}x'. \end{aligned}$$

We can rewrite inequality (15) as \( h((x, x')) \ge \frac{\theta }{w^\alpha (w')^\beta }\) and notice that \(\frac{\theta }{w^\alpha (w')^\beta } \in (0, +\infty )\). Let \(h([-1, 1]) = [r, q]\); on this interval, the function h is invertible. We examine the mutual position of \([r, q]\) and \((0, +\infty )\). The definition of h implies that \([r, q] \cap (0, +\infty ) \ne \emptyset \), which gives us the following two cases.

  A.

    The first case is \([r, q] \subset (0, +\infty )\). If \(\frac{\theta }{w^\alpha (w')^\beta } \in [r, q]\), then we may invert h and the inner integral I equals \(\frac{1}{2} \left( 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right) \). If \(\frac{\theta }{w^\alpha (w')^\beta } > q\), then inequality (15) is not satisfied and \(I=0\). If \(0< \frac{\theta }{w^\alpha (w')^\beta } < r\), then inequality (15) is satisfied for any pair of x and \(x'\), and \(I = 1\), the normalized surface area of \(S^2\).

    To deal with \(P_{e}(w)\), we need to compare \(w_0\) with the boundaries for each range of \(\frac{\theta }{w^\alpha (w')^\beta }\):

    1.

      If \(w_0 < \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}\), then

      $$\begin{aligned} P_{e}(w) &= \int\limits _{w_0}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}} 0 \, \mathrm {d}w' + \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}} \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' \\& \quad + \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}}^{\infty } \frac{aw_0^a}{(w')^{a+1}} \mathrm {d}w'. \end{aligned}$$
    2.

      If \(\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }} \le w_0 < \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}\), then

      $$\begin{aligned} P_{e}(w) = \int\limits _{w_0}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}} \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' + \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}}^{\infty } \frac{aw_0^a}{(w')^{a+1}} \mathrm {d}w'. \end{aligned}$$
    3.

      The last case is \(w_0 \ge \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}\). But \(\theta (n)\) grows with n, so for large enough n this inequality cannot hold.

  B.

    The second case is \([r, q] \not \subset (0, +\infty )\), which implies \(r \le 0\). If \(\frac{\theta }{w^\alpha (w')^\beta } \in (0, q]\), then \(I=\frac{1}{2} \left( 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right) \). If \(\frac{\theta }{w^\alpha (w')^\beta } > q\), then \(I=0\). This gives

    $$\begin{aligned} P_{e}(w) = \int\limits _{\max (w_0, \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }})}^{\infty } \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' \end{aligned}$$

    It remains only to show that \(P_{out}(k) = k^{-1 - \alpha /\beta }(1+o(1))\). It is now easy to see that the influence of every kind of principal part of the integral for \(P_{e}(w)\) has already been examined in the previous theorems for degree distributions. For example,

    $$\begin{aligned} \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}} \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' = \frac{a w_0^a w^{a\alpha /\beta }}{2\beta \theta ^{a/\beta }}\int\limits _{r}^{q} (1 - h^{-1}(t)) t^{a/\beta - 1} \mathrm {d}t, \end{aligned}$$

    which is proportional to the expression we obtained in Theorem 7, so we omit further details here. \(\square \)

For example, the described class contains functions like \(e^x\) and \({x^{2m+1} + c}\), \({m \in \mathbb {N}}\), for a proper constant c.

Of course, this small class is not the only set of functions h(x) that has no influence on the degree distribution. For example, it is easy to show that \(h(x) = x^{2m}, m \in \mathbb {N}\), also has this property; the proof differs only in the computation of \(P_{e}(w)\).


Conclusion

In our work, we suggest a new model for scale-free network generation, which is based on matrix factorization and has a geographical interpretation. We formalize it for both fixed-size and growing networks. We prove, and validate empirically, that the degree distribution of the resulting networks obeys a power law with an exponent of 2.

We also consider several extensions of the model. First, we study the directed case and obtain a power-law degree distribution with a tunable exponent. Then, we apply different functions to the dot product of the latent feature vectors, which gives us modifications with interesting properties.

Further research could focus on a deeper study of the distribution of latent feature vectors. Not only the uniform distribution over the surface of the sphere should be considered because, for example, cities are not distributed uniformly over the surface of the Earth. Besides, we want to try other distributions of weights.





References

  1. Albert R, Barabási A-L. Statistical mechanics of complex networks. Rev Mod Phys. 2002;74(1):47.


  2. Clauset A, Shalizi CR, Newman ME. Power-law distributions in empirical data. SIAM Rev. 2009;51(4):661–703.


  3. Colomer-de-Simon P, Boguná M. Clustering of random scale-free networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2012;86:026120 (preprint arXiv:1205.2877).


  4. Callaway DS, Newman ME, Strogatz SH, Watts DJ. Network robustness and fragility: percolation on random graphs. Phys Rev Lett. 2000;85(25):5468.


  5. Moreno Y, Pacheco AF. Synchronization of Kuramoto oscillators in scale-free networks. EPL (Europhys Lett). 2004;68(4):603.


  6. Menon AK, Elkan C. Link prediction via matrix factorization. In: Gunopulos D, Hofmann T, Malerba D, Vazirgiannis M, editors. Machine learning and knowledge discovery in databases: European conference, ECML PKDD 2011, Athens, September 5–9, 2011, Proceedings, Part II. Berlin: Springer; 2011. p. 437–52.


  7. Hayashi Y. A review of recent studies of geographical scale-free networks. arXiv preprint physics/0512011; 2005.

  8. Barabási A-L, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–12.


  9. Bollobás B, Riordan O, Spencer J, Tusnády G, et al. The degree sequence of a scale-free random graph process. Random Struct Algorithms. 2001;18(3):279–90.


  10. Holme P, Kim BJ. Growing scale-free networks with tunable clustering. Phys Rev E. 2002;65(2):026107.


  11. Lee DD, Seung HS. Algorithms for non-negative matrix factorization. In: Advances in neural information processing systems; 2001. p. 556–562.

  12. Koren Y, Bell R, Volinsky C. Matrix factorization techniques for recommender systems. Computer. 2009;8:30–7.


  13. Liben-Nowell D, Kleinberg J. The link-prediction problem for social networks. J Am Soc Inf Sci Technol. 2007;58(7):1019–31.


  14. Masuda N, Miwa H, Konno N. Geographical threshold graphs with small-world and scale-free properties. Phys Rev E. 2005;71(3):036108.


  15. Morita S. Crossovers in scale-free networks on geographical space. Phys Rev E. 2006;73(3):035104.


  16. Rozenfeld AF, Cohen R, Ben-Avraham D, Havlin S. Scale-free networks on lattices. Phys Rev Lett. 2002;89(21):218701.


  17. Warren CP, Sander LM, Sokolov IM. Geography in a scale-free network model. Phys Rev E. 2002;66(5):056105.


  18. Yakubo K, Korošak D. Scale-free networks embedded in fractal space. Phys Rev E. 2011;83(6):066111.



Authors' contributions

This work is the result of a close joint effort in which all authors contributed almost equally to defining and shaping the problem definition, proofs, algorithms, and manuscript. The research would not have been conducted without the participation of any of the authors. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information



Corresponding author

Correspondence to Akmal Artikov.



Proof of Lemma 1

For a node x with weight w, the probability of being connected to a random node is represented by

$$\begin{aligned} P_{e}(w) = \int\limits _{w_0}^{\infty }f(w')\int\limits _{\begin{array}{c} x' \in S^2 \\ ww'(x,x')\ge \theta \end{array}} \frac{1}{4\pi } \mathrm {d}x' \mathrm {d}w' . \end{aligned}$$

We can rewrite the inequality \(ww'(x,x')\ge \theta \) as \({(x,x')\ge \frac{\theta }{ww'}}\). If \(\frac{\theta }{ww'} \in [0, 1]\), this inequality defines a spherical cap of area \(2\pi (1 - \frac{\theta }{ww'})\). Therefore, we have

$$\begin{aligned} P_{e}(w) = \int\limits _{\max \{w_0, \theta / w\}}^{\infty }f(w')2\pi \left( 1 - \frac{\theta }{ww'}\right) \frac{1}{4\pi } \mathrm {d}w' . \end{aligned}$$

If we substitute \(f(w')\) from (2), we obtain

$$\begin{aligned} P_{e}(w) = \int\limits _{\max \{w_0, \theta / w\}}^{\infty }\frac{a}{w_0}\left( \frac{w_0}{w'}\right) ^{a+1} \frac{1}{2} \left( 1 - \frac{\theta }{ww'}\right) \mathrm {d}w' . \end{aligned}$$

If \(w \le \theta / w_0\), then

$$\begin{aligned} P_{e}(w) &= \int\limits _{\theta / w}^{\infty }\frac{a}{2w_0}\left( \frac{w_0}{w'}\right) ^{a+1} \left( 1 - \frac{\theta }{ww'}\right) \mathrm {d}w' = \int\limits _{\theta / w}^{\infty }\frac{a}{2w_0}\left( \frac{w_0}{w'}\right) ^{a+1} \mathrm {d}w' - \int\limits _{\theta / w}^{\infty }\frac{a}{2w_0}\left( \frac{w_0}{w'}\right) ^{a+1} \frac{\theta }{ww'} \mathrm {d}w' \\ &=\frac{aw_0^a}{2} \frac{1}{a \left( \theta / w\right) ^{a}} - \frac{aw_0^a\theta }{2w} \frac{1}{(a+1) (\theta / w)^{a+1}} = \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)} w^a . \end{aligned}$$

If \(w > \theta / w_0\), then

$$\begin{aligned} P_{e}(w) &= \int\limits _{w_0}^{\infty }\frac{a}{w_0}\left( \frac{w_0}{w'}\right) ^{a+1} 2\pi \left( 1 - \frac{\theta }{ww'}\right) \frac{1}{4\pi } \mathrm {d}w' = \frac{aw_0^a}{2} \int\limits _{w_0}^{\infty }\frac{1}{w'^{a+1}} \mathrm {d}w' - \frac{aw_0^a\theta }{2w} \int\limits _{ w_0}^{\infty }\frac{1}{w'^{a+2}} \mathrm {d}w' \\ &= \frac{aw_0^a}{2} \frac{1}{a w_0^{a}} - \frac{aw_0^a\theta }{2w} \frac{1}{(a+1) w_0^{a+1}} = \frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) . \end{aligned}$$
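The closed form above can be sanity-checked by simulation. The sketch below, with illustrative parameters of our choosing, estimates \(P_{e}(w)\) by Monte Carlo in the regime \(w > \theta /w_0\) and compares it with \(\frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) \):

```python
import numpy as np

# Illustrative parameters (our choice): Pareto shape a, cutoff w0,
# threshold theta, and a fixed weight w with w > theta / w0.
a, w0, theta, w = 3.0, 1.0, 0.5, 2.0
rng = np.random.default_rng(42)
m = 400_000

# Pareto(a, w0) samples via the inverse CDF w' = w0 * u^(-1/a), u in (0, 1].
wp = w0 * (1.0 - rng.random(m)) ** (-1.0 / a)
# For x' uniform on S^2, the dot product (x, x') is uniform on [-1, 1]
# (Archimedes' hat-box theorem), so we sample it directly.
z = rng.uniform(-1.0, 1.0, m)
estimate = np.mean(w * wp * z >= theta)

exact = 0.5 * (1.0 - a * theta / (w * (a + 1) * w0))   # closed form of the lemma
```

With these parameters the closed form gives \(P_{e}(w) = 0.40625\), and the Monte Carlo estimate agrees within the expected sampling error.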

Proof of Lemma 2

The edge probability is represented by

$$\begin{aligned} P_{e} = \int\limits _{w_0}^{\infty }\int\limits _{S^2}\int\limits _{w_0}^{\infty } \int\limits _{\begin{array}{c} x' \in S^2 \\ ww'(x,x')\ge \theta \end{array}} f(w) f(w') \frac{1}{16\pi ^2} \mathrm {d}x' \mathrm {d}w' \mathrm {d}x \mathrm {d}w. \end{aligned}$$

Using (18), we obtain

$$\begin{aligned} P_{e} = \int\limits _{w_0}^{\infty } \int\limits _{S^2} \frac{1}{4\pi } f(w) P_{e}(w) \mathrm {d}x \mathrm {d}w = \int\limits _{w_0}^{\infty } f(w) P_{e}(w) \mathrm {d}w . \end{aligned}$$

If \(\theta < w_{0}^2\), then \(P_{e}(w)\) equals \(\frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) \) for all \(w \in [w_0, \infty )\). Using this, we get

$$\begin{aligned} P_{e} &= \int\limits _{w_0}^{\infty } \frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) a \frac{w_0^a}{w^{a+1}} \mathrm {d}w \\ & = \frac{1}{2} - \int\limits _{w_0}^{\infty } \frac{1}{2}\left( \frac{a\theta }{w(a+1)w_{0}}\right) a \frac{w_0^a}{w^{a+1}} \mathrm {d}w \\ &= \frac{1}{2} - \frac{1}{2} a^2 \theta \frac{w_0^{a-1}}{a+1} \int\limits _{w_0}^{\infty } \frac{1}{w^{a+2}} \mathrm {d}w \\ &= \frac{1}{2} - \frac{1}{2} a^2 \theta \frac{w_0^{a-1}}{a+1} \frac{1}{a+1}\frac{1}{w_0^{a+1}} \\ &= \frac{1}{2} - \frac{1}{2} \frac{a^2}{(a+1)^2}\frac{\theta }{w_0^{2}} . \end{aligned}$$

If \(\theta \ge w_{0}^2\), then

$$\begin{aligned} P_{e} &= \int\limits _{w_0}^{\theta /w_0} \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)} w^a a \frac{w_0^a}{w^{a+1}} \mathrm {d}w + \int\limits _{\theta /w_0}^{\infty } \frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) a \frac{w_0^a}{w^{a+1}} \mathrm {d}w \\ & =\frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)}aw_0^a \int\limits _{w_0}^{\theta /w_0}\frac{1}{w} \mathrm {d}w + \frac{1}{2} a w_0^{a} \int\limits _{\theta /w_0}^{\infty } \frac{1}{w^{a+1}} \mathrm {d}w - \frac{a^2w_0^{a-1} \theta }{2(a+1)} \int\limits _{\theta /w_0}^{\infty } \frac{1}{w^{a+2}} \mathrm {d}w \\ &= \frac{1}{2}\frac{w_{0}^{2a} a}{\theta ^a (a + 1)} (\ln \theta - 2\ln w_0) + \frac{w_0^{2a}}{2\theta ^a} - \frac{a^2}{2(a+1)^2}\frac{w_0^{2a}}{\theta ^{a}} . \end{aligned}$$
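As a sanity check, the closed form for \(\theta < w_{0}^2\) can be compared against a direct simulation of edge events. A sketch with illustrative parameters of our choosing:

```python
import numpy as np

a, w0, theta = 3.0, 1.0, 0.5      # illustrative values with theta < w0^2
rng = np.random.default_rng(7)
m = 400_000

# Pareto(a, w0) weights of the two endpoints, via the inverse CDF.
w  = w0 * (1.0 - rng.random(m)) ** (-1.0 / a)
wp = w0 * (1.0 - rng.random(m)) ** (-1.0 / a)
# For two independent uniform points on S^2, (x, x') is uniform on [-1, 1].
z = rng.uniform(-1.0, 1.0, m)
estimate = np.mean(w * wp * z >= theta)

exact = 0.5 - 0.5 * (a / (a + 1)) ** 2 * theta / w0 ** 2   # = 0.359375 here
```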

Proof of Lemma 3

Let us enumerate pairs of nodes. Each pair of nodes i has an edge indicator \(I_{e_i}\).

By definition, we have

$$\begin{aligned} \mathrm {Var}(M) & = \mathbb {E}(M^2) - \mathbb {E}(M)^2 = \mathbb {E}(I_{e_1} + \cdots + I_{e_{n(n-1)/2}})^2 - (\mathbb {E}I_{e_1} + \cdots + \mathbb {E}I_{e_{n(n-1)/2}})^2 \\ &= \sum _{i} \mathbb {E}I_{e_i}^2 + 2\sum _{i\ne j} \mathbb {E}I_{e_i}I_{e_j} - \sum _{i} (\mathbb {E}I_{e_i})^2 - 2\sum _{i\ne j}\mathbb {E}I_{e_i}\mathbb {E}I_{e_{j}} . \end{aligned}$$

\(I_{e_1}\), \(\ldots \), \(I_{e_{n(n-1)/2}}\) is a sequence of identically distributed random variables, so they all have the same expected value, which equals \(P_{e}\).

Since \(\mathbb {E}I_{e_i}^2 = \mathbb {E}I_{e_i} = P_{e}\), it follows that

$$\begin{aligned} \mathrm {Var}(M) = \frac{n(n-1)}{2} P_{e} + 2\sum _{i\ne j} \mathbb {E}I_{e_i}I_{e_j} - \frac{n(n-1)}{2}(P_{e})^2 - 2\sum _{i\ne j}\mathbb {E}I_{e_i}\mathbb {E}I_{e_j} = \frac{n(n-1)}{2} P_{e}(1 -P_{e}) + 2\sum _{i\ne j} \left( \mathbb {E}I_{e_i}I_{e_j} - \mathbb {E}I_{e_i}\mathbb {E}I_{e_j}\right) . \end{aligned}$$

If edges \(e_{i}\) and \(e_j\) do not have mutual nodes, then \(I_{e_i}\) and \(I_{e_j}\) are independent variables. Therefore, \(\mathbb {E}(I_{e_i} I_{e_j}) = \mathbb {E}(I_{e_i}) \mathbb {E}(I_{e_j}) = P_{e}^2\). We get

$$\begin{aligned} \mathrm {Var}(M) &= \frac{n(n-1)}{2} P_{e}(1 - P_{e}) + \sum _{v = 1}^{n} \sum _{\begin{array}{c} w = 1 \\ w \ne v \end{array}}^{n} \sum _{\begin{array}{c} z = w + 1\\ z\ne v \end{array}}^{n} (\mathbb {E}I_{e(v, w)}I_{e(v, z)} - \mathbb {E}I_{e(v, w)}\mathbb {E}I_{e(v, z)}) \\ &= \frac{n(n-1)}{2} P_{e}(1 - P_{e}) + \sum _{v = 1}^{n} \sum _{\begin{array}{c} w = 1 \\ w \ne v \end{array}}^{n} \sum _{\begin{array}{c} z = w + 1\\ z\ne v \end{array}}^{n} (\mathbb {E}I_{e(v, w)}I_{e(v, z)} - P_{e}^2) \end{aligned}$$

\(\mathbb {E}I_{e(v, w)}I_{e(v, z)}\) is exactly equal to \(P_<\).

Proof of Lemma 4

It can be easily seen that

$$\begin{aligned} P_{<} = \int\limits _{w_0}^{\infty } P_{e}(w)^2 f(w) \mathrm {d}w. \end{aligned}$$

If \(\theta < w_{0}^2\), we have

$$\begin{aligned} P_< & = \int\limits _{w_0}^{\infty } \frac{1}{4}\left (1 - \frac{a\theta }{w(a+1)w_{0}}\right )^2 a \frac{w_0^a}{w^{a+1}} \mathrm {d}w \\ &= \frac{1}{4} a w_0^a \int\limits _{w_0}^{\infty } \frac{1}{w^{a+1}} \mathrm {d}w - \frac{1}{2} \frac{a^2 \theta w_0^{a-1}}{a+1} \int\limits _{w_0}^{\infty } \frac{1}{w^{a+2}} \mathrm {d}w + \frac{1}{4}\frac{a^3 \theta ^2 w_0^{a-2} }{(a+1)^2} \int\limits _{w_0}^{\infty } \frac{1}{w^{a+3}} \mathrm {d}w \\ &= \frac{1}{4} - \frac{1}{2} \frac{a^2 \theta }{(a+1)^2} \frac{1}{w_0^2} + \frac{1}{4}\frac{a^3 \theta ^2 }{(a+1)^2(a+2)} \frac{1}{w_0^4}. \end{aligned}$$

If \(\theta \ge w_{0}^2\), then

$$\begin{aligned} P_< = \int\limits _{w_0}^{\theta /w_0} \frac{1}{4}\frac{w_{0}^{2a}}{\theta ^{2a} (a + 1)^2} w^{2a} a \frac{w_0^a}{w^{a+1}} \mathrm {d}w + \int\limits _{\theta /w_0}^{\infty } \frac{1}{4}\Big (1 - \frac{a\theta }{w(a+1)w_{0}}\Big )^2 a \frac{w_0^a}{w^{a+1}} \mathrm {d}w. \end{aligned}$$

Computing the first integral, we get

$$\begin{aligned} \int\limits _{w_0}^{\theta /w_0} \frac{1}{4}\frac{w_{0}^{2a}}{\theta ^{2a} (a + 1)^2} w^{2a} a \frac{w_0^a}{w^{a+1}} \mathrm {d}w = \frac{1}{4}\frac{w_{0}^{2a}}{\theta ^{2a}(a + 1)^2}a w_0^{a} \int\limits _{w_0}^{\theta /w_0} w^{a-1} \mathrm {d}w = \frac{1}{4}\frac{w_0^{2a}}{\theta ^{2a}(a+1)^2} [\theta ^{a} - w_0^{2a}]. \end{aligned}$$

And for the second one, we have

$$\begin{aligned} \int\limits _{\theta /w_0}^{\infty } \frac{1}{4}\left(1 - \frac{a\theta }{w(a+1)w_{0}}\right)^2 a \frac{w_0^a}{w^{a+1}} \mathrm {d}w &= \int\limits _{\theta /w_0}^{\infty } \frac{1}{4} a \frac{w_0^a}{w^{a+1}} \mathrm {d}w - \int\limits _{\theta /w_0}^{\infty } \frac{1}{2}\frac{a\theta }{w(a+1)w_{0}} a \frac{w_0^a}{w^{a+1}} \mathrm {d}w \\& \quad + \int\limits _{\theta /w_0}^{\infty } \frac{1}{4}\frac{a^2\theta ^2}{w^2(a+1)^2w_{0}^2} a \frac{w_0^a}{w^{a+1}} \mathrm {d}w \\ & = \frac{1}{4} a w_0^a \int\limits _{\theta /w_0}^{\infty } \frac{1}{w^{a+1}} \mathrm {d}w - \frac{1}{2} \frac{a^2 \theta w_0^{a-1}}{a+1} \int\limits _{\theta /w_0}^{\infty } \frac{1}{w^{a+2}} \mathrm {d}w \\& \quad + \frac{1}{4}\frac{a^3 \theta ^2 w_0^{a-2} }{(a+1)^2} \int\limits _{\theta /w_0}^{\infty } \frac{1}{w^{a+3}} \mathrm {d}w \\ & = \frac{1}{4} w_0^a \frac{w_0^{a}}{\theta ^{a}} - \frac{1}{2} \frac{a^2 \theta w_0^{a-1}}{(a+1)^2} \frac{w_0^{a+1}}{\theta ^{a+1}} + \frac{1}{4}\frac{a^3 \theta ^2 w_0^{a-2} }{(a+1)^2(a+2)} \frac{w_0^{a+2}}{\theta ^{a+2}} \\ & = \frac{1}{4}\frac{w_0^{2a}}{\theta ^{a}} - \frac{1}{2} \frac{a^2 }{(a+1)^2} \frac{w_0^{2a}}{\theta ^{a}} + \frac{1}{4}\frac{a^3 }{(a+1)^2(a+2)} \frac{w_0^{2a}}{\theta ^{a}}. \end{aligned}$$

This gives us \(P_<\) in the case of \(\theta \ge w_{0}^2\):

$$\begin{aligned} P_< = \frac{1}{4}\frac{w_0^{2a}}{\theta ^{2a}(a+1)^2} [\theta ^{a} - w_0^{2a}] + \frac{1}{4}\frac{w_0^{2a}}{\theta ^{a}}\Big [ 1 - 2 \frac{a^2 }{(a+1)^2} + \frac{a^3 }{(a+1)^2(a+2)} \Big ]. \end{aligned}$$
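Both branches of \(P_<\) can be sanity-checked by simulating the probability that one node is connected to each of two independent random nodes. A sketch for the case \(\theta < w_{0}^2\), with illustrative parameters of our choosing:

```python
import numpy as np

a, w0, theta = 3.0, 1.0, 0.5     # illustrative values with theta < w0^2
rng = np.random.default_rng(11)
m = 400_000

pareto = lambda: w0 * (1.0 - rng.random(m)) ** (-1.0 / a)   # inverse-CDF sampling
w, w1, w2 = pareto(), pareto(), pareto()   # center node and two random neighbors
z1 = rng.uniform(-1.0, 1.0, m)             # dot products are uniform on [-1, 1]
z2 = rng.uniform(-1.0, 1.0, m)
# Probability that the center node is connected to both neighbors.
estimate = np.mean((w * w1 * z1 >= theta) & (w * w2 * z2 >= theta))

exact = (0.25 - 0.5 * a**2 * theta / ((a + 1) ** 2 * w0**2)
         + 0.25 * a**3 * theta**2 / ((a + 1) ** 2 * (a + 2) * w0**4))
```

Conditionally on the center node's weight, the two edge events are independent, so the simulated probability estimates \(\int P_{e}(w)^2 f(w) \mathrm {d}w = P_<\).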

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.




Cite this article

Artikov, A., Dorodnykh, A., Kashinskaya, Y. et al. Factorization threshold models for scale-free networks generation. Computational Social Networks 3, 4 (2016).
