Factorization threshold models for scale-free networks generation
Computational Social Networks volume 3, Article number: 4 (2016)
Abstract
Background
Several models for producing scale-free networks have been suggested; most of them are based on the preferential attachment approach. In this article, we suggest a new approach for generating scale-free networks with an alternative source of the power-law degree distribution.
Methods
The model derives from matrix factorization methods and geographical threshold models that were recently proven to show good results in generating scale-free networks. We associate each node with a vector having latent features distributed over a unit sphere and with a weight variable sampled from a Pareto distribution. We join two nodes by an edge if they are spatially close and/or have large weights.
Results and conclusion
The network produced by this approach is scale-free and has a power-law degree distribution with an exponent of 2. In addition, we propose an extension of the model that allows us to generate directed networks with tunable power-law exponents.
Background
Most social, biological, topological and technological networks display distinct nontrivial topological features demonstrating that connections between the nodes are neither purely regular nor purely random [1]. Such systems are called complex networks. One of the well-known and well-studied classes of complex networks is scale-free networks, whose degree distribution P(k) follows a power law \(P(k) \sim k^{-\alpha }\), where \(\alpha \) is a parameter whose value is typically in the range \(2< \alpha < 3\). Many real networks have been reported to be scale-free [2].
Generating scale-free networks is an important problem because they usually have useful properties, such as high clustering [3], robustness to random attacks [4] and easily achievable synchronization [5]. Several models for producing scale-free networks have been suggested; most of them are based on the preferential attachment approach [1]. This approach forces existing nodes of higher degrees to gain edges added to the network more rapidly in a “rich-get-richer” manner. This paper offers a model with another explanation of the scale-free property.
Our approach is inspired by matrix factorization, a machine learning method successfully used for link prediction [6]. The main idea is to approximate a network adjacency matrix by a product of matrices V and \(V^T\), where V is the matrix of nodes’ latent features vectors. To create a generative model of scale-free networks, we sample latent features V from some probabilistic distribution and try to generate a network adjacency matrix. Two nodes are connected by an edge if the dot product of their latent features exceeds some threshold. This threshold condition is influenced by the geographical threshold models that are applied to scale-free network generation [7]. Because of the methods used (adjacency matrix factorization and a threshold condition), we call our model the factorization threshold model.
A network produced in this way is scale-free and follows a power-law degree distribution with an exponent of 2, which differs from the results for basic preferential attachment models [8–10], where the exponent equals 3. We also suggest an extension of our model that allows us to generate directed networks with a tunable power-law exponent.
This paper is organized as follows. “Related work” section provides information about related works that inspired us. The formal description of our model in the case of an undirected fixed size network is presented in “Model description” section, which is followed by a discussion of how to generate growing networks. In “Generating sparse networks” section, the problem of making the resulting networks sparse is considered. “Degree distribution” section shows that our model indeed produces scale-free networks. Extensions of our model, which allow us to generate directed networks with tunable power-law exponents and some other interesting properties, are discussed in “Model modifications” section. “Conclusion” section concludes the paper.
Related work
In this section, we consider related works that encouraged us to create a new model for complex networks generation.
Matrix factorization
Matrix factorization is a group of algorithms where a given matrix R is factorized into two smaller matrices Q and P such that: \(R \approx Q^TP\) [11].
There is a popular approach in recommender systems which is based on matrix factorization [12]. Assume that users express their preferences by rating some items; this can be viewed as an approximate representation of their interests. Combining the known ratings, we get a partially filled matrix R; the idea is to approximate the unknown ratings using a matrix factorization \(R \approx Q^TP\). The geometrical interpretation is the following. The rows of matrices Q and P can be seen as latent features vectors \(\vec {q}_i\) and \(\vec {p}_u\) of items and users, respectively. The dot product \((\vec {q}_i, \vec {p}_u)\) captures the interaction between a user u and an item i, and it should approximate the rating of the item i by the user u: \(R_{ui} \approx (\vec {q}_i, \vec {p}_u)\). The mapping of each user and item to latent features is considered as an optimization problem of minimizing the distance between R and \(Q^TP\), which is usually solved using stochastic gradient descent (SGD) or alternating least squares (ALS).
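A minimal SGD sketch of this optimization in Python/NumPy may make the procedure concrete (the function name, step count and the learning-rate/regularization constants below are our own illustrative choices, not taken from [12]):

```python
import numpy as np

def factorize(R, d=2, steps=8000, lr=0.02, reg=0.001, seed=0):
    """Approximate R ~= Q^T P by SGD on the squared error of
    randomly sampled entries (here all entries are observed)."""
    rng = np.random.default_rng(seed)
    n_items, n_users = R.shape
    Q = 0.1 * rng.standard_normal((d, n_items))  # latent item features
    P = 0.1 * rng.standard_normal((d, n_users))  # latent user features
    for _ in range(steps):
        i = rng.integers(n_items)
        u = rng.integers(n_users)
        err = R[i, u] - Q[:, i] @ P[:, u]
        # gradient step with L2 regularization
        Q[:, i] += lr * (err * P[:, u] - reg * Q[:, i])
        P[:, u] += lr * (err * Q[:, i] - reg * P[:, u])
    return Q, P
```

On a small fully observed low-rank matrix this recovers the ratings closely; in a real recommender setting, only the observed entries of R would be sampled.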
Furthermore, matrix factorization was suggested to be used for link prediction in networks [6]. Link prediction refers to the problem of finding missing or hidden links which probably exist in a network [13]. In [6] it is solved via matrix factorization: a network adjacency matrix A is approximated by a product of the matrices V and \(V^T\), where V is the matrix of nodes’ latent features.
Geographical threshold models
Geographical threshold models have recently been shown to give good results in scale-free network generation [7]. We briefly summarize one variation of these models [14].
Suppose the number of nodes is fixed. Each node carries a randomly and independently distributed weight variable \(w_i \in \mathbb {R}\). Also, the nodes are uniformly and independently distributed with a specified density in \(\mathbb {R}^d\). A pair of nodes with weights \(w, w'\) and Euclidean distance r are connected if and only if
$$\begin{aligned} (w + w')\, h(r) \ge \theta , \end{aligned}$$
where \(\theta \) is the model threshold parameter and h(r) is the distance function that is assumed to decrease in r. For example, we can take \(h(r) = r^{-\beta },\) where \(\beta > 0.\)
First, an exponential distribution of weights with inverse scale parameter \(\lambda \) has been studied. This distribution of weights leads to scale-free networks with a power-law exponent of 2: \(P(k) \propto k^{-2}\). It is interesting that the exponent of the power law does not depend on \(\lambda \), d and \(\beta \) in this case. Second, a Pareto weight distribution with scale parameter \(w_0\) and shape parameter a has been considered. In this case, a tunable power-law degree distribution has been achieved: \(P(k) \propto k^{-1 - \frac{a \beta }{d}}\).
There are other variations of this approach: uniform distribution of coordinates in the d-dimensional unit cube [15], lattice-based models [16, 17] and even networks embedded in fractal space [18].
Model description
We studied matrix factorization theoretically by turning it from a trainable supervised model into a generative probabilistic model. When matrix factorization is used in machine learning, the adjacency matrix A is given, and the goal is to train the model by tuning the matrix of latent features V in such a way that \(A \approx V^T V\). In our model, we do the reverse: latent features V are sampled from some probabilistic distribution, and we generate a network adjacency matrix A based on \(V^T V\).
Formally our model is described in the following way:

The network has n nodes and each node is associated with a d-dimensional latent features vector \(\vec {v_i}\).

Each latent features vector \(\vec {v_i}\) is the product of a weight \(w_i\) and a direction \(\vec {x_i}\).

Directions \(\vec {x_i}\) are i.i.d. random vectors uniformly distributed over the surface of the \((d-1)\)-sphere.

Weights are i.i.d. random variables distributed according to a Pareto distribution with the following density function f(w):
$$\begin{aligned} f(w) = \frac{a}{w_0} {\left( \frac{w_0}{w}\right) }^{a + 1}\; (w \ge w_0). \end{aligned}$$(2) 
An edge between nodes i and j appears if the dot product of their latent features vectors \((\vec {v_i}, \vec {v_j})\) exceeds a threshold parameter \(\theta \).
Therefore, we take into consideration both a node’s importance \(w_i\) and its location \(x_i\) on the surface of a \((d-1)\)-sphere (which can be interpreted as the Earth in the case of \(\vec {x_i} \in S^{2} \subset \mathbb {R}^3\)). Thus, inspired by the matrix factorization approach, we achieve the following model behavior: edges are formed when a pair of nodes is spatially close and/or has large weights. Compared with the geographical threshold models, we use the dot product to measure the proximity of nodes instead of the Euclidean distance.
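The construction above can be sketched directly in Python/NumPy (a toy illustration with arbitrary parameter values; the Pareto weights are sampled by the standard inverse transform \(w = w_0 U^{-1/a}\)):

```python
import numpy as np

def generate_network(n, a=3.0, w0=1.0, theta=2.0, d=3, seed=0):
    """Sample latent features v_i = w_i * x_i and connect nodes i, j
    whenever (v_i, v_j) >= theta (factorization threshold model)."""
    rng = np.random.default_rng(seed)
    # directions: i.i.d. uniform on the (d-1)-sphere (normalized Gaussians)
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    # weights: Pareto(w0, a) via inverse transform, w = w0 * U^(-1/a)
    w = w0 * rng.random(n) ** (-1.0 / a)
    v = w[:, None] * x
    dots = v @ v.T                 # all pairwise dot products (v_i, v_j)
    adj = dots >= theta            # boolean adjacency matrix
    np.fill_diagonal(adj, False)   # no self-loops
    return adj, w
```

Nodes with large weights and nearby positions on the sphere end up with high degree, which is the mechanism behind the power law derived below.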
We have defined our model for fixed size networks, but in principle, it can be generalized to the case of growing networks. The problem is that with a fixed threshold \(\theta \), as the size of a network tends to infinity, the graph becomes complete with high probability. But real networks are usually sparse.
Therefore, to introduce growing factorization threshold models, we use a threshold function \(\theta := \theta (n)\) which depends on the number of nodes n in the network. Then for every value of the network size n we have the same parameters except for the threshold \(\theta \). This means that at every step, when a new node is added to the graph, some of the existing edges may be removed. In the next section, we will try to find threshold functions which lead to sparse networks.
To preserve readability of the proofs, we consider only the case \(d = 3\) because proofs for higher dimensions can be derived in a similar way. However, we will give not only mean-field approximations but also strict probabilistic proofs, which to the best of our knowledge have not been done for geographical threshold models yet and can likely be applied in other works as well.
Generating sparse networks
The aim of this section is to model sparse growing networks. To do this, we need to find a proper threshold function.
First, we studied the growth of real networks. For example, Fig. 1 shows the growth of a citation graph. The data were obtained from the SNAP^{Footnote 1} database. It can be seen that the function \(y(x) = 4.95\,x \log x - 40x\) is a good estimate of the growth rate of this network. That is why we decided to focus on the linearithmic or sublinearithmic growth rate of the model (here and subsequently, by the growth of the model we mean the growth of the number of edges).
Analysis of the expected number of edges
Let M(n) denote the number of edges in the network of size n. To find its expectation, we need the two following lemmas.
Lemma 1
The probability for a node with weight w to be connected to a random node is
$$\begin{aligned} P_{e}(w) = {\left\{ \begin{array}{ll} \frac{w_0^a w^a}{2(a+1)\theta ^a}, &{} w \le \theta /w_0,\\ \frac{1}{2}\left( 1 - \frac{a\theta }{(a+1)w_0 w}\right) , &{} w > \theta /w_0. \end{array}\right. } \end{aligned}$$
Lemma 2
The edge probability in the network is
$$\begin{aligned} P_{e} = {\left\{ \begin{array}{ll} \frac{1}{2}\left( 1 - \frac{a^2\theta }{(a+1)^2 w_0^2}\right) , &{} \theta < w_0^2,\\ \frac{a w_0^{2a}}{2(a+1)\theta ^a}\ln \frac{\theta }{w_0^2} + \frac{2a+1}{2(a+1)^2}\left( \frac{w_0^2}{\theta }\right) ^a, &{} \theta \ge w_0^2. \end{array}\right. } \end{aligned}$$
To improve readability, we moved the proofs of Lemmas 1 and 2 to Appendix.
The next theorem shows that our model can have any growth which is less than quadratic.
Theorem 1
Let R(n) be a function such that \(R(n) = o(n^2)\) and \(R(n)>0\). Then there exists a threshold function \(\theta (n)\) such that the growth of the model is R(n):
$$\begin{aligned} \mathrm {E}M(n) = R(n) \quad \text {for all sufficiently large } n. \end{aligned}$$
Proof It is easy to check that \(P_{e}\) is a continuous function of \(\theta \). The intermediate value theorem states that \(P_{e}(\theta )\) takes every value between \(P_{e}(\theta = 0) = 1/2\) and \(P_{e}(\theta = \infty ) = 0\) at some point within the interval.
Since \(R(n) = o(n^2)\) and positive, there exists N such that for all \(n \ge N\), \(0< R(n) < \frac{1}{2} \times \frac{n(n-1)}{2}\).
This means that the equation \(\mathrm {E}M(n) = R(n)\) has a solution for all \(n \ge N\). \(\square \)
Taking into account Theorem 1, we obtain parameters for the linearithmic and linear growths of the expected number of edges.
Theorem 2
Suppose the following threshold function: \({\theta (n) = D n^{\frac{1}{a}}}\), where D is a constant. Then the growth of the model is linearithmic:
$$\begin{aligned} \mathrm {E}M(n) = A\, n \ln n \,(1 + o(1)), \end{aligned}$$
where \(\mathrm {A}\) is a constant depending on the Pareto distribution parameters.
Proof We can rewrite inequality \(n \ge \frac{w_0^{2a}}{D^a}\) as \({Dn^{\frac{1}{a}} \ge w_0^2}\) and apply Lemma 2 in the case \(\theta (n) = Dn^{\frac{1}{a}} \ge w_0^2\)
If we replace \(\theta \) by \(Dn^{\frac{1}{a}}\), we obtain
Theorem 3
Suppose that the growth of the model is sublinearithmic: \({\frac{\mathrm {E}M(n)}{n\ln n} = o(1)}\). Then \({\frac{n^{\frac{1}{a}}}{\theta (n)} = o(1)}\).
Proof Let us consider another model with a threshold function \(\theta '(n) = Dn^{\frac{1}{a}}\) and the expected number of edges \(\mathrm {E}M'(n)\). According to Theorem 2 and the condition \({\frac{\mathrm {E}M(n)}{n\ln n} = o(1)}\) there exists a natural number \(N_D\) such that
This also means that for all \(n \ge N_D\) we have \(\theta (n) \ge \theta '(n)\). Therefore
By the arbitrariness of the choice of D, we have \( \frac{n^{\frac{1}{a}}}{\theta (n)} = o(1)\). \(\square \)
Concentration theorem
In this section, we find the variance of the number of edges and prove the concentration theorem.
Proofs of the following lemmas can be found in the Appendix.
Lemma 3
Suppose that x, y and z are random nodes. Let \(P_{<}\) be the probability for the node x to be connected to both nodes y and z. Then the variance of the number of edges M is
$$\begin{aligned} \mathrm {Var}\,M = \frac{n(n-1)}{2}\left( P_{e} - P_{e}^2\right) + n(n-1)(n-2)\left( P_{<} - P_{e}^2\right) . \end{aligned}$$
Lemma 4
Suppose that x, y and z are random nodes. Let \(P_{<}\) be the probability for the node x to be connected to both nodes y and z. Then
Combining these results, we get the following theorem, which will be needed to prove the concentration theorem.
Theorem 4
If \(\theta \ge w_0^2\) , the variance is
where A and B are constants which depend on the Pareto distribution parameters.
Proof According to Lemmas 3 and 4 in case of \(\theta \ge w_0^2\), the variance is
According to Lemma 2, the expected number of edges is
Combining (8) and (6), we obtain
Therefore,
where \(C_1\), \(C_2\), \(C_3\), A and B are constants depending on the Pareto distribution parameters.
Finally, we obtain
\(\square \)
Theorem 5
Concentration theorem. If \(\theta (n)\) and \(\mathrm {E}M(n)\) tend to infinity as \(n \rightarrow \infty \) and \(\frac{n^3}{(\mathrm {E}M(n))^2\theta (n)^a} = o(1)\), then
$$\begin{aligned} \frac{M(n)}{\mathrm {E}M(n)} \overset{P}{\longrightarrow } 1, \end{aligned}$$
where M is the number of edges in the graph.
Proof According to Chebyshev’s inequality, we have
Let us estimate the right part of the inequality. Using Theorem 4, we get
Using the conditions of the theorem, we obtain
\(\square \)
Combining Theorems 2, 3 and 5, we obtain the following corollary.
Corollary 1
Suppose that one of the following conditions holds:

The threshold function \(\theta (n)\) equals \(D n^{\frac{1}{a}}\)

\(\frac{n}{\mathrm {E}M(n)} = O(1)\) and \(\frac{\mathrm {E}M(n)}{n\ln n} = o(1)\)
Then
$$\begin{aligned} \frac{M(n)}{\mathrm {E}M(n)} \overset{P}{\longrightarrow } 1, \end{aligned}$$
where M is the number of edges in the graph.
In this way, we have proved that the number of edges in the graph does not deviate much from its expected value. This means that if the expected number of edges has linearithmic or sublinearithmic growth, the actual number of edges has the same growth.
Degree distribution
In this section, we show that our model follows a power-law degree distribution with an exponent of 2 and give two proofs. The first is a mean-field approximation, which is usually applied for fast checking of hypotheses. The second is a strict probabilistic proof; to the best of our knowledge, such a proof has not been given in the context of geographical threshold models yet.
To confirm our proofs, we carried out a computer simulation and plotted the complementary cumulative distribution of node degree, which is shown in Fig. 2. We also used a discrete power-law fitting method, which is described in [2] and implemented in the network analysis package igraph.^{Footnote 2} We obtained \(\alpha = 2.16\), \(x_{\min } = 4\) and a quite large p value of 0.9984 for the Kolmogorov–Smirnov goodness of fit test.
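As an independent sanity check of such a fit, one can apply the continuous maximum-likelihood (Hill) estimator of the exponent to the degree tail; this is our own sketch, not the discrete igraph procedure used for the reported numbers:

```python
import numpy as np

def hill_alpha(degrees, k_min):
    """Continuous maximum-likelihood (Hill) estimate of the power-law
    exponent alpha, using only the tail k >= k_min."""
    tail = np.asarray([k for k in degrees if k >= k_min], dtype=float)
    # alpha_hat = 1 + n / sum(ln(k_i / k_min)) over the tail
    return 1.0 + len(tail) / np.log(tail / k_min).sum()
```

Applied above the fitted \(x_{\min }\), the continuous estimate typically agrees with the discrete fit to within a few percent for tails of this kind.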
Theorem 6
Let P(k) be the probability of a random node to have degree k. If \(\frac{n^{\frac{1}{a}}}{\theta (n)} = o(1)\), then there exist constants \(C_0\) and \(N_0\) such that \(\forall ~k(n):\forall ~n~>~N_0 \,\, k(n)~<~C_0n\) we have
$$\begin{aligned} P(k) = k^{-2}(1 + o(1)). \end{aligned}$$
Meanfield approximation
This approximation gives a power law only for nodes with weights \(w \le \frac{\theta }{w_0}\). But the expected number \(\mathrm {E}m\) of nodes with weights violating this inequality is extremely small:
$$\begin{aligned} \mathrm {E}m = n \left( \frac{w_0^2}{\theta }\right) ^a = o(1). \end{aligned}$$
As shown in Lemma 1, the probability of the node \(\vec {v_i} = w_i \vec {x_i}\) with weight \(w_i = w \le \frac{\theta }{w_0}\) to have an edge to another random node is
$$\begin{aligned} P_{e}(w) = \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)}\, w^a. \end{aligned}$$
Let \(k_i(w)\) be the degree of the node \(v_i\). Then
where I stands for the indicator function.
As all nodes are independent, we get
In the mean-field approximation, we assume that \(k_i(w)\) is close to its expectation, so we can substitute it by \({(n-1) P_{e}(w)}\) in the following expression for the degree distribution: \(P(k) = f(w) \frac{\mathrm {d}w}{\mathrm {d}k},\) where f(w) is the density of weights. Thus,
\(\square \)
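For completeness, the substitution in the last step can be spelled out (a sketch with all constants omitted). Since \(k(w) = (n-1)P_{e}(w) \propto w^a\),
$$\begin{aligned} w(k) \propto k^{1/a}, \qquad \frac{\mathrm {d}w}{\mathrm {d}k} \propto k^{\frac{1}{a} - 1}, \qquad P(k) = f(w)\frac{\mathrm {d}w}{\mathrm {d}k} \propto k^{-\frac{a+1}{a}}\, k^{\frac{1}{a} - 1} = k^{-2}. \end{aligned}$$
Note that the shape parameter a cancels, which is why the exponent does not depend on the Pareto distribution parameters.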
Note that we have not used conditions on k(n) and \(\theta (n)\) yet, they are needed to estimate residual terms in the following rigorous proof.
Proof Degree \(k_i\) of the node \(v_i\) is a binomial random variable. Using the probability \(P_{e}(w)\) of the node \(v_i\) with weight \({w_i = w}\) to have an edge to another random node, we can get the probability that \(k_i\) equals k:
To get the total probability, we need to integrate this expression with respect to w
Because \(P_{e}(w)\) is defined piecewise, the integral breaks up into two parts.
Thus,
To estimate \(I_1\), we can use the formula \(P_{e}(w) = \frac{1}{2}\frac{w_{0}^a}{\theta ^a (a + 1)} w^a\) from Lemma 1. After making the substitution to integrate with respect to \(P_{e}(w)\) and using the incomplete beta function, we get
For \(I_2\) we can derive an upper bound. Note that for \(w \ge \theta /w_0\) we have
Therefore, we obtain the following upper estimate
We now combine the estimates for \(I_1\), \(I_2\) and the following estimates for the incomplete beta function:
This gives us
Let us introduce the following notations:
Using \(\frac{n}{\theta ^a(n)} = o(1)\), for \(k(n) < C_0n\) we get
If k(n) is a bounded function, then since \(\varepsilon _0 < 1\) and \(\varepsilon _1 < 1\) we have
If \(k(n) \rightarrow \infty \) as \(n \rightarrow \infty \), using Stirling’s approximation \(\Gamma (k-1) \sim \sqrt{2 \pi (k-2)} \left( \frac{k-2}{e}\right) ^{k-2}\) we get
Since \( \varepsilon ^x x \rightarrow 0\) for \(\varepsilon < 1\) as \(x \rightarrow \infty \), there exist constants \(C_0\) and \(N_0\) such that for \(n > N_0\) and \(k(n) < C_0n\) we have \((\varepsilon _1)^{\frac{n-k}{k-1}} \frac{n}{k - 2} < 1\) and \((\varepsilon _0)^{\frac{n-k-1}{k-1}} \frac{n}{k-2} < 1\). This implies that \(A=o(1)\) and \(C=o(1)\).
Thus, we obtain
\(\square \)
Note that regardless of the shape parameter of the Pareto distribution of weights, we always generate networks with a degree distribution following a power law with an exponent of 2. In the next section, we modify our model to change the exponent of the degree distribution and some other properties of the resulting networks.
Model modifications
In this section, we will show how to modify our model to get new properties and how these modifications will affect the degree distribution.
Directed network
Many real networks are directed. To model them and to obtain a power-law exponent that differs from 2, we change the condition for the existence of an edge. There will be a directed edge \((v_i, v_j)\) if and only if
$$\begin{aligned} w_i^{\alpha }\, w_j^{\beta }\, (\vec {x_i}, \vec {x_j}) \ge \theta . \end{aligned}$$
As the next theorem shows, this modification allows us to tune the exponent of the power law.
Theorem 7
Let \(P_{out}(k)\) be the probability of a random node to have out-degree k, and \(P_{in}(k)\) to have in-degree k. If \({n^{\max \{\alpha , \beta \}/a}/\theta (n) = o(1)},\) then there exist constants \(C_0\) and \(N_0\) such that \(\forall k(n):\forall n > N_0 \,\, k(n) < C_0n\) we have
$$\begin{aligned} P_{out}(k) = k^{-1-\beta /\alpha }(1+o(1)), \qquad P_{in}(k) = k^{-1-\alpha /\beta }(1+o(1)). \end{aligned}$$
Proof Here is a proof for the outdegree distribution. The case of the indegree distribution is similar.
First, let us compute \(P_{e}(w)\)—the probability of the node \(\vec {v_i} = w_i \vec {x_i}\) with weight \(w_i = w\) to have an edge to another random node.
Similar to Lemma 1 we get
Thus, we obtain
Like in Theorem 6, we have
The rest of the proof is similar to the corresponding steps of Theorem 6, so we omit details here. \(\square \)
With \(\alpha = \beta \), this model reduces to the undirected case with a power-law exponent of 2, which agrees with Theorem 6.
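Assuming the directed condition \(w_i^{\alpha } w_j^{\beta } (\vec {x_i}, \vec {x_j}) \ge \theta \), the undirected generator changes in essentially one line (a hypothetical sketch, not the authors' code; parameter values are arbitrary):

```python
import numpy as np

def generate_directed(n, a=3.0, alpha=1.0, beta=2.0, w0=1.0, theta=4.0, seed=0):
    """Directed factorization threshold model: edge (i, j) exists when
    w_i**alpha * w_j**beta * (x_i, x_j) >= theta (assumed condition)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # directions on S^2
    w = w0 * rng.random(n) ** (-1.0 / a)            # Pareto(w0, a) weights
    cos = x @ x.T
    # asymmetric weight product makes the adjacency matrix asymmetric
    adj = np.outer(w ** alpha, w ** beta) * cos >= theta
    np.fill_diagonal(adj, False)
    return adj
```

With \(\alpha \ne \beta \) the same pair of nodes can satisfy the condition in one direction but not the other, which is what makes the in- and out-degree exponents differ.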
Functions of dot product
In our model, because of the condition \({w_i w_j (\vec {x_i}, \vec {x_j}) \ge \theta \ge 0}\), a node \(\vec {v_i}\) can only be connected to a node \(\vec {v_j}\) if the angle between \(\vec {x_i}\) and \(\vec {x_j}\) is less than \(\pi /2\). This is a constraint on the possible neighbors of a node that restricts the scope of our model.
We can solve this issue by changing the condition for the existence of an edge:
where \(h:[-1, 1] \rightarrow \mathbb {R}\). Fig. 3 shows an example of how this works in \(\mathbb {R}^2\).
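For instance, with \(h = \exp \) two nodes at an angle above \(\pi /2\) can still be connected if their weights are large enough; a small sketch of the modified edge condition (names and parameters are our own):

```python
import numpy as np

def edge_h(wi, wj, xi, xj, theta, h=np.exp, alpha=1.0, beta=1.0):
    """Edge condition w_i**alpha * w_j**beta * h((x_i, x_j)) >= theta,
    where h maps the dot product of the unit directions through [-1, 1]."""
    return wi ** alpha * wj ** beta * h(xi @ xj) >= theta
```

With \(h = \exp \), antipodal directions give \(h(-1) = e^{-1} > 0\), so heavy nodes can connect across the whole sphere, while the raw dot-product condition would forbid it.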
Theorem 8
Let \(P_{\text{out}}(k)\) be the probability of a random node to have out-degree k and \(P_{\text{in}}(k)\) to have in-degree k. If \({n^{\max \{\alpha , \beta \}/a}/\theta (n) = o(1)}\) and \(h:[-1, 1] \rightarrow \mathbb {R}\) is a continuous, strictly increasing function that is positive in at least one point of \((-1, 1)\), then there exist constants \(C_0\) and \(N_0\) such that \(\forall k(n):\forall n > N_0 \,\, k(n) < C_0n\) we have
Short scheme of proof
Here is the scheme of proof for the outdegree distribution. The case of the indegree is similar.
Restrictions on the function h allow us to modify the proof of the directed case. The main difference is a value of the probability \(P_{e}(w)\) of a node \(\vec {v_i} = w_i \vec {x_i}\) with the weight \(w_i = w\) to have an edge to another random node.
We will denote by I the inner integral:
We can rewrite inequality (15) as \( h((x, x')) \ge \frac{\theta }{w^\alpha (w')^\beta }\) and notice that \(\frac{\theta }{w^\alpha (w')^\beta } \in (0, +\infty )\). Let us consider \(h([-1, 1]) = [r, q]\); on this interval, the function h is invertible. We examine the mutual position of [r, q] and \((0, +\infty )\). The definition of h implies that \([r, q] \cap (0, +\infty ) \ne \emptyset \). This gives us the next two cases.

A.
The first case is \([r, q] \subset (0, +\infty )\). If \(\frac{\theta }{w^\alpha (w')^\beta } \in [r, q]\), then we may invert h and the inner integral I is equal to \(2\pi \left( 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right) \). If \(\frac{\theta }{w^\alpha (w')^\beta } > q\), then inequality (15) is not satisfied and \(I=0\). If \(0< \frac{\theta }{w^\alpha (w')^\beta } < r\), then inequality (15) is satisfied for any pair of x and \(x'\), and \(I = 4\pi \), the surface area of \(S^2\).
To deal with \(P_{e}(w)\), we need to compare \(w_0\) with boundaries for each range of \(\frac{\theta }{w^\alpha (w')^\beta }\)

1.
If \(w_0 < \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}\), then
$$\begin{aligned} P_{e}(w) &= \int\limits _{w_0}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}} 0\, \mathrm {d}w' + \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}} \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' \\& \quad + \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}}^{\infty } \frac{aw_0^a}{(w')^{a+1}} \mathrm {d}w'. \end{aligned}$$ 
2.
If \(\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }} \le w_0 < \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}\), then
$$\begin{aligned} P_{e}(w) = \int\limits _{w_0}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}} \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' + \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}}^{\infty } \frac{aw_0^a}{(w')^{a+1}} \mathrm {d}w'. \end{aligned}$$ 
3.
The last case is \(w_0 \ge \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}\). But \(\theta (n)\) grows with n, and for large enough n this inequality is not satisfied.


B.
The second case is \([r, q] \not \subset (0, +\infty )\), which implies \(r \le 0\). If \(\frac{\theta }{w^\alpha (w')^\beta } \in (0, q]\), then \(I=2\pi \left( 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right) \). If \(\frac{\theta }{w^\alpha (w')^\beta } > q\), then \(I=0\). This gives
$$\begin{aligned} P_{e}(w) = \int\limits _{\max (w_0, \frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }})}^{\infty } \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' \end{aligned}$$It remains only to show that \(P_{out}(k) = k^{-2}(1+o(1))\). But now it is easy to see that the influence of every kind of the principal parts of the integral for \(P_{e}(w)\) has already been examined in the previous theorems for degree distributions. For example,
$$\begin{aligned} \int\limits _{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } q^{1/\beta }}}^{\frac{\theta ^{1/\beta }}{w^{\alpha /\beta } r^{1/\beta }}} \frac{aw_0^a}{(w')^{a+1}} \frac{1}{2} \left[ 1 - h^{-1}\left( \frac{\theta }{w^\alpha (w')^\beta }\right) \right] \mathrm {d}w' = \frac{w_0^a w^{2a\alpha /\beta }}{\beta \theta ^{a/\beta }}\int\limits _{r}^{q} (1 - h^{-1}(t))\, t^{a/\beta - 1}\, \mathrm {d}t, \end{aligned}$$which is proportional to the one we obtained in Theorem 7. Therefore, we do not give additional details here. \(\square \)
For example, the described class of functions contains functions such as \(e^x\) and \({x^{2m+1} + c}\), \({m \in \mathbb {N}}\), for a proper constant c.
Of course, this is not the only class of functions h(x) that leaves the degree distribution unchanged. For example, it is easy to show that \(h(x) = x^{2m}, m \in \mathbb {N}\), also has this property; the proof differs only in the computation of \(P_{e}(w)\).
Conclusion
In our work, we suggest a new model for scale-free network generation, which is based on matrix factorization and has a geographical interpretation. We formalize it for fixed size and growing networks. We prove, and validate empirically, that the degree distribution of the resulting networks obeys a power law with an exponent of 2.
We also consider several extensions of the model. First, we study the case of directed networks and obtain a power-law degree distribution with a tunable exponent. Then, we apply different functions to the dot product of latent features vectors, which gives us modifications with interesting properties.
Further research could focus on a deeper study of the distribution of latent features vectors. It seems that not only a uniform distribution over the surface of the sphere should be considered because, for example, cities are not uniformly distributed over the surface of the Earth. Besides, we want to try other distributions of weights.
References
 1.
Albert R, Barabási AL. Statistical mechanics of complex networks. Rev Mod Phys. 2002;74(1):47.
 2.
Clauset A, Shalizi CR, Newman ME. Power-law distributions in empirical data. SIAM Rev. 2009;51(4):661–703.
 3.
Colomer-de-Simón P, Boguñá M. Clustering of random scale-free networks. Phys Rev E Stat Nonlin Soft Matter Phys. 2012;86:026120 (preprint arXiv:1205.2877).
 4.
Callaway DS, Newman ME, Strogatz SH, Watts DJ. Network robustness and fragility: percolation on random graphs. Phys Rev Lett. 2000;85(25):5468.
 5.
Moreno Y, Pacheco AF. Synchronization of Kuramoto oscillators in scale-free networks. EPL (Europhys Lett). 2004;68(4):603.
 6.
Menon AK, Elkan C. Link prediction via matrix factorization. In: Gunopulos D, Hofmann T, Malerba D, Vazirgiannis M, editors. Machine learning and knowledge discovery in databases: European conference, ECML PKDD 2011, Athens, September 5–9, 2011, Proceedings, Part II. Berlin: Springer; 2011. p. 437–52.
 7.
Hayashi Y. A review of recent studies of geographical scale-free networks. arXiv preprint physics/0512011; 2005.
 8.
Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–12.
 9.
Bollobás B, Riordan O, Spencer J, Tusnády G, et al. The degree sequence of a scale-free random graph process. Random Struct Algorithms. 2001;18(3):279–90.
 10.
Holme P, Kim BJ. Growing scale-free networks with tunable clustering. Phys Rev E. 2002;65(2):026107.
 11.
Lee DD, Seung HS. Algorithms for non-negative matrix factorization. In: Advances in neural information processing systems; 2001. p. 556–62.
 12.
Koren Y, Bell R, Volinsky C. Matrix factorization techniques for recommender systems. Computer. 2009;8:30–7.
 13.
Liben-Nowell D, Kleinberg J. The link-prediction problem for social networks. J Am Soc Inf Sci Technol. 2007;58(7):1019–31.
 14.
Masuda N, Miwa H, Konno N. Geographical threshold graphs with small-world and scale-free properties. Phys Rev E. 2005;71(3):036108.
 15.
Morita S. Crossovers in scale-free networks on geographical space. Phys Rev E. 2006;73(3):035104.
 16.
Rozenfeld AF, Cohen R, ben-Avraham D, Havlin S. Scale-free networks on lattices. Phys Rev Lett. 2002;89(21):218701.
 17.
Warren CP, Sander LM, Sokolov IM. Geography in a scale-free network model. Phys Rev E. 2002;66(5):056105.
 18.
Yakubo K, Korošak D. Scale-free networks embedded in fractal space. Phys Rev E. 2011;83(6):066111.
Authors' contributions
This work is the result of a close joint effort in which all authors contributed almost equally to defining and shaping the problem definition, proofs, algorithms, and manuscript. The research would not have been conducted without the participation of any of the authors. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Appendix
Proof of Lemma 1
For a node x with weight w, the probability of being connected to a random node is represented by
We can rewrite the inequality \(ww'(x,x')\ge \theta \) as \({(x,x')\ge \frac{\theta }{ww'}}\). If \(\frac{\theta }{ww'} \in [0, 1]\), this inequality defines a spherical cap of area \(2\pi (1 - \frac{\theta }{ww'})\). Therefore, we have
$$\begin{aligned} P_{e}(w) = \int \limits _{\max (w_0,\, \theta /w)}^{\infty } f(w')\, \frac{1}{2}\left( 1 - \frac{\theta }{ww'}\right) \mathrm {d}w'. \end{aligned}$$
If we substitute \(f(w')\) from (2), we obtain
If \(w \le \theta / w_0\), then
$$\begin{aligned} P_{e}(w) = \frac{w_0^a w^a}{2(a+1)\theta ^a}. \end{aligned}$$
If \(w > \theta / w_0\), then
$$\begin{aligned} P_{e}(w) = \frac{1}{2}\left( 1 - \frac{a\theta }{(a+1)w_0 w}\right) . \end{aligned}$$
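The spherical-cap computation in this proof is easy to verify numerically: for a fixed unit vector x and \(x'\) uniform on \(S^2\), the fraction of directions with \((x, x') \ge c\) should approach \((1-c)/2\). A Monte Carlo sketch (sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((200000, 3))             # Gaussian samples...
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # ...normalized onto S^2
c = 0.3
# by rotational symmetry we may fix x = e_1, so (x, x') is the first coordinate
frac = np.mean(pts[:, 0] >= c)
# cap area 2*pi*(1 - c) over the total area 4*pi gives (1 - c)/2
```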
Proof of Lemma 2
The edge probability is represented by
Using (18), we obtain
If \(\theta < w_{0}^2\), then for all \(w \in [w_0, \infty )\), \(P_{e}(w)\) equals \(\frac{1}{2}\left( 1 - \frac{a\theta }{w(a+1)w_{0}}\right) \). Using it, we get
If \(\theta \ge w_{0}^2\), then
$$\begin{aligned} P_{e} = \frac{a w_0^{2a}}{2(a+1)\theta ^a}\ln \frac{\theta }{w_0^2} + \frac{2a+1}{2(a+1)^2}\left( \frac{w_0^2}{\theta }\right) ^a. \end{aligned}$$
Proof of Lemma 3
Let us enumerate pairs of nodes. Each pair of nodes i has an edge indicator \(I_{e_i}\).
By definition, we have
\(I_{e_1}\), \(\ldots \), \(I_{e_{n(n-1)/2}}\) is a sequence of identically distributed random variables, so they all have the same expected value \(P_{e}\).
Since \(\mathbb {E}I_{e_i}^2 = \mathbb {E}I_{e_i} = P_{e}\), it follows that
If edges \(e_{i}\) and \(e_j\) do not have mutual nodes, then \(I_{e_i}\) and \(I_{e_j}\) are independent variables. Therefore, \(\mathrm {E}(I_{e_i} I_{e_j}) = \mathrm {E}(I_{e_i}) \mathrm {E}(I_{e_j}) = P_{e}^2\). We get
\(\mathbb {E}I_{e(v, w)}I_{e(v, z)}\) is exactly equal to \(P_<\).
Proof of Lemma 4
It can be easily seen that
If \(\theta < w_{0}^2\) we have
If \(\theta \ge w_{0}^2\), then
Computing the first integral, we get
And for the second one, we have
This gives us \(P_<\) in the case of \(\theta \ge w_{0}^2\):
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Received
Accepted
Published
DOI
Keywords
 Scalefree networks
 Matrix factorization
 Threshold models