If you did not already know

Generalized Power Generalized Weibull Distribution This paper introduces a new generalization of the power generalized Weibull distribution, called the generalized power generalized Weibull distribution, which can also be viewed as a generalization of the Weibull distribution. The hazard rate function of the new model is flexible and can take various shapes, including increasing, decreasing, bathtub, and upside-down bathtub. Several statistical properties of the new model are obtained, including the quantile function, moment generating function, reliability function, hazard function, and reverse hazard function. The moments, incomplete moments, mean deviations, Bonferroni and Lorenz curves, and the densities of the order statistics are also derived. The model parameters are estimated by the maximum likelihood method, and the usefulness of the proposed model is illustrated with two real-data applications. …
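
The abstract does not state the new distribution's functional form, but the base power generalized Weibull is commonly parameterized by the survival function S(t) = exp{1 − (1 + (t/σ)^ν)^γ}. As a minimal sketch under that assumed parameterization (function names here are illustrative, not the paper's notation), its hazard rate and the shape flexibility mentioned above look like this:

```python
import numpy as np

def pgw_survival(t, sigma, nu, gamma):
    """Survival function S(t) = exp{1 - (1 + (t/sigma)^nu)^gamma}."""
    return np.exp(1.0 - (1.0 + (t / sigma) ** nu) ** gamma)

def pgw_hazard(t, sigma, nu, gamma):
    """Hazard h(t) = -(d/dt) log S(t)
            = (gamma*nu/sigma) * (t/sigma)^(nu-1) * (1 + (t/sigma)^nu)^(gamma-1)."""
    u = (t / sigma) ** nu
    return (gamma * nu / sigma) * (t / sigma) ** (nu - 1) * (1.0 + u) ** (gamma - 1)

# Different (nu, gamma) choices give the four hazard shapes the
# abstract mentions: increasing, decreasing, bathtub, upside-down bathtub.
t = np.linspace(0.01, 5.0, 200)
for nu, gamma in [(2.0, 1.0), (0.5, 1.0), (3.0, 0.2), (0.5, 3.0)]:
    h = pgw_hazard(t, sigma=1.0, nu=nu, gamma=gamma)
```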

Weighted-SVD Matrix factorization models, sometimes called latent factor models, are a family of methods in recommender systems research that (1) learn latent factors for users and items and (2) predict users’ ratings on items from those latent factors. However, current matrix factorization models presume that all latent factors are equally weighted, which may not be a reasonable assumption in practice. In this paper, we propose a new model, called Weighted-SVD, that integrates a linear regression model with the SVD model so that each latent factor is accompanied by a corresponding weight parameter. This mechanism allows the latent factors to have different weights when influencing the final ratings. The complexity of the Weighted-SVD model is slightly higher than that of the SVD model but much lower than that of the SVD++ model. We compared the Weighted-SVD model with several latent factor models on five public datasets using the Root-Mean-Squared-Error (RMSE). The results show that the Weighted-SVD model outperforms the baseline methods on all the experimental datasets under almost all settings. …
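
The abstract does not spell out the exact equations, but the stated idea, one learned weight per latent factor on top of an SVD-style predictor, can be sketched as below. The prediction rule r̂ = μ + b_u + b_i + Σ_k w_k·p_uk·q_ik and all names are assumptions for illustration, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 8

P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
w = np.ones(k)                                # per-factor weights (the new part)
mu, b_u, b_i = 3.5, np.zeros(n_users), np.zeros(n_items)

def predict(u, i):
    # Plain SVD would use P[u] @ Q[i]; here each factor's
    # contribution is scaled by its learned weight w[k].
    return mu + b_u[u] + b_i[i] + np.dot(w * P[u], Q[i])

def sgd_step(u, i, r, lr=0.01, lam=0.02):
    # One SGD update on the squared error for an observed rating r,
    # with L2 regularization lam on all learned parameters.
    e = r - predict(u, i)
    b_u[u] += lr * (e - lam * b_u[u])
    b_i[i] += lr * (e - lam * b_i[i])
    gp = e * w * Q[i] - lam * P[u]
    gq = e * w * P[u] - lam * Q[i]
    gw = e * P[u] * Q[i] - lam * w
    P[u] += lr * gp
    Q[i] += lr * gq
    w[:] += lr * gw
```

Setting w to all ones and freezing it recovers the plain SVD model, which is one way to see why the added complexity is small.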

Operation-Guided Attention-Based Sequence-to-Sequence Network (OpAtt) Recent neural models for data-to-text generation are mostly based on data-driven, end-to-end training of encoder-decoder networks. Although the generated texts are mostly fluent and informative, these models often produce descriptions that are inconsistent with the input structured data, a critical issue in domains that require inference or calculation over raw data. In this paper, we attempt to improve the fidelity of neural data-to-text generation by utilizing pre-executed symbolic operations. We propose a framework called Operation-guided Attention-based sequence-to-sequence network (OpAtt), with a specially designed gating mechanism and a quantization module for operation results, to utilize the information from pre-executed operations. Experiments on two sports datasets show that our proposed method clearly improves the fidelity of the generated texts to the input structured data. …
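
The abstract gives only the high-level design, so the following is a rough sketch of what a quantization module plus a gate over pre-executed operation results might look like; every name, shape, and the binning scheme here is hypothetical rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16        # hidden size (illustrative)
n_bins = 10   # quantization bins for numeric operation results

# Hypothetical quantization module: map a pre-executed operation result
# (e.g. a score difference) to a bin embedding, since raw numbers are
# hard for a decoder to consume directly.
bin_edges = np.linspace(-50.0, 50.0, n_bins - 1)
bin_emb = rng.normal(scale=0.1, size=(n_bins, d))

def quantize(op_result):
    return bin_emb[np.digitize(op_result, bin_edges)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical gate: decides how much information flows from the
# operation result versus the raw-record attention context.
W_g = rng.normal(scale=0.1, size=(2 * d, d))

def gated_fusion(dec_state, ctx, op_result):
    op_vec = quantize(op_result)
    gate = sigmoid(np.concatenate([dec_state, op_vec]) @ W_g)
    return gate * op_vec + (1.0 - gate) * ctx

fused = gated_fusion(rng.normal(size=d), rng.normal(size=d), op_result=7.0)
```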
