
Faculty publications


Abstract:

In recent years, much attention has been paid to the role of classical special functions of a real or complex variable in mathematical physics, especially in boundary value problems (BVPs). In the present paper, we propose a higher-dimensional analogue of the generalized Bessel polynomials within Clifford analysis via a special set of monogenic polynomials. We give the definition and derive a number of important properties of the generalized monogenic Bessel polynomials (GMBPs), which are defined by a generating exponential function and are shown to satisfy an analogue of Rodrigues' formula. As a consequence, we establish an expansion of particular monogenic functions in terms of GMBPs and show that the underlying basic series is everywhere effective. We further show that these polynomials satisfy a second-order homogeneous differential equation
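
For orientation, in the classical one-variable setting the generalized Bessel polynomials $y_n(x; a, b)$ of Krall and Frink satisfy the second-order equation below (a standard classical formula quoted for context, not taken from the abstract); the GMBPs of the paper are a higher-dimensional monogenic analogue of this picture:

\[ x^2\, y_n''(x) + (a x + b)\, y_n'(x) = n (n + a - 1)\, y_n(x). \]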

Abstract:

Momentum profits depend mainly on the short leg and therefore on barriers to short sales. Our research indicates that the decline in momentum profitability over the past two decades is driven partly by a contemporaneous growth in stock options trading. Stock options offer an alternative to short selling, augmenting the stock lending market and thereby contributing to improved pricing efficiency. The resulting reduction in barriers to short sales contributes to lower returns to momentum trading from the short leg. Our results persist after matching stocks with and without options based on different firm-level characteristics

Abstract:

We consider twisted graphs, that is, topological graphs that are weakly isomorphic to subgraphs of the complete twisted graph $T_n$. We determine the exact minimum number of crossings of edges among the set of twisted graphs with $n$ vertices and $m$ edges, state a version of the crossing lemma for twisted graphs, and conclude that the mid-range crossing constant for twisted graphs is $1/6$. Let $e_k(n)$ be the maximum number of edges over all twisted graphs with $n$ vertices and local crossing number at most $k$. We give lower and upper bounds for $e_k(n)$ and settle its exact value for $k \in \{0, 1, 2, 3, 6, 10\}$. We conjecture that for every $t \ge 1$ and $n \ge t + 1$, $e_{t^2}(n) = (t+1)n - \binom{t+2}{2}$
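
As a quick consistency check (worked out here from the stated formula, not part of the abstract), the case $t = 1$ of the conjecture reads

\[ e_1(n) = (1+1)n - \binom{3}{2} = 2n - 3, \qquad n \ge 2, \]

and $k = 1$ is indeed among the values for which the exact value of $e_k(n)$ has been settled.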

Abstract:

This study reviews the literature on diversity published over the last 50 years. Research findings from this period reveal that one cannot assume a pure and simple relationship between diversity and performance without considering a series of variables that affect this relationship. In this study, emphasis is placed on analyzing the results of empirical research on the relation between the most studied dimensions of diversity and performance. The results presented are part of a more extensive research project

Abstract:

In order to analyze the influence of substituent groups, both electron-donating and electron-attracting, and of the number of π-electrons on the corrosion-inhibiting properties of organic molecules, a theoretical quantum chemical study in vacuo and in the presence of water, using the Polarizable Continuum Model (PCM), was carried out for four different molecules bearing a similar chemical framework: 2-mercaptoimidazole (2MI), 2-mercaptobenzimidazole (2MBI), 2-mercapto-5-methylbenzimidazole (2M5MBI), and 2-mercapto-5-nitrobenzimidazole (2M5NBI). From an electrochemical study conducted previously in our group (R. Álvarez-Bustamante, G. Negrón-Silva, M. Abreu-Quijano, H. Herrera-Hernández, M. Romero-Romo, A. Cuán, M. Palomar-Pardavé, Electrochim. Acta, 54, (2009) 539), it was found that the corrosion inhibition efficiency, IE, order followed by the molecules tested was 2MI > 2MBI > 2M5MBI > 2M5NBI; thus, 2MI turned out to be the best inhibitor. This fact strongly suggests that, contrary to a hitherto generally held notion, an efficient corrosion-inhibiting molecule need neither be large nor possess an extensive number of π-electrons. In this work, a theoretical correlation was found between EHOMO, hardness (η), electron charge transfer (ΔN), electrophilicity (W), back-donation (ΔEBack-donation) and the inhibition efficiency, IE. Both EHOMO and the standard Gibbs free energy estimated for all the molecules (based on the calculated equilibrium constant) were negative, indicating that the complete chemical processes in which the inhibitors are involved occur spontaneously
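
The descriptors listed above are conventionally obtained from the frontier-orbital energies via the standard conceptual-DFT working formulas (our gloss; the abstract does not spell them out), with ionization potential $I \approx -E_{HOMO}$ and electron affinity $A \approx -E_{LUMO}$:

\[ \chi = \frac{I + A}{2}, \qquad \eta = \frac{I - A}{2}, \qquad W = \frac{\chi^2}{2\eta}, \qquad \Delta N = \frac{\chi_{\mathrm{Fe}} - \chi_{\mathrm{inh}}}{2(\eta_{\mathrm{Fe}} + \eta_{\mathrm{inh}})}. \]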

Abstract:

The run sum chart is an effective two-sided chart that can be used to monitor for process changes. It is known to be more sensitive than the Shewhart chart with runs rules, and its performance improves as the number of regions increases. However, as the number of regions increases, the resulting chart has more parameters to be defined and its design becomes more involved. In this article, we introduce a one-parameter run sum chart. This chart accumulates scores equal to the subgroup means and signals when the cumulative sum exceeds a limit value. A fast initial response feature is proposed and its run length distribution function is found by a set of recursive relations. We compare this chart with other charts suggested in the literature and find it competitive with the CUSUM, the FIR CUSUM, and the combined Shewhart FIR CUSUM schemes

Abstract:

Processes with a low fraction of nonconforming units are known as high-yield processes. These processes produce only a few nonconforming parts per million. Traditional methods for monitoring the fraction of nonconforming units, such as the binomial and geometric control charts with probability limits, are not effective. In order to properly monitor these processes, we propose new two-sided geometric-based control charts. In this article we show how to design, analyze, and evaluate their performance. We conclude that these new charts outperform other geometric charts suggested in the literature
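
To fix ideas, here is a minimal R sketch of the geometric baseline such charts build on (illustrative only, not the proposed charts): under an in-control fraction nonconforming p0, the count of conforming units between successive nonconforming ones is geometric, and two-sided probability limits can be read off its quantiles.

    # Illustrative two-sided probability limits for a geometric-based chart.
    # p0: in-control fraction nonconforming; alpha: nominal false-alarm rate.
    geom_limits <- function(p0, alpha = 0.0027) {
      c(LCL = qgeom(alpha / 2, p0),       # very short runs suggest p has increased
        UCL = qgeom(1 - alpha / 2, p0))   # very long runs suggest p has decreased
    }
    geom_limits(p0 = 50e-6)               # e.g., a 50 ppm high-yield process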

Abstract:

To improve the performance of control charts, the conditional decision procedure (CDP) incorporates a number of previous observations into the chart's decision rule. Charts with this runs rule are expected to be more sensitive to shifts in the process parameter. To signal an out-of-control condition more quickly, some charts use a headstart feature; they are referred to as charts with fast initial response (FIR). The CDP chart can also be used with FIR. In this article we analyze and compare the performance of geometric CDP charts with and without FIR. To do so, we model the CDP charts with a Markov chain and find closed-form ARL expressions. We find the conditional decision procedure useful when the fraction p of nonconforming units deteriorates. However, the CDP chart is not very effective for signaling decreases in p
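
The Markov-chain route to ARLs used here is generic enough to sketch: if Q holds the transition probabilities among the chart's transient (non-signaling) states, the vector of ARLs from each starting state solves (I - Q)ARL = 1. A minimal R illustration with an arbitrary toy Q (ours, not the paper's chart):

    # ARL of a chart modeled as an absorbing Markov chain.
    # Q: transition matrix restricted to the transient (in-control) states.
    arl_from_Q <- function(Q) {
      solve(diag(nrow(Q)) - Q, rep(1, nrow(Q)))   # (I - Q)^{-1} * 1
    }
    Q <- matrix(c(0.90, 0.05,    # toy example: each row sums to < 1;
                  0.50, 0.40),   # the deficit is the per-step signal probability
                nrow = 2, byrow = TRUE)
    arl_from_Q(Q)                # ARLs when starting from state 1 or state 2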

Abstract:

A fast initial response (FIR) feature for the run sum R chart is proposed and its ARL performance estimated by a Markov chain representation. It is shown that this chart is more sensitive than several R charts with runs rules proposed by different authors. We conclude that the run sum R chart is simple to use and a very effective tool for monitoring increases and decreases in process dispersion

Abstract:

The standard S chart signals an out-of-control condition when one point exceeds a control limit. It can be augmented with runs rules to improve its performance in detecting assignable causes. A commonly used rule signals when k consecutive points exceed a control limit. This rule can be used alone or to supplement the standard chart. In this article we derive ARL expressions for charts with the k-of-k runs rule. We show how to design S charts with this runs rule, compare their ARL performance, and make a control chart recommendation for when it is important to monitor for both increases and decreases in process dispersion
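
For intuition, a standard waiting-time computation consistent with (though not copied from) the paper's ARL expressions: if each point independently exceeds the limit with probability $p$, the expected number of points until $k$ consecutive exceedances is

\[ \mathrm{ARL} = \frac{1 - p^k}{(1 - p)\, p^k}, \]

which collapses to the familiar $1/p$ of the one-point rule when $k = 1$.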

Abstract:

We analyze the performance of traditional R charts and introduce modifications for monitoring both increases and decreases in process dispersion. We show that the use of equal-tail probability limits and of some runs rules does not represent a significant improvement for monitoring increases and decreases in the variability of the process. We propose to use the R chart with a central line equal to the median of the distribution of the range. We also suggest supplementing this chart with a runs rule that signals when nine consecutive points lie on the same side of the median line. We find that such modifications lead to R charts with improved performance for monitoring process dispersion
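
Centering the chart at the median is what makes the rule's in-control behaviour transparent: each point independently falls on either side with probability 1/2, so nine given consecutive points lie on one common side with probability

\[ 2 \left( \tfrac{1}{2} \right)^9 = \tfrac{1}{256} \approx 0.0039. \]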

Abstract:

To increase their performance in detecting small shifts, control charts are used with runs rules. The Western Electric Handbook (1956) suggests runs rules to be used with the Shewhart X chart. In this article, we review the performance of two sets of runs rules. The rules of one set signal if k successive points fall beyond a limit. The rules of the other set signal if k out of k+1 consecutive points fall beyond a different limit. We suggest runs rules from these sets. They are intended to be used with a modified Shewhart X chart, a chart with 3.3σ limits. We find that for small shifts all suggested charts perform better than the Shewhart X chart, while for large shifts they have comparable performance
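
The widening from 3σ to 3.3σ buys back some in-control run length to spend on the runs rules. Under normality, the in-control ARL of the limits alone is a one-line check in R (our arithmetic, not from the paper):

    # In-control ARL of an X chart with 3.3-sigma limits, normal observations
    1 / (2 * pnorm(-3.3))   # about 1035, versus about 370 for 3-sigma limits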

Abstract:

Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. In such cases the use of traditional control charts may not be effective. In this article, we estimate the performance of control charts derived under the assumption of normality (normal charts) but used with a much broader range of distributions. We consider monitoring the dispersion of processes that follow the exponential power family of distributions (a family of distributions which includes the normal as a special case). We have found that if a normal CUSUM chart is used with several of these processes, the rate of false alarms might be quite different from the rate that results when a normal process is monitored. A normal chart might also not be sensitive enough to detect changes in such processes. CUSUM charts suitable for monitoring this family of processes are derived to show how much sensitivity is recovered when the correct chart is used
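
One common parameterization of the exponential power family referred to here (our notation) has density

\[ f(x; \mu, \sigma, \beta) = \frac{\beta}{2 \sigma \Gamma(1/\beta)} \exp\left\{ -\left( \frac{|x - \mu|}{\sigma} \right)^{\beta} \right\}, \]

where $\beta = 2$ recovers the normal distribution (up to the scaling of $\sigma$), $\beta = 1$ gives the Laplace, and $\beta \to \infty$ approaches the uniform.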

Abstract:

In this paper we analyze several control charts suitable for monitoring process dispersion when subgrouping is not possible or not desirable. We compare the performances of a moving range chart, a cumulative sum (CUSUM) chart based on moving ranges, a CUSUM chart based on an approximate normalizing transformation, a self-starting CUSUM chart, a change-point CUSUM chart, and an exponentially weighted moving average chart based on the subgroup variance. The average run length performances of these charts are also estimated and compared

Abstract:

We consider several control charts for monitoring normal processes for changes in dispersion. We present comparisons of the average run length performances of these charts. We demonstrate that a CUSUM chart based on the likelihood ratio test for the change point problem for normal variances has an ARL performance that is superior to other procedures. Graphs are given to aid in designing this control chart

Abstract:

It is common practice to monitor the fraction p of non-conforming units to detect whether the quality of a process improves or deteriorates. Users commonly assume that the number of non-conforming units in a subgroup is approximately normal, since large subgroup sizes are considered. If p is small, this approximation might fail even for large subgroup sizes. If, in addition, both upper and lower limits are used, the performance of the chart in terms of fast detection may be poor; the chart might not quickly detect the presence of special causes. In this paper the performance of several charts for monitoring increases and decreases in p is analyzed based on their Run Length (RL) distribution. It is shown that replacing the lower control limit by a simple runs rule can result in an increase in the overall chart performance. The concept of RL-unbiased performance is introduced. It is found that many commonly used p charts and other charts proposed in the literature have RL-biased performance. For this reason, new control limits that yield an exactly (or nearly) RL-unbiased chart are proposed
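
For context, the normal approximation at issue underlies the textbook np chart limits (standard formulas, quoted here, with $n$ the subgroup size):

\[ np_0 \pm 3\sqrt{n p_0 (1 - p_0)}; \]

for small $p_0$ the lower limit is often negative and gets truncated to zero, which is exactly the situation in which replacing it with a runs rule becomes attractive.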

Abstract:

When monitoring a process it is important to quickly detect increases and decreases in its variability. In addition to preventing any increase in the variability of the process and any deterioration in the quality of the output, it is also important to search for special causes that may result in a smaller process dispersion. Considering this, users should always try to monitor for both increases and decreases in the variability. The process variability is commonly monitored by means of a Shewhart range chart. For small subgroup sizes this control chart has a lower control limit equal to zero. To help monitor for both increases and decreases in variability, Shewhart charts with probability limits or runs rules can be used. CUSUM and EWMA charts based on the range or a function of the subgroup variance can also be used. In this paper a CUSUM chart based on the subgroup range is proposed. Its performance is compared with that of other charts proposed in the literature. It is found that for small subgroup sizes, it has an excellent performance and it thus represents a powerful alternative to currently utilized strategies.

Abstract:

Most control charts for variables data are constructed and analyzed under the assumption that observations are independent and normally distributed. Although this may be adequate for many processes, there are situations where these basic assumptions do not hold. The use of control charts in such cases may not be effective if the rate of false alarms is high or if the control chart is not sensitive in detecting changes in the process. In this paper, the ARL performance of a CUSUM chart for dispersion is analyzed under a variety of non-normal process models. We will consider processes that follow the exponential power family of distributions, a symmetric class of distributions which includes the normal distribution as a special case

Summary:

In this paper we study Mexican firms' choice of long-term financing between syndicated loans and bond issuance. We restrict our attention to those firms that issue securities in capital markets. Our findings suggest that the firm's size, the quality of its collateral, and its credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not consistent along their range of values. In particular, we find evidence that firms' credit quality has a non-monotonic effect on the choice

Abstract:

We study Mexican firms' long-term financing choice between syndicated loans and public debt issuance. We restrict our attention to those firms that issue securities in capital markets. Our findings suggest that the firm's size, the quality of its collateral, and its credit quality play an important role in the outcome of the choice. We also find that the marginal effects of these three characteristics are not consistent along their range of values. In particular, we find evidence that the credit quality of firms has a non-monotonic effect on the choice

Summary:

Purpose. This article examines the factors that affect the probability that a firm goes public, using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014

Abstract:

Purpose. The purpose of this paper is to examine the factors that affect the likelihood of being public using a comprehensive database of private and public companies in Mexico, from all sectors, during 2006-2014

Summary:

This article analyzes housing demand in Mexico through spending on housing services and the user cost of residential capital for each representative household by income percentile. The permanent income hypothesis is considered as a function of the sociodemographic characteristics and the educational attainment of the household head. Elasticities with respect to income, wealth, age of the household head, household size, and number of employed members are obtained, as well as the semi-elasticity of the user cost of residential capital. Housing expenditure is inelastic, although it is more sensitive to current income than to permanent income. We also show that this market has a regressive structure, and a sensitivity analysis is carried out to measure the impact on housing expenditure of certain variations in the long-run user cost of residential capital

Abstract:

This article analyzes the demand for housing in Mexico through spending on housing services and the user cost of owner-occupied housing for each representative household by income percentile. The permanent income hypothesis, as a function of the socio-demographic characteristics and the education level of the household head, is included in the model. We obtain the elasticities with respect to income, wealth, age of the household head, household size, and number of employed household members, as well as the semi-elasticity of the user cost of residential capital. Expenditure on housing is inelastic, although it is more sensitive to current income than to permanent income. We show that this market structure is regressive; therefore, a sensitivity analysis is performed in order to measure the impact on housing expenditure of certain variations in the long-run user cost of owner-occupied housing

Abstract:

This paper proposes a theoretical model that offers a rationale for the formation of lender syndicates. We argue that the ex-ante process of information acquisition may affect the strategies used to create syndicates. For large loans, the restrictions on lending impose a natural reason for syndication. We study medium-sized loans instead, where there is some room for competition since each financial institution has the ability to take the loan in full by itself. In this case, syndication would be the optimal choice only if their screening costs are similar. Otherwise, lenders would be compelled to compete, since a lower screening cost can create a comparative advantage in interest rates

Abstract:

We consider a model of bargaining by concessions where agents can terminate negotiations by accepting the settlement of an arbitrator. The impact of pragmatic arbitrators—who enforce concessions that precede their appointment—is compared with that of arbitrators who act on principle—ignoring prior concessions. We show that while the impact of arbitration always depends on how costly that intervention is relative to direct negotiation, the range of scenarios for which it has an impact, and the precise effect of such impact, changes depending on the behavior—pragmatic or on principle—of the arbitrator. Moreover, the requirement of mutual consent to appoint the arbitrator matters only when he is pragmatic. Efficiency and equilibrium are not aligned, since agents sometimes reach negotiated agreements when an arbitrated settlement is more efficient and vice versa. Which system of arbitration performs best depends on the arbitration and negotiation costs, and each can be optimal for plausible environments

Abstract:

This paper analyzes a War of Attrition where players enjoy private information about their outside opportunities. The main message is that uncertainty about the possibility that the opponent opts out increases the equilibrium probability of concession

Abstract:

In many modern production systems the human operator faces problems when interacting with the production system through the control system. One reason for this is that control systems are designed mainly with respect to production, without taking into account how an operator is supposed to interact with them. This article presents a control system whose goal is to increase the flexibility and reusability of production equipment and program modules. Apart from this, the control system is also suitable for human operator interaction. To make it easier for an operator to interact with the control system, the operator activities vis-à-vis the control system have been divided into so-called control levels. One of the six predefined control levels is described in more detail to illustrate how production can be manipulated with the help of a control system at this level. Communication with the control system is accomplished with the help of a graphical tool that interacts with a relational database. The tool has been implemented in Java to make it platform-independent. Some examples of the tool and how it can be used are provided

Abstract:

Modern control systems often exhibit problems in switches between automatic and manual system control. One reason for this is the structure of the control system, which is usually not designed for this type of action. This article presents a method for splitting the control system into different control levels. By switching between these control levels, the operator can increase or decrease the number of manual control activities he wishes to perform while still enjoying the support of the control system. The structural advantages of the control levels are demonstrated for two types of operator activity: (1) control flow tracing and (2) control flow alteration. These two types of operator activity can be used in situations such as locating an error, introducing a new machine, changing the ordering of products, or optimizing the production flow

Abstract:

Partly due to the introduction of computers and intelligent machines in modern manufacturing, the role of the operator has changed over time. More and more work tasks have been automated, reducing the need for human interaction. One reason for this is the decrease in the relative cost of computers and machinery compared to the cost of having operators. Even though this statement may be true in industrialized countries, it is not evident that it is valid in developing countries. However, a goal that is valid for both industrialized and developing countries is to obtain balanced automation systems. A balanced automation system is characterized by "the correct mix of automated activities and the human activities". The way of reaching this goal, however, might differ depending on the place of the manufacturing installation. Aspects such as time, money, safety, flexibility, and quality govern the steps to take in order to reach a balanced automation system. In this paper, six steps of automation are defined that identify areas of work activities in a modern manufacturing system that might be performed by either an automatic system or a human. By combining these steps of automation into what are called levels of automation, a mix of automatic and manual activities is obtained. Through the analysis of these levels of automation, with respect to machine costs and product quality, we demonstrate what the lowest possible automation level should be when striving for balanced automation systems in developing countries. The bottom line of the discussion is that product supervision should not be left solely to human operators, but rather be performed automatically by the system

Abstract:

In the matching with contracts literature, three well-known conditions (from stronger to weaker)–substitutes, unilateral substitutes (US), and bilateral substitutes (BS)–have proven to be critical. This paper aims to deepen our understanding of them by separately axiomatizing the gap between BS and the other two. We first introduce a new “doctor separability” condition (DS) and show that BS, DS, and irrelevance of rejected contracts (IRC) are equivalent to US and IRC. Due to Hatfield and Kojima (2010) and Aygün and Sönmez (2012), we know that US, “Pareto separability” (PS), and IRC are the same as substitutes and IRC. This, along with our result, implies that BS, DS, PS, and IRC are equivalent to substitutes and IRC. All of these results are given without IRC whenever hospitals have preferences

Abstract:

The paper analyzes the role of labor market segmentation and relative wage rigidity in the transmission process of disinflation policies in an open economy facing imperfect capital markets. Wages are flexible in the nontradables sector, and based on efficiency factors in the tradables sector. With perfect labor mobility, a permanent reduction in the devaluation rate leads in the long run to a real appreciation, a lower ratio of output of tradables to nontradables, an increase in real wages measured in terms of tradables, and a fall in the product wage in the nontradables sector. Under imperfect labor mobility, unemployment temporarily rises.

Abstract:

In this paper we study the adjacency matrix of some infinite graphs, which we call the shift operator on the $L^p$ space of the graph. In particular, we establish norm estimates, we find the norm in some cases, we decide the triviality of the kernel for some infinite trees, and we find the eigenvalues of certain infinite graphs obtained by attaching an infinite tail to some finite graphs

Summary:

Mexico is a country immersed in an accelerated process of population aging. Older adults are exposed to multiple risks, among them family violence, not only from their partners but also from other family members. The objective of this article is to analyze the characteristics and main factors associated with family violence against older adult women (excluding intimate partner violence), by age group (60-69, or 70 years and older) and subtype of violence (emotional, economic, physical, and sexual). A secondary analysis of the 2016 National Survey on the Dynamics of Household Relationships (ENDIREH) was carried out. The final sample consisted of 18,416 women aged 60 or older, representing a total of 7,043,622 women in that age group. A descriptive analysis was performed, and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence. Women of more advanced age had a higher prevalence of family violence

Abstract:

Mexico as a country is immersed in a process of accelerated population aging. Older people are exposed to multiple risks, including domestic violence, not only from their partners but also from other family members. The objective of this article is to analyze the characteristics and main factors associated with family violence against older adult women in 2016 (excluding intimate partner violence), by age group (60-69 and 70 years old or more) and subtype of violence (emotional, economic, physical, and sexual). A secondary analysis of the National Survey on the Dynamics of Household Relationships (ENDIREH in Spanish) 2016 was carried out. The final sample consisted of 18,416 women aged 60 or more, which represented a total of 7,043,622 women in that age group. A descriptive analysis was carried out, and binary regression models were estimated to determine the main factors associated with violence and its subtypes. Emotional violence was the most frequent, followed by economic violence. Older women had a higher prevalence of family violence

Abstract:

The fact that shocks in early life can have long-term consequences is well established in the literature. This paper examines the effects of extreme precipitation on cognitive and health outcomes and shows that impacts can be detected as early as 2 years of age. Our analyses indicate that negative conditions (i.e., extreme precipitation) experienced during the early stages of life affect children's physical, cognitive and behavioral development measured between 2 and 6 years of age. Affected children exhibit lower cognitive development (measured through language, working and long-term memory, and visual-spatial thinking) on the order of 0.15 to 0.19 SDs. Lower height and weight impacts are also identified. Changes in food consumption and diet composition appear to be key drivers behind these impacts. Partial evidence of mitigation from the delivery of government programs is found, suggesting that if not addressed promptly and with targeted policies, cognitive functioning delays may not be easily recovered

Abstract:

A number of studies document gender differentials in agricultural productivity. However, they are limited to region- and crop-specific estimates of the mean gender gap. This article improves on previous work in three ways. First, data representative at the national level and for a wide variety of crops is exploited. Second, decomposition methods, traditionally used in the analysis of wage gender gaps, are employed. Third, heterogeneous effects by women's marital status and along the productivity distribution are analyzed. Drawing on data from the 2011-2012 Ethiopian Rural Socioeconomic Survey, we find an overall 23.4 percentage point productivity differential in favor of men, of which 13.5 percentage points (57%) remain unexplained after accounting for gender differences in land manager characteristics, land attributes, and access to resources. The magnitude of the unexplained fraction is large relative to prior estimates in the literature. A more detailed analysis suggests that differences in the returns to extension services, land certification, land extension, and product diversification may contribute to the unexplained fraction. Moreover, the productivity gap is mostly driven by non-married female managers, particularly divorced women; married female managers do not display a disadvantage. Finally, overall and unexplained gender differentials are more pronounced at mid-levels of productivity
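
The decomposition borrowed from the wage-gap literature is, in its simplest two-fold form, the Oaxaca-Blinder split (standard formula, our notation): with mean outcomes $\bar{y}_g$, covariate means $\bar{X}_g$, and group-specific coefficient estimates $\hat{\beta}_g$ for $g \in \{m, f\}$,

\[ \bar{y}_m - \bar{y}_f = (\bar{X}_m - \bar{X}_f)'\hat{\beta}_m + \bar{X}_f'(\hat{\beta}_m - \hat{\beta}_f), \]

where the first term is the part explained by characteristics and the second is the unexplained part, corresponding to the 13.5-percentage-point residual gap reported above.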

Abstract:

Public transportation is a basic everyday activity. Costs imposed by violence might have far-reaching consequences. We conduct a survey and exploit the discontinuity in the hours of operation of a program that reserves subway cars exclusively for women in Mexico City. The program seems to be successful at reducing sexual harassment toward women by 2.9 percentage points. However, it produces unintended consequences by increasing nonsexual aggression incidents (e.g., insults, shoving) among men by 15.3 percentage points. Both sexual and nonsexual violence seem to be costly; however, our results do not imply that costs of the program outweigh its benefits

Abstract:

We measure the effect of a large nationwide tax reform on sugar-added drinks and caloric-dense food introduced in Mexico in 2014. Using scanner data containing weekly purchases of 47,973 barcodes by 8,130 households and an RD design, we find that calories purchased from taxed drinks and taxed food decreased respectively by 2.7% and 3%. However, this was compensated by increases from untaxed categories, such that total calories purchased did not change. We find increases in cholesterol (12.6%), sodium (5.8%), saturated fat (3.1%), carbohydrates (2%), and proteins (3.8%)

Abstract:

The frequency and intensity of extreme temperature events are likely to increase with climate change. Using a detailed dataset containing information on the universe of loans extended by commercial banks to private firms in Mexico, we examine the relationship between extreme temperatures and credit performance. We find that unusually hot days increase delinquency rates, primarily affecting the agricultural sector, but also non-agricultural industries that rely heavily on local demand. Our results are consistent with general equilibrium effects originated in agriculture that expand to other sectors in agricultural regions. Additionally, following a temperature shock, affected firms face increased challenges in accessing credit, pay higher interest rates, and provide more collateral, indicating a tightening of credit during financial distress

Abstract:

In this work we construct a numerical scheme based on finite differences to approximate the free boundary of an American call option. Points of the free boundary are calculated by approximating the solution of the Black-Scholes partial differential equation with finite differences on domains that are parallelograms for each time step. Numerical results are reported
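
For context, the equation being discretized is the Black-Scholes PDE (standard statement; the parallelogram-shaped space-time domains are the paper's device),

\[ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, \]

with the free boundary $S_f(t)$ of the American call determined by value matching and smooth pasting, $V(S_f(t), t) = S_f(t) - K$ and $\partial V / \partial S\,(S_f(t), t) = 1$.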

Abstract:

We present higher-order quadrature rules with end corrections for general Newton-Cotes quadrature rules. The construction is based on the Euler-Maclaurin formula for the trapezoidal rule. We present examples with six well-known Newton-Cotes quadrature rules. We analyze modified end-corrected quadrature rules, which consist of a simple modification of the Newton-Cotes quadratures with end corrections. Numerical tests and stability estimates show the superiority of the corrected rules based on the trapezoidal and midpoint rules
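
The starting point is the Euler-Maclaurin formula (standard form), which exhibits the trapezoidal rule's error as a series of endpoint corrections: for smooth $f$ on $[a, b]$ with $h = (b - a)/N$ and $f_j = f(a + jh)$,

\[ \int_a^b f(x)\,dx = h\left( \tfrac{1}{2} f_0 + f_1 + \cdots + f_{N-1} + \tfrac{1}{2} f_N \right) - \sum_{k=1}^{m} \frac{B_{2k}\, h^{2k}}{(2k)!} \left[ f^{(2k-1)}(b) - f^{(2k-1)}(a) \right] + R_m, \]

so appending weighted endpoint corrections raises the order of the underlying rule.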

Abstract:

The construction of the quadratures is based on the method of central corrections described in [4]. The quadratures consist of the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. Integrals of the above type appear in scattering calculations; we test the performance of the quadrature rules with an example of this kind

Abstract:

In this report we construct corrected trapezoidal quadrature rules up to order 40 to evaluate 2-dimensional integrals of the form $\int_D v(x, y) \log\big((x^2 + y^2)^{1/2}\big)\,dx\,dy$, where the domain D is a square containing the point of singularity (0, 0) and v is a $C^\infty$ function of compact support contained in D. The procedure we use is a modification of the method constructed in [1]. These quadratures are particularly useful in acoustic scattering calculations with large wave numbers. We describe how to extend the procedure to calculate other 2-dimensional integrals with different singularities

Abstract:

In this paper we construct an algorithm to approximate the solution of the initial value problem $y'(t) = f(t, y)$ with $y(t_0) = y_0$. The method is implicit and combines the classical Simpson rule with Simpson's 3/8 rule to yield an unconditionally A-stable method of order 4
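
The two building blocks being combined are the standard quadrature formulas

\[ \int_{t_0}^{t_0 + 2h} f\,dt \approx \frac{h}{3}(f_0 + 4 f_1 + f_2), \qquad \int_{t_0}^{t_0 + 3h} f\,dt \approx \frac{3h}{8}(f_0 + 3 f_1 + 3 f_2 + f_3), \]

both fourth-order accurate, consistent with the order-4 claim for the combined implicit method.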

Abstract:

In this paper, we propose an anisotropic adaptive refinement algorithm based on the finite element methods for the numerical solution of partial differential equations. In 2-D, for a given triangular grid and finite element approximating space V, we obtain information on location and direction of refinement by estimating the reduction of the error if a single degree of freedom is added to V. For our model problem the algorithm fits highly stretched triangles along an interior layer, reducing the number of degrees of freedom that a standard h-type isotropic refinement algorithm would use

Abstract:

In this paper we construct an algorithm that generates a sequence of continuous functions that approximate a given real-valued function f of two variables that has jump discontinuities along a closed curve. The algorithm generates a sequence of triangulations of the domain of f. The triangulations include triangles with high aspect ratio along the curve where f has jumps. The sequence of functions generated by the algorithm is obtained by interpolating f on the triangulations using continuous piecewise polynomial functions. The approximation error of this algorithm is $O(1/N^2)$ when the triangulation contains N triangles and the error is measured in the $L^1$ norm. Algorithms that adaptively generate triangulations by local regular refinement produce approximation errors of size $O(1/N)$, even if higher-order polynomial interpolation is used

Abstract:

We construct correction coefficients for high-order trapezoidal quadrature rules to evaluate three-dimensional singular integrals of the form $J(v) = \int_D \frac{v(x, y, z)}{\sqrt{x^2 + y^2 + z^2}}\,dx\,dy\,dz$, where the domain D is a cube containing the point of singularity (0, 0, 0) and v is a $C^\infty$ function defined on $\mathbb{R}^3$. The procedure employed here is a generalization to 3-D of the method of central corrections for logarithmic singularities [1] in one dimension, and [2] in two dimensions. As in one and two dimensions, the correction coefficients for high-order trapezoidal rules for J(v) are independent of the number of sampling points used to discretize the cube D. When v is compactly supported in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules provide an efficient, stable and accurate way of approximating J(v). We demonstrate the performance of these quadratures of orders up to 17 for highly oscillatory functions v. These types of integrals appear in scattering calculations in 3-D

Abstract:

We present a high-order, fast, iterative solver for the direct scattering calculation for the Helmholtz equation in two dimensions. Our algorithm solves the scattering problem formulated as the Lippmann-Schwinger integral equation for compactly supported, smoothly vanishing scatterers. There are two main components to this algorithm. First, the integral equation is discretized with quadratures based on high-order corrected trapezoidal rules for the logarithmic singularity present in the kernel of the integral equation. Second, on the uniform mesh required for the trapezoidal rule we rewrite the discretized integral operator as a composition of two linear operators: a discrete convolution followed by a diagonal multiplication; therefore, the application of these operators to an arbitrary vector, required by an iterative method for the solution of the discretized linear system, costs $O(N^2 \log N)$ for an $N \times N$ mesh, with the help of the FFT. We demonstrate the performance of the algorithm for scatterers of complex structure and at large wave numbers. For the numerical implementation, GMRES iterations are used, and corrected trapezoidal rules up to order 20 are tested
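
The operator split described here, a discrete convolution followed by a diagonal multiplication, is what makes matrix-free iterations cheap. A small R illustration of applying such an operator with the FFT on an N-by-N mesh (toy kernel and diagonal factor of our choosing, not the paper's discretization):

    # Apply x -> d * (K * x), where * is 2-D circular convolution, via the FFT.
    # K: N-by-N convolution kernel; d: N-by-N diagonal factor; X: N-by-N grid values.
    apply_op <- function(K, d, X) {
      conv <- Re(fft(fft(K) * fft(X), inverse = TRUE)) / length(X)  # O(N^2 log N)
      d * conv
    }
    N <- 64
    K <- matrix(rnorm(N^2), N); d <- matrix(runif(N^2), N); X <- matrix(rnorm(N^2), N)
    Y <- apply_op(K, d, X)   # one matrix-free operator application per iteration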

Abstract:

In this report, we construct correction coefficients to obtain high-order trapezoidal quadrature rules to evaluate two-dimensional integrals with a logarithmic singularity of the form $J(v) = \int_D v(x, y)\, \ln\big(\sqrt{x^2 + y^2}\big)\,dx\,dy$, where the domain D is a square containing the point of singularity (0,0) and v is a $C^\infty$ function defined on the whole plane $\mathbb{R}^2$. The procedure we use is a generalization to 2-D of the method of central corrections for logarithmic singularities described in [1]. As in 1-D, the correction coefficients are independent of the number of sampling points used to discretize the square D. When v has compact support contained in D, the approximation is the trapezoidal rule plus a local weighted sum of the values of v around the point of singularity. These quadrature rules give an efficient, stable, and accurate way of approximating J(v). We provide the correction coefficients to obtain corrected trapezoidal quadrature rules up to order 20
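
A minimal uncorrected baseline in R helps show what the correction coefficients buy: the punctured trapezoidal rule simply omits the singular node at the origin and converges slowly, and the paper's local weighted sum around (0,0) is what restores high order. A sketch with a toy integrand of our own (not the paper's coefficients):

    # Punctured trapezoidal rule for J(v) = int_D v(x,y) ln(sqrt(x^2+y^2)) dx dy,
    # with D = [-1/2, 1/2]^2, omitting the singular node at the origin.
    punctured_trap <- function(v, n) {           # n: nodes per side (odd, so 0 is a node)
      g <- seq(-0.5, 0.5, length.out = n)
      h <- g[2] - g[1]
      G <- expand.grid(x = g, y = g)
      r <- sqrt(G$x^2 + G$y^2)
      f <- ifelse(r == 0, 0, v(G$x, G$y) * log(r))  # drop the singular point
      sum(f) * h^2
    }
    v <- function(x, y) exp(-50 * (x^2 + y^2))   # smooth, effectively supported in D
    punctured_trap(v, 201)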

Abstract:

This paper addresses the problem of the optimal design of batch plants with imprecise demands in product amounts. The design of such plants necessarily involves the way equipment may be utilized, which means that plant scheduling and production must form an integral part of the design problem. This work relies on a previous study, which proposed an alternative treatment of the imprecision (demands) by introducing fuzzy concepts, embedded in a multi-objective Genetic Algorithm (GA) that takes into account simultaneously the maximization of the net present value (NPV) and two other performance criteria, i.e. the production delay/advance and a flexibility criterion. The results showed that an additional interpretation step might be necessary to help managers choose among the non-dominated solutions provided by the GA. The analytic hierarchy process (AHP) is a strategy commonly used in Operations Research for solving this kind of multicriteria decision problem, allowing managers' subjective judgments to be captured. The major aim of this study is thus to propose software integrating AHP theory for the analysis of the GA Pareto-optimal solutions, as an alternative decision-support tool for solving the batch plant design problem
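
The AHP step can be sketched compactly: priorities are taken as the normalized principal eigenvector of a pairwise-comparison matrix, and a consistency index guards against incoherent judgments. An illustrative R fragment (toy 3-criterion matrix of our choosing, not the paper's data):

    # AHP priority weights from a pairwise comparison matrix A (Saaty scale).
    A <- matrix(c(1,   3,   5,
                  1/3, 1,   3,
                  1/5, 1/3, 1), nrow = 3, byrow = TRUE)
    e <- eigen(A)
    w <- Re(e$vectors[, 1]); w <- w / sum(w)            # priority vector
    CI <- (Re(e$values[1]) - nrow(A)) / (nrow(A) - 1)   # consistency index
    w; CI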

Abstract:

Nanoliposomes, bilayer vesicles at the nanoscale, are becoming popular because of their safety, patient compliance, high entrapment efficiency, and prompt action. Several notable biological activities of natural essential oils (EOs), including fungal inhibition, are of supreme interest. Multi-compositional nanoliposomes loaded with various concentrations of clove essential oil (CEO) and tea tree oil (TTO) were developed and thoroughly characterized to gain insight into their nano-size distribution. The present work also aimed to explore sustainable synthesis conditions so as to estimate the efficacy of EOs in bulk and of EO-loaded nanoliposomes with multi-functional entities. Following a detailed nano-size characterization of the in-house fabricated EO-loaded nanoliposomes, the antifungal efficacy was tested by executing the mycelial growth inhibition (MGI) test using Trichophyton rubrum fungi as a test model. The dynamic light scattering (DLS) profile of the as-fabricated EO-loaded nanoliposomes revealed mean size, polydispersity index (PdI), and zeta potential values of 37.12 ± 1.23 nm, 0.377 ± 0.007, and −36.94 ± 0.36 mV, respectively. The sphere-shaped morphology of CEO- and TTO-loaded nanoliposomes was confirmed by scanning electron microscopy (SEM). The existence of characteristic functional bands in all tested counterparts was demonstrated by attenuated total reflection-Fourier transform infrared (ATR-FTIR) spectroscopy. Compared to TTO-loaded nanoliposomes, the CEO-loaded nanoliposomes exhibited a maximum entrapment efficiency of 91.57 ± 2.5%. The CEO-loaded nanoliposome fraction, prepared using a 1.5 µL/mL concentration, showed the highest MGI of 98.4 ± 0.87% against T. rubrum strains compared to the rest of the formulations

Abstract:

Modern lifestyle demands high-end commodities, for instance, cosmetics, detergents, shampoos, household cleaning and sanitary items, medicines, and so forth. In recent years, consumption of these products has increased considerably, notably of antibiotics and other pharmaceutical and personal care products (PPCPs). Several antibiotics and PPCPs represent a wide range of emerging contaminants with straight ingress into aquatic systems, given their high persistence in seawater, effluent treatment plants, and even drinking water. Under these considerations, developing new and affordable technologies for the treatment and sustainable mitigation of pollutants is highly requisite for a safer and cleaner environment. One possible mitigation solution is the effective deployment of nanotechnological cues as promising matrices that can contribute by addressing issues and improving current strategies to detect, prevent, and mitigate hazardous pollutants in water. Given nanoparticles' distinctive physical and chemical properties, such as high surface area, small size, and shape, metallic nanoparticles (MNPs) have been investigated for water remediation. MNPs have gained increasing interest among research groups due to their superior efficiency, stability, and high catalytic activity compared with conventional systems. This review summarizes the occurrence of antibiotics and PPCPs and the application of MNPs as pollutant mitigators in the aquatic environment. The work also focuses on transportation fate, toxicity, and current regulations for environmental safety

Abstract:

The development of greener nano-constructs with noteworthy biological activity is of supreme interest as a robust choice to minimize the extensive use of synthetic drugs. Essential oils (EOs) and their constituents offer medicinal potential because of their extensive biological activity, including the inhibition of fungal species. However, their application as natural antifungal agents is limited due to their volatility, low stability, and restricted administration routes. Nanotechnology is receiving particular attention to overcome the drawbacks of EOs, such as volatility, degradation, and high sensitivity to environmental/external factors. For the aforementioned reasons, nanoencapsulation of bioactive compounds, for instance EOs, facilitates protection and controlled-release attributes. Nanoliposomes are bilayer vesicles, at the nanoscale, composed of phospholipids, which can encapsulate hydrophilic and hydrophobic compounds. Considering the above, herein we report the in-house fabrication and nano-size characterization of bioactive oregano essential oil (Origanum vulgare L.) (OEO) molecules loaded into small unilamellar vesicle (SUV) nanoliposomes. The study focused on three main points: (1) multi-compositional fabrication of nanoliposomes using a thin-film hydration-sonication method; (2) nano-size characterization using various analytical and imaging techniques; and (3) antifungal efficacy of the as-developed OEO nanoliposomes against Trichophyton rubrum (T. rubrum), assessed by performing the mycelial growth inhibition (MGI) test

Abstract:

The necessity to develop more efficient, biocompatible, patient-compliant, and safer treatments in biomedical settings is receiving special attention, with nanotechnology as a potential platform to design new drug delivery systems (DDS). Despite the broad range of nanocarrier systems in drug delivery, lack of biocompatibility, poor penetration, low entrapment efficiency, and toxicity are significant challenges that remain to be addressed. Such practices are even more demanding when bioactive agents are intended to be loaded onto a nanocarrier system, especially for topical treatment purposes. For the aforesaid reasons, the search for more efficient nano-vesicular systems, such as nanoliposomes, with a high biocompatibility index and controlled release has increased considerably in the past few decades. Owing to the stratum corneum barrier of the skin, conventional drug delivery methods in current practice are inefficient, and the effect of the administered therapeutic cues is limited. Current advancement at the nanoscale has transformed the drug delivery sector. Nanoliposomes, as robust nanocarriers, are becoming popular for biomedical applications because of their safety, patient compliance, and quick action. Herein, we review state-of-the-art nanoliposomes as a smart and sophisticated drug delivery approach. Following a brief introduction, the drug delivery mechanism of nanoliposomes is discussed with suitable examples for the treatment of numerous diseases, with a brief emphasis on fungal infections. The latter half of the work is focused on the applied perspective and clinical translation of nanoliposomes. Furthermore, a detailed overview of clinical applications and future perspectives is included in this review

Abstract:

A workflow is a set of steps or tasks that model the execution of a process, e.g., protein annotation, invoice generation and composition of astronomical images. Workflow applications commonly require large computational resources. Hence, distributed computing approaches (such as Grid and Cloud computing) emerge as a feasible solution to execute them. Two important factors for executing workflows in distributed computing platforms are (1) workflow scheduling and (2) resource allocation. As a consequence, there is a myriad of workflow scheduling algorithms that map workflow tasks to distributed resources subject to task dependencies, time and budget constraints. In this paper, we present a taxonomy of workflow scheduling algorithms, which categorizes the algorithms into (1) best-effort algorithms (including heuristics, metaheuristics, and approximation algorithms) and (2) quality-of-service algorithms (including budget-constrained, deadline-constrained and algorithms simultaneously constrained by deadline and budget). In addition, a workflow engine simulator was developed to quantitatively compare the performance of scheduling algorithms

Summary:

Today, sustainability-related risks and opportunities arise from an entity's dependence on the resources it needs in order to operate, as well as from the impact it has on those resources, and they can therefore have several accounting impacts on the determination of financial information. Thus, both the information contained in an entity's financial statements and the sustainability-related disclosures that accompany the financial information are essential inputs for a user assessing the entity's value. These and other matters are addressed in a clear and approachable way in Impactos contables de acuerdo con las NIF para el cierre de estados financieros 2022, a book useful for preparing and presenting financial information for fiscal year 2022. It is therefore indispensable for independent accountants and for accountants in companies of any size, and it is also aimed at, and of great help to, accounting firm staff, instructors, and students at both the undergraduate and graduate levels, as well as preparers of financial information, business owners, and the general public

Abstract:

We study the behavior of a decision maker who prefers alternative x to alternative y in menu A if the utility of x exceeds that of y by at least a threshold associated with y and A. Hence the decision maker's preferences are given by menu-dependent interval orders. In every menu, her choice set comprises the undominated alternatives according to this preference. We axiomatize this broad model when thresholds are monotone, i.e., at least as large in larger menus. We also obtain novel characterizations in two special cases that have appeared in the literature: the maximization of a fixed interval order where the thresholds depend on the alternative and not on the menu, and the maximization of monotone semiorders where the thresholds are independent of the alternatives but monotonic in menus

Abstract:

Given a scattered space $X = (X, \tau)$ and an ordinal $\lambda$, we define a topology $\tau_{+\lambda}$ in such a way that $\tau_{+0} = \tau$ and, when $X$ is an ordinal with the initial segment topology, the resulting sequence $\{\tau_{+\lambda}\}_{\lambda \in \mathrm{Ord}}$ coincides with the family of topologies $\{\mathcal{I}_\lambda\}_{\lambda \in \mathrm{Ord}}$ used by Icard, Joosten, and the second author to provide semantics for polymodal provability logics. We prove that given any scattered space $X$ of large-enough rank and any ordinal $\lambda > 0$, GL is strongly complete for $\tau_{+\lambda}$. The special case where $X = \omega^\omega + 1$ and $\lambda = 1$ yields a strengthening of a theorem of Abashidze and Blass

Abstract:

We introduce verification logic, a variant of Artemov's logic of proofs with new terms of the form $\ulcorner\varphi\urcorner$ satisfying the axiom schema $\varphi \to \ulcorner\varphi\urcorner{:}\varphi$. The intention is for $\ulcorner\varphi\urcorner$ to denote a proof of $\varphi$ in Peano arithmetic, whenever such a proof exists. By a suitable restriction of the domain of $\ulcorner\cdot\urcorner$, we obtain the verification logic VS5, which realizes the axioms of Lewis' system S5. Our main result is that VS5 is sound and complete for its arithmetical interpretation

Abstract:

This note examines evidence of non-fundamentalness in the rate of variation of annual per capita capital stock for OECD countries in the period 1955–2020. Leeper et al. (2013) proposed a theoretical model in which, due to agents performing fiscal foresight, this economic series could exhibit a non-fundamental behavior (in particular, a non-invertible moving average component), which has important implications for modeling and forecasting. Using the methodology proposed in Velasco and Lobato (2018), which delivers consistent estimators of the autoregressive and moving average parameters without imposing fundamentalness assumptions, we empirically examine whether the capital data are better represented with an invertible or a non-invertible moving average model. We find strong evidence in favor of the non-invertible representation since for the countries that present significant innovation asymmetry, the selected model is predominantly non-invertible
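
A textbook example of the phenomenon at stake (ours, not from the paper): the MA(1) process

\[ y_t = \varepsilon_t + \theta\, \varepsilon_{t-1} \]

is invertible (fundamental) when $|\theta| < 1$ and non-invertible when $|\theta| > 1$; the parameters $\theta$ and $1/\theta$ generate the same autocovariances, so a Gaussian likelihood cannot distinguish them, which is why identification in the Velasco and Lobato (2018) approach leans on higher-order information such as innovation asymmetry.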

Abstract:

Shelf life experiments have as an outcome a matrix of zeroes and ones that represents the acceptance or non-acceptance by customers when presented with samples of the product under evaluation in a random fashion within a designed experiment. This kind of response is called a Bernoulli response due to the dichotomous nature (0,1) of its values. It is not rare to find inconsistent sequences of responses, that is, when a customer rejects a less aged sample and does not reject an older sample; that is, we find a zero before a one. This is due to the human factor present in the experiment. In the presence of this kind of inconsistency, some conventions have been adopted in the literature in order to estimate the shelf life distribution using methods and software from the reliability field, which require numerical responses. In this work we propose a method that does not require coding the original responses into numerical values. We use a more reliable coding by using the Bernoulli response directly and adopting a Bayesian approach. The resulting method is based on solid Bayesian theory and proven computer programs. We show by means of an example and simulation studies that the new methodology clearly beats the methodology proposed by Hough. We also provide the R software necessary for the implementation
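
A minimal sketch of the kind of computation involved (our own illustrative model and data, not the authors' code): treat each response as Bernoulli with rejection probability increasing in sample age through a logistic curve, compute the posterior on a grid, and summarize shelf life from the posterior of the age at 50% rejection.

    # Bayesian shelf-life sketch: Bernoulli responses, logistic aging model.
    age <- c(7, 7, 14, 14, 21, 21, 28, 28, 35, 35)   # days (illustrative data)
    rej <- c(0, 0,  0,  1,  0,  1,  1,  1,  1,  1)   # 1 = customer rejects
    a <- seq(-15, 0, length.out = 300)               # intercept grid
    b <- seq(0.01, 1, length.out = 300)              # slope grid (rejection grows with age)
    post <- outer(seq_along(a), seq_along(b), Vectorize(function(i, j) {
      p <- plogis(a[i] + b[j] * age)                 # P(reject | age)
      exp(sum(dbinom(rej, 1, p, log = TRUE)))        # flat prior over the grid
    }))
    post <- post / sum(post)
    t50 <- -outer(a, b, "/")                         # age at 50% rejection: -a/b
    c(posterior_mean_shelf_life = sum(t50 * post))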

Abstract:

Definitive Screening Designs (DSDs) are a class of experimental designs that offer the possibility to estimate linear, quadratic and interaction effects with relatively little experimental effort. The linear or main effects are completely independent of two-factor interactions and quadratic effects. The two-factor interactions are not completely confounded with other two-factor interactions, and quadratic effects are estimable. The number of experimental runs is twice the number of factors of interest plus one. Several approaches have been proposed to analyze the results of these experimental plans; some of these approaches take into account the structure of the design, others do not. The first author of this paper proposed a Bayesian sequential procedure that takes into account the structure of the design and considers normal and non-normal responses. The creators of the DSD originally performed a forward stepwise regression programmed in JMP and also used the minimization of a bias-corrected version of Akaike's information criterion, and later they proposed a frequentist procedure that considers the structure of the DSD. Both the frequentist and Bayesian procedures, when the number of experimental runs is twice the number of factors of interest plus one, use as an initial step fitting a model with only main effects and then check the significance of these effects to proceed. In this paper we present a modification of the Bayesian procedure that incorporates Bayesian factor identification, an approach that computes, for each factor, the posterior probability that it is active, including the possibility that it is present in linear, quadratic or two-factor interaction terms. This is a more comprehensive approach than just testing the significance of an effect

Summary:

A statistical package with a graphical interface for the Bayesian sequential analysis of definitive screening designs

Abstract:

Definitive Screening Designs are a class of experimental designs that, under factor sparsity, have the potential to estimate linear, quadratic and interaction effects with little experimental effort. BAYESDEF is a package that performs a five-step strategy to analyze this kind of experiment, making use of tools coming from the Bayesian approach. It also includes the least absolute shrinkage and selection operator (lasso) as a check (Aguirre VM, 2016)

Abstract:

With the advent of widespread computing and the availability of open source programs to perform many different programming tasks, there is nowadays a trend in Statistics to program tailor-made applications for non-statistical customers in various areas. This is an alternative to having a large statistical package with many functions, many of which are never used. In this article, we present CONS, an R package dedicated to Consonance Analysis. Consonance Analysis is a useful numerical and graphical exploratory approach for evaluating the consistency of the measurements and of the panel of people involved in sensory evaluation. It makes use of several univariate and multivariate techniques, either graphical or analytical, particularly Principal Components Analysis. The package is implemented with a graphical user interface in order to provide a user-friendly package

Abstract:

Definitive screening designs (DSDs) are a class of experimental designs that allow the estimation of linear, quadratic, and interaction effects with little experimental effort if there is effect sparsity. The number of experimental runs is twice the number of factors of interest plus one. Many industrial experiments involve nonnormal responses. Generalized linear models (GLMs) are a useful alternative for analyzing these kind of data. The analysis of GLMs is based on asymptotic theory, something very debatable, for example, in the case of the DSD with only 13 experimental runs. So far, analysis of DSDs considers a normal response. In this work, we show a five-step strategy that makes use of tools coming from the Bayesian approach to analyze this kind of experiment when the response is nonnormal. We consider the case of binomial, gamma, and Poisson responses without having to resort to asymptotic approximations. We use posterior odds that effects are active and posterior probability intervals for the effects and use them to evaluate the significance of the effects. We also combine the results of the Bayesian procedure with the lasso estimation procedure to enhance the scope of the method

Abstract:

It is not uncommon to deal with very small experiments in practice. For example, if the experiment is conducted on the production process, it is likely that only a very few experimental runs will be allowed. If testing involves the destruction of expensive experimental units, we might only have very small fractions as experimental plans. In this paper, we will consider the analysis of very small factorial experiments with only four or eight experimental runs. In addition, the methods presented here could be easily applied to larger experiments. A Daniel plot of the effects to judge significance may be useless for this type of situation. Instead, we will use different tools based on the Bayesian approach to judge significance. The first tool consists of the computation of the posterior probability that each effect is significant. The second tool is referred to in Bayesian analysis as the posterior distribution for each effect. Combining these tools with the Daniel plot gives us more elements to judge the signiicance of an effect. Because, in practice, the response may not necessarily be normally distributed, we will extend our approach to the generalized linear model setup. By simulation, we will show that not only in the case of discrete responses and very small experiments, the usual large sample approach for modeling generalized linear models may produce a very biased and variable estimators, but also that the Bayesian approach provides a very sensible results

Abstract:

Inference for quantile regression parameters presents two problems. First, it is computationally costly because estimation requires optimising a non-differentiable objective function which is a formidable numerical task, specially with many number of observations and regressors. Second, it is controversial because standard asymptotic inference requires the choice of smoothing parameters and different choices may lead to different conclusions. Bootstrap methods solve the latter problem at the price of enlarging the former. We give a theoretical justification for a new inference method consisting of the construction of asymptotic pivots based on a small number of bootstrap replications. The procedure still avoids smoothing and reduces usual bootstrap methods’ computational cost. We show its usefulness to draw inferences on linear or non-linear functions of the parameters of quantile regression models

Abstract:

Repeatability and reproducibility (R&R) studies can be used to pinpoint the parts of a measurement system that might need improvement. By using simulation, there is practically no difference in using five or 10 parts in many R&R studies

Abstract:

The existing methods for analyzing unreplicated fractional factorial experiments that do not contemplate the possibility of outliers in the data have a poor performance for detecting the active effects when that contingency becomes a reality. There are some methods to detect active effects under this experimental setup that consider outliers. We propose a new procedure based on robust regression methods to estimate the effects that allows for outliers. We perform a simulation study to compare its behavior relative to existing methods and find that the new method has a very competitive or even better power. The relative power improves as the contamination and size of outliers increase when the number of active effects is up to four

Abstract:

The paper presents the asymptotic theory of the efficient method of moments when the model of interest is not correctly specified. The paper assumes a sequence of independent and identically distributed observations and a global misspecification. It is found that the limiting distribution of the estimator is still asymptotically normal, but it suffers a strong impact in the covariance matrix. A consistent estimator of this covariance matrix is provided. The large sample distribution on the estimated moment function is also obtained. These results are used to discuss the situation when the moment conditions hold but the model is misspecified. It also is shown that the overidentifying restrictions test has asymptotic power one whenever the limit moment function is different from zero. It is also proved that the bootstrap distributions converge almost surely to the previously mentioned distributions and hence they could be used as an alternative to draw inferences under misspecification. Interestingly, it is also shown that bootstrap can be reliably applied even if the number of bootstrap replications is very small

Abstract:

It is well known that outliers or faulty observations affect the analysis of unreplicated factorial experiments. This work proposes a method that combines the rank transformation of the observations, the Daniel plot and a formal statistical testing procedure to assess the significance of the effects. It is shown, by means of previous theoretical results cited in the literature, examples and a Monte Carlo study, that the approach is helpful in the presence of outlying observations. The simulation study includes an ample set of alternative procedures that have been published in the literature to detect significant effects in unreplicated experiments. The Monte Carlo study also, gives evidence that using the rank transformation as proposed, provides two advantages: keeps control of the experimentwise error rate and improves the relative power to detect active factors in the presence of outlying observations

Abstract:

Most of the inferential results are based on the assumption that the user has a "random" sample, by this it is usually understood that the observations are a realization from a set of independent identically distributed random variables. However most of the time this is not true mainly for two reasons: one, the data are not obtained by means of a probabilistic sampling scheme from the population, the data are just gathered as they becomes available or in the best of the cases using some kind of control variables and quota sampling.; and second, even if a probabilistic scheme is used, the sample design is complex in the sense that it was not simple random sampling with replacement, but instead some sort of stratification or clustering or a combination of both was required. For an excellent discussion about the kind of considerations that should be made in the first situation see Hahn and Meeker (1993) and a related comment in Aguirre (1994). For the second problem there is a book about the topic in Skinner et a1.(1989). In this paper we consider the problem of evaluating the effect of sampling complexity on Pearson's Chi-square and other alternative tests for goodness of fit for proportions. Work on this problem can be found in Shuster and Downing (1976), Rao and Scott (1974), Fellegi (1980), Holt et al. (1980), Rao and Scott (1981), and Thomas and Rao (1987). Out of this work come up several adjustments to Pearson's test, namely: Wald type tests, average eigenvalue correction and Satterthwaite type correction. There is a more recent and general resampling approach given in Sitter (1992), but it was not pursued in this study

Abstract:

Sometimes data analysis using the usual parametric techniques produces misleading results due to violations of the underlying assumptions, such as outliers or non-constant variances. In particular, this could happen in unreplicated factorial or fractional factorial experiments. To help in this situation alternative analyses have been proposed. For example Box and Meyer give a Bayesian analysis allowing for possibly faulty observations in un replicated factorials and the well known Box-Cox transformation can be used when there is a change in dispersion. This paper presents an analysis based on the rank transformation that deals with the above problems. The analysis is simple to use and can be implemented with a general purpose statistical computer package. The procedure is illustrated with examples from the literature. A theoretical justification is outlined at the end of the paper

Abstract:

The article considers the problem of choosing between two (possibly) nonlinear models that have been fitted to the same data using M-estimation methods. An asymptotically normally distributed lest statistics using a Monte Carlo study. We found that the presence of a competitive model either in the null or the alternative hypothesis affects the distributional properties of the tests, and that in the case that the data contains outlying observations the new procedure had a significantly higher power that the rest of the test

Abstract:

Fuller (1976), Anderson (1971), and Hannan (1970) introduce infinite moving average models as the limit in the quadratic mean of a sequence of partial sums, and Fuller (1976) shows that if the assumption of independence of the addends is made then the limit almost surely holds. This note shows that without the assumption of independence, the limit holds with probability one. Moreover, the proofs given here are easier to teach

Abstract:

A test for the problem or choosing between several nonnested nonlinear regression models simultaneously is presented. The test does not require an explicit specification of a parametric family of distributions for the error term and has a closed form

Abstract:

The asymptotic dislribution of the generalized Cox test for choosing between two multivariate, nonlinear regression models in implicit form is derived. The data is assumed to be generated by a model that need not be either the null or the non-null model. As the data-generating model is not subjected to a Pitman drift the analysis is global, not local, and provides a fairly complete qualitative descriptíon of the power characteristics or the generalized Cox test. Some investigations of these characteristics are included. A new test statistic is introduced that does not requíre an explicit specification of the error distributíon of the null model. The idea is to replace an analytical computation of the expectation of the Cox difference with a bootstrap estimate. The null dístributíon of this new test is derived

Abstract:

A great deal of research has investigated how various aspects of ethnic identity influence consumer behavior, yet this literature is fragmented. The objective of this article was to present an integrative theoretical model of how individuals are motivated to think and act in a manner consistent with their salient ethnic identities. The model emerges from a review of social science and consumer research about US Hispanics, but researchers could apply it in its general form and/or adapt it to other populations. Our model extends Oyserman's (Journal of Consumer Psychology, 19, 250) identity-based motivation (IBM) model by differentiating between two types of antecedents of ethnic identity salience: longitudinal cultural processes and situational activation by contextual cues, each with different implications for the availability and accessibility of ethnic cultural knowledge. We provide new insights by introducing three ethnic identity motives that are unique to ethnic (nonmajority) cultural groups: belonging, distinctiveness, and defense. These three motives are in constant tension with one another and guide longitudinal processes like acculturation, and ultimately influence consumers' procedural readiness and action readiness. Our integrative framework organizes and offers insights into the current body of Hispanic consumer research, and highlights gaps in the literature that present opportunities for future research

Abstract:

In many Solvency and Basel loss data, there are thresholds or deductibles that affect the analysis capability. On the other hand, the Birnbaum-Saunders model has received great attention during the last two decades and it can be used as a loss distribution. In this paper, we propose a solution to the problem of deductibles using a truncated version of the Birnbaum-Saunders distribution. The probability density function, cumulative distribution function, and moments of this distribution are obtained. In addition, properties regularly used in insurance industry, such as multiplication by a constant (inflation effect) and reciprocal transformation, are discussed. Furthermore, a study of the behavior of the risk rate and of risk measures is carried out. Moreover, estimation aspects are also considered in this work. Finally, an application based on real loss data from a commercial bank is conducted

Abstract:

This paper proposes two new estimators for determining the number of factors (r) in static approximate factor models. We exploit the well-known fact that the r largest eigenvalues of the variance matrix of N response variables grow unboundedly as N increases, while the other eigenvalues remain bounded. The new estimators are obtained simply by maximizing the ratio of two adjacent eigenvalues. Our simulation results provide promising evidence for the two estimators

Abstract:

We study a modification of the Luce rule for stochastic choice which admits the possibility of zero probabilities. In any given menu, the decision maker uses the Luce rule on a consideration set, potentially a strict subset of the menu. Without imposing any structure on how the consideration sets are formed, we characterize the resulting behavior using a single axiom. Our result offers insight into special cases where consideration sets are formed under various restrictions

Abstract:

Purpose– This paper summarizes the findings of a research project aimed at benchmarking the environmental sustainability practices of the top 500 Mexican companies. Design/methodology/approach– The paper surveyed the firms with regard to various aspects of their adoption of environmental sustainability practices, including who or what prompted adoption, future adoption plans, decision-making responsibility, and internal/external challenges. The survey also explored how the adoption of environmental sustainability practices relates to the competitiveness of these firms. Findings– The results suggest that Mexican companies are very active in the various areas of business where environmental sustainability is relevant. Not surprisingly, however, the Mexican companies are seen to be at an early stage of development along the sustainability “learning curve”. Research limitations/implications– The sample consisted of 103 self-selected firms representing the six primary business sectors in the Mexican economy. Because the manufacturing sector is significantly overrepresented in the sample and because of its importance in addressing issues of environmental sustainability, when appropriate, specific results for this sector are reported and contrasted to the overall sample. Practical implications– The vast majority of these firms see adopting environmental sustainability practices as being profitable and think this will be even more important in the future. Originality/value– Improving the environmental performance of business firms through the adoption of sustainability practices is compatible with competitiveness and improved financial performance. In Mexico, one might expect that the same would be true, but only anecdotal evidence was heretofore available

Abstract:

We analyze an environment where the uncertainty in the equity market return and its volatility are both stochastic and may be potentially disconnected. We solve a representative investor's optimal asset allocation and derive the resulting conditional equity premium and risk-free rate in equilibrium. Our empirical analysis shows that the equity premium appears to be earned for facing uncertainty, especially high uncertainty that is disconnected from lower volatility, rather than for facing volatility as traditionally assumed. Incorporating the possibility of a disconnect between volatility and uncertainty significantly improves portfolio performance, over and above the performance obtained by conditioning on volatility only

Abstract:

We study the consumption-portfolio allocation problem in continuous time when asset prices follow Lévy processes and the investor is concerned about potential model misspecification. We derive optimal consumption and portfolio policies that are robust to uncertainty about the hard-to-estimate drift rate, jump intensity and jump size parameters. We also provide a semi-closed form formula for the detection-error probability and compare various portfolio holding strategies, including robust and non-robust policies. Our quantitative analysis shows that ignoring uncertainty leads to significant wealth loss for the investor

Abstract:

We exploit the manifold increase in homicides in 2008–11 in Mexico resulting from its war on organized drug traffickers to estimate the effect of drug-related homicides on housing prices. We use an unusually rich data set that provides national coverage of housing prices and homicides and exploits within-municipality variations. We find that the impact of violence on housing prices is borne entirely by the poor sectors of the population. An increase in homicides equivalent to 1 standard deviation leads to a 3 percent decrease in the price of low-income housing

Abstract:

This paper examines foreign direct investment (FDI) in the Hungarian economy in the period of post-Communist transition since 1989. Hungary took a quite aggressive approach in welcoming foreign investment during this period and as a result had the highest per capita FDI in the region as of 2001. We discuss the impact of FDI in terms of strategic intent, i.e., market serving and resource seeking FDI. The effect of these two kinds of FDI is contrasted by examining the impact of resource seeking FDI in manufacturing sectors and market serving FDI in service industries. In the case of transition economies, we argue that due to the strategic intent, resource seeking FDI can imply a short-term impact on economic development whereas market serving FDI strategically implies a long-term presence with increased benefits for the economic development of a transition economy. Our focus is that of market serving FDI in the Hungarian banking sector, which has brought improved service and products to multinational and Hungarian firms. This has been accompanied by the introduction of innovative financial products to the Hungarian consumer, in particular consumer credit including mortgage financing. However, the latter remains an underserved segment with much growth potential. For public policy in Hungary and other transition economies, we conclude that policymakers should consider the strategic intent of FDI in order to maximize its benefits in their economies

Abstract:

We propose a general framework for extracting rotation invariant features from images for the tasks of image analysis and classification. Our framework is inspired in the form of the Zernike set of orthogonal functions. It provides a way to use a set of one-dimensional functions to form an orthogonal set over the unit disk by non-linearly scaling its domain, and then associating it an exponential term. When the images are projected into the subspace created with the proposed framework, the rotations in the image affect only the exponential term while the value of the orthogonal functions serve as rotation invariant features. We exemplify our framework using the Haar wavelet functions to extract features from several thousand images of symbols. We then use the features in an OCR experiment to demonstrate the robustness of the method

Abstract:

In this paper we explore the use of orthogonal functions as generators of representative, compact descriptors of image content. In Image Analysis and Pattern Recognition such descriptors are referred to as image features, and there are some useful properties they should possess such as rotation invariance and the capacity to identify different instances of one class of images. We exemplify our algorithmic methodology using the family of Daubechies wavelets, since they form an orthogonal function set. We benchmark the quality of the image features generated by doing a comparative OCR experiment with three different sets of image features. Our algorithm can use a wide variety of orthogonal functions to generate rotation invariant features, thus providing the flexibility to identify sets of image features that are best suited for the recognition of different classes of images

Abstract:

The COVID-19 pandemic triggered a huge, sudden uptake in work from home, as individuals and organizations responded to contagion fears and government restrictions on commercial and social activities (Adams-Prassl et al. 2020; Bartik et al. 2020; Barrero et al. 2020; De Fraja et al. 2021). Over time, it has become evident that the big shift to work from home will endure after the pandemic ends (Barrero et al. 2021). No other episode in modern history involves such a pronounced and widespread shift in working arrangements in such a compressed time frame. The Industrial Revolution and the later shift away from factory jobs brought greater changes in skill requirements and business operations, but they unfolded over many decades. These facts prompt some questions: What explains the pandemic's role as catalyst for a lasting uptake in work from home (WFH)? When looking across countries and regions, have differences in pandemic severity and the stringency of government lockdowns had lasting effects on WFH levels? What does a large, lasting shift to remote work portend for workers? Finally, how might the big shift to remote work affect the pace of innovation and the fortunes of cities?

Abstract:

The pandemic triggered a large, lasting shift to work from home (WFH). To study this shift, we survey full-time workers who finished primary school in twenty-seven countries as of mid-2021 and early 2022. Our crosscountry comparisons control for age, gender, education, and industry and treat the United States mean as the baseline. We find, first, that WFH averages 1.5 days per week in our sample, ranging widely across countries. Second, employers plan an average of 0.7 WFH days per week after the pandemic, but workers want 1.7 days. Third, employees value the option to WFH two to three days per week at 5 percent of pay, on average, with higher valuations for women, people with children, and those with longer commutes. Fourth, most employees were favorably surprised by their WFH productivity during the pandemic. Fifth, looking across individuals, employer plans for WFH levels after the pandemic rise strongly with WFH productivity surprises during the pandemic. Sixth, looking across countries, planned WFH levels rise with the cumulative stringency of government-mandated lockdowns during the pandemic. We draw on these results to explain the big shift to WFH and to consider some implications for workers, organization, cities, and the pace of innovation

Resumen:

This paper presents evidence of large learning losses and partial recovery in Guanajuato, Mexico, during and after the school closures related to the COVID-19 pandemic. Learning losses were estimated using administrative data from enrollment records and by comparing the results of a census-based standardized test administered to approximately 20,000 5th and 6th graders in: (a) March 2020 (a few weeks before school closed); (b) November 2021 (2 months after schools reopened); and (c) June of 2023 (21 months after schools re-opened and over three years after the pandemic started). On average, students performed 0.2 to 0.3 standard deviations lower in Spanish and math after schools reopened, equivalent to 0.66 to 0.87 years of schooling in Spanish and 0.87 to 1.05 years of schooling in math. By June of 2023, students were able to make up for -60% of the learning loss that built up during school closures but still scored 0.08–0.11 standard deviations below their pre-pandemic levels (equivalent to 0.23–0.36 years of schooling)

Abstract:

In this work, we propose a hybrid AI system consisting of a multi-agent system simulating students during an exam and a teacher monitoring them, as well as an evolutionary algorithm that finds classroom arrangements which minimize cheating incentives. The students will answer the exam based on how much knowledge they have about the topic of the exam. In our simulation, then they enter a decision phase in which, for those questions they don't know the answer to, they will either cheat or answer by guessing. If a student gets caught cheating, his/her exam will be cancelled. The purpose of this study is to examine the question of how different monitoring behaviors on the part of the teacher affect the cheating behaviors of students. The results of this study show that an unbiased teacher, that is, a teacher that monitors every student with the same probability, produces minimal cheating incentives for students

Abstract:

When analyzing catastrophic risk, traditional measures for evaluating risk, such as the probable maximum loss (PML), value at risk (VaR), tail-VaR, and others, can become practically impossible to obtain analytically in certain types of insurance, such as earthquake, and certain types of reinsurance arrangements, specially non-proportional with reinstatements. Given the available information, it can be very difficult for an insurer to measure its risk exposure. The transfer of risk in this type of insurance is usually done through reinsurance schemes combining diverse types of contracts that can greatly reduce the extreme tail of the cedant’s loss distribution. This effect can be assessed mathematically. The PML is defined in terms of a very extreme quantile. Also, under standard operating conditions, insurers use several “layers” of non proportional reinsurance that may or may not be combined with some type of proportional reinsurance. The resulting reinsurance structures will then be very complicated to analyze and to evaluate their mitigation or transfer effects analytically, so it may be necessary to use alternative approaches, such as Monte Carlo simulation methods. This is what we do in this paper in order to measure the effect of a complex reinsurance treaty on the risk profile of an insurance company. We compute the pure risk premium, PML as well as a host of results: impact on the insured portfolio, risk transfer effect of reinsurance programs, proportion of times reinsurance is exhausted, percentage of years it was necessary to use the contractual reinstatements, etc. Since the estimators of quantiles are known to be biased, we explore the alternative of using an Extreme Value approach to complement the analysis

Abstract:

Estimation of adequate reserves for outstanding claims is one of the main activities of actuaries in property/casualty insurance and a major topic in actuarial science. The need to estimate future claims has led to the development of many loss reserving techniques. There are two important problems that must be dealt with in the process of estimating reserves for outstanding claims: one is to determine an appropriate model for the claims process, and the other is to assess the degree of correlation among claim payments in different calendar and origin years. We approach both problems here. On the one hand we use a gamma distribution to model the claims process and, in addition, we allow the claims to be correlated. We follow a Bayesian approach for making inference with vague prior distributions. The methodology is illustrated with a real data set and compared with other standard methods

Abstract:

Consider a random sample X1, X2,. . ., Xn, from a normal population with unknown mean and standard deviation. Only the sample size, mean and range are recorded and it is necessary to estimate the unknown population mean and standard deviation. In this paper the estimation of the mean and standard deviation is made from a Bayesian perspective by using a Markov Chain Monte Carlo (MCMC) algorithm to simulate samples from the intractable joint posterior distribution of the mean and standard deviation. The proposed methodology is applied to simulated and real data. The real data refers to the sugar content (°BRIX level) of orange juice produced in different countries

Abstract:

This paper is concerned with the situation that occurs in claims reserving when there are negative values in the development triangle of incremental claim amounts. Typically these negative values will be the result of salvage recoveries, payments from third parties, total or partial cancellation of outstanding claims due to initial overestimation of the loss or to a possible favorable jury decision in favor of the insurer, rejection by the insurer, or just plain errors. Some of the traditional methods of claims reserving, such as the chain-ladder technique, may produce estimates of the reserves even when there are negative values. However, many methods can break down in the presence of enough (in number and/or size) negative incremental claims if certain constraints are not met. Historically the chain-ladder method has been used as a gold standard (benchmark) because of its generalized use and ease of application. A method that improves on the gold standard is one that can handle situations where there are many negative incremental claims and/or some of these are large. This paper presents a Bayesian model to consider negative incremental values, based on a three-parameter log-normal distribution. The model presented here allows the actuary to provide point estimates and measures of dispersion, as well as the complete distribution for outstanding claims from which the reserves can be derived. It is concluded that the method has a clear advantage over other existing methods. A Markov chain Monte Carlo simulation is applied using the package WinBUGS

Abstract:

The BMOM is particularly useful for obtaining post-data moments and densities for parameters and future observations when the form of the likelihood function is unknown and thus a traditional Bayesian approach cannot be used. Also, even when the form of the likelihood is assumed known, in time series problems it is sometimes difficult to formulate an appropriate prior density. Here, we show how the BMOM approach can be used in two, nontraditional problems. The first one is conditional forecasting in regression and time series autoregressive models. Specifically, it is shown that when forecasting disaggregated data (say quarterly data) and given aggregate constraints (say in terms of annual data) it is possible to apply a Bayesian approach to derive conditional forecasts in the multiple regression model. The types of constraints (conditioning) usually considered are that the sum, or the average, of the forecasts equals a given value. This kind of condition can be applied to forecasting quarterly values whose sum must be equal to a given annual value. Analogous results are obtained for AR(p) models. The second problem we analyse is the issue of aggregation and disaggregation of data in relation to predictive precision and modelling. Predictive densities are derived for future aggregate values by means of the BMOM based on a model for disaggregated data. They are then compared with those derived based on aggregated data

Resumen:

El problema de estimar el valor acumulado de una variable positiva y continua para la cual se ha observado una acumulación parcial, y generalmente con sólo un reducido número de observaciones (dos años), se puede llevar a cabo aprovechando la existencia de estacionalidad estable (de un periodo a otro). Por ejemplo, la cantidad por pronosticar puede ser el total de un periodo (año) y el cual debe hacerse en cuanto se obtiene información sólo para algunos subperiodos (meses) dados. Estas condiciones se presentan de manera natural en el pronóstico de las ventas estacionales de algunos productos ‘de temporada’, tales como juguetes; en el comportamiento de los inventarios de bienes cuya demanda varía estacionalmente, como los combustibles; o en algunos tipos de depósitos bancarios, entre otros. En este trabajo se analiza el problema en el contexto de muestreo por conglomerados. Se propone un estimador de razón para el total que se quiere pronosticar, bajo el supuesto de estacionalidad estable. Se presenta un estimador puntual y uno para la varianza del total. El método funciona bien cuando no es factible aplicar metodología estándar debido al reducido número de observaciones. Se incluyen algunos ejemplos reales, así como aplicaciones a datos publicados con anterioridad. Se hacen comparaciones con otros métodos

Abstract:

The problem of estimating the accumulated value of a positive and continuous variable for which some partially accumulated data has been observed, and usually with only a small number of observations (two years), can be approached taking advantage of the existence of stable seasonality (from one period to another). For example the quantity to be predicted may be the total for a period (year) and it needs to be made as soon as partial information becomes available for given subperiods (months). These conditions appear in a natural way in the prediction of seasonal sales of style goods, such as toys; in the behavior of inventories of goods where demand varies seasonally, such as fuels; or banking deposits, among many other examples. In this paper, the problem is addressed within a cluster sampling framework. A ratio estimator is proposed for the total value to be forecasted under the assumption of stable seasonality. Estimators are obtained for both the point forecast and the variance. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Some real examples are included as well as applications to some previously published data. Comparisons are made with other procedures

Abstract:

We present a Bayesian solution to forecasting a time series when few observations are available. The quantity to predict is the accumulated value of a positive, continuous variable when partially accumulated data are observed. These conditions appear naturally in predicting sales of style goods and coupon redemption. A simple model describes the relation between partial and total values, assuming stable seasonality. Exact analytic results are obtained for point forecasts and the posterior predictive distribution. Noninformative priors allow automatic implementation. The procedure works well when standard methods cannot be applied due to the reduced number of observations. Examples are provided

Resumen:

Este artículo evalúa algunos aspectos del Proyecto Piloto de Nutrición, Alimentación y Salud (PNAS). Describe brevemente el Proyecto y presenta las características de la población beneficiaria, luego profundiza en el problema de la pobreza y a partir de un índice se evalúa la selección de las comunidades beneficiadas por el Proyecto. Posteriormente se describe la metodología usada en el análisis costo-efectividad y se da el procedimiento para el cálculo de los cocientes del efecto que tuvo el PNAS específicamente en el gasto en alimentos. Por último, se presentan las conclusiones que, entre otros aspectos, arrojan que el efecto del PNAS en el gasto en alimentos de las familias indujo un incremento del gasto de 7.3% en la zona ixtlera y de 4.3% en la zona otomí-mazahua, con un costo de 29.9 nuevos pesos (de 1991) y de 40.9 para cada una de las zonas, respectivamente

Abstract:

An evaluation is made of some aspects of the Proyecto Piloto de Nutrición, Alimentación y Salud, a Pilot Program for Nutrition, Food and Health of the Mexican Government (PNAS). We give a brief description of the Project and characteristics of the target population. We then describe and use the FGT Index to determine if the communities included in the Project were correctly chosen. We describe the method of cost-effectiveness analysis used in this article. The procedure for specifying cost-effectiveness ratios is next presented, and their application to measure the impact of PNAS on Food Expenditures carried out. Finally we present empirical results that show that, among other results, PNAS increased Food Expenditures of the participating households by 7.3% in the Ixtlera Zone and by 4.3% in the Otomí Mazahua Zone, at a cost of N$29.9 (1991) and N$40.9 for each, respectively

Resumen:

Con frecuencia las instituciones financieras internacionales y los gobiernos locales se ven implicados en la implantación de programas de desarrollo. Existe amplia evidencia de que los mejores resultados se obtienen cuando la comunidad se compromete en la operación de los programas, es decir cuando existe participación comunitaria. La evidencia es principalmente cualitativa, pues no hay métodos para medir cuantitativamente esta participación. En este artículo se propone un procedimiento para generar un índice agregado de participación comunitaria. Está orientado de manera específica a medir el grado de participación comunitaria en la construcción de obras de beneficio colectivo. Para estimar los parámetros del modelo que se propone es necesario hacer algunos supuestos, debido a las limitaciones en la información. Se aplica el método a datos de comunidades que participaron en un proyecto piloto de nutrición-alimentación y salud que se llevó a cabo en México

Abstract:

There is ample evidence that the best results are obtained in development programs when the target community gets involved in their implementation and/or operation. The evidence is mostly qualitative, however, since there are no methods for measuring this participation quantitatively. In this paper we present a procedure for generating an aggregate index of community participation based on productivity. It is specifically aimed at measuring community participation in the construction of works for collective benefit. Because there are limitations on the information available, additional assumptions must be made to estimate parameters. The method is applied to data from communities in Mexico participating in a national nutrition, food and health program

Abstract:

A Bayesian approach is used to derive constrained and unconstrained forecasts in an autoregressive time series model. Both are obtained by formulating an AR(p) model in such a way that it is possible to compute numerically the predictive distribution for any number of forecasts. The types of constraints considered are that a linear combination of the forecasts equals a given value. This kind of restriction is applied to forecasting quarterly values whose sum must be equal to a given annual value. Constrained forecasts are generated by conditioning on the predictive distribution of unconstrained forecasts. The procedures are applied to the Quarterly GNP of Mexico, to a simulated series from an AR(4) process and to the Quarterly Unemployment Rate for the United States

Abstract:

The problem of temporal disaggregation of time series is analyzed by means of Bayesian methods. The disaggregated values are obtained through a posterior distribution derived by using a diffuse prior on the parameters. Further analysis is carried out assuming alternative conjugate priors. The means of the different posterior distribution are shown to be equivalent to some sampling theory results. Bayesian prediction intervals are obtained. Forecasts for future disaggregated values are derived assuming a conjugate prior for the future aggregated value

Abstract:

A formulation of the problem of detecting outliers as an empirical Bayes problem is studied. In so doing we encounter a non-standard empirical Bayes problem for which the notion of average risk asymptotic optimality (a.r.a.o.) of procedures is defined. Some general theorems giving sufficient conditions for a.r.a.o. procedures are developed. These general results are then used in various formulations of the outlier problem for underlying normal distributions to give a.r.a.o. empirical Bayes procedures. Rates of convergence results are also given using the methods of Johns and Van Ryzin

Resumen:

El texto examina cuáles son las características y rasgos del habla tanto de las mujeres como de los hombres; hace una valoración a partir de algunas de sus causas y concluye con una invitación a hacernos conscientes de la forma de expresarnos

Abstract:

This article examines the distinctive characteristics and features of how both women and men speak. Based on this analysis, the author will make an assessment, and then invite the reader to become aware of their manner of speaking

Resumen:

Inés Arredondo perteneció a la llamada Generación de Medio Siglo, particularmente, al grupo de intelectuales y artistas que fundaron e impulsaron las actividades culturales de la Casa del Lago durante los años sesenta. El artículo es una semblanza que da cuenta tanto de los hechos más importantes que marcaron la vida y la trayectoria intelectual de Inés Arredondo, como de los rasgos particulares (estéticos) que definen la obra narrativa de esta escritora excepcional

Abstract:

Inés Arredondo belonged to the so-called Mid-Century Generation, namely a group of intellectuals and artists that established and promoted Casa del Lago’s cultural activities in the Sixties. This article gives an account of the important events and intellectual journey that shaped the writer’s life particularly the esthetic characteristics that shaped the narrative work of this exceptional writer

Abstract:

Informality is a structural trait in emerging economies affecting the behavior of labor markets, financial access and economy-wide productivity. This paper develops a simple general equilibrium closed economy model with nominal rigidities, labor and financial frictions to analyze the transmission of shocks and of monetary policy. In the model, the informal sector provides a flexible margin of adjustment to the labor market at the cost of a lower productivity. In addition, only formal sector firms have access to financing, which is instrumental in their production process. In a quantitative version of the model calibrated to Mexican data, we find that informality: (i) dampens the impact of demand and financial shocks, as well as of technology shocks specific to the formal sector, on wages and inflation, but (ii) heightens the inflationary impact of aggregate technology shocks. The presence of an informal sector also increases the sacrifice ratio of monetary policy actions. From a Central Bank perspective, the results imply that informality mitigates inflation volatility for most type of shocks but makes monetary policy less effective

Abstract:

A topological space X is totally Brown if for each n E N \ {1} and every nonempty open subsets U1, U2, …, Un of X we have cl X (U1) Π cl X (U2) Π … Π cl x (Un) = θ. Totally Brown spaces are connected. In this paper we consider a topology ts on the set N of natural numbers. We then present properties of the topological space (N, ts) some of them involve the closure of a set with respect to this topology, while other describe subsets which are wither totally Brown or totally separated. Our theorems generalize results proved by P. Szczuka in 2013, 2014, 2016 and by P. Szyszkowska and M Szyszkowski in 2018

Resumen:

En el presente trabajo, estudiamos los espacios de Brown, que son conexos y no completamente de Hausdorff. Utilizando progresiones aritméticas, construimos una base BG para una topología TG de N, y mostramos que (N, TG), llamado el espacio de Golomb, es de Brown. También probamos que hay elementos de BG que son de Brown, mientras que otros están totalmente separados. Escribimos algunas consecuencias de este resultado. Por ejemplo, (N, TG) no es conexo en pequeño en ninguno de sus puntos. Esto generaliza un resultado probado por Kirch en 1969. También damos una prueba más simple de un resultado presentado por Szczuka en 2010

Abstract:

In the present paper we study Brown spaces which are connected and not completely Hausdorff. Using arithmetic progressions, we construct a base BG for a topology TG on N, and show that (N, TG), called the Golomb space is a Brown space. We also show that some elements of BG are Brown spaces, while others are totally separated. We write some consequences of such result. For example, the space (N, TG) is not connected "im kleinen" at each of its points. This generalizes a result proved by Kirchin 1969. We also present a simpler proof of a result given by Szczuka in 2010

Resumen:

El libro sintetiza la experiencia adquirida en cursos de Ordenadores y Programación impartidos por la autora durante tres años en la Licenciatura de Infórmática de la Facultad de Ciencias de la Universidad Autónoma de Barcelona

Resumen:

En recientes años se ha incrementado el interés en el desarrollo de nuevos materiales en este caso compositos, ya que estos materiales más avanzados pueden realizar mejor su trabajo que los materiales convencionales (K. Morsi, A. Esawi.,, 2006). En el presente trabajo se analiza el efecto de la adición de nanotubos de carbono incorporando nano partículas de plata para aumentar tanto sus propiedades eléctricas como mecánicas. La realización de aleaciones de Aluminio con nanotubos de carbono utilizando molienda de baja energía con una velocidad de 140 rpm y durante un periodo de 24 horas de molienda, partiendo de aluminio al 98% se realizó una aleación con 0.35 de nanotubos de carbono, la molienda se realizó para obtener una buena homogenización ya que la distribución afecta al comportamiento de las propiedades (Amirhossein Javadi, 2013), además de la reducción de partícula y finalmente la incorporación de nanotubos de carbono adicionando nanopartículas de plata por la reducción con borohidruro de sodio por medio de la punta ultrasónica, Las aleaciones obtenidas fueron caracterizadas por Microscopia electrónica de barrido (MEB), Análisis de Difracción de Rayos X, se realizaron pruebas de dureza y finalmente se realizaron pruebas de conductividad eléctrica

Abstract:

In recent years has increased interest in the development of new materials in this case composites, as these more advanced materials can perform their work better than conventional materials. (K. Morsi, A. Esawi.,, 2006). In the present work we analyze the effect of the addition of carbon nanotubes incorporating nano silver particles to increase both their electrical and mechanical properties. The performance of aluminum alloys with carbon nanotubes using low energy grinding with a speed of 140 rpm and during a period of 24 hours of grinding, starting from 98% aluminum, an alloy was made with 0.35 carbon nanotubes, grinding (Amirhossein Javadi, 2013), in addition to the reduction of particle and finally the incorporation of carbon nanotubes by adding silver nanoparticles by the reduction with sodium borohydride by Medium of the ultrasonic tip. The obtained alloys were characterized by Scanning Electron Microscopy (SEM), X-Ray Diffraction Analysis, hardness tests were performed and electrical conductivity tests were finally carried out

Abstract:

In this study, high temperature reactions of Fe–Cr alloys at 500 and 600 °C were investigated using an atmosphere of N2–O2 8 vol% with 220 vppm HCl, 360 vppm H2O and 200 vppm SO2; moreover the following aggressive salts were placed in the inlet: KCl and ZnCl2. The salts were placed in the inlet to promote corrosion and increase the chemical reaction. These salts were applied to the alloys via discontinuous exposures. The corrosion products were characterized using thermo-gravimetric analysis, scanning electron microscopy and X-ray diffraction.The species identified in the corrosion products were: Cr2O3, Cr2O (Fe0.6Cr0.4)2O3, K2CrO4, (Cr, Fe)2O3, Fe–Cr, KCl, ZnCl2, FeOOH, σ-FeCrMo and Fe2O3. The presence of Mo, Al and Si was not significant and there was no evidence of chemical reaction of these elements. The most active elements were the Fe and Cr in the metal base. The Cr presence was beneficial against corrosion; this element decelerated the corrosion process due to the formation of protective oxide scales over the surfaces exposed at 500 °C and even more notable at 600 °C; as it was observed in the thermo-gravimetric analysis increasing mass loss. The steel with the best performance was alloy Fe9Cr3AlSi3Mo, due to the effect of the protective oxides inclusive in presence of the aggressive salts

Abstract:

Cognitive appraisal theory predicts that emotions affect participation decisions around risky collective action. However, little existing research has attempted to parse out the mechanisms by which this process occurs. We build a global game of regime change and discuss the effects that fear may have on participation through pessimism about the state of the world, other players' willingness to participate, and risk aversion. We test the behavioral effects of fear in this game by conducting 32 sessions of an experiment in two labs where participants are randomly assigned to an emotion induction procedure. In some rounds of the game, potential mechanisms are shut down to identify their contribution to the overall effect of fear. Our results show that in this context, fear does not affect willingness to participate. This finding highlights the importance of context, including integral versus incidental emotions and the size of the stakes, in shaping effect of emotions on behavior

Abstract:

Multilayer perceptron networks have been designed to solve supervised learning problems in which there is a set of known labeled training feature vectors. The resulting model allows us to infer adequate labels for unknown input vectors. Traditionally, the optimal model is the one that minimizes the error between the known labels and those inferred labels via such a model. The training process results in those weights that achieve the most adequate labels. Training implies a search process which is usually determined by the descent gradient of the error. In this work, we propose to replace the known labels by a set of such labels induced by a validity index. The validity index represents a measure of the adequateness of the model relative only to intrinsic structures and relationships of the set of feature vectors and not to previously known labels. Since, in general, there is no guarantee of the differentiability of such an index, we resort to heuristic optimization techniques. Our proposal results in an unsupervised learning approach for multilayer perceptron networks that allows us to infer the best model relative to labels derived from such a validity index which uncovers the hidden relationships of an unlabeled dataset

Abstract:

Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of object in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method

Abstract:

One of the basic endeavors in Pattern Recognition and particularly in Data Mining is the process of determining which unlabeled objects in a set do share interesting properties. This implies a singular process of classification usually denoted as "clustering", where the objects are grouped into k subsets (clusters) in accordance with an appropriate measure of likelihood. Clustering can be considered the most important unsupervised learning problem. The more traditional clustering methods are based on the minimization of a similarity criteria based on a metric or distance. This fact imposes important constraints on the geometry of the clusters found. Since each element in a cluster lies within a radial distance relative to a given center, the shape of the covering or hull of a cluster is hyper-spherical (convex) which sometimes does not encompass adequately the elements that belong to it. For this reason we propose to solve the clustering problem through the optimization of Shannon's Entropy. The optimization of this criterion represents a hard combinatorial problem which disallows the use of traditional optimization techniques, and thus, the use of a very efficient optimization technique is necessary. We consider that Genetic Algorithms are a good alternative. We show that our method allows to obtain successfull results for problems where the clusters have complex spatial arrangements. Such method obtains clusters with non-convex hulls that adequately encompass its elements. We statistically show that our method displays the best performance that can be achieved under the assumption of normal distribution of the elements of the clusters. We also show that this is a good alternative when this assumption is not met

Abstract:

This paper proposes a novel distributed controller that solves the leader-follower and the leaderless consensus problems in the task space for networks composed of robots that can be kinematically and dynamically different (heterogeneous). In the leader-follower scenario, the controller ensures that all the robots in the network asymptotically reach a given leader pose (position and orientation), provided that at least one follower robot has access to the leader pose. In the leaderless problem, the robots asymptotically reach an agreement pose. The proposed controller is robust to variable time-delays in the communication channel and does not rely on velocity measurements. The controller is dynamic, it cancels-out the gravity effects and it incorporates a proportional to the error term between the robot and the controller virtual position. The controller dynamics consists of a simple proportional scheme plus damping injection through a second-order (virtual) system. The proposed approach employs the singularity free unit-quaternions to represent the orientation of the end-effectors, and the network is represented by an undirected and connected interconnection graph. The application to the control of bilateral teleoperators is described as a special case of the leaderless consensus solution. The paper presents numerical simulations with a network composed of four 6-Degrees-of-Freedom (DoF) and one 7-DoF robot manipulators. Moreover, we also report some experiments with a 6-DoF industrial robot and two 3-DoF haptic devices

Abstract:

This document describes the methodology used to group all the municipalities in the country into 777 local labor markets. The approach used to define local labor markets follows best international practices and hinges on the idea that there are groups of municipalities that are highly economically integrated and, therefore, constitute a single local labor market. After defining local labor markets, census population and housing data collected by INEGI are linked to local labor markets, covering the period from 1990 to 2020. With the definitions generated in this document, researchers from the academic community and the public and private sectors can study different aspects of the Mexican labor market using a unit of analysis that considers the interrelationships between the country's municipalities. The databases are presented at the individual level and aggregated at the local labor market level from 1990 to 2020. This document details the structure of the databases, and provides an initial analysis of the trends exhibited by the Mexican labor markets during 1990-2020 based on the aforementioned database

Abstract:

This article examines the effects of committee specialization and district characteristics on speech participation by topic and congressional forum. It argues that committee specialization should increase speech participation during legislative debates, while district characteristics should affect the likelihood of speech participation in non-lawmaking forums. To examine these expectations, we analyze over 100,000 speeches delivered in the Chilean Chamber of Deputies between 1990 and 2018. To carry out our topic classification task, we utilize the recently developed state-of-the-art multilingual Transformer model XLM-RoBERTa. Consistent with informational theories, we find that committee specialization is a significant predictor of speech participation in legislative debates. In addition, consistent with theories purporting that legislative speech serves as a vehicle for the electoral connection, we find that district characteristics have a significant effect on speech participation in non-lawmaking forums
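
As an illustration of the tooling involved, a multilingual topic classifier of this kind can be assembled with the Hugging Face transformers library. The checkpoint, the three-topic label set, and the untrained classification head below are placeholders, not the authors' fine-tuned configuration.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "xlm-roberta-base"                    # base checkpoint, not fine-tuned
topics = ["economy", "health", "education"]   # hypothetical label set

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=len(topics)
)

speech = "El proyecto de ley mejora el financiamiento de los hospitales."
inputs = tokenizer(speech, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# With an untrained head the prediction is arbitrary; fine-tuning on
# labeled speeches is what makes the classifier useful.
print(topics[logits.argmax(dim=-1).item()])
```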

Abstract:

According to conventional wisdom, closed-list proportional representation (CLPR) electoral systems create incentives for legislators to favor the party line over their voters' positions. However, electoral incentives may induce party leaders to tolerate shirking by some legislators, even under CLPR. This study argues that in considering whose deviations from the party line should be tolerated, party leaders exploit differences in voters' relative electoral influence resulting from malapportionment. We expect defections in roll call votes to be more likely among legislators elected from overrepresented districts than among those from other districts. We empirically test this claim using data on Argentine legislators' voting records and a unique dataset of estimates of voters' and legislators' placements in a common ideological space. Our findings suggest that even under electoral rules known for promoting unified parties, we should expect strategic defections to please voters, which can be advantageous for the party's electoral fortunes

Abstract:

This article examines speech participation under different parliamentary rules: open forums dedicated to bill debates, and closed forums reserved for non-lawmaking speeches. It discusses how electoral incentives influence speechmaking by promoting divergent party norms within those forums. Our empirical analysis focuses on the Chilean Chamber of Deputies. The findings lend support to the view that, in forums dedicated to non-lawmaking speeches, participation is greater among more institutionally disadvantaged members (backbenchers, women, and members from more distant districts), while in those that are dedicated to lawmaking debates, participation is greater among more senior members and members of the opposition

Abstract:

We present a novel approach to disentangle the effects of ideology, partisanship, and constituency pressures on roll-call voting. First, we place voters and legislators on a common ideological space. Next, we use roll-call data to identify the partisan influence on legislators’ behavior. Finally, we use a structural equation model to account for these separate effects on legislative voting. We rely on public opinion data and a survey of Argentine legislators conducted in 2007–08. Our findings indicate that partisanship is the most important determinant of legislative voting, leaving little room for personal ideological position to affect legislators’ behavior

Abstract:

Legislators in presidential countries use a variety of mechanisms to advance their electoral careers and connect with relevant constituents. The most frequently studied activities are bill initiation, co-sponsoring, and legislative speeches. In this paper, the authors examine legislators' information requests (i.e. parliamentary questions) to the government, which have been studied in some parliamentary countries but remain largely unscrutinised in presidential countries. The authors focus on the case of Chile - where strong and cohesive national parties coexist with electoral incentives that emphasise the personal vote - to examine the links between party responsiveness and legislators' efforts to connect with their electoral constituencies. Making use of a new database of parliamentary questions and a comprehensive sample of geographical references, the authors examine how legislators use this mechanism to forge connections with voters, and find that targeted activities tend to increase as a function of electoral insecurity and progressive ambition

Abstract:

In her response, Alfaro fleshes out two main questions that arise from her book The Belief in Intuition. First, what is the relation between Henri Bergson's and Max Scheler's personalist anthropology, on the one hand, and politics, on the other? What kinds of political order and civic education would sustain a dense moral psychology such as the one she claims follows from their writings? Second, and more specifically, what theory of authority corresponds to the philosophical anthropology that we learn from these authors? How can the "small-scale exemplarity" that Alfaro argues is articulated in their writings be an alternative to how we think today about charisma and emulation in the public sphere? In articulating these responses, Alfaro reflects on phenomena like uncertainty, education, and social media in the contemporary public sphere

Abstract:

Using insights from two of the major proponents of the hermeneutical approach, Paul Ricoeur and Hannah Arendt, who both recognized the ethicopolitical importance of narrative and acknowledged some of the dangers associated with it, I will flesh out the worry that "narrativity" in political theory has been overly attentive to storytelling and not heedful enough of story listening. More specifically, even if, as Ricoeur says, "narrative intelligence" is crucial for self-understanding, that does not mean, as he invites us to, that we should always seek to develop a "narrative identity" or become, as he says, "the narrator of our own life story." I offer that, perhaps inadvertently, such an injunction might turn out to be detrimental to the "art of listening." This, however, must also be cultivated if we want to do justice to our narrative character and expect narrative to have the political role that both Ricoeur and Arendt envisaged. Thus, although there certainly is a "redemptive power" in narrative, when the latter is understood primarily as the act of narration or as the telling of stories, there is a danger to it as well. Such a danger, I think, intensifies at a time like ours, when, as some scholars have noted, "communicative abundance" or the "ceaseless production of redundancy" in traditional and social media has often led to the impoverishment of the public conversation

Abstract:

In this paper, I take George Lakoff and Mark Johnson's thesis that metaphors shape our reality to approach the judicial imagery of the new criminal justice system in Mexico (in effect since 2016). Based on twenty-nine in-depth interviews with judges and other members of the judiciary, I study what I call the "dirty minds" metaphor, showing its presence in everyday judicial practice and analyzing both its cognitive basis and its effects on how criminal judges understand their job. I argue that such a metaphor, together with the "fear of contamination" it raises, is misleading and works to the detriment of the judicial virtues that should populate the new system. The conclusions I offer are relevant beyond the national context, inter alia, because they concern a far-reaching paradigm of judgment

Abstract:

Recent efforts to theorize the role of emotions in political life have stressed the importance of sympathy, and have often turned to Adam Smith to articulate their claims. In the early twentieth century, Max Scheler disputed the salutary character of sympathy, dismissing it as an ultimately perverse foundation for human association. Unlike later critics of sympathy as a political principle, Scheler rejected it for being ill equipped to salvage what, in his opinion, should be the proper basis of morality, namely, moral value. Even if Scheler's objections against Smith's project prove to be ultimately mistaken, he had important reasons to call into question its moral purchase in his own time. Where the most dangerous idol is not self-love but illusory self-knowledge, the virtue of self-command will not suffice. Where identification with others threatens the social bond more deeply than faction, "standing alone" in moral matters proves a more urgent task

Abstract:

Images of chemical molecules can be produced, manipulated, simulated and analyzed using sophisticated chemical software. However, in the process of publishing such images in the scientific literature, all their chemical significance is lost. Although images of chemical molecules can be easily analyzed by a human expert, they cannot be fed back into chemical software and thus lose much of their potential use. We have developed a system that automatically reconstructs the chemical information associated with images of chemical molecules, thus rendering them computer readable. We have benchmarked our system against a commercially available product and have also tested it on chemical databases of several thousand images, with very encouraging results

Abstract:

We present an algorithm that automatically segments and classifies the brain structures in a set of magnetic resonance (MR) brain images using expert information contained in a small subset of the image set. The algorithm is intended to do the segmentation and classification tasks mimicking the way a human expert would reason. The algorithm uses a knowledge base taken from a small subset of semiautomatically classified images that is combined with a set of fuzzy indexes that capture the experience and expectation a human expert uses during recognition tasks. The fuzzy indexes are tissue specific and spatial specific, in order to consider the biological variations in the tissues and the acquisition inhomogeneities through the image set. The brain structures are segmented and classified one at a time. For each brain structure the algorithm needs one semiautomatically classified image and makes one pass through the image set. The algorithm uses low-level image processing techniques on a pixel basis for the segmentations, then validates or corrects the segmentations, and makes the final classification decision using higher level criteria measured by the set of fuzzy indexes. We use single-echo MR images because of their high volumetric resolution; but even though we are working with only one image per brain slice, we have multiple sources of information on each pixel: absolute and relative positions in the image, gray level value, statistics of the pixel and its three-dimensional neighborhood and relation to its counterpart pixels in adjacent images. We have validated our algorithm for ease of use and precision both with clinical experts and with measurable error indexes over a Brainweb simulated MR set

Abstract:

We present an attractive methodology for the compression of facial gestures that can be used to drive interaction in real-time applications. Using the eigenface method we build compact representation spaces for a variety of facial gestures; these compact spaces are the so-called eigenspaces. We do real-time tracking and segmentation of facial features from video images and then use the eigenspaces to find compact descriptors of the segmented features. We use the system for an avatar videoconference application where we achieve real-time interactivity with very limited bandwidth requirements. The system can also be used as a hands-free man-machine interface
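
The eigenspace construction is standard principal component analysis on vectorized images. Below is a minimal numpy sketch; the array shapes and the component count are illustrative, not taken from the paper.

```python
import numpy as np

def build_eigenspace(faces, n_components=20):
    """PCA on a stack of vectorized gesture images.

    faces: array of shape (n_samples, n_pixels). Returns the mean image
    and the top principal components (the "eigenfaces")."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def compress(image, mean, components):
    """Project one image into the eigenspace: a compact descriptor."""
    return components @ (image - mean)

def reconstruct(coeffs, mean, components):
    """Approximate the image back from its compact descriptor."""
    return mean + components.T @ coeffs

faces = np.random.rand(100, 64 * 64)        # stand-in for gesture images
mean, comps = build_eigenspace(faces)
code = compress(faces[0], mean, comps)      # 20 numbers instead of 4096
```

Transmitting only the handful of coefficients returned by compress, rather than the frames themselves, is what keeps the bandwidth requirements low.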

Abstract:

We use interactive virtual environments for cognitive behavioral therapy. Working together with child therapists and psychologists, our computer graphics group developed five interactive simulators for the treatment of fears and behavior disorders. The simulators run in real time on P4 PCs with graphics accelerators, and they also work online using streaming techniques and Web VR engines. The construction of the simulators starts with ideas and situations proposed by the psychologists; these ideas are then developed by graphic designers and finally implemented in 3D virtual worlds by our group. We present the methodology we follow to turn the psychologists' ideas, and then the graphic designers' sketches, into fully interactive simulators. Our methodology starts with a graphic modeler to build the geometry of the virtual worlds; the models are then exported to a dedicated OpenGL VR engine that can interface with any VR peripheral. Alternatively, the models can be exported to a Web VR engine. The simulators are cost-efficient since they require little more than the PC and the graphics card. We have found that both the therapists and the children who use the simulators find this technology very attractive

Abstract:

We study the motion in the negatively curved symmetric two- and three-center problem on the Poincaré upper half-plane model for a surface of constant negative curvature κ, which without loss of generality we assume to be κ = -1. Using this model, we first derive the equations of motion for the 2- and 3-center problems. We prove that for the 2-center problem there exists a unique equilibrium point, and we study the dynamics around it. For the motion restricted to the invariant y-axis, we prove that it is a center, but for the general two-center problem it is unstable. For the 3-center problem, we show the nonexistence of equilibrium points. We study two particular integrable cases: first, when the motion of the free particle is restricted to the y-axis, and second, when all particles are along the same geodesic. We classify the singularities of the problem and introduce a local as well as a global regularization of all of them. We show some numerical simulations for each situation

Abstract:

We consider the curved 4-body problems on spheres and hyperbolic spheres. After obtaining a criterion for the existence of quadrilateral configurations on the equator of the sphere, we study two restricted 4-body problems: one in which two masses are negligible, and another in which only one mass is negligible. In the former, we prove the existence of square-like relative equilibria, whereas in the latter we discuss the existence of kite-shaped relative equilibria

Abstract:

In this paper, we study the hypercyclic composition operators on weighted Banach spaces of functions defined on discrete metric spaces. We show that the only such composition operators act on the "little" spaces. We characterize the bounded composition operators on the little spaces, as well as provide various necessary conditions for hypercyclicity

Abstract:

We give a complete characterisation of the single and double arrow relations of the standard context K(Ln) of the lattice Ln of partitions of any positive integer n under the dominance order, thereby addressing an open question of Ganter, 2020/2022

Abstract:

Demand response (DR) programs and local markets (LM) are two suitable technologies to mitigate the impact of the high penetration of distributed energy resources (DER), which has kept increasing rapidly even during the current worldwide pandemic. The aim is to improve operation by incorporating such mechanisms into the energy resource management problem while mitigating the present issues with Smart Grid (SG) technologies and optimization techniques. This paper presents an efficient intraday energy resource management model starting from the day-ahead time horizon, which considers load uncertainty and implements both DR programs and LM trading to reduce the operating costs of three load aggregators in an SG. A random perturbation was used to generate the intraday scenarios from the day-ahead time horizon. A recent evolutionary algorithm, HyDE-DF, is used to achieve optimization. Results show that the aggregators can manage consumption and generation resources, including DR and power balance compensation, through an implemented LM
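
The scenario-generation step can be pictured with a small sketch. The multiplicative Gaussian noise and the 5% spread below are assumptions for illustration, since the abstract does not specify the perturbation's distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def intraday_scenarios(day_ahead_load, n_scenarios=100, sigma=0.05):
    """Generate intraday load scenarios by randomly perturbing the
    day-ahead profile (multiplicative Gaussian noise; the 5% standard
    deviation is an assumed value, not the paper's calibration)."""
    base = np.asarray(day_ahead_load, dtype=float)   # e.g. 24 hourly values
    noise = rng.normal(1.0, sigma, size=(n_scenarios, base.size))
    return base * noise

scenarios = intraday_scenarios(np.full(24, 100.0))   # flat 100 kW profile
print(scenarios.shape)                               # (100, 24)
```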

Abstract:

Demand response programs, energy storage systems, electric vehicles, and local electricity markets are appropriate solutions to offset the uncertainty associated with the high penetration of distributed energy resources. The aim is to enhance efficiency by adding such technologies to the energy resource management problem while also addressing current concerns using smart grid technologies and optimization methodologies. This paper presents an efficient intraday energy resource management model starting from the day-ahead time horizon, which considers the uncertainty associated with load consumption, renewable generation, electric vehicles, electricity market prices, and the existence of extreme events in a 13-bus distribution network with high integration of renewables and electric vehicles. A risk analysis is implemented through conditional value-at-risk to address these extreme events. In the intraday model, we assume that an extreme event occurs in order to analyze the outcome of the developed solution. We analyze the solution's impact starting from the day-ahead stage, considering different risk-aversion levels. Multiple metaheuristics optimize the day-ahead problem, and the best-performing algorithm is used for the intraday problem. Results show that HyDE gives the best day-ahead solution compared to the other algorithms, achieving a reduction of around 37% in the cost of the worst scenarios. For the intraday model, considering risk aversion also reduces the impact of the extreme scenarios

Abstract:

The central configurations given by an equilateral triangle and a regular tetrahedron with equal masses at the vertices and a body at the barycenter have been widely studied in [9] and [14] due to the bifurcation phenomena occurring when the central mass reaches a certain value m*. We propose a variation of this problem, setting the central mass at the critical value m* and letting a mass at a vertex be the bifurcation parameter. In both cases, 2D and 3D, we verify the existence of bifurcation, that is, for the same set of masses we determine two new central configurations. The computation of the bifurcations, as well as their plots, has been carried out considering homogeneous force laws with exponent a < −1

Abstract:

The leaderless and the leader-follower consensus are the most basic synchronization behaviors for multiagent systems. For networks of Euler-Lagrange (EL) agents, different controllers have been proposed to achieve consensus, requiring in all cases either the cancellation or the estimation of the gravity forces. While in the first case it has been shown that a simple proportional plus damping (P+d) scheme with exact gravity cancellation can achieve consensus, in the latter case it is necessary to estimate not just the gravity forces but the parameters of the whole dynamics. This requires the computation of a complicated regressor matrix that grows in complexity as the degrees of freedom of the EL agents increase. To simplify the controller implementation, we propose in this paper a simple P+d scheme with only adaptive gravity compensation. In particular, we report two adaptive controllers that solve both consensus problems by estimating only the gravitational term of the agents, hence without requiring the complete regressor matrix. The first controller is a simple P+d scheme that does not require the agents to exchange velocity information but requires centralized information. The second controller is a Proportional-Derivative plus damping (PD+d) scheme that is fully decentralized but requires exchanges of velocity information between the agents. Simulation results demonstrate the performance of the proposed controllers
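
For intuition, here is a minimal numerical sketch of a P+d consensus law on an undirected ring, with double-integrator agents standing in for the EL dynamics; gravity and its adaptive compensation (the paper's actual contribution) are deliberately omitted, and the gains are arbitrary.

```python
import numpy as np

# Undirected ring of 4 agents (adjacency matrix).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
p, d, dt = 4.0, 3.0, 1e-3                # proportional gain, damping, step

q = np.array([0.0, 2.0, 5.0, -1.0])      # initial positions
v = np.zeros(4)                          # initial velocities

for _ in range(20000):
    # P+d law: proportional coupling to neighbors plus damping injection.
    # A.sum(axis=1) * q - A @ q is the graph Laplacian applied to q.
    u = -p * (A.sum(axis=1) * q - A @ q) - d * v
    v += dt * u
    q += dt * v

print(np.round(q, 3))   # all entries close to a common agreement value
```

Each agent only uses its neighbors' positions and its own velocity, and all positions converge to a common agreement value, which is the leaderless behavior described above.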

Abstract:

Medical image segmentation is one of the most productive research areas in medical image processing. The goal of most new image segmentation algorithms is to achieve higher segmentation accuracy than existing algorithms. But the issue of quantitative, reproducible validation of segmentation results remains wide open, along with the questions: What is segmentation accuracy? What segmentation accuracy can a segmentation algorithm achieve? The creation of a validation framework is relevant and necessary for consistent and realistic comparisons of existing, new and future segmentation algorithms. An important component of a reproducible and quantitative validation framework for segmentation algorithms is a composite index that measures segmentation performance at a variety of levels. In this paper we present a prototype composite index that includes the measurement of seven metrics on segmented image sets. We explain how the composite index is a more complete and robust representation of algorithmic performance than currently used indices that rate segmentation results using a single metric. Our proposed index can be read as an averaged global metric or as a series of algorithmic ratings that allow the user to compare how an algorithm performs under many categories
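
The two readings of the index (one global number, or per-metric ratings) can be sketched as follows; the seven metric names and their values are hypothetical placeholders, not the metrics chosen in the paper.

```python
import numpy as np

def composite_index(scores, weights=None):
    """Combine normalized metric ratings into one composite score.

    scores: dict mapping metric name -> value in [0, 1] (1 = best).
    Returns the weighted global index and the per-metric ratings."""
    names = sorted(scores)
    vals = np.array([scores[m] for m in names])
    w = np.ones(len(vals)) / len(vals) if weights is None else np.asarray(weights)
    return float(vals @ w), dict(zip(names, vals))

# Hypothetical ratings for one algorithm on one image set:
ratings = {"dice": 0.91, "jaccard": 0.84, "hausdorff": 0.78, "precision": 0.88,
           "recall": 0.90, "volume_sim": 0.93, "boundary_dist": 0.81}
index, per_metric = composite_index(ratings)
print(round(index, 3))   # averaged global metric; per_metric keeps the detail
```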

Abstract:

How is the size of the informal sector affected when the distribution of social expenditures across formal and informal workers changes? How is it affected when the tax rate changes along with the generosity of these transfers? In our search model, taxes are levied on formal-sector workers as a proportion of their wage. Transfers, in contrast, are lump-sum and are received by both formal and informal workers. This implies that high-wage formal workers subsidize low-wage formal workers as well as informal workers. We calibrate the model to Mexico and perform counterfactuals. We find that the size of the informal sector is quite inelastic to changes in taxes and transfers. This is due to the presence of search frictions and to the cross-subsidy in our model: for low-wage formal jobs, a tax increase is roughly offset by an increase in benefits, leaving the unemployed approximately indifferent. Our results are consistent with the empirical evidence on the recent introduction of the “Seguro Popular” healthcare program

Abstract:

We calibrate the cost of sovereign defaults using a continuous time model, where government default decisions may trigger a change in the regime of a stochastic TFP process. We calibrate the model to a sample of European countries from 2009 to 2012. By comparing the estimated drift in default relative to that in no-default, we find that TFP falls in the range of 3.70–5.88%. The model is consistent with observed falls in GDP growth rates and subsequent recoveries and illustrates why fiscal multipliers are small during sovereign debt crises

Abstract:

Employment to population ratios differ markedly across Organization for Economic Cooperation and Development (OECD) countries, especially for people aged over 55 years. In addition, social security features differ markedly across the OECD, particularly with respect to features such as generosity, entitlement ages, and implicit taxes on social security benefits. This study postulates that differences in social security features explain many differences in employment to population ratios at older ages. This conjecture is assessed quantitatively with a life cycle general equilibrium model of retirement. At ages 60-64 years, the correlation between the simulations of this study's model and observed data is 0.67. Generosity and implicit taxes are key features to explain the cross-country variation, whereas entitlement age is not

Abstract:

The consequences of increases in the scale of tax and transfer programs are assessed in the context of a model with idiosyncratic productivity shocks and incomplete markets. The effects are contrasted with those obtained in a stand-in household model featuring no idiosyncratic shocks and complete markets. The main finding is that the impact on hours remains very large, but the welfare consequences are very different. The analysis also suggests that tax and transfer policies have large effects on average labor productivity via selection effects on employment

Abstract:

We also noticed that 5 months of historical data are enough for accurate training of the forecast models, and that shallow models with around 50,000 parameters have enough predictive power for this task
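
For scale, a "shallow" network of roughly 50,000 parameters is on the order of the following PyTorch sketch; the window sizes (one week of hourly readings in, one day of hourly readings out) are assumptions for illustration, not the abstract's architecture.

```python
import torch
import torch.nn as nn

# A shallow forecaster in the ~50k-parameter range (sizes are assumptions).
model = nn.Sequential(
    nn.Linear(168, 256),   # one week of hourly readings in
    nn.ReLU(),
    nn.Linear(256, 24),    # one day of hourly readings out
)
n_params = sum(p.numel() for p in model.parameters())
print(n_params)            # 49,688 parameters, close to the 50k figure

x = torch.randn(32, 168)   # a batch of 32 input windows
print(model(x).shape)      # torch.Size([32, 24])
```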

Abstract:

Atmospheric pollution components have negative effects on people's health and lives. Outdoor pollution has been extensively studied, but people spend a large share of their time indoors. Our research focuses on indoor pollution forecasting using deep learning techniques coupled with the large processing capabilities of cloud computing. This paper also shares an open-source implementation of the code for modeling time series from different data sources. We believe that further research can leverage the outcomes of our work
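
A typical preprocessing step for this kind of forecasting is turning each sensor's series into supervised windows. A minimal sketch follows; the look-back and horizon lengths are assumed values, and the random series merely stands in for real pollutant readings.

```python
import numpy as np

def make_windows(series, lookback=168, horizon=24):
    """Turn one pollutant time series into supervised (X, y) pairs:
    each sample uses `lookback` past readings to predict the next
    `horizon` readings (window sizes are illustrative assumptions)."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t : t + lookback])
        y.append(series[t + lookback : t + lookback + horizon])
    return np.array(X), np.array(y)

pm25 = np.random.default_rng(1).random(5 * 30 * 24)  # ~5 months, hourly
X, y = make_windows(pm25)
print(X.shape, y.shape)    # (3409, 168) (3409, 24)
```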

Abstract:

Sergio Verdugo's provocative Foreword challenges us to think about whether the concepts we inherited from classical constitutionalism are still useful for understanding our current reality. Verdugo refutes any attempt to defend what he calls "the conventional approach to constituent power." The objective of this article is to contradict Verdugo's assertions which, the Foreword claims, are based on an incorrect notion of the people as a unified body, or as a social consensus. The article argues, instead, for the plausibility of defending the popular notion of constituent power by anchoring it in a historical and dynamic concept of democratic legitimacy. It concludes that, although legitimizing deviations from the established channels for political transformation entails risks, we must assume them for the sake of the emancipatory potential of constituent power

Resumen:

El libro La Corte Enrique Santiago Petracchi II que se reseña muestra la impronta de Enrique Santiago Petracchi durante su segundo período a cargo de la presidencia de la Corte Suprema de Justicia de la Nación Argentina, ocurrida entre los años 2003 y 2006. La Corte se estudia tanto desde el punto de vista interno, con sus reformas en busca de transparencia y legitimidad, como desde el punto de vista externo, contextual y de relación con el resto de los poderes del Estado y la sociedad civil. El recuento incluye una mención de los hitos jurisprudenciales de dicho período y explica cómo la figura de Petracchi es central para comprender este momento de la Corte

Abstract:

The book "The Court Enrique Santiago Petracchi II" that is reviewed shows the imprint of Chief Justice Enrique Santiago Petracchi at the Argentinean Supreme Court of Justice during his second term that occurred between 2003 and 2006. The Court is studied both from the internal point of view, with its reforms in search of transparency and legitimacy, as well as from the external, contextual point of view and relationship with the rest of the State branches and civil society. The account includes a mention of the leading cases of that period and explains how the figure of Petracchi is central to understanding this moment of the Court

Resumen:

El artículo plantea algunas inquietudes sobre el análisis que Gargarella hace de la sentencia de la Corte Interamericana de Derechos Humanos en el caso Gelman vs. Uruguay. Aceptando la idea principal de que las decisiones pueden gradarse según su legitimidad democrática, cuestiona que la Ley de Caducidad uruguaya haya sido legítima en un grado significativo y por tanto haya debido respetarse en su validez por la Corte Interamericana. Ello por cuanto la decisión no fue tomada por todas las personas posiblemente afectadas por la misma. Este principio normativo de inclusión es fundamental para la teoría de la democracia de Gargarella, sin embargo, no fue considerado en su análisis del caso. El artículo explora las consecuencias de tomarse en serio el principio de inclusión y los problemas prácticos que este apareja, en especial respecto de la constitución del demos. Finalmente, propone una alternativa de solución mediante la justicia constitucional/convencional, de acuerdo con un entendimiento procedimental de la misma basado en la participación, según lo pensó J. H. Ely

Abstract:

The article raises some questions about Gargarella's analysis of the Inter-American Court of Human Rights' ruling in Gelman v. Uruguay. It accepts the main idea that decisions can be graded according to their democratic legitimacy, but it questions whether the Uruguayan amnesty had a high degree of legitimacy, since the decision was not made by all possibly affected people. This normative principle of inclusion is fundamental to Gargarella's theory of democracy; however, it was not considered in his analysis of the case. The article explores the consequences of taking the principle of inclusion seriously and the practical problems it involves, especially regarding the constitution of the demos. Finally, it proposes an alternative solution through a procedural understanding of judicial review based on participation, as proposed by J. H. Ely

Resumen:

Este artículo reflexionará críticamente sobre la jurisprudencia de la Suprema Corte de Justicia de la Nación mexicana en torno al principio constitucional de paridad de género. Para hacerlo, se apoyará en una reconstrucción teórica de tres diferentes fundamentos en los que se puede basar la paridad según se entienda la representación política de las mujeres y el principio de igualdad. Estas posturas se identificarán como “de la igualdad formal”, “de la defensa de las cuotas” y “de la paridad”, resaltando cómo en las sentencias mexicanas se mezclan los argumentos de las dos últimas posturas de forma desordenada e incoherente. Finalmente, el artículo propiciará una interpretación del principio de paridad de género como dinámico y por tanto abierto a aportaciones progresivas del sujeto político feminista, que evite los riesgos del esencialismo como el reforzar el sistema binario de sexos. Un principio que, en este momento histórico, requiere medidas de acción afirmativa correctivas, provisionales y estratégicas para alcanzar la igualdad sustantiva de género en la representación política

Abstract:

This article will critically reflect on the jurisprudence of the Mexican Supreme Court of Justice regarding the constitutional principle of gender parity. To do so, it will rely on a theoretical reconstruction of three different foundations of the principle, based on how the political representation of women and the principle of equality are understood. I will call these positions "of formal equality," "of the defense of quotas," and "of parity," highlighting how the arguments of the last two positions are mixed in the Mexican judgments in a disorderly and incoherent way. Finally, the article will promote an interpretation of the principle of gender parity as dynamic, and therefore open to progressive contributions from the feminist political subject, avoiding the risks of essentialism, such as reinforcing the binary system of sexes. This principle requires, at this historical moment, corrective, provisional, and strategic affirmative action measures to achieve substantive gender equality in political representation

Abstract:

This article considers the possibility that we are facing a constitutional transformation beyond the constitutional text. Within the framework of the so-called "Fourth Transformation" announced for the country after the last elections, it offers a diagnostic reflection on the role played by the Supreme Court of Justice.

Resumen:

El presente trabajo tiene por objeto analizar el funcionamiento de la rigidez constitucional en México como garantía de la supremacía constitucional. Para ello comenzaré con un estudio sobre la idea de rigidez y la distinguiré del concepto de supremacía. Posteriormente utilizaré dichas categorías para analizar el sistema mexicano y cuestionar su eficacia, es decir, la adecuación entre el medio (rigidez) y el fin (supremacía). Por último haré un par de propuestas de modificación del mecanismo de reforma constitucional en vistas a hacerlo más democrático, más deliberativo y con ello más eficaz para garantizar la supremacía constitucional

Abstract:

This paper analyzes how constitutional rigidity works in Mexico and its consequences for constitutional supremacy. It starts with a conceptual distinction between rigidity and supremacy. Subsequently, those categories are used to analyze the Mexican system and to question the capability of the amendment process to guarantee constitutional supremacy. Finally, the paper makes some proposals to modify the Mexican constitutional amendment process in order to make it more democratic, more deliberative, and thereby more effective in guaranteeing constitutional supremacy

Resumen:

El constitucionalismo popular como corriente constitucional contemporánea plantea una revisión crítica a la historia del constitucionalismo norteamericano, reivindicando el papel del pueblo en la interpretación constitucional. A la vez, presenta un contenido normativo anti-elitista que trasciende sus orígenes y que pone en cuestión la idea de que los jueces deban tener la última palabra en las controversias sobre derechos fundamentales. Esta faceta tiene su correlato en los diseños institucionales propuestos, propios de un constitucionalismo débil y centrados en la participación popular democrática

Abstract:

Popular constitutionalism is a contemporary constitutional theory that takes a critical view of the U.S. constitutional narrative's focus on judicial supremacy. Instead, popular constitutionalism regards the people as the main actor and defends an anti-elitist understanding of constitutional law. From the institutional perspective, popular constitutionalism proposes a weak model of constitutionalism centered on strong democratic participation

Resumen:

El trabajo tiene como objetivo analizar la sentencia dictada por la Primera Sala de la Suprema Corte de Justicia mexicana en el amparo en revisión 152/2013 en la que se declaró la inconstitucionalidad de la exclusión de los homosexuales del régimen matrimonial en el Estado de Oaxaca. Esta sentencia refleja un cambio importante en la forma de entender el interés legítimo tratándose de la impugnación de normas autoaplicativas (es decir, de normas que causan perjuicio sin que medie acto de aplicación), dando paso a la justiciabilidad de los mensajes estigmatizantes. En el caso, esta forma más amplia de entender el interés legítimo está basada en la percepción de que el derecho discrimina a través de los mensajes que transmite; situación que la Suprema Corte considera puede combatir a través de sus sentencias de amparo. Asimismo, se plantean algunos retos e inquietudes que suscita la sentencia a la luz del activismo judicial que puede conllevar

Abstract:

This paper focuses on the amparo en revisión 152/2013 issued by the First Chamber of the Supreme Court of Mexico. From now on, the Supreme Court is able to judge the stigmatizing messages of the law. Furthermore, the amparo en revisión 152/2013 develops a broader conception of discrimination and a more activist role for the Supreme Court. Finally, I express some thoughts about the issues that this judgment could pose for the Supreme Court

Abstract:

We elicit subjective probability distributions from business executives about their own-firm outcomes at a one-year look-ahead horizon. In terms of question design, our key innovation is to let survey respondents freely select support points and probabilities in five-point distributions over future sales growth, employment, and investment. In terms of data collection, we develop and field a new monthly panel, the Survey of Business Uncertainty (SBU). The SBU began in 2014 and now covers about 1,750 firms drawn from all 50 states, every major nonfarm industry, and a range of firm sizes. We find three key results. First, firm-level growth expectations are highly predictive of realized growth rates. Second, firm-level subjective uncertainty predicts the magnitudes of future forecast errors and future forecast revisions. Third, subjective uncertainty rises with the firm's absolute growth rate in the previous year and with the magnitude of recent revisions to its expected growth rate. We aggregate over firm-level forecast distributions to construct monthly indices of business expectations (first moment) and uncertainty (second moment) for the U.S. private sector
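
Concretely, a five-point response pins down the first and second moments by simple arithmetic; the support points and probabilities below are made-up illustrations, not actual SBU responses, and the aggregation into indices involves more than this per-firm step.

```python
# Moments of a five-point subjective distribution (illustrative numbers):
support = [-0.10, -0.02, 0.03, 0.08, 0.15]   # respondent-chosen growth rates
probs   = [ 0.10,  0.20, 0.40, 0.20, 0.10]   # respondent-chosen probabilities

mean = sum(p * x for p, x in zip(probs, support))       # first moment
var  = sum(p * (x - mean) ** 2 for p, x in zip(probs, support))
sd   = var ** 0.5                                       # second moment

print(f"expected growth {mean:.4f}, subjective uncertainty (sd) {sd:.4f}")
# expected growth 0.0290, subjective uncertainty (sd) 0.0643
```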

Abstract:

We consider several economic uncertainty indicators for the US and UK before and during the COVID-19 pandemic: implied stock market volatility, newspaper-based policy uncertainty, Twitter chatter about economic uncertainty, subjective uncertainty about business growth, forecaster disagreement about future GDP growth, and a model-based measure of macro uncertainty. Four results emerge. First, all indicators show huge uncertainty jumps in reaction to the pandemic and its economic fallout. Indeed, most indicators reach their highest values on record. Second, peak amplitudes differ greatly, from a 35% rise for the model-based measure of US economic uncertainty (relative to January 2020) to a 20-fold rise in forecaster disagreement about UK growth. Third, time paths also differ: implied volatility rose rapidly from late February, peaked in mid-March, and fell back by late March as stock prices began to recover. In contrast, broader measures of uncertainty peaked later and then plateaued, as job losses mounted, highlighting differences between Wall Street and Main Street uncertainty measures. Fourth, in Cholesky-identified VAR models fit to monthly U.S. data, a COVID-size uncertainty shock foreshadows peak drops in industrial production of 12-19%

Resumen:

Uno de los principales problemas en la optimización en general es el modelado, es decir obtener un modelo para posteriormente aplicar alguna técnica de optimización adecuada. Sin embargo, existen situaciones en las que el modelo no se presenta de manera explícita o encontrar el modelo es sumamente complicado. Para estos casos es necesario el uso de la simulación para conocer el desempeño de un sistema. En este trabajo se abordará el caso particular del mantenimiento preventivo a máquinas utilizando técnicas de optimización-simulación

Abstract:

One of the most important problems in optimization is modeling, that is, building a model and then applying a suitable technique to solve it. Unfortunately, there are several problems for which it is not easy to obtain a model, because building an explicit form is difficult. One tool for such situations is simulation, used as a black-box evaluator with controllable variables and uncontrollable random variables that determine the performance of a system. In this work we deal with the preventive maintenance of machines using simulation-optimization techniques
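
A minimal simulation-optimization loop of the kind described might look as follows; the exponential failure model, the cost figures, and the plain grid search are illustrative stand-ins for the techniques actually used.

```python
import random

random.seed(7)

def simulate_cost(interval, horizon=10_000, mtbf=300.0,
                  preventive=1.0, corrective=8.0, runs=200):
    """Black-box simulator: average maintenance cost per unit time when a
    machine is preventively serviced every `interval` hours. Failure times
    are exponential; all figures are illustrative assumptions."""
    total = 0.0
    for _ in range(runs):
        t, cost = 0.0, 0.0
        while t < horizon:
            failure = random.expovariate(1.0 / mtbf)
            if failure < interval:
                cost += corrective          # machine broke first
                t += failure
            else:
                cost += preventive          # planned service happened first
                t += interval
        total += cost / horizon
    return total / runs

# Optimize over the simulator: grid search stands in for a more
# sophisticated simulation-optimization technique.
best = min(range(50, 1001, 50), key=simulate_cost)
print("best preventive interval:", best, "hours")
```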

Abstract:

This paper proposes a tree-based incremental-learning model to estimate house prices using publicly available information on geography, city characteristics, transportation, and real estate for sale. Previous machine-learning models capture the marginal effects of property characteristics and location on prices using big datasets for training. In contrast, our scenario is constrained to small batches of data that become available on a daily basis; therefore our model learns from daily city data, employing incremental learning to provide accurate price estimations each day. Our results show that property prices are highly influenced by the city's characteristics and its connectivity, and that incremental models efficiently adapt to the nature of the house-pricing estimation task
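
Incremental (online) learning of this kind is supported by libraries such as river. The sketch below uses its Hoeffding tree regressor with made-up features and prices, purely to illustrate learning from small daily batches without retraining; it is not the paper's model or data.

```python
from river import tree

# One tree-based incremental learner (river's Hoeffding tree regressor).
model = tree.HoeffdingTreeRegressor()

# Hypothetical daily batches: (features, observed price) pairs.
daily_batches = [
    [({"area_m2": 80, "rooms": 2, "dist_metro_km": 1.2}, 120_000),
     ({"area_m2": 150, "rooms": 4, "dist_metro_km": 5.0}, 210_000)],
    [({"area_m2": 60, "rooms": 1, "dist_metro_km": 0.5}, 95_000)],
]

for batch in daily_batches:          # a new small batch arrives each day
    for x, y in batch:
        model.learn_one(x, y)        # update the model without retraining

print(model.predict_one({"area_m2": 100, "rooms": 3, "dist_metro_km": 2.0}))
```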

Abstract:

In journalism, innovation can be achieved by integrating various factors of change. This article reports the results of an investigation carried out in 2020; the study sample comprised journalists who participated as social researchers in a longitudinal study focused on citizens' perceptions of the Mexican electoral process in 2018. Journalistic innovation was promoted by the development of a novel methodology that combined existing journalistic resources and the use of qualitative social research methodologies. This combination provided depth to the journalistic coverage, optimized reporting, editing, design, and publication processes, improved access to more complete and diverse information in real time, and enhanced the capabilities of journalists. The latter transformed, through differential behaviors, their way of thinking about and valuing the profession by reconceptualizing and re-evaluating journalistic practices in which they were involved, resulting in an improvement in journalistic quality

Abstract:

Purpose: This research analyzes national identity representations held by Generation Z youth living in the United States-Mexico-Canada Agreement (USMCA) countries. In addition, it aims to identify the information on these issues that they are exposed to through social media. Methods: A qualitative approach carried out through in-depth interviews was selected for the study. The objective is to reconstruct social meaning and the social representation system. The constant comparative method was used for the information analysis, backed by the NVivo program. Findings: National identity perceptions of the adolescents interviewed are positive in terms of their own groups, very favorable regarding Canadians, and unfavorable vis-à-vis Americans. Furthermore, the interviewees agreed that social media have influenced their desire to travel or migrate and, for those considering migration, have also provided advice as to which country they might go to. On another point, Mexicans are quite familiar with the Agreement; Americans are split between those who know something about it and those who have no information whatsoever; whereas Canadians know nothing about it. This reflects a possible way to improve information generated and spread by social media. Practical implications: The results could improve our understanding of how young people interpret the information circulating in social media and what representations are constructed about national identities. We believe this research can be replicated in other countries. Social implications: We might consider that the representations Generation Z has about the national identities of these three countries and what it means to migrate could have an impact on the democratic life of each nation and, in turn, on the relationship among the three USMCA partners

Resumen:

Este artículo tiene como objetivo describir las etapas de transformación de la identidad social de los habitantes de la región de Los Altos de Jalisco, México, a partir de la Guerra Cristera y hasta la década de los años 90. El proceso se ha desarrollado en cuatro fases: Oposición (1926-1929), Ajuste (1930-1970), Reforzamiento (1970-1990) y Cambio (1990- ). Este análisis se realiza desde la teoría de la mediación social y presenta un avance de la investigación realizada para la tesis de doctorado Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, dirigida por Manuel Martín Serrano

Abstract:

This article aims to describe the stages of transformation of social identity in Los Altos de Jalisco, Mexico, from the Cristero War until the 1990s. The process developed in four phases: Opposition (1926-1929), Adjustment (1930-1970), Reinforcement (1970-1990) and Change (1990- ). This analysis is carried out within the framework of the theory of social mediation and presents partial results of the research conducted for the doctoral thesis Los mitos vivos de México: Identidad regional en Los Altos de Jalisco, supervised by Manuel Martín Serrano

Abstract:

Criollo identities in Mexico have been little studied. One of these identities took shape in Los Altos de Jalisco over four centuries and remained almost intact, even after the Cristero War that shook the region in 1926. Fundamentally, the Alteño identity is grounded in the community's perception of its Hispanic origin, as well as in a Catholicism deeply rooted and reinforced by the Cristeros. The family, a sacred institution for the Alteños, thus guards and preserves both the Hispanic and the Catholic heritage. Marriage founds the family and gives support, order, and stability to the social organization. With regard to courtship, the concept of virginity is central to the conformation of identity: women must preserve their purity so that the Alteño family can continue to uphold its values

Resumen:

¿Cómo se ven a sí mismos y a los otros los habitantes de una región de México? ¿Cómo se articulan las percepciones sobre las creencias del grupo y sus acciones? En este artículo se presenta un modelo para conceptualizar cómo está estructurada la identidad social. A partir de un estudio de caso, la investigación toma como centro los elementos de la identidad, a los que se definen como unidades de significación contenidas en una creencia o enunciado. Posteriormente se detalla el proceso de análisis a través del cual es posible entender su articulación y organización. A través de dos Mundos y dos Dimensiones que conforman la identidad social así como tres ejes que la configuran, el Espacio, el Tiempo y la Relación, este documento analiza el caso de los habitantes de Los Altos, Jalisco, México, con la encomienda de dotar de una primera herramienta para entender quiénes y cómo son, a decir de sí mismos, los habitantes de esta región

Abstract:

How do the individuals from a particular Mexican region see themselves and the rest of the inhabitants of the area? How are the perceptions about the group's beliefs and their actions articulated? This article presents a model aimed at conceptualizing how social identity is structured. Based on a case study, the research focuses on the elements of identity, defined as units of signification contained in a belief or statement. The paper also details the process of analysis through which it is possible to understand their articulation and organization. By means of the two Worlds and two Dimensions that make up social identity, as well as the three axes that shape it (Space, Time, and Relationship), this document analyzes the case of the inhabitants of Los Altos, Jalisco, Mexico, in order to provide a first tool for understanding who the inhabitants of this region are, and what they are like, according to themselves

Abstract:

We propose an algorithm for creating line graphs from binary images. The algorithm consists of a vectorizer followed by a line detector that can handle a large variety of binary images and is tolerant to noise. The proposed algorithm can accurately extract higher-level geometry from the images lending itself well to automatic image recognition tasks. Our algorithm revisits the technique of image polygonization proposing a very robust variant based on subpixel resolution and the construction of directed paths along the center of the border pixels where each pixel can correspond to multiple nodes along one path. The algorithm has been used in the areas of chemical structure and musical score recognition and is available for testing at www.docnition.com. Extensive testing of the algorithm against commercial and noncommercial methods has been conducted with favorable results

Resumen:

La relación entre la profesionalidad del periodismo y las normas éticas forman un vínculo importante para una actividad que se enfrenta a nuevos desafíos, que provienen tanto de la reconfiguración de las industrias mediáticas como de la crisis de identidad de la propia profesión. En este marco, se suma la revalorización de saber contar historias con contenidos informativos desde el empleo de herramientas y recursos literarios. Así, el periodismo narrativo potencia un periodismo comprometido, crítico y transformador que, en entornos transmedia, se enfrenta a nuevas preguntas: ¿qué implicaciones éticas tiene la participación activa de las audiencias, la generación de comunidades virtuales, el uso del lenguaje claro, la conjunción de diversos medios de comunicación y el manejo de datos e historias personales? En esta comunicación se presenta un análisis de esta problemática desde la Teoría de la Otredad, como esencia de una ética actual y solidaria que permite el entendimiento y aceptación del ser humano. Como línea central se describen los principales retos que se han detectado a partir de análisis de los principios del periodismo transmedia

Abstract:

The relationship between the professionalism of journalism and ethical standards is an important link for an activity that is facing new challenges, stemming both from the reconfiguration of media industries and from the identity crisis of the profession itself. In this context there is a renewed appreciation of knowing how to tell stories with informative content using literary tools and resources. Thus, narrative journalism strengthens a committed, critical and transformative journalism that, in transmedia environments, faces new questions: what are the ethical implications of the active participation of audiences, the creation of virtual communities, the use of plain language, the combination of different media, and the handling of personal data and stories? This paper presents an analysis of this problem from the perspective of the Theory of Otherness, as the essence of a contemporary ethics of solidarity that allows the understanding and acceptance of the human being. As a central thread, it describes the main challenges detected from the analysis of the principles of transmedia journalism

Abstract:

Business cycles in emerging economies display very volatile consumption and strongly countercyclical trade balance. We show that aggregate consumption in these economies is not more volatile than output once durables are accounted for. Then, we present and estimate a real business cycles model for a small open economy that accounts for this empirical observation. Our results show that the role of permanent shocks to aggregate productivity in explaining cyclical fluctuations in emerging economies is considerably lower than previously documented. Moreover, we find that financial frictions are crucial to explain some key business cycle properties of these economies

Abstract:

A function f from a domain in R3 to the quaternions is said to be inframonogenic if ∂f∂ = 0, where ∂ = ∂/∂x0 + (∂/∂x1)e1 + (∂/∂x2)e2. All inframonogenic functions are biharmonic. In the context of functions f = f0 + f1e1 + f2e2 taking values in the reduced quaternions, we show that the inframonogenic homogeneous polynomials of degree n form a subspace of dimension 6n + 3. We use the homogeneous polynomials to construct an explicit, computable orthogonal basis for the Hilbert space of square-integrable inframonogenic functions defined in the ball in R3
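
For clarity, the defining operator and the quoted dimension can be restated in standard notation; this only restates the abstract, with the symbol \(\mathcal{I}_n\) introduced here as a label for the space of reduced-quaternion-valued inframonogenic homogeneous polynomials of degree n.

```latex
\partial = \frac{\partial}{\partial x_0}
         + e_1\,\frac{\partial}{\partial x_1}
         + e_2\,\frac{\partial}{\partial x_2},
\qquad
f \ \text{inframonogenic} \iff \partial f \partial = 0,
\qquad
\dim \mathcal{I}_n = 6n + 3.
```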

Abstract:

We decompose traditional betas into semibetas based on the signed covariation between the returns of individual stocks in an international market and the returns of three risk factors: local, global, and foreign exchange. Using high-frequency data, we empirically assess stock return co-movements with these three risk factors and find novel relationships between these factors and future returns. Our analysis shows that only semibetas derived from negative risk factor and stock return downturns command significant risk premia. Global downside risk is negatively priced in the international market and local downside risk is positively priced
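 
For a single factor, signed-covariation semibetas can be computed as in the sketch below, in the spirit of the semibeta literature this abstract builds on; the paper's multi-factor, high-frequency construction is richer than this illustration, and the simulated returns are placeholders.

```python
import numpy as np

def semibetas(r, f):
    """Decompose a realized beta into four semibetas from the signed
    covariation of stock returns r and factor returns f."""
    r, f = np.asarray(r), np.asarray(f)
    denom = (f ** 2).sum()
    rp, rn = np.maximum(r, 0), np.minimum(r, 0)
    fp, fn = np.maximum(f, 0), np.minimum(f, 0)
    b_N  = (rn * fn).sum() / denom     # both stock and factor down
    b_P  = (rp * fp).sum() / denom     # both up
    b_Mp = -(rn * fp).sum() / denom    # stock down, factor up
    b_Mn = -(rp * fn).sum() / denom    # stock up, factor down
    return b_N, b_P, b_Mp, b_Mn

rng = np.random.default_rng(3)
f = rng.normal(0, 0.01, 390)                    # intraday factor returns
r = 0.8 * f + rng.normal(0, 0.005, 390)         # stock returns
b_N, b_P, b_Mp, b_Mn = semibetas(r, f)
print(round(b_N + b_P - b_Mp - b_Mn, 3))        # recovers the realized beta
```

The four pieces sum back (with signs) to the ordinary realized beta, which is what makes the decomposition exact and lets downside components be priced separately.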

Abstract:

We use intraday data to compute weekly realized variance, skewness, and kurtosis for equity returns and study the realized moments' time-series and cross-sectional properties. We investigate if this week's realized moments are informative for the cross-section of next week's stock returns. We find a very strong negative relationship between realized skewness and next week's stock returns. A trading strategy that buys stocks in the lowest realized skewness decile and sells stocks in the highest realized skewness decile generates an average weekly return of 19 basis points with a t-statistic of 3.70. Our results on realized skewness are robust across a wide variety of implementations, sample periods, portfolio weightings, and firm characteristics, and are not captured by the Fama-French and Carhart factors. We find some evidence that the relationship between realized kurtosis and next week's stock returns is positive, but the evidence is not always robust and statistically significant. We do not find a strong relationship between realized volatility and next week's stock returns
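
A common way to compute such realized moments from intraday returns is sketched below; scaling conventions vary across studies, so these formulas are indicative rather than the paper's definitive estimators, and the simulated returns are placeholders.

```python
import numpy as np

def realized_moments(r):
    """Weekly realized variance, skewness, and kurtosis from intraday
    returns r (one common scaling convention; others exist)."""
    r = np.asarray(r)
    n = len(r)
    rv = (r ** 2).sum()                              # realized variance
    rskew = np.sqrt(n) * (r ** 3).sum() / rv ** 1.5  # realized skewness
    rkurt = n * (r ** 4).sum() / rv ** 2             # realized kurtosis
    return rv, rskew, rkurt

# One week of 5-minute returns: 5 days x 78 intervals, fat-tailed.
r = np.random.default_rng(5).standard_t(df=5, size=5 * 78) * 0.002
print(realized_moments(r))
```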

Resumen:

En este artículo se propone que el humor constituye una forma de comunicación intrapersonal particularmente apta para la (auto) educación filosófica que se encuentra en el corazón de la práctica de la filosofía. Se explican los resultados epistemológicos y éticos de un uso sistemático de la risa autorreferencial. Se defienden los beneficios de una cosmovisión basada en el reconocimiento del ridículo humano, Homo risibilis, comparándolo con otros enfoques de la condición humana

Abstract:

This article presents humor as enacting a form of intrapersonal communication particularly apt for the philosophic (self-)education that lies at the heart of the practice of philosophy. It explains the epistemological and ethical outcomes of a systematic use of self-referential laughter. It argues for the benefits of a worldview predicated on acknowledging human ridiculousness, Homo risibilis, comparing it with other approaches to the human predicament

Abstract:

This paper examines the effects of noncontributory pension programs at the federal and state levels on Mexican households' saving patterns using micro data from the Mexican Income and Expenditure Survey. We find that the federal program curtails saving among households whose oldest member is either 18-54 or 65-69 years old, possibly through anticipation effects, a decrease in the longevity risk faced by households, and a redistribution of income between households of different generations. Specifically, these households appear to be reallocating income away from saving into human capital investments, like education and health. Generally, state programs have neither significant effects on household saving, nor does the combination of federal and state programs. Finally, with a few exceptions, noncontributory pensions have no significant impact on the saving of households with members 70 years of age or older-individuals eligible for those pensions, plausibly because of their dissaving stage in the lifecycle

Abstract:

This paper empirically investigates the determinants of the Internet and cellular phone penetration levels in a crosscountry setting. It offers a framework to explain differences in the use of information and communication technologies in terms of differences in the institutional environment and the resulting investment climate. Using three measures of the quality of the investment climate, Internet access is shown to depend strongly on the country’s institutional setting because fixed-line Internet investment is characterized by a high risk of state expropriation, given its considerable asset specificity. Mobile phone networks, on the other hand, are built on less site-specific, re-deployable modules, which make this technology less dependent on institutional characteristics. It is speculated that the existence of telecommunications technology that is less sensitive to the parameters of the institutional environment and, in particular, to poor investment protection provides an opportunity for better understanding of the constraints and prospects for economic development

Abstract:

We consider a restricted three-body problem on surfaces of constant curvature. As in the classical Newtonian case, the collision singularities occur when the position of the particle with infinitesimal mass coincides with the position of one of the primaries. We prove that the singularities due to collision can be regularized locally (each one separately) and globally (both at the same time) through the construction of Levi-Civita and Birkhoff type transformations, respectively. As an application we study some general properties of the Hill’s regions and we present some ejection-collision orbits for the symmetrical problem
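
For orientation, the classical planar Levi-Civita change of variables that such constructions adapt to the curved setting reads, with the singular primary placed at the origin of the complex plane and τ denoting the fictitious time (a textbook sketch, not the paper's curved-space construction):

\[
z = w^{2}, \qquad dt = |w|^{2}\, d\tau ,
\]

so a collision z → 0 is mapped to w → 0, where the transformed equations of motion extend smoothly; Birkhoff-type maps apply the same idea to both primaries simultaneously.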

Abstract:

We consider a symmetric restricted three-body problem on surfaces Mk2 of constant Gaussian curvature k ≠ 0, which can be reduced to the cases k = ±1. This problem consists in the analysis of the dynamics of an infinitesimal mass particle attracted by two primaries of identical masses describing elliptic relative equilibria of the two-body problem on Mk2, i.e., the primaries move on opposite sides of the same parallel of radius a. The Hamiltonian formulation of this problem is pointed out in intrinsic coordinates. The goal of this paper is to describe analytically important aspects of the global dynamics in both cases k = ±1 and to determine the main differences with the classical Newtonian circular restricted three-body problem. In this sense, we describe the number of equilibria and their linear stability depending on the bifurcation parameter, namely the radial parameter a. After that, we prove the existence of families of periodic orbits and KAM 2-tori related to these orbits

Abstract:

We classify and analyze the orbits of the Kepler problem on surfaces of constant curvature (both positive and negative, S2 and H2, respectively) as functions of the angular momentum and the energy. Hill's regions are characterized and the problem of time-collision is studied. We also regularize the problem in Cartesian and intrinsic coordinates, depending on the constant angular momentum, and we describe the orbits of the regularized vector field. The phase portraits both for S2 and H2 are pointed out
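
The problem referred to is the Kepler problem with the cotangent potential, which in geodesic polar coordinates takes the usual form (a standard convention in this literature; the paper's normalization may differ), with μ > 0 a mass parameter and r the geodesic distance from the attracting center:

\[
U(r) = -\,\mu \cot r \quad \text{on } S^{2}, \qquad
U(r) = -\,\mu \coth r \quad \text{on } H^{2},
\]

with a collision singularity at r = 0 and, on S², a second singularity at the antipodal point r = π.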

Abstract:

We consider a setup in which a principal must decide whether or not to legalize a socially undesirable activity. The law is enforced by a monitor who may be bribed to conceal evidence of the offense and who may also engage in extortionary practices. The principal may legalize the activity even if it is a very harmful one. The principal may also declare the activity illegal knowing that the monitor will abuse the law to extract bribes out of innocent people. Our model offers a novel rationale for legalizing possession and consumption of drugs while continuing to prosecute drug dealers

Abstract:

In this paper we initiate the study of the forward and backward shifts on the discrete generalized Hardy space of a tree and the discrete generalized little Hardy space of a tree. In particular, we investigate when these shifts are bounded, find the norm of the shifts if they are bounded, characterize the trees in which they are an isometry, compute the spectrum in some concrete examples, and completely determine when they are hypercyclic

Abstract:

We study a channel through which inflation can have effects on the real economy. Using job creation and destruction data from U.S. manufacturing establishments for 1973-1988, we show that both jobs created by new establishments and jobs destroyed by dying establishments are negatively correlated with inflation. These results are robust to controls for the real business cycle and monetary policy. Over a longer time frame, data on business failures confirm our results obtained from job creation and destruction data. We discuss how the interaction of inflation with financial markets, nominal-wage rigidities, and imperfect competition could explain the empirical evidence

Abstract:

We study how discount window policy affects the frequency of banking crises, the level of investment, and the scope for indeterminacy of equilibrium. Previous work has shown that providing costless liquidity through a discount window has mixed effects in terms of these criteria: It prevents episodes of high liquidity demand from causing crises but can lead to indeterminacy of stationary equilibrium and to inefficiently low levels of investment. We show how offering discount window loans at an above-market interest rate can be unambiguously beneficial. Such a policy generates a unique stationary equilibrium. Banking crises occur with positive probability in this equilibrium and the level of investment is suboptimal, but a proper combination of discount window and monetary policies can make the welfare effects of these inefficiencies arbitrarily small. The near-optimal policies can be viewed as approximately implementing the Friedman rule

Abstract:

We investigate the dependence of the dynamic behavior of an endogenous growth model on the degree of returns to scale. We focus on a simple (but representative) growth model with publicly funded inventive activity. We show that constant returns to reproducible factors (the leading case in the endogenous growth literature) is a bifurcation point, and that it has the characteristics of a transcritical bifurcation. The bifurcation involves the boundary of the state space, making it difficult to formally verify this classification. For a special case, we provide a transformation that allows formal classification by existing methods. We discuss the new methods that would be needed for formal verification of transcriticality in a broader class of models
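
For reference, the transcritical bifurcation the authors identify has the one-dimensional normal form (textbook form, included only for orientation):

\[
\dot{x} = x(\mu - x),
\]

whose equilibria x = 0 and x = μ exchange stability as μ crosses zero; in the growth model the exchange happens on the boundary of the state space, which is why the standard classification machinery does not directly apply.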

Abstract:

We evaluate the desirability of having an elastic currency generated by a lender of last resort that prints money and lends it to banks in distress. When banks cannot borrow, the economy has a unique equilibrium that is not Pareto optimal. The introduction of unlimited borrowing at a zero nominal interest rate generates a steady state equilibrium that is Pareto optimal. However, this policy is destabilizing in the sense that it also introduces a continuum of nonoptimal inflationary equilibria. We explore two alternate policies aimed at eliminating such monetary instability while preserving the steady-state benefits of an elastic currency. If the lender of last resort imposes an upper bound on borrowing that is low enough, no inflationary equilibria can arise. For some (but not all) economies, the unique equilibrium under this policy is Pareto optimal. If the lender of last resort instead charges a zero real interest rate, no inflationary equilibria can arise. The unique equilibrium in this case is always Pareto optimal

Abstract:

We consider the nature of the relationship between the real exchange rate and capital formation. We present a model of a small open economy that produces and consumes two goods, one tradable and one not. Domestic residents can borrow and lend abroad, and costly state verification (CSV) is a source of frictions in domestic credit markets. The real exchange rate matters for capital accumulation because it affects the potential for investors to provide internal finance, which mitigates the CSV problem. We demonstrate that the real exchange rate must monotonically approach its steady state level. However, capital accumulation need not be monotonic and real exchange rate appreciation can be associated with either a rising or a falling capital stock. The relationship between world financial market conditions and the real exchange rate is also investigated

Abstract:

In the Mexican elections, the quick count consists of selecting a random sample of polling stations to forecast the election results. Its main challenge is that the estimation is done with incomplete samples, where the missingness is not at random. We present one of the statistical models used in the quick count of the gubernatorial elections of 2021. The model is a negative binomial regression with a hierarchical structure. The prior distributions are thoroughly tested for consistency. Also, we present a fitting procedure with an adjustment for bias, capable of running in less than 5 minutes. The model yields probability intervals with approximately 95% coverage, even with certain patterns of biased samples observed in previous elections. Furthermore, the robustness of the negative binomial distribution translates to robustness in the model, which fits both large and small candidates well, and provides an additional layer of protection when there are database errors
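
As a rough illustration of the model's core, here is a maximum-likelihood sketch of a negative binomial regression with a log link; the paper's actual model is Bayesian and hierarchical, with tested priors and a bias adjustment, and the data below are simulated placeholders:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Hypothetical data: votes y for one candidate at n sampled polling
# stations, with a design matrix X of station-level covariates.
rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(1.5 + 0.4 * X[:, 1]))

def negbin_nll(params):
    """Negative log-likelihood of a NB regression with log link."""
    beta, log_alpha = params[:-1], params[-1]
    mu = np.exp(X @ beta)          # mean, log link
    r = 1.0 / np.exp(log_alpha)    # inverse overdispersion
    return -np.sum(gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                   + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))

fit = minimize(negbin_nll, x0=np.zeros(X.shape[1] + 1), method="BFGS")
print(fit.x)   # [beta_0, beta_1, log alpha]
```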

Abstract:

Quick counts based on probabilistic samples are powerful methods for monitoring election processes. However, the complete designed samples are rarely collected in time to publish the results in a timely manner. Hence, the results are announced using partial samples, which have biases associated with the arrival pattern of the information. In this paper, we present a Bayesian hierarchical model to produce estimates for the Mexican gubernatorial elections. The model considers the polling stations poststratified by demographic, geographic, and other covariates. As a result, it provides a principled means of controlling for biases associated with such covariates. We compare methods through simulation exercises and apply our proposal in the July 2018 elections for governor in certain states. Our studies find the proposal to be more robust than the classical ratio estimator and other estimators that have been used for this purpose
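
The contrast with the classical ratio estimator can be sketched numerically. Assuming hypothetical strata, stations, and population weights, the toy computation below shows the two point estimates being compared (the full model adds a hierarchical likelihood and uncertainty quantification):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 300                                  # sampled polling stations
strata = rng.integers(0, 3, size=m)      # 3 poststrata (hypothetical)
p_true = np.array([0.35, 0.50, 0.60])    # support by stratum
total = rng.integers(200, 700, size=m)   # total votes per station
votes = rng.binomial(total, p_true[strata])

# Classical ratio estimator: pools all stations, ignores strata.
ratio_hat = votes.sum() / total.sum()

# Poststratified estimator: per-stratum ratios weighted by the known
# population share w[s] of total votes in each stratum (assumed here).
w = np.array([0.2, 0.5, 0.3])
post_hat = sum(w[s] * votes[strata == s].sum() / total[strata == s].sum()
               for s in range(3))
print(ratio_hat, post_hat)
```

When arrival of the sample is biased toward some strata, the pooled ratio drifts with the arrivals while the poststratified estimate stays anchored to the known weights, which is the bias-control idea described above.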

Abstract:

Despite the rapid change in cellular technologies, Mobile Network Operators (MNOs) keep a high percentage of their deployed infrastructure using Global System for Mobile communications (GSM) technologies. With about 3.5 billion subscribers, GSM remains the de facto standard for cellular communications. However, the security criteria envisioned 30 years ago, when the standard was designed, are no longer sufficient to ensure the security and privacy of the users. Furthermore, even with the newest fourth generation (4G) cellular technologies starting to be deployed, these networks could never achieve strong security guarantees because the MNOs keep backwards-compatibility given the huge number of GSM subscribers. In this paper, we present and describe the tools and necessary steps to perform an active attack against a GSM-compatible network, exploiting the GSM protocol's lack of mutual authentication between the subscribers and the network. The attack consists of a so-called man-in-the-middle attack implementation. Using Software Defined Radio (SDR), open-source libraries and open-source hardware, we set up a fake GSM base station to impersonate the network and thereby eavesdrop on any communications routed through it and extract information from the victims. Finally, we point out some implications of the protocol vulnerabilities and why they cannot be mitigated in the short term, since 4G deployments will take a long time to entirely replace the current GSM infrastructure

Abstract:

This study investigates whether incidental electoral successes of women contribute to sustained gender parity in Finland, a leader in gender equality. Utilizing election lotteries used to resolve ties in vote counts, we estimate the causal effect of female representation. Our findings indicate that women's electoral performance (measured by female seat and vote shares) within parties improves following a lottery win, potentially due to increased exposure. However, these gains are offset by negative spillovers on female candidates from other parties. One reason why this may occur is that voters and parties may perceive sufficient female representation has been achieved, leading to reduced support for female candidates in other parties. Consequently, marginal increases in female representation do not translate to overall gains in women elected. In high but uneven gender parity contexts, such increases may not be self-reinforcing, highlighting the complexity of achieving sustained gender equality in politics

Abstract:

It is shown in the paper that the problem of speed observation for mechanical systems that are partially linearisable via coordinate changes admits a very simple and robust (exponentially stable) solution with a Luenberger-like observer. This result should be contrasted with the very complicated observers based on immersion and invariance reported in the literature. A second contribution of the paper is to compare, via realistic simulations and highly detailed experiments, the performance of the proposed observer with well-known high-gain and sliding mode observers, and in particular to show that, due to their high sensitivity to noise, which is unavoidable in mechanical systems applications, the performance of the two latter designs is well below par
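
To illustrate the flavor of the result, a minimal Luenberger-style speed observer for the simplest mechanical system, a double integrator with measured position, can be simulated in a few lines; the gains and signals here are illustrative placeholders, not the paper's general partially linearisable construction:

```python
import numpy as np

dt, T = 1e-3, 5.0
l1, l2 = 20.0, 100.0        # observer gains (poles at -10, -10)
q, v = 0.0, 1.0             # true position / speed
qh, vh = 0.0, 0.0           # observer estimates, deliberately wrong
for k in range(int(T / dt)):
    u = np.sin(2 * np.pi * 0.5 * k * dt)   # known control input
    # plant (unit mass)
    q += dt * v
    v += dt * u
    # observer: model copy plus output injection of the position error
    e = q - qh
    qh += dt * (vh + l1 * e)
    vh += dt * (u + l2 * e)
print(abs(v - vh))   # speed estimation error decays exponentially
```

The error dynamics are linear with characteristic polynomial s² + l1 s + l2, so the convergence rate is set directly by the gains, which is what makes this class of observers simple and robust.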

Abstract:

This paper deals with the problem of control of partially known nonlinear systems that have an open-loop stable equilibrium, but to which we would like to add a PI controller to regulate the behavior around another operating point. Our main contribution is the identification of a class of systems for which a globally stable PI can be designed knowing only the system's input matrix and measuring only the actuated coordinates. The construction of the PI is carried out invoking passivity theory. The difficulties encountered in the design of adaptive PI controllers with the existing theoretical tools are also discussed. As an illustration of the theory, we consider a class of thermal processes
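
A toy instance of the setup, not the paper's passivity-based construction: a scalar open-loop-stable nonlinear system regulated to a new operating point by a PI acting on the measured actuated coordinate (all numbers are illustrative):

```python
# Scalar system xdot = -x**3 - x + u: open-loop stable at x = 0,
# regulated by a PI to a different operating point x* = 1.
dt, T = 1e-3, 20.0
kp, ki = 2.0, 1.0
x, xi = 0.0, 0.0            # state and integrator
x_star = 1.0
for _ in range(int(T / dt)):
    e = x_star - x
    xi += dt * e            # integral action supplies the steady input
    u = kp * e + ki * xi    # PI law
    x += dt * (-x**3 - x + u)
print(x)                    # converges to x* = 1
```

At equilibrium the integrator holds u* = x*³ + x*, the constant input needed to sustain the new operating point, which is the role the passivity argument formalizes for the general class.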

Abstract:

This paper deals with the problem of control of partially known nonlinear systems that have an open-loop stable equilibrium, but to which we would like to add a PI controller to regulate the behavior around another operating point. Our main contribution is the identification of a class of systems for which a globally stable PI can be designed knowing only the system's input matrix and measuring only the actuated coordinates. The construction of the PI is done by invoking passivity theory. The difficulties encountered in the design of adaptive PI controllers with the existing theoretical tools are also discussed. As an illustration of the theory, we consider port-Hamiltonian systems and a class of thermal processes

Abstract:

We formulate a p-median facility location model with a queuing approximation to determine the optimal locations of a given number of dispensing sites (Points of Dispensing, PODs) from a predetermined set of possible locations and the optimal allocation of staff to the selected locations. Specific to an anthrax attack, dispensing operations should be completed in 48 hours to cover all exposed and possibly exposed people. A nonlinear integer programming model is developed to determine the optimal locations of facilities and the appropriate facility deployment strategies, including the number of servers with different skills to be allocated to each open facility. The objective of the mathematical model is to minimize the average transportation and waiting times of individuals to receive the required service. Waiting time performance measures are approximated with a queuing formula, and these waiting times at PODs are incorporated into the p-median facility location model. A genetic algorithm is developed to solve this problem. Our computational results show that appropriate locations of these facilities can significantly decrease the average time for individuals to receive services. Consideration of demographics and allocation of the staff decreases waiting times in PODs and increases the throughput of PODs. When the number of PODs to open is high, the right staffing at each facility decreases the average waiting times significantly. The results presented in this paper can help public health decision makers make better planning and resource allocation decisions based on the demographic needs of the affected population
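
The waiting-time term embedded in such models is typically of the M/M/s steady-state type. A small Erlang C helper, under assumed arrival and service rates, shows the quantity the location model trades off against travel time (the paper's exact formula and parameters may differ):

```python
import math

def erlang_c_wait(lam, mu, s):
    """Expected queue wait W_q in an M/M/s system (Erlang C formula).
    lam: arrival rate, mu: service rate per server, s: servers."""
    rho = lam / (s * mu)
    if rho >= 1.0:
        return math.inf                    # unstable: more staff needed
    a = lam / mu
    summ = sum(a**k / math.factorial(k) for k in range(s))
    p_wait = (a**s / math.factorial(s)) / (
        (1 - rho) * summ + a**s / math.factorial(s))
    return p_wait / (s * mu - lam)         # W_q

# e.g. a POD receiving 50 people/hr with 6 staff serving 10/hr each
print(erlang_c_wait(50, 10, 6))
```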

Abstract:

Robust statistical data modelling under potential model mis-specification often requires leaving the parametric world for the nonparametric. In the latter, parameters are infinite dimensional objects such as functions, probability distributions or infinite vectors. In the Bayesian nonparametric approach, prior distributions are designed for these parameters, which provide a handle to manage the complexity of nonparametric models in practice. However, most modern Bayesian nonparametric models often seem out of reach to practitioners, as inference algorithms need careful design to deal with the infinite number of parameters. The aim of this work is to facilitate the journey by providing computational tools for Bayesian nonparametric inference. The article describes a set of functions available in the R package BNPdensity for carrying out density estimation with an infinite mixture model, including all types of censored data. The package provides access to a large class of such models based on normalised random measures, which represent a generalisation of the popular Dirichlet process mixture. One striking advantage of this generalisation is that it offers much more robust priors on the number of clusters than the Dirichlet. Another crucial advantage is the complete flexibility in specifying the prior for the scale and location parameters of the clusters, because conjugacy is not required. Inference is performed using a theoretically grounded approximate sampling methodology known as the Ferguson & Klass algorithm. The package also offers several goodness-of-fit diagnostics such as QQ plots, including a cross-validation criterion, the conditional predictive ordinate. The proposed methodology is illustrated on a classical ecological risk assessment method called the species sensitivity distribution problem, showcasing the benefits of the Bayesian nonparametric framework

Abstract:

This paper focuses on model selection, specification and estimation of a global asset return model within an asset allocation and asset and liability management framework. The development departs from a single currency capital market model with four state variables: stock index, short and long term interest rates and currency exchange rates. The model is then extended to the major currency areas, United States, United Kingdom, European Union and Japan, and to include a US economic model containing GDP, inflation, wages and government borrowing requirements affecting the US capital market variables. In addition, we develop variables representing emerging market stock and bond indices. In the largest extension we treat a four currency capital markets model and US, UK, EU and Japan macroeconomic variables. The system models are estimated with seemingly unrelated regression estimation (SURE) and generalised autoregressive conditional heteroscedasticity (GARCH) techniques. Simulation, impulse response and forecasting performance are discussed in order to analyse the dynamics of the models developed
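
For reference, the GARCH techniques mentioned build on the univariate GARCH(1,1) recursion (textbook form; the paper's system is multivariate and estimated jointly with SURE), where r_t is the return, ε_t the shock, and σ_t² the conditional variance:

\[
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t,\; z_t \sim \mathcal{N}(0,1), \qquad
\sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}.
\]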

Abstract:

Gender stereotypes, the assumptions concerning appropriate social roles for men and women, permeate the labor market. Analyzing information from over 2.5 million job advertisements on three different employment search websites in Mexico, exploiting approximately 235,000 that are explicitly gender-targeted, we find evidence that advertisements seeking "communal" characteristics, stereotypically associated with women, specify lower salaries than those seeking "agentic" characteristics, stereotypically associated with men. Given the use of gender-targeted advertisements in Mexico, we use a random forest algorithm to predict whether non-targeted ads are in fact directed toward men or women, based on the language they use. We find that the non-targeted ads for which we predict gender show larger salary gaps (8–35 percent) than explicitly gender-targeted ads (0–13 percent). If women are segregated into occupations deemed appropriate for their gender, this pay gap between jobs requiring communal versus agentic characteristics translates into a gender pay gap in the labor market
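
The prediction step can be sketched with a standard text-classification pipeline. Everything below (ad wording, features, hyperparameters) is a hypothetical stand-in for the study's random forest trained on the explicitly gender-targeted ads:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy explicitly gender-targeted ads (hypothetical wording) and the
# gender they target, used as training labels.
targeted_ads = ["se solicita señorita para atención al cliente",
                "chofer responsable con experiencia",
                "recepcionista amable y ordenada",
                "mecánico con fuerza y liderazgo"]
target_gender = ["f", "m", "f", "m"]

clf = make_pipeline(TfidfVectorizer(),
                    RandomForestClassifier(n_estimators=200,
                                           random_state=0))
clf.fit(targeted_ads, target_gender)

# Predict the implied gender of a non-targeted ad from its language.
print(clf.predict(["buscamos persona proactiva con liderazgo"]))
```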

Abstract:

This paper analyses the contract between an entrepreneur and an investor, using a non-zero-sum game in which the entrepreneur is interested in company survival and the investor in maximizing expected net present value. Theoretical results are given and the model's usefulness is exemplified using simulations. We observe that both the entrepreneur and the investor are better off under a contract which involves repayments and a share of the start-up company. We also observe that the entrepreneur will choose riskier actions as the repayments become harder to meet, up to a level where the company is no longer able to survive

Abstract:

We consider the problem of managing inventory and production capacity in a start-up manufacturing firm with the objective of maximising the probability of the firm surviving as well as the more common objective of maximising profit. Using Markov decision process models, we characterise and compare the form of optimal policies under the two objectives. This analysis shows the importance of coordination in the management of inventory and production capacity. The analysis also reveals that a start-up firm seeking to maximise its chance of survival will often choose to keep production capacity significantly below the profit-maximising level for a considerable time. This insight helps us to explain the seemingly cautious policies adopted by a real start-up manufacturing firm
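
The survival objective can be illustrated with a small finite-horizon dynamic program. All numbers below are hypothetical and the paper's Markov decision process models are richer (capacity as well as inventory), but the sketch shows how a survival probability is propagated backwards:

```python
import numpy as np

# Toy start-up: capital c in {0..C}; each week pay overhead 1 and
# choose production q (unit cost 1, q + 1 <= c); demand d is uniform
# on {0,1,2}; each sale earns 2. Capital 0 means the firm is dead.
C, T = 20, 52
V = np.zeros((T + 1, C + 1))   # V[t, c] = P(survive to T | capital c)
V[T, 1:] = 1.0                 # alive at the horizon
for t in range(T - 1, -1, -1):
    for c in range(1, C + 1):
        best = 0.0
        for q in range(min(2, c - 1) + 1):
            vals = []
            for d in (0, 1, 2):
                nxt = min(C, c - 1 - q + 2 * min(q, d))
                vals.append(V[t + 1, nxt] if nxt > 0 else 0.0)
            best = max(best, sum(vals) / 3)
        V[t, c] = best
print(V[0, 5])   # survival probability from capital 5, optimal play
```

Replacing the terminal condition with discounted profit turns the same recursion into the profit-maximising benchmark, which is the comparison of policies the paper develops.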

Abstract:

Start-up companies are considered an important factor in the success of a nation’s economy. We are interested in the decisions for long-term survival of these firms when they have considerable cash restrictions. In this paper we analyse several inventory control models to manage inventory purchasing and return policies. The Markov decision models are formulated for both established companies that look at maximising average profit and start-up companies that look at maximising their long-term survival probability. We contrast both objectives, and present properties of the policies and the survival probabilities. We find that start-up companies may need to be riskier if the return price is very low, but there is a period where a start-up firm becomes more cautious than an established company and there is a point, as it accumulates capital, where it starts behaving as an established firm. We compare the various models and give conditions under which their policies are equivalent

Abstract:

Biopolymer films (biofilms) were evaluated for suitability in meat packaging applications. Polyesters, proteins, and carbohydrates are among the most commonly utilized materials for producing bioplastics due to their mechanical properties, which closely resemble those of conventional plastics. However, these properties are significantly influenced by the film's composition, molecular weight, solvent type, pH, component concentration, and processing temperature. Conventional plastic films, including microplastics, are non-bioactive and contribute to persistent pollution. This underscores the importance of developing novel materials that incorporate bioactive plant compounds to endow plastic films with antimicrobial and antioxidant functionalities. In this study, two gelatin-chitosan films were fabricated: one without any extract and another incorporating guava leaf extract. These films were characterized to determine their optical, mechanical, morphological, and thermal properties. Additionally, a microbiological analysis was conducted to assess the impact of polymeric biofilm packaging on the shelf life of beef. Both films exhibited favorable tensile strength values (24.74–27.12 MPa), high transparency (0.79–1.18), and effective barrier properties against water vapor (8.95–9.29 × 10⁻⁸ g·mm·Pa⁻¹·h⁻¹·m⁻²) and oxygen (9.7–9.10 × 10⁻¹⁸ m³·m⁻²·s⁻¹·Pa⁻¹). During storage, the biofilm containing guava leaf extract reduced microbial growth and enhanced the beef's color stability. In conclusion, biopolymeric films incorporating guava leaf extract demonstrated notable antioxidant and antimicrobial properties, effectively inhibiting microbial growth, preserving the physical quality of beef, and significantly extending its shelf life under refrigerated conditions

Abstract:

A recent cross-cultural study suggests employees may be classified, based on their scores on a measure of work ethic, into three profiles labeled as "live to work," "work to live," and "work as a necessary evil." The present study assesses whether these profiles were stable before and after an extended lockdown that forced employees to work from home for 2 years because of the COVID-19 pandemic. To assess our core research question, we conducted a longitudinal study with employees of a company in the financial sector, collecting data in two waves: February 2020 (n = 692) and June 2022 (n = 598). Tests of profile similarity indicated a robust structural and configural equivalence of the profiles before and after the lockdown. As expected, the prolonged pandemic-based lockdown had a significant effect on the proportion of individuals in each profile. Implications for leading and managing in a post-pandemic workforce are presented and discussed

Abstract:

This study examines the impact of a specific training intervention on both individual- and unit-level outcomes. We sought to examine the extent to which a training intervention incorporating key elements of error management training: (1) positively impacted sales specific self-efficacy beliefs of trainees; and (2) positively impacted unit-level sales growth over time. Results of an 11-week longitudinal field experiment across 19 stores in a national bakery chain indicated that the sales self-efficacy of trainees significantly increased between the levels they had 2 weeks before the intervention started and 4 weeks after it was initiated. Results based on a repeated measures ANOVA also indicated significantly higher sales performance in the intervention group compared with a non-intervention control group. We also sought to address the extent to which individual-level effects may be linked to the organizational level. We also provide evidence with respect to the extent to which changes in individual self-efficacy were associated with unit-level sales performance. Results confirmed this multi-level effect as evidenced by a moderate significant correlation between the average self-efficacy of the staff of each store and its sales performance across the weeks the intervention was in effect. The study contributes to the existing literature by providing direct evidence of the impact of an HRD intervention at multiple organizational levels

Abstract:

Despite the acceptance of work ethic as an important individual difference, little research has examined the extent to which work ethic may reflect shared environmental or socio-economic factors. This research addresses this concern by examining the influence of geographic proximity on the work ethic experienced by 254 employees from Mexico, working in 11 different cities in the Northern, Central and Southern regions of the country. Using a sequence of complementary analyses to assess the main source of variance on seven dimensions of work ethic, our results indicate that work ethic is most appropriately considered at the individual level

Abstract:

This paper explores the relationship between individual work values and unethical decision-making and actual behavior at work through two complementary studies. Specifically, we use a robust and comprehensive model of individual work values to predict unethical decision-making in a sample of working professionals and accounting students enrolled in ethics courses, and IT employees working in sales and customer service. Study 1 demonstrates that young professionals who rate power as a relatively important value (i.e., those reporting high levels of the self-enhancement value) are more likely to violate professional conduct guidelines despite receiving training regarding ethical professional principles. Study 2, which examines a group of employees from an IT firm, demonstrates that those rating power as an important value are more likely to engage in non-work-related computing (i.e., cyberloafing) even when they are aware of a monitoring software that tracks their computer usage and an explicit policy prohibiting the use of these computers for personal reasons

Abstract:

This panel study, conducted in a large Venezuelan organization, took advantage of a serendipitous opportunity to examine the organizational commitment profiles of employees before and after a series of dramatic, and unexpected, political events directed specifically at the organization. Two waves of organizational commitment data were collected, 6 months apart, from a sample of 152 employees. No evidence was found that employees' continuance commitment to the organization was altered by the events described here. Interestingly, however, both affective and normative commitment increased significantly during the period of the study. Further, employee's commitment profiles at Wave 2 were more differentiated than they were at Wave 1

Abstract:

This study, based in a manufacturing plant in Venezuela, examines the relationship between perceived task characteristics, psychological empowerment and commitment, using a questionnaire survey of 313 employees. The objective of the study was to assess the effects of an organizational intervention at the plant aimed at increasing productivity by providing performance feedback on key aspects of its daily operations. It was hypothesized that perceived characteristics of the task environment, such as task meaningfulness and task feedback, will enhance psychological empowerment, which in turn will have a positive impact on employee commitment. Test of a structural model revealed that the relationship of task meaningfulness and task feedback with affective commitment was partially mediated by the empowerment dimensions of perceived control and goal internalization. The results highlight the role of goal internalization as a key mediating mechanism between job characteristics and affective commitment. The study also validates a Spanish-language version of the psychological empowerment scale by Menon (2001)

Resumen:

A pesar de la extensa validación transcultural del modelo de compromiso organizacional de Meyer y Allen (1991), han surgido ciertas dudas respecto a la independencia de los componentes afectivo y normativo y, también, sobre la unidimensionalidad de este último. Este estudio analiza la estabilidad de la estructura del modelo y examina el comportamiento de la escala normativa, empleando 100 muestras, de 250 sujetos cada una, extraídas aleatoriamente de una base de datos de 4.689 empleados. Los resultados muestran cierta estabilidad del modelo, y apoyan parcialmente a la corriente que propone el desdoblamiento del componente normativo en dos subdimensiones: el deber moral y el sentimiento de deuda moral

Abstract:

Although there has been extensive cross-cultural validation of Meyer and Allen’s (1991) model of organizational commitment, some doubts have emerged concerning both the independence of the affective and normative components, and the unidimensionality of the latter. This study focuses on analyzing the stability of the model’s structure, and on examining the behaviour of the normative scale. For this purpose, we employed 100 samples of 250 subjects each, extracted randomly from a database of 4,689 employees. The results show certain stability of the model, and partially support research suggesting the unfolding of the normative component into two subdimensions: one related to a moral duty, and the other to a sense of indebtedness

Abstract:

In recent years there has been an increasing interest among researchers and practitioners in analyzing what makes a firm attractive in the eyes of university students, and whether individual differences such as personality traits have an impact on this general affect towards a particular organization. The main goal of the present research is to demonstrate that a recently conceptualized narrow personality trait named dispositional resistance to change (RTC), that is, the inherent tendency of individuals to avoid and oppose changes (Oreg, 2003), can predict the organizational attraction of university students to firms that are perceived as innovative or conservative. Three complementary studies were carried out using a total sample of 443 college students from Mexico. In addition to validating the hypotheses, our findings suggest that, as the formation of the images of organizations in students’ minds occurs through social cognition, simple stimuli such as physical artifacts, when used in an isolated manner, do not have a significant impact on organizational attraction

Abstract:

The Work Values Scale EVAT (based on its initials in Spanish: Escala de Valores hacia el Trabajo) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals, reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian. Our results suggest that the original Spanish version of the EVAT scale and the new Portuguese and Italian versions are equivalent

Abstract:

The authors examined the validity of the Spanish-language version of the dispositional resistance to change (RTC) scale. First, the structural validity of the new questionnaire was evaluated using a nested sequence of confirmatory factor analyses. Second, the external validity of the questionnaire was assessed using the four higher-order values of Schwartz's theory and the four dimensions of the RTC scale: routine seeking, emotional reaction, short-term focus and cognitive rigidity. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed both the construct structure and the external validity of the questionnaire

Abstract:

The authors examined the convergent validity of the four dimensions of the Resistance to Change scale (RTC): routine seeking, emotional reaction, short-term focus and cognitive rigidity, and the four higher-order values of Schwartz’s theory, using a nested sequence of confirmatory factor analyses. A sample of 553 undergraduate students from Mexico and Spain was used in the analyses. The results confirmed the external validity of the questionnaire

Resumen:

Este estudio analiza el impacto de la diversidad de valores entre los integrantes de los equipos sobre un conjunto de variables de proceso, así como sobre dos tareas con diferentes demandas de interacción social. En particular, se analiza el efecto de la diversidad de valores sobre el conflicto en la tarea y en las relaciones, la cohesión y la autoeficacia grupal. Utilizando un simulador de trabajo en equipo y una muestra de 22 equipos de entre cinco y siete individuos, se comprobó que la diversidad en valores en un equipo, influye de forma directa sobre las variables de proceso y sobre la tarea que demanda baja interacción social, la relación entre la diversidad de valores sobre el desempeño, se ve mediada por las variables de proceso. Se proponen algunas acciones que permitirían poner en práctica los resultados de esta investigación en el contexto organizacional

Abstract:

This study investigates the impact of value diversity among team members on team process and performance criteria on two tasks of differing social interaction demands. Specifically, the criteria of interest included task conflict, relationship conflict, cohesion, and team efficacy, and task performance on two tasks demanding different levels of social interaction. Utilizing a teamwork simulator and a sample comprised of 22 teams of five to seven individuals, it was demonstrated that value diversity directly impacts both task performance and process criteria on the task demanding low social interaction. Meanwhile, in the task requiring high social interaction, value diversity related to task performance via the mediating effects of team processes. Some specific actions are proposed in order to apply the results of this research in the daily context of organizations

Abstract:

The Work Values Scale EVAT (based on its initials in Spanish) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order values of the Schwartz theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals (Arciniega & González, 2006, 2005; González & Arciniega, 2005), reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian, using a new technique of measurement equivalence: confirmatory multidimensional scaling (CMDS). Our results suggest that CMDS is a serviceable technique for assessing measurement equivalence, but requires improvements to provide precise fit indices

Abstract:

We examine the impact of team member value and personality diversity on team processes and performance. The research is divided into two studies. First, we examine the impact of personality and value diversity on team performance, relationship and task conflict, cohesion, and team self-efficacy. Second, we evaluate the effect of team members’ value diversity on team performance in two different types of tasks, one cognitive and the other complex. In general, our results suggest that higher levels of diversity with respect to values were associated with lower levels of team process variables. Also, as expected, we found that the influence of team value diversity is greater on a cognitive task than on a complex one

Abstract:

Considering the propositions of Simon (1990; 1993) and Korsgaard and collaborators (1997) that an individual who assigns priority to values related to altruism tends to pay less attention to evaluating personal costs and benefits when processing social information, as well as the basic premise of job satisfaction that this attitude is centered on a cognitive process of evaluating how specific conditions or outcomes in a job fulfill the needs and values of a person, we proposed that individuals who score higher on values associated with altruism will reveal higher scores on all specific facets of job satisfaction than those who score lower. A sample of 3,201 Mexican employees, living in 11 cities and working for 30 different companies belonging to the same holding, was used in this study. The results of the research clearly support the central hypothesis

Abstract:

Some reviews have shown how different attitudes and demographic and organizational variables generate organizational commitment. Few studies have reported how work values and organizational factors create organizational commitment. This investigation is an attempt to explore the influence that both sets of variables have on organizational commitment. Using the four higher-order values proposed by Schwartz (1992) to operationalize the construct of work values, we evaluated the influence of these work values on the development of organizational commitment, in comparison with four facets of work satisfaction and four organizational factors: empowerment, knowledge of organizational goals, and training and communication practices. A sample of 982 employees from eight companies of Northeastern Mexico was used in this study. Our findings suggest that work values occupy a less important place in the development of organizational commitment when compared to organizational factors, such as the perceived knowledge of the goals of the organization, or some attitudes such as satisfaction with security and opportunities for development

Abstract:

In this paper we propose the use of new iterative methods to solve symmetric linear complementarity problems (SLCP) that arise in the computation of dry frictional contacts in multi-rigid-body dynamics. Specifically, we explore the two-stage iterative algorithm developed by Morales, Nocedal and Smelyanskiy [1]. The underlying idea of that method is to combine projected Gauss-Seidel iterations with subspace minimization steps. Gauss-Seidel iterations are aimed at obtaining a high-quality estimation of the active set. Subspace minimization steps focus on the accurate computation of the inactive components of the solution. Overall, the new method is able to compute fast and accurate solutions of severely ill-conditioned LCPs. We compare the performance of a modification of the iterative method of Morales et al. with Lemke's algorithm on robotic object grasping problems
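
A minimal sketch of the first stage, projected Gauss-Seidel for an LCP with symmetric positive definite M; the subspace-minimization stage of Morales, Nocedal and Smelyanskiy is omitted, and the test problem is a random placeholder:

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z'w = 0 (M symmetric positive definite).
    In the two-stage method this sweep estimates the active set;
    a subspace solve on the inactive components would follow."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            # residual excluding the diagonal term, then project onto z >= 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# random SPD test problem
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8))
M = A @ A.T + 8 * np.eye(8)
q = rng.normal(size=8)
z = projected_gauss_seidel(M, q)
w = M @ z + q
print(z.min() >= 0, w.min() >= -1e-8, abs(z @ w) < 1e-8)
```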

Abstract:

This paper investigates corporate social (and environmental) responsibility (CSR) disclosure practices in Mexico. By analysing a sample of Mexican companies in 2010, it utilises a detailed manual content analysis and identifies corporate-governance-related determinants of CSR disclosure. The study shows a general association between the governance variables and both the content and the semantic properties of CSR information published by Mexican companies. Although an increased international influence on CSR disclosure is noted, the study reveals the symbolic role of CSR committees and the negative influence of foreign ownership on community disclosure, suggesting that improvements in business engagement with stakeholders are needed for CSR to be instrumental in business conduct

Abstract:

Effective policy-making requires that voters avoid electing malfeasant politicians. However, informing voters of incumbent malfeasance in corrupt contexts may not reduce incumbent support. As our simple learning model shows, electoral sanctioning is limited where voters already believed incumbents to be malfeasant, while information's effect on turnout is non-monotonic in the magnitude of reported malfeasance. We conducted a field experiment in Mexico that informed voters about malfeasant mayoral spending before municipal elections, to test whether these Bayesian predictions apply in a developing context where many voters are poorly informed. Consistent with voter learning, the intervention increased incumbent vote share where voters possessed unfavorable prior beliefs and when audit reports caused voters to favorably update their posterior beliefs about the incumbent's malfeasance. Furthermore, we find that revelations of low and, especially, high malfeasance increased turnout, while less surprising information reduced turnout. These results suggest that improved governance requires both greater transparency and higher citizen expectations

Abstract:

A vector autoregressive model of the Mexican economy was employed to empirically identify the transmission channels of price formation. The structural changes affecting the behavior of the inflation rate during 1970-1987 motivated the analysis of the changing influences of the explanatory variables within three different subperiods, namely: 1970-1976, 1978-1982 and 1983-1987. A main finding is that, among the variables considered, public prices were the most important in explaining the variability of inflation, irrespective of the subperiod under study. Another finding is that inflationary inertia played a different role in each subperiod
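
In modern terms, the exercise corresponds to fitting a VAR and reading off a forecast-error variance decomposition. A sketch with statsmodels on placeholder series (the paper's variable set, sample, and identification details are not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Toy monthly series standing in for inflation, public prices, money.
rng = np.random.default_rng(4)
data = pd.DataFrame(rng.normal(size=(120, 3)),
                    columns=["inflation", "public_prices", "money"])

res = VAR(data).fit(maxlags=2)
# Forecast-error variance decomposition: the share of each variable's
# variability attributed to each shock, the object behind the
# "variability of inflation" finding above.
print(res.fevd(12).summary())
```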

Abstract:

Value stream mapping (VSM) is a valuable practice employed by industry experts to identify inefficiencies in the value chain due to its visual representation capability and general ease of use. Enabled by a shift towards digitalization and smart connected systems, this project investigates the possibilities of transitioning VSM from a manual to a digital process through the utilization of data generated from a Real-Time Location System (RTLS). This study focuses on merging the aspects of RTLS and VSM such that their advantages are combined to form a more robust and effective VSM process. Two simulated experiments and an initial validation test were conducted to demonstrate the capability of the system to function in an industrial environment by replicating an actual production process. The two experiments represent the current-state conditions of the company at two different instants in time. The outputs from the tracking system are then modified and converted into inputs for VSM. A VSM application was modified and utilized to create a digital value stream map with relevant performance parameters. Finally, a stochastic simulation was carried out to compare and extrapolate the results to a 16-hour shift in order to measure, among other outputs, the utilization of the machines under the two RTLS scenarios
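
The RTLS-to-VSM conversion can be illustrated with a toy event log: zone entry/exit timestamps per tracked part are collapsed into the dwell statistics (cycle and waiting times) that feed a value stream map. All names and times below are hypothetical:

```python
import pandas as pd

# Hypothetical RTLS event log: one row per part entering/leaving a
# tracked zone. Dwell times stand in for the stop-watched cycle and
# waiting times of a manual VSM.
events = pd.DataFrame({
    "part":  ["p1", "p1", "p1", "p2", "p2", "p2"],
    "zone":  ["saw", "buffer", "weld", "saw", "buffer", "weld"],
    "t_in":  pd.to_datetime(["09:00", "09:07", "09:20",
                             "09:05", "09:13", "09:31"]),
    "t_out": pd.to_datetime(["09:06", "09:19", "09:29",
                             "09:12", "09:30", "09:40"]),
})
events["dwell_min"] = (events.t_out - events.t_in).dt.total_seconds() / 60
vsm = events.groupby("zone")["dwell_min"].agg(["mean", "count"])
print(vsm)   # buffer rows ~ waiting time; machine rows ~ cycle time
```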

Abstract:

The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, the results regarding the mechanism of degradation are not yet completely understood. PLA is easily processed by traditional techniques including injection molding, blow molding, extrusion, and thermoforming; in this research, the extrusion and injection molding processes were used to produce PLA samples for accelerated destructive testing. The methodology employed consisted of carrying out material testing under the guidelines of several ASTM standards; this research hypothesized that UV light, humidity, and temperature exposure produce a statistically significant difference in the PLA degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for when the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. An R package that allows the user to perform a non-parametric multivariate analysis when necessary was used. This paper presents a study to determine whether there is a significant difference in the degradation rate after 2000 h of accelerated degradation of a biopolymer, using multivariate and non-parametric analyses of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymer

Abstract:

Dried red chile peppers [Capsicum annuum (L.)] are an important agricultural product grown throughout the Southwestern United States and are extensively used in food and for commercial applications. Given the high, broad demand for chile, attention to the methods of harvesting, storage, transport, and packaging is critical for profitability. Currently, chile should be stored no more than 24 to 36 hours at ambient temperatures from the time of harvest due to the potential for natural fermentation to destroy the crop. The amount of usable/destroyed chile under ambient conditions is determined by several variables, including the harvesting method (hand-picked, mechanized), the time of harvest relative to the optimal harvesting point (season), and weather variations (moisture). In this work, a stochastic simulation-based model is presented to forecast optimal harvesting scenarios, capable of helping farmers and chile processors better plan and manage planting and growth-acceleration programs. The tool developed allows the economic feasibility of storage/stabilization systems, advanced mechanical harvesters, and other future advances to be analyzed in terms of the resulting increase in chile yield. We used the described simulation as an analysis tool to obtain the expected coverage and estimates of the mean and quantiles

Abstract:

While the degradation of polylactic acid (PLA) has been studied for several years, results regarding the mechanism that determines degradation are not completely understood. Through accelerated degradation testing, data can be extrapolated and modeled to test parameters such as temperature, voltage, time, and humidity. Accelerated lifetime testing is used as an alternative to experimentation under normal conditions. The methodology to create this model consisted of fabricating a series of ASTM specimens using extrusion and injection molding. These specimens were tested through accelerated degradation; tensile and flexural testing were conducted at different points in time. Nonparametric inference tests for multivariate data are presented. The results indicate that the effect of the independent variable or treatment effect (time) is highly significant. This research intends to provide a better understanding of biopolymer degradation. The findings indicated that the proposed statistical models can be used as a tool for characterizing the durability of the biopolymer as an engineering material. Having multiple models, one for each individual accelerating variable, allows deciding which parameter is critical in the characterization of the material

Abstract:

The degradation of biopolymers such as polylactic acid (PLA) has been studied for several years; however, results regarding the mechanism of degradation are not yet completely understood. It would be advantageous to predict and model PLA degradation rates in terms of performance. High strength and thermoplasticity allow PLA to be used to manufacture a great variety of products. This material is easily processed by traditional techniques, including injection molding, blow molding, extrusion, and thermoforming; extrusion and injection molding were used to produce the PLA samples for accelerated destructive testing in this research. The methodology consists of carrying out material testing under the guidelines of several ASTM standards, and this research hypothesizes that UV light, humidity, and temperature exposure produce a statistically significant difference in the degradation rate. The multivariate analysis of non-parametric data is presented as an alternative for multivariate analysis when the data do not satisfy the essential assumptions of a regular MANOVA, such as multivariate normality. Ellis et al. created an R package that allows the user to perform a non-parametric multivariate analysis when necessary. This paper presents a study to determine whether there is a significant difference in the degradation process of a biopolymer, using multivariate and non-parametric analysis of variance. The combination of the statistical techniques, multivariate analysis of variance and repeated measures, provided information for a better understanding of the degradation path of the biopolymers

Abstract:

Formal models of animal sensorimotor behavior can provide effective methods for generating robotic intelligence. In this article we describe how schema-theoretic models of the praying mantis derived from behavioral and neuroscientific data can be implemented on a hexapod robot equipped with a real-time color vision system. This implementation incorporates a wide range of behaviors, including obstacle avoidance, prey acquisition, predator avoidance, mating, and chantlitaxia behaviors, and can provide guidance to neuroscientists, ethologists, and roboticists alike. The goals of this study are threefold: to provide an understanding of, and means by which, fielded robotic systems avoid competing with other agents that are more effective at their designated task; to permit them to be successful competitors within the ecological system, capable of displacing less efficient agents; and to ensure that they are ecologically sensitive, so that agent–environment dynamics are well modeled and as predictable as possible whenever new robotic technology is introduced

Resumen:

Las denominadas escrituras del yo que se produjeron a raíz del exilio español republicano han constituido una invaluable fuente referencial para la Historia, pero han sido menos estudiadas desde una perspectiva literaria, más aún si la autora era una mujer y no disponía de más obra y, todavía más, si su lectura no se podía hacer en una lengua hegemónica. Este es el caso de Syra Alonso, exiliada gallega en México, cuya huida, junto con sus tres hijos, se produjo tras el asesinato de su esposo, el pintor vanguardista Francisco Miguel. Estas circunstancias impulsaron la escritura de sus Diarios: el Diario de Tordoia, escrito en Galicia antes de salir hacia el exilio, y el Diario de Actopan, escrito en México un año después de su llegada, ambos publicados íntegramente por primera vez en gallego en el año 2000. Tras un trabajo de búsqueda hemerográfica, la reunión en este artículo de hasta siete narraciones desconocidas de la autora, publicadas entre 1943 y 1951 en dos revistas literarias mexicanas, resignifica la obra de Syra Alonso y ofrece respuestas a momentos biográficos esenciales de su exilio, que no se conocían hasta ahora. Por tanto, se propone, por un lado, un análisis de la recepción crítica de sus Diarios que pueda dar respuesta a la invisibilidad de la autora dentro del canon y, por otro, un análisis contextual, literario e incluso etnográfico de estas narraciones en diálogo con su obra diarística y su propia biografía. Todo, ante la urgencia de un presente marcado por el hallazgo del cuerpo de Francisco Miguel en una fosa común de Bértoa hace escasos meses, un presente de memoria histórica, reparación y homenaje

Abstract:

The so-called self-writings produced as a result of the Spanish Republican exile have been an invaluable source of historical reference, but they have been less studied from a literary perspective, even more so when the author was a woman with no other published work and, further still, when her writing could not be read in a hegemonic language. This is the case of Syra Alonso, a Galician exile in Mexico, who fled with her three children after the murder of her husband, the avant-garde painter Francisco Miguel. These circumstances prompted the writing of her diaries: the Diario de Tordoia, written in Galicia before leaving for exile, and the Diario de Actopan, written in Mexico a year after her arrival, both published in full in Galician for the first time in 2000. Following hemerographic research, the gathering in this article of up to seven unknown narratives by the author, published between 1943 and 1951 in two Mexican literary magazines, resignifies Syra Alonso's work and offers answers to essential biographical moments of her exile that were unknown until now. We therefore propose, on the one hand, an analysis of the critical reception of her Diaries that may explain the invisibility of the author within the canon and, on the other, a contextual, literary and even ethnographic analysis of these narratives in dialogue with her diaristic work and her own biography. All this in view of the urgency of a present marked by the discovery of Francisco Miguel's body in a mass grave in Bértoa just a few months ago: a present of historical memory, reparation, and homage

Resumen:

Francisco Villa no es personaje protagónico de la obra literaria de Mauricio Magdaleno, sin embargo, a lo largo de toda su trayectoria este trató de reflexionar sobre la relevancia histórica e identitaria de aquel para México. Entonces, se propone y se analiza un amplio corpus de obras literarias y periodísticas del escritor para conocer su postura ante el villismo y su indiscutible líder. A partir de un enfoque esencialmente historiográfico y literario se trazan las confluencias familiares y estilísticas que atraviesan al autor. Debemos tener en cuenta que Magdaleno vivió su infancia en dos ciudades relevantes para el encumbramiento del militar, Zacatecas y Aguascalientes, e incluso existe un relato que narra el encuentro entre ambos. El análisis evidencia la presencia de Pancho Villa en los géneros literarios tradicionales y de tradición oral, así como las diferentes formas de apropiación que la literatura de la Revolución hizo de su figura para leer la historia actual

Abstract:

Francisco Villa is not a leading character in the literary work of Mauricio Magdaleno; however, throughout his career the writer tried to reflect on Villa's historical and identity-related relevance for Mexico. A wide corpus of the writer's literary and journalistic works is therefore proposed and analyzed in order to establish his position on Villismo and its indisputable leader. From an essentially historiographical and literary approach, the family and stylistic confluences that run through the author are traced. We must bear in mind that Magdaleno spent his childhood in two cities relevant to the military leader's rise, Zacatecas and Aguascalientes, and there is even a story that narrates a meeting between the two. The analysis evidences the presence of Pancho Villa in traditional literary genres and in the oral tradition, as well as the different forms of appropriation that the literature of the Mexican Revolution made of his figure in order to read current history

Abstract:

In Mexico, and in connection with very specific nationalist and identity-related circumstances, an indigenista literature was developed by writers who drew on the stories and history of native communities that had no direct access to the cultural circuit of the period, much less to its publishing circuit; communities that had been the defeated party in every historical process and that had been silenced. Alongside this literature pulsed another, indigenous one, mostly oral, preserved in chronicles, codices, and ethnographic studies as well as in the individual and collective memory of those communities. Both manifestations seemed to find in the short story a natural mold for bringing the tales of the past to light, finding in them the deepest root of mestizo identity, recovering the possible voices of the land, and thus giving meaning to the present. In the literary field, moreover, indigenous themes and characters became part of the narrative material of a group of writers who, after the polemics over a virile and nationalist literature, sought to expose the most pressing problems of the Mexican reality of the time, in opposition to another, more cosmopolitan group. The aim, then, is to survey some of these manifestations, find dialogues and tensions, and analyze the themes, perspectives, and devices employed by some paradigmatic authors, both inside and outside the canon

Abstract:

The main aim of this work is to understand the contribution of the screenplay of the film Río Escondido to the creation of a collective, polyphonic, convergent work, far removed from a classical view of authorship. To that end, we propose a documentary analysis of the story and an analysis of the screenplay itself as a semiotic vehicle that enables the passage from a written language to a visual one. The latter analysis focuses on the relevance of the screenplay's literary discourse that is omitted from the film, which may point to a disagreement between the screenwriter and the director. Identifying these points of rupture opens the way to new ways of viewing Mexican cinema, further from the industry and its clichés and closer to the literary field

Abstract:

The purpose of this article is to analyze the novel Serpa Pinto. Pueblos en la tormenta (1943), by Giuseppe Garretto, from the standpoint of history, historiography, and literary analysis. Published for the first time in Spanish despite the Italian nationality of its author, the novel went unnoticed by critics. We review the historical context both of the work's references and of the publication of the first edition; we situate the work within the literary historiography of the exile of European refugees in Latin America; and we analyze the literary characteristics of the novel, as well as the strategies of narrative remembrance employed by the author

Abstract:

Alfonso Cravioto was a member of the most prominent intellectual and political organizations of the early twentieth century. Throughout his life he combined political and diplomatic activity with literary creation, mainly poetry and essays. In this article we seek to highlight his contribution to education in Mexico, from a perspective that links his life experience with the sensitivity that characterized him and earned him the recognition and affection of his contemporaries, while he anticipated and participated in the first years of the Secretaría de Educación Pública

Abstract:

Today, a complete critical edition must be accompanied not only by rigorous textual criticism but also by a genetic criticism that considers all the extant witnesses. Accordingly, this article analyzes some of the problems that would arise in preparing a critical edition of all of Mauricio Magdaleno's short stories, the two most important being the following: first, the fact that most of the stories, as we know them today, had earlier versions published in periodicals, which would require more exhaustive research and the constitution of the text through the collation of variants from hemerographic witnesses; second, the choice of criteria for crossing the generic border between a costumbrista sketch written from a personal anecdote and a short story proper that could be included in the edition. Both problems share a common root: the most stable writing occupation Magdaleno held was journalism, which he made compatible with other activities that served, above all, as material support. A critical edition of Mauricio Magdaleno's short stories would not only guarantee a sound reading of the final text, but would also bring the author's creative process into relief, helping to overcome the anachronistic criticism to which he has been subjected. Finally, based on the state of the question presented here, we offer one table with the versions of the stories found so far and another with a substantiated proposal for a possible table of contents of the critical edition

Abstract:

The main objective of this work is to approach a conceptualization of the poesía de argumento, or verses of arguing, produced in the son jarocho, especially in the region of Los Tuxtlas (Mexico). We also analyze its poetic characteristics and the way it unfolds within the festive ritual. To this end, the study draws on an analysis of some of these poems contained in poets' notebooks, as well as on the poets' oral testimonies, whether collected in books or in interviews; that is, the poetry is analyzed in relation to its context. The dialogism and the topics of this poetry are two fundamental elements for understanding the dynamics of traditionalization and innovation that take place in these poetic-musical festive rituals, despite the creative, at times improvised, component that depends on the verseros

Abstract:

The tradition of the huapango arribeño, performed in the Sierra Gorda of Querétaro and Guanajuato and in the Zona Media of San Luis Potosí, is expressed in the celebration of festive rituals of a musical-poetic nature, both civil and religious, in which the glosa combines the copla and the décima. The tradition is governed by a series of customary, oral norms, among which the «reglamento» (rules) and the «compromiso» (commitment) stand out. This article inquires into the legal, linguistic, and literary nature of these norms; their interaction with other norms external to the fiesta; and their influence both on the performance or festive ritual, especially the topada, and on poetic creation. Drawing on ethnographic sources (interviews and recordings of fiestas) and bibliographic sources, the aim is to elucidate the role these norms play in the conservation and transformation of the tradition

Abstract:

From an aesthetic point of view, the writers Mauricio Magdaleno and Salvador Novo belonged to two opposite literary currents. Their trajectories show striking parallels, both biographical and in their cultivation of different literary disciplines. The debate over identity and nationality that arose in Mexico in the 1920s and 1930s provoked a fierce confrontation between them, which nevertheless did not prevent a progressive rapprochement fostered by political participation. The article shows the difficult balance between the adoption of ideological positions, the generational responsibility for the construction of a modern state, and artistic restlessness. An unpublished poem from Salvador Novo to Mauricio Magdaleno and an image of the two keeping vigil over the body of Father Ángel María Garibay K. demonstrate the closeness they professed at the end of their lives

Abstract:

This article examines the ability of recently developed statistical learning procedures, such as random forests or support vector machines, to forecast the first two moments of daily stock market returns. These tools have the advantage of flexibility in the considered nonlinear regression functions, even in the presence of many potential predictors. We consider two cases: one where the agent's information set includes only the past of the return series, and one where it also includes past values of relevant economic series, such as interest rates, commodity prices or exchange rates. Even though these procedures seem to be of little use for predicting returns, there is real potential for some of them, especially support vector machines, to improve on the out-of-sample forecasting ability of the standard GARCH(1,1) model for squared returns. The researcher has to be cautious about the number of predictors employed and about the specific implementation of the procedures, since using many predictors together with the default settings of standard computing packages leads to overfitted models and larger standard errors
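
As a rough illustration of the kind of volatility-forecasting exercise described, the following minimal Python sketch, with simulated data and assumed settings in place of the authors' series and implementation, fits a support vector machine on lagged squared returns; a GARCH(1,1) benchmark could be fitted to the same training window with a package such as arch:

import numpy as np
from sklearn.svm import SVR

# Simulated stand-in for a daily return series (the paper uses real market data)
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1000)

# Forecast squared returns (a proxy for the second moment) from 5 lags
p, split = 5, 800
y = (r ** 2)[p:]
X = np.column_stack([(r ** 2)[p - k - 1:-k - 1] for k in range(p)])  # lags 1..p

svr = SVR(C=1.0, epsilon=1e-5).fit(X[:split], y[:split])
oos_mse = np.mean((svr.predict(X[split:]) - y[split:]) ** 2)  # out-of-sample MSE

In line with the abstract's warning, C, epsilon and the kernel are exactly the knobs that should be tuned rather than left at package defaults.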

Abstract:

Cancer is the deadliest multifactorial disease, involving variations at the molecular and histological levels. Cancer treatment has advanced through the use of multiple approaches aimed at effectively attaining favorable outcomes from chemotherapeutic agents. As far as conventional treatment protocols are concerned, chemotherapy and radiotherapy are the pivotal ways to deliver chemotherapeutic drugs such as uracil, paclitaxel, vinca alkaloids and doxorubicin (DOX). DOX is often preferred owing to its FDA approval and multi-functionalization; irrespective of its significance, however, it is accompanied by several side effects, notably cardiotoxicity and nephrotoxicity. These side effects have encouraged researchers to develop novel alternative drug delivery systems for the transport of DOX with the aid of nanotechnology, including the co-delivery of bioactive ligands and other chemotherapeutic drugs with DOX. In this review, we discuss the development of novel nanotechnology-based drug delivery systems for DOX that have been claimed to improve the therapeutic window (efficacy and safety) against various cancers. Moreover, co-delivery of anti-cancer drugs is promising in terms of the synergistic therapeutic outcomes of more than one anti-cancer drug and of achieving an enhanced permeation and retention (EPR) effect

Abstract:

Participating in regular physical activity (PA) can help people maintain a healthy weight, and it reduces their risks of developing cardiovascular diseases and diabetes. Unfortunately, PA declines during early adolescence, particularly in minority populations. This paper explores design requirements for mobile PA-based games to motivate Hispanic teenagers to exercise. We found that some personality traits are significantly correlated to preference for specific motivational phrases and that personality affects game preference. Our qualitative analysis shows that different body weights affect beliefs about PA and games. Design requirements identified from this study include multi-player capabilities, socializing, appropriate challenge level, and variety

Abstract:

To achieve accurate tracking control of robot manipulators, many schemes have been proposed. Common approaches are based on robust and adaptive control techniques, with velocity observers employed when necessary. Robust techniques have the advantage of requiring little prior information about the robot model parameters/structure or disturbances while still achieving tracking, for instance by using sliding mode control. Adaptive techniques, on the contrary, guarantee trajectory tracking, but under the assumptions that the robot model structure is perfectly known and linear in the unknown parameters and that joint velocities are available. In this letter, experiments are carried out to find out whether combining a robust and an adaptive controller can increase the performance of the system, as long as the adaptive term can be treated as a perturbation by the robust controller. The results are compared with an adaptive robust control law, showing that the proposed combined scheme performs better than the separate algorithms working on their own and than the comparison law
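
For reference, the textbook objects being combined can be written compactly; this is a generic Slotine-Li-type adaptive law with an optional sliding-mode term, a standard form rather than the specific controllers tested in the letter:

M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau,
\qquad
\tau = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\,\hat{\theta} - K_d s - k\,\mathrm{sgn}(s),
\qquad
\dot{\hat{\theta}} = -\Gamma\, Y^{\top} s,

where s = \dot{\tilde{q}} + \Lambda\tilde{q} is the sliding-type error variable, Y\hat{\theta} is the adaptive term exploiting linearity in the parameters, and -k\,\mathrm{sgn}(s) is the kind of robust sliding-mode component under which, in a combined scheme, the adaptive term can be treated as a perturbation.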

Abstract:

To achieve accurate tracking control of robot manipulators, many schemes have been proposed. A common approach is based on adaptive control techniques, which guarantee trajectory tracking under the assumption that the robot model structure is perfectly known and linear in the unknown parameters, while joint velocities are available. Although tracking errors tend to zero, parameter errors do not unless a persistent excitation condition is fulfilled. There are few works dealing with velocity observation in conjunction with adaptive laws. In this note, an adaptive control/observer scheme is proposed for position tracking of robot manipulators. It is shown that tracking and observation errors are ultimately bounded, with the characteristic that when a persistent excitation condition is met they, as well as the parameter errors, tend to zero. Simulation results are in good agreement with the developed theory

Abstract:

This study shows that morbidity has a mediating impact between intimate partner violence against women and labor productivity in terms of absenteeism and presenteeism. Partial least squares structural equation modeling (PLS-SEM) was used on a nationwide representative sample of 357 female owners of micro-firms in Peru. The resulting data reveal that morbidity is a mediating variable between intimate partner violence against women and absenteeism (β = 0.213; p < .001), as well as between intimate partner violence against women and presenteeism (β = 0.336; p < .001). This finding allows us to understand how such intimate partner violence against women negatively affects workplace productivity in the context of a micro-enterprise, a key element in many economies across the world
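
In path-model terms, the mediation hypothesis being tested has the schematic form (the variable symbols are ours, not the paper's):

M = a\,V + \varepsilon_1, \qquad P = b\,M + c'\,V + \varepsilon_2,

where V is intimate partner violence, M is morbidity, and P is lost productivity (absenteeism or presenteeism); the indirect (mediated) effect is the product a\,b, and the reported β coefficients are estimates of such indirect paths.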

Abstract:

The purpose of this paper is to determine the prevalence of economic violence against women, specifically in formal sector micro-firms managed by women in Peru, a key Latin American emerging market. Additionally, the authors have identified the demographic characteristics of the micro-firms, financing and credit associated with women who suffer economic violence. Design/methodology/approach. In this study, a structured questionnaire was administered to a representative sample nationwide (357 female micro-entrepreneurs). Findings. The authors found that 22.2 percent of female micro-entrepreneurs have been affected by economic violence at some point in their lives, while at the same time 25 percent of respondents have been forced by their partner to obtain credit against their will. Lower education level, living with one’s partner, having children, business location in the home, lower income, not having access to credit, not applying credit to working capital needs, late payments and being forced to obtain credit against one’s will were all factors associated with economic violence. Furthermore, the results showed a significant correlation between suffering economic violence and being a victim of other types of violence (including psychological, physical or sexual); the highest correlation was with serious physical violence (r=0.523, p<0.01)

Abstract:

A standard technique for producing monogenic functions is to apply the adjoint quaternionic Fueter operator to harmonic functions. We will show that this technique does not give a complete system in L2 of a solid torus, where toroidal harmonics appear in a natural way. One reason is that this index-increasing operator fails to produce monogenic functions with zero index. Another reason is that the non-trivial topology of the torus requires taking into account a cohomology coefficient associated with monogenic functions, apparently not previously identified because it vanishes for simply connected domains. In this paper, we build a reverse-Appell basis of harmonic functions on the torus expressed in terms of classical toroidal harmonics. This means that the partial derivative of any element of the basis with respect to the axial variable is a constant multiple of another basis element with subindex increased by one. This special basis is used to construct respective bases in the real L2-Hilbert spaces of reduced quaternion and quaternion-valued monogenic functions on toroidal domains
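
In symbols, with A_n denoting the basis elements and x_0 the axial variable (our notation), the reverse-Appell property described above reads:

\frac{\partial A_n}{\partial x_0} = c_n\, A_{n+1}, \qquad c_n \neq 0,

in contrast with a classical Appell system, where differentiation lowers the index, \partial A_n / \partial x_0 = n\,A_{n-1}.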

Abstract:

The Fourier method approach to the Neumann problem for the Laplacian operator in the case of a solid torus contrasts in many respects with the much more straightforward situation of a ball in 3-space. Although the Dirichlet-to-Neumann map can be readily expressed in terms of series expansions with toroidal harmonics, we show that the resulting equations contain undetermined parameters which cannot be calculated algebraically. A method for rapidly computing numerical solutions of the Neumann problem is presented, with numerical illustrations. The results for interior and exterior domains combine to provide a solution of the Neumann problem for the case of a shell between two tori

Abstract:

The paper addresses the issues raised by the simultaneity between the supply function and the domestic and foreign demand for exportables, analysing the microeconomic foundations of the simultaneous price and output decisions of a firm which operates in the exportables sector of an open economy and faces both a domestic and a foreign demand for its output. A specific characteristic of the model is that it allows for the possibility of price discrimination, which is suggested by the observed divergences in the behaviour of domestic and export prices. The framework developed is used to investigate the recent behaviour of prices and output in two industries of the German manufacturing sector

Abstract:

We introduce a new method for building models of CH, together with Π2 statements over H(ω2), by forcing. Unlike other forcing constructions in the literature, our construction adds new reals, although only ℵ1-many of them. Using this approach, we build a model in which a very strong form of the negation of Club Guessing at ω1, known as Measuring, holds together with CH, thereby answering a well-known question of Moore. This construction can be described as a finite-support weak forcing iteration with side conditions consisting of suitable graphs of sets of models with markers. The CH-preservation is accomplished through the imposition of copying constraints on the information carried by the condition, as dictated by the edges in the graph

Abstract:

Measuring says that for every sequence (C_δ)_{δ<ω1}, with each C_δ being a closed subset of δ, there is a club C ⊆ ω1 such that for every δ ∈ C, a tail of C ∩ δ is either contained in or disjoint from C_δ. We answer a question of Justin Moore by building a forcing extension satisfying Measuring together with 2^ℵ0 > ℵ2. The construction works over any model of ZFC + CH and can be described as a finite support forcing iteration with systems of countable structures as side conditions and with symmetry constraints imposed on its initial segments. One interesting feature of this iteration is that it adds dominating functions f : ω1 → ω1 mod. countable at each stage
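
Written out formally, the property defined in this abstract is:

\text{Measuring}: \forall \langle C_\delta : \delta < \omega_1 \rangle \Big[ \big( \forall \delta < \omega_1\ C_\delta \subseteq \delta \text{ closed in } \delta \big) \rightarrow \exists\, \text{club } C \subseteq \omega_1\ \forall \delta \in C\ \exists \alpha < \delta\ \big( (C \cap \delta) \setminus \alpha \subseteq C_\delta \ \lor\ ((C \cap \delta) \setminus \alpha) \cap C_\delta = \emptyset \big) \Big].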

Abstract:

We separate various weak forms of Club Guessing at ω1 in the presence of 2^ℵ0 large, Martin's Axiom, and related forcing axioms. We also answer a question of Abraham and Cummings concerning the consistency of the failure of a certain polychromatic Ramsey statement together with the continuum large. All these models are generic extensions via finite support iterations with symmetric systems of structures as side conditions, possibly enhanced with ω-sequences of predicates, and in which the iterands are taken from a relatively small class of forcing notions. We also prove that the natural forcing for adding a large symmetric system of structures (the first member in all our iterations) adds ℵ1-many reals but preserves CH

Abstract:

The structure of À la recherche du temps perdu is explained through the psychic experience of time of the novel's protagonist, according to the Proustian conception of temporality read in light of the thought of Saint Augustine, Henri Bergson and Edmund Husserl

Abstract:

Oneiric activity occupies a preponderant place in María Zambrano's philosophical anthropology. For the Andalusian thinker, dreaming is a cognitive faculty on which, in the last analysis, the psychic development and emotional health of the human person depend. This note shows the currency of Zambrano's thought on dreams in light of some theses of contemporary neuroscience

Abstract:

Using as a guiding motif the idea that an academic's library reflects their ethos and worldview, this epideictic text celebrates the intellectual trajectory of Nora Pasternac, professor in the Academic Department of Languages at ITAM

Abstract:

The text discusses some passages from three novels and a short story by Ignacio Padilla in light of G. W. Leibniz's Monadology and Jan Potocki's The Manuscript Found in Saragossa, with the purpose of showing the use of mise en abyme as a narrative device in the work of this Mexican writer

Abstract:

Using the notion of phenomenological distance as an explanatory principle, this essay sets out the forms of relation between source language and target language, and between source text and target text, in Walter Benjamin's theory of translation

Abstract:

In María Zambrano's philosophical anthropology, sleeping and waking are not mere empirical acts of daily life but formal egological operations involved in the process of the subject's self-constitution. This article describes them by bringing Zambrano's book Los sueños y el tiempo into dialogue with the notion of ipseity as understood by Paul Ricoeur, Edmund Husserl, Hans Blumenberg and Michel Henry

Abstract:

The homotopy classification problem for complete intersections is settled when the complex dimension is larger than the total degree

Abstract:

A rigidity theorem is proved for principal Eschenburg spaces of positive sectional curvature. It is shown that for a very large class of such spaces the homotopy type determines the diffeomorphism type

Abstract:

We address the problem of parallelizability and stable parallelizability of a family of manifolds that are obtained as quotients of circle actions on complex Stiefel manifolds. We settle the question in all cases but one, and obtain in the remaining case a partial result

Abstract:

The cohomology algebra mod p of the complex projective Stiefel manifolds is determined for all primes p. When p = 2 we also determine the action of the Steenrod algebra and apply this to the problem of existence of trivial subbundles of multiples of the canonical line bundle over a lens space with 2-torsion, obtaining optimal results in many cases

Abstract:

The machinery of M. Kreck and S. Stolz is used to obtain a homeomorphism and diffeomorphism classification of a family of Eschenburg spaces. In contrast with the family of Wallach spaces studied by Kreck and Stolz, we obtain abundant examples of homeomorphic but not diffeomorphic Eschenburg spaces. The problem of stable parallelizability of Eschenburg spaces is discussed in an appendix

Abstract:

In this paper, we introduce the notion of a linked domain and prove that a non-manipulable social choice function defined on such a domain must be dictatorial. This result not only generalizes the Gibbard-Satterthwaite Theorem but also demonstrates that the equivalence between dictatorship and non-manipulability is far more robust than suggested by that theorem. We provide an application of this result in a particular model of voting. We also provide a necessary condition for a domain to be dictatorial and use it to characterize dictatorial domains in the cases where the number of alternatives is three

Abstract:

We study entry and bidding patterns in sealed bid and open auctions. Using data from the U.S. Forest Service timber auctions, we document a set of systematic effects: sealed bid auctions attract more small bidders, shift the allocation toward these bidders, and can also generate higher revenue. A private value auction model with endogenous participation can account for these qualitative effects of auction format. We estimate the model's parameters and show that it can explain the quantitative effects as well. We then use the model to assess bidder competitiveness, which has important consequences for auction design

Abstract:

The role of domestic courts in the application of international law is one of the most vividly debated issues in contemporary international legal doctrine. However, the methodology of interpretation of international norms used by these courts remains underexplored. In particular, the application of the Vienna rules of treaty interpretation by domestic courts has not been sufficiently assessed so far. Three case studies (from the US Supreme Court, the Mexican Supreme Court, and the European Court of Justice) show the diversity of approaches in this respect. In the light of these case studies, the article explores the inevitable tensions between two opposite, yet equally legitimate, normative expectations: the desirability of a common, predictable methodology versus the need for flexibility in adapting international norms to a plurality of domestic environments

Abstract:

Christensen, Baumann, Ruggles, and Sadtler (2006) proposed that organizations addressing social problems may use catalytic innovation as a strategy to create social change. These innovations aim to create scalable, sustainable, and systems-changing solutions. This empirical study examines: (a) whether catalytic innovation applies to Mexican social entrepreneurship; (b) whether those who adopt Christensen et al.’s (2006) strategy generate more social impact; and (c) whether they demonstrate economic success. We performed a survey of 219 Mexican social entrepreneurs and found that catalytic innovation does occur within social entrepreneurship, and that those social entrepreneurs who use catalytic innovations not only maximize their social impact but also maximize their profits, and that they do so with diminishing returns to scale

Abstract:

Solid lipid nanoparticles (SLNs) provide excellent biocompatibility and efficient encapsulation as pharmaceutical delivery systems. In this study, shea butter SLNs with excellent stability were synthesized by the hot homogenization technique. The penetration capacity of the nanoparticles was validated by atomic force microscopy, scanning electron microscopy, differential scanning calorimetry, dynamic light scattering with zeta potential measurements, and confocal microscopy. Triborheological tests, such as viscosity-shear rate profiling, normal stress profiling and sliding speed sweeping, were conducted to identify and quantify the impact of SLNs in topical formulations. We found that the SLNs had a lower coefficient of friction than the bulk lipids, owing to the more stable lipid layer formed by the SLNs. SLNs in topical formulations have potential applications in cosmetics, such as anti-aging agents, owing to their emollient, occlusion, antioxidant and anti-inflammatory properties

Abstract:

We study pattern formation in a 2D reaction-diffusion (RD) subcellular model characterizing the effect of a spatial gradient of a plant hormone distribution on a family of G-proteins associated with root hair (RH) initiation in the plant cell Arabidopsis thaliana. The activation of these G-proteins, known as the Rho of Plants (ROPs), by the plant hormone auxin is known to promote certain protuberances on RH cells, which are crucial for both anchorage and the uptake of nutrients from the soil. Our mathematical model for the activation of ROPs by the auxin gradient is an extension of the model of Payne and Grierson [PLoS ONE, 4 (2009), e8337] and consists of a two-component Schnakenberg-type RD system with spatially heterogeneous coefficients on a 2D domain. The nonlinear kinetics in this RD system model the nonlinear interactions between the active and inactive forms of ROPs. By using a singular perturbation analysis to study 2D localized spatial patterns of active ROPs, it is shown that the spatial variations in the nonlinear reaction kinetics, due to the auxin gradient, lead to a slow spatial alignment of the localized regions of active ROPs along the longitudinal midline of the plant cell. Numerical bifurcation analysis together with time-dependent numerical simulations of the RD system are used to illustrate both 2D localized patterns in the model and the spatial alignment of localized structures
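
For orientation, a two-component Schnakenberg-type RD system of the general kind referred to can be written as follows; the placement of the spatially heterogeneous coefficient modeling the auxin gradient is illustrative here, not a reproduction of the paper's equations:

u_t = \varepsilon^2 \Delta u - u + \alpha(\mathbf{x})\, u^2 v, \qquad \tau v_t = D \Delta v + b - \alpha(\mathbf{x})\, u^2 v,

with u and v the active and inactive ROP concentrations and \alpha(\mathbf{x}) a spatially varying coefficient carrying the auxin dependence.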

Abstract:

We aimed to make a theoretical contribution to the happy-productive worker thesis by expanding the study to cases where this thesis does not fit. We hypothesized and corroborated the existence of four relations between job satisfaction and innovative performance: (a) unhappy-unproductive, (b) unhappy-productive, (c) happy-unproductive, and (d) happy-productive. We also aimed to contribute to the happy-productive worker thesis by studying some conditions that influence and differentiate among the four patterns. Hypotheses were tested in a sample of 513 young employees representative of Spain. Cluster analysis and discriminant analysis were performed. We identified the four patterns. Almost 15 % of the employees had a pattern largely ignored by previous studies (e.g., unhappy-productive). As hypothesized, to promote well-being and performance among young employees, it is necessary to fulfill the psychological contract, encourage initiative, and promote job self-efficacy. We also confirmed that over-qualification characterizes the unhappy-productive pattern, but we failed to confirm that high job self-efficacy characterizes the happy-productive pattern. The results show the relevance of personal and organizational factors in studying the well-being-performance link in young employees
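
As a schematic of the clustering step that identifies the four patterns (illustrative only; the study's actual indicators, scales, and preprocessing are not reproduced here):

import numpy as np
from sklearn.cluster import KMeans

# Stand-in scores for job satisfaction and innovative performance (n = 513)
rng = np.random.default_rng(1)
scores = rng.uniform(1, 7, size=(513, 2))  # columns: satisfaction, performance

# Partition into four patterns: (un)happy x (un)productive
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(scores)
for c in range(4):
    print(c, scores[labels == c].mean(axis=0))  # mean profile of each cluster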

Abstract:

Conventional wisdom suggests that promising free information to an agent would crowd out costly information acquisition. We theoretically demonstrate that this intuition only holds as a knife-edge case in which priors are symmetric. Indeed, when priors are asymmetric, a promise of free information in the future induces agents to increase information acquisition. In the lab, we test whether such crowding out occurs for both symmetric and asymmetric priors. Our results are qualitatively in line with the predictions: When priors are asymmetric, the promise of future free information induces subjects to acquire more costly information

Abstract:

Region-of-interest (ROI) tomography aims at reconstructing a region of interest C inside a body using only x-ray projections intersecting C; it is useful for reducing overall radiation exposure when only a small specific region of a body needs to be examined. We consider x-ray acquisition from sources located on a smooth curve Γ in R3 verifying the classical Tuy condition. In this generic situation, the non-truncated cone-beam transform of smooth density functions f admits an explicit inverse Z, as originally shown by Grangeat. However, Z cannot directly reconstruct f from ROI-truncated projections. To deal with the ROI tomography problem, we introduce a novel reconstruction approach. For densities f in L∞(B), where B is a bounded ball in R3, our method iterates an operator U combining ROI-truncated projections, inversion by the operator Z and appropriate regularization operators. Assuming only knowledge of projections corresponding to a spherical ROI C ⊂ B, given ɛ > 0, we prove that if C is sufficiently large our iterative reconstruction algorithm converges at exponential speed to an ɛ-accurate approximation of f in L∞. The accuracy depends on the regularity of f quantified by its Sobolev norm in W5(B). Our result guarantees the existence of a critical ROI radius ensuring the convergence of our ROI reconstruction algorithm to an ɛ-accurate approximation of f. We have numerically verified these theoretical results using simulated acquisition of ROI-truncated cone-beam projection data for multiple acquisition geometries. Numerical experiments indicate that the critical ROI radius is fairly small with respect to the support region B
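
Schematically, the reconstruction described is a fixed-point iteration (the notation is ours):

f_{k+1} = U f_k, \qquad \| f_k - f \|_{L^\infty(B)} \le C\, \rho^k, \quad 0 < \rho < 1,

where U combines the ROI-truncated projection data, Grangeat-type inversion by Z, and the regularization operators; the exponential rate, and hence ɛ-accuracy after finitely many steps, is guaranteed once the ROI radius exceeds the critical value.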

Abstract:

The Free Trade Agreement between the European Union and Mexico (TLCUEM) entered into force in 2000, becoming one of the most important transatlantic trade agreements. The goal of this paper is to analyze the agreement's results for trade between the partner countries a decade on, and to identify the main economic determinants. A gravity model is estimated for a sample of 60 countries over the period 1994-2011. The results indicate that the agreement has played a relevant role in the intensification of trade relations between the two partners
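
For readers unfamiliar with the methodology, gravity models in this literature typically take a log-linear form; the paper's exact covariates are not reproduced here, so the following is only an illustrative sketch:

\ln X_{ijt} = \beta_0 + \beta_1 \ln \mathrm{GDP}_{it} + \beta_2 \ln \mathrm{GDP}_{jt} + \beta_3 \ln \mathrm{dist}_{ij} + \beta_4\, \mathrm{FTA}_{ijt} + \varepsilon_{ijt},

where X_{ijt} denotes bilateral trade flows from country i to country j in year t and FTA_{ijt} is an indicator equal to 1 when an agreement such as the TLCUEM is in force between the pair.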

Abstract:

We study an at-scale natural experiment in which debit cards were given to cash transfer recipients who already had a bank account. Using administrative account data and household surveys, we find that beneficiaries accumulated a savings stock equal to 2% of annual income after two years with the card. The increase in formal savings represents an increase in overall savings, financed by a reduction in current consumption. There are two mechanisms. First, debit cards reduce transaction costs of accessing money. Second, they reduce monitoring costs, which led beneficiaries to check their account balances frequently and build trust in the bank

Abstract:

Transaction costs are a significant barrier to the take-up and use of formal financial services. Account opening fees and minimum balance requirements prevent the poor from opening bank accounts (Dupas and Robinson 2013), and small subsidies can lead to large increases in take-up (Cole, Sampson, and Zia 2011). Indirect transaction costs, such as travel time, are also a barrier: the distance to the nearest bank or mobile money agent is a key predictor of take-up of savings accounts (Dupas et al. forthcoming) and mobile money (Jack and Suri 2014). In turn, increased access to financial services can reduce poverty and increase welfare (Burgess and Pande 2005; Suri and Jack 2016). Digital financial services, such as ATMs, debit cards, mobile money, and digital credit, have the potential to reduce transaction costs. However, existing studies rarely measure indirect transaction costs. We provide evidence on how a specific technology, a debit card, lowers indirect transaction costs by reducing travel distance and foregone activities. We study a natural experiment in which debit cards tied to existing savings accounts were rolled out geographically over time to beneficiaries of the Mexican cash transfer program Oportunidades. Prior to receiving debit cards, beneficiaries received transfers directly into a savings account every two months. After receiving cards, beneficiaries continue to receive their benefits in the savings account, but can access their transfers and savings at any bank's ATM. They can also check their balances at any bank's ATM or use the card to make purchases at point of sale terminals. We find that debit cards reduce the median road distance to access the account from 4.8 to 1.3 kilometers (km). As a result, the proportion of beneficiaries who walk to withdraw the transfer payments increases by 59 percent. Furthermore, prior to receiving debit cards, 84 percent of beneficiari

Abstract:

We study a natural experiment in which debit cards are rolled out to beneficiaries of a cash transfer program, who already received transfers directly deposited into a savings account. Using administrative account data and household surveys, we find that before receiving debit cards, few beneficiaries used the accounts to make more than one withdrawal per period, or to save. With cards, beneficiaries increase their number of withdrawals and check their balances frequently; the number of checks decreases over time as their reported trust in the bank and savings increase. Their overall savings rate increases by 3–4 percent of household income

Abstract:

Two career-concerned experts sequentially give advice to a Bayesian decision maker (D). We find that secrecy dominates transparency, yielding superior decisions for D. Secrecy empowers the expert moving late to be pivotal more often. Further, (i) only secrecy enables the second expert to partially communicate her information and its high precision to D and swing the decision away from the first expert's recommendation; (ii) if experts have high average precision, then the second expert is effective only under secrecy. These results are obtained when experts only recommend decisions. If they also report the quality of advice, a fully revealing equilibrium may exist

Abstract:

In recent years, there has been an upsurge in the number of countries that are mainstreaming gender equality concerns in their trade and investment agreements. These recent developments challenge the longstanding assumption that trade, investment, and gender equality are not related. They also show that gender mainstreaming in trade and investment agreements is here to stay. However, very few countries, mostly developed countries, have led this mainstreaming approach and have made efforts to incentivize other countries to negotiate gender-responsive trade and investment agreements. The majority of developing countries are yet to take their first steps in negotiating such policy instruments with a gender lens, and their hesitation can be grounded in various reasons, including fears of protectionism, lack of data, paucity of understanding and expertise and, more broadly, constraints relating to their negotiation capacity. Moreover, the inclusion of gender-related concerns in the negotiation of such agreements has deepened and widened the negotiation capacity gap between developed and developing countries. In this article, the authors attempt to assess this widening negotiation capacity gap with the help of empirical research, and to show how it can lead to disproportionately negative repercussions for developing countries relative to developed ones

Abstract:

In recent years, more and more countries have included different kinds of gender considerations in their trade agreements. Yet many countries have still not signed their very first agreement with a gender equality-related provision. Though most of the agreements negotiated by countries in the Asia-Pacific region have not explicitly accommodated gender concerns, a limited number of trade agreements signed by countries in the region have presented a distinct approach: the nature of provisions, drafting style, location in the agreements, and topic coverage of such provisions contrast with the gender-mainstreaming approach employed by the Americas or other regions. This chapter provides a comprehensive account and assessment of gender-related provisions included in the existing trade agreements negotiated by countries in the Asia-Pacific, explains the extent to which gender concerns are mainstreamed in these agreements, and summarizes the factors that impede such mainstreaming efforts in the region

Abstract:

The most common provisions we find in almost all multilateral, regional and bilateral trade agreements are the exception clauses that allow countries to protect public morals, humans, animals or plant health and life and conserve exhaustible natural resources. If countries can allow trade-restrictive measures that aim to protect these non-economic interests, is it possible to negotiate a specific exception to justify measures that are aimed at protecting women's economic interests as well? Is the removal of barriers that impede women's participation in trade any less important than the conservation of exhaustible natural resources such as sea turtles or dolphins? In that context, this article prepares a case for the inclusion of a specific exception that can allow countries to leverage women's economic empowerment through international trade agreements. This is done after carrying out an objective assessment of whether a respondent could seek protection under the existing public morality exception to justify a measure that is taken to protect women's economic interests

Abstract:

Mexico has by far the world's highest death rate linked to obesity and other chronic diseases. As a response to the growing pandemic of obesity, Mexico has adopted a new compulsory front-of-pack labeling regulation for pre-packaged foods and nonalcoholic beverages. This article provides an assessment of the regulation's consistency with international trade law and the arguments that might be invoked by either side in a hypothetical trade dispute on this matter

Abstract:

In the past few months, we have witnessed the 'worst deal' in the history of the USA become the 'best deal' in the history of the USA. The negotiation leading to the United States-Mexico-Canada Agreement (USMCA) appeared as an 'asymmetrical exchange' scenario that could have led to an unbalanced outcome for Mexico. However, Mexico stood firm on its positions and negotiated a modernized version of North American Free Trade Agreement. Mexico faced various challenges during this renegotiation, not only because it was required to negotiate with two developed countries but also due to the high level of ambition and demands raised by the new US administration. This paper provides an account of these impediments. More importantly, it analyzes the strategies that Mexico used to overcome the resource constraints it faced amidst the unpredictable political dilemma in the US and at home. In this manner, this paper seeks to provide a blueprint of strategies that other developing countries could employ to overcome their negotiation capacity constraints, especially when they are dealing with developed countries and in uncertain political environments

Abstract:

Health pandemics affect women and men differently, and they can make the existing gender inequalities much worse. COVID-19 is one such pandemic, which can have substantial gendered implications both during and in the post-pandemic world. Its economic and social consequences could deepen the existing gender inequalities and roll back the limited gains made in respect of women empowerment in the past few decades. The impending global recession, multiple trade restrictions, economic lockdown, and social distancing measures can expose vulnerabilities in social, political, and economic systems, which, in turn, could have a profound impact on women’s participation in trade and commerce. The article outlines five main reasons that explain why this health pandemic has put women employees, entrepreneurs, and consumers at the frontline of the struggle. It then explores how free trade agreements can contribute in repairing the harm in the post-pandemic world. In doing so, the author sheds light on various ways in which the existing trade agreements embrace gender equality considerations and how they can be better prepared to help minimize the pandemic-inflicted economic loss to women

Abstract:

The World Trade Organization (WTO) Dispute Settlement System (DSS) is in peril. The Appellate Body (AB) is being held 'hostage' by the very architect and most frequent user of the WTO DSS, the United States of America. This will bring the whole DSS to a standstill, as the inability of the AB to review appeals will have a kill-off effect on the binding value of Panel rulings. If the most celebrated DSS collapses, the members will not be able to enforce their WTO rights. WTO-inconsistent practices and violations would increase and remain unchallenged. Rights without remedies would soon lose their charm, and we might witness a higher and faster drift away from multilateral trade regulation. This is a grave situation. This piece is an academic attempt to analyse and defuse the key points of criticism against the AB. A comprehensive assessment of the reasons behind this criticism could be a starting point for resolving this gridlock. The first part of this Article investigates the reasons and motivations of the US behind these actions, as we cannot address the problems without understanding them in a comprehensive manner. The second part looks at the issue from a systemic angle, addressing the debate on whether the WTO resembles common or civil law, since most of the criticism directed towards judicial activism and overreach is 'much ado about nothing'. The concluding part briefly reviews the proposals already made by scholars to resolve this deadlock, and leaves the readers with a fresh proposal to deliberate upon

Abstract:

In recent years, we have witnessed a sharp increase in the number of free trade agreements (FTAs) with gender-related provisions. The key champions of this evolution include Canada, Chile, New Zealand, Australia and Uruguay. These countries have proposed a new paradigm, i.e. a paradigm where FTAs are considered vehicles for achieving the economic empowerment of women. This trend is spreading like wildfire to other parts of the world. More and more countries are expressing their interest in ensuring that their FTAs are gender-responsive and not simply gender-neutral or gender-blind in nature. The momentum is on, and we can expect many more agreements in the future to include stand-alone chapters or exclusive provisions on gender issues. This article is an attempt to tap into this ongoing momentum, as it puts forward a newly designed self-evaluation maturity framework to measure the gender-responsiveness of trade agreements. The proposed framework is to help policy-makers and negotiators to: (1) measure the gender-responsiveness of trade agreements; (2) identify areas where agreements need critical improvements; and (3) receive recommendations to improve the gender-fabric of trade agreements that they are negotiating or have already negotiated. This is the first academic intervention presenting this type of gender-responsiveness model for trade agreements

Abstract:

Purpose - The World Trade Organisation grants rights to its members, and the WTO Dispute Settlement Understanding (DSU) provides a rule-oriented consultative and judicial mechanism to protect these rights in cases of WTO-incompatible trade infringements. However, the benefits of DSU participation come at a cost. These costs are acutely formidable for least developed countries (LDCs), which have small market sizes and trading stakes. No LDC has ever filed a WTO complaint, with the sole exception of the India-Battery dispute filed by Bangladesh against India. This paper aims to look at the experience of how Bangladesh – so far the only LDC member that has filed a formal WTO complaint – persuaded India to withdraw the anti-dumping duties India had imposed on imports of acid batteries from Bangladesh. Design/methodology/approach - The investigation is grounded on practically informed findings gathered through the authors' work experience and several semi-structured interviews and discussions which the authors have conducted with government representatives from Bangladesh, government and industry representatives from other developing countries, trade lawyers and officials based in Geneva and Brussels, and civil society organisations. Findings - The discussion provides a sound indication of the participation impediments that LDCs can face at the WTO DSU and the ways in which such challenges can be overcome with the help of resources available at the domestic level. It also exemplifies how domestic laws and practices can respond to international legal instruments and impact the performance of an LDC at an international adjudicatory forum. Originality/value - Except for one book chapter and a working paper, there is no literature available on this matter. This investigation is grounded on practically informed findings gathered with the help of original empirical research conducted by the authors

Abstract:

Mexico has employed special methodologies for price determination and the calculation of dumping margins against Chinese imports in almost all anti-dumping investigations. This chapter attempts to explain and analyze the NME-specific procedures employed by Mexican authorities in anti-dumping proceedings against China. It also clarifies the Mexican standpoint on the controversial issue of how the expiry of Section 15(a)(ii) of China's Accession Protocol to the WTO impacts the surviving parts of Section 15 of the Protocol, and whether Mexico has changed its treatment of Chinese imports following the expiry of Section 15(a)(ii) after 12 December 2016

Abstract:

Multiple scholarly works have argued that developing country members of the World Trade Organization (WTO) should enhance their dispute settlement capacity to successfully and cost-effectively navigate the system of the WTO Dispute Settlement Understanding (DSU). It is one thing to be a part of WTO agreements and know the WTO rules, and another to know how to use and take advantage of those agreements and rules in practice. The present investigation seeks to conduct a detailed examination of the latter, with a specific focus on critically examining public-private partnership (PPP) strategies that can enable developing countries to effectively utilize the provisions of the WTO DSU. To achieve this purpose, the article examines how Brazil, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries in Brazil may prompt other developing countries to determine their individual approach towards PPP for the handling of WTO disputes

Abstract:

The World Trade Organisation Dispute Settlement Understanding (WTO DSU) is a two-tier mechanism. The first tier is international adjudication and the second tier is the domestic handling of trade disputes. Both tiers are interdependent and interconnected. A case that is poorly handled at the domestic level generally stands a relatively lower chance of success at the international level; hence, the future of WTO litigation is partially predetermined by the manner in which it is handled at the domestic level. Moreover, most of the capacity-related challenges faced by developing countries at the WTO DSU are deeply rooted in the domestic context of these countries, and their solutions can best be found at the domestic level. The present empirical investigation seeks to explore a domestic solution to the capacity-related challenges faced mainly by developing countries, as it examines the model of public-private partnership (PPP). In particular, the article examines how India, one of the most active DSU users among developing countries, has strengthened its DSU participation by engaging its private stakeholders during the management of WTO disputes. The identification and evaluation of the PPP strategies employed by the government and industries, along with an analysis of the challenges and potential limitations that such partnerships have faced in India, may prompt other developing countries to review or revise their individual approach towards the future handling of WTO disputes

Abstract:

With the advent of globalization and industrialization, the significance of the WTO DSU as an international institution of trade dispute governance, a landmark achievement of the Uruguay Round negotiations, has expanded tremendously. Exploring whether the 'pendulum' of the DSU is tilted towards developed economies, much to the disadvantage of the developing world, it becomes imperative to devise a strategy within the existing framework to restore the equilibrium of this tilted pendulum. With the WTO recognized as an area of public international law that is extending its roots into the sphere of private international law, the approach of public-private partnership can be efficiently designed to help developing countries overcome their challenges in using the WTO DSU, among the other approaches suggested by various experts. This study aims at exploring ways in which this partnership can be devised and implemented in the context of developing countries, and at analyzing the limits developing countries face in implementing this strategy

Resumen:

Las respuestas más comunes al escalamiento percibido del crimen con violencia a través de la mayor parte de América Latina son el aumento del tamaño y los poderes de la policía local y -en la mayoría de los casos- incrementar la participación de las fuerzas armadas para confrontar tanto al crimen común como al organizado. En México el debate se ha visto agudizado por la extensa violencia vinculada a los conflictos entre organizaciones de narcotráfico y entre éstas y las fuerzas de seguridad del gobierno, en las cuales el ejército y la marina han desempeñado papeles importantes. Con base en la World Values Survey y datos del Barómetro de las Américas, examinamos tendencias de la confianza pública en la policía, el sistema judicial y las fuerzas armadas en México entre 1990 y 2010. Aquí preguntamos: ¿Está difundido y generalizado a través de la muestra el apoyo público para emplear a los militares como policías? ¿O existen patrones de apoyo y oposición respecto a la opinión pública? Nuestros hallazgos principales fueron: 1) que las fuerzas armadas clasificaron en primer lugar en relación con la confianza, mientras que la confianza en otras instituciones mexicanas tuvo una tendencia negativa entre 2008 y 2010, además la confianza en los militares aumentó ligeramente; 2) los encuestados respondieron que los militares respetan los derechos humanos más que el promedio y sustancialmente más que la policía o el gobierno en general; 3) el apoyo público para los militares en la lucha contra el crimen es fuerte y está distribuido de manera equitativa a través del espectro ideológico y de los grupos sociodemográficos, y 4) los patrones de apoyo surgen con mayor claridad respecto a percepciones, actitudes y juicios de desempeño. A modo de conclusión consideramos algunas de las implicaciones políticas y de política de nuestros hallazgos.

Abstract:

Typical responses to the perceived escalation of violent crime throughout most of Latin America are to increase the size and powers of the regular police and -in most cases- to expand the involvement of the armed forces to confront both common and organized crime. Participation by the armed forces in domestic policing, in turn, has sparked debates in several countries about the serious risks incurred, especially with respect to human rights violations. In Mexico the debate is sharpened by the extensive violence linked to conflicts among drug-trafficking organizations and between these and the government's security forces, in which the Army and Navy have played leading roles. Using World Values Survey and Americas Barometer data, we examine trends in public confidence in the police, justice system, and armed forces in Mexico over 1990-2010. Using Vanderbilt University's 2010 LAPOP survey we compare levels of trust in various social, political, and government actors, locating Mexico in the broader Latin American context. Here we ask: Is public support for using the military as police widespread and generalized across the sample? Or are there patterns of support and opposition with respect to public opinion? Our main findings are that: 1) the armed forces rank at the top regarding trust, and -while trust in other Mexican institutions tended to decline in 2008-2010- trust in the military increased slightly; 2) respondents indicate that the military respects human rights more than the average and substantially more than the police or government generally; 3) public support for the military in fighting crime is strong and distributed evenly across the ideological spectrum and across socio-demographic groups, and 4) patterns of support emerge more clearly with respect to perceptions, attitudes, and performance judgments. By way of conclusion we consider some of the political and policy implications of our findings

Abstract:

We study a class of boundedly rational choice functions which operate as follows. The decision maker uses two criteria in two stages to make a choice. First, she shortlists the top two alternatives, i.e. two finalists, according to one criterion. Next, she chooses the winner in this binary shortlist using the second criterion. The criteria are linear orders that rank the alternatives. Only the winner is observable. We study the behavior exhibited by this choice procedure and provide an axiomatic characterization of it. We leave as an open question the characterization of a generalization to larger shortlists
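
To make the two-stage procedure concrete, here is a minimal sketch in Python. Representing the criteria as best-to-worst lists, and all names used, are our illustrative assumptions, not the paper's notation.

```python
# Minimal sketch of the two-stage shortlist procedure described above.
# The criteria are linear orders, represented here as lists ranked from
# best to worst; all names are illustrative, not taken from the paper.

def choose(menu, criterion1, criterion2):
    """Shortlist the top two alternatives by criterion1,
    then pick the winner of that binary shortlist by criterion2."""
    # Stage 1: the two finalists, i.e. the two best items in `menu`
    # according to criterion1 (lower index = better).
    finalists = sorted(menu, key=criterion1.index)[:2]
    # Stage 2: the winner among the finalists according to criterion2.
    return min(finalists, key=criterion2.index)

# Example: criterion1 ranks a > b > c > d, criterion2 ranks c > b > a > d.
c1 = ["a", "b", "c", "d"]
c2 = ["c", "b", "a", "d"]
print(choose({"a", "b", "c", "d"}, c1, c2))  # finalists {a, b}; winner "b"
```

Note that only the winner ("b" here) is observable to the analyst; the finalists themselves are not, which is what makes the axiomatic characterization non-trivial.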

Abstract:

In this study, we analyzed students' understanding of a complex calculus graphing problem. Students were asked to sketch the graph of a function, given its analytic properties (first and second derivatives, limits, and continuity) on specific intervals of the domain. The triad of schema development in the context of APOS theory was utilized to study students' responses. Two dimensions of understanding emerged, one involving properties and the other involving intervals. A student's coordination of the two dimensions is referred to as that student's overall calculus graphing schema. Additionally, a number of conceptual problems were consistently demonstrated by students throughout the study, and these difficulties are discussed in some detail

Abstract:

Array-based comparative genomic hybridization (aCGH) is a high-resolution, high-throughput technique for studying the genetic basis of cancer. The resulting data consist of log fluorescence ratios as a function of the genomic DNA location and provide a cytogenetic representation of the relative DNA copy number variation. Analysis of such data typically involves estimating the underlying copy number state at each location and segmenting regions of DNA with similar copy number states. Most current methods proceed by modeling a single sample/array at a time, and thus fail to borrow strength across multiple samples to infer shared regions of copy number aberrations. We propose a hierarchical Bayesian random segmentation approach for modeling aCGH data that uses information across arrays from a common population to yield segments of shared copy number changes. These changes characterize the underlying population and allow us to compare different population aCGH profiles to assess which regions of the genome have differential alterations. Our method, which we term Bayesian detection of shared aberrations in aCGH (BDSAScgh), is based on a unified Bayesian hierarchical model that allows us to obtain probabilities of alteration states as well as probabilities of differential alterations that correspond to local false discovery rates for both single and multiple groups. We evaluate the operating characteristics of our method via simulations and an application using a lung cancer aCGH data set. This article has supplementary material online

Abstract:

The quadratic and linear cash flow dispersion measures M2 and Ñ are two immunization risk measures designed to build immunized bond portfolios. This paper generalizes these two measures by showing that any dispersion measure is an immunization risk measure, thereby providing a tool for empirical testing. Each new measure is derived from a different set of shocks (changes in the term structure of interest rates) and depends on the corresponding subset of worst shocks. Consequently, a criterion for choosing appropriate immunization risk measures is to take those developed from the most reasonable sets of shocks and the associated subsets of worst shocks, and then select those that work best empirically. Adopting this approach, this paper then explores both numerical examples and a short empirical study on the Spanish bond market in the mid-1990s to show that measures between linear and quadratic are the most appropriate and, amongst them, the linear measure has the best properties. This confirms previous studies on the US and Canadian markets showing that maturity-constrained, duration-matched portfolios also have good empirical behavior
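
For orientation, the quadratic measure is usually written as the present-value-weighted dispersion of payment times around the portfolio duration; a standard rendering (our notation, which may differ from the paper's) is

$$
M^2 = \sum_{t} w_t\,(t-D)^2, \qquad
w_t = \frac{CF_t\,e^{-y t}}{\sum_{s} CF_s\,e^{-y s}}, \qquad
D = \sum_{t} w_t\, t,
$$

where $CF_t$ is the cash flow paid at time $t$ and $y$ is the continuously compounded yield; the linear analogue replaces the square with an absolute value, $\tilde{N} = \sum_t w_t\,\lvert t-D \rvert$.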

Abstract:

This paper presents a condition equivalent to the existence of a Riskless Shadow Asset that guarantees a minimum return when the asset prices are convex functions of interest rates or other state variables. We apply this lemma to immunize default-free and option-free coupon bonds and reach three main conclusions. First, we give a solution to an old puzzle: why do simple duration matching portfolios work well in empirical studies of immunization even though they are derived in a model inconsistent with equilibrium and shifts on the term structure of interest rates are not parallel, as assumed? Second, we establish a clear distinction between the concepts of immunized and maxmin portfolios. Third, we develop a framework that includes the main results of this literature as special cases. Next, we present a new strategy of immunization that consists in matching duration and minimizing a new linear dispersion measure of immunization risk

Abstract:

Given modal logics L1 and L2, their lexicographic product L1 x L2 is a new logic whose frames are the Cartesian products of an L1-frame and an L2-frame, but with new accessibility relations reminiscent of a lexicographic ordering. This article considers the lexicographic products of several modal logics with linear temporal logic (LTL) based on "next" and "always in the future". We provide axiomatizations for logics of the form L x LTL and define cover-simple classes of frames; we then prove that, under fairly general conditions, our axiomatizations are sound and complete whenever the class of L-frames is cover-simple. Finally, we prove completeness for several concrete logics of the form L x LTL

Abstract:

Classic institutionalism claims that even authoritarian and non-democratic regimes would prefer institutions where all members could make advantageous transactions. Thus, structural reform geared towards preventing and combating corruption should be largely preferred by all actors in any given setting. The puzzle, then, is why governments decide to maintain, or even create, inefficient institutions. A perfect example of this paradox is the establishment of the National Anti-corruption System (SNA) in Mexico. This is a watchdog institution, created to fight corruption, which is itself often portrayed as highly corrupt and inefficient. The limited scope of anti-corruption reforms in the country is explained by the institutional setting in which these reforms take place, where political behaviour is highly determined by embedded institutions that privilege centralized decision-making. Mexican reformers have historically privileged those reforms that increase their gains and power, and delayed and boycotted those that negatively affect them. Since anti-corruption reforms adversely affected rent extraction and diminished the power of a set of political actors, the bureaucrats who benefited from the current institutional setting embraced limited reforms or even boycotted them. Thus, to understand failed reforms it is necessary to understand the deep-rooted political institutions that shape the behaviour of political actors. This analysis is important for other modern democracies where powerful bureaucratic minorities are often able to block changes that would be costly to their interests, even if the changes would increase net gains for the country as a whole

Abstract:

In this paper we study the problem of Hamiltonization of nonholonomic systems from a geometric point of view. We use gauge transformations by 2-forms (in the sense of Ševera and Weinstein in Progr Theoret Phys Suppl 144:145–154 2001) to construct different almost Poisson structures describing the same nonholonomic system. In the presence of symmetries, we observe that these almost Poisson structures, although gauge related, may have fundamentally different properties after reduction, and that brackets that Hamiltonize the problem may be found within this family. We illustrate this framework with the example of rigid bodies with generalized rolling constraints, including the Chaplygin sphere rolling problem. We also see through these examples how twisted Poisson brackets appear naturally in nonholonomic mechanics

Abstract:

We propose a novel method for reliably inducing stress in drivers for the purpose of generating real-world participant data for machine learning, using both scripted in-vehicle stressor events and unscripted on-road stressors such as pedestrians and construction zones. On-road drives took place in a vehicle outfitted with an experimental display that led drivers to believe they had prematurely run out of charge on an isolated road. We describe the elicitation method, course design, instrumentation, data collection procedure and the post-hoc labeling of unplanned road events to illustrate how rich data about a variety of stress-related events can be elicited from study participants on-road. We validate this method with data including psychophysiological measurements, video, voice, and GPS data from (N=20) participants. Results from algorithmic psychophysiological stress analysis were validated using participant self-reports. Results of stress elicitation analysis show that our method elicited a stress-state in 89% of participants

Abstract:

Do economic incentives explain forced displacement during conflict? This paper examines this question in Colombia, which has had one of the world's most acute situations of internal displacement associated with conflict. Using data on the price of bananas along with data on historical levels of production, I find that price increases generate more forced displacement in municipalities more suitable to produce this good. However, I also show that this effect is concentrated in the period in which paramilitary power and operations reached an all-time peak. Additional evidence shows that land concentration among the rich has increased substantially in districts that produce these goods. These findings are consistent with extensive qualitative evidence that documents the link between economic interests and local political actors who collude with illegal armed groups to forcibly displace locals and appropriate their land, especially in areas with more informal land tenure systems, like those where bananas are grown more frequently

Abstract:

This study was undertaken to explore pre-service teachers' understanding of injections and surjections. There were 54 pre-service teachers specialising in the teaching of Mathematics in Grades 10–12 curriculum who participated in the project. The concepts were covered as part of a real analysis course at a South African university. Questionnaires based on an initial genetic decomposition of the concepts of surjective and injective functions were administered to the 54 participants. Their written responses, which were used to identify the mental constructions of these concepts, were analysed using an APOS (action-process-object-schema) framework and five interviews were carried out. The findings indicated that most participants constructed only Action conceptions of bijection and none demonstrated the construction of an Object conception of this concept. Difficulties in understanding can be related to students' lack of construction of the concepts of functions and sets that are a prerequisite to working with bijections

Resumen:

El autor intenta deducir una teoría poética del escritor mexicano partiendo de su obra, que divide en tres partes; enseguida, tras un interludio –“la noche obscura del poeta”– trata la función del recuerdo como estímulo literario. Y termina con un apunte hacia el popularismo artístico de Alfonso Reyes

Abstract:

The author attempts to formulate a poetic theory of this Mexican writer based on his work, which he divides into three parts; after an interlude –“the poet’s darkest night”– he studies how remembrance works as a literary stimulus and concludes by commenting on Alfonso Reyes’ artistic popularism

Abstract:

The cognitive domains of a communication scheme for learning physics are related to a framework based on epistemology, and the planning of an introductory calculus textbook in classical mechanics is shown as an example of application

Abstract:

Uniform inf-sup conditions are of fundamental importance for the finite element solution of problems in incompressible fluid mechanics, such as the Stokes and Navier–Stokes equations. In this work we prove a uniform inf-sup condition for the lowest-order Taylor–Hood pairs Q2×Q1 and P2×P1 on a family of affine anisotropic meshes. These meshes may contain refined edge and corner patches. We identify necessary hypotheses for edge patches to allow uniform stability and sufficient conditions for corner patches. For the proof, we generalize Verfürth’s trick and recent results by some of the authors. Numerical evidence confirms the theoretical results
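
For reference, the uniform discrete inf-sup (LBB) condition at issue can be written, in standard notation (ours, matching the usual statement rather than the paper's), as

$$
\inf_{0 \neq q_h \in Q_h}\; \sup_{0 \neq \mathbf{v}_h \in V_h}\;
\frac{\int_{\Omega} q_h \,\operatorname{div} \mathbf{v}_h \, dx}
{\lVert \mathbf{v}_h \rVert_{1}\, \lVert q_h \rVert_{0}} \;\ge\; \beta > 0,
$$

where the point of uniformity is that $\beta$ is independent of the mesh family, in particular of the aspect ratio of the anisotropic elements.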

Abstract:

In this work we present and analyze new inf-sup stable, and stabilised, finite element methods for the Oseen equation on anisotropic quadrilateral meshes. The meshes are formed of closed parallelograms, and the analysis is restricted to two space dimensions. Starting with the lowest-order Q1^2 × P0 pair, we first identify the pressure components that make this finite element pair non-inf-sup stable, especially with respect to the aspect ratio. We then propose a way to penalise them, both strongly, by directly removing them from the space, and weakly, by adding a stabilisation term based on jumps of the pressure across selected edges. Concerning the velocity stabilisation, we propose an enhanced grad-div term. Stability and optimal a priori error estimates are given, and the results are confirmed numerically

Abstract:

In his landmark article, Richard Morris (1981) introduced a set of rat experiments intended “to demonstrate that rats can rapidly learn to locate an object that they can never see, hear, or smell provided it remains in a fixed spatial location relative to distal room cues” (p. 239). These experimental studies have greatly impacted our understanding of rat spatial cognition. In this article, we address a spatial cognition model primarily based on hippocampus place cell computation where we extend the prior Barrera–Weitzenfeld model (2008) intended to allow navigation in mazes containing corridors. The current work extends beyond the limitations of corridors to enable navigation in open arenas where a rat may move in any direction at any time. The extended work reproduces Morris’s rat experiments through virtual rats that search for a hidden platform using visual cues in a circular open maze analogous to the Morris water maze experiments. We show results with virtual rats comparing them to Morris’s original studies with rats

Abstract:

The study of behavioral and neurophysiological mechanisms involved in rat spatial cognition provides a basis for the development of computational models and robotic experimentation of goal-oriented learning tasks. These models and robotic architectures offer neurobiologists and neuroethologists alternative platforms to study, analyze and predict spatial-cognition-based behaviors. In this paper we present a comparative analysis of spatial cognition in rats and robots by contrasting similar goal-oriented tasks in a cyclical maze, where studies in rat spatial cognition are used to develop computational system-level models of the hippocampus and striatum integrating kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. During training, Hebbian learning and reinforcement learning, in the form of an Actor-Critic architecture, enable robots to learn the optimal route leading to a goal from a designated fixed location in the maze. During testing, robots exploit maximum expectations of reward stored within the previously acquired cognitive map to reach the goal from different starting positions. A detailed discussion of comparative experiments in rats and robots is presented contrasting learning latency while characterizing behavioral procedures during navigation such as errors associated with the selection of a non-optimal route, body rotations, normalized length of the traveled path, and hesitations. Additionally, we present results from evaluating neural activity in rats through detection of the immediate early gene Arc to verify the engagement of the hippocampus and striatum in information processing while solving the cyclical maze task, just as robots use our corresponding models of those neural structures
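
The Actor-Critic learning named above admits a compact illustration. The following is a generic tabular sketch under our own simplifications (discrete states and actions, a softmax policy, and a placeholder environment), not the authors' implementation.

```python
import numpy as np

# Generic tabular Actor-Critic update: the critic learns state values via
# TD errors, and the same TD error reinforces the actor's action preferences.
rng = np.random.default_rng(0)
n_states, n_actions = 16, 4              # e.g. maze cells, movement directions
V = np.zeros(n_states)                   # critic: state-value estimates
prefs = np.zeros((n_states, n_actions))  # actor: action preferences
alpha_v, alpha_p, gamma = 0.1, 0.1, 0.95

def policy(s):
    p = np.exp(prefs[s] - prefs[s].max())  # softmax over preferences
    return p / p.sum()

def step(s, a):
    # Placeholder environment: random transition, reward at a "goal" state.
    s_next = rng.integers(n_states)
    return s_next, float(s_next == n_states - 1)

s = 0
for _ in range(1000):
    a = rng.choice(n_actions, p=policy(s))
    s_next, r = step(s, a)
    td_error = r + gamma * V[s_next] - V[s]  # critic's TD error
    V[s] += alpha_v * td_error               # critic update
    prefs[s, a] += alpha_p * td_error        # actor reinforced by TD error
    s = s_next
```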

Abstract:

Anticipation of the sensory consequences of actions is critical for the predictive control of movement that explains most of our sensory-motor behaviors. Many neuroscientific studies in humans suggest evidence of anticipatory mechanisms based on internal models. Several robotic implementations of predictive behaviors have been inspired by those biological mechanisms in order to achieve adaptive agents. This paper provides an overview of such neuroscientific and robotic evidence; a high-level architecture of sensory-motor coordination based on anticipatory visual perception and internal models is then introduced; and finally, the paper concludes by discussing the relevance of the proposed architecture within the context of current research in humanoid robotics

Abstract:

The study of spatial memory and learning in rats has inspired the development of multiple computational models that have led to novel robotics architectures. Evaluation of computational models and the resulting robotic architectures is usually carried out at the behavioral level by evaluating experimental tasks similar to those performed with rats. While multiple metrics are defined to evaluate behavioral performance in rats, metrics for robot task evaluation are very limited, mostly to success/failure and time to complete the task. In this paper we present a set of metrics taken from rat spatial memory and learning evaluation to further analyze performance in robots. The proposed set of metrics, learning latency and ability to navigate the minimal distance to the goal, should offer the robotics community additional tools to assess performance and validity of models in biologically-inspired robotic architectures at the task performance level. We also provide a comparative evaluation using these metrics between similar spatial tasks performed by rat and robot in comparable environments

Abstract:

In this paper we present a comparative behavioral analysis of spatial cognition in rats and robots by contrasting a similar goal-oriented task in a cyclical maze, where a computational system-level model of rat spatial cognition is used integrating kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. A discussion of experiments in rats and robots is presented contrasting learning latency while characterizing behavioral procedures such as body rotations during navigation and selection of routes to the goal

Abstract:

This paper presents a robot architecture with spatial cognition and navigation capabilities that captures some properties of the rat brain structures involved in learning and memory. This architecture relies on the integration of kinesthetic and visual information derived from artificial landmarks, as well as on Hebbian learning, to build a holistic topological-metric spatial representation during exploration, and employs reinforcement learning by means of an Actor-Critic architecture to enable learning and unlearning of goal locations. From a robotics perspective, this work can be placed in the gap that currently exists between mapping and map exploitation in the SLAM literature. The exploitation of the cognitive map allows the robot to recognize places already visited and to find a target from any given departure location, thus enabling goal-directed navigation. From a biological perspective, this study aims at initiating a contribution to experimental neuroscience by providing the system as a tool to test with robots hypotheses concerned with the underlying mechanisms of rats' spatial cognition. Results from different experiments with a mobile AIBO robot inspired by classical spatial tasks with rats are described, and a comparative analysis is provided in reference to the reversal task devised by O'Keefe in 1983

Abstract:

A computational model of spatial cognition in rats is used to control an autonomous mobile robot while solving a spatial task within a cyclic maze. In this paper we evaluate the robot's behavior in terms of place recognition in multiple directions and goal-oriented navigation against the results derived from experimenting with laboratory rats solving the same spatial task in a similar maze. We provide a general description of the bio-inspired model, and a comparative behavioral analysis between rats and robot

Abstract:

In this paper we present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe place representation and recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model by training a robot to find the goal in a maze starting from a fixed location, and by testing it to reach the same target from new different starting locations

Abstract:

We present a model designed on the basis of the rat's brain neurophysiology to provide a robot with spatial cognition and goal-oriented navigation capabilities. We describe target learning and place recognition processes in rats as the basis for topological map building and exploitation by robots. We experiment with the model in different maze configurations by training a robot to find the goal starting from a fixed location, and by testing it to reach the same target from new different starting locations

Abstract:

In this paper we present a model designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations dynamically moved in different environments, to build a topological map, and to return home autonomously. We describe robot experimentation results from our tests in a T-maze, an 8-arm radial maze, and an extended maze

Abstract:

In this paper we present a model composed of layers of neurons designed on the basis of the neurophysiology of the rat hippocampus to control the navigation of a real robot. The model allows the robot to learn reward locations in different mazes and to return home autonomously by building a topological map of the environment. We describe robotic experimentation results from our tests in a T-maze, an 8-arm radial maze and a 3-T shaped maze

Abstract:

In this paper we present a biologically-inspired robotic exploration and navigation model based on the neurophysiology of the rat hippocampus that allows a robot to find goals and return home autonomously by building a topological map of the environment. We present simulation and experimentation results from a T-maze testbed and discuss future research

Abstract:

The time-dependent restricted (n+1)-body problem concerns the study of a massless body (satellite) under the influence of the gravitational field generated by n primary bodies following a periodic solution of the n-body problem. We prove that the satellite has periodic solutions close to the large-amplitude circular orbits of the Kepler problem (comet solutions), and, in the case that the primaries are in a relative equilibrium, close to small-amplitude circular orbits near a primary body (moon solutions). The comet and moon solutions are constructed with the application of a Lyapunov–Schmidt reduction to the action functional. In addition, using reversibility techniques, we compute numerically the comet and moon solutions for the case of four primaries following the super-eight choreography

Abstract:

We aim to associate a cytokine profile obtained through data mining with the clinical characteristics of patient subgroups with advanced non-small-cell lung cancer (NSCLC). Our results provide evidence that complex cytokine networks may be used to identify patient subgroups with different prognoses in advanced NSCLC that could serve as potential biomarkers for best treatment choices

Resumen:

La neumonitis por hipersensibilidad (NH) es una enfermedad inflamatoria difusa del parénquima pulmonar provocada por la inhalación repetida de partículas orgánicas. Las células dendríticas y sus precursores desempeñan un papel importante no sólo como células presentadoras de antígenos, sino también como parte de una red de procesos inmunorregulatorios. Dependiendo de su linaje y estado de diferenciación y activación, las células dendríticas pueden promover una intensa respuesta inmunológica por parte de los linfocitos T o bien, producir un estado de anergia. Objetivo: Caracterizar fenotípicamente las células dendríticas de origen mieloide (CDm) y plasmacitoide (CDp) presentes en el lavado bronquioalveolar (LBA) de pacientes con NH en etapas subaguda o crónica y compararlas con lo observado en pacientes con fibrosis pulmonar idiopática (FPI) y sujetos control

Abstract:

Hypersensitivity pneumonitis (HP) is a diffuse inflammatory disease of the lung parenchyma resulting from repetitive inhalation of organic particles. Dendritic cells and their precursors play an important role not only as antigen-presenting cells but also as part of an immunoregulatory network. Depending on their lineage and stage of differentiation and activation, dendritic cells can promote a strong T-lymphocyte-mediated immunological response or an anergy state. Objective: To phenotypically characterize myeloid (mDCs) and plasmacytoid (pDCs) dendritic cells recovered in bronchoalveolar lavage from patients with subacute or chronic HP, and to compare the results with those obtained in patients with idiopathic pulmonary fibrosis (IPF) and healthy controls. Methods: BAL cells from 8 patients with subacute HP, 8 with chronic HP, 8 with IPF and 4 healthy subjects were used. The phenotypes of dendritic cell subpopulations were characterized by means of flow cytometry

Abstract:

The monitoring of patients with dementia who receive comprehensive care in day centers allows formal caregivers to make better decisions and provide better care to patients. For instance, cognitive and physical therapies can be tailored based on the current stage of disease progression. In the context of the day centers of the Mexican Federation of Alzheimer, this work aims to design and evaluate Alzaid, a technological platform for assisting formal caregivers in monitoring patients with dementia. Alzaid was devised using a participatory design methodology that consisted of eliciting requirements from 22 participants and validating them with 9 participants; the unified requirements then guided the construction of a high-fidelity prototype evaluated by 14 participants. The participants were formal caregivers, medical staff, and management. This work contributes a high-fidelity prototype of a technological platform for assisting formal caregivers in monitoring patients with dementia, considering the restrictions and requirements of four Mexican day centers. In general, the participants perceived the prototype as quite likely to be useful, usable, and relevant in the job of monitoring patients with dementia (p-value < 0.05). By designing and evaluating Alzaid, which unifies the monitoring requirements of four day centers, this work is a first effort towards a standard monitoring process for patients with dementia in the context of the Mexican Federation of Alzheimer

Abstract:

Many Americans continued some forms of social distancing after the pandemic. This phenomenon is stronger among older persons, less educated individuals, and those who interact daily with persons at high risk from infectious diseases. Regression models fit to individual-level data suggest that social distancing lowered labor force participation by 2.4 percentage points in 2022, 1.2 points on an earnings-weighted basis. When combined with simple equilibrium models, our results imply that the social distancing drag on participation reduced US output by $205 billion in 2022, shrank the college wage premium by 2.1 percentage points, and modestly steepened the cross-sectional age-wage profile

Abstract:

This paper studies how biases in managerial beliefs affect managerial decisions, firm performance, and the macroeconomy. Using a new survey of US managers I establish three facts. (1) Managers are not overoptimistic: sales growth forecasts on average do not exceed realizations. (2) Managers are overprecise: they underestimate future sales growth volatility. (3) Managers overextrapolate: their forecasts are too optimistic after positive shocks and too pessimistic after negative shocks. To quantify the implications, I estimate a dynamic general equilibrium model in which managers of heterogeneous firms use a subjective beliefs process to make forward-looking hiring decisions. Overprecision and overextrapolation lead managers to overreact to firm-level shocks and overspend on adjustment costs, destroying 2.1% to 6.8% of the typical firm's value. Pervasive overreaction leads to excess volatility and reallocation, lowering consumer welfare by 0.5% to 2.3% relative to the rational-expectations equilibrium. These findings suggest overreaction could amplify asset-price and business-cycle fluctuations

Abstract:

As knowledge workers have shifted to hybrid, we're not seeing an equivalent drop in demand for office space. New survey data suggests cuts in office space of 1% to 2% on average. There are three trends driving this: 1) Workers are uncomfortable with density, and the only surefire way to reduce density is to cut person days on site without cutting square footage; 2) Most employees want to work from home on Mondays and Fridays, which means the shift to hybrid affords only meager opportunities to economize on office space; and 3) Employers are reshaping office space to become more inviting social spaces that encourage face-to-face collaboration, creativity, and serendipitous interactions

Abstract:

Drawing on data from the firm-level Survey of Business Uncertainty, we present three pieces of evidence that COVID-19 is a persistent reallocation shock. First, rates of excess job and sales reallocation over 24-month periods (looking back 12 months and ahead 12 months) have risen sharply since the pandemic struck, especially for sales. Second, as of December 2020, firm-level forecasts of sales revenue growth over the next year imply a continuation of recent changes, not a reversal. Third, COVID-19 shifted relative employment growth trends in favor of industries with a high capacity for employees to work from home

Abstract:

Employees want to work from home 2.5 days a week on average, according to a monthly survey of 5,000 Americans. Desires to work from home and cut commuting have strengthened as the pandemic has lingered, and many have become increasingly comfortable with remote interactions. The rapid spread of the Delta variant is also undercutting the drive for a full-time return to the office any time soon. Tight labor markets are also a challenge for firms that want a full-time return

Abstract:

About one-fifth of paid workdays will be supplied from home in the post-pandemic economy, and more than one-fourth on an earnings-weighted basis. In view of this projection, we consider some implications of home internet access quality, exploiting data from the new Survey of Working Arrangements and Attitudes. Moving to high-quality, fully reliable home internet service for all Americans ("universal access") would raise earnings-weighted labor productivity by an estimated 1.1% in the coming years. The implied output gains are $160 billion per year, or $4 trillion when capitalized at a 4% rate. Estimated flow output payoffs to universal access are nearly three times as large in economic disasters like the COVID-19 pandemic. Our survey data also say that subjective well-being was higher during the pandemic for people with better home internet service conditional on age, employment status, earnings, working arrangements, and other controls. In short, universal access would raise productivity, and it would promote greater economic and social resilience during future disasters that inhibit travel and in-person interactions

Abstract:

We develop several pieces of evidence about the reallocative effects of the COVID-19 shock on impact and over time. First, the shock caused three to four new hires for every ten layoffs from March 1 to mid-May 2020. Second, we project that one-third or more of the layoffs during this period are permanent in the sense that job losers won’t return to their old jobs at their previous employers. Third, firm-level forecasts at a one-year horizon imply rates of expected job and sales reallocation that are two to five times larger from April to June 2020 than before the pandemic. Fourth, full days working from home will triple from 5 percent of all workdays in 2019 to more than 15 percent after the pandemic ends. We also document pandemic-induced job gains at many firms and a sharp rise in cross-firm equity return dispersion in reaction to the pandemic. After developing the evidence, we consider implications for the economic outlook and for policy. Unemployment benefit levels that exceed worker earnings, policies that subsidize employee retention irrespective of the employer’s commercial outlook, and barriers to worker mobility and business formation impede reallocation responses to the COVID-19 shock

Abstract:

Economic uncertainty jumped in reaction to the COVID-19 pandemic, with most indicators reaching their highest values on record. Alongside this rise in uncertainty has been an increase in the downside tail-risk reported by firms. This uncertainty has played three roles: first, amplifying the drop in economic activity early in the pandemic; second, slowing the subsequent recovery; and finally, reducing the impact of policy, as uncertainty tends to make firms more cautious in responding to changes in business conditions. As such, the incredibly high levels of uncertainty are a major impediment to a rapid recovery. We also discuss three other factors exacerbating the situation: the need for massive reallocation as COVID-19 permanently reshapes the economy; the rise in working from home, which is impeding firm hiring; and the ongoing medical uncertainty over the extent and duration of the pandemic. Collectively, these conditions are generating powerful headwinds against a rapid recovery from the COVID-19 recession

Abstract:

The Dirichlet process mixture model and more general mixtures based on discrete random probability measures have been shown to be flexible and accurate models for density estimation and clustering. The goal of this paper is to illustrate the use of normalized random measures as mixing measures in nonparametric hierarchical mixture models and point out how possible computational issues can be successfully addressed. To this end, we first provide a concise and accessible introduction to normalized random measures with independent increments. Then, we explain in detail a particular way of sampling from the posterior using the Ferguson–Klass representation. We develop a thorough comparative analysis for location-scale mixtures that considers a set of alternatives for the mixture kernel and for the nonparametric component. Simulation results indicate that normalized random measure mixtures potentially represent a valid default choice for density estimation problems. As a byproduct of this study an R package to fit these models was produced and is available in the Comprehensive R Archive Network (CRAN)
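
As a toy illustration of the mixture models discussed above, the Dirichlet process is the best-known special case of these normalized random measures. The sketch below draws a truncated stick-breaking approximation of a random mixing measure and samples from the induced Gaussian location mixture; this is our own simplification, not the paper's Ferguson–Klass sampler.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, truncation = 1.0, 100

# Stick-breaking weights: w_k = v_k * prod_{j<k}(1 - v_j), v_k ~ Beta(1, alpha)
v = rng.beta(1.0, alpha, size=truncation)
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

# Atoms drawn from the base measure (here, normal locations).
atoms = rng.normal(0.0, 3.0, size=truncation)

# Sample from the mixture: pick an atom by its weight, add kernel noise.
idx = rng.choice(truncation, size=500, p=w / w.sum())
samples = atoms[idx] + rng.normal(0.0, 0.5, size=500)
print(samples[:5])
```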

Resumen:

Objetivo. Caracterizar el impacto de la infección por SARS-CoV-2 en trabajadores de una gran empresa esencial del Área Metropolitana de la Ciudad de México, utilizando prevalencia puntual de infección aguda, prevalencia puntual de infección pasada a través de anticuerpos séricos e incapacidades temporales para el trabajo por enfermedad respiratoria (ITT-ER). Material y métodos. Cuatro encuestas aleatorias: tres durante 2020, antes de la disponibilidad de vacunas, y una en diciembre de 2021, después de esta. Desenlaces: prevalencia puntual de infección aguda a través de pruebas de PCR (polymerase chain reaction, por sus siglas en inglés) en saliva, prevalencia puntual de infección pasada a través de anticuerpos séricos contra Covid-19 (niveles de anticuerpos S/N), ITT-ER y prevalencia de síntomas durante los seis meses anteriores. Resultados. La prevalencia de casos positivos para SARS-CoV-2 fue de 1.29-4.88% y, en promedio, la cuarta parte de los participantes presentó anticuerpos antes de la vacunación; más de la mitad de los participantes con ITT-ER tenían anticuerpos. Las posibilidades de tener anticuerpos fueron 6-7 veces mayores entre aquellos con ITT-ER. Conclusiones. Los altos niveles de anticuerpos contra el Covid-19 en la población de estudio reflejan que la cobertura es alta entre los trabajadores de esta industria. Las ITT son una herramienta útil para rastrear epidemias en el lugar de trabajo

Abstract:

Objective. To characterize the impact of SARS-CoV-2 infection in workers from an essential large-scale company in the Greater Mexico City Metropolitan Area using point prevalence of acute infection, point prevalence of past infection through serum antibodies, and respiratory disease short-term disability claims (RD-STDC). Materials and methods. Four randomized surveys: three during 2020, before vaccines became available, and one after (December 2021). Outcomes: point prevalence of acute infection through saliva PCR (polymerase chain reaction) testing, point prevalence of past infection through serum antibodies against Covid-19, RD-STDCs and prevalence of symptoms during the previous six months. Results. Prevalence of SARS-CoV-2 cases was 1.29-4.88%; on average, a quarter of participants were seropositive pre-vaccination, and over half of participants with an RD-STDC had antibodies. The odds of having antibodies were 6-7 times higher among workers with an RD-STDC. Conclusions. The high antibody levels against Covid-19 in this study population reflect that coverage is high among workers in this industry. STDCs are a useful tool to track workplace epidemics

Abstract:

Chaotic properties of the dynamics of Toeplitz operators on the Hardy–Hilbert space H2(D) are studied. Based on previous results of Shkarin and of Baranov and Lishanskii, a characterization of different versions of chaos, formulated in terms of the coefficients of the symbol, is obtained for the tridiagonal case. In addition, easily computable sufficient conditions that depend on the coefficients are found for the chaotic behavior of certain Toeplitz operators
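
For orientation, in the tridiagonal case referred to above the symbol has three nonzero diagonal coefficients; a standard way to write the setting (our notation, not necessarily the paper's) is

$$
\varphi(z) = \frac{a}{z} + b + c\,z, \qquad
T_{\varphi} f = P_{+}\,(\varphi f), \quad f \in H^2(\mathbb{D}),
$$

where $a$, $b$, $c$ are the coefficients on the three diagonals of the Toeplitz matrix and $P_{+}$ is the orthogonal projection onto $H^2$; the characterizations of chaos are then stated as conditions on $a$, $b$, $c$.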

Resumen:

Este artículo realiza una revisión sistemática de treinta y seis taxonomías de riesgos asociados a la Inteligencia Artificial (IA) que se han realizado desde el 2010 hasta la fecha, utilizando como metodología el protocolo Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). El estudio se basa en la importancia de estas para estructurar la investigación de los riesgos y para distinguir y definir amenazas. Ello permite identificar las cuestiones que generan mayor preocupación y, por lo tanto, requieren mejor gobernanza. La investigación permite extraer tres conclusiones. En primer lugar, se observa que la mayoría de los estudios se centran en amenazas como la privacidad y la desinformación, posiblemente debido a su concreción y evidencia empírica existente. Por el contrario, amenazas como los ciberataques y el desarrollo de tecnologías estratégicas son menos citadas, a pesar de su creciente relevancia. En segundo lugar, encontramos que los artículos enfocados en el origen del riesgo tienden a considerar más frecuentemente riesgos extremos en comparación con los trabajos que abordan las consecuencias. Esto sugiere que la literatura ha sabido identificar las potenciales causas de una catástrofe, pero no las formas concretas en las que esta se puede materializar en la práctica. Finalmente, existe una cierta división entre aquellos artículos que tratan daños tangibles presentes y aquellos que cubren daños potenciales futuros. No obstante, varias amenazas se tratan en la mayoría de los artículos de todo el espectro indicando que existen puntos de unión entre clústeres

Abstract:

This article performs a systematic review of thirty-six taxonomies of risks associated with Artificial Intelligence (AI) conducted from 2010 to date, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol as its methodology. The study rests on the importance of such taxonomies for structuring risk research and for distinguishing and defining threats. This makes it possible to identify the issues that are of greatest concern and therefore require better governance. Three conclusions can be drawn from the research. First, it is observed that most studies focus on threats such as privacy and disinformation, possibly due to their concreteness and the existing empirical evidence. In contrast, threats such as cyberattacks and the development of strategic technologies are less cited, despite their increasing relevance. Second, we find that articles focused on the origin of risk tend to consider extreme risks more frequently than papers addressing consequences. This suggests that the literature has been able to identify the potential causes of a catastrophe, but not the concrete ways in which it may materialize in practice. Finally, there is some division between those articles that deal with present tangible damage and those that cover potential future damage. Nevertheless, several threats are addressed in the majority of articles across the spectrum, indicating that there are commonalities between clusters

Abstract:

This work studies ultra-wideband (UWB) communications over multipath residential indoor channels. We study the relationship between the fading margin and the transmitter–receiver separation distance for both the line-of-sight and non-line-of-sight scenarios. Impairments such as small-scale fading as well as large-scale fading are considered. Some implications of the results for UWB indoor network design are discussed
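
As background, distance-dependent link budgets of this kind are commonly described with a log-distance shadowing model; a generic form (not necessarily the one fitted in the paper) is

$$
PL(d) = PL(d_0) + 10\, n \,\log_{10}\!\left(\frac{d}{d_0}\right) + X_{\sigma},
$$

where $n$ is the path-loss exponent (typically larger in non-line-of-sight than in line-of-sight conditions) and $X_{\sigma}$ is a zero-mean large-scale fading term; the fading margin is the extra link budget reserved so the received power stays above the receiver sensitivity for a prescribed percentile of the combined large- and small-scale fading.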

Abstract:

We study a repeated game with payoff externalities and observable actions where two players receive information over time about an underlying payoff-relevant state, and strategically coordinate their actions. Players learn about the true state from private signals, as well as the actions of others. They commonly learn the true state (Cripps et al., 2008), but do not coordinate in every equilibrium. We show that there exist stable equilibria in which players can overcome unfavorable signal realizations and eventually coordinate on the correct action, for any discount factor. For high discount factors, we show that in addition players can also achieve efficient payoffs

Abstract:

Energy resulting from an impact is manifested through unwanted damage to objects or persons. New materials made of cellular structures have enhanced energy absorption (EA) capabilities. The hexagonal honeycomb is widely known for its space-filling capacity, structural stability, and high EA potential. Additive manufacturing (AM) technologies have been effectively useful in a vast range of applications. The evolution of these technologies has been studied continuously, with a focus on improving the mechanical and structural characteristics of three-dimensional (3D)-printed models to create complex quality parts that satisfy design and mechanical requirements. In this study, 3D honeycomb structures of the novel material polyethylene terephthalate glycol (PET-G) were fabricated by the fused deposition modeling (FDM) method with different infill density values (30%, 70%, and 100%) and printing orientations (edge, flat, and upright). The effectiveness of the design for EA and the effect of the process parameters of infill density and layer printing orientation were investigated by performing in-plane compression tests, and the set of parameters that produced superior results for better EA was determined by analyzing the area under the curve and the welding between the filament layers in the printed object via FDM. The results showed that the printing parameters implemented in this study considerably affected the mechanical properties of the 3D-printed PET-G honeycomb structure. The structure with the upright printing orientation and 100% infill density exhibited extended resistance to delamination and fragmentation and, thus, a desirable performance with a long plateau region in the load-displacement curve and major absorption of energy
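
The EA metric named above is the area under the load-displacement curve from the compression test; the sketch below shows that calculation in Python. The data, units, and specimen mass are placeholder assumptions for illustration, not values from the study.

```python
import numpy as np

# Stand-in load-displacement data for an in-plane compression test.
displacement_mm = np.linspace(0.0, 30.0, 301)        # crushing displacement
load_kN = 2.0 + 0.1 * np.sin(displacement_mm / 3.0)  # placeholder plateau curve

# EA = area under the load-displacement curve (trapezoidal rule).
# Units: kN * mm = J, since 1 kN * 1 mm = 1000 N * 0.001 m = 1 J.
ea_joules = float(np.sum(
    0.5 * (load_kN[1:] + load_kN[:-1]) * np.diff(displacement_mm)))

# Specific EA (per unit mass) is a common way to compare infill densities
# and print orientations; the mass here is a placeholder value.
specimen_mass_kg = 0.05
sea = ea_joules / specimen_mass_kg
print(f"EA = {ea_joules:.1f} J, SEA = {sea:.1f} J/kg")
```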

Abstract:

The question whether a minimum rate of sick pay should be mandated is much debated. We study the effects of this kind of intervention with student subjects in an experimental laboratory setting rich enough to allow for moral hazard, adverse selection, and crowding out of good intentions. Both wages and replacement rates offered by competing employers are reciprocated by workers. However, replacement rates are only reciprocated as long as no minimum level is mandated. Although we observe adverse selection when workers have different exogenous probabilities for being absent from work, this does not lead to a market breakdown. In our experiment, mandating replacement rates actually leads to a higher voluntary provision of replacement rates by employers

Abstract:

Computation of the volume of space required for a robot to execute a sweeping motion from a start to a goal has long been identified as a critical primitive operation in both task and motion planning. However, swept volume computation is particularly challenging for multi-link robots with geometric complexity, e.g., manipulators, due to the non-linear geometry. While earlier work has shown that deep neural networks can approximate the swept volume quantity, a useful parameter in sampling-based planning, general network structures do not lend themselves to outputting geometries. In this paper we train and evaluate the learning of a deep neural network that predicts the swept volume geometry from pairs of robot configurations and outputs discretized voxel grids. We perform this training on a variety of robots from 6 to 16 degrees of freedom. We show that most errors in the prediction of the geometry lie within a distance of 3 voxels from the surface of the true geometry and it is possible to adjust the rates of different error types using a heuristic approach. We also show it is possible to train these networks at varying resolutions by training networks with up to 4x smaller grid resolution with errors remaining close to the boundary of the true swept volume geometry surface
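
The kind of network described above takes a pair of robot configurations in and produces a discretized voxel occupancy grid out. Below is a minimal PyTorch sketch; the sizes, layer choices, and training loss are our illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

DOF = 7    # e.g. a 7-DoF manipulator (assumption; the paper spans 6 to 16)
GRID = 32  # 32^3 output voxel grid (assumption)

class SweptVolumeNet(nn.Module):
    """Map a (start, goal) configuration pair to per-voxel occupancy logits."""
    def __init__(self, dof=DOF, grid=GRID):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(2 * dof, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, grid ** 3),  # one logit per voxel
        )

    def forward(self, q_start, q_goal):
        x = torch.cat([q_start, q_goal], dim=-1)
        return self.net(x).view(-1, self.grid, self.grid, self.grid)

model = SweptVolumeNet()
q0, q1 = torch.randn(4, DOF), torch.randn(4, DOF)
occupancy_logits = model(q0, q1)  # shape (4, 32, 32, 32)

# Training would use a per-voxel binary loss against ground-truth swept
# volumes (random targets here, purely as a shape check).
loss = nn.functional.binary_cross_entropy_with_logits(
    occupancy_logits, torch.randint(0, 2, occupancy_logits.shape).float())
```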

Abstract:

This article is devoted to the design of robust position-tracking controllers for a perturbed wheeled mobile robot. We address the final objective of pose-regulation in a predefined time, which means that the robot position and orientation must reach desired final values simultaneously in a user-defined time. To do so, we propose the robust tracking of adequate trajectories for position coordinates, enforcing that the robot's heading evolves tangent to the position trajectory and consequently the robot reaches a desired orientation. The robust tracking is achieved by a proportional-integral action or by a super-twisting sliding mode control. The main contribution of this article is a kinematic control approach for pose-regulation of wheeled mobile robots in which the orientation angle is not directly controlled in the closed-loop, which simplifies the structure of the control system with respect to existing approaches. An offline trajectory planning method based on parabolic and cubic curves is proposed and integrated with robust controllers to achieve good accuracy in the final values of position and orientation. The novelty in the trajectory planning is the generation of a set of candidate trajectories and the selection of one of them that favors the correction of the robot's final orientation. Realistic simulations and experiments using a real robot show the good performance of the proposed scheme even in the presence of strong disturbances
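
For concreteness, the super-twisting algorithm mentioned above has a well-known standard form. A minimal, self-contained simulation sketch on scalar error dynamics s' = u + d (the gains, time step, and disturbance are illustrative choices, not the article's tuning):

```python
import numpy as np

# Super-twisting sliding mode control on a scalar sliding variable s:
#   u = -k1 * sqrt(|s|) * sign(s) + v,    v' = -k2 * sign(s)
# which steers s to zero in finite time despite a bounded, Lipschitz
# matched disturbance d(t), using only measurements of s.
k1, k2 = 4.0, 6.0
dt, T = 1e-3, 5.0
s, v = 1.0, 0.0

for k in range(int(T / dt)):
    t = k * dt
    d = 0.5 * np.sin(2 * np.pi * t)           # bounded disturbance
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v += -k2 * np.sign(s) * dt                # integral (twisting) term
    s += (u + d) * dt                         # error dynamics s' = u + d

print(f"|s(T)| = {abs(s):.2e}")               # near zero despite d(t)
```

In the article's setting, s would be one component of the position tracking error, and this structure is the alternative to the proportional-integral action mentioned above.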

Abstract:

Two robust kinematic controllers for position trajectory tracking of a perturbed wheeled mobile robot are presented. We address a final objective of fixed-time pose-regulation, which means that the robot position and orientation must reach desired final values simultaneously in a user-defined time. To achieve that, we propose the robust tracking of adequate trajectories for position, which drives the robot to get a desired final orientation due to the nonholonomic motion constraint. Hence, the main contribution of the paper is a complete strategy to define adequate reference trajectories as well as robust controllers to track them in order to enforce the pose-regulation of a wheeled mobile robot in a desired time. Realistic simulations show the good performance of the proposed scheme even in the presence of strong disturbances

Abstract:

Correlations of measures of percentages of white coat color, five measures of production and two measures of reproduction were obtained from 4293 first lactation Holsteins from eight Florida dairy farms. Percentages of white coat color were analyzed as recorded and transformed by an extension of Box-Cox procedures. Statistical analyses were by derivative-free restricted maximum likelihood (DFREML) with an animal model. Phenotypic and genetic correlations of white percentage (not transformed) were with milk yield, 0.047 and 0.097; fat yield, 0.002 and 0.004; fat percentage, -0.047 and -0.090; protein yield, 0.024 and 0.048; protein percentage, -0.070 and -0.116; days open, -0.012 and -0.065; and calving interval, -0.007 and -0.029. Changes in magnitude of correlations were very small for all variables except days open. Genetic and phenotypic correlations of transformed values with days open were -0.027 and -0.140. Modest positive correlated responses would be expected for white coat color percentage following direct selection for milk, fat, and protein yields, but selection for fat and protein percentages, days open, or calving interval would lead to small decreases
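
For reference, the Box-Cox family used for the transformed analyses is y(λ) = (x^λ - 1)/λ for λ ≠ 0 and log x for λ = 0, with λ chosen by maximum likelihood. A quick sketch on made-up percentage data (the small offset for zero records is one simple stand-in for the boundary extension the study applies, not its actual procedure):

```python
import numpy as np
from scipy import stats

# Hypothetical white-coat-color percentages for a few animals.
# Box-Cox requires strictly positive data, so a small offset is added
# to accommodate 0% records.
white_pct = np.array([5.0, 12.0, 30.0, 47.5, 60.0, 88.0, 0.0]) + 0.5

transformed, lam = stats.boxcox(white_pct)   # lam fitted by max. likelihood
print(f"estimated lambda = {lam:.3f}")
```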

Abstract:

As part of a project to identify all maximal centralising monoids on a four-element set, we determine all centralising monoids witnessed by unary or by idempotent binary operations on a four-element set. Moreover, we show that every centralising monoid on a set with at least four elements witnessed by the Mal’cev operation of a Boolean group operation is always a maximal centralising monoid, i.e., a co-atom below the full transformation monoid. On the other hand, we also prove that centralising monoids witnessed by certain types of permutations or retractive operations can never be maximal

Abstract:

Motivated by reconstruction results by Rubin, we introduce a new reconstruction notion for permutation groups, transformation monoids and clones, called automatic action compatibility, which entails automatic homeomorphicity. We further give a characterization of automatic homeomorphicity for transformation monoids on arbitrary carriers with a dense group of invertibles having automatic homeomorphicity. We then show how to lift automatic action compatibility from groups to monoids and from monoids to clones under fairly weak assumptions. We finally employ these theorems to get automatic action compatibility results for monoids and clones over several well-known countable structures, including the strictly ordered rationals, the directed and undirected version of the random graph, the random tournament and bipartite graph, the generic strictly ordered set, and the directed and undirected versions of the universal homogeneous Henson graphs

Abstract:

We investigate the standard context, denoted by K(Ln), of the lattice Ln of partitions of a positive integer n under the dominance order. Motivated by the discrete dynamical model to study integer partitions by Latapy and Duong Phan and by the characterization of the supremum- and infimum-irreducible partitions of n by Brylawski, we show how to construct the join-irreducible elements of Ln+1 from those of Ln. We employ this construction to count the number of join-irreducible elements of Ln, and confirm that the number of objects (and attributes) of K(Ln) has order Θ(n²). We also discuss the embeddability of K(Ln) into K(Ln+1) with special emphasis on n=9
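
A brute-force illustration of the objects counted here, for small n: generate the partitions of n, build the dominance order from prefix sums, and pick out the join-irreducible elements (those with exactly one lower cover). This is naive enumeration code for intuition, not the paper's construction of Ln+1 from Ln:

```python
from itertools import accumulate

def partitions(n, max_part=None):
    # All partitions of n as weakly decreasing tuples.
    if n == 0:
        yield ()
        return
    if max_part is None:
        max_part = n
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def dominates(p, q):
    # p >= q in dominance order iff every prefix sum of p is >= that of q.
    pp = list(accumulate(p)) + [sum(p)] * len(q)
    qq = list(accumulate(q)) + [sum(q)] * len(p)
    return all(a >= b for a, b in zip(pp, qq))

def join_irreducibles(n):
    P = list(partitions(n))
    irr = []
    for p in P:
        below = [q for q in P if q != p and dominates(p, q)]
        # Lower covers of p: maximal elements strictly below p.
        covers = [q for q in below
                  if not any(r != q and dominates(r, q) for r in below)]
        if len(covers) == 1:
            irr.append(p)
    return irr

for n in range(2, 9):
    print(n, len(join_irreducibles(n)))
```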

Abstract:

We consider finitary relations (also known as crosses) that are definable via finite disjunctions of unary relations, i.e. subsets, taken from a fixed finite parameter set Γ. We prove that whenever Γ contains at least one non-empty relation distinct from the full carrier set, there is a countably infinite number of polymorphism clones determined by relations that are disjunctively definable from Γ. Finally, we extend our result to finitely related polymorphism clones and countably infinite sets Γ. These results address an open problem raised in Creignou, N., et al. Theory Comput. Syst. 42(2), 239–255 (2008), which is connected to the complexity analysis of the satisfiability problem of certain multiple-valued logics studied in Hähnle, R. Proc. 31st ISMVL 2001, 137–146 (2001)

Abstract:

C-clones are polymorphism sets of so-called clausal relations, a special type of relations on a finite domain, which first appeared in connection with constraint satisfaction problems in work by Creignou et al. from 2008. We completely describe the relationship regarding set inclusion between maximal C-clones and maximal clones. As a main result we obtain that for every maximal C-clone there exists exactly one maximal clone in which it is contained. A precise description of this unique maximal clone, as well as a corresponding completeness criterion for C-clones is given

Abstract:

We show how to reconstruct the topology on the monoid of endomorphisms of the rational numbers under the strict or reflexive order relation, and the polymorphism clone of the rational numbers under the reflexive relation. In addition we show how automatic homeomorphicity results can be lifted to polymorphism clones generated by monoids

Abstract:

Our goal is to model the joint distribution of a series of 4 × 2 × 2 × 2 contingency tables for which some of the data are partially collapsed (i.e., aggregated in as few as two dimensions). More specifically, the joint distribution of four clinical characteristics in breast cancer patients is estimated. These characteristics include estrogen receptor status (positive/negative), nodal involvement (positive/negative), HER2-neu expression (positive/negative), and stage of disease (I, II, III, IV). The joint distribution of the first three characteristics is estimated conditional on stage of disease, and we propose a dynamic model for the conditional probabilities that lets them evolve as the stage of disease progresses. The dynamic model is based on a series of Dirichlet distributions whose parameters are related by a Markov prior structure (called a dynamic Dirichlet prior). This model makes use of information across disease stages (known as "borrowing strength") and provides a way of estimating the distribution of patients with particular tumor characteristics. In addition, since some of the data sources are aggregated, a data augmentation technique is proposed to carry out a meta-analysis of the different datasets
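
One plausible reading of the dynamic Dirichlet prior, as a hedged sketch: the Dirichlet parameters for each stage shrink toward the previous stage's posterior, which is the "borrowing strength" mechanism. The discount link, hyperparameters, and stand-in data below are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight cells: ER(+/-) x nodal(+/-) x HER2(+/-), conditional on stage.
K, stages, rho = 8, 4, 0.7

alpha = np.ones(K)                                 # stage-I prior
for s in range(stages):
    counts = rng.multinomial(50, np.ones(K) / K)   # stand-in for stage-s data
    alpha_post = alpha + counts                    # conjugate Dirichlet update
    theta = rng.dirichlet(alpha_post)              # draw from stage-s posterior
    print(f"stage {s + 1}: theta = {np.round(theta, 3)}")
    alpha = rho * alpha_post                       # Markov (discounted) link
```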

Abstract:

We introduce the logics GLPΛ, a generalization of Japaridze’s polymodal provability logic GLPω where Λ is any linearly ordered set representing a hierarchy of provability operators of increasing strength. We shall provide a reduction of these logics to GLPω yielding among other things a finitary proof of the normal form theorem for the variable-free fragment of GLPΛ and the decidability of GLPΛ for recursive orderings Λ. Further, we give a restricted axiomatization of the variable-free fragment of GLPΛ

Resumen:

Las empresas familiares representan la mayoría de las organizaciones en México y participan en una gran variedad de industrias. Las hay de todos tamaños, incluso dentro de las más grandes del país. En la Bolsa Mexicana de Valores (BMV), el 70% de las empresas que emiten acciones son familiares (Belausteguigoitia, 2012). Se han realizado investigaciones en diversos países que comparan los rendimientos entre empresas familiares y no familiares (Villalonga y Amit, 2006). Faltaba en México un estudio comparativo de esta índole, que permitiera conocer el desempeño de las organizaciones familiares. Este trabajo ofrece un valor adicional, ya que compara los rendimientos durante años de crisis y estabilidad. Los resultados preliminares indican que durante años de relativa inestabilidad financiera (por la crisis de 2008), las organizaciones familiares muestran un rendimiento significativamente mayor que las no familiares, mientras que en años de estabilidad los rendimientos tienden a ser semejantes. Además de exponer resultados pormenorizados durante este período, se plantean algunas hipótesis que explican el hecho de que las organizaciones familiares se desempeñen mejor en tiempos de crisis

Abstract:

Family businesses represent most organizations in Mexico and participate in a wide variety of industries. They come in all sizes, even among the largest in the country. In the Mexican Stock Exchange (BMV), 70% of the companies that issue shares are family businesses (Belausteguigoitia, 2012). Research has been conducted in various countries comparing returns between family and non-family businesses (Villalonga and Amit, 2006). A comparative study of this kind, one that would shed light on the performance of family organizations, was lacking in Mexico. This work offers additional value in that it compares returns during years of crisis and stability. Preliminary results indicate that during years of relative financial instability (due to the 2008 crisis), family organizations show significantly higher returns than non-family ones, while in years of stability the returns tend to be similar. In addition to presenting detailed results for this period, some hypotheses are put forward to explain the fact that family organizations perform better in times of crisis

Resumen:

Este artículo analiza la relación entre compartir conocimiento y el comportamiento pro-organizacional no ético (CPE), así como el potencial efecto amplificador de dos factores: la resistencia al cambio de los empleados y la percepción del clima político de la organización

Abstract:

This paper aims to investigate the relationship of knowledge sharing with unethical pro-organizational behavior (UPB) and the potential augmenting effects of two factors: employees' dispositional resistance to change and perceptions of organizational politics

Resumo:

Este artigo analisa a relação entre compartilhar o conhecimento e o comportamento pró-organizacional antiético (CPA), bem como o potencial efeito ampliador de dois fatores: a resistência à mudança dos funcionários e a percepção do clima político da organização

Abstract:

Family businesses are organizations with a strong emotional charge. For this reason, the family dimension, which exerts a great influence on the firm, must be properly channeled within the company so that its impact is positive. Below, I present some practical ideas that can help prevent conflicts. As one might imagine, these ideas, besides helping to reduce the potential for conflict, can also improve the running of family organizations

Resumen:

El capítulo identifica y analiza los elementos esenciales de cualquier modelo de descarbonización y sugiere estrategias de implementación para llevarla a cabo con éxito. El texto empieza por definir descarbonización, cero emisiones netas de dióxido de carbono y cero emisiones netas de gases de efecto invernadero (GEI) y explicar su importancia. A continuación analiza los elementos que cualquier modelo de descarbonización y de cero emisiones netas de GEI debe contener: (i) eficiencia energética; (ii) descarbonización del sector eléctrico; (iii) sustitución del uso de combustibles fósiles por electricidad en los sectores donde la electrificación sea posible y uso de combustibles limpios en los sectores donde no lo sea; (iv) mantenimiento y creación de sumideros de carbono para capturar las emisiones restantes; y (v) control de las emisiones antropogénicas de otros gases de efecto invernadero, para el caso de cero emisiones netas de GEI. El capítulo analiza tres elementos fundamentales para el éxito de la descarbonización: el momento en el que se toman y ejecutan las decisiones (timing); la aceptabilidad política de la división de los costos y los beneficios; y la credibilidad de las metas y las medidas de políticas públicas para lograrlas. El capítulo finaliza con una reflexión sobre las condiciones que podrían hacer que las acciones de mitigación de emisiones ofrecidas y realizadas por los países estuvieran más cerca de lo que se necesita para cumplir con la meta del Acuerdo de París (que el aumento de temperatura no pase de 2 grados centígrados y esté lo más cerca posible a 1.5 grados centígrados)

Abstract:

The chapter identifies and analyzes the essential elements of any decarbonization model and suggests implementation strategies to carry it out successfully. The text starts by defining decarbonization, net-zero carbon dioxide emissions, and net-zero greenhouse gas (GHG) emissions, explaining their significance. It proceeds to analyze the key components that any decarbonization and net-zero GHG emissions model should include: (i) energy efficiency; (ii) decarbonization of the electricity sector; (iii) substitution of fossil fuel use with electricity in sectors where electrification is possible and the use of clean fuels in sectors where it is not; (iv) maintenance and creation of carbon sinks to capture the remaining emissions; and (v) control of anthropogenic emissions of other greenhouse gases, in the case of net-zero GHG emissions. Next, three fundamental elements for the success of decarbonization are analyzed: the timing of decisions and their execution; the political acceptability of the division of costs and benefits; and the credibility of the goals and the policy measures to achieve them. The chapter concludes with a reflection on the conditions that could bring the mitigation actions offered and carried out by countries closer to what is needed to comply with the goal of the Paris Agreement (that the increase in temperature does not exceed 2 degrees Celsius and is as close as possible to 1.5 degrees Celsius)

Abstract:

Price-based climate change policy instruments, such as carbon taxes or cap-and-trade systems, are known for their potential to generate desirable results such as reducing the cost of meeting environmental targets. Nonetheless, carbon pricing policies face important economic and political hurdles. Powerful stakeholders tend to obstruct such policies or dilute their impacts. Additionally, costs are borne by those who implement the policies or comply with them, while benefits accrue to all, creating incentives to free ride. Finally, costs must be paid in the present, while benefits only materialize over time. This chapter analyses the political economy of the introduction of a carbon tax in Mexico in 2013 with the objective of learning from that process in order to facilitate the eventual implementation of an effective cap-and-trade system in Mexico. Many of the lessons in Mexico are likely to be applicable elsewhere. As countries struggle to meet the goals of international environmental agreements, it is of utmost importance that we understand the conditions under which it is feasible to implement policies that reduce carbon emissions

Abstract:

The Global International Waters Assessment (GIWA) was created to help develop a priority-setting mechanism for actions in international waters. Apart from assessing the severity of environmental problems in ecosystems, the GIWA's task is to analyze potential policy actions that could solve or mitigate these problems. Given the complex nature of the problems, understanding their root causes is essential to develop effective solutions. The GIWA provides a framework to analyze these causes, which is based on identifying the factors that shape human behavior in relation to the use (direct or indirect) of aquatic resources. Two sets of factors are analyzed. The first one consists of social coordination mechanisms (institutions). Faults in these mechanisms lead to wasteful use of resources. The second consists of factors that do not cause wasteful use of resources per se (poverty, trade, demographic growth, technology), but expose and magnify the faults of the first group of factors. The picture that emerges is that diagnosing simple generic causes, e.g. poverty or trade, without analyzing the case-specific ways in which the root causes act and interact to degrade the environment will likely ignore important links that may put the effectiveness of the recommended policies at risk. A summary of the causal chain analysis for the Colorado River Delta is provided as an example

Abstract:

In this article, a distributed control system for robotic applications is presented. First, the robot control theory based on task functions is outlined; then a model for robotic applications is proposed. This philosophy is more general than the classical hierarchical structure and allows smart sensor feedback at the servo level. Next, a software and hardware architecture is proposed to implement such control algorithms. As all the system activity depends on communication between tasks, it is necessary to design a real-time communication system, a topic addressed in this paper

Abstract:

In this article, we address the issue of proton-exchange membrane fuel cell (PEMFC) voltage regulation when the fuel cell is coupled to an uncertain load via a DC-DC boost converter. We demonstrate that a straightforward proportional-integral (PI) action designed using the passivity-based control (PBC) approach can regulate the voltage of the PEMFC boost converter system in spite of the intrinsic nonlinearities in the current-voltage behavior of the PEMFC. We show that for all positive values of the controller gains, the voltage converges to its set point. Moreover, a parameter estimator based on Immersion and Invariance (I&I) theory is then proposed, allowing the PI-PBC to regulate the output voltage of the system as the load changes
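
A minimal simulation sketch of the idea, assuming an averaged boost converter model with a constant-voltage stand-in for the PEMFC (the real stack voltage varies nonlinearly with current) and a PI acting on the passive output of the incremental model; all parameter values are illustrative:

```python
import numpy as np

# Averaged boost converter feeding a resistive load R from a source E.
# States: inductor current i, output voltage v; control: duty ratio u.
L, C, E, R = 1e-3, 1e-3, 12.0, 10.0
v_ref = 24.0                        # desired output voltage
i_ref = v_ref**2 / (R * E)          # equilibrium inductor current
u_ref = 1.0 - E / v_ref             # equilibrium duty ratio

# For this model, the incremental storage function
# H = (L*(i - i_ref)**2 + C*(v - v_ref)**2) / 2 satisfies
# dH/dt <= (u - u_ref) * y with passive output y = v_ref*i - i_ref*v,
# so a PI around y stabilizes the loop for any positive gains.
kp, ki = 1e-3, 0.5
dt, T = 1e-6, 0.2
i, v, z = 0.0, E, 0.0

for _ in range(int(T / dt)):
    y = v_ref * i - i_ref * v
    u = u_ref - kp * y - ki * z     # in practice u is saturated to [0, 1)
    z += y * dt
    i += (-(1.0 - u) * v + E) / L * dt
    v += ((1.0 - u) * i - v / R) / C * dt

print(f"v = {v:.2f} V (target {v_ref:.0f} V)")
```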

Abstract:

This paper describes a double proportional-integral passivity-based control design to coordinate the operation of a proton-exchange membrane fuel cell system supported by two energy storage devices, namely a supercapacitor and a battery. Following singular perturbation theory, a timescale separation is applied to decouple the current and voltage dynamics and synthesize two control loops: an inner current loop and an outer voltage loop. Both current and voltage dynamics are controlled by exploiting the passive properties of each subsystem through a simple proportional-integral action on the passive output. The overall control objectives are to ensure load and supercapacitor voltage regulation and smooth variations in fuel cell current despite variations in energy demand. Numerical results demonstrate the correct performance of the closed-loop system despite pulsating variations in energy demand

Abstract:

We present a controller for a power generation system composed of a fuel cell connected to a boost converter which feeds a resistive load. The controller aims to regulate the output voltage of the converter, regardless of sudden changes of the load and the fuel cell voltage. Leveraging monotonicity, we prove that the nonlinear system can be controlled by means of a simple passivity-based PI. We afterward extend the result to an adaptive version, allowing the controller to deal with parameter uncertainties. This adaptive design is based on an indirect control approach with parameter identification performed by a "hybrid" estimator, which combines two techniques: the gradient-descent and the immersion-and-invariance algorithms. The overall system is proven to be stable, with the output voltage regulated to its reference. Furthermore, realistic simulation results validate our proposal

Abstract:

In this article, the problem of online parameter estimation of a proton exchange membrane fuel cell (PEMFC) polarization curve, that is, the static relation between the voltage and the current of the PEMFC, is addressed and solved. The task of designing this estimator, even offline, is complicated by the fact that the uncertain parameters enter the curve in a highly nonlinear fashion, namely in the form of nonseparable nonlinearities. We consider several scenarios for the model of the polarization curve, starting from the standard full model and including several popular simplifications of this complicated mathematical function. In all cases, separable regression equations are derived, either linearly or nonlinearly parameterized, which are instrumental for the implementation of the parameter estimators. We concentrate our attention on online estimation schemes for which, under suitable excitation conditions, global parameter convergence is ensured. Due to these global convergence properties, the estimators are robust to unavoidable additive noise and structural uncertainty. Moreover, since the schemes are online, they are able to track (slow) parameter variations that occur during the operation of the PEMFC. These two features, unavailable in time-consuming offline data-fitting procedures, make the proposed estimators helpful for online, time-saving characterization of a given PEMFC and for the implementation of fault-detection procedures and model-based adaptive control strategies. Simulation and experimental results that validate the theoretical claims are presented
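
As a complement, a sketch of fitting one popular simplified polarization-curve form offline by ordinary least squares: V = E0 - b*log10(i) - R*i. After reparameterization this form is linear in (E0, b, R), which is the kind of separability the article exploits; the model choice and all numbers here are illustrative, not the article's online estimator:

```python
import numpy as np

# Synthetic polarization data from "true" parameters plus noise.
rng = np.random.default_rng(2)
E0_true, b_true, R_true = 1.0, 0.06, 0.25     # V, V/decade, ohm*cm^2
i = np.linspace(0.05, 1.2, 60)                # current density, A/cm^2
V = E0_true - b_true * np.log10(i) - R_true * i
V += 0.002 * rng.standard_normal(i.size)      # measurement noise

# The simplified curve is linear in (E0, b, R): build the regressor matrix.
Phi = np.column_stack([np.ones_like(i), -np.log10(i), -i])
theta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
print(dict(zip(["E0", "b", "R"], np.round(theta, 4))))
```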

Abstract:

In this paper, based on the interconnection and damping assignment passivity-based control approach, a multi-loop adaptive controller is detailed for output voltage regulation of a hybrid system comprising a proton exchange membrane fuel cell energy generation system and a hybrid energy storage system consisting of supercapacitors and batteries. The control scheme relies on the design of two control loops, i.e., an outer loop for voltage regulation through current reference generation, and an inner loop for current reference tracking through duty cycle generation. Furthermore, an adaptive law based on immersion and invariance theory is designed to enhance the outer loop behavior through unknown load approximation. Finally, numerical simulation results show the correct performance of the adaptive multi-loop controller when sudden load changes are induced in the system

Abstract:

We study symmetric periodic orbits near collision in a nonautonomous restricted planar four-body problem. The restricted problem consists of a massless particle moving under the gravitational influence due to three bodies with the same positive mass (the primaries), following the figure-eight choreography. We use regularized coordinates, in order to deal numerically with motions near collision between the massless particle and one of the primaries. By means of reversing symmetries, we characterize the symmetric periodic orbits near collision. The initial conditions for these orbits were computed by solving numerically some boundary value problems. We explain theoretically, and confirm numerically, how different parts of the diagram of initial conditions are related due to the symmetry of the figure-eight choreography

Abstract:

In this paper, we study a three-dimensional system of differential equations which is a generalization of the system introduced by Yu and Wang (Eng Technol Appl Sci Res 3:352-358, 2013), a continuation of the study of chaotic attractors [see Yu and Wang (Eng Tech Appl Sci Res 2:209-215, 2012)]. We show that these systems admit a zero-Hopf non-isolated equilibrium point at the origin and prove the existence of a limit cycle emanating from it. We illustrate our results with some numerical simulations

Abstract:

We prove that all non-degenerate relative equilibria of the planar Newtonian n–body problem can be continued to spaces of constant curvature κ, positive or negative, for small enough values of this parameter. We also compute the extension of some classical relative equilibria to curved spaces using numerical continuation. In particular, we extend Lagrange's triangle configuration with different masses to both positive and negative curvature spaces

Abstract:

We study a particular (1+2n)-body problem consisting of a massive body and 2n equal small masses; since this problem is related to Maxwell's ring solutions, we call the massive body the planet and the other 2n masses the satellites. Our goal is to obtain doubly-symmetric orbits in this problem. By studying the reversing symmetries of the equations of motion, we reduce the set of possible initial conditions that lead to such orbits and compute the 1-parameter families of time-reversible invariant tori. The initial conditions of the orbits were determined as solutions of a boundary value problem with one free parameter; along the way we find, analytically and explicitly, a new involution which, to the best of our knowledge, is a new result. The numerical solutions of the boundary value problem were obtained using pseudo-arclength continuation. For the numerical analysis we used a satellite-to-planet mass ratio of 3.5×10^-4, and the computations were carried out for n=2,3,4,5,6. We show numerically that the succession of families we obtained approaches the Maxwell solutions as n increases, and we give a simple argument for why this should happen in this configuration

Abstract:

By using analytical and numerical tools we show the existence of families of quasiperiodic orbits (also called relative periodic orbits) emanating from a kite configuration in the planar four-body problem with three equal masses. Relative equilibria are periodic solutions where all particles rotate uniformly around the center of mass in the inertial frame, that is, the system behaves as a rigid body; in rotating coordinates these solutions are, in general, quasiperiodic. We introduce a new coordinate system which measures (in the planar four-body problem) how far an arbitrary configuration is from a kite configuration. Using these coordinates and the Lyapunov center theorem, we obtain families of quasiperiodic orbits, and by using symmetry arguments, we obtain periodic ones, all of them emanating from a kite configuration

Abstract:

In recent research on financial crises, large exogenous shocks to total factor productivity (TFP) are used as the driving force accounting for large output falls. TFP fell 3% after the Korean 1997 financial crisis. We find evidence that the large fall in TFP is mostly due to a sectoral reallocation of labor from the more productive manufacturing and construction sectors to the less productive wholesale trade sector, the public sector and agriculture. We construct a two-sector model that accounts for the labor reallocation. The model has a consumption sector and an investment sector. Firms face sector-specific working capital constraints, which we calibrate with data from financial statements. The rise in interest rates makes inputs more costly. The model accounts for 42% of the TFP fall. The model also accounts for 53% of the fall in GDP. It is broadly consistent with the post-crisis behavior of the Korean economy

Abstract:

Using variation in firms' exposure to their CEOs resulting from hospitalization, we estimate the effect of CEOs on firm policies, holding firm-CEO matches constant. We document three main findings. First, CEOs have a significant effect on profitability and investment. Second, CEO effects are larger for younger CEOs, in growing and family-controlled firms, and in human-capital-intensive industries. Third, CEOs are unique: the hospitalization of other senior executives does not have similar effects on performance. Overall, our findings demonstrate that CEOs are a key driver of firm performance, which suggests that CEO contingency plans are valuable

Abstract:

This paper uses a unique dataset from Denmark to investigate the impact of family characteristics on corporate decision making and the consequences of these decisions for firm performance. We focus on the decision to appoint either a family or an external chief executive officer (CEO). The paper uses variation in CEO succession decisions that results from the gender of a departing CEO's firstborn child. This is a plausible instrumental variable (IV), as male first-child firms are more likely to pass on control to a family CEO than are female first-child firms, but the gender of the first child is unlikely to affect firms' outcomes. We find that family successions have a large negative causal impact on firm performance: operating profitability on assets falls by at least four percentage points around CEO transitions. Our IV estimates are significantly larger than those obtained using ordinary least squares. Furthermore, we show that family-CEO underperformance is particularly large in fast-growing industries, industries with a highly skilled labor force, and relatively large firms. Overall, our empirical results demonstrate that professional, nonfamily CEOs provide extremely valuable services to the organizations they head
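
The identification strategy is standard two-stage least squares. A self-contained numerical sketch with synthetic data (all variable definitions, magnitudes, and the direction of the built-in selection are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# Instrument: firstborn is male (random, hence plausibly exogenous).
z = rng.integers(0, 2, n).astype(float)
u = rng.standard_normal(n)              # unobserved firm quality

# Endogenous treatment: family-CEO succession, more likely when z = 1,
# and also correlated with the unobservable u (the endogeneity problem).
d = ((0.8 * z + 0.4 * u + rng.standard_normal(n)) > 0).astype(float)

# Outcome: operating profitability, with a true causal effect of -4 p.p.
y = 10.0 - 4.0 * d + 2.0 * u + rng.standard_normal(n)

# Stage 1: regress treatment on instrument; Stage 2: use fitted values.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
X = np.column_stack([np.ones(n), d_hat])
beta_iv = np.linalg.lstsq(X, y, rcond=None)[0]

X_ols = np.column_stack([np.ones(n), d])
beta_ols = np.linalg.lstsq(X_ols, y, rcond=None)[0]
print(f"OLS: {beta_ols[1]:.2f}, 2SLS: {beta_iv[1]:.2f}")
```

In this synthetic setup the positive correlation between succession and the unobservable attenuates OLS toward zero, while 2SLS recovers a value near the built-in -4 effect, mirroring the direction of the bias reported above.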

Abstract:

This paper presents a two-dimensional convex irregular bin packing problem with guillotine cuts. The problem combines the challenges of tackling the complexity of packing irregular pieces, guaranteeing guillotine cuts that are not always orthogonal to the edges of the bin, and allocating pieces to bins that are not necessarily of the same size. This problem is known as a two-dimensional multi bin size bin packing problem with convex irregular pieces and guillotine cuts. Since pieces are separated by means of guillotine cuts, our study is restricted to convex pieces. A beam search algorithm is described, which is successfully applied to both the multi and single bin size instances. The algorithm is competitive with the results reported in the literature for the single bin size problem and provides the first results for the multi bin size problem
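
The beam search idea can be sketched generically: expand each partial packing in the beam, score the children, and keep only the best `width` per level. The skeleton below is a deliberately abstract stand-in; the real algorithm's placement generation, overlap checks, and guillotine-cut feasibility tests are not reproduced:

```python
from heapq import nlargest

def beam_search(n_steps, expand, evaluate, width=5):
    """Generic beam search: keep the `width` best partial packings per level."""
    beam = [()]                        # start from the empty packing
    for _ in range(n_steps):
        children = [c for node in beam for c in expand(node)]
        if not children:
            break
        beam = nlargest(width, children, key=evaluate)
    return max(beam, key=evaluate)

# Toy stand-in: "pack" pieces one at a time; a child appends an unused
# piece, and the score is total packed area (the real evaluation would
# account for bin utilization after the geometric feasibility checks).
areas = [7.0, 3.5, 5.2, 1.1, 4.4]

def expand(sol):
    return [sol + (i,) for i in range(len(areas)) if i not in sol]

def evaluate(sol):
    return sum(areas[i] for i in sol)

best = beam_search(len(areas), expand, evaluate, width=3)
print(best, evaluate(best))
```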

Abstract:

The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change

Abstract:

Opportunistic electoral fiscal policy cycle theory suggests that all subnational officials will raise fiscal spending during elections. Ideological partisan fiscal policy cycle theory suggests that only left-leaning governments will raise election year fiscal spending, with right-leaning parties choosing the reverse. This article assesses which of these competing logics applies to debt policy choices. Cross-sectional time-series analysis of yearly loan acquisition across Mexican municipalities (on statistically matched municipal subsamples, to balance creditworthiness across left- and right-leaning governments) shows that all parties engage in electoral policy cycles, but not in the way originally thought. It also shows that different parties favored different types of loans, although not always according to partisan predictions. Both electoral and partisan logics thus shape debt policy decisions (in contrast to fiscal policy, where these logics are mutually exclusive) because debt policy involves decisions on multiple dimensions, about the total and type of loans

Abstract:

This research focuses on a hyperbolic system that describes bidisperse suspensions, consisting of two types of small particles dispersed in a viscous fluid. The dependence of solutions on the relative position of contact manifolds in the phase space is examined. The wave curve method serves as the basis for the first and second analyses. The former involves the classification of elementary waves that emerge from the origin of the phase space. Analytical solutions to prototypical Riemann problems connecting the origin with any point in the state space are provided. The latter focuses on semi-analytical solutions for Riemann problems connecting any state in the phase space with the maximum packing concentration line, as observed in standard batch sedimentation tests. When the initial condition crosses the first contact manifold, a bifurcation occurs. As the initial condition approaches the second manifold, another structure appears to undergo bifurcation, although it does not represent an actual bifurcation according to the triple shock rule. The study reveals important insights into the behavior of solutions in relation to these contact manifolds. This research sheds light on the existence of emerging quasi-umbilic points within the system, which can potentially lead to new types of bifurcations as crucial elements of the elliptic/hyperbolic boundary in the system of partial differential equations. The implications of these findings and their significance are discussed

Abstract:

This contribution is a condensed version of an extended paper, where a contact manifold emerging in the interior of the phase space of a specific hyperbolic system of two nonlinear conservation laws is examined. The governing equations are modelling bidisperse suspensions, which consist of two types of small particles differing in size and viscosity that are dispersed in a viscous fluid. Based on the calculation of characteristic speeds, the elementary waves with the origin as left Riemann datum and a general right state in the phase space are classified. In particular, the dependence of the solution structure of this Riemann problem on the contact manifold is elaborated

Abstract:

We study the connection between the global liquidity crisis and the severe credit crunch experienced by finance companies (SOFOLES) in Mexico using firm-level data between 2001 and 2011. Our results provide supporting evidence that, as a result of the liquidity shock, SOFOLES faced severely restricted access to their main funding sources (commercial bank loans, loans from other organizations, and public debt markets). After controlling for the potential endogeneity of their funding, we find that the liquidity shock explains 64 percent of SOFOLES' credit contraction during the recent financial crisis (2008-2009). We use our estimates to disentangle supply from demand factors as determinants of the credit contraction. After controlling for the large decline in loan demand during the financial crisis, our findings suggest that supply factors (such as nonperforming loans and lower liquidity buffers) also played a significant role. Finally, we find that financial deregulation implemented in 2006 may have amplified the effects of the global liquidity shock

Abstract:

We develop a two-country, three-sector model to quantify the effects of Korean trade policies for structural change from 1963 through 2000. The model features non-homothetic preferences, Armington trade, proportional import tariffs and export subsidies, and is calibrated to match sectoral value added data on Korean production and trade. Korea's tariff liberalization increased imports and trade, especially agricultural imports, accelerating de-agriculturalization and intensifying industrialization. Korean subsidy liberalization lowered exports and trade, especially industrial exports, attenuating industrialization. Thus, while individually powerful agents for structural change, Korea's tariff and subsidy reforms offset each other. Subsidy reform dominated quantitatively; lower trade, higher agricultural and lower industrial employment shares, and slower industrialization were observed than in a counterfactual economy with no post-1963 policy reform

Abstract:

In this article, we address the problem of adaptive state observation of linear time-varying systems with delayed measurements and unknown parameters. Our new developments extend the results reported in our recent works. The case with known parameters has been studied by many researchers. However, in this article we show that the generalized parameter estimation-based observer design provides a very simple solution for the unknown parameter case. Moreover, when this observer design technique is combined with the dynamic regressor extension and mixing (DREM) estimation procedure, the estimated state and parameters converge in fixed time under extremely weak excitation assumptions
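
For readers unfamiliar with DREM, here is a minimal discrete-time sketch on a static linear regression y_k = phi_k^T theta: the regressor is extended with a delayed copy and then "mixed" with the adjugate matrix, yielding one decoupled scalar regression per parameter. This is a toy illustration of the procedure, not the article's observer:

```python
import numpy as np

theta = np.array([2.0, -1.0])        # unknown parameters (ground truth)

def phi(k):                           # measurable regressor vector
    return np.array([np.sin(0.1 * k), 1.0])

gamma, N = 2.0, 2000                  # adaptation gain, number of samples
theta_hat = np.zeros(2)

for k in range(1, N):
    # Extension: stack the current and a delayed regression equation.
    Phi = np.vstack([phi(k), phi(k - 1)])
    Y = Phi @ theta                   # the measured outputs y_k, y_{k-1}

    # Mixing: multiply by adj(Phi), so that adj(Phi) Y = det(Phi) * theta,
    # i.e., one independent scalar regression per parameter.
    D = np.linalg.det(Phi)
    adj = np.array([[Phi[1, 1], -Phi[0, 1]],
                    [-Phi[1, 0], Phi[0, 0]]])
    z = adj @ Y

    # Decoupled scalar gradient estimators (element-wise update).
    theta_hat += gamma * D * (z - D * theta_hat)

print(np.round(theta_hat, 3))         # converges to [2, -1]
```

The payoff of the mixing step is that each parameter's convergence depends only on a scalar excitation condition involving det(Phi), which is, roughly, why such weak excitation assumptions suffice in this family of designs.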

Abstract:

We consider an otherwise conventional monetary growth model in which spatial separation and limited communication create a transactions role for currency, and stochastic relocation gives rise to financial intermediaries. In this framework we consider how changes in fiscal and monetary policy, and in reserve requirements, affect inflation, capital formation, and nominal interest rates. There is also considerable scope for multiple equilibria; we show how reserve requirements that never bind along actual equilibrium paths can play an important role in avoiding undesirable equilibria. Finally, we demonstrate that changes in (apparently) nonbinding reserve requirements can have significant, real effects

Abstract:

Efficient modeling of probabilistic dependence is one of the most important areas of research in decision analysis. Several approximations to joint probability distributions have been developed with the intent of capturing the probabilistic dependence among variables. However, the accuracy of these methods in a wide range of settings is unknown. In this paper, we develop a methodology to test the accuracy of several of these approximating distributions

Abstract:

We characterize dominant-strategy incentive compatibility with multidimensional types. A deterministic social choice function is dominant-strategy incentive compatible if and only if it is weakly monotone (W-Mon). The W-Mon requirement is the following: If changing one agent's type (while keeping the types of other agents fixed) changes the outcome under the social choice function, then the resulting difference in utilities of the new and original outcomes evaluated at the new type of this agent must be no less than this difference in utilities evaluated at the original type of this agent
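
In symbols (with notation assumed here rather than taken from the paper): writing a = f(t_i, t_{-i}) and a' = f(t_i', t_{-i}) for the outcomes under agent i's original and new types t_i and t_i', and u_i for i's utility, the W-Mon condition reads:

```latex
% Weak monotonicity (W-Mon), for every agent i, types t_i, t_i',
% and profile t_{-i} of the other agents' types:
u_i(a', t_i') - u_i(a, t_i') \;\ge\; u_i(a', t_i) - u_i(a, t_i)
```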

Abstract:

International organizations play an increasingly central role in contemporary international lawmaking, yet their role in sources theory is still underexplored. Most accounts focus on one of two questions: They either ask to what extent international organizations can generate legally binding obligations for external actors and what role so-called "soft law" plays in this regard, or they explore to what extent international organizations are bound by the sources of international law and can thus be held accountable. In this contribution, I use the 2030 Agenda on Sustainable Development to show that an overlooked, but arguably one of the most important effects of the law generated by international organizations is the effect that it has within the organization itself. The 2030 Agenda on Sustainable Development established a new paradigm for all parts of the UN system and has had effects in all its areas of work, including the UN's external activities and its interaction with member states. While declarations such as the 2030 Agenda might thus not be directly legally binding upon its member states, they have important normative effects within and beyond the UN system that are hitherto unaccounted for. This raises questions of equality and legitimacy that are touched upon in closing

Abstract:

Germany's two-year membership of the UN Security Council ended on 31 December 2020. Having started with big expectations, it was hit by the hard realities of increasingly divisive world politics in the time of a global pandemic. While Germany fell short of its ambitions on the two thematic issues of Women, Peace and Security, and Climate and Security, it can point to some successes, in particular with regard to Sudan. These successes did not, however, increase Germany's prospects of securing a permanent seat on the Council for the foreseeable future. This section provides an overview of Germany's activities on the Council and ends with a brief evaluation

Abstract:

As the only international organization that aspires to be universal, both in terms of its membership and in terms of the policy fields in which it intervenes, the United Nations (UN) occupies a unique position in international lawmaking. Focusing on the UN's political and judicial or quasi-judicial organs does not, however, fully capture the organization's lawmaking activities. Instead, much of the UN's impact on international law today can be traced back to its civil servants. In this paper, I argue that international lawmaking today is best understood as processual and fluid, and that in contemporary lawmaking thus understood, the UN Secretariat plays an important role. I then propose two ways in which international civil servants add to international lawmaking: they engage in executive interpretation of broad and elusive norms, and they act as an interface between various actors on different governance levels. This raises novel legitimacy challenges for the international legal order

Abstract:

Germany's two-year membership in the UN Security Council ended on 31 December 2020. Starting with big expectations, it hit the hard realities of increasingly divisive world politics in times of a global pandemic. Nevertheless, Germany can point to some notable successes. Every eight years, Germany applies for a non-permanent membership in the Security Council. The recent term was the sixth time that Germany sat on the Council. It had applied with an ambitious program: strengthening the women, peace and security agenda; putting the link between climate change and security squarely on the Council's agenda; strengthening the humanitarian system; and lastly, revitalizing the issue of disarmament and arms control. It chose those four thematic issues to make its mark, besides the Council's day-to-day business dealing with country situations. Has Germany been successful?

Abstract:

When actors express conflicting views about the validity or scope of norms or rules in relation to other norms or rules in the international sphere, they often do so in the language of international law. This contribution argues that international law's hermeneutic acts as a common language that cuts across spheres of authority and can thus serve as a conflict management tool for interface conflicts. Often, this entails resorting to an international court. While acknowledging that courts cannot provide permanent solutions to the underlying political conflict, I submit that court proceedings are interesting objects of study that promote our understanding of how international legal argument operates as a conflict management device. I distinguish three dimensions of common legal form, using the well-known EC-Hormones case as illustration: a procedural, argumentative, and substantive dimension. While previous scholarship has often focused exclusively on the substantive dimension, I argue that the other two dimensions are equally important. In concluding, I reflect on a possible explanation as to why actors are disposed to resort to international legal argument even if this is unlikely to result in a final solution: there is a specific authority claim attached to international law qua law

Abstract:

The International Court of Justice (ICJ) occupies a special position amongst international courts. With its quasi-universal membership and ability to apply in principle the whole body of international law, it should be well-placed to adjudicate its cases with a holistic view on international law. This article examines whether the ICJ has lived up to this expectation. It analyses the Court's case load in the 21st century through the lens of inter-legality as the current condition of international law. With regard to institutional inter-legality, the authors observe an increase of inter-State proceedings based on largely the same facts that are initiated both before the ICJ and other courts and identify an increasing need to address such parallel proceedings. With regard to substantive inter-legality, the article provides analyses of ICJ cases in the fields of consular relations and human rights, as well as international environmental law. The authors find that the ICJ is rather reluctant in situating norms within their broader normative environment and restrictively applies only those rules its jurisdiction is based on. The Court does not make use of its abilities to adjudicate cases holistically and thus falls short of expectations raised by its own members 20 years ago

Abstract:

Contemporary international law often presents itself as an almost impenetrable thicket of overlapping legal regimes that materialize through multilateral treaties at the global, regional and sub-regional levels, customary law and other regulatory orders. Often, overlaps between different regimes manifest themselves as constellations of norms that appear to be conflictual. Valentin Jeutner's book entitled Irresolvable Norm Conflicts in International Law, based on his doctoral thesis defended at Cambridge University in 2015, is the latest in a recent series of monographs that address norm conflicts in international law. On his own account, his work differs from other studies in that it explores certain constellations of public international law where 'the legal order confronts legal subjects with seemingly impossible expectations', where 'a state of legal superposition' exists (at 6). As indicated in the title, Jeutner is not concerned with norm conflicts in general but, rather, with a specific subset of conflicts: those that are, legally speaking, irresolvable. Jeutner proposes that such irresolvable conflicts ought to be called 'legal dilemmas' and argues that such dilemmas ought to be solved by the sovereign actor facing a dilemma: the state. His book is informed by deontic logic and formulates an abstract theory of the legal dilemma as a novel concept for international law, which the author explicitly introduces as a stipulative definition - a term of art (at 19)

Resumen:

Hart dedicó poca atención a la regla de adjudicación –lo mismo hizo la literatura especializada. El propósito de este escrito consiste en intentar ir más allá de las escasas indicaciones brindadas por Hart sobre el tema de la regla de adjudicación y detallar la función que desempeña en el seno de su concepción del derecho. El método elegido es esencialmente reconstructivo: no se trata de tomar inspiración en Hart para elaborar una noción propia de regla de adjudicación, sino de poner de relieve las potencialidades –aunque también los límites– de este tipo de regla secundaria. Para ello, en primer lugar se profundizan las conexiones entre la regla de adjudicación, por un lado, y la coacción y la interpretación jurídica, por el otro: el objetivo consiste en dibujar la posición teórica de los jueces, que se desprende, en particular, de la investigación de sus (distintas) tareas en relación con los casos dudosos y los casos claros. A continuación, tal postura teórica se somete a crítica; prestando atención, en particular, al problema de la definitividad e infalibilidad de las sentencias, se demuestra cómo Hart consideró la aplicación del derecho de forma demasiado declarativa

Abstract:

H.L.A. Hart did not pay much attention to the rule of adjudication, and neither did scholars. This paper aims to go beyond what Hart explicitly says about it and to give an account of its role within his concept of law. The perspective will be reconstructive, since the goal is not to develop an original concept of the rule of adjudication inspired by Hart's theory of law, but rather to shed light on the potential, but also the limits, of this kind of secondary rule. Therefore, the article will first explore the interrelation between the rule of adjudication, on the one hand, and coercion and legal interpretation, on the other: the goal is to outline the theoretical position of judges, which becomes clear when analyzing their (different) tasks in easy and hard cases. Then, this position is subjected to criticism; by examining, in particular, the well-known problem of the infallibility and finality of judicial decisions, it is shown that Hart viewed the judicial application of law in an overly declarative way

Abstract:

Consider an arbitrarily large population at the present time, originated at an unspecified, arbitrarily remote time in the past, where individuals within the same generation independently reproduce forward in time, sharing a common offspring distribution that may vary across generations. In other words, the reproduction is driven by a Galton-Watson process in a varying environment. The genealogy of the current generation, traced backward in time, is uniquely determined by the coalescent point process (Ai, i ≥ 1), where Ai denotes the coalescent time between individuals i and i + 1. In general, this process lacks the Markov property. In a constant environment, Lambert and Popovic (2013) proposed a Markov process of point measures to reconstruct the coalescent point process. We provide a counterexample showing that their process lacks the Markov property. The main contribution of this work is to propose a vector-valued Markov process (Bi, i ≥ 1) that can reconstruct the genealogy, with finite information for every i. Additionally, in the case of linear fractional offspring distributions, we establish that the variables of the coalescent point process (Ai, i ≥ 1) are independent and identically distributed
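
To make the coalescent point process concrete, here is a toy forward simulation with parent tracking, from which the A_i are read off by climbing the two lineages until they meet. The offspring law below is geometric (hence linear fractional) and constant across generations; both choices are illustrative simplifications of the setting above:

```python
import random

def simulate_forest(n_gens=25, mean_offspring=1.1, n0=20):
    """Forward Galton-Watson simulation keeping parent pointers.

    gens[g][j] is the index (in generation g-1) of the parent of
    individual j of generation g; children of one parent are adjacent,
    so the planar order needed for the coalescent point process holds.
    """
    gens = [[None] * n0]
    p = mean_offspring / (1 + mean_offspring)   # geometric offspring law
    for _ in range(n_gens - 1):
        children = []
        for parent in range(len(gens[-1])):
            k = 0
            while random.random() < p:
                k += 1
            children += [parent] * k
        if not children:
            return None                          # extinct; caller retries
        gens.append(children)
    return gens

def coalescent_point_process(gens):
    """A_i = generations back to the MRCA of consecutive leaves i, i+1
    (equals the full tree depth if the two leaves never coalesce)."""
    last = len(gens) - 1
    A = []
    for i in range(len(gens[-1]) - 1):
        a, b, g = i, i + 1, last
        while a != b and gens[g][a] is not None:
            a, b, g = gens[g][a], gens[g][b], g - 1
        A.append(last - g)
    return A

random.seed(7)
forest = None
while forest is None:
    forest = simulate_forest()
A = coalescent_point_process(forest)
print(len(A) + 1, "leaves; first A_i:", A[:10])
```

For this linear-fractional offspring law, the abstract's last claim says the resulting A_i should behave as independent, identically distributed draws.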

Abstract:

For a continuous state branching process with two types of individuals which are subject to selection and density dependent competition, we characterize the joint evolution of population size, type configurations and genealogies as the unique strong solution of a system of SDEs. Our construction is achieved in the lookdown framework and provides a synthesis as well as a generalization of cases considered separately in two seminal papers by Donnelly and Kurtz (Ann. Appl. Probab. 9 (1999) 1091-1148; Ann. Probab. 27 (1999) 166-205), namely fluctuating population sizes under neutrality, and selection with constant population size. As a conceptual core in our approach we introduce the selective lookdown space which is obtained from its neutral counterpart through a state-dependent thinning of "potential" selection/competition events whose rates interact with the evolution of the type densities. The updates of the genealogical distance matrix at the "active" selection/competition events are obtained through an appropriate sampling from the selective lookdown space. The solution of the above mentioned system of SDEs is then mapped into the joint evolution of population size and symmetrized type configurations and genealogies, that is, marked distance matrix distributions. By means of Kurtz' Markov mapping theorem, we characterize the latter process as the unique solution of a martingale problem. For the sake of transparency we restrict the main part of our presentation to a prototypical example with two types, which contains the essential features. In the final section we outline an extension to processes with multiple types including mutation

Abstract:

The nested Kingman coalescent describes the ancestral tree of a population undergoing neutral evolution at the level of individuals and at the level of species, simultaneously. We study the speed at which the number of lineages descends from infinity in this hierarchical coalescent process and prove the existence of an early-time phase during which the number of lineages at time t decays like 2γ/(ct²), where c is the ratio of the coalescence rates at the individual and species levels, and the constant γ ≈ 3.45 is derived from a recursive distributional equation for the number of lineages contained within a species at a typical time

Abstract:

In this paper, we study the genealogical structure of a Galton-Watson process with neutral mutations. Namely, we extend in two directions the asymptotic results obtained in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697]. In the critical case, we construct the version of the model in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697] conditioned not to become extinct. We establish a version of the limit theorems in Bertoin [Stochastic Process. Appl. 120 (2010) 678–697] when the reproduction law has infinite variance and is in the domain of attraction of an α-stable distribution, both for the unconditioned process and for the process conditioned to nonextinction. In the latter case, we obtain the convergence (after renormalization) of the allelic sub-populations towards a tree-indexed CSBP with immigration

Abstract:

We consider the compact space of pairs of nested partitions of N, where, by analogy with models used in molecular evolution, we call the finer partition the "gene partition" and the coarser one the "species partition". We introduce the class of nondecreasing processes valued in nested partitions, assumed Markovian and with exchangeable semigroup. These processes are said to be simple when each partition only undergoes one coalescence event at a time (but possibly both at the same time). Simple nested exchangeable coalescent (SNEC) processes can be seen as the extension of Λ-coalescents to nested partitions. We characterize the law of SNEC processes as follows. In the absence of gene coalescences, species blocks undergo Λ-coalescent type events and, in the absence of species coalescences, gene blocks lying in the same species block undergo i.i.d. Λ-coalescents. Simultaneous coalescences of the gene and species partitions are governed by an intensity measure ν_s on (0,1] × M_1([0,1]) providing the frequency of species merging and the law from which are drawn (independently) the frequencies of genes merging in each coalescing species block. As an application, we also study the conditions under which a SNEC process comes down from infinity

Abstract:

Survey research suggests that managers and employees see remote work very differently. Managers are more likely to say it harms productivity, while employees are more likely to say it helps. The difference may be commuting: Employees consider hours not spent commuting in their productivity calculations, while managers don't. The answer is clearer communication and policies, and for many companies the best policy will be managed hybrid with two to three mandatory days in office

Abstract:

Many CEOs are publicly gearing up for yet another return-to-office push. Privately, though, executives expect remote work to keep on growing, according to a new survey. That makes sense: Employees like it, the technology is improving, and - at least for hybrid work - there seems to be no loss of productivity. Despite the headlines, executives expect both hybrid and fully remote work to keep increasing over the next five years

Abstract:

This brief deals with the problem of online identification of the parameters of the dynamic model of a photovoltaic (PV) array connected to a power system through a power converter. It has been shown in the literature that, when interacting with switching power converters, the dynamic model is able to better account for the PV array operation compared to the classical five-parameter static model of the array. While there are many results of identification of the parameters of the latter model, to the best of our knowledge, no one has provided a solution for the aforementioned more complex dynamic model, since it concerns the parameter estimation of a nonlinear, underexcited system with unmeasurable state variables. Achieving such an objective is the main contribution of this brief. We propose a new parameterization of the dynamic model, which, combined with the powerful identification technique of dynamic regressor extension and mixing (DREM), ensures a fast and accurate online estimation of the unknown parameters. Realistic numerical examples via computer simulations are presented to assess the performance of the proposed approach, which is even able to track the parameter variations when the system changes operating point
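
As a rough illustration of the DREM machinery invoked above (not the paper's PV-array parameterization), the following Python sketch applies a generic discrete-time DREM scheme to a toy linear regression y_t = phi_t^T theta; the regressor, noise level and gain are arbitrary choices.

import numpy as np

# Toy discrete-time DREM: extension by stacking delayed regressors, mixing by
# the adjugate (via det and solve), then decoupled normalized gradient updates.
rng = np.random.default_rng(0)
theta = np.array([2.0, -1.0])        # "true" parameters (illustrative)
n, gamma = theta.size, 0.5           # problem size and adaptation gain
theta_hat = np.zeros(n)
phis, ys = [], []
for t in range(400):
    phi = rng.normal(size=n)                 # measured regressor (synthetic)
    y = phi @ theta + 0.01 * rng.normal()    # noisy scalar measurement
    phis.append(phi); ys.append(y)
    if len(phis) < n:
        continue
    Phi = np.array(phis[-n:]); Y = np.array(ys[-n:])   # extension step
    Delta = np.linalg.det(Phi)
    if abs(Delta) < 1e-8:
        continue                             # skip ill-conditioned windows
    z = Delta * np.linalg.solve(Phi, Y)      # mixing: adj(Phi) @ Y = Delta * theta
    # one decoupled scalar regression per parameter: z_i = Delta * theta_i
    theta_hat += gamma * Delta * (z - Delta * theta_hat) / (1.0 + Delta**2)
print(theta_hat)                             # approaches [2, -1]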

Abstract:

In this paper we consider the problem of leaderless consensus for networks of fully actuated Euler-Lagrange agents perturbed by unknown additive disturbances. The network is an undirected weighted graph with time delays. The proposed controller has a PD structure that incorporates, in a certainty-equivalent way, the estimate of the unknown disturbance. The design of the disturbance estimator proceeds in two steps. First, we derive a regression equation that turns out to be nonlinearly parameterized, but with an injective mapping. Second, we propose to use a recently introduced least-squares plus dynamic regressor extension algorithm that allows us to estimate the unknown frequencies under extremely weak excitation assumptions. In this way, we derive a sufficient condition on the proportional and derivative gains of the controller ensuring that the systems converge globally and asymptotically to a consensus position

Abstract:

In this paper we provide two significant extensions to the recently developed parameter estimation-based observer design technique for state-affine systems. First, we consider the case when the full state of the system is reconstructed in spite of the presence of unknown, time-varying parameters entering into the system dynamics. Second, we address the problem of reduced order observers with finite convergence time. For the first problem, we propose a simple gradient-based adaptive observer that converges asymptotically under the assumption of generalised persistent excitation. For the reduced order observer we invoke the advanced dynamic regressor extension and mixing parameter estimator technique to show that we can achieve finite convergence time under the weak interval excitation assumption. Simulation results that illustrate the performance of the proposed adaptive observers are given. These include an unobservable system, an example reported in the literature, and the widely popular, and difficult to control, single-ended primary inductor converter

Abstract:

Wind turbines are often controlled to harvest the maximum power from the wind, which corresponds to operation at the top of the bell-shaped power coefficient graph. Such a mode of operation may be achieved by implementing an extremum seeking data-based strategy, which is an invasive technique that requires the injection of harmonic disturbances. Another approach is based on knowledge of the analytic expression of the power coefficient function, information usually unreliably provided by the turbine manufacturer. In this paper we propose a globally exponentially convergent online estimator of the parameters entering the wind turbine's power coefficient function. This corresponds to the solution of an identification problem for a nonlinear, nonlinearly parameterized, underexcited system. To the best of our knowledge we have provided the first solution to this challenging and practically important problem
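
For readers unfamiliar with the bell-shaped power coefficient mentioned above, the sketch below evaluates one commonly used empirical parameterization of Cp as a function of the tip-speed ratio and locates its peak numerically; the functional form and coefficients are illustrative assumptions, not the parameterization estimated in the paper.

import numpy as np

# One common empirical shape: Cp(lam) = c1 * (c2/lam - c3) * exp(-c4/lam).
# Coefficient values are invented for illustration only.
c1, c2, c3, c4 = 0.44, 125.0, 6.94, 16.5

def cp(lam):
    return c1 * (c2 / lam - c3) * np.exp(-c4 / lam)

lam = np.linspace(1.0, 20.0, 2000)
i = int(np.argmax(cp(lam)))
print(f"peak Cp = {cp(lam[i]):.3f} at tip-speed ratio = {lam[i]:.2f}")
# maximum power tracking then amounts to regulating rotor speed so that
# omega * R / v stays at this optimal tip-speed ratio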

Abstract:

In this paper we address the problem of adaptive state observation of affine-in-the-states time-varying systems with delayed measurements and unknown parameters. We build on the results proposed in [Bobtsov et al. 2021a] and [Bobtsov et al. 2021c]. The case with known parameters has been studied by many researchers, see [Sanz et al. 2019, Bobtsov et al. 2021b] and references therein, where, similarly to the approach adopted here, the system is treated as a linear time-varying system. We show that the parameter estimation-based observer (PEBO) design proposed in [Ortega et al. 2015, 2021] provides a very simple solution for the unknown parameter case. Moreover, when PEBO is combined with the dynamic regressor extension and mixing (DREM) estimation technique [Aranovskiy et al. 2016, Ortega et al. 2019], the estimated state converges in fixed time with extremely weak excitation assumptions

Abstract:

The problem of effective use of Phasor Measurement Units (PMUs) to enhance power systems awareness and security is a topic of key interest. The central question to solve is how to use these new measurements to reconstruct the state of the system. In this article, we provide the first solution to the problem of (globally convergent) state estimation of multimachine power systems equipped with PMUs and described by the fourth-order flux-decay model. This article is a significant extension of our previous result, where this problem was solved for the simpler third-order model, for which it is possible to recover algebraically part of the unknown state. Unfortunately, this property is lost in the more accurate fourth-order model, and we are confronted with the problem of estimating the full state vector. The design of the observer relies on two recent developments proposed by the authors, a parameter estimation based approach to the problem of state estimation and the use of the Dynamic Regressor Extension and Mixing (DREM) technique to estimate these parameters. The use of DREM allows us to overcome the problem of lack of persistent excitation that stymies the application of standard parameter estimation designs. Simulation results illustrate the latter fact and show the improved performance of the proposed observer with respect to a locally stable gradient-descent-based observer

Abstract:

In this article, we propose a PI passivity-based controller, applicable to a large class of switched power converters, that ensures global state regulation to a desired equilibrium point. A solution to this problem requires full state-feedback, which makes it practically unfeasible. To overcome this limitation we construct a state observer that is implementable with measurements that are available in practical applications. The observer reconstructs the state in finite-time, ensuring global convergence of the PI. An adaptive version of the observer, where some parameters of the converter are estimated, is also proposed. The excitation requirement for the observer is very weak and is satisfied in normal operation of the converters. Realistic simulation results illustrate the excellent performance and robustness vis-à-vis noise and parameter uncertainty of the proposed output-feedback PI

Abstract:

In this paper we address the problem of state observation of linear time-varying (LTV) systems with delayed measurements, which has attracted the attention of many researchers, see Sanz et al. (2019) and references therein. We show that the parameter estimation-based observer (PEBO) design proposed in Ortega, Bobtsov, Nikolaev, Schiffer, and Dochain (2021) and Ortega, Bobtsov, Pyrkin, and Aranovskiy (2015) provides a very simple solution to the problem with reduced prior knowledge. Moreover, when PEBO is combined with the dynamic regressor extension and mixing (DREM) estimation technique (Aranovskiy, Bobtsov, Ortega, & Pyrkin, 2017; Ortega, Gerasimov, Barabanov, & Nikiforov, 2019), the estimated state converges in fixed time with extremely weak excitation assumptions

Abstract:

How many dimensions adequately characterize voting on U.S. trade policy? How are these dimensions to be interpreted? This paper seeks those answers in the context of voting on the landmark 1988 Omnibus Trade and Competitiveness Act. The paper takes steps beyond the existing literature. First, using a factor analytic approach, the dimension issue is examined to determine whether subsets of roll call votes on trade policy are correlated. A factor-analytic result allows the use of a limited number of votes for this purpose. Second, a structural model with latent variables is used to find what economic and political factors comprise these dimensions. The study yields two main findings. More than one dimension determines voting in the Senate, with the main dimension driven by economic interest, not ideology. Although two dimensions are required to fully account for House voting, one dimension dominates. That dimension is driven primarily by party. Based on reported evidence, and a growing consensus in the congressional studies literature, this finding is attributed to interest-based leadership that evolves in order to solve collective action problems faced by individual legislators

Abstract:

This paper presents the design of a Proportional-Integral Passivity-based Controller (PI-PBC) for a current source inverter feeding a resistive load. Thanks to the definition of a new passive output, the closed-loop system is shown to be globally asymptotically stable. This result solves the internal stability problem reported for these power converters. To robustify the control algorithm, the paper also includes the design of a parameter estimation scheme for the parasitic resistances and the load conductance. Numerical simulations are carried out to validate the control algorithm. The simulation stage compares the behaviour using the averaged model of the power converter and a more realistic switching model, including the three-phase implementation

Abstract:

At World Trade Organization (WTO), nothing is agreed until everything is agreed and until everyone agrees at the negotiating tables, and that 'magic' moment has been difficult to arrive at. Some WTO Members have argued that if all Members cannot move ahead together with the acceptance of new rules, the Members who are able and willing to move ahead should be provided with the required space to do so. Some Members have indeed chosen to push ahead as they have recently sought progress in negotiations through the Joint Statement Initiatives (JSIs). The JSI proponents claim that JSIs can contribute to building a more responsive and relevant WTO - which will be critical to restoring global trade and economic growth in the wake of the COVID-19 crisis. Others have staunchly opposed such plurilateral attempts at trade liberalization on various grounds, often labelling them as attempts to circumvent the WTO's core tenets of multilateralism. The article contributes to this debate, as the authors assess different routes through which JSIs can be added to the WTO acquis and the WTO-compatibility of each of these routes. It then assesses the possible detrimental impact that JSIs can have on the essence and fabric of the multilateral trading system (MTS)

Abstract:

The World Trade Organization (WTO) has three main functions: (i) it provides a negotiation forum where Members can negotiate new agreements and understandings, (ii) it provides a judicial forum where trade disputes between countries can be settled, and (iii) it acts as an executive forum for the administration and application of the WTO agreements, including capacity-building and training in this respect. Currently, it is only performing its executive function, as the other two functions remain stalled. The authors in this article analyse two challenges that have contributed to paralysing the WTO's legislative and judicial functions. With this assessment, the authors suggest that the 'real elephant in the room', i.e., the root cause behind these challenges, is the avoidable 'consensus-based decision-making'

Abstract:

For a multilateral system to be sustainable, it is important to have several escape clauses which can allow countries to protect their national security concerns. However, when these escape windows are too wide or ambiguous, defining their ambit and scope becomes challenging yet crucial to ensure that they are not open to misuse. The recent Panel Ruling in Russia-Measures Concerning Traffic in Transit is the very first attempt by the WTO to clarify the scope and ambit of National Security Exception. In this paper, we argue that the Panel has employed a combination of an objective and a subjective approach to interpret this exception. This hybrid approach to interpret GATT Article XXI (b) provides a systemic balance between the sovereign rights of the members to invoke the security exception and their right to free and open trade. But has this Ruling opened Pandora's box? In this paper, we address this issue by providing an in-depth analysis of the Panel's decision

Abstract:

This essay is concerned with computation as a tool for the analysis of mathematical models in economics. It is our contention that the use of high-performance computers is likely to play the same substantial role in providing understanding of economic models that it does with regard to models in the physical and biological sciences. The main thrust of our commentary is that numerical simulations of mathematical models are in certain respects like experiments performed in a laboratory, and that this view imposes conditions on the way they are carried out and reported

Abstract:

Our research aims to help children with autism improve their social interactions through discrete messages received on a waist-worn smart band. In this paper, we describe the design and development of an interactive system on a Microsoft Band 2 and present the results of an evaluation with three students, which gives us positive evidence that using this form of support can increase the quantity and quality of their social interactions

Abstract:

The Historia Roderici Campidocti is a medieval Hispano-Latin work that recounts the most significant events in the brilliant military career of Rodrigo Díaz de Vivar, El Cid Campeador. In this note, we present some remarks on the appearance and function of birds in this work, as well as in other texts on Cidian themes, in order to establish possible relationships and influences among them based on a shared topos

Abstract:

The Historia Roderici Campidocti is a medieval Hispano-Latin work relating the most important events in the military career of Rodrigo Díaz de Vivar, El Cid Campeador. In this article, we comment on the appearance and the role of birds in El Cid literature in order to establish their potential relationships and influences

Abstract:

In this article, we present some new results for the design of PID passivity-based controllers (PBCs) for the regulation of port-Hamiltonian (pH) systems. The main contributions of this article are: (i) new algebraic conditions for the explicit solution of the partial differential equation required in this design; (ii) revealing the deleterious impact of the dissipation obstacle that limits the application of the standard PID-PBC to systems without pervasive dissipation; (iii) the proposal of a new PID-PBC which is generated by two passive outputs, one with relative degree zero and the other with relative degree one. The first output ensures that the PID-PBC is not hindered by the dissipation obstacle, while the relative degree of the second passive output allows the inclusion of a derivative term. Making the procedure more constructive and removing the requirement on the dissipation significantly extends the realm of application of PID-PBC. Moreover, allowing the possibility of adding a derivative term to the control enhances its transient performance

Abstract:

Equilibrium stabilization of nonlinear systems via energy shaping is a well-established, robust, passivity-based controller design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations, which is usually a difficult task. In this paper a new, fully constructive, procedure to shape the energy for a class of port-Hamiltonian systems that obviates the solution of partial differential equations is proposed. Proceeding from the well-known passive, power shaping output we propose a nonlinear static state-feedback that preserves passivity of this output but with a new storage function. A suitable selection of a controller gain makes this function positive definite, hence it is a suitable Lyapunov function for the closed-loop. The resulting controller may be interpreted as a classical PI. Connections with other standard passivity-based controllers are also identified

Abstract:

Equilibrium stabilisation of nonlinear systems via energy shaping is a well-established, robust, passivity-based controller design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations. In this paper a new, fully constructive, procedure to shape the energy for a class of port-Hamiltonian systems that obviates the solution of partial differential equations is proposed. Proceeding from the well-known passive, power shaping output we propose a nonlinear static state-feedback that preserves passivity of this output but with a new storage function. This function contains some tuning gains used to ensure it is positive definite, hence a suitable Lyapunov function for the closed-loop. Connections with other standard passivity-based controllers are indicated and it is shown that the new controller design is applicable to two benchmark examples

Abstract:

In this Action-Process-Object-Schema (APOS) study, we aim to study the extent to which results from our previous research on student understanding of two-variable functions can be replicated in a different institutional context. We conducted the study at a university in another country and with a different instructor than in previous studies. The study consisted of comparing two sections of the same course: one taught through lectures and the other using activities designed with APOS theory and the ACE didactical strategy (Activities, Class discussion, Exercises). We show the results of a comparison of students' performance in both groups and give evidence of the generalizability of our previously obtained research results and the possible replication of didactic aspects across institutions. We found that using the APOS theory didactical approach favors a deeper understanding of two-variable functions. We also found that factors other than the activity sets and teaching strategy affected the replication

Abstract:

In this paper we provide a necessary and sufficient condition for the solvability of inhomogeneous Cauchy-Riemann type systems where the datum consists of continuous C-valued functions, and we describe its general solution by embedding the system in an appropriate quaternionic setting

Abstract:

This paper aims to study several boundary value properties, to derive a Poincaré-Bertrand formula and to solve some Dirichlet-type problems for the 2D quaternionic metaharmonic layer potentials

Abstract:

The Cimmino system offers a natural and elegant generalization to the four-dimensional case of the Cauchy–Riemann system of first order complex partial differential equations. Recently, it has been proved that many facts from holomorphic function theory have their extensions to the Cimmino system theory. In the present work a Poincaré–Bertrand formula related to the Cauchy–Cimmino singular integrals over piecewise Lyapunov surfaces in R4 is derived with recourse to arguments involving quaternionic analysis. Furthermore, this paper obtains some analogues of the Hilbert formulas on the unit 3-sphere and on 3-dimensional space for the theory of the Cimmino system

Abstract:

Starting from Sinclair's 1976 work [6] on automatic continuity of linear operators on Banach spaces, we prove that sequences of intertwining continuous linear maps are eventually constant with respect to the separating space of a fixed linear map. Our proof uses a gliding hump argument. We also consider aspects of continuity of linear functions between locally convex spaces and prove that such a linear function T from the locally convex space X to the locally convex space Y is continuous whenever the separating space G(T) is the zero vector in Y and for which X and Y satisfy conditions for a closed graph theorem

Abstract:

We prove an extension of the Pareto optimization criterion to locally complete locally convex vector spaces to guarantee the existence of fixed points of set-valued maps

Abstract:

In this article we discuss the relationship between three types of locally convex spaces: docile spaces, Mackey first countable spaces, and sequentially Mackey first countable spaces. More precisely, we show that docile spaces are sequentially Mackey first countable. We also show the existence of sequentially Mackey first countable spaces that are not Mackey first countable, and we characterize Mackey first countable spaces in terms of normability of certain inductive limits

Abstract:

Bisectors of line segments are quite simple geometrical objects. Despite their simplicity, they have many surprising and useful properties. As metric objects, the shape of bisectors depends upon the metric considered. This article discusses geometric properties of bisectors of line segments in the plane, when the bisectors are taken with respect to the usual p-norms. Although the shape of bisectors changes as their defining p-norm varies, it is shown that the bisectors share exactly three points (or infinitely many points in exceptional cases determined by the orientation of the base line segment)
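
A quick numerical illustration of how the bisector varies with the norm (endpoints, sampled heights and bracketing interval are arbitrary choices): for each horizontal line, we locate the bisector point by bisection on the signed distance difference. At the height of the segment's midpoint all p-norms agree, consistent with the shared points mentioned above.

import numpy as np

a, b = np.array([0.0, 0.0]), np.array([2.0, 1.0])   # segment endpoints (arbitrary)

def g(x, y, p):
    # signed difference of p-distances to the two endpoints
    q = np.array([x, y])
    return np.linalg.norm(q - a, p) - np.linalg.norm(q - b, p)

def bisector_x(y, p, lo=-10.0, hi=10.0):
    # lo and hi bracket a sign change of g along the horizontal line at height y
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid, y, p) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

for y in (-1.0, 0.5, 2.0):          # y = 0.5 is the midpoint height
    print(y, {p: round(bisector_x(y, p), 4) for p in (1, 2, 5, np.inf)})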

Abstract:

Ekeland's variational principle and the existence of critical points of dynamical systems, also known as multiobjective optimization, have been proved in the setting of locally complete spaces. In this article we prove that these two properties can be deduced one from the other under certain convexity conditions

Abstract:

In this paper we prove Ekeland's variational principle in the setting of locally complete spaces for functions that are lower semicontinuous from above and bounded below. We use this theorem to prove Caristi's fixed point theorem in the same setting, and also for lower semicontinuous functions

Abstract:

We define a generalization of Mackey first countability and prove that it is equivalent to being docile. A consequence of the main result is to give a partial affirmative answer to an old question of Mackey regarding arbitrary quotients of Mackey first countable spaces. Some applications of the main result to spaces such as inductive limits are also given

Abstract:

For a partition P of a set C we prove that if P maximizes the generalized utilitarian social welfare function, then P is a Pareto optimal partition. We also give a condition under which the maximizing partition is unique

Abstract:

More than half of the students in the Latin American and Caribbean region are below PISA level 1, which means that the majority of students in our region cannot identify information and carry out routine procedures according to direct instructions in explicit situations. There have been some good experiences in individual countries aimed at reversing this situation, but they are not enough, and they are not happening in all countries. I will talk about these experiences. In all of them, professional mathematicians need to help teachers acquire the necessary knowledge and become more effective instructors who can raise the standard of every student

Abstract:

In this paper we prove an extension of Ekeland’s variational principle in the setting of locally complete spaces. We also present an equilibrium version of the Ekeland-type variational principle, a Caristi-Kirk type fixed point theorem for multivalued maps and a Takahashi minimization theorem; we then prove that they are equivalent

Abstract:

In this paper we study the Pareto optimization problem in the framework of locally convex spaces with the restricted assumption that only some related sets are locally complete

Abstract:

We prove an extension of Ekeland’s variational principle to locally complete spaces which uses subadditive, strictly increasing continuous functions as perturbations

Abstract:

We prove that for the inductive limit of sequentially complete spaces, regularity or local completeness implies the Banach Disk Closure Property (BDCP) (an inductive limit enjoys the BDCP if all Banach disks in the steps of the inductive limit are such that their closures, with respect to the inductive limit topology, are Banach disks as well). In particular we obtain that for an inductive limit of sequentially complete spaces, regularity is equivalent to the BDCP plus an “almost regularity” condition

Abstract:

We study the heredity of local completeness and the strict Mackey convergence property from the locally convex space E to the space of absolutely p-summable sequences on E, lp(E) for 1 ≤ p < ∞

Abstract:

We define the lp,q-summability property and study the relations between the lp,q-summability property, Banach-Mackey spaces and locally complete spaces. We prove that, for c0-quasibarrelled spaces, the Banach-Mackey and locally complete properties are equivalent. The last section is devoted to the study of the CS-closed sets introduced by Jameson and Kakol

Abstract:

In this paper we consider the Sobolev-Slobodeckij spaces W^(m,p)(R,E), where E is a strict (LF)-space, m ∈ (0, ∞) \ ℕ and p ∈ [1, ∞). We prove that W^(m,p)(R,E) has the approximation property provided E has it; furthermore, if E is a Banach space with the strict approximation property, then W^(m,p)(R,E) has this property

Abstract:

This is a study of the relationship between the concepts of Mackey, ultrabornological, bornological, barrelled, and infrabarrelled spaces and the concept of fast completeness. An example of a fast complete but not sequentially complete space is presented.

Abstract:

The notion of "praxeology" from the anthropological theory of the didactic (ATD) can be used as a framework to approach what has recently been called the networking of theories in mathematics education. Theories are interpreted as research praxeologies, and different modalities of "dialogues" between research praxeologies are proposed, based on alternatively considering the main features and proposals of one theory from the perspective of the other. To illustrate this networking methodology, we initiate a dialogue between APOS (action-process-object-schema) and the ATD itself. It starts from the theoretical component of both research praxeologies followed by the technological and technical ones. Both dialogue modalities and the resulting insights are illustrated, and the elements of APOS and the ATD that the dialogue can promote and develop are underlined. The results found indicate that a complete dialogue taking into account all components of research praxeologies appears as an unavoidable step in the networking of research praxeologies

Abstract:

We study transaction costs for making deposits within the privatized pension system in Mexico. We analyze an expansion of access channels for additional (voluntary) contributions at 7-Eleven stores, followed by a media campaign providing information on this policy and persuasive messages to save. We estimate a differential 6-9 percent increase in the volume of transactions post-policy in municipalities with 7-Eleven relative to those without. However, due to smaller deposits compared to pre-policy sizes, we find modest effects on the flow of savings. Contribution size was not just smaller for marginal savers, but also decreased significantly for some inframarginal savers

Abstract:

This paper proves the large deviation principle for a class of non-degenerate small noise diffusions with discontinuous drift and with state-dependent diffusion matrix. The proof is based on a variational representation for functionals of strong solutions of stochastic differential equations and on weak convergence methods.

Abstract:

We consider the construction of Markov chain approximations for an important class of deterministic control problems. The emphasis is on the construction of schemes that can be easily implemented and which possess a number of highly desirable qualitative properties. The class of problems covered is that for which the control is affine in the dynamics and with quadratic running cost. This class covers a number of interesting areas of application, including problems that arise in large deviations, risk-sensitive and robust control, robust filtering, and certain problems in computer vision. Examples are given as well as a proof of convergence

Abstract:

Morphometric datasets only convey useful information about variation when measurement landmarks and relevant anatomical axes are clearly defined. We propose that anatomical axes of 3D digital models of bones can be standardized prior to measurement using an algorithm that automatically finds a universal geometric alignment among sampled bones. As a case study, we use teeth of "prosimian" primates. In this sample, equivalent occlusal planes are determined automatically using the R-package auto3dgm. The area of projection into the occlusal plane for each tooth is the measurement of interest. This area is used in computation of a shape metric called relief index (RFI), the natural log of the square root of crown area divided by the square root of occlusal plane projection area. We compare mean and variance parameters of area and RFI values computed from these automatically orientated tooth models with values computed from manually orientated tooth models. According to our results, the manual and automated approaches yield extremely similar mean and variance parameters. The only differences that plausibly modify interpretations of biological meaning slightly favor the automated treatment because a greater proportion of differences among subsamples in the automated treatment are correlated with dietary differences. We conclude that—at least for dental topographic metrics—automated alignment recovers a variance pattern that has meaning similar to previously published datasets based on manual data collection. Therefore, future applications of dental topography can take advantage of automatic alignment to increase objectivity and repeatability
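
Since the definition quoted above fixes the formula, the relief index reduces to half the logarithm of the ratio of the two areas; a minimal sketch with invented area values:

import math

def relief_index(crown_area_3d, occlusal_projection_area_2d):
    # RFI = ln( sqrt(3D crown area) / sqrt(2D occlusal projection area) )
    #     = 0.5 * ln(area ratio)
    return 0.5 * math.log(crown_area_3d / occlusal_projection_area_2d)

print(relief_index(12.4, 7.9))   # illustrative areas, e.g. in mm^2; about 0.23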

Abstract:

In "Why Democracy Protests Do Not Diffuse," we examine whether or not countries are significantly more likely to experience democracy protests when one or more of their neighbors recently experienced a similar protest. Our goal in so doing was not to attack the existing literature or to present sensational results, but to evaluate the extent to which the existing literature can explain the onset of democracy protests more generally. In addition to numerous studies attributing to diffusion the proliferation of democracy protests in four prominent waves of contention in Europe (1848, 1989, and early 2000s) and in the Middle East and North Africa (2011), there are multiple academic studies, as well as countless articles in the popular press, claiming that democracy protests have diffused outside these well-known regions and periods of contention (e.g., Bratton and van de Walle 1992; Weyland 2009; della Porta 2017). There are also a handful of cross-national statistical analyses that hypothesize that anti-regime contention, which includes but is not limited to democracy protests, diffuses globally (Braithwaite, Braithwaite, and Kucik 2015; Gleditsch and Rivera 2017; Escriba`-Folch, Meseguer, and Wright 2018). Herein, we discuss what we can and cannot conclude from our analysis about the diffusion of democracy protests and join our fellow forum participants in identifying potential areas for future research. Far from closing this debate, we hope our article will stimulate further conversations and analyses about the theoretical and empirical bases of contention, diffusion, and democratization

Abstract:

One of the primary international factors proposed to explain the geographic and temporal clustering of democracy is the diffusion of democracy protests. Democracy protests are thought to diffuse across countries, primarily, through a demonstration effect, whereby protests in one country cause protests in another based on the positive information that they convey about the likelihood of successful protests elsewhere and, secondarily, through the actions of transnational activists. In contrast to this view, we argue that, in general, democracy protests are not likely to diffuse across countries because the motivation for and the outcome of democracy protests result from domestic processes that are unaffected or undermined by the occurrence of democracy protests in other countries. Our statistical analysis supports this argument. Using daily data on the onset of democracy protests around the world between 1989 and 2011, we find that in this period, democracy protests were not significantly more likely to occur in countries when democracy protests had occurred in neighboring countries, either in general or in ways consistent with the expectations of diffusion arguments

Abstract:

In this chapter, we briefly discuss key dynamical features of a generalised Schnakenberg model. This model sheds light on the morphogenesis of the plant root hair initiation process. Our discussion is focused on Arabidopsis thaliana, which is experimentally well characterized and a prime model plant for plant researchers. Here, relationships between physical attributes and biochemical interactions that occur at a sub-cellular level are revealed. The model consists of a two-component non-homogeneous reaction-diffusion system, which takes into account an on-and-off switching process of small G-protein family members called Rho-of-Plants (ROPs). This switching, however, is catalysed by the plant hormone auxin, which is known to play a crucial role in many morphogenetic processes in plants. Upon applying semi-strong theory and performing numerical bifurcation analysis, together with time-step simulations, we present results from a thorough analysis of the dynamics of spatially localised structures in 1D and 2D spatial domains. These compelling dynamical features are found to give rise to a wide variety of patterns. Key features of the full analysis are discussed

Abstract:

In this manuscript we discuss one of the contributions that dynamical systems theory makes to our understanding of the growth of a population of individuals with limited resources. In this context, we briefly explore some of the ideas that allow us to understand two mechanisms of vital importance in ecology: competition and interaction with the resources of the environment. The approach we follow consists of presenting the key results published in [1] by R. Dilao and T. Domingos for the dynamics of a generalization of the Leslie model. We also present the results of some numerical experiments in which a delay in the evolution variable is introduced into the dynamics of resource growth, together with a simple competition dynamics among the members of the population

Abstract:

This article describes the essential features that determine chaotic behavior in dynamical systems. To this end, the analysis of the logistic equation is reproduced. The concepts of fractal dimension and self-similarity are also illustrated by means of the Mandelbrot set. Sensitivity to initial conditions is exemplified by the Lorenz system. Finally, the necessary ingredients of the Ruelle–Takens–Newhouse route to chaos are presented in the Barrio–Varea–Aragón–Maini reaction-diffusion system
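
As a minimal taste of the sensitivity to initial conditions discussed here, the following sketch iterates the logistic map from two nearby starting points; the parameter value and the initial gap are arbitrary choices.

# Logistic map x_{n+1} = r * x_n * (1 - x_n) in the chaotic regime.
r = 3.9
x, y = 0.5, 0.5 + 1e-9           # two almost identical initial conditions
for n in range(50):
    x, y = r * x * (1 - x), r * y * (1 - y)
print(abs(x - y))                # the gap has grown by many orders of magnitude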

Abstract:

This chapter provides an introduction to the conditions that spontaneously produce localized structures in reaction-diffusion systems whose kinetic part contains quadratic and cubic nonlinearities, in the parameter regions where this phenomenon occurs. A brief sample of patterns in physical and biological systems whose nature is of localized origin is presented. Next, we explore the conditions that give rise to the Turing bifurcation and the elementary properties of the mechanism known as homoclinic snaking. The main objective of this chapter is to present the necessary ingredients that capture the essence of both machineries and that therefore induce the appearance of localized structures in systems that generally produce extended patterns. A generalized Schnakenberg-type system with source and loss terms, in one and two spatial dimensions, is used as an example

Abstract:

A generalized Schnakenberg reaction-diffusion system with source and loss terms and a spatially dependent coefficient of the nonlinear term is studied both numerically and analytically in two spatial dimensions. The system has been proposed as a model of hair initiation in the epidermal cells of plant roots. Specifically, the model captures the kinetics of a small G-protein ROP, which can occur in active and inactive forms, and whose activation is believed to be mediated by a gradient of the plant hormone auxin. Here the model is made more realistic with the inclusion of a transverse coordinate. Localized stripe-like solutions of active ROP occur for high enough total auxin concentration and lie on a complex bifurcation diagram of single- and multipulse solutions. Transverse stability computations, confirmed by numerical simulation, show that, apart from a boundary stripe, these one-dimensional (1D) solutions typically undergo a transverse instability into spots. The spots so formed typically drift and undergo secondary instabilities such as spot replication. A novel two-dimensional (2D) numerical continuation analysis is performed that shows that the various stable hybrid spot-like states can coexist. The parameter values studied lead to a natural, singularly perturbed, so-called semistrong interaction regime. This scaling enables an analytical explanation of the initial instability by describing the dispersion relation of a certain nonlocal eigenvalue problem. The analytical results are found to agree favorably with the numerics. Possible biological implications of the results are discussed

Abstract:

A mathematical analysis is undertaken of a Schnakenberg reaction-diffusion system in one dimension with a spatial gradient governing the active reaction. This system has previously been proposed as a model of the initiation of hairs from the root epidermis of Arabidopsis, a key cellular-level morphogenesis problem. This process involves the dynamics of the small G-proteins, Rhos of plants, which bind to form a single localized patch on the cell membrane, prompting cell wall softening and subsequent hair growth. A numerical bifurcation analysis is presented as two key parameters, involving the cell length and the overall concentration of the auxin catalyst, are varied. The results show hysteretic transitions from a boundary patch to a single interior patch and to multiple patches whose locations are carefully controlled by the auxin gradient. The results are confirmed by an asymptotic analysis using semistrong interaction theory, leading to closed form expressions for the patch locations and intensities. A close agreement between the numerical bifurcation results and the asymptotic theory is found for biologically realistic parameter values. Insight into the initiation of transition mechanisms is obtained through a linearized stability analysis based on a nonlocal eigenvalue problem. The results provide further explanation of the recent agreement found between the model and biological data for both wild-type and mutant hair cells

Abstract:

Subcritical Turing bifurcations of reaction-diffusion systems in large domains lead to spontaneous onset of well-developed localized patterns via the homoclinic snaking mechanism. This phenomenon is shown to occur naturally when balancing source and loss effects are included in a typical reaction-diffusion system, leading to a super- to subcritical transition. Implications are discussed for a range of physical problems, arguing that subcriticality leads to naturally robust phase transitions to localized patterns

Abstract:

Transmission switching has proven to be a highly useful post-contingency recovery technique by allowing power system operators increased levels of control through leveraging the topology of the power system. However, transmission switching remains implemented only in a limited capacity because of concerns over computational complexity, uncertainty of performance in AC systems, and scalability to real-world, large-scale systems. We propose a heuristic which uses a sophisticated guided undersampling procedure combined with logistic regression to accurately identify transmission switching actions that reduce post-contingency AC power flow violations. The proposed heuristic was tested on real-world, large-scale AC power system data and consistently identified optimal or near-optimal transmission switching actions. Because the proposed heuristic is computationally inexpensive, addresses an AC system, and is validated on real-world large-scale data, it directly addresses the aforementioned issues regarding transmission switching implementation
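
As a rough illustration of the classify-and-rank idea (with synthetic features and plain uniform undersampling standing in for the paper's guided procedure), a minimal scikit-learn sketch:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic per-action features and a rare "reduces violations" label.
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=5000) > 3).astype(int)

# Undersample the majority (negative) class to balance the training set.
pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
keep = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=False)])
clf = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

# Rank candidate switching actions for one contingency by predicted score.
candidates = rng.normal(size=(20, 8))
ranking = np.argsort(-clf.predict_proba(candidates)[:, 1])
print("try switching actions in this order:", ranking[:5])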

Abstract:

This paper assesses the impact of the North American Free Trade Agreement on Mexican manufacturing plants' output prices and markups. We distinguish between Mexican goods that are exported and those sold domestically, and decompose their prices separately into markups and marginal costs. We then analyze how these components were affected by the reductions in Mexican output tariffs, intermediate input tariffs, and U.S. tariffs on Mexican exports. We find that domestically sold products saw a decline in prices as Mexican plants faced more competition and gained access to cheaper inputs. Prices of exported goods fell only slightly as plants increased their markups in response to a favorable competitive environment due to declines in U.S. tariffs

Abstract:

We use new administrative data from Ecuador to study the welfare effects of the misallocation of procurement contracts caused by political connections. We show that firms that form links with the bureaucracy through their shareholders experience an increased probability of being awarded a government contract. We develop a novel sufficient statistic - the average gap in revenue productivity and capital share of revenue - to measure the efficiency effects, in terms of input utilization, of political connections. Our framework allows for heterogeneity in quality, productivity, and non-constant marginal costs. We estimate political connections create welfare losses ranging from 2 to 6% of the procurement budget

Abstract:

We evaluate whether the market reacts rationally to profit warnings by testing for subsequent abnormal returns. Warnings fall into two classes: those that include a new earnings forecast, and those that offer only the guidance that earnings will be below current expectations. We find significant negative abnormal returns in the first three months following both types of warning. There is also evidence that underreaction is more pronounced when the disclosure is less precise. Abnormal returns are significantly more negative following disclosures that offer only qualitative guidance than when a new earnings forecast is included

Abstract:

Nowadays, the tendency to replace conventional fossil-based plastics is increasing considerably; there is a growing trend towards alternatives that involve the development of plastic materials derived from renewable sources, which are compostable and biodegradable. Indeed, only 1.5 % of total plastic production belongs to the small bioplastics market, even though these materials with a partial or full composition from biomass are expanding rapidly. A very interesting field of investigation is currently being developed in which the disposal and processing of the final products are evaluated in terms of reducing environmental harm. This review presents a compilation of polyethylene (PE) types, their uses, and current problems in PE waste management and recycling. In particular, the review examines the capabilities to synthesize bio-based PE from natural and renewable sources as a replacement for raw material derived from petroleum. It also covers recent studies on the degradation of different types of PE, with weight losses ranging from 1 to 47 %, the techniques used, and the main changes observed after degradation. Finally, perspectives are presented on renewable and non-renewable polymers, according to their non-degradable, biodegradable, or compostable behavior, including recent composting studies on PE. In addition, the review contributes to the 3R approaches to responsible PE waste management and advancement towards an environmentally friendly PE

Abstract:

In this work we perform a first study of basic invariant sets of the spatial Hill's four-body problem, using both analytical and numerical approaches. This system depends on a mass parameter in such a way that the classical Hill's problem is recovered when m=0. Regarding the numerical work, we perform a numerical continuation, in the Jacobi constant C and for several values of the mass parameter m, by applying a classical predictor-corrector method, together with a high-order Taylor method with variable step and order and automatic differentiation techniques, to specific boundary value problems related to the reversing symmetries of the system. The solutions of these boundary value problems define initial conditions of symmetric periodic orbits. Some of the results were obtained starting from periodic orbits of Hill's three-body problem. The numerical explorations reveal that a second distant disturbing body has a relevant effect on the stability of the orbits and bifurcations among these families. We have also found some new families of periodic orbits that do not exist in the classical Hill's three-body problem; these families have some desirable properties from a practical point of view
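
The predictor-corrector continuation idea, stripped down to a scalar toy equation f(x, C) = 0 (the paper instead continues solutions of boundary value problems with high-order Taylor integration):

import numpy as np

f  = lambda x, C: x**3 - x + C       # toy family of equilibria
fx = lambda x, C: 3 * x**2 - 1       # derivative used by the Newton corrector

x, branch = 1.0, []
for C in np.linspace(0.0, 0.35, 36): # sweep the continuation parameter
    # trivial predictor: start the corrector from the previous solution x
    for _ in range(20):              # corrector: Newton iterations in x
        x -= f(x, C) / fx(x, C)
    branch.append((round(C, 3), round(x, 4)))
print(branch[-1])
# past the fold at C = 2/(3*sqrt(3)) ~ 0.385 this naive sweep would fail,
# which is why pseudo-arclength variants are used in practice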

Abstract:

The circular restricted four-body problem studies the dynamics of a massless particle under the gravitational force produced by three point masses that follow circular orbits with constant angular velocity, the configuration of these circular orbits forms an equilateral triangle for all time; this problem can be considered as an extension of the celebrated restricted three-body problem. In this work we investigate the orbits which emanate from some equilibrium points. In particular, we focus on the horseshoe shaped orbits (rotating frame), which are well known in the restricted three-body problem. We study some families of symmetric horseshoe orbits and show their relation with a family of the so called Lyapunov orbits

Abstract:

In this work we perform numerical explorations of some families of planar periodic orbits in the Hill approximation of the restricted four-body problem. This approximation is obtained by performing a symplectic scaling which sends the two massive bodies to infinity, by means of expanding the potential as a power series in the mass of the smallest primary and taking the limit as this mass tends to zero. The limiting Hamiltonian depends only on the relative mass of the second smallest primary. The resulting dynamics shares similarities with both the restricted three-body problem and the restricted four-body problem. We focus on certain families of symmetric periodic orbits of the infinitesimal particle, for some values of the mass parameter. We explore the evolution of these families as the Jacobi constant, or, equivalently, the energy, is varied continuously, and provide details on the horizontal and vertical stability of each family

Abstract:

The legal and economic analysis presented here empirically tests the theoretical framework advanced by Kugler, Verdier, and Zenou (2003) and Buscaglia (1997). This paper goes beyond the prior literature by focusing on the empirical assessment of the actual implementation of the institutional deterrence and prevention mechanisms contained in the United Nations’ Convention against Transnational Organized Crime (Palermo Convention). A sample of 107 countries that have already signed and/or ratified the Convention was selected. The paper verifies that the most effective implemented measures against organized crime are mainly founded on four pillars: (i) the introduction of more effective judicial decision-making control systems causing reductions in the frequencies of abuses of procedural and substantive discretion; (ii) higher frequencies of successful judicial convictions based on evidentiary material provided by financial intelligence systems aimed at the systematic confiscation of assets in the hands of criminal groups and under the control of “licit” businesses linked to organized crime; (iii) the attack against high level public sector corruption (that is captured and feudalized by organized crime) and (iv) the operational presence of government and/or non-governmental preventive programs (funded by the private sector and/or governments and/or international organizations) addressing technical assistance to the private sector, educational opportunities, job training programs and/or rehabilitation (health and/or behavioral) of youth linked to organized crime in high-risk areas (with high-crime, high unemployment, and high poverty)

Abstract:

This paper investigates IS flexibility as a way to meet competitive challenges in hypercompetitive industries by enabling firms to implement strategies by adapting their operations processes quickly to fast-paced changes. It presents a framework for IS flexibility as a multidimensional construct. Through a literature review and initial case study analysis, factors to assess flexibility in each dimension are constructed. Findings of an exploratory study conducted to test the framework are reported. Based on the above, it is argued that IS flexibility in hypercompetitive industries must be pursued from a holistic perspective to understand how it may be exploited to achieve a competitive advantage

Abstract:

Guided by the relationship between the breadth-first walk of a rooted tree and its sequence of generation sizes, we are able to include immigration in the Lamperti representation of continuous-state branching processes. We provide a representation of continuous-state branching processes with immigration by solving a random ordinary differential equation driven by a pair of independent Lévy processes. Stability of the solutions is studied and gives, in particular, limit theorems (of a type previously studied by Grimvall, Kawazu and Watanabe and by Li) and a simulation scheme for continuous-state branching processes with immigration. We further apply our stability analysis to extend Pitman’s limit theorem concerning Galton–Watson processes conditioned on total population size to more general offspring laws
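
In the simplest continuous special case, the representation above suggests an Euler-type simulation scheme: take the branching driver to be a Brownian motion with drift run at a speed proportional to the current population, and the immigration driver to be a pure drift, which yields the Feller diffusion with immigration. A toy version (all rates invented):

import numpy as np

rng = np.random.default_rng(2)
b, sigma, a = -0.5, 1.0, 0.3   # branching drift, branching noise, immigration rate
z, h, path = 1.0, 1e-3, []
for _ in range(20_000):        # simulate on the time interval [0, 20]
    # increment of the branching driver run at speed z, plus immigration a*h
    dX = b * z * h + sigma * np.sqrt(max(z, 0.0) * h) * rng.normal()
    z = max(z + dX + a * h, 0.0)   # clip at 0 to keep the population nonnegative
    path.append(z)
print(f"Z_20 = {path[-1]:.3f}, time average = {np.mean(path):.3f}")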

Abstract:

Several industries deal with imperfect and perishable raw materials that need to be cut in order to assemble products. Managing the purchasing, cutting, and sequencing decisions is a challenging problem that spans both inventory control and production process management, where decisions are usually optimised separately. In this research we develop a dynamic programming model that determines these joint decisions. The inspiration comes from furniture companies where the raw material consists of sheets of plywood that may contain imperfections that need to be avoided when cutting pieces to build the assembled furniture. We evaluate decisions regarding the layout of pieces on the plywood, dispatching policies for ordered furniture, and the quality of the raw material. The model is useful for performing cost analysis for different scenarios. Variations in the expected demand, purchasing, ordering, disposal, and holding costs and in the initial inventory are considered. Experiments conclude that focusing on quality becomes more important than the age of the plywood sheet if companies implement a production flow that cuts and sorts all pieces before assembling the furniture. Cost analysis confirms that a just-in-time policy in which little inventory is kept results in important savings compared with the standard operation of the companies

Abstract:

In this paper we tackle a two-dimensional cutting problem to optimize the use of raw material in a furniture company. Since the material used to produce pieces of furniture comes from a natural source, the plywood sheets may present defects that affect the total usable area of a single sheet. The heuristic presented in this research deals with these defects and presents the best way to handle them. It also considers the use of the plywood sheets in the long-term planning of the company, since purchases of raw material are usually made only at certain periods of time and must last for several weeks. Experimental results show how an intelligent cutting plan and selection of the plywood sheets considerably reduce the amount of raw material needed compared with the current operation of the company, and guarantee that the purchased sheets last throughout the planning period, regardless of the available cutting area on each plywood sheet

Abstract:

This work presents a two-dimensional cutting and packing problem that optimizes the use of raw material in a furniture factory. Since the material comes from a natural source, namely plywood sheets, it may contain defects. These defects affect the amount of raw material available in each sheet, since they cannot always be included in the final pieces. The heuristic presented in this work shows the best way to deal with the defects. We also consider the use of the plywood sheets in long-term planning, given that raw material purchases are usually made periodically and the stock must be sufficient to cover several weeks of demand

Abstract:

In this work, we consider a batching machine that can process several jobs at the same time. Batches have a restricted batch size, and the processing time of a batch is equal to the largest processing time among all jobs within the batch. We solve the bi-objective problem of minimizing the maximum lateness and number of batches. This function is relevant as we are interested in meeting due dates and minimizing the cost of handling each batch. Our aim is to find the Pareto-optimal solutions by using an epsilon-constraint method on a new mathematical model that is enhanced with a family of valid inequalities and constraints that avoid symmetric solutions. Additionally, we present a biased random-key genetic algorithm to approximate the optimal Pareto points of larger instances in reasonable time. Experimental results show the efficiency of our methodologies
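
On a toy instance small enough for exhaustive enumeration, the epsilon-constraint idea reads as follows (data invented; the paper uses a MIP model with valid inequalities and symmetry-breaking constraints instead of brute force):

from itertools import permutations

p = [2, 3, 5, 4, 1]   # processing times (illustrative)
d = [3, 7, 9, 12, 5]  # due dates (illustrative)
B = 2                 # batch capacity

def evaluate(seq, cuts):
    # batches are consecutive runs of seq, split at the positions in cuts
    t, lmax, start = 0, float("-inf"), 0
    for c in cuts + [len(seq)]:
        batch = seq[start:c]; start = c
        t += max(p[j] for j in batch)               # batch completion time
        lmax = max(lmax, max(t - d[j] for j in batch))
    return lmax, len(cuts) + 1                      # (max lateness, #batches)

def all_solutions():
    n = len(p)
    for seq in permutations(range(n)):
        for mask in range(1 << (n - 1)):            # all ways to cut the sequence
            cuts = [i + 1 for i in range(n - 1) if mask >> i & 1]
            segs = list(zip([0] + cuts, cuts + [n]))
            if all(z - a <= B for a, z in segs):    # respect batch capacity
                yield evaluate(seq, cuts)

sols = set(all_solutions())
for eps in range(1, len(p) + 1):                    # epsilon-constraint on #batches
    feasible = [l for l, m in sols if m <= eps]
    if feasible:
        print(f"#batches <= {eps}: best Lmax = {min(feasible)}")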

Abstract:

The cross entropy method was initially developed to estimate rare event probabilities through simulation, and has been adapted successfully to solve combinatorial optimization problems. In this paper we aim to explore the viability of using cross entropy methods for the vehicle routing problem. Published implementations for this problem have only considered a naive route-splitting scheme over a very limited set of instances. In addition to presenting a better route-splitting algorithm, we designed a cluster-first/route-second approach. We provide computational results to evaluate these approaches and discuss their advantages and drawbacks. The innovative method developed in this paper to generate clusters may be applied in other settings. We also suggest improvements to the convergence of the general cross entropy method
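
The core cross entropy loop, shown here on a toy binary maximization problem rather than a vehicle routing problem (for the VRP, the Bernoulli sampler would be replaced by a route or cluster sampler, but the elite-fraction update is the same):

import numpy as np

rng = np.random.default_rng(3)
n, samples, elite_frac, alpha = 30, 200, 0.1, 0.7
prob = np.full(n, 0.5)                  # Bernoulli sampling probabilities

for it in range(40):
    X = (rng.random((samples, n)) < prob).astype(float)     # sample solutions
    scores = X.sum(axis=1)                                   # toy objective
    elite = X[np.argsort(-scores)[: int(samples * elite_frac)]]
    prob = alpha * elite.mean(axis=0) + (1 - alpha) * prob   # smoothed update
print(prob.round(2))                    # concentrates near the all-ones optimum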

Abstract:

We address the problem of scheduling a single batching machine to minimize the maximum lateness with a constraint restricting the batch size. A solution for this NP-hard problem is defined by a selection of jobs for each batch and an ordering of those batches. As an alternative, we choose to represent a solution as a sequence of jobs. This approach is justified by our development of a dynamic program to find a schedule that minimizes the maximum lateness while preserving the underlying job order. Given this solution representation, we are able to define and evaluate various job-insert and job-swap neighborhood searches. Furthermore we introduce a new neighborhood, named split–merge, that allows multiple job inserts in a single move. The split–merge neighborhood is of exponential size, but can be searched in polynomial time by dynamic programming. Computational results with an iterated descent algorithm that employs the split–merge neighborhood show that it compares favorably with corresponding iterated descent algorithms based on the job-insert and job-swap neighborhoods
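
One way to realize the sequence-based representation described above is a Pareto-state dynamic program: for each prefix of the job sequence, keep the non-dominated (completion time, maximum lateness so far) pairs, since the lateness of a later batch depends on the accumulated time. The sketch below is consistent with that description but is not claimed to be the paper's exact recursion; the instance is invented.

```python
def best_lmax(seq, p, d, cap):
    """Min max-lateness over batchings of the fixed sequence `seq` (batch size <= cap).
    states[i] holds the non-dominated (completion_time, lmax) pairs for the first i jobs."""
    n = len(seq)
    states = [set() for _ in range(n + 1)]
    states[0] = {(0, float("-inf"))}
    for i in range(1, n + 1):
        cand = set()
        for j in range(max(0, i - cap), i):          # last batch = jobs j..i-1
            batch = [seq[k] for k in range(j, i)]
            bp = max(p[job] for job in batch)        # batch runs as long as its longest job
            bd = min(d[job] for job in batch)        # tightest due date in the batch
            for t, lm in states[j]:
                cand.add((t + bp, max(lm, t + bp - bd)))
        # prune dominated states: drop pairs beaten in both coordinates
        states[i] = {s for s in cand
                     if not any(o != s and o[0] <= s[0] and o[1] <= s[1] for o in cand)}
    return min(lm for _, lm in states[n])

p = [1, 4, 6, 2, 5]          # processing times (made up)
d = [2, 6, 10, 11, 14]       # due dates
print(best_lmax(list(range(5)), p, d, cap=3))
```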

Abstract:

Expert systems are designed to solve complex problems by reasoning with and about specialized knowledge like an expert. The design of concrete is a complex task that requires expert skills and knowledge. Even when given the proportions of the ingredients used, predicting the exact behavior of concrete is not a trivial task, even for experts, because other factors that are hard to control or foresee also exert some influence over the final properties of the material. This paper presents some of our attempts to build a new expert system that can design different types of concrete (hydraulic, bacterial, cellular, lightweight, high-strength, architectural, etc.) for different environments. The system also optimizes the use of additives and cement, which are the most expensive raw materials used in the manufacture of concrete

Abstract:

The arrival of modern brain imaging technologies has provided new opportunities for examining the biological essence of human intelligence as well as the relationship between brain size and cognition. Thanks to these advances, we can now state that the relationship between brain size and intelligence has never been well understood. This view is supported by findings showing that cognition is correlated more with brain tissues than sheer brain size. The complexity of cellular and molecular organization of neural connections actually determines the computational capacity of the brain. In this review article, we determine that while genotypes are responsible for defining the theoretical limits of intelligence, what is primarily responsible for determining whether those limits are reached or exceeded is experience (environmental influence). Therefore, we contend that the gene-environment interplay defines the intelligent quotient of an individual

Abstract:

The ability to create, use and transfer knowledge may allow the creation or improvement of new products or services. But knowledge is often tacit: It lives in the minds of individuals, and therefore, it is difficult to transfer it to another person by means of the written word or verbal expression. This paper addresses this important problem by introducing a methodology, consisting of a four-step process that facilitates tacit to explicit knowledge conversion. The methodology utilizes conceptual modeling, thus enabling understanding and reasoning through visual knowledge representation. This implies the possibility of understanding concepts and ideas, visualized through conceptual models, without using linguistic or algebraic means. The proposed methodology is conducted in a metamodel-based tool environment whose aim is efficient application and ease of use

Abstract:

Purpose- The purpose of this paper is to devise a crowdsourcing methodology for acquiring and exploiting knowledge to profile unscheduled transport networks for the design of efficient routes for public transport trips. Design/methodology/approach- This paper analyzes daily travel itineraries within Mexico City provided by 610 public transport users. In addition, a statistical analysis of quality-of-service parameters of the public transport systems of Mexico City was also conducted. From the statistical analysis, a knowledge base was consolidated to characterize the unscheduled public transport network of Mexico City. Then, by using a heuristic search algorithm for finding routes, public transport users are provided with efficient routes for their trips. Findings- The findings of the paper are as follows. A crowdsourcing methodology can be used to characterize complex and unscheduled transport networks. In addition, the knowledge of the crowds can be used to devise efficient routes for trips (using public transport) within a city. Moreover, the design of routes for trips can be automated by SmartPaths, a mobile application for public transport navigation. Research limitations/implications- The data collected from the public transport users of Mexico City may vary through the year. Originality/value- The significance and novelty is that the present work is the earliest effort in making use of a crowdsourcing approach for profiling unscheduled public transport networks to design efficient routes for public transport trips
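
The route-design step referred to above, a heuristic search over a network profiled from user itineraries, can be approximated by a shortest-path search over a weighted graph whose edge costs stand in for crowd-reported travel times. The miniature graph below is fabricated, and plain Dijkstra stands in for whatever search the SmartPaths application actually uses.

```python
import heapq

# fabricated stop-to-stop travel times (minutes), as if averaged from user reports
graph = {
    "Tacubaya":       [("Balderas", 12), ("Mixcoac", 9)],
    "Mixcoac":        [("Zapata", 7)],
    "Balderas":       [("Pino Suarez", 6), ("Salto del Agua", 3)],
    "Zapata":         [("Pino Suarez", 14)],
    "Salto del Agua": [("Pino Suarez", 4)],
    "Pino Suarez":    [],
}

def shortest_route(src, dst):
    """Dijkstra's algorithm; returns (total minutes, list of stops)."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_route("Tacubaya", "Pino Suarez"))
```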

Abstract:

KAMET II Concept Modeling Language (CML) is a consistent visual language with high usability and flexibility, devised to acquire and organize knowledge from different sources in a very intuitive way. Similar recent works that provide visual tools for supporting knowledge acquisition (KA) processes, like Cmaptools and ICONKAT, are closed environments that cannot be easily translated to more popular frameworks like Protégé. On the other hand, languages for the Semantic Web used for KA, like the Extensible Markup Language (XML), are designed for machine interpretation without considering the user's interaction. KAMET II CML, on the contrary, cares about the input facilities for constructing knowledge models without disregarding their complexity, and it is compatible with commercial methodologies. We describe and demonstrate the advantages of KAMET II CML by proving its consistency and formality using Concept Algebra, a mathematical structure for the formal treatment of concepts and their algebraic relations, operations and associative rules. We perform a direct transformation of KAMET II CML diagnosis models to Concept Network (CN) diagrams making use of Concept Algebra. As a result, KAMET II CML models are compatible with regular ontology representations and can be shared and used by other systems without adding complexity

Abstract:

The demand for Knowledge Management in organizations that are out-performing their peers by above-average growth in intellectual capital and wealth creation has led to a growing community of IT people who have adopted the idea of building Corporate or Organizational Memory Information Systems (OMIS). Such a system acknowledges the dynamics of organizational environments, wherein the traditional design of information systems does not cope adequately with these organizational aspects. The successful development of such a system requires a careful analysis of the essentials for providing a cost-effective solution that will be accepted by the employees/users and can be evolved in the future. This paper proposes a nine-layered framework for improving an OMIS implementation plan in order to support the effort to capture, share and preserve the Organizational Memory (OM). The purpose of this framework is to gain a better understanding of how certain factors are critical for the successful application of OMIS, and to address how to design suitable OMIS to turn the scattered, diverse knowledge of their people into well-documented knowledge assets ready for deposit and reuse to benefit the whole organization

Abstract:

Knowledge acquisition (KA) is considered today a cognitive process that involves both dynamic modeling and knowledge generation activities. We understand that KA should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper presents some of our attempts to develop a knowledge acquisition methodology that mainly builds a bridge between two important fields: knowledge acquisition and knowledge management. KAMET II (Cairó and Guardati, 2012), the evolution of KAMET, represents a modern approach to creating diagnosis-specialized knowledge models and knowledge-based systems (KBS) that are more efficient

Abstract:

The knowledge acquisition (KA) process is not "mining from the expert’s head" and writing rules for building knowledge-based systems (KBS), as it was 20 years ago when KA was often confused with knowledge elicitation activity, and modern engineering tools did not exist. The KA process has definitely changed. Today knowledge acquisition is considered a cognitive process that involves both dynamic modeling and knowledge generation activities. KA should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper presents some of our attempts to build a new knowledge acquisition methodology that brings together and includes all of these ideas. KAMET II, the evolution of KAMET (Cairó, 1998), represents a modern approach to creating diagnosis-specialized knowledge models that can be run by Protégé 2000, the open source ontology editor and knowledge-based framework

Abstract:

Work on electronic negotiation has motivated the development of systems with strategies specifically designed to establish protocols for buying and selling goods on the Web. On the one hand, there are systems where agents interact with users through dialogues and animations, helping them to find products while learning from their preferences to plan future transactions. On the other hand, there are systems that employ knowledge-bases to determine the context of the interactions and to define the boundaries inherently established by the e-Commerce. This paper introduces the idea of developing an agent with both capabilities: negotiation and interaction in an e-Commerce application via virtual reality (with a view to apply it in the Latin-American market, where both the technological gap and an inappropriate approach to motivate electronic transactions are important factors). We address these issues by presenting a negotiation strategy that allows the interaction between an intelligent agent and a human consumer with Latin-American idiosyncrasy and by including a graphical agent to assist the user on a virtual basis. We think this may reduce the impact of the gap created by this new technology

Abstract:

The human brain is undoubtedly the most impressive, complex, and intricate organ that has evolved over time. It is also probably the least understood, and for that reason, the one that is currently attracting the most attention. In fact, the number of comparative analyses that focus on the evolution of brain size in Homo sapiens and other species has increased dramatically in recent years. In neuroscience, no other issue has generated so much interest and been the topic of so many heated debates as the difference in brain size between socially defined population groups, both its connotations and implications. For over a century, external measures of cognition have been related to intelligence. However, it is still unclear whether these measures actually correspond to cognitive abilities. In summary, this paper must be reviewed with this premise in mind

Abstract:

The knowledge acquisition (KA) process has evolved during the last years. Today KA is considered a cognitive process that involves both dynamic modeling and knowledge generation activities. It should be seen as a spiral of epistemological and ontological content that grows upward by transforming tacit knowledge into explicit knowledge, which in turn becomes the basis for a new spiral of knowledge generation. This paper shows some of our attempts to build a new knowledge acquisition methodology that collects and includes all of these ideas. KAMET II, the evolution of KAMET [1], represents a modern approach for building diagnosis-specialized knowledge models that can be run by Protégé

Abstract:

A number of research efforts have been devoted to deploying agent technology applications in the field of Agent-Mediated Electronic Commerce. On the one hand, there are applications that simplify electronic transactions, such as intelligent search engines and browsers, learning agents, recommender agent systems and agents that share knowledge. Thanks to the development and availability of agent software, e-commerce can use more than only telecommunications and online data processing. On the other hand, there are applications that include negotiation as part of their core activities, such as the information systems field with negotiation support systems, the multi-agent systems field with searching, trading and negotiation agents, and the market design field with electronic auctions. Although negotiation is an important business activity, it has not been studied extensively in either the traditional business or the e-commerce context. This paper introduces the idea of developing an agent with negotiation capabilities applied to the Latin American market, where both the technological gap and an inappropriate approach to motivating electronic transactions are important factors. We address these issues by presenting a negotiation strategy that allows the interaction between an intelligent agent and consumers with a Latin American idiosyncrasy

Abstract:

Nowadays, Information Technologies (IT) are part of practically every aspect of life and business. The widespread use of IT has led to both positive and negative consequences in the areas where they are applied. One of these areas is the implementation of Knowledge Management Systems (KMS). The application of certain technologies within a KMS pursues the efficient and effective realization of specific functions within the system. However, it is quite common to face deficient or erroneous knowledge creation due to an incorrect utilization of these technologies. An incorrect application can range from a lack of definition of the role of a technology to a perception of IT as being itself the solution to knowledge creation problems. The incorrect application of technological solutions to KM issues can lead to deficient creation of knowledge, an inability to update knowledge because of the rapid evolution of technologies, and even to erroneous decision-making. To address the described issue, this paper states the importance of understanding the influence of a KMS in an organization and developing awareness of the role of IT both within such an organization and in society. Next, a description is given of the different functions in KMS and the adequate technologies to correctly implement them, taking into account the knowledge structure and capabilities of the enterprise. Finally, the significance of a KMS and a social ecosystem to support the implementation of IT is acknowledged, both to make the best out of an investment in technology and to achieve a consistent knowledge-creating structure

Abstract:

The role of football in society has changed substantially over the past twenty years, becoming to an even larger extent the daily topic of millions of people around the world. Nowadays, attention is drawn both to the needs of fans and, obviously, to the business. The game has evolved greatly in terms of physical performance, while few changes are observed in its tactical aspect. The main goal of professional soccer clubs is always to possess the best players, meanwhile forgetting about the team, except of course for some rare exceptions. This paper demonstrates that the practice of football, as well as other sports, should not ignore the basic requirements necessary to build a team, and it also presents ways to address the value of knowledge management and its benefits for all stakeholders. Some of the key points discussed in this paper are the theory of team building, the different roles of group members, the transfer of know-how from generation to generation, and how all these combined can make a difference

Abstract:

Traditional organizations' lack of Knowledge Leadership has become a major issue. Our proposal is a new type of team manager: the Knowledge Leader, who performs several functions of the Knowledge Champions and extends the notion of CKO into the teamwork context. This paper intends to clarify the relevance and necessity of a Knowledge Leader in every workgroup in organizations. Knowledge Leaders keep Knowledge Management efforts aligned to business strategies, making organizations competent. Without its leaders, a company will turn into a hollow shell

Abstract:

A few years ago, many companies were convinced that they could reach positions of competitive advantage through investments in information technologies (ITs). This was certainly true for a while; however, ITs by themselves are not able to provide organizations with such a position anymore. One of the reasons is the tendency of information technologies to become commodities; hence any competitor with enough purchasing power could replicate the technological deployment of the leader, thereby destroying the position of advantage. This work explores a new source of competitive advantage called knowledge management, and its relationship with ITs. This paper also shows that companies should not regard ITs and KM as competitors but as coordinated efforts when trying to reach a position of competitive advantage

Abstract:

The overwhelming world situation has profoundly disturbed researchers, scientists and citizens, making them wonder about the vital element that guarantees quality of life. While authorities and politicians struggle to provide for their citizens, idealists dream about the idea, cliché as it may be, of a more prosperous world for everyone. Information technology and development alone have proved to be necessary, but indeed insufficient. Quite surprisingly, it has transpired that knowledge is the most precious asset every entity has. The importance of knowledge has been confirmed given that it is actually capable of providing an enduring solution to cities' main problems. Following this trend, in the 90's the link between knowledge and cities was born, along with the concept of the Knowledge City (KC). Around this idea, since 2004 Monterrey's government has implemented the program International Knowledge City, a project based upon actions specifically designed to add value to the city through knowledge creation and innovation. Its objective was, and still is, to introduce Monterrey into the age of knowledge whilst transforming its landscapes and people. Hence the objective of this paper: to determine whether or not Monterrey is becoming a KC. To this end, the paper follows Ergazaki's methodology explained in A unified methodological approach for the development of knowledge cities, which is applied to Monterrey's knowledge infrastructure, and also incorporates functional concepts derived from Knowledge Management. Furthermore, to complete the analysis, a comparison with the internationally admired KC of Singapore is included. The results indicate that Monterrey is a developing KC, improving by the minute

Abstract:

This paper presents an approach to automatic translation in two steps. The first one is recognition (syntactic) and the last one is interpretation (semantic). In the recognition phase, we use volatile grammars. They represent an innovation to logic-based grammars. For the interpretation phase, we use componential analysis and the laws of integrity. Componential analysis defines the sense of the lexical parts. The rules of integrity, on the other hand, are in charge of refining, improving and optimizing translation. We applied this framework for general analysis of romance languages and for the automatic translation of texts from Spanish into other neo-Latin languages and vice versa

Abstract:

This paper presents a series of conventions and tools for the general analysis of romance languages and for the automatic translation of texts from Spanish into other neo-Latin languages and vice versa. The work implies two well-defined phases: recognition (syntactic) and interpretation (semantic). In the recognition phase, we use volatile grammars. They represent an innovation with respect to logic-based grammars. For the interpretation phase, we use componential analysis and the laws of integrity. Componential analysis defines the sense of the lexical parts that constitute the sentence through a collection of semantic features. The rules of integrity, on the other hand, have the goal of refining, improving and optimizing translation

Abstract:

Different methodologies have been developed to solve various tasks such as classification, design, planning, scheduling and diagnosis. Diagnosis is a task whose desired output is a malfunction of a system. KAMET (Knowledge-Acquisition Methodology) is a knowledge engineering methodology aimed exclusively at diagnosis tasks. In this article KAMET II, the second version of KAMET, is presented with the objective of describing its most important characteristics as well as its modeling notation, which will subsequently be necessary for the knowledge bases, Problem-Solving Methods (PSMs) and the knowledge model specification. KAMET is a model-based methodology designed to manage knowledge acquisition from multiple knowledge sources. The methodology provides a mechanism by means of which knowledge acquisition is achieved in an incremental fashion and in a cooperative environment. One important feature is the specification used to describe knowledge-based systems independently of their implementation. A four-component architecture is presented to achieve this goal and to allow component separation and consequently component reuse

Abstract:

Problem-solving methods are ready-made software components that can be assembled with domain knowledge bases to create application systems. In this paper, we describe this relationship and how it can be used in a principled manner to construct knowledge systems. We have developed ontologies for two purposes: first, to describe knowledge bases and problem-solving methods as independent components that can be reused in different application systems; and second, to mediate knowledge between the two kinds of components when they are assembled in a specific system. We present our methodology and a set of associated tools that have been created to support developers in building knowledge systems and that have been used to conduct problem-solving method reuse

Abstract:

Information systems with an intelligent or knowledge component are now prevalent and include knowledge-based systems, intelligent agents, and knowledge management systems. These systems are capable of explaining their reasoning or justifying their behavior. Empirical studies, mainly with knowledge-based systems, are reviewed and linked to a theoretical and practical base. The present paper has two main objectives: a) to present a negotiation strategy that allows the interaction between an intelligent agent and a human consumer. This kind of negotiation is adapted to the Latin American market and idiosyncrasy, where an appropriate tool to perform automated negotiations over the Web does not exist; b) to include animations in order to show an agent that represents an actual person. This incorporation aims to reduce the impact of, and the gap created by, the new technology. The agent presented can find an optimal path to achieve its goal using its mental states and libraries designed for the business roles

Abstract:

The Knowledge-Acquisition (KA) community's needs demand more effective ways to elicit knowledge in different environments. Methodologies like CommonKADS [8], MIKE [1] and VITAL [6] are able to produce knowledge models using their respective Conceptual Modeling Languages (CML). However, sharing and reuse are nowadays a must-have in knowledge engineering (KE) methodologies and domain-specific KA tools, in order to permit Knowledge-Based System (KBS) developers to work faster with better results, and to give them the chance to produce and utilize reusable Open Knowledge Base Connectivity (OKBC)-constrained models. This paper presents the KAMET II Methodology, the diagnosis-specialized version of KAMET [2,3], as an alternative for creating knowledge-intensive systems while attacking KE-specific risks. We describe here one of the most important characteristics of KAMET II, which is the use of Protégé 2000 for implementing its CML models through ontologies

Abstract:

Knowledge engineering (KE) is not “mining from the expert’s head” as it was in the first generation of knowledge-based systems (KBS). Modern KE is considered more a modeling activity. Models are useful due to the incomplete access that the knowledge engineer has to the expert’s knowledge, and because not all the knowledge is necessary to reach the majority of a project’s goals. KAMET II, the evolution of KAMET (Knowledge-Acquisition Methodology), is a modern approach for building diagnosis-specialized knowledge models. This new diagnosis methodology pursues the objective of being a complete, robust methodology that leads knowledge engineers and project managers (PM) to build powerful KBS by giving them the appropriate modeling tools and by reducing KE-specific risks. Not only does KAMET II encompass the conceptual modeling tools, but it also presents the adaptation to the implementation tool Protégé 2000 [6] for visual modeling and knowledge-base editing. However, only the methodological part is presented in this paper

Abstract:

In recent years technology has experienced an exponential growth, creating bridges between research areas that were independent from each other a few years ago. This paper focuses on an application combining three research areas: virtual environments, intelligent agents and museum web pages. The application consists of a virtual visit to a museum guided by an intelligent agent. The reactive agent implemented responds in real time to the user's requests, such as precise information about the artwork, the authors' biographies and other interesting facts such as the museum's history and regional knowledge of the country where the museum lies. Our agent is capable of providing different layers of data, distinguishing between adults, children and young people as well as between local and foreign users. The agent has some autonomy during the visit and permits the user to make his own choices. The virtual environment allows a semi-interactive visit through the museum's architecture. The user follows a pre-defined museum tour, but is able to perform interactive actions such as zooms, free views of the museum's structure, free views of the artwork, and may advance, go back, or terminate the visit at any time

Abstract:

Software project mistakes represent a loss of millions of dollars to thousands of companies all around the world. The software projects that somehow ran off course share a common problem: risks became unmanageable. There are a certain number of conjectures we can draw from the high failure rate: bad management procedures, an inept manager in charge, managers not assessing risks, poor or inadequate methodologies, etc. Some of them might apply to some cases, or all, or none; it is almost impossible to think in absolute terms when a software project is an ad hoc solution to a given problem. Nevertheless, there is an ongoing effort in the knowledge engineering (KE) community to isolate risk factors and provide remedies for runaway projects; unfortunately, we are not there yet. This work aims to express some general conclusions for risk assessment of software projects, particularly, but not limited to, those involving knowledge acquisition (KA)

Abstract:

At the beginning of the 1980s, the Artificial Intelligence (AI) community showed little interest in research on methodologies for the construction of knowledge-based systems (KBS) and for knowledge acquisition (KA). The main idea was the rapid construction of prototypes with LISP machines, expert system shells, and so on. Over time, the community saw the need for a structured development of KBS projects, and KA was recognized as the critical stage and the bottleneck for the construction of KBS. Concerning KA, many publications have appeared since then. However, very few have focused on formal plans to manage knowledge acquisition from multiple knowledge sources. This paper addresses this important problem. KAMET is a formal plan based on models designed to manage knowledge acquisition from multiple knowledge sources. The objective of KAMET is to improve the knowledge acquisition phase and the knowledge modeling process, making them more efficient

Abstract:

Throughout time, Expert Systems (ES) have been applied in different areas of knowledge. In telecommunications, the application of ESs has not been as outstanding as in other areas; however, some important applications have been seen lately, especially those focused on failure provisioning, monitoring and diagnosis [6]. In this article, we introduce DIFEVS, an ES for the remote diagnosis and correction of failures in satellite communications ground stations. DIFEVS also makes it possible to considerably reduce the time elapsed between the initial moment of a failure and the moment it is solved

Abstract:

Knowledge Acquisition (KA) in the 90s has been recognized as a critical stage in the construction of Knowledge Based Systems (KBS) and as the bottleneck for their development. Nowadays a lot of material is published on KA, most of which is focused on the difficulties encountered in the process of knowledge elicitation, on the tools for KA, and on the verification and validation of Knowledge Bases (KB). A very limited number of these publications, however, have emphasised the need for formal plans to deal with knowledge elicitation from single and multiple experts. In this paper we propose a methodology based on models to deal with knowledge elicitation from multiple experts. The method provides a solid mechanism to accomplish KA in an incremental manner, by stages, and in a cooperative environment

Abstract:

During the past decades, many methods have been developed for the creation of Knowledge-Based Systems (KBS). For these methods, probabilistic networks have shown to be an important tool to work with probability-measured uncertainty. However, the quality of probabilistic networks depends on correct knowledge acquisition and modeling. KAMET is a model-based methodology designed to manage knowledge acquisition from multiple knowledge sources [1] that leads to a graphical model representing causal relations. Up to now, all inference methods developed for these models are rule-based, and therefore eliminate most of the probabilistic information. We present a way to combine the benefits of Bayesian networks and KAMET, and to reduce their problems. To achieve this, we show a transformation that generates directed acyclic graphs, the basic structure of Bayesian networks [2], and conditional probability tables from KAMET models. Thus, inference methods for probabilistic networks may be used on KAMET models
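
The sketch below is one plausible reading of the transformation described above: causal links of a KAMET-style model become edges of a directed acyclic graph, and a conditional probability table for each effect is filled in from per-cause certainty weights via a noisy-OR combination. Both the noisy-OR choice and the toy model are assumptions of this illustration, not details taken from the paper.

```python
from itertools import product

# hypothetical KAMET-style causal links: (cause, effect, certainty weight)
links = [("fever", "flu", 0.6), ("cough", "flu", 0.5), ("flu", "fatigue", 0.8)]

# DAG structure: parents of each effect node
parents = {}
for cause, effect, _ in links:
    parents.setdefault(effect, []).append(cause)
weight = {(c, e): w for c, e, w in links}

def noisy_or_cpt(effect):
    """P(effect=1 | parent assignment), assuming each active cause independently
    triggers the effect with its certainty weight (noisy-OR combination)."""
    ps = parents[effect]
    cpt = {}
    for assign in product([0, 1], repeat=len(ps)):
        p_not = 1.0
        for parent, on in zip(ps, assign):
            if on:
                p_not *= 1.0 - weight[(parent, effect)]
        cpt[assign] = 1.0 - p_not
    return ps, cpt

for effect in parents:
    ps, cpt = noisy_or_cpt(effect)
    print(effect, "| parents:", ps)
    for assign, prob in sorted(cpt.items()):
        print("  ", dict(zip(ps, assign)), "->", round(prob, 3))
```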

Abstract:

Aluminum matrix composites (AMCs) reinforced with aluminum diboride (AlB2) particles are obtained through a casting process. A mixture design experiment combined with split-split plot experiment helped to assess the significance of the effects of cold work on precipitation hardening prior to aging. Both cold work and aging allowed higher microhardness of the composite matrix, which is further increased by higher levels of boron and copper. Microstructure analysis showed a good distribution of reinforcements and revealed a grain subdivision pattern due to cold work. Tensile tests helped corroborate the microhardness measurements. Fracture surface analysis showed a predominantly mixed brittle–ductile mode

Abstract:

This paper introduces a database of electoral precinct-level election returns for Mexican municipal elections between 1994 and 2019. This database includes: (i) electoral precinct-level votes for each electoral coalition, the coalitions of the incumbent mayor and incumbent state governor, and the four most popular political parties; (ii) electoral precinct-level valid and total votes, the number of registered voters, and turnout; (iii) the partisan composition and municipal-level votes of the incumbent and runner-up electoral coalitions from the previous election; and (iv) the partisan composition of the state-level incumbent governor. This paper outlines the organization of this data, its sources, and key variables, and describes the processes used to standardize the data. This database has the potential to support the cross-sectional and longitudinal study of local Mexican elections over two decades using fine-grained precinct-level electoral returns that enable panel and regression discontinuity analyses
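
One analysis this kind of database is meant to enable is regression discontinuity around close elections. The snippet below shows the general shape of such a preparation step, aggregating precinct returns to the municipal level and computing the incumbent's winning margin; every column name is hypothetical, since the database's actual variable names are not reproduced here.

```python
import pandas as pd

# hypothetical precinct-level rows; the real database's column names may differ
df = pd.DataFrame({
    "municipality":   ["A", "A", "A", "B", "B"],
    "precinct":       [1, 2, 3, 1, 2],
    "incumbent_vote": [120, 90, 150, 80, 60],
    "runnerup_vote":  [100, 95, 140, 90, 85],
    "valid_vote":     [240, 200, 310, 180, 160],
})

muni = df.groupby("municipality")[["incumbent_vote", "runnerup_vote", "valid_vote"]].sum()
# RD running variable: incumbent margin as a share of the valid vote
muni["margin"] = (muni["incumbent_vote"] - muni["runnerup_vote"]) / muni["valid_vote"]

bandwidth = 0.05                       # keep races decided by under 5 points
close = muni[muni["margin"].abs() < bandwidth]
print(muni)
print("close races:\n", close)
```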

Abstract:

In this paper, we report the results obtained from comparing how people make sense of health information when receiving it via two different media: an application for a mobile device and a printed pamphlet. The study was motivated by the 2009 outbreak of the A(H1N1) influenza and the need to educate the general public using all possible media to disseminate symptoms and treatments in a timely manner. In this study, we investigate the influence of the medium on the sensemaking process when processing health information that has to be comprehended fast and thoroughly, as it can be life-saving. We propose recommendations based on the obtained results

Abstract:

In March 2021, the Mexican Constitution was amended to transition to a system of precedents. This amendment establishes that the "reasons" of Supreme Court rulings will be binding on lower courts. However, the reform is embedded in a deep-rooted practice of jurisprudential theses, i.e., abstract statements identified by the Court itself when deciding a case. Moreover, there is no consensus on what these reasons are and why they should be binding. The objective of this article is to identify the possible conceptions of reasons in order to reveal the Court's different roles in the creation of judicial law. Notions of the common law ratio decidendi are used as introspective tools to identify four models of law-making in Mexican practice, namely: judicial legislation, implicit rules, political-moral justifications, and social categories. Although the first conception appears to be dominant, the alternatives broaden the range for understanding how the Court creates law depending on the interpretative context in which it operates

Abstract:

In March 2021, the Mexican Constitution was amended to transition to a system of precedents. This amendment mandates that the "reasons" of Supreme Court rulings will be binding on the lower courts. However, the reform is rooted in a long-standing practice of case law doctrine, i.e., abstract statements the Court itself makes when deciding a case. Moreover, there is no consensus as to what these reasons are and why they should be binding. The objective of this article is to identify the possible notions of reasons to explore the Court's different roles in shaping judicial law. Concepts of the common law ratio decidendi are used as insights to identify four models of law-making in Mexican practice, namely: judicial legislation, implicit rules, moral-political justifications and social categories. Although the first model seems to prevail, the others offer a broad understanding of how the Court creates law depending on the interpretative context in which it operates

Abstract:

This article seeks to show that the dominant conception of autonomous constitutional agencies (OCAs) is mistaken. These agencies are neither completely nor equally independent from the other branches of government. On the contrary, they are characterized by a variety of designs that grant them different levels of autonomy. Understanding this variety of designs is the first step toward identifying the causes, real or perceived, of the flaws in the current system of institutional balance, as well as the consequences that a well-designed regime could generate. Until now, the discussion has generally been limited to whether or not this type of agency should exist. Building on this work, debates could address new questions about how these agencies should be designed and how much autonomy they should be granted

Abstract:

This article argues that the mainstream conception of autonomous constitutional agencies is mistaken. These agencies are neither completely nor equally independent from the other branches of government. On the contrary, there is a variety of institutional designs that grant them different levels of autonomy. Acknowledging this variety of designs is the first step toward understanding the causes, real or perceived, of the flaws in the current system of institutional balance, as well as the consequences that a well-designed regime could generate. Until today, most of the debate has been limited to the convenience or undesirability of having agencies of this kind. This article may be the starting point for discussions on how decision-makers should design these agencies and how much autonomy they should grant them

Abstract:

Coherentists fail to distinguish between the individual revision of a conviction and the intersubjective revision of a rule. This paper fills this gap. A conviction is a norm that, according to an individual, ought to be ascribed to a provision. By contrast, a rule is a judicially ascribed norm that controls a case and is protected by the formal principles of competence, certainty, and equality. A revision of a rule is the invalidation or modification of such a judicially ascribed norm, provided that the judge meets the burden of argumentation of formal principles. Thus, judges can revise their convictions without changing the law

Abstract:

The 'new NAFTA' agreement between Canada, Mexico, and the United States maintained the system for binational panel judicial review of antidumping and countervailing duty determinations of domestic government agencies. In US-Mexico disputes, this hybrid system brings together Spanish and English-speaking lawyers from the civil and the common law to solve legal disputes applying domestic law. These panels raise issues regarding potential bicultural, bilingual, and bijural (mis)understandings in legal reasoning. Do differences in language, legal traditions, and legal cultures limit the effectiveness of inter-systemic dispute resolution? We analyze all of the decisions of NAFTA panels in US-Mexico disputes regarding Mexican antidumping and countervailing duty determinations and the profiles of the corresponding panelists. This case study tests whether one can actually comprehend the 'other'. To what extent can a common law, English-speaking lawyer understand and apply Mexican law, expressed in Spanish and rooted in a distinct legal culture?

Abstract:

From the perspective of comparative law, this paper analyzes and defends the constitutional review of judicial precedent. It argues that this type of review is reasonable, in exceptional cases, if precedents are understood as prima facie rules protected by the formal principles of competence, equality, certainty, and hierarchy, rather than as strict rules grounded solely in the principle of hierarchy, as the Mexican Supreme Court has understood them

Abstract:

From the perspective of comparative law, the paper argues in favor of constitutional review of precedents. It claims that this type of review is reasonable in exceptional circumstances if precedents are understood as prima facie rules safeguarded by the formal principles of competence, equality, certainty and hierarchy, not as strict rules grounded only in the principle of hierarchy, as the Mexican Supreme Court has understood them

Abstract:

In the midst of a democratic crisis in Mexico, independent candidacies emerged as a way to broaden political participation. However, to obtain registration for the 2018 elections, the Instituto Nacional Electoral (INE) implemented an app without considering differences of class, ethnicity/race, language, time, geography, and digital skills. These exclusions became visible in the legal challenges surrounding the candidacy of Marichuy, a Nahua indigenous woman who sought registration as an independent candidate for the presidency. In this chapter we analyze this vertical and mono-cultural use of information and communication technologies (ICT), justified by the discourse of "modernity", as an expression of techno-colonialism. We thus contribute to understanding the formation of a digital sub-citizenship manifested in legal, participatory, and political subordination and stigmatization, in which technocratic values are prioritized over the expansion of democratic channels

Abstract:

This article discusses the defeasibility of constitutional precedent from an analytical and a normative perspective. Analytically, it argues that defeasibility is a contingent property of precedents that manifests itself when new ascribed norms reduce or eliminate the field of application of the original precedent. Normatively, it proposes that defeasibility is a collision between the formal principles of equality and legal certainty and principles of substantive justice. A norm is defeated when constitutionally relevant circumstances or sources, absent in the precedent but present in the later case, justify issuing a new norm that functions as an exception to or invalidation of the earlier norm. More practically, four argumentative techniques are proposed that make the defeasibility of precedents manifest: distinguishing cases, circumscription, non-application, and disapplication of the precedent. The article thus seeks to contribute to the theoretical and practical debate that has arisen since the Supreme Court sitting en banc decided C.T. 299/2013

Abstract:

This article discusses the defeasibility of constitutional precedent from an analytical and a normative perspective. Analytically, it is argued that defeasibility is a contingent property of precedents that manifests itself when newly ascribed norms reduce or eliminate the scope of the original precedent. Normatively, it is proposed that defeasibility is a collision between the formal principles of equality and legal certainty and principles of substantive justice. A rule is defeated when circumstances or constitutionally relevant sources, absent in the precedent but present in the later case, justify issuing a new rule that functions as an exception to or an override of the previous rule. In more practical terms, four argumentative techniques are proposed that make the defeasibility of precedents manifest. These techniques are the distinction of cases, circumscription, non-application and disapplication of the precedent. Thus, the article seeks to contribute to the theoretical and practical debate that has emerged since the plenary of the Supreme Court resolved C.T. 299/2013

Abstract:

This article analyses the migration of the common law doctrine of precedent to civil law constitutionalism. Using the case study of Mexico and Colombia, it suggests how this doctrine should be tailored to the civil law context. Historically, the civil law tradition adhered to the doctrine of jurisprudence constante that grants relative persuasiveness to precedents, once they are reiterated. However, the trend is to consider single constitutional precedents as binding. Universalist judges are borrowing common law concepts to interpret precedents joining the global trend while particularists consider such migration a foreign imposition that distorts the civil law theory of sources. This article takes a dialogical approach and occupies a middle ground between universalist and particularist approaches. The doctrine of precedent should be adopted, but it must also be reconfigured considering three distinctive features of the civil law: (a) canonical rationes decidendi; (b) precedent overproduction; and (c) a fragmented judiciary

Abstract:

The objective of this article is to report the results of testing a didactic proposal for the teaching of dynamic optimization, in particular the calculus of variations. The proposal was designed on the basis of APOS theory and was tested at a higher education institution. The results obtained from the analysis of students' answers to a questionnaire and an interview show that students display process conceptions, and occasionally object conceptions, of the abstract concepts of this discipline as a result of its application, although some difficulties were detected that proved hard for these students to overcome

Abstract:

The purpose of this paper is to present the results of a research study on a didactic proposal to teach dynamic optimization, in particular the calculus of variations. The proposal design was based on APOS theory and was tested at a Mexican private university. Results obtained from the analysis of students' responses to a questionnaire and an interview show that students construct process conceptions, and in some cases object conceptions, of this discipline's abstract concepts. Some difficulties, however, were hard for these students to overcome

Abstract:

A state of knowledge is presented on the advances produced in the field of mathematics education research with respect to the teaching and learning of the concept of vector space. Two guiding questions were used to organize the review: what obstacles to learning the concept of vector space have been identified? and what didactic suggestions have been made to foster meaningful learning of the concept of vector space? In addition to providing answers to these questions, the analysis of the results obtained offers a synthesis of current knowledge and a perspective on possible areas for future research related to the teaching and learning of the concept of vector space

Abstract:

A state-of-the-art review is presented of the advances that have taken place in the field of mathematics education research in connection with the teaching and learning of the concept of vector space. Two guiding questions were used to organize the review: what learning obstacles related to the concept of vector space have been identified? and what teaching proposals have been made to promote meaningful learning of the concept of vector space? In addition to providing answers to these questions, the analysis of the results obtained offers a synthesis of current knowledge and a perspective on possible areas for future research related to the teaching and learning of the concept of vector space

Abstract:

This paper proposes a research framework for studying the connections, realized and potential, between unstructured data (UD) and cybersecurity and internal controls. In the framework, cybersecurity and internal control goals determine the tasks to be conducted. The task influences the types of UD to be accessed and the types of analysis to be done, which in turn influence the outcomes that can be achieved. Patterns in UD are relevant for cybersecurity and internal control, but UD poses unique challenges for its analysis and management. This paper discusses some of these challenges, including veracity, structuralizing, bias, and explainability

Abstract:

This paper proposes a cybersecurity control framework for blockchain ecosystems, drawing from risks identified in the practitioner and academic literature. The framework identifies thirteen risks for blockchain implementations, ten common to other information systems and three risks specific to blockchains: centralization of computing power, transaction malleability, and flawed or malicious smart contracts. It also proposes controls to mitigate the risks identified; some were identified in the literature and some are new. Controls that apply to all types of information systems are adapted to the different components of the blockchain ecosystem

Abstract:

Context. Mathematical optimization can be used as a computational tool to obtain the optimal solution to a given problem in a systematic and efficient way. For example, in twice-differentiable functions and problems with no constraints, the optimization consists of finding the points where the gradient of the objective function is zero and using the Hessian matrix to classify the type of each point. Sometimes, however, it is impossible to compute these derivatives, and other types of techniques must be employed, such as the steepest descent/ascent method and more sophisticated methods such as those based on evolutionary algorithms. Aims. We present a simple algorithm based on the idea of genetic algorithms (GA) for optimization. We refer to this algorithm as AGA (asexual genetic algorithm) and apply it to two kinds of problems: the maximization of a function where classical methods fail and model fitting in astronomy. For the latter case, we minimize the chi-square function to estimate the parameters in two examples: the orbits of exoplanets from a set of radial velocity data, and the spectral energy distribution (SED) observed towards a YSO (Young Stellar Object). Methods. The algorithm AGA may also be called genetic, although it differs from standard genetic algorithms in two main aspects: a) the initial population is not encoded; and b) the new generations are constructed by asexual reproduction. Results. Applying our algorithm to optimizing some complicated functions, we find the global maxima within a few iterations. For model fitting to the orbits of exoplanets and the SED of a YSO, we estimate the parameters and their associated errors
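
A minimal version of the asexual scheme described above, individuals kept as plain parameter tuples with no encoding, and new generations produced by perturbing the elite rather than by crossover, could look as follows. The test function, population sizes, and shrinking mutation radius are choices of this sketch, not parameters reported in the paper.

```python
import random
from math import exp, sin, sqrt

def f(x, y):
    """Multimodal test function with many local maxima."""
    r = sqrt(x * x + y * y)
    return exp(-0.1 * r) * (1 + 0.5 * sin(5 * x) * sin(5 * y))

LO, HI, POP, ELITE, GENS = -10.0, 10.0, 60, 6, 80
pop = [(random.uniform(LO, HI), random.uniform(LO, HI)) for _ in range(POP)]
radius = (HI - LO) / 2                # mutation range, shrunk each generation

for _ in range(GENS):
    pop.sort(key=lambda q: f(*q), reverse=True)
    elite = pop[:ELITE]               # individuals are plain (x, y) pairs, no encoding
    pop = list(elite)
    while len(pop) < POP:             # asexual reproduction: perturb a surviving parent
        x, y = random.choice(elite)
        pop.append((min(HI, max(LO, x + random.uniform(-radius, radius))),
                    min(HI, max(LO, y + random.uniform(-radius, radius)))))
    radius *= 0.93                    # progressively narrow the search

best = max(pop, key=lambda q: f(*q))
print("best point:", best, "value:", f(*best))
```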

Abstract:

This paper deals with research that aims to explore the strategies that favor the understanding of the representation of parametric curves in the plane. We report the results of a three-year-long teaching experience with college students, where we explore and explain the difficulties students face and the strategies they use when dealing with problems that involve parameterization

Abstract:

This article proposes a more nuanced method to assess the accuracy of preelection polls in competitive multiparty elections. Relying on data from the 2006 and 2012 presidential campaigns in Mexico, we illustrate some shortcomings of commonly used statistics to assess survey bias when applied to multiparty elections. We propose the use of a Kalman filter-based method that uses all available information throughout an electoral campaign to determine the systematic error in the estimates produced for each candidate by all polling firms. We show that clearly distinguishing between sampling and systematic biases is a requirement for a robust evaluation of polling firm performance, and that house effects need not be unidirectional within a firm's estimates or across firms
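
The core of such a Kalman-filter approach can be shown in miniature: model latent candidate support as a random walk, filter the poll series, and read a firm's systematic (house) effect off the average of its innovations against the predicted path. This toy uses fabricated polls for a single candidate and crude variance settings, so it conveys only the structure of the method, not the paper's model.

```python
# scalar Kalman filter: latent support follows a random walk, polls observe it noisily
polls = [  # (day, firm, reported share) -- fabricated data
    (1, "F1", 0.34), (2, "F2", 0.30), (4, "F1", 0.35), (5, "F3", 0.32),
    (7, "F2", 0.29), (8, "F1", 0.36), (10, "F3", 0.33), (12, "F2", 0.30),
]
Q, R = 1e-4, 4e-4                     # process / measurement variances (assumed)

est, var, last_day = 0.32, 1.0, 0     # diffuse-ish prior on the initial support
innovations = {}
for day, firm, y in polls:
    var += Q * (day - last_day)       # predict: support drifts between poll days
    resid = y - est                   # innovation: poll vs. predicted support
    innovations.setdefault(firm, []).append(resid)
    k = var / (var + R)               # Kalman gain
    est += k * resid                  # update the state with the new poll
    var *= 1 - k
    last_day = day

# a firm's systematic (house) effect ~ the mean of its innovations
for firm, res in sorted(innovations.items()):
    print(firm, "house effect ~", round(sum(res) / len(res), 4))
```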

Abstract:

The study of legislative behavior in the Chamber of Deputies during the period from 1998 to 2006 presents a potentially serious problem: not all votes have been published in the Gaceta Parlamentaria. We analyze the nature of these data in order to explore the representativeness of the sample of available votes and the possible repercussions of this problem for existing legislative analyses. To do so, we take advantage of the appearance of a web page that records all votes of the Chamber of Deputies since 2006, in parallel with the system in place since 1998. By exploring the mechanisms that generate the omission of votes and comparing different estimates of legislative behavior, we conclude that the unpublished votes reduce the precision of commonly used estimators but do not introduce any kind of bias. We also release a database for the study of the Mexican Congress

Abstract:

This paper examines the nature of the data available for studying legislative behavior in Mexico. In particular, we evaluate a potentially serious problem: only a subset of roll-call votes have been released for the critical transition period of 1998-2006. We test whether this subset is a representative sample of all votes, and thus suitable for study, or whether it is biased in a way that misleads scholarship. Our research strategy takes advantage of a partial overlap between two roll call vote reporting sources by the Chamber of Deputies: the site with partial vote disclosure, created in 1998 and still in place today; and the site with universal vote disclosure since 2006 only. An examination of the data generation and publication mechanisms, comparing different estimations of legislative behavior, reveals that omitted votes reduce the precision of estimates but do not introduce bias. Scholarship of the lower chamber can therefore proceed with data that we make public with the publication of the paper

Abstract:

We propose a generalized 3D shape descriptor for the efficient classification of 3D archaeological artifacts. Our descriptor is based on a multi-view approach of curvature features, consisting of the following steps: pose normalization of 3D models, local curvature descriptor calculation, construction of the 3D shape descriptor using the multi-view approach and curvature maps, and dimensionality reduction by random projections. We generate two descriptors from two different paradigms: 1) handcrafted, wherein the descriptor is manually designed for object feature extraction and directly passed on to the classifier; and 2) machine-learnt, in which the descriptor automatically learns the object features through a pretrained deep neural network model (VGG-16) for transfer learning before being passed on to the classifier. These descriptors are applied to two different archaeological datasets: 1) a non-public Mexican dataset, represented by a collection of 963 3D archaeological objects from the Templo Mayor Museum in Mexico City, which includes anthropomorphic sculptures, figurines, masks, ceramic vessels, and musical instruments; and 2) the 3D pottery content-based retrieval benchmark dataset, consisting of 411 objects. Once the multi-view descriptors are obtained, we evaluate their effectiveness by using the following object classification schemes: K-nearest neighbor, support vector machine, and structured support vector machine. Our object descriptor classification results are compared against five popular 3D descriptors in the literature, namely: rotation invariant spherical harmonic, histogram of spherical orientations, signature of histograms of orientations, symmetry descriptor, and reflective symmetry descriptor
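
Two pieces of such a pipeline, dimensionality reduction by random projections and a nearest-neighbor classifier, fit in a few lines of NumPy. The synthetic "descriptors" below stand in for the multi-view curvature features; the dimensions and class structure are fabricated.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic high-dimensional descriptors for 3 artifact classes (fabricated)
n_per, D, K = 30, 1024, 64
centers = rng.normal(size=(3, D))
X = np.vstack([c + 0.5 * rng.normal(size=(n_per, D)) for c in centers])
y = np.repeat([0, 1, 2], n_per)

# random projection: a scaled Gaussian matrix nearly preserves pairwise distances
P = rng.normal(size=(D, K)) / np.sqrt(K)
Z = X @ P

# leave-one-out 1-nearest-neighbor classification in the projected space
dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
np.fill_diagonal(dists, np.inf)       # a point may not vote for itself
pred = y[np.argmin(dists, axis=1)]
print("1-NN accuracy after projection:", (pred == y).mean())
```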

Abstract:

Delta-hedged option returns consistently decrease in the volatility of volatility changes (volatility uncertainty), for both implied and realized volatilities. We provide a thorough investigation of the underlying mechanisms, including the model-risk and gambling-preference channels. Uncertainty in both volatilities amplifies model risk, leading to a higher option premium charged by dealers. The volatility of volatility-increases, rather than that of volatility-decreases, contributes to the effect of implied volatility uncertainty, supporting the gambling-preference channel. We further strengthen this channel by examining the effects of option end-users' net demand and lottery-like features, and by decomposing implied volatility changes into systematic and idiosyncratic components

Abstract:

In this paper, we explore the interplay of virus contact rate, virus production rates, and initial viral load during early HIV infection. First, we consider an early HIV infection model formulated as a bivariate branching process and provide conditions for its criticality (R0 > 1). Using dimensionless rates, we show that the criticality condition R0 > 1 defines a threshold on the target cell infection rate in terms of the infected cell removal rate and virus production rate. This result has motivated us to introduce two additional models of early HIV infection under the assumption that the virus contact rate is proportional to the target cell infection probability (denoted by V/(V+θ)). Using the second model, we show that the length of the eclipse phase of a newly infected host depends on the target cell infection probability, and the corresponding deterministic equations exhibit bistability. Indeed, occurrence of viral invasion in the deterministic dynamics depends on R0 and the initial viral load V0. If the viral load is small enough, e.g., V0 ≪ θ, then there will be extinction regardless of the value of R0. On the other hand, if the viral load is large enough, e.g., V0 ≫ θ and R0 > 1, then there will be infection. Of note, V0 ≈ θ corresponds to a threshold regime above which virus can invade. Finally, we briefly discuss between-cell competition of viral strains using a third model. Our findings may help explain the HIV population bottlenecks during within-host progression and host-to-host transmission
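
As a toy illustration of this threshold behavior (a minimal Euler integration of an assumed target-cell-limited model with the saturating contact term V/(V+θ); the equations and parameter values are illustrative, not the paper's model):

```python
# Toy target-cell-limited dynamics whose contact term is scaled by
# V / (V + theta): small initial loads die out, large ones invade.
# All equations and parameters here are illustrative assumptions.
beta, delta, p, c, theta = 2e-7, 1.0, 100.0, 5.0, 1e3
T0, dt, steps = 1e6, 0.01, 30000

def simulate(V0):
    T, I, V = T0, 0.0, V0
    for _ in range(steps):
        infect = beta * T * V * (V / (V + theta))  # saturating contact
        T += dt * (-infect)                        # target cells
        I += dt * (infect - delta * I)             # infected cells
        V += dt * (p * I - c * V)                  # free virus
    return T

for V0 in (1e1, 1e5):   # below vs. above the threshold regime
    print("V0 = %.0e -> target cells remaining = %.2e" % (V0, simulate(V0)))
# Small V0 leaves T near 1e6 (extinction); large V0 depletes T (invasion).
```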

Abstract:

In this paper, we consider a fractionally integrated multi-level dynamic factor model (FI-ML-DFM) to represent commonalities in the hourly evolution of realized volatilities of several international exchange rates. The FI-ML-DFM assumes common global factors active during the 24 hours of the day, accompanied by intermittent factors, which are active at mutually exclusive times. We propose determining the number of global factors using a distance among the intermittent loadings. We show that although the bulk of the common dynamics of exchange rate realized volatilities can be attributed to global factors, there are non-negligible effects of intermittent factors. The effect of COVID-19 on realized volatility comovements is stronger on the first global-in-time factor, which shows a permanent increase in level. The effects on the second global factor and on the intermittent factors active when the EU, UK and US markets are operating are transitory, lasting for approximately a year after the start of the pandemic. Finally, there seems to be no effect of the pandemic on either the third global factor or the intermittent factor active when the markets in Asia are operating
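
Schematically, such a multi-level structure can be written as follows (a generic sketch with illustrative notation; the paper's exact specification and block structure are not reproduced). The realized volatility of rate $i$, belonging to block $b(i)$ of trading hours, loads on global and intermittent factors:

```latex
\begin{aligned}
RV_{i,t} &= \lambda_i' F_t + \gamma_i' G_t^{(b(i))} + \varepsilon_{i,t},\\
(1-L)^{d_F} F_t &= u_t, \qquad (1-L)^{d_b}\, G_t^{(b)} = v_t^{(b)},
\end{aligned}
```

where $F_t$ are global factors loaded at all hours, $G_t^{(b)}$ are intermittent factors active only during the hours of block $b$, and $(1-L)^d$ is the fractional-differencing operator that captures long memory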

Abstract:

The MoProSoft Integral Tool, or HIM for its name in Spanish, is a Web-based system to support monitoring of MoProSoft, a software process model defined as part of a strategy to encourage the software industry in Mexico. The HIM-assistant is a system added to the HIM whose main objectives are to provide a guide for the automated use of MoProSoft and to improve the aid provided to HIM users. To reach these objectives, elements from software engineering, along with two areas of artificial intelligence, multiagent systems and case-based reasoning, were applied to develop the HIM-assistant. The task involved the HIM-assistant analysis and design phases using the MESSAGE methodology, as well as the development of the system and the performance of tests. The major importance of the work lies in the integration of different areas to fulfill the objectives, using existing progress instead of developing a totally new solution

Abstract:

The Ministry of Social Development in Mexico is in charge of creating and assigning social programmes targeting specific needs in the population to improve quality of life. To better target the social programmes, the Ministry aims to find clusters of households with the same needs based on demographic characteristics as well as poverty conditions of the household. Available data consist of continuous, ordinal, and nominal variables, all of which come from a non-i.i.d. complex-design survey sample. We propose a Bayesian nonparametric mixture model that jointly models a set of latent variables, as in an underlying variable response approach, associated with the observed mixed-scale data, and accommodates the different sampling probabilities. The performance of the model is assessed via simulated data. A full analysis of socio-economic conditions of households in the Mexican State of Mexico is presented
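
In generic notation, the nonparametric mixture core of such a model is a Dirichlet process mixture (a standard sketch; the paper's full specification additionally handles the mixed scales through latent variables and the survey design through sampling weights):

```latex
y_i \mid \theta_i \sim F(\cdot \mid \theta_i), \qquad
\theta_i \mid G \overset{iid}{\sim} G, \qquad
G \sim DP(\alpha, G_0),
```

so that clusters of households arise from ties among the $\theta_i$ induced by the almost-sure discreteness of $G$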

Abstract:

The A-RIO AQM mechanism was recently introduced as a viable component for implementing the AF Per-Hop Behavior in DiffServ architectures. A-RIO has been thoroughly studied and compared against other AQM algorithms in terms of fairness, performance, and setting complexity. In this paper, we extend these studies by analyzing how A-RIO behaves in the face of some well-known factors that affect TCP performance: RTT delay, packet size, and the presence of unresponsive flows. Our study is based on extensive ns-2 simulations in settings covering both under- and over-provisioned networks

Abstract:

Active queue management (AQM) mechanisms manage queue lengths by dropping packets when congestion is building up; end-systems can then react to such losses by reducing their packet rate, hence avoiding severe congestion. They are also very useful for the differentiated forwarding of packets in the DiffServ architecture. Many studies have shown that setting the parameters of an AQM algorithm may prove difficult and error-prone, and that the performance of AQM mechanisms is very sensitive to network conditions. The Adaptive RIO mechanism (A-RIO) [16] addresses both issues. It requires a single parameter, the desired queuing delay, and adjusts its internal dynamics accordingly. A-RIO has been thoroughly evaluated in terms of delay response and network utilization [16], but no study has been conducted in order to evaluate its behaviour in terms of fairness. By way of ns-2 simulations, this paper examines A-RIO's ability to fairly share the network's resources (bandwidth) between the flows contending for those resources. Using Jain's fairness index as our performance metric, we compare the bandwidth distribution among flows obtained with A-RIO and with RIO
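
Jain's fairness index, the metric used here, is standard and easy to state (the per-flow throughput vectors below are illustrative):

```python
# Jain's fairness index: ranges from 1/n (one flow gets everything)
# up to 1.0 (perfectly equal allocation across the n flows).
def jain_index(allocs):
    n = len(allocs)
    total = sum(allocs)
    return total * total / (n * sum(a * a for a in allocs))

# illustrative per-flow throughputs (Mbps) under two AQM settings
print(jain_index([9.8, 10.1, 10.0, 9.9]))   # ~1.0 -> fair sharing
print(jain_index([30.0, 5.0, 3.0, 2.0]))    # well below 1 -> unfair
```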

Abstract:

Following Davie's example of a Banach space failing the approximation property (1973), we show how to construct a Banach space E which is asymptotically Hilbertian and fails the approximation property. Moreover, the space E is shown to be a subspace of a space with an unconditional basis which is "almost" a weak Hilbert space and which can be written as the direct sum of two subspaces all of whose subspaces have the approximation property

Abstract:

The paper studies probability forecasts of inflation and GDP by monetary authorities. Such forecasts can contribute to central bank transparency and reputation building. Principal-agent problems confound the usual argument for using scoring rules to motivate probability forecasts; their use to evaluate forecasts, however, remains valid. Public comparison of forecasting results with those of a "shadow" committee helps promote reputation building and thus serves the motivational role. The Brier score and its Yates-partition of the Bank of England's forecasts are compared with those of a group of non-bank experts

Abstract:

Studies of strategic sophistication in experimental normal form games commonly assume that subjects' beliefs are consistent with independent choice. This paper examines whether beliefs are consistent with correlated choice. Players play a sequence of 2x2 normal form games with distinct opponents and no feedback. Another set of players, called predictors, reports a likelihood ranking over possible outcomes. A substantial proportion of the reported rankings are consistent with the predictors believing that the choices of actions in the 2x2 game are correlated. Predictions seem to be correlated around focal outcomes, and the extent of correlation over action profiles varies systematically between games (i.e., prisoner's dilemma, stag hunt, coordination, and strictly competitive)

Abstract:

This study reports a laboratory experiment wherein subjects play a hawk-dove game. We try to implement a correlated equilibrium with payoffs outside the convex hull of Nash equilibrium payoffs by privately recommending play. We find that subjects are reluctant to follow certain recommendations. We are able to implement this correlated equilibrium, however, when subjects play against robots that always follow recommendations, including in a control treatment in which human subjects receive the robot "earnings." This indicates that the lack of mutual knowledge of conjectures, rather than social preferences, explains subjects' failure to play the suggested correlated equilibrium when facing other human players

Abstract:

This paper presents a model in which a durable goods monopolist sells a product to two buyers. Each buyer is privately informed about his own valuation. Thus all players are imperfectly informed about market demand. We study the monopolist's pricing behavior as players' uncertainty regarding demand vanishes in the limit. In the limit, players are perfectly informed about the downward-sloping demand. We show that in all games belonging to a fixed and open neighborhood of the limit game there exists a generically unique equilibrium outcome that exhibits Coasian dynamics and in which play lasts for at most two periods. A laboratory experiment shows that, consistent with our theory, outcomes in the Certain and Uncertain Demand treatments are the same. Median opening prices in both treatments are roughly at the level predicted and considerably below the monopoly price. Consistent with Coasian dynamics, these prices are lower for higher discount factors. Demand withholding, however, leads to more trading periods than predicted

Resumen:

Se propone un nuevo paradigma educativo para México, a partir de una visión amplia, crítica e innovadora que corresponda a la realidad, necesidades y circunstancias del país

Abstract:

In this article, we propose a new educational paradigm for Mexico based on an encompassing, critical, and innovative vision conforming with the country’s realities, needs, and circumstances

Resumen:

Un recuerdo del gran escritor y personaje, don Ramón del Valle Inclán: su personalidad estrafalaria y consistente; su pertenencia –y amistad– a la Generación del 98, y la creación estética del esperpento, que se refleja en sus obras, particularmente en las más famosas, Luces de Bohemia y Tirano Banderas

Abstract:

This article is dedicated to the memory of the great writer and protagonist, Don Ramón del Valle Inclán. We pay tribute to his outlandish yet consistent personality, his association and friendship with the Generation of ’98, and the creation of the esperpento, present in his works, notably in the most famous ones, Luces de Bohemia and Tirano Banderas

Abstract:

Universality is a desirable feature in any system. For decades, elusive measurements of three-phase flows have yielded countless permeability models to describe them. However, the equations governing the solution of water and gas co-injection have a robust structure. This universal structure holds for Riemann problems in green oil reservoirs. Previously, we established a large class of three-phase flow models including convex Corey permeability, Stone I, and Brooks-Corey models. These models share the property that characteristic speeds become equal at a state somewhere in the interior of the saturation triangle. Here we construct a three-phase flow model with unequal characteristic speeds in the interior of the saturation triangle, equality occurring only at a point of the boundary of the saturation triangle. Yet the solution for this model still displays the same universal structure, which favors the two possible embedded two-phase flows of water-oil or gas-oil. We focus on showing that this structure holds under the minimum conditions that a permeability model must meet. This finding is a guide to seeking a purely three-phase flow solution maximizing oil recovery

Abstract:

In 1977 Korchinski presented a new type of shock discontinuity in conservation laws. These singular solutions were coined δ-shocks, since a time-dependent Dirac delta is involved. A naive description is that such a δ-shock is of the overcompressive type: a single shock wave belonging to both families, the four characteristic lines of which impinge on the shock itself. In this work, we open the fan of solutions by studying two-family waves without intermediate constant states but possessing central rarefactions or comprising δ-shocks

Abstract:

This manuscript is dedicated to the memory of Antonmaria Minzoni; we discuss one of his contributions to the world of mathematics. Curiously, it is not to be found in any scientific journal or in his own books. Among his contributions to science, <> gave me these results to include in my undergraduate thesis, thereby turning it into what he considered a thesis sensu stricto. At the end of the last century, an article captured Minzoni's attention with a numerical result for a problem in general relativity. This result has an asymptotic approximation, which we describe here. To that end, we first reconstruct part of the history and the equations that make up Einstein's theory of general relativity. We will see how the collapse of matter is possible in a special configuration with the potential to generate a black hole

Abstract:

We discuss the solution for commonly used models of the flow resulting from the injection of any proportion of three immiscible fluids such as water, oil, and gas in a reservoir initially containing oil and residual water. The solutions supported in the universal structure generically belong to two classes, characterized by the location of the injection state in the saturation triangle. Each class of solutions occurs for injection states in one of the two regions, separated by a curve of states for most of which the interstitial speeds of water and gas are equal. This is a separatrix curve because on one side water appears at breakthrough, while gas appears for injection states on the other side. In other words, the behavior near breakthrough is flow of oil and of the dominant phase, either water or gas; the non-dominant phase is left behind. Our arguments are rigorous for the class of Corey models with convex relative permeability functions. They also hold for Stone’s interpolation I model [5]. This description of the universal structure of solutions for the injection problems is valid for any values of phase viscosities. The inevitable presence of an umbilic point (or of an elliptic region for the Stone model) seems to be the cause of this universal solution structure. This universal structure was perceived recently in the particular case of quadratic Corey relative permeability models and with the injected state consisting of a mixture of water and gas but no oil [5]. However, the results of the present paper are more general in two ways. First, they are valid for a set of permeability functions that is stable under perturbations, the set of convex permeabilities. Second, they are valid for the injection of any proportion of three rather than only two phases that were the scope of [5]

Abstract:

Flow of three fluids in porous media is governed by a system of two conservation laws. Shock solutions are described by curves in state space, which is the saturation triangle of the fluids. We study a certain bifurcation locus of these curves, which is relevant for certain injection problems. Such a structure arises, for instance, when water and gas are injected into a mature reservoir either to dislodge oil or to sequester CO2. The proof takes advantage of a certain wave curve to ensure that the waves in the flow are a rarefaction preceded by a shock, which is in turn preceded by a constant two-phase state (i.e., it lies at the boundary of the saturation triangle). For convex permeability models of Corey type, the analysis reveals further details, such as the number of possible two-phase states that correspond to the above-mentioned shock, whatever the left state of the latter is within the saturation triangle
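
In generic form, the system of conservation laws underlying these papers can be sketched as follows (standard fractional flow notation under common assumptions such as incompressibility and negligible gravity; details vary across the models discussed):

```latex
\begin{aligned}
&\partial_t S_w + \partial_x f_w(S_w, S_g) = 0, \qquad
 \partial_t S_g + \partial_x f_g(S_w, S_g) = 0,\\
&f_\alpha = \frac{k_{r\alpha}/\mu_\alpha}{k_{rw}/\mu_w + k_{ro}/\mu_o + k_{rg}/\mu_g},
 \qquad S_w + S_o + S_g = 1,
\end{aligned}
```

where $S_\alpha$, $k_{r\alpha}$, and $\mu_\alpha$ are the saturation, relative permeability, and viscosity of phase $\alpha \in \{w,o,g\}$; Corey-type models take, e.g., $k_{r\alpha} = S_\alpha^2$ in the quadratic case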

Abstract:

In number theory, the study of prime numbers is of central importance. Hilbert is known to have believed that this theory would always remain the purest part of mathematics; the turning point came with cryptography and, in turn, the search for ever larger prime numbers. Throughout history, from 1952 to the present day, 32 of the 33 largest recorded prime numbers have been the so-called Mersenne primes; only from 1989 to 1992 did the number 391581·2^216193−1 break this rule. Since 1996, all the results have come from the collective GIMPS project. This work proposes a simple proof of the theorem through which we know the Mersenne primes, without sophisticated number-theoretic tools. The proof is accessible and so simple that it allows us to go a little further and generalize the Mersenne primes by exhibiting a large part of the family that was hidden

Abstract:

Is ignition or extinction the fate of an exothermic chemical reaction occurring in a bounded region within a heat conductive solid consisting of a porous medium? In the spherical case, the reactor is modeled by a system of reaction-diffusion equations that reduces to a linear heat equation in a shell, coupled at the internal boundary to a nonlinear ODE modeling the reaction region. This ODE can be regarded as a boundary condition. This model allows the complete analysis of the time evolution of the system: there is always a global attractor. We show that, depending on physical parameters, the attractor contains one or three equilibria. The latter case has special physical interest: the two equilibria represent attractors ("extinction" or "ignition") and the third equilibrium is a saddle. The whole system is well approximated by a single ODE, a "reduced" model, justifying the "heat transfer coefficient" approach of chemical engineering

Abstract:

This paper provides evidence on the difficulty of expanding access to credit through large institutions. We use detailed observational data and a large-scale countrywide experiment to examine a large bank's experience with a credit card that accounted for approximately 15% of all first-time formal sector borrowing in Mexico in 2010. Borrowers have limited credit histories and high exit risk: a third of all study cards are defaulted on or canceled during the 26-month sample period. We use a large-scale randomized experiment on a representative sample of the bank's marginal borrowers to test whether contract terms affect default. We find that large experimental changes in interest rates and minimum payments do little to mitigate default risk. We also use detailed data on purchases and payments to construct a measure of bank revenue per card and find it is generally low and difficult to predict (using machine learning methods), perhaps explaining the bank's eventual discontinuation of the product. Finally, we show that borrowers generating a favorable credit history are much more likely to switch banks, providing suggestive evidence of a lending externality. Taken together, these facts highlight the difficulty of increasing financial access using large formal sector financial organizations

Abstract:

This paper analyzes the existence and extent of downward nominal wage rigidities in the Mexican labor market using data from the administrative records of the Mexican Social Security Institute (IMSS). This establishment-level, panel dataset allows us to track workers employed with the same firm, observe their wage profiles and calculate the nominal-wage changes they experience over time. Based on the estimated density functions of nominal wage changes, we are able to calculate some standard tests of nominal wage rigidity that have been proposed in the literature. Furthermore, we extend these tests to take into account the presence of minimum wage laws that may affect the distribution of nominal wage changes. The densities and tests calculated using these data are similar to those obtained using administrative data from other countries, and constitute a significant improvement over the measures of nominal wage rigidities obtained from household survey data. We document the importance of minimum wages in the Mexican labor market, as evidenced by the large fraction of minimum wage earners and the indexation of wage changes to the minimum wage increases. We find considerably more nominal wage rigidity than previous estimates obtained for Mexico using data from the National Urban Employment Survey (ENEU) suggest, but lower than that reported for developed countries by other studies that use comparable data

Abstract:

Autonomous agents (AAs) are capable of evaluating their environment from an emotional perspective by implementing computational models of emotions (CMEs) in their architecture. A major challenge for CMEs is to integrate the cognitive information projected from the components included in the AA's architecture. In this chapter, a scheme for modulating emotional stimuli using appraisal dimensions is proposed. In particular, the proposed scheme models the influence of cognition on appraisal dimensions by modifying the limits of fuzzy membership functions associated with each dimension. The computational scheme is designed to facilitate, through input and output interfaces, the development of CMEs capable of interacting with cognitive components implemented in a given cognitive architecture of AAs. A proof of concept based on real-world data is carried out, providing empirical evidence that the proposed mechanism can properly modulate the emotional process

Abstract:

In this paper we present a mechanism to model the influence of agents’ internal and external factors on the emotional evaluation of stimuli in computational models of emotions. We propose the modification of configurable appraisal dimensions (such as desirability and pleasure) based on influencing factors. As part of the presented mechanism, we introduce influencing models to define the relationship between a given influencing factor and a given set of configurable appraisal dimensions utilized in the emotional evaluation phase. Influencing models translate factors’ influences (on the emotional evaluation) into fuzzy logic adjustments (e.g., a shift in the limits of fuzzy membership functions), which allow biasing the emotional evaluation of stimuli. We implemented a proof-of-concept computational model of emotions based on real-world data about individuals’ emotions. The obtained empirical evidence indicates that the proposed mechanism can properly affect the emotional evaluation of stimuli while preserving the overall behavior of the model of emotions
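
A minimal sketch of the fuzzy-limit adjustment idea (illustrative membership limits, shift rule, and values; not the implemented computational model): an influencing factor shifts the limits of a triangular membership function for a configurable appraisal dimension such as desirability, thereby biasing the emotional evaluation of the same stimulus:

```python
# Sketch: a triangular fuzzy membership function for an appraisal
# dimension (e.g., "high desirability") whose limits are shifted by an
# influencing factor; the limits and the shift rule are illustrative.
def triangular(x, a, b, c):
    """Membership degree of x for a triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shifted_limits(a, b, c, influence, gain=0.2):
    """Shift all three limits by gain * influence (influence in [-1, 1])."""
    d = gain * influence
    return a + d, b + d, c + d

stimulus = 0.55
base = (0.4, 0.7, 1.0)                      # neutral-agent limits
for influence in (-1.0, 0.0, 1.0):          # e.g., a personality factor
    a, b, c = shifted_limits(*base, influence)
    print(influence, round(triangular(stimulus, a, b, c), 3))
# The same stimulus receives different membership degrees, i.e., a
# different emotional evaluation, depending on the influencing factor.
```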

Abstract:

In this paper we introduce the concept of configurable appraisal dimensions for computational models of emotions of affective agents. Configurable appraisal dimensions are adjusted based on internal and/or external factors of influence on the emotional evaluation of stimuli. We developed influencing models to define the extent to which influencing factors should adjust configurable appraisal dimensions. Influencing models define a relationship between a given influencing factor and a given set of configurable appraisal dimensions. Influencing models translate the influence exerted by internal and external factors on the emotional evaluation into fuzzy logic adjustments, e.g., a shift in the limits of fuzzy membership functions. We designed and implemented a computational model of emotions based on real-world data about emotions to evaluate our proposal. Our empirical evidence suggests that the proposed mechanism properly influences the emotional evaluation of stimuli of affective agents

Abstract:

In this paper, we present a computational model of emotions based on the context of an Integrative Framework designed to model the interaction of cognition and emotion. In particular, we devise mechanisms for assigning an emotional value to events perceived by autonomous agents using a set of appraisal variables. Defined as fuzzy sets, these appraisal variables model the influence of cognition on emotion assessment. We do this by changing the limits of the fuzzy membership functions associated with each appraisal variable. In doing so, we aim to provide agents with a degree of emotional intelligence. We also defined a case study involving three agents, two with different personalities (as a cognitive component) and one without a personality, to explore their reactions to the same stimulus, obtaining as a result a different emotion for each agent. We noticed that emotions are biased by the interaction of cognitive and affective information, suggesting the elicitation of more precise emotions

Abstract:

This paper presents a model to optimize the inventory balancing of a bike-sharing system. It is applied with real Ecobici data from a specific number of stations in the Polanco area of Mexico City to perform a static rebalancing (during a defined period of the day). A satisfaction function is defined that takes into account the probability of finding available bicycles and free docks at each station. A weighted function of satisfaction and the total route time required to carry out loading and unloading with a single vehicle is optimized by solving a mixed-integer linear program. The results suggest that it is possible to optimize inventory rebalancing in a bike-sharing system with minimal resources

Resumen:

A cinco siglos del encuentro entre dos mundos, que dio inicio a un periodo histórico en el cual se gestaron los cimientos del México actual, se analizan varios aspectos del proceso de creación y consolidación de Nueva España, su papel dentro de la monarquía hispánica y la Iglesia católica, así como sus posibles consecuencias en el desarrollo de la cultura mexicana

Abstract:

Five centuries after the encounter between two worlds, which began a historical period in which the foundations of today’s Mexico were developed, several aspects of the process of creation and consolidation of New Spain, its role within the Hispanic monarchy and the Catholic church are analyzed, as well as its possible consequences in the flourishing of Mexican culture

Abstract:

The purpose of this study is to analyze how the value generated from labor (intellectual and manual) and from natural resources (which also contribute value) is generated (gross domestic product), allocated (national income), distributed (disposable income), used (expenditure and saving), and accumulated (wealth); that is, the purpose is to study inequality in the distribution of the value generated in the economy from the standpoint of the objective theory of value, rather than measuring subjective inequality of well-being (happiness) through consumption (utility). While the issue of capabilities and freedoms (equality of opportunity) is considered important, the broader framework of the need to fulfill human rights is taken as the reference. For this reason, greater importance is given to the urgent measures that must be taken ex ante, since they would hold the solution to the problems of poverty and inequality in the countries of Latin America and the Caribbean. The benefits generated by society must be distributed fairly, and all its members must be granted the full enjoyment of human rights, in order to build a more just world

Resumen:

La mayoría de los investigadores sobre la desigualdad en México han concluido que, aunque la inequidad en los ingresos es muy alta, su tendencia es a la baja. Han llegado a esta conclusión porque han utilizado las cifras oficiales de ingresos de las encuestas de hogares, sin hacer ninguna corrección. Por el contrario, en este estudio se propone un ajuste a los datos de ingresos provenientes de las encuestas de ingresos y gastos de los hogares, basado en las cuentas nacionales. Las cifras ajustadas muestran que la desigualdad es alta y creciente, debido a las políticas públicas implementadas desde mediados de la década de 1980. Por lo tanto, debemos atrevernos a pensar de manera diferente y cambiar el curso económico del país

Abstract:

Most inequality researchers in Mexico have concluded that although income inequality is very high, there is a downward trend in this inequality. They have come to this conclusion because they have used official income figures from household surveys without making any correction. In contrast, this study proposes an adjustment to income data from household income and expenditure surveys, based on national accounts. Adjusted figures show that inequality is high and increasing, due to public policies implemented since the mid-1980s. Therefore, we must dare to think differently and change the economic course of the country

Abstract:

In 2014, the country's total wealth amounted to 76.7 trillion pesos. 37% of it was in the hands of households; the government managed 23%, private companies 19%, public companies 9%, the rest of the world owned 7%, and financial institutions 5%. On average, if wealth were distributed equally, each household would have 900,000 pesos in physical assets (houses, land, automobiles, and various household goods) and financial assets (money and financial investments), an amount that would be more than enough for people to live comfortably: close to 400,000 pesos per adult, on average. Unfortunately, the distribution is very unequal. Two thirds of the wealth is in the hands of the richest 10% of the country, and the very rich 1% hold more than a third. As a result, the Gini coefficient of wealth is 0.79. The distribution of financial assets is even more unequal: 80% is owned by the richest 10%. In 2015 there were only 211,000 brokerage contracts held by Mexicans, with a total investment of 16 trillion pesos, 22% of the national wealth. 11% of the contracts have an investment of more than 500 million pesos and together account for 79.5% of total investment. That is, 23,000 people (if we assume one contract per person) hold 80% of the investment in the Mexican Stock Exchange. This is why Mexico appears on the Forbes list, as well as in the reports prepared by the financial institutions in charge of managing wealth funds, which see the country as a market to serve. In the last eleven years, between 2003 and 2014, the country's wealth grew at an average annual rate of 7.9% in real terms, so Mexico doubled its wealth between 2004 and 2014. In contrast, gross domestic product grew by a meager 2.6% per year on average over the same period. [...]

Abstract:

This study proposes a methodology to adjust the data from household income and expenditure surveys in Mexico with information from the national accounts, in order to have reliable information for the study of income and wealth inequality, especially among the highest-income sectors. Using a method that allocates income appropriately without affecting poverty measures, it is estimated that inequality in Mexico is significantly higher than has been estimated so far. The share of total current income concentrated in the richest 10% of Mexican families rises from 35% to 62%, so the Gini coefficient increases from 0.45 to 0.68. Likewise, the richest 1% of families concentrate 22.8% of total income, and their average income is 625,000 pesos per month. Using the Pareto function, this paper also concludes that the income share of the richest 1% of households rises to 34.2% and their average income to 973,000 pesos per month, while the richest 0.1% of families (slightly more than 31,000) account for 19% of income, with average earnings of 5,000,000 pesos per month. The study also notes that if only the allocation of primary income is considered, thus excluding transfers, the richest 10% concentrate 66%, raising the Gini coefficient to 0.73

Resumen:

Se analizan las principales ideas del libro El capital en el siglo xxi de Thomas Piketty: las fuerzas que inciden en la desigualdad de la riqueza y el ingreso; la primera ley del capitalismo y el aumento en la proporción del capital en la economía; la segunda ley del capitalismo y el aumento en la proporción de riqueza respecto al ingreso nacional; la desigualdad en los ingresos del trabajo y del capital; el cambio en la composición de la riqueza y las recomendaciones del autor para salvar la globalización y el sistema de mercado de los problemas sociales y políticos que la inequidad ha producido. Se confrontan estas ideas con México. Se concluye que el libro de Piketty puede ayudar a diseñar un mejor futuro para México, siempre y cuando nos atrevamos a pensar diferente

Abstract:

In this article, we will analyze the main ideas from Thomas Piketty’s book, Capital in the Twenty-First Century. They consist of the following: the elements causing inequality in wealth and income, the first law of capitalism and the increase of the proportion of capital in the economy, the second law of capitalism and the increase in the proportion of wealth with respect to national income, the inequalities in the division of income of labor and capital, and the changing forms of wealth. Moreover, the author gives recommendations to save both globalization and the market economy from the social and political problems brought on by inequality. All these ideas are contrasted with the Mexican economy and it is concluded that Piketty’s contributions are helpful to devise a better future for our country, as long as we dare to think differently

Resumen:

El problema del hambre en México es aún más grave de lo que se piensa. Quien no sufre subnutrición, está mal nutrido; la población de México está famélica u obesa; tan sólo el 14% tiene una nutrición adecuada. Una de las causas más importantes es el cambio en el entorno alimenticio, producto de la apertura comercial de México con Estados Unidos. Como parte del tlcan han llegado a México una gran cantidad de productos alimenticios procesados que han provocado una “epidemia” de diabetes en el país, que ocupa el primer lugar entre las causas de defunciones. La “Cruzada contra el Hambre” es insuficiente: se requiere implantar políticas públicas más contundentes, como un impuesto a la importación de productos alimenticios procesados, si en verdad se desea enfrentar el reto

Abstract:

Hunger in Mexico is an even more serious problem than commonly thought. Those who are not undernourished are malnourished. The Mexican population is either starving or obese. Only 14% of the population has good nutritional habits. The main cause is the change in the food supply due to the opening of commercial borders between Mexico and the United States. As part of NAFTA, a great variety of processed food products has arrived, causing a diabetes epidemic which has become the leading cause of death in our country. Thus, the "Crusade against Hunger" is insufficient, and more forceful public policies are called for, namely, a tax on imported processed food

Resumen:

Se analiza la tesis del proceso de individualización de Ulrich Beck, sociólogo alemán contemporáneo, y se le compara con la realidad empírica de México. Para el autor, la modernidad reflexiva es un reflejo de la política y la tecnología; se trata de una revolución de las consecuencias a partir de tres ámbitos: ingreso, empleo y familia. A diferencia de Beck, se propone una distinción entre individualización, producto del Estado de bienestar, y la que surge de su desmantelamiento

Abstract:

In this article, we analyze the individualization thesis of Ulrich Beck, a contemporary German sociologist, against Mexico's empirical reality. He proposes that reflexive modernization is a reflection of politics and technology, a revolution of consequences in three spheres: income, employment, and family. In contrast to Beck, we propose a distinction between the individualization that is a product of the Welfare State and that which results from its dismantling

Abstract:

Bike Sharing Systems (BSS) are an integral part of the multimodal transport systems in an urban area. Such systems have many advantages, including low cost, environmental friendliness, and flexibility. Nevertheless, the lack of bicycles and the lack of spaces to drop them off discourage people from using the BSS. Thus, we propose a Saturation Index (SI) to identify the number of bicycles at a station over a period of time (seconds, minutes, hours); based on the SI, the supply and demand levels are known for every station. With those levels, the number of bicycles to be moved by truck among the stations is computed by solving the transshipment model. Finally, a route is computed using a heuristic to minimize the travel distance of the truck among the stations. To test our approach, we used the data set of the ECOBICI system in Mexico City, and we programmed a computational application based on the R language and Python. The results show that, during the day, a station may change from supplying bicycles to demanding them. Thus, the proposed approach should be run every time the managers want to rebalance the system
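
A minimal sketch of the SI-plus-transshipment step (illustrative station data, capacities, target saturation, and distances; the paper's R/Python implementation and its routing heuristic are not reproduced):

```python
# Sketch: compute a Saturation Index per station, split stations into
# surplus/deficit, and solve a balanced transshipment-style LP for the
# bikes to truck between them. All inputs are hypothetical.
import numpy as np
from scipy.optimize import linprog

bikes     = np.array([18, 2, 10])   # bicycles currently docked
capacity  = np.array([20, 20, 20])  # docks per station
target_si = 0.5                     # desired saturation level

si = bikes / capacity               # Saturation Index in [0, 1]
imbalance = bikes - np.round(target_si * capacity).astype(int)
surplus = np.where(imbalance > 0)[0]        # stations that supply bikes
deficit = np.where(imbalance < 0)[0]        # stations that demand bikes

dist = np.array([[0.0, 1.2, 2.5],
                 [1.2, 0.0, 1.8],
                 [2.5, 1.8, 0.0]])          # km between stations

# Decision variables x[i][j]: bikes trucked from surplus i to deficit j
# (assumes total surplus equals total deficit, as in this toy data).
c = np.array([dist[i, j] for i in surplus for j in deficit])
n_s, n_d = len(surplus), len(deficit)

A_eq, b_eq = [], []
for a, i in enumerate(surplus):             # ship out all surplus
    row = np.zeros(n_s * n_d)
    row[a * n_d:(a + 1) * n_d] = 1
    A_eq.append(row); b_eq.append(imbalance[i])
for jj, j in enumerate(deficit):            # cover all deficit
    row = np.zeros(n_s * n_d)
    row[jj::n_d] = 1
    A_eq.append(row); b_eq.append(-imbalance[j])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
print("total truck distance:", res.fun)
print("shipments:", res.x.reshape(n_s, n_d))
```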

Abstract:

In this paper we predict the overall withdrawal of Ecobici bicycles for a given day. The principal problem addressed was the lack of data available to understand certain behavior related to Ecobici demand at some times of the day. However, with the information available on the Ecobici website, we were able to fit a time series model that helps us forecast total withdrawals in the short term; this can be used to estimate overall demand for a given hour of the day, in order to identify the times when demand exceeds availability. The principal motivation of our analysis is to predict the demand at each Ecobici station, so this model serves as a benchmark for future analyses
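
As a rough sketch of such a short-term forecasting model (a seasonal ARIMA fitted to synthetic hourly withdrawal counts; the paper's actual model specification and data are not reproduced):

```python
# Sketch: fit a seasonal ARIMA to hourly withdrawal counts and
# forecast the next day. The series here is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = pd.date_range("2019-01-01", periods=24 * 28, freq="h")
# toy daily pattern: a smooth intraday cycle plus noise
base = 60 + 40 * np.sin(2 * np.pi * (hours.hour - 8) / 24)
y = pd.Series(base + rng.normal(0, 5, len(hours)), index=hours)

model = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=24)   # next 24 hours of withdrawals
print(forecast.round(1).head())
```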

Abstract:

In this article, we examine how collective notions of belonging and imagination become a fertile terrain upon which transnational websites can sustain certain social practices across national boundaries that would be otherwise difficult. Drawing on field work carried out in the United States and Mexico, and using transnational imagination as our analytical lens, we observed three phenomena that are closely related to the use of a transnational website by a migrant community. First, the transnational website under study was a place for a collective imaginary rather than just for the circulation of news. Also, through transnational imagination, migrants can make claims about their status in their community of origin. Moreover, the website is instrumental in harmonizing the various views of the homelands’ realities. Finally, the website can inspire us to look beyond dyadic forms of communication

Abstract:

When somebody dies, her or his presence might still linger in others' lives on Social Networking Sites (SNS). Several factors might influence the way we perceive this digital presence after the death, including personal and religious beliefs. In this work, we present results from a study aimed at examining the differences between the way we perceive an online profile before and after the owner of the profile has died. In the few weeks following the death, the digital presence seems to become more salient, although this salience might be only transient. Our findings highlight the importance of studying this area and raise further questions that need to be addressed in order to better understand this topic

Abstract:

Let τ be a hereditary torsion theory on Mod-R. For a right τ-full R-module M, we establish that [τ, τ ∨ ξ(M)] is a Boolean lattice; we find necessary and sufficient conditions for the interval [τ, τ ∨ ξ(M)] to be atomic, and we give conditions for the atoms to be of some specific type in terms of the internal structure of M. We also prove that there are lattice isomorphisms between the lattice [τ, τ ∨ ξ(M)] and the lattice of τ-pure fully invariant submodules of M, under the additional assumption that M is absolutely τ-pure. With the aid of these results, we get a decomposition of a τ-full and absolutely τ-pure R-module M as a direct sum of τ-pure fully invariant submodules N and N′ with different atomic characteristics on the intervals [τ, τ ∨ ξ(N)] and [τ, τ ∨ ξ(N′)], respectively

Resumen:

Objetivo. Describir y analizar el gasto de la Secretaría de Salud asociado con iniciativas de comunicación social de las campañas de prevención de enfermedades transmitidas por vectores (Zika, chikunguña y dengue) y la evaluación de impacto o resultados. Material y métodos. La información se obtuvo de 690 contratos de prestación de servicios de comunicación social (2015-2017), asociados con dos declaraciones de emergencia epidemiológica (EE-2-2015 y EE-1-2016). Resultados. Se concluye una débil evaluación de impacto del gasto público. No existe evidencia suficiente que demuestre la correspondencia del gasto en comunicación social con la efectividad y cumplimiento de las campañas. Conclusiones. Los hallazgos permiten definir recomendaciones para vigilar, transparentar y hacer más eficiente el gasto público. Existe información pública sobre el gasto; sin embargo, es necesario garantizar mecanismos de transparencia, trazabilidad de contratos y evaluación de impacto de las campañas

Abstract:

Objective. To describe and analyze the expenditure of the Ministry of Health associated with social communication campaigns for the prevention of vector-borne diseases (Zika, chikungunya and dengue). Materials and methods. The information was obtained through the analysis of 690 contracts for the provision of social communication services (2015-2017), linked to two epidemiological emergency declarations (EE-2-2015 and EE-1-2016). Results. The analysis finds weak evaluation of the impact of public spending. There is not enough evidence to show that social communication spending corresponds to the effectiveness of the campaigns or the fulfillment of their goals. Conclusions. The findings allow us to define recommendations to monitor public spending. There are platforms that provide information on social communication spending; however, it is necessary to guarantee transparency, accountability and the governance of those responsible for social communication and official advertising, and to strengthen outcome evaluation mechanisms

Abstract:

The expansion of democracy in the world has been paradoxically accompanied by a decline of political trust. By looking at the trends in political trust in new and stable democracies over the last 20 years, and their possible determinants, we claim that an observable decline in trust reflects the post-honeymoon disillusionment rather than the emergence of a more critical citizenry. However, the first new democracies of the ‘third wave’ show a significant reemergence of political trust after democratic consolidation. Using data from the World Values Survey and the European Values Survey, we develop a multivariate model of political trust. Our findings indicate that political trust is positively related to well-being, social capital, democratic attitudes, political interest, and external efficacy, suggesting that trust responds to government performance. However, political trust is generally hindered by corruption permissiveness, political radicalism and postmaterialism. We identify differences by region and type of society in these relationships, and discuss the methodological problems inherent to the ambiguities in the concept of political trust

Abstract:

In this paper, we present an agent-based simulation (ABS) model of a synthetic biology system that captures substrates and transfers them amongst a group of enzymes until reaching an acceptor enzyme, which generates a substrate-channeling event (SCE). In particular, we analyze the number of simulation cycles required to reach a pre-specified number of SCEs while varying the system composition, which is given by the number of enzymes of two types: donor and acceptor. The results show an efficient frontier that generates the desired number of SCEs with the minimum number of cycles and the lowest acceptor:donor ratio for a given density of enzymes in the system. This frontier is characterized by an exponential function to define the system composition that would minimize the number of cycles to generate the desired SCEs. The output of the ABS confirms that compositions obtained by this function are highly efficient

Resumen:

Siguiendo las recomendaciones de la Organización para la Cooperación y Desarrollo Económico, hemos utilizado la información de patentes contenida en la base de datos de consulta PATENTSCOPE como un indicador de innovación tecnológica con el fin de analizar la participación de los inventores mexicanos en las solicitudes de patentes en un periodo de veinte años (1995-2015). Se realizó el análisis tomando en cuenta el género de los inventores para contrastar la participación de hombres y mujeres. Se muestran algunos indicadores tales como participación, contribución y presencia. Los resultados del estudio establecen que las inventoras mexicanas participan en los títulos de patentes con un equipo de inventores pequeño a mediano (de acuerdo al número de inventores enlistados), mientras los inventores mexicanos tienden a hacerlo en solitario. Se establece también que el área tecnológica en la que los inventores mexicanos hombres y mujeres tienen mayor participación es la de química y metalurgia. Los resultados revelan disparidades de género que deberían atenderse mediante políticas públicas para alcanzar las Metas del Milenio y las Metas de Sustentabilidad y Desarrollo establecidas por la ONU, así como promover la equidad de género en las actividades relacionadas con ciencia y tecnología

Abstract:

Following the recommendations of the Organisation for Economic Co-operation and Development, we have used patent data comprised in the PATENTSCOPE database as an indicator of technological innovation in order to analyze Mexican inventors' involvement in patent filing over a 20-year period (1995-2015). The analysis was disaggregated by gender to observe patterns and trends in the participation of both male and female inventors. Some indicators such as participation, contribution and presence are shown. Findings reveal that Mexican female inventors more often apply for patent titles within a small to medium-sized team, while male inventors prefer single-authored applications. It has also been found that the strongest technological area in which both male and female Mexican inventors apply for patent titles is that of chemistry and metallurgy, inclusive of all its subareas. The results reveal gender disparities that should be addressed in Mexican public policy to accomplish the United Nations Millennium Goals and UN Sustainable Development Goals, and to promote gender equity in science and technology related activities

Resumo:

Seguindo as recomendações da Organização para a Cooperação e Desenvolvimento Econômico, temos utilizado a informação de patentes contida na base de dados de consulta PATENTSCOPE como um indicador de inovação tecnológica com a finalidade de analisar a participação dos inventores Mexicanos nas solicitações de patentes em um período de vinte anos (1995-2015). Foi realizada a análise levando em consideração o género dos inventores para contrastar a participação de homens e mulheres. São mostrados alguns indicadores tais como participação, contribuição e presença. Os resultados do estudo estabelecem que as inventoras Mexicanas participam nos títulos de patentes com uma equipe de inventores entre pequena a média (de acordo com o número de inventores listados), por sua vez, os inventores Mexicanos tendem a fazê-lo à sós. Foi estabelecido também que as áreas tecnológicas nas quais os inventores Mexicanos, homens e mulheres, têm maior participação são as de Química e Metalurgia. Os resultados revelam disparidades de género que deveriam ser atendidas mediante políticas públicas para alcançar as Metas do Milênio e as Metas de Sustentabilidade e Desenvolvimento estabelecidas pela ONU, assim como promover a equidade de género nas atividades relacionadas com Ciência e Tecnologia

Abstract:

University teaching in times of the COVID-19 pandemic has been exposed to new challenges and requirements. The fundamental challenge is to show that institutions designed and rooted in a face-to-face model can offer the same educational quality in a digital format. The challenges appear in the form of students who do not see added value in digital distance education and, on the other hand, employers and families who strongly question the price they are paying in tuition for human resources now educated remotely. Faced with this problem, there are two opposite attitudes: to wait or to assume. The institutions that have decided to wait believe that this passage is temporary and that the adjustments they will have to make rest on presenting a mitigation plan, with the hope of returning to face-to-face normality. Other institutions have decided to assume that the changes are more profound and that the pandemic is an opportunity to restructure and rethink their educational model. This paper presents some routes for taking this second path through a model called Effective Education also Digital (EED)

Resumen:

El trabajo propone distinguir el razonamiento práctico moral del razonamiento jurídico, a partir de dos criterios. El primero es que el razonamiento práctico moral es del dominio exclusivo de la razón práctica y que carece, así, de exigencias de la razón teórica. El segundo criterio es que el razonamiento jurídico posee un cierto grado de lo que se denomina racionalidad de método. El razonamiento práctico moral carece de esta clase de racionalidad. La tesis se argumenta a partir de la distinción entre razón teórica y razón práctica. Por un lado, el razonamiento teórico tiene relación directa con el conocimiento acerca del mundo. Por otro, el razonamiento práctico se ocupa de decidir el curso de acción que se ha de elegir en un caso concreto. Aunque relacionadas, las racionalidades son regidas por diferentes criterios de evaluación. Las diferentes exigencias de razón permiten advertir, también, que ciertos razonamientos jurídicos son pasibles de ciertos sentidos de racionalidad que no son aplicables al razonamiento moral

Abstract:

The article proposes to distinguish moral practical reasoning from legal reasoning on the basis of two criteria. The first criterion is that moral practical reasoning belongs exclusively to the domain of practical reason and thus carries no requirements from the domain of theoretical reason. The second criterion is that legal reasoning possesses, to a degree, a type of rationality called method rationality. Such rationality cannot be said to be present in moral practical reasoning at all. The thesis is argued from the distinction between theoretical reason and practical reason. On the one hand, theoretical reasoning is directly concerned with knowledge about the world. On the other, practical reasoning is concerned with deciding the course of action to choose in a particular situation. Although related, the two rationalities are governed by different evaluation criteria. These different requirements reveal that some notions of rationality are applicable to legal reasoning but not to its moral counterpart

Abstract:

In this paper we propose a theoretical model of an ITS (Intelligent Tutoring System) capable of improving and updating computer-aided navigation based on Bloom's taxonomy. For this we use the Bayesian Knowledge Tracing algorithm, performing adaptive control of the navigation among different levels of cognition in online courses. These levels are defined by a taxonomy of educational objectives with a hierarchical order in terms of the control that some processes have over others, called Marzano's taxonomy, which takes into account the metacognitive system, responsible for the creation of goals as well as strategies to fulfill them. The main improvements of this proposal are: 1) an adaptive transition between individual assessment questions determined by levels of cognition; 2) a student model based on the initial responses of a group of learners, which is then adjusted to the ability of each learner; 3) the promotion of metacognitive skills such as goal setting and self-monitoring through the estimation of the attempts required to pass the levels. One level of Marzano's taxonomy was left in the hands of the human teacher, making clear that a distinction must be drawn between the tasks in which an ITS can be an important aid and those in which it would be more difficult
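
For concreteness, the standard Bayesian Knowledge Tracing update behind such a student model is sketched below (textbook BKT equations; the parameter values and the adaptive adjustments described above are illustrative placeholders, not the paper's estimates):

```python
# Standard Bayesian Knowledge Tracing update: posterior mastery given
# one observed answer, followed by the learning-transition step.
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    if correct:
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

p = 0.3                      # prior probability the skill is mastered
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(round(p, 3))       # mastery estimate drives the navigation
```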

Abstract:

Efficient public transport systems rely on origin-destination matrices (ODMs) estimation to accurately capture passenger travel patterns, enabling adjustments to frequencies and lines as needed. In this study, we address the ODM estimation problem by employing multiple bi-level programs that consider an outdated ODM and observed passenger flows on specific transit line arcs. Additionally, we consider various optional data types, including boarding and alighting data, as well as the structure of the outdated ODM and passenger flows, either all, separately, in combination, or none. In our study, we reformulate these bi-level programs into single-level models, and we use a commercial solver to address the problem in benchmark instances. In our analysis, we focus on the impact of incorporating different types of information into the estimation process, leading to valuable insights. We find that considering all the data types leads to a higher accuracy than only a subset of these data types. In particular, focussing only on boarding and alighting data leads to improvements in the estimation process, whereas considering only the structure of the outdated ODM and passenger flows leads to reduced accuracy compared to not incorporating either. The latter highlights the significance of data selection in ODM estimations
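
Schematically, the single-level reformulation starts from a bi-level program of roughly the following shape (a generic sketch with illustrative symbols; the paper's exact objectives, weights, and constraint sets are not reproduced here):

```latex
\begin{aligned}
\min_{g \ge 0} \quad & \alpha\,\lVert g - \hat g \rVert^{2}
  + \beta \sum_{a \in \hat A} \bigl( v_a - \hat v_a \bigr)^{2} \\
\text{s.t.} \quad & v \in \arg\min_{v' \in V(g)} C(v'),
\end{aligned}
```

where $\hat g$ is the outdated ODM, $\hat v_a$ are the observed passenger flows on the counted arcs $\hat A$, and the lower level is the transit assignment that maps an ODM $g$ to arc flows $v$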

Abstract:

The efficiency of the planning process in public transport is represented through different measures, which are practically impossible to optimize simultaneously. This study defines a bi-objective optimization problem for the transit network design to analyze the trade-off between the minimization of travel times and the reduction of monetary costs for passengers (which was not addressed in the literature) while considering a hard constraint for operational costs. Indeed, the minimization of monetary costs for passengers is relevant in transport systems without a complete integrated fare system, where passengers may pay for each trip-leg; thus, modeling monetary costs for users is essential when referring to the system's accessibility and route choice. To achieve our goal, we implement an epsilon-constraint algorithm capable of obtaining high-quality approximations of the Pareto front for benchmark instances in hours of computational time, which is reasonable for strategic planning problems. Numerical results show that the conflict between both objectives is evident, and it is possible to identify the more useful lines to optimize each objective, leading to relevant information for the decision-making process. Finally, we perform a sensitivity analysis on the budget parameter of our optimization problem, showing the classic trade-off between the operational costs and the level of service in terms of travel time and monetary cost
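
A minimal sketch of the epsilon-constraint scheme on a toy two-variable LP (the objective coefficients and constraint below stand in for the actual transit design model, which is far larger): minimize one objective while bounding the other by a sweeping epsilon, collecting one Pareto point per sweep value:

```python
# Epsilon-constraint sweep on a toy bi-objective LP: minimize f1
# subject to f2 <= eps, for a grid of eps values. Data is illustrative.
import numpy as np
from scipy.optimize import linprog

f1 = np.array([3.0, 1.0])   # e.g., total travel time coefficients
f2 = np.array([1.0, 4.0])   # e.g., total monetary cost coefficients
A_ub = [[-1.0, -1.0]]       # feasibility: x1 + x2 >= 4
b_ub = [-4.0]

pareto = []
for eps in np.linspace(4.0, 16.0, 7):          # sweep the bound on f2
    res = linprog(f1,
                  A_ub=A_ub + [f2.tolist()],   # add f2 . x <= eps
                  b_ub=b_ub + [eps],
                  bounds=(0, None), method="highs")
    if res.success:
        pareto.append((res.fun, f2 @ res.x))

for point in pareto:                           # approximate Pareto front
    print("f1 = %.2f, f2 = %.2f" % point)
```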

Resumen:

Este artículo sostiene que la llegada del Antropoceno requiere un cambio en el significado y alcance de la responsabilidad. Con base en Hans Jonas y Bruno Latour, sostengo que la responsabilidad es una característica definitoria de la humanidad que, no obstante, está acechada por su opuesto. Si ser responsable es primariamente ser receptivo a lo Otro, entonces la cultura de 'responsabilidad personal' que prevalece hoy en día es una traición tanto a la humanidad como a la Tierra. Cuando Jonas formuló tales ideas en 1979, el 'sistema tierra' no era ni un campo de estudio científico ni una cuestión de preocupación existencial. Pocos académicos lo tomaron en serio. Sin embargo, desarrollos recientes en el pensamiento científico, legal y ambiental han validado su visión. Para probar esta hipótesis, retomo a Latour, quien fue un cuidadoso lector-y crítico-de Jonas. Ambos pensadores consideraron que la creencia modernista de que solo los humanos son fuentes de reclamos morales válidos es un error que debe ser corregido. A medida que la Tierra hoy 'reacciona' a nuestras intervenciones con fenómenos climáticos extremos y enfermedades zoonóticas, su mensaje resuena en círculos cada vez mayores. El Antropoceno trastoca una era en la que solo algunos humanos tenían permitido hablar. Ahora debemos enseñarnos a escuchar y responder a otros seres vivos y a generaciones futuras. Sostengo que esta capacidad es el corazón de regímenes emergentes de responsabilidad planetaria

Abstract:

This paper argues that the coming of the Anthropocene requires a shift in the meaning and scope of responsibility. Drawing on Hans Jonas and Bruno Latour, I argue that responsibility is a defining feature of humanity which is nevertheless haunted by its opposite. Indeed, if to be responsible is primarily to be responsive to the claim of the Other, then the culture of 'personal responsibility' that prevails today is a betrayal of both humanity and the Earth. When Jonas formulated such thoughts in 1979 the 'Earth system' was neither a field of scientific study, nor a matter of existential concern. Few scholars took him seriously. However, recent developments in scientific, legal, and environmental thought have vindicated his vision. To test this hypothesis I turn to Latour, who was a careful reader-and critic-of Jonas. Both thinkers regarded the modernist belief that only humans are sources of valid moral claims as an error that ought to be corrected. As the Earth today 'reacts' to our interventions with extreme weather and zoonotic diseases, their message is resounding in growing circles. The Anthropocene upends an era in which only (some) humans were allowed to speak. Now we must teach ourselves how to listen and respond to other living beings and future generations. This responsiveness, I argue, will form the core of emerging regimes of planetary responsibility

Resumen:

El artículo ofrece una lectura de La Condición Humana (1958) a la luz de Vita Activa (1961). Tras hacer un recorrido por la recepción de La Condición Humana desde su publicación en 1958, argumento que se ha marginado a la obra como un todo para extraer de ella fragmentos y supuestos ‘modelos’ de la política. Considerada como un todo, la obra de Arendt es una reflexión acerca de las condiciones de posibilidad de nuestras experiencias de sentido. Como es evidente sobre todo en la versión alemana, se trata, así, de una investigación fenomenológica. Argumento que tanto la fuerza crítica como la aparente ceguera que encontramos en Arendt se deben a las premisas fenomenológicas de su pensamiento

Abstract:

The article offers a reading of The Human Condition (1958) in light of Vita Activa (1961). After tracing the reception of The Human Condition since its publication in 1958, I argue that scholarship on Arendt has marginalized the work as a whole to extract fragments for supposed ‘models’ of politics. Considered as a whole, Arendt’s work is a reflection on the conditions of possibility of our experiences of meaning. As is evident above all in the German version, it is thus a phenomenological investigation. I argue that both the critical force and the apparent blindness that we find in Arendt are due to the phenomenological premises of her thought

Resumo:

O artigo oferece uma leitura da Condição Humana (1958) à luz de Vita Activa (1961). Depois de fazer um tour pela recepção da Condição Humana desde sua publicação em 1958, o artigo argumenta que essa recepção marginalizou o trabalho como um todo para extrair fragmentos e supostos "modelos" da política. Considerado como um todo, o trabalho de Arendt é uma reflexão sobre as condições de possibilidade de nossas experiências de significado. Como é evidente acima de tudo na versão alemã, é, assim, uma investigação fenomenológica. Argumento que tanto a força crítica quanto a aparente cegueira que encontramos em Arendt se devem às premissas fenomenológicas de seu pensamento

Abstract:

Leo Strauss has been read as the author of a paradoxically nonpolitical political philosophy. This reading finds extensive support in Strauss’s work, notably in the claim that political life leads beyond itself to contemplation and in the limits this imposes on politics. Yet the space of the nonpolitical in Strauss remains elusive. The “nonpolitical” understood as the natural, Strauss suggests, is the “foundation of the political”. But the meaning of “nature” in Strauss is an enigma: it may refer either to the “natural understanding” of common sense, or to nature “as intended by natural science,” or to “unchangeable and knowable necessity.” As a student of Husserl, Strauss sought to retrieve and radically critique both the “natural understanding” and the “naturalistic” worldview of natural science. He also cast doubt on the very existence of an unchangeable nature. The true sense of the nonpolitical in Strauss, I shall argue, must rather be sought in his embrace of the trans-finite goals of philosophy understood as rigorous science. Nature may be the nonpolitical foundation of the political, but we can only ever approximate nature asymptotically. The nonpolitical remains as elusive in Strauss as the ordinary. To approximate both we need to delve deeper into his understanding of Husserl

Abstract:

Scholars of International Relations (IR) confront the unenviable task of conceiving and representing the world as a whole. Philosophy has deemed this impossible since the time of Kant. Today's populist reaction against "globalism" suggests that it is imprudent. Yet IR must persevere in its quest to diagnose emerging global realities and fault lines. To do so without stoking populist fears and mythologies, I argue, IR must enter into dialogue with the new realism in philosophy, and in particular with its ontological pluralism. The truth of what unites and divides us today is not one-dimensional, as the image of a networked world of "open" or "closed" societies suggests. Beyond anonymous networks, there are principles such as sovereignty; there are systemic dynamics of inclusion/exclusion, and there is the power of justifications

Abstract:

Leo Strauss has been understood as one of the foremost critics of Heidegger, and as having provided an alternative to his thought: against Heidegger’s Destruktion of Plato and Aristotle, Strauss enacted a recovery; against Heidegger’s “historicist turn,” Strauss rediscovered a superior alternative in the “Socratic turn.” This paper argues that, rather than opposing or superseding Heidegger, Strauss engaged Heidegger dialectically. On fundamental philosophical problems, Strauss both critiqued Heidegger and retrieved the kernel of truth contained in Heidegger’s position. This method is based on Strauss’s zetetic conception of philosophy, which has deep roots in Heidegger’s 1922 reading of Aristotle’s Metaphysics

Resumen:

Desde la década de 1990, el mundo ha visto una proliferación de estados de excepción. La detención indefinida de "combatientes enemigos", la acelerada construcción de barreras fortificadas entre países y el aumento de la población mundial considerada "ilegal" ejemplifican la tendencia mundial a normalizar las excepciones. Donald Trump es el caso más burdo -no por ello menos alarmante- de esta tendencia

Abstract:

Since the 1990s, the world has seen a proliferation of states of exception. The indefinite detention of "enemy combatants", the accelerated construction of fortified barriers between countries, and the growth of the share of the world's population deemed "illegal" exemplify a global tendency to normalize exceptions. Donald Trump is the crudest, though no less alarming, case of this trend

Abstract:

This work analyzes the effect of wall geometry when a reaction-diffusion system is confined to a narrow channel. In particular, we study the entropy production density in the reversible Gray-Scott system. Using an effective diffusion equation that considers modifications by the channel characteristics, we find that the entropy density changes its value but not its qualitative behavior, which helps explore the structure-formation space

Abstract:

We study a reaction-diffusion system within a long channel in the regime in which the projected Fick-Jacobs-Zwanzig operator for confined diffusion can be used. We find that, under this approximation, the Turing instability conditions can be modified by the channel geometry. The dispersion relation, the range of unstable modes where pattern formation occurs, and the spatial structure of the patterns themselves change as functions of the geometric parameters of the channel. This occurs for the three channels analyzed, for which the values of the projected operators can be found analytically. For the reaction term, we use the well-known Schnakenberg kinetics
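
For reference, a minimal sketch of the standard, unconfined Turing analysis with Schnakenberg kinetics; the paper's modification of the diffusion operator by the channel geometry (the projected Fick-Jacobs-Zwanzig operator) is not reproduced here:

\[
u_t = \nabla^2 u + a - u + u^2 v, \qquad
v_t = d\,\nabla^2 v + b - u^2 v,
\]

with homogeneous steady state \((u_*, v_*) = \bigl(a + b,\; b/(a+b)^2\bigr)\). Perturbations proportional to \(e^{\lambda t + i k x}\) satisfy the dispersion relation \(\det\bigl(\lambda I - J + k^2 D\bigr) = 0\) with \(D = \operatorname{diag}(1, d)\), and diffusion-driven instability requires \(\operatorname{tr} J < 0\), \(\det J > 0\), and \(d\,J_{11} + J_{22} > 2\sqrt{d\,\det J}\), where \(J\) is the Jacobian of the kinetics at \((u_*, v_*)\).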

Abstract:

Auctioneers often face the decision of whether to bundle two or more different objects before selling them. Under a Vickrey auction (or any other revenue-equivalent auction form) there is a unique critical number for each pair of objects such that when the number of bidders is below that critical number the seller strictly prefers a bundled sale, and when there are more bidders the seller prefers unbundled sales. This property holds even when a given bidder's valuations for the objects are correlated. The results are proved using a quantile-based mathematical technique that can be extremely useful for similar analyses
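
A hedged numerical illustration of the critical-number property (i.i.d. uniform valuations are an assumption made here for simplicity; the paper's result also covers correlated valuations): with few bidders the bundled Vickrey revenue, the second-highest sum of valuations, tends to exceed the sum of the second-highest stand-alone valuations, and the ranking reverses as the number of bidders grows.

# Monte Carlo: bundled vs. separate Vickrey revenue for two objects.
# Valuations are i.i.d. Uniform(0, 1) per bidder and object (illustrative).
import numpy as np
rng = np.random.default_rng(0)
def revenues(n_bidders, n_sims=20000):
    v = rng.uniform(size=(n_sims, n_bidders, 2))         # valuations for A, B
    bundled = np.sort(v.sum(axis=2), axis=1)[:, -2]      # 2nd-highest sum
    separate = np.sort(v, axis=1)[:, -2, :].sum(axis=1)  # sum of 2nd-highest
    return bundled.mean(), separate.mean()
for n in (2, 3, 5, 10):
    b, s = revenues(n)
    print(f"n = {n:2d}   bundled = {b:.3f}   separate = {s:.3f}")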

Abstract:

We examine the evolution of adult female heights in twelve Latin American countries during the second half of the twentieth century based on demographic health surveys and related surveys compiled from national and international organizations. Only countries with more than one survey were included, allowing us to cross-examine surveys and correct for biases. We first show that average height varies significantly according to location, from 148.3 cm in Guatemala to 158.8 cm in Haiti. The evolution of heights over these decades behaves like indicators of human development, showing a steady increase of 2.6 cm from the 1950s to the 1990s. Such gains compare favorably to other developing regions of the world, but not so much with recently developed countries. Height gains were not evenly distributed in the region, however. Countries that achieved higher levels of income, such as Brazil, Chile, Colombia and Mexico, gained on average 0.9 cm per decade, while countries with shrinking economies, such as Haiti and Guatemala, only gained 0.25 cm per decade

Abstract:

The credibility of election outcomes hinges on the accuracy of vote tallies. We provide causal evidence on the drivers and the downstream consequences of variation in the quality of vote tallies. Using data for the universe of polling stations in Mexico in five national elections, we document that over 40% of polling-station-level tallies display inconsistencies. Our evidence strongly suggests these inconsistencies are nonpartisan. Using data for more than 1.5 million poll workers, we show that lower educational attainment, higher workload, and higher complexity of the tally cause more inconsistencies. Finally, using an original survey of close to 80,000 poll workers together with detailed administrative data, we find that inconsistencies cause recounts and recounts lead to lower trust in electoral institutions. We discuss policy implications

Abstract:

A synthesis is presented of recent work by the authors and others on the formation of localized patterns, isolated spots, or sharp fronts in models of natural processes governed by reaction-diffusion equations. Contrasting with the well-known Turing mechanism of periodic pattern formation, a general picture is presented in one spatial dimension for models on long domains that exhibit subcritical Turing instabilities. Localized patterns naturally emerge in generalized Schnakenberg models within the pinning region formed by bistability between the patterned state and the background. A further long-wavelength transition creates parameter regimes of isolated spots which can be described by semi-strong asymptotic analysis. In the species-conservation limit, another form of wave pinning leads to sharp fronts. Such fronts can also arise given only one active species and a weak spatial parameter gradient. Several important applications of this theory within natural systems are presented, at different length scales: cellular polarity formation in developmental biology, including root-hair formation, leaf pavement cells, and keratocyte locomotion; and the transitions between vegetation states on continental scales. Philosophical remarks are offered on the connections between different pattern formation mechanisms and on the benefit of subcritical instabilities in the natural world

Abstract:

This work presents a comparison of different deep learning models for the reconstruction of artistic images from compact representations generated using Principal Component Analysis. The reconstruction models correspond to different types of Convolutional Neural Networks. Our results show that the statistics captured by the principal component transformation are enough to obtain good approximations in the reconstruction process, especially in terms of color and object visual features, even when using compact representations whose length is only about 1% of the total number of features in the original image space
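
A hedged sketch of the compact-representation step only (the dataset and the roughly 1% component count below are illustrative stand-ins; the paper's reconstruction models are convolutional networks trained on artistic images, which are not reproduced here):

# PCA compression to ~1% of the pixel count, then linear reconstruction.
# Olivetti faces (64x64 = 4096 pixels) stand in for the artistic images;
# fetch_olivetti_faces downloads the dataset on first use.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
X = fetch_olivetti_faces().data               # shape (400, 4096), in [0, 1]
n_comp = int(0.01 * X.shape[1])               # ~1% of the features -> 40
pca = PCA(n_components=n_comp).fit(X)
codes = pca.transform(X)                      # compact representations
X_rec = pca.inverse_transform(codes)          # linear reconstruction baseline
mse = float(np.mean((X - X_rec) ** 2))
print(f"{n_comp} components, explained variance "
      f"{pca.explained_variance_ratio_.sum():.3f}, reconstruction MSE {mse:.5f}")

A CNN decoder, as studied in the paper, would replace the linear inverse_transform step and be trained to map the PCA codes back to images.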

Abstract:

Huntington's disease (HD) is a neurological disorder characterized by a reduction in medium-spiny neurons in the brain. Currently, there is no cure for HD, and treatment relies on symptomatic therapy. The generation of stem cells, including neural, mesenchymal, and pluripotent stem cells, through conventional strategies or direct cell reprogramming, has revolutionized HD therapy research. Due to their unique ability to differentiate into a variety of cells, self-renew, and grow, stem cells have become an area of interest for treating various complex and unresolved neurodegenerative disorders. Nanotechnology has emerged as a novel approach with great potential for treating HD with reduced side effects. Nanoparticles (NPs) can act as nanovehicles for delivering therapeutic agents, including siRNAs, stem cells, neurotrophic factors, and different drugs. Additionally, NPs can be used as an alternative treatment based on their antioxidant and reactive oxygen species-scavenging properties that protect neuronal cells. Some NPs even exhibit the ability to interfere with the aggregation of mutant Huntingtin proteins during neurodegenerative processes. This review focuses on the most studied NPs for treating HD, including polymeric NPs, lipid-based NPs such as liposomes and solid lipid NPs, and metal/metal oxide NPs. The combination of NPs with stem cell therapy has great potential for neurodegenerative disease diagnosis and treatment. NPs have been used to manage the cellular microenvironment, improve the efficiency of cell and drug delivery to the brain, and enhance stem cell transplant survival. Understanding the characteristics of the different NPs is essential for applying them for therapeutic purposes. In this study, the biology of HD as well as the benefits and drawbacks of using NPs and stem cell therapy for its treatment are discussed

Abstract:

Paclitaxel (PTX) is an effective broad-spectrum anticancer agent originally obtained from the bark of the medicinal tree Taxus brevifolia. It belongs to the diterpene taxanes, a group of drugs now among the most widely used in chemotherapy, with indications that include ovarian cancer, breast cancer, non-small cell lung cancer, head and neck cancer, Kaposi's sarcoma, and urologic cancers. Clinical studies have proven its anticancer effects against ovarian, lung, and breast cancer. The compound is difficult to use due to its limited solubility, its propensity to recrystallize after dilution, and cosolvent-induced toxicity. Nanotechnology and nanoparticles offer several benefits over free pharmaceuticals, including improved drug half-life, decreased toxicity, and targeted, selective drug delivery. Nano-drugs can accumulate in tissues, which may be associated with increased permeability and retention, and with greater anticancer activity alongside low toxicity in healthy tissues. This article provides information on paclitaxel's chemical makeup, formulations, mode of action, and toxicity, and discusses the value of paclitaxel-loaded nanoparticles, their possibilities, and their potential for the future. To help enhance the pharmacodynamic and pharmacokinetic characteristics of paclitaxel, this review provides a general summary of the current status of ongoing therapeutic progress in nanotechnology. More importantly, a thorough review of the potential of various nanocarriers, including polymeric, lipid-based, inorganic, and carbon-based nanostructures, is provided to comparatively assess their applicability to PTX delivery. Our goal is to demonstrate how these different types of nanocarriers can contribute to improving the therapeutic efficiency of PTX as a chemotherapeutic agent

Resumen:

Esta nota de investigación presenta el Estudio Panel México 2006, un proyecto que permite estudiar la conducta electoral de los mexicanos. El estudio consiste en una encuesta tipo panel de tres rondas de entrevistas a 2,400 mexicanos adultos realizadas entre octubre de 2005 y julio de 2006, así como un extenso análisis de contenido de información noticiosa y anuncios políticos en televisión. En éste se registran cambios en las opiniones políticas de los mexicanos, los cuales pueden ser analizados a la luz de los eventos e información de las campañas presidenciales, así como del contexto político posterior a la elección. El estudio panel documenta qué tipo de votantes cambiaron su preferencia electoral y ofrece elementos para determinar por qué ocurrieron dichos cambios

Abstract:

This research note discusses the methods and key findings of the Mexico 2006 Panel Study, a multi-investigator project on Mexican voting behavior. This panel study consists of a three-wave panel survey of 2,400 Mexican adults between October 2005 and July 2006. It also includes an extensive content analysis of television news and advertisements. The Mexico 2006 Panel Study permits assessing the impact of events and information flows, along with the post-electoral context, on the political attitudes of Mexicans. These surveys document which types of voters changed their electoral preferences during the campaign and offer a host of variables to explain why these changes occurred

Abstract:

Long time existence and uniqueness of solutions to the Yang-Mills heat equation is proven over a compact 3-manifold with smooth boundary. The initial data is taken to be a Lie algebra valued connection form in the Sobolev space H1. Three kinds of boundary conditions are explored: Dirichlet type, Neumann type, and Marini boundary conditions. The last is a nonlinear boundary condition, specified by setting the normal component of the curvature to zero on the boundary. The Yang-Mills heat equation is a weakly parabolic nonlinear equation. We use gauge symmetry breaking to convert it to a parabolic equation and then gauge transform the solution of the parabolic equation back to a solution of the original equation. A priori estimates are developed by first establishing a gauge invariant version of the Gaffney-Friedrichs inequality. A gauge invariant regularization procedure for solutions is also established. Uniqueness holds upon imposition of boundary conditions on only two of the three components of the connection form because of weak parabolicity. This work is motivated by possible applications to quantum field theory
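
Schematically, and in hedged form, the flow studied is the negative gradient flow of the Yang-Mills energy,

\[
\frac{\partial A}{\partial t} = -\,d_A^{*}\,F(A), \qquad F(A) = dA + A \wedge A,
\]

where \(A\) is the Lie-algebra-valued connection form; the Marini boundary condition referred to above sets the normal component of the curvature to zero on the boundary, \((F(A))_{\mathrm{norm}} = 0\) on \(\partial M\).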

Abstract:

We study the set of eigenvalues of the Bochner Laplacian on a geodesic ball of an open manifold M, and find lower estimates for these eigenvalues when M satisfies a Sobolev inequality. We show that we can use these estimates to demonstrate that the set of harmonic forms of polynomial growth over M is finite dimensional, under sufficient curvature conditions. We also study in greater detail the dimension of the space of bounded harmonic forms on coverings of compact manifolds

Abstract:

In this paper we consider the Hodge Laplacian on differential k-forms over smooth open manifolds M^N, not necessarily compact. We find sufficient conditions under which the existence of a family of logarithmic Sobolev inequalities for the Hodge Laplacian is equivalent to the ultracontractivity of its heat operator. We will also show how to obtain a logarithmic Sobolev inequality for the Hodge Laplacian when there exists one for the Laplacian on functions. In the particular case of Ricci curvature bounded below, we use the Gaussian type bound for the heat kernel of the Laplacian on functions in order to obtain a similar Gaussian type bound for the heat kernel of the Hodge Laplacian. This is done via logarithmic Sobolev inequalities and under the additional assumption that the volume of balls of radius one is uniformly bounded below

Resumen:

En el presente artículo se examina la relación de la Filosofía del derecho de Hegel con la filosofía práctica aristotélica. Con ello se pretende mostrar, por una parte, que algunas de las tesis y motivos centrales de la filosofía del derecho hegeliana se entienden de mejor forma trayendo a primer plano ciertos planteamientos aristotélicos y, por otro lado, que dichos planteamientos son objeto de una reinterpretación y reelaboración por parte de Hegel ante ciertas exigencias históricas y filosóficas del contexto moderno. En particular, se examina la relación entre estos dos autores atendiendo a dos puntos concretos: la metodología holística de una filosofía práctica -entendida de modo amplio- y la teoría de la motivación ética

Abstract:

In this article I examine the relation of Hegel's Philosophy of Right with the Aristotelean practical philosophy. My aim with this is to show, on the one hand, that some of the theses and central motifs of the Hegelian philosophy of right are better understood if one brings certain Aristotelean ideas to the forefront of the discussion and, on the other hand, that these ideas are subjected to a reinterpretation by Hegel due to the historical and philosophical demands of the Modern Age. In particular, the relationship between these two authors is analyzed by examining two specific subjects: the holistic methodology of a practical philosophy -broadly construed- and the theory of ethical motivation

Resumen:

Mi objetivo en este artículo es estudiar el papel de la belleza como símbolo de la moralidad en la Kritik der Urteilskraft. En primer lugar, realizo una comparación entre el esquematismo de los conceptos y el proceso del razonamiento analógico. Lo que sostengo es que el razonamiento analógico y simbólico conduce en Kant a una consideración sobre los objetos bajo las condiciones establecidas por el agente reflexivo mismo: la tesis que mostraré con ello es que los objetos simbólicos contribuyen a los propios propósitos de reflexión de uno sobre una temática particular. Posteriormente, explico por qué Kant considera que la belleza es un símbolo adecuado de la moralidad en vistas de la existencia de un interesante número de similitudes entre el ámbito de lo estético y de lo moral. Finalmente, discuto por qué únicamente la libertad puede exhibirse por medio de un símbolo según la estética kantiana, y por qué las otras ideas de la razón --Dios y el alma-- están mucho más asociadas con la experiencia de lo sublime

Abstract:

My aim in this article is to study the role of beauty as a symbol of morality in Kant's Kritik der Urteilskraft. First, I draw a comparison between the schematism of concepts and the process of analogical reasoning. I maintain that analogical and symbolic reasoning leads in Kant to a consideration of objects under the conditions set by the reflective agent himself: the thesis that I will prove thereby is that symbolic objects serve one's own purposes of reflection on a given topic. I then proceed to explain why Kant considers beauty an adequate symbol of morality on account of the existence of an interesting number of similarities between the aesthetic and the moral realms. Lastly, I clarify why only freedom can be exhibited by means of a symbol in Kantian aesthetics, and why the other two ideas of reason --namely God and the soul-- are much more associated with the experience of the sublime

Resumen:

Este artículo analiza la interpretación de Ricoeur sobre el concepto de Anerkennung en la filosofía temprana de Hegel. Se discute la importancia primordial de este concepto y se evalúa el significado del mismo dentro de un orden ético y político

Abstract:

This article analyzes Ricoeur's interpretation of the concept of Anerkennung (recognition) in Hegel's early philosophy. The paramount importance of this notion is discussed, and its meaning within an ethical and political order is assessed

Abstract:

Despite the presidential promise to create a universal health system, the lack of objectives and strategies has caused serious problems. Without coordination among institutions, no real progress can be achieved in this area

Abstract:

In recent years, the Mexican State has broken its promise to protect and guarantee the fundamental right to health. Its omissions must not be ignored

Resumen:

El presente artículo tiene como objetivo demostrar que el sistema público de salud en México atraviesa su más severa crisis de las últimas décadas en el momento que debe dar respuesta a la pandemia de SARS-CoV-2. Para poder comprender el deterioro del sistema público de salud se describen las causas principales que han derivado en su debilitamiento, donde destacan la corrupción de la pasada administración y la reforma inconclusa del sistema de salud de la actual administración. Posteriormente, se explica la importancia de los órganos encargados de dar respuesta a la emergencia sanitaria, los Acuerdos de mayor impacto emitidos al comienzo de la pandemia y sus desaciertos, para demostrar cómo el conjunto de factores descritos y explicados afectan los derechos humanos, principalmente el goce efectivo del derecho a la protección de la salud que se vulnera aún más frente a la pandemia. El propósito último del estudio es demostrar la debilidad del sistema público de salud para responder a la emergencia sanitaria y la urgencia que existe de fortalecer el sistema público de salud para cumplir con el derecho a la protección de la salud de acuerdo con los principios establecidos en la Constitución

Abstract:

The objective of this article is to demonstrate that the public health system in Mexico is going through its most severe crisis in recent decades at the very time it must respond to the SARS-CoV-2 pandemic. In order to understand the deterioration of the public health system, the main causes that have resulted in its weakening are described, underlining the corruption that took place in the previous administration and the unfinished health system reform of the current administration. Subsequently, the importance of the institutions in charge of responding to the health emergency, the Agreements with the greatest impact issued at the beginning of the pandemic, and their limitations are explained to demonstrate how these factors affected human rights, mainly the effective enjoyment of the right to health protection, which is further undermined by the pandemic. The ultimate purpose of the study is to demonstrate the weakness of the public health system in addressing the sanitary emergency and the urgency of strengthening the public health system in order to comply with the right to health protection in accordance with the principles established in the Constitution

Resumen:

Objetivo. Analizar la cobertura en salud de cáncer pulmonar en México y ofrecer recomendaciones al respecto. Material y métodos. Mediante la conformación de un grupo multidisciplinario se analizó la carga de la enfermedad relativa al cáncer de pulmón y el acceso al tratamiento médico que ofrecen los diferentes subsistemas de salud en México. Resultados. Se documentan desigualdades importantes en la atención del cáncer de pulmón entre los distintos subsistemas de salud que sugieren acceso y cobertura en salud variable, tanto a los tratamientos tradicionales como a las innovaciones terapéuticas existentes, y diferencias en la capacidad de los prestadores de servicios de salud para garantizar el derecho a la protección de la salud sin distinciones. Conclusión. Se hacen recomendaciones sobre la necesidad de mejorar las acciones para el control del tabaco, el diagnóstico temprano y la inclusión de terapias innovadoras y la homologación entre los diferentes prestadores públicos de servicios de salud a través del financiamiento con la recaudación de impuestos al tabaco

Abstract:

Objective. To analyze health coverage for lung cancer in Mexico and offer recommendations in this regard. Materials and methods. Through the formation of a multidisciplinary group, we analyzed the burden of disease attributable to lung cancer and the access to medical treatment offered by the different public health subsystems in Mexico. Results. Important inequalities in lung cancer care are documented among the different public health subsystems. Our data suggest differential access and coverage to both traditional treatments and existing therapeutic innovations, as well as differences in the capacity of health service providers to guarantee the right to health protection without distinction. Conclusions. Recommendations are made on the need to improve actions for tobacco control, early diagnosis of lung cancer, the inclusion of innovative therapies, and harmonization among the different public health service providers, financed through tobacco tax revenue

Abstract:

Priority setting is the process through which a country's health system establishes the drugs, interventions, and treatments it will provide to its population. Our study evaluated the priority-setting legal instruments of Brazil, Costa Rica, Chile, and Mexico to determine the extent to which each reflected the following elements: transparency, relevance, review and revision, and oversight and supervision, according to Norman Daniels's accountability for reasonableness framework and Sarah Clark and Albert Weale's social values framework. The elements were analyzed to determine whether priority setting, as established in each country's legal instruments, is fair and justifiable. While all four countries fulfilled these elements to some degree, there was important variability in how they did so. This paper aims to help these countries analyze their priority-setting legal frameworks to determine which elements need to be improved to make priority setting fair and justifiable

Abstract:

In 2010, the Mexican government implemented a multi-sector agreement to prevent obesity. In response, the Ministries of Health and Education launched a national school-based policy to increase physical activity, improve nutrition literacy, and regulate school food offerings through nutritional guidelines. We studied the Guidelines’ negotiation and regulatory review process, including government collaboration and industry response. Within the government, conflicting positions were evident: the Ministries of Health and Education supported the Guidelines as an effective obesity-prevention strategy, while the Ministries of Economics and Agriculture viewed them as potentially damaging to the economy and job generation. The food and beverage industries opposed and delayed the process, arguing that regulation was costly, with negative impacts on jobs and revenues. The proposed Guidelines suffered revisions that lowered standards initially put forward. We documented the need to improve cross-agency cooperation to achieve effective policymaking. The ‘siloed’ government working style presented a barrier to efforts to resist industry's influence and strong lobbying. Our results are relevant to public health policymakers working in childhood obesity prevention

Resumen:

Este artículo tiene tres objetivos: presentar las dificultades que plantea la exigibilidad del derecho a la salud en México; dar cuenta de los principales problemas que se están presentando en lo que, de modo general, podemos llamar “relaciones entre derecho y salud” y, por último, ofrecer algunas propuestas para alcanzar una relación eficaz entre ambas materias

Abstract:

This article pursues three goals: to describe the obstacles that the full enforceability of the right to health faces in Mexico; to present the main problems arising in what, generally speaking, could be referred to as “relations between law and health”; and, finally, to offer some proposals for achieving an effective relation between the two fields

Abstract:

How do legislators develop reputations to further their individual goals in environments with limited space for personalization? In this article, we evaluate congressional behavior by legislators with gubernatorial expectations in a unitary environment where parties control political activities and institutions hinder individualization. By analyzing the process of drafting bills in Uruguay, we demonstrate that deputies with subnational executive ambition tend to bias legislation towards their districts, especially those from small and peripheral units. Findings reinforce the importance of incorporating ambition to legislative studies and open a new direction towards the analysis of multiple career patterns within a specific case

Abstract:

This article addresses the stability properties of a simple economy (characterized by a one-dimensional state variable) when the representative agent, confronted by trajectories that are divergent from the steady state, performs transformations in that variable in order to improve forecasts. We find that instability continues to be a robust outcome for transformations such as differencing and detrending the data, the two most typical approaches in econometrics to handle nonstationary time series data. We also find that inverting the data, a transformation that can be motivated by the agent reversing the time direction in an attempt to improve her forecasts, may lead the dynamics to a perfect-foresight path

Abstract:

This article examines dynamics in a model where agents forecast a one-dimensional state variable through ordinary least squares regressions on the lagged values of the state variable. We study the stability properties of alternative transformations of the state variable, such as taking logarithms, which the agent can adopt endogenously. Surprisingly, for the considered class of economies, we find that the transformations an econometrician would attempt are destabilizing, whereas alternative transformations that an econometrician would never consider, such as convex transformations, are stabilizing. Therefore, we ironically find that in our set-up an active agent who is concerned about learning the economy's dynamics, and who in an attempt to improve forecasting transforms the state variable using standard transformations, is more likely to deviate from the steady state than a passive agent

Abstract:

Consider a one step forward looking model where agents believe that the equilibrium values of the state variable are determined by a function whose domain is the current value of the state variable and whose range is the value for the subsequent period. An agent’s forecast for the subsequent period uses the belief, where the function that is chosen is allowed to depend on the current realization of an extrinsic random process, and is made with knowledge of the past values of the state variable but not the current value. The paper provides (and characterizes) the conditions for the existence of sunspot equilibria for the model described

Abstract:

We show that a perfect correlated equilibrium distribution of an N-person game, as defined by Dhillon and Mertens (1996), can be achieved using a finite number of copies of the strategy space as the message space

Abstract:

We reformulate the local stability analysis of market equilibria in a competitive market as a local coordination problem in a market game, where the map associating market prices to best-responses of all traders is common knowledge and well-defined both in and out of equilibrium. Initial expectations over market variables differ from their equilibrium values and are not common knowledge. This results in a coordination problem as traders use the structure of the market game to converge back to equilibrium. We analyse a simultaneous move and a sequential move version of the market game and explore the link with local rationalizability

Abstract:

This paper introduces, within the framework of a simple example, the notion of a subjective temporary equilibrium. The underlying relation linking forecasts to equilibrium values of the state variable is linear. However, agents perceive a non-linear law that governs the rate of adjustment between successive periods and forecast using linear approximations to the non-linear law of motion. This is shown to generate a non-linear law of motion for the state variable with the feature that the agent's model correctly describes the period-wise evolution of the economy. The resulting non-linear law of motion is referred to as a subjective temporary equilibrium, as its specification is determined largely by the subjective beliefs of the agents regarding the dynamics of the system. In a subjective equilibrium, agents' forecasts are generated by taking linear approximations to a correctly specified law of motion, and the forecasts may accordingly be interpreted as being boundedly rational in a first-order sense. There exist specifications that admit the possibility of cyclical behaviour

Abstract:

This paper provides conditions for the almost sure convergence of the least squares learning rule in a stochastic temporary equilibrium model, where regressions are performed on the past values of the endogenous state variable. In contrast to earlier studies (Evans and Honkapohja, 1998; Marcet and Sargent, 1989), which were local analyses, the dynamics are studied from a global viewpoint, which allows one to obtain an almost sure convergence result without employing projection facilities
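
A hedged toy version of least-squares learning on lagged values (a stylized scalar setup chosen for illustration, not the model of the paper): agents estimate beta in the perceived law x_t = beta*x_{t-1} + noise, while the realized law of motion depends on the current estimate through an assumed mapping T.

# Recursive least-squares learning in a toy self-referential model:
#   actual law:  x_t = T(beta_{t-1}) * x_{t-1} + eps_t,  T(beta) = a + b*beta.
# For |b| < 1 the learning fixed point is beta* = a / (1 - b).
import numpy as np
rng = np.random.default_rng(1)
a, b = 0.3, 0.5                    # illustrative mapping, beta* = 0.6
beta, R = 0.0, 1.0                 # initial estimate and second-moment proxy
x_prev = 1.0
for t in range(1, 200_000):
    x = (a + b * beta) * x_prev + 0.1 * rng.standard_normal()
    R += (x_prev ** 2 - R) / t                       # decreasing-gain RLS
    beta += (x_prev / (t * R)) * (x - beta * x_prev)
    x_prev = x
print(f"estimate {beta:.3f}  vs  fixed point {a / (1 - b):.3f}")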

Abstract:

Background and objective: This paper presents Alzheed, a mobile application for monitoring patients with Alzheimer's disease at day centers, as well as a set of design recommendations for the development of healthcare mobile applications. The Alzheed project was conducted at Day Center “Dorita de Ojeda”, which is focused on the care of patients with Alzheimer's disease. Materials and methods: A software design methodology based on participatory design was employed for the design of Alzheed. This methodology is both iterative and incremental and consists of two main iterative stages: evaluation of low-fidelity prototypes and evaluation of high-fidelity prototypes. Low-fidelity prototypes were evaluated by 11 of the day center's healthcare professionals (involved in the design of Alzheed), whereas high-fidelity prototypes were evaluated, using a questionnaire based on the technology acceptance model (TAM), by the same healthcare professionals plus 30 senior psychology undergraduate students uninvolved in the design of Alzheed. Results: Healthcare professional participants perceived Alzheed as extremely likely to be useful and extremely likely to be usable, whereas senior psychology undergraduate students perceived Alzheed as quite likely to be useful and quite likely to be usable. Particularly, the median and mode of the TAM questionnaire were 7 (extremely likely) for healthcare professionals and 6 (quite likely) for psychology students (for both constructs: perceived usefulness and perceived ease of use). One-sample Wilcoxon signed-rank tests were performed to confirm the significance of the median for each construct
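
A hedged sketch of the reported significance test (the ratings below are synthetic, and testing against the 7-point scale midpoint of 4 is an assumption about the intended null hypothesis):

# One-sample Wilcoxon signed-rank test for a 7-point TAM construct.
# H0: the median rating equals the scale midpoint 4; H1: it is greater.
import numpy as np
from scipy.stats import wilcoxon
ratings = np.array([7, 6, 7, 7, 5, 6, 7, 7, 6, 7, 7])   # synthetic responses
stat, p = wilcoxon(ratings - 4, alternative="greater")
print(f"W = {stat:.1f}, one-sided p = {p:.4f}")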

Abstract:

To update a public transportation origin-destination (OD) matrix, the link choice probabilities by which a user transits along the transit network are usually calculated beforehand. In this work, we reformulate the problem of updating OD matrices and simultaneously update the link proportions as an integer linear programming model based on partial knowledge of the transit segment flow along the network. We propose measuring the difference between the reference and the estimated OD matrices with linear demand deficits and excesses and simultaneously having slight deviations from the link probabilities to adjust to the observed flows in the network. In this manner, our integer linear programming model is more efficient in solving problems and is more accurate than quadratic or bilevel programming models. To validate our approach, we build an instance generator based on graphs that exhibit a property known as a "small-world phenomenon" and mimic real transit networks. We experimentally show the efficiency of our model by comparing it with an Augmented Lagrangian approach solved by a dual ascent and multipliers method. In addition, we compare our methodology with other instances in the literature
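
A hedged miniature of the deficit/excess formulation (two OD pairs, one observed arc, fixed illustrative link proportions; the paper's model additionally adjusts the link proportions and enforces integrality, both omitted in this linear relaxation):

# Estimate demand d close to a reference d0 (L1 distance via deficit and
# excess slacks) while matching an observed arc flow through fixed
# proportions p:  min sum(s+ + s-)  s.t.  d - d0 = s+ - s-,  p @ d = f_obs.
import numpy as np
from scipy.optimize import linprog
d0 = np.array([30.0, 50.0])        # outdated OD matrix (2 OD pairs)
p = np.array([0.8, 0.4])           # share of each OD pair using the arc
f_obs = 60.0                       # observed flow on the arc
# variable order: [d1, d2, s1+, s2+, s1-, s2-]
c = np.array([0, 0, 1, 1, 1, 1])   # minimize total deficit + excess
A_eq = np.array([
    [1, 0, -1, 0, 1, 0],           # d1 - s1+ + s1- = d0[0]
    [0, 1, 0, -1, 0, 1],           # d2 - s2+ + s2- = d0[1]
    [p[0], p[1], 0, 0, 0, 0],      # arc flow: p @ d = f_obs
])
b_eq = np.array([d0[0], d0[1], f_obs])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print("estimated OD demand:", res.x[:2])   # -> [50. 50.] for this toy data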

Abstract:

This paper presents an optimization approach for the Intermodal Timetable Synchronization Problem that can be used to explore the impact of holding decisions and departure flexibility in an isolated and integrated way for different scenarios. It also introduces the concept of fuzzy synchronization to protect the solutions against variability in travel times caused by external factors, such as congestion. The methodology was applied to the transit network of Metrorrey in Monterrey, Mexico, assuming three planning periods along a weekday. It was found that considering holding decisions leads to significant improvements when generating synchronous timetables at transfer stops. Furthermore, three different types of variability were simulated, and their effects on four different fuzzy synchronization functions were evaluated. Thus, we provide a tool to analyze different scenarios and to support decision-makers in the planning stage of public transport systems. The analysis can be extended considering other types of travel time variability and nonlinear synchronization functions

Abstract:

We show that firms' left-tail risk positively predicts future returns of crash insurance. We proxy crash insurance with bear spreads, an option trading strategy that profits when extreme negative returns occur. Crash insurance for high (low) left-tail risk firms earns positive (negative) returns, suggesting that the downside protection it provides is not adequately priced. Our results are mainly explained by two types of underreaction: volatility underreaction in high left-tail risk portfolios and underreaction to the persistence of left-tail risk. Disagreement partially explains our results but a risk-based approach does not
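
A hedged sketch of the crash-insurance instrument itself (strikes and prices are illustrative; the paper's portfolio construction, holding periods, and return definitions are not reproduced):

# Bear spread with puts: long a put at K_high, short a put at K_low < K_high.
# The payoff turns on as the stock falls below K_high and is capped at
# K_high - K_low, i.e., insurance against extreme negative returns.
import numpy as np
def bear_spread_payoff(S_T, K_low, K_high):
    """Terminal payoff of long put(K_high) plus short put(K_low)."""
    return np.maximum(K_high - S_T, 0.0) - np.maximum(K_low - S_T, 0.0)
S_T = np.array([60.0, 80.0, 90.0, 100.0, 110.0])   # terminal stock prices
print(bear_spread_payoff(S_T, K_low=80.0, K_high=95.0))
# -> [15. 15.  5.  0.  0.]: full payout once S_T falls to K_low or below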

Abstract:

We develop a novel method to decompose a straddle into two assets: a volatility risk asset and a jump risk asset. Using the price ratio of the jump risk asset to the straddle, we create a forward-looking measure (S-jump) that captures the stock price jump risk anticipated by the option market. We show that S-jump substantially increases before earnings announcements and strongly predicts the size and the probability of earnings-induced stock price jumps. We also find that S-jump amplifies the earnings response coefficient. Our jump risk asset captures the run-up and run-down return patterns observed for straddles around earnings announcements

Abstract:

We present an analytical framework and evidence that characterize the historical patterns of Mexico's manufacturing exports and its participation in Global Value Chains (GVCs). We use this framework to guide an empirical analysis in which we identify sectors with the highest export potential as a result of nearshoring. We also estimate the orders of magnitude of the potential impacts of this process on Mexico's manufacturing exports and GDP. Finally, we discuss factors that could have an influence on the size of these effects, including an elastic supply of skilled labor, an institutional framework that promotes contract enforcement, cost effective and reliable energy supply, and strong and widespread connectivity through transportation and communication networks

Abstract:

COVID-19 heightened women's exposure to gender-based and intimate partner violence, especially in low-income and middle-income countries. We tested whether edutainment interventions shown to successfully combat gender-based and intimate partner violence when delivered in person can be effectively delivered using social (WhatsApp and Facebook) and traditional (TV) media. To do so, we randomized the mode of implementation of an intervention conducted by an Egyptian women's rights organization seeking to support women amid COVID-19 social distancing. We found WhatsApp to be more effective in delivering the intervention than Facebook but no credible evidence of differences across outcomes between social media and TV dissemination. Our findings show little credible evidence that these campaigns affected women's attitudes towards gender or marital equality or on the justifiability of violence. However, the campaign did increase women's knowledge, hypothetical use and reported use of available resources

Abstract:

Let A be an expansive linear map in Rd. Approximation properties of shift-invariant subspaces S of L2(Rd) when they are dilated by integer powers of A are studied. Shift-invariant subspaces providing approximation order α or density order α associated to A are characterized. These characterizations impose certain restrictions on the behavior of the spectral function at the origin, expressed in terms of the concept of point of approximate continuity. The notions of approximation order and density order associated to an isotropic dilation turn out to coincide with the classical ones introduced by de Boor, DeVore and Ron. This is no longer true when A is anisotropic. In this case the A-dilated shift-invariant subspaces approximate the anisotropic Sobolev space associated to A and α. Our main results are also new when S is generated by translates of a single function. The obtained results are illustrated by some examples
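
Schematically, the classical notion recalled above (due to de Boor, DeVore and Ron) says that S provides approximation order α when, for some constant C,

\[
\operatorname{dist}\bigl(f,\, S^{h}\bigr)_{L_2(\mathbb{R}^d)} \;\le\; C\, h^{\alpha}\, \lVert f \rVert_{W_2^{\alpha}(\mathbb{R}^d)}
\qquad \text{for all } f \in W_2^{\alpha}(\mathbb{R}^d),
\quad S^{h} = \{\, s(\cdot/h) : s \in S \,\};
\]

in hedged terms, the paper's anisotropic analogue replaces the scalings by h with the integer powers A^j of the expansive map A, and the isotropic Sobolev space with an anisotropic one adapted to A and α (see the paper for the precise statement).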

Abstract:

In this article, we consider the problem of voltage regulation of a proton exchange membrane fuel cell connected to an uncertain load through a boost converter. We show that, in spite of the inherent nonlinearities in the current-voltage behavior of the fuel cell, the voltage of a fuel cell/boost converter system can be regulated with a simple proportional-integral (PI) action designed following the passivity-based control approach. The system under consideration consists of a DC-DC converter interfacing a fuel cell with a resistive load. We show that the output voltage of the converter converges to its desired constant value for all of the system's initial conditions, with convergence ensured for all positive values of the PI gains. This latter feature facilitates the usually difficult task of tuning the PI gains. An immersion and invariance parameter estimator is afterward proposed, which allows the operation of the PI passivity-based control when the load is unknown, maintaining the output voltage at the desired level. The stable operation of the overall system is proved, and the approach is validated with extensive numerical simulations considering real-life scenarios, where robust behavior in spite of load variations and the presence of noise is obtained
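
For context, a hedged schematic of the averaged fuel-cell/boost-converter model and the controller structure (signs, notation, and the generic output e are assumptions; the specific passivity-based output is the paper's design):

\[
L\,\frac{di}{dt} = -(1-u)\,v + E(i), \qquad
C\,\frac{dv}{dt} = (1-u)\,i - \frac{v}{R},
\]

where \(i\) is the inductor current, \(v\) the output voltage, \(u \in (0,1)\) the duty ratio, \(E(i)\) the fuel cell's nonlinear polarization (current-voltage) characteristic, and \(R\) the uncertain load; the PI action then takes the form \(u = -K_P\, e - K_I \int_0^t e(s)\, ds\) for a passivity-based output \(e\).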

Abstract:

In this note, we address the problem of parameter identification of nonlinear, input-affine dissipative systems. It is assumed that the supply rate, the storage, and the internal dissipation functions may be expressed as nonlinearly parameterized regression equations where the mappings (depending on the unknown parameters) satisfy a monotonicity condition; this encompasses a large class of physical systems, including passive systems. We propose to estimate the system parameters using the "power-balance" equation, which is the differential version of the classical dissipation inequality, with a new estimator that ensures global, exponential parameter convergence under the very weak assumption of interval excitation of the power-balance equation regressor. The benefits of the proposed approach are illustrated with an example
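
Schematically, the "power-balance" equation used for estimation is the differential version of the dissipation inequality written with equality: with storage S, supply rate w, and internal dissipation d, all parameterized by the unknown θ,

\[
\dot S\bigl(x(t);\theta\bigr) \;=\; w\bigl(u(t), y(t);\theta\bigr) \;-\; d\bigl(x(t);\theta\bigr),
\]

which is a regression equation that is nonlinear in θ, with the θ-dependent mappings assumed monotone as stated above.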

Abstract:

In this paper, we propose a new control scheme for a wind energy conversion system connected to a solid-state transformer-enabled distribution microgrid. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load which is connected to the distribution grid dc bus. The scheme combines a classical PI, placed in a nested-loop configuration, with a passivity-based controller. Guaranteeing stability and endowed with disturbance rejection properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power from the wind is extracted - maximising the generator efficiency. The fast response of the closed-loop system makes it possible to operate under fast-changing wind speed conditions. To assess and validate the controller performance and robustness under parameter variations, realistic simulations comparing our proposal with a classical PI scheme are included

Abstract:

In this paper we propose a new control scheme for a wind energy conversion system connected to a solid-state transformer-enabled distribution microgrid. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load which is connected to the distribution grid dc bus. The scheme combines a classical PI, placed in a nested-loop configuration, with a passivity-based controller. Guaranteeing stability and endowed with disturbance rejection properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power from the wind is extracted - maximizing the generator efficiency. The fast response of the closed-loop system makes it possible to operate under fast-changing wind speed conditions. To assess and validate the controller performance and robustness under parameter variations, realistic simulations comparing our proposal with a classical PI scheme are included

Abstract:

A controller, based on passivity, for a wind energy conversion system connected to a dc bus is proposed. The system consists of a wind turbine, a permanent magnet synchronous generator, a rectifier and a load. Guaranteeing stability and endowed with adaptive properties, the controller regulates the wind turbine angular velocity to a desired value - in particular, the set-point is selected such that the maximum power from the wind is extracted - and maximizes the generator efficiency. The fast response of the closed-loop system makes it possible to operate under fast-changing wind speed conditions. To assess the controller performance, realistic simulation results are included

Abstract:

In this note, an observer-based feedback control for tracking trajectories of the yaw and lateral velocities of a vehicle is proposed. The considered model consists of the vehicle’s longitudinal, lateral and yaw velocity dynamics together with its roll dynamics. First, an observer for the vehicle lateral velocity, roll angle and roll velocity is proposed. Its design is based on the well-known Immersion & Invariance technique and the Super-Twisting Algorithm. Tuning conditions on the observer gains are given such that the observation errors globally asymptotically converge to zero provided that the yaw velocity reference is persistently excited. Next, a feedback control law depending on the observer estimates is designed using the Output Regulation technique. It is shown that the tracking errors converge to zero as the observation errors decay. To assess the performance of the controller, numerical simulations are performed where the stable operation of the closed-loop system is verified
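
For reference, the Super-Twisting Algorithm invoked above takes the standard form (written here for a scalar sliding variable z_1; its combination with the Immersion & Invariance observer is the paper's design and is not reproduced):

\[
\dot z_1 = -k_1\, \lvert z_1 \rvert^{1/2} \operatorname{sign}(z_1) + z_2, \qquad
\dot z_2 = -k_2\, \operatorname{sign}(z_1),
\]

which, for suitably large gains \(k_1, k_2 > 0\), drives \(z_1\) and \(\dot z_1\) to zero in finite time even in the presence of perturbations with bounded derivative.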

Abstract:

The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load and one constant voltage source, which is used to form the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice as they are easier to tune and simpler than other existing model-based methods. Realistic switched-model simulations have been performed to assess the performance of the proposed controller

Abstract:

This paper deals with the problem of trajectory tracking of a class of bilinear systems with time-varying measurable disturbance, namely, systems of the form ẋ(t) = [A + Σi ui(t)Bi]x(t) + d(t). We identify, via a linear matrix inequality, a set of matrices {A, Bi} for which it is possible to ensure global tracking of (admissible, differentiable) trajectories with a simple linear time-varying PI controller. Instrumental to establish the result is the construction of an output signal with respect to which the incremental model is passive. The result is applied to the boost and the modular multilevel converter for which experimental results are given

Abstract:

This paper deals with the problem of trajectory tracking of a class of bilinear systems with time-varying measurable disturbance, namely, systems of the form ẋ(t) = [A + Σi ui(t)Bi]x(t) + d(t). A set of matrices {A, Bi} has been identified, via a linear matrix inequality, for which it is possible to ensure global tracking of (admissible, differentiable) trajectories with a simple linear time-varying PI controller. Instrumental to establish the result is the construction of an output signal with respect to which the incremental model is passive. The result is applied to the Interleaved Boost and the Modular Multilevel Converter for which experimental results are given

Abstract:

The problem of controlling small-scale wind turbines providing energy to the grid is addressed in this paper. The overall system consists of a wind turbine plus a permanent magnet synchronous generator connected to a single-phase ac grid through a passive rectifier, a boost converter, and an inverter. The control problem is challenging for two reasons. First, the dynamics of the plant are described by a highly coupled set of nonlinear differential equations. Since the range of operating points of the system is very wide, classical linear controllers may yield below par performance. Second, due to the use of a simple generator and power electronic interface, the control authority is quite restricted. In this paper we present a high performance, nonlinear, passivity-based controller that ensures asymptotic convergence to the maximum power extraction point together with regulation of the dc link voltage and grid power factor to their desired values. The performance of the proposed controller is compared via computer simulations against the industry standard partial linearizing and decoupling plus PI controllers

Abstract:

A goal in the study of dynamics on the interval is to understand the transition to positive topological entropy. There is a conjecture from the 1980s that the only route to positive topological entropy is through a cascade of period doubling bifurcations. We prove this conjecture in natural families of smooth interval maps, and use it to study the structure of the boundary of mappings with positive entropy. In particular, we show that in families of mappings with a fixed number of critical points the boundary is locally connected, and for analytic mappings that it is a cellular set

Abstract:

This study adds to creativity research by investigating the connection between employees' ruminations about life-threatening crises and their creative work behavior, with a specific focus on the mediating role of their experiences of personal life-to-work conflict and the moderating role of their resilience in this connection. Cross-sectional survey data (N = 710) gathered from employees who operate in the health-related distribution sector indicate that a key factor that underpins the connection between persistent worries about deadly crises and thwarted creativity at work is that personal life worries spill over into the work domain, but this detrimental effect is less salient if employees can bounce back from difficult situations. For organizations, this investigation reveals a critical conduit, personal-life-induced work strain, through which employees' struggles to ban negative thoughts about excruciating crises translate into a lower propensity to develop novel ideas for organizational improvement. It also reveals how organizations can mitigate this risk by making their workforce more resilient

Abstract:

Purpose - The purpose of this study is to draw from conservation of resources theory to examine how employees' experience of resource-draining interpersonal conflict might diminish the likelihood that they engage in championing behaviour. Its specific focus is on the mediating effect of their motivation to leave the organization and the moderating effect of their peer-oriented social interaction in this connection. Design/methodology/approach - The research hypotheses are empirically assessed with quantitative survey data gathered from 632 employees who work in a large Mexico-based pharmacy chain. The statistical analyses involved an application of the Process macro, which enabled concurrent estimations of the direct, mediating and moderating effects predicted by the proposed conceptual framework. Findings - Emotion-based tensions in co-worker relationships decrease employees' propensity to mobilize support for innovative ideas, because employees make plans to abandon their jobs. This mediating role of turnover intentions is mitigated when employees maintain close social relationships with their co-workers. Practical implications - For organizational practitioners, this study identifies a core explanation (i.e. employees want to quit the company) for why frustrations with emotion-based quarrels can lead to a reluctance to promote novel ideas - ideas that otherwise could add to organizational effectiveness. It also highlights how this harmful process can be avoided if employees maintain good, informal relationships with their colleagues. Originality/value - For organizational scholars, this study explicates why and when employees' experience of interpersonal conflict translates into complacent work behaviours, in the form of tarnished idea championing. It also identifies informal peer relationships as critical contingency factors that disrupt this negative dynamic

Abstract:

This research investigates how employees' perceptions of role ambiguity might inhibit their propensity to engage in organizational citizenship behaviour (OCB), with a particular focus on the potential buffering roles of two personal resources in this process: political skill and organizational identification. Survey data collected from a manufacturing organization indicate that role ambiguity diminishes OCB, but this effect is attenuated when employees are equipped with political skill and have a strong sense of belonging to their organization. The buffering role of organizational identification also is particularly strong when employees have adequate political skills, suggesting the reinforcing, buffering roles of these two personal resources. Organizations that want to foster voluntary work behaviours, even if they cannot provide clear role descriptions for their employees, should nurture adequate personal resources within their employee ranks

Abstract:

This paper investigates how employees' experience of workplace incivility may steer them away from idea championing, with a special focus on the mediating role of their desire to quit their jobs and the moderating role of their dispositional self-control. Data collected from employees who work in a large retail organization reveal that an important reason that exposure to rude workplace behaviors reduces employees' propensity to champion innovative ideas is that they make concrete plans to leave. This mediating effect is mitigated, however, when employees are equipped with high levels of self-control. For organizations, this study accordingly pinpoints desires to seek alternative employment as a critical factor by which irritations about resource-draining incivility may escalate into a reluctance to add to organizational effectiveness through dedicated championing efforts. It also indicates how this escalation can be avoided, namely, by ensuring employees have access to pertinent personal resources

Abstract:

Purpose. The purpose of this research is to examine how employees' experience of career dissatisfaction might curtail their organizational citizenship behavior, as well as how this detrimental effect might be mitigated by employees' access to valuable peer-, supervisor- and organizational-level resources. The frustrations stemming from a dissatisfactory career might be better contained in the presence of these resources, such that employees are less likely to respond to this resource-depleting work circumstance by staying away from extra-role activities

Abstract:

This study examines how employees' exposure to interpersonal conflict might reduce their creative behaviour, with a particular focus on how this negative connection might be mitigated by their access to pertinent personal and contextual resources. Using data from employees who work in the construction sector, the empirical findings reveal that interpersonal conflict thwarts creativity, but this effect is weaker when employees can more easily control their emotions, have clear job descriptions, and are satisfied with how their employer communicates with them. Empathy exhibits a weak effect. This study accordingly reveals different means through which organizations can overcome the challenge that arises when emotion-based conflicts steer employees away from creativity

Résumé:

The authors of this study draw on data from employees working in the construction sector to examine the impact that exposure to interpersonal conflict has on employees' creative behaviour. They focus in particular on a means of attenuating the negative link between interpersonal conflict and creativity: access to pertinent personal and contextual resources. The empirical results reveal that interpersonal conflict hinders creativity, but this effect is weaker when employees control their emotions more easily, receive precise job descriptions and are satisfied with the way their employer communicates with them. Empathy has only a weak effect. This study reveals the different means by which organizations can overcome the problem of emotion-based conflicts impeding employees' creativity

Abstract:

Anchored in conservation of resources theory, this study considers how employees' experience of job stress might reduce their organizational citizenship behaviors (OCB), as well as how this negative relationship might be buffered by employees' access to two personal resources (passion for work and adaptive humor) and two contextual resources (peer communication and forgiving climate). Data from a Mexican-based organization reveal that felt job stress diminishes OCB, but the effect is subdued at higher levels of the four studied resources. This study accordingly adds to extant research by elucidating when the actual experience of job stress is more or less likely to steer employees away from OCB, that is, when they have access to specific resources that hitherto have been considered direct enablers of such efforts instead of buffers of employees' negative behavioral responses to job stress

Abstract:

Purpose. The purpose of this paper is to consider how employees' perceptions of psychological contract breach, due to their sense that their organization has not kept its promises, might diminish their creative behavior, and how access to two critical personal resources (emotion regulation and humor skills) might buffer this negative relationship. Design/methodology/approach. Survey data were collected from employees in a large organization in the automobile sector. Findings. Employees' beliefs that their employer has not come through on its promises diminish their engagement in creative activities. The effect is weaker among employees who can more easily control their emotions and who use humor in difficult situations. Practical implications. For organizations, the results show that the frustrations that come with a sense of broken promises can be contained more easily to the extent that their employee bases can rely on pertinent personal resources. Originality/value. This investigation provides a more comprehensive understanding of when perceived contract breach steers employees away from productive work activities, in the form of creativity. This damaging effect is less prominent when employees possess skills that enable them to control negative emotions or can use humor to cope with workplace adversity

Abstract:

This study investigates how employees' perceptions of work overload might reduce their creative behaviours and how this negative relationship might be buffered by employees' access to three energy-enhancing resources: their passion for work, their ability to share emotions with colleagues, and their affective commitment to the organization. Data from a manufacturing organization reveal that work overload reduces creative behaviour, but the effect is weaker with higher levels of passion for work, emotion sharing, and organizational commitment. The buffering effects of emotion sharing and organizational commitment are particularly strong when they combine with high levels of passion for work. These findings indicate how organizations marked by adverse work conditions, due to excessive workloads, can mitigate the likelihood that employees avoid creative behaviours

Abstract:

Drawing from conservation of resources theory, this article investigates the relationship between job control (a critical job resource) and idea championing, as well as how this relationship may be augmented by stressful work conditions that can lead to resource losses, such as conflicting work role demands and psychological contract violations. With quantitative data collected from employees of an organization that operates in the chemical sector, this study reveals that job control increases the propensity to champion innovative ideas. This effect is especially salient when employees experience high levels of role conflict and psychological contract violations. For organizations, the results demonstrate that giving employees more control over whether they invest in championing activities will be most beneficial when those employees also face resource-draining work conditions, in the form of either incompatible role expectations or unfulfilled employer obligations

Abstract:

Based on the job demands–resources model, this study considers how employees’ perceptions of organizational politics might reduce their engagement in organizational citizenship behavior. It also considers the moderating role of two contextual resources and one personal resource (i.e., supervisor transformational leadership, knowledge sharing with peers, and resilience) and argues that they buffer the negative relationship between perceptions of organizational politics and organizational citizenship behavior. Data from a Mexican-based manufacturing organization reveal that perceptions of organizational politics reduce organizational citizenship behavior, but the effect is weaker with higher levels of transformational leadership, knowledge sharing, and resilience. The buffering role of resilience is particularly strong when transformational leadership is low, thus suggesting a three-way interaction among perceptions of organizational politics, resilience, and transformational leadership. These findings indicate that organizations marked by strongly politicized internal environments can counter the resulting stress by developing adequate contextual and personal resources within their ranks

Abstract:

Drawing from the job demands-resources model, this study considers how task conflict reduces employees' job satisfaction, as well as how the negative task conflict-job satisfaction relationship might be buffered by supervisors' transformational leadership and employees' personal resources. Using data from a large organization, the authors show that task conflict reduces job satisfaction, but this effect is weaker at higher levels of transformational leadership, tenacity, and passion for work. The buffering roles of the two personal resources (tenacity and passion for work) are particularly salient when transformational leadership is low. These findings indicate that organizations marked by task-related clashes can counter the accompanying stress by developing adequate leadership and employee resources within their ranks

Abstract:

Purpose - This article investigates how employees' perceptions of role ambiguity might increase their turnover intentions and how this harmful effect might be buffered by employees' access to relevant individual (innovation propensity), relational (goodwill trust) and organizational (procedural justice) resources. Findings reveal that role ambiguity enhances turnover intentions, but this effect diminishes at higher levels of innovation propensity, goodwill trust, and procedural justice. Uncertainty due to unclear role descriptions decreases in the presence of these resources, so employees are less likely to respond to this adverse work situation with enhanced turnover intentions

Abstract:

We add to human resource literature by investigating how the contribution of task conflict to employee creativity depends on employees' learning orientation and their goal congruence with organizational peers. We postulate a positive relationship between task conflict and employee creativity and predict that this relationship is augmented by learning orientation but attenuated by goal congruence. We also argue that the mitigating effect of goal congruence is more salient among employees who exhibit a low learning orientation. Our results, captured from employees and their supervisors in a large, Mexican-based organization, confirm these hypotheses. The findings have important implications for human resource managers who seek to foster creativity among employees

Abstract:

This study considers how employees' tenacity might enhance their propensity to engage in knowledge exchange with organizational peers, as well as how the positive tenacity-knowledge exchange relationship is invigorated by two types of role conflict: within-work and between work and family. Using data from a large Mexican organization in the logistics sector, this study shows that tenacity increases knowledge exchange, and this effect is stronger at higher levels of within-work and work-family role conflict. The invigorating role of within-work role conflict is particularly salient when work-family role conflict is high. These findings inform organizations that the application of personal energy to knowledge-enhancing activities is particularly useful when employees encounter severe workplace adversity because of conflicting role demands

Abstract:

Purpose: Drawing from conservation of resources theory and affective events theory, this article examines the hitherto unexplored relationship between employees' tenacity levels and problem-focused voice behavior, as well as how this relationship may be augmented when employees encounter adversity in relationships with peers or in the organizational climate in general. Design/Methodology/Approach: The study draws on quantitative data collected through a survey administered to employees and their supervisors in a large manufacturing organization. Findings: Tenacity increases the likelihood of speaking up about problem areas, and this relationship is strongest when peer relationships are characterized by low levels of goal congruence and trust (relational adversity) or when the organization does not support change (organizational adversity). The augmenting effect of organizational adversity on the usefulness of tenacity is particularly salient when it combines with high relational adversity, which underscores the critical role of tenacity for spurring problem-focused voice behavior when employees negatively appraise different facets of their work environment simultaneously. Implications: The results inform organizations that the allocation of personal energy to reporting organizational problems is perceived as particularly useful by employees when they encounter significant adversity in their work environments. Originality/Value: This study extends research on voice behavior by providing a better understanding of the likelihood that employees speak up about problem areas, according to their levels of tenacity, and explicating when this influence of tenacity tends to be more prominent

Abstract:

This conceptual article centers on the relationship between intergenerational strategy involvement and family firms' innovation pursuits, a relationship that may be contingent on the nature of the interactions among family members who belong to different generations. The focus is the potential contingency roles of two conflict management approaches (cooperative and competitive) and two dimensions of social capital (goal congruence and trust), in the context of intergenerational interactions. The article theorizes that although cooperative conflict management may invigorate the relationship between intergenerational strategy involvement and innovation pursuits, competitive conflict management likely attenuates it. Moreover, it proposes that both functional and dysfunctional roles for social capital might arise with regard to the contribution of intergenerational strategy involvement to family firms' innovation pursuits. This article thus provides novel insights into the opportunities and challenges that underlie the contributions of family members to their firm's innovative aspirations when more than one generation participates in the firm's strategic management

Abstract:

This study investigates how employees’ perceptions of adverse work conditions might discourage innovative behavior and the possible buffering roles of relational resources. Data from a Mexican-based organization reveal that perceptions of work overload negatively affect innovative behavior, but this effect gets attenuated with greater knowledge sharing and interpersonal harmony. Further, although perceived organizational politics lead to lower innovative behavior when relational resources are low, they increase this behavior when resources are high. Organizations which seek to adopt innovative ideas in the presence of adverse work conditions thus should create relational conduits that can mitigate the associated stress

Abstract:

We develop and test a motivational framework to explain the intensity with which individuals sell entrepreneurial initiatives within their organizations. Initiative selling efforts may be driven by several factors that hitherto have not been given full consideration: initiative characteristics, individuals’ anticipation of rewards, and their level of dissatisfaction. On the basis of a survey in a mail service firm of 192 managers who proposed an entrepreneurial initiative, we find that individuals’ reported intensity of their selling efforts with respect to that initiative is greater when they (1) believe that the organizational benefits of the initiative are high, (2) perceive that the initiative is consistent with current organizational practices (although this effect is weak), (3) believe that their immediate organizational environment provides extrinsic rewards for initiatives, and (4) are satisfied with the current organizational situation. These findings extend previous expectancy theory-based explanations of initiative selling (by considering the roles of initiative characteristics and that of initiative valence for the proponent) and show the role of satisfaction as an important motivational driver for initiative selling

Abstract:

In this study, we examine the role of individuals' commitment in small and medium-sized firms. More specifically, we argue that employees will commit themselves to their firm based on their current work status in the firm, their perception of the organizational climate, and the firm's entrepreneurial orientation. We also examine how individuals' commitment affects the actual effort they exert vis-à-vis their firm. The study's hypotheses are tested by applying quantitative analyses to survey data collected from 863 Mexican small and medium-sized businesses. We found that individuals' position and tenure in the firm, their perception of psychological safety and meaningfulness, and the firm's entrepreneurial orientation all are positively related to organizational commitment. We also found a positive relationship between organizational commitment and effort. Finally, our findings show that organizational commitment mediates the relationship between many of the predictor variables and effort. We discuss the limitations and implications of our findings and provide directions for future research

Abstract:

Participation in social programs, such as clubs and other social organizations, results from a process in which an agent learns about the requirements, benefits, and likelihood of acceptance related to a program, applies to be a participant, and, finally, is accepted or rejected. We propose a model of this participation process and provide an application of the model using data from a social program in Mexico. Our empirical analysis illustrates that decisions at each stage of the process are responsive to expectations about the decisions and outcomes at the subsequent stages and that knowledge about the program can have a significant impact on participation outcomes. JEL codes: I38, D83. Keywords: program participation, take-up, information acquisition, targeting, undercoverage, leakage
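
Schematically (the notation is ours, not the authors'), the staged process implies that overall take-up factors as

$\Pr(\text{participate}) = \Pr(\text{learn}) \cdot \Pr(\text{apply} \mid \text{learn}) \cdot \Pr(\text{accept} \mid \text{apply}),$

so information frictions at the first stage can depress participation even when application and acceptance are very likely.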

Abstract:

The judicialization of social rights is a reality in Latin America; however, little has been said about this phenomenon in Mexico or about the role of the Mexican Supreme Court (Suprema Corte de Justicia de la Nación, SCJN) in advancing an effective guarantee of the right to health. In several countries, courts have adopted either an active role in defining health policy and protecting the right to health or a passive one. Studying the ways in which health-related cases are resolved in Mexico enables us to evaluate the role of the SCJN when ruling for or against this right. This article aims to determine whether the SCJN, through the analysis of its rulings, is or could be a catalyst for change in the healthcare system. This article reports on the results of a systematic content analysis of twenty-two SCJN rulings, examining the claimants, their claims as understood by the SCJN, and the elements considered by the justices in their decision-making process. The analysis of the way in which the SCJN ruled in these cases demonstrates that the SCJN must be uniform and consistent in applying constitutional and conventional principles to improve the predictability of its decisions and to be innovative in responding to the new requirements posed by economic, social, and cultural rights. The SCJN should increase its possibilities of promoting structural reforms where laws or policies are inconsistent with constitutional or conventional standards by maintaining a middle ground with respect to the executive and legislative branches

Resumen:

Objective. To learn the opinions of key stakeholders regarding the process of judicialization of the right to health protection in Mexico. Materials and methods. Thirty semi-structured interviews were conducted with representatives of the judiciary (PJ), the legislature (PL), the Health Sector (SS), the pharmaceutical industry, academia and civil society organizations (OSC), from May 2017 to August 2018, at different locations in Mexico City. The recordings were transcribed and their content was analyzed according to categories of interest. Results. Positions regarding the phenomenon of the judicialization of the right to health differ. There are tensions between those who see its potential effect as an agent of change for the sector and those who perceive it as an illegitimate interference by the judiciary. There is no coordinated strategy among the sectors to promote change in the health sector. Conclusions. Positions regarding the phenomenon of judicialization in Mexico differ. There are tensions between those who see its potential effect as an agent of change for the sector and those who perceive it as an illegitimate interference by the judiciary in the health sector. Others argue that there is no coordinated strategy among the sectors to promote change in the health sector. All agree that judicialization in Mexico is a reality

Abstract:

Objective. Understand what Mexican key stakeholders think about the judicialization of the right to health in Mexico. Materials and methods. 30 semi-structured interviews were conducted at different settings in Mexico City with representatives of the judiciary, legislative power, Health Sector (HS), pharmaceutical industry, academia and nongovernmental organizations from May 2017 to August 2018. Interviews were transcribed and analyzed based on different categories of interest. Results. There are different opinions regarding judicialization of the right to health. Tensions exist between those who see its potential effect as a game changer for the HS and those who perceive it as an illegitimate interference of the judiciary. There is no coordinated strategy between sectors to promote change in the HS. Conclusions. There are different opinions regarding judicialization of the right to health in Mexico. There are tensions between those who see its potential effect as a game changer for the HS and those who perceive it as an illegitimate interference of the judiciary. Others argue that there is no coordinated strategy between sectors to promote change in the HS. All agree that judicialization in Mexico is a reality

Resumen:

The declining breastfeeding rate in Mexico is a public health problem. In this article we discuss a regulatory approach, Performance-Based Regulation, and its application to improving breastfeeding rates. This approach obliges industry to take responsibility for the lack of breastfeeding and its consequences. It is considered a feasible strategy for this case because the breast-milk substitutes market has an oligopolistic structure, in which it is relatively easy to establish each market participant's contribution to the problem; it affects a defined population group; it has a regulatory objective that can be easily evaluated; and sanctions can be defined under objective criteria. For its application we recommend: modifying public policy, establishing concertation agreements with industry, setting dissuasive sanctions, strengthening supervision mechanisms, and aligning the foregoing with the International Code of Marketing of Breast-milk Substitutes

Abstract:

The decreasing breastfeeding rate in Mexico is of public health concern. In this paper we discuss an innovative regulatory approach, Performance-Based Regulation, and its application to improve breastfeeding rates. This approach forces industry to take responsibility for the lack of breastfeeding and its consequences; failure to comply with the targets results in financial penalties. Applying performance-based regulation as a strategy to improve breastfeeding is feasible because the breast-milk substitutes market is an oligopoly, hence it is easy to identify the contribution of each market participant; the regulation's target population is clearly defined; it has a clear regulatory standard that can be easily evaluated; and sanctions for infringement can be defined under objective parameters. Recommendations: modify public policy, establish concertation agreements with the industry, create dissuasive sanctions, strengthen enforcement activities and coordinate every action with the International Code of Marketing of Breast-milk Substitutes

Abstract:

What would have happened if a relatively looser fisheries policy had been implemented in the European Union (EU)? Using Bayesian methods, a Dynamic Stochastic General Equilibrium (DSGE) model is estimated to assess the impact of the European Common Fisheries Policy (CFP) on the economic performance of a Galician (northwest Spain) fleet highly dependent on the EU Atlantic southern stock of hake. Our counterfactual analysis shows that if a less effective CFP had been implemented during the period 1986-2012, fishing opportunities would have increased, leading to an increase in labour hours of 4.87%. However, this increase in fishing activity would have worsened the profitability of the fleet, with wages and the rental price of capital dropping by 6.79% and 0.88%, respectively. Welfare would also have been negatively affected since, in addition to the increase in hours worked, consumption would have fallen by 0.59%

Abstract:

Science is crucial for evidence-based decision-making. Public trust in scientists can help decision makers act on the basis of the best available evidence, especially during crises. However, in recent years the epistemic authority of science has been challenged, causing concerns about low public trust in scientists. We interrogated these concerns with a preregistered 68-country survey of 71,922 respondents and found that in most countries, most people trust scientists and agree that scientists should engage more in society and policymaking. We found variations between and within countries, which we explain with individual- and country-level variables, including political orientation. While there is no widespread lack of trust in scientists, we cannot discount the concern that lack of trust in scientists by even a small minority may affect considerations of scientific evidence in policymaking. These findings have implications for scientists and policymakers seeking to maintain and increase trust in scientists

Resumen:

To improve breastfeeding practices it is necessary to strengthen actions of promotion, protection and support, and to establish a multisectoral national policy that includes the indispensable elements of design, implementation, monitoring and evaluation of public programs and policies; funding for actions and research; advocacy and the building of political will; and the promotion of breastfeeding, all coordinated at a central level. Mexico has recently initiated a process of reforms leading to the formation of a National Breastfeeding Strategy (ENLM). This strategy is the result of the available scientific evidence on the benefits of breastfeeding for population health and the development of human capital, as well as of the alarming data on its deterioration. The comprehensive implementation of an ENLM that includes the establishment of a National Operating Committee, intra- and intersectoral coordination of actions, the setting of clear goals, the monitoring and penalization of violations of the International Code of Marketing of Breast-milk Substitutes, and the financing of these actions is the great pending responsibility of the country's public health agenda

Abstract:

Evidence strongly supports that improving breastfeeding practices requires strengthening actions of promotion, protection and support. To achieve this goal, it is necessary to establish a multisectoral national policy that includes elements such as the design, implementation, monitoring and evaluation of programs and policies, funding for actions and research, advocacy to develop political will, and the promotion of breastfeeding from the national to the municipal level, all coordinated by a central level. Only now has Mexico initiated a reform process to establish a National Strategy for Breastfeeding Action. This strategy is the result not only of consistent scientific evidence on the clear and strong benefits of breastfeeding for population health and the development of human capital, but also of the alarming data on the deterioration of breastfeeding practices in the country. The comprehensive implementation of the National Strategy for Breastfeeding Action, including the establishment of a national committee, intra- and inter-sectoral coordination of actions, the setting of clear goals and the monitoring of the International Code of Marketing of Breast-Milk Substitutes, is the pending responsibility of the country's public health agenda

Abstract:

Climate change is a core issue for sustainable development and exacerbates inequality. However, in both the WTO and the climate regime, disagreements over differential treatment have hampered efforts to address international inequalities in a way that facilitates effective responses to global issues. Sustainable globalization requires bridging the disparities between developed and developing countries in their capacities to address such matters of global concern. However, differential treatment now functions as a distraction from the global issues it was supposed to address. Cognitive biases distort perceptions regarding the climate crisis and the value of multilateralism. To what extent can cognitive science inform decision making by States? How do we change paradigms (cognitive background assumptions), which limit the options that decision-making elites in developed and developing countries perceive as useful and worth considering? To what extent do cognitive biases and cultural cognition create a barrier to multilateral cooperation on issues of global concern?

Abstract:

According to the 2018 Intergovernmental Panel on Climate Change (IPCC) report, if global warming continues to increase at the current rate (the so-called business-as-usual scenario), global temperatures are likely to reach 1.5 °C between 2030 and 2052. Limiting global warming to 1.5 °C is affordable and feasible but requires immediate action. Climate-related risks for natural and human systems depend on the magnitude and rate of warming, geographic location, levels of development and vulnerability, and on the choices and implementation of adaptation and mitigation options. Climate-related risks to health, livelihoods, food security, water supply, human security, and economic growth are projected to increase with global warming of 1.5 °C and increase further with 2 °C. Estimates suggest that current nationally stated mitigation ambitions, as submitted under the Paris Agreement, will result in global warming of about 3 °C by 2100, with warming continuing afterward. The climate risks that we set out below are very serious. The worst-case scenario is catastrophic. Additionally, climate risks create challenges and opportunities for the financial industry, particularly insurance. In this chapter we first set out the risks posed by climate change, and then we seek some possible solutions to reduce emissions (often referred to as climate change mitigation) and to adapt to climate change. We show that financial markets can be harnessed to reduce emissions through market mechanisms. In particular, markets can put a price on risk. That allows insurance/reinsurance companies to step in with appropriate solutions and create incentives for emissions reductions and climate change adaptation. Emissions reductions will reduce the risk of catastrophic climate change. However, some climate change is already inevitable. Therefore, adaptation is essential

Abstract:

The effects of accelerating climate change will have a destabilizing impact on trade negotiations, particularly for the worst-affected developing countries. The climate crisis will make it more difficult to grant concessions in crucial areas, such as agriculture and intellectual property rights, because of its effects on agricultural yields and the increased need for technology to adapt to a warming climate and reduce greenhouse gas emissions. Rising sea levels, droughts, floods, and killer heat waves will provoke mass migration, with impacts on domestic politics that make trade concessions more difficult. In this context, multilateral trade negotiations are unlikely to advance in a significant way and megaregional trade agreements will become increasingly difficult to join. The result will be a warming world that is divided between those included in and those excluded from the megaregional trade regime. This will also hamper efforts to slow and to adapt to the climate crisis, given the key role that international trade plays in addressing both

Abstract:

The Appellate Body is considered the jewel in the crown of the WTO dispute settlement system. However, since blocking the re-appointment of Jennifer Hillman to the Appellate Body, the United States has become increasingly assertive in its efforts to control judicial activism at the WTO. This was a hot topic in the corridors at the eleventh WTO Ministerial Conference in Buenos Aires. This article examines judicial activism in the Appellate Body and discusses the efforts of the United States to constrain the Appellate Body in this context. It also analyses US actions and proposals regarding the dispute settlement systems of NAFTA, in order to place the WTO debate in a wider context. It concludes that reforms are necessary to break the negative feedback loop between deadlock in multilateral trade negotiations and judicial activism

Abstract:

In two early WTO cases, the Appellate Body found a failure to engage in negotiations to be arbitrary or unjustifiable discrimination under the GATT Article XX chapeau. Subsequent jurisprudence has not applied a negotiation requirement. Instead, it analyzes whether discrimination is arbitrary or unjustifiable by focusing on the cause of the discrimination, or the rationale put forward to explain its existence, which would exclude a duty to negotiate in many circumstances. The issue of whether there is a duty to negotiate is a systemic issue for international economic law. The Article XX chapeau language appears in other WTO agreements and in other international economic law treaties, including those that address environmental protection, regional trade and international investment. This article argues that there is no such duty in WTO law

Abstract:

The purpose of multilateral disciplines on subsidies is to avoid trade distortions, in order to increase production efficiencies through competition. However, this objective may be defeated due to defects in the structure of the WTO Agreement on Subsidies and Countervailing Measures and the resulting interpretations of WTO tribunals in cases involving clean energy subsidies. These defects, together with inefficient design of energy markets, could slow the transition to clean energy sources. However, the necessary reforms to the multilateral subsidies regime and energy markets are unlikely to be implemented any time soon, in the absence of a successful formula for multilateral negotiations. In this environment, the private sector will have to take the lead in making the transition to clean energy sources, in order to mitigate the disastrous effects of climate change to the extent that this goal remains attainable

Abstract:

Mexican energy reforms open the energy sector to foreign participation via different types of contracts, some of which may qualify as investments under North American Free Trade Agreement (NAFTA) Chapter 11. Mexican NAFTA reservations exclude some Mexican regulation from the scope of application of specific obligations in Chapter 11, such as those regarding performance requirements, most-favoured-nation treatment, and national treatment. However, Mexico’s legislative restrictions on foreign investors’ right to pursue investor-state arbitration are not covered by its NAFTA reservations and should not affect access to NAFTA Chapter 11 and Mexico cannot invoke its domestic laws to justify a violation of its international obligations. Moreover, Mexico’s reservations do not prevent the application of obligations regarding fair and equitable treatment and expropriation

Abstract:

This article analyzes how International Investment Agreements (IIAs) might constrain the ability of governments to adopt climate change measures. This article will consider how climate change measures can either escape the application of IIA obligations or be justified under exceptions. First, this article considers the role of treaty structure in preserving regulatory autonomy. Then, it analyzes the role that general scope provisions can play in excluding environmental regulation from the scope of application of IIAs. Next, this article will consider how the limited incorporation of environmental exceptions into IIAs affects their interpretation and application in cases involving environmental regulation. The article then analyzes non-discrimination obligations, the minimum standard of treatment for foreign investors and obligations regarding compensation for expropriation. This analysis shows that tribunals can exclude environmental regulation from the scope of application of specific obligations as well

Abstract:

This article analyzes how treaty structure affects regulatory autonomy by shifting the point in a treaty in which tribunals address public interest regulation. The article also analyzes how trade and investment treaties use a variety of structures that influence their interpretation and the manner in which they address public interest regulation. WTO and investment tribunals have addressed public interest regulation in provisions regarding a treaty’s scope of application, obligations and public interest exceptions. The structure of treaties affects a tribunal’s degree of deference to regulatory choices. In treaties that do not contain general public interest exceptions, tribunals have excluded public interest regulation from the scope of application of the treaty as a whole or the scope of application of specific obligations. If treaty parties wish to preserve a greater degree of regulatory autonomy, they can limit the general scope of application of a treaty, or the scope of application of specific obligations, which places the burden of proof on the complainant. In cases where the complexity of the facts or the law make the burden of proof difficult to meet, this type of treaty structure makes it more difficult to prove treaty violations and thus preserves regulatory autonomy to a greater degree

Abstract:

Debates between developing and developed countries over access to technology to mitigate or adapt to climate change tend to overlook the importance of plant varieties. Climate change will increase the importance of developing new plant varieties that can adapt to changing climatic conditions. This article compares Intellectual Property Rights (IPRs) for plant varieties in the World Trade Organization (WTO) Trade-Related Aspects of Intellectual Property (TRIPS) Agreement, the UPOV Convention and the Convention on Biological Diversity (CBD). It concludes that TRIPS Article 27.3(b) provides an appropriate degree of flexibility regarding the policy options available to confront climate change

Abstract:

Multilingualism is a sensitive and complex subject in a global organisation such as the World Trade Organization (WTO). In the WTO legal texts, there is a need for full concordance, not simply translation. This article begins with an overview of the issues raised by multilingual processes at the WTO in the negotiation, drafting, translation, interpretation and litigation phases. It then compares concordance procedures in the WTO, European Union and United Nations and proposes new procedures to prevent and correct linguistic discrepancies in the WTO legal texts. It then categorises linguistic differences among the WTO legal texts and considers the suitability of proposed solutions for each category. The conclusion proposes an agenda for further work at the WTO to improve best practices based on the experience of the WTO and other international organisations and multilingual governments

Abstract:

This article examines three issues: (1) the use, over time, of facemasks in a public setting to prevent the spread of a respiratory disease for which the mortality rate is unknown; (2) the difference between the responses of male and female subjects in a public setting to unknown risks; and (3) the effectiveness of mandatory and voluntary public health measures in a public health emergency. The use of facemasks to prevent the spread of respiratory diseases in a public setting is controversial. At the height of the influenza epidemic in Mexico City in the spring of 2009, the federal government of Mexico recommended that passengers on public transport use facemasks to prevent contagion. The Mexico City government made the use of facemasks mandatory for bus and taxi drivers, but enforcement procedures differed for these two categories. Using an evidence-based approach, we collected data on the use of facemasks over a 2-week period. In the specific context of the Mexico City influenza outbreak, these data showed that mask usage rates mimicked the course of the epidemic and that there was a gender difference in compliance rates among metro passengers. Moreover, there was no significant difference in compliance between mandatory and voluntary public health measures: the effect of the mandatory measures was diminished by insufficiently severe penalties, a lack of market forces to create compliance incentives, and political influence sufficient to weaken enforcement. Voluntary compliance was diminished by lack of trust in the government

Abstract:

This article analyzes several unresolved issues in World Trade Organization (WTO) law that may affect the WTO-consistency of measures that are likely to be taken to address climate change. How should the WTO deal with environmental subsidies under the General Agreement on Tariffs and Trade (GATT), the Agreement on Agriculture and the Subsidies and Countervailing Measures (SCM) Agreement? Can the general exceptions in GATT Article XX be applied to other agreements in Annex 1A? Are processing and production methods relevant to determining the issue of ‘like products’ in GATT Articles I and III, the SCM Agreement and the Antidumping Agreement and the TBT Agreement? What is the scope of paragraphs b and g in GATT Article XX and the relationship between these two paragraphs? What is the relationship between GATT Article XX and multilateral environmental agreements in the context of climate change? How should Article 2 of the TBT Agreement be interpreted and applied in the context of climate change? The article explores these issues

Resumen:

This paper applies the signal extraction technique to deseasonalize two observed time series. In both cases, the ARMA structure of the series is explicitly considered. For one of the series a clearly acceptable deseasonalization is obtained, but the second leads to the conclusion that not every apparently seasonal series should be deseasonalized

Abstract:

This paper applies the theory of signal extraction to carry out the deseasonalization of two observed time series. For both series, we explicitly consider the ARMA model simplification and the estimation of the series components. For one series we obtain a reasonable deseasonalization, but the other case allows us to conclude that not every apparently seasonal series should be deseasonalized
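
As a loose illustration of model-based seasonal signal extraction (a structural state-space decomposition with statsmodels rather than the paper's exact ARMA-based procedure; the series is synthetic):

import numpy as np
import statsmodels.api as sm

# Synthetic monthly series: trend + stable seasonality + noise (illustrative
# only; the paper works with observed series and explicit ARMA components).
rng = np.random.default_rng(0)
t = np.arange(144)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 144)

# Basic structural model: local linear trend plus a period-12 seasonal.
model = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12)
res = model.fit(disp=False)

# Deseasonalize by removing the smoothed seasonal component.
deseasonalized = y - res.seasonal.smoothed
print(deseasonalized[:6])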

Resumen:

This paper reports on the state and geographical evolution of the social sciences through the process of deconcentration in Mexico of the members of the National System of Researchers (SNI). The questions to be answered, which serve as dimensions of analysis, are: What is the distribution of researchers across the country? How are they distributed by sex across the country's geographical regions? What is the composition by SNI level in each zone? Have there been changes in the disciplinary composition of this area of knowledge and, if so, which ones?

Abstract:

In this paper, the state and geographical evolution of the social sciences is examined through the deconcentration process of the members of the National System of Researchers in Mexico. The questions to be answered, which serve as dimensions of analysis, are: What is the geographical distribution of researchers across the country? How are they distributed by gender and academic level? What is the composition of the National System of Researchers by level in each area? Are there changes in the disciplinary composition of this area of knowledge and, if so, which ones?

Resumen:

The authors offer a report showing the deconcentration of the members of the National System of Researchers, who were previously located predominantly in the Metropolitan Zone of Mexico City, and how the various regions of the country show an ever greater participation in original, high-quality scientific and technological research

Abstract:

This article examines a calculus graphing schema and the triad stages of schema development from Action-Process-Object-Schema (APOS) theory. Previously, the authors studied the underlying structures necessary for students to process concepts and enrich their knowledge, thus demonstrating various levels of understanding via the calculus graphing schema. This investigation built on this previous work by focusing on the thematization of the schema with the intent to expose those possible structures acquired at the most sophisticated stages of schema development. Results of the analyses showed that these successful calculus students, who appeared to be operating at varying stages of development of their calculus graphing schemata, illustrated the complexity involved in achieving thematization of the schema, thus demonstrating that thematization was possible

Abstract:

We classify all planar four-body central configurations where two pairs of the bodies are on parallel lines. Using cartesian coordinates, we show that the set of four-body trapezoid central configurations with positive masses forms a two-dimensional surface where two symmetric families, the rhombus and isosceles trapezoid, are on its boundary. We also prove that, for a given position of the bodies, in some cases a specific order of the masses determines the geometry of the configuration, namely acute or obtuse trapezoid central configuration. We also prove the existence of non-symmetric trapezoid central configurations with two pairs of equal masses
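
For reference, the standard defining condition (not specific to this paper): a planar configuration $q_1, \dots, q_4$ with masses $m_1, \dots, m_4$ and center of mass $c$ is central when

$\sum_{j \ne i} \dfrac{m_j (q_j - q_i)}{\lVert q_j - q_i \rVert^{3}} = -\lambda\,(q_i - c), \qquad i = 1, \dots, 4,$

for a common constant $\lambda > 0$; the trapezoid case then restricts two pairs of the $q_i$ to parallel lines.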

Abstract:

Let T be a second-order arithmetical theory, Λ a well-order, λ < Λ and X ⊆ ℕ. We use [λ|X]_T^Λ φ as a formalization of "φ is provable from T and an oracle for the set X, using ω-rules of nesting depth at most λ". For a set of formulas Γ, define predicative oracle reflection for T over Γ (Pred-O-RFN_Γ(T)) to be the schema asserting that, if X ⊆ ℕ, Λ is a well-order and φ ∈ Γ, then for all λ < Λ, [λ|X]_T^Λ φ → φ. In particular, define predicative oracle consistency (Pred-O-Cons(T)) as Pred-O-RFN_{0=1}(T). Our main result is as follows. Let ATR₀ be the second-order theory of Arithmetical Transfinite Recursion, RCA₀* be Weakened Recursive Comprehension and ACA be Arithmetical Comprehension with Full Induction. Then ATR₀ = RCA₀* + Pred-O-Cons(RCA₀*) = RCA₀* + Pred-O-RFN_{Π¹₂}(ACA). We may even replace RCA₀* by the weaker ECA₀, the second-order analogue of Elementary Arithmetic. Thus we characterize ATR₀, a theory often considered to embody Predicative Reductionism, in terms of strong reflection and consistency principles

Abstract:

In the generalized Russian cards problem, the three players Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of a+b+c cards. Players only know their own cards and what the deck of cards is. Alice and Bob are then required to communicate their hand of cards to each other by way of public messages. For a natural number k, the communication is said to be k-safe if Cath does not learn whether or not Alice holds any given set of at most k cards that are not Cath’s, a notion originally introduced as weak k-security by Swanson and Stinson. An elegant solution by Atkinson views the cards as points in a finite projective plane. We propose a general solution in the spirit of Atkinson’s, although based on finite vector spaces rather than projective planes, and call it the ‘geometric protocol’. Given arbitrary c, k>0, this protocol gives an informative and k-safe solution to the generalized Russian cards problem for infinitely many values of (a,b,c) with b=O(ac). This improves on the collection of parameters for which solutions are known. In particular, it is the first solution which guarantees k-safety when Cath has more than one card
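
As the abstract notes, Atkinson's solution views the cards as points of a finite projective plane. The two definitions involved, informativeness and safety for k = 1, can be checked by brute force on the classic seven-card case (a, b, c) = (3, 3, 1), where Alice announces the seven lines of the Fano plane (her hand being one of them). The checker below is only an illustration of the definitions, not the paper's geometric protocol:

from itertools import combinations

def informative(announcement, deck, b):
    """Bob, holding any b-subset disjoint from Alice's actual hand, must be
    able to identify Alice's hand uniquely from the announcement."""
    for alice in announcement:
        rest = [x for x in deck if x not in alice]
        for bob in combinations(rest, b):
            bobset = set(bob)
            candidates = [h for h in announcement if not (h & bobset)]
            if len(candidates) != 1:
                return False
    return True

def one_safe(announcement, deck, c):
    """Cath, holding any c-subset, must never learn who owns a card she
    does not hold (the k = 1 case of weak k-security)."""
    for cath in combinations(deck, c):
        cathset = set(cath)
        candidates = [h for h in announcement if not (h & cathset)]
        if not candidates:
            continue  # this Cath hand is impossible given the announcement
        for x in deck:
            if x in cathset:
                continue
            holders = sum(x in h for h in candidates)
            if holders == 0 or holders == len(candidates):
                return False  # Cath deduces that Bob (resp. Alice) holds x
    return True

# The seven lines of the Fano plane; the announcement says
# "my hand is one of these seven triples".
fano = [frozenset(t) for t in ((0, 1, 2), (0, 3, 4), (0, 5, 6),
                               (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5))]
deck = range(7)
print(informative(fano, deck, 3), one_safe(fano, deck, 1))  # True True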

Abstract:

In the generalized Russian cards problem, Alice, Bob and Cath draw a, b and c cards, respectively, from a deck of size a + b + c. Alice and Bob must then communicate their entire hand to each other, without Cath learning the owner of a single card she does not hold. Unlike many traditional problems in cryptography, however, they are not allowed to encode or hide the messages they exchange from Cath. The problem is then to find methods through which they can achieve this. We propose a general four-step solution based on finite vector spaces, and call it the “colouring protocol”, as it involves colourings of lines. Our main results show that the colouring protocol may be used to solve the generalized Russian cards problem in cases where a is a power of a prime, c = O(a^2) and b = O(c^2). This improves substantially on the set of parameters for which solutions are known to exist; in particular, it had not been shown previously that the problem could be solved in cases where the eavesdropper has more cards than one of the communicating players

Abstract:

We find a new and simple inversion formula for the 2D Radon transform (RT) through direct use of the shearlet system and of well-known properties of the RT. Since the continuum theory of shearlets has a natural translation to the discrete theory, we also obtain a computable algorithm that recovers a digital image from noisy samples of the 2D Radon transform while preserving edges. A very well-known RT inversion in the applied harmonic analysis community is the biorthogonal curvelet decomposition (BCD). The BCD uses an intertwining relation of differential (unbounded) operators between functions in Euclidean and Radon domains. Hence the BCD is ill-posed since the inverse is unstable in the presence of noise. In contrast, our inversion method makes use of an intertwining relation of integral transformations with very smooth kernels and compact support away from the origin in the Fourier domain, i.e. bounded operators. Therefore, we obtain, at least, the same asymptotic behavior of mean-square error as the BCD (and its shearlet version) for the class of cartoon-like functions. Numerical simulations show that our inverse surpasses, by far, the inverse based on the BCD. Our algorithm uses only fast transformations
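
The shearlet-based inversion cannot be reproduced from the abstract alone; for orientation, a standard filtered back projection baseline for the same noisy-inversion problem can be run with scikit-image:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)          # square test image
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=theta)                 # forward Radon transform
noisy = sinogram + 0.5 * np.random.default_rng(0).normal(size=sinogram.shape)

# Filtered back projection with a Hann filter; a plain ramp filter amplifies
# high-frequency noise, the instability the abstract attributes to
# unbounded intertwining operators.
recon = iradon(noisy, theta=theta, filter_name="hann")
print("MSE:", np.mean((recon - image) ** 2))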

Abstract:

We built an infrared vision system to be used as the real-time 3D motion sensor in a prototype low-cost, high-precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. In this paper we present our choice of technology, including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high-bandwidth computer communication with the cameras and real-time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high-precision, real-time requirements of neuronavigation systems, we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications
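
As a sketch of the triangulation step mentioned above (the projection matrices and rig geometry below are hypothetical, not the prototype's calibration), the standard linear DLT method recovers a 3D marker position from a stereo pair:

import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve A X = 0 for the homogeneous 3D
    point X that best explains pixel observations x1, x2 under the 3x4
    projection matrices P1, P2 (least squares via SVD)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest value
    return X[:3] / X[3]           # dehomogenize

# Hypothetical stereo rig: shared intrinsics K, cameras 10 cm apart along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.05, -0.02, 1.0, 1.0])   # marker 1 m in front of rig
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))           # ~ [0.05, -0.02, 1.0]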

Abstract:

This paper evaluates the finite sample performance of two methods for selecting the power transformation when applying seasonal adjustment with the X-13ARIMA-SEATS package: the automatic method, based on the Akaike Information Criterion (AIC), and Guerrero's method, which chooses the power transformation by minimizing a coefficient of variation. For this purpose, we generate time series with different Data Generating Processes, considering traditional Monte Carlo experiments as well as additive and multiplicative time series with linear and time-varying deterministic trends. We also illustrate the performance of both approaches with an empirical application, by seasonally adjusting the Mexican Global Economic Activity Indicator and its components. The results of different statistical tests indicate that Guerrero's method is more adequate than the automatic one. We conclude that using Guerrero's method generates better results when seasonally adjusting time series with the X-13ARIMA-SEATS program
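
A compact sketch of Guerrero's criterion as it is usually implemented (grouping the series into seasonal-period blocks; the paper's implementation details may differ):

import numpy as np

def guerrero_lambda(y, period=12, lambdas=np.linspace(-1, 2, 61)):
    """Select a Box-Cox power transformation via Guerrero's method: pick the
    lambda minimizing the coefficient of variation of s_h / m_h**(1 - lambda)
    across non-overlapping blocks of one seasonal period each."""
    n = len(y) // period
    groups = np.reshape(y[: n * period], (n, period))
    m = groups.mean(axis=1)
    s = groups.std(axis=1, ddof=1)
    best, best_cv = None, np.inf
    for lam in lambdas:
        ratio = s / m ** (1 - lam)
        cv = ratio.std(ddof=1) / ratio.mean()
        if cv < best_cv:
            best, best_cv = lam, cv
    return best

# Multiplicative toy series: variability grows with the level, so the
# selected lambda should land near 0 (a log transformation).
rng = np.random.default_rng(0)
t = np.arange(240)
y = np.exp(0.01 * t + 0.3 * np.sin(2 * np.pi * t / 12)
           + rng.normal(0, 0.05, 240))
print(guerrero_lambda(y))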

Abstract:

This article presents a new method to reconcile direct and indirect deseasonalized economic time series. The proposed technique uses a Combining Rule to merge, in an optimal manner, the directly deseasonalized aggregated series with its indirectly deseasonalized counterpart. The last-mentioned series is obtained by aggregating the seasonally adjusted disaggregates that compose the aggregated series. This procedure leads to adjusted disaggregates that verify Denton’s movement preservation principle relative to the originally deseasonalized disaggregates. First, we use as preliminary estimates the directly deseasonalized economic time series obtained with the X-13ARIMA-SEATS program applied to all the disaggregation levels. Second, we contemporaneously reconcile the aforementioned seasonally adjusted disaggregates with its seasonally adjusted aggregate, using Vector Autoregressive models. Then, we evaluate the finite sample performance of our solution via a Monte Carlo experiment that considers six Data Generating Processes that may occur in practice, when users apply seasonal adjustment techniques. Finally, we present an empirical application to the Mexican Global Economic Indicator and its components. The results allow us to conclude that the suggested technique is appropriate to indirectly deseasonalize economic time series, mainly because we impose the movement preservation condition on the preliminary estimates produced by a reliable seasonal adjustment procedure
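
Denton's movement-preservation principle mentioned above can be illustrated with a minimal additive first-difference benchmarking, solved as an equality-constrained least squares problem (toy numbers; the article's Combining Rule and VAR-based reconciliation are considerably richer than this):

import numpy as np

def denton_additive(p, benchmarks, k):
    """Adjust the preliminary series p so each consecutive block of k values
    sums to its benchmark, while preserving p's period-to-period movements:
    minimize the squared first differences of (x - p) subject to C x = b."""
    n, m = len(p), len(benchmarks)
    assert n == m * k, "p must split into len(benchmarks) blocks of size k"
    D = np.diff(np.eye(n), axis=0)         # first-difference operator
    A = D.T @ D                            # movement-preservation quadratic
    C = np.kron(np.eye(m), np.ones(k))     # block-sum constraint matrix
    kkt = np.block([[2 * A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A @ p, benchmarks])
    return np.linalg.solve(kkt, rhs)[:n]   # drop the Lagrange multipliers

p = np.arange(10.0, 22.0)                  # preliminary monthly series
x = denton_additive(p, np.array([80.0, 115.0]), k=6)
print(x.reshape(2, 6).sum(axis=1))         # -> [ 80. 115.]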

Abstract:

This paper is about a control design for the lateral-directional dynamics of a fixed-wing aircraft. The objective is to command the lateral-directional dynamics to the equilibrium point that corresponds to a coordinated turn. The proposed control is inspired by the total energy control system technique, with the objective of mixing the aileron and rudder inputs. The control law is developed directly from the nonlinear aircraft model. Numerical simulation results are presented to show the closed-loop system behavior

Abstract:

The stratified proportional hazards model represents a simple solution to take into account heterogeneity within the data while keeping the multiplicative effect of the predictors on the hazard function. Strata are typically defined a priori by resorting to the values of a categorical covariate. A general framework is proposed, which allows the stratification of a generic accelerated lifetime model, including, as a special case, the Weibull proportional hazards model. The stratification is determined a posteriori, taking into account that strata might be characterized by different baseline survivals, and also by different effects of the predictors. This is achieved by considering a Bayesian nonparametric mixture model and the posterior distribution it induces on the space of data partitions. An optimal stratification is then identified following a decision theoretic approach. In turn, stratum-specific inference is carried out. The performance of this method and its robustness to the presence of right-censored observations are investigated through an extensive simulation study. Further illustration is provided analyzing a data set from the University of Massachusetts AIDS Research Unit IMPACT Study
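
As a loose illustration of a posteriori stratification, the sketch below clusters log-lifetimes with a truncated Dirichlet process mixture, a crude stand-in for the paper's Bayesian nonparametric partition model: it ignores censoring, covariates, and the decision-theoretic selection of the optimal stratification. The simulated lifetimes are hypothetical.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two hypothetical Weibull subpopulations with different baselines.
rng = np.random.default_rng(1)
times = np.concatenate([5.0 * rng.weibull(1.2, 150),
                        20.0 * rng.weibull(0.8, 150)])

# A truncated DP mixture on the log scale induces a partition of the
# observations, which we read here as candidate strata.
dpm = BayesianGaussianMixture(
    n_components=10, weight_concentration_prior_type="dirichlet_process")
strata = dpm.fit_predict(np.log(times).reshape(-1, 1))
print(np.bincount(strata))   # occupied components = candidate strata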

Abstract:

This work presents a study of the smoothness attained by the methods most frequently used to choose the smoothing parameter in the context of splines: Cross Validation, Generalized Cross Validation, and the corrected Akaike and Bayesian Information Criteria, implemented with Penalized Least Squares. It is concluded that the amount of smoothness strongly depends on the length of the series and on the type of underlying trend, while the presence of seasonality, even though statistically significant, is less relevant. The intrinsic variability of the series is not statistically significant and its effect is taken into account only through the smoothing parameter
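
To make the selection criteria concrete, here is a small sketch of one of them: Generalized Cross Validation for a penalized least squares smoother with a second-difference penalty (a Whittaker-type smoother standing in for the splines discussed in the paper; the grid of smoothing parameters is arbitrary).

import numpy as np

def gcv_smooth(y, lambdas=np.logspace(-2, 6, 50)):
    # Penalized least squares: minimize ||y - f||^2 + lam * ||D2 f||^2.
    y = np.asarray(y, dtype=float)
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)    # second-difference penalty
    best_lam, best_gcv, best_fit = None, np.inf, None
    for lam in lambdas:
        H = np.linalg.inv(np.eye(n) + lam * D2.T @ D2)  # hat matrix
        fit = H @ y
        rss = float(np.sum((y - fit) ** 2))
        gcv = n * rss / (n - np.trace(H)) ** 2   # GCV score
        if gcv < best_gcv:
            best_lam, best_gcv, best_fit = lam, gcv, fit
    return best_lam, best_fit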

Abstract:

Nowadays, dairy products, especially fermented products such as yogurt, fromage frais, sour cream and custard, are among the most studied foods through tribological analysis due to their semi-solid appearance and close relationship with attributes like smoothness, creaminess and astringency. In tribology, dairy products are used to provide information about the friction coefficient (CoF) generated between tongue, palate, and teeth through the construction of a Stribeck curve. This provides important information about the relationship between friction, food composition, and sensory attributes, and can be influenced by many factors, such as the type of surface, the tribometer, and whether saliva interaction is contemplated. This work reviews the most recent and relevant information on tribological studies, challenges, areas of opportunity, saliva interactions with dairy proteins, and their relation to dairy product sensory attributes

Resumen:

Las reformas de 1928 y 1934 a la Constitución mexicana trajeron una serie de cambios de forma y fondo a la Suprema Corte de Justicia de la Nación, que han sido analizadas desde la teoría constitucional, y a partir de las cuales se puede estudiar el periodo liberal de las mismas que muestran cómo, a pesar del transfondo político imperante, la Corte construyó y desarrolló su propio modelo judicial

Abstract:

The reforms of 1928 and 1934 to the Mexican Constitution brought a series of changes of form and substance to the Supreme Court of Justice of the Nation. These changes have been analyzed from the standpoint of constitutional theory, and they allow the study of the Court's liberal period, which shows how, in spite of the prevailing political background, the Court constructed and developed its own judicial model

Resumen:

El nuevo modelo de control de la regularidad constitucional y el advenimiento del llamado "paradigma constitucionalista" demandan una buena cantidad de ajustes a nuestro sistema jurídico, tanto en el ámbito legislativo como en el jurisprudencial. Un mandato constitucional que condiciona estos cambios de manera preponderante es el principio pro persona. En este trabajo demostramos cómo la Suprema Corte de Justicia no ha sido precisamente consistente a la hora de conjugar este importante principio con los diferentes problemas que va resolviendo. A menos que pensemos que la jurisprudencia de la Corte es infalible, no encontramos ninguna razón que justifique su inaplicación a cargo de los jueces ordinarios mediante el control difuso. Tampoco podemos admitir que la Corte sea impermeable con relación al principio pro persona. En este trabajo, reflexionamos sobre estos problemas a propósito de un expediente de reciente resolución: la CT 299/2013

Abstract:

The new model of constitutional adjudication and the advent of the so-called "constitutionalist paradigm" demand quite a few adjustments in the Mexican legal system, both in the legislative field and in the judicial one. The "pro personae" principle must compel and inspire these changes. In this paper we demonstrate how the Supreme Court of Justice has not been consistent when expounding this important principle through judicial review. Unless we think that the Supreme Court is infallible, we do not find a reason that justifies its inapplication by ordinary judges through diffuse judicial review. We cannot admit, either, that the Supreme Court is impermeable regarding the "pro personae" principle. In this paper, we reflect upon these issues by analyzing the recent decision in the C.T. 299/2013 (conflict in jurisdictional criteria)

Resumen:

El presente artículo tiene como objetivo plantear algunas reflexiones respecto a la forma en que el orden jurídico mexicano regula los cuidados paliativos y su conexión con el debate acerca de la muerte asistida. En él, los autores analizan el contenido de la Ley General de Salud –a partir de una reforma publicada en enero de 2009–, su Reglamento en Materia de Prestación de Servicios de Atención Médica y las demás disposiciones aplicables a los cuidados paliativos, con el objetivo de impulsar el debate público en torno a las diversas formas de asistencia que pueden recibir las personas con una enfermedad que no responde al tratamiento curativo

Abstract:

This article analyzes the Mexican regulation on palliative care and its relationship with the public debate on assisted death or suicide. This paper focuses on the rights that people with incurable diseases have, given the current contents of the General Health Statute and other applicable rules. Its main purpose is to activate the public debate on these matters

Resumen:

La Primera Sala de la Suprema Corte de Justicia de la Nación resolvió, por mayoría de cuatro votos, un amparo en el que había que dilucidar si ciertos artículos de la Norma Oficial Mexicana NOM-174-SSA1-1998, Para el Manejo Integral de la Obesidad resultaban o no violatorios de derechos humanos. En la sentencia aprobada por la mayoría se concluye que las restricciones son contrarias a la libertad prescriptiva o terapéutica, la cual, a su juicio, es parte esencial del derecho al trabajo. El ministro Cossío Díaz votó en contra y emitió un voto particular en el cual estimó que, en primer lugar, la libertad prescriptiva no es parte esencial del referido derecho, sino que funge como criterio orientador de la profesión médica. En segundo lugar, señala que el derecho al trabajo no es absoluto, toda vez que admite ciertos límites, siempre y cuando sean válidos constitucionalmente. Por ello, para determinar si son constitucionalmente válidos los requisitos que establece la Norma Oficial Mexicana (NOM), se debió haber solicitado la opinión de expertos, para poder justificar la introducción de aquéllos en la NOM. Finalmente, manifestó que el estudio debió de ponderar el derecho al trabajo con el de la salud del paciente, toda vez que este último es el que se pretendió tutelar con la NOM impugnada

Abstract:

The First Chamber of the Mexican Supreme Court of Justice decided, by a majority of four votes, a case in which it had to be determined whether certain articles of a Mexican Official Norm (NOM) on obesity violated human rights. The majority of the Chamber concluded that the restrictions went against physicians' prescriptive or therapeutic freedom and, therefore, their freedom to work. Justice Cossío Díaz voted against the judgment and wrote a separate opinion in which he holds, first, that prescriptive freedom is not an essential element of the freedom to work but rather works as a guideline for the medical profession. Secondly, he points out that the freedom to work is not an absolute right, for it admits certain limits, provided they are constitutionally valid. Consequently, experts' opinions should have been sought in order to justify the introduction of the requirements established in the NOM and to determine whether they were in accordance with the Constitution. Finally, he considers that the judgment should have balanced the freedom to work against the patient's right to health, since this last-mentioned right was what the challenged NOM intended to protect. (Gac Med Mex. 2013;149:686-90)

Abstract:

In the session of January 23, 2013, the First Chamber of the Supreme Court of Justice of the Nation decided, by a majority of three votes, the case cited above. In this regard, I set out the considerations behind my disagreement with the approval of the final text of the judgment and with the reasons that support that decision

Resumen:

En las sesiones del 17, 18, 20, 24, 25 y 27 de mayo de 2010, el Pleno de la Suprema Corte de Justicia reconoció la validez de la modificación de la “Norma oficial mexicana NOM-190-SSA1-1999, prestación de servicios de salud. Criterios para la atención médica de violencia familiar”, para quedar como “Norma oficial mexicana NOM-046-SSA2-2005, violencia familiar, sexual y contra las mujeres. Criterios para la prevención y atención”, publicada en el Diario Oficial de la Federación el 16 de abril de 2009, impugnada por el gobernador del estado de Jalisco, quien señaló que la norma era violatoria de los artículos 4, 5, 14, 16, 20, 21, 29, 31, fracción IV, 49, 73, 74, 89, fracción I, 123, 124 y 133 de la Constitución Federal. En dicha resolución, la Suprema Corte de Justicia determinó la constitucionalidad de la obligación a cargo de los integrantes del Sistema Nacional de Salud, de ofrecer y suministrar la denominada pastilla del “día siguiente” o de “emergencia” a las mujeres víctimas de violación sexual. De acuerdo con el artículo 5 de la Ley General de Salud, el Sistema Nacional de Salud abarca tanto los hospitales privados como los públicos, ya sean locales o federales. Lo que quiere decir que todos los hospitales se encuentran obligados a atender las disposiciones contenidas en la norma oficial impugnada. Dada la importancia de la determinación de la Corte en el ámbito médico nacional, en el presente artículo se analizan los puntos más relevantes del referido fallo y sus implicaciones

Abstract:

This article summarizes the Court's ruling regarding the constitutionality of the Official Norm "NOM-046-SSA2-2005". Jalisco's Governor challenged the validity of the referred norm, arguing that it was against articles 4, 5, 14, 16, 20, 21, 29, 31-IV, 49, 73, 74, 89-I, 123, 124 and 133 of the Federal Constitution. The Supreme Court dismissed the Governor's claim and determined that the members of the National Health System are obliged to offer and supply the so-called "day-after" or "emergency" pill to women victims of rape. According to article 5 of the General Health Law, the National Health System includes private and public hospitals, whether local or federal. This means that all these health institutions are obliged to observe the provisions contained in the challenged Official Norm. Given the significance of the Court's ruling in the medical sphere, this article analyzes the most relevant points of the decision and its implications

Resumen:

La función que los médicos cumplen en la sociedad es muy importante desde diversos ángulos. No obstante, las actividades que desarrollan no pueden quedar fuera del control legal en la medida en que está en juego en muchos casos la salud o incluso la vida de otras personas. Por ello, en el presente artículo se analiza a partir de una sentencia emitida por la Primera Sala de la Suprema Corte de Justicia de la Nación, el equilibrio que debe existir entre el derecho al trabajo de los médicos y el derecho de las personas a ver protegida su salud, tomando como referencia el análisis que dicho tribunal hizo en la revisión de un juicio de amparo respecto a la constitucionalidad del artículo 271 de la Ley General de Salud, destacando que dicho análisis se hizo teniendo en cuenta los estándares internacionales en materia de derechos humanos existentes. Asimismo, se analizan aspectos relacionado a quiénes son las autoridades competentes para otorgar títulos académicos médicos, y cómo el referido artículo de la Ley General de Salud era compatible con otros derechos constitucionales y la labor de los médicos

Abstract:

The role physicians play in society is very important from different perspectives. Nevertheless, their activities cannot remain outside legal control, since in many cases the health and even the lives of other people are at stake. Taking as its starting point a ruling issued by the First Chamber of Mexico's Supreme Court of Justice, this article analyzes the balance that must exist between physicians' right to work and people's right to the protection of their health, with reference to the Court's review, in an amparo proceeding, of the constitutionality of article 271 of the General Health Law (Ley General de Salud), an analysis carried out in light of existing international human rights standards. The article also addresses which authorities are competent to grant medical degrees, and how the referred article of the General Health Law was compatible with other constitutional rights and with physicians' work

Abstract:

For purposes of an adequate development of the subject of this paper, it is convenient to point out from the outset (even though the topic is developed in detail later) that the Mexican system of constitutional justice has the peculiarity of being classifiable simultaneously as concentrated, insofar as only the organs of the Federal Judicial Power are competent to exercise the control of constitutional regularity, and diffuse, because such control is exercised by the different jurisdictional organs that compose that Power (Supreme Court of Justice, circuit courts, district judges). Accordingly, an adequate description of what we may call "Mexican constitutional justice" requires analyzing different organs (Supreme Court of Justice, circuit courts and district courts) and different procedures (amparo proceedings, constitutional controversies and actions of unconstitutionality), hence the pertinence of introducing these distinctions from the outset

Abstract:

Water regulation in Mexico rests on the Mexican Constitution and on the interpretation of that law by the Mexican Supreme Court. Mexican lawyers, on the other hand, tend to ignore those interpretations and look to the text of the Constitution itself. This article argues against that approach and points to the importance of new ways of making decisions

Resumen:

Por diversos motivos un gobierno puede querer inducir a la banca comercial a que otorgue crédito a un determinado sector. Debido a ello, en este artículo se analiza una familia de contratos que puede generar tal resultado pero sin que traiga consigo un comportamiento perverso por parte de acreedores o deudores - situación que tradicionalmente se presenta con la banca de desarrollo. Para llevar a cabo tal análisis se consideró la existencia de información asimétrica en dos frentes: i) entre el gobierno y la banca comercial (el crédito puede ser desviado a miembros que no pertenezcan al grupo-objetivo del gobierno), y ii) entre la banca comercial y los deudores (estos últimos pueden utilizar el crédito en actividades distintas de las acordadas). Tomando esto en consideración, se muestra que el contrato aquí elaborado es superior a otro tipo de políticas en cuanto que genera menores tasas de interés y una mayor disposición de la banca a cooperar en la consecución del objetivo gubernamental

Abstract:

For several reasons, governments may want to encourage commercial banks to offer credit to some specific groups. This paper analyzes a contract scheme that may help achieve this objective without inducing financial disintermediation or other adverse behavior. Two sources of information asymmetry are considered: between the government and the bank (credit might be diverted to borrowers not belonging to the targeted group); and between the bank and the borrower (the latter may divert credit to purposes other than stated). Compared to other policies, the scheme analyzed here is superior since it brings about lower interest rates and higher cooperation from banks to achieve the government's objective

Resumen:

Se presenta la doctrina de Santo Tomás de Aquino, fundada en aquella de Aristóteles, relativa al auténtico sentido de la verdad práctica, concebida como diferente de la especulativa, caracterizada por su fin específico, consistente en poner en la existencia una obra exterior al hombre (poiesis) o bien una acción propiamente humana (praxis). Para actuar rectamente se requiere dirigir la acción en conformidad con un apetito recto. De este modo la vida moral no es obra de puro conocimiento sino que requiere necesariamente la participación de los apetitos, lo cuales, si están moldeados por la prudencia, permitirán actuar rectamente

Abstract:

The doctrine of St. Thomas Aquinas, founded on that of Aristotle, is presented here. It concerns the authentic meaning of practical truth, conceived as different from speculative truth and characterized by its specific end: bringing into existence either a work external to man (poiesis) or a properly human action (praxis). In both practical domains, reason is directed toward movement that is enacted by the will. To act righteously, it is necessary to direct action in accordance with an upright appetite, which ultimately means to act in a prudent manner. In this way, the moral life is not the work of pure knowledge but necessarily requires the participation of the appetites, which, if molded by prudence, will allow one to act righteously

Abstract:

This paper reports an experiment designed to shed light on an empirical puzzle observed by Dufwenberg and Gneezy (Games and Economic Behavior 30: 163-182, 2000): the size of the outside option forgone by the first mover does not affect the behavior of the second mover in a lost wallet game. Our conjecture was that the original protocol may not have made the size of the forgone outside option salient to second movers. Therefore, we changed two features of the Dufwenberg and Gneezy protocol: (i) instead of the strategy method we implemented a direct response method (sequential play) for the decision of the second mover; and (ii) we used paper money certificates that were passed between the subjects rather than having subjects write down numbers representing their decisions. We observe that our procedure yields qualitatively the same result as the Dufwenberg and Gneezy experiment, i.e., second movers do not respond to the change in the outside option of the first movers

Abstract:

In the mid-1980s, Mexico undertook major trade reform, privatization and deregulation. This coincided with a rapid expansion in wages and employment that led to a rise in wage dispersion. This paper examines the role of industry- and occupation-specific effects in explaining the growing dispersion. We find that, despite the magnitude and pace of the reforms, industry-specific effects explain little of the rising wage dispersion. In contrast, occupation-specific effects can explain almost half of the growing wage dispersion. Finally, we find that the economy became more skill-intensive and that this effect was larger in the traded sector because that sector experienced much smaller low-skilled employment growth. We therefore suggest that competition from imports played an important role in the fall of the relative demand for less-skilled workers

Abstract:

Technology and mobile devices have been successfully integrated into people's everyday activities. Educational institutions around the world are increasingly interested in creating mobile learning (ML) environments, considering the advantages of connectivity, situated learning, individualized learning, social interactivity, portability, affordability and, more widely, ubiquity. Even with the fast development of ML environments, there is a lack of understanding about the factors that influence ML adoption. This paper proposes a framework for ML adoption integrating a modified Unified Theory of Acceptance and Use of Technology (UTAUT) with constructs from the Expectation-Confirmation Theory (ECT). Since the goal of education is learning, this research includes individual attributes such as learning styles (LS) and experience to understand how they moderate ML adoption and actual use. For this reason, the framework brings together adoption theory for initial use and the constructs of continuance intention for actual and habitual use as an outcome of learning. The framework is divided into two stages, acceptance and actual use. The purpose of this paper is to test the first stage, ML acceptance, through the structural equation modeling statistical technique. The data were collected from students who are already experiencing ML. Findings demonstrate that the performance and effort expectancy constructs are significant predictors of ML adoption and that there is some influence of LS and experience as moderators of ML adoption. The practical implication for educational services is to incorporate the influence of LS when designing strategies for learning enhanced by mobile devices
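
The first-stage test described above is a standard structural equation modeling exercise. A minimal sketch with the third-party semopy package might look as follows; the survey column names (pe1 ... bi3) and the reduced two-predictor path model are placeholders, not the study's actual instrument.

import pandas as pd
from semopy import Model

# Measurement model: three hypothetical items per latent construct
# (PE = performance expectancy, EE = effort expectancy,
#  BI = behavioral intention), plus UTAUT-style structural paths.
DESC = """
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
BI =~ bi1 + bi2 + bi3
BI ~ PE + EE
"""

data = pd.read_csv("ml_survey.csv")   # hypothetical survey data
model = Model(DESC)
model.fit(data)
print(model.inspect())   # loadings, path coefficients, p-values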

Abstract:

Mobile learning (ML) has been growing recently. The successful adoption of this technology to enhance learning environments could represent a major contribution for education. The purpose of this research will be to investigate the effect of learning styles on ML adoption. This research will consider four stages: conduct an exploratory study, develop a systematic literature review, develop the model and test the model. The results could be a guideline to encourage ML and to help mobile device manufacturers and developers to generate new mobile applications for education

Abstract:

As mobile technology evolves, the possibilities for Mobile Learning (ML) are becoming increasingly attractive. However, the lack of perceived learning value and of institutional infrastructure is hindering ML attempts. The purpose of our study is to understand the use and adoption of mobile technologies by teachers in a business school. We developed a questionnaire based on current research about the use of technology in higher education and used it to interview 14 teachers. Participants provided insights about ML opportunities, such as availability, interactive environments, enhanced communication and inclusion in daily activities. Participants also noted that current teaching practices should change in mobile environments to include relevant information, organize mobile materials, encourage reflection and create interactive activities with timely feedback. Further, they identified technological, institutional, pedagogical and individual obstacles that threaten ML practices

Abstract:

The evolution of mobile technology and mobile applications has increased the possibilities for mobile learning (ML). However, the lack of perceived learning value and of institutional infrastructure is hindering ML attempts. The purpose of our study is to understand the opportunities and obstacles of mobile technologies as perceived by teachers in higher education. A questionnaire was developed based on current research about technology adoption in higher education and was used to interview 14 teachers. Participants were asked to identify different issues associated with using mobile technology in education. In response, participants provided insights about their perception of ML, such as opportunities to enhance communication with students, availability of resources and immediate feedback. Finally, they identified technological, institutional, pedagogical and individual obstacles that threaten ML practices

Resumen:

A los abogados novohispanos se les concedió la gracia del derecho a utilizar en sus togas puños de encaje de bolillo, privilegio sólo reservado a las altas autoridades eclesiásticas y que se conserva actualmente en las sesiones togadas del Colegio. La toga es una vestimenta propia de la profesión de abogado, es la prenda profesional de los juristas. Alfonso X impuso la garnacha sin vuelillos puños de encaje profesional como prenda profesional de los juristas en las cortes de Jerez de la Frontera en abril de 1267. Los vuelillos quedaron reservados en España hoy en día a los miembros de las juntas de gobierno de los Colegios de Abogados y jueces. La concesión del privilegio del uso de bolillos blancos se hizo a solicitud del IRCAM que desde su fundación gozaba de la participación de las preeminencias, y prerrogativas de que gozaba en el Colegio de Abogados de Madrid. Se confirma la concepción de élite profesional que distinguió a la profesión en el siglo XVIII. EL otorgamiento del privilegio buscó acabar con la confusión en el uso de los trajes de abogados y de otras profesiones, tuvo así una finalidad práctica: la de distinguir a los abogados respecto del resto de los togados

Abstract:

Lawyers in New Spain were granted the privilege of adorning their robes with bobbin lace cuffs, a privilege otherwise reserved to high ecclesiastical authorities and one still observed today in the ceremonial robed sessions of the Bar. The robe is the garment proper to the legal profession, the professional dress of jurists. Alfonso X imposed the garnacha without lace cuffs as the professional garment of jurists at the courts of Jerez de la Frontera in April 1267. In Spain, lace cuffs remain reserved today to judges and to members of the governing boards of the bar associations. The privilege of wearing white bobbin lace was granted at the request of the Illustrious Royal Bar of Mexico (IRCAM), which since its foundation had enjoyed the preeminences and prerogatives of the Bar of Madrid. This confirms the conception of a professional elite that distinguished the profession in the eighteenth century. The granting of the privilege sought to end the confusion between the attire of lawyers and that of other professions; it thus had a practical purpose: to distinguish lawyers from the rest of those who wore robes

Résumé:

Le droit des manchettes de dentelle blanche sur leurs toges a été concédée aux avocats de la Nouvelle Espagne, privilège qui était réservé aux plus hautes autorités ecclésiastiques, y et qui se conserve de nos jours lors des sessions solennelles du Barreau. La toge est un habit propre à la profession d'avocat, c'est l'habit professionnel des juristes. Alphonse X imposa la "garnacha sin vuelillos" (précurseur de la toge comme on la connait aujourd'hui) comme habit officiel des juristes à la court Jerez de la Frontera en avril 1267. Les boucles des toges restent réservées de nos jours en Espagne aux members des Conseils d'Administration des Barreaux et aux juges. Le droit d'utiliser les manchettes en dentelle blanche résulte d'une demande de l'ICRAM, qui depuis sa fondation, bénéfice de la prééminence et des prérogatives dont jouissait le barreau de Madrid. Elle confirme par ces usages, le concept d'élite professionnelle qui distingue la profession d'avocat au XVIIIème siècle. L'octroi de ce privilège visait à mettre fin à la confusion dans l'utilisation des costumes d'avocats avec d'autres professions. Ce privelège accordé répond un objectif practique: distinguer les avocats du reste des professionnels qui utilisaient des habits comme la toge

Abstract:

Information workers and software developers are exposed to work fragmentation, an interleaving of activities and interruptions during their normal work day. Small-scale observational studies have shown that this can be detrimental to their work. In this paper, we perform a large-scale study of this phenomenon for the particular case of software developers performing software evolution tasks. Our study is based on several thousand interaction traces collected by Mylyn and the Eclipse Usage Data Collector. We observe that work fragmentation is correlated with lower observed productivity at both the macro level (for entire sessions) and the micro level (around markers of work fragmentation); further, longer activity switches seem to strengthen the effect, and different activities seem to be affected differently. These observations give ground for subsequent studies investigating the phenomenon of work fragmentation
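
A simplified version of the session-level analysis can be sketched as follows. The trace schema (session_id, timestamp, activity) and the edit-ratio productivity proxy are hypothetical simplifications of the Mylyn/Usage Data Collector data, shown only to make the fragmentation-versus-productivity correlation concrete.

import pandas as pd

events = pd.read_csv("interaction_traces.csv", parse_dates=["timestamp"])

def session_metrics(g):
    # Fragmentation: activity switches per hour within the session.
    switches = int((g["activity"] != g["activity"].shift()).sum()) - 1
    hours = (g["timestamp"].max() - g["timestamp"].min()).total_seconds() / 3600
    # Crude productivity proxy: share of edit events in the session.
    edit_ratio = (g["activity"] == "edit").mean()
    return pd.Series({"fragmentation": switches / max(hours, 1e-9),
                      "edit_ratio": edit_ratio})

per_session = events.groupby("session_id").apply(session_metrics)
print(per_session["fragmentation"].corr(per_session["edit_ratio"],
                                        method="spearman"))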

Abstract:

Metal oxide nanoparticles are considered to be good alternatives as fungicides for plant disease control. To date, numerous metal oxide nanoparticles have been produced and evaluated as promising antifungal agents. Consequently, a detailed and critical review on the use of mono-, bi-, and tri-metal oxide nanoparticles for controlling phytopathogenic fungi is presented. Among the studied metal oxide nanoparticles, mono-metal oxide nanoparticles (particularly ZnO nanoparticles, followed by CuO nanoparticles) are the most investigated for controlling phytopathogenic fungi. Limited studies have investigated the use of bi- and tri-metal oxide nanoparticles for controlling phytopathogenic fungi; therefore, more studies on these nanoparticles are required. Most of the evaluations have been carried out under in vitro conditions, so it is necessary to develop more detailed studies under in vivo conditions. Interestingly, biological synthesis has been established as a good alternative for producing metal oxide nanoparticles for controlling phytopathogenic fungi. Although there have been great advances in the use of metal oxide nanoparticles as novel antifungal agents for sustainable agriculture, there are still areas that require further improvement

Abstract:

The use of metal nanoparticles is considered a good alternative to control phytopathogenic fungi in agriculture. To date, numerous metal nanoparticles (e.g., Ag, Cu, Se, Ni, Mg, and Fe) have been synthesized and used as potential antifungal agents. Therefore, this work presents a critical and detailed review of the use of these nanoparticles to control phytopathogenic fungi. Ag nanoparticles have been the most investigated due to their good antifungal activities, followed by Cu nanoparticles. Other metal nanoparticles, such as Se, Ni, Mg, Pd, and Fe, have also been investigated as antifungal agents, showing prominent results. Different synthesis methods have been used to produce these nanoparticles with different shapes and sizes, which have shown outstanding antifungal activities. This review shows the success of the use of metal nanoparticles to control phytopathogenic fungi in agriculture

Abstract:

The sluggish kinetics of the oxygen reduction reaction (ORR) limits the massive use of polymer electrolyte membrane fuel cells (PEMFCs). Therefore, electrocatalysts are required to accelerate the ORR kinetics. Usually, Pt/C electrocatalysts are used for the ORR. Nevertheless, the principal disadvantages of using Pt are its high cost, low availability, and limited stability. For these reasons, research on more stable and cost-effective electrocatalysts with high electrocatalytic activity is important to achieve large-scale power generation. To date, there are significant advances and great opportunities in designing novel electrocatalysts with high electrocatalytic activity and stability for the ORR through the connection of experimental and theoretical explorations. For these reasons, in this review, combined theoretical-experimental studies for the design of novel Pt-based metallic electrocatalysts for the ORR are revised and analyzed in detail

Abstract:

In this work, the growth behavior, structures, and energetic and magnetic properties of ConPdn and NinPdn (n = 1-10) clusters were investigated employing auxiliary density functional theory (ADFT). Initial geometries for subsequent optimization were extracted from Born-Oppenheimer molecular dynamics (BOMD) trajectories. It is demonstrated that, as the cluster size increases, the Co and Ni atoms become shrouded by Pd atoms, leading to the initial formation of M@Pd (M = Co and Ni) core-shell structures. The spin multiplicities of the ConPdn and NinPdn (n = 1-10) clusters increase with cluster size. The CoPd clusters exhibit higher spin multiplicities and are characterized by higher spin magnetic moments than their NiPd counterparts. This study reveals that the spin density distributions are located on the Co and Ni atoms in the respective clusters. As the cluster size increases, both systems donate electrons more easily and the binding energy per atom grows monotonically

Abstract:

The design and manufacture of highly efficient nanocatalysts for the oxygen reduction reaction (ORR) is key to achieving the massive use of proton exchange membrane fuel cells. To date, Pt nanocatalysts are widely used for the ORR, but they have various disadvantages such as high cost, limited activity and partial stability. Therefore, different strategies have been implemented to eliminate or reduce the use of Pt in nanocatalysts for the ORR. Among these, Pt-free metal nanocatalysts have received considerable attention due to their good catalytic activity and somewhat lower cost with respect to Pt. Consequently, there are nowadays outstanding advances in the design of novel Pt-free metal nanocatalysts for the ORR. In this direction, combining experimental findings and theoretical insights is a low-cost methodology, in terms of both computational cost and laboratory resources, for the design of Pt-free metal nanocatalysts for the ORR in acid media. Therefore, coupled experimental and theoretical investigations are revised and discussed in detail in this review article

Abstract:

Detecting and monitoring air-polluting gases such as carbon monoxide (CO), nitrogen oxides (NOx), and sulfur oxides (SOx) is critical, as these gases are toxic and harm ecosystems and human health. Therefore, it is necessary to design high-performance gas sensors for toxic gas detection. In this sense, graphene-based materials are promising for use as toxic gas sensors. In addition to experimental investigations, first-principles methods have enabled graphene-based sensor design to progress by leaps and bounds. This review presents a detailed analysis of graphene-based toxic gas sensors studied with first-principles methods. The modifications made to graphene, such as decoration, the introduction of defects, and doping, to improve the detection of NOx, SOx, and CO toxic gases are revised and analyzed. In general, graphene decorated with transition metals, defective graphene, and doped graphene show a higher sensitivity toward the toxic gases than pristine graphene. This review shows the relevance of first-principles studies for the design of novel and efficient toxic gas sensors. The theoretical results obtained to date can greatly help experimental groups to design novel and efficient graphene-based toxic gas sensors
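
The key quantity in most of the first-principles sensor studies reviewed here is the adsorption energy of the gas molecule on the sheet. A trivial sketch of the bookkeeping follows; the DFT total energies in the example call are made-up numbers, not results from any of the reviewed works.

def adsorption_energy(e_complex, e_sheet, e_gas):
    # E_ads = E(sheet + gas) - E(sheet) - E(gas), all in eV.
    # More negative values indicate stronger binding of the molecule
    # to the (pristine, decorated, defective, or doped) sheet.
    return e_complex - e_sheet - e_gas

print(adsorption_energy(-305.42, -301.10, -4.05))  # hypothetical energies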

Abstract:

The trends of the catalytic activity toward the oxygen reduction reaction (ORR) from Pd44 nanoclusters to M6@Pd30Pt8 (M = Co, Ni, and Cu) core-shell nanoclusters were investigated using auxiliary density functional theory. The adsorption energies of O and OH were computed as predictors of the catalytic activity toward the ORR, and the following tendency of the electrocatalytic activity was found: Pt44 = M6@Pd30Pt8 > M6@Pd38 > Pd44. In addition, the adsorption of O2 on the Ni6@Pd30Pt8 and Pt44 nanoclusters was investigated, finding an elongation of the O-O bond length when O2 is adsorbed on these nanoclusters, which suggests that the O2 is activated. Finally, the stabilities of the M6@Pd38 and M6@Pd30Pt8 core-shell nanoclusters were analyzed both in vacuum and in an oxidative environment. From the calculated segregation energies for the bimetallic and trimetallic nanoclusters in vacuum, it can be clearly observed that the M atoms prefer to be in the center of the M6@Pd38 and M6@Pd30Pt8 nanoclusters. Nevertheless, the segregation energies of M atoms for the M6@Pd38 nanoclusters in an oxidizing environment tend to decrease compared with their values in vacuum, which suggests that, in an oxidative environment, M atoms may tend to segregate to the surface of the M6@Pd38 nanoclusters

Abstract:

Electronic structure computations of pure Pd and Pd-based core-shell clusters were performed employing auxiliary density functional theory (ADFT). For this investigation, icosahedral clusters with 13 and 55 atoms and octahedral clusters with 19 and 44 atoms were employed to analyze the change in the properties of the Pd and M@Pd core-shell clusters. All properties calculated for the M@Pd clusters are directly compared with those of the pure palladium clusters. Spin multiplicities, spin magnetic moments, spin densities, binding energies per atom, segregation energies, and average bond lengths were calculated to understand how they change when varying the size, composition and shape of the M@Pd (M = Co, Ni, and Cu) core-shell clusters. The M1@Pd12 and M1@Pd18 (M = Co and Cu) clusters exhibit changes in spin multiplicity and spin magnetic moment with respect to the Pd13 and Pd19 clusters, respectively, whereas the Ni1@Pd12 and Ni1@Pd18 clusters maintain the same properties as their pure Pd counterparts. The spin multiplicities and spin magnetic moments of the M6@Pd38 and M13@Pd42 (M = Co, Ni, and Cu) clusters differ greatly from those of their pure Pd counterparts. This study reveals that the Pd-Pd bond lengths are shorter in the M@Pd core-shell clusters than in the pure Pd clusters. This work demonstrates that the binding energy per atom of the M@Pd core-shell clusters is greater than that of the pure Pd clusters. The calculated segregation energies indicate that the 3d atoms prefer to be in the center of the core-shell systems
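
Two of the descriptors used throughout these cluster studies reduce to simple energy bookkeeping. The sketch below encodes them under one common sign convention (conventions vary in the literature); the energies in the example calls are made-up placeholders, not the ADFT results of the paper.

def binding_energy_per_atom(e_cluster, e_free_atoms):
    # E_b = (sum of isolated-atom energies - cluster total energy) / N;
    # larger positive values mean a more strongly bound cluster.
    return (sum(e_free_atoms) - e_cluster) / len(e_free_atoms)

def segregation_energy(e_core_config, e_surface_config):
    # Energy of the M-at-surface configuration minus the M-in-core one;
    # positive means the dopant prefers the center of the cluster.
    return e_surface_config - e_core_config

print(binding_energy_per_atom(-120.0, [-2.1] * 44))  # hypothetical values
print(segregation_energy(-120.0, -119.4))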

Resumen:

En 1976 Nagel y Williams presentaron -en una reunión de la Aristotelian Society- dos célebres textos dirigidos a exhibir el desafío que el azar y la fortuna representan para la imputación kantiana de responsabilidad moral. Desde entonces a la fecha en la literatura han proliferado cientos de artículos centrados en analizar este dilema. Dicho debate, no obstante, rara vez es situado al interior del análisis de las implausibles y falsas premisas que dan lugar a él. En este trabajo reconstruyo las coordenadas centrales en las que esta problemática filosófica se origina. Posteriormente muestro que la imputación de responsabilidad ética a un agente no sólo no excluye, sino incluso presupone, lo que llamaré una capacidad "impura" de agencia donde la suerte ocupa un lugar central

Abstract:

In 1976, Nagel and Williams presented, at a meeting of the Aristotelian Society, two famous texts aimed at exposing the challenge that chance and fortune represent for the Kantian imputation of moral responsibility. Since then, hundreds of articles focused on analyzing this dilemma have proliferated in the literature. This debate, however, is rarely situated within an analysis of the implausible and false premises that give rise to it. In this paper I reconstruct the central coordinates in which this philosophical problem originates. I then show that the imputation of ethical responsibility to an agent not only does not exclude, but even presupposes, what I will call an "impure" capacity for agency in which luck occupies a central place

Resumen:

Alrededor de la categoría "populismo" suelen articularse todo tipo de ansiedades políticas. Jan-Werner Müller atina al afirmar que la frivolidad en el uso de este concepto ha terminado por volverlo sinónimo de todo aquello que nos desagrada en gobiernos que gozan de amplio apoyo social. Lo cierto es que “populismo” ha sido utilizado en el debate académico y político como un término valorativo antes que como una categoría analítica. En este texto me propongo mostrar que -de cara a desarrollar una teoría correcta sobre el populismo- debemos comenzar por evitar el lenguaje valorativo para en su lugar emprender una tarea analítica. Probaré que se trata de una tarea urgente, cuyo primer paso será diferenciar entre populismo y autoritarismo democrático

Abstract:

All kinds of political anxieties are usually articulated around the category of "populism". Jan-Werner Müller is right to affirm that the frivolity in the use of this concept has ended up making it synonymous with everything we dislike in governments that enjoy broad social support. The truth is that "populism" has been used in academic and political debate as an evaluative term rather than as an analytical category. In this text I propose to show that, in order to develop a correct theory of populism, we must begin by avoiding evaluative language and instead undertake an analytical task. I will show that this is an urgent task, whose first step is to differentiate between populism and democratic authoritarianism

Abstract:

The new Latin American constitutionalism (NLC) is the term that has been coined to refer to certain constitutional processes and constitutional reforms that have taken place relatively recently in Latin America. Constitutional theorists have not been very optimistic regarding the scope and nature of this new constitutionalism. I thoroughly agree with this critical skepticism as well as with the idea that this new phenomenon does not substantively change the organic element of the different constitutions in the region. However, I argue that it is a mistake to focus analysis on this characteristic. My intention is to show that the NLC should be evaluated in the context of its relationship with contemporary neo-constitutional theory

Résumé:

Le nouveau constitutionalisme latino-américain est la locution qui a été inventée pour renvoyer à certains processus et réformes constitutionnels ayant eu lieu dans une époque relativement récente en Amérique Latine. Les théoriciens des constitutions ne se sont pas montrés très optimistes quant à l’étendue et à la nature de ce nouveau constitutionalisme. Je suis tout à fait d’accord avec ce scepticisme critique, ainsi qu’avec l’idée selon laquelle ce nouveau phénomène ne change pas significativement l’élément organique des différentes constitutions en place dans la région. Cependant, je soutiens qu’il est erroné de concentrer l’analyse sur cette caractéristique. Mon intention est de montrer que le nouveau constitutionalisme latino-américain doit être examiné relativement à la théorie néo-constitutionnelle contemporaine

Resumen:

La obra de Marx ha suscitado una añeja polémica entre sus estudiosos. Algunos han mantenido que el lenguaje desarrollado en ella es estrictamente explicativo. Dicho lenguaje expresaría ante todo un saber científico expurgado de todo contenido moral (sobre la estructura del capital, las fuerzas que causan la dinámica social y las leyes que la rigen). En el otro extremo, en cambio, otros han argüido que en Marx hallamos más bien un lenguaje ético orientado a denunciar los crímenes y miserias de una determinada formación social con el fin de oponerle otra. En este artículo defiendo la idea de que en la obra de Marx hay elementos tanto para afirmar una cosa como la otra. Sin embargo, argumento que la actualidad del pensamiento marxista reside esencialmente en los elementos éticos y normativos que configuran la dimensión moral de su planteamiento

Abstract:

Marx's work has given rise to a long-standing controversy among its scholars. Some have maintained that the language developed in it is strictly explanatory. Such language would express, above all, a scientific knowledge expurgated of all moral content (about the structure of capital, the forces that cause social dynamics and the laws that govern it). At the other extreme, however, others have argued that in Marx we rather find an ethical language oriented toward denouncing the crimes and miseries of a given social formation in order to oppose another to it. In this article I defend the idea that in Marx's work there are elements to affirm one thing as well as the other. Nevertheless, I argue that the present relevance of Marxist thought resides essentially in the ethical and normative elements that configure the moral dimension of his approach

Resumen:

Los procesos democráticos de toma de decisiones (al igual que las restricciones constitucionales a la regla de mayoría) pueden ser evaluados por sus resultados, por su valor intrínseco o por una combinación de ambas cosas. Mostraré que analizar a fondo estas alternativas permite sacar a la luz las debilidades más serias en los modos usuales de justificación del constitucionalismo. La fundamentación teórica de la articulación entre democracia y constitucionalismo ha permanecido atrapada en una trampa que busco romper. Concluiré mostrando la necesidad de rebasar los argumentos epistémicos y contra-epistémicos sugiriendo pautas que hasta ahora creo han sido poco ponderadas en la literatura clásica sobre el tema

Abstract:

Democratic decision-making processes (as well as constitutional limits to majority rule) may be evaluated on the basis of their results, their intrinsic value, or a combination of both. I will show that an in-depth analysis of these alternatives uncovers the most serious weaknesses in the usual modes of justifying constitutionalism. The theoretical grounding of the articulation between democracy and constitutionalism has remained stuck in a trap that I seek to break. I conclude by showing the need to go beyond epistemic and counter-epistemic arguments, suggesting guidelines that, I believe, have so far received little consideration in the classical literature on the subject

Resumen:

Este ensayo reflexiona sobre la crisis de las instituciones ciudadanas del Estado y de la sociedad civil como consecuencia del proceso de globalización actual. Efecto de este proceso es que los Gobiernos locales se ven cada vez más obligados a orientar su política conforme a los criterios de flujos económicos globales. Con ello, los Estados ven desbordada su capacidad de gestión, con lo que tienden a sacrificar intereses de sectores hasta entonces protegidos por ellos. Este texto se dirige a reflexionar sobre los fenómenos de exclusión, violencia y subalternidad que dicha exclusión genera. Su interés es hacer una exploración crítica de tres categorías analíticas centrales: imperio, imperialismo y multitud, a partir de la importante obra publicada en el año 2000 por Hardt y Negri. Al final, se mostrará su importancia para desvelar diversos fenómenos derivados de esta condición mundial y la violencia que genera, así como la necesidad de analizar el pensamiento de Hardt y Negri a partir de ciertas coordenadas de reflexión latinas

Abstract:

This essay reflects on the crisis of the citizen institutions of the State and of civil society as a consequence of the current globalization process. One effect of this process is that local governments find themselves increasingly obliged to orient their policies according to the criteria of global economic flows. As a result, States see their management capacity overwhelmed and tend to sacrifice the interests of sectors they had previously protected. This text reflects on the phenomena of exclusion, violence, and subalternity that such exclusion generates. Its aim is to carry out a critical exploration of three central analytical categories: empire, imperialism and multitude, on the basis of the important work published in 2000 by Hardt and Negri. Finally, it shows their importance for unveiling diverse phenomena derived from this world condition and the violence it generates, as well as the need to analyze Hardt and Negri's thought from certain Latin coordinates of reflection

Resumen:

Me propongo analizar críticamente la idea de abstinencia epistémica desarrollada por un importante grupo de teóricos liberales a partir de los años ochenta del siglo pasado. Para los propósitos del liberalismo político la propuesta de la abstinencia epistémica desempeña un papel crucial. Consiste en garantizar el consenso en torno a las reglas procesales y principios públicos de justicia, exigiendo que la pluralidad de intereses y concepciones sustantivas que coexisten en la sociedad se abstengan de realizar atribuciones de verdad sobre sus propias concepciones morales cuando éstas son debatidas en la esfera pública. Mi argumento es que esta estrategia fracasa toda vez que la abstinencia epistémica no resiste la aplicación de sus propias cláusulas a sí misma

Abstract:

The purpose of this paper is to discuss the thesis of Epistemic Abstinence developed by an important group of liberal theorists starting in the 1980s. The thesis plays a crucial role in political liberalism. It is meant to secure consensus on procedural rules and public principles of justice by insisting that the plurality of interests and substantive conceptions that coexist in society abstain from making claims about the truth of their own moral precepts when these are debated in the public sphere. I argue that this strategy fails because Epistemic Abstinence cannot survive the application of its own clauses to itself

Abstract:

This article offers an articulation of liberation philosophy, a Latin American form of political and philosophical thought that is largely not followed in European and Anglo-American political circles. Liberation philosophy has posed serious challenges to Jürgen Habermas's and Karl-Otto Apel's discourse ethics. Here I explain what these challenges consist of and argue that Apel's response to Latin American political thought shows that discourse ethics can maintain internal consistency only if it is subsumed under the program of liberation philosophy

Resumen:

El liberalismo, en esencia, consiste en relegar el pluralismo y trasladarlo a la esfera privada para asegurar el consenso en la esfera pública. De este modo, todas las cuestiones controvertidas (por antonomasia, la discusión en torno a la verdad) son eliminadas de la agenda para crear las condiciones de un consenso "racional". En consecuencia, el reino de la política se transforma en un terreno en el cual los individuos, despojados de sus pasiones y creencias más fundamentales, aceptan someterse a acuerdos que consideran (o se les imponen) como neutrales. Es así como niega el liberalismo la dimensión de lo político (esto es, de lo polemos, lo dinámico, lo conflictivo), con el fin de reconducirlo al ámbito de la política (la polis, el lugar de la reconciliación del conflicto). El propósito de este trabajo es analizar y discutir a fondo algunos de los principales argumentos que se han ofrecido con el fin de justificar esta estrategia liberal (básicamente en autores como Rorty o Rawls). Mi conclusión será mostrar cómo es que esta estrategia liberal dista mucho de no ser problemática

Abstract:

The essence of liberalism is the relegation of pluralism to the private sphere in order to ensure consensus in the public sphere. In this way, all controversial issues (most notably, the debate on truth) are removed from the agenda to create the conditions for a "rational" consensus. Accordingly, the realm of politics becomes an arena in which individuals, stripped of their most fundamental beliefs and passions, agree to submit to arrangements which they consider to be (or which are imposed on them as) neutral. Thus, liberalism denies the dimension of the political (i.e., the polemos, the dynamic, the conflictive) in order to redirect it to the realm of politics (the polis, the place of reconciliation of conflict). The aim of this paper is to analyze and discuss in depth some of the main arguments that have been offered to justify this liberal strategy (basically in authors such as Rorty and Rawls). My conclusion will show that this liberal strategy is far from unproblematic

Abstract:

We study the effects of a nongovernmental civic inclusion campaign on the democratic integration of demobilized insurgents. Democratic participation ideally offers insurgents a peaceful channel for political expression and for addressing grievances. However, existing work suggests that former combatants' ideological socialization and experiences of violence fuel hard-line commitments that may be contrary to democratic political engagement, threatening the effectiveness of postwar electoral transitions. We use a field experiment with demobilized FARC combatants in Colombia to study how a civic inclusion campaign affects trust in political institutions, democratic political participation, and preferences for strategic moderation versus ideological rigidity. We find the campaign increased trust in democracy and support for political compromise. Effects are driven by the most educated ex-combatants moving from more hard-line positions to ones in line with their peers, and by ex-combatants with the most violent conflict experience similarly moderating their views

Abstract:

Self-interruptions account for a significant portion of task switching in information-centric work contexts. However, most of the research to date has focused on understanding, analyzing and designing for external interruptions. The causes of self-interruptions are not well understood. In this paper we present an analysis of 889 hours of observed task-switching behavior from 36 individuals across three high-technology information work organizations. Our analysis suggests that self-interruption is a function of organizational environment and individual differences, but also of the external interruptions experienced. We find that people in open office environments interrupt themselves at a higher rate. We also find that people are significantly more likely to interrupt themselves to return to solitary work associated with central working spheres, suggesting that self-interruption occurs largely as a function of prospective memory events. The research presented contributes substantially to our understanding of attention and multitasking in context

Abstract:

Law search is fundamental to legal reasoning, and its articulation is an important challenge and open problem in the ongoing efforts to investigate legal reasoning as a formal process. This article formulates a mathematical model that frames the behavioral and cognitive framework of law search as a sequential decision process. The model has two components: first, a model of the legal corpus as a search space, and second, a model of the search process (or search strategy) that is compatible with that environment. The search space has the structure of a "multi-network" (an interleaved structure of distinct networks) developed in earlier work. In this article, we develop and formally describe three related models of the search process. We then implement these models on a subset of the corpus of U.S. Supreme Court opinions and assess their performance against two benchmark prediction tasks. The first is to predict the citations in a document from its semantic content. The second is to predict the search results generated by human users. For both benchmarks, all search models outperform a null model, with the learning-based model outperforming the other approaches. Our results indicate that, with additional work and refinement, machine law search may have the potential to achieve human or near-human levels of performance
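
One of the simplest search-process models compatible with a citation network is a random walk with restarts. The sketch below runs such a walker over a toy directed graph; the node names and edges are made up, and the article's three models and its multi-network search space are substantially richer.

import random
import networkx as nx

# Toy citation graph: an edge (a, b) means opinion a cites opinion b.
G = nx.DiGraph()
G.add_edges_from([("case_D", "case_C"), ("case_D", "case_A"),
                  ("case_C", "case_A"), ("case_C", "case_B"),
                  ("case_B", "case_A")])

def random_walk_search(G, query, steps=1000, restart=0.2):
    # Follow outgoing citations; restart at the query document with
    # some probability, or whenever a dead end is reached.
    visits = {}
    node = query
    for _ in range(steps):
        visits[node] = visits.get(node, 0) + 1
        nbrs = list(G.successors(node))
        node = (query if not nbrs or random.random() < restart
                else random.choice(nbrs))
    # Rank documents by visit frequency, as a stand-in for relevance.
    return sorted(visits, key=visits.get, reverse=True)

print(random_walk_search(G, "case_D"))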

Abstract:

This work focuses on historical volatility models in which the temporal and spatial dependencies inherent in the mean and variance are "simple". The empirical time series used are trade-by-trade common stock series from the U.S. and Mexico, and from Mexican ADRs traded in the U.S. Results from these three data sets provide information on the liquidity of the markets in these two countries

Abstract:

By using the Monte Carlo simulation, one can obtain a closed-form solution for the price of the pure discount bond. In order to do this, the paths of the stochastic variables n and r must be simulated first. To properly sample from the tails of the Normal distribution, so that the expected value of the martingale n converges to one, a few sampling procedures are applied that are tailored specifically to emphasize sampling from the tails of the distribution
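
A minimal Python sketch of this kind of computation, given only to fix ideas: the short rate follows an assumed Vasicek process (the paper's joint dynamics for n and r are not reproduced here), and antithetic pairs stand in for the tail-oriented sampling procedures; every parameter value is illustrative.

import numpy as np

def bond_price_mc(r0=0.05, kappa=0.5, theta=0.04, sigma=0.01,
                  T=1.0, n_steps=250, n_paths=20_000, seed=0):
    # Assumed Vasicek dynamics: dr = kappa*(theta - r) dt + sigma dW.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths // 2, n_steps))
    z = np.vstack([z, -z])            # antithetic pairs: both Normal tails hit
    r = np.full(z.shape[0], r0)
    integral = np.zeros(z.shape[0])   # accumulates the integral of r over [0, T]
    for k in range(n_steps):
        integral += r * dt
        r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * z[:, k]
    return np.exp(-integral).mean()   # P(0, T) = E[exp(-integral of r)]

print(f"P(0,1) = {bond_price_mc():.6f}")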

Abstract:

A moral right to health or health care, given reasonable resource constraints, implies a reasonable array of services, as determined by a fair deliberative process. Such a right can be embodied in a constitution where it becomes a legal right with similar entitlements. What is the role of the courts in deciding what these entitlements are? The threat of “judicialization” is that the courts may overreach their ability if they attempt to carry out this task; the promise of judicialization is that the courts can do better than health systems have done at determining such entitlements. We propose a middle ground that requires the health system to develop a fair, deliberative process for determining how to achieve the progressive realization of the same right to health or health care and that also requires the courts to develop the capacity to assess whether the deliberative process in the health system is fair

Abstract:

This article analyses the importance of training as a creator of human capital, which enables a company to obtain competitive advantages that are sustainable in the long term and result in greater profitability. The study is based on the general theoretical framework of resource and capability theory. The study not only analyses the impact of training on performance; it also attempts to analyse the nature of such a relationship in greater depth. With this in mind, an attempt has been made to measure explanatory capacity from two different perspectives: the universalistic approach and the contingent approach. At the outset, two hypotheses are formulated that attempt to quantify the relationship from a universalistic perspective; two further hypotheses then incorporate the potential moderating effect of the strategy into the model, in order to verify whether or not this strategy improves the explanatory power of our model of analysis

Abstract:

Purpose – The aim of this paper is to determine whether the effort invested by service companies in employee training has an impact on their economic performance. Design/methodology/approach – The study centres on a labor-intensive sector, where the perception of service quality depends on who renders the service. To overcome the habitual problems of cross-sectional studies, the time effect has been considered by measuring data over a period of nine years, allowing panel data treatment with fixed effects. Findings – The estimated models give clear empirical support to the hypothesis that training activities positively influence company performance. Research limitations/implications – The results obtained contribute empirical evidence about a relationship that, hitherto, has not been satisfactorily demonstrated. However, there may be some limitations related to the use of a training indicator based on effort rather than on results obtained, to the low representation of what happens in smaller companies that lack structured training policies, or to the absence of differentiation between generic and more specific training. Practical implications – The results obtained can contribute towards increased manager awareness that training should be treated as an investment and not considered an expense. Originality/value – The main contributions can be summarized in three points: a training measure based on three dimensions has been used, which is presumed to be an improvement on the more frequent way of measuring this variable; a consistent methodology was applied that had not previously been used in the analysis of this relationship; and clear empirical evidence has been obtained concerning a relationship that is frequently asserted on theoretical grounds but needs more empirical support

Abstract:

In a simple public good economy, we propose a natural bargaining procedure, the equilibria of which converge to Lindahl allocations as the cost of bargaining vanishes. The procedure splits the decision over the allocation in a decision about personalized prices and a decision about output levels for the public good. Since this procedure does not assume price-taking behavior, it provides a strategic foundation for the personalized taxes inherent in the Lindahl solution to the public goods problem

Abstract:

Retail petroleum markets in Mexico are on the cusp of a historic deregulation. For decades, all 11,000 gasoline stations nationwide have carried the brand of the state-owned petroleum company Pemex and sold Pemex gasoline at federally regulated retail prices. This industry structure is changing, however, as part of Mexico's broader energy reforms aimed at increasing private investment. Since April 2016, independent companies can import, transport, store, distribute, and sell gasoline and diesel. In this paper, we provide an economic perspective on Mexico's nascent deregulation. Although in many ways the reforms are unprecedented, we argue that past experiences in other markets give important clues about what to expect, as well as about potential pitfalls. Turning Mexico's retail petroleum sector into a competitive market will not be easy, but the deregulation has enormous potential to increase efficiency and, eventually, to reduce prices

Abstract:

The financial crisis has brought the problems of regulatory failure and unbridled counterparty risk to the forefront in financial discussions. In the last decade, central counterparties have been created in order to face those insidious problems. In Mexico, both the stock and the derivatives markets have central counterparties, but the money market has not. This paper addresses the issue of creating a central counterparty for the Mexican money market. Recommendations that must be followed in the design and the management of risk of a central counterparty, given by international regulatory institutions, are presented in this study. Also, two different conceptual designs for a central counterparty, appropriate for the Mexican market, are proposed. Final conclusions support the creation of a new central counterparty for the Mexican money market

Abstract:

If two elections are held on the same day, why do some people choose to vote in one but abstain in the other? We argue that selective abstention is driven by the same factors that determine voter turnout. Our empirical analysis focuses on Sweden, where the (aggregate) turnout gap between local and national elections has been about 2-3%. Rich administrative register data reveal that people from higher socio-economic backgrounds, immigrants, women, older individuals, and people who have been less geographically mobile are less likely to selectively abstain

Abstract:

This paper demonstrates that in procedural contexts of free proof, proof sentences of judicial decisions (i.e. sentences of the kind "it is proven that p") have normative illocutionary force. On the one hand, in that context, "it is proven that p" expresses a value judgment of the judge. On the other hand, it is shown that "it is proven that p" is, in that context, a practical reason aiming to justify an action of the decision-maker: the acceptance of the factual statement as a premise of the judicial decision

Resumen:

Algunas versiones del realismo jurídico pretenden compatibilizar la pretensión de que el derecho es un conjunto de normas con un fuerte compromiso con el empirismo. De conformidad con este último, el derecho no está constituido por entidades abstractas de ningún tipo sino por hechos empíricamente constatables. En vistas a llevar a cabo esta compatibilización, en varios trabajos Riccardo Guastini ha defendido una concepción de las proposiciones normativas, i.e. aserciones existenciales sobre normas jurídicas, como enunciados teóricos acerca del derecho vigente, necesariamente referentes a ciertos hechos. Se concibe así al derecho vigente como el conjunto de tex-tos que son resultado de interpretaciones estables, consolidadas y dominantes que los jueces han llevado a cabo en sus decisiones en el ordenamiento de que se trate. Partiendo de esta versión del realismo jurídico, este trabajo procura sembrar algunas dudas. Primero, sobre este modo de concebir a las proposiciones normativas. Segundo, sobre el modo en que, en consecuencia, queda configurada la teoría del derecho. Tercero, y más en general, sobre la pretensión de compatibilizar la visión del derecho como conjunto de normas con la tesis empirista

Abstract:

Some versions of legal realism seek to reconcile the claim that law is a set of rules with a commitment to empiricism. According to the latter, law is not constituted by abstract entities of any kind, but by facts instead. Embracing this orientation, Riccardo Guastini has defended a conception of normative propositions, i.e. existential assertions about legal norms, as necessarily referring to certain facts. Specifically, law is conceived as a set of texts that are the result of stable, consolidated and dominant interpretations that judges have carried out in their decisions. Starting from this version of legal realism, this work tries to cast some doubts. First, on this way of conceiving normative propositions. Second, on the way in which, as a consequence, legal theory is understood. Third, and more generally, on the claim to reconcile the view of law as a set of rules with the empiricist thesis

Abstract:

The paper applies a factor model to the study of risk sharing among U.S. states. The factor model makes it possible to disentangle movements in output and consumption due to national, regional, or state-specific business cycles from those due to measurement error. The results of the paper suggest that some findings of the previous literature which indicate a substantial amount of inter-state risk sharing may be due to the presence of measurement error in output. When measurement error is properly taken into account, the evidence points towards a lack of inter-state smoothing

Abstract:

Motivated by the dollarization debate in Mexico, we estimate an identified vector autoregression for the Mexican economy, using monthly data from 1976 to 1997, taking into account the changes in the monetary policy regime which occurred during this period. We find that (i) exogenous shocks to monetary policy have had no impact on output and prices; (ii) most of the shocks originated in the foreign sector; (iii) disturbances originating in the U.S. economy have been a more important source of fluctuations for Mexico than shocks to oil prices. We also study the endogenous response of domestic monetary policy by means of a counterfactual experiment. The results indicate that the response of monetary policy to foreign shocks played an important part in the 1994 crisis

Abstract:

The conference on "Global Monetary Integration" addressed a number of questions related to the adoption of the US dollar as legal tender in emerging-market economies. The goal of the conference was to foster the policy debate on dollarization, not to resolve it, and on that score it succeeded

Abstract:

Mexican manufacturing job loss induced by competition with China increases cocaine trafficking and violence, particularly in municipalities with transnational criminal organizations. When it becomes more lucrative to traffic drugs because changes in local labor markets lower the opportunity cost of criminal employment, criminal organizations plausibly fight to gain control. The evidence supports a Becker-style model in which the elasticity between legitimate and criminal employment is particularly high where criminal organizations lower illicit job search costs, where the drug trade implies higher pecuniary returns to violent crime, and where unemployment disproportionately affects low-skilled men

Abstract:

The paper investigates how the relative contribution of external factors to stock price movements varies with the degree of financial development. It is found that financial development makes stock markets more susceptible to external influences (both financial and macroeconomic). Interestingly, this effect is present even after having accounted for capital controls and international trade effects

Abstract:

We consider the spatial circular restricted three-body problem, on the motion of an infinitesimal body under the gravity of Sun and Earth. This can be described by a three-degree-of-freedom Hamiltonian system. We fix an energy level close to that of the collinear libration point L1, located between Sun and Earth. Near L1 there exists a normally hyperbolic invariant manifold, diffeomorphic to a 3-sphere. For an orbit confined to this 3-sphere, the amplitude of the motion relative to the ecliptic (the plane of the orbits of Sun and Earth) can vary only slightly. We show that we can obtain new orbits whose amplitude of motion relative to the ecliptic changes significantly, by following orbits of the flow restricted to the 3-sphere alternately with homoclinic orbits that turn around the Earth. We provide an abstract theorem for the existence of such ‘diffusing’ orbits, and numerical evidence that the premises of the theorem are satisfied in the three-body problem considered here. We provide an explicit construction of diffusing orbits. The geometric mechanism underlying this construction is reminiscent of the Arnold diffusion problem for Hamiltonian systems. Our argument, however, does not involve transition chains of tori as in the classical example of Arnold. We exploit mostly the ‘outer dynamics’ along homoclinic orbits, and use very little information on the ‘inner dynamics’ restricted to the 3-sphere. As a possible application to astrodynamics, diffusing orbits as above can be used to design low-cost maneuvers to change the inclination of the orbit of a satellite near L1 from a nearly-planar orbit to a tilted orbit with respect to the ecliptic. We explore different energy levels, and estimate the largest orbital inclination that can be achieved through our construction
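
For orientation, the setting above is the standard spatial circular restricted three-body problem in rotating (synodic) coordinates, whose Hamiltonian can be written in the generic textbook form (not necessarily the authors' normalization) as

H(x, y, z, p_x, p_y, p_z) = (p_x^2 + p_y^2 + p_z^2)/2 + y p_x - x p_y - (1 - μ)/r_1 - μ/r_2,

where μ is the Sun-Earth mass parameter and r_1, r_2 are the distances from the infinitesimal body to the Sun and to the Earth; the collinear libration point L1 is an equilibrium of this system.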

Abstract:

Rapidly-exploring Random Trees (RRTs) are effective for a wide range of applications, from kinodynamic planning to motion planning under uncertainty. However, RRTs are not as efficient when exploring heterogeneous environments and do not adapt to the space. For example, in difficult areas an expensive RRT growth method might be appropriate, while in open areas inexpensive growth methods should be chosen. In this paper, we present a novel algorithm, Adaptive RRT, that adapts RRT growth to the current exploration area using a two-level growth selection mechanism. At the first level, we select a group of expansion methods according to the visibility of the node being expanded. At the second level, we use a cost-sensitive learning approach to select a sampler from the chosen group of expansion methods. We also propose a novel definition of visibility for RRT nodes which can be computed in an online manner and used by Adaptive RRT to select an appropriate expansion method. We present the algorithm and an experimental analysis on a broad range of problems showing not only its adaptability, but also the efficiency gains achieved by adapting exploration methods appropriately
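
A condensed Python sketch of the two-level selection idea, not the authors' implementation: a plain RRT extension loop in which a group of expansion methods is chosen from a crude visibility score of the node being expanded, while the within-group choice, which the paper makes with a cost-sensitive learner, is left here as a uniform draw. The planar world, disc obstacles, and probe-based visibility proxy are all assumptions.

import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def visibility(node, obstacles, probes=16, reach=1.0):
    # Placeholder visibility: fraction of short probe points that are free.
    free = sum(
        all(dist((node[0] + reach * math.cos(2 * math.pi * i / probes),
                  node[1] + reach * math.sin(2 * math.pi * i / probes)), c) > r
            for c, r in obstacles)
        for i in range(probes))
    return free / probes

def extend(nearest, sample, step):
    d = dist(nearest, sample)
    t = min(1.0, step / d) if d > 0 else 0.0
    return (nearest[0] + t * (sample[0] - nearest[0]),
            nearest[1] + t * (sample[1] - nearest[1]))

def extend_greedy(nearest, sample):    # large steps for open regions
    return extend(nearest, sample, 0.5)

def extend_cautious(nearest, sample):  # small steps for cluttered regions
    return extend(nearest, sample, 0.1)

def adaptive_rrt(start, goal, obstacles, iters=3000):
    tree = [start]
    groups = {True: [extend_greedy], False: [extend_cautious]}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(tree, key=lambda n: dist(n, sample))
        # Level 1: pick the method group from the node's visibility.
        group = groups[visibility(nearest, obstacles) > 0.5]
        # Level 2: a cost-sensitive learner would choose within the group;
        # this sketch just draws uniformly.
        new = random.choice(group)(nearest, sample)
        if all(dist(new, c) > r for c, r in obstacles):
            tree.append(new)
            if dist(new, goal) < 0.3:
                break
    return tree

print(len(adaptive_rrt((1, 1), (9, 9), [((5, 5), 1.5)])), "nodes grown")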

Abstract:

The Covid-19 pandemic has deepened existing gender inequalities. In particular, it has dealt a significant blow to women entrepreneurs, as it has magnified the pre-pandemic disadvantages women have faced in the economic, social, financial and regulatory ecosystems they operate in, particularly due to the nature and size of their businesses. The article outlines three main reasons why women entrepreneurs have been disproportionately impacted during this health pandemic. It then explores how trade agreements can help women overcome the barriers that impede their entrepreneurial potential and help their businesses withstand the pandemic-inflicted market disruptions

Abstract:

Universal access to safe drinking water and sanitation facilities is an essential human right, recognised in the Sustainable Development Goals as crucial for preventing disease and improving human wellbeing. Comprehensive, high-resolution estimates are important to inform progress towards achieving this goal. We aimed to produce high resolution geospatial estimates of access to drinking water and sanitation facilities

Abstract:

This paper proposes a strategy to estimate the community structure for a network accounting for the empirically established fact that communities and links are formed based on homophily. It presents a maximum likelihood approach to rank community structures where the set of possible community structures depends on the set of salient characteristics and the probability of a link between two nodes varies according to the characteristics of the two nodes. This approach has good large sample properties, which lead to a practical algorithm for the estimation. To exemplify the approach it is applied to data collected from four village clusters in Ghana

Abstract:

This paper examines the role of identity in the fragmentation of networks by incorporating the choice of commitment to identity characteristics into a non-cooperative network formation game. The Nash network features divisions based on identity, with layers within these divisions. Using the refinement of strictness I find structures where stars of highly committed players are linked together by less committed players

Abstract:

This paper analyzes the effect of the transfer of information by an informed strategic trader (owner) to another strategic player (buyer). It shows that while the owner will never fully divulge his information, he may transfer a noisy signal of his information to the buyer. With such a transfer, the owner loses some of his informational superiority and yet increases his trading profit. I also show that if the transfer can be made to more than one buyer, then, the owner’s profit is increasing in the number of other buyers to whom the transfer is made

Abstract:

Much of what we know about the alignment of voters with parties comes from mass surveys of the electorate in the postwar period or from aggregate electoral data. Using individual elector-level panel data from nineteenth-century United Kingdom poll books, we reassess the development of a party centered electorate. We show that (a) the electorate was party-centered by the time of the extension of the franchise in 1867, (b) a decline in candidate-centered voting is largely attributable to changes in the behavior of the working class, and (c) the enfranchised working class aligned with the Liberal left. This early alignment of the working class with the left cannot entirely be explained by a decrease in vote buying. The evidence suggests instead that the alignment was based on the programmatic appeal of the Liberals. We argue that these facts can plausibly explain the subsequent development of the party system

Abstract:

This article offers an analytical critique of the position of Basu, Haas, and Moraitis, who, by extending the conventional linear system for the simultaneous determination of value, argue that in Marx's economic theory the intensification of work generates absolute, rather than relative, surplus value. This position is also contrasted with Marx's original theory to establish its incompatibility. As an alternative, seeking to restore the role of labor intensification as a generator of relative surplus value, this work incorporates labor intensity into the Temporal Single System Interpretation (TSSI), showing its full compatibility with Marx's original theory

Abstract:

In the shifting global economic landscape, Certified Public Accountants play a significant role in decision-making within organizations, so it is imperative to analyze economic and financial prospects in light of recent dynamics and the challenges facing the world economy, including global economic volatility and regulatory change

Abstract:

In recent years, sustainability has begun to appear on corporate agendas, with the aim of avoiding damage to nature and generating comprehensive change not only in environmental matters but also in social, economic, and cultural ones. Hence the relevance of higher education institutions incorporating it into their undergraduate curricula, including Accounting, so that these programs align with the United Nations Sustainable Development Goals

Abstract:

This article was meant to be about poetry and International Relations (IR). It ended up being about trans/feminist and cuir art and critique with love and care among people of color. This is what praxis does to academic thinking; it disrupts the methods as much as it troubles the aesthetics

Resumen:

Este artículo analiza las razones por las que el tribunal electoral confirma o revoca las multas que impone el IFE a los partidos políticos mexicanos, como resultado de la revisión a sus ingresos y gastos. Se confirman parcialmente las expectativas de la literatura sobre política judicial, la cual predice que los tribunales especializados, como el electoral en México, tienen más probabilidades de revocar las decisiones de los organismos especializados que revisan por razones estratégicas. Al analizar 1671 multas impugnadas entre 1997 y 2010, se concluye que aunque los magistrados confirman tres de cada cuatro multas, cuando revocan decisiones del IFE se trata de temas visibles como gastos de campaña o cuando las élites políticamente relevantes son las que impugnan

Abstract:

I analyze the main determinants of why the electoral tribunal upholds or overturns fines imposed by the IFE on Mexico's political parties, as revealed by audits of political spending. I find evidence that partially supports the hypotheses developed by the judicial politics literature, which holds that specialized courts, such as the electoral tribunal, are more likely to overturn decisions of a specialized agency for strategic reasons. By analyzing 1671 fines challenged between 1996 and 2010, I conclude that although magistrates affirm three out of four fines, they overturn IFE's decisions when a salient issue, such as campaign spending, is involved or when relevant political elites challenge the fines imposed

Abstract:

Exploring the relationship between religion, avoidance, and involvement in Latin America, we test the avoidance hypothesis, which predicts that time devoted to the Church reduces time devoted to politics. We analyze the 24 countries surveyed by the 2010 AmericasBarometer, studying attendance at religious services, participation in church groups, the importance of religion, trust in churches, and the subnational presence of the Catholic Church, with data from the 2005 Annuario Pontificio. We find evidence in favor of the avoidance hypothesis among those who attend religious services and where the Catholic Church has a greater presence, while involvement increases among those who participate in church groups

Abstract:

This paper describes the integration of an artificial vision system into a flexible production cell: the production cell consists of a material storage box with an artificial vision system (AVS) and a 5-DOF Scorbot ER 4 robot. The camera system detects the geometry of the raw material, and this information is sent to the robot, which then moves to pick up the material for further processing. The Cartesian coordinates are processed so that the robot joints can be positioned correctly. The described system is part of an ongoing development of a smart factory for research and educational purposes

Abstract:

Wireless sensor networks are one of the main hardware components enabling the creation of the Internet of Things, and they are proliferating rapidly. As sensor nodes are deployed in a wide variety of indoor and outdoor environments, they are in general battery-powered devices. In fact, power provisioning is one of the main challenges faced by engineers when deploying IoT-based applications. This paper develops a cross-layer architecture, integrating smart, power-aware protocols with a low-cost, high-efficiency power management module, as the basis of long-lasting, self-powered WSNs. The main physical components of the proposed architecture are a wireless node comprising a set of small solar cells responsible for harvesting the energy and an ultracapacitor as the storage device. Energy consumption is reduced significantly by varying the sleep/wake duty cycle of the radio module. For environments with only a few hours of sunlight per day, we demonstrate the feasibility of ensuring long-lasting operation by adapting the duty-cycle scheme according to the energy stored in the ultracapacitor. Our experiments prove the feasibility of long-endurance outdoor operation with a low-complexity power management unit. This is an important advance towards the development of novel IoT-based applications
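
As a toy illustration of the adaptation scheme described above, the Python sketch below maps the energy stored in the ultracapacitor (E = C V^2 / 2) to a sleep/wake duty cycle for the radio; the capacitance, voltage window, thresholds, and duty-cycle values are assumed figures, not those of the paper.

CAP_F = 25.0             # ultracapacitor size in farads (assumed)
V_MAX, V_MIN = 5.0, 2.2  # usable voltage window in volts (assumed)

def stored_energy(v):
    # Usable joules above the cutoff voltage: E = C * (v^2 - V_MIN^2) / 2.
    return 0.5 * CAP_F * (v ** 2 - V_MIN ** 2)

def duty_cycle(v):
    """Map the state of charge to the fraction of time the radio is awake."""
    soc = stored_energy(v) / stored_energy(V_MAX)
    if soc > 0.7:
        return 0.10   # plenty of harvested energy: report often
    if soc > 0.3:
        return 0.02   # conserve
    return 0.005      # survival mode until the next sunny period

for v in (5.0, 3.5, 2.5):
    print(f"V = {v:.1f} V -> duty cycle {duty_cycle(v) * 100:.2f}%")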

Abstract:

In this article we explore the relationship between 19 of the most common anomalies reported for the US market and the cross-section of Mexican stock returns. We find that 1-month stock returns in Mexico are robustly predicted only by 3 of the 19 anomalies: momentum, idiosyncratic volatility, and the lottery effect. Momentum has a positive relation with future 1-month returns, while idiosyncratic volatility and the lottery effect have a negative relation. For longer horizons of 3 and 6 months, only the 3 most important factors in the US market predict returns: size, book-to-market, and momentum
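
A schematic of the kind of monthly cross-sectional (Fama-MacBeth style) regression that sits behind statements such as "momentum predicts 1-month returns". The data layout, column names, and synthetic demo below are hypothetical, not the article's dataset.

import numpy as np
import pandas as pd

def fama_macbeth(df, chars=("momentum", "ivol", "lottery")):
    # df: one row per stock-month, next-month return in column 'ret_fwd'.
    slopes = []
    for _, month in df.groupby("month"):
        X = np.column_stack([np.ones(len(month))] +
                            [month[c].to_numpy() for c in chars])
        beta, *_ = np.linalg.lstsq(X, month["ret_fwd"].to_numpy(), rcond=None)
        slopes.append(beta[1:])                 # drop the intercept
    slopes = np.array(slopes)
    t = slopes.mean(0) / (slopes.std(0, ddof=1) / np.sqrt(len(slopes)))
    return {c: (m, ts) for c, m, ts in zip(chars, slopes.mean(0), t)}

rng = np.random.default_rng(0)
n = 24 * 50                                     # 24 months x 50 stocks
demo = pd.DataFrame({"month": np.repeat(np.arange(24), 50),
                     "momentum": rng.normal(size=n),
                     "ivol": rng.normal(size=n),
                     "lottery": rng.normal(size=n)})
demo["ret_fwd"] = 0.01 * demo["momentum"] + rng.normal(scale=0.05, size=n)
print(fama_macbeth(demo))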

Abstract:

Tebuconazole (TBZ) nanoemulsions (NEs) were formulated using a low energy method. TBZ composition directly affected the drop size and surface tension of the NE. Water fraction and the organic-to-surfactant-ratio (RO/S) were evaluated in the range of 1-90 and 1-10 wt %, respectively. The study was carried out with an organic phase (OP) consisting of an acetone/glycerol mixture containing TBZ at a concentration of 5.4 wt % and Tween 80 (TW80) as a nonionic and Agnique BL1754 (AG54) as a mixture of nonionic and anionic surfactants. The process involved a large dilution of a bicontinuous microemulsion (ME) into an aqueous phase (AP). Pseudo-ternary phase diagrams of the OP//TW80//AP and OP//AG54//AP systems at T = 25 °C were determined to map ME regions; these were in the range of 0.49-0.90, 0.01-0.23, and 0.07-0.49 of OP, AP, and surfactant, respectively. Optical microscope images helped confirm ME formation and system viscosity was measured in the range of 25-147 cP. NEs with drop sizes about 9 nm and 250 nm were achieved with TW80 and AG54, respectively. An innovative low-energy method was used to develop nanopesticide TBZ formulations based on nanoemulsion (NE) technology. The surface tension of the studied systems can be lowered 50% more than that of pure water. This study's proposed low-energy NE formulations may prove useful in sustainable agriculture

Resumen:

Objetivos: Nuestro objetivo fue evaluar el rendimiento de un modelo electrocardiográfico basado en IA capaz de detectar ACOMI (Acute Coronary Occlusion Myocardial Infarction) en pacientes con SCA. Métodos: Este fue un estudio prospectivo, observacional y longitudinal, de un solo centro que incluyó a pacientes con diagnóstico inicial de SCA (tanto STEMI como NSTEMI). Para entrenar el modelo de deep learning en el reconocimiento de ACOMI, se realizó una digitalización manual de los ECG de los pacientes utilizando cámaras de teléfonos inteligentes de diversas calidades. Nos basamos en el uso de Redes Neuronales Convolucionales (CNN) como modelos de inteligencia artificial para la clasificación de los ejemplos de ECG. Los ECG fueron evaluados de forma independiente por dos cardiólogos expertos, quienes desconocían los resultados clínicos; a cada uno se le pidió determinar a) si el paciente presentaba un STEMI, según criterios universales, o b) si no se cumplían los criterios de STEMI, identificar cualquier otro hallazgo en el ECG que sugiriera ACOMI. ACOMI se definió por la presencia de cualquiera de los siguientes tres hallazgos en la angiografía coronaria: a) oclusión total trombótica, b) trombo grado TIMI 2 o superior + flujo grado TIMI 1 o menor, o c) la presencia de una lesión suboclusión (> 95% de estenosis angiográfica) con flujo grado TIMI < 3. Los pacientes se clasificaron en cuatro grupos: STEMI + ACOMI, NSTEMI + ACOMI, STEMI + no ACOMI y NSTEMI + no ACOMI

Abstract:

Objectives: We aimed to assess the performance of an artificial intelligence-electrocardiogram (AI-ECG)-based model capable of detecting acute coronary occlusion myocardial infarction (ACOMI) in the setting of patients with acute coronary syndrome (ACS). Methods: This was a prospective, observational, longitudinal, and single-center study including patients with the initial diagnosis of ACS (both ST-elevation acute myocardial infarction [STEMI] & non-ST-segment elevation myocardial infarction [NSTEMI]). To train the deep learning model in recognizing ACOMI, manual digitization of a patient's ECG was conducted using smartphone cameras of varying quality. We relied on the use of convolutional neural networks as the AI models for the classification of ECG examples. ECGs were also independently evaluated by two expert cardiologists blinded to clinical outcomes; each was asked to determine (a) whether the patient had a STEMI, based on universal criteria or (b) if STEMI criteria were not met, to identify any other ECG finding suggestive of ACOMI. ACOMI was defined by coronary angiography findings meeting any of the following three criteria: (a) total thrombotic occlusion, (b) TIMI thrombus grade 2 or higher + TIMI grade flow 1 or less, or (c) the presence of a subocclusive lesion (> 95% angiographic stenosis) with TIMI grade flow < 3. Patients were classified into four groups: STEMI + ACOMI, NSTEMI + ACOMI, STEMI + non-ACOMI, and NSTEMI + non-ACOMI
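
A schematic PyTorch sketch of the kind of convolutional classifier the study describes; the 12-lead input, 5000-sample window, layer sizes, and four-class head mirror the four groups named above but are otherwise assumptions, not the study's architecture.

import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    def __init__(self, n_leads=12, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),            # one feature vector per ECG
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, leads, samples)
        return self.head(self.features(x).squeeze(-1))

model = ECGConvNet()
logits = model(torch.randn(8, 12, 5000))        # a dummy batch of digitized ECGs
print(logits.shape)                             # torch.Size([8, 4])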

Resumen:

Existen crecientes preocupaciones en México respecto a las emisiones de CO2, debido a la utilización de combustibles fósiles en la generación de electricidad. Recientemente se han autorizado varias leyes con la finalidad de incrementar la participación de combustibles no fósiles en la mezcla energética. A pesar de que se han establecido algunos objetivos, estos serán difíciles de lograr si las inversiones continúan siendo dirigidas principalmente a las tecnologías fósiles. Este artículo presenta un modelo de apoyo a la toma de decisiones, basado en el enfoque de dinámica de sistemas, como un método alternativo a las técnicas de modelaje tradicionales. El modelo es utilizado para identificar los requerimientos futuros de capacidad de generación, así como para evaluarlos en diversos escenarios simulados

Abstract:

There are increasing concerns in México regarding CO2 emissions due to fossil-fuel-based electricity generation. Recently, several laws have been passed with the objective of increasing the non-fossil share of the energy portfolio mix. Although several objectives have been established, they will be hard to achieve if investments continue to be directed mainly to fossil fuel technologies. This article presents a system dynamics decision support model as an alternative to traditional modelling approaches. The model is used to identify future generation capacity requirements and to evaluate them in several simulated scenarios

Resumen:

El artículo presenta un simulador de vuelo ejecutivo (SVE), como parte de un entorno de aprendizaje, diseñado para ser utilizado por dueños o administradores de parques o reservas cinegéticas, grupos conservacionistas o diseñadores de políticas ecológicas gubernamentales, con el objetivo de evaluar diversas estrategias y de proveer experiencia virtual en la planeación estratégica sustentable y rentable de dichos parques o reservas de turismo cinegético. El SVE está basado en un modelo de dinámica de sistemas que evalúa el riesgo de agotamiento de la población, sus beneficios económicos potenciales, y la generación potencial de impuestos en un parque virtual. Se diseñó el SVE con el objetivo de evaluar los impactos de diversas políticas desde la libre cacería hasta políticas altamente restrictivas como cuotas de caza, esquemas de impuestos y precios. La estructura del modelo está basada en el fenómeno económico denominado “tragedia de los comunes”, el cual ocurre cuando los individuos, actuando independientemente unos de otros, explotan indiscriminadamente un recurso de propiedad común, buscando obtener beneficios de corto plazo, mientras lo agotan para su uso en el largo plazo. La utilización del SVE muestra que sí es posible la sustentabilidad y la rentabilidad en una reserva de turismo cinegético, aplicando combinaciones de estrategias o políticas racionales a nivel sistema

Abstract:

This paper presents a management flight simulator (MFS) as part of a learning environment, designed to be used by wildlife hunting park owners or managers, conservationists and government environmental policy makers, with the aim of providing strategy assessment and virtual experience in the strategic planning of sustainable and profitable hunting parks or reserves. The MFS is based on a System Dynamics model that assesses the risk of population depletion, economic benefits and tax collection in a virtual wildlife park. The MFS was designed to evaluate the impacts of different policies, from free shooting to highly restrictive measures such as shooting quotas, tax schemes, and price policies. The model structure is based on the "tragedy of the commons", an economic phenomenon occurring when individuals, acting independently of one another, overuse a common-property resource to obtain short-term benefits while depleting it for long-term use. Using the MFS shows that sustainability and profitability are possible in a wildlife shooting reserve by applying a system-level combination of rational policies

Abstract:

A lack of effective understanding of the Resource-Based View (RBV) in strategy courses and the quick-feedback learning style of the new generation of Business Administration students demand more than a traditional lecture teaching strategy. Based on two educator research questions: How could my students achieve an effective understanding of the RBV concepts? How could my students experience the quick financial impacts of their strategic decisions? and one student question: How could I develop strategic resources in order to achieve the maximum cash flow?, an Interactive Learning Environment (ILE) is proposed with the following learning objectives: understand the RBV concepts, identify relationships between strategic resources and financial performance, and experience the financial impacts of several resource development strategies as an iterative process. The proposed ILE is tested in a laboratory experiment conducted with graduate and undergraduate students to evaluate differences in key performance measures due to the investment strategy profiles of these two groups. The experiment results suggest that graduate students were more aggressive, getting worse results at the beginning, but in the end they achieved better results with a somewhat less aggressive strategy, and by assigning more resources to productivity versus capacity than undergraduate students did

Abstract:

We present the results of applying a planning model that allows scenarios for the Mexican manufacturing sector to be constructed and evaluated, analyzing the impact of investment in human capital formation and in technological development on its future productivity levels. The planning scheme is based on the concepts of evolution that explain the behavior of open systems. The scheme is operationalized through a model composed of a system of simultaneous equations, whose parameters are estimated statistically using regression techniques. Scenarios are constructed and evaluated by setting assumptions and simulating the model toward the year 2000. The model considers investment in knowledge capital, comprising both human aspects (education and training) and technical aspects (technological development, research and development), to be one of the fundamental elements influencing the productivity of any transformation system facing competition, which is, in turn, one of the critical elements determining its productivity and market share. Constructing and evaluating technological scenarios for the Mexican manufacturing sector reveals the great importance of investment in knowledge capital for its development, providing guidelines for resource allocation policies in the corresponding areas

Resumen:

El artículo explora la crítica de Santiago Ramírez a la clasificación de la analogía de Cayetano. Ramírez argumenta que la división tradicional de la analogía no refleja completamente la complejidad de las nociones de santo Tomás de Aquino. En particular, se introduce la "analogía de atribución intrínseca" como una categoría adicional dentro de la analogía de atribución. A través de un análisis detallado, el texto examina cómo esta forma de analogía mantiene elementos de la proporcionalidad y la atribución extrínseca, pero se distingue por ser una atribución "según el ser" y "según la intención". Se destacan las implicaciones teológicas y filosóficas de esta distinción, especialmente en relación con la predicación del concepto de verdad respecto a Dios, las criaturas y los juicios humanos

Abstract:

The article explores Santiago Ramírez's critique of Cajetan's classification of analogy. Ramírez argues that the traditional division of analogy does not fully reflect the complexity of Thomas Aquinas' notions. Specifically, the "intrinsic attribution analogy" is introduced as an additional category within attribution analogy. Through a detailed analysis, the text examines how this form of analogy retains elements of proportionality and extrinsic attribution yet is distinguished by being an attribution "according to being" and "according to intention." The theological and philosophical implications of this distinction are highlighted, especially regarding the predication of the concept of truth in relation to God, creatures, and human judgments

Abstract:

For the problem of adjudicating conflicting claims, we propose the following method to extend a lower bound rule: (i) for each problem, assign the amounts recommended by the lower bound rule and revise the problem accordingly; (ii) assign the amounts recommended by the lower bound rule to the revised problem. The "recursive extension" of a lower bound rule is obtained by recursive application of this procedure. We show that if a lower bound rule satisfies positivity, then its recursive extension singles out a unique awards rule. We then study the relation between desirable properties satisfied by a lower bound rule and properties satisfied by its recursive extension

Abstract:

In economics the main efficiency criterion is that of Pareto-optimality. For problems of distributing a social endowment, a central notion of fairness is no-envy (each agent should receive a bundle at least as good, according to her own preferences, as any other agent's bundle). For most economies there are multiple allocations satisfying these two properties. We provide a procedure, based on distributional implications of these two properties, which selects a single allocation that is Pareto-optimal and satisfies no-envy in two-agent exchange economies. There is no straightforward generalization of our procedure to more than two agents

Abstract:

For the problem of adjudicating conflicting claims, we consider the requirement that each agent should receive at least 1/n of his claim truncated at the amount to divide, where n is the number of claimants (Moreno-Ternero and Villar, 2004a). We identify two families of rules satisfying this bound. We then formulate the requirement that for each problem, the awards vector should be obtainable in two equivalent ways: (i) directly, or (ii) in two steps, first assigning to each claimant his lower bound and then applying the rule to the appropriately revised problem. We show that there is only one rule satisfying this requirement. We name it the “recursive rule”, as it is obtained by a recursion. We then undertake a systematic investigation of the properties of the rule
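
A worked Python sketch of the recursion the abstract describes, under the Moreno-Ternero and Villar bound named above: each round grants every claimant min(claim, remaining endowment)/n and revises the problem; iterating until the endowment is exhausted yields the awards vector. The example figures are arbitrary.

def recursive_rule(claims, endowment, tol=1e-12):
    # Claims problem: endowment <= sum(claims); bound = min(c_i, E) / n.
    awards = [0.0] * len(claims)
    c, E, n = list(claims), float(endowment), len(claims)
    while E > tol:
        bounds = [min(ci, E) / n for ci in c]
        if sum(bounds) < tol:      # nothing left to assign
            break
        for i, b in enumerate(bounds):
            awards[i] += b
            c[i] -= b
        E -= sum(bounds)
    return awards

print(recursive_rule([100.0, 200.0, 300.0], 240.0))
# -> awards summing to 240, each claimant at or above the one-shot bound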

Abstract:

This article proposes specification tests for economic models defined through conditional moment restrictions in which the conditioning variables are estimated. There are two main motivations for this situation. The first is the case in which the conditioning variables are not directly observable, as in economic models where innovations or latent variables appear as explanatory variables. The second is the case in which the set of conditioning variables is too large to derive powerful tests, so the original conditioning set is replaced by a constructed variable regarded as a good summary of it. We establish the asymptotic properties of the proposed tests, examine their finite sample behavior, and apply them to different econometric contexts. In some cases, the proposed approach leads to relevant tests that generalize well-known specification tests, such as Ramsey's RESET test

Abstract:

Despite their theoretical advantages, Integrated Conditional Moment (ICM) specification tests are not commonly employed in econometric practice. An important reason is that the employed test statistics are nonpivotal, and so critical values are not readily available. This article proposes an omnibus test in the spirit of the ICM tests of Bierens and Ploberger (1997, Econometrica 65, 1129–1151) where the test statistic is based on the minimized value of a quadratic function of the residuals of time series econometric models. The proposed test falls under the category of overidentification restriction tests started by Sargan (1958, Econometrica 26, 393–415). The corresponding projection interpretation leads us to propose a straightforward wild bootstrap procedure that requires only linear regressions to estimate the critical values irrespective of the model functional form. Hence, contrary to other existing ICM tests, the critical values are easily calculated while the test preserves the admissibility property of ICM tests

Abstract:

In econometrics, models stated as conditional moment restrictions are typically estimated by means of the generalized method of moments (GMM). The GMM estimation procedure can render inconsistent estimates since the number of arbitrarily chosen instruments is finite. In fact, consistency of the GMM estimators relies on additional assumptions that imply unclear restrictions on the data generating process. This article introduces a new, simple and consistent estimation procedure for these models that is directly based on the definition of the conditional moments. The main feature of our procedure is its simplicity, since its implementation does not require the selection of any user-chosen number, and statistical inference is straightforward since the proposed estimator is asymptotically normal. In addition, we suggest an asymptotically efficient estimator constructed by carrying out one Newton-Raphson step in the direction of the efficient GMM estimator

Abstract:

Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution free, even asymptotically. This makes the test unfeasible. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover power and optimality of Bierens' original test. Bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap Nonlinear Least Squares Estimator under local alternatives

Abstract:

In this paper we consider testing that an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect linear and nonlinear alternatives. Given that the asymptotic distributions of the considered tests statistics depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments
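
A stylized Python sketch of a wild-bootstrap test of the martingale difference hypothesis. The statistic here is a plain Box-Pierce sum of squared autocorrelations, used only to show the mechanics; the paper's statistics also target nonlinear alternatives, and its modified wild bootstrap is not reproduced. Multiplying the series by i.i.d. Rademacher signs enforces the null while preserving the magnitudes of the data.

import numpy as np

def bp_stat(y, lags=5):
    # n * sum of squared sample autocorrelations up to 'lags'.
    y = y - y.mean()
    n, g0 = len(y), np.dot(y, y) / len(y)
    rho = [np.dot(y[k:], y[:-k]) / (n * g0) for k in range(1, lags + 1)]
    return n * sum(r * r for r in rho)

def wild_bootstrap_pvalue(y, B=499, lags=5, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = bp_stat(y, lags)
    t_star = [bp_stat(y * rng.choice((-1.0, 1.0), size=len(y)), lags)
              for _ in range(B)]
    return (1 + sum(t >= t_obs for t in t_star)) / (B + 1)

y = np.random.default_rng(1).standard_normal(500)   # satisfies the null
print(f"bootstrap p-value: {wild_bootstrap_pvalue(y):.3f}")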

Abstract:

In this paper we propose an iterative method for solving the inhomogeneous systems of linear equations associated with density fitting. The proposed method is based on a version of the conjugate gradient method that makes use of automatically built quasi-Newton preconditioners. The paper gives a detailed description of a parallel implementation of the new method. The computational performance of the new algorithms is analyzed by benchmark calculations on systems with up to about 35 000 auxiliary functions. Comparisons with the standard, direct approach show no significant differences in the computed solutions
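
A minimal preconditioned conjugate-gradient solver in Python for the symmetric positive-definite systems that arise here; a Jacobi (diagonal) preconditioner stands in for the automatically built quasi-Newton preconditioners of the paper, which this sketch does not attempt to reproduce.

import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    # Solve A x = b for SPD A, with preconditioner application M_inv(r).
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((200, 200))
A = Q @ Q.T + 200 * np.eye(200)           # SPD test matrix
b = rng.standard_normal(200)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))   # Jacobi preconditioner
print(np.linalg.norm(A @ x - b))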

Abstract:

The problem of robustification of interconnection and damping assignment passivity-based control for underactuated mechanical system vis-à-vis matched, constant, and unknown disturbances is addressed in the paper. This is achieved adding an outer-loop controller to the interconnection and damping assignment passivity-based control. Three designs are proposed, with the first one being a simple nonlinear PI, while the second and the third ones are nonlinear PIDs. While all controllers ensure stability of the desired equilibrium in spite of the presence of the disturbances, the inclusion of the derivative term allows us to inject further damping enlarging the class of systems for which asymptotic stability is ensured. Numerical simulations of the Acrobot system and experimental results on the disk-on-disk system illustrate the performance of the proposed controller

Abstract:

In this paper we present a dynamic model of marine vehicles in both body-fixed and inertial momentum coordinates using port-Hamiltonian framework. The dynamics in body-fixed coordinates have a particular structure of the mass matrix that allows the application of passivity-based control design developed for robust energy shaping stabilisation of mechanical systems described in terms of generalised coordinates. As an example of application, we follow this methodology to design a passivity-based controller with integral action for fully actuated vehicles in six degrees of freedom that tracks time-varying references and rejects disturbances. We illustrate the performance of this controller in a simulation example of an open-frame unmanned underwater vehicle subject to both constant and time-varying disturbances. We also describe a momentum transformation that allows an alternative model representation of marine craft dynamics that resembles general port-Hamiltonian mechanical systems with a coordinate dependent mass matrix
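
For reference, the input-state-output port-Hamiltonian form this line of work builds on can be written (generic notation, not the paper's exact model) as

ẋ = [J(x) − R(x)] ∇H(x) + g(x) u,    y = g(x)ᵀ ∇H(x),

with J(x) = −J(x)ᵀ the interconnection matrix, R(x) = R(x)ᵀ positive semidefinite the dissipation matrix, H the total energy, and (u, y) the power port; for mechanical systems such as marine craft, H is typically kinetic plus potential energy, H(q, p) = pᵀ M(q)⁻¹ p / 2 + U(q).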

Abstract:

The problem of robustification of Interconnection and Damping Assignment Passivity- Based Control (IDA-PBC) for underactuated mechanical system vis-à-vis matched, constant, unknown disturbances is addressed in the paper. This is achieved adding an outer-loop controller to the IDA-PBC. Three designs are proposed, with the first one being a simple nonlinear PI, while the second and the third ones are nonlinear PIDs. While all controllers ensure stability of the desired equilibrium in spite of the presence of the disturbances, the inclusion of the derivative term allows us to inject further damping enlarging the class of systems for which asymptotic stability is ensured

Abstract:

Control of underactuated mechanical systems via energy shaping is a well-established, robust design technique. Unfortunately, its application is often stymied by the need to solve partial differential equations (PDEs). In this technical note a new, fully constructive, procedure to shape the energy for a class of mechanical systems that obviates the solution of PDEs is proposed. The control law consists of a first stage of partial feedback linearization followed by a simple proportional plus integral controller acting on two new passive outputs. The class of systems for which the procedure is applicable is identified by imposing some (directly verifiable) conditions on the system's inertia matrix and its potential energy function. It is shown that these conditions are satisfied by three benchmark examples

Abstract:

To extend the realm of application of the well known controller design technique of interconnection and damping assignment passivity-based control (IDA-PBC) of mechanical systems two modifications to the standard method are presented in this article. First, similarly to Batlle et al. (2009) and Gómez-Estern and van der Schaft (2004), it is proposed to avoid the splitting of the control action into energy-shaping and damping injection terms, but instead to carry them out simultaneously. Second, motivated by Chang (2014), we propose to consider the inclusion of dissipative forces, going beyond the gyroscopic ones used in standard IDA-PBC. The contribution of our work is the proof that the addition of these two elements provides a non-trivial extension to the basic IDA-PBC methodology. It is also shown that several new controllers for mechanical systems designed invoking other (less systematic) procedures that do not satisfy the conditions of standard IDA-PBC actually belong to this new class of SIDA-PBC

Abstract:

To extend the realm of application of the well known controller design technique of interconnection and damping assignment passivity-based control (IDA-PBC) of mechanical systems two modifications to the standard method are presented in this article. First, similarly to [1], [13], it is proposed to avoid the splitting of the control action into energy-shaping and damping injection terms, but instead to carry them out simultaneously. Second, motivated by [2], we propose to consider the inclusion of dissipative forces, going beyond the gyroscopic ones used in standard IDA-PBC. It can be shown that new controllers for mechanical systems that do not satisfy the conditions of standard IDA-PBC, actually belong to this new class of SIDA-PBC

Abstract:

As Mexico slouches from economic meltdown to recalcitrant recovery, several questions loom large in the minds of pundits and investors, Mexicans and foreigners alike: Will President Ernesto Zedillo maintain current economic policy, or will he succumb to political pressures and electoral cycles? Will the social fabric unravel, or will it withstand the brunt of 'adjustment fatigue'? And is the predicted demise of the PRI likely, or will the party display its traditional resilience?

Abstract:

Robots with bimanual morphology usually possess higher flexibility, dexterity, and efficiency than those only equipped with a single arm. The dual-arm structure has enabled robots to perform various intricate tasks that are difficult or even impossible to achieve by unimanipulation. In this article, we aim to achieve robust bimanual grasping for object transportation. In particular, provided that stable contact is the key to the success of the transportation task, our focus lies on stabilizing the contact between the object and the robot end-effectors by employing the contact servoing strategy. To ensure that the contact is stable, the contact wrenches are required to evolve within the so-called friction cones all the time throughout the transportation task. To this end, we propose stabilizing the contact by leveraging a novel contact parameterization model. Parameterization expresses the contact stability manifold with a set of constraint-free exogenous parameters where the mapping is bijective. Notably, such parameterization can guarantee that the contact stability constraints can always be satisfied. We also show that many commonly used contact models can be parameterized out of a similar principle. Furthermore, to exploit the parameterized contact models in the control law, we devise a contact servoing strategy for the bimanual robotic system such that the force feedback signals from the force/torque sensors are incorporated into the control loop. The effectiveness of the proposed approach is well demonstrated with the experiments on several representative bimanual transportation tasks
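
Concretely, for a point contact with friction coefficient μ, the stability requirement quoted above is that the contact force f = (f_n, f_t) stay inside the friction cone, ‖f_t‖ ≤ μ f_n with f_n ≥ 0, at every instant of the task. A constraint-free parameterization in the abstract's sense is a smooth bijection from unconstrained parameters onto this set; one illustrative planar example (not the paper's model) is f_n = exp(s), f_t = μ f_n tanh(v) with (s, v) ranging over the plane, which satisfies the cone constraints for any parameter values.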

Abstract:

Sick pay is a common provision in most labor contracts. This paper employs an experimental gift exchange environment to explore two related questions using both managers and undergraduates as subjects. First, do workers reciprocate generous sick pay with higher effort? Second, do firms benefit from offering sick pay in terms of profits? We find that workers do reciprocate generous sick pay with higher effort, but that firms profit from offering sick pay only if there is competition among firms for workers. Consequently, competition leads to a higher voluntary provision of sick pay relative to a monopsonistic labor market

Abstract:

A kinetic model for the Boltzmann equation is proposed and explored as a practical means to investigate the properties of a dilute granular gas. It is shown that all spatially homogeneous initial distributions approach a universal "homogeneous cooling solution" after a few collisions. The homogeneous cooling solution (HCS) is studied in some detail and the exact solution is compared with known results for the hard sphere Boltzmann equation. It is shown that all qualitative features of the HCS, including the nature of overpopulation at large velocities, are reproduced by the kinetic model. It is also shown that all the transport coefficients are in excellent agreement with those from the Boltzmann equation. Also, the model is specialized to one having a velocity independent collision frequency and the resulting HCS and transport coefficients are compared to known results for the Maxwell model. The potential of the model for the study of more complex spatially inhomogeneous states is discussed

Abstract:

Recent theoretical analyses of the two-time joint-probability density for electric-field dynamics in a strongly coupled plasma have included formal short-time expansions. Here we compare the short-time-expansion results for the associated generating function with molecular-dynamics-simulation results for the special case of fields at a neutral point in a one-component plasma with plasma parameter Γ = 10. The agreement is quite good for times ω_p t ≤ 2, although more general application of the short-time expansion requires some important qualifications

Abstract:

The dynamics of electric fields at a neutral or charged point in a one-component plasma is considered. The equilibrium joint probability density for electric-field values at two different times is defined, and several formally exact limits are described in some detail. The asymptotic short-time behavior for both neutral and charged-point cases is shown to be Gaussian with respect to the field differences, but with a half-width depending on their sum. In the strong-coupling limit, the joint probability density is dominated by weak fields (charged-point case), leading to a Gaussian distribution with time dependence entirely determined from the electric-field time-correlation function. The limit of large fields is shown to be determined by the time-dependent autocorrelation function for the density of ions around the field point; for the special case of fields at a neutral point, this result implies that the joint distribution at large fields is determined entirely by the dynamic structure factor. Finally, the full distribution (all field values and times) is studied in the weak-coupling limit

Abstract:

The equilibrium joint probability density for electric fields at two different times is considered for both neutral and charged points. The behavior of this distribution function is discussed in the Gaussian, short time, and high field limits. An approximate global description is proposed using an independent particle model as an extension of corresponding approximations for the single time field distribution

Abstract:

We develop a theory of media slant as a systematic filtering of political news that reduces multidimensional politics to the one-dimensional space perceived by voters. Economic and political choices are interdependent in our theory: expected electoral results influence economic choices, and economic choices in turn influence voting behaviour. In a two-candidate election, we show that media favouring the front-runner will focus on issues unlikely to deliver a surprise, while media favouring the underdog will gamble for resurrection. We characterize the socially optimal slant and show that it coincides with the one favoured by the underdog under a variety of circumstances. Balanced media, giving each issue equal coverage, may be worse for voters than partisan media

Abstract:

We model voting in juries as a game of incomplete information, allowing jurors to receive a continuum of signals. We characterize the unique symmetric equilibrium of the game, and give a condition under which no asymmetric equilibria exist under unanimity rule. We offer a condition under which unanimity rule exhibits a bias toward convicting the innocent, regardless of the size of the jury, and give an example showing that this bias can be reversed. We prove a "jury theorem" for our general model: As the size of the jury increases, the probability of a mistaken judgment goes to zero for every voting rule except unanimity rule. For unanimity rule, the probability of making a mistake is bounded strictly above zero if and only if there do not exist arbitrarily strong signals of innocence. Our results explain the asymptotic inefficiency of unanimity rule in finite models and establish the possibility of asymptotic efficiency, a property that could emerge only in a continuous model

Abstract:

We study diffeomorphisms that have one-parameter families of continuous symmetries. For general maps, in contrast to the symplectic case, existence of a symmetry no longer implies existence of an invariant. Conversely, a map with an invariant need not have a symmetry. We show that when a symmetry flow has a global Poincaré section there are coordinates in which the map takes a reduced, skew-product form, and hence allows for a reduction of dimensionality. We show that the reduction of a volume-preserving map is again volume preserving. Finally, we sharpen the Noether theorem for symplectic maps. A number of illustrative examples are discussed and the method is compared with traditional reduction techniques

Abstract:

Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require O(10^5) model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only O(10^2) model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter-to-data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate-emulate-sample (CES) approach in a perfect-model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties
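
To make the three stages concrete, the following toy sketch runs the pipeline end to end (hedged: the map G, the priors, the dimensions, and all numerical settings are invented stand-ins for an expensive climate model; the EKI variant shown is the basic perturbed-observation form):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def G(theta):
    # Toy parameter-to-data map standing in for a GCM run.
    return np.array([theta[0] + theta[1] ** 2, theta[0] * theta[1]])

theta_true = np.array([1.0, 2.0])
noise_cov = 0.05 * np.eye(2)
y_obs = G(theta_true) + rng.multivariate_normal(np.zeros(2), noise_cov)

# (a) Calibrate: a few iterations of ensemble Kalman inversion (EKI).
J = 50
ens = rng.normal(0.0, 2.0, size=(J, 2))
train_theta, train_g = [], []
for _ in range(10):
    g = np.array([G(th) for th in ens])
    train_theta.append(ens.copy()); train_g.append(g)
    tm, gm = ens.mean(0), g.mean(0)
    C_tg = (ens - tm).T @ (g - gm) / J
    C_gg = (g - gm).T @ (g - gm) / J + noise_cov
    K = C_tg @ np.linalg.inv(C_gg)              # Kalman-type gain
    y_pert = y_obs + rng.multivariate_normal(np.zeros(2), noise_cov, J)
    ens = ens + (y_pert - g) @ K.T

# (b) Emulate: GP regression trained on the calibration-stage model runs.
X, Y = np.vstack(train_theta), np.vstack(train_g)
gps = [GaussianProcessRegressor().fit(X, Y[:, i]) for i in range(Y.shape[1])]
emulate = lambda th: np.array([gp.predict(th[None, :])[0] for gp in gps])

# (c) Sample: random-walk Metropolis on the cheap emulator, not the model.
def log_post(th):
    r = y_obs - emulate(th)
    return -0.5 * r @ np.linalg.solve(noise_cov, r) - 0.5 * th @ th / 4.0

th = ens.mean(0); lp = log_post(th); samples = []
for _ in range(2000):
    prop = th + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        th, lp = prop, lp_prop
    samples.append(th)
print("approximate posterior mean:", np.mean(samples, axis=0))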

Abstract:

This article elaborates the concepts of techno-colonialism and sub-netizenship to explore the renewal of colonial processes through the digitalization of "democracy." Techno-colonialism is conceived as a frame, adopted consciously and unconsciously, that shapes capitalist social relations and people's political participation. Today, this frame appeals to the idealized netizen, a global, free, equal and networked subject that gains full membership in a political community. Meanwhile, sub-netizenship is a novel form of political subordination based on race, ethnicity, class, gender, language, temporality, and geography within a global matrix that crosses the analogue-digital dimensions of life. This techno-colonialism/sub-netizenship dynamic was manifested in the experience of Marichuy as an independent indigenous pre-candidate in the Mexican presidential elections of 2018. In a highly unequal and diverse country, aspirants required a tablet or smartphone to collect citizen support via a monolingual app accessible only to Google or Facebook users. Our analysis reveals how some individuals are excluded and disenfranchised by digital innovation yet still resist a legal system that seeks to homogenize them and render them into legible and marketable data

Abstract:

Despite ongoing interest in deploying information and communication technologies (ICTs) for sustainable development, their use in climate change adaptation remains understudied. Based on the integration of adaptation theory and the existing literature on the use of ICTs in development, we present an analytical model for conceptualizing the contribution of existing ICTs to adaptation, and a framework for evaluating ICT success. We apply the framework to four case studies of ICTs in use for early warning systems and managing extreme events in Latin American and Caribbean countries. We propose that existing ICTs can support adaptation by enabling access to critical information for decision-making, coordinating actors, and building social capital. ICTs also allow actors to communicate and disseminate their decision experience, thus enhancing opportunities for collective learning and continual improvement in adaptation processes. In this way, ICTs can both communicate the current and potential impacts of climate change and engage populations in the development of viable adaptation strategies

Abstract:

We examine the welfare properties of surplus maximization by embedding a perfectly discriminating monopoly in an otherwise standard Arrow-Debreu economy. Although we discover an inefficient equilibrium, we validate partial equilibrium intuition by showing: (i) that equilibria are efficient provided that the monopoly goods are costly, and (ii) that a natural monopoly can typically use personalized two-part tariffs in these equilibria. However, we find that Pareto optima are sometimes incompatible with surplus maximization, even when transfer payments are used. We provide insight into the source of this difficulty and give some instructive examples of economies where a second welfare theorem holds

Abstract:

We ask when firms with increasing returns can cover their costs independently by charging two-part tariffs (TPTs), a condition we call independent viability. To answer, we develop notions of substitutability and complementarity that account for the total value of goods and use them to find the maximum extractable surplus. We then show that independent viability is a sufficient condition for existence of a general equilibrium in which regulated natural monopolies use TPTs. Independent viability also guarantees efficiency when the increasing returns arise solely from fixed costs. For arbitrary technologies, it ensures that a second welfare theorem holds

Abstract:

We report the findings from a study that explores candidate participation in a context where citizens can become candidates under both plurality and run-off voting systems. The study also considers the influence of entry costs and different platforms of potential candidates. While our findings align with the expected outcomes of the citizen-candidate model, there is notable over-participation by candidates in less favorable electoral positions. These entry patterns are well captured by the quantal response equilibrium (QRE). This research adds to the existing body of knowledge about what motivates candidates to enter races under different voting systems and analyzes the behavior of candidates in extreme positions

Abstract:

We study theoretically and experimentally committee decision making with common interests. Committee members do not know which of two alternatives is optimal, but each member can acquire a private costly signal before casting a vote under either majority or unanimity rule. In the experiment, as predicted by Bayesian equilibrium, voters are more likely to acquire information under majority rule, and vote strategically under unanimity rule. As opposed to Bayesian equilibrium predictions, however, many committee members vote when uninformed. Moreover, uninformed voting is strongly associated with a lower propensity to acquire information. We show that an equilibrium model of subjective prior beliefs can account for both these phenomena, and provides a good overall fit to the observed patterns of behavior both in terms of rational ignorance and biases

Abstract:

We conduct a laboratory study of group-on-group ultimatum bargaining with restricted within-group interaction. In this context, we concentrate on the effect of different within-group voting procedures on the bargaining outcomes. Our experimental observations can be summarized in two propositions. First, individual responder behavior does not show statistically significant variation across voting rules, implying that group decisions may be viewed as aggregations of independent individual decisions. Second, we observe that proposer behavior significantly depends (in the manner predicted by a simple model) on the within-group decision rule in force among the responders and is generally different from proposer behavior in one-on-one bargaining

Abstract:

This work reports the results of in vivo assays of an implant composed of the hydrogel Chitosan-g-Glycidyl Methacrylate-Xanthan [(CTS-g-GMA)-X] in Wistar rats. Degradation kinetics of the hydrogels was assessed by lysozyme assays. Wistar rats were subjected to laminectomy by cutting the spinal cord with a scalpel. After the surgical procedure, hydrogels were implanted in the injured zone (level T8). Somatosensory evoked potentials (SEPs) elicited by electric stimulation of peripheral nerves were registered in the corresponding central nervous system (CNS) areas. Rats implanted with the biomaterials showed a successful recovery compared with the non-implanted rats after 30 days. Lysozyme, derived from egg whites, was used for the in vitro assays. This study serves as the basis for testing the biodegradability of the (CTS-g-GMA)-X hydrogels, which is promoted by enzymatic hydrolysis. The hydrogels' hydrolysis was studied via lysozyme kinetics at two pH values, 5 and 7, under mechanical agitation at 37 °C. Results show that our materials' hydrolysis is slower than that of pure CTS, possibly due to the steric hindrance imposed by the grafted GMA functionalization. This hydrolysis helps degrade the biomaterial while providing support for spinal cord recovery. Taken together, these results support the use of these hydrogels as scaffolds for cell proliferation and their application as implants in living organisms

Abstract:

This paper considers the problem of forecasting claims that have been incurred but not yet reported. The forecast is used by insurance companies to calculate the reserve that must be set aside for claims pending payment, although the calculation of the reserve itself is not addressed here. Forecasting methods are reviewed, highlighting one that arises from a statistical model and produces forecasts with minimum mean squared error. Its use is illustrated with real data on claims in the automobile line. The advantages of the method, compared with others, are a reduction of the subjectivity involved in its use and the possibility of measuring the uncertainty associated with the forecasts

Abstract:

This research considers the problem of forecasting incurred but not reported claims. The forecast is used to calculate the reserve that insurance companies require to face claims pending payment; however, the reserve calculation itself is not discussed in this paper. Forecasting methods are reviewed and emphasis is placed on one that emerges from a statistical model and provides minimum mean square error forecasts. Its use is illustrated with real automobile claim data. The advantages of this method, as compared with others, are the reduction of subjectivity in its use and the possibility of measuring the uncertainty associated with the forecasts

Abstract:

Why does entrepreneurship training work? We argue that the feedback loop of the opportunity development process is a training element that can explain the effectiveness of entrepreneurship training. Building on action regulation theory, we model the feedback loop as a recursive cycle of changes in the business opportunity, goals, performance outcomes, and feedback. Furthermore, we hypothesize that error orientation and monitoring can strengthen or weaken the cycle, and that going through the feedback loop during training explains short- and long-term training outcomes. To test our hypotheses, we collected data before, during, and after an entrepreneurship training program. Results support our hypotheses, suggesting that the feedback loop of the opportunity development process is a concept that can explain why entrepreneurship training is effective

Abstract:

We examine how the possibility of a bank run affects the investment decisions made by a competitive bank. Cooper and Ross [1998. Bank runs: liquidity costs and investment distortions. Journal of Monetary Economics 41, 27–38] have shown that when the probability of a run is small, the bank will offer a contract that admits a bank-run equilibrium. We show that, in this case, the bank will choose to hold an amount of liquid reserves exactly equal to what withdrawal demand will be if a run does not occur; precautionary or “excess” liquidity will not be held. This result allows us to show that when the cost of liquidating investment early is high, an increase in the probability of a run will lead the bank to invest less. However, when liquidation costs are moderate, the level of investment is increasing in the probability of a run

Abstract:

This paper introduces an approach to the study of optimal government policy in economies characterized by a coordination problem and multiple equilibria. Such models are often criticized as not being useful for policy analysis because they fail to assign a unique prediction to each possible policy choice. We employ a selection mechanism that assigns, ex ante, a probability to each equilibrium indicating how likely it is to obtain. We show how such a mechanism can be derived as the natural result of an adaptive learning process. This approach leads to a well-defined optimal policy problem, and has important implications for the conduct of government policy. We illustrate these implications using a simple model of technology adoption under network externalities

Abstract:

We study optimal fiscal policy in an economy where (i) search frictions create a coordination problem and generate multiple, Pareto-ranked equilibria and (ii) the government finances the provision of a public good by taxing market activity. The government must choose the tax rate before it knows which equilibrium will obtain, and therefore an important part of the problem is determining how the policy will affect the equilibrium selection process. We show that when the equilibrium selection rule is based on the concept of risk dominance, higher tax rates make coordination on the Pareto-superior outcome less likely. As a result, taking equilibrium-selection effects into account leads to a lower optimal tax rate
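
For reference, the selection criterion invoked here can be stated compactly for a symmetric 2x2 coordination game (the standard Harsanyi-Selten notion of risk dominance; the paper's economy is of course richer than this stylized statement):

% Payoffs u(A,A)=a, u(A,B)=b, u(B,A)=c, u(B,B)=d, with a > c and d > b,
% so that (A,A) and (B,B) are both strict Nash equilibria. Then (A,A)
% risk-dominates (B,B) iff its product of deviation losses is larger:
%
%   (a - c)(a - c) \ge (d - b)(d - b) \quad\Longleftrightarrow\quad a - c \ge d - b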

Abstract:

We construct an endogenous growth model in which bank runs occur with positive probability in equilibrium. In this setting, a bank run has a permanent effect on the levels of the capital stock and of output. In addition, the possibility of a run changes the portfolio choices of depositors and of banks, and thereby affects the long-run growth rate. These facts imply that both the occurrence of a run and the mere possibility of runs in a given period have a large impact on all future periods. A bank run in our model is triggered by sunspots, and we consider two different equilibrium selection rules. In the first, a run occurs with a fixed, exogenous probability, while in the second the probability of a run is influenced by banks' portfolio choices. We show that when the choices of an individual bank affect the probability of a run on that bank, the economy both grows faster and experiences fewer runs

Abstract:

Social media's capacity to quickly and inexpensively reach large audiences almost simultaneously has the potential to promote electoral accountability. Beyond increasing direct exposure to information, high saturation campaigns, which target substantial fractions of an electorate, may induce or amplify information diffusion, persuasion, or coordination between voters. Randomizing saturation across municipalities, we evaluate the electoral impact of non-partisan Facebook ads informing millions of Mexican citizens of municipal expenditure irregularities in 2018. The vote shares of incumbent parties that engaged in zero or negligible irregularities increased by 6-7 percentage points in directly targeted electoral precincts. This direct effect, but also the indirect effect in untargeted precincts within treated municipalities, was significantly greater where ads targeted 80%, rather than 20%, of the municipal electorate. The amplifying effects of high saturation campaigns are driven by citizens within more socially connected municipalities, rather than responses by politicians or media outlets. These findings demonstrate how mass media can ignite social interactions to promote political accountability

Abstract:

In the context of implementing measures to mitigate climate change, various technologies have emerged or been deployed to reduce its effects, among them the capture and storage of carbon dioxide (CO2). Although this technology is used to capture CO2 in processes with high GHG emissions, it is well suited to a stage of energy transition. Like any process of this nature, its application entails various risks, such as environmental damage or harm to human health; however, given the long time horizon involved in CO2 storage, regulating the liability of the responsible agents is fundamental. Given the need to implement this type of technology, its regulation is essential, so this article proposes as a basis of study for the CCS regulatory model the International Oil Pollution Compensation Funds (the IOPC Funds), schemes that have proven to provide certainty and security against contingencies in long-term projects or operations

Abstract:

In the context of implementing measures to mitigate climate change, various technologies have emerged or have been deployed to reduce its effects, including the capture and storage of carbon dioxide (CO2). Although this technology is used to capture CO2 in processes with high GHG emissions, it is an ideal technology during a stage of energy transition. Like any process of this nature, its application entails various risks, such as environmental damage or effects on human health; however, due to the long time horizon that CO2 storage entails, regulating the liability of the responsible agents is fundamental. Given the need to implement this type of technology, its regulation is essential, which is why this article proposes as a basis of study for the CCS regulatory model the International Oil Pollution Compensation Funds (the IOPC Funds), schemes that have proven to provide certainty and security in the face of contingencies in long-term projects or operations

Abstract:

Following a brief overview of certain disputes in the Latin American energy sector, the author comments on three relevant contractual mechanisms that investors and host States have jointly designed from a prevention perspective. In doing so, these players refocus their incentives once a material change of circumstances emerges. These mechanisms are the following: (i) renegotiation clauses; (ii) stabilization clauses; and (iii) economic standardization factors

Abstract:

After a brief introduction to certain disputes arising in connection with the Latin American energy sector, the author comments on three relevant contractual mechanisms that both investors and host States have jointly designed from a prevention perspective. In so doing, such players aim to refocus their incentives when a material change of circumstances emerges. These mechanisms are the following: (i) renegotiation clauses; (ii) stabilization clauses; and (iii) economic standardization factors

Abstract:

This article formulates some criticism of traditional Comparative Law and elaborates on its transition to sustainable Comparative Law. The article further explains the need to incorporate cultural dialogues (multiculturalism and interculturalism) and the joint efforts of the sciences (transdisciplinarity) as fundamental elements of the contemporary comparative method

Abstract:

The purpose of this article is to analyze the criticisms of, and replies to, the functions of the IMF and the WB, with special reference to the field of international human rights law. It is argued that States parties to international human rights treaties are obliged to be consistent in their negotiations and agreements with the IMF and the WB, and thus incur international responsibility if they breach this obligation. The article also explores the duty of the IMF and the WB themselves to respect international human rights obligations, given their status as international custom

Abstract:

This article analyses the criticisms and answers regarding the functions of the IMF and the WB, with special reference to the field of International Law of Human Rights. It is argued that the State Parties to international treaties on human rights are obliged to be coherent in their negotiations and agreements with both the IMF and the WB, as breaching said obligations amounts to their international liability. Likewise, the article explores the duty of the IMF and the WB to respect international obligations on human rights considering their character of International Customary Law

Abstract:

The purpose of this article is to analyze a set of critical perspectives that currently exist on international economic law. The article advocates that the international instruments of this discipline take into consideration universal values, principles and norms accepted internationally both by civil society in general and by States, through the various sources of international law. The article is part of the first phase of the author's research on the application of such values, principles and norms in certain international instruments of the World Bank (WB), the World Trade Organization (WTO), the United Nations Development Programme (UNDP) and the Organisation for Economic Co-operation and Development (OECD)

Abstract:

The purpose of the present paper is to analyze a group of current perspectives regarding International Economic Law. The paper advocates for the consideration of universal values, principles and rules within the instruments of said legal discipline, as accepted by both civil society and the States as per the sources of Public International Law. The paper is part of the first stage of the author's academic research with respect to the application of those values, principles and rules to certain international instruments of the World Bank (WB), the World Trade Organization (WTO), the United Nations Development Programme (UNDP), and the Organization for Economic Cooperation and Development (OECD)

Abstract:

A dynamic multi-level factor model with possible stochastic time trends is proposed. In the model, long-range dependence and short memory dynamics are allowed in global and local common factors as well as model innovations. Estimation of global and local common factors is performed on the prewhitened series, for which the prewhitening parameter is estimated semiparametrically from the cross-sectional and local average of the observable series. Employing canonical correlation analysis and a sequential least-squares algorithm on the prewhitened series, the resulting multi-level factor estimates have centered asymptotic normal distributions under certain rate conditions depending on the bandwidth and cross-section size. Asymptotic results for common components are also established. The selection of the number of global and local factors is discussed. The methodology is shown to lead to good small-sample performance via Monte Carlo simulations. The method is then applied to the Nord Pool electricity market for the analysis of price comovements among different regions within the power grid. The global factor is identified to be the system price, and fractional cointegration relationships are found between local prices and the system price, motivating a long-run equilibrium relationship. Two forecasting exercises are then discussed

Abstract:

Equilibrium electricity spot prices and loads are often determined simultaneously in a day-ahead auction market for each hour of the subsequent day. Hence daily observations of hourly prices take the form of a periodic panel rather than a time series of hourly observations. We consider novel panel data approaches to analyse the time series and the cross-sectional dependence of hourly Nord Pool electricity spot prices and loads for the period 2000-2013. Hourly electricity prices and load data are characterized by strong serial long-range dependence in the time series dimension in addition to strong seasonal periodicity, and along the cross-sectional dimension, i.e. the hours of the day, there is a strong dependence which necessarily has to be accounted for in order to avoid spurious inference when focusing on the time series dependence alone. The long-range dependence is modelled in terms of a fractionally integrated panel data model and it is shown that both prices and loads consist of common factors with long memory and with loadings that vary considerably during the day. Due to the competitiveness of the Nordic power market the aggregate supply curve approximates well the marginal costs of the underlying production technology and because the demand is more volatile than the supply, equilibrium prices and loads are argued to identify the periodic power supply curve. The supply elasticities are estimated from fractionally cointegrated relations and range between 0.5 and 1.17, with the largest elasticities estimated during morning and evening peak hours

Abstract:

Drawing upon signaling theory, charismatic leadership tactics (CLTs) have been identified as a trainable set of skills. Although organizations rely on technology-mediated communication, the effects of CLTs have not been examined in a virtual context. Preregistered experiments were conducted in face-to-face (Study 1; n = 121) and virtual settings (Study 2; n = 128) in the United States. In Study 3, we conducted virtual replications in Austria (n = 134), France (n = 137), India (n = 128), and Mexico (n = 124). Combined with past experiments, the meta-analytic effect of CLTs on performance (Cohen's d = 0.52 in-person, k = 4; Cohen's d = 0.21 overall, k = 10) and engagement in an extra-role task (Cohen's d = 0.19 overall; k = 6) indicate large to moderate effects. Yet, for performance in a virtual context Cohen's d ranged from −0.25 to 0.17 (Cohen's d = 0.01 overall; k = 6). Study 4 (n = 129) provided mixed support for signaling theory in a virtual context, linking CLTs to some positive evaluations. We conclude with guidance for future research on charismatic leadership and signaling theory

Abstract:

We assess relative performance of three recently proposed instrument selection methods via a Monte Carlo study that investigates the finite sample behavior of the post-selection estimator of a simple linear IV model. Our results suggest that no one method dominates

Abstract:

In the normal linear simultaneous equations model, we demonstrate a close relationship between two recently proposed methods of instrument selection by presenting a fundamental relationship between the two sets of canonical correlations upon which the methods are based

Abstract:

This article introduces a data-driven Box-Pierce test for serial correlation. The proposed test is very attractive compared to the existing ones. In particular, implementation of this test is extremely simple for two reasons: first, the researcher does not need to specify the order of the autocorrelation tested, since the test automatically chooses this number; second, its asymptotic null distribution is chi-square with one degree of freedom, so there is no need to use a bootstrap procedure to estimate the critical values. In addition, the test is robust to the presence of conditional heteroskedasticity of unknown form. Finally, the proposed test presents higher power in simulations than the existing ones for models commonly employed in empirical finance
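
A minimal sketch of such a test (hedged: the robustified autocorrelations, the BIC-type penalty for choosing the lag automatically, and the chi-square(1) critical value follow the description in the abstract, but the exact definitions in the paper may differ):

import numpy as np
from scipy.stats import chi2

def auto_box_pierce(x, max_lag=10):
    # Data-driven Box-Pierce: heteroskedasticity-robust autocorrelations,
    # automatic lag choice via a BIC-type penalty, chi-square(1) null.
    x = np.asarray(x, dtype=float)
    n = len(x)
    e = x - x.mean()
    stats = []
    for j in range(1, max_lag + 1):
        num = np.sum(e[j:] * e[:-j]) / n            # autocovariance at lag j
        var = np.sum((e[j:] * e[:-j]) ** 2) / n     # robust variance estimate
        stats.append(n * num ** 2 / var)
    q = np.cumsum(stats)                            # Q_p for p = 1..max_lag
    p = np.arange(1, max_lag + 1)
    p_star = int(np.argmax(q - p * np.log(n))) + 1  # automatic order choice
    return q[p_star - 1], p_star, chi2.sf(q[p_star - 1], df=1)

stat, p, pval = auto_box_pierce(np.random.default_rng(1).standard_normal(500))
print(f"Q = {stat:.2f} at automatic lag {p}, p-value = {pval:.3f}")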

Abstract:

This article introduces an automatic test for the correct specification of a vector autoregression (VAR) model. The proposed test statistic is a Portmanteau statistic with an automatic selection of the order of the residual serial correlation tested. The test presents several attractive characteristics: simplicity, robustness, and high power in finite samples. The test is simple to implement since the researcher does not need to specify the order of the autocorrelation tested and the proposed critical values are simple to approximate, without resorting to bootstrap procedures. In addition, the test is robust to the presence of conditional heteroscedasticity of unknown form and accounts for estimation uncertainty without requiring the computation of large-dimensional inverses of near-to-singularity covariance matrices. The basic methodology is extended to general nonlinear multivariate time series models. Simulations show that the proposed test presents higher power than the existing ones for models commonly employed in empirical macroeconomics and empirical finance. Finally, the test is applied to the classical bivariate VAR model for GNP (gross national product) and unemployment of Blanchard and Quah (1989) and Evans (1989). Online supplementary material includes proofs and additional details

Abstract:

Multi-server queueing systems with Poisson arrivals and Erlangian service times are among the most applicable of what are considered "easy" systems in queueing theory. By selecting the proper order, Erlangian service times can be used to approximate reasonably well many general types of service times which have a unimodal distribution and a coefficient of variation less than or equal to 1. In view of their practical importance, it may be surprising that the existing literature on these systems is quite sparse. The probable reason is that, while it is indeed possible to represent these systems through a Markov process, serious difficulties arise because of (1) the very large number of system states that may be present with increasing Erlang order and/or number of servers, and (2) the complex state transition probabilities that one has to consider. Using a standard numerical approach, solutions of the balance equations describing systems with even a modest Erlang order and number of servers require extensive computational effort and become impractical for larger systems. In this paper we illustrate these difficulties and present the equally likely combinations (ELC) heuristic which provides excellent approximations to typical equilibrium behavior measures of interest for a wide range of stationary multiserver systems with Poisson arrivals and Erlangian service. As system size grows, ELC computational times can be more than 1000 times faster than those for the exact approach. We also illustrate this heuristic's ability to estimate accurately system response under transient and/or dynamic conditions
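
The ELC heuristic itself is not reproduced here, but the kind of equilibrium measure it approximates can be cross-checked by brute force. A plain discrete-event simulation of an M/E_k/s queue (all parameter values illustrative) might look as follows:

import heapq, random
from collections import deque

def erlang(rng, k, phase_rate):
    # Erlang-k service time: sum of k exponential phases.
    return sum(rng.expovariate(phase_rate) for _ in range(k))

def simulate_mek_s(lam=4.0, mu=5.0, k=3, s=2, n=200_000, seed=1):
    # M/E_k/s queue: Poisson(lam) arrivals, Erlang-k service with mean
    # 1/mu (k phases, each at rate k*mu), s servers, FIFO discipline.
    rng = random.Random(seed)
    t, busy, fifo, total_wait = 0.0, [], deque(), 0.0
    for _ in range(n):
        t += rng.expovariate(lam)
        # Process departures before this arrival; each freed server
        # immediately takes the next waiting customer, if any.
        while busy and busy[0] <= t:
            done = heapq.heappop(busy)
            if fifo:
                total_wait += done - fifo.popleft()
                heapq.heappush(busy, done + erlang(rng, k, k * mu))
        if len(busy) < s:
            heapq.heappush(busy, t + erlang(rng, k, k * mu))  # served at once
        else:
            fifo.append(t)
    return total_wait / n  # mean wait in queue, Wq

print("estimated Wq:", simulate_mek_s())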

Abstract:

Credit granting decisions are crucial in risk management. Financial institutions have developed and used credit scoring models to standardize and automate credit decisions; however, it is not common to find methodologies for applying them to clients without credit references, that is, clients who lack information in the national credit bureaus. This paper presents a general methodology for building a simple credit scoring model aimed precisely at that population, which has been gaining importance in the Latin American credit sector. Sociodemographic information from the credit applications of a small Mexican bank is used to illustrate the methodology

Abstract:

Credit granting decisions are crucial for risk management. Financial institutions have developed and used credit scoring models to automate and standardize credit granting. However, it is not common in the literature to find a methodology that can be applied to clients without previous credit experience, in other words, those who lack information in the national credit bureaus. In this paper a basic methodology to build a scorecard model for precisely this population is presented, a population that has become increasingly relevant as Latin American banks extend credit policies in its favor. We use sociodemographic information on a target population from a small Mexican bank to illustrate the methodology

Abstract:

Credit decisions are crucial for risk management. Financial institutions have developed and used credit scoring models in order to standardize and automate credit decisions; however, it is not common to find methodologies applicable to clients without credit references, namely clients who have no information in the national credit bureaus. This article presents a general method for building a simple credit scoring model aimed precisely at that population, which has gained increasing importance in the Latin American credit sector. Sociodemographic information from credit applications to a small Mexican bank is used to illustrate the methodology
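
To make the scorecard construction concrete, here is a hedged sketch with entirely synthetic data, hypothetical variable names, and one common points-scaling convention (the paper's actual methodology is richer than this):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age":        rng.integers(20, 65, 2000),
    "dependents": rng.integers(0, 5, 2000),
    "own_home":   rng.integers(0, 2, 2000),
})
# Synthetic default flag loosely tied to the features (illustrative only).
z = -0.04 * df["age"] + 0.3 * df["dependents"] - 0.8 * df["own_home"] + 1.0
df["default"] = rng.random(2000) < 1.0 / (1.0 + np.exp(-z))

features = df[["age", "dependents", "own_home"]]
model = LogisticRegression().fit(features, df["default"])

# Map log-odds to a conventional points scale: 600 points at 30:1
# good/bad odds, 20 points to double the odds (conventions vary).
pdo, base_odds, base_pts = 20.0, 30.0, 600.0
factor = pdo / np.log(2)
log_odds_good = -model.decision_function(features)
scores = base_pts + factor * (log_odds_good - np.log(base_odds))
print(np.round(scores[:5]))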

Abstract:

The configuration of medieval and modern politics, in its structural axes of "sacrality" and "secularity," cannot be understood without considering the readings that the intellectuals, philosophers and writers of Europe between the thirteenth and seventeenth centuries made of the classical authors. This research covers the study of four Latin writers, Cicero, Seneca, Livy and Tacitus, as reference models which, through a "reconstructive" reading, underpinned the formation of the principal political models of modern Europe. Through the dialogue that notable representatives of scholastic theology, Machiavellian realism, political Jesuitism and Baroque absolutism entered into with these classical authors, molds took shape that would constitute the forms of government of late-medieval, Renaissance and Baroque princes and monarchs. The core of our analysis is precisely the dichotomy between the sacrality of the governing prince, which became the principal motif of medieval and ecclesiastical politics, and its desacralization and semi-desacralization in Machiavellian forms of governing (in the first case) and in absolutist Counter-Reformation ones (in the second). We draw on the method of reception aesthetics to trace the classical-modern dialogue, observing the extent to which the political writers of medieval and modern Europe "colored," "concretized" and "filled in" their own "horizons of expectations," with different ideological slants, so as to configure political models adapted to the historical period under study

Abstract:

If the virtus of antiquity was strength, bravery and courage, and in modernity diligence, work, merit and effort, what has now taken hold is commoditas, a kind of nihilistic pseudo-virtue that vindicates idleness, radical egalitarianism and entertainment. Contemporary postmodern society has moved beyond Greco-Roman philosophical metaphysics and Christian theology, Enlightenment rationalism and empiricism as well as decadent romantic idealism and materialist positivism, and has entered a sort of end of history of self-indulgent, self-satisfied playful nihilism. We conclude that commoditas is here to stay and will reconfigure the nature of the human being according to a new mentality that causes tradition to pale almost to the point of annulling it

Abstract:

If the virtus of antiquity was strength, bravery and courage, and in Modernity diligence, work, merit and effort, today commoditas has taken hold: a kind of nihilistic pseudo-virtue that vindicates idleness, radical egalitarianism, and entertainment. Contemporary postmodern society has surpassed both Greco-Roman philosophical metaphysics and Christian theology, both rationalism and enlightened empiricism, as well as romantic-decadent idealism and materialist positivism, and has thus entered a sort of end of history of self-indulgent and self-satisfied playful nihilism. We conclude that commoditas is here to stay and will reconfigure the nature of the human being on the basis of a new mentality that causes tradition to pale to the point of practically annulling it

Abstract:

Plagues and pandemics are topical because of the Covid-19 crisis. Although our world is not accustomed to epidemics, they have been very frequent throughout the history of humanity. This study offers an analysis of the various plagues that ravaged the Greco-Roman world, with the explanations the classical authors gave for them and the socio-political crises they entailed, very similar to the current one

Abstract:

Plagues and pandemics are in the news because of the Covid-19 crisis. Although our world is not accustomed to epidemics, they have been very frequent throughout human history. This paper offers an analysis of the different plagues that devastated the Greco-Roman world, with the explanations given by classical authors and the socio-political crises they brought about, very similar to the present one

Abstract:

The notion of relative importance of criteria is central in multicriteria decision aid. In this work we define the concept of comparative coalition structure, as an approach for formally discussing the notion of relative importance of criteria. We also present a multicriteria decision aid method that does not require the assignment of weights to the criteria

Abstract:

This study identifies characteristics that positively affect entrepreneurial intention. To do so, the study compares personality traits with work values. Socio-demographic and educational characteristics act as control variables. The sample comprises 1210 public university students. Hierarchical regression analysis serves to test the hypotheses. Results show that personality traits affect entrepreneurial intention more than work values do

Abstract:

This research analyzes whether universities, as organizations that could act as incubators of business ideas, are actually fulfilling that role and encouraging an entrepreneurial attitude among their students, through the organization of their study programs and the specific measures they undertake. The hypotheses raised about greater generic and specific training within the degree are tested using a sample of 668 students from a university in Madrid. The conclusions invite reflection, given that the students who have most recently entered the university show higher rates of entrepreneurial attitude than their more senior classmates

Abstract:

The objective of the present research is to analyze whether universities, as organisms that might act as incubators of business ideas, are really fulfilling this role and stimulating an entrepreneurial attitude among their students, through the structure of their study programs and the specific measures they undertake. The hypotheses raised about greater generic and specific training within the degree are tested using a sample of 668 students of a university in Madrid. The conclusions invite reflection, insofar as the students who have most recently entered the university show higher rates of entrepreneurial attitude than their more veteran colleagues

Abstract:

In this paper we propose an instrument for collecting sensitive data that allows each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures

Abstract:

In this article, we introduce the partition task problem class along with a complexity measure to evaluate its instances and a performance measure to quantify the ability of a system to solve them. We explore, via simulations, some potential applications of these concepts and present some results as examples that highlight their usefulness in policy design scenarios, where the optimal number of elements in a partition or the optimal size of the elements in a partition must be determined

Abstract:

In a negative representation, a set of elements (the positive representation) is depicted by its complement set. That is, the elements in the positive representation are not explicitly stored, and those in the negative representation are. The concept, feasibility, and properties of negative representations are explored in the paper; in particular, its potential to address privacy concerns. It is shown that a positive representation consisting of n l-bit strings can be represented negatively using only O(ln) strings, through the use of an additional symbol. It is also shown that membership queries for the positive representation can be processed against the negative representation in time no worse than linear in its size, while reconstructing the original positive set from its negative representation is an NP-hard problem. The paper introduces algorithms for constructing negative representations as well as operations for updating and maintaining them
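
A minimal sketch of one such construction, the prefix-based scheme with '*' as the extra don't-care symbol (record ordering and optimizations in the actual algorithms may differ):

def negative_database(db, l):
    # For every prefix occurring in db, emit prefix+b+'*'... whenever
    # prefix+b starts no string of db: the patterns cover exactly the
    # complement of db, using O(l*n) entries.
    prefixes = {s[:i] for s in db for i in range(l + 1)}
    ndb = []
    for p in prefixes:
        if len(p) == l:
            continue
        for b in "01":
            if p + b not in prefixes:
                ndb.append(p + b + "*" * (l - len(p) - 1))
    return ndb

def matches(pattern, s):
    return all(c == "*" or c == x for c, x in zip(pattern, s))

def member(ndb, s):
    # s belongs to the positive set iff no negative pattern matches it.
    return not any(matches(p, s) for p in ndb)

db, l = {"000", "101", "110"}, 3
ndb = negative_database(db, l)
assert all(member(ndb, s) for s in db)
assert not member(ndb, "111") and not member(ndb, "010")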

Abstract:

This paper proposes a strategy for administering a survey that is mindful of sensitive data and individual privacy. The survey seeks to estimate the population proportion of a sensitive variable and does not depend on anonymity, cryptography, or legal guarantees for its privacy-preserving properties. Our technique presents interviewees with a question and t possible answers, and asks participants to eliminate one of the t-1 alternatives at random. We introduce a specific setup that requires just a single coin as randomizing device, and that limits the amount of information each respondent is exposed to by presenting to her/him only a subset of the question's alternatives. Finally, we conduct a simulation study to provide evidence of the robustness of the suggested procedure against response and nonresponse bias
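
Under one possible reading of the elimination step, in which each respondent eliminates, uniformly at random, one of the t-1 alternatives other than his or her true answer, the population proportions remain estimable from the eliminated options alone. The sketch below simulates this reading (an assumption; the paper's exact single-coin setup may differ):

import numpy as np

rng = np.random.default_rng(7)
t = 4
p_true = np.array([0.1, 0.2, 0.3, 0.4])   # hypothetical population shares
N = 100_000

answers = rng.choice(t, size=N, p=p_true)
# Each respondent reveals only a uniformly chosen non-answer.
eliminated = (answers + rng.integers(1, t, size=N)) % t

# Since Pr(j eliminated) = (1 - p_j) / (t - 1), invert to recover p_j.
q_hat = np.bincount(eliminated, minlength=t) / N
p_hat = 1.0 - (t - 1) * q_hat
print(np.round(p_hat, 3))   # close to p_true, yet no answer was disclosed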

Abstract:

In this paper we present a method for hiding a list of data by mixing it with a large amount of superfluous items. The technique uses a device known as a negative database, which stores the complement of a set rather than the set itself, to include an arbitrary number of garbage entries efficiently. The resulting structure effectively hides the data without encrypting it and obfuscates the number of data items hidden; it prevents arbitrary data lookups, while supporting simple membership queries; and it can be manipulated to reflect relational algebra operations on the original data

Abstract:

A set DB of data elements can be represented in terms of its complement set, known as a negative database. That is, all of the elements not in DB are represented, and DB itself is not explicitly stored. This method of representing data has certain properties that are relevant for privacy enhancing applications. The paper reviews the negative database (NDB) representation scheme for storing a negative image compactly, and proposes using a collection of NDBs to represent a single DB, that is, one NDB is assigned to each record in DB. This method has the advantage of producing negative databases that are hard to reverse in practice, i.e., from which it is hard to obtain DB. This result is obtained by adapting a technique for generating hard-to-solve 3-SAT formulas. Finally we suggest potential avenues of application

Abstract:

The benefits of negative detection for obscuring information are explored in the context of Artificial Immune Systems (AIS). AIS based on string matching have the potential for an extra security feature in which the "normal" profile of a system is hidden from its possible hijackers. Even if the model of normal behavior falls into the wrong hands, reconstructing the set of valid or "normal" strings is an NP-hard problem. The data-hiding aspects of negative detection are explored in the context of an application to negative databases. Previous work is reviewed describing possible representations and reversibility properties for privacy-enhancing negative databases. New algorithms are presented which allow on-line creation, updates and clean-up of negative databases, some experimental results illustrate the impact of these operations on the size of the negative database. Finally some future challenges are discussed

Abstract:

In anomaly detection, the normal behavior of a process is characterized by a model, and deviations from the model are called anomalies. In behavior-based approaches to anomaly detection, the model of normal behavior is constructed from an observed sample of normally occurring patterns. Models of normal behavior can represent either the set of allowed patterns (positive detection) or the set of anomalous patterns (negative detection). A formal framework is given for analyzing the tradeoffs between positive and negative detection schemes in terms of the number of detectors needed to maximize coverage. For realistically sized problems, the universe of possible patterns is too large to represent exactly (in either the positive or negative scheme). Partial matching rules generalize the set of allowable (or unallowable) patterns, and the choice of matching rule affects the tradeoff between positive and negative detection. A new match rule is introduced, called r-chunks, and the generalizations induced by different partial matching rules are characterized in terms of the crossover closure. Permutations of the representation can be used to achieve more precise discrimination between normal and anomalous patterns. Quantitative results are given for the recognition ability of contiguous-bits matching together with permutations
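
A compact sketch of the r-chunks rule as described, under a binary alphabet: a detector is a window position plus an r-length string, and negative detection keeps exactly those detectors matching no self sample:

from itertools import product

def rchunk_detectors(self_set, l, r):
    # Detector (i, d) matches s iff s[i:i+r] == d; keep the non-self ones.
    detectors = set()
    for i in range(l - r + 1):
        seen = {s[i:i + r] for s in self_set}
        for d in map("".join, product("01", repeat=r)):
            if d not in seen:
                detectors.add((i, d))
    return detectors

def anomalous(detectors, s):
    return any(s[i:i + len(d)] == d for i, d in detectors)

self_set = {"0000", "0011", "1100"}
det = rchunk_detectors(self_set, l=4, r=2)
assert not any(anomalous(det, s) for s in self_set)   # no false alarms on self
print(anomalous(det, "0110"))   # True: window "11" at position 1 is non-self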

Abstract:

Dementia is characterized by a progressive deterioration in cognitive functions and behavioral problems. Due to its importance, in the domain of Internet of Things (IoT), where physical objects are connected to the internet, a myriad of systems have been proposed to support people with dementia, their caregivers, and medical experts. However, the vast and increasing number of research efforts has led to a complex state of the art, which is in need of a methodological analysis and a characterization of its key aspects. Based on the PRISMA guidelines, this article presents a systematic review aimed at investigating the state of the art of the IoT in dementia regardless of the dementia category and/or its cause. Articles published within the period of January 2017 to November 2022 were searched in well-known scientific databases. The searches retrieved a total of 2733 records, which were narrowed down to 104 relevant studies by applying inclusion, exclusion, and quality criteria. A set of 13 research questions at the intersection of IoT and dementia were posed, which guided the analysis of the selected studies. The systematic review contributes (i) an in-depth methodological analysis of recent and relevant IoT systems in the domain of dementia; (ii) a taxonomy that identifies, characterizes, and categorizes key aspects of IoT research focused on dementia; and (iii) a series of future work directions to advance the field of IoT in the dementia domain

Abstract:

Background and objective: In day centers, people with dementia are assigned to specific groups to receive care according to the progression of the disease. This article presents the design and evaluation of a dashboard aimed at facilitating the comprehension of the progression of people with dementia to support decision-making of healthcare professionals (HCPs) when determining patient-group assignment. Materials and method: A participatory design methodology was followed to build the dashboard. The grounded theory methodology was utilized to identify requirements. A total of 8 HCPs participated in the design and evaluation of a low-fidelity prototype. The perceived usefulness and perceived ease of use of the high-fidelity prototype was evaluated by 15 HCPs (from several day centers) and 38 psychology students utilizing a questionnaire based on the technology acceptance model. Results: HCPs perceived the dashboard as extremely likely to be useful (Mdn = 6.5 out of 7) and quite likely to be usable (Mdn = 6 out of 7). Psychology students perceived the dashboard as quite likely to be useful and usable (both with Mdn= 6)

Abstract:

The increase in penalties for privacy violations motivates the definition of a methodology for evaluating the utility of information and the privacy preservation of data to be published. Through the development of a case study, a framework for measuring privacy preservation is provided. Problems in measuring data utility are presented and related to privacy preservation in data publishing. Machine learning models are developed to determine the risk of prediction of sensitive attributes and as a means of verifying data utility. The findings motivate the need to adapt the measurement of privacy preservation to current requirements and to sophisticated attack vectors such as machine learning

Abstract:

The growing penalties for privacy violations motivate the definition of a methodology for evaluating the usefulness of information and the privacy preservation of data to be published. We develop a case study and provide a framework for measuring privacy preservation. Problems in measuring the usefulness of the data are exposed and related to privacy-preserving data publishing. Machine learning models are developed to determine the risk of predicting sensitive attributes and as a means of verifying the usefulness of the data. The findings motivate the need to adapt privacy-preservation measures to current requirements and to sophisticated attack vectors such as machine learning
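
One way to operationalize the attribute-prediction risk described above, sketched with synthetic data and hypothetical column roles (the paper's concrete datasets, models, and metrics may differ):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
released = rng.integers(0, 5, size=(1000, 6))       # published quasi-identifiers
sensitive = released[:, 0] + released[:, 1] > 4     # attribute leaking through

# Disclosure risk scored as attacker accuracy when predicting the
# sensitive attribute from the published table alone.
risk = cross_val_score(RandomForestClassifier(n_estimators=100),
                       released, sensitive, cv=5).mean()
baseline = max(sensitive.mean(), 1 - sensitive.mean())
print(f"attacker accuracy {risk:.2f} vs. majority baseline {baseline:.2f}")
# A large gap means the data still predicts the sensitive attribute,
# i.e. weak privacy preservation under this attack model.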

Abstract:

Scholars argue that electoral management bodies staffed by autonomous, non-partisan experts are best for producing credible and fair elections. We inspect the voting record of Mexico's Instituto Federal Electoral (IFE), an ostensibly independent bureaucratic agency regarded as extremely successful in organizing clean elections in a political system marred by fraud. We discover that the putative non-partisan experts of "autonomous" IFE behave as "party watchdogs" that represent the interests of their political party sponsors. To validate this party influence hypothesis, we examine roll-call votes cast by members of IFE's Council-General from 1996 to 2006. Aside from shedding light on IFE's failure to achieve democratic compliance in 2006, our analysis suggests that election arbiters that embrace partisan strife are quite capable of organizing free, fair, and credible elections in new democracies

Resumen:

This paper proposes a methodology for constructing climate change scenarios at the local scale. Multivariate time series models are used to obtain restricted forecasts, advancing the literature on statistical downscaling methods in several respects. The approach achieves: (i) a better representation of climate at the local scale; (ii) avoidance of possible spurious relationships between large- and small-scale variables; (iii) an appropriate representation of the variability of the series in the climate change scenarios; and (iv) the assessment of compatibility, and combination, of the information from observed climate variables with that derived from climate models. The proposed methodology is useful for integrating scenarios on the evolution of the small-scale factors that influence local climate. Thus, by choosing different evolutions that represent, for example, different public policies on land use or pollutant control, the methodology offers a way to evaluate the desirability of such policies in terms of their effects in amplifying or attenuating the impacts of climate change

Abstract:

This paper proposes a new methodology for generating climate change scenarios at the local scale based on multivariate time series models and restricted forecasting techniques. This methodology offers considerable advantages over current statistical downscaling techniques: (i) it provides a better representation of climate at the local scale; (ii) it avoids the occurrence of spurious relationships between the large- and local-scale variables; (iii) it offers a more appropriate representation of variability in the downscaled scenarios; and (iv) it allows for compatibility assessment and combination of the information contained in both observed and simulated climate variables. Furthermore, this methodology is useful for integrating scenarios of local-scale factors that affect local climate. As such, the desirability of different public policies regarding, for example, land use change or atmospheric pollution control can be evaluated in terms of their effects in amplifying or reducing climate change impacts

Abstract:

The urge for higher-resolution climate change scenarios has been widely recognized, particularly for conducting impact assessment studies. Statistical downscaling methods have been shown to be very convenient for this task, mainly because of their lower computational requirements in comparison with nested limited-area regional models or very high resolution Atmosphere-Ocean General Circulation Models. Nevertheless, although some of the limitations of statistical downscaling methods are widely known and have been discussed in the literature, in this paper it is argued that the current approach to statistical downscaling does not guard against misspecified statistical models and that the occurrence of spurious results is likely if the assumptions of the underlying probabilistic model are not satisfied. In this case, the physics included in climate change scenarios obtained by general circulation models could be replaced by spatial patterns and magnitudes produced by statistically inadequate models. Illustrative examples are provided for monthly temperature for a region encompassing Mexico and part of the United States. It is found that the assumptions of the probabilistic models do not hold for about 70% of the gridpoints, parameter instability and temporal dependence being the most common problems. As our examples reveal, automated statistical downscaling “black-box” models should be considered highly prone to produce misleading results. It is shown that the Probabilistic Reduction approach can be incorporated as a complete and internally consistent framework for securing the statistical adequacy of the downscaling models and for guiding the respecification process, in a way that prevents the lack of empirical validity that affects current methods

Abstract:

We design a laboratory experiment to study behavior in a multidivisional organization. The organization faces a trade-off between coordinating its decisions across the divisions and meeting division-specific needs that are known only to the division managers, who can communicate their private information through cheap talk. While the results show close to optimal communication, we also find systematic deviations from optimal behavior in how the communicated information is used. Specifically, subjects' decisions show worse than predicted adaptation to the needs of the divisions in decentralized organizations and worse than predicted coordination in centralized organizations. We show that the observed deviations disappear when uncertainty about the divisions' local needs is removed and discuss the possible underlying mechanisms

Abstract:

We design a laboratory experiment in which an interested third party endowed with private information sends a public message to two conflicting players, who then make their choices. We find that third-party communication is not strategic. Nevertheless, a hawkish message by a third party makes hawkish behavior more likely while a dovish message makes it less likely. Moreover, how subjects respond to the message is largely unaffected by the third party’s incentives. We argue that our results are consistent with a focal point interpretation in the spirit of Schelling

Abstract:

Forward induction (FI) thinking is a theoretical concept in the Nash refinement literature which suggests that earlier moves by a player may communicate his future intentions to other players in the game. Whether and how much players use FI in the laboratory is still an open question. We designed an experiment in which detailed reports were elicited from participants playing a battle of the sexes game with an outside option. Many of the reports show an excellent understanding of FI, and such reports are associated more strongly with FI-like behavior than reports consistent with first mover advantage and other reasoning processes. We find that a small fraction of subjects understands FI but lacks confidence in others. We also explore individual differences in behavior. Our results suggest that FI is relevant for explaining behavior in games

Abstract:

This paper examines methodology for performing Bayesian inference sequentially on a sequence of posteriors on spaces of different dimensions. For this, we use sequential Monte Carlo samplers, introducing the innovation of using deterministic transformations to move particles effectively between target distributions with different dimensions. This approach, combined with adaptive methods, yields an extremely flexible and general algorithm for Bayesian model comparison that is suitable for use in applications where the acceptance rate in reversible jump Markov chain Monte Carlo is low. We use this approach on model comparison for mixture models, and for inferring coalescent trees sequentially, as data arrives
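
For context, the sketch below illustrates the generic sequential Monte Carlo sampler machinery this approach builds on: reweighting particles through a sequence of bridging distributions, resampling when the effective sample size drops, and applying Markov moves. It is a fixed-dimension toy with made-up targets; the paper's deterministic transformations between spaces of different dimensions are not reproduced.

```python
# Sketch: a tempered SMC sampler moving particles from a N(0,1) prior to a
# posterior proportional to prior * likelihood via pi_t ∝ prior * lik^beta_t.
import numpy as np

rng = np.random.default_rng(1)
N = 5000
betas = np.linspace(0.0, 1.0, 21)   # tempering schedule

def log_prior(x):   # N(0, 1)
    return -0.5 * x**2

def log_lik(x):     # pseudo-likelihood centered at 2 with sd 0.5
    return -0.5 * ((x - 2.0) / 0.5) ** 2

x = rng.normal(0.0, 1.0, N)   # start with draws from the prior
logw = np.zeros(N)
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * log_lik(x)            # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w**2) < N / 2:            # resample if ESS < N/2
        x = rng.choice(x, size=N, p=w)
        logw = np.zeros(N)
    # one random-walk Metropolis move per particle, targeting pi_t
    prop = x + rng.normal(0.0, 0.3, N)
    log_acc = (log_prior(prop) + b1 * log_lik(prop)
               - log_prior(x) - b1 * log_lik(x))
    x = np.where(np.log(rng.uniform(size=N)) < log_acc, prop, x)

w = np.exp(logw - logw.max()); w /= w.sum()
print("posterior mean ≈", np.sum(w * x))   # analytic value here: 1.6
```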

Resumen:

This article reviews the Norwegian welfare model, whose purpose has been the protection and maintenance of the country's middle class. To understand it, the concept of the middle class and its relationship with public policy is analyzed. In the Scandinavian countries, the role the State plays in the economy is crucial to the creation of the middle class, which has enabled a successful socio-economic model that is difficult to apply in other countries where the concept of the State differs substantially

Abstract:

This article reviews the Norwegian welfare model, the object of which has been the protection and maintenance of the middle class in the country. In order to understand it, the concept of middle class and its relationship with public policies is analyzed. In Scandinavian countries, the role of the State in the economy is crucial for the generation of the middle class, which has allowed a successful socio-economic model that is difficult to apply in other countries where the concept of the State differs substantially

Abstract:

Using new census-type data and a dynamic structural model, we study the effect of credit supply on investment by manufacturing firms during the Greek depression. Real factors (profitability, uncertainty, and taxes) account for only a fraction of the substantial drop in investment observed in the data. The reduction in credit supply has significant real effects, explaining 11–32% of the investment slump. We also find that exporting firms, which reduce investment and deleverage despite their improved profitability during the crisis, face a contraction in credit supply similar to that of non-exporters, suggesting that the credit-supply shock has a significant common component

Resumen:

The main objective of this paper is to document the extensive institution-level heterogeneity that exists within countries, as well as to investigate which institutional factors are the most relevant for multinational franchisors

Abstract:

The purpose of this paper is to document the extensive heterogeneity in institutions within countries and investigate which institutional factors are the most relevant for international brands

Resumo:

The main objective of this research is to account for the great heterogeneity that exists within countries, as well as to investigate which institutional factors are the most important for multinational franchises

Abstract:

Introduction: Mathematical models and field data suggest that human mobility is an important driver of Dengue virus transmission. Nonetheless, little is known on this matter due to the lack of instruments for precise mobility quantification and study design difficulties. Materials and methods: We carried out a cohort-nested, case-control study with 126 individuals (42 cases, 42 intradomestic controls and 42 population controls) with the goal of describing the human mobility patterns of recently Dengue virus-infected subjects, and comparing them with those of non-infected subjects living in an urban endemic locality. Mobility was quantified using a GPS data logger registering waypoints at 60-second intervals for a minimum of 15 natural days. Results: Although absolute displacement was highly biased towards the intradomestic and peridomestic areas, occasional displacements exceeding a 100-km radius from the center of the studied locality were recorded for all three study groups, and individual displacements across six states of central Mexico were recorded. Additionally, cases made a larger number of visits outside the municipality's administrative limits when compared to intradomestic controls (cases: 10.4 versus intradomestic controls: 2.9, p = 0.0282). We were able to identify extradomestic places within and outside the locality that were independently visited by apparently non-related infected subjects, consistent with houses, working and leisure places. Conclusions: The results of this study show that human mobility in a small urban setting exceeded the local health authority's administrative limits, and was different between recently infected and non-infected subjects living in the same household. These observations provide important insights into the role that human mobility may have in Dengue virus transmission and persistence across endemic geographic areas that need to be taken into account when planning preventive and control ...
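
As a rough illustration of the displacement quantification used in studies of this kind, the following Python sketch computes great-circle displacement radii from GPS waypoints to a reference point via the haversine formula. The coordinates and the idea of flagging trips beyond 100 km are illustrative, not the study's data.

```python
# Sketch: displacement (km) of GPS waypoints from a reference point.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

home = (18.85, -99.20)   # hypothetical reference point (e.g., the home)
waypoints = [(18.85, -99.20), (18.93, -99.23), (19.43, -99.13)]
radii = [haversine_km(*home, *wp) for wp in waypoints]
print("max displacement radius (km):", max(radii))  # e.g., flag >100 km trips
```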

Resumen:

A study was conducted with 38 university students in a Physics I course in order to investigate the mathematical knowledge they had acquired about the transformation of functions, after having taken Differential Calculus. The research is framed within Duval's Theory of Semiotic Representations. A diagnostic assessment was administered concerning the transformations Af(Bx+C)+D applied to the functions x² and sin x. The article contributes by showing the possible causes of the difficulties exhibited by the students in the light of the theoretical framework, in addition to proposing strategies that contribute to improving their learning

Abstract:

A study was carried out with 38 university students enrolled in the Physics I course in order to investigate what they had learned about the transformation of functions as part of their mathematical knowledge, after having completed a course on Differential Calculus. The research is framed by Duval's Theory of Semiotic Representations. A diagnostic assessment test was carried out concerning transformations of the form Af(Bx+C)+D applied to the functions x² and sin x when the parameters included in them are varied one by one. The article contributes by showing possible reasons to explain students' difficulties in the light of the theoretical framework, and by proposing strategies that may contribute to improving students' learning

Resumo:

A study was carried out with 38 university students in a Physics I course in order to investigate the mathematical knowledge they had learned about the transformation of functions after completing the Differential Calculus course. The research is framed within Duval's Theory of Semiotic Representations. A diagnostic assessment was administered concerning the transformations Af(Bx+C)+D applied to the functions x² and sin x. The article contributes by showing the possible causes of the difficulties presented by the students in the light of the theoretical framework, in addition to proposing strategies that contribute to improving their learning
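
A small Python sketch of the transformation family the diagnostic test is built around, Af(Bx+C)+D applied to x² and sin x; parameter values are arbitrary.

```python
# Sketch: tabulate A*f(B*x + C) + D for the two base functions of the test.
import numpy as np

def transform(f, A, B, C, D):
    """Return the function x -> A*f(B*x + C) + D."""
    return lambda x: A * f(B * x + C) + D

g = transform(np.sin, A=2, B=1, C=np.pi / 2, D=1)   # 2*sin(x + pi/2) + 1
h = transform(np.square, A=-1, B=2, C=0, D=3)       # -(2x)^2 + 3
for xi in np.linspace(-2, 2, 5):
    print(f"x={xi:+.2f}  g(x)={g(xi):+.3f}  h(x)={h(xi):+.3f}")
```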

Abstract:

Purpose. This paper studies the determinants of the debt maturity of Mexican-listed companies by analysing the effects on the extensive (issuing or liquidating debt) and the intensive (debt maturity renegotiation) margins. Design/methodology/approach. This study, using a Tobit model for panel data and measuring maturity as a time variable, shows that size, liquidity and leverage, among other firm characteristics, as well as the market interest rate, explain debt maturity. Additionally, the study employs the McDonald and Moffitt decomposition to determine whether the explanatory variables of maturity have a more significant effect on the decision to issue or liquidate debt or on debt maturity renegotiations. Findings. The results obtained highlight that the market interest rate negatively affects debt maturity. On the other hand, variables like size, liquidity, collateral and leverage demonstrate a positive relationship with the dependent variable. In addition, the extensive margin has a higher impact on corporate debt than the intensive margin, suggesting that firms prefer to liquidate or issue new debt rather than renegotiate preexisting contracts. Research limitations/implications. The main limitation of this study is the use of an unbalanced panel. The lack of data limits the application of specific methodologies suggested by the literature as a way to test the robustness of the estimates. Originality/value. First of all, this study adds empirical evidence of debt maturity decisions by publicly traded firms in a middle-income country such as Mexico to the existing literature on maturity choice. Second, the study treats debt maturity as a time-censored, limited variable. Finally, the authors have used the McDonald and Moffitt (1980) methodology to decompose the effect of each independent variable into extensive and intensive margins
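
For reference, the McDonald and Moffitt (1980) decomposition employed above splits the marginal effect of a regressor in the Tobit model into two parts. With latent variable y* = xβ + ε, ε ~ N(0, σ²), observed y = max(0, y*), and z = xβ/σ, the standard result is:

\[
E[y \mid x] = \Phi(z)\, E[y \mid x,\, y > 0],
\]
\[
\frac{\partial E[y \mid x]}{\partial x_k}
= \Phi(z)\,\frac{\partial E[y \mid x,\, y > 0]}{\partial x_k}
+ E[y \mid x,\, y > 0]\,\frac{\partial \Phi(z)}{\partial x_k},
\]

where, in this paper's application, the first term corresponds to the intensive margin (changes in maturity through renegotiation of existing debt) and the second to the extensive margin (the decision to issue or liquidate debt).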

Resumen:

The objective of this research is to identify the determinants of debt maturity for Mexican firms listed on the BMV, using an alternative definition of this dependent variable. In particular, maturity is defined as "time to contract expiration," considering the weighted average time to maturity, an original contribution of this work. Panel data models and Heckman selection models are used, since the use of longitudinal data in an unbalanced panel can present selection problems in the form of attrition. The results suggest that attrition bias is significant, and that average debt maturity is determined by variables such as size and leverage, among other firm characteristics, as well as the market interest rate. The main limitation is the missing data in the information sources used, which yields a short and unbalanced panel. It is concluded that this method of measuring maturity gives better results for analyzing the term to maturity of debt than the metrics traditional in the literature

Abstract:

This research aims to identify the determinants of debt maturity for Mexican companies listed on the BMV, using an alternative definition of this dependent variable. Maturity is defined as "time to contract expiration," considering the weighted average of the time to expiration, which contributes to the originality of this work. Panel data models and Heckman selection models are used, since the use of longitudinal data in an unbalanced panel can present selection problems due to attrition. The results suggest that the attrition bias is significant, and that the average maturity of the debt is determined by firm characteristics such as size and leverage, among others, and the interest rate of the Mexican market. As a limitation, and due to the omissions of data in the information sources used for the analysis, a short and unbalanced panel is used. It is concluded that, by using this alternative maturity measurement method, better results are obtained for analyzing the maturity of the debt, compared to the traditional metrics in the literature

Abstract:

This work is concerned with a reaction-diffusion system that has been proposed as a model to describe acid-mediated cancer invasion. More precisely, we consider the properties of travelling waves that can be supported by such a system, and show that a rich variety of wave propagation dynamics, both fast and slow, is compatible with the model. In particular, asymptotic formulae for admissible wave profiles and bounds on their wave speeds are provided

Abstract:

A numerical study of model-based methods for derivative-free optimization is presented. These methods typically include a geometry phase whose goal is to ensure the adequacy of the interpolation set. The paper studies the performance of an algorithm that dispenses with the geometry phase altogether (and therefore does not attempt to control the position of the interpolation set). Data are presented describing the evolution of the condition number of the interpolation matrix and the accuracy of the gradient estimate. The experiments are performed on smooth unconstrained optimization problems with dimensions ranging between 2 and 15

Abstract:

Hospitals are convenient settings for the deployment of context-aware applications. The information needs of hospital workers are highly dependent on contextual variables, such as location, role and activity. While some of these parameters can be easily determined, others, such as activity, are much more complex to estimate. This paper describes an approach to estimate the activity being performed by hospital workers. The approach is based on information gathered from a workplace study conducted in a hospital, in which 196 hours of detailed observation of hospital workers were recorded. Contextual information, such as the location of hospital workers, artifacts being used, the people with whom they collaborate and the time of the day, is used to train a backpropagation neural network to estimate hospital workers' activities. The activities estimated include clinical case assessment, patient care, preparation, information management, coordination, and classes and certification. The results indicate that user activity can be correctly estimated 75% of the time (on average), which is good enough for several applications. We discuss how these results can be used in the design of activity-aware applications, arguing that recent advances in pervasive and networking technologies hold great promise for the deployment of such applications
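
A compact sketch of this estimation setup in Python: contextual features (location, artifact, collaborators, hour) feed a backpropagation network that predicts one of six activity classes. The encoding, labels, and data below are synthetic stand-ins for the observational data described in the abstract.

```python
# Sketch: activity estimation from contextual features with an MLP.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 600
X = np.column_stack([
    rng.integers(0, 8, n),    # location (ward, nurses' station, lab, ...)
    rng.integers(0, 6, n),    # artifact in use (paper chart, phone, PC, ...)
    rng.integers(0, 5, n),    # number of people collaborating
    rng.integers(0, 24, n),   # hour of day
]).astype(float)
# Synthetic labels for 6 activity classes, driven by location and artifact
y = ((X[:, 0] * 3 + X[:, 1]) % 6).astype(int)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,),
                                    max_iter=2000, random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())  # estimated accuracy
```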

Abstract:

Hospitals are convenient settings for the deployment of ubiquitous computing technology. Not only are they technology-rich environments, but their workers experience a high level of mobility, resulting in information infrastructures with artifacts distributed throughout the premises. Hospital information systems (HISs) that provide access to electronic patient records are a step in the direction of providing accurate and timely information to hospital staff in support of adequate decision-making. This has motivated the introduction of mobile computing technology in hospitals based on designs which respond to their particular conditions and demands. Among those conditions is the fact that worker mobility does not exclude the need for having shared information artifacts at particular locations. In this paper, we extend a handheld-based mobile HIS with ubiquitous computing technology and describe how public displays are integrated with handhelds and the services offered by these devices. Public displays become aware of the presence of physicians and nurses in their vicinity and adapt to provide users with personalized, relevant information. An agent-based architecture allows the integration of proactive components that offer information relevant to the case at hand, either from medical guidelines or previous similar cases

Abstract:

We consider the motion of a planar rigid body in a potential two-dimensional flow with a circulation and subject to a certain nonholonomic constraint. This model can be related to the design of underwater vehicles. The equations of motion admit a reduction to a two-dimensional nonlinear system, which is integrated explicitly. We show that the reduced system comprises both asymptotic and periodic dynamics separated by a critical value of the energy, and give a complete classification of the types of motion. Then we describe the whole variety of the trajectories of the body on the plane

Abstract:

In the Black–Scholes–Merton model, as well as in more general stochastic models in finance, the price of an American option solves a parabolic variational inequality. When the variational inequality is discretized, one obtains a linear complementarity problem (LCP) that must be solved at each time step. This paper presents an algorithm for the solution of these types of LCPs that is significantly faster than the methods currently used in practice. The new algorithm is a two-phase method that combines the active-set identification properties of the projected successive overrelaxation (SOR) iteration with the second-order acceleration of a (recursive) reduced-space phase. We show how to design the algorithm so that it exploits the structure of the LCPs arising in these financial applications and present numerical results that show the effectiveness of our approach
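
The projected SOR iteration on which the paper's first phase builds can be sketched in a few lines. This is the textbook method applied to a toy obstacle-type LCP, not the paper's two-phase algorithm; the matrix, obstacle, and parameters are illustrative.

```python
# Sketch: projected SOR for the LCP  x >= psi,  A x >= b,  (x-psi)'(Ax-b)=0,
# the subproblem arising at each time step of an American-option scheme.
import numpy as np

def projected_sor(A, b, psi, omega=1.5, tol=1e-10, max_iter=10_000):
    x = psi.copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            r = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            gs = r / A[i, i]                                 # Gauss-Seidel value
            x[i] = max(psi[i], x[i] + omega * (gs - x[i]))   # relax + project
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)       # SPD tridiagonal
b = np.zeros(n)
psi = np.maximum(1.0 - np.abs(np.linspace(-1, 1, n)), 0.2)   # obstacle/payoff
x = projected_sor(A, b, psi)
# At the solution, both residuals should be (numerically) nonnegative:
print("min(x - psi):", (x - psi).min(), " min(Ax - b):", (A @ x - b).min())
```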

Abstract:

We develop a model of the politics of state capacity building undertaken by incumbent parties that have a comparative advantage in clientelism rather than in public goods provision. The model predicts that, when challenged by opponents, clientelistic incumbents have the incentive to prevent investments in state capacity. We provide empirical support for the model's implications by studying policy decisions by the Institutional Revolutionary Party that affected local state capacity across Mexican municipalities and over time. Our difference-in-differences and instrumental variable identification strategies exploit a national shock that threatened the Mexican government's hegemony in the early 1960s

Abstract:

We document how informal employment in Mexico is countercyclical, lags the cycle and is negatively correlated with formal employment. This contributes to explaining why total employment in Mexico displays low cyclicality and variability over the business cycle when compared to Canada, a developed economy with a much smaller share of informal employment. To account for these empirical findings, we build a business cycle model of a small, open economy that incorporates formal and informal labor markets and calibrate it to Mexico. The model performs well in terms of matching conditional and unconditional moments in the data. It also sheds light on the channels through which informal economic activity may affect business cycles. Introducing informal employment into a standard model amplifies the effects of productivity shocks. This is linked to productivity shocks being imperfectly propagated from the formal to the informal sector. It also shows how imperfect measurement of informal economic activity in national accounts can translate into stronger variability in aggregate economic activity

Abstract:

At the beginning of 2003, the debate within the United Nations Security Council about the Iraq war was one of the few international events to draw so much attention from Mexican public opinion. This can be explained by the seriousness of the United States' violation of international law, but also by the difficult bilateral relations between Mexico, a non-permanent member of the Security Council, and the United States at the heart of the Iraqi crisis. Given the polarization of the debate between France and the United States at the United Nations, Mexico decided to side with France for three major reasons: the necessity to counterbalance American power, cultural affinities with France, and an ingenuous attitude in view of the specific interests defended by France

Abstract:

The parameter space of nonnegative trigonometric sums (NNTS) models for circular data is the surface of a hypersphere; thus, constructing regression models for a circular-dependent variable using NNTS models can comprise fitting great (small) circles on the parameter hypersphere that can identify different regions (rotations) along the great (small) circle. We propose regression models for circular- (angular-) dependent random variables in which the original circular random variable, which is assumed to be distributed (marginally) as an NNTS model, is transformed into a linear random variable such that common methods for linear regression can be applied. The usefulness of NNTS models with skewness and multimodality is shown in examples with simulated and real data
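
For readers unfamiliar with NNTS models, the density construction from Fernández-Durán (2004), on which this work builds, is easy to evaluate numerically: f(θ) = |Σ_{k=0}^{M} c_k e^{ikθ}|² with Σ_k |c_k|² = 1/(2π), so the coefficient vector lives on a hypersphere. A short sketch with arbitrary coefficients:

```python
# Sketch: evaluate an NNTS circular density on a grid.
import numpy as np

def nnts_density(theta, c):
    """f(theta) = |sum_k c_k exp(i k theta)|^2, coefficients rescaled so
    that sum_k |c_k|^2 = 1/(2 pi) and the density integrates to one."""
    c = np.asarray(c, dtype=complex)
    c = c / np.sqrt(2 * np.pi * np.sum(np.abs(c) ** 2))
    k = np.arange(len(c))
    return np.abs(np.exp(1j * np.outer(theta, k)) @ c) ** 2

theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
f = nnts_density(theta, c=[1.0, 0.6 + 0.3j, 0.2 - 0.4j])  # M = 2, arbitrary
print(f.mean() * 2 * np.pi)   # Riemann check: should be ≈ 1
```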

Abstract:

The sum of independent circular uniformly distributed random variables is also circular uniformly distributed. In this study, it is shown that a family of circular distributions based on nonnegative trigonometric sums (NNTS) is also closed under summation. Given the flexibility of NNTS circular distributions to model multimodality and skewness, these are good candidates for use as alternative models to test for circular uniformity to detect different deviations from the null hypothesis of circular uniformity. The circular uniform distribution is a member of the NNTS family, but in the NNTS parameter space, it corresponds to a point on the boundary of the parameter space, implying that the regularity conditions are not satisfied when the parameters are estimated by using the maximum likelihood method. Two NNTS tests for circular uniformity were developed by considering the standardised maximum likelihood estimator and the generalised likelihood ratio. Given the nonregularity condition, the critical values of the proposed NNTS circular uniformity tests were obtained via simulation and interpolated for any sample size by the fitting of regression models. The validity of the proposed NNTS circular uniformity tests was evaluated by generating NNTS models close to the circular uniformity null hypothesis

Abstract:

The probability integral transform of a continuous random variable X with distribution function F_X is a uniformly distributed random variable U = F_X(X). We define the angular probability integral transform (APIT) as Θ_U = 2πU = 2πF_X(X), which corresponds to a uniformly distributed angle on the unit circle. For circular (angular) random variables, the sum modulus 2π of absolutely continuous independent circular uniform random variables is a circular uniform random variable; that is, the circular uniform distribution is closed under summation modulus 2π, and it is a stable continuous distribution on the unit circle. If we consider the sum (difference) of the APITs of two random variables, X1 and X2, and test for the circular uniformity of their sum (difference) modulus 2π, this is equivalent to a test of independence of the original variables. In this study, we used a flexible family of nonnegative trigonometric sums (NNTS) circular distributions, which includes the uniform circular distribution as a member of the family, to evaluate the power of the proposed independence test by generating samples from NNTS alternative distributions that lie in close proximity to the circular uniform null distribution
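
A back-of-the-envelope version of this test, with a Rayleigh statistic standing in for the paper's NNTS-based uniformity tests: transform each variable by its (here, known) marginal CDF, take the difference of the resulting angles modulus 2π, and test the result for circular uniformity. The data and marginals below are simulated.

```python
# Sketch: APIT-based independence test with a Rayleigh uniformity statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # dependent; marginally N(0, 1)

theta1 = 2 * np.pi * stats.norm.cdf(x1)    # APIT of X1
theta2 = 2 * np.pi * stats.norm.cdf(x2)    # APIT of X2
d = np.mod(theta1 - theta2, 2 * np.pi)     # difference modulus 2*pi

# Rayleigh test: under circular uniformity, 2*n*Rbar^2 ~ chi^2(2) approx.
Rbar = np.abs(np.mean(np.exp(1j * d)))
stat = 2 * n * Rbar**2
print(f"stat {stat:.1f}, p {stats.chi2.sf(stat, df=2):.2e}")  # small p => dependent
```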

Abstract:

Recent technological advances have enabled the easy collection of consumer behavior data in real time. Typically, these data contain the time at which a consumer engages in a particular activity, such as entering a store, buying a product, or making a call. The occurrence time of certain events must be analyzed as circular random variables, with 24:00 corresponding to 0:00. To effectively implement a marketing strategy (pricing, promotion, or product design), consumers should be segmented into homogeneous groups. This paper proposes a methodology based on circular statistical models from which we construct a clustering algorithm based on the use patterns of consumers. In particular, we model temporal patterns as circular distributions based on nonnegative trigonometric sums (NNTSs). Consumers are clustered into homogeneous groups based on their vectors of parameter estimates by using a spherical k-means clustering algorithm. For this purpose, we define the parameter space of NNTS models as a hypersphere. The methodology is applied to three real datasets comprising the times at which individuals send short message service (SMS) messages and start voice calls, and the check-in times of users of the mobile application Foursquare
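
Spherical k-means differs from ordinary k-means in that points and centroids are unit vectors compared by cosine similarity, which matches parameter vectors living on a hypersphere. A self-contained sketch on synthetic directional data (not actual NNTS parameter estimates) follows.

```python
# Sketch: spherical k-means via Lloyd iterations with cosine similarity.
import numpy as np

def spherical_kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)    # project to sphere
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)       # nearest by cosine
        sums = np.array([X[labels == j].sum(axis=0) for j in range(k)])
        norms = np.linalg.norm(sums, axis=1, keepdims=True)
        # renormalize centroids; keep the old one if a cluster went empty
        new = np.where(norms > 1e-12, sums / np.maximum(norms, 1e-12), centers)
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(4)
A = rng.normal([3.0, 0.0, 0.0], 0.3, size=(100, 3))     # two direction clusters
B = rng.normal([0.0, 3.0, 0.0], 0.3, size=(100, 3))
labels, centers = spherical_kmeans(np.vstack([A, B]), k=2)
print(np.bincount(labels))    # expected: roughly 100 and 100
```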

Abstract:

Fernández-Durán [Circular distributions based on nonnegative trigonometric sums. Biometrics. 2004;60:499–503] developed a new family of circular distributions based on non-negative trigonometric sums that is suitable for modelling data sets that present skewness and/or multimodality. In this paper, a Bayesian approach to deriving estimates of the unknown parameters of this family of distributions is presented. Because the parameter space is the surface of a hypersphere and the dimension of the hypersphere is an unknown parameter of the distribution, the Bayesian inference must be based on transdimensional Markov Chain Monte Carlo (MCMC) algorithms to obtain samples from the high-dimensional posterior distribution. The MCMC algorithm explores the parameter space by moving along great circles on the surface of the hypersphere. The methodology is illustrated with real and simulated data sets

Abstract:

The statistical analysis of circular, multivariate circular, and spherical data is very important in different areas, such as paleomagnetism, astronomy and biology. The use of nonnegative trigonometric sums allows for the construction of flexible probability models for these types of data to model datasets with skewness and multiple modes. The R package CircNNTSR includes functions to plot, fit by maximum likelihood, and simulate models based on nonnegative trigonometric sums for circular, multivariate circular, and spherical data. For maximum likelihood estimation of the models for the three different types of data an efficient Newton-like algorithm on a hypersphere is used. Examples of applications of the functions provided in the CircNNTSR package to actual and simulated datasets are presented and it is shown how the package can be used to test for uniformity, homogeneity, and independence using likelihood ratio tests

Resumen:

The objective of this article is to test the Kuznets inverted-U curve hypothesis for the relationship between per-capita water consumption for agricultural and livestock use, which represents on average 75% of total consumption, and per-capita GDP for the municipalities of the Lerma-Chapala basin. In the context of climate change, the relationship between water consumption and GDP is very important, since variability in water availability has increased, forcing users and governments to consider strategies for its efficient use that include possible economic and environmental impacts. When conducting the analysis at the level of a hydrographic basin, it is necessary to consider spatial effects among neighboring municipalities through the application of spatial autoregressive models. When spatially correlated errors are included in the regression models, the Kuznets inverted-U curve hypothesis is not rejected. Therefore, any climate change mitigation strategy related to the efficient use of water should be evaluated in terms of its costs and benefits on municipal GDP in relation to the Kuznets curve estimated in this article

Abstract:

The main objective of this article is to test the Kuznets inverted U-curve hypothesis in the relation between water consumption, for which agriculture and farming represent on average 75% of total consumption, and GDP in municipalities located in the Lerma-Chapala hydrographic basin. Given the context of climate change, it is essential to understand the relationship between water consumption and GDP: the increased variability in the availability of water has forced governments and users to implement strategies for the efficient use of water resources, and thus they must consider not only likely environmental problems but also economic impact. Using data at the municipal level in a hydrographic basin, we consider the spatial effects among the different municipalities; these effects are modeled using spatial autoregressive models. The Kuznets inverted U-curve hypothesis is not rejected when allowing for spatially correlated errors. Thus, any strategy for mitigating climate change by making efficient use of water resources must be evaluated in terms of its costs and benefits on municipal GDP in relation to the fitted Kuznets curve presented in this article
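
In its simplest, non-spatial form, the inverted-U test amounts to regressing water consumption on GDP and its square and checking the coefficient signs and the location of the turning point. The Python sketch below does exactly that on synthetic data; the article's spatial autoregressive error structure is not included.

```python
# Sketch: quadratic (Kuznets inverted-U) regression on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
gdp = rng.uniform(1, 10, 150)                                  # per-capita GDP
water = 5 + 3.0 * gdp - 0.25 * gdp**2 + rng.normal(0, 1, 150)  # consumption

X = sm.add_constant(np.column_stack([gdp, gdp**2]))
fit = sm.OLS(water, X).fit()
b1, b2 = fit.params[1], fit.params[2]
# Inverted U requires b1 > 0, b2 < 0, and a turning point in the GDP range
print("b1 =", b1, "b2 =", b2, "turning point =", -b1 / (2 * b2))
```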

Abstract:

The generational cohort theory states that groups of individuals who experienced the same social, economic, political, and cultural events during early adulthood (17–23 years) would share similar values throughout their lives. Moreover, they would act similarly when making decisions in different aspects of life, particularly when making decisions as consumers. Thus, these groups define market segments, which is relevant in the design of marketing strategies. In Mexico, marketing researchers commonly use U.S. generational cohorts to define market segments, despite sufficient evidence that the generational cohorts in the two countries are not identical because of differences in national historic events. This paper proposes a methodology based on change-point analysis and ordinal logistic regressions to obtain a new classification of generational cohorts for Mexican urban consumers, using data from a 2010 nationwide survey on the values of individuals across age groups

Abstract:

Fernández-Durán and Gregorio-Domínguez, Seasonal Mortality for Fractional Ages in Life Insurance, Scandinavian Actuarial Journal. A uniform distribution of deaths between integral ages is a widely used assumption for estimating future lifetimes; however, this assumption does not necessarily reflect the true distribution of deaths throughout the year. We propose the use of a seasonal mortality assumption for estimating the distribution of future lifetimes between integral ages: this assumption accounts for the number of deaths that occur in given months of the year, including the excess mortality that is observed in winter months. The impact of this seasonal mortality assumption on short-term life insurance premium calculations is then examined by applying the proposed assumption to Mexican mortality data

Abstract:

A family of distributions for a random pair of angles that determine a point on the surface of a three-dimensional unit sphere (three-dimensional directions) is proposed. It is based on the use of nonnegative double trigonometric (Fourier) sums (series). Using this family of distributions, data that possess rotational symmetry, asymmetry or one or more modes can be modeled. In addition, the joint trigonometric moments are expressed in terms of the model parameters. An efficient Newton-like optimization algorithm on manifolds is developed to obtain the maximum likelihood estimates of the parameters. The proposed family is applied to two real data sets studied previously in the literature. The first data set is related to the measurements of magnetic remanence in samples of Precambrian volcanics in Australia and the second to the arrival directions of low mu showers of cosmic rays

Abstract:

Fernández-Durán, J. J. (2004): “Circular distributions based on nonnegative trigonometric sums,” Biometrics, 60, 499–503, developed a family of univariate circular distributions based on nonnegative trigonometric sums. In this work, we extend this family of distributions to the multivariate case by using multiple nonnegative trigonometric sums to model the joint distribution of a vector of angular random variables. Practical examples of vectors of angular random variables include the wind direction at different monitoring stations, the directions taken by an animal on different occasions, the times at which a person performs different daily activities, and the dihedral angles of a protein molecule. We apply the proposed new family of multivariate distributions to three real data-sets: two for the study of protein structure and one for genomics. The first is related to the study of a bivariate vector of dihedral angles in proteins. In the second real data-set, we compare the fit of the proposed multivariate model with the bivariate generalized von Mises model of [Shieh, G. S., S. Zheng, R. A. Johnson, Y.-F. Chang, K. Shimizu, C.-C. Wang, and S.-L. Tang (2011): “Modeling and comparing the organization of circular genomes,” Bioinformatics, 27(7), 912–918.] in a problem related to orthologous genes in pairs of circular genomes. The third real data-set consists of observed values of three dihedral angles in γ-turns in a protein and serves as an example of trivariate angular data. In addition, a simulation algorithm is presented to generate realizations from the proposed multivariate angular distribution

Abstract:

The Bass Forecasting Diffusion Model is one of the most widely used models to forecast the sales of a new product. It is based on the idea that the probability of an initial sale is a function of the number of previous buyers. Almost all products exhibit seasonality in their sales patterns, and these seasonal effects can be influential in forecasting the weekly/monthly/quarterly sales of a new product, which can also be relevant to making different decisions concerning production and advertising. The objective of this paper is to estimate these seasonal effects using a new family of distributions for circular random variables based on nonnegative trigonometric sums, and to use this family of circular distributions to define a seasonal Bass model. Additionally, comparisons in terms of one-step-ahead forecasts between the Bass model and the proposed seasonal Bass model for products such as iPods, DVD players, and the Wii Play video game are included
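
For reference, the baseline (non-seasonal) Bass curve that the proposed model augments can be computed directly from the closed-form adoption fraction F(t) = (1 - e^{-(p+q)t}) / (1 + (q/p)e^{-(p+q)t}); the parameter values below are illustrative.

```python
# Sketch: Bass diffusion curve and per-period sales.
import numpy as np

def bass_cumulative(t, m, p, q):
    """Cumulative adopters m*F(t) in the Bass model, with innovation
    coefficient p and imitation coefficient q."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(0, 41)                      # periods since launch
cum = bass_cumulative(t, m=1e6, p=0.03, q=0.38)
sales = np.diff(cum, prepend=0.0)         # per-period sales
print("peak-sales period:", int(np.argmax(sales)))
```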

Abstract:

In medical and epidemiological studies, the importance of detecting seasonal patterns in the occurrence of diseases makes testing for seasonality highly relevant. There are different parametric and nonparametric tests for seasonality. One of the most widely used parametric tests in the medical literature is the Edwards test. The Edwards test considers a parametric alternative that is a sinusoidal curve with one peak and one trough. The Cave and Freedman test is an extension of the Edwards test that is also frequently applied and considers a sinusoidal curve with two peaks and two troughs as the alternative hypothesis. The Kuiper, Hewitt, and David and Newell tests are common non-parametric alternatives. Fernández-Durán (2004) developed a family of univariate circular distributions based on non-negative trigonometric (Fourier) sums (NNTS) that can account for an arbitrary number of peaks and troughs. In this article, this family of distributions is used to construct a likelihood ratio test for seasonality considering parametric alternative hypotheses that are NNTS distributions
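
A minimal likelihood-ratio seasonality test with a one-peak sinusoidal alternative, in the spirit of the Edwards test, can be written as a Poisson GLM with cosine and sine regressors; the NNTS alternatives of the article generalize this to an arbitrary number of peaks. The monthly counts below are invented.

```python
# Sketch: LR test of seasonality with a sinusoidal Poisson-GLM alternative.
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothetical monthly disease counts (Jan..Dec)
counts = np.array([120, 115, 108, 95, 90, 85, 88, 92, 100, 109, 116, 122])
theta = 2 * np.pi * (np.arange(12) + 0.5) / 12   # month midpoints as angles

null = sm.GLM(counts, np.ones((12, 1)), family=sm.families.Poisson()).fit()
X1 = sm.add_constant(np.column_stack([np.cos(theta), np.sin(theta)]))
alt = sm.GLM(counts, X1, family=sm.families.Poisson()).fit()

lr = 2 * (alt.llf - null.llf)      # approximately chi^2(2) under H0
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, 2):.4f}")
```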

Resumen:

Social capital can be defined as the capacity of individuals or groups to obtain benefits through the use of social networks (Robinson, Siles and Schmid, and Flores and Rello, 2003). In this article, data from the second round of interviews of the Panel of Low-Income Households in the Álvaro Obregón borough, Mexico City (PAO), are used to fit regression models with instrumental variables in order to identify variables associated with social capital in social networks that are significant in explaining the percentage of income that PAO households typically spend on food. In addition, a factor analysis is presented to identify the variables measured in PAO that are related to the dimensions of trust, social networks, and acceptance of social norms that constitute the concept of social capital

Abstract:

Social capital can be defined as the capacity of individuals or groups to obtain benefits by participating in social networks (Robinson, Siles and Schmid, and Flores and Rello, in Atria and Siles, eds., 2003). In this paper, we use data from the second round of the Panel of Low Income Households in the Delegación Álvaro Obregón, México, D.F. (PAO) to fit regression models with instrumental variables in order to identify significant proxy variables related to social capital in social networks that explain the percentage of income that households in PAO spend on food. We also present the results of a factor analysis to identify the variables in PAO that are related to the dimensions of trust, social networks and accepted norms that are the main elements of the definition of social capital

Abstract:

In Fernández-Durán (2004), a new family of circular distributions based on nonnegative trigonometric sums (NNTS models) was developed. Because the parameter space of this family is the surface of a hypersphere, an efficient Newton-like algorithm on manifolds is constructed in order to obtain the maximum likelihood estimates of the parameters

Abstract:

Johnson and Wehrly (1978, Journal of the American Statistical Association 73, 602-606) and Wehrly and Johnson (1980, Biometrika 67, 255-256) show one way to construct the joint distribution of a circular and a linear random variable, or the joint distribution of a pair of circular random variables, from their marginal distributions and the density of a circular random variable, which in this article is referred to as the joining circular density. To construct flexible models, it is necessary that the joining circular density be able to present multimodality and/or skewness in order to model different dependence patterns. Fernández-Durán (2004, Biometrics 60, 499-503) constructed circular distributions based on nonnegative trigonometric sums that can present multimodality and/or skewness. Furthermore, they can be conveniently used as a model for circular-linear or circular-circular joint distributions. In the current work, joint distributions for circular-linear and circular-circular data constructed from circular distributions based on nonnegative trigonometric sums are presented and applied to two data sets, one for circular-linear data related to the air pollution patterns in Mexico City and the other for circular-circular data related to the pairs of dihedral angles between consecutive amino acids in a protein

Abstract:

In many practical situations it is common to have information about the values of the mean and range of a certain population. In these cases it is important to specify the population distribution from the given values of its mean and range. The article by de Alba, Fernández-Durán and Gregorio-Domínguez (2004) includes a bibliography with articles dealing with the case of making inference about the mean and standard deviation of a normal population in terms of the observed sample mean and range. In this paper, the maximum entropy principle is used to specify the population distribution given its mean and range. This problem has the particular difficulty that it is necessary to determine the unknown support of the maximum entropy distribution given its length, which is equal to the specified value of the range

Resumen:

Mexico is a country where various natural phenomena occur, such as floods, hurricanes and earthquakes, which can turn into disasters requiring large sums of money to mitigate their economic effect on the affected population. Generally, these large sums of money are provided by the federal and/or local government. The objective of this work is to develop an actuarial methodology for pricing catastrophe bonds for natural disasters in Mexico, such that, if the catastrophic event occurs during the life of the bond, the government has additional funds available and, if it does not occur, the investors who bought the bond obtain interest rates above the market risk-free rate. Unlike similar bonds issued in other countries, these bonds have the particular feature that, if the catastrophic event occurs, the investor does not lose the entire investment but only a part of it, or all or part of the principal is deferred to a date after the maturity of the catastrophe bond contract, in order to make the bonds more attractive to investors

Abstract:

Floods, hurricanes and earthquakes occur every year in Mexico. These natural phenomena can be considered catastrophes if they produce large economic damages in the affected areas. In these cases, a huge amount of money is required to provide relief to the catastrophe victims and areas. Usually, in Mexico it is the local and/or federal governments that are responsible for providing these funds. The main objective of this article is to develop an actuarial methodology for the pricing of CAT bonds in Mexico in order to allow the government to have additional funds to provide relief to the affected victims and areas in case the catastrophic event occurs during the CAT bond period. If the catastrophic event does not occur during the CAT bond period, then the CAT bondholders will get a higher interest rate than the (risk-free) reference interest rate in the market. To make the CAT bond more attractive to investors, the CAT bonds considered in this work have the additional characteristic that the CAT bondholders do not necessarily lose all their initial investment if the catastrophic event occurs. Instead, a percentage of the CAT bond principal is lost, or their initial investment is paid on a date after the end of the CAT bond period
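
A highly simplified Monte Carlo pricing sketch for a CAT bond with partial principal loss, broadly in the spirit of the structure described above. The hazard rate, coupon, loss fraction, and payment timing are all invented; the paper's actual actuarial methodology is not reproduced here.

```python
# Sketch: expected discounted cash flows of a toy CAT bond.
import numpy as np

rng = np.random.default_rng(6)
n_sims, T = 100_000, 3      # 3-year bond, annual coupons
r = 0.05                    # risk-free rate (continuous compounding)
coupon = 0.09               # CAT coupon rate, above the risk-free rate
lam = 0.08                  # annual intensity of the catastrophic event
loss_frac = 0.5             # fraction of principal lost if the event occurs

event_time = rng.exponential(1 / lam, n_sims)   # time of first catastrophe
pv = np.zeros(n_sims)
for t in range(1, T + 1):
    # coupons are paid only while no catastrophe has occurred
    pv += np.where(event_time > t, coupon * np.exp(-r * t), 0.0)
# principal repaid at maturity, reduced if the event occurred during the term
principal = np.where(event_time > T, 1.0, 1.0 - loss_frac)
pv += principal * np.exp(-r * T)
print("price per unit of principal:", pv.mean())
```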

Abstract:

A new family of distributions for circular random variables is proposed. It is based on nonnegative trigonometric sums and can be used to model data sets which present skewness and/or multimodality. In this family of distributions, the trigonometric moments are easily expressed in terms of the parameters of the distribution. The proposed family is applied to two data sets, one related with the directions taken by ants and the other with the directions taken by turtles, to compare their goodness of fit versus common distributions used in the literature

Abstract:

Many random variables occurring in nature are circular random variables, i.e., their probability density functions have period 2π and their support is the unit circle. The support of a linear random variable is a subset of the real line. When one is interested in the relation between a circular random variable and a linear random variable, it is necessary to construct their joint distribution. The support of the joint distribution of a circular and a linear random variable is a cylinder. In this paper, we use copulas and circular distributions based on non-negative trigonometric sums to construct the joint distribution of a circular and a linear random variable. As an application of the proposed methodology, the hourly quantile curves of ground-level ozone concentration for a monitoring station in Mexico City are analyzed. In this case, the circular random variable has a uniform distribution if we have the same number of observations in each hour during the day, and the linear random variable is the ground-level ozone concentration

Abstract:

Consider a portfolio of personal motor insurance policies in which, for each policyholder in the portfolio, we want to assign a credibility factor at the end of each policy period that reflects the claim experience of the policyholder compared with the claim experience of the entire portfolio. In this paper we present the calculation of credibility factors based on the concept of relative entropy between the claim size distribution of the entire portfolio and the claim size distribution of the policyholder
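
A toy version of the idea in Python: discretize claim sizes into bands, compute the relative entropy between the policyholder's and the portfolio's band distributions with scipy, and map it to a credibility-style factor. The exp(-KL) mapping is an illustrative choice, not the paper's formula.

```python
# Sketch: relative entropy between claim-size distributions.
import numpy as np
from scipy.stats import entropy

# Hypothetical claim-size band probabilities
portfolio = np.array([0.55, 0.25, 0.12, 0.06, 0.02])
insured = np.array([0.40, 0.28, 0.17, 0.10, 0.05])

kl = entropy(insured, portfolio)   # KL(insured || portfolio), in nats
z = np.exp(-kl)                    # 1 if identical, -> 0 as they diverge
print(f"KL = {kl:.4f}, credibility-style factor = {z:.4f}")
```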

Abstract:

Using data from the elections for the Chamber of Deputies of 1997 and 2000 in Mexico, we fit spatial autologistic models with temporal effects to test the significance of spatial and temporal effects in those elections. The binary variable of interest is the one that indicates a win by the National Action Party (PAN) or the alliance that it formed. By spatial effect, we refer to the fact that neighbouring constituencies present dependence in their electoral results. The temporal effect refers to the existence of dependence, for the same constituency, of the result of the election on the result of the previous election. The model that we used to test the significance of spatial and temporal effects is the spatial autologistic model with temporal effects, for which estimation is complex and requires simulation techniques. Defining an urban constituency as one that contains at least one population center of 200,000 inhabitants or more, among our principal results we find that, for the Mexican election of 2000, the spatial effect is significant only when neighbouring constituencies are both urban. For the election of 1997, the spatial effect is significant independently of the type of neighbouring constituencies. The temporal effect is significant in both elections

Resumen:

When a fixed-term loan is granted to a person to purchase a durable consumer good, there is a possibility that the borrower will be unable to meet the loan payments because of involuntary job loss. This situation is a problem for both the debtor and the creditor. The creditor may incur an operating loss, and the debtor may be dispossessed of the good. This paper establishes a methodology for calculating the premium of an unemployment insurance policy whose benefits, in case of involuntary unemployment of the debtor, are the payment of a maximum number (predetermined in the insurance contract) of the monthly loan installments during the unemployment period. The proposed insurance has the same term as the loan, and only the first unemployment spell during the life of the loan is considered for benefit payments. The cost of the insurance is obtained by estimating the transition rates of a two-state (employed and unemployed) continuous-time Markov chain. These transition rates are modeled as functions of covariates such as gender, marital status, age and education. The transition rates are estimated using databases from the National Urban Employment Survey (ENEU) (INEGI, 1998). By considering all possible employment-unemployment trajectories over the duration of the loan, together with their probabilities, we obtain the probability distribution of the monthly amount required by the insurance, defined as the monthly amount the insured would have to pay to cover the monthly loan installments during the first unemployment period if the trajectory under consideration occurred. From this probability distribution it is possible to calculate various measures, such as the standard deviation, and to quantify the risk of the insurance contract

Abstract:

When a person makes an installment purchase of a durable good, it is possible that he/she will be unable to make the periodic payments because he/she loses his/her employment involuntarily. This is an issue for both the borrower and the creditor. The borrower can be deprived of the good and the creditor can incur an operational loss. In this paper we develop a methodology to calculate the net premium of an insurance against involuntary unemployment for installment purchases of durable goods. The benefits of this insurance are a predetermined number of installments during the unemployment spell. The insurance has the same starting date and duration as the installment purchase, and only the first involuntary unemployment spell is considered for benefits. The cost of the insurance is obtained by using a two-state (employed-unemployed) continuous-time Markov chain. The transition rates of the chain are modelled as functions of the covariates gender, marital status, age and educational level. By using these covariates it is possible to identify different risk groups. The transition rates are estimated by using data from the National Urban Employment Survey (Encuesta Nacional de Empleo Urbano, ENEU, INEGI, 1998) in Mexico. By considering all the possible employed-unemployed trajectories during the installment purchase period and the probability of each of these trajectories, it is possible to obtain the probability distribution of the required amount for the insurance, which is defined as the monthly amount that the borrower would have to pay in order to cover the payments of the credit during the first unemployment spell if the considered trajectory occurs. From this probability distribution it is possible to calculate different measures, such as the standard deviation or quantiles, to set safety limits on the cost of the insurance and to give insight into the riskiness of the insurance contract
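
A Monte Carlo sketch of the premium calculation under the simplest version of this setup: constant transition rates, no discounting, and benefits capped at a fixed number of installments for the first unemployment spell. All rates and contract parameters are hypothetical; the paper estimates covariate-dependent rates from the ENEU data.

```python
# Sketch: net single premium of the unemployment insurance by simulation.
import numpy as np

rng = np.random.default_rng(7)
mu = 0.04       # monthly rate, employed -> unemployed (hypothetical)
eta = 0.25      # monthly rate, unemployed -> employed (hypothetical)
term = 36       # credit term in months
payment = 1.0   # monthly installment
max_ben = 6     # maximum number of installments paid by the insurance

def benefit_one_path():
    t = rng.exponential(1 / mu)        # months until first job loss
    if t >= term:
        return 0.0                     # no unemployment during the credit
    spell = rng.exponential(1 / eta)   # length of the first unemployment spell
    months = min(int(np.ceil(spell)), max_ben, term - int(t))
    return payment * months            # only the first spell is covered

benefits = np.array([benefit_one_path() for _ in range(200_000)])
print("net single premium (no interest):", benefits.mean())
```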

Abstract:

A procedure based on the combination of a Bayesian changepoint model and ordinary least squares is used to identify and quantify regions where a radar signal has been attenuated (i.e. diminished) as a consequence of intervening weather. A graphical polar display is introduced that illustrates the location and importance of the attenuation

Resumen:

Democracy does not generate the same expectations among Mexicans. Their conception of it varies according to their belief systems, in which their level of information defines the terms in which Mexicans think of democracy as a form of government. This essay seeks to explain the differences in conceptions of democracy across the country's regions. These differences cast doubt on arguments claiming that there is a democratic deficit in Mexicans' values. Their conception of democracy reflects the environment in which each individual lives and, to a large extent, mirrors the characteristics of the region they inhabit

Abstract:

We study a general scenario where confidential information is distributed among a group of agents who wish to share it in such a way that the data becomes common knowledge among them but an eavesdropper intercepting their communications would be unable to obtain any of said data. The information is modeled as a deck of cards dealt among the agents, so that after the information is exchanged, all of the communicating agents must know the entire deal, but the eavesdropper must remain ignorant about who holds each card. This scenario was previously set up in Fernández-Duque and Goranko (2014) as the secure aggregation of distributed information problem and provided with weakly safe protocols, where given any card c, the eavesdropper does not know with certainty which agent holds c. Here we present a perfectly safe protocol, which does not alter the eavesdropper’s perceived probability that any given agent holds c. In our protocol, one of the communicating agents holds a larger portion of the cards than the rest, but we show how for infinitely many values of a, the number of cards may be chosen so that each of the m agents holds more than a cards and less than 4m²a

Abstract:

We consider the generic problem of Secure Aggregation of Distributed Information (SADI), where several agents acting as a team have information distributed amongst them, modelled by means of a publicly known deck of cards distributed amongst the agents, so that each of them knows only her cards. The agents have to exchange and aggregate the information about how the cards are distributed amongst them by means of public announcements over insecure communication channels, intercepted by an adversary "eavesdropper", in such a way that the adversary does not learn who holds any of the cards. We present a combinatorial construction of protocols that provides a direct solution of a class of SADI problems and develop a technique of iterated reduction of SADI problems to smaller ones which are eventually solvable directly. We show that our methods provide a solution to a large class of SADI problems, including all SADI problems with sufficiently large size and sufficiently balanced card distributions

Abstract:

This article uses possible-world semantics to model the changes that may occur in an agent's knowledge as she loses information. This extends previous work, in which the agent may forget the truth-value of an atomic proposition, to a more general case where she may forget the truth-value of a propositional formula. The generalization poses some challenges, since in order to forget whether a complex proposition π is the case, the agent must also lose information about the propositional atoms that appear in it, and there is no unambiguous way to go about this. We resolve this situation by considering expressions of the form [‡π]ϕ, which quantify over all possible (but 'minimal') ways of forgetting whether π. Propositional atoms are modified non-deterministically, although uniformly, in all possible worlds. We then represent this within action model logic in order to give a sound and complete axiomatization for a logic with knowledge and forgetting. Finally, some variants are discussed, such as when an agent forgets π (rather than forgets whether π) and when the modification of atomic facts is done non-uniformly throughout the model

Abstract:

Dynamic topological logic (DTL) is a polymodal logic designed for reasoning about dynamic topological systems. These are pairs (X,f), where X is a topological space and f : X → X is continuous. DTL uses a language L which combines the topological S4 modality with temporal operators from linear temporal logic. Recently, we gave a sound and complete axiomatization DTL* for an extension of the logic to the language L*, where the topological modality is allowed to act on finite sets of formulas and is interpreted as a tangled closure operator. No complete axiomatization is known in the language L, although one proof system, which we shall call KM, was conjectured to be complete by Kremer and Mints. In this article, we show that given any language L' such that L ⊆ L' ⊆ L*, the set of valid formulas of L' is not finitely axiomatizable. It follows, in particular, that KM is incomplete

Abstract:

Provability logics are modal or polymodal systems designed for modeling the behavior of Gödel’s provability predicate and its natural extensions. If Λ is any ordinal, the Gödel-Löb calculus GLPΛ contains one modality [λ] for each λ < Λ, representing provability predicates of increasing strength. GLPω has no non-trivial Kripke frames, but it is sound and complete for its topological semantics, as was shown by Icard for the variable-free fragment and more recently by Beklemishev and Gabelaia for the full logic. In this paper we generalize Beklemishev and Gabelaia’s result to GLPΛ for countable Λ. We also introduce provability ambiances, which are topological models where valuations of formulas are restricted. With this we show completeness of GLPΛ for the class of provability ambiances based on Icard polytopologies

Abstract:

This article studies the transfinite propositional provability logics GLPΛ and their corresponding algebras. These logics have for each ordinal ξ < Λ a modality 〈ξ〉. We will focus on the closed fragment of GLPΛ (i.e. where no propositional variables occur) and worms therein. Worms are iterated consistency expressions of the form 〈ξn〉…〈ξ1〉⊤. Beklemishev has defined well-orderings <ξ on worms whose modalities are all at least ξ and presented a calculus to compute the respective order-types. In the current article, we present a generalization of the original <ξ orderings and provide a calculus for the corresponding generalized order-types oξ. Our calculus is based on so-called hyperations, which are transfinite iterations of normal functions. Finally, we give two different characterizations of those sequences of ordinals which are of the form 〈oξ(A)〉ξ∈On for some worm A. One of these characterizations is in terms of a second kind of transfinite iteration called cohyperation

Abstract:

Ordinal functions may be iterated transfinitely in a natural way by taking pointwise limits at limit stages. However, this has disadvantages, especially when working in the class of normal functions, as pointwise limits do not preserve normality. To this end we present an alternative method to assign to each normal function f a family of normal functions Hyp[f] = (f^ξ)ξ∈On, called its hyperation, in such a way that f^0 = id, f^1 = f and f^(α+β) = f^α ∘ f^β for all α, β. Hyperations are a refinement of the Veblen hierarchy of f. Moreover, if f is normal and has a well-behaved left-inverse g called a left adjoint, then g can be assigned a cohyperation coH[g] = (g^ξ)ξ∈On, which is a family of initial functions such that g^ξ is a left adjoint to f^ξ for all ξ

Abstract:

For any ordinal Λ, we can define a polymodal logic GLP(Λ), with a modality [ξ] for each ξ < Λ. These represent provability predicates of increasing strength. Although GLP(Λ) has no Kripke models, Ignatiev showed that one can construct a Kripke model of its variable-free fragment with natural number modalities, denoted GLP(ω)(0). Later, Icard defined a topological model for GLP(ω)(0) which is very closely related to Ignatiev's. In this paper we show how to extend these constructions to arbitrary Λ. More generally, for each Θ ≤ Λ we build a Kripke model J(Λ)(Θ) and a corresponding topological model, and show that GLP(Λ)(Θ) is sound for both of these structures, as well as complete, provided Θ is large enough

Abstract:

We show that given a finite, transitive and reflexive Kripke model 〈 W, ≼, [ ⋅ ] 〉 and w∈W , the property of being simulated by w (i.e., lying on the image of a literal preserving relation satisfying the ‘forth’ condition of bisimulation) is modally undefinable within the class of S4 Kripke models. Note the contrast to the fact that lying in the image of w under a bisimulation is definable in the standard modal language even over the class of K4 models, a fairly standard result for which we also provide a proof. We then propose a minor extension of the language adding a sequent operator ♮ (‘tangle’) which can be interpreted over Kripke models as well as over topological spaces. Over finite Kripke models it indicates the existence of clusters satisfying a specified set of formulas, very similar to an operator introduced by Dawar and Otto. In the extended language L⁺ = L□♮, being simulated by a point on a finite transitive Kripke model becomes definable, both over the class of (arbitrary) Kripke models and over the class of topological S4 models. As a consequence of this we obtain the result that any class of finite, transitive models over finitely many propositional variables which is closed under simulability is also definable in L⁺, as well as Boolean combinations of these classes. From this it follows that the μ-calculus interpreted over any such class of models is decidable

Resumen:

Young citizens vote at relatively low rates, which contributes to political parties de-prioritizing youth preferences. We analyze the effects of low-cost online interventions in encouraging young Moroccans to cast an informed vote in the 2021 elections. These interventions aim to reduce participation costs by providing information about the registration process and by highlighting the election's stakes and the distance between respondents' preferences and party platforms. Contrary to preregistered expectations, the interventions did not increase average turnout, yet exploratory analysis shows that the interventions designed to increase benefits did increase the turnout intention of uncertain baseline voters. Moreover, information about parties' platforms increased support for the party closest to the respondents' preferences, leading to better-informed voting. Results are consistent with motivated reasoning, which is surprising in a context with weak party institutionalization

Abstract:

The present study introduces the two-sided and right-sided Quaternion Hyperbolic Fourier Transforms (QHFTs) for analyzing two-dimensional quaternion-valued signals defined in an open rectangle of the Euclidean plane endowed with a hyperbolic measure. The different forms of these transforms are defined by replacing the Euclidean plane waves with the corresponding hyperbolic plane waves in one dimension, giving the hyperbolic counterpart of the corresponding Euclidean Quaternion Fourier Transforms. Using hyperbolic geometry tools, we study the main operational and mapping properties of the QHFTs, such as linearity, shift, modulation, dilation, symmetry, inversion, and derivatives. Emphasis is placed on novel hyperbolic derivative and hyperbolic primitive concepts, which lead to the differentiation and integration properties of the QHFTs. We further prove the Riemann-Lebesgue Lemma and Parseval's identity for the two-sided QHFT. Besides, we establish the Logarithmic, Heisenberg-Weyl, Donoho-Stark, and Benedicks' uncertainty principles associated with the two-sided QHFT by invoking hyperbolic counterparts of the convolution, Pitt's inequality, and the Poisson summation formula. This work is motivated by the potential applications of the QHFTs and the analysis of the corresponding hyperbolic quaternionic signals

Abstract:

Developing new paradigms of user interaction is always challenging. The introduction of the Google Glass platform presents a novel way to deliver content to users. Clearly, the Glass platform is not going to become a mainstream consumer electronics product as it is; however, it was an experimental program from which important practical lessons can be learned. We, as part of the Google Glass Explorer Community, present this study as a contribution to the practical understanding of products that can be core for the development of micro-interaction-based interfaces for wearable gadgets in urban contexts. Throughout this paper we detail the development process of this kind of application by focusing on the challenges presented, the implementation and design decisions, and the usability tests we performed. The main results were that the use of the app is intuitive in general, but users had problems identifying several components that had been adapted to the size of the screen and to the concept of the device

Abstract:

In this paper we give sufficient conditions for the existence of a partition of an r-balanced c-partite tournament into r strongly hamiltonian connected tournaments of order c (an hc-partition). We also prove that every r-balanced c-partite tournament with c >= 5 and r >= 5 is strongly hamiltonian connected if it has an hc-partition and minimum degree at least c(r+12)/4 + 3r/4. As a consequence of these theorems, we give sufficient conditions for balanced multipartite tournaments and regular balanced multipartite tournaments to be strongly hamiltonian connected

Abstract:

In this communication, we report advances of an innovative project in which we investigate the construction of the linear combination concept in relation to systems of linear equations. Our observations indicate that this relationship serves as the foundation for constructing several other concepts in the course, such as Span, Linear Independence, and Basis. We designed activities based on APOS theory to promote the construction of that relation. Two groups of students were interviewed: in one, students were enrolled in a linear algebra course using conventional teaching; the other group worked on a model and activities designed with a genetic decomposition. The results contribute to the literature by focusing on the construction of the linear combination representation of systems of equations

Abstract:

The complete twisted graph of order n, denoted by Tn, is a complete simple topological graph with vertices u1, u2, … , un such that two edges uiuj and ui'uj' cross if and only if i< i'< j'< j or i'< i< j< j'. The convex geometric complete graph of order n, denoted by Gn, is a convex geometric graph with vertices v1, v2, … , vn placed counterclockwise, in which every pair of vertices is adjacent. A biplanar tree of order n is a labeled tree with vertex set { v1, v2, … , vn} having the property of being planar when embedded in both Tn and Gn. Given a connected graph G, the (combinatorial) tree graph T(G) is the graph whose vertices are the spanning trees of G, and two trees P and Q are adjacent in T(G) if there are edges e ∈ P and f ∈ Q such that Q = P - e + f. For all positive integers n, we denote by T(n) the graph T(Kn). The biplanar tree graph, B(n), is the subgraph of T(n) induced by the biplanar trees of order n. In this paper we give a characterization of the biplanar trees and we study the structure, the radius and the diameter of the biplanar tree graph
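
The crossing rule for Tn is purely combinatorial, so it can be checked mechanically. A small sketch using only the definition quoted above (the function names are ours):

```python
from itertools import combinations

def cross_in_twisted(e, f):
    """Edges {i,j} and {i',j'} of the complete twisted graph Tn cross
    iff i < i' < j' < j or i' < i < j < j' (one edge nests inside the other)."""
    (i, j), (k, l) = sorted(e), sorted(f)
    return (i < k < l < j) or (k < i < j < l)

def crossings(edges):
    """Number of crossing pairs among a set of edges embedded in Tn."""
    return sum(cross_in_twisted(e, f) for e, f in combinations(edges, 2))

# Example: a 5-cycle on the first 5 vertices of Tn; only the edge (1, 5)
# nests the edges (2, 3) and (3, 4), giving 2 crossings.
cycle = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(crossings(cycle))   # 2
```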

Abstract:

This article reviews a theory of mathematics teaching (APOS), presenting its main characteristics and its use in the design of activities and didactic materials. In particular, we describe how it has been successfully applied in several linear algebra courses

Abstract:

This study presents a contribution to research on the undergraduate teaching and learning of linear algebra, in particular, the learning of matrix multiplication. A didactical experience consisting of a modeling situation and a didactical sequence to guide students' work on the situation was designed and tested using APOS theory. We show results of research on students' activity and learning while using the sequence, through analysis of students' work and assessment questions. The didactic sequence proved to have the potential to foster students' learning of function, matrix transformations and matrix multiplication. A detailed analysis of those constructions that seem to be essential for students' understanding of this topic, including linear transformations, is presented. These results are contributions of this study to the literature

Abstract:

A minimum feedback arc set of a digraph D is a minimum set of arcs whose removal leaves the resulting digraph free of directed cycles; its cardinality is denoted by τ1(D). The acyclic disconnection of D, ω(D), is defined as the maximum number of colors in a vertex coloring of D such that every directed cycle of D contains at least one monochromatic arc. In this article we study the relationship between the minimum feedback arc set and the acyclic disconnection of a digraph, and we prove that the acyclic disconnection problem is NP-complete. We also define the acyclic disconnection and the minimum feedback arc set for graphs, and prove that ω(G) + τ1(G) = |V(G)| if G is a wheel, a grid or an outerplanar graph
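
For very small digraphs, τ1 can be computed by brute force directly from its definition; the following sketch (our own illustration, not an algorithm from the article) checks acyclicity with Kahn's algorithm:

```python
from itertools import combinations

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: True iff the digraph has no directed cycle."""
    indeg = {v: 0 for v in vertices}
    for u, v in arcs:
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    remaining = set(arcs)
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for (a, b) in list(remaining):
            if a == u:
                remaining.discard((a, b))
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(vertices)

def tau1(vertices, arcs):
    """Cardinality of a minimum feedback arc set, by exhaustive search
    over removal sets of increasing size (viable only for small digraphs)."""
    for k in range(len(arcs) + 1):
        for removed in combinations(arcs, k):
            if is_acyclic(vertices, [a for a in arcs if a not in removed]):
                return k

# Directed triangle plus a chord: removing any arc of the 3-cycle suffices.
V = [1, 2, 3]
A = [(1, 2), (2, 3), (3, 1), (1, 3)]
print(tau1(V, A))   # 1
```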

Abstract:

In this paper we relate the global irregularity and the order of a c-partite tournament T to the existence of certain cycles and to the problem of finding the maximum strongly connected subtournament of T. In particular, we give results related to the following problem of Volkmann: how close to regular must a c-partite tournament be to secure a strongly connected subtournament of order c?

Abstract:

Given a set of cycles C of a graph G, the tree graph of G defined by C is the graph T(G,C) whose vertices are the spanning trees of G and in which two trees R and S are adjacent if the union of R and S contains exactly one cycle and this cycle lies in C. Li et al [Discrete Math 271 (2003), 303--310] proved that if the graph T(G,C) is connected, then C cyclically spans the cycle space of G. Later, Yumei Hu [Proceedings of the 6th International Conference on Wireless Communications Networking and Mobile Computing (2010), 1--3] proved that if C is an arboreal family of cycles of G which cyclically spans the cycle space of a 2-connected graph G, then T(G, C) is connected. In this note we present an infinite family of counterexamples to Hu's result

Abstract:

Let T be a 3-partite tournament and F3(T) be the set of vertices of T not in triangles. We prove that if the global irregularity of T, ig(T), is one and |F3(T)| > 3, then F3(T) must be contained in one of the partite sets of T and |F3(T)| <= ((k+1)/4)+1, which implies |F3(T)| <= ((n+5)/12)+1, where k is the size of the largest partite set and n the number of vertices of T. Moreover, we give some upper bounds on the number of such vertices, as well as results on their structure within the digraph, depending on its global irregularity

Resumen:

Este artículo constituye un avance de la biografía completa de Francisco de Paula de Arrangoiz y Berzábal (1811-1892). Se trata de un personaje enigmático y que hasta la fecha quienes lo han tratado no señalan correctamente los datos biográficos duros, como su nacimiento y muerte. Con el fin de situarlo y comprender su origen social y su trayectoria, se abarcan los años 1799 a 1846. Se estudian sus antecedentes familiares, su formación académica y sus dos primeras misiones consulares en fuentes primarias inexploradas, profundizando en los periodos menos conocidos. Se incluye un corolario en el que se revisa someramente el periodo de 1847 hasta su fallecimiento en 1892. La información referente a sus últimos días también constituye una novedosa aportación

Abstract:

This paper is a preview of the complete biography of Francisco de Paula de Arrangoiz y Berzábal (1811-1892). He is an enigmatic character, and to this day those who have written about him fail to establish correctly even the hard biographical data, such as his birth and death. In order to situate him and understand his social origin and trajectory, the years from 1799 to 1846 are covered. His family background, his academic training and his first two consular missions are studied in unexplored primary sources, delving into the lesser-known periods. A corollary is included, in which the period from 1847 to his death in 1892 is briefly reviewed. The findings concerning his last days are also a novel contribution

Resumen:

Se enmarca la historia del Segundo Imperio en México dentro del contexto internacional estudiando la influencia de la situación y los conflictos europeos de la época, la diplomacia que desarrolló Maximiliano y el papel de los Estados Unidos ante la Intervención Francesa

Abstract:

This paper frames the history of the Second Empire in Mexico within the international context, studying the influence of the situation and the European conflicts of the time, the diplomacy developed by Maximilian, and the role of the United States in the face of the French Intervention

Resumen:

El editor realiza unas notas biográficas de Francisco de Arrangoiz, autor de un folleto poco conocido, editado en francés con el título La chute de l'empire du Mexique, par un mexicain, en el que Arrangoiz ataca desde su perspectiva conservadora al Imperio de Maximiliano por su política liberal y defiende al clero mexicano y al Estado pontificio. Se presenta al final el folleto traducido al español

Abstract:

The editor produces biographical notes about Francisco de Arrangoiz, who wrote a little known brochure published in French entitled La chute de l'empire du Mexique, par un mexicain, in which Arrangoiz, from his conservative perspective, attacks the Maximilian empire for its liberal policy and defends the Mexican clergy and the Pontifical State. The brochure, translated into Spanish, is presented at the end of the text

Resumen:

El artículo compara y analiza la visión de Frances Erskine Inglis, Madame Calderón de la Barca, de dos levantamientos militares en México en 1840 y 1841 y una revolución en Madrid en 1854. La autora era una escritora escocesa, casada con el diplomático y hombre de Estado español Ángel Calderón de la Barca, a quien acompañó en sus misiones diplomáticas en México y Estados Unidos, permaneciendo a su lado en su desempeño como Ministro de Estado en España; su paso por México quedó plasmado en La vida en México, y su vida en la península en The Attaché in Madrid, obras en las cuales se puede leer el relato y comentario de dichos acontecimientos políticos

Abstract:

This paper compares and analyzes the point of view of Frances Erskine Inglis, Madame Calderón de la Barca, on the 1840 and 1841 military rebellions in Mexico and on the 1854 Spanish revolution. The author was a Scottish writer, married to a Spanish diplomat and statesman, Ángel Calderón de la Barca, whom she followed during his diplomatic missions in Mexico and the United States, and later stayed by his side during his administration as Minister of State in Spain. Her time in Mexico was embodied in La vida en México, and her life in the Spanish peninsula in The Attaché in Madrid, literary works in which it is possible to read the account of, and commentary on, the political events mentioned before

Résumé:

L'article fait une comparaison et une analyse de la perception qu'a eue Frances Erskine Inglis, Madame Calderón de la Barca, de deux révoltes militaires se déroulant au Mexique en 1840 et 1841, et de la révolution espagnole de 1854. L'auteure était une écrivaine écossaise, mariée avec un diplomate et homme d'État espagnol, Ángel Calderón de la Barca, qu'elle a accompagné durant ses missions diplomatiques au Mexique et aux États-Unis et, plus tard, restant à ses côtés durant l'exercice de ses fonctions comme Ministre d'État en Espagne. Son passage au Mexique s'est matérialisé dans La vida en México, et sa vie dans la péninsule espagnole dans The Attaché in Madrid, œuvres dans lesquelles on peut lire le récit et le commentaire des événements politiques mentionnés ci-dessus

Resumen:

El artículo narra y analiza los principales acontecimientos políticos, sociales, económicos e internacionales que afectaron la historia de México durante el período comprendido entre 1855 y 1867, época de la creación del Estado republicano, liberal, federal y secular, que tuvo que hacer frente a la opción conservadora, centralista y clerical. Finalmente, se opuso al proyecto monarquista, liberal, centralista y regalista: es el parteaguas del México moderno

Abstract:

This article chronicles and analyzes the major Mexican political, social, economic, and international events during the period 1855-1867. This is the time of the creation of the republican, liberal, federal, and secular State in opposition to the conservative, centralist, and clerical alternative. By the end of that period, it faced the monarchist, liberal, centralist, and royalist agenda. It is the turning point in modern Mexican history

Abstract:

This paper focuses on the analysis of El Correo Español, the newspaper of the Spanish colony in Mexico, and its attitude during the war of 1898

Abstract:

In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not computationally feasible. To address this we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as in a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets
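
The paper's diagnostic measures are not reproduced here, but the underlying comparison, a joint DPM versus a product of marginal DPMs, can be sketched with scikit-learn's truncated Dirichlet process mixtures; the held-out log-density difference below is a crude stand-in for the authors' measures:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)

# Synthetic pair with a nonlinear dependence
n = 2000
x = rng.normal(0, 1, n)
y = x ** 2 + 0.3 * rng.normal(0, 1, n)
data = np.column_stack([x, y])
train, test = data[: n // 2], data[n // 2:]

def dpm():
    # Truncated Dirichlet process mixture of Gaussians
    return BayesianGaussianMixture(
        n_components=20, weight_concentration_prior_type="dirichlet_process",
        covariance_type="full", max_iter=500, random_state=0)

joint_ll = dpm().fit(train).score_samples(test).sum()            # dependent model
indep_ll = (dpm().fit(train[:, :1]).score_samples(test[:, :1]).sum()
            + dpm().fit(train[:, 1:]).score_samples(test[:, 1:]).sum())

# Positive values favour dependence over the independence "null model".
print("held-out log-density difference:", joint_ll - indep_ll)
```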

Abstract:

We study how proximate neighbors affect one's propensity to vote using data on 12 million registered voters in Mexico. To identify this effect, we exploit idiosyncratic variation at the neighborhood block level resulting from approximately one million relocation decisions. We find that when individuals move to blocks where people vote more (less) they themselves start voting more (less). We show that this finding is not the result of selection into neighborhoods or of place-based factors that determine turnout, but rather of peer effects. Consistent with this claim, we find a contagion effect for non-movers and show that neighbors from the same block are much more likely to perform an electoral procedure on exactly the same day than are neighbors who live on different blocks within a neighborhood

Abstract:

We present an integrated platform for the visualization, analysis, segmentation and reconstruction of MR brain images. Our tool allows the user to interactively analyze a stack of MR images, or to automatically segment multiple anatomical structures and perform 3D reconstructions. The user can also interactively guide the segmentation process to produce better quality results. The tool is light and fast, lending itself to use as general-purpose MR imaging manipulation software

Abstract:

The article presents a possible solution to a typical problem of generating tomographic images from data of an industrial process located in a pipeline or vessel. These data are capacitance measurements obtained non-invasively according to the well-known ECT technique (Electrical Capacitance Tomography). Each 313-pixel image frame is derived from 66 capacitance measurements sampled from the real-time process. The neural nets have been trained using the backpropagation algorithm, where training samples have been created synthetically from a computational model of the real ECT sensor. To create the image, 313 neural nets, each with 66 inputs and one output, are used in parallel. The resulting image is finally filtered and displayed. The different ECT system stages, along with the different tests performed with synthetic and real data, are reported. We show that our method is a faster and more precise practical alternative to previously reported ones
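
A minimal sketch of the per-pixel architecture (313 independent nets with 66 inputs and one output each) is shown below; the linear sensor stand-in and all training settings are illustrative, since the actual training pairs come from a model of the real ECT sensor:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
N_MEAS, N_PIX = 66, 313            # 66 capacitances -> one 313-pixel frame

# Stand-in for the synthetic training data; in the reported system these
# pairs come from a computational model of the real ECT sensor.
A = rng.normal(size=(N_PIX, N_MEAS))
C = rng.uniform(size=(500, N_MEAS))            # capacitance samples
IMG = np.tanh(C @ A.T)                         # corresponding pixel values

# One small backpropagation-trained net per pixel (66 inputs, 1 output).
# The 313 nets are independent, hence the parallel arrangement described
# above; training them all takes a few minutes.
nets = []
for p in range(N_PIX):
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=200, random_state=0)
    nets.append(net.fit(C, IMG[:, p]))

c_new = rng.uniform(size=(1, N_MEAS))          # one new measurement vector
frame = np.array([net.predict(c_new)[0] for net in nets])
print(frame.shape)                             # (313,) ready for filtering/display
```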

Abstract:

The resources of the health sector have fallen alarmingly in recent years. More than ever, sustainable policies and financing strategies are needed in order not to leave fifty million Mexicans adrift, without access to health services

Abstract:

Universal coverage is a goal that must be reached as the result of a process, not of a decree. This article offers reflections for the development of that objective

Abstract:

The Mexican scientific production published in mainstream journals included in the Web of Science (Clarivate Analytics) for the period 1995-2015 is analyzed. To this purpose, the bibliometric data of the 32 states were organized into five groups according to the following criteria: research production, institutional sectors, and number of research centers. Our findings suggest that there has been an important deconcentration of the scientific activities mainly towards public state universities as a consequence of various public policies. While institutions located in Mexico City (CX) have published, during this period, 48.1% of the whole Mexican scientific production, there are two other groups of states with rather different productivity: 13 of them published 40% of the output and the rest of the entities (18) just published 11%. Our findings suggest that the highest research performance corresponds to those federal entities where there are branches of higher education institutions located in CX. We also identify those institutional sectors that contribute importantly to a specific research output for each federal entity. The results of this study could be useful to improve science and technology public policies in each state

Abstract:

This paper presents a networking of two theories, the APOS Theory and the ontosemiotic approach (OSA), to compare and contrast how they conceptualize the notion of a mathematical object. As context of reflection, we designed an APOS genetic decomposition for the derivative and analyzed it from the point of view of OSA. Results of this study show some commonalities and some links between these theories and signal the complementary nature of their constructs

Resumen:

En este trabajo, después de un breve resumen del APOE y del EOS, se analiza una descomposición genética de la derivada, realizada usando los constructos teóricos del APOE, desde la perspectiva del EOS. La mirada realizada desde el EOS se focaliza en la encapsulación de procesos en objetos que, según el APOE, se realiza en dicha descomposición genética. Esta mirada nos permite concluir que la manera de conceptualizar la encapsulación de procesos en objetos en el APOE no informa sobre la naturaleza del objeto que ha emergido ni de sus cambios de naturaleza

Abstract:

In this paper, after a brief overview of APOS and OSA, we analyze a genetic decomposition of the derivative, carried out using the theoretical constructs of APOS, from the perspective of OSA. This OSA perspective focuses on the encapsulation of processes into objects that, according to APOS, takes place in this genetic decomposition. This view allows us to conclude that the way encapsulation of processes into objects is conceptualized in APOS provides no information about the nature of the object that has emerged, nor about its changes of nature

Abstract:

As robotic systems become increasingly capable of complex sensory, motor and information processing functions, the ability to interact with them in an ergonomic, real-time and adaptive manner becomes an increasingly pressing concern. In this context, the physical characteristics of the robotic device should become less of a direct concern, with the device being treated as a system that receives information, acts on that information, and produces information. Once the input and output protocols for a given system are well established, humans should be able to interact with these systems via a standardized spoken language interface that can be tailored if necessary to the specific system. The objective of this research is to develop a generalized approach for human-machine interaction via spoken language that allows interaction at three levels. The first level is that of commanding or directing the behavior of the system. The second level is that of interrogating or requesting an explanation from the system. The third and most advanced level is that of teaching the machine a new form of behavior. The mapping between sentences and meanings in these interactions is guided by a neuropsychologically inspired model of grammatical construction processing. We explore these three levels of communication on two distinct robotic platforms. The novelty of this work lies in the use of the construction grammar formalism for binding language to meaning extracted from video in a generative and productive manner, and in thus allowing the human to use language to command, interrogate and modify the behavior of the robotic systems
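
A toy illustration of the first two interaction levels, mapping sentences to predicate-argument meaning frames through construction-like templates, is sketched below; the patterns and frames are invented for illustration and are far simpler than the actual grammatical construction model:

```python
import re

# Toy grammatical constructions: sentence patterns paired with meaning
# templates (predicate-argument frames). All names are illustrative only.
CONSTRUCTIONS = [
    (re.compile(r"put the (\w+) on the (\w+)"), "put({0}, on={1})"),   # level 1
    (re.compile(r"give me the (\w+)"),          "give(me, {0})"),      # level 1
    (re.compile(r"why did you (\w+).*"),        "explain(action={0})"),# level 2
]

def interpret(sentence):
    """Map a sentence to a meaning frame via the first matching construction."""
    for pattern, frame in CONSTRUCTIONS:
        m = pattern.fullmatch(sentence.lower().strip("?. "))
        if m:
            return frame.format(*m.groups())
    return None

for s in ["Put the ball on the table.", "Why did you stop?"]:
    print(s, "->", interpret(s))
```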

Abstract:

Portland cement (PC) is a material that is indispensable for satisfying recent urban requirements, which demand infrastructure with adequate mechanical and durability properties. In this context, building construction has employed nanomaterials (e.g., metal oxides, carbon, and industrial/agro-industrial waste) as partial replacements for PC to obtain construction materials with better performance than those manufactured using only PC. Therefore, in this study, the properties of the fresh and hardened states of nanomaterial-reinforced PC-based materials are reviewed and analyzed in detail. The partial replacement of PC by nanomaterials increases their mechanical properties at early ages and significantly improves their durability against several adverse agents and conditions. Owing to the advantages of nanomaterials as a partial replacement for PC, long-term studies on the mechanical and durability properties are highly necessary

Abstract:

A saddle-node bifurcation is demonstrated in a model of mosquito population growth. This bifurcation helps us understand the dynamics of the two stages in the model: the aquatic stage (which includes the egg, larva, and pupa) and the nonaquatic stage (the mature mosquito). We calculated the basic offspring number using the next-generation matrix method and identified it with the parameter value at which the saddle-node bifurcation occurs. This means that the stable extinction state becomes unstable, while a survival state emerges as dominant in the dynamics, along with another coexistence state acting as a transition to survival. Numerical computations verify how factors such as limited logistic growth, the Allee effect, and the mortality density rate modify the mosquito dynamics
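
To make the fold mechanism concrete, here is a hypothetical scalar caricature (not the paper's two-stage model) combining logistic saturation, an Allee effect and a mortality parameter d; scanning d shows the pair of positive equilibria disappearing at a saddle-node:

```python
import numpy as np

# Hypothetical caricature of mosquito dynamics (illustrative only):
#   dx/dt = r*x*(1 - x/K)*(x/A - 1) - d*x
# with logistic cap K, Allee threshold A and extra mortality rate d.
r, K, A = 1.0, 100.0, 10.0

def positive_equilibria(d, grid=np.linspace(1e-3, 120, 24000)):
    f = r * grid * (1 - grid / K) * (grid / A - 1) - d * grid
    sign_changes = np.where(np.diff(np.sign(f)) != 0)[0]
    return grid[sign_changes]                 # approximate positive roots

# Below the fold there are two positive equilibria (the larger one is a
# stable survival state, the smaller an Allee threshold); past the fold
# both disappear and only the extinction state x = 0 remains stable.
for d in [0.5, 1.0, 1.5, 2.0, 2.5]:
    eq = positive_equilibria(d)
    print(f"d = {d:4.2f}: {len(eq)} positive equilibria at {np.round(eq, 1)}")
```

For these parameter values the two equilibria collide between d = 2.0 and d = 2.5, which is the saddle-node.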

Abstract:

We consider a curved Sitnikov problem, in which an infinitesimal particle moves on a circle under the gravitational influence of two equal masses in Keplerian motion within a plane perpendicular to that circle. There are two equilibrium points, whose stability we study. We show that one of the equilibrium points undergoes stability interchanges as the semi-major axis of the Keplerian ellipses approaches the diameter of that circle. To derive this result, we first formulate and prove a general theorem on stability interchanges, and then we apply it to our model. The motivation for our model comes from the n-body problem in spaces of constant curvature

Abstract:

We study the restricted 3-body problem with motion restricted to the unit circle. First, we study the 2-body problem on the unit circle and give the explicit solutions for a regularized version of the equations of motion for any initial data. We classify the motions as elliptic, parabolic or hyperbolic, plus an equilibrium state. Then, we analyze the restricted 3-body problem on the unit circle when the primary bodies are performing elliptic and hyperbolic motions. We show the existence of just one equilibrium state when the masses of the primary bodies are equal, and we exhibit the hyperbolic structure of this equilibrium point via an exponential dichotomy. In the last part we regularize the equations of motion. We show the global dynamics and some periodic solutions with their respective periods

Abstract:

We introduce a modular methodology for evaluating the perceived quality of video streams in simulated packet networks through the use of artificial neural networks. One particular implementation of the test bed is presented, and the results obtained with it under simple network configurations are discussed. Our tool was able to accurately predict the MOS scores of human viewers. Other applications of the test bed are also presented. From our data analysis, we found that the usual parameters that can be controlled in an MPEG-4 codec do not have as strong an influence on the perceived video quality as a good network design that protects the video flows may have

Abstract:

We analyze the convergence of a continuous interior penalty (CIP) method for a singularly perturbed fourth-order elliptic problem on a layer-adapted mesh. On this anisotropic mesh, we prove, under reasonable assumptions, uniform convergence of almost order k - 1 for finite elements of degree k ≥ 2. This result is of better order than the known robust result on standard meshes. A by-product of our analysis is an analytic lower bound for the penalty of the symmetric CIP method. Finally, our convergence result is verified numerically

Abstract:

This two-part study describes a learning exercise including a video and a worksheet designed to raise students' awareness of the need to evaluate the completeness of references generated by ChatGPT. The first part of the study assesses the completeness and relevance of academic references generated by ChatGPT using four prompts and three versions of ChatGPT. Content analysis using a priori coding revealed that most of the references generated are hallucinations or only partially complete. The results from the first part of the study were utilized in the second part, where 66 students evaluated the completeness of a sample of the references that had been generated and evaluated in the first part. Although most students participating in the case study correctly answered questions about the content of the videos, only some students were able to correctly evaluate the completeness of the generated references
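
A worksheet exercise of this kind can be supported by a simple field-completeness check. The rubric below is our own illustration, not the coding scheme used in the study:

```python
import re

# Illustrative rubric (ours, not the paper's): a journal reference counts as
# "complete" when all fields are present and the DOI is at least well-formed.
REQUIRED = ["authors", "year", "title", "journal", "volume", "pages", "doi"]
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def completeness(ref):
    missing = [f for f in REQUIRED if not ref.get(f)]
    if ref.get("doi") and not DOI_RE.match(ref["doi"]):
        missing.append("doi (malformed)")
    return "complete" if not missing else "partial, missing: " + ", ".join(missing)

generated = {  # e.g. a reference produced by a chatbot, to be verified by hand
    "authors": "Smith, J.", "year": "2021",
    "title": "A study", "journal": "Journal of Examples",
    "volume": "", "pages": "1-10", "doi": "10.1234/abc123",
}
print(completeness(generated))   # partial, missing: volume
```

Note that a syntactically complete reference can still be a hallucination, so a check like this complements, rather than replaces, verifying the reference against a bibliographic database.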

Resumen:

Los agentes de conversación y los sistemas de diálogo (interfaces de control de voz, chatbots, asistentes personales) están ganando impulso como técnicas de interacción humano-computadora en la sociedad digital. Al platicar con Mitsuku (una inteligencia artificial conversacional) te puedes dar cuenta que es capaz de seguir una conversación, de recordar datos e, incluso, de aceptar correcciones. Aunque todavía hay que esperar un poco más para que sea verdaderamente conversacional, los asistentes virtuales no están hechos aún para pláticas reales, nos debemos preparar porque se encuentran en plena expansión y están dando el salto a servicios de la vida diaria

Abstract:

Conversation agents and dialogue systems (voice control interfaces, chatbots, personal assistants) are gaining momentum as human-computer interaction techniques in the digital society. When you talk with Mitsuku (a conversational artificial intelligence), you realize that it is able to follow a logical conversation, accept corrections, and remember data and information. But we still have to wait a little longer for it to be truly conversational; virtual assistants are not made for real conversations yet. We must be prepared, because they are in full expansion and are making the leap to daily-life services

Abstract:

Since 1997, Mexico, like many Latin American countries, has seen significant public and private investment aimed at incorporating information technology in the classroom. In-class information technology holds the great promise of revolutionizing the teaching-learning process. The reality is different, often showing negligible impact from such efforts. One common criticism has been the failure to properly train and support teachers before rolling out technology for use in their classrooms. This article looks at a recent effort of the Mexican government to address the issue of teacher training and support. @prende 2.0 was a program of the Mexican federal government that involved 2,700 digital trainers who trained more than 63,000 teachers in the use of the technological equipment they would be provided. Analyzing administrative information and hard data from @prende, this article examines the program's successes and challenges to fashion a series of recommendations regarding similar training and support efforts

Abstract:

In this work we present a computational system for learning support called SAGE (Sistema de Apoyo Generalizado para la Enseñanza Individualizada), designed to offer a teaching plan for each student according to their skills and knowledge, based on a taxonomy of learning objectives. To achieve this, a content map and Bloom's Taxonomy were used. The content map organizes subjects from the general to the particular through a Morganov-Heredia matrix, in which dependencies are established and the existing relationships between the course subjects are represented. The model of skills and knowledge is based on the first four cognitive levels of Bloom's Taxonomy; it is obtained from a diagnostic test and updated according to the students' progress. The system consists of three modules that were created according to the object-oriented methodology: the student module, the teacher module and the interface module. In the student module, students take the lessons and consult their evaluations; in the teacher module, teachers register students in the course and follow up on their progress; and the interface module provides simple interaction with the users, keeping the student's attention during the lessons and facilitating the teacher's queries. Our final system is content-free, integrates support tools such as games and practice exercises, and allows students and teachers to check the progress of the course by comparing scores with the group average, showing positions within the group, medians and standard deviations, as well as charts that show progress from one chapter to another. We also propose three improvements for the system: a clustering analysis of the students' cognitive characteristics to determine grouping profiles, and the Bayesian Knowledge Tracing method based on those profiles, so that progress depends on the probabilities of students having the required knowledge but failing
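
The content-map mechanism can be sketched as a prerequisite matrix plus a topological ordering of the topics the student has not yet mastered. Topics, matrix entries and function names below are made up for illustration:

```python
from collections import deque

# Illustrative Morganov-Heredia-style dependency matrix: M[i][j] = 1 means
# topic j must be mastered before topic i. Topics and entries are made up.
topics = ["sets", "relations", "functions", "limits"]
M = [
    [0, 0, 0, 0],   # sets: no prerequisites
    [1, 0, 0, 0],   # relations depends on sets
    [1, 1, 0, 0],   # functions depends on sets, relations
    [0, 0, 1, 0],   # limits depends on functions
]

def teaching_plan(mastered):
    """Order the not-yet-mastered topics so every prerequisite comes first
    (Kahn's topological sort over the content map)."""
    pending = [i for i in range(len(topics)) if topics[i] not in mastered]
    indeg = {i: sum(M[i][j] for j in pending) for i in pending}
    queue = deque(i for i in pending if indeg[i] == 0)
    plan = []
    while queue:
        j = queue.popleft()
        plan.append(topics[j])
        for i in pending:
            if M[i][j]:
                indeg[i] -= 1
                if indeg[i] == 0:
                    queue.append(i)
    return plan

# A diagnostic test shows the student already masters "sets":
print(teaching_plan(mastered={"sets"}))   # ['relations', 'functions', 'limits']
```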

Abstract:

Blockchain has the potential to transform the financial services industry, institutional functions, business operations, and other areas such as education. The current paper focuses on one real-world illustration of blockchain's potential: a pilot project that used a blockchain (hosted by Ethereum) to store certifications for 1,518 teachers who participated in a teacher training in Mexico

Abstract:

The objective of our investigation was to design a formal mentoring program for novice professors who come from another culture and are recent graduates from a doctoral program. We studied a sample of eight international novice professors in the program to demonstrate its effectiveness. What distinguishes this program from others is that it offers mentoring to help professors both improve the quality of their instruction and adapt to the culture of the country and of the university. The methodology used was that of case study with a design of pre-test, intervention, post-test. The professors who participated in the mentoring program demonstrated an improvement of 0.95 points (on a scale of five points, p<0.01) in their student evaluations. The principal causes of deficient teaching performance in the novice international faculty were: the absence of pedagogical knowledge and the lack of teaching experience, as well as lack of familiarity with the country culture and the organizational culture of the university. Our study shows that a mentoring program can help improve low student evaluations of novice professors who come from another culture and are recent graduates of a doctoral program

Abstract:

Advances in serious games have influenced education and transformed the teaching-learning processes, among other aspects. There are educational games that have improved particular math skills but lack the educational paradigm goals or engaging elements. Our work follows a pedagogical-approach methodology, and our final goal is to improve math competences: according to the PISA exam, 55% of Mexican students lack the mathematical skills to compete in the working world

Abstract:

This article makes two different contributions: an analysis of learning styles among undergraduate students in different academic programs, and a proposed regrouping of programs in order to improve teaching practice. The study was conducted in Mexico City in a Mexican private university (Instituto Tecnológico Autónomo de México - ITAM), among a sampling of 753 first-year students in 11 undergraduate degree programs, applying the learning styles questionnaire developed by Felder and Silverman. The results of our research showed that there were similarities between the learning styles of some programs, which can be grouped into four major categories: 1) active, sensitive, visual and sequential learning styles in the Administration, Business Engineering, Economics, Industrial Engineering and Law programs; 2) active-reflective, sensitive, visual and sequential learning styles in the Actuarial and Accounting programs; 3) active-reflective, sensitive-intuitive, visual and sequential-global in the Applied Mathematics, Computer Engineering and Telematics Engineering programs; 4) active, sensitive-intuitive, visual and sequential-global in the International Relations program. The results of our investigation imply that courses should be planned taking into account learning styles shared by the students in different programs, adjusting teaching techniques-electronic media, for example-in order to optimize learning

Abstract:

Nowadays the use of video is a natural process for digital-native students, yet several aspects of instructional video, whether in e-learning or in traditional learning, have not been well investigated. A major problem with the use of instructional video has been the lack of interactivity [1]. It is difficult to manage video: students cannot directly jump to a particular part of a video, nor can the teacher or the student add explanations to a specific part. Browsing a non-interactive video is more difficult and time-consuming, because people have to view and listen to the video sequentially; it remains a linear process. We defined and developed an online interactive video platform that allows proactive and random access to video content based on questions or search targets, with an interactive word glossary, a dictionary, online books, educational video resources, extra explanations from the teachers, and comments for students in real time. If learners can determine what to construct or create, they are more likely to engage in learning. Interactive video increases learner-content interactivity, thus potentially motivating students and improving learning effectiveness [1]. We are evaluating this system at some universities in Mexico

Abstract:

Advances in Information and Communications Technology (ICT) have influenced education and transformed the teaching-learning processes, among other aspects. It has been established that teachers' knowledge of digital media, their design and pedagogical application, is extremely relevant to improving their teaching activities. As Salinas (1997) says, "teachers are essential at the time of initiating any change. Their knowledge and skills are essential for the correct operation of a program". Therefore, it is necessary to extend the variety of educational experiences they can offer to students when technology is available in their environment and has become part of their culture. In this paper, we discuss the importance of technological means as part of teaching strategies, and a teachers' guide is proposed to select suitable technology for specific didactic strategies based on students' learning styles

Resumen:

Investigaciones sobre procesos de aprendizaje han mostrado que los estudiantes tienden a aprender en diferentes maneras y que prefieren utilizar diferentes recursos de enseñanza. El entender los estilos de aprendizaje puede servir para identificar, e implantar, mejores estrategias de enseñanza y aprendizaje, de tal forma que los estudiantes adquieran nuevo conocimiento de manera más efectiva y eficiente. Aquí, se analizan similitudes y diferencias entre estilos de aprendizaje de estudiantes inscritos en cursos de cómputo, en programas de Ingeniería y Ciencias Sociales del Instituto Tecnológico Autónomo de México (ITAM). Adicionalmente, se analizan similitudes y diferencias en estrategias de enseñanza de sus correspondientes profesores. Un análisis comparativo sobre perfiles de aprendizaje de los estudiantes y los resultados obtenidos en los cursos, sugiere que existen grandes similitudes entre los estilos de aprendizaje de los estudiantes, y las estrategias de enseñanza de sus profesores, a pesar de las diferencias entre sus programas académicos. También existe un patrón consistente de cómo estos estudiantes aprenden: Activo, Sensible, Visual, y Secuencial. En la última parte de este artículo se discute como estos hallazgos podrían tener una implicación significativa en el desarrollo de estrategias pedagógicas efectivas, y de materiales didácticos multimedia específicos, para cada programa educativo

Abstract:

Research on learning processes has shown that students tend to learn in different ways and prefer to use different teaching resources. The understanding of learning styles can be used to identify, and implement, better teaching and learning strategies, in order to allow students to acquire new knowledge in a more effective and efficient way. In this study we analyze similarities and differences in learning styles among students enrolled in computing courses, in engineering and social sciences programs at the Instituto Tecnológico Autónomo de México (ITAM). In addition, we also analyze similarities and differences among the teaching strategies shown by their corresponding teachers. A comparative analysis of student learning profiles and course outcomes allows us to suggest that, despite academic program differences, there are strong similarities among the students' learning styles, as well as among the teaching styles of their professors. Seemingly, a consistent pattern of how these students learn also exists: Active, Sensitive, Visual and Sequential. At the end of the paper, we discuss how these findings might have significant implications in developing effective pedagogic strategies, as well as didactic multimedia-based materials for each one of these academic programs

Abstract:

Recent research on the learning process has shown that students tend to learn in different ways and that they prefer to use different teaching resources as well. Many researchers agree on the fact that learning materials shouldn't just reflect the teacher's style, but should be designed for all kinds of students and all kinds of learning styles. Even though they agree on the importance of applying these learning styles to different learning systems, various problems still need to be solved, such as matching teaching contents with the student's learning style. In this paper, we describe the design of a personalized teaching method that is based on an adaptive taxonomy using Felder and Silverman's learning styles and which is combined with the selection of the appropriate teaching strategy and the appropriate electronic media. Students are able to learn and to efficiently improve their learning process with such a method
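
A minimal sketch of the style-to-media selection step is given below; the rules are toy examples of the kind of mapping described, not the taxonomy proposed in the paper:

```python
# Illustrative mapping (our own toy rules) from Felder-Silverman poles to
# teaching strategies and electronic media.
MEDIA_RULES = {
    "visual":     ["diagrams", "videos", "concept maps"],
    "verbal":     ["podcasts", "written summaries", "discussion forums"],
    "active":     ["simulations", "group exercises"],
    "reflective": ["guided reading", "journals"],
    "sensing":    ["worked examples", "lab practice"],
    "intuitive":  ["open problems", "theory notes"],
    "sequential": ["step-by-step tutorials"],
    "global":     ["overview lectures", "case studies"],
}

def select_media(profile):
    """profile: dict mapping each Felder-Silverman dimension to its pole,
    e.g. {'input': 'visual', ...}; returns the suggested media list."""
    media = []
    for pole in profile.values():
        media.extend(MEDIA_RULES.get(pole, []))
    return media

student = {"processing": "active", "perception": "sensing",
           "input": "visual", "understanding": "sequential"}
print(select_media(student))
```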

Abstract:

In distance learning, the intervention of an adviser is essential for coaching the students. In this modality there are no space and time restrictions: the students have control over when and how they carry out their lessons, and the adviser is responsible for responding to all their questions. Often the advisers are unable to answer immediately because of the amount and diversity of questions posed by students. At all times, the students need support in order to continue learning when problems arise; if they don't get answers right away, they cannot go on. This is why the adviser can benefit from a software tool that stores his experience and knowledge for reuse and quickly generates solutions to students' problems. In this paper, we describe an information system (MC) design that uses ideas from Case-Based Reasoning (CBR) in order to reply flexibly, efficiently and immediately to the questions students encounter
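
The retrieve-and-reuse step of such a CBR tool can be sketched with a nearest-neighbour lookup over stored question-solution cases; the case base and similarity choice below are illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy case base: (student question, adviser's stored solution). In a CBR
# system, previously solved cases are stored for later reuse.
cases = [
    ("how do I submit the first assignment", "Use the upload form in unit 1."),
    ("what is the deadline for the essay",   "The essay is due on Friday."),
    ("I cannot open the lecture video",      "Update your player or use the web viewer."),
]

vectorizer = TfidfVectorizer()
case_matrix = vectorizer.fit_transform(q for q, _ in cases)

def retrieve(question):
    """Return the solution of the most similar stored case."""
    sims = cosine_similarity(vectorizer.transform([question]), case_matrix)[0]
    return cases[sims.argmax()][1]

print(retrieve("when is the essay due?"))   # reuses the deadline case
```

A full CBR cycle would also revise the retrieved solution and retain the new case, which this sketch omits.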

Abstract:

Recent research on the learning process has shown that students tend to learn in different ways and that they prefer to use different teaching resources as well. Many researchers agree on the fact that learning materials shouldn't just reflect the teacher's style, but should be designed for all kinds of students and all kinds of learning styles [8]. Even though they agree on the importance of applying these learning styles to different learning systems, various problems still need to be solved, such as matching teaching contents with the student's learning style. In this paper, we describe the design of a personalized teaching environment that is based on an adaptive taxonomy using Felder and Silverman's learning styles and which is combined with the selection of the appropriate teaching strategy and the appropriate electronic media. Students are able to learn and to efficiently improve their learning process with such a method

Abstract:

Nowadays, new educational scenarios are emerging along with technological breakthroughs in Information Technologies (IT), which allow us to modify traditional teaching methods. Because of this, we ought to think about satisfying the growing educational needs using new didactic resources, new tools which will make teaching-learning environments more flexible, adding electronic media provided by communication networks and by informatics. Regarding learning, we find that not everyone learns the same way. Each person has a particular set of learning abilities, so we can identify the preferences that constitute his or her learning style. Knowing learning styles helps both teachers and researchers: better teaching-learning strategies can be elaborated to assimilate new information and knowledge in an effective and more efficient way. In the following research, the challenge is to use the vast resources offered by informatics to create a suitable environment for the development of individuals with different skills, for example, promoting intellectual growth and the expansion of abilities, based on the correct use of electronic media and teaching-learning methods when learning a new subject. In this work, a computer program for instructional aid is provided, in which two fields that until now have been only partially integrated are incorporated into one educational environment: computer science and educational psychology (although both of them have been previously used in education)

Abstract:

Fraser compared the 101 ABET-accredited industrial engineering programs by location, size, and other descriptors, as well as by the inclusion of different courses in the curricula. Except for two programs in Puerto Rico, all these programs are in the United States. In this paper, we extend that comparison to include industrial engineering programs in other countries in order to find ideas that US programs (and programs in other countries that use the US model) should consider for adoption from IE programs outside the US. We found differences in the total number of credit hours and in the number of years required for the IE degree, in the amount of general education included in the degree, and in the strength of ties to industry. We noted trends toward standardization of degrees in certain countries and regions and toward international links among programs. We make two recommendations related to partners: IE programs should seek partnerships with mechanical engineering and with business programs, and IE programs should seek partners among universities in other countries

Abstract:

The COVID-19 disease constitutes a global health contingency. This disease has left millions of people infected, and its spread has increased dramatically. This study proposes a new method based on a Convolutional Neural Network (CNN) and a temporal Component Transformation (CT), called CNN-CT. This method is applied to confirmed cases of COVID-19 in the United States, Mexico, Brazil, and Colombia. The CT changes daily predictions and observations to weekly components and vice versa. In addition, CNN-CT adjusts the predictions made by the CNN using AutoRegressive Integrated Moving Average (ARIMA) and Exponential Smoothing (ES) methods. This combination of strategies provides better predictions than most of the individual methods by themselves. In this paper, we present the mathematical formulation for this strategy. Our experiments encompass the fine-tuning of the parameters of the algorithms. We compared the best hybrid methods obtained with CNN-CT versus the individual CNN, Long Short-Term Memory (LSTM), ARIMA, and ES methods. Our results show that our hybrid method surpasses the performance of LSTM, and that it consistently achieves competitive results in terms of the MAPE metric, as opposed to the individual CNN and ARIMA methods, whose performance varies largely across scenarios
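
The component transformation between daily and weekly series can be illustrated as follows; this sketch uses ARIMA on synthetic data and a last-week profile for disaggregation, and is not the paper's CNN-CT pipeline:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)

# Synthetic daily confirmed cases with a weekday pattern (stand-in data).
days = pd.date_range("2021-01-01", periods=20 * 7, freq="D")
daily = pd.Series(
    1000 + 30 * np.arange(len(days)) + 200 * (days.dayofweek < 5)
    + rng.normal(0, 50, len(days)), index=days)

# Component Transformation, step 1: daily -> weekly totals.
weekly = daily.resample("W").sum()

# Forecast one week ahead on the smoother weekly series (ARIMA here; the
# paper combines a CNN with ARIMA/ES adjustments).
fit = ARIMA(weekly, order=(1, 1, 1)).fit()
next_week_total = float(fit.forecast(1).iloc[0])

# Step 2: weekly -> daily, disaggregating with the last observed
# within-week profile so the weekday pattern is preserved.
profile = daily.iloc[-7:].to_numpy()
daily_forecast = next_week_total * profile / profile.sum()
print(np.round(daily_forecast, 0))
```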

Abstract:

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field circumstances of complex lighting conditions and non-ideal crop maintenance practices defined by local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, as well as the aerial image data set and a hand-made ground truth segmentation with pixel precision, to facilitate the comparison among different algorithms
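
For readers unfamiliar with the encoder-decoder pattern mentioned above, the following is a minimal PyTorch sketch of per-pixel crop/non-crop classification from raw RGB input; it is not the authors' released network, and the layer sizes are illustrative only:

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    # Downsample to a compact representation, then upsample back to a
    # per-pixel crop/non-crop logit map (toy-sized for illustration).
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.Conv2d(8, 1, 1))

    def forward(self, x):                         # x: (B, 3, H, W) RGB image
        return self.decoder(self.encoder(x))      # (B, 1, H, W) logits

model = TinyEncoderDecoder()
logits = model(torch.randn(1, 3, 64, 64))
labels = torch.randint(0, 2, (1, 1, 64, 64)).float()
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
```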

Abstract:

The automatic classification of plants with nutrient deficiencies or excesses is essential in precision agriculture. In particular, being able to perform early detection of nutrient concentrations would increase crop yields and make appropriate use of fertilizers. RGB cameras represent a low-cost alternative sensor for plant monitoring, but this task is complicated when it is purely visual and has limited samples. In this paper, we analyze the Curriculum by Smoothing technique with a small dataset of RGB images (144 images per class) to classify nitrogen concentrations in greenhouse basil plants. This Deep Learning method changes the texture found in the images during training by convolving each feature map (the output of a convolutional layer) of a Convolutional Neural Network with a Gaussian kernel whose width decreases as training progresses. We observed that this controlled information extraction allows a state-of-the-art deep neural network to perform well using little training data containing a high variance between items of the same class. As a result, Curriculum by Smoothing provides an average accuracy 7% higher than the traditional transfer learning method for the classification of the nitrogen concentration level of greenhouse basil 'Nufar' plants with little data
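
A small sketch of the feature-map smoothing step, assuming a depthwise Gaussian blur whose standard deviation is annealed across epochs; the kernel size and schedule below are invented for illustration:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    # Separable 2D Gaussian kernel, normalized to sum to one.
    ax = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    # Blur each channel independently (depthwise convolution).
    c = fmap.shape[1]
    k = gaussian_kernel(sigma).to(fmap).expand(c, 1, -1, -1).contiguous()
    return F.conv2d(fmap, k, padding=2, groups=c)

fmap = torch.randn(1, 8, 16, 16)          # toy feature map from a conv layer
for epoch in range(10):
    sigma = max(1.0 - 0.1 * epoch, 0.1)   # anneal: heavy smoothing early
    smoothed = smooth_feature_map(fmap, sigma)  # would feed the next layer
```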

Abstract:

The positively curved three-body problem is a natural extension of the planar Newtonian three-body problem to the sphere. In this paper we study the extensions of the Euler and Lagrange relative equilibria (RE for short) on the plane to the sphere. The RE on the sphere are not isolated in general; they usually have one-dimensional continuations in the three-dimensional shape space. We show that there are two types of bifurcations. One is the bifurcation between Lagrange RE and Euler RE. The other is between the different types of shapes of Lagrange RE. We prove that bifurcations between equilateral and isosceles Lagrange RE exist for the case of equal masses, and that bifurcations between isosceles and scalene Lagrange RE exist for the partially equal masses case

Abstract:

In the planar three-body problem under the Newtonian potential, it is well known that any masses located at the vertices of an equilateral triangle generate a relative equilibrium, known as the Lagrange relative equilibrium. In fact, the equilateral triangle is the unique mass-independent shape for a relative equilibrium in this problem. The two-dimensional positively curved three-body problem is a natural extension of the Newtonian three-body problem to the sphere S2, where the masses are moving under the influence of the cotangent potential. Zhu showed that in this problem an equilateral triangle on a rotating meridian can form a relative equilibrium for any masses. This was the first report of a mass-independent shape on S2 which can form a relative equilibrium. In this paper, we show that, in addition to the equilateral triangle, there exists one isosceles triangle on a rotating meridian, with two equal angles seen from the center of S2 given by 2^(-1) arccos((√2-1)/2), which always forms a relative equilibrium for any choice of the masses. With this shape, there are three different mass distributions, one for each mass placed at the vertex of the triangle with a different angle. Additionally, we prove that the equilateral and the above isosceles relative equilibria are the only ones with this characteristic. We also prove that each relative equilibrium generated by a mass-independent shape is not isolated from the other relative equilibria

Abstract:

We study relative equilibria (RE) for the three-body problem on S2, under the influence of a general potential which only depends on cos σij, where σij are the mutual angles among the masses. Explicit conditions for masses mk and cos σij to form a relative equilibrium are shown. Using these conditions, we study the equal masses case under the cotangent potential. We show the existence of scalene, isosceles, and equilateral Euler RE, and of isosceles and equilateral Lagrange RE. We also show that equilateral Euler RE on a rotating meridian exist for the general potential Σi
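
For reference, the cotangent potential used in this abstract and the two preceding ones is the standard curved-space analogue of the Newtonian potential; in the notation above it can be written as (a sketch, with q_i the position of mass m_i on the unit sphere):

```latex
U(q) = \sum_{1 \le i < j \le 3} m_i m_j \cot \sigma_{ij},
\qquad \cos \sigma_{ij} = q_i \cdot q_j, \quad q_i \in S^2
```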

Resumen:

La comprensión auditiva es fundamental en la socialización y en el desempeño académico y profesional; sin embargo, en México no se enseña de manera explícita en el aula, y su evaluación apenas inicia: los primeros indicadores de desempeño los ofreció el Examen de Habilidades Lingüísticas (EXHALING). A partir de los resultados de dicho examen se analiza la forma en que la comprensión auditiva podría desempeñar un papel protagónico en el aprovechamiento escolar y el fortalecimiento del español

Abstract:

Listening comprehension is fundamental in the processes of socialization, as well as in academic and professional development; nevertheless, in Mexico teachers do not explicitly teach it in the classroom, and its evaluation is just beginning: the Test for Linguistic Skills in Spanish (EXHALING, for its abbreviation in Spanish) offered its first performance indicators. Based on the results of EXHALING, this paper discusses how listening comprehension could play a leading role in academic achievement, as well as in the strengthening of Spanish language learning

Abstract:

Genetic algorithms (GAs) have control parameters such as the probability of bit mutation or the probability of crossover. These are normally given a priori by the user (programmer) of the algorithm. There exists a wide variety of values for the control parameters, and it is difficult to find the best choice of these values in order to optimize the behaviour of a particular GA. We introduce a self-adaptive GA (SAGA) whose control parameters are encoded in the genome of the individuals of the population. This algorithm is used to optimize a set of twenty functions from R^2 to R, and its behaviour is compared with that resulting from the execution of a traditional GA (TGA) with varying control parameter values. We obtain a set of measurements which demonstrate statistically that SAGA yields results that compare favourably with the mean results from an extensive set of runs of the TGA
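
A toy self-adaptive GA in the spirit described above, assuming the simplest encoding in which each individual carries its own mutation rate alongside its solution genes; the paper's exact representation and operators may differ:

```python
import random

def evolve(fitness, n_pop=50, n_gen=100, dim=2):
    # Each individual is (genes, mutation_rate); the rate evolves too.
    pop = [([random.uniform(-5, 5) for _ in range(dim)],
            random.uniform(0.01, 0.5)) for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=lambda ind: fitness(ind[0]))   # minimization
        parents = pop[:n_pop // 2]                  # truncation selection
        children = []
        for genes, rate in parents:
            # Mutate the mutation rate itself, then the genes with it.
            new_rate = min(max(rate * random.lognormvariate(0, 0.2), 1e-3), 1.0)
            new_genes = [g + random.gauss(0, 1) if random.random() < new_rate
                         else g for g in genes]
            children.append((new_genes, new_rate))
        pop = parents + children
    return min(pop, key=lambda ind: fitness(ind[0]))

best_genes, best_rate = evolve(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2)
```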

Resumen:

El presente texto es un análisis crítico literario de un cuento de Ignacio Padilla. Se indaga la relación entre la literatura y la filosofía, especialmente la ética. El cuento invita a reflexionar acerca de los cambios civilizatorios obligados sobre ciertos seres humanos y el resultado de deshumanización que acontece cuando no se toman en cuenta las diferencias, que pueden ser educativas, culturales o físicas

Abstract:

This article is a literary critical analysis of a short story by Ignacio Padilla. We investigate the relationship between literature and philosophy, particularly ethics. The story invites us to reflect on the civilizing changes imposed on certain human beings and the dehumanization that results when their differences, whether educational, cultural or physical, are not taken into account

Abstract:

We study three classes of diversity relations over menus arising from an unobserved categorization of alternatives in the form of a partition. A basic diversity relation declares a menu to be more diverse than another if and only if for every alternative in the latter there is an alternative in the former which belongs to the same category. An extension of a basic diversity relation preserves its weak and strict parts and possibly makes additional diversity judgements between hitherto incomparable menus. A cardinality-based extension is an extension which ranks menus on the basis of the number of categories that exist in each menu. We characterize each class axiomatically. Two axioms satisfied by each of the three classes are Monotonicity, which says that larger menus are at least as diverse, and No Complements, which rules out certain complementarities between alternatives in generating diversity

Resumen:

El autor describe el beligerante movimiento laboral que surgió en la fábrica textil La Magdalena hacia finales del Porfiriato y que continuó durante el primer tercio del siglo xx; muestra el efecto que tuvo sobre su capacidad productiva y cómo los “avances” institucionales en materia laboral, entre el gobierno, empresarios y obreros, no ofrecieron incentivos para la transformación tecnológica de la fábrica, que requirió de mayores tarifas para operar. Esta situación, en conjunto con la entrada de las fibras sintéticas como sustitutos del algodón, obligaron al cierre inminente de más de una fábrica textil porfiriana hacia la segunda mitad del siglo pasado

Abstract:

The author describes the belligerent labor movement at La Magdalena textile factory from the end of the Porfiriato years until the first third of the twentieth century. He demonstrates its effect on production capacity and how institutional advancements in labor matters involving the government, entrepreneurs, and workers offered no incentives for the technological transformation of the factory and led to higher operational costs. This situation, along with the emergence of synthetic fibers as substitutes for cotton, led to the imminent closure of more than one Porfirian textile factory toward the second half of the past century

Resumen:

Análisis sobre las conclusiones de algunos trabajos que abordan el tema de la persistencia de la élite terrateniente e industrial en México en la primera mitad del siglo xx. Se presenta el efecto que tuvieron sucesos como la Revolución mexicana y los programas de reforma agraria, así como la inestabilidad política y la incertidumbre institucional en la persistencia o debilitamiento de las élites económicas consolidadas durante el Porfiriato

Abstract:

In this article, we analyze the findings of research concerning the persistence of the industrial and landowning elites in Mexico in the first half of the twentieth century. We evaluate the effect that events such as the Mexican Revolution and the agrarian reform programs, as well as political instability and institutional uncertainty, had on the persistence or weakening of the economic elites consolidated during the Porfiriato

Resumen:

La parroquia de San José, en la ciudad de Toluca, cuenta con registros de bautizos a partir de 1642, de matrimonios desde 1666 y de defunciones desde 1648. La información está dividida por castas. El objetivo de este trabajo es realizar un análisis de algunas de las características demográficas de la población de la Villa de Toluca, en especial, mortalidad y fecundidad para el periodo 1684-1760. Aunque se enfrentaron diferentes problemas debidos a la calidad o carencia de información a lo largo del periodo, se pudieron obtener estimadores, como la tasa bruta de mortalidad, la tasa de mortalidad infantil y la tasa bruta de natalidad, algunas por casta, por lo que pueden compararse, sobre todo para las épocas de epidemias

Abstract:

The parish archives of the Village of Toluca, 1684-1760. The parish of Sagrario or San José in the city of Toluca holds records of baptisms as of 1642, marriages as of 1666 and deaths as of 1648. The information is divided on the basis of castes. The objective of this work is to carry out an analysis of some of the demographic characteristics of the population of the Village of Toluca, especially mortality and fertility for the period from 1684 to 1760. Even though several problems were faced due to the quality or lack of information along the period, estimators were obtained, namely: crude death rate, infant mortality rate and crude birth rate, some per caste, so they can be compared, mainly for the times of epidemics

Abstract:

The decades-long Colombian civil war nearly came to an official end with the 2016 Peace Plebiscite, which was ultimately defeated in a narrow vote. This conflict has deeply divided Colombian civil society, and non-political public figures have played a crucial role in structuring debate on the topic. To understand the mechanisms underlying the influence of members of civil society on political discussion, we performed a randomized experiment on Colombian Twitter users shortly before this election. Sampling from a pool of subjects who had been frequently tweeting about the Plebiscite, we tweeted messages that encouraged subjects to consider different aspects of the decision. We varied the identity (a general, a scientist, and a priest) of the accounts we used and the content of the messages we sent. We found little evidence that any of our interventions were successful in persuading subjects to change their attitudes. However, we show that our pro-Peace messages encouraged liberal Colombians to engage in significantly more public deliberation on the subject

Abstract:

This paper proposes an alternative to the methodology that the Ministry of Finance and Public Credit (SHCP) applies to estimate the annual Mexican Crude Oil Mix Export Price (MXM), a crucial element of the General Economic Policy Criteria in the Economic Package. We first identify the relation between the MXM and the West Texas Intermediate (WTI), computing the tail conditional dependence between both series. Subsequently, we use a market risk analysis approach that considers several methodologies to estimate the value at risk (VaR), including an ARIMA-TGARCH model for the innovations of the MXM's price, to forecast its behavior using daily data from January 3, 1996, to December 30, 2021. Once we identify the VaR and the ARIMA-TGARCH components, we design an alternative method to estimate the annual average MXM price
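
As one concrete reading of the ARIMA-TGARCH ingredient, here is a sketch using the Python arch package with an AR mean and a GJR (threshold) GARCH volatility on synthetic returns; the paper's exact specification, data and VaR formula are not reproduced here:

```python
import numpy as np
from arch import arch_model  # pip install arch

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)   # stand-in for daily MXM returns

# AR(1) mean with GJR-GARCH(1,1,1) volatility, a common threshold-GARCH
# variant, and Student-t innovations.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH",
                   p=1, o=1, q=1, dist="t")
res = model.fit(disp="off")

# One-day-ahead 5% Value at Risk from the fitted conditional distribution.
fcast = res.forecast(horizon=1)
mu = fcast.mean.values[-1, 0]
sigma = np.sqrt(fcast.variance.values[-1, 0])
q05 = float(model.distribution.ppf(0.05, [res.params["nu"]]))
print(f"1-day 5% VaR: {-(mu + sigma * q05):.3f}")
```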

Resumen:

Tras hacer un repaso por la presencia del tema de la prueba en los trabajos académicos de Manuel Atienza, se presentan cinco ideas para abordar el estudio de la prueba desde un enfoque argumentativo del Derecho: 1. La prueba en el Derecho no se reduce a las reglas sobre la prueba. 2. El estudio de la prueba tiene un carácter multidisciplinar y representa un punto de encuentro en el que convergen distintas disciplinas. 3. El estudio de la prueba no puede prescindir de su dimensión jurídica e institucional. 4. La prueba en el Derecho tiene una dimensión valorativa. 5. El estudio de la prueba puede abordarse desde un enfoque argumentativo que integre diversas aproximaciones (conceptual, empírica, historiográfica y metodológica)

Abstract:

Manuel Atienza has acknowledged the importance of reasoning about facts throughout his academic work. In this paper I present five ideas about the grounds of an argumentative approach to evidence. 1. The subject of evidence is not co-extensive with the rules of evidence. 2. Evidence is a multidisciplinary field in which several disciplines find common ground. 3. The study of evidence should take account of its institutional dimension. 4. The subject of evidence is connected with a series of legal, procedural and substantive values. 5. An argumentative approach to evidence could integrate several approaches (analytical, empirical, historiographic and methodological) in a coherent framework

Resumen:

Este trabajo tiene como objetivo analizar la concepción racional de la prueba de Jordi Ferrer y su propuesta para formular estándares de prueba precisos y objetivos. En primer lugar se plantea que la presentación que hace este autor de la concepción racional de la prueba resulta problemática en tres aspectos: los antecedentes históricos de los que parte, las notas características de dicha concepción y la contraposición entre una concepción racional centrada en pruebas y una concepción persuasiva centrada en las creencias del juez. En segundo lugar se argumenta que la búsqueda de estándares de prueba precisos y objetivos debe abandonarse de raíz, no sólo porque resulta inviable, sino porque es contraria a la naturaleza probabilística del razonamiento probatorio. Finalmente, se plantea que el problema de la suficiencia probatoria requiere articular varios componentes: (i) el estándar de prueba aplicable, con todo y los problemas de imprecisión y de subjetividad que contiene, (ii) las pruebas existentes en un caso concreto, con un trabajo riguroso de análisis de los hechos y de valoración de las pruebas y (iii) la convicción del órgano jurisdiccional. Prueba y convicción son dos componentes que deben armonizarse

Abstract:

This paper examines the rationalist conception of evidence advocated by Jordi Ferrer and its proposal to formulate precise and objective standards of proof. First, three concerns are raised about the characterization of the rationalist conception: i) its historical background, ii) its defining features, and iii) the contrast between a rationalist conception that focuses exclusively on evidence and a persuasive conception that focuses on the beliefs of the trier of facts. Second, it is argued that the search for an objective and precise standard of proof should be abandoned, both because it is futile and because it contradicts the probabilistic nature of evidential reasoning. Finally, it is suggested that an adequate theory of the sufficiency of evidence should be able to accommodate and explain (a) the current formulation of standards of proof notwithstanding the problems of subjectivity and imprecision, (b) a rigorous analysis of evidence that includes both an individual and an overall evaluation of evidence, and (c) the beliefs of the trier of facts. Evidence and persuasion are two components that should be harmonized

Resumen:

Este trabajo ofrece una aproximación inicial a la relación entre prueba y perspectiva de género. En primer lugar se plantea que al hablar de prueba con perspectiva de género habría que reconocer la vinculación de esta última con el feminismo y las perspectivas feministas sobre la prueba. Desvinculada de los movimientos feministas, la perspectiva de género corre el riesgo de convertirse en una visión desprovista del potencial reivindicativo y crítico que le dio origen. En segundo lugar, el alcance de la perspectiva de género en el ámbito probatorio comprende la prueba en general. Prácticamente todos los temas y problemas probatorios son susceptibles de examinarse con perspectiva de género. Finalmente, la tesis sobre la exigencia de corroboración de la declaración de la víctima requiere examinarse con perspectiva de género. Una regla de esa naturaleza opera en detrimento de las víctimas y refuerza el escepticismo estructural hacia su credibilidad

Abstract:

This paper offers a preliminary analysis of a gender perspective on evidence. It shows the connection of a gender perspective on evidence with feminism and feminist perspectives on evidence. It also shows that the scope of a gender perspective on evidence covers the entire field of evidence. It finally shows that the corroboration requirement could be examined from a gender perspective. The corroboration requirement is inimical to victims and strengthens a structural skepticism about their credibility

Abstract:

This paper explores two persistent questions in the legal literature on presumptions: the place and the nature of presumptions in law and legal argumentation. First, it shows that the thesis that presumptions belong to argumentation has been a common thread in the literature on this subject, from its foundations in the Middle Ages to modern times. The civilian jurists clearly saw that illumination for the treatment of this topic should be sought in rhetoric, not in the law, since presumptions belong to the provinces of rhetoric and argumentation. Second, the paper shows that the analysis of presumptions is problematic for at least two reasons. On the one hand, "presumption" is an ambiguous term in legal discourse. It is a word that has been used in many different senses and for a variety of purposes. Argumentation scholars who rely on legal models of presumption should be aware of the persistent ambiguity in the use of the word "presumption". On the other hand, there are at least four conceptions of presumptions. Each of these conceptions offers a distinctive approach to the question "What is a presumption?". The picture portrayed here may help to shed light on the possibilities and limits of an interdisciplinary dialogue about presumptions

Resumen:

En este trabajo se plantean algunos comentarios a "Los usos de los estándares de prueba" de Rodrigo Coloma. El análisis del término en el lenguaje ordinario y en el lenguaje jurídico puede arrojar luz sobre la noción de estándar de prueba. Sin embargo, resulta problemática la propuesta de entender el estándar de prueba en el derecho como un umbral de carácter cuantitativo o como un prototipo de carácter cualitativo que permite establecer relaciones de semejanza para considerar un hecho como probado. Por otra parte, es relevante la tarea de examinar los usos de los estándares de prueba, pero encuentro algunos problemas en la formulación de algunos de los usos que detecta Coloma. Finalmente, se plantea que la discusión sobre la posibilidad de formular un estándar de prueba realmente objetivo suele presentarse en términos dilemáticos: o bien el estándar de prueba es completamente objetivo o bien no es un estándar en absoluto. Frente a esta postura se destaca la tesis de Susan Haack en el sentido de que el estándar de prueba es en parte psicológico y en parte epistemológico

Abstract:

This paper examines Rodrigo Coloma’s "The uses of the standards of proof". The analysis of the word in both ordinary and legal language may shed light on the concept of standard of proof. However, Coloma's distinction between a quantitative threshold or a qualitative prototype is problematic. I also find problematic some of the uses of the standards of proof formulated by Coloma. Legal scholars traditionally argue that a subjective standard is not a standard at all. Against this view the paper stresses Susan Haack’s view that the legal standard of proof is in part psychological and in part epistemological

Abstract:

The problem of joint position regulation of a self-balancing robot moving on a slope via a PID passivity-based controller is addressed in the present paper. It is assumed that the angle of the slope is known and that the robot can move up or down. The contributions are the design and practical implementation, for comparison purposes, of two PID passivity-based control laws for position regulation of a self-balancing robot, together with the corresponding asymptotic stability analyses. Experimental results illustrate the performance of the proposed controllers on a self-balancing robot; they were evaluated together with a different passivity-based controller reported in the control literature and a linear control law to test their relative merits. Finally, the experiments were extended to deal with disturbance rejection, where one of the PID passivity-based control laws, the one that does not use partial feedback linearization, proved better than the other three controllers used for comparison
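
For orientation, a generic PID passivity-based control law has the structure below, where y is a passive output of the robot dynamics and the gains are symmetric positive definite; this is a schematic form only, not the paper's two specific laws:

```latex
u = -K_P\, y - K_I \int_0^{t} y(s)\, ds - K_D\, \dot{y},
\qquad K_P = K_P^{\top} > 0, \quad K_I = K_I^{\top} > 0, \quad K_D = K_D^{\top} > 0
```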

Abstract:

Market reputation is often perceived as a cheaper alternative to product liability in the provision of safety incentives. We explore the interaction between legal and reputational sanctions using the idea that inducing safety through reputation requires implementing costly "market sanctioning" mechanisms. We show that law positively affects the functioning of market reputation by reducing its costs. We also show that reputation and product liability are not just substitutes but also complements. We analyze the effects of different legal policies, showing in particular that negligence reduces reputational costs more intensely than strict liability, and that court errors in determining liability interfere with the reduction of reputational costs through law. A more general result is that any variant of an ex post liability rule will improve the functioning of market reputation relative to reputation operating alone. We extend the basic analysis to endogenous prices and to observability by consumers of the outcomes of court decisions

Abstract:

The controller design for wind energy conversion systems (WECS) is complicated by the highly nonlinear properties of electric machines and power converters. Targeting a controller for WECS, this article adopts a passivity-based PI control (PI-PBC) method, for which stability can be analytically guaranteed. A comparative study between the proposed method and a standard PI controller is then provided. The wind energy system consists of a wind turbine, a Permanent Magnet Synchronous Generator (PMSG), a pulse width modulation (PWM) rectifier, a dc load and an equivalent distributed energy storage device, formed by a dc source with an internal resistor. The generator rotational velocity is regulated to track the maximum power point (MPPT) of the investigated wind turbine

Resumen:

La dinámica de Leibniz, como programa de una ciencia de la fuerza, incluía en sus inicios una metafísica de los cuerpos. Cuando vio la luz a finales de la década de 1670, Leibniz sostuvo que su ciencia de la fuerza requería la rehabilitación de las formas sustanciales. Pero al mismo tiempo, el interés de Leibniz por las unidades verdaderas en tanto que constituyentes últimas del mundo lo condujo a postular un mundo de sustancias corpóreas, entidades individualizadas en virtud de una forma sustancial. A principios de la década de 1680, parecía haber dos caminos convergentes hacia la misma metafísica, la rehabilitación de la forma sustancial. Con los años, estos dos caminos metafísicos evolucionaron. A mediados de la década de 1690, la metafísica dinámica añadió la materia prima, entendida como forma pasiva, a la forma sustancial, entendida como fuerza activa. Al mismo tiempo, las unidades que fundan el mundo evolucionaron de sustancias corpóreas a mónadas, sustancias inextensas, similares a la mente (esprit) y constituyentes últimos de las cosas. Argumento que cuando esto ocurrió, ya no estaba claro que estas dos representaciones metafísicas siguieran siendo coherentes: la metafísica dinámica, basada en la fuerza, y la metafísica de la unidad, ahora entendida en términos de mónadas, parecían cada vez más incompatibles

Abstract:

From the beginning of Leibniz’s dynamics, his program for a science of force, there was a metaphysics of body. When the program first emerged in the late 1670s, Leibniz argued that his science of force entailed the reestablishment of substantial form. But in the same period, Leibniz’s interest in genuine unities as the ultimate constituents of the world led him to posit a world of corporeal substances, bodies made one by virtue of a substantial form. In this way, by the early 1680s, there seemed to be two convergent paths to a single metaphysics: the revival of substantial form. Over the years, both of these metaphysics evolved. By the mid 1690s, to substantial form, understood as active force, the dynamical metaphysics added materia prima, understood as passive force. Meanwhile unities that ground the world evolved from corporeal substances to monads, now considered non-extended, mindlike, and the ultimate constituents of things. When this happened, I argue, it was no longer obvious that these two metaphysical pictures were still consistent with one another: the dynamical metaphysics, grounded in force, and the metaphysics of unity, now understood in terms of monads, seemed increasingly to be in tension with one another

Resumen:

La electromovilidad es una pieza clave en un rompecabezas mucho mayor. En él, se incluye la automatización, la electrificación, la descarbonización y, en términos más amplios, la transformación hacia un sistema energético descentralizado. Más amplio y sustentable. En este contexto, el autor, tras desarrollar los componentes constitutivos de la electromovilidad, realiza un estudio comparado, sobre mercados de vehículos ligeros en algunos países relevantes. A partir de este estudio, formula una serie de recomendaciones de política pública

Abstract:

Electromobility is a key piece in a much larger puzzle, one that includes automation, electrification, decarbonization and, in a broader scope, the transformation towards a decentralized energy system, cleaner and more sustainable. In this context, after reviewing the constitutive elements of electromobility, the author carries out a comparative analysis of light-vehicle markets in several relevant countries. Based on this study, the chapter articulates a list of public policy recommendations

Resumen:

En este documento se presenta una perspectiva a futuro de la energía eólica en el contexto de la transición de una economía basada en el uso de combustibles fósiles a una anhelada economía libre de carbono. En la primera sección se desarrolla la idea de la energía como elemento disruptivo que ha cambiado los paradigmas del ser humano, llegando a los nuevos paradigmas, entre los que destaca de manera relevante la posibilidad de utilizar masivamente la energía renovable. En una segunda sección se discute cómo se prevé que sea la evolución de la transición energética, dependiendo de las políticas para acelerarla o mantener el ritmo actual, que es inferior al necesario para lograr la descarbonización. En una tercera sección se presenta el tema de la variabilidad de la energía renovable (cuando no hay viento o el sol se pone) y los mecanismos para mitigar sus efectos, en particular contar con un mercado de capacidad, una matriz eléctrica diversificada y baterías. En la cuarta sección se desarrolla la contribución que ha tenido y tendrá la energía eólica en el futuro de la energía, tanto a nivel internacional como en México. Finalmente se presenta una sexta sección de conclusiones

Abstract:

This paper presents a future perspective of wind energy in the context of the transition from an economy based on the use of fossil fuels to a desired carbon-free economy. The first section develops the idea of energy as a disruptive element that has changed the paradigms of mankind, reaching the new paradigms, highlighting in a significant way the possibility of large-scale use of renewable energy. The second section discusses how the energy transition is expected to evolve, depending on the policies adopted to accelerate it or maintain the current pace, which is slower than necessary to achieve decarbonization. The third section presents the issue of variability in renewable energy (when there is no wind or the sun goes down) and the mechanisms to mitigate its effects, in particular having a capacity market, a diversified electricity matrix and batteries. The fourth section develops the contribution that wind energy has had and will have in the future of energy, both internationally and in Mexico. Finally, a sixth section of conclusions is presented

Resumen:

Este trabajo examina los determinantes internos que influyen en la estructura de capital de las pequeñas y medianas empresas familiares en México (Pymef). Se sugiere que el tamaño y la antigüedad de la empresa, la formalidad en la planeación administrativa y estratégica, la actitud hacia el control familiar y la edad del director influyen en las decisiones sobre el tipo de fuentes de financiamiento por utilizar. A través de un análisis de trayectorias se validan estadísticamente varias hipótesis tomando como base una muestra de 240 Pymef mexicanas. Los resultados indican relaciones significativas entre tamaño y deuda, así como entre edad del director, capital social y utilidades acumuladas. Asimismo, aplicando el análisis por sector de actividad y por tamaño, se encontró evidencia empírica sobre las influencias entre el tamaño de la empresa y el capital social, así como entre la formalidad de la planeación administrativa y estratégica y la deuda. Los resultados sugieren líneas de investigación sobre las Pymef mexicanas

Abstract:

This paper examines the determinants of capital structure in family-owned small and medium-sized enterprises in Mexico. We suggest that size, age, managerial planning formality, attitude toward family control and the age of the CEO or owner influence financing decisions. The study's hypotheses are tested with path analysis using survey data collected from 240 Mexican SMEs. We found that the size of the firm has a positive relationship with debt, and that the age of the SME's owner has an influence on equity and retained earnings funding. By splitting the data by sector and size, we also found two additional relationships: between size and equity, and between managerial planning formality and debt. Our findings suggest directions for further research on Mexican family-owned SMEs

Abstract:

We show that if L_r(X), 1 < r < infinity, has an asymptotically uniformly convex renorming of power type, then X admits a uniformly convex norm of power type

Abstract:

Drawing on psychological theory, we created a new approach to classify negative sentiment tweets and presented a subset of unclassified tweets to humans for categorization. With these results, a tweet classification distribution was built to visualize how the tweets fit into different categories. The approach developed through visualization and classification of data could be an important basis for measuring the efficiency of a machine classifier grounded in psychological diagnostic criteria (Thelwall et al. in J Assoc Inf Sci Technol 62(4):406-418, 2011). Nonetheless, since it still needs to be validated through therapy with an expert, the proposed system is intended to identify red flags in at-risk populations for further intervention

Abstract:

New technologies are opening novel ways to help people in their decision-making while shopping. From crowd-generated customer reviews to geo-based recommendations, the information used to make the decision can come from different social circles with varied degrees of expertise and knowledge. Such differences affect how much influence the information has on shopping decisions. In this work, we aim to identify how social influence, when it is mediated by modern and ubiquitous communication (such as that provided by smartphones), can affect people's shopping experience and especially their emotions while shopping. Our results showed that a large amount of information affects customers' emotional state, which can be measured in their physiological response. Based on our results, we conclude that integrating smartphone technologies with biometric sensors can create new models of customer experience based on the emotional effects of social influence while shopping

Abstract:

Stress is becoming a major problem in our society and most people do not know how to cope with it. We propose a novel approach to quantify the stress level using psychophysiological measures. Using an automatic stress detector that can be implemented in a mobile application, we created an alternative that automatically detects stress from the sensors of a wearable device and, once stress is detected, calculates a score for each record. By identifying stress during the day and assigning it a numeric value derived from biological signals, a visualization can be produced to facilitate users' analysis of the information
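
As a toy illustration of turning wearable signals into a stress score, the following assumes heart-rate variability (RMSSD over RR intervals) as the biosignal; the paper's actual detector and scoring formula are not specified here, and this mapping is invented for illustration:

```python
import numpy as np

def stress_score(rr_intervals_ms):
    # RMSSD is a classic heart-rate-variability statistic; lower HRV is
    # crudely mapped here to a higher 0-100 stress value.
    diffs = np.diff(rr_intervals_ms)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    return float(np.clip(100 - rmssd, 0, 100))

print(stress_score(np.array([810.0, 790.0, 820.0, 780.0, 805.0])))
```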

Resumen:

En el artículo se destacan los puntos más relevantes de la regulación sobre reparación del daño en materia penal en México desde el Código Penal Federal vigente hasta el Código Nacional de Procedimientos Penales recientemente aprobado. De esta forma se demuestra que la regulación de la reparación del daño en materia penal en México ha sido históricamente deficiente e ineficiente. Igualmente, se enfatizan las consecuencias negativas de no haber entregado el ejercicio de la reparación del daño a la víctima del delito y haber convertido esta acción en una facultad del Ministerio Público. Por último, se destaca la oportunidad que el legislador perdió para realizar una acabada reglamentación de los aspectos procedimentales de la reparación del daño en materia penal en el Código Nacional de Procedimientos Penales, pues solo aparecen normas dispersas a lo largo del CNPP, muchas de las cuales se quedan solo en el nivel de loables principios teóricos

Abstract:

Sensor networks have experienced extraordinary growth in the last few years. From niche industrial and military applications, they are currently deployed in a wide range of settings, as sensors become smaller, cheaper and easier to use. Sensor networks are a key player in the so-called Internet of Things, generating exponentially increasing amounts of data. Nonetheless, there are very few documented works that tackle the challenges related to the collection, manipulation and exploitation of the data generated by these networks. This paper presents a proposal for integrating Big Data tools (at rest and in motion) for the gathering, storage and analysis of data generated by a sensor network that monitors air pollution levels in a city. The authors provide a proof of concept that combines Hadoop and Storm for data processing, storage and analysis, and Arduino-based kits for constructing their sensor prototypes

Abstract:

The rapid pace of technological advances in recent years has enabled a significant evolution and deployment of Wireless Sensor Networks (WSN). These networks are a key player in the so-called Internet of Things, generating exponentially increasing amounts of data. Nonetheless, there are very few documented works that tackle the challenges related to the collection, manipulation and exploitation of the data generated by these networks. This paper presents a proposal for integrating Big Data tools (at rest and in motion) for the gathering, storage and analysis of data generated by a WSN that monitors air pollution levels in a city. We provide a proof of concept that combines Hadoop and Storm for data processing, storage and analysis, and Arduino-based kits for constructing our sensor prototypes
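
A stdlib-only Python sketch of the two roles the proposal assigns to Big Data tools: streaming analytics over data in motion (Storm's role) and an append-only store for later batch analysis of data at rest (Hadoop's role); the sensor reading and alert threshold are invented stand-ins:

```python
import json
import statistics
import time
from collections import deque

def read_sensor(node_id):
    # Hypothetical pollution reading from an Arduino-based node.
    return {"node": node_id, "pm25": 12.0, "ts": time.time()}

window = deque(maxlen=60)                     # data in motion: sliding window
with open("readings.jsonl", "a") as archive:  # data at rest: append-only log
    for _ in range(60):
        reading = read_sensor("node-01")
        window.append(reading["pm25"])
        archive.write(json.dumps(reading) + "\n")  # crunched later in batch
        if statistics.mean(window) > 35:           # illustrative alert level
            print("air-quality alert")
```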

Resumen:

A lo largo del siglo XIX y XX, la relación entre la Iglesia y el Estado estuvo llena de enfrentamientos legales, políticos e incluso de carácter militar. Dichas disputas se caracterizaron por una lucha de poder, libertad y soberanía en el proceso de construcción del Estado mexicano. El patronato real y la tolerancia religiosa fueron temas claves en estas diatribas que adoptaron matices, definiciones y posturas distintas a lo largo del tiempo. El análisis de la visión de la jerarquía católica en este devenir resulta indispensable para entender la historia de dos instituciones que comparten territorio, sociedad y cultura y que todavía en nuestros días siguen modificándose para alcanzar la plena coexistencia

Abstract:

Throughout the 19th and 20th centuries, the relationship between the Church and the State was full of legal, political and even military confrontations. These disputes were characterized by a struggle over power, freedom and sovereignty in the process of building the Mexican State. Royal patronage and religious tolerance were key themes in these diatribes, which took on different nuances, definitions and positions over time. The analysis of the Catholic hierarchy's vision throughout this process is indispensable for understanding the history of two institutions that share territory, society and culture, and that even today continue to change in pursuit of full coexistence

Resumen:

La invasión de Francia a España en 1808 y la abdicación de la familia real a favor de Napoleón situó a la población novohispana en una disyuntiva: defender los derechos de Fernando VII, como postulaban los peninsulares, o defender el derecho del pueblo soberano de escoger sus autoridades, como afirmaban los criollos. La multiplicación de las conspiraciones criollas y la fuerza del poder y de las armas desembocaron en la lucha por la independencia. La presencia e intervención del clero a lo largo del proceso independentista, de 1810 a 1821, formó un pensamiento moderno en lo político, pero tradicional en su concepción social y cultural. La tradición católica, sembrada con la cruz y la espada, marcó a la sociedad mexicana con un carácter tradicional difícil de erradicar

Abstract:

The invasion of Spain by France in 1808, and the abdication of the Royal Family in favour of Napoleon, raised a dilemma for the people of New Spain: to defend the rights of Ferdinand VII, as the Spaniards postulated, or to defend the right of the sovereign people to choose their own authorities, as the Creoles affirmed. The proliferation of Creole plots and the force of power and arms resulted in the fight for Independence. The presence and intervention of the clergy throughout the Independence process, from 1810 to 1821, shaped a way of thinking that was modern in politics but traditional in its social and cultural conceptions. The Catholic tradition, spread by cross and sword, gave Mexican society a traditional character that is hard to eradicate

Resumo:

A invasão da Espanha pela França em 1808, e a abdicação da Família Real em favor de Napoleão, colocou o povo do México perante um dilema: defender os direitos de Fernando VII, como postulavam os espanhóis, ou defender o direito do povo soberano a escolher as suas autoridades, como afirmavam os crioulos. A proliferação de conspirações crioulas e a força do poder das armas resultaram numa luta pela independência. A presença e a intervenção do clero em todo o processo de Independência, de 1810 a 1821, formou um pensamento moderno em termos políticos, mas tradicional em termos de ideias sociais e culturais. A tradição católica, disseminada com a cruz e a espada, deu à sociedade mexicana um carácter tradicional de difícil erradicação

Abstract:

A contragenic function in a domain Omega ⊂ R^3 is a reduced-quaternion-valued (i.e., the last quaternionic coordinate is zero) harmonic function which is orthogonal in L^2(Omega) to all reduced-quaternion monogenic functions and their conjugates. Contragenicity is not a local property. For spheroidal domains of arbitrary eccentricity, we relate standard orthogonal bases of harmonic and contragenic functions for one spheroid to another via computational formulas. This permits us to show that there exist nontrivial contragenic functions common to the spheroids of all eccentricities

Abstract:

We construct bases of polynomials for the spaces of square-integrable harmonic functions that are orthogonal to the monogenic and antimonogenic R^3-valued functions defined in a prolate or oblate spheroid

Resumen:

En los últimos años, la divulgación de sostenibilidad en materia financiera está cada vez más ligada a la sigla "ASG", usada cuando las organizaciones miden, identifican y cuantifican sus impactos y prácticas en las tres esferas del desarrollo sostenible. Este artículo analiza temas sostenibles relacionados con liderazgo y gobernanza basados en los estándares del Sustainability Accounting Standards Board

Abstract:

Envisioning trajectories towards sustainability encompasses enacting significant changes in multiple spheres (i.e., infrastructure, policy, practices, behaviors). These changes unfold within the intricate landscapes of wicked problems, where diverse perspectives and potential solutions intersect and often clash. Advancing more equitable and sustainable trajectories demands recognition of and collaboration with diverse voices to uncover meaningful synergies among groups striving to catalyze substantial change. Projects of this nature necessitate the exploration of varied tools and methodologies to elicit, convey, and integrate ideas effectively. Creating spaces for reflexivity is essential for catalyzing more meaningful impact as individuals engage in discussions aimed at sharing and questioning the coherence of their projects while forging synergies, identifying common objectives, and planning long-term outcomes. We present the initial phase of an endeavor in which we developed a software that elicits causal networks based on mapping relations between projects' actions and outcomes. To illustrate our approach, we describe the results of using this software within collaborative workshops with groups spearheading projects initiated by a government entity in Mexico City. By adapting elements of the Theory of Change model, this software transcends the dominant linear project logic by guiding participants in designing causation networks that unveil how different projects can articulate to identify potential common elements and find new possibilities for coordination among initiatives. We discuss the potential of such software application as a dynamic tool to guide and promote reflection and coherence when crafting projects that aim to more meaningfully address sustainability problems
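
A minimal sketch of the causal-network idea, assuming project actions and outcomes become nodes of a directed graph whose shared downstream outcomes suggest synergies; the node names and the networkx representation are illustrative, not the actual software:

```python
import networkx as nx  # pip install networkx

g = nx.DiGraph()
# Hypothetical action -> outcome links elicited from two separate projects.
g.add_edge("install rain gardens", "less runoff")
g.add_edge("less runoff", "cleaner waterways")
g.add_edge("community workshops", "local stewardship")
g.add_edge("local stewardship", "cleaner waterways")

# Outcomes reached from more than one source hint at coordination points.
shared = [n for n in g if g.in_degree(n) > 1]
print("potential coordination points:", shared)
```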

Abstract:

We classify and analyze the stability of all relative equilibria for the two-body problem in the hyperbolic space of dimension 2 and we formulate our results in terms of the intrinsic Riemannian data of the problem

Abstract:

Differential diagnosis among different types of dementia, mainly between Alzheimer's disease (AD) and Vascular Dementia (VD), presents great difficulties due to the overlap among the symptoms and signs presented by patients suffering from these illnesses. A differential diagnosis of AD and VD can be obtained with 100% confidence through the analysis of brain tissue (i.e., a cerebral biopsy). This gold-standard test involves an invasive technique, and thus it is rarely applied. Despite these difficulties, an efficient differential diagnosis of AD and VD is essential, because the therapeutic treatment a patient needs differs depending on the illness he or she suffers. In this paper, we explore the use of artificial neural network technology to build an automaton to assist neurologists in the differential diagnosis of AD and VD. First, different networks are analyzed in order to identify minimum sets of clinical tests, from those normally applied, that still allow a differential diagnosis of AD and VD; second, an artificial neural network is developed, using backpropagation and data based on these minimum sets, to assist physicians during the differential diagnosis of AD and VD. Our results suggest that, by using our neural network, neurologists may improve their efficiency in reaching a correct differential diagnosis of AD and VD; additionally, some tests contribute little to the diagnosis, and under some combinations they actually make it more difficult
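
A small scikit-learn sketch of the kind of backpropagation-trained classifier described above, using synthetic stand-ins for the clinical-test features; the original data, feature sets and network topology are not reproduced:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))              # 8 hypothetical clinical tests
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # toy labels: 0 = AD, 1 = VD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)  # backprop-trained MLP
print("held-out accuracy:", clf.score(X_te, y_te))
```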

Abstract:

This paper analyzes the effect of offshore outsourcing on the export performance of firms, based on the theories of international business, the resource-based view of the firm and transaction cost theory. Outsourcing can reduce production costs and increase flexibility. It can also provide new resources and market knowledge. However, the impact of offshore outsourcing depends on the resources and capabilities of firms to manage a network of foreign suppliers and to absorb knowledge of foreign markets. Using a database of about 1,000 manufacturing companies in Mexico in 2011, we found that offshore outsourcing increases export performance. The effects are stronger in export markets from which the company also imports intermediate goods. The results also show that the size of the company, the organization of intra-firm imports and export experience positively moderate the effects of outsourcing

Resumen:

La posibilidad de reflexión sobre la historia radica en una detención poética de la narrativa de los acontecimientos, lo cual implica que pensar acerca de lo que somos a lo largo del tiempo requiere tanto a la historia y la filosofía como a la poesía. Para desarrollar el argumento, se recurre a la digresión de Theodor Adorno y Max Horkheimer, "Odiseo, o mito e Ilustración", contenida en su Dialéctica de la Ilustración, y a las Tesis sobre la filosofía de la historia de Walter Benjamin

Abstract:

The possibility of reflection on history lies in a poetic arrest of the narrative of events, which implies that thinking about who we are over time requires history and philosophy as well as poetry. To develop the argument, we draw on Theodor Adorno and Max Horkheimer's digression "Odysseus, or Myth and Enlightenment", contained in their Dialectic of Enlightenment, and on Walter Benjamin's Theses on the Philosophy of History

Resumen:

Se muestran algunas de las formas más significativas en que Michel Foucault aborda la relación entre verdad y poder. Se expone el papel que desempeña tal relación en algunos de sus análisis más representativos, como el de las experiencias de la locura y la sexualidad, y el de las técnicas de cuidado de sí en la Antigüedad, particularmente en la práctica de la escritura de los estoicos y epicúreos, en la tragedia de Edipo y en el surgimiento de la filosofía

Abstract:

In this article, we present the most significant ways in which Michel Foucault approaches the relationship between truth and power. We discuss the role this relationship plays in some of his most representative analyses, namely the experiences of madness and sexuality, and the techniques of care of the self in Antiquity, particularly in the writing practices of the Stoics and Epicureans, in the tragedy of Oedipus and, finally, in the emergence of philosophy

Resumen:

En el presente texto se intenta mostrar la perspectiva de Theodor Adorno en relación con la producción cultural, desde tres aspectos fundamentales de su filosofía. Se partirá, en primera instancia, de su crítica a la “industria cultural”, la cual es caracterizada como un orden de producción en el que los criterios se basan en lo meramente comercial y en el que la cultura pierde cualquier tipo de autenticidad y legitimidad. Posteriormente, se analizarán las propuestas de opciones auténticas de cultura, tanto en la crítica cultural como en la creación artística

Abstract:

This work illustrates Theodor Adorno’s perspective on cultural production from three fundamental aspects of his philosophy. First, we delve into his critique of the culture industry, which he characterizes as an order of production whose criteria are based solely on commercial considerations and in which culture loses any authenticity and legitimacy. Then, his proposals for authentic options of culture, in cultural criticism as well as in artistic creation, are assessed

Resumen:

El psicoanálisis es, posiblemente, la teoría de la mente que desde su creación, ha tenido más influencia en el estudio de las manifestaciones culturales. Sus explicaciones dinámicas, a partir de un sistema fundamentado en los conceptos de energía psíquica y de representación, en relación a una comprensión tópica de la psique, permiten explicar las creaciones culturales en función de los diferentes conflictos afectivos que se puedan generar en contextos variables. Se pretende mostrar de manera general, las posibilidades de dicha teoría en lo concerniente a la crítica de arte, partiendo de los principales textos de Freud en materia de estética y de algunas de las críticas más significativas que se han hecho al respecto

Abstract:

Psychoanalysis is arguably the theory of mind that, since its creation, has had the most influence on the study of cultural manifestations. Its dynamic explanations, within a system based on the concepts of psychic energy and representation, in relation to a topical understanding of the psyche, allow us to explain cultural creations in light of the various emotional conflicts that may be produced in diverse contexts. In this article, we broadly present the possibilities of this theory in the field of art criticism, drawing on Freud’s main works on aesthetics and some of the most significant criticisms that have been made of them

Abstract:

This paper compares the relative performance of different organizational structures for the decision of accepting or rejecting a project of uncertain quality. When the principal is uninformed and relies on the advice of an informed and biased agent, cheap-talk communication is persuasive and it is equivalent to delegation of authority, provided that the agent's bias is small. When the principal has access to additional private information, cheap-talk communication dominates both (conditional) delegation and more democratic organizational arrangements such as voting with unanimous consensus

Abstract:

The present article intends to assess, in a systematic and critical way, what has been done in Portugal regarding so-called Regulatory Impact Assessment (RIA), while at the same time contributing to its development and maturity. A careful and in-depth analysis of the Portuguese institutional discourse on regulatory reform since its early days is provided, with special emphasis on the legal instruments deployed by the current Government. Our aim is to provide a sound and rigorous explanation for the inexistence of a “true” or “substantive” RIA system actually operating in Portugal. Reflecting on and borrowing from international experiences and practices, from OECD and EU documents and from Portuguese scholarship, we demonstrate that there is a huge gap between the reality of impact assessment and the political discourse about it in Portugal. We argue that the lack of comprehensive RIA stems inevitably from the shortcomings of the current Portuguese system. Indeed, the “formal” system has so far been incapable of providing a single example of an impact assessment study

Resumo:

O presente artigo procura avaliar, de maneira sistemática e crítica, o que tem sido feito em Portugal no que respeita ao denominado Regulatory Impact Assessment (RIA) e, ao mesmo tempo, contribuir para o seu desenvolvimento e maturidade. Apresenta-se uma análise cuidadosa e profunda do discurso institucional português sobre a reforma da legislação desde os seus primeiros dias, com especial ênfase para os instrumentos legais apresentados pelo XVII Governo Constitucional. O nosso objectivo é fornecer uma explicação profunda e rigorosa para a inexistência de um «verdadeiro» ou «substantivo» sistema de RIA a funcionar actualmente em Portugal. Reflectindo e aprendendo com experiências e práticas internacionais, através de documentos da OCDE e UE, e com a experiência derivada de uma bolsa de investigação portuguesa, demonstramos que há, em Portugal, uma enorme distância entre a realidade da avaliação de impacto e o discurso político. Sustentamos que a falta de um verdadeiro RIA deriva inevitavelmente do insucesso do actual sistema português. De facto, o sistema «formal» foi, até hoje, incapaz de nos apresentar um único exemplo de um estudo de avaliação de impacto

Abstract:

Sunspot equilibrium and lottery equilibrium are two stochastic solution concepts for nonstochastic economies. We compare these concepts in a class of completely finite, (possibly) nonconvex exchange economies with perfect markets, which requires extending the lottery model to the finite case. Every equilibrium allocation of our lottery model is also a sunspot equilibrium allocation. The converse is almost always true. There are exceptions, however: For some economies, there exist sunspot equilibrium allocations with no lottery equilibrium counterpart

Abstract:

In nonconvex environments, a sunspot equilibrium can sometimes be destroyed by the introduction of new extrinsic information. We provide a simple test for determining whether or not a particular equilibrium survives, or is robust to, all possible refinements of the state space. We use this test to provide a characterization of the set of robust sunspot-equilibrium allocations of a given economy; it is equivalent to the set of equilibrium allocations of the associated lottery economy. Journal of Economic Literature Classification Numbers: D51, D84, E32

Abstract:

We analyze sunspot-equilibrium prices in nonconvex economies with perfect markets and a continuous sunspot variable. Our primary result is that every sunspot equilibrium allocation can be supported by prices that, when adjusted for probabilities, are constant across states. This result extends to the case of a finite number of equally-probable states under a nonsatiation condition, but does not extend to general discrete state spaces. We use our primary result to establish the equivalence of the set of sunspot equilibrium allocations based on a continuous sunspot variable and the set of lottery equilibrium allocations. Journal of Economic Literature Classification Numbers: D51, D84, E32

Abstract:

I show equilibrium existence for a price-setting game among multi-product firms facing a nested logit/CES demand. As opposed to previous research, I allow arbitrary firm/nest overlap, making the result relevant for applied work. Additionally, under easy-to-verify conditions, I show that there exist extreme equilibria, which are the most and least preferred by consumers, and provide an algorithm to find them. This allows researchers to numerically verify equilibrium uniqueness in applications, that is, to check whether the extreme equilibria coincide. As a by-product, I show that inverting FOCs correctly identifies the marginal costs that rationalize observed prices
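
Since the abstract does not spell out the algorithm, the following is only a minimal sketch of the general idea for a plain (single-nest) logit pricing game with invented costs and price sensitivity: iterate best responses from price vectors below and above any plausible equilibrium and compare the two limits; if they coincide, the equilibrium is numerically unique.

    # Hedged sketch: best-response iteration from extreme starting prices in a
    # toy single-product logit game (all parameter values are assumptions).
    import numpy as np

    alpha = 2.0                      # price sensitivity (assumed)
    c = np.array([1.0, 1.2, 0.8])    # marginal costs (assumed)

    def shares(p):
        u = np.exp(-alpha * p)       # mean utilities; outside option has weight 1
        return u / (1.0 + u.sum())

    def best_response_step(p):
        s = shares(p)
        # Single-product logit FOC: p_i = c_i + 1 / (alpha * (1 - s_i))
        return c + 1.0 / (alpha * (1.0 - s))

    def iterate(p0, tol=1e-12, max_iter=10_000):
        p = p0.copy()
        for _ in range(max_iter):
            p_new = best_response_step(p)
            if np.max(np.abs(p_new - p)) < tol:
                return p_new
            p = p_new
        return p

    low = iterate(np.zeros_like(c))          # start below any equilibrium
    high = iterate(np.full_like(c, 100.0))   # start above any equilibrium
    print(low, high, np.allclose(low, high)) # equal limits => numerically unique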

Abstract:

We exploit a unique field experiment to recover the willingness to pay (WTP) for shorter waiting times at a cataract detection clinic in Mexico City, and compare the results with those obtained through a hypothetical dichotomous choice questionnaire. The WTP to avoid a minute of wait obtained from the field experiment ranges from 0.59 to 0.82 Mexican pesos (1 USD = 12.5 Mexican pesos at the time of the survey), while that from the hypothetical choice experiment ranges from 0.33 to 0.48 Mexican pesos. WTP to avoid the wait is lower for lower income individuals, and it is larger the more accurately the announced expected waiting time matches the true values. Finally, we find evidence that the marginal disutility of waiting is not constant

Abstract:

In a globalised world, inflation in a given country may be becoming less responsive to domestic economic activity, while being increasingly determined by international conditions. Consequently, understanding the international sources of vulnerability of domestic inflation is becoming fundamental for policy makers. In this paper, we propose the construction of Inflation-at-risk and Deflation-at-risk measures of vulnerability obtained using factor-augmented quantile regressions estimated with international factors extracted from a multi-level Dynamic Factor Model with overlapping blocks of inflation rates corresponding to economies grouped either in a given geographical region or according to their development level. The methodology is applied to monthly inflation observed from 1999 to 2022 for over 115 countries. We conclude that, in a large number of developed countries, international factors are relevant to explain the right tail of the distribution of inflation and, consequently, they are more relevant for the vulnerability related to high inflation than for average or low inflation. However, while inflation in developing low-income countries is hardly affected by international conditions, the results for middle-income countries are mixed. Finally, based on a rolling-window out-of-sample forecasting exercise, we show that the predictive power of international factors has increased in the most recent years of high inflation
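
As an illustration of the econometric building block, here is a minimal sketch of a factor-augmented quantile regression on simulated data; the multi-level dynamic factor extraction is replaced by a toy principal-component factor, and all sample sizes and coefficients are assumptions.

    # Sketch: inflation-at-risk / deflation-at-risk as high and low conditional
    # quantiles of one country's inflation regressed on a common factor.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    T, N = 280, 20                       # months, countries (assumed sizes)
    factor = rng.standard_normal(T).cumsum() * 0.1
    panel = 2.0 + 0.8 * factor[:, None] + rng.standard_normal((T, N))

    # Toy "international factor": first principal component of the panel
    f_hat = np.linalg.svd(panel - panel.mean(0), full_matrices=False)[0][:, 0]

    y = panel[:, 0]                      # one country's inflation series
    X = sm.add_constant(f_hat)
    for q in (0.05, 0.50, 0.95):         # Deflation-at-risk, median, Inflation-at-risk
        res = sm.QuantReg(y, X).fit(q=q)
        print(q, res.params)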

Abstract:

Objective. To characterize the impact of Mexico's Covid-19 vaccination campaign of older adults. Materials and methods. We estimated the absolute change in symptomatic cases, hospitalizations and deaths for vaccine-eligible adults (aged >60 years) and the relative change compared to vaccine-ineligible groups since the campaign started. Results. By May 3, 2021, the odds of Covid-19 cases among adults over 60 compared to 50-59 year olds decreased by 60.3% (95% CI: 53.1, 66.9), and 2 003 cases (95% CI: 1 156, 3 130) were avoided. Hospitalizations and deaths showed similar trends. Conclusions. Covid-19 events decreased after vaccine rollout among those eligible for vaccination

Resumen:

To what extent does public policy making address social problems? Results from Mexico's “70 y más” program. Research to date indicates that public policy making is important for addressing social problems, in particular for reducing poverty. However, findings on poverty reduction programs reveal that they advance very slowly toward meeting their main objectives. This essay first reports on research analyzing the extent to which social policy making has contributed to solving social problems, particularly poverty. Second, the essay examines whether the poverty line has linked public policy making to social problems. Finally, the essay reveals that poverty reduction is not addressed when public policies are designed and, moreover, that the poverty line does not link policy making to poverty reduction

Abstract:

Previous research has revealed that social policy design is relevant for addressing social problems, particularly for reducing poverty. However, evidence on poverty reduction exposes a sluggish trend towards achieving its main goals. This paper first reports on research examining to what extent social policy design has addressed social problems, poverty in particular. Second, this paper examines whether poverty lines have linked social policy design and social problems. Finally, this paper reveals that social policy design does not address poverty reduction and that poverty lines have not linked policy design and poverty reduction

Résumé:

To what extent does social policy design solve social problems? Evidence from Mexico's «70 y más» program. Past research has revealed that social policy design is relevant for solving social problems, in particular for reducing poverty. However, the evidence on poverty reduction reveals a slow trend toward the achievement of its main goals. This article first presents research examining the extent to which social policy design has addressed social problems, and poverty in particular. Second, it examines whether poverty lines have linked social policy design and social problems. Finally, it reveals that social policy design does not address poverty reduction and that poverty lines have not linked policy design and poverty reduction

Resumo:

To what extent does social policy design address social problems? Evidence from the “70 y más” program in Mexico. Previous research has revealed that social policy design is relevant for addressing social problems, particularly for poverty reduction. However, the evidence on poverty reduction shows a slow pace toward the achievement of its main objectives. This article first reports on research examining the extent to which social policy design has dealt with social problems, and the issue of poverty in particular. Second, this article discusses whether poverty-related policies have connected social policy and social problems. Finally, the article reveals that social policy design does not address the issue of poverty reduction and that poverty-related programs have not yet made the connection between policy design and poverty reduction

Abstract:

The Mexican constitution guarantees its citizens the right to submit individual requests to the government. Public officials are obligated to read and respond to citizen requests in a timely manner. Each request goes through three processing steps during which human employees read, analyze, and route requests to the appropriate federal agency depending on their content. The Mexican government recently created a centralized online submission system. In the year following the release of the online system, the number of submitted requests doubled. With limited resources to manually process each request, the Sistema Atención Ciudadana (SAC) office in charge of handling requests has struggled to keep up with the increasing volume, resulting in longer processing times. Our goal is to build a machine learning system to process requests in order to allow the government to respond to citizen requests more efficiently
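
The abstract does not describe the system's internals, so the following is only a hypothetical sketch of the core routing step as supervised text classification; the example requests, agency labels, and model choice are all invented for illustration.

    # Sketch: route citizen requests to agencies with TF-IDF features and a
    # linear classifier (toy data; not the SAC system's actual pipeline).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    requests = [
        "solicito informacion sobre mi pension",
        "bache en la carretera federal",
        "consulta sobre impuestos atrasados",
        "reparacion de un puente vehicular",
    ]
    agencies = ["welfare", "transport", "treasury", "transport"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(requests, agencies)
    print(model.predict(["pago de impuestos"]))  # expected: treasury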

Abstract:

Bureaucratic compliance is often crucial for political survival, yet eliciting that compliance in weakly institutionalized environments requires that political principals convince agents that their hold on power is secure. We provide a formal model to show that electoral manipulation can help to solve this agency problem. By influencing beliefs about a ruler’s hold on power, manipulation can encourage a bureaucrat to work on behalf of the ruler when he would not otherwise do so. This result holds under various common technologies of electoral manipulation. Manipulation is more likely when the bureaucrat is dependent on the ruler for his career and when the probability is high that even generally unsupportive citizens would reward bureaucratic effort. The relationship between the ruler’s expected popularity and the likelihood of manipulation, in turn, depends on the technology of manipulation

Abstract:

We analyze the removal of the credit-risk guarantees provided by the government-sponsored enterprises (GSEs) in a model with agents heterogeneous in income and house price risk. We find that wealth inequality increases, driven by higher mortgage spreads and housing rents. Housing holdings become more concentrated. Foreclosures fall. The removal benefits high-income households, while hurting low- and mid-income households (renters and highly leveraged mortgagors with conforming loans). GSE reform requires compensating transfers, sufficiently high elasticity of rental supply, or linking GSE reform with the elimination of the mortgage interest deduction

Abstract:

Sampling-based motion planning is the state-of-the-art technique for solving challenging motion planning problems in a wide variety of domains. While generally successful, these planners see their performance degrade as problem complexity increases. In many cases, the full problem complexity is not needed for the entire solution. We present a hierarchical aggregation framework that groups and models sets of obstacles based on the currently needed level of detail. The hierarchy enables sampling to be performed using the simplest and most conservative representation of the environment possible in that region. Our results show that this scheme improves planner performance irrespective of the underlying sampling method and input problem. In many cases, improvement is significant, with running times often less than 60% of the original planning time
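
A toy sketch of the underlying idea, under assumed details not in the abstract: a set of obstacles is summarized by one conservative bounding box, and a collision query refines to the individual obstacles only when a sample falls inside the coarse region.

    # Sketch: coarse-to-fine collision checking against aggregated obstacles.
    import random

    # Obstacles as axis-aligned boxes (xmin, ymin, xmax, ymax); invented data
    obstacles = [(0.2, 0.2, 0.3, 0.3), (0.35, 0.25, 0.45, 0.4)]

    def bbox(obs):
        return (min(o[0] for o in obs), min(o[1] for o in obs),
                max(o[2] for o in obs), max(o[3] for o in obs))

    COARSE = bbox(obstacles)   # one conservative model of the whole group

    def inside(p, box):
        return box[0] <= p[0] <= box[2] and box[1] <= p[1] <= box[3]

    def collides(p):
        # Cheap conservative test first; detailed tests only when needed
        if not inside(p, COARSE):
            return False
        return any(inside(p, o) for o in obstacles)

    random.seed(0)
    samples = [(random.random(), random.random()) for _ in range(1000)]
    free = [p for p in samples if not collides(p)]
    print(len(free), "collision-free samples")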

Abstract:

Legal designers use different mechanisms to entrench constitutions. This article studies one mechanism that has received little attention: constitutional "locks," or forced waiting periods for amendments. We begin by presenting a global survey, which reveals that locks appear in sixty-seven national constitutions. They vary in length from nine days to six years, and they vary in reach, with some countries "locking" their entire constitution and others locking only select parts. After presenting the survey, we consider rationales for locks. Scholars tend to lump locks with other tools of entrenchment, such as bicameralism and supermajority rule, but we argue that locks have distinct and interesting features. Specifically, we theorize that locks can cool passions better than other entrenchment mechanisms, promote principled deliberation by placing lawmakers behind a veil of ignorance, and protect minority groups by creating space for political bargaining. Legislators cannot work around locks, and because locks are simple and transparent, lawmakers cannot "break" them without drawing attention. For these reasons, we theorize that locks facilitate constitutional credibility and self-enforcement, perhaps better than other entrenchment mechanisms

Abstract:

This paper addresses the following question: how could Mexico's fiscal, supply-side, and trade reforms lead it into the crisis caused by the December 1994 devaluation? The 1994 political events in Mexico (an armed insurrection combined with terrorism, the assassination of the leading presidential candidate and of the president of the ruling political party, and kidnappings of prominent businessmen as well as political scandals) are most often mentioned in passing or merely as the trigger of a foregone conclusion. We analyze the hypotheses that have been proposed to explain the crisis and conclude that the crisis had a political origin and that some of the financial disequilibria, including the maintenance of a fixed nominal exchange rate in the face of the recent explosion in international transactions, contributed to the crisis

Abstract:

In this paper we explore the phenomenon of Chinese counterfeits smuggled into Mexico, the world's fourth largest counterfeit market, particularly highlighting the role played by Chinese transnational crime in the production and financing of these illegal exports and the role of the Korean diaspora in the distribution of Chinese counterfeits within Mexico. Based on this case study and the extant literature, we suggest propositions concerning diaspora criminal enterprises and diaspora participants in informal markets that amend current theory concerning two streams of literature: diaspora homeland investment and trade facilitation by diasporas

Abstract:

Globalization forces many managers to increasingly interact with new cultures, even if these managers remain in their home countries. This may be particularly true of managers in emerging markets, many of whom experience an encroaching US culture due to media, migration, and trade, as well as the importation of US-style business education. This study explores the possibility of applying acculturation insights developed in the immigrant and sojourner contexts to the context of local managers in emerging markets. By exploring the acculturation of Mexican managers in Mexico, we help to redress what has been identified as a key omission in prior acculturation research – the acculturation of a majority population. Our results suggest that Mexican managers who are bicultural or culturally independent (cosmopolitan) are more likely to be in upper management positions in Mexico. Our study supplements earlier work supporting the efficacy of biculturalism in minority populations. It also supports a growing body of research that conceptualizes individuals who rate themselves low on similarity to two cultures as being cosmopolitans and not marginalized individuals who experience difficulty in life

Abstract:

Economists and policymakers have lauded the adoption of liberal trade policies in many of the emerging markets. From the outside it may appear that governments in these countries have cemented a new set of rules governing economic behavior within their borders. Yet the authors have found that these countries are likely to see the emergence or resurgence of smuggling and contraband distribution in response to trade liberalization. In order to survive under trade liberalization, smugglers will rely on cost savings associated with the circumventing of legal import channels. In addition they may employ violence to bolster a diminished competitive advantage and may seek new illegal sources, both local and international, for the consumer products they distribute. In a market environment in which organized crime competes alongside more legitimate channels of distribution, U.S. multinationals will face new challenges relating to strategic planning, maintaining alliance relationships and corporate control of global brands and pricing

Abstract:

In this paper we contemplate three considerations with regard to gender, culture, and ethnomathematics: The long-term trend in Western culture of excluding women and women’s activities from mathematics, an analysis of some activities typically associated with women with the goal of showing that mathematical processes are frequently involved in said activities, and an examination of some specific examples of cultural tendencies in which certain skills of women are highly regarded. The cultures we will consider are from the regions of the Andes of South America, India, and Mesoamerica

Abstract:

In this essay I will explore the important connection between conformism as an adaptive psychological strategy, and the emergence of the phenomenon of ethnicity. My argument will be that it makes sense that nature made us conformists. And once humans acquired this adaptive strategy, I will argue further, the development of ethnic organization was inevitable. Understanding the adaptive origins of conformism, as we shall see, is perhaps the most useful way to shed light on what ethnicity is—at least when examined from the functional point of view, which is to say from the point of view of the adaptive problems that ethnicity solves. I shall begin with a few words about our final destination

Abstract:

It has been difficult to make progress in the study of ethnicity and nationalism because of the multiple confusions of analytic and lay terms, and the sheer lack of terminological standardization (often even within the same article). This makes a conceptual cleaning-up unavoidable, and it is especially salutary to attempt it now that more economists are becoming interested in the effects of identity on behavior, so that they may begin with the best conceptual tools possible. My approach to these questions has been informed by anthropological and evolutionary-psychological questions. I will focus primarily on the terms ‘ethnic group’, ‘nation’, and ‘nationalism’, and I will make the following points: (1) so-called ‘ethnic groups’ are collections of people with a common cultural identity, plus an ideology of membership by descent and normative endogamy; (2) the ‘group’ in ‘ethnic group’ is a misleading misnomer—these are not ‘groups’ but categories, so I propose to call them ‘ethnies’; (3) ‘nationalism’ mostly refers to the recent ideology that ethnies—cultural communities with a self-conscious ideology of self-sufficient reproduction—be made politically sovereign; (4) it is very confusing to use ‘nationalism’ also to stand for ‘loyalty to a multi-ethnic state’ because this is the exact opposite; (5) a ‘nation’ truly exists only in a politician’s imagination, so analysts should not pretend that establishing whether something ‘really’ is or is not ‘a nation’ matters; (6) a big analytic cost is paid every time an ‘ethnie’ is called a ‘nation’ because this mobilizes the intuition that nationalism is indispensable to ethnic organization (not true), which thereby confuses the very historical process—namely, the recent historical emergence of nationalism—that must be explained; (7) another analytical cost is paid when scholars pretend that ethnicity is a form of kinship—it is not

Abstract:

I argue that (1) the accusation that psychological methods are too diverse conflates "reliability" with "validity"; (2) one must not choose methods by the results they produce - what matters is whether a method acceptably models the real-world situation one is trying to understand; and (3) one must also distinguish methodological failings from differences that arise from the pursuit of different theoretical questions

Abstract:

If ethnic actors represent ethnic groups as essentialized ‘natural’ groups despite the fact that ethnic essences do not exist, we must understand why. This article presents a hypothesis and evidence that humans process ethnic groups (and a few other related social categories) as if they were ‘species’ because their surface similarities to species make them inputs to the ‘living-kinds’ mental module that initially evolved to process species-level categories. The main similarities responsible are (1) category-based endogamy and (2) descent-based membership. Evolution encouraged this because processing ethnic groups as species—at least in the ancestral environment—solved adaptive problems having to do with interactional discriminations and behavioral prediction. Coethnics (like conspecifics) share many strongly intercorrelated ‘properties’ that are not obvious on first inspection. Since interaction with out-group members is costly because of coordination failure due to different norms between ethnic groups, thinking of ethnic groups as species adaptively promotes interactional discriminations towards the in-group (including endogamy). It also promotes inductive generalizations, which allow acquisition of reliable knowledge for behavioral prediction without too much costly interaction with out-group members. The relevant cognitive-science literature is reviewed, and cognitive field-experiment and ethnographic evidence from Mongolia is advanced to support the hypothesis

Abstract:

As a result of a spate of studies geared to investigating Brazilian racial categories, it is now believed by many that Brazilians reason about race in a manner quite different to that of Americans. This paper will argue that this conclusion is premature, as the studies in question have not, in fact, investigated Brazilian categories. What they have done is elicit sorting tasks on the basis of appearances, but the cognitive models of respondents have not been investigated in order to determine what are the boundaries of their concepts. Sorting based on appearances is not sufficient to infer the boundaries of concepts whenever appearance is not a defining criterion for the concepts in question, as the case appears to be for racial and ethnic categories. Following a critique of the methods used, I review a terminological and theoretical confusion concerning the use of the terms ‘emic’ and ‘etic’ in anthropology that appears directly responsible for the failure so far to choose methods appropriate to parsing the conceptual domain of ‘race’ in Brazil

Abstract:

An investigation of the cognitive models underlying ethnic actors' own ideas concerning the acquisition/transmission of an ethnic status is necessary in order to resolve the outstanding differences between "primordial" and "circumstantial" models of ethnicity. This article presents such data from a multi-ethnic area in Mongolia that found ethnic actors to be heavily primordialist, and uses these data to stimulate a more cogent model of ethnicity that puts the intuitions of both primordialists and circumstantialists on a more secure foundation. Although many points made by the circumstantialists can be accommodated in this framework, the model argues that ethnic cognition is at core primordialist, and ethnic actors' instrumental considerations - and by implication their behaviours - are conditioned and constrained by this primordialist core. The implications of this model of ethnicity for ethnic processes are examined, and data from other parts of the world are revisited for their relevance to its claims

Abstract:

Screening potential entrants is a major challenge to any system of immigration. At bottom, the problem is one of information asymmetry, in which migrants hold private information as to their abilities and intentions. We propose a new approach that leverages information that potential entrants have about each other. Certain potential entrants to the United States would have to apply as a small group, called a trust circle. Once inside the country, all members would be subject to onerous bureaucratic requirements, but these would be waived over time for trust circles that remain in good standing. However, if anyone within a trust circle becomes involved in hostile or criminal activities, every member of the group would summarily lose his or her privileges. Knowing this, potential migrants will associate only with others they trust and would have incentives to expose others in the group who adopt bad behaviors after entry

Abstract:

This paper connects trade flows to deviations from the law of one price (LOOP) in a structural model of trade and retailing. It accounts for the observed cross-country dispersion in prices of goods, based on retail price survey data, by focusing on two sources of goods market segmentation - (i) international trade costs, and (ii) non-traded input costs of distribution. I find that a multi-sector Ricardian trade model, a la Eaton-Kortum, augmented with a distribution sector, can account for the average price dispersion for a basket of goods fully and generates 70% of the variation in price dispersion across goods within the basket. While tradability of goods is important in explaining the average price dispersion for the basket of goods, distribution costs are important in explaining why, within the basket, some goods show more price dispersion than others

Abstract:

In this article we consider the advantages of applying lot streaming in a multiple job flow-shop context. The lot streaming process of splitting jobs into sublots to allow overlapping between successive operations has been shown to reduce makespan and thus increase customer satisfaction. Efficient algorithms are available in the literature for solving the problem for a single job. However, for multiple jobs, job sequencing, as well as lot sizing, is involved, and the problem is therefore NP-hard. We consider two special cases for which we provide polynomial time solutions. In one case, we eliminate diversity of the jobs, and hence the job sequencing decision, and in the other we restrict the number of machines. We show that for jobs with identical processing times and number of sublots, no advantage is obtained by allowing inconsistency in sublot sizing of consecutive jobs. For the two-machine case, we also explain why the sequencing and sublot size decision can be approached independently, and supply a polynomial time algorithm for minimising makespan, taking account of attached set-ups on the first machine and transportation times
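
As a worked illustration of the single-job building block (not the paper's multi-job algorithm), the sketch below computes the two-machine makespan of a given sublot split, together with the classical geometric sizing with ratio q = p2/p1 known from the lot streaming literature; all processing times are invented.

    # Sketch under stated assumptions: single job, two machines, continuous
    # sublots, no setups or transport times.
    def makespan(sizes, p1, p2):
        c1 = c2 = 0.0
        for s in sizes:
            c1 += s * p1                 # sublot finishes on machine 1
            c2 = max(c2, c1) + s * p2    # then is processed on machine 2
        return c2

    def geometric_split(total, n, p1, p2):
        q = p2 / p1                      # classical optimal ratio for 2 machines
        if q == 1.0:
            return [total / n] * n
        first = total * (q - 1) / (q**n - 1)
        return [first * q**i for i in range(n)]

    p1, p2, total, n = 1.0, 2.0, 12.0, 3
    equal = [total / n] * n
    geo = geometric_split(total, n, p1, p2)
    print(makespan(equal, p1, p2), makespan(geo, p1, p2))  # geometric is no worse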

Abstract:

This paper describes and develops the conditions that make the demand side policy of vehicle use restrictions part of a cost-effective set of environmental control policies. Mexico City's experience with vehicle use restrictions is described and its failure analysed. It is argued that Mexico City took a step in the right direction, but failed to make the restrictions flexible, thereby making the policy perverse. A programme of tradable vehicle use permits is presented and described that would provide the needed flexibility and promote urban sustainability

Abstract:

Many large cities in the world have serious ground level ozone problems, largely the product of vehicular emissions and thus the argued unsustainability of current urban growth patterns is frequently blamed on unrestricted private vehicle use. This article reviews Mexico City's experience with vehicle use restrictions as an emissions control program and develops the conditions for optimal quantitative restrictions on vehicle use and for complementary abatement technologies. The stochastic nature of air pollution outcomes is modelled explicitly in both the static and dynamic formulations of the control problem, in which for the first time in the literature the use of tradeable vehicle use permits is proposed as a cost-effective complement to technological abatement for mobile emissions control. This control regime gives the authorities a broader and more flexible set of instruments with which to deal more effectively with vehicle emissions, and with seasonal and stochastic variation of air quality outcomes. The market in tradeable vehicle use permits would be very competitive with low transactions costs. This control policy would have very favorable impacts on air quality, vehicle congestion and on urban form and development. Given the general political resistance to environmental taxes, this program could constitute a workable and politically palatable set of policies for controlling greenhouse gas emissions from the transport sector

Abstract:

We consider a pure exchange economy consisting of a single risky asset whose dividend drift rate is modeled as an Ornstein-Uhlenbeck process, and a representative agent with power-utility who, in equilibrium, consumes the dividend paid by the risky asset. Endogenously determined interest rates are found to be of the Vasicek (1977) type. The mean and variance of the equilibrium stock price are stochastic and have mean-reverting components. A closed-form solution for a standard call option is determined for the case of log-utility. Equilibrium values have interesting implications for the equity premium puzzle observed by Mehra and Prescott (1985)
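
A minimal simulation sketch (not the paper's closed-form solutions) of the modeling ingredient: an Ornstein-Uhlenbeck dividend drift rate dX_t = kappa(theta - X_t)dt + sigma dW_t, discretized exactly; the parameter values are illustrative assumptions.

    # Sketch: exact discretization of an Ornstein-Uhlenbeck process.
    import numpy as np

    kappa, theta, sigma = 1.5, 0.03, 0.02   # assumed illustrative parameters
    dt, n = 1 / 252, 252 * 10
    rng = np.random.default_rng(0)

    x = np.empty(n)
    x[0] = theta
    # Exact transition: X_{t+dt} | X_t is Gaussian with known mean and variance
    a = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * kappa))
    for t in range(1, n):
        x[t] = theta + a * (x[t - 1] - theta) + sd * rng.standard_normal()

    # Long-run mean near theta; stationary std near sigma / sqrt(2 * kappa)
    print(x.mean(), x.std())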

Abstract:

We consider planar and spatial autonomous Newtonian systems with Coriolis forces and study the existence of branches of periodic orbits emanating from equilibria. We investigate both degenerate and nondegenerate situations. While Lyapunov's center theorem applies locally in the nondegenerate, nonresonant context, our result provides a global answer which is significant in some degenerate cases. We apply our abstract results to a problem from Celestial Mechanics. More precisely, in the three-dimensional version of the Restricted Triangular Four-Body Problem with possibly different primaries our results show the existence of at least seven branches of periodic orbits emanating from the stationary points

Abstract:

Padding is the practice of adding nonvoters (e.g., noncitizens or disenfranchised prisoners) to an electoral district in order to ensure that the district meets the size quota prescribed by the one man, one vote doctrine without affecting the voting outcome in the district. We show how padding, and its mirror image, pruning, can lead to arbitrarily large deviations from the socially optimal composition of elected legislatures. We solve the partisan districter's optimal padding problem

Abstract:

We introduce a framework to theoretically and empirically examine electoral maldistricting: the intentional drawing of electoral districts to advance partisan objectives, compromising voter welfare. We identify the legislatures that maximize voter welfare and those that maximize partisan goals, and incorporate them into a maldistricting index. This index measures the intent to maldistrict by comparing distances from the actual legislature to the nearest partisan and welfare-maximizing legislatures. Using 2008 presidential election data and 2010 census-based district maps, we find a Republican-leaning bias in district maps. Our index tracks court rulings in intuitive ways

Abstract:

Coattails and the forces behind them have important implications for the understanding of electoral processes and their outcomes. By focusing our attention on neighboring electoral sections that face the same local congressional election, but different municipal elections, and assuming that political preferences for local legislative candidates remain constant across neighboring electoral sections, we exploit variation in the strength of the municipal candidates in each of these electoral sections to estimate coattails from municipal to local congressional elections in Mexico. A one-percentage-point increase in vote share for a municipal candidate translates, depending on his or her party, into an average increase of between 0.45 and 0.78 percentage points in vote share for the legislative candidates from the same party (though this effect may not have been sufficient to affect an outcome in any electoral district in our sample). In addition, we find that a large fraction of the effect is driven by individuals switching their vote decision in the legislative election, rather than by an increase in turnout

Abstract:

In this paper I consider choice correspondences defined on a novel domain: decisions are assumed to be taken not by individuals, but by committees, whose membership is observable and variable. In particular, for the case of two alternatives I provide a full characterization of committee choice structures that may be rationalized with two common decision rules: unanimity with a default and weighted majority

Abstract:

We propose a model of endogenous party platforms with stochastic membership. The parties' proposals depend on their membership, while the membership depends both on the proposals of the parties and on the unobserved idiosyncratic preferences of citizens over parties. An equilibrium of the model obtains when the members of each party prefer the proposal of the party to which they belong over the proposal of the other party. We prove the existence of such an equilibrium and study its qualitative properties. For the cases in which parties use either the average or the median to aggregate the preferences of their members, we show that if the unobserved idiosyncratic characteristics of the parties are similar, then parties make different proposals in the stable equilibria. Conversely, we argue that if parties differ substantially in their unobserved idiosyncratic characteristics, then the unique equilibrium is convergent

Abstract:

We construct a unique dataset that includes the total number of ads placed by all competing political parties during Mexico's 2012 presidential campaign, and detailed information on the content of the ads aired every day during the course of the campaign by each of the competing parties. To illustrate its potential usefulness, we describe the evolution of each party's negative advertising strategies (defined as ads that explicitly mention each of the other competing candidates or parties) over the course of the campaign, and relate it to the expected vote share in the general election for each of the competing candidates based on the available surveys. We show that the parties' choice of negative advertising strategies is consistent with a model in which ads do affect voting intentions, and negative (positive) advertising affects negatively (positively) the vote share of the mentioned party, and positively (negatively) that of all other competing parties

Abstract:

We analyze existence of equilibrium in a one-dimensional model of endogenous party platforms and more than two parties. The platforms proposed by parties depend on their membership composition. The policy implemented is a function of the different proposals and the vote distribution among such proposals. It is shown that if voters are sincere there is always an equilibrium regardless of the number of parties. In the case of strategic voting behavior, existence of equilibrium can be shown provided a subadditivity condition on the outcome function holds

Abstract:

In this paper I provide an example of sorting equilibrium nonexistence in a three-community model of the type introduced in Caplin and Nalebuff (1997; Journal of Economic Theory 72, 306-342). With two communities, such an example has been shown to exist only when the dimension of the policy space is even. It turns out, however, that with three communities existence may fail regardless of whether the policy space dimension is odd or even. This suggests that the original odd/even dichotomy can, at least in part, be explained by the evenness of the number of communities

Abstract:

In a social choice model with an infinite number of agents, there may occur "equal size" coalitions that a preference aggregation rule should treat in the same manner. We introduce an axiom of equal treatment with respect to a measure of coalition size and explore its interaction with common axioms of social choice. We show that, provided the measure space is sufficiently rich in coalitions of the same measure, the new axiom is the natural extension of the concept of anonymity, and in particular plays a similar role in the characterization of preference aggregation rules

Abstract:

We develop a model of endogenous party platform formation in a multidimensional policy space. Party platforms depend on the composition of the parties' primary electorate. The overall social outcome is taken to be a weighted average of party platforms and individuals vote strategically. Equilibrium is defined to obtain when no group of voters can shift the social outcome in its favor by deviating and the party platforms are consistent with their electorate. We provide sufficient conditions for existence of equilibria

Abstract:

This paper analyzes a general model of an economy with heterogeneous individuals choosing among two jurisdictions, such as towns or political parties. Each jurisdiction is described by its constitution, where a constitution is defined as a mapping from all possible population partitions into the (possibly multidimensional) policy space. This study is the first to establish sufficient conditions for existence of sorting equilibria in a two-jurisdiction model for a policy space of an arbitrary dimension

Resumen:

On July 22, 2011, a tragedy abruptly awakened Europe from its liberal-democratic reverie. Reinhard Heydrich detonated a bomb in Oslo and carried out a shooting on the island of Utøya. What is the significance of what happened? Perhaps social welfare does not prevent extreme political manifestations

Abstract:

On July 22, 2011, Europe was shaken by a tragedy waking it up from its liberal-democratic daydream. Reinhard Heydrich exploded a bomb in Oslo and caused a shootout on Utøya island. What are the implications of these incidents? Social well-being is perhaps not a deterrent to such extreme political expressions

Abstract:

This paper describes a system for the design or redesign of movie posters. The main reasoning strategy used by the system is case-based reasoning (Leake 1996), which requires two important sub-tasks to be performed: case retrieval and case adaptation. We have used the random forest algorithm to implement the case retrieval subtask, as it allows the system to formulate a generalization of the features of all the top matching cases that are determined to be most relevant to the new (re)design desired. We have used heuristic rules to implement the case adaptation task which results in generating the suggestion for poster composition. These heuristic rules are based on the Gestalt theory of composition (Arnheim 1974) and the requirements specified by the user

Abstract:

The interrelated fields of computational creativity and design computing, sometimes also referred to as design science, have been gaining momentum over the past two or three decades. Many frequent international conference series, as well as more sporadic stand-alone academic events, have emerged to prove this. As maturing fields, it is time to take stock of what has come before and try to come up with a cohesive description of the theoretical foundations and practical advances that have been made. This paper presents such a description in the hope that it helps to communicate what the fields are about to people that are not directly involved in them, hopefully drawing some of them in

Abstract:

This paper presents some details on how to use concepts from computational creativity to inform the machine learning task of clustering. Specifically, clustering involves structuring exemplar-based knowledge. The novelty and usefulness of the way the knowledge ends up being structured can be measured. These are characteristics that traditionally computational creativity focuses on whereas machine learning doesn’t, but they can aid in selecting the best value for the parameters of the learning task. Doing so also provides us with a way to find an adequate balance between novelty and usefulness, something that still hasn’t been fully formalized in computational creativity. Thus both fields, machine learning and computational creativity, can benefit from this type of hybrid research

Abstract:

Evolutionary algorithms (EAs) have been used in varying ways for design and other creative tasks. One of the main elements of these algorithms is the fitness function used by the algorithm to evaluate the quality of the potential solutions it proposes. The fitness function ultimately represents domain knowledge that serves to bias, constrain, and guide the algorithm's search for an acceptable solution. In this paper, we explore the degree to which the fitness function's implementation affects the search process in an evolutionary algorithm. To do this, the reliability and speed of the algorithm, as well as the quality of the designs produced by it, are measured for different fitness function implementations. These measurements are then compared and contrasted
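
To make the experimental design concrete, here is a minimal sketch, under assumed details (real-valued genomes, elitist selection, a toy target "design"), of the same evolutionary loop run with two different fitness-function implementations while recording how many generations each needs.

    # Sketch: comparing fitness-function implementations in one EA loop.
    import random

    TARGET = [0.0] * 10  # toy "design": a vector the EA should approach

    def fitness_abs(g):   # implementation 1: negative L1 distance
        return -sum(abs(x - t) for x, t in zip(g, TARGET))

    def fitness_sq(g):    # implementation 2: negative squared distance
        return -sum((x - t) ** 2 for x, t in zip(g, TARGET))

    def evolve(fitness, pop_size=50, gens=200, mut=0.1):
        pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(pop_size)]
        for gen in range(gens):
            pop.sort(key=fitness, reverse=True)
            if fitness(pop[0]) > -0.5:          # "acceptable solution" threshold
                return gen, pop[0]
            elite = pop[: pop_size // 5]        # keep the best fifth
            pop = elite + [
                [x + random.gauss(0, mut) for x in random.choice(elite)]
                for _ in range(pop_size - len(elite))
            ]
        return gens, pop[0]

    random.seed(0)
    for f in (fitness_abs, fitness_sq):
        gens_needed, best = evolve(f)
        print(f.__name__, "generations needed:", gens_needed)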

Abstract:

We use an evolutionary algorithm in which we change the fitness function periodically to model the fact that objectives can change during creative problem solving. We performed an experiment to observe the behavior of the evolutionary algorithm regarding its response to these changes and its ability to successfully generate solutions for its creative task despite the changes. An analysis of the results of this experiment sheds some light into the conditions under which the evolutionary algorithm can respond with varying degrees of robustness to the changes

Abstract:

This paper aims to discuss a method for autonomously analyzing a musical style based on the random forest learning algorithm. This algorithm needs to be shown both positive and negative examples of the concept one is trying to teach it. The algorithm uses the Hidden Markov Model (HMM) of each positive and negative piece of music to learn to distinguish the desired musical style from melodies that don't belong to it. The HMM is acquired from the coefficients that are generated by the Wavelet Transform of each piece of music. The output of the random forest algorithm codifies the solution space describing the desired style in an abstract and compact manner. This information can later be used for recognizing and/or generating melodies that fit within the analyzed style, a capability that can be of much use in computational models of design and computational creativity
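
A hedged sketch of the described pipeline on synthetic signals, assuming particular choices the abstract leaves open: a db4 wavelet via PyWavelets, a diagonal-covariance Gaussian HMM via hmmlearn, and the flattened transition matrix and state means as the fixed-length descriptor fed to the random forest.

    # Sketch: wavelet coefficients -> per-melody HMM -> random forest.
    import numpy as np
    import pywt
    from hmmlearn.hmm import GaussianHMM
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    def hmm_features(signal, n_states=3):
        coeffs = np.concatenate(pywt.wavedec(signal, "db4", level=3))
        X = coeffs.reshape(-1, 1)
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=25, random_state=0).fit(X)
        # Fixed-length descriptor (state ordering is arbitrary; fine for a toy demo)
        return np.concatenate([hmm.transmat_.ravel(), hmm.means_.ravel()])

    # Toy stand-ins for melodies in and out of the target style
    in_style = [np.sin(np.linspace(0, 20, 256)) + 0.1 * rng.standard_normal(256)
                for _ in range(10)]
    out_style = [rng.standard_normal(256) for _ in range(10)]

    X = np.array([hmm_features(s) for s in in_style + out_style])
    y = np.array([1] * 10 + [0] * 10)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict([hmm_features(np.sin(np.linspace(0, 20, 256)))]))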

Abstract:

We propose a computational method for producing novel constructs that fall within an existing design or artistic style. The method is based on evolutionary algorithms, and we discuss related knowledge representation issues. We then present an implementation of this method that we used in order to imitate the style of the Dutch painter Mondrian. Finally, we explain and give the results of a cognitive experiment designed to determine the effectiveness of the method, and provide a discussion of these results

Abstract:

This paper presents a multi-agent computational simulation of the effects on creativity of designers' simple social interactions with both other designers and consumers. This model is based on ideas from situated cognition and uses indirect observation to produce potential changes to the knowledge that each designer and consumer uses to perform their activities. These changes result in global, social behaviors emerging from the indirect interaction of the players with relatively simple individual behaviors. The paper provides results to illustrate these emergent behaviors and how the social interactions affect creativity

Abstract:

This paper presents an approach to understanding designing that includes societal effects. We use a multi-agent system to simulate the interactions between two types of agents, those that produce designs and those that receive the designs proposed by the producers. Interactions occur when receiver agents give their opinion to the producers on the designs they produced. Based on this information some producers may choose to adopt knowledge from other producers and some receivers may choose to adopt knowledge from other receivers. As a result of this exchange of knowledge global behaviors emerge in the society of agents. We provide the results of preliminary experiments with the model in which we measure variations in the producers’ successes, which allows us to observe and analyze some of these emergent behaviors under different knowledge transfer scenarios

Abstract:

In this paper we propose a process model for producing novel constructs that fall within an existing design or artistic style. The process model is based on evolutionary algorithms. We then present an implementation of the process model that we used in order to imitate the style of the Dutch painter Mondrian. Finally, we explain and give the results of a cognitive experiment designed to determine the effectiveness of the process model, and provide a discussion of these results

Abstract:

In this paper we use computational techniques to explore the Aztec board game of Patolli. Rules for the game were documented by the Spanish explorers that ultimately destroyed the Aztec civilization, yet there is no guarantee that the few players of Patolli that still exist follow the same strategies as the Aztec originators of the game. We implemented the rules of the game in an agent-based system and designed a series of experiments to pit game-playing agents using different strategies against each other to try to infer what makes a good strategy (and therefore what kind of information would have been taken into account by expert Aztec players back in the days when Patolli was an extremely popular game). In this paper we describe the game, explain our implementation, and present our experimental setup, results and conclusion

Abstract:

In Spanish-speaking countries, the game of dominoes is usually played by four people divided into two teams, with teammates sitting opposite each other. The players cannot communicate with each other or receive information from onlookers while a game is being played. Each player only knows for sure which tiles he/she has available in order to make a play, but must make inferences about the plays that the other participants can make in order to try to ensure that his/her team wins the game. The game is governed by a set of standardized rules, and successful play involves the use of highly-developed mathematical, reasoning and decision-making skills. In this paper we describe a computer system designed to simulate the game of dominoes by using four independent game-playing agents split into teams of two. The agents have been endowed with different strategies, including the traditional strategy followed by experienced human players. An experiment is described in which the success of each implemented strategy is evaluated compared to the traditional one. The results of the experiment are given and discussed, and possible future extensions mentioned

Resumen:

The automotive industry uses complex software systems to carry out the engineering of its products at all stages of the production process, from conceptual design to manufacturing. For each of these stages, highly complex software tools are employed, and the people who use those tools need constant support in order to remain productive even when problems arise in the use of the software. Given the number and variety of queries made by product engineering personnel to the software support staff, the latter could benefit from a software tool that stores their experience and knowledge and that can be consulted to quickly generate solutions to problems arising among the users of the design software. One way to store that experience is to build a case base (memory) that can be accessed using different descriptors of the possible problems as indices. Given a new problem, the idea is to search the case base to try to find as much knowledge as possible about similar problems encountered in the past, and to suggest the solutions to those problems as potential solutions to the new problem. At the same time, even if the prior knowledge in the case base was not of great use in a particular situation, the system can store the solutions to these new problems so that its usefulness grows in the future, when such problems arise again. In this article we describe an information system that uses these ideas to respond flexibly and efficiently when the product engineering personnel of a company in the automotive industry have problems with the use of their design software

Abstract:

The automotive industry uses complex software systems to perform product engineering at all stages of the production process, from conceptual design to manufacture. For each of these stages, different software tools of great complexity are employed, and the people using these tools need constant support in order to continue being productive even when problems arise with the software they are using. Given the amount and variety of queries made by product engineering personnel to the software support staff, said staff can benefit from a software tool that stores their expertise and can be queried in order to generate quick solutions to problems with product engineering software. One way of storing such expertise is to generate a case base which is indexed by different problem descriptors. Given a new problem, the case base is searched for knowledge about similar problems encountered in the past, and their solutions suggested as potential ways of solving the new problem. At the same time, even if its prior knowledge was not able to provide help in a particular situation, this information system can store new solutions to new problems in order to be able to help with similar problems in the future. In this paper we describe an information system that uses these ideas in order to flexibly and efficiently reply when problem situations are encountered by the product engineering staff of a major automobile manufacturer. This setup can relieve some of the burden placed on the software support staff and reduce their response time
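
A small illustration of the retrieval idea with hypothetical descriptors and solutions: past cases are indexed by sets of problem descriptors, retrieval ranks cases by descriptor overlap with the new problem, and newly solved problems are retained for future use.

    # Sketch: descriptor-indexed case base with overlap-based retrieval.
    case_base = [
        ({"crash", "startup", "cad"}, "reinstall the CAD license client"),
        ({"slow", "render", "assembly"}, "disable real-time shadows"),
        ({"crash", "export", "step"}, "patch the STEP translator"),
    ]

    def retrieve(descriptors, k=2):
        # Rank stored cases by how many descriptors they share with the query
        scored = sorted(case_base,
                        key=lambda c: len(c[0] & descriptors), reverse=True)
        return scored[:k]

    def add_case(descriptors, solution):
        case_base.append((set(descriptors), solution))  # retain new experience

    new_problem = {"crash", "cad", "license"}
    for desc, solution in retrieve(new_problem):
        print(sorted(desc), "->", solution)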

Abstract:

While there have been plenty of applications of case-based reasoning (CBR) to different design tasks, rarely has the methodology been used for generating new works of art. If the goal is to produce completely novel artistic styles, then perhaps other reasoning methods offer better opportunities for producing interesting artwork. However, if the goal is to produce new artwork that fits a previously-existing style, then it seems to us that CBR is the ideal strategy to use. In this paper we present some ideas for integrating CBR with other artificial intelligence techniques in order to generate new artwork that imitates a particular artistic style. As an example we show how we have successfully implemented our ideas in a system that produces new works of art in the style of the Dutch painter Piet Mondrian. Along the way we discuss the implications that a task of this nature has for CBR and we describe and provide the results of some experiments we performed with the system

Abstract:

In this paper we present a computer system based on the notion of evolutionary algorithms that, without human intervention, generates artwork in the style of the Dutch painter Piet Mondrian. Several implementation-related decisions that have to be made in order to program such a system are then discussed. The most important issue that has to be considered when implementing this type of system is the subroutine for evaluating the multiple potential artworks generated by the evolutionary algorithm, and our method is discussed in detail. We then present the set-up and results of a cognitive experiment that we performed in order to validate the way we implemented our evaluation subroutine, showing that it makes sense. Finally, we discuss our results in relation to other research into the computer generation of artwork that fits particular styles existing in the real world

Abstract:

The problem of assigning gates to aircraft that are due to arrive at an airport is one that involves a dynamic task environment. Airport gates can only be assigned if they are currently available, but deciding which gate to assign to which flight also involves satisfying multiple additional constraints. Once a solution has been found, new incoming flights will have approached the airspace of the airport in question, and these will require arrival gates to be assigned to them, so the entire process must be repeated. We have come up with a combined knowledge-based and evolutionary approach for performing the airport gate scheduling task. In this paper we present our model from a theoretical point of view, and then discuss a particular implementation of it for the scheduling of arrival gates in a specific airport and show some experimental results
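
A minimal sketch of the evolutionary half of such a model, with invented flights and gates: chromosomes are gate assignments and fitness penalizes overlapping occupancy of the same gate (the knowledge-based component and the other real-world constraints are omitted).

    # Sketch: EA searching for a conflict-free gate assignment.
    import random

    flights = [(0, 45), (10, 70), (30, 90), (60, 120), (80, 140)]  # (arrival, departure)
    N_GATES = 3

    def conflicts(assign):
        n = 0
        for i in range(len(flights)):
            for j in range(i + 1, len(flights)):
                if assign[i] == assign[j]:
                    a, b = flights[i], flights[j]
                    if a[0] < b[1] and b[0] < a[1]:  # occupancy intervals overlap
                        n += 1
        return n

    def evolve(pop_size=40, gens=100):
        pop = [[random.randrange(N_GATES) for _ in flights] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=conflicts)
            if conflicts(pop[0]) == 0:           # feasible schedule found
                break
            elite = pop[: pop_size // 4]
            pop = elite + [
                [g if random.random() > 0.2 else random.randrange(N_GATES)
                 for g in random.choice(elite)]
                for _ in range(pop_size - len(elite))
            ]
        return pop[0]

    random.seed(1)
    best = evolve()
    print(best, "conflicts:", conflicts(best))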

Abstract:

The problem of assigning gates to aircraft that are due to arrive at an airport is one that involves a dynamic task environment. Airport gates can only be assigned if they are currently available, but deciding which gate to assign to which flight also involves satisfying multiple additional constraints. Once a solution has been found, new incoming flights will have approached the airspace of the airport in question, and these will require arrival gates to be assigned to them, so the entire process must be repeated. These observations have led us to propose a coevolutionary model for automating the airport gate scheduling problem. We represent the genotypes of two species. One species corresponds to the current problem to be addressed (a list of departing and arriving flights at a given time-step) and the other species corresponds to the solutions being proposed for that problem (a list of possible gate assignments for the arriving flights). An evolutionary algorithm which operates on a population of solution genotypes is used to solve each instance of the airport gate scheduling problem. A coevolutionary algorithm in which the two species influence each other, which incorporates the previously-mentioned evolutionary algorithm once at each time-step, models the fact that multiple instances of the problem occur over time as an airport operates

Resumen:

Mexico's entry into the GATT in 1986 brought with it a reduction of import barriers, a situation known as receiving most-favored-nation treatment. Mexico signed an Economic Complementarity Agreement with Japan that came into force in 2005. Japan is a large consumer of pork meat and pays excellent prices for it. Among its main suppliers are the United States and Canada, members of NAFTA, which establishes tariff preferences for the import and export of agricultural products that could encourage triangulation. The objective of this work is to determine the competitiveness of pork meat from the NAFTA member countries in the Japanese market. The period of analysis runs from 1998 to 2012. The non-linear SDAIDS model was used, and the significance tests were carried out using Chalfant's procedure. Regarding the expenditure elasticities, it should be noted that they are all positive, consistent with the theory, which indicates a direct relationship between import expenditure and the quantity of meat products demanded. Those of the United States and Mexico are significant at the 1% level; those of the three NAFTA countries are all less than one, which indicates that imports from the rest of the world are more sensitive to changes in expenditure. Mexico has a window of market opportunity to expand its pork meat exports to Japan

Abstract:

The entrance of Mexico into the GATT in 1986 diminished import barriers considerably and, in particular, produced a continuously increasing commercial exchange with Japan. In fact, in 2005 Mexico and Japan signed an Economic Complementarity Agreement. Japan is an important pork meat consumer and has recently paid high prices for quality meat. On the other hand, NAFTA countries are important suppliers in Japan's meat import market. The objective of this work is to determine the competitiveness of the different NAFTA countries in Japan's meat import market. Our analysis covers the period from 1998 to 2012. We employed a non-linear Source Differentiated Almost Ideal Demand System (SDAIDS) to estimate the elasticities of the corresponding demand functions. The significance tests were carried out using Chalfant's procedure. We obtained positive expenditure elasticities, as expected. However, in the case of Mexico and the USA the elasticities turned out to be less than 1, while the expenditure elasticity for the Rest of the World is greater than 1. This shows that pork meat imports from other countries are more sensitive to changes in expenditure. We conclude that Mexico has an important opportunity to expand its exports of pork meat to Japan

Abstract:

In this paper we study delegated portfolio management when the manager’s ability to short-sell is restricted. Contrary to previous results, we show that under moral hazard, linear performance-adjusted contracts do provide portfolio managers with incentives to gather information. We find that the risk-averse manager’s effort is an increasing function of her share in the portfolio’s return. This result affects the risk-averse investor’s choice of contracts. Unlike previous results, the purely risk-sharing contract is now shown to be suboptimal. Using numerical methods we show that under the optimal linear contract, the manager’s share in the portfolio return is higher than under a purely risk-sharing contract. Additionally, this deviation is shown to be: (i) increasing in the manager’s risk aversion and (ii) larger for tighter short-selling restrictions. As the constraint is relaxed, the deviation converges to zero

Resumen:

En simulación de eventos discretos a veces existe la necesidad de clasificar una entidad de acuerdo con algún criterio o característica. En muchos procesos de servicios la característica más fácil de observar es el tiempo en el que ocurre cierto evento, y éste se incluye de manera intrínseca dentro de la tasa de arribos de un proceso de Poisson, para después obtener la distribución de probabilidad y hacer predicciones. En este documento se muestra que también se puede obtener una estimación de densidad de los arribos condicionada en el tiempo durante todo el periodo del proceso de Poisson, la cual puede ser multimodal, y, mediante un clasificador bayesiano, realizar un muestreo para hacer la clasificación. Los resultados muestran que, después de hacer la clasificación, los eventos convergen en la proporción a priori deseada, lo que se observa con una gráfica de medias acumuladas, además de que la distribución de probabilidad o verosimilitud observada de los datos originales se mantiene en los eventos clasificados

Abstract:

In discrete-event simulation there is sometimes a need to classify an entity according to some criterion or characteristic. In many service processes the easiest feature to observe is the time at which a certain event occurs; this time is intrinsically included in the arrival rate of a Poisson process, from which a probability distribution can be obtained and predictions made. This paper shows that it is also possible to obtain a time-conditioned density estimate of the arrivals over the entire period of the Poisson process, which can be multimodal, and to sample from it with a Bayesian classifier to perform the classification. The results show that, after classification, the events converge to the desired a priori proportion, as observed in a plot of cumulative means, and that the probability distribution, or observed likelihood, of the original data is preserved in the classified events
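
A minimal sketch of the idea under assumed data: per-class kernel density estimates of arrival times (which may be multimodal) combined with assumed priors give a Bayes classifier for new arrivals. All distributions, priors, and names here are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical two-class example: arrival times of class A cluster in the
# morning, class B in the afternoon (hours in [0, 24)); priors are assumed.
prior = {"A": 0.3, "B": 0.7}
train = {"A": rng.normal(9, 1.5, 500) % 24,
         "B": rng.normal(16, 2.0, 500) % 24}

# Time-conditioned density estimate per class (multimodal in general).
kde = {c: stats.gaussian_kde(t) for c, t in train.items()}

def classify(t):
    # Posterior is proportional to prior x likelihood; pick the MAP class.
    post = {c: prior[c] * kde[c](t)[0] for c in prior}
    return max(post, key=post.get)

new_arrivals = rng.uniform(0, 24, 1000)
labels = [classify(t) for t in new_arrivals]
print("fraction classified A:", labels.count("A") / len(labels))
```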

Abstract:

A restricted forecasting compatibility test for Vector Autoregressive Error Correction models is analyzed in this work. It is shown that a variance–covariance matrix associated with the restrictions can be used to cancel out model dynamics and interactions between restrictions. This allows us to interpret the joint compatibility test as a composition of the corresponding single restriction compatibility tests. These tests are useful for appreciating the contribution of each and every restriction to the joint compatibility between the whole set of restrictions and the unrestricted forecasts. An estimated process adjustment for the test is derived and the resulting feasible joint compatibility test turns out to have better performance than the original one. An empirical illustration of the usefulness of the proposed test makes use of Mexican macroeconomic data and the targets proposed by the Mexican Government for the year 2003
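
As a hedged sketch of the kind of statistic involved (the paper's exact construction may differ), a joint compatibility test between a vector of restrictions and the unrestricted forecasts can be written in the usual Wald form:

```latex
% Sketch: $\hat{Y}$ is the unrestricted forecast, $Y^{*}$ the vector of
% restrictions (targets), and $\Sigma$ the forecast-error covariance.
\begin{equation}
  K = (Y^{*} - \hat{Y})^{\prime}\, \Sigma^{-1}\, (Y^{*} - \hat{Y})
      \;\sim\; \chi^{2}_{n} \quad \text{under compatibility.}
\end{equation}
% Working with a transformation that diagonalizes $\Sigma$ is what lets
% $K$ decompose into the single-restriction tests mentioned above.
```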

Abstract:

Bipolar plates (BPs) are one of the most important components of polymer electrolyte membrane fuel cells (PEMFCs) because of their important role in gas and water management, electrical performance, and mechanical stability. Therefore, promising materials for use as BPs should meet several technical targets established by the United States Department of Energy (DOE). Thus far, many materials have been reported in the literature for possible application in BPs. Of these, polymer composites reinforced with carbon allotropes are among the most prominent. Therefore, in this review article, we present the progress in, and a critical analysis of, the use of carbon-material-reinforced polymer composites as BP materials in PEMFCs. Based on this review, it is observed that numerous polymer composites reinforced with carbon allotropes have been produced, and most of the composites synthesized and characterized for possible application in BPs meet the DOE requirements. However, these composites can still be improved before their use as BPs in PEMFCs

Abstract:

In this work we present a truncated Newton method able to deal with large-scale bound-constrained optimization problems. Such problems arise in the process of identifying parameters in mathematical models. Specifically, we propose a new termination rule for the inner iteration that is effective in this class of applications. Preliminary numerical experimentation is presented in order to illustrate the merits of the rule
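
The following is a minimal, self-contained sketch of a truncated Newton iteration on a bound-constrained quadratic: the Newton system is solved only approximately by conjugate gradients (CG) and stopped early by an inner termination rule. The simple relative-residual rule and the projection step used here are illustrative stand-ins, not the rule proposed in the paper.

```python
import numpy as np

def truncated_cg(H, g, tol, max_iter=50):
    # Approximately solve H d = -g by CG, truncating early.
    d, r, p = np.zeros_like(g), -g.copy(), -g.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol * np.linalg.norm(g):  # inner termination rule
            break
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d = d + alpha * p
        r_new = r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return d

def solve(H, b, lo, hi, x0, iters=20):
    # Minimize 0.5 x'Hx - b'x subject to lo <= x <= hi (toy problem).
    x = x0
    for _ in range(iters):
        g = H @ x - b                  # gradient of the quadratic
        d = truncated_cg(H, g, tol=0.1)
        x = np.clip(x + d, lo, hi)     # projection onto the bounds
    return x

H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(solve(H, b, lo=0.0, hi=0.5, x0=np.zeros(2)))
```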

Resumen:

A partir de las respuestas de los alumnos en la Olimpiada de Mayo, evento donde son aplicadas pruebas a estudiantes de primaria y secundaria en todo México, se intenta plantear un enfoque diferente para la enseñanza de las matemáticas en la educación básica. Los problemas “Tipo Olimpiada” trascienden el plan de estudios y el bagaje académico de los estudiantes. Es por ello que postulamos que presentar problemas de este tipo, los cuales llevan al alumno a considerar diferentes formas de llegar a una respuesta, tiene un impacto positivo en el desarrollo lógico matemático de los jóvenes y se traduce en un mejor entendimiento de las matemáticas

Abstract:

Drawing on students’ answers in the Olimpiada de Mayo, a competition applied to elementary and secondary school students all over Mexico, this paper poses a different approach to the teaching of mathematics in basic education. “Olympiad-type” problems transcend the study plan and the students’ academic background. We therefore postulate that presenting problems of this sort, which push the student to consider different ways of reaching an answer, has a positive impact on the development of young people’s mathematical logic and translates into a better understanding of mathematics

Abstract:

A notion of almost regular inductive limits is introduced. Every sequentially complete inductive limit of arbitrary locally convex spaces is almost regular

Abstract:

A regular inductive limit of sequentially complete spaces is sequentially complete. For the converse of this theorem we have a weaker result: if indEn is a sequentially complete inductive limit, and each constituent space En is closed in indEn, then indEn is α-regular

Abstract:

In shale plays, as with all reservoirs, it is desirable to achieve the optimal development strategies, particularly well spacing, as early as possible, without overdrilling. This paper documents a new technology that can aid in determining optimal development strategies in shale reservoirs. We integrate a decline-curve-based reservoir model with a decision model that incorporates uncertainty in production forecasts. Our work extends previous work by not only correlating well spacing and other completion parameters with performance indicators, but also developing an integrated model that can forecast production probabilistically and determine the impact of development decisions on long-term production. A public data set of 64 horizontal wells in the Barnett shale play in Cooke, Montague and Wise Counties, Texas, was used to construct the integrated model. This part of the Barnett shale is in the oil window and wells produce significant volumes of hydrocarbon liquids. The data set includes directional surveys, completion and stimulation data, and oil and gas production data. Completion and stimulation parameters, such as perforated interval, fluid volume, proppant mass, and well spacing, were correlated with decline curve parameters, such as initial oil rate and a proxy for the initial decline rate, the ratio of cumulative production at 6 months to 1 month (CP6to1), using linear regression. In addition, a GOR model was developed based on thermal maturity and average GOR versus time. Thousands of oil and gas production forecasts were generated from linear regression and GOR models using Monte Carlo simulation, which serve as the input to the decision model. The decision model then determines the impact of well spacing and other completion/stimulation decisions on long-term production performance.
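
A hedged sketch of the probabilistic forecasting step: decline-curve parameters are drawn from distributions (standing in for the fitted regression models) and Monte Carlo simulation yields a distribution of cumulative production for each candidate well spacing. All coefficients and distributions below are invented for illustration, not the Barnett estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def arps_rate(t, qi, Di, b=1.0):
    # Hyperbolic Arps decline: q(t) = qi / (1 + b*Di*t)^(1/b)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def draw_parameters(spacing_ft, n):
    # qi and Di drawn from distributions whose means depend (illustratively)
    # on well spacing; wider spacing -> higher initial rate per well here.
    qi = rng.lognormal(mean=np.log(300 + 0.1 * spacing_ft), sigma=0.3, size=n)
    Di = rng.lognormal(mean=np.log(0.15), sigma=0.2, size=n)
    return qi, Di

t = np.arange(1, 121)  # months
for spacing in (500, 1000):
    qi, Di = draw_parameters(spacing, n=5000)
    cum = np.array([arps_rate(t, q, d).sum() for q, d in zip(qi, Di)])
    # Petroleum convention: P10 is the optimistic (90th percentile) outcome.
    p10, p50, p90 = np.percentile(cum, [90, 50, 10])
    print(f"spacing {spacing} ft: P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}")
```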

Abstract:

Public goods games played by a group of individuals collectively performing actions towards a common interest characterize social dilemmas where both competition and cooperation are present. By using agent-based simulation, this paper investigates how collective action in the form of repeated n-person linear public goods games is affected when the interaction of individuals is driven by the underlying hierarchical structure of an organization. The proposed agent-based simulation model is based on generally known empirical findings about public goods games and takes into account that individuals may change their profiles from conditional cooperators to rational egoists or vice versa. To do so, a fuzzy logic system was designed to allow the cumulative modification of agents' profiles in the presence of vague variables such as individuals' attitude towards group pressure and/or their perception about the cooperation of others. From the simulation results, it can be concluded that collective action is affected by the structural characteristics of hierarchical organizations. The major findings are as follows: (1) Cooperation in organizational structures is fostered when there is a collegial model defining the structure of the punishment mechanisms employed by organizations. (2) Having multiple, and small organizational units fosters group pressure and highlights the positive perception about the cooperation of others, resulting in organizations achieving relatively high aggregate contribution levels. (3) The greater the number of levels, the higher the aggregate contribution level of an organization when effective sanctioning systems are introduced
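
A toy version of the repeated n-person linear public goods game described above, with a crude probabilistic rule standing in for the fuzzy profile-switching logic; all parameters are assumptions for illustration.

```python
import random

# Conditional cooperators ("cc") match the group's average contribution;
# rational egoists free-ride. Profiles may switch over time, as in the model.
N, ROUNDS, ENDOWMENT = 20, 30, 20.0

agents = [{"profile": random.choice(["cc", "egoist"]), "contrib": ENDOWMENT / 2}
          for _ in range(N)]

for _ in range(ROUNDS):
    avg = sum(a["contrib"] for a in agents) / N
    for a in agents:
        a["contrib"] = avg if a["profile"] == "cc" else 0.0
        # Crude stand-in for the fuzzy profile update: low perceived
        # cooperation erodes cooperators, group pressure converts egoists.
        if a["profile"] == "cc" and avg < 0.2 * ENDOWMENT and random.random() < 0.1:
            a["profile"] = "egoist"
        elif a["profile"] == "egoist" and avg > 0.6 * ENDOWMENT and random.random() < 0.1:
            a["profile"] = "cc"

print(f"final average contribution: {sum(a['contrib'] for a in agents) / N:.2f}")
```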

Abstract:

Though most countries have established public defense systems to represent indigent defendants, this is far from implying their offices are in good shape. Indeed, significant variation likely exists in the systems' effectiveness, across societies and at the subnational level. Defense agencies' performance likely depends on their configuration, including their funding, their internal arrangements, and their selection and retention mechanisms. Centered on public defense in Argentina, this article compares the performance of public and retained counsel at the country's Supreme Court. Public defenders' offices received a boost in the last two decades, and are institutionally well positioned to square off against prosecutors, putting them at least on par with the averaged retained counsel. Using a fresh dataset of around 3000 appeal decisions from 2008 to 2013, the study largely tests representational capabilities by looking at whether counsel meets briefs' formal requirements, a subset of decisions particularly valuable to reduce potential biases. It finds that formal dismissals are significantly less frequent when a public defender is named in an appeal, particularly when a federal defender is involved. It also discusses and tests alternative mechanisms. The article's findings illuminate discussions of support structures for litigation, criminal justice reform, and criminal defendants' rights

Abstract:

Does a young football (soccer) player's birthdate affect his prospects in the sport? Scholars have found a correlation between early births in the competition year among young players within the same cohort and improved chances in sports as they advance to other stages. This article is one of the first studies to ask this question about a male premier league in Latin America - the Argentinian 'A' league. It uses a large-N data-set of all players in the period 2000-2012, around 3000 players. The article finds a large effect of the player's relative age on his prospect to become a professional, though the effect is only present in the case of Argentinian-born players. The effect evaporates once a set of measures are employed to compare professional players with one another. The article contributes to the discussion of the biased effect of seemingly neutral institutional policies, and its conclusions may shed light in other areas

Abstract:

This paper presents an estimation of ideal points for the Justices of the Supreme Court of Argentina for 1984–2007. The estimated ideal points allow us to focus on political cycles in the Court as well as possible coalitions based on presidential appointments. We find strong evidence to support the existence of such coalitions in some periods (such as President Carlos Menem’s term) but less so in others (including President Néstor Kirchner’s term, a period of swift turnover in the Court due to impeachment processes and resignations). Implications for comparative judicial politics are discussed

Abstract:

In this paper we analyze the evolution of Latin American (LATAM) Business Economics (BE) publications in international journals from 2005 to 2019. Using publications in the Web of Science Core Collection (WoS), we analyze which characteristics of collaboration result in higher impact, i.e., total number of citations and journals' WoS impact factor. Our findings show that the number of publications by researchers in LATAM in WoS-indexed journals has been rising, as has their impact measured by citations. Moreover, researchers in the region are publishing in journals with higher impact factors. The analysis shows that the main drivers of impact are multilateral and bilateral collaboration, number of countries, number of authors, and the number of categories of knowledge. Specifically, multilateral collaboration is a key factor of influential papers. Other aspects that increase the impact of publications are publishing in English and collaborating with authors from the United States. Our results also suggest a slight decrease in impact as the number of coauthors increases

Abstract:

Engineers make things, make things work, and make things work better and easier. This kind of knowledge is crucial for innovation, and much of the explicit knowledge developed by engineers is embodied in scientific publications. In this paper, we analyze the evolution of publications and citations in engineering in a middle-income country such as Mexico. Using a database of all Mexican publications in the Web of Science from 2004 to 2017, we explore the characteristics of the publications that tend to have the greatest impact, that is, the highest number of citations. Among the variables studied are the type of collaboration (no collaboration, domestic, bilateral, or multilateral), the number of coauthors and countries, controlling for a coauthor from the USA, and the affiliation institution of the Mexican author(s). Our results emphasize the overall importance of joint international efforts and suggest that the publications with the highest number of citations are those with multinational collaboration (coauthors from three or more countries) and those in which one of the coauthors is from the USA. Another interesting result is that single-authored papers have had a higher impact than those written through domestic collaboration

Abstract:

The purpose of this paper is to analyze the extent to which productivity and formal research collaboration have changed in the fields of social sciences in Mexico. The results show that all fields have had extensive growth in the number of publications, mainly since 2005, when the number of Social Sciences journals indexed in the Web of Science (WoS) began to grow significantly. However, there are important variations among areas of knowledge. The four most productive fields, considering only publications in WoS, are Business & Economics; Education & Educational Research; Social Sciences Other Topics; and Psychology. The mean number of coauthors per paper has not grown steadily over the period of analysis; on the contrary, its evolution has been almost flat in nearly all fields of knowledge. The evolution of communication and information technologies does not seem to have substantially influenced co-authorship in the Social Sciences in Mexico. Nor has there been a big change in terms of collaboration. On average, 42% of the publications in all fields of knowledge were by solo authors, and 26% were local collaborations, i.e., collaborations among authors affiliated with Mexican institutions. As for international collaboration, 24% of the publications were bilateral collaborations (Mexico and another country) and only 8% involved researchers from three or more countries (multilateral collaboration)

Abstract:

Nations consider R&D a fundamental way to spur business innovation, increase the international competitiveness of domestic firms, achieve higher levels of economic growth, and increase the social welfare of their citizens. The empirical evidence indicates that, in the Latin American region, investment in R&D is comparatively low, largely depends on public funds, and is highly concentrated in academic research with limited business applications. The evidence also suggests a lack of connection in the region between those who produce knowledge (academia) and those who use that knowledge (business practitioners). This paper argues that business schools in the region have a role to play in filling this gap by conducting more research with real-world business applications and by fostering innovative entrepreneurship among business school students

Abstract:

This paper analyzes science productivity for nine developing countries. Results show that these nations are reducing their science gap, with R&D investments and scientific impact growing at more than double the rate of the developed world. But this “catching up” hides a very uneven picture among these nations, especially in what they are able to generate in terms of impact and output relative to their levels of investment and available resources. Moreover, contrary to what one might expect, it is clear that the size of these nations and the relative scale of their R&D investments are not the key drivers of efficiency

Abstract:

This paper provides useful insights for the design of networks that promote research productivity. The results suggest that the different dimensions of social capital affect scientific performance differently depending on the area of knowledge. Overall, dense networks negatively affect the creation of new knowledge. In addition, the analysis shows that a division of labor in academia, in the sense of interdisciplinary research, increases the productivity of researchers. It is also found that position in a network is critical: researchers who are central tend to create more knowledge. Finally, the findings suggest that the number of ties has a positive impact on future productivity. Regarding areas of knowledge, Exact Sciences is the area in which social capital has the strongest impact on research performance. On the other hand, Social Sciences and Humanities, as well as Engineering, are the ones in which social capital has a lesser effect. The differences found across multiple domains of science suggest the need to consider this heterogeneity in policy design

Abstract:

This paper aims to further our understanding of how embeddedness affects the research output and impact of scientists. The analysis uses an extensive panel data set that allows an analysis of within-person variation over time. It explores the simultaneous effects of different dimensions of network embeddedness over time at the individual level. These include the establishment of direct ties, the strength of these ties, as well as density, structural holes, centrality, and cross-disciplinary links. Results suggest that the network dynamics behind the generation of quality output contrast dramatically with those behind quantity. We find that the relational dimension of scientists matters for quality but not for output, while cognitive dimensions have the opposite effect, helping output while being indifferent toward impact. The structural dimension of the network is the only area where there is some degree of convergence between output quantity and quality; here, we find a prevalence of the role of brokerage over cohesion. The paper concludes by discussing implications for both network research and science policy

Abstract:

Mexico is among the set of countries that look to Science, Technology and Innovation as a fundamental mechanism to support competitiveness, reach higher levels of economic growth and increase the social welfare of its citizens. This paper provides a description of the main policy initiatives to foster innovation in Mexico. Afterward, some characteristics of innovation practices in private-sector businesses, and how these features have evolved over a 5-year period, are presented in order to assess the impact of those policy initiatives. For this purpose, results of the 2001 and 2006 national innovation surveys are used

Abstract:

The Mexican government faces significant challenges in providing resources and implementing policies to support the steps that have been taken to strengthen its science and technology capacity. The country's research and development (R&D) investment was less than 0.4% of its gross domestic product (GDP) in 2004, low compared with other countries. Abundant natural resources, such as oil, along with a closed and regulated economy, are some of the main factors that have prevented the government and companies from investing in R&D activities in the country. The severe financial crisis experienced by the country during the 1980s, which increased inflation to more than 150%, was another factor that affected the government's efforts to strengthen research and development activities in the fields of science and technology

Abstract:

This paper uses a unique data set of Mexican researchers to explore the determinants of research output and impact. Our findings confirm a quadratic relationship between age and the number of published papers. However, publishing peaks when researchers are approximately 53 years old, 5 or 10 years later than what prior studies have shown. Overall, the results suggest that age does not have a substantial influence on research output and impact. We also find that reputation matters for the number of citations but not publications. Results also show important heterogeneity across areas of knowledge. Interpretations of other aspects, such as gender, country of PhD, and cohort effect, among others, are also discussed

Resumen:

El presente trabajo señala algunas de las críticas que se han realizado a la cuestión de lo social en Heidegger. Se muestra que el método por sí mismo excluye la posibilidad de hacer propuestas concretas en lo referente a lo ético o lo social, lo que no debe traducirse como un desentendimiento de lo humano. Por otro lado, se realiza un recorrido por algunos de los lugares centrales de la ontología fundamental en los que Heidegger aborda el tema de la alteridad

Abstract:

This work outlines some of the criticism regarding Heidegger’s social theory. It will be demonstrated that Heidegger’s method itself eliminates the possibility of specific suggestions regarding ethical and social issues, which must not be interpreted as a disregard of human issues. On the other hand, we will revisit some key places in his Fundamental Ontology, where the philosopher addresses the theme of alterity

Resumen:

En este artículo se fundamenta una idea sencilla: en la actualidad muchos líderes políticos usan el término tolerancia para calificar sus propias actitudes hacia cierto tipo de personas, prácticas y culturas. La pregunta es simple: ¿al estado tolerante se le permite hablar (moral y conceptualmente) acerca de la tolerancia? La tesis defendida por el autor es que el Estado liberal moderno no puede (por razones conceptuales) y no debe (por razones morales) hablar acerca de la tolerancia

Abstract:

In this article the author makes a simple claim: in present days, several political leaders use the term tolerance to qualify their attitudes towards certain kind of people, practices and cultures. The question is simple: Is the tolerant state allowed (morally and conceptually) to speak about tolerance? The thesis defended by the author is that the modern liberal State cannot (because of conceptual reasons) and should not (because of moral reasons) talk about tolerance

Resumen:

¿Hasta dónde llegan los vínculos de Hegel con Aristóteles? ¿En qué sentido podemos decir que la filosofía de Hegel aspira a ser la consumación de la aristotélica? En este artículo analizamos el significado y la equivalencia de los conceptos de νοῦς y de ἐνέργεια del parágrafo Met. Λ 7, 1072b18-30, en la filosofía hegeliana, en especial en relación con el concepto de "realidad efectiva" (Wirklichkeit). En el mundo antiguo se cultivó la imitatio auctoris como afán consciente por entender y apropiarse de un contenido que se consideraba valioso y verdadero. La imitatio no consistía en la burda repetición de un autor, sino en el empeño de mejorar la comprensión de una verdad que se desea iluminar. Hegel imita a Aristóteles hasta emularlo. Supera el pasado sin que este sea eliminado por completo. Lo contiene, lo actualiza y lo mantiene vivo. La Aufhebung hegeliana es imitatio de los clásicos

Abstract:

How far do Hegel's links to Aristotle go? In what sense can we say that Hegel's philosophy aspires to be the consummation of the Aristotelian one? In this article we analyze the meaning and equivalence of the concepts of νοῦς and ἐνέργεια in the passage Met. Λ 7, 1072b18-30 within Hegelian philosophy, especially in relation to the concept of "effective reality" (Wirklichkeit). In the ancient world, imitatio auctoris was cultivated as a conscious eagerness to understand and appropriate a content that was considered valuable and true. The imitatio did not consist in the crude repetition of an author, but in the effort to improve the understanding of a truth that one wishes to illuminate. Hegel imitates Aristotle to the point of emulating him. He overcomes the past without completely eliminating it. He contains it, updates it and keeps it alive. The Hegelian Aufhebung is an imitatio of the classics

Resumen:

En algunos sectores crece la convicción de que la universidad es innecesaria y que puede ser remplazada por una serie de dispositivos perfectamente alineados a los intereses de la empresa. Se multiplican los expertos que anuncian la muerte de la universidad. Conviene reflexionar sobre los rasgos que tendría ese mundo sin universidades y destacar la necesidad de un espacio libre de condicionamientos políticos y económicos

Abstract:

The conviction is growing in some sectors that the university is unnecessary and can be replaced by a series of devices perfectly aligned with the interests of business. Experts announcing the death of the university are multiplying. It is worth reflecting on the characteristics that a world without universities would have and highlighting the need for a space free of political and economic conditioning

Resumen:

La filosofía hegeliana tiene un carácter pedagógico que se hace presente en el concepto Bildung (educación, formación o cultivo de sí). La fenomenología del espíritu es la ciencia de la experiencia de la conciencia; la exposición del proceso de aprendizaje del espíritu en la conciencia y en la historia. El espíritu es de por sí el sujeto que se ha aprendido a sí mismo, que sabe que sabe y que no deja de desplegarse en diversas formas históricas. Así, el espíritu trabaja, se hace a sí mismo. Es pura autoactividad que se forma y se educa dialécticamente. Sostengo la afirmación de que el sistema de la ciencia de Hegel no solo es lógico o histórico sino pedagógico en el sentido enfático de que la Idea o la totalidad de lo real no puede ser sino un largo proceso formativo

Abstract:

Hegelian philosophy has a pedagogical character that is present in the concept of Bildung (education, formation or self-cultivation). The phenomenology of spirit is the science of the experience of consciousness: the exposition of the learning process of the spirit in consciousness and in history. The spirit is in itself the subject that has learned itself, that knows that it knows and that does not cease to unfold in various historical forms. Thus, the spirit works, it makes itself. It is pure self-activity that is formed and educated dialectically. I support the assertion that Hegel’s system of science is not only logical or historical but pedagogical, in the emphatic sense that the Idea, or the totality of the real, can only be a long formative process

Resumen:

La Universidad de Columbia determinó el 20 de enero de 1919 implantar el curso Contemporary Civilization, que se convirtió con el paso del tiempo en uno de los más exitosos de la educación estadounidense. El propósito del curso era comprender los problemas del mundo y sus soluciones. Solo un estudiante cultivado puede alcanzar un entendimiento profundo de sí mismo y de la sociedad, y emitir sus propios juicios. El proyecto se inspiró en la idea de la obligación moral de ser inteligente de John Erskine. Daniel Cosío Villegas (1898-1976) impulsó la introducción del curso en México en el nuevo plan de estudios (1958) de la Facultad de Economía de Nuevo León. En esta investigación exploramos las circunstancias históricas del curso y los problemas que enfrentó Consuelo Meyer L'Epée (1918-2010) para implantarlo

Abstract:

On January 20, 1919, Columbia University decided to introduce the Contemporary Civilization course. This new academic program became, over time, one of the most successful in American education. The purpose of the course was to understand the world's problems and their solutions. Only a cultivated student can develop a deep understanding of himself or herself and of society, and the ability to make his or her own judgments. The project was inspired by John Erskine's (1879-1951) idea of the moral obligation to be intelligent. Daniel Cosío Villegas (1898-1976) promoted the introduction of the Contemporary Civilization course in Mexico in the new curriculum (1958) of the Faculty of Economics of Nuevo León. In this research we explore the historical circumstances of this course and the challenges faced by Consuelo Meyer L'Epée (1918-2010) in its implementation

Resumen:

El presente artículo profundiza en las características, virtudes y ventajas del método socrático en la educación universitaria de la mano de la experiencia de los Grandes Libros en la Universidad de Chicago. Además, se reflexiona sobre la actualidad y los desafíos de tal método en tiempos de pandemia y de tecnología

Abstract:

This article delves into the characteristics, virtues and advantages of the Socratic method in higher education, drawing on the experience of the Great Books at the University of Chicago. It also reflects on the current relevance and challenges of such a method in times of pandemic and technology

Resumen:

Hegel elaboró una rigurosa filosofía del derecho fundada en la suspensión del formalismo jurídico y el subjetivismo de las preferencias individuales. Su filosofía no es positivista, pero tampoco historicista o sociológica. Se afirma como actividad científica porque se sustenta en la lógica y se demuestra como realidad efectiva. La filosofía del derecho expone la libertad a lo largo de tres momentos: el derecho abstracto, la moralidad y la eticidad. En el derecho abstracto se afirma la libertad formal del individuo, su personalidad jurídica: declaro que soy libre y tengo "derecho"; solo es efectivo cuando se expone y se prueba exteriormente. El derecho demanda el reconocimiento de la comunidad: un dejar hacer (laissez faire) y el otorgamiento de una prestación (prerrogativa) por parte del Estado. Para Hegel, la eticidad es el orden que permite la relación entre individuos libres en la esfera social, económica o jurídica. Por lo tanto, el derecho es realmente efectivo en la eticidad. En la constitución política y en las leyes, las personas exteriorizan la racionalidad práctica: la perfecta unión de las normas universales de la cultura y sus costumbres. Sólo así, el ciudadano se reconoce a sí mismo y a los demás en lo universal de las leyes

Abstract:

Hegel developed a rigorous philosophy of right founded on overcoming legal formalism and the subjectivism of individual preferences. His philosophy is not positivist, but neither is it historicist or sociological. It affirms itself as a scientific activity because it is grounded in logic and demonstrates itself as effective reality. The philosophy of right expounds freedom across three moments: abstract right, morality and ethical life. In abstract right, the individual's formal freedom, his legal personality, is affirmed: I declare that I am free and have a "right"; however, this is only effective when it is exposed and tested externally. Right demands the recognition of the community: a letting-do (laissez faire) and the granting of a benefit (prerogative) by the State. For Hegel, ethical life is the order that allows the relationship between free individuals in the social, economic or legal sphere. Therefore, right is really effective in ethical life. In the political constitution and the laws, people externalize practical rationality: the perfect union of the universal norms of culture and their customs. Only in this way does the citizen recognize himself and others in the universality of the laws

Resumen:

A Hegel se le acusa de ser un pensador totalitario, un abogado del "estatismo" y un "enemigo de la libertad". Se supone que no tiene un pensamiento económico y que, si logró hacer algunos esbozos al respecto, deben ser ignorados. Una exégesis rigurosa de las obras de Hegel no permite que se le atribuya el título de pensador totalitario. Al contrario, Hegel es un pensador liberal. No solo estudió la economía política de James Steuart (1707-1780) y de Adam Smith (1723-1790), sino que dedicó una parte de su reflexión a temas económicos como el "dinero”, el "sistema de necesidades" y la "sociedad civil". Para Hegel, el desarrollo de la libertad individual consolidó al "dinero" como medio de cambio universal y este, al mismo tiempo, posibilitó unas relaciones más justas entre los hombres

Abstract:

Hegel is accused of being a totalitarian thinker, an advocate of "statism" and an "enemy of freedom". It is assumed that he has no economic thought and that the few sketches he did produce on the subject should be ignored. A rigorous exegesis of Hegel's works does not allow the title of totalitarian thinker to be attributed to him. On the contrary, Hegel is a liberal thinker. He not only studied the political economy of James Steuart (1707-1780) and Adam Smith (1723-1790), but devoted a part of his reflection to economic issues such as "money", the "system of needs" and "civil society". For Hegel, the development of individual freedom consolidated "money" as a universal medium of exchange, which, at the same time, made possible more just relations among men

Resumen:

¿Qué significado tiene el humanismo en el siglo xxi? ¿Qué papel tienen las humanidades en la universidad? ¿Qué relación tiene la universidad con la sociedad? ¿Cuáles son los presupuestos de una filosofía de la educación relevante para nuestra época? Este artículo busca responder estas preguntas a partir de las reflexiones que Martha Nussbaum ha dedicado a la filosofía educativa. El reto de la educación es el reto del hombre que se debate entre el narcisismo y la interdependencia, entre la vergüenza y la aceptación de la propia humanidad

Abstract:

What does humanism mean in the twenty-first century? What role do the humanities play in the university? How is the university related to society? What are the presuppositions of a philosophy of education relevant to our times? This article seeks to answer these questions on the basis of Martha Nussbaum’s reflections on educational philosophy. The challenge of education is the challenge of the human being torn between narcissism and interdependence, between shame and the acceptance of one’s own humanity

Resumen:

Carlos de la Isla reflexiona sobre los retos que enfrenta la educación en el mundo contemporáneo. Su pensamiento coincide con el de otros especialistas del mundo al advertir sobre distintos peligros para las instituciones educativas, como supeditar las acciones educativas a los criterios mercantilistas. En esta entrevista, se pondera el significado de la pedagogía dialógica, el currículo oculto y la pedagogía crítica

Abstract:

Carlos de la Isla reflects on the challenges facing education in the contemporary world. His thinking coincides with that of other specialists around the world in warning of various dangers facing educational institutions, such as subordinating educational action to market criteria. In this interview, the significance of dialogic pedagogy, the hidden curriculum, and critical pedagogy is considered

Resumen:

Desde hace tiempo se ha consolidado un nuevo paradigma educativo, centrado en el desarrollo de “competencias”, que enfatizan las actuaciones y las capacidades del estudiante “para saber hacer en un contexto”. Tal enfoque está presente en muchas reformas educativas en Europa y Latinoamérica, debido a la influencia de organismos económicos internacionales. Este artículo analiza el concepto “competencia” y su evolución, y evalúa el significado de la noción “competencia social y ciudadana” desde la educación como ejercicio de crítica y transformación social

Abstract:

For some time now, a new educational paradigm has been consolidating, centered on the development of “competencies”, which emphasize students’ performances and abilities “to know how to act in a context”. This approach is present in many educational reforms in Europe and Latin America, owing to the influence of international economic organizations. This article analyzes the concept of “competency” and its evolution, and evaluates the meaning of the notion of “social and civic competency” from the standpoint of education as an exercise of criticism and social transformation

Resumen:

Cuando asumió la presidencia de la república en diciembre de 1940, Manuel Ávila Camacho encontró a su gobierno inmerso en la Segunda Guerra Mundial y comprendió que México no podía quedar al margen. Por eso nombró a Ezequiel Padilla al frente de la Secretaría de Relaciones Exteriores y le solicitó que ajustara la política exterior mexicana, sin abandonar los principios constitucionales de no intervención, respeto a la soberanía y solución pacífica de las controversias internacionales, para que el país ingresara a la Segunda Guerra Mundial. La decisión presidencial fue decisiva para hacer frente a la guerra, pero también para definir la política exterior que el país seguiría en la posguerra. Esta situación convirtió a Ezequiel Padilla en uno de los cancilleres más influyentes del continente americano; a pesar de ello, ha sido una figura poco estudiada

Abstract:

After assuming the presidency of the Republic in December 1940, Manuel Ávila Camacho found his government immersed in World War II and understood that Mexico could not be left out. Hence, he appointed Ezequiel Padilla head of the Ministry of Foreign Affairs and asked him to adjust Mexican foreign policy, without abandoning the constitutional principles of non-intervention, respect for sovereignty and the peaceful resolution of international disputes, so that the country could enter World War II. The presidential decision was decisive in confronting the war, but also in defining the foreign policy that the country would follow in the post-war period. This situation made Ezequiel Padilla one of the most influential foreign ministers in the Americas; nevertheless, he has been a little-studied figure

Abstract:

Our knowledge of the top management team (TMT) and board interface in the context of major strategic decisions remains limited. Drawing upon the strategic leadership system perspective (SLSP) and the interface approach, we argue that the two groups constitute a strategic-oriented multiteam system and consider how supplementary (similarity) and complementary (interacting variety) congruence of international and functional backgrounds influence strategic decision-making. Looking at the internationalization decisions of the largest public firms in the UK, we find that complementary congruence of international backgrounds and supplementary congruence of functional experience promote the pursuit of new market entries. We extend the SLSP by showing how the cognitive TMT-board interface dynamics associated with supplementary and complementary congruence are important antecedents of strategic outcomes. Further, we find a boundary condition to the interface approach in strategic leadership research by identifying the underlying mechanisms that activate some TMT-board interfaces and not others

Abstract:

Existing research has underexplored the role of context as a source of heterogeneity in family firms' (FFs) internationalization strategies. Drawing upon institutional theory, we develop and test a mid-range theory positing that differences in the quality of the institutional context can moderate the strength of the relationship between individual- and board-level attributes and FF internationalization. Our comparison of U.S. FFs with FFs from Brazil and Mexico reveals that in emerging market FFs, individual-level attributes such as CEO international experience, CEO educational attainment, and CEO international education exhibit a stronger relationship with internationalization. Similarly, we find that board-level attributes such as board size and board independence are also more strongly related to internationalization in emerging market contexts. We contribute to the literature by identifying a source of variation in FF internationalization strategies based on context and by examining the relationship between a wide range of FF attributes and internationalization

Abstract:

Integrating institutional and effectuation theories, we examine the relationship between entrepreneurs' means and internationalization in an emerging market. Results indicate that some means, such as technical expertise or business network membership, transform into valuable internationalization resources despite difficult institutional conditions. Others, however, such as industry or international experience, are best deployed locally. Findings also indicate that means such as entrepreneurial experience and number of founders act as catalysts of internationalization, allowing for other means to transform into internationalization resources. We extend effectuation theory by showing how different means transform into internationalization resources and contribute to research at the intersection of institutional theory and international entrepreneurship by expanding our understanding of universally-enabling and context-binding internationalization resources. In so doing, we identify a boundary condition to international entrepreneurship theories that emphasize the role of individual resources during venture internationalization by revealing a context in which certain traits exhibit nonstandard relationships with internationalization

Abstract:

The rise of the modern corporation has disrupted the class structures of nation-states because, in the era of globalization, such reorganization now occurs across borders. Yet, has globalization been deep enough to facilitate the emergence of a transnational capitalist class (TCC) in which both class formation and consolidation processes are located in the transnational space itself? I contribute to our understanding of the TCC by contrasting the personal characteristics, life histories and capital endowments of members of the British corporate elite with and without transnational board appointments. The existence of the honours system in the UK allows us to compare individuals objectively in terms of their symbolic capital and to link this trait to embeddedness in the TCC. By studying 448 directors from the 100 largest firms in the UK in 2011, I find evidence of a TCC with a class consolidation process that is located within transnational space, but whose class formation dynamics are still tethered to national processes of elite production and reproduction

Abstract:

Systematic international diversification research depends on reliable measurements of the degree of internationalization (DOI) of the firm. In this paper, we argue that the inclusion of social markers of internationalization can contribute to the development of more robust conceptualizations of the DOI construct. Unlike traditional metrics of DOI, such as foreign sales over total sales or foreign assets over total assets, social-based metrics of internationalization can reveal less visible foreign resource interdependencies across the firm's entire value chain. By combining social-based metrics of DOI with traditional measures of internationalization, we uncovered three distinct dimensions of internationalization: a real one, composed of the firm's foreign sales and assets; an exposure one, represented by the extent and cultural dispersion of the firm's foreign subsidiaries; and a social one, represented by the extent of the firm's top managers' international experience and the number and cultural-zone dispersion of the firm's transnational board interlocks. Results from both an exploratory and a confirmatory factor analysis show that these dimensions are sufficiently distinctive to warrant theoretical and empirical partitioning. These findings have implications for the way researchers select and combine DOI metrics and underscore the importance of conducting a thorough theoretical and statistical assessment of DOI conceptualizations before proceeding with empirical research
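
As an illustrative sketch of the factor-analytic step, synthetic data built from six invented DOI indicators can be made to recover a three-factor structure analogous to the real/exposure/social split reported here; this uses scikit-learn's FactorAnalysis and is not the paper's actual data or estimation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Illustrative only: six invented indicators for 200 hypothetical firms,
# generated so that they load on three latent dimensions.
latent = rng.normal(size=(200, 3))
loadings = np.array([[1.0, 0.1, 0.0],   # foreign sales ratio      -> real
                     [0.9, 0.0, 0.1],   # foreign assets ratio     -> real
                     [0.1, 1.0, 0.0],   # n. foreign subsidiaries  -> exposure
                     [0.0, 0.9, 0.1],   # cultural dispersion      -> exposure
                     [0.0, 0.1, 1.0],   # TMT intl. experience     -> social
                     [0.1, 0.0, 0.9]])  # transnational interlocks -> social
X = latent @ loadings.T + 0.3 * rng.normal(size=(200, 6))

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))  # each row: one indicator's loadings
```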

Abstract:

Board interlocks between firms headquartered in different countries are increasing. We contribute to the understanding of this practice by investigating the transnational interlocks formed by the 100 largest British firms between 2011 and 2014. We explore the association between different attributes of a firm's internationalization process, namely performance, structural and attitudinal, and the extent of the firm's engagement in transnational interlocks. We posit that the value of transnational interlocks as a non-experiential source of knowledge will vary according to which of these three attributes becomes more prominent as the firm internationalizes. We do not find a significant relationship between the performance and structural attributes of internationalization, as measured by the firm's percentage of foreign sales and assets, respectively, and increased engagement in transnational interlocks. We do, however, find an inverted U-shaped relationship between the attitudinal attribute of internationalization, represented by the psychic dispersion of the firm's foreign operations, and the firm's number of transnational interlocks. This non-linear relationship reveals both a natural boundary for the firm's capacity to engage in transnational interlocks and a reduced willingness to engage in such ties once a certain degree of attitudinal internationalization has been reached

Abstract:

While previous studies have focused on the role of directors in the formation of transnational interlocks, this paper argues that firm strategy can also influence the development of these relationships. The purpose of this paper is to shed light on the practice of transnational interlocks by extending board interlocks theory from the national to the transnational context, and exploring aspects that are unique to the transnational level

Abstract:

A ring R is called left semihereditary (p.p.) if each finitely generated (principal) left ideal of R is projective. Left p.p. rings have been used to study left semihereditary rings in many instances (e.g., see [1] and [9]). Left semihereditary group rings were studied in [4], [5], and [7]. More generally, left semihereditary monoid rings were studied in [2], [5], [8], and [9]. In [9, Theorem 9], necessary and sufficient conditions were given for a monoid ring R[S] to be left semihereditary when S is a semilattice of torsion groups. For such a monoid S, all of the idempotents of S are central. In this paper, we consider the more general situation in which the monoid S is a semilattice of periodic semigroups, each having exactly one central idempotent element. Such semigroups have been studied in [6] and [10]. We show that these periodic semigroups must, in fact, be groups; thus we are able to characterize the semihereditary monoid rings in this case. We obtain this result as a corollary of a theorem on left p.p. rings. Before we present this theorem, we review the fundamental definitions that we will use. A semigroup S is periodic if, for each s in S, there exist positive integers m, n such that s^(m+n) = s^m. A periodic semigroup S(e) with exactly one idempotent e contains a (largest) group G(e) = {s in S(e) | es = s}. A commutative semigroup E is called a semilattice if e^2 = e for each e in E. A semilattice E has a natural partial order given by e >= f if and only if ef = f. Then a semigroup S that is the union of the S(e) over e in E is called a semilattice of semigroups if E is a semilattice and, for s in S(e) and t in S(f) (e, f in E), we have st in S(ef). Further information on these semigroups can be found in [3] or [12]. For a ring R and a monoid S, we use R[S] to denote the monoid ring; [11] is a basic reference for these rings

Resumen:

La dimensión política del proceso de reforma de salud es un factor fundamental que no sólo determina su factibilidad, sino la forma y contenido que ésta tome. De ahí que el estudio del aspecto político de las reformas de salud sea esencial en el análisis y manejo de la factibilidad política de las mismas. El presente estudio enfoca su atención sobre la capacidad del Estado para impulsar exitosamente propuestas de reforma de salud, usando como estudios de caso Colombia y México. Se concentra específicamente en aquellos elementos que buscan incrementar la factibilidad política para formular, legislar e instrumentar propuestas de cambio. Para ello, toma como variables el contexto institucional en que se desenvuelven las iniciativas de reforma; la dinámica política de su proceso, y las características y estrategias de los equipos a cargo de dirigir el cambio (equipos de cambio). Entre los principales hallazgos que aquí se presentan destacan las claras similitudes entre las estrategias políticas usadas por los grupos encargados de la reforma de salud y aquellas aplicadas por equipos tecnocráticos similares, a cargo de las reformas económicas en estos países. Se argumenta que si bien estas estrategias resultaron efectivas en la creación de nuevos actores en el sector salud –tales como organizaciones privadas de financiamiento y provisión de servicios–, no tuvieron el mismo impacto en la transformación de los viejos actores –los servicios de los ministerios de salud y de los institutos de seguridad social–, lo que ha limitado considerablemente el avance de las reformas

Abstract:

The political dimension of health reform is a fundamental aspect that influences not only a project's feasibility, but also its form and content. Therefore, the study of the political aspects involved in the health reform process is essential to determine the political feasibility of reform. Based on the case studies of Colombia and Mexico, this study concentrates on the State's capability to promote health reform projects successfully. It specifically focuses on those elements that seek to improve the political feasibility of formulating, legislating and implementing reform proposals. The relevant variables under study are: the institutional context in which the reform initiatives develop; the political dynamics of the reform process; and the characteristics and strategies of the teams in charge of leading the reforms (change teams). The similarities in the political strategies used by the teams in charge of health reform and those of similar technocratic teams in charge of economic reform stand out as one of the study's main findings. It is argued that, although these strategies were effective in bringing about the creation of new actors in the health sector, such as private organizations for the financing and provision of health services, they did not have the same impact on the transformation of the old actors (the health ministries and the social security institutes), thereby considerably limiting the scope of the reforms

Abstract:

This paper presents a case study to identify the characteristics of an information visualization tool that facilitate its learning and adoption in the context of professional practice. We identified the principal categories that explain the most important aspects of the tool's usefulness that emerged during practical use of the system, as well as the challenges for its adoption. Our results point to the value of giving greater consideration to user needs and may direct future research on the development of information visualization tools

Abstract:

Computing devices have become a primary working tool for many professional roles, among them software programmers. In order to enable more productive interaction between computers and humans for programming purposes, it is important to acquire an awareness of human attention/concentration levels. In this work we report on a controlled experiment to determine whether a low-cost BCI (Brain-Computer Interface) device is capable of classifying whether a user is fully concentrated while programming, during three typical tasks: creative (writing code), systematic (documenting the code) and debugging (improving or modifying the code to make it work). This study employs EEG (electroencephalogram) signals to measure an individual’s concentration levels. The chosen BCI device is NeuroSky’s Mindwave, due to its availability in the market and low cost. The three tasks described are performed in both physical and digital form. A statistically significant difference between the debugging and creative tasks is revealed in our experiments, in both the physical and digital tests. This is a good lead for subsequent experiments to focus on these two areas. Systematic tasks might not yield good results due to their nature
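
A minimal sketch of how logged attention readings might be compared across the three tasks, assuming the headset's 0-100 attention output has already been recorded per task; the threshold and all values below are invented for illustration.

```python
import statistics

# "Fully concentrated" above this attention level (an assumed cutoff,
# not a value from the study).
THRESHOLD = 60

# Hypothetical per-task logs of 0-100 attention readings.
readings = {
    "creative (writing code)":  [72, 80, 65, 70, 77, 69],
    "systematic (documenting)": [55, 60, 52, 58, 61, 57],
    "debugging (fixing code)":  [45, 50, 48, 42, 53, 47],
}

for task, values in readings.items():
    concentrated = sum(v > THRESHOLD for v in values) / len(values)
    print(f"{task:28s} mean={statistics.mean(values):5.1f} "
          f"concentrated {concentrated:.0%} of samples")
```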

Abstract:

Different approaches to defining how a website shows information have an impact on how users evaluate its usability. As shown in the present study, how people accomplish a search for visual content on a newspaper website is an important factor to consider when designing it. In this study, 47 participants were randomly assigned to evaluate one of two different newspaper websites and asked to perform visual and written searches. The evaluation metrics were task success and task time. The participants also made an overall evaluation of the site, answering two Likert questions and an open-ended question to measure qualitative aspects. Finally, we measured overall satisfaction with a SUS questionnaire. The results show that poor performance in the search for visual content leads to lower perceived usability; this might be a main aspect to improve when setting priorities for enhancing overall usability
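
Since the study relies on the SUS questionnaire, the standard SUS scoring rule (odd items contribute the score minus 1, even items contribute 5 minus the score, and the sum is scaled by 2.5 to a 0-100 range) can be sketched as follows; the sample responses are invented.

```python
# Standard SUS scoring (Brooke, 1996) for a 10-item, 5-point questionnaire.

def sus_score(responses):
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (an odd item)
                for i, r in enumerate(responses))
    return total * 2.5

participant = [4, 2, 5, 1, 4, 2, 4, 2, 5, 1]  # hypothetical answers
print(sus_score(participant))  # -> 85.0
```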

Abstract:

The aim of this investigation is to identify and understand the relations between people’s mental models and their performance and usability perception regarding a complex interactive system (Twitter). Our study includes thirty college students, each of whom was asked to perform a number of activities with Twitter and to draw graphical representations of their mental model of it. Participants had either no experience or at least a year of experience using Twitter. We identified three typical types of mental models used by participants to describe Twitter and found that the level of expertise had a greater impact on performance than the style of mental model defining the understanding of the system. Furthermore, we found that usability perception was also affected by the level of expertise

Abstract:

Media-sharing Web sites are facilitating modern versions of storytelling activities. This study investigates the use of photo-based narratives to support young parents who are geographically separated from their aging parents in sharing stories about their young children. The case analyzed is that of young Malaysian mothers living in the UK who communicate regularly with their families back home, share experiences of living in another country, look for parenting advice, and open opportunities for sharing the life and development of their young children. Sixteen families participated in the study by providing access to their social networking and web spaces and by taking part in exercises for creating photo stories. We identified the characteristics of the mediating system serving to establish the contact between grandparents and grandchildren, as well as the characteristics of the photo stories and the practices around sharing them

Abstract:

Advances in computer technology have made it increasingly easy for users to navigate through a virtual world. We are interested in assessing what kind of immersive virtual experience is being delivered to the users through projects that aim to share virtualized aspects of the physical world. In order to achieve this we will assess whether people’s opinions on tourist-oriented places differ between a virtual visit and one in-person. Our proposed five-dimensional model is designed to incorporate cultural factors while analyzing perceived differences. This research is intended to contribute to the understanding of the comparison between these two experiences, with the final objective of improving on the user experience

Abstract:

Purpose – An ensemble is an intermediate unit of work between action and activity in the hierarchical framework proposed by classical activity theory. Ensembles are the mid-level of activity, offering more flexibility than objects, but more purposeful structure than actions. The paper aims to introduce the notion of ensembles to understand the way object-related activities are instantiated in practice. Design/methodology/approach – The paper presents an analysis of the practices of professional information workers in two different companies using direct and systematic observation of human behavior. It also provides an analysis and discussion of the activity theory literature and how it has been applied in areas such as human-computer interaction and computer-supported collaborative work. Findings – The authors illustrate the relevance of the notion of ensembles for activity theory and suggest some benefits of this conceptualization for analyzing human work in areas such as human-computer interaction and computer-supported collaborative work. Research limitations/implications – The notion of ensembles can be useful for the development of a computing infrastructure oriented to more effectively supporting work activities. Originality/value – The paper shows that the value of the notion of ensembles is to close a conceptual gulf not adequately addressed in activity theory, and to understand the practical aspects of the instantiation of objects over time

Abstract:

The expansion of information and communication technology (ICT) infrastructure in the developing world has considerably increased opportunities for people to connect. Today, people can better maintain long-distance relationships as well as be better informed of how their family, close friends, and emotional partners are doing. As a result, many migrants use ICTs to maintain and strengthen ties to their places of origin. For many years, the most popular means of communication for migrants was the telephone since it allows communication at relatively low rates even for international calls. Now new types of ICTs for family communication such as Internet services (e.g., instant messaging) are becoming more common. Given the economic, social, and political implications of migrants being connected to their places of origin, this socio-technical phenomenon deserves attention from academia, industry, and governments. Four different forms of ICTs are studied here: 1) hometown Web sites, 2) video conferencing, 3) call forwarding services, and 4) online TV. We analyze contributions of these services to enhance migrants' communication as well as the factors for adoption

Abstract:

This research reports on a study of the interplay between multi-tasking and collaborative work. We conducted an ethnographic study in two different companies where we observed the experiences and practices of thirty-six information workers. We observed that people continually switch between different collaborative contexts throughout their day. We refer to activities that are thematically connected as working spheres. We discovered that, to multi-task and cope with the resulting fragmentation of their work, individuals constantly renew overviews of their working spheres, strategize how to manage transitions between contexts and maintain flexible foci among their different working spheres. We argue that system design to support collaborative work should include the notion that people are involved in multiple collaborations with contexts that change continually. System design must take into account these continual changes: people switch between local and global perspectives of their working spheres, have varying states of awareness of their different working spheres, and are continually managing transitions between contexts due to interruptions

Abstract:

Most current designs of information technology are based on the notion of supporting distinct tasks such as document production, email usage, and voice communication. In this paper we present empirical results that suggest that people organize their work in terms of much larger and thematically connected units of work. We present results of fieldwork observation of information workers in three different roles: analysts, software developers, and managers. We discovered that all of these types of workers experience a high level of discontinuity in the execution of their activities. People average about three minutes on a task and somewhat more than two minutes using any electronic tool or paper document before switching tasks. We introduce the concept of working spheres to explain the inherent way in which individuals conceptualize and organize their basic units of work. People worked in an average of ten different working spheres. Working spheres are also fragmented; people spend about 12 minutes in a working sphere before they switch to another. We argue that design of information technology needs to support people's continual switching between working spheres

Abstract:

This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts

Abstract:

This paper describes a method of Resource Reservation Management (RRM) mechanism that optimises bandwidth reservation in IP routers. The technique uses the original specification of the Resource Reservation Protocol (RSVP) with minimum modifications. In the proposal it is assumed that the video services are coded at multiresolution bit rates, each providing a different quality of service. The paper analyses the behaviour of the proposed RRM in an Internet network with different levels of congestion. The results show that the proposed mechanism can deliver an acceptable quality of service by dynamically adjusting the demanded bandwidth

Abstract:

In this paper, extending the ideas given by Deheuvels (1979) for the empirical copula, we first assume that we are sampling from the product copula and construct C_n^m, the d-sample copula of order m, which is an estimator of C^(m), the checkerboard approximation of order m (see Li et al. 1997). We then study the distribution of the number of observations in each of the boxes generated by the regular partition of order m of [0,1]², and give the corresponding moments and correlations. We also prove the weak convergence of the sample process to a centered Gaussian process. In fact, we use the known weak-convergence results for the empirical process to prove the weak convergence of the sample process: we first show that the sample copula can be written as a linear functional of the empirical copula, then observe that this functional is Hadamard differentiable, and therefore apply the delta method to obtain the weak convergence of the sample process. Finally, we perform several simulations of the sample process at a given point of [0,1]² to study the sample size needed to approach the centered Gaussian limit
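A minimal sketch (ours, not the paper's code) of the box counts the abstract refers to: it simulates from the product copula and tallies the number of observations in each box of the regular m x m partition of [0,1]².

```python
import numpy as np

# Simulate from the product copula (independent uniforms) and count the
# observations falling in each box of the regular m x m partition of [0,1]^2.
rng = np.random.default_rng(0)
n, m = 1000, 4                       # sample size and order of the partition
u = rng.uniform(size=(n, 2))         # independence = product copula
idx = np.minimum((u * m).astype(int), m - 1)   # box index per coordinate
counts = np.zeros((m, m), dtype=int)
np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
print(counts)                        # each box has expectation n / m**2
```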

Resumen:

Este trabajo propone un nuevo algoritmo basado en Optimización por Colonia de Hormigas (OCH) para la determinación de los niveles de inventarios de seguridad en una red logística cuando se utiliza el modelo de servicio garantizado. La red logística se modela como una serie de etapas de aprovisionamiento, de manufactura y de distribución. El objetivo es determinar el costo mínimo de colocar inventario de seguridad en cada una de las etapas, si éstas deben atender la demanda de las etapas sucesoras en un tiempo garantizado. El algoritmo propuesto es aplicado a una serie de instancias y se comparan los resultados con los obtenidos por el método estándar de programación dinámica. El algoritmo propuesto resuelve redes logísticas con 200 etapas en 120 segundos en promedio. Además, se hace un análisis de correlación entre los diferentes valores de los parámetros del algoritmo y el costo total del inventario de seguridad

Abstract:

This paper proposes a new algorithm based on Ant Colony Optimization (ACO) to determine safety stock levels in a logistics network when the inventory is managed under the guaranteed-service time inventory model. The logistics network is modeled as a series of supply, manufacturing and delivery stages. The aim is to determine the minimum cost of placing safety stock in each stage, given that each stage must serve the demand of its successor stages within a guaranteed service time. The proposed algorithm is applied to a series of instances, and the results are compared with those computed using the standard dynamic programming method. The proposed algorithm solves 200-stage logistics networks in about 120 seconds on average. Also, a correlation analysis is carried out between the different values of the algorithm parameters and the total safety stock cost
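To make the optimization target concrete, here is a hedged sketch of the kind of guaranteed-service objective such an algorithm would minimize for a serial chain; the square-root safety-stock formula and all names are standard textbook assumptions, not taken from the paper itself.

```python
import math

# Hypothetical serial chain: each stage quotes an outbound service time to
# its successor and is quoted an inbound service time by its predecessor.
def safety_stock_cost(s_out, s_in, proc_time, hold_cost, z=1.65, sigma=10.0):
    total = 0.0
    for so, si, t, h in zip(s_out, s_in, proc_time, hold_cost):
        net = si + t - so            # net replenishment time of the stage
        if net < 0:
            return math.inf          # the quoted guarantee is infeasible
        total += h * z * sigma * math.sqrt(net)   # textbook safety stock cost
    return total

# Three stages (supply -> manufacture -> delivery); s_in chains from s_out.
print(safety_stock_cost(s_out=[2, 1, 0], s_in=[0, 2, 1],
                        proc_time=[3, 2, 1], hold_cost=[1.0, 2.0, 4.0]))
```

An ACO or dynamic programming search would then look for the service-time vector that minimizes this cost over the network.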

Abstract:

We propose the construction of conditional growth densities under stressed factor scenarios to assess the level of exposure of an economy to small-probability but potentially catastrophic economic and/or financial scenarios, which can be either domestic or international. The choice of severe yet plausible stress scenarios is based on the joint probability distribution of the underlying factors driving growth, which are extracted with a multilevel dynamic factor model (DFM) from a wide set of domestic/worldwide and/or macroeconomic/financial variables. Altogether, we provide a risk management tool that allows for a complete visualization of the dynamics of the growth densities under average scenarios and extreme scenarios. We calculate growth-in-stress (GiS) measures, defined as the 5% quantile of the stressed growth densities, and show that GiS is a useful and complementary tool to growth-at-risk (GaR) when policymakers wish to carry out a multidimensional scenario analysis. The unprecedented economic shock brought by the COVID-19 pandemic provides a natural environment to assess the vulnerability of US growth with the proposed methodology
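As a toy illustration of the GiS measure defined above (the 5% quantile of a stressed growth density), the following sketch uses simulated draws standing in for the paper's factor-model output:

```python
import numpy as np

# Simulated draws standing in for the model-based conditional densities.
rng = np.random.default_rng(3)
baseline = rng.normal(loc=2.0, scale=1.0, size=100_000)  # average scenario
stressed = rng.normal(loc=0.5, scale=1.8, size=100_000)  # stress scenario
print(f"5% quantile, baseline (GaR-style): {np.quantile(baseline, 0.05):.2f}")
print(f"5% quantile, stressed (GiS):       {np.quantile(stressed, 0.05):.2f}")
```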

Abstract:

We are concerned with deterministic and stochastic nonstationary discrete-time optimal control problems over an infinite horizon. We show, using Gâteaux differentials, that the so-called Euler equation and a transversality condition are necessary conditions for optimality. In particular, the transversality condition is obtained in a more general form and under milder hypotheses than in previous works. Sufficient conditions are also provided. We also find closed-form solutions to several (discounted) stationary and nonstationary control problems
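For readers unfamiliar with these objects, a standard discounted special case (ours, for illustration; the paper works in a more general nonstationary setting) reads:

```latex
% Standard discounted special case, for illustration only:
% maximize  \sum_{t=0}^{\infty} \beta^t u(c_t)  subject to
% k_{t+1} = f(k_t) - c_t,  k_0 given,  0 < \beta < 1.
\begin{align*}
  \text{Euler equation:}\quad & u'(c_t) = \beta\, u'(c_{t+1})\, f'(k_{t+1}),
    \qquad t = 0, 1, \ldots \\
  \text{Transversality:}\quad & \lim_{t \to \infty} \beta^{t} u'(c_t)\, k_{t+1} = 0.
\end{align*}
```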

Abstract:

The question motivating this research is the following: why did Adolfo López Mateos, as Secretary of Labor, perform so brilliantly in conciliating and resolving many of the labor conflicts of President Adolfo Ruiz Cortines's administration (1952-1958), and yet, upon reaching the Presidency of the Republic (1958-1964), pursue a carrot-and-stick ("pan y palo") policy, in contrast to his record as Secretary of Labor during the previous administration? The answer to this question is resolved as follows: "a president or a high-ranking bureaucrat, through his constitutional or metaconstitutional powers, can create a bureaucratic agency or modify its course of action; through it he can advance his interests, both in terms of his political agenda and electorally"

Resumen:

Este ensayo desarrolla una explicación de clase y cultura sobre el persistente resurgimiento de la derecha radical en Francia. Su principal afirmación es que la derecha radical es mejor entendida como una tradición política continua cuyo apelativo puede ser rastreado en las modalidades y consecuencias de la modernización económica y política del país desde mediados del siglo XIX. Se analizan los valores económicos y políticos específicos propios de los miembros de esta categoría social para ayudar a explicar su prolongada atracción hacia el excluyente y autoritario discurso y programas de la derecha radical francesa

Abstract:

This paper develops a class-cultural explanation for the persistent resurgence of the Radical Right in France. Its principal claim is that the Radical Right is best understood as a continuous political tradition whose appeal can be traced to the modalities and consequences of the country’s economic and political modernization since the mid nineteenth century. The paper analyzes the specific economic and political values which members of this social category evolved in order to help explain their longstanding attraction to the exclusionary and authoritarian discourse and program of the French Radical Right

Abstract:

This article explains the victory of the Front National (FN) in the May 2014 European elections in France. Taking issue with standard academic accounts that conceive of the latter as 'second-order' elections, it argues that the FN won by harnessing voters' growing anxiety about European integration as an electoral issue. First, the article contends that, on the backdrop of worsening unemployment and social crisis, Europe assumed unprecedented salience in both national and European elections. In turn, it argues that by staking out a Europhobe position in contrast to the mainstream parties and the radical left, the FN claimed effective ‘ownership’ over the European issue, winning the bulk of the Eurosceptic vote to top the electoral field

Abstract:

The failure of the Los Cabos summit to satisfactorily address the European sovereign debt crisis and ominous world economic outlook, let alone agree on concrete measures to improve the oversight and functioning of the global economy, appears to confirm the diminishing effectiveness and relevance of the G20 as an organ of international governance since its inception in December 2008. While few accomplishments were achieved in the area of global governance during the Mexican presidency, acute collective action problems, made worse by the present economic crisis, paralysed the G20 in the lead-up to and during the Los Cabos summit. These collective action problems and the ensuing failure of global governance are attributable to the absence of leadership evident at both the global and European levels, which in turn testifies to the excessive dispersion of state economic and political power within the international system

Abstract:

This article seeks to account for the emergence of radical populist parties in France and Germany during the final two decades of the twentieth century and the first decade of the twenty-first. These parties, of the Far Right in France and Far Left in Germany, have attracted the support of economically and socially vulnerable groups (industrial workers and certain service-sector strata) who have broken with their traditional corporative and partisan attachments and sought out alternative bases of social and political identification. Contrary to classical liberal analyses that attribute rising unemployment and declining living standards among these groups to these countries' failure to reform their economies, or varieties-of-capitalism arguments which claim that the institutional specificities of the post-war French and German economies insulated them from the impacts of neoliberal modernization, the article posits that this outcome is in fact attributable to the far-reaching economic liberalization which they experienced since the 1980s in France and the 1990s in Germany. Specifically, it is argued that this process of liberalization dissolved the Fordist social contract that had ensured the inclusion of these class groups in the postwar capitalist order, triggering a structural and cultural crisis which fueled their political radicalization

Abstract:

We consider the following version of the standard problem of empirical estimates in stochastic optimization. We assume that the underlying random vectors are independent and not necessarily identically distributed but that they satisfy a “slow variation” condition in the sense of the definition given in this paper. We show that these assumptions along with the usual restrictions (boundedness and equicontinuity) on a class of functions allow one to use the empirical mean method to obtain a consistent sequence of estimates of infimums of the functional to be minimized. Also, we provide certain estimates of the rate of convergence

Abstract:

Within the CHI community there has been sustained interest in interruptions and multitasking behaviour. Research in the area falls into two broad categories: the micro world of perception and cognition; and the macro world of organisations, systems and long-term planning. Although both kinds of research have generated insights into behaviour, the data generated by the two kinds of research have been effectively incommensurable. Designing safer and more efficient interactions in interrupted and multitasking environments requires that researchers in the area attempt to bridge the gap between these worlds. This SIG aims to stimulate discussion of the tools and methods we need as a community in order to further our understanding of interruptions and multitasking

Abstract:

The standard model of knowledge, (Ω, P), consists of a state space, Ω, and a possibility correspondence, P. Usually, it is assumed that P satisfies all knowledge axioms (Truth Axiom, Positive Introspection Axiom, and Negative Introspection Axiom). Violating at least one of these axioms is defined as epistemic bounded rationality (EBR). If this happens, a researcher may try to look for another model, (Ω∗, P∗), which generates the initial model, (Ω, P), while satisfying all knowledge axioms. Rationalizing EBR means that the researcher finds such a model. I determine when rationalization of EBR is possible. I also investigate when a model, (Ω∗, P∗), which satisfies all knowledge axioms, generates a model, (Ω, P), which satisfies these axioms as well
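A two-state example (ours, not the paper's) of a possibility correspondence exhibiting EBR by violating Truth Axiom:

```latex
% Two-state example violating Truth Axiom (which demands
% \omega \in P(\omega) for every state \omega):
\[
  \Omega = \{\omega_1, \omega_2\}, \qquad
  P(\omega_1) = \{\omega_2\}, \qquad
  P(\omega_2) = \{\omega_2\}.
\]
% Here \omega_1 \notin P(\omega_1), so (\Omega, P) exhibits EBR; the question
% is when some (\Omega^*, P^*) satisfying all axioms generates such a model.
```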

Abstract:

Type space is of fundamental importance in epistemic game theory. This paper shows how to build type space if players approach the game in a way advocated by Bernheim's justification procedure. If an agent fixes a strategy profile of her opponents and ponders which of their beliefs about her set of strategies make this profile optimal, such an analysis is represented by kernels and yields disintegrable beliefs. Our construction requires that the underlying space be Polish

Abstract:

This article presents the design and construction of an omnidirectional mobile robot for collaborative applications. The kinematic model that describes the robot's omnidirectional motion is implemented, together with a control algorithm, on board the robot on a MOJO development board, which features a Spartan 6 XC6SLX9 FPGA and an ATmega32U4 microcontroller, and through an external program accessed from the Robot Operating System (ROS). The experiments carried out illustrate the good performance of the designed system

Resumen:

En este estudio se reflexiona sobre recientes cambios observados en las relaciones de China con los países vecinos con los que mantiene disputas territoriales terrestres, India y Bután. Posteriormente, se caracterizará la actitud actual de China ante sus vecinos marítimos. En tercer lugar, se relacionan las actuales políticas agresivas del gobierno central contra Hong Kong y Taiwán, y la sistemática política de control de la minoría musulmana en Xinjiang. El análisis resalta las nuevas herramientas diplomáticas de China para enfrentar las amenazas a su seguridad nacional y regional. El estudio concluye con proyecciones sobre los escenarios regionales e internacionales del poderío chino

Abstract:

This study presents reflections on recent changes observed in China's relations with the neighboring countries with which it has land territorial disputes, India and Bhutan. Subsequently, China's current attitude towards its maritime neighbors is characterized. Thirdly, the study links the central government's current aggressive policies against Hong Kong and Taiwan with the systematic policy of control of the Muslim minority in Xinjiang. The analysis highlights China's new diplomatic tools to deal with threats to its national and regional security. The study concludes with projections on regional and international scenarios for Chinese power

Abstract:

As part of the restructuring of state organizations announced in March 2018, it is known that the China Coast Guard (CCG), previously controlled by the State Oceanic Administration, is coming under the administration of the People's Armed Police (PAP) of the Central Military Commission (CMC). As a paradigmatic shift from joint civilian-military control (State Council-CMC) to a purely military one, the reorganization of the CCG, only five years after the latest reshuffling, seems to reveal the party's increasing control over the military, as outlined at the September 2017 CCP Central Committee, and also the intention of the Chinese central government to provide the CCG with more flexibility and authority to act decisively in disputed waters in the East and South China Seas if needed. This article inquires into the causes, logic, and likely regional consequences of such a decision. Amid the upgrading of insular features in the Spratlys, the deployment of bombers in the Paracels, and the overall modernization of China's naval capabilities, the article also explores plausible developments in which the PAP-led CCG, irregular maritime militias, and People's Liberation Army Navy forces might coordinate efforts more effectively to safeguard self-proclaimed rights in littoral and blue-water areas in dispute

Resumen:

Mientras que ciertos elementos sentaron las bases de la clase media japonesa en la posguerra, su consolidación y los cambios sufridos en la sociedad desde la década de 1990 dieron paso a una clase media adelgazada y precaria, que ha resentido la desigualdad económica, las alteraciones del patrón laboral y las tendencias marcadas por las reformas estructurales

Abstract:

While certain elements laid the foundations of the Japanese middle class in the postwar period, its consolidation and the changes undergone in society since the 1990s gave way to a thinned and precarious middle class, which has suffered from economic inequality, alterations in the labor pattern and the tendencies marked by structural reforms

Abstract:

The research inquires into New Delhi's current approaches to regional security in Maritime Asia in general, and in the South China Sea in particular, from the perspective of India's Act East Policy operating in the East Asian security supercomplex. Shaped by theoretical insights from defensive realism and security studies, and based on an empirical analysis of India's policy decisions from 2014 to the present, the research evaluates the reach and limitations of India's diplomatic and naval strategic policies toward key Southeast Asian and extra-regional states, mainly Vietnam, the United States and Japan. While identifying the need to update India's current naval strategy to better protect freedom of navigation in the South China Sea, the analysis finds relevant incentives for closer India-China cooperative engagement, both to improve the security architecture of this maritime region and for the sake of India's own security at large

Resumen:

En este artículo se expone la evolución de la relación bilateral y la importancia de Japón para México a la luz del reciente acercamiento económico; se identifican los retos importantes que enfrenta esta relación y se ofrecen propuestas a futuro

Abstract:

The essay outlines the evolution of the bilateral relationship and Japan's importance to Mexico amid recent economic developments; identifies relevant challenges in the bilateral realm; and offers proposals for improvement

Resumen:

Con la reciente inestabilidad en aguas del noreste y sudeste de Asia se ha reiniciado el debate sobre la capacidad y voluntad de China para preservar la paz y estabilidad regional con países vecinos. En el conflicto por las islas Spratly se observa desde hace pocos años un patrón cada vez más obvio de internacionalización con involucramiento de actores extrarregionales, en particular Estados Unidos. En este ensayo se presenta un análisis actualizado de este diferendo de larga historia que se ha agravado aceleradamente en los últimos años, considerando los motivos económicos y geopolíticos actuales más relevantes que han contribuido a la presente crisis entre China y algunos países, en particular Vietnam y Filipinas. Asimismo, se identifican las características más visibles de la entrada de Estados Unidos a este conflicto y sus implicaciones, seguido por una evaluación sobre si este diferendo, con sus nuevas aristas internacionales, ha llegado finalmente a un punto de ebullición. El ensayo finaliza con una serie de reflexiones sobre posibles caminos a seguir a futuro en este diferendo, tomando en cuenta las diversas variables analizadas

Abstract:

Recent instability in Northeast and Southeast Asian waters has reignited the debate over China's capacity and intention to preserve peace and security with neighboring countries. In the Spratly Islands territorial conflict, a pattern of internationalization (mainly more direct US involvement) has become increasingly manifest during the last five years. This essay offers an updated analysis of this imbroglio, a conflict with a long history that has nonetheless deteriorated fast in recent years, taking into account the most relevant economic and geopolitical causes contributing to the current crisis between China and some countries, mainly Vietnam and the Philippines. Also, the most visible features and implications of US involvement in this maritime region are identified, followed by an evaluation of whether this conflict, with its international implications, has reached a breaking point. The essay concludes with some thoughts on future scenarios for the maritime region

Abstract:

In 1946, the Philippines raised claims in the South China Sea over an area already known as the Spratly Islands. This claim advanced through peculiar stages, starting when Thomas Cloma allegedly discovered islands in 1946, later named Freedomland, and maturing to some extent in 1978 with the government's claim over the so-called Kalayaan Island Group. Viewing the claim as an oceanic expansion of the Philippines' frontiers, this paper reviews its basis, first as regards the nature of Cloma's activities, and secondly as regards the measures the Philippine government took in reaction to Cloma's claimed discovery of an area already known in Western cartography as the Spratlys. Ultimately, what is the nature of the link between the official 1978 Kalayaan Island Group claim and Cloma's private 1956 one?

Abstract:

This paper concerns the "Chinese ocean policy" with regard to the South China Sea in a transitional period, namely the late 1940s and early 1950s. Dealing with territorial conflict over the Spratly, Paracel, and Pratas Islands and Macclesfield Bank, China, as a coastal state, actively pushed what it saw as its core interests in its Southern Sea and tried to defend them as best it could. This paper has two aims. One is to describe the origin of the PRC's (and the ROC's) current policy vis-à-vis the Paracels and Spratlys. From both a political and a military perspective, this policy has its origin in the post-war period of 1946-1952, including the famous U-shaped dotted line enclosing all four claimed archipelagoes, denoting China's "traditional maritime boundary line" and her "historical waters." A review of the period 1946-1952 is necessary to properly understand the modern history and historiography of the Chinese claim. The other aim is to highlight the economic aspects of the claim, that is, a policy of maritime economic development as it was developed, not at a central-planning, decision-making level, but rather at lower governmental levels. This study aims to give sufficient emphasis to the economic dimension of China's policy in the immediate post-war period

Abstract:

A study of China's defense of its "maritime frontier" in the period from 1902 to 1937, including the establishment of self-recognized sovereignty rights over the South China Sea archipelagos, provides a good illustration of how the country has dealt with relevant issues of international politics during the twentieth century. The article intends to show that throughout the period between the fall of the Qing dynasty, the consolidation of power of Chiang Kai-shek's Nationalist government, and up to just before the Pacific War, the idea of a maritime frontier, as applied to the South China Sea, was deeply subordinated to the political needs arising from the power struggle within China and to the precarious position of the country vis-à-vis world powers. Therefore, the protection of rights over the Spratly and Paracel Islands was not a priority of the Chinese government's foreign policy agenda during the first three decades of the republic. However, in contrast to the probable involvement of Sun Yat-sen in a scheme with Japanese nationals in the early 1920s, intended to yield rights for economic exploitation in the Southern China littorals and islands, the Nanjing government's defense of the maritime frontier in Guangdong province since 1928 marked the first precedent in China's self-definition as a modern oceanic nation-state pursuing her own maritime-territorial rights against world powers that had interests in the region

Resumen:

El propósito de este artículo es presentar una interpretación del ensayo de Kant Sobre un presunto derecho a mentir por filantropía, basada en los conceptos fundamentales de su propia filosofía del derecho. Por medio de esta lectura, hemos de explicar por qué muchos comentadores no han apreciado lo que verdaderamente estaba discutiéndose en su debate con el jurista francés Benjamin Constant. A nuestro entender, los dos puntos principales que Kant trata de defender en este ensayo son la naturaleza de las declaraciones jurídicas y la imposibilidad de un deber o una obligación de mentir en un contexto legal. Las tesis que trataremos de defender son, por un lado, que la validez de la posición kantiana respecto a ambos temas no depende, en sentido alguno, de su equivocado tratamiento del polémico ejemplo que discute, y, por otro lado, que ambos tópicos son de gran relevancia en aras de procurar un conocimiento adecuado de cómo las instituciones jurídicas deben impartir justicia

Abstract:

The aim of this article is to present an interpretation of Kant's essay On a Supposed Right to Lie for Philanthropy grounded in the fundamental concepts of his own philosophy of law. By means of this reading, we will explain why many commentators have failed to see what was at stake in the debate between Kant and the French jurist Benjamin Constant. In our understanding, the two main points that Kant tries to defend in this essay are the nature of juridical declarations and the impossibility of a duty or an obligation to lie in a legal context. The theses that we will try to defend are, on the one hand, that the validity of the Kantian position regarding both subjects does not depend, in any sense whatsoever, on his flawed treatment of the polemical example that he discusses, and, on the other hand, that both topics are of great relevance for an adequate understanding of how juridical institutions should impart justice

Abstract:

In many contemporary democracies, political polarization increasingly involves deep-seated intolerance of opposing partisans. The decades-old contact hypothesis suggests that cross-partisan interactions might reduce intolerance if individuals interact with equal social status. Here we test this idea by implementing collaborative contact between 1,227 pairs of citizens (2,454 individuals) with opposing partisan sympathies in Mexico, using the online medium to credibly randomize participants' relative social status within the interaction. Interacting under both equal and unequal status enhanced tolerant behaviour immediately after contact; however, 3 weeks later, only the salutary effects of equal contact endured. These results demonstrate that a simple, scalable intervention that puts people on equal footing can reduce partisan polarization and make online contact into a prosocial force

Resumen:

La entrega de bienes y servicios por partidos políticos en campaña electoral es endémica en la nueva democracia mexicana y esta práctica parece estar aumentando desde 2000. A partir de información recopilada en una base de datos tipo panel de ciudadanos durante la campaña electoral de 2018, ofrecemos el estudio más detallado hasta ahora disponible sobre los intentos de compra de voto en México. Tales esfuerzos fueron practicados por casi todos los partidos, involucraron a millones de ciudadanos, incluyeron una variedad de ofertas materiales e intentaron inducir a los votantes a alterar su comportamiento electoral de innumerables maneras. No obstante, la evidencia descriptiva sugiere que el cumplimiento de las metas de las maquinarias partidistas puede haber sido bajo porque muchos beneficiarios tenían una comprensión limitada de lo que se les pedía que hicieran y no temían las represalias del partido comprador de votos. Además, la evidencia circunstancial sugiere que los esfuerzos de compra de votos fueron insuficientes para anular la ventaja del candidato ganador en las elecciones presidenciales

Abstract:

Election-season handouts of goods and services by political parties are endemic in Mexico's new democracy, and the practice appears to be increasing since 2000. Using information from a 2018 election-season panel data set of ordinary citizens, we provide the most detailed examination yet available of vote-buying attempts in Mexico. Such efforts were practiced by nearly all parties, involved millions of citizens, included a variety of material offers, and attempted to induce voters to alter their electoral behavior in myriad ways. Nevertheless, descriptive evidence implies that compliance with political machines' wishes may have been low because many recipients had a muddled understanding of what they were asked to do and did not fear retribution from the vote buying party. In addition, circumstantial evidence suggests that vote-buying efforts were insufficient to overturn the winning candidate's advantage in the presidential election

Abstract:

Mentoring is increasingly studied by researchers on account of its professional and personal impact on mentees. This contribution has two main objectives: first, to empirically validate the benefits for the mentor and to test links between mentoring activities and benefits through a multidimensional analysis; second, to incorporate into the analysis two variables structuring the relationship, namely the formal vs informal nature of the mentoring relationship and the gender composition of the dyad. The paper aims to discuss these issues

Abstract:

Constant power loads (CPLs) in power systems have a destabilizing effect that gives rise to significant oscillations or to network collapse, motivating the development of new methods to analyse their effect in AC and DC power systems. A sine qua non condition for this analysis is the availability of a suitable mathematical model for the CPL. In the case of DC systems, power is simply the product of voltage and current, hence a CPL corresponds to a first-third quadrant hyperbola in the load's voltage-current plane. The same approach is applicable to balanced three-phase systems that, after a rotation of the coordinates to the synchronous frame, can be treated as a DC system. Modelling CPLs for single-phase (or unbalanced poly-phase) AC systems, on the other hand, is a largely unexplored area, because in AC systems (active and reactive) power involves integrating the product of the voltage and current signals over a finite window. In this paper we propose a simple dynamic model of a CPL that is suitable for the analysis of single-phase AC systems. We give conditions on the tuning gains of the model that guarantee that the CPL behaviour is effectively captured
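As background (standard definitions, not the paper's proposed dynamic model), the DC hyperbola and the windowed AC power constraint that makes the single-phase case harder are:

```latex
% Standard load descriptions, not the paper's proposed dynamic model:
\begin{align*}
  \text{DC CPL:}\quad & i(t) = \frac{P}{v(t)}, \qquad v(t) \neq 0
    \quad \text{(a first-third quadrant hyperbola)},\\[2pt]
  \text{single-phase AC:}\quad &
    \frac{1}{T}\int_{t-T}^{t} v(\tau)\, i(\tau)\, d\tau = P,
    \qquad T = \text{one line period}.
\end{align*}
```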

Resumen:

Desde el siglo XIX la especialización y exportación de productos básicos en la América Latina no ha sido incompatible con la creación de una base fabril, e incluso los caminos para la industrialización dependieron del tipo de infraestructura exigida por la actividad primario-exportadora. En ello el ferrocarril fue clave no sólo como transportador de productos básicos sino también como medio de transferencia tecnológica y agente industrializador, porque los talleres ferroviarios hacia 1900 constituían las bases de industrias de ingeniería en la América Latina, lo cual es el foco de atención del presente estudio: la contribución del ferrocarril a la creación de una industria de bienes de capital, por medio de la producción de carros de carga, coches de pasajeros y locomotoras en Chile y México entre 1860 y 1950. El estudio detecta la producción de equipos rodantes en el interior de los talleres de los ferrocarriles y el tránsito hacia fábricas independientes, cuestionándose las evidencias hasta ahora disponibles que indicaban que en el largo plazo la producción de locomotoras, carros, coches, refacciones y accesorios ferroviarios fue mínima dentro de la producción industrial latinoamericana, conclusión que no consideraba que todo ferrocarril para su operación diaria contiene, en sí mismo, una capacidad industrial productora de bienes metálicos que en gran medida no entraban al mercado ni quedaron registrados en la estadística histórica.

Abstract:

Since the 19th century, the specialization in and export of basic products in Latin America has not been incompatible with the creation of a manufacturing industry; moreover, the path towards industrialization depended on the kind of infrastructure demanded by primary goods-exporting activity. In such an effort, the railroad was crucial, not only as a means of transporting basic products, but also as a means of transferring technology and as an industrializing agent, the reason being that railroad workshops constituted the basis of engineering industries in Latin America by 1900. That is precisely the central issue in the present study: the contribution of the railroad to the creation of a capital goods industry, through the production of cargo cars, passenger wagons and locomotives in Chile and Mexico between 1860 and 1950. The study traces the production of rolling stock inside the railroad workshops and its transition to independent factories, and questions the evidence available until now, which indicated that, in the long run, the production of locomotives, cars, wagons, parts and accessories was minimal within Latin American industrial production; that conclusion failed to consider that every railroad contains in itself an industrial capacity for producing the metallic goods needed for its everyday operation, goods which largely did not enter the market and were not registered in historical statistics.

Abstract:

Most of the tools for the construction of Knowledge Based Systems (KBSs) offer a reasoning mechanism within the logical scheme, some of them combining rules with objects. From the late 1970s to the early 1980s, cases began to be used as an alternative way of representing and storing the experience and knowledge of organizations. This is how Case-Based Reasoning (CBR) systems originated: systems capable of solving problems by using the solutions of previously solved problems stored in the Case Memory (CM). In this article, we introduce RBCShell, a tool for constructing KBSs with CBR. We also present an application developed by means of RBCShell.
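RBCShell's internals are not reproduced here, so the following is only a minimal sketch of the retrieve-and-reuse step at the heart of any CBR system; the case attributes and similarity weights are illustrative assumptions.

```python
# Toy case memory: each case pairs a problem description with a solution.
case_memory = [
    {"problem": {"symptom": "no_boot", "beep": "long"},  "solution": "reseat RAM"},
    {"problem": {"symptom": "no_boot", "beep": "none"},  "solution": "check PSU"},
    {"problem": {"symptom": "overheat", "beep": "none"}, "solution": "replace fan"},
]

def retrieve(case_memory, query, weights):
    """Return the stored case whose problem best matches the query."""
    def similarity(case):
        return sum(w * (case["problem"][f] == query[f])
                   for f, w in weights.items())
    return max(case_memory, key=similarity)

query = {"symptom": "no_boot", "beep": "long"}
best = retrieve(case_memory, query, weights={"symptom": 2.0, "beep": 1.0})
print(best["solution"])              # reuse the closest past solution
```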

Abstract:

In this paper we present a simple procedure to design fractional PDμ controllers for Single-Input Single-Output Linear Time-Invariant (SISO-LTI) systems with constant time delay. More precisely, based on a geometric approach, we present a methodology not only to design a stabilizing controller, but also to provide practical guidelines for designing non-fragile PDμ controllers. Finally, in order to illustrate the simplicity of the proposed approach, as well as the efficiency of the PDμ controller in a realistic scenario, we consider an experimental setup consisting of controlling a teleoperated system using two Phantom Omni haptic devices
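In standard notation (the paper's geometric design conditions are not reproduced here), the controller and the delayed closed-loop characteristic function have the form:

```latex
% Controller structure and closed-loop characteristic function for a plant
% G(s) = N(s) e^{-\tau s} / D(s) with constant delay \tau (standard form):
\begin{align*}
  C(s) &= k_p + k_d\, s^{\mu}, \qquad 0 < \mu < 1,\\
  \Delta(s) &= D(s) + \left(k_p + k_d\, s^{\mu}\right) N(s)\, e^{-\tau s}.
\end{align*}
% Stabilizing pairs (k_p, k_d) keep every root of \Delta in the open left
% half-plane; non-fragility asks that stability survive small gain perturbations.
```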

Abstract:

A new measure of dispersion is presented here that generalizes entropy for positive data. It is intrinsically linked to a measure of central tendency and is determined by the data through a power transformation that best symmetrizes the observations
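The paper's exact estimator is not reproduced here; the following sketch only illustrates its two named ingredients, choosing the power transformation that best symmetrizes positive data (a Box-Cox-type search is assumed) and then measuring dispersion on that scale.

```python
import numpy as np

def skewness(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def box_cox(x, lam):
    return np.log(x) if abs(lam) < 1e-12 else (x ** lam - 1.0) / lam

def symmetrizing_power(x, grid=np.linspace(-2.0, 2.0, 81)):
    # Pick the power whose transformed data are closest to symmetric.
    return min(grid, key=lambda lam: abs(skewness(box_cox(x, lam))))

rng = np.random.default_rng(4)
data = rng.lognormal(mean=0.0, sigma=0.8, size=5000)   # positive, right-skewed
lam = symmetrizing_power(data)
z = box_cox(data, lam)
print(f"power chosen: {lam:.2f}; dispersion on that scale: {z.std():.3f}")
```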

Abstract:

This article proposes a simple statistical approach to combine nighttime light data with official national income growth figures. The suggested procedure arises from a signal-plus-noise model for official growth along with a constant elasticity relation between observed night lights and income. The methodology implemented in this paper differs from the approach based on panel data for several countries at once that uses World Bank ratings of income data quality for the countries under study to produce an estimate of true economic growth. The new approach: (a) leads to a relatively simple and robust statistical method based only on time series data pertaining to the country under study and (b) does not require the use of quality ratings of official income statistics. For illustrative purposes, some empirical applications are made for Mexico, China and Chile. The results show that during the period of study there was underestimation of economic growth for both Mexico and Chile, while official figures of China overestimated true economic growth
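A hedged sketch of the constant-elasticity relation described above, lights_t = c · income_t^β · error_t, which becomes linear after taking logs; the data here are simulated, not the paper's.

```python
import numpy as np

# Simulated log income and log lights linked by a constant elasticity (1.2).
rng = np.random.default_rng(1)
T = 40
log_income = np.cumsum(0.03 + 0.01 * rng.standard_normal(T))
log_lights = 0.5 + 1.2 * log_income + 0.05 * rng.standard_normal(T)

# OLS of log lights on log income recovers the elasticity.
X = np.column_stack([np.ones(T), log_income])
coef = np.linalg.lstsq(X, log_lights, rcond=None)[0]
print(f"estimated elasticity: {coef[1]:.2f}")   # close to the true 1.2
```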

Resumen:

En este trabajo se analiza la adecuación del método de estimación de los indicadores coincidente y adelantado, en el marco de análisis actual del Sistema de Indicadores Compuestos Coincidente y Adelantado del INEGI, y se asigna incertidumbre a los ciclos estimados para que sea más clara y objetiva la identificación de sus fases. La técnica se complementa con el uso de un modelo de factores dinámicos que apoya la propuesta que se hace. Como resultado de la investigación, se recomienda calcular los indicadores compuestos con un filtro diferente al que se usa en el INEGI, incluir en el indicador adelantado una variable adicional para mejorar su capacidad de adelanto, utilizar bandas de tolerancia para asignar incertidumbre a la estimación de los ciclos y regularizar el periodo de actualización del Sistema de Indicadores Compuestos para mantenerlo vigente al paso del tiempo

Abstract:

We analyze the adequacy of the estimation method of the coincident and leading indicators within the context of INEGI's current System of Composite Coincident and Leading Indicators. We propose a method to assign uncertainty to the cycle estimates in order to identify their phases more clearly and objectively. This work is complemented with the application of a Dynamic Factor Model whose results back up our proposal. As a result of this study, we recommend calculating the indicators with the aid of a filter different from the one currently in use at INEGI; including a new component variable in the leading indicator to improve its leading ability; employing the tolerance bands derived here to convey the uncertainty of the cycle estimates; and regularizing the revision span of the System of Composite Indicators to keep it up to date over time

Abstract:

INEGI maintains the System of Composite Coincident and Leading Indicators as a support tool for timely decision making by users of macroeconomic information. The current System was changed substantially in 2010 with the adoption of the growth-cycle definition, instead of the classical-cycle one, which required applying a method for estimating and cancelling trends appropriate for Mexican data. The System was updated in 2014, when some variables of the corresponding composite indices were changed. Even so, the System must be updated regularly to remain valid and useful in the changing environment of the Mexican economy. The present study focuses, first, on analyzing the adequacy of the method used to estimate the coincident and leading indicators, without departing from the analytical framework currently used at INEGI. Secondly, the statistical methodology employed is exploited to assign uncertainty to the cycle estimates, so that the identification of their phases becomes clearer and more objective. Finally, the technique in use at INEGI is complemented with another statistical tool, based on a Dynamic Factor Model, whose theoretical-statistical foundation is very solid and which is applicable to the case of the Mexican economy. As a result of this work, we propose: calculating the composite indicators with a filter different from the one currently used at INEGI; including an additional variable in the leading indicator to improve its leading ability; using tolerance bands to assign uncertainty to the cycle estimates; and regularizing the updating period of the System of Composite Indicators to keep it current over time

Resumen:

La intención de este trabajo es brindar al usuario de series de tiempo económicas ajustadas por estacionalidad las herramientas que le permitan usarlas de forma adecuada, para poder tomar decisiones mejor informadas. Por ello, se describe la metodología, se enfatizan algunos aspectos de carácter técnico y se brindan algunas recomendaciones que podrían mejorar la calidad del ajuste. Asimismo, se presenta un ejemplo ilustrativo del ajuste estacional de una serie relevante para México. Los resultados aportan elementos al usuario para elegir el paquete X-13ARIMA-SEATS como el mejor en la actualidad para realizar este procedimiento. Además, se resalta la importancia de comprender mejor las herramientas que dan soporte a los procesos considerados automáticos para interpretar en forma apropiada los resultados que se obtienen

Abstract:

The aim of this work is to provide users of seasonally adjusted economic time series with tools that allow them to interpret those series adequately in order to make better-informed decisions. To that end, we describe the seasonal adjustment methodology, emphasize some technical aspects and make some recommendations that may help improve the quality of the adjustment. We also present an illustrative example of the seasonal adjustment of a relevant series for Mexico. The results give the user elements to choose the X-13ARIMA-SEATS package as the best one currently available for this procedure. Furthermore, we address the need to acquire a deeper knowledge of the tools underlying the so-called automatic processes, so that their output can be properly interpreted

Abstract:

We apply a filtering methodology to estimate the segmented trend of a time series and to get forecasts with the underlying unobserved-component model of the filter employed. The trend estimation technique is based on Penalized Least Squares. The application of this technique allows us to control the amount of smoothness in the trend by segments of the data range, corresponding to different regimes. The method produces a smoothing filter that mitigates the effect of outliers and transitory blips, thus capturing an adequate smooth trend behavior for the different regimes. Obtaining forecasts becomes an easy task once the filter has been applied to the series, because the unobserved-component model underlying the filter has parameters directly related to the smoothing parameter of the filter. The empirical application is useful to appreciate the appropriateness of our segmented trend approach to capture the underlying behavior of the annual rate of growth of remittances to Mexico
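A minimal sketch of the generic penalized-least-squares trend filter named above (the segment-specific smoothing constants and the forecasting step of the paper are omitted); the closed-form solution below is standard.

```python
import numpy as np

def pls_trend(y, lam):
    """Trend minimizing ||y - tau||^2 + lam * ||second differences of tau||^2."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 120)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
trend = pls_trend(y, lam=1600.0)             # larger lam gives a smoother trend
print(trend[:5])
```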

Abstract:

We present an analysis of the Manufacturing Business Opinion Survey carried out by Mexico's national statistics agency. We describe first the survey and employ exploratory statistical analyses based on coincidences and cross-correlations. We also consider forecasting models for the indices of industrial production and the Mexican global economic activity, including opinion indicators as predictors as well as lags of the quantitative variable to be predicted, so that the net contribution of the opinion indicators can be best appreciated in a forecasting experiment. The forecasting models employed are statistically adequate in the sense that they satisfy the underlying assumptions, so that statistical inferences and conclusions are validated by the data at hand. Our results lend empirical support to the intuition that the survey provides information that anticipates the behavior of important macroeconomic variables, such as the Mexican index of global economic activity and the index of industrial production. We include data up to October 2017. These new tables show that the original conclusions remain valid even though the data were subjected in 2011 to some modifications due to a three-fold increase of the sample size and an extended coverage of economic activities

Abstract:

This paper studies the effect of autocorrelation on the smoothness of the trend of a univariate time series estimated by means of penalized least squares. An index of smoothness is deduced for the case of a time series represented by a signal-plus-noise model, where the noise follows an autoregressive process of order one. This index is useful for measuring the distortion of the amount of smoothness by incorporating the effect of autocorrelation. Different autocorrelation values are used to appreciate the numerical effect on smoothness for estimated trends of time series with different sample sizes. For comparative purposes, several graphs of two simulated time series are presented, where the estimated trend is compared with and without autocorrelation in the noise. Some findings are as follows. On the one hand, when the autocorrelation is negative (no matter how large) or positive but small, the estimated trend gets very close to the true trend; even in this case, the estimation is improved by fixing the index of smoothness according to the sample size. On the other hand, when the autocorrelation is positive and large, the simulated and estimated trends lie far away from the true trend. This situation is mitigated by fixing an appropriate index of smoothness for the estimated trend in accordance with the sample size at hand. Finally, an empirical example serves to illustrate the use of the smoothness index when estimating the trend of Mexico's quarterly GDP

Resumen:

En este trabajo se retropola el producto interno bruto estatal de México de 1980 a 1992 por gran actividad económica a partir de los datos oficiales disponibles para el público. El documento consta de seis etapas: 1) desagregración trimestral de la base de datos anual del 2003 al 2015, por estado; 2) conversión de la base de datos estatal y anual, de años base 1993 al 2008; 3) retropolación restringida de 1993 al 2002 con datos desagregados por estado que satisfacen restricciones temporales; 4) reconciliación de cifras estatales previamente retropoladas con los datos a nivel nacional; 5) retropolación restringida de 1980 a 1992 de la base ya reconciliada, por gran actividad económica, de manera que la suma de los estados produce el total nacional y, por último, 6) cambio de año base del 2008 al 2013, actualizando la información al 2016. Los resultados empíricos se validan tanto estadística como econométricamente y se ilustran con datos de la Ciudad de México

Abstract:

In this paper, we retropolate the Mexican Gross Domestic Product (GDP) by State and Economic Activity from 1980 to 1992 using all the official databases currently available. We apply six steps: i) temporal disaggregation of the state-annual database for the years 2003-2015; ii) conversion of the state-annual database from base year 1993 to 2008; iii) restricted retropolation from 1993 to 2002 with the state-disaggregated database, satisfying temporal restrictions; iv) reconciliation of the retropolated quarterly data with the national database; v) restricted retropolation of the reconciled database from 1980 to 1992 by Economic Activity, so that the sum of the state data yields the national figure; and finally, vi) change of base year from 2008 to 2013, with information updated to 2016. The empirical results are verified both statistically and econometrically, and illustrated with Mexico City's data
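The kind of linear-restriction adjustment used in restricted retropolation and reconciliation exercises can be sketched as follows (a generic device stated in our own notation; the paper's exact specification may differ):

```latex
% Generic minimum-variance adjustment under linear restrictions: given
% preliminary estimates \hat{y} with covariance matrix \Sigma and
% constraints A y = r (e.g., state series summing to the national total),
\[
  y^{*} = \hat{y} + \Sigma A^{\top} \left( A \Sigma A^{\top} \right)^{-1}
          \left( r - A \hat{y} \right),
\]
% which satisfies A y^{*} = r exactly and is best linear unbiased under the
% stated assumptions.
```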

Abstract:

In Mexico, the System of National Accounts is disaggregated at the State level and expressed at constant prices of the most recent base year, 2008, for the years 2003 to 2015. Another frequently used database related to the National Accounts and disaggregated by State contains a quarterly index of economic activity. Further, a yearly database is also available with State-level disaggregation and base year 1993, but it only covers the years 1993 to 2006 and employs a different classification system from that of base year 2008. In this work, we are concerned with the problem of retropolating the database of a Mexican State called Mexico City with the maximum level of disaggregation allowed by the publicly available databases. We followed a data-driven approach and combined the three databases to produce an estimated homogeneous quarterly database with base year 2008, covering the years 1993 to 2015 and disaggregated up to groups of sectors

Abstract:

This paper extends the univariate time series smoothing approach provided by penalized least squares to a multivariate setting, thus allowing for joint estimation of several time series trends. The theoretical results are valid for the general multivariate case, but particular emphasis is placed on the bivariate situation from an applied point of view. The proposal is based on a vector signal-plus-noise representation of the observed data that requires the first two sample moments and specifying only one smoothing constant. A measure of the amount of smoothness of an estimated trend is introduced so that an analyst can set in advance a desired percentage of smoothness to be achieved by the trend estimate. The required smoothing constant is determined by the chosen percentage of smoothness. Closed form expressions for the smoothed estimated vector and its variance-covariance matrix are derived from a straightforward application of generalized least squares, thus providing best linear unbiased estimates for the trends. A detailed algorithm applicable for estimating bivariate time series trends is also presented and justified. The theoretical results are supported by a simulation study and two real applications. One corresponds to Mexican and US macroeconomic data within the context of business cycle analysis, and the other one to environmental data pertaining to a monitored site in Scotland
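
As a concrete illustration of the penalized least squares machinery underlying this approach, the following Python sketch computes the univariate closed-form trend estimate; the variable names and toy data are ours, and the bivariate extension described in the abstract would stack the series and their covariances in block form.

```python
import numpy as np

def pls_trend(y, lam):
    """Univariate penalized least squares trend (a minimal sketch).

    Minimizes ||y - tau||^2 + lam * ||D2 tau||^2, where D2 takes
    second differences; the closed form is tau = (I + lam*D2'D2)^{-1} y.
    """
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)  # second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Toy usage with simulated data
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
y = 10.0 * t**2 + rng.normal(scale=0.5, size=100)
tau_hat = pls_trend(y, lam=1600.0)  # larger lam gives a smoother trend
```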

Abstract:

We consider the problem of estimating a trend with different amounts of smoothness for segments of a time series subjected to different variability regimes. We propose using an unobserved components model to consider the existence of at least two data segments. We first fix some desired percentages of smoothness for the trend segments and deduce the corresponding smoothing parameters involved. Once the size of each segment is chosen, the smoothing formulas here derived produce trend estimates for all segments with the desired smoothness as well as their corresponding estimated variances. Empirical examples from demography and economics illustrate our proposal

Resumen:

We present an analysis of the Encuesta Mensual de Opinión Empresarial (Monthly Business Opinion Survey) conducted by the Instituto Nacional de Estadística y Geografía (INEGI), Mexico's official statistical agency. The survey is first described, and an exploratory statistical analysis based on coincidences and cross-correlations is carried out. Forecasting models are then considered for the indices of industrial production and of global economic activity; these models include opinion indicators as predictors, as well as lags of the quantitative variable to be predicted, so that the net contribution of the opinion indicators can be appreciated in a prediction experiment. The forecasting models employed satisfy the underlying assumptions, so the conclusions can be considered statistically valid. The results support the intuition that this survey provides information that anticipates the behavior of macroeconomic variables relevant to Mexico, such as the Global Index of Economic Activity

Abstract:

We present an analysis of the Manufacturing Business Opinion Survey carried out by Mexico's national statistical agency. We first describe the survey and employ exploratory statistical analyses based on coincidences and cross-correlations. We also consider forecasting models for the indices of industrial production and the Mexican global economic activity, including opinion indicators as predictors as well as lags of the quantitative variable to be predicted, so that the net contribution of the opinion indicators can be best appreciated in a forecasting experiment. The forecasting models employed are statistically adequate in the sense that they satisfy the underlying assumptions, so that statistical inferences and conclusions are validated by the data at hand. Our results lend empirical support to the intuition that this survey provides information that anticipates the behavior of important macroeconomic variables, such as the Mexican index of global economic activity and the index of industrial production

Abstract:

We consider a forecasting problem that arises when an intervention is expected to occur on an economic system during the forecast horizon. The time series model employed is seen as a statistical device that serves to capture the empirical regularities of the observed data on the variables of the system without relying on a particular theoretical structure. Either the deterministic or the stochastic structure of a vector autoregressive error correction model of the system is assumed to be affected by the intervention. The information about the intervention effect is just provided by some linear restrictions imposed on the future values of the variables involved. Formulas for restricted forecasts with intervention effects and their mean squared errors are derived as a particular case of Catlin’s static updating theorem. An empirical illustration uses Mexican macroeconomic data on five variables and the restricted forecasts consider targets for years 2011–2014

Resumen:

This work considers trend estimation and business cycle analysis from the perspective of applying statistical methods to economic data presented as time series. The suggested statistical procedure allows fixing the desired percentage of smoothness for the trend and is linked to the Hodrick-Prescott filter. To determine the required smoothing constant, an index of relative precision is used that formalizes the concept of trend smoothness. The method is directly applicable to quarterly time series, but it is also extended here to time series with frequencies of observation other than quarterly

Abstract:

This work deals with trend estimation and business cycle analysis, with emphasis on the application of statistical methods to economic time series data. The suggested statistical procedure allows one to fix a desired percentage of trend smoothness and is linked to the Hodrick-Prescott filter. An index of relative precision, derived from a formal definition of trend smoothness, is used to decide the smoothing constant involved. This method is directly applicable to quarterly series, but its use is extended with ease to time series with frequencies of observation other than quarterly

Resumen:

This paper analyzes the predictive ability of some cyclical indices for the turning points of the Mexican economy. A growth cycle approach is adopted, which requires detrending the series; several methods were therefore tried, and the Hodrick-Prescott filter applied twice was found to be the best in terms of revisions. The coincident and leading indices were then estimated with three different methods: 1) NBER, 2) OECD, and 3) Stock-Watson. The coincident indices produce similar and acceptable results, but the leading ones do not yield satisfactory results

Abstract:

This work analyzes the predictive ability of some cyclical indices for the turning points of the Mexican economy. The growth cycle approach adopted requires working with detrended series, and so several detrending methods were tried. A double Hodrick-Prescott filter application produced the best results in terms of revisions. Then, the coincident and leading indices were estimated with three different methods: 1) NBER, 2) OECD and 3) Stock-Watson's. The resulting coincident indices produce similar and acceptable results, but the leading ones do not work as expected
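
A minimal sketch of a double Hodrick-Prescott application is given below, using the hpfilter routine from statsmodels; the specific smoothing constants are illustrative assumptions, since the abstract does not report the values used.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=200)) + 0.05 * np.arange(200)  # toy series

cycle, trend = hpfilter(y, lamb=129600)     # first pass removes the trend
smooth_cycle, _ = hpfilter(cycle, lamb=14)  # second pass smooths the cycle
```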

Abstract:

This work presents a procedure for creating a timely estimation of Mexico’s quarterly GDP with the aid of Vector Auto-Regressive models. The estimates consider historical GDP data up to the previous quarter as well as the most recent figures available for two relevant indices of Mexican economic activity and other potential predictors of GDP. We obtain two timely estimates of the Grand Economic Activities and Total GDP. Their corresponding delays are at most 15 days and 30 days respectively from the end of the reference quarter, while the first official GDP figure is delayed 52 days. We follow a bottom-up approach that imitates the official calculation procedure applied in Mexico. Empirical validation is carried out with both in-sample simulations and in real time. The mean error of the 30-day delayed estimate of total GDP is 0.13% and its root mean square error is 0.67%. These figures compare favorably with those of no-change models
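
The following sketch shows the kind of VAR-based nowcast the procedure relies on, using statsmodels; the column names and the fixed lag order are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly growth rates: GDP and two activity indicators
rng = np.random.default_rng(2)
data = pd.DataFrame(rng.normal(size=(80, 3)), columns=["gdp", "igae", "ip"])

model = VAR(data)
res = model.fit(4)  # lag order fixed at 4 for illustration
nowcast = res.forecast(data.values[-res.k_ar:], steps=1)  # next quarter
```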

Resumen:

This article presents some problems faced by national statistical institutes in connection with the generation, dissemination, and analysis of official economic time series data. The statistical analysis tools that could be applied to solve them are mentioned; although these should preferably be applied by the institutes themselves, some could also be employed by users interested in the information. In addition, various circumstances that give rise to the problems discussed are described, and some applications are briefly illustrated with data generated by Mexico's Instituto Nacional de Estadística y Geografía (INEGI)

Abstract:

This paper presents some problems faced by national statistical institutes with regard to the generation, dissemination and analysis of official economic time series. The statistical analysis tools that may be applied to solve those problems are outlined. These tools should preferably be used by the statistical institutes themselves but, if needed, some may be employed by users of the information. Several circumstances that give rise to such problems are also considered and the solutions are illustrated with data generated by Mexico's National Institute of Statistics and Geography (INEGI)

Abstract:

This paper considers the statistical problems of editing and imputing data of multiple time series generated by repetitive surveys. The case under study is that of the Survey of Cattle Slaughter in Mexico’s Municipal Abattoirs. The proposed procedure consists of two phases: first, the data of each abattoir are edited to correct gross inconsistencies; second, the missing data are imputed by means of restricted forecasting. This method uses all the historical and current information available for the abattoir, as well as multiple time series models from which efficient estimates of the missing data are obtained. Some empirical examples are shown to illustrate the usefulness of the method in practice

Abstract:

We propose to decompose a financial time series into trend plus noise by means of the exponential smoothing filter. This filter produces statistically efficient estimates of the trend that can be calculated by a straightforward application of the Kalman filter. It can also be interpreted in the context of penalized least squares, in which a function of a smoothing constant is minimized by trading off goodness of fit against smoothness of the trend. The smoothing constant is crucial to decide the degree of smoothness and the problem is how to choose it objectively. We suggest a procedure that allows the user to decide at the outset the desired percentage of smoothness and derive from it the corresponding value of that constant. A definition of smoothness is first proposed as well as an index of relative precision attributable to the smoothing element of the time series. The procedure is extended to series with different frequencies of observation, so that comparable trends can be obtained for, say, daily, weekly or intraday observations of the same variable. The theoretical results are derived from an integrated moving average model of order (1,1) underlying the statistical interpretation of the filter. Expressions of equivalent smoothing constants are derived for series generated by temporal aggregation or systematic sampling of another series. Hence, comparable trend estimates can be obtained for the same time series with different lengths, for different time series of the same length and for series with different frequencies of observation of the same variable
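
The filter itself reduces to a simple recursion; the sketch below shows it, while the paper's contribution, mapping a desired percentage of smoothness to the smoothing constant, is not reproduced here.

```python
def exponential_smoothing(y, alpha):
    """Simple exponential smoothing (a minimal sketch).

    This is the steady-state Kalman filter implied by an IMA(1,1)
    model; alpha is the implied smoothing weight in (0, 1].
    """
    trend = [y[0]]
    for obs in y[1:]:
        trend.append(alpha * obs + (1.0 - alpha) * trend[-1])
    return trend

smooth = exponential_smoothing([10.0, 12.0, 11.0, 13.0, 12.5], alpha=0.3)
```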

Abstract:

This work presents a method for estimating trends of economic time series that allows the user to fix at the outset the desired percentage of smoothness for the trend. The calculations are based on the Hodrick-Prescott (HP) filter usually employed in business cycle analysis. The situation considered here is not related to that kind of analysis, but with describing the dynamic behaviour of the series by way of a smooth curve. To apply the filter, the user has to specify a smoothing constant that determines the dynamic behaviour of the trend. A new method that formalizes the concept of trend smoothness is proposed here to choose that constant. Smoothness of the trend is measured in percentage terms with the aid of an index related to the underlying statistical model of the HP filter. Empirical illustrations are provided using data on Mexico’s GDP
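
One plausible way to formalize a smoothness percentage, consistent with the abstract's idea though not necessarily the paper's exact index, is through the trace of the smoother matrix:

```python
import numpy as np

def smoothness_index(n, lam):
    """Share of smoothness attributed to the trend (an assumption:
    S = 1 - tr[(I + lam*K'K)^{-1}]/n, with K the second-difference
    matrix), so S -> 0 as lam -> 0 and S -> 1 as lam grows."""
    K = np.diff(np.eye(n), n=2, axis=0)
    M = np.linalg.inv(np.eye(n) + lam * K.T @ K)
    return 1.0 - np.trace(M) / n

# Inverting S numerically (e.g., by bisection over lam) would yield
# the smoothing constant that delivers a pre-specified percentage.
print(smoothness_index(n=100, lam=1600.0))
```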

Abstract:

The time series smoothing problem is approached in a slightly more general form than usual. The proposed statistical solution involves an implicit adjustment to the observations at both extremes of the time series. The resulting estimated trend becomes more statistically grounded and an estimate of its sampling variability is provided. An index of smoothness is derived and proposed as a tool for choosing the smoothing constant

Abstract:

The inclusion of linear deterministic effects in a time series model is important to get an appropriate specification. Such effects may be due to calendar variation, outlying observations or interventions. This article proposes a two-step method for estimating an adjusted time series and the parameters of its linear deterministic effects simultaneously. Although the main goal when applying this method in practice might only be to estimate the adjusted series, an important byproduct is a substantial increase in efficiency in the estimates of the deterministic effects. Some theoretical examples are presented to demonstrate the intuitive appeal of this proposal. Then the methodology is applied on two real datasets. One of these applications investigates the importance of the 1995 economic crisis on Mexico’s industrial production index

Abstract:

A confidence interval was derived for the index of a power transformation that stabilizes the variance of a time series. The process starts from a model-independent procedure that minimizes a coefficient of variation to yield a point estimate of the transformation index. The confidence coefficient of the interval is calibrated through a simulation
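
A model-independent recipe consistent with the abstract is sketched below; the grouping scheme and search grid are our assumptions. The idea is to pick the power whose group-wise ratios of standard deviation to a power of the mean vary least across groups.

```python
import numpy as np

def cv_for_lambda(y, lam, group_size=12):
    """Coefficient of variation of s_h / m_h**(1-lam) across groups
    of a positive-valued series y (a sketch; grouping is assumed)."""
    groups = [y[i:i + group_size]
              for i in range(0, len(y) - group_size + 1, group_size)]
    m = np.array([g.mean() for g in groups])
    s = np.array([g.std(ddof=1) for g in groups])
    r = s / m ** (1.0 - lam)
    return r.std(ddof=1) / r.mean()

def choose_lambda(y, grid=np.linspace(-1.0, 2.0, 61)):
    """Grid search for the variance-stabilizing power."""
    return min(grid, key=lambda lam: cv_for_lambda(y, lam))
```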

Abstract:

We present a general result that allows us to combine data from two different sources of information in order to improve the efficiency of predictors within the context of multiple time series analysis. Such a result is derived from generalized least squares and is given as a combining rule that takes into account the possibility of correlation between forecasts and bias in one of them. We then specialize that result to situations in which the predictors are unbiased and uncorrelated. Afterwards we propose measuring precision shares and testing for compatibility in order for the combination to make sense. Several applications of the combining rule are presented according to the nature of the linear constraints imposed by one of the data sources. When the constraints are binding we consider the case of restricted forecasts with exact linear restrictions, deterministic changes in the model structure and partial information on some variables. When the constraints are stochastic we study forecast combinations that include expert judgments and benchmarking. Thus, the connections among different standard techniques are emphasized by the combining rule and its companion compatibility test. An empirical example illustrates the usefulness of this inferential procedure in practice
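
For the special case of unbiased, uncorrelated predictors mentioned in the abstract, the GLS combining rule takes a simple closed form; the sketch below uses our own notation.

```python
import numpy as np

def combine_forecasts(f1, S1, f2, S2):
    """Combine two unbiased, uncorrelated vector forecasts by GLS.

    f1, f2: forecast vectors; S1, S2: their covariance matrices.
    The combination shrinks f1 toward f2 in proportion to relative
    precision; its covariance is S1 (S1+S2)^{-1} S2.
    """
    W = S1 @ np.linalg.inv(S1 + S2)
    f_comb = f1 + W @ (f2 - f1)
    S_comb = S1 - W @ S1
    return f_comb, S_comb
```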

Resumen:

This paper presents, in a unified manner, the restricted forecasting methodology, which makes it possible to incorporate into the forecasts arising from a linear model for univariate series pieces of information additional to that provided by the historical series with which the model was built. If the pieces of information are linear restrictions on the future values of the original series, they can be incorporated optimally and formally into the forecasts of the model, here taken to be an ARIMA model. As a complement to the method, a statistic is presented that makes it possible to decide whether the additional data are compatible with the historical series. This statistic broadens the possibilities of analysis according to the certainty associated with the two sources of information: historical and additional. To illustrate the methodology, the annual target for Mexico's real GDP announced by the government authorities for 2001 is tracked. In this example, the results obtained as more data of the series are observed justify the changes in the targets during the year

Abstract:

This paper presents, in a unified way, the restricted forecasting methodology that serves to incorporate additional information into the forecasts arising from a linear model built from historical data of a univariate time series. If the additional information is in the form of linear constraints about future values of the series, it can be optimally and formally incorporated into the model forecasts. The linear model considered is an ARIMA type. As a complement of the method, a statistic is introduced to decide whether the additional data are compatible with the historical series or not. Such a statistic is the basis for wider possibilities of analysis in terms of the certainty associated with the two sources of information: historical and additional. Finally, in order to illustrate the methodology, the annual target of Mexico's real GDP, announced by the government authorities for year 2001, is examined. In this example, the results that are obtained while more data of the series are observed justify changing the target with the course of the year

Abstract:

On the basis of some suitable assumptions, we show that the best linear unbiased estimator of the true mortality rates has the form of Whittaker's solution to the graduation problem. Some statistical tools are also proposed to help reduce subjectivity when graduating a dataset

Abstract:

An important tool in time series analysis is that of combining information in an optimal way. Here we establish a basic combining rule of linear predictors and show that such problems as forecast updating, missing value estimation, restricted forecasting with binding constraints, analysis of outliers and temporal disaggregation can be viewed as problems of optimal linear combination of restrictions and forecasts. A compatibility test statistic is also provided as a companion tool to check that the linear restrictions are compatible with the forecasts generated from the historical data.
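
A Wald-type version of the companion compatibility test can be sketched as follows; independence of the two information sources is an assumption made here for simplicity.

```python
import numpy as np
from scipy import stats

def compatibility_test(f, Sf, r, Sr):
    """Test whether restrictions r (covariance Sr) are compatible
    with forecasts f (covariance Sf); under compatibility the
    statistic is approximately chi-squared with len(r) d.o.f."""
    d = np.asarray(r) - np.asarray(f)
    stat = float(d @ np.linalg.solve(Sf + Sr, d))
    return stat, stats.chi2.sf(stat, df=len(d))
```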

Abstract:

A method is proposed for choosing a power transformation that allows a univariate time series to be adequately represented by a straight line, in an exploratory analysis of the data. The method is quite simple and enables the analyst to measure local and global curvature in the data. A description of the pattern followed by the data is obtained as a by-product of the method. A specific form of the coefficient of determination is suggested to discriminate among several combinations of estimates of the index of the transformation and the slope of the straight line. Some results related to the degree of differencing required to make the time series stationary are also exploited. The usefulness of the proposal is illustrated with four empirical applications: two using demographic data and the other two concerning market studies. These examples are provided in line with the spirit of an exploratory analysis, rather than as a complete or confirmatory analysis of the data.

Abstract:

A method is proposed for estimating unobserved values of multiple time series whose temporal and contemporaneous aggregates are known. The resulting estimates are obtained from a model-based procedure in which the models employed are indicated by the data alone. This procedure is empirically supported by a discrepancy measure here derived. Even though the problem can be cast into a state space formulation, the usual assumptions underlying Kalman filtering are not fulfilled and such an approach cannot be applied directly. Some simulated examples are provided to validate the method numerically and an application with real data serves to illustrate its use in practice

Abstract:

Univariate time series models make efficient use of available historical records of electricity consumption for short-term forecasting. However, the information (expectations) provided by electricity consumers in an energy-saving survey, even though qualitative, was considered to be particularly important, because the consumers' perception of the future may take into account the changing economic conditions. Our approach to forecasting electricity consumption combines historical data with expectations of the consumers in an optimal manner, using the technique of restricted forecasts. The same technique can be applied in some other forecasting situations in which additional information, besides the historical record of a variable, is available in the form of expectations

Abstract:

We consider the problem of estimating the effects of an intervention on a time series vector subjected to a linear constraint. Minimum variance linear and unbiased estimators are provided for two different formulations of the problem: (1) when a multivariate intervention analysis is carried out and an adjustment is needed to fulfill the restriction, and (2) when a univariate intervention analysis was performed on the aggregate series obtained from the linear constraint, previous to the multivariate analysis, and the results of both analyses are required to be made compatible with each other. A banking example that motivated this work illustrates our solution

Abstract:

Several forecasting algorithms have been proposed to forecast a cumulative variable using its partially accumulated data. Some particular cases of this problem are known in the literature as the "style goods inventory problem" or as "forecasting shipments using firm orders-to-date", among other names. Here we summarize some of the most popular techniques and propose a statistical approach to discriminate among them in an objective (data-based) way. Our basic idea is to use statistical models to produce minimum mean square error forecasts and let the data lead us to select an appropriate model to represent their behavior. We apply our proposal to some published data showing total accumulated values with constant level and then to two actual sets of data pertaining to the Mexican economy, showing a nonconstant level. The forecasting performance of the statistical models was evaluated by comparing their results against those obtained with algorithmic solutions. In general the models produced better forecasts for all lead times, as indicated by the most common measures of forecasting accuracy and precision

Abstract:

A recursive procedure for temporally disaggregating a time series is proposed. In practice, it is superior to standard non-recursive procedures in several ways: (i) previously disaggregated data need not be modified, (ii) calculations become simpler and (iii) data storage requirements are minimized. The suggested procedure yields Best Linear Unbiased Estimates, given the historical record of previously disaggregated figures, concurrent data in the form of a preliminary series and an aggregated value for the current period. A test statistic is derived for validating the numerical results obtained in practice

Abstract:

A method is presented to improve the precision of timely data, which are published when final data are not yet available. Explicit statistical formulae, equivalent to Kalman filtering, are derived to combine historical with preliminary information. The application of these formulae is validated by the data, through a statistical test of compatibility between sources of information. A measure of the share of precision of each source of information is also derived. An empirical example with Mexican economic data serves to illustrate the procedure

Abstract:

This paper presents some procedures aimed at helping an applied time series analyst in the use of power transformations. Two methods are proposed for selecting a variance-stabilizing transformation and another for bias-reduction of the forecast in the original scale. Since these methods are essentially model-independent, they can be employed with practically any type of time series model. Some comparisons are made with other methods currently available and it is shown that those proposed here are either easier to apply or are more general, with a performance similar to or better than other competing procedures

Abstract:

Some time series models that account for a structural change, either in the deterministic or in the stochastic part of an ARIMA model, are presented. The structural change is assumed to occur during the forecast horizon of the series and the only available information about this change, besides the time point of its occurrence, is provided by only one or two linear restrictions imposed on the forecasts. Formulas for calculating the variance of the restricted forecasts as well as some other statistics are derived. The methods here suggested are illustrated by means of empirical examples

Abstract:

Many economic time series are only available in temporally aggregated form. When the analysis requires disaggregated data, the analyst faces the problem of deriving these data in the most reasonable way. In this paper a data-based method is developed which produces an optimal estimator of the disaggregated series. The method requires a preliminary estimate of the series, which is adjusted to fulfil the restrictions imposed by the aggregated data. Empirical selection of the preliminary estimate is discussed and a statistic is developed for testing its adequacy. Some comparisons with other methods, as well as numerical illustrations, are presented
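
The adjustment step can be illustrated with a least-squares sketch; the paper's optimal estimator also weights by a covariance matrix, which is taken as the identity here for brevity.

```python
import numpy as np

def adjust_preliminary(p, C, a):
    """Adjust a preliminary series p so the aggregation constraints
    C @ x = a hold exactly (identity-weighted least squares)."""
    return p + C.T @ np.linalg.solve(C @ C.T, a - C @ p)

# Example: 8 quarters whose annual sums must match two known totals
p = np.array([10.0, 11.0, 12.0, 13.0, 12.0, 12.0, 13.0, 14.0])
C = np.kron(np.eye(2), np.ones(4))  # quarterly-to-annual aggregation
a = np.array([50.0, 55.0])
x = adjust_preliminary(p, C, a)
assert np.allclose(C @ x, a)
```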

Abstract:

An optimal univariate forecast, based on historical and additional information about the future, is obtained in this paper. Its statistical properties, as well as some inferential procedures derived from it, are indicated. Two main situations are considered explicitly: (1) when the additional information imposes a constraint to be fulfilled exactly by the forecasts and (2) when the information is only a conjecture about the future values of the series or a forecast from an alternative model. Theoretical and empirical illustrations are provided, and a unification of the existing methods is also attempted

Abstract:

We investigate the use of power transformations when data on two quantitative variables are presented in a two-way table. Given a suitable transformation we can model the underlying continuous variables. Regressions and the correlation are obtained from the transformed grouped data. Also, by transforming back to the original scale, we obtain a smoothed version of the data

Abstract:

In this note, the resemblance between the family of inequality indices introduced by Atkinson (1970) and the Box-Cox transformation is exploited in order to provide a data-based procedure for choosing an inequality index within the family
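
The resemblance can be made explicit in a short display; the notation below is ours. With aversion parameter \varepsilon, the Atkinson index applies the same power function as the Box-Cox transform with \lambda = 1 - \varepsilon:

```latex
A_\varepsilon \;=\; 1 \;-\; \left[\frac{1}{n}\sum_{i=1}^{n}
  \left(\frac{y_i}{\bar{y}}\right)^{1-\varepsilon}\right]^{1/(1-\varepsilon)},
\qquad
g_\lambda(y) \;=\; \frac{y^{\lambda}-1}{\lambda},
\quad \lambda = 1-\varepsilon .
```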

Abstract:

Box and Cox (1964) proposed a power transformation which has proven utility for transforming ungrouped data to near normality. In this paper we extend its applicability to grouped data. Illustrative examples are presented and the asymptotic properties of the estimators are derived
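
For reference, the ungrouped Box-Cox fit that the paper extends is a one-liner in scipy; the simulated data here are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.lognormal(mean=1.0, sigma=0.5, size=500)  # positive, skewed data

z, lam_hat = stats.boxcox(y)  # transformed data and the MLE of lambda
```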

Abstract:

The power transformation suggested by Box & Cox (1964) is applied to the odds ratio to generalize the logistic model and to parameterize a certain type of lack of fit. Transformation of the design variable within the context of the dose-response problem is also considered
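
One way to write the resulting link, in our notation, applies the Box-Cox transform to the odds and nests the ordinary logit as the power goes to zero:

```latex
g_\lambda(p) \;=\; \frac{\left(\dfrac{p}{1-p}\right)^{\lambda}-1}{\lambda},
\qquad
\lim_{\lambda \to 0} g_\lambda(p) \;=\; \log\frac{p}{1-p}.
```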

Abstract:

There is an increasing push by environmentalists, scholars, and some politicians in favor of a form of environmental rights referred to as "rights of nature" or "nature's rights." A milestone victory in this movement was the incorporation of rights of nature into the Ecuadorian constitution in 2008. However, there are reasons to be skeptical that these environmental rights will have the kinds of transformative effects that are anticipated by their most enthusiastic proponents. From a conceptual perspective, a number of difficulties arise when rights (or other forms of legal or moral consideration) are extended to non-human biological aggregates, such as species or ecosystems. There are two very general strategies for conceiving of the interests of such aggregates: a "bottom-up" model that grounds interest in specific aggregates (such as particular species or ecosystems), and then attempts to compare various effects on those specific aggregates; and a "top-down" model that grounds interests in the entire "biotic community." Either approach faces serious challenges. Nature's rights have also proven difficult to implement in practice. Courts in Ecuador, the country with the most experience litigating these rights, have had a difficult time using the construct of nature's rights in a non-arbitrary fashion. The shortcomings of nature's rights, however, do not mean that constitutional reform cannot be used to promote environmental goals. Recent work in comparative constitutional law indicates that organizational rights have a greater likelihood of achieving meaningful results than even quite concrete substantive rights. Protection for the role of environmental groups within civil society may, then, serve as the most effective way for constitutional reform to vindicate the interests that motivate the nature's rights movement

Abstract:

A general question in finance is whether the volatility of the price of futures contracts follows any particular trend over the contract's life. In this study, we contribute to the debate by empirically analyzing the trend of the term structure of the volatility of short term interest rates (STIR) futures prices. Using data on the Eurodollar, Euribor, and Short-Sterling futures contracts for the period between 2000 and 2018, we model the volatility of each individual contract considering time to expiration and trading activities. Furthermore, we investigate whether these trends change according to overall economic conditions. We find that STIR futures behave differently than futures on other underlying assets and that, most of the time, STIR futures price volatility declines as the contract approaches expiration. Moreover, the relation between volatility and time to maturity depends on market conditions and trading activities, and it is non-linearly related to the observation period

Abstract:

This study investigates the relationship between volatility and contract expiration for the case of Mexican interest rate futures. Specifically, it examines the hypothesis that the volatility of futures prices should increase as contracts approach expiration (the “maturity effect”). Using panel data techniques, the study assesses the differences in volatility patterns between contracts. The results show that although the maturity effect was sometimes present, the inverse effect prevails; volatility decreases as expiration approaches. On the basis of the premises of the negative covariance hypothesis, the study provides additional criteria that explain this behavior in terms of the term structure dynamics

Abstract:

LGBTQ+ individuals may face particular labor market challenges concerning disclosure of their identity and the prevalence of homophobia. Employing an online survey in Mexico with two elicitation methods, we investigate the size of the LGBTQ+ population and homophobic sentiment across various subgroups. We find that around 5%-13% of respondents self-identify as LGBTQ+, with some variation by age and job sectors. Homophobic sentiment is more prevalent when measured indirectly and is higher among males, older and less educated workers, and in less traditional sectors. Lastly, we uncover a negative correlation between homophobia and LGBTQ+ presence in labor markets, suggesting a need for policies to address these disparities

Abstract:

We study the political economy of government responsiveness in the context of COVID-19 vaccine allocation in Mexico. We first present population-level evidence that the vaccines had positive effects on public health, motivating the plausible electoral value of vaccine distribution. To estimate these effects, we exploit newly collected data on diverse health outcomes and staggered roll-out of vaccines by municipalities and age groups. Our analysis then delves into the electoral predictors and consequences of the vaccination program. Electoral incentives positively correlate with government responsiveness, and vaccine allocation paid off electorally in some locations. Using a difference-in-differences strategy coupled with geographically fine-grained electoral data, which allow us to hold fixed all municipality-level factors that could have mattered for vaccine eligibility, we do not find any evidence that a higher vaccination coverage would have boosted electoral support for the incumbent party, on average. However, there is some evidence of heterogeneous effects. Furthermore, vaccines did increase electoral participation, plausibly by decreasing the perceived cost of voting amidst the health crisis

Abstract:

This paper studies the effect of bank credit supply shocks on formal employment in Mexico using a proprietary dataset containing information on all loans extended to firms by commercial banks during 2010–2015. We find large impacts on the formal employment of small and medium firms: a positive credit shock of 1 standard deviation increases yearly employment by 1.4 percentage points. The shares of uncollateralized credit and credit received by family firms, younger firms, and firms with no previous bank relationships also increase, suggesting that credit shocks may play a more prominent role for employment creation in credit-constrained settings

Abstract:

Do electorally concerned politicians have an incentive to contain epidemics when public health interventions may have an economic cost? We revisit the first pandemic of the twenty-first century and study the electoral consequences of the 2009 H1N1 outbreak in Mexico. Leveraging detailed administrative data and a difference-in-differences approach, we document a statistically significant negative effect of local epidemic outbreaks on the electoral performance of the governing party. The effect (i) is not driven by differences in containment policies, (ii) implies that the epidemic may have shifted outcomes of close electoral races, and (iii) persists at least three years after the pandemic. Part of the negative impact on incumbent vote share can be attributed to a decrease in turnout, and the findings are also in line with voters learning about the effectiveness of government policies or incumbent competence

Abstract:

Providing information is important for managing epidemics, but issues with data accuracy may hinder its effectiveness. Focusing on Covid-19 in Mexico, we ask whether delays in death reports affect individuals' beliefs and behavior. Exploiting administrative data and an online survey, we provide evidence that behavior, and consequently the evolution of the pandemic, are considerably different when death counts are presented by date reported rather than by date occurred, due to non-negligible reporting delays. We then use an equilibrium model incorporating an endogenous behavioral response to illustrate how reporting delays lead to slower individual responses, and consequently, worse epidemic outcomes

Abstract:

Could taxing sugar-sweetened beverages in areas where clean water is unavailable lead to increases in diarrheal disease? An excise tax introduced in Mexico in 2014 led to a significant 6.6 percent increase in gastrointestinal disease rates in areas lacking safe drinking water throughout the first year of the tax, with evidence of a diminishing impact in the second year. Suggestive evidence of a differential increase in the consumption of bottled water by households without access to safe water two years post-tax provides a potential explanation for this declining pattern. The costs implied by these results are small, particularly compared to tax revenues and the potential public health benefits. However, these findings inform the need for accompanying soda taxes with policy interventions that guarantee safe drinking water for vulnerable populations

Abstract:

Existing literature suggests that hospital occupancy matters for quality of care, as measured by various patient outcomes. However, estimating the causal effect of increased hospital busyness on in-hospital mortality remains an elusive task due to statistical power challenges and the difficulty in separating shocks to occupancy from changes in patient composition. Using data from a large public hospital system in Mexico, we estimate the impact of congestion on in-hospital mortality by exploiting the shock in hospitalizations induced by the 2009 H1N1 pandemic, instrumenting hospital admissions due to acute respiratory infections (ARIs) with measures of ARI cases at nearby healthcare facilities as a proxy for the size of the local outbreak. Our instrumental-variables estimates show that a 1% increase in ARI admissions in 2009 led to a 0.25% increase in non-ARI in-hospital mortality. We show that these effects are nonlinear in the size of the local outbreak, consistent with the existence of tipping points. We further show that effects are concentrated at hospitals with limited infrastructure, suggesting that supply-side policies that improve patient assignment across hospitals and strategically increase hospital capacity could mitigate some of the negative impacts. We discuss managerial implications, suggesting that up to 25%-30% of our estimated deaths at small and non-intensive-care-unit hospitals could have been averted by reallocating patients to reduce congestion

Abstract:

Abatement expenditures are not the only available tool for firms to decrease emissions. Technology choice can also indirectly affect environmental performance. We assess the impact of import competition on plants' environmental outcomes. In particular, exploiting a unique combination of Mexican plant-level and satellite imagery data, we measure the effect of tariff changes due to free-trade agreements on three main outcomes: plants' fuel use, plants' abatement expenditures, and measures of air pollution around plants' location. Our findings show that import competition induced plants in Mexico to increase energy efficiency, reduce emissions, and in turn reduce direct investment in environmental protection. Our findings suggest that the general technology upgrading effect of any policy could be an important determinant of environmental performance in developing countries and that this effect may not be captured in abatement data

Abstract:

We investigate the impact of financial access on law compliance (whether workers are registered in a mandated social security system). In contrast to previous studies that focus on firms' access to credit, we investigate workers' access to credit. Exploiting the geographic variation in financial access due to Banco Azteca's opening in Mexico in 2002, which changed financial access for poor people almost overnight, we find that financial access increased the probability of getting formalized

Abstract:

A regression discontinuity analysis is used to test whether a sharp increase in the government transfers received by households, induced by a pension program for individuals age 70 and older in Mexico City, affects coresiding children's school enrollment. Results show that while household composition and other characteristics do not change significantly at the cutoff age for program eligibility, school enrollment increases significantly. This suggests that households may be credit constrained, as the sharp increase in government transfers is known and anticipated by individuals below the cutoff age

Abstract:

This paper exploits the sharp change in air pollutants induced by the installation of small-scale power plants throughout Mexico to measure the causal relationship between air pollution and infant mortality, and whether this relationship varies by municipality’s socio-economic conditions. The estimated elasticity for changes in infant mortality due to respiratory diseases with respect to changes in air pollution concentration ranges from 0.58 to 0.84 (more than ten times higher than the Ordinary Least Squares estimate). Weaker evidence suggests that the effect is significantly lower in municipalities with a high presence of primary healthcare facilities and larger in municipalities with a high fraction of households with low education levels

Abstract:

This paper evaluates the impact of an intervention targeted at marginalized, low-performance students in public secondary schools in Mexico City. The program consisted of offering free additional math courses, taught by undergraduate students from some of the most prestigious Mexican universities, to the lowest-performance students in a set of marginalized schools in Mexico City. We exploit the information available in the transcripts of all students (treated and not treated by the program) enrolled in participating and non-participating schools. Before the implementation of the program, participating students lagged behind non-participating ones by more than half a point in their GPA (on a 10-point scale). Using a difference-in-differences approach, we find that students participating in the program experienced a higher increase in their school grades after the implementation of the program, and that the difference in grades between the two groups decreases over time. By the end of the school year (when the free extra courses had been offered, on average, for 10 weeks), participating students' grades were not significantly lower than non-participating students' grades. These results provide some evidence that short and low-cost interventions can have important effects on student achievement

Abstract:

This paper explores the relationship between fertility and the introduction of new laws regulating cohabitation, in a context of low fertility and high out-of-wedlock childbearing. We show that in France, while fertility and marriage rates moved closely together before 1999, since the introduction (in 1999) of the "Pacte Civil de Solidarité" (PACS), a cohabitation contract less binding than marriage, this relationship is much weaker. Surprisingly, legal unions (defined as marriage plus PACS) and fertility continue to move together after this date. We provide evidence of the relationship between the introduction of PACS and fertility, utilizing the regional variation in the number of PACS per woman (PACS intensity) and the differences in fertility before and after 1999. We show that French Departments with high PACS intensity did not show a different trend in fertility before 1999 than those with low PACS intensity (excluding Metropolitan Paris). However, they did experience an increase in their fertility levels after the introduction of PACS. This suggests the need to collect better and more detailed data, in order to assess whether the recent increases in French fertility can be partially explained by the availability of PACS

Abstract:

This research uses a unique dataset that provides relatively inexpensive measures of air quality at detailed geography. The analytical focus is the relationship, in Mexico, between Aerosol Optical Depth (AOD, a measure of air quality obtained from satellite imagery) and infant mortality due to respiratory diseases from January, 2001 through December, 2006. The results contribute to existing literature on the relationship between air pollution and health outcomes by examining, for the first time, the relationship between these variables for the entire land area of Mexico, for most of which no ground measures of pollution concentrations exist. Substantive results suggest that changes in AOD have a significant impact on infant mortality due to respiratory diseases in municipalities in the three highest AOD quartiles in the country, providing evidence that air pollution's adverse effects, although nonlinear, are not only present in large cities, but also in lower pollution settings which lack ground measures of pollution. Methodologically, it is argued that satellite-based imagery can be a valuable source of information for both researchers and policy makers when examining the consequences of pollution and/or the effectiveness of pollution-control mechanisms

Abstract:

The workload of Cloud data centers is constantly fluctuating causing imbalances across physical hosts that may lead to violations of service-level agreements. To mitigate workload imbalances, this work proposes a concurrent agent-based problem-solving technique supported by cooperative game theory capable of balancing workloads by means of live migration of virtual machines (VMs). Nearby agents managing physical hosts are partitioned into coalitions in which agents play coalitional games to progressively balance separate sections of a data center while considering the coalition's benefit of migrating a VM as well as its associated network overhead. Simulation results show that, in general, the proposed coalition-based load balancing mechanism outperformed a load balancing mechanism based on a hill-climbing algorithm used by a top data center vendor when considering altogether (i) the standard deviation of resource usage, (ii) the number of migrations, and (iii) the number of switch hops per migration

Abstract:

Using the minimum list of indicators for measuring the social development proposed by the United Nations, this work identifies cross-national indicators of homicide by analyzing socio-economic profiles of 202 countries. Both a correlation analysis and a decision-tree analysis indicate that countries with a relatively low homicide rate are characterized by a high life expectancy of women at birth and a very low adolescent fertility rate, while countries with a relatively high homicide rate are characterized by a low to medium life expectancy of women at birth, a high women-to-men ratio, and a high women’s share of adults with HIV/AIDS. The significance of this work stems from identifying cross-national indicators of homicide that can be used to assist policymakers in designing public policies aimed at reducing homicide rates by improving social indicators
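
A toy sketch of this kind of decision-tree analysis is shown below; the column names, values, and thresholds are illustrative assumptions, not the study's data.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical country profiles (illustrative values only)
df = pd.DataFrame({
    "life_expectancy_women": [82, 55, 78, 60, 74, 68],
    "adolescent_fertility":  [8, 90, 12, 70, 30, 55],
    "homicide_level":        ["low", "high", "low", "high", "low", "high"],
})
X = df[["life_expectancy_women", "adolescent_fertility"]]
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, df["homicide_level"])
print(export_text(tree, feature_names=list(X.columns)))
```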

Abstract:

Agent-based virtual simulations of social systems susceptible to corruption (e.g., police agencies) require agents capable of exhibiting corruptible behaviors to achieve realistic simulations and enable the analysis of corruption as a social problem. This paper proposes a formal belief-desire-intention framework supported by the functional event calculus and fuzzy logic for modeling corruption based on the integrity level of social agents and the influence of corrupters on them. Corruptible social agents are endowed with beliefs, desires, intentions, and corrupt-prone plans to achieve their desires. This paper also proposes a fuzzy logic system to define the level of impact of corruption-related events on the degree of belief in the truth of anti-corruption factors (e.g., the integrity of the leader of an organization). Moreover, an agent-based model of corruption supported by the proposed belief-desire-intention framework and the fuzzy logic system was devised and implemented. Results obtained from agent-based simulations are consistent with actual macro-level patterns of corruption reported in the literature. The simulation results show that (i) the bribery rate increases as more external entities attempt to bribe agents and (ii) the more anti-corruption factors agents believe to be true, the less prone to perpetrate acts of corruption

Abstract:

Rule-governed artificial agent societies consisting of autonomous members are susceptible to rule violations, which can be seen as the acts of agents exercising their autonomy. As a consequence, modeling and allowing deviance is relevant, in particular, when artificial agent societies are used as the basis for agent-based social simulation. This work proposes a belief framework for modeling social deviance in artificial agent societies by taking into account both endogenous and exogenous factors contributing to rule compliance. The objective of the belief framework is to support the simulation of social environments where agents are susceptible to adopt rule-breaking behaviors. In this work, endogenous, exogenous and hybrid decision models supported by the event calculus formalism were implemented in an agent-based simulation model. Finally, a series of simulations was conducted in order to perform a sensitivity analysis of the agent-based simulation model

Abstract:

Strategies for the prevention of police corruption, for example, bribery, commonly neglect its social dimension in spite of the fact that police corruption has societal causes and undertaking a reform of the police requires, to some extent, reforming society. In this paper, we built a decision tree from socioeconomic profiles of 103 countries classified according to their level of police corruption using data from the United Nations Statistics Division and Transparency International. From the rules of the resultant decision tree, we identified and analyzed social determinants of police corruption to assist policy-makers in designing societal level strategies to control police corruption by improving socioeconomic conditions. We found that school life expectancy, involvement of women in society, economic development, and work-related indicators are relevant to police corruption. Moreover, empirical results indicate that countries should gradually improve social indicators to reduce police corruption

Abstract:

Bag-of-tasks (BoT) applications consist of highly parallel, independent and unordered tasks. Since BoT executions often require costly investments in computing infrastructures, Clouds offer an economical solution to BoT executions. Cloud BoT executions involve (1) allocating and deallocating heterogeneous resources with possibly different price rates from multiple Cloud providers, (2) distributing BoT execution across multiple, distributed resources, and (3) coordinating self-interested Cloud participants. This paper proposes a novel agent-based Cloud BoT execution tool (CloudAgent) supported by a 4-stage agent-based protocol capable of dynamically coordinating autonomous Cloud participants to concurrently execute BoTs in multiple Clouds in a parallel manner. CloudAgent is endowed with an autonomous agent-based resource provisioning system supported by the contract net protocol to dynamically allocate resources based on hourly cost rates from multiple Cloud providers. In addition, CloudAgent is also equipped with an agent-based resource deallocation system that autonomously and dynamically deallocates resources assigned to BoT executions. Empirical results show that CloudAgent can efficiently handle concurrent BoT executions, bear low BoT execution costs, and effectively scale

Abstract:

Cloud data centers are generally composed of heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially different specifications and fluctuating resource usages. This may cause a resource usage imbalance within servers that may result in performance degradation and violations to service level agreements. This work proposes a collaborative agent-based problem solving technique capable of balancing workloads across commodity, heterogeneous servers by making use of VM live migration. The agents are endowed with (i) migration heuristics to determine which VMs should be migrated and their destination hosts, (ii) migration policies to decide when VMs should be migrated, (iii) VM acceptance policies to determine which VMs should be hosted, and (iv) front-end load balancing heuristics. The results show that agents, through autonomous and dynamic collaboration, can efficiently balance loads in a distributed manner outperforming centralized approaches with a performance comparable to commercial solutions, namely Red Hat, while migrating fewer VMs

Abstract:

Cognitive computing is a multidisciplinary field of research aiming at devising computational models and decision-making mechanisms based on the neurobiological processes of the brain, cognitive sciences, and psychology. The objective of cognitive computational models is to endow computer systems with the faculties of knowing, thinking, and feeling. The major contributions of this survey include (i) giving insights into cognitive computing by listing and describing its definitions, related fields, and terms; (ii) classifying current research on cognitive computing according to its objectives; (iii) presenting a concise review of cognitive computing approaches; and (iv) identifying the open research issues in the area of cognitive computing

Abstract:

Load management in cloud data centers must take into account 1) hardware diversity of hosts, 2) heterogeneous user requirements, 3) volatile resource usage profiles of virtual machines (VMs), 4) fluctuating load patterns, and 5) energy consumption. This work proposes distributed problem solving techniques for load management in data centers supported by VM live migration. Collaborative agents are endowed with a load balancing protocol and an energy-aware consolidation protocol to balance and consolidate heterogeneous loads in a distributed manner while reducing energy consumption costs. Agents are provided with 1) policies for deciding when to migrate VMs, 2) a set of heuristics for selecting the VMs to be migrated, 3) a set of host selection heuristics for determining where to migrate VMs, and 4) policies for determining when to turn off/on hosts. This paper also proposes a novel load balancing heuristic that migrates the VMs causing the largest resource usage imbalance from overloaded hosts to underutilized hosts whose resource usage imbalances are reduced the most by hosting the VMs. Empirical results show that agents adopting the distributed problem solving techniques are efficient and effective in balancing data centers, consolidating heterogeneous loads, and carrying out energy-aware server consolidation
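
The proposed heuristic can be paraphrased in a few lines of Python; the data structures (usage vectors per VM and host) and the use of the standard deviation as the imbalance measure are assumptions of this sketch.

```python
import numpy as np

def imbalance(usage):
    """Resource usage imbalance as the standard deviation across
    resource dimensions (e.g., CPU and memory shares)."""
    return float(np.std(usage))

def pick_migration(overloaded_vms, candidate_hosts):
    """Choose the VM with the largest imbalance and the destination
    host whose imbalance decreases the most by hosting it."""
    vm = max(overloaded_vms, key=imbalance)
    dest = max(candidate_hosts,
               key=lambda h: imbalance(h) - imbalance(h + vm))
    return vm, dest

vm, dest = pick_migration(
    overloaded_vms=[np.array([0.6, 0.1]), np.array([0.2, 0.2])],
    candidate_hosts=[np.array([0.1, 0.5]), np.array([0.3, 0.3])],
)
```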

Abstract:

Public perception of safety from crime and actual crime statistics are often mismatched. Perception of safety from crime is a social phenomenon determined and affected by (i) the mass media broadcasting news dominated by violent content, and (ii) the structural composition of the society, e.g., its socioeconomic characteristics. This paper proposes an agent-based simulation framework to analyze and study public perception of safety from crime and the effects of the mass media on safety perception. Agent-based models for (i) information sources, i.e., mass media outlets, and (ii) citizens are proposed. In addition, social interaction (and its influence on the perception of safety) is modeled by providing citizen agents with a network of acquaintances to/from which citizen agents may transmit/receive crime-related news. Experimental results show the feasibility of simulating perception of safety from crime by obtaining simulation results consistent with generally known and accepted macro-level patterns of safety perception

Abstract:

Service composition in multi-Cloud environments must coordinate self-interested participants, automate service selection, (re)configure distributed services, and deal with incomplete information about Cloud providers and their services. This work proposes an agent-based approach to compose services in multi-Cloud environments for different types of Cloud services: one-time virtualized services, e.g., processing a rendering job, persistent virtualized services, e.g., infrastructure-as-a-service scenarios, vertical services, e.g., integrating homogenous services, and horizontal services, e.g., integrating heterogeneous services. Agents are endowed with a semi-recursive contract net protocol and service capability tables (information catalogs about Cloud participants) to compose services based on consumer requirements. Empirical results obtained from an agent-based testbed show that agents in this work can: successfully compose services to satisfy service requirements, autonomously select services based on dynamic fees, effectively cope with constantly changing consumers’ service needs that trigger updates, and compose services in multiple Clouds even with incomplete information about Cloud participants

Abstract:

The effects of crime are diverse and complex, ranging from psychological and physical traumas faced by crime victims, to negative impacts on the economy of a whole nation. In this paper, an agent-based crime simulation framework to analyze crime and its causes is proposed and implemented. The agent-based simulation framework models and simulates both 1) crime events as a consequence of a set of interrelated social and individual-level crime factors, and 2) crime opportunities, i.e., combinations of circumstances that enable a person to commit a crime. The selection of crime factors and design of agent models are supported by, and based on, existing criminological literature. In addition, the simulation results are validated and compared with macro-level crime patterns reported by various criminological research efforts
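
As a flavor of how a crime opportunity might be operationalized, a toy predicate in the spirit of routine activity theory (motivated offender, suitable target, absent guardian); the paper's actual factor set is richer, and the thresholds below are illustrative assumptions:

```python
def crime_opportunity(offender_motivation, target_suitability, guardians_nearby):
    # A crime opportunity arises when the three circumstances converge
    return (offender_motivation > 0.7
            and target_suitability > 0.5
            and guardians_nearby == 0)
```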

Abstract:

Cloud data centers are networked server farms commonly composed of heterogeneous servers with a wide variety of computing capacities. Virtualization technology, in Cloud data centers, has improved server utilization and server consolidation. However, virtual machines may require unbalanced levels of computing resources (e.g., a virtual machine running a compute-intensive application with low memory requirements) causing resource usage imbalances within physical servers. In this paper, an agent-based distributed approach capable of balancing different types of workloads (e.g., memory workload) by using virtual machine live migration is proposed. Agents acting as server managers are equipped with 1) a collaborative workload balancing protocol, and 2) a set of workload balancing policies (e.g., resource usage migration thresholds and virtual machine migration heuristics) to simultaneously consider both server heterogeneity and virtual machine heterogeneity. The experimental results show that policy-based workload balancing is effectively achieved despite dealing with server heterogeneity and heterogeneous workloads
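
A minimal sketch of a per-resource migration-threshold policy (the threshold values are assumptions; in the paper each server agent carries its own policies):

```python
THRESHOLDS = {"cpu": 0.85, "mem": 0.90}

def violated(usage, thresholds=THRESHOLDS):
    """Resources whose usage exceeds the migration threshold."""
    return [r for r, u in usage.items() if u > thresholds.get(r, 1.0)]

# violated({"cpu": 0.92, "mem": 0.40}) -> ["cpu"]: the server agent would
# start the collaborative protocol to migrate cpu-heavy virtual machines.
```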

Resumen:

El artículo investiga dos aspectos poco conocidos con relación al pensamiento filosófico de John Henry Newman. Por un lado, Newman anticipa varias intuiciones del pragmatismo, sobre todo en la versión peirceana del mismo. Por el otro, Newman puede ser considerado, en algunos respectos, como un pensador romántico, sobre todo por su relación con Coleridge y Wordsworth. Ambos aspectos ayudan a entender la comprensión newmaniana de la filosofía como forma de vida y de la verdad como actividad humana comprometida e histórica

Abstract:

In this paper, I investigate two little-known aspects of John Henry Newman's philosophical thought. On the one hand, Newman anticipates several intuitions of pragmatism, especially in its Peircean version. On the other hand, Newman can be considered, in some respects, a Romantic thinker, especially because of his relationship with Coleridge and Wordsworth. Both aspects help us understand Newman's conception of philosophy as a way of life and of truth as an engaged, historical human activity

Resumen:

Newman cree que es posible, por medio del análisis de los fenómenos de la conciencia, formarse una imagen -no un concepto- de Dios. Por ello considera a la conciencia como el principio de la religión. Algunos estudiosos han visto en esos análisis un argumento para la existencia de Dios, lo cual es problematizado. No obstante, las reflexiones newmanianas adelantan temas actuales de la filosofía de la religión

Abstract:

Newman believes that it is possible, through the analysis of the phenomena of conscience, to form an image - not a concept - of God. For this reason, he considers conscience to be the starting point of religion. Some scholars have seen in these analyses an argument for the existence of God, a reading that is called into question here. Nevertheless, Newman's reflections anticipate current issues in the philosophy of religion

Resumen:

El concepto de cosmovisión fue uno de los conceptos filosóficos más importantes del siglo XIX. Si bien Newman no emplea la palabra Weltanschauung o Worldview, este artículo pretende mostrar dos cosas: que lo nombrado con el término cosmovisión está presente en la vida y obra del cardenal inglés, y que los criterios o notas para el genuino desarrollo de las doctrinas que propone en su Ensayo sobre el desarrollo de la doctrina cristiana pueden ser considerados para una cosmovisión racionalmente responsable como un aporte significativo a la filosofía de la religión actual

Abstract:

The concept of worldview was one of the most important philosophical concepts of the 19th century. Although Newman does not use the word Weltanschauung or Worldview, this article intends to show two things: first, that what is named by the term 'worldview' is present in the life and work of the English cardinal, and second, that the criteria or notes for the genuine development of doctrines that he proposes in his Essay on the Development of Christian Doctrine can serve a rationally responsible worldview, which would be a significant contribution to today's philosophy of religion

Resumen:

Partiendo de la similitud del contexto vital de Newman con el actual, se propone a Newman como precursor del término cosmovisión y se analizan e ilustran sus diversos aspectos: la experiencia vital y el núcleo de la misma (los principios fundamentales). Newman considera importante distinguir entre una cosmovisión "nocional" que existe únicamente en la mente de las personas y una cosmovisión personal, viva y crítica que impregna toda la vida humana

Abstract:

Starting from the similarity of Newman's vital context with the present one, Newman is proposed as a precursor of the term worldview, and its diverse aspects are analyzed and illustrated: vital experience and its core (the fundamental principles). Newman considers it important to distinguish between a "notional" worldview that exists only in people's minds and a personal, living, and critical worldview that permeates all human life

Resumen:

Dos son las intenciones de este trabajo: por un lado, profundizar en la comprensión del espíritu de venganza como un concepto exclusivo de Así habló Zaratustra por hacer referencia directa al tiempo y al eterno retorno de lo mismo. Con esta comprensión ganada se revisa y critica la interpretación heideggeriana del espíritu de venganza y la pregunta por su superación

Abstract:

This article has two aims: first, to gain a deeper understanding of the spirit of revenge as a concept unique to Thus Spoke Zarathustra, since it refers directly to time and to the eternal recurrence of the same. Building on this understanding, Heidegger's interpretation of the spirit of revenge is reviewed and criticized, as is the question of its overcoming

Abstract:

Roberto Calasso referred to René Girard as "the hedgehog that knows only one thing, but an important one," since the entire work of the French thinker revolves around one basic idea: mimetic desire. From it, Girard uncovered the scapegoat mechanism and the founding murder to explain the relationship between violence and the sacred, a relationship that underlies the origin and development of cultures and societies. This allowed him to recognize the social-anthropological relevance of the Judeo-Christian revelation. His thought thus moved among literature, myth, religion, anthropology, and philosophy. The implications of his work brought him into contact with an impressive series of thinkers: Hobbes, Rousseau, Kant, Hegel, Nietzsche, Heidegger, Derrida, Lévi-Strauss, Freud, not to mention great writers such as Cervantes, Shakespeare, Flaubert, Stendhal, Proust, and Dostoievsky. For all this, Michel Serres called him "the new Darwin of culture," Jean Marie Domenach "the Hegel of Christianity," Pierre Chaunu "the Albert Einstein of the human sciences," and Paul Ricoeur remarked that his influence in the 21st century would equal that of Marx and Freud in the 19th

Resumen:

El artículo expone y compara las doctrinas metafísicas de dos filósofos pertenecientes a la corriente denominada tomismo trascendental: Bernard Lonergan y Emerich Coreth. Se resalta el concepto, el método y el punto de partida de la metafísica en los dos autores. Ambos intentan superar a Kant en la aplicación del método trascendental e intentan mostrar el ser como la condición última de posibilidad de toda realización cognoscitiva humana

Abstract:

In this article, we present and compare the metaphysical doctrines of two philosophers of the school of thought known as transcendental Thomism: Bernard Lonergan and Emerich Coreth. We highlight the concept, the method, and the starting point of metaphysics in each author. Both philosophers attempt to go beyond Kant in the application of the transcendental method and to show being as the ultimate condition of possibility of all human cognitive achievement

Resumen:

Nietzsche y Heidegger comparten un mismo punto de partida: el nihilismo. A partir de él, coinciden también, en su crítica, en conceptos fundamentales de la tradición filosófica occidental como el conocimiento, la verdad, el sujeto o el tiempo. A su vez, ambos proponen alternativas no metafísicas para pensar sobre el Ser, por lo que pueden ser considerados como precursores de la posmodernidad

Abstract:

Nietzsche and Heidegger share the same starting point: nihilism. From it, their critiques also coincide on fundamental concepts of the Western philosophical tradition such as knowledge, truth, the subject, and time. In turn, both propose non-metaphysical alternatives for thinking about Being, for which they can both be considered precursors of postmodernism

Resumen:

El artículo expone los rasgos generales de la idea de la universidad según el cardenal John Henry Newman, acentuando su concepto de educación liberal como término medio entre la educación confesional y la educación meramente pragmática. La educación liberal es libre y entraña la formación del intelecto, para que busque en todo momento la verdad y la realice. El artículo concluye señalando algunas semejanzas entre el pensamiento de Newman y el Departamento de Estudios Generales del itam

Abstract:

In this article, we explore the main characteristics of Cardinal John Henry Newman’s idea of a university, emphasizing his notion of liberal education as a middle course between confessional education and a merely pragmatic education. Liberal education is free and entails the formation of the intellect, so that it constantly pursues the truth and realizes it. Finally, this article points out some similarities between his thought and that of itam’s General Studies Department

Resumen:

En este texto, el autor sintetiza la aportación más destacable, a su juicio, de Hegel. De La fenomenología del Espíritu a La ciencia de la Lógica abarca el recorrido del pensamiento hegeliano, hasta la consumación de la metafísica: con la lógica vuelve, en la idea absoluta, a la pura inmediación del ser

Abstract:

In this text, the author summarizes what he considers Hegel’s most notable contribution. From The Phenomenology of Spirit to The Science of Logic, he traces the course of Hegelian thought up to the consummation of metaphysics: with the logic, it returns, in the absolute idea, to the pure immediacy of being

Resumen:

El parágrafo 65 de Ser y tiempo lleva por título "La temporalidad como sentido ontológico del cuidado". La intención de este artículo es brindar elementos para comprender cada uno de los conceptos ahí contenidos; es decir, dar elementos para comprender el cuidado, el sentido y la temporalidad

Abstract:

Paragraph 65 of Being and Time is entitled "Temporality as the Ontological Meaning of Care". The aim of this paper is to provide elements for understanding each of the concepts contained in that title: care, meaning, and temporality

Resumen:

La colaboración persigue, como objetivo principal, poner en diálogo, más orientado a la complementación que a la confrontación, la teoría habermasiana de la democracia deliberativa y la teoría mimética de René Girard. A pesar de evidentes diferencias, la teoría girardiana ayudaría a tomar una mayor conciencia de las “interferencias miméticas” que intervienen en toda convivencia humana y en toda argumentación racional, mientras que la teoría habermasiana aportaría la concreción suficiente y real al proyecto de conversión (la superación paulatina de la violencia que crea víctimas) que se deriva del análisis girardiano

Abstract:

The main objective of this paper is to bring into dialogue, with a view more to complementation than to confrontation, the Habermasian theory of deliberative democracy and René Girard’s mimetic theory. In spite of evident differences, Girardian theory would help raise awareness of the “mimetic interferences” that intervene in all human coexistence and in all rational argumentation, while Habermasian theory would lend sufficient, real concreteness to the project of conversion (the gradual overcoming of the violence that creates victims) that follows from the Girardian analysis

Abstract:

It is well-known that maximizing the Shannon entropy gives rise to an exponential family of distributions. On the other hand, some Bayesian predictive distributions, derived from exponential family sampling models with standard conjugate priors on the canonical parameter, maximize a generalized entropy indexed by a parameter. As this parameter tends to infinity, the generalized entropy converges to the usual Shannon entropy, while the predictive distribution converges to its corresponding sampling model. The aim of this paper is to study this type of connection between generalized entropies based on a certain family of α-divergences and the class of predictive distributions mentioned above. We discuss two important examples in some detail, and argue that similar results must also hold for other exponential families
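
As background for the first claim, the textbook maximum-entropy derivation (not specific to the paper): maximizing Shannon entropy under moment constraints yields an exponential family whose natural parameters are the Lagrange multipliers:

```latex
\max_{p}\; -\int p(x)\,\log p(x)\,dx
\quad\text{s.t.}\quad \int p(x)\,dx = 1,\qquad \int T(x)\,p(x)\,dx = t
\;\Longrightarrow\;
p(x) = \exp\{\eta^{\top}T(x) - A(\eta)\}
```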

Abstract:

Bayesian methods for the analysis of categorical data use the same classes of models as the classical approach. However, Bayesian analyses can be more informative and may provide more natural solutions in certain situations such as those involving sparse or missing data or unidentifiable parameters. In this article, we review some of the most common Bayesian methods for categorical data. We focus on the analysis of contingency tables, but several other useful models are also discussed. For ease of exposition, we describe most of the ideas in terms of two-way contingency tables
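
A minimal worked instance of the kind of model the review covers: a Dirichlet prior on the cell probabilities of a two-way table, with a posterior summary of the odds ratio (the counts and prior below are made up for illustration):

```python
import numpy as np

counts = np.array([[12, 5],
                   [ 3, 20]])              # observed 2x2 contingency table
prior = np.ones_like(counts)               # Dirichlet(1,...,1) prior

rng = np.random.default_rng(0)
draws = rng.dirichlet((counts + prior).ravel(), size=10_000)
p = draws.reshape(-1, 2, 2)                # posterior draws of cell probabilities

# Posterior of the odds ratio, a standard summary for 2x2 tables
odds_ratio = (p[:, 0, 0] * p[:, 1, 1]) / (p[:, 0, 1] * p[:, 1, 0])
print(np.percentile(odds_ratio, [2.5, 50, 97.5]))
```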

Abstract:

This paper presents a Bayesian nonparametric approach to the analysis of two different types of nonhomogeneous mixed Poisson processes. The unknown mean function is modelled a priori as a process with independent increments and the corresponding posteriors are derived. Posterior inferences are carried out via a Gibbs sampling scheme
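
Schematically, and with notation assumed here rather than taken from the paper, such a model couples a mixing variable with an independent-increments mean function:

```latex
N(s,t] \mid w,\Lambda \;\sim\; \mathrm{Poisson}\bigl(w\,[\Lambda(t)-\Lambda(s)]\bigr),
\qquad \Lambda \sim \text{independent-increments prior}
```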

Abstract:

The purpose of many reliability studies is to decide the warranty length. However, a review of the classic literature on reliability shows that it is common practice to recommend only the use of low quantiles of the estimated distribution of time to failure. We propose a methodology that takes into account the following aspects: the reliability of the product, the consumer's appreciation of the competitiveness of the warranty scheme, the effect on the image of the company when the product fails within the warranty period, and the costs that the manufacturer incurs to fulfill the warranty. The approach is based on the formulation of a utility function. Regarding the data structure, we contemplate complete sampling or censoring: Type I, Type II, and random. We illustrate the methodology with the determination of the warranty length of brake linings
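
A hypothetical sketch of the decision step, choosing the warranty length that maximizes expected utility over simulated failure times; the functional form and all constants are invented for illustration, not the paper's utility:

```python
import numpy as np

def expected_utility(w, failure_times, revenue=100.0, repair_cost=40.0,
                     image_cost=15.0, appeal=0.2):
    fail_in_w = np.mean(failure_times <= w)       # P(failure within warranty)
    return (revenue + appeal * w                  # longer warranties attract buyers
            - (repair_cost + image_cost) * fail_in_w)

rng = np.random.default_rng(1)
t = 5.0 * rng.weibull(2.0, 100_000)               # simulated lifetimes (years)
grid = np.linspace(0.5, 3.0, 26)
best = max(grid, key=lambda w: expected_utility(w, t))
print(f"warranty length maximizing expected utility: {best:.1f} years")
```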

Abstract:

In this paper, we propose a comprehensive methodology to specify prior distributions for commonly used models in reliability. The methodology is based on characteristics that are easy for the user to communicate in terms of time to failure. This information could be in the form of intervals for the mean and standard deviation, or quantiles for the failure-time distribution. The derivation of the prior distribution is done for two families of proper initial distributions, namely normal-gamma and uniform. We show the implementation of the proposed method for the parameters of the normal, lognormal, extreme value, Weibull, and exponential models. Then we show the application of the procedure to two examples appearing in the reliability literature, [26] and [28]. By estimating the prior predictive density, we find that the proposed method renders consistent distributions for the different models that fulfill the required characteristics for the time to failure. This feature is particularly important in the application of the Bayesian approach to different inference problems in reliability, model selection being an important example. The method is general, and hence it may be extended to other models not mentioned in this paper
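
One concrete instance of the elicitation idea, assuming the user states two failure-time quantiles and a lognormal model (the numbers and the choice of quantiles are illustrative):

```python
import numpy as np
from scipy.stats import norm

q10, q90 = 1.5, 8.0                 # user: 10% fail by 1.5 yr, 90% by 8 yr
z10, z90 = norm.ppf(0.10), norm.ppf(0.90)

# log T ~ Normal(mu, sigma): log q_p = mu + sigma * z_p gives two equations
sigma = (np.log(q90) - np.log(q10)) / (z90 - z10)
mu = np.log(q10) - sigma * z10
print(f"elicited lognormal prior guess: mu={mu:.3f}, sigma={sigma:.3f}")
```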

Abstract:

Defective molybdenum disulfide (MoS2) monolayers (MLs) modified with coinage metal atoms (Cu, Ag and Au) embedded in sulfur vacancies are studied at a dispersion-corrected density functional level. Atmospheric constituents (H2, O2 and N2) and air pollutants (CO and NO), known as secondary greenhouse gases, are adsorbed on up to two atoms embedded into sulfur vacancies in MoS2 MLs. The adsorption energies suggest that NO (1.44 eV) and CO (1.24 eV) are chemisorbed more strongly than O2 (1.07 eV) and N2 (0.66 eV) on the ML with a copper atom substituting for a sulfur atom. Therefore, the adsorption of N2 and O2 does not compete with NO or CO adsorption. Besides, NO adsorbed on embedded Cu creates a new level in the band gap. In addition, it was found that the CO molecule can react directly with the pre-adsorbed O2 molecule on a Cu atom, forming the OOCO complex, via the Eley-Rideal reaction mechanism. The adsorption energies of CO, NO and O2 on Au2S2, Cu2S2 and Ag2S2 embedded into two sulfur vacancies were competitive. Charge transfer occurs from the defective MoS2 ML to the adsorbed molecules, oxidizing the latter (NO, CO and O2) since they act as acceptors. The total and projected densities of states reveal that a MoS2 ML modified with copper, gold and silver dimers could be used to design electronic or magnetic devices for sensing applications involving the adsorption of NO, CO and O2 molecules. Moreover, NO and O2 molecules adsorbed on MoS2-Au2S2 and MoS2-Cu2S2 introduce a transition from metallic to half-metallic behavior for applications in spintronics. These modified monolayers are expected to exhibit chemiresistive behavior, meaning that their electrical resistance changes in response to the presence of NO molecules. This property makes them suitable for detecting and measuring NO concentrations. Also, modified materials with half-metallic behavior could be beneficial for spintronic devices, particularly those that require spin-polarized currents
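
For reference, adsorption energies such as those quoted above are conventionally computed from total energies as below (the sign convention, chosen so that larger positive values mean stronger binding, is an assumption about the paper's usage):

```latex
E_{\mathrm{ads}} = -\bigl(E_{\mathrm{ML+mol}} - E_{\mathrm{ML}} - E_{\mathrm{mol}}\bigr)
```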

Resumen:

A pesar de que el neoliberalismo ha sido el sistema económico dominante y la ideología hegemónica en América Latina durante los últimos cuarenta años, son pocos los estudios que abordan su relación con la literatura latinoamericana; de allí el interés de estudiar El traductor de Salvador Benesdra. Para tal fin, el análisis se apoya en los críticos que han historiado las reformas neoliberales (Harvey, Escalante Gonzalbo), en los que han cuestionado sus fundamentos ideológicos (Ahmed, Laval y Dardot) y, también, en los que se han ocupado de la obra de Benesdra (Avaro, Vitagliano). El resultado es una lectura de cómo Benesdra, durante la década de 1990, realiza una representación del nuevo orden neoliberal en diferentes ámbitos sociales (el laboral, el afectivo, el político, el cultural) y una reflexión respecto a la crítica que articula sobre este tema. En última instancia, también se muestran los vasos comunicantes existentes entre el neoliberalismo y la literatura latinoamericana

Abstract:

The study of El traductor [The Translator] focuses on the relationship between Latin American neoliberalism and literature. Although neoliberalism has been the dominant economic system and hegemonic ideology in the region for the last forty years, it has scarcely been studied in connection with literature. This paper draws on the critics who have historicized the neoliberal reforms and on those who have questioned their ideological foundations, and it examines the patterns in Benesdra's work related to Latin American neoliberalism. The result is a reading of how the author represented the new neoliberal order in different spheres (the labor-related, the affective, the political, and the cultural). Additionally, it outlines Benesdra's critical view of neoliberalism

Resumo:

Apesar de que o neoliberalismo tem sido o sistema econômico dominante e a ideologia hegemônica na América Latina durante os últimos quarenta anos, são poucos os estudos que abordam a sua relação com a literatura latino-americana; dali o interesse por estudar O tradutor de Salvador Benesdra. Com esse objetivo, a análise apoia-se nos críticos que têm historiado as reformas neoliberais (Harvey, Escalante Gonzalbo), nos que têm questionado seus fundamentos ideológicos (Ahmed, Laval y Dardot) e, também, nos que têm se ocupado da obra de Benesdra (Avaro, Vitagliano). O resultado é uma leitura de como Benesdra, durante a década de 1990, desenvolve a representação de uma nova ordem neoliberal em diferentes âmbitos sociais (laboral, afetivo, político, cultural) e uma reflexão respeito da crítica que articula sobre o tema. Por fim, são apresentados os vasos comunicantes existentes entre o neoliberalismo e a literatura latino-americana

Resumen:

El objetivo de este artículo es identificar el archivo en el que se basan un buen número de novelas latinoamericanas contemporáneas, con el fin de conocer los discursos a partir de los cuales se construyen, y que también intervienen en la realidad. De esta forma, a partir de los discursos subyacentes que participan en la literatura y fuera de ella, se propone la triada archivo-novela-realidad como una forma de interpretación del presente latinoamericano. Para ello, a partir de la teoría postulada por González Echevarría en Mito y archivo (1990), se formó un corpus de novelas latinoamericanas publicadas en el siglo XXI cuya escritura está basada en un discurso previo, consignado en un archivo. Se parte de la hipótesis de que la teoría de González Echevarría resulta productiva no solo para explicar la novela latinoamericana analizada en su estudio, sino la que estaba por venir, es decir, la del presente siglo

Abstract:

The objective of this article is to identify the archive on which six contemporary Latin American novels are based, in order to analyze the discourses from which they are built, and which also take part in reality. In this way, based on the underlying discourses that participate in literature and outside it, we propose the triad archive-novel-reality as a form of interpretation of the Latin American present. To that effect, based on the theory postulated by González Echevarría in Myth and Archive (1990), we formed a corpus of Latin American novels published in the 21st century whose writing is based on a previous discourse, recorded in an archive. We postulate the hypothesis that González Echevarría's theory is productive not only for explaining the corpus analyzed in his study, but also the novel that was yet to come: that of the present century

Resumen:

A partir de la lectura de Volverse Palestina (2013) de Lina Meruane, Poste restante (2016) de Cynthia Rimsky y Destinos errantes (2016) de Andrea Jeftanovic, se propone un nuevo modelo de la crónica de viajes latinoamericana contemporánea: el viaje a la raíz. Partimos de la hipótesis de que, al narrar el viaje al lejano y difuso origen familiar de las escritoras, los tres textos se alejan de las familias textuales hasta ahora predominantes e inauguran una nueva genealogía. En este sentido, el objetivo de este trabajo es describir su poética, al contrastar sus propuestas con las características tradicionales del género. El resultado permite reflexionar acerca de la inmigración en América Latina desde una nueva óptica y confirma la maleabilidad de un género en constante cambio

Abstract:

From a reading of Volverse Palestina (2013) by Lina Meruane, Poste restante (2016) by Cynthia Rimsky, and Destinos errantes (2016) by Andrea Jeftanovic, we propose a new model of the contemporary Latin American travel chronicle: the trip to the root. Our hypothesis is that, by narrating the trip toward the writers' distant and diffuse family origins, the three accounts move away from the hitherto predominant textual families and inaugurate a new genealogy. In this sense, the purpose of this work is to describe their poetics by contrasting their proposals with the traditional characteristics of the genre. The result allows us to think about immigration in Latin America from a new point of view and confirms the malleability of a constantly changing genre

Resumen:

A pesar de haber sido ignorado por la crítica, el género ‘relato de viajes’ representa una tradición en la literatura mexicana que se remonta a fray Servando Teresa de Mier y sus Memorias. Naturalmente, el género se ha tenido que adaptar al contexto social y estético de cada época; sucede lo mismo en el siglo XXI mexicano, en el que, contra todo pronóstico, el relato de viajes goza de vitalidad. El propósito del presente trabajo es, a partir de la definición de Alburquerque (2006) y de Rubio (2011) de relato de viajes, identificar los principales relatos escritos en el presente siglo en México, y, posteriormente, estudiar sus peculiaridades, con el fin último de proponer una poética del género en la literatura mexicana contemporánea

Abstract:

Despite having been ignored by critics, the ‘travel account’ genre represents a tradition in Mexican literature that goes back to Fray Servando Teresa de Mier and his Memoirs. Naturally, the genre has had to adapt to the social and aesthetic context of each era; the same is true in twenty-first-century Mexico, where, against all odds, the travel account remains vital. The purpose of the present work is, starting from Alburquerque's (2006) and Rubio's (2011) definitions of the travel account, to identify the main accounts written in Mexico in this century and then to study their peculiarities, with the ultimate goal of proposing a poetics of the genre in contemporary Mexican literature

Resumen:

Nunca en su historia Alemania había experimentado tal devastación como la que dejaron en sus ciudades los bombardeos aéreos al final de la Segunda Guerra Mundial. A pesar de su magnitud, prácticamente no hay fuentes que hablen sobre este desastre, y su ausencia de la memoria histórica alemana resulta inquietante. Pensadores como Enzensberger (2013) y Sebald (2003) han formulado respuestas que la explican. A partir de sus reflexiones, el presente artículo pretende leer dos testimonios sobre este periodo, Ningún lugar adonde ir, del lituano Jonas Mekas, y El daño oculto, del irlandés James Stern, con el objetivo de indagar la naturaleza de dicho silencio, bajo la hipótesis de que mientras ambos autores contemplaban la destrucción, también atestiguaban la construcción de un olvido deliberado

Abstract:

Germany had never experienced such devastation as that left in its cities by the air raids at the end of the Second World War. Despite its magnitude, there are almost no documentary sources that deal with this disaster, and its absence from German historical memory is disturbing. Thinkers such as Enzensberger (2013) and Sebald (2003) have formulated responses that explain it. Drawing on their reflections, the present article reads two testimonies of this period, I Had Nowhere to Go, by the Lithuanian artist Jonas Mekas, and The Hidden Damage, by the Irish writer James Stern, with the aim of probing the nature of this silence, under the hypothesis that while both authors contemplated the destruction, they also witnessed the construction of a deliberate oblivion